From YouTube: IETF106-OAUTH-20191121-1550
Description
OAUTH meeting session at IETF106
2019/11/21 1550
https://datatracker.ietf.org/meeting/106/proceedings/
A: Welcome back to the second session. I've done a slight agenda bashing here. We had one leftover item from yesterday which got pushed to today: the TxAuth summary, which Dick is going to do, followed by a short summary from Aaron about the side discussions on OAuth 2.1. Brian is next with DPoP. Torsten has two presentations, on pushed authorization and rich authorization, and then Aaron is going to come up again and talk about the intermediary metadata, and then we have Travis with some claims discussions. That's all I have.
A: Yep, thanks, very good.
D: Are we good? Okay. Dick Hardt. Who was not at the TxAuth BoF? Coney, you took notes, so maybe you weren't there; maybe that explains the notes. Okay, so for the people out there, a quick summary: we talked about a number of issues. It's right here: passing the request by a redirect has a bunch of limitations and security issues, and there's an awful lot of BCPs and extensions, so it's complicated. You know, Justin put up this slide that just showed a whole bunch of the specs as being part of the issue.
D: A number of new use cases were described: PSD2, IoT, user not present. Annabelle had some interesting slides about some types of interactions that are not covered currently in OAuth 2. And then there were two proposals presented, one of them TxAuth, aka XYZ, potentially an OAuth 3, versus sort of RAR and PAR, which are really extensions. The questions then are: do we pursue both of these, one of them, or the other of them? That's kind of one of the first questions that came up out of that.
D: So one of the conclusions we came up with in the side meeting this morning was that we would want a different mailing list than the OAuth list for discussing a ton of brand-new work that isn't an extension, if we really do that, since otherwise it's hard for people to track, and probably dedicated meetings for a big chunk of work. And then another question, or potential option, would be to do a design team within the existing working group.
A: So there were a few folks at the BoF who raised a hand, who were willing to work on this topic. Now that some time has passed, did you find some more people who are willing to work together? How do we get this kick-started? Who would lead the design team, and who would be willing to actually be on a design team to work on it, besides, obviously, the authors of the proposals, I hope?
F: Justin Richer. Just to maybe clarify a little bit: we had a side meeting earlier today, and these were all brought up as potential avenues and options for where the work can be done and how the work can be done. This isn't saying that there was a bunch of people who said we should do a design team or something like that, just that that would be something more directly applicable to the OAuth working group.
G: Mike Jones. I was waiting my turn. To your question of whether, if we do this work, which isn't, I think, a decided question (I said it is not a decided question, as far as I know): I strongly believe that it should be in the OAuth working group, because that's where the collection of domain expertise is, and in particular the security expertise about what can go terribly wrong in protocols of this shape.
I: Lisanna. I would argue that if we're going to do this in OAuth, then in practical terms it probably means three slots per week, and, you know, it will be quite a strain on the current working group, sort of the management resources of the working group, and we may need to assign extra resources to cover that. OAuth seems to have enough on its plate as it is.
J: Backman, Amazon. I think it's pretty unlikely, or unrealistic, to think that work on OAuth 2.0 and its extensions is going to halt. I think it's unrealistic that it's even going to significantly slow down, because people are working on those extensions to solve immediate needs they have. Those needs are not going to go away just because we started to think about a bigger, better way to solve these problems. So I expect that work to continue, you know, for some time, at a similar pace, at least for the near future.
G: To the floor question about whether, if we do this work, we do it in the OAuth working group: I think, again, because the security expertise is here, it should definitely be in the OAuth working group if we do new work. There's a question to the ADs and the chairs whether that would involve a rechartering. I think it would.
K: Torsten Lodderstedt. First of all, I would like to second Annabelle, so I wouldn't expect the work on OAuth to slow down. For myself, I'm accelerating right now, because we have to solve problems at hand. So 3.0 is on a different timeline from my perspective, so I would assume us to work on both, and actually on three aspects, because yesterday, in a side meeting, we discussed that one.
J: Annabelle Backman, Amazon. Again, I wanted to respond to something Mike said, as far as what the open question is. Asking "if we're going to do this work, do we do it in the working group?" doesn't make a lot of sense to me, because, you know, the working group doesn't really have a conclusive say on whether or not the IETF is going to do the work. The working group might agree or disagree on whether or not it is work for which we should expand the scope, recharter OAuth, to include that work.
F: Justin Richer. I just wanted to address the point of getting the right expertise. I mean, I think it's a little bit of a red herring to say that if we were to start a new working group, we wouldn't have the right expertise, because all of us are in, like, dozens of working groups. I'm in dozens of working groups with a bunch of you, across different standards bodies and within the IETF.
F: We have a way of reaching out to the right experts and bringing them in, and I think that with something like this, if we were to do a new working group, or even to do a new mailing list, which again I think is a good idea, we would want to make sure that the right people are involved. There are some really, really brilliant people in this community that I think need to be part of this conversation.
F: I think that there is a slightly different audience for TxAuth-style work versus OAuth 2 style work, with significant overlap, but there are going to be a lot of people coming to the OAuth 2 world that just want to care about OAuth 2, and they should be able to do that. I think that's especially important, because a lot of people raise the question of: are we just going to confuse people by saying there's this new thing?
F: Well, one way to not confuse people is to say: there is a new thing, and it's being worked on over here to do something new. So if you're interested in that new thing that's not done yet, sure, that's over there; go alpha test, go beta test, right? The spec is not written yet. If you want to stay in the OAuth 2 space, where things are more explored, then there is a space for that.
K: So from my perspective, the next logical step would be to really write up a proposal for the scope of all three, and then we might decide, as a community, as individuals, on how to proceed. I think that makes sense, instead of spending more time now, because I'm waiting to give my presentations.
E: If I could summarize that feedback: you're reiterating that there are two questions to consider. There is "what is it that we need to do?", and then there's the question of "how is it that we're going to go forward with doing that?" These are certainly interrelated questions, but we need to be considering both dimensions.
E: I would endorse that approach, and I think we need to be, again, quite clear on what it is that we want to do. I think by then enumerating, and being kind of crystal clear, whether it's, again, 2.0 to 2.1 or 3.0, there are going to be a lot of things to do, and then we need to have a conversation on what it is to do.
E: I mean, the key question for me is: if we were to talk about a hypothetical new working group or a hypothetical recharter, one of the key things for me that would have to be figured out and explained during that chartering process would be: if someone came to us with a new body of work, and I was holding that right now, would it be clear how we would go about doing that work? Would it be clear if we had two working groups?
E: It would certainly be clear if we had kind of one working group, which is not to say that's the right answer, but the key distinguisher is: do we know how we go through that? So, you know, bottom line, I think we need to write down first what it is we want to do, and then we can engage in a conversation as to how to split it apart, or not at all.
L: Aaron Parecki from Okta. I just want to echo something I heard from the last meeting we had, actually, which I haven't heard come up again in this discussion, which is that, yes, there is absolutely going to be work on 2.0 dialects continuing for a very long time, and the primary reason for that is that, realistically, getting people to move to a 3.0 is not something that's going to happen overnight by any means. So the work on 3.0 is more about aiming long for the future, you know, trying to get something.
L: You know, getting something in front of people, but as sort of long-term planning, rather than telling people to halt everything using 2.0 and switch immediately, right? So I think we just need to make sure that message continues to be something we get across. We're not saying "stop using 2.0; 3.0 is better, but wait for us to finish it", because that doesn't make any sense, and it would never actually work in the real world anyway.
L: Aaron Parecki again. Yes, so yesterday we had a side meeting about OAuth 2.1. The discussion was a very good discussion. The sort of high-level end result of that discussion was that there seems to be a pretty broad consensus on the idea that we need to do something to sort of clean things up. There was definitely agreement that there is a lot of documentation to read. But we didn't come to a conclusion on exactly how to proceed.
L: The main disagreement stemmed from basically whether or not any sort of new document should be saying things that are effectively breaking changes to the spec. What I mean by that is not necessarily, like... nobody was saying we're going to add new features in this, but by doing something like requiring the things that the Security BCP recommends, that is effectively a breaking change to existing implementations, which technically don't have to follow the Security BCP right now. That was the main disagreement.
L: We kind of came to the conclusion that we need to discuss those more on a case-by-case basis. I volunteered to take a first pass at writing up an outline of what a document could look like, highlighting the particular aspects that fall on either side of that line, so that we can talk about them individually, because I think we agreed that there was more to discuss there on the specifics.
A: In terms of the changes you were discussing, is it sort of a bis document? Because a bis document, indeed, is not supposed to make backwards-compatibility-breaking changes, but removing, for example, optional features is not such a change, or further detailing some... yeah.
A: So it sounds a little bit like we need some more discussion to find out what specifically will end up in a document. Then we can say: is this a bis process (a bis can actually also be advanced along the standards ladder, if you will, which would be useful, I think), or could it be, like you said, a 2.1, something that is really different, with slightly backwards-compatibility-breaking changes, which is what that version number typically indicates? Yeah.
F: There was a lot of contention about what it would mean to be compliant with 2.1, and how you would be able to support 2.0 and 2.1 on the same endpoints, or the same server, or the same service. There are a lot of deployment questions that would need to be better understood when we start changing SHOULDs into MUSTs and adding new MUSTs that didn't exist ten years ago.
J: Backman, Amazon. Just to add some more detail there: the concerns, or questions, around how big or how small a change 2.1 is from 2.0, and the challenges around managing, or living in, a world where we have that two-protocol ecosystem, are a lot of what is driving the remaining question of "do we actually do a 2.1, or are we doing a bis, which is not a 2.1?" So yeah, those questions are very much intertwined.
K: I would like to roll that up in a new version of the RFC, and if that obsoletes or deletes features, fine with me, because we actually had a discussion around exactly that. But we need to support, or we need to understand and admit, that there will be a world with 2.0 implementations running beside 2.1 implementations, and even servers that support both. So from my perspective, we need to find ways to make that happen, right? And the 2.0 implementation can just be RFC 6749 compliant, from my perspective, right?
K: So what you need to be able to do, in an ecosystem, especially on the server provider side, is to offer a migration path, which might mean that you have both versions running in parallel. And then we discussed: how could we determine what the client needs? Is it policy-based, or do you have different sets of endpoints, and so on? So I think there's a lot of stuff we need to sort out before we move forward, and right now we have, I think, different opinions on whether we should make breaking changes or not.
A: But in any case, we are looking forward to seeing that discussion happening on the OAuth list. So if you could trigger that soon... I guess it will take a little while till we arrive at some common point, so I'm happy that you volunteered to kick that off and lead that discussion. Okay.
O: This is a little awkward, trying to continue from where I left off at the last meeting, so, yeah, I'll try to start from the beginning, go a little faster, and get some slides up. I'm here to talk about some prospective new work. It's an individual draft called DPoP (that's the slides for it loading).
O: Real quick: it's a draft proposal for a new, hopefully simple and concise, approach to proof of possession for OAuth access and refresh tokens. The idea is to do it at the application layer, using application-level constructs, and being able to leverage existing library support, specifically JWT and JWS support, so this can actually be something that's implementable and deployable. A little history of proof of possession: the main motivation is to do something better than bearer. We've had bearer tokens for a long time; we've had a lot of attempts to do better.
O: Some worked out better than others; mostly, though, we don't have something that's widely applicable. This is partly motivated by the Security BCP, which rather aspirationally says that you should use sender-constrained tokens. I say it's aspirational because really there are not a lot of good options to do that right now, and I'd say that we really lack a suitable, or at least a suitable and widely deployable, applicable PoP mechanism. This is especially true around single-page applications, because the one thing we do have that's almost an RFC is the mutual TLS stuff.
O: The user agent interaction and the usability experience of doing mTLS in the browser are pretty much a non-starter for most deployments. It just doesn't work. And token binding: I know I got really excited about it, maybe too excited, but it's just not panning out. So here we are again, trying to try again, hopefully with some lessons learned.
O: Another thing that's come up, in SPAs as well as mobile applications, is a desire to be able to do proof of possession around refresh tokens as well, for public clients, to have a little bit stronger protection there. So, the basic flow: we have this mechanism, which we're calling a DPoP proof, and it's just a new HTTP header. It looks the same on all requests, both to the authorization server and to all protected resources.
O: It flows with the actual token request, usually a code exchange or whatever, to the authorization server, and it provides the public key and some limited proof of possession of the corresponding private key. In turn, the authorization server can bind the issued access token to that public key and send it back. Then, when the client uses the token at resource access, it provides the exact same style of proof on each resource call, in conjunction with the access token that's bound to that public key, and that's an extra check that the resource server can do. So that top piece...
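Concretely, the flow described above amounts to sending the same header shape on two different requests. The sketch below is purely illustrative (the hostnames, paths, and truncated token and proof values are placeholders, not text from the draft):

```http
POST /token HTTP/1.1
Host: server.example.com
Content-Type: application/x-www-form-urlencoded
DPoP: eyJ0eXAiOiJkcG9wK2p3dCIs...

grant_type=authorization_code&code=...

GET /resource HTTP/1.1
Host: resource.example.org
Authorization: DPoP eyJhbGciOi...
DPoP: eyJ0eXAiOiJkcG9wK2p3dCIs...
```

In the first request the DPoP header lets the authorization server bind the issued access token to the client's public key; in the second, a fresh proof accompanies the bound access token so the resource server can perform the extra check.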
O: Thank you. The DPoP proof JWT is a JWT; here's what it looks like on the inside. The header, up top here: it's got some explicit typing, because that's a thing we do now; it's asymmetric-only in the algorithm support; and the public key is in the header, in JWK format. This is the key that signed it, and the public key is there. So it's basically just saying: hey,
O: Here's a JWT, I signed it with this key, and you can know that because you can verify the signature. It's just showing possession of that key. And in order to give that proof of possession some meaningful context, we try to associate it with the HTTP request in a sort of minimal fashion: enough to make it relevant, but not enough to be difficult or error-prone or hard to deploy, with all the fun and pain that comes with trying to deal with normalization and signing HTTP requests.
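A minimal sketch of the proof JWT structure just described, in Python, using only the standard library. The helper names are ours, and the signature is a placeholder rather than a real ES256 signature over the signing input; the header fields (`typ`, `alg`, `jwk`) and the minimal HTTP-binding claims (`jti`, `htm`, `htu`, `iat`) follow the draft as presented:

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # Base64url without padding, as used in JWS compact serialization
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    # Re-add the padding stripped by b64url()
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_dpop_proof(http_method: str, http_uri: str, public_jwk: dict) -> str:
    # Header: explicit typing, asymmetric algorithm, and the public key itself
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    # Payload: minimal binding to the HTTP request the proof accompanies
    payload = {
        "jti": str(uuid.uuid4()),  # unique ID, enables replay detection
        "htm": http_method,        # HTTP method of the request
        "htu": http_uri,           # HTTP URI of the request
        "iat": int(time.time()),   # issued-at timestamp
    }
    signing_input = (
        b64url(json.dumps(header).encode())
        + "."
        + b64url(json.dumps(payload).encode())
    )
    # A real client signs `signing_input` with the private key matching
    # `public_jwk`; a placeholder signature stands in for that here.
    return signing_input + "." + b64url(b"placeholder-signature")

example_jwk = {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."}
proof = make_dpop_proof("POST", "https://server.example.com/token", example_jwk)
header = json.loads(b64url_decode(proof.split(".")[0]))
print(header["typ"])  # dpop+jwt
```

A verifier would check the signature against the `jwk` carried in the header, check that `htm` and `htu` match the request actually received, and compare that key against the one the access token is bound to.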
F: Justin Richer, just really quick: has there been thought about how extensible that body should be? I realize DPoP is only doing that, and that's fine. Is there any thought about what the namespace of a dpop+jwt payload could have in it, if somebody had something else they wanted to cram in there?
O: If you've followed some of the conversation on the list recently, there are potentially some deployment issues at scale, depending on what your architecture is, with ensuring that the one-time use of tokens is timely and accurate. You might be checking that validity against a sort of store of IDs in a large cluster, with an eventually consistent back-end to store that. So we've made some provisions to allow for things that aren't tightly transactionally oriented, which could otherwise have really negative impacts on the system as a whole.
O: You know, there's a large continuum of the level of security you can get versus the performance and the usability of this, and ours might not be the right place, but we've tried to pick a meaningful spot in the middle that provides that utility without being too cumbersome to implement or deploy. And one last thing, about "optional": I don't say it's optional, but it's a SHOULD in one place and actually a MUST in another place, with a caveat of "within reasonable considerations for performance and consistency."
K: As a co-author, I would just like to add that you can get different levels of replay detection, right? If you don't do the jti, you get replay detection if someone wants to use the header with a different resource server, and if you put in the jti, you get replay detection if it's replayed on the same resource, or more. I think that's a good distinction. Yeah.
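The per-resource-server level of replay detection described here (when the proof carries a jti) could be sketched on the resource server side as a small cache of recently seen IDs. This is an illustrative in-memory sketch under our own naming, not anything from the draft; as noted elsewhere in the discussion, a large deployment would use a shared, possibly eventually consistent store instead:

```python
import time

class JtiReplayCache:
    """Tracks recently seen DPoP proof jti values to reject replays."""

    def __init__(self, window_seconds=300):
        # Proofs older than the acceptance window are rejected anyway,
        # so entries only need to live that long.
        self.window = window_seconds
        self.seen = {}  # jti -> expiry timestamp

    def check_and_store(self, jti, now=None):
        now = time.time() if now is None else now
        # Evict expired entries so the cache stays bounded
        self.seen = {j: exp for j, exp in self.seen.items() if exp > now}
        if jti in self.seen:
            return False  # replay on this resource server: reject
        self.seen[jti] = now + self.window
        return True

cache = JtiReplayCache()
print(cache.check_and_store("abc-123"))  # True: first use is accepted
print(cache.check_and_store("abc-123"))  # False: replay is rejected
```

Without the jti check, a stolen proof could be replayed against the same resource server within the time window; with it, replay only works against a different server, which the key-binding itself does not prevent.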
J: Backman, Amazon. We talked in the previous session about the coming work for HTTP message signing, so I'm not going to go into that any more than just reminding people that that work is happening and is maybe applicable here. Separate from that: from the conversation on the list, it seemed pretty clear to me that it would be helpful to have a clear statement of what the threat model is.
J: You know, what we're trying to mitigate here with DPoP. Because the more of these things that you strip out, the more this just becomes another bearer token that you're putting in the request. So it would really be helpful to understand, once we strip all of that out: are there threats that we are actually still mitigating here, or have we just produced something that ends up making people feel safer than they really are? Yeah.
O: Some tightening up of that, I think... well, as with all things, it's hard, because I think everyone has a little bit of a different idea of what that is, and the push to change this... you know, there are different competing interests. But there definitely needs to be some tightening up of the objectives around the threat model and what we're trying to provide, I mean.
O: I very much want to avoid a situation where the client is in the place of saying what is signed in this token, because that has a lot of ramifications on how you negotiate what that is and what the properties are that it actually provides. And if we can't talk about it (and I know we had a hard time yesterday talking about exactly what you get from what), I'm wholly unconvinced that the average client developer can make a reasonable decision. Well...
J: If the scope of this is to be usable specifically, and only, within the domain where you're in a pure discovery world, and you're talking about APIs on completely standard interfaces, then I understand why you would not want to be in that, you know, place where you have to negotiate that. But the practical reality is that OAuth is used for all sorts of things that aren't, you know, just standardized APIs, and in those cases people have documentation for their APIs that describes what needs to be... you know, how authentication and authorization happen.
J: I'm not saying there isn't value; yep, there is value in that flexibility. There are strong use cases for actually specifically stating what you're going to include and what you're not, and there are strong use cases for standardizing the format and allowing implementations and deployments to say "this is what I need covered, and this is what I don't." Both of those use cases exist, and I think we'd be remiss to support one at the expense of the other.
K: Torsten, speaking as a co-author. First, reflecting on Annabelle's question regarding the threat model: it's being documented; it can be found in section 2, which refers to the threat model that's documented in the Security BCP. And I think, generally, we need to keep in mind the scope and the objective of that draft. I stated it yesterday; I'll do it again: this is a stopgap measure.
K: This is not intended to be, at least from my perspective, a general means to do PoP in the OAuth world, because we've got another mechanism, mTLS, and it works very well. The original objective of this one was to find something that works for SPAs that want to use access tokens in the browser. Please keep that in mind. We can change the scope, but then we need to consider the consequences. But that's the reason why we started that work.
F: This is Justin. Along with what Torsten just said, and again, you know, as was brought up yesterday: the scope and the threat model are not clear from the current doc. So if we bring this in as a working group item, then that's going to have to be very, very, very, very clear, because I think a lot of the discussion that's happening right now is because this feels like it could be...
F: ...you know, the resurrection of the whole OAuth PoP infrastructure, and that had all kinds of features, like the server being able to provision keys to clients, and symmetric crypto, and all of this other kind of stuff, which aren't part of this. I think that this would benefit from a narrower focus. Like, for example, PKCE: PKCE was about, you know, public mobile clients protecting the auth code flow, because they don't have client secrets, and by the way, here's how you do it.
F: It turns out PKCE is a lot more useful than that, and we're using it everywhere for lots of different things, because it was a reasonably defined mechanism that found other uses. This, right now, doesn't feel like that. It doesn't feel like it's a specific point solution or stopgap, as Torsten said it is. So if we were to go forward with this (and to be clear, I like DPoP; I think it's a really clever way to do the solution, and I've implemented all these header-processing things and stuff like that)...
L: Aaron Parecki from Okta. I don't mean to, like, hone in on a specific point again, but just to echo what Annabelle was saying, and relaying comments from people at Okta: there is definitely some concern around the asymmetric-only keys, and I know that a lot of my team there would not deploy it if it was asymmetric-only, and would rather see a symmetric option, because of the speed concerns there. So, just...
A: Keep in mind that we still actually have a working group item for the symmetric version as well. So, in that sense... of course, when Annabelle presented her work, I asked her whether it would make sense to align it with what she presented, because it was slightly different in terms of header fields, etc., so that may be an option. But what I hear from numerous presenters at the microphone now is that you want something for a very specific need, and not a generic mechanism, and I...
O: But, well, I gave the specific example of the SPA-type applications. But, as maybe... sort of where Justin came from: oftentimes, you know, a specific point solution can be generalized into something more. And so, from my perspective, I'm trying to make this something that can be a reasonably deployable, usable PoP option for access and refresh tokens.
A: So how we came here was: we had a working group item (I still have one) that has a symmetric and an asymmetric variant, and then you guys started this new effort, because the argument was that the other one is too complicated, because it contains the symmetric, the asymmetric, and refers to some generic HTTP signing, which now, interestingly, is being entertained. And now you are saying: oh, actually, we could extend it to something more generic. That doesn't make sense to me, right?
O: So, no, I'm not saying it could be extended to something more generic; I'm not pushing for it. I'm hearing people say it should include a symmetric key option, and I'm saying that there's a very serious trade-off with that. Okay? This is simple because it relies on public/private keys. I understand the concerns; I don't know, I'm not saying it's a done decision, but that's why it's this way. That's why it can be as simple as just the client presenting this header every time: because we're not negotiating keys.
O: That's why you get a particular security property of not being able to replay it across resource servers: because you don't have to distribute the key, and you don't get into the situation where a resource server can then impersonate another one because it has access to the symmetric key. So that was sort of driving a bit of this. There are people that want it to be something different; you know, we're in a working group, we have to work through that. So I'm trying to respond to different comments and different desires for things.
F: So this is really only intended to deal with that single-page application mechanism. So I don't know whether, given that we're talking about single-page applications, the performance issues are really all that concerning. These aren't high-speed, you know, high-volume server clients; these are clients that are talking to APIs for other applications. That doesn't...
F: ...key negotiations and all of this kind of stuff, and sort of general proofing mechanisms, and things like that. While I agree with Annabelle that, you know, if the timescales were better, this ought to be using the general-purpose HTTP signing mechanism, that's probably going to be a while in cooking. But, you know, because this is a very... if this can be a very specific thing...
F: ...then I think that it's not going to hurt if this exists and other solutions also exist. So if there's, you know, a spot for a symmetric proof of possession or whatever, then great. You know, if there's something in the next major revision of OAuth that lets you do all kinds of different keys, or goes back to the OAuth key-distribution architectures, where, you know, servers are distributing keys and binding them to client credentials, and all of that other kind of stuff, great. I mean, you know, this existing doesn't make mutual TLS go away.
F: It's very worth it to add another thing to the already gigantic menu to let people choose from. That's true, and yeah, "and we are not done adding to the menu" was the heckling from the back. This is not a value judgment; this is a statement. If you want the value judgment, talk to me after. But, for real though: having this be something that does this one thing, I think, fits.
J: If you read between the lines, knowing that that's what the authors are saying in forums like this, then you can kind of see it, but without that it's really not clear. Separately, back to the asymmetric crypto scaling question: everybody's focusing on latency, but that's only one aspect of scaling. If you have a service that is taking millions of requests a second, that little bit of CPU time adds up, and all of a sudden running your service becomes a lot more expensive. So, yeah, that may not...
J: You know, one request taking a tiny fraction of a second longer may not, you know, impact the end user's experience, but that doesn't necessarily mean that it's scalable. The other thing to consider is scenarios where you're making lots of these requests in the course of fulfilling one end-user request; for one end-user operation, it's not necessarily just one hit here, it could be multiple.
K: Torsten, answering, or reflecting on, what Annabelle just said. I understand that the CPU cost for asymmetric is, I think, two or three orders of magnitude higher. I think there are other ways to cope with that than using symmetric crypto. We have at least two architectural options, and they are all documented in the SPA BCP.
K: One of them is not to use access tokens in the browser at all, and to rely on web security mechanisms to protect the link to the backend. And in the latter case that was talked about, having a lot of requests being executed on behalf of the user: yeah, just set up a back-end, let that back-end perform all those requests, and use mTLS. So I think this is a much too narrowly focused discussion. Yeah, just my two cents.
J: Annabelle. Again echoing, I think, what Mike said, this is really a response to that: as a, you know, service provider, we really don't want to have to operate significantly different authorization mechanisms across our ecosystem if we can avoid it, and, you know, forcing that on people is not the way to get them to adopt things. Separately:
J: I feel like you're telling people that they should architect their application such that it fits the model, or the constraints, of your authorization mechanism, which seems wildly backwards to me. Like, people build SPAs that do not have backends but need to make lots of requests to other services. That use case exists. If you're not going to support it, then that's fine; just acknowledge that this is not going to support that use case. But, you know, we should be intentional about making that decision. So...
K
Torsten: this is just a guess that was popping up in my head a few seconds ago. I think we need to have consensus in the group on whether we want to rely on TLS-based mechanisms for sender-constraining, because all the discussion now, and I have heard the contributions from Mike and Annabelle, points me in a completely different direction. I mean, right now we are unable to provide a TLS-based mechanism for sender-constraining for all kinds of clients, and we drew a conclusion from that.
K
We
need
a
stopgap
measure
for
those
that
cannot
really
use
em
TLS,
which
are
s
pas
note.
The
discussion
is
turning
into
all.
We
have
that
mechanism,
let's
make
it
a
general-purpose
magnify
all
kinds
of
plans
and
then
I
understand
the
need
for
symmetric,
cryptography
and
so
on
and
so
on
and
so
on.
So
from
my
perspective,
we
don't
have
a
real
consensus
around
how
we
do
how
we
want
to
implement
sender,
constraining
no.
A
I see it slightly differently: we have several solutions for sender-constraining. One is token binding, one is mTLS, and one is a mechanism based on the application layer, or rather several mechanisms based on the application layer. We like mTLS; that's why we worked on it and published it.
K
Yeah
I
I
agree
with
that
I
think
I've
just
phrased
it
differently.
Okay,.
G
Mike Jones: I believe that Microsoft's viewpoint is that mTLS is great in certain constrained environments, such as financial APIs, but it is virtually undeployable in general-purpose consumer applications. Therefore we felt, given the existing threats about stealing bearer tokens, that we need, as an industry, to develop an application-level proof-of-possession mechanism that works across the board for access tokens and refresh tokens. We're hoping that this will be that.
A
And
I
think
before
enable
you
I'm
tossing
that
was
anybody's
cut
the
line
here
so
I'm.
The
chairs
will
work
with
our
esteemed
ad
to
find
a
path
forward
on
this
I'm
how
to
advance
this
topic,
because
we,
it
feels
a
little
bit
stuck
on
that
issue,
but
I'm
positive
that
we
find
you
find
something
out
rather
quick,
so
expect
a
virtual
intra
meeting
yeah.
A
Okay, sorry about that.
K
The problem this small addition to OAuth wants to solve is this: it's about the kind of authorization requests we are seeing in the wild. When you use, for example, some OpenID Connect mechanisms, you end up with a really, really bulky authorization request URL that is sent through the user agent, and...
K
It poses some challenges. First of all, it is lengthy; it has some impact on latency and on robustness. But there's also no mechanism for integrity and confidentiality protection, which might be relevant especially if you pass transaction-specific values, or constraints regarding the authorization process, through that channel. We've got a solution, at least a partial solution: that's an extension to OAuth we are going to build on, JAR, the JWT-secured authorization request, which I hope will be published soon.
K
It provides security for the authorization request by providing a mechanism to sign the request objects, and it also has a mechanism to pass a request URI in the authorization request if you don't want to pass all the data through the user agent. What we do now is provide a mechanism to upload the payload of the authorization request to the AS and to obtain a request URI, to then use the JAR request_uri mechanism to send the data to the AS.
K
So, from my perspective, that's just the next step to bridge that gap, and that's how it's going to look. There's just a small change in step two: the client first sets up the authorization request in a direct POST call to the AS, the AS stores this payload somewhere and returns a URI to the client, and that URI in turn is used in the front-channel call to the AS.
K
All right, so what does it look like at the HTTP level? That's a traditional OAuth request: it's a GET sent from the user agent to the AS authorization endpoint. What we do is define a new pushed authorization endpoint at the AS; it takes the same parameters, in exactly the same encoding, and sends them via a POST request. That's the modification. And what you also see on that request...
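The three-step flow just described can be sketched as follows. The endpoint path, the example parameter values, and the shape of the returned request URI are illustrative assumptions, not the draft's normative text:

```python
from urllib.parse import urlencode

# Step 1: the client POSTs the usual authorization-request parameters
# (same names, same encoding as the front-channel GET) to the pushed
# authorization endpoint instead of putting them all in a URL.
par_body = urlencode({
    "response_type": "code",
    "client_id": "s6BhdRkqt3",
    "redirect_uri": "https://client.example.com/cb",
    "scope": "openid payments",
    "state": "af0ifjsldkj",
})

# Step 2: the AS stores the payload and answers with a short-lived
# request URI referencing it (the URN shape here is made up).
par_response = {
    "request_uri": "urn:example:bwc4JK-ESC0w8acc191e-Y1LTC2",
    "expires_in": 90,
}

# Step 3: the front-channel authorization request now carries only the
# reference, keeping the URL short and the payload off the user agent.
authz_url = "https://as.example.com/authorize?" + urlencode({
    "client_id": "s6BhdRkqt3",
    "request_uri": par_response["request_uri"],
})
```

The point of the shape is that the bulky, security-sensitive payload travels over the backchannel POST, while the user agent only ever sees the short reference URL.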
F
Justin Richer, just for a quick clarification on XYZ; yeah, keep it on this slide. The main difference between the way that this works and XYZ is that what you get back in XYZ is that entire URL, not something that the client composes. Otherwise it is very much the same pattern.
K
We stripped it down as far as possible, but wanted to be compliant with the rest of OAuth. And there's another option, which is to actually send a signed, and potentially encrypted, request object via the same channel. So instead of using the payload format we are familiar with from RFC 6749, use the payload as defined by JAR to send the data to the AS, which has the advantage that you can do application-level crypto, for example for non-repudiation.
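A toy sketch of that signed request object option. A real deployment would use a JOSE library and, for non-repudiation, asymmetric signatures; this stdlib-only HMAC version only illustrates the JWS shape that JAR builds on, and the key and claim values are made up:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url without padding, as used for JWS segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# The authorization-request parameters become JWT claims, signed so the
# AS can check integrity and authenticate the sender at the application
# layer, independent of the transport.
header = {"alg": "HS256", "typ": "JWT"}
claims = {
    "response_type": "code",
    "client_id": "s6BhdRkqt3",
    "redirect_uri": "https://client.example.com/cb",
    "scope": "openid payments",
}
signing_input = (
    b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
)
sig = hmac.new(b"shared-secret", signing_input.encode(), hashlib.sha256).digest()
request_object = signing_input + "." + b64url(sig)
```

This compact header.claims.signature string is what would be posted to the pushed authorization endpoint instead of the plain form body.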
K
Significant improvement in security, from my perspective, at relatively little cost: you've got the integrity, confidentiality, and authenticity protection of TLS; you've got client authentication up front of the authorization process, which also means you can authenticate the client early; and Daniel Fett's feeling is that it's resistant against mix-up attacks. The systematic analysis is still ongoing, but we are already really confident about that.
K
Early experience shows that it's really easy to use, starting with the developers of the ASs, because some ASs have already implemented it, and they did it really quickly; that also guided a bit the way we wrote the specification. It turned out that the logic being executed at the authorization endpoint and the logic being executed at the token endpoint can be combined in a pipeline.
F
Okay, Justin Richer again, just to add: so we did implement this spec as part of our next revision, and while we weren't able to completely combine the auth endpoint, we were able to abstract a lot of the processing, and I can state that this is pretty easy and sensible to do on an existing, established AS platform. Okay.
K
Yes, an excellent question. Just off the top of my head: I fear we are running into XYZ territory, and I don't have concerns about going that way, but I think you will end up with a more conversational protocol, and that's not the way OAuth 2 is designed to work. But we should definitely take a look into that, and I would also include CIBA, even if that's not IETF.
F
Justin, just to reinforce what Torsten just said: when you combine PAR, the device flow, CIBA, the FAPI stuff that predated this, and UMA, all of those sort of staging-intent things, into one protocol, you kind of do get XYZ. That was the idea. They are not as compatible as they seem on the box, unfortunately, due to the nature of OAuth's conversations. Okay.
K
Not really; I was part of it. Okay, all right. Then I'll just briefly talk about the problem and the proposal, and then we'll ask you to adopt the draft. So the problem is about, I would say, complex authorization processes; I'll try to describe it as best as I can. Complex authorization processes need more data than you can pass in a scope today. So let's assume you have a payment API: the merchant wants to send a payment, or get some money. The question is, how does the...
K
How does the resource server know that the user actually authorized that this merchant is allowed to get that money? That's the key question that drives the rich authorization stuff, because in the end the authorization server needs all the information to render the user consent. And that user consent contains transaction-specific values, is rather complex, and has context.
K
The short form of that is: we have come up with a proposal to allow clients to specify these kinds of more complex authorization requirements in a structured way. We use something that I was initially calling structured scopes, but Mike asked us not to use the scope terminology, so we came up with the name authorization details. It's an array you can put JSON objects into, objects that are specifically designed for the needs of a certain API. That's why they are typed, so the AS can differentiate.
K
What type of authorization request object is that? Can I combine them, or not? You can combine them, you can send them everywhere, or you can use a scope as well. It's in the third revision now. We've got some positive feedback, even on the list, from people that are new to our community; I just put in a citation here from a guy from Mozilla. I think he's on the line as well.
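The authorization_details structure described above might look like the following. The type name and the fields inside the object are illustrative, modeled on the payment example given in the session, not normative:

```python
import json

# Illustrative "authorization_details" array: typed JSON objects, each
# shaped for the needs of one API, so the AS can tell, say, a payment
# request from an account-access request by its "type".
authorization_details = [
    {
        "type": "payment_initiation",
        "instructedAmount": {"currency": "EUR", "amount": "123.50"},
        "creditorName": "Merchant A",
        "creditorAccount": {"iban": "DE02100100109307118603"},
    }
]

# The array travels as a request parameter alongside (or instead of)
# "scope"; serialized, it is just JSON.
serialized = json.dumps(authorization_details)
```

The typing is what lets the AS render a meaningful, transaction-specific consent screen instead of a generic scope string.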
H
I've talked in various places about this problem that Torsten just talked about, and I've been proposing a different solution to it, and to make that a little bit more tangible I wrote it down as a specification. I've been proposing that that problem can be solved using the claims request parameter from OpenID, but the pushback that I get is that, well, that's bound to OpenID, and a lot of people want to do API security using only OAuth. And so in this draft, what I've done is lift that out.
H
So here you can find the link to the draft. I've lifted that out of OpenID and made it usable for API access using just OAuth, and explained how we can use this in plain OAuth, non-OpenID-Connect, flows. I've stipulated how the input can work, not only in browser-based flows like authorization code and implicit, but also the output, which is something that's missing from OpenID.
H
Is it okay if we wait a little bit? Because I might even address your point. So, just to know what we're talking about: if you've ever seen an authorization request like this, where we have the claims request parameter, which is even like the example Torsten just put up, this is what we're talking about. So it's basically a query asking for certain claims to be asserted by the AS.
H
So, the terms that I mentioned: we're using claims, claim name, and claim value from the JWT specification; essential from OpenID, but rewritten so that it's not bound to the end user; and some others that are defined in the draft, importantly claims sink and claims request object. These are things that exist abstractly in OpenID but aren't given a specific name. A claims sink, what we mean by this, is the part of the request where the client is saying: this is where I want the claims to be asserted.
H
I want them to be included, for example, in an access token. A claim might say, I don't know, I want them to be asserted but I don't know where they're going to go, so it might just say the claims sink is a question mark; or it might say I want them in every destination, so I'll use a star. It might also be ID token or userinfo, if the AS is also an OP. So, requesting claims: the objects here I've given a name, a claims sink query object and a claims value query object.
H
But basically what this is, is a way of querying for certain claims to be asserted by the authorization server. So we're asking that claims be put in a certain sink, in a certain destination: the access token, the ID token, and the token specific to a certain resource server. And then we can also have preferential values to be asserted. But it's really, really important to keep in mind that this is a query; this isn't a demand from the client saying you must assert this value. So, similar to what Torsten was saying there...
H
You know, I want this transaction, I want this information, this account here; you can see another way of doing that. I can say, okay, I want a value of account 123, or otherwise account 456; this is essential to me for the smooth operation of authorization at the API. And here I have a payment ID with a value of a certain payment number. So another thing is essential claims.
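The account and payment example just given could be expressed as a claims-request query like the sketch below. The sink names, claim names, and per-claim query syntax here are assumptions about the draft's structure, grounded only in how it is described in this session:

```python
# Sketch of a claims-request query: sinks name the destination
# ("access_token", "*" for every destination, "?" for "anywhere"), and
# per-claim query objects carry preferred values and the "essential"
# flag. This is a query, not a demand: the AS may decline to assert a
# claim, or assert a different value, without erroring.
claims_request = {
    "access_token": {
        # "I want account 123, or otherwise 456; this is essential to
        # me for smooth operation of authorization at the API."
        "account": {"essential": True, "values": ["123", "456"]},
        # A specific payment number (value made up for illustration).
        "payment_id": {"value": "pmt-789"},
    },
    "*": {
        # null means "assert it if you can, any value, anywhere".
        "sub": None,
    },
}
```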
H
So an essential claim, like I just said, is a claim that is required, or essential, for the smooth operation of a task by the resource owner at a resource server. It doesn't mean that it's a required value, so the authorization server should not respond with an error if the essential claim isn't asserted with those values, or if the claim isn't available at all for that resource owner. Essential claims are defined by OpenID Connect.
H
But there is nothing in OpenID Connect that says anything about it having to be there, being a required claim. So the draft creates something called a critical claim, using something similar to JSON Web Tokens, where it has a crit member in the JSON object. It's an array of JSON pointers that point into some part of the request, to say that this is critical: if the authorization server can't assert this claim or claims...
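A minimal sketch of how crit entries, as an array of JSON pointers into the request, could be resolved. The pointer strings and the request shape are illustrative assumptions; the pointer walk follows RFC 6901:

```python
# Minimal JSON Pointer (RFC 6901) walk, enough for object keys.
def resolve(doc, pointer: str):
    node = doc
    for token in pointer.lstrip("/").split("/"):
        # RFC 6901 escaping: "~1" decodes to "/", then "~0" to "~".
        token = token.replace("~1", "/").replace("~0", "~")
        node = node[token]
    return node

request = {
    "claims": {
        "access_token": {"account": {"essential": True, "values": ["123"]}}
    },
    # Marks the account claim as critical: an AS that cannot assert it
    # should fail the request rather than silently omit it.
    "crit": ["/claims/access_token/account"],
}

critical_parts = [resolve(request, p) for p in request["crit"]]
```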
H
Some of them can be critical as well as essential, so it ends up working similarly to the scope request. And in all four of those flows it defines some errors, and it does so in a way that maintains compatibility with OpenID Connect; OpenID Connect isn't very rich in its error codes for that, so the draft defines some additional ones that can be used.
H
Refresh: I know I'm going over time, but I'll do my best here, quickly. You can send the claims request parameter to do down-scoping, just like you can send the scope request parameter to down-scope. If you've done that and then you refresh again, you can up-scope back to your original grant. This is difficult to implement.
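The down-scoping and up-scoping behavior on refresh can be sketched as set operations over the originally granted claims. This is a conceptual model of the behavior as described, not the draft's wire format:

```python
# The original grant fixes the ceiling of what any refreshed token may
# carry.
granted = {"sub", "account", "payment_id"}

def refresh(requested=None):
    # No claims parameter on refresh: restore the original grant.
    if requested is None:
        return set(granted)
    # Claims parameter present: narrow to the intersection with what
    # was originally granted; a refresh can never exceed the grant.
    return granted & set(requested)

narrowed = refresh({"account"})  # down-scope to a subset
restored = refresh()             # a later refresh up-scopes back
```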
H
I can speak from experience: using scopes and claims together makes this hard, and policy changes make this hard. But those two parts are left out of the specification, so it doesn't talk at all about using scopes and claims together like OpenID does, and it also doesn't go into handling policy configuration changes. So refresh is actually pretty simple as far as the spec is concerned.
H
Token introspection: when you introspect a token, the response includes a space-separated list of the claim names that were authorized, so that the API can see the extent of the scope of the token as far as the claims go. I know, I know; so yes, it's in there. The important part is that critical claims be supported, because you might be talking to an OP, an OpenID Connect provider, that doesn't support this.
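A sketch of an introspection response carrying the authorized claim names as a space-separated string, mirroring how the scope value is returned; the field name "claims" used here is an assumption:

```python
# Introspection response: alongside the usual "active" and "scope"
# members, the authorized claim names come back space-separated so the
# resource server can check the token's extent.
introspection_response = {
    "active": True,
    "scope": "openid payments",
    "claims": "sub account payment_id",
}

# The RS splits the list to test whether a claim was authorized.
authorized_claims = set(introspection_response["claims"].split())
```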
H
It's confusing, and oftentimes it's like, why do we need to do this? It doesn't help, so I think it should be dropped from the draft, but I wanted to have some broader input on that. Maybe some restructuring of the document to avoid redundancy across the different flows. How do you integrate it with resource indicators? I'd like some suggestions on that, and I'd also like to talk about the integration with token exchange in the next version of the draft.
H
Oh right: the token exchange and resource indicator tie-ins, and of course the leftover considerations haven't been written yet, and also a way of registering clients to say that they want access to certain claims. So, full disclosure: at Curity we've implemented all of this. We had to do most of it because it's in OpenID Connect, but the other aspects, like the output of claim names and things like that, we have also implemented, and we have no patents on any of these things.
A
I think we need more people to read the document. I think it relates nicely to the presentation Justin gave earlier on. Do you want to say famous last words, Aaron, or do you want to deep-dive into a technical discussion about this?
L
I want to do the opposite of a deep dive, which is back up to the high-level view of all of this, and I'm sorry for being so blunt, but we're out of time. I think this is actually the completely wrong approach, bringing things from OpenID Connect into this, because ID tokens are meant to be consumed by clients and understood by them.
L
Access tokens are explicitly not, and this feels like bringing knowledge about authorization servers and access tokens into the client, in a way that... Torsten's proposal is more about expressing the goals the client is trying to achieve, rather than how it expects the AS to achieve them. I feel like that's the line that this is crossing, and I don't think it's appropriate. Yeah.
A
Good; we always have different perspectives in the group, so it's kind of normal, and we're going to have a discussion on this, and the story will continue when we have our meeting. I will send out a doodle poll for dates in December, and, of course, it's December, I know, so maybe leading into January, to see what works for you guys. And we need to look at more time, obviously, to cover these topics that we left out.
A
As in all the working group sessions, our workload varies a little bit: when we make the session request, sometimes there's barely anything, and then you get totally hyped up on something.