From YouTube: OAUTH WG Interim Meeting, 2020-11-30
A: Okay, welcome everyone to our interim meeting, so let's get going. A reminder that the Note Well applies here; if you're not familiar with it, please familiarize yourself with it. Today we will talk about DPoP. I just want to remind you that we have one more meeting next week to talk about a new individual draft, the AS Issuer Identifier in Authorization Response. It will be next Monday at the same time. Today, again, the focus is on DPoP, and I think Brian will be taking us through this one. Any questions or comments about our plan? Okay, so Brian, I'll stop sharing and give you a chance to share.
B: Take it away? I don't know, everyone feels silly even asking, but did the slides come up? Okay, that's the intro slide. So we are not meeting in Bangkok, but you know I like to keep some photos around and throw them in there. We should have been in Bangkok, but instead we're having this virtual interim meeting, as was already said, to discuss DPoP. So with that said, here we go. Real quick — it's a small group, so I'll try to cruise through the introductory stuff.
B: But I still want to cover this: what is DPoP, and what isn't it? It's a pragmatic, application-level sender-constraining of access and refresh tokens for OAuth, and it does that by binding a key pair that's controlled by the client to the tokens, in a trust-on-first-use style. What it's not: it's not an HTTP signature scheme, it's not a new client-to-authorization-server authentication mechanism, and it's certainly not perfect or an infallible solution. It has its warts, but it is useful and pragmatic — not perfect. A real quick overview of what it is: basically, this thing called a DPoP proof JWT is sent as an HTTP header on each HTTP request, and this demonstrates a reasonable level of proof of possession of a key in the context of that particular request.
B: These proofs are sent the same way, with the same syntax and semantics, for both kinds of requests: to the token endpoint when requesting new tokens (that is, grant-type requests), as well as to protected resources. The AS uses the proof to bind the tokens it issues to the key pair that the client has, and the RS uses the proof to verify the bound tokens — to make sure that the client presents a proof for a key to which the token it receives was actually bound.

The DPoP proof JWT is sent in a header named DPoP, surprisingly, and it looks kind of like this: "DPoP:" followed by a JWT. The actual anatomy — the inside bits and pieces — of a DPoP proof is a JWT that hopefully many are familiar with. It's explicitly typed, because that's what we do now. We only support asymmetric signatures, because it's an asymmetric sort of protocol flow overall. The key for which proof of possession is being demonstrated is sent as the jwk header in the JWT.
B: So basically this JWT is saying: you can verify the signature with this public key, and in doing so you get some proof that the presenter holds the private key associated with this public key. That's all in the header. Then, in the claims, we have some pretty minimal information about the HTTP request. The idea is just enough information to bind this proof to this particular request — that's the method (htm) and the URI (htu). We have a unique identifier, jti, that can be used for some level of replay checking, and we have the issued-at time (iat); the document specifies that a proof is only acceptable for a limited window of time relative to its creation time. And, it being a JWT, we don't specify anything else here, but certainly other claims could be included, per the needs of specific implementations or derivative profiles of this document — and there's actually some consideration and thinking about work in that area by a couple of different groups and people. So I think it's worthwhile to not have anything more here, but to allow for it, such that extensions can make use of it.
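The pieces just described can be sketched with Python's standard library — building the header and claims of an (unsigned) DPoP proof. The field names (typ, alg, jwk, jti, htm, htu, iat) follow the draft; the signing step is omitted here, since a real proof is signed with the client's private key (e.g., with ES256), which needs a crypto library.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # Base64url without padding, as used throughout JOSE.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def unsigned_dpop_proof(public_jwk: dict, method: str, uri: str) -> str:
    # Header: explicitly typed, an asymmetric alg, and the public key itself.
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    # Claims: just enough to bind the proof to this one request.
    claims = {
        "jti": str(uuid.uuid4()),  # unique id, usable for replay checking
        "htm": method,             # HTTP method of the request (case-sensitive)
        "htu": uri,                # HTTP URI of the request
        "iat": int(time.time()),   # creation time; accepted only within a window
    }
    # A real DPoP proof would append a third, signature segment here.
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
```

The jwk value would be the client's actual public key; the one in the usage below is a placeholder.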
B: When you make an access token request, this should all be familiar: it's an authorization code grant, exchanging that code for tokens, and the only difference with DPoP is that the DPoP header is included. This is the DPoP proof specific to this particular request, so it'll have POST in it and the URI of the AS's token endpoint. Assuming that all validates, an access token is returned; the only difference here is that the token type indicates that this is in fact a DPoP-bound access token, so the client knows that that is the case. This particular response happens to have a refresh token as well. The refresh token could be used in a later access token request, and the same things apply: the DPoP proof is sent as a header, and in the case that this is a public client — and in this case it is — that refresh token had previously been bound to the same key.

So this is a way of binding refresh tokens for public clients as well, which is something we don't have via other mechanisms other than mTLS — and that has its own difficulty in deployment and maintenance. So this gives an application-level way to bind refresh tokens to client instances, and the presentation is just the same as you would make any other access token request. A quick note that we have some new authorization server metadata.
B: It's just an array of the JWS algorithms supported for DPoP proof JWTs, and by inference the existence of this indicates general AS support for the DPoP protocol — if "protocol" is the right word; "profile"? We need a better word. And then, for the actual access tokens themselves — for JWT-based access tokens and introspection responses — we specify the way in which the confirmation claim is used to bind the issued access token to the public key from the DPoP proof. That's via the cnf confirmation claim, which contains a member that is the SHA-256 JWK Thumbprint of the DPoP public key. This is what binds the access token to the client's public key for DPoP. And this isn't the only way to do it, of course — as is always true with OAuth.
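That thumbprint confirmation value is the JWK Thumbprint from RFC 7638: the required members of the JWK are serialized as JSON with lexicographically sorted keys and no whitespace, hashed with SHA-256, and base64url-encoded. A standard-library sketch (the key values in the usage are placeholders, not a real key):

```python
import base64
import hashlib
import json

# Required JWK members per key type, per RFC 7638.
REQUIRED_MEMBERS = {
    "EC": ("crv", "kty", "x", "y"),
    "RSA": ("e", "kty", "n"),
    "OKP": ("crv", "kty", "x"),
}

def jwk_thumbprint(jwk: dict) -> str:
    # Keep only the required members; optional ones (alg, use, kid...) are ignored.
    members = {k: jwk[k] for k in REQUIRED_MEMBERS[jwk["kty"]]}
    # Canonical form: lexicographically sorted keys, no whitespace.
    canonical = json.dumps(members, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
```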
B: If you have a specific agreement between AS and RS, you can do this however you want, but the intent here is to standardize the way this information is carried for JWTs and for introspection, because those are very commonly used ways of doing access tokens, and to allow for independent, interoperable functionality between AS and RS implementations that are not necessarily owned or developed by the same group — or provider, or whatever we're saying. It gives some interoperability at that level. And then protected resource requests are, as you might imagine, similar.

There is a 401 WWW-Authenticate challenge: if you request a protected resource without a token, or without any appropriate sort of authentication, you'll get something like a 401 with WWW-Authenticate: DPoP as the scheme, a realm (which is in all these challenges), as well as a way to indicate the algorithms that this protected resource or resource server supports for accepting DPoP proofs. Then there's a similar — but somewhat different — challenge to a resource request with an invalid token: the same DPoP WWW-Authenticate challenge with a 401, but there's also the error and error_description. The algs can also be indicated in that case, though it's likely not necessary at that point. These are very similar to the Bearer challenges, but with the addition of the algs grammar, to allow for a somewhat more dynamic exchange of the supported algorithms for DPoP. And then we get into a little bit of history, nice updates, all those sorts of things.
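As a rough illustration of the challenges just described, here is a small parser for the parameters of a WWW-Authenticate: DPoP header. The exact header values below are illustrative examples, not copied from the draft, and the parsing is deliberately simplistic (it assumes quoted parameter values).

```python
import re

def parse_dpop_challenge(header: str) -> dict:
    # Expect something like:
    #   DPoP realm="api", error="invalid_token", algs="ES256 PS256"
    if not header.startswith("DPoP"):
        raise ValueError("not a DPoP challenge")
    params = dict(re.findall(r'(\w+)="([^"]*)"', header))
    # algs is a space-delimited list of supported JWS algorithms.
    if "algs" in params:
        params["algs"] = params["algs"].split()
    return params
```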
B: A nice transition slide here from Frankfurt. So, I put together a little bit of history of DPoP. I won't go through all of it here, but basically the last couple of items: the draft was adopted earlier this year, there were a bunch of other discussions, and just recently, on November 18th, an -02 revision of this draft was published — hopefully enough time for people to take a look at it before this post-IETF-109 virtual interim.
B: We didn't actually have any meetings at IETF 109, but this interim follows closely on its heels, and yeah, that's where we're at right now, on November 30th. So, what's new in the latest draft? There are a lot of updates here.
B: Some of the main ones I wanted to touch on: a pretty broad expansion of the objectives in the draft, trying to better lay out what DPoP does, what it doesn't do, and what it's meant to do in terms of protections; better describing what it would look like to support mixed or transitional deployments where you have both Bearer and DPoP token types in the same deployment; and more clearly allowing for a bound refresh token to be issued in conjunction with a bearer access token — there was some wording there that was backed off a little bit, plus additional text clarifying that this is in fact possible and legal. We did add a requirement that a protected resource simultaneously supporting both the Bearer and DPoP schemes must reject an access token received as a bearer token if that token is in fact DPoP-bound; for a resource supporting both at the same time, this is a check that prevents a sort of downgrade usage. We removed the case-insensitive qualification on the htm claim — that's the HTTP method. It was noted that we had case-insensitivity there, but HTTP methods are in fact case-sensitive, so it makes sense to keep the claim in line with that. And we relaxed a little bit the wording and the requirements around tracking jti for replay, and also qualified the scope in which it has to be tracked by the URI of the request.
B: So: a couple of points of discussion that have come up on the list and in various places, which were not yet addressed in the draft, to raise here, talk about, and see if we can get some idea of consensus to move forward. We've got a picture of Prague, which has already been canceled — another opportunity to overshare photos. Too bad we're not going to Prague; I love it there, but it's not happening. Anyway, here are those issues; there are a few of them. One of the biggest is this issue of freshness of the proof, and what the signature itself actually covers on the proof — not the token or anything else, but what the signature of the proof covers. The biggest issue is basically that malicious code — cross-site-scripting-type code executed in the context of a browser-based client — could potentially create DPoP proofs that are valid in the future and exfiltrate or steal those along with tokens. Together, those stolen proofs and tokens can be used to access protected resources, or potentially to acquire new access tokens, completely independent of the client application — and future DPoP proofs could be created even for tokens that have not yet been issued.
B: In conjunction with this: there's no binding from the proof to the token itself — the binding is from the token to the key — which is what allows these proofs to be created for tokens not even issued yet, and potentially allows for some swapping of tokens, if a lot of other, I guess I would say unlikely, events came to be. But trying to give a nod to the potential issue that Justin raised previously: there's at least the technical possibility that a proof could be used with a different token, because there's nothing binding the proof to the token; it only goes the other way around. So that's kind of the current situation.
B: We have some level of freshness of the proof in the iat claim, but this doesn't prevent pre-computation by an adversary who can use the private key without actually stealing it — via cross-site scripting or something like it, which is actually the only vector I can think of. Such an adversary could create these proofs and future-date them, so they would be valid for resources at some point in the future. This is largely because there's no server contribution to the content of the proof, and at least in part because the token itself is not covered by the signature of the proof.
B: So, some potential options here. One, quite frankly, is that it's maybe sufficiently okay as it is. This is discussed in the new objectives section of the draft, and key rotation is recommended as a means to reduce the impact. Another consideration is that, by trying to mitigate this future-dated exfiltration, we may add a lot of complexity to the protocol and the implementations themselves, while only closing off one variation of an attack that could be carried out anyway despite those efforts. It kind of gets into this nihilism around cross-site scripting: pretty much, if your application has cross-site scripting vulnerabilities, there's not much that can practically be done to prevent attacks through that vector, so we'd just be preventing one subclass of those attacks — the exfiltration and later usage.
B: That said, another option is that we could incorporate a hash of the access token — or the access token itself, but probably a hash, for reasonable size considerations — into the DPoP proof. That would at least limit the scope of what could be done with those exfiltrated artifacts to using currently issued tokens.
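The hash being discussed would presumably look something like the base64url-encoded SHA-256 of the access token, carried as an extra claim in the proof. The claim name "ath" below is an illustrative assumption — at the time of this discussion, no name had been settled on.

```python
import base64
import hashlib

def access_token_hash(access_token: str) -> str:
    # base64url(SHA-256(access_token)), without padding.
    digest = hashlib.sha256(access_token.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def add_token_binding(proof_claims: dict, access_token: str) -> dict:
    # Hypothetical claim name "ath": ties this proof to one specific token,
    # so a pre-computed, exfiltrated proof is useless for tokens issued later.
    return {**proof_claims, "ath": access_token_hash(access_token)}
```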
B: Although I start to wonder: there's a trade-off between the protocol itself enforcing something at the cost of complexity for everyone, versus individual client implementations taking some action to mitigate it at the cost of just a little bit of extra complexity for them. It also, at least to me, raises the question of whether we should be including other types of artifacts in the proof if we were to do this — like, is there value in having some kind of coverage of the authorization code, the refresh token, and maybe other grants?
B
I'm
not
I'm
not
sure,
but
it
sort
of
begs
the
question
if
we
were
to
do
it
in
one
place
and
not
the
other,
but
that's
one.
One
potential
option
to
work
with
here:
another
option
not
necessarily
exclusive
to
that,
but
would
be
to
allow
the
server
to
provide
likely
via
some
kind
of
challenge,
some
contribution
to
the
proof.
B: In the context of the current design, at least when I think about it, this feels a little bit awkward to work in nicely. Maybe I'm just not seeing it, but I struggle to see how it can fit in cleanly or elegantly. Hopefully that makes sense — it just sort of doesn't quite fit right, particularly for interactions with the AS, and a challenge per resource request seems completely untenable. It's already a pretty expensive protocol, and if we were to have a challenge-response re-request on every single call, I just think that's too much overhead. So we'd need to consider some way to amortize the cost of the challenge over many subsequent requests, which is likely possible, but it certainly needs to be figured out and considered, and it likely has its own complications around state maintenance and so forth.
B: Thinking of the chairs: should I just go through all of them and then come back to discussion on each of these items, or do you want to try to hit that now — or maybe there is no discussion?

A: I think it would probably be better to have the discussion right now, because if you cover three of them and then people start queueing up, we'd lose track — so let's just start that discussion now. And I see Dick in the queue here. Before I let Dick chime in, I will post the link here; if you haven't added your name, please go add your name. Go ahead, Dick.
B: Yeah — I mean, I don't quite understand the motivations and rationale, and it's not really something that — I don't know how to say this — is of interest or need to me. So that's why I have not responded.
C: Sure. The idea is that, as opposed to putting the hash of the key — or the fingerprint of the key — into the access token or refresh token, you can use your existing access tokens and refresh tokens, and you put a hash of the token and a hash of the client's key into a separate token, and that's a binding token. The resource server then verifies that binding token with some key that it has from the authorization server — which could be the same key that signed a JWT access token — but it enables you to use any existing access token and refresh token. That binding then binds the access token to the key, separately, in both directions. So it solves this problem. It also makes it easier for people to deploy, without having to change their code for access tokens or refresh tokens.
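A rough sketch of the separate "binding token" being proposed here, as described above: a signed statement from the AS containing a hash of the access token and a hash of the client's key, so that neither token format needs to change. The claim names and the HMAC-based signing below are illustrative assumptions for the sketch, not from any draft.

```python
import base64
import hashlib
import hmac
import json

def _b64url(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def _sha256_b64url(data: bytes) -> str:
    return _b64url(hashlib.sha256(data).digest())

def make_binding_token(access_token: str, client_key: str, as_secret: bytes) -> str:
    # The AS asserts: "this token hash goes with this key hash."
    claims = {
        "at_hash": _sha256_b64url(access_token.encode()),
        "key_hash": _sha256_b64url(client_key.encode()),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(as_secret, payload, hashlib.sha256).hexdigest()
    return _b64url(payload) + "." + sig

def verify_binding_token(token: str, access_token: str, client_key: str, as_secret: bytes) -> bool:
    payload_b64, sig = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4))
    if not hmac.compare_digest(sig, hmac.new(as_secret, payload, hashlib.sha256).hexdigest()):
        return False
    claims = json.loads(payload)
    return (claims["at_hash"] == _sha256_b64url(access_token.encode())
            and claims["key_hash"] == _sha256_b64url(client_key.encode()))
```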
A
Okay
thanks:
anybody
else
has
any
comments,
questions
any
thoughts
about
what
brian
just
described
here
or
dick's
idea.
A
D: Torsten speaking. Just one question to Dick: I don't understand how the binding between the access token and the key material is being established, because in DPoP it's the AS that, in the end — having seen the proof for the private key — attests that whoever is in possession of the private key corresponding to a certain public key was issued that access token. I don't see how that binding is established with your proposal.
B: I agree with you. It's changing different bits and pieces of the protocol, and depending on how things have been implemented, it might be easier based on the layering. But in all the implementations that I know, mucking with the contents of the access token itself is relatively straightforward and simple — or at least manageable — versus trying to issue additional artifacts, which is less so. So I agree with you.
E: Yes, Filip Skokan here. What I don't get is how the resource server would be able to figure out that this is a bound token if the client simply omits sending the proper token type in the Authorization header — the proper scheme — and also the second header with the binding artifact. Because what we have now is that the resource server can do the check for both JWT tokens, by looking into the claims, as well as by talking to the introspection endpoint when there are opaque tokens, and it is able to figure out that this is actually a PoP token for which it should process the binding check.
B: I don't believe that's possible there, so either you can't have an RS that supports both at the same time, or you're just subject to the possibility of a downgrade with that style.
G: Mike Jones from Microsoft. I know that a long time ago, when we were meeting in person, Amazon, I think, was interested in being able to use symmetric key cryptography for its efficiencies. I know that, Brian, you and I — and I think others — whiteboarded out, back when we could stand in front of whiteboards, how to do that.
B: Yes, I have thought about it quite a bit. Basically, it would detail out a derivation of the idea that Neil presented around the same time, and it's my assertion that it's a completely different protocol at that point — different enough that it would be something different. Thereby being different... can I say "different" a few more times? But it really is completely distinct in the way that the flow works. Early this year we had an interim on PoP in general, and at that point I put forward DPoP as a working group item, more or less as an attempt to codify, write up, and actually specify Neil's idea in more depth. The working group then indicated a strong interest in pursuing DPoP, so it tabled that other work. That doesn't mean it couldn't come back, but I consider it sort of closed for this particular document. It would need to be its own thing and considered distinctly; it's not something that we can keep considering reintroducing into this document. It's different enough that it just doesn't work. So — I don't know how to say it — it's sort of closed with respect to DPoP: DPoP is an asymmetric scheme, and that's just what it is.
H: Yeah, hi. I suppose, Brian, on including a hash of the access token and the other artifacts: is it mainly just the size consideration — why we might not want to do that? If this were some sort of back-end protocol, I'd have said we should 100% do it. Apart from the size, I can't see many negatives to doing it.
B: The size of the hash, I don't think, is a major concern; I think it could be added without materially impacting anything. It does bring the downside of having to specify a hash algorithm, and either to ingrain it or to build some sort of hash agility, which has many of its own problems. So there's some consideration there, but, you know, it can be dealt with.
B: To be honest, there's some resistance on my side, basically just because of my attachment to the simplicity of the model and the symmetry of the proof: the client always sends the proof in the same way, regardless of what it's accessing, so it doesn't need to make decisions about what to put in there and what to hash in the context of which request — it just creates a proof related to the request and sends it off. That said, I do think it's probably worthwhile breaking that little idea in order to do something with the access token, and that could be done relatively simply. But in my own thinking about it:
B: If we're going to do that for access tokens, then maybe it's worthwhile for these other artifacts too — and cleanly specifying how, and which artifact you put in there, based on the different grant types and requests (which are themselves an extensible mechanism) starts to get a little confusing and potentially ugly. Like, do we say: okay, it's the authorization code for the code grant, the refresh token for refresh, and then nothing for other grants? Or do those grants have to specify it themselves — and those grants may have already been written? It starts to get a little ugly from a layering and specification perspective, which sort of led me to push back on it a little. But as I think about it, and even as I talk about it here, I'm wondering if doing nothing for the grant types and doing the hash of the access token for resource access might be a good, worthwhile middle ground — largely because of the complexity, and also because I'm not really seeing where you get the value in hashing the other artifacts. I don't see what that would prevent or protect against, whereas there is value in doing it for the access token. So I've talked in a lot of circles there, but maybe that explains a little bit of my thinking, and some of my hesitation to do anything with it. But I'm thinking more and more that doing the access token — and only the access token — might be a good sort of compromise.
H: You know, if there's been that level of attack vector exposed anyway... so yeah, I'm not sure, but that definitely does sound reasonable. I suppose it's just — thinking of the kind of mTLS analogue — because that binding is at the transport layer, it can't be replayed in the same way. So it would be good, if it were possible, to bind it to the access token itself.
B: I mean, for the actual token, I think there are at least some attacks that it does close down. Maybe that can be achieved in other ways, but there are some known ones, whereas with the other artifacts I'm less sure — and that just happily coincides with the other ones being harder to specify, too. So I guess that's sort of where I'd lean.
A: I believe that we aren't talking about the same thing. I'm not speaking, in my email, about the proof token sent by the client, but about the token generated by the AS. And Section 8.1, which is part of the security considerations section, is not the right place to state what kind of processing should be done by the RS when receiving both an access token and a DPoP proof token.

I: Basically, the text of the core of the document doesn't prevent explaining the idea in Section 8.1 better — but it should not be only in Section 8.1. There is something important written first in Section 8.1: there is a MUST, and I don't believe that MUST should be in the security considerations section; it should be in the core of the document. I believe that when you issue a token, it can be valid for a full day, because your attributes are not going to change in a few seconds. Well, if the AS believes that the token shall only be valid for a few minutes or seconds, then that's the expiration date in the token. So I was speaking of the expiration date in the token, not the expiration — okay, but you made the comment—
B: —about the proof, and this draft is about the proof. So I don't know where you want to go with expirations in the tokens themselves; it's an orthogonal issue, something completely different, so it's not relevant to this, and I don't even know how to respond to it, because you're talking about things that aren't even part of the draft. I do have some other issues that I'd like to discuss, to actually move forward with this. So can we—

Okay. Well, it's difficult to achieve consensus and write these documents even when we're all talking about the same thing, and there are limited resources in time and energy. That's just the reality of it.
A: I know, I know. So let's maybe have Denis send an email again and see if he can clarify what he's talking about there, and go from there.
J: Yes, just very quickly: I want to point out that, for a lot of these, it's important that we keep in mind the first bullet on the right-hand side there — that sometimes we make the choice to say: yes, this is a known hole; here's how you might avoid it, but this protocol is not going to be where it is specifically addressed. I think that just because there is a shortcoming in a system doesn't necessarily mean that we have to address it in a particular way here — and I say that as the one who brought up the token coverage question.
B: Yeah, absolutely agreed, and not specific to the issue you brought up — that's sort of my point in the first bullet under the ideas: maybe it's okay as is, and we discuss it and warn against it. But at the same time, the value of adding something — that being the access token hash — may be worthwhile enough to do it, and I'm maybe sort of leaning in that direction now.

Right — let me figure out how to use my machine... okay. Another issue that's come up, twice now anyway:
B: It's been suggested that, for resource access, having the JWK in the proof makes it really easy to just use that key to validate the signature on the proof and call it good — thereby missing the check of the binding to the access token itself via the confirmation claim. It's a two-step process: you have to validate the proof, and then check that the access token's binding is in fact to that proof's key. The statement was that that latter check is potentially something that will be missed often enough that it's worth considering. That possibility was compared to alg "none", which I thought was a bit much, but it's the same kind of question: is the protocol, as currently designed, making it easy enough — or too easy — to make a mistake that is believed to be common or likely enough that the design should be changed to avoid the potential for it? The current situation is: we send the full JWK in the proof, and there's a hash of that JWK in the access token's confirmation claim, and it's been suggested that missing that check is potentially a foot gun — something we could avoid with a different protocol design.
B: That said, there's been only one person advocating this position. I've responded to it and sort of come around to the idea somewhat, but from a consensus point of view, it's one person's opinion at this point. Still, I wanted to raise it here. Really, as I see it, we have two options. One is, again: it's fine the way it is — just leave it.
B: It's nice to have the symmetry between AS and RS access with the proof. It's very similar to the way that mTLS works — and, though it's defunct now, the way Token Binding worked — in that the key proof happens sort of elsewhere and the access token is bound to the key. And the idea of checking the binding, in a protocol that's about binding access tokens, feels kind of fundamental: although it's possible for that check to be missed, it feels like something that, if you're going to go to the trouble of implementing a binding protocol, you should go ahead and check. But the other option would be to change things around a little bit: rather than a hash of the key, include the full JWK in the access token's confirmation claim, and omit the key entirely from the DPoP proof on resource access.
B: Just by virtue of pulling the key out — putting the full JWK into the access token's confirmation claim, and then having it not included in the proof itself on resource access only (you still need it for authorization server access, because that's where the AS finds out the key) — this would potentially be less error-prone in implementations. It's also somewhat smaller in terms of the total amount of data conveyed between the two artifacts — there are always trade-offs with what's cached where, and whether it's a token that requires introspection or is self-contained or whatever — but, in general, it would be a key traveling between the two artifacts rather than a key plus a hash. And it has the nice benefit of not requiring us to define a hash function, which is a somewhat side benefit.
B: Although it sounds like we might consider adding a hash function somewhere else anyway, so it's not like it goes away entirely. So those are really the two options — keep it as is, or make the change — and I guess the only other consideration is that, even though we're in draft stage here, the second bullet would effectively be a breaking change. That's always okay in draft stage, but it's maybe something to think about if we went in that direction. That said, I'm certainly interested in getting to consensus on one of these two options — more or less picking one, moving forward, and putting it behind us.
J: Yeah, so I definitely get the thrust of the arguments here that Brian's presented — I think you've done a good job capturing that — and I just want to add that there are a lot of other really bad things that can go wrong if the RS isn't checking aspects of the access token beyond just its immediate presentation validity. It is aligned with alg "none" in that it makes it easy to put a premature finish line in your code and not do any checking past that, but a lot of other bad stuff can happen if, say, the token doesn't have the right scopes, or was issued to a different client, or was issued to the wrong user, or any number of other things dealing with that token — beyond just that it was presented with a signature by the key that it was presented with.
K: I'd just like to add: I think this is a very valid point, and we've seen in the past how people implement such things wrongly. But I think there's also a lot of value already in having a good description of the algorithm for how to check these things. When we make the spec very explicit on what to check, and in which order, I think that can already be a good step toward avoiding these kinds of mistakes.
A: On that last open issue — Brian, I see that you have two more important open issues here, so I'm thinking that we might need a follow-up meeting to continue this.
B: Well, let me take this one to the list too, and at least see if we can gather some feedback to try to get to consensus. It should be easy to summarize, because it's "A or B — please pick one and we'll move forward." And the next issue is hopefully really short, so if you don't mind, I'm just going to pound through it on the slide... once I figure out how to advance the slide. Oh, my god.
B: All right, I'll just summarize it. It's the fact that DPoP is not a client authentication method — we go sort of out of our way to say that — but we're seeing situations where, with asymmetric client authentication and asymmetric binding both in play, you get the combination of DPoP and private_key_jwt for client authentication, which is a little bit weird: both of those artifacts are on the requests doing, in some ways, very similar things. By doing some pre-registration or pre-configuration of keys, DPoP could be turned into an OAuth client authentication method pretty easily — basically, pre-registration of the key in some way, plus having the proof carry the client identifier, I think, and then it's done. It simplifies the cases where you would otherwise have private_key_jwt in there, so it seems like an easy win.
B: I was wondering if people are interested in having that specified somewhere — either in this document, which I'm actually not a big fan of, because I think it maybe confuses and complicates things; or in a new document, which would be relatively short and simple; or we just forget it and pretend we never had this discussion.
A: —add their name to the list. Please do that, Brian: let's start with sending those open issues to the list, and if you can't close them there — right, right. Otherwise, we could consider—