From YouTube: IETF99-OAUTH-20170718-1330
Description
OAUTH meeting session at IETF99
2017/07/18 1330
https://datatracker.ietf.org/meeting/99/proceedings/
And the JAR document is in evaluation, and the other document is further along, and an announcement would be sent fairly soon. We have one new working group item added to the milestones, which is the mutual TLS profiles document. We discussed it in Chicago and there was clear interest in working on that topic, so we got an update; it's already, I think, version 2 of the document, and we'll talk about it later today.
A
The token exchange and the device flow drafts are in working group last call, but we need more comments; we're also going to talk about this next. Is Nat here in the room? Whoops, Nat forgot that there was a meeting, so when he shows up he can probably tell us a little bit about the status of the JAR document, because it was waiting for a revised ID. Are there any other sort of updates? Is there any info on some of the other documents? I don't think so, from the authors of the documents.
A
It's an intent to sort of collaborate and discuss topics with security researchers, who have been quite successful in doing formal analysis of our protocols, and also to discuss other security-related techniques. While it's labeled as an OAuth workshop, I think it's equally applicable to people working on COSE and other security mechanisms, also for IoT. We also talked about the formal analysis that TLS just got. It was again a very good workshop; the information is uploaded, so you can find the slides and papers there.
A
So, with that said: we talked about the formal analysis that was done on this group's protocols, at ETH Zurich, but also by the guys from Trier who are now in Stuttgart, at the University of Stuttgart. We talked about crypto-related attacks; Antonio from Adobe, who has successfully compromised and hacked many OAuth systems, talked about the recent vulnerability discovered in JWE, which Mike is going to talk about a little later. So that was a good segue into the cryptography, which was quite interesting.
A
He looked at curve cryptography and gave us a good tutorial. He also had new ideas on nonces, and I hope that some of them actually trickle down into the OAuth working group, like the PKCE ideas for the implicit flow and so on. We also had some discussion on what we could do to improve our way of working. I will share some more detailed notes.
A
I haven't had time to do that yet, to write up the workshop itself and point out specific presentations, but it was a really good workshop, and thanks to Dawson and of course the team in Zurich for putting this together and spending the time to plan the agenda, review the papers, and really review the slides and so on. So it was good; thanks again. For today, the first thing we will talk about is the mutual TLS profile; then we have the security topics, which Torsten is going to do.
A
William will speak about incremental authorization, and then we have the already mentioned JSON Web Token best current practices document. So that's what we have today; we also have another session. I didn't put it in the slide deck yet. Nat, since you are here now, could you say a few words about the current status of the JAR document? If you could go to the microphone.
F
So, I'm Nat Sakimura from NRI, one of the editors of the JAR document. I think I have applied most, if not all, of the remaining comments already, and I have also talked with Mike about the registration of some of the parameters into the IANA registry. We discussed it, and OpenID Connect actually has a larger set of things to register, so we thought that it might be a good idea to just do it on the OpenID Connect side and not touch JAR. That means that we don't have to go through another IANA review.
F
That is, before, we were composing the authorization request from the values in the request object and the query parameters, and we were allowing the use of something outside the request object. But now, because of a comment saying that it's probably much better to just use what is inside the request object, we decided to do that. As a result, we kind of removed the possibility to add a parameter like scope in the query parameters.
F
The possibility to provide scope as a query parameter, duplicating what already exists in the request object, is just for convenience; from the security point of view, you just use what is inside the request object. So what I have done in the spec is say clearly that the server must use the parameters in the request object, but the client may send the parameters in the query as well, just for backward compatibility or other things. Correct.
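As a rough sketch of the precedence rule Nat describes (the function and parameter names here are illustrative, not from the JAR draft): the authorization server treats the request object as authoritative and tolerates query-string duplicates only for backward compatibility.

```python
def effective_auth_request(query_params: dict, request_object_claims: dict) -> dict:
    """Build the effective authorization request: parameters inside the
    request object MUST be used; duplicates sent in the query string are
    allowed only for backward compatibility and never override them."""
    merged = dict(query_params)            # e.g. client_id sent for old servers
    merged.update(request_object_claims)   # request object wins on any conflict
    return merged
```

For example, a scope duplicated in the query is ignored in favor of the one inside the request object.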
F
I'm also going to push that to the I-D. The other comment was provided by Brian; actually, the IANA stuff was also provided by Brian, but the other one, which could have been substantial, was about the danger of using request_uri: if you don't vet the request URI well and you naively fetch the content of the request URI, it may pose a DoS attack on the authorization server. So we've also put some comments about that in the security considerations.
A
Following up from the discussions we had last week: yeah, we have to think about what implementers could get wrong, and it's sometimes very basic things. Hence adding some additional, stronger wording and explanation.
F
For example, you could specify the request URI of the authorization request itself as the request_uri, so it's going to be recursive. If the authorization server doesn't cut off after the first fetch, it may just continue fetching until the server's memory is exhausted. That kind of thing.
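A minimal sketch of the kind of defensive fetching those security considerations call for (the names and limits here are illustrative assumptions): cap the response size and never dereference a request_uri found inside fetched content, so a recursive reference cannot drive the AS into fetching until its memory is exhausted.

```python
def fetch_request_object(request_uri: str, http_get, max_bytes: int = 8192) -> bytes:
    """Dereference request_uri at most once, with basic DoS protections."""
    if not request_uri.startswith("https://"):
        raise ValueError("request_uri must be an https URL")
    body = http_get(request_uri)        # injected fetcher, e.g. urllib-based
    if len(body) > max_bytes:
        raise ValueError("request object too large")
    if b'"request_uri"' in body:        # refuse recursive indirection
        raise ValueError("nested request_uri not allowed")
    return body
```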
J
Looking forward to it, thank you. So, I always forget if I should introduce myself; I usually don't. Brian Campbell from Ping. I'm going to present on the status of the mutual TLS profile for OAuth 2.0. It is now in draft 2 as a working group document. There's a little picture of Prague here from the last time we were all here. So what is the mutual TLS profile for OAuth?
J
It's basically two things kind of rolled up together into one. I'm going to give just a little background on it here, hopefully to give some context. It defines mutual TLS client authentication to the token endpoint, as well as a means of doing mutual TLS sender-constrained access tokens for protected resource access. Why are we doing this? TLS client authentication is something that's been done in practice in a number of different deployments and implementations for a long time, but we've never actually written down and specced out how to do it.
J
This spec, as it is right now, is referenced by FAPI, the Financial API working group in the OpenID Foundation, in the Read and Write API security profile, as one suitable holder-of-key mechanism to use with OAuth, which is a requirement of FAPI. It's also referenced by the Open Banking API security profile. So there's already demand, at least from the standards world, for applications of it. How it works, just real quickly: mutual TLS client authentication.
J
How that works: for authentication to the token endpoint, a TLS connection is made, which is always required anyway, but it is established with mutual certificate authentication. The client includes the normal client_id HTTP request parameter in all requests to the token endpoint, which allows for easy identification of the client, and the AS is able to use that client_id to look up the client configuration as appropriate and verify that the certificate established on that channel is the right one for that client.
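A sketch of the lookup Brian describes (the registry shape and all names are assumptions for illustration, not from the draft): the AS identifies the client by the ordinary client_id parameter, then checks that the certificate on the mutual-TLS channel is the one registered for that client.

```python
# Hypothetical client store; a real AS would use its client configuration.
CLIENT_REGISTRY = {
    "client-42": {"cert_thumbprint": "expected-thumbprint"},
}

def authenticate_token_request(client_id: str, channel_cert_thumbprint: str) -> bool:
    """Mutual-TLS client authentication: look the client up by client_id and
    verify the channel certificate matches the registered one."""
    client = CLIENT_REGISTRY.get(client_id)
    return client is not None and client["cert_thumbprint"] == channel_cert_thumbprint
```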
J
This was done intentionally to allow the trust model to be somewhat open and flexible, because people deploy these sorts of things in a lot of different ways. With that said, there is some metadata for indicating support for this and the nature of it, provided both in the client metadata as well as the authorization server metadata. Sender-constrained, excuse me, mutual TLS sender-constrained access tokens.
J
Following that, to actually use the access tokens, the TLS connection from the client to any protected resource is also mutually authenticated TLS, and the protected resource then matches the certificate from the TLS connection to the certificate hash in the access token itself; if those don't match up, it rejects the request. In order to kind of facilitate this in an interoperable way, at least as interoperable as we can be around standardizing access tokens,
J
there's a new JWT confirmation method, which we call the X.509 certificate SHA-256 thumbprint confirmation method. A little wordy, but x5t#S256 is shorter, sort of. This builds on top of RFC 7800, which defines the sort of overall confirmation method structure, and this is a new method underneath the cnf claim that defines certificate thumbprint confirmation. So a hash of the certificate is basically included in the access token.
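The thumbprint itself is straightforward to compute; a sketch, assuming the DER-encoded certificate bytes are at hand (function names are illustrative):

```python
import base64
import hashlib

def x5t_s256(cert_der: bytes) -> str:
    """x5t#S256: base64url-encoded (no padding) SHA-256 hash of the
    DER-encoded X.509 certificate."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def rs_accepts(channel_cert_der: bytes, token_cnf: dict) -> bool:
    """The protected resource compares the certificate on the TLS connection
    with the thumbprint bound into the access token's cnf claim."""
    return token_cnf.get("x5t#S256") == x5t_s256(channel_cert_der)
```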
J
This defines the syntax of it using the facilities provided in 7800. It's not on the slide, but we were talking about this at the last one: Justin sort of pointed out that there was a lot of talk about using introspection, but there were a lot of assumptions made about how exactly introspection would work in conjunction with just about any holder-of-key mechanism that weren't clearly specified in the document, and so the latest revision calls that out pretty explicitly.
J
It says that the same data as the JWT cnf confirmation claim is returned in the introspection response, and it's the responsibility of the protected resource to actually do the validation of that. This document now requests registration with IANA of cnf, the confirmation claim, as a token introspection response parameter, with just a brief statement that says it has the same semantics and format as the claim of the same name defined in 7800.
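A sketch of that division of responsibility from the RS side (the introspection transport is abstracted behind a callable; field names follow RFC 7662 plus the cnf parameter this draft registers):

```python
def validate_via_introspection(introspect, token: str, channel_thumbprint: str) -> bool:
    """The AS returns cnf in the introspection response; the protected
    resource itself must compare it against the client certificate it saw
    on its own mutual-TLS connection."""
    resp = introspect(token)   # e.g. POST to the AS's introspection endpoint
    if not resp.get("active", False):
        return False
    return resp.get("cnf", {}).get("x5t#S256") == channel_thumbprint
```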
J
That's largely to account for, I don't know, maybe it was an oversight, maybe it was just something we recently realized: you're going to want basically the same kind of data from 7800 both in introspection responses as you would in a JWT. So this draft, it's not retroactive, but it tries to retroactively define the contents of 7800 as being relevant for introspection responses, and then in turn picks up the certificate thumbprint confirmation. And then, I don't know if you can see it, it's small.
J
So, as I was working on this, and just sort of as a proof of concept to myself, I took a couple of COTS products, an authorization server and a resource server, and utilizing what was already in place around mutual TLS support (not specifically for this profile, but just the ability to set up the two products to require mutual TLS from the client) and some existing configuration and customization hooks, I was able to implement support for this profile using existing product functionality. That maybe hasn't been explicitly stated, but it was sort of an implicit requirement: wanting to be able to do this in a way that's easy for products and deployers on a very near-term timescale, not requiring major infrastructural changes.
A
So how many of you have read this, either the latest version or one of the earlier iterations since the last IETF meeting? I see a few hands: two, four, five, six. Okay, yeah, I hope you have not just written it. There's also already Sean.
G
Sean Leonard says: I read the draft; good work. I would like to see more algorithm agility for the confirmation method, i.e., more than just SHA-256. Hard-coding any algorithm is not safe, because once it is cryptanalyzed, implementations will be hard-pressed to move off it. Also, how can the certificate be transmitted with the token if a relying party wants the entire certificate?
J
Agility is implicitly supported by the same mechanism by which we support thumbprint agility for certificates in the JWx series, JWS and JWE, where there's the thumbprint, and new hashing algorithms can be defined through basically a new claim name. The convention this follows is just x5t, which is the X.509 certificate thumbprint, then the pound sign and a shorthand for whatever the algorithm is. So, you know, at whatever time SHA-256 is broken, I hope I'm not standing in front of you guys doing the same song and dance when that happens.
J
But if it does happen, the needed upgrade would be to define a new confirmation method under 7800 that defines the hash algorithm to use, and just say: hey, it's the same thing, but with, I don't know what the latest is, SHA-3 super coolness, with these parameters.
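The naming convention makes that agility mechanical; a sketch (x5t#S256 is the method the draft registers, while the SHA-512 and SHA-3 entries below are hypothetical and would each need their own registration):

```python
import base64
import hashlib

# "x5t#" plus a shorthand for the hash; only x5t#S256 is defined by the draft.
THUMBPRINT_HASHES = {
    "x5t#S256": hashlib.sha256,
    "x5t#S512": hashlib.sha512,      # hypothetical future registration
    "x5t#S3-256": hashlib.sha3_256,  # hypothetical future registration
}

def confirmation(cert_der: bytes, method: str = "x5t#S256") -> dict:
    """Build a cnf claim with the chosen thumbprint confirmation method."""
    digest = THUMBPRINT_HASHES[method](cert_der).digest()
    value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return {"cnf": {method: value}}
```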
L
John Bradley, Yubico. So if somebody registers a new hash algorithm for certificate thumbprints as part of JSON Web Tokens, then that would automatically be usable in the JSON Web Token. Somebody would have to take the extra step of registering it for the introspection endpoint, but in principle this is as agile as JOSE, and we could perhaps put some sentence in there that calls that out. If somebody wanted to send the entire certificate, then somebody could do another extension spec.
L
We talked about doing that amongst the authors, and I think it adds to the confusion to have too many methods. The problem is that if you give the relying party, if you give the resource server, the entire certificate, they think that they should do something with that certificate other than just comparing it, and they're most likely going to get that wrong. So I'm not keen on passing the entire certificate down as part of this spec, but somebody could if they really wanted it.
G
So we're backed up here. Sean said: okay, yeah, another hash algorithm claim name can be defined, but an example would be good, as well as text in the draft that says what you said, to define a new confirmation method, etc. And then he says: yeah, I wanted to see how it interacts with the JWT spec, the JOSE spec, etc. Like, will one upstream registration be sufficient, or will you have to do one specifically for mutual TLS?
J
To the last point, Justin's point: yes, at least for the confirmation method, a new hash algorithm defined as a new confirmation method would flow through to the JWT as well as introspection. If there was a need for new hash algorithms in the JOSE layer of things, around the JWK thumbprint-related pieces or the JWS/JWE thumbprint-related headers, those would be separate registrations. As far as conveying the entire certificate with this method:
J
one reason against it would be that it's potentially very large, and we're working with access tokens here, which are space-constrained to some extent. All the parties involved here have access to the full certificate through the mutual TLS channel, so there's really just the need to have a secure pointer to that in the token itself. And finally, this only provides recommendations on how you might convey the certificate information between the authorization server and the resource server; how they actually do that is always at the discretion of the two parties involved.
M
Torsten Lodderstedt, YES Europe. I would like to add that, as far as I remember, we chose, or we decided, to use the fingerprint because we just want to give the RS the option to verify that the sender of the access token is actually in possession of the certificate. So the sender actually proves that it really is in possession of the certificate, and the certificate validation is up to the TLS stack. Yes.
M
I got that feedback during the security workshop as well, from two implementers.
O
… from Google. I have a question: did you consider using a hash of the SPKI, the SubjectPublicKeyInfo, instead of the whole certificate, like SPKI certificate pinning uses? It's a smaller pattern, and it allows the hash to stay the same after you reissue the certificate with a different expiration date and stuff like that.
J
We considered a lot of different options. The hash of the full certificate seemed like the most straightforward. If, at whatever time, a client has to obtain new certificates, they basically all need to obtain new access tokens, which didn't seem like a particularly burdensome thing, since OAuth clients are typically in a position to obtain new access tokens through the normal flows anyway. So it's possible, but the certificate hash seemed like the most straightforward, the easiest to implement in an interoperable fashion.
A
Because if you hash the whole certificate, you obviously also include, for example, the subject field, sure. And if you otherwise reissue, use the same key for multiple different subjects... I don't know, why would you want to use the same key if you really reissue the same certificate? Do you have some cases where this actually happens and where it matters?
G
So, Justin said that was okay as an answer. Sean says some method of retrieval would be helpful: if it's not in the token, then it has to be stored and referenced, which has downsides, and if the certificate is available to all, he would like to see text that describes how, or at least the fact that all parties have access to the certificate itself.
P
A couple of cases where an SPKI has multiple issuers can occur if you're running bridges, for instance; as the patron saint of legacy technology today, I'll talk about PKI bridges. And also if you're doing short-term certificates, reissuing a cert every five days or ninety days, like Let's Encrypt does. I know Let's Encrypt doesn't apply here today, but it might, and you wouldn't want to have to reissue your access tokens just because you've essentially just refreshed the timer.
L
John Bradley, Yubico, again. I would take the counter-argument that access tokens should probably have around a five-minute-to-an-hour sort of lifetime, and what clients do is refresh them, going back to the authorization server. So you rotate the certificate, you then make a call to the token endpoint with your new certificate, and you get a new access token. It's adding extra complexity to have the client try and parse the SPKI out of the certificate.
G
Sean Leonard says: I disagree with hashing the SPKI. The relevant security token is the certificate, not the public key. The public key can be extracted from the certificate, not the other way around. This also assumes that the cert is available to all parties. I agree with John Bradley just now.
L
So if we wanted another thumbprint, then probably, going back to John's point, we'd actually be doing a SHA-3 thumbprint, which nobody's going to support for some decades. But I would rather have Mike register a new thumbprint, since he did the SHA-256 one; we could do a SHA-3 thumbprint that would be ignored but could be used as an example.
A
Okay, so there were a couple of reviewers already, and the major theme seemed to be that a few clarifications are needed; nothing major, but still an update may be appropriate. Maybe you even get to that during this week, but I think starting a working group last call doesn't seem to be too far-fetched. Do we have a few reviewers who could do a review during the working group last call? Torsten? That's cool.
M
Fine. So, hi, this is Torsten from YES Europe, with John Bradley from Yubico; we're going to present the current status of the OAuth security topics BCP. We've been working several months on the topic of coming up with a best current practice for OAuth security, which in the end is an evolution of the security threat model defined in RFC 6819 and the security considerations sections of RFCs 6749 and 6750.
M
Before Chicago, we more or less concentrated on how to protect, on how to secure, the redirect-based flows in OAuth: implicit and authorization code grant. We came up with some recommendations that we presented last time, in Chicago. Since then we have worked on, or focused on, a different topic which we are going to present today, which is access token leakage, and the different ways access tokens can leak at resource servers.
M
Today we will be focusing on one specific case, which is access token phishing by a fake resource server. This scenario, this threat angle, is really emerging, because a lot of people are starting to use OAuth in the context of standard APIs, and in this context the basic assumption is that the trust model underneath OAuth, as it has been defined in RFC 6819, is enlarged, or it no longer holds.
M
So what's the setup? The setup is: a client wants to access a resource server which exposes a certain standard API, for example for accessing email or calendar data; the JMAP working group, for example, is working on this kind of stuff, and it has also been mentioned that there is work on open banking APIs across Europe. So in those cases, clients are going to connect to resource servers that they haven't seen before, which means the relationship is being established at runtime.
M
Electronic signatures, and so on; these kinds of APIs are emerging all over the place, and we have to make sure that the security of those use cases and those APIs is ensured. So our assumption is that the client is being configured with the RS's URL at runtime, which in the end means that the trust model is going to change. In the past, the developer got to know the URLs of the resource server and the AS up front, during development time; now this is going to change. What does this mean?
M
What if the resource server is a bad guy, and the resource server takes the access token, turns around, and just impersonates the client and accesses the legitimate resource server on behalf of the user? That is perfectly possible, because in the end the client doesn't know whether the resource server it is going to talk to is really the legitimate resource server for this particular user and this kind of API. So what are our options?
M
We have sketched out three different kinds, or classes, or categories, of countermeasures, and what we would like to do is present those different categories of countermeasures to you. Some of them, or most of them, are familiar to the working group members, and we would like to gather your feedback: what do you think should be the best way forward, what should the working group recommend, what should be documented in the BCP? So, one option would be, yeah:
M
let's work with metadata; let's tell the client where it is safe to use access tokens. This is more or less consistent with the way cookies are typically handled today, because the browser typically only sends a cookie to the origin of the cookie and nowhere else. The AS could publish some information about which resource servers the AS knows and where it is safe to send the access token. But the downside is...
M
So, in the end, what we can do is restrict the access token, the scope of the access token, to a certain audience, a resource server. And in order to cope with that particular threat, we must use an audience that is bound to the transport; we can't use a logical audience, for just a simple reason: the resource server, for example using the HTTP response with status 401 as defined in RFC 6750, can just tell the client: oh, please mint.
M
Mint me an access token that is good for the calendar, for the calendar server. So this doesn't give any indication which server the client is about to talk to, right? That's why we think a good way would be to use the physical URL that the client is sending the access token to as the audience. Alternatively, and this has been pointed out at the workshop in Zurich, we could also use a fingerprint of the TLS certificate, which shares some merits.
L
So there are some objections that have been raised to this in the past (hi, Justin), where the resource server may actually be split across multiple domains. So there are some edge cases where some people want an access token that would work across a virtual resource server that has multiple routes, so that would have to be something that we would address if we went down this road. Yes, yes, you could get two tokens.
M
But I would suggest: let's defer this one; let's first proceed with our presentation. We will come back to a summary slide where all the countermeasures are listed, which should serve as a foundation for a discussion. Okay, so let's assume for the moment that the client tells the AS where it's going to use the access token. There is a need for a certain parameter to tell the AS; resource indicators would be one potential option. And then an access token is created which is bound to this particular URL.
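A sketch of that flow (all names are illustrative; `resource` stands in for a resource-indicator-style parameter): the AS binds the token to the concrete URL the client says it will call, and a legitimate RS accepts only tokens whose audience is itself.

```python
KNOWN_RESOURCES = {"https://calendar.example.com/api"}  # resources this AS knows

def mint_audience_restricted_token(resource: str) -> dict:
    """Bind the access token to the physical URL the client will send it to."""
    if resource not in KNOWN_RESOURCES:
        raise ValueError("unknown resource server")
    return {"aud": resource, "scope": "calendar"}

def rs_accepts(token: dict, my_url: str) -> bool:
    """A phished token replayed by a fake RS carries the wrong audience,
    so the legitimate RS rejects it."""
    return token.get("aud") == my_url
```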
M
Okay, as I said, there are two different options; I don't want to dig into the details. We can use the URL, which allows us to have a really fine-grained audience, even for a multi-tenant system and so on. The TLS fingerprint would have the advantage that, if someone is able to obtain a certificate for the domain of the alleged RS, that would be discovered as well, because the access token is bound to the concrete public key. That's why I think it's worth considering.
M
Okay, could you please take note, because this is the kind of question.
M
So the third option, and then we can go into the discussion, is proof of possession. That's something we have been working on in this working group for the last couple of years. You could also call it sender-constrained access tokens; there are different terms, and holder-of-key is another one. But in the end the idea is that the client somehow has key material and is able to demonstrate possession of this key material, and the AS somehow associates the public key with the token.
L
Sure, this is essentially the presentation that Brian just gave us, one of the ways that one could do this. We're also working on a way of doing it with token binding, but essentially the client obtains a certificate, or contains a token proving it controls a given certificate, gets an access token back, and again makes a TLS connection, or a connection which has some sort of application-level signing, to resource server X.
L
If you're doing application-level signing, you have to audience-restrict whatever it is you've signed, obviously, and then, when the bad RS goes to replay it at the good RS, it doesn't have the private key necessary. In a sense, the audience restriction restricted it: the application-layer token that you've created, if you're doing it that way, would have the wrong audience, which is similar to our first proposal; resource server one is going to reject that access token.
M
Okay, so thank you, John. So, the different proposals on the table: Brian just presented the mutual TLS stuff; we also have token binding, which is on the way; both belong to the same camp, which is that the token is somehow bound to the TLS connection. And there are two proposals on the table at the application layer, which are the signed request proposal by Justin and the JPOP proposal by Nat and John, I think, right?
M
They all have different characteristics, properties, and so on. In the end, that defines the question, and that's why I want to kick-start the discussion: what should the BCP recommend? I mean, there are a lot of different options, and what I'm hearing from people outside of the working group is that there are much too many options, right? So what are we going to suggest or recommend to people? So here I just gave an overview of all the options which I presented, and also something else.
W
Annabelle Backman, Amazon. Regarding the audience restriction, two comments. First, I echo Justin's concern: we do have use cases where not only do we need to use access tokens with multiple different resource servers, we also do not know, at the time the access token is requested, what the precise origins of those resource servers are. So binding that token to the audience of the resource server would be a problem for us. And again, to echo Dirk, the TLS server certificate problem...
M
Why do you need to use the access token at multiple different resources? Why is there a need? I would like to understand that, because...
I
Dirk Balfanz, Google. So it seems to me that the two mechanisms here, the two classes of mechanisms, actually address different threat models.
I
So, on the proof-of-possession things, since you were asking, like, oh, there are too many of them, how do we explain which ones to use: if there is a legitimate reason to also address the other threat model, the wrong receiver getting the token, then I think that would necessitate the audience restriction mechanisms; and the threat model that is about preventing the wrong client from using a token, I would think proof of possession addresses that. Thanks.
I
That would be dead easy, right? This is Dirk again. Right, I agree. I just found myself sort of having to think quite a bit more about why. Like, I feel like there's a mechanism that does the audience restriction, but really what we want is to make sure that the wrong client isn't getting its hands on the token and using the token. I found myself thinking much more about why A implies B. Why?
L
If the bad guy can completely hijack the rest of the session, they can, you know, they've already created the wrong client and got the user to log in, to whatever application it is, at their bad resource server. So there's nothing that we can do at this level of the protocol that would actually stop that, because it's just a legitimate OAuth client at that point.
M
So I think in certain profiles of OAuth you could make, you can ensure, that the AS and the client point only to trustworthy resources, but it depends on the concrete profile. For example, in the Mobile Connect use case there is another trust anchor, which is a discovery service. But we do not know how other profiles are going to use OAuth, so our assumption is that the OAuth client can be tricked into accessing a URL which it doesn't know anything about, right?
F
I just checked the record: I've been preaching for audience restriction back in 2011. I thought that was a very good solution, yeah, but actually, since then, I have kind of changed my position, because that one is simple and good but can't solve the problem when one legitimate, listed resource server is actually compromised. That's...
M
I think there are some other merits, or pros, associated with audience restriction that are not bound to this threat, right? In deployments I have seen in the past, the audience restriction serves the purpose of giving the AS the chance to issue access tokens which only contain data that the RS is entitled to see, which is kind of a privacy property.
M
But yes, your comment was completely right: the client has to be aware of that fact. So how do you get two different access tokens? In my former life at Deutsche Telekom we used refresh tokens as the anchor and then fetched different access tokens for different resource servers based on that refresh token. It's something that's not specified anywhere, and it warrants a spec, but it can be done within the boundaries of OAuth.
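The pattern just described, using the refresh token as the anchor and fetching a separately audience-restricted access token per resource server, can be sketched as follows. The `resource` parameter is borrowed from the resource-indicators draft mentioned later in the session, and all concrete values are made up:

```python
from urllib.parse import urlencode

def refresh_for_resource(refresh_token, client_id, resource):
    """Build an RFC 6749 refresh-grant request body asking the AS for an
    access token restricted to one resource server. The 'resource'
    parameter follows the resource-indicators draft; values are illustrative."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "resource": resource,  # which RS this access token is meant for
    })

# One refresh token as anchor, two differently restricted access tokens.
cal_body = refresh_for_resource("rt-123", "app-1", "https://calendar.example.com/")
mail_body = refresh_for_resource("rt-123", "app-1", "https://mail.example.com/")
```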
L
There are other reasons for having the resource indicator. There may be different token types at different resource servers, or different claims that you want to put in the access tokens. So people have added this for other reasons; it happens to also provide a way to do audience restriction, which is useful here, but it has other useful properties, and you may want to do it even aside from solving this token theft problem.
L
Audience restriction doesn't solve the whole problem, though: the token can leak at the authorization server through some misconfiguration or through an intermediate box, or be exfiltrated from the client or intercepted in other places. It stops the bad resource server from tricking the client into creating a token for it, but it doesn't solve as many of the possible threats as proof of possession does.
J
I think the leftmost option, the AS publishing the legitimate RSs, isn't really a solution, to your point. It puts a lot of requirements on the client, it doesn't solve the problem of replay between the legitimately listed RSs, and it puts a lot of burden on the client that probably won't be taken on.
J
On the audience restriction stuff: there are benefits to indicating where you're going to use the token, as you have mentioned, potentially varying the content of the access token, more or less customizing the access token for the particular resource, as well as audience-restricting it. That was the intent of the resource indicators draft. I'm sorry for the bad name, but that was the name of the draft about a year and a half ago; it was to facilitate both of those things.
J
So that might be difficult. I think the same problems would exist here, and you get into difficulties of interplay between the authorization endpoint and the token endpoint when using that parameter. I was very interested in the idea of using the TLS server cert, because it sort of immediately takes away all those concerns, but apparently there are deployments where the variance of the cert itself raises other concerns.
J
So maybe that's not viable. Proof of possession solves the problem, as many have stated, as well as other potential problems like token linkage. Mutual TLS, which I just talked about, solves it really well, but it's a niche-deployment kind of thing. I don't think it's widely applicable to the broader ecosystem of OAuth. It's going to be for things like open banking, where there are big-money interests and regulatory requirements around the ecosystem.
J
Deploying mutual TLS is a huge pain in the ass. Under certain constraints or requirements it can be done, and it solves a lot of problems in those cases, but because of the pain-in-the-ass factor I don't think it's going to be useful for anything more than specific niche deployments. Token binding I think is great, but it's very early, and it remains to be seen whether it'll be widely available and implementable. And application-level signing seems really simple.
J
When you start to think about it and have a solution in mind, it turns out it's really, really hard to get right and to do in a broad, interoperable manner that actually addresses all of these problems. I know Justin will say, no, no, we have it, it works; but it works under the way that he thinks about it and wants it to work. Making sure that the right parameters are included, what's signed and what's not signed, does it protect against this particular threat, what about other threats?
J
It's really hard. I've personally gone through that moment of being up in the night thinking, oh, I could invent an application signing protocol that would solve all these problems, and after thinking about it, it's not that easy; it's very difficult. So that's my diatribe; I don't know what to tell you.
A
Your description sounded a little bit like maybe the combination of audience restriction and proof of possession, depending on their pros and cons in different deployments: in some deployments it may be feasible to just use proof of possession, maybe that will be the easier driver in the future, while some others can stand with an audience restriction. Did I hear that right?
J
I don't know that I was necessarily saying that. I think audience restriction, or something to enable audience restriction, is potentially something that could be rolled out sooner to a broader audience, because it's sort of just adding an additional parameter. It would still have to be standardized and deployed, but it's more likely to take place. It turns out it's also, I think, harder to standardize than we think it is.
L
Yeah, it may be that there isn't one perfect solution, because OAuth is a toolkit, and the right thing to do for a native app may not be the right thing to do for a server application. If doing mutual TLS is a pain in the ass on the server, it's a pain in both asses in a native app. Yes, a pain in everyone's ass.
A
What I heard so far, and I'm sure we'll have to continue that discussion on the mailing list, so this is a starting point: I heard at least some things that we could get rid of, particularly the AS publishing legitimate RSs; I think there was no one arguing for that one. Which is at least something, and I also heard support for the audience restriction.
L
My recollection was that this proposal generated a lot of excitement in the group. Yes; so we think the problem is that application-level signing is hard. The fact that nobody wants to deal with it isn't necessarily a reflection on one of the proposals being better than the other, but...
J
So I just wanted to clarify: I hope it's not on the table to take those off, but I think they're maybe not sufficient for more immediate needs, or not wide enough, because mutual TLS is sort of niche and token binding is at least some period of time in the future, though it does solve a lot of problems. But if the group feels like something is needed now, some kind of audience restriction I guess is the way to go.
M
Okay, so as already pointed out, audience restriction, for example, doesn't work for replay at a compromised RS. There are other related topics we will take a look into, and in the end we'll try to come up with a set of recommendations that cover all those different threats in the document. So hopefully in Singapore we will see an update which covers all those topics. Thank you very much.
X
Users should have context for the authorization request. So rather than jamming ten scopes at the user at once, you should be presenting the authorization request for just the scopes you need, when you need them. For example, granting the calendar scope makes sense in the context of interacting with a calendar-related feature; don't bundle it in with something else.
X
That's fine; I'm just missing a good image that makes my point, but you can imagine an OAuth consent screen that has like ten scopes on it, from calendar to contacts to YouTube, the sort of kitchen sink of scopes.
X
We do think that's a bad user experience, to have all of that at once. And so the definition of incremental authorization, then, is the ability to request additional scopes in subsequent requests, rather than front-loading everything, and, importantly, in a way that results in a single authorization grant that represents all the scopes granted so far. The latter part is important; without it, it's not really incremental, it's just a whole series of separate requests, each with their own scopes.
X
So a typical implementation of incremental auth would just display the new scopes. If the user previously granted scope A and the app then requests scope B, the authorization server would only show consent for scope B, and of course a single refresh token would be issued for the union of all the scopes granted.
X
Now, this is probably not a completely new concept. OAuth doesn't actually stop you returning an authorization grant with increased scope, and a lot of people have used that to implement incremental auth: they increase the scope where they know it's okay to do so, in the sense that the user already granted it previously. So it's sort of okay to bundle in an additional scope on subsequent requests; but that really only works for confidential clients.
X
If the client is confidential, that's probably a safe technique to use, although it's undocumented; for public clients, native apps in particular, not so much. As far as Google is concerned, this is definitely something we've been promoting; in fact, recently we've actually started requiring it. This is an excerpt from the user data policy that basically just says that.
X
And in addition to defining the profile and protocol for public clients, we can also take this opportunity to sort of formalize how it should work for confidential clients. Just because everyone has sort of worked out how to do it doesn't mean that it isn't valuable to have some normative text on how you should do it, some security considerations, and so on.
X
So the main part of the spec, I think, is the native apps part, but let's bundle in the confidential clients and document that as well; that was the idea. So how does this incremental auth for native apps work? We added a new token endpoint parameter, used during the exchange of the authorization code, called existing_grant.
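A token request using this new parameter might look like the following sketch; `existing_grant` is the parameter name from the draft being presented, while the endpoint semantics and all concrete values here are illustrative:

```python
from urllib.parse import urlencode

def build_code_exchange(code, client_id, redirect_uri, existing_refresh_token=None):
    """Build a token-endpoint request body for an authorization-code
    exchange. If the app already holds a refresh token from an earlier,
    smaller-scope grant, it passes it as 'existing_grant' so the AS can
    merge the new scope into the existing grant (per the draft)."""
    params = {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "redirect_uri": redirect_uri,
    }
    if existing_refresh_token is not None:
        params["existing_grant"] = existing_refresh_token
    return urlencode(params)

body = build_code_exchange("ac-2", "app-1", "com.example.app:/oauth2", "rt-old")
```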
X
So this is the sequence diagram. You can see the initial authorization request, where we request scope A; we return the authorization code, do a token exchange, and get back the access token and refresh token. Sometime later we decide, maybe because we're presenting the calendar feature, that we need to incrementally request the calendar permission. So in the subsequent request, and this can be repeated as many times as we need, we do a fresh request for the new scope.
M
Isn't that the assumption?
X
No, the issue is that because it's a public client, because this is a native app, there's a potential for client impersonation. So while we may know that a user has granted scope A to an app that claimed to be that app, we don't necessarily know it's exactly the same client, because it could be impersonated.
X
Actually, I wouldn't expect that. I have a slide on that, so we can discuss it; but think of this as a proof, proof that you already obtained the grant. Maybe I should go over some of the alternative designs, because I do actually talk about that, if this question is about that.
L
I just want to say that I have seen native apps use this sticky-grant pattern, some of them large apps, who wound up having app impersonation used to do privilege escalation later. If you make the assumption that everybody who's installed your app gets the union of all permissions that have been asked for by any instance of that app, bad things start happening, which is why this would be an improvement. Yes.
X
The user approves that; it turns out the client is actually counterfeiting the legitimate mail client, and then they actually get more scope than what the user approved. Whereas if you force them to request the mail scope themselves, the user would hopefully see, say, this game requiring email access, choose wisely, and hopefully deny it. That, I think, is the privilege escalation attack that John was referring to.
M
As more clarification, Torsten speaking: I know about the threat; my question was related to the expected user experience. In the end, if the user already has the app installed and already granted A, can't the AS, in the subsequent authorization request, indicate to the user, "you already granted A to this client; now it's asking for another permission"? As it stands, it looks just like the client had asked for B only. That's what I would like to see.
X
Potentially, yeah. I guess in the case of Google's authorization server, the way we display that UI, it wasn't a consideration, but maybe we can add something. Okay. So as far as the implementation details are concerned: obviously, don't just union any two grants and stick them together; we have a few checks in there. You make sure the existing grant is valid in its own right, so unexpired and unrevoked, and the client IDs of the two grants should match. Those are the two musts.
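These checks, plus the user-match check the presenter adds a bit later in the session, can be sketched as AS-side validation before merging grants; the grant-record shape here is hypothetical, not from any spec:

```python
import time

def can_merge(existing, new, now=None):
    """AS-side check before merging an 'existing_grant' with a newly
    authorized grant, per the checks named in the discussion: the old
    grant must be unexpired and unrevoked, and the client and user must
    match. The grant-record fields are illustrative."""
    now = time.time() if now is None else now
    if existing["revoked"] or existing["expires_at"] <= now:
        return False          # existing grant must be valid in its own right
    if existing["client_id"] != new["client_id"]:
        return False          # client IDs of the two grants must match
    if existing["user_id"] != new["user_id"]:
        return False          # and they must belong to the same user
    return True

old = {"revoked": False, "expires_at": 2_000_000_000.0,
       "client_id": "app-1", "user_id": "u1", "scopes": {"calendar"}}
new = {"client_id": "app-1", "user_id": "u1", "scopes": {"mail"}}
merged = old["scopes"] | new["scopes"] if can_merge(old, new) else new["scopes"]
```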
X
Yeah, so the theory being, of course, that the client is already in possession of that grant and can use it: they've already got permission for the first scope from the refresh token, and they've already got permission for the second scope from the authorization code, so it's sort of a convenience, in a way, that you union them.
X
The unfortunate drawback, of course, is putting that token in a GET request, which we never do. The second alternative is kind of similar, in a way; it's just using an access token as proof of that existing grant. The benefit is that it could apply to more than just the code flow with the refresh token, but the drawback is that it's more susceptible to attack, as was covered in the previous talk.
X
These sort of access tokens get passed around everywhere, potentially, whereas the nice thing about the refresh token, as was said, is that it only ever gets sent to the token endpoint, and only ever on the back channel, and we're not altering that property here, which I think is nice. I guess if you want to work on this, we could think of some other ways to solve the UI question, absolutely. All right, so some fun news: running code.
X
We actually already implemented this; it's already live on Google's OAuth server, and I have a proof-of-concept client you can try out today. It's open source, based on the AppAuth library, and I don't know if we can do a YouTube video, but it's possible to click that link. This has never been done before, I think.
X
Here are the stills, if that was a bit too fast.
Y
This is Jeffrey Gaskin. I saw a second request to choose which account; is that actually redundant? Could you avoid it, or is there some reason you have to show it?
X
Good, you picked up on that; I was hoping someone would notice. I think that's a little bit of jank in the demo. Ideally, since we support OpenID Connect, which has a parameter called login_hint, you could actually pass in the hint of the user in that second request to avoid that experience. Other implementations of OAuth which don't support multiple logged-in users are probably not going to see that problem at all, because there's only the concept of one user. So I hope we'll fix that jank, and I don't think it should be a concern for a lot of other people. Anyway, the login_hint param can solve that problem.
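Passing the hint in the follow-up authorization request can be sketched like this; `login_hint` and `scope` are standard OpenID Connect / OAuth request parameters, while the endpoint, client, and account values are illustrative:

```python
from urllib.parse import urlencode

def incremental_auth_url(client_id, redirect_uri, new_scope, user_hint=None):
    """Build the follow-up authorization request for one additional
    scope. 'login_hint' (OpenID Connect) pre-selects the account so a
    multi-account AS can skip the account chooser."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": new_scope,           # only the newly needed scope
    }
    if user_hint:
        params["login_hint"] = user_hint
    # The authorization endpoint URL is made up for illustration.
    return "https://as.example.com/authorize?" + urlencode(params)

url = incremental_auth_url("app-1", "com.example.app:/oauth2",
                           "calendar", "user@example.com")
```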
X
Actually, that is one of the things we check; I probably should amend that slide. So it's the client ID as well as the user, plus the fact that the previous grant has to be valid. If anything is different, a different user, a different client, anything like that, you definitely won't get a combined token; that would just be weird.
F
The first question is: why don't you just use the refresh token or something like that? Because you're just sending in the refresh token, right. The other question is that this is really good from the point of view of collection minimization, you just ask for the scopes you need, right; but at the same time it's conceivable that an attacker can start asking for a grant of a small scope here and there, so that the user doesn't notice that he's actually being profiled the entire time. That's a typical spying strategy, by the way.
X
If we look at your sort of attack there, where you nibble and get one scope here, one scope there, you don't really need incremental auth to do that. If you really wanted, you could just do ten authorization requests and have ten grants, and frankly, if you're mounting an attack, that's not a particularly large barrier. The goal of this incremental auth is more of a developer-friendliness thing.
Q
Lucy Lynch, and I have privacy concerns about this. It appears from your presentation that this is additive, that it's not particularly transparent to the end user that it's additive, and that in order to revoke any one of these things, you would revoke the entire set of permissions and start over as an end user. So say I give you permission to access my calendar once, but I want to revoke that, and I've already given permission for mail or something else, right?
U
Dick Hardt. There were comments about showing what's already been granted versus showing what's added; I think showing all of the things is overloading it, but saying "here's what you've already granted, and here are the new things" would be right. I also have a little nit on terminology: "auth" is ambiguous.
L
So this is really more about being able to discriminate instances of apps, so that you're providing the right incremental, or decremental, grants to them. I'm happy to work with you on coming up with a way of using a hash of the refresh token, or the session information from OpenID Connect; there are a couple of different ways that you could indicate in the authorization request which instance of the client it is, so that you could provide a more fulsome grouping of information.
L
"You've already granted these things to this app; do you want to add this one?" And potentially, when you go to revoke tokens, you're now going to have different grants for different instances of apps, so you're going to have to look at whether you want to revoke the access for this app on your iPad versus your Android phone, and so on. So there are a few other things around this that probably need to be fleshed out; happy to work on it. Great.
X
Thank you, I look forward to hearing the ideas. I think this is another area that's a bit underspecified in OAuth: with a native app client you can potentially have two streams of authorization, and with incremental auth you can actually have two streams of that. Like you said, the iPad app and the phone app: one could have the calendar permission, one maybe doesn't, and I think that again is a positive thing.
M
I really like your proposal; it solves a practical problem that we also saw, along those lines, at Deutsche Telekom. But along the lines of what the speakers before me said, I think the AS needs to have a complete picture of the authorizations of the client, and it should be able to display all the scopes that have already been granted.
M
In the end it's at the discretion of the AS how to really show this information, but it should be possible. I'm also not quite sure the refresh token should be the anchor in the object model; what one could also do is have a refresh token, or multiple refresh tokens, referring to the same authorization grant, and so on.
G
David Tong asks: should a hint not be sent in the authorization request that the client is going to apply incremental auth? This would allow the AS to customize the consent screen. I agree with not sending a token in the authorization request, but maybe a flag should be sent.
A
We should also cut this off now. So, how many of you have read this document, the current version? Who read it? Not many; three, yeah, three people so far.
A
If you don't have code running on multiple platforms, you know, forget it. No, just kidding; thanks, thanks William. We support our users on...
R
Subsequent to that, there were a number of discussions, mostly in the SecEvent working group, that led to discussions of explicit typing for the security event token. But really that would be just an example of explicit typing for a JWT, and so we turned the crank one more time before we got to Prague and defined how to provide explicit typing for any kind of JWT.
R
The structure of the document is, I think, borrowed a bit from the TLS BCP, and I like the way that Yaron did that: there are descriptions of threats and vulnerabilities in one major section, with subsections describing all the problems, and then a following section with subsections defining a bunch of best practices, and there are links from the problems to the solutions, or the mitigations. It's often one-to-many: there might be multiple ways that you would mitigate a particular threat.
R
First and foremost, we want the document to provide actionable information promoting the secure implementation and deployment of JWTs, and we want the guidance to be applicable to a broad and growing set of use cases. At the same time, we recognize that not all deployments will choose to do new things, and in fact not all deployments are going to need to change.
R
But
we
try
to
do
an
analysis
of
under
what
circumstances
it
would
be
beneficial
to
change
and
in
what
ways
that
said,
we're
also
trying
to
describe
ways
to
keep
existing
deployments
secure,
even
if
the
scope
of
what
those
deployments
are
doing
might
increase.
So,
for
instance,
if
you
have
an
issuer
that
starts
issuing
an
additional
kind
of
job.
R
How
do
you
prevent
an
attacker
from
capturing
that
initial
job,
which
was
legitimately
issued
and
trying
to
pan
it
off
as
something
that
it's
not
get
unintended
consequences
and
we
try
to
recognize
the
costs
of
changing
and
hardening
existing
deployments
and
there's
a
bunch
of
practical
trade-offs
like
this?
That
I
think
one
of
the
productive
next
steps
is,
for
people
start
to
start
to
just
to
discuss
those
and
look
at
what
we
want
to
do
in
some
specific
cases.
R
So one example trade-off that's probably gotten the most discussion, primarily again in the Security Events working group, is this notion of adding explicit typing. Now, in fact, there's already a "typ" header parameter which could always have been used for explicit typing, but it's largely not used in deployments that I'm aware of. So, after discussing a number of alternatives,
R
at least the three of us thought: let's just recommend doing that. It's straightforward; you don't have to define any new parameter, and the way that it works, because it's a media type, is that you define an application/ media type, for instance secevent+jwt, and you'll see an example of that in the SET spec, which is referenced from the current BCP draft, of using that approach to type future JWTs going forward.
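Explicit typing as described here just means setting the existing `typ` JOSE header to the new media type; RFC 7515 allows the `application/` prefix to be dropped when the value has no other slash. A minimal sketch of building such a protected header:

```python
import base64
import json

def jose_header(alg, typ):
    """Build and base64url-encode (unpadded) a JOSE protected header
    with an explicit 'typ' value, as recommended by the BCP draft."""
    header = {"alg": alg, "typ": typ}
    raw = json.dumps(header, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# 'secevent+jwt' is the SET media type mentioned in the session;
# the algorithm choice here is just an example.
encoded = jose_header("ES256", "secevent+jwt")
```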
R
Dick is smiling because I forgot in one place to delete "already secure", because we had an agreement not to talk about it that way. My bad; you're right to laugh at me. But part of the trade-offs, too, and this was always present in the JWT design, is that it's designed to be used in some space-constrained environments, including in-browser URL contexts as query or fragment parameters, where space is not free and increasing the size of something may actually break the deployment.
J
There was some commentary on the BCP that was not responded to, and as I read through it, not being a cryptographer myself, I started to wonder whether these are actionable, real things that need to be considered, or whether they're just kind of fun, and it was really hard to tell. One of the ones that specifically comes to mind: people have criticized the potential use of compression inside a JWE as being a leak vector.
Z
So, on the topic of compression: it's not necessarily fatal, but compression obviously leaks information, because the length tells you something about the content, as does the dictionary, and there have been a bunch of papers on this.
Z
Some are in the adaptive setting, like CRIME, where the attacker gets to control some of the data but doesn't control all of it; they basically use the data they send in to seed the dictionary, and then, because the system is co-compressing the attacker's data and the victim's data, they get some information about the victim's data. There have also been attacks in the static setting.
Z
There was a paper out of, I want to say, UNC a few years ago showing you could use packet lengths to measure what people are saying in variable-bitrate voice codecs. So it's not necessarily the case that you could never use compression with encryption; it's just super bad news if you get it wrong. So I think the appropriate guidance here would be either something at the level of "you never do it", or "only do it if...".
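The CRIME-style length side channel just described can be demonstrated in a few lines: compress attacker-controlled input together with a secret, and the compressed length reveals whether the attacker's guess matches. The secret and probe values are made up for illustration:

```python
import zlib

SECRET = b"sessionid=7f3a9b2c"  # victim data the attacker wants to learn

def compressed_len(attacker_probe):
    """Length oracle: attacker data and victim data are compressed
    together, as in CRIME; DEFLATE back-references shared substrings,
    so a correct guess yields a shorter output."""
    return len(zlib.compress(attacker_probe + SECRET))

# A probe matching the secret compresses better than a wrong one,
# because the whole secret becomes a single back-reference.
right = compressed_len(b"sessionid=7f3a9b2c")
wrong = compressed_len(b"sessionid=x1y2z3w4")
```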
A
To probably wrap up this session, I would like to get a sense from the folks in this room of whether they believe we should do this work in this group, or whether you are opposed to that work. So hum now if you think we should adopt this work as it was presented; and please hum if you are opposed. Okay. Yes, I will talk to EKR about whether he sees this as also within scope; when we talked about this, it was with Kathleen and me.