From YouTube: IETF101-OAUTH-20180321-1330
Description
OAUTH meeting session at IETF101
2018/03/21 1330
https://datatracker.ietf.org/meeting/101/proceedings/
A
B
I remember that we're still under the Note Well, so if you haven't seen that, please remember it. We will just maybe present that Note Well here next.
Let's go to the agenda now. So, we have a Jabber scribe and a minutes taker; thank you to Justin and Mister nedelin. Thank you, and... yep, Tony. Sorry.
B
So, Mike will... they will have to just talk about today. Alma will later join us remotely to talk about his new idea for a client assertion flow. We will have a distributed OAuth discussion, so we have maybe two slides and then kind of a discussion; and Torsten had one more document that he wants to discuss.
B
This is something that he just submitted a few days ago, and we have yet another item that we're gonna allocate maybe five to ten minutes to, so we'll probably take those 5-10 minutes from the distributed OAuth slot, and we might switch that around a little bit. Maybe you can take this to the end and just take the rest of that time for the distributed OAuth discussion; so, again, we kind of shuffle it if we need to, yeah. Okay, awesome, so the floor is yours. Thanks.
D
D
And at this point all the IESG positions are either Yes or No Objection, meaning that it is procedurally ready to go to the RFC Editor, and it is now in follow-up state, making me wish we had our AD in the room. Do we know if EKR will be here? Okay, so I want to revisit this for 30 seconds when EKR gets here, oh god, for status.
D
It turns out that Adam Roach, one of the area directors in the applications area, pointed out that that actually violates URL policy, and that the only carve-out for our specifications to use the URL space is with .well-known, and with .well-known at the top level. And so once he brought this up, I had like four or five area directors agreeing with his comment (his DISCUSS, actually), and he suggested that we instead insert .well-known/oauth-authorization-server at the top level, with the path appended if there is one. This is not quite as easy, but it does work.
D
Ironically, the nail in the coffin of this was that there was an issue in the OpenID Connect tracker from five years ago, that Nat filed and John discussed, where John in fact suggested: well, maybe we could do this. But the working group decided no, that seems harder, so we didn't do it. But this change was made. It is a breaking normative change to the specification, and that's what it was going to take to get it through the IESG, as I kind of just described verbally.
D
And there is a note paragraph in the draft now saying "note that this is different than the string processing rules for OpenID Connect Discovery" and talking about this, so that developers using the spec will be aware of it. But it means that OpenID Connect metadata documents are potentially in a different location from generic OAuth metadata documents. If the issuer doesn't have a path component, if it's just a domain, it turns out that the two rules are equivalent, which is the common case.
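The two construction rules being contrasted here can be sketched as follows. The helper names are hypothetical, but the behavior (inserting the well-known segment between host and path, versus appending it after the path) follows the rules as described, and shows why the results coincide when the issuer has no path component:

```python
from urllib.parse import urlsplit

def oauth_metadata_url(issuer: str) -> str:
    # OAuth metadata rule: insert the well-known segment between the
    # host and the issuer's path component.
    parts = urlsplit(issuer)
    path = parts.path.rstrip("/")
    return f"{parts.scheme}://{parts.netloc}/.well-known/oauth-authorization-server{path}"

def oidc_discovery_url(issuer: str) -> str:
    # OpenID Connect Discovery rule: append the well-known segment
    # after the issuer's path component.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# No path component (the common case): both rules put the document
# directly under the host.
print(oauth_metadata_url("https://example.com"))
# With a path component, the two documents land in different places.
print(oauth_metadata_url("https://example.com/tenant1"))
print(oidc_discovery_url("https://example.com/tenant1"))
```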
E
D
Both occur; so, as a result, if you have a service that's both an OpenID Connect identity provider and a generic OAuth provider, you may end up publishing metadata at two locations. If you are a client which is both an OpenID relying party and a generic OAuth client in some sense, you may end up looking for the metadata in two locations. This was understood by the area directors and considered unfortunate, but they felt that what was informally called the "get off my lawn" rules in Mark Nottingham's draft, which became BCP 190, superseded that.
D
F
G
H
William Denniss, with the comment: I think it'll make sense, and I guess the clients probably have to look in both locations anyway. Who are you? William Denniss. Thank you. I did say my name, but I'm not sure the mic's on. I think from a client implementation view it looks fine, because I think we'd have to look in both locations anyway, due to the different name from OpenID Connect. Also, I don't see any added complexity. Looks good, yeah.
D
I
I
D
I did, and I responded to every area director on the appropriate thread and have gotten no pushback, and Adam Roach, whose DISCUSS was the main one, affirmed that this does the job. Okay, thank you; next presentation. Okay, so that was all good news. The JSON Web Token Best Current Practices document is another one that is on my plate.
D
D
D
So, the document status: there is a -00, which was adopted by the working group based on an individual draft that Yaron Sheffer had written in cooperation with Dick Hardt and I, and which did incorporate some working group feedback between -00 and -01 of the individual draft, before it was adopted.
D
D
E
D
E
D
J
J
D
My last slide on this. I wanted to be conservative in what I said as the presenter; I think we do need to make an affirmative decision one way or another on whether we're continuing this work (you could do whatever chairs do, right?). And if so, if we do proceed, we do need additional reviews. Those could come in the context of just asking for reviews; you could run a working group last call, yeah.
J
J
What I think would be useful to do is: you publish the document update, as you said you're doing this week, and we start a working group last call, and I would ask for some reviewers already now, so we can get those in for the working group last call. So: is there anyone of you who would have some free cycles to look at this? William?
H
Denniss. I just want to add a comment of support for this draft. I think it's extremely important; I've seen multiple mistakes made with JWTs that can be quite devastating, you know, whether it's just taking a system offline, or worse. So I do think it's needed; and hold me to the review, I'll try and get that in. Okay. Thanks.
J
D
J
B
K
K
K
Can you... So, I want to start with talking about why we need a new flow, why we need a new standard. When I started to work on our mobile application, we started to think about authentication. Usually, authentication in a mobile application is done with a login screen of some kind against [the authorization server], you know, either with authorization code or implicit.
K
Any kind of combination of those. But apps don't always need or want a login screen, because it can affect how they look and feel, and it will affect how many users actually complete the onboarding flow and start using [the app]. So we understood that we need another solution, a solution that will allow us to authenticate the requests from the app without affecting the experience, without the need to add a login screen.
K
Now you might be thinking about the device flow draft, but device flow is not good enough, because device flow still requires authentication; the authentication is just done on a different device. What I need is to be able to authenticate the device without any end-user interaction. This would allow me to do device-specific authorization: for example, I can say this device is authorized, so data can be viewed on this device and not from other devices.
K
On the other hand, I don't have any way to control who can register: any device can register. So those are the two assumptions. Now, a very high-level overview. Can you switch slides? Okay, so at a very, very high level: the client is going to use JWTs and a signature to authenticate, and the payload of the JWT is a one-time password.
K
It's kind of similar to a JWT client assertion, but it's not the same. The main difference is the payload inside the JWT. With the client assertion, the assertion is bound to time, the expiration; but time can be tricky on mobile devices. We have a lot of devices on which the internal clock is not synced.
K
So using time is not something we could rely on. Also, currently I don't intend to include the registration as part of the [draft]: the flow of how a device [creates] the matching public key, sends it to the authorization server, and [establishes] the initial seed. And can you move the slides, because the next one goes with this one? Sorry. And this is how the authentication request will look. You can see that we have the client assertion type in it; I hope you can see it.
K
K
What we have is: I'm going to use two numbers, two long 64-bit signed numbers, and you're going to [step] one of them and add the values of them to generate the one-time password. The device will start with an initial seed and [share it with the] server. So, for example, in this case it starts with 5 and 2, sends this 5 and 2 to the server, and then [uses them] in order to perform an authentication request.
K
So when the server [receives] the request, after validating the JWT it can validate the payload: it can check that the OTP that came from the client is equal to the one stored on the server, and if they are equal, the request is valid. Can you move to the next slide, okay? So this was the high-level overview of the protocol, and I think it's time for questions.
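The counter scheme sketched above can be illustrated roughly as follows. This is a hedged reconstruction: the class name, the seed values, and the add-the-two-counters derivation are illustrative stand-ins, not the draft's actual derivation; only the shape (a shared seed of two numbers, advancing counters, server-side comparison) comes from the talk.

```python
class OtpState:
    """Illustrative state shared between device and server."""
    def __init__(self, a: int, b: int):
        self.a, self.b = a, b  # the two shared counters (the seed)

    def next_otp(self) -> int:
        # Advance one counter and derive the next one-time password.
        # Summing the counters is a stand-in for the real derivation.
        self.a += 1
        return self.a + self.b

device = OtpState(5, 2)   # device registered the seed (5, 2)
server = OtpState(5, 2)   # server stored the same seed

def server_accepts(sent_otp: int) -> bool:
    # Note: a failed check still advances the server's counter here;
    # a real protocol would need an explicit resync story.
    return sent_otp == server.next_otp()

assert server_accepts(device.next_otp())   # in-sync device: accepted
assert not server_accepts(999)             # forged value: rejected
```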
K
B
L
My first observation is that we have the JWT assertion flow, which enables you to do exactly this, so I'm not sure why you're inventing a new way of doing it with symmetric keys, which is probably more complicated and less secure. So I guess motivation would be a good thing to understand: the existing method would let you do exactly this, and is used by Google and other people in various flows. Why isn't that good enough? Okay.
K
I said it at first, but maybe it wasn't clear. The main issue we had with JWT is that the validation there is looking at the expiry field; so you usually have, I think, five minutes (the JWT is valid for five minutes), and we have a lot of issues with mobile devices where the time is not synced. So, for example, they are like one hour or one day out of sync, and then the JWT they create will not validate.
K
L
Justin Richer. I both agree and disagree with John, which is pretty normal. So, one, to the last point: I mean, it's up to you, in your implementation and deployment, what the expiration conditions and the acceptance conditions of the assertions in a JWT are. Yes, they are commonly time-based, but not necessarily; I mean, it depends on how you want to do that.
L
The other thing is that when I first looked at this, the thing that immediately jumped to my mind was actually not the assertion-based flows, but the client credentials flow, perhaps using a client-key-based assertion as the client authentication. With this, now, the authorization server... you know, you say any device can register, and it's device-level authorization and stuff like that. You know, "device-level authorization" already immediately tells me that we're probably talking about a client credentials flow.
L
That's why it's there. But, you know, "any device can register", and having this, you know, symmetric-key-based thing, or symmetric-ish OTP... you know, it's all nonsense, and I just got all the crypto nerds mad at me. But anyway: with this, you can basically use the assertions framework in the mode of the client authenticating to the token endpoint, in a way that the authorization server can recognize; because, fundamentally, that's what client authentication is about. The authorization server recognizes that this piece of software is allowed to make this call, right?
L
That is fundamentally what needs to happen in that section. So, to me, the application of the specs in that way would make more sense for this, as opposed to the straight assertions. Both are valid ways that we already have, though. And even given your qualms about, you know, exp, expiry and things like that, I'm honestly not seeing what this does that what we have in the existing specs doesn't allow for, or any ways that the existing specs could trip up.
L
K
K
I didn't want to go too deep into this because, again, we don't have a lot of time, but one of the key things that this protocol allows us to do is to detect if someone was able to compromise the device, compromise the private key. The foundation of this protocol is: if an attacker were able to compromise this private key, then this attacker would be able to create a valid JWT and sign it. So, okay, I [use] a specific, unique one-time password [per request]. Can you move to this... move to this slide?
E
K
I didn't want to go too deep into details; I can do it now, depending on how much time I have. But the main thing for us is to detect if someone was able to compromise the private key and impersonate the device. So this was the main reason not to use client credentials, because that grant will not allow us to do it; and as I'm talking about mobile applications, the scenario of someone else having your device and getting the private key is, for me, pretty reasonable.
J
This is Hannes. I guess what I believe we would need to do is to have you write a little bit more on the motivation and some of the assumptions that are here, a couple of assumptions embedded in there. I think those would be very interesting to point out; and then, as a secondary step, to actually look at what the different solution approaches are that we could look at, and pros and cons. We've heard a few of those here.
J
J
I would have some comments on, like, the difference between the one-time-password-based mechanism versus the public-key-based approach, given that I was once the chair of the group that standardized the one-time password mechanism. But do you think you could write that down, maybe as an email or also as a draft update: a little bit more about the motivation, and some of the assumptions that you are making? I think this will help us to make some progress in that regard.
K
I can tell you what I have written down, and we can take it offline; I think it will be easier. And can you move to the last slide? I put some references there.
So, there is already a blog post describing all the assumptions I have, how this protocol is different from existing protocols, and the things I mentioned, especially the part about how it allows us to detect an attacker compromising the private key.
K
K
J
B
M
M
M
So, this new draft proposes an extension to the OAuth token introspection endpoint. So far, the OAuth token introspection endpoint only provides the resource server with token data in plain JSON format, and I came across use cases (more and more use cases) where it seems to be beneficial to have a JWT as the response, in order to have a signed and potentially encrypted response. So that's what the draft proposes: instead of responding with a plain JSON object, responding with a JWT, which, for example, can be signed.
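The proposal (returning the introspection claims as a signed JWT instead of bare JSON) can be sketched minimally with a symmetric HS256 signature. This is illustrative only: the actual draft presumably uses the AS's asymmetric signing key, its own media type, and its own claim set.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # Base64url without padding, as used in compact JWS serialization.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def signed_introspection_response(claims: dict, key: bytes) -> str:
    # Wrap token-introspection claims in an HS256 JWS so the RS can
    # verify who asserted them, and archive the result as evidence.
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

jwt = signed_introspection_response(
    {"active": True, "iss": "https://as.example", "aud": "https://rs.example",
     "exp": int(time.time()) + 300},
    b"shared-demo-key")
print(jwt)
```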
M
First of all, as I said, it allows the results from the introspection endpoint to be signed and also encrypted, and this is something which is useful, or required, in some situations where the resource server, for example, needs cryptographic proof that this particular token was issued by a particular authorization server (mainly for liability and non-repudiation scenarios), and also of what data this AS has, or had, asserted in this particular access token. I'm going to elaborate in more detail on the use cases.
M
So, in scenarios where there is an intermediary that fetches the access token data from the authorization server and then sends it downstream (or upstream, depending on your perspective) to a resource server, to actually use the access token to make an access control decision, it might be beneficial to encrypt the data in order not to allow the intermediary to inspect the data; and, in the same way, signing may preserve the authenticity and integrity of the token data. I would like to give some insight... oh, sorry.
M
All right. So, just to give you an insight into why I'm working on this kind of stuff: I've been working in two areas, and have tried to implement, or adapt, OAuth to the needs of two different areas. One is the financial area, the financial industry, and the other one is the area of the eIDAS directive. eIDAS is a directive in the European Union that will bring interoperability into the eID space.
M
That's one aspect; and the other aspect is that it will allow electronic signatures to be created remotely, for real, real serious stuff, for real legal documents. It more or less replaces the handwritten signature. And what I'm working on is a solution where verified identity data, provided by banks or governments or suchlike, can be utilized by parties that want to create this kind of remote signature.
M
What it needs for this kind of signature are verified person data and a strong authentication, and what I have been working on is a design where OAuth is used for the authentication and authorization process, and for carrying the person data to this remote electronic signature creation process; which is what we have been doing, because it's the same as in all our other areas.
M
If you take a look at industries like the financial industry, or if you look into other communities: when they start to build APIs, they typically build identity on top of those APIs, so all the authentication and authorization stuff. You've seen that in a working group recently here at the IETF; we've seen it in the PSD2 space (in the morning there was a discussion about the Berlin Group, for example), and in this space of electronic signatures the same happens. There is a lot going on.
C
M
What I would like to achieve... and this is really about implementing a product; this is not a theoretical, academic exercise, we're gonna build this product. And what we want to do is: let's assume there is a client, for example an insurance company, that wants to offer a customer some product, and it wants the customer to electronically sign the contract. In order to enable this process, it uses an authorization server which is provided by, for example, a government agency, a bank, or any other entity.
M
M
Now note the tricky part: the service provider that actually does the electronic signature creation, the so-called TSP under the eIDAS regulation, has to comply with some legal regulations. So, for example, it has to really keep an audit trail of the whole process: how has the person been identified, how has the user been authenticated, which data have been used?
M
And, from my perspective, the simplest way is to just have an access token, or an assertion, that's digitally signed by the AS. One might say: okay, that's not a problem, use structured tokens, because structured tokens have this kind of digital signature. That's correct. The problem here lies in another fact. What we want to achieve is that this client can access different services based on the same authorization grant; so, for example, this AS might be an OpenID Connect OP as well.
M
In which case the client wants to have an access token that can be used, or at least wants to have access tokens that can be used, to obtain user data from the OP, and to use that access token to create the remote signature. So what we want to achieve is a solution where the client can interact with different services while preserving audience restriction, for security and privacy reasons; and that's why structured access tokens are, from my perspective, the second-best option. I mean, we had a...
M
We had a discussion about audience-restricted structured access tokens, I think a while back, and Annabelle pointed out at the time: yeah, that might be a good idea, but you have to convince developers that it's also beneficial for them, and that's really not an easy task. And that's why I think introspection is the better way to go.
M
I mean, in the end it puts a burden on the RS, and it's not the best solution from a latency perspective; but for the use cases I'm aiming for it's a good solution, because you relatively rarely sign an electronic document, right? So introspection would work, but in order to fulfill the objectives we need to have the digital evidence of who created the token, what data have been asserted, and so on; and that's the reason why I more or less ended up with that solution.
M
M
F
B
E
I was trying to... when I saw that, it was like: yeah, we've done that. But we have a specific use case where we have done that, and we thought it would be interesting to do a write-up, because that use case could be common in more scenarios. But it's a completely different use case from what Torsten is describing.
E
And there is another representation of that token, which is in the form of a structured token, the JWT, and we want to be able to use the same grant, or just the same authorization data, in different circumstances. So the phantom token is about an app that's out in the open: the app requests an access token, which we call a handle.
E
There's no data in it. Once the app is ready to make a call to an API, or to a backend, to a resource server, it presents that token. There is an API gateway in the middle very often (that's what we see), and that gateway takes on the responsibility of introspecting that token with the authorization server. So it gets returned a JWT carrying the...
F
E
...same information, and the gateway can then use that, or can cache that information, so it only needs to make the request to the authorization server once, and pass on the JWT version of the token downstream to the APIs. So the APIs then don't need to do any introspection; they can verify the token contents and use that.
E
So this is what it looks like in a picture. On the left side, there is the app and the client that set up the initial token request with the authorization server, making a request through the gateway, and the handle token is represented with the white ticket; and on the right side, the same token is represented with a slightly blue-colored ticket which says "JWT", if you look very closely, and which carries all the information.
E
E
We added some stuff to it, and now is the moment that we meet again and get together and see what we all wrote up. For this presentation, the status: there is a very rudimentary version of it. We would like to add to that, for sure, the alignment of the HTTP stuff with the JWT stuff: like, the JWT has some information about expiration, and we would like to convey that information in the HTTP headers as well.
E
So, at the gateway level, the gateway doesn't need to know anything about JWTs; it can work based on the HTTP headers and HTTP status codes. I think what's more to do is to add the standard stuff there, as well as security considerations, and we could be discussing some other topics, like how to relate this to the AS...
M
Since we are running out of time, I would like to make it really short. So, one potential area where we could work on this is support for deployments where the RS relies on more than one AS: the identifier-based token must somehow inform the RS which AS it has to talk to regarding the particular token.
F
M
M
M
I don't see any difference from the text that can be found in the token introspection spec; I mean, the AS may require RSs to authenticate to the token introspection endpoint. We would use that, too, to really, in the end, enforce audience restriction, because if you don't know who's calling you, you don't know what audience is associated with the caller.
L
One, and this has already been started on the list, but I think it's a discussion we need to have, and this is apropos of the presentation I gave the other day: we need to decide if signing the response using JWS, as, like, an entire JWS element, is the approach that we want to take, versus other HTTP response signing things, some of which are just starting to kind of pop up here. This is more for his case, as opposed to your case, Torsten.
L
L
M
M
C
L
M
L
And I get that as an argument; it's sort of nice and self-contained and all of that. But that all actually leads me to a concern I have with the document, which Mark's presentation brought to fulfillment, and that's that the introspection endpoint doesn't return tokens. The introspection endpoint returns information about tokens; it does not return tokens. So treating the response...
L
...as a token is a very off-label and a very disturbing use of this, especially when we have token exchange at the token endpoint, which goes back to what Annabelle was saying, to do something similar. Now, at its very base, token exchange is: I give you an access token, tell you who I am, and you give me back a new access token. That's what it's supposed to be. I know it's got all of the other Microsoft WS-Trust garbage in there, but ignore that, and you can still chuck tokens in and out. Yeah, yeah, that's okay. We...
L
L
N
M
C
L
John Bradley. So, some of the same concerns as Justin. Token exchange probably solves the second use case; it may be overkill for Torsten's use case, where all you want is non-repudiation. And there's also proof-of-possession, which comes into play, etc. So I think some of the stuff that you're thinking about doing at the gateway, actually, once we start digging into it, may wind up looking a lot more like token exchange; and if token exchange isn't appropriate for people, then we are...
O
Brian Campbell. I was actually not sure how to say the same things: that introspection is meant to inform the resource server about the contents of a token, okay, and your use cases are around having that information carry some signature over it, so it can be stored away and would be non-repudiable, right; so that fits in with it. When you get into the API-gateway sort of use case, that's a whole different use case, and that would then need to return...
E
O
It's either likely to break, or to not be properly audience-controlled, or... And that is in fact the use case that token exchange was specifically designed for. So I don't... I'm hearing that it doesn't work for...
O
E
One main difference is the integration at the HTTP level, which you can do when you use that Accept-type response header. So it's way easier to integrate, instead of having to go up above the HTTP level, where you have to do token exchange. So, in optimization terms, there's something to be said for going the proposed way.
O
J
So, this is... I think it's good that you have a look at the response, and indeed it would be nice to have sort of a digital signature mechanism covering it. I never thought of doing that with the token by itself, but I think that would certainly be useful to have, maybe as part of the audit trail, because that issue didn't come up previously. I would look at whether there is a need to have additional metadata included.
J
That would be useful for doing the audit trail, because the previous purpose for the claims in there (and that's why we had to reuse them, or why we wanted to reuse them) was really policy enforcement, and what is enforced versus an audit trail are still a little bit different.
H
William Denniss. No comment from me on that debate; I just had a comment on the API gateway. I thought it was an interesting thing, how using the by-ref... sorry, the by-value... yeah, the by-ref one. One nice attribute of that is you can probably do revocation quite nicely. One of the problems with the JWT is that, since it's generally good until expiry, it's sort of hard to revoke the JWT.
H
E
B
P
P
So, this new flow is using the postMessage interface in HTML to help communicate tokens to the application in a simplified way. What we found is that it's very, very hard for some developers to integrate with OAuth. I know that OAuth 2 was designed to make it easier than OAuth 1, but it can still be quite challenging for them, with all the different query string parameters and the different endpoints and things like that.
P
And what we found is that this flow has to be done in the OAuth server, because doing it outside of the OAuth server ends up making an über-client, which could be quite dangerous. So, the way that the assisted token flow works: first of all, when the user is authenticated, a hidden iframe is opened up to the... A question for you: did you submit a draft? So, this has been submitted to the IETF tools on Sunday.
P
So you make the request to the assisted token endpoint with at least the required parameter of client ID; and then, if the user is already authenticated and has consented to this client, it will return HTML with a postMessage, posting the access token from the hidden iframe up into the client. So it's a simple GET, it's one query string parameter, and it's writing some JavaScript to handle the postMessage. So this is the authenticated state. OK, the response looks like this, where this JavaScript is what is in the data, so it will have the success...
P
So, if the user doesn't have a session, though, the client must be told, so it can take action; and this needs to work not just with OpenID Connect, it needs to work without OpenID Connect, in case the AS is not an OP. And the client cannot inspect the child frame; that frame is completely off-limits to us as the framer. So how do we figure out if the user is authenticated? We have a timeout, which is terrible.
P
P
P
So, the assisted token endpoint: the only required input is the client ID, like I said a couple of times, but you could also send a scope, if you want to have different scopes. If the client allows framing from more than one origin, it's going to need to provide a for_origin, to say to the AS which one wants to do the framing, because when using X-Frame-Options you can only have one origin in [the header]. You could have other things around reuse [prevention] and freshness, like we have with OpenID Connect.
P
So, the for_origin here: this will be required when there are multiple allowed framers, because, as I said, this particular response header only allows for one; CSP allows for multiple origins. So in that case, even if you have configured multiple allowed framers, you could specify them all in the Content-Security-Policy response header; but if you want to support X-Frame-Options, then you'll need to specify that in for_origin. This is also different from normal OAuth: if you don't request any scopes, you get all configured scopes, so this simplifies the developer's life.
P
So they don't have to say, like, "here's all of the scopes that I want"; they just get the ones that are configured for the client. And if they want to reduce what's actually configured, or change what's configured, they can include it in the request; but if they don't include any scopes, they get all the ones that are configured, if there are any.
P
L
So, we have to do something for single-page applications; I completely agree with that. I know that Google and other people have theirs; Breno did a draft looking at some of these issues. I know that there are some subtle and complicated issues with postMessage, and security issues with people running other iframes on the same page and being able to sniff. So if the page that had the iframe had third-party JavaScript on it, that JavaScript could exfiltrate the access token. Yeah.
P
Q
Q
P
P
L
J
C
F
L
P
P
In the popover it's gonna be hard; but with the popover, it's usually that the AS and the client are run by the same organization, so they don't even see the difference, or care about the difference. But when it's different organizations, that's when you would use a child [window]; and when doing that, the indications to the user would be the address bar and the normal indications that the browser provides.
L
Thank you. So, just to respond to that: I think we have to remember that, while you make a very good point that you do lose the context with a popover, that assumes that people pay attention to the context when it's a pop-up, and they don't. Sure. So I think that we need to be realistic about what signals we can rely on with users, and single-page apps that are going to be embedding iframes, as awful as they are, aren't going away.
J
Thank you, Trevor. That is a very good starting point, and I think we will make sure that some work in this space happens. Travis, one of the reasons why we didn't see the document flying around is that when you submit a document you need to name it "draft", then your last name, followed by the working group name, because then it shows up; otherwise it goes somewhere else, right. No, no problem, yeah.
O
An attempt to introduce a new resource parameter. Actually you could have more than one, but the idea was a resource parameter whose value was a URI, really a URL, indicating where the client intends to use the access token. This was applicable on the authorization request to the authorization endpoint, for access tokens that would be issued from there. Sorry, yeah.
O
It was applicable on the authorization request for access tokens that would be returned directly from the authorization endpoint, implicit tokens, as well as applicable on the token request for access tokens that are returned directly from the token endpoint. The way this is used sort of implies that the actual resource target where the token was going to be used wasn't stored with the grant itself, but was something specified by the client in the moment of obtaining the token, as information for the authorization server to use.
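As a rough illustration of the parameter being described, a client could carry one or more `resource` values alongside the usual parameters of an authorization request. The client identifier and URLs here are invented examples:

```python
from urllib.parse import urlencode

def authorization_request(client_id, resources):
    """Build an authorization-request query string carrying the
    resource parameter (repeatable, since a client may name more
    than one place it intends to use the access token)."""
    params = [("response_type", "code"), ("client_id", client_id)]
    for r in resources:
        params.append(("resource", r))
    return urlencode(params)

print(authorization_request("s6BhdRkqt3",
                            ["https://api.example.com/photos"]))
```

The same parameter would appear analogously on the token request, as the speaker notes.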
O
Why would we want to do this? It enables access tokens to be minted appropriately for the target resource where they are going to be used, and this could vary a bunch of things about the access token: whether or not it's encrypted; the actual content or claims within the token, which might vary depending on where it's going to be used; whether it's a reference-style or JWT token; perhaps different keys and algorithms in the JWT case, suitable for different resource servers. It also facilitates audience-restricting access tokens, yeah.
O
The authorization server might take the value of that resource URL as the value for the audience directly or, more likely, it might map it down into something more general, like the host or a tenant-level path, and then use that as the audience. So this general concept has come and gone for a long time. Hannes wrote an audience draft, I don't know, years and years ago, and there was the concept of "aud" (AUD) in the PoP key distribution draft.
O
That was in many respects very similar to this, at least conceptually. There are resource and audience request parameters in token exchange, and there are a number of proprietary variations of this, all sort of conceptually similar, that are out and deployed in the wild. The names are a little bit different, some of the functionality is, you know, not exactly this, but there's stuff like this.
O
That's being done now in proprietary fashions. Some might say, and I think Hannes actually argued for this at one point, that this was an omission in the original OAuth 2.0 authorization framework: at least from a conceptual perspective, there's no real way to say where the token is going to be used. A question that came up a lot is how this relates to scope. I tried to break it down here and say that scope is really about the "what": what are you trying to do?
O
Resource here is really about the "where": where is the access token being requested going to be used? And this allows for a distinct treatment of the where from the what. So whatever became of this draft? This is my final slide. In the Buenos Aires meeting, one of the options here, sort of in the next steps, was to let it linger for a few years until the idea is resurrected in some other form.
O
So when there's a WWW-Authenticate sent back with it, it includes an issuer attribute in that response that names the actual authorization server (although, despite the name, it's the token endpoint, not the issuer, so it's a little confusing). But conceptually it's saying: here's the authorization server, or servers, I trust. And then, because this is sort of the resource server telling the client where to go to get the access token,
O
it opens up this sort of security issue of phishing: pointing to malicious authorization servers, or polling for access tokens from a legitimate server so they can be used against other resources. So to prevent that, this host parameter was introduced in this draft to serve a similar purpose to the resource parameter: the host parameter is passed in the access token request and then basically becomes a host claim, or attribute, placed in the access token.
O
That would then be verified by the protected resource to ensure that it was in fact the original target the requester named in the access token request. And, I think very notably, this draft constrains all of this to use only with the client credentials grant, I think just to try to simplify some of the things, and probably because that's the only use case that environment is having. This is really sort of the similarity piece, but it's all driven by trying to secure this dynamic nature of discovery of the authorization server, and so I don't know; I was asked to talk about this here because it's all related. So what do we do now? I honestly don't know, but I would ask that, as we consider this, let's please not unduly constrain what is potentially useful and generally applicable functionality and concepts to something very narrow that we will regret later, like very, very tightly constraining something about, you know, discovery and dynamic nature to only the client credentials grant.
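The host-claim check being described could be sketched as follows. The claim name and token shape here are assumptions drawn from the discussion, not the draft's exact wire format:

```python
def verify_host_claim(token_claims, my_host):
    """A protected resource checks that the access token was
    requested for this host, rejecting tokens that were minted
    for some other resource server."""
    return token_claims.get("host") == my_host

# Hypothetical decoded token claims.
token = {"sub": "client-123", "host": "rs.example.com"}
print(verify_host_claim(token, "rs.example.com"))    # the intended resource
print(verify_host_claim(token, "evil.example.net"))  # a different host
```

Because the resource server rejects any token whose host claim doesn't match itself, a token phished for one server cannot be replayed against another, which is the attack the parameter is meant to close off.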
D
The idea keeps coming up; I think this is an old idea whose time has come. We have a lot of evidence that people want this. I'll say that this exact syntax is used in production by Microsoft. That's not why I'm saying its time has come; I'm saying there's a heck of a lot of evidence that people want this, so we should adopt this as a working group draft.
J
This is Hannes. Those of you who participate in the ACE working group may have noticed that there we use a proof-of-possession-based model, and, as mentioned by Brian beforehand, we obviously had to have a parameter in place in the request for requesting an access token, because otherwise the authorization server doesn't necessarily know, in our case, what key to use for specific resource servers. So we copied one of the parameters that we put into the document back then.
J
We should know better where it came from, but we are using and registering it, and the document is coming to an end in terms of the standardization process. The good thing is, we also registered it not just for use with CoAP but also, of course, with HTTP, so it would also be applicable to OAuth itself.
M
You know, I like the idea. For me, that's the road to audience restriction for structured access tokens, so I'm in favor of adopting the document, but I think you should clearly define or discuss the scope; for me it is defined too narrowly, because we need to talk about...
M
Too narrow. The reason: I come to the audience draft later on, because I'm still thinking that, if you are going for audience restriction and structured access tokens, we should nevertheless consider the case where the same authorization grant covers access to more than one service. Then you have to somehow relate the resources and the scope values; there is anyway a relationship between them, so you can't treat them completely independently. And then we also have tickets where we will discuss how the client really determines the resource.
M
This is all really fluffy in the overall space right now, so I'm totally in favor of adopting that, but we should think that through properly. Regarding Dick's draft that you pointed out: it reminds me of the discussion we had in Prague regarding whether we use audience restriction or somewhat-constrained access tokens for token leakage prevention, and I think we came to consensus. (into the mic) Yeah, yeah, okay, I'm going to start up again. So what you said about Brian's and Dick's drafts reminded me of the discussion we had in Prague regarding token leakage prevention.
R
The way the metadata is returned is different. In the case of Dick's draft it's returned as an extension to the WWW-Authenticate header, and it talks only about the token endpoint, while in my case I'm actually leveraging the RFC 5988 web linking header. In the first case it can only be used at the resource, but in the latter case it can also be used at any other endpoints. And the draft actually doesn't have anything to say about scope.
R
But probably, if we agree to do something like this, then we also have to talk about the target attribute in the weblink and its relationship to other documents. Brian's draft is talking about the request; mine is talking about the response; they're orthogonal and complementary, I guess. The earlier draft, Hannes's draft, has actually been rolled up into it.
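As a loose illustration of the web-linking approach being contrasted here, a resource could advertise related metadata in an RFC 5988 style Link header. The relation name used below is made up purely for illustration; the draft would define its own:

```python
def link_header(target, rel):
    """Format an RFC 5988 (web linking) style Link header value.
    The relation type passed in is purely illustrative."""
    return f'<{target}>; rel="{rel}"'

# Hypothetical example: a resource pointing at its authorization server.
print(link_header("https://as.example.com", "oauth2-authorization-server"))
```

The contrast with the WWW-Authenticate extension approach is that a Link header like this can accompany responses from any endpoint, not only a challenge from the protected resource.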
J
We just used one from one of the documents, and it's not just applicable for use with CoAP; it's also applicable for use with OAuth over HTTP, sort of classically. So that may be interesting. It may not be the scope that you want, or that other people want, but I think that would be a good conversation to have. Yes.
L
It closes a lot of the security holes that we've seen in the wild over the last few years with having a single discovery document that can mix and match different services, especially when you're relying on, you know, replayed shared secrets, client credentials, and things like that throughout your network. So I do think that this is good, and, you know, retrospectively I wish we had done OAuth 2 like this to start with. So, you know, I'd like the chairs to consider maybe OAuth 3.
L
At this point we've got enough stuff bolted on to OAuth 2 that it's not really OAuth 2 anymore. That's for the chairs to consider, though, over more alcohol. To the previous point, and this is also about bolting more things onto OAuth: I agree with the point that was raised before about how we have to consider this resource indicator along with scope.
L
And it turned out in the real world that a lot of people want more expressivity than what that allows in a reasonable fashion. The real trouble comes not with just the listing, because you can obviously just put, you know, resource URLs as scopes, and that works today; you don't need an extension for that.
F
Annabelle Backman, Amazon. A couple of points. Pretending to be Dick here: I think the driving factor for Dick's draft focusing on client credentials is just that that's the use case, so I don't think there's any objection to expanding that more broadly. Likewise, is this one draft, or is it two drafts, one focusing on the audience and one focusing on the resource? I think that's immaterial as far as that's concerned; let's just get it done.
F
One is wanting to be more specific about what is being asked for in this access token: am I asking for scope XYZ for resource server A, or scope XYZ for resource server B? That's one thing, as those are potentially the same scope but with different meanings. The other side is the security aspect: making sure that the access token the authorization server is issuing is then going to be used with the right server, and not in a man-in-the-middle type of attack.
L
Bradley. So, you know, the two sort of address different sides of the issue. I was initially opposed to putting them together, but, thinking about it, the security considerations around resource- and audience-restricting the tokens are so important to not shooting ourselves in the foot with client metadata that it's probably at least useful to start off with them as a single document. We may decide to split them up later, but working on them together would probably help us think through some of the security issues that we're creating for ourselves.
L
One of the very original, one of the very early threats identified for OAuth 2 was: well, what if you let a client, what if you let a resource, say where its authorization server is? That immediately leads to bad resources, you know, tricking a client into going to a bad resource and handing over a token; every token that gets handed over to it, the bad resource can then use someplace else.