From YouTube: IETF105-OAUTH-20190726-1000
Description
OAUTH meeting session at IETF105
2019/07/26 1000
https://datatracker.ietf.org/meeting/105/proceedings/
D
I'll give you a very quick recap of what's in there and, above all, why we are doing this, because that will inform the various things that you need to decide. Then I'll very quickly go through some of the main changes (thanks to everyone who gave feedback and prompted those changes), and then I'll place on the table three open issues and at least one call for consensus. All right, so here's the very quick recap.
D
Thank you, great. So why are we doing this, once again? You know, in the current landscape most authorization services are trying to use JWTs for access tokens already. In former presentations I have presented some research showing that more or less every vendor conveys the same information; they just use different syntax, and so there is a very easy opportunity for interoperability. And also, given that everyone did this in organic fashion, very often they went through, let's say, utilitarian routes, which led them to do things which
we might define as anti-patterns and similar. So, very clearly, the guidance void is a problem, and not just one of interoperability. And finally, something that I failed to highlight during other updates: one thing you observe very often in the market is the fact that more and more products are asking clients to send ID tokens where, instead, they should ask for access tokens. The reason is that, in order to actually do their job, they need to have a stable format that they can refer to; they need to have the information that we typically get in the ID token.
D
So the outcome is that if you want to use Google's methods, you've got to send them your ID token. There are a number of flows in Amazon that require you to do the same. So the hope is that if we create a profile that actually places in the access token the things that those products need, then we will curb that practice.
D
Thank you. So, in a nutshell, what we did: we defined a layout where we lay down the claims that represent most of the concepts and types of information that we've seen vendors already placing in their existing, in-production access tokens. We laid down some rules about, for example, determining what should go in the audience, and how you would map certain parameters, such as resource, into the actual content of a claim of the token.
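A minimal sketch of the kind of profiled access token payload and fixed validation sequence being described. All claim values are made up for illustration, and the exact claim set is an assumption based on common JWT/OAuth usage (iss, sub, aud, exp, iat, client_id, scope), not a quote from the draft:

```python
import time

# Illustrative payload for a profiled JWT access token: standard JWT
# claims plus OAuth-specific ones. Every value here is invented.
now = int(time.time())
access_token_claims = {
    "iss": "https://as.example.com",   # issuing authorization server
    "sub": "user-1234",                # the authenticated subject
    "aud": "https://rs.example.com",   # intended resource server
    "exp": now + 600,
    "iat": now,
    "client_id": "s6BhdRkqt3",         # the requesting client
    "scope": "read write",
}

def accept(claims, my_audience, at_time):
    # The kind of fixed validation sequence the profile aims to enable
    # in libraries: check audience and expiry, at minimum.
    return claims.get("aud") == my_audience and claims.get("exp", 0) > at_time
```

With a shared layout like this, a resource server library can implement one validation routine instead of per-vendor parsing.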
D
Did everyone have their coffee? Did you check all of your emails, or are you still checking your emails, Slack, or Facebook for Work, or whatever you are using? All right, let's look at the open issues very quickly. Oh, is there a question? Go ahead, please.
E
So the question is... can you go back one slide? One more. So you said something when you were talking about guidance on token validation: is your guidance meant to be normative, and do you expect some conformance to come out of that? So, like, certain validation libraries would meet certain conformance criteria.
D
I think it's fairly... maybe I'm missing the nuance of the semantics of normative versus not, but to me the very spirit of this is to help library implementers know exactly what to do when they receive this token, and how to decide whether they should accept it or not, exactly in the same fashion as we have done for ID tokens back then.
D
I think that a lot of the guidance in here is definitely amenable to being turned into conformance testing, so I think it's a possibility for sure. Great, all right. So, summarizing the changes since the earlier episodes: at first, all the claims in there were mostly mappings whose source mapped to OpenID Connect, and instead we changed this to actually go all the way to the JWT specification, which makes more sense.
D
Then, in the place where we mentioned that we would have tokens which represent identity, I was only mentioning OpenID Connect; instead, now I extended it to mention also introspection, and basically explicitly opened up to any potential source of identity claims. And then I also mentioned clearly that authorization servers are free to place whatever attribute they want in there, as long as we don't have collisions and similar. Then I added more details in the privacy sections, in particular as a result of early feedback and also misunderstandings.
D
Okay, so here is a list of all the things that garnered the most interest and the most discussion but didn't end up with a clear conclusion on the list, so I'm hoping we can use this time to come to a conclusion. It is basically the thing about distinguishing between subject types, then some discussion about the auth_time behavior, and then whether to recommend authenticated encryption, or anything that uses symmetric keys, for signing or securing the access tokens. So let's look at the first one.
D
Thank you. So here the idea is: existing access tokens in the market, from various vendors, have mechanisms that are used by the API to determine whether this token was issued for a resource owner or for an application; as in, does the subject represent a user or an app? And there is a variant, which is whether the token was obtained through a confidential client or a public client.
D
Let's simplify it and just look at users versus applications. There are very different ways in which people achieve this. For instance, IdentityServer doesn't include a sub claim when the token is issued through the client credentials grant. There was a discussion on the list, and the consensus was mostly toward: if we have a sub, the sub has to represent an entity that was authenticated, and that is the case for an app as well. So you've got to have sub in all cases; that's what we ended up saying.
D
But then we do need something else, because we cannot use that mechanism, which backends were using, for determining whether the token is for an app or not. Then Karl from Okta proposed: maybe we can have a claim, grant_type, which records the grant that was used for obtaining this token, and there is prior art that already has this claim. But then on the list there were a number of voices saying we shouldn't burden the resource server with knowledge about this, because it's an implementation detail between the client and the authorization server.
D
So we basically excluded that approach. Then there was a longer discussion about whether we should impose that, in the case of an app token, you just have the sub equal to the client ID. But that also doesn't work well, because a lot of current implementations use different internal mechanisms for generating client IDs and subjects, in their own formats, and the subject can also be different depending on the relying party.
D
And also, you don't want to allow people to choose their own client ID because, especially for an AS, that ends up being a mess, for the security reasons that we heard during the meeting earlier in the week. And then, finally, there was another proposal that we didn't really vote on, which was: let's just have a new claim that states the nature of the subject, whether the sub represents a resource owner or the sub represents an application.
D
So here basically I'm asking you guys: do you have any ideas for solving this? For me, I would love the last one, let's say having a new claim that expresses this; but I'm also open to: the outcome of this is complicated or has bad implications, so let's not do this at all, let's leave every vendor to worry about this aspect on their own, and let's not have it in the interop profile.
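To make the last proposal concrete, here is a sketch of what a new claim stating the nature of the subject could look like. The claim name `sub_type` and its values are invented for illustration; the group had not settled on any name:

```python
# Hypothetical claim distinguishing whether "sub" identifies a resource
# owner (user) or an application. "sub_type" is an invented name.
def mint_claims(sub, client_id, on_behalf_of_user):
    claims = {"sub": sub, "client_id": client_id}
    claims["sub_type"] = "user" if on_behalf_of_user else "application"
    return claims

# Three-legged case: sub is the user; two-legged: sub is the client itself.
user_token = mint_claims("alice", "s6BhdRkqt3", on_behalf_of_user=True)
app_token = mint_claims("s6BhdRkqt3", "s6BhdRkqt3", on_behalf_of_user=False)
```

The point of an explicit claim is that the RS no longer has to infer the answer from sub's format or from which grant was used.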
G
Yeah, so that's where most of the problem with this usually comes in: if the AS is controlling the namespace of the clients, you can make sure that the client ID looks like a client ID, whatever that's supposed to look like. And if you're letting your users pick arbitrary strings as their username, something else is probably wrong.
G
The other thing is... okay, I guess it's three things, because the other thing is that in a deployment that I did, we played around with having the grant type claim in there, and it really wasn't very helpful, as it turned out. So we ended up dropping that and just differentiating based on the...
G
Basically, we figured out that it was kind of a weird bit of signaling of how this token showed up, instead of what this token was and what it meant. It's saying how somebody got this token, and not necessarily what this token should be good for and who this token represents, and so it was just kind of a weird mental jump.
G
It is a handy shortcut, because you're probably doing that anyway. The other thing, and this isn't something that we ran into in production, but the other thing that we thought of, is that clients can use client assertions to get a token on their own. That's not the client credentials grant type, and so it's not necessarily the right signal; it's standing in for something else, which is that the subject is the client ID. And so that's the logic that we went with, which I think is the third option there.
G
This draft is in the SecEvents working group, and it's for security event tokens. It basically says there is an object structure for different types of user subjects, so you can have more than just a single subject string in the JWT; it's all collected into one sub object. You can have an issuer and subject field within that, or an email identifier, or a phone identifier, each of them with a type field as well.
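The structured subject being described can be sketched as follows. Field names follow the SecEvents subject-identifiers draft as described here (a `subject_type` discriminator next to the typed value); later revisions of that draft renamed things, so treat the exact names as an assumption:

```python
# Typed subject identifiers instead of one opaque sub string.
email_subject = {
    "subject_type": "email",
    "email": "user@example.com",
}
phone_subject = {
    "subject_type": "phone",
    "phone": "+12065550100",
}

def identifier_of(subject):
    # The type field tells you which member carries the identifier.
    return subject[subject["subject_type"]]
```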
I
Thank you. Now I'll tell you why that's a bad idea. Annabelle Backman, Amazon. So that idea flashed through my head too, the idea of using subject identifiers for this. The reason I think it's a bad idea is that what that draft talks about is subject identifiers and types of subject identifiers; it does not talk about types of subjects.
I
So it's saying: I am pointing to some principal that is the subject, and I'm pointing to it using, for example, an email address, or a phone number, or, theoretically, a key pair, or any number of things that we might define down the road. It's entirely possible that we will have subject identifier types that are ambiguous as to whether they could be handles to a user, handles to an app, or handles to an organizational entity; any number of things. So I would not.
I
Now, what I actually got up here to say is two things. I agree with not using grant type, in particular because it tightly couples this concept to specific protocol implementation details, and it doesn't really make sense to tie those together. What would make sense is a mapping between those two, which grant types apply to which subject types, and that mapping may change over time.
I
It's going to get more complex as new grant types are introduced and as new subject types are introduced; it's unstable, and it would be really burdensome to try and ask the resource servers to all figure that out and understand it. And if your resource servers understand that, then at that point there's probably enough coupling between your authorization server and your resource server that you don't need a standard way to do this anyway. I had a third point, which...
I
Can we do it? Yes? No? No, actually, of course not, that would be too easy. I was wondering if you could quickly summarize what the use cases are that people are solving, or trying to solve, by differentiating between user and app as the subject; what are the RSes that need some specific mechanism to do that?
I
That sounds like it's less about user versus app and more about: what are the circumstances under which this client (and by client here I mean the remote piece of code that is calling me) was authenticated, and what are the circumstances under which whatever user may be interacting with it was authenticated by me? And I think that's very different from user versus app as two different subject types. What I think of is kind of three-legged OAuth versus two-legged OAuth.
D
Of course. The challenge we have here is that the moment of authentication does not correspond to the lifetime of the token. So basically here we are placing claims whose value is determined elsewhere: for example, the moment in which you signed in and got your first batch of tokens. Or, if in the middle of the current session you have to do a step-up authentication, then the authentication time changes and the method of authentication changes.
D
The next time you use the corresponding refresh token, we need to reflect that into a claim in the token. And here the need, from the practical and business perspective, is very clear; let's say people already do this. But at the same time, when we discussed this, some people on the list expressed unease. So I just want to give the opportunity to the people with that unease to bring their concerns, and if you have alternative solutions that, of course, still deliver on the requirement, then this is your opportunity.
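The behavior being described (and, below, debated) can be sketched like this: authentication-event claims live with the session, get updated on step-up, and are copied into every access token minted from the session's refresh token. Names and structure are illustrative only, not from the draft:

```python
import time

# Sketch: auth_time/amr are properties of the authentication event, not of
# any one token, and are stamped into each newly minted access token.
class Session:
    def __init__(self, amr):
        self.auth_time = int(time.time())
        self.amr = amr

    def step_up(self, amr):
        # Re-authentication during the session updates the recorded event.
        self.auth_time = int(time.time())
        self.amr = amr

    def mint_access_token(self):
        # Called on the initial grant and on every refresh.
        return {"auth_time": self.auth_time, "amr": self.amr,
                "iat": int(time.time())}

s = Session(amr=["pwd"])
t1 = s.mint_access_token()
s.step_up(amr=["pwd", "otp"])
t2 = s.mint_access_token()   # reflects the step-up
```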
J
Brian Campbell, Ping Identity. I think I was one of the doubters, and I continue my doubt, but I accept that it's going to be in here, because there are requirements and people are doing it anyway. But I do get really nervous about what you just said: that a step-up authentication in a session, after the token was issued, should somehow be reflected in the auth_time of the next token.
J
On refresh, that sort of thing is like drawing a lot of links between things that aren't really linked, and I'd like to see it at least clarified to: this is the auth_time, or the AMR, or whatever, that was used at the time of the authentication where the grant was approved; I don't know the right language, it's hard. But it's creating some sort of ephemeral and imaginary link between an ongoing session, however one might describe that, and access tokens issued off of some other artifact.
D
It's confusing and dangerous; I think we should, yeah, expand on it. Thinking about it, we'll have to solve this problem somehow anyway, because, let me make another example: say that you have this access token tied to a session via a refresh token, and the refresh token supports rotation, and say someone abuses the rotation and uses the wrong token at the wrong time. Now you have to go revoke all the refresh tokens, and allegedly also the access tokens that were issued as part of that.
D
So that link... in this case you are using that link to reflect potential changes in the auth_time, but in the other case you use it for doing revocation. The thing is, this link between session, refresh tokens, and access tokens already exists in various other respects.

J
Part of that is, when you say allegedly: it doesn't exist. It's potentially some link that's done at implementation time. And even if it does exist, it's super weird to have a step-up authentication that happens in a completely different context, and have access tokens that were issued off of a different authentication context reflecting that, sort of in some magical real time.
I
To come to the point, I just want to give an alternative view of where that would be useful, and that's if you are doing an OAuth 2.0 authorization request in a context where an earlier, say, password-based authentication happened, but because of the nature of the OAuth 2.0 authorization request you need to do step-up at that time. Then you're going to have a case where your password-based authentication might have happened yesterday, but your step-up, your two-factor, whatever you're doing, happens now. So having those as two different pieces of information could be valuable.
I
Having said that, I think, given the ambiguity of the terms step-up and two-factor and all of that, and the wide variety of implementations, it's going to be very hard to really formalize a nomenclature for all the different things we're going to want to talk about here. So this may be a place where just having an opportunity for the different vendors to plug in whatever they need might be more useful.
D
Here I'll just present. Basically, we have at the very beginning a recommendation to use asymmetric crypto for signing tokens, because it's just easier for people to have a fixed sequence of steps for retrieving keys from metadata and similar. I didn't suggest that we should actually require it, and I'm not against that. But at the same time, I know that privacy-conscious people were pushing to also include a recommendation to use authenticated encryption.
D
The thing is, regarding authenticated encryption, today the only specs we have all use symmetric keys, and that makes me very uneasy, because now I'm leaving as an exercise to the reader something which is super important, which is acquiring the keys you use for encrypting tokens. So basically here I wanted to hear from you guys what your take on that would be.
K
So, out of curiosity, who here has read this draft? This is a fun game to play. Great, okay, a few of you. The summary of this draft is that it is recommendations for people who are building browser-based apps, or JavaScript-based apps, or single-page apps, whatever you want to call them, that are doing OAuth 2.0 in a browser, which typically means that the browser ends up holding the access token, but we'll get into some of that.
K
So that's the sort of overall scope; this is sort of supposed to be the parallel to the native apps best practices that we already have. This is a summary of what the document currently says, and some of these points are currently up for debate. So, the recommendations for browser-based apps: use the auth code flow plus PKCE; do not (must not) return access tokens in the front channel, which rules out the implicit flow completely; and must use the OAuth 2.0 state parameter for CSRF protection.
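The auth code plus PKCE recommendation rests on a simple construction: the client invents a random code_verifier, sends its SHA-256 hash (the code_challenge) with the authorization request, and reveals the verifier only at the token endpoint. A minimal sketch of the S256 method from RFC 7636:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # code_verifier: high-entropy random string (RFC 7636 allows 43-128 chars;
    # 32 random bytes base64url-encoded gives 43).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), unpadded.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def as_verifies(verifier, stored_challenge):
    # What the AS does at the token endpoint: recompute and compare.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == stored_challenge
```

An attacker who steals the authorization code from the front channel cannot redeem it without the verifier, which never left the client.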
K
It absolutely requires exact redirect URI matching; that's actually changed since the last time we met. And there is a recommendation that authorization servers should not return refresh tokens; that one is one of the points we're going to talk about, but that is what we had currently discussed before we started having discussions on the list. So, some changes since the last meeting: exact redirect URI matching is new.
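"Exact matching" here really does mean plain string equality, with no prefix or wildcard logic. A sketch of the check, with made-up URIs:

```python
def redirect_uri_allowed(registered: str, requested: str) -> bool:
    # Exact comparison only. Prefix matching is what historically let
    # attackers exploit open-redirector paths under a registered prefix.
    return requested == registered
```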
K
This is one of the architecture patterns: a single-page app with a backend. And in this case, this is one of the things that I think we need to discuss a little bit more, about what we actually want to recommend here. The recommendation here is basically that if your single-page app has a backend, then you should actually do OAuth in the backend component and not do OAuth in the JavaScript code at all.
K
The question then becomes: do we say anything about whether or not the access token should actually live in the JavaScript app, or whether that should be something that is not the access token, like an encrypted session cookie or some other thing? That's a point up for debate. So that's one of the architectures; on to one of the others.
K
This is one of the ones that's gotten a lot of discussion on the list, which I think we need to come to an agreement on: what exactly is the scope of what this document is recommending here? So, same-domain applications. What I mean by this is that when everything is on the same domain or subdomains, so if everything is off of example.com, like rs.example.com, api.example.com, auth.example.com, and app.example.com, they can all share cookies on example.com.
K
So you end up actually not needing OAuth at all, for one, and if you do still want to use OAuth, there are ways to protect it better than actually having the access token end up accessible from JavaScript. So right now the document says: maybe you should just avoid using OAuth and do something on your own. Maybe that's not the best recommendation.
K
So that's a first point. The redirect step, where everything's being passed around over redirects, is where a lot of the flaws in OAuth come from, a lot of the holes come from, and you don't actually need to do that if everything's on the same domain. But having a separation between AS and RS and all that is useful, because it lets you centralize your account management and MFA and all that. So there's a couple of different approaches we could go with here.
K
I guess we could say that the scope of this document assumes that you're going to be using OAuth already, and we ignore the case that you might not use OAuth and only talk about how to do OAuth in this case; in which case, what exactly do we say about this case, is there anything unique about it? Or we try to say that there is a particular recommendation for same-domain applications that maybe doesn't use OAuth, or maybe uses OAuth in a new way. So I'm curious about thoughts on this bucket.
G
Justin Richer. Yes, this discussion absolutely needs to be in scope, because people coming in from outside of the OAuth working group are going to be saying: I have a browser-based app, I've heard of OAuth, what do I do? And if I get to an official document that says, hey, if your app looks like this, don't use OAuth, I might actually do the right thing and not use OAuth. So I think it would be a dangerous disservice to the Internet
if we removed this part of the document. That said, we do have to be very clear about the boundaries around this, and I think we can continue to clean up the language. There was some discussion on the list in the last couple of weeks about framing this in terms of: if your backend still does OAuth, that's fine, but that's outside the scope of this document.
D
Vittorio, Auth0. I agree four hundred percent that we need to be giving guidance to people, and if the guidance is: for this particular deployment, don't use OAuth, use these other mechanisms, then that's what we should say. The thing right there I would be careful about is helping people to understand the implications of a choice; exactly this is something that I'm trying to do on our own product.
D
People need to understand when they will have to backtrack from this approach and then use the other one; and if it's likely that their future looks that way, then perhaps they should use OAuth in this particular case, because otherwise they'll do the work twice. So, long story short, I think we should tell people: this is a simpler way of doing this, or OAuth might not be the right tool; but we also need to have them understand what the implications of an architectural choice are moving forward. Yep, thanks.
K
Okay, so I'm hearing a couple of different things here, and I'm probably oversimplifying, but possibly one of the ways to reconcile this is not calling it same-domain applications, and instead trying to find a pattern that describes this architecture other than the fact that they're on the same domain.
K
Because that's not a distinction that OAuth normally makes, so maybe that's not the appropriate distinction to be making in the spec. But, to Justin's point, we need to find out what people are going to be looking for in this document when they have the situation we're trying to describe: what are the keywords they're going to be looking for? That way we can still cover this content, but maybe not with the same caveats or the same distinctions, which I think would also address Torsten's point, I think, yeah.
H
We probably have to discuss this a little bit further, but it sounded like very different advice from different people, yeah. So we have to find out what the majority at least thinks about which direction we should go, because I thought that part was actually quite good, in the sense that you are recommending not to use OAuth, but you're not saying what else they should be using. So in some sense we're not doing a threat analysis for the other solution.
K
No, yeah, that's also a good point. If we're going to say don't do this, we should probably have somewhere else to send people to tell them what to do. Okay, so it sounds like we're not going to be able to make any more progress on this one here, so let's move on to the next, because I do have a bunch of others to get through.
J
It's fine if it's not in scope, but I think there's a real opportunity for sort of complicity by omission: if there's no mention of it in the document, people will come to it and assume that means it's the right thing to do. So I don't know if it needs to be an architectural pattern, or even just a description in an out-of-scope section; it deserves some mention.
K
These are, I think, small issues, but these are all points from the list that I went back and forth on and that we didn't come to an agreement on yet. So let's just start at the top: state. Right now the spec says use state for CSRF protection, even though it also says use PKCE, and the Security Best Current Practice says you can use PKCE instead of state for CSRF protection, so those things don't quite match.
K
There are some issues with the fact that the client doesn't know that the AS supports PKCE, but in this spec the AS has to support PKCE, so the client should be able to know. Do we actually want to still say that state has to be used for CSRF protection, or should I take that out, because theoretically it's covered by the Security BCP and using PKCE? Any strong opinions? If not, we'll just skip it and deal with it later.
K
I'm going to come back to refresh tokens in a minute; password grant right now. The document basically says you should use auth code and PKCE, but if you really need to, you can use the password grant. Can we just say you can't use the password grant? Because I don't think that's a good idea for anybody. I'm seeing a bunch of nods of yes; let's kill it.
K
Great. Section 9.8 in the document; this is one I might just talk to Torsten about. There is a list of security issues specifically with the implicit flow. A lot of these refer out to the Security BCP for the details, and there's just a summary of them. There are a couple in there that I didn't see in the Security BCP, that are unique to the implicit flow, and I put them in here because people who are writing...
K
Do we need an indication that the access token may be sent to the browser? I guess that would be an indication to the resource server or authorization server that this access token actually may end up in a browser, which is not a confidential client, and which has a totally different threat model than the confidential client. This is an interesting one; there's no mention of this right now in the document, and I don't actually know of anything else that's doing this.
K
I don't know what the solution would be here; we'd need to talk about what mechanism to actually use for this, but it would need to be some sort of communication about the access token, or about the grant, that says: the client is probably going to give this to a browser, which is a much less trusted environment. Does this sound like something people would like to have a mechanism for, and then we figure out what the mechanism is later? Some nods, some nods. Anybody really opposed to this? Does anybody think this is totally pointless?
K
A good question. It makes some deployments easier, because you can avoid having a separate session with the server. We could also say: don't do that, and force you to require a session, but then that does limit your architecture a little bit; it requires you to do something else. I wouldn't be opposed to making that a requirement for this architecture, though, because it does clean it up. Annabelle?
I
Annabelle Backman, Amazon. I think there are use cases for access tokens and other kinds of credentials being sent down to the browser and used directly from there.
J
Brian here, on my own behalf. I think I kind of agree with that. I feel like trying to build explicit signals here is a super slippery slope that gets hard. I mean, this is one specific instance, but it's like you want some kind of signal here for a client to tell you just how promiscuous they're going to be with the access token. It feels really... yeah, okay, not good to me.
D
Vittorio, Auth0. I think we have an argument there: that we can't protect everyone in every situation doesn't mean that we shouldn't when we already have some knowledge of the topology. In this particular topology we know what's going to happen, because we are telling the implementer how we want them to behave, and at that point, given what we said earlier about the scenario in which the resource server wants to know whether it is being called by a confidential client or not...
I
There would presumably be the browser client making a call up to my endpoint, and I'm resolving some kind of session token or something in the cookie to look up an access token in a database on my side, in which case I'm already dependent on something in the browser; it may or may not be exposed to JavaScript. Once again, is my access token going to the browser, or what does that mean? Does it mean I'm giving it to JavaScript? Does it mean I'm putting it in a cookie? You'd have to distinguish that.
I
But my point is that we already have to think about the downstream effects of the client's decision of how to authenticate and how to secure its own interactions with the browser client; that impacts us whether the access token is stored in the client, or in the browser, or on the server side.
K
The topic is refresh tokens in single-page apps. There was a lot of good discussion on the list, reasons that it's a good idea and also reasons it's a bad idea, and I'm not seeing a lot of ways to reconcile this guidance. Right now the document says you should not send refresh tokens to browsers. That feels like not enough guidance for the situation.
K
We could say there should be no bearer refresh tokens, so they require some sort of proof of possession, which is sort of undefined in this situation right now. We could require that refresh tokens have a limited lifetime, either time-based or based on the authentication session; we could leave that part up to people, but requiring that they have a limited lifetime would, I think, help with some of the concerns people have with issuing refresh tokens to the browser. Requiring that they rotate is probably a good idea; it's also mentioned in the Security BCP.
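Rotation with reuse detection, the option discussed here (and the abuse case Vittorio described earlier, where a replayed old token triggers revoking the whole family), can be sketched as follows. The class and method names are invented for illustration:

```python
import secrets

class RefreshTokenFamily:
    """Sketch: rotate the refresh token on every use; if an already-rotated
    token is ever presented again, assume theft and revoke the family."""

    def __init__(self):
        self.current = secrets.token_urlsafe(32)
        self.seen = {self.current}
        self.revoked = False

    def refresh(self, presented):
        if self.revoked:
            return None
        if presented != self.current:
            # An old (already-rotated) token came back: someone replayed it,
            # so kill every token descended from this grant.
            if presented in self.seen:
                self.revoked = True
            return None
        # Normal path: retire the presented token, issue a successor.
        self.current = secrets.token_urlsafe(32)
        self.seen.add(self.current)
        return self.current
```

Rotation limits the value of a stolen refresh token to a single use, and reuse detection turns that single use into a revocation signal.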
K
So that's the list of potential decisions I came up with based on the discussions. I don't know what the right answer is; I feel like not mentioning it is bad. I'm kind of leaning towards definitely requiring rotation, and not allowing an unlimited lifetime, and making sure that's clear, because that helps eliminate some of the attacks people were describing. I don't want to prohibit bearer refresh tokens, because there isn't really a solution for that right now, so it doesn't feel practical. Go ahead.
J
And what's a reliable way to get those? Iframes rely on third-party cookie access, which is in the crosshairs of pretty much every browser vendor nowadays; constantly redirecting blows up your UX, so no; and so do pop-ups, since they require user interaction. Either we put forth restrictions on the AS and client to be able to get refresh tokens, e.g. using proof of possession, sender-constraining, rotating on every use, something else, or we go searching for a completely new mechanism. Sorry, apologies; no need to proxy, Brian and Aaron are on the same page.
P
What do we want to solve with the solution? As you might know, the OAuth Security BCP, and we've just also seen the SPA BCP, makes good points about using sender-constrained tokens, but currently we do not have suitable mechanisms for sender-constrained tokens for SPAs, and probably also for some other requirements in other areas. We have mTLS, which is good, but you cannot really use that in the browser, and we also have token binding, with its lack of browser support, and the future of that is pretty unclear.
P
The main goal of DPoP is to prevent token replay at a different endpoint. More precisely, if an adversary is able to get hold of an access token or refresh token, because the adversary set up a counterfeit authorization server or resource server, then the adversary should not be able to replay that token at a different endpoint, authorization or resource server. So that's the main goal of DPoP. As I already said, we started discussions in March; I created the first draft during the last IETF meeting, and now we are at version zero
P
two. And this is what the current proposal looks like. We have the client on the left, and when the client sends the token request to the AS, the client attaches a DPoP proof. What that looks like we'll see in a moment. Then, if the AS supports DPoP, the client gets back an access token that is DPoP-bound.
P
This is signaled by the token type DPoP, and refresh tokens too, at least for public clients, are bound to that DPoP key. When the client wants to use an access token, for example, then it also attaches a DPoP proof to that request. It's worth noting that we use the same proof structure, the same JWT structure so to say, in the token request and when the access token or refresh token is used.
P
Okay, this is what a DPoP proof looks like. In the header you can see that we have a type of dpop+jwt, and we have a public key; and in the body of the JWT we have things such as a token ID, the HTTP method of the HTTP request to which the proof is attached, the HTTP URI, and a timestamp. All of this is of course signed with the private key that belongs to the public key in the header.
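The proof structure just described can be sketched as below. This is a hedged Python illustration of the header and body only; the claim names (jti, htm, htu, iat) follow the draft as described, but the helper names are made up, and a real proof would append a signature computed with the private key via a JOSE library.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # Base64url without padding, as used in JWS serialization
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_dpop_proof_parts(public_jwk: dict, method: str, uri: str) -> str:
    # Header: typ marks this as a DPoP proof JWT; jwk carries the public key
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    # Body: unique token ID, HTTP method and URI of the request, timestamp
    payload = {
        "jti": str(uuid.uuid4()),
        "htm": method,
        "htu": uri,
        "iat": int(time.time()),
    }
    signing_input = (b64url(json.dumps(header).encode())
                     + "." + b64url(json.dumps(payload).encode()))
    # A real proof appends ".<signature>" over signing_input, made with the
    # private key matching header["jwk"]
    return signing_input
```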
P
In the token request, the DPoP proof is added in a new HTTP header called DPoP. As I already said, this is in addition to all the stuff that we already have in the token request, which is completely unchanged. If the AS supports DPoP, then the AS will send back the access token with token type DPoP. Otherwise, if it doesn't support DPoP, then everything will go on as normal, and the client will see that the AS does not support DPoP.
P
Now the resource access. In the resource access there is an Authorization header as before, but the token type is not Bearer; it is DPoP. Therefore the header is DPoP followed by the access token, and there is a DPoP header as before, with the DPoP proof belonging to the resource access. In the introspection response, or if you have a JWT access token:
P
There's a cnf claim signaling that the access token is bound to some key, and in the cnf claim there is a jkt#S256 member, which is the base64url encoding of the JWK SHA-256 thumbprint of the public key to which the token is bound. That way the resource server can check that a DPoP proof was presented for that access token.
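The thumbprint confirmation can be illustrated with a sketch of the RFC 7638 JWK SHA-256 thumbprint computation in Python, assuming EC or RSA keys; the function name is hypothetical.

```python
import base64
import hashlib
import json

def jwk_sha256_thumbprint(jwk: dict) -> str:
    # RFC 7638: serialize only the required members, in lexicographic order,
    # with no whitespace, then base64url-encode the SHA-256 digest
    required = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n")}
    members = {k: jwk[k] for k in required[jwk["kty"]]}
    canonical = json.dumps(members, separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
```

The resource server compares this value against the thumbprint in the access token's cnf claim to confirm the DPoP proof key matches.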
P
Regarding the security of DPoP: we have several features that prevent token replay. We have jti, the token ID, and iat, the timestamp, which together can be used to maintain a list of proofs that have been seen and that would, in theory, still be valid. We also have the HTTP URI and the HTTP method, which bind the proof to the request that is made, to prevent swapping of DPoP proofs with other JWTs that are traded somewhere.
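A server-side replay check over jti and iat, as described above, might look roughly like this sketch (hypothetical class, in-memory only; a real deployment would need shared storage across server instances):

```python
import time

class ReplayChecker:
    """Remember seen jti values for as long as their iat could be valid."""

    def __init__(self, max_age_seconds=60):
        self.max_age = max_age_seconds
        self.seen = {}  # jti -> iat

    def check(self, jti, iat, now=None):
        now = time.time() if now is None else now
        # Reject proofs whose timestamp is too old or too far in the future
        if not (now - self.max_age <= iat <= now + self.max_age):
            return False
        # Forget entries that fall outside the acceptance window anyway
        self.seen = {j: t for j, t in self.seen.items()
                     if now - t <= self.max_age}
        if jti in self.seen:
            return False  # replayed proof
        self.seen[jti] = iat
        return True
```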
P
We also have the typ claim, which has a unique value that must be checked. We don't allow the "none" signature algorithm. And regarding message integrity, which is not guaranteed by DPoP: as you can see, there is no signature over the message body.
P
Of course you have the transport layer; of course, use end-to-end TLS if possible. But if you really want to prevent a manipulated or malicious resource endpoint from using the token at the same URI at the honest endpoint, or from changing the request body, then you can also bring your own data and sign it in the DPoP token. So it's extensible, if you need that.
H
Speaking as myself here: we have IETF working group items that pretty much do the same stuff. They don't have this shiny name, and I was wondering whether you had considered that work, because that work is actually split into two documents. One part is the key transport, which we talked about last time, where a comment was made about:
H
do we need to add a proof there? That was followed up with this formal analysis. And then we have the second piece of work, which Justin is heading, the request signing, which of course does more or less the same stuff. We have different incarnations because there were issues and challenges; I think it's fair to say request signing has always had some challenges in this group. So I'm curious as to why.
H
Because it actually does less, from a functionality point of view, than what we previously did with the ACE working group. As you remember, we tried to balance this work with ACE and RTCWeb, because they needed the proof-of-possession mechanism; they encoded the fields in CoAP, and we had the corresponding part in HTTP, because we said that we would do the HTTP stuff here and the CoRE working group
H
does the CoAP stuff there. So there was kind of an understanding, and now I'm feeling we're doing the same thing twice. Even in your earlier slides you have this; you call it a DPoP proof of possession, and I forgot what the name was in the other case, because it was also binding the key to the token. So, yep.
S
You know, I'm kind of wondering about the relationship of this draft to the key distribution draft. And then also there's a draft that I wrote a while ago, which is called j-pop, of which a part of the spec was actually taken out to become mTLS. But this one looks like it's trying to do more or less the same thing, except that this can only be used for the code flow. In that case, if we revive j-pop along with key distribution, it could actually be useful. It was the first, oh yeah.
J
Brian, trying to put some color on the origins and some of the history here, not to address but to speak to Hannes's questions. DPoP came out of, I think, a desire and a real need from developers using this stuff for a simplified, concise mechanism to do public-key proof of possession for an access token, both at the authorization server and the resource server. And while it bears a lot of similarity to some other things that are in progress, I think it does more with less.
J
It delivers that public-key proof both for the delivery and binding of the access tokens with the authorization server, and for using them at the resource server. So it has a lot of conceptual similarities, but I think it's significantly simpler, has a more straightforward model, and is much more concise.
J
In my view, it's simpler in the key distribution, because there's not the overhead of the potential of symmetric keys. The key distribution itself works the same way for both presentation to the RS and to the AS: it's the client saying, here's my key, here's proof of it, this is what I want to use.
J
But despite that, there's work here that's aimed at deployment and use in the wild. I know there are other groups building on top of OAuth, but this is specifically geared at deployment and usage, while the proof-of-possession key distribution stuff has taken on a role of being an underpinning for additional work within ACE, I guess,
J
and RTCWeb. So there's some tension there, but this was maybe an end-around from that work, because it has a more explicit and straightforward goal, including no symmetric keys; there's no mechanism for proof of symmetric keys, and trying to account for that significantly complicates things, as we've tried to write up in the document.
I
So I guess a question for the working group is: is there a significant enough win, in terms of simplification, to warrant having a mechanism for proof of possession that is specific to asymmetric signatures, or does the proliferation of proof-of-possession methods outweigh that simplicity gain? The other comment I have on this: the draft mentions being able to bring your own data, but it doesn't really get into exactly how that would fit in.
Q
We set out and said: okay, we could use WebCrypto and create a non-exportable key in the browser, and these would be the mechanisms by which we could do it. That's why there are no symmetric keys: there is no way of protecting a symmetric key in the browser while being able to do proof of possession. So this describes how we can address that particular use case; this lays out a way to do it, if the working group wants us to refactor.
Q
Yes, we did take the ideas from Justin's draft, which is not currently active; I believe it's expired, and it's expired for reasons. So it is entirely possible to profile: should we reconstitute that document, we could profile it. What we specified is a subset of that document, and arguably the way that we're publishing the key to the authorization server at the endpoint is a subset of key distribution, but we're not distributing a key; we're using a very small subset of that.
Q
So could we do this as profiles of those documents, should those documents actually be active? Sure. But this is essentially our core use-case document. If you want us to refactor it to use other specs, okay, but those other specs have to actually exist and be making progress. This is a concrete use case, something that people actually want to do, to bind tokens. As we talked about in Aaron's presentation, there is currently no way for you to actually do proof of possession of the refresh token in the browser.
Q
Without a mechanism like this, people can't actually protect the refresh token in the browser, so this is a real use case. I think we need to actually address the use case, and we want to address it in a very narrow sense. And yes, we took out all of the other stuff that addresses a bunch of other use cases so that we could focus on the minimum that we need to do this. If we want to refactor this to point to the other specs, I'm perfectly okay with doing that.
F
Mike Jones, Microsoft. I'm actually representing right now our engineering teams, who did a detailed analysis of whether and how they would use the spec, and they had three sets of comments. One was: they want to use different proof-of-possession keys for access tokens and refresh tokens. It's my belief, from a cursory analysis, that they could just use different keys in the requests and have that work, since the keys are not maintained as state between different requests. Off the top of your head, am I right? Yep.
F
They were a little bit surprised by the tokens issued being bearer tokens with, secondly, a proof that's distinct from the token, where it's up to the recipient to check the proof using information in the bearer token to verify that it is actually attested. Now, I understand the reasons for all of that: by adding this proof header, you could continue sending the access tokens per 6750, using Authorization: Bearer, even to resources that don't understand the proof.
F
They would actually be happy with that, because they don't want to just add information to the existing header, since some resources would break even though they're not supposed to. Yeah, thank you; I will report that back to them. And the third thing, on which I filed an issue several months ago: despite some of the sentiments in the working group, for years we're going to continue using the implicit flow, and that's not up for negotiation.
F
Fine, but I'd also like to talk to you and some of the authors about what, technically, you think that binding should look like, and I want to get it written down. It's a working group decision whether to adopt that piece of it or not, but I have a mandate from my employer to create it, either in this draft or as a separate draft, if we're ever going to use any of this. I just wanted to put that on the record.
G
Justin Richer. To that last point, I would be unhappy to see it in the same draft, so plus one to Brian's point. All right, since my name was tossed around a bunch, I figured I should get in the mic line. Having tried to spec out HTTP signing and implemented a number of different signature methods, one of which we're gonna hear about later: it's really hard. It's really, really, surprisingly hard. I'm with Annabelle that there should be a few additional fields, but I mean, that's it: you have a structure.
G
We can add that; we can specify that; that's great. Signing headers and signing the body are very important in other types of requests. Yeah, totally; I really like this approach. I agree with Hannes in principle that having a server-provided key, to enable symmetric keys and things like that, helps a lot of different use cases: embedded use cases especially, because of key generation costs, but potentially other use cases as well. And even having the server generate an asymmetric key pair makes sense, I know, but it makes them...
G
Ultimately, if this is cryptographically agile enough, I think those are details that will fall out. The mode of the client presenting and proving possession (that's an important part: proving possession) of the key that it is saying it's going to use is a really, really important pattern, and that actually solves the security use cases for the majority of the ways that we want this thing to get used. Because if the server provides a key, the client hasn't yet proved that it can actually use that key to do anything.
G
I like this pattern so much that I also invented it, because this is how key proofing of possession currently works inside of XYZ. So it's the same kinds of structures. And yes, before Hannes asks: I was aware of my other draft for HTTP signing, and I purposefully didn't use it for this, because that is a really complicated thing, and there really has not been enough energy in the working group to move that forward, possibly because we have Cavage signatures.
G
We have a bunch of other things that have been making progress that people could use instead. Ultimately, I don't care where that draft lives. If we were to take all of the proofing stuff in here and paste it into the HTTP signatures draft and call that the new version, fine, you know, whatever. And I think that we might be able to do that.
U
Roman here. It isn't the chairs' responsibility to take a new technical position on what we just talked about here; that's for the working group to talk about. What I would strongly urge, before we talk about working group adoption, is that we make sure we better understand the equities of perhaps other working groups and put them on the table. I actually don't know what all those equities are; let's make sure we check and surface them as we reason about what to do here. And ditto:
U
I heard again (this is kind of my first real meeting with this working group), as we talked about past decisions, the fact that we don't remember what we decided. That comes with the same ambiguity, so let's just surface that as we figure out the scope and boundaries of some of these drafts. So again, let's surface what the equities might be in the other working groups. Okay.
P
What is the rationale here? As you might know, in JAR we have a mechanism that ensures the integrity, authenticity, and confidentiality of the authorization request, by essentially signing all that stuff. The problem is, if you put the signed JWT in the authorization request by value, for example, you would put it in the request parameter, and you end up with very lengthy URLs.
P
There might be a lot of scopes in them, for example, and the JWT itself is very long, even more so if it's encrypted. So there's a method to transport it by reference, called the request URI: you put the JWT somewhere at some URI, and then in the authorization request you have the parameter called request_uri that refers to that URI. So the AS goes to that URI and takes the JWT from there. The problem with
P
that: it does not say where. The JAR draft says, put it somewhere; it can be on the client, it can be on the AS, it can be somewhere else, and it leaves open how to do that. Yeah, that's right. So this is kind of left open in the draft, and the pushed request object is essentially a description of how to do that. The idea is to move the responsibility for managing these request objects to the AS, by creating a new request object endpoint.
P
So the client calls this endpoint and delivers the request object, say with a POST request. The client is then provided a unique URI referring to the request contents, and this is then used as the request_uri parameter. Essentially, the JAR draft already foresees exactly that (I think there's a sentence somewhere outlining this idea), but this makes it more concrete.
P
There could be two modes: the request object could be stored as a JWT, if you need features like signing or encryption, or it could be a raw request object just in JSON format, to keep it more simple. This would look like this. This is the POST request to create the resource, the request object: you would send a POST request, and in the POST request you would have a JSON object with all the parameters that you otherwise would find in the authorization request. You could also have this as a JWT.
P
Then you just post the JWT somewhere. In the response you get a request URI containing enough entropy that it's unguessable by somebody else, and then, when you send your authorization request, all you need to send is one single parameter, request_uri, containing whatever you received before as the request URI. That is the mechanism. As I already said, it has some advantages: there's no request object management on the client itself.
P
We can have client authentication at the endpoint where the request object is posted, so you can refuse unauthorized clients very early in the process, and you can also have patterns where the authorization process itself relies on the identity of the client, because you are essentially proving the identity of the client. And because we now have the option to transport very large authorization requests, this is also a good foundation to convey something like rich authorization requests, aka structured scopes, where you put a lot of information in them.
K
I think all of these are very good points; this makes a lot of sense. I just wanted to point out that this is almost exactly what I implemented when Justin first started talking about XYZ last year.
Q
You know, things have developed over time, and this pattern has become clearer and clearer as being preferable. We left JAR open to this pattern, but didn't include a whole bunch of new text. Had the timing been different, this might have gone into JAR, but we don't necessarily want to reopen and extend that while people are...
I
Annabelle Backman, Amazon. I want to point out that this is very similar to what the device authorization draft defines, in the sense that we're pushing a request up to an endpoint on the AS, we're getting a URL back, and then we're opening that up; or, in this case, you're adding it to an authorization request, but ultimately you're getting a URL and you're opening it up. The only real difference is the other direction: how the response then gets back to the client.
G
Thank you for that introduction to that development. Justin Richer. So, you know, obviously this pattern once again shares a lot with XYZ, because I stole all the best ideas from everybody and put them in one thing. A lot of the motivation was noticing that there was a lot of overlap between things like the device flow and request object registration, but when I actually went to try the exercise of collapsing those two within OAuth: you really can't do it.
G
There are bits and assumptions made on both sides such that, if we were to try to twist this into the device flow, or make the device flow like this, it would break the models for both of them. So, unfortunately, I think that if we do want to bring this in as an OAuth 2 extension, it does need to be its own thing, and that is a shame. But to me that is also motivation for a next-generation protocol that doesn't have those same assumptions and that can actually split these apart.
G
But you know, if we bring this into the IETF, that will all obviously need to be aligned. I think we should bring it in; I think it fits under the group's umbrella. And just to reiterate: I think this is more motivation for working on a new protocol that also does this stuff, but in a cleaner and better kind of way, without all of the legacy baggage that this, of necessity, has to deal with.
S
So, just a couple of clarification questions. What is the scope of this document? They're sending the request URI to the authorization endpoint, so what actually is covered by JAR? So that's out of scope for this document. The scope of this document, I think, should be just pushing the request object to the authorization server and getting back a response which includes the request URI; so, small and concise. Yeah, yeah. You know, I just wanted to make sure that people understand.
U
So, Roman again. Can we again clarify the hum we're about to do? It is that someone who participates in the IETF community has an idea and would like feedback from the community, in a very concrete way, that he and his team should work a lot harder and bring something to the IETF, which then we'll...
I
So this is gonna be interesting. I just looked, and like half my slides are missing, so that's cool; I'll kind of wing part of this, that's okay. So what I want to talk to you all about is how we do HTTP request signing in AWS, using a format that we call Signature Version 4 (guess why). So, briefly: why, and what is this all about?
I
There are three specific things we're looking for, three specific things that we get out of request signing in AWS. First is authentication of the client that's making the request. Second, we do it for message integrity, which means we're actually signing a significant number of the different parts of the request, as you'll see. And then we have some element of replay prevention that we get out of this as well.
I
So those are the three. Two: I will walk you through what this Signature Version 4 algorithm looks like. There are three basic steps: we canonicalize the request; we then use that to construct a string that we're going to sign; and then we sign that string and stick the signature in the HTTP request. I'm going to go through this part a little bit quickly, because if you really, really want to see it, you can go check it out in the public documentation, but just to give you an idea of what we're talking about, I'm gonna go through this.
I
So, to start with, let's talk about how we canonicalize this thing. At the top there I've got kind of an example HTTP request to an example service; at the bottom is the format of what goes into this canonicalized version of the request. Like I said, we have the request method, and we have a canonicalization that the docs call a canonical URI.
I
What it really is is a canonical version of the path component of the URL that you're hitting. Then we have a canonicalized query string, canonicalized headers, and a list of the headers that you are actually signing, and then we also include a hash of the request payload in that. So if we just kind of walk through this to make it clear: we start with the method, then we canonicalize the path. We provide some pretty specific instructions on how you canonicalize that, and we do reference some RFCs for that.
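The canonical request just walked through can be approximated in Python as below. This is a simplified sketch, not a drop-in SigV4 implementation; the real AWS documentation specifies many more encoding edge cases (double-encoding of path segments, multi-valued parameters, header value folding).

```python
import hashlib
from urllib.parse import quote

def canonical_request(method, path, query_params, headers, signed_headers, payload):
    # Query parameters are percent-encoded and sorted by name
    canonical_qs = "&".join(
        f"{quote(k, safe='-_.~')}={quote(v, safe='-_.~')}"
        for k, v in sorted(query_params.items())
    )
    # Only headers listed in signed_headers are included: lowercased,
    # value-trimmed, and sorted
    lower = {k.lower(): v.strip() for k, v in headers.items()}
    canonical_headers = "".join(f"{h}:{lower[h]}\n" for h in sorted(signed_headers))
    return "\n".join([
        method,
        quote(path, safe="/-_.~"),
        canonical_qs,
        canonical_headers,
        ";".join(sorted(signed_headers)),      # the list of signed headers
        hashlib.sha256(payload).hexdigest(),   # hash of the request payload
    ])
```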
I
And I think I actually skipped a step in my example there, but that's okay. Oh no, I didn't; okay. So the actual string to sign has a few different components. First, we specify the exact algorithm we're using; in this case it's this tag, AWS4-HMAC-SHA256. All of SigV4 uses that, but it could change. We put in a timestamp for when we're making the request. We put
I
in this thing we call a credential scope, which relates to how we derive the key that we're going to use to sign this. When we derive that key, we bind it to the day, the AWS region you're communicating with, and the AWS service you're communicating with. And then, finally, there's the hash of the canonical request we just created. So you create that, and you sign it with a derived key.
I
Like I said, you do a bunch of chained HMACs to generate this key, and then you sign your string to sign and you have a signature; and then you stuff that, generally, into a header in your HTTP request, and you're good to go. And that's the end of what my actual slides contain, whoops. Now let's talk about why we do some of this stuff and some of the lessons we learned. I want to start with this. So, three minutes? Cool, all right.
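The chained-HMAC key derivation mentioned above, which binds the signing key to date, region, and service (the credential scope), can be sketched as follows. The function names are illustrative, and the exact inputs are specified in the public SigV4 documentation.

```python
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    # Each HMAC step folds in one component of the credential scope
    k_date = _hmac_sha256(("AWS4" + secret_key).encode(), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

def sign(string_to_sign: str, signing_key: bytes) -> str:
    # The final signature is a hex-encoded HMAC over the string to sign
    return hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```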
I
Another thing we ran into over the years: protocol agility. We started out with a protocol that was very query-string oriented, and so we were just kind of signing query-string parameters. Over time that changed: we needed to put stuff in the body, we needed to put stuff in headers, etc. We needed to extend the format to enable us to sign more of the HTTP request as a whole, because different protocols running on top of HTTP are gonna put different value on different parts of the request.
I
Canonicalization: that was a big thing. I mentioned that word a whole lot of times there, and if you look at our documentation, it is very specific about how we canonicalize these things, because you have to be. Even with something like URL encoding, you can't just say "go follow this RFC", because, guess what, the RFC changes, or the recommended list of reserved characters changes, and then you have to deal with the fact that older libraries aren't doing it
I
that way and newer libraries are. You can't just say "URL-encode" and assume that the API is gonna do it right, because if you look at languages (take JavaScript, for example), there are like three different encoding-related methods you could call, and they all encode things slightly differently. So we have to be very specific about that sort of thing.
I
All right, yeah: going back to signing different parts of the request. There's a reason that we specify which headers we're signing, and there's a reason we specify explicit ordering for query-string parameters. It's because, again, here it's not so much an issue of how things are getting canonicalized, but an issue of middlemen between the code that's actually trying to make the request and your server that's going to receive it.
I
So what is the client signing? They don't necessarily know if this request is going over HTTP/1.1 or HTTP/2.0; it might change over as it's sent over the wire, so they have to deal with that as well. Thankfully, because of the flexibility in our protocol, where you specify which headers you're signing, we were actually able to deal with that just with some code changes and documentation changes; we didn't actually have to rev the format, so that was kind of a positive there.
I
So that's something we have to think about. Lastly, I mentioned the request body and the fact that we sign a hash of it, not the full thing. That's, again, why we're on version 4, not version 1 or 2 or 3: that was something where we realized, hey, this is not a great idea. So, another reason for flexibility in which parts of the request you're signing and how you're signing them. And that was my whirlwind tour through AWS SigV4.
A
The enclosing JWT doesn't have its own claims, and so the goal is to extend that concept and allow the enclosing JWT to have its own claims. There are two use cases that I'm aware of. One is a native app that interacts with two different authorization servers, where the native app is aware of the first authorization server but not of the second one. And because we don't have time, I'm not going to go through
A
the flow itself. The second use case is one where, actually, it's now too late to affect that document, because I think it's already through; it uses that same concept of a nested JWT, one inside the other, with two different sets of claims. And we've talked about
A
(I'll talk about the example): we've talked about defining a content type and a new claim to kind of define it.
H
Not enough people, I would say, have read it for us to make a call for adoption. Yeah, I think we will; okay, yeah. So it's...
U
Can I have a last word? So, administratively, coming in again, no value judgment on all the work: what really always excites me is the tight loop of people who have code, doing this thing, saying "I need this addition", and that kind of fast interplay is phenomenal. But from the governance artifacts we have,
U
we need to make sure, again, that we know where we're going; that's my primary driver here. And one of my key things is that there are folks who don't participate here, and they don't understand that a document being at a gate doesn't mean we're not doing work. So our ability to talk about future milestones signals to the community that we have a lot of work to do and to keep watching this space. That's...