From YouTube: IETF115-OAUTH-20221107-0930
Description
OAUTH meeting session at IETF115
2022/11/07 0930
https://datatracker.ietf.org/meeting/115/proceedings/
B: Okay, I think we're ready to go. It's 9:30; good morning, everyone, and welcome to the first session of OAuth. Great to see you again here, so let's get going. There you go, the Note Well. If you're not familiar with this, please take a minute to review it. It governs everything that we do at the IETF, so it is important that you understand what's going on there.

B: So it's important that you keep that in mind while you're here. Please wear a mask during this time unless you want to speak, and if you speak, please speak up. For remote attendees, please use the full remote client, and make sure that audio and video are off unless you want to say something. We highly encourage you to use a headset if you're remote.

B: Okay, so we have four sessions: two official sessions, where the first one is obviously today, this one, and the second one is on Wednesday at one o'clock; and we have two side sessions, the first on Tuesday at two and the next on Thursday at two, at the same time and in the same room. For the one on Tuesday we have allocated an hour and a half, but we'll probably use only the first hour, because we probably have a conflict with another working group. But that's for the side meeting. Okay, so: working group updates.
C: Good morning. Roman Danyliw, responsible AD. I guess it says something that, as I got up, I didn't feel like I could leave without grabbing my cup of coffee, and then thought: wait, no, that's not a good idea. So, in the RFC Editor queue, it's in MISSREF; it's waiting for the security best practices.

C: And then, just since I'm up here, a quick comment: the DPoP document is now in the revised-I-D state. So we're moving a...
B: ...little bit, yeah. Awesome, thank you! Yeah, we got the feedback from you, Roman, I appreciate that. So, DPoP: yes, exactly, we're chatting about this in the side meeting, I think tomorrow, right? Yeah, yes, 2:30, we'll be there. Awesome. And it's waiting for the security BCP. Actually, on the security BCP: Daniel, I think you had some comments that you want to incorporate, and Mike, I think you have some other comments that you want to contribute too. So, yeah, if you can get those in as soon as possible, let's get this going. And hopefully after that we'll give Hannes a chance to review it and push it forward as the shepherd for this document.

B: And so, our agenda for today: besides the chairs' update, we'll start with Aaron giving us an update on browser-based apps and OAuth 2.1. Then Daniel and Kristina will talk about SD-JWT, and Brian will give us an update on Step-Up Authentication. And [name unclear], who is remote (hopefully he's on the list; yeah, I see him there), will talk at the end about the interactive authentication document that he submitted just recently. So that's for Monday. For Wednesday, I'll be talking about JWT embedded tokens.

Atul will be talking about fine-grained transactional authorization. Pieter will be talking about his new draft, submitted recently, on cross-device flows, and Kristina will talk about client IDs for anonymous clients. So that's Wednesday. Side meetings: the DPoP AD review, where there are a few comments that we're going to discuss directly with Roman tomorrow, and there are also some questions and comments from the FAPI working group at the OpenID Foundation that we want to talk about with Roman.

We also want to talk about the OAuth working group GitHub; I think Aaron helped us get this going, so Aaron will talk about that. Atul will dig deeper into fine-grained transactional authorization, and then Aaron will have more of a chance to talk about browser-based apps and OAuth 2.1. I will also add the security BCP for Daniel to talk about. And I think that's all we have. Any questions or comments?
E: There you go. All right, good morning, I'm Aaron Parecki from Okta, with two drafts to talk about today: OAuth 2.1 and browser-based apps. We'll start here. I decided to follow Brian's tradition of including photos from the location in slide decks. I think we should all do that, so that is my personal challenge to everybody else.

If you're not familiar with this, it's essentially an effort to consolidate the existing OAuth 2.0 drafts and best practices into a document that is easier to read and more up to date on a number of fronts. There's quite a lot of language, and there are references, that are now quite outdated in OAuth 2.0, which just celebrated its 10-year anniversary last month; it's been a while. So, since the last time we chatted, there have been a few updates to the draft.
E: The abstract has now been updated to remove the term "third party", since I think a lot of the use of OAuth today is actually first party as well as third party, so now it just applies to everything.

Also, part of the motivation for using OAuth is to enable things like multi-factor auth and passwordless auth, so that's now added to the introduction. And, as we talked about last time, pushed authorization requests are an interesting kind of exception to the requirement, described elsewhere, that redirect URLs need to be registered everywhere.

That is now mentioned as a way that registration can happen, because it's effectively on-the-fly registration of the redirect URL. There's a mention that if your token endpoint is going to be used by browser-based apps, you'll need to support the CORS headers, although that's something else we're going to have to talk about in a little bit. And there are some updates to references and little fixes here and there. So, overall, not a ton of changes.
E: Since the spring meeting, these are some of the changes as well. I think we talked about all of these last time in Philadelphia, if you were there, so I'm not going to go over them one by one. I do have links on the slides to the GitHub repo, which covers all of the issues that were closed or discussed during this round of changes, and there's also a link to the diff from draft five to seven, which covers July until now. You may notice that it's at a new GitHub org, which we will also talk about in the side meeting.

A couple of things are still on the to-do list; some of these have been there for a long time, because they're just big tasks, and any help is appreciated on any of these. We're still working through Justin's and Vittorio's write-ups from the original draft of this. A lot of that is now dealt with, closed, and revised by various things, but there's still a little bit more left in those write-ups to get to. There's still the rather large task of identifying any normative language that may be sitting in sections like security considerations and moving it, where appropriate, back up into the main part of the draft. And then one thing that we identified at one point (I don't remember at which meeting) is adding an explicit section talking about the core differences from OAuth 2.0 and which changes are breaking changes, and for whom, because, essentially, if you are following the best practices today, then it already is 2.1.

There are a few more issues that I didn't pull out specifically here; feel free to take a look at the GitHub repo if you are interested, and we'll be working through those on GitHub now. A couple of things I did want to talk about here in particular are things that I either felt were particularly relevant to a synchronous group discussion, or, in one case, something that needed to get pulled from the side meeting to the main meeting.
E: In one of the meetings we identified that the token endpoint needs a mention of supporting cross-origin resource sharing, so that was added. That's really the only endpoint in the core draft, but there's a whole bunch of other endpoints, defined in extensions to OAuth 2.0, that are pretty commonly used and widely deployed and that will also need cross-origin resource sharing support, and I think it's worth mentioning that in particular in the core draft.

There may be extensions that require this as well; that would be things like the metadata endpoints or the PAR endpoint, those kinds of things. In addition, the authorization endpoint itself, which is visited by the browser rather than used by JavaScript directly, actually should explicitly not allow cross-origin requests, because bad things can happen if it does. Which means, essentially, there should probably be a CORS section in OAuth 2.1 that talks about this as a whole, instead of just one mention of it somewhere, and in OpenID Connect, where this discussion has also been happening. I think we need a general agreement on the language that goes into both of these, which also applies to the security BCP. So, essentially, my thought is to add a section about CORS: mention explicitly not allowing CORS on the authorization endpoint, and say that to support JavaScript apps you'll need to support CORS at the following endpoints, which are either defined in OAuth 2.1 or in extensions that are already referenced in the draft.

These are the four that I found quickly. If anybody knows of any others that are worth pointing out specifically, suggestions are welcome. That links to issue 133, where I'm collecting this stuff right now, so feel free to go in there and add suggestions if you have any other ideas for things that would be useful to mention around general CORS recommendations for OAuth.
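(A minimal sketch, not from the session, of the per-endpoint CORS policy being described. The endpoint paths, the function name, and the exact endpoint list are illustrative assumptions, not the definitive set collected in issue 133.)

```python
# Endpoints called directly by in-browser JavaScript need CORS headers;
# the authorization endpoint is only visited via redirects and must NOT
# allow cross-origin requests. Paths below are illustrative.
CORS_ALLOWED = {
    "/token": True,       # token endpoint (the one in the core draft)
    "/par": True,         # pushed authorization requests (RFC 9126)
    "/.well-known/oauth-authorization-server": True,  # metadata (RFC 8414)
    "/revoke": True,      # token revocation (RFC 7009)
    "/authorize": False,  # authorization endpoint: no CORS
}

def apply_cors(path: str, origin: str, response_headers: dict) -> dict:
    """Attach CORS headers only to endpoints meant for direct JavaScript use."""
    if CORS_ALLOWED.get(path, False):
        response_headers["Access-Control-Allow-Origin"] = origin
        response_headers["Access-Control-Allow-Methods"] = "POST, OPTIONS"
    return response_headers
```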
B: I just want to ask you to... all right, can you state your name?

G: Darrel Miller from Microsoft. You mentioned JavaScript apps, but I'm just wondering, in this world of Wasm now, how CORS interacts with Wasm and whether we should say that more broadly.
H: Brian Campbell, Ping Identity. I'm totally on board with all this; I think it's a great way to go. Along the same lines of trying to explain the difference between "used by browser-based apps" and "used directly by browser-based apps": the authorization endpoint is going to be used by these apps, and it's the nature of the use that I think is the reason for not exposing CORS on it. ("Coors": that's the beer from where I'm from; it's not really beer, of course.) That, and how it's used, is sort of the heart of the issue, and I think everything you're saying is right on, but actually really explaining how that works in good language, which I don't have but am suggesting you write, is good. So thanks.
A: Mike Jones, Microsoft. As I was doing the errata edits for Connect about this, I was the one who came up with "used directly by" versus "used via redirect", but I would love somebody to give me a short phrase which connotes using HTTP verbs such as GET and POST.
E: Yeah, I agree. I think we'll have to do a little bit of work to figure out the specific language for it, but I agree that we need to be very clear about whether the browser is the one making the request, or whether it's JavaScript, or other languages that run in a browser, making the request itself directly. I don't know the right term for that, but it sounds like we're at least all in agreement on the concept, so, cool. Feel free to add suggestions on that GitHub link if you have any thoughts about language that would be useful in that section. Cool, okay.
E: Next, issue 54: this was an older discussion that we brought up during the side meeting the last time we met. We had a brief discussion about it, and then it continued to get a comment or two on GitHub.

At the token endpoint there's a list of parameters required when exchanging an authorization code, and one of them is the redirect URI. There's language in the original spec that says to send the redirect URI to avoid certain types of mix-up attacks, and it's arguably not a bulletproof solution by any means even in the original spec, but it has zero technical purpose once you are using PKCE; it doesn't actually do anything else. PKCE solves the attack that the redirect URI check barely solved to begin with. So my question was: since we're cleaning this up, can we just leave it out, because it doesn't do anything? And there was a discussion. We do want backwards compatibility, of course; we shouldn't make it so that OAuth clients suddenly stop working if they're sending the parameter, for example.

So the question is: how do we support that, while also not keeping it required just for the sake of it, because it's always been there? The options range from simply not documenting it, and letting people send it or not, to making it required that the AS accept it, and also verify it if sent, but not require it to be sent, which feels like the right way to do the backwards compatibility, I think. Another option is to just leave it alone and keep it as a required parameter, because maybe that's the simplest, and anything else adds new problems. I'm on the fence about it, but it feels like an opportunity to clean things up, because every time I'm explaining this to a developer, it's "send the redirect URI parameter because, I don't know, we've just always done it this way", which is not terribly compelling. So I would like to actually have a quick discussion here about this.
A: Mike Jones, Microsoft. I suspect that some code is going to break when it's not sent, because it's expecting it to be there, and I think that's probably enough reason to keep it, either as recommended or required. I would be fine with a note being put in saying this is effectively redundant when you're using PKCE.
E: I think, for the backwards compatibility discussion, you have to consider which end it's breaking on. For the AS: obviously, if the client's not doing PKCE, this check is marginally useful; but if the client's not doing PKCE, then you're not doing the OAuth 2.1 flow to begin with. So if the AS supports OAuth 2.1, or requires it for a client, it feels like it's okay for it to just ignore the parameter and not require it, because if you're already opting into the new behavior in the spec, then there is nothing to break there. But yes, for an OAuth 2.0 authorization server: obviously we're not saying that 2.1 clients are expected to work with 2.0 servers, because that's not...
J: Aaron, what type of compatibility do you want? Would you want a 2.1 client talking to an old 2.0 authorization server? Would that be something that is actually okay? Because that's where the problem shows up.
E: I believe so. If an OAuth 2.0 client was talking to a 2.1 server and it does send the redirect URL, we don't want the AS to reject it; it should ignore it, because it doesn't serve any purpose, which is already what it's supposed to do with unknown parameters, basically treating it as an unrecognized parameter. And then, if an OAuth client opts into the 2.1 behavior, it should work with a lot of servers that are following the best practices.

It sounds like that's option two here on my list: making a note that if the client does send it, then the AS should also check it, because that's how it can support older clients.
K: Yeah. Daniel Fett, yes.com. Just saying: this is obviously not something we will do in the security BCP, because it's not security related, so we cannot really say there "you may omit the parameter and everything will be fine."
L: If the point of 2.1 is to take 2.0 and apply all the best practices, it doesn't feel like removing it would create a new world. And that's despite me wanting to get rid of this, because it makes the APIs for consuming callbacks oh so ugly: you always have to pass the redirect URI, and you end up trying to do magic in browser-based applications by putting your current URL in there. I feel like the ship has sailed on this, and rather than create a whole new world, I would just keep it as is.

Also, like you said: if I have an existing OAuth 2.0 server and it applies all the security best practices, and then a 2.0 client talks to it, it should just work. Even against 2.1 it should just work, because it has all the BCPs applied; but if the client doesn't send the redirect URI, it wouldn't.
E: That actually brings up an interesting point. I'm pretty sure there's nothing else like this yet, in terms of things that 2.1 defines that are not described by the security BCP or other drafts. If there are, though, then it seems like we would need to go down that road of a discovery flag, and then it would be okay to include this. I don't think there are any other things like that yet, but it seems worth double checking. Justin?
M: Justin Richer. If we do option two, like Aaron's suggesting, we don't actually have to do discovery, because no matter what the client does, the AS is going to do the right thing if it's a 2.1-compliant AS. If the client doesn't send the URL, the AS is going to be fine with it; if the client does send the URL, the AS is going to check it and enforce that it's the right thing.
E: That works great for the 2.0 and 2.1 client talking to either a 2.0 or 2.1 AS. However, it doesn't work for the 2.1 client talking to the 2.0-plus-BCP AS; that's the combination where it breaks. If we're okay with it breaking in that direction, then it's fine. That's the only one of the four combinations that breaks.
L: I wasn't sure if we're okay with that break, and personally I'm not. But if the consensus is, you know, "let's just document it"...
E: My gut feeling is that it's okay, because clients are frankly less likely to update in general unless they need to for some reason; if things work, they're just going to let them keep working. So if they do want to update, and they update to something labeled 2.1-compatible, or to a 2.1 client, then they won't be expecting it to necessarily work with a 2.0 server anyway, just because there are probably going to be other things that break, since "2.0" is such a broad term to begin with. So it feels like that's the okay direction, and I agree I would be less willing to have the AS be the side it breaks for. All right.

Maybe the other thing to note here, for the action items: once I do add the section that talks about all the changes from OAuth 2.0 and who they break for, let's revisit this, because it may make sense. It may turn out that it's not a big deal because other things break in that direction, or, if it's maybe the only one that breaks in that direction, we can change our mind about it. I feel like once we get that full picture, we'll have more context for it.
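(A minimal sketch, not from the session, of the "option two" behavior discussed above: the AS verifies redirect_uri at the token endpoint only when the client sends it. Function and field names are assumptions for illustration.)

```python
def check_token_request_redirect_uri(request_params: dict, code_record: dict) -> None:
    """Accept-and-verify-if-sent: backwards compatible with 2.0 clients."""
    sent = request_params.get("redirect_uri")
    if sent is None:
        # A 2.1-style client relying on PKCE: the parameter is not required.
        return
    # An older 2.0 client sent it, so verify it matches the value from the
    # authorization request instead of rejecting the request outright.
    if sent != code_record["redirect_uri"]:
        raise ValueError("invalid_grant: redirect_uri mismatch")
```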
E: This is now draft 11, after picking it up again in July following some time away from it. We had a little discussion in July, so, a quick recap of what this draft is supposed to do: it's supposed to be recommendations for people who are building browser-based apps using OAuth, which are defined as applications executing in a browser, a.k.a. single-page apps.

The language is not actually mentioned in the draft, because it applies to any browser-based execution environment, and the app may include a back-end component that is part of the application's interaction, the application having its own sort of back end. Since July there's been some more reworking of the document. There are four architectural patterns called out in the draft, in terms of the different ways that you might organize or architect your app.

The single-domain architecture is just when your client runs at a URL on the same domain as the AS. These names have changed, but the patterns have been there for a while now. The backend-for-frontend proxy is the one where the back end basically has its own session with the front end, and the back end does everything: it acquires the tokens, it makes the API calls, everything.

There's also now a subsection of the pure-JavaScript, or pure browser-based, app that talks about using a service worker to manage this, because there are different considerations and different threat models when all of the token management happens from within a service worker versus in the DOM directly. So that's sort of a subsection of the pure browser-based app. And there are also now more notes about things to worry about if you are actually storing tokens in local storage; that applies both to the pure browser-based app and to the one where the back end acquires the tokens but passes them to the front end, which then deals with the tokens somehow.

So, if you have feelings about local storage and tokens, which I know a lot of people do, please make sure your feelings are represented in the document. There are no right or wrong answers for this; it is meant to capture the information that is known. This is probably one of the reasons why this discussion kind of stalled out last year: there are a lot of differing opinions about handling tokens in browsers. And again, the goal is not to say that one way is right or wrong.
B: A couple of things before we go into the changes. I think Johan and I talked about the issue of some people storing tokens in cookies. That is something that we, as a working group, don't recommend, but I don't think I've seen it documented anywhere. So would that be covered in this document?
E: I'm trying to think of where cookies are mentioned in here. Cookies are mentioned a couple of times; I don't know if there's any actual mention of using cookies as token storage, though.

Are there strong feelings, hopefully the same feelings, about browser-based code using cookies as a storage mechanism? I'm not talking about the token-mediating back end setting a cookie, or, sorry, the proxy version, where everything gets routed through the proxy; in that model you need some sort of cookie between the proxy and the client, which may be the token itself. I believe that case is already called out with some notes in the document. But what Rifaat is mentioning is the idea of JavaScript code using the cookie API in the browser to actually store things, which it can do. That's not really what cookies are for, but you can use them that way. Local storage is obviously the better solution for storing things in JavaScript.

Oh, actually, that is relevant to issue number two, which is that the draft needs an actual section calling out token storage techniques. There are a few different ways JavaScript apps can handle storing tokens, and it's regardless of how the token was acquired, whether from the back end or from the browser app doing OAuth itself. You could keep it just in memory, where it isn't actually persisted anywhere; you can use local storage, session storage, or cookies, and we'll recommend not using cookies. Then there are reasons to put it in local storage and reasons not to, and to just use memory instead, and both of those are fine in different scenarios. So we're not going to say "don't do one or the other", but we'll have to mention the considerations for both situations.
E: That's probably the last big section that's going to go in. Then, one of the last two new items, number six: go back through the security BCP to make sure that this draft is consistent with it. Hopefully not too much of it is duplicated to begin with, but I know there were some things that were at least phrased as "as described by the security BCP", and so on, just to make sure people are finding it; so that will be one pass to go through.

And I think that's mostly it, except for the new thing that just came up, which is a whole section of CORS recommendations, which I feel touches every draft that we're working on right now. The security BCP either does, or soon will, recommend that the AS not have the CORS headers on the authorization endpoint. That feels like the right spot for that one to go; it doesn't really apply to the browser-based app spec, but it does apply to the security BCP and 2.1, and to OpenID Connect. The browser-based apps spec probably should make another mention of all the endpoints that do need CORS headers to support browser-based apps properly; that feels like an appropriate place to put that. And then 2.1, of course, like I mentioned, should basically grab both of these from those drafts to make sure it's mentioned in 2.1.

So, nothing really new to talk about with CORS here, because we already talked about it in the context of 2.1, but it feels worth putting it into this draft for people who are reading it. So, yeah: basically two sections to add, the token storage section and the CORS section, and that, hopefully, is the end of this draft.

I have lots of work: a couple more sections. I would very much like to have these sections added, and any discussion about the specific text to happen over the next couple of months on the mailing list, so that by the time we meet next time there aren't any more planned changes. Awesome.
B: Perfect, thank you, Aaron. All right, thank you. Okay, Daniel: I think the slides might not be the latest, because I've just approved them; I just noticed that you sent them. But do you want me to... I don't know how to share them otherwise. Maybe you can send me the slides directly? Or maybe just display here. Let me share it, and tell me if this is the latest; do you know which slide...?

K: That's good; I'll try to adjust this a bit. Okay, hello everybody. We're going to speak about the SD-JWT draft: Selective Disclosure for JWTs.
K: This is mostly an update to our last presentation in Philadelphia, so if you're not familiar with selective disclosure JWTs, it might be worthwhile to read the draft. Next slide, please. Keep in mind that one of the main design features of SD-JWT is to be simple: simple to implement, simple to use. That's what we consider the main feature here. Next slide, please.

We did a lot of updates since last time. Most notably, last time this was still an individual draft, and now it is a working group draft, so thank you, everybody, for your support.

We updated the terminology used in the document to be, hopefully, clearer than it used to be. We introduced what we call a combined format; actually, we didn't introduce it, but we now properly name and explain it: the combined format for transporting SD-JWTs and other data.

We clarified what you need to do, as a verifier, to verify the signature on an SD-JWT and the data that is disclosed. We also, hopefully, improved our explanation of why we chose the specific encoding used in this draft; we'll get to that later.

We introduced a feature that was often asked for, namely blinding of claim names, and we now describe a processing model that we think will be useful to most verifiers in processing SD-JWTs. Thanks to Aaron, we also now have a repository in the OAuth working group GitHub project, or organization, or whatever it's called. That is the place to go if you want to see the latest, like the latest-latest editors' draft, and so on. And with that, I think Kristina will speak about the updated terminology.
O: Thanks, Daniel; you gave a great summary, but just to dive into a couple of details. Next slide, please. We received feedback that the objects and JWTs being sent could be better explained and better named, to be more intuitive. The actual SD-JWT signed by the issuer: that part has no changes in terminology.

One new part is an object that the issuer sends alongside the signed JWT. This object includes a mapping between plain-text claim values, salts, and now, optionally, claim names. It used to be called the SVC, the salt/value container; now it's called the issuer-issued disclosures object. So we're introducing this new concept of disclosures, which is essentially a mapping to all the plain-text values. This one is issued by the issuer, just the original disclosures, and it is not signed: it's an object, and it's never signed.

So it's an object, not a JWT. If you go to the next slide: when the end user chooses, out of those issuer-issued disclosures, which of those mappings the user actually wants to disclose, that object is now called the holder-selected disclosures JWT. I think it used to be called "releases", and now it's the holder-selected disclosures JWT. It's a JWT because, when holder binding is required, it can be signed; or it can be unsigned.

So, just a summary, in addition to what I have already covered, in terms of the actual claim property names: "sd_release" for the issuer-issued disclosures is now "sd_ii_disclosures", and for the holder-selected disclosures it's now "sd_hs_disclosures". You'll see this throughout the examples in the presentation. And the last bullet point is related to a topic I'll cover soon. If you go to the next slides... oh wow, these are the old slides, I guess. Let me skip them.
O: From the implementers we also received feedback, if you go to the next slide, that people wanted clarification of how these objects are actually sent, transported, between the issuer and the end user, and between the end user and the verifier.

So we introduced the concept of combined formats. The combined format for issuance consists of four dot-separated parts, where the first three, joined by dots, are the signed SD-JWT, and the last part is base64url-encoded JSON, which is this disclosures object.

And we introduce the combined format for presentation, which consists of six dot-separated parts when holder binding is required: the first three are always the issuer-signed SD-JWT, and the last three are the holder-signed holder-selected disclosures JWT, where the final signature part can be empty if no holder binding is required. So these are the two basic units that are transported between these entities, and how they're actually transported is out of scope; that's up to the transfer protocol. But just to clarify, this is the basic unit, and what I'm implicitly saying is that if you're sending one SD-JWT, you have to send one holder-selected disclosures per SD-JWT. That's an implicit clarification for some of the comments we've received. Next slide, please.
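(A minimal sketch, not from the session, of splitting the combined formats just described: issuance is a signed SD-JWT, three dot-separated JWS parts, plus the base64url-encoded disclosures; presentation appends the HS-Disclosures JWT, whose signature part may be empty. Function names are illustrative.)

```python
def split_combined_issuance(combined: str):
    # 4 dot-separated parts: header.payload.signature.ii_disclosures
    header, payload, signature, ii_disclosures = combined.split(".")
    sd_jwt = ".".join((header, payload, signature))  # issuer-signed SD-JWT
    return sd_jwt, ii_disclosures

def split_combined_presentation(combined: str):
    # 6 dot-separated parts; parts 4-6 are the holder-selected disclosures
    # JWT, whose signature (part 6) is empty when no holder binding is used.
    parts = combined.split(".")
    sd_jwt = ".".join(parts[0:3])
    hs_disclosures_jwt = ".".join(parts[3:6])
    return sd_jwt, hs_disclosures_jwt
```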
O: We also received feedback that the original individual draft, if you look at the next slide, only supported a basic hashing algorithm, but some people wanted to use an HMAC or do something a bit fancier: for example, using really advanced cryptography with really, really small salt values while achieving the same level of security as a really complex hashing algorithm. So that's been expanded.

Throughout the text you will therefore see changes in terminology from "hash algorithm" to a more general "digest derivation algorithm", but SHA-256 is still the mandatory-to-implement hash algorithm, and all the security guidance related to, you know, the minimum bytes that have to be used: that part has not changed.
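(A minimal sketch, not from the session, of the two kinds of "digest derivation" under discussion: an unkeyed hash versus an HMAC with the salt reused as key. How the key would be extracted interoperably is exactly the gap raised in the next comments. Values are illustrative.)

```python
import hashlib, hmac

raw = '{"s": "6qMQvRL5haj", "v": "Peter"}'  # an illustrative salt/value string

# Unkeyed hash: SHA-256 is the mandatory-to-implement algorithm.
digest_sha256 = hashlib.sha256(raw.encode("utf-8")).hexdigest()

# Keyed variant: treating the salt as an HMAC key, which the draft allows
# in spirit but does not specify how to do interoperably (the salt sits
# inside the very string being digested).
key = b"6qMQvRL5haj"
digest_hmac = hmac.new(key, raw.encode("utf-8"), hashlib.sha256).hexdigest()
```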
J: The term "digest derivation algorithm" is a little bit unusual, and I see two types of algorithms: one is an HMAC, which is a keyed hash function; the other is an unkeyed hash function like SHA-256. So in which direction is this going?
O: The intention was to accommodate both by using the terminology "digest derivation algorithm", which was originally suggested, I think, by Christian Paquin, the cryptographic researcher. But if this is not intuitive enough, the purpose does not change; it would just be a terminology change, to be honest. The way we clarified it in the spec is that the salt value's role, obviously, changes: we're using the term "salt" throughout the spec for consistency, but obviously, if it's an HMAC, it's the key, not a random salt value.
H: Apologies. The intention, I think, sort of does shine through, but the realization of this is really problematic from an implementation standpoint. You talk about interchangeability of the salt and the key between hash and HMAC, but actually doing that is not specified in any kind of interoperable way. And note that the salt exists underneath the string literal over which the digest will be computed, so to actually make that work for an HMAC, you would have to parse the string that's supposed to be considered opaque at that layer, pull the salt out, and then use it as the key, and then I don't know whether you're supposed to leave it in there for the computation of the digest. These are all questions that could be answered, but they're totally underspecified.

And then there's the name. I did some Googling, and the only results for "digest derivation algorithm" are this draft and the comment where Christian suggested it, so I don't think it's an actual established term; but that's sort of a side point. The names are also too long; in JWTs we try to keep things short, and "digest derivation algorithm" just... But I think the real problem is the applicability of the two different algorithms, to Hannes's point: it either needs to be built in a way that can actually accommodate the two, or not.

And I sort of question the need for it, based on things that Neil had said; he was worried about length prefixing, or length extension, I don't even know, but given that it's using JSON, it's not really a problem.

Referring back to the JWS "alg" for HMAC is also... conceptually I know what you're trying to do, but from a spec implementation standpoint it's not appropriate or interoperable at all. And then you also have, basically, a namespace here that's covered by two different registries, and, oh, you could also extend it yourself, so that doesn't really work either. I don't mean to be overly critical, but it's sort of like: the idea sort of makes sense; the realization of it is not there.
H: No, because there's a lot to describe, and I didn't know how to write it down, so I wanted to take the face-to-face opportunity to explain the rationale. But I will add a GitHub issue that at least mentions, maybe not the specifics, but the brokenness of it.

B: Thank you. So maybe, just for Hannes, because he's taking notes: can you summarize it in, like, a sentence?
O: If someone can file an issue, we can discuss there whether we want to clarify it or take it out; it's optional itself, right, because there are a few people asking for it. But I agree that, yes, if you take it out, then the treatment of the salt as a key becomes problematic.
P: Thank you. John Bradley. I'm going to largely agree with Brian. What you're trying to do here is sort of isomorphic to KDFs; you're just using the key, as the hash or as the digest. So I would probably try to name it similarly, because essentially, for key derivation functions, you have either a straight hash or an HMAC, etc. You're just using it for a different purpose, but they're essentially the same algorithms. So I would either take it out and just say you have to use a hashing algorithm, or you need a lot more specification, because, yeah, whether you leave the nonce in the thing that you're hashing gets...
O: Yeah, great; thanks for the feedback, appreciate it. Shall we go to the next slide? Yep. Okay, so the last slide from my part, I think: signature validation. This is for the holder-selected disclosures JWT, and as I think we've said before, it can be signed or it can be unsigned. So we updated the validation section to make clear that the verifier verifies the holder-selected disclosures, because that is crucial for the security of this mechanism: that the verifier actually, you know, does all the validation, computing hashes and whatnot.

So it should not be passive, in the sense of "if it's signed, I'm going to verify it; if not, no". The verifier must have a policy for whether it requires signing on the holder-selected disclosures JWT or not. If it requires signing, meaning it requires holder binding, then it validates that the signature on this HS-Disclosures JWT is made with the key signed over by the issuer, so the holder, the user, is proving control of the same key both during issuance and during verification.

If that feature is required and the HS-Disclosures JWT is not signed, the verifier must reject it: if you wanted it to be signed and it's not signed, reject. If, for whatever reason, trust framework or policy, the verifier is okay not having holder binding, and there are legitimate use cases that are okay with that, then, if that is your use case, you could accept the JWTs using the "none" algorithm. That's the clarification of the edit. Let's see: Brian?
K: Yes, there are use cases where you don't need holder binding: you just want to know that a document exists that was signed by an issuer for some end user, and you don't care whether it's this particular user.
I: Tony Nadalin. The question is: is the holder binding meant to do device binding also? There is a little bit of a difference there, so I'm just trying to understand whether you want this to cover device binding or not.
K: Okay. Now, a word on the encoding that we're using. I think this is actually the old slide set, is it? Yeah, anyway, let's try with that. So, one problem... maybe next slide. This was actually... oh, so we have to... So, yeah, there are a couple of slides that we didn't want to show, and we marked them as hidden, but they are in the PDF.

G: Anyway, yeah.
K: We'll skip a few slides. Okay, but this one is good. When an issuer creates the SD-JWT, it takes the data, in this case an address claim with the street address, locality, and so on, then transfers that to a byte string, obviously, in order to hash it and sign it. So that's a very simple process. Next slide, please. Now, here's the complication.

This stuff is sent to the verifier, and the verifier needs to do the same computation. So the verifier has some data, transfers it into bytes, hashes it, and, obviously, next slide, please: the verifier also looks at the SD-JWT, where there are some signed hash values, and the verifier now checks whether the signed hash values are the same. And there might be cases where that doesn't work, especially if the data is modified between the issuer and the verifier. Such things can happen when you transfer JSON, because in JSON the order of elements is undefined.

Essentially, there is some room for expressing the same thing in different ways. You have that with numbers, floating-point numbers, for example; with the specifics of how to encode Unicode strings; but also with whitespace between the elements in JSON.

So the issuer has sent something that is JSON, or has some data and transfers it to JSON, and the verifier parses that JSON and then does the same computation, but not necessarily with the same result. The byte string that is hashed might be different, and of course, when that happens, there's a different hash, and apparently the signature check will not work, although the same data was transferred. Here, in this case, you can see the street address and locality, for example, in a different order.

I call this the "source string", because it's the source of your hash. You can apply a transformation, both at the issuer and at the verifier, that ensures that both really hash the same byte string, so both end up with the same hash at the end of the day. That would be canonicalization, which, obviously, you need to do at both the issuer and the verifier anyway. In any case, whatever we do, we need to define it in the spec to ensure interoperability; we need to ensure that issuer and verifier agree on how this computation is done.
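(A minimal sketch, not from the session, of the interoperability problem being described: the same JSON data serialized two valid ways yields two different byte strings, hence two different digests. Values are illustrative.)

```python
import hashlib, json

address = {"street_address": "Schulstr. 12", "locality": "Schulpforta"}

a = json.dumps(address)                                        # insertion order, spaces
b = json.dumps(address, sort_keys=True, separators=(",", ":")) # sorted, compact

print(hashlib.sha256(a.encode("utf-8")).hexdigest())
print(hashlib.sha256(b.encode("utf-8")).hexdigest())  # different digest, same data
```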
J: One remark: I think you have to do both anyway, because if you think back to HTTP, the HTTP signature work we did, it's the same problem. If you just compare the hashed value, or take the hashed value to do further things, and don't really compare it to what was originally hashed, you have the same issue again. You will run into problems, because an attacker could swap things out: you are later acting on the second, transmitted hash value, while you actually base decisions on what's in the content of the original plain text. That would be a problem, so I think you have to do at least the canonicalization in some form. No? You don't think so?
K: Okay, let's see if we can get the slides back.

F: Trying it.

K: Okay, thank you very much. Next slide, please. Questions?
K: So the approach that we've taken in the draft is to ensure that a byte string is transferred from the issuer to the verifier, and because we transfer the byte string, we can ensure that always the same thing is hashed: you just hash this byte string. If you look closely here, on the top left side, you see the address, and now it's not a JSON object any longer; instead, it's a string of bytes which happens to encode JSON.

Actually, you could use any other encoding at this point, but JSON is just something we use anyway. So what we do is build one string per object, and that object can be an address, as in this case, a complex JSON object, or it can also be just a JSON string. Next slide, please.

Canonicalization gives you a clean data structure, so you can just transfer things essentially as they are, but the problem we saw with it is that it adds a really non-trivial dependency. You need to ensure that the issuer and the verifier follow exactly the same rules. There are libraries that implement that, but if you happen to implement it yourself, or the library is not well implemented, it can be really hard to debug, because the issuer just sends you something, and you will never learn what the issuer actually hashed to get to the hash value that you receive alongside it. Obviously we can test that; you can do conformance tests and so on. But if you happen to have an error somewhere, it's really not transparent to the verifier what's happening, and it can be really hard to debug. Next slide, please.

The approach that we've chosen, the raw-string encoding, is really easy to implement with any JSON library; that's also feedback that we got from the implementers of the spec. You just do a JSON encode on the thing and you're good to go. You don't need any new dependencies, just the JSON library, and it's actually something that, in a similar way, is being done in JWS anyway.

The downside of this is that it certainly looks strange, so some people asked whether it's an error in the spec. No, it's not; that's why we have this lengthy explanation now. And of course, if you just look at the disclosures object, where the raw strings are, you cannot apply JSON Schema to it, but it's not a place where you would do that, because you process the thing according to what's in the SD-JWT spec, and at the end of the day you get a document out that has the same data as the issuer processed in the first place: the same types, the same objects, and so on.
M: Justin Richer. Not exactly. JSON strings have the ability to contain, for example, Unicode-escaped characters and stuff like that, which allows you to put in a different character sequence and get the same semantic bytes out the other end. So this is something where you can say it's the JSON string value, but you have to be way, way, way more precise than just saying "call json.encode". Because, for example, inside JSON strings you are fully allowed to prefix forward slashes with a backslash character, and that gets sent as a two-character sequence, but it is supposed to be interpreted as a single forward-slash character. You can also send the forward-slash character without the prefixed backslash, because it's technically not an escaped character, but you're allowed to escape it, with the same semantic meaning.

There are also the backslash-u Unicode escapes. And then there are some JSON libraries that do really, really weird things with Unicode characters, without doing the backslash-u prefix encoding, just chucking them in there and hoping for the best, because they're using the system's underlying string libraries. In other words, this works until it doesn't, and when it doesn't, it goes really sideways, really, really hard.

So, if you're going to be basing this off of raw JSON strings, you're going to need to be incredibly precise about how you actually pull those bytes out, because your normal test cases and your normal use cases are probably going to work most of the time, until somebody gets a weird library, implemented completely compliantly, that does something you weren't expecting. So it's not as easy as it might seem to say "JSON strings". And on top of that, I would encourage the authors to look into the, it's a bit of a pariah RFC, the JCS. Yeah, I see a lot of heads nodding. Exactly, exactly, because that's what you're doing.
K: Okay, let me just say something here. I don't see the problem yet, so I would be happy to discuss that with you, sure. But as far as I see it, we do the JSON encoding: we take the address object, we call json.encode, we get a string, and that string is sent as part of another JSON object from A to B.

Of course you can add backslashes and Unicode escapes and so on there, but you call the encode on one side and the decode on the other side, and you get the same data; you get the same byte string.

We are not hashing the thing that was already encoded; how to explain it, because we do two encodings here. But as far as I see, and correct me if I'm wrong: when I have any byte string and I call json.encode and then json.decode, I get the same byte string back, right? (Kind of. Sometimes. Usually.) Like, not on the wire, right? I know, not on...
M: I'd be happy to validate it, but it's a beautiful expectation. What I would say is that where you need to be precise in this is where, exactly in the encoded-versus-decoded stack, you're expecting to be able to get those bytes. Because if you're throwing things through, especially, a JSON decoder on the far end, it's most likely already gone through a JSON parser, because it's an encoded string; so it's been parsed as a JSON string. Right, exactly: that's already been through one round of a JSON parser, which can do all sorts of stuff to the insides of the string and change the bytes that were on the wire.

I'm saying that if you're going to do this approach, you need to be very precise about where exactly you're saying "get the byte stream", because there are some subtleties in implementations that are going to burn people in weird and unpredictable ways. It's going to be the corner cases and the edge cases that really, really get you here. The day-to-day stuff, chuck it into json.encode and json.decode, is just going to work; I completely agree with that, for the vast majority of cases.

It's just going to work, and that's fine. But in order for this to be a real, robust security spec, as we know, we need to care about those corner and edge cases, and this type of "am I getting it after json.parse has already been called, and now I'm calling json.decode on something where maybe that's already been called, or maybe it hasn't"... No, you're calling parse in order to get it out of the object in the first place. Yes.
F: Orie Steele, Transmute. I agree with most of what was just said, but I think generally you're doing canonicalization whether you want to call it that or not. That's it.
A: Mike Jones, Microsoft. I'll actually disagree with Orie's last point. I think what they're trying to do is harden the string for transmission. A lot of the JOSE stuff uses base64url for hardening in the same way. I'm not saying whether you want to use that choice or not, but the point is to get a string that's going to survive transmission.
Q: Sort of a follow-up on what Mike just said. You're starting with the fullness of JSON, and all its complexity, as a prerequisite: you want to essentially support disclosure of anything that looks like JSON. My question is whether you can harden it enough, maybe, for example, by putting constraints on claim names, on cardinality; can I have multiple claims of the same name, stuff like that? Maybe then the encoding of claim names, and their security, doesn't depend as much on the implementation details of these libraries.
K: A few minutes? Okay, I'll try to do this in a few minutes. Yeah, next slide, please. So this is just a quick example.
K: This is an SD-JWT: what the issuer creates, signs, and then sends to the holder, and the holder can send this to the verifier. Except for the names, there are essentially not many changes from the last presentation. These are the digests, the hashes of the claim values. So now, what are the values? Next slide, please.

This JSON object itself has two keys: "s" for the salt and "v" for the value. This approach is also nice because it ensures that you have a separation between the salt and the value, which means there can be no hash length extension attacks or anything like that, where part of the salt is considered part of the value, or vice versa; it's clearly separated.

If you don't do it this way, you need to think about how to take the salt and the value and hash them together; you need to define that step as well. We don't need to do that: we just have an object with "s" and "v", salt and value, and, of course, that whole string, for birthdate, for example, then hashes to what's in the SD-JWT. Next slide, please.
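(A minimal sketch, not from the session, of the salt/value container hashing just described; the exact serialization and any base64url steps follow the draft, so treat this as a simplified illustration with assumed helper names.)

```python
import base64, hashlib, json

def disclosure_digest(salt: str, value) -> str:
    # The raw JSON-encoded {"s": ..., "v": ...} string is what gets hashed;
    # the verifier recomputes this over the raw string it receives.
    raw = json.dumps({"s": salt, "v": value})
    digest = hashlib.sha256(raw.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The issuer signs these digests inside the SD-JWT; the plain values travel
# separately in the (unsigned) disclosures object.
```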
K: The issuer-issued disclosures document is created by the issuer and sent to the holder. The holder selects some of their claims to disclose to the verifier, so, in the current spec, the HS-Disclosures document is a subset of the issuer-issued disclosures document.

The same document is also used for the holder binding, so things like a nonce or an audience can be added into this document, and then the whole thing can be signed and sent from the holder to the verifier, and the verifier can verify that this was actually signed with the holder's key. Now, coming back to a point that Brian raised: this thing, as we have it in the spec right now, actually serves two different purposes. One is to just transfer the holder-selected disclosures, which you could send as a plain JSON object. And then there's the holder binding, which is just a signature over the nonce and the audience (wherever that nonce comes from, by the way), and that is a different purpose: it shows that the holder is able to sign something fresh, because it contains the nonce, and intended for that verifier, using its key. So, two different things, and we could think about separating them: just having a JSON object for the HS-Disclosures and then, optionally...

We also have claim name blinding now, so some of the claim names can be replaced by random strings. In this case, I don't know what it was, family name? No family names anyway; this claim was blinded, so it has been replaced by a random string. The issuer selects that random string and just swaps the claim name for the random string. Next slide, please.

In the disclosures we now have an entry for the claim with that random string, and we now have a third element in the disclosure, called "n" for the original claim name, and that element just contains the original claim name.

This might look like complicating things further for an application consuming SD-JWTs, but we'll get to the processing model, where we say: okay, the SD-JWT library can do all the hard work of processing this, putting in the original claim name instead of the blinded claim name, and the application will see a JSON document that has no traces of the blinded claim names. Next slide, please: the processing model. How fitting.

The processing model is how we think SD-JWT libraries will process SD-JWTs. Simple steps: verify all the things that you get; verify that the disclosures actually match the SD-JWT; unblind any blinded claim names, if there are any; and merge the selectively disclosable claims into the non-selectively-disclosable claims in the SD-JWT. Which means that at output time you get just one document, which looks like the body of a normal JWT that has been verified and processed, and you can just put that into your application. So the application doesn't need to know anything about SD-JWT. Next slide, please.
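(A minimal sketch, not from the session, of the verifier-side processing model just outlined. Signature verification is omitted, and the input structure and names are assumptions; the point is that the library hands the application one ordinary claims document.)

```python
import base64, hashlib, json

def _digest(raw: str) -> str:
    d = hashlib.sha256(raw.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(d).rstrip(b"=").decode("ascii")

def process_sd_jwt(sd_digests: dict, hs_disclosures: dict, plain_claims: dict) -> dict:
    """Check disclosures against signed digests, unblind names, merge claims."""
    out = dict(plain_claims)                      # non-selectively-disclosable claims
    for name, raw in hs_disclosures.items():
        if _digest(raw) != sd_digests.get(name):  # disclosure must match the SD-JWT
            raise ValueError(f"digest mismatch for {name}")
        container = json.loads(raw)               # {"s": salt, "v": value, opt. "n": name}
        out[container.get("n", name)] = container["v"]  # unblind and merge
    return out
```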
K: Okay, next slide. We now have five running implementations. We have our implementation, which we keep up to date to generate all the examples in the spec, and we now have a new TypeScript implementation; a second TypeScript implementation, as well. Next slide. Okay.

Just really quickly, next steps: I think we need to think about how this can look in the context of other existing credential formats, whether there's a mapping between them or whether this is completely different. We need to think about that, probably create some examples, and discuss them with the relevant groups.

We still need some security and privacy considerations, and it would be great if, at some point, we could do an interoperability test between the implementations we have. We could even do that offline, because you can just create these things and consume these things; you don't need to be online for that. Thank you very much.
B: I'm going to pass control to you, then. Where are you? There.
H
All
right,
tough
act
to
follow
I'm
here
to
talk
about
the
Step,
Up,
authentication,
challenge,
protocol
and
yeah,
so
including
a
picture
as
Aaron
said,
is
sort
of
fundamental.
This
was
actually
taking
ietf
89
back
in
I,
don't
know
two
a
while
ago,
so
without
further
Ado
Let's
move
forward.
So
a
little
bit
of
backstory
context,
I
have
a
hard
time
presenting
without
providing
some
context.
I'll
try
to
get
through
it
quickly.
H
Basically,
a
protected
resource
can
technically
reject,
like
you
can
reject
a
technically
valid
access
token,
for
whatever
reason
that
it
wants.
Maybe
a
risk
engine
decision,
some
local
constraints,
I,
say
here:
bad
vibes
like
really
it
whatever
it
decides,
is
a
reason
to
reject
that
token.
It's
totally
within
it's,
it's
purvey
to
do
so
and
really
oftentimes.
H
What
a
resource
wants,
then,
is
a
token
obtained
from
a
more
recent
user
and
active
authentication
event
or
a
token
obtained,
with
a
different
authentication
flow,
probably
a
stronger
one
and
there's
no
current
standardized
guidance
on
how
to
do
this.
For
the
RS
to
express
those
requirements
down
to
the
client
and
the
client
to
indicate
those
requirements
back
through
to
the
authorization
server
in
the
flow
to
acquire
a
new
token.
H
My phone locked; there we go. So we, Vittorio and myself, tried to address this through a draft in the working group process, and so forth.
H
The summary of the draft's approach is extending RFC 6750 with a new error code, insufficient_user_authentication, and, sorry, new parameters on the WWW-Authenticate header: acr_values and max_age. This gives the resource the opportunity to express down to the client the conditions that we previously talked about, either or both: that a different ACR value, representative of the authentication flow or authentication context, or a more recent authentication event, is required to be associated with the access token that will in turn be issued.
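Concretely, such a challenge might look like the following; this is an editor's sketch in the spirit of the draft's approach, with the ACR identifier and max_age value invented for the example.

```python
# A 401 response from the protected resource could carry this header to
# signal that a different or fresher authentication is needed.
www_authenticate = (
    'Bearer error="insufficient_user_authentication", '
    'error_description="A different authentication level is required", '
    'acr_values="myACR", '
    'max_age=5'
)
print("WWW-Authenticate:", www_authenticate)
```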
H
Then we utilize the authorization request parameters acr_values and max_age to allow the client to convey to the authorization server its needs around the authentication event. These are parameters already defined and registered, via OIDC Core, in the OAuth parameters registry. And then we define and/or reference, depending on the context, the acr and auth_time introspection response parameters and JWT claims to express that information about the authentication event associated with the access token to the protected resource.
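For illustration, the acr and auth_time information the RS gets back, whether as JWT claims or in an introspection response, could be checked like this; all concrete values are invented for the sketch.

```python
import time

# What the AS surfaces about the authentication event, either as claims in
# the access token or in the token introspection response.
token_info = {
    "active": True,
    "acr": "myACR",                  # the single ACR the authentication met
    "auth_time": int(time.time()),   # when the user actually authenticated
}

# The RS re-evaluates the condition it challenged for, e.g. max_age=5.
max_age = 5
if time.time() - token_info["auth_time"] > max_age:
    print("too stale: challenge again with insufficient_user_authentication")
```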
H
So really it's just stuffing them in the JWT, or making them available via introspection via the same claim names. They're already defined for JWTs; we just reference them. They're more explicitly defined in this draft for introspection, to be clear. And this is the kind of flow diagram,
H
I'm hesitating whether it's really worth going over. Basically, you're making API requests; up before step one, a token is presented, and it has token information. That's where the protected resource decides, hey, it's not good enough, and in this case it's challenging in step
H
two, basically saying: I need a more recent authentication event associated with the token that you're presenting me. As a result of that, the client pops up a browser, or directs the end user's browser, to make a new authorization request, and includes in this case a max_age parameter, saying I need a more recent authentication event associated with the access token.
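A sketch of the authorization request the redirected browser might carry in that step; the endpoint, client_id, and other values are invented for the example.

```python
from urllib.parse import urlencode

params = {
    "response_type": "code",
    "client_id": "s6BhdRkqt3",                  # invented client id
    "redirect_uri": "https://client.example.com/cb",
    "scope": "calendar",
    "max_age": "5",  # the freshness requirement relayed from the RS challenge
}
authorization_request = "https://as.example.com/authorize?" + urlencode(params)
print(authorization_request)
```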
H
Then the magic happens, all out of scope, but the authorization server prompts the user, does the authentication dance per its purview, and ultimately returns a new access token in step four. Some things are omitted there, but hopefully you know the drill. And then in step five it makes the same API call with the new access token; inside, or referenced by, that access token is a more recent authentication event represented by auth_time.
H
The same flow could happen with ACR, but it's using auth_time here, and the protected resource is happy with it. I keep locking my phone. So, just a quick summary of where we are: in Vienna we first presented this draft, back at IETF 113. It was adopted shortly after that, followed by the pretty typical sort of standard iteration process, some comments, some new updates and so forth. We got drafts one and two, and then talked about it in Philly.
H
A few things have happened since then. We published draft three, which clarified that acr_values and max_age can occur in the same challenge when and if they're necessary (sometimes you want to ask for both), fleshed out the deployment and security considerations, and also did all the IANA registry stuff that's necessary. We tried to clarify, because it wasn't clear to everyone, that while acr_values, which is basically saying these are the accepted ACRs that we would take for this, can have more than one value,
H
it's a space-separated listing of ACR values; the actual authentication can only qualify as meeting one of those. So the token itself only has a single acr; only one ends up in the token. We did migrate this over, and by we I mean Aaron migrated it over, into the new GitHub org here, and thank you for doing that; we're coming along to the new process.
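The one-of-many matching rule described above reduces to something like this sketch, with invented ACR names:

```python
# acr_values in the challenge is a space-separated list of acceptable ACRs;
# the issued token carries exactly one acr, which must be one of them.
def acr_satisfied(challenge_acr_values: str, token_acr: str) -> bool:
    return token_acr in challenge_acr_values.split()

assert acr_satisfied("gold silver", "silver")
assert not acr_satisfied("gold silver", "bronze")
```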
H
But it's nice tooling, it's very, very useful. Working group last call was September 22nd to October 2nd. We did drafts four and five, mostly editorial updates and addressing feedback; they came the day after each other because we noticed that the updates needed some updating, so we fixed those. Recently I updated some examples and figures to be really clear that the authorization request is sent by the client not directly, but by directing the end user's user agent, or browser, to make the call. In some ways,
H
this relates back to the earlier discussion, of course, although it's not mentioned here. What happened is that I was actually sort of refurbishing Vittorio's slide deck to do these slides, and it became kind of clear that the ideas had been intermixed and that some of the slides made it look like
H
maybe the authorization request was coming directly from the client. It didn't say that, but it maybe implied it, and I wanted to clarify that. I looked back at the draft, and it in fact had the same kind of potential for ambiguity. So I fixed those, and I said a new draft was coming soon; I actually published it yesterday. There are no normative changes, but I think it's an important clarification, one that maybe folks who are familiar with OAuth would just sort of gloss over because they know how it works.
H
But if you're reading it kind of literally, I think it's an important clarification to fix. And then, hopefully anticipated soonish, will be the shepherd's review here, and I know the shepherd has a lot going on, including a COVID bout that slows some other things down, but no pressure. That's kind of where we're at: hopefully soon. And someone recently kindly pointed out that IETF 116 is not in Prague; I was so eager to include
H
a picture, and I don't have one of Yokohama, and I'm going to miss Yokohama, so I got confused myself. But looking ahead, hopefully this will be the last time you're seeing a presentation about this. I'll probably be in a fog, but I won't be in Yokohama to talk about it anyway. So yeah.
H
The summary was some back and forth between him and Vittorio. Ultimately he sort of came to: it would be nice if you said a little something like this, but it's not a big deal, and he said he wouldn't push for it to be changed, which I appreciated. I am personally of the opinion that what he's asking for only complicates things more than it clarifies, either way. It's not protocol level, it's just clarification. So, okay, I think the output is nothing's going to happen. Okay.
B
I'll take a look. Anybody else have any comments or questions?
G
Darrel Miller, Microsoft. Just one call-out: if you're adding a new parameter into the WWW-Authenticate header, I don't know, have you looked at the Structured Fields work? There's an effort to try and standardize how structured fields work, and there's some work going on in the HTTP working group to retrofit existing HTTP headers to use more standardized ways. You mentioned space-delimited; I'd need to look into it. That might be the right way of doing it, or it might not fit into the new structured header field world.
B
Okay, thanks Darrel. Somebody in the queue: Jaimandeep, go ahead.
D
Forensic Sciences University. I was just wondering whether we want to also include the other parameters which are going to the protected resource server, like client authentication parameters; are we looking at it that way as well?
H
There's obviously a lot more to that; it's been discussed sort of ad nauseam in the thread, but for a number of reasons that stuff either isn't applicable, isn't within the scope of the draft, or doesn't technically make sense in the context of where these things are communicated. So, okay, okay.
B
Awesome, thank you, Brian, appreciate it. Thank you, thank you. Okay, I see Ben is on the line there. Let me pop your slides up here.
R
So hi everybody, this is a new draft. Next slide. That's a long title, but I think the short answer is that this is about pop-up authentication. So, this is what login looks like on the web today. Thanks in part to the great work of people in this working group, we have this single sign-on ecosystem; we have a bunch of very rich ways to authenticate users very securely, through YubiKeys and passkeys and all sorts of great new innovative stuff. Next slide.
R
This is what it looks like if you're not using the web today, if you're trying to use HTTP standards outside of the web context. On the left we have a CalDAV login screen; on the right we have a proxy authentication login screen. These are the state of the art, respectively, for CalDAV, for proxies, and in general for any login system that doesn't benefit from a web browser. Next slide. So non-web login is basically stuck in 1996; these standards have, in my view, not significantly changed. Next slide.
R
So what about OAuth? I'm here because I'm definitely not an expert on OAuth, and I want to get input from this group about how to address this problem and bring these systems into the modern era.
R
But my understanding is that OAuth generally requires the client to know in advance who it's going to be talking to, and this is really about clients that want to be able to access essentially any HTTP resource on any domain, subject potentially to an authentication prompt, with the only requirement being that the client and the origin both implement a defined standard. Next slide.
R
Maybe you've just changed a configuration setting, or maybe you're just going about your business on your device, and you get a notification that something on your device is requesting interactive authentication. So you open your browser, which will open to essentially a login page; you go through some sort of probably OAuth-driven single sign-on, probably server-to-server OAuth, and then at some point a signal comes back to your browser that says authentication has completed. At that point the browser window will close and you'll see a second notification.
R
So this is the specific protocol that's laid out in the draft, but it's really, I think, kind of a placeholder; I would really welcome some more input on how to structure this exchange. But in this case the client says, hey, do you support CalDAV, in this example, and the server says: who are you? Open a browser. Or, you can see,
R
there are other WWW-Authenticate response headers here, so there could also be other, maybe more old-fashioned, authentication options presented in the challenge. Next slide. So that WWW-Authenticate header had a parameter called location, equal to /login, which tells the client where this login page is. The client now opens a web browser, and that web browser is instructed to load this page. Here we again get back a 401 response, but this time it contains HTML telling us the login instructions. Next slide.
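A rough sketch of the client side of that exchange. Only the idea of a location parameter on WWW-Authenticate pointing at a login page comes from the talk; the challenge parsing, the scheme details, and the browser launch command are assumptions of this sketch.

```python
import subprocess
import urllib.error
import urllib.request

ORIGIN = "https://calendar.example.com"  # invented origin

try:
    urllib.request.urlopen(ORIGIN + "/caldav")
except urllib.error.HTTPError as err:
    challenge = err.headers.get("WWW-Authenticate", "")
    if err.code == 401 and 'location="' in challenge:
        # Naive extraction of the location="/login" parameter.
        login_path = challenge.split('location="', 1)[1].split('"', 1)[0]
        # Hand the advertised login page to a browser for the interactive part.
        subprocess.run(["xdg-open", ORIGIN + login_path])
```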
R
Then the user does something: clicks buttons, enters passwords, taps YubiKeys, navigates maybe between different origins, and approves things. At some point it comes back, and the browser fetches this login page again; this time it sends that fetch with a cookie that's been set as some part of that flow, and now it gets a 200 response, which is the signal that means we're done here.
R
So the browser closes. Next slide. Now we're back in the non-interactive part of this. Again the CalDAV client is trying to load this endpoint, but now it copies that cookie header out of the successful request from the browser into the CalDAV client, and now it can actually speak CalDAV to that endpoint. Next slide.
R
So this is an overview of the proposed protocol. I want to focus on step five, or step four, where the browser loads the authentication path. If that authentication path ever loads successfully while this browser instance is operating, the client copies all those request headers out of the request and then kills the browser, and those headers are copied into any future requests that are made by this non-interactive client trying to reach that origin.
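A sketch of that header-copying step. The successful_browser_request object is a stand-in for whatever the embedded browser instance exposes; it is not a real API.

```python
import urllib.request

def extract_credentials(successful_browser_request) -> dict:
    # Lift only the credential-bearing headers out of the request that
    # loaded the authentication path successfully.
    return {
        name: value
        for name, value in successful_browser_request.headers.items()
        if name.lower() in ("cookie", "authorization")
    }

def caldav_request(url: str, credentials: dict):
    # Replay those headers on the non-interactive client's own requests.
    req = urllib.request.Request(url, method="PROPFIND")
    for name, value in credentials.items():
        req.add_header(name, value)
    return urllib.request.urlopen(req)
```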
R
There are a lot of interesting corners that come up in this, again, this sort of placeholder particular instantiation of the idea. For example, the draft says cookies and Authorization headers are both allowed. Authorization is more natural, I think, in this context, but in my understanding it would force us to use JavaScript in this browser context, and it would be nice to be able to support non-JavaScript browsers.
R
Cookies seem to allow us to do that. But then, my personal use case here is really related to these proxy clients, so we need to convert Authorization headers from the request of the browser into Proxy-Authorization headers for the proxy, and that's natural enough. But what do we do about cookies? The draft says, fine, you can't use cookies for that, but we could define some way to copy those cookies to a proxy server.
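The Authorization-to-Proxy-Authorization conversion mentioned for the proxy use case could look like this sketch; the function and its policy are this editor's assumption, not the draft's text.

```python
def to_proxy_headers(captured: dict) -> dict:
    out = {}
    for name, value in captured.items():
        if name.lower() == "authorization":
            # Natural mapping for proxies: same credential, proxy-scoped header.
            out["Proxy-Authorization"] = value
        # Per the talk, the draft currently says cookies can't be used here,
        # though a copying scheme for proxies could be defined.
    return out
```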
R
There are some interesting UX implications here. You know, your OAuth-type token expires at some point with some system service; you want to refresh that, so we're interrupting the user at some random point in their day to say: oh, please re-authenticate.
R
But that's enough about this; let's move on. So, this is a brand new draft. It's designed to bring all of the great OAuth-driven single sign-on and two-factor login stuff to sort of the rest of HTTP, or the rest of standards-driven HTTP.
R
It definitely needs more input from folks like you. For this group, I'm especially interested in what components here could be shared more with the OAuth ecosystem, or even, you know, whether there is a way to express this in OAuth. By the way, I'll be repeating this presentation, I believe tomorrow morning, for the HTTP API working group to get their input. And I'm seeking adoption here; I think it's an interesting question where that adoption should live.
R
And if you want to see more use cases, I would mention the access descriptions draft and MASQUE. Okay, that's all. Aaron?
E
Yeah, hi, Aaron Parecki from Okta. If I were to rephrase this in terms of the OAuth terms and roles that we already use, it seems like almost everything is already there to make this work, and there's really only one thing needed. It sounds like what you're trying to do is drive an authentication event when a resource is requested. We have, I think it's RFC 6750, which talks about the error response when a client makes a request to a resource server
E
that does not contain a token, or contains the wrong kind of token, and that's also what the step-up auth draft is sort of getting into as well. What we don't have in that draft, or I don't think in any other ones, is the concept of the resource server telling the client where the AS is, and that's kind of what you're asking to do: a calendar client shows up and says, I know the resource server
E
I'm talking to, tries to make a request, and that resource says: you need to log in, and here's where to go log in, where the client doesn't necessarily know that beforehand. And I think you're right that most of the OAuth work is the other way around, where the client knows the AS ahead of time. But that's the only missing piece; everything else actually already fits into the OAuth world. So I think there's a lot of...
B
But we have just a few minutes and we have a number of people in the queue, so I'm gonna give people a chance to give their comments, and maybe we can take it offline after that, okay? And so, are you done with your comment? Yeah.
B
Okay, thanks Aaron. John, just hold on; a few people are in the line there. Armando?
S
Hello, can you hear me? Yes, okay, just one quick comment. Basically, in the Privacy Pass working group we are working on a solution very similar to what you are proposing, which is basically to introduce a new option in the HTTP authentication header so you can do this kind of authorization mechanism. So please take a look at the Privacy Pass group. Thank you.
N
I just wanted to point out that there was an internet draft, I believe, a while back, where someone, I don't remember who, proposed using the Link header to advertise the AS that could fulfill that 401.
L
The thing is, I guess it depends on who the draft expects to drive this orchestration: whether it's the client running inside the JavaScript inside the browser, or whether it should be the user agent. The presentation did not clarify that for me, and if it's the user agent, then this is really just limited to the web experience and cannot be extended to, well, I mean, to authenticate CLIs, etc., because then there is a handover of information from the browser to a CLI that we currently do not have.
R
That's right, this would all be driven from inside the web page, essentially, but the CLI needs to have its fingers inside the browser to be able to pull out these outputs, right? It needs to be able to pull out these Authorization or Cookie headers for use, essentially, by a different user agent. There's a browser user agent, and then there's this.
P
I think this is an interesting idea that needs to be explored. When we were originally doing OAuth, if memory serves me, the error from the resource server returned what scopes were required, but we explicitly didn't say what the authorization server endpoint was, because there were a number of security concerns around essentially having the resource server be able to do phishing on the user.
P
The other thing is that, you know, as one of the editors of the FIDO specifications: there are a bunch of things in browsers that stop web views from being able to use WebAuthn, etc. So exactly how you're calling these things, if you're planning on grabbing headers out and so on, may be at odds with the security that browsers and the OSes are implementing. The other thing, too, that you should probably talk to Google about,
P
is that they're doing a bunch of work on proof of possession for cookies, to prevent cookies from being exfiltrated outside of the browser. So that may also be somewhat at odds. There's a bunch of things that have to be coordinated to figure out whether or not, you know, this can actually be practically used end to end, but it's probably worth thinking about.
R
Time? No, thanks for all that input; we'll discuss on the list. Awesome.