From YouTube: IETF114-OAUTH-20220725-1400
Description
OAUTH meeting session at IETF114
2022/07/25 1400
https://datatracker.ietf.org/meeting/114/proceedings/
A
Welcome, everyone, to the OAuth working group at IETF 114 in Philadelphia. So let's get going; we have a packed agenda. This is the Note Well; please make sure you understand it. This governs everything that we do here at the IETF, so make sure we understand what's going on there. We have the one official session today.
A
On the other side, a quick working group update: RFC 9207 was published, so congratulations to Daniel and his co-author.
A
Okay, we have a packed agenda. As you know: step-up authentication, Vittorio and Brian will talk about this. Selective disclosure JWT: Daniel and Kristina will talk about this for 20 minutes. And then Daniel will give us a quick update on the Security BCP, and we'll talk about 2.1 and browser-based apps.
A
I will be talking about multi-subject JWT. Hersh will be talking about token theft, Kelly will be talking about token and identity chaining, and Atul will wrap it up with the RPC security standard. As you can see, the time allocated for each topic is really short, because we have a really long list of topics.
A
If we need to dig deeper into any of those topics, we will dig deeper into them in the side meetings, right? So, based on what happens right now, we'll kind of adjust the side meeting agenda. So, side meetings: one of the main topics is the DPoP shepherd open issues. We're trying to push this forward and finish it. There are a few open issues as part of the shepherd's review, and Brian will walk us through them, and hopefully we can wrap this up soon.
A
Conformance tests and SDKs: Daniel and Joseph will have a discussion around this; we're attempting to do something about this. Again, as I mentioned, we will probably have some deep dives into some of the topics that we discuss today.
A
We want to have a few open discussions around a few topics. For example, do we need a new RFC to update the JWT RFC, based on our experience so far? Brian did a fantastic presentation at Identiverse, and I think we have lots of lessons learned from that, so maybe it's time for a new RFC, where we might also discuss post-quantum. So it's just an open discussion; any thoughts about this?
A
What could be done here? And there are some issues around the OAuth working group's perception: some people find the working group to be not easy to deal with. So I would like us to kind of have a discussion and see how we can be a bit more welcoming and accommodating for people that are not from the close, tight kind of group that typically attends the meeting. So that's it for me; any questions or comments?
B
All right, good morning. In 10 minutes I'll try to update you on the work that we've been doing with Brian on the OAuth step-up authentication challenge protocol. Can I have the next slide, please? All right, the agenda is ultra fast: I'm going to refresh for you what we mean by step-up and what's the scenario we're trying to solve, then the proposal that we have; I'll give you an update on what we changed since the last time we met in similar circumstances, and then I'll mention some of the discussion items.
B
As Rifaat mentioned, we will probably have to deal with the Q&A in the extended meetings, because I just don't have enough time. All right.
C
B
Thank you. So, resource servers can reject tokens which allegedly are still valid for many, many reasons, but typically they will have black boxes, like risk engines, that evaluate that that particular token is not enough for the operation that is being attempted, or there might be things in the call itself.
B
So, beyond the token, the call itself might warrant different circumstances. For example, I'm trying to buy an item which is very pricey, and it would be better for the resource server if the token had been obtained using a higher level of authentication. Am I speaking too fast?
B
Yes? No! Maybe I'm just trying to stay within my minutes. Okay. So, typically, in fact, what the resource server wants when it rejects the token is a fresher token, or a token that was obtained with a different, usually higher, authentication level. And today there is no guidance for the resource server to tell the client the reason for which it doesn't want such a token, and there is also no guidance in OAuth for the client to communicate to the authorization server that it wants something different. Next slide, please.
B
So our proposal is ultra simple: we're just extending the collection of error codes in RFC 6750 with a new code, which is insufficient_user_authentication, and in the corresponding header we propose to send back, from the resource server to the client, parameters that are already defined in OpenID Connect, acr_values and max_age, which basically allow the resource server to ask for particular authentication levels or a certain freshness of the token; and then, given that we already have these in JWT access tokens...
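As a non-normative sketch of that challenge, a resource server might build a 401 response header like the one below. The helper name, issuer-specific acr value, and exact quoting are illustrative assumptions; the error code and parameter names are the ones mentioned in the talk.

```python
# Hypothetical sketch: building the step-up WWW-Authenticate challenge
# described above (error code and parameter names from the draft; the
# helper name and acr value are made up for illustration).
def build_step_up_challenge(acr_values, max_age=None):
    """Return a WWW-Authenticate header value for an HTTP 401 response."""
    params = ['error="insufficient_user_authentication"',
              'acr_values="{}"'.format(acr_values)]
    if max_age is not None:
        params.append('max_age="{}"'.format(max_age))
    return "Bearer " + ", ".join(params)

print(build_step_up_challenge("myacr:gold", 300))
```

A client receiving this challenge would then repeat the authorization request, passing the requested acr value along, as described later in the talk.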
B
And here there is an ultra-complicated back and forth... in the past you got the animated version; if you want, you can get the recording from the last time. I don't have time to do this here, but, super high level: basically, you just have the client that hits a resource server using a token, which we cannot read, but it doesn't have the acr.
B
Then the resource server replies with the new error code, insufficient_user_authentication, and includes, in this particular example, the desired acr value; we could have included a max_age as well, but we didn't, this is just one example. Then the client gets this parameter, goes back to the authorization server, repeats the request for an access token, and includes the desired acr.
B
So that is the substance of our proposal, and of course I'd be remiss if I didn't point out that the wonderful graphic comes from the art and the genius of Brian Campbell. And what happened since 113? Well, first, we got adopted; we are now a working group item, and this is an actual work item which we are working on as an OAuth working group. Filip and Pieter have been very, very thorough, especially Pieter.
B
Thank you, Pieter, for dissecting the spec, mentioning your comments on the scenario and proposing improvements, which we tried to reflect. In particular, Pieter pointed out that step-up doesn't always mean step-up, let's say: the token that you obtain after the step-up might not be a replacement of the token you already have, and so we clarified this in the spec. And Filip mentioned that it would be nice to have a mechanism to signal the fact that the authorization server supports this particular scenario.
B
And so that's what we have done: we have extended the authorization server metadata with one element that comes straight from OpenID, which is acr_values_supported, and we established that the presence of that parameter indicates that the authorization server supports this specification. Now, the tricky thing, which we'll mention also in the discussion points, is that by using that element we imply that the authorization server supports both acr_values and max_age in the request parameters, and that's not immediately obvious, because the element acr_values_supported only talks about the acr.
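As a rough illustration of that signaling mechanism: the field name is the one borrowed from OpenID Connect Discovery, while the issuer URL and the acr values below are made-up placeholders.

```python
import json

# Sketch of the extended AS metadata described above. Per the draft, the
# mere presence of acr_values_supported signals support for the step-up
# spec; the issuer and acr values here are illustrative only.
metadata = {
    "issuer": "https://as.example.com",
    "acr_values_supported": ["urn:example:bronze", "urn:example:gold"],
}

# A client would treat the presence of the field as the signal:
supports_step_up = "acr_values_supported" in metadata
print(json.dumps(metadata, indent=2))
print("supports step-up:", supports_step_up)
```

Note this is exactly the ambiguity raised in the talk: the field name only mentions acr, yet its presence is read as implying max_age support too.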
B
Also, we extended the example so as to also feature max_age, and we fixed titles. And I really liked the joke in the acknowledgments, but unfortunately we had to start putting serious acknowledgements there, so now it's gone; but there are the people that helped.
B
Well, there is the IANA section, which of course we need to fill out, because we want to extend the parameters also for introspection, and we have to add the security considerations; like all the non-normative parts, it has a big TBD, so we have to do it. And then there is the thing that I just mentioned: is it okay to just use this acr_values_supported, or do we need something more explicit?
B
And here is the interesting part: this spec, as it exists today, already does the job that it's meant to do; there are no missing parts, and we are getting indications of people adopting this thing already. There was a comment on one of my LinkedIn posts in which someone said that the Department of Health in Norway is already adopting this thing as is.
B
I think that it's important for us to discuss this spec, so that if there are things that people think should be changed or removed or added, the sooner we do it, the less legacy we will have. Because, again, this is something that people deem useful, and a problem that people already have today, so they are using the spec in this form.
B
So if there is something in the spec that you don't like, please make sure to chime in; if there is something that you believe should be in scope and you want the spec to address, please chime in, because otherwise we risk going really fast through the various stages and then being close to final, and you didn't have a chance to try it. So I think that's pretty much it.
D
So, okay, let's try it. Go... yeah. Thank you very much, also for scheduling this. We're presenting on new work that's called Selective Disclosure for JWTs, and "we" in this case are Kristina and I. If you have not seen the draft yet, that's what it's called; you can scan that if you want to see the latest version in the git repository. Thank you so much for the intro; I think the next part is yours.
E
Yeah, we'll quickly cover why we need selective disclosure for JWTs; I'll give an overview of the mechanism we're introducing, and then Daniel will go into deeper dives and features we are enabling, based already on some really good implementation feedback coming in. If you can go to the next slide. So, really quickly.
E
So this is very different from this emerging new model, which has been really prominent in the verifiable credentials world, for example, or the mobile driving license world, but not only limited to that, where the issuer is issuing a credential to the user with the maximum amount of user claims. Let's say the issuer issued 20 claims to the user; when the user is presenting that credential to a verifier, different verifiers would request different subsets of those claims issued by the issuer, so it's unfeasible for the end user...
E
...to go back every time to the issuer to get credentials that have only a subset of those claims. That goes against the promise of decoupling issuance and presentation to enable all these new features. And so this is the main idea: in these emerging flows that decouple issuance and presentation, we need the ability to use one JWT signed by the issuer and then present only a subset of those claims to different verifiers. And a caveat: by selective disclosure we do not mean the end user saying...
E
..."I only want to present these claims." No; let's say it's the relying party, the verifier, who says "give me those claims," and the application, the holder used by the user, can generate those subsets. You can go to the next slide. Oh, I have it here, never mind. So yeah, selective disclosure: I received 10 claims and I want to, let's say, present...
E
...only two or three of those. Go to the next slide. Yeah, SD-JWT. And one thing that Daniel and I have really been valuing is having simplicity as a feature, so that it's really easily implementable for the existing JWT implementations, and we'll explain why. So there are some existing selective disclosure schemes, arguably, and that's kind of how we see the SD-JWT work: we're trying to keep it as simple as possible, and it is possible to do selective disclosure without using advanced cryptography.
E
But those use, you know, cryptography beyond the ECDSA curves you're used to, and some formats with selective disclosure are limited to binary formats; here we're focusing on JSON. We also value developer familiarity for security audits: because it's in JSON and JWT, it is much easier to audit whether algorithms are implemented correctly, or whether there are any errors, rather than having it in a binary format. And language availability.
E
Daniel and I have been amused: there are already four independent implementations of this work within the two or three months we've been working on this, and this is partially because you can leverage existing JWT libraries. And again, we're not limiting this to identity use cases. Going next.
E
So you should be wondering how it works. If you can go to the next slide... because in JWTs, as we all know, the issuer signs over all of the claims. And what we did is... next slide. The idea is that, when the issuer is issuing a credential, the issuer is not signing over the actual claim values.
E
The issuer is signing over the hashes of those values. And if you hash only the value, it becomes guessable, attackable; so the issuer is actually hashing over the actual value plus a random unique salt, which is unique to that claim value. So the issuer-signed credential contains only these digests. Next slide. But then, you know, from an end-user perspective...
E
...another thing that the issuer needs to send is this salt value container, which includes the mapping of actual claim values and these unique salts, and which is not signed; it's a plain JSON object. And finally, when the end user is presenting this object: it's a JWT, so you can't really break the issuer's signature; you cannot modify anything in the issuer-signed SD-JWT. So the selective disclosure JWT, the blue one on...
E
...the left side, is sent as is. Where selective disclosure happens is within the container: the user sends only the salts and claim values of the claims that the user has consented to release, and that's how the verifier can verify and receive only the claim values that the user has consented to give the verifier in the release. And there's, optionally, holder binding. Yeah, I think this is a high-level overview, and Daniel will speak to the details.
D
This is what it looks like. So it's a JWT, obviously; it may contain a public key for the holder, so you can do holder binding, or a reference there, too, and that's really undefined, or doesn't need to be defined here. The important part is at the bottom: you see two new claims. One is called sd_alg, which is obviously a hash algorithm, and then sd_digests.
D
That's the hashes of the claim values with the salts, the salted hashes of all the claim values. And if this were a more complex object, where you have nested claims and so on, like in OpenID Connect for Identity Assurance, then you would usually have one digest per claim; we'll get to that in a moment. So that's like the superset of all that can be released to the verifier. Next slide...
D
...please. And then... so that was only the hashes; and then we have this construct called the salt value container, which is sent from the issuer to the holder, or the end user.
D
Next slide, please. And here we see a construction that looks really ugly when you see it for the first time, but that actually works quite well. What you see here is that, for each of the claims, we have the same structure as before, but the claim values, so to say, are now JSON-encoded strings, and these JSON-encoded strings encode an array of the salt value and the plain-text claim value.
D
Now, why do we do this? This looks strange, it feels strange, feels a bit dirty, but it works really well. Because what we need to do is send parts of this, whatever the holder wants to select, so to release, from the holder to the verifier; then the verifier needs to hash it, the salt and claim value, and needs to come to the same digest as the one that is signed in the SD-JWT.
D
So we need to ensure that everybody hashes exactly the same stuff. We could just send the salt value and the claim value separately, as plain JSON values; the problem with that would be that we need to attach the salt and the claim value together to form a byte string that is then hashed.
D
That's the first challenge: you need to have a separator character and then escaping, or some other form of encoding the salt and the claim value. In this case, we're just sending the whole thing that is to be hashed, so you already have a separation between the salt value and the claim value. The second challenge is... no, please stay on that slide.
D
The second challenge is that you have claim values that have more than one representation, and that actually affects almost all data types in JSON. If you have a string, you can use some escapings to encode that string; for numbers you have different encodings, especially floating-point numbers; and then, when you look at the address claim here, we even have things such as a complex claim, where the whole claim is another object. And all that needs to be encoded reliably into byte strings, and this is why we say: okay, the issuer does all the encoding, just makes the JSON string out of this, and just sends the whole string. The verifier gets that thing and can hash it.
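The construction just described can be sketched as follows. This is a minimal illustration, not the draft's normative encoding: the helper names are made up, and the salt shown is a placeholder.

```python
import base64
import hashlib
import json

# Sketch of the salted-digest construction described above. The issuer
# JSON-encodes the [salt, value] array exactly once; issuer, holder and
# verifier all hash that exact string, so no canonicalization is needed.
def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def disclosure_and_digest(salt, value):
    disclosure = json.dumps([salt, value])  # the string the holder releases
    digest = b64url(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

disc, dig = disclosure_and_digest("eluV5Og3gSNII8EYnsxA_A", "John")
print(disc)  # sent to the holder in the salt value container
print(dig)   # the value the issuer signs over
```

Because the value can be any JSON (a string, a number, or a whole nested object like address), encoding the pair once at the issuer sidesteps the multiple-representation problem entirely.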
D
You can do complex claim values as you like; you don't have to think about canonicalization at all, so it just works. Now, the next slide, please. And, as Kristina already said, part of that salt value container that is sent to the holder is then sent to the verifier to actually check, or to get, the claim values. Next slide, please. And to check the hashes, of course.
D
So you can see here, this is only a part, given_name and family_name in this case. For the verifier, the task is really easy: check the signature over the SD-JWT that is sent in parallel to this; then check that this thing here, which we call the release, is valid. It can be signed, so you have to check the signature, the nonce, if you bind it to the transaction, and so on. And then, of course, and this is like the single thing that a verifier can get wrong...
D
...so please don't do that, don't skip it: you have to check that, if you hash the string for the claims, you actually get the same digest as the one that is in the SD-JWT. This is the one thing that you can get wrong. We thought about how to avoid that, and we didn't come up with anything that works better than this, so we put some work into communicating that you really should not skip this part. And then, of course, after that, you can extract the claim value.
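The verifier-side check that must not be skipped can be sketched like this. It is illustrative only: the signature and nonce checks mentioned above are omitted, and the function and field names are assumptions, not the draft's.

```python
import base64
import hashlib
import json

# Illustrative verifier check described above: recompute the digest of each
# released [salt, value] string and compare it against the digests the
# issuer signed. Signature and nonce validation are deliberately omitted.
def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify_release(released, signed_digests):
    claims = {}
    for name, disclosure in released.items():
        digest = b64url(hashlib.sha256(disclosure.encode()).digest())
        if digest != signed_digests.get(name):
            raise ValueError("digest mismatch for claim %r" % name)
        salt, value = json.loads(disclosure)  # recover the plain value
        claims[name] = value
    return claims

disclosure = json.dumps(["some-salt", "Doe"])
signed = {"family_name": b64url(hashlib.sha256(disclosure.encode()).digest())}
print(verify_release({"family_name": disclosure}, signed))
```

Skipping the digest comparison would let a holder substitute arbitrary claim values, which is exactly the failure mode the speakers warn about.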
D
The nice thing about this whole construction is that, as the issuer, you can select how you want to build your SD-JWT. For example, if you have a complex claim like address here, you can say: okay, the whole thing can be released as a complete thing, so the verifier gets the complete address or nothing; or you can do a deep dive and go into the individual elements here.
D
It's relatively simple when you look at address. Next slide, please. But of course, in some use cases you have very complex objects, like here in OpenID Connect for Identity Assurance; this is an actual example from the spec, where you can see that you can do very complex things. Next slide, please. As Kristina mentioned, we already have four independent implementations.
D
So you can go ahead and play around with this thing in your favorite language, if your favorite language is Python, Kotlin, Rust, or TypeScript. Our Python implementation is the one that we call the reference implementation.
D
We built it, or we created it, actually before the spec, and it's about 500 lines of code, really small. It evolves with the spec, so we try to keep it up to date, because we also generate almost all the examples in the spec from that code.
D
So yeah, it's hopefully useful; you can also plug your own examples into it, if you like, so play around with it. Next slide, please. We also have looked at how you can format W3C VCs with this; we have a proposal in the issue tracker, and actually also in the spec. The syntax is not final, but it will work out in the end, so this will also be covered. Next slide, please. What's the current status, and what are the next steps? Next slide...
D
...please. We do have a functionally complete specification, practically complete; there are no aspects that are uncovered yet. It's somewhat stable, in the sense that we don't have big items open that we need to discuss. Obviously, there might be other input now, so let's see where this will lead us, but we're quite confident that this is already in a good state.
D
That should be relatively easy, and due to this, and also due to the simplicity, we heard from many groups and organizations already that they consider this a serious alternative to existing formats. It's easy to understand, it's easy to implement; you don't end up with just one Java implementation that nobody understands. And we heard that this will also be adopted in EU projects, nothing official that we can announce yet, but hopefully soon, and also by Microsoft.
D
Next slide, please. Why are we here today? Well, JWTs were developed in the OAuth working group, and we see this as a tool, based on JWTs, that is useful everywhere, in all kinds of technologies; so not tied to OpenID Connect, not tied to identity at all. We hope that it's a universal building block for new technologies, for example selectively disclosable access tokens, or something like that; we don't know how that will look in the end, but yeah. So these are the things that we can...
D
...and there are more things that we can also imagine, so hopefully some good use cases there as well. And, as I already said on the mailing list, we hope that this can be adopted by the working group. Next slide, please. So that we can work on this in the working group, get the feedback from the working group, and work on this together.
A
Thanks, Daniel. Rick?
F
Hi. Oh, that's really...
A
D
I don't know, but I think... my impression was that, if we go down that route, then we make the implementation much harder, because you have to implement all that stuff from that RFC as well. And now you can just use any JSON encoder; it really doesn't matter where it puts the whitespace, all these things, it just has to encode the JSON faithfully.
D
Having this get-out clause, you don't need to do canonicalization; you just need to do faithful encoding. So...
D
It would make sense to say: avoid whitespace if you can, to keep the thing small; you don't need nicely formatted JSON in there, with newlines and tabs and so on. Cool. Thank you. Thank you.
G
I definitely want to see this work progress. I want to say, maybe, that you've overstated the stability just a touch; it's in good shape for this level of maturity, but I hope, if we do proceed with adoption, that we don't use that as an excuse not to make necessary and important changes. Yeah.
D
Absolutely; I think I also wrote that on the mailing list. What I mean by this is: there are no big questions, "how do we solve X or Y," that we know of. As far as we know, this has not been implemented in actual running production environments, so we're open to changes; and stability, in this case, just means that we don't expect that, if we solve this issue that we have on our list, everything will change. Fair.
A
Just one second; Atul, Jim, and Deep are in the queue there.
I
Yeah, I'm sorry; I wanted to know the motivation for using selective disclosure JWT, per se, in OAuth, because in the OAuth flow we do not find it having much utility. It would be better if we could look at certain models of JWT in which, if we issue a token to a particular user or an app, it should not be usable by any other app.
I
So maybe, if we can prevent some kind of token stealing or token reuse, that would be a more appropriate use case as far as OAuth is concerned. So that was my thought on it. So maybe, once...
D
Yeah, thanks for the question. I didn't get the first part, but I think I got the question. I think the token theft topic is orthogonal to this; you need to prevent token theft in any case. The selective disclosure here is not meant as a way to protect data from malicious third parties that get into the flow and steal your token.
D
It's meant to release just the data that is necessary to any relying party; and if that relying party, of course, misbehaves, or becomes corrupted, or something like that, less data can be stolen from there. But token theft is a different topic, I think; we're not precluding other measures against token theft.
J
Great stuff. So, just one minor question: in the SVC, why does the salt field have the escaped double quotes? I mean, could we just use the standard format for storing the salt?
D
The salt is a string, so it's basically a base64url-encoded random number, and the salt and the plain-text claim value form an array that is then JSON-encoded. That's where you get the escaped double quotes from; so the whole thing...
H
Oh, and if you want... so I wanted to just address and comment on Brian's note on maturity. I think it actually might be a good idea to try to capitalize on some of the EU projects that are going to run on this, to do an implementation report after this, right? Because we are in a position to be able to take this from "we don't know whether this will actually work too well" to "we actually know, because we tested it." So take that opportunity...
H
...as my advice. And I would just talk to Peter Altmann, who is in the EU toolbox group, and I think he kind of has his hand on the trigger to figure out whether this goes into the reference implementation of the wallet; and then you can have a widely deployed base to point at after that.
E
Yeah, go ahead... just one second... yeah, thanks so much for bringing it up. Yes, there is a really thorough assessment of SD-JWT happening in the EU toolbox work, and I think that would give us a bit more confidence.
A
Okay, so I'm going to call for adoption and see how much support there is here; obviously, we will have to take it to the list anyway, but I want to see a show of hands: who supports adopting this document as a working group document? Okay, so one, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve... I guess I missed you there. Okay.
D
Test, one, two... yeah. So you can hear me and I can't hear you, I think. Okay, good. One bigger question arose, and that is regarding PKCE support: whether PKCE support should be mandatory for ASes.
D
So, last time we discussed PKCE already. Unfortunately, what happened in Vienna stayed in Vienna, which means, in this case, that I didn't get around to making the changes to the spec quickly after the meeting; I did now, but yeah, I should probably have done that earlier. In Vienna, we discussed that the rules around PKCE are probably fine as we have them right now; in short, what we discussed there was that clients are, with some exceptions, obliged to use PKCE.
D
The exception is: if you have a confidential client and you're using OpenID Connect, then, under precautions, you may use nonce instead. Servers are not forced to enforce PKCE; we said that it is up to the servers to do that or not, and that's now what we say there. And we said that all of this needs a bit more explanation. Those are the changes I already made. Next slide...
D
...please. Now, Mike asked whether we should... so, currently we also have the sentence saying authorization servers must support PKCE, and so the question now is: do we want that in there? Because, as we said, clients under some circumstances, when they use OpenID Connect, are confidential clients, and take precautions, can use nonce instead. So what's the value, or what's the damage, that we cause by saying that authorization servers must support PKCE? Next slide, please. Currently we say that every AS should offer PKCE, so to say, to the client.
D
The reasoning behind that is that the nonce may be contained in the authorization response as well, and if the attacker can intercept that, then the attacker can forge a new response with the same nonce and then inject that. Obviously, this requires an attacker that can intercept the authorization response.
D
What's also nice is that the check for PKCE is enforced by the authorization server, and that's really something that we see in the yes.com ecosystem with the implementations. So, speaking from the experience we have there, and also from the experience of others that we discussed earlier when creating this draft: if you can enforce the check at the AS, which in this case obviously would be an additional check, then it cannot be skipped by the client; or, if the client starts using PKCE, the AS can say: okay, you also have to provide your verifier.
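For context, here is a minimal sketch of the PKCE S256 derivation (RFC 7636) and the server-side check being discussed; the helper and variable names are illustrative, not from any particular library.

```python
import base64
import hashlib
import secrets

# Illustrative PKCE (RFC 7636) exchange: the client derives a challenge
# from a one-time verifier; the AS later rechecks the verifier at the
# token endpoint, so the check is enforced server-side and cannot be
# skipped by the client.
def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

code_verifier = b64url(secrets.token_bytes(32))  # client keeps this secret
code_challenge = b64url(hashlib.sha256(code_verifier.encode()).digest())

def as_check(presented_verifier, stored_challenge):
    """AS-side check at the token endpoint (S256 method)."""
    recomputed = b64url(hashlib.sha256(presented_verifier.encode()).digest())
    return recomputed == stored_challenge

print(as_check(code_verifier, code_challenge))
```

The point made in the talk is the last step: because the AS recomputes the challenge itself, a client that sent a code_challenge cannot later opt out of presenting the matching verifier.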
A
So, just one second here. I see Justin and George in the line; do you have questions about something that was mentioned here, or...?
K
Yeah, I had a question; I'm still struggling a little bit about... oh, sorry. Sorry.
K
When I think about state, and the way we talk about people implementing state, right, we tend to use it as a mechanism to ensure the requests are coming through the same device, the same browser, right, especially in a redirect-based flow; and I don't think anything in PKCE requires that.
K
So are we basically saying, if we use PKCE in place of state... right, again, we're ensuring that only the entity that started the flow can obtain the tokens at the end, right? But we don't know if that bounced through a whole bunch of different devices or not. So is that something we're okay losing? Because that tends to be where state gets pulled in, at least in the implementations of state I've heard of. Or do we need to add something to PKCE?
D
So, I'm not sure if I've got the question correctly, but are you saying that PKCE might not be usable in, like, cross-device flows? Oh...
K
Where the use of state basically did require that. So I feel like we're losing something if we say "do PKCE and not state." Now, maybe in certain contexts, tracking that the requests are coming from the same device is not critical; but in many cases you don't want the request to start on one device and finish on another device.
D
I'm not sure what the reasoning around state is, or was, but PKCE has this nice automatism where whoever goes to the token endpoint has to provide the correct verifier. So...
K
Yes; I guess, to me, the difference is they were handling two different aspects, from a security perspective, for the AS. And so I'm always a little bit concerned when we say "just do PKCE, don't worry about state"; that is basically the end of the thing. We can, you know, take it to the list, or whatever.
L
...mechanisms: what each mechanism is meant for, and probably advice to use them together in appropriate ways. So sticking something in the browser, so that you know that you're looking at the right PKCE value, instead of trying to smash everything into the PKCE values, is probably better advice in terms of the layering here. I actually forgot that I was in the queue, because I've been in the queue since the last presentation, and what I wanted to say there was that any call for adoption of SD-JWT needs to be done in the context of JWP, which is having its BoF...
L
...I think it's later today. I know that Daniel and Kristina are aware of that proposed work as well; I just want to make sure that the rest of the OAuth working group is also aware of that, because I do see a very real possibility of two parallel efforts solving exactly the same set of problems in very different ways, if we're not careful here. So I just want to make sure that everybody that's interested in that is tracking what are currently two different pieces of work.
A
D
Thanks, both good points. Regarding the first point, Mike posted this on the mailing list; I think that would be the right place to discuss it, so I would welcome both your and George's emails there to discuss this further.
D
C
D
Finishing up, anyway. Okay, not much here. Okay, yeah: also, if you have PKCE everywhere, offered by an AS, this provides consistency for the clients, so they know what to expect. Next slide. Yeah, let's discuss that then in the side meeting, and yeah, that's it from my side. I hope to finish up -20 this week and to close the issues soon.
A
N
All right, hi everybody, I'm Aaron Parecki. I decided to steal Brian's idea of putting photos on the first slide.
N
This is a photo I took yesterday morning from the top of that statue. So, here to start with 2.1, and then we're going to also get into browser-based apps. I have only 15 minutes, so this is going to be mostly just a status update, and I hope that it will spur some ideas of things to talk about during the side meetings; but we almost certainly don't have time for actual questions about this stuff here.
N
So the last time we talked about OAuth 2.1 was in Vienna, and we made a lot of progress on some of the open issues there; it was really great. The biggest change is probably the whole concept of the credentialed client, and the result of that discussion was to take out that term and, at the same time, simplify the definition of what we actually mean by confidential and public clients, which previously had a lot of additional meaning behind them.
N
That was sort of baked into a lot of the text. So the definition now is just: confidential clients have credentials, public clients do not, and there are no other assumptions made about the clients, like how much you can trust them, because it turns out you can trust both confidential and public clients differently, depending on a lot of other factors.
N
N
Number 46 was an easy one that we discussed last time, so now there's just a mention of that parameter and a reference out to the new spec. There is a new section: the editors were discussing this, and we realized that actually both the core RFC and the bearer token spec never explicitly say that a resource server has to actually validate tokens.
N
That actually isn't spelled out; it's kind of just an assumption that everybody makes, because it would be a good idea. That came up in the context of the limited-lifetime token discussion, and we realized there's actually nowhere that says you have to validate. So there's a new section; it's very short, and that's what it says. That's the only really new thing. Everything else was things we discussed previously, and then, yeah, the last one.
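For illustration, the checks the new section asks of a resource server, verifying the signature, expiry, and audience before trusting any claim, might look like the following minimal sketch. HS256 via the standard library stands in for whatever token format a real AS actually issues, and every name here is hypothetical:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def mint(claims: dict, key: bytes) -> str:
    # Toy HS256 JWT, only to give the validator below something to check.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def validate(token: str, key: bytes, audience: str) -> dict:
    header, payload, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("bad signature")   # never trust unverified claims
    claims = json.loads(b64url_decode(payload))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")   # the limited-lifetime check
    if claims.get("aud") != audience:
        raise ValueError("wrong audience")  # token was minted for someone else
    return claims

key = b"shared-demo-key"
tok = mint({"sub": "alice", "aud": "https://api.example", "exp": time.time() + 60}, key)
assert validate(tok, key, "https://api.example")["sub"] == "alice"
```

The point of the new section is only that some such validation must happen; which checks apply depends on the token format the deployment uses.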
N
There is the whole discussion around the redirect URI schemes and which of them are supported. They're in the better order now, the more secure order, and the sentence that says ASes have to support all three is removed, because that was really just
N
for ASes supporting native apps. Some other changes since the last meeting: a bunch of references have been updated, so thanks to everybody who's working on that. There were some unused references and a bunch of new RFCs again, so that's always great. There are some more clarifications of some of the terms, and other bits and pieces. These are issues that were already open from a lot of the feedback from Vittorio and Justin.
N
Those are all linked there. The URL there will give you the diff of what changed in the source, which is sometimes easier to read than the actual spec difference. And then I made a milestone link that links to all of the issues that were closed from this as well, in case you're curious about tracking all the details.
N
I did publish draft six last night, sorry, which is snapshotting all that stuff. Again, there's not that much; the only thing that's really new is that section on validating access tokens, which again shouldn't be a surprise to anybody. Other than that, it was all of the changes that we discussed in Vienna. So, future stuff.
N
N
I know Vittorio is waiting very eagerly on me to review that section, so that will probably be an interesting discussion when we finally do have it. There are still more to-dos on moving some of the normative language out of the security considerations; that's been an ongoing work in progress of pulling up the things that say "you have to do this" and putting them where they belong, which is again part of the whole idea of this: taking several documents and collapsing them into a shorter one.
N
So that's basically it. I'm hoping that we can have some of these discussions in the side meeting, I don't know which day, but I would love to have some time set aside to knock out some more of these, and I've got a couple of tags on the issues. I do encourage you to go look at the issues that are open there and see if any of them stick out to you as things you want to make sure get discussed sooner rather than later.
N
Great, okay. So, second document, new photo, also from the town hall, the city hall: browser-based apps. This is a draft that we started a while ago, and it turns out we actually hadn't discussed it in any of the meetings since April 2020.
N
That was one of the virtual interim meetings, and it is completely my fault for just not putting it on the agenda since then. So, a quick refresher on this document: the idea is that it is meant to be recommendations for people who are building browser-based apps, also commonly known as single-page apps, using JavaScript frameworks in a browser. It is meant to be a sort of counterpart to the native apps BCP, and it goes into detail about browser specifics and browser APIs that really are only relevant to JavaScript developers.
N
The last time this was discussed was draft six. There's been a little bit of work between then and now; we just haven't had any presentations on it in any of the meetings. The big changes are here: the clarification around the requirement for PKCE now explicitly says it's only about the access token issuance, and there's the reference to the new iss parameter.
N
There are some more notes about the scenario where the single-page app is actually on the same domain as the rest of the system, which is the sort of smallest possible deployment scenario of an OAuth system. And then, in May of last year, I did an update to pull the recommendations from the security BCP as of that date into the browser BCP, which means there are now more changes to bring in: anything in the security BCP that has changed from May until now. And there are a bunch of just minor text cleanups and stuff.
N
The big thing that is still in the works: Yannick has been very helpful in adding this pattern. This was something that we identified previously, and I think it was actually brought up last time, at the last interim meeting: the whole idea of using a service worker as the OAuth client within the context of a JavaScript app. It's a pattern that exists; I've seen some documentation about it from people's implementations.
N
There's now a whole new section about that pattern as one of the main patterns. Basically, there's a section in this document that talks about the couple of different options you have for how you can do OAuth in a browser: whether that's using a back end that is running server-side code, or doing purely a single-page app as the OAuth client. And then, even within that, you have the difference between just JavaScript code running in the browser, or a service worker being the client itself, which can better protect tokens.
N
So there are a lot of trade-offs between all of them, and the whole point of this draft is to spell out the trade-offs, not necessarily to say "this is the only possible right solution". This was a big chunk; it's still in progress, and there are a couple of notes in that pull request about some details still to work out, but it is a great start. And then, the things that we're going to do in the future; the two biggest items are:
N
We need to make sure that it's up to date with the latest info in the security BCP, since that has been changing. And then, secondly, a lot of people want guidance on how you store tokens in a browser: where do you put them, and what do we need to know about that? Again, there's no right or wrong; there are just a lot of trade-offs for the different options, so it's important to spell those out, because people will end up making different decisions anyway.
N
The two I put on here are: you could just store them in memory, where there is no persistence at all, which is technically the most secure; or you store tokens in local storage, because you want them permanently in the browser. Again, both of those have upsides and downsides, and there are a couple of other variations of things you can do as well. We just want to make sure that we get all of those listed out. So those are the big things to discuss; I'm sure a lot of people here have opinions.
N
I see an opinion face happening in front of me, so we will... I'm again hoping we can have some time during the side meetings to discuss that. I think that's the last slide, so, yeah.
B
Thanks for the update, very exciting. During the OSW there was a big discussion about the fact that storing the tokens isn't as much of a problem; it's the acquisition of the token which is the critical part, because you can be super secure about where you save stuff, but if an attacker has the ability to use you to mint new tokens, then all that protection is pointless. And so there were long discussions about the BFF pattern and the intermediaries, like the TMI BFF, which we suggested with Brian.
B
So, would you contemplate the possibility of covering those things in the browser-based apps spec in any capacity, from mentioning that they exist all the way to actually embedding, for example, the guidance that we were putting together for TMI BFF?
N
Yeah, definitely; at the very least, mentioning all of them is a good plan. That's what I want to do with that new section. I noticed that we haven't had a lot of discussion on the TMI BFF document in a while either, so maybe it does make sense to combine these two and just put it in as "here's how you do this pattern", because it is basically one more option in the tool belt, and that would be a fine place for it. I'm happy to do that.
A
N
A
Okay, we're going to talk about nested JWT, or multi-subject JWT, or JWT-embedded token. The whole document started with the idea of a specific use case that needed a way to embed one JWT into another JWT, and the natural place for it was to start with a nested JWT. The nested JWT is a mechanism that allows you to have a payload where the payload of a JWT contains another JWT, but it doesn't
A
allow the outer JWT to contain its own claims. So after I published that, I started getting more requests for different ideas, or different use cases, that need something like this, and it quickly became clear that there are many such use cases. The mechanism also needed to allow you to embed one or multiple tokens into the JWT itself, and to express the relationship between those tokens and the main token; the focus was mainly on the access token
A
in that case. Then, more recently, it became clear that there are even more use cases, in this case for the ID token to contain a JWT with multiple tokens like this. So for this reason you see that Dick and Giuseppe are now going to be co-authors of this document.
A
We're going to work on updating the document and spell out all those details, and so for this reason we're not asking for adoption at this stage; I'll just present the idea, the concept, and the problems we're trying to solve, and maybe next time we'll ask for adoption, but for now we'll just get some feedback.
A
So why is that useful? The obvious one is the audit trail: you want to be able to know who accessed what and when. In some use cases it's very useful because it allows you to present information from those JWTs in real time, and we'll talk about an example later on. And, obviously, evaluation: you get that token, you evaluate it, and you can make authorization decisions based also on the context of that token. Next slide, please.
A
A
The parent will be able to log in, and then when the resource gets the token, it will get that token for both the parent and the child. So, next slide. Another example would be multiple primary subjects: think about a married couple, where the husband or wife wants to log in and perform some operation based on the ability to access the other side's resources. Next slide.
A
A
So, in all the use cases that we mentioned so far, the tokens were issued by the same IdP. The ones that I'm going to talk about now are different, in the sense that tokens could be issued by completely different IdPs, and a prime example is a STIR
A
use case, which is a telephony use case. Think about a use case where A calls B; again, we're talking about telephony here. A calls B, but B has a redirection to C.
A
So when that happens, when C receives a call, they want to know that the initial call was actually from A to B, and not directly to C. In that case, when the redirection service receives the initial call with the token, it will create its own token, embed the original token into that token, and then ship it to C. Next slide,
A
please. Another example is the NSM project, which is a mechanism that tries to replicate the concept of a mesh, but at layer 2 and layer 3. In that case, they have a bunch of intermediaries, and those intermediaries receive one token.
A
They change the message, then create a token, embed the original message, and hand it to the next intermediary. So, another example; again, these are different IdPs. Next slide, please. Another example, talking now about the ID token: you can have multiple issuers that issue claims for the same subject. For example, you get an ID token with an embedded token that carries the date of birth from one identity IdP, and a professional accreditation, for example, from a completely different IdP. Next slide,
A
please. And a last example: this is a use case in Italy. They have multiple attribute authorities, and they need a way to embed multiple tokens in the ID token, to allow the client to access those attribute authorities directly using those embedded tokens. We'll show a few examples like this. Next slide, please.
A
A
So this is an example: this is a child-and-parent token, with the child as the primary subject here, and you have the tokens claim. Inside that tokens claim you have one token, in this case, that talks about the parent and the relationship between the embedded token and the main token. Next one, please. And this is an example of multiple embedded tokens; this is specifically from the Italian government and the way they're doing it.
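A rough illustration of the shape being described: an outer JWT whose payload carries its own claims plus a list of embedded compact JWTs, each annotated with its relationship to the primary subject. The claim names here follow the slides as presented and are hypothetical, not the draft's normative syntax, and the signature is omitted to keep the structure visible:

```python
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def unsigned_jwt(claims: dict) -> str:
    # Compact serialization without a real signature, just to show the structure.
    header = b64url(json.dumps({"alg": "none"}).encode())
    return f"{header}.{b64url(json.dumps(claims).encode())}."

# Embedded token about the parent, issued alongside the child's token.
parent_token = unsigned_jwt({"iss": "https://idp.example", "sub": "parent-123"})

# Outer token: the child is the primary subject, with its own claims, plus
# the embedded parent token and its relationship to the main subject.
outer_payload = {
    "iss": "https://idp.example",
    "sub": "child-456",
    "tokens": [
        {"token": parent_token, "relationship": "parent"},  # hypothetical claim names
    ],
}

outer = unsigned_jwt(outer_payload)
assert parent_token.startswith("eyJ")  # compact JWTs begin with the base64url of '{"'
```

Unlike a plain nested JWT, the outer token here keeps its own `iss` and `sub` while carrying the embedded token in a claim.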
A
A
O
Mike Jones. First, a question about the example on the previous slides: in your tokens list, I expected the embedded JWTs to start with double quote, eyJ. Okay.
A
O
O
Correct. And you've done a good job demonstrating that there are a lot of diverse use cases. It's not clear to me that there's a lot of commonality between them, although having this syntax and letting people use it may be the best we can do. I mean, otherwise applications such as STIR will have to define their own claims for embedding JWTs for their own purposes, which is not terrible.
O
K
George, can you go ahead? Yep. So I guess the only thing I wanted to add is that the token exchange work that Brian and others did has the concept of an actor token. I couldn't tell; some of these use cases may be, you know, purely claims-related.
K
Some of them may be things that you would want in an authorization model, and so I think we just need to make sure that, if we pick up this work, we look at it from a lot of angles: when should these things potentially be present in access tokens? Should these concepts be present from an authorization perspective, as opposed to an identity perspective? I think that will be important.
C
C
P
Alrighty, hey folks. First-time listener... or, no: long-time listener, first time
P
screwing things up, so I'd appreciate any patience here, to do a bit of an informational session, effectively, on how GitHub is looking at token theft.
P
Right: how GitHub is looking at token theft, and what we want to do to protect against it into the future; a bit of a call for conversation, and perhaps progress, at the end of this. Next slide. And just for reference, I do identity at GitHub. Next slide.
P
So the reason that we're bringing this up is that we're seeing a different kind of attack than what we typically read about. When you read the token binding specs, the typical expectation of an attack is XSS or getting malware onto somebody's desktop; but that's not what we're seeing in the wild. It's actually attackers breaking into trusted cloud providers or integrators to steal masses of tokens, in order to then attack the real victims.
P
P
They can then look into your code base, find secrets to move laterally into your infrastructure, and pull off the actual attack that they wanted to do. And so this isn't about preventing the attacks where they're getting into your infrastructure; that's SPIRE, SPIFFE, all that kind of stuff. And it's not about preventing the attacks on those cloud providers in the first place. We're really looking at mitigation: what happens once they've gotten into your cloud provider or your integrator, and what can we do to make that less bad? Next slide.
P
Obviously, there are just some kinds of tokens that you can lose; this is mostly for reference. Next slide. But all of those tokens do share some similar weaknesses that we're focusing in on: none of them are sender-constrained; basically no tokens out in the wild right now actually have binding; a lot of them have way too long a lifetime; and most are over-permissioned.
P
This is a common pattern we're seeing across all the tokens that are being stolen. Next slide. So, sort of our advice for app developers:
P
If you are going to be one of those integrators, or you're going to be a developer whose code base is being broken into through those cloud providers, we do have some suggestions. Encryption at rest: once that attacker gets into your system and dumps all of your tokens, please don't make it easy for them to use them; make them steal something else that is better protected, like your decryption key. And actually use expiring tokens.
P
What we see out in the wild is that there are tons of infinite-lifespan tokens that get committed into code bases. If you are going to be committing secrets into your code base, please turn on secret scanning of some kind and get them back out.
P
If you turn on secret scanning, there aren't any secrets inside of your code base to steal and abuse. And then, finally, and this is kind of the guidance that we're going to be providing to people who want to integrate against GitHub: just don't have credentials to lose in the first place. If you can use workload federation, or store keys inside of an HSM, please do it, because that is just one fewer thing for you to lose. Next slide.
P
There's a lot of easy-ish stuff that I think we can get done today. IP allowlists for applications with confidential clients (the focus of most of this is confidential clients): have them declare where they will be using tokens from. That ups the game from "I steal your tokens and use them" to "I steal your tokens and now still have to be within your perimeter to use them". That is extremely powerful. Limit non-expiring tokens.
P
This is something that we certainly want to get done over at GitHub: start limiting the use of infinite-lifespan tokens, eventually deprecate those and make them go away, and have everything be proper, like OAuth rotating access tokens. Or, even better, support workload federation and support people not having client secrets, not having stored tokens at all; we want them to be using their infrastructure identities as authorization against our services. And then,
P
finally, if you are issuing tokens, please register those tokens for detection with anybody who will do secret scanning. A little bit more on that on the next slide.
P
P
If you add checksums and high entropy, you can make this very low false-positive as well, which means that when developers turn this on in their code bases, they aren't going to be swamped with a bunch of stuff they want to ignore. Just to give an idea of the efficacy of this: we've had customers turn this on and find thousands of secrets inside of their code bases. This is a real problem, and we actually want to use this to combat token theft. Next slide.
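One way to get the scannable, low-false-positive shape being described is a fixed prefix plus a checksum over the random body, so a scanner can match the prefix and verify the checksum offline without calling any API. The sketch below is an illustration only; the prefix, lengths, and layout are invented here and are not GitHub's actual token format:

```python
import secrets
import string
import zlib

ALPHABET = string.ascii_letters + string.digits
PREFIX = "demo_"  # hypothetical, scanner-matchable prefix

def mint_token() -> str:
    # High-entropy random body, followed by a CRC32 checksum in hex.
    body = "".join(secrets.choice(ALPHABET) for _ in range(30))
    checksum = format(zlib.crc32(body.encode()) & 0xFFFFFFFF, "08x")
    return f"{PREFIX}{body}{checksum}"

def looks_like_token(candidate: str) -> bool:
    # A secret scanner can verify the checksum entirely offline.
    if not candidate.startswith(PREFIX) or len(candidate) != len(PREFIX) + 38:
        return False
    body, checksum = candidate[len(PREFIX):-8], candidate[-8:]
    return format(zlib.crc32(body.encode()) & 0xFFFFFFFF, "08x") == checksum

token = mint_token()
assert looks_like_token(token)
corrupted = token[:-1] + ("0" if token[-1] != "0" else "1")
assert not looks_like_token(corrupted)  # lookalike strings fail the checksum
```

The checksum adds no security by itself; its only job is to let scanners flag real tokens with very few false positives.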
And then, kind of the meat of the topic is token binding. Right? Every time there's a big token theft that hits the news, people say: so why were they able to use those tokens? Bearer tokens, they suck; let's fix that. Let's talk about that. Next slide. So, this is purely from the perspective of an implementer who is actually told to go fix
P
this: RFC 8705 is dead. We're not going to implement it; we're not interested, because it doesn't support enough platforms. It doesn't work across all of the platforms that we care about. Yeah, sorry: RFC 8705 is mTLS binding, right; this is channel binding.
P
It doesn't work because it doesn't work inside the browser anymore, and so if we have to invest in that, it doesn't get us far enough. Mobile apps, though... sorry, that is to say: DPoP seems to be offering us a lot of what we actually care about, which is device binding, making sure that the device we gave the token to is the same one using it. It seems to be working pretty well for mobile apps; this is something that we can reasonably go out and implement pretty cleanly today. Desktop apps:
P
you can start using the macOS keychain, you can start using TPMs to bind keys, but because there's no strong application identity, you get medium-IL privileges, you run as the user, and you can just say that you're the GitHub CLI, or just say that you're Chrome, and you can get those keys. Okay, that's not great. Confidential clients: this actually seems fairly reasonable from a technical perspective, how we can start having confidential clients prove that they are still the same client that was sent the token. However, there's no profile for this.
P
There isn't a dedicated way that we have seen to actually go implement this. And then, finally, web apps: there's the Web Crypto API, which is great, but if you can XSS the tokens out, you can probably XSS out a use for them as well. We'd love to see some stronger device binding there, because, next slide, the risk profile that we're looking at is: assume they've gotten into your database; assume that your site has XSS.
P
How do we still bind the tokens, given those two things? And so what we're looking for is really to force attackers to get to the next level, which is infrastructure compromise: either on the desktop that your application is running inside of, in the browser or as an actual desktop app, or into your web app infrastructure. They need to be able to stand up a microservice inside of your trusted perimeter.
P
That's the bar that we want attackers to have to meet in order to lift, steal, and use tokens, and right now, for web apps, it seems like that's hovering right around "we can do two, but not three". So, next slide: there's also a lot of stuff here that seems like it's not quite solved yet, and again, this is basically just "please come talk to me; I'd love to help figure out how we can solve these". One is what I'm calling referred binding.
P
I think there are probably better terms for it, but: when that authorization code is sent to a confidential client, they then have to respond with a cookie that is somehow bound to the same browser that got that authorization code. There seem to be some weaknesses there around whether that is the optimum time to go steal something and replay it. TPM rate limits: this works fine if you're on, like, a Windows device; you can get 10-20 signatures a minute out of that thing, and everything works fine.
P
One that is particularly interesting to us is hosted workloads, or shared execution. So if you're a CI/CD system, or GitHub Actions, and you are requesting tokens for a workload that you are hosting, how do you bind those, and what do you even bind them to?
P
P
If we're saying "hey, we want to durably identify your device", there's a whole separate community, and kind of zeitgeist, right now around making sure that's not possible, so I'm not sure where that tension lands. Next slide. So, call to action: we'd really love to see a confidential client profile for DPoP, and again, we're very happy to work with you on this and figure out how we move it forward; working with the browser vendors to bind keys from the operating system up through the browser into web applications, so that we can hopefully heighten some of the security guarantees there; and then, finally, really just guidance and standards for what token binding looks like in a successful deployment. Again, we're there to both help move that forward and be a reference. Next slide.
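For orientation, the DPoP proof being discussed is a JWT whose header carries the client's public key and whose claims bind the proof to one specific HTTP request. The sketch below only assembles the header and claims to show that shape; a real proof must be signed with the private key matching the header's `jwk` (typically ES256), which is what actually makes the token sender-constrained and is omitted here:

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dpop_proof_unsigned(htm: str, htu: str, public_jwk: dict) -> str:
    header = {
        "typ": "dpop+jwt",
        "alg": "ES256",      # DPoP requires an asymmetric algorithm
        "jwk": public_jwk,   # the client's public key travels in the header
    }
    claims = {
        "htm": htm,                # HTTP method of the request being bound
        "htu": htu,                # HTTP URI of the request being bound
        "iat": int(time.time()),   # freshness check
        "jti": str(uuid.uuid4()),  # unique ID, lets the server reject replays
    }
    # Signature intentionally left empty in this structural sketch.
    return f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}."

jwk = {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."}  # placeholder key material
proof = dpop_proof_unsigned("POST", "https://as.example/token", jwk)
```

A confidential-client profile, as requested here, would pin down how such proofs interact with client authentication; nothing in this sketch answers that question.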
P
O
B
I think that doing something with the browser, for supporting browser artifacts like cookies and similar, given the demise of token binding, would be very good for the industry. I think it's a very good idea, and I'm looking forward to seeing how your proposal develops; and you know that we work with browser vendors for other reasons, so please feel free to use us as much as possible.
I
A
A
I
So I had a question. Basically, sometimes we have seen that, even for native apps, the authorization code flow is used, so many times the client secrets are embedded into the desktop apps or the native apps. So are we looking also at ways to bind those client secrets in some way, or is it restricted to only tokens?
P
Yeah, absolutely. Actually, I was a little bummed to see that Aaron had talked about removing credentialed clients from 2.1, because there was some interest in, like, temporary credentials that are assigned on a per-application basis. I had no idea how that was actually going to function, but no.
N
Briefly: I did mention this last in Vienna, or at an event right after Vienna, at the OAuth Security Workshop: the possibility of turning native apps into confidential clients using other mechanisms that exist. So we'd love to talk about that further and see what your interest is in that, yeah.
A
A
A
M
Mostly, it was taking security requirements that were optional in OAuth 2.0 and making them mandatory. We also have, as our target environment, an enterprise environment, and so clients and servers have PKI credentials that they use to authenticate to each other using mutual TLS. And so now we're working on extending OAuth Token Exchange
M
for a case where you have a protected resource in one environment that needs to access a protected resource in a different organization's environment. And so, next slide.
M
M
Excuse me. So we make use of OAuth Token Exchange, but in a way that's a mix of a profile of, and an extension to, token exchange. The goals for this week are to get some feedback. I sent the document to the mailing list, and we haven't had a chance to get a public release for our website yet, but I'm happy to send it to you.
M
So some requirements that we have are that, when PR2 in the second organization receives the access token that PR1 obtained through token exchange, that token needs to have the client ID and the confirmation claim related to PR1, so that PR2 can do some verification on the presentation of the token.
M
It also includes an actor claim, which includes the subject, PR1, and the issuer of that particular token, plus all previous actor claims. So you can imagine this may propagate from PR1 to PR2 to PR3, so that the final protected resource can verify the identities of everyone participating in the exchange. Next slide.
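The propagating actor chain described here matches the nested `act` claim from RFC 8693 Token Exchange, where each hop becomes the current actor and the previous actors nest inside it. A small sketch, with all identifiers invented for illustration:

```python
def add_actor(claims: dict, actor_sub: str, actor_iss: str) -> dict:
    # Each hop becomes the current actor; any previous act claim nests inside
    # it, preserving the full delegation history (RFC 8693, section 4.1).
    new_act = {"sub": actor_sub, "iss": actor_iss}
    if "act" in claims:
        new_act["act"] = claims["act"]
    return {**claims, "act": new_act}

token = {"sub": "user@org1.example", "aud": "pr2.org2.example"}
token = add_actor(token, "pr1.org1.example", "as1.org1.example")  # PR1 acts for the user
token = add_actor(token, "pr2.org2.example", "as2.org2.example")  # then PR2 acts in turn

# A downstream protected resource can walk the chain and verify every participant:
assert token["act"]["sub"] == "pr2.org2.example"
assert token["act"]["act"]["sub"] == "pr1.org1.example"
```

The outermost `act` is always the most recent actor, so verification proceeds from the current hop back to the original delegate.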
M
It presents that token to PR2 in the second organization, and then PR2 validates the token and identity with its authorization server, AS2, in organization 2. Because we have this attribute-sharing infrastructure, AS1 and AS2 can communicate with each other out of band, so AS2 can validate the token that it received, since the first token was generated by AS1. There needs to be this underlying attribute-sharing infrastructure for the two authorization servers to communicate with each other.
M
Now, the next slide, please. The use case that we've had problems with in the past, and some of you may recall we proposed this without having a solution: now I'm proposing a solution.
M
So in this case, when PR1 performs token exchange with its authorization server, AS1, instead of AS1 generating the token and returning it, it actually pauses and generates a JWT assertion, and sends that to AS2; AS2 generates the token and sends it back to AS1, which then returns it to PR1. And then everything is fine from there on out, because PR2 can validate the token, since it was generated by AS2.
M
So, to fix this problem, we are proposing a new claim. Next slide. This claim is "chained ID", which basically just passes those two bits of information, the client ID and cnf fields, from PR1 to AS2. AS2 then populates the standard claims according to the specs, but instead of filling the client ID and cnf fields with the values of AS1, it uses the ones passed from PR1.
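As described in the talk, the proposed claim simply forwards PR1's client ID and key-confirmation material through AS1's assertion, so that AS2 can mint a token naming PR1 rather than AS1. This is a hypothetical sketch reconstructed from the presentation, not from a published draft; the claim name and every identifier here are assumptions:

```python
# What PR1's leg of the exchange establishes at AS1 (values invented):
pr1_binding = {
    "client_id": "pr1.org1.example",
    "cnf": {"x5t#S256": "pr1-cert-thumbprint"},  # PR1's key confirmation
}

# AS1's assertion to AS2 carries the binding in the proposed claim:
assertion_claims = {
    "iss": "as1.org1.example",
    "aud": "as2.org2.example",
    "chained_id": pr1_binding,  # hypothetical claim name, as heard in the talk
}

def mint_at_as2(assertion: dict) -> dict:
    # AS2 fills the standard fields, but with PR1's identity and key instead
    # of AS1's, so that PR2 can verify the actual presenter of the token.
    chained = assertion["chained_id"]
    return {
        "iss": "as2.org2.example",
        "client_id": chained["client_id"],
        "cnf": chained["cnf"],
    }

access_token_claims = mint_at_as2(assertion_claims)
assert access_token_claims["client_id"] == "pr1.org1.example"
```

The open question raised on the slide, whether this custom processing at AS1 is reasonable, is exactly the step where AS1 forwards `pr1_binding` instead of its own identity.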
M
M
And so that's my proposal for a new claim. Next slide. I'd like to get some feedback on whether this new custom processing at AS1 is reasonable, and whether this new claim could be valid or useful to anybody else. We do have some implementations: we've got one in PingFederate right now that uses custom processing at AS1, like I mentioned, and we're working on Keycloak next. And I know we don't have a lot of time today,
M
so I'm happy to be available the rest of the week to discuss any interest in these profiles, as well as what you think of this new claim.
Q
Hi, sorry, Meetecho wouldn't let me log in. Is there a draft for this?
M
Yeah, so, like I mentioned, we have to go through a public release process, and so it hasn't been posted to a URL yet, but I can send it to you. It has been posted to the mailing list; I included it as an attachment, and I don't know if it got scrubbed on the way to you. Okay, but I'm happy to send the document out to anyone who requests it.
Q
Okay,
so
my
general
comment
when,
as
soon
as
I
saw
a
client
id
my
my
thought
was:
what's
the
format
of
that,
is
there
a
specified
format
or
is
it
ambiguous?
Is
it
a
you
know,
just
a
opaque
string.
Q
So
I
want
to
caution
anyone
who
likes
the
idea
of
leaving
opaque
strings.
I
want
to
use
the
cautionary
tale
of
the
subject:
field
of
x-509
certificates.
Q
We had to go and invent a whole other extension, subjectAltName, which has a structure with several different types, so that you can partially unscrew the mess that was created by having something completely opaque. I think it's perfectly fine if you have types, and one of the types is "this is an opaque blob"; but not having a way to indicate that there is a structure is a recipe for pain down the road later.

Got it, thank you.
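The subjectAltName lesson can be made concrete with a small sketch: an identifier that always carries a type tag, where "opaque" is itself just one permitted type. This is purely illustrative and not part of any draft under discussion; the type names are invented for the example:

```python
# Typed identifier sketch, mirroring how subjectAltName fixed the
# opaque X.509 subject field: every value declares its type, and
# "opaque" is merely one of the allowed types.

ALLOWED_TYPES = {"uri", "email", "dns", "opaque"}

def make_identifier(id_type, value):
    """Build a typed identifier; rejects unknown types up front."""
    if id_type not in ALLOWED_TYPES:
        raise ValueError("unknown identifier type: " + id_type)
    return {"type": id_type, "value": value}

def is_structured(identifier):
    """Consumers can tell structured values from opaque blobs
    instead of having to guess at a bare string."""
    return identifier["type"] != "opaque"
```

The point of the tag is exactly the commenter's: a consumer can branch on the type instead of reverse-engineering string contents later.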
F
Sorry, Rick Taylor here. I should probably use the Meetecho app, but I've just jumped to my feet. I'm just observing; I obviously haven't read this draft, because it's difficult to get hold of, and I haven't read Rifaat's draft. But, just thinking off the top of my head: isn't the solution to this claim, this kind of delegated authority that AS1 is asserting, that what should go in this new claim field is actually an embedded sub-token?
J
Next on the schedule: I'm Atul Tulshibagwale, CTO of a relatively new company called SGNL, and that's my email address and Twitter handle there. So let's jump right into it. What I'm going to talk about today is more of a question.
J
Do we need something like this? I'm trying to have a discussion about whether there is a need to standardize something at the level of RPCs, which is somewhat lower-level than what we are used to talking about. And the other question that brings up is: is this the right forum, or is there some other working group where this should be discussed? So, how RPC security seems to work today, in most cases (I know some companies do things differently), is that at the top level you get a service that is consuming an OAuth token, and then there's a whole bunch of microservices that hang off of that, and all of that context is lost at the top level itself.
J
The other problem is when you call a third-party API, like in the presentation before, where you have an API key. That key is very powerful, because when you call that third-party API you're using a token that could be long-lived and could do things on behalf of many users; as a result, by using that key, you can do a lot of damage to the tenant that you have in the third-party platform that is providing that API. And the third problem that we see is that as more multi-cloud deployments proliferate, where your own tenants are spread across multiple cloud platforms, you still need some security model that crosses those boundaries; maybe you're using your own kind of API keys, or maybe you're creating some specialized kind of TLS connections between your VPCs.
J
But again, all of these things seem to have a few problems. Let's go to the next slide, where I talk about what the issues are. Basically, the issue is that if you have a VPC compromise, you end up offering a terrible amount of power to any attacker. For example (I'm not trying to pick on any one company), there's a company called OneLogin, where an attacker got into their VPC.
J
That attacker was in their VPC for months, and that company unfortunately held the credentials of a lot of users at thousands of other companies. A massive compromise resulted from that; I don't know the extent of the compromise, but I can imagine the possibilities. And there are various ways in which this can be achieved: you can have a software supply-chain attack, where a microservice within your VPC gets compromised, and then the attacker has control over your VPC that way.
J
Or, like the person from GitHub pointed out, you could have a dev-chain issue, where your GitHub has been compromised and from there your VPC gets compromised. Or, as in the case of OneLogin, the privileged user who had administrative access to the VPC had their account phished, and the attacker was able to enter the VPC that way. So I think we can imagine:
J
There are various scenarios in which VPCs can get compromised, and what we would like to do is make sure that even if such a compromise happens, the extent of the damage can be limited. The same applies when you're using a third-party API: let's say your tenant uses some custom process to call a Workday HR API to onboard people onto your system.
J
Imagine somebody was able to get into your VPC and actually use that same key to create a fake employee who had a lot of access in your organization. So these API keys are very powerful, and if you can somehow restrict that power, that would be good as well.
J
The other problem, like I said, is multi-cloud deployments: you may have to roll your own security today, and there's no standard for doing that. And one important thing is that the OAuth token that comes in has a lot of context: it has a scope, and it has a user identity associated with it, and that context is lost in subsequent calls.
J
So if you can somehow preserve that context all the way through to the lowest level, maybe down to a database level, that can limit the attack possibilities quite a bit. Next slide, please. While I don't have a solution, I have seen some solutions previously; I used to work at Google and really admire what they've done internally. What I feel is that a good solution would preserve the identity and scope at any level.
J
You could restrict the scope even further if you want, all the way through, across different cloud platforms, down to your lowest-level unit, the database. Then the possibility of damage from a VPC compromise becomes much lower, because even if you had a fake service in the VPC, you're not going to be able to do much unless you have a user-credential token with a specific scope that lets you act on behalf of that user.
J
If I steal a token, I shouldn't be able to use it sometime later and replay that user's ability. And because we are talking about RPCs, which happen much more frequently than a token exchange at the higher level, you need this to be super efficient. So, what are possible solutions here? You need to be able to come up with a token that is specific to users and scopes, and you should be able to
J
further restrict the scope in downstream calls, so that a service cannot suddenly switch the context. If I'm operating on behalf of user A, who is asking me to make a change to their personal email address, I can't switch it to user B trying to change their social security number, or something like that.
J
So you need to be able to do something like that. And in order to limit replay, you may have to go with
J
much shorter-lived tokens; I think something like that was proposed earlier in today's session as well. And you should be able to bind these tokens strongly between the originating and destination services, so that you cannot reuse those tokens for anything other than what they were intended for; and somehow achieve interoperability across cloud boundaries, maybe by token introspection, or by having a common root of trust and then being able to verify those tokens when they're received. So that's all I had.
J
I know I may be finishing much more quickly, somewhat by design, because I want a lot of discussion on this even today. Hopefully you have some questions. Okay.
R
Whoa, looks like I'm talking. Great. So I have several points to make, actually, and I guess the highest-level one is just to get a common understanding of what we mean by RPC, because in the talk it sort of sounded like you...
R
A question to consider: one of the things that comes to mind as a potential approach, to combine a solution for all of these, would be if the highly privileged credential that the service implementation has is not directly usable for calling the microservices.
R
And I know that's going to be hard to do efficiently, which I think you specifically raised. But it's also the only thing that really comes to mind as a way to combine all these properties. I just dropped a lot on you, so ask for clarification if you need anything; I'm happy to hear your thoughts.
J
There's also the multi-cloud aspect of this, where you have multiple VPCs on different cloud platforms, so you're trying to solve a similar problem, just within your own infrastructure, if you will. But yeah, you're right in terms of the scope of the issue that I'm trying to bring up here. Was there anything specific you wanted me to answer?
R
I'm kind of curious to hear your thoughts about how this relates to existing solutions for the confused-deputy problem, because I would hope that people are already solving the confused-deputy problem, and I don't know if you were thinking this would build on top of that or be a different sort of solution.
J
Right, so I guess I'm aware of a few proprietary solutions today that solve some of these problems, but I'm not aware of anything that has been adopted as a standard.
K
All right, yeah, I was just going to follow up. I threw a couple of links in the chat as well, to work that's very similar to this. I'm not sure that trying to solve all of these problems with a single solution is going to make sense. For the third-party API case, you might want to think about what Kelly from MITRE was talking about with cross-domain scenarios,
K
If
those
are
truly
third
parties
that
I'm
reaching
out
to
and
how
that
might
be
a
better
solution,
as
as
it
relates
to
the
token
management,
but
for
sure
the
the
top
two
or
three
things.
I've
seen
a
couple
of
things.
One
of
them
is
from
netflix.
One
of
them
was
a
talk
I
gave
identiverse
in
2019,
so
I
think
there's
thoughts
in
the
space.
I
would
agree
with
you.
I
don't
know
of
any
standard
that
exists
today,
but
I
think
we
have
a
lot
of
things
to
build
on.
J
That's great, I'll look them up. I'm obviously not familiar with the Netflix implementation, but I'd love to take a look. So I think the real question is whether this is something that everyone can benefit from through standardization; that's the open question I wanted to bring up in this talk today.
C
Yeah, I hope you're here this week, so that may give us a chance to socialize and discuss this, dig deeper into this aspect, and see whether some of the folks in the room have already thought about solutions or have run into similar challenges.
J
Right, yeah. Informally, I've spoken to some folks at one of the big platform providers, AWS, and they seem to have interest in standardizing. I think Peter and I also talked about this, and there seems to be some very early interest from Google. So things may be going in the right direction, but this is a good week for me to have discussions and build up a little bit of mindshare.
J
Sure, yeah, thanks everyone. Maybe in one of those meetings we can go into some detail.