From YouTube: OAUTH WG Interim Meeting, 2020-03-09
A
Okay, let's get going, I guess; we're three minutes past. This meeting is here to discuss PoP. So what I would like to do first: I have one page to show about that, about the documents, because Roman and I don't have a very long history with OAuth and some of those documents are really old. So I want to just present those documents.
A
Okay, so I just want to talk about those, say, for a minute or two here, and then Brian has prepared a few slides that he wants to present and discuss, and I guess we'll take it from there. So that's my plan: just to talk about this in general and then dig deeper into it. So these are the documents that we have today about PoP.
A
And then we've talked about the HTTP signatures: there are a few old drafts and a newer one that is being discussed right now. There is the ACE work, with one RFC and an OAuth authorization draft that has been an RFC for some time. There is the mTLS work, and the recent one is DPoP. So I guess our goal here in this meeting is to try to understand the overlaps between all of those proposals that we have on the table and, if there are any misalignments between those proposals, understand why and how we could resolve those. Again, we personally, Roman and myself, don't have lots of history with OAuth, so some of those are completely new to us, or we haven't looked at them in a long time; for example the initial ones, the Cavage draft and the signed HTTP request one. So that's the goal of the meeting. Any comments about this? Okay, that sounds like a plan here.
A
I don't have slides prepared, so I will, I guess, pass it to Brian, to walk us through his slides and what he has to say, and we'll take it from there.
D
Does it show up in slideshow mode here? Yeah, okay. Oops. So, yeah. So, as is my trademark here, I'll throw a few photos up, and we've got an interim meeting; this is the second one between Singapore and Vancouver. So here we are.
D
I want to talk about just sort of the general concepts of proof of possession in OAuth 2 and the variety of different tracks of work that are out there, what's currently being worked on or what there's interest in, and see if we can come to at least some agreement on where we should be taking the work and what we should be doing going forward. So I'll try to set the stage for some of that, and, yeah.
D
So back in Singapore, at IETF 106, we had two different meetings, and just to set some tone for how we got to this interim: there's a recording on YouTube that shows me, in two different outfits at two different meetings, presenting the same slide.
D
I had prepared a presentation on DPoP and tried to get through the content twice, and got to slide six before a barrage of questions and comments and criticisms were levied, and we never made it past there. During that time, I think during the end of the second attempt, Hannes suggested we consider doing a virtual interim, to see if we could move the conversation forward a little bit more than we had during that time. So some things that were not presented at that time were:
D
Some folks sort of praising or expressing interest in the work, to show that there was interest out there in the work progressing, people eager to see it progress and considering implementing it. And that whole presentation was meant to wrap up with a beautiful picture of Vancouver and a humble call that the working group consider a call for adoption on the DPoP draft. But it didn't go down that way, and maybe it's for the better.
D
But there were a lot of different comments, opinions, and concerns expressed around the draft, and I tried to capture the spirit of the meeting here, around the outside, with all the various commenters. This was all taken from the YouTube recording of the meeting (actually just one meeting). But we didn't get to a call for adoption; it's still an individual draft, and I think there are some...
D
I guess diverse opinions about whether that should become a working group item, as-is, or whether there are other pursuits the working group should follow to try to attain some sort of reliable DPoP-type, I'm sorry, proof-of-possession-type, method in OAuth itself.
D
Yeah, just setting a little bit of tone here. So Hannes suggested the interim and we did get an interim scheduled, but it ended up being focused around the 2.1 (or whatever it may end up being called) effort, and then there was a later poll around discussing PoP or DPoP, and then, finally, a sort of last-minute scheduling of this, as we approach the actual meeting in Vancouver, assuming that does happen.
D
But that's a different question. And so this list of drafts here, as well as Daniel sort of asking what happened to DPoP in the list of topics: not necessarily because it's the only thing here, but because it is sort of the catalyst of the discussion right now around how the working group proceeds with the PoP question.
D
So, some of the motivations, at least in my mind, but I think in general representing some of the working group's motivations, for something like DPoP or a generalized PoP approach: there's sort of this broad one, that we'd like to do something that's better than bearer tokens. That's broad, and there are a lot of different reasons for wanting it, but I do think that there is at least some consensus for having something in this solution space.
D
Also, we have the OAuth 2 Security BCP that was making its way through last call (we might have another one), and, somewhat aspirationally, it does recommend the use of sender-constrained tokens. And there's some work occurring in the FAPI (Financial-grade API) working group that does the same: it basically recommends, or even requires, the use of some kind of sender-constrained, key-constrained tokens. And largely, at least in the BCP,
D
this is around preventing replay at a different endpoint or resource from the one that originally received the tokens, but there are a number of other benefits that are bestowed upon PoP tokens as well. Proof-of-possession-bound refresh tokens for public clients, clients that don't themselves have provisioned authentication credentials, are also a recommendation of the Security BCP, and there's really not a lot out there.
D
The mTLS spec was recently published as an RFC, which is great, and I think it's super useful for a small, sort of niche set of deployments. But, quoting some sentiment that I think is held reasonably widely, it is virtually undeployable for general-purpose applications. I'll let them remain nameless, but a working group participant said that during the meeting. And short of mTLS we don't really have a viable solution, specced out and interoperable, that we can recommend to the populace at large,
D
at the same time that we have these best-practice security documents saying you need to do this. Which is sort of a problematic situation to be in: sending out advice that can't actually be met in the vast majority of cases. And the problem, or the lack of solutions, for PoP-type tokens is actually exacerbated when we're talking about single-page applications, or JavaScript applications running in the browser.
D
If you look at mTLS for OAuth, it would have major usability issues with SPAs. When you get into the browser requesting that the end user select certificates, outside of very, very tightly constrained corporate environments that pre-provision certificates to their browsers, I don't think that's the sort of thing the average user is really capable of, or willing, to deal with.
D
It would cause a lot more problems than it would solve. And while many of us, myself included, sort of pinned our hopes on token binding for a while, it's really dead in the water in terms of adoption and implementation in the world. And even if it were moving along, or weren't caught in that situation, it needed fetch-level API changes to actually work for SPA-type applications, and that's an even bigger push to get in there. So it's just not happening.
D
There's a lot here; I'll try to go through these quickly, but to reiterate some of the things from the beginning, this is sort of a look at the existing PoP efforts that we've had in OAuth throughout the ages, if you will. OAuth 1 started with what was effectively a sort of MAC-style signature operation, with the client key, or client secret, or whatever it was called at the time, sort of mixed into all that. And at least the folklore,
D
the canon, is that that was very difficult and made interop very, very problematic. And as sort of a response to that, OAuth 2 itself pushed away from doing that level of signature verification and tried to rely more completely on TLS. But some of the participants at the time, including the author up until the very end, were still interested in having some sort of PoP work.
D
It largely just defines the cnf claim, the confirmation claim, which is a way to embed in JWTs the key that you would presumably need to confirm, somehow, to have a proof-of-possession token. But it's completely silent on the way that you would get that claim into the token in a reliable way, or on what the means of actually proving possession, or confirmation, during an actual HTTP exchange or protocol exchange would look like. So it's an important building block, but it sort of doesn't do anything on its own.
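To make the building block concrete: here is a minimal sketch of a JWT claims set carrying an RFC 7800 confirmation key (my illustration, not anything shown at the meeting; the issuer, subject, and key coordinate values are made-up placeholders, not real key material).

```python
# A minimal, illustrative JWT claims set carrying an RFC 7800 "cnf"
# (confirmation) claim. All values here are made-up placeholders.
import json

access_token_claims = {
    "iss": "https://server.example.com",
    "sub": "client-123",
    "exp": 1583760000,
    # The confirmation claim carries the public key whose possession
    # the presenter must later somehow prove.
    "cnf": {
        "jwk": {
            "kty": "EC",
            "crv": "P-256",
            "x": "placeholder-x-coordinate",
            "y": "placeholder-y-coordinate",
        }
    },
}

# RFC 7800 only defines how to carry the key; as noted above, it says
# nothing about how the claim gets into the token or how possession is
# actually proved on an HTTP exchange.
print(json.dumps(access_token_claims, indent=2))
```

As the speaker says, everything beyond carrying the key is out of scope for RFC 7800 itself.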
D
We had an architecture document discussing sort of ideas, and almost requirements, around what proof of possession would look like, as well as this authorization-server-to-client key distribution draft, the PoP key distribution, which (it's now expired) describes both how a public key could be presented to the authorization server by the client, and how the server could present to the client a symmetric key, or, it looks like, a list of symmetric keys; it's a little bit unclear. But it's really just about the client and the authorization server telling each other about the keys.
D
So the idea being that if the client said just what parts it was going to sign, you could work around some of those things. But it also didn't have a minimum amount of what needed to be signed, and left a lot up to sort of be negotiated and figured out between the two, and also wasn't able to handle some things like repeating request parameters, and some other cases like that. I think we all know that signed HTTP requests are difficult to do, for a variety of reasons.
D
But we also had here an OAuth profile of using HTTPS token binding, and I admit to having been very excited and distracted by this, but larger factors came along, and this work, as I described a little bit earlier, is basically dead.
D
During the course of some of this stuff, Nat published this JWT PoP token usage draft, which I wouldn't say is really a protocol at all, or anything that would be interoperable, but it is an interesting exploration of a few different ways that PoP tokens could sort of manifest themselves:
D
everything from some work that later ended up being part of the mTLS work right below it, to a challenge-response type of protocol. There are some ideas in there, but nothing that took hold, beyond at least laying the groundwork for some of the ideas that came up in the OAuth 2 mutual-TLS client authentication and certificate-bound access tokens work. That one itself does a few things, including client authentication with mutual TLS, and thus binding refresh tokens to that client authentication, as well as affording the ability to bind refresh tokens to mTLS for public clients and, of course, the generalized sort of proof-of-possession certificate-bound access tokens.
D
It's now an RFC; it's been implemented and deployed; it's out there. But it really does have some difficulties in deployment that are just sort of inherent in the nature of how mutual-TLS deployments and trust frameworks and various components work. So it can be used, it does meet a pretty high bar of security, and it binds these access tokens to private keys with proof of possession at the transport layer. It all works really well if you can get it deployed, but it's just not feasible,
D
I think, for a pretty significant chunk of the deployments out there, and in particular for the SPA-type deployments it's just sort of a non-starter; it just doesn't work. And so, about a year or so ago (I've lost track of time),
D
some of us were talking at the OAuth Security Workshop and started to revitalize the idea of some sort of application-layer mechanism to do proof of possession, one that might be a little bit simpler and more generally applicable than something like mTLS. That eventually manifested itself in the form of the DPoP work, which we've discussed a few times at the face-to-face meetings and which has just not progressed yet; it hasn't gotten past this sort of individual-draft stage. During the time leading up to the last meeting,
D
Neil proposed the idea of looking into doing an ECDH-type key exchange up front in the series of exchanges, and then using the resulting symmetric key to do an HMAC, rather than a public-key signature, over the sort of little proof token for each individual request. When I first heard it, it was leading up to the meeting itself, and I was sort of invested in presenting DPoP, though I sort of liked the idea.
D
So I was a little reluctant to really listen to it. But he did some additional work showing that, while it is more complicated than DPoP, it's not that significantly more complicated; probably it's maybe a step more complicated, not an order of magnitude. And it maybe is something that can land us in a middle ground of doing an application-level proof of possession that is also reasonably performant in terms of the use of cryptography.
D
So I include that here, even though it's just an email, because I'm actually really interested in the concept moving forward, and I'd like us to think about it.
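The idea being discussed can be sketched roughly as follows. This is my own illustration, not anything from the list or a draft: the up-front ECDH agreement is assumed to have already produced the shared key (that step is elided), and the htm/htu/jti field names are borrowed from DPoP only for familiarity.

```python
# Sketch of the ECDH-then-HMAC idea: an up-front key agreement yields
# a shared symmetric key (assumed here; it would come from ECDH plus a
# KDF), and each request then carries a cheap HMAC proof instead of a
# per-request public-key signature.
import hashlib
import hmac
import json
import time

# Assumed output of the up-front ECDH agreement (placeholder bytes).
shared_key = bytes(range(32))

def make_proof(method, uri, jti):
    """Client side: HMAC a small proof structure bound to the request."""
    payload = json.dumps(
        {"htm": method, "htu": uri, "jti": jti, "iat": int(time.time())},
        sort_keys=True,
    )
    tag = hmac.new(shared_key, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + tag

def verify_proof(proof, method, uri):
    """Server side: recompute the HMAC and check the bound fields."""
    payload, _, tag = proof.rpartition(".")
    expected = hmac.new(shared_key, payload.encode(), hashlib.sha256).hexdigest()
    claims = json.loads(payload)
    return (hmac.compare_digest(tag, expected)
            and claims["htm"] == method
            and claims["htu"] == uri)

proof = make_proof("GET", "https://rs.example/resource", "jti-1")
assert verify_proof(proof, "GET", "https://rs.example/resource")
# A proof bound to one request does not verify for a different one.
assert not verify_proof(proof, "POST", "https://rs.example/resource")
```

The per-request cost is a single HMAC on each side, which is the amortization the proposal is after: the expensive asymmetric operation happens once, at key agreement time.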
A
So, Brian, just a question about that: DPoP, and the alternative approach also, does any of this overlap with any of the previous proposals that you listed at the top there?
D
Not really. So, you know, most of what we've done does overlap with RFC 7800, or maybe "overlap" isn't the right word, but it does use RFC 7800 in some capacity to carry the sort of confirmation or proof key. But the actual OAuth-level and protected-resource access request-and-response protocol stuff is not there; there's not the same kind of overlap, if that makes sense.
E
Brian, there is overlap conceptually; many of the solutions are very similar. For example, if you take Justin's HTTP signing: it's not that you do something entirely different, it's still HTTP signing. You missed out one thing that I relayed from the ACE working group: when they worked on OSCORE, they also thought that that would be a potential solution. But, so, yeah.
D
There's a ton of conceptual similarity across a lot of these things, and you did put the OSCORE security work in there, but I'm not sure that that's really the kind of solution anybody in the OAuth space is even interested in or looking at. But maybe others are; that's okay, that's fine. And, I mean, ultimately, yeah, it's all...
D
This is a difficult issue, because at some level it's really deceivingly simple: you have some key, you encode some indicator of that key into the token, and then, when you use the token, you prove possession of that key. And at that level they're all kind of like that; they're all pretty similar.
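The "encode some indicator of that key into the token" step can be sketched like this. This is my illustration of the general pattern, not text from any specific draft; the key coordinates are placeholder strings, and the `jkt` member name follows the JWK-thumbprint confirmation-method convention.

```python
# The "indicator of the key in the token" step, sketched with an
# RFC 7638 JWK thumbprint as the indicator. Illustration only: the
# key coordinates below are placeholder strings.
import base64
import hashlib
import json

def jwk_thumbprint(jwk):
    """RFC 7638 thumbprint: SHA-256 over the required EC members,
    keys in lexicographic order, with no whitespace."""
    required = {k: jwk[k] for k in ("crv", "kty", "x", "y") if k in jwk}
    canonical = json.dumps(required, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

client_jwk = {"kty": "EC", "crv": "P-256", "x": "abc", "y": "def"}

# The AS binds the token to the key by embedding its thumbprint.
token_claims = {"sub": "client-123", "cnf": {"jkt": jwk_thumbprint(client_jwk)}}

# At the resource server, the key presented with the request must
# hash to the same value before possession is even worth checking.
assert jwk_thumbprint(client_jwk) == token_claims["cnf"]["jkt"]
```

The differences between the proposals then lie almost entirely in the other half: how possession of that key is actually demonstrated on each exchange.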
F
Right, what strikes me as interesting is actually the differences. So thanks for that list; that was actually really helpful. I mean, it seems to me that we've done a lot of different things in this space, largely in response to needs: when we made something, it was responsive to a need, and we've done it a number of times to be responsive, which is great, which is helpful. And then, equally helpful,
F
is that we've gotten a lot of implementation experience on whether the solution that we thought would address the gap actually works. And it seems like we found various limits to those solutions. And, kind of correct me here, it feels like we're circling back and saying: hey, listen, should we maybe be thinking about all that implementation experience, and is there a holistic solution that we could start thinking through, something more generic? And so, if that's accurate, what strikes me is:
F
do we have a really good handle on all of those limits? Which I would translate: instead of saying "limits", say all the requirements of the new thing we would want to generate. And if we were to have that, you know, and we were to say we have this draft, DPoP, what's interesting is: what would not be done? What would not be solved with that?
E
The one thing that was an issue: the PoP architecture was meant to collect use cases, requirements, threats and so on, and one of the things, the symmetric key part, was introduced there and also later picked up by the ACE working group. And we had a discussion at some point in time on where we would do the parameter encoding for the HTTP transport,
E
and where we would do the CoAP stuff. And the decision back then, after some back and forth between the two groups, was: we do the HTTP stuff in OAuth and the CoAP stuff in ACE. Now, with DPoP dropping the symmetric key, we suddenly left some strings hanging, essentially, for at least the ACE working group. I don't know if they've changed their minds in the meanwhile, but at least that was not my reading of some of the documents recently.
E
No. The normative requirement, or the normative dependency, still exists. The...
D
Okay, well, so, I've reviewed the ACE document, because I had to do it for a JWT claims request, and I am personally not convinced that the encoding issue is solved; there's not a clean translation between the keys. I think Jim raised this after looking at it more closely; I'm not sure what the resolution is. But, ultimately, the ACE document does not normatively depend on PoP key distribution anymore.
D
Resources: I presume that's in the document, if that's one of the use cases that the ACE working group was trying to solve. I believe they have parameters defined to pass the key, and presumably there's a relevant encoding, from either a JSON or a CBOR implementation, with a key, or encoding that key, and I'm not sure that actually works 100% of the time. But that's where it is. Jim, is that correct?
H
It is registry work, because it is confusing in terms of whether the CWT and the JWT are supposed to be the same thing and how the different key identifiers are encoded. But that's pretty easy to work around, and once we get some implementation experience I think we'll just do an update to fix that.
E
I just reviewed the MQTT document in ACE and I noticed that that functionality wasn't there, and I posted it to the list, and they also said they thought that it's done in OAuth.
E
Well, maybe you want to respond to the email, because the authors didn't know how to.
I
Cigdem Sengul; I'm one of the co-authors of the MQTT TLS profile. I implemented it similarly to what Jim said, but I knew that Hannes had raised an issue before; it was acknowledged in the group, and that's why I acknowledged it in the email, saying that I'm aware of the issue. It's just that the solution I was looking for was the text to say how to resolve the issue in the profile, rather than an implementation.
E
I think you'd assumed that the PoP key distribution work would proceed. I was looking at the draft which described how it's done for the HTTP case, but then Jim pointed out that this has been resolved in the main document, in his last email.
D
...so much as a deployable, workable solution for OAuth deployments in the relatively near term. There are (not dependent, but there are) additional profiles and regulatory regimes and things that are demanding this stuff, or placing requirements on this, and it would be really useful to have something that could be implemented and be interoperable and work.
E
Brian, in the original OAuth document, 1.0, we had the MAC solution, and then the MAC solution was perceived to be difficult to implement, and that's why we then didn't go for that in OAuth 2.
E
Getting to that: but then you came up with an alternative, and, I don't know, Cavage did so too, and I know I did,
D
as well. I did not come up with an alternative to HTTP signing; I'm very, very much trying to do something that's not HTTP signing here. And I realize it's hard to distinguish between the two, but there is a distinguishing factor there.
C
Could I recommend that we let Brian actually finish a presentation for once? Because I think I see where he's going with this, and I think it would benefit the rest of the group to see that as well. Yeah, thank you, yeah. At least we would then have that to discuss, right? Because these are different things. I don't believe that DPoP is being presented as a generic solution.
C
I don't think that it is necessarily stepping on the web signature, the HTTP signature, stuff that I'm working on with Annabelle and others. Nor is it really an answer to the MAC draft in quite the same way. So I propose we let Brian actually talk. Thanks. (Justin.)
D
And I guess, before I do, I will try to address what Roman was saying, in that I think the way you have framed things, Roman, is sensible and almost accurate. But in reality there are a lot of disparate ideas and opinions and desires and use cases behind what is seemingly a simple case here. So it may not be as straightforward as you laid it out, as appealing as that might be.
D
So, circling back, trying to get through this: DPoP was an idea. It's still an individual draft, but it's an idea. It does depart from some things, because the way that the keys are distributed or shared is fundamentally pretty different from what was described in key distribution. And sort of critically to that:
D
the sharing of the keys includes some level of proof of ownership of the associated private key, and that itself enables things like the binding of a refresh token for public clients, which would not otherwise be possible; it isn't possible in the context of doing something along the lines of PoP key distribution.
D
It suffers from the potential problem (we'll talk about this more) of being very inefficient, due to the heavy use and number of asymmetric cryptographic operations that are required. And Neil's suggestion on the list was a way to do a lot of the same things, sort of get the more-or-less functional similarities in terms of how tokens are bound and used, but do it in a way that's not as inefficient,
D
although that's just an email at this point. We've had some other things come up that I didn't even know about until I started looking at this and following the train of drafts: we have what looks to be, from Annabelle, a rewriting of the work that Justin did, but split into two drafts, which I just saw. I felt they should be here, but I'm not sure I understand the intent behind that.
C
I do understand the intent, and I can talk about that later, if we have time. Okay. And then:
D
Not directly related is the generalized work on HTTP message signing, which Justin and Annabelle have (correct me if I'm wrong here, but I think I got it right) more or less taken Cavage and spruced up a little bit and submitted to the HTTP working group as a prospective working group document going forward, as a more general HTTP message-level signature scheme. And I believe (it hasn't been done yet, but) it was, or is being, adopted by the working group as a work item going forward.
F
I thought it was still doing an adoption call?
C
Yeah, it has. Sorry, this is Justin, sorry to jump in. But, yes, it has been adopted by the working group, and it is going to be an HTTP working group item. And, to be clear about where this sits with the rest of this work: it is just going to be about forming signatures for HTTP messages and normalizing HTTP messages and whatnot. It does not...
C
It does not account for the security properties of token presentation, or what a token means, or even whether you're representing a token with that presentation or just proving possession of a key as you make an HTTP request for some other reason. It is meant to be a very, very generic thing. So that actually brings us back to Annabelle's other proposed drafts, where what she basically did was take the work that I had done previously, which was kind of a sort of a different swipe at what Cavage had done
C
with the other draft initially, and my work was bound tightly to OAuth. So what Annabelle basically did was say: okay, assume that we have a general-purpose HTTP message signature. What are the parts that we actually need from the OAuth signing draft going forward? What do we need to be in the OAuth draft now that we could actually potentially depend on, and use, a general-purpose signing mechanism, which we could not previously? That's Annabelle's two other drafts.
D
Well, okay. I did not read them that way, but that might just be a failing on my part.
C
She also has some what-if drafts, so (they're all named so similarly that I'm honestly not sure which drafts each of these are without looking them up) basically the two important ones are the one that's on the bottom and whichever one says "take that one that's on the bottom and put an OAuth token in it."
C
It's kind of, yeah; there have been a handful of other what-if ones, which those may be, but they're less important. But yeah, you're right, they do exist; there are lots of ideas on how to do this out there. But again, sorry, I just started jumping in, trying to... No, no, it's helpful.
D
Me too. All right, I'll take the floor. So, while this interim was generally around PoP in general, the impetus for a lot of this came out of the presentation and the early, really quite heavy and vocal, criticism of DPoP.
D
So I wanted to revisit that, and at least try to understand the criticisms a little bit better. And so I tried to rephrase, or paraphrase, some of the criticisms of DPoP that came out, largely during the last meeting but a little bit on the mail and in comments in the period since. One of those is basically that it's not PoP key distribution; that seems to be a running criticism. And yes, it's not. It's a little bit different; it accomplishes some different things and is different.
D
More concerning, I think, is that an asymmetric cryptographic operation on every single HTTP request is just too expensive, at least for some. It's been said to me, both publicly and in private, that this would be a non-starter for at least some organizations or deployments; it just couldn't be done, just too expensive. And if you're not familiar with the details of DPoP, that's just a factor of the way it works currently, the way it's written up:
D
it requires a unique signature, and signature verification, over every single request, and so that is potentially expensive. Then, tracking jti: there's a replay-prevention mechanism that suggests, or requires, that the jti be tracked, and duplicates rejected if they're seen. This makes some sense at a certain level, but it can be really prohibitive at scale: if you have to coordinate whatever data structures are tracking jtis across a large distributed deployment, that can be really problematic. And if you combine that with at least one answer to the prior issue, that the asymmetric
D
crypto is expensive, yes, but it could scale out horizontally very well, at least in terms of overall performance and latency (it might still be expensive to run, but it would scale out horizontally well): well, if you introduce the need to track jti across all instances of whatever you're running, that sort of undermines the ability to scale horizontally, and that can be really problematic.
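The per-instance half of what's being described might look like the following sketch (my illustration, not text from the draft). It works within one server process; the point being made is that sharing this state across many instances is exactly what undermines horizontal scaling.

```python
# A per-instance jti replay cache of the sort the replay-prevention
# language implies. Illustration only: it works within one process,
# and coordinating this state across many instances is the hard part.
import time

class JtiReplayCache:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.seen = {}  # jti -> time first seen

    def check_and_store(self, jti, now=None):
        """Return True if the jti is fresh, False if it is a replay."""
        now = time.time() if now is None else now
        # Forget entries older than the acceptance window so the
        # cache stays bounded.
        self.seen = {j: t for j, t in self.seen.items()
                     if now - t < self.window}
        if jti in self.seen:
            return False  # duplicate within the window: reject
        self.seen[jti] = now
        return True

cache = JtiReplayCache()
assert cache.check_and_store("abc") is True   # first use accepted
assert cache.check_and_store("abc") is False  # replay rejected
```

A distributed deployment would need every instance to consult the same `seen` state (or a partitioned equivalent) on every request, which is the coordination cost being objected to.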
D
There's some language in there that I've loosened up, to try to allow for sort of a lazier or lighter-weight approach to tracking jti, to the extent that it makes sense for the deployment. But it's still arguably problematic, and I believe there was also the issue raised that it could be sort of unexpected, to the point of violating general HTTP retry semantics on (I'm going to say it wrong)
D
idempotent requests, if different layers of the architecture see a jti and prevent its replay when the actual processing of the message hadn't happened. So, just generally, jti is problematic. And this is kind of not actionable, but I do want to note that it's a bit of a Rorschach test, even among the supporters of DPoP: I think different people have their different ideas of why it's useful and what benefits it brings. So, even though there are criticisms as well as support of it,
D
none of the support sort of rallies around the same thing, I don't think. It seems like everybody wants to see a little something different out of it, or has a different idea of how it'll benefit them, which makes a generalized voice of support, and of why this is good, I guess a little bit difficult.
D
I am personally a bit, like, I've been on the fence about the asymmetric-crypto cost. I mean, I recognize it's a fact of reality that asymmetric crypto is definitely more expensive, but I feel like I don't know whether it's legitimately a non-starter for real deployments or not. It's real, but I don't know how real, I guess, is a way of phrasing that.
D
So, trying to sort of get a look at where we are now and where we might want to go: this isn't necessarily all the options, but this is the best way I could kind of phrase it together.
D
One thing we could do is stay the course of where we are now, and I'm not exactly sure what that looks like, but it sort of stands to be something between doing nothing, which is largely what we were doing for a while, and maybe revving PoP key distribution with some kind of HTTP signing mechanism.
D
The work that's happening in HTTP seems a lot more promising than having OAuth-level definitions of the same kind of stuff, but I don't know; maybe, but it's also early, so we'll see where that goes, I suppose, or not. The stay-the-course, sort of do-what-we've-been-doing option hasn't gotten us very far, but there are some developments that might provide some of the underlying tools
D
that would make that more tenable going forward at some point, although I don't think that's a very near point yet. I know there are some people (myself sort of on the fence, but some others too) that are at least somewhat interested in seeing the DPoP work adopted, possibly tweaked, particularly around some of the jti stuff, to make it a little bit better, and in seeing if we can get that out there as a solution for people that are wanting to do something about this today. A couple of quotes here I had from email, I think off-list:
D
But, you know, the performance characteristics of DPoP are not ideal in some circumstances. But, you know, for some of us mere mortals, it was said that DPoP really is fine as is: it's functional and it can work. And we have this need to have some kind of sender or key constraint on refresh tokens issued to SPAs yesterday; at least for some, there's some real urgency here.
D
They want to have a solution for this, and we don't have one right now at all for binding refresh tokens. There's just nothing available at this point on any sort of standards track to make this happen.
D
So I think arguably there's still some value in DPoP, despite the efficiency problems, and we might want to consider taking on the work and moving it forward. But it also runs the risk of sort of complicating things by having multiple options, and we already have a lot of documents, a lot of options, and I think we do need to be conscientious of the way we message, and of the number of solutions we produce in this space. As I alluded to earlier, Neil suggested on the list an approach that's somewhat similar to DPoP at at least some of the conceptual levels: still binding tokens to asymmetric keys, but using ECDH to sort of prove possession of those keys, and then amortizing the cost of the asymmetric crypto over many requests.
D
Again, this would be riffing on Neil's idea, and it would even allow for the derived or agreed-upon key to be non-exportable, so it could be kept inside either virtual or real sort of TPM-like things, or within sort of the non-exportable world of the browser's Web Crypto API, and it would allow for unique keys to be negotiated between client and RS, or client and AS, that are then used...
D
...you know, with an HMAC, to sort of bind to some amount of the HTTP request, to prevent at least large-scale or easy replay and to show possession of that key, but to do so in a way where the keys are unique to those two parties, so we don't get back into needing additional sort of audiencing of tokens and so forth to prevent cross-RS replay.
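Neil's idea, as described here, might be sketched like this. This is a minimal illustration only, not the actual proposal: Python's standard library has no ECDH, so a placeholder `shared_secret` stands in for the agreed key, and the client ID, RS URL, and HKDF `info` layout are all hypothetical.

```python
import hashlib
import hmac
import os

# Placeholder for the ECDH-agreed secret between client and RS (or AS).
# In the real scheme this would come from a one-time key agreement against
# the client's bound public key, amortized over many subsequent requests.
shared_secret = os.urandom(32)

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an empty salt, for per-context keys."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Derive a key unique to this client/RS pair, so a MAC made for one RS is
# useless at another (no extra audiencing of the token needed).
request_key = hkdf_sha256(shared_secret, b"client-id|https://rs.example")

def mac_request(method: str, uri: str, token: str) -> str:
    # Bind a small, fixed part of the HTTP request to deter easy replay.
    base = "\n".join([method.upper(), uri, token]).encode()
    return hmac.new(request_key, base, hashlib.sha256).hexdigest()

proof = mac_request("GET", "https://rs.example/resource", "an-access-token")
```

The per-request cost here is a single HMAC, which is the amortization point being made: the expensive asymmetric operation happens once, at key agreement, not on every call.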
D
There's work that would need to be done here, but after thinking about it quite a bit, I've come to feel like there's a lot of merit in this idea. And then, I don't know, "do something, and profit," which is just a nod to the Underpants Gnomes bit from South Park, the little picture up there on the right.
A
So, just a quick question, Brian, just for me to understand the first bullet there. Are you saying that if we go with the first bullet there, the proposal, then that will address your needs, and you don't need the second bullet, the DPoP? Yeah.
D
We don't have to do anything. But I agree with you, it does sort of feel like throwing in the towel and just giving up, even if and when HTTP signing finds itself sort of fully realized and deployable. And I say that with a nod, and hope, towards Justin and Annabelle for good work there, while recognizing that it's really hard. It's really hard.
A
So let me just again make sure I understand. So you're saying: even if the HTTP signing were to go and get approved tomorrow, we'd still have kind of gaps in that solution, and it would not address the DPoP needs. Is that what you're saying?
D
I believe those things are largely orthogonal, because the HTTP signing is, well, I guess it depends on what comes out of it, but it's really a lot more about HTTP canonicalization than the actual signing itself. And I presume either way it's going to allow for an asymmetric signature or an HMAC over whatever that canonicalized or hashed representation of the HTTP message is. The sort of asymmetric-versus-symmetric question is really an orthogonal issue.
D
I think it's in how the keys are agreed upon and used between the two. And, as I say that, there's likely some small intersection in signing over some bit of the HTTP message, regardless of which way you do it. But having HTTP signing done even yesterday doesn't solve the asymmetric-versus-symmetric crypto issue. It maybe gives some different tools for the syntax of what it looks like in presentation, but it doesn't address the core, fundamental piece of it.
D
Maybe it depends on how it would be; that ends up being more of a deployment and algorithm-choice question, I guess, based on my understanding of how these things work. Sorry, go ahead. I was gonna mumble and say nothing worthwhile, so please.
C
Okay, so, to give some perspective on the HTTP signing work: yes, we are starting with the Cavage draft, but we are by no means stopping there.
C
There are some very, very big problems with the Cavage draft. It's very limited in sort of how it can manage keys, in determining what is allowed to be signed and what is required to be signed, and things like that. So, to what I think may have been part of Jim's question: do you have to recalculate the signature over every request?
C
Well, that kind of depends on what it is that you're signing as part of that request. If you have a profile that says you must sign, you know, a nonce on every request, and check that for replay, then yeah, that's going to change. If you are signing parts of the request that don't change over time, including perhaps the access token value itself, then yeah, that signature wouldn't change over time. But the HTTP signing thing isn't really, it's not really going to be its job...
C
At least, this is my view as one of the editors: it's not really going to be that spec's job to dictate that type of requirement. That's more on the security layer of the application of the signing spec. What the spec is going to do is say: you have an HTTP message; here's how you throw that into a signature or hashing function; and here's how you take the result of that and shove it back into an HTTP message, so that you can send it.
C
That's really all that, ultimately, it's going to do. So Brian is absolutely right that even if we had a workable, fully functional HTTP signing thing today, which Cavage is not, for our purposes, for some very specific reasons, but even if we did have that today, we would still need an OAuth profile of it that said: yes, you actually have to sign the access token, and, by the way...
C
...this is how you send the access token, so that it's tightly bound to the signature that you're sending. So yeah, there would still be work within this working group. I think there will be work within this working group, even in light of the HTTP signing draft.
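Justin's description of the signing spec's job, take an HTTP message, run it through a signing function, and put the result back into the message, could be sketched roughly as below. This is an illustration only: the HMAC stands in for whichever algorithm a deployment picks, and the header names and base-string layout are assumptions loosely in the spirit of the later HTTP working group drafts, not the actual syntax of any draft discussed here.

```python
import base64
import hashlib
import hmac

def signature_base(components: dict) -> bytes:
    # Canonicalize the covered components into one deterministic string;
    # this canonicalization is the hard part the speakers are alluding to.
    lines = [f'"{name}": {value}' for name, value in components.items()]
    return "\n".join(lines).encode()

def sign_message(components: dict, key: bytes) -> dict:
    # "Throw the message into a signing function..."
    sig = hmac.new(key, signature_base(components), hashlib.sha256).digest()
    # "...and shove the result back into the HTTP message" as headers.
    covered = " ".join(f'"{n}"' for n in components)
    return {
        "Signature-Input": f"sig1=({covered})",
        "Signature": "sig1=:" + base64.b64encode(sig).decode() + ":",
    }

headers = sign_message(
    {
        "@method": "GET",
        "@target-uri": "https://rs.example/resource",
        # Covering the Authorization header is what an OAuth profile would
        # add: it binds the access token to the signature.
        "authorization": "Bearer an-access-token",
    },
    key=b"shared-or-derived-key",
)
```

Note how the OAuth-specific part is just a profiling decision, which components must be covered, layered on top of a generic signing mechanism.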
D
And maybe just to clarify a little bit, too: everything you say there is accurate, Justin. The way I was thinking about the asymmetric versus symmetric was really with the expectation that, however this is done, almost certainly each issued request is going to need a new signature over it; and that, even if that weren't the case, the complexity of trying to determine when a signature could be reused, in some case where that might be allowed, is probably prohibitive enough to make it unlikely.
C
Okay, I understand your interpretation now. Thank you for clarifying that. But...
C
They bind in different ways, right? Exactly. Because if you look at the DPoP proofing mechanism, it only signs the method, the URI without parameters, and the token itself, right? No, it does not sign the token. It does not sign the token; the token is bound to the associated key. Right, right. So it's key presentation, and the token is bound in there, yeah.
D
But there's also a `jti` in there, and so, at least the way it's written now, that means a new one every single time, right? And that would be sort of expensive, I think. Doing the same kind of thing with an HMAC would be okay; it's so fast that I don't think it would be prohibitive for anybody, right.
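As discussed, a DPoP proof covers the HTTP method, the URI without parameters, and a fresh `jti`, which is what forces a new signature per request. Here is a stdlib-only sketch of assembling the unsigned proof parts: the real proof appends an ES256 signature, computed with the client's private key over this string, which is elided here, and the example JWK values are placeholders.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dpop_proof_parts(method: str, uri: str, public_jwk: dict) -> str:
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    claims = {
        "htm": method.upper(),     # bound HTTP method
        "htu": uri.split("?")[0],  # URI without query parameters
        "jti": str(uuid.uuid4()),  # fresh per request -> fresh signature
        "iat": int(time.time()),
    }
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode()))
    # A real proof appends ".<signature>" over signing_input, made with the
    # private key matching the jwk in the header; omitted in this sketch.
    return signing_input

parts = dpop_proof_parts("POST", "https://rs.example/token?x=1",
                         {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."})
```

The `jti` line is the cost being debated: because it changes every time, the (asymmetric) signature can never be cached or reused across requests.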
C
Which doesn't work, exactly, right, exactly. So yeah, for the HTTP signature thing, we are fully intending for there to be both symmetric and asymmetric algorithms, because there are people that have been deploying Cavage and Cavage derivatives out there, of which there are at least a half dozen that I know about personally, which means there are a lot more, and people are using it in both of those modes. So my view of the DPoP work and the HTTP signing work is that they seem to solve...
C
They do seem to solve similar problems, but DPoP really isn't as general-purpose as it might seem at first, and I think that's a feature. It was not my impression on first reading the draft, and I've told the authors this, that it was designed as, you know, key presentation for single-page apps.
C
But if that's what it is, it does that really well, and it can probably be used with other apps too, if they want to. But ultimately, I think we are going to have a general-purpose...
C
...proof of possession, with a general-purpose HTTP message signature, and I don't see a reason why that can't live in parallel to DPoP. There's already more than one way to solve these problems, and that it makes sense in different ways for different use cases and different deployments is really the biggest thing.
J
Okay, so even if we push DPoP forward, and also have a solution based on the upcoming HTTP signing, it would be three options in total.
D
That's true. Well, but I guess, maybe because I've been thinking about it, I'm also considering the third bullet here as well.
C
So, potentially. But, in terms of actual implementations of key proof of possession: we have implementations of mTLS, we have implementations of DPoP, and we have implementations of my old HTTP message signing that are still kicking around.
C
So those are the three that I'm aware of. There are people that are doing Cavage in ways that are kind of tied to OAuth, but not in any interoperable way.
C
That was the plan for the general-purpose PoP; that's how it was structured, and, I think, is the right method for the general-purpose HTTP signing mechanism, because how you get the key and how you use the key should be two different questions. Because, I mean, if you look at, for example, how AWS does their key derivation for their signing stuff, it's not a dynamic key distribution system.
C
You do a derivation algorithm, and then you use the key that you have, like, you know, four steps into the process, to actually call the API. The signing mechanism that you use for that should be the same whether or not you did this fancy Amazonian derivation, or somebody just said: hey, here's a JWKS.
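The Amazonian derivation Justin refers to is, to my understanding, the chained-HMAC scheme of AWS Signature Version 4: a long-lived secret is narrowed by date, region, and service, and only the final key ever touches requests, several steps removed from the root secret. A sketch with illustrative values:

```python
import hashlib
import hmac

def _h(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    # Each step scopes the key further; the request itself is later signed
    # with the final key, not with the root secret.
    k_date = _h(("AWS4" + secret).encode(), date)
    k_region = _h(k_date, region)
    k_service = _h(k_region, service)
    return _h(k_service, "aws4_request")

key = sigv4_signing_key("wJalr...EXAMPLEKEY", "20200309", "us-east-1", "s3")
```

This illustrates the point being made: how the key is obtained (derivation here, dynamic distribution elsewhere, or "here's a JWKS") is separable from the signing mechanism that then uses it.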
K
I'm just wondering if an HTTP-working-group-supplied message signature system and DPoP could eventually be reduced to a question of what sort of message you are signing on those requests.
C
So, I think, and this is just my opinion as Justin: I think that if we had general-purpose HTTP signing, then DPoP could use that and still do its, you know, its optimized key distribution and proofing and presentation. But we just don't have that, and it's gonna be a long time before we do.
L
That said, our product groups would also like something that's simple, like DPoP. I realize that DPoP is still in flux.
L
But in particular, you know, John and I talked during the last IETF about, okay, how could we add symmetric to DPoP, along the lines of what Neil had suggested, and there's a feasible path to do that. It is not quite as simple, but, you know, inherently you're having to do a key distribution step beforehand.
L
I do agree with Justin that, while the general HTTP signing is of long-term value, it would be great to get DPoP into a stable state soon, so that an industry which is dying for this could have it. This is not academic: people are stealing bearer tokens and using them to carry out attacks, and there are a number of mechanisms for that. And so I think we do have an onus to meet that market need sooner rather than later. So I would like to see the DPoP draft adopted as a working group draft, recognizing that.
L
I support adopting it as is, to get it the working group's attention, and then the working group can make a decision about whether to also add symmetric; we know how to do it, okay. But I'd like to see the adoption happen, so that we're signaling to the marketplace that we're addressing this issue rather than just waiting years.
A
Yeah, I guess we will call for adoption on the mailing list anyway. But yeah, if people feel that they want to say that they support it, yeah, go ahead, feel free to do that. Yeah, okay.
F
I wanted to ask a clarifying question. So, we've been talking a lot about what I would characterize as option one and not-option-one. Could we finesse a little bit what I would say are options two and three? How should we go about kind of reasoning about three, given what we have written into...
F
Yeah. I'm calling one "stay the course", two pushing forward with the DPoP kind of as is, and then three was: we've had some kind of conversation, from Neil's idea, about doing something different, and now I'm talking about that really as kind of an option. So is there something we need to talk about, two versus three? Because three seems like a bunch more work beyond what we've written down in two, unless I'm misunderstanding it.
D
It is something that could potentially be grafted onto two, and so they are kind of one and the same, and you could somehow choose at runtime which to do. But, having thought about it a little bit, I think that would be prohibitively difficult and super complicated, and so I guess three would likely be its own thing that could live alongside, and be deployed independently of, or even in conjunction with, two.
D
If and when it got to that stage. Or it could just not be pursued, and we see if the need really arises or not. Or, I guess, some of my thinking was that the pushback on the performance of DPoP is so significant that it's not worth pursuing, and we should jump ahead to trying to build out three.
D
So it's more: it accomplishes a lot of the same things, but is a lot more performant and a little bit more complicated. That's sort of where I was trying to drive this, I guess. To be honest, I'm a little uncertain, so I'm not sure, but adopting DPoP is fine; moving forward with it doesn't preclude anything with three. But I do think there's the risk of having too many options.
J
Is the assessment that two is too slow based on just theoretical assessment, or is it practical experience?
J
I think, I know that HMAC is faster than RSA and other asymmetric crypto algorithms; the question is whether this is really a problem.
L
I think it depends on the use case which is efficient enough. There are plenty of things where the volume is low enough that asymmetric crypto is simple and fine. If you're running AWS, or maybe if you're running Azure, every cycle counts, and you'll do the extra protocol steps for that. But I think, in terms of engaging the working group, the normal way to engage the working group is to adopt a working group document and then revise it. So that's what I'm proposing.
J
I fully agree with that point. I mean, let's move that forward, and if there is a need for a more efficient scheme, people will step up and will contribute new algorithms, protocol extensions, and so on to make that happen. So, to me, the whole conversation feels like a committee wanting to decide what the best solution will be for the market.
M
How does TLS 1.3 make some requirements moot, or alter outcomes of this work? And I sort of say that in the light that Brian did a great job highlighting all the different efforts that have gone on, and I'm sort of concerned that we pile in again on DPoP, or some variation, just to find out that it will have its own problems because of the other things that are changing around us, i.e., TLS 1.3, and the HTTP working group starting to do signing.
D
I don't think there's anything there. DPoP is moving away from reliance on any kind of key, or exported key or identifier, from the transport layer, and so it could be deployed over TLS 1.3 just the same as it would over 1.2; it's kind of a no-op there. There were, yes, there...
D
There were some potential problems with the exported key material in token binding, because it wanted to play across those layers. But where we're trying to work, at the application layer, with DPoP, is sort of completely independent of that. So I don't think there's any intersection or overlap there.
C
Okay, and as far as the HTTP thing, I will just reiterate, as one of the authors or editors of that spec as it moves into the HTTP working group, that I do think there's room for both. I do not think that DPoP should be stretched and twisted to be a general solution. I don't think it is, and, Brian, I mean this in the best way possible: I think that it is a really clever hack for a really specific set of deployments and use cases.
C
It's been implemented a number of times. I think it could use a little bit of cleanup, but for the most part I think it makes a lot of sense as it stands today. If there's a clever way to layer in symmetric stuff on there without exploding everything, I think that makes some sense. But I honestly think that things like AWS shouldn't use it; I mean, you know, sending the entire key or something like that every time doesn't make sense at the scale of AWS.
C
In an ideal world, AWS should be using the eventual HTTP signing from the HTTP working group, which is, of course, based largely in part on AWS signatures in its history, layering in the OAuth access token binding that I hope this working group will adopt at some point in the future, when the time comes. I mean, I don't see these as competing with each other so much as complementing different bits.
M
If that's what it is, that's what we need to make clear. And I like the point that Brian made again: that the issue of proof of possession is orthogonal to message signing. And maybe that's the track: to say, number one, those two things are always going to be separate going forward, and that way they're clear and have a greater chance; and then, within those forks...
M
Within those two orthogonal approaches, to your point, Justin, there may be need for specialization, though, because it is difficult to do general-purpose. I'm just a little worried that we're repeating a cycle again, for the third or fourth time in ten years. So, you know, I'm working on it; I'm just not sure that we're working on the right thing yet.
L
So, this is Mike Jones again. I will point out that the slides left off, maybe mercifully, one piece of the PoP history in the OAuth working group, which is that we created something called OAuth 1, which used proof-of-possession tokens, and the marketplace largely abandoned it because of the interoperability problems encountered when trying to actually do the HTTP signing in the way specified.
M
When I was looking at mTLS, and I was quite excited by it, one of the problems I ran into was a similar perception of: do we really need it? Mike, you spoke to that; there are threats with bearer tokens. But I think there are sort of two extremes right now. One is to sign everything and encrypt everything, throw crypto at every layer, do everything and go nuts, but none of those actually work. And then the other people are saying that this stuff's working.
M
Why are we complicating things unnecessarily? And so it's frustrating to me to figure out what the right level is. I guess that's why I like the idea of saying: okay, if we solve token binding, and that's what we solve, that's a big improvement. And then maybe it's appropriate that the apps in more of a niche area, like financial APIs, look to their own solution in that niche, because they can specialize it appropriately.
L
It does just enough more to actually get proof of possession. And, you know, as Justin described, the glorious hack, or whatever you want to characterize it as, signs just as little as possible to be able to demonstrate proof of possession for the tokens, thereby solving a marketplace need.
D
As Justin and some others have said, too, there's definitely some opportunity available in the text of the draft to make some of those things more clear, and I don't think it's as simple as saying this is only for SPA-type apps. I do think there's a lot that could benefit from clarifying sort of what it is and isn't and what it can do, and I've certainly signed up to work on that text going forward, if we proceed with this.
A
Okay. I've heard lots of support for DPoP, but we're not going to be able to call kind of a hum or something for this; we would have to take it to the mailing list to give everyone an opportunity to chime in. But I think it was a really good discussion, and I guess that's where we are.
A
I think the next step is just to take it to the list and see if people are interested in adopting this. And symmetric is a different story; the working group will have to decide later on the scope of that. But I think the action right now, for me, at least as chairs, is to ask for adoption on the mailing list and see how that goes.
E
Makes sense. At the beginning of the meeting, Jim asked how many folks on the call are going to be at the face-to-face meeting.
F
Sorry, just to see if I can confirm it: so, what's going to happen is, on the mailing list there'll be a call for adoption. It will not be framed as the different options the way we've been talking, which has been very informative; we're going to be just very precise: we have a draft, and are we adopting or not adopting the draft. Right? Correct?
A
Correct, that's exactly the plan. Perfect. Okay, yeah, thanks. Okay, anybody have any other comment about this before I ask who's going to be in Vancouver? Okay, so, regarding Vancouver: unfortunately, neither I nor Hannes will be there, and we wanted to know who's planning on attending the meeting in Vancouver, to understand if we can have a meaningful meeting. So, who's attending in person?
D
I plan to.
N
Anybody else? I'll also be attending, but it also depends a lot on whether other people do, and it seems a lot of people are having the same thought. I wonder if there's a more formal way to make sure that everybody is going to actually confirm that.
A
Okay, you know what, maybe we can put some...
F
Taking no formal position: the IESG is actively considering all three constraints, as Jay has been putting on the IETF list. Period. No further comment. Thank you.
D
I was just gonna say thank you to you, and to Hannes, and to the ADs, for putting this together. Okay.
A
Oh, thank you guys, and I appreciate your help, and that was a great discussion. So thank you for that, and we'll take it to the list to continue this discussion. Okay.