From YouTube: IETF113-GNAP-20220325-0900
Description
GNAP meeting session at IETF113
2022/03/25 0900
https://datatracker.ietf.org/meeting/113/proceedings/
A: I guess we should get started. So if somebody could volunteer to take notes... I'm looking at the remote participants, of whom there are 17, well, 16 I guess. If anybody could step up and volunteer to take notes, that would be very, very helpful.

A: Thank you, Christopher. Perfect, Christopher Inacio has volunteered to do that. All right.
A: So, Yaron, I think we can just get the show on the road at that point. So this is the session. Let's get to the Note Well... well, as the saying goes.
A: And Yaron, this is usually when you tell people to sign up to Meetecho.
A: Exactly. All right, well, we have a note taker, and hopefully we get good notes in HedgeDoc. And I think with that, the agenda: we're basically going to have Justin up here talking, and he's also going to attempt a live demo, tempting the demo gods to see if we can actually do a live demo based on the hackathon, which is going to be fun. But we have a lot of stuff to go through.

A: So I think we'll just invite you up, and I'll switch the slides over to the protocol update. Let's see here.
E: Oh, there we go. Gotcha, and that way I should have slides. Magic. There we go. Awesome. All right, so good morning, everybody, and welcome to the last day of IETF 113. I am going to be presenting all of the presentations today, because Fabien was unable to travel here in person (he's online, though, I saw), and Aaron was unfortunately unable to make it this morning. He is also online and will be jumping in from time to time.
E: So, first off, we are going to go through the protocol update. I'm going to go through the changes to the core draft since the last IETF, that's from version 8 to version 9, including all of the editorial and functional changes, because there's been a fair bit. And I want to take a pause to say that we have not been focusing on the RS draft during this time frame. The reason for that is that there's still been enough editorial and structural work happening in the core draft that the editors didn't want to split our focus and our attention too much right now. The two drafts are intended to work fairly orthogonally with each other, so the idea is that we can get the core draft even more solid and then start focusing on the RS draft in the future. All right.
E: So if you download the slides, all of these links are live. You can go and look at the actual diffs between all of the different drafts if you're curious about the actual text that we changed. We did 37 pull requests on the core draft and two on the RS draft, although we did not publish a new version of the RS draft in this time frame. Again, if you click on those live links, that will pull up a list in GitHub of everything that changed during that time, so you can see the actual changes with all the commentary and everything like that.
E: All right. We managed to close 40 issues in the core GitHub tracker in this time frame. Again, you can go look at that list yourself and see all of the things that we've managed to close. They were either duplicates of other issues, where we tied the conversations together, or things like that; or we went and read through them and brought to the group the fact that an issue had kind of been overrun by events, or there wasn't consensus to make the changes requested, or something like that. But for the most part, for the issues that we've been closing, we are actually making changes to the text to make it more consistent, more readable, and more robust overall. No issues were closed on the RS draft.
E: So, on the editorial side, things fell into three different categories. First, and this is a big one for a draft this big (I forget our page count right now, but it's a very extensive draft text): consistency is going to be a really, really big thing, because sections get written at different times by different authors, by different people. There have been a number of things over the last six months, so the last IETF period and this period, where we've realized that we haven't been using terms correctly, or somebody asks, why do you call it this here and that there? And most of the time there's actually not a good reason.
E: And so there's been a lot of cleanup on that. Two big issues for that I want to point out: one is, excuse me, the use of the term "end user"; I'd have to click through to remember what the other one was. Anyway, these are all live links, thanks to Aaron's editing on all the slides; they all actually link into the GitHub issues and pull requests. As you can see, there's a lot of editorial stuff that went in here: typos, formatting, little bits and pieces. And then, finally, a lot of what we just put under cleanup; it's not even really changing editorial text, it's shuffling headers and things like that. Functional changes: we actually had a few good ones, and I'm going to go into a couple of these categories in more detail as part of this presentation.
E: Actually, if you don't mind, would you mind if I took my mask off while I'm talking? Because trying to project in a mask... All right, thank you.
E: Thank you. Trying to project in that mask is a little difficult. So, we have a very extensive security considerations section now. It's obviously not complete, because it's a security protocol and there are going to be a lot of security considerations, but we added sections on a few new attacks, or newly identified attacks, and either the mitigations or the descriptions of them are now in the security considerations.
E: One of the things that we did with the security considerations in this round, and it ties a little bit into the last revision as well, was to make sure to have forward links from the actual normative text that makes the requirements into the security considerations that explain why you would want to do that, or the additional things you might want to think about. We made some changes to the subject identifier formats section; that's tracking work in the SecEvent group that is wrapping up, hopefully real soon now. I did a bit of work on keys and discovery.
E: Some changes in the interaction section, and I'll be talking about the changes in the user code mode specifically. Those of you that were in the OAuth working group: this ties into some of the discussions that have been happening there as well.
E: We finally have error codes, thanks to Aaron. And I totally take the blame for us not having error codes up until now, because I got into the lazy habit of saying "it returns an error" and linking to the to-be-defined error section. So Aaron went through and created the error code section and sort of established the pattern of what these will look like. We're still kind of backfilling some of the places in the spec that need to define specific errors, but I think we've got a lot of them done right now. And we had a couple of different changes to how token management works; I think we're going to go over those in a bit. All right.
E
Oh
here
we
here
we
go.
That
was
the
other
change
I
forgot
I
for
aaron
was
sorry.
Aaron
was
supposed
to
do
these
slides,
so
my
bad.
So
on
the
editorial
side,
we
were
more
deliberate
and
consistent
about
the
use
of
uri
versus
url
anybody.
Who's
tried
to
get
something
through
the
iesg
knows
that
this
we
would
have
been
raked
over
the
goals
for
this
anyway,
so
good
to
at
least
try
to
be
consistent
and
correct.
About
that
now
turns
out.
E
Most
of
the
places
inside
gennapp
are
don't
really
need
a
url,
so
almost
everything
is
a
uri
inside
the
spec
space
and
the
use
of
end
user
without
the
dash,
as
opposed
to
end
user
with
a
dash
and
one
one
of
the
big
ones
for
readability
that
that,
I
think,
is
actually
you
know
it's
one
of
those
improvements
that
you
don't
notice
until
you're.
E
Reading
text
that
doesn't
have
it
is
the
parameter
lists
are
all
listed
now
in
such
a
way
that
things
are
actually
consistently
listed
out
in
terms
of
what
the
parameter
is
its
description
and
whether
or
not
it's
required
all
right,
so
user
code
interactions.
E
That
said,
that
you
return
a
user
code
for
a
user
to
type
in
and
then
a
uri
which
shouldn't
vary,
but
it
was
kind
of
allowed
to
and
if
it
did,
then
maybe
you
should
do
something
about
it,
but
the
client
wasn't
actually
required
to
do
anything
with
that,
because
the
client
was
supposed
to
be
able
to
count
on
that
being
static
and
all
this
kind
of
stuff
we've
realized
for
a
while
that
that
was
that
that
was
kind
of
messy
and
we
finally
finally
came
up
with
the
idea
of
actually
splitting
the
user
code
mode
into
into
two
separate
interaction
modes.
E
Just
very
simply,
there
is
now
user
code,
which
has
no
uri
and
user
code
uri,
which
does
have
a
uri.
The
expectations
of
the
client
instance
are
different
in
each
of
these
cases,
right
so
for
user
code.
That's
the
art,
that's
the
slides!
So
for
user
code
you
just
get
back
the
code
and
the
uri,
where
you
type
that
in
is
assumed
to
be
static
and
known
out
of
band.
So
this
is
for
stuff
that
can't
actually
display
anything.
Maybe
it's
like
talking
or
something
like
that.
E: The user code URI mode, however, is intended to be variable, such that when you are, say, talking to a new AS, or talking to a multi-tenanted system, or something like that, you might get back a different URI for a different tenancy, depending on what you're connecting to and what you're asking for. A client asking for this mode is making the declaration that it can display, or communicate, not only the user code itself but also this short URI. And the expectations on the AS are that these are both supposed to be short enough and simple enough for somebody to type in; these are not 32-character random things.
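The split described here can be sketched as data. The field names below reflect my reading of the draft's request and response shapes and are illustrative only; check the current draft text for the normative names.

```python
# Illustrative sketch of the two user-code interaction modes discussed
# above. Field names approximate the GNAP core draft; not authoritative.

# The client instance declares which start modes it can support:
grant_request_interact = {
    "start": ["user_code", "user_code_uri"],
}

# AS response, mode 1: a bare code; the URI where the user types it in
# is static and known out of band (e.g. printed on the device).
response_user_code = {
    "user_code": "A1BC-3DFF",
}

# AS response, mode 2: a code plus a short, typeable URI that may vary
# per tenant or per AS.
response_user_code_uri = {
    "user_code_uri": {
        "code": "A1BC-3DFF",
        "uri": "https://as.example/device",
    },
}
```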
E: So the reason for the change is that things were ambiguous before, and we think this does make it clear. Even though it is adding another interaction mode to the possible list, it gives you a way to declare what your capabilities are at a more fine-grained level, and that, fundamentally, is really key to how GNAP manages all of its user interactions. The client instance shows up and says, "this is what I can do," and the AS responds, "okay, these are the bits that I support, and this is what I can give you." So for a client instance saying that it can do both of these, "I can do user code or user code URI," the AS might give you different codes for both, or it might only give you one of the two, because you've said you can do either, and it's going to let the client pick which one it actually wants to engage in, if the AS supports both. This does raise an interesting question, though. Two interesting questions.
E: We need to make sure that we don't expect any other extended parameters inside of that, but I think with the new tighter definitions, I don't think that's going to happen. That would be an additional syntax change, but personally I think that is a good way to go.
E: This also raises a question, and this is relevant to a lot of the discussions we've been having in the OAuth working group recently: is "redirect" actually a good name for the redirect mode? Because "redirect" in GNAP doesn't actually mean redirect; it means "I can communicate an arbitrary URI to the end user and get them to go there somehow." Now, I might do a redirect, I might launch their system browser, I might display a giant QR code for them to scan on a secondary device. So with that in mind, the editors are opening the bikeshed discussion of:
E
Should
we
rename
the
redirect
mode,
because
it's
not
really
just
redirect
it
never
actually
was
just
a
redirect
in
implementation
and
as
you'll
see
during
the
during
the
hackathon
demo,
the
the
scannable
qr
code
thing
that
aaron
has
on
his
command
line.
Client
uses
the
redirect
mode,
even
though
there's
no
redirection
happening
and
it's
on
a
secondary
device.
E
So
do
we
call
it
arbitrary
uri?
What
do
we
call
it?
I
don't
know,
but
that
is
something
that
that
we
need
to
consider
all
right.
E
Subject:
information
request
this
is
what
fabian
did
to
align
the
the
subject,
identifier
and
assertions
with
each
other.
More
than
anything
else,
he
realized
when
doing
work
on
updating
the
subject,
identifier
formats
to
be
in
line
with
the
with
the
latest
sec
event.
Draft
that
the
way
we
were
asking
for
assertions
and
subject
identifiers
was
it
they
didn't
match
each
other.
E: So, subject identifier formats are defined by the SecEvent draft, and we are using pretty much the same kind of structure for the assertion response. Right now, assertions are defined to be a single JSON string in the value, with the format as the indexing key to tell you how to parse it. If there are other formats that could use other structures in there, we're open to that; we haven't seen any examples of it yet, because the only ones that have come up so far have been OpenID Connect ID tokens and SAML assertions, and both of those would just get chucked into a JSON string. So this is what we have right now. We'd like to see this exercised more, to figure out whether it needs to be more flexible, but I do think this is a big improvement over the previous method, which allowed you to do only one assertion of a given type, and which was a different type of indexing and formatting than what we had before.
E
We
also
editorially
cleaned
up
a
lot
of
the
the
text
around
how
these
things
align
with
each
other.
The
fact
that
you
know
the
assertions
and
the
identifiers
should
be
about
the
same
person,
but
they
might
use
different
identifiers.
So,
for
example,
you
get
back
an
id
token.
That
has
a
subject
identifier,
but
you
ask
first
that
has
the
issuer
and
subject
inside
the
id
token.
E
But
the
subject
identifier
you
ask
for
is
an
email
address
right,
and
so
those
are
different
identifiers
used
for
the
same
person,
but
it
is
implied
by
the
as
as
part
of
this
contract
that
these
are
pointing
to
the
same
person.
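The alignment described here can be sketched as a response combining a SecEvent-style subject identifier with an assertion. The field names are illustrative of my reading of the draft, not the normative shape.

```python
# Hypothetical sketch of a subject response, per the discussion above.
# Field names are illustrative; consult the draft for the real layout.
subject_response = {
    "sub_ids": [
        # Subject identifier formats are defined by the SecEvent work;
        # here the client asked for an email-format identifier.
        {"format": "email", "email": "user@example.com"},
    ],
    "assertions": [
        # The assertion value is a single JSON string; "format" is the
        # indexing key that tells the client how to parse it. An ID
        # token carries its own issuer/subject pair inside it, but the
        # AS implies both entries point to the same person.
        {"format": "id_token", "value": "eyJhbGciOi...(compact JWS)"},
    ],
}
```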
E
That's
part
of
the
request
and
response,
like
I
mentioned
before
a
whole
bunch
of
security
considerations,
got
added
a
lot
of
these
came
from
the
community.
So
thank
you
to
you
know.
Florian,
I
know
wrote
a
couple.
E
I'm
I'm
really
bad
at
remembering
names.
I'm
sorry.
A
few
people
wrote
these,
but
tightening
down
redirect
codes
bits
on
session
management.
The
session
management
one
is
actually
a
little
bit
interesting.
We
used
to
have
normative
requirements
about
session
management,
which
made
absolutely
no
sense
in
practice,
and
so
we
backed
off
the
normative
requirements
and
turned
that
into
a
more
comprehensive
security
considerations,
with
advice
about
what
to
look
for
and
how
to
do
it.
E: That is going to depend on the type of client instance and deployment that you have. There's a really interesting attack about how you replay stolen tokens: even though GNAP uses bound tokens by default, there's still an ability to poke a client to get it to replay a bound token. This is also relevant for OAuth MTLS and OAuth DPoP and things like that, and that's actually where the attack was first discovered; researchers then applied it to GNAP.
E
So
we've
got
discussion
on
how
to
mitigate
that
considerations
on
self-contained
access
tokens,
especially
as
it
relates
to
token
management
inside
of
gnab.
So
gnap
has
an
explicit
token
management
layer
to
it.
That
is
separate
from
requesting
the
grants
and
the
thing
you
know
it's,
it's
like
the
little
statements
that
are
in
the
oauth
revocation
draft
about
like.
If
you
can't
revoke
a
token,
then
the
token's
not
revoked
right.
E
If
you
have
a
completely
self-contained
token
and
you
try
to
revoke
it,
it's
well,
that's
that's
a
you
problem
and
excuse
me,
and
that
is
all
that's
all
recorded
here
now,
oh
and
another
big
one
that
I
have
not
seen
nearly
enough
in
in
this
space
is
a
server-side
request.
Forgery
anytime,
a
protocol
allows
a
an
input
of
a
uri
for
the
server
to
fetch
for
the
authorization
server
to
go
and
fetch,
whether
that
be
the
client's
keys
or
in
gnapp's
case
the
logo
and
home
page,
and
things
like
that.
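The SSRF concern raised here applies whenever the AS dereferences a client-supplied URI. A minimal sketch of the kind of pre-fetch check an AS might do; the function and names are mine for illustration, not from the draft, and a real deployment would also re-check after redirects and pin the resolved address for the actual connection.

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def is_safe_to_fetch(uri: str) -> bool:
    """Reject client-supplied URIs that would make the AS fetch
    internal resources (a basic SSRF guard; illustrative only)."""
    parts = urlsplit(uri)
    # Require HTTPS and a hostname at all.
    if parts.scheme != "https" or not parts.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parts.hostname, parts.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # Block loopback, link-local, and private (RFC 1918) ranges.
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```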
E: The recommendation is that a client that is capable of talking to multiple ASes should use a different key for each AS, as this completely prevents a whole class of attacks where the client is handed a token that was supposed to be used with one set of RSes, or was supposed to be fetched from one AS, and actually gets it from an attacker's AS, with the attacker able to inject it. It's a really simple thing for a client to be able to do inside of GNAP, and now we've called that out explicitly.

E: I've already talked about most of these; I forgot that we had slides for each of these, my bad. Yeah, I already covered this. This was the user presence and session management requirement.
E
This
is
aaron's
error
responses.
These
are
the
these
are
the
responses
here
and
one.
I
do
want
to
point
out
that
I
think
is
interesting.
The
last
one
request
denied,
which
is
separate
from
user
denied-
that's
basically
the
as
saying
no
and
I'm
not
going
to
tell
you
why.
So
if
you
have
some
reason
to
distrust
the
client
instance
or
something
else
happened
it,
this
is
just
saying
no,
like
the
the
request
is
over
and
you're
done.
I
want
to
point
out
for
anybody
coming
from
the
oauth
side.
E
These
are
all
error,
responses
that
come
in
the
back
channel.
Only
the
front
channel
does
not
carry
any
error
codes
at
all,
which
already
closes
a
huge
set
of
potential
vulnerabilities.
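As a purely hypothetical illustration of the back-channel pattern described here: the actual error names and JSON layout are whatever the draft's new error section defines, not this sketch.

```python
# Hypothetical back-channel error response illustrating the pattern
# discussed above. The real error names and structure are defined in
# the draft's error section; this shape is not authoritative.
error_response = {
    "error": {
        # The AS refuses and gives no further reason; distinct from the
        # case where the end user explicitly denied the request.
        "code": "request_denied",
        "description": "The request was denied.",
    }
}
# Such errors appear only in back-channel (direct HTTP) responses;
# the front channel carries no error codes at all.
```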
E: All right, so the token rotation changes that we have in there: when you go and rotate a token through the token management system in GNAP, that means you are getting a replacement for the token that you are rotating.

A: Yourself, with your re-requests, yeah.
E: All right, I think this is the last slide. Again, sorry, I wasn't prepared to present this deck this morning. So anyway, that's the state of the current core draft. The editors are still working through issue backlogs and things like that. Any questions on where this is before we move to the next presentation?
A
Somebody
so
jay
hoyla
is
asking
so
I
haven't
read
any
of
the
drafts
or
anything
but
with
the
computer
attacks.
Would
it
be
possible
to
have
a
master
key
and
then
use
the
kdf
to
produce
an
independent
key
for
each
ass.
A: Go ahead.

G: There you go. Yep, so, yeah, as I said, I haven't read any of the drafts or anything, but with the confusion attack a few slides before: it says you should use a different key for each AS. Would it be possible to do something where you have a master key, and then you use a KDF to derive a different key for each AS?
E
Yeah,
absolutely,
and
at
the
end
of
the
day,
as
far
as
good
map
is
concerned,
as
long
as
the
as
accepts
the
key
for
the
client
instance
and
whatever
trust
logic,
it
needs
in
order
to
accept
that
key
as
long
as
the
as
accepts
the
key,
it's
fine,
and
so,
if
you're,
in
an
ecosystem,
with
like
multiple
as's
that
are
in
like
a
tight
federation
relationship-
and
you
know
you
can
tell
that
this
has
been
derived
from
a
certain
master
key
or
it's
co-signed
or
some
something
like
that
when
that
key
shows
up-
and
you
can
verify
that
that's
great
gnapp
actually
doesn't
care
how
that
trust
gets
established
as
long
as
it
is
established.
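The per-AS derivation Jonathan suggests can be sketched with HKDF (RFC 5869), using each AS's identifier as the context info. This is an illustration of the idea, not something the draft specifies; it shows the symmetric case, and for public keys you would instead derive a per-AS seed and generate a keypair from it.

```python
import hashlib
import hmac

def hkdf_sha256(master_key: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869) with an empty salt."""
    salt = b"\x00" * hashlib.sha256().digest_size
    prk = hmac.new(salt, master_key, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:  # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = b"example master secret"
# Derive an independent key per AS, bound to the AS identity, so a key
# presented to one AS is useless at another.
key_as1 = hkdf_sha256(master, b"https://as1.example")
key_as2 = hkdf_sha256(master, b"https://as2.example")
```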
E: All right. So this last weekend, Aaron Parecki and I sat down in the hackathon and implemented a whole bunch of stuff, and I'm going to tell you about what we did, what we learned from it, including some feedback for the drafts that we were implementing, and hopefully show you some of this stuff live. We'll see how that goes.

E: Our focus during the hackathon was that all the requests would be protected with HTTP signatures. We were trying to build stuff from the ground up as much as we could, using existing libraries for components where they were available and building them where they weren't.
E: Our goal was to be able to make valid requests for access tokens and get the user to interact with the AS. Now, because GNAP is such a flexible protocol, we limited our scope of what we wanted for interaction, what we wanted for access tokens, and things like that; at the end of the day, it was mostly about getting these messages along the wire.
E: So we built a whole bunch of new code: PHP code that Aaron wrote from scratch for a CLI and a web client. If I recall correctly, he built the CLI first, then refactored out the core bits of it and built those into a PHP website as well, which acts as a GNAP client.

E: I built a JavaScript SPA, a single-page app, again pretty much from scratch. I had a basic React shell from previous work that I had done, threw out pretty much everything in the middle of it, and put that forward. In the process of this, we made significant updates to existing code that we already had and were running against.
E
So
we
had
a
java
based
web
server
client
based
on
java
spring,
and
we
had
a
java
based
authorization
server
both
of
these
needed
to
be
tweaked
because
it
turned
out
there
were
some
errors
and
bad
assumptions.
Inside
of
of
each
of
those
implementations.
E
Now
we
leveraged
as
many
existing
libraries
as
we
could
find,
but
in
a
lot
of
cases
there
weren't
full
libraries
on
hand
to
build
this
stuff
out.
So
those
of
you
that
were
in
the
again
the
oauth
meeting
the
other
day,
the
whole
lack
of
you
know
good
libraries
to
do
things.
E
This
is
as
important
for
the
component
pieces
of
a
system
as
it
is
for
somebody
just
being
able
to
pull
down
a
nap
library
and
go
which
we
wouldn't
expect
at
this
point
in
time,
but
being
able
to
pull
things
down
for
http,
structured
fields
and
cryptographic
primitives,
and
things
like
that.
That
was
all
really
important
for
aaron
and
I
to
be
able
to
build
stuff
all
right.
E: How to get keys in and out of that crypto library, that kind of knowledge you would need for any crypto system, and it applies here no less. There were a couple of surprising bits that were more fiddly than we expected. In particular, it turns out that the order of the parameters on one of the HTTP structured fields was not being preserved on one of the platforms that we were working on, and that matters when you're doing a cryptographic operation.
E
So,
ideally,
the
end
goal
for
http
message
signatures,
and
this
is
we're
taking
back
to
the
http
working
group.
The
end
goal
for
that
really
is
that
it
really
should
be
transparent
within
http
library
platforms.
E
E
Key
excuse
me,
I
will
say,
though,
once
we
had
that
part.
The
rest
of
the
gnat
protocol
came
together
very
very
quickly,
so
I
also
want
to
point
out
that
aaron
and
I
built
our
signing
functions
deliberately
so
that
it
they
would
sign
an
arbitrary
http
message.
E
We
could
have
hard-coded
a
bunch
of
stuff
to
say,
okay.
This
is
just
making
a
good
app
request
and
setting
the
fields
this
way
and
stuff
like
that.
Neither
of
us
wanted
to
do
that.
We
both
wanted
to
have
something
that
says,
given
an
http
url
to
go
to
and
a
key
sign
this
and
send
it.
So
the
hackathon
was
as
much
a
http
message
signatures
implementation
as
it
was,
can
happen,
but
once
we
had
that
function
in
place
on
all
of
the
different
platforms,
everything
else
really
kind
of
went
fairly
smoothly.
E: When you're doing HTTP message signatures, for example, you've got a signing algorithm and a digest algorithm and all these other sub-layers that are not communicated with just the key that says "do HTTP signing." Same with JWS detached signing: you would want to be able to communicate the intended target JWS algorithm as well, as opposed to just accepting whatever signed object came in, because we know that that is the basis of one of JOSE's most famous misimplementations: people just accepting whatever algorithm is in the JWS structure.
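The JOSE pitfall mentioned here, trusting whatever `alg` the sender put in the header, is avoided by pinning the expected algorithm out of band. A minimal sketch (the helper names are mine, not from any of the drafts):

```python
import base64
import json

def decode_jws_header(token: str) -> dict:
    """Decode the protected header of a compact JWS without verifying."""
    header_b64 = token.split(".")[0]
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def check_alg(token: str, expected_alg: str) -> None:
    """Refuse to even attempt verification unless the header's alg
    matches the algorithm agreed on out of band."""
    alg = decode_jws_header(token).get("alg")
    if alg != expected_alg:
        raise ValueError(f"unexpected JWS alg {alg!r}, wanted {expected_alg!r}")
```

The point is that the verifier never lets the attacker choose the algorithm: the check happens before any cryptographic processing.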
E: And then, at one step of the process, in some circumstances, we're doing a POST with no message body. We finally took a beat and asked ourselves: why are we doing a POST with no message body? Is this more semantically like a GET? Does this make sense? That's the type of stuff where we need to step back and ask everybody: what does each piece actually mean, and what are we doing with it?
E: During the hackathon, Aaron and I actually got into an interesting and lively debate about what the hash actually protects, and drew up a few diagrams. So I've already put in a PR to add the diagram that we drew during the hackathon to the draft's security considerations. And, like I mentioned, there's a little bit of semantic cleanup: some of the names might not make as much sense as we originally thought.
E
Http
structured
values
is
a
fantastic,
fantastic,
spec,
rfc
8941.
I
think
it
is
now
it's
absolutely
brilliant,
but
it's
weird
because
it
is
very
http.
Ish
and
http
is
a
weird
protocol.
E
So
if
you're,
not
thinking
in
very
http
kind
of
structures
and
terms,
it
can
be
jarring
to,
and
the
first
question
you
are
going
to
ask-
is
why
isn't
this
just
json
there's
a
good
reason
that
it's
not
just
json
there's
a
lot
of
good
reasons?
Actually,
but
some
you
know
some
more
starter
guides
for
structured
fields
would
be
really
good
to
see.
E
And
I
mentioned
with
http
signatures
already.
This
really
needs
to
be
built
into
the
http
libraries,
and
this
is
something
that
I
would
like
to
really
see
happen.
We
are
seeing
a
lot
of
uptake
of
developers
on
different
platforms.
Writing
signature,
implementations
people
have
been
waiting
for
the
ietf
to
actually
declare
an
http
message,
signature
process
for
a
long
time
now
and
so
we're
seeing
developer
uptake.
Hopefully
it's
a
matter
of
time
before
those
get
more
tightly
integrated,
all
right.
E
So
now
it's
time
for
the
demos
we
have
the
cams
demos
here,
but
I
am
actually
going
to
go
and
grab
my
laptop.
E
All
right,
so
you
guys
get
to
get
to
see
this
fail,
live.
E: All right, so here's to hoping this works. You can follow along at home if you go to "ganap c dot herokuapp.com".

E: And I am going to do redirect mode here. I'm going to create a new request, and the background is where all the interesting stuff happened. This is the problem with doing a demo of a security protocol: you click a button and something works.
E: All the fun stuff is happening off on the server, where it's creating the messages and signing them and stuff like that. We'll be able to see a little bit more of that when I show the SPA demo in a moment, and Aaron's demo actually shows it a little bit better also. But I'm going to go here to the interaction URL; this is that redirect URI that we were talking about before. Now, up here you can see that we are approving access to four different kinds of things.
E: What's not clear from this is that the top three are OAuth-scope-style requests, just simple strings. So if you have APIs that are protected with OAuth scopes, this is how GNAP represents that. The bottom one is actually that large object block: the rich-authorization-request-style request.

E: So one of the things about the hackathon was that we wanted to be able to do all of this inside the browser and see what that would feel like.
E: This key was generated using WebCrypto, so navigator.credentials.createKey, whatever it is. I'm actually going to go down and look at the n value: it starts with "2i20s". I'm going to hit this button, which, you would never guess, creates a new key. So now our n value starts with "mfxq4": a completely different key, generated just now.
E
This
client
is
going
to
use
that
to
make
a
request
to
the
authorization
server
all
right
and
oh-
and
I
do
want
to
point
out
that
both
the
spa
and
the
and
the
java
based
client,
one
of
the
things
that
they
do
allow
is
you
to
set
your
grant
endpoint
right
at
the
top,
so
gnapp
is
a
protocol
is
designed
to
not
really
require
a
discovery
step.
It
is
you
know.
Negotiation
is
built
into
the
protocol
in
a
lot
of
spaces,
so
you
just
need
this
one
url
to
kick
things
off.
E: It auto-redirects because I was lazy, but this is doing the same redirect-based flow as we were doing before, and I'm going to go through all of that. In addition to all of the access that it's asking for, it's also asking for these two different subject identifiers for me as the user. I hit approve, and now, in addition to my access token, which is here, I also have user information in the form of an email address and a server-specific opaque identifier.
E
All
of
that
came
back
straight
to
the
client.
This
is
not
wrapped
up
in
an
id
token,
this
is
not
a
user
info
call.
This
is
information
that
is
just
dropped
directly
back
to
the
client
in
a
back
channel
response
so
that
the
client
can
just
pick
it
up
and
use
it.
You
can,
additionally,
as
we
saw,
do
the
assertions
and
things
like
that.
You
can
of
course,
call
apis
that
are
as
specialized
as
you
want
them
to
be.
E: We'll have to rejoin. It's telling us it's starting. Oh, is it permissions?
G: Jonathan Hoyland, Cloudflare. This is just what I was bringing up in chat. Again, I haven't read any of these documents or anything, but the message format, the signatures: do they include a string that says, like, "GNAP", such that you couldn't possibly use this interface as an oracle? Like, if I have a dumb client that will just sign anything...
G: So TLS does include long strings saying, you know, "TLS 1.3 message", and those mean that if I'm talking to somebody malicious and I send them a signed thing, they can't then turn around and give that signed thing to some other service to pretend to be me.
E: Let me open up the console so I can show you. Aaron's demo shows a lot of the details, but just inside the SPA client, and I know it's a little bit small here, this is what's happening behind the scenes: this is the string that's actually getting signed. HTTP message signatures is a detached signature mechanism; you generate this on both the signer side and the verifier side independently, based on the context of the message.
E: So when I am sending a request, I take that request message and I decide that I am going to be signing the method, the target URI, the Authorization header, the Content-Digest header, the Content-Type header, and then, finally, the signature parameters for the signature that I'm creating right now. All of that gets concatenated together in a very deterministic algorithm.
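The deterministic concatenation described here is the "signature base" of the HTTP message signatures draft. A rough sketch of building one; the component values are placeholders, and real implementations must serialize each component per the draft's rules.

```python
# Rough sketch of an HTTP message signatures "signature base", following
# the structure described above. Values are placeholders for illustration.
components = [
    ('"@method"', "POST"),
    ('"@target-uri"', "https://as.example/gnap"),
    ('"content-digest"', "sha-256=:placeholder-digest:"),
    ('"content-type"', "application/json"),
]
covered = " ".join(name for name, _ in components)
sig_params = f'({covered});created=1648195200;keyid="client-key-1"'

lines = [f"{name}: {value}" for name, value in components]
# The signature-params line comes last and is itself covered by the
# signature; signer and verifier rebuild this base independently.
lines.append(f'"@signature-params": {sig_params}')
signature_base = "\n".join(lines)
```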
C: Do I want to... yes, actually, I do. I'll start with the last one: signatures are reasonably easy to write if you have a structured field library, and with Golang I was lucky to have one; otherwise it's really difficult. And then, to Jonathan's point, I support this idea. I think it's something we should consider for message signatures, and my example of an analogous context is the typ header in JWS, again to prevent confusion and mix-up attacks where people are reusing...
A: All right, so yeah, I'm trying to type that into Meetecho. But all right, so, Aaron, do you want to... do you want to...
F
Okay, tunnel vision. Let's... there we go, there we go. So, hi, Aaron Parecki. I want to share what I built during the hackathon. This is going to be two versions of the same code: the command-line version and the web-based client version.
F
Both of these are talking to the same authorization server that Justin's client was talking to. So, just for context, you will see the same authorization screen, because it's talking to the same GNAP server. So, the command-line version...
F
It starts off with creating the request for, you know, what it's trying to get access to. So I have a command that's going to do that, and, like I said, mine is much more verbose, so I'll explain what's going on here. This is the string that is being signed with the private key; it generated the key that it's got in a file here, and this is the string that's being signed. So this is what ends up creating that header.
F
This is the HTTP signatures part, plus, I guess, the Content-Digest header, and this was the part...

F
The font size? This is mostly a blur anyway, so thank you; hopefully that helps. So this is the HTTP signatures part. This is the string being signed, talking about what URL it's talking to, the different headers being signed, etc. This was the part that took the most work, because I didn't find an existing library for doing this, specifically the HTTP signatures part.
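The Content-Digest header mentioned here can be computed like this; a minimal sketch assuming the SHA-256 variant from the HTTP Digest Fields draft, with a made-up request body.

```python
import base64
import hashlib

def content_digest(body: bytes) -> str:
    """Build a Content-Digest header value: a SHA-256 hash of the request
    body, base64-encoded and wrapped as a structured-field byte sequence."""
    digest = hashlib.sha256(body).digest()
    return f"sha-256=:{base64.b64encode(digest).decode()}:"

# Illustrative GNAP-ish request body; any byte string works.
body = b'{"access_token":{"access":["demo-api"]}}'
header = content_digest(body)
print(header)
```

Covering this header in the signature is what binds the (otherwise unsigned) request body to the HTTP message signature.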
F
Thankfully, I did find a library for doing the actual crypto work, as well as for building this header, which is the structured HTTP header spec. So it goes in, creates a signature. This is the actual POST request it's making, and then here is the JSON that's part of the actual GNAP protocol.
F
So this is where it's saying "I can send the user to a URL", "I'm trying to get an access token", and here is the info about the client's key and the name of the client. The server was able to validate that, accept the key, and then created this response to send back to the client. So it's saying, go over... you know, send the user out to this URL to continue. Or, sorry, this is the URL to continue at for the command-line app; this is all sort of debug output up here.
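The request and response being walked through look roughly like this. Field names follow the GNAP core draft as I read it; all URLs, token values, and the "demo-api" access string are invented for illustration.

```python
import json

# A minimal GNAP grant request: what the client wants, how the user can
# interact, and the client's key (illustrative values throughout).
grant_request = {
    "access_token": {"access": ["demo-api"]},
    "interact": {
        "start": ["redirect"],
        # The command-line client omits "finish"; a web client would add
        # a "finish" block with a callback URI and nonce here.
    },
    "client": {
        "display": {"name": "Hackathon CLI"},
        "key": {"proof": "httpsig", "jwk": {"kty": "RSA", "...": "..."}},
    },
}

# The AS answers with an interaction URL plus a continuation handle that
# the client will use to poll later.
grant_response = {
    "interact": {"redirect": "https://as.example/interact/4CF492ML"},
    "continue": {
        "uri": "https://as.example/continue/80UPRY5NM3",
        "access_token": {"value": "80UPRY5NM33OMUKMKSKU"},
        "wait": 30,
    },
}

print(json.dumps(grant_request, indent=2))
```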
E
While Aaron's scanning that, I want to point out, in case the audio was a little low: the URL that's in the QR code is not the URL that you go to to enter the user code. It is that full URL, which has this randomly generated blob at the end, which we're not expecting a user to type or memorize or recognize, right? So that could be a whole encoded, encrypted, whatever thing tacked on to the end of that, because it's not expected to be user-facing directly.
F
Yes. Okay, so I'm looking at the same authorization screen that you saw on Justin's demo, and then I can click approve, and because there's no way for the server to send my phone back anywhere useful, it just says: go back to the device.
F
So at this point the command-line app needs to go and actually do a poll. So, this little thing is in the way... there we go. So I will instead do a poll, and it's going to do a poll with...
F
...to the, here we go, to the continue endpoint, which I got back in that first response. It sends back a token that it got in that first response and does this whole signing dance again, and because I approved the request on my phone, it actually got back the access token now. So now the app is considered logged in, and there is the access token.
F
I guess I should have shown what happens if I don't approve the request first. So if I poll with a pending request, the response sort of looks the same: there's no access token in this response, there's just, you know, the continue URL. So it's basically saying "try again", and if I keep doing that, it'll keep saying "pending" until I actually go and approve that request.
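The poll-until-approved behavior described here can be sketched like this; `poll_continue` and the canned responses are stand-ins for the real signed HTTP requests to the continuation endpoint.

```python
# Simulated continuation responses: two "still pending" answers (only a
# new continue handle), then an approval carrying the access token.
RESPONSES = [
    {"continue": {"uri": "https://as.example/continue/80UPRY5NM3", "wait": 30}},
    {"continue": {"uri": "https://as.example/continue/80UPRY5NM3", "wait": 30}},
    {"access_token": {"value": "OS9M2PMHKUR64TB8N6BW"},
     "continue": {"uri": "https://as.example/continue/80UPRY5NM3"}},
]

def poll_continue(attempt):
    """Stand-in for a signed POST to the continuation endpoint."""
    return RESPONSES[attempt]

def wait_for_approval(max_attempts=10):
    for attempt in range(max_attempts):
        response = poll_continue(attempt)
        token = response.get("access_token")
        if token:
            return token["value"]
        # A real client should honor the "wait" hint before retrying.
    raise TimeoutError("grant was never approved")

print(wait_for_approval())
```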
F
So that is the command-line demo, and then we can look at the same thing on the web. So now I'm in the web browser, talking to this application. It's actually using the same client library underneath, but now this one's running in the web browser. So I'm going to click "log in", and we're going to see the same verbose debugging of what it's sending. This time it's saying: we can start the user by redirecting them somewhere, and we can finish the flow by redirecting them back to localhost 8080.
F
So we're telling the authorization server to send the user back here after they log in, which is different from the command line, which doesn't have a way to do that. So here's the response from the authorization server saying, all right, go send the user here, and because we're in a browser, I can just put that as a link on this button. You know, in a real app I would obviously just send them right there with no interstitial.
F
We click on this, we see the same approval screen, click approve, and this is going to now redirect the user in this browser back to the redirect URL. You probably can't read the address bar, but up in the address bar there is a hash and an interact ref, and those two things combined work out to the client being able to say, cool, let's go make that request to the continue endpoint. This is the thing that I got in the URL, and then the response is now the access token.
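The hash-plus-interact-ref check described here can be sketched as below. This assumes the newline-joined hash base from the GNAP core draft and uses SHA-256; the draft lets the parties negotiate the hash method, and the default has shifted between draft versions, so treat the details as illustrative. The nonces and ref values are made up.

```python
import base64
import hashlib

def interaction_hash(client_nonce, as_nonce, interact_ref, grant_endpoint):
    """Sketch of the GNAP interaction hash: the two nonces, the interact
    ref from the callback, and the grant endpoint URL, newline-joined,
    hashed, and base64url-encoded without padding."""
    base = "\n".join([client_nonce, as_nonce, interact_ref, grant_endpoint])
    digest = hashlib.sha256(base.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

# The client compares the hash from the redirect URL against its own
# computation; a mismatch means the callback must be rejected.
expected = interaction_hash("VJLO6A4CAYLBXHTR0KRO", "MBDOFXG4Y5CVJCX821LH",
                            "4IFWWIKYBC2PQ6U56NL1", "https://as.example/gnap")
print(expected)
```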
A
That actually worked. Quite impressive. So, we did have Sam Weiler in the queue for a little bit, but I guess he stepped away. So I think that means that we're at future work, right? Where is it... yeah, or is it embedding now?
E
So I should have control. All right, so now that we've waited a bit: nobody in the queue for questions on the hackathon results? All right, okay. So where do we go from here?
E
That's what we're going to talk about. I put a lot into that slide. So the biggest thing that the editors did between last IETF and now is we went through all of the existing GitHub issues and triaged them, and assigned a little less than half of them at the time to ourselves, as things where, like, we are going to go through and close this specific one and figure out...
E
...what's going on. We're going to keep doing that. You know, it's a lot of overhead for us, but it's also forcing us to be very thorough about how we're addressing everything, so we're going to plan to keep doing that. But there are a couple of sort of larger issues that we know we need to address...
E
...in order for this to be a, you know, rich, full, and complete protocol. The biggest thing, and this is something that I am personally going to be taking the lead on, is the life cycle of these grant requests, because right now there's an implied life cycle: you make a request, you continue it...
E
You can update it, you can, you know, revoke it, and all this other stuff. We're just not explicit about the fact that it is stateful; it's inherently stateful. So what we're going to do is add that discussion explicitly into the spec.
E
This may end up actually having syntactic and semantic knock-on effects on the protocol itself. Specifically, we want to be, and we know we need to be, much more precise about what you're allowed to send at each stage of the protocol.
E
So if I am doing, for example, a continuation request, am I allowed to send a new client key? Like, right, then that probably doesn't make any sense; right now you're kind of allowed to and kind of not. But if I'm making a continuation request and I send you a new interaction block, the client might actually have some additional interaction method that it realizes it can do now, based on what the authorization server has told it. You know, it has new bits of information that it's saying, I can share this.

E
If you need interaction, it goes into this pending-approval state, where it stays until somebody approves it, and then it's approved; that's where you get tokens, and eventually it gets thrown out. If you create a request and it doesn't need interaction, it doesn't need external approval. So this is the OAuth client credentials or assertions, or some of the more advanced UMA flows, that GNAP can do natively.
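The life cycle described here could be written down as a small state machine; the state names and transitions below are one reading of the straw-man diagram, not normative text.

```python
from enum import Enum, auto

class GrantState(Enum):
    PROCESSING = auto()        # request received, AS deciding what's needed
    PENDING_APPROVAL = auto()  # waiting for somebody to interact and approve
    APPROVED = auto()          # tokens can be issued
    FINALIZED = auto()         # revoked or expired; eventually thrown out

# Allowed transitions, per the straw-man diagram described above.
TRANSITIONS = {
    GrantState.PROCESSING: {GrantState.PENDING_APPROVAL, GrantState.APPROVED},
    GrantState.PENDING_APPROVAL: {GrantState.APPROVED, GrantState.FINALIZED},
    GrantState.APPROVED: {GrantState.FINALIZED},
    GrantState.FINALIZED: set(),
}

def advance(state, new_state):
    """Move the grant to a new state, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A client-credentials style grant needs no interaction and goes straight
# to APPROVED; an interactive grant passes through PENDING_APPROVAL first.
s = advance(GrantState.PROCESSING, GrantState.APPROVED)
print(s)
```

Making the table explicit is precisely what enables the "what am I allowed to send at each stage" questions, and the formal analysis, discussed in the session.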
E
So again, I will be sending this diagram out to the list and hopefully starting a conversation on this as well. I surprised my co-editors with this diagram, I think, yesterday afternoon, so when I say "we" came up with it, I'm hoping they're not going to yell at me.
E
But again, this is a straw man, and hopefully we can improve it.
E
Thank you, Yaron. I don't... I think we're clarifying the state machine, but I agree that until this is fully formalized, you know, we don't have quite all of the tools to do the full static, formal analysis of the protocol. I think that this will actually help us a lot: once we get this written down, this is the kind of thing that feeds into formal analysis engines, and that will help us from a security perspective, and also from a developer and implementer perspective.
E
Being able to look at this and say, like, "this is where I am right now inside this process, here's what I need to do next", right? One of the things that's great about OAuth 2 is that it tends to be a very linear state process.
E
All right, key rotation is something we've been kind of talking about for a long time. This whole state machine, I think, will actually help us with that. We now have a much clearer model of where the keys are associated and what they're associated with inside the protocol, and this is something that I actually realized during the hackathon.

E
We actually no longer think that that makes sense, because each of these different proofing methods has very, very different properties in terms of key presentation and key verification.
E
So what we're now proposing, and we need to write the text for it, is different ways to handle key rotation based on the proofing mechanism that you have in use. This, of course, raises the question of what if I want to change proofing mechanisms: say I'm doing HTTP sig and I need to switch to MTLS.
E
For who knows what reason. Is that something that GNAP is going to allow natively, or is GNAP going to say, if you want to do that, just start over and pretend it's a brand-new key? That's a bridge we're going to have to cross when we actually get through this section.
E
Okay, now for mandatory-to-implement. This is something that the editors have gone back and forth on a lot, and we still don't have mandatory-to-implement text, partially because GNAP is such an incredibly flexible design.
E
OpenID Connect has an implementation considerations section which includes profiles that say, if you are doing this type of application, here's all of the stuff you have to include, right? And that's not to say that everybody necessarily follows that, but providing those base recipes, we think... the editors are at least proposing that that is a reasonable way to approach the mandatory-to-implement question for a protocol as flexible as this.
E
Okay, extensions are another really big part of the spec. This doesn't really change the core spec; this changes how designers and implementers interact with the spec itself. Are you allowed to ignore an extension that you've never heard of?
E
That seems like a sensible thing to do, but what if that is the extension that actually adds security? Like, OAuth 2 struggles with PKCE right now: if the AS doesn't know PKCE, it ignores PKCE, and then you don't get any of the benefits, but you think you are getting them from a client perspective.
E
So this is not as easy a question as it seems on the surface. We are at a spot in the development of GNAP where we think there's a lot that we can do to make this clearer, based on what we've learned from other protocols and other systems and other stacks, and the editors are currently proposing to create a section about how to add new features and functions to GNAP, and what you're allowed to do with all of the different parts of the protocol.
E
Right now we have a bunch of scattered sections that say "and this can be extended", and it waves at IANA, and that's it. We obviously need a lot more than that, and that's what we're going to plan to do.
E
There is the lingering question of what to do with the two JOSE-based key proofing mechanisms; we've kept them in core for now. This is the same slide as at IETF 112; I'm just bringing it up right now because this is, I think, going to be a continual question for a while. The only JOSE dependency in all of GNAP core is this proofing mechanism, and I do think that it is a good thing that it is limited to just this particular space.
E
So this is doing things in a way that's a little bit different from DPoP, because it is not tied into the DPoP key presentation and sort of the OAuth protocol flow directly. But could this be used outside of GNAP as well? Possibly. That is a question that the community eventually needs to answer. Inertia, however, will keep these as methods inside of GNAP itself.
E
All right, so the resource server draft: we know it's still there, we haven't forgotten about it. It's been expired for a little bit, but that's because we haven't been doing active publication work on it.
E
The two biggest things for the resource server draft, though, because it is a lot simpler than core, are the security, privacy, and trust considerations. Like we did with the core draft last fall, we're going to do the same exercise with the RS draft, and we're also, very importantly, going to be presenting a token model.
E
Now, this is not necessarily a token format. It could be expressed as a token format, and it can be expressed as introspection responses, but it's a model of what tokens represent, sort of a data-structure, data-model style thing: there is a user, there are rights, there is a client, there is an AS, there are target RSs. That type of stuff needs to be enumerated and clarified, and then mapped into things like JWT and introspection responses.
H
Hey, so, just in listening to the presentations this morning, it sounded like key rotation was proof-method specific. Is there value in just basically making the proofing method a sort of pluggable entity, and then JOSE just becomes one of the proofing mechanisms that supports it? And that way, you know, you can either plug it in or not, based on how you want to do it.
H
It just seemed like, from a factoring perspective, you're already heading down a path of saying the proofing mechanisms are sort of unique in their own way. So you could write a standard... you could even write instructions like, hey, if you have a new proofing mechanism, here's the things that you need to describe, right? How you, you know, cycle keys, you know, blah blah blah.
E
Yeah, thanks, George, that is exactly how it's written already. Actually, the JOSE stuff is already a separated module, if you will, as is HTTP signatures and mutual TLS, so you're already allowed to, you know, plug that in or out as you see fit.
E
So, for example, one of the things that we did for the hackathon is we decided we were just going to use HTTP signatures as our proofing method for all of our stuff, even though the Java AS does support the JOSE methods and other stuff as well. So, yeah.
E
This also ties into the mandatory-to-implement discussion of, you know, if you're talking to an authorization server, can you expect there to always be one type of proofing method that is always supported, or is that going to be based on some other deployment profile? So, yeah, it already is that type of separate module.
H
Cool. In that context, you had mentioned that... and obviously I checked out a long time ago, so I'm listening to try and gain some ideas, or gain some familiarity. I can't talk; it's early here in the U.S.
E
Sorry, I had to get up to the speaker here. So the question is: can you ask what proofing methods are supported? And yes: with that first URL, if you make an HTTP OPTIONS request to it, the AS is required to send back what is effectively a...
E
We see clients, like, trying to pre-configure themselves and do the right thing, and we see clients that just kind of blast the thing that they know how to do, and if that fails, well, "I couldn't have reconfigured myself anyway". So GNAP kind of allows both types of client behavior, with at least predictable results.
E
It's not going to guarantee that it's going to work, because, you know, if a client only does JOSE and it's talking to a server that's doing MTLS, that's obviously not going to work, but there are at least predictable results when that kind of combination shows up. Cool, thanks.
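The OPTIONS-based discovery described here might be used on the client side like this. The response field names approximate the draft's discovery section, and the proofing-method names and URLs are illustrative.

```python
import json

# A hypothetical discovery response, as an AS might return to an HTTP
# OPTIONS request on its grant endpoint (field names approximate the
# GNAP core draft's discovery section).
discovery_json = """
{
  "grant_request_endpoint": "https://as.example/gnap",
  "key_proofs_supported": ["httpsig", "mtls"],
  "interaction_start_modes_supported": ["redirect", "user_code"]
}
"""

CLIENT_PROOFS = ["jwsd", "httpsig"]  # what this client knows how to do

def pick_proof(discovery, client_proofs):
    """Pre-configure: choose the first proofing method both sides support,
    or fall back to blasting the client's favorite and hoping."""
    supported = discovery.get("key_proofs_supported", [])
    for proof in client_proofs:
        if proof in supported:
            return proof
    return client_proofs[0]  # predictable failure beats silent misconfig

discovery = json.loads(discovery_json)
print(pick_proof(discovery, CLIENT_PROOFS))
```

The fallback branch mirrors the point made above: a client that cannot reconfigure itself just sends what it knows, and the mismatch fails in a predictable way.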
A
I think that means that... all right. Or actually, Justin, you could just switch over to your final... yeah, the embedding thing.
E
All right, last presentation; I still kind of have a voice. Embedding GNAP. This is something that I posted out to the list; we actually had a speaker about it during the interim call. For this talk, I am not speaking as an editor; I'm speaking as an individual contributor to the working group. None of this carries editorial weight or consent... or intent, rather. I'm not even saying that I necessarily think this is a good idea.
E
This is a question that I think the group should consider, and so what I would like to get out of this discussion is whether the group thinks that this is an interesting problem that we should address.
E
So there are a couple of use cases that have been brought up that have some similar aspects. All right, so one that got brought up during this working group's initial BoF was this idea of: I am calling an API, and the API says, wait...
E
I need the user that's using you right now to go enter new credit card information, so I need them to go interact, go do some approval, enter some information, and then come back to me, and we can keep doing the API thing. That feels a lot like a delegation protocol stepping out in the middle there, right? Even though it's kind of not; it's getting the user involved in the middle of an existing process. That's something that delegation protocols like GNAP...
E
...that's what they do; that's their entire purpose. Another use case came up recently in the Verifiable Credentials API special interest topic, or whatever it's called in the W3C; it's not its own working group, because the W3C is weird.
E
In order to do that, it needs to be able to get the user in front of some other piece of software, for the user to say: hi, it's me, I'm okay with this, this, and this, okay, go. Right?
E
In some cases the user might log in; in some cases the user might present a separate set of VCs to this, in sort of this, you know, change-transaction thing. Regardless, once the user says go, then they actually need to come back, and the app needs to finish calling the API exactly like it always has.
E
So the reason that I think this is interesting is that protocols like GNAP are never used on their own. Like, nobody ever runs a security protocol for the sake of running a security protocol, unless you're a couple of nerds at a hackathon last weekend, but that doesn't count. In the real world, you're either protecting an API, you're gathering user information, or you're just doing something, and the security protocol is your means of doing that, right? Or it's part of your means of doing that, and that is actually really important.
E
So what do those look like? With traditional delegation, you go try to do your protocol, your end target protocol; you try to go do that, and you fail, and you get told to go talk to an authorization server, do whatever that says, and then come try again: come back with an access token and talk to me to prove that you've done this. All right.
E
This is really simple from a protocol design perspective. Like, we all know how this works; we've been doing this for a very, very, very long time.
E
GNAP fits completely in that box in the middle and does not really touch the outside of the protocol, except for that WWW-Authenticate header that indicates "go start the GNAP process", and otherwise, you know, that's pretty much it. This is plain vanilla API access. The benefit of this is that it separates the layers very cleanly, and that is a really, really big benefit. To be clear, the GNAP layer and the protocol layer don't really have to know anything about each other.
E
They don't have to deal with each other's separate set of concerns, and it's usable without modifying either protocol. Like, GNAP doesn't have to do anything special; the protocol doesn't have to do anything special except say that it's protected by access tokens and know how to kick this off, right? But the protocol doesn't need to know anything about how these are interacted with, or what's being requested, or anything like that, if it doesn't want to. And another huge benefit here is that you can do this against multiple protocols at once.
E
The downside is, this is very chatty. There's a lot of back and forth; there's a lot of discrete steps when you know the end result that you're trying to get to, and it's really, really inefficient for the simple cases, like going and getting, you know, credit card information. Like, I'm not really asking for additional access to the API at that point; it's just that that's the only means the API has. If we're doing this, the only means the API has for getting the user involved is to say: your access...
E
Then there's the case of embedding protocols inside of GNAP itself. So you start a GNAP process, but then, with extensions, probably through the interaction methods, you're actually going to go do a totally different protocol inside the context of GNAP, and then you're going to return the results of that as part of the GNAP response. GNAP is built to be extended in exactly this way. The reason for this is that this is exactly what OpenID Connect did to OAuth 2.
E
You go start off the GNAP process, make your grant request, and say, "I can do the foo API" for my interaction method; this is how I can get the user involved. And so I'm going to go do that foo protocol, and I can do whatever I need to do, whether that's exchanging VCs, or calling, you know, back-end fabrics to post information, or doing key exchanges.
E
I've got this extra chunk of information that I can return directly to you, the client, as part of the GNAP response. The most obvious way to use this is for things like identity information: so I'm going and doing a negotiation for which types of user claims and attributes I'm asking for, and I'm getting back signed assertions that I can then use as part of federation transactions, right?
E
This is a whole other extra layer of overhead, which is why the login case makes sense, because you're kicking off a brand-new process when you're going to get login information. So it makes sense that you might have to do some additional stuff to get there. When you're calling other kinds of APIs, it doesn't make nearly as much sense.
E
So that brings us to the third and, in my opinion, weirdest case: embedding GNAP inside of a different protocol.
E
Now, what this looks like... this absolutely looks weird; this looks really funny. But this is the way that these use cases keep getting described, and this is why I think this is fascinating. Because I go to call the API, and I'm just doing the native API call, whatever that is, not even necessarily access tokens; I'm just calling the API and doing that. And then the API says, hey, I have figured out that I need to get the user in front of me somehow.
E
You do the token continuation, you do all of that stuff, and then, when that has been fulfilled, the API call comes back with the native API response: the thing that you were asking for in the first place. It's a VC, it's the purchase completion, it's whatever. This is not something embedded inside of a GNAP response anymore; this is the native target API.
E
Now, the upside of this is that this is a really efficient use of the GNAP security components, because you're only using the bits that are relevant to you: you're getting the user in front of something else, you are managing the state of that request over time, and then you're getting out of the way.
E
So the downside is that this is a really strange, specialized integration, and GNAP was never meant to start halfway through like this. Like, the GNAP process is: you always start it exactly the same way, and then you branch off into all of these different options. That is a fundamental pillar of GNAP's design.
E
All right. Every version of this is going to depend on the protocol that it's being embedded in, so that GNAP response coming back, you know, it's going to look different. What if I'm calling an API that speaks XML, or speaks COSE, or speaks something else? GNAP is defined as JSON over HTTP.
E
So if I'm getting a JSON blob embedded in some other protocol definition, that means I now need to sort of roughly shift gears to go do this other thing, and that feels very strange. And what we're seeing right now, with the handful of places that are doing stuff like this, is that instead of embedding GNAP, they're actually claiming inspiration from GNAP and doing this interaction-type stuff, right? So my question to the community, to the working group, is: is this something we care about?
E
Should we explicitly consider this case of pulling, like, the middle bits of GNAP out and saying how people should use them in that way? Or is this something that we just stay silent on, and if people take inspiration from GNAP, that's good enough? Do we even care? So that's my question. I don't have those answers.
G
Jonathan Hoyland, Cloudflare. I mean, obviously people think I say this about everything, but this looks like a place where you'd use channel bindings and importers and exporters, because what you're actually trying to do is not produce one sort of super-big protocol. You're trying to say: we want to, in a principled way, get some security guarantee from using GNAP, or using some other protocol inside GNAP, and transfer that between the two protocols. And rather than trying to come up with a sort of one-size-fits-all solution, you should, or you could, produce an importer interface where people can import keys, and an exporter interface where people can take keys out, that give you those guarantees when used. Yeah, I know, so it could be done in a principled way.
G
If you have this sort of protocol in the middle, rather than trying to say, define your foo-protocol-to-GNAP interface...
G
...and then going away and using it, you could just say: GNAP has this API call that you can make at the beginning, and it produces some output at the end, and then foo just has to be able to use those two, without understanding anything about GNAP or defining anything about GNAP.
G
Okay, yeah, but my point is you could do it in a formal-analysis style. Okay, thank you.
E
So the embedded GNAP, just to be clear, is basically an optimization of this model, where you sort of slice off the top and bottom of that box in the middle, in order to make it more streamlined and compact, at least in theory, right? And the main reason for this is that, in the proposed use cases, you are no longer really calling a protected API.
E
You are just kicking off user interaction and coming back. And the answer may very well be: we just tell all of these groups, go figure it out on your own; that's your protocol; if you don't want to fully step out into delegation space, then that's not our problem. That might very well be the right answer, and I would be fine with that. But because this has come up a few times, people have looked at GNAP and said: maybe I can use parts of it.
C
All right, so I think this can become very complicated very quickly, and we should avoid it for now; we're still figuring out our own state machine.
E
So, for those of you in the room who didn't hear, Yaron's take was that we should not look at this until v1 is done, which is a totally reasonable take, and honestly kind of something I personally lean towards. But I wanted to make sure that this was written down, and that we considered whether this was worth it.
A
All right, that's it, yeah. And I think that gets us... I'm going to put the chair slides back up, just for the hell of it, and yeah. So that gets us to the end of our content for today, and we didn't actually put up a slide that says "open mic", but this is it, right? So if somebody has anything they want to raise... otherwise we're out of here early, and hopefully we meet again. Well, Gerald, have we decided on the interims?
A
No, but we will definitely discuss that on the list, and it's very likely that we'll do further interims before Philadelphia. But, you know, hope to see more of you in Philadelphia for IETF 114. And I think that's... not seeing anybody running for the mics or up to the queue, so I think that's it, and thank you all for coming.
A
One of them, well...

A
And who you might have talked to, I don't know. Maybe, maybe, but...