From YouTube: IETF111-GNAP-20210726-1900
Description
GNAP meeting session at IETF111
2021/07/26 1900
https://datatracker.ietf.org/meeting/111/proceedings/
A: So I think we are good to go, welcome again. This is the first session of IETF 111, not quite morning for most of us, so let's get going. I'm Yaron Sheffer, and I'm glad to have Leif here, my co-chair.

And that's it, and now for the first checkpoint: we are looking for a minute taker. That was Christina, all right, thank you very much, Christina, for taking notes. You can use any old emacs, or you can use the CodiMD; the link is on this slide, and it's already set up with some material filled in, so that would be even more convenient. If you're doing it offline, please send the minutes to Leif and myself after the meeting.
A: Agenda for today: after this short introduction, we will go into a presentation by Adrian Gropper, followed by the main part of the session, where the editorial team will discuss the latest on the GNAP core protocol as well as the resource server draft.
A: And then everyone is welcome: there's a half-hour break after this session, so please join us on Gather Town in room seven. And with that, let's move on to the next step.
A: Please go ahead.

E: Okay, next slide. I should introduce myself: I'm the chief technology officer of a nonprofit called Patient Privacy Rights. It's a volunteer position, and I also lead an implementation of self-sovereign technology protocols built around some of the standards that I'll be talking about today, and we're working on GNAP.
E: I want to talk about the human rights perspective on protocol design, where we're not just talking about privacy to the individual, but the impact on society: the efficiency that is increasingly shifting power not to self-sovereign people through decentralization, but actually to the sovereign corporations and governments. And to deal with that, protocols are essential. You know, standardized data models like verifiable credentials and decentralized identifiers are great. They are useful, but they do not have a primary impact on the human rights perspective.
E: I'm going to describe in just one slide which aspect of self-sovereign identity I think matters: the issue of separation of concerns. We're familiar a bit with the GDPR, the legal difference between a controller and a processor, and the separation of concerns of not lumping them together in the standard, sort of, Facebook model, if you will. And then, well, how do you do controls? What's going on in W3C and DIF? And, as I said, I hope to have 10 or 15 minutes for discussion. Next slide, please.
E: So what are the human rights concerns? You know, it made me think of the Tyrell Corporation in the movie Blade Runner, where the problem was that it was really impossible for humans to regulate what a corporate entity did once it sort of became a thing of its own. The kinds of concerns we have are things like facial recognition, which I'm calling ambient authentication; it's been banned in a couple of jurisdictions, but it's still out there.
E: Non-repudiable authentication, in other words strong credentials, where credentials become barriers to anonymity because of the ubiquitous surveillance I mentioned. Strong credentials and chain-of-custody protocols are a little bit like pre-crime.
E: You don't know ahead of time that a crime has been committed, but you impose chain-of-custody possession protocols just in case a crime will be committed. A lack of agency, in other words the lack of the right to have a lawyer or a doctor that you choose represent you in front of the sovereigns, in front of the institutions. Controls on free assembly, in other words the ability to prevent or manage the formation of gangs, if you will, or congregations, etc., as self-sovereign entities. And proprietary AI, which again sort of turns a lot of what concerns humans into either trade secrets or just intellectual property instead of education.
E: Next slide, please. I'm not going to dwell on this slide; I'll simply say that self-sovereign identity could be thought of as an identity that only you control. There's no third party, you know, no domain controller or anybody else, and it basically has a public component, the decentralized identifier and the DID method, and, in the DID document, the things that are controlled by you.
E: You have a service endpoint, and that is the demarcation point, if you will, between that which is public and that which is protected, what we would call the resource servers and the authorization servers in GNAP.
E: Everything else is a processor; in other words, everything other than your identity, whether it's a nation or a vendor, is basically a sovereign as far as you're concerned. Sometimes you have a choice, but very often you don't have a choice of sovereign unless you want to be a refugee. And the control models, which I'll say a little bit more about in a second, are either direct control through possession, or messaging-type control...
E: ...where you get to pick an email address, and again it's kind of a possession-based thing, because you're going to include the document, or whatever the verifiable credential is, in the message, but you get to choose the address; or the mediator: an example of a mediator in the wild right now is Apple Sign In, where they'll give you an email proxy in both directions. And then the one we care about here, of course, is the delegated control model, where you have an authorization server.
E: I've worked, as I mentioned, on two of these, and we all understand a little bit about why GNAP is where we are. Next slide, please.
E: So what do I hope, or what's my picture, what do I want you to keep in mind as we go to the discussion section? We originally had the narrow waist being Internet Protocol, and then you see all those alternatives: email, WWW, phone on one side; copper, fiber, the physical layer on the other. What we ended up with, partly as a direct result of OAuth, is a platform narrow neck, where we get to choose our Sign in with Facebook or Sign in with Google.
E: Unless you happen to be a geek, and then you can sign in with GitHub; it's a matter of, you know, choose your captor at the narrow waist. As we move to self-sovereign identity, I would maintain that there is this thing that I'm calling the minimum viable protocol as the narrow waist.
E: Where, on the one side, you have the application-level stuff, and on the other side you have the services layers. EDV, by the way, stands for encrypted data vaults; I'll just mention that briefly later on. And so what I hope to discuss is the role of GNAP around this MVP. Next slide, please.
E: So specifically, these are five W3C and DIF protocol work groups. The one that I'm most involved in, and Justin is as well (I don't know about the others, because I don't attend all of them), is the verifiable credential API protocol work. There's also the encrypted data vaults and identity hubs work, which has been around for a long time.
E: We can talk about them, but it's basically person-centered storage; DIDComm, which is message-level security; and, going back to having a self-issued component to OpenID Connect, self-issued OP. Next slide, please.
E: So that's it. I would like to basically pose these five questions, down below, for the group.
E: You have my contact info, and I really, really welcome a continuation of the discussion, as well as help in implementing GNAP in the particular way that we're trying to do it in HIE of One, an example implementation of various standards, including DIDs and VCs, in the healthcare context. HIE stands for health information exchange. So it is sort of this idea that you can have a self-sovereign agent. So the questions are, as you can read them yourselves: how many authorization protocols does the internet need at that narrow waist?
E: Do we need something other than GNAP in the narrow waist there? How do we feel about, you know, compatibility with OAuth in terms of human rights, and particularly things like the client credentials that are often associated with OAuth? Do we understand and recognize the difference between self-sovereign agents and fiduciary agents?
E: Both are important to human rights, and if people want to discuss that, I'm happy to go into it. How do we detach the chain-of-custody-type protocols that, like I say, treat every transaction as potentially a pre-crime event, without actually introducing the biometrics that we have associated with analog credentials in the past? And finally, the big question is: should GNAP be seen as the narrow waist of self-sovereign identity?
E: Thank you. I'm done with the presentation.
F: All right, I think it says I am next, okay. So, Adrian, first off, thank you so much for bringing this in. As Adrian mentioned, I'm also tangentially involved in a few of the efforts that were discussed here, including the awkwardly named VC HTTP API (alphabet soup), but also the self-issued OpenID Provider and a few other things. So, to me...
F: So first, Adrian, I want to say I really appreciate your perspective in asking this question about, you know, is this security protocol the narrow waist? I think that's a very interesting way to look at it, but my personal perspective on this is that it is equally important to always ask the question of where these things fit together.
F: So on those layered diagrams, a lot of times the important questions that need to get answered are: what do the interactions between those layers look like? It does, of course, have to be very, very clear when you're at the narrow waist, if you're at the narrow waist, because you've got the two most important boundaries sitting above and below you. But with a security-layer protocol like GNAP, you've got to focus on...
F: This is an API for getting cryptographic credentials moved around using web protocols. In short, there are a number of different ways that GNAP and the VC HTTP API could fit together, and they all have different bearings on where you view sort of the centrality of the protocol. For example, as an HTTP API, GNAP can be used to protect calls to it. It could also be used in the process of gathering the credentials that would be used in the API.
F: So, you know, getting a user involved is something that OAuth can do; that's kind of one of its main selling points, if you will. And so, depending on which direction you're approaching the problem from, GNAP could look like a narrow waist, or it could look like just another function in sort of the overall structure.
F: I guess my comment and question is: what dimension are you most interested in looking at, and what kind of engagement are you looking for from the GNAP group?
E: Very briefly, because I hope we have at least one other perspective: two components to your question. Number one, I'm primarily interested in delegation as a human right. We all understand why we get to choose our defense lawyer when facing the court or the state, and why we get to choose our doctor when we face the hospital or the drug vendors. And so, as we think about those different layers (and you were very clear as to what you meant)...
E: Is there really only one layer above and below, or is GNAP spread into the other layers of the hourglass? I think if at any layer we ignore delegation, we are introducing the potential for a human rights problem. And so that's the direct answer to your question: it's delegation all the way down, to use the turtles analogy.
E: The second part is: I am basically looking to potentially either take up, within the GNAP group, an appendix that discusses these five protocols (or whatever the right number is) that are being worked on, you know, on top of DIDs and VCs, and literally say about them what we said about OAuth, here's what they don't do that needs to be done in GNAP; or else help me start another work group. Hopefully many of us here would join it, and I'm new to IETF.
E: So I don't know how to do that, so that we could take up this human rights issue in a place that is, what should I say, friendlier to human rights concerns than W3C. Thank you.
F: Thanks; try to make the...
G: Yeah, hi. So I wanted to get your perspective on this point. I completely understand the thought behind the problems with the current deployments of the major OAuth providers: there's only a handful to choose from, and those are not things where you can really choose whether or not you get to have accounts there. Those accounts are, of course, owned by those major providers, and the vast majority of those major providers require that consumers of the identities from those providers are registered entities themselves, which is, I think, what you're getting at with this point.
G: My question is: there is nothing in OAuth itself that actually requires that relationship. There's no requirement for client credentials at all, and there's dynamic client registration, which is in OAuth, which would allow someone, without any prior relationship, to show up and use a Sign in with Facebook button or access data.
E: A two-word answer, with a slight expansion: Justin Richer. The expansion is (and maybe Justin wants to actually pick up on this) that we have worked on not just UMA 2.0, which is the wide-world dynamic client registration thing, but also on the HEART work group, which Justin and I were both part of, which is a profile of UMA and OAuth.
E: Justin was an editor for that, and I was a co-founder of it five years ago, and you cannot get there from here now. That's my opinion, you know, as an advocate and not a geek; Justin may want to pipe in with his opinion as to what failed about UMA 2 and the HEART work group as a profile of OAuth and UMA.
B: We'll cut the line here; we'll take the last two questions, but just to note that we have only two minutes left for this section.
C: Oh, go ahead, Robin. Okay, sorry; thanks, Kathleen. Adrian, thank you again for bringing this. I guess my question is a very simple one: what is it in this model that would prevent a relying party from simply saying to the individual, here are all the pieces of data that I require about you, otherwise you can't access the service?
E: Again, it's the issue of delegation, as long as we are careful that it becomes obvious when the relying party is preventing you from choosing your defense lawyer or from choosing your doctor. I happen to be a physician, though I've never practiced medicine; I only had a license for like 15 years and decided it wasn't worth the money every year, so I'm a geek, I'm an engineer by nature. But yes, that's basically what has to happen.
E: Obviously, you know, if the sovereigns, whether they're sovereign verifiers or sovereign issuers in the SSI sense, won't allow you to delegate to an intermediary of your choice, then yeah, you've got to move.
F: All right, hi everybody. I'm Justin Richer, joined also by Aaron Parecki and Fabien Imbault, and I will be doing the editors' presentation today. Aaron and Fabien, please feel free to jump in at any point, and, yeah, let's get started. So first off, we're going to talk about the changes in the drafts. We are now a two-draft working group, which is, you know, a hundred percent...
F: ...more than when we last met. So we'll be talking about the changes in the core draft, and also what's in the new draft and how we got there. We're going to be talking about an attack that was discovered and codified by some researchers, the mix-up attack, and I also want to focus on some features that have been removed or changed significantly since the draft at the last meeting. And then we're going to end by talking about our next set of big topics, kind of where we're going next, and the status of the implementations that we, the editors, know about.
F: We would like to start calling out more people who are implementing GNAP, or implementing parts of GNAP, and that kind of thing. You can go through the RFC diff if you want the detailed text changes for the core draft. The resource server draft is functionally brand new, except that a lot of that text actually came from the core draft to begin with.
F: So a lot of the sections that you see deleted from the core draft version 4 have wound up in the resource server draft and then been expanded on, but more on that in just a bit. Next slide, please. There's been a lot of work over the last few months: we have merged 36 pull requests against core and 10 against the resource server draft, and you can use those URLs there to go...
F: ...look at GitHub and see the exact text changes for all of those. I'm going to now talk through what the actual changes were. We're not going to go diving deep into every individual pull request, especially because some of them undid changes that earlier ones made, because this is an iterative process, but I do want to try to thoroughly cover the stuff that's changed since last time. So, the discovery mechanism in the core draft...
F: Okay. So first off, there were a bunch of small changes to the discovery mechanism inside of GNAP.
F: This is the stuff that allows a client instance to do a pre-flight call to the auth server and figure out different bits and parameters and things like that. As most in the group know, this type of discovery in GNAP is functionally optional, because, as the name suggests, the negotiation nature of the protocol allows the different parties to be able to say: here's what I can do, here's what you can do, what is it that we overlap on? It's meant to be deployable in a way that you can actually talk to, you know, an arbitrary installation and at least figure out what's going on. The discovery mechanisms, however, let you optimize that: they let you sort of document ahead of time, these are the pieces that I can do, so that you know ahead of time not to put options in your requests, or look for options in your responses, that aren't going to be supported. And this is especially important if you're going to be doing multiple calls to something over time.
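As a concrete illustration of that pre-flight optimization, here is a minimal sketch of a client pruning its request options against a discovery response. The field names and values are assumptions for illustration, not the exact keys from the draft.

```python
# Hypothetical discovery response from a pre-flight call to the AS.
# Field names here are illustrative assumptions, not the draft's exact keys.
discovery = {
    "grant_request_endpoint": "https://as.example/tx",
    "interaction_start_modes_supported": ["redirect", "user_code"],
    "key_proofs_supported": ["httpsig", "jwsd"],
}

def prune_start_modes(wanted_modes, discovery_doc):
    """Drop any interaction start mode the AS did not advertise, so the
    client never sends an option that cannot be supported."""
    supported = set(discovery_doc.get("interaction_start_modes_supported", []))
    return [mode for mode in wanted_modes if mode in supported]

# The client would like to offer three start modes, but only two survive.
print(prune_start_modes(["redirect", "app", "user_code"], discovery))
# -> ['redirect', 'user_code']
```

The same filtering would apply to any other advertised capability, such as supported key proof methods.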
F: Anyway, this discovery mechanism is fairly lightweight, and the changes here are relatively minimal, but they're reflecting other changes that we made down the line, including this next section about subject identifiers. I have to give huge credit to Fabien on this: he did a lot of great work clarifying how subject identifiers work inside of GNAP. So this whole notion of being able to identify who the user is, and potentially who the resource owner is (we've got some open issues for clarifying that language), but identifying the person...
F: ...who's there really was an important driving use case for starting GNAP, because it allows GNAP to be able to say, without additional APIs, just who's here right now. And we are aligning with the SECEVENT working group, which has a Subject Identifiers draft, which I believe is in last call or just out of last call; hopefully, Yaron? It's on its way. And so we aligned with some of the latest changes from there that that draft's editor had put in, for example for supporting DIDs as a format. That's something that, as Adrian mentioned, the DID community is pushing more stuff out there, and we're aligning with that format as it evolves.
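To make the alignment concrete, here is a sketch of what a subject response carrying SECEVENT-style subject identifiers might look like. The exact member names track moving drafts, so treat them as assumptions rather than the normative wire format.

```python
# Illustrative GNAP-style response identifying the current user with
# Subject Identifier objects in the style of the SECEVENT draft.
# Member names are assumptions against evolving drafts.
subject_response = {
    "subject": {
        "sub_ids": [
            {"format": "email", "email": "user@example.com"},
            {"format": "opaque", "id": "XUT2MFM1XBIKJKSDU8QM"},
        ]
    }
}

def identifiers_of(response, fmt):
    """Collect the subject identifiers of one format from a response."""
    ids = response.get("subject", {}).get("sub_ids", [])
    return [s for s in ids if s.get("format") == fmt]

print(identifiers_of(subject_response, "email"))
# -> [{'format': 'email', 'email': 'user@example.com'}]
```

The "opaque" entry above is the kind of AS-chosen stable identifier that could subsume GNAP's earlier invented user handle, as discussed next.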
F: One of the other important things about the subject identifier changes is that we've got a couple of features that GNAP had to invent things for, like the user handle and a couple of other things, that the editors are looking at now with the subject identifier changes in mind, realizing that we might be able to actually simplify some of this stuff by reusing some of those same features.
F: We haven't had a chance to go and make those specific changes yet, but this is important. There were also a lot of changes on the cryptography and signing side.
F: We changed how the JOSE-based systems work and how access tokens get presented. We made some very specific changes to things like how DPoP gets used, and normalization on claims, which is important at the cryptography and canonicalization layer of a protocol like this.
F: Some of these ended up getting overcome by other changes, which we'll cover in just a moment, as the GNAP core protocol got kind of cleaned up. Regardless, there was a lot of stuff that got changed around the smaller details of how the cryptography works. Next slide, please.
F: I want to highlight here a non-editor PR: the Cache-Control header. Somebody came along, André (I'm going to butcher their full name if I try to remember it), and they were like: hey, you're not mentioning Cache-Control here on any of these; you really should. And not only that, but here are all of the places where it should be. Those types of things are a small contribution in terms of the number of lines changed, but they are immensely valuable to the working group, and the editors absolutely love seeing this. So thank you so much for that.
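For flavor, the kind of guidance that PR adds looks roughly like this: responses that carry secrets should tell every cache not to store them. This is a sketch of the idea, not the draft's normative text, and whether the draft uses exactly this directive in every spot should be checked against the current text.

```python
# Sketch: a GNAP grant response carries secrets (access tokens, nonces),
# so it should be marked uncacheable.  "no-store" is the standard HTTP
# directive for that.
def grant_response_headers():
    return {
        "Content-Type": "application/json",
        "Cache-Control": "no-store",  # never persist this response in any cache
    }

print(grant_response_headers()["Cache-Control"])  # -> no-store
```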
F: This is an important detail for implementers that, you know, we either would have noticed eventually, or somebody would have tripped up and made a mistake and then we would have noticed too late, basically. So thank you for that. A big change, which we're going to cover in its own discussion, is that we extracted the portions of the core specification that talk about resource server communication out into their own separate spec.
F: The user goes to a URL at the authorization server, and that type of thing. What was becoming clear in the discussions that we're having, especially with the DID community and with things like WebAuthn and newer technologies, is that this whole notion of having to send a user to the server, like sending them to a web page, that was assumed by a lot of the language, really isn't necessarily core to the protocol.
F: What's really happening here, and this change of language is meant to reflect this, is that the user is providing some amount of interaction with not just the authorization server but potentially a bunch of other components that the authorization server has different kinds of relationships with, and the client can facilitate that connection.
F: That was really at the core of what the interaction phase of GNAP does: it makes that connection with the user. And so the thing about PR 242 is that it changed a lot of the language but didn't actually make normative changes.
F: It was a shift in how we are discussing different things, how we're approaching things like redirecting to a service or launching an application, or stuff like this that people are already doing even in the OAuth world, but shifting that away from talking about it as going to something at the authorization server. Because what we realized is that there are a lot of use cases today, in both OAuth and GNAP, where people aren't going to the authorization server in the same way that we might think about it in our sort of simplified diagram.
F: That's where what Adrian brought up, I think, starts to get really interesting: that VC HTTP API that Adrian mentioned.
F: This is something that the authorization server could either provide as a resource, or it could actually be a client itself of a VC HTTP API, meaning that the client instance, instead of saying (or in addition to saying) I could redirect you to a URL somewhere, can say: hey, I've got claims about my user, verifiable credentials, at this URL, at this API endpoint, and give the AS all of the information that it needs to be able to go interact with those and get signed VCs and proofs, and be able to query things and all of this other stuff, again facilitated by the client and facilitated by this negotiation.
F: I'm not saying that the VC HTTP API should necessarily be in GNAP core as an option, but that's the kind of thing that it's really important we allow for, that we don't accidentally box out of the realm of possibilities here. Because if we do box it out, then somebody's just going to come up with a really weird way to patch it back in, which is exactly what's happening with OAuth and OpenID Connect and the VC HTTP API stuff right now. So we need to be aware of this. Okay.
F: We added a privileges field to the access array. Whereas previously we had actions, identifier, locations, and data types, there was a long discussion about other things we might want to add to those core dimensions.
F: This notion of privileges was brought up as something that really is orthogonal to those other things. Now, this is a description of the kind of thing that you're asking for in your access token, not necessarily what goes into the token itself, because that is opaque to the client instance. And so this is part of the client's request, saying: I know that to call this API, this is a privilege that I want to be able to have on this API.
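A sketch of what such a request element might look like, with privileges sitting alongside the existing dimensions; the values are invented for illustration, and the exact shape should be checked against the current draft.

```python
# Illustrative GNAP access token request.  The rights object mixes the
# existing dimensions (type, actions, locations, datatypes) with the
# orthogonal "privileges" dimension discussed above.  All values are
# made up for the example.
token_request = {
    "access_token": {
        "access": [
            {
                "type": "photo-api",
                "actions": ["read", "delete"],
                "locations": ["https://server.example/photos"],
                "datatypes": ["metadata", "images"],
                # What the client wants to be able to do; not what ends up
                # inside the (opaque) token itself.
                "privileges": ["admin"],
            }
        ]
    }
}

rights = token_request["access_token"]["access"][0]
print(sorted(rights.keys()))
# -> ['actions', 'datatypes', 'locations', 'privileges', 'type']
```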
F: I also want to point out another important aspect here, which is what goes in that access array. We have had some editorial changes, which we'll see in a moment, that try to clarify and expand that language; that's going to be an ongoing effort. And our sister group, OAuth, is working on Rich Authorization Requests, which is a back-port of this feature to OAuth 2, and that is nearing last call in that space as well.
F: So, you know, we're trying to sync this language between the two so that the same concepts can be applied in both spaces, and the privileges field is a part of that. We added a new parameter for the mix-up attack; I'll talk about that attack in just a bit. And then the editors got bored and deleted a bunch of stuff. No, really, we had actual ongoing conversations about what people were using and about why...
F: ...each thing in the protocol was in there. We're getting to the point where the protocol is starting to stabilize, the core is really stabilizing, so now's the time to really look at all of the stuff that's grown around it and ask: okay, why do we have this piece? Why are we doing this? Does it still make sense? Like I was saying before about the user handle: now that we've got this new opaque subject identifier, it probably doesn't make sense to have a separate field for that anymore.
F: That's the kind of thing we're going to start really looking at, and I'll talk more about the specifics of the removed features towards the end of the presentation. Next slide, please. There were a whole bunch of editorial changes, some of them small, some of them big. One of the biggest ones, though, was fixing up the examples, especially the cryptographic examples. We now have a script that will generate properly formatted and signed examples for much of the spec.
F: I'm not going to say that covers everything in there, but the goal of the editors is to have those generated by real code. In the individual draft from over a year ago...
F: ...the examples started as me copying and pasting from my personal implementation, and then a lot of things got edited by hand, and people noticed that the signatures weren't actually valid anymore, or, in at least one case, that we didn't even have valid JSON in a spot. So anyway, a lot of that is being cleaned up. Obviously, there are probably still mistakes.
F: We want to know where they are so that we can fix them, or you can fix them. There were also, you know, a bunch of typos. Aaron actually did a bunch of work updating and clarifying the diagrams that are throughout the document, including a new sort of foundational diagram right at the beginning of draft six that says: these are the major players and what they mean to each other.
F: We've also got some more text in there to sort of expand the rationale for not just why GNAP, but why GNAP is the way it is. So, you know, why are we doing particular things, like representing clients by keys, for example?
F: Why are bearer tokens now sort of a second-class citizen? Little things like that, helping somebody who's coming new into this space to be able to figure out what the difference is between this GNAP thing and something else that they may have seen in another space. And finally, back real quick...
F: So those got in there, thankfully, and the reason I wanted to point this out is that we noticed there was a discrepancy: when we went to implement the latest draft, we realized that we were sending strings in one place and booleans in another, and it didn't make sense for them to be different.
F: So that's where that's coming from. Okay, next up; next slide, please. All right, so the big document structure change: like I said, there have been a lot of changes since draft four, but the biggest change you'll see is that there are now two documents.
F: Now, if you were here at the last IETF, we had a big discussion about kind of what this looked like and what the connections were; and actually, if you could go to the next slide, there's one of the diagrams that we used.
F: The idea with splitting this up was to have things that face a client instance in one document and things that face a resource server only in another document, because these legs of the triangle can vary independently of each other. That's fundamentally what it gets down to: I can do things to interact with users and get tokens and have different client deployments, and do all of that in a completely different space from the decisions I need to make about what I put in my access token.
F: Am I using a structured format? Am I using references? Am I doing introspection? Am I doing, you know, some shared-memory bus thing? What does the token even represent? The client doesn't need to know about those decisions in order to do its job, and the resource server doesn't need to know about what the client is doing in order to decide whether the access token it gets is good or not. So separating that decision space into these two documents was really key.
F: So if you can actually go back a slide, there are a couple of bullet points; thanks. What this functionally means is that we took resource-set registration, and sort of resource server introduction, this dynamic introduction, into a different draft. Token introspection is in there because it is something that is designed to face resource servers. There are open questions about whether clients should be allowed to do that and how that gets exposed, and Yaron has also already filed some very important questions about how we handle discovery between these different sides of the authorization server: should they be the same, should they be separated, what are the trade-offs there?
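As a sketch of the RS-facing side that moved into the new draft, token introspection lets the RS ask the AS about a token it has received; the field names below follow the general shape of the drafts but are assumptions, not the exact wire format.

```python
# Illustrative introspection exchange from the resource server draft:
# the RS posts the token value it received, and the AS answers with its
# view of that token.  Field names are assumptions, not the wire format.
introspection_response = {
    "active": True,
    "access": [{"type": "photo-api", "actions": ["read"]}],
    "key": {"proof": "httpsig"},  # how the client must prove possession
}

def rs_authorizes(resp, required_type, required_action):
    """RS-side decision: the token must be active and grant the needed right."""
    if not resp.get("active"):
        return False
    return any(
        a.get("type") == required_type and required_action in a.get("actions", [])
        for a in resp.get("access", [])
    )

print(rs_authorizes(introspection_response, "photo-api", "read"))    # -> True
print(rs_authorizes(introspection_response, "photo-api", "delete"))  # -> False
```

The point of the split is visible here: nothing in this exchange depends on how the client obtained the token, only on the AS's view of it.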
F: We've got to figure that out. The important thing here, though, is that we now have two different places where we can focus the discussion, so that we're not talking past each other, because there were a lot of earlier discussions about, like, oh, we need to decide, you know, what attributes go inside the access token, where, since that's opaque to the client software, to the client instance, that's not something...
D
A
D
So yes, we've had quite a few discussions on how to choose token formats, and now we are going to do that. If you look at what's in the spec today, you'll see that there are many, many improvements on top of that. We speak about Biscuits and ZCAP and a few other things, and I was hoping we could see, with dispatch, a way to formalize these additional token formats a bit more.
D
For that, the only thing that is very well defined is JWT, of course. But as Adrian was explaining before, we are also trying to achieve delegation and advanced types of scenarios, which basically require more advanced cryptographic machinery, so that's at least an open issue. So far, from what Yaron explained in the issues as well, what we've got in the document is just some links; it's not official standards right now.
F
All right, thanks. If you can actually go a couple of slides forward, please. Thank you. So now I'm going to talk about the mix-up attack. It was identified by a couple of researchers, and because I am a terrible person, I forgot to copy their names into the slides. I'm sorry; if somebody could add that to the notes, that would be appreciated.
F
So, sorry about that, but this is related to the OAuth 2 mix-up attack, if you're familiar with it. It basically works by getting a client that knows how to talk to multiple authorization servers to ask for a token at one AS, but getting the user to interact at a different AS, and having an attacker come away with an access token.
F
Now, out of the box, GNAP is already in a better state than OAuth 2, because we don't have bearer secrets as a fundamental building block. So, assuming the private keys themselves are not compromised, it's not possible for an attacker to just easily impersonate a good authorization server and convince a client that it's talking to someone else.
F
You can ask for a bearer token; but if you're not asking for a bearer token, then the mix-up attack succeeds in a detectable way, because the client will not get back a usable access token, and so the client is going to notice that the access token doesn't work. In OAuth 2's mix-up attack, and if you're using bearer tokens in GNAP, the attacker gets a copy of the access token and so does the client, so the client doesn't actually know this happened.
F
Being able to detect these conditions is, in a lot of ways, as important as being able to prevent them. So, next slide: I'm going to talk through how this works. There's a text write-up on the next slide as well; I'm going to talk through the diagram, but I encourage people to follow along with the text.
F
So the user starts off talking to their client instance; this is a diagram from the researchers. The players here are the end user, the client instance, AAS, which is the attacker's authorization server (the bad authorization server), and HAS, which is the legitimate authorization server.
F
I don't know what the H stands for, but it's the good authorization server. So it starts off with the user talking to the client and saying, "oh, HAS" (the honest authorization server, thank you).
F
Now, the client software knows that it's talking to the attacker's authorization server; it knows the URL it's talking to. The client may misrepresent to the user, in some human-readable form, which server they're going to or which service they're trying to access, but on the wire the client knows that it's talking to AAS. AAS then turns around and makes its own request to the honest AS.
F
The honest AS answers with the server-side nonce and the interaction redirect URI that it's going to send the user to, and all of that goes back to the attacker. The attacker's AS forwards it pretty much as-is back to the client. Now the user redirects, not to the attacker's server where the client was talking, but to the honest AS that the attacker is targeting, so they go there. As far as the user is concerned, they're approving access for client software. Once again, HAS knows that it's the attacker's software, because it has that identifier; but in either a dynamic world, or with somebody sneaky using self-service registration,
F
the attacker can disguise themselves enough to phish a user with this. So the user interacts, they approve it, and all of that comes back to the client with the validation hash and the interaction reference; we're down in step nine now. And here's where the attack hits: at this point, all the information needed to make that validation hash was the client nonce, the server nonce, and the interaction reference.
F
The honest authorization server has all of these. The attacker has the client nonce and the server nonce, because it proxied those values back in steps 2 and 4 above. What that means is that the validation hash in step 9 actually does validate, even though the initial request went to a different server than the honest authorization server would expect.
F
Here's where the fallout of the attack starts to happen: the client forwards that interaction reference, signed with its own key, to the attacker's server, and then the attacker proxies that interaction reference on, signed with the attacker's key, so it's not impersonating the client to the honest authorization server, and gets back an access token. If it's a bound access token, the attack stops here: the attacker gets a token, and the user doesn't.
F
Maybe they call and complain, maybe they don't notice; but at this point the attack is over, because the attacker has access approved by the user. So, in summary, this is really about substituting a client: getting a user to approve a client that they don't think they're approving, and the access token being delivered somewhere else.
F
So, two slides forward, please. The next slide is the description of that in text. There was also a thread on the list; please go read through that for more details.
F
The mitigation is based roughly on the same mitigation as in OAuth 2, which is to include something about the server that the user is interacting with. In OAuth 2, the accepted solution is the issuer parameter that goes back alongside the state and everything else.
F
But what we realized in looking at this attack in GNAP is that, with this validation hash that we already had, we could actually mix in the grant request URL and have the same effect, without having to add additional values on the wire that an attacker could possibly sit on that redirect and steal. So what happens?
F
You now mix together the client nonce, the server nonce, the interaction reference, and the grant endpoint URL. That means that when the hash is initially generated, the honest authorization server is going to use its own grant URL; it's not being told what that URL is on the way in by anybody. The client, though, is going to validate using the attacker's URL, the grant endpoint the attacker pointed it at, and so the hash won't match. This isn't an attack where the attacker masquerades that request URL at all, that's a different kind of attack, but the client will be able to detect this if it validates that hash.
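The mitigation described above can be sketched in a few lines. This is an illustrative sketch only, not the normative GNAP computation: the concatenation order, the separator, and the hash algorithm (SHA3-512 is assumed here) need to be checked against the current draft text.

```python
import hashlib
from base64 import urlsafe_b64encode

def interaction_hash(client_nonce: str, as_nonce: str,
                     interact_ref: str, grant_endpoint: str) -> str:
    # Concatenate the values both parties already know, now including
    # the grant endpoint URL that the client actually called.
    base = "\n".join([client_nonce, as_nonce, interact_ref, grant_endpoint])
    digest = hashlib.sha3_512(base.encode("ascii")).digest()
    # Unpadded base64url, as is typical for values carried in URLs.
    return urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Because the AS hashes over its own grant URL while a mixed-up client hashes over the attacker's URL, the two values simply stop matching, with nothing new sent on the wire.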
F
Yes, this is still putting weight on the client to protect itself, and as we know, client developers will do everything to avoid having to do more work. But this is a very simple addition to code that compliant client software was already required to have. So there are no additional parameters to check and nothing more to send.
F
There's nothing more you have to protect; it's all information that you already have. And we validated this approach with the researchers who were looking at it; they confirmed that this does in fact cut off the attack. So this has been implemented in draft 06, and the PR that implements it was remarkably short, because it simply said: add this URL into the mix, and everything else should fall out from that.
F
Any questions on the mix-up attack before we move on? That was a fairly big piece. I guess I'll also pause to see if anybody has questions on the specific edits, although we're going to be talking about some more of the edits in just a sec.
F
Okay, thanks. I did see Robin's question. Yes, the client is relying on the fact that the AAS and HAS URLs are going to be different, so it is just a string that gets added in to the calculation.
F
So yes, it is. What it boils down to is an identifier for the authorization server that cannot easily be co-opted by the attacker, because it's the URL that the client is directly talking to.
F
So thank you for that. All right, next slide, please.
F
Oh yeah, I forgot about this wrap-up. So, basically, any redirect-based protocol is going to be inherently phishable; it's the nature of the space that we work in. And there are related phishing attacks if you're not using the finish feature as part of your interaction, because then there's no interaction reference that you can protect.
F
But that's part of the trade-off of being able to do this with different kinds of clients. And while this is made easier by a dynamic world, it is still very possible even with static clients; we've seen this in the wild in the OAuth world, with people using self-service client registration to register things that, to a human, look a lot like Google Documents, but aren't. So, next slide, please.
F
So, we removed a number of features in drafts 05 and 06, the first of which is signature methods. The editors put out a call to the list a while back about this. We kept HTTP Message Signatures, which is still progressing in the HTTP working group, and we kept MTLS, based on the concepts from the OAuth MTLS work.
F
We dropped the OAuth proof-of-possession method, partially because that draft has long since expired, and it is probably going to be replaced by a new draft based on HTTP Message Signatures in the OAuth working group; that's in an adoption call in OAuth right now. We also dropped OAuth DPoP. DPoP was never meant to be a signature method for general HTTP signatures.
F
What it covers is very limited, and it also has its own built-in key negotiation, which GNAP wasn't using. So it was an awkward fit from the beginning, and we've dropped both of those.
F
The editors think that there's still a discussion to be had for both of the JOSE methods, the attached and the detached JWS. There wasn't a clear consensus on whether we should keep them in the core, keep them as an extension, or drop them entirely.
F
What the editors have done for the moment is keep them in the core, but we have swapped out the examples to use HTTP Message Signatures throughout the document, except for the specific examples of those signing methods, so that if we do pull them out, it will be cleaner surgery later. So there's a little bit of editorial reordering of sections and things like that, but overall they're still in there.
F
So we do think that we should have more discussion on this. Next slide, please.
F
So, we don't actually have any listed as must-implement yet; mandatory-to-implement is going to be a different thing, and we didn't add that language yet. There was some discussion about this, and the feeling so far (I'm not going to go so far as to say that this is the consensus) seems to be that HTTP Message Signatures will be the mandatory-to-implement, at least as a kind of baseline.
F
Partially because MTLS is really deployment-specific, making the most sense only where it fits, HTTP Message Signatures at least gives you a baseline: you can make a call and be told, "no, go do something else instead."
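For a feel of what an HTTP Message Signatures baseline involves, here is a rough sketch of the signature base, the canonical string that actually gets signed. The precise component serialization rules live in draft-ietf-httpbis-message-signatures; this only shows the shape of the mechanism and is not a conforming implementation.

```python
def signature_base(components, sig_params):
    # One line per covered component, identifier quoted and lowercased,
    # followed by the @signature-params pseudo-component that records
    # what was covered, when, and under which key.
    lines = [f'"{name.lower()}": {value}' for name, value in components]
    lines.append(f'"@signature-params": {sig_params}')
    return "\n".join(lines)
```

The signer signs this string with the client instance key; the verifier rebuilds the same string from the received message and checks the signature against the presented key.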
F
So that's what I saw in the discussion. I also land there personally, as an individual and as an editor.
F
I didn't see a call for that yet, but in the next revisions we may honestly just put that stamp down, get feedback in the group on it, and see how we go forward. The first step was at least culling those two. We do think that the JOSE methods are potentially fit for that as well, particularly the attached JWS method, because it relies on replacing the HTTP entity body with a different entity body in order to use the method, and that's problematic for a lot of implementations.
F
But this is something where we've got to figure out what we're going to do with these things. So thank you, though; important question. Capabilities was a sort of pseudo-discovery mechanism. I still like it as an individual; I think it's clever. But the thing is, we looked at it and nothing was using it, and it's really dangerous to have something in a security protocol that you think somebody will use for extending, without actually exercising it.
F
So we took it out for now, with the hope that if we do come to a use case or an extension or something that explicitly needs it, we'll be able to pull back not only the text that we had but the discussion around that text, and start from a little bit better than scratch.
F
So, like I said, I still think it's clever, but there just wasn't enough there. Next slide, please.
F
Similarly, the existing grant. This was a way to make a new grant request based on a separate grant. So this isn't about updating a grant to get new rights or new access tokens, it's not about step-up or step-down, and it's not about reading or managing an existing thing. It's about saying, "give me something new in the context of something else." It was one of those things that always kind of confused people, and people weren't quite sure what was going on with it.
F
But once again, we didn't really see a push that needed this right now, so what the editors have done is pull it out from the core spec. What we would like to see, if people want this, is that we actually attack it as a holistic feature. I think Adrian's trying to jump on the queue.
E
Sorry. What I'm saying is relative to the resource owner interacting with the resource server. I know of only two ways to do this: either there's a pre-registration of an authorization server that the resource server has to agree to (for self-sovereign users, for self-sovereign delegates' authorization servers), or the resource owner has to get a capability.
F
Oh, I think he has a fire alarm. Okay, stay safe, Adrian. The short answer, and I don't know if you can still hear me, is that that's not even what the capabilities field indicated.
F
There's this security notion of a capability, where it's a credential with its destination all sort of bound up into one thing. For a really dumb version of it, think of it as a URL with an OAuth token on the end: you have everything you need to just go to that magic URL and use it. That's not what the capabilities negotiation array was about at all, and that's another good reason to remove it.
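The "dumb version" just described, a credential fused to its destination, can be illustrated in a couple of lines. This is only the mental model of a capability, not anything GNAP defines; the query parameter name is made up for the illustration.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def capability_url(resource_url, token):
    # Credential and destination fused into one artifact: whoever
    # holds this URL can exercise the access it represents.
    return f"{resource_url}?{urlencode({'access_token': token})}"
```

The security trade-off is exactly what the transcript implies: possession of the string is possession of the access, with no separate key binding.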
F
What Adrian's discussing, though, this notion of how we dynamically introduce resource owners into the process, is exactly the kind of thing that the major editorial changes around the interaction and authorization-gathering section were really about, so that was all very key to that part of the discussion. So yeah, hopefully that answered the question, and hopefully Adrian's safe, because that did sound like a fire alarm. Thank you; that's all good.
F
Okay, excellent, thank you. All right, so let's move on; next slide, please. One last thing we pulled out: there were a couple of different ways to pass the client instance identifier.
F
We narrowed that down to just one. What it boils down to is that at the very beginning there was a "what if we needed this along with other information", and that what-if has not yet materialized. So again, what we've decided to do is take out that kind of
F
underspecified what-if feature, with the hope that if it really does need to be used, we can bring it back in a way that's more robust and makes more sense as a holistic approach, just like the existing-grant thing. I think it was Yaron who brought up the notion of what if we had a grant identifier to use there instead; those are the kinds of questions that should be asked all together, under the heading of: what is this feature?
F
So, where do we go from here? One of our current discussions on the mailing list is key rotation: should we define a mechanism as part of GNAP, but also enable existing mechanisms? Something people do today is, for example, hosting your keys at an external URL and having the other party fetch whatever keys are at that URL. How do we want to manage this kind of stuff?
F
GNAP uses keys as software identities all over the place, but it does not have a baked-in reliance on something like PKI. You can use PKI to solve all of these, but you can also use this without any PKI infrastructure.
F
So the question is how we handle key management, and specifically rotation, in this kind of space, or whether we even handle it inside of GNAP itself; that is part of the question. So far, the discussion seems to be saying we should handle it.
F
One of the other things that needs more text and more discussion is how these different components relate to each other: what are the assumptions we're making about them? Like I mentioned previously, Aaron started that with the sort of introductory diagram and the text around it, and Fabien has done a lot of really great work on the entities and roles section. We need to keep expanding that so that it's more clear that
F
this entity trusts this other entity, for this purpose, in this context. It's that type of transitive discussion that needs to be encoded in the draft. Right now there are trust relationships, and they are mentioned in there, but they are not explicitly described as well as they could be. So we know that's something that needs to happen.
F
Another big thing is that, especially now that the editors believe the major churn in the core draft is fairly stable, at least for now, now is really our best time to look at what we've got and start really filling out the security and privacy considerations. The editors are already taking it as an action for the three of us to start building that text out, and the trust-relationships discussion is a part of that, because you're only secure if the thing that you're trusting to be secure is also secure, and we need to point that kind of stuff out.
F
We need to just go make those registries; it's a lot of mechanical stuff, but we think that we've got a decent feel for which fields are clear candidates for extensibility through a registry, versus those that really should be locked down more specifically. And there are related issues about guidance to extensions, like: what does it make sense for an extension to be able to say here, and how does it fit in with the other functions?
F
So that's part of all of that. Next slide, please. Actually, you can keep going, because I already talked about key rotation. So: without actual running code, this is all just an idea.
F
We don't believe in that here at the IETF, so I've updated the Java implementation that I personally work on, which is XYZ (because I'm terrible at naming things). That's been updated to the latest draft of GNAP, including all of the flag changes and all of the syntax changes that we mentioned before. Although, come to think of it, I'm not sure I fixed the
F
discovery document; I'll have to go do that right after the call. But there are implementations that the editors know about in Python, PHP, and Rust in the works, some of them from us, some of them not from us. There are also more implementations of our core dependencies, specifically HTTP Message Signatures and security event identifiers.
F
So there are libraries coming up and getting worked on, libraries and implementations that support these, which will help people who are developing GNAP. And once again, I want to say that the editors really do honestly believe that the sort of major core churn you see in the early days of a protocol definition is largely settled.
F
We know this, but we do think that it is a very, very good time to start implementing this, to start trying it out, to start actually fitting it into different spaces. I've reached out to a couple of the groups that had worked on early implementations of the XYZ proposal, from before the GNAP working group, about updating to GNAP for their systems and following the draft, and we'll see where we go from there. Oh, Robin asks about "bikeshedding"; very shortly:
F
bikeshedding is arguing about what to name things. It comes from the adage of arguing over what color to paint the door of the bike shed, instead of building the damn bike shed.
F
All right, and that, I believe, is it. So I'm going to hand it back to the chairs now and take any final queue questions before we move on. I know we're right about at time.
A
Oh sorry, thank you. So, just one word before we go into questions and discussion: the editors seem to be very optimistic about the completeness of the protocol right now.
A
Please give it a read, not as detailed as you would for working group last call, but let's catch any missing features. Also, we used to have too many ways of doing signatures; we still have a few too many other things that are duplicated, that are redundant, that can be removed and maybe come back as extensions, or maybe not. So a quick read by everybody here would be highly appreciated.
A
And now, if you have any questions or comments regarding any of the two; oh well, maybe the core protocol first.
I
Okay, thank you. So, an issue was reopened six days ago, issue 295; the title is "in practice, only rights are supported, but attributes should be supported as well." I have a proposal to explain how this should be done now. I would also like to address another message, which is one finding about the current document that I believe is important: in section seven today, it is stated that in GNAP the client instance secures its request to the RS by presenting an access token.
J
Great. I just discovered this working group this morning, while I was trying to figure out what to do with my first IETF meeting, so I hope this is a reasonable question. (Welcome.) Thanks.
J
I'm particularly thinking of one thing: I was really happy to see that the RS-first AS discovery is in there, because I've wanted for a long time for OAuth to be usable in place of HTTP Basic or Digest auth, and this looks like it's almost there. So I guess my wondering is whether there's a way to use GNAP as a better form of HTTP Basic or Digest auth for generic HTTP resources.
F
Yeah, to answer Jamie: that's a really interesting idea. The resource access rights, and OAuth 2's Rich Authorization Requests, have discussed this notion of registering the types somewhere, and the consensus has been that we don't want to require registration of types. But having a catalog would be an interesting one. Now, that's not usually what an IANA registry is used for, so there might be a better form or venue for something like that.
F
I would love to see that written up more, if you could bring it to the list, and that may even be something we use in one of the examples. That's a really neat idea. So thank you for both of those; I hope to see you participate more. These are great questions, and there's a lot of discussion that they fit in well with.
G
Yeah, I just wanted to respond to some of what Dennis said, in particular the point about TLS. This has been an ongoing conversation on a GitHub issue, in case anybody is curious about looking at some of the previous parts of that discussion. But as for the idea of dropping client instance keys:
G
I have not yet seen a proposal for how to effectively get the same level of security that client instance keys provide by just dropping the keys as currently described, so something else would have to be added back, which I have not seen Dennis propose yet. The main security property that we gain from client instance keys is being able to bind access tokens to individual instances of clients: it is a particular install on a particular mobile phone that has its own keys, different from the quote-unquote "same app" installed on another phone, such that you can't pick up an access token and use it in another app, because the underlying keys would be different.
B
Sure. So, after some consideration and discussion with Yaron this morning, we wanted to propose to the working group a workshop: to invite people in to analyze and do security protocol proofing, in case there are any gaps in the protocol now, or considerations that need to be raised, just like TLS did before TLS 1.3 was published.
B
It would be much better to fix it in advance. This is just a proposal, which is why there's a question mark. Would a two-day workshop, four hours per day, work? We're assuming it would be remote, because over the next year it would be really hard to get people to travel consistently from around the world,
B
with various statuses and capabilities for traveling. So we'd like to get a little bit of feedback on interest in this, and also on willingness to reach out to researchers: to invite them to analyze the protocol, or the different aspects where they have expertise, be that cryptography, or having been involved with OAuth, for instance, or other authorization protocols, and who would like to spend some time trying to find some holes.
F
This is an annual security-focused workshop for the OAuth protocol and its sort of family of protocols, but it has really started to take on a life and identity of its own, to the extent that the founders of the workshop reached out to the GNAP editors, asking us if we would be interested in coming in and talking about GNAP, or having that be a focus of part of that workshop.
F
So I think this aligns very, very well with that, and personally I 100% agree: get this in front of a bunch of smart people and let them throw hatchets at it. That's the only way we make it better.
F
There are some provisional ones, and I am not allowed to share them yet, so keep an eye out from folks like Daniel Fett, who are organizing that, in terms of what kind of conference it is and when and where it's going to be. Traditionally it's been an annual conference sometime around the spring IETF, give or take, but I cannot say when it will be.
C
Hi, is the mic on? Yes? Okay, thanks. Yeah, I think that sounds like a really good idea. The security protocol proofing obviously is functionally necessary; it might not attract a huge audience. If it can be accommodated in the scope of the workshop, some discussion of the trust relationships that Justin mentioned in the draft roadmap would also be extremely useful, and might have slightly broader appeal.
A
So, like we said, the editors feel that the core document is stabilizing, so I'd like to propose basically a framework of progress for the core draft.
A
So, to start with, in my opinion we should separate progress on the core draft from progress on the resource server draft. I think trying to progress both in parallel would be very difficult, so let's focus on one, even if the other one is being amended in parallel.
A
We have a list of use cases that we collected a while ago, and I'd like to ask the editors to go back to this list of use cases, find out if we're really covering them, find out if we're not, and come back to the list with what's being covered and what we consciously decided we don't want to cover, or don't know how to cover, or whatever. We have a bunch of open issues right now; as of yesterday, I believe it was a bit over 100 just for the core document.
A
More for the RS draft, actually. So obviously we cannot say the document is stable when we have over 100 open issues, and I would suggest, just completely arbitrarily, bringing the number down to about 10.
A
Everybody reads the document; there will be a bunch of comments, maybe reasonably large changes, hopefully no major features or major changes, but we might need to iterate a little bit after working group last call. Because if we are to have a security review, and especially if people are using formal tools, we will need to have a sort of feature freeze. This is what we had for TLS 1.3 when it went through a lengthy security review by the academic community, and I would like to have that level of review from the community here. Then, after the security review, maybe they don't find anything, maybe they do; we need to complete the document based on that feedback, and on any other feedback that has come since, and possibly have a second working group last call. If that happens, that's not a tragedy. And then we are ready to publish the document as an RFC. So it's a long process; I'm avoiding dates here. Kathleen's proposal for the workshop was for six months from today, and I think that kind of lines up with these steps. I'd like to hear from you people what you're thinking about this plan.
F
Yeah. There are a couple that I have plugged together at various stages of implementation, but we don't have enough implementations of draft 06 to have a meaningful matrix yet, because, like I said, there was a bunch of stuff in the security layer, the signing layer, that changed, and that broke everything. So, not yet.
F
I would also love to see that. And I would also like to encourage this group to consider something like what the OpenID Foundation has for the OpenID Connect core protocol and its profiles: there is an automated test suite that you can basically point your client or your server at, and it acts like the other parties in the ecosystem and gives you results of what you did against all of these other things.
F
Now, that's a lot of work and a lot of tooling to get there, but the OIDF suite is at the very least open source, so that could be the basis of an effort like that. I would love to see that, and I would also love to see a matrix; getting the time and engineers on that is, of course, always the problem.
A
Okay. There's a common pitfall where the implementation, or the open source, becomes the spec. I don't want us to go there; our main deliverable is the paper, the document. So if the editors or the community at large want to create such tools, that's all great, but the focus should be on having the best document.
A
Right. And now, a few questions were already raised, and some are lurking out there.
A
I've already asked everybody to read the two documents, to make sure we don't have any remaining big holes that the editors may have missed.
A
And then there's the bigger question of how this relates to the bigger OAuth community: to the evolution of the OAuth protocol itself, and to the ecosystem, OpenID and so forth.
A
I believe we don't have enough engagement, and if we are to have a very open discussion about post OAuth 2.0, or OAuth 2.1, then I think we need a lot more engagement on both sides.
D
Yes. Well, what we did recently with the annex, appendix D on the OAuth protocol, was actually published as a paper, and it was discussed during conferences as well; and so we had discussions with people like Daniel Fett as a consequence of things like that. So that's also something we could do: broader communication to the wider community, not staying within the IETF in some cases, and getting the work presented elsewhere as well.