From YouTube: IETF115-GNAP-20221110-0930
Description
GNAP meeting session at IETF115
2022/11/10 0930
https://datatracker.ietf.org/meeting/115/proceedings/
A
B
A
B
Let's get going. If we have a volunteer for note taker, that'll be great; otherwise it'll be left.
B
We are going to spend most of our time with the editors going through the latest changes, major changes to the main protocol in preparation for working group last call. During this part of the session we will also have 10 or 15 minutes, somewhere near 11 o'clock, to speak about Adrian's human rights considerations.
B
D
Yeah, I was going to say we should make sure that Fabien is online, because he can... all right, fantastic, great. He and I are going to be trading off for the editors' update, so I need to request the technology again. It's wonderful, thank you. Good thing authorization is easy, right? All right, so hi everybody, welcome to the GNAP meeting at IETF 115.
D
And I was inspired by some of the other presentations this week to include a local picture with some very rough Photoshop this morning. So thank you to the chairs for helping me get that in at the last minute. So everybody, mind the gnat. I know it's a little early for this pun, but I love it, so just deal with me anyway.
D
Today we're going to be going through the core changes, the changes to the core document, of which there have been a lot. Fabien and I have been really busy over the last couple of months. We're going to be grouping these into the editorial and the functional changes, like we always do.
D
There have been no updates to the RS draft, but more discussion about that a little bit later. And, as the chairs mentioned, we'll be talking about sort of the futures of the GNAP protocol and where we go from here. So first off, if you pull down the slides, all of these links are live. You can go look at the RFC diff between the two versions to see the detailed text changes. We have merged 23 pull requests on core and one on the resource server draft.
D
Over the last few months, like I said, we've been busy. There have been a lot of changes, and we have closed 55 issues, 56 if you count the one that Fabien closed this morning. That leaves us, at this point, with only issues in the core draft that have come up after we published draft 11.
D
That was, what, two weeks ago? So we were able to go through it. This is the biggest news: we were able to systematically go through every single issue in the tracker, decide what needed to be done, and execute a resolution for all of the outstanding issues, including all of the issues that had been labeled in the draft as, you know, "editor's note: go think about this, go take a look at this," from the very beginning of the protocol.
D
No closed issues on the RS draft. We've opened up a couple more because, as you'll see, some stuff has been shifted in that direction.
D
Okay, on the functional side, and I'm going to go through each of these in more detail: we have finally added key rotation, after talking about it and how we might do it for a while. We've got more concrete text on cross-user authorization. We have made some normative changes to the syntax, but really in ways that make it more consistent within the protocol, so you get the same kinds of structures between similar things.
D
We have finally gone in and added an IANA section, including extensive guidance for how extensions to the core protocol would actually work if you want to go and build something new on top of this: what our intention as a working group is for the extension points, and how to exercise those extension points. And some smaller issues, including noting that all URIs are intended to be absolute; we just actually had to spell that out.
D
That's pretty much that entire issue. We expanded the error messages and the syntax around those, which also fits into being consistent with the syntax between different types of objects, and there's an expanded bit on RS-first discovery. On to the editorial changes: once again, we've expanded the security considerations. This has happened in every single revision over the last year or so, as you might expect for a security protocol.
D
We've explained class_id, updated some definitions, and we have finally included interoperability profiles, which is our way of doing mandatory-to-implement considerations, because GNAP is such a broadly reaching protocol with lots of different use cases. The mandatory-to-implement considerations are: if you are doing this kind of application, this is what you expect; if you're doing this type of application, here's what you expect. We've added two; we could probably use some more in the core document by the time this hits final, but this is the current approach. We've added a list of implementations and would like more, and we've done a lot of cleanup on the document itself. There were some sections that we thought we were going to add but never actually did, and so we've cleaned up a lot of stuff in the core text. Fabien, you want to talk about the JSON Schema?
E
Hello, everyone. So yes, it was planned for a long time; it was an outstanding issue. What we decided to do is to actually not put that into the core document, but on the GitHub wiki. The main reason for that is that it's actually very lengthy, and so it would add a lot of length to the text, which is already quite long. We wanted to just add some resources for developers and implementers so that they could simplify their developments.
E
The nice thing with JSON Schema is that it's actually completely integrated with our way of approaching the GNAP protocol, because everything is based on JSON, and so you can actually understand the model. It's kind of a nice summary of the data model for GNAP, and then, on top of that, you can actually implement verification at runtime for your different data as it's changing. If you're interested in more about that, you've got the jsonschema.org resource, which is quite nice.
E
You've got a rich set of tools for pretty much every language out there, so you can actually generate code from that, you can test with some data and see whether your schema is fine, and you can verify at runtime. We're actually using the latest draft for that.
E
So that's the kind of thing we need to tune, but in general, you've got the request and you've got the response, which are already available here on the wiki on the GitHub for the GNAP core protocol.
E
So overall I think it should help you and provide some guidance for implementers, and we expect we'll expand on that over time. So that's this part.
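As a rough illustration of the runtime-verification idea being described (the real schemas live on the GNAP GitHub wiki and are far more complete), here is a toy validator for a small subset of JSON Schema applied to a simplified, hypothetical GNAP-style grant request. The schema and request below are illustrative only, not the working group's actual material.

```python
# Toy validator for a tiny subset of JSON Schema ("type", "properties",
# "required", "items"), enough to sanity-check a simplified GNAP-style
# request at runtime. Real code should use a full JSON Schema library.

def check(instance, schema, path="$"):
    errors = []
    types = schema.get("type")
    if types:
        if isinstance(types, str):
            types = [types]
        type_map = {"object": dict, "array": list, "string": str,
                    "number": (int, float), "boolean": bool}
        if not any(isinstance(instance, type_map[t]) for t in types):
            errors.append(f"{path}: expected one of {types}")
            return errors
    if isinstance(instance, dict):
        for req in schema.get("required", []):
            if req not in instance:
                errors.append(f"{path}: missing required '{req}'")
        for key, sub in schema.get("properties", {}).items():
            if key in instance:
                errors.extend(check(instance[key], sub, f"{path}.{key}"))
    if isinstance(instance, list) and "items" in schema:
        for i, item in enumerate(instance):
            errors.extend(check(item, schema["items"], f"{path}[{i}]"))
    return errors

# Illustrative (not normative) shape of a grant request.
grant_request_schema = {
    "type": "object",
    "required": ["access_token", "client"],
    "properties": {
        "access_token": {
            "type": "object",
            "required": ["access"],
            "properties": {"access": {"type": "array",
                                      "items": {"type": ["string", "object"]}}},
        },
        "client": {"type": ["string", "object"]},
    },
}

ok_request = {"access_token": {"access": ["photo-api"]}, "client": {"key": "..."}}
bad_request = {"access_token": {}}

print(check(ok_request, grant_request_schema))   # []
print(check(bad_request, grant_request_schema))  # two missing-field errors
```

The same pattern, with a real JSON Schema library, is what "verify at runtime" amounts to in practice.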
B
Commenting as an individual, and actually putting my librarian head on: the only way the IETF knows how to make material available for the long term is through published RFCs. Unfortunately, there's nothing else we can do for archival purposes.
B
So one way to do it is to simply add this material as an appendix to the document. I'm not saying we must do that, but please consider the long-term implications of having stuff on the wiki.
D
Yeah, that's definitely a consideration. One of the other things that we've considered with having it outside the document is that, as extensions are defined and so on, we would want the schema to be updated based on what's in the registry, and you can't do that without doing a whole RFC publication cycle. So I absolutely understand that, from an IETF perspective, that's the only archive that's available, but on the other hand, this is not quite the same type of document.
F
One option there might be to have a core JSON Schema for what is in the doc, and then use a vocabulary for the registry, so that part ends up being external but the core vocabulary is internal. With vocabularies in JSON Schema you can do that.
C
F
C
Question for the authors: should we try to get some expert review from... I don't know that there is a JSON directorate, or, you know, the ability to do things like... there's a way you can request expert reviews on YANG stuff and on MIBs and such. I don't know whether you can do it here, but...
D
We can ask, right? Yeah, we can ask. I mean, I figure we just call up Tim Bray and have him say it's fine, all right? But yeah, no, that is another way to approach it as well. Like Fabien mentioned, though, the JSON Schema is very lengthy and very verbose. It would add many pages to the document, which is a consideration for an already long document; not necessarily enough to definitely keep it out, but it's a consideration.
D
Okay, on to some more of the functional changes. Previously, for our key formats, we had text in there that allowed you, when you were sending keys, to put them in multiple formats, as long as they were the same key.
D
And this was apparently a very big foot-gun that we didn't realize was in the spec, which was very simply solved by saying that you can only use one key format per request, and if it's not the right format, or if it's something that the receiver can't understand, then you return an error.
D
Sorry, I forget where I was going with that sentence, but basically, when you're sending a message, you can send the key as a JWK, or you can send it as a certificate. The key format is still extensible by a registry, so if there are other future formats that people want to do, you can absolutely extend it to do that.
D
If somebody wants to do some weird multibase key string thing, you could do that. If you wanted to do keys by URI reference, which, as we've previously discussed, is not in the core document, and somebody wants to do that, we absolutely have a space where that can go. And you're allowed to have only one type of key at a time.
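Mechanically, the receiver-side check for this rule is small. A sketch, with illustrative key-format field names and a loosely GNAP-style error object (not the normative error codes):

```python
# Receiver-side sketch of the "one key format per request" rule: a key
# object must carry exactly one key-format field, and it must be one the
# receiver supports; otherwise an error is returned. The format names and
# the error shape below are illustrative, not the draft's normative ones.

KEY_FORMAT_FIELDS = {"jwk", "cert", "cert#S256"}  # registry-extensible

def check_key(key_obj, supported=frozenset({"jwk"})):
    formats = [f for f in key_obj if f in KEY_FORMAT_FIELDS]
    if len(formats) != 1:
        return {"error": "invalid_request",
                "description": "exactly one key format per request"}
    if formats[0] not in supported:
        return {"error": "invalid_request",
                "description": f"unsupported key format: {formats[0]}"}
    return None  # key format is acceptable

print(check_key({"proof": "httpsig", "jwk": {"kty": "EC"}}))          # None
print(check_key({"proof": "httpsig", "jwk": {}, "cert": "MIIB..."}))  # error object
```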
D
Key rotation is probably the biggest new feature, and this is something that we've been stewing on for a long time now. Basically, all tokens in GNAP by default are bound to keys. You can get bearer tokens, but you have to specifically ask for them. And when you have something that's bound to a key, we would like to be able to rotate the key, in case the artifact that it's bound to outlives the lifetime of the key itself.
D
So, if you're using HTTP signatures, you need a way to prove that you have access to the old key and access to the new key simultaneously, layered on top of each other, so that an attacker can't just take your existing signed message, sign it with their key, and say, "no, this is the new key."
D
"It's totally fine, trust me." The way you do that layering is a little complex, but as long as you're following the same pattern, everything works out pretty well, so each proof method defines its own way to do this. The token management API, when you call for token rotation, is where key rotation actually shows up inside GNAP, because everything in GNAP is bound to an access token of some type.
D
This sort of abstraction and simplification makes it much more consistent how you're managing key rotation throughout the protocol. So what you do is you show up and say: I want to rotate this token, and I want to bind it to this new key. We currently have a provision that you provide the previous key at the same time; we're not sure if that's actually necessary, because you should be able to look up the old key.
D
So that's kind of a maybe. Like I said, this is brand-new functionality, but we think it works out pretty well. Anyway, what you do is sign the message with the old key and the new key.
D
So for HTTP signatures, for example: HTTP Sig allows multiple signatures on a single HTTP message, so you include the new key in your rotation message, as shown down at the bottom, and then you sign the message with the old key. You're basically putting a signature around the new key using the old key, which is basically declaring, "I, the holder of the old key, am okay with this new key being part of this." But then you also have to sign that signature with the new key, to prove that you actually have access to the new key.
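The layering just described can be sketched conceptually. This is a toy illustration only: HMAC stands in for real HTTP Message Signatures, the key material is made up, and it shows the ordering of the two proofs rather than the actual wire format.

```python
# Conceptual sketch of the layered rotation proof: the rotation message,
# which carries the new public key, is signed with the old key, and that
# signature is itself covered by a signature made with the new key. An
# attacker who holds neither key cannot forge either layer.
import hmac, hashlib, json

def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

old_key, new_key = b"old-secret", b"new-secret"
body = json.dumps({"key": {"new": "public-key-goes-here"}}).encode()

sig_old = sign(old_key, body)            # old-key holder approves the body (incl. new key)
sig_new = sign(new_key, body + sig_old)  # new-key holder covers message and old signature

# The AS verifies both layers before rebinding the token:
assert hmac.compare_digest(sig_old, sign(old_key, body))
assert hmac.compare_digest(sig_new, sign(new_key, body + sig_old))
print("rotation proof verified")
```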
D
Because, again, we are assuming you are introducing a new key at runtime during this rotation mechanism. Now, there can of course be out-of-band rotation mechanisms, but you don't need a protocol to handle that; that's handled out of band. This is for the times when your client instance is saying, "I made up a new key and I want it tied to the token." So you prove that you have the old key.
D
You prove that you have the new key, and all of that wraps together, and this is a signal to the AS to say, "I need to swap out the key that's bound to this particular access token." It's a very similar thing with the JOSE methods: you put the new key in the message.
D
You sign the outer message with the old key, and then you take that signed object and wrap it in another layer of JWS, because JOSE doesn't quite do multiple signatures very cleanly, and then you send that as the signed message. There's a new JWS object type defined in here to signal that this wrapping is taking place. For MTLS, we quite frankly decided to kind of punt on this one, because there's no really good way to present multiple client certs in TLS that I'm aware of. I know...
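For the JOSE case, the same idea can be sketched with a toy compact JWS. Again this is illustrative: HMAC stands in for a real JOSE implementation, and the `typ` value is made up, not the registered one.

```python
# Conceptual sketch of the nested-JWS rotation for the JOSE-based proof
# methods: the inner JWS (signed with the old key) carries the new key;
# the outer JWS (signed with the new key) wraps the inner one whole.
import base64, hmac, hashlib, json

def b64url(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def toy_jws(key: bytes, header: dict, payload: bytes) -> str:
    signing_input = (b64url(json.dumps(header).encode()) + "." + b64url(payload)).encode()
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return signing_input.decode() + "." + b64url(sig)

old_key, new_key = b"old-secret", b"new-secret"

inner = toy_jws(old_key, {"alg": "HS256"},
                json.dumps({"key": {"new": "public-key"}}).encode())
outer = toy_jws(new_key, {"alg": "HS256", "typ": "rotation+jws"},  # illustrative typ
                inner.encode())

print(outer.count("."))  # 2: compact serialization header.payload.signature
```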
D
ACME is working on client certificate identity extensions that might be able to fit into this, but we have not had the TLS expertise to be able to define this. So for now the core says that doing a dynamic rotation for TLS is out of scope, because in a lot of cases you're going to be using some type of certificate management system to do that rotation in your infrastructure anyway. And if you're not, you know, come make an extension that says how to do that at runtime.
D
We set out to do that during this round, and we realized that we were actually missing a core bit of signaling in the protocol. We had a way for the client to say, "this is who I think is here right now."
D
This is who I think the identity of the end user using the client software is. And we had a way for the client to ask for specific kinds of information about the resource owner. But we did not have a way, as it turned out, for the client to identify who it thought the resource owner was, especially in cases where that person was different from the end user.
D
At the client instance, the classical case for this comes from OpenID Connect CIBA, where you have somebody with an authorized piece of software... sorry, the resource owner is the person calling into the call center, and they have an onboarded, official piece of software that the AS can talk to them through, and then the end user is the person in the call center that's making the request.
D
So now we have a way to very simply signal, using structures that are parallel to the rest of the protocol: "hi, this is who I think is here right now, and this is who I want information about; so, AS, go find and ask that person if it's okay for this other person to know who they are." It allows us to very explicitly separate those two parts of the protocol, so it was an additional bit of syntax and an additional bit of semantics.
D
We also added several examples, including the call center example, to the core document that talk about how this information flows and what an AS would be expected to do in this case. Some of this adds on to the existing discussion we already had about asynchronous authorization; many of those use cases came from the UMA work in the UMA working group.
D
Yeah, so subject is who we're asking about, and user is who we think is there.
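As a rough illustration of that split, a grant request can carry both fields. This fragment is simplified and illustrative: the field names follow the draft at the time, and the identifier values are made up.

```python
# Illustrative (not normative) GNAP request fragment for the call-center
# case: "user" says who the client believes is present right now, while
# "subject" says whose identity information is being requested.
import json

grant_request = {
    "access_token": {"access": ["account-api"]},
    "client": "registered-client-ref",
    "user": {  # who the client thinks is here: the call-center agent
        "sub_ids": [{"format": "opaque", "id": "call-center-agent-42"}]
    },
    "subject": {  # whose information is being asked about: the caller
        "sub_id_formats": ["email"]
    },
}

print(json.dumps(grant_request, indent=2))
```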
B
D
Right, yeah, no, I agree. I'm not sure what better terms would be; I'd be open to bikeshedding some of these terms even at this point. Subject was named "subject" because originally that was where you were putting information about what you're requesting, the subject information that you're requesting. So that's also where you put the subject identifier formats and the assertion formats, and things like that, in your request.
D
C
I was just going to say that it's actually worth noting that a lot of, like, cross-media phishing attacks actually use this particular capability of some authentication systems.
G
C
Right. So this is potentially a gun aimed at both of your feet, and so I think, if this actually goes in there, and this is speaking as an individual, I think it has to be accompanied by something in the security section.
C
But, you know, this is extremely dangerous, and people actually get... this is like a major source of fraud in many jurisdictions.
D
Yeah, I agree. We've got some text in there that we did add, but it could probably stand to be expanded. Fundamentally, from the protocol's perspective, this capability is indistinguishable from a phishing attack, exactly because you're asking for information about somebody else.
D
What this syntax allows us to do is at least detect that this is what's happening, that this is intentionally what's happening, and you can put up enough barriers to make it less likely for that to actually be propagated as a phishing attack. So the AS, for example, can say: you're asking for subject information with no interaction and a different user identifier?
D
Well, you'd better be a pre-registered client with a certificate that I issued, otherwise I'm not going to touch this one. So that's the call center use case: we've pre-provisioned all of these client instances in our call center, and we know that this software is authenticated to a specific user who's allowed to ask for this kind of thing. So it's not perfect, but yeah.
D
Okay, so there were a bunch of small cleanups in the syntax, where we had things that were kind of awkwardly defined, as in "it's an object, but sometimes that object isn't an object," and stuff like that. We took a step back and realized that we can drastically simplify this by basically taking a page out of JSON Schema and similar things and just declaring things in terms of type.
D
So, for example, the proofing methods can now be sent as either an object or as a string, except that those need to be defined separately. The object version has a required field named "method", which is a string that comes from a registry, and then it has required parameters, for example for the algorithm and the content digest algorithm.
D
The string version has to define everything, including all parameters, because there's no space to put parameters; it's just a string. So, instead of you having to jump through some hoops, like "oh, you just put this into the method field and then you default a couple of fields," it is instead just very explicitly defined: you do this in this way, and that is the only way that it works. We did this for access rights previously.
D
The key proof methods, interaction start methods, and error responses all basically use the same kind of pattern throughout the protocol.
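A sketch of what consuming that string-or-object pattern looks like on the receiving side; the method name and parameter names here are illustrative:

```python
# Handling the string-or-object pattern used for key proofing methods,
# interaction start methods, and errors: the string form is fully
# specified by its registered name alone (no parameters possible), while
# the object form names the method and carries its required parameters.

def classify_proof(proof):
    if isinstance(proof, str):
        # string form: every parameter is fixed by the method's definition
        return ("string", proof)
    if isinstance(proof, dict):
        if "method" not in proof:
            raise ValueError("object form requires a 'method' field")
        return ("object", proof["method"])
    raise TypeError("proof must be a string or an object")

print(classify_proof("httpsig"))
print(classify_proof({"method": "httpsig",
                      "alg": "ecdsa-p256-sha256",     # illustrative parameter
                      "content-digest": "sha-256"}))  # illustrative parameter
```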
D
All right. We have a lot of extended security considerations. Specifically, we wanted to call out referencing the BCPs for key management, and also some informative references on how to manage things in TLS. I'll let you go read those, and in particular the TLS and UTA experts in the room should please go read those bits, to make sure that we actually are referencing those appropriately. And there are other security considerations that were added in there.
D
We clarified the role of class_id in the system. We didn't really change what it is or how it works; we just clarified what it means. This is something that is decided by the AS; it has to be known to the AS, and it could be pre-registered, or it could be issued by a third party.
D
It's really just a hint to the AS that this software is claiming to be part of this class. If you want a stronger attestation of membership in that class, so you know that the attribute actually applies to this software, then you need software attestations, which are a layer above and beyond what a simple class_id string can give you.
D
So we clarified that this is a hint, just like anything else that the client might dynamically send, but the text in there now says a little bit more about how to apply that hint, how to use that hint.
D
Oh, and this is the kind of thing that, in some cases, is really going to act like a browser user-agent string. So it's just going to be kind of hard-coded into, say, a set-top box, and say "I'm this piece of hardware with this version," so you can really trust it about that much without other things strapped to it.
E
So, as you were saying in the introduction, we've got some work that has been done on the interop profiles, especially on the mandatory-to-implement parts. So here it is again: guidance for implementers, since we're at the state where it becomes stable enough to actually implement it. You've got two main profiles: the first one is a web profile, the other one is a secondary-device profile. Basically you'll see that most of the methods are actually similar. The main difference, obviously, is that in some cases on the web profile you actually redirect during your interaction, while for a secondary device your interaction is actually based on a push, so you're going to use either a user code or a user code URI, which was discussed, I think, at the previous IETF meeting.
E
So it's basically a different interaction; you've got both cases, which are to be implemented. Then, based on that, we actually just define the minimal spec that needs to be implemented so that clients and authorization servers can actually talk on a minimal basis. That concerns everything related to key proofing, the different hash algorithms, the different signature algorithms, and also the types of subject identifiers. So, for instance, to take this one:
E
We could have said, well, go implement the entire SecEvent spec, and you need to actually provide email, and you need to also support DIDs, and so on. But obviously this comes at a bigger expense for implementers. So we wanted to have something which is common to all implementations of GNAP, and we think that, for instance, the opaque version of it makes sense, to be able to communicate between the systems; same for the assertion formats.
E
So basically, that's really a way to make sure that we can actually understand each other. And the last part is the key proofing method, and we suggest that we go for HTTP Sig, which is clearly the most advanced way of making sure that our key proofing is actually implemented correctly.
E
The only downside of that, which was discussed at that point, is that obviously it's a new method; it's something new for people to implement. But we also think there's a significant upside in making sure that it can be implemented. For instance, with what Justin described earlier for MTLS, you've got additional infrastructure to put in place. Here it's different: you can base it on your HTTP stack only, and that's something we think is worthwhile.
E
D
Right, and I wanted to say one more thing about the proofing method, because this was brought up in discussion: even though JOSE is itself very well established, applying JOSE to an HTTP request in a robust way is not well established. There's a lot of stuff that GNAP had to invent and sort of copy to get that to fit, because JOSE and HTTP aren't the best of friends. So, bringing that up.
D
F
E
So we actually expect more in the coming weeks; just for myself, I'm actually working on my Rust implementation. But we'd like to thank all the people that have actually committed to implementing GNAP, or at least a version of GNAP, and you see that there's already quite a bunch of them, either on the client or on the server side, and you see that there's also a variety of languages out there: you've got JavaScript, Rust, Java, Python, PHP.
E
We do see that it's actually implementable in pretty much every situation that's common today.
H
E
That's really what we want to do. Obviously we are quite aware there's more to be done on the interop side, and we'd like to actually get more. So if you've got your implementation, please let us know and we'll add that to the draft.
D
Right, and I just want to add that these are the publicly available implementations. There are a couple of other implementations floating around out there that are part of proprietary products. So SecureKey, which is now Avast, which is now Norton... did I get that right? Yeah, okay, all right. What, now Gen? That's the first I've heard of that. So, whatever this company is called now: the verified.me product.
D
The verified.me product actually is also an implementation of GNAP, but that's sort of internal proprietary code, and so I couldn't point to it here quite as easily. I don't know if we're actually supposed to include stuff like that in the implementation list.
D
All right, so we'll add in the verified.me stuff then as well, and, you know, if there are any others, then yeah, please get in touch.
B
And one more thing, going back to the previous slide: do we have an actual security or cryptography justification for why we're using two different hash algorithms, or is it just historic? And if it is, why don't we make it SHA-256 for both?
D
It's historic. I just kind of picked SHA-3 the first time, because three is bigger than two, and so it must be better. I picked that, like, three years ago, and only one person complained, and that's why we made SHA-2 an option, and that's where we are. So if we wanted to make these both SHA-256, I honestly think that's fine; given that the lifetime of the interaction hash is very short, there's not a good reason to do the additional level, so I'm actually fine with that.
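For reference, the interaction hash being discussed is computed roughly like this with SHA-256. The nonce and reference values below are made up, and the exact input ordering should be checked against the draft; this is a sketch, not the normative definition.

```python
# Sketch of a GNAP-style interaction hash: the client nonce, AS nonce,
# interaction reference, and grant endpoint URL are joined by newlines,
# hashed, then base64url-encoded without padding.
import base64, hashlib

def interact_hash(client_nonce, as_nonce, interact_ref, grant_url):
    data = "\n".join([client_nonce, as_nonce, interact_ref, grant_url]).encode()
    return base64.urlsafe_b64encode(hashlib.sha256(data).digest()).rstrip(b"=").decode()

h = interact_hash("VJLO6A4CAYLBXHTR0KRO",          # made-up client nonce
                  "MBDOFXG4Y5CVJCX821LH",          # made-up AS nonce
                  "4IFWWIKYBC2PQ6U56NL1",          # made-up interact_ref
                  "https://server.example.com/tx") # hypothetical grant endpoint
print(h)
```

Because the hash only lives for the duration of one interaction, as noted above, the strength difference between SHA-2 and SHA-3 at 256 bits is not the deciding factor.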
D
Yeah, so, I know, I just remembered I should put Dmitri's implementation on this list; I totally forgot to do that. Because I know he was the one that actually asked for SHA-256, because he was doing a pure in-browser implementation and, at the time, didn't have SHA-3 in WebCrypto.
D
So yeah, I'm fine with that. If anybody else has feedback on choosing the hash algorithms, please let us know. All right.
D
So we only have a couple of open issues on the core document now, at least as far as I know; as of this morning, it should only be two. During the key rotation work, somebody pointed out that we forgot to add new errors and forgot to mention this in discovery. Absolutely true, we forgot to do that, so that'll get added in. It's fairly mechanical; it's just getting stuff into the right places.
D
You know, not really a big deal, just an oversight on the editors' part. There's a second, more in-depth issue from some researchers, actually building on Florian's graduate thesis as it turns out. They pointed out, rightfully so, that GNAP is not entirely clear that the subject identifiers coming back are scoped to the AS that is releasing them.
D
That is, even when it's a global identifier like an email address. Which is to say that I can stand up an AS that says that, you know, I am Yaron, and I can have it spit out Yaron's email address for my account; but you need to be able to trust that AS for that specific account in order for that to actually make any sense.
D
This is completely enshrined in NIST 800-63C, which has extensive discussion about this, which we can reference directly here: pretty much, if you're getting something back from an AS, you can only believe it as far as "that AS told me this"; you can't believe it beyond that. There are no globally universal identifiers. And no, DIDs don't count, Justin.
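On the client side, that scoping rule amounts to keying user records on the pair of asserting AS and identifier, rather than on the bare identifier. A minimal sketch of that bookkeeping, with made-up AS URIs:

```python
# Sketch of scoping subject identifiers to the AS that asserted them: the
# client keys its records on (AS, identifier), so "email X as asserted by
# AS A" never collides with "email X as asserted by AS B". Illustrative
# client-side bookkeeping only, not protocol syntax.

accounts = {}

def record_subject(as_uri, sub_id):
    # the identifier only means anything relative to the asserting AS
    key = (as_uri, sub_id["format"], sub_id.get("email") or sub_id.get("id"))
    accounts[key] = {"asserted_by": as_uri, "sub_id": sub_id}
    return key

a = record_subject("https://as-a.example", {"format": "email", "email": "user@example.com"})
b = record_subject("https://as-b.example", {"format": "email", "email": "user@example.com"})
print(a == b)  # False: same email, different asserting AS, different record
```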
C
D
C
D
Yep, absolutely, yeah. This is not a new issue. We have some text that already says this, but it's not that strong, and it's not a normative requirement for the client instance to process this with a particular mindset. And so that's why the researchers, in doing their formal modeling, were like: you know, you could run away and get all kinds of users confused if you didn't do this. So yeah, fair point. All right, I think this is our last slide, yeah.
D
Yeah, hold off on this. All right, we will absolutely break here.
G
B
D
G
B
A
Okay, great. So I don't know exactly how much time I have.
A
Okay, that should be plenty; I have five slides. Hopefully I can go through them in less than half that time. So, let's see, how do I present?
A
And share the screen.
H
A
Okay, are you seeing the slide?
C
A
A miracle, amazing, okay. So, slide two is a summary of what my point is: that we need a separate human rights considerations section, and that some aspects of it might need to be normative in the protocol. The reason is that privacy by itself does not recognize the asymmetry of power between the resource owner, typically an individual, and the institutions that are playing the other roles.
A
The concept of free association is discussed at length in the HRPC documents, and how it applies to ASes and RSes and clients is, again, not a discussion we have time for today, necessarily, except during the questions, of course.
A
We now have to consider, in the HRPC context of forced association, what it is that we are doing and how it can be mitigated. The problem, without diving into it, I think is pretty well understood at this point: forced association to the authorization server in OAuth has a lot of convenience features, but it also comes with scaling problems that cause regulatory issues in the human rights or privacy context. And so my recommendation is that the resource owner must be allowed to state who their preferred authorization server is. Slide three is a summary of the privacy interests that have been mentioned in various issues for the past... however long I've been around. I'm not going to go through them.
A
Obviously, the top issue is who gets to choose the AS. The resource server wants to limit their liability, and the resource owner wants to limit both their lock-in and their various risks of policy surveillance, traffic analysis, spam, etc. And the requesting party, as has been discussed in some issues, wants data minimization, basically, and the ability to choose their client.
A
That also has issues. My point is that one of these needs to be a MUST or a SHOULD. And then, finally, we have some links around this argument that people can access; they're obvious. There's the PR itself and the HRPC document, which I guess will be discussed tomorrow at exactly the same time, and so I intend to use these slides tomorrow as well. And that's it.
C
All right, thank you, Adrian. So can we get a show of hands, here and in the room: who has reviewed the PR and looked at the proposed text? Justin, anybody else? And if anybody online has done that, maybe say so. Fabien has reviewed it, he says on chat. All right, so I think we actually need a little bit more review on this to act on it.
C
But I don't know if any of the core editors could step up and give your impressions and sort of feedback.
D
Hi, Justin Richer, speaking as editor. One of the problems with the text as proposed is that it places requirements on GNAP as a specification and not on implementers of GNAP.
D
It brings up a lot of very important things that should be considered during deployments, but it needs to be written such that somebody making a deployment decision knows the trade-offs, from the human rights perspective, of doing different things. What are the trade-offs, for example, of allowing an end user to specify which AS is associated with which RS? There are very clear security trade-offs for doing that.
D
Fabien has moved a trimmed-down and edited version of this to the RS draft, because the text was mostly about the association between RS and AS, and not about the client-to-AS and client-to-RS connections that the core document is focused on.
B
Speaking as an individual, I haven't read the PR, so the PR as-is may not be appropriate, but I think requiring that the spec provide capabilities that allow good implementations makes total sense. So I would encourage the editors to look at Adrian's three options and see which of them can be reasonably implemented within the current framework, or maybe we should add something to the framework to allow them. Having said that, anything that involves implementers, in my opinion, again as an individual, should be guidelines, trade-offs, and considerations rather than normative text.
E
To what Justin was saying originally: well, first, I want to thank Adrian for the contribution, because I think it's an important topic, and I'm going to be able to participate tomorrow at the HRPC meeting as well. So that's the first thing. The second thing is that initially there was a PR on the core draft, and I actually moved it to the RS draft. So I think the first discussion to have is what the right place for that specific text is.
E
I do think the RS draft is a better place, because it's really where we've got the requirements on the RS part, and it's also where the actual mitigations can take place, such as integration for the tokens and things like that. Now, that said, I think there's remaining work to be done, and we are very open to getting more feedback.
F
E
The actual way to do that, from a practical perspective, really depends on the situation, and that's why, so far, at least in the part I've put in, there's no SHOULD and no MUST: before doing that, we'd like to at least be sure that something can be done in practice. But that's it. Thank you.
A
Okay, for my part, first of all, I want to thank the editors and the chairs for taking this seriously, and point out that hopefully tomorrow we will see whether GNAP is really the ideal example of a protocol that HRPC and the IETF wanted to address by whatever process they had in mind.
A
To the extent that it's optional, it does not happen, and the bad thing about that is that regulatory capture in other contexts, outside of our context here, basically denies regulators an opportunity to fix this hyperscaling problem on platforms. So what I'm looking for is, of course, the advice, as was said, of the editors as to whether this can be done and how it can be done.
A
And then we go on from there. So number one is the anticipated discussion as to whether this is possible and how, and then what is the process that the group will use to determine whether it should be a MUST or a SHOULD. Thank you.
C
To make it clear, I'm speaking as an individual. So, a couple of comments. Adrian, your point is well made. Maybe it's worth repeating that in society as a whole, free association also comes with costs, right? But we have often chosen, in society, to optimize for free association and human rights
C
Over
security
considerations,
I
mean
there
are
security
considerations
with
with
free
association
and
some
in
some
cases,
Regulators
actually
come
down
on
the
on
the
side
of
sort
of
against
free
association,
but
in
in
technology
we've
kind
of
defaulted,
the
other
way
because
in
in
large
part
we
we're
allowing,
as
you
put
it,
hyperscaler
to
to
kind
of
govern
what
we
do
and
I
think.
C
The
the
the
way
to
deal
with
this
in
or
at
least
in
this
technology
stack,
is
to
make
sure
that
Regulators
that
let's
flip
this
around
to
make
sure
that
technology
providers
cannot
hide
behind
the
lack
of
technical
capability
to
do
the
right
thing
and
if
the
regular
is
just
Regulators
choose
to
to
point
at
a
particular
instance
of
deployment
of
technology
and
say
this
in
this
particular
instance,
we're
going
to
optimize
for
free
association
over
security
and
the
technology
itself
shouldn't
sort
of
place
restrictions.
C
The
capability
should
be
there
and
I
think
for
for
gnap.
That
means
that
sort
of
the
the
field
should
be
there,
but
they
don't
necessarily
have
to
be
judicious
use
of
of
sort
of
normative.
Language
will
be
enough
and
we
should
think
about
sort
of
the
situations
where
Regulators
will
come
in
and
say
no
no
in
this
particular
instance.
C
You
know
the
deployment
of
gnat
must
follow
this
particular
model
and
must
be
a
must
allow
for
for
the
resource
owner
to
to
control
their
as,
for
instance,
if
that's
what
what
kind
of
this
this
comes
down
to
to
so
so
I
think.
C
Let's
think
carefully
about
sort
of
what
normative
language
we
put
there
to
make
sure
that
we
give
Regulators
kind
of
Maximum
opportunity
to
to
do
their
job,
which
is
kind
of
to
govern
the
deployment
of
technology
and
situations
where,
where
there
is
sort
of
where
human
rights
considerations
really
actually
matter,
all
right,
that's
it.
D
Justin Richer, this time speaking as an individual, not as an editor. First off, again echoing Fabien in thanking Adrian for bringing this in: I do think that this is important work. You know, I remember a time in the IETF when we didn't have privacy considerations sections in our documents. We do now; they're required, and that was a hard shift in thinking for a lot of technology
D
Nerds
that
just
wanted
to
go
build
protocols
for
things
to
talk
to
each
other,
so
I
get
that
this
is
that
this
is
a
hard
shift
for
for
a
lot
of
folks.
All
that
said,
I
think
that
the
considerations
being
raised
around
free
association
and
the
trade-offs
they're
in
while
those
are
very
very
important.
We
also
need
to
keep
in
mind
that
there's
more
than
one
solution
to
providing
this.
D
So,
for
example,
one
of
the
things
that
gnap
has
had
as
a
fundamental
pillar
from
the
beginning
is
the
ability
to
get
more
types
of
interaction,
more
types
of
information,
more
types
of
things
expressed
to
the
as
by
different
parties.
That
would
allow
for
a
resource
owner
to
externalize
their
policies
and
express
them
to
the,
as
which
then
turns
that
into
an
access
token,
without
the
RS
and
as
needing
to
have
a
loose
coupling
between
each
other.
D
Where this really starts to come into play is what the AS knows and what the AS can do. We're no longer in the OAuth 2 world of assuming that somebody has an account that they're logging into at the AS to execute something; we can go far beyond that. And that fits with what the editors have presented many times here: the AS as a token factory. That is our model. This factory can take in multiple kinds of inputs. It can take in verifiable credentials.
D
It can take in DID documents; it can take in XACML, if you want it to. All of those would be extensions to GNAP, protocols built on top of and around GNAP, to provide those things to the AS in order for them to be translated to the RS. And in doing that, you could provide the same kinds of human rights considerations, the same types of blinding that are talked about in the PR, without having to dissociate the AS and RS in the way that is proposed in Adrian's text.
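As a rough illustration of the token-factory model described above, here is a sketch of the JSON body a client instance might POST to the AS's grant endpoint. The field names follow the GNAP core draft, but the access type, actions, and key values are illustrative placeholders, not values defined by the spec, and no key proofing is shown.

```python
# Sketch of a GNAP grant request body, assuming the top-level field
# names from the GNAP core draft ("access_token", "client", "interact").
# All concrete values below are hypothetical examples.
import json

def build_grant_request(access_type: str, actions: list, client_key: dict) -> dict:
    """Assemble the JSON body a client instance would POST to the AS."""
    return {
        # What the client asks the "token factory" to mint.
        "access_token": {
            "access": [{"type": access_type, "actions": actions}]
        },
        # How the client instance identifies and proves itself.
        "client": {"key": client_key},
        # How the AS may interact with the end user, if it needs to.
        "interact": {"start": ["redirect"]},
    }

request_body = build_grant_request(
    "photo-api",                                   # hypothetical access type
    ["read"],
    {"proof": "httpsig", "kid": "client-key-1"},   # hypothetical key reference
)
print(json.dumps(request_body, indent=2))
```

The point of the shape, as discussed in the session, is that the `access_token` request and any extension inputs all flow to the AS, which does the translation into a token; the RS never needs to see any of it.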
D
So I want to be very, very sure that we do not limit ourselves, and limit, as they put it, the guidance to regulators, by saying: only choose this, only choose bring-your-own-AS, as the only option in order to fulfill this. What we really want is a system that allows these types of human rights pieces to be addressed, and guidance on ways that that can be accomplished, not necessarily a single way to do it.
F
This
isn't
a
space
I've
spent
a
lot
of
time
in,
but
the
concern
I
have
is
you
can
solve
this
at
a
technology
problem
and
make
the
AIS
independent,
but
resource
servers
make
a
commitment
to
the
resource
owners
to
protect
that
data
to
a
certain
level,
and
it's
going
to
be
very
difficult
for
resource
servers
to
keep
those
commitments
if
they
don't
have
control,
or
they
don't
have
trust
of
that,
the
authorization
server
and
so
I,
even
if
it
not
made
it
possible
I'm,
not
quite
sure
whether
resource
servers
would
be
able
to
continue
making
those
commitments
to
the
resource
owners
on
protecting
the
data.
C
I
I
to
that
point,
I
I,
that's
also
actually
turning
into
a
regulatory
thing,
I
mean
we're
we're
seeing
the
EU
regulating
sort
of
platform
providers
to
force
them
to
provide
sort
of
externalized
mechanisms
for
for
data
access,
etc,
etc.
So,
again
it's
it's
kind
of
it's,
not
yes,
you're
right,
right
to
a
certain
extent,
you
have
to
be
able
to
sort
of
trust
all
the
components
in
your
system,
but
again
it's
all
of
that
is
up
to
a
point.
You
know.
C
Trust
is
not
absolute
and
in
the
you
know,
in
in
a
situation
where
you,
actually,
you
have
Regulators
saying
no,
no
you're.
You
actually
need
to
be
able
to
provide
some
externalized
mechanism
for
for
injecting
policy
into
your
your
own
resources,
which
is
happening
today
right
that
kind
of
Regulation
I
I.
What
I?
Actually
you
know
the
the
trust
issues
may
not
actually
be
that
absolute
right
as
as
a
as
a
resource
owner
as
a
resource
server,
you
may
actually
have
to
comply
with
legislation
saying
all
right.
C
Here's
a
set
of
trusted
independently
provided
resource
servers.
You
know,
choose
one
right,
that's
another
sort
of
mechanism
for
doing
that,
and
what
I
actually
came
up
to
this
comment
was
Justin
and
I.
You
know,
yes,
gnat,
provides
a
bunch
of
open
a
certain
amount
of
openness.
C
So again, I'm all for providing this. I think the approach would be to provide the maximum amount of technological freedom in implementations, but then allow the regulators to go and tell the resource servers and the authorization servers: here's how you must behave in this particular context. That should be possible, and we don't want the resource owners or the authorization servers to come back to us and say, no, we can't do it, because it's not in the spec.
D
The abstraction of those two components is a technological abstraction, not an ideological one. The authorization server is the component that provides an artifact for access, the access token, and it is the part of the resource-protecting ecosystem that does the translation of policy and identity and context into that access token. So I think a lot of this is us getting hung up on the resource server
D
Is
this
independent
thing
that
somebody
actually
has
an
account
on
and
they
stored
their
data
on
and
that's
where
they
set
their
policies
and
the
authorization
server
is
something
that's
coming
in
and
trying
to
like?
You
know
wedge
that
open,
whereas
what's
really
the
reality
out
there
is
that
what
people
think
of
as
the
protected
resource
at
large
is
some
combination
of
authorization,
server
and
resource
server
in
oauth
one.
We
called
it
just
the
server.
It
was
one
component
in
oauth
2.
D
We allowed those to be abstracted, not for ideological reasons but for technological reasons: people were building distributed systems, and we wanted standardized ways to connect those on the wire. GNAP as a protocol allows those two to be loosely associated, but also allows the AS to take in more information, like I was saying before, so that you can draw that line in multiple ways, depending on what makes the most sense for your use case, and if we can provide better guidance about what it means
B
A
No, thank you very much. I think this moves to a process question for HRPC tomorrow, and then back to the editors in this group.
D
Just
put
it
up
on
the
last
slide:
that's
pretty
much
all
we
got
all
right
so
well,
that's!
Well!
That's
popping
up!
Basically,
the
question
is:
what
do
we
do
now?
D
At this point, the editors think, you know, it's probably time to ship it. It's probably not perfect; it would have been nice to have had more eyes and more things involved here, more features and things like that. But what we have is pretty solid, and so our proposal as editors is that we take the core document and we move it into working group
D
Last call. Fabien and I started looking through the document, and, you know, it could use a little bit of editorial dusting on the text, but we think that conceptually the core document, modulo the missing error messages that we already know about, is ready for a set of last call reviews. So that's what we would like to do now.
D
So
we
are
actively
seeking
feedback
on
that
decision
from
the
working
group
and
if
anybody
wants
to
commit
to
doing
a
last
call
review,
that
would
also
be
very,
very
good.
A
Thank you. How would this impact the process by which something might be normatively changed due to human rights considerations?
C
I
mean,
regardless
of
where
that
discussion,
lands,
I,
I,
think
we're
I,
think
we
we
can
deal
with
it
during
working
group.
Last
call
I
think
what
this
is
not
uncommon
for
working
groups.
That's
kind
of
that
has
a
lot
of
sort
of
churn
and
limited
resources
that
you
kind
of
need
to
push
it
into
a
working,
Loop
class
called
process
in
order
to
get
people
to
pay
attention
at
the
last
minute.
So
even
if
it
winds
up
being
two
working
group
last
calls
so
be
it
right.
C
Yeah,
it's
better
to
kind
of
start
working
on
that
and
then
commit
to
being
done.
I
think
I,
I
wouldn't
go
so
far
as
to
say
that
there
will
be
absolutely
we're
sure,
there's
not
going
to
be
any
normative
language
in
there.
I
I
think
it's
too
soon
to
tell,
but
I
would
encourage,
like
everybody
to
kind
of
just
Sprint
to
the
Finish.
D
Right, and to be clear, what I said, in case that precision was lost, was that the editors do not anticipate any normative changes. There might be, but considering what we've seen and what we've read in the conversations we've had so far, we don't expect there to be any. So that's that. But yes, I agree with Leif that you can change things during last call.
F
B
Thank you. And for those in the room and those who are not: people have implemented this protocol.
B
D
All right, the next part of this is that, of course, a while ago we split GNAP into two main documents. The resource server draft deals with the association. Oh, sorry.
B
D
Let us get the errors and discovery bits for the open issues that are in there. I don't think that's going to take very long; I may even be able to do it on the plane. But let us do that, do one quick revision, and then I think we go from there. That's my take on it. Fabien, what do you think?
E
Yes, same. I think we need to get through it one more time, just to make sure that everything is in good shape and easy to read for people, and then we can achieve that. So I would say next week; it should be fine.
D
So that'll end up being draft 12, then. All right, excellent. So, the resource server draft. This deals with the association between resource servers and authorization servers. This is where we have things like the models of access tokens, and sort of what these things mean and how you communicate that. This hasn't really received a lot of energy and a lot of attention. This is an open call to the working group.
C
D
All right, fantastic. The state of the RS draft is really going to be dependent on the kind of input that we get. As it is right now, it's a very, very drafty draft, but we think there's potential to quickly bring it up in quality and also get it over the line.
D
Ideally,
we
would
love
to
have
the
RS
draft
in
a
state
similar
to
what
core
is
today
by
ietf
116
Yokohama,
it's
a
little
aggressive
but
I
honestly
think
it's
possible.
It's
a
shorter
draft.
There's
there's
fewer
moving
Parts,
but
there's
Le
there's
more
unknown
in
there
right
now.
So
that's
that's
why
we
need
more
eyes
and
more
hands.
F
Clarifying
question:
isn't
the
the
this
resource
server
drafted
a
fairly
critical
part
of
actually
being
able
to
allow
people
to
do
this
interoperable
as
and
the
fact
that
you
could
that
they
can
discover?
As
capabilities
is
one
of
the
key
parts
of
achieving
this
human
rights
requirement?.
D
For the human rights side, yes, because that's fundamentally about the association between the RS and the AS; so yes, for that aspect. That's why Fabien moved Adrian's PR with that considerations section there. So, for clarification, since I know you said you're new to this space: the core draft is really focused on the client instance, and how that talks to the authorization server, and how that talks to the resource server. It's a very client-focused draft about how you make those connections.
D
The
core
draft
is
intentionally
silent
on
how
you
connect
the
RS
and
as
together,
because
by
All
rights
they
could
literally
be
in
the
same
box
and
reading
from
the
same
database,
in
which
case
you
don't
need
any
interoperable
way
to
connect
them.
They
just
they're
just
on
some
internal
backplane
and
and
that's
how
a
lot
of
systems
actually
work.
D
Modern
distributed
systems,
though,
need
to
have
ways
to
communicate
across
different
nodes,
and
so
that's
where
you
get
token
introspection,
you
get
structured
tokens
and
that's
where
the
common
token
models
and
things
like
that
and
the
discovery
of
as
capabilities
and
stuff
that
the
client
never
ever
sees
or
cares
about.
That's
where
that
is,
and
so
what
we
did
a
while
ago
was
take
those
considerations
which
were
all
originally
just
dumped
into
one
big
document.
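The token introspection mentioned above can be sketched roughly as follows: the RS sends the token value to the AS, and the AS answers whether the token is active and what rights it carries. This is only an illustration of the request/response shape discussed for the RS draft; the token value, the in-memory token store, and the lack of HTTP transport and key proofing are all simplifications.

```python
# Sketch of GNAP-style token introspection between an RS and an AS.
# A real RS would POST the request body to the AS's introspection
# endpoint over HTTP with key proofing; here the AS side is a plain
# function over a toy token store (illustrative values only).

AS_TOKEN_DB = {
    "80UPRY5NM33OMUKMKSKU": {  # hypothetical token value
        "active": True,
        "access": [{"type": "photo-api", "actions": ["read"]}],
    }
}

def introspection_request(token_value: str, proof_method: str) -> dict:
    """Body the RS would send to the AS's introspection endpoint."""
    return {"access_token": token_value, "proof": proof_method}

def introspect(request: dict) -> dict:
    """The AS's answer: token status, plus its rights if it is active."""
    record = AS_TOKEN_DB.get(request["access_token"])
    if record is None or not record["active"]:
        return {"active": False}
    return {"active": True, "access": record["access"]}

response = introspect(introspection_request("80UPRY5NM33OMUKMKSKU", "httpsig"))
print(response)
```

The design point this illustrates is the one made in the session: the client never sees this exchange, so the RS-to-AS connection can be standardized (or replaced by a shared database) without changing the client-facing core protocol.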
D
Yeah, that's all we had. Please read the new draft; there have been quite a few changes in the actual text, and look for the new draft, hopefully in about a week's time.
B
Okay, Justin, I think we have a plan for the time period between now and Yokohama. We do expect to meet in Yokohama.
B
Thank you, Leif, and thanks, everyone.