From YouTube: OAUTH WG Interim Meeting, 2020-05-04
C
Alright, so I'm going to discuss DPoP a little bit here today — a little bit of an overview, some open issues, and so forth. DPoP is roughly short for Demonstrating Proof-of-Possession. As everyone does, I'd like to throw in a few photographs in my presentations; unfortunately, we didn't make it here, but I can still appreciate the city. So, to get started, a quick refresher — kind of an introduction to DPoP.
C
The idea is a — newish, anyway — simple and concise approach to proof-of-possession for OAuth access and refresh tokens, doing so at the application level and using existing JWT library support. The thinking is that it's something people can actually implement and deploy in a relatively simple fashion. It's not fancy, but the idea is to be something that regular developers can actually do.
C
The key takeaway here, I think, is that the DPoP proof mechanism — which is what's used to say what public key the client holds, and give some level of proof that it holds the corresponding private key — is the same for both the token endpoint access token request and any subsequent resource access. The DPoP proof itself is a JWT; it looks a little bit like this, exploded out or unencoded. It has a type.
C
It's explicitly typed, and asymmetric signatures are the only ones that are supported for this. The jwk header of the proof contains the public key for which possession is being demonstrated by the client, and then it's a JWT with some minimal claims: information about the HTTP request — basically the method and the URI of the request, omitting the fragment or query parameters; a unique identifier in there, which we're currently using for some replay checking; and a timestamp, with some conditions around checking it.
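A rough sketch of the proof structure just described (illustrative only, not from the talk — the key values are placeholders, and the final JWS signing step with the client's private key, which a real proof requires via an asymmetric algorithm like ES256 and a JOSE library, is omitted):

```python
import base64
import json
import secrets
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWS requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_dpop_proof_parts(jwk: dict, method: str, uri: str) -> str:
    """Build the (unsigned) header.payload portion of a DPoP proof JWT.

    A real proof appends a JWS signature made with the private key
    corresponding to `jwk`; symmetric algorithms are not allowed.
    """
    header = {
        "typ": "dpop+jwt",  # explicit typing of the proof
        "alg": "ES256",     # an asymmetric signature algorithm
        "jwk": jwk,         # public key whose possession is demonstrated
    }
    claims = {
        "htm": method,                     # HTTP method of the request
        "htu": uri,                        # request URI, no query/fragment
        "jti": secrets.token_urlsafe(16),  # unique id, for replay checking
        "iat": int(time.time()),           # creation timestamp
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
```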
C
The proof should only be accepted for some small, reasonable time window relative to that creation time. An access token request — that is, a request to the token endpoint — will have the DPoP proof sent as a header, and it'll be the same for both types of requests. But here you see it as a DPoP header, with the encoded JWT included directly there, and that, again, is proving possession — to some extent — of the private key corresponding to the public key.
C
Presuming all that checks out, the access token is issued like normal, but the access token itself is bound to the public key that was in that DPoP header, and then we use the token type here to indicate to the client that this access token — and only the access token is indicated by this — is bound to the DPoP key. It looks something like this, for both JWT-based tokens and tokens that are introspected.
C
We defined jkt as a SHA-256 JWK thumbprint of the public key, and that indicates — and does — the binding of this particular access token to the public key that the client is asserting through the proof. Obviously this could be done internally too, if it's a database lookup or something, but we've concentrated on defining the piece that's needed for interoperability here, which would be in a JWT or the introspection response. And this works similarly to other types of access token binding that we're doing today, such as the MTLS stuff that was recently published as an RFC. On a protected resource request, then, we have sort of two elements: one is the authorization header — instead of being a bearer token, we're using the newly defined DPoP scheme here, and the access token is sent in as the access token, obviously — but that happens in conjunction with the DPoP header, containing the DPoP proof, being sent as well.
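As an aside (my own sketch, not shown in the talk): the jkt value just described is the base64url-encoded SHA-256 hash of the key's required JWK members, serialized in lexicographic order with no whitespace, per RFC 7638. Roughly:

```python
import base64
import hashlib
import json

# Required JWK members per key type, per RFC 7638 section 3.2.
_REQUIRED = {"EC": ("crv", "kty", "x", "y"), "RSA": ("e", "kty", "n"), "oct": ("k", "kty")}

def jwk_thumbprint(jwk: dict) -> str:
    """Compute the base64url SHA-256 JWK thumbprint (the 'jkt' value)."""
    members = _REQUIRED[jwk["kty"]]
    # Canonical form: only required members, lexicographic order, no spaces.
    canon = json.dumps({m: jwk[m] for m in members},
                       sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canon.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Note that optional members like kid do not affect the thumbprint, so the same key always yields the same jkt regardless of how it is decorated.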
C
The proof is what actually does the proof of possession — that the client holds the key — and then, in addition to the normal checking of the stuff you do for an access token, the protected resource also needs to check that the key bound to the access token matches up to the key that signed the proof.
C
So now there's an actual 401 challenge that can be sent from the resource server, either for an invalid token or for a protected resource request with no token available, and it has the things you would expect to have in the response, like the error code, as needed. I also added an algs parameter to this challenge, and this is an opportunity for the resource server to indicate the algorithms that it supports for checking and validating the DPoP proof.
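To illustrate how a client might consume such a challenge (a sketch under assumptions: the exact challenge syntax is defined by the draft, and this simplified parser assumes well-formed, quoted, comma-separated auth-params):

```python
def parse_dpop_challenge(header: str) -> dict:
    """Parse a simplified `WWW-Authenticate: DPoP ...` challenge value
    into a dict of auth-params, e.g. {'error': ..., 'algs': [...]}.
    Assumes well-formed, quoted, comma-separated parameters.
    """
    scheme, _, params = header.partition(" ")
    if scheme != "DPoP":
        raise ValueError("not a DPoP challenge")
    out = {}
    for part in params.split(","):
        key, _, value = part.strip().partition("=")
        out[key] = value.strip('"')
    # The algs parameter carries a space-delimited list of JWS algorithms.
    if "algs" in out:
        out["algs"] = out["algs"].split(" ")
    return out
```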
C
What else is going on: I fixed up and added a bunch to the IANA section, trying to register all the various different pieces here, which was actually useful because it helped sort of bear out some pieces that weren't fully defined — like the authorization header scheme, as well as the need for the challenge; that's where I sort of thought of that, as well.
C
So the authorization server has metadata to indicate the DPoP algorithms it supports, and the resource server can use the challenge, then, to indicate the algorithms that it supports. I also knocked the document around a little bit, moving the acknowledgements back into an appendix, and I added a bunch of names based on — I hope correctly — looking back at emails and so forth, as usual. If you feel like I've omitted you here, please just let me know; it's hard to sort of track back all this stuff.
C
So I'm going to jump into some open questions that have come up from a number of places — discussions here, and I understand there were some pretty good discussions last week at the OAuth Security Workshop. And I do want to acknowledge there have been a couple of emails that came in this morning with reviews that I haven't had a chance to go through fully; they do touch on some of the issues, but unfortunately, you know, they came too early in the morning for this presentation. So this isn't an exhaustive list of open questions, but it probably covers a lot of it.
C
So, in general, the threat model and objectives that we're trying to deal with have come under some criticism and the need for some improvement a few different times, and honestly they're not entirely clear in the document. We're all pretty well aware of that, but they're also sometimes maybe overly specific — there's some real specific content.
C
Daniel graciously agreed to help with some of the spec text here. He did write some initial stuff that I've struggled to update, but in conjunction with that he's also given a few slides that I've incorporated here, to discuss this a little bit more, and I would like to hopefully hand it over to him to speak to these.
C
It's still open, as it always is, but ultimately the resource server has to know something about it. So, let's see: either it's going to read the token data directly, or it's going to introspect and look for the confirmation claim in the introspection response — the introspection response parameter has the same name and the same semantics — or do it, you know, with a database lookup or however it would be done internally, without any sort of interoperability concerns. But we concentrated on defining the cnf claim here, because JWTs and introspection are the sort of semi-standardized ways of conveying this information. It doesn't have to be done this way, though; there could be some other proprietary mechanism agreed between the RS and AS pair.
D
That is, you're requiring the jkt information to be in the access token, where potentially you could have the signing be totally a layer above it, so that you're not changing the access token at all. Since you do have a detached proof happening with it, you could go and define the DPoP piece to be something such that you're not making any changes at all to the access token, which would enable somebody to go and add this as a completely independent layer on top of what they're already using.
D
If you're viewing the token just as an opaque string with a detached signature, you could have, in the DPoP parameter, something that's a hash of the access token plus some signing mechanism. So then you know that that DPoP token is associated with that particular access token, without making any changes to the access token itself.
C
Okay — I mean, not making any changes was in no way a desired requirement or objective of doing this, so I guess I didn't consider that. The DPoP proof is also sent directly to the authorization server as part of the whole mechanism, in which case there's no access token at all involved. So.
D
On slide eight, though, where you're sending that — it could be the same as usual, and then the DPoP proof would be able to prove possession for that access token; it's a regular bearer token anyway. I was wondering if there was a reason why you hadn't done that, or maybe it just didn't come up as a requirement.
G
Thanks. For example: we have implemented DPoP, and the default mode of our AS is with an opaque string — it's a random blob token; there's no data in the token itself. It's used as an indexing device at the storage level. Regardless, our storage mechanism has a reference to the key that was used in the DPoP proof, and that's what we use to recalculate and validate a signature, both for token presentation at the RS and also for refreshes and all of that other stuff. So this work does not change what an access token is. It does not change what is in your access token. All it requires is that the RS is able to figure out which key it needs to know about in proving the access token — that's how they're bound — and DPoP doesn't actually dictate how you do that. Instead, it provides a couple of interoperable patterns that you can use, based on existing standards that are already well established for bearer tokens, to allow you to do that. So this does not change what an access token is, at all. Well, so.
E
On top of that, I think this touches on something that will come up in the slides later, but there is some security risk in that — in that, if it isn't clear and obvious that an access token is a DPoP access token, and if it is possible for a relying party to process a DPoP access token as an unbound bearer token, then that creates a potential downgrade vector, if you've got clients that are expecting — if you've got a deployment that's expecting to be sometimes issuing…
F
I'll keep it short. So I think, over time, we have kind of conflated all the attacks that we want to defend against using DPoP, and therefore I took the time to kind of unconflate this, or at least try to, and it turns out we have like three or four different kinds of attack that DPoP possibly defends against.
C
So one thing that's come up a few times is the inefficiency: asymmetric crypto isn't efficient, and there are other costs involved too. So far we don't have any real-world, quantified implications of this, it's true, but there have been a couple of different potential alternative approaches raised.
C
But basically — coming out of the meeting we had before IETF 107, and then the general working group adoption of the document — there seemed to be not enough interest in pursuing something different, something concrete, and so we're kind of staying with this for now. Another issue that has been raised is the difficulty around replay checking at the API — the point being that detecting and preventing replay of jtis can be very problematic for large-scale deployments.
C
This can also create problems with efficiency for systems that scale out horizontally: if you're tracking jtis for replay, you lose some of the value of that scale-out, because you have to start coordinating among the horizontal nodes. There are also potentially some unexpected issues with idempotency and retry attempts, if you're catching the jti sometimes but not other times.
C
The jti is intentionally put in there to allow for larger-scale deployments — to, you know, not be a hard MUST that jti replay has to be remembered all the time, but more of a SHOULD, with consideration for the rest of your deployment. It still seems to be considered a problem, though, so there are some options around this.
C
The replay check you have to do is qualified by the URI and the method, so the scope of the jti data replication, even if you're trying to do this, is definitely smaller. There was also a mention of splitting out the path from the htu claim, though I'm not actually sure that helps. We could further loosen or qualify the requirements on checking — making it a MAY, or even non-normative text that sort of encourages it, but without putting it in the normative requirements.
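One way to picture the qualified, time-windowed replay check described above (my own sketch — real deployments would choose their own storage and cross-node coordination strategy, which is exactly the hard part being discussed):

```python
import time

class JtiReplayCache:
    """Track recently seen proof jti values, scoped per (method, URI).

    Because the check is qualified by htm/htu, each endpoint only needs
    to remember jtis addressed to itself, and only within the proof
    freshness window.
    """

    def __init__(self, window=300):
        self.window = window
        self._seen = {}  # (htm, htu, jti) -> time first seen

    def accept(self, htm, htu, jti, now=None):
        """Return True the first time a jti is seen for this endpoint."""
        now = time.time() if now is None else now
        # Drop entries older than the window; those proofs would be
        # rejected as stale anyway, so they cannot be replayed.
        self._seen = {k: t for k, t in self._seen.items()
                      if now - t <= self.window}
        key = (htm, htu, jti)
        if key in self._seen:
            return False
        self._seen[key] = now
        return True
```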
C
The token type is the same regardless — it's taken from RFC 6749 and only applies to the access token, so that's all fine and easy — but in the current situation in the doc, refresh tokens are only bound for public clients. This came up a few times this morning and apparently needs some better explanation in the draft. I'll…
C
…definitely take a note on doing that. Note that — and this gets to larger issues — DPoP-bound access tokens are almost certainly usable as plain bearer access tokens as well, but there have been questions that maybe the client needs some kind of signal…
C
…as a public client, that the refresh token itself is bound, even if the access tokens aren't — so maybe a bearer access token comes back alongside a bound refresh token. One potential approach to doing this: we could easily define a new token endpoint response parameter that is used as a signal to the client that the refresh token has been bound. The name here is obviously not a real suggestion, but conceptually it could be something like this, where it's used as a signal that that's happened.
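To make that concrete (heavily hedged: the speaker stresses no real parameter name exists, so the `refresh_token_key_bound` name below is purely illustrative), a client might consume such a signal like this:

```python
import json

def parse_token_response(body: str) -> dict:
    """Read a token endpoint response and note whether the server
    signaled a key-bound refresh token.

    The parameter name `refresh_token_key_bound` is hypothetical —
    a placeholder for whatever name such a signal might be given.
    """
    resp = json.loads(body)
    resp["refresh_token_is_bound"] = bool(resp.get("refresh_token_key_bound", False))
    return resp
```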
C
There's been discussion of client metadata. Some people seem to think we should do this; I don't — I cannot see the actual utility or usefulness of the client metadata, so I'd like to understand what it would actually be used for before we go about introducing it.
C
The idea would be that anytime there's a cnf with a jkt in the access token, or referenced by the access token, a DPoP access token couldn't be used as a bearer token. In my opinion, as I say here, obviously we don't want to do this, and in reality I don't think we really can: we're not in a position to retrofit requirements onto bearer access tokens, either in the standards world or in the implementations. Currently, in a JWT, other claims are a must-ignore; in introspection it's not quite as clear, but basically it says you can add extra stuff there; and 6750 is sort of silent on the topic. Basically, in my mind, what this means is that if you took a DPoP-bound access token and stuffed it in a bearer authorization, almost all implementations, I think, would accept that as a protected resource request and be fine with it.
C
And while there's the potential there for downgrade-type attacks, there's also the ability, then, to roll out and have mixed-token-type deployments where you don't have to necessarily distinguish between the two: a service that only supports bearer could accept a DPoP-bound access token and continue to work, and a service that only accepts DPoP access…
J
Quickly — I understand the time constraints — I just want to clarify the intent of the issue. It was not to have existing resource servers altered in any way. It is specifically for the issue that Annabelle mentioned: when you have a gradual rollout of the DPoP scheme, where for the time being, depending on the user agent, you may support both bearer and DPoP.
J
In that case, if a client actually gets a DPoP-typed access token, the resource server needs guidance to not rely only on the use of the DPoP scheme, but to also check that the confirmation is there, because it already supports DPoP; and in that case, if it detects a supported scheme through the confirmation but the scheme is different, it shouldn't allow access. That was the whole point. Okay.
K
Okay, so this one's incremental auth. I'm just going to give a quick recap of what this is, just in case there are new people here or someone who hasn't read the draft. The problem statement is essentially this: an app or website asking for, like, the kitchen sink of scopes up front is considered a bad thing, and the more correct approach is that users should have context for the authorization request. So one example of that is, let's say an app has features related to, you know, calendar.
K
Instead of the app asking for everything up front, it can actually ask for each different scope in the context of that feature. So, for example, if it's doing some routine calendar thing, it could ask for the calendar scope only when the user gets to that part of the application, and it can ask for the other scope when the user exercises that other functionality. So this is an example of a consent screen which would be kind of the wrong approach — let's just say this app…
K
…you know, it has all these different features: it has some YouTube thing — it can do something with videos — it can do something with calendars, and contacts, and so on. So it asks the user for everything up front, even though the user may have come to this app for only one of those bits of functionality.
K
So, to fix that, the definition of incremental auth, then, is the ability to request additional scopes in subsequent requests, adding to a single authorization grant that represents everything that has been granted so far. The single authorization grant is kind of important here, because right now, with OAuth, you could simply just do new OAuth requests and kind of maintain separate grants for each separate scope that you asked for, but that is kind of a burden on developers — they need to sort of keep track of all these different things.
K
So it implies that the access token and refresh token kind of carry the union of all the granted scopes so far. With that, this specification proposes two methods for achieving this. The first is designed for confidential clients; the benefit of confidential clients is that, in theory, they can't be spoofed or impersonated.
K
So, with confidential clients, it defines a new parameter called include granted scopes. Essentially, they can make a request for any scope — any single scope — and if they add this parameter, it indicates that they would like the authorization grant to come back and include any scopes that have been granted in the past.
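For illustration (a sketch only: `include_granted_scopes` is the parameter discussed, while the endpoint, client, and scope values below are made up), such an incremental request might be built like this:

```python
from urllib.parse import urlencode

def build_authorization_url(endpoint: str, client_id: str, scope: str,
                            redirect_uri: str, include_granted: bool = False) -> str:
    """Build an authorization request asking only for the new scope,
    optionally signaling that previously granted scopes should be
    folded into the resulting grant."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,  # only the scope needed for the current feature
    }
    if include_granted:
        # Ask the AS to include everything granted to this client before.
        params["include_granted_scopes"] = "true"
    return endpoint + "?" + urlencode(params)
```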
K
There is also a public-client protocol. This one, I guess, is for clients that can be impersonated — which is typically public clients, though not always the case, as I'll talk about later — and this one defines a slightly more complex method: a token endpoint parameter called existing grant.
K
So the problem here is that, at the time of the authorization request, the server really has no way of knowing that this is in fact the same client that, you know, has an existing grant, because it could be an impersonation of a client. So the way this client can essentially prove that it does in fact own that previous grant is by supplying that previous grant during the token exchange. When the code is exchanged, it just passes in its existing refresh token, which is perhaps better illustrated in this diagram here. So, you know, the first block is a very, very typical OAuth flow, resulting in the client — in this case a public client, being an app — having a refresh token and an access token from the first grant. When it comes time for the incremental request, it just requests the second scope, scope B.
K
In this case it gets back an authorization for scope B, so at this point it essentially has sort of two separate authorizations. But when it exchanges the code for the refresh token of the incremental auth request, it passes the refresh token from the first one in that existing-grant parameter, and what it gets back is a grant covering both scopes, just like that. Okay — so, getting onto the updates and the discussion.
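The incremental exchange in the diagram might look roughly like this (my own sketch: `existing_grant` is the parameter named in the talk, while the other values are illustrative). The grant that comes back covers the union of previously granted and newly requested scopes:

```python
def build_code_exchange(code, redirect_uri, client_id, existing_refresh_token=None):
    """Form parameters for exchanging an authorization code, optionally
    proving possession of a prior grant by including its refresh token."""
    params = {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
    }
    if existing_refresh_token is not None:
        # Possession of the old refresh token is what proves this is
        # the same (otherwise impersonatable) public client.
        params["existing_grant"] = existing_refresh_token
    return params

def merged_scopes(old_scope, new_scope):
    """Union of space-delimited scope strings, as carried by the merged grant."""
    return " ".join(sorted(set(old_scope.split()) | set(new_scope.split())))
```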
K
Firstly, thanks everyone for your feedback — and Annabelle, apologies that I missed your email when I was addressing the comments; I just took a look at it and will respond to you shortly on your points. As for the updates in this version: I clarified some information around the metadata field and added a little bit more information about this.
K
On the scope responses: it's possible to get back fewer scopes than what was requested. That was implicit before, and I've just added some text discussing that it may happen. I also defined a new error code. The spec previously did talk about the idea that if an authorization server got one of those, you know, overbroad requests — like the one I illustrated at the beginning — it could reject it. Going through Aaron's feedback, though, maybe it's appropriate to actually have a proper error message to indicate the reason it was rejected, so I quickly defined one, called overbroad scope, just as a way of explicitly saying that that was the reason it got rejected. And I documented a little bit more of the behavior if the user reduces the scope. Now, I do have an open question that I'd be interested in people's feedback on.
K
On reflection — I guess, like a year later now — I realized that there are potentially public clients that are impersonation-proof, perhaps ones that are using claimed HTTPS schemes and things like that, or even, you know, some key exchange or whatever. So I'm wondering, actually, about repositioning those two approaches defined in the draft from public and confidential to just — I need a better or more succinct way to say this — but, like, clients that can be impersonated versus ones…
E
Does it have credentials that it can keep secure, basically — which I think actually covers what you're trying to capture with impersonation-vulnerable and impersonation-proof. I think the confusion comes in because people have tended to equate public with native app, and confidential with, you know, a web application with a back end, and that's not necessarily the case; it's more nuanced than that.
E
I do want to mention one other concern that I didn't mention in my email: the proposal you have, where we are sending the existing authorization in the token request, precludes the authorization server from informing the resource owner — the end user who's consenting — of the total sum of permissions that are being given to the client. So there are contexts where the end user may be interested in that — may want to know, oh, I've already given this app X, Y, Z.
K
Yes — that one, I did think about, and I even thought about having an authorization request parameter as well. The main problem that I see with this — and if you have a solution, I'm all ears — is that there's no way to prove it. I guess there might be a way, but it's: how do you prove that the client is in possession of that grant at the time it makes the request? Essentially, if it just sort of said, "I have these scopes," that's sort of, in a way, unproven.
K
You know, that could lead to a problem where, I guess, the user might be presented a screen that says, you know, this app already has these two scopes — do you want to add one more? — and that might kind of increase their trust, whereas this app could actually have nothing; it could just be pretending. So.
D
And now we've got the idea that the client can, you know, keep a secret, but that's not quite the same as the confidential client, and I think we're struggling sometimes because we have this new type of client that isn't really a confidential client and isn't really the full public client, and maybe we need to address that in incremental auth. I'm just wondering.
K
We implemented this on the Google OAuth server for the purpose of public clients — specifically, apps being able to do the kind of incremental auth flow that I described here. So, yeah, I guess, you know, the motive of this draft — the motive of this idea — is to improve the user experience by giving users a more granular way to give grants, and so not leaving out any kind of client is probably best to achieve that end.
H
This is Torsten speaking — hi from Berlin. I've got a question. I've been working with others on a grant management extension to OAuth in the FAPI working group, and we evaluated your draft, under the hood, as a potential foundation for that, and I was reading the definition of the include granted scope parameter. As far as I read the text…
H
…it doesn't really ensure that the AS is really going to extend the grant using the existing scope values plus the new scope values that the client asked for. Why do I come to that conclusion? It says: when true, the authorization server SHOULD include previously granted scopes for this client in the new authorization grant. And that, in my opinion, somehow reduces the usefulness of the draft, because, as far as I understand, what you want to achieve is that the client doesn't need to keep track…
H
…of what scope values are associated with a certain client ID or refresh token. Instead, it can just incrementally ask for the scope values that are relevant for a certain use case and situation, while the AS makes sure that all the other scope values are kept — and this little SHOULD, from my perspective, undermines that.
K
That's a really good point. And I guess, you know, one of the confusing things about all of this — and in Annabelle's email this was actually raised as well — is that the server can always override anything and kind of return whatever grant it wants. But, I guess, a MUST here wouldn't preclude that, so I…
H
I mean, in the end, you can never override a user decision or an authorization server policy decision, but at least this SHOULD, as it stands, contradicts what you're asking for: the client should be able to really rely on the AS to use the old scope values plus the new scope values. I think that would be very, very useful. Yeah.
E
I think the problem with that line that we were just talking about is that it's talking about requiring the scopes in the authorization grant. It should actually be saying that the AS must treat the authorization request as if those previously granted scopes were requested within it as well, because that's really the intent here.
E
You know, we want the AS to treat this request as if the client were asking for what it's already been given plus the new stuff, and that would eliminate the need for your asterisk and any confusion over how that fits with the AS's ability to make decisions, or the end user's ability to say, no, I don't want them to have that one. Yeah.
K
That makes sense. One of the challenges I always find writing these specs is that OAuth is very unprescriptive about how the consent experience is handled — I do get into that a bit later in the document — and it's very hard to phrase it without making a lot of assumptions about what's going on. So I think, if I can state an unstated assumption in your comment just then, it's that you usually would see those other scopes.
K
And there was a small comment that, you know, it would be good to show that. I find this hard to capture because, like, OAuth never even said you need to show that at all — it just sort of turns out that's the practice everyone converged on. I don't know; maybe that's something that should actually be covered in a refresh doc, I guess. But okay — yeah, thanks for the comment.
K
We're not really — there's some background material here that I didn't get to; this is just an example of some of it, in case people are interested, but yeah, we'll definitely go through everything. Thanks, Torsten, that was actually really useful. I'm going to do another couple of iterations to capture this feedback, and hoping to update the draft today, I guess. Okay.
B
Okay — okay. So, Brian and, I think, Danielle: would you like to have another section to discuss that? Okay — you too, Danielle — okay, awesome, sure.