From YouTube: IETF110-SAAG-20210311-1200
Description
SAAG meeting session at IETF110
2021/03/11 1200
https://datatracker.ietf.org/meeting/110/proceedings/
A
B
Yeah, we are at the top of the hour, so let's go ahead. This is SAAG, the Security Area Advisory Group. I'm Ben Kaduk, and Roman, you know, is already here talking some. So we are the ADs, and let's get started. Hopefully everybody has seen the Note Well already; the standard IETF IPR rules apply.
B
If you have not seen the Note Well already, please do take a look. Those are the rules that we operate under. So let's keep going. So we've got... good.
A
Oh, sorry, I was just gonna say: I mean, we have the specific agenda, but before we even get into the details of the agenda, I just wanted to provide kind of a top-line view of what Ben and I have been thinking. Thinking about this, I mean, we really... we realized that the virtual format is incredibly difficult for us, and I think we've been doing as good of a job as we can, kind of, with that work.
A
But one of the things that we're always cognizant of, that we may have lost, is just that informal discussion that leads to situational awareness about what's happening related to SEC issues across the IETF. So one of the things that we're trying really hard at, and you see that reflected perhaps in the agenda, is that we're a bit more internally focused than we may have been pre-pandemic, and that's an explicit choice. What we're trying to do is to find topics, perhaps across the IETF, and just bring them to SAAG to provide that visibility.
A
You know, one part of that is, you know, we've been harping, I think, a little harder than usual about getting working group summaries as well. We think this is just really important: to, you know, just take the time to summarize what's happening across just SEC and make that a little more easily accessible to everyone else, both in SAAG and outside. And the other thing you'll notice is...
A
We have OAuth at the end of the agenda, and this is actually going to be the second installment of what we started last time with DOTS, which was finding SEC work that has made a lot of progress on their deliverables, and just really stepping back from the individual documents to have the top-level kind of introduction to what that...
A
What that work is about, and what it's doing, and potentially where it's headed in the marketplace. And so it's OAuth's turn to do this, and we're going to continue choosing working groups in future meetings. And the choice of OAuth is quite deliberate; we had a couple of options. I mean, this is also a response to, you know... we got some questions that we don't understand what OAuth is working on and what OAuth is, and so we hope to address some of that there as well.
B
Yeah, I mean, I guess it's been up on the screen. If anybody has any suggested edits or changes, you can join the queue, but we should probably just keep going, because we do not have too much time left in the schedule.
B
Yeah, and you can always join the queue as we keep going. We can come back if needed.
A
All right, maybe we'll circle around at the end to see, in the interest of moving forward. IPSECME has a report; KITTEN likewise with a report. Thank you for the report from LAKE. LAMPS, we'll maybe circle around; we don't have a report there. MLS, we just got one on the list; I just didn't update it in time.
A
OAuth did not meet; we can circle around. Nothing for Privacy Pass. RATS, we have a report; SACM likewise. SECDISPATCH will meet, actually after this; this is a scheduling anomaly, a little unusual for us. SECEVENT did not meet, has a report. Nothing from SUIT. We have something from TEEP. We're waiting for TLS. To point out here, I just wanted to stop: we have a report that did not get included, but there was a call on the mailing list in the last two weeks.
B
Stephen also notes that we failed to add OpenPGP to the list; I thought I had synced up with the datatracker, but I guess not. Sean says the TLS report should be coming shortly.
B
And I guess we can also note that OAuth, though they're not meeting this week, has been doing a lot of regular interims, and Roman, you may have a better sense than I, but I think they've been making good progress.
A
Yeah, indeed they have. I mean, I think we're meeting almost every other week, unless something kind of cancels it. And if no one comes up to the mic for I2NSF: I2NSF is kind of at the back end of all their scheduled milestones, so their documents are slowly either responding to the IESG feedback, for the ones that have been through a telechat, or they're polishing directorate reviews from a working group last call or from an IETF last call.
B
C
Good morning, everybody. One of the things that came up during the 8446bis discussions, which is the revision to the TLS 1.3 spec, was this kind of discussion that we had about adding some more nuance about the meaning of the Recommended column. Just kind of a heads-up to everybody that we're going to start looking at that; and it actually applies to 8447bis, so ekr is actually technically off the hook. But the idea is to try to add more nuance, a more subtle definition for what that is.
C
So we're going to kind of kick that off, and I just wanted to highlight that, because a lot of other working groups are kind of adopting that, and so we're going to try to figure out if there's some better way to do this, so that we can reflect the actual meaning of what we wanted it to be. Thanks.
B
Yeah, that's a really good point. You know, TLS sort of pioneered some ground in making it easy for people to get code points while still having some indication of what the IETF thinks is a good idea, and we're still evolving what the best way to express that is. So it's great that we have progress there.
D
Yeah, I can just say a few things about Privacy Pass. We are going to meet tomorrow, session three. We'll have a discussion of kind of the existing drafts, along with some deeper discussion on kind of issue consolidation, and then there are some new drafts that will be presented in Privacy Pass.
E
F
Yeah, I just wanted to mention, about OAuth: we are not meeting this week, but it may be worthwhile to have a look at the upcoming virtual interim meetings that we are planning, if someone cares. Obviously those have been discussed on the list, and once those are done we can send a meeting report. And we have the presentation later today. Sure, yeah, I'm happy to.
B
Have the report. Justin?
G
Yeah, so just two related things. First, in the W3C, the Decentralized Identifiers, or DID, spec is progressing towards community review, and that's going to be of interest to a number of people in the security community, so I would recommend people take a look at that. Also, from the HTTP working group, two security-related documents are moving along, making some progress: both HTTP Message Signatures and message body digest (the second is a bis). And so, yeah, please, we need some...
G
We need to make sure that we have enough, you know, security-focused eyes on those drafts. Even though, you know, they are focused on the HTTP semantics more than anything, they are definitely security-related protocols and components.
B
Thanks for the heads-up. And Tero, go ahead.
H
802.15.9 is now going through the sponsor ballot, trying to make a new revision which actually moves it from recommended practice to standard. And 15.9 provides the key management for 802.15.4; it doesn't define its own key management, it reuses existing ones, and it has HIP and IKEv2 and 802.1X, for example. It doesn't have a TLS one, because nobody has, you know, volunteered to write that.
H
I
B
I guess the only other thing I wanted to note, from the groups that did not release a report, is that LAMPS is working on a new charter, so we'll be seeing that come out for review at some point.
A
B
So, other security highlights that, you know, we want to call out: there are still two AD-sponsored drafts, the same two as last time.
B
We do have the DANISH BoF, as Roman mentioned, coming up later this week, and then the DLT gateway protocol was a proposal for this time but was not quite ready. But I think the proponents are still planning to work on that and try and get some more interest in it. If you want to reach out to them, or us, we can get you in contact, if that's something you're interested in. So, next slide.
B
And we do have a call-out for people who might be interested in being a working group chair. There's an immediate need for ACME, and we're sort of always interested in general; even if you're not interested in ACME but might be interested in something else, we'd like to know. It's always good to have a sort of broad candidate pool when we do have an opening that needs to be filled, so we can get someone...
B
That's a good fit. And we did just have this common security DISCUSS items list on the slides last time, but I think it's just worth highlighting: we've tried to make a list of things that show up a lot in our reviews, and we try to get some broader awareness and visibility into that. These are common issues, and hopefully we can get them resolved before the documents make it into IETF Last Call, even, to try and improve the quality of the documents earlier on in the process. And, let's see... looks like, in the chat...
J
Okay, I was just typing something up. So, I'm one of the chairs of it; I don't know if any of the proponents are here, but the goal of DANISH is to be able to get Internet of Things devices being able to cross-communicate a whole lot better. So that includes being able to cross-communicate between themselves in the long run, and it includes being able to talk to other infrastructure besides just the owner of the IoT device in question. And so they want to be able to use DANE to do that: to be able to look up both keys for other devices and things like that, as well as to be able to look up keys for other organizations and things like that, rather than just rely on the PKIX infrastructure for doing that.
B
And yes, so we want to put this in: we do have, in the datatracker, a very handy feature that shows sort of an AD dashboard of where all the documents are that we're responsible for, and it breaks it down by the category of what is in publication-requested, where, you know, it's on the AD to take the next action, versus getting reviewed, in various states and whatnot. So you can always go and look at those; it doesn't require authentication to track.
B
A
Yeah, one of the questions we've gotten is: I've been working on my document, it's in the working group, it's now out of the working group; what happens next? What else is in the workflow as it relates to SEC? So, looking at these dashboards, for either Ben or me, gives you insight into what happened after it left the working group and what else is on deck.
A
What else is on deck as it progresses through the different phases. So this isn't necessarily new information; you can see that information on particular documents as they're buried in working groups. But if you want to get a sense for what's in the SEC pipeline, and how far it is in the progression process once it's outside the working group, in a place aggregated (well, aggregated in two places, because that's how it's kind of split), you can just look at these URLs and kind of self-serve.
A
Okay, so I think the next thing is just kind of moving through slides, to keep us moving. We wanted to just put out another big thank-you, as we always do, to all the SECDIR reviewers who make all the IETF documents, either in Last Call or before they're published, you know, so much better because of that review.
A
Thank you. And, you know, a special kind of thank-you to Tero, who, as our secretary, kind of manages the workflow and kind of the pipeline with all of these reviewers and all the different working groups. I mean, it's really a thankless kind of job, to kind of do it week in and week out, so very much appreciate it.
B
F
K
All right, so thank you all for giving me a little bit of time to talk about this. Right, qlog: so qlog stands for QUIC logging. The project started a couple of years ago, when we noticed that QUIC and HTTP/3 were becoming quite complex and you'd probably need some additional tooling to help analyze their behaviors. For other protocols, for example TCP, you would do that (next slide), for example, by taking a packet capture somewhere in the network and then analyzing that using tools like Wireshark.
K
K
So for QUIC, you would have to store it in its entirety, and also all of the TLS secrets, of course, which is quite terrible for scalability and privacy when you try to do that at scale. There's a secondary aspect there.
K
What I want to say is that it's something that we've historically, of course, also seen for encrypted application-layer protocols as well. There's a secondary aspect there (next slide), which is that not all aspects of the protocol are always reflected on the wire, of course; especially things like congestion control, which can be complex to debug, aren't always visible.
K
So for QUIC we decided to take a new approach (next slide), which is to log the necessary information at the endpoints directly, to exfiltrate it from the implementations instead of from the network. This is interesting because you can log only the things that you need, and you can very easily leave out sensitive information, for example.
K
K
On top of that, so, next slide: this means that qlog is far from rocket science; it's really, really simple. Basically, we have a schema of different events that you can have in QUIC and HTTP/3, and we map that to, currently, a JSON serialization format, listing how we should, for example, log a received packet or, on the right side, congestion-control information. Next slide.
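As a rough illustration of the event-to-JSON mapping being described, a logged "packet received" event might look like the sketch below. The exact event names and fields are defined in the qlog drafts; the ones here are simplified placeholders, not the authoritative schema.

```python
import json

# A minimal, hypothetical qlog-style event. The real schema lives in the
# qlog drafts; these field names are illustrative placeholders only.
event = {
    "time": 1234.56,  # milliseconds since connection start
    "name": "transport:packet_received",
    "data": {
        "header": {"packet_type": "1RTT", "packet_number": 17},
        "frames": [{"frame_type": "ack", "acked_ranges": [[0, 15]]}],
    },
}

# qlog events serialize to JSON, so generic tooling can consume them
# without knowing anything about a particular QUIC implementation.
line = json.dumps(event)
print(line)
```

Because the serialization is plain JSON, a tool like qvis can render the same stream of events from any implementation that emits them.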
K
Building on top of that format, you can then create tools; we have created a couple of them. There should be a screenshot on this slide, but I'm not seeing it. Yeah, so we have created a list of tools, called the qvis tool suite, in which, for example, we have a sequence diagram of all the packets going on. And also, next slide...
K
K
This combination of the common format and the tooling has been relatively successful for us, and this has led to most QUIC implementations supporting qlog in one form or the other, with, for example, Facebook using this as their primary way to debug and monitor their QUIC deployment at scale as well, because of the fact that it seems to work quite well. Next slide.
K
K
K
Next slide. This goal of having qlog in a more general fashion is also reflected in the current drafts. So we have a separate document for the QUIC and HTTP/3 specific stuff, but then we also have something called the main schema, which kind of has the protocol-agnostic stuff, and the goal there is to grow this into a concrete set of best practices and guidelines that other working groups and other efforts can then use to guide their definition of new qlog events for new protocols.
K
Next slide. We foresee quite a few challenges, of course, in doing this, and we also think that this should be, like, an IETF-wide effort, which is why I've been doing this presentation at several working groups and areas this week.
K
I just wanted to give you an example of three different challenges, just so you can have an idea of the things we will be discussing. Next slide. So, the first aspect there: the simplest thing to do would be to reflect the raw wire image into the logging format as well, which is what you see here on the left; we have an acknowledgement frame with packet number 16 missing. That's useful, but it doesn't necessarily reflect what the implementation is actually doing.
K
A second type of event explicitly indicates what the implementation was doing, and this is also something that we've seen a lot for TLS; for example, it's not because you get certain TLS records or TLS extensions in that they're also correctly acted upon. And so one thing to do would be to always define the two types of events, but that makes it a lot more work, and a lot more difficult for tool creators to know which data they should act on if there is an overlap.
K
Tooling across protocols, next slide: because, of course, TCP also has the concept of selective acknowledgements, but there it's not in a frame, it's in the packet header, inside of the options. So you have essentially the same semantic information, but reflected in a different form, making things more difficult. So we hope we can somehow find a way to make this easier, more consistent to work with.
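One way to picture the normalization problem just described (QUIC carries acknowledgement information in ACK frames, TCP in SACK options, yet a tool wants one consistent view) is a pair of converters into a shared event shape. Everything below, converters and field names alike, is invented for illustration and is not part of the qlog drafts.

```python
# Hypothetical normalization: QUIC and TCP express "these ranges were
# acknowledged" differently on the wire, but a cross-protocol tool
# benefits from a single semantic event. Shapes here are made up.

def from_quic_ack(frame):
    # frame: {"frame_type": "ack", "acked_ranges": [[lo, hi], ...]}
    return {"event": "recovery:acked", "ranges": frame["acked_ranges"]}

def from_tcp_sack(option):
    # option: {"kind": "sack", "blocks": [(lo, hi), ...]}
    return {"event": "recovery:acked",
            "ranges": [[lo, hi] for lo, hi in option["blocks"]]}

q = from_quic_ack({"frame_type": "ack", "acked_ranges": [[0, 15], [17, 17]]})
t = from_tcp_sack({"kind": "sack", "blocks": [(1000, 2000)]})
print(q, t)
```

With both sources mapped to the same event name and range layout, one visualization can consume either protocol's logs.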
K
K
Data-type definition language, which you can see on the right side: we're hoping to align this a bit more with standard options for that, to make it, for example, easy to automatically generate code that serializes and deserializes the events to different formats. Next slide. And then, of course, the main thing, why I think it was interesting to bring this here as well, is that conceptually it's easy to say that, because we're logging at the endpoints, we can be very privacy-sensitive.
K
K
So we're thinking about having a kind of a sanitization-level approach, where you can say, for different levels of privacy sensitivity, this is how you should either hash or anonymize or leave out certain fields and certain events. Something that has also already come up this week is, for example: do we want to encrypt the logs themselves? So, not just the information contained within, but also the logs that we store; which, I think I've seen something here about the CORE working group, for CBOR...
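A minimal sketch of the sanitization-level idea: per level, each field is kept, hashed, or dropped. Nothing here comes from the qlog drafts; the level names, field names, and policy table are invented purely to make the concept concrete.

```python
import hashlib

# Hypothetical per-level rules: in a real design these classifications
# would come from the event schema, not be hard-coded like this.
POLICY = {
    "low":  {"ip": "hash", "payload": "drop"},
    "high": {"ip": "drop", "payload": "drop"},
}

def sanitize(event, level):
    """Return a copy of the event with fields hashed or removed
    according to the chosen sanitization level."""
    rules = POLICY[level]
    out = {}
    for key, value in event.items():
        action = rules.get(key, "keep")
        if action == "keep":
            out[key] = value
        elif action == "hash":
            # Truncated digest: linkable within a trace, not reversible.
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:16]
        # "drop": omit the field entirely
    return out

print(sanitize({"time": 10.2, "ip": "192.0.2.7", "payload": "secret"}, "high"))
```

The same trace could then be exported at different levels depending on who receives it, which is one way to approach the cross-organization sharing mentioned below.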
K
That might be interesting for us to take a look at. And there's a separate aspect to that, which is that we think it might be interesting to have a way of sharing qlogs between organizations in some way, to help make QUIC manageability better.
L
K
Hear me? Yeah, we can hear you, but it's not clear what you're referring to, to me.
K
It's just one more slide. I kept this to the last so that it would be fresh in everyone's minds. So, not to immediately jump into that: I'm not saying this is a good idea that we want to do; I'm just saying this.
K
(We can hear you sometimes and sometimes not.) But so, to conclude: we are thinking about these privacy and security issues, where we can take this, how we can do this in a secure way, and that's why I wanted to make people in this area aware of this work that's happening. So, last slide.
K
K
So, if you're in any way interested in that, please find us; in the coming months this should be part of the rechartering of the working group. And we're also, of course, on GitHub, and we have a separate mailing list as well. And, of course, I'm more than happy to answer any questions or listen to feedback you have right now. Thanks.
B
Thank you again, Robin. Roman, just so you know, I did mute you at the start of the talk, so you'll need to restart if you want to comment. And also, I apologize to Robin: I told you I was going to give you a nice intro about what the goal of the talk is, and I totally forgot to. So...
B
I put it in the chat, but I think it sort of became clear that, you know, this is a thing that people have been using for QUIC, and it's quite effective, but there are also still some other questions in terms of how to make it broadly applicable to new and existing encrypted protocols in general, and, you know, so, to make it broadly useful and sort of improve and deal with some of these open questions that you mentioned.
B
M
Yeah, I think it's a good idea. I just wonder: have you kind of thought about how this relates to pcap files and Wireshark and the like, which people are also used to using for similar purposes? And the second part of that is, in the sense that, you know, the privacy-sensitive information that is in these files: that problem exists for pcaps as well, and I don't think it's been solved very well. So again, I think looking at that would be good.
K
Yeah, we've definitely, of course, looked at pcaps. The thing that we have, what I was trying to say on the definition format, is that they are necessarily very, very much tied to the wire image of the format, and they have some ways of adding metadata annotations, but it's not as flexible as we'd like it to be.
K
It is definitely, I would say, a complementary thing. We currently have tools that translate from pcaps to qlog, for QUIC, for example.
N
H
So I guess you are going to be doing some work for TLS also. I mean, the problems I have had lately are trying to debug things in TLS, because it's always happening inside the, you know, library, and the upper-layer applications don't get any useful information, and Wireshark doesn't tell you the interesting information. So, is there going to be a schema describing TLS events, or that kind of thing? And of course I don't want to have, you know, for example, to end up storing the TLS keys, because that would be very sensitive.
K
Yeah, I'm very sorry, but there's construction right next to me, so I don't catch everything. But the thing is: yes, we want to have a separate TLS schema. We have the parts of TLS that we needed for QUIC, but I intentionally put off doing full TLS until we can also do something for TCP as well, which is what we're doing right now, and TLS is a logical next step. So if there are people interested in defining a first version of the schema for TLS, that would definitely be welcome.
O
Yeah, I was just going to say that one of the things that kept coming up was: is this something the IETF should do, because it's not protocol? And it occurred to me that it is protocol, in the... if you have Alice and Bob who are trying... if you've got... if you call up customer service and you're trying to debug some, you know, app or whatever, the customer service person can have one browser...
O
The person who is calling up can have another, and you're going to need to have some common mode of interchange so that that customer-service piece can be debugged. So there is actually an interoperability issue here, beyond the convenience of everybody being able to use the tools: when you think of customer service, you are going to need a feature like this, so that you can say, send me the logs.
K
Yeah, and that's, for example, something that F5 have been saying, and they've added qlog to their products as well, specifically for that.
N
Hello? One... yeah, hi, thanks Ben. This is pretty brief, and it's purely about optics, and it's driven by my current paranoia, because I'm trying to defend encryption against policies that are hostile towards it. So, two things. The first one is: is there anything that we can say explicitly that says that, although this is a debugging and analysis method that's intended to be used where the data is encrypted, there's no sense in which it represents a threat to that encrypted data?
N
So that's the optics bit. And then the second one, practically, is: does this actually compromise the privacy of the people communicating over that encrypted channel?
N
K
I
B
B
B
We are just on time, so I think... Robin Li is also up next, with application-aware networking. Zhenbin, can you join the... or start sending your audio? Great.
P
Okay, hello, everyone: this is Zhenbin.
P
In fact, this is my first presentation in the security area, because I always have my IETF work in the routing area. But because the application-aware networking we are working on has much relation with the security and the privacy issues, we would like to take the chance to have your advice on this work. Okay; and also thanks to the security area for giving me the chance to do the presentation.
P
Okay. First, I introduce the purpose of presenting the APN work in this group. In fact, we proposed the APN work in the IETF two years ago, but in the process of promoting the APN work (that means application-aware networking), the security issues and the privacy issues have always been challenged by people. So we would like to take the chance to get advice in this group. That's the purpose of the presenting. Okay, next slide.
P
Okay, so first I introduce what application-aware networking is. So APN is focused on developing a framework and a set of mechanisms to derive, convey and use attribute information, to allow the implementation of fine-grained user-group, application-group and service-level requirements at the network layer.
P
So from this picture we can see that, at the network edge, it can tag the application-group information or the user-group information, and this information is encapsulated in the network layer. So, when the packet traverses the network domain, such information will be treated as an object.
P
P
P
Okay, so here is the clarification about what APN is not. So, the first one: APN is not about identifying the specific application or user within the network.
P
So this is the first clarification. And this is the second one: APN is about telling the network what policy to apply to the traffic. So that means the application can apply multiple policies to different traffic flows, and also multiple applications can ask for the same policies.
P
So that means the policy is used for the generalized application group or the user group, instead of the policy being applied to a specific application or specific users. So this is the clarification. So, the third one: for APN, in the history there has been some work which sounds similar, such as SPUD and PLUS, and Network Tokens, and also there's some related prior work. That's the difference, because that means the application can be aware of the network information, but for APN...
P
That means the network can be aware of the application information; that's the different aspect. And for SPUD and PLUS and the Network Tokens, the similar part of the idea of that work is that it always encapsulates the application information in the host, in the applications.
P
So this means the information will be carried along the whole Internet; that's the information to be carried around the Internet. So we think that this will introduce more security and privacy issues, because this is always open to the whole Internet. But APN is only to be applied in the service provider's limited, trusted domain. That means the application information will be tagged at the network edge, but when the packet leaves the limited domain, the information will be removed.
P
So we think that the information is only applied in the limited domain; that is the third one. So, in the past two years we have had two APN-related side meetings. At IETF 108 we asked Brian Trammell to also introduce the SPUD and PLUS past work, and in the side meeting we clarified the conveying of the information.
P
P
So this information can be carried by the transport layer or the application layer; but for APN, because it's always the application-group information and the user-group information encapsulated at the network edge, the application-aware information can be encapsulated in the network layer. So, in summary, we try to clarify that conveying information through the transport layer or application layer is different from this work, where the information is encapsulated in the network layer. So this is what we would like to clarify in this part. Okay, next one.
B
So, Zhenbin, just in the interest of time: maybe for the next few slides we can just touch on the very important points and then get to the Q&A topics.
P
Okay, yeah, thanks, thanks Ben. Okay, so here there's a use case about SD-WAN, because it's very popular, and also the MEF defined some of the standards; so here let's refer to the MEF 70 standard. So that means, in the CPE, according to the five-tuple information, it can map the traffic flow onto the different WAN links. But when it enters into the WAN, there are still multiple paths.
P
So, for the different network paths there can be different SLA-guarantee characteristics. So, in order to satisfy this requirement, we think there's the need to carry the application information when entering into the WAN, so that, in the WAN, according to this information, it can map to the different network paths for the specific SLA-guarantee requirement.
P
One... okay, so this is why the APN. So now, in order to implement the requirement mentioned above in the SD-WAN use case, we need to map to the different network paths. So we think there's a convenient way: we can use this information, which can convey the application-group information or the user-group...
P
Information to the network is the easy way. Or else, if we use the five-tuple of the IP packet: because we'll enter into the WAN network, a tunnel will be encapsulated before the IP payload, so this effective information is difficult to be accessed in the network, and so it is harder to make use of this effective information for the specific SLA-guarantee requirement. Okay, next one.
P
Okay, so here I will not repeat this information. So here it's just to convey this information at the network edge, and it is used, along with the packet, to traverse the network. So this can get some of the benefits: this is easy, and also it is only one field instead of the five-tuple, so it can also improve the forwarding performance.
P
It can also be used for other purposes, such as to use this information for security, or for performance measurement, or for the SLA guarantee. Okay, next slide.
P
Okay, so next one. Okay, so here, just quickly: there are some existing mechanisms, but we think that those solutions are not generalized, so we think that we need a generalized mechanism to carry the application-aware information. Okay, next slide.
P
Okay, so here are the frequently asked questions for this work. In fact, these were also mentioned in the presentation in the DISPATCH working group on Monday, so here are the summaries. So: are there any applications that can benefit from APN? In fact, we think, according to the experience with the Meetecho of the IETF meeting...
P
We can see that the QoE can be improved, and we think that the network can play a role in this one. And the second one is: how can APN help us resolve this QoE issue? We think that APN can help resolve this QoE issue through the network, but QoE covers more aspects, so it's not only to be solved by the APN. And the third one is: who is to set the APN attribute?
P
So, as mentioned, there's the network edge; and how to set the APN attribute? We think there are different ways, according to the existing information of the packet. The basic way is to use the five-tuple to map to the corresponding user group or the application group, and there are also ones where maybe we can use some AI-based methods; there are also some papers on this work. And the last one...
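The edge tagging just described, mapping a packet's five-tuple to an opaque group identifier, can be sketched as a simple lookup. Everything below (the table contents, prefixes, and group numbers) is invented for illustration and is not part of any APN specification.

```python
# Hypothetical mapping table at the network edge: five-tuple patterns to
# opaque APN group IDs. Only the opaque group ID travels in the network
# layer; the concrete user/application identity stays at the edge.
GROUP_TABLE = [
    # (src_prefix, dst_port or None for "any") -> opaque group ID
    (("10.1.", 443), 0x0101),   # e.g. a "video users" group
    (("10.2.", None), 0x0202),  # e.g. an "enterprise branch" group
]

def apn_group(src_ip, dst_ip, src_port, dst_port, proto):
    """Map a five-tuple to an opaque group ID; None means 'do not tag'."""
    for (prefix, port), group in GROUP_TABLE:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return group
    return None

print(hex(apn_group("10.1.0.5", "198.51.100.9", 51000, 443, "udp")))
```

This also illustrates the privacy point made earlier: nodes inside the domain see only the group number, not which concrete user or application produced the flow.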
P
So this is how the APN attribute is used in the network. So this is: at the head-end, to steer the traffic; and at the midpoint, for the performance measurement or some of the necessary processing at the midpoint; and also we have the SFC function, so that for the service function you can also exchange the specific policy according to this application information. Okay.
P
So next; in fact, this is the last slide of the presentation. We introduced the scope and the possible clarifications of the APN work, and also the frequently asked questions. Here we would like to get more advice, especially about the security issue and the privacy issue. So here are the questions; we want to learn from the experts in this area.
P
What security issues may be caused by the APN attribute being encapsulated in the network layer and used in a limited domain? And likewise regarding the privacy issue.
P
We also see some possible ways to mitigate these issues. First, we think that just using group information, instead of the specific application or the specific user, can hide the details of the specific application or user. Also, because this information will be conveyed, let's use an opaque value for the user information. This information is then used to apply different policies. These are possible ways to mitigate the privacy issue.
P
Okay, that's my introduction to this work. Okay.
P
Here, the proposal is that, according to the existing information in the packet, the basic information being the five-tuple in the packet, we map the five-tuple to a specific user group or application group.
P
So we do not need to know the concrete user and application information.
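The mapping the presenter describes, where a network-edge device classifies a packet's five-tuple into a coarse user/application group carried as a single opaque attribute, can be sketched roughly as follows. This is an illustrative sketch only; the group IDs, subnets, and policy table are hypothetical and not taken from any APN draft.

```python
import ipaddress

# Hypothetical policy table: (dst network, IP protocol, dst port) -> opaque group ID.
# The edge device does the expensive five-field match once; downstream nodes
# would then match only the single opaque attribute.
GROUP_POLICIES = [
    (ipaddress.ip_network("203.0.113.0/24"), 17, 8801, 0x2A),  # e.g. a video-conferencing group
    (ipaddress.ip_network("198.51.100.0/24"), 6, 443, 0x07),   # e.g. a bulk-transfer group
]

def classify(src_ip, dst_ip, proto, src_port, dst_port, default=0x00):
    """Map a packet's 5-tuple to an opaque group ID at the network edge."""
    dst = ipaddress.ip_address(dst_ip)
    for net, p, port, group in GROUP_POLICIES:
        if dst in net and proto == p and dst_port == port:
            return group
    return default
```

Downstream policy (steering, measurement, SFC) would key off the returned group ID rather than re-matching five fields at every hop.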
M
Hi, so earlier you said this is not the same as PLUS, and I didn't understand why that's true. It seems to me that all of the many, many objections to SPUD and PLUS also apply here, and I don't see that any of those objections are really addressed. So can you say why this is different from SPUD and PLUS, from the point of view of the security and privacy objections to doing either?
P
So this is just my thought about the security, about the privacy concern: for us, this information is used in a limited trusted domain, so I think the security and privacy issues can be under control. Also, we know that 3GPP and BBF have similar approaches, because they need to authorize the user information, so they also have protocols that work in this way.
B
Yeah, I mean, I think the clarification of how this is different than SPUD and PLUS might be something we have to take to the email list or out of band. I don't have high hopes that we would resolve it today, and we do have a few more people in the queue, so I think we should probably keep going. Okay, Stephen and Jimin, I hope you can continue to explore this topic. David, please go ahead.
R
Hi, David Schinazi, internet architecture enthusiast. I glanced over the main document and also the security and privacy considerations document, and I've been listening to your slides, and I feel like there's a contradiction.
R
So maybe you can help me clear this up, because I'm probably misunderstanding. Is the idea that the end device, so let's say my phone, for example, is going to add some information, so you can tell that I'm running Meetecho as opposed to, you know, downloading a large file? Or is this going to be added by another device, like the CPE? Can you clarify this, please?
P
Okay, thanks for your question. As to your use case, this information is not encapsulated by your mobile phone or by the applications on your mobile phone. We think this information can be encapsulated by the base station or the eNodeB; that means the wireless network is the application-aware network. Because this is a network device at the network edge, it can encapsulate this information.
P
So that is the case. Because, for this one, instead of needing to understand the concrete application or user information, we just use the coarse application group or user group; they may have different SLA requirements or some security requirements. Yeah.
P
So for the first one: I'm not sure about the processing of Meetecho, but, for example, because this is an IP packet, it may have the destination address, or some of these;
P
that is the case. And the second one: in 3GPP, especially for mobile, when you access the wireless network you need to negotiate with the core network, so the core network can learn the user information and will set up the GTP tunnel. The GTP tunnel has some QoS information and also the TEID information, which can carry some of the user-group information, so this information can also be reutilized.
P
So I mean that when your packet is encapsulated in the GTP tunnel, it can carry this application-aware information.
R
It feels like we're going in circles a little bit, and I just really want to understand who makes the determination. Because I suspect the whole point of this is that there is some traffic on my phone that's going to get different treatment from other traffic, and someone has to make the determination of which traffic is which. So the tunnels and all that, that's an implementation detail. Someone needs to be able to make this determination.
P
Okay, David, I think, to simplify the answer: the first one is that, according to the original destination address or source address, it can be mapped to some application group and user group. So when the packet is tunneled, this information will go along with the tunnel information.
R
All right, thanks. I'll pause here, because now I'm finally understanding. So I have a question, a final comment.
R
No worries. So let me back up just a little bit. I was saying thank you so much for explaining this; this really, really helped me. So what you're saying is that, from the information that is already in packets today, like the five-tuple, you can make a determination on how to handle these packets.
P
Okay, I mentioned this one in the slides, but not to take much time: the IP packet will be encapsulated in some tunnel, such as an MPLS tunnel or a VXLAN tunnel. When it is encapsulated like this, the tunneled packet traverses the network nodes, and the network nodes cannot learn the effective information in the IP payload.
P
Okay, good question. I also mentioned that if we use this effective information there are some drawbacks. The first one: for the five-tuple information you have to match five fields, which can cause a forwarding-performance issue because of the processing. Also, for the implementation in the network, scalability is a challenge, because to match the five-tuple you need a complex algorithmic process in the forwarding plane.
P
So the entries for the classification and the processing are not very scalable; that is also a challenge. Yeah.
L
Thank you. So, yeah, I guess first of all I'm finding this pretty hard to follow: I've read these documents and I've heard your presentation now, and they seem to say different things. The documents explicitly talk about injecting the user identity and all this information at the endpoint; I'm referring in this case to the APN framework draft, section 4.1. So I guess I'm a little confused: are you expecting the application endpoint to inject this information or not?
P
Eric, I'm not sure I caught all your points. Can you briefly repeat your last question?
P
Yeah, okay, so thanks; this is the question. Indeed, in the original draft we clarified some of the scenarios for the APN work at the beginning; this is the scope. I think that caused confusion, because some of these application scenarios mention the network side, and some mention "application aware", which means the application side.
L
Okay, so now I'm trying to figure out which information is carried. Earlier you said you weren't carrying the user and application ID, but then later you did. So I'm trying to figure out which one it is. Again, this is in the description of app info: does this carry a user identifier, or does it not?
P
Okay, so maybe I lack the background on what you mentioned. Can you forward your question to the mailing list? Then I can answer directly on the list. Is that okay?
L
Okay; it seems like a fairly critical point.
L
Okay. I think, more generally, you know, we've just spent an enormous amount of time in working groups like QUIC trying to remove as many signals from the network about what's happening as possible. And it seems to me that what you're describing at most requires a very, very small number of prioritization bits, given that you're even distilling it down from what you can infer from the five-tuple. So to have a framework, we're talking about having an enormous amount of labeling
L
that apparently includes the application, and maybe the user ID. And just generally, a method of tagging packets in this way does not strike me as the right direction for us to go. So I don't think we should do this.
P
Okay. So we tried to address what you mentioned, I think, at the beginning: when we proposed the APN work, we talked about the application side and also the network side. As we have aligned the scope, we will try to update the draft to align with this scope.
B
Richard, I think we were still intending to have you ask your question. I don't know if you feel it was already answered.
T
I think Ekr and David largely covered my concerns here. The point that David made about kind of mapping to other technologies: to the degree that this is something observed and injected by the network, based solely on network-observable properties,
T
it seems like this is kind of a standard tunneling problem, and I wonder whether one of the many tunneling technologies that already exist is appropriate. I think that's more of a design question, saving bits. It's really when we cross the boundary into, well:
T
by the same token, if it's based only on network-observable properties, I wonder what the utility is, because you're not really adding anything that couldn't already be observed by the network. That's, I guess, a benefit, in that you're not creating any privacy risks, because there's no information there that couldn't already be observed, but the utility question arises.
T
So, on the flip side of that, if there is information here that the network couldn't just observe, then we have a big security and privacy problem, because we need to worry about how that information is protected and who it's exposed to. So I guess I don't understand.
T
Based on what's been said here, it seems like there have been some things on both sides of that: kind of whether this is something where we really need to worry, or whether it's just something like yet another tunneling protocol. So I think it would be really good to be crisp on that point. Let me highlight that as one point; no need to respond here, we can follow up on the mailing lists afterwards.
T
Security is all about cost-benefit analyses: you know, take risks to get benefits. So it'd be really good to be really crisp here on what the benefit is to applications of a technology like this, given the "A" in the name. You mentioned Meetecho as an example, but if you could dive in, if you could have some worked examples of applications, specifically applications that do benefit from this, and then specifically how they benefit from this,
T
that would help motivate the utility of this mechanism and highlight, you know, what the risks are: what did the application have to give up in order to get that benefit? So it would facilitate the security analysis.
B
A topic that has come up in a few working groups, and it seemed worth presenting it in front of the broader audience. So I think, you know, we may have enough time to do the full 20 minutes, but John, if you can do it in 15, that would be good, to leave some more time to look back at the end. So, take it away.
U
Thanks. So this was briefly presented in CoRE earlier this week, but it needs to be presented also in the security area; CoRE is not a security working group. I think there's also interest in the findings here outside of CoRE, for example for the use of CCM_8 with DTLS for IoT, or for any other IETF security protocol that would do this work. So this is an analysis of the work that has been done in the TLS working group, the QUIC working group, and ongoing in CFRG. Next slide.
U
There is a list of where this work has been done, and recently CoRE has started looking at doing AEAD limits for OSCORE, which I think is great, driven by Rikard Höglund. Just a short summary of the notation, which is taken from the CFRG AEAD limits document: q is the number of protected messages,
U
v is the number of forgery attempts, and l is the length of messages in number of blocks; q and v are per key unless I state anything else. There are other reasons to re-key, to limit the effect of key leakage, that will not be discussed here. Also, this presentation has only analyzed single-key limits, not multi-key, and the suggestions here are not intended to be general recommendations.
U
They're mostly, at this point, input to the discussion. Next slide. So, a summary of the limit work that has been done in TLS and QUIC: you start with some mathematical limits, then you apply a process, you end up with some explicit limits for v and q, and then you add these counters and the re-keying mechanism to your security protocol.
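The "apply a process" step can be illustrated with a small calculation. Assuming, purely for illustration, a confidentiality bound of the shape used in the TLS 1.3 analysis of AES-GCM, CA <= ((l+1)*q)^2 / 2^129, one can solve for the explicit per-key message limit q that keeps the advantage below a chosen target; the target advantage and record size below are illustrative, not normative.

```python
import math

# Illustrative: turn a mathematical bound CA <= ((l+1)*q)^2 / 2**129
# into an explicit per-key message limit q for a chosen target advantage.
def max_messages(target_advantage, l_blocks):
    # ((l+1)*q)^2 / 2**129 <= target  =>  q <= sqrt(target * 2**129) / (l + 1)
    return math.sqrt(target_advantage * 2**129) / (l_blocks + 1)

# Target advantage 2^-57 with 2^10-block (16 KiB) records.
q_limit = max_messages(2**-57, 2**10)
```

Larger records (bigger l) shrink the allowed q, which is the trade-off the explicit limits in the TLS and QUIC documents capture.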
U
I think in general this is great, and in general I think the limits that have come out of DTLS and TLS and QUIC are great and practically useful, except when you start to apply this process to CCM_8; then, I think, you start to see the flaws. So this presentation will try to analyze this approach and summarize the recommendations that I have currently given to CoRE. Good.
U
Most of the paper by Gordon assumes that ChaCha is a random function; that would mean CA = 0 for ChaCha20. In the work, as I understand it in TLS and the CFRG AEAD limits paper, Gordon ends up with a combined AEAD limit for ChaCha, and the CFRG and TLS work has taken that and split it back up into CA and IA.
U
So this limit is of course true, but I think the CA limit doesn't really say anything useful; I think it gives the wrong impression of ChaCha, and I think it should be changed. And then, yes, there is the observation that CA and IA are quite different: one is used for online attacks, the other for offline attacks. Next slide.
B
Five minutes, but you should hurry up.
U
So, then, the analysis of the calculating-limits step: this mostly leads to practically usable limits, except for CCM_8.
U
I think one strange thing that you get from the process used in TLS and DTLS is that if you apply it to the ideal MAC, you get the result that your MAC needs to be re-keyed, and this of course makes little sense and does not increase the security. And our finding is that CCM_8, for low q and v, actually behaves extremely similarly to the ideal MAC, a 64-bit ideal MAC, and also does not need re-keying. Then, yeah, I think we can move on; we are short on time.
U
Here is a summary of the different inequalities, and it's of course very easy to see that re-keying lowers CA and IA per key. But my thinking is that that's probably not the right thing to do for a security protocol where you can have a large number of keys, and you can also have a large number of connections between the nodes. So re-keying lowers IA and CA for a specific key, but might not do the same for the whole connection.
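The per-key versus whole-connection point can be made concrete with an idealized 64-bit MAC (a sketch under that ideal model, not any specific algorithm's exact bound): splitting v forgery attempts over k keys lowers the per-key advantage, but a union bound over the keys leaves the attacker's total advantage for the connection unchanged.

```python
# Ideal t-bit MAC: advantage after n forgery attempts is n / 2**t.
def per_key_advantage(v_total, k_keys, t=64):
    # Each of the k keys sees v_total / k_keys attempts.
    return (v_total / k_keys) / 2**t

def connection_advantage(v_total, k_keys, t=64):
    # Union bound across all k keys of the connection.
    return min(1.0, k_keys * per_key_advantage(v_total, k_keys, t))

v = 2**40
no_rekey = connection_advantage(v, 1)     # one key for the whole connection
rekeyed = connection_advantage(v, 2**10)  # re-keyed into 1024 epochs
```

`no_rekey` and `rekeyed` come out equal: re-keying improved the per-key numbers without changing what the attacker can do to the connection as a whole.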
U
Yeah, so in summary, I think that might not be a perfect solution either. A simpler solution seems to be to calculate security levels based on the inequalities; that would be calculated as the minimum of attack cost divided by advantage, in the traditional way, and then minimized over all attackers.
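As a sketch of that suggested metric: for an ideal t-bit MAC the advantage after v attempts is v / 2^t, and if attack cost is taken as v, then cost divided by advantage is the constant 2^t for every attacker, which is why the security-level curves come out as straight lines. The cost model here is an illustrative assumption, not from the presentation's exact formulas.

```python
def security_level(cost_fn, advantage_fn, attackers):
    # Security level = min over attackers of (attack cost / advantage).
    return min(cost_fn(a) / advantage_fn(a) for a in attackers)

# Ideal 64-bit MAC: an attacker making v forgery attempts pays cost v
# and gains advantage v / 2**64, so the ratio is constant.
level = security_level(
    cost_fn=lambda v: v,
    advantage_fn=lambda v: v / 2**64,
    attackers=[2**10, 2**30, 2**50],
)
```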
U
So here are some graphs of the inequalities, and we can see that lowering the q, v, and l limits raises the security level in the beginning. And, as at the top, my recommendation to CoRE would be to probably lower q, v, and l a little bit, which constrained IoT can do, and then lower the integrity advantages; and, as you can see, if you calculate security levels, they are straight lines. Next slide.
U
Here is CCM with a 16-byte MAC, behaving a bit strangely. The reason the curves go up is that in the beginning all the numbers are dominated by q. And I think, if you look at the advantage for the whole connection and use attacker cost, then actually re-keying can increase the integrity advantage for the whole connection; not per key, and not the security level. But let's move on to the next one.
U
Here's maybe the most interesting finding: this is CCM_8. On the left is a graph of CCM_8 with q and l chosen as in TLS/DTLS, and you can see that, except in the beginning, for low values of v, and at high values of v, it behaves extremely similarly to an ideal 64-bit MAC.
U
If you lower q and l slightly, which is the right graph (for example, l = 2^8 and q = 2^20), then it starts behaving like an ideal MAC already from v = 0. There's a slight deviation that is almost hard to see, but it behaves like an ideal MAC until v = 2^35, or something like that. And this is my current recommendation to CoRE: they are perfectly fine to keep using CCM_8. Next slide.
U
And here are some general recommendations. I definitely think the limit work done by a lot of people is great; OSCORE should do the same. I think you can use CCM_8: it behaves like a perfect ideal MAC as long as you're fine with a 64-bit forgery probability. CCM_8 is close to a perfect algorithm for that, at least the integrity part, and I think a 64-bit forgery probability is very acceptable for constrained IoT. Basically, that is what you need to get a single forgery.
U
I think TLS and DTLS do not need to change, except if DTLS wants to use CCM_8, but that could also be done in, for example, the IoT profile. I think the CFRG AEAD limits document should probably change the CA limit for ChaCha20, or at least explain better that it is not likely to be close to the exact limit for ChaCha20. And then I think the process should change, so that it does not give these strange results.
Q
Part of the problem I think we're having in this space is that we don't really understand the way to think about these things from the perspective of long-term security and re-keying, multi-user security, and the various aspects of this.
Q
The discussions that we had when we were putting the draft together essentially concluded that the larger block sizes, and particularly the larger tag sizes, were such that you had to have extremely large resources in order to attack, even in the multi-user context. But the smaller tag sizes don't have those same advantages, and my personal conclusion was that CCM_8 was really not very good for those sorts of settings, under those assumptions.
Q
I also think that the assumptions are probably a little bit weak as well. So it leaves us in this awkward position where I really don't know how to think about CCM_8 in this context. It would have been better if we had had a larger block size and applied a larger tag size all around, but I understand that constraints exist.
Q
So I think that, as far as you've gone, the recommendations that you have for CCM are reasonable, but I really don't know if it'll be the model that applies everywhere.
S
Hi, yeah, just a couple of points. Have you been considering OCB? The patents have gone away; Rogaway has just put his patents into the public domain, so it's probably worth considering it, because what I've never liked about GCM is that it basically turns a block cipher into a stream cipher.
S
The other thing, kind of a meta remark: with, you know, AES's 16-byte block, maybe we should think about something like a competition for a 32-byte block.
U
I have not looked at OCB; I don't know if anybody else has done that. I don't know if there are any advantages published for OCB; it would be very interesting to see, I think, for the constrained IoT, where they are currently depending on CCM_8. And I think, if there's no major problem with CCM_8, the IoT world would like to continue to use CCM_8.
U
Otherwise they would need to change, and then I don't know if OCB would be a good answer for that. Also, OCB definitely has advantages in general.
B
Okay, thanks again for putting this together, and thanks to Martin and everyone else who has, in fact, been working on looking at the limits. Obviously we want to keep looking at this, and hopefully we can make some more progress.
A
Well, Aaron is bringing up the slides. Again, kind of a reminder: after you hear OAuth, if you want to give Ben and me a suggestion on what working group you want to hear about next time, we're happy to slot it into the agenda for IETF 111.
V
Great, thanks. So, yeah, hi, I'm Aaron Parecki, coming to you from the OAuth group.
V
The goal of this presentation today is really to give you a sense of what OAuth is, what problems we think about, and the way we're approaching the world. This is going to be a mostly high-level session: not a lot of on-the-wire protocol stuff, more about the high-level concepts and architectures of the different parties involved. And towards the end, I will catch you up on some of the new work being done in the group as well.
V
So I want to start off by saying that specs, it turns out, are actually not a great way to learn about this stuff, as I'm sure you're all aware. Specs are, you know, the legal contract that we are all writing, and in the OAuth group, yeah, it turns out that the OAuth core spec was written now almost 10 years ago. There's been a lot of progress made since then, and it is a bit of a mess. It is definitely a bit of a mess.
V
There are a lot of different extensions, and there's a lot of work being done in other groups as well, outside of the IETF, building on this work.
V
But I want to take a step back from all that and rewind back in time, to talk about how we actually got here. There used to be a very common pattern on the internet: when an app like Yelp (not to pick on Yelp) would launch, it would want to see if your friends were already using Yelp.
V
It offered to, you know, bootstrap your social network within this new application, and to do that it would ask you for access to your contact list. And where is your contact list? In your email. So it would ask you to enter the email address and the password to your email. And this was a very common thing; even Facebook was doing this. We understand that this is a terrible idea now; it is generally understood that we should not be giving our email credentials to random applications.
V
A couple of very concrete problems with this: how do you decide, if you no longer want this application to have access, that you can revoke that access? How do you actually know that the app is not going to store your password, for example? How do you actually know that it's going to do only what it says it's going to do, which is reading your contacts and not actually reading your email? And do you actually trust that app not to do things like changing your password or deleting your account?
V
So the fact is that people were happily putting in their email passwords into these apps, because they wanted what the app was promising, which was to find their friends. So we needed to find a solution to this; otherwise people would just keep doing this, putting their passwords into random apps.
V
We would like to find a solution that lets Yelp access some part of a person's account while not being able to access other parts of the account, that sort of delegated access. And this is really the problem that OAuth set out to solve a long time ago: how do we let apps access data without sharing passwords with the apps? That replaces what used to be the pattern of giving your password to the application.
V
Think of a hotel: the person at the front desk will hand you a key card; that key card is what you take to the door, and you can go and access your room with that key card. This is exactly analogous to OAuth, where the person at the front desk is the authorization server handing out these key cards, the key card is like the OAuth access token, and then that door would be the resource, or the API.
V
So that key card may give you access to your room; it may give you access to other resources in the hotel. And the important thing here is: when you are using that key card, you don't need to know how it works. You just need to know that the thing you're presenting it to knows how it works. And it doesn't even need to represent a user; it doesn't need to have your ID written on it, it doesn't need to have anything about you as a person.
V
It represents access to data. And what that means is that, because OAuth was created to solve this sort of delegated-access problem, there's actually nothing in the spec that talks about users; there's no user identity built into the OAuth spec. It's always about accessing data. So OAuth started with that problem, and then, because it is actually very common that applications do care about who the user is,
V
the OpenID Connect group built on top of the OAuth spec to add back in that user identity information. So I want to start with a bit on how OAuth works, and then we will.
V
In the OAuth terminology, we say that the client will use an OAuth flow to get an access token. There are several different flows defined in the spec, as well as some extensions; there are also a couple defined in the spec that are being undefined by some new extensions. But regardless of which flow is used, the end result is going to be the same: the end result is the access token.
V
So, okay, let's get into some terminology here. In the spec there's a bunch of terminology defined; those are the terms you'll find at the bottom, in parentheses: resource owner, user agent. Those are the technically more correct terms for the more conversational terms you'll find in bold; we talk about applications, OAuth servers, APIs. In the spec they're, you know, more precisely defined, for good reason, but that's why this stuff is confusing to people, usually. The point with these is that these are roles that the spec defines.
V
These are not necessarily always discrete physical components, so we might see this expressed differently in a particular deployment. For example, GitHub has a piece of software that is both an API and an OAuth server, or a resource server and an authorization server, and then the client application might be a third-party application accessing data in that API.
V
That is going to look very different from a situation like this, where you might be writing an iPhone app; you might be also creating an API that backs that iPhone app, and in the middle is some sort of, you know, third-party service as your OAuth server, something you are purchasing, or something you spun up open source, in a different physical deployment.
V
We always map it back to the roles in OAuth when we talk about how the flows work and how the different security properties work. So, to get into how the authorization code flow, the canonical OAuth flow, actually works, the first thing I want to do is talk about a concept that we spend a lot of time talking about in the OAuth group, which is the front channel and the back channel.
V
Essentially, the back channel is the sort of normal way of passing data around: it's an HTTPS request from a client to a server, and there are a lot of security properties of that that we often take for granted, because they're just baseline HTTPS. You know, it's encrypted, the request can't be tampered with, and the response that comes back can be trusted because it's part of the same connection. That's the kind of stuff we take for granted about the back channel.
V
The high level of this flow is: the user starts off by saying "hey, I'm trying to use this application"; that's them clicking the login button. The application says "great, don't give me your password, I don't want your password. Instead, I'm going to generate a temporary secret right now and calculate a hash of that secret." We use SHA-256, and the secret only needs to be saved for a couple of seconds.
V
The app then says: "great, go over to the OAuth server and take this hash value with you." Now, this request is actually from the app to the OAuth server, but it's going through the user's browser. So this is the first front-channel request: the app is actually trying to send something to the OAuth server, but it does not send it directly.
V
It sends it through the browser. That means the user's browser lands at the OAuth server, which is where they log in; that's where they type in their password, or they might be delegated to some other third-party SSO thing at this point. It doesn't really matter; that's out of scope of the spec, that's the business between the user and the OAuth server. They may also then have to approve the request: the OAuth server might ask, do you actually want this application to be able to access this data?
V
And if the user says yes, the OAuth server is ready to create the access token. Now, instead of creating the access token and sending it back right now, remember we're still in the front channel, so we're actually going to send back just a temporary code, a temporary authorization code, as it's called, and this is essentially one-time-use and short-lived. So this is the second front-channel message; this is the message that the OAuth server sends back to the app through the user's browser.
V
So this is a temporary authorization code. Now the app can go and exchange that authorization code for an access token. This is a back-channel request, and this is where PKCE comes in. This is where PKCE solves this problem, the problem being that when the OAuth server gets that first message, it's the first time it sees anything about this application, right, and it's not getting that message from the app, it's getting it from the user.
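The back-channel exchange is a form-encoded POST to the token endpoint. As a sketch of just the request body (no network call here; the values are hypothetical, the parameter names are from the spec):

```python
# The app's back-channel POST body to the token endpoint.
token_request = {
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",            # from the front-channel redirect
    "redirect_uri": "https://app.example/callback",
    "client_id": "example-client",
    # PKCE: the original plaintext secret, proving this is the same app
    # that started the flow.
    "code_verifier": "hypothetical-plaintext-verifier",
}
```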
V
The second time the OAuth server sees the application in this flow is the back-channel request, which it can actually trust is from that application. PKCE is the thing that links up those two parts of the request. So in order for the app to make that request, it also has to include the secret it generated at the beginning of the flow, which the OAuth server can then calculate the hash of itself and compare the two hashes, linking up the front channel and the back channel.
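The server-side check described above is just re-hashing and comparing. A minimal sketch, assuming the server stored the challenge alongside the authorization code; the function name is hypothetical.

```python
import base64
import hashlib
import hmac

def verify_pkce(code_verifier: str, stored_code_challenge: str) -> bool:
    """Server-side PKCE check linking back channel to front channel.

    The server hashes the verifier from the token request and compares it,
    in constant time, to the challenge stored with the authorization code.
    """
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    computed = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return hmac.compare_digest(computed, stored_code_challenge)
```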
V
Now we can reply back with the access token in the back channel, and at this point the flow is done and the app can go and use that access token to make requests. So this is the high level of the authorization code flow, including the PKCE bits, which are the bits highlighted in bold, PKCE being the way that the OAuth server can link up the front channel and back channel so that the OAuth server knows that the same thing it's delivering the access token to is the same thing that started the exchange.
V
So that's the flow. I want to talk about a few more OAuth concepts as well, starting with refresh tokens. So an access token is the thing the application is going to use to make API requests. That's great. That access token might expire at some point, for many, many reasons that are basically out of the control of the application and entirely up to the OAuth server. Refresh tokens are a way to have the user stay logged in or have the application work offline.
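Using a refresh token is another back-channel request, with no browser or user interaction involved. A sketch of the request body, with hypothetical values (the parameter names are the standard ones):

```python
# Back-channel request body: swap a refresh token for a new access token,
# without sending the user back through the browser flow.
refresh_request = {
    "grant_type": "refresh_token",
    "refresh_token": "hypothetical-refresh-token",  # issued with the access token
    "client_id": "example-client",
}
```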
V
So the very common use of this is to streamline the flow in a mobile app, because in mobile apps the user-experience cost of doing an OAuth flow is higher: it involves opening up a browser within the application. Because of that, it's useful to use refresh tokens to smooth that over. So you'll see that the application will start the OAuth request, the user will see a browser prompt and log in there, and that delivers the access token and the refresh token to the app.
V
So that's a way to sort of streamline the user experience on devices where the cost of doing that redirect flow is higher.
V
The idea with scope is that we want to say, actually, this application is only going to get access to this particular data. You might see that in these consent screens, like this one, or with Fitbit, where you can see, great, we can access your sleep data, food data, whatever, or the user might uncheck certain scopes.
V
The important thing here is that this is a request by the application. The application is saying, I'm trying to get this data, and that request may or may not be granted. It may be confirmed by the user, and the authorization server might also have its own policies about which applications are allowed to request certain scopes, things like that. It's all sort of the business of the authorization server.
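The narrowing described above, from requested scopes down to granted scopes, can be sketched as a simple intersection. This is an illustration only; the function and scope names are hypothetical, and real servers apply much richer policy.

```python
def grant_scopes(requested, user_approved, server_policy):
    """Sketch: the granted scope is the requested scope, narrowed by both
    the user's consent and the authorization server's own policy."""
    return [s for s in requested if s in user_approved and s in server_policy]

granted = grant_scopes(
    requested=["sleep:read", "food:read", "location:read"],
    user_approved={"sleep:read", "food:read"},              # user unchecked location
    server_policy={"sleep:read", "food:read", "location:read"},
)
```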
V
Okay, I want to talk briefly about access tokens, since this is a pretty big aspect of OAuth as well. Access tokens are, of course, what the application uses to get data from the API. They are mostly considered an implementation detail within a particular deployment of OAuth. There are actually not a lot of hard-coded rules about what access tokens should be or how they should work; a lot of those decisions are ones the resource server as well as the authorization server can make.
V
So generally, we consider access tokens to fall into two families: reference tokens or self-encoded tokens. In both cases, these are considered bearer tokens, which means if you have the string, you can use it. That's all it takes to use it. These are not key-bound tokens, even if there is a key that was used to sign the JSON Web Token.
V
A reference token is a pointer to a record in some other database; the data about that token lives somewhere else. So the expiration, permissions, whatever it is, would live in some other database. Whereas a self-encoded token actually takes the data that is trying to be stored about the token and packs it in some sort of serialized format, signed or encrypted or both; that's a self-encoded token. There are many different ways to implement both of these families of tokens.
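The two families can be contrasted in a toy sketch. This is not a real JWT implementation (a production server would use a standard JOSE library); the key, store, and claim values are hypothetical, and the point is only where the data lives: server-side for a reference token, inside the token for a self-encoded one.

```python
import base64
import hashlib
import hmac
import json
import secrets

SERVER_KEY = b"hypothetical-signing-key"  # held only by the authorization server
TOKEN_STORE = {}                           # stands in for the server's database

def issue_reference_token(claims: dict) -> str:
    """Reference token: an opaque random string; the claims stay server-side."""
    token = secrets.token_urlsafe(32)
    TOKEN_STORE[token] = claims
    return token

def issue_self_encoded_token(claims: dict) -> str:
    """Self-encoded token: the claims travel inside the signed token itself."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

claims = {"sub": "user123", "scope": "photos:read", "exp": 1700000000}
ref = issue_reference_token(claims)
enc = issue_self_encoded_token(claims)
```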
V
The advantage that self-encoded tokens give you is that you then have two different ways to validate them at the API, the resource server. What we call local validation, or the fast way, is that because it's a signed token, you can look at the token itself, without going externally over the network, and decide whether it's valid. You can also use what's called remote introspection, or the strong way, which is: you do go back and call the OAuth server and say, hey,
V
Is this token really still valid? And that lets you do things like check whether a token has been revoked for some reason before it has expired. These are the different trade-offs that you have to make between the different token validation methods, based on whether you care about things like early revocation of tokens, based on the token lifetimes, and things like that. So the classic example here is: let's say your tokens last for eight hours. The two different ways you validate tokens will agree at first, until something about it changes. Once a self-encoded token has been issued, there's no way to change the token itself, so you can't revoke it.
V
So if the user goes and revokes the application, or the application is deleted, or something about that changes, like the policies of the user change, that isn't reflected in the token, which means the two different validation methods disagree until the token expires. And then you have to decide: what is your threshold for being okay with the fact that these two different validation methods disagree? This is something that you can't really avoid; this is not even a problem unique to OAuth, it's essentially a caching problem.
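The disagreement between the two validation methods can be shown with a toy model. Everything here is hypothetical (the revocation set, the claim shape, the function names); it only illustrates that local validation can't see a mid-lifetime revocation while introspection can.

```python
REVOKED_TOKEN_IDS = set()  # revocation list, known only to the authorization server

def validate_locally(claims: dict, now: int) -> bool:
    """The fast way: check the self-encoded token's own data (here, just expiry).
    No network round-trip, but revocation is invisible."""
    return now < claims["exp"]

def introspect(claims: dict, token_id: str, now: int) -> bool:
    """The strong way: ask the authorization server, which also knows about
    revocations that happened after the token was issued."""
    return now < claims["exp"] and token_id not in REVOKED_TOKEN_IDS

claims = {"exp": 1000}
# Both methods agree right after issuance...
agree_before = validate_locally(claims, 500) == introspect(claims, "tok-1", 500)
# ...then the token is revoked mid-lifetime: they disagree until expiry.
REVOKED_TOKEN_IDS.add("tok-1")
disagree_after = validate_locally(claims, 500) != introspect(claims, "tok-1", 500)
```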
V
Self-encoded tokens are basically a cache of the data at the point the token was created. So, okay, that's what I wanted to say about how things work. I want to wrap up with just a brief overview of some of the current work being done in the group and sort of the future directions things are heading. You may have heard some mentions of OAuth 2.1, which is essentially an effort to consolidate the best parts of OAuth 2, the things that the industry has generally agreed are the best practices of how this stuff should work.
V
So if you actually look at what's in the field, there is a generally accepted best practice, and a lot of the optional bits of OAuth 2 aren't being used, things like that. OAuth 2 has grown into kind of a mess of a bunch of different specs and extensions. The idea is to roll those all up into something that actually represents what is being done, but in a document that is a lot smaller and easier to get started with. So that effort is happening.
V
Ongoing right now, there's also a profile for access tokens, because a lot of people do implement access tokens specifically as JSON Web Tokens, so that work is also being worked on right now: a standardized way to use JSON Web Tokens as access tokens. There are a couple of new features being added in extensions as well. Rich Authorization Requests is an extension of the idea of scope, where scope allows you to access sort of high-level buckets of types of data.
V
There is a lot of use, so OAuth is being used by a lot of other groups as well, being built on and extended in a lot of different ways, sometimes in different organizations, so I just threw a few of them up on the screen here. And that is where I want to end. Thank you all very much. This was a whirlwind. I hope this was helpful and helps give you an idea of how we are thinking about things.
E
That seems like, I mean, there's a whole separate set of questions about how the user understands the scope of data that they're granting access to, which maybe we can't discuss here, but certainly with the parties that are involved, it seems critical that you understand who you're granting access to. And if the data passed is merely a blob provided by the person who's requesting the data, it's hard to see how that ties to their identity.
V
Yeah, I kind of glossed over that in the flow, but it is that consent screen, the OAuth server prompting the user to log in, and then that is where they will sometimes see the request described on screen, like: this application is trying to access these parts of your account. That's entirely encapsulated at that point in the flow, right?
E
But where does the information about the thing that is requesting come through?
E
In a front-channel flow, it's all being passed through the user's browser, or the user agent, which doesn't necessarily know how to translate that bit.
V
Two parts to that. So the flow is started with... so every application has to be registered at the OAuth server. That's generally how it works: there's a registration step establishing an identifier for the application at the very least, and in some cases a password or a client secret. So that's...
V
It's usually actually out of band. It's like: the developer goes to the website and registers an application. There is a way to do it with an API as well, dynamic client registration, but that's much less common. More often it's literally the developer goes to the website, signs up as a developer, goes and creates an application, uploads an icon, things like that. That gives it an identifier which it can use in the flow, and then that identifier is passed in the first request from the application. Which now, to get to your second point:
V
How do you know it's really that application? If the app has a client secret, a password, that's required in order to actually get the access token in the back channel, and that's sort of how we can make sure that it really is that application doing it. Not all apps can use client secrets; some apps can't, like mobile apps and single-page apps, in which case that is kind of a big, you know, gaping hole in this, and there isn't really a way around it.
V
You can't actually tell if somebody is impersonating a mobile app or not, and that isn't something that can really be fixed on the wire without the cooperation of, for example, the app stores. But the best we can do there is rely on the redirect URL: where is the user going to be sent back to in their browser after the flow is complete? The redirect URL acts as a form of confirming that application's identity.
V
Yeah, the redirection closes the loop, but yes, there are also ways for a malicious client to cooperate with a user to get an access token that both of them, or the authorization server, thought was issued to somebody else. There are a lot of edge cases here, as you can imagine, of different aspects of this, and a lot of this is documented in the Security Best Current Practice spec as well: here are the things that you can prevent, here are the things you can't prevent.
V
Here are the ways that we're addressing, or trying to solve, those, and the ways that we know this is a limitation.
B
Thanks again, Aaron, that was a great talk. We are formally at time. Roman, I guess you could bring up the open mic slide just in case there is something that someone wants to say, but we should thank all the speakers for their talks, and I'm just about ready to close the session.