From YouTube: IETF112-SECDISPATCH-20211109-1200
Description
SECDISPATCH meeting session at IETF112
2021/11/09 1200
https://datatracker.ietf.org/meeting/112/proceedings/
A
All right, I think we're at the top of the hour, so why don't we get started. Good morning, good afternoon, good evening everyone, and welcome to the Security Area Advisory Group meeting; this is session one that we're holding now. My name is Roman Danyliw, I'm one of the Security Area Directors, and my co-AD is Ben Kaduk. Ben, if you want to turn on your video.
A
All right, if we can go to the next slide, please.
A
Of course, we have first the Note Well, which covers the various policies and procedures related to how we do work here at the IETF. If we could flip to the next slide: in particular, what we want to do is call out one of those things from the Note Well, which is our code of conduct. This is really a reminder, as documented in RFC 7154, that we extend courtesy and respect to our colleagues, and that when we talk about things we focus on the technical bits, not necessarily the messenger or the particular individuals involved.
A
So let's not focus on the person, let's focus on the tech itself, and just a reminder that we're all coming from different contexts and different perspectives. Remember that assumptions about the operational or technical environments may actually be different from what you consider for that particular environment. So please do give the benefit of the doubt to other participants.
A
Next
slide,
please
so
you'll
notice
in
the
agenda
here
and
on
the
on
the
112
schedule
that
we're
doing
something
different
than
what
we
usually
do.
We
like
to
have
sex
dispatch
early
in
the
week
we
like
to
have
sag
a
little
bit
later
in
this
week,
but
to
accommodate
different
kind
of
presenters,
we're
doing
a
we're
doing
a
hybrid
here,
so
a
split
session.
So
the
first
hour
of
this
meeting
is
going
to
be
focused
on
sag.
A
The
second
hour
is
going
to
be
sec
dispatch
and
on
thursday,
we're
also
going
to
split
the
time.
Sag
will
take
the
first
hour
and
if
sec
dispatch
needs
it
we're
gonna
we're
gonna,
have
them
in
the
in
the
second
in
the
second
hour.
So
the
agenda,
roughly
kind
of
splits
like
this,
where
today,
in
continuing
largely
on
the
conversation
we
had
in
the
last
ietf
meeting
about
what
post-quantum
cartography
agility,
will
look
like.
A
Last
time
we
talked
a
little
bit
about
what
we
would
be
doing
in
the
ietf
this
time
around
graciously
nist,
specifically
dustin
moody
who's,
the
post
quantum
cryptography
project
lead
at
nist,
has
agreed
to
come
here.
A
We're
not
gonna.
Do
any
agenda
bashing
here.
We're
gonna,
save
that
flexibility
for
thursday.
You
know
today
we
just
really
are
gonna
focus
on
our
invited
on
our
invited,
talk
so
kind
of
with
no
further
ado
ben.
Unless
you
would
like
to
add
something,
let's
kind
of
move
over
to
dustin.
B
Okay, thank you very much. I appreciate the invitation to come and speak here to the IETF. Hopefully you're seeing and hearing me; you're not seeing my slides yet, but hopefully you'll get them in a moment.
B
So I plan to talk for about 30 minutes or so and leave plenty of time for questions at the end, since I know there are lots of things you might want to ask. So again, I'm Dustin Moody from NIST, and NIST, as you are probably very well aware, is leading a post-quantum cryptography standardization project which we're nearing the end of, and that's what I will be speaking about today.
B
The motivation is probably pretty clear: it's well known that there are quantum computers actively being researched and built, and that if they are constructed to a large enough size where they can do pretty good computations, these quantum computers would have a noticeable impact on cryptography. We've seen in the press over the past few years progress being made. There are certainly still challenges to be overcome, it's not an easy problem, but quantum computers, if they are built, will be sufficiently powerful to give us computational power for certain problems.
B
So if a quantum computer comes online, it would completely break those standards. We would need to completely remove them and get new algorithms, so the threat is to public key cryptography the most, and that's where the focus of post-quantum cryptography is. Quantum computers would also be able to impact symmetric key crypto. We have various symmetric algorithms like AES and SHA-2 and SHA-3, but the effect would be a lot less dramatic.
B
Grover's algorithm would give you a polynomial speedup, not an exponential speedup, and we could handle that by increasing key sizes and using longer output from the hash functions. So we wouldn't need to completely scrap those algorithms; we just need to change how we use them, and that's easier to handle than coming out with new algorithms.
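A rough back-of-the-envelope illustration of the Grover point, as a sketch only: it assumes the idealized square-root speedup and ignores the real-world quantum overheads, which the talk notes are still uncertain.

    # Idealized brute-force key-search cost with and without Grover's
    # algorithm; doubling the key size restores the classical margin.
    def classical_search_bits(key_bits: int) -> int:
        return key_bits          # exhaustive search ~ 2^key_bits operations

    def grover_search_bits(key_bits: int) -> int:
        return key_bits // 2     # Grover ~ 2^(key_bits/2) quantum queries

    for k in (128, 192, 256):
        print(f"AES-{k}: classical ~2^{classical_search_bits(k)}, "
              f"Grover ~2^{grover_search_bits(k)}")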
B
So, next slide. There are many names given to this area: sometimes it's post-quantum cryptography, it's also called quantum-resistant cryptography or quantum-safe cryptography. The idea is that we're looking for cryptosystems which will be implemented on our current classical computers, because that will still be what everyone is mostly using, but they need to provide security against attacks from both classical and quantum computers.
B
Now, quantum computers: there is progress being made, but they're not yet here. So we may ask the question, why do we need to worry about this now? There's a simple explanation from Dr. Michele Mosca that explains why there's a threat now, even if we do not yet have a large-scale quantum computer, and the idea is this.
B
There's a harvest-now, decrypt-later threat: your adversary could be copying down your data, and it's encrypted and they can't read it right now, but they will wait until a quantum computer comes along and then they'll be able to get access to it. And this little equation, whether x plus y is greater than z, helps you measure that risk. If x is how long you need your data to be protected, y is how long until we have post-quantum algorithms standardized and widely implemented and adopted, and z is how long until there's a quantum computer, well, if x plus y is greater than z, they will decrypt your data and you will not be providing the x years of information security that you're hoping for. So you can put in what x, y, and z are for your particular organization and your application to see if you could be at risk.
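A minimal sketch of Mosca's inequality as just described; the numbers below are hypothetical placeholders, to be replaced with your own estimates for your organization and application.

    # Mosca's inequality: if x + y > z, data encrypted today is at risk.
    x = 10   # years the data must remain confidential (hypothetical)
    y = 15   # years to standardize and widely deploy PQ algorithms (hypothetical)
    z = 20   # years until a cryptographically relevant quantum computer (hypothetical)

    if x + y > z:
        print(f"x + y = {x + y} > z = {z}: at risk (harvest now, decrypt later)")
    else:
        print(f"x + y = {x + y} <= z = {z}: the stated protection window can be met")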
B
Already being in standards work, we know that y is not as small as we might hope; y is more likely going to be 10 to 15 years to get good standardization and adoption, and we've seen that in the past with crypto transitions. x will depend on your organization and application: it could be driven by regulations where you've got to keep health information private, or you're protecting financial data; if you're worried about national security, x could be really long, like 30 years.
B
So it's important to have an idea of what z might be, then. If we go to the next slide, there are various experts around the world who were surveyed in this quantum threat timeline report done by the Global Risk Institute, which tried to give an idea of when quantum computers big enough to break public key cryptography could be around.
B
Of
course
it's
an
open
research
question,
so
nobody
knows
for
sure,
but
they
they
surveyed
44
experts
in
the
field
and
asked
them.
What
is
the
likelihood
in
five
years
in
10
years
and
15
years,
that
we
would
have
a
big
enough
quantum
computer?
That
would
break
the
public
key
crypto
that
we
use
this
chart
is
kind
of
it
summarizes
their
findings.
B
So
in
terms
of
managing
risk,
we
can
see
that
maybe
10
years
15
years
and
in
their
opinion,
certainly
by
20
years.
This
is
something
we
need
to
be
working
on
today
and
seriously
considering
next
slide,
so
to
just
be
clear,
sometimes
when
we're
talking
about
cryptography
and
quantum
there's
another
side
of
the
coin
that
that
is
mentioned,
and
I
want
to
make
sure
that
you're
clear
that
that's
not
what
I'm
talking
about
today,
quantum
cryptography,
also
the
term
qkd
or
quantum
key
distribution,
is
using
quantum
technology
to
build.
B
Go ahead, next slide. NIST has been looking at this for over a decade, but it was around 2016 that we began to take some concrete steps in this project to get standardization happening.
B
We received 82 different algorithms, of which 69 met our requirements and were accepted into the first round. Since then we proceeded through a number of rounds where, during each round, which lasted about a year to a year and a half, we would evaluate the candidates internally, and the cryptographic community around the world would look at them as well, looking at their security, their performance, and so on and so forth.
B
At the end of each round, we would select the most promising ones to move on for further study. So we went to a second round, and then we went to a third round, which is where we are now. At the end of each round we issued a report explaining our decisions, and in each round we also held a workshop where submitters could come and talk about their algorithms and other research.
B
That said, it's not always easy to measure security in a concrete way, especially against quantum attacks. Quantum computers are not yet built; we don't know how expensive they'll be to run, and we do not know how fast they will be to run, so we have to make some guesses and extrapolations, and there's some uncertainty built into that.
B
We designed five security categories so that submitters could give us parameter sets and tell us how much security their algorithms would be providing with those parameters, and we related this back to the amount of resources it would take to break some of our symmetric key algorithms. It's kind of an unusual way to define it, but this is what we came up with at the time: level one meant that your algorithm was providing as much security as AES-128 against exhaustive key search.
B
Besides security, performance is obviously very important: if things are too slow or too large, we won't be able to use them. So we would measure performance on a variety of different platforms, software, hardware, small devices, so we could see the comparisons between the different algorithms. As we'll see, there's not one silver bullet that is just magically the best in all the different categories.
B
Okay, next slide. We had submissions from around the world: there were several hundred submitters involved with the algorithms, and if you put them on a map, they were from all six continents and over 25 different countries around the globe. It was nice to see that we have a worldwide effort going on here. Next slide, summarizing the first round: we had a lot of algorithms that we published on our website; we put their specs up and we had the code there, so you could download and implement them.
B
We asked for public key digital signatures, and the other category we asked for was public key encryption, or equivalently key encapsulation mechanisms. In the table down in the bottom right, you can see the categories of the type of algorithm as well as the mathematical primitive it was based on, and the most were based on lattices, and on codes for the KEMs.
B
We established an online message forum called the PQC forum, where we could keep track of comments about the algorithms; people could ask questions and discuss research results. That forum continues today and is an active area where there's a lot of discussion going on. In the first round we started to get research and the first performance numbers as well, so we could start evaluating these schemes.
B
After a year, we made some hard decisions and selected 26 of the schemes to move on into the second round. Next slide. Also, going into the second round we encouraged some submissions that were very similar to merge, and we saw four different mergers; they all advanced into the second round.
B
The second round was a lot like the first: cryptanalysis continued, some algorithms continued to get broken, we held a workshop, and there was more research. After 18 months we had 15 submissions that we picked to move on into the third round. We can go ahead to the next slide. There were some tough decisions here; if you've been following the process, you can see some of the submissions that were on the left.
B
Okay, let's move on to the next slide. Currently we are in the third round: we've selected seven finalists and eight alternates. The finalists are the algorithms that we feel are most likely to be ready for standardization at the end of the third round and would fit most general purpose applications; the alternates we are still interested in, but they would most likely need a fourth round of study.
B
Some of these were selected as kind of backup candidates, in case there were research results, or for certain performance applications: some of these algorithms were suited for, or their performance was tailored so that, they might be appropriate for one application but maybe not as good for general purpose use. The KEM finalists we had were Kyber, NTRU, Saber, and Classic McEliece, and the signature finalists were Dilithium, Falcon, and Rainbow, and then the alternates are listed there as well.
B
So if we look at the lattice-based KEMs, three of them were selected as finalists: Kyber, Saber, and NTRU. All three are good all-around candidates; their key sizes are pretty good, and their efficiency and implementation are quite good. Kyber and Saber are a little bit more similar; NTRU is based on a slightly different problem, but all three we selected as finalists. We had NTRU Prime as an alternate, and FrodoKEM was also an alternate; FrodoKEM made a security/performance trade-off where they went for a little bit more security.
B
Okay, next slide. Looking at the remainder of the KEMs, three of them were based on codes. Classic McEliece, which, if you've been in cryptography, you've heard of before, has been around for a long time; it was selected as a finalist. Very large public keys, very small ciphertexts, very efficient; for some applications that would work well.
B
Here are a couple of charts just to show you the different key sizes that we're looking at, and to some extent this is informed by the mathematical primitive they're based on. I do note that the axes are logarithmically scaled.
B
If we look at the signatures, we have two lattice-based signatures, Dilithium and Falcon. Again, both are very balanced and very efficient; they were selected as finalists. We had two signatures that were based on symmetric key primitives: SPHINCS+, which is very stable, and its security is very well understood because it's based on the security of hash functions, though it's a little bit larger and slower than Dilithium and Falcon; and Picnic, a very novel design.
B
If we look at a chart of performance, again on a logarithmic scale in the cycle count, we can see that each of these algorithms has different performance areas where it tends to do a little bit better. Some have better key generation, some have slower key generation, some verify very much faster, so we can see different comparisons there.
B
So, our timeline: we have been in the third round for over a year, and it will end roughly at the end of this year or the start of next year. Right now it's looking more likely that it will be the beginning of 2022, and that is when we will announce which finalist algorithms we will standardize.
B
We've
also
previously
mentioned
we
may
potentially
standardize
sphinx
plus,
even
though
it's
an
alternate,
because
we
want
to
make
sure
we
have
more
than
one
signature
algorithm
and
we
had
dilithium
in
falcon.
We
expect
to
choose
one
of
those
algorithms,
the
multivariate
signatures.
B
B
Are
these
these
algorithms
that
we
announced
to
include
solutions
that
can
be
used
by
most
applications?
So
that's
what
we're
targeting
at
the
very
first,
especially
we'll,
also
issue
a
report
to
explain
our
decisions
and
we
will
announce
any
candidates
that
we
want
to
advance
to
a
fourth
round
to
continue
to
study
in
a
fourth
round
for
potential
standardization,
these
algorithms,
that
we
announce
at
the
end
of
the
third
round.
B
Okay, next slide. I talked about the signatures: we have a lot more KEMs, and different types of KEMs, while in signatures we have a smaller number. So we're going to introduce at the end of the third round what we're calling an on-ramp for new signatures. We will issue a new call for signatures that we want designers to design for the future; we'll announce a submission deadline and give designers probably six months to a year.
B
This will be a much smaller scoped project than what we announced five years ago; we're not looking to get 60, 70, 80 new algorithms, we want a very small number. The main reason is to diversify our signature portfolio. We're happy with the general purpose digital signature schemes that are based on structured lattices.
B
That would be Dilithium and Falcon, but we also want another general purpose algorithm that's not based on structured lattices. The other ones still in the third round are a little bit bigger or slower in some areas. So that's the primary target we're looking for; we might also be convinced to be interested in some signature schemes that target certain applications.
B
How are we making our decisions? Well, right now we're already meeting to begin our process to select the algorithms. We're meeting frequently, comparing the algorithms, and taking a detailed look again at their specifications, what's happened during the third round, new research results, and new performance benchmarks, but we're trying to just follow the evaluation criteria that we announced.
B
There are some differences, and that's what's making this a hard decision. For lattice signatures, our main decision will be looking at Dilithium and Falcon; our other choices will mostly be made on an individual basis. If we standardize Classic McEliece, it's kind of competing against itself, and so on with the other algorithms. Next slide.
B
When we have something concrete, we will share it. We want to make very clear, though, that it may not be possible for us to resolve all the IP issues. We can't completely control what outside parties do, and IP is a legal issue; it's not to be decided based on, you know, what I think as a mathematician.
B
So, in light of the above, we think the discussion is best focused on the impact of IP: how will it affect adoption, and how should we factor that into our decision making process?
B
There has been some work done already on stateful hash-based signatures. The IETF has two RFCs that standardized XMSS and LMS, and based on those, NIST also issued a standard, SP 800-208, that has stateful hash-based signatures. I want to caution that these are not general purpose digital signatures: you have to carefully manage the state, otherwise you can completely blow your security. All of these documents include guidance on how to do this safely and which applications it is appropriate for, but I wanted to make sure you're aware of hash-based signatures.
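A minimal sketch of the state-management point: each one-time key index in a stateful hash-based scheme such as XMSS or LMS must be used at most once, so an implementation has to reserve and durably persist the index before releasing a signature. This toy tracker is illustrative only; it is not an implementation of either scheme.

    # Toy one-time-key index bookkeeping for a stateful hash-based
    # signature scheme; reusing an index can forfeit all security.
    class OneTimeKeyState:
        def __init__(self, total_leaves: int):
            self.next_index = 0
            self.total = total_leaves

        def reserve_index(self) -> int:
            if self.next_index >= self.total:
                raise RuntimeError("key exhausted: no unused one-time keys remain")
            idx = self.next_index
            self.next_index += 1   # persist this new value durably BEFORE signing
            return idx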
B
Okay, next slide. As we look towards the transition, one thing that's being discussed is a hybrid mode, where you use a classical algorithm and a post-quantum algorithm in some combination and you're relying on the security of both algorithms. This is a potential approach for migration that many people seem to be interested in. NIST allows this in one of our documents: you can combine a shared secret derived from a current NIST-standardized algorithm with a shared secret derived some other way, e.g. from a post-quantum algorithm.
B
You can concatenate them and run them through a KDF, and you can still get FIPS approval on that process, as explained in that document. We know there's going to be, and can probably continue, discussion on the best way to implement any mode where you're combining algorithms; NIST hasn't really taken a position on the best way to do this. We think it...
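A minimal sketch of the concatenate-then-KDF idea just described; the specific KDF construction, context label, and output length below are illustrative assumptions, not the normative procedure from any NIST document.

    import hashlib

    def combine_shared_secrets(ss_classical: bytes, ss_pq: bytes,
                               context: bytes, out_len: int = 32) -> bytes:
        """Concatenate a classical and a post-quantum shared secret and feed
        them through a simple counter-mode, hash-based KDF (illustrative)."""
        ikm = ss_classical + ss_pq
        output = b""
        counter = 1
        while len(output) < out_len:
            output += hashlib.sha256(counter.to_bytes(4, "big") + ikm + context).digest()
            counter += 1
        return output[:out_len]

    # ss_ecdh would come from e.g. an (EC)DH exchange, ss_kem from a PQ KEM
    # decapsulation; both values here are placeholders.
    session_key = combine_shared_secrets(b"\x01" * 32, b"\x02" * 32, b"hybrid-demo")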
B
So, let's go to the next slide. NIST will issue guidance on the transition to post-quantum algorithms. We don't yet have algorithms announced, so we cannot give that guidance yet, but we've previously given crypto transition advice, and we will do so here as well. Let's go to the next slide: we are working with the NCCoE, the National Cybersecurity Center of Excellence.
B
We also have a project that's focusing on the transition and migration to PQC. This project is being run by some of our colleagues in the NCCoE, but we're participating with them in the process. They've held a workshop and finalized their project description; you can go to their website. Most recently they published a Federal Register notice where you can join the project and participate in this process.
B
So I'd encourage you to go to the project webpage if you're interested in learning more. They also have white papers on how an organization can start preparing for this transition, and this is an important effort to help us get ready; there are things we can do now, before the standards are announced and before the final standard is published.
B
It mostly involves taking a look at the cryptography you're using now, seeing what public key algorithms you have, and looking at how you can transition out of them and swap in new algorithms; that's often called crypto agility. Then make sure your staff is aware of this, that you've got someone tasked with having a plan for making the transition to post-quantum algorithms, and that you're tracking developments in quantum computers and in standardization projects.
B
You could test out some of these algorithms to see how they fit in your applications. We'd encourage you: the sooner you plan ahead and prepare, the better it will be for you. You'll make fewer mistakes, and it'll be less disruptive and less expensive as you do that. So, next slide.
C
B
Yeah, so at the end of each round we allowed each of the algorithms that advanced to make minor tweaks, not major redesigns. We expect the same at the end of the third round: if we select an algorithm, we would work with the team and allow them to make any minor tweaks from things they learned during the third round, but nothing major, no big redesigns. So I would say the type of changes would be similar to what you saw at the end of each round when teams made tweaks.
E
Mad cover. So one of the things that happened with elliptic curve cryptography is that there was a very long gap between standardization and adoption, and that was partially due to IPR concerns; some operating systems really didn't have any support for quite a long time because of those IPR concerns as well as performance issues. Then, when further developments resolved those performance issues, NIST didn't respond, and it's really not until the last two years that NIST has said, oh yeah, we can do that.
E
We are going to standardize the faster curves. My concern is: is this going to happen with post-quantum, where we have possibilities, especially in signatures, of faster and better algorithms coming out? Is NIST going to be more responsive, or is it going to be a situation where we're stuck for a while with things that maybe we can't deploy everywhere and really need to, because NIST isn't paying attention anymore to what's going on, and they say, okay, we've got the spec, we're done?
B
We expect our post-quantum work to continue on into the future. At the end of the third round we're announcing these algorithms and we will get a first standard out, but the project will continue on: there's going to be a fourth round, and there's going to be the on-ramp for signatures. So, to the best of my knowledge, yes, we will be.
F
Yeah, great talk. I've got to ask about threshold: can any of these be adapted to a threshold version?
B
It's possible that some of them could. As they currently are in the specs right now, I'm not aware that any of them have a threshold implementation, and that's not our primary focus; we first want to just get an algorithm standardized so that people will have it and can begin using it, and it could be future work for there to be threshold versions of these.
F
Are any of them particularly suited, where it looks like they could allow that?
B
Some of the lattice-based ones, potentially. I think, if I'm remembering, LIMA was a first-round lattice-based candidate and it talked about how it could potentially be done in a threshold way, so that would be my guess: the lattice-based ones might have the easiest path to do that, but I'm not currently aware that anyone has done it yet.
F
G
Yeah, thanks, thanks for the talk, appreciated. So I noticed that you're standardizing KEMs, but a lot of our protocols really want NIKEs (non-interactive key exchange) instead. I know there's sort of a dearth of algorithms there, but is NIST planning to do anything to try to foster that work?
B
Well, back at the beginning, when we were first starting this project, that's what we were hoping for, kind of a direct analog of Diffie-Hellman. Looking at the submissions that came in, there were none that really provided that at the time. I think there have been some ideas out there right now that could potentially do that, but I think our feeling is...
B
G
H
Hi, I guess I'm partly repeating what was said earlier: encumbered outcomes would be a really bad outcome for this, I think. I know you can't control that. Also, though, I think the thing you can control is the number of options. I know that you're not liable to pick one single algorithm and declare a winner, but I hope that for each algorithm category you don't declare 75 different options, because that would cause a lot of work in an organization like the IETF, where people then have to decide which of the 15 options they winnow down to 10 and then to one and so on. So I'd encourage you, to the extent possible, to reduce the number of outcomes to the absolute minimum you can.
B
I
Hi, Ben Schwartz of Google. I wanted to ask if you and the whole group at NIST had given much thought to schemes that combine one of these signature algorithms with a Merkle tree construction to amortize the signing costs. There's an adopted draft in the TLS working group right now on batch signing, using Merkle trees to share the cost of a signature across a large number of handshakes. I wonder if that influences any of your thinking.
B
Yeah, that would fall into our third evaluation criterion, the algorithm and implementation specifics. We are aware of research being done in that direction, and if an algorithm allows for batch signing in a particularly efficient way, that would be a plus for it. So we do try to be aware of that and factor it in when we're going to select something.
I
Okay, yeah, just because in general it seems like slow signature schemes with small signatures could be combined, sort of trivially, with the old-fashioned Merkle signature trees in that situation.
B
D
Hi, Ben Kaduk. So there was a topic that came up in the chat a couple of times about key sizes, and I know you had, I think, a little bit of data in the charts about key sizes. But is there some minimum expansion factor in key size that we're going to hit when we go to the post-quantum algorithms?
B
Well, for the main ones, the finalists that we're looking at right now, the structured lattice ones for KEMs, they're roughly about a thousand bytes for category one; that's the smallest we're seeing for good general purpose algorithms. SIKE does have smaller keys, a couple hundred bytes, but then its performance is an order of magnitude slower.
B
J
I remember at the Florida iteration of this conference there was a proposal around hybrid schemes. It was sort of flippant, but the proposal was that we could achieve a security level by combining two smaller, faster things into a single primitive. Do you remember that idea floating around? Has there been any traction, or did that idea die? I know it was a joke, but it sort of seemed maybe worth pursuing.
B
I don't remember that from the Florida workshop, so I'm not remembering it, which makes me think... I mean, maybe there have been some efforts in that direction, but I'm not aware of them, so I don't know.
A
I am not seeing anyone else join the queue, so Dustin, a profound thank you for joining us and giving us an update on what NIST is doing. This is such an important activity for our future roadmap. If we have any other questions we may follow up with you, but again, thanks so much and have a good rest of your day.
A
All right, so we have ended early; we have about 10 minutes officially on the schedule before we pivot to SecDispatch. So I'm opening the mic, an open mic for SAAG topics, and after we drain that queue we'll just pivot on to starting the SecDispatch meeting. Would anyone like to join the queue for the SAAG open mic?
K
Hi, Flo from UK NCSC. I was wondering, kind of picking up on something we discussed at the last SAAG meeting.
K
There was some chat around coordination of post-quantum work in the IETF, and it seems on the mailing list that that's kind of been turned into the discussion around the PQ maintenance group, which sounds great, but I was wondering if there's any plan for coordinating the post-quantum work between different groups, the stuff that's going on in the various working groups.
A
I would say, from at least the AD perspective, we are just informally doing that. For a formal class of coordination, if we end up with a working group to do that, we would hope that that entity would roughly take the lead; so that class of coordination is in the thinking, but right now it is roughly just informally done. The three places, at least in SEC, where we have actively chartered items are LAMPS, TLS, and IPsec.
A
Thank you. I think that's up to what the community actually wants. That particular charter we're discussing on the mailing list, the agility working group, is largely to catch orphans, so it's focused on protocols we may need to touch that don't have existing working groups. If there is existing work, for example in TLS or IPsec, it's probably going to be handled out of that working group, which is not to say the topic can't be coordinated, but that work is likely going to be in that working group, at least as the text is currently written.
A
L
Apologies for that; hopefully you can hear me now. So I was just saying it's afternoon here in Helsinki, so good afternoon, evening, or morning, depending on your time zone. I guess, Roman, Ben, we can just start with SecDispatch; we don't have to wait for another eight minutes to meet the hour. Hopefully that's fine. (Yeah, absolutely, please continue.) So welcome to SecDispatch at IETF 112. We have our ADs, and my co-chairs Richard and Kathleen will be helping me today.
L
So, welcome. I'm trying to arrange the slides from here. You have already seen the Note Well once and it still applies; it hasn't changed, you're still in an IETF meeting. Hopefully you're familiar with most of the BCPs, but if not, you can have a look at them; they are listed here on the slide. Please do go and have a look at them, it's anyway good to refresh your memory once in a while.
L
Perhaps we would like to emphasize that the IETF code of conduct still applies. You just saw this when Roman was presenting the slides earlier, but again, to re-emphasize.
L
This session is being recorded, if you were not aware of it so far: you were being recorded in the last hour, and you will continue to be recorded for a little over the next hour as well. Headsets are recommended, and state your name before you speak in the mic queue. All right, here are some links. Very importantly, we'll be using notes.ietf.org for notes, and we'll be using the SecDispatch link, not the SAAG link.
L
Thanks, Jonathan, so we'll have note takers. With that sorted out, before we get to the actual agenda, here's a quick reminder of what SecDispatch is and what we do. SecDispatch is mostly for recommending the next steps for new work; it does not adopt drafts.
L
Yeah, hopefully you're familiar with these buttons by now; I don't think we need to spend much time on this slide. Before we get to the agenda for SecDispatch, there was a draft on secure credential transfer presented in the DISPATCH session earlier this week, so yesterday, and the outcome is on the DISPATCH mailing list. Kirsty has thankfully given us a heads up so that we can also inform the security community here in SecDispatch.
L
The full outcome is on the mailing list, but we wanted to provide a heads up that there will probably be a BoF; at least, there was consensus in the DISPATCH working group to initiate a BoF on this topic, and I'm sure this is of interest to many of those attending SecDispatch. So do have a look, follow what's going on, and we might have a BoF before or during the next IETF in 2022.
L
With that, we are now at the agenda for this session. Thankfully we managed to finish the administration before the hour, so we are in good time. We'll have a presentation from Tommy on private access tokens, another presentation on security and privacy considerations for multicast transports from Jake, and then one of the chairs will summarize the outcomes at the end of the session.
L
We already have Jonathan, again thanks for helping us with the notes, so we have scribes in place. Is there anyone who would like to bash the agenda?
M
N
All right, good morning, good afternoon everyone. I'm Tommy Pauly, and I'm going to be presenting on behalf of several co-authors.
N
I'm from Apple, and I'm also working on this document with Chris Wood, who's at Cloudflare, Jana Iyengar from Fastly, and Steven Valdez and Scott Hendrickson, who are at Google. This is a document that we are calling Private Access Tokens.
N
First we're going to talk about the motivation for why we think this is important work to do, then we're going to briefly go over what the protocol architecture is without diving into all of the details (if you want to see that, read the document), and then lastly talk about some deployment considerations before we get on to the dispatching questions. All right, let's go.
N
What problem are we trying to solve here? Let's tell a little story. Servers, as we know, often use client IP addresses as an identification mechanism, because clients generally just tell them: hey, when I'm connecting to you, here's my IP address, you can know who I am. This is useful, and amongst other things, servers can recognize these addresses over time. Some of these are useful features: they can use it to rate limit access to some particular resource when that's appropriate, and the origin can say, oh yeah,
N
this is the fifth time I've seen you today or this week. Sometimes that's a good thing. For example, there are services that want to be able to have a metered paywall where you get seven free articles, and if I can see this, you get the free articles.
N
You have issues where multiple clients can quite often be behind the same IP address, and you essentially are sharing fate with everyone who's at your house or at the cafe you're at; you can hit limits. But worse than that, being able to track the IP address over time is a privacy violation, and it's often a tracking vector.
N
But, as you'll note, this does take away some of the capabilities of origins to do correct rate limiting, or any type of inference of client state, because they don't know the IP address anymore; they don't know how to do this, and this is what the entire ecosystem was built upon for a lot of things.
N
So Private Access Tokens is about figuring out how we let things like rate limits work regardless of IP address, but we don't want to do this by introducing a new stable identifier that's going to hurt privacy. We want something that is privacy preserving but lets you prove something about your access patterns. And so where is this useful? It's certainly not for everything.
This
is
about
cases
where
you
have
anonymous
access
that
you
want
to
be
able
to
prove
some
limited
client
state
to
do
things
like
per
origin
rate
limiting-
and
this
is
not
for
any
case
where
you're
going
to
log
in
anyway-
that's
a
much
stronger
identity
and
that
should
be
solved
elsewhere
and
help
us
think
about
this.
N
I
just
want
to
be
able
to
read
a
wikipedia
article.
I
just
want
to
use
the
search
engine.
Public
resources,
no
need
to
track
me
on
that
on
the
right
side.
In
the
orange
box
we
have
things
that
legitimately
should
have
authenticated
access.
If
I'm
going
to
upload
something
to
my
own
social
media
account,
you
should
probably
know
who
I
am,
and
I
should
probably
log
in
so.
The
question
is,
you
know,
there's
this
category
in
the
middle,
which
is
something
where
I'm
coming
in
with
an
anonymous
access.
N
I haven't logged into anything, but you do still want some minimal per-client state, such as rate limiting, as we mentioned before. This can be accessing something a bit more privileged, like being able to read a newspaper article that otherwise would require you to pay, where they're trying to let you see some things.
N
But this also applies to account creation, where a site wants to make sure that you're not having a thousand different accesses from a bot trying to create a thousand new accounts. Or even if I'm logging into a site, where I'm anonymous before because I haven't logged in yet, a failed login is something that you probably want to rate limit so that someone's not just trying to brute force a password. These are cases that have been built up around doing IP address rate limiting today; they exist, and they're going to need to do something as connections get more and more private. The concern is that, if there is not a good privacy-preserving solution, use cases that are in that middle box are going to have to move to one side or the other. Either they're going to say: great, we're just going to go totally open, and that's fine.
N
But we're concerned that there are going to be more things that would move towards the right, to say: okay, if you want to read this now, you need to sign in with this major social media network to be able to get into something; or hey, if you want to create an account with us or log in, you need to bootstrap it with some other account from some major service that we trust. That's going to be worse and worse for privacy.
N
Essentially, this allows the client to prove to an origin that it has performed fewer than N accesses in a time window, and that's all it's going to prove. The key requirement we have here is that we want to make sure that no entity in the system can correlate user identity with browsing history because of this. All right.
N
So how does the protocol work? There are two parts to it: there's a token challenge and request, and then there is an issuance protocol. The challenge and request are quite simple: what we're proposing here is just a new HTTP authentication method. The challenge is essentially asking the client: hey, you're trying to do some privileged operation, you're trying to create a new account, you're trying to read the newspaper article; you need a token to get this, or I'll...
N
So the origins can challenge for this, and then the clients will asynchronously fetch an unlinkable token that's specific to that origin and then present it, and then the origin can validate that without having to do any more work; this is a publicly verifiable thing. So how do we actually get the tokens?
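Before the issuance part, here is a rough sketch of the challenge/redemption exchange just described. The authentication scheme name, header fields, and placeholder payloads below are illustrative assumptions, not the exact syntax defined in the draft.

    # Hypothetical shape of the origin's challenge and the client's retry;
    # in the real protocol the challenge and token are structured objects
    # carried base64-encoded in these header fields.
    challenge = (
        "HTTP/1.1 401 Unauthorized\r\n"
        'WWW-Authenticate: PrivateAccessToken challenge="<base64 TokenChallenge>", '
        'token-key="<base64 issuer public key>"\r\n\r\n'
    )

    retry = (
        "GET /article HTTP/1.1\r\n"
        'Authorization: PrivateAccessToken token="<base64 Token>"\r\n\r\n'
    )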
N
However, you know, this doesn't meet the goals we have for privacy, because this issuer would be in too privileged a position: it would be able to identify the clients and potentially track their state, as well as knowing all the origins that they're going to, and this would allow it to learn the browsing history, which would be a bad privacy story. All right, so we end up with a bit more of a split model, similar to what you'll have seen if you've looked at Oblivious HTTP.
N
The mediators are responsible for keeping a count of the tokens that are issued to each client for a specific anonymous origin within a window, and the window is something that the issuer can define, to essentially say: I care about rate limiting things within an hour; you can only try to create an account five times in an hour, or you can only have five failed logins in an hour. That policy window is given back.
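A minimal sketch of the per-client, per-anonymous-origin counting the mediator is described as doing; the limit and window values are hypothetical here, since in the protocol the issuer communicates the policy window.

    import time
    from collections import defaultdict

    class MediatorRateCounter:
        """Toy token-issuance counter keyed by (client, anonymous origin)."""
        def __init__(self, limit=5, window_seconds=3600):
            self.limit = limit
            self.window = window_seconds
            self.issued = defaultdict(list)   # (client_id, anon_origin_id) -> timestamps

        def allow_issuance(self, client_id, anon_origin_id):
            key = (client_id, anon_origin_id)
            now = time.time()
            # Drop timestamps that have aged out of the policy window.
            self.issued[key] = [t for t in self.issued[key] if now - t < self.window]
            if len(self.issued[key]) >= self.limit:
                return False                  # over the per-origin rate limit
            self.issued[key].append(now)
            return True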
N
I just want to briefly mention some of the cryptographic dependencies we have for the protocol. For the origins, it's very, very simple: they just have to do a verification of an RSA blind signature token. They don't have to do any double-spend prevention; they can just validate that the token looks good. On the issuance side, the client uses HPKE to encrypt the actual origin name to the issuer, and then the system also uses a blinded Diffie-Hellman with a Schnorr proof of knowledge, to help the mediator prove that the client is not lying about the mapping between its anonymous origin and the actual origin name. Chris will be talking about that in CFRG, for the crypto details, if you're interested.
N
All right, so we've talked about why we think this is important to solve, and we've talked about what the basic architecture is, but the last thing I want to hit on, which is I think one of the most interesting parts, is the deployment model.
N
I mentioned some of this already, but the clients are responsible for choosing their trusted mediators. These are parties they're willing to log in with, or who have some notion of client identity anyway, that they know aren't going to learn about their browsing history but can help them when they want to get tokens. This could be based on a number of different things; that's not specified in the document. It could be based on a device-level built-in hardware attestation, or it could be some verified account that you have with a service.
N
But if trust does break down, if mediators start lying or letting clients get away with too much, then you could imagine that there wouldn't be good support between mediators and issuers, so they do need to live up to a certain standard.
N
Briefly, talking about client identity: this is something that is determined by the mediator, really. The Private Access Tokens architecture does not require just one mechanism for this; it's something that we can imagine could evolve, but it really just needs to be something that the ecosystem agrees is hard to forge. It shouldn't just be: oh, you have this IP address.
N
It needs to be something that's relatively hard to get, so one client couldn't just get a thousand different identities very easily, but users could definitely legitimately have a few identities. I may have different devices that, if I'm using device-level authentication, would look like locally different identities.
N
You could also imagine that different applications or different browsers on a single device could use different accounts as their root of trust, but effectively the amount I'd be able to multiply my effect on the system would be maybe one, two, or three times, not thousands of times, to be able to fool the system. And then, lastly, the thing I want to touch upon for deployment is how we avoid centralization in the system; it's definitely something that we are conscious of and want to avoid.
N
The mediators and the issuers, kind of by definition, are entities that are helping represent many different clients and origins, so there are fewer mediators and issuers than clients and origins; that is important. But we should also avoid letting the overall ecosystem have just a handful of these that are just the big players, where it's impossible to get in.
N
It needs to be easy for new mediators and issuers to enter the overall ecosystem, and you particularly want to avoid situations where issuers, who are kind of working on behalf of origins, only allow a specific mediator, essentially forcing clients to have a particular type of trust in order to be able to use different origins. So, yeah, this is not an easy problem to solve; it's the type of thing that is difficult to solve at a protocol level, but we believe that Private Access Tokens...
N
Looking at some of the comparisons: as I mentioned before, if we don't have a good solution in this space, we'll see, and we already do see, a lot of things moving towards "sign in with blah", where blah is some large company. These are things that origins choose to prefer and choose to partner with, and they can essentially embed some relationship with another site that will be more likely to have a user account already, and they can be sharing data on the back end.
N
You really don't know what they're doing, and this is potentially a big privacy problem. So without an alternative for clients to use, if we're trying to move away from captchas, and if we have more private IPs and such, we could see things moving towards this, and this being a per-origin choice is something that would be really hard to move; we're not going to end up with a situation where there's "sign in with" 50 different services and every little guy gets in there.
N
Another comparison to make is to existing proposals like Privacy Pass, and I'll mention a bit more later. Privacy Pass is something that allows a client to present a token to an origin that it got from another origin it was able to get some bag of tokens from; it's not origin-specific, and in this case the origins that are redeeming...
N
With Private Access Tokens, overall, yes, there is that general risk, but it's important to note that the origins don't get to see the mediators; they're only doing this through the issuers that they work with. So origins cannot individually discriminate based on how the client authenticated: they have actually no way of knowing how the client authenticated, or what type of device or what type of account it had. So it's actually a better privacy story here. Now, you could have a situation where issuers start rejecting new mediators.
N
Since mediators and issuers are all talking to each other, and they're not really paired in any way, they are going to be a kind of globally known list, and this is something that could be publicly reported and publicly audited if there start to be blockages between any of them.
N
Now, sometimes this could be legitimate: you could have a mediator that starts cheating and lying on behalf of its clients. In that case, yeah, the ecosystem should decide to reject them, in the same way that you can stop trusting a given certificate authority if it stops behaving correctly around how it signs certificates.
N
So this, we think, is the type of architecture you would end up with, and we think the parties could have the right incentives to move this forward. I do see someone in the queue, but I'll just quickly finish my slides, I only have two more. Anyway, we have the question about where this work should be done.
N
I do want to point out that there is a lot of related work within Privacy Pass. Private Access Tokens is very similar in many ways to Privacy Pass, but it has some key differences from what Privacy Pass is currently working on. First, it does involve some per-client and per-origin state; it's not just allowing unlimited access, and we believe this is actually something that makes it much more attractive and more useful for cases that want to be able to replace the IP address as a rate limiting mechanism.
N
A private access token is used in response to a challenge, so it's an online protocol, and this doesn't allow you to do the same type of token hoarding that you could with Privacy Pass, in which you could actually get passes from many different clients, hold on to them, and then spend them and make it look like you're legitimate when you're not.
N
And lastly, we are defining this as a publicly verifiable token, so it doesn't require the origin to do any work of talking to the back end or the original token issuer to prove that it's a valid token. So in some ways we think this could be a more generic and potentially more useful form of Privacy Pass. All right. So our question, essentially, is: first, what do people think, but then, if this work should be done, should it be done as something like extending the work that Privacy Pass does?
N
Should it be something short-lived in a very specific working group, or something else? And that is all I have. PHB?
F
Yeah, I like this a lot; there's just one slight problem. I proposed something very similar back in 2003, and Verisign has a patent on it. It's not a very problematic patent; it was filed March 25th, 2003, but that is something to bear in mind.
F
The other thing is that what I was doing at the time was, I got very upset about proof of work, which I believe is an absolutely appalling proposal; I thought it then and I think it now. The claims are very broad; it's not just for trusted computing implementations, but the main idea was to put this in a trusted module on the device, and then you can get rid of all your other requirements.
F
You know, all your mediating parties go into the device. It might be that in the current circumstances, probably the only thing Verisign could do to get some value out of that patent at this point would be to make some agreement with Google; if you could get Google and Apple involved, they control silicon and they could get it pushed into the hardware.
N
F
Yeah, I'll file the IPR, or tell Bert or whoever to do it. I think that this work is appropriate for the Privacy Pass working group, and I would be happy to see it come there. I don't think that there's any presumption that we would essentially take this work as-is and add it as a new working group work item. I think that these use cases are important and that we should think about making sure that we satisfy the use cases that are most prominent here, and that may mean changing what the Privacy Pass working group is working on. I don't think that we should essentially forge ahead with both this and something that looks like the current Privacy Pass protocol in parallel; I think we should essentially look at the elements of these, look at the requirements, and come up with one solution to pursue first at least, and let it roll out and get some experience with it before we attempt to do another permutation of the same ingredients for a similar purpose, right?
N
Yeah, and just speaking to that briefly: we have both Chris Wood and Steven Valdez, who are Privacy Pass authors, working on this, and I think there is an appetite, even if long-term some of the use cases could involve different flavors of tokens, to make a lot of the stuff, like the token challenge, request, and issuance, compatible, and maybe just have "I want this type of token" or "I have that type of token". I think those are all reasons to take this to the Privacy Pass working group.
Great, that makes sense.
I
As an individual, I have to say I don't find this as compelling as Privacy Pass today. In particular, it seems to me that this architecture has much more complicated trust relationships required between the various parties, whereas Privacy Pass manages to achieve, it seems to me, very similar security and performance results with much less reliance on tricky, partial trust relationships between the parties.
I
So I think that we should try to essentially lean toward computational privacy and away from information-theoretic privacy to the extent that we're able, and Privacy Pass as currently formulated, I think, gets us a little closer. Also, I think there's more similarity here than we let on, because both of these protocols have certain things baked into them that we are not particularly talking about. For example, having a huge number of issuers here would get you back into, I think, the Privacy Pass problem of essentially leaking one bit of information for every issuer that is or is not present on the client, and so that means they share the same privacy consideration there.
L
Then, as much as I hate to disrupt the technical conversation, let's end it here. For all others who are in the queue: the queue is now closed, and please keep it brief; we would, most importantly, like to hear your feedback on how to dispatch this.
L
N
I think the main issue we see with the current Privacy Pass is that it is simple, but it also offers sufficiently little information, low value, to the origins that it's actually not useful for replacing a lot of the cases that would have captchas or other types of rate limiting today. So we want to just increase the utility. All right, Siobhan.
O
Q
Okay, great; turns out my headphones don't work. So, yeah, we were just wondering: the presentation kind of read like privacy-preserving rate limiting, but the draft mentioned geofencing as well, and I saw there was a client hints proposal as well in the GitHub repository, and I'm just wondering if there is a reason for that. And, just quickly, the other thing was: it seems like there's a centralization concern around especially the mediators, so it might be worth having some discussion about that in the draft.
N
Yeah, so yes, those are mentioned in the draft, and we believe the overall architecture could allow you to carry other bits of information beyond rate limiting; essentially, you could use this, or any Privacy Pass type ecosystem, to sign or verify that, oh yeah, you came from this country or whatever, so we know something about the client that way. There are other bits of information; we don't want to get into that complexity here in this talk. What was the second part of your question?
Q
N
Yeah, and yeah, we should discuss that more in the draft. As I mentioned here, I think largely it does degenerate to a lot of other cases: essentially, if an origin can know where your sign-in, your root of trust, is coming from, then it could similarly end up favoring just a few large actors, and hopefully we can actually find a way to make this better than those.
G
Hi
tommy
thanks
for
presenting
this
hello,
so
I
think
I
mean
I'll
I'll
hit
this
best
question
first,
but
I
do
want
to
talk
about
technology
for
a
moment,
so
I
think
this
isn't
quite
ready
to
go
honestly.
There's
a
bunch
of
questions
about
what
the
actual
problem
which
you're
trying
to
solve
is.
I
do
agree
with
ben
that
having
this
and
privacy
passed
together
is
like
bad
news,
so
we
should
try
to
figure
out.
You
know
what
the
itf
is
going
to
do
in
this
spaces.
G
So I think we really first need to get agreement on the use cases. You do list a number of them, but this is quite a complicated system, and the complexity all derives from the desire to have a rate limit for a specific client on a specific origin. All the other cases you have are largely about making assertions about the client in particular, but not about the origin specifically, and so all of those can be solved with the basic Privacy Pass solution and do not need any of this complicated stuff having to do with the mediator and issuer. So I think it's really important to determine whether we think that's an important enough use case to motivate an effort in this space. And I guess you're probably getting, from what I'm saying, that I'm not persuaded.
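To make that distinction concrete, here is a deliberately naive sketch contrasting a single global per-client budget with a budget keyed by client and origin. The class names, limits, and origins are invented; the whole difficulty in the actual proposal is enforcing the per-origin count without any single party learning both the client identity and the origin, which this toy code does not attempt to do.

```python
from collections import defaultdict

class GlobalRateLimit:
    """One pool of tokens per client, regardless of which origin spends them."""
    def __init__(self, limit: int):
        self.limit = limit
        self.spent = defaultdict(int)          # client -> tokens spent

    def allow(self, client: str, origin: str) -> bool:
        if self.spent[client] >= self.limit:
            return False
        self.spent[client] += 1
        return True

class PerOriginRateLimit:
    """A separate budget for each (client, origin) pair."""
    def __init__(self, limit: int):
        self.limit = limit
        self.spent = defaultdict(int)          # (client, origin) -> tokens spent

    def allow(self, client: str, origin: str) -> bool:
        key = (client, origin)
        if self.spent[key] >= self.limit:
            return False
        self.spent[key] += 1
        return True

# With a global pool, heavy use on one origin exhausts the client's budget
# everywhere; with per-origin limits it does not.
g, p = GlobalRateLimit(limit=5), PerOriginRateLimit(limit=5)
for _ in range(5):
    g.allow("client-a", "news.example")
    p.allow("client-a", "news.example")
print(g.allow("client-a", "shop.example"))   # False: global budget exhausted
print(p.allow("client-a", "shop.example"))   # True: separate per-origin budget
```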
N
Can you explain why they are not origin-related? That essentially having a global pool of rate limiting for everything I do would satisfy those cases?
G
Well, no, what I'm saying is geo doesn't have anything to do with the server. Is "is it a human" something to do with the server?
G
Okay, but you do list them, so, I think, you know. But we have CAPTCHAs, right? The point of a CAPTCHA is: is the person a human? And so when you sort of say, oh, the alternative is CAPTCHAs, well, it turns out that if this is not CAPTCHAs, they'll turn their Privacy Pass into CAPTCHAs.
G
Well, what I would like to see is a more concrete analysis of the use cases: which ones require technology that is this fancy and which ones do not. And then we can decide whether those use cases are sufficiently important to solve, to pay the price of the complexity and the potential choke-point effect that this creates.
G
I guess I do think it's important to mention this choke-point effect, because, as was mentioned previously, we do have a worked example of this in the identity system, and we have basically two identity providers. And I don't think it's at all impossible that the origins will insist that the issuer only trusts Google and Facebook and Apple. That would not be a desirable outcome.
P
Yep, I'll be very brief, just two quick technical points to echo the discussion from yesterday in a different working group. I think this draft really does need to address, before it goes forward, how to avoid the problem of colluding mediators and issuers, because if that's not addressed, the benefit it's providing is completely illusory and in fact probably unhelpful, because it potentially implies it's giving more privacy when it's actually giving less. I think that, from an end-user point of view, that's dangerous if that happens. Also, I completely agree with Phillip about the absolutely appalling use of proof of work; just as a general principle, that absolutely should be avoided. In terms of answering the dispatch question, Privacy Pass seems like the obvious place, and I think, as somebody just suggested in the chat, maybe have it looked at in more depth in Privacy Pass and then, if need be, come back to dispatch after a deeper look with more time. That would be my suggestion, thanks. So, that's good.
L
Okay, question to the ADs and chairs: what's the dispatch decision here? I think you have been following the chat more than I have, but is dispatching to Privacy Pass the correct decision here?
R
Yeah, there have been some proposals at the mic here and some discussion in the jabber channel, so it looks like there's fairly good energy behind dispatching to Privacy Pass and having some detailed discussion there about how to mesh this with the use cases and technologies that Privacy Pass has. And if anything new comes out of that, I think we can take it up here again, but I think, to first order, dispatching to Privacy Pass is the outcome here.
A
Yeah, so just to follow up on what Richard said, it really does look like the expertise to have this conversation is in Privacy Pass. So whether we decide this is Privacy Pass plus this work, Privacy Pass becomes this work, or the use cases from this inform Privacy Pass, all the arrows kind of point back to the nexus of Privacy Pass. So let's continue the conversation there. I would remind the community we still have two potential angles out of that: if we need to change Privacy Pass, that's a recharter opportunity, so that's another place for us to talk about whether we're comfortable with what the scope is, if there is a scope change. And then, if it turns out that the discussion in Privacy Pass says this is different, and again I'm purely speculating here, as others have suggested, that speaks to another turn through SECDISPATCH then.
L
Thanks. Okay, dispatch to Privacy Pass in that case. Thanks, Tommy. Jake, I guess you're presenting; I shared your slides already, hope that's fine. Should we start?
S
So the problem we're trying to solve is scalable delivery over multicast; of course, that's the thing that we don't know another way to solve, and the reason we're focusing on multicast is the congestion we observe at ISP access layers. This is essentially a broadcast medium, and so sending a single packet across it for all of the duplicated traffic that drives the peak loads observed at ISPs is something that could be avoided if we could make use of multicast, and we see a fairly large gap there.
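As a back-of-the-envelope illustration of the access-segment arithmetic being described, the sketch below compares the aggregate load when every viewer pulls a unicast copy against a single multicast copy on the shared segment. The bitrate and viewer counts are made-up numbers, and overheads such as repair traffic are ignored.

```python
def unicast_load_mbps(viewers: int, stream_mbps: float) -> float:
    """Aggregate load on a shared access segment when every viewer
    receives their own copy of the same stream."""
    return viewers * stream_mbps

def multicast_load_mbps(viewers: int, stream_mbps: float) -> float:
    """One copy crosses the segment, independent of the viewer count
    (ignoring protocol overhead and retransmission/repair traffic)."""
    return stream_mbps if viewers > 0 else 0.0

if __name__ == "__main__":
    stream = 8.0  # Mbps, e.g. an HD live stream (illustrative figure)
    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9} viewers: unicast {unicast_load_mbps(n, stream):>12,.0f} Mbps,"
              f" multicast {multicast_load_mbps(n, stream):>6,.1f} Mbps")
```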
S
I discussed this, and there's a link to the BarBOF at IETF 111, where I went into a bit more detail about this, later in the deck. There are other impacts to end users that we believe can be addressed in the same way. The capital costs are driven by the peak load; this is both for the content owners and content distribution networks and for the ISPs, and these costs pass all the way through to end users and have a disproportionate effect on those who can afford it least. The peaks are generally driven by the popular content, you know, these big game downloads, the big sporting events that everybody wants to watch at the same time, and that's the main thing we're trying to address, and the problem keeps getting worse.
S
We can go into more detail on that, but the motivations, I think, are clear. There are a few things that people often react to, like: do we really have to do something as hard as multicast? We have looked at peer-to-peer for this, and of course that doesn't address the ISP access congestion problem; in fact it makes it worse, and the ISPs are firmly against it, as opposed to multicast, where, as you'll see in a few moments, they tend to agree it makes sense. Application-level multicast also doesn't address that access-layer problem, although it does help address some of the other problems; this is essentially what CDNs do already, and we're finding that there's a substantial gap that we would really love to see addressed.
S
The reason that we did it in this order, that we haven't really come to security before, is about figuring out how viable it's going to be: are we really going to be able to get this stuff delivered? We started with dozens of informal conversations, bouncing the idea around over the course of several IETFs, particularly with a number of browser implementers, just in private discussions, seeing if there were any obvious showstoppers that would make it clearly impossible. Everybody agreed there are challenges, but that with sufficient buy-in this could be reasonable. So we went ahead and we made the pieces that we think would be necessary. We had some proof-of-concept implementations done; hackathons have been reporting on it at MBONED; there are all these adopted drafts in MBONED; and at the last IETF we ran a BarBOF and got what feedback we could from that, and now we're trying to follow up on the next steps here.
S
Next slide, please. In addition to our IETF work, we've been doing a bunch of outreach outside the IETF. We've had a bunch of ISP conversations; we consider that kind of the critical feedback, because there's no point if this stuff won't get delivered, but the answer we've gotten has been largely positive. We've likewise been talking to a bunch of the content owner customers we have about whether this is feasible from their side, and likewise the answer is mostly yes. We did several lab trials with ISPs where the ISP gear was being used to do the multicast forwarding, based on ingesting according to the architecture we're proposing, using AMT and DRIAD; references are there. This has also been adopted by MBONED, and so with that we think we've covered the sort of practical problems to getting this potentially delivered, and we'd like to move forward with the problems of how to do this in a useful way at the application layers. We're looking into both web and non-web traffic, primarily because the peak traffic loads that we have are driven by both of these cases.
S
But the web traffic is one of the places where we're going to be blocked if we can't get into browsers, and in order to get into browsers we need to have a decent security model, and there are differences between the way multicast security would work and the way unicast security has worked up until now. So this is why we tried to cover these considerations in the draft that I hope people got a chance to read. There's some other implementation work here: we've got a W3C community group that's chartered to try to incubate the work, and there's a demo API there, just sort of proving the viability of playing video using it. This is, by the way, a port of a receiver that we have as part of a multicast product.
S
That's a walled-garden multicast TV delivery product, but we've got that running under WebAssembly and playing video in our own build of a browser. We gave an intent-to-experiment in Chromium and we got some feedback from them. That feedback is kind of what drove the writing of this draft, because the point that was made is that we have to have confidentiality as part of the design, which it was not, up until that feedback came in. We get into some of the reasons why that is the case in the draft, but basically it was a convincing position, and so we've taken it on board and we're trying to grapple with the complexities that opens up. That's the background and the genesis of this draft.
S
We are starting in the community group with aiming to integrate with WebTransport. The idea there is that you can take existing multicast applications, and if you can solve the fundamental security problems that we outline in the draft and get an acceptable use case, then WebTransport would be a great fit and would allow, in the way that we've already done, adapting existing multicast applications into a browser context and deploying them at scale, and then it would open up experimentation for other APIs that would also be useful. Yeah, next slide, please.
S
So, the problems that we want to address here: we want to separate the integrity and authenticity concerns from the confidentiality concerns, because these are driven by different threat models. It's very important to prevent someone from inserting a packet of death that gets accepted and uses a rendering bug to take over all the devices of people watching the Super Bowl or whatever.
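One common pattern for the integrity and authenticity side is to authenticate a manifest of packet hashes over a separate, already-trusted channel and then verify each received multicast packet against that manifest before the application parses it. The sketch below is only a generic illustration of that pattern, with invented function names; it is not the specific mechanism defined in the adopted MBONED draft.

```python
import hashlib
import hmac

def packet_digest(payload: bytes) -> bytes:
    return hashlib.sha256(payload).digest()

def build_manifest(payloads: list[bytes]) -> dict[int, bytes]:
    """Sender side: map packet sequence number -> payload hash.
    In a real system this manifest would itself be signed and
    delivered over an authenticated (e.g. unicast TLS) channel."""
    return {seq: packet_digest(p) for seq, p in enumerate(payloads)}

def accept_packet(manifest: dict[int, bytes], seq: int, payload: bytes) -> bool:
    """Receiver side: only hand the payload to the decoder if its hash
    matches the authenticated manifest entry for that sequence number."""
    expected = manifest.get(seq)
    return expected is not None and hmac.compare_digest(expected, packet_digest(payload))

# Example: a tampered packet is rejected before it ever reaches a parser.
pkts = [b"video chunk 0", b"video chunk 1"]
m = build_manifest(pkts)
print(accept_packet(m, 1, b"video chunk 1"))     # True
print(accept_packet(m, 1, b"tampered chunk"))    # False: dropped, not parsed
```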
S
It is likewise very important to protect confidentiality, but this is kind of a different set of problems, and there are some trade-offs here that we try to get into in the draft, which come from the fact that with multicast you have the same exact packets that have to be decoded by many different receivers.
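That constraint can be seen in miniature below: if confidentiality comes from one symmetric group key shared by all authorized receivers, every receiver decrypts the identical ciphertext, so holding the key, rather than being the packet's addressee, is what gates access. This uses the pyca/cryptography AES-GCM API purely as an illustration and says nothing about how group keys would actually be distributed or rotated.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# One group key shared by the sender and every authorized receiver.
group_key = AESGCM.generate_key(bit_length=128)
sender = AESGCM(group_key)

# In practice the nonce must be unique per packet; one random nonce suffices here.
nonce = os.urandom(12)
ciphertext = sender.encrypt(nonce, b"same multicast payload for everyone", None)

# Every receiver that holds the group key decrypts the identical bytes;
# a receiver without the key (or an on-path observer) cannot.
receiver = AESGCM(group_key)
print(receiver.decrypt(nonce, ciphertext, None))
```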
S
This inherently creates a sort of content discoverability problem of some kind, but that comes with the removal of the end user's destination IP address, which increases anonymity at all the network locations that are increasingly distant from the end user that's receiving it. We tried to look into what the threat models are here, and for confidentiality, most of what's written down covers personalized information, things like your medical records or your bank account info. And on top of that, traffic analysis is already pretty good at picking out the especially popular things, so we think this doesn't open up particularly new exposure, but that is a point for discussion.
S
If this really does create an unsolvable hole, then we need to know about it and stop working on it, but our position is basically that the alternatives are just as bad, perhaps worse, for the scalable traffic that we're actually seeing, and so we think this is still worth pursuing in spite of the differences in the security model. We're trying to get that all articulated, and this is a big part of the work that we anticipate doing. Next slide, please.
S
So the existing work starts with the multicast security draft you saw. We have an adopted draft in MBONED that would benefit from better security expertise. This was originally argued as being reasonable to include in MBONED on the basis that it doesn't do any novel crypto or anything, and I would argue that if there were a multicast-focused security place, it would have a better home there, because it really does need some security expertise looking at it. There are other pieces of this that are likely to also require documents going forward; I've listed the ones here. As I said, we're trying to target WebTransport, so documenting exactly how that's going to work; those experiments are just beginning, but going forward we would anticipate writing the appropriate documents for it and incorporating all the great feedback and useful insights that we get from reviewing the actual security requirements and considerations that we've tried to put forth in the multicast security draft. And going forward into the other APIs, there may be some other related work that we'd want to address as well, including specifically which protocols would be supported by browsers and how it would all work for handling APIs like fetch or download.
S
We're basically blocked on progress until we can get a firmer consensus answer, or at least a firmer tentative consensus answer, on what needs to happen in order for multicast to be used safely on the internet, with the sort of modern threat model for the internet, and I think we're really missing a lot of security expert participation in this space at this time. So that's basically why I'm here today. Next slide, please. So my first question is whether this is suitable for IETF work and, if so, what precisely we should do with it. I have a few ideas here, one of which is to reopen MSEC, possibly with a recharter; that seemed like a pretty good fit for this kind of thing, or maybe a separate thing for broadcast security. But I would welcome suggestions and feedback on what we're doing here and whether anybody can usefully comment. So with that, I will open the floor to questions and comments.
L
So before we let others weigh in: we have about nine minutes left, and we have the option of overflowing into Thursday. Of course, I would personally prefer to get a dispatch decision already today, but go ahead.
D
So I guess the question I have is to try and get clarity on what questions you want to answer as the next steps. Like, I heard a fair bit about the question being: is there a security model or security framework that we can use that would make multicast traffic compatible with traffic in the web, to the browser?
S
I would say, with delivering it safely to end users. I think there's a lot of multicast traffic today that goes to end users that is not a part of the web security model, and we can start there. Certainly we can do multicast-based delivery for some of our use cases, like some of the game delivery pieces or software updates, as well as, potentially, some interested parties that have kind of large footprints of those smart-TV-on-a-stick kinds of things you can plug into the HDMI cable on your TV or whatever, and these can go ahead and start deploying multicast, as can the multicast delivery for TV that's operated by MSPs. So we'd like to address all of these questions and land on something that's as safe as it can be for end users, under specific considerations that are properly written down, and to apply that to all of our solutions. If the answer is, well, you have to get some deployment first, then, I mean, we are intending to do that to the best of our ability, but browsers are going to be a big part of solving the ultimate problem of scalability that we need.
G
Thank you, yeah. I mean, I think it is a dispatch question; I think I don't know where to take this. I guess I'm not sure your problem is quite the problem you think it is, because, like, I did read that message exchange with Chris, and I guess what I would say is: if Chrome were really enthusiastic about this, then you would have gotten a different message, and it wouldn't have said something like "probably a problem for me, and I'll be interested."
G
I think you're probably not wrong about the security part, but I guess I would say there really are two web security models. One is the one that's implied by WebTransport, or whatever you use there, and there is the one that's implied by fetch; fetch has the origin model, and WebTransport can be something quite different. I do not think there's any near-future chance of having this somehow mapped into the true web origin security model; they're just too different, and we spent so many years trying to reason about that and only barely understand it now, so I think it's extremely unlikely that people will want to put the effort into it. Maybe you could see a Chrome version that mapped this into the WebTransport model, which is not really tied to origins, because that's a much simpler kind of case. But I guess I would say the IETF is probably not the place to think about that, at some level, because the actual security properties here are technically not that complicated; what's complicated is mapping into the web. So again, I'm not quite sure what to do about that question, yeah.
S
Well, thanks. We are certainly engaging with the W3C as well; we've opened the discussion with WebTransport in particular, and the Web and Networks Interest Group has taken some of this on; it maps well with some of the sustainability work that's starting there. But we do want to get the conversation underway. I mean, you're kind of right about Chromium: they're not interested, but my read is that they're not interested because they're skeptical it will end up working. I think a lot of people have that skepticism, and multicast has a history that certainly justifies it, but we do think this problem needs solving, that this is a really good way to do it, and that the problems do not look fundamentally unsolvable. So to the extent that we can continue gaining traction and get some forward progress on this, I don't think it's really as out of reach as people think it is, but I do acknowledge that that remains to be seen. Thanks for the feedback there.
E
From Cloudflare: it's not clear from the presentation whether or not you have a good idea of the security properties a protocol would need to meet, or whether you don't really have an idea of what security properties you need to meet. And I agree with ekr that if you don't know what security properties you need to meet, that's sort of a W3C question, versus coming up with a protocol to meet them in the network and multicast, which might be more of an IETF thing.
S
So we have some idea: we think that the draft we put forth about security considerations presents a model that we invite criticism on and that we'd like to hash out to everyone's satisfaction. The feedback we see from the W3C side is that, well, obviously this has to have the underpinnings of protocols that satisfy what the web security model needs, and so this is the sort of middle ground we're stuck in at the moment. If we can get some forward progress on agreeing on the security model and get some feedback on what that's doing, then, as we go ahead with trying to build the protocols that meet that security model, we'll have some confidence that it's actually driving in the right direction. That's kind of the way we're looking at this now. It's got a lot of moving pieces and a lot of people involved, absolutely, but, you know...
I
Hi, I think that this piece of work is too big to do all at once. I think that the first step would be to solve the protocol problem and leave the browsers out of it, basically, and I think there is a logical way to do that. My suggestion would be to dispatch to CDNI and treat this as a CDN-to-CDN multicast use case where you're moving content, where you're essentially standardizing a component of application-layer multicast. So, yeah.
L
I think we have to stop here, and we need to come to some sort of consensus on the best way to dispatch this. From what I saw on the list, and I received some chats, people seem to be saying we are not sure at this moment, but perhaps discussing this on a mailing list seems sensible. So we continue the discussion on a list, and once there is enough traction and we have more clarity, whether this is CDN to CDN, or whether browsers should be included in the first attempt or not, will result from the discussion on that mailing list. It's up to the ADs, and you, which mailing list this discussion happens on; we'll obviously inform SECDISPATCH for those who are interested in continuing this discussion. How does that sound? Is that okay?
S
Okay, that is a start. Thank you very much.
L
So we are right at the hour; I'll just summarize the two dispatch decisions. For multicast security, the discussions will continue on a mailing list. For private access tokens, we have tentatively dispatched it to Privacy Pass, and the working group needs to make a decision on whether it needs to recharter, update with new use cases, or send it back to SECDISPATCH, and we'll see about that.