From YouTube: IETF113-TLS-20220323-0900
Description
TLS meeting session at IETF113
2022/03/23 0900
https://datatracker.ietf.org/meeting/113/proceedings/
A: Okay, welcome to TLS at IETF 113, here in Vienna and virtually online. Just a reminder for everybody about blue sheets: attendance will be collected automatically, for people who are online and for those in the room.
A: All right, we have minute takers. Also — I think usually people are on Jabber at this point — if there are things that need to be relayed to the mic and we're not getting to them, please let us know.

A: All right, we have some administrivia to deal with — I think we're done with that. There are some working group drafts that we have in progress, so we'll discuss those at the meeting today. I think we have a request from Ben to discuss a KITTEN issue, which we will add after the working group drafts, and then we'll have some discussion of new work. Are there any other agenda-bashing topics that people would like to discuss?
B: Yeah, hi Joe, thanks. I just want to say thanks to the working group and all those who participated: we got two RFCs published since the last time we met — the DTLS connection-ID draft and the deprecate-MD5-and-SHA-1 draft. Yay. We have three documents with the RFC editor today; it's possible that DTLS 1.3 will pop out, and ticket requests as well.
B: The external PSK guidance document is out the door, and there's another document that went back on the IESG telechat because it had been sitting around for a while and lost enough ADs to go forward. It's only got a couple of minor editorial things that need to get fixed, so hopefully that'll pop through.
B: So we're hoping that a lot of the logjam of drafts in the working group will pop through. As you can see from the notes and the asterisks in pink and yellow at the bottom of the left-hand column, they've been sitting around for a while, so we're happy to move those on and get them out the door. And that's it — back to you, I think, unless anyone has any specific questions about a draft.
A: The Meetecho — I think it should be back now. Okay.
B: Yeah, we can see it now. Can you go back to the slide real quick, just to jog my memory? Yeah — the other thing is that delegated credentials is in IETF last call. And I guess we should note, too, that we paused two drafts — cross-SNI resumption and the flags extension — until we actually get some implementation experience.
A: Okay, I think now we will start into our regularly scheduled program — I think cTLS was first.
A: On the — it's the thing that looks like a piece of paper, at the upper —
B: Yes, under your name — to the right... to the left, I think, of the — oh yes, right, right. What could be clearer?
C: cTLS. I think we're getting pretty close to being done with this. Our intention was to take it to Experimental, I think, and get some experience, rather than try to take it to PS — I think I mentioned this last time. The recent changes are mostly due to reviews by Ben Schwartz, so thank you, Ben. I'll just summarize them. The first is allowing optional elements in the profile.
C
If
you
have
elements
which
are
which
are
distinguishable
by
the
receiver,
whether
they're,
not
then
you
can
have
the
optional
moving
the
profile.
Do
the
client
hello
to
make
it
the
the
structure
clear
instead
of
having
a
a
conditional
sequence
number
which,
which
is
the
conditional
profile
making
it
show
why
you're
doing
stream
or
datagram?
C
This
seems
to
make
more
sense,
because
you
know
it
does
you
really
need
the
sequence
number
if
you're
in
the
datagram
and
you
don't
need
a
future
stream,
some
clarifications
around
the
alerts
and
around
or
in
the
encryption
computation?
These
actually
don't
change
anything
they're
just
like
well,
the
encryption
computation
really
doesn't
change
anything
and
the
alerts.
I
always
intended
to
be
the
same,
and
somehow
I
just
never
wrote
that
down.
So
I
don't
know
if
you
want
to
call
it
a
clarification
or
not.
C
That's
about
the
other
to
you.
So
there's
still
numerous
small
number
issues.
The
next
two
are
actually
related,
so
I'm
going
to
do
them
both
and
then
we
can
talk
about
them
that
we
remove
the
length
field
in
the
same
space
and
the
underlying
idea
was
either
that
the
the
messages,
actually
every
message
actually
was
self-describing.
So
you
could
parse
the
messages
and
then
just
stop
and
also
that
perhaps
this
would
be
carried
over
transport
that
wasn't
wasn't
really
like.
C: A transport that was more message-oriented — in which case you wouldn't need the length, because you could just parse to the end of the message. But if you're on TCP this is actually kind of a pain, because you have to actually parse the message before you get to the end, rather than as in ordinary TLS.
C
So
you
could
just
like
look
at
the
message
header
and
find
out
how
long
it
is
and
then,
of
course,
you're
facing
the
possibility
that
the
that,
like
the
message,
is
not
as
long
or
short
as
it
is,
but
at
least
you
could
you
could
have
a
two-layer
parser,
so
ben
suggested
a
flag
in
the
record
for
this,
but
it's
of
course,
won't
work
for
udp
and
there's
a
separate
issue
issue.
41.
C
he's
been
here
by
the
way,
and
he
just
look
good
awesome
about
fragmentation
for
udp,
so
we
originally
assumed
the
tc.
The
ctls
was
going
to
run
over
tcp,
and
so
we
wouldn't
need
to
reconstruct,
and
so
the
headers
don't
get
any
information
at
all
about
how
to
read
the
reconstruction.
So
that's
pretty
clearly
in
a
mission
and
and
then
we
just
like
said
well,
this
will
work
fine
for
dtls.
We,
like
you
know
we
put
a
d
in
front
of
everything,
and
so
that's
not
also
fantastic.
C
My
first
solution
to
this,
which
I'm
thinking
people
will
be
surprised
here,
is
to
profile
even
further,
which
is
simply
have
the
traditional
framing
for
both
the
la
you
have
you
know
a
a
version
that
has
no
framing
a
version
that
has
a
length
framing
and
a
person
that
has
the
full
framing
and
those
would
simply
be
as
they
are
in
the
current
in
the
current
system,
although
we
might
might
make
the
image
a
little
smaller.
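The three framing variants can be sketched as follows. This is purely illustrative, not the draft's wire format — the field names and sizes (a two-byte length, DTLS-style fragment fields) are assumptions for the sake of the example:

```python
import struct

def frame_none(body: bytes) -> bytes:
    # No framing: the message must be self-describing, or delimited by the transport.
    return body

def frame_length(body: bytes) -> bytes:
    # Length framing: a two-byte length prefix (the "make the integers smaller" idea).
    return struct.pack("!H", len(body)) + body

def frame_full(msg_type: int, total_len: int, frag_off: int, frag: bytes) -> bytes:
    # Full framing: DTLS-style msg_type, total length, fragment offset, fragment
    # length, so the message can be fragmented and reassembled over UDP.
    return struct.pack("!BHHH", msg_type, total_len, frag_off, len(frag)) + frag
```

A TCP-friendly profile would pick `frame_length`, so the parser learns the message length up front; the UDP profile needs `frame_full` for reassembly.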
C
So
I
don't
love
this,
but
I
think
it's
probably
the
clean
solution
and-
and
if
which
is
that,
doesn't
work,
then
then
it's
to
say
the
profile,
because
it
lets
us
save
ourselves.
So
I
guess
I'll
take
comments,
but
this
is
my
put
on
this
ben.
F: Effectively — okay, so "full" is the one that supports fragmentation. And this is optional because there are some use cases where you know that your handshake messages are so small that they fit into a single message.
C
I
was
more
thinking
a
single
record.
No,
I
was
more
thinking
use
cases
where
you
have
a
transport,
that's
carrying
them,
that
the
time
to
limit
them.
F
Okay,
so
so
you,
it
sounds
like
you
imagine
that
in
the
ctls
dtls
we
need
a
better
tls
udp
case
that
we
would
always
use
fall.
I
think
should
have
to
yes,
okay
and
then
the
question
is
just
on
the
on
the
ctls
tcp
side.
This
would
be
an
option
for.
C: Affirmative. Okay — yeah, I just want to sharpen that a tiny bit, which is to say: one of the purposes of this exercise — though it might not be as clear as it was before we started — is also to draw a clear line between TLS and uses of it as a handshake protocol.
C: Okay. I'm happy to have this — this is obviously not the most amazing thing ever, so I'm really happy to hear "this is a bad idea" in the future. You know, nobody knows anything now; they're not committing themselves forever. Martin?
G: When you say "full" — I'm looking at the spec here, and it has a length, which is the length of the handshake message, a fragment offset, and a fragment length. Do we need all three, or can we get by with just two?
C
You
can
get
by
you,
you
can
you
can
shave
off
one
with
that,
you
can
sorry,
you
need
all
three
with
the
cur
sorry,
if
they're
the
same
as
they
currently
have.
You
need
all
three,
but
you
can
imagine
different
semantics,
which
only
had
two
in
particular
in
particular.
If
you
had
an
end
of
message
bit
like
like
t
like
like
ip
does,
you
could
shave
off,
you
could
take
one.
C
You
could
run
you,
you
could
say
you
could
rob
one
bit
from
one
of
those
and
and
then
and
then
and
then
you
know
and
then
say,
there's
only
two.
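The end-of-message-bit idea can be sketched like this: if each fragment carries its offset and a final-fragment flag, the receiver can recover the total length itself, so the separate total-length field can be dropped. An illustrative sketch, not the draft's actual semantics:

```python
def reassemble(fragments):
    # Each fragment is (offset, end_of_message, data). With an end-of-message
    # bit, the total length need not be carried on the wire: it is
    # offset + len(data) of the final fragment, so one of the three DTLS-style
    # fields (length, fragment_offset, fragment_length) can be dropped.
    buf = {}
    total = None
    for off, eom, data in fragments:
        buf[off] = data
        if eom:
            total = off + len(data)
    if total is None:
        return None  # final fragment not yet seen
    out = bytearray(total)
    for off, data in buf.items():
        out[off:off + len(data)] = data
    return bytes(out)
```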
C: I think the DTLS one is pretty arguable — you can improve it. And actually, arguably, for cTLS-for-TLS you could just make it two bytes, too — almost certainly, yep. Okay, so I think I'm going to put something vaguely like this in here, and then people can pick at the PR a little bit.
C: Then there are two other optimizations that people have suggested.
C: One is omitting the random values, in favor of requiring the Diffie-Hellman shares to be random; the other is omitting the Finished value. The implication of omitting the randoms is that the DH share is where the randomness comes from — which is obviously a problem if you do pure PSK with no Diffie-Hellman, which is inadvisable in general; but also, a lot of the analyses people do will often assume the random values are there. Then the Finished —
C: Basically, as you will recall from the discussion of TLS 1.3 when we first did it: if you take the Finished out, you still do get integrity of the handshake, but you're relying entirely on the AEAD on the subsequent messages. So it's not impossible, but it really complicates the analysis and the guarantees and the separation between those two proofs.
C: When we first started this, there was an attempt to radically slim down everything we possibly could, because there was a feeling that that was the only chance we had. My sense now is that, now that we have the profile mechanism —
C: These are things we could add later. I think I'd rather try to get some experience — from the people I've talked to who are actually talking about doing this. Let's see if we actually get any interest; or rather, let's see if we have identified any use cases that really have that narrow a requirement, before shaving off these extra pieces.
C: Okay, I plan to do that, as I say. Okay — the second piece that I heard suggested — by the way, these suggestions have been floating around, but most recently they're due to Karthik Bhargavan, so I want to make sure I'm giving proper credit there — was to remove the transcript-reconstruction piece.
C
So
so
richard
had
this
very
clever
idea
of
like
treating
this
like
a
compression
layer
in
the
in
the
system
and
then
but
the
implication
of
the
compression
layer.
Is
that
then,
when
you
that,
then
you
actually
operate
the
t
at
the
tls
layer
as
if
it
were
as
if
it
were
rare
class,
and
then
you
extract
everything
down
and
brought
it
back
up
again,
so
you
had
actually
a
compatible
transcript,
except
for,
except
for,
like
some
handshake
value,
distinguishing
that
you
actually
were
doing
ctls.
C
So
this
seems
to
add
some
like
conceptual
complexity
and
tetra
complexity.
I'm
not
that's
not,
I
think,
there's
something
better.
How
much
so
I
don't
know
if
you
want
to
think
what
real
thoughts
on
that
or
not.
A
So
I'm
defense,
myself
harness
is
at
the
mic
line,
honest
just.
I
couldn't
see.
H: So those three proposals were made by Karthik, as you said, and I think they actually are very good optimizations, but there's kind of a chicken-and-egg problem: until we incorporate them into the document, neither Karthik nor others will really consider them a serious area of investigation. And so delaying them to a later stage —
H
I
think
could
be,
could
be
tricky
in
practice
because
you
don't
want
to
do
these
sort
of
like
bigger
changes
later
on,
specifically
the
the
transcript
sort
of
reconstruction,
because
that
could
be
quite
a
time.
Saver
implementation,
wise,
because
now
what
I
have
to
do
is
I
have
to
basically
construct
the
message
and
then
re-run
the
whole
process
to
create
the
kind
of
real
message
to
just
have
the
the
transcript
version,
which
is
in
an
embedded
implementation.
H
Not
a
piece
of
cake
in
richard's
go
implementation,
it's
not
a
problem,
but
if
you
care
about
saving
ram
and
and
also
the
computational
efforts,
it's
it
is
something
so
I
don't
know
so.
H
Yeah,
who
who
to
look
for
to
actually
do
that
analysis,
because
this
is
a
pretty
serious
security
analysis
that
needs
to
be
done
here,
but
it's
not
just
oh.
It
looks
good,
I'm
sort
of
like
on
the
envelope
assessment.
C
So
I
guess
I
think
this
way.
I
think
I
think
this
one
we
should.
I
think
this
one.
We
should
resolve
one
with
the
other
before
we
do
this
because,
like
I
think
that
the
I
think
you're
right,
this
is
like
this
is
like
the
other
things
you
can
imagine
adding
later,
but
this
one
I
think
either
works,
do
it,
it
should
happen
or
not.
I
I
guess
well,
I
guess
I'd
like
to
hear
for
other
people,
I'm
sort
of
like.
I
thought
this
was
a
clever
idea.
C
I
think
when
richard
suggested
it,
but
I've
called
on
it
some.
So,
for
the
reasons
you
indicate
honest,
so
I
guess
obviously
there's
some
other
points
of
view.
E: I don't think I ever envisioned that the mechanical expansion of compression and extraction was necessary. So I think it would be useful to have validation on this, but I expect that the result of that analysis would be that it's okay not to do the reconstruction.
C: Thanks — I think I know what to do now. I think those are all the slides I have; I'm not aware of any other changes here. Did I go through the issues? Okay, sorry — I thought that was from the mic. I did go through the issues. So now, you know, DTLS is basically done.
C
I'm
trying
to
clear
out
my
queue
of
these
other
documents,
as
you
can
see,
so
my
hope
was
to
make
these
changes,
such
as
they
are,
and
then
ask
for
your
price
call
expert.
C: I didn't — okay, I did call it the right thing. Look at that.
C
Oh
I'm
glad
to
see
that
my
confusion
about
the
number
of
the
rfc
is
only
it
only
occurs
at
two
in
the
morning.
It
does
not
generally
occur
okay,
so
this
also
is
hopefully
close
to
done.
There
are
still
a
few
substantive
issues,
some
of
which
I
propose
to
do
something
about,
and
some
of
which
I
propose
to
do
little
about.
C
As
people
recall,
the
purpose
of
this
exercise
was,
like
you
know,
largely
to
remove
some
problematic
language
for
the
document
and
in
the
process
to
clean
up
some
points
that
have
been
confusing
in
the
in
in
the
rfc.
It
was
not
to
make
any
substantive
changes
that,
like
would
cause
real
problems
or
in
our
problems,
because
we're
not
planning
to
rev
the
protocol
number.
So
so
I
think
you
know
any
any
change
we
make
actually
measured
against
that
question
of.
Are
we
improving
the
world?
C: — or are we just digging ourselves a hole? We do have the overhang of obvious confusion about different pieces, as well as the problematic language. So I do want to get this out, and if there are things that are grievously wrong but require a lot of engineering time, I want to punt them — or rather, things that are less-than-grievously wrong.
C: I guess if they're grievously wrong we have to fix them; but if they're just complicated and somewhat confusing, but edge cases, maybe that's less important, since we already have 8446. So: the first issue was from John Mattsson — who I don't think is actually here; he was presenting in some other meetings, and I don't see him here — to have recommendations around ECDHE. Currently, KeyUpdate offers forward secrecy because you hash the keys forward — you rotate the keys forward.
C: As long as you delete the old keys, you're fine — deleting the old keys is always a condition for forward secrecy — but it doesn't offer post-compromise security, and for that, of course, you need an ECDHE exchange. So we had some text explaining this point, but in that PR John proposed also recommending a new exchange at specific intervals, namely an hour or 100 gigabytes.
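The KeyUpdate ratchet being discussed is the one in RFC 8446 section 7.2: a one-way derivation, which is why deleting old secrets gives forward secrecy for earlier data, while a fresh (EC)DHE exchange is needed for post-compromise security. A minimal sketch:

```python
import hmac, struct

def hkdf_expand_label(secret, label, context, length):
    # HKDF-Expand-Label (RFC 8446 section 7.1); one block is enough for 32 bytes.
    full = b"tls13 " + label
    info = (struct.pack("!H", length) + bytes([len(full)]) + full
            + bytes([len(context)]) + context)
    return hmac.new(secret, info + b"\x01", "sha256").digest()[:length]

def next_traffic_secret(secret):
    # RFC 8446 section 7.2: application_traffic_secret_{N+1} =
    #   HKDF-Expand-Label(application_traffic_secret_N, "traffic upd", "", Hash.length)
    # One-way: knowing secret_{N+1} does not reveal secret_N, so deleting the
    # old secret protects past traffic. It does NOT heal a compromise.
    return hkdf_expand_label(secret, b"traffic upd", b"", 32)
```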
C
There
seemed
be
a
lot
of
talk
to
me
and
it
seemed
like
you
know.
We
probably
did
not
need
that,
so
my
appraisal
actually
is
the
closest
to
no
change,
which
is
what
I'm
worked
on
the
bug.
Obviously
you
know
if
people
I
I
said
since
john
is
not
here,
if
john
wants
to
jump
into
the
pr
or
later,
if
that's,
okay
or
in
the
bug,
but
I
wanted
to
see
what
other
people
thought.
C
The
second
is
also
john
metsen.
Tls
is
a
relatively
rich
set
of
errors
for
certificate
problems.
So
there's
like
certificate
revokes
difficult
spirits.
If
you've
got
a
known,
it
doesn't
really
have
anything
for
psk's
psks.
After
all
kind
of
an
afterthought
in
original
tls
and
1.3,
we
didn't
bother
to
extend
the
alert
set.
C
So
pr
is
just
repurposing
these
these
alerts
to
mean
the
same,
you
know
effectively
mean
the
same
thing
with
the
tickets
that
seemed
confusing
to
me,
since,
after
all,
you
could
present
with
it,
you
could
have
a
psk
and
a
ticket
simultaneously,
sorry,
a
psk
and
a
certificate,
although
I
guess
it
would
go
in
the
opposite
direction,
depending
on
the
time
of
the
handshake.
C
My
sense
is
that
these
granular
messages
in
in
ordinary
tls
for
certificates
were
not
actually
as
helpful
as
you
might
have
seemed,
and
in
particular,
because
of
the
way
you
know
in
particular,
because
the
way
the
path
construction
happens,
you
couldn't
you
could
you
know,
especially
for
intermediates,
you
might
encounter
any
of
these
so
for
the
same
certificate.
C
So
my
sense
is
probably
that's
not
a
great
idea,
and
I
think
we
should
just
if
we're
gonna
do
anything
we
should
add
just
a
ticket
invalid
alert
that
says
like
something's,
wrong
with
your
ticket.
It's
also
the
case
that,
depending
on
exactly
your
service
constructed,
you
might
encounter
this
for
a
given
ticket,
because
tickets
are
because
it's
possible
to
have,
like
you
know,
to
fall
back
to
ordinary
certificate-based
authentication
depending
how
things
are
configured.
G: You have my input in the chat. Having worked on certificates, I can say it's really hard to get the right alert out under the different conditions, so I'd be very happy to just say a single thing and not give people that richness. So that's fine, I guess.
C
Yeah,
I
think
this
was
larger
than
external
ps
case,
but
again
it
just
seems
like
this
is
like
I
can't
make
a
connection
right,
yeah,
okay,
I
will
take
this
on
board.
Thank
you.
C
Okay,
this
brings
us
to
issue
1227
raised
by
david
benjamin,
so
we
we
were.
It
turns
out
that,
like
there
are
multiple
hashes
in
play,
when
you're
doing
psks,
one
is
the
one
associated
with
the
psk
and
the
other
is
the
one
that
is
associated
with
the
transcript,
and
these
indepen
and
at
least
one
case
is
potentially
different,
which
is
supposing
you
have
two
ps
case.
C
Obviously
you
have
to
use
the
hash
associated
with
psk
for
the
binder
in
the
client
l1,
because
you
had
no
other
information,
but
you
know
they
might
have
different
different
hashes
associated
with
them.
So
if
you're
gonna
use
the
when
it's
just
the
blind,
the
psk
hash
for
the
binder
associated
with
the
psk
and
ch1,
you
obviously
should
use
it
for
ch2,
because
that
was
like
part
of
the
whole
point
of
attributing
of
having
the
hashes
associated
with
the
psks.
C
So
it's
clearly
you
have
to
do
that.
But
then
we
have
message
and
but
on
the
other
hand,
for
the
transcript
you
at
the
very
end
of
the
day,
you
clearly
have
to
use
the
negotiated
hash
and
what
hash
is
used
for
the
message.
C
Hash
rejection
becomes
the
question
and
when
david
and
I
talked
about
this
a
little
bit,
it
became
clear
that
david
thought
that
that
this
text
said
that
the
that
you
should
use
the
psk
hash
for
re-injection,
and
I
thought
you
should
use
the
negotiated
hash
and
david,
and
I
I
believe,
agree
that
it
should
be
the
negotiated
hash.
But
it's
not
clear
what
the
test
currently
says.
So
it
will
work
it
pretty.
C
Much
doesn't
work
properly
if
you
use
the
psk
hash,
and
so
the
proposal
is
to,
and
so
I
think
you
know
david's
model
where
that,
where
the,
where
it
said,
he's
the
psk
hash,
he
had
a
hack
to
make
sure
that
hap
made
sure
that
they
matched.
And
so
I
think
the
proposal
here
is
just
to
modify
a
document
to
make
it
clear
at
every
point,
in
the
first
of
all
to
make
clear
that
that
is
the
negotiated
hash.
C: Yes — it does include the negotiated cipher suite. And so the proposal is to modify the document to make sure that's the case and, in fact, at every place we use "Hash", to indicate properly, in some way or another, which hash it is. The transcript is clear, and I think everywhere else it actually is relatively clear — but when I produce the PR, people can object; if I concretize it in some way that's wrong, that'll be helpful.
C
Okay,
david
benjamin
also
read
a
couple
points:
12
23
and
12
24
about
the
hr.
C
So
we
don't
really
say
I
think,
martin
and
I
had
largely
assumed
that
almost
every
important
decision
was
made
in
ch2
and
sometimes
they
hadn't
made
some
preliminary
decisions
in
ch1,
but
that
they
were
just
effectively
confirmed
by
ch2,
and
I
think
david
sort
of
assumed
that
you
must
have
made
a
decision
in
ch1
and
and
then
and
then,
like
you,
just
pictured,
pick
things
out
of
ch2
and
memorize
the
material
systems
h2
and
neither
is
possible
purely
because,
as
you
make
decisions
in
ch,
one
which
implement
implement
in
fact
ch2.
C
So
the
example
david
gave
in
in
this
comment
was
imagine
what
happens
if
someone
offers
you
like
psk,
only
resumption,
with
no
dhe
and
so
depending
on
whether
you
accept
that
or
not,
and
they
also
offer
you
a
set
of
helmet
groups
where
you
need
to
do
an
hr
and
so,
depending
with
your
accept
presumption
or
not,
you
might
need
to
do
an
hr
or
not.
So
this
is
like
a
bit
of
a
rat
hole.
C
I
fear-
and
so
I
think
the
and,
as
I
understand
it
from
talking
to
david
yesterday
for
a
while
most
ambiguity
comes
into
play
when
we're
doing
ech,
so
he's
less
worried
about
it
now,
and
so
I
guess
my
question
would
be
what's
the
minimum
thing
we
can
do
here
to
like
reduce
the
ambiguity
of
this,
and
what
do
we
not
have
to
do,
and
so
I
think
we
do
the
minimum
thing
I
don't
have
a
proposal
of
that
I
was
actually
hoping
david
could
make
suggestions
or
or
antiques.
C
I
think
they're
a
little
closer
to
this
point,
but
I
think
like,
as
I
was
sort
of
saying
at
the
beginning,
what
I
don't
want
to
do
is
spend
the
next
six
months
or
a
year.
You
know
trying
to
clarify
this
in
unhelpful
ways,
because
where
it's
an
improvement
to
get
this
out
and
then,
if
someone
wants
to
make
a
bigger
pass
at
it,
we
could
position
it
out
there
unless
there's
like
things
that
are
actively
causing
people
confusion
right
now.
C
Oh
it's
unfortunate!
I
see
dave
ben's,
not
actually
here
so
well.
I
think
stephen,
I
don't
think
ditching
hr
is
in
scope
for
this
effort,
so
feel
free
to
process
overdraft,
but
this
is
like
was
intended
to
be
like
a
a
set
of
compatible
clarification
changes.
C: Finally, I want to make a last call on some issues. I've commented in these issues; I don't think any changes are needed. I'm just going to go through them very quickly — if anyone wants to object to them now, please do, and if not, I'm just going to close them, basically after this meeting. So: in 1206, Ben Kaduk — I think going from some comments that had been on the list — suggested that we give more guidance about what should actually be in the cookies.
C
My
proposal's
not
to
do
anything
here.
I
think
it's
like
at
least
clear
enough
and
I
think
any
more
guidance
probably
risks
getting
confusing.
C
I'm
not
going
to
stop
for
these
and
someone
like
think
of
the
microphone
or
or
whatever.
Of
course
you
know
I'll
stop,
but
otherwise.
The
second
is.
There's
a.
I
had
a
pr
sorry
issue
to
expand
the
discussion
of
recommended
not
recommended
and
have
like
another
term,
but,
as
I
understand
it,
this
is
being
carried
847
this.
This
is
no
longer
appropriate
for
8446,
and
so
we
can
just
close
this,
though
we're
not
we're
not.
C
We're
really
deciding
we're
not
going
to
handle
that
84
46,
because
I'm
belong
there
and
finally,
there
was
some
discussion
on
more
guidance
on
how
to
handle
multiple
identities
in
post-handshake
auth.
So
if
the
server,
for
instance,
offers
multiple
identities,
if
a
client
does
how
do
you?
How
do
you
think
about
that
in
your
authentication,
without
authorization
decisions,
and
this
I
think
we
agree
with
an
application
issue,
so
I
prefer
to
close
that
as
well?
C
Okay,
so
I'm
going
to
close
all
these
out.
Essentially
after
I
stop
speaking
so
I
believe
there
were
a
couple
issues
that
I
didn't
get
to
here
because
I
think
they're
easy
to
handle,
but
I
posed
to
handle
them
and
put
some
aprs
for
them
and
try
to
get
a
new
draft
out,
and
hopefully,
let's
just
start
with
last
call
as
soon
as
we
get
those
things
done
and
potentially
david
benjamin's
appear.
C
So
this,
please
consider
this
your
your
your
chance
to
to
make
it
to
raise
any
other
issues
which
I've
forgotten.
It'll,
probably
take
me
a
couple
weeks
to
a
month
to
like
actually
do
a
little
stuff,
but
if
you-
but
this
is
this-
is
this-
is
your
call
to
to
raise
other
issues,
and
otherwise
we'll
have
to
go
to
e446
business.
B: Sean here — no new issues. I think what I heard is: the next time you spin a new version, which will come in about a month — maybe the end of April, beginning of May — we will issue a working group last call to get this thing out the door.
C: That's my hope. Given that we already have this protocol shipping, and this is really about clarifying and improving people's lives, I think that is best served by actually having the document published — and then, if we subsequently find there's a bunch...
A: All right. Next I think we have Martin Thomson with SNIP. Do you want to run the slides, Martin?
G
There
we
go
sorry
about
that.
One
slide.
Nothing
has
happened
really
here
technically,
but
a
lot
has
happened
in
terms
of
the
editorial
content
of
the
document
I
went
through
and
did
a
rip
and
tear
and
reordered
things
rewrote
things
I
think
is
a
whole
lot
more
comprehensible
than
previously.
G
There
were
some
good
discussions
after
that
revision
that
I
haven't
yet
published,
but
I
think
it's
just
removing
an
appendix
was
was
the
upshot
of
most
of
that
discussion.
So
at
this
stage
we
don't
have
anyone
implementing
this.
This
is
probably
another
one
of
those
documents
that
it's
worth
putting
in
that
parked
state.
G
I
don't
know
what
other
people
think
about
that,
but
I
I
think
that's
probably
the
right
way
to
do
this.
Let's
sit
it
in
the
park
state
and
see
if
we
can
get
some
implementations
going,
it's
relatively
straightforward
to
do
from
a
client
side,
so
I'd
be
willing
to
set
up
some
sort
of
interoperability
event
if
someone's
willing
to
do
the
server-side
work,
which
I
understand
is
a
little
more
complicated.
B: Okay — so, unless we're hearing or seeing anything else here, I think we take it to the list. Thank you, Martin, for doing your editorial changes to make it more readable, and we'll leave it parked till we start to hear people beating down the door to implement this.
A: Thanks. Next we have hybrid key exchange.
A: All right — who is presenting? Was that Vincent presenting, or somebody else? Vincent was trying to share, and it didn't seem to work.
K: I can drive. Vincent — you have to share audio as well; the icon that has the microphone button will allow you to unmute yourself. You could let us know in chat if that is or is not working. Oh, you have a permission problem — I see.
K: In the interest of time, while Vincent sorts that out, maybe we can move on to the next presentation.
A: Okay. For the next topic — Sean, do you want to go through the KITTEN issue?

B: Sure.
B: So back in October, I guess, there was a message about a draft — a draft in the KITTEN working group where they're including an "Updates" header on top of the — oh, is that Vincent there?
L: Can someone share the slides? Okay, thank you. So yeah — I'm going to talk about the ECH update and, in particular, our privacy analysis.
A: We're having a little trouble hearing you — you might need to get closer to the mic.
L
Okay,
so
it's
a
joint
work
with
chris
and
kartik
next
slide,
please
so.
The
the
basic
diplomacy
now
that
you
consider
is
typically
a
client
that
wants
to
connect
to
the
to
a
website
that
is
hosted
on
a
client
facing
server.
L
So
here
the
backend
server
is
the
the
websites
and
the
next
slide
please,
and
so
for
tls
stls
contains.
Can
you
switch
to
the
next
slide,
chris?
L
Okay,
so
as
gls
contains,
if
I'm
supposed
to
secure
communication
between
the
client
to
the
server
it
acts
first
as
a
as
negotiation
and
dpml
key
exchange,
so
we
consider
also
the
server
authentication
with
the
certificates
and
then
that
we're
gonna.
We
also
model
the
encryption
of
the
data
through
the
application
traffic
and
created
with
application
traffic
key
that,
where
the
keys
derived
from
the
from
the
full
transcript.
L
So
all
this
kind
of
a
recall,
but
I'm
guessing
all
of
you
knows
about
all
this
information
so
in
term
of
a
next
slide,
please
in
term
of
a
feature
that
we
looked
at.
Can
you
see
the
slides
chris
yeah?
So
we
consider
most
of
the
most
of
the
feature
that
is
in
the
nfc
of
tls.
In
particular,
we
wanted
to
model
the
hello.
L
We
try
requests
that
come
from
the
negotiation,
the
certificate
based
authentication,
the,
as
well
as
the
pre-shed
key,
and
the
ticket
assumption
that
are
generated
at
the
end
of
the
session
at
the
end
of
the
handshake
plus
any
other
extension.
In
particular,
we
looked
at
the
sni
that
can
save
the
identity
of
the
identity
of
the
the
server,
so
in
term
of
verification.
L: What's important is that, because all these features can be used or not, depending on the scenario you consider, we need to verify TLS with as many scenarios as possible, depending on the features in use. Next slide, please. In terms of security, TLS is supposed to achieve quite a lot of security guarantees. We have the classical ones, the authentication and confidentiality goals; the ones that have already been studied in the literature are all of those — typically key secrecy —
L: — the secrecy of the data sent at 0-RTT and 1-RTT and, for 1-RTT, even forward secrecy. In terms of authentication, you have client and server authentication, as well as downgrade resilience, to ensure that the attacker is not able to force the client and server to use an old version of TLS. Next slide, please. In terms of verification: because you have to consider all those properties, as well as many scenarios, it's important to be able to verify the security goals not by hand but using automated verification.
L: But, as you can see, the picture is not really homogeneous yet, because some works have been focusing on some properties and some on others, so we don't have a full, homogeneous picture. Next slide, please. More problematic is that the models do not cover all the features — I mean, the union of the models usually covers most of the features, but the intersection of the models is not complete. So the goal of our work was to have a model —
L: — a full-featured model that covers all the features as well as all the security properties. Next slide. Then there are the privacy goals, which have been much less studied in the past. We looked at the identities of the participants, in particular the client and the server.
L: So here the problem is that, even though current TLS is supposed to guarantee at least the privacy of the client's identity, there have not been any automated proofs — there have been some pen-and-paper proofs, by hand. And of course, for the client's extensions and the server's identity, privacy is not guaranteed by TLS, almost by design, because everything is sent in the clear in the ClientHello. That is what the ECH extension is supposed to guarantee. Next slide.
I
L
But even if you just do this, it's still not easy to obtain a privacy guarantee for the identity of the backend server. Next — because if you just follow this simple intuition of encrypting the SNI, and even binding it with the client random, there is a very simple attack where the attacker just takes —
L
So the idea of ECH, through its many drafts, has been to in fact encrypt the whole ClientHello that is destined for the backend server — which is called the inner ClientHello — and bind it with the parameters of the ClientHello that is destined for the client-facing server. The overall message, if you look at the RFC, is called the outer ClientHello.
L
But even if you do this, you have some issues with some features, in particular HelloRetryRequest. Next, please. Because even though you have this binding between the inner and the outer ClientHello, the attacker can forward the first ClientHello and, upon receiving the HelloRetryRequest from the server, re-inject his own inner ClientHello in the third message on the slide; and for the identity of the server, he puts in his guess — typically S', a guess on the identity of S.
L
So the key idea here to solve this problem was to ensure that the encryption of the second inner ClientHello is linked with the first one, which leads us to the current version of ECH. Next. We use HPKE encryption for encrypting the ClientHello, so the HPKE setup creates a context that is updated on every encryption — and the same thing on the decryption side.
L
Next. So, in terms of verification, we consider the symbolic model — otherwise known as the Dolev-Yao model — where the attacker has control over the network: he can read, write and intercept messages. But it's a bit idealized, in the sense that he cannot break the cryptography nor use side channels. The good thing about using the Dolev-Yao model is that you have very powerful, state-of-the-art tools, and in particular we are using ProVerif. Next.
L
So we only focus in our model on TLS 1.3 — no version negotiation to previous versions of TLS — but, as I mentioned before, we modeled all the features that I've presented, as well as all the security properties: authentication, confidentiality, privacy goals, all of them. Ideally we would have liked to be able to prove all the properties with all the features, but it's too taxing in terms of ProVerif: we set a timeout of 48 hours as well as a memory limit of 100 gigabytes, and for some security properties and some scenarios it —
L
— timed out. Next, please. What we did is parameterize our model with a configuration file. So we have one model, but this configuration file allows you to easily activate or deactivate features, as well as choose which keys are going to be compromised and the behavior of the client and the server; and just using this configuration file allows us to run about 600 runs of ProVerif, corresponding to about 20 to —
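The talk doesn't show the configuration file itself, but the enumeration it describes — one ProVerif run per combination of activated features and compromise scenario — can be sketched like this (feature and compromise names here are hypothetical placeholders, not the paper's actual toggles):

```python
from itertools import product

# Hypothetical feature toggles and key-compromise choices, loosely
# following the talk; the real model's configuration file differs.
FEATURES = ["ech", "hrr", "psk", "zero_rtt", "one_rtt"]
COMPROMISES = ["none", "client_key", "server_key", "ech_config_key"]

def scenarios(max_active=None):
    """Enumerate (active-feature-set, compromise) pairs, one per run."""
    for bits in product([False, True], repeat=len(FEATURES)):
        active = {f for f, b in zip(FEATURES, bits) if b}
        if max_active is not None and len(active) > max_active:
            continue  # prune combinations that blow up verification
        for comp in COMPROMISES:
            yield sorted(active), comp

runs = list(scenarios())
# 2^5 feature subsets x 4 compromise choices = 128 candidate runs,
# the same order of magnitude as the ~600 runs mentioned in the talk
# once per-property variants are included.
```

Pruning via something like `max_active` mirrors what the speaker describes: when the full feature set is too taxing for ProVerif, scenarios with fewer features activated are run instead.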
L
— 30 scenarios per security property. So, our results. Next. We first looked at the classical security properties. In particular, we re-checked all the properties on TLS alone — vanilla TLS, with all the features activated — so that was almost a sanity check, but still, now we have a homogeneous picture. And we tried to re-prove all of them, of course, also with ECH, to ensure that ECH doesn't break any of those guarantees; but in that case, because of the blow-up in memory, most of the time we had to deactivate some of the features.
L
So we tried here to present just the interesting scenarios. Next. In symbolic models, the way we model the privacy of the server identity — it's going to be similar for unlinkability, client privacy and so on, but here I'm only focusing on the privacy of the server — is typically as an equivalence between two situations, where on one side you have a handshake with backend server one, and on the other side backend server —
L
— two, and those identities are known to the attacker. You put that in parallel with other handshakes, with other clients or the front server — so you have an unbounded number of sessions and everything — and if the attacker is not able to distinguish the two situations, we consider that the identity of the backend server used in these two different scenarios remains private. We did have to make some fairly standard, simple assumptions.
L
All of those are not very restrictive assumptions. Next, please. So, in terms of results for the privacy goals: equivalence proofs in ProVerif are even more taxing — in time and, more specifically, in memory.
L
Sorry — it's, you know, more taxing in time, but much more so in terms of memory. So we had, by default, to deactivate 1-RTT and 0-RTT for the proofs; and still, we managed to prove most of the security properties that we were looking at for vanilla TLS — for the ones that are possible to achieve — with almost all the features activated, and for ECH.
C
Can I ask a clarifying question? Oh, perfect — I didn't realize you were taking questions. So first, let me say this is amazing work, so thank you so much for doing it and presenting it; it's really great to have this done. Can you clarify what you mean when you say that 1-RTT and 0-RTT are both disabled? I understand what it means to have 0-RTT disabled; if you have 1-RTT disabled, that means we don't have TLS.
L
When you do the proof, normally in the model you're going to send the messages that are sent at the end of the handshake, encrypted as application data. But then, for the proof, we assume that they are —
L
We assume — we hope — that adding those 1-RTT messages at the end would not impact the proof, because we managed to run some scenarios where we removed almost all the features and enabled just 1-RTT, and it didn't change anything; but in terms of verification, it really made ProVerif blow up. I used a server with 500 gigabytes of memory, and in one scenario I was going over that — that's the point where we really reached the limit of the hardware.
N
Now, you have customers that have to deal with network inspection. So if you were to do this exercise with a more complex environment — where you have people who have to do data loss prevention because they are concerned about compliance, about GDPR, about mass exfiltration, about attacks, about all sorts of other things —
L
Well, I mean — yeah, to be completely honest, I cannot give you a definitive answer, because it will depend on what your aim is — well, not the aim, but exactly which features you would need to see appear in the model. But theoretically, yes, we could model it and extend the model to add those features.
H
Yeah, go ahead, Hannes. — Okay, hi, this is Hannes. Thanks for doing the work. I was wondering whether you have published a paper and released the scripts that you had shown; it would be interesting to verify some of that work. Is that available somewhere already?
L
Yes — there's a public GitHub repository that is available online, and the work is an ongoing submission to CCS this year. Okay, so —
D
No, no — but there was a physical line of two people. Hi — so again, thanks for this work. Can you go back two slides, to the assumptions? Yeah. So again, I asked this in the chat, but I just wanted to raise it: I think there's also an assumption here that the same ECH configuration is in use for both BS1 and BS2.
L
Yes, yes, of course, you have to use — wait, sorry, between —
L
No, not necessarily — I mean, typically the client-facing server can have several possible configurations, so of course its public key — wait, no, no.
D
Right. So, given that those are read from the DNS entries of the backend servers, they could differ — because of caching in DNS or something — and you might even have different groups or something. So it would be nice to try and write down the assumptions, so that people have some guidance as to, you know, what not to do in terms of publishing ECH configs in DNS.
L
Yeah, indeed, I agree — thanks. Yeah, in terms of some of the implementation details, I'm not completely aware of, you know —
L
— how the public key of the fronting server is linked with the backend servers; I think Chris would be able to answer those questions in more detail. Yeah.
J
Jonathan Hoyland, Cloudflare. I just want to say this is super awesome work — I think it's so exciting. How does ProVerif handle using 500 gigs of RAM? Does it just choke, or does it work?
L
Well, it does work. The thing is that ProVerif translates your process into Horn clauses, so it has to store a huge number of clauses; the storage, in the end, doesn't really cause ProVerif any problem — it's mostly, you know, the limitation of your server. So from ProVerif's point of view, it doesn't have any problem handling 500 gigabytes.
L
I mean, from my experience — it was the first time that I had to handle that much memory for a model, so I'm not going to say definitively that it's not a problem — but apparently the only error we got was that, every time we would reach the memory limit, the process would just be killed by the system, and that's it.
P
Okay, good morning — can you hear me? Yeah, great. So I'm here to give a quick update on hybrid key exchange. And I guess I saw in the chat there was a discussion of the word "hybrid": hybrid in this setting is meant to refer to the use of multiple algorithms together.
P
So the motivation of this draft — and I've presented this before — is to permit the simultaneous use of traditional and post-quantum key exchange in TLS 1.3, and thereby enable early adopters to get post-quantum security without discarding the existing security that might be offered by current algorithms. That would reduce the risk from a break of one algorithm, and help maintain standards compliance during the transition to post-quantum cryptography.
P
So in this document the goals are to define the mechanisms for negotiating and establishing a shared secret using hybrid key exchange, but there are several non-goals. It is not a goal to select which post-quantum algorithms are actually used in TLS — that's the ongoing work of NIST, as well as the CFRG — and it's not a goal to do hybrid or composite certificates, or to do digital signatures.
P
Some of that is being done by the LAMPS working group, and other parts will have to be done later by this working group. So the mechanism in this document is just to define a new key exchange group for each desired combination of traditional and post-quantum algorithm parameter sets; that will act as a new, opaque key exchange group, and from that main idea, negotiation is straightforward.
P
Using new named groups for each combination — which will have to be standardized subsequently — key shares are conveyed by concatenating the key shares for each algorithm and submitting them in the existing key_share fields, and shared secret calculation is done by concatenating the shared secrets, just as byte strings, and inputting that into the key schedule. And I note here that concatenation is a combiner for key material that's been approved by NIST as maintaining FIPS compliance, if one of the key material inputs is FIPS-compliant.
P
The main thing since I last presented was a discussion on the mailing list, started in August 2021 by Nimrod Aviram and his colleagues, about whether concatenation is in fact safe to use. They started from the premise that if the hash function used in the combiner is not collision-resistant, then it may be possible to learn some keying material. The conditions of their scenario were that the hash function is not collision-resistant, and that collisions can be found within the lifetime of the TLS session; that the first byte string in the keying material is variable-length; and that the ephemeral keys used in the second component are reused for a long period of time. And this is similar to the CRIME attack —
P
it kind of works in a byte-by-byte scenario. So this is very interesting, and it obviously has some significant assumptions: asking for security in a world where the hash function is not collision-resistant is a challenge, and there are certainly impacts on other parts of the TLS protocol as well.
P
For this specific attack, it ends up not applying: while the first and third conditions could plausibly hold, the second condition — that the first keying material component is variable-length — is not satisfied. In all standardized TLS 1.3 Diffie-Hellman groups, the shared secret component is fixed-length, and so it is not possible for this attack to apply to the draft as stated. So at that point we decided that meant we did not need to make any changes to the draft. Now, this is still a worthwhile exercise: given long-lived, hard-to-upgrade implementations,
P
how should we design our protocols to be robust against algorithm failure? And it may be worthwhile for the TLS working group to further consider the role of collision resistance in the protocol design of TLS 1.3 overall; but at this point we don't see a need to make adjustments to the hybrid key exchange draft as a result of this discussion.
G
Yeah, thanks for this, Douglas. On the previous slide, I think it was — when you said that we're not making any changes to the document — have we stated our assumptions very clearly? Okay.
P
There are statements in the document — sorry, I guess when I say no changes were made to this draft, I mean there are no technical changes to make to this draft. We did mention this scenario, and there is a clear statement that this document is only meant to be used with algorithms that have fixed-length shared secrets — to be clear that we require assumption B to not be satisfied. — Perfect, thanks.
D
Stephen? — Hi, Stephen here. Just on the question of working group last call: I mean, generally, I think a lot of this work is being done too soon — all this post-quantum stuff; I might be in the minority there, and that's fine. It would seem very weird to have a working group last call on a protocol document where the algorithms that are used are unknown.
D
So if we're starting a practice of parking documents until people implement them, maybe we should also park documents until the actual cryptographic algorithm details are understood.
A
And I think we could have a working group last call and not move the document forward after that, depending on — you know, to kind of get feedback on the current state.
B
All right, hey — this is Sean, just to use chair prerogative to jump in here. The only thing I would add to the last bullet there is, you know: could we move to a working group last call — it doesn't have to be the last working group last call. The problem is, I think, that a lot of times people don't really review drafts unless we start to say "working group last call", and I think Doug's only point is: hey —
B
we want people to look at this, so that when the algorithms are actually announced, they can move quicker to implementation. I don't want to mischaracterize you, Doug — is that basically what you're thinking?
B
Oh yeah, absolutely. So when we first started this whole draft, that was the idea — you know, we're not going to finish this until the algorithms actually get picked. And that's supposed to happen any moment now — possibly, you know, today; who knows, maybe tomorrow — but the algorithms are supposed to be coming fairly quickly, and so I think it's just about getting this draft into the right place to launch. So sorry for using chair prerogative there to jump the queue.
C
Yeah, a working group last call seems appropriate to me here; I have no problem with parking it until the algorithms are announced. As far as I can tell, no significant part of one's review of this document actually depends on what the algorithms are, because this is intentionally algorithm-agnostic.
C
So if we assume the algorithms are sound, then, you know, it should not matter to me which algorithm it actually is — which is fortunate, as I don't understand the algorithms. Could you go back to your slide about the active attack, please?
C
Right. So I think your proposed solution is fine, but I want to push a bit — let me just add something in advance before I push. So suppose we were to introduce, you know, a TLS 1.4, or an extension, which had a different — well, a more collision-resistant — sorry, a more length-ambiguity-resistant structure, right?
C
You'd still have to do the older mode as well — so you have the structure we have now, and then some future collision-resistant structure, and we have a way of negotiating between them, and clients and servers have to be able to switch between them. Is this attack still going to be possible, or will the fact that the client and server will negotiate the stronger version prevent the attack?
P
So you're imagining a scenario where there's this version, which is only used with fixed-length elements, and then there is a version that can be used with variable-length shared secrets, but has some protection mechanism — padding, or a different combiner — that is meant to protect when using variable-length secrets?
C
That's a good question. I was actually imagining that we — stupidly — relaxed the restriction on it being fixed-length secrets, and you use this version. So I was imagining — so I think, what I hear —
C
Let me see if I can recap what I hear you saying, which is: this is entirely safe because we're requiring fixed-length secrets. And so I think what I had in mind was: imagine we actually, foolishly, you know, allowed variable-length secrets. But I think the question that you're phrasing still is a reasonable question, because we still could have —
C
we still have the possibility of the two versions you described: the existing one —
P
Right, okay. So if there are two modes of operation, both of which are secure — where, you know, in the variable-length setting there's appropriate protection, and in the other setting it is ensured to use fixed length — my intuition tells me that would be safe, that there wouldn't be a downgrade route. If one of them is weak — for example, if one did allow the one on the screen here to be used with variable-length secrets —
C
— that's a not-very-interesting question. So I think the relevant point, as I take it home, is that if we were ever in the future to attempt to specify a variant of this that — as you say, potentially through padding or whatever — dealt with variable lengths correctly, as opposed to making the algorithms —
C
So if we have variable-length algorithms, we have two ways of dealing with that: one is that the algorithm is made fixed-length under the hood, and the other is to have a version of this that deals with variable-length algorithms. And if we were to do the latter, then we'd have to have a filter in the negotiation system
C
that said you can't do a hybrid key exchange with a variable-length algorithm unless you also have the new combiners in place. So that suggests — just as a practical matter — that we're probably really committing to having fixed-length algorithms later, because that filter is annoying. I mean, we could do it if we had to, but that kind of pushes us down this path.
P
Thanks, yeah. I will mention that Aviram and his colleagues did propose a combiner that does work with variable lengths, using a somewhat more extensive hashing construction. So it is possible to do that, and if the working group prefers to do that, we can; but we chose not to, and to focus on fixed length instead.
Q
My understanding of the literature around key combiners is that it will be very hard to prove the security of this whole protocol absent assumptions like the hash function being a random oracle.
Q
So I get that people are not worried about this attack, and this is fine by me; I would just note that we are kind of baking in something that will be hard to prove later.
Q
Alternatively, if we're saying we don't plan to ever use a different key combiner throughout TLS 1.3, then we should probably be explicit about it.
A
And I think we'll move to the next one — Nicholas.
R
Yeah, hi — Nicholas, NSA. I just wanted to make a comment, to follow up on the working group last call. So the document is a little — it's not very specific on reuse of ephemeral keys, and you know that that is an issue; and it does note that it's an issue with some of the lattice-based mechanisms, because, well —
R
you know, it's common with the lattice-based mechanisms. And I guess part of the reason why it's kind of a little loose on the guidance, I suspect, is because we don't actually have the specification written for how certain things will be done with key encapsulation. You know, the competition's about done — they have their candidates in mind and stuff — but the details aren't —
R
— aren't, you know, set; and they're not going to be set until NIST writes the standard, and that'll take a little time. So I just want to stress that it's a little early, because you can't even pin down, you know, how to handle reuse of keys without having that other component finalized.
P
Okay, yeah. So the document does not say that ephemeral key reuse is prohibited, because TLS 1.3 — in fact, if my recollection is correct — does not prohibit ephemeral key reuse.
Q
Yes, all right. So hi everyone — my name is Nimrod Aviram; my co-author is also here virtually, and we would like to deprecate obsolete key exchange methods in TLS.
Q
For email, the picture is more complicated.
Q
The approach of letting other market segments be doesn't work, because it does not isolate the web from potential security harms. For example, consider DROWN, a Bleichenbacher-type attack from 2016.
Q
These types of attacks — Bleichenbacher-type attacks — allow RSA signature forgery, and the forged signature can then be used for a man-in-the-middle attack against web clients and hosts, even if they don't support RSA key exchange at all. So even fully deprecating RSA key exchange from the entire web, in and of itself, would not isolate the web from problems with RSA.
Q
For finite field Diffie-Hellman, the document merely requires the minimum properties that guarantee security: that is, ephemerality, appropriate group size, and appropriate group structure.
Q
As for static elliptic curve Diffie-Hellman: these cipher suites don't provide forward secrecy, so they are already listed as "not recommended"; and also, because they reuse secrets, this opens the door to a class of attacks that exploit the secret reuse to gradually learn cryptographic secrets — and we've had quite a few of those over the years.
Q
Some people have argued for full deprecation of FFDHE, even when fully ephemeral. It did not appear to me that there was consensus for this, and I would argue it is probably unnecessary, because the requirements in the document should be enough to provide security.
Q
But if someone needs FFDHE and can operate it securely — which is roughly equivalent to the requirements in the document — then I would argue we should probably allow it. And as for the last bullet point: we still have a few open questions around FFDHE, namely, how would the client verify that the group is safe, and what would it do if it can't verify that the group is safe?
S
Okay, so there was a conversation in the Jabber room. This looks like an update of RFC 7525 — the recommendations for how to use TLS — which is really more about configuration, because I assume this is all about TLS 1.2 and lower, because 1.3 already deprecated all these things.
S
So UTA wrote this RFC 7525 about how you should configure your TLS 1.2 and lower, and if you want to revise that one, that's also a great idea. I don't think you should just deprecate the protocols one by one, or the algorithms one by one, rather than putting it in this big how-to-configure-TLS-1.2 document.
Q
Do you imagine a scenario where someone would use RSA key exchange, or, say, static finite field Diffie-Hellman? Do you imagine a scenario where someone uses them?
S
I'm just saying that this sounds like part of what UTA has been doing, rather than something that this working group should work on.
S
I don't think I'm going to fix OpenSSL 0.9.8 to get TLS 1.2 safer; people either upgrade to the latest OpenSSL and get 1.3, or they configure their 0.9.8 to not have the bad stuff. So it looks to me like what you're specifying is how to configure, rather than changing the protocol. All right.
T
Hi. So I think — I mean, I don't know, UTA is also a place to do this, but we have had other drafts in this working group to deprecate things that are broken. We recently published RFC 9155, which deprecated SHA-1 and MD5 signatures.
T
I think we also deprecated TLS 1.0 and 1.1, and we had RC4 before that. So I think this is at least consistent with what we've done historically. If we want to do something different from the historical approach, that's fine, but otherwise I think this draft is perfectly reasonable.
Q
Oh, by the way, I also spoke with someone in the UTA group, whose name escapes me, and they said they would want our working group to first deprecate some algorithms before dis-recommending them. — Was there anyone else in the queue?
U
If we had done the work once — and right now the right forum for that, in my opinion, the right venue for that, is UTA, and specifically 7525-bis.
A
Okay — Panos is next. Okay, yeah; I think we chairs will have to — we can discuss with the chairs and see how to proceed.
W
I'm back — hopefully you folks can hear me much better now; it was the headset. All right, hello everyone. My name is Panos Kampanakis, with AWS, and I'm here to talk about a new draft we submitted about a month back with my co-authors Martin, Bas and Cameron. So this draft is practically an attempt to revive Martin's old TLS intermediate-suppression draft from three years back, I think — and, you know, it's a draft that tries to address a well-known, well-understood problem.
W
Basically, we all know that TLS is heavy on authentication data. We know that it includes a bunch of signatures and public keys that come down as part of the identity certificate and the certificate chain, to authenticate the identity; and we know that it includes the CertificateVerify signature.
W
So we know that there are a bunch of signatures and public keys included in these handshakes, and that could introduce issues for some use cases. For the post-quantum use case: all post-quantum signatures that are being considered for standardization by NIST have relatively big public keys and signatures.
W
There has been research — research papers have been published on this — and Bas published a very good blog post where he tested big certificates and certificate chains, and he showed that if we go over 10 KB, we start seeing double-digit slowdowns.
W
So there could be some slowdowns introduced by post-quantum signatures; and QUIC would also see at least one extra round trip, because of its amplification protection.
W
So John Mattsson has drafts on this, and he has talked about it before. And finally, I've seen some use cases in mesh networks, where there are constrained mediums — and, you know, certificate chains could cause slowdowns there, where the medium is constrained and everyone is fighting over just a little bit of bandwidth.
W
And the solution that we're proposing is to use the TLS flags draft in order to signal ICA suppression to the peer. So basically, we require the peer to pre-acquire a fresh ICA list —
W
that's the list of ICAs that someone would use to verify the peer's chain — and we require it to be fresh, in order to avoid failures in case there are entries that are expired, that shouldn't be used, or that are old. So we first pre-load an ICA list, let's say on the client, and then, you know, the client can signal to the peer, using a TLS flag, to say: don't send me your intermediate CAs — and that way lighten the protocol.
W
While we're proposing this for the post-quantum use case — we know the post-quantum use case would suffer — using a mechanism like that would keep the authentication data within a limit that's relatively acceptable, based on testing. So it would save one and a half or three kilobytes,
W
if we're talking about one ICA, for NIST's two leanest signature finalists — and if we're talking about other signatures, it's even more — or it would save three or six kilobytes, if we're talking about two intermediate CAs, for NIST's round-three leanest post-quantum signatures. So, you know, it helps the post-quantum use case a little bit, and it also helps the EMU use cases that I mentioned earlier.
W
So we wanted to give a little bit of data on the ICAs, or the ICA list. If we're talking about web PKI, the total number of intermediate CAs would be around a thousand — let's say 1500 — intermediate CAs in the list that we would need to keep, and that could amount to one or two megabytes of compressed data that would have to live on the client — on the verifier, let's say. So these are not —
W
For the use cases where we have a very limited set of peers and a very limited set of ICAs, the draft defines that you can allow the server to send its intermediate CAs if it knows they're likely to not exist on the client — so that could be a constrained ICA list.
W
So, in that case the server may choose and say, "I'll send it no matter what, because I want to prevent failures." Or another use case could be: if there was a third party that was hosting these intermediate CAs, then the server could confirm whether its ICAs exist there or not, and in order to prevent a failure it could decide to send them if they don't exist in that ICA list.
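The server-side choice described above might look roughly like this; the function and its inputs are hypothetical sketches, not from the draft:

```python
def certs_to_send(end_entity: str, intermediates: list[str],
                  client_suppresses: bool,
                  known_ica_list: set[str]) -> list[str]:
    """Honor the client's 'don't send intermediates' signal only when
    every intermediate is believed to be in the client's preloaded ICA
    list; otherwise send the full chain to prevent verification failures."""
    if client_suppresses and all(ica in known_ica_list for ica in intermediates):
        return [end_entity]
    return [end_entity] + intermediates
```

A cautious server can also ignore the signal entirely ("I'll send it no matter what") simply by treating `client_suppresses` as always false.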
W
In this way, I also wanted to bring up that we have some similar precedents that would make the deployment of such a draft relatively easy. We know that Mozilla Firefox already has an ICA preload list. That list is not used for the same reason; it's used to prevent outages, but it's a list that's loaded on the browser already.
W
We also know that the browsers deploy some sort of list for revocation, whitelists or revocation lists, so there is a mechanism to deploy such lists to a browser. And we also know that the cTLS draft that Eric was presenting earlier also defines certificate compression dictionaries.
K
Yeah, I imagine there are going to be good questions, so I wanted to make sure there's time for questions, but also for the following presentation. I'm sorry to just sort of jump in like this.
W
The last slide I want to bring up is that there could be some challenges for web PKI, and Ryan has brought them up on the list before. We think that we can address them, but I would also suggest the group not forget that TLS is used in a bunch of other use cases that could benefit from this, and I would like to ask for a discussion on the list or on the GitHub repo, and we will not.
T
This draft assumes that the list in all the browsers gets pushed, gets updated basically instantaneously, whereas there's going to be this long tail of clients that are still on an old list, and so you're basically going to be stuck with the servers always assuming they have to send the intermediates, or some connections just won't work. I could imagine a more complicated scheme where we name the lists or something, but that would be slightly higher-hanging fruit than this low-hanging fruit. Maybe for non-web use cases it would be useful, but it sounded like you were pretty interested in the web case.
X
I think the AuthKEM update is very exciting stuff; I can do it from my phone. We don't have a lot of time. Thanks everyone for being here, of course, and also those that are in unfortunate time zones. I'm just going to give a brief update about the AuthKEM draft.
X
The idea is basically: instead of doing authentication via a signature, we do it via a key exchange, because that is better post-quantum, possibly smaller, more efficient. But we realized that we hadn't had a lot of space in the RFC to also explain why this is a good idea, because it's mainly focused on all the implementation stuff, and we had a bunch of questions. So in this presentation we have basically: what do we need from you to make this thing better, a bunch of the small changes that we made and some bigger changes, and we have this AuthKEM abridged, sort of behind-the-scenes document.
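The signature-to-key-exchange swap can be sketched at the protocol-flow level. The "KEM" below is a mock built on SHA-256 purely so the flow is runnable; it provides no real security, and the function names are invented for illustration:

```python
# AuthKEM-style flow: the client encapsulates to the server's long-term
# KEM public key, and the server authenticates itself implicitly by being
# able to derive the same shared secret (no handshake signature needed).
import hashlib
import secrets


def mock_kem_keygen() -> tuple[bytes, bytes]:
    """Mock keygen: the public key is just a hash of the secret key."""
    sk = secrets.token_bytes(32)
    pk = hashlib.sha256(sk).digest()
    return pk, sk


def mock_kem_encap(pk: bytes) -> tuple[bytes, bytes]:
    """Mock encapsulation. A real KEM's ciphertext would hide the shared
    secret from everyone except the secret-key holder; this mock does not."""
    ct = secrets.token_bytes(32)
    ss = hashlib.sha256(pk + ct).digest()
    return ct, ss


def mock_kem_decap(sk: bytes, ct: bytes) -> bytes:
    """Mock decapsulation: recompute the shared secret from sk and ct."""
    pk = hashlib.sha256(sk).digest()
    return hashlib.sha256(pk + ct).digest()


# Flow: server publishes pk in its certificate; client encapsulates;
# matching shared secrets stand in for signature verification.
server_pk, server_sk = mock_kem_keygen()
ct, client_ss = mock_kem_encap(server_pk)
server_ss = mock_kem_decap(server_sk, ct)
assert client_ss == server_ss
```

In the actual draft the shared secret feeds the TLS 1.3 key schedule; the equality check here merely stands in for that step.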
X
So, let's start with: what do we need from you? Basically, we need feedback. It is not a small change, and we would like to know: does this fit for you, does this not fit for you, stuff like that. We also need some help; I'm a first-time RFC writer, I'm used to writing scientific papers, which is quite different, so yeah, that is still quite visible, and it's been mentioned a few times.
X
Yeah, the changelog. So we restructured the whole thing. We split the protocol diagrams from a lot of implementation details, and we hope that makes the thing a lot more readable. This is really the most important thing that we changed since revision 00, so you'll find that it now first introduces the arrows on the paper before it starts to go into how you put those arrows into bytes and then transmit them over the wire.
X
We don't want to sweep anything under the rug here, so we also very much try to clarify all of the open questions and the more tricky security-intuition bits, and we would really, really like help there as well.
X
So please ask your questions, either directly or on the mailing list, and we will also try to keep this document up to date. Some of the things that we have there, for example: why do we use KEMs for authentication, stuff like that. (How much time do we have? Five minutes.) So again, at the end of the month we should see announcements from NIST's PQ thing; the signature schemes are looking like they're going to be either quite big or quite annoying.
X
If you have, like, a small microcontroller that can't do floating-point math, it's quite probably going to be just one of those two options. So that is why we think KEMs work well. Why do we not do draft-semistatic is another question that we saw last time.
X
Why do we want to do it now is also something that came up, and our answer there is that really this work is very hard, and the post-quantum move is a really good opportunity to re-examine the things that we have right now and to see if that still works, instead of just scribbling "PQ" in front of all of the things. The last meeting already identified a rough edge, so we really see that this interaction is helping and we want to file down all of those rough edges, and that is going to take a lot of effort. "Is this finished enough?" was also something that came up. We have a lot of academic work going on.
X
We are working on a Tamarin proof; actually, we've mainly finished that work. We've extended the TLS 1.3 model, and Douglas has also done a model of the KEMTLS protocol. Research has a hard time finding out the practical stuff, though, and we think that also this interaction helps in that area.
X
Let's see, next, there we go. Other things to read are, like, why we added another handshake secret and why we put KEMs into signature algorithms, because clearly KEMs aren't signature algorithms, and just small stuff like that.
X
So again, we would like to hear from you if you want to help or have feedback, if you want to comment on any of the open issues, or want to point out something that we missed, unanswered questions. Really, there are no dumb questions. And also if you have something that we should investigate, if you want to work with us on some experiment, or really do whatever; if you want to say that we're being idiots and forgetting something really stupid, that would also be helpful. So yeah, that is, I think, my ask to the room today.
X
If you come up with something later, our email boxes are always open; I don't think I'm particularly hard to reach. And also, for those here in person, you can always come up to me with questions. And if that went very fast for you, I think the slides are on the datatracker, so you can also go through them at your own pace, but yeah, I was asked to hurry up.
A
I think that draws TLS at IETF 113 to a close. Looking forward to seeing everybody online or in person at the next IETF in Philadelphia, and on the list in the meantime.