From YouTube: IETF111-TLS-20210728-1900
Description: TLS meeting session at IETF 111, 2021/07/28 19:00
https://datatracker.ietf.org/meeting/111/proceedings/
A: All right, let's get going. Welcome, everyone, to the TLS session for the week. Lovely to have you here to talk about all the great things related to TLS.
A: This is the Note Well. If you've been in meetings earlier in the week, you may have seen this already. It just notes your obligations regarding contributions and IPR-related things, and also how you should engage with others in general. Please be respectful and encourage others when in the mic line, and we should have a pleasant and productive meeting.
A: As usual, we need a minute taker and a Jabber scribe. Fortunately, we have both, thanks to Jonathan and Rich. We don't do anything with blue sheets, thanks to technology doing this automatically for us, which is fantastic, and hopefully we can keep doing that in the future.
A: As a reminder, when you are in the queue to say something at the mic: because the people taking notes may not be looking at the queue, it's helpful if you just say your name, so we can capture that accurately in the notes. You don't have to state your affiliation. Here's our agenda. We're going to have three presentations on active working group documents — cTLS, the hybrid key exchange draft, and then ECH — and then we'll finish up with a number of new or returning drafts.
A: Before moving forward to talk about document status, does anyone have any comments, suggestions, or anything they would change in the agenda?
A: All right, so here's a quick update as to where we are. I finally got 8996 out the door, so thanks to all involved in that effort, and more are not far behind it. In the RFC Editor queue we have ticket requests, DTLS 1.3, and the connection ID draft. With DTLS 1.3 finally moving forward, this should free up a lot more space for other things that the TLS working group is producing, so expect to see them at some point in the near future.
A: We have a number of drafts that went through the working group last call process but need revised individual drafts, specifically the exported authenticators draft and the external PSK importer draft, both of which are just blocked on people making those changes, and we should poke the people who are responsible for those changes, among others.
A: So we'll do that and then hopefully move them forward to the IESG. Also — I can't recall exactly when — the external PSK guidance draft, which was meant to go alongside the external PSK importer draft: we've requested publication for that, and it's now just sitting in the AD queue for review, which I'm sure will happen shortly.
A: We have a number of other drafts moving along as well, specifically delegated credentials and deprecating MD5 and SHA-1. I don't know, Sean, if you want to say anything specific about the MD5/SHA-1 one.
B: So, on the MD5/SHA-1 draft, there was one outstanding comment. Daniel was nice enough to go back through his email and say which ones were resolved, and the one outstanding comment is whether we should remove some text from 5246 through the updates that we're doing in this draft. It's on the list — take a look. The delegated credentials draft is stuck on me reviewing Daniel's response; I'll get to that shortly. Back to you, Chris.
A: Thanks, Sean. We're also expecting, most likely, a change or an update to the DC draft that includes some test vectors for people who are hoping to do some interop, so expect to see those two things shortly. We currently have two related drafts in working group last call.
A: The working group last call is ending this week, so if you've not already reviewed them and left comments, we would greatly appreciate it if you did. Those are the TLS flags extension draft and the cross-SNI resumption draft, which is really quite small now that it simply adopts the TLS flags extension and uses, you know, the first item in the registry. ekr, do you have a question?
C: Yeah, so I just wanted to flag that we got a comment from John Mattsson, and I think Magnus as well, that it's unfortunate that the epoch number is so short. I think there are a couple of possible approaches to fixing that in 1.3, but I think we probably should fix it.
C: The most likely answer, I think, is to make the epoch longer and then either extend the fixed part of the AEAD nonce or truncate it. So I think we would need an analysis of both of those, but these would be something worth reporting.
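For context on the trade-off being discussed, the per-record AEAD nonce in TLS 1.3 is formed as in RFC 8446 §5.3; DTLS 1.3 additionally folds the epoch into the record-number space, which is why widening the epoch interacts with the nonce layout. A minimal sketch of the base construction:

```python
def per_record_nonce(write_iv: bytes, seq_number: int) -> bytes:
    # RFC 8446 section 5.3: left-pad the record sequence number to the
    # IV length, then XOR it with the static write IV. Changing how many
    # bits the epoch occupies changes how this space is partitioned.
    seq = seq_number.to_bytes(len(write_iv), "big")
    return bytes(iv ^ s for iv, s in zip(write_iv, seq))
```

This is only the RFC 8446 base mechanism, not the specific epoch-widening proposal under discussion.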
A
Okay
right,
thank
you
for
no,
it's
fine!
I
had
completely
forgotten
that
that
was
no.
A
Oh,
no,
not
your
fault,
so
yeah
there's
a
well
once
we,
you
know
figure
out
resolution
for
that.
We'll
likely
do
a
quick
short
working
group
last
call
just
to
make
sure
that
it
gets
the
additional
review
that
it
needs
and
then
move
it
forward
from
there.
So
we'll
work
with
ben
and
the
ads
to
make
sure
that
it's
parked
in
the
where
it
is
right
now
and
doesn't
move
forward
without
this
particular
change.
A
All
right,
I
heard
you
have
anything
else,
sir
I'll
just
take
you
out
of
the
queue
and
then,
as
far
as
improv
strats,
we
have
the
usual
suspects,
ech
compact,
tls
and
a46
biz,
a
hybrid
key
exchange
and
then
return
reachability
check
for
1.2
1.3.
We
talking
about
some
of
them
today
and
others
are
just
you
know,
sort
of
slowly
making
progress
on
the
side.
B: There are a number of related drafts that we've talked about that haven't really made it in, because we were trying to get DTLS 1.3 and others out the door. So maybe we can revisit bringing those back. I know Jonathan had a draft that we were looking at and thinking about trying to get adopted, so we still have a couple in the wings that we're thinking about for filling the queue back up.
A
Okay
and
if
we're
missing
anything
else,
you
know
please
let
us
know
and
we'll
you
know
we'll
we'll
get
on
that
as
best
we
can
all
right
with
that.
I
think
we
can
jump
over
to
the
the
first
presentation.
Ctos
ecker.
Will
you
be
giving
the
presentation.
A
Like
how
do
you
do
the
slides,
yeah
yeah,
so
I'm
gonna
stop
sharing
and
then
there's
a
button
in
the
top
left
to
share
preloaded,
slides
or
join
the
slide
queue.
I
forget,
I
don't
know
exactly
what
I.
A: All right — okay, so, over to you.
C: Now that DTLS is allegedly almost done, I can turn my less-than-focused attention to this topic. Oh yes, it is Comic Sans all the way, Chris. So — yes, next slide. I don't know why I have the stupid wrong title, because this is draft-ietf-whatever now, but we all know that part. Next slide.
C: So, okay, right. We did a -03 update. This is largely editorial, and the idea was to focus — based on our experience as we started out with this — on the template-based specialization, and in particular the DH-based handshake. That's where most of the value proposition is.
C: Importantly, I think the thing that we're getting better about is that the template system itself is generic, so we don't have to get every specialization keyword in now; we should just have the system in place. Because, when you think about this as a template, they say, like —
C: If you have this lightweight template system, then you can remove this piece or shorten this piece or whatever, and since both sides have to agree on the template anyway, you can just add new keywords. Nevertheless, there are a few issues I want to discuss, at least briefly. I think the first is the varint encoding. When we first did this, before we invented the template thing —
C: We had this varint encoding, which was hoping to save some space, because TLS has a bunch of situations where you're encoding, say, a length in 24 bits even though the thing can only be one octet long in practice. But it doesn't actually help much, and I think it turned out to be a crufty kind of encoding: it doesn't really save that much space, and it's a real hassle for a TLS stack which does both, especially in places where you don't have nice functions that do that work. Many stacks have functions that do the work in some places, and in other places just do, you know, byte zero left-shifted eight, plus byte one. So I think we're proposing to remove this and just live with the fact that the encoding is slightly less compact. Quick check, then: I'd like to hear from anybody who really liked this — speak up, and if not, I'm going to do it.
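The contrast being described — the inline "shift left eight plus the next byte" length decode versus a variable-length integer — can be sketched as follows. The varint shown here is QUIC-style (two length bits in the first byte) as an illustration of the idea, not the exact cTLS wire format:

```python
def read_u16(buf: bytes, off: int) -> int:
    # The ad-hoc fixed two-byte big-endian decode many stacks write inline:
    # "byte[off] left-shift eight, plus byte[off+1]".
    return (buf[off] << 8) | buf[off + 1]

def read_varint(buf: bytes, off: int):
    # QUIC-style variable-length integer: the top two bits of the first
    # byte select the encoded length (0b00 -> 1, 0b01 -> 2, 0b10 -> 4).
    first = buf[off]
    length = 1 << (first >> 6)
    value = first & 0x3F
    for i in range(1, length):
        value = (value << 8) | buf[off + i]
    return value, off + length
```

Supporting both styles throughout a stack is the maintenance cost the proposal removes.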
F: Yeah, I don't have an objection, and I haven't thought much about this myself, but I was curious whether folks really do plan to put this in the same stack as their normal TLS stack, or would this be a totally separate thing? I guess that's my question: how do folks imagine implementing this?
C: Well, yeah, we've thought about this. As Hannes was saying — he worked on the code — it was really hard. One definite way we're thinking about this is actually as a compression layer below TLS, and in that case it would be the same stack. In any case, because your MAC is over the transcript, you have to have both sets of code.
C: Okay — hearing no objection, I'm going to go ahead and do it; if someone could note that for me, that'd be helpful. Okay, next slide. I keep just saying it, but here it is: I think we've known for a while that you can do much better in terms of bandwidth with semi-static Diffie-Hellman than you can with signatures — those integers are bigger.
C: So 1.3, as people recall, focuses on the signature-style modes, but there's been so much talk about OPTLS-style modes. The good news is that there's already been work on this, and we have, I believe, an accepted draft on it, which we're just not doing very much on — so cTLS can just pick that work up immediately as soon as it's done.
C: While I think it'd be good for cTLS to have this functionality, we can do it separately, and — as you can see from this presentation — I'm a big fan of doing everything separately. It's been floated that we should add this to this draft, but I don't want to unless somebody is really, really going to argue for it.
C: Martin asks in the chat whether this would be a cTLS extension — I assume you're talking about semi-static. No: there are no cipher suites, so basically what would happen is that as soon as you had semi-static DH for TLS 1.3, it would just work for cTLS in the usual fashion.
C: So, going back to point compression: uncompressed curve points waste, you know, 32 octets or whatever it is — yes, David Benjamin, exactly, I just saw that in the chat — they waste the y-coordinate, right? Everybody knows this. But the good news is that, once again thanks to negotiability, this is straightforward.
C: All you have to do is add new code points in named groups — for, you know, NIST P-256 compressed — and if anybody wants this, it's easy to do. So anybody who is excited about this: please write an RFC, or a document that registers those code points. Again, I don't think we need anything cTLS-specific, and of course this is not necessary for the CFRG curves, 25519 and 448, which already encode only the x-coordinate.
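The saving being described follows from the SEC1 point formats: the compressed form replaces the y-coordinate with a one-byte parity tag. A minimal sketch for a P-256-sized curve:

```python
def uncompressed_point(x: int, y: int, coord_len: int = 32) -> bytes:
    # SEC1 uncompressed form: 0x04 tag, then x and y (65 bytes for P-256).
    return b"\x04" + x.to_bytes(coord_len, "big") + y.to_bytes(coord_len, "big")

def compressed_point(x: int, y: int, coord_len: int = 32) -> bytes:
    # SEC1 compressed form: a tag carrying the parity of y, then only x
    # (33 bytes for P-256) -- the receiver recovers y from the curve equation.
    tag = 0x02 if y % 2 == 0 else 0x03
    return bytes([tag]) + x.to_bytes(coord_len, "big")
```

This is just the encoding; registering compressed variants as new named groups is what the speaker suggests doing in a separate document.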
C: Now I come to two things that actually do matter, and let me get ahead of these by saying what I said at the beginning: I think it's important to break out the idea of specialization — the sort of basic specializations you need — from everything you'd ever want to do to make TLS 1.3 smaller. So what I'm going to propose here and on the next slide is that we don't do anything now; in parallel, we look at analysis, and if the analysis pans out, then we can add specialization keywords — and if not, we don't have to; we can do it later.
C: So, to get back to this particular point: the TLS 1.3 handshake is designed in such a way that it does not require the record layer to deliver security — namely, the Finished messages provide a MAC of the entire handshake. This gets called, like, UC or CK security, the idea being that you don't have to make any assumptions at all about the record layer and you can still guarantee the handshake.
C
You
run
the
handshake
at
the
clear
and
you
would
and
be
sure
you'd
negotiate
it
properly.
Of
course,
let
me
just
be
clear.
You
would
not,
of
course,
be
protecting
the
things
you're
supposed
to
protect
that
are
in
the
hand
shakers
but
to
secret,
but
you
would
you
have
the
the
the
cryptographic
guarantees?
C
Negotiation
would
be
correct,
the
no
we
don't
john,
the
so,
but
but
but
the
fact
that
we
do
run
tls,
1.3
and
ctls
over
an
encrypted
transport
and
current
record
layer
means
that
the
finished
messages
have
two
integrity
checks
on
them.
They
have
the
one
that's
in
the
finished
message
itself
and
then
they
have
the
aed
authentication
tag
and
so
analyzing.
C
This
case
is
somewhat
hard,
and
so
one
of
the
reasons
we
didn't
do
this
in
tls.3
and
so
the
point
being
that
in
some
protocols,
simply
omit
that
final
integrity
check
and
they
say,
like
I'm,
relying
on
the
record
layer
to
guarantee
things
are
correct
and
you
can
either
do
this
by
having
like
sending
an
empty
message,
or
you
can
do
this
by
having
it
be
that
the
first
you
know
the
first
data
message
guarantees
answer
completion.
In
any
case,
this
is
not
like
making.
C
This
change
is
not
straightforward
and
chris
will
just
put
some
analysis
in
the
chat
that
indicate
that,
like
you
actually
have
to
work
very
hard
to
convince
yourself,
this
is
okay,
so
so,
while
it
may
be
possible,
I
don't
think
we
should
do
this
today,
and
the
good
news
is.
C
If
we
were
to
do
this,
it's
simply
a
new
keyword
that
says
like
emit,
finished
and
then
and
then
and
but
it's
like
technically
it's
easy,
but
from
an
analysis
perspective,
they
get
to
be
able
to
restate
what
the
analysis
guarantees
are
and
there's
something
like.
Once
you
get
the
first
application
layer
data
message,
then
you
know
things
are
okay,
oh
and
this
does
not
guarantee
that
any
downgrade
in
the
case
of
catastrophic
failure
of
the
record
layer.
C
So
that's
actually
probably
the
second
important
point,
which
is
imagine
you
have
like
a
record
layer
with
a
really
really
like
weak
authentication
tag,
so
you
said
like
I
was
only
willing
to
live
like
a
32
authentication
tag.
Now
it's
not
clear
what
the
security
of
the
the
finished
message
is,
because
that
would
be
the
guarantee.
C
So
that
would
be
the
guarantee
of
the
handshake
right
yeah,
as
martin
says
ccm4,
so
so
scoping
this
out
is
difficult,
and
so
so
I
I
I
was
not
through
this
today
because
I
wanted
ctl
is
done,
people
who
are
enthusiastic
about
this.
C
D
C
I
think
your
currently
can
be
reordered
because
I'm
not
sure
what
that
gets.
You.
C
Okay,
so
so
it's
quite
possible
okay,
so
so
I
may
be
being
so.
That's
an
interesting
point,
so
I
think,
reinforces
my
point
of
view.
Let's
not
be
hasty
but
bear
in
mind
that
I
am
suggesting
removing
the
finish
not
suggesting
root,
suggesting
removing
the
the
the
the
aed.
C
But
but
yes,
I
think
it's
I
think
it's
entirely
possible
that
it's
not
safe
to
remove
the
finished
in
the
post
handshake,
but
it
is
safe
to
remove
it
in
the
handshake.
So
so
I
think,
like
again
like
I
think
we
like
this
is
why
I
might
want
to
be
cautious.
Here
sounds
good.
C: Next slide — here I'm just doing it myself, right. The second place of big wastage is the random values. The TLS 1.3 random values are incredibly long: 32 octets.
C
Now
it's
true
that
we
use
a
bunch
of
the
that
for
like
random,
like
signaling
of
various
kinds,
so
just
removing
them
isn't
like
totally
straightforward,
but
it's
potentially
possible,
and
if
you
read
the
original
sigma
proofs,
they
actually
assume
there's
no
nods
because
they
assume
the
diffie-hellman
shares
are
fresh.
So
there's
like
some
precedent
for
removing
them,
but
then
you
also
read
like
they
feel
like
the
sigma
paper
and
he
goes
like
this
is
a
terrible
idea.
C
So
I
don't
do
this
and
like
a
real
protocols,
have
like
have
like
nonsense
right
so
so
like
it
was
been
proposed
to
remove
this.
But
I
think
that,
and,
as
I
said,
there
are
protocols,
don't
have
this.
I
say
samara
wire
guard
being
launched
protocol,
but
I
think
for
tls.
I'd
want
to
see
like
much
better
analysis
before
I
was
comfortable.
C
Removing
this
and
again
you
can,
like
you
know,
there's
like
the
version
where
you
like
to
say:
there's
no
random,
there's
a
version
where
you
like
have
to
give
him
values
to
call
the
random
or
whatever,
but
they're,
all
the
same
kind
of
property.
There's
no
fresh
enterprise
at
the
random.
It's
also
really
important
to
mention
that
for
zero
rtt.
C
Of
course,
you
can't
do
this
because,
unless
you're
again,
unless
you're
definitely
doing
it,
putting
dv
element,
values
in
and
so
like
the
whole
thing
gets
like
really
hard
to
reason
about.
So
again,
this
scenes
analysis
get
some.
So
if
somebody
could
like,
if
people
are
excited
about
this,
like
please,
you
know,
send
me
papers.
C
Okay
next
slide
here,
I'm
just
gonna
keep
saying
that,
so
this
is
actually
a
pro
second
bench,
schwartz
mentioned
on
the
email
list,
which
is
that
we
could
do
encryption
for
client,
hello
and
server
hello.
C
A
lot
quick,
this
prudential
brand,
some
friends
like
nat
strip
streaming,
perhaps
there's
some
possible
number
of
possible
ways
to
do
this,
one
of
which
is
one
of
which
is
like
having
a
fixed
key
as
in
quick,
one
of
which
is
having
like
a
per
specialization
key
and
one
of
which
have
like
an
ech
based
key.
It's
not
entirely
accurate,
which
of
these
would
prevent
exactly
what
I
think
they
all
prevent.
C
Ossification
somewhat,
but,
like
maybe
some
of
the
windfall
streaming,
I'm
actually
not
entirely
sure
exactly
how
to
the
nastiest
things,
I
think
is
actually
a
little
complicated.
So
I
guess
my
question:
is
anybody
like
really
excited
about
doing
this
on
the
one
speak
up
for
it?
If
so
like
come
on,
can
we
see
some
pr?
Is
that
if
not,
can
we
not
do
it.
C
Good.
Thank
you.
G: Hi, Ben Schwartz. Thanks for the slide. I guess I just want to ask: do you think that this is something that can reasonably be done as a follow-up?
C
I
think
it
could
be
yeah,
but
I
have
to
actually
think
about
it.
I
think
I
I
mean
I
think
about
it.
I
think
maybe
I
mean
that
the
the
profile
id
is
in
the
is
in
the
record
header,
so
it's
very
early,
and
so
you
would
be
able
to
debug.
I
think.
F: Chris Patton — I just wanted to raise my hand in support of this. I think this is a great property of QUIC. I would be fine with the fixed key; I think it would be too complicated to try to use ECH for this. That's it. Okay.
C
Okay,
well,
so
so
so
I
think
you,
as
people
see
from
the
next
slide,
I'm
not
posing
to
like
finish
working
with
last
call
like
any
minute,
so
I
think
we
have
a
little
time
to
think
about
it.
So
I
guess
ben.
Hopefully
you
could
take
a
look
at
me.
Chris,
give
you
a
hand
and
we
can
cause
conclusions,
okay,
so
the
next
steps.
So
thank
you,
wood
for
filing
the
move
bar
end.
I'm
gonna
do
that.
I
think
we'll
spin
it
104.
C
I
know,
there's
been
some
work
on
implementation,
so
I'd
like
to
get
this
implementations
patent.
Are
you
still
do
you
want
to
talk,
or
you
just
didn't
remember
yourself.
C
So
I
think,
like
the
next
step,
I
think
is
like
for
us,
implementations
and
interop
working,
and
I
know
honestly
working
on
something-
and
I
know
richard
I've
been
working
on
something
mostly
richard
and
and
in
parallel.
We
can
do
these
these
pieces
of
work
that
people
proposed
and,
if
they're
and
then
just
like
race
them
to
the
finish.
C
But
I
think
like
that,
the
feature
set
the
features
that
are
close
enough
to
finish.
Now
that
if
we
had
like
interrupt,
we
could
just
shut
the
thing.
C
Okay,
you
know,
you
know
where
the
github
is
in
case.
You
find
things
that
I
did
not
notice.
A: All right, thanks, ekr. Can you stop sharing so we can pull up Doug's slides?
H: Okay, hello — I'm Douglas Stebila from the University of Waterloo. I just wanted to give an update on the hybrid key exchange draft with my co-authors, Scott Fluhrer and Shay Gueron. The motivation for this draft is to permit simultaneous use of traditional and post-quantum key exchange in the same handshake, the idea being that this would enable early adopters to get post-quantum security while still having the security —
H
That's
present
in
the
elliptic
curve
or
standard
dh
right
now,
and
we
might
want
to
do
this
because
of
some
potential
uncertainty
on
newer
cryptographic
assumptions
or
the
need
to
the
desire
to
use
post
quantum,
while
also
keeping
a
certification
of
implementations
that
use
traditional
algorithms
before
those
certifications
are
updated.
H
So
the
goal
of
this
draft
is
to
define
data
structures
for
the
negotiation
communication
and
shared
secret
calculation
for
this
goal,
but
it
is
not
a
goal
to
do
anything
about
hybrid
or
composite
certificates
or
digital
signatures
or
authentication,
in
order
to
select
which
post-quantum
algorithms
to
actually
use
in
tls
we've
had
a
drove
a
draft
up
for
a
little
while
with
the
mechanism,
it's
a
straightforward
mechanism
relatively
straightforward.
H
Basically,
the
idea
is
that
each
desired
combination
of
a
traditional
and
post-quantum
algorithm
will
be
tagged
as
a
new,
opaque
key
exchange
group
and
then
to
negotiate
these
new
name
groups
for
each
desired
combination
will
need
to
be
standardized
in
terms
of
communicating
the
key
shares.
I
simply
concatenate
the
key
shares
for
the
two
constituent
algorithms
in
in
no
clever
way
whatsoever,
just
straight
concatenation
and
similarly
shared
secret
calculation.
H
You
just
concatenate
the
two
shared
secrets
and
use
that
in
the
existing
tls
1.3
key
schedule,
we
considered
a
bunch
of
design
options
in
each
of
these
three
parts
of
the
process,
and
the
draft
has
a
lot
much
too
long
appendix
that
explores
these,
but
we've
selected
the
ones
that
I
mentioned
on
the
previous
slide.
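The concatenation approach just described can be sketched in a few lines. The byte strings here are toy stand-ins; a real implementation would plug in, say, an X25519 share and a post-quantum KEM share for whatever combination the negotiated named group defines:

```python
def hybrid_key_share(traditional_share: bytes, pq_share: bytes) -> bytes:
    # Wire format for the hybrid group's KeyShareEntry: the two
    # constituent shares back to back, no extra framing -- both sides
    # know the fixed lengths from the negotiated named group.
    return traditional_share + pq_share

def hybrid_shared_secret(traditional_ss: bytes, pq_ss: bytes) -> bytes:
    # Likewise, the secret fed into the TLS 1.3 key schedule is the
    # straight concatenation of the two constituent shared secrets.
    return traditional_ss + pq_ss
```

The simplicity is the point: the existing key schedule is reused unchanged, with only the input secret widened.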
H: One particular question that has come up is whether this document should include concrete named groups for the existing elliptic curves plus, say, the NIST post-quantum crypto round-three finalists, so that people can already start working on interoperable implementations — or whether that should be left for future work, or for when round three has concluded.
H
So
that's
the
the
the
idea
I
don't
know
how
to
do
questions
between
the
queue
and
the
chat
eric
did
you
have
a
question.
C: So, to attempt to answer your questions: I looked at this draft today and it seemed like it was roughly what I was expecting. So, unless somebody has some objections, I would actually start working group last call, because that's kind of how we force people to review the draft. I'd probably be more likely to really read it —
C
If
I
wasn't
working
with
us
call
myself-
and
you
know
not
like
I
mean
like
it's
not
like,
unless
we
ever
wear
a
bunch
of
open
issues
like
we're,
not
you
know
like
having
a
bunch
of
like
quasi
reviews
that
aren't
like
forced
by
that
isn't
very
helpful.
We
might
consider
removing
a
bunch
of
dependencies
by
the
way,
depending
on.
C
So,
in
terms
of
your
in
terms
of
your
question
that
would
also
let
us
hit
november.
In
terms
of
your
question
about
the
groups,
I
guess
what
I
would
suggest
is
that
we
do
some
kind
of
informal
well
rather
than
sort
of
like
the
commentary
exploration,
but
we
just
had
an
informal
poll
on
the
list
of
like
who's
interested
in
what
and
like
anything
which
has
like
you
know
more
than
like,
like
more
than
like
one
client
or
one
server
interested.
C
If
we
like
do
some
cover
integrations
that
we
have
a
lot
of
code
points
to
work
with,
so
I
think
that
will
give
us
you
know
like
that,
will
give
us
the
ones
people
actually
need,
and
it's
not
like
it's
expensive,
to
do
component
registrations
so
like.
If
you
know,
if
we
miss
one
and
people
get
excited
about
like
I
don't
know
what
you
know
you
can
psych
with
you
know
you
know
psych
with
export,
fourier
or
something
community
exactly
right.
So
that'd
be
my
suggestion.
H
Great
thanks
very
much
I'll
try
to
follow
up
on
the
working
group,
then,
with
some
kind
of
way
of
identifying
what
combinations
people
are
interested
in
jonathan
lennox
has
a
question
in
the
chat:
does
this
only
work
for
the
pairing
of
classical
and
post
quantum
algorithms,
or
could
it
work
for
other
algorithms,
such
as
national
algorithms
yeah?
The
draft
is
completely
agnostic
as
to
whether
they're
post
quantum
or
not.
H
In
fact,
I
think
the
term
is
be
used
as
next
generation
just
to
make
it
clear
that
it
really
can
be
two
any
two
combinations
combining
any
two
algorithms
you
like
so
there's
no
specific
requirement
that
you,
post
quantum-
that
was
just
the
original
motivation.
B: No hat on here. I just want to make sure it's very clear that if we start picking some, we are not anointing a winner in any way, shape, or form. We have to make sure we put all kinds of caveats all over this. Any kind of parameters that we pick, or anything, need to get blessed by the CFRG; this is merely —
B
You
know,
messing
around
to
try
to
make
sure
that
things
get
or
end
up
being
interoperable
and
then,
when
things
finally
get
selected
that
we
we
go
because
when
we
first
adopted
this
draft,
that
was
one
of
the
big
questions.
Where
was
well,
what
are
we
going
to
do
and
we
we
definitely
don't
want
to
be
seen
as
picking
a
winner
here
and
now
I'll.
Let
people
that
are
actually
cryptographers
that
implement
talk.
So
thanks.
H
Yeah,
I
absolutely
agree
and
and
there's
a
tension
between
that
stephen.
E: Hi — just echoing that point: it might be worthwhile thinking about having some GREASE-like code points that we pick and document on a web page or a git repo somewhere for experiments, with the knowledge that those can never be final answers.
I: Yeah, just to second what Sean said: I don't want to define code points for anything NIST-related without going through CFRG, and I would recommend using existing code points in the document —
I
For
for
any
examples
or
whatnot,
you
can
combine
existing
algorithms
as
a
demonstration
for
say
test
vectors,
okay,
gotcha
understood.
Thank
you.
H
C: Sorry, I pressed the wrong button, chairs. So I guess I didn't quite understand Nick's point — Nick, if I'm mischaracterizing you, let me know. My assumption had been that we would pick the —
C
You
know
the
most
attractive
of
the
ones
that
people
actually
wanted
to
do,
probably
some
of
which
are
in
this
category,
but
that
we
shouldn't
pick
them
just
because
we're
a
nist
and
then
we
should
mark
them
not
recommended
or
recommended
equals
n,
which
is
the
thing
we
do
for
like.
We
think
it's,
maybe
it's
fine,
maybe
it's
not
and
then
once
and
then
once
they're.
C
What's
he
once
the
one
once
once
the
final
lists
are
approved,
then
we
can
send
them
through
cfrg
and
see
if
our
g
can
bless,
whichever
ones
they
think
are
appropriate
and
then
we
can
mark
them
recommended
equals
y
with.
Basically,
no,
we
don't
like
a
single
document,
so
I
think
like
that.
Then
then
we'd
be
allowed
to
keep
using
the
code
points
which
things
were
attractive
and
we
should
be
marking
how
we
think
about
them.
But
nick,
if
I'm,
if
I'm
mischaracterizing
you
or
you
disagree
like
you,
know,
now's
the
time.
I: I would request — if this is something we should kickstart now — that some type of formal request from TLS to CFRG to look into it be something we consider. But in terms of defining a Kyber cipher suite code point in this document, that seems a little bit too much. It's fine to do it from an implementation and interoperability perspective, and in this document, if there are test vectors, to use existing code points. That's my suggestion, and I'm happy to discuss further if you disagree.
J: Thank you — Ludovic here. You mentioned that you are not going to consider authentication, but still, at some point we are going to have to solve that. So what is the strategy? And, kind of related: there is another proposal, KEMTLS, which is going to be presented after this, so I wanted to understand what the strategy is going to be — or if there is no strategy, that's fine too.
H: The reason for considering key exchange prior to signature-based authentication is the concern about retroactive decryption: a quantum computer could retroactively decrypt today's communication, but could not retroactively break the authentication of today's communication. So that speaks to focusing on post-quantum confidentiality before post-quantum authentication.
K: Watson, real quick — to answer the question about where we put these things: we have experimental code points for a reason. That's where we should decide that we're using Kyber and X25519, and I think it's fine for there to be text saying "we're doing an experiment, and here's the definition," and then later you change it to "okay, here's what we decided on; here's the final, standard kind of code point."
A
And
we
can
assist
doug,
we
can
coordinate
with
cfrg
and
security
ids.
If
you
need
to.
A
All
right
time
for
ech.
A
Okay
quickly
start
a
timer.
A
All
right,
so
this
is
just
an
update
on
ech
and
hopefully
a
a
sort
of
you
know,
marking
a
significant
milestone
for
the
the
work
group
in
this
particular
protocol.
A
So
just
to
give
you
a
sort
of
quick
summary
as
to
where
we
are
right
now,
after
many
many
iterations
and
updates
over
the
the
past
couple
months
between
the
last
meeting
and
the
interim,
the
document
feels-
or
we
feel
as
though
the
document's
in
a
good
enough
place
where
one
can
implement
it.
Indeed,
there
are
a
number
of
implementations
floating
around
that
do
inter-operate
to
some
extent.
We
are
also
not
aware
of
any
known
security
issues.
A
This
is
thanks
to,
of
course,
the
the
careful
work
of
all
the
people
contributing
to
ech
and
its
design,
but
also
in
parallel,
analysis
has
been
been
done
by
karthik
and
others
in
riya
to
demonstrate
that
the
the
design
with
some
slight
changes
gets
a
little
bit
out
of
date.
It
is
mostly
safe
and
achieves
the
desired
security
and
privacy
properties.
A
So
at
this
point
we're
kind
of
at
a
stage
where
you
know
any
other
decisions
that
we'd
like
to
make
or
you
know,
features
we'd
like
to
add
changes.
What
have
you
really
kind
of
need
to
be
informed
by
or
seems
as
though
they
need
to
be
informed
by
you
know?
Actually
using
this
thing
in
the
wild
and
before
kind
of
going
into
sort
of
the
specific
issues,
I'd
like
to
just
up
front
to
say
it's
probably
a
good
time
now
to
sort
of
park.
A
This
particular
draft
maybe
make
go
through
working
with
last
call
to
you,
know
iron
out
some
of
the
editorial
deficiencies
or
what
have
you
and
then
let
it
run
for
a
little
bit,
and
you
know
from
that
experiment
or
from
that
experience
come
back
similar
to
like
what
we
did
with
tls
1.3
and
make
any
necessary
changes.
Sean
go
ahead.
B
A
quick
question
because
I
know
people
are
going
to
ask:
how
long
do
you
think
you'd
want
to
let
it
bake
for.
A
Really unclear. I think it depends on the people who are interested in using this particular technology. So at the end there's a proposal to just revisit at the next IETF meeting; it might be longer. I guess it depends on what the timeline for baking actually is.
A
Yep. Okay, so with that as the framing, there are a number of open issues — three sort of high-level ones. I'm going to talk about three groups of open issues, the first of which relates to the acceptance signal. As a reminder, the acceptance signal is the thing the server effectively sends to indicate whether or not ECH was accepted, in which case the inner ClientHello was chosen, or rejected, in which case the outer ClientHello was chosen. There are two related issues here: 450 and 441.
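For reference, the acceptance signal under discussion is an HKDF output placed in the last 8 bytes of the ServerHello random. A rough Python sketch of that computation follows; the exact label, salt, and transcript input are defined by the ECH draft, so treat this as illustrative rather than normative:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) with SHA-256.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: bytes, context: bytes, length: int) -> bytes:
    # TLS 1.3 HKDF-Expand-Label (RFC 8446): the info string is an HkdfLabel
    # struct — uint16 length, "tls13 "-prefixed label, then the context.
    full_label = b"tls13 " + label
    info = (length.to_bytes(2, "big")
            + bytes([len(full_label)]) + full_label
            + bytes([len(context)]) + context)
    # Single HKDF-Expand block suffices for length <= 32 with SHA-256.
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

def ech_accept_confirmation(inner_random: bytes, transcript_hash: bytes) -> bytes:
    # Sketch of the 8-byte confirmation derived from ClientHelloInner.random
    # and a (draft-specified) transcript hash.
    secret = hkdf_extract(b"\x00" * 32, inner_random)
    return hkdf_expand_label(secret, b"ech accept confirmation", transcript_hash, 8)
```

The server would overwrite the final 8 bytes of its ServerHello.random with this value, and the client recomputes it to learn whether the inner ClientHello was accepted.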
A
Basically asking the question as to whether or not the acceptance signal should move from its current location in the non-HRR path — the ServerHello random — to an extension, specifically in the ServerHello; and, related to that, if we do continue using this extension, should we start greasing it? There are a number of different answers to this question. You could make both of these changes, in which case the HRR and non-HRR paths are sort of consistent.
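For context, greasing an extension here would presumably follow the pattern of RFC 8701, which reserves sixteen 16-bit values of the form 0x?A?A (with matching nibbles) for extension types, named groups, and so on. A small sketch generating those reserved values:

```python
def grease_values() -> list:
    # RFC 8701 reserves 0x0A0A, 0x1A1A, ..., 0xFAFA: each value is
    # i * 0x1010 + 0x0A0A for i in 0..15, giving the 0x?A?A pattern.
    return [i * 0x1010 + 0x0A0A for i in range(16)]
```

A client greasing an ECH-related extension would pick one of these at random, send it, and verify that the peer tolerates (or appropriately rejects) the unknown value.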
A
You could not do any of these changes, which would be what we have right now, and that potentially risks, as noted, potential issues with HRR later down the road. You could do a mix and match — take the extension but then not grease it; that seems less ideal. And then the opposite, which is, you know, don't take the extension but grease it, just seems kind of strange, or there's little benefit to it.
A
So I'd like to pause here and see if folks have any opinion about that. Martin, go ahead.
A
I don't know if others are experiencing it, but I'm having a really hard time hearing you. Could you potentially type your comment, or, if someone else heard it, could they relay? That would be great. I'm sorry.
C
Yes, so I guess my tl;dr is: I want to stop arguing about this and run some experiments, and I think it doesn't actually matter that much what the contents of this particular thing are. I think the extent to which it matters is:
C
if the signal contains bits, as opposed to just being empty — I think the signal could be empty, but if it contains bits, then we have to define what happens when those bits are wrong. So I think "no, no" is like no bits and no grease, and "yes, yes" is it's got some bits and they can be greased. So I would suggest that whatever the draft is closer to now is what we do, and if we decide we don't like it, we can change things later.
C
I mean an empty extension — okay, yeah. I mean, the point of the non-empty extension is to allow for greasing, right? And so I think it's silly to have a non-empty extension that cannot be greased. But anyway, I see Karthik and David, so I'll step back.
N
So, I was reading through all the comments last night on this, and it seems like it boils down to — I mean, to me it doesn't matter, it's like irrelevant whether we do the extension or keep it the way it is — but the sticking point seems to be primarily on: do we think middleboxes are going to freak out if the ServerHello has this extension, or, you know, is a middlebox more likely to freak out
N
if the HRR has this extension? And I don't think we have enough data to answer that question. Maybe that's what ekr was referring to in terms of just doing the implementation and seeing what happens, because I just don't know. I don't know if we should bother to answer that question — I mean, you could just say, look, they should be doing the right thing, it's their problem if they're not; but then again you could say, well, it is our problem if they're acting up.
N
So I don't know what you guys usually do in terms of that. So that's my comment on it.
A
Yeah, thanks — and I agree. At this point we're, to some extent, speculating about what potentially might happen. So the proposal to do nothing — just implement what's there, try to run with it, and see what happens in the wild — will help inform the decision one way or the other, and it's less change at this particular point. So, David, go ahead.
O
It's a little interesting that "no change" is kind of halfway in between. So I think, as ekr noted, the grease question — the second question — is mostly about how you define the error handling for those bits, and I believe the draft currently says that on the client side, if you see an extension with the wrong bits, you just kind of ignore it, which is "no, yes"; but on the server side it doesn't say you're allowed to do that. So on the server side, it's "no,
O
no" — we're kind of weirdly halfway in between the two. I think "no, no" would be: the client should reject the invalid ones, in which case it's still giving you some value, because the bits also prevent someone from forging an HRR when you wouldn't have done one — the HRR case sticks out a bit more than the non-HRR case. And "yes, yes", I think, would be —
C
It sounds like nobody cares very strongly, but everyone cares that it gets done. So I suggest that we empower the editors to solve this in some way — that is, pick one of those two — and move on.
B
With my chair hat on, I think that's the way to go, since no one's trying to find a sword to fall on. Let's do what's proposed here on the slide.
A
Yep, all right, so we'll take the action to resolve this one with the other and then follow up on the list. Chris?
A
I assume that clarifies, Chris — okay, yeah. So we'll take the action to resolve this, fold it into the changes necessary for the next version, and we should be good. Okay, the next issue is padding. This is one we've been punting around for a while now — specifically, server-side padding.
A
There are two different proposals up for addressing this, one of which is to introduce a new handshake message specifically for the purpose of introducing padding during the handshake; another of which is to use extensions on existing handshake messages for the purposes of padding. You could also just not do either of these and apply some default dummy padding policy out of the box, using, like, TLS record padding or QUIC padding, or whatever the context happens to be.
A
If you look at the issue and the relevant pull requests, there are a number of different considerations to take into account. David has done a lot of work on the integration between ECH and QUIC, and there are some interesting caveats there to be mindful of. How the padding works in relation to certificate
A
compression is also quite interesting — specifically, whether or not padding is applied before or after compression. And there are also considerations for what the impact on the state machine is, which is relevant for the new handshake message. And then, if we're going to use the existing padding extension, there are potentially restrictions that were put in place in RFC 8446, TLS 1.3, with respect to where that particular extension can go, which party can send it, and under which circumstances.
A
So, in general, padding has been sort of this thing that we've — not TLS specifically, but just network protocols in general — have tried to account for, but haven't quite figured out how to use correctly, or in the best way, for some definition of "in the best way".
A
DNS, for example, has this recommended padding policy that came out of DPRIVE, using just fixed block-length padding, that was based on some analysis that dkg did with respect to
A
what the average DNS query and response sizes were. Maybe something like that would work here, but my general impression is that we haven't quite figured out how to do padding correctly in order to get the maximum privacy benefit that ECH promises.
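The DPRIVE policy being referenced is, presumably, RFC 8467, which pads DNS queries and responses up to fixed block multiples (128 bytes recommended for queries, 468 bytes for responses). The round-up rule itself is a one-liner, sketched here for concreteness:

```python
def padded_length(unpadded: int, block: int) -> int:
    # Round a message length up to the next multiple of `block`.
    # RFC 8467 recommends block = 128 for DNS queries and block = 468
    # for responses; an analogous fixed-block policy is one candidate
    # for padding server-side ECH handshake flights.
    return ((unpadded + block - 1) // block) * block
```

The open question for ECH is not the arithmetic but the choice of block size, which is deployment-specific.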
A
In the absence of, you know, that particular skill, or what have you, I propose we do nothing at this particular point — specifically because, as discussed in the context of the previous issues, as we roll this out and start to do some initial experiments, we're primarily trying to figure out,
A
you know, what the network interference is and what the network impact is — whether or not there are middleboxes that fall over, and whatnot — and how padding is applied should not affect that; ideally it does not. Now, it's necessarily true that, absent server-side padding, you don't have the ideal privacy posture ECH promises to have.
A
There may be certain domains behind the client-facing server that leak unintentionally based on the size of the server's flight, but for the purposes of this initial experiment I don't think that's really necessary. So the proposal is to basically do nothing at this point, and then later on, once we see what these handshakes look like on the wire and see what the impact is, maybe then we'll be more informed, or in a better position, to make a decision on how to address this particular problem. ekr, go ahead.
C
I'm getting, sadly, a little warmer to David's proposal, but not warm enough yet. And, you know, we can still do it perfectly fine, even for QUIC, without solving this problem at all. So I'd like to just —
C
I'd like to punt this, so we can argue about it for the next six months. And the good news, I think, is that — precisely for the reasons that padding matters — if we were to add padding and it were to break anything, that would be quite surprising. And so I think we should just move on, and we can fight about it for the next six months or so.
A
Yep. Okay, Karthik.
N
I'm totally fine with just punting it until later, except that, if we're going to have to deal with it anyway, we might as well figure it out. I'm not really sure I see the total advantage, unless you want to try it — because we can already interop now; we already don't have padding. Also, I was reading through the extension PR, and it seems fine. I don't really understand what the objections are to it.
N
It seems a lot clearer why it works, or how it works, than the message version. So I don't really understand why we can't just do that PR and be done with it.
F
So I guess my only rebuttal to Karthik's point is: the two proposals right now — we are either proposing an extension that the client uses to solicit server-side padding, or we define a brand-new handshake message.
F
I think the ideal solution would be some padding mechanism that's independent of this and that we just use. So, for that reason, I think it's worth punting on this until that document exists. Others might think that's the wrong solution.
C
— the right approach. And I think that's the answer to Karthik's question, which is: we can spend the next six months arguing about how to do padding, or we can spend that time shipping this and arguing about how to do padding in the background. And I prefer the second. That's my motivation.
A
Yeah, I think the intent here is just to try not to let perfection be the enemy of the good. It does seem like a separable problem to some extent — not something we should punt out of ECH entirely, but something that we can separate out for now, to get some initial experience with ECH, and then address later on. This also gives us time to figure out
A
how we will actually pad — setting aside the mechanism, what the actual number will be, which is a deployment-specific thing, and it's not clear that everyone has the answer to that. So my take is, as ekr said, to just punt.
B
Hey Chris, this is Sean as chair. I think that's the proposal we should go forward with. I kind of feel Karthik's point, but reading the chat messages basically made me think that we should punt, so I think that's probably the way to go forward, and we can just move on to the next issue.
A
One proposal, perhaps, to help make sure that this doesn't languish indefinitely after we cut the next version: potentially we could spin up a design team for folks who have very strong opinions about the particular padding mechanism, and they can sort that out and come back to the working group. I don't know how folks feel about that.
C
Say it again — I'm not worried about it languishing. I think it's in the issue tracker, and if people feel strongly about this and really want to get the padding thing resolved, then yeah, go ahead and convene the discussion, and try to get the people who disagree on both sides into agreement. That's the —
A
Okay, so we'll just note in the issue that we're parking it for the time being, and revisit it after the next draft, once we get some experience.
A
Okay, the last issue is on extensibility. Simple question: should the ECHConfig be extensible? It's sort of a binary question. We don't currently have any extensions that are in use.
A
There have been proposals in the past — I think there's a proposal from Ben Schwartz for one — but given that this potentially has value going forward, and there are some potential candidates, but there are also some objections to them existing entirely, and the fact that the cost of parsing extensions that don't exist is effectively zero right now, I propose that we, like the others, just park this and revisit it later.
N
I'm fine with that, but I am kind of concerned about — I think someone mentioned a use-it-or-lose-it sort of thing: if we don't do it now, it's going to be harder to do later. But if no one thinks that's going to happen, then, you know, fine.
A
Yeah, I don't think that'll be the case. I don't really feel strongly about this. Okay, Stephen.
E
I think these extensions are useless; I'm okay with leaving them and parking this. I do have a question, though: I did write a bit of code that could put any random file into an ECHConfig at that point, so I'm kind of wondering whether we want to add language saying "please don't do that during the experiments", or is everybody else writing code that won't fall over if you give it a picture?
A
What that effectively means is, sort of, all the main technical open issues are either resolved or we're parking them, to resolve after we get some initial experience. Draft 12 is currently the published version.
A
However, as discussed earlier, we will most likely make some small changes to address the acceptance-signal text, in which case we'll bump a new version — potentially folding in some editorial changes that some people have contributed — and move to 13.
A
So I would propose that we move to draft 13 and, as we start bringing up implementations, fill out the interop matrix, which is linked here and also linked from the wiki page on the repository — yeah, exactly, lucky 13. Does anyone have any objections to that being the next interop target? It means we sort of skipped over 12, but that seems fine.
A
Okay, the dogs are sleeping; they don't really care about ECH.
A
Right, so, just to summarize: as I said, we're going to resolve some of these open issues, or park them accordingly, noting it in GitHub — I'll probably just introduce a new label to note that they are on hold — and hopefully we can reach resolution in parallel with, you know, actually using this thing. We may try to fold in some editorial clarifications; in fact, a number of people have filed issues regarding the organization of the document.
A
So we may try to do a careful pass over it, to make it more friendly to implementers, without, hopefully, breaking anything or introducing any wire-format changes. We'll see how much work there actually is to be done there. And MT, I may reach out to you, since you went through a similar exercise with QUIC long ago, after we do this and publish the next version.
E
So I'm just wondering: are we going to do a working group last call? I have a local reason for that, which is that I have some funding to write this code that might come to an end in the meantime, and it would be good for me to be able to say there was a working group last call that matches that. I don't know if that would be generally good for people.
C
Chairs — my question, I guess, would be: what would be the advantage of doing a working group last call, given that we just agreed to punt a bunch of issues which people don't agree on? So would it be: we agree, but modulo all of these changes that we have not decided to make? Like, if we do a last call, Stephen, what would we say about extensions, and then we add them later?
E
So what I'm asking for is a purely local thing, and it's independent of that — it's just a checkpoint I can map code to. Oh, okay. I don't know if it's worth a working group last call at all.
B
So it's probably easy enough to write a working group last call that carves out certain issues that we've obviously punted on, that we've talked about during this session. I can work with Chris to make sure that we document that, and we'll basically try to share it around with everybody who's been involved, to make sure that we're not tripping over anything, and then get it out to the working group. Does that sound like a good way forward?
A
Okay, unless there's anything else, I think we can call it. I just want to say thanks to everyone who helped and contributed, one way or the other, to get us here. I think what we have right now is a good protocol, albeit somewhat complex, and hopefully this is the start of ECH.
H
Sorry, I'm just adjusting the camera. All right, thanks. So my name is Nimrod Aviram, and, with support from Karthik and Chris, I'd like to deprecate obsolete key exchange methods in TLS.
H
So, what does this document do? It deprecates RSA key exchange and limits FFDHE only to reasonable groups with sufficient security. As a reminder, we already have a separate document that says: if we do use FFDHE, it must be fully ephemeral, and we can't use static finite-field Diffie-Hellman.
H
So the document disallows these practices. Again, we're talking about deprecating RSA key exchange and limiting FFDHE only to reasonable, well-known groups, and this would be widely compatible. And before I take questions, I'd like to preempt one: people on the mailing list have argued that this would be incompatible with some market niches, and I think, if that is the case, we should have before us concrete evidence as to why this would be incompatible, and in which specific niches, as another point to consider.
Q
Okay, so I'm the maintainer of a niche implementation, which is Postfix, and there are various other SMTP servers in the same boat. Postfix has a compiled-in group, generated by me — you know, a strong group — as a default.
Q
So if clients stop understanding FFDHE, one might hope that ECDHE would be sufficient, but I expect interop issues if we drop support for ad hoc FFDHE.
H
All right. So, first, nice talking to you again, Victor. Can you elaborate on this group? Is it, like, a safe prime?
H
So, off the top of my head, it sounds like there is a case to consider adding this as a well-known group, after careful analysis.
Q
Users are also encouraged to generate their own, and many do, so lots of servers have custom DH parameters. The same applies to Exim, not just Postfix. I think the SMTP space is very different from the web space, and there are probably other applications in the same boat.
H
I see. And, assuming we don't limit FFDHE to well-known groups now, would we have a way to get metrics for this, so we know when we can deprecate it, or limit it to well-known groups?
Q
Good question. You know, I can put out new code, and it will take, as I said, roughly five to ten years before at least the Postfix ecosystem is no longer typically doing that. I don't know how long for Exim, and I don't know about applications outside of SMTP; other people would have to speak to those niches.
H
All right, so, to summarize: I think we can expect interop problems for SMTP, and at least it's on the open internet, so we could expect to gather some metrics as to where things stand. Do I get it correctly?
H
Yeah. We also have specialist researchers as well, very capable of collecting these metrics. I see we have a queue building up, so maybe — thank you again, Victor — we get to the next question, if that's okay.
N
All right, yeah, thanks so much for giving this presentation. I actually had a follow-up question for Victor. He was saying that they're going to need this for five to ten years. I was just wondering what exactly — sorry if you already mentioned this; I don't think you did — but what's going to happen within those five to ten years? Are people in the midst of planning to upgrade? Why not three years, why not 20 years, you know?
Q
The upgrade cycles for SMTP are much longer than for browsers. This is server infrastructure that people deploy and then don't touch for half a decade or more.
O
Yeah, so I'm extremely excited about getting all of these things out of here. One thing I would caution, though, on the FFDHE side: for TLS 1.2 we kind of screwed up the way we defined the named groups. There is no way for a client to say "I support DHE, but only with well-known groups", because we use the same cipher suites for server-chosen DHE and well-known-group DHE.
O
So this is one of the reasons why Chrome, and anything using BoringSSL, does not support DHE cipher suites at all anymore: when we were trying to raise the bar from 1024 to 2048 bits, there was no way we could say this in the ClientHello, and so you end up in a situation where the client wants to support DHE, but only with these groups, but because it can only say "I support all of DHE", the server is completely within
O
its rights to pick a tiny one, and then we would break, even though there is actually another key exchange it supports that we would have been okay with. So clients that haven't gone all the way to removing DHE altogether from TLS 1.2 may have some difficulties saying this.
O
Well, I have no data on this, because I just took it away altogether, but I guess it's probably a question of: of the servers that today negotiate DHE ciphers, how many of them are, sort of by accident, already using the groups that we like? Because effectively what we're saying is: we're going to let the server pick, and if it accidentally picked the right one, then we're happy. And at the time when we were looking at it,
O
95% of servers chose the 1024-bit group, and so we were like: this is not worth trying to salvage at all. It's possible that in those intervening years the ecosystem has shifted a bit. All right.
P
Yes, I'd like to bring up a possible compromise to deal with Victor's scenario. One possibility is that you actually allow server-proposed parameters, but make it mandatory for the client to check that the prime specified is in fact a safe prime — which can be done; it's not particularly cheap, but it can be done — and, as long as the generator is between 2 and p minus 2, you're safe.
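The check proposed here — verify the server's prime is a safe prime and the generator is in range — can be sketched as below. The tiny test parameters are illustrative only; a real client would run this against a multi-thousand-bit prime, which is where the "not particularly cheap" cost comes from:

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    # Miller-Rabin probabilistic primality test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def acceptable_dh_params(p: int, g: int) -> bool:
    # Accept server-proposed FFDHE parameters only if p is a safe prime
    # (both p and (p-1)/2 are prime) and the generator lies strictly
    # between 1 and p-1, as suggested in the discussion.
    return (is_probable_prime(p)
            and is_probable_prime((p - 1) // 2)
            and 2 <= g <= p - 2)
```

For example, p = 23 is a safe prime (11 is prime), so `acceptable_dh_params(23, 5)` holds, while p = 17 fails because (17-1)/2 = 8 is composite.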
H
Rich, please.
R
Hi. So the issue that Victor brings up is a regular problem here: we want to move the industry forward, and there's always some community that can't, because of enterprise deployment issues. I used to work at IBM, where they had software older than I was in production. Too bad.
R
We should do the right thing; we should do the thing that's more secure, and if there is a community that can't follow this particular RFC or standard, if it becomes that, so be it. There's no requirement — nobody's got a checkbox that says "oh yeah, you're not RFC 10110 compliant"; that's not what auditors and so on look for, right? They look for other things, like PCI compliance and so on. So I am sympathetic to Postfix.
R
I think Victor's joke about a niche implementation was kind of funny, but it shouldn't stop the whole rest of the world from moving forward. Also, isn't the SMTP world still mostly opportunistic encryption? So, yeah, that's all — we shouldn't let a particular community hold back the security of the rest of the world. Thank you.
R
Thanks. Yeah.
H
Yeah, all right — Chris, please. Christopher.
H
All right, thanks. So, it sounds to me like this kind of has support, doesn't it? Would the chairs be so kind as to summarize how we should proceed?
A
All right, thanks, Nimrod. All right, here we go — Sofía, bringing up your slides.
U
Okay, so hi everyone. My name is Sofía Celi; I'm a cryptographer at Cloudflare, and I'm joined today by Thom Wiggers, a cryptographer as well, from Radboud University. Today we wanted to talk about a new mechanism for achieving authentication in the TLS 1.3 handshake, and if you want to find more details about it, please go to the draft that is listed on this slide.
U
Okay. So, basically, how one achieves authentication in the TLS 1.3 handshake, as we all know, is that you have to prove who you are, and you prove this by showing possession of a private key, most of the time — specifically in the certificate-based authentication context.
U
So how does this mechanism work? It's basically using this algorithm that I just showed you, the key encapsulation mechanism (KEM), which has two important algorithms to use: the encapsulation one and the decapsulation one.
U
So what happens is that, for example, if a client wants to talk to a server, instead of sending a signature, what the server will be sending is a certificate with a (probably long-term) public KEM key, and then the client — in this case myself — will do an encapsulation against that long-term public key.
U
The result of that operation is going to be a ciphertext, which is the ct, and a shared secret, which is the ss. That ciphertext is going to be sent, in turn, to the server — which in this case is Thom — and if Thom is able to decapsulate and arrive at the same shared secret, which is going to be used to derive all of the handshake secret keys, then indeed you are assured that you are talking to the right entity, or the right person.
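The encapsulate/decapsulate flow described here can be illustrated with a toy Diffie-Hellman-based KEM. This is a deliberately insecure, tiny group for illustration only; the actual proposal would use a post-quantum KEM such as Kyber, and the real message flow is defined by the AuthKEM draft:

```python
import hashlib
import secrets

# Toy finite-field group. NOT secure -- real deployments would use a
# large standardized group or, for AuthKEM, a post-quantum KEM.
P, G = 23, 5

def keygen():
    # Long-term key pair: the public key would live in the certificate.
    sk = secrets.randbelow(P - 3) + 2          # private exponent in 2..P-2
    return sk, pow(G, sk, P)

def encapsulate(pk: int):
    # Client side: produce a ciphertext for the wire and a shared secret.
    eph = secrets.randbelow(P - 3) + 2
    ct = pow(G, eph, P)
    ss = hashlib.sha256(str(pow(pk, eph, P)).encode()).digest()
    return ct, ss

def decapsulate(sk: int, ct: int) -> bytes:
    # Server side: recover the same shared secret from the ciphertext.
    return hashlib.sha256(str(pow(ct, sk, P)).encode()).digest()
```

Only the holder of the certificate's private key can decapsulate to the same shared secret, so deriving the handshake keys from it implicitly authenticates the server — which is the core idea of the mechanism being presented.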
U
How does it look compared to TLS 1.3? That's now on the slide — TLS 1.3 on the left, and on the right you see the modification of AuthKEM. In the case of server-only authentication, what you will have to send is, of course, the KEM encapsulation, after the server has sent the certificate. This adds only an extra half round trip from the point of view of the server, because the client can send application data immediately, in the same way as TLS 1.3 does. In the case of mutual authentication, there's an extra full round trip that gets added, because each entity has to send the key encapsulation as a result of the certificate — the long-term public key — that got sent, and therefore this extra round trip gets added in comparison with TLS 1.3.
V
Okay, so we send data before the server has sent the Finished message, because otherwise you get into a two-round-trip protocol, and you don't want that. So is this okay?
V
And yes, we think this is okay, because, although the cipher suites are not authenticated, the client must anyway be confident in its selected cipher suites, and any handshake where you would try to do anything messy with the server's cipher suites could never finish. So any attack that might want to downgrade to, I don't know, AES-128 would never finish successfully. So I don't think this is a risk.
U
Okay. So, in case this extra round trip is also way too much, there's also a version of AuthKEM which reduces it quite significantly, basically using the knowledge of any pre-distributed material — any pre-distributed certificate, or any way to find the long-term public key that the server, or the client, is using, in the case of server-only authentication.
U
What happens then is that the KEM encapsulation message gets sent along with the ClientHello, either as an extension or as a separate message, and immediately the server can send application data, in the same way as TLS 1.3 does — and the client, of course, can send application data in the same way as in AuthKEM or TLS 1.3. For the case of mutual authentication — this sometimes happens in scenarios where you already know that the server is going to be asking for client authentication —
U
you will be sending the certificate of the client alongside the ClientHello, and this is going to be encrypted by the shared secret that is derived from the encapsulation mechanism. In that way you also reduce the extra round trip, because you're basically able to send application data at the same point that TLS 1.3 does.
U
This also has security considerations, of course, because the client certificate — if one follows the TLS 1.3 guidelines — has to be sent encrypted, for privacy considerations. But in the current AuthKEM it's not encrypted under a forward-secure key, so it has similar considerations and trade-offs as sending 0-RTT data; and, of course, you have to send it encrypted under a cipher suite that you think the server will accept.
U
Of course, this will not always be the case, because, for example, from Cloudflare data we know that less than 80 percent of traffic actually uses a resumption mode or a cache mechanism, so there will be some cases in which this pre-distributed mode will not fit. And now, over to you.
V
Yes. So this is all a lot, and we realize that: you have new handshake messages, new authentication algorithms, you're moving your handshake state machine a bit closer to TLS 1.2 again, where the client's Finished message is sent first; and we also need to make changes to the handshake key schedule, because, for client authentication, you need to make sure the certificate is sent encrypted, and we need a new authenticated handshake secret for that as well. So this is a lot — but why do we want to do this, then?
V
So, for that, we want to point also to those papers. Why don't you use draft-semistatic? Well, this leads up to the real motivation of this proposal, and that is post-quantum cryptography. Draft-semistatic, also known as the OPTLS proposal, is very exciting, but it requires a non-interactive key exchange, and there is no non-interactive key exchange usable for TLS at this moment. SIKE is very slow; sorry, SIKE is not a NIKE. CSIDH is a non-interactive key exchange, and it is very slow.
V
Authentication via KEMs is much smaller, and for the signature schemes we've seen that there are very few suitable choices: really only Dilithium and Falcon will fit in a TLS handshake; see the paper by Dimitris and Panos and, sorry, I forget the third author. Signature schemes have larger public keys and signatures than KEMs have public keys and ciphertexts, plus slower operations, and Falcon especially needs floating-point hardware to really work quickly.
V
Next slide, please. So we have an overview of roughly the sizes here, and you see that especially Kyber and Dilithium are leagues apart in terms of the size that you send over the wire; and that is at Kyber-512 and Dilithium2, which are both at roughly the 128-bit security level. If you go up to higher levels, they go even further apart.
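The comparison above can be tallied back-of-the-envelope. The byte counts below are the published round-3 Kyber-512 and Dilithium2 parameter sizes; treating KEM authentication as "public key plus ciphertext" and signature authentication as "public key plus signature" is a simplification of what actually crosses the wire:

```python
# Round-3 NIST level-1 parameter sizes in bytes (Kyber-512, Dilithium2).
# KEM-based authentication sends a public key plus a ciphertext;
# signature-based authentication sends a public key plus a signature.
KYBER512 = {"pk": 800, "ct": 768}
DILITHIUM2 = {"pk": 1312, "sig": 2420}

kem_auth_bytes = KYBER512["pk"] + KYBER512["ct"]       # bytes for KEM auth
sig_auth_bytes = DILITHIUM2["pk"] + DILITHIUM2["sig"]  # bytes for sig auth
```

At higher security levels the gap widens further, as the presenter notes.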
V
The
draft
right
now
uses
pre-quantum
hpke
to
get
this
con
conversation
started,
but
obviously
we
want
to
move
to
post
quantum
as
soon
as
that
is
available,
but
we
think
it's
very
important
to
start
this
conversation
right
now.
V
What
about
this?
It
seems
like
we
have.
We
don't
not
yet
have
post
quantum
hpke,
hopefully
soon
what
about
this
extra
half
round
trip?
We
argue
that
the
client
can
still
send
its
its
http
request
or
any
other
request
to
the
server
at
the
same
point
as
in
tls
1.3.
V
I
think
the
client
needs
to
send
something
first,
and
we
make
sure
that
in
the
simple
server
authentication
case,
the
client
can
still
send
its
request
in
the
same
place
and
we
have
experiments
both
on
real
networks
and
a
lot
of
simulations
where
we
see
that
artcam
should
perform,
as
well
as
doing
a
signed
key
exchange.
B
But before we go there: we'll let you guys manage the queue, but we're going to cut it after Benjamin Schwartz, because we have other presentations we want to get to, and obviously we'd like to see this stuff move to the list. Thanks, Panos.
Q
Comment: in SMTP, of course, the server speaks first, but it's not latency-sensitive. So even if it slows things down, SMTP is willing to tolerate the cost.
C
Consistent with my comments on the list, I guess my position might be characterized as: this seems premature. This is an enormous change, and, you know, one of the things we did in 1.3 was trying to standardize exactly one set of handshake flows, based on Diffie-Hellman and its analogs. As far as I can tell, the entire rationale for this is a performance difference between signatures and KEMs, and, you know, it's 2021 and we're like five years away from post-quantum authentication algorithms being useful in any meaningful kind of way; so, on a good day, you'd get a post-quantum PKI.
C
Let's start with that. It took us so long that we don't even have Ed25519 in our certs. So I think massively re-architecting TLS in order to accommodate what is really quite a modest performance improvement is really not worth doing.
C
In particular, I've seen your data on the sizes, and as I indicated in my email, this is not the limiting factor: increase your TCP congestion window or your QUIC amplification window and you'll get most of the way there. And to the extent we have a problem at that size, it's going to be replicated by the cert chain and the signatures in the cert chain anyway; we have to solve those problems in the first place.
C
I mean, I think this is interesting, but to get it through, your handshake is going to need a pretty big lift.
F
Hey, I just wanted to say I support this idea.
F
It seems to me like this was the dream of OPTLS, but they didn't use the right abstractions at the time, and I think the KEM abstraction is a really useful way to model a handshake; to me it's a nice abstraction for reasoning about the security of a handshake. But I kind of agree with ekr: I don't know if you can put this in TLS 1.3 as a practical matter. Is this something that we want to do for TLS 1.4?
U
Thank you. Please; I guess the last one is Benjamin.
G
V
You still need a key exchange: the ephemeral key exchange works as normal, and the authentication part would not be ephemeral. You would have a long-term server public key, or a client public key, that is a KEM public key.
V
Yeah,
but
you,
the
shared
secret,
is
mixed
in
into
the
full
key
schedule.
So
all
the
ephemeral
data
is
also
in
there
and
actually
to
break
the
the
the
shared
secret
that
the
the
main
secret
that
comes
out
of
the
handshake.
At
the
end,
you
would
have
to
have
both
the
firmware
key
and
the
service
long-term
key.
So
in
that
sense,
it's
even
slightly
better
than
than
the
normal
case.
B
Like a take-it-to-the-list kind of discussion. So thank you very much for your time, and we're going to move on to... I think, is Owen going to do it, or is Dan going to do it?
W
Okay, I'll be really quick since we haven't much time. An update since IETF 110 on draft -02: following on from the last presentation, we've done a refactor of the crypto in the draft. We're no longer using draft-jhoyla's extended key schedule, and we're no longer injecting additional static-ephemeral ECDH pairs into the key schedule. Instead, we're deriving a PSK from the DPP bootstrap public key, and we're using RFC 8773 as is and RFC 7250 as is: the server is going to prove knowledge of the bootstrap public key via the PSK, and the client is going to prove knowledge of the associated private key using RFC 7250.
W
The bootstrap keypair can then be used to provision against the Wi-Fi network: it gives a guarantee to the supplicant that it's connecting to a network that knows its public key, and it gives a guarantee to the network that the supplicant that is connecting knows the associated private key. The trust model is based on knowledge of the bootstrap keys, and the bootstrap public key is assumed to be secret and known only to the owner or the network operator. The key is encoded using the ASN.1 SubjectPublicKeyInfo from RFC 5280; it's a raw key pair, and it doesn't have to be part of a PKI. It may be static, it may be embedded in the supplicant, it may be printed on a QR label, included in a BOM, etc. We want to reuse that bootstrap public key, used for Wi-Fi onboarding, for wired onboarding inside an EAP exchange.
W
The high-level flow of the wired, sorry, the wireless exchange: the supplicant will do a chirp, which is a hash of its public key, and broadcast that, and a network that thinks it knows the full public key will respond. Step two here is the important thing that we're trying to do inside the TLS handshake: a DPP authentication exchange takes place as part of the authentication exchange, where the network proves it knows the supplicant's bootstrap public key and the device proves it knows the associated private key. In step three, DPP allows provisioning of any credential that is needed for network access; for this use case we will do the provisioning inside EAP, and then, after the exchange is completed, the supplicant gets network access.
W
So
these
are
the
changes.
These
are
a
bit
more
detailed
in
the
changes
since
the
last
draft
and
the
bootstrap
key
is
used
to
derive
two
pieces
of
data,
an
identifier
which
is
used
to
signal
the
booster
key
for
psk,
and
then
the
actual
psk
itself
is
used
for
authentication
we
use
8773,
as
is
which
is
surprised
also
with
an
external
psk,
and
we
use
7250,
as
is
to
roll
up
the
keys.
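The two-value derivation just described can be sketched as below: an identifier and an external PSK, both derived from the bootstrap public key with HKDF-SHA256. The labels "dpp-psk-id" and "dpp-psk" and the output lengths are made up for this sketch; the draft defines the real derivation:

```python
# Hedged sketch of deriving (identifier, PSK) from a DPP bootstrap
# public key. Labels and lengths are illustrative assumptions only.
import hashlib
import hmac

def _hkdf(ikm: bytes, info: bytes, length: int) -> bytes:
    # One-shot HKDF-SHA256 (RFC 5869) with an all-zero salt.
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    out, block, n = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([n]),
                         hashlib.sha256).digest()
        out += block
        n += 1
    return out[:length]

def derive_psk(bootstrap_spki_der: bytes):
    # Identifier signals which bootstrap key the PSK was derived from;
    # the PSK itself is used as the RFC 8773 external PSK.
    identity = _hkdf(bootstrap_spki_der, b"dpp-psk-id", 16)
    psk = _hkdf(bootstrap_spki_der, b"dpp-psk", 32)
    return identity, psk
```

Deriving both values from the same key means the server only needs the bootstrap public key to recognize the identity and compute the matching PSK.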
W
We
use
the
raw
keys
so
that
the
client
can
prove
knowledge
of
the
bootstrap
private
key,
and
we
also
propose
using
giraffe
group
tls
extension,
extensible
psks
and
all
we
want
to
use
that's
to
define
a
mechanism
for
specifying
its
derived
psk
from
a
dpp
and
bootstrap
key,
and
also
just
to
signal
to
the
identity.
W
So
no
keyless
changes
or
extensions
are
required
over
and
above
defining
the
new
bsk
type
for
that
draft.
Here,
that's
extensible,
sensible.
This
case.
W
This
just
shows
how
this
fits
into
the
handshake,
and
the
only
change
we're
proposing
here
now
to
any
of
the
drafts
is
to
define
a
new,
extended,
psk
type
and
we're
using
the
circuit
external
psk,
as
is
we're
using
the
raw
public
key
and
client
server
type.
As
is.
W
And what this will enable is: if we have a device, or a thing, that has a DPP bootstrap public key, whether it's printed on a label on the device itself, included in a BOM, or downloaded from a cloud service, we'll be able to reuse that same public key for bootstrapping the device against both a wired network, using EAP, and a wireless network, using DPP.
W
And the spec is there; there is running code for the TLS part of it. We presented this at EMU at a previous meeting online and there was interest in progressing it; based on that, we presented the TLS part and got some good feedback on the crypto mechanism, and we are presenting this again at EMU on Thursday.
S
Questions? All right, Ben. Sorry, I was about to type this in the chat, but I figured I should ask in person. Thank you for this; it sounds much improved. I didn't quite get, just from the brief presentation, what the need for the extended-PSK extension was, but I should probably either read the draft or take it to the email list.
C
Thanks, Ellen. I took a quick look at this earlier today; this seems more along the lines of what I was expecting, but I haven't really studied it. Okay, can you say a lot more about what you're hoping the outcome of this discussion will be?
W
What we think, because we're not really making any changes to the TLS handshake at all, is that this is really just a "do you have a review here". I think we can probably progress this in the EMU working group alone, but because we got such strong feedback on the previous version, we wanted to run this past the TLS working group again. We think this lives in EMU now, but I would like to get your feedback on that.
B
If EMU wants to take it and go forth, I think there's no real concern with that happening. There's obviously been some cross-review, but if things get to working group last call... I mean, there's no requirement that extensions, or whatever code points get assigned, must be done in TLS; many of them can be done elsewhere. So unless we're hearing any violent objections about adopting this now, I think EMU is the way to go.
T
And as an EMU co-chair, I think we'll have a presentation to discuss this.
O
Yeah, just a quick question; sorry, I may have missed this from the slides. Is there a reason you need to authenticate the server based on knowledge of a secret public key? Public keys are generally treated as public, like when we write implementations and worry about side channels and such.
W
So
this
is
the
way.
This
is
the
way
that
the
wi-fi
aligns
dpp
versus
slide
here.
This
is
the
way
that
the
wi-fi
aligns
tpp
protocol
works,
which
is
you
have
a
bootstrap
public,
key
associated
with,
say,
a
light
bulb
or
whatever
right
and
and
the
public
part
of
that
key
is
printed
on
a
label
if
something
is
cheap,
or
else
it's
included
in
a
bomb.
If
you
want
to
have
a
bit
more
protection
off
it,
and
but
you
use
the
booster,
it's
not
a
it's,
not
a
certificate.
W
That's
used
a
full
certificate.
That's
used
for,
like
a
tls
handshake,
it's
a
in
the
dpp
protocol.
It's
a
it's
a
public
private
key
pair,
but
the
public
power
printed
on
the
printed
on
the
label
and
the
way
you
provision.
The
device
against
your
wi-fi
network
is
a
configurator
which
could
be
an
access,
could
be
your
trip.
It
could
be
an
app
that's
linked
to
your
aaa
and
you
will
scan
our
provision.
O
Yeah
so
so
this
is
a
an
existing
like
an
existing
thing
where
you
already
have
like
these
secret
public
keys
printed
on.
I
guess
the
rubber
duck,
and
so
so
yeah
sort
of
stuck
with
okay
because
like
if
we.
B
I think we're going to move on to Martin Thomson's SNIP presentation. Thanks, Owen.
M
So this is a really simple extension now: the client says that it supports the extension, and the server lists all of the protocols it might have that aren't compatible with the current one. If the client would rather have preferred one of the listed incompatible protocols, then maybe it does something; maybe it considers this an attack, maybe it just attempts to connect with that other one. It's going to be up to the client how to react.
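The server's list just described could be carried as a TLS-style vector of ALPN protocol names, each length-prefixed, similar in shape to the ALPN extension body. This is only an illustration of the length-prefixed encoding idea, not the SNIP draft's exact wire format:

```python
# Illustrative encoding of a list of ALPN protocol names as a
# 2-byte-length-prefixed vector of 1-byte-length-prefixed strings
# (the same shape as the ALPN extension body). Not the actual SNIP
# wire format, which the draft defines.
def encode_protocols(names):
    body = b"".join(bytes([len(n)]) + n.encode("ascii") for n in names)
    return len(body).to_bytes(2, "big") + body

def decode_protocols(buf):
    total = int.from_bytes(buf[:2], "big")
    names, i = [], 2
    while i < 2 + total:
        n = buf[i]  # 1-byte length prefix of the next name
        names.append(buf[i + 1 : i + 1 + n].decode("ascii"))
        i += 1 + n
    return names
```

For the canonical case discussed later, the server would list names like "h2" and "h3" so a client that preferred an incompatible protocol can notice it was steered away.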
M
And when I last presented this, it was far more complicated; thanks to Ben Schwartz here for talking me down from the ledge. The only thing that really changes is that the scope over which a server is claiming to support a particular alternative protocol is essentially just IP plus port.
M
And at that point it has relatively limited applicability, but I think it will be applicable in a lot of the cases that people care about quite a bit, and it won't run afoul of some of the multi-CDN deployment scenarios and all the complex deployment cases that people have in deployment.
M
And so it's much less likely, I suspect, to run afoul of just not being usable in a certain deployment.
M
The key point here is that it doesn't really care what sort of discovery mechanism you use to find the server. The assumption here is that TLS is authoritative for information about what protocols are supported.
C
I'm generally in favor of this, but I have, I think, a clarifying question. If you have anycast, then everyone in an anycast group has to have the same configuration, right? Because otherwise...
M
My
assumption
in
these
cases
is
that
the
the
way
that
you
would
do
this
is
you
would
deploy
the
alternative
protocols
and
you
would
have
that
as
consistently
available
and
you'd
be
fairly
confident
in
your
deployment
of,
say,
http,
2
and
http
3
across
that
entire
anycast
deployment,
and
then
you
would
turn
this
capability
on
one
of
the
things
about.
This
is
it's
kind
of
discretionary
when
you
use
this
extension,
if
you're
not
sure
about
it,
don't
use
this
extension
right.
R
Yeah, hi, thanks. I really appreciate the simplicity, but could you give me an example of what protocol names a server might send back on port 443?
M
The canonical example here is HTTP/2 and HTTP/3. If you've got a QUIC deployment, you want to make sure that clients are aware of the fact that you have a QUIC deployment, and without this there's the potential for someone to block access to your QUIC or HTTP/3 deployment and force you down to the TCP-based design, which maybe has less security than the QUIC design. So you use it for that.
A
Okay,
thanks
martin,
very
quickly
before
we
do
a
poll
just
to
kind
of
get
a
rough
sense
of
whether
or
not
people
are
interested
in
adopting.
Can
I
just
get
like
a
plus
one?
If
you've
read
the
draft
in
the.
A
Okay, all right. While that comes in, I'm going to throw up a poll to see if this is something that people would like to adopt. If you go to the poll in the top right, you should see the question "should the working group adopt this draft?"; let it run for just a little.
A
In the interest of time, I guess, Francis will ask the opposite question.
A
Okay, this seems really consistent. Do the people that do not want to adopt want to say anything at the mic?
E
Yeah, I have nothing against it; I just haven't read it. I think it's interesting that nineteen people want it and only seven or eight have read it. So it doesn't sound crazy, but "maybe not quite yet" would be my answer.
A
Gotcha,
okay,
thank
you
all
right,
as
always,
we'll
confirm
on
the
list.
I
guess,
if
there's
nothing
else,
perhaps
we
can
call
it
early
or
actually
we're
over
by
one
minute.
I'm
sorry.