From YouTube: IETF114 TLS 20220725 1900
A: Is somebody wanting to make a comment at the queue, or...?
A: I think we're about ready to get started here at IETF 114 in Philadelphia.
A: Thank you all for coming.
A: All right, well, we'll get started with some of the formalities here. Thank you for coming once again. Please check in with the on-site tool, and wear a mask unless you're presenting at the front of the room. And you are all probably familiar with the Note Well that governs the various policies here at the IETF.
A: And here are some code of conduct guidelines. We'd like to remind you to treat each other with respect and to speak so you can be understood.
A: We have a minute taker, and hopefully there are some folks on Jabber who can help us if there are things that need to be brought to the mic.
B: Somebody, Jim Reed, asked me about two and a half years ago when DTLS was going to be done. My crystal ball was really wrong, by the way. We have two other drafts: the remaining PSK-related drafts are in AUTH48, done, which basically means they should be published at any time now. And we've got delegated credentials, aka subcerts, which I believe is on Paul's plate.
B: So we need to make sure that he gets through that and see where he's at. We don't have anything in IETF Last Call right now. We have two drafts that we paused, cross-SNI resumption and the TLS flags extension, waiting on implementations.
B: We can, I think, go ahead and revisit that. And right now we have the RC for TLS 1.2 and 1.3 that is out for working group last call; that actually ends August 5th. And then there's everything else that we have in progress, which we'll talk about today, and I think that's it. I think we're going to go to the presentations, and then we have a slide at the end. So these are some stalled or expired documents.
D: All right, I'll just... okay, hi, I'm Ben Schwartz. Wow, this sounds loud. I recently joined as another editor or author on this draft, and I'll present some updates on the changes to this draft in version six. Next slide.
D: So there are a bunch of major changes; draft six is very different from draft five. Okay, one big difference is with profile IDs. Previously these were just kind of free-form. Now there's a reserved subset of these, and there's an IANA registry.
D: Another really important change is that cTLS is no longer specified as a compression layer. Instead, this draft specifies cTLS as a protocol generator: you define a profile, and each profile defines a unique TLS-like protocol. But it is not a compression system; it is a new compact TLS protocol. Related to that, there are now binary objects describing the templates, and finally, there's a new system of handshake framing.
D: So that means these can only be used in cases where the server knows that the client is going to be sufficiently up to date that it's gotten this entry from the IANA registry. Profile IDs longer than four bytes are essentially private; well, they're not registered, they're free for anybody to use. They only have significance within a specific deployment. Then, after the profile ID, the profile proceeds to lay out all the information that's required to understand what that profile ID means. Next slide.
D
Okay,
ctls
is
no
longer
a
compression
layer,
specifically
the
previous
drafts
of
ctls
structured
ctls
as
a
layer
that
sat
basically
between
tls
and
its
transport.
So
in
principle,
you
could
take
a
totally
standard,
tls,
1.3
stream
and
then
like
maybe
even
literally
in
a
middle
box
or
in
some
sort
of
middleware.
D: ...you could take that encrypted stream and convert it into a cTLS stream, and you could convert it back on the other side. It was a transformation that didn't require access to any of the secrets associated with the connection. That has positives and negatives we can talk about, but it seems like the net consensus after the last discussion was that we would rather have cTLS authenticate its own transcript, instead of reconstructing a TLS transcript and authenticating that. So the new draft does this: it authenticates its own transcript.
D: Reconstruction is therefore no longer an implementation requirement: you can implement cTLS without having to reconstruct the corresponding standard TLS handshake. But this has its own problem. cTLS transcripts are very condensed, because they omit a bunch of information that's very important, on the assumption that both sides already know it. And so now, at least in some use cases, we want to make sure that both sides actually agree on that information that wasn't exchanged, because it somehow was already configured ahead of time, and to make sure that it matches.
D: We've adopted this solution in this draft, where we take the shared information, the pre-shared information, which we call the template, and we prepend it to the transcript. So it's present in the transcript on both sides, and if there's any disagreement about it, the handshake will fail. Next slide. Of course, putting the template in the transcript and then hashing it into the Finished message means that both sides have to agree on it exactly, and in draft five and prior, the template was described as a JSON object.
D: I can imagine that nobody here would be very excited about trying to figure out how to get byte-exact hashes of JSON objects that are being passed around. So in this draft there's still a JSON format defined, but there is also a consistent, reproducible binary format defined for the templates, which allows us to consistently hash it into the transcript.
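As a rough sketch of the idea (illustrative only: the serialization and field names here are invented, not the draft's actual binary template encoding), prepending the serialized template to the transcript means any byte-level disagreement about the pre-shared configuration changes the hash that feeds the Finished computation:

```python
import hashlib

def transcript_hash(template_bytes: bytes, handshake_messages: list[bytes]) -> bytes:
    """Hash the serialized template followed by the handshake flight.

    Both endpoints compute this independently; if their copies of the
    pre-shared template differ by even one byte, the hashes diverge and
    the Finished check fails.
    """
    h = hashlib.sha256()
    h.update(template_bytes)          # pre-shared template, prepended
    for msg in handshake_messages:    # then the messages actually exchanged
        h.update(msg)
    return h.digest()

# Two peers with identical templates agree on the transcript hash...
flight = [b"client_hello", b"server_hello"]
a = transcript_hash(b'{"version":1,"cipher_suite":4865}', flight)
b = transcript_hash(b'{"version":1,"cipher_suite":4865}', flight)
# ...while a one-byte template mismatch changes it, so the handshake fails.
c = transcript_hash(b'{"version":1,"cipher_suite":4866}', flight)
```

Because the template bytes are hashed rather than transmitted, the peers never exchange them; a mismatch only surfaces as a failed Finished check, which is exactly the behavior described above.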
D: And finally, there's a new system for framing the handshake. Previous drafts were a little bit ambiguous, I think, about how the handshake was framed. We've decided to cover all our bases, basically, by supporting both options: a full-size handshake framing, which allows you to send giant handshake messages, fragment them in DTLS, reorder them, and have them be reassembled in the right order...
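The DTLS-style behavior described here, large messages split into fragments that may arrive out of order and must be stitched back together, can be sketched with a toy reassembler (this is not the cTLS or DTLS wire format, just the reorder-and-reassemble idea):

```python
class FragmentReassembler:
    """Collect (offset, data) fragments of one handshake message and
    reassemble once every byte of the declared total length is present."""

    def __init__(self, total_length: int):
        self.total_length = total_length
        self.fragments = {}  # offset -> fragment bytes

    def add(self, offset: int, data: bytes) -> None:
        # Fragments may arrive in any order.
        self.fragments[offset] = data

    def reassemble(self):
        buf = bytearray(self.total_length)
        covered = set()
        for offset, data in self.fragments.items():
            buf[offset:offset + len(data)] = data
            covered.update(range(offset, offset + len(data)))
        if len(covered) < self.total_length:
            return None  # still missing bytes
        return bytes(buf)

# Out-of-order delivery of a 12-byte message in three fragments:
r = FragmentReassembler(12)
r.add(8, b"sage")      # last fragment arrives first
r.add(0, b"hand")
r.add(4, b"_mes")
msg = r.reassemble()

# With only one fragment present, reassembly is not yet possible:
partial = FragmentReassembler(12)
partial.add(0, b"hand")
incomplete = partial.reassemble()
```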
D: So this draft is definitely still a work in progress. There are a lot of open, interesting questions; I've attempted to highlight some of them here, but there are a lot more details open in the draft, a lot of highlighted open-issue or open-question tags in the draft. So I would encourage anybody who's interested in this to read the draft and think for yourself, and maybe help the editors think through some of these questions.
F: I think I'm up, yeah. First, Ben, thanks; I want to thank you for picking this up. We've been, I mean, I've been a little swamped, and I really appreciated you picking this up.
F: Sorry. So on this first point, yeah, I think that the answer is almost certainly what you have listed here, which is that if we want compressed elliptic curves, we just define new code points. After all, we already have compressed elliptic curves for X25519 and X448; they just come that way. So I think for P-256, if anyone still cares, defining new code points for the compressed encodings is the right answer. And of course, you know, TLS 1.3 doesn't have very many curves left anyway.
D: On the empty-messages point, I'll just point out that that is text that is in the draft right now. The draft currently says that empty messages can be omitted, but it also has a question in the draft about whether this is going to work.
H: I forgot to check in, oops. One of the things that Ilari also raised was the relationship to DTLS 1.3. Initially we wanted to have it defined in a way that the compression works for both DTLS and TLS, for apparent reasons: the protocols at the handshake layer are very, very similar.
H: But I think that's also an open issue that we hadn't really gotten to yet. At one point in time we had worked on the framing format, on the underlying record layer framing, and changed that numerous times, but I think that should be added to that list.
D: So I'm not sure I understand. I'll note this is a conversation between the editors, so maybe it can happen offline. But I do think that the current draft, essentially, is no longer a compression layer; it's its own protocol. But it is essentially a system for generating both streaming security protocols like TLS and datagram-oriented security protocols like DTLS. I do believe they're both fully covered now.
H: Right, and that was an intention. And then Ilari raised that question because in his email he was saying that he doesn't see the need for this, where I actually see the need. I'm not entirely sure whether we are fully there in specifying sort of the functionality for both DTLS and TLS, but yeah, I think that's something prototyping will help with, whether we're really there yet.
E: Martin Thomson. So the empty-messages one bothers me a little bit. I think it's EndOfEarlyData that's probably the one that bothers me the most here. We need that one, and we need to know that it's there, because that's a signal that we use to determine the transition point. It's not necessary in the datagram versions, but it is necessary in the stream versions; otherwise we wouldn't have added it. So I think we can't omit them, and it's probably better not to worry about that sort of thing.
E: And the other one that I got up to speak about was the versioning. As long as you have some sort of context string that goes into the transcript, you can change it later. You don't have to worry about putting a version number in anywhere or anything like that; just change the context string, and I think that'll be fine.
D: Yeah, we did put a version number in, at your request, more or less. Yeah, I think... I like having a version number.
E: Yeah, I mean, whichever way you do it, the problem that you need to solve, and I'm not sure if you've worked through all of that, is that if you have a version number, you have to have expectations about how people handle a version number they don't understand, which I imagine at this point is: don't use the thing at all. Yeah, yeah.
F: The compression doesn't affect that one way or the other. All the analyses that I know of were done on a symbolic level, ignoring how things were written on the wire. So, let me be clear: as far as I know, it might be possible to produce a profile that was horribly broken, where you say, like, all the MACs are zero-length or something, but I do not believe the compression affects that.
D: I think that is right. We do expect to have some formal analysis, although it's very tricky, I think, to figure out what exactly the...
F: ...question is. So let me just try to narrow that very slightly. There were a number of proposals made, so here's a concrete example: supposing that you make the transformation that some people try to make, which is you remove the Finished MACs and you rely entirely on the AEAD. Then you don't have...
F: ...the binding between the key exchange and the encryption layer that we ordinarily would, right? And so under those circumstances, for instance, it would not be safe to replace the cipher suite with one that had a very short MAC as well, whereas it would be quasi-safe with TLS 1.3, as long as you know what you're doing. Quasi, by which I mean: of course you have a short MAC, so you get what you get.
J: So, two things. One, I don't really care if we omit empty messages per se, just so long as there's only one valid way of doing it: either everyone must omit or no one may ever omit. Omit is fine, but even if you say both must be accepted, people won't implement that. And with respect to formal analysis, I think the only analysis that I'm aware of that actually models...
J: ...the wire format is the one we did with TLS 1.3. I think all of the others already omitted the wire format. So, given that this is mostly a wire-format change, I think that's where you'd have to start performing analysis.
H: There were two other things. We had anticipated a formal analysis, and maybe we should just do it the other way around: we describe it, and then have the community, like Karthik and Jonathan, do a formal analysis. Another thing for analysis is the use of the randomness, of random numbers, initially in ClientHello and ServerHello.
I: Thank you. What's the next slide? So, this was presented back at dispatch at IETF 113.
I: You know, it's not the case that this is intended to be the one true way of publishing ECH keys; there are probably much simpler ways that will work for lots of people. Next slide.
I: One of the things about this is that I don't know anything about CDNs, but some people do. So I had a meeting at lunch with Rich and Ben Schwartz, who have raised comments on this, and they've agreed to kind of help with the draft and be co-authors; that should help. Next slide.
I: There's a picture; next slide. Stop me if you want me to go faster or slower. It kind of works; it's a work in progress; it'll probably change a little bit. Next slide. There is a description of what's in the response, which is relatively obvious, I think: it has the ECHConfigList, a TTL you'd like, and which ports on the web server are using that. Next slide.
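As a hedged illustration, a response along the lines just described might look like the following. The field names (echconfiglist, ttl, ports) are my shorthand for what was said about the slide, not the draft's actual JSON schema, and the binary payload here is a placeholder:

```python
import base64
import json

# Hypothetical response body from the well-known URL: the ECHConfigList
# (base64, since it's binary on the wire), a suggested TTL in seconds,
# and the ports on the web server using those keys.
response = json.dumps({
    "echconfiglist": base64.b64encode(b"\x00\x0dplaceholder").decode(),
    "ttl": 3600,
    "ports": [443, 8443],
})

def parse_response(body: str) -> dict:
    """Minimal sanity checks a consuming DNS server might apply."""
    doc = json.loads(body)
    assert isinstance(doc["ttl"], int) and doc["ttl"] >= 0
    assert all(isinstance(p, int) and 1 <= p <= 65535 for p in doc["ports"])
    # Recover the raw ECHConfigList bytes for publication in the HTTPS RR.
    doc["echconfiglist"] = base64.b64decode(doc["echconfiglist"])
    return doc

parsed = parse_response(response)
```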
I: Okay, so these are the questions, the issues that were raised on the list, mostly from Ben, and thanks again, Ben. Next. So one alternative you could think of for how to get the ECH information out of a web server is to use the retry configs, and that could work.
I: I think the idea is you just make a kind of a greased connection or something to the web server, and it would give you the retry configs, and then you could go and publish that in the DNS if you like, and so on. That would require changes to the ECH draft, for a few reasons: to add the extra info if you need it, like TTLs or ports or whatever; and also, it's not clear that the set of keys loaded in a server at a given moment is exactly what you want published in the DNS at that moment.
I: So you'd have to say something more about what to put in retry configs, which we don't currently. So you could change the ECH draft and do it, but I don't think that's really satisfactory. So I think that doesn't particularly work well; we might change our minds, but that's what I think. Next one. Unless somebody... if you have any comments or want to disagree, just jump to the mic, please. Another comment was: you could create another resource record somewhere that basically says, "here's...
I: ...what I'd like in an SVCB or HTTPS RR." Again, that could work. The difference is you'd lose the server authentication that you get with the well-known URL. Exactly what the properties of having that server authentication are is something to think about, but you'd lose it. And again, you know, at least for my setup, it wouldn't really solve the problem, because I still need some way to get the public key for ECH out of the web server and into the DNS infrastructure.
I: The question is... so it looks like this: you know, in theory, you could be much more generic about this and say we'd like to provide a mechanism for a TLS server or a web server to publish everything it wants in an SVCB or HTTPS RR, and that could get very complicated. So you could aim for something more generic. It seems to me, at least for now, that it's the ECH keys that seem to be changing regularly, and that's kind of what's motivating this.
I: Okay, next. A point that Ben raised, which I think is fair enough, is that you may need ALPN information in this, to publish in the resource record, and I think that's correct, so that should be added. I don't know what to do about the no-default-ALPN really, so, yeah.
I: You can ignore that aside for now, unless we get into a discussion about it. Next. There could be other content that you'd like to see in HTTPS RRs that ends up essentially reflected in the inner ClientHello in ECH. I had a look through and didn't see anything obvious, but if there is, we should think about that and then probably add it to this mechanism, if we go ahead with it. And next: SVCB is kind of a bit of a mystery to me, to be honest.
I: It allows some complicated options, it seems. So I've kind of had a look through it, and we need to kind of think through some of the other use cases that exist and make sure that we're not doing something stupid.
I: Yep, other points that were raised in the discussion: Rob made a point about not mentioning the shared and split mode topologies; mnot said it might need a well-known URL, that's fair enough; Ben said the path was wrong, I didn't understand why, but never mind. And lastly, I think, looking through it recently: you either have a port-prefixed QNAME in your HTTPS RR, or you can have a port in the HTTPS RR, and I didn't understand...
I: ...what the right approach to that would be. And we have address hints that can go in these RRs, and I don't know if they should be reflected in this or not. And I think that's the last slide, other than process slides. And one more.
F: That's my cue, actually. Okay. So when we first designed ECH, one of the problems we were concerned with was desynchronization between the ECH configuration and the IP address. So the concern here, right, is: you have two CDNs, and you have to get the addresses in front of them, and you get the ECH from one and the IP from the other, right? And so this is addressed in standard ECH by having the whole thing blocked up together in the HTTPS RR.
F: And then the one sort of lacuna is the, you know, retry mode with the public name, but that happens like seconds afterwards. If you change your topology in that window, then, in that case, like, sorry, bad day. But because you have a long TTL here, much longer than that, not seconds but, like, hundreds of seconds, that easily can produce the situation we're talking about here.
F: Yes, yeah. Then you can easily run into a situation where you have a topology shift underneath, and now you're talking to a CDN which doesn't, in fact, handle this user at all.
D: Hi. So I think the distinction here is that this draft claims that the only HTTP client for this is running inside the authoritative DNS server. This is only for communication between an HTTP server and a DNS server.
F: I mean, so, I'm provisionally prepared to believe that it's fine under the case you just laid out. But then I think it needs a very clear warning that this is not a replacement for the HTTPS RR for generic clients, because we don't want people getting into that mode. Sure.
I: That's definitely not the intent for this, yeah, I agree. Okay, just before you go back... I don't know. So the next steps, basically: I had a meeting with Ben and Rich; they're going to join as co-authors, they said, unless they hate me after this. And we'll probably ask the chairs whether we should create a git repo for this immediately or whatever. And I think ultimately the aim would be, whenever you hit publication request for ECH, maybe to look for a working group call for adoption round about then, but not before.
L: One of my points was actually, I think, covered to some extent by ekr. I'm a bit concerned here that the caching infrastructure that's there for HTTP is not really being considered by this draft. I think your answer, that this is intended for a very limited use case of the HTTP server talking to a specific DNS infrastructure, somewhat removes the possibility that there's a random set of transparent caches in the middle. But especially if you think there might be a different use case in the future...
L
You
really
do
need
to
think
through
what
the
caching
architecture
looks
like
here,
and
the
interaction
between
the
the
time
to
live
that
might
be
present
in
the
in
the
http
caches,
which
is
not
always
obedient
to
things
in
in
the
instructions
and
how
you
would
deal
with
that,
like
whether
you'd
use
e-tags
or
something
like
that.
But
the
other
bit
of
this
is
really
this
doesn't
sound
real
baked
yet.
L
D
And
ben
schwartz
I'll
just
say
I
do
prefer
the
the
more
general
approach
here
as
stephen
well
knows.
D: I think it would be better to not just convey ECH, but to treat this as a general way for HTTP origins to describe themselves to their DNS infrastructure. But Stephen and I are going to spend a lot of time talking about that, and we'll see where it goes. Yeah.
G: Alessandro Ghedini. I just have a quick comment about the JSON description: the ports field's description talks specifically about TCP ports. Is there any reason why those need to be TCP ports and not, say, QUIC ports?
I: Okay, so I guess that's the plan, and we'll work on it.
B: All right, exciting times: registries. Yeah, I know, exactly. Next slide, please. Just a quick refresher: we had an individual draft that we took to SAAG to try to figure out what we wanted to do to change the Recommended column, because that's really all this update is about, and the consensus was to add a D, which basically means discouraged or weak. And so this 01 version of the working group draft is an attempt at trying to do that.
B: So there might be some controversial selections here; Joe and I basically just threw them down to see what was going to happen. Also note there are some other changes in this version which are trying to make it a little bit easier on IANA, to be like: we changed these ones, we didn't change these other ones, to kind of make it a little clearer. This change has been a bit of a pain in the ass, because there were new registrations.
B: So we did some minor updates to the references too, to the first extension types value. We went through each registry of the TLS registries and said: what are we going to change? So the two that we came up with were truncated HMAC and connection ID, so we marked those as D. I'm just going to roll through these, and people can jump up and scream and yell. The cipher suite registries: I'm hoping that most of these will be done by this deprecate-obsolete-kex draft.
F: I think we should rename it to underscore-capital-RESERVED and then make it N or D, I don't care. Okay, I guess my position is: all the reserved ones, maybe we need, like, another, a different category. It's like: this is not even a valid code point anymore. Well...
H: Sean, my name is not shown, but: on the connection ID, because you write "deprecated," does that refer to one of the connection ID values which we allocated before, and which is used in deployments? Is that the one you want to mark as D, or do you want to deprecate the whole connection ID extension?
B: Okay, no, no, definitely not trying to pull the rug out from under you. The cipher suite registry: I'm hoping that we're going to review that after we get through the deprecate-obsolete-kex draft, because hopefully it's going to take them all out for us. Now, there are some new ones, which is probably a result of last time.
B: We did this draft... there were some orphaned registries that were TLS 1.2-specific, and we were lazy and didn't address them. But TLS 1.2 is going to be around for a while, and now that we've added this D, we figured we had to go through these. So here's another list of registries that are orphaned, that we need to address. So, next: the first one, hash algorithms.
I
highlighted
the
ones
in
d
just
to
show
that
they
were
different.
C: Sean, in the chat Martin suggested both anonymous and RSA should be marked as D.
E: Martin Thomson. I think the standard that we need to be applying here is that if there isn't an RFC that we can all get behind that says "this is bad," then it shouldn't say D either. I think that's really the standard that I would prefer us to be applying here. So when it says D, it means the IETF thinks this is bad.
E: Now, I think that some of these are bad, and we should probably publish an RFC that says that. But I'd prefer that we at least make our position very, very clear in terms of what the rules are for when these get labeled this way.
B: All right, so let's go past this one. And then we have some open issues where we have elliptic-curve-related registries. And so one of the things was that we thought, hey, maybe we could put this in the deprecate-obsolete-kex draft. But, you know, these are the six registered values; maybe we just stick it in our draft and figure out what we want to do with it. I didn't really know what to do with these. So.
E: So the reason that you don't want the other ones, and you want them to be D, is: if you put them on the wire, things will explode, right? It's not that they're broken; it's not that they're fundamentally insecure. It's just that if you put them on the wire, things will break, and so we can actively discourage people from doing that. So let's do that.
Q: Yeah, so about the explicit curve things: I believe they are broken to some extent. There's an attack by Nicosia that showed how to exploit them to do some damage; just pointing that out.
F: Yeah, yeah, I agree: explicit curves are, like, bad news. It was the compressed ones that I wasn't saying that about. The explicit curves we should definitely forbid, and the compressed ones are kind of like... just, nobody uses them, and Martin says it's going to make things blow up. So I think we're in agreement, all right.
Q: This document, which is now a working group item, deprecates RSA key exchange and static finite-field Diffie-Hellman, limits finite-field Diffie-Hellman in its ephemeral form only to reasonable groups with sufficient security, and also discourages static elliptic-curve-based Diffie-Hellman.
Q: During the last working group meeting, at IETF 113, we were asked to verify that this document should fall under this working group, the TLS working group, and the chairs kindly checked with the security area director, Paul Wouters, and he confirmed that the document indeed belongs to this group. So thanks for raising this issue.
Q: We would like to advance this document towards working group last call. So the only open issue, or issues, we are aware of is regarding groups in FFDHE.
Q: Currently, the document safe-lists several standardized and widely used groups. There are also non-standardized but widely used groups, such as the one that ships with Postfix, and there's the question of whether the document should safe-list them as well. We are leaning towards yes: it should safe-list any widely used group that provides sufficient security.
Q: The other part of this issue is what happens when the client encounters a so-called bad group. If the group is of an appropriate size, it is safe, security-wise, for the client to verify the group structure and proceed with the connection if the group is safe. We could add language to the document allowing that; however, performing this verification is computationally expensive.
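For a safe-prime group, where p and q = (p - 1) / 2 are both prime, the check a client would have to run looks roughly like this; even with a fast probabilistic primality test, doing this per connection on a 2048-bit modulus is real work, which is the cost concern being raised (toy sketch, small numbers only):

```python
import random

def probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def is_safe_prime_group(p: int) -> bool:
    # A safe prime: p prime and q = (p - 1) / 2 prime, so the only
    # subgroups have order 1, 2, q, or 2q.
    return probably_prime(p) and probably_prime((p - 1) // 2)
```

Note the two full primality tests per candidate; on a real 2048-bit prime each test is dozens of modular exponentiations, which is why the "just verify it" option is unattractive per-handshake.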
Q: So, if the client is unwilling to invest in performing this verification, or if we choose to disallow non-safe-listed groups altogether, there's the question of what behavior the document should specify: whether the client MUST abort the connection in such circumstances, or whether it merely SHOULD abort the connection.
Q: And that's it from us. Let me say again that we hope to get this to working group last call, so if anyone has questions or comments, please jump in. Thanks.
Q: I see that someone is in the queue. Yeah, please go ahead.
Q: All right, I'm not sure I got your intent, but we can take it to the list, or maybe, if we have time during this session, you can continue. All right.
K: Scott Fluhrer, Cisco Systems... am I on mic? Okay, Scott Fluhrer, Cisco Systems. As for checking the group structure: unless it's a safe prime, we do not pass enough information to check the group structure, and even with a safe prime, it's not cheap. I would recommend just abolishing, just forbidding, anything other than a named group.
U: Mike Ellsworth. This is not really a comment on your draft, but more of a rant. I'm a pen tester professionally, and in the last, like, six months, I've seen automated scanning tools starting to freak out about DH groups that are not random, or DH groups that the tool knows about, and I've had banks ask me, like, how do I generate my own DH params and put them in Tomcat? And I don't know where the tools got this idea or how we can stop it, but it's super frustrating, coming up, like, almost weekly at this point.
N: Yeah, Ben Kaduk here, and a nice segue from Viktor's comment. This is just me sort of spitballing, brainstorming on the fly here, but what would we get if we made it easy to register new named groups for these things that are already widely deployed, such as what Postfix has, or anything else? Would that get us enough properties that we would be able to leverage that for the purposes of this document?
Q: Yeah, all right, thanks. I'm not sure how that would work, to be honest; I'm not familiar enough with the details. Thanks.
V: ...Therefore, it feels safer to do an arbitrary group. In practice, in the whole ecosystem, given that you can't, as a client, tell whether that's a known group or not, that's not safe, right? Because you could be attacked by the server that you're talking to, who's just giving you bad groups, or somebody could have misgenerated a group.
V: So maybe what the specification needs to do is say: unless there was a known pre-agreement with the peer that you're talking with, right... I don't believe anybody here actually thinks that it's wrong for a private organization to use their own groups, that they've done their own verification on, that they could list, right? I'm not recommending people do that. But the pushback that you're going to get from these pen-testing tools: we need to be able to say this pushback is wrong, and here's...
V: ...why, and here's a statement that understands why the pen-testing tools thought they had the right idea. We need to acknowledge that concern so that we can actually get them to stop making these requests, right? And so, I don't know, I'm not sure exactly the right way to do that in this wording. I like Ben's suggestion of, like, let's have a registry that people can put, you know, widely distributed groups in that actually have been checked. But I'm very reluctant to, like...
V: I don't think just the fact that we say "hey, don't use anything like this" will fix this pen-testing-tool complaint problem. I think we need to address it head on.
Q: Yeah, let me add that I think all of these groups are kind of on their way out. This is finite-field DH, and people are in the process of switching to ECDH, I would hope. Thanks.
E: Go ahead, Martin. Yeah, Martin Thomson. The discussion in chat here is kind of interesting. I think that there are several of us who sort of realized that in TLS 1.2 and earlier, using the FFDHE groups, which only specify the one value, essentially, which doesn't let you do the validation that Scott points out...
E
The only way that you can do that safely is to have a list of values that you would accept, and I've implemented that with the RFC 7919 groups — and we just can't turn that on, because in practice people use other ones. Ultimately we concluded that the only safe way to do this was to turn off FFDHE in TLS 1.2. You can do it in 1.3, because you have the named groups, but we couldn't —
E
We
couldn't
turn
it
79
19
on
to
tls
1.2,
because
it's
just
impractical
and
then
with
people
going
off
and
generating
their
own
groups.
You've
got
no
way
of
knowing
that
they're
okay
a
priori.
So
this
is
exactly
the
point
that
dkg
made
if
you've
got
your
own
private
agreement-
that's
great,
but
you
can
also
have
your
own
private
protocol
at
that
point.
So
we're
not
adding
a
lot
of
value
here.
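The approach Martin describes — a TLS 1.2 client that only accepts FFDHE when the server's prime is on a pre-vetted list — can be sketched roughly as below. This is a minimal illustration, not any implementation's actual code; the allow-list entries are placeholders standing in for the real RFC 7919 primes.

```python
# Sketch: accept a server's finite-field DH group only if its prime matches
# a local allow-list of vetted groups (in practice, the RFC 7919 primes).
# An arbitrary server-chosen prime cannot be validated a priori, so anything
# off the list is rejected. The digests below are placeholders, NOT the
# digests of the real ffdhe2048/ffdhe3072 primes.
from hashlib import sha256

VETTED_PRIME_DIGESTS = {
    sha256(b"ffdhe2048-placeholder").hexdigest(),
    sha256(b"ffdhe3072-placeholder").hexdigest(),
}

def accept_server_dh_group(p_bytes: bytes) -> bool:
    """Return True only if the server's DH prime is on the vetted list."""
    return sha256(p_bytes).hexdigest() in VETTED_PRIME_DIGESTS
```

As the discussion notes, this is exactly the check that cannot be deployed in practice in TLS 1.2, because servers commonly use self-generated primes that will never match any list.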
Q
Yeah — I hope I'm pronouncing your name right — you're in the queue, let's go ahead.
Q
All right, thanks. Any other questions or comments? All right — so I guess I will try to summarize the points that everyone brought up and take it to the list. Thanks.
S
So, in global aviation we are defining a trust framework to be able to use PKI and to harmonize and map commercial aviation identities and access requirements to a common set of operating rules — something that may seem very common within the internet, with examples like the U.S. Federal Bridge, but it has never been done so far in aviation. And we have been using something that has not been used very much, which is the Server-Based Certificate Validation Protocol (SCVP), to validate trust and identity.
S
Because
one
of
the
challenges
in
aviation
is
that
you
have
a
lot
of
different
organizations
that
are
not
centralized.
You
have
iko
that
operate
on
a
principle
of
state
sovereignty
and
the
software
is
quite
often
custom
and
developed
owned
and
operated
and
managed
independently.
So
there's
not
no
easy
way
to
say
I
can,
you
know,
send
out
trust
lists
with
my
software
updates
to
the
different
entities
and
then
like
with
ssl
on
the
web.
You
you
can
trust
who
connect,
who
I
am
when
you
connect
to
me.
S
We have moved from OSI to IP services now, which is a really big step for aviation, and using trust lists and CRLs for this purpose is very difficult, because you have a lot of flying entities — aircraft with different commercial airlines — that all have to communicate via data communications with different air navigation service providers in 193 countries, and with the airlines.
S
So
you
can
see
it
becomes
a
big
maintenance
problem
for
the
aircraft
itself.
In
that
case,
even
if
you
use
short-lived
certificates
and
so
grant
certification
validation
on
the
aircraft,
using
suv
validation
basically
would
only
require
a
very
small
one
or
a
few
trust
anchors
that
don't
change
and
we
are
proposing
a
new
svp
validation
extension
to
remove
the
burden
of
the
sevp
request
from
the
aircraft
client
and
having
the
ground
server.
S
— make the SCVP request and provide the result to the client, therefore also reducing the complexity and the cost of the avionics software that has to calculate the trust path. And I'm handing it over to Ashley.
W
Hi, I'm Ashley Kaufman. Is this too tall? Sorry — is that better? Okay. So this shows a diagram of how — I'm still not close enough. Okay, is that better?
W
All
right
is
that
better
all
right,
sorry,
I
don't
speak
loudly
to
start
with
okay,
so
this
shows
a
diagram
of
how
it
could
be
used
in
aviation.
So
in
our
case,
the
aircraft
is
the
dtls
client
when
it
sends
down
the
hello
message
to
the
ground
system.
It
would
include
with
it
this
new
dtls
extension
of
scvp
validation,
request
and
a
structure
which
optionally
includes
a
list
of
the
scvp
responders
that
it
trusts.
W
If
it
does
not
include
any,
then
it
insinuates
it's
explicitly
known
by
the
ground
system
which
scvp
responders
the
aircraft
trusts.
It
also
can
optionally
send
down
a
list
of
trustless
trust
anchors
if
it
includes
the
trust,
anchors
and
the
certificate
path
has
to
terminate
at
one
of
those
trust
anchors.
W
And
finally,
it
optionally
can
include
scbp
request
settings,
so
we've
defined
a
few
of
these
and
I'll
show
those
later.
W
It
optionally
can
have
a
cash,
it's
suggested
to
have
a
cash,
and
if
there
is
a
matching
value
in
the
cash
to
return
that
up,
if
not
to
translate
that
validation
request
into
an
seb,
cbp
cv
request,
object
and
send
it
to
the
sevp
server
and
receive
back
the
response
from
that
server.
The
response
is
signed
by
the
trusted
suvp
server,
so
it
can
be
trusted
by
the
aircraft
and
it
is
sent
back
up
to
the
aircraft
with
the
certificate
of
the
server.
W
Next
slide,
please
all
right,
so
this
shows
it
in
a
simplified
handshake
diagram.
So
you
can
see
there's
very
little
changes
to
the
actual
handshakes.
It's
a
extension
on
the
client,
hello
message,
a
validation
request
for
this
message.
We
have
defined
it
for
type
scvp,
but
have
allowed
it
to
be
expanded
to
other
validation
protocols
as
well
for
future
use,
and
then
the
sevp
cv,
request
and
cv
response
is
existing.
W
That's
that's
not
new,
it's
new
to
incorporated
into
the
tls
server
and
then
there's
a
new
extension
to
the
certificate
message
of
validation,
request
with
the
path,
validation,
information
again
it's
defined
for
type
scpp
and
would
include
that
cv.
Response
back
to
the
aircraft
or
in
this
case
back
to
the
tls
client.
N
So I guess the TLS part of this seems to make a lot of sense — it's pretty straightforward, and it seems completely reasonable.
N
But that's more a matter of coding in the server than it is of defining the protocol — I guess that's my question.
W
So
there
is
a
a
mapping
from
the
request
message
coming
from
the
client
to
the
request
going
to
the
scbp
server.
That's
defined
in
the
proposal,
and
then
there
are.
There
are
certain
values
that
are
only
known
to
the
server
that
need
to
be
checked
in
the
response.
Coming
back
to
verify
that
the
the
response
is
valid
before
passing
it
up
to
the
aircraft.
S
Yes — Yuri, this is Rob Segers responding. So, SC-223 —
H
Hi
this
is
hannes.
This
is
sort
of
more
in
response
to
uri,
because
this
topic
comes
up
regularly.
We've
gone
through
this
numerous
times
in
the
meanwhile
on
the
question
of
like
is
ddls
something
that
you
can
use
on
iot
devices
and
the
answer
is
look
at
the
papers.
Look
at
the
documents
we've
done
the
profile
it's
deployed
in
millions
of
devices,
even
dls
is
deployed
in
millions
of
iot
devices.
H
So
like
this
topic
has
been
answered.
So
there's,
as
you
said
like,
if
you
don't
send
everything
you
have
over
the
over
the
air
for
no
good
reason,
then
you
are
completely
fine.
T
I just want to quickly add here that IoT can have different constraints: there is a computational constraint and there is a bandwidth constraint, right.
F
I
believe,
honestly
talking
about
all
these,
he
does
work
on
iot
devices,
so
erica
scroll,
so
just
I
just
want
to
see
if
I
can
sharpen
ben's
point
a
little
bit
to
make
sure
I
understand.
As
I
understand
it,
this
is
an
entirely
stock
scvp
server
on
the
right,
correct,
good,
okay,
so
this
does.
This
seems
this
is
quite
reasonable.
We
talked
on
email,
I
think
a
little
bit.
I
guess
my
question
is:
what
do
you
want?
Do
you
want
this
adopted
by
the
working
group?
S
We
would
like
the
help
from
the
working
group
to
get
this
to
a
standards,
and
we
invite
everyone
to
review
and
comment
and
to
make
sure
it's
a
good
approach.
Okay,
so
it
sounds
like
you're
asking.
F
For
working
group
adoption-
yes,
please
I
I
guess
so
I
I'm
sort
of
provisionally
in
favor
of
that.
I
guess
my
one
question
would
be
something
we
often
have
situations
where
an
external
body
wants
us
to
do
something
and
where
we
have
like
a
very
thin
kind
of
like
relationship
with
them
with
like
a
couple
people
are
you,
the
people
we're
talking
to?
Are
there
other
people
that
sort
of
show
up
and
and
help
us
out,
because,
like.
S
The
the
world
the
plan
was
going
forward
to
get
airbus
boeing
collins
involved.
X
Bob Moskowitz — Rob Segers has pulled me into this fun and games, and so I can answer that. Yes, the people at the table are the airframe manufacturers, the national CAAs, the airline industry. A number of the players are there at the table, working out how they're going to do this, because they really have a serious issue they need to address. I feel the players are there; they are dedicated to getting this done.
X
They
do
have
a
a
reasonable
time
frame
because
it
does
take
time
to
make
the
changes,
but
they
do
want
to
get
into
their
proof
of
concept,
so
they
have
a
commitment
going
forward.
Yes,
there
are
changes
on
the
on
the
the
tls
server
side
that
it
has
to
now
support
this
particular
extension.
X
But
the
number
of
those
servers
which
are
around
the
world
is
a
manageable
number
and
again
the
parties
that
own
these
things
at
the
the
national
airports
and
so
forth.
There
we
have
enough
of
them.
I
think
committed
that
the
rest
will
then
follow
along
the
big
players
are
committed
and
the
rest
will
participate.
So
I
think
you
have
the
community
of
interest
for
this
and
it's
worth
the
work
group
putting
their
knowledge
behind
to
make
sure
this
is
done
right,
because
this
has
international
consequences.
C
All
right,
thank
you.
Can
we
get
a
just
a
quick
show
of
hands
for
folks
in
the
room
who
have
read
the
draft.
C
All
right
not
a
whole
lot,
but
there's
probably
some
more
online,
we'll
convene
amongst
ourselves
and
take
it
to
the
list.
I
think
thank
you
so
much
all
right,
hannes.
I
think
you're
next.
H
Okay,
some
of
you
may
have
been
at
the
hot
rc
event
yesterday
and
where
I
talked
about
this
new
work
now
I
focus
more
on
the
dls
related
part
and
go
a
little
bit
more
into
details.
This
is
obviously
new,
so.
H
— nowhere close to anything asking for adoption; it's more about figuring out whether any one of you is interested in that type of activity. As you know, I work for Arm, as was previously mentioned. On my side of the industry there's a lot of excitement in defining new hardware extensions for coming up with new forms of isolation, which then obviously bubble up into operating systems — and in demonstrating that you have these hardware capabilities and isolation capabilities to other parties, and that happens in the form of attestation.
H
Okay
next
slide,
so
we
have,
since
there
are
some
couple
of
or
various
projects
ongoing.
If
you
look
at
the
confidential
computing
consortium,
you
see
where
some
of
those
or
how
some
of
those
activities
look
like
that
utilize.
These
new
forms
of
software
isolation.
H
You
see
that
there
are
various
different
ways
to
communicate
app
station
information
from
the
device
out
to
whatever
type
of
relying
party,
and
so
we
are
trying
to
make
an
attempt
to
generalize
channelize
the
solution
a
little
bit
to
avoid
having
everyone
come
up
with
their
own
technique,
which
is
doing
more
or
less
the
same,
and
what
many
of
those
mechanisms
do?
H
Is
they
stuff
something
into
dls
for
fairly
obvious
reasons,
because
you
have
to
establish
a
dls
connection
very
early
in
the
interaction
anyway,
and
so
what
we
wanted
to
do
is
we
want
to
do
taking
the
work
of
rats
support
their
two
models?
H
If
you
don't
know
what
the
background
check
in
a
passport
model
is,
there's
a
good
architecture
document
in
the
rads
group
explaining
it,
and
we
also
wanted
to
support
different
at
the
station
formats
or
at
the
station
technologies
like
dbms
and
some
of
the
stuff
we
have
come
up
with
in
arm
with
the
entity
at
the
station
token,
and
also
we
wanted
to
be
agnostic
to
the
de
technology,
the
underlying
technology,
whether
that's
our
most
recent
architecture,
called
ambi,
arm,
b9
or
older
things,
which
you
guys
are
all
using
in
your
phones
in
your
tablets
and
big
fangs,
being
loyal
customers
next
slide
and
technically
it
sounded
like
at
the
beginning,
very
simple.
H
We
wanted
to
use
the
certificate
types
because
we
are
conveying
the
attestation
information
in
the
certificate
payload,
but
unfortunately,
we
had
to
stuff
in
a
new
field
nons
which
is
used
for
freshness
of
the
produce
data
station
information,
so
that
sort
of
made
the
reuse
a
little
bit
more
complicated
and
obviously,
as
I
said,
the
content
in
the
certificate
certificate
messages
then
changed.
H
What
it
actually
I'll
explain
on
the
next
slide,
what
this
content
of
the
certificate
message
is,
but
in
in
many
cases
the
attestation
information
that
is
produced
by
the
hardware
or
by
software
hardware
combination
is
something
like
I
linked
an
example
in
case
of
what
we
do.
H
It's
called
the
arm
platform,
security
architecture,
initial
attestation,
token,
some
fancy
name
that
our
marketing
people
came
up
with
and
that
sort
of
captures
what
the
hardware
is
and
also
what
the
state,
the
initial
state
at
the
boot
time,
the
software
actually
what
it
compromises,
what
the
different
components
are
quite
useful
information,
obviously,
but
for
the
dls
exchange
you
need
more
than
just
having
that
token,
which
you
could
describe
as
a
bare
token.
You
need.
H
— which I'll talk about; it's obviously also described in the document. So here's an instantiation of how this looks in a typical scenario. The device is split, in this case, into two parts: the part where Linux runs — one compartment — and another one, here called the secure world, which runs something like OP-TEE as an operating system.
H
So
they
have
different
operating
systems
running
at
the
same
time
in
different
software
isolation,
containers
and
if
the
dls
stack
is
in
in
running
in
linux
as
an
application
and
then
sort
of
communicates
with
this
secure
world
side
with
what
in
where
an
attestation
service
is,
and
that
talks
to
a
specific,
in
this
case,
a
security
engine
like
a
dpm
or
something
else
to
get
the
other
station
information.
And
then
it
bubbles
up.
H
But,
as
I
said
that
token,
by
itself
won't
do
the
job.
So
there's
another
layer
needed
where
the
device
at
some
point
in
time
generates
a
public
private
keypair
and
produces
an
at
the
station.
A
key
at
this
high
quality
key
at
the
station
token,
which
is
conceptually
you
could
think
of
it
as
a
sibo
web
document,
with
a
proof
of
possession
key
that
is
then
used
in
the
dls
handshake
to
actually
demonstrate
the
possession
of
the
private
key,
and
that
happens
that
is
generated
inside
this,
the
secure
world.
H
So
it's
a
little
bit
more
nuanced,
but
sticking
together
a
couple
of
idf
technologies
in
the
end.
In
the
dbm
case,
there's
also
some
w3c
attestation
format
involved
as
well,
because
they
use
their
own
sort
of
technique.
F
Questions
yeah,
so
you're,
probably
gonna
you're,
probably
gonna.
Tell
me
there's
some
internal
reason
why
this
won't
work
which
I'm
willing
to
accept,
but
is
this?
Is
there
one
key
or
two
keys
like?
Does
this
look
like
tls
client
auth,
or
does
it
look
like
with
a
funny
certificate
attached
to
it
or
like
something
else.
H
So
there's
there's
one
key:
there's
one
key:
a
separate
key
to
sign
the
app
station
information,
the
initial
platform
at
the
station
token
yeah
and
then
there's
another
key
that
is
used
for
the
dls
client
authentication.
H
Fundamentally,
yes,
okay,
I
just
couldn't
use
the
same
payload
or
extension,
because
I
had
to
put
the
nonce
in
there
and
it
was
in
the
original
certificate
type
extension.
There
was
no
place
to
put
the
nonce
because.
H
Yes,
in
in
it
comes
from
the
verifier
extruder
through
the.
So
that's
why
there
are
these
two
types
of
models
in
in
rats.
So
if,
in
the
background
check
model,
the
nonce
comes
in
all
cases,
it
comes
from
the
verifier,
but
in
the
background
check
model
it
then
gets
channeled
over
the
the
dls
exchange,
the
dls
handshake
right,
I'm
just.
A
We have a question online from Penguin — okay.
Y
Thank
you.
Thank
you,
hi
hi.
I
have
a
question.
First
of
all,
I
think
this
draft
is
very
useful
when
we
use
this
in
the
trusted
social
environment
and
to
the
client,
but
I
have
a
question
is
that
a
this
protocol
only
supports
the
heat
and
the
tpm
attestation
of
if
there
is
this
protocol
support
other
attestation,
for
example,
if
there
is
a
measurement
result
or
just
a
hash
value
that
this
protocol
will
support
that.
H
Yeah,
so
we
we
wanted
to
support
different
attestation
formats.
To
begin
with,
we
wanted
to
focus
on
the
sort
of
like
the
dbm
and
the
and
the
heat
based
approach
and,
like
I
don't
see
a
reason
why
others
couldn't
be
included
in
the
air,
so
they,
the
the
part
that
goes
into
the
dls
exchange,
doesn't
actually
care
much
about
what
it
is,
but
for
the
for
the
overall
functionality.
Obviously,
it
matters
how
you
stick
the
different
pieces
together.
H
Well,
we
probably
should
try
it
out
so
we'll,
so
our
plan
is
to
contribute
this
or
to
use,
contribute
the
project,
the
software
to
the
confidential,
the
software
to
the
confidential
computing
consortium
to
actually
have
others
to
look
into
that
as
well.
H
Actually, it's both. In this simplifying example I just focused on the client — the code initially focused on the client — but it turns out that you also want the server attesting to the client. For example, if you have some confidential workloads on some cloud-based service, you also want to know that what you're uploading — your code or whatever — goes to the hardware you're expecting it to be on.
V
Yeah,
so
thank
you.
That's
that's
useful
to
understand.
My
second
question
is
about
the
the
persistence
of
the
platform
state
and
how
this
can
be
used
like
what
are
the
ways
that
this
could
be
anonymized
so
that,
for
example,
two
separate
processes
that
run
on
the
same
machine
don't
end
up
identifying
themselves
as
being
cotenants
or
if
a
user
say
wants
to
clear
their
their
state
and
come
up
with
a
new
identity
that
doesn't
that,
then
the
platform
state
doesn't
itself
leak,
a
linkable
identifier
to
the
server
over
time.
H
A very good question — and I think this is on the to-do list, obviously — like you, Monty — for TPM and for some of the attestation technology. There are two pieces to the answer. One is: how does the attestation technology avoid providing identifiable information across different interactions? The other one is that there's some additional information this —
Z
Yeah — a lot of these are to-do things which, you know, I'm going to be involved in, obviously. Monty Wiseman, by the way — Beyond Identity. There are two reasons for the attestation key not being the thing that signs. One is the nonce, but the other is that, in order to be of any value, the attestation key has to have the property of only signing things that are inside the TPM.
Z
Otherwise
you
can
hand
out
a
blob
and
say
here
sign
this
it'll
become
a
signing
fool
right,
so
it
cannot
sign
external
data.
It
can
only
sign
stuff,
that's
inside
the
tpm,
except
that's
another
important
reason
why
you
can't
use
the
fstation
key.
The
other
thing
that
I
think
we
need
to
start
thinking
about
is
there's
lots
of
aspects
of
the
platform.
Are
you?
Do
you
only
care
about
the
bios?
The
firmware?
Do
you
only
care
about
the
os?
Do
you
care
about
ima?
Z
We
have
to
be
able
to
hand
it
a
bunch
of
stuff,
this
here's
the
stuff
I
care
about
and
then
get
this
backside,
and
I
think
this
is
going
to
be
a
more
complicated
thing
than
we.
I
think
it's
a
valuable
thing
to
do.
Don't
get
me
wrong,
but
I
think
this
is
going
to
be
a
lot
of
work
and
I'll
be
happy
to
be
involved.
O
Hi
nick
doty
center
for
democracy
and
technology.
Thanks
for
presenting
this
work,
I
have
some
of
those
same
privacy
concerns
that
have
already
come
up,
and
I
see
yes,
it's
an
early
draft
privacy.
Consideration
section
is
not
written.
I
expect
it
will
be
a
very
expensive
section
and-
and
I
certainly
would
encourage
you
to.
O
That
early
on,
because
I
think
it
could
have
very
fundamental
implications
for
for
the
design
of
the
architecture
altogether.
One
one
question
that
is
coming
to
mind
is:
I
look
at
this.
Why?
I
think
you
even
note
this
it
could
this
could
happen
in
lots
of
different
layers?
Why
should
this
happen
in
in
tls?
O
It
seems
like
setting
that
up
in
the
in
in
just
securing
the
communication
and
end
seems
like
kind
of
an
odd
time
to
do
that.
You
don't
have
any
of
that
application,
specific
information.
So
so,
why
should
this
be
a
tls
layer
protocol
rather
than
an
application
layer.
H
Hey
nick
haven't
seen
you
for
a
while.
That's
a
good
question
as
well.
I've
been
also
wondering
of
why
the
tls
working
group
would
be
a
good
place,
whether
maybe
rats
would
be
a
better
place
or
maybe
some
totally
different
group.
H
I
don't
see
a
concern
with
the
application
specific
information,
because
that's
not
really
something:
that's
sort
of
more
policy
what
to
include
and
whatnot,
but
there's,
obviously
an
tls
extension
that
needs
to
be
defined
here
and
and
described
similar
to
the
certificate
types
extension
which
described
on
what
type
of
certificate
to
put
in
there
and
in
some
sense,
from
a
dls
point
of
view,
there's
little
little
to
do
practically
like
it's
actually
a
short
document.
H
The
problem
is,
I
have
to
set
the
context
also
here
in
the
presentation
like
otherwise
you
you
don't
know
what
I'm
talking
about
and
that's
the
difficulty
and
I
don't
know
how
to
best
deal
with
that,
but
it's
actually
better
to
put
that
into
two
documents
or
one.
That
is
more
like
the
architectural
aspect
to
it,
which
already
rats
does,
in
some
sense
so
a
lot
of
it.
What
I'm
talking
about
is
in
rats,
so
I
don't
know.
O
Well, maybe just to continue that a little bit: why does the user want to do it during the setup of the connection, rather than attesting once they know some more about what they're trying to do with this server?
H
Okay,
yeah
the
the
use
case.
So
actually
maybe
it
would
be
best
for
me
to
distribute
one
of
the
confidential
computing
consortium
use
case
our
date,
their
white
papers,
because
they
describe
some
of
the
use
cases
on
why
you
would
want
to
have
at
the
station
and
some
of
those
software
isolation
techniques
in
general.
H
I
think
there's,
for
example,
you
want
to
upload
a
code
onto
a
cloud-based
infrastructure
and
run
it
there,
but
you
only
want
to
disclose
the
code
and
the
configuration
data
to
certain
platforms
that
meet
certain
criteria,
so
they
can,
for
example,
to
run
the
code
there
that
even
the
cloud
provider
doesn't
see
what
you're
running.
That
is
one
of
the
that's
sort
of
the
promise
of
of
confidential
computing,
and
so,
for
example,
you
may
have
some
machine
learning
data
in
there
that
you
don't
want
to
spread
around
that.
H
You
want
to
keep
yourself.
That
is
one
of
the
the
use
cases
in
in
in
confidential
computing,
so
in
general,
and
like
pushing
code
around
and
making
sure
that
it's
actually,
you
run
it
in
an
environment.
That
is
what
you
would
expect,
for
example,
pushing
moving
workloads
from
the
cloud
to
the
edge
and
running
it
there,
including
obviously
data
that
this
code
would
run
on.
H
You
want
to
make
sure
that
you
actually
store
it
on
servers
that
meet
certain
criteria
so
that
the
data
doesn't
leak
out
into
into
the
wild,
because
otherwise
anyone
could
sort
of
like
claim
that.
Oh
I'm
running
a
virtual
machine
here
with
some
great
hardware,
I'm
really
protecting
your
data,
but
in
fact
it's
not
happening.
O
So yeah — maybe that would be a more promising direction, rather than users having to attest to, I'm not even quite sure, everything about their device, with potentially hard-linkable certificates or keys. Those other use cases might make more sense, or make this easier to understand.
H
I
will
post
the
the
white
paper
to
to
the
list
on
what
the
confidential
computing
consortium
believes
that
it's
useful
areas.
AA
What
would
I
put
in
there?
So
there
you
say,
eat
or
tpm,
so
I
have
some
understanding,
so
that
has
to
be
fleshed
out,
but
this
is
zero
zero.
I
get
this,
but
evidence
is
never
set
to
a
lying
party,
which
is
literally
what
you
say
and
then
so
then
there's
a
verifier
in
the
picture
and
at
the
station
servers,
which
are
also
so
that
I
I
get
the
point
where
you
want
to
want
to
solve,
but
yeah
some
so
there's
this
really
has
to
thomas
is
on
this
draft.
AA
So
that
makes
me
confident
that
this
is
sorted
out
yeah
at
the
moment
it
looks
a
little
bit
all
over
the
place
and
I
would
never
know
what
is
where
and
never
use
the
term
at
the
station
by
the
way.
H
It's
bad,
it's
yeah!
The
challenge
is
in
in
rats
that
the
different
flows
on.
AA
Transported
that's
so
I
meant
to
make
that
clear
and
also
talking
about
evidence
and
attestation
results
would
resolve
all
the
problems.
I
think
just
label
those
differentiate
them.
I
think
that
will
make
everything
very
clear
and
then
you're
basically
halfway
there.
I.
H
I
will
give
it
the
the
hank
some
sort
of
brush
next
with
zero
one.
C
Next
up,
we
have
a
update
on
the
post
quantum
process
from
sophie
and
tom
tom.
Are
you
going
to
share
slides.
R
We
shared
the
slides
with
the
chairs,
but
if
you
don't
have
them,
oh
there
you
go.
Oh
thank
you.
AB
Okay, I can see the slides now, so we can get started. This is a very brief, very abbreviated talk, just to get the ball rolling on PQC again, because some stuff happened. So let's go to the next slide.
AB
Somewhat
of
an
overview,
but
it's
definitely
not
complete,
we're
going
to
briefly
touch
keg's
key
exchange,
but
that
has
been
talked
about
a
lot
already
so.
R
Okay — I think Thom got frozen. Sorry, but I can take it. So, yes, here on the right slide —
R
So
basically,
as
some
already
said,
we're
not
going
to
focus
specifically
too
much
about
the
key
exchange
part
of
tls
1.3,
because
that's
indeed
there's
a
way
to
actually
easily
add
the
per
squad,
the
malgody
genome,
since
to
it
that
every
of
the
experiments
that
have
been
run
all
of
the
academic
papers
related
to
it
seems
to
show
that
indeed,
is
easy
to
just
go
up
the
classical
algorithm
for
apos
one
two
one
for
a
while.
R
It
will
be
good,
as
it
has
been
recommended
by
the
hybrid
design
document,
to
actually
not
only
use
the
pos
quantum
algorithm
in
isolation,
but
rather
to
combine
it
with
a
classical
algorithm
for
the
timing.
R
We're
talking
about
certificate-based
authentication
nexus
live,
please,
okay
and,
as
you
know,
there's
many
signatures,
some
verification
operations
that
are
actually
happening
as
part
of
the
tls
session.
It's
not
only
about
the
handshake
signature,
but
also
all
of
the
related
ones,
internal
attempts
about
the
handshake
signature.
R
It
does
seem
that
at
least
some
of
the
experiments
that
I
have
around
it
seemed
that
it
would
be
okayish
to
swap
the
classical
algorithm
for
a
plus
quantum
one,
and
that
should
be
okay,
as,
as
we
know,
the
signature,
algorithms
are
not
only
the
handshake
signature,
so
all
of
the
other
ones
that
are
related
to
the
to
the
to
the
tls
session
itself
seem
to
be
more
cumbersome
to
actually
migrate
with
quantum
cryptography.
AB
Great,
let's
hope
the
wi-fi
holds
up
yeah.
We
can
go
to
the
next
slide,
so
this
is
all
the
new
schemes
and
some
of
the
things
that's
out
there
already
on
one
slide.
So
for
comparison
we
have
the
pre-quantum
stuff.
AB
If you put that everywhere, it's probably going to be difficult. Falcon, on the other hand, is nice and relatively small — but it's worth pointing out here that, "if implemented correctly" — that is a direct quote from NIST's selection blurb —
AB
— it is quite tricky to implement correctly. We talked about that a bit before, in the CFRG talk earlier today. Then we have SPHINCS+, which is hash-based — slow, but conservative. And it's worth looking out here —
AB
Yeah, that is the name of the game here. Probably there is going to be an on-ramp — we also talked about this previously in CFRG — where NIST is calling for new signature schemes. The scheme that I think is most likely to go into that process is UOV, a multivariate scheme, but 400 KB for a public key is really quite big — although the plus side would be that the signatures are going to be very tiny, so maybe that saves space.
R
Okay,
so
if
you're
ever
interested
actually
into
checking
how
post
quantum
cryptography
deals
into
the
networks,
there's
different
experiments
that
have
been
run
by
google
and
also
by
cluffler,
specifically
and
most
of
the
times,
these
experiments
have
been
around
and
have
been
focused
on
the
key
exchange
plot,
as
I
already
said,
and
there
has
been
really
few
ones
focusing
on
the
authentication.
So
this
is
also
a
call
for
anybody
who
is
interested
to
actually
run
more
authentication
experiments
with
this
quantum
cryptography.
R
Note
also
that
open
ssh
in
the
version-
8.0
nine
sorry
uses
entry,
prime
as
the
default
key
exchange
algorithm
nexus
live.
Please
that's
also,
as
I
said
already,
some
academic
studies.
In
this
case.
It's
more
constraints
the
results,
because
most
of
the
times
they
are,
they
are
running
on
simulated
networks,
and
this
is
in
the
only
sense
it's
the
only
academic
papers
that
exist
around
password-based
authentication
only
come
from
the
academia,
so
there
has
been
almost
no
actual
working
items
around
that
area.
AB
Yeah,
so
I
just
put
a
bunch
of
the
stuff
that
I
could
find
if
I
type
pq
into
the
data
tracker
so.
AB
For
limited
use
cases,
probably
lance
is
really
working
very
hard
on
the
whole
pq
topic
right
now
it
seems
and
of
course
we
have
the
hybrid.
AB
Word,
mentioning
is
also
the
pqc
mailing
list,
which
I
think
opened
recently
and
flow
recently
sent
off
a
draft
around
there,
which
aims
to
resolve
the
whole.
What
is
hybrid
versus
transitional
versus
composites
and
try
to
resolve
that
language,
which
I
think
is
very
cool
and
I
think
that's
going
to
suck
this
patch,
so
you
might
want
to
look
out
for
that
thing
as
well.
C
Oh — someone's getting up.
K
Okay,
scott
flora,
cisco
systems,
one
minor
knit
you
you're
using
the
round
two
sphinx
plus
parameter
sets
round.
Three
is
slightly
smaller.
I
don't
know
if
it's
enough
to
make
a
difference.
R
No,
I'm
not
sure
actually,
but
we
will
check
that
and
if
indeed
they
are
wrong,
we
will
update
the
slides,
the
ones
that
we
corrected
were
from
cfrg,
but
we
will
check
it
scott.
Thank
you.
AB
And
you,
this
is
not
very
authoritative.
D
I
was
wondering
if
you've
done
the
arithmetic
on
how
many
bytes
we're
talking
about
and
whether
we're
likely
to
run
into
any
length
limits
in
in
tls.
So
things
like
you
know,
there's
a
there's,
a
limit
of
16
megabytes,
I
think
for
each
certificate
for
each
handshake
message
or
maybe
more
practically,
there's
a
limit
in
how
many
how
many
packets
can
be
sent
in
the
first
flight
of
a
quick
connection
which
limits
the
size
of
quick
initials.
D
I
wonder
if
you
know
how
far
are
we
or
from
from
those
limits
with
these
algorithms.
AB
Experiments
with
post-quantum
stuff
in
tls
and
and
while
implementing
that
for
the
more
ridiculous
schemes
out
there.
I
found
that.
AB
Common
for
implementations
to
have
a
comment
in
their
parser
and
one
part
so
that
oh
yeah.
AB
Whatever
is
going
to
send
that
so
by
and
that
kind
of
stuff
is
implicit
but
definitely
present,
so
that
might.
AB
Quick
has
the
anti-amplification
thing
that
you
can
only
send
back
three
times
the
initial
message,
and
this
will
likely
mean
that
you
need
to
do
padding
of
the
initial
message
or
perhaps
even
initial
messages
such
that
the
server
can
actually
send
back
the
certificate
chain.
If
it
goes
to
the
side.
AB
Thought
about
this
a
little
bit.
I
think
they
also
have
an
issue
on
their
interoperability
checker
to
also
actually
check
if
this
works.
But
I
I.
R
Yeah — the QUIC working group does have it. I'm not sure if they have updated it, but they were planning at some point to do a hackathon to actually check it; Lucas Pardue might know of more updated versions of that. There has not been a formal study of the different sizes. I know it has happened for DNSSEC — there's a very interesting paper by van Rijswijk-Deij and colleagues — but not specifically for TLS. So maybe that's something that can be looked into.
F
Erica
scroll,
yes,
it's
probably
worth
distinguishing
the
network
dynamics
from
implementation
issues,
so
I
mean
I
I
agree.
There
probably
probably
are
implementations
which
are
sad
if
you
send
them
one
megabyte
certs,
but
the
implementation
would
have
to
advertise
that
they
spoke
post
quantum
anyway
and
no
and
you
could
just
not
do
that
if
you
haven't
figured
out
how
to
fix
the
you
know
the
giant
message
problem
on
the
receiver
side.
F
So I don't think that's a big deal. The bigger deal is round-trip time and the size of the initial flights. Once you get outside something like IW10, things start to get pretty terrifying, even if you have anti-amplification techniques, because you just can't dump 50 packets on the wire under basically any conditions. And remember, the QUIC anti-amplification limit is about avoiding amplification before a round trip has been established. Practically, I'm not sure; David Benjamin's not here, I don't think, but the general sense is that once you get to about 10 kilobytes for the certificate payload, people start to get pretty sad about the whole thing, and that's in straight TLS too; it's not because of the implementations, it's because of TCP dynamics. So I was looking at this and thinking:
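A rough illustration of why the initial flight size dominates: under idealized slow start from an initial window of 10 packets, delivery time grows with payload size. The model below is a simplification I'm adding (no loss, fixed MSS, window doubling each round trip), not a measurement from the meeting:

```python
def round_trips(payload_bytes, mss=1460, iw=10):
    """Round trips to deliver a payload under idealized slow start:
    the congestion window doubles each RTT starting from iw packets.
    Ignores losses and pacing; illustrative only."""
    pkts = -(-payload_bytes // mss)  # ceiling division
    sent, cwnd, rtts = 0, iw, 0
    while sent < pkts:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

# ~10 kB fits in the initial window; ~100 kB costs extra round trips
print(round_trips(10_000), round_trips(100_000))
```

This is the sense in which a 10 kB certificate payload is roughly the edge of what IW10 delivers in one round trip, and anything much larger starts adding latency.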
F
This is all pretty horrifying. I think the good news is that it's all off in the future, but it's all pretty horrifying. MT can speak for that work.
R
Yeah, maybe there is hope of something coming that has smaller sizes. In the presentation we do talk about the fact that there's going to be a new call for algorithm proposals for signatures, so maybe there will be something smallish coming, but I don't think there is anything as small as elliptic-curve cryptography.
E
Basically that just means I'm really sad about the points that NIST has managed to decide are appropriate for the security levels they're targeting. I would probably be happy with a slightly lesser security target in order to fit things into fewer packets, because otherwise the performance is going to tank badly.
E
As far as anti-amplification goes, I think that's much less of a pressing concern, because the client can send a few extra packets if it knows that the server needs to send a lot more. If those packets are not critical, it's not a big deal if they get lost; but if they are critical and you're putting the key exchange in multiple packets, then it becomes a problem.
E
As I said, 10k is probably the limit. We might be able to push it over 10k for something like QUIC, where it's relatively new, but I don't imagine very many people are going to be happy if it's many tens of kilobytes, and the 400k thing is just a non-starter, so we'll have to get creative with compression and things like that. I can confirm that all of the stacks are fine if you send a very large ClientHello, but the performance is going to be terrible.
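One existing tool for "getting creative with compression" is TLS certificate compression (RFC 8879), which lets TLS 1.3 peers compress the Certificate message with zlib, Brotli, or zstd. A minimal zlib sketch with a synthetic chain; note that post-quantum keys and signatures are high-entropy, so generic compression mostly helps the non-cryptographic parts of a real chain:

```python
import zlib

# Synthetic stand-in for a certificate chain; real savings depend
# entirely on the actual certificates being compressed.
fake_chain = b"-----BEGIN CERTIFICATE-----\n" + b"A" * 3000

compressed = zlib.compress(fake_chain, level=9)
assert zlib.decompress(compressed) == fake_chain
print(len(fake_chain), "->", len(compressed), "bytes")
```

In RFC 8879 the compressed bytes travel in a CompressedCertificate handshake message, with the algorithm negotiated via the compress_certificate extension.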
R
Yeah, I completely agree. One of the things is that during the standardization process there were some academic papers and some experiments about what could fit into the networks or not, but not that much information compared with all the things that NIST was focused on. So if you're actually interested in this, my recommendation would be to attend the NIST workshop that is going to happen in November and actually put together what the real considerations from networks and protocols will be.
R
We are also running an event called PQNet, which I will post later on the Zulip chat, in which we talk about post-quantum matters in protocols. This time it is probably going to be co-located with that workshop in November. So if you're interested in talking about these concerns and making them reach NIST's ears, maybe that is also the place.
E
There was something we missed in that one, and Scott distracted me, so I've largely lost the thread; I'll carry it on in the chat. That's right.
E
Okay, yes, I apologize; I just thought of it as I walked away. If you send a message in two packets to a server, servers that are doing stateless processing will likely tell you to go away and come back again later, and that's going to add a round trip to everything. And I can see people nodding, yes, saying that's a major problem.
M
So the one question that remained was: is it possible, with the TLS flags extension, to define a flag where the request in the ClientHello is just the one flag bit, but the response is a real extension with actual content, rather than just "yeah, I support this"? It could be the other way around, with the roles reversed, but say the client says "I support this", and then the server says "okay".
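For reference, the flags themselves are cheap to encode. This is a sketch of one plausible encoding of the tls_flags extension body, a length-prefixed bit string in which setting bit N advertises flag N; the bit ordering here is my reading and should be checked against the current draft text:

```python
# Hypothetical encoder for a tls_flags-style extension body:
# a one-byte length prefix followed by a bit string where flag 0
# is the least-significant bit of the first byte.

def encode_tls_flags(flags):
    """Encode a set of flag numbers into the extension body."""
    if not flags:
        return bytes([0])
    nbytes = max(flags) // 8 + 1
    buf = bytearray(nbytes)
    for f in flags:
        buf[f // 8] |= 1 << (f % 8)
    return bytes([nbytes]) + bytes(buf)

# A client asking for hypothetical flags 1 and 8 spends three bytes
# of body, versus four bytes of header per ordinary empty extension.
body = encode_tls_flags({1, 8})
```

The debate here is not about this encoding but about whether the peer's response to such a one-bit request may carry a full-content extension.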
M
So, "here's a whole bunch of information". The question was: is this a legitimate use? Well, I wrote the PR that says no, you can't do that, and we asked for comments, and we got pretty much nothing until today, when we got something kind of non-committal from EKR and Martin. Either way, we can have it with this PR or without this PR, but I think it's time to close this out.
F
So I agree this PR is clear, and it clearly says you can't do it. My view is that you should be able to do it, because I think of sending the bit as sending the extension. Martin, I think, does not; he thinks of sending the extension as sending the extension.
F
I'm not going to lie down on the floor over this, but I think Martin isn't either. So I guess: is there anybody else with opinions on this topic who wants to weigh in?
F
It's a taste thing: Martin thinks simple is good, and I think flexible is good, and I'm just sad that we're going to someday have an extension that is, say, another 200 bytes that the server is going to want to send, and the clients have to burn four bytes to say "go ahead and send it". I think Martin thinks that doesn't make a difference. So that's the sum total of the disagreement.
H
I think the tricky thing here is to find some good uses of the flags extension and then to actually see whether that's practical, because it would be annoying if there are a few extensions that could make use of it, and then they can't be used because of some other constraint that was put in there, which makes it completely useless.
N
Okay, basically the same things that have already been said: we should be clear about whether it is or is not allowed. I don't have a strong opinion; my intuition tends to be the same as Martin's, that you should not allow it, but if we can come up with reasons why we should allow it, that's fine. I don't know of a reason it would be a fatal flaw, and I don't know of any vulnerabilities with it.
M
Let's imagine a silly extension: a terms-and-conditions extension, where we get from the server the terms and conditions for using it. We only have to send this one bit from the client and then get the whole terms and conditions back, as opposed to sending a whole four bytes of "I want the terms and conditions" to get it. But that's just a silly example, yeah.
C
Oh, the PR text is in the chat; we dropped it in there.
C
Okay, so we just put up a quick show of hands with the title "disallow non-flag responses", which is basically the PR as written. If you are supportive of this PR, raise your hand. If you are not supportive and you want to go the other way, do not raise your hand, and do not raise your hand if you do not have a strong opinion.