From YouTube: IETF112-TLS-20211109-1600
Description: TLS meeting session at IETF 112, 2021/11/09 16:00
https://datatracker.ietf.org/meeting/112/proceedings/
A: Okay, I think we'll get started in just a minute, just to make sure we're here for TLS.
A: Alrighty, let's get started. You are all probably familiar with the Note Well, which is with respect to IPR in the IETF, so please take a quick look at this. If you're not familiar, you can also find it on pretty much everything that you get from the IETF when you're signing up. The next thing we'd like to mention here is the IETF code of conduct.
A: Try to make sure you speak so people can understand, using reasoned arguments rather than trying to attack somebody's person, and using your best engineering judgment to find solutions for the internet as a whole. We all want to keep that in mind, and to remember that people often have different points of view, coming from different experiences and backgrounds. So keep these in mind when coming to the mic, speaking on Jabber, or posting on the mailing list. Thanks.
A: Perfect, thank you, thank you. I think we'll probably have a number of people in Jabber, but if somebody wants to speak and doesn't have audio access, we can relay that to the mic. Again, state your name when you're speaking, and keep it professional.
A: We have an agenda: we'll have a couple of discussions of working group drafts (looks like we've had a revision here already), and then we'll have a discussion of some new work that people have brought to the TLS working group. Is there any agenda bashing that we'd like to do at this point?
A: Alrighty, I think the next thing is just a quick check on document status. We have a number of things coming up into the RFC editor queue. There are some drafts that have been in their various states for significant amounts of time, usually waiting on author action in AUTH48 or waiting for some revisions. The document shepherds will be following up with you to strong-arm you into getting some of these things to move forward; some of them may be relying on the chairs too, but these are things that we want to try to push forward. Other than that, I think we have a number of things in flight, so good work by the working group; it's good to see that we're making progress in a lot of areas.
A: You did... who's presenting that?
C: Hi, so this is Nick's draft, but I think he's out on vacation, so basically I'm here as document shepherd to try to get this document to the finish line, because it's been through IESG review and we have one lone comment remaining, from Ben Kaduk, and I want to make sure that we get it reviewed. Next slide.

C: So again, we got one comment from Ben, which can basically be summarized as: he noted, are there any security issues caused by the fact that exported authenticators are based on the exporter secret, which does not incorporate the entire transcript? This issue kind of arose during the TLS 1.3 discussions that were on the mailing list, and Nick basically said, hey, it'd be great if Jonathan could look at this. Jonathan did take a look; he did an analysis and proposed two ways forward.

C: There are links there to the emails and the proposal. The response basically was: we could add security considerations to address this, or we could change how the exporter works. What he proposed was the security consideration, and that's what we're going to review next. So we've basically taken the approach that, since no one has suggested the other option, we're going to go with this. So he proposed some text.

C: Going once, going twice... I did note that Martin Thomson suggested in the draft that the short answer is that this is fine, and that TLS 1.4, or whatever the next version will be numbered, might want to include the entire transcript under the exporter secret. Watson also suggested that it was ready to go. So with that, I think we're going to go ahead and get this merged, get this draft done, and get it into the RFC editor queue. Thank you for your time.
H: So the only thing that includes the client Finished is the resumption master secret, so producing a message that uses the resumption master secret first, before you make this request, guarantees that the client and server already agree on the client Finished. Okay, but does any application need to do that? Because it's an implicit-versus-explicit authentication thing once you send your first piece of application data?
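For context on the comment under discussion: in RFC 8446, the exporter_master_secret's transcript input stops at the server Finished, while the resumption_master_secret also covers the client Finished, which is why the workaround above binds the client Finished. A minimal sketch of the RFC 8446 TLS-Exporter derivation itself (assuming SHA-256; the secret and labels below are placeholder values):

```python
import hashlib
import hmac

def hkdf_expand(prk, info, length, hash_name="sha256"):
    # RFC 5869 HKDF-Expand.
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hash_name).digest()
        okm += t
        counter += 1
    return okm[:length]

def hkdf_expand_label(secret, label, context, length):
    # RFC 8446 Section 7.1 HkdfLabel encoding: uint16 length,
    # "tls13 "-prefixed label, then the context, each length-prefixed.
    full_label = b"tls13 " + label
    hkdf_label = (length.to_bytes(2, "big")
                  + bytes([len(full_label)]) + full_label
                  + bytes([len(context)]) + context)
    return hkdf_expand(secret, hkdf_label, length)

def tls_exporter(exporter_master_secret, label, context, length):
    # RFC 8446 Section 7.5: TLS-Exporter(label, context, length) =
    #   HKDF-Expand-Label(Derive-Secret(Secret, label, ""),
    #                     "exporter", Hash(context), length)
    derived = hkdf_expand_label(exporter_master_secret, label,
                                hashlib.sha256(b"").digest(), 32)
    return hkdf_expand_label(derived, b"exporter",
                             hashlib.sha256(context).digest(), length)
```

The per-label Derive-Secret step is what lets unrelated exporter users coexist on one connection; note nothing after the server Finished ever enters this computation.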
E: This isn't clear enough for me to agree that it's ready to move forward. I guess I mean we'd have to go read the issue, but I don't think this text... I guess what I would say is, I don't think this text is clear enough to put in this specification.
I: Flags are next; you're up. Are you here?
J: Oh, I thought you were doing that. Where is that...
J: So, hi, talking about the TLS flags draft. What's happened since the last IETF (well, that's a bad font for 111): we've gone through working group last call, we added the requirement to drop malformed packets, and we published version 07, which includes this requirement.
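The extension under discussion carries the flags as a variable-length bit array. A rough sketch of that packing, assuming (as the draft's design implies) that flag N lives in bit N % 8 of octet N / 8, with trailing zero octets trimmed; wire-format length prefixes are omitted and the exact layout should be checked against the draft:

```python
def encode_flags(flag_numbers):
    # Pack a set of flag numbers into a bit array: flag N is bit (N % 8)
    # of byte (N // 8). An empty set encodes to zero bytes.
    if not flag_numbers:
        return b""
    data = bytearray(max(flag_numbers) // 8 + 1)
    for n in flag_numbers:
        data[n // 8] |= 1 << (n % 8)
    return bytes(data)

def decode_flags(data):
    # Recover the set of asserted flag numbers from the bit array.
    return {i * 8 + b
            for i, byte in enumerate(data)
            for b in range(8)
            if byte >> b & 1}
```

This layout is why low-numbered flags are the scarce resource being debated below: asserting only flag 31 still costs four octets of mostly zeros.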
J: So, are we done? Not necessarily; there are three issues that may be open. The first one is about the guidance for IANA experts. Section 4.1 contains guidance on how to assign numbers to the flags, and it's kind of a complicated thing: 0 to 7 is for things that everybody has to implement (I was thinking of RI, or some new thing like RI); then we have numbers 8 to 31, which are for things coming out of this working group, or specific requests that need to have a small number; from there on it's things that are maybe standards-track but more specialized; and then there's experimental.

J: It's kind of a complicated thing, so Martin Thomson opened issue number 11 and suggested removing all the advice and leaving it just to the discretion of the expert. I think that giving the expert some rationale is important, for some point in the future when I'm no longer the expert. So far I haven't made any changes to the draft.
K: I don't think I made that specific suggestion. What I was concerned with was the very specific set of rules, the specific boundaries that you'd set. No, I'm just running off memory, I'm going to have to go and look at the draft again, but there were specific points where you said the expert has to follow particular rules: flags 0 to 7 have to come out of the working group, 8 to 31 have to be standards-track documents with very specific conditions, 32 to 63 are experimental, and that sort of thing. I think what you want to provide is general guidance to experts, and maybe reserve the top few flags because they're super, super important, or something. But the level of specificity here is far in excess of what I think is reasonable.
J: You can see that, practically, the IANA people always ask about every assignment; they don't just say, oh well, we figured it out through the rules and we think you should do that one. They always ask. So... yes, Eric.
E: I basically agree with Martin's guidance. Obviously, just so we're all on the same page, to restate the obvious: the resource being conserved is a big pile of zeros at the front of the flags word. And first of all, this is already a relatively small field, so that's in better shape than it might be, as long as you don't start reserving... wait, wait. I think my sense would be: let's reserve zero to seven, or zero to fifteen, for whatever we feel is appropriate for working group discretion.

E: Everything else should be allocated in sequence right after that, and if this turns out to be a nightmare and we have a hugely sparse thing, then we can invent a new flags thing with some clever coding for the zeros or something. But trying to spend a lot of effort to optimize away that last byte or two, I mean, that's the kind of thing cTLS tries to do; I don't think it's particularly helpful here. The handshake is already quite large.
J: Okay, so can we go on to the next issue? So the next issue is about the Recommended flag in the IANA registry. The Recommended column is in the new registry for the flags. What it says right now is "Recommended", which is either a Y or N value determined in the document defining the optional feature, and it doesn't really say anything about when a flag should be recommended and when it should not.
L: Yes, Joe, hi; this is Hannes. Yeah, I thought it would be good to have some extra text, as I mentioned on the mailing list; this is anything but clear. Just have something meaningful there about what "recommended" here actually means.

L: Well, in this case it's a little bit different, because you are defining flags, which by their own nature are already different from general-purpose extensions: the flags respond to the need for having something optimized, right? That's why the flags are defined.

L: I went through the list of extensions, and even with the language from the TLS RFC, for many of them it doesn't make any sense. Recommended for what?

L: Yeah, I can propose some text, as dkg proposes, but in general I'm against just putting columns there and populating them with values nobody knows the meaning of, and these types of things.
A: One second, I guess. I'm sorry.
J: I guess at least they mean that we expect all or most implementations to implement this flag, because it's recommended; but that's the informal way of expressing it.
A: Yeah, and I think maybe this is something we would touch on in the RFC 8447bis discussion later on, but I think some of this may be around ambiguity about what "recommended" means in general, or the yes and no. I think there are things that need to be clarified there, and I would hesitate to put something too specific in here when that should really be clarified in the 8447bis.
E: I think this column should look exactly like the column for other extensions, and so we should defer this discussion to 8447bis, especially if we're saying that whatever the heck we decide there will be what's applied here, because these are, as you say, just like regular extensions. The purpose of this field is not to conserve space; it's to indicate to people what things the IETF has confidence in and what things it does not have confidence in. And it's certainly possible to have a one-bit extension which you have no confidence in, one where everyone essentially says, don't do this thing, we have very little confidence in it; or rather, we'd actually have very high confidence that it was bad. So I don't really think it's impossible to have one-bit extensions that we have an opinion on, positive or not. So I don't think that...
C: Hey, I think the easiest thing to do is just reference 8447, because then you don't have to worry about updating this document if you end up deciding to change the other values; otherwise that document has to update two things. So it's probably best just to point there, move on, and kind of deal with it. This thing will go in the RFC editor's queue, and it can wait for the bis document, which hopefully shouldn't take all that long to catch up.
J: Right, so I'll go to the last issue, probably the more controversial one, about DTLS 1.2. Achim Kraus (sorry if I'm butchering your name) opened issue number 13, asking for the flags extension to be defined for DTLS 1.2 as well. It's already relevant for TLS 1.3 and DTLS 1.3; they want it for DTLS 1.2 too.
C: The backstory here, for the working group, is that typically when we start working on new extensions, the default is that they apply to TLS 1.3 and later, and not backwards. Then we have to make an explicit decision to go back and try to support something, to essentially backport it to 1.2.
F: And there is the corollary that all of the individual flags are also 1.3-only. So if we start saying that, okay, the flags extension itself can now be used in 1.2, is the scope of that literally just the flags extension, and then we have to say something else about each individual flag? Or is the implication that all of the flags contained within it are also applicable to TLS and DTLS 1.2 as well? Because if that's the case, then there's a really severe implementation burden.
L: Hannes, yeah. For me this is really a very practical issue: I was working on the update of our implementation for the CID, and this return-routability check is part of that story, and we have the 1.2 and the 1.3. The question for me was whether I have to provide features for both, because both provide this CID for DTLS.

L: And in general, of course, it raises the question of what happens with DTLS 1.2 overall, like if someone wants to define an extension. Probably fewer people want to do that, but it will be out in the field for a while, because it's still fine; it's not broken if you use it with the right algorithms and the right settings.
J: And we have already decided that RI does apply to DTLS 1.2; it's not just the latest version. I guess we could work around the complexity of having the same extension by making this two extensions: one flags extension for 1.2 and one for 1.3. If you want a certain flag to apply to both, then you just update both registries to add the flag to each extension. It's a bit more complex, but the complexity then dies with 1.2.
E: I guess I kind of think this is a set of pragmatic questions, because no answer seems ideal. Basically, as Ben says, it's a hassle to have to support this for both 1.2 and 1.3. I think if we were going to do that, we would have to say, whether it's two code points or one, that certain flags are valid for 1.2 and not 1.3, and vice versa.

E: Right, and probably we shouldn't define any that are 1.2-only. So I guess the question I would have is: for the people who would like to have this for 1.2, how inconvenient is it to not have it?

E: If the answer is "really inconvenient", then we should do it. Because I've already heard from David Benjamin and Ben Kaduk that it's not that inconvenient to implement, it's just kind of inconvenient, so I don't feel that strongly about it. So I guess I'd like to hear from Hannes, and other people who think this is valuable: is this going to hurt us if we don't do it? If the answer is yes, then we should do it, even though it's kind of a hassle.
C: Ekr, to your point, and that's where I was going to go: apparently this could be useful, but how useful? Are you going to use it every time you set up a DTLS 1.2 handshake, or not? That's kind of where I'm thinking about this.
L: Yeah, for me it's not terribly inconvenient. It's just that I'd need to define a separate extension for 1.2, and that means a few more bytes over the wire and extra code, but only a little bit. So it's definitely not the end of the world; I wouldn't die for this.
C: So I guess, do we need to do a hum? If the person that was asking the question isn't willing to fall on a sword, I'm thinking that we should basically just leave it as is. Okay, let's go ahead and note that. Thank you, and thanks, Hannes, for being pragmatic.
E: Okay, okay, why can't I talk to multiple screens? No, I can only share one, okay. That would have been awesome. So yeah, to be clear: I've declared temporary bankruptcy on 8446bis. I still plan to work on it, but before this meeting I went through a bunch of comments David had mentioned and realized that I was not able to produce useful discussion in the time I had available, and so I'm planning to do that. But I didn't see any point in requiring us to discuss it when I hadn't done anything meaningful. Next slide, please.
E: Oh, it's me, isn't it? That's right, okay, yeah; I'm just not used to not having control. So the status of DTLS 1.3 is that we're in AUTH48, but we've had two issues raised that we thought we had to deal with, and so the hope is to get these issues out as fast as possible and then get this out the door.
E: So the first set of issues was raised by John Mattsson, which is that, because DTLS has much tighter record limits than TLS for AES-GCM (due to the fact that MAC failure does not cause connection teardown), as a practical matter you have to rekey very often in order to deal with the key usage limits. But the epochs are only 16 bits, which means that the total number of records you can actually send is only about 2^40, which is really quite a small number, and concerning.

E: It's worth noting that this is also a problem with DTLS 1.2; it's just that we didn't actually write anything about the record limits in DTLS 1.2, and we're trying to be more rigid about 1.3. So we have two PRs in front of us that are both relatively similar, one by Chris Wood.
E: And one by myself, though I don't think Chris and I are fighting about this; we just decided to try to flesh both of them out so people could see them. So the first change is to expand the epoch to 64 bits, which you have to do in any case so that you can rekey more than 2^16 times, and then the question is what to do with the nonce formatting for the AEAD.
E: So one option is to just encode the lower 16 bits of the epoch in the nonce, so the nonce would still be 16 bits of epoch and 48 bits of record identifier. This obviously means the nonce doesn't incorporate the entire epoch, but it's still the same underlying formatting for the DTLS records, so that's more consistent with DTLS 1.2.
E: The other option is to expand the sequence number to 64 bits, use the entire sequence number in the nonce, and just remove the epoch entirely. So, to try to argue for one or the other: the argument, I think, for 235 is that it's a simple change, and you still have the epoch in the nonce, you know, the lower 16 bits of it at the back of the nonce.
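Both options keep the RFC 8446 record-protection rule (left-pad a 64-bit sequence input to the IV length, then XOR with the write IV) and differ only in what fills those 64 bits. A hedged sketch, using the PR numbers as heard in the session:

```python
def record_nonce(write_iv, seq_input):
    # RFC 8446 Section 5.3: left-pad the sequence input to the IV
    # length and XOR it with the write IV.
    padded = seq_input.to_bytes(len(write_iv), "big")
    return bytes(a ^ b for a, b in zip(write_iv, padded))

def nonce_option1(write_iv, epoch, rec48):
    # Option discussed as PR 235: 16-bit epoch || 48-bit record number.
    return record_nonce(write_iv, ((epoch & 0xFFFF) << 48) | rec48)

def nonce_option2(write_iv, seq64):
    # Option discussed as PR 257: the full 64-bit sequence number,
    # with no epoch bits in the nonce at all.
    return record_nonce(write_iv, seq64)
```

The security argument in the discussion is that per-epoch key separation, not the epoch bits in the nonce, is what distinguishes key phases, which is why option 2 is safe.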
E: But when you go look at the analysis of the situation, you ask: what is the epoch doing in the nonce anyway? And to be honest, that's really, really old; it was done when we first did DTLS, and I'm not sure we had a good reason. The essential point is that once you're thinking about this problem anyway, if you're worried about the epoch letting you distinguish between different key phases, well, now you have to worry about cycling around at 2^16, so it's not actually going to help. And when we went through the analysis, we actually said each epoch has to have separate keys, and that's what's giving you separation between the epochs, not having the epoch bits in the nonce. So the argument for 257 is that once we accepted the keys had to be separate anyway, which they already are, and we were depending on that property...
E: ...then it didn't matter what was in the nonce. And so this allows you to (a) have 2^64 possible sequence numbers, if you had a cipher that permitted that, and (b) look more like TLS 1.3, so it harmonizes with TLS 1.3 instead of being a special DTLS thing. So that's what we were thinking about when I posted this. In either case I'm going to want to run this by some cryptographers, but we can do that quite quickly once it's written down, and we already...
E: ...ran 235 by Karthik. So, I'm seeing Scott Fluhrer's comments: my impression is that because we also randomize the sequence numbers, and we have the XOR against the epoch, as Martin was saying, I don't think that actually makes as much difference as you'd think. So I guess I prefer 257. I think that being closer to TLS 1.3 is better than being closer to DTLS...
E: ...1.2 in this case, and it avoids leaving us with this sort of weird wart where we have half of it there and we have to reason about it. So I guess, what do other people think? I see MT was suggesting that he thinks 257 is better.
E: Okay, then I will refine 257, run it quickly by some of the usual suspects, and we'll call it done. Okay, next. So this is the thing David Benjamin raised: the DTLS handshake structure is different from the TLS structure, and it has these three extra fields which are used for reconstruction of fragmented handshake messages. And so the question becomes: ordinarily you would say the transcript consists of the messages, and even when you reconstruct the messages, what you do, instead of saying we didn't need this framing, is basically set the message sequence to be what it's supposed to be and then set the fragment offset and the fragment length to zero and the length of the message, right? And so these...
E: ...are fixed values, but they still go in the transcript currently. David eventually pointed out that this is a little weird in the message hash that we use for HRR, and he also pointed out that in this case it's not entirely clear you can convince yourself that the TLS 1.3 and DTLS 1.3 transcripts don't overlap in some way, because this sequence number and so on is overlapping with message contents. So I think David sort of said, well, this is just annoying and grumpy, but we should live with it. But I'm actually wondering if, in line with the previous thing, we should just bite the bullet and say that the DTLS message transcript does not include these, whatever, six or eight octets, and then it would look the same as the TLS message transcript. I don't think it's much harder; I think this would not be harder for us in any meaningful way.
E: I don't think it's hard for anybody else either. I know it's a breaking change, but, as David points out, we already did something here, because we have to specify what goes in the message hash; so once we're doing that, maybe we should just make the change and live with it. People are probably getting the impression I'm trying to eliminate any distinction between DTLS 1.3 and TLS 1.3 wherever I can.
I: I pretty much don't have any concerns, especially given the point that Martin makes in the chat, so perhaps I can work with David to prepare a PR. And here he is.
O: I originally found it just because it was ambiguously specified; it wasn't clear which one it was supposed to be. But I guess it's true that the only message that is ever in this ambiguous stage is the first ClientHello, and at that point you haven't even picked the hash yet, so you could just keep the ClientHello around and sort of erase the extra bytes as needed. Maybe it's okay. Also, we don't strictly need them to be separable; well, I'm not sure, I suspect we don't strictly need them to be separable if all the labels say "dtls" rather than "tls", but it is kind of annoying to have to remember to do that every time.
E: Okay, so I'll prepare a PR for this, and, given it's going to take a couple of weeks to finish the other thing, if someone could try implementing this and make sure it doesn't explode, that'd be great. I don't think, by the way, you keep the ClientHello around very long, just to connect to Hannes's comment; you do the version negotiation immediately, so you already know what you're doing.
C: In the repo there's also this pull request to address, sorry, to reference RFC 7457, which is the UTA draft. I definitely think it's not needed; I think you didn't either. Does anyone...?
I: Okay, I think we're all set. Ekr, I'll take the slides away and then pop up ECH. Sean, I'm going to mute you as well; that's okay. Oops, sorry; no worries. Okay, this is a fairly quick update. Just a quick primer for those of you who are not familiar with the protocol: imagine you had a normal TLS 1.3 connection in which you're connecting to a server, revealing in the SNI extension potentially sensitive information about what you're doing, specifically which server you're connecting to. ECH, as its name suggests, is just a mechanism for encrypting that information to the target server that you're eventually trying to connect to, such that passive eavesdroppers on the wire between client and server can't figure out any of that potentially sensitive information. Beyond the SNI, other extensions in the future might be sensitive.
I: Since the last IETF we declared draft-13 as the interop and deployment target, to get some initial experience and to avoid circling on issues that we had open: particularly around what the right mechanism is for signaling acceptance versus rejection; how you deal with HRR and GREASE; and, practically speaking, how you would implement server-side padding for the purposes of hiding which certificate was actually chosen for the particular connection. There have also been other questions around extensibility in the ECH configuration that's published in the HTTPS record. We've parked all of those open issues and labeled them as such in GitHub, with the plan being to effectively get some experimentation.
I: There are a number of implementations out there right now: BoringSSL, OpenSSL forks, NSS and so on. Some of these are more interoperable than others; others are working their way up towards draft-13, which is good. If you know of other implementations, the request is that you please add them to the wiki. You can find the wiki page under the ESNI/ECH repository on GitHub.
I: During the course of bringing up the different implementations, I and other people have been taking notes on some of the pain points in the protocol right now: things to note going forward, to potentially revisit depending on how severe or how painful these sharp edges happen to be. For example, configuring your stack such that things from DNS are plumbed into the TLS stack accordingly, and parsed and validated in the right way, has been somewhat annoying. There's the required text for checking the public name in the ECH config inside the TLS stack, which requires an IP address parser, which previously was not a requirement for most TLS stacks.
I: What's currently in the draft right now is not the best in terms of the technical procedure for validating that particular ECH config, so it's no surprise that this became somewhat of a pain point when implementing things. Perhaps we can do something in the future to fix it, but it's something to be mindful of if you're going to spin up an implementation of your own. It's also been noted by several people...
I: ...that split mode is kind of challenging to deal with as well. I'm not aware of any experiments or deployments that are planning on using split mode right now, but if there are folks who do specifically try to implement this use case, it'd be very good to know what sort of obstacles you run into, specifically around coordination between the client-facing server and the back-end server, similar to the implementation notes...
I: ...just so we can be mindful of potential ways to simplify the specification going forward, or areas we might expand upon in the draft, an appendix or what have you, for the purposes of helping implementers, similar to what's in RFC 8446. Stephen, did you have a question, or did you want to elaborate on anything?
P: Yeah, just on the split mode thing: split mode itself was pretty straightforward. You just required a new API, but it was easy enough. It's split mode plus HRR that was the problem. The issue was, in particular with haproxy, you either operate as kind of a TLS endpoint in non-split mode, or in what they call, basically, TCP mode, where they're trying to do go-faster stripes, essentially, and in that mode they only look at the first packet. So I don't know that there's any way to fix the issue that, with HRR, you have to change the second packet as well as the first one. On the other hand, it might indicate that what we're doing with HRR is less likely to be of real interest in the real world.
I: Okay, yeah, thanks for clarifying; no surprise that HRR is problematic, I guess. I think you also raised an interesting point in that this was particular to a specific server implementation, haproxy. The interesting thing to know is, as people integrate ECH as it exists in various TLS stacks, like BoringSSL or what have you, into different server implementations, what pain points come up along the way. Similarly for clients: how challenging is it, or not, to plumb this into curl, if you were to extend curl to add support for this, or into browsers, or what have you.
P: Yeah, you remind me of something I meant to say there, which is (it's not on the list; I think it's on the wiki page, though) that with curl, one of the big issues is going to be supporting SVCB. It's not ECH per se that's tricky; it's all the DNS handling and caching that might go around that. That, I think, would be a giant piece of work for curl.
N: Hey, so SSL stacks, general-purpose SSL stacks, have to have IP address parsing to be able to do certificate validation for IP certificates.
I: ...if that matches the existing parsers that are present in existing TLS stacks or elsewhere in host stacks. So it's mainly, I guess, a problem of ensuring parity between what's available and what's expected from the perspective of ECH, and, as you may recall, of trying to carve out specifically in language what is considered to be a valid or invalid public name. The world would be a lot simpler, I guess, if we did not have to deal with this IPv4 nonsense, but alas, here we are.
I: Okay, go ahead, David.
O
So the WHATWG URL parser did also recently change in a way that's actually probably useful to us, because we realized that you really want DNS names to be closed under taking suffixes. So we can instead simplify our rule to: if the last component is all numeric, or looks like hex, then reject it — which should be a lot simpler than the previous rule.
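David's simplified rule — reject a hostname whose final dot-separated label is all numeric or hex-like, since a URL parser could treat such a name as an IPv4 address — can be sketched roughly as below. This is a rough illustration only, not the WHATWG URL Standard's exact "ends in a number" algorithm; the function name and the exact hex test are assumptions.

```python
def last_label_looks_numeric(hostname: str) -> bool:
    """Return True if the final dot-separated label is all digits
    or a hex-style literal (e.g. '0x7f'), which a URL parser could
    interpret as an IPv4 address rather than a DNS name."""
    label = hostname.rstrip(".").rsplit(".", 1)[-1]
    if not label:
        return False
    if label.isdigit():
        return True
    # Hex-style labels like 0x7f / 0X7F (a bare '0x' also counts as a number)
    if label[:2].lower() == "0x" and all(
        c in "0123456789abcdef" for c in label[2:].lower()
    ):
        return True
    return False

# A public-name check for ECH could then reject such names outright:
assert last_label_looks_numeric("192.168.0.1")
assert last_label_looks_numeric("example.0x7f")
assert not last_label_looks_numeric("example.com")
```

Names whose last label is clearly alphabetic pass through unchanged, so ordinary DNS names are unaffected by the check.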
I
Yeah, I didn't know that — thanks for flagging it. It might be useful to track that. I don't think that would require a spec change, provided you just implement the latest version of the WHATWG parser — but anyway, good to know. Stephen.
P
Yeah, I mean, I only just noticed the title of the slide here. There's some reasonably good news too, I think, actually — because although ECH is a lot more complicated internally than ESNI, externally, for all the web servers we've integrated with it —
P
It
was
really
really
easy
to
to
upgrade
him
from
vs9
to
ech
so
and
we
got
a
drop
with
with
with
I
think
mozilla
for
about
10
and
with
boring
for
13
without
too
much
hassle,
I'm
a
cloud
player
as
well,
I'm
not
sure
what
on
13..
So
you
know,
I
think,
there's
quite
a
bit
good
news
as
well
as
red
flags,
more
good
news.
I
Yeah — sorry, I didn't mean to cast doubt on the future of ECH, but mostly just to point out the few implementation sharp edges that have come up. It is good news indeed that it's been relatively easy to upgrade existing servers to support ECH. As you say, it'd be great if this list was empty, but you can't have it all, I guess. All right, that's pretty much it. Between now and, I guess, IETF 113, the expectation is still targeting draft-13 for interoperable deployment, and as client stacks and server stacks proceed in adding more support for this, we should have a better picture, at that meeting, of how deployable ECH is on the internet. Any questions?
I
If not, I will yield the floor and we can turn it over. I think, Steven, you're next — no, sorry, Joe, you're next, for 8447bis.
A
Okay, here we are again. This is going to be a little bit of a rehash of some of the discussion we had on the flags spec. In RFC 8447 we have this Recommended column, which is supposed to indicate: if it's Yes, then this is an extension or a parameter that's generally recommended for an implementation to support, and if it's No, it's not generally recommended for an implementation to support.
A
On the surface this seemed like a good idea: there were certain ciphers that we thought were good, and then others — which we hadn't evaluated, or which we might think were bad — that we didn't recommend, or that were useful only in particular circumstances, or that could only be thought of as secure in particular circumstances.
A
The
feedback
that
we
got
from
here
is
that
well,
this
isn't
really
perhaps
enough
information.
You
know
n
could
mean
a
number
of
different
things
from
what
hey
we
didn't.
It
was
not
evaluated
by
us.
It
was
not
recommended
in
any
circumstances.
This
cypher
is
not
safe
at
any
speed.
Can't
you
know
you
should
should
not
use
it
or
could
mean
that
it's
only
recommended
in
specific
cases.
A
So
as
we're
working
on
rfc
8447
this,
we
we'd
like
to
try
to
address
this
issue
and
make
it
a
bit
clearer,
which
probably
means
adding
some
additional
states
in
here.
I
guess
you'd
say
so.
We
have
a.
We
have
a
draft
out
and
we
have
a
proposal
in
that
draft.
A
If it's Y — and we can argue about what the character here is — Y would mean parameters are recommended for general use, as it is today. N would mean that they're discouraged, and then we would also include a reference there, to the document that says why this parameter is limited. And if the entry were blank — if there was a space there — then parameters would be unevaluated: they would, in general, not be recommended, but we wouldn't have any specific thing to say about why, other than that we just haven't gone through the work.
A
K
If the IETF wants to say something else, like "limited applicability" or something else — I don't know — can we just say what we mean?
A
Yeah, that's a good suggestion. Next: Stephen.
P
Yeah, I'd maybe argue the opposite. I think the semantics of No can't be defined precisely, and therefore have to be vague, and I think we can't do any better, really — because sometimes No means "maybe it'll be good in future", sometimes No means "we absolutely hate this", and sometimes No means "this is something somebody else thinks is good, but we have no opinion on it". If we try and get any more fine-grained —
E
Yeah, I guess I'm persuaded. I think we need to have something more fancy than we have now. What we have now is an innovation, and it avoided us having to spend a lot of time evaluating things we didn't care about — but I think it's also clear people are confused. I think Martin's suggestion is actually pretty solid.
E
I don't care what things are called, but I think the minimum things we need to be able to say are: we're in favor of this; we think this is bad; and we have no opinion about it. Those are the main things we have to say, and I think there's room for some nuanced opinion beyond that, like —
E
— maybe "this is only good for IoT", or something. And I think it's clear that the Y-and-N thing didn't really work out that well, specifically, so I think empty cells will be fine. But I think we probably should just strike the Y and the N.
A
Okay — Stephen.
J
Right — so these kinds of values make sense for cipher suites or, I don't know, for different Diffie-Hellman groups: "yeah, we really like this EdDSA curve, so that's a Yes; 512-bit RSA is a No; and we're not saying anything about the Brainpool curves" — that makes sense. But for extensions: what possible extension are you going to say "No, don't use that thing we just standardized, that we just added"? It might be a change that we make later to an extension, but I don't see how we're making new extensions and marking them not recommended with these semantics.
C
To answer that question: the ones that have come up before typically do not get adopted by the working group, and then they go to the ISE to get published. But what I got in the queue to say was that, instead of having an open-ended section where we could put it in this column, we could also just put notes in the actual registry and point to them, so that there are fewer ASCII-art problems.
B
Yeah, I thought I heard Steven say he was against having a No, and I think we need that, for when we have yet another die-die-die RFC that says "don't use these ciphers". Okay, it wasn't Steven — somebody else, I thought, said No. I think notes — speaking as one of the designated experts — are a slippery slope: where do we get the note text from?
B
A
M
For any standards draft, I think it would be good to have a text column to complement this Yes/No/reference — one that could, for example, explain what any limitations would be, or give a specific use case: "this is recommended for IoT", and so on. That's similar to what IPsec is doing today, at least in their drafts. But I think that should only be used for standards track, not for any individual registrations.
B
Sorry,
I
meant
to
take
my
hand
down,
but
in
the
chat
I
think
was
watson
proposed
having
a
d
for
you,
know,
deprecated
and
maybe
that's
a
finer
grain
than
n.
Don't
know.
A
N
Hi — so, yeah, the concept of freeform notes seems pretty strange, because a lot of the places where we'd want these notes are on ISE submissions, and so we'd, I guess, be in the position of writing an IETF standards-track RFC that adds a comment to an entry created by an ISE submission. That seems at best hostile, and also pretty complicated.
N
In terms of "deprecated", remember that we can also remove codepoints from the registry — not that they'd be free again; they'd just get marked as reserved due to collision — or we have various ways that we could potentially notate that, independent of this.
N
I think what might be most straightforward and clear would be to simply note the status of the defining document in the registry. So if we're allowing codepoints to be registered from documents that are independent stream, or standards track, or experimental, let's just have a column that tells you the status of the accompanying document.
Q
So — speaking just for myself, Joe — what I like about this is that it doesn't take a lot of arcane IETF process interpretation to parse those words, even if we want to put a little more nuance into it. We may need some finesse, but, top line, I like the starting point: just the clarity of what that might mean to someone who's not steeped in IETF process. And that goes back to the comment, Ben, that you previously made to the list about document status.
Q
A
There are a couple of other things in this draft that I want to quickly go through here. The other thing we had added was a way to reserve an experimental codepoint range: we set aside a temporary space where we could assign values that were valid for only a year, for larger-scale experiments — I think, you know, after some discussion.
K
So I think this is a good direction. The idea that we give someone a codepoint, the experiment turns into a success, and then they have to pick a different codepoint and deal with the deployment churn that comes from that — that is kind of annoying. At the same time, having the entire space available for experimentation, perhaps with a process that streamlines the registration, would be a good thing. I think we want to encourage people to register the codepoints that they're going to use, and to keep that as simple as possible.
A
B
So — you get to see my face. I don't think the registration is overly complex. I mean, sometimes you have to wait — you know, the email delay for one of the experts to respond — but it's only like a week or two in general, and no experiment that lasts a year is really going to be offset by a week. You can always ask for a pre-allocation.
B
I'm just mainly concerned about having, you know, two places to go for things: the minute you have to write down information in two places, you're almost guaranteed that somewhere it's going to be out of sync, right? The old comment about code and comments disagreeing. The expiration date, yeah.
A
I
Yeah, thanks, Joe — that sounds reasonable to me. I know there's been some discussion about this on the list; there's been interest in doing this for a while. So I guess the question that we'll ask the list is on what's in the document right now, pending the changes that were discussed in the last fifteen minutes.
I
All right. Unless anyone has any questions or concerns about that, I believe we're now over to Stephen. So, Stephen, you can request to share slides, and then I'll approve the request, if you have that.
P
So I'm just pulling that up. There were two drafts I had back when ECH was ESNI, and this is kind of a heads-up, and to get some feedback. I'm proposing to maybe refresh these with a general substitution of ECH for ESNI. I think the last time we talked about this, a couple of years ago, people seemed okay with maybe adopting them, but we didn't do an adoption call — so I'd hope to get back to that state pretty quickly.
P
The text could go into ECH as an appendix or something — that'd be fine. The first one basically just defines a PEM format for a private key and an ECHConfig, and that's really useful for loading into a server when you want to have multiple keys supported: you load one file per ECHConfig-plus-private-key pair, and then you can load a whole directory of them, or do various things. It's just a PEM file format.
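The idea — one PEM file carrying both the private key and the ECHConfig, so a server can load a whole directory of them — might look roughly like the sketch below. The `ECH CONFIG` label, the placeholder bytes, and the file layout are illustrative assumptions, not the draft's actual encoding.

```python
import base64
import re
import textwrap

def pem_block(label: str, der: bytes) -> str:
    """Wrap raw bytes in a PEM block with the given label."""
    b64 = base64.b64encode(der).decode()
    body = "\n".join(textwrap.wrap(b64, 64))
    return f"-----BEGIN {label}-----\n{body}\n-----END {label}-----\n"

def read_pem_blocks(text: str) -> dict:
    """Return {label: raw bytes} for each PEM block found in the file."""
    blocks = {}
    pattern = r"-----BEGIN ([^-]+)-----\n(.*?)-----END \1-----"
    for label, body in re.findall(pattern, text, re.S):
        blocks[label] = base64.b64decode(body.replace("\n", ""))
    return blocks

# One file = one ECHConfig plus its private key; a server could load a
# directory of such files. The key/config bytes here are placeholders.
key = b"\x00" * 32
config = b"\xfe\x0d" + b"\x01" * 16
pem = pem_block("PRIVATE KEY", key) + pem_block("ECH CONFIG", config)

loaded = read_pem_blocks(pem)
assert loaded["PRIVATE KEY"] == key and loaded["ECH CONFIG"] == config
```

Keeping the pair in one file means key rotation is atomic per config: each file either has both halves or neither.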
P
The second one is a well-known URI. Last time I did it, I found it very useful to be able to have a web server publish its ECHConfig so that a DNS process could pick it up and then publish it, if you're refreshing keys — so that was the well-known-URI one. So that's what I propose doing. I'd love to get any feedback; I'm happy to do that over the next little while and then see if the working group wants to adopt them or not.
E
The PEM thing seems harmless, but I don't think it needs to be specified here — it's a PEM file — so, no, I don't think we should adopt that. The other thing — I guess I don't know yet, until I know whether or not anybody who deploys DNS would be interested in it. If they do, then I think we could; then I think the question would be —
E
— I'd ask your ADs where the appropriate place is, because it's not like a TLS thing in particular, so it's not clear to me that it belongs here, if that's the case. But I guess the question would be — I mean, an analogous thing is EPP or something, right? So I guess I would defer; the answer I'd try to give is —
E
P
E
— the IETF itself, or whatever, right; that's what I'm suggesting. I mean, just to be clear, I'm opposing taking the PEM thing — yes, I don't think we should take it. Okay, so — I don't understand why? Because I don't think we need to specify it here; I think it's fine as an internet draft.
P
I
Yeah — I think, if you're able to spend some cycles on just updating it, then we can talk: I can talk with the other chairs to see, you know, whether or not this is something we want to actively pursue here or elsewhere. It does seem like there's some interest in at least the well-known draft, whether that's TLS or not; I don't think it's a big concern. And I guess the same for the PEM format as well — I don't know specifically where they'd be used, but it does seem that there's at least some interest in it. So, all that is to say: we'll chat offline and come back with a proposal for how to move it forward.
I
Right — yep, thank you, Stephen. Okay, next up on the agenda is, I believe —
N
Exported authenticators, chair slides, ECH, zero-knowledge proofs — okay. So I will try to present from my own screen instead.
I
Oh, sorry — I must have skipped over this one when I was importing things. Hold on, it's processing right now. Oh, okay — that's my bad, one moment. Okay — can you see it?
N
So, what are we talking about? We are talking about an experimental extension for cTLS. cTLS is the — also relatively new, still under development — version of TLS that has a pre-shared template that the client and server somehow agree on out of band, before they attempt the connection. This is an extension to that which makes the wire image purely pseudorandom — in other words, it makes all of the transport contents pure pseudorandom bytes.
N
This is possible because we have a template, and because cTLS already has an essentially unpredictable wire format, where the wire format varies in arbitrary ways depending on the contents of the template. That's very different, of course, from mainline TLS 1.2 or 1.3, which has a very deliberately, carefully structured wire image.
N
This extension sits between cTLS and the transport: the transport — like TCP or UDP — is not modified, and cTLS is also not modified; this is layered between them, with the goal, basically, of simplifying security analysis. All the security analysis that holds for TLS or cTLS should, for the most part, continue to hold.
N
There are attacks where one party causes the other party to emit bytes on the wire that are confusing to a badly written parser in the middle — so that's one concern. There's also a privacy benefit here: because each cTLS template results in a different wire image, cTLS normally reveals which template is in use, but this extension causes all cTLS templates that use it to be indistinguishable on the wire.
N
So, the current model that we've specced out in our draft is based on something called a strong tweakable pseudorandom permutation. Sometimes in the literature this is called a tweakable super-pseudorandom permutation, or a wide-block cipher, or a variable-input-length block cipher — there's a lot of different terminology around this — but it's fundamentally just a slightly trickier cipher mode, like the usual block-cipher modes such as AES-CBC but more complicated, and it gives you one really big block cipher.
N
Our extension takes that and applies it to any TLS messages that are plaintext — that aren't already ciphertext — and also, then, to segments that contain both the headers, which are cleartext, and at least 16 bytes of the ciphertext. We chose this setup because it has no ciphertext expansion — zero space overhead, so the output and the input are exactly the same size — and we believe, and hope to be able to prove, that it is still very highly secure.
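To illustrate the interface being described — a keyed, tweakable, length-preserving permutation over a whole message, with zero expansion — here is a toy Luby-Rackoff-style Feistel network using HMAC-SHA256 as the round function. This is purely illustrative of the "one big block cipher" shape (key + tweak in, same-length pseudorandom bytes out, invertible); it is not the construction the draft would use, and no claim is made that it meets the STPRP security notion.

```python
import hashlib
import hmac

def _round(key: bytes, tweak: bytes, i: int, data: bytes, outlen: int) -> bytes:
    """Expand HMAC-SHA256(key, tweak || round || counter || data) to outlen bytes."""
    out, ctr = b"", 0
    while len(out) < outlen:
        out += hmac.new(key, tweak + bytes([i, ctr]) + data, hashlib.sha256).digest()
        ctr += 1
    return out[:outlen]

def stprp_encrypt(key: bytes, tweak: bytes, msg: bytes, rounds: int = 4) -> bytes:
    """Toy length-preserving, tweakable permutation: a Feistel network."""
    l, r = msg[: len(msg) // 2], msg[len(msg) // 2 :]
    for i in range(rounds):
        # (l, r) -> (r, l XOR F_i(r)); output length equals input length.
        l, r = r, bytes(a ^ b for a, b in zip(l, _round(key, tweak, i, r, len(l))))
    return l + r

def stprp_decrypt(key: bytes, tweak: bytes, ct: bytes, rounds: int = 4) -> bytes:
    """Invert the Feistel rounds in reverse order (rounds must be even)."""
    l, r = ct[: len(ct) // 2], ct[len(ct) // 2 :]
    for i in reversed(range(rounds)):
        l, r = bytes(a ^ b for a, b in zip(r, _round(key, tweak, i, l, len(r)))), l
    return l + r

key, tweak = b"K" * 16, b"record-0"
msg = b"hello cTLS handshake!"               # arbitrary length, no padding
ct = stprp_encrypt(key, tweak, msg)
assert len(ct) == len(msg)                   # zero ciphertext expansion
assert stprp_decrypt(key, tweak, ct) == msg  # it is a permutation
```

The tweak plays the nonce-like role mentioned in the talk: it varies the permutation per use, but nothing here requires it to be non-repeating.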
N
This is just a quick outline of what we're talking about here — a diagram of a typical cTLS stream. You've got a handshake message, like a ClientHello, that can be fragmented — especially in a DTLS-style situation — into multiple handshake fragments, which are then prefixed with headers and sent to the other party. Then, eventually, when the handshake is over, the bytes that follow those handshake messages are AEAD ciphertext output, so they're already pseudorandom.
N
So, in our extension we take two steps. First, we use this STPRP — the cipher — to render the handshake message itself pseudorandom before it's fragmented. I'll note that this cipher, this STPRP, takes a key and a tweak; a tweak is a little bit like a nonce, but we don't actually require it to be non-repeating.
N
So, this is very early days. We are seeking working-group input — most importantly, I think, on the cryptographic construction here. We've chosen an STPRP for this draft, but there are a lot of other options, with different trade-offs in terms of simplicity of description and also how complicated the analysis is, especially under active-attack assumptions.
N
We do have some requested changes to cTLS itself — I sent an email a few weeks ago with a bunch of input based on this work: things that could change in cTLS that would make it clearer in general, or that would make this extension easier — and we do hope to pursue working-group adoption at some point. But I think right now, really, the question is: what would you like to see in a version of this that was ready for adoption?
D
Yes — I was just going to say I don't think this prevents that slipstreaming attack, because, insofar as the server can select what the client sends as plaintext — and the server obviously knows the keystream — it can therefore select what gets sent as ciphertext, which is what you need for that slipstreaming.
D
We had a discussion earlier today in the transport area where this might be a problem for all of QUIC, too. So, basically: because you know the keystream in advance, you can just say, okay, send me this.
D
N
That's a very interesting observation. I think we have been thinking about this principally in the handshake messages, where there's clearly client-provided — or, rather, source-provided — entropy; each side is providing its own entropy there. You're right that that gets harder in the body of the stream, so we'll have to give that some more thought. Thanks for noting that. Ekr.
E
This
is
quite
fancy
and
interesting.
Those
are
independent
observations,
not
one.
So
I
mean
I'm
not
sure
I
think
of
the
not
split
screening.
I
guess
you
know.
Obviously,
if
we're
going
to
expand
the
data,
then
it's
to
remember
about
that.
This
ring,
depending
on
how
this
construction
actually
works.
It
may
be
it.
E
May
it
may
be
quite
inconvenient
to
to
to
construct
the
appropriate,
appropriate
cipher
text,
because
I
recall
you
you're,
probably
depending
on
the
because
the
aad,
because
because
the
authentication
tag
is
random,
authentication
tag
then
becomes
input
to
the
the
prp.
E
Then
it
may
be
the
case
that
you
have
to
essentially
iterate
through
a
large
number
of
plaintext
in
order
to
get
in
order
to
get
advertisers
of
choice,
but
I
wouldn't
be
able
to
guarantee
that
because
I
don't
understand
construction
well
enough,
so
I
guess
one
thing
I
should
a
question
about
that.
I'm
not
sure
I
understand
is:
does
this
mean
that
a
server
that,
if
a
server
supports
multiple
profiles,
it
is
then
required
to
trial
to
correct
in
order
for
which
profile
the
client
is
using.
N
Mostly no: if all of the profiles share the same key, then you can have distinct profiles and still —
E
Right — although, if we take this approach, then the suggestion from Jabber — that the key material should depend on the profile as well — will no longer work.
N
This is actually the thing that worries me more: it's not clear that trial decryption is possible, because there's no MAC specifically on handshake messages.
E
Excellent point, right — and since cTLS is trying to make things smaller, a MAC is unfortunate. Anyway, I think this is pretty interesting; we've been discussing exactly what we needed to get cTLS aligned to this. So I think, I guess — well —
E
The thing I want to see — I would be surprised not to see this in the specification, but, if I may use the remaining time — is at least one concrete proposal for the STPRP, because when I read the specification it was kind of "we just don't define one". I think I'd like to see a definition for one, even if it wasn't great — to be honest, maybe not one I'd want to ship, but maybe.
N
So — I see my co-author in the queue; maybe I'll let Chris chime in on that topic.
S
Oh — my microphone. Go ahead, Stephen — I was just saying you're first in the queue, so go ahead. Okay, so —
P
I would encourage further work on this — I think it's interesting. I really like how many people this will annoy. I would encourage you to maybe think about this as a way of kind of pushing the envelope and breaking things, for a while at least, and, you know, see: if you did all this and succeeded, even ignoring some of the security goals, what would you break? It would be interesting to learn that. And then my last comment is —
P
— I think, if you assume this eventually reaches maturity, maybe the state we want it to be mature in is not to have one fixed way of doing it, but many ways that are malleable even after the standard has been produced in an RFC — because the adversary, in this kind of context, is probably pretty good at changing themselves as well.
P
So maybe think about the eventual outcome not being just a fixed, single, interoperable way of doing it, but more a way that lets you keep changing what you do. And I like it — it's good work. Thank you.
S
Thanks, Stephen. Yeah — so, about the STPRP construction: we were going back and forth about this, because we were wondering whether this is something that the CFRG is supposed to do, or whether we could specify our own construction.
S
There are lots of options in the literature that I think work really well, and all you really need is AES. So I'd be happy to stick a construction in the document, but, yeah, I guess the question is where that should live.
I
Yeah — so, before talking about particular constructions and whatnot, it might be useful to try to identify what the requirements are for the thing that you need. It had been mentioned that you need non-malleability, but not CCA security.
I
S
I think that's true. The very informal idea is that, by tampering with the bits on the wire, an adversary can't predict how it impacts what is deciphered and how the server interprets the handshake. So the idea is that that should cause — you know, you can —
S
— you can probably talk about the probability that it leads to a handshake failure. But you're right, we haven't really worked this out; like Ben said, this is super early.
E
Yeah — so, for the moment, it sounds like the construction's pretty simple, so I would just pick one you like, set it as a reference, and stuff it in an appendix or something. I mean, then you have a complete specification, right? And then I could just, you know, hold it loosely.
N
Yeah, I think we'll do that pretty soon — maybe after the next cTLS iteration. Scott.
T
If you're talking about which STPRP you want to use — well, it's up to you to decide what security properties you need from it, but once you do, that really is what the CFRG is there for. You really should go to them and say: gee, here's what we need — what do you recommend?
N
I agree. I think that, ultimately, we would not really progress to publication without, probably, a CFRG-approved underlying construction.
I
All right — thanks, Ben. Paul, you're up, if you want to request to share slides.
U
Hi, everyone. Can you see my desktop here?
U
Okay — thanks, Chris. So, today I'm going to be talking about zero-knowledge proofs meeting TLS. Well — sorry — first, I'd like to say that this is based on joint work with Arasu Arun, Joe Bonneau, Zachary DeStefano, and Michael Walfish. And I'd like to start by just saying that I think TLS is pretty awesome.
U
I'm a newcomer to this working group, but I've always been a big fan of the work that everyone here does, and I'd like to congratulate everyone here on getting a bunch of awesome crypto deployed on the internet — most notably TLS 1.3, which is an amazing protocol. This, I think, is a great accomplishment for the working group: to get this pushed into more and more places.
U
However,
I'd
like
to
talk
about
the
challenges
that
the
increasing
deployment
of
tls
is
causing
and
these
challenges
are
so-called
visibility.
Challenges
for
for
network
operators,
because
network
operators
who
are
used
to
enforcing
network
policies
by
scanning
plaintext
traffic
directly.
U
So
an
example
of
these
policies
would
be
like
dns
filtering
data,
loss
prevention
and
enterprise
networks,
malware
scanning,
and
the
list
goes
on
and
on
there's
a
ton
of
these
security
network
security
policies.
So
obviously,
though,
by
design
tls
prevents
the
network
from
scanning
the
traffic
and
enforcing
these
policies.
So
this
this
this
causes
a
challenge
for
for
network
operators,
and
so
these
visibility
challenges
are
are
are
pretty
old
and
even
in
the
in
the
development
of
pls
1.3.
U
— as far back as 2016, there was this famous — well, maybe some people consider it infamous — thread on the TLS mailing list, in which a representative from the financial services industry came and requested that TLS 1.3 retain the RSA key-transport cipher suites, because they make the kind of monitoring and policy enforcement that they need to do on their own traffic easier.
U
And this challenge still persists today; there are lots of companies that have sprung up to address these challenges with TLS 1.3. I googled last week and found this company, ExtraHop, that talks about security-operations visibility for TLS 1.3, and, interestingly, NIST now has some kind of working group that is specifically focused on finding solutions to these kinds of visibility problems with TLS 1.3.
U
Probably the most mature proposal is multi-context TLS, which changes the TLS record layer to allow a lot of different contexts to have different keys — some of which can be shared with the network, or a middlebox, to allow them to enforce policies in a more fine-grained way, but directly on plaintext. Other proposals include Enterprise TLS, or ETS, which is a protocol that's kind of a variant of TLS —
U
— that retains some properties of older TLS versions that make monitoring easier. And there are lots of proposals based on trusted hardware, like SGX, on the client or in the network. But all of these solutions basically focus on keeping scanning, and other kinds of network functionality, in the network, by weakening or modifying TLS.
U
So, today I'm going to talk to you about zero-knowledge proofs, because I think zero-knowledge proofs are a really interesting set of tools for addressing these visibility challenges in a different way — importantly, one that doesn't require weakening the security guarantees of TLS —
U
— nor does it require modifying the TLS protocol itself. This discussion will mostly be centered on a recent paper that my co-authors and I wrote, called Zero-Knowledge Middleboxes, and I'll focus on a key contribution of this paper, which is an efficient zero-knowledge protocol for decrypting TLS 1.3 records. Then I'll speak in general terms about future directions —
U
— that I see for zero-knowledge proofs and TLS, and suggest some other open problems from our work. And then, finally, I'll finish by soliciting some feedback and criticism — and, hopefully, maybe some suggestions for interesting applications of zero-knowledge proofs to TLS.
U
So, first I'll talk about this recent work — but, before that, just a very quick primer on zero-knowledge proofs. A zero-knowledge proof is a cryptographic primitive focused on allowing a prover to convince a verifier that a public statement — like, for example, an arithmetic or a boolean circuit — is true. In the circuit sense, this would be that the circuit has a satisfying assignment.
U
So, the prover wants to convince the verifier that the statement is true without revealing why it knows that it's true. A zero-knowledge proof is a string — a bit string — that the prover generates and sends to the verifier, and the verifier can check this bit string along with the public statement; by checking it, the verifier can gain assurance that the prover knows why this public statement is true — but not learn why.
U
So, the zero-knowledge proof guarantees to the verifier that the statement is true, but prevents the verifier from learning why the prover knows it's true. For example, in the circuit case, the prover's knowledge could be something like a satisfying assignment for the circuit. Zero-knowledge middleboxes use zero-knowledge proofs to allow network operators to enforce security policies on traffic without decrypting it.
U
So, this addresses a lot of these visibility challenges, especially things like filtering and scanning of traffic. The way a zero-knowledge middlebox works is as follows: when a client first joins the network, the middlebox gives a description of the security policy to the client.
U
Then, when the client wants to make an outbound connection to a server, it can just perform a normal handshake and establish a session key with that server. When the client wants to send some traffic, it enforces the security policy locally on its own traffic, and additionally creates a zero-knowledge proof that it enforced this policy correctly. So it sends the encrypted traffic as well as a zero-knowledge proof that the underlying plaintext is compliant with the policy.
U
And finally, the middlebox can verify this proof and block the traffic if proof verification fails. Just to make this setting a little more concrete, I'll explain a use case from the paper that my co-authors and I wrote. The use case is filtering DNS queries that are sent using either DNS over TLS or DNS over HTTPS, which are two increasingly deployed ways to encrypt DNS traffic.
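The policy the client proves compliance with can be thought of as a simple predicate over the plaintext query. Here is a plain-Python stand-in for a blocklist check (the real system expresses this as a circuit and proves it in zero knowledge; the function name and structure here are illustrative, not taken from the paper):

```python
def policy_allows(query_name: str, blocklist: set[str]) -> bool:
    """Return True if the DNS query name, and every parent domain of it,
    is absent from the blocklist."""
    name = query_name.rstrip(".").lower()
    labels = name.split(".")
    # For "a.b.example.com" check "a.b.example.com", "b.example.com",
    # "example.com", and "com".
    suffixes = (".".join(labels[i:]) for i in range(len(labels)))
    return all(suffix not in blocklist for suffix in suffixes)
```

In the zero-knowledge middlebox setting, the client would prove in zero knowledge that this predicate holds for the decrypted query, without revealing the query itself to the middlebox.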
U
So how do we build these zero-knowledge proofs? In the paper we describe a three-step pipeline; in my talk I really don't have time to go into it in too much detail. The first step is basically a zero-knowledge circuit that can decrypt TLS 1.3 records in zero knowledge, and then in the paper we describe how these circuits can be composed with other circuits that are responsible for checking policy.
U
By composing these circuits, you can do end-to-end policy enforcement on encrypted traffic. I won't talk about the latter steps of the pipeline, but I will talk about the zero-knowledge circuit for decrypting TLS connections.
U
So the question is: how do we decrypt TLS 1.3 records in a zero-knowledge proof? The most obvious way to tackle this is to just re-run the record-layer decryption in the zero-knowledge proof: basically, express record-layer decryption as a circuit and decrypt the encrypted traffic inside the circuit.
U
The problem with doing this is that TLS 1.3 records are not binding commitments to the underlying plaintext, because TLS 1.3 really only supports either GCM or ChaCha20-Poly1305, and both of these AEADs have the property that you can craft a single ciphertext that has multiple possible decryptions.
U
This means that a malicious client could lie about what it's putting in the proof: basically craft a ciphertext that has multiple decryptions, then issue a zero-knowledge proof about one of those decryptions but actually send a different one to the server. So this problem of lying in the proof about the key that is used is one we need to tackle.
U
So the question really boils down to: how do we build an efficient zero-knowledge key consistency check for TLS 1.3? Because the TLS 1.3 key schedule and the handshake are quite complicated, I could get into the fine details here, and I'm sure everybody here would understand them, but it would take a lot of time that I really don't have.
U
So I'll just skip most of the details here, but I first would like to thank the Dowling et al. TLS 1.3 analysis paper, the cryptographic analysis of the TLS 1.3 handshake, for this incredible diagram of the TLS key schedule, which I think is maybe the best diagram I've ever seen in a crypto paper. But anyway, we don't really have time to get into the details here.
U
What we observe is that the server Finished value, basically the MAC the server computes over the transcript, acts as a commitment to the intermediate steps of this key derivation process. By checking it in the circuit, we can shortcut most of the expensive operations of the client's key derivation and get a key consistency check whose cost, and the size of the circuit, doesn't depend on the size of the TLS transcript.
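The Finished value being relied on here is the one defined in RFC 8446: verify_data = HMAC(finished_key, Transcript-Hash), with finished_key derived from the handshake traffic secret via HKDF-Expand-Label. A minimal sketch for SHA-256 cipher suites (simplified; see RFC 8446 Sections 4.4.4 and 7.1 for the full derivation):

```python
import hashlib
import hmac
import struct

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 HKDF-Expand with SHA-256.
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def hkdf_expand_label(secret: bytes, label: bytes,
                      context: bytes, length: int) -> bytes:
    # RFC 8446 Section 7.1: HkdfLabel = length || "tls13 " + label || context.
    full_label = b"tls13 " + label
    info = (struct.pack("!H", length)
            + bytes([len(full_label)]) + full_label
            + bytes([len(context)]) + context)
    return hkdf_expand(secret, info, length)

def finished_verify_data(handshake_traffic_secret: bytes,
                         transcript_hash: bytes) -> bytes:
    # RFC 8446 Section 4.4.4: finished_key =
    #   HKDF-Expand-Label(BaseKey, "finished", "", Hash.length)
    finished_key = hkdf_expand_label(handshake_traffic_secret,
                                     b"finished", b"", 32)
    return hmac.new(finished_key, transcript_hash, hashlib.sha256).digest()
```

Because verify_data deterministically binds the handshake traffic secret to the transcript, checking it inside the circuit lets the proof commit to the derived keys without re-running the whole key schedule in zero knowledge.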
U
I'll just talk very briefly about the results of our prototype implementation. We used the xJsnark development environment and the Groth16 zero-knowledge proof system. The cost of this key consistency check is about 15 seconds of proof generation. In the paper we explain that this consistency check really only has to be done once per TLS session, which ameliorates the high cost of doing this somewhat, but it still is quite a high cost.
U
This is pretty impractical at this point. However, once you do this key consistency check, then for the DNS-over-TLS filtering use case I described before, the per-packet cost is only about three seconds per DNS query.
U
That's if you want to prove that your DNS query is for a domain not on a blocklist. And finally, the proof verification cost here is the best of the three: we inherit the fast proof verification of Groth16, so proof verification for the middlebox only takes about five milliseconds.
U
So the TL;DR here is that these techniques are not practical yet, but they're really, really close, and there are a lot of interesting optimizations that are possible here. For example, in ongoing work we're using a different zero-knowledge proof system that reduces the client cost here by about 70x, but there are some caveats which I can get into if people are curious. If you're interested in more of these numbers or more context, I encourage you to see the paper online.
U
So finally, I'll just conclude with some general thoughts about zero-knowledge proofs and TLS. Whatever people think of this particular research that my co-authors and I have done, I hope you'll agree that zero-knowledge proofs are a really interesting tool for lots of practical network security and privacy problems.
U
Zero-knowledge proofs are in a really interesting place right now as a technology: they're really on the cusp of becoming practical enough to use in real applications. One other interesting application specifically to TLS: in other network security settings it's important to see SNI values in plaintext, so one possible application could be doing zero-knowledge proofs about the contents of an Encrypted Client Hello in a TLS handshake. There are lots of other problems here which, for time reasons, I won't go over.
U
But there are a lot of other parts of TLS that we didn't really approach in this work on zero-knowledge proofs, so figuring out how to do things like client authentication in a zero-knowledge proof would be pretty interesting. And another very outlandish suggestion, which I'll just suggest, and I expect people to push back on this: it would be interesting to me to imagine zero-knowledge friendliness as a design consideration for future versions of TLS, and indeed for other future secure channel protocols.
N
Hi, Ben Schwartz. So you mentioned the idea.
N
So you mentioned a zero-knowledge-proof-friendly version of TLS. Can you imagine a ZKP-hostile version of TLS? What would that look like?
U
Well, the glib answer to your question is that TLS 1.3 is already the ZKP-hostile version of TLS. That sounds like a criticism, and honestly it's not a criticism: TLS 1.3 is extremely well designed for the use cases that the designers envisioned, but some of the reasons why TLS 1.3 is well designed are also reasons why it's quite hostile to zero knowledge.
U
So, for example, the extreme precision and detail of the key schedule is something that makes it quite hard to do efficient zero-knowledge proofs about, because in the key schedule not only do you need to do a lot of HKDF operations, you're using HKDF on a primitive like HMAC-SHA-256, which is not zero-knowledge friendly: SHA-256 is extremely expensive to evaluate in a zero-knowledge proof.
U
So the key schedule having lots of context hashes and label inputs and everything, this kind of hierarchy of keys, is quite hard to do efficiently in a zero-knowledge proof.
T
U
Yeah, great question. So this depends a lot on the specific zero-knowledge proof back end that you use. For our system the proofs are quite short: Groth16, which is the proof system that we use, is really designed to produce extremely succinct, small proofs.
U
So the proofs are only going to be about 128 bytes in our prototype. The trade-off here is that with Groth16 the work of proof generation is relatively high; this is kind of a trade-off in all existing zero-knowledge proof systems.
U
There's a trade-off between the succinctness of the proof and the work the prover has to do to produce it, so we're exploring different trade-offs here. For example, if you were able to accept a one-kilobyte proof, you could get a much faster prover, but currently our prototype is 128 bytes per proof.
P
I just had a comment: I think the proposition that visibility is a reasonable thing to want for TLS is contested. I wouldn't accept it at all myself, and I would encourage you to maybe do more research on trying to make this harder rather than easier, and slower rather than faster.
P
In other words, it would be good research to try and figure out how to improve TLS to make these methods work less well, in my opinion, and it seems like as good a research goal, perhaps not as popular with certain people.
U
P
But my question is: you said that you have like a 15-second overhead and maybe a 70x speedup. How realistic is the 70x speedup, and in what time frame? Can you give a feel for that?
U
So the 70x speedup relies on a different set of trade-offs for the zero-knowledge proof system. But in terms of time frame, do you mean like when could this be deployable? Yeah, when might it be.
U
Well, the back end, the zero-knowledge back end that we're using, exists today; we're using code that was written by a researcher at Northwestern, Xiao Wang. The caveat with the 70x speedup is that there are other trade-offs: the middlebox's verification cost is also quite a bit higher with the 70x speedup.
U
So 200 milliseconds is quite a bit faster than 15 seconds, of course, but doing this on every network packet is probably still impractical. There's a lot of ongoing research in these zero-knowledge proof systems, and I think in the next five years people will develop techniques that achieve the best of both worlds, where you have more succinct proofs that are also faster to generate.
P
U
Well, thank you, I appreciate that. Although, if I can, I would just like to push back a little bit on the first thing you said about visibility. I think it's very natural to oppose efforts toward this kind of network visibility, but if you are taking a very pro-privacy standpoint, I think it's important to remember that the starting point here is not TLS 1.3's security guarantees.
U
So to me, the reason why I think this is an interesting research question, and admittedly a very challenging one, is that by doing this kind of visibility research, I think we can improve the privacy of a lot of networks and improve the privacy of end users who today don't get to encrypt their communications, because the network operators can't have this visibility into the traffic.
I
Okay, thank you, Paul. ekr, since we're one minute over, is it quick?
E
Yes, thank you. Paul, interesting talk; I'm looking forward to waiting 15 seconds to download. Thanks. So can you say what would make these more friendly to this? Is it just having committing ciphers, or something more fancy?
U
Yeah, that's a good question. So having committing ciphers would help; however, committing ciphers actually aren't the best way to solve this problem. They are the way that people have approached this in the past, and the received wisdom in the crypto community about doing verifiable decryption is that as long as your encryption is committing, then you can just verify the commitment and everything's fine.
U
However, in this paper, one of the interesting things we found is that if your sessions are long-lived, it's actually much more efficient to do a one-time key consistency check rather than having to check the commitment in every ciphertext every time you do a proof. So one thing that TLS could do is send, as part of the handshake, a commitment to the session keys, even just as an extension.
U
If you sent this and had both parties check these commitments, then this whole key consistency check, this 15-second cost that I described, would go away pretty much immediately, because to have a correctly set up TLS connection you would need to check a commitment to the session key, and if the middlebox can extract this, then it can just check subsequent proofs against the commitment that is sent by the client.
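One possible shape of the key commitment being suggested here, as a hedged sketch: a salted hash over the session key, carried in the handshake, that the middlebox can later check proofs against. This is not a real TLS extension; the names and the construction are hypothetical.

```python
import hashlib
import hmac
import secrets

def commit_to_key(session_key: bytes) -> tuple[bytes, bytes]:
    """Produce (commitment, salt) for a session key.
    The random salt keeps the commitment hiding even if the
    committed value were guessable."""
    salt = secrets.token_bytes(32)
    commitment = hashlib.sha256(salt + session_key).digest()
    return commitment, salt

def check_commitment(commitment: bytes, salt: bytes,
                     session_key: bytes) -> bool:
    # Constant-time comparison of the recomputed commitment.
    expected = hashlib.sha256(salt + session_key).digest()
    return hmac.compare_digest(commitment, expected)
```

A middlebox that extracts such a commitment from the handshake could require every subsequent zero-knowledge proof to be stated relative to the committed key, instead of re-proving key consistency from the key schedule.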
U
Does that make sense? Not for the purposes. Thank you.
I
All right, so we're three minutes over. Thank you, Paul, for talking about this interesting new take on TLS and privacy and security.
I
That's our fault as chairs for not managing the time better. As discussed in the chat, the paper was shared on the list, so if you're curious, please check it out; it would be interesting to see how things can be improved, or not, one way or the other. Unless anyone else has any burning questions, I think we can wrap up here.