From YouTube: IETF108-TLS-20200728-1410
Description
TLS meeting session at IETF 108
2020/07/28 14:10
https://datatracker.ietf.org/meeting/108/proceedings/
A: Okay, let's get started. This is the TLS session at IETF 108, a virtual meeting. A couple of things here: we have just a few suggestions. Keep your video off; when you're presenting, if you want to put your video on, you can request that and the chairs can allow it. Please mute. There's some information about the session, but I think you probably already know that, since you're here. Next slide. So, Note Well: the Note Well rules still apply, even though this is a virtual session.
A: Okay, let's move on. We are at TLS; if you're in the wrong place, you might want to join the correct meeting. So, we have an agenda. I think we have all of the administrivia taken care of. We have a few status updates on drafts, then subcerts and flags.
A: Okay — does anybody have any other agenda items?
B: Hey, I had a couple of questions coming out of the AD review that might benefit from time, but I also didn't check how tight we are for time.
A: All right then, let's get started. This is the list of the active drafts. We still have quite a full load of drafts within the working group, and we're looking to adopt the return routability draft — the call for adoption is going on. A number of drafts are in the publication-requested state that the AD is working through, and some are in the RFC Editor queue, getting close to publication.
D: We should hear from the AD about the status of the publication-requested documents.
B: So: for oldversions-deprecate, my review went out a couple of days ago. DTLS connection ID is the next thing on my slate, and then in the queue are ticket requests, the MD5/SHA-1 deprecation, the external PSK one, DTLS 1.3, and exported authenticators, in that order. I'm happy to swap the order around if there's a preferred different order.
B: My current plan is, after I do the connection ID, to hit a couple of the DOTS documents. But after that I should be making pretty fast progress through the TLS stuff, because most of these look pretty short. Again, I'm happy to take feedback if people want to swap that order around; I'm basically just doing FIFO right now.
C: I mean, I know that a couple of people have asked for DTLS 1.3 to get out the door, but I guess I'm okay with your order.
F: So, we went through two working-group last calls. This was mostly my issue for not having submitted draft-08 in time, but it actually was successful in producing some improvements. The first working-group last call was on draft-07; this drew quite a lot of comments — and actually there's a typo here — but it was on draft-07, which was missing some updates we had incorporated from the list into draft-08.
F: So we took the updates from draft-07 and the updates from draft-08, created a draft-09, and ran working-group last call number two on the diff between draft-07 and draft-09. That's the typo here: it's draft-09. There haven't been any wire changes since draft-06, but there were some minor comments to be addressed, so a new version will be submitted pretty soon. One other major comment was for test vectors, and we're currently working on a test-vector PR on GitHub. Next slide, please.
F: All right, running code: this has progressed significantly. Delegated-credential-enabled certificates are available from DigiCert; several of them have been issued and are currently in use. If you upload them to Cloudflare, they will automatically be used with delegated credentials and traffic will be served. The client-side piece in Firefox has landed in the Extended Support Release 78, and there are plans to enable it by default.
F: So I'm told — and someone from the Firefox side can confirm or deny what I heard is in the pipeline — but you can enable it at any time in the settings. Next slide; and last slide: formal analysis. There was a request for this to have some extra assurance from the formal-methods side of things, and Jonathan Hoyland has posted the results of his Tamarin model on the list. They're looking pretty good, in a slightly stronger model than the default one.
F: So definitely take a look on the list at Jonathan Hoyland's models. I believe there's going to be a more concise write-up — not just Tamarin code — coming relatively soon. Next slide; and that's it, we're done with this. Hopefully at this point we'll have a new version, draft-10, with the final comments addressed, and we can move on to the next stage.
A: All right, great. We'll look forward to the next revision. Thanks.
G: Yeah, hi. In case you're wondering: yes, the wall is really pink. Okay, next slide.
G: So, just a quick summary of what this draft does: it's an extension that carries a bit string of flags. Each flag would have to be defined separately in the IANA registry. Each flag indicates that some feature is enabled, some attribute is supported — whatever has a simple yes/no answer to a yes/no question — and the extension is as short as the highest-numbered flag will allow.
G: We have an IANA registry policy that looks to make it so that most such extensions will only carry very low-numbered flags: the really common ones will have very low numbers, and the really rare ones will have higher numbers. A flag appears in the ClientHello on the client side, and the server responds either in ServerHello or in EncryptedExtensions, depending on whether you want to hide the response or not. Each document specifying such an extension — such a flag — would have to say whether it goes in the ServerHello or the EncryptedExtensions.
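To make the "as short as the highest-numbered flag will allow" property concrete, here is a minimal sketch of such a bit-string encoding. This is an illustration only, not the draft's wire format: the helper names are mine, and the draft may specify a different bit order within each byte.

```python
# Hypothetical sketch of a flags bit string: one bit per flag number,
# encoded in the fewest bytes the highest flag number requires.
# (Bit order within a byte is chosen for simplicity here and may not
# match the actual tls-flags wire encoding.)

def encode_flags(flags):
    """Encode a set of flag numbers into the shortest possible byte string."""
    if not flags:
        return b""
    nbytes = max(flags) // 8 + 1  # length driven by the highest-numbered flag
    out = bytearray(nbytes)
    for f in flags:
        out[f // 8] |= 1 << (f % 8)  # low-numbered flags land in the first bytes
    return bytes(out)

def decode_flags(data):
    """Recover the set of flag numbers from an encoded bit string."""
    return {i * 8 + b
            for i, byte in enumerate(data)
            for b in range(8)
            if byte & (1 << b)}
```

This shows why the registry policy matters: a handshake using only low-numbered (common) flags stays one or two bytes long, while a single high-numbered flag forces a proportionally longer string.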
G: So that was just a brief overview of what the draft does. Next slide. Since we last talked about it — which I think was at IETF 106, or maybe even earlier — we've eliminated the option for servers to respond unsolicited. So if the client didn't present flag number X, then the server does not present flag number X; and if the client doesn't support the feature, then it doesn't matter whether the server does — it's not supported. We had all sorts of ideas.
G: Maybe there could be a legitimate use for this, but nobody could come up with a concrete example, so we removed it. We clarified that this extension supports up to 2040 flags, and this should be plenty — if we get to an IANA registry with over 2,000 flags, then we're doing something very wrong. We've also added guidance for the IANA expert, and some clarification, thanks to Martin, on the error handling. The bottom line is, I think it's ready, but we need something controversial — so that's on the next slide.
G: Okay, so Hannes made this suggestion, which is getting a lot of echo: that we wouldn't want to register this extension with the registry just empty. So why not register post-handshake client authentication? That's a flag that is already in a TLS 1.3-based document. The good thing is that then the registry doesn't start out empty; the bad thing is that we would have two ways of saying the same thing. So I'm not sure that's such a great idea, and you'd always have to handle both.
G: I think we had the conclusion that it wasn't really — that it would be really good to use it in Certificate, and the ECH case is really terrible.
D: So yeah, I think this was pretty close to done. I had a sort of similar comment to Martin's about Certificate and NST, actually. So I guess my general point would be that this should be permissible in both NST and Certificate, and in particular NST seems like it has an immediate use case, which is a vector for signaling cross-domain resumption-across-SNI handshakes — which, after all, is just a single bit.
D
So
that
also
seems
like
would
solve
the
problem
in
this
slide,
which
is,
we
would
then
then
then
have
an
empty
registry,
not
an
empty
registry
for
law.
G: Okay, now we've seen something really strange about the Meetecho, because it looks like I must not be sending audio at all, but you're still hearing me. I can mostly hear you.
B: Okay, yeah. I guess I was in a similar vein as ekr, but just to point out: we're defining this mechanism now, and if we don't say that you can put it in NST or Certificate or ServerHello, then we're locking ourselves out of that forever — we'd have to define a new flags-2.0 extension if we did come up with a feature used for, say, ServerHello. That said, I'm definitely amenable to concerns that it's hard to implement for ServerHello.
G: Well, if the client doesn't send the flag, then it's not going to get the response in the ServerHello, so it won't be surprised by having the extension in the ServerHello. It still works, you know, to define from now on that it could be in the ServerHello, and just say which flags are not supposed to be there.
A: Okay, so it sounds like we may have a little bit of discussion to resolve on the list with respect to where this extension can appear, but hopefully that will go pretty quickly and we can start moving this forward.
L: Okay — can you hear me? So, I want to talk about the extended key schedule draft that I proposed, I think at IETF 106. Next slide, please.
L: Okay. Basically, the idea is very similar to exporter keys. TLS — TLS 1.3 in particular — allows you to produce a key from a handshake that you can use to prove, in other protocols, that you are one of the participants in a particular TLS handshake. Importer keys are just the inverse: they allow you to prove, within a TLS handshake, that you are a participant in a different protocol.
L: So, potential use cases. There's one active draft, bootstrapped TLS authentication, which basically says: if you have an IoT device which doesn't necessarily have access to root service certificates, you could use some sort of QR code sellotaped to the box to prove ownership of the device. Knowledge of the key that was on the box would be injected into the TLS handshake, and you would have some sort of bootstrapping authentication mechanism.
L: Another case: a two-message authenticated key agreement — an authenticated key exchange protocol — could be run, and the resultant key would be injected into the TLS handshake. That would let us experiment with post-quantum cipher suites without having to worry about which post-quantum cipher suite is chosen by NIST.
L: It also lets you do complicated authentication properties. In particular, ECH — or ESNI, or ECHO, or whatever we're calling it — could use importer keys to bind the inner and outer handshake messages together, and that would be another way of approaching something that was actually quite difficult. Next slide, please.
L: So why do I want to propose this, when changing the key schedule is always a bit risky, or a bit scary? One reason is that if you have a generic interface, you only have to do one security analysis, and then it can be used by anybody without having to redo the analysis.
L: And this is not necessarily the only way of doing things, but we think this is the most flexible.
L: The way it would actually work is that every user of the interface is given a type, which is basically just an integer, and injections occur in strictly ascending type order. That means there is a strict and obvious way of inputting all of this information. For other structures, you could look at the now-expired draft-stebila — they have a few variants — or NKDF, which was proposed for MLS, and is basically a secure way of XORing all the inputs together. One nice thing about that is that if the inputs aren't all available in strict order, you could inject them in any order, as long as they're all available before the handshake progresses. Next slide, please. I'd like to know if there's any interest in the working group taking this on as a working-group item — and are there any questions?
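The "strictly ascending type order" rule described above can be sketched with ordinary HKDF primitives. This is a toy illustration, not the draft's actual construction: the function names, the 2-byte type encoding, and the choice to bind the type into each extraction are my assumptions.

```python
# Minimal sketch (assumed names/encoding, not the draft's wire format):
# fold extra secrets into a running secret via chained HKDF-Extract calls,
# enforcing the strictly-ascending-type rule described in the transcript.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """RFC 5869 HKDF-Extract with SHA-256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def inject_secrets(base_secret: bytes, injections) -> bytes:
    """Chain each (type, secret) pair into the running secret.

    Types are integers; injections must arrive in strictly ascending
    type order, which gives every caller one unambiguous ordering.
    """
    secret = base_secret
    last_type = -1
    for inj_type, inj_secret in injections:
        if inj_type <= last_type:
            raise ValueError("injections must be in strictly ascending type order")
        # Bind the type into the extraction so that the same secrets
        # injected under different types yield different results.
        secret = hkdf_extract(secret, inj_type.to_bytes(2, "big") + inj_secret)
        last_type = inj_type
    return secret
```

Because each extract consumes the previous output as its salt, reordering or swapping any injected secret changes every key derived afterwards, which is the property the generic interface relies on.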
D: I understand it's important to avoid ambiguity between the multiple-injection case and the current case — which we foolishly did not do by having a structured thing when we injected them on the left — but it seems like there would be some way to persuade ourselves that we could just have, you know, the left part of the ladder be either ECDHE, or ECDHE plus a pile of stuff, rather than have to add a new extract and derive phase.
D: So I would encourage that we actually try to look at that, because having this sort of additional thing where these are not peers is really kind of goofy.
L: Yeah, I'm not in any way wedded to this particular structure. I think it's important that we have a generic way of doing this, but yeah — as Martin said in chat, one of the solutions in draft-stebila is also very nice, and if we think that's better, sure. But I do think it's important that we do this in a standardized way. Agreed.
D: Sorry, ekr here. I mean, it seems there are two questions, right? One is how we modify the key schedule in TLS to accept this, and the second is how we structure — how we merge, how we glue — the inputs together. I think they're orthogonal. I have no strong opinion on the XOR thing: it seems like it ought to be okay, but it also seems just a little harder to reason about. Anyway, I think something along these lines would be valuable.
O: Am I supposed to have the floor now? Sure. So yeah — this is good, the right slide I have up here. I notice you're chaining this stuff together with Derive-Secret. Is there any reason to choose that, as opposed to just feeding the HKDF-Extracts one into another?
P: I just wanted to quickly comment on the NKDF thing. I believe Hugo looked at this in the context of MLS, and his suggestion was to use concatenation in the extract rather than XOR, which seems to line up with the earlier design from Douglas and others. So I would advocate for that, rather than the XOR plus a 40-page proof.
L: Yeah, the nice thing about the XOR is that it means you don't have to wait for everything to be available — but whether that's necessarily a pro...
A: Okay, is there anybody else? If not, what I'd like to do is take a hum to see if there's interest in making this a working-group item. I don't know how many people have done this before, but there's an icon next to the Jabber chat — I'm going to start a test hum.
A: Yeah, we'll do two, but I'm just starting a test hum right now so everybody can find it, because it takes a little bit of time to navigate around the interface if you're not used to it. So it's started; you can navigate there and hum loud or soft, and we'll see what the result is, and then we'll go on to the actual question. This one is just a test.
H: Yeah, I want to understand the relationship between this and the draft-stebila work. We talked about the PQ integration stuff in the past, and it seemed to me like there was some reasonable amount of support for all of that, but it sort of stalled on some unspecified rumblings about what was in that draft.
L: So, basically, the design of this is: I looked at draft-stebila, took one of the designs that I liked, and put it in a draft that wasn't about PQ — just made it completely generic — because that means it sort of circumvents the whole "NIST isn't finished yet" discussion, since this isn't about the NIST competition. But yes, it is supposed to supersede draft-stebila; it is totally taken from draft-stebila.
A: Okay, so we'll have to take this to the list, obviously. Thanks, Jonathan. Who do we have up next?
N: All right. I'm Victor Vasiliev, and I'm presenting two drafts today. The first draft is resumption across SNI. So, next slide: the resumption-across-SNI draft. The basic idea is:
N: The client MUST only resume if the SNI value matches a value in the original certificate, and it SHOULD only resume if it matches the one used in the original session. That is, the MUST-level requirement is that the name has to be in the certificate, but normally you would only resume if it was the SNI used on the original connection — because otherwise you have no guarantee, even though the server name is on that cert, that you can resume, and if you choose to resume, you would use up a ticket. Next slide. The notable part of that requirement is that you can waive the SHOULD-level recommendation by providing an indication that the client can in fact reuse the ticket it just got with any SNI that would match against the cert that it has. Next slide.
N: So the draft in question proposes such an indication, and that indication is just a completely empty extension in NewSessionTicket.
N: You put in that extension, and that tells the client that if it has a certificate — for instance, if it's connected to a.example.com and its certificate is valid for *.example.com — it may, and probably should, resume when connecting to b.example.com. This increases the number of times you can use 0-RTT.
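The client-side decision described above can be sketched as a small predicate. This is an illustration of the MUST/SHOULD split only: the function and parameter names are mine, and `fnmatch` is a loose stand-in for real certificate name matching (it is more permissive than X.509 wildcard rules, e.g. `*` also matches dots).

```python
# Sketch (assumed names) of the resumption decision described above:
# MUST: the target name has to match the original certificate.
# SHOULD: resume only for the original SNI, unless the server opted in
#         via the empty cross-SNI extension on the NewSessionTicket.
import fnmatch

def may_resume(ticket_sni: str, ticket_cert_names: list,
               target_sni: str, cross_sni_ok: bool) -> bool:
    cert_match = any(fnmatch.fnmatch(target_sni, name)
                     for name in ticket_cert_names)
    if not cert_match:
        return False          # violates the MUST-level requirement
    if target_sni == ticket_sni:
        return True           # ordinary same-SNI resumption
    return cross_sni_ok       # SHOULD waived only if the server said so
```

So with a ticket issued on a.example.com under a *.example.com certificate, the client resumes to b.example.com only when the server set the extension, and never to a name outside the certificate.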
N: This is, in general, not very helpful for a lot of setups, because in a lot of setups this will already be solved by HTTP connection pooling. However, there are setups in which pooling is impossible, because the host names resolve to different IP addresses — so, in situations like that...
N: However, with TLS 1.3 we decided to postpone this question and move it into a separate draft — and this is that draft. We've discussed this topic in the past, and there was a group of researchers who also proposed this at the IETF in Prague back in 2019. They actually did some research and concluded that this is, in general, very helpful for performance: they ran simulations of how it would work in practice and concluded that it would decrease latency for a lot of big websites.
A: All right — hold on; I have too many people here in the queue. Okay, then — you go.
K: Hi, this is Ben Schwartz. I support this; I think we should have this. I do wonder, though: have you thought at all about the potential case where there's one server that is capable of responding for the entire certificate, but there are other servers that are not? In other words, it seems like there's a question about whether it is safe for a server to do this based solely on its own configuration, or whether it essentially needs to know about the configuration of all servers in the world that have this certificate — whether they can actually respond for any of the names in the certificate.
Q: Okay — sorry, Ian Swett here. I'm certainly supportive of this; I have a variety of use cases that this would be incredibly useful for, including current use cases. I did not anticipate that the handshake would fail, however — that's a little concerning. And I do think it's worth documenting some of the cases Ben laid out about making sure you're authoritative for all servers.
Q: Okay, thanks! That's both what I would expect and also much more reassuring in case someone accidentally misconfigures.
N: I don't believe this alters behavior for DNS lookups; that is to say, this does not bypass DNS. You still have to do a DNS lookup for b.example.com if you have a session ticket for a.example.com, because you are in fact just establishing a new connection — and this probably actually happens after you did the DNS lookup.
N: It's like you're looking into your session-ticket hash, trying to find whether you have a ticket, and you find that you have a ticket that was originally issued for a.example.com but is actually usable for *.example.com — and that's where the behavior changes. It doesn't really change behavior otherwise; it doesn't introduce meaningful changes to how connections are routed, unlike, say, the ORIGIN frame.
A: Okay, the hum came out as a forte. I'd like to take the opposite hum and see if there's anybody who thinks we should not take it on as a working-group item.
A: Okay, it seems there are maybe a few dissenters in the room — so if you'd like to, comment on that.
A: Okay, so let's move to the next presentation.
N: We had this in the QUIC working group, where we — I'm sorry. So the main problem — the problem with HTTP/2 and the way HTTP/2 interacts with TLS — is that, in theory, HTTP/2 is a protocol in which the server speaks first: the server can speak first by sending its SETTINGS frame, and then the client would send a SETTINGS frame and send its requests.
N: However, this is not something that we are really doing currently in practice. We tried to find whether there are servers which actually do that in practice, and there are very few that actually do. So there are two problems here.
N: One: if we start to actually deploy this, it's unclear whether there will be clients that are intolerant of servers sending their SETTINGS at half-RTT. The second problem is that, since we are currently in a situation where server SETTINGS are not sent at half-RTT and most servers send them together with their first responses, you can't deploy any extension where you would need to know the value of the server SETTINGS before you send your first request — because by the time you send your first request, you do not know the value of the server SETTINGS, and if you attempt to wait on that value — which in theory is reasonable, because SETTINGS can be sent at half-RTT — in practice you don't know whether they will be sent at half-RTT or only after the server receives the client Finished.
N: Next slide. So ALPS is an extension that solves this problem by moving the settings from being application data at the very beginning of the connection to being part of the TLS handshake. There are three main advantages that I see here. The first advantage is that you can design settings for your application protocol — h2, h3, or any other protocol — with the assumption that whenever you start reading or writing, your settings are always there.
N: You're never in a situation where you have to design your protocol with the assumption that you always need reasonable settings defaults, or where, if you wait for settings, you block for an RTT. This solves that problem, and it would allow a lot of HTTP/2 extensions that are either proposed or already standardized — WebSocket over HTTP/2, Client Hints reliability, the extension for opting out of header compression, etc. — to work on the very first request.
N: Those are all extensions which either require or benefit from being usable on the very first request. And there is an additional problem, which is less a problem with the wire protocol and more a problem with the API — with the interaction between TLS and application protocols. Currently, if you're doing 0-RTT...
N: ...there are complicated ways in which you have to organize retaining your settings, because when you send 0-RTT data you have to have some idea of what the server's settings are, and since it's 0-RTT, you have to retain them with the ticket. So you have to add an extra API to put data into the ticket, and it's all very messy. ALPS solves this problem automatically by providing a standardized approach. Next slide.
N: So how does ALPS work? ALPS copies the model of the HTTP SETTINGS frame: both client and server send settings, and each is just an opaque blob. The key property of the blob is that ALPS is considered declarative, not a negotiation — in the sense that, where in TLS the ServerHello is a response to the ClientHello, in the ALPS plan the client settings are independent from the server settings. This is nice because, in theory, we could do things like move the client settings from the second flight to the first flight...
N: ...if we have something like ECH. In the current draft, all settings are encrypted with handshake keys, and this is an update on how some of the previous similar proposals worked: when we tried this originally in the QUIC working group, we had the problem that client settings were not encrypted, and that was bad — we solved that problem here. The way 0-RTT is currently defined in the ALPS draft is: when you create a ticket...
N: ...you store the client settings and server settings in the ticket, and then you just reuse them if they match when you do 0-RTT. If, during a 0-RTT connection, the client decides to use different settings, you just discard your session ticket and do a full handshake.
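The 0-RTT rule just described reduces to a simple equality check; here is a toy sketch of it (the function and parameter names are mine, not the draft's).

```python
# Sketch (assumed names) of the ALPS 0-RTT rule described above:
# accept 0-RTT only if the settings remembered in the ticket match the
# settings both sides want to use now; otherwise fall back to a full
# handshake (i.e., discard the ticket).
def accept_0rtt_with_alps(ticket_client_settings: bytes,
                          ticket_server_settings: bytes,
                          current_client_settings: bytes,
                          current_server_settings: bytes) -> bool:
    return (ticket_client_settings == current_client_settings
            and ticket_server_settings == current_server_settings)
```

The exact-match requirement is what makes the API simple; the flexibility question discussed later in the session is whether this check should instead follow QUIC's looser transport-parameter compatibility rules.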
N: Another problem that I've not mentioned on previous slides, which we're solving here — a problem we had with the H3 design — is that not only is the TLS-HTTP interaction messy conceptually...
N: ...there is also the problem that the settings are QUIC settings, and you have to force the ordering — you have to deal with the problem that you might not have settings available when you send NewSessionTickets. So you either have to wait to do a NewSessionTicket, and now you have this weird synchronization point between TLS and HTTP, with QUIC somewhere in the middle. ALPS solves this, because with ALPS you always have settings by the end of the handshake, so you no longer have to.
N: Oh — so here's a very brief overview of how the handshake works, on slide five. In the ClientHello, you send an empty application_settings extension to indicate that you support ALPS. In EncryptedExtensions, the application_settings extension carries the body of the server settings. And in the client's final flight — which is usually just the Finished message, or certificates plus the Finished message — you add a client application settings message which carries the client settings, and that way it's encrypted. Next slide.
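The flight layout just walked through can be summarized in a toy model. This is structure only — invented names, no real TLS encoding — showing which flight carries which ALPS element and why both settings blobs end up encrypted.

```python
# Toy model (assumed names) of the ALPS exchange described above.
def alps_flight_contents(client_supports: bool,
                         server_settings: bytes,
                         client_settings: bytes) -> dict:
    """Return what each handshake flight carries with respect to ALPS."""
    flights = {"ClientHello": [], "EncryptedExtensions": [], "ClientFinalFlight": []}
    if client_supports:
        # Empty extension in ClientHello: "I can do ALPS."
        flights["ClientHello"].append(("application_settings", b""))
        # Server settings ride in EncryptedExtensions, so they are
        # protected by the handshake keys.
        flights["EncryptedExtensions"].append(("application_settings", server_settings))
        # Client settings go in a new message on the client's final
        # flight (alongside Finished), so they are encrypted too.
        flights["ClientFinalFlight"].append(("client_application_settings", client_settings))
    return flights
```

Note how this differs from a negotiation: the client's blob is not a response to the server's, which is what makes the exchange declarative in the sense described above.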
N: So, the main debate here with ALPS is whether we want to do ALPS, or whether we just want to see whether we can get half-RTT data to work in all the situations where it currently doesn't — to get half-RTT to work in TCP, because we know how half-RTT works in QUIC; we made sure that it works in QUIC. NSS does support it, but as far as I know, neither OpenSSL nor BoringSSL supports half-RTT data...
N: ...currently. And, as I said, there are some questions about whether this will work if we actually start doing it in practice. Even if we solve that problem, we would still have to add a very explicit signal into the handshake that we are actually supporting half-RTT — and if we're actually supporting half-RTT and we want to add the signal, we can use the signal in the application protocol.
N: So there are some open questions in the draft, where I left in the simplest design possible, but we might want to extend it. The first question is that, currently...
N: ...the way we handle 0-RTT is that the client has to send exactly the same settings, and the server has to have exactly the same settings, in order for it to succeed. There is some flexibility, but not as much as what QUIC currently defines. The reason I went with that design is that it's very easy to define a very simple API for it, which works — but I understand that people might want to go for the QUIC approach, which QUIC uses for transport parameters, because that approach is more flexible.
N: In some cases we can't copy the QUIC approach one-to-one, because the QUIC approach for HTTP/3 is designed explicitly around the paradigm where, in HTTP/3, you always have to be able to operate with default settings — and ALPS is intentionally trying to get us to a state where we always know the server settings...
N: ...so we can always use extensions for HTTP, etc. The second question is whether we want to add support for settings which are per-connection and not retained across multiple sessions. HTTP currently does not have those, which is why the draft does not have them, but there is some precedent for those kinds of settings in things like QUIC transport parameters.
H: Okay, suddenly five people joined. So: I don't find the arguments in favor of embarking on a new protocol-engineering adventure really justified here. There's an argument to be made that just making half-RTT work is far superior, particularly when we talk about having to deal with h2 in TLS 1.2, which has different characteristics to TLS 1.3 — and I certainly don't want to be fixing this and putting it into TLS 1.2.
H: So I think that having people wait for the 0.5-RTT settings actually provides the right sort of incentives, and, having implemented it, I don't think there are any real problems with doing so. It's just that people haven't done so in the past — there are reasons for that, but it's very doable.
D: Victor, thanks for the presentation. I wasn't sure what I thought before, but now that I know it's going to be an adventure, I think we should totally do it. More seriously: I'm sort of indifferent right now on whether this is the right approach.
D: I'm still thinking that through, but one thing I do want to flag is that when you're adding a new message to the third leg of the handshake and you're calling it "settings", I think that's an error. The right term here is "encrypted extensions". We did look into adding this before, in 1.3, and took it out because nobody had any use for it.
D: But if it turns out that we do have a use for it, then we should add a more generic thing than just the application-settings thing — because, certainly, the difficulty was the critical engineering to make it happen at all, and if we're going to bother to add that, why should the engineers have to do it twice?
S: Am I audible now? Okay, I see a little thing — cool. Sorry, I'm still trying to figure out this tool. So, one of the nuances with half-RTT data is the 1-RTT case. Half-RTT data in 0-RTT connections is fairly straightforward, because at the half-RTT point the connection properties are stable. But the interesting point here is the first connection — the first time you connect to a site.
S: You would like to be able to send those settings early, and that point is actually more complicated, I would argue, than the 0-RTT connections, because the connection properties aren't fully established.
S: So this can't be the usual API — you also need to drive the handshake in parallel, and then there are some weird TCP deadlocks that can happen. I sent a long message to the working group a while ago, but: if the server thinks that this data is part of the handshake and the client doesn't, then you end up with a deadlock, depending on the sizes of things.
S: So, as a result of all of that mess, we intentionally punted the half-RTT stuff in 1-RTT connections. And my understanding from talking to Martin — maybe you all can correct me — is that NSS doesn't implement this for client-cert connections either, which suggests to me that we don't actually have evidence of half-RTT data working yet.
H: We could have; I just did not do it, intentionally — that's all.
K: Hi, this is Ben Schwartz. I just wanted to clarify — Victor, on your last slide you said QUIC would not use this. I was a little surprised; I thought this was partly for the benefit of QUIC.
N
I meant that QUIC, the transport protocol, would not use it for its transport-level parameters. This is meant for application-level parameters.
K
Okay, that's helpful, thanks. Overall my feeling about this is: I think the arguments for why half-RTT data is inconvenient are persuasive enough, especially if this can be done without adding a new handshake message, if it can be done as an extension. But it seems like the most compelling argument would be for QUIC, where the lack of head-of-line blocking seems to make this much more complicated without a handshake message.
R
I am a little torn here, because I feel like we're talking about two slightly different things, and I want us to zoom out a little bit and think about how TLS is going to get used by the implementers who don't know what they're doing but know that they want to sprinkle the TLS security dust on things.
P
I have just a quick comment; this is Chris Wood. There is a recent proposal for a post-quantum variant of TLS called KEMTLS, from Douglas Stebila and others, that effectively removes the ability to do half-RTT data even if you wanted to. So something like this would be useful in that context, because it would allow you to get that feature without being prohibited by the protocol, or blocked by not having an implementation, or whatever.
A
Okay, it seems the queue has closed. I think for this particular one there needs to be more discussion than we've been having on the list right now before we decide whether this is a working group item or not.
H
Oh look, it works, wow. All right. So this all came out of the discussion around the SVCB and HTTPS DNS records, and it's rooted in a lot of the discussion that we had about QUIC version negotiation and the work that Eric Rescorla and David Schinazi did. So, moving on.
H
Next slide. Maybe my video is frozen and the next slide is not available. Oh, there we go. So the core thing here is that DTLS and TLS are not compatible with each other: you can't attempt one and get the other. And QUIC is incompatible with either one, and potentially QUIC has multiple versions that are incompatible with each other as well.
H
Next, please. So the key insight here is that when you have incompatible protocols, the clients are responsible for making a choice between them, whereas with ALPN we have the client offer a bunch of options to a server and the server picks its most preferred. When we're talking about incompatible protocols, clients are responsible for choosing what to try in the first place, and that means we have a different set of solutions that we have to consider.
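The split being described here, where ALPN lets the server pick among compatible options but the client must pick among incompatible ones, can be sketched roughly like this (a toy illustration with made-up preference lists, not anything from the draft):

```python
# Sketch: with ALPN, the client offers and the *server* selects;
# with incompatible protocols (TLS vs DTLS vs QUIC), the *client*
# must commit to one before it can even start a handshake.

def alpn_select(server_prefs, client_offer):
    """Server-side ALPN: return the server's most preferred protocol
    that the client also offered (RFC 7301 style)."""
    for proto in server_prefs:
        if proto in client_offer:
            return proto
    return None  # no overlap: handshake failure

def client_choose(client_prefs, advertised):
    """Client-side choice among incompatible protocols: the client
    picks its own most preferred option from what was advertised."""
    for proto in client_prefs:
        if proto in advertised:
            return proto
    return None

print(alpn_select(["h2", "http/1.1"], ["http/1.1", "h2"]))  # -> h2
print(client_choose(["quic", "tls"], ["tls"]))              # -> tls
```

The second function is where the downgrade question lives: an attacker who can filter what gets advertised controls what the client even attempts.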
H
The idea is to provide the client with all of the available options, and then the client can either give up on what they've tried thus far and try their most preferred option, or take whatever policy actions they decide, so that they end up with the best option out of those presented to them. But they have to act. Next, please. Right, so first bad idea: DNSSEC. DNSSEC authenticates the SVCB records that we're talking about.
H
So you can advertise all of the things in the DNS and use DNSSEC to authenticate that, and that solves the problem, right? Next slide. No. DNSSEC uses a different authority to the connection, which leads to some weird inconsistencies with the way that everything fits. You have one authority that tells you that this is the server that you're talking to, and you have a different authority that tells you what your protocol authentication is.
H
So this is the hard part of the draft, which gets a little complicated, unfortunately, and I think Ben Schwartz and I are still having the conversation about how we think about this one. But the idea is that you divide the overall deployment, which consists of multiple servers, into smaller compartments. Whether that is a single IP address or a single domain name is something that we're debating currently.
H
But the idea is that you concede the downgrade attack between compartments. So if you have one compartment that supports a good protocol and one compartment that doesn't support that good protocol, an attacker can steer you towards the compartment that doesn't include the protocol that you would prefer to have. But most importantly, that allows the DNS resolvers to do the things that they need to do in terms of picking the right service to direct you toward.
H
The result of this is: if you have a truly uniform deployment, with all the servers supporting all of the versions, then you get the full downgrade protection. If you have a mix, then you're exposed to downgrade attacks that are based on the set of servers that has the least preferred set of protocols.
H
So we have to consider that there are multiple ways of getting to a given server. One of the ways in which you might end up getting to a particular server is just using an A record. Now, that's how we do HTTPS today, and that really only identifies a single endpoint with a single IP and port, and you can't really say "well, we also support QUIC", because you don't really know where that QUIC endpoint is.
H
In that case, you have to rely on Alt-Svc or something like that to get a secure, authenticated advertisement of the QUIC endpoint's availability. But with SVCB we have the ability to potentially say that anything within a certain scope, defined here as the ServiceForm name, shares the same configuration.
H
And so you can say that if you resolve down to a set of ServiceForm records that point to "here's an h2 server, here's an h3 server", then that entire set is expected to understand the extent of the protocols that are supported within that set, and can offer you downgrade protection as a result.
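As a rough sketch of that idea (the record shape here is simplified and hypothetical, not the actual SVCB wire format): a client can take the union of the ALPN values across the whole ServiceForm set as the deployment's claimed capabilities, and treat ending up on a less preferred protocol, when the set advertised a better one, as a possible downgrade signal.

```python
# Sketch: aggregate ALPN values across a set of SVCB ServiceForm
# records and flag when the negotiated protocol is less preferred
# than what the set as a whole advertises.

records = [
    {"target": "a.example.net", "alpn": ["h2"]},
    {"target": "b.example.net", "alpn": ["h3", "h2"]},
]

def advertised_protocols(svcb_records):
    """Union of ALPN tokens across the whole ServiceForm set."""
    protos = set()
    for rec in svcb_records:
        protos.update(rec["alpn"])
    return protos

def downgrade_suspected(negotiated, preferred, svcb_records):
    """True if the client's preferred protocol was advertised by the
    set but something else ended up being negotiated."""
    ads = advertised_protocols(svcb_records)
    return preferred in ads and negotiated != preferred

print(downgrade_suspected("h2", "h3", records))  # True: h3 was advertised
```

This is exactly the uniform-versus-mixed trade-off from the slides: the check is only as strong as the least capable server that legitimately belongs to the set.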
H
Next, please. There are two places you get information about these scopes. You can look in the DNS, and the discovery process that you follow will give you information about what services are available, but ultimately the information that you get from the TLS handshake, once you've authenticated the certificate, is the authoritative source of that information. The discovery process provides you with some hints; the TLS handshake provides you with the definitive information that you can use to determine your actions. Next.
H
Yes, you couldn't negotiate it in this connection. So, for example, if you made a connection over HTTP/2, you would potentially learn that HTTP/3 is available somewhere else. You don't necessarily know where it is, unless you've discovered it through something like SVCB. Next, please.
H
So there's a little bit of a debate now about the sort of scope of these advertisements, and that remains open at the moment; Ben and I are having some discussion about that one. I don't quite know what the answer is, so I would be looking for some input on that, and people's opinions on how difficult that solution can be. I was looking for something that was absolutely minimal, but it seems like that isn't quite enough.
H
Yeah, so in working through all of this, we realized that using ALPN identifiers is convenient, but it has this interesting property in that it assumes that ALPN identifiers are limited to one protocol at a time. Most already do this, but STUN and TURN have defined an identifier that applies to both TLS and DTLS at the same time, and so those protocols are both compatible and incompatible at the same time, which is really awkward.
H
So I don't quite know what to do with that, and I don't know whether it's safe to presuppose an outcome to all of this. So, questions, I guess: is this clear? I am not certain that it is; it probably needs more soak time. Is the draft clear? Is it worth going after this problem, and is this the general shape of a solution that seems reasonable?
K
So I've been bothering Martin about this draft, and I've been giving him grief, but I think it's a problem worth solving.
K
I think it's not a problem that actually applies today, but I'd like to live in a world where this is a problem that we can care about, where we can actually require support on the path for cross-protocol upgrades. And I think the biggest question to me is: do we want a mechanism that just enables the client to fail, which is essentially, you know, deliberately brittle?
H
You know, the whole point of security is to make things not work, right? And so that's clearly what this one is aiming for. So I think your assertion that this is not needed right now is an interesting one. There's certainly the view that if you've negotiated a connection for h2, and QUIC is advertised as being available, you can't really rely on QUIC being available, because seven percent of networks, or whatever the number is today, block UDP.
K
Right, but in those cases, if you have already been told that QUIC exists, then you could plausibly rely on that information and simply declare that you're going to demand that it actually work, right? So.
K
Basically, we sort of have some mild encouragement for clients to fall back within the list of endpoints that are made available, but it's not clear to me that that's right. It seems like clients probably could actually demand availability within the specification, but that's just a narrow example; my point is more generic.
H
We also have to deal with the potential for your DNS resolver to provide you with a spurious advertisement for QUIC, so it cuts both ways.
D
So this seems interesting. I think you're right that it needs more bake time before we can seriously consider it. I guess, on this point you're making about QUIC and UDP: I know the blocked-UDP thing has some force, but it'd be really nice, I think, to have an example that was more clear-cut, in the sense that there were two quasi-equivalent protocols that were guaranteed to work, so you really were being downgraded.
D
You know, I mean, if you, I don't know, mapped h2 down to h3 or something like that, it'd be more motivating. I guess the other thing I would say about this is: it seems like we're kind of getting into a rathole of having a number of different naming styles of things, with ALPN and the HTTPS and SVCB records.
D
That's even goofier, perhaps, than this, but in any case. And I guess the third point I would make is: this does make an assumption, which I think is important to make sure we note, that there's a minimum strength of the security handshake in these systems. When they're all TLS, that's fine, because they're all basically the same, at least at this level.
D
But, you know, if you had some other setting somehow, then it would be harder to reason about, perhaps. But it's important; I guess the point being that there's a sense in which you can't get downgraded below a certain point, which I guess, when it's all TLS, holds, though it depends on that too.
I
Right, can you hear me? Can you hear me? Yes, cool. So I just wanted to say that overall I'm very supportive of this. I would say yes to the first four questions, and on the fifth one I would say: because MT spent all his time finding the right font. No, but more seriously, I think this isn't something that we'll need immediately, right away, because of the deployment model that we're envisioning for h3.
M
Okay, that's going to be distracting. So what exactly is your threat model? Like, if someone is downgraded from TLS to QUIC, or vice versa, what's the worst that could happen? Since, I think, Ekr mentioned that we have the same security benefits from either handshake. When you downgrade from TLS 1.2 to 1.0 there's an obvious security issue, but what's the security issue here, as opposed to performance? Or is it just performance degradation?
H
So, generally speaking, I'm not aware of any specific problem with TLS 1.2 or TLS 1.3 that would justify having this protection in place, but we do generally try to prevent downgrades when they're available, and this presents an attacker an opportunity to steer you toward whichever of QUIC or TLS or DTLS they most prefer to attack.
M
Okay, and so then what is the user also supposed to do in this case? Say it discovers "okay, I've been downgraded": in TLS you'd shut down the connection, because you figure there's a man in the middle. So in this case, do you just try again based on what the server sends you, or do you just say "oh, I'm not going to talk to this endpoint"?
R
So I think there are security considerations for being downgraded from QUIC to TLS, just to respond to that. In particular, QUIC has some metadata-hiding properties, when you have multiple streams, that TLS does not, and so I think there's a decent argument to be made that better surveillance is possible from metadata on TLS channels than on QUIC channels.
R
That information is a really critical one, and it might vary from user to user, and it might vary from application profile to application profile. And I just wanted to point out that I think this is a similar problem to one that we've dealt with in a much worse security context, like HSTS, right? HSTS is an attempt to identify and prevent downgrade attacks between two different protocols that a network attacker can force.
R
The problem is not that one endpoint might be broken; it's that the network operator might be interfering with the ability to get the strongest security rating possible, right? And that exists because the choice QUIC made was: we'll just try to do it, and if it doesn't work we'll fall back to TLS, right? We built the tool, we said "here's a fallback mechanism", and that fallback mechanism is, of course, a lever that the network operator can use against the user.
R
So I'm wondering whether this kind of signaling isn't somehow equivalent to HSTS. I'm looking at that as a model; it's what I think is a ridiculous process, but it is a process that we've used to really defend against an otherwise easy and long-standing problem of downgrade attacks issued by the network operator.
R
I don't know. I mean, but HSTS has worked across time, across multiple temporal connections. Here I'm signaling to you as an endpoint that maybe some network operator is interfering with your other connection.
R
Can this be useful in, like, network diagnostic reports, so that users can see what the problem was and report it, so that their network operator knows they can't do this anymore? I'm trying to see how this actually fixes the problem for us.
H
I gave a concrete example: if there are specific servers that have QUIC blocked for whatever reason, and you talked about the metadata protections, maybe there are certain servers that are of particular interest to someone performing that sort of surveillance, and then maybe you would get to make a decision about that, knowing that other servers work. I don't have any concrete other solutions, other than: let's make sure this thing is actually as strong as we can make it.
A
Okay, thanks, Martin. So, thanks for this; I think there's more discussion, obviously, that needs to be had. What we'd like to do is move on, then, to discuss the deprecation draft, if that's okay.
B
There were just a couple of points that I really wanted to spend time on with the whole group. So the first was what to do about RFCs 5469 and 7507.
B
So the first one defines the single-DES and IDEA ciphers, and it explicitly notes that these ciphers were removed from TLS 1.2. So once we get rid of TLS 1.0 and 1.1, these ciphers essentially don't exist anymore. So I believe that we should be either obsoleting this document or making it historic as well. And then for 7507:
B
I didn't fully reason through it, but that's the signaling cipher suite value to detect downgrade attacks. But if your lowest supported version is 1.2, then either you only support TLS 1.2, and there's no downgrade at all, or you also support TLS 1.3, in which case the 1.3-defined downgrade mechanism takes effect.
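For reference, the 1.3-defined downgrade mechanism mentioned here is the ServerHello.random sentinel from RFC 8446, Section 4.1.3: a TLS 1.3-capable server that negotiates an older version overwrites the last eight bytes of its random with a fixed value, and a client that offered 1.3 checks for it. A minimal sketch of the client-side check (the version tuples and function names are illustrative):

```python
# Sketch of the RFC 8446 downgrade-protection check: a TLS 1.3
# server that negotiates an older version overwrites the last 8
# bytes of ServerHello.random with a fixed sentinel.

DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")  # "DOWNGRD\x01"
DOWNGRADE_TLS11 = bytes.fromhex("444f574e47524400")  # "DOWNGRD\x00"

def check_downgrade(server_random, negotiated_version):
    """Return True if the sentinel indicates an illegitimate
    downgrade for a client that offered TLS 1.3.
    Versions as (major, minor): (3, 3) is TLS 1.2, (3, 2) is 1.1."""
    tail = server_random[-8:]
    if negotiated_version == (3, 3) and tail == DOWNGRADE_TLS12:
        return True   # server speaks 1.3 but we ended up on 1.2
    if negotiated_version <= (3, 2) and tail == DOWNGRADE_TLS11:
        return True   # same, but we ended up on 1.1 or below
    return False

rand = bytes(24) + DOWNGRADE_TLS12
print(check_downgrade(rand, (3, 3)))  # True -> abort the handshake
```

This is why the SCSV of RFC 7507 becomes redundant once 1.2 is the floor: the only remaining downgrade, 1.3 to 1.2, is already covered by the sentinel.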
C
The feeling is that, you know, IDEA and DES are dead as doornails, and if we can nuke them via this update, it seems like a good idea to do something.
B
But since no one's in the queue right now, I will mention another point. So I did go through all of the documents that we're updating; there are some really interesting tidbits in some of them. There are a few that say "you must implement TLS 1.1" or "you must implement TLS 1.0", and that's sort of a bit awkward, because we're simultaneously saying you must implement 1.0 but also saying you can't implement 1.0.
B
So I suggested putting in another sentence to just say: yes, some of these documents say you must use 1.0; we're saying don't do that, the minimum is now 1.2.
B
So hopefully that's not too controversial: just a little bit of text to acknowledge some more details about the updates that we're making.
B
So, you know, definitely having it as an overall summary is worthwhile. There was also a very interesting statement in, I think, a couple of OAuth documents: that yes, TLS 1.2 is the current version of TLS, but TLS 1.0 is more widely deployed and more likely to be interoperable.
B
Yeah, so I don't have super strong feelings about that. I did put in some suggested language, but if we decide that we don't want to say anything about that, that's probably fine. And another thing that showed up several times, in probably ten or so of these documents, was that they listed a specific mandatory-to-implement cipher, and we're not currently saying anything about ciphers in this document, and that probably seems correct.
B
So I didn't really have a huge proposal about changing what the MTI ciphers would be. I do note that all of the ciphers that people listed as MTI are still defined for TLS 1.2, so in that sense we're not creating something where it's impossible to conform.
C
So, from my perspective, again, a lot of the time I think it was just repeating the cipher suites that were in the TLS spec. So if you're changing the base protocol, the security protocol that's being used, to a later version which has different MTIs, then there should be a natural progression: they just go to the next one.
B
Right, right. And there were definitely some that were saying "the MTI cipher suite for TLS whatever is this", but there were a few that said the MTI cipher suite for this application protocol is the cipher that happens to be the MTI cipher for TLS 1.0, where the language we're using is making it MTI for this application. Which is not super great writing, I guess, but it is what it is.
A
All right, great. I think we're done then, about on time, so see you in a video conference nearby.