From YouTube: IETF104-TLS-20190326-1610
Description
TLS meeting session at IETF104
2019/03/26 1610
https://datatracker.ietf.org/meeting/104/proceedings/
B
Our chairs: Chris Wood, Joe Salowey, and myself, Sean Turner. This is the Note Well; you should have seen it a couple of times by now, so you know what it says. Oh, closer? Okay, I can do that too. This is the Note Well — it's super loud up here, but anyway. Basically, if you get to the microphone, you're going to get recorded; there's video in here. If you have IP, you are supposed to disclose it — don't be a bad person. You can read the rules. Next request: we need a minute taker and a jabber scribe before we proceed.
B
Okay, I think someone's phone is going to blow up in a second — Richard's going to do it. Thank you very much, sir. When you come to the microphone, please state your name; let's keep it professional, and let's keep it succinct, because we have a lot of presentations today. Thanks. That was the first slide; the next slide. So this is the administrative part, and we have a whole lot of presentations: the ones between the two lines are working group items, and the ones at the end are not.
B
We did have another — no, we did — I don't know — another non-working-group draft called cTLS, for compact TLS, but the agenda bash took it off. It's out there, so you can go read it, but it wasn't really ready for prime time. Without further ado: is there anything else we want to talk about before turning it over to the first item, which is deprecating TLS 1.0 and 1.1? Nope? All right, let's do it. I think it's Kathleen.
C
1.0 — and so a result of this conversation was: should we be more explicit about DTLS 1.0 deprecation, in light of this work being recent? The work in this case, I believe, recommends DTLS 1.2 but doesn't do a hard deprecation, and so the update made was in the abstract here, to add mention of DTLS 1.0. So: are there objections to that update? Do people need time to look at this?
C
D
E
F
B
The only point I would add is that obviously this document was not one of our documents, so we should, you know, make sure that we tell them before IETF last call that we've done this, right? Isn't it 8261 — am I looking at the right one? — DTLS encapsulation of SCTP packets, done by the transport area AD.
B
E
C
Okay, so the next question discussed on the list was about NNTP, and some SHOULD NOT and MUST NOT language for using — I think it was 1.0 in this case. The response Stephen gave on-list was that it was good to do this update, because SHOULD NOT and MUST NOT are not the same thing, right? We are deprecating its usage, and so this makes it a little clearer to do that update. Does anyone have a problem with that?
C
Okay, wonderful. A reference for 3GPP deprecating TLS 1.0 and 1.1 was updated. And then — I think at the last meeting Stephen already went through the 70 different documents that this updates; these are just some notes on additional updates to other documents. I don't think there's anything here to really comment on, but it's here in case there's something you feel is important to comment on before we move along.
C
G
Violent support, perhaps? — Martin Thomson: I dropped a link into the Jabber room to our current stats on where things are at in terms of TLS for the web. If anyone's interested, they should take a look at that. It's pretty bad: there are a lot of sites out there that still don't support TLS 1.2. I don't mind that people accept TLS 1.0, but not supporting TLS 1.2 is kind of bad in this situation, and that's really what we're pushing for here by deprecating 1.0 and 1.1.
B
F
All right, this may be even shorter — certainly shorter than I said. So, I just put out a -31 draft literally, I think, yesterday; I was doing other things for the meeting. This has basically a bunch of editorial improvements, thanks to Martin. In particular, the examples had a sort of annoying bug with the ordering: the new ordering is better, but the original wording was confusing, so I fixed that. And, as I said twice I'd fix it, I then fixed it. So hopefully people will find this easier to implement.
F
So, QUIC — there's a lot of effort there, and it says basically that the ClientHello has to be a certain minimum size — the initial flight, the ClientHello in this case — and that you only let the server send N packets in response. DTLS has historically been kind of vague on this. It has this path validation that used to be called HelloVerifyRequest — now it's called HelloRetryRequest — but you're not required to do it.
F
You're just kind of encouraged to do it, and there are some scenarios where this isn't any kind of issue — probably one of the most common DTLS scenarios is WebRTC, where you've already done ICE, so you have confirmation of receipt. So, the question: should we do anything here or not? I guess I'm sort of not enthusiastic about doing a lot of things here. The major reason this is an issue is if you have a big server which is serving a lot of traffic, and is therefore a DDoS amplifier, and also has a big response — is basically doing 0-RTT, or has a giant certificate. Absent these things, it seems not super plausible. If people wanted to have a QUIC-type thing, I'm not going to be completely opposed, but I think we should probably just say you should do HRR in sensitive cases where you're worried about being an amplifier. Good enough? Are you going to get to this?
G
So I think the advice can be a little more than that — sorry, Martin Thomson. The advice we'll be having in QUIC is basically: don't send significantly more than what you've received from this apparent address. And so in the 0-RTT case, there's this huge potential for amplification if you're not careful. Simply say: don't do that unless you've validated that the other side would be fine. I think you probably want to have a MUST-level requirement in here.
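The rule described here — before the peer's address is validated, don't send significantly more than you've received from it — can be sketched as server-side accounting. This is a minimal illustration, assuming a QUIC-style 3x factor and invented field names; it is not taken from any draft text.

```python
# Sketch of anti-amplification accounting for an unvalidated peer address.
# The 3x factor and all names here are illustrative assumptions.
AMPLIFICATION_FACTOR = 3

class PeerState:
    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.address_validated = False  # set True after a cookie/HRR round trip

    def on_datagram_received(self, payload: bytes) -> None:
        self.bytes_received += len(payload)

    def may_send(self, n_bytes: int) -> bool:
        """Before path validation, cap sends to a small multiple of what the
        (claimed) peer address has sent us, so we can't be an amplifier."""
        if self.address_validated:
            return True
        budget = AMPLIFICATION_FACTOR * self.bytes_received
        return self.bytes_sent + n_bytes <= budget

    def on_send(self, n_bytes: int) -> None:
        self.bytes_sent += n_bytes

peer = PeerState()
peer.on_datagram_received(b"x" * 1200)   # e.g. a padded ClientHello
print(peer.may_send(3600))               # True: within 3 * 1200 bytes
print(peer.may_send(3601))               # False: would exceed the budget
```

Once the round trip (HRR/cookie) proves the peer owns the address, `address_validated` flips and the cap is lifted.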
F
G
H
Hannes Tschofenig: the only environment where we have seen DDoS attacks in the IoT context was when CoAP was used without DTLS. Actually, we have not seen any amplification attacks based on the use of DTLS. And specifically thinking about 1.3: there's also the pre-shared-secret-based mechanism, where there's actually no need to use the HRR — the HelloRetryRequest — because you're already authenticating the client on the first flight anyway.
F
I
G
I
G
This is great for the handshake, but there is also the addition of connection IDs to the protocol, which means that you can use the protocol on new paths based on an existing handshake, and we don't have any explicit path validation mechanism in the protocol. — Sure. — Which makes it a little interesting; I'm not sure I feel comfortable saying that's an application issue.
G
F
No — you have, but basically there are no instructions: if you get packets from a new IP address, or you're sending to another IP address — and we do not, and it wouldn't be good to, because then you have amplification attacks. So what we assume that's going to look like is: we need connection IDs, plus we decided to keep migration out of the protocol, right? So —
K
L
Given that connection ID is a construct that's in DTLS and is not available to the application, it might be worth saying something about how some sort of signal from the application, if the path has changed, might be useful for DTLS; DTLS can then use that to change the connection ID. — Sure, I can do that, assuming that —
L
F
You — okay, next slide. This is me; I think this is just mostly editorial — the spec doesn't say what to do. So: in DTLS 1.2 there was a cookie, and in DTLS 1.3 that got moved into the cookie extension, because we had cookies for HRR. So there's no good reason to have both a non-empty legacy cookie and the cookie extension. My proposal is just to forbid doing this, and require the server to detect it and choke.
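The proposed server behavior — detect a non-empty DTLS 1.2-style cookie field in a DTLS 1.3 ClientHello and abort — amounts to a one-line check. This is a minimal sketch; the dict-based message shape and the field name `legacy_cookie` are illustrative assumptions, not the wire format from the specification.

```python
# Sketch of the proposed rule: in DTLS 1.3, the HRR cookie lives in the
# cookie *extension*, so a non-empty legacy cookie field is an error.
class AlertError(Exception):
    """Stand-in for sending a fatal TLS alert."""
    pass

def check_legacy_cookie(client_hello: dict) -> None:
    """Abort ("choke") if the DTLS 1.2-style cookie field is non-empty."""
    if client_hello.get("legacy_cookie", b""):
        raise AlertError("illegal_parameter: non-empty legacy cookie in DTLS 1.3")

check_legacy_cookie({"legacy_cookie": b""})      # empty legacy field: fine
try:
    check_legacy_cookie({"legacy_cookie": b"\x01"})
except AlertError as e:
    print(e)
```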
F
So I think this is really just an editorial issue. Okay, issue 72: Martin asked about key separation between TLS 1.3 and DTLS 1.3. My proposal is no, and the reasoning is as follows: the transcripts are different — the headers are different, because of how the handshake headers are constructed, so there's no chance of a collision — and the record layers are different as well. So there's no — thank you — there's no — so, even then, you can't get the same key.
F
You can't have a TLS-to-DTLS cross-protocol issue: you would have to have a TLS-to-DTLS proxy that would take the TLS messages, send them on the DTLS connection, and get the same keys out — and even if you did, the messages wouldn't be correctly formed when you tried to use them. So I think this is going to be fine. I propose noting it, and if Karthik or somebody comes and tells me it's any different, I will change my opinion.
F
Implementation status is lagging a bit on DTLS 1.3, but we now have implementations in NSS, Mint, and Mbed TLS. I thought I'd show you the interop status, which had gotten better. So, you know,
F
the story has improved slightly from this slide, in that we got new interop between Mint and NSS, I think, yesterday.
F
These are sort of — these are somewhat genetically similar implementations, and I did some of the work on Mint, but I don't work on it — Chris Wood, I know — I'm not sure. But the Mbed folks and Hannes have been getting together to do some implementation work together and get that working as well, so hopefully we'll have interop there too. These are both at -30, I think, but -31 is unchanged, if I recall correctly — they're actually displaying the released version numbers for convenience, but otherwise we're the same. There are a few more tests
F
I want to run, but I think it looks basically okay. So, plans: next time, I want to update the issues as discussed above — there are a few more small issues which I just haven't dealt with, but they're easy to deal with — do a new draft, hopefully do more interop testing, and send a report to the list, and I think we're ready to go. Questions? Yeah.
B
So I mean, that's the question: whether, when they come back, we do another last call — and I didn't see anything that actually kind of warrants that. So I think if you guys just continue the interop, and if you find anything, great; if you don't, then we can just push it forward. — Yeah. We're also going to start to deploy this in Firefox. — All right, thank you.
H
Hello, I'm Alessandro, one of the authors of the certificate compression draft. There hasn't been any major update on the draft itself since the last presentation — I think a year ago — except a few small changes, but there have been some developments on the implementation and deployment side.
H
BoringSSL integrated support some time last year, and Chrome shipped it in July — I don't remember the specific version. Cloudflare also integrated that BoringSSL support and deployed it last year, in September. There's been also some work from Apple — I think also in BoringSSL — and from Facebook. So we were able to collect some data from the Cloudflare point of view. What we see is an average 1.5-kilobyte reduction in the certificates, for both ECDSA and RSA.
H
ECDSA went from about 3.5 kilobytes to 2 kilobytes; RSA from about 4.9 kilobytes to 3.5. One of the main reasons we did this work was to reduce the potential amplification factor during the QUIC handshake, which uses TLS 1.3. The QUIC specification allows the server to send about three times the amount of bytes the client sent in its first flight, before address verification; the client is supposed to send at least 1,200 bytes, so the server can then send back something like 3.6 kilobytes.
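The headroom arithmetic the speaker walks through can be checked directly. The figures below are the approximate sizes quoted in the talk; the 1,200-byte minimum and 3x factor are the QUIC-handshake constraints just described.

```python
# Server's pre-validation send budget in QUIC: ~3x the client's first flight.
client_first_flight = 1200            # minimum padded client Initial, bytes
budget = 3 * client_first_flight      # 3600 bytes before address validation

# Approximate certificate chain sizes quoted in the talk, in bytes.
sizes = {
    "ECDSA uncompressed": 3500, "ECDSA compressed": 2000,
    "RSA uncompressed": 4900,   "RSA compressed": 3500,
}
for name, size in sizes.items():
    print(f"{name:20s} headroom: {budget - size:+5d} bytes")
```

Compressed ECDSA leaves roughly 1,600 bytes of headroom, while compressed RSA (about 3,500 bytes) consumes essentially the whole 3,600-byte budget — matching the "still need more space" remark that follows.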
H
With certificate compression we get quite good headroom with ECDSA, but for RSA it's not there — we still need more space. I forgot to point out that Cloudflare uses Brotli compression with the default compression level, so by tweaking either the compression level or the dictionary used, we can probably save more bytes.
K
F
The data is really interesting. Do you know offhand how much of the value you're getting is because you're co-compressing, like, you know, the issuer and the subject? — Sorry, what? — So, I mean: do you know offhand how compressing just the EE certificate would compare to compressing the chain — like, would you get as high a compression ratio?
H
F
No — I was just wondering: for instance, the issuer name is duplicated between certificates — the issuer name in this cert is the subject name of the issuer's certificate — so I was wondering, yeah, whether part of the value you're getting is that you're compressing redundancy between the certificates inside the chain, or not. — No.
N
H
So there's just one remaining open issue, which is the attack that was presented some time last year. The idea is that if the decompression function produces different results based on the timing with which it receives the data, an attacker might be able to, for example, delay network packets and then cause the receiver of the certificate to effectively process a different certificate than what the sender sent. It was pointed out that the same problem basically applies to ASN.1 parsing just the same.
H
So one potential solution is to just add the decompressed certificate to the handshake transcript — either by adding both the compressed certificate and the decompressed certificate, or just the decompressed certificate. So the question is: do we want to fix it? Do we actually care? From the mailing-list discussions, and from talking to people in person, it didn't seem like there was a lot of interest in this — but the discussions were pretty small anyway. So you tell me, I guess.
H
B
O
So — Victor Vasiliev of Google. What I wanted to note on the issue with the potential attack on compression: while the issue itself is very theoretical in nature, fixing it — the various ways of fixing it — will lead to various forms of headaches, which are not so theoretical, unlike the issue itself. Which is why I would advocate for not fixing it.
H
On this open issue: I wouldn't want to have the decompressed certificate in the transcript, because it makes the implementation much more complicated — because of the way most libraries, including ours, work: at the time when you process the packet and create the transcript, we actually don't parse the content. So maybe not good. A separate question: did you look at the code size of the functionality — specifically the compression algorithm — and the RAM requirements? I would be curious about that,
H
in light of many of the discussions we currently have in the IETF about TLS being too heavyweight, and so on. So, do we have some numbers on the actual compression? — I think we saw some increment in CPU usage; I think it was about half a percent globally. I don't know if that means anything to you, but this was on Cloudflare's edge network. — Okay. And you didn't do anything on a sort of more client-side device? — No.
C
I
Ben Kaduk. It's interesting — the person who was up before me said that he doesn't want the decompressed certificate in the handshake — because I was sort of considering the case, also for IoT, where you might have an extreme compression function, which is like a one-byte index into a table of certificates; in that case a collision is a lot easier, because you might have some mis-binding problems.
G
H
I
H
G
H
F
Yeah, I'm much less concerned about this particular flavor of attack. I think the relevant concern is: is it somehow possible to have — I'm not sure, I'm just thinking on my feet — a collision in the compression function, where there are two things that compress to the same thing? But that seems like it's going to have other problems, like operationally — aside from any mis-binding issues. So I think we have to convince ourselves that the compression function is collision-free.
H
F
It wouldn't be collision-free — right, right. So it has to be — it has to be lossless, right? I think that's the relevant requirement — or am I thinking incorrectly? — that it has to be, you know —
F
You know, it has to be injective, right. But I'm not concerned about this defect issue. I think it's vastly more likely that a bug in the decompressor is something like an undersized memory buffer, you know, a UAF or something else — much more likely than this, and it would be much more serious. So I think we should proceed with this as it is, and just document it. — Isn't it? Yeah, then the —
B
O
Victor Vasiliev. I wanted to answer the question about the binary size: the binary size of Brotli is quite substantial, because it carries a giant dictionary with it, which is why this is mostly intended for clients and for servers which serve web content — because when you serve web content, you want Brotli for other reasons anyway. And this is really — both clients and servers can trivially choose to not implement this, and they will work just fine. — Yeah.
P
B
Q
R
All right, hello. I'm going to talk about a draft that is on draft number three. It's been adopted for a very long time; we haven't talked about it for a while, so it's due for an update. Just as a bit of background, if folks remember what this is: it's meant to help protect internet-facing applications from having their long-term TLS keys in memory, allowing them to instead have a shorter-lived key in memory and keep their long-term key somewhere else.
R
There was, I believe, a BoF — or LURK — about this, but the motivation behind this design is to not introduce the latency that those proposals introduce. Just to spell that out: if you want to have your key in a very secure location in, say, the west coast of the United States, and you have traffic in Europe, you may have to incur additional latency to do the handshake itself. That latency goes away with delegated credentials. The way it works is that a short-lived key is issued to the web server
R
every X number of hours, or periodically, and it is signed by the certificate's key; it comes with some additional parameters. Quite specifically, the way to think about this is not as a sub-certificate; it's more as a time-bounded key swap — this is how Richard Barnes described it. On the right here you can see all the aspects of it: it's not an X.509 certificate, and it doesn't have all the different details of one. It has, specifically — this is where it says protocol version — this is the part.
R
There's a requirement for the certificate to have a specific object ID that enables this feature, but other than that, all certificate content constraints still apply. The subject alternative names, any of the key usages — everything that's on the certificate still applies to the validation of the connection, including revocation and any certificate transparency requirements; they all apply to this certificate that's delegated. The delegated credential structure itself is validated against the public key in the end-entity certificate, and then, obviously, the CertificateVerify is checked against the key in the delegation.
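The validation order described here — the end-entity certificate must opt in via the object ID, the credential must be within its time bound, and the delegation must verify under the certificate's key — can be sketched abstractly. Everything below is illustrative: the OID value is a placeholder, and signatures are modeled as a pre-computed boolean rather than real X.509/TLS signature checks.

```python
import time
from dataclasses import dataclass

# Abstract sketch of delegated-credential validation, in the order the talk
# describes. All names and the OID value are illustrative assumptions.
DELEGATION_USAGE_OID = "x.y.z.delegationUsage"  # placeholder, not the real OID

@dataclass
class DelegatedCredential:
    public_key: str      # the short-lived key the server will sign with
    not_after: float     # absolute expiry of the time-bounded key swap
    signature_ok: bool   # stand-in for "verifies under the EE cert's key"

def validate(dc: DelegatedCredential, ee_cert_oids: set, now: float = None) -> bool:
    now = time.time() if now is None else now
    if DELEGATION_USAGE_OID not in ee_cert_oids:
        return False          # certificate must opt in via the object ID
    if now > dc.not_after:
        return False          # the time-bounded key swap has expired
    return dc.signature_ok    # delegation signed by the certificate's key

dc = DelegatedCredential("short-lived-key", not_after=time.time() + 3600,
                         signature_ok=True)
print(validate(dc, {DELEGATION_USAGE_OID}))   # True
print(validate(dc, set()))                    # False: no delegation OID
```

In a real implementation, the handshake's CertificateVerify would then be checked against `dc.public_key`, exactly as the speaker describes.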
R
So this has several benefits: you can put the signing key far away; you can centralize control of it; you can split your key-management operations into short-lived, fast-rotating keys and a long-lived, more secure key; and this doesn't have the risk of expanding the scope of the certificate — unlike, say, adding an additional certificate at the end of the chain and relying on name constraints.
S
R
There are several options. If you want to keep the long-term key far away, you can continue to do that; if you want to refuse connections from clients that don't support this extension, you can choose to do that as well. Those would be the two ways to not have the long-term key in local memory.
F
It's worth noting there are a couple of other benefits here, aside from — sorry, what the other benefits were; the security benefit is interesting. Basically, the idea is you have these remote data centers, right, and so you can do a remote-key protocol the way Nick describes, or you can backhaul the data all the way back to your main data center; but the point is, the remote data center can have a lower security level than your main data center. Another thing worth noting is the benefit for deploying algorithms: for example, Ed25519 is rolling out very slowly in the CA/Browser Forum world — in fact, not really moving in the CABF — but this lets you deploy Ed25519 today. Yeah.
F
G
Then this would support that. So — Martin Thomson: in the case that you wanted to use a new type of signature algorithm, the client would have to support both the signature algorithm on the end-entity cert and the signature algorithm on the delegated credential, and so that's a little limiting. But it's certainly a lot better than it has been. — Really? So you're not worried about the size?
G
No, no — it was just a point. One way of thinking of this is as an extension of the certification path, but it's not; it's an in-protocol thing, which just means that, now that you have to deal with two signatures, you have to deal with potentially two different signature algorithms. That was sold as a feature, but it also means that you need to have the code for those things — and NSS doesn't have Ed25519 at the moment. But, you know —
F
It's really more a feature from the server side, because it means you can issue credentials that — the same way you can have the server backhaul — you can serve only Ed25519, or, some algorithms are just faster, so you can save server CPU. I mean, generally, it's pretty much not possible, for the foreseeable future, to have, say, Ed25519 rolled out by the CA/Browser Forum by tomorrow.
R
So, on the client side: the validation of the handshake and the validation of the PKI are separate, like they are in a lot of implementations. This allows you to rely on upgrades to the TLS stack without necessarily needing upgrades to the PKI stack. And for the certificate, I guess, the signature_algorithms extension is reused in this case, so that is the same thing that you rely on. So, updates in -03: there was some short discussion on the list about protocol version, so this was removed.
R
This is TLS 1.3 only, and there are no non-TLS-1.3 versions, but in any case it is potentially useful for future versions beyond TLS 1.3, without changing the structure of the credential. There is now a server implementation of -03 in BoringSSL, and there is a server deployment here, at this website, with the proposed ID, which is issued by DigiCert. There was a hiccup in deployment, so we're going to try to fix that by tomorrow — so this is optimistic.
R
The one thing that we wanted to have here is some sort of formal security analysis, so we've grabbed somebody to do that. Essentially, the properties we want to prove are that this is equivalent, from a security perspective, to having an additional certificate in your chain, and also that this brings stronger binding than you would have with a PKI chain: in a PKI chain, you can change the intermediate and use the same key, and it's still fine; in this case, delegated credentials are specifically bound back to the certificate itself. So, the next —
R
T
R
It's like you're seeing a certificate that swaps out its private key for another, short-lived private key. So there's no assumption of keys being pinned, or anything like that. I —
F
U
V
All right, hello. We're here to give you an update on encrypted SNI. For those that are unfamiliar, just a quick summary of the problem as it exists today: assuming you're not using encrypted DNS, or you're using a legacy version of TLS, your network traffic leaks a lot of information to any passive observer — be it via the DNS query itself and the query answer, or in the TLS SNI, or in the certificate that comes back.
V
We also added a rule that prohibits sending a cached_info cert, specifically because we don't want to leak what certificate the client has previously cached in response to a connection. We added some clarifying words around HRR behavior, although there's still a little bit more that needs to be done there. We also added an initial, simple design for the multi-CDN problem.
V
That is the combined-records approach, along with a mandatory-extension mechanism: a way to specify that a particular extension in the ESNI record is mandatory. By mandatory I mean not mandatory-to-implement, but that clients which get a mandatory extension they don't understand must ignore the ESNI record that comes back. I'll describe this solution
V
a little bit more in a couple of slides. In landing this initial PR, we also dropped the SNI prefix for the special DNS query, and moved from a TXT record to a new RR type, per advice that we got back from various stakeholders. There are also some pending changes that are probably ready to land, particularly the GREASE one from David Benjamin — it just needs — is it up to date?
V
It is? Okay, so we should merge that relatively shortly. There's also a PR to swap the version and checksum fields in the ESNIKeys record: if you believe that the checksum adds value for version 1, it likely adds value for any version of ESNI, so by swapping them you're effectively making the checksum an invariant, which seems okay.
V
You can end up in a situation in which the addresses that you get back from your A or AAAA queries are effectively provided by one CDN, or one provider, and the ESNI records you get back are provided by another CDN provider. And that doesn't work, because CDN one doesn't have the ESNI keys serving CDN two, and vice versa — and the robustness technique that we added does not allow you to fall back from this particular scenario. So either you have to hard-fail in this case, or you fall back to plaintext SNI, both of which are not great; ideally, we'd like to avoid hard failure. In order to make this work, the simple PR that landed effectively works as follows: combine the ESNIKeys and the A and AAAA records into a single, simple record, by adding the addresses to an ESNI extension that's marked as mandatory. Then, when you query for the ESNI records and you get back one of these,
V
you connect to those addresses — and because you're simply not using the A and AAAA records at all, the mismatch rate is completely irrelevant. The con, of course, is that it requires a provider that's able to actually make these records, and make sure that the contents of the ESNI record match the addresses for the hosts that they happen to service. Some providers out there can do it; some cannot. So that's the trade-off. Then there is the general PR — the generalized PR — that's out there right now.
V
What's written here is not exactly what's in the text — what's in the text is arguably a lot more complex — but effectively, what we're trying to do is allow the addresses that are contained in the ESNI records to act as a filter, or a validity check, for the addresses you get back in the A and AAAA records. So the algorithm basically works as follows.
V
You would query for your A and AAAA records and for your ESNI record. Assume you get everything back, and you get back an ESNI record that has one of these address-pointer structures, which contains a netmask inside of it — or one or more netmasks. You use them to match against the addresses that you get back from your A and AAAA records; if they match, go ahead, use those addresses, and connect.
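The matching step just described — netmasks carried alongside the ESNI keys acting as a validity check on the A/AAAA answers — can be sketched with the standard library. The record layout and names here are illustrative assumptions; the draft's actual address-pointer structure differs.

```python
import ipaddress

# Sketch of the address-matching step: netmasks published with the ESNI
# record filter the A/AAAA answers. Layout and names are illustrative.
def filter_addresses(a_answers, esni_netmasks):
    """Return the A/AAAA addresses covered by any ESNI-record netmask."""
    nets = [ipaddress.ip_network(m) for m in esni_netmasks]
    return [addr for addr in a_answers
            if any(ipaddress.ip_address(addr) in net for net in nets)]

a_answers = ["192.0.2.10", "198.51.100.7"]     # addresses from A/AAAA queries
esni_netmasks = ["192.0.2.0/24"]               # published with the ESNI keys
print(filter_addresses(a_answers, esni_netmasks))   # ['192.0.2.10']
```

An empty result is the mismatch case: the client then falls back to resolving the name carried inside the ESNI record, as the algorithm goes on to describe.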
V
Just fine. If your A or AAAA answers happen to resolve through a CNAME chain of some sort, and the CNAME — the canonical name, er, we weren't quite sure what to call it — the name inside the ESNI address-pointer record happens to match the CNAME that you got back from the A or AAAA record, use the address.
V
Otherwise, you have to take the name that's in the ESNI record, resolve that to an address, and connect to that one — because, presumably, that particular record has been constructed in such a way that the name will ultimately resolve, or must resolve, to a host that is serviced by the same CDN provider. Otherwise, you've royally screwed up, and things will go badly. Now, of course, the efficacy of this particular approach depends on how often the mismatch occurs. So if you have a scenario in which the mismatch occurs, say, 50 percent of the time,
V
that means you will be doing a second, sequential DNS query — to resolve the name that's inside the ESNI record to an address — 50 percent of the time, if you don't want to run into hard failure. It's also problematic because, depending on how you're doing your Happy Eyeballs — querying for your A and your AAAA and your ESNI records — you might introduce a delay of some sort in order to get your initial
V
ESNI response back. So the time it actually takes to go from nothing to ESNI keys plus addressing information could potentially be quite long; not only are you paying the hit, but the time it takes to actually start connecting could substantially increase, which is not ideal.
V
It makes the mismatch rate irrelevant, provided that you can actually vend these particular, simpler ESNI records that have the full addresses in them. We propose moving forward with that, and then taking the generalized version that is in PR 137, making that an extension — potentially in a separate draft — working on that, and, I guess, either bringing it back to the working group or something. So I'm hopeful that people who have opinions about this particular topic
V
will come to the mic now and tell me whether or not they're okay with this, or if not — because the multi-CDN issue was the big thing we needed to address in order to move forward with ESNI. I would really like to see something happen here, just so that we can get ourselves unblocked and move forward. So — Stephen.
E
C
E
Extensions in ESNI — or, yeah — which, if anybody bought into that, which I don't think they do, that would rule this out. Up to this point, I'm just kind of worried it will cause another fuss with a bunch of people who have dependencies on how A and AAAA records are managed, and who therefore wouldn't be able to use ESNI. But if — because —
V
We're deferring this to an extension — assuming you buy the extension thing — and because we're deferring it to an extension, it does not prohibit them from doing that later, in an extension, right? It simply enables the people who want to do ESNI, the way it's currently specified, to do it. Right — if you assume that everybody implements both — not everyone might implement both.
V
So if you're a client — what is the incentive for a client to do the more general one, if you're paying a potential — but it's the server side who has the dependency on the A and AAAA management, right? — But if the clients are paying a substantial performance penalty a large fraction of the time for one of the designs versus the other, there's not much incentive for them to do the other one, regardless of how hard or easy it is for the server. — Yeah, exactly — right.
V
My response to that would be, as a client, so I'm now speaking as a client, as Apple: it's very unlikely that we would implement a solution that had a performance regression relative to what PR 137 potentially provides or gives us. So regardless of whether it's in this draft or in a separate draft, that doesn't change whether or not we'll implement it. We're looking for something to implement.
D
No, but that's kind of what the question is asking as well. You know, it's not intuitive to me, so I'd like to understand it more deeply. The current solution in the draft actually does impose a performance penalty as well, right? Because you're essentially racing an A lookup against the ESNI lookup, right, and you're tolerating a certain amount of time, right? Isn't the connection delay a part of it?
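The racing described here, firing the A lookup and the ESNI lookup together and then tolerating only a bounded extra wait for the ESNI answer, can be sketched roughly as follows. This is a hedged illustration only: the resolver functions and the 50 ms grace period are hypothetical stand-ins, not anything specified in the draft.

```python
# Sketch of racing an address lookup against an ESNI-record lookup,
# tolerating a bounded extra delay for the ESNI answer.
import asyncio

ESNI_GRACE = 0.050  # assumed tolerated extra wait for the ESNI record, in seconds


async def resolve_a(host):
    # Stand-in for a real A/AAAA query.
    await asyncio.sleep(0.010)
    return "192.0.2.1"


async def resolve_esni(host):
    # Stand-in for a real ESNI record query.
    await asyncio.sleep(0.030)
    return "esni-keys"


async def connect_params(host):
    a_task = asyncio.create_task(resolve_a(host))
    esni_task = asyncio.create_task(resolve_esni(host))
    addr = await a_task
    try:
        # Once the address is known, give the ESNI lookup a bounded grace period.
        keys = await asyncio.wait_for(esni_task, timeout=ESNI_GRACE)
    except asyncio.TimeoutError:
        keys = None  # proceed without ESNI rather than delay the connection further
    return addr, keys


addr, keys = asyncio.run(connect_params("example.com"))
```

With these stand-in delays, the ESNI answer arrives inside the grace period, so the connection gets both the address and the keys; a slower ESNI lookup would fall back to `keys = None`.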
F
Doing a lookup and just telling TLS where to go, I mean, quite seriously, that's how I simply implemented it in Firefox. So I guess, to try to reframe this discussion: perhaps the situation we have at hand is that we have some evidence and some working experience that with multi-CDNs we have a real problem, and that ESNI is not deployable in its current state, because it causes breakage, because it was a hard fail.
F
Now
it
may
be
the
case
that
there's
some
mechanism
that
will
be
easier
for
some
people
employ
on
the
server
side
and
will
not
induce
a
major
performance
regression
on
and
I
welcome
that
solution
and
that
solution
is
available
today.
I
prefer
solution
to
this
appear,
even
if
it's
not
more
complicate
for
the
client
to
play.
However,
my
impression
is
that
another
situation
and
then
available
solutions
like
you,
have
no
performance,
regressions
or
seem
likely.
A
performance,
regressions
and
I
can
tell
you.
This
is
a
client.
F
Do
a
connection,
that's
like
not
as
a
non-starter
for
us
yeah
for
like
a
very
ain't
like
for
any
substantial
fraction
of
a
fraction
of
requests,
or
all
that
happens
like
will
liability
for
any
server
like
you
know,
if
it
was
like,
you
know,
like
some
1%
of
requests,
and
it
was
I
suppose
across
servers
like
including
with
new
server,
was
always
taking
regression
like
me,
but
like
if
it's
gonna
be
like
a
major
regression
of
all
time.
We're
signers
then
so
on
that.
F
That's
that
the
requirement
for
solution
on
the
as
far
as
like
the
design
of
the
system,
the
I
think
I,
was
personal,
closest
attention
version,
this
invention
approach,
I
think.
The
advantage
of
that
is
that
lets
us
get
some
deployment
Springs
with
this
with
something
we
know,
will
work
for
the
people
can
play
wall
and
then,
if
something
awesome
comes
along,
that
makes
us
work
better
like
that.
Like
has
the
properties
version
we're
more
than
interested
in
like
taking
that
as
well.
F
I forgot to mention one more thing that Chris also didn't mention, and I'm being nice, about this extension: because we have the bit indicating it's mandatory, it's quite straightforward for you to advertise ESNI keys with, like, some sort of PR 137 thing, and then conforming clients will, or should, reject it if they don't know about it. So that migrates quite smoothly. Yeah, thank you.
X
I think, on that topic of what happens when it's mandatory, some of that is going to be a function of how comfortable we are if adding that extension, like the one in 137, as mandatory means that clients that haven't implemented 137 basically can't use ESNI anymore. I guess it was brought up in an issue, and ekr said that that was the intended
X
behavior. And I think where that starts becoming important is that, like with 136, I think the biggest challenge is going to be cases where the sheer number of A and AAAA records that might get used at any one point in time is more than will necessarily fit. To Patrick's comment, I think there is a bit of a...
X
I also do have some anxiety about what the operational issues are that we're going to run into from having the A and AAAA records that people use be something that comes from another record that's not A or AAAA. That's a big enough operational change, and it breaks enough assumptions, that it could have interesting side effects, but I think until we try to deploy it we don't know what those side effects are yet.
X
There's operational stuff in all the complexities of how the internet works in between. The known known case would be DNS64, which relies on being able to rewrite those records. We can specify that one in the draft to cover that case, if clients implement it properly, but it's very likely there are other things we haven't thought of, which work by accident in undocumented manners, that we will start tripping over as we start trying to roll this out. Sure.
Y
Wes Hardaker, ISI. So I need to go read the document in greater detail, and I'm glad that I was in the room so that I could see this issue cropping up, so I'll speak at sort of a higher level, which is, especially to the chairs: if this is gonna go forward, we need significantly more review from expertise that is not in this room. There are some rather large, interesting ramifications of doing this type of stuff, where to some extent you're fracturing the namespace, right?
Y
So the DNS is considered this global naming system, and there's already some fracturing, because of things like SNI and the like, where X.509 PKIX certificates tend to have their belief of what the proper name is and the DNS has its belief of the proper name, and now we're splitting the address space too. And so these danger bells are ringing in my head. But until I actually go dive in deep, I can't be for or against it, but I do think, especially to the chairs...
B
I will note that we learned a lesson from one draft that was involved in DNS recently as well, and so obviously we will do that. Luckily, a lot of the people that are proposing this have DNS experts in shop, and my assumption is that they're also consulting with them as well, but obviously we will have to make sure that we don't just spring this on everybody. We're also not hiding it, right? So.
J
Erik Kline. I actually would like to second Erik Nygren's DNS64 comment; that will be problematic. But I had a question, so please pardon my ignorance: what are the implications for an authoritative server for a CDN that doesn't use CNAME chasing and does do, like, geolocation? So if I have a thousand IPs I could return to you, and I'm going to return three that I think are in your area, do I generate one...?
R
Nick Sullivan, Cloudflare. So, to speak from the server operator's point of view: I spoke with the folks on our team who do this, and we construct ESNI keys, as well as A and AAAAs, from the same database. So we see no problem with 136 and going forward with it. If a query comes in for any one of those three types, we look...
V
So
perhaps
we
could,
you
know,
send
a
message
to
the
DNS
ops
mailing
list
or
something
indicating
that
this
is
both
potentially
what
we're
planning
to
do.
This
is
what's
currently
in
the
draft
and
seek
additional
feedback
for
safety
minutes.
Please
note
that
yeah
I'd
be
good
to
get
some
more
feedback
there,
certainly
again
I'm
not
trying
to
dismiss
her
belittle
anyone's
particular
comments,
but
this
is
certainly
a
difficult
problem
to
deal
with
and
we're
just
trying
to
find
the
best
way
to
move
forward.
V
Currently, ESNI works in a shared-mode architecture, in which the TLS-terminating server is also the thing that serves up the content. In split mode, there is a TLS-terminating server, the server that actually decrypts ESNI, and it forwards the connection on to another server that vends up the application data. Potentially there's a valid use case for that particular architecture. It does increase the complexity in actually getting the information from the fronting server to the back-end.
V
In terms of... right, so, sorry: complexity in terms of, you know, the number of words in the draft. It also allows us to potentially do other things, like encrypt more than just the SNI in the ClientHello. I know there are people who desire to potentially encrypt ALPN values, and potentially other extensions as well. So sure, yeah, I'm...
E
Sure, so I just had a look at 134 there, and it reflects a discussion on GitHub, but it says it was a comment from me, which I don't think matches my comment, so I think we need to match the GitHub discussion against the list discussion and then maybe do a little better on some of these. Okay.
AB
All right. So a lot of the discussion around those two proposals for ESNI depends on whether you're actually going to see divergence between any ESNI record type and the A or the AAAA results. I was fortunately able to tap into somebody else's research database with lots of DNS resolutions to try to find an answer to that. The problem is that you can't actually test something that hasn't been deployed, so I can't directly measure what would happen with ESNI. Instead, we actually have a pretty decent proxy question.
AB
We're already doing two resolutions for the same hostname, an A and a AAAA, so I'm using the database to try to figure out if A and AAAA diverge from each other, and if they do, it's likely the ESNI would diverge from one or both of them as well. I'm not going to go into a lot of detail about the database or how it's constructed, but we have lots of hostnames and lots of resolvers, and every run takes a random selection from those resolvers.
AB
Of
course
we
throw
most
of
it
away,
because
if
what
we're
looking
for
are
potentially
divergent
cases,
we're
boiling
that
down
to
what
CDN,
where
we
point
it
to
so,
if
we
couldn't
figure
out
what
C
D
and
it
was
doesn't
help
us
also,
there
are
host
names
that
use
one
CDN
and
if
you
always
return
the
same
CDN,
it's
not
possible
for
you
to
diverge.
So
you're
not
helpful
in
answering
this
question
and
DNS
resolutions
aren't
always
successful.
I'm,
not
everybody
supports
ipv6.
AB
So sometimes we didn't get an A and a AAAA result from the same resolution attempt for the same server. At the end of it, those 243 million resolutions left us with about 224,000 resolutions that actually meet the criteria we care about. Drilling into that little slice of data, we see about 23% divergence, which is a lot higher than I was hoping to find.
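The measurement described above, classifying the A and the AAAA answer for the same hostname by CDN and counting how often the two disagree, can be sketched like this. The address-to-CDN mapping here is a hypothetical placeholder; the real study classified addresses against a research database.

```python
# Sketch of the A-vs-AAAA divergence check: classify both answers by CDN,
# drop unclassifiable pairs, and report the fraction that disagree.

def classify(addr, addr_to_cdn):
    return addr_to_cdn.get(addr, "unknown")


def divergence_rate(resolutions, addr_to_cdn):
    """resolutions: (a_addr, aaaa_addr) pairs taken from the same resolver
    at the same time, pre-filtered to multi-CDN hostnames."""
    usable = [(a, q) for a, q in resolutions
              if classify(a, addr_to_cdn) != "unknown"
              and classify(q, addr_to_cdn) != "unknown"]
    if not usable:
        return 0.0
    diverged = sum(1 for a, q in usable
                   if classify(a, addr_to_cdn) != classify(q, addr_to_cdn))
    return diverged / len(usable)


# Hypothetical mapping and sample pairs, for illustration only.
addr_to_cdn = {"192.0.2.1": "cdn-x", "192.0.2.2": "cdn-y",
               "2001:db8::1": "cdn-x", "2001:db8::2": "cdn-y"}
pairs = [("192.0.2.1", "2001:db8::1"),   # same CDN
         ("192.0.2.1", "2001:db8::2"),   # diverged
         ("192.0.2.2", "2001:db8::2"),   # same CDN
         ("192.0.2.2", "2001:db8::1")]   # diverged

print(divergence_rate(pairs, addr_to_cdn))  # → 0.5
```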
L
A clarifying question here that might help. Okay, if I understand correctly, what you're saying is that the hostnames don't resolve to the same CDN at different points in time, but also potentially from different resolvers, and when you're looking at ones that are returning both A and AAAA results, when you're examining them for divergence, you're looking for them to be at roughly the same time and from the same resolver? But, yeah.
AB
So the client that is running the test fires an A and a AAAA request off to the same resolver at the same time. The ones that I'm discarding are where every attempt over the course of the week, from any resolver, from any client, always gave us the same CDN, so I don't expect to ever be able to observe divergence there, because it always goes to the same place.
I
And the idea being that, you know, we've got this dataset, which is data points that are across space and time, and you've used that dataset to try to figure out when there are multiple CDNs, and that's necessarily lossy, and we just have to assume, or make inferences about, how reliably we can do that, yeah. And it's only within that space of data that we're actually looking at these percentage numbers. Correct. And I...
D
And then it's not just limited to folks with multiple CDNs. It's specifically then constrained because we need an artificial sort of second record type in the test, people doing both A and AAAA. The first part, the multi-CDN part, is actually the problem people were looking for, but the second constraint is actually kind of artificial relative to what the working group is considering. So I'm actually curious how big of a filter that last step is on the multi-CDN data set, yeah.
D
Okay, so one of the suggestions has been that this shouldn't happen that often if it's in parallel, because it's not really in parallel: one of them gets there first, and the CNAME ought to be cached, or there ought to be some back-end connection collapsing happening to make that CNAME get cached. It would...
AB
That was my supposition as well. I thought that this should be fairly rare, because either the CNAME is already cached or you collapse the resolutions, and so you would maybe only see this when the two lookups happen to straddle the expiration of your cached name, or maybe the cache was cold and it missed in both lookups. But I have heard that there are some pathological stub resolvers that round-robin amongst multiple upstream resolvers, which almost guarantees divergence.
X
One example case is that the recursive resolver service interface you might talk to may actually be load balancing across a pool of recursive resolvers that each have independent caches, and because it's connectionless UDP, there's no necessary reason that you get determinism in terms of which back-end server a given request goes to. So if it's just doing pure round-robin, stateless across all of them, two requests may go to two different ones.
F
So, I guess, thanks for doing this, Mike; this is great, since everyone else was too lazy, so it's really awesome you did it. I guess what I would say is that this is scary, and so I'd definitely love to see more people dig into it. I'd love for you to be wrong, right? If you're wrong on this later point... So, you know, Patrick, definitely, if...
F
Here,
they're
like
that'd,
be
sweet,
but
I
think,
like
you
know
like
he
was
he
pretty
wrong
for,
like
the
numbers
could
be
acceptable,
so
yeah
so
I
don't
know
thanks
thanks
and
like
people,
people
who
are
like
sort
of
beating
on
this,
like
you
know,
like
go
to
the
mic
afterwards
and
see.
If
you
can,
like
you,
know,
conquer
the
study
that
makes
you
happy.
L
I'm good. I think this is... I mean, the data is obviously great, and I do take the caveat, and I love that, that A and AAAA might be qualitatively different than other things. I don't know exactly how to think about it any further, or how to turn that into an experiment, but that's a caveat we should keep in mind, I mean.
F
Perhaps one thing you could do, that would be ideal... yes, a question: how do you distinguish which things are on a CDN? Is there a CDN-per-hostname database? Yeah.
F
Awesome. So I guess one thing we could do, and I think I might get Firefox to do, is: if you want to give us a list of some of the people who are exposed, we can go and ask how often it happens for them, because we can just do a forced resolution and then compare the A equality and those numbers, and then we'll at least get... I mean, that won't tell us...
AB
We're talking about how this affects things from the browser perspective versus the server operator perspective. From the browser side, the multi-CDN hosts are about 1% of all the hosts that we resolved, and it happens about 20% of the time for them, so from the browser's standpoint this is about two-tenths of a percent. If you are a multi-CDN site, it happens about 20% of the time for you, which kind of stings for your site's performance.
R
Okay, and then, we're short on time and this is a lot of content, so I'll try to get through it. OPAQUE. This is a presentation about a password-based authentication system for TLS, a PAKE. We've had these before. SRP is one of the first, which is an augmented password-authenticated key exchange. It's widely implemented and used in a bunch of protocols, but not necessarily on the web. Dragonfly is another one that was recently brought through.
R
It's
been
published
as
an
RFC
s
and
independent
submission,
including
integration
into
a
TLS,
1/2
and
1/3,
and
so
there
are
some.
So
if
we
look
specifically
at
SRP
there's
a
couple
issues
with
that
one
is:
the
salt
is
sent
in
the
clear
which
is
actually
a
problem
with
almost
all
of
the
previous
pics,
and
so
this
leads
to
potential
pre-computation
attacks.
If
you
get
a
the
server's
database
of
passwords,
you
can
have
precomputed
rainbow
tables
to
to
check
password
databases
rather
than
having
to
do
a
brute
force.
The
security
analysis
of
SRP
is
unsatisfying.
R
Okay, so that's just kind of background on SRP and some of the original PAKEs. OPAQUE was presented last year, and there's a new paper that has a methodology for describing how to build a secure asymmetric PAKE in conjunction with an authenticated key exchange like we have in TLS 1.3. This was presented by Krawczyk and others at Real World Crypto; there's a paper published, and there's also a proposal that's been through several iterations at CFRG.
R
So
the
way
it
works
is
you
take
an
authenticated
key
exchange
like
TLS
1.3
and
combine
it
with
the
primitive
called
a
no
PRF,
which
is
an
oblivious
pseudo-random
function,
I'll
give
some
intuitionist
how
that
works,
and
if
you
combine
them
together
together
in
the
right
way,
opaque
describes
how
you
can
you
can
get
a
secure,
a
pic
out
of
that
as
a
peek
description,
primitive.
It
has
some
really
nice
properties,
such
as
security
proof.
R
This is a draft that we wrote for TLS 1.3 in the TLS working group, but it depends on a bunch of other primitives that are currently going through the CFRG: specifically the OPAQUE overview; the OPRF, which is currently an individual draft; as well as hash-to-curve, which is a dependency of that OPRF draft and is currently an IRTF document for CFRG. Okay, fundamental components: the OPRF. This OPRF protocol allows two parties to negotiate a value based on something that the client provides and something that the server provides.
R
The output of this protocol is not known by the server. I'll explain it a little bit later, but essentially you have a server with a private key and a client with a value like a password, and you get a result that's based on both of those. It involves a one-round-trip flow. So, in the construction for OPAQUE, this OPRF...
R
The output of the OPRF is used to encrypt an envelope which contains a set of keys, the OPAQUE keys. If you think about TLS, when you're talking about key negotiation, you have your signature keys and you have your key agreement keys; this is another pair of keys, and it could be either signature keys or key agreement keys. These are keys that are tied specifically to the user account on that specific server.
R
So I'll go a little bit into the math. I know we're tight for time, but I want to give a bit of the fundamentals as to what this relies on; it's the same sort of thing that TLS 1.3 relies on. You have a prime-order group, so, for example, the group of points on an elliptic curve such as P-256. Group elements are denoted by capital P or capital Q. You have scalar multiplication, which is adding a point to itself n times.
R
Scalars will be lowercase letters, and then there's an additional element here called hash-to-group-element. This is something that almost all PAKEs have to take into account, and it's how you get from a password into the group. This is, as I mentioned, currently being gone over at the CFRG. It takes a scalar and outputs a group element that is uniformly random.
R
The OPRF flow kind of looks like this: the client takes a password, blinds it, and sends it to the server. The server applies its private key operation, you get a blinded OPRF output, and then you can unblind that OPRF. So the password and the unblinded OPRF output are tied together by the server's private key, and the same goes for the blinded password and the blinded OPRF output. In terms of the math, the blind is just a random value: you take your password, hash it onto a curve element, and multiply it by the blinding value.
R
That's
your
o
PRF
one
value,
send
it
to
the
server
the
server
multiplies
by
its
secret
scalar
sends
it
back.
Then
you
divide
out
the
blinding
value
and
then
you're
left
with
the
server's
private
scalar
times.
The
point
that
represents
your
password
and
this
up.
Your
OPR
flow
is
the
fundamental
basis
for
opaque,
and
so
this,
the
final
output
there
is
what's
used
to
encrypt
the
envelope
that
contains
the
extra
keys,
okay
going
forward
or
backwards.
Here
we
go
okay,
so
the
user
creates
the
envelope
during
password
registration.
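The blind, evaluate, unblind arithmetic walked through above can be sketched with a toy group. This is only to illustrate the algebra: real OPAQUE uses an elliptic-curve group such as P-256 and a proper hash-to-curve map, whereas the tiny safe-prime subgroup and the exponentiation-based "hash" below are stand-ins chosen purely for readability.

```python
# Toy OPRF round trip over the order-q subgroup of Z_p*, where p = 2q + 1.
import hashlib
import secrets

p, q, g = 2879, 1439, 4  # small safe-prime group; g generates the order-q subgroup


def hash_to_group(password: bytes) -> int:
    # Stand-in for hash-to-curve: exponentiate g by a hash of the password.
    # NOT a real hash-to-group map, since the discrete log of the output is known.
    e = int.from_bytes(hashlib.sha256(password).digest(), "big") % q
    return pow(g, e, p)


# Client: hash the password into the group and blind it with a random scalar r.
P = hash_to_group(b"hunter2")
r = secrets.randbelow(q - 1) + 1
blinded = pow(P, r, p)             # OPRF message 1: P^r

# Server: apply its private scalar k to the blinded element.
k = 777
evaluated = pow(blinded, k, p)     # OPRF message 2: P^(r*k)

# Client: "divide out" the blind by raising to r^-1 mod q.
unblinded = pow(evaluated, pow(r, -1, q), p)   # equals P^k

assert unblinded == pow(P, k, p)   # the server's scalar applied to the password point
```

The final assertion is the point of the construction: the client ends up with the server's key applied to its password point, while the server only ever saw the blinded element.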
R
The client runs the OPRF and puts its own private key and the server's public key into the envelope, sends it back to the server, and the server saves it with the identity of the client. For this to work as part of a handshake, the user proves knowledge of the password by being able to open the envelope and use the keys, having the private key inside, and the server knows the associated public key.
R
So these private keys are used to authenticate the handshake, and there are two main proposals in this document for how this can be done. First, the OPAQUE private keys can be used in place of the PKI keys: if you think of CertificateVerify, rather than using a certificate and private key there, you take the OPAQUE private key and use it in the CertificateVerify in client authentication. Or, the other option:
R
The
other
way
is
to
take
those
keys
and
mix
them
with
the
key
granite
keys
in
a
kind
of
four
key
based
key
agreement
and
and
use
that
as
part
of
the
key
schedule.
So
this
is
sometimes
called
Triple,
D
H
and
there's
a
more
efficient
version
called
QV,
which
is
I
believe
has
some
IPR,
but
Hugo
said
it
was
fine.
I
think
we're
probably
safe
from
it
for
now,
but
in
any
case
that
details
details
are
in
the
graph.
So
how
does
it
work
in
place
of
P
khakis?
R
This
is
that
the
instantiation
called
opaque
sign,
so
you're
opening
keys,
the
ones
that
are
in
the
envelope
our
signature
keys.
The
client
sends
its
identity
in
an
extension
as
well
as
the
first
OPR
F,
and
the
server
sends
back.
You
know
a
certificate
message
that
has
the
it's
it's
piece
of
the
OPR
F,
as
well
as
a
certificate
request
for
that
identity
and
the
certificate
verifies.
The
server's
opaque
key,
that's
associated
with
the
account
on
the
and
the
clients
then
completes
the
client
auth
flow
with
its
certificate.
R
So in this case the OPAQUE keys are TLS 1.3 key shares. The client takes the identity and the OPRF key, and then the key share that is used for the key negotiation has to match the type of key that was in the envelope. So you essentially have two client key shares and two server key shares.
R
The server sends back, in EncryptedExtensions, the OPRF-2 keys. EncryptedExtensions is derived as the normal key exchange flow works, just doing the regular Diffie-Hellman, and so the master secret is actually derived... there is a specific place where you do an HKDF-Extract of zero to get the master secret, and the handshake secret comes from above it.
R
It's
just
more
efficient
to
do
so,
and
the
interesting
thing
about
this
instantiation
is
that
you
can
also
have
a
certificate
and
do
regular,
PK,
PK,
I
off
both
client
or
server,
and
the
reason
why
this
is
relevant
is
there's
certain
applications
where
you
could
think
that
you
would
do
the
pake
to
do
individual
off,
but
you
would
also
have
some
sort
of
higher
level
off
like
all
of
these.
The
devices
belong
to
the
same
or
we're
manufactured
by
the
same
manufacturer,
or
something
like
this.
R
So
these
are
the
two
in
handshake
modes
and
we
talked
about
using
pigs
for
in
handshake
modes
because
you
know
you're
activating
devices
or
things
like
this,
but
when
you
think
about
potential
uses
on
and
say
the
web
you're
not
going
to
want
to
do,
have
user
experience
of
typing
in
a
password
before
any
of
the
content
loads.
So
having
a
post
handshake
authentication
option
here
is
is
is
pretty
interesting
so
because
a
pig
son
is
really
just
replacing
the
pka
off
it
actually
Maps
really
nicely
on
to
exported
authenticators.
R
So
the
client
can
create
an
export
Authenticator
request
with
its
identity.
It's
au
prfd.
The
server
can
send
back
an
excellent
export
Authenticator
with
it.
So
PR
f
signature
key
and
then
also
initiate
a
second
export,
authentic
airflow,
where
the
server
asks
the
client
to
authenticate
itself,
and
so
you
get
one
round-trip
flow,
and
this
this
uses
the
exact
same
structure
that
opaque
sign
does
okay,
so
properties
here,
a
picks
on
is
there's,
there's
no
username
privacy,
so
that
be
the
identity
itself
is
sent
in
the
first
first
request.
R
This
is
a
problem
with
both
of
these
in
handshake
forms.
There
could
be
some
ES
and
I
type
mechanism
that
could
be
used
to
protect.
This
opaque
sign
does
not
have
PK
off
and
PKI
off
in
the
handshake
and
opaque
sign-in
exporter
authenticators,
because
it
happens
after
the
handshake,
it
requires
PKI
off
and
but
it
does
provide
username
privacy
and
it
can
fit
potentially
nicely
into
things
like
hgp
to
you
could
think
of.
Rather
than
having
an
additional
certificate
that
uses
export
authenticators,
you
could
have
a
opaque
flow.
R
That
is
part
of
hb2
frames
going
forward.
So,
as
a
recap,
the
strap
proposes
a
new
password
based
authentication
mechanism
for
TLS,
1.3
and
opaque
is
the
first
secure,
apec
protocol
prove
of
provably
secure
against
pre-computation
attacks,
there's
multiple
constructions-
and
this
is
this-
is
I
guess.
The
main
question
for
the
group
is:
is
this
interesting
for
the
working
group
to
pursue,
as
as
an
alternative
for
for
SSR
P
or
for
having
some
sort
of
Paik
that
works
into
u.s.
1.3?
F
Yeah, I was gonna say much what you just said, namely that this seems interesting, and this has been well reviewed, from what I understand; Kenny tells me it's good. I think we should take it and see how far it goes. The other thing I'd want to see is some people telling me they're going to implement and deploy it, because one of the problems with the previous things has been that people didn't want them. So I just want to note that.
Q
Kenny Paterson. Thanks for bringing this forward. I think OPAQUE is a very interesting protocol that's had a lot of formal security analysis. I guess I'm slightly concerned about the integration with TLS 1.3, because now you're bringing together all kinds of other Diffie-Hellman shares and signatures, and so are there any plans for doing some kind of formal security analysis of this integrated protocol? None officially.
K
You've seen me on this stage before with some earlier paper puzzles: we need some PAKEs and stuff for some of our low-touch onboarding for devices with limited interfaces, so I think we'd be looking to deploy something like this in that context, to help with IoT onboarding. And also, in the browser context, I think there are some possibilities with the Exported Authenticator flavor.
K
You
could
imagine
some
scenarios
if
there
were
interest
by
web
developers
in
this
in
exposing
a
an
API
that
could
pipe
through
to
that
sort
of
export.
Authenticator
mode.
Like
you
there's,
obviously
all
the
standard.
You
know
Web
API
considerations
there,
but
there's
there's
at
least
the
logical
connection.
There
seems
to
be.
G
One thing: assuming that we can resolve that, then I'm pretty happy with this one, but I didn't see how that would work, considering that you have an enrollment stage that exists in OPAQUE. That is a little awkward, and honestly it sort of assumes that you've got a pre-existing relationship that, in this context, you don't necessarily have. So I'd like to understand what the requirements are for PAKEs and how this one would fit into that context a little better.
H
It may also be something to look at, to see whether it meets your requirements, because it was developed after looking at SRP, which had the same issues as you found out, but it's a different scheme. You didn't mention it on the slides; I'll send a pointer to the list. There's an expired draft around. Please do.
V
All right, so I am presenting on behalf of Douglas and Shay, who unfortunately could not be here due to academic and timezone reasons. Because we are very short on time, I'm gonna have to hustle through some of this and just focus on the high-level goals. This is all about hybrid key exchange in TLS 1.3. As you may or may not be aware, there's currently a lot of interest in this area; in particular, there are a number of drafts describing how you might do it with TLS 1.2 and 1.3.
V
There
are
actual
experiments
going
on,
in
particular
the
ones
by
Google.
There
are
implementations
by
the
open
quantum
safe
project
that
Douglas
is
working
with,
there's
even
some
work
and
a
similar
framework
of
sorts
for
Ike
as
I
understand
that
and
in
general
people
are
trying
to
build
or
future-proof
the
protocols
such
that.
If
you
we
had
a
you
know,
blessed
post
quantum
or
next
generation,
ehh
algorithm,
we
could
potentially
slide
it
in
and,
in
the
meantime,
just
begin
doing.
Experiments
alongside
existing
classical
can
exchange
algorithms.
V
So,
of
course,
the
motivation
for
this
is
the
post
quantum
threat,
but
this
is
more
so
focusing
on
just
how
would
we
add
multiple
key
exchange,
algorithms
to
TLS
1.3?
Should
we
desire
to
do
that?
A
specific
non
goal
is
to
do
any
kind
of
algorithm,
selection
or
recommendation
of
any
kind
of
sort
specifically,
and
especially
because
the
this
competition
is
still
ongoing
and
see
how
has
not
decided
that
they
are
going
to
take
on
any
particular
post.
Quantum
key
change,
algorithm
and
just
I
hope.
That's
very,
very
clear.
V
This
is
simply
just
the
framework
as
to
how
you
would
extend
the
existing
protocol
to
incorporate
multiple
key
exchange,
algorithms
and
so
there's
a
couple
different
design
parameters.
You
could
go
or
just
axes
along
which
you
could
potentially
slide
things
in.
So,
for
example,
you
have
to
answer.
The
question
is
how
you
would
actually
negotiate
use
of
a
classical
and
post
quantum
khi
algorithm.
There's
the
question
of
how
many
algorithms
and
key
change
our
energy
we
want
to
combine
typically
wishes
to,
but
I
don't
know
in
crazy
world.
V
Maybe
you
want
like
ten
incomes
in
a
particular
connection.
That
might
be
overkill.
Probably
you
have
to
specify
exactly
how
you
would
convey
the
public
keys
necessary
for
the
key
exchange
and,
importantly,
how
you
would
mix
the
result
of
the
key
exchange
into
the
traffic
secret
and
into
the
key
schedule,
and
in
doing
this
sort
of
looking
at
these
different
design
trade-offs.
It's
important
to
keep
in
mind
that
different
aspects
of
their
implication
on
the
protocol
itself,
its
performance
and
potential
software
implementation
complexity.
V
The
first
is
I
would
actually
negotiate
use
of
a
particular
classical
and
post
quantum
variant,
so
you
could
negotiate
them
individually.
So,
for
example,
have
two
separate
lists,
one
that
lists
all
the
classical
algorithms
one
that
this
all
the
post-mortem
algorithms,
that's
probably
not
great,
because,
especially
if
you're
alright,
sorry,
it
might
be
good
from
a
reducing
duplicate
information.
If
you
want
to,
you
know,
have
a
design
such
that
you
can
mix.
You
know
one
post
or
one
classical
algorithm
with
potentially
multiple
post
onto
my
rhythms.
V
It's simply much, much easier, as was pointed out on the list by Martin and others, to negotiate them together, such that you offer up a new named group, for example, that says "I am doing Curve25519 and whatever fancy post-quantum algorithm you want," to negotiate the combination.
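A minimal sketch of that combined-negotiation idea, to make it concrete. The hybrid codepoint and its name below are made up for illustration; the draft deliberately does not name or assign any algorithms. The point is only that each hybrid pair is fused into one NamedGroup entry, so the server selects it exactly like any single group.

```python
# Hypothetical NamedGroup table. 0x001D and 0x0017 are the real TLS 1.3
# codepoints for x25519 and secp256r1; 0x2F00 is an invented hybrid
# codepoint, not an IANA assignment.
NAMED_GROUPS = {
    0x001D: "x25519",
    0x0017: "secp256r1",
    0x2F00: "x25519+hypothetical-kem",  # fused classical+PQ pair
}

def select_group(client_offered, server_supported):
    """Server-side selection: the first client-offered group the server
    also supports. A hybrid combination needs no special negotiation
    logic, because the pair travels as one codepoint."""
    for group in client_offered:
        if group in server_supported:
            return group
    return None  # no overlap: handshake fails or HelloRetryRequest

# A client that prefers the hybrid group but can fall back to x25519:
chosen = select_group([0x2F00, 0x001D], {0x001D, 0x2F00})
```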
The key combination issue is also pretty important. The main requirement is that you want the security of the session key to be robust, and by robust I mean, effectively:
V
if one of the key exchange algorithms is broken, the security reduces to that of the remaining member of the combination. So if Curve25519 breaks but the post-quantum one is fine, it reduces to the post-quantum one, and vice versa. There are different ways you could achieve this by mixing the result of the two key exchange steps into the key schedule, one of which is to simply concatenate the results and shove them into the key schedule.
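As a rough illustration of the concatenation approach described above (a sketch of the general idea, not the draft's normative construction; SHA-256 and the function name `combined_secret` are my assumptions): the two shared secrets are concatenated and fed through one HKDF-Extract step, standing in for the single (EC)DHE input to the TLS 1.3 key schedule.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869) with SHA-256, the primitive the TLS 1.3
    key schedule is built from."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def combined_secret(classical_ss: bytes, pq_ss: bytes, salt: bytes) -> bytes:
    """Concatenate the two shared secrets and run them through one
    Extract step. If either input secret stays secure, the output still
    depends on a value the attacker does not know."""
    return hkdf_extract(salt, classical_ss + pq_ss)
```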
V
F
Yeah, I mean, just in terms of, like, implementation convenience: first of all, I think pretty clearly the concatenation approach is obviously the easiest to implement; that's what we noticed Google did in their last round. So for implementation convenience, if you do that, then clearly you want to follow
F
one of, like, you know, one of options one, two or three. Yeah, it probably doesn't really matter which one of those the draft ends up preferring, but, like, yeah, I think we're much more likely to do it if it looks like that. Yep, agree, yeah.
G
V
A question to the working group is whether or not, as sort of an input or feedback into this competition, we should work on defining a set of requirements for KEMs, or any other, you know, next-generation key exchange algorithms, that would be necessary for them to be coupled with TLS 1.3, particularly going back to the ones that have a nonzero probability of decryption failure. So yeah. Martin, yeah.
G
I would like to see the document say which one we've picked and which one we'll be recommending. Having an appendix with the alternatives and some analysis of those, and perhaps some explanation of why we thought they were suboptimal, would be extra valuable, but ultimately what I want to have is: this is the way we're going to do it in TLS, so that we don't have to have that debate for every single PQ thing that comes along. When it comes to nonzero probability of failure: no thanks, yeah.
AC
V
Ending on that topic: we were just using those two words to describe two possible candidates. You could use, for example, X25519 and P-256 if you wanted to. What we're trying to get at with the framework is a way to just fit in two key exchange algorithms, be it two classical, two next-generation, two post-quantum, two whatever. Well, the draft doesn't say that right now, but I think that's, I think.
V
Q
Kenny Paterson. Two quick points of information. The decryption failure probabilities for the NIST candidates are all pretty low, like 2^-40 or lower, so you can just fail the protocol and restart in that case; you're more likely to have a TCP error of some kind or something. And the second one is that the research community, the crypto research community, is still going around in circles trying to figure out what the right way to combine different key material is, and so it's still very much a live
Q
research topic, which suggests it's slightly too early for the TLS working group to be really pinning down details on the right way to combine different key shares. I think the papers you cited are, like, from the last year or two, yeah, and we're still developing security proofs, particularly for quantum adversaries, for those kinds of things. Yeah.
V
Give it a little while to settle. The arguments that were given at the time, and I don't know if this is actually written in the draft, but the sort of connection was that, in the abstract sense, the combiners were proved secure in the papers, for example Douglas's paper, and that the analysis would sort of just carry over to us, by virtue of the fact that, you know, the key derivation schedule works a certain way. So, for example,
W
V
the dual-PRF example: they give the naive, broken case where you just take, like, the HKDF of, you know, secret share one concatenated with secret share two, and use that as the traffic secret, but that doesn't work, for reasons they give. So their recommendation is to take the PRF of the KDF of key shares one and two, in addition to the ciphertext, the KEM ciphertext. And they say that that's fine in TLS, because there's this Derive-Secret that takes the transcript, and that includes.
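A sketch of the shape being described, as I understand it (my paraphrase, not the paper's exact scheme; SHA-256, HMAC as the PRF, and all function names are my assumptions): instead of hashing the bare concatenation, chain the secrets through the PRF and bind in the KEM ciphertext and the transcript hash, the same kind of context binding TLS 1.3's Derive-Secret already provides.

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def naive_combiner(ss1: bytes, ss2: bytes) -> bytes:
    # The "naive, broken" shape mentioned above: hash the bare
    # concatenation with no context binding.
    return hashlib.sha256(ss1 + ss2).digest()

def bound_combiner(ss1: bytes, ss2: bytes, kem_ciphertext: bytes,
                   transcript_hash: bytes) -> bytes:
    # Chain each secret through the PRF, then bind the context:
    # the KEM ciphertext and the handshake transcript hash.
    k1 = hmac_sha256(ss1, kem_ciphertext)
    k2 = hmac_sha256(k1, ss2)
    return hmac_sha256(k2, transcript_hash)
```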
Q
H
That's the thing; I'll try to be fast. This is about IoT and related topics. We are trying to find out how we can optimize the key exchange. One area of investigation we are doing, besides others such as cTLS, etc., is how to shrink the size of the certificates, and this proposal uses CBOR Web Tokens in TLS instead of the certificates.
H
We then found out that that may be a little bit too heavy for IoT scenarios, and used a different encoding based on CBOR, which then turned into the CBOR Web Token (CWT), which is what I'm using here. JWTs are widely used today for all sorts of identity and authentication scenarios; there is a set of claims defined for and carried in those tokens. They are registered with IANA, so you can look at many of them for the JWT. What is interesting is that those tokens work in a somewhat generic fashion.
H
They use both symmetric as well as asymmetric cryptography, in terms of how they, or the claims, are protected, and what key is embedded inside, so they are quite flexible. Here's an example, and this is obviously not the binary encoding but the diagnostic syntax: you can see the key, an asymmetric key, that is embedded inside. Yeah.
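For concreteness: CWT claims are keyed by small integers registered with IANA (RFC 8392), and the embedded confirmation key lives under the `cnf` claim (RFC 8747) as a COSE_Key. A sketch of such a claims set as a plain map, with placeholder values standing in for a real token:

```python
# CWT claim keys per RFC 8392: 1=iss, 2=sub, 4=exp, 6=iat.
# 8=cnf (proof-of-possession key, RFC 8747). COSE_Key labels per
# RFC 8152: 1=kty (2=EC2), -1=crv (1=P-256), -2=x, -3=y.
# All values below are placeholders, not a real token.
claims = {
    1: "coaps://issuer.example",    # iss
    2: "device-42",                 # sub
    4: 2000000000,                  # exp (NumericDate)
    6: 1900000000,                  # iat
    8: {                            # cnf: confirmation method
        1: {                        # ... holding a COSE_Key
            1: 2,                   # kty: EC2
            -1: 1,                  # crv: P-256
            -2: b"\x00" * 32,       # x-coordinate (placeholder)
            -3: b"\x00" * 32,       # y-coordinate (placeholder)
        }
    },
}
```

The real token would be this map CBOR-encoded and wrapped in a COSE signature or MAC structure; the diagnostic syntax on the slide is the human-readable rendering of the same thing.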
H
And Nick was just recommending reusing it also for these password-based authenticated key exchanges as well. It's used for the raw public key, which is obviously very small, but we found out that that's a little bit too aggressive for some of the deployments, because it lacks, obviously intentionally, anything beyond the key; it doesn't include more than the key. But, as you can imagine, for some deployments, or for many deployments, that's not really possible. This picture I stole from Constance's presentation.
H
Why we wrote this up is because I'm trying to find out whether some of you have also been looking into this and are working in the IoT space, and want to figure out what the implications are for code size, RAM, over-the-wire overhead, etc. So, if you want to do that, drop me a message. The document is extremely simple.
H
Obviously, it just registers the type, but one thing it also does is talk about how to do the mapping, or the matching, between what is found in the CWT, which is, for example, provided by the server side for server-side authentication in the subject claim, and the SNI provided by the client side. It focuses only on PoP tokens, not, as in many of the older OAuth deployments, bearer tokens. Jim Schaad worked on an implementation of this over the weekend, at the hackathon, and found it quite easy to integrate, but he hasn't gotten to that PoP usage yet. So drop me an email if you care about this. Obviously not ready for prime time; an experiment.
K
Quick question: it seems like some CWTs are certificate-like, like the proof-of-possession ones, but many are not, so, I think your last point on the previous slide may have addressed this, is there a need here to kind of profile down CWT, so that only the ones that are certificate-like end up in TLS? Yeah.
H
AD
We also indicate the hostname in the SNI, and the encrypted SNI can itself be suspicious and itself be blocked. There was information, though as far as I know it's not confirmed, that there was censorship based on encrypted SNI usage, and there is a very suspicious idea of whitelisting: we don't block content if we see that the hostname is in this or that whitelist. So I suggest a way to try to defeat DPI.
AD
It makes a complete DPI implementation harder, as the standard does not require that the SNI should be real. We want the SNI to allow determining the hostname we wanted to access; we call this name the true hostname, and a fake name can be published, for example, in DNS. So the client sends a fake SNI in its extension; it goes in cleartext. Then, when TLS 1.3 is in use, we get a normal handshake, and this solution seems more or less reasonable, even when encrypted SNI is itself censored.
AD
Well, of course, it's possible not to deal with fake SNI; it's possible to ban names. But a fake SNI can go for free, whereas in most cases you have to pay for registering a domain. The time to live of a fake SNI can be relatively small if it is delivered via DNS. And it's not a problem to implement a fake-SNI-based solution in code, neither in server code nor in client code.
H
Yeah, at least not the ESNI solution. The ESNI requirements document has a list of attacks that are known against various SNI obfuscation or encryption systems, and I'm asking whether you have done, or plan to do, a check of your proposed solution against this list of attacks. Okay.
H
AE
Roy. So, two comments. The first one is: I think you're wasting your time, because the devices that do this can do the same thing; they can have a database, so you're just moving the arms race very slightly, and you're wasting your time. The second comment is that censorship is not always bad. I have children and I have parental guidance software, so that kind of censorship should not be avoided by this mechanism. Well,
AD
X
Erik Nygren. I think, like, yes, and I think the big challenge with this is going to be configurability; especially, you're not going to want to have people having to do yet another DNS lookup ahead of this, and a lot of the ESNI-type issues, in terms of DNS records not being aligned together, are also going to show up as challenges here. There are cases we've talked about in the past, combined with other things.
X
It became very useful there, in the discussions on that topic, to be able to indicate, when you have a wildcard cert that you're going to be going against, something like wildcard.example.com rather than the full SNI, just because that's a fairly easy win if there's a way to provision it; it's just that provisioning is hard. Oh, maybe.