From YouTube: IETF 115 Technology Deep Dive: QUIC Part 2
Description
IETF 115: Technology Deep Dive on QUIC
This technical deep dive, held in two sessions, will cover in detail the operational realities and technical nuances of QUIC, the new Internet transport technology.
Slides and all the materials from the session:
https://datatracker.ietf.org/meeting/115/session/tdd/
A: So, it is now Tuesday morning, and many of you have probably seen the note. If this is a surprise to you, please do go have a look at this wall of text by downloading it from the Datatracker, and make sure you understand it. We didn't really get any participation at the mic yesterday, so today...
We reran yesterday morning's slides for those of you who are here. It's still great to see you all in person. Thank you especially for waking up so early to be with us today, at 7:30. And, you know, if you're watching us on YouTube later, we love you too.
So, today's agenda. Yesterday was a recap: basically an extremely informative and very enlightening presentation (I learned stuff, which I was honestly not expecting to) introducing QUIC, you know, why it is, what it is, and how it is. Today we're going to dig a little bit more into specific aspects of this. So: Ian, with some help from Martin, I guess.
Moral support from Martin, and slides from Martin. We'll be talking about some experience with deploying QUIC at scale, and then Lucas, who I don't see here (he had to commute in, so he'll be here in a bit), will be talking about applying, observing, and debugging QUIC. After that, we're going to have all of our speakers come up for a panel discussion slash Q&A: that is the audience participation part. So, for those of you who are still waking up:
You have about an hour to find your notes from Monday's session and figure out what you'd like to ask, or, you know, start thinking of your questions now and keep thinking of them as things go on. So with that, we will load the slides.
D: I mean, I can start talking through some of it. Yeah, that's fine. Okay, so we're going to start with load balancing, or the QUIC-LB draft.
A number of you who are involved in the QUIC working group, or have been to the QUIC working group, are probably fairly familiar with the QUIC load balancing draft, but we just wanted to go over it real quickly, because we're actively in the process of deploying it at Google. Specifically, there are a number of things it could potentially really help us with, including linkability.
But one major issue is when you have anycast, where you're using BGP to try to do load balancing for a single IP, potentially globally, as in our case. There are a number of complications. One complication is if you ever want to change the weights on BGP, or anything about the BGP announcements: of course, a number of connections start getting sent to the wrong server, at least without a good bit of extra work. That's particularly true if you combine it with something like a NAT rebind, which makes this all much, much worse.
Another problem is connection migration. Typically, when you think of connection migration, you're migrating, say, from cell to Wi-Fi or vice versa, so it's fairly common that you're actually going to land on an entirely different network. As a result, it's fairly likely that you land on an entirely different peering point. Next slide.
I'll go back one moment, just to go through the agenda: load balancing; then we're going to talk a little bit about blackholing, or flow blackholing; a fun little story about an outage that we had just about a year ago; and then one slide on 0-RTT in QUIC and why it's hard, as well as some pointers you can go look at if you want to learn more. So, let's go, next slide.
Actually, two more. So this is essentially the situation I was talking about: we have an anycast address, we're using an L4 load balancer (in this case, a slightly more intelligent L4 load balancer that is QUIC-aware), and we're sending traffic to an L7 server of some sort, whether that's an application frontend or another layer of load balancers. And so again, if you have route flapping or anything else (next slide), or connection migration, it's fairly likely you will end up on another load balancer.
This is particularly true if you're migrating from cell to Wi-Fi: you're potentially changing carriers, and you're potentially hitting different peering points. This means that even though connection migration is great, it does not work nearly as well, at least in our infrastructure, for anycast as it does for unicast, where an IP is tied, at the very least, to a physical location in a fairly strong sense, like a metro peering point or something smaller than that.
So when you arrive at the second load balancer, it's like: is this a new flow? I don't know what to do with this; I'll load balance it like a new flow. All the old information is lost. We have a lot of servers, so the odds that you hit the same server are basically zero, and migration will fail. We'd really like this to work, because we'd really like to move more traffic to anycast, actually for a number of reasons.
Additionally, it's a very popular option for a number of customers. So, next slide: that's one great reason to do QUIC-LB and connection IDs, but there's also kind of a discussion here (next slide) about linkability. And there we go.
Perfect linkability is basically any time you have a single device connecting to a single machine and you can track it the whole time. If you have a single IP, for example, and a single device, that's perfectly trackable, right? It doesn't even matter whether QUIC or anything else is involved: you're going to be able to track that connection.
Because there's only one connection to that IP in the entire world. So that's kind of the worst-case scenario. More typically, what you have is a good number of clients (next slide) connecting to quite a large number of servers, and asymptotically...
...this looks like perfect unlinkability. So anycast is actually helpful here, in the sense that there's only one global IP: you have a lot of servers that look like one server, and potentially something like a million clients, or possibly even more, connecting to that IP at any given point in time. You combine that with encrypted connection IDs from QUIC-LB, and you start approaching some real vision of unlinkability that's quite strong.
You know, the word "perfect" is asymptotic, but it becomes extraordinarily difficult for an adversary, short of compromising the keys. So it becomes quite a compelling proposition from a privacy perspective as well as from a network infrastructure perspective, and that's why we're deploying it. Hopefully, probably Q2-ish or Q1-ish next year, if all goes well.
Martin Duke, in the front row, is the expert on, as well as an author of, QUIC-LB, so any super-detailed questions I will defer to him, and he will be on the panel with us. So, okay, there we go. Okay, this is the thing I was alluding to the entire time. QUIC-LB has a single connection ID format that begins with two bits for the config ID; that allows rotation of keys and some other things.
It has what is called a server ID portion and a nonce portion, and the typical approach, at least in the encrypted version, which is what we're looking to deploy, is that this is basically all encrypted. Except for the first few bits and the length, this is all essentially opaque bytes to any passive observer. Technically the draft allows for an unencrypted version, but I personally would not recommend that, given the marginal cost of doing it encrypted.
Different infrastructures are different, but the key fact here is this: when a packet arrives that a load balancer doesn't know how to route, any load balancer on Earth, as long as it has the keys that we have distributed to it, can decrypt the connection ID, look at the routing information inside it, and do whatever the network infrastructure needs it to do. So, for example, at Google...
We have a bunch of stuff that doesn't really make any sense outside of Google, so describing it would not be particularly helpful, but every network infrastructure has information that it wants to route by, and that's what it can pack into this encrypted payload.
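To make the shape of this concrete, here is a minimal sketch of the QUIC-LB idea: a first octet carrying a config ID, followed by a server ID and nonce that are encrypted so they look like opaque random bytes to any passive observer. This is not the draft's actual construction (the real draft uses a block-cipher-based scheme, and its exact bit layout differs); the XOR keystream below is only a dependency-free stand-in to illustrate the encode/decode flow.

```python
import hashlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream; the real QUIC-LB draft uses a block cipher instead.
    return hashlib.sha256(key + nonce).digest()[:length]

def encode_cid(config_id: int, server_id: bytes, nonce: bytes, key: bytes) -> bytes:
    """Build a connection ID: one config octet + encrypted (server ID + nonce)."""
    assert 0 <= config_id < 4          # two bits of config ID in this sketch
    plain = server_id + nonce
    cipher = bytes(a ^ b for a, b in zip(plain, _keystream(key, nonce, len(plain))))
    # The nonce also travels in the clear here so this toy can decrypt;
    # the real draft recovers it differently.
    return bytes([config_id << 6]) + cipher + nonce

def decode_server_id(cid: bytes, keys: dict, sid_len: int, nonce_len: int) -> bytes:
    """Any load balancer holding the right key recovers the routing info."""
    config_id = cid[0] >> 6            # config ID selects which key to use
    cipher, nonce = cid[1:-nonce_len], cid[-nonce_len:]
    plain = bytes(a ^ b for a, b in zip(cipher, _keystream(keys[config_id], nonce, len(cipher))))
    return plain[:sid_len]
```

The point of the design is visible even in the toy: the same server ID under different nonces yields unrelated-looking connection IDs (unlinkability for observers), while any load balancer with the distributed key can still route on it.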
Okay, let's talk about blackholing. This has been a problem for a while, at least for QUIC. It's partially historically due to some issues with middleboxes; I think most of those issues have been solved, at least to my knowledge. But nonetheless the internet is a terrible place, blackholing happens, and it's extremely user-visible.
This also happens for TCP, and I know there are some mitigations in the kernel, but I wanted to go through what we did in QUIC to mitigate this problem. Next slide. I also want to say that a 5-tuple can be blackholed in either direction, or both, but quite often it's actually just one direction. It doesn't really matter, though, because you can't really tell whether you're not getting any ACKs because your packets aren't getting through to the server, or because the ACKs are getting dropped.
It looks the same to you as the client. Another challenge is that it's fairly common, for a given pair of endpoints, that most of the 5-tuples between them work: the connectivity between IP A and IP B is largely functional, but one percent of the flow hashes may be dropped. It could be because you go through a bad line card.
You could be going through a bad router. Or there's some load balancing going on, and suddenly you end up getting sent through some bizarre peering point that's extremely far away and has extraordinarily high packet loss. Things just happen. So it's not that uncommon for two endpoints to be largely reachable but either intermittently unreachable or otherwise unreliable.
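The "one percent of flow hashes" failure mode follows from how routers pick among equal-cost paths: they hash the 5-tuple, so two flows between the same pair of hosts can take entirely different paths. Here is an illustrative sketch (the hash function and path count are made up, not any real router's algorithm) showing how changing nothing but the client's ephemeral port selects a different path:

```python
import hashlib

def ecmp_path(src_ip, src_port, dst_ip, dst_port, proto, n_paths):
    """Pick one of n_paths equal-cost next hops by hashing the 5-tuple,
    the way ECMP routers commonly do (illustrative hash, not a real one)."""
    five_tuple = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# Same two hosts, same destination port, only the ephemeral source port
# varies: the flows spread across different paths.  If one path crosses
# a bad line card, only the flows hashed onto it are blackholed.
paths = {port: ecmp_path("192.0.2.1", port, "198.51.100.9", 443, 17, 8)
         for port in range(50000, 50016)}
```

This is exactly why the port-migration mitigation discussed below works: a new source port is, with high probability, a new path.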
The worst part about this is that when you have blackholing, typically you experience a timeout, at least by default: you have to wait for the idle timeout of the connection. That can be as short as 30 seconds, but it could commonly be a few minutes.
This is extremely visible and extremely painful. Having someone sit there and wait for a page to load for a minute or two, because a flow happened to get blackholed, though it's rare, is quite frustrating. And so, in order to bring down tail latency, next slide.
So at that point the user is probably quite frustrated anyway, assuming it's an actual user. And if it's a non-user-facing internet client, like gRPC, that's still an extraordinary amount of tail latency, and at that point you're better off closing the connection and starting all over.
So it does reduce latency, but it has some downsides. It's kind of unclear why this is so useful on the server as well as the client: we tried doing it only on the client side, and it turned out doing it on both sides does help. I can come up with theories, but it's a little bit hand-wavy.
The QUIC solution, and I've got a joke in there, is port migration. Oh, that one never got resolved. That's nice!
So the observation is that changing only the port can drastically change the internet path you're routed over. If you do a traceroute from two different ephemeral ports on your machine, we've seen cases where you actually end up going through different peering points, for example, which seems amazing. In some cases you'll have nine hops on one side and 13 on the other, or something: drastically different routes.
It's almost inexplicable; it's kind of shocking how different they are. And so, as a result, if the connection gets blackholed, just try a different port on the client side. It turns out this fixes the connection a remarkably high fraction of the time. I don't have exact numbers, and it does depend on platform, but I'll get exact numbers at some point...
...in the near future, maybe for a future IETF talk. But it's quite high, and it avoids relying on this five-RTO, or PTO, approach of closing the connection and assuming it's dead: instead of dying, those connections get saved, and we have to use that mechanism less frequently.
This introduces entropy in both directions. Some of the alternatives, such as changing the flow label in IPv6 in order to get ECMP to change, only give you entropy in one direction.
This actually works bidirectionally, so it doesn't matter which side of the path is being blackholed in order for it to fix things. It's now enabled by default in Chromium, which means Cronet, Chrome, and anyone else who pulls in the Chromium source code, which is a good number of folks.
But yeah, it's quite a simple mechanism, it works quite well, and I really recommend it. It's a nice benefit of QUIC over TCP, actually, because we've had similar problems at Google with TCP, and the solutions there are much more challenging than just changing a port; as I said, IPv6 flow labels are one of them. So, next slide. Excellent.
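The client-side logic Ian describes can be sketched very compactly. This is a toy state machine, not Chromium's actual implementation: the thresholds, the port range, and the "close after N PTOs" fallback are all illustrative stand-ins, but the shape (a few consecutive probe timeouts trigger a rebind to a fresh ephemeral port, because QUIC identifies the connection by connection ID rather than 5-tuple) matches the mechanism described above.

```python
import random

class BlackholeMitigator:
    """Toy sketch: migrate to a new client port after a few consecutive
    PTOs, and only give up entirely much later.  Thresholds are
    illustrative, not Chromium's real constants."""

    PTO_MIGRATE_THRESHOLD = 4   # try a new port before declaring the path dead
    PTO_CLOSE_THRESHOLD = 8     # past this, close and start over

    def __init__(self, port):
        self.port = port
        self.consecutive_ptos = 0
        self.closed = False

    def on_ack_received(self):
        self.consecutive_ptos = 0   # path is alive again

    def on_pto(self):
        """Called each time the probe timeout fires with no ACK arriving."""
        self.consecutive_ptos += 1
        if self.consecutive_ptos == self.PTO_MIGRATE_THRESHOLD:
            old = self.port
            while self.port == old:             # pick a different ephemeral port
                self.port = random.randint(49152, 65535)
        elif self.consecutive_ptos >= self.PTO_CLOSE_THRESHOLD:
            self.closed = True
```

Because a new source port usually means a new ECMP hash and therefore a new path, this one rebind rescues a large fraction of blackholed connections before the expensive close-and-restart fallback is needed.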
Oh, so: we had an outage last November, I'm fairly sure it was. It was quite terrifying, actually; it was definitely one of the scary outages. It turns out the impact did not end up being as bad as you might fear, but it was quite terrifying. So there was a query of death. It was triggered by resumption information sent by GFEs, which are Google Front Ends, to clients, and then the clients send the resumption information back to the server.
Unfortunately, this information, sent by the new servers, caused the other servers to crash immediately, so much so that the handshake never even completed: they got the ClientHello and immediately died.
At peak, it hit around 10% of all GFEs, our public-facing internet servers at Google. That's quite a number, but it was very uneven: it was mostly in Europe. And it took an hour and 44 minutes to fully resolve the issue, which is quite a while. The reason for that is actually mostly due to the SRE tooling and the impact on it: the site reliability engineers did not have access to a number of the tools that they normally would have access to.
So after this outage, we coined this new idea called contagion, which is an interaction of distributed systems. A key point here is what typically happens at Google:
We roll something out, we canary it. In this case it went to only two machines, but typically, you know, you send it to a rack or something. We test it, and if it starts crashing or doing something terrible, we roll it back. Standard stuff. However, contagion bugs, at least in this case, have the property that rollbacks don't work, because there is now state distributed in things that are not your server and not the thing you are rolling back.
In this case, it was all clients on the public internet, particularly Chrome, which turns out to be the worst one, for reasons I'll get into later. That's quite a number of clients that potentially had to "roll back"; basically, we were waiting for people to restart their browser, or something like that, before we could fully roll back, in some sense.
That's quite scary from an SRE perspective and a reliability perspective, because rollbacks are your friend: you assume that if you roll back the right thing, you will definitely fix it. Here, rolling back the cause did not fix it. And it also means a single task can actually cause a global outage, which is the other thing that terrified everyone. So, next slide.
So an example, and one of the ones that happened in this case, is TLS or QUIC resumption. Again, this is widely used: the server gives the client something, the client repeats it back to the server, and the server parses it. If server A produces something that, when server B parses it, crashes or otherwise has undefined behavior, bad memory accesses, and so on and so forth, then you're in a bad place. This was sort of a unique outage in the sense that it can't be fuzzed in traditional senses, at least not particularly well, and it's not a zero-day in the sense that an attacker could take advantage of it. As an attacker, you cannot craft a token or a resumption ticket that our servers will parse: it'll just not decrypt, it just won't work.
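The generic lesson here is that any "server serializes, a different server deserializes" blob must be parsed fail-closed: an unrecognized or malformed token should fall back to a full handshake, never crash. The token format below is entirely made up for illustration (the real GFE source address token is encrypted and opaque); the point is the shape of a parser where every unexpected condition degrades to "no resumption":

```python
def parse_token(blob: bytes):
    """Parse a hypothetical tag-length-value resumption blob.
    Returns a dict of known fields, or None, which means: ignore the
    token and do a full handshake.  Crashing is never an acceptable
    outcome, since a poisoned token follows the client from server to
    server and survives rollbacks on the server side."""
    try:
        if len(blob) < 2 or blob[0] != 0x01:      # version we understand
            return None
        fields, i = {}, 2
        for _ in range(blob[1]):                  # declared field count
            if i + 2 > len(blob):
                return None                       # truncated: fail closed
            tag, length = blob[i], blob[i + 1]
            i += 2
            if i + length > len(blob):
                return None                       # length overruns: fail closed
            if tag == 0x10:                       # the one field we know
                fields["client_ip"] = blob[i:i + length]
            # unknown tags are skipped, not treated as errors
            i += length
        return fields
    except Exception:
        return None   # belt and braces: parsing must never take the server down
```

Note how a newer peer adding an extra field (the scenario in the outage) is handled by skipping the unknown tag rather than rejecting, and how every truncation check returns None instead of raising.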
So let's go through what actually happened to Google. I'll go a little faster through these, because they're more visual, but they're quite fun.
A browser does a handshake to what we call a canary GFE, which is a machine that's using the latest software and/or flags. It completes the handshake. Next slide. As I said before, a token is given to the client that it can use to prove its IP address for future use: that's the source address token.
The client saves it, and everything's wonderful. In this case, the canary jobs populated a new field that was mostly used for informational purposes, so there's this new field that got shoved into this particular blob of data, but of course it's all encrypted and opaque to the client. Next slide. Again, the token is held on to by the client. The next GFE does not have the new code, and additionally...
The token should be cleared no matter what after doing this handshake, because you already used it once. However, due to a bug in our IETF QUIC implementation in Chrome at the time, that did not happen, so it sort of stuck around in the browser until a handshake completed. But unfortunately, next slide.
Oh yeah, so you basically have a poison token that will never clear itself, and the client will just keep contacting servers, going: I don't understand. Why are you not there? Why are you not responding to me? This other person responded to me. Why don't you like me anymore? Next slide. And it keeps trying, and they keep crashing. Next slide. Yeah, exactly. So one of the few good things that did happen here is that Chromium has this exponential backoff thing where, if the handshake fails, it won't try QUIC to that domain for another five minutes. However, we were giving out tokens across a whole host of domains for the period of time we were giving out tokens, so a single client could still be crashing 10 or 20 different services for five or ten minutes.
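The safety valve mentioned above can be sketched as a small per-origin tracker. This is a toy model of the idea, not Chromium's actual code: the base delay and cap are illustrative constants, and the real browser's "QUIC recently broken" logic has more moving parts. The per-origin scoping in the sketch also shows exactly why it didn't fully contain the outage: a poisoned token valid across many domains defeats per-domain backoff.

```python
class QuicBrokenTracker:
    """Toy sketch: after a failed QUIC handshake to an origin, back off
    exponentially before trying QUIC there again (using TCP meanwhile)."""

    BASE_DELAY = 300.0       # seconds: "won't try QUIC for five minutes"
    MAX_DELAY = 4 * 3600.0   # cap the backoff somewhere

    def __init__(self):
        self._failures = {}   # origin -> consecutive failure count
        self._retry_at = {}   # origin -> earliest next QUIC attempt time

    def on_handshake_failed(self, origin, now):
        n = self._failures.get(origin, 0)
        self._failures[origin] = n + 1
        delay = min(self.BASE_DELAY * (2 ** n), self.MAX_DELAY)
        self._retry_at[origin] = now + delay

    def on_handshake_succeeded(self, origin):
        self._failures.pop(origin, None)
        self._retry_at.pop(origin, None)

    def may_try_quic(self, origin, now):
        # Backoff is per origin: failures on one domain don't stop
        # attempts (and, in the outage, crashes) on other domains.
        return now >= self._retry_at.get(origin, 0.0)
```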
I'm going to have to do this from memory. Okay: at 02:07 PST, in the middle of the night, servers started crashing. The automated canary checks noticed that servers were crashing and rolled them back almost instantaneously, after only two servers got the new code. The rollback took approximately four minutes, so two servers briefly had the mysterious code enabled on them. By 03:01, everything was rolled back. The SREs looking at it were confused, because the rollback had completed successfully and servers were still crashing. Additionally, they started realizing that none of their tooling worked: all of their monitoring and everything, at least in the London office, was completely inaccessible.
They couldn't access logging, they couldn't access crash dumps; they could barely even figure out how many servers were actually crashing, or where. It was quite brutal, and it took them a while to even realize that this was not, in fact, a global Google outage. In fact, the vast majority of Europe, where this was centered, had no outage whatsoever for google.com, youtube.com, and most major Google domains, due to a quirk of how this particular rollout happened.
But for a while they were absolutely paranoid. What they ended up doing was contacting some colleagues in Australia and New Zealand, who were awake at the time and had no problems whatsoever accessing these corporate systems, because it was basically only crashing servers in Europe, and they basically hooked them up with access. I think it also turned out that Meet worked perfectly during this time, so once they realized that, they were like: oh, I'll just get people in other time zones on a video call to help me fix this. So once they actually had access to everything, which was around 1:38...
...you know, lengthy and painful. I will re-export these slides so they actually say what I just said. Okay, so let's go to the last topic. 0-RTT is quite hard, even on a good day. It's particularly challenging in IETF QUIC versus Google QUIC.
You have things like multiple packet number spaces, which from a design perspective are great and have a number of nice properties, but they are quite challenging to get correct: making sure that you are retransmitting the right thing at the right time, and bundling the right thing with the other right thing, particularly if any single packet starts getting lost.
What you should do in response to losing that packet is sometimes subtle, and if you get it wrong, you either end up with a handshake deadlock, or something that takes so darn long to complete that your users will be very unhappy. So a single packet loss at the wrong moment, if your algorithm isn't implemented correctly, can easily increase tail latency by a few seconds, not a few hundred milliseconds. Additionally, key management is less synchronous compared to TLS over TCP.
So, in particular, the problem that Chrome had: Chrome had this thing where it made the handshake keys available when it got the server's Initial, and then it could parse the server's Handshake packet, acknowledge it, and continue through the QUIC key schedule.
Unfortunately, there was a bug in Chrome where, if certificate verification took longer than it took for the server's Initial to come back, the event never got triggered to actually release the handshake keys. You would never get handshake keys, and Chrome would just be like: I can't process this packet, because I don't have handshake keys, because BoringSSL didn't give them to me. It would just sit there and not acknowledge anything. And of course the server was perplexed, because the server had gotten its Initial acknowledged, and it's like: well, you processed my Initial, why don't you have handshake keys?
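The bug class here is a classic event-ordering hazard, and it reduces to a small state machine. This is a toy reconstruction of the shape of the problem as described, not Chrome's actual code: handshake keys should be released when both preconditions hold (server Initial processed, certificate verification finished), checked on every event, no matter which one fires last. The buggy variant only checks on one event, so the slow-verification ordering deadlocks.

```python
class KeyScheduler:
    """Toy model of releasing handshake keys once both prerequisites
    are met.  `buggy=True` reproduces the ordering bug described above."""

    def __init__(self, buggy=False):
        self.buggy = buggy
        self.got_server_initial = False
        self.cert_verified = False
        self.handshake_keys_released = False

    def _maybe_release(self):
        if self.got_server_initial and self.cert_verified:
            self.handshake_keys_released = True

    def on_server_initial(self):
        self.got_server_initial = True
        self._maybe_release()        # both variants check here

    def on_cert_verified(self):
        self.cert_verified = True
        if not self.buggy:
            # The fix: re-check the release condition on *every* event,
            # so the last-arriving precondition always triggers it.
            self._maybe_release()
```

With the fix, the outcome is order-independent, which is exactly the property QUIC's looser event ordering demands.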
It was just a timing and sequencing error, and again, this is one of those bugs that QUIC, due to the potential reordering of events and so on, I think is quite a bit more likely to have. There's a whole talk on this, with a recording and slides that you can watch, which is why I didn't go into it further.
But there are some complex, sticky details in there. Needless to say, it's something that requires a lot of patience, a lot of debugging, and excellent tooling. Without excellent tooling, I would say it's extremely difficult, if not impossible, to really get this right.
If a bunch of them have terrible tail latencies or something, it's going to drag everything down and make performance worse. So please do not assume that just because you enabled 0-RTT it will be a performance win. Really test it: actually do the measurement and gather the data to ensure that it's actually a win for your particular implementation.
That being said, once it's a win for a given implementation, I would expect it to be a win for all applications using that implementation. So I'm not saying you have to retest it per application, but for developers, I would definitely be very cautious about that. And I'm about out of time.
A: Thank you very much, Ian. And now, direct from the London Underground to talk about excellent tooling, we have Lucas Pardue.
F: That one. Good, we're going to have a clock. Oh, I'll get your clock. Okay, I'll fill in the meantime. I was trying to think of a transport-related joke on the way in while I stood on the Tube. All I'll say is my round-trip time from here to home is about eight hours, I think. So anyway, I'm Lucas Pardue, I'm a co-chair of the QUIC working group, and I'm here today to talk you through all these slides about applying, observing, and debugging QUIC. I'm...
...a little tired, so just bear with me in case I ramble a bit, but I've got a clock, so I won't run over time; this is good. I wanted to spend more time talking about applicability in this talk, but there are just way too many slides, so the slide deck has a load of overflow and backup slides. Maybe if you have some questions towards the end, or in the panel session, we could go back to those; or you can go away and...
...if you're inspired by this talk and want to find out a bit more, just keep that in mind. In this talk, we're probably going to touch on topics that Martin, Jana, Ian, and Martin may have touched on, but through the lens of: something's going wrong with my QUIC connections, I don't really understand it that well, I'm maybe an SRE or an ops person, and I just need to figure this out.
I'm familiar with TCP, I understand networking, but what can I transfer from my previous knowledge into QUIC, and what things shouldn't I do? Where do I need to retrain my brain? These kinds of things. So this is just going to walk through a couple of real-world examples, literally something I was looking into the other week. It's a silly bug, but, you know, something that had been sitting there, latent, and so we looked at it. Let's go, next slide, please. But first, the important things, just to reiterate.
You probably need to know some TLS. Some of the stuff we're going to look at shortly gets into things like the TLS ClientHello and understanding aspects of the TLS handshake, and specifically the timing and interaction of those messages. That's going to be important, because quite often, when things don't work, they fail at the first step, where the handshake fails. So let's go, next slide. And it's not HTTP.
This is one of my bugbears. Yes, okay, gQUIC was built to carry HTTP, but that's not what we have today. We have IETF QUIC, and it's capable of carrying anything you can imagine, literally; if you have an application, you need to go and create an application mapping. I'll allude to that later on, maybe, and say more about what that means.
But yes, it's not just HTTP. There are aspects of QUIC's design that lend themselves well to bidirectional exchanges, but we have other capabilities to carry things unidirectionally, and we have a datagram extension that allows unreliable message delivery. Don't make assumptions based on your understanding of how HTTP works, and if you don't understand how HTTP works, that's even better, because you can come in and take all these learnings away. Next slide, please. And it's definitely not "the web over UDP". It's here for many applications. In our working group...
...we can help advise you if you want to use QUIC, but we won't be the gatekeepers for application mapping documents that describe how to use QUIC. Generally, the transport services it provides work now, modulo some implementation concerns, and it's here for everyone to use. It's not the only thing you should use; it's not, you know, our panacea for everything, but it's definitely a tool to sit in your toolkit for the internet. Next slide, please. And QUIC is quick.
Next slide. It's a secure transport protocol, which means we might have some issues trying to analyze it if we're doing things like packet captures. Just keep in mind that some of the things you might be able to do with TCP, where you're analyzing it with tools like Wireshark, are not impossible with QUIC, but they're different, and they might require different approaches. Next slide. And it's what you make it, which I've already touched on. Next slide.
So, if you haven't got time for any of this: QUIC starts with a handshake. We have application data sent as soon as that handshake is done, for some definition of "done"; Ian just talked about 0-RTT, and there are effectively different stages within the connection at which application data can be exchanged, but it all relies on this handshake happening first. Past that point, every packet is going to be protected.
If you don't have the keys for that session, you're not going to be able to see the contents; you'll have a very slim sliver of information available in the header. It's tiny, and you can't just rely on that. The other important thing, maybe it's been touched on already, is that we have reliable and unreliable application data. Reliable data is retransmitted in new packets; it's not retransmitted by resending the original packet.
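This retransmission difference is worth pinning down, because it is exactly what breaks TCP-honed intuitions about packet captures. A toy model (not a real loss-recovery implementation) of a QUIC-style sender: lost stream data gets re-bundled into a brand-new packet with a new, never-reused packet number, unlike TCP resending the same segment.

```python
class QuicSender:
    """Toy sketch of QUIC-style retransmission: data moves to new
    packets; packet numbers are monotonic and never reused."""

    def __init__(self):
        self.next_pn = 0
        self.in_flight = {}   # packet number -> (stream_id, offset, data)

    def send_stream_data(self, stream_id, offset, data):
        pn = self.next_pn
        self.next_pn += 1     # packet numbers only ever increase
        self.in_flight[pn] = (stream_id, offset, data)
        return pn

    def on_packet_lost(self, pn):
        """The *stream data* is retransmitted in a fresh packet;
        the old packet number is retired, never resent."""
        stream_id, offset, data = self.in_flight.pop(pn)
        return self.send_stream_data(stream_id, offset, data)
```

So in a capture, the "same" bytes of a stream can appear under two different packet numbers, which is why you drill down to stream IDs and offsets rather than matching packets byte-for-byte.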
This can cause differences in the way that packets and framing work. If you're familiar with TCP and you're trying to look for particular byte offsets within a packet capture, that isn't the way to do it; you're going to have to drill down into streams, and probably into any application-layer use of that stream, or framing or subframing on top of QUIC streams. Next slide, please. And we'll talk about applicability and manageability, too.
Whatever we mean by "management" of a network (maybe if you're an operator of a network, then you're the manager; I don't want to get into that), we have two excellent RFCs that were published in September, so fairly recently, but they've been developed alongside the core RFC specs, you know, 8999 through 9002, and they cover two different aspects of things. You might not be aware of these documents.
They're a really good read if you want to get away from the nitty-gritty protocol details, which are good, but that's more of a reference manual; these documents are written for a different target audience. The first one, the applicability draft, RFC 9308, talks about the features of the transport protocol and how you might adapt your application, whatever it is, to work on top of QUIC. It's not just an instruction guide, though: it covers the caveats and considerations you might need to make. You have stream concurrency, which is a different feature compared to TCP's one single reliable byte stream; QUIC offers a lot more things, a lot more ways you could hurt yourself, and a lot more potential for interop problems, where the client and the server have a different kind of understanding about what concurrency means, for example.
So it's an excellent document for that kind of thing. And then there's a completely different target audience for the manageability draft, which is more for people who aren't operating a QUIC stack at all, but are seeing QUIC traffic flowing back and forth across their network. This touches on the QUIC invariants and the QUIC transport protocol, and gives you, effectively, some explanation of what QUIC is and what QUIC will be; maybe how to analyze it, and maybe how you won't be able to analyze it, compared to other protocols traversing your network.
F
So, next slide, please. Again, let's go back: everything starts with a handshake. There are all these places where the handshake is mentioned — it's everywhere, it's great. This has already been covered, I hope, but the key items I want to focus on today are these two packet types that we have: Initial and Handshake. These are the things you'll probably see in the first five lines of any capture that you take.
F
Initial is a type — it's not an adjective. Sometimes when I talk to people who aren't familiar with QUIC and say, "we need to see the Initial packet", they think we mean the first one. But you get things like retransmissions or retries — reattempts to send the Initial — so you'll see multiple Initials, and they also hinge on the actor in the exchange.
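For orientation, the type of a QUIC v1 long-header packet lives in bits 4–5 of the first byte (RFC 9000, Section 17.2); a minimal decoding sketch, not tied to any real stack:

```python
# Decode the long-header packet type from the first byte of a QUIC v1
# packet (RFC 9000, Section 17.2). Bit 7 = header form (1 = long header),
# bit 6 = fixed bit, bits 4-5 = long packet type.
LONG_TYPES = {0: "Initial", 1: "0-RTT", 2: "Handshake", 3: "Retry"}

def long_header_type(first_byte: int) -> str:
    if not first_byte & 0x80:
        return "short header (1-RTT)"
    return LONG_TYPES[(first_byte >> 4) & 0x03]

print(long_header_type(0xC0))  # Initial
print(long_header_type(0xE0))  # Handshake
```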
F
So you'd have a client Initial and a server Initial. On using the terminology: I'm that person who insists on trying to use it correctly, to everyone's annoyance, but it's really critical here. My understanding is that we need to up-level a lot of people within the wider ecosystem — people like SREs or operations. They don't need to understand all of the details, but when we're communicating: "can you help me and see if the Initial made it through the network and arrived at my server stack?"
F
People can be familiar with these kinds of conversations from TLS — like, "did we receive the Client Hello?" It's similar to that kind of thing. So if there's one thing I would ask you to focus on and take away: use the right terms. Next slide, please. So there's this excellent tool, the Illustrated QUIC Connection — I can't remember the exact name; there's a link here. It covers not just QUIC but also TLS 1.2 and 1.3. I think this is a visualization of actual, kind of simulated data.
F
If you go to that website and the repo behind it, you'll see that they use real client-server interactions, take the pcap, and then take those bytes. You can see here — I've just tried to point at the text as well; I apologize, there's a lot of stuff going on, and I do encourage you to go visit it — but you can see there are two different Initials within this vertical trace: the client Initial and the server Initial, with the arrows pointing in opposite directions.
F
If you go to the next slide: each of these boxes you can expand, and it's really meant as a learning tool — this isn't a debug tool. The bytes here are canned, so you can go into the GitHub and understand exactly what they are. Maybe you want to go and fiddle or do other things; maybe you've got some comments on additional annotations you might like to see. But you can open up the client Initial.
F
You can view the TLS Client Hello within that client Initial and then drill down into the different bytes there. So it's good if you want to, say, take an existing trace and compare it to a reference example. Next slide, please. To drill even further in, there are these transport parameters. These are the properties of a connection that each endpoint will advertise to its peer.
F
We don't have time to go into that, but if you're looking at a packet capture, drilling down, and you see these transport parameters and wonder what they are — go and look at IANA; they should be registered there. Maybe not, but it will give you a brief overview. Most of these are in RFC 9000, but we have things like extensions which add their own transport parameters. These kinds of things might be new to you.
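As an aside, transport parameters (and most QUIC fields) are encoded as the variable-length integers of RFC 9000, Section 16, where the top two bits of the first byte give the encoded length. A small decoder sketch, using the worked examples from RFC 9000:

```python
# Decode a QUIC variable-length integer (RFC 9000, Section 16).
# The two most significant bits of the first byte select a 1-, 2-, 4-,
# or 8-byte encoding; the remaining bits carry the value.

def decode_varint(buf: bytes):
    """Return (value, bytes_consumed)."""
    prefix = buf[0] >> 6
    length = 1 << prefix                 # 1, 2, 4 or 8 bytes
    value = buf[0] & 0x3F
    for b in buf[1:length]:
        value = (value << 8) | b
    return value, length

# Worked examples from RFC 9000, Appendix A.1:
print(decode_varint(bytes([0x25])))                    # (37, 1)
print(decode_varint(bytes([0x7B, 0xBD])))              # (15293, 2)
print(decode_varint(bytes([0x9D, 0x7F, 0x3E, 0x7D])))  # (494878333, 4)
```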
F
So, let's look at an illustration from live connections. We just looked at some pre-canned examples; you can use your old friends pcap and Wireshark to look at this too. Wireshark can successfully dissect QUIC packets in any version from 3.4 upwards.
F
The examples here for the remainder of this session I created using Cloudflare quiche — I work for Cloudflare; it's the client and the server I'm most familiar with, and we have some example apps in the repo. Pretty much any other client is going to be very similar to this, so if you have a favorite, you could probably recreate these things. In this example I have a server running on localhost, and these flags effectively minimize the handshake — they just reduce a few steps for clarity.
F
If I didn't have that no-retry flag, for instance, it would send an additional message during the handshake. It's all cool, but it's complicated and we don't have the time. It's okay — you can ignore those things.
F
This is just a request on my localhost to get an index file, and the pcap underneath it. What you see is very small text and you probably can't read it, but that's kind of the point, because I want you to go and look at this yourself. The first packet — the first line there — is the client Initial going towards the server, and then we have some stuff coming back the other way. It says Handshake there, but we're going to drill into that next.
F
Well, one more thing: I have some ready-made examples. Speaking with Brian, we thought it would be cool to take some live, real things that I captured and put them up on GitHub — these pcaps and qlogs that I've got for localhost. It's just a simple, good exchange of a request that succeeded, at the QUIC layer at least. So next slide, please. Again, too tiny to see.
F
It's a lot easier here because we've only got a very small pcap of one interaction, and we already know what it is. If you're trying to find a needle in a haystack, looking for some of these things can be a bit difficult, but we've got indicators like the source port or the destination port.
F
The source port is a bad indicator — don't rely on it being anything; it could be the same for everything, because QUIC is clever. But alongside that, within the packet type information here, the dissection tells you the type, and underneath, within the contents of the client Initial, we have the Client Hello — the TLS Client Hello message. Zooming in again, we see the transport parameters. This is exactly the same as the other slide I showed you, but these are the actual transport parameters.
F
That's what the quiche client sends in this case. Next slide, please. The important thing here is Application-Layer Protocol Negotiation (ALPN). I don't really have the time to get into this at all, but in this case what happens is that the client sends a whole set of different variants or versions of HTTP that it speaks.
F
So it's like an offer-answer negotiation of the application protocol, and you might see different lists, different sets of protocols in there. If you're writing your own protocol, you should definitely register an ALPN for it, because it allows multiplexing of different applications — maybe you just want to set up one QUIC listening port and support a slew of applications on it. This touches on the applicability draft: there are things you should consider around transport parameters in relation to the application protocol you're trying to negotiate, and where things might not completely converge.
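The offer-answer shape is easy to sketch. This is an illustrative selection routine (server preference wins), not any particular stack's API:

```python
# Toy ALPN offer/answer: the client offers a list of protocols, the
# server answers with the first of ITS preferences that the client also
# offered. No overlap ends the handshake in real stacks
# (no_application_protocol).

def select_alpn(client_offer, server_prefs):
    offered = set(client_offer)
    for proto in server_prefs:
        if proto in offered:
            return proto
    return None  # no overlap -> handshake failure

client = ["h3", "h3-29", "hq-interop"]
server = ["h3"]
print(select_alpn(client, server))   # h3
print(select_alpn(["doq"], server))  # None
```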
F
But that's another talk for another day — go read the applicability draft. Next slide, please. In the reverse direction we have a server Initial and a Handshake, and you can see at the top that the indicators are similar. The source port is the server's listening address that we showed a couple of slides back. We have the Initial type indicated again, and just below that a Server Hello coming back — but we can't see as much information as we could in the client Initial.
F
We have this server Handshake packet just underneath, but you can see it's pretty clear there: we can't decrypt any more, because the secrets aren't available. What does that mean? Next slide, please. We need the keys — we need the keys to see the full picture. Even at that early stage, with one or two interactions going in opposite directions,
F
we might not be able to see everything that's happening that would help us diagnose some kind of issue early in the connection. You're probably familiar with something called SSLKEYLOGFILE; if you're not, it's used by many, but not all, implementations. Endpoints can be instructed — with an environment variable, say, or just built or configured that way — to place their keys in a location, in a format that contains the session keys.
F
This is kind of common — it's a de facto standard, and Martin's been working on a new I-D to formalize the format, which is good work. The session keys are symmetric, so if you can dump them from either the server or the client end, you should be able to decrypt the packets. So, depending on who you are and what you're in control of, sometimes you can use this when you need to see both directions of traffic.
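The keylog file itself is plain text, one entry per line: a label, the client random, and a secret, all hex-encoded. A tiny parser sketch — the sample entry below is made up for illustration, not from a real session:

```python
# Parse SSLKEYLOGFILE-style lines: "<LABEL> <client_random_hex> <secret_hex>".
# TLS 1.3 sessions log several labels (handshake and traffic secrets for
# each direction); tools like Wireshark read this same text format.

def parse_keylog_line(line: str):
    label, client_random, secret = line.split()
    return {
        "label": label,
        "client_random": bytes.fromhex(client_random),
        "secret": bytes.fromhex(secret),
    }

# Hypothetical entry (hex values are placeholders, for illustration only):
entry = parse_keylog_line(
    "CLIENT_HANDSHAKE_TRAFFIC_SECRET " + "ab" * 32 + " " + "cd" * 48
)
print(entry["label"])               # CLIENT_HANDSHAKE_TRAFFIC_SECRET
print(len(entry["client_random"]))  # 32
```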
F
In this example, like I said, we're just using an environment variable to dump some keys into a file locally, and we can configure Wireshark to pick that file up. Next slide, please. Okay, I can't go into it here, but the Wireshark documentation will explain this — go into your preferences; there are all kinds of cool tricks you can use. But once it's configured, once it finds the correct session keys for an interaction:
F
This is again the same dissection I showed earlier, but this time that Handshake packet is revealing a lot more information. It's showing the ALPN — in this case, that the server picked h3. So from that whole list of things, it picked one. Next slide, please.
F
But if we revisit the complete picture, not just the one packet, we can see the dissection without the keys. The top four lines — too small again, sorry — show some packet types, and then it transitions into those protected payloads, where we can see a field called the DCID. I'll explain that in a couple of slides, but that's it: if you click those things, you're just going to see opaque bytes, and you're not going to know what they are from the dissection.
F
Well, with keys, you can see that we can actually drill in now: we can see different packet types, packet numbers, frames within those packet types, stuff around streams, acknowledgments, crypto, etc. Next slide, please. And, as I mentioned, these are connection IDs. If you look in this example, we're going to see different connection IDs going in opposite directions. This can be useful if you're just trying to trace.
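Connection IDs, not addresses, are what tie a QUIC flow together (they survive NAT rebinding and migration). A sketch of grouping captured packets by destination connection ID, with made-up values:

```python
# Group captured packets by Destination Connection ID (DCID). In QUIC,
# the connection ID -- not the UDP 4-tuple -- identifies the flow, which
# is why tracing by port alone can mislead.
from collections import defaultdict

def group_by_dcid(packets):
    """packets: iterable of dicts with 'dcid' (hex string) and 'num' keys."""
    flows = defaultdict(list)
    for pkt in packets:
        flows[pkt["dcid"]].append(pkt["num"])
    return dict(flows)

capture = [
    {"num": 1, "dcid": "c0ffee"},  # client -> server direction
    {"num": 2, "dcid": "decade"},  # server -> client uses a different DCID
    {"num": 3, "dcid": "c0ffee"},
]
print(group_by_dcid(capture))  # {'c0ffee': [1, 3], 'decade': [2]}
```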
F
So that's Wireshark, and it does one thing well, especially if you can dump the keys. But we also have something called qlog, which my colleague Robin, down at the front, has been working on for a number of years. He'll tell me my description here is rubbish, so go and speak to us at the end and we can tell you everything. But implementations often have debugging or logging of their own to augment things
F
like packet captures: you can see the reason why something happened, not just the fact that it did happen. And a common logging format can encourage an ecosystem of analysis and tooling that isn't specific to one implementation.
F
So what we have in the QUIC working group is adopted work items for a qlog core schema, defined in CDDL. This is very extensible, and we can use that extensibility to define concrete event definitions. At the moment we've just focused on QUIC- and HTTP/3-related events, because that's what we've worked on, but other kinds of things can be added. So if you're adding an application mapping, maybe you want to define some qlog events — that's something we're trying to figure out this week:
F
what's the right level of guidance to give people. But it's extensible — it's just logs, it's just text; you can stick stuff in if you know how to read it, if you know what to look for. What's nice is that many of the current fleet of QUIC implementations do support qlog.
F
So, if you're trying to find a bug between client and server implementations, and you understand what stack it is — say it's curl built with HTTP/3 — you can enable qlog, get this qlog format out and stick it somewhere. That will help you analyze, maybe fault-diagnose more quickly, or understand what's happening with the congestion window. Next slide, yeah. And we have qvis, Robin's lovely visualization suite, which I would just describe as trying to make sense out of oodles of data.
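qlog is JSON-shaped event logging, so even without qvis you can slice it with a few lines. A sketch against a minimal, hand-made trace — the field names follow the qlog drafts loosely, and real files vary between draft versions:

```python
# Filter a minimal, hand-made qlog-style trace for packet_received
# events. Field names here follow the qlog drafts loosely; real files
# differ between draft versions, so treat this as a sketch.
import json

trace = json.loads("""
{"events": [
  {"time": 0.0, "name": "transport:packet_sent",
   "data": {"header": {"packet_type": "initial", "packet_number": 0}}},
  {"time": 8.2, "name": "transport:packet_received",
   "data": {"header": {"packet_type": "initial", "packet_number": 0}}},
  {"time": 8.3, "name": "transport:packet_received",
   "data": {"header": {"packet_type": "handshake", "packet_number": 1}}}
]}
""")

received = [e["data"]["header"]["packet_type"]
            for e in trace["events"]
            if e["name"] == "transport:packet_received"]
print(received)  # ['initial', 'handshake']
```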
F
If you went on the GitHub and grabbed some of the reference material I had, you can go to this tool. What you want to do is option two, "upload a file" — it's not really an upload; it's all client-hosted, at least for what we'd analyze today, so it all happens in JavaScript in your browser — and it comes up with this excellent sequence diagram. Again, way too small to see the details here, but this is effectively rendering the same information
F
we just looked at in Wireshark, in a different way. This is going to give you a bit more of a context-aware view of packets exchanged between the client on the left and the server on the right. Each line is a packet — that's my interpretation; again, Robin, correct me at the end — but within those you have the frames, with a very quick summary view of what those frames are, and if you click any of those boxes it's going to give you an expanded view.
F
This is a real example, and it's a good learning tool if you're just trying to get familiar — if you're not somebody who can read the specs back in one go, walk away and understand everything; if you're a bit more like me, a bit more practical, and you just want to run some stuff and try some things out. Next slide, please. So I just want to look at a real-world failure that we had.
F
This is using SSLKEYLOGFILE again — and, oops, the box is in the wrong place — so yeah, it's using SSLKEYLOGFILE, amazing. But there's a property in HTTP/3: you need to be able to open some new unidirectional streams. Again, I don't have the time to go into that, but you can configure this quiche client — this example application — to advertise from the client to the server a value of zero: that it can open zero initial unidirectional streams when the handshake is complete.
F
Our server code doesn't like that. It wants to be able to open these streams, and if it can't, it will detect the error in the code and send a CONNECTION_CLOSE message — an explicit, immediate close from the server to the client, saying: "I want to open up the control stream; I can't; sorry, go away."
F
This is the kind of thing you should probably be looking for if there's ever a report of an issue: stream resets and CONNECTION_CLOSE messages are good, strong indicators of what happened — but maybe not why. Next slide. Same failure, effectively the same trace — the qlog captured at the same time, just a different rendering. You can see the connection closes at the bottom, in red. That's as much as you can see because of the rendering, but it might be a better way to view these things. Next slide.
F
Another one — another failure, this time with a pretty much identical command, but for a different host name. The last one was localhost; this time it's to my website. The pcap looks a little different — just from a mile-high view, it's different, which is always a good sign. If someone's saying "this behaves weird for this thing and not that one" and you see a pcap that's identical, that's annoying. But if there's something like this, that's immediately
F
a change in behavior — that's a good signal. So here it's like: where is that CONNECTION_CLOSE? I told you what the behavior was, what we were expecting, and we're not seeing that. Why? Next slide, please. It's even worse — it's even smaller, you can't see it. The indicator here is that it's really long; there's a lot going on in terms of timing. There's no CONNECTION_CLOSE received by the — there's a bug here: it's received by the client, not the server — but anyway, next slide, please.
F
So what's the difference? We've got effectively the same client behavior, just pointing at two different endpoints. Are they different implementations, different stacks? No — they use the same underlying stack in this case. I know that, but that's because I've got insider knowledge. What's the root cause? What could this possibly be? That's what I had to spend some time on the other day, looking into it.
F
There are different types of connection close. We haven't got the time to go into this, but you have a transport layer and an application layer, and error codes, and they send different types of close. Next slide.
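Concretely, QUIC v1 distinguishes a transport-layer CONNECTION_CLOSE (frame type 0x1c) from an application-layer one (0x1d). A labeling sketch, not any stack's real API:

```python
# QUIC v1 has two CONNECTION_CLOSE frame types (RFC 9000, Section 19.19):
# 0x1c carries a transport error code, 0x1d an application error code
# (whose meaning belongs to the application protocol, e.g. HTTP/3).

def classify_close(frame_type: int, error_code: int) -> str:
    if frame_type == 0x1C:
        return f"transport close, error 0x{error_code:x}"
    if frame_type == 0x1D:
        return f"application close, error 0x{error_code:x}"
    raise ValueError("not a CONNECTION_CLOSE frame")

print(classify_close(0x1C, 0x0A))    # transport close, error 0xa
print(classify_close(0x1D, 0x0101))  # application close, error 0x101
```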
F
So the root cause here is a trouble with timing. The HTTP/3 library that we use for the server side, like I said, sees a zero and doesn't like it. It calls close on the transport layer and passes it an application-layer code and a reason. But neither the application nor the HTTP/3 library tracks the transport state — whether the handshake had completed correctly at that time — so the transport doesn't send the application close unilaterally, because there's a potential leak of application-layer information.
F
But that was just a bug, and so the server didn't send that close. The client kept retrying — not retiring, that's what I meant to say — and eventually the idle timeout kicked in and it gave up, after several rounds of retry. Next slide. And so that debugging, given I had some tooling and some knowledge of where to look, went quite quickly.
F
I was able to turn that into a unit test and come up with a fix — which was a one-liner — and now the client will always receive a timely close. Next slide.
F
So yes, in summary: it works if you know what to do, and I want to make sure that we can scale out our understanding of QUIC and our ability to use it. It's one thing having the specs on the shelf — great, wow — but we want people to use them. QUIC is not TCP, it's not TLS, it's not HTTP; it's all of the above, over UDP.
F
The handshake is kind of the first step in everything, and there's minimal information available on the wire — see RFC 9312 for information about how to observe QUIC if you're not as involved in the interactions between client and server as you would like. And just be aware that implementations and deployments can behave differently. Sometimes there are bugs; sometimes a lot of things defer to an implementation choice.
F
You know, there are different levels of normative requirement, and then just opinions of the operators. So don't assume too much: don't assume that an implementation is perfect, or that a behavior you see that seems weird isn't done by intention. And there are ways you can get some logs to work this out. That's it, I'm done. Thank you.
A
For the remote people: can you actually put yourselves in the queue? You can go ahead and say your name, but we do want to make sure we defer to the people who are not here.
B
My name is David, and I guess this is a question for Martin Thomson. I probably could have looked it up myself, but I think you mentioned that you initially tried to use DTLS and then decided it was a bad idea, so you're not using DTLS — but QUIC is UDP, so is the TLS over TCP or UDP? I got confused on that point.
J
That's a fun question. We initially thought that maybe the handshake could use the DTLS mechanisms for reliability: we would just do DTLS to start with, and then we would get to do QUIC things once the DTLS handshake was done.
J
Unfortunately, DTLS doesn't have the sort of really sophisticated recovery mechanisms and all the other things that QUIC has, and so you get a lot more advantage from building all of the reliability and in-order delivery mechanisms on top of QUIC. And so that's what we've done.
J
By doing that, we were able to use TLS, which is a much simpler protocol, and so we avoided all the DTLS machinery necessary to do reliability, fragmentation, reordering and all those sorts of other wonderful things, and just used the stuff that we use for reconstructing streams in QUIC. That turned out to be much, much easier in the end. As it turns out, once we'd made that decision, it was very straightforward to use all of the QUIC machinery for managing packet loss, recovery, in-order delivery and whatnot.
A
So yeah, you get the advantage that you can use a single software architecture, and the disadvantage is you get a really wonky layering diagram, right? We spoke about that early on — it's like, "oh, this layering diagram looks weird, there must be something wrong here" — and it turns out that it was right.
I
Yeah — I think we've always held that the layering diagram should follow reality; reality shouldn't follow the diagram. And in this particular case it was very obvious that having a separate loss detection and recovery machinery was going to be fundamentally sub-optimal. It's not something you want: if you have a big data stream and you have all the mechanisms built in, you want to use the same one for all the data that you send, not just for the handshake. Handshake data is not special in this particular way.
G
I apologize — I suspect this was already addressed and I missed it — but what's the current status of the original Google QUIC implementation with regard to the IETF implementation?
D
We are attempting to turn it down and remove the code as soon as humanly possible. It is a very, very high priority for my team; the expectation is Q1.
D
It may be off sooner for certain other properties — YouTube, say, might turn it off earlier — but it's either going to be the very end of Q4 or Q1 that you're going to start seeing it being turned off. Hopefully the code will be deleted by the end of Q1 if I'm really lucky, but it's a lot of code.
H
Hello — I'm from [inaudible]. Just a quick question about QUIC and using it over NATs and firewalls, etc. Are there any special considerations or things you have to do, say, to keep a connection alive? Just wondering about that.
D
Just don't drop packets partway through a flow — remember, one packet in each direction is enough to complete a handshake in some circumstances. We've had issues with packet inspection where some middleboxes say, "I'm going to take a few packets to figure out what flow type this is," and after three packets it's like, "yeah, I don't think I know what this is, so I'm just going to drop it." It goes terribly; those experiences are dreadful.
D
So don't do that. Besides that, I think a lot of things are quite acceptable.
D
There's also information in the manageability draft that's actually quite good.
J
Yeah, so the manageability draft talks about this from the client or server side, when you potentially have a middlebox on the path that's doing inspection or firewalling. As long as they follow the advice that Ian talked about, then things will work. What we do find, though, is that some NATs will time out faster than the connections will time out. That's okay — the use of connection IDs will ensure that connections will still manage to work.
J
It looks like migration at the server, but everything just sort of works out reasonably well. We don't recommend that people do keepalives. Keepalives are useful in very narrow circumstances, but in most cases you won't need them. It is better to just walk away from the connection and never send any other packets if you don't need it — we found that in particular when you're dealing with mobile devices.
J
That's excellent advice, because you don't want to wake the radio up just to say that you're going away. Just walk away.
I
Just to add some context, because I think it may not have been clear: browsers currently rely on TCP as a backup if QUIC fails.
I
So if the QUIC connection were to fail, the user would not see a problem — they'd fall back to HTTP/2 and TCP, and that would work just fine. That's currently something we do across the board. The problem is if QUIC doesn't fail — QUIC succeeds, HTTP succeeds — and then it fails eventually. That's a harder one to debug, and that's the basis for the advice.
I
These things are, by definition, just part of the protocol — it's all end-to-end encrypted, and that's one thing — but the things that the NAT can do are things like dropping in odd places, or things of that sort. Those are the things to be mindful of.
D
One thing I'll add, as stated in the manageability draft: if possible, if NATs could actually conform to the RFC and use the — I think it's a two-minute — minimum idle timeout before rebinding, that would be sufficient for the vast majority of use cases. That would be my ask: ideally, if they could do that, that would be wonderful.
F
Hey, just to add: Donna touched on TCP fallbacks. This works well for HTTP, but QUIC is not just HTTP. If you're designing another application to go on top of QUIC, you might have different considerations — maybe you're taking something that already works over TCP and making a new mapping onto QUIC. The applicability draft gives a good section of considerations about the need for a TCP fallback — but not a mandate — and gives the color commentary about these kinds of trade-offs, which is illuminating.
C
George Michaelson. Can I shift target a little bit, which riffs on what you just said? I realize 99.9999% of the world is HTTP, and I would approve of targeting the thing that will make everyone happy — and money — but I live in SSH and mosh, and I sit here wondering: why isn't there a shim-layer library that simply allows me to do single-packet, transactional work, reliably, across address mobility, taking care of this stuff? Why hasn't there been an interface spec
C
that gives me a command-and-control interface for a machine that runs over QUIC? sTunnel exists, so it's not like people don't understand how to do SSH over TLS through abstractions. But couldn't you code a shim layer that just gave me transparent access to this framework — to do 0-RTT, low-latency, protected access; walk away from it, come back on a different IP, and carry on the session?
D
I mean, mosh solved a problem for all of us, right? It did solve a problem for all of us, and the fact is that mobility is something we want — session mobility, across the board, is something you want. But, as Martin's pointing out, I think the protocol offers the facility for doing it.
C
But the essential quality in my very badly worded question is: there is a significant barrier to entry unless you sit in HTTP mode and think, "oh well, QUIC is just HTTP, I'll use QUIC." Well, it's not just HTTP. And your comment went to: if you are running in different contexts, you may have to do TLS failover. And I'm sitting here going: I have to code the TLS failover? No — you should be coding the TLS failover for me. It should be a transparent shim function; I should not have to perform that function.
J
These capabilities — we would all love that very much. Some of us are doing other things at the moment, so maybe this gentleman here will do it.
A
Hold on — I do want to say this; I thought one of you was going to. MASQUE is tomorrow morning, right? And MASQUE is about 80% of what you want, built and put together in a different way for a different use case. So I'm not saying "use MASQUE for mosh," but if you basically reorganize the bits of MASQUE, you could build a mosh. All of you — just let this be inspiration.
I
Yeah, but I do want to reiterate this fact: we've done the work so far and we've explained the protocol, so go build the damn tools. And this is to all of you, right? It's on all of us as a community to build this.
E
Phillip Hallam-Baker. I was going to say: yes, I am working on something like that, but it's going to be a long time before it works over QUIC. First I've got to get the transport layer fixed, then work out what the API needs to be in order to abstract an agile cryptographic transport or whatever — and then, finally, I'm not using QUIC at the moment, I'm using my own stuff, so I'm going to have to work out how to map that onto QUIC.
E
I think it would be very valuable to abstract down and work out how we apply a QUIC-like transport directly to web services and get rid of HTTP, because HTTP is a really rotten way to do an RPC interface — there's a huge amount of overhead. I'm speaking as one of the original authors of HTTP/1.0 and the person who originally made it possible to do that. What I want to do is take those parts of HTTP and put them into the transport shim.
L
Yeah — the QUIC working group is not going to produce a QUIC API. I think there's enough divergence in the many implementations that exist that it's just not going to converge.
L
There is, however, the TAPS working group, which is working on abstract transport-layer interfaces that support all sorts of advanced functionality that transports like QUIC provide. So I really encourage people interested in a more generic API for this kind of stuff to take their energy there.
A
The documents are mostly done, so "take your energy there" means: please comment. And there is DNS over QUIC — totally QUIC, nothing to do with HTTP.
F
Yeah, so it's not just HTTP: we have DNS over QUIC, which is defined and is a thing. And you might say, well, just fall back to DNS over TLS there — but that might not be what you want, right? You might want to fall back to DNS over HTTPS. I think it's very easy just to assume that we can build the batteries-included API and library.
F
Those things could exist, and they could be powerful, but sometimes they behave in ways that you don't want, and there are no hooks or configurability to change things, because the needs of your application are different. Presuming there's always going to be a TCP equivalent to your application protocol is just a guess — we don't know what those protocols are, and trying to anticipate them is a bit rubbish.
F
We have some of this going on in WebTransport, where we have different ways to carry the thing. Some are UDP-based over QUIC and want to use the service of unreliable message delivery — and if that isn't available, falling back to a TCP-based transport isn't going to give you that same transport feature. Is that really what applications want, or would they prefer to give up and fall back to something like WebRTC?
I
And so there are two things here, right? One of them is the network, and the other is your application that you want to run on top of QUIC. In terms of what you want to build here: as I said yesterday in my talk, we built using HTTP because it was an excellent vehicle for us to get this out there. But the work that's happening now — DNS and others, and also MOQ, which is also happening later this week —
I
All of this is ongoing work trying to map other applications and protocols on top of QUIC. Finally, in terms of the network itself, we know for a fact that there's a small percentage of the network that doesn't allow UDP; in experiments we found that QUIC doesn't work on 100% of paths. That's also an application choice. Your application can absolutely choose to say that it's okay: if QUIC doesn't work, the application doesn't work.
I
If QUIC doesn't work, you don't need a TCP fallback; that's all up to the application.
M
Some of us are thinking about migrating a strange application to QUIC. This application is called BGP, and we'd kind of like to not mess it up. Do you have any idea where the potholes are going to be?
F
Absolutely, if you want to come and ask for an early review; this is the kind of thing that, personally, I'm always happy to do, and I think people in the working group are too, because we don't want people to try QUIC and then have it fail. Trying to preempt what those problems might be is difficult, but the typical ones for me would be flow control: connection-level and stream-level flow control.
F
How big are the messages that you're exchanging? You're going to find some way to get, not deadlocked, but to not perform as well as you might. Multi-streaming in general is a big issue here: you're probably using QUIC to make use of the multiple streams, at least in the one draft I read. How do those things actually interact at the application? Can the API from the QUIC layer and your application work together in such a way? For example, there's no global ordering of streams in QUIC.
F
This was done by design, as a performance improvement, to avoid head-of-line blocking. But can the application manage out-of-order delivery? You can send things in the correct order, but the network is the network, and they might arrive in a different order. If you're depending on specific ordering between things at the application layer, you'll need to accommodate that in terms of synchronization or checkpointing, or however you might need to design it. Whereas if you have more atomic messages, like DNS over QUIC, where these things are very independent...
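To illustrate that point, here is a toy Python sketch (not any real QUIC API; the function and message names are invented for illustration) of why cross-stream ordering needs application-level help: when each message rides its own stream, per-stream order is preserved but streams may complete in any relative order, and a sender-stamped sequence number is one way to restore order at the receiver.

```python
import random

def deliver_across_streams(messages):
    # One message per stream: per-stream ordering is preserved,
    # but the relative order of different streams is not.
    shuffled = list(messages)
    random.shuffle(shuffled)  # the network is the network
    return shuffled

def reassemble(received):
    # Application-level fix: sort on a sequence number that the
    # sender stamped onto each message before transmission.
    return [payload for _, payload in sorted(received)]

# Sender tags each message with an application sequence number.
sent = [(seq, "update-%d" % seq) for seq in range(5)]
received = deliver_across_streams(sent)
assert reassemble(received) == ["update-%d" % s for s in range(5)]
```

Atomic, independent messages (the DNS-over-QUIC style mapping mentioned here) can skip the reassembly step entirely, which is exactly why that mapping is comfortable.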
F
It's a good mapping. Next, stream credits: we talk about flow control, but the number of streams that you can have open concurrently, how is that managed? You have the initial unidirectional stream limit I talked about earlier. Those are the initial limits, but as soon as you've blown through them, you need to keep sending MAX_STREAMS frames to keep granting credit. That's something we found in the early interop days.
F
You could do some quick tests, like one or two request-and-response interactions, all good. Then you run a server, and you have a bug where you forget to grant credits, and you eventually get to a point of connection saturation where nothing can happen. Nobody can do anything; an idle timeout fires eventually, which is okay, but it's probably not what you want if you're running a service that's trying to run and run and run.
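The credit-granting bug described here can be sketched in a few lines. This is a toy model, not a real stack: in QUIC (RFC 9000), MAX_STREAMS frames carry a cumulative stream count, so a peer that stops sending them eventually saturates the connection exactly as described.

```python
class StreamCredits:
    """Toy model of QUIC's cumulative stream-concurrency limit."""

    def __init__(self, initial_max_streams):
        self.max_streams = initial_max_streams  # cumulative, not a delta
        self.opened = 0

    def try_open_stream(self):
        if self.opened >= self.max_streams:
            return False  # blocked until the peer grants more credit
        self.opened += 1
        return True

    def on_max_streams_frame(self, new_total):
        # Frames may arrive out of order, so the limit only ever grows.
        self.max_streams = max(self.max_streams, new_total)

credits = StreamCredits(initial_max_streams=2)
assert credits.try_open_stream() and credits.try_open_stream()
assert not credits.try_open_stream()       # saturated: the bug above
credits.on_max_streams_frame(new_total=4)  # the forgotten MAX_STREAMS frame
assert credits.try_open_stream()           # progress resumes
```

The `max()` in `on_max_streams_frame` mirrors the spec's rule that a limit advertisement can never reduce an earlier one.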
J
What's interesting about BGP, I think, to Lucas's point, is that it tends to have extraordinarily long-lived sessions, and QUIC sessions will eventually run out.
J
This is something we dealt with in HTTP/2 and have also dealt with in QUIC. It's probably not likely that in a protocol like BGP you will run into these things, but who knows; some of these boxes stay up for a long, long time.
J
The thing that I think is probably more of a challenge in this context is managing keys and authentication, because I understand that BGP is often run essentially in the clear between different ASes, and that's going to be a shift in attitudes in terms of how you configure the peering points, because you need encryption with QUIC. I don't know how many stacks currently support something like a pre-shared key mode.
J
That is not something that is widely used on the web, so it may be that you end up in a situation where you want to use, for instance, self-signed certificates, maybe even with no validation of them, to ensure that it works as the existing system has done, with, of course, the option to upgrade to something that is fully authenticated.
D
Yeah, I was going to add that I'm happy to review a draft on BGP; I think it's a solid use case. Key distribution seems like a problem, but there are a number of solutions. I think it's just a matter of looking through the options, getting a review from the QUIC folks on the transport bits, and then maybe getting a crypto review as well on whatever key distribution, or lack thereof, you choose. But it seems very doable.
K
So, regarding how the handshake works: I saw that there was obviously some information in those Initial packets. For example, I saw advertisements about what ALPNs we might be about to talk, and obviously when I saw ALPNs in the clear, alarm bells went off.
J
So, as I talked about yesterday, we're not strictly leaking it in the clear: anyone who is able to see the Initial packets from the client, and the connection ID that's in them, and who knows the QUIC version that's in use, will be able to recover the keys and decrypt that information.
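A sketch of why this works: QUIC v1's Initial secrets are derived (per RFC 9001) from a fixed, published salt and the client's Destination Connection ID, both visible on the wire, so any on-path observer can run the same derivation. This is a minimal standard-library-only HKDF sketch, whose output can be checked against the test vectors in RFC 9001, Appendix A.

```python
import hashlib
import hmac

# Fixed, published salt for QUIC version 1 (RFC 9001).
INITIAL_SALT_V1 = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")

def hkdf_extract(salt, ikm):
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret, label, length):
    # TLS 1.3 HkdfLabel with an empty context; a single HMAC block
    # suffices because length <= 32 here.
    full_label = b"tls13 " + label
    info = (length.to_bytes(2, "big")
            + bytes([len(full_label)]) + full_label
            + b"\x00")
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

def client_initial_secret(dcid):
    # Everything needed here is visible on the wire, which is why
    # protection of Initial packets is only obfuscation.
    return hkdf_expand_label(hkdf_extract(INITIAL_SALT_V1, dcid),
                             b"client in", 32)

# DCID from the worked example in RFC 9001, Appendix A.
secret = client_initial_secret(bytes.fromhex("8394c8f03e515708"))
assert len(secret) == 32
```

From this secret the actual AEAD key, IV, and header-protection key follow by further HKDF-Expand-Label calls, which is what lets tools like Wireshark decrypt Initial packets passively.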
J
So, technically speaking, when we talk about what's "in the clear" in QUIC, that's often a shorthand we use to mean that there's no genuine cryptographic protection for these things; we're just applying what is effectively AES as a checksumming mechanism and an obfuscation mechanism. So what's in the clear is everything that would be in the clear in TLS 1.3, and not a lot more, honestly.
J
So if you look at TLS 1.3 and its ClientHello and ServerHello, those are the things that will be in the clear, and for the most part that falls into two categories. First, there are the things that you need in order to configure the key exchange in TLS.
J
Those things will always be in the clear to some extent. But then there's also a bunch of configuration information from the client side: ALPN, some certificate-related extensions, and also the SNI. And at the QUIC level there are transport parameters, which is configuration information usually related to the operation of QUIC. The Encrypted Client Hello spec will ultimately be able to take all of those things and provide some level of protection for them.
J
There are all the debates about the availability of SNI; it's potentially the case that you could still put SNI in the clear and still protect all of those other values at the same time. That's something I think a lot of people don't really understand about the value of ECH: it provides the ability to protect all of that sort of thing. That's coming, but not yet.
D
Instead of extracting SNI from packets, if we can all move to anycast (I know that's like asking for ponies), then you can just use the IP, and please don't look in my packets. But yeah, if I want you to look at the SNI, I would like to staple it onto the front so you could find it.
L
So all this stuff is going to change as the protocol evolves. That's one of the key points of QUIC.
A
Anyone else in the queue? Oh, I have one gotcha question that I had lined up in case nobody came up. One of the things that I saw as a theme across all of the presentations was: we tried this, and then we realized we had to do this, and then we realized we had to do that. The idea of building a transport protocol on top of a security layer on top of a transport protocol turned out to be super hard to do, and was one of the reasons
A
This was a really long effort. As for what came out as QUIC version 1: in your opinion, whoever grabs the mic first, what is sort of the hackiest part of the protocol, sort of the undone work? Martin, take it, unless Martin wants to take it.
J
For me, there are some pretty happy parts: we managed to get more of the protocol encrypted than I thought was possible at the outset, so that was a happy thing. Probably the hackiest part, and the part that I'm least confident in, is connection migration.
J
The security model for that is a little unclear, and the mechanisms that we have in place are not as thoroughly tested as I would have wanted in order to be confident in them at the time.
D
I agree with Martin, and similarly I can attest that connection migration works, because we've default-enabled it in quite a number of circumstances, and it does actually function as intended. But the attack surface seems very difficult to reason about, and I'm sure our implementation has something you could do that would at least be annoying to a user, if not outright problematic. At the very least, it's certainly not perfect.
I
So I think we did a lot of things that were clever in the protocol, but I'd say that the things that feel a bit fragile are connection migration, for sure, but also the handshake. It's robust; I think it's robust, but the fact that I have to say "I think" is probably the argument there, right? Outside of that, I think the happiest part of the protocol is that we actually managed to encrypt everything.
I
It's okay. We got the packet numbers encrypted too, which was a joy.
L
The happiest thing about it is extensibility. Like many of my colleagues here, I spent a lot of time in TCP, where it's so hard to change anything, for various reasons. But that also leads to what I'm most deeply concerned about, and they'll probably laugh, because this is my hobby horse: version ossification. The question alluded to things we could read in the ClientHello; between that and just this very readable version field,
L
you know, people could just say: version one is fine, and everything else, I don't know what that is, I'm gonna drop it. That's something we have to think a little harder on and come up with some solutions for, to preserve that extensibility, because I think eventually you can get to a place where we get really big performance improvements. We already have big performance improvements over TCP, but as the internet changes, QUIC can change in a way that the old tools can't.
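One concrete anti-ossification tool already in the spec: RFC 9000 reserves every version number matching the pattern 0x?a?a?a?a for exercising version negotiation, so endpoints can advertise deliberately unknown versions and flush out the "version one is fine, drop the rest" behavior described here. A small sketch of the check:

```python
def is_greased_version(version):
    # RFC 9000: versions matching 0x?a?a?a?a are reserved so that
    # endpoints can exercise version negotiation; a middlebox that
    # hard-drops them is exhibiting exactly the ossification above.
    return (version & 0x0F0F0F0F) == 0x0A0A0A0A

QUIC_V1 = 0x00000001

assert is_greased_version(0x1A2A3A4A)   # reserved "greasing" version
assert not is_greased_version(QUIC_V1)  # real version, must be handled
```

An endpoint seeing a reserved version simply triggers version negotiation, so on-path gear never gets a stable excuse to special-case the field.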
A
Giving myself the last word, because we are at time: I would like to thank all of the presenters for their presentations and for the discussion, and all of our audience for joining us here at 7:30 in the morning. Your dedication to the internet is much appreciated. Please enjoy the rest of your week, and see you around. Thanks a lot.