From YouTube: IETF98-TUTORIAL-QUIC-20170326-1500
Description
QUIC Tutorial at IETF98
If you can write an app in 15 minutes, a mobile app (I couldn't do it to save my life), but you've never seen a tcpdump trace, then this presentation is for you. Just bear in mind the other people here who already know what QUIC is. This is not a working group meeting; please don't turn it into one. If you're already participating in QUIC work, please feel free to offer clarifications at any point in time. And this is true for everybody in general: feel free to interrupt me.
What's QUIC? A very brief history of QUIC, because you might have heard of QUIC as this thing from Google, or as this thing that's a working group, and you are completely confused about exactly what it is. It was an experimental protocol, it's been deployed by Google, and the deployment of QUIC started in 2014.
It has been quite widely deployed between Google's services and Chrome, and other mobile applications such as the YouTube app and Google Search. It was deployed widely because it improved page load latency for us, it improved search latency for us, and it improved video rebuffer rate, among other things, for YouTube.
The working group for QUIC was formed in the IETF in 2016, last October, after the Berlin IETF, and the goal of the working group is to modularize and sanitize QUIC, because QUIC, as an experiment, was built as one big monolithic thing. You'll understand these things as we go through; I'm just giving you a rough, high-order history of what's happened so far. The working group's goal is to use HTTP as the initial application for QUIC. Before I move on, just so I know who my audience is here...
So let's start with what HTTP/2 is before we actually get into what QUIC is. And actually, before we talk about what HTTP/2 is, we need to understand what a web page looks like, because we're going through a brief history of HTTP first, before we get to QUIC itself. So what does a web page look like? Well, it looks like this: there's the IETF web page. It has a bunch of objects, scripts, various things that you're familiar with; lots of things essentially make up a web page.
So it takes a fair amount of time to actually set up an HTTP connection, or a TLS-over-TCP connection, before you can even transfer these objects down from the server to the client. The question is how we transfer these objects down after that setup: HTTP requests and responses flow over this connection that's been set up.
Going further, here's what it looks like, roughly. Let's say there are these three objects that we saw on the IETF web site, and the web server has these three objects. You have a client, that's a browser, and we just went through the three round trips of setting up a TLS-over-TCP connection, and now we are talking to the web server, but we don't have the objects yet. So the client first sends a request over the TCP connection and fetches
the first object from the server; it then sends a second request and fetches the second object from the server; and then it sends a third request and fetches the third object from the server. This is all wonderful: we have the web page, it shows up, and it's great. Can we do better? Any ideas? I'm sure you've read something.
What could you do to make this better? Come on, tell me the right answer, because we have it. What's that? Let's pipeline them. Excellent idea; as it turns out, that doesn't get deployed very well. What else can we do? Use them in parallel? Excellent. Why didn't we think of that before? Well, as it turns out, we had thought about that one before, and that's roughly what we do. How do you deal with it?
The problem here, of course, is that as you're sending objects one after the other, the second object shows up only after the first object is completely transferred. Well, that's not fun. What if you want to see the fifth picture on, you know, a page of thumbnails? You have to wait for the first four to load, and that's really not fun. So ideally, you want to be able to see them roughly in parallel, because a web page has a whole bunch of different objects, and that's what we ended up doing. This was a hack.
What was the hack? Well, the client wants to request the first object, so it says: hey web server, here's a TCP connection, now please send me the first object over this connection. And then the client says: oh, I'm a client, I would like this other object from you, over a new TCP connection. And it does the same thing for all objects.
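To make that hack concrete, here is a minimal sketch (not from the talk; the host and paths are placeholders) contrasting the two HTTP/1.1 strategies just described: fetching objects one after another on a single connection, versus opening a fresh connection per object and fetching in parallel.

```python
# Illustration only: HTTP/1.1 serial fetches vs. one-connection-per-object.
import http.client
from concurrent.futures import ThreadPoolExecutor

HOST = "www.example.com"                       # placeholder host
PATHS = ["/style.css", "/logo.png", "/app.js"]  # placeholder objects

def fetch_serial():
    """One connection; each object waits for the previous one to finish."""
    conn = http.client.HTTPSConnection(HOST)
    bodies = []
    for path in PATHS:
        conn.request("GET", path)
        bodies.append(conn.getresponse().read())
    conn.close()
    return bodies

def fetch_one(path):
    """The HTTP/1.1 'hack': a brand-new TCP+TLS connection for this object."""
    conn = http.client.HTTPSConnection(HOST)
    conn.request("GET", path)
    body = conn.getresponse().read()
    conn.close()
    return body

def fetch_parallel():
    """Objects arrive roughly in parallel, at the cost of extra connections."""
    with ThreadPoolExecutor(max_workers=len(PATHS)) as pool:
        return list(pool.map(fetch_one, PATHS))
```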
Maybe we didn't care that much, but we definitely didn't have a better way to do this for quite a long time, and this is still better. So this is what a lot of browsers even now do. You go to an HTTP/1.1 site, and you might see recommendations on various websites that say: if you want to increase your speed, increase the number of connections that you are allowed to use to 200. Bad idea; don't do that. Better idea: use HTTP/2.
What does HTTP/2 do? Well, it tries to deal with the same head-of-line blocking problem, but in a different way. So here's what it does. The client wants to send a request for one of these objects, and it creates what we'll call an HTTP/2 stream; actually, not "what we'll call", that's what the working group calls it: a stream.
The idea of a stream will become clear in a moment. Similarly, it wants to request the other two objects as well; it requests them over two other HTTP/2 streams. All of these streams get mapped to the same connection, meaning that the browser here, or the HTTP/2 part of the browser, is multiplexing pieces of a request and a response over the same connection.
So the server sends them out and we get them, again slightly slower than if you had fetched any one of them alone, but you get them in parallel, and they go over one TCP connection, thereby avoiding a lot of the overhead that we talked about in the previous case.
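A rough way to picture what that framing does (a toy sketch of the idea, not the real HTTP/2 wire format): every object gets a stream ID, the sender chops each object into frames tagged with its ID and interleaves the frames on one connection, and the receiver reassembles each stream independently.

```python
# Toy sketch of stream multiplexing over one ordered connection.
from collections import defaultdict

def frames(stream_id, data, chunk=4):
    """Chop one object's bytes into (stream_id, payload) frames."""
    return [(stream_id, data[i:i + chunk]) for i in range(0, len(data), chunk)]

def multiplex(objects):
    """Round-robin interleave frames from all streams onto one 'connection'."""
    queues = [frames(sid, data) for sid, data in objects.items()]
    wire = []
    while any(queues):
        for q in queues:
            if q:
                wire.append(q.pop(0))
    return wire

def demultiplex(wire):
    """Receiver reassembles each stream independently."""
    out = defaultdict(bytes)
    for stream_id, payload in wire:
        out[stream_id] += payload
    return dict(out)

objects = {1: b"<html>...</html>", 3: b"body{}", 5: b"console.log(1)"}
assert demultiplex(multiplex(objects)) == objects
```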
This is all fine and dandy. This is awesome, right?
Hello? Am I the only one here? Apparently so. Don't set up the connection in the first place? You, yes, you. Well, you won't get the website; you won't get the web page. There will be a bit of a problem here actually trying to get the web page. Come on, I'm not moving forward until I get an answer from this crowd.
Congestion control? Mic, go on. Congestion control knows about the streams? Yes, but does the transport know about the streams? Does the transport know that there are three different objects going over this connection? TCP doesn't, and that's the problem in this case: you still have head-of-line blocking. Where does the head-of-line blocking show up? Well, this gray line that you see up here is basically a byte stream.
TCP doesn't know that there are many objects going over it; it's just a byte stream going through this connection. SCTP does, but we are not here to talk about SCTP; we can have that fight later. I actually do have a slide about something like that later. But the idea is exactly the same: the problem here is that the transport does not know about streams, does not know about parallelism, and doesn't do anything to give you parallelism.
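Here is a small sketch of that problem (segment numbers and sizes are invented): a TCP-like receiver can only hand data to the application up to the first gap in the byte stream, so one lost segment stalls frames that belong to completely unrelated streams.

```python
# Toy sketch of head-of-line blocking in a single ordered byte stream.
def deliverable(received_segments, total_segments):
    """TCP-like delivery: hand data to the app only up to the first gap."""
    delivered = []
    for seq in range(total_segments):
        if seq not in received_segments:
            break                      # one lost segment stalls everything behind it
        delivered.append(seq)
    return delivered

# Segments 0..5 carry interleaved frames from three streams; segment 1 is lost.
arrived = {0, 2, 3, 4, 5}
print(deliverable(arrived, 6))         # -> [0]: the other streams are blocked too

# A stream-aware transport tracks loss per stream, so segments 2..5, which
# belong to other streams, could still be delivered to the application.
```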
So how does HTTP over QUIC work? QUIC solves all of your problems.
We can shut down the IETF after this. So how do we set up connections in QUIC? With QUIC, it's zero round trips to a known server. If a client is talking to a server that it has spoken with in the past, it takes exactly zero round trips to set up a connection. That's like magic: how can you set up a connection in zero round trips? Well, you can't; you send a connection handshake packet and you can immediately send data after it.
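A hedged sketch of that idea, not a real QUIC handshake: if the client still holds configuration cached from an earlier connection, it can encrypt and send the first request in the same flight as its handshake packet instead of waiting a round trip; the packet builders below are invented stand-ins.

```python
# Illustration of 0-RTT: the first flight to a known server carries data.
def fake_handshake_packet(cached):
    # Stand-in for a ClientHello-style packet; real QUIC/TLS is far richer.
    return b"HANDSHAKE|" + (cached or b"no-cache")

def fake_0rtt_packet(cached, request):
    # Stand-in for data protected with keys derived from the cached config.
    return b"0RTT|" + cached + b"|" + request

def first_flight(cached_server_config, request):
    flight = [fake_handshake_packet(cached_server_config)]
    if cached_server_config is not None:
        flight.append(fake_0rtt_packet(cached_server_config, request))
    return flight

# New server: handshake only, the data must wait for the server's reply.
print(first_flight(None, b"GET /"))
# Known server: handshake and request leave together, zero added round trips.
print(first_flight(b"cfg-from-last-time", b"GET /"))
```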
What it means is that you don't have to wait for the server to respond to you. You can expect that the server will finish its handshake as soon as it receives this packet from the client, and if it doesn't, then you still have a one-round-trip fallback, where, if your keys are not fresh, you have to go through a little bit of extra work. I'm not going to go into the details of these things, by the way, or into the details of how the handshake itself works.
If version negotiation is needed, then we take up to two round trips. But these are the uncommon cases, and the most common case is expected to be the zero-round-trip case, because when you go to a website, when you go to a server and request a resource, it's very likely you're going to go back to the same server and request more resources. It's simply locality, and that's very common. And no, I haven't forgotten about TLS 1.3; we'll talk about that.
The client makes the request over one HTTP/2 stream, which now maps to two QUIC streams. The details of why, I will not go into, but Mike is in the back, so find him after this if you want to know why. Mike, raise your hand; there you go. He's the editor of the HTTP mapping draft, and he's the reason the world looks like this, so you can blame him for it. So we have two QUIC streams that are used per HTTP stream.
In short, one of the streams is used for headers, the other stream is used for the body, which is fantastic. So that's an HTTP/2 stream, that's a QUIC stream, and that's what happens with QUIC; this is what happens here. Similarly, the client sends requests for other objects and they end up on other QUIC streams. What is a QUIC stream?
Maybe you understand now, roughly, what an HTTP/2 stream is. Like I told you, an HTTP/2 stream is where HTTP/2 multiplexes the things above into the thing below. In this case it's slightly different: HTTP/2 knows about the QUIC streams below, so it takes the request or the response from above and ships it through multiple streams below. So what does QUIC do? Well, it takes those and multiplexes them over the connection that it has to the other side. So this is what you have: you have a multiplexed transport. QUIC is a multiplexed transport.
It multiplexes multiple streams over a single transport connection, which means one connection setup, one loss-recovery context, one congestion-control context. It means we can do priorities, if you wanted to do priorities across multiple objects. So it's very exciting; there are a lot of interesting things that we can do, and at the IETF we love doing more things. So there we go. That's what QUIC does, and again, the objects are received in this manner.
So, going back just a bit: I told you that there was an experiment at Google and then there's IETF QUIC, so I want to clarify exactly what the difference is. There was an experiment, which still is an experiment, that's deployed, and this is what it looks like. So Google did an experiment, built a protocol, and we called it QUIC, and this is roughly what it looked like: it maps to the standard HTTP stack in this manner, but it runs over UDP.
As you can see here, the protocol was built to run over UDP (I'll talk about that later in the presentation), and HTTP maps onto QUIC. We had our own mapping that was fairly simplistic, and it's getting much more sophisticated now as we go through the IETF working group work. And we used our own home-grown, our own little proprietary QUIC crypto for doing the handshake, for doing the rapid 0-RTT connection setup. So this was an encrypted transport in the experiment, and we liked it as an encrypted transport, but it used a proprietary handshake mechanism.
Since then we've had TLS 1.3, but I'll get to that in a moment. So what's the QUIC working group doing? Like I said, here's what old Google QUIC looked like. Of course, the crypto was proprietary, so we wanted to get rid of that, and we wanted to do a whole bunch of other things at the IETF with it. But first, we didn't want QUIC crypto; or rather, we decided not to have QUIC crypto in here, and just leaving it like that
would make it an unencrypted transport, and it would make a lot of people very, very grumpy. So we decided to use TLS 1.3, which was developed after QUIC crypto, but it gives you the same benefits as QUIC crypto, meaning it gives you 0-RTT handshake latency. This is the brand-new hotness in TLS, by the way; if you want 0-RTT handshakes, use TLS 1.3. And the reason that it's embedded inside QUIC is because we're actually doing some careful work, and again, remember, we want parallelism.
TLS usually sits on top of TCP, which means it's used to a stream of bytes underneath it, and it offers roughly that above it. So you do some work to make sure that TLS framing falls within QUIC packet boundaries, and that when one packet is lost, other stream data can still be decrypted and delivered. So TLS 1.3 here is really not sitting under QUIC; it's actually being integrated with QUIC. It's sitting around QUIC; oh sorry, QUIC is sitting around it, meaning that the TLS handshake actually happens inside of a QUIC stream.
The goal for us is to have that; that's the first goal. The first goal is to have an HTTP mapping on top of QUIC and make it work with TLS 1.3. Going forward, the idea is that once we have finished this, you'll have a protocol that's modularized enough that we could, with, you know, relatively little friction, replace the handshake from TLS 1.3 with a newer one, say a TLS 1.4 or, god forbid, TLS 2.0 (because that's never happening), and you could replace HTTP with, like, WebRTC and so on.
Maybe some of this sounds familiar, and I've mentioned some of it. I mentioned some ideas that you might know, like SCTP, like multi-streaming, and other things. Yes, we're just replaying some of the greatest hits that we've already seen. The difference is that we are trying to make it deployable, and we're trying to integrate it with the needs of today. So yes, the ideas you see come back; this, by the way, is not limited to QUIC, it's all of computer science.
We love to replay ideas, and we're doing exactly that, but we're adding something new, hopefully. We're adding a bunch of new features and a bunch of new things which have never been done before. 0-RTT, for example, with the crypto handshake, has never been done before, so we feel like we're always on thin ice when we're doing that. So there are a lot of new, exciting things as well, and some of the existing work is quite relevant and very related to what you've seen so far.
QUIC crypto used to be the thing that had 0-RTT, but now we have TLS 1.3 and that's what we are planning to use. TCP Fast Open; or, if you go further back, if you used to work on those green things called terminals, you might remember T/TCP. How many people here actually remember T/TCP? All right, excellent. So TCP Fast Open was recycled from T/TCP, and of course we will keep using the same ideas; we're very happy to steal ideas.
Other ideas, like TCP sessions, which predates SCTP, which predates SST: all of these talked about multi-streaming as a brand-new idea. It never was a brand-new idea; parallelism is an idea that's as old as the hills, and there are a bunch of shared ideas with all of these old protocols. We are very happy to steal all of those ideas, and if you have more ideas, we're happy to steal them as well.
So here are the broad design aspirations that QUIC has. Deployability is critical: we want QUIC to work on today's internet, not on some future IPv6 internet. It has to work on the IPv4 internet, which is riddled with middleboxes and everything else. It has to work on currently deployed client operating systems; if it doesn't, then nobody's going to implement it, and that's something that we've seen in the past. With SCTP, we had a lot of this.
I worked on SCTP for six years, so there was a lot of work in just trying to convince folks, Microsoft, to deploy SCTP. This was work, and you have to answer: well, can you deploy this today? And the answer is: well, no, because middleboxes won't allow SCTP through. And then the answer is: then why should we implement it if it doesn't work? And until it's implemented, middleboxes won't change themselves. So you have this vicious cycle of, you know, just not going anywhere. So that is important.
We want to be deployable on today's Internet, which is why we have UDP, but I'll talk about that in a moment. And we want to be able to evolve, because today's internet is obviously not going to be tomorrow's internet, and we might have different needs and different applications; we want the transport protocol to be able to adapt, to evolve.
That was the baggage of this old transport, so you could think of it that way. Low-latency connection establishment was critical, because again, web pages are very latency sensitive as an application; the web itself has gotten more latency sensitive over the past decade or so, so it's really important to have low-latency connection establishment. Multi-streaming: we've talked about that. In addition, we wanted to have better loss recovery and more flexible congestion control. TCP loss recovery has been, for a long time, just riddled with complexity because of
some early design components that were in TCP; you can overcome them, but it takes a while to get there. We had the opportunity to design this from scratch, and so we have loss recovery that just fundamentally has more signaling and does better under a larger set of scenarios, and we want to be able to have flexible congestion control. UDP also has its own issues.
Does everybody here know what the NAT rebinding issue is? I'll very briefly summarize it. For TCP connections, it's easy to see that when a TCP connection goes through a NAT, the NAT has to maintain a binding for the connection that's coming into the NAT and leaving the NAT. For TCP connections, you usually see the FIN, or something like that, or a reset packet, that tells the NAT: oh, this connection is gone, so you can discard state for this connection.
Multipath is something that's on the radar; it's in the charter for the working group to work on, but it's going to happen later, once we've resolved some of the more burning issues that we need to deal with. So, just on a couple of these points I'll go into a little bit more detail: deployability and evolvability. How do we satisfy this constraint, or this design aspiration? The first thing that QUIC does is: we use UDP as a substrate.
This enables deployment through middleboxes, and it allows us to do user-space implementations, meaning that we do not have to wait for client operating systems to, you know, get upgraded, which happens at a very slow clip. Version negotiation allows evolvability. None of the transports that we've seen so far have versioning in them. This is an odd thing, because almost everything else does have versioning in it, but transports don't. So version negotiation allows us to evolve the wire format of the protocol going forward, and to make sure that middleboxes don't ossify the bits that we are sending on the wire today, which allows us to change our wire format going into the future.

We encrypt most of the QUIC packet and we authenticate all of it: the headers are fully authenticated, and some of them are visible, but others are not, so that middleboxes can't tamper with them; the whole packet is tamper-resistant. This is, again, to help us retain QUIC's evolvability going into the future.
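Here is a small sketch of the version-negotiation idea mentioned above: the client labels its first packet with the version it wants to speak; if the server does not support it, the server answers with the versions it does, and the client retries with a common one. The version numbers and handling below are illustrative, not the draft's exact mechanism.

```python
# Toy version negotiation between a QUIC-like client and server.
SERVER_VERSIONS = {0x51303335, 0xFF000002}   # illustrative: "Q035" and a draft version

def server_handle(client_version):
    if client_version in SERVER_VERSIONS:
        return ("accept", client_version)
    return ("version_negotiation", sorted(SERVER_VERSIONS))

def client_connect(preferred, supported):
    outcome, info = server_handle(preferred)
    if outcome == "accept":
        return info
    # Pick a mutually supported version and retry (costs an extra round trip).
    common = [v for v in info if v in supported]
    if not common:
        raise RuntimeError("no common QUIC version")
    return client_connect(common[0], supported)

print(hex(client_connect(0xFF000001, supported={0xFF000002, 0x51303335})))
```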
But encrypting the packet also blinds network operators. Are there any network operators in the room? Would you like to identify yourselves? There you go. Yeah, no, no, I'm not asking you to, you know... Yes, thank you. This is a problem for network operators, because they're used to seeing TCP; they're used to seeing all the bits that are in the TCP header, and QUIC says: well, we're not going to show you anything, right?
That's possible for the current QUIC, but the current QUIC is not your future QUIC. The most important thing is that it has to be done in a manner that's sensible to do. The QUIC version that you see on the wire right now is different from the QUIC versions you'll see in the future, and, as I said, the packet header format is also changing dramatically, so the connection ID won't be in the same place.
So, exactly that question: I'm going to show you very briefly what a QUIC packet used to look like, and what it looks like now. Right now we have a proposed header change, and we still don't have consensus on that, but I'm going to show you both of those, to give you a sense for what this actually looks like on the wire. So QUIC has what we call regular packets; these are the common packets that you see, the data-carrying packets, and they have flags,
a version number (optional in here, but present in this format), and a packet number, which is synonymous with the sequence number in TCP, but different; I'll talk about the difference in a bit. And then there's the encrypted payload: this is QUIC frames, this is the entire payload of QUIC. When I say the payload, this is not just user payload; this is QUIC frame headers as well. So what are QUIC frames? Well, this is all the signaling: a lot of the signaling that QUIC does to its peer, that
the QUIC endpoints do with each other, happens via frames. So you have ACK frames, instead of ACK packets or, you know, an ACK bit in the header. We have an ACK frame that can give you a lot more detail about what all is being acknowledged, how long it's been since those packets were received, and so on. Then you can do flow control in here, which is the window update frame, and there's the stream frame, which actually carries user payload; as we saw earlier, QUIC carries user data in streams.
This is exactly where it shows up on the wire. Now, all of this is encrypted, right, so nobody else can see this but the endpoints. At the moment, the stuff in the orange can be seen, but not tampered with, on the network. These are regular packets. However, there are other packets which are unencrypted; there's version negotiation, because, well, we don't even know the version we are speaking yet, so obviously we can't have a shared key.
There are a number of problems with these header formats. Part of it is that there are a number of packet types going on here, and it's hard to figure out exactly what a QUIC packet looks like from the plethora of options that you have in this space. So there's a proposed change to the headers, to the packet itself, that allows us to somehow try to streamline this.
So there are long header packets, which start with the bit 1, and then there's a packet type in there which tells you what type of packet it is. It could be a version negotiation packet, it could be a public reset packet, it could be a regular 0-RTT packet, or a handshake packet. It always has a 64-bit connection ID.
It has a 32-bit packet number, a 32-bit version number, and a payload which is type-dependent and may or may not be encrypted, depending upon exactly what type of packet it is. For example, a version negotiation packet is not encrypted, again for the same reason: we haven't negotiated a key yet. But there are other payloads that we won't encrypt either.
We want other things, like a public reset, to actually be encrypted, or actually to have a proof of some sort from the peer, and that's going to show up here; the type of proof itself is TBD. But all of this is work that's ongoing; as I'm saying, this is proposed, and we are going to actually have a consensus call on this in the working group this week. For the common case, where you have a lot of data going back and forth once you've finished the handshake and everything else,
there are the short header packets. I'm not going to go into the details, but basically it's a shorter, tighter version of the packet header. In the best case, you can have this be exactly two bytes, which is quite nice (a packet header of two bytes, that's awesome), or three bytes in the common case; that's what we expect to see. And this will always have an encrypted payload. So these are proposed, and I'm not going to go over more detail of what the packet looks like.
On the type of work that's going on in the working group, the depth of the work on congestion control and loss recovery: I mentioned earlier that we are trying to make this better in QUIC. We incorporate TCP best practices, and because we can design this from scratch, we've decided to use new packet numbers for retransmissions, different from the original transmissions. How many of you are familiar with the retransmission ambiguity problem in TCP? A fair number here; the others,
please ask your neighbors. Well, the problem in TCP is that when a sender receives an acknowledgment for a packet which it has retransmitted in the past, say the sender sent packet number 10 and later retransmitted it, sending it again with sequence number 10, then when it sees an ACK for 10, it doesn't know which of the two packets the ACK is for. We've worked on this for a long time, and we've, you know, tried to deal with it.
We have a number of proposals that make retransmission ambiguity go away, but it's a problem that came about because we did not have different packet numbers for these different packets. QUIC allows you to do this; well, not just allows, QUIC requires you to do this. And, by the way, the packet number is also used for crypto.
So what's the working group up to? I just pointed out one example of what we are doing in the working group, but broadly, it's really taking an experimental protocol and turning it into, like, a professional one. It's making it capable of handling all the corner cases that nobody had thought about before. It's a makeover; it's quite a deep change, a number of changes that are happening to the protocol. So the working group is figuring out how to map
HTTP cleanly to QUIC, and how to use TLS 1.3 with QUIC. There are a number of open questions that are unresolved yet; like I said, the public reset proof, we still don't have resolution on that. And how to make QUIC work for non-HTTP applications: that's not work that we are doing immediately, but it's certainly something that we all have in the back of our minds as we make design decisions in the working group. So, is this just Google's QUIC?
No, it's not. I want to be very clear that Google's QUIC was the experimental protocol, and the QUIC working group used the experiment as a starting point. All the work that's going on is fairly fundamental to how QUIC works and how we think of QUIC, and the current IETF spec, which is at version -02 right now, is already miles away from where we started five or six months ago. So it's a fantastic example, the whole process has been a fantastic example, of running code informing protocol design.
We started off with the protocol, an experimental one that was done by Google, and all the results drove a lot of the design decisions that have happened in the IETF work. And for anything that we do now in the IETF, if we have questions about options where we don't know which one is going to do well, we can immediately take it, back-port it to Google's QUIC, and see which one is more performant. That allows us to make decisions going forward as well.
So the goal is for everybody to use the version of QUIC that comes out as standardized by the IETF. There are two QUIC implementations at the moment; neither of them speaks the IETF version of QUIC yet, they both speak Google's versions of QUIC. They're available, and there are debugging tools. The most important one, I would say, is Chrome's net-internals. So if you have a chance, go to chrome://net-internals and you will see QUIC as one of the items.
From the BoF in Berlin last year, that should show you some numbers; again, I can give you some numbers here. These are not universal; these are our numbers, right, so the numbers will be different for others, which is partly why we want to see other people deploy it, so we can see how well it works. But we saw rebuffer rates for video playback reduce by about 20%, by greater than 20%, and we saw that the benefit was more pronounced
(this is actually important, so I'll just take a moment before I dig into the question) the worse the network was. So when you looked at the benefits in, say, India, for example, or Brazil, where the networks are crappy, we saw more benefit with QUIC; the difference between QUIC and TCP is higher than in the US, for example. Which again tells you that QUIC is not, you know, icing on the cake; it's actually fundamentally changing things, and it's making HTTP work well for crappy networks.
Well, we've tried really hard not to allow for that to happen. In general, there are rules around how many packets can be sent in the first round trip, for example. I don't think you have any new DDoS implications from using QUIC, as far as I can tell; I look to some other people in the room to verify that that's the case. Thanks. But yes, I believe that's the case.
Great tutorial. Since you're building a new transport protocol, why build it on UDP at all? Why not just start from the ground floor?
You said "great tutorial"; let's stick with that theme and talk more about how awesome this was. Because of what I said: deployability. You can't deploy anything that's not UDP or TCP on the Internet today. We've tried this with SCTP and we failed, and we certainly did not want to wait for all those years before this got out there.
It's multiple requests; each request goes over a stream, right. All requests go over their own stream, but they all go over the same connection. The client can send a request, as it always does, whenever it has a request. The requests that it sends end up becoming a stream, or a couple of streams in this case, inside of the same QUIC connection. Okay?
So the client does connect to the server and sends a request first, let's say GET /. Well, you have to ask; you always send HTTP requests first to the server, right. So when you have a TCP connection, what happens? You set up a TCP connection, and then the client sends a GET request, that goes out, and you get a response back.
Few
things
the
first
thing
is
that
in
the
handshake
is
a
completely
different
beast.
The
second
thing
is
that
it
wasn't
optimized
for
for
web
transactions
and
for
from
opera
in
the
third,
and
the
most
important
thing
is
that
it
wasn't
integrated
with
the
security
protocol.
The
only
way
you
could
do
that
would
be
sort
of
like
taking
a
CD,
be
taking
D,
TLS
or
TLS
and
figuring
on
how
to
smush
them
together
and
now
trying
to
write
code
or
trying
to
you
know,
take
pieces
of
software
and
making
them
work
like
that
is
hell.
A
Okay, and we thought it was worth doing that. A second question: you're probably going to say this is out of scope for the current work in the working group, but one of the things I liked about SCTP, and was kind of trying to look to reuse myself, was the partial reliability functionality. Is that on your roadmap, or in the back of people's minds?
It is in the back of people's minds; okay, that's exactly where it is at the moment. It's definitely something that we'll want to address when you go past HTTP. At the moment, again, the goal is to, you know, try to build something that works for HTTP, and then once you've done that, when you get to things like video and all of that... But yes, that's absolutely on everybody's mind, and we would want to get to that at some point.
So this is not actually a question; it's a favor request of all of you. I'm part of the EDU team that's responsible for these tutorials, and there's a survey associated with this, and the link to the survey didn't make it in here. This is a fabulous tutorial. It would really help us, for planning purposes, for people to respond to the survey, and I'll get the link sent out; also, please send in suggestions about future topics that you would want.
Path MTU: we don't have any mechanism at the moment; we use a fixed size. We did work on sort of measuring what MTUs work best, and we have a chart somewhere, which I'm happy to present; I've been looking for a venue to present some of the experiments that we've run to date. Some of them are present in the BoF slides, but this one isn't. But yeah, we chose a fixed size based on what we saw. What did you...
We have it deployed, and performance is still better. Now, the reason is because, again, one of the short things to bear in mind is that the experiment, as I told you, that we have deployed in Chrome for Google services tries to use QUIC and races it against TCP. So if you're using a very large MTU and the first packet, which is padded to be the maximum packet size, doesn't get through,
you're still getting the best of both worlds. One addition: based on our anecdotal experience, once you're in the 1,400-to-1,500-byte range, the network is no longer limited by the number of packets it can send, but actually by the number of bytes in the packets. So except for some minor extra, percentage-wise, overhead, like IP et cetera,
the cost of sending, like, a 1,400-byte packet versus a 1,500-byte one is incredibly small, and at least at Google we actually, I think, downsized the packets to some slightly smaller number anyway, because fragmentation ends up being more of a hassle than it's worth, as it is for TCP as well.
How does NAT rebinding affect this? Endpoints that look at the IP and port to identify a connection can actually only look at the four-tuple; in QUIC, you would use the connection ID, which is in the QUIC header. Even if the four-tuple changes, you can use the connection ID to identify the connection. That's how it works.
So if you go to chrome://net-internals, you'll see on the left side QUIC, and HTTP/2 too, but there it is: those are three QUIC connections that are using QUIC version 35, and this is Google's QUIC version 35, Q035 specifically. And it shows a bunch of things.
You can see, so you can see the full handshake with the cert, you can see how long everything takes, you can see every single packet that's going by. Probably the more interesting thing is that you can see the handshake process. So this is the crypto REJ, with all the fields that are available in it; like, this is an indication that we're offering AES-GCM and ChaCha20 as ciphers in this case.
The point is, just from a PII perspective: it does show the entire URL, but it does not show any of your cookies or any of the payload going back and forth. So if you were going to use this for debugging purposes, it definitely has much lower sensitivity than, like, a full unencrypted tcpdump.
Yeah, that's actually really important, because we do get, you know, reports of QUIC not working well in some situations. All the time... not all the time, I'm sorry, I shouldn't say that; we get it like once in six months, it's infrequent. But when we do get it, we ask people to send us their net-internals logs, because you can store this and send it off, and this does not have any PII in it at all. So you can.