From YouTube: IETF 115 Technology Deep Dive: QUIC Part 1
Description
IETF 115: Technology Deep Dive on QUIC
This technical deep dive, held in two sessions, will cover in detail the operational realities and technical nuances of QUIC, the new Internet transport technology.
Slides and all the materials from the session:
https://datatracker.ietf.org/meeting/115/session/tdd/
Warren: So if you are sitting in a room called Kensington and you do not see people, you're probably in Kensington 2 or Kensington 3, which is very confusing, because when you first come up the stairs you see Kensington and you walk in, and it's not here. So if there are not a bunch of people in the room with you, you're probably in the wrong place: get up, walk down the passageway with all the glass, and then turn left. Alrighty. So, hi, welcome to the Technology Deep Dives on QUIC. I'm Warren.

And let's get to the next slide. This is the IETF Note Well. We're the first sort of official session, so it's entirely possible you have not seen this. You should probably read it, figure out what it all means, and talk to legal, etc., if you don't know what it means. But yep, this is our IETF Note Well. And thank you, everyone, for showing up so early; I realize it is ridiculously early. Also hello to everyone who is remote, and hello to everybody who's going to be watching this later on YouTube, probably, or some remote video streaming service of your choice. Alrighty, and with that done, let me hand it to Brian.
Brian: So I've actually had this question in the hallway a lot: do I have to come to the technical deep dive on Monday and on Tuesday? Well, it's really early both days; it's even earlier on Tuesday, actually, at 7:30 tomorrow. And, you know, I was asked, "OK, that's a joke, right?" No; I mean, it might be a joke, but it's also the truth.

First comes the basic introduction and a talk about the future with QUIC, and Martin will follow up on that with a talk about how QUIC is layered and how that layering is a bit different from the intuition that you'll have from, you know, TCP over IP, etc.

Tomorrow we will go a bit deeper into a few topics. So this is really about: OK, now we have the basic fundamentals of QUIC; let's talk about how that gets deployed at scale, how it is used in the internet now, and things that we've learned from running at that scale. That'll be Ian — or Ian and Lucas, who I haven't seen yet, but he'll be here tomorrow — talking first about the deployments at scale and some of the things that we've learned through that, and then about how to observe and debug applications on QUIC. And with the rest of the time tomorrow we will have a panel discussion with all of the speakers.

So if you have interesting questions, please hold them till tomorrow. Come at — you know, 8:30 is when that should start — and we'll have a panel up here with people to answer your questions and have some discussion about QUIC. So with that, I will stop talking and invite up Jana.
Jana: Thank you, Brian. Do I have to stand in the spot? I think I do. No, I don't — we'll see if the camera follows me. Well, thank you. Thank you, everybody, for being here early this morning. Hopefully this talk will wake you up; if it doesn't, I'll ask Martin to wake you up. But anyway, let's get started with this thing. Are we doing questions now, or — what are we doing with Q&A?

Q&A will be tomorrow, so if you have any questions, write them down — you'll forget them through the day — and ask us tomorrow, or catch us in the corridors. So I'm going to start with just this brief agenda. It's basically: what is QUIC? I'm going to talk about QUIC's immediate value proposition, and I'm going to talk about, really, what does QUIC enable. Next slide. So the subtext here is that the first piece is simply a short primer.
A very, very short primer on QUIC — I can't do more than that. And we want to talk about how we got the world interested in this: what did we do to make the world interested in this particular technology? And finally, what was the real goal? What was the thing that we were really wanting to do, what did we set out to do from the get-go? Next slide. So, before I get going on QUIC: who am I?

I am Jana Iyengar. I'm VP of Infrastructure and Network Services at Fastly. I am an editor of the IETF QUIC specifications. I'm a chair of the ICCRG, the IRTF congestion control research group, and I've worked on transport for way longer than I care to remember, and on QUIC also for way longer than I care to remember. But that's me. Next — and now on to a short primer on QUIC. Next slide.

This is going to be a very short primer. I don't have a lot of slides here talking about the details of the protocol, the details of the bits; I and others have done talks on these in various places, so if you go on YouTube and do a search, you'll find a bunch of those.

So my goal here is not to go into the details, but to give you a base from which you can dig in and get deeper. I'm just going to start off by saying: QUIC is a new transport protocol. Now, if you look at this picture, this sort of has a depiction — and Martin's going to come in later, thrash this picture, and say, "well, that's not quite how this works."

So that's fun for later, but this is roughly a schematic understanding of where QUIC sits in the protocol stack, so to speak. We have the TCP, TLS, and HTTP protocols in the traditional stack, and QUIC sits basically parallel to TCP and TLS and some of HTTP. So yes, it's weird, but it's all weird: it was a compression of multiple layers, so to speak, and TLS sits sort of within QUIC, but it's not within QUIC.

However, it sort of does sit within QUIC, depending on how exactly you look at it. But that's roughly the layering picture here. QUIC deliberately compresses multiple layers, and it deliberately sits in these spaces, because the goal here was to accelerate, and to deploy something that we could, for the web. Next.
So what are these features of QUIC that really made it viable and useful for the web? Well, first, it's multi-streamed. Multi-streaming is a very powerful feature; it's a very powerful service. The idea here is that within one end-to-end connection you get multiple ordered byte streams. Now, this is not just multiple ordered byte streams; it's a more general abstraction. This works really well for the web, because every website has a lot of objects — effectively, parallel independent objects — and so on.

But this is a more general abstraction. Streams are designed to be lightweight; they're designed to be built and torn down rapidly and efficiently, and if your implementation does it right, you can use this even as a message abstraction. So you can think about this as a protocol that gives you a message abstraction in the form of streams. And for the transport nerds among you: you can build partial ordering on top of this thing, or you can build complete ordering inside this thing, so you have those degrees of freedom with multi-streaming. Next slide.
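The "one short-lived stream per message" idea can be sketched in a few lines. This is a hypothetical, simplified model — not any real QUIC library's API — just to show how lightweight streams give ordering within a stream but independence across streams:

```python
# Sketch: QUIC-style streams as a message abstraction (hypothetical, simplified).
# Each stream is an independent ordered byte sequence; opening one per message
# means loss on one stream never blocks delivery of another.

class Stream:
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.buffer = bytearray()
        self.fin = False          # set when the sender finishes the stream

    def write(self, data, fin=False):
        self.buffer.extend(data)
        self.fin = self.fin or fin

class Connection:
    """One end-to-end connection multiplexing many lightweight streams."""
    def __init__(self):
        self.streams = {}
        self.next_id = 0

    def send_message(self, payload: bytes) -> int:
        # One short-lived stream per message: open, write, finish.
        sid, self.next_id = self.next_id, self.next_id + 4  # QUIC ids step by 4 per type
        s = self.streams[sid] = Stream(sid)
        s.write(payload, fin=True)
        return sid

    def completed_messages(self):
        # A finished stream is a complete, independently delivered message.
        return {sid: bytes(s.buffer) for sid, s in self.streams.items() if s.fin}

conn = Connection()
sid_css = conn.send_message(b"GET /style.css")
sid_js = conn.send_message(b"GET /app.js")
msgs = conn.completed_messages()
```

A real implementation would interleave these streams into packets and retransmit per stream; the point here is only the shape of the abstraction.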
This doesn't necessarily mean that the protocol has to live in user space. However, remember that if you wanted to build something directly on top of IP, you are necessarily stuck with being inside the kernel, and that is a big problem for us, also in terms of building, deploying, and shipping things. So being on top of UDP gave us two significant benefits: we could get through the internet as it is — with middleboxes and firewalls and everything else — and it allowed us to deploy in user space.
We did a better job of doing these things because we've learned from the past; we wanted to incorporate all the learnings of TCP, and we did. And, importantly, QUIC has encryption baked in. This means everything that is carried by the QUIC protocol — the data and the QUIC headers — is protected, and this uses TLS 1.3 for key negotiation. This is a really important premise, and I'll come back to it in a moment; again, Martin is going to go into more detail on exactly how this is done in QUIC. But this was really important to us.
Why was this important to us? Well, of course, it was important to us to protect the metadata. We know today that if we were to design a protocol that is not fundamentally protected, it would seem like we're not learning the lessons of the past 20 years. So we did that. But there was an even more important lesson here — an even more important reason for doing this: middleboxes.

Middleboxes ossify protocols that are exposed. We did not want QUIC to be ossified. TCP is today ossified; many other protocols that have been deployed in the wild in plain text are completely ossified. You cannot change them on the wire without seeing unexpected, weird interactions with middleboxes that have ossified them — that expect certain behaviors of them.
Oh, that's an extra one — sorry, go back one slide. So, with baked-in encryption, what we were able to do is basically say that only the endpoints can really understand and change the metadata — the headers in the protocol — as well as the body. And that's an important thing, because now middleboxes are unable to change the headers, or mess with the headers, or even read the headers, so they can't have any expected behaviors — which means that endpoints are free to change the protocol as they see fit.
So that is my rough introduction to QUIC, and I'm sorry if you didn't see all the header bits that you wanted to see; I don't want to do that to you here, early in the morning on Monday. If you want header bits, walk into any room this week — you'll see plenty of those. I'm going to talk to you about what QUIC's immediate value proposition was, and how we got the world interested. So these are the features, right? This is what I talked about.
This is how we talk about QUIC. But how did the world get interested in QUIC? QUIC broke new ground in several ways. The first thing was the zero-RTT transport and crypto handshake — again, you're going to hear more about that after my talk — and this is fundamentally difficult to do with TCP and the split TCP/TLS model, where you've got TLS sitting on top of TCP.

They end up having different scopes. When I say different scopes, I mean scopes of identity — scopes in terms of where the connection gets terminated in the network — and that makes it fundamentally difficult to do something like zero-RTT. People have argued; people would say, you know, isn't it the same? Isn't zero-RTT in QUIC the same as in TCP? So, before I talk about that: what the zero-RTT transport and crypto handshake gives you is low-latency connection setup, for those of you who've not been paying attention.
Zero-RTT means zero round trips of delay before data is exchanged on a connection. Excuse me. So we've created a zero-RTT transport and crypto handshake. Now, TCP could do this with TCP Fast Open in TCP, and with TLS 1.3 and zero-RTT in TLS. However, because of the split model, you still end up having different scopes: you have TCP, which doesn't understand domains, certs, things like that — it understands only IP addresses — while TLS operates in a different space, and you have a split between the understandings of what the endpoint identity itself is at these two levels. You can reconcile these things, but there's a lot of nuanced work to be done there if you want to make this work in the split TCP model. Next slide.
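The round-trip arithmetic behind "low-latency connection setup" can be made concrete with a back-of-the-envelope sketch. These are the usual textbook handshake counts, ignoring processing time and loss:

```python
# Back-of-the-envelope time-to-first-request-byte, counted in round trips.
# Ignores processing time, loss, and retries; counts are the textbook ones.

def ttfb_rtts(stack: str) -> int:
    rtts = {
        "tcp+tls1.2": 1 + 2,  # TCP handshake, then 2-RTT TLS 1.2 handshake
        "tcp+tls1.3": 1 + 1,  # TCP handshake, then 1-RTT TLS 1.3 handshake
        "quic-1rtt":  1,      # combined transport + crypto handshake
        "quic-0rtt":  0,      # resumed connection: data in the first flight
    }
    return rtts[stack]

def ttfb_ms(stack: str, rtt_ms: float) -> float:
    return ttfb_rtts(stack) * rtt_ms

# On a 100 ms path, a resumed QUIC connection sends application data
# 300 ms sooner than TCP + TLS 1.2.
```

This is why the split model matters: TCP Fast Open plus TLS 1.3 0-RTT can, on paper, reach the same counts, but only with the cross-layer reconciliation work Jana describes.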
Connection migration was another thing that we had wanted for 20-25 years to build into transport technologies, and we finally got it in; we were able to build it into QUIC.
This is, again, fundamentally difficult to do with TCP in the split TLS/TCP model, in part because of the endpoint identities, but also in part because of TCP itself. We've done this with MPTCP — the IETF has done this with MPTCP — but I would again challenge you to think about what we should consider our end goals. We want this to get deployed everywhere, and with MPTCP you still have to play nice with the operators, with the network devices, and so on. I don't mean that we shouldn't play nice with the operators.
What I mean is that we can't wait for every operator and every middlebox vendor to come on board before we consider a protocol deployed. That is a very, very long pole, and that makes the tent unlivable — it's too long a pole. So that's basically connection migration. Again, we've deployed it; it's being used already. Next slide. And we were able to build in troubleshooting and debugging capabilities. You'll hear about this a little bit more from Lucas tomorrow, but the difference here is this:
Have you tried to correlate those traces? Yeah — that is a pain, right? When you see certain application behavior, you go, "OK, I've got an application trace," and then, "OK, now I need to grab the TCP traces, or strace, or whatever it is you need to do," and you need to go down to the kernel and go all the way down the network path to figure out everything. You are trying to use completely different pipelines, built by completely different people for completely different use cases, and trying to correlate them.
Companies that have managed to successfully build those things have used them very, very effectively. However, it's not a small order; it's difficult, right? So being in user space, for QUIC, basically gives you the ability to log transport- and network-level traces alongside application-level traces. That is huge, because you don't have to go around doing this separately; you can log alongside the application traces. You can now log things like: what is the congestion window value? What is the state of the connection? What happened when a stream was created? Again, you'll hear more about this from Lucas tomorrow; he'll talk to you about logging and the logging format that we're standardizing for QUIC here. But we get significantly richer capabilities for doing this in user space. Next.
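To give a flavor of why this matters, here is a sketch of a user-space QUIC stack emitting transport events as structured records in the same trace as application events. The event names and fields are illustrative only — they are not the actual qlog schema that Lucas's talk covers:

```python
import json, time

# Sketch: logging transport-level state alongside application events from the
# same user-space process. Field names are illustrative, not the qlog schema.

class ConnectionLogger:
    def __init__(self):
        self.events = []

    def log(self, category: str, name: str, **data):
        self.events.append({
            "time": time.time(),
            "category": category,   # e.g. "transport" or "application"
            "name": name,
            "data": data,
        })

    def to_json(self) -> str:
        return json.dumps(self.events)

log = ConnectionLogger()
# Transport and application events interleave in one trace -- no separate
# kernel packet capture to correlate afterwards.
log.log("transport", "congestion_window_updated", cwnd=14720)
log.log("transport", "stream_created", stream_id=4)
log.log("application", "request_started", path="/index.html", stream_id=4)
```

Because the transport lives in the same process as the application, one logger sees both, which is exactly the correlation problem the kernel split creates for TCP.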
So actually, go back one slide — yeah. Oh, right, I meant to say this here. There is another really, really cool thing, which I'll show at the end of today; if we have time, I'll demo it. One problem that you have, for instance, is poor behavior at the client side. You can grab client-side packet traces, or you can report it to the server side —

to whomever is at the server — and you can say, "hey, go dig into this particular trace; I'll give you an identifier for a connection, for instance; go find out what happened with that connection, why it was behaving poorly." You have no other recourse. And on the server side, what we would generally end up having to do is find your connection. Now, that is nearly an impossibility:

finding a connection in the fleet of servers that we have, and so on, is super difficult to do — and then go trace it, track it, find where the client is connected. It's really difficult to do. Wouldn't it be cool if you could just see the server's packet trace at the client?
That is what we built, and we were able to do that. Next slide. I'm not going to show you this demo right now, because it's a bit tricky to get going, but the links are here: go to the video-only link and then go to the self-trace link using the same browser window, meaning that you'll have a stream going over the same connection — and that only works, of course, if you're using QUIC. I'm happy to show it to you later if I can get it going.

So this is really, really valuable: at the client, you're able to see a server packet trace. Now I'm going to move to the next thing, which is transforming server architecture. I won't go into the details of this, but direct server return is the ability for a server to hand off a request to another server and have that server serve the user directly.
So it's sort of like: if a client requests a resource from a server, and the server doesn't have the resource but knows another server that has it, it is able to sort of kick the connection over — or the request over — and have the response served directly from there. This is called direct server return, because of what commonly happens otherwise: the easier way to solve the problem of "I don't have the content."

We were able to design this in QUIC, and some of us have been designing and building this into our server infrastructures. The reason we're able to do this is because in QUIC — again, in terms of how we built the transport protocol itself — we were able to separate the sender's view from the receiver's view of the world, and in this particular case that plays very nicely: it allows us to actually build something like direct server return. We can have multiple servers sending, with one receiver receiving.
So this is how we got the world interested. These are the different things, the different benefits, that we brought out, and this is how all of us got excited about QUIC, and so on. But I'm now going to talk about what QUIC is enabling. Next slide. What does QUIC enable? QUIC enables multiple new technologies that you can build within QUIC.

You won't hear about these experiments today — maybe tomorrow. There are new congestion controllers that you can easily build and deploy. I know that Meta has done this, Google has done this, and we've done this at Fastly. It makes it much, much easier to deploy these things in QUIC than it has been in TCP. Next slide.
If you've not heard about MASQUE: MASQUE basically employs HTTP/3 and QUIC to create tunnels, and this is something that, again, QUIC is enabling. Was this technology possible before? Yes, but this makes it much more efficient, much more performant — and also more efficient in the server stacks — to deploy something like tunneling. And I'm not just talking about things here that could be built:

if you have an iPhone and you've turned it on, you're using MASQUE — you're using HTTP/3 and QUIC to do this today. Next slide. And finally, Media over QUIC — or, what I call, the new world for WebRTC refugees — is a proposal to do media directly over QUIC. That's, again, QUIC enabling these technologies to happen now, because it is feature-rich enough that you can actually think about doing more interesting things directly with the transport. Next slide.
So: QUIC. I've told you that QUIC makes the web faster, more resilient, more responsive, but this is just the beginning. Next slide. QUIC enables these technologies I talked about, so it becomes a platform for these new technologies: MoQ, MASQUE, other stuff that we want to deploy. Next. And I've also told you that QUIC is a transport technology that can be evolved on the internet: because we managed to encrypt everything, we can evolve this thing going forward.
It is already continuously evolving: multiple versions of QUIC already exist in parallel on the internet today. And — next slide — I'm going to offer that we pulled a sleight of hand. We basically convinced everybody that these were the reasons we wanted to deploy this thing: HTTP was the reason, and getting these milliseconds of latency improvement, and these features, were the reasons. But — next — we used HTTP on the web as a vehicle to deploy QUIC into almost all server and client deployments.

QUIC is deployed widely now. Almost every server deployment has QUIC in it; almost every client — browsers and other client libraries — has QUIC in it now. Next. But our goal really was to create an end-to-end transport that allowed end-to-end transports and technologies to thrive through an ossified internet, and I would say that we are sort of somewhere — not at the beginning — of this journey.
Martin: All right. So I'm going to talk a little bit about the QUIC handshake and, in particular, some of its security properties — security being one of the primary drivers behind building this thing. Do you have a clicker?
There are a couple of things in here — I may not get to some of the later things in any real detail — but that layering diagram that Jana was talking about: I think we'll spend a little bit of time on that. Quite possibly the most difficult part of getting QUIC working was integrating the TLS handshake into QUIC.

There is something of a tight interaction between those two pieces, and it turned out to be extraordinarily fiddly. We were given a protocol from the work of the folks at Google, who had designed their own cryptographic handshake, and it was broken in tiny, subtle, and very significant ways that required years of work to fix. So, next slide, please.
So you've seen what is, I think, the standard reference point for how we think about layering in QUIC: there's this little TLS slice that's sort of jammed in on the QUIC layer. This is something that, I think, makes people who like their nice layer cakes a little uncomfortable, and we'll explain why that's the case as we get through this one. Next slide, please.
So what we wanted to do was avoid replicating a lot of the security work. It turns out that building a good security handshake is extraordinarily difficult, and TLS 1.3 is the result of a number of years of work. We didn't want to have to redo all of that, because we were also building all of the TCP bits, and TLS, on top of UDP — an entirely new protocol — and that's more than enough work. And it turned out to be even more work than we anticipated when we came into this.
So the way to think of this is that TLS provides all of the cryptographic assurances that you might expect from a protocol, and QUIC provides all the things that TCP would provide — namely reliable, ordered delivery — and in turn they each provide services to the other: TLS requires ordered, reliable delivery; QUIC requires a secure handshake.
Now, other people can talk about streams and the various application semantics that we're providing in QUIC, which include some of the core TCP guarantees, like in-order reliable delivery. But for the handshake, there are a lot of things around the TCP handshake that I think weren't in the original versions of TCP but that, ultimately, TCP needed to have — things like being assured that the other side you're talking to is willing to talk to you, for instance. That turns out to be an extraordinarily important part of the design of QUIC, and we'll talk about that.
We'll look at that as much as we can. The other thing is, we were looking to do better than any of these protocols. We had a new protocol that we were implementing here; we took every opportunity we could to make things better, and I'll touch on a few of those points as we go through. Next, please. So, RTTs: TLS 1.3 is optimistic, in the sense that a client will guess what configuration will work for a server, and that will — in the case that the guess is correct — save a round trip.
If the client guesses wrong, you have another round trip added to that. One of the themes that we'll have with QUIC is that it has a very short handshake if everything goes correctly, but it turns out that you can add multiple round trips if you have packet loss, or the client's guess is wrong, or the server is under duress and wants to tell the client to back off and wait a little while longer. And so we have this very flexible handshake. Ultimately, the key insight is...
I think the most messages we exchange is two round trips' worth, but under normal circumstances, if the client guesses correctly and the server is willing to communicate with the client, we can send data from either side after that first exchange — so we're actually sending before the handshake is complete. And in the extreme case, if the client has been to that server in the past and set up the zero-round-trip-time thing, there is no delay for either end: the client sends application data immediately.
So this is what the TLS handshake looks like. We have some key agreement and configuration that is exchanged more or less in the clear, and then some authentication information. You can sort of see here we've got these lighter lines that show where the data is being exchanged: there's a flow from the client — there might be some application data following after that one; there's a flow from the server — there might be some application data; and then finally, at the end, some more messages, and lots more data at that point.
Next, please. That's what happens when you put TLS on top of it, and there's a little note there saying that we had to tweak TLS in order to get this to work — and that's going to be a bit of a theme as we get into this one. Next. So the QUIC handshake sort of takes the TLS handshake and builds on top of that. TLS messages have essentially four types of keys that are used, and the "no key" in the case of TLS is turned into a real set of keys in QUIC.
So we have what we call Initial keys, which are not secure in any meaningful sense, but they provide us protection against ossification — to the points that Jana was talking about before. Every single version of QUIC uses a different set of keys; if you don't know the keys, you can't speak that version of QUIC. It's sort of a nice little protection against someone who might be inclined to interfere with the handshake:
if they don't know the version of QUIC that's being spoken, they don't get to interact. TLS also provides handshake keys. Those handshake keys protect the details of the handshake; the security guarantees there are very, very interesting.
Those of you who know TLS will perhaps have a better idea of what those properties are, but essentially we're providing confidentiality for things like the server certificate and a lot of the configuration parameters that the protocol has. The TLS bytes are put into specific frames within the packets — so we have packets with frames in them.
The packets are protected with these keys, and we put multiple packets in the one UDP datagram, as it turns out. And QUIC went through many iterations — first going with something that was based on DTLS, because why not, DTLS does UDP (turns out to be a bad idea), then to TLS and TLS exporters for getting keys. Ultimately, what QUIC does is run the TLS handshake, and then, when TLS produces keys, QUIC reaches in, takes those keys out, and uses them for packet protection.
TLS record protection isn't engaged in this; the raw bytes coming out of a TLS handshake are used directly by QUIC. The final two types of keys we have are zero-RTT keys, which the client uses to send to the server if it happens to be attempting zero-RTT, and then the final application data keys, which are used for everything once the handshake is completed. We also have a key update process that rotates those keys periodically, to prevent them from wearing out. Next, please.
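As a concrete flavor of version-specific Initial keys: in QUIC v1 they are derived with HKDF from the client's Destination Connection ID and a salt fixed by the version (this follows RFC 9001; the DCID value below is just an example input). Anyone can compute them — they gate on knowing the version, not on any secret, which is exactly the anti-ossification point. A stdlib-only sketch:

```python
import hashlib, hmac

# Sketch: QUIC v1 Initial secret derivation in the style of RFC 9001,
# using only the standard library (HKDF implemented inline).

V1_SALT = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: str, length: int) -> bytes:
    # TLS 1.3 HkdfLabel (RFC 8446 section 7.1) with an empty context.
    full = b"tls13 " + label.encode()
    info = length.to_bytes(2, "big") + bytes([len(full)]) + full + b"\x00"
    out, block, i = b"", b"", 1
    while len(out) < length:
        block = hmac.new(secret, block + info + bytes([i]), hashlib.sha256).digest()
        out += block
        i += 1
    return out[:length]

def initial_secrets(dcid: bytes):
    # A different version of QUIC uses a different salt, so a middlebox that
    # doesn't know the version cannot even read the Initial packets.
    initial = hkdf_extract(V1_SALT, dcid)
    return (hkdf_expand_label(initial, "client in", 32),
            hkdf_expand_label(initial, "server in", 32))

client_secret, server_secret = initial_secrets(bytes.fromhex("8394c8f03e515708"))
```

The actual packet-protection keys, IVs, and header-protection keys are then expanded from these secrets with further labels, which is the part Martin describes QUIC "reaching in" to take.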
So this is what the simplified handshake looks like. We have the client sending an Initial packet, which contains a CRYPTO frame, which contains a TLS ClientHello; and on the server end we have an Initial packet that contains a CRYPTO frame that contains a ServerHello. That is all, effectively, sent in the clear.
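The nesting just described — a UDP datagram carrying an Initial packet carrying a CRYPTO frame carrying a TLS message — can be pictured as plain containers. This is purely schematic: no wire encoding, header protection, or encryption is modeled.

```python
from dataclasses import dataclass, field

# Schematic nesting of the first flight, per the talk: just the containment
# relationships, not a real wire format.

@dataclass
class CryptoFrame:
    tls_message: str          # e.g. "ClientHello" or "ServerHello"

@dataclass
class InitialPacket:
    keys: str                 # "initial" keys: version-specific, not secret
    frames: list = field(default_factory=list)

@dataclass
class UdpDatagram:
    packets: list = field(default_factory=list)  # QUIC can coalesce packets

client_flight = UdpDatagram(packets=[
    InitialPacket(keys="initial", frames=[CryptoFrame("ClientHello")]),
])
server_flight = UdpDatagram(packets=[
    InitialPacket(keys="initial", frames=[CryptoFrame("ServerHello")]),
])
```

Later flights simply swap in Handshake and 1-RTT packets, protected under the handshake and application-data keys described above.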
Although we're using these special, QUIC-version-specific keys that are generated, that exchange is effectively in the clear. The interesting thing to observe here is that there is a flow that goes from the client to the server and back again in the clear; then, in the opposite direction, there's a flow that goes from the server to the client and back again using those handshake keys; and then, finally, there is application data flowing from that point onwards.
Now, this is a quirk of TLS, but what you will actually see here is a final message that confirms the handshake is done — at the bottom there — which the server sends once it has received everything from the client. We spent years trying to avoid putting this message in; it turns out to be absolutely crucial in a number of scenarios. We had the worst problems with handshake deadlocks and all sorts of weird corner cases before we decided: look, let's just put another message in here.
What this means is that, ultimately, this is a two-round-trip protocol — you can see two round trips on this diagram here — but you're sending data a lot sooner than that, and that's one of the weird things about operating this protocol. Next, please. Of course, all of this integrates with QUIC, and so QUIC, underneath all of this, is providing acknowledgments for all of the data that's being exchanged back and forth here.
You can't acknowledge something under a different key, because, well, maybe the other side doesn't have that key yet. And so there's this weird interlocking thing that goes on here, including some implicit acknowledgments in certain cases, which gets a little bit interesting as well. But this all illustrates that QUIC is providing all of the transport reliability features that TLS requires: TLS sends very large messages that need to be sent and received in a very particular order.
So this is ultimately what we have in terms of the layering diagram, and I think that thinking about layering in the classic sense — where you have a protocol that sits on top of another protocol — doesn't really work for QUIC. The key thing to realize here is that it's more like a software architecture diagram, where there are certain components that provide different capabilities, and they have interactions with other components.
If you think of the TLS stack as taking handshake messages and returning handshake messages, and then providing information about state changes and the various secrets that it might be generating, then you have the ability to build a component that sits inside the greater protocol. And so you have crypto streams responsible for exchanging those handshake bytes back and forth.
And then you have a packet protection layer that takes the packets that you're sending to the other side, takes the secrets from TLS, and protects those packets — or removes protection from those packets. And then, of course, all of the things that we concretely care about, in terms of streams and, ultimately, the QUIC datagram work as well, are sort of sitting in there, providing more frames that can be exchanged back and forth.
So this is what I tend to think of as the ideal structure of QUIC on the inside. It's not that simple — this is a gross simplification — but there it is. Next, please.
So the other part of all of this is actually mostly new in the protocol. We've taken inspiration from protocols that preceded it — TCP and other things — but the denial-of-service mitigations in QUIC, as part of the handshake and later, are somewhat more interesting than the software engineering exercise of getting a TLS stack crammed into QUIC. So we have a few basic rules.
E
We had a long debate a little while ago about where this number three came from, and no one knows concretely. We had a number of people involved in the design of this, and none of them can remember where the number three comes from. But there's this basic rule that we follow in QUIC: if you're sending to an address that you haven't confirmed is willing to receive the packets that you're sending, only send three times as much data to that address as what you've received from that apparent address.
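The three-times rule described here is simple enough to sketch in code. This is an illustrative model only (the class, method names, and numbers other than the factor of three are made up for the sketch, not taken from any real QUIC stack):

```python
# Illustrative sketch: how a server endpoint might enforce QUIC's 3x
# anti-amplification limit before a client address has been validated.

class UnvalidatedPath:
    """Tracks bytes for one apparent client address until it is validated."""

    AMPLIFICATION_FACTOR = 3  # the "three times" rule

    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.validated = False

    def on_datagram_received(self, size: int) -> None:
        # Every datagram from the apparent address raises the sending budget.
        self.bytes_received += size

    def sendable_budget(self) -> int:
        # Once the address is validated, the limit no longer applies.
        if self.validated:
            return 2**62  # effectively unlimited
        return self.AMPLIFICATION_FACTOR * self.bytes_received - self.bytes_sent

    def try_send(self, size: int) -> bool:
        # Only send if it keeps us within 3x of what we've received.
        if size > self.sendable_budget():
            return False  # must wait for more data, or validate the address
        self.bytes_sent += size
        return True


path = UnvalidatedPath()
path.on_datagram_received(1200)   # client's padded Initial datagram
assert path.try_send(1200)        # first server flight: fine
assert path.try_send(2400)        # still within 3 * 1200 = 3600 bytes
assert not path.try_send(1)       # budget exhausted until validation
path.validated = True             # e.g. a valid token or handshake packet
assert path.try_send(100_000)     # no limit once validated
```

The point of the factor is that a spoofing attacker gains at most a 3x amplification from the server before validation succeeds.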
E
No more. There's an address validation process that we use: send to that address and get confirmation that the address is live and willing to receive the packets that you're sending to it. And we have multiple types of address validation in QUIC because of the way that we designed the handshake and, ultimately, migration, which I'll touch on later. Really, what we want to do here is ensure that QUIC is not used as a platform for denial-of-service attacks against unwitting and unwilling victims on the internet.
B
E
That's pretty important, ultimately, for making sure that everything works properly. Next slide, please.
E
So the basic handshake amplification attack is that a client sends an Initial packet that happens to have 0-RTT in the same datagram: you can put a request in the same packet as your Initial. That might be a totally valid packet that a server would accept under normal circumstances.
E
It might be very happy to accept that packet from the client. But if the client manages to spoof the address, the return flow from the server, which contains all the QUIC handshake information and potentially the answer to the question that they asked, could be very large, and the poor victim who genuinely owns that address might find themselves on the receiving end of a large flight of packets from a very well connected server. We don't want this to happen. So, next slide, please. TCP solved this, I think, a long time ago.
E
I don't know when that was, but it was before I was involved in any of this, and it has a three-way handshake. Of course, when we're doing TLS over TCP, this adds an extra round trip to the setup, which is a little annoying; it slows things down a little bit. But essentially TCP confirms its willingness to communicate before you start doing any of the TLS stuff. In QUIC, we put this all together, so we're doing the cryptographic handshake and this confirmation to communicate all at once, and of course that's not simple.
E
This looks approximately like this: the client sends a packet, maybe some extra packets, and the server says, hey, no, please confirm before proceeding, and it sends the client a token. If the client is a genuine client, it will receive that token and can stick it in the packet that it generates for the next attempt, and everything moves on from there. The server has now received confirmation that the client is able to receive the messages that the server is sending, and that the client is willing to participate in the protocol.
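One common way to make such a Retry token stateless is to MAC the client's apparent address and a timestamp with a server-local key, so the server keeps no per-client state between the two Initials. The sketch below is a hypothetical illustration of that idea; the token layout and names are invented and are not the QUIC wire format or any particular implementation:

```python
# Hypothetical stateless Retry token: bind the client's apparent address
# and an issue time into an HMAC-protected blob the server can verify later.

import hashlib
import hmac
import os
import struct
import time

SERVER_KEY = os.urandom(32)   # server-local secret, never sent on the wire
TOKEN_LIFETIME = 30           # seconds a token stays acceptable (illustrative)

def make_retry_token(client_addr: str) -> bytes:
    issued = struct.pack("!Q", int(time.time()))
    mac = hmac.new(SERVER_KEY, issued + client_addr.encode(), hashlib.sha256).digest()
    return issued + mac       # 8-byte timestamp + 32-byte MAC

def check_retry_token(client_addr: str, token: bytes) -> bool:
    issued, mac = token[:8], token[8:]
    expected = hmac.new(SERVER_KEY, issued + client_addr.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        return False          # forged, or replayed from a different address
    (ts,) = struct.unpack("!Q", issued)
    return time.time() - ts < TOKEN_LIFETIME

token = make_retry_token("192.0.2.1:4433")
assert check_retry_token("192.0.2.1:4433", token)        # genuine client
assert not check_retry_token("203.0.113.9:4433", token)  # spoofed address fails
```

A spoofing attacker never sees the token (it goes to the victim's address), so it can't produce a valid second Initial.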
E
Retry, of course, is probably not something that you want to do, because you're adding a round-trip time to the connection setup. It is particularly good for cases where the server is under stress and wants to make sure that every client is genuine. If they're under attack, then it might be a good way to manage that, or if you think that the traffic is coming from somewhere that is unreliable for various reasons, say the reputation systems that you have indicate that it may be a little chunky. But that round trip is expensive, so we have some tricks for the case where the handshake is shorter. Next, please.
E
So what we need to do is prove to the server that the client saw the server's Initial packet. How do we do that? It's very simple. The first exchange that happens in the clear between the client and server establishes some cryptographic keys, and it does that based on information in those packets. The next set of packets from the client, the handshake packets, use those cryptographic keys to generate new packet protection keys.
E
If the client produces a valid handshake packet, that is because it saw everything that the server produced, and so we have what amounts to an implicit token at that point. Until this point, the three-out-for-one-in rule applies. The next slide will show you the same thing that you saw before, but as the Initial from the server reaches the client, the client generates some new cryptographic keys to protect the handshake packets.
E
That handshake packet is proof that the client saw the server's keys, and so that allows us to proceed by layering in the address validation process without paying any extra bytes at all. I hope that's clear, but that's a trick that we apply in a couple of other places. Next slide, please.
E
Oh, but of course, that goes both ways. The server needs to prove that it saw the client's Initial as well; the client needs to confirm that the server is willing to talk to it. And at this point we have a little trick. These initial keys that I told you are version-specific are actually derived based on what we call a connection ID, and that connection ID is an unguessable value from the client. So when the server responds using that connection ID, or using keys derived from that connection ID, then the client can confirm that the server is willing to talk to it.
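That derivation can be sketched concretely. The salt below is the QUIC v1 initial salt from RFC 9001, the expand step follows TLS 1.3's HKDF-Expand-Label, and the connection ID is the example value from RFC 9001's appendix; the rest is an illustrative reimplementation under those assumptions, not production code:

```python
# Sketch of how QUIC v1 Initial keys are bound to the client's Destination
# Connection ID: both sides run HKDF over a version-specific salt and the
# connection ID, so a server that answers with matching keys demonstrably
# saw the client's Initial and speaks this version of QUIC.

import hashlib
import hmac

# Version-specific salt for QUIC v1 (RFC 9001, Section 5.2).
INITIAL_SALT = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: bytes, length: int) -> bytes:
    # TLS 1.3 HKDF-Expand-Label with an empty context (RFC 8446, Section 7.1).
    full_label = b"tls13 " + label
    info = (length.to_bytes(2, "big")
            + bytes([len(full_label)]) + full_label
            + b"\x00")  # zero-length context
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(secret, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# The client picks an unguessable Destination Connection ID
# (this one is the example from RFC 9001, Appendix A)...
client_dcid = bytes.fromhex("8394c8f03e515708")
initial_secret = hkdf_extract(INITIAL_SALT, client_dcid)

# ...and both sides derive direction-specific secrets from it.
client_initial = hkdf_expand_label(initial_secret, b"client in", 32)
server_initial = hkdf_expand_label(initial_secret, b"server in", 32)
assert client_initial != server_initial
```

Because the salt is version-specific and the secret depends on the client's connection ID, a correct server response implicitly proves both version understanding and willingness to talk.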
E
So not only has the server proven that it understands the version of QUIC that's involved, it's also proving that it's willing to talk to the client by responding to it in this way. And that retry thing that we talked about before needs the same sort of mechanism; it has a different mechanism for managing that same process, but there's an integrity check in there as well.
E
Next, please. So that's the implicit token that we have on the server side. Next, okay, yeah. So all of this is somewhat fiddly to get right. There were a lot of deadlocks that we discovered in the process. I think we spent the better part of two years going back and forth over some of the more tricky ones; certain people had a very good habit of finding new ones every time we thought we fixed them.
E
We've spoken to academics about this one, and there are systems out there that might be able to prove these sorts of things, but it's rather challenging. And I didn't even talk about version negotiation, which adds even more complexity, but I won't talk about that here because we don't have the time. Next, please. Briefly, on migration: the migration process follows the same three-out-for-one-in rule. We have migration for a number of reasons.
E
Probably the most interesting one is a client that is sitting behind a NAT. They get given an address and they happily talk to a server back and forth, and then they go quiet for a little while because, well, they just don't have anything to say at that point. When they restart the communication after that brief period, the NAT has decided that it's going to give them a different IP address on that flow, and so what the server sees is a message from the client that has a new IP address.
E
That is completely unvalidated, and it could maybe be an attack, and so if the server were to continue sending large amounts of data to that new address, bad things might happen, because that address might be spoofed. Of course, in a lot of cases, most cases in fact, it's just the NAT doing what NATs do. Next slide, please.
E
So we want to deal with NAT binding changes. We want to allow connections to move to new paths, even legitimately, but we also want to ensure that an attacker can't force someone to move if they don't want to move.
E
We also want to ensure that an attacker can't stop someone from moving if they want to move. And unfortunately, if we were to show you an IP and UDP packet header, they're not protected, and the network rewrites them all the time. In fact, to some extent we kind of rely on the network being able to rewrite these things.
E
Maybe that was a bad idea, but that's the network that we have, and so we're in this kind of really awkward situation. So, next slide, please.
E
There we go. So migration looks like this: you have an established connection between client and server, and maybe an attacker takes one of your packets, maybe a packet that you had dropped, and sends it from a new address. The server looks at this packet and says, I'm not sure about this one; this might be legitimate, I don't know. And so what it does is probe that address.
E
If the client is legitimately moving to the new address, then it will respond from the new address and proceed. If it's still on the old address, and the attacker decided that it wanted to force the client to migrate to a new path, then the attacker should be unable to produce the correct response. Whichever one is legitimate will produce a response that the server will then respect, and migration will proceed. Next slide, please.
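That probe-and-echo exchange is QUIC's path validation (PATH_CHALLENGE and PATH_RESPONSE frames). It can be modeled roughly like this; the frames are shown as plain values rather than real wire encodings, and the class and method names are made up for the sketch:

```python
# Rough model of QUIC path validation: packets arriving from a new address
# trigger a PATH_CHALLENGE carrying unguessable data, and the endpoint only
# switches to the new path if a PATH_RESPONSE echoes that data back.

import os

class PathValidator:
    def __init__(self):
        self.pending = {}      # new address -> outstanding challenge data
        self.active_path = None

    def on_new_address(self, addr) -> bytes:
        # Probe the unverified address with 8 bytes of random data.
        data = os.urandom(8)
        self.pending[addr] = data
        return data            # would be sent in a PATH_CHALLENGE frame

    def on_path_response(self, addr, data: bytes) -> bool:
        # Migrate only if the response echoes our challenge exactly.
        if self.pending.get(addr) == data:
            del self.pending[addr]
            self.active_path = addr
            return True
        return False           # an off-path attacker can't guess the data

v = PathValidator()
challenge = v.on_new_address(("198.51.100.7", 50000))
# A genuine client on the new path echoes the challenge:
assert v.on_path_response(("198.51.100.7", 50000), challenge)
assert v.active_path == ("198.51.100.7", 50000)
# An attacker who never saw the challenge cannot validate:
c2 = v.on_new_address(("203.0.113.5", 1234))
assert not v.on_path_response(("203.0.113.5", 1234), os.urandom(8))
```

The unguessable challenge is what lets the legitimate party "win" regardless of what address the attacker replays packets from.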
E
So, in order to get this to work, we didn't want to solve the problem that ICE solves, so only clients can migrate, in this version of QUIC anyway. Servers can ask clients to migrate, but only once we have this thing that happens during the handshake that allows it to happen; clients are the ones that initiate the process.
E
Migration is very simple at some levels. You simply detect that an address has changed on the other end and you start sending data to that address, but you follow the three-times rule until you've managed to validate that address, and the validation process was on the previous slide. Next, please. And I think we're up. So this very simple three-times anti-amplification rule applies to all addresses, and it's pretty straightforward to apply.
E
You need to validate all paths before you speak on them. And simplifying to client-only migration means that we don't end up with complications in the protocol state machine where both sides decide to migrate at the same time, which doesn't really work very well. That leads into a whole set of other design problems, where we use connection IDs on different paths; but I haven't spoken about connection IDs and probably shouldn't, because I don't have time. Next. So that's only sort of a taste of all the security-relevant things.
E
We could probably spend another couple of hours talking about how packets are protected and how the packet header is protected, which is an interesting story in itself. Key rotation is a part of that.
E
We also provide an equivalent to a TCP reset, which we call a stateless reset, that allows a server that loses state to clean up any connections that might be hanging around from before it lost that state. That is secure, so unlike TCP resets it cannot be injected by the network.
We also have a whole version negotiation thing that is nearing publication, which required a whole lot of interesting discussions as well, but there's not enough time to cover all those things here today. And that's me done. Thank you.
C
E
Honestly, no one really knows; it was pulled out of the air, I think. Yeah.
C
Talk to Ian; Ian will talk about that tomorrow, as to where that three... yeah, oh yeah, that makes a lot of sense. Cool. So thank you all very much for being with us so early this morning. Thanks especially to our speakers: excellent, excellent, excellent presentations. We hope to see all of you and more tomorrow morning at 7:30, not at eight. We have a little bit of extra time for the panel discussion, and we ran early as opposed to running late on that one.