From YouTube: IETF102-TSVAREA-20180718-1520
Description
TSVAREA meeting session at IETF102
2018/07/18 1520
https://datatracker.ietf.org/meeting/102/proceedings/
A: The first one is from Christoph Paasch on MPTCP and the difficulties of bringing it into the Linux kernel. Then we have a presentation from Ian, which has a different title now (I didn't update this slide); it is on QUIC deployment challenges. The third presentation is on wire images and path signals, from Brian and Ted, which considers two documents that are under discussion in the IAB. And then, if everything runs well, we still have ten minutes of open mic at the end. Any agenda bashing?
A: Okay. As always, a big thank you to our review team. We're still trying to get everything up to speed, but we see more reviews, and we see those reviews in a timely manner, which is really important for us. So thank you for that. We're also still trying to figure out how to optimize the workflow and the reviews, so in case you're interested in becoming a transport area reviewer, you can talk to us, or we might talk to you at some point.
A: And this is the usual slide we have on the working group status from the AD perspective. It's a very brief, high-level view, so if you're chairing and you disagree, don't worry, or talk to us. From my side, I have a couple of working groups which are wrapping up their charter, which is very positive, I think, so we get some work done there, and then I also have some longer-running working groups which are making good progress.
C: [...] to say that the connection ID for QUIC is really hard. I think the other thing to mention is that we are looking at several groups that are finishing up milestones, so we're getting close to a recharter-or-conclude decision for several working groups; we're just keeping that in the back of our minds.
A: Right. Any questions on this part? Yeah. We have also done some work. I think this is actually fewer documents than, for example, last time, but I think that's within the normal variation. Also, there are a couple of documents which are currently under IESG evaluation. So that's the progress we have made since the last meeting.
A: Yes, Spencer, do you want to go on this one? Yeah.
C: So we were contacted about this. Basically, the BBF has been relying on the TLS keepalive mechanism in the NETCONF server, and post-Heartbleed everybody is removing support for that as fast as they can. The suggestion was made over in the NETCONF world about using TCP keepalives, and the security ADs were not happy about a plaintext mechanism for keepalive, so we were not happy about encouraging transport-only keepalives, for several reasons. So we had a couple of proposals there.
C
One
was
you
know:
could
they
could
they
actually
talk
people
into
not
removing
RC
6520
support
which
is
humanly
possible,
but
I
wouldn't
bet
on
it?
The
other
one
is
just
basically
saying
that
they
need
to
do
application
level.
Keep
lives
anyway,
because
that's
what
matters
more
broadly,
we've
been
asked
about
providing
some
kind
of
consensus
guidance
about
this.
That's
not
just
you
know
me
and
Maria,
and
a
couple
of
second
ease
opinions.
C
So
you
know
basically
a
statement
that
would
recommend
against
using
plain
text,
keep
alive
mechanisms
for
secure
transport
sessions,
recommend
using
application
level,
keep
lives
to
actually
test
liveness.
There's
a
there's,
a
thread:
ESP
aerialist
call
statement
regarding
and
keep
lives,
and
we
would
you
know,
there's
minutes
of
discussion
under
that
thread
already.
We
would
be
interested
in
hearing
more
so.
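For illustration (this sketch is not from the talk; the timing values are made up): a transport-only keepalive is the kind of thing below, which enables TCP keepalives on a Linux socket. The guidance under discussion would prefer an application-level ping over relying on this alone, because these probes are cleartext on the wire and only test reachability of the peer's TCP stack, not the application above it.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Minimal sketch: enable transport-only TCP keepalives on a connected
 * socket. Values are illustrative, not a recommendation. The statement
 * being discussed would instead recommend an application-level
 * keepalive (e.g., a NETCONF-layer ping) to test liveness end to end. */
static int enable_tcp_keepalive(int fd)
{
    int on = 1, idle = 60, intvl = 10, cnt = 3;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    /* Linux-specific knobs: idle time before probing, probe interval,
     * and probe count before the connection is declared dead. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
        return -1;
    return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
}
```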
A: Okay, I'll go to the next slide. Yeah. This is also related work we want to point the transport area to. That's a congestion control mechanism that was, or is, developed for CoAP and mainly discussed in the CORE working group, but given this is a congestion control scheme, there will be a presentation in ICCRG, so transport feedback is highly appreciated in ICCRG tomorrow afternoon, I thought, and Friday. Okay, tomorrow. Good.
C: Yeah, thank you in advance. So I'm still the outgoing transport area director; my term ends in March. Martin, Mirja, and I have been working since 2013 to make the job doable, and there's a link in the presentation for what we told NomCom, which is actually pretty close to what NomCom is using for their position descriptions, if you just look this up on the NomCom page. Please nominate freely, and please give lots of feedback on willing nominees, because this really matters.
A: Yeah, and I also thought I'd talk a little bit more about what "doable" means. I could point people to my Pecha Kucha, but in this case maybe more information might be interesting for you. So, what does the daily life of an AD nowadays, or a transport AD, look like? I think Spencer spends, and I spend, like 15 to 20 hours a week; it might sometimes be more, might be less. The only problem I sometimes have is that it's really 15 to 20 hours every week.
A: So if I go to a conference, it will be more hours the next week, but yeah, it's still a reasonable amount. Most of this time is really reviewing in IESG evaluations. So it's reading drafts from all the different areas, trying to understand as much as possible. And that's about it; we now have a limit of 400 pages for each telechat. There's a telechat every two weeks, which is about 10 to 20 drafts, usually, but, I mean, you know drafts, right.
A: If the draft is short enough and not very related to transport, I can do a draft in 30 minutes, but if there actually is a problem and I have to understand things, it of course takes longer. Then there's also, of course, a little bit of load that is related to the working groups, but in transport we only have 12 working groups, so it's like six working groups per AD. And I actually looked it up: we published 25 RFCs last year.
A: There are other things the IESG is doing, like everything related to the IETF process itself, and we need people doing this in the IESG, but usually it really depends a little bit on who's interested in the topic and who has time to do it. I myself don't take up a lot of these additional responsibilities. It really depends on your own time commitment.
A: So I actually try to manage to keep my IETF work to like two days a week, so it's usually Monday and Thursday, and sometimes I don't read the emails on the other days, or I have a very quick look. But after all, it's not that you have to read all the emails, because it's reviews, it's comments to reviews, and so on.
C: I could, yeah. So what I ended up doing was binning my IESG email into documents that I was responsible for, documents that I was not responsible for but on telechats, and kind of everything else; that allows me to prioritize a bit. The other thing I would say, and Mirja has referred to this a couple of times, is that there are things that the IESG needs to do, things that the IESG needs to worry about, but we don't need 15 people worrying about them; that would not actually help.
F: Hello. This is a modified version of a talk that we gave last week at the Linux netdev conference. This is work where we are working together with a bunch of people: there are Mat Martineau and Peter Krystad from Intel, Matthieu Baerts from Tessares, and myself. Mat and Matthieu presented it last week at netdev, and I'm presenting here a slightly modified version that shows more of the IETF and protocol design impact on upstreaming.
F: MPTCP, as most of you know, has been an experimental standard for, I think, a few years now, and since the beginning we have had a Linux implementation that implemented the standard. However, it is still not upstream in the Linux kernel, and some are surprised by that. There are various reasons for that, and I will talk about some of them now. So the question is: why is upstreaming MPTCP actually so complicated?
F: Any addition of fields in structures incurs additional cache misses, which basically kills performance, and if-statements introduce additional branching. The maintainers of the Linux kernel networking stack are very sensitive to all of this, and because of this the stack has become extremely optimized, so every change in the Linux TCP fast path is scrutinized heavily.
F: Then there's also the original implementation, the one where everything started, that we created back in the days; Sébastien Barré actually began it, for those who know him, and I took it over then. For this implementation, the goal was a bit different. Our goal was not to upstream it immediately; the goal was basically to have a way to quickly iterate and implement the standard as the standard was evolving. As the draft was evolving, we wanted to be able to experiment and to quickly get numbers out of this.
F: We used the MPTCP implementation to see how certain decisions in the protocol would affect the performance of MPTCP. So our goal was basically to have a non-generic stack and to be iterating quickly at the time. Also, we were researchers, so our goal was to write papers and not necessarily to upstream code to the Linux kernel.
F: Now, over time, this implementation has evolved into a more stable version, and nowadays there are, I think, millions of devices that are using it. People have heard of Samsung deploying it in Korea, and all of this is based on this particular stack. But all of those deployments that are currently out there are still very special-purpose deployments, where the system administrator who was deploying it has tight control over how MPTCP is being used.
F: Now we want to evolve the MPTCP implementation in the Linux kernel so that it can actually be integrated into the Linux kernel much more easily, and that means we have a few constraints. First of all, there can't be any performance regressions in the regular TCP stack; that's the non-plus-ultra condition. The second one is that we want it to be maintainable and configurable.
F: The current, special-purpose way of deploying MPTCP can't be done in a generic implementation that might be used by Android and many other systems, so we want it to be deployable in a variety of deployments. An ideal implementation would basically look like this: you have the socket layer; below that you have an MPTCP socket, and then you have the TCP subflows underneath. That would be the clean design.
F: So that brings me to the protocol challenges, and I must say, this is not to put the blame on one side or the other; I mean, I was also part of the protocol design, so we are all part of this. This is just about how one part of the protocol specification influences certain decisions that need to be made inside an implementation.
F: The first protocol challenge is the data sequence numbers and the mappings. For those of you who know MPTCP: we have two sequence number spaces. There's one sequence number space for the data that is being sent, and there are the sequence number spaces in the individual TCP subflows. Take the data sequence space, for example.
F: Let's say, in this example, it goes from one, two, three, four, five, and we are sending segments. Say we send the gray part of the data on the left subflow and the yellow part of the data on the right subflow. This means that for every segment that we send, we need to specify the data sequence mapping as part of the segment. The MPTCP protocol says that this DSS mapping is defined inside a TCP option.
F: That seems like a very obvious design decision, and at the time when we were designing the protocol it seemed like a good approach. Now, however, if we want to implement this, it becomes a little bit tricky. One of the problems is this: we have the MPTCP socket, which is basically holding data, and then we are pushing data down onto one subflow. If we have a clean interface between both, we are basically just pushing data down; it's just a memory operation.
F: Now we need to add to this: we have to tell the TCP stack, "I would like you to write, with this segment, this particular TCP option", and so the TCP stack would need to call back into the MPTCP stack and ask, "hey, what kind of DSS option should I write?". But this introduces a lot of back and forth between the layers.
F: So the other solution we would want is: why not simply add this DSS option information as part of the metadata that comes with the data? If you know the Linux kernel, it has basically the memory region that is holding the data, and then you have what is called the sk_buff, which is holding the metadata.
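For illustration (a sketch, not the actual patch set): the per-segment state a DSS mapping would have to carry follows the DSS option fields in RFC 6824; the struct and field names below are hypothetical.

```c
#include <stdint.h>

/* Hypothetical sketch of the metadata a DSS mapping must attach to each
 * outgoing segment (field set follows the DSS option in RFC 6824).
 * The upstreaming problem described in the talk is that sk_buff cannot
 * simply grow to hold this without adding cache misses, and the mapping
 * must survive every split/merge the TCP stack performs on the skb. */
struct mptcp_dss_map {
    uint64_t data_seq;     /* data-level sequence number             */
    uint32_t subflow_seq;  /* relative subflow sequence number       */
    uint16_t data_len;     /* length of the mapped data-level range  */
    uint16_t csum;         /* optional DSS checksum over the mapping */
};
```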
F: So the obvious approach would be: let's simply add the DSS mapping inside the sk_buff. That is problematic for a few reasons. The first reason is that we can't simply increase the size of this structure, because the TCP stack is so highly optimized; adding fields to a structure means potentially more cache misses, and a cache miss can cost on the order of hundreds of CPU cycles, which will simply kill the performance. So it is basically forbidden to add anything to this kind of structure, except if you manage to do it in a cache-neutral way. The second reason, even if we managed to somehow add this information to the sk_buff such that there are no additional cache line misses, is that there's still a problem: the TCP stack can, at any point, decide to split a segment into multiple parts.
F: Or merge it with others. When it retransmits, or when it receives a SACK, the Linux TCP stack is going to split those segments into different pieces and might merge them back with others. What that means is that anywhere in the TCP code where we are splitting and merging segments, we need to make sure that the DSS mapping is propagated through the metadata as well; again, that means more if-statements in the TCP stack.
F: Another challenge that we are facing is that we are transmitting signals in TCP options, and there's no clean interface in the TCP stack to propagate TCP options up to other layers. Usually, TCP options only have a local meaning inside the TCP stack. Take the example of SACK: the SACK option will be taken care of by the TCP stack only. Same for the timestamp option, and same for most other options that we can think of.
F: The TCP socket will be destroyed, and we now need to send a REMOVE_ADDR on the other subflow. That means, first, there is a notification coming up to the MPTCP stack, and then we need to push information down to another TCP stack, telling that other TCP stack, "I want you to send an acknowledgment with a very particular TCP option". Currently, there is no clean interface for those kinds of things; usually it is an application that is sitting on top of the TCP stack.
F: Another problem: you can see this on the receiver side, when we are receiving TCP options. For example, we receive the REMOVE_ADDR option, which means you need to kill the other TCP subflow. So we receive a TCP ACK with a particular TCP option. First of all, this TCP ACK looks like a duplicate ACK and usually gets simply dropped.
F: So we have signaling between the layers that originates from a TCP option. There are many of those signals inside the MPTCP specification, and each of them needs different kinds of behaviors at different layers in the stack, so it's tricky to make sure that all of this gets consolidated into a single point. All this cross-layer signaling is basically the major pain point for the MPTCP implementation.
F: So how are we trying to fit this MPTCP piece back into the networking stack in Linux? First of all, having this layered approach is very clean; it fits very nicely into the existing stack. We can create a socket with a certain type that basically sits like a shim layer between the TCP flows and the application. There are some internal interfaces that allow sending data together with the metadata, with the sk_buff, and also reading it back out.
F: This kind of design fits very well. The challenge comes later on, for the cross-layer interactions. One obvious approach is to add indirect calls, because that allows a generic design: it allows making the TCP stack not MPTCP-specific, and protocol-specific code is something an operating system always tries to avoid.
F: However, since Spectre and Meltdown, indirect calls have become extremely costly, and so the Linux kernel developers and maintainers are basically avoiding indirect calls; indirect calls are no longer a solution for a generic implementation. And the other problem, which I already mentioned, is the sk_buff's non-extensibility: if we want to add any kind of metadata to a packet or segment that is being sent, it needs to be done in such a way that it doesn't increase the size of the structures, which is very tricky.
F: So what are the next steps for us to target upstreaming? Together with the people from Intel and Tessares, we have now been working on it for roughly one year, and last week we presented our plan, and the challenges that we are facing, in a more detailed presentation than the one here today. We received very supportive feedback from the netdev community, and the TCP maintainer said that he actually really wants MPTCP to be upstreamed.
F: We started to reduce MPTCP to the least minimal viable implementation: removing all the features, bringing MPTCP all the way down to the bones, so that it can just interoperate with another implementation, but without supporting any feature. The cross-layer interactions we are trying very hard to consolidate into one single place, so as to reduce those cross-layer interactions.
F: One slide for lessons learned. Okay, so in terms of lessons learned, specifically with regard to standardization, here is what, in my opinion, we should maybe keep in mind in the future. One is, obviously, that the protocol design has a direct impact on the implementation; any decision that is being made can impact the upstreamability or the widespread deployment of the protocol. One thing that is very tricky, in my opinion, is this: TCP options should only be used for what I would call unreliable signals.
F: That is, not for signals that are linked to the payload, because in an implementation it is extremely difficult to make this metadata move along with the payload. The payload, inside the TCP stack, can get split and can get merged with other segments, and many TCP stacks don't have a real notion of fragments or segments. So TCP options are best used for just unreliable signals, like a SACK option: there's no relation to the data itself.
F: It's purely related to the TCP header. Cross-layer interactions should, at best, be asynchronous, so that the signal that is coming in through the TCP option can basically be queued and can be read out of the TCP stack later on. If it's asynchronous, it simplifies the layer separation between the different stacks.
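A minimal sketch of what "asynchronous" means here, with entirely hypothetical names: option-derived events are queued inside the stack and drained by the upper layer at its own pace, rather than calling up synchronously from the fast path.

```c
#include <stdint.h>

/* Hypothetical sketch: decouple option parsing from the layer above by
 * queueing option-derived events. Names and sizes are illustrative. */
enum mp_event_type { MP_EV_REMOVE_ADDR, MP_EV_ADD_ADDR, MP_EV_MP_FAIL };

struct mp_event { enum mp_event_type type; uint8_t addr_id; };

struct mp_event_queue {
    struct mp_event ring[64];
    unsigned head, tail;        /* single producer, single consumer */
};

/* Called from the TCP option parser: never calls into the upper layer. */
static int mp_event_push(struct mp_event_queue *q, struct mp_event ev)
{
    if (q->head - q->tail == 64)
        return -1;              /* queue full; drop, signal is unreliable */
    q->ring[q->head++ % 64] = ev;
    return 0;
}

/* Called later by the MPTCP layer, outside the TCP fast path. */
static int mp_event_pop(struct mp_event_queue *q, struct mp_event *ev)
{
    if (q->head == q->tail)
        return 0;
    *ev = q->ring[q->tail++ % 64];
    return 1;
}
```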
F
Also,
one
lesson
that
also
I
learned
I
was
very
much
involved
in
the
prototyping
at
the
beginning
of
the
linings
and
empties.
The
implementation
is
that
the
prototyping
is
very
different
from
widespread
deployment
and
integration
in
an
upstream
car
product
during
prototyping.
You
try
to
be
quick.
You
try
to
iterate
fast,
you
do
short
cuts.
F
You
do
sometimes
nasty
things,
because
you
want
to
finish
paper
before
the
deadline
and
you
want
to
get
some
numbers
out
of
out
of
the
implementation,
and
so
all
of
this
is
extremely
different
from
the
deployment
in
a
real
in
a
large-scale
and
in
a
real
generic
way
and
now
I.
Thank
you
very
interesting.
B: Very interesting; I learned quite a bit. I don't know if I would replace your first bullet, but I would certainly add a second bullet that says the converse, which is: implementations directly impact protocol design. It seems to me that the story you're telling is that there are a bunch of design constraints on the protocol that you didn't know about when you were designing MPTCP, that you discovered later on, and that caused you to have to do a bunch of rework.
B
That
you
learn
things
like
that,
like
the
the
buffer,
the
cache
access
efficiency
was
a
constraint
and
you
needed
to
change
your
design
to
accommodate
that,
and
maybe,
if
you've
known
that
earlier
on
and
what
you
might
have
made
different
design
decisions.
So
I
guess,
let
me
ask
it
differently.
If
you
had
known,
then
what
you
know
now
about
the
constraints,
would
you
have
designed
the
protocol
differently?
Yes,
absolutely!
Okay,
so.
B: That seems important for people who want to get the technology out. I mean, we spend a lot of time in the IETF these days talking about making our stuff deployable, right? That's one of the reasons we're talking about GitHub; we want to reduce the distance between us and actually rolling things out. And I think you've identified a pretty big issue here, which is that we're designing stuff that the maintainer may look at and say, "yeah, you can't do that", and that's a real impediment to getting it out there. So.
F: It's a good point. One question I would have then is: those kinds of constraints are very Linux-specific. Other implementations, like in iOS, don't have that problem; we don't, because we kind of don't care, right? Well, there are certain things where we can say: okay, the benefit of adding MPTCP outweighs the cost that it is introducing.
G: Hi, Brian Trammell. I guess we're just going to go down these bullet points; I want to make a point on point two, and I expect two more people to get in line behind me. So this is a really interesting lesson learned: TCP options are best used for unreliable signals, right? The further away you are from the core specification, the less you can rely on it working everywhere. And I think we kind of know that; I mean, we have...
G
You
know
the
hops
and
the
map
party
stuff
we've
dug
into
that.
These
things
we're
measuring
more
and
more
I,
think
sort
of
a
lot
of
middle
box
measurement.
Work
was
actually
spurred
on
by
MP
TCP
right
because
that's
what
it
turned
it
started
as
a
let's
extend
the
protocol
and
it
turned
into
a
whole
bunch
of
work
on
how
metal
boxes
break
the
protocol.
There
seems
to
be
kind
of
a
there's
needs
to
be
like
a
protocol
architecture,
truth
hiding
behind
this,
and
it's
something
we.
H: A possibly difficult question, practically: how much of this stuff would you say is specific to sk_buffs and all the stuff that's specific to the Linux kernel? If you were to take these learnings and take them to the iOS user-space networking team, how different are they; I mean, how directly applicable are they? There could be, I could see, two things.
H: First of all, you write all this down and you come here and people don't believe you; well, they don't believe you anyway, so okay, but at least you're right if you write it down. But if you write it down and someone says, "look, dude, in my implementation it's entirely different; the set of constraints I have is completely different; I don't have this locality problem, because in my architecture I have something else", right? Have you compared notes with the people working on the user-space implementation to see if you can generalize all of that? [Eric Kinnear, Apple, very briefly:] Yes, we have compared notes; we're all on the same team. But I think your point still stands in general: it would be really nice if we could take some of the stuff that we've learned and seen and put that out there, so that as we go forward in the future and look at MP-QUIC, or other areas of protocol extension, or even just first-time design, we know exactly how that kind of thing can apply. Yeah.
G: Well, it's not overly complex, and you may be, at max, introducing one extra pointer deref in the worst-case scenario. But overall, the fact that in QUIC you're acknowledging packets with packet numbers, instead of segments in sequence space, means that the type of data structure that is conducive to multipath QUIC looks more like two entire sets of packet buffers. So you don't need to add things to the sk_buffs; you just need two parallel data structures, each indexed by packet number.
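A toy sketch of that shape (hypothetical types, not from any real QUIC stack): per-path send state keyed by packet number, so ACK processing never has to split or merge buffers.

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy sketch: per-path sent-packet state for a multipath transport that
 * acknowledges by packet number. Each path keeps its own array indexed
 * by packet number; nothing here ever needs to be split or re-merged
 * the way byte-stream segments do. All names are hypothetical. */
#define WINDOW 4096

struct sent_packet {
    uint64_t data_offset;  /* which stream bytes this packet carried */
    uint16_t data_len;
    bool     acked;
};

struct path_state {
    uint64_t next_pn;                    /* next packet number to send */
    struct sent_packet sent[WINDOW];     /* indexed by pn % WINDOW     */
};

static void on_ack(struct path_state *p, uint64_t pn)
{
    /* O(1) lookup by packet number on the path the ACK arrived for. */
    if (pn < p->next_pn && p->next_pn - pn <= WINDOW)
        p->sent[pn % WINDOW].acked = true;
}

/* A two-path connection is just two independent instances. */
struct mp_connection { struct path_state path[2]; };
```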
H: Here's a possibly difficult question, now that you're here: how much of this do you think is due to the fact that decisions were made in the past that were absolutely in the best interests of the Linux TCP stack, and it's gone really, really far down the path of being highly optimized? If you'd had multipath in mind at each of those decision points, how much do you think would have changed?
H: I guess, does this mean that, if we want something to be deployable, we should never say, "oh, this is just an extension of an existing protocol, so it's going to be easy to implement"? Because we might have said, you know, "well, it's already there", right?
H
The
code
is
already
there,
but
if
you,
if
you
have
the
structural
problem,
then
you
can't
get
into
a
real-world
implementation,
because
you're
trying
to
you
have
a
chicken-and-egg
problem
and
the
people
who
maintain
that
implementation
is
like.
No,
you
can't
make
me
3%
slower,
because
you
know
because
it
cost
me
a
lot
of
blood,
sweat
and
tears
to
get
that
3%
and
took
me
a
year
right.
So
yeah.
K
My
name
is
Tim
Sheppard
I,
actually
just
reading
this
I
know
you
mentioned
Linux
a
little
bit,
but
just
reading
the
slide
I
was
actually
thinking.
This
is
more
about
decisions
that
were
made
in
the
NP
TCP
working
group,
which
might
have
been
made
differently
and
I
think
you
said
something
along
those
lines.
Yes,.
K: It makes the implementation very difficult. I remember there was an MPTCP working group meeting, which I was sitting there listening to, where they were deciding to either use TCP options or put stuff in the data stream of the subflows, and the people in the room were overwhelmingly for "use options". But there was one person (I couldn't remember who; I think Michael Scharf; okay, thank you for remembering that), and I thought he had compelling arguments, but essentially very few people agreed; I mean, a lot of people shrugged.
K: I think I was one of the only ones shrugging, even though I found his arguments compelling, and there was a very large number of people who were like, "oh, we're definitely going to use options, because it's multipath TCP, and so it should use TCP options". So the decision was made, and here we are. And I'm wondering: could we think about an MPTCP-bis that doesn't try to do it all in options?
L: Using options, yeah; because performance was not the only focus when we designed the protocol, in my understanding. So when we made decisions on MPTCP, we were thinking about other factors as well; that's the result, right. That's the one comment. And then the other comment: if you designed MPTCP from scratch, you would want to put the information in the payload; is that your conclusion? I would.
G: Perfect, okay. As this slide says, I am Ian Swett from Google. This is also taken from a netdev talk, but this is much more truncated. The netdev talk talked quite a bit about the history of deploying QUIC, including a few more performance numbers and some of the other data, but pretty much all of that information has been presented at the IETF, either in the form of a BoF or some other kind of working group activity, in the QUIC working group or otherwise.
G: There are some rebuffering improvements, some search latency improvements; it's about a third of Google's traffic as of SIGCOMM 2017, 7% of the internet, and obviously there is a QUIC working group that's very active. So all of this is good motivation for us to deploy this more widely, but there are challenges with that.
G: So here's our ramp-up over time from the SIGCOMM paper, but the thing I want to call out here is the March-to-August time frame, where it was all flat and there was no increase in rollout. We were just furiously working on CPU improvements. Some of these were kernel related; many of them were just internal to our own software. That's why you see a long flat period and then a huge jump right afterwards.
G: So, initially, the major sources of CPU: crypto is a fairly large one, although certainly not as large as some of the other ones. ChaCha20 at the time was fairly slow, originally, because we did not have the AVX2 assembly, and that seems to make a ton of difference. So, just FYI, ChaCha doesn't have to be slow, but you probably do need a relatively modern Intel processor; and so that's exactly what we did.
G: The other thing we did is that we're using in-place encryption, at least for the send path. That consumes a little bit less memory bandwidth and appears to actually give a few percent CPU gain. I was a bit surprised that it's true, but it turns out to be true. We haven't tried it on the receive path yet, but I suspect it would work equally well. We thought about doing scatter-gather and, you know, copy-and-encrypt all in one operation.
G: It turns out the APIs for that are more complicated than doing it in place, so we stuck with in-place, just FYI. So, sending and receiving: this is where the vast majority of the cost goes. On Linux, it's not uncommon for sending to be something on the order of 25% of CPU on the machine, and receiving maybe 5% to 10% in the worst-case scenario. On Android I've seen numbers over 50%, and I've also seen that on iOS, so it can be pretty huge.
G: Currently, of our two biggest wins, one is new and one is older. The older one is PACKET_RX_RING, which is a receive-side optimization that allows us to share a memory buffer with the kernel. Packets get very efficiently read into that buffer, and then we just get an upcall that says, "here's a ton of data, go at it", and then, when we're done, we give it back to the kernel, and rinse and repeat.
G: At least in terms of profiles, this is not quite free, but shockingly cheap; I don't know why, but it is. So when it's available, it's quite appealing. UDP receive in general is actually not that expensive; it was more expensive when we started the QUIC project, but Willem and Eric Dumazet made some really nice optimizations on the UDP receive path over the last few years, and so when we've done recent benchmarks it was only a few percent difference between our RX ring and UDP receive, whereas it was probably more like a 10 percent difference when we first launched it.
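A minimal sketch of the PACKET_RX_RING setup being described (Linux AF_PACKET sockets; error handling and the frame-parsing loop are elided, ring dimensions are made up, and CAP_NET_RAW is required, as noted later in the talk):

```c
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Sketch of a shared receive ring with the kernel, as described:
 * packets land directly in mmap'd memory, and user space polls frame
 * status words instead of making a recv() syscall per packet. */
static void *open_rx_ring(int *out_fd)
{
    int fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_IP));

    struct tpacket_req req = {
        .tp_block_size = 1 << 20,                    /* illustrative */
        .tp_block_nr   = 16,
        .tp_frame_size = 1 << 11,
        .tp_frame_nr   = ((1 << 20) / (1 << 11)) * 16,
    };
    setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

    void *ring = mmap(NULL, (size_t)req.tp_block_size * req.tp_block_nr,
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    *out_fd = fd;
    return ring;   /* frames carry struct tpacket_hdr; user space flips
                    * tp_status back to TP_STATUS_KERNEL when done.    */
}
```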
G: The other thing that has been added more recently, around the May time frame, is something called UDP GSO, and that allows the kernel to do segmentation (not fragmentation, but segmentation), where you give it a very large UDP packet and it basically segments it for you. Now, you have to do everything correctly, because it's very important that the QUIC packet boundaries line up exactly with where it's going to segment.
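A sketch of the UDP GSO knob being described (the UDP_SEGMENT socket option, added to Linux in 2018; sizes here are illustrative). Each QUIC packet must be written at exactly the segment size, so the kernel's cut points fall on QUIC packet boundaries:

```c
#include <netinet/in.h>
#include <netinet/udp.h>
#include <sys/socket.h>
#include <string.h>
#include <stdint.h>

/* Sketch: send many equal-sized QUIC packets in one syscall and let
 * the kernel segment the buffer into individual datagrams (UDP GSO,
 * Linux >= 4.18). Every packet except the last must be exactly
 * gso_size bytes, so the cut points land on packet boundaries. */
static ssize_t send_gso_batch(int fd, const void *pkts, size_t total_len,
                              uint16_t gso_size)
{
    char cbuf[CMSG_SPACE(sizeof(uint16_t))];
    struct iovec iov = { .iov_base = (void *)pkts, .iov_len = total_len };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
    };
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_UDP;
    cm->cmsg_type  = UDP_SEGMENT;          /* per-call segment size */
    cm->cmsg_len   = CMSG_LEN(sizeof(uint16_t));
    memcpy(CMSG_DATA(cm), &gso_size, sizeof(gso_size));

    return sendmsg(fd, &msg, 0);
}
```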
G: So we're still working on improving cache efficiency, improving our data structures, and minimizing copies and allocations. We're pretty good on copies and allocations at this moment; cache efficiency is actually still a major problem, as there's still some pointer following and such. I think we've pretty much gotten rid of all the linked lists, or anything else of that nature, in our code, because you want as much as possible to be in, you know, contiguous, coherent memory.
G: Okay, so the last quirk: QUIC has encrypted acknowledgments, unlike TCP. That does defeat some of TCP's receive-side aggregation (I believe that's the GRO mechanism, if I remember correctly); or, at the very least, some middleboxes do it for you, whether you ask them to or not, and so do some Wi-Fi access points. So, in general, it's possible in QUIC to get to fewer acknowledgments, but also, more importantly, TCP ACKs don't have to go through decryption and are relatively simple to process.
G: So our solution for that is: we send ACKs less often. By default, right now, we are sending ACKs every quarter RTT or every 10 packets, whichever comes first. At least with BBR, that turns out to work rather nicely; it doesn't actually give you better congestion control by itself, but it saves enough CPU on the client side that you actually end up with a slight increase in bandwidth.
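The stated rule is simple enough to write down directly; a sketch (hypothetical names, with the quarter-RTT and 10-packet thresholds taken from the talk):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the stated receiver-side ACK policy: ACK every 10th packet
 * or every quarter RTT, whichever comes first. Time is in microseconds;
 * all names are hypothetical. */
struct ack_state {
    uint64_t last_ack_time_us;
    uint64_t srtt_us;            /* smoothed RTT estimate */
    uint32_t unacked_pkts;
};

static bool should_ack_now(struct ack_state *s, uint64_t now_us)
{
    if (++s->unacked_pkts >= 10 ||
        now_us - s->last_ack_time_us >= s->srtt_us / 4) {
        s->unacked_pkts = 0;
        s->last_ack_time_us = now_us;
        return true;
    }
    return false;
}
```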
H: It turns out that if you send one ACK per packet, or one for every two packets, and you're receiving very fast on Wi-Fi, you run out of transmit opportunities, and so ACK decimation actually really helps, even though you're doing nothing else wrong. And the other side of the coin is, you know, people have tried to get that into upstream Linux as well and got a "we don't have this problem in data centers, go away".
G: So now I want to talk a little bit about how we're using sockets. We've gone back and forth with our kernel team about what the recommended approach is in a server-side application. It's not clear that this should be the recommended approach for everyone, but it's sort of what they came up with. So the approach is to use a socket per thread.
G: Sorry, I should clarify: a receive socket and a send socket, separately, per thread, with SO_REUSEPORT for the receive socket, so it splits all the traffic stably for you across the number of threads. Obviously, you can't change the number of threads after you start up the server, but that's usually not a huge constraint.
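A minimal sketch of that per-thread receive-socket setup (one UDP socket per worker thread, all bound to the same port via SO_REUSEPORT; the kernel then hashes incoming packets across the group):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <string.h>
#include <stdint.h>

/* Sketch: each worker thread opens its own UDP socket bound to the
 * same port; SO_REUSEPORT makes the kernel spread incoming flows
 * across the group, giving per-thread receive queues. */
static int open_worker_rx_socket(uint16_t port)
{
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);
    int on = 1;

    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) < 0)
        return -1;

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;
    addr.sin6_port   = htons(port);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;
    return fd;   /* call once in each worker thread */
}
```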
G: [In response to a question:] I can't say whether SO_REUSEPORT works with our RX ring; I can't remember, because our RX ring uses packet sockets. So the app dispatcher is based on the QUIC connection ID: it gets all these random packets for various different connections and dispatches them to the correct connection, you know, hash tables and that sort of stuff. At least initially, what we did is we just used straight SO_REUSEPORT, and if a packet landed on the wrong thread, we would just toss it to another thread; that adds a fair amount of contention.
G: But now, rebinding and connection migration together are much, much less than 1% of all connections. So if you're doing this for, you know, 0.5 or 0.4 percent of all your packets, it's somewhat acceptable. If we thought we were mostly going to have connections that were longer-lived, or a lot more connection migration, then that approach might be a little less viable. And then the other thing that Lorenzo got out of me is that, yes, you can also use eBPF for connection-ID-based steering, and that's pretty simple.
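A toy sketch of the user-space dispatch step described above (hypothetical names): hash the QUIC connection ID to pick an owning thread, and hand off the rare packet that landed on the wrong one.

```c
#include <stdint.h>
#include <stddef.h>

/* Provided elsewhere in this sketch (hypothetical helpers). */
extern void enqueue_for_thread(unsigned t, const uint8_t *p, size_t n);
extern void process_on_connection(const uint8_t *cid, size_t cid_len,
                                  const uint8_t *p, size_t n);

/* Toy sketch of connection-ID dispatch across worker threads, as
 * described: SO_REUSEPORT picks a thread by 4-tuple hash, so after a
 * migration a packet can land on the "wrong" thread and must be
 * tossed to the owner. All names are hypothetical. */
#define NUM_THREADS 16

static unsigned owner_thread(const uint8_t *conn_id, size_t len)
{
    uint64_t h = 1469598103934665603ull;      /* FNV-1a over the CID */
    for (size_t i = 0; i < len; i++)
        h = (h ^ conn_id[i]) * 1099511628211ull;
    return (unsigned)(h % NUM_THREADS);
}

void handle_packet(unsigned my_thread, const uint8_t *cid, size_t cid_len,
                   const uint8_t *pkt, size_t pkt_len)
{
    unsigned owner = owner_thread(cid, cid_len);
    if (owner != my_thread) {
        /* Rare (<1% with little migration): a cross-thread handoff;
         * this is the contention cost mentioned in the talk. */
        enqueue_for_thread(owner, pkt, pkt_len);
        return;
    }
    process_on_connection(cid, cid_len, pkt, pkt_len);
}
```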
G: Yes, so: a receive socket per thread. For sending, a sending socket per connection is mostly impractical, from what we've tried. We've actually been going back and forth this week on what the right API for that would be, but at least my current understanding is that in Linux none of the APIs do quite what you would like. You can't follow the standard TCP accept-then-create-a-new-socket sort of pattern; that pattern doesn't quite work for connection-oriented UDP sockets, if I'm not wrong.
G: Oh, I should mention: since we're sharing a socket among all of these threads and all of these connections, we do use application-layer pacing, and we only pace 1 millisecond into the future, which is fairly comparable to what Linux does internally in the FQ pacing qdisc, and that allows us to not have an insanely large socket buffer.
G: One issue that we do have with this setup is that the FQ qdisc creates some unfairness between QUIC and TCP. If you do get into a situation where you're NIC-limited, QUIC will suffer relative to TCP, because all the QUIC flows together get as much egress out of the box as each individual TCP flow. So you're basically getting, in the worst-case scenario, maybe 100x less throughput than there needs to be in practice.
G: So, packet sockets. Packet sockets with a shared memory ring, the RX ring as I said, are a nice improvement over standard UDP sockets. We tried out TX_RING, which is the memory-mapped transmit version; it's very, very similar to our RX ring, just in the opposite direction, and we couldn't get really large CPU wins out of it.
G: It's not really clear why; they should work quite well. But the more important point is that they are very, very difficult to deploy. Packet sockets on the receive side basically only require you to have CAP_NET_RAW, and then you're good to go; you don't really need a whole lot of intelligence and software complexity. On the send side, you are bypassing an awful lot of awesome, cool stuff that the kernel is doing for you, and you will hate yourself.
G: UDP GSO: at least according to Willem's recent benchmarks, UDP GSO achieves send performance that's around the same as TCP; it's around 3x faster than what plain UDP is today. The one quirk is that it does release all the datagrams from that send call at once, which is in tension with getting the full CPU savings if you really wanted one-millisecond pacing granularity.
G: So, ideally, the segments could be split up in some way to really reduce loss. But it's still a great addition, especially on high-bandwidth connections, either, you know, in the future, maybe client-side uploads that are high bandwidth, or in data-center sorts of applications. It's pretty easy to get to a point where this is a big CPU win and you're getting all, or the majority, of the benefit of packet pacing.
G: So, for those who were at netdev, which is not too many people: Van Jacobson spent a nice hour talking about release-time-based packet pacing and how he was a big fan of this. I am also a fan of this: it really makes it easy to integrate with your congestion control, and it's easy to reason about when the packet actually hits the wire.
G: Admittedly, the packet might go out a little bit late, but you can basically consider NIC queuing delay as just part of the path RTT for that purpose. It would allow us to use our shared-socket approach but also control pacing, in theory. So if the FQ qdisc had a release-time-based pacing module, which I've been told it should in the nearish future, then it would actually allow us to share that socket and also pace through the socket.
G: At the same time, unlike a rate-based pacer: if you try to share a single socket and you use a rate-based pacer, there's just no correct rate that you can set for multiple flows. So, disabling pacing can save us up to 30% of our CPU, which is a pretty insane number. The actual pacing CPU cost in a profile is like 1%, you know, in terms of timers and all that junk; so it's not the direct cost, it's actually the indirect cost.
G: It also increases retransmit rates by about 50%, which does cost CPU, but not as much as the cache locality stuff. So I added some links to some patches. There's SO_TXTIME, which is release-time-based pacing; it was added to a different qdisc, I can't remember which one, but you can take a look. And there's now code in Chrome for pacing offload, so the Chromium QUIC implementation will do pacing offload if we have a release-time-based pacer available.
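A sketch of the SO_TXTIME interface he's pointing at (Linux 4.19+): the socket declares a clock up front, and each sendmsg() carries the desired release time as a cmsg. The qdisc setup that honors these times is elided.

```c
#include <linux/net_tstamp.h>   /* struct sock_txtime */
#include <sys/socket.h>         /* SO_TXTIME, SCM_TXTIME */
#include <string.h>
#include <stdint.h>
#include <time.h>

/* Sketch: ask the kernel to hold a datagram until a given release time
 * (release-time-based pacing). Requires a qdisc that honors tx times. */
static int enable_txtime(int fd)
{
    struct sock_txtime cfg = { .clockid = CLOCK_MONOTONIC, .flags = 0 };
    return setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg));
}

static ssize_t send_at(int fd, const void *buf, size_t len, uint64_t t_ns)
{
    char cbuf[CMSG_SPACE(sizeof(uint64_t))];
    struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
    };
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type  = SCM_TXTIME;    /* per-packet release time, ns */
    cm->cmsg_len   = CMSG_LEN(sizeof(t_ns));
    memcpy(CMSG_DATA(cm), &t_ns, sizeof(t_ns));

    return sendmsg(fd, &msg, 0);
}
```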
G: All right, so this is kind of my dream of sending, and it's a very approximate dream, because there are a lot of details that I'm completely leaving out, but it kind of gives you an idea of where I think things are heading in the future. You have QUIC as an application; there's some set of shared memory pages.
G: It's handing over to the networking stack a symmetric key, to potentially do crypto offload; it's sending over a release time, so we can allow the packet to be released, and encrypted, later; and some efficient data structure, say a timing wheel, is inside the networking stack and potentially even interacts with...
G: ...you know, a hardware pacer, depending on how the roles are split, and then you can offload crypto as well, to the NIC, if things work out. The thing that's hand-wavy about this is that there are an awful lot of details about exactly what these APIs should look like, and I think that's what I and a variety of other people are trying to figure out in the next few months.
J: I was just wondering what your thoughts are about this: currently, when you do pacing in Chromium, you delay the time at which you create the packet as a result of pacing, versus this would change to a model where you delay the time at which you release the packet. Going between those two different models of pacing, would you lose some efficiencies as a result?
G: In applications where you think things like cancellation are going to be very likely, you could lose a fair amount; or in cases where you're extremely sensitive to recovery time, the fact that I can't send out a retransmission until I've cleared the buffer of the things that I already sent out matters. So there are circumstances where it actually could be measurably costly.
G: So you have to balance how far into the future you want to allow yourself to pace, for CPU performance, against these other application metrics. But so far, in our initial testing for YouTube, we haven't seen any QoE changes when we've done it, as long as we haven't screwed up the pacing.
G: There's the question of what actually is the API for crypto offload, when and if we do that; that's pretty wide open at this point, and a pretty expansive topic. I think kTLS hopefully at least gives us an architecture for where to start in that process, but I don't know if we're going to actually adopt...
G: ...you know, an extremely similar API or not. And, yeah, some way to do multi-datagram UDP sends and actually have them split out; this is sort of a critical issue at the moment, whether it's GSO or some other mechanism. I think that's it. Oh yeah, and thank you; I want to thank all the people who have contributed to making the kernel better for QUIC and for other UDP applications.
H: One comment on the previous slide: for the receive path, depending on whether crypto is the bottleneck or kernel stack time is the bottleneck, with the right flags in the NIC you're supposed to be able, using AF_XDP, to get zero-copy receive paths; so basically like PACKET_RX_RING, but PACKET_RX_RING is one copy. Yeah.
G: I don't think I have any idea what the answer is at the moment, except I can say that, as long as the acceleration on a given platform is working well, whether it's AES-NI or otherwise, crypto is pretty cheap; it doesn't seem to be the worst part. The worst part about crypto is that you have to touch the memory. So in all of this, if you can just not touch the memory that you're trying to move around, that's golden; that's worth a lot.
G: All right. So everyone here should remember the 1990s; this is what transport protocols looked like back then. You had transport headers for end-to-end operation. It turns out that we built a lot of stuff that also used the transport headers, like, you know, in-network inspection, forwarding; I know that we didn't like to say the word NAT in the IETF in the 1990s, but...
G: ...things like delaying ACKs and spoofing ACKs and doubling packets and doing other sorts of meddling. And then, because there was no crypto, there was a lot of deep packet inspection and random payload modification, and everyone was sad. So then we invented security, but we put what we called transport layer security around the application layer headers and payloads, just to confuse everyone, and you still have the transport headers for end-to-end operation.
G: This is where we are today; this is what an encrypted transport protocol design looks like. QUIC is an example of this; it's really the example of this right now, but it's a general pattern. The function of the transport headers is now split between outer transport headers and inner transport headers. The outer transport headers in QUIC are as little information as you can possibly expose.
G: We've spent a lot of time talking about how little that should be. And then the inner transport headers are all of, you know, the acknowledgments and so on and so forth, and those are all encrypted. So you get the end-to-end operation on the inner transport headers; you can do in-network inspection on the outer transport headers, but no modification, because the transport layer security can also be used to do integrity protection of those headers.
G: This is the wire image: it's just this blue box. Everything inside the blue box is static. This is all of that stuff that we used to have in the 90s; this is the gripping surface. What's there? The obvious part is any information that is carried in the unencrypted bits in the protocol headers.
G: There are a few other things here, right? Like the length and the entropy of all of the bits in the packet; that essentially provides an upper bound on information content, even for the encrypted bits. It does not provide a lower bound, so much of traffic analysis resistance goes into adding length and entropy to all of the bits in all of these packets, so that the upper bound is not driven by the traffic dynamics; it's driven by the dynamics of the thing doing the obfuscation.
G: You can also do timing of packet emission and observation, so transmission, arrival, these sorts of things; that gives you information about what the sender is doing, and you can maybe fingerprint the sender based on that. Why am I going through all of this review? Why does this matter? We are used to how the protocol operates being what you see on the wire, and when you have a wire image, it is something you can design explicitly.
G: The protocol's end-to-end operation is separate from its appearance on the wire, and it's separate from how the intermediate devices interact with it. And this is new. Ted said this at a plenary at one point, and I was like: this is new, this is different, this is novel. You've got to go in the other direction, because...
O: I want to go back to timing. I will point out that the timing of a packet observation may not depend solely on the protocol, but may depend on something else derived from the sender's behavior. You may have seen that there was a study a while back where somebody was able to demonstrate that they could determine what somebody was watching, out of a select number of Netflix series or movies, because there was a characteristic packet inter-arrival time and packet size.
O: So it isn't just the packet wire image from a protocol perspective that has to be examined here; some of the wire image is derived from the sender's behavior or the recipient's behavior, for example an acknowledgment that's still visible on the wire. And that turns out to be important because of path signals. We talked about this at a BoF that shall not be named, but, in essence, when transports used to transmit cleartext data, on-path devices read it and used it to create state and manage resources; that is, we had NATs.
O: There are some interesting policy issues around that which we don't want to go into. What we'd rather see, instead of people going out and building very expensive inference boxes based on things that we don't really want them to infer from, is that we switch to explicit path signals, where we send data that you intend for the path to consume. Now, where does that signal, the information to the path, go? Well, you could use internet-layer facilities to send the signals.
O: You could send these signals in each transport, or you could do nothing. Given where we are, you may know what our intention is: hint, not an internet-layer facility. Okay, so, the example: we talked about QUIC before as the example of an encrypted wire image. The latency spin bit is a QUIC example of an experimental piece of explicit signaling. The bit is set by the client and echoed by the server; the client changes the bit once per round-trip time, and its integrity is protected by each side.
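The mechanics are simple enough to sketch (hypothetical names, following the description just given): the server echoes the last value it saw, the client inverts it once per round trip, and an on-path observer sees a square wave whose period is the RTT.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the latency spin bit logic as described: the client flips
 * the bit once per RTT, the server echoes what it last saw, and an
 * on-path observer can read the RTT off the resulting square wave.
 * Names are hypothetical; packet numbers guard against reordering. */
struct spin_state {
    bool     value;          /* spin bit to place in outgoing packets */
    uint64_t largest_pn_seen;
    bool     is_server;
};

static void on_packet_received(struct spin_state *s, uint64_t pn, bool spin)
{
    if (pn <= s->largest_pn_seen)
        return;                       /* ignore reordered packets */
    s->largest_pn_seen = pn;

    if (s->is_server)
        s->value = spin;              /* server: echo last seen value */
    else
        s->value = !spin;             /* client: invert once per RTT  */
}

static bool spin_bit_to_send(const struct spin_state *s)
{
    return s->value;                  /* copied into each short header */
}
```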
O: This exposes the round-trip time to on-path observers without exposing session state, when the client chooses to. That's what makes it explicit: it isn't a default in the protocol that this is always done; the client chooses whether to start sending the spin bit, and the server chooses whether or not to echo it back.
O: There is, unfortunately not linked here, some work that was recently shared over in the QUIC working group, some tests which show that this is relatively effective, except in the case of pretty extreme reordering; and even in the case of pretty extreme reordering, there are some heuristics you can use to determine that the cause of the failure is reordering, rather than a failure of the spin bit itself. I direct you over to the QUIC working group. I think... oh, Marcus.
O: Oh, sorry, I didn't see you; Marcus is guest-starring on today's image with the Family Guy. So, this exposes the round-trip time to on-path observers without exposing session state. Note: every single bit has to be designed and considered. This is explicitly added to the protocol by the designers of the protocol, and the use of it has to be explicitly considered when you're using the protocol. There's no default for what signals to send; it needs to be determined per transport, and so the use cases of specific transports may never use this.
O: If you're inside a data center and you're using QUIC, and you turn on the latency spin bit, everything is going to be so quick it's not going to change your behavior; note the extra "k" on that last "quick". And it needs to be optional: if a client or server doesn't want to send that signal, it can't be needed for session state. Remember, the whole point of this is that this is going in the outer part, that blue bit.
G: What should be a third point on the future version of this slide is that you actually have to think not only about how the signal is to be used, but how it could be misused, right? Because you're basically putting information out there. The nice thing is, because of this separation, you're not forced to leak inner information out to the path; however, if you're not careful about what you put out here, it might be just as bad.
O: In the case of the round-trip time discussion, for example, we actually had a design team in QUIC that went off and looked at what could possibly be leaked by a spin bit, and spent a good bit of time (most of it Brian's) trying to work out whether it could leak geolocation information. It determined that the fidelity of the geolocation it could potentially leak was so poor, compared to geolocation information you could get in many other ways, that it was not an effective leak.
leak.
O
You
know
one
of
the
difficult
things
about
any
leakage
is:
there's
always
the
possibility.
Somebody
will
be
able
to
put
something
together
with
other
data
and
make
make
no
inferences
that
you
weren't,
expecting
if
they're
still
in
the
mode
of
making
inferences,
which
is
one
of
the
reasons
the
use
of
it
has
to
be
optional.
If,
in
the
future,
somebody
discovered
some
fingerprinting
aspect
of
it
that
we
were
not
aware
of
when
we
ran
the
design
team
devices
could
not
send
could
decide
not
to
set
it
and
the
result
would
still
be
correct.
Q: I really like the fact that we're taking this spin bit as an example for explicit signaling, and I really hope that we can accelerate for other bits in the future. My only concern is that if we start doing one bit per use case, we can end up with pretty fragmented frameworks. I don't want to rush the design into something, when potentially we can think of a more extensible framework that doesn't need to be, you know, one use case per bit.
G: So, the spin bit: the amount of effort that we went through on the spin bit, looking at that, was the easy case, because there were no other explicit bits that we had to consider interactions with, right? The larger the surface you're adding, the more degrees of freedom... I just feel like we...
K: What I hadn't thought of, and learned about there, was that people had privacy concerns about the existence of this spin bit, because it reveals the round-trip time: passive observers of the traffic on the net can discover the round-trip time of a user more easily. I guess they could probably do it anyway.
K: But what I just realized sitting here, and you must have thought of this already, is that this design leaves open the possibility for a client to be deceptive in its setting of the spin bit, to make it appear that you have a longer or shorter round-trip time. You could flip it every half round-trip time, or a third of a round-trip time, or every two round-trip times, if for some reason you thought that was useful. I don't know if we should be concerned or happy.
G: So, we did that work, and there's some text in the drafts about the threat model; I won't really go into it here, because we don't really want to talk about the spin bit. What we do want to talk about is the general principle there, which is: yes, when you separate the information here from the operation here, you don't even have to have an encrypted side channel.
O: So if you're running IPv4, you may want to expose round-trip time so that the state in NATs is not aged rapidly, without you having to send heartbeat packets to keep the state fresh. So there's a variety of reasons you might actually design a client to enable somebody in the network to have access to that state, but it's going to be different depending on what it is you're exposing, and you may find that there are cases where you're like, "hey, I'm running IPv6...
O
There
should
be
no
net
in
the
state
between
me
and
and
the
rest
of
the
world,
so
that
reason
doesn't
exist,
so
you
might
set
it
and
won't
address
family
and
not
in
another
again.
The
whole
design
here
is
that,
with
any
of
these
explicit
past
signals,
the
client
has
to
make
the
decision
to
set
it
and
it
can't
affect
the
the
correct
operation
of
the
protocol
from
the
in
systems
perspective.
If
it
is
not
set.
G: The path can look at the information that it sees, right? So for things that are explicitly looking at diagnostics, you can essentially have in-band diagnostics, targeted; in that case, you would expect that you're adding a little bit more information than just one bit, and then you would usually leave it off, right? So there are different bits, different complexes of bits, different use cases; sometimes the decision goes up to the UI, sometimes the decision is a system-model decision.
H: Presumably there was a reason; you could have said, "okay, the spin bit is one bit; if I want to express the RTT as, whatever, twelve bits, I could do it once every 12 packets as a 12-bit number", and I guess that kind of goes... I'm not interested in that particular engineering trade-off, but what it means is: how clear does that blue have to be, right, in the outer layer? And if it were really more explicit, like this?
G: But we have so far not been successful in coming up with a framework that would allow us to express that in a way that you could actually have a vocabulary of those things. Work continues.
O: ...and how easy it's going to be to read, and, from both sides' perspective, what the parsing characteristics are. So there's a good bit of work to go into the design there, and you're probably right, there's also probably some marketing; we're probably not great for having chosen to market it as "spin bit" instead of "RTT bit", where the spinning characteristic is how you derive the RTT. But...
O: So it depends on what... and actually there's a good bit of material in the analysis we did for the spin bit on what the particular application characteristics are, and if you'd like to talk about that offline, I'm happy to do that. But one of the things is: there are application characteristics where you don't expect the packets in a train to be milliseconds apart, but maybe many seconds before one particular side sends, and that's actually something you have to consider in this: are there application forms...
M: It's Christine Hutchison; just to say I'm supportive of this. Wearing an operator hat, I'd rather have a few bits of information, explicitly and optionally stated, that I have confidence in, rather than doing something relatively expensive and computationally heuristic to infer things which you're never quite sure about. So it's better to, I mean, actually, going back to Lorenzo's point...