From YouTube: IETF98-TSVAREA-20170327-1520
Description
TSVAREA meeting session at IETF98
2017/03/27 1520
We're getting ready to start the second TSV area session. We are passing around a new set of blue sheets, so if you were at the last meeting, please sign the blue sheets again, because that's per meeting. Are we putting up the Note Well again? Yes, excellent, so we will put up the Note Well again so that everybody can note well. That being said, these are IETF contributions.
Yeah, we had one session already and we're starting into the next session. We have a presentation by Tom Herbert, and then we have the rest of the time, which we hopefully might not use completely, for the discussion about congestion control and how to handle congestion control algorithms and congestion control work in the IETF. But we start with Tom. Where is he? There? Perfect.
Hi, my name is Tom Herbert, and I'm going to talk about Express Data Path, or XDP. A little bit of background: this is a project that we started in the Linux networking stack a little over a year ago, and if you remember, at the IETF in Seoul, in the open plenary, we had that discussion about the denial of service attacks on Dyn, and there was a Cloudflare presentation.
One of the rationales behind XDP was precisely addressing denial of service attacks kind of like that, and it turns out that Cloudflare actually presented a discussion about a year ago on what they were doing to solve denial of service in their system, and basically they were going through a pretty long path and getting really miserable numbers. So some of us in the Linux community started looking at the problem and realized we need a better way. So that's kind of the background. I'll get a little more into some of the things we're doing with Cloudflare in a minute. Next slide. So, the agenda for today:
I'll go a little bit through the history, specifically userspace stacks, which have kind of been the current model to solve a lot of these problems; then we'll go through the foundation, BPF and eBPF; and then present the solution and how we're looking to extend it. So, how userspace stacks work: the idea behind these is that there's a library in user space and the application can access the networking directly.
So in this case, packets, for instance, arrive on a queue, and a userspace application or process is processing them directly, without any interference or any processing in the kernel. So this is kind of like a direct data path from the network to the application that completely bypasses the kernel. It's safe to use, people like that, and it has found a lot of niches, particularly in high-frequency trading and some HPC realms.
So the question is: why did we ever bother with this? We've had kernel networking stacks for years; BSD stacks, BSD sockets, they've all been developed. The nominal reason usually given is performance: by having direct access to, say, the network device from user space, we can squeeze out some performance. But some of the other reasons are safety, for instance, because of the isolation that the system gives you: if we crash an application in user space, it doesn't crash the whole system.
If you look at companies like Google and Facebook, they have maybe 20 or 30 kernel engineers and thousands and thousands of C++ programmers and Java programmers. So there's a little bit of weight towards "we know how to do stuff in user space"; we get more and more into user space, and that's somewhat of a trend we've been seeing. Another big one, and this is one that we've known for a while, has a lot to do with upgrades, reboots, things like that.
If we have to reboot the kernel because of a problem, or reboot the system, it's very invasive and takes a long time; and upgrading kernels, what have you, is a known, ongoing problem. On the other hand, userspace stack frameworks are difficult to use for general solutions. If you look, most of them are niche applications.
The best example is probably the high-frequency traders, the people who want to execute trades in microseconds. They really don't need anything except access to the stack, maybe do UDP or TCP, but it's a very dedicated application, so it's a good use case for this. For the general case, though, implementing, say, a full TCP/IP stack in user space is very difficult to make work across all circumstances. Hybrid path solutions: this is kind of what Cloudflare was doing.
Originally, they were actually taking packets received in the kernel, forwarding them into user space on a raw interface, doing the denial of service mitigation there (basically filtering out which packets to drop and which packets to forward), and then they were sending the packets back into the kernel to do more processing, for TCP, and then actually go up to the application. So there's this kind of boomerang effect we've seen occasionally.
Amazingly, they claimed this was better than implementing something in the kernel using TC classifiers. Nevertheless, that's what they were doing. Performance numbers: if you look at the documentation, a lot of the documents about userspace stacks will say they're 10x, 100x faster than the kernel, and those are a little jaded; be careful about that.
User space will be faster, maybe 2x, but not by so many orders of magnitude. But more practically, when we considered this, for instance, at Facebook and Google: if we have to bring in a separate stack, even in user space, we're maintaining two stacks in the same system. That's a bit of a pain; that's one of the big downsides to having userspace stacks, and we still need a kernel stack.
The other thing is that user space kind of implies more proprietary solutions. This can be good in some contexts, for an application that needs that. But if you think in terms of a larger community: if we do have a good solution for denial of service, and one application needs it, it's probably true that almost all of them need it. So it's good to have that kind of community behind us.
So what we really need is programmability of policy in the kernel, and actually programmability everywhere in networking, if you think about it. The trend of SDN, for instance, is programmable networking: programmable devices, programmable in the kernel, programmable from user space. Everything being programmable in some sense is kind of the goal, and the kernel is a particular issue: like I mentioned, kernels are typically monolithic and hard to update, so making them nimble for programmability is kind of an oxymoron, in some sense, given some of the goals of the kernel. Next slide.
So now we can go back in history a little bit. The beginnings of our solution actually go back to 1992 at Lawrence Berkeley Labs: Van Jacobson and Steven McCanne were kind of the first to see that we need this programmability inside the kernel, at least for one small part. From this, Berkeley Packet Filters was born. We've known about this.
It had a whole two registers; it was, I think, a 16-bit system. So the idea was great: to have programmability of networking in the kernel, a packet parser that is effectively open-ended, but with limitations, and I guess we needed to develop the rest of the kernel, the rest of networking. So for about 24 years we didn't see this hidden gem, in some sense. Fast forward to 2015, and this thing called extended BPF was born. Alexei Starovoitov, now at Facebook, actually had this idea:
what if we extend BPF, add a number of registers, make it a full 64-bit system, make it compile easily from C code directly into whatever assembly, whatever bytecode we need, and have a comprehensive instruction set? So we needed to add more than just simple header parsing: we want 64-bit operations, atomic operations, data structures, actual real data structures that we can put ancillary data into and use for things like state, what have you. The other important thing was, since we're running this code loaded into the kernel,
we need a model of safety, so we have this thing called the verifier, which prevents code from doing bad things. So, for instance, we can prevent division by 0 or illegal memory accesses, things like that, even though it's within the kernel. The idea is that this gives us the same sort of isolation, the same sort of guarantees that user space gives. However, we're still running natively in the kernel, in a sense, with a very thin API, so effectively
we expect it to be almost bare metal inside the kernel. So eBPF ended up being pretty successful, and right now in the Linux stack this is used pretty much all over the place. The tracing people really jumped onto this: we used to have things like DTrace and strace; these were kind of difficult. We wanted them to be flexible, but we also needed some sort of programmability, like if I wanted to trace specific events.
So the BPF architecture is pretty elaborate now. As I mentioned, there are a lot of use cases at the top, even outside of networking, so we're using these for analytics and logging, as I mentioned. And the good news is we can write this in different languages: the original BPF was pretty much machine code assembly, hard to use; about the same time eBPF came out, LLVM added compiler support, so now we can write BPF directly in C. That's a huge win for programmability.
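As a purely illustrative aside (not something shown in the talk), this is roughly what "writing BPF directly in C" looks like for the tracing use case just mentioned: a small program, compiled with clang's BPF target and loaded with libbpf, that counts entries into a kernel function via a kprobe. The probed function, map name, and layout here are my own assumptions, not anything from the presentation.

```c
/* Illustrative sketch only: count entries into tcp_retransmit_skb()
 * via a kprobe, using a one-element BPF array map as a counter.
 * Built with: clang -O2 -target bpf -c trace_count.c -o trace_count.o */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __type(key, __u32);
    __type(value, __u64);
    __uint(max_entries, 1);
} call_count SEC(".maps");

SEC("kprobe/tcp_retransmit_skb")
int count_retransmits(void *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&call_count, &key);
    if (val)
        __sync_fetch_and_add(val, 1);   /* atomically bump the counter */
    return 0;
}

char _license[] SEC("license") = "GPL";
```

The same C-plus-maps programming model carries over to the XDP examples sketched later in this transcript.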
So all of this is great, but it still doesn't really answer the question of how we do denial of service mitigation at line rate, millions of packets per second. That's where Express Data Path came from, or at least the motivation. About 2016 we started looking at the denial of service problems in particular, also load balancing, load balancers, things like that, and we came up with this concept of XDP. The idea is to run a BPF program as low as possible in the kernel, basically on the device, so the program is actually sitting on the device.
The receive part of the program sits on the receive queues. As packets come in, we actually process the raw packet, so there's no concept of metadata and no concept of going through various layers of protocol processing. We're just basically calling a function, a BPF program, saying: here's the packet, here's the beginning of it, here's the end of it, maybe a little bit of ancillary information that we got from the device, but that's it. BPF program, go do whatever you need to do.
The BPF program takes this and comes up with a simple decision: either drop the packet, accept the packet into the rest of the stack, or transmit the packet. Transmitting the packet usually entails either popping or pushing encapsulation headers and then sending it back out, right now out the same interface, but eventually any interface. All of this is being done basically on the raw packet. For various reasons we don't need to do DMA unmaps, for instance, or any fancy memory allocation, so it's very fast; we get the speed BPF gives us.
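To make the drop/accept/transmit decision concrete, here is a minimal, hypothetical XDP program in restricted C. This is my own sketch of the shape such a program takes, not code from the talk; the filtered port and header handling are illustrative assumptions.

```c
/* Minimal XDP sketch: drop UDP packets to port 9999, pass everything
 * else up the normal kernel stack. XDP_TX (not used here) would send
 * the possibly rewritten frame back out the same interface. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_verdict(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;      /* "here's the beginning of it" */
    void *data_end = (void *)(long)ctx->data_end;  /* "here's the end of it"       */

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end || iph->protocol != IPPROTO_UDP)
        return XDP_PASS;

    /* For brevity this assumes no IP options; real code honors iph->ihl. */
    struct udphdr *udp = (void *)(iph + 1);
    if ((void *)(udp + 1) > data_end)
        return XDP_PASS;

    if (udp->dest == bpf_htons(9999))
        return XDP_DROP;   /* discard before any skb or metadata exists */

    return XDP_PASS;       /* accept: hand the raw frame to the stack   */
}

char _license[] SEC("license") = "GPL";
```

A program like this is typically compiled with clang's BPF target and attached to a device with iproute2 (for example, `ip link set dev eth0 xdp obj prog.o`), though the talk does not go into the loading mechanics.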
The programmability and the environment give us kind of that safe context that I was talking about before. So the important thing is this: the XDP data path, even though it's in the kernel, works in harmony with the kernel; it's still programmable. From this we derive the XDP packet processor, which is pretty much what I just said: at the bottom we have the various input queues from my device, they call into the BPF program, and if the BPF program returns drop, we just discard the packet.
Not a lot of work to do there. Forward is rewriting the packet, maybe an address, and then sending it back out the same interface. Or we can go up through the receive stack. GRO is one thing that we still have to do: that's a way to coalesce packets of a single TCP connection into a larger packet, to get the advantages of processing larger packets in the stack. Then those packets can go up, and if we receive the packet it just becomes a normal packet in the kernel.
XDP has a few important properties. One is that it is designed for high performance, so there are a number of techniques that we know about, userspace stacks have been using them, and we can put these same sorts of techniques inside the kernel with XDP: busy polling, optimizing cache misses with cache prefetch, four or five other things.
All of this is lockless, and we never get into atomic operations in this path. As I mentioned, it's designed for programmability, so we can add new functionality into the kernel without changing the kernel, without a reboot; everything can now be done on the fly. It is not kernel bypass, in the sense that it's integrated into the kernel, and we can call into kernel functions and use kernel data structures. This will become really important, for instance, if we want to do stateful TCP and we actually want to see
whether the current local stack actually has TCP state; we can access that TCP state from the BPF program to determine if there's state associated with it. It does not replace the TCP/IP stack; right now this is a layer below it. We have some ideas for introducing some of these features at a higher layer to get some of the benefits, but for now it's programs running at the lower layer. There are some other BPF programs that run in concert with sockets and some of the other aspects of TCP.
So this is kind of one aspect, one application of BPF in the networking stack. But the point here is that this is so low level; it is in a sense below the stack, kind of a preprocessor for the stack. And then the last one is very important: XDP does not require any specialized hardware. In theory we can run XDP on pretty much any device, and we've added it to, I think, five or six devices.
Now, it's really helpful if the device has multi-queue, but that's the only property we need, so we don't need virtual functions or any of that stuff. There are advanced features we do want to interoperate with and use; however, fundamentally this is something we believe can solve problems in the field with existing hardware, with only a software change, so that's kind of an important feature. So currently XDP is being deployed for various use cases. The best way to describe it is layer 2 / layer 3 packet processing: denial of service mitigation, for example.
The easiest one is just to have a sort of blacklist of bad IPs and filter on those, very simple lookups. More sophisticated denial of service protection gets us into some stateful TCP, for instance, or pattern matching: can we identify signatures of bad packets? At some point we may even need more advanced lookup mechanisms, regular expressions, what have you.
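As a hedged sketch of the blacklist idea (my illustration, not the deployed filter), the usual structure is a BPF hash map keyed by source address: user space populates the map with addresses to block, and the XDP program consults it per packet.

```c
/* Illustrative only: an XDP blacklist filter backed by a BPF hash map.
 * User space inserts IPv4 source addresses into "blacklist"; any hit
 * is dropped before the kernel allocates an skb for the packet. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __type(key, __u32);      /* IPv4 source address */
    __type(value, __u8);     /* presence flag */
    __uint(max_entries, 100000);
} blacklist SEC(".maps");

SEC("xdp")
int xdp_blacklist(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end)
        return XDP_PASS;

    if (bpf_map_lookup_elem(&blacklist, &iph->saddr))
        return XDP_DROP;     /* listed source: drop at the driver */

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```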
Some of the other applications: Facebook right now is deploying this for load balancing; this is replacing IPVS as a solution. We're finding it has a lot more performance, and that's working out really well. We're also seeing use cases in switching, routing, tunnel termination. Again at Facebook, I was working on ILA routing, and one of our use cases was basically to build an ILA router in the network. It's nothing more than glorified host routes, in a sense: a packet comes in, we rewrite 64 bits of the destination address and send the packet on; it's a very simple process.
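A rough, hypothetical sketch of that rewrite: ILA replaces the upper 64 bits of an IPv6 destination (the locator) based on the lower 64 bits (the identifier). The map layout and program below are my own illustration of the shape of such a program, not Facebook's code; a real router would also rewrite the Ethernet addresses and use checksum-neutral mapping.

```c
/* Hypothetical sketch of an ILA-style rewrite in XDP: look up the new
 * locator by identifier, overwrite 64 bits of the IPv6 destination,
 * and bounce the frame back out the same interface with XDP_TX. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ipv6.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __type(key, __u64);      /* identifier: low 64 bits of destination */
    __type(value, __u64);    /* locator: new high 64 bits */
    __uint(max_entries, 1000000);
} ila_map SEC(".maps");

SEC("xdp")
int xdp_ila_router(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IPV6))
        return XDP_PASS;

    struct ipv6hdr *ip6 = (void *)(eth + 1);
    if ((void *)(ip6 + 1) > data_end)
        return XDP_PASS;

    __u64 *dst = (__u64 *)&ip6->daddr;          /* [0] = locator, [1] = identifier */
    __u64 *loc = bpf_map_lookup_elem(&ila_map, &dst[1]);
    if (!loc)
        return XDP_PASS;                        /* not ours: up the normal stack   */

    dst[0] = *loc;                              /* rewrite 64 bits of the address  */
    /* A production router would also swap/rewrite the MAC addresses here. */
    return XDP_TX;                              /* send back out the same interface */
}

char _license[] SEC("license") = "GPL";
```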
It was like 20 lines of BPF code, and we were able to build a device doing line rate without requiring specialized hardware. Let's see, a few performance numbers: this was an Intel Xeon with a Mellanox mlx5 NIC. It's kind of a continuous thing; we've been getting the performance numbers to improve and improve, so these are kind of the latest ones.
So for comparison, the kernel using TC (traffic control), which is kind of the typical way we would implement this sort of functionality, does about 3.5 million packets per second on one core. XDP doing the same thing gets us up to about 16.9 million, and XDP transmit, which is a little more work because we have to rewrite the packet, is still at 13.7 million packets per second. Overall in the system, if we allow 24 cores, we're getting 45 million packets per second transmitted; I believe that's the current hardware limitation.
So it's scaling pretty well. One of the benefits of doing it this way, since we don't have a lot of atomic operations, is that most of these will scale linearly with the number of cores. The reason we test on one core is that we need to show performance at that level: certain attacks, like an attack on a single tuple, would land on one core. So that's why we typically measure this on one core and then for the whole system. Next slide.
So looking forward: right now we're planning on enabling more drivers. Currently we have five supported, and we'll be adding more; virtualization was actually just recently supported. There is hardware support for XDP; Netronome is the first case of that. Basically the idea is to offload the BPF program: BPF is nothing more than some sort of bytecode when you compile it, so in theory it can run on anything, hardware, software, what have you, so they can download the BPF program and run it in hardware.
Let's see... we need to build out the ecosystem of contributed solutions. The good news, going back to Cloudflare: next week we're having another conference on Linux networking, and they are presenting a solution for denial of service mitigation with XDP; looking forward to that one. Another important thing we learned from something like VPP and some other initiatives is that packet batching, in terms of processing, is important: the more packets we can process at once for a particular piece of code, the better the instruction cache behaves, and so on.
We see better performance with that, so these are ways to squeeze more performance out of the system. We do have a little bit of a performance gap that we're still working on with DPDK; as I said, it's not orders of magnitude, it's within a few percentage points, and we have some ideas on that. One of the big differences between user space and kernel is that user space can enable certain instruction sets: on x86, for instance, floating point instructions and string instructions can be used.
We haven't really turned those on in the kernel, for obvious reasons, since they're kind of difficult; if necessary, we might get into that. Also, we've been anticipating some TLB (translation lookaside buffer) issues with huge pages in certain applications, so that's another thing we're looking at. And as I mentioned, busy polling and some other techniques are certainly techniques we can also integrate into XDP to get that last bit of performance.
So we now have the rest of our time, which we can use freely for a discussion on congestion control in the IETF, because we see a lot of congestion control work coming up and we want to get feedback from the community: what should we do here, is this something we should target in the IETF or the IRTF, and where, and how, and so on. We have two people giving a little bit of background.
So, the history of why ICCRG was created is about the role of congestion control and its ecology, and how this whole thing fits together in the IETF. It's really: why was ICCRG created, why does it exist? It was created at some point because there were all these proposals for new congestion controls: CUBIC, H-TCP, Compound TCP.
Here's a very important aspect of this animation: before anything else happens, there's nothing else there, right? But TCPM is already grumpy, because that's how they operate. So this was IETF 75, 2009. They always are, right. But this is about a historic procedure: we have a proposal coming in, in rainbow colors; it's a double U-turn. It was proposed to TCPM at this IETF, and off it went.
What happens? You know, I mean, you spend time on it, and it came back anyway, and then they were happy. And that's pretty much how we operate: you bring stuff to us, it spends some time in the trash can, and afterwards it's beautiful and shiny and everything. In this case it took only three, I mean, that's maybe a positive aspect: it only took three IETFs or so to come back, then it spent some more time in TCPM, and then it was an experimental RFC.
ICCRG also can publish documents; that's something people don't seem to be aware of. That's the IRTF track; other groups are publishing a lot. They can be informational or experimental. We have only two: "Congestion Control in the RFC Series" and "Open Research Issues in Internet Congestion Control", kind of survey stuff. Why don't we have any specs of any experimental things? Only because people don't seem to care about it, or don't want that. It really is an open question. I've been proposing it actively to people.
They were like: oh, an ICCRG publication? I'd rather go into the corner and shoot myself. I don't know why that is, but it seems they really don't want to do that. So one of the questions that was asked, that I'm trying to address here, is: is it necessary that IETF congestion control work is spread around the IETF the way it is? I personally think this is perhaps inevitable, because congestion control as a function is intrinsically tied to the goals that the protocol is trying to solve.
Jana, who's also holding on to the trashcan theme, is shaking his head. Just to continue this thread: we have right now CUBIC, which is sitting in TCPM, and we have BBR, which has been presented at ICCRG a couple of times, and if we end up getting a draft out of that at some point, there is going to be a question of where it should land. Bear in mind that we now have a QUIC working group that actually has congestion control as a part of it as well. So going forward,
there is a real question of where all of this congestion control activity should lie. Should we even, in retrospect, take CUBIC and say: well, you should write it in such a way that it also applies to QUIC? Maybe we should have a CUBIC-for-TCP and a CUBIC-for-QUIC draft, I don't know; but if you're describing it without the framing, it certainly seems to belong in a place that's not specific to a protocol.
So the IETF a long time ago bought this dogma of TCP-friendliness, which is of course an oxymoron, because it's not about TCP at all: it's about Reno. The Internet has reached the scale where you should replace those words with "not relevant in today's Internet", and the problem that really belongs in this working group, this research group, is: if you disband the words "TCP friendly", what do you replace them with? How do you say "freedom from congestion collapse"? How do you say "reasonable behavior under resource exhaustion"?
Those seem to be a better bar to be striving for. But the problem is, if you look, for instance, at the research community: academic research review panels still dismiss protocols that don't address the issue, and making authors spend twenty percent of their page space explaining why the protocol is adequately TCP friendly is actually doing us all a disservice, because it means there are some very good ideas out there, which probably are deployable in today's Internet, that are in fact being banned from being published because they're not TCP friendly.
So this is kind of a recap of where we are in terms of congestion control support in various operating systems. If any of this is out of date, please let me know and I'll fix up the slides later. Android and Chrome OS are doing CUBIC by default; I believe iOS and macOS are also doing CUBIC by default.
The other kind of interesting thing to notice is that the network operators are trying to solve the problem in the network using active queue management, but the end devices have no visibility of this, so they cannot, for example, assume that a given network has an AQM or not; there is no way to detect that. So the operating system implementation has to account for the case where there is no AQM. QUIC is coming up.
There are cases now where popular mobile applications write their own transport, so things are getting to a stage where there's not just one or two congestion control algorithms: there could be ten congestion control algorithms sharing a bottleneck link, and it becomes very difficult to reason about performance, or, you know, how the network would behave.
So buffer bloat is real. This is smoothed RTT information from TCP connections on desktops and Xbox consoles. If you notice (obviously I don't want to imply that correlation means causation here), it does look like at peak load times the RTT is inflating a lot, and this impacts everything from page load times to responsiveness in games. So this is a real problem; we are seeing this in data right now.
So yes, a multitude of congestion control algorithms. How do implementers deal with this situation? Do we test the entire matrix here, of every congestion control working with every other congestion control, as well as AQMs? This matrix is kind of blowing up. So, as an implementer, my question is: what would be the best way to address that problem?
The other question I have is: how do we prevent this from becoming an arms race? For example, when we run tests with CUBIC and Compound sharing the same bottleneck link, CUBIC just kills Compound, and then that would create a perception problem, because people are running speed tests and conclude that, for example, Windows is slower than Android. So the question is: how do we prevent this from turning into everybody trying to go as fast as they can, and causing buffer bloat to become worse over time?
I think Matt was talking about TCP friendly, so yeah, I have the quiz question now: what does "friendly" actually mean? BBR is also complicating things a lot more, because it's heuristic based; so now, in the presence of BBR, how do we define things like RTT friendliness and latecomer fairness? And the final question is: what do we do about this? Is it just publishing a set of informational documents?
My question would be: how do we define what it means to coexist with other congestion control algorithms? And again the TCP friendliness question remains: does every RFC have to go out there and take into account every popular congestion control algorithm and list how it is friendly with it? How do we define RTT fairness and latecomer fairness, and all of that?
Okay, so my other comment is really addressed a little bit more towards Michael's presentation, but I think it's relevant to both. Just going back to the happy trash can, which I believe is a metaphor for:
"well, you guys go figure it out", and they sent them off to the happy trash can. So I think that's still a really valuable role, and I think the discussion, and really the comment that Matt made (oh good, you're here), was a really good one: our criteria for how we evaluate transport protocols have probably changed over the past couple of decades, and we haven't done a really good job of articulating that. Some of that falls on the research community.
But I think that from this community of implementers and operators and engineers, we can speak to the research community and say how we prioritize, or how we think things should be evaluated for safety or desirability. And I still think that would fall into the ICCRG's bucket, as opposed to the IETF's bucket, because I think that's still kind of a research consensus sort of question.
Also, there's this TCP evaluation suite that has been around for a very, very long time, and it has suffered from cycle problems. It has been handed over, and at some point David Hayes was in charge of it; while he was working for me, I pushed for getting this done and getting it into RFC state. It was partly there, and it was pushed back for being outdated, which it is. So all we need is a volunteer.
David Black, one of the chairs of TSVWG, the transport area working group, which will meet right after this. I likely have a trash can too; I'm not going to go look at the websites, etc., that I can find, but it represents the whole variety of stuff that comes our way. We deal with congestion concerns, some of which are congestion control, some of which may or may not be applicable, for a whole pile of things that haven't yet been mentioned here. SCTP lives in TSVWG; anybody who wants to play with DCCP comes to us.
There's congestion control guidance in the new UDP usage guidelines, and UDP encapsulation as a whole is another adventure in and of itself. So, Mike, I tend to agree with your comment earlier that we're going to be strung out all over the place, and, at least speaking from my parochial perspective, wearing a TSVWG working group chair hat, having ICCRG as a central clearinghouse for the congestion control issues that we don't understand, because we haven't got the expertise in the room, is a place where folks might want to work.
Can I add something? Thanks. I don't think the question is whether we still need ICCRG; I think there is no question about it. They do good work; that's where the research comes in, and we get the input from research that we really need. My question is rather: can we, as the IETF, give any recommendation on what you should do? Because our current recommendation is "use Reno", which is a little bit outdated.
Our soon-to-be future recommendation will be "use CUBIC", because there's a draft out there which will soon be an RFC, and the reason we think it can be an IETF RFC is that it's deployed widely enough that we know it's safe. But it's still not the state of the art. So how can we make the recommendation given by the IETF a little bit more up to date? That's my question.
Lars Eggert. So Michael said a bunch of what I was going to say. This slide could have been from 10 years ago, because it's the exact same questions we asked ourselves when we had Hamilton TCP and CUBIC and Compound, and back then we had Sally Floyd (she's still around and active), and she basically said: we're going to do this evaluation suite, and there's a paper on it, and then somebody turned it into ns-2 code, and that suite is very boring.
It's basically just a bunch of scenarios, and the idea was that, for the IETF but also for the academic research community, if your paper didn't run through at least these scenarios and show us the plots (and they were really, really prescriptive: this is the parameter, this is the metric, here's the range, plot this for your thing and compare it against the others), you would be automatically rejected, or you wouldn't get face time at the IETF. So the idea was really to provide this sort of, um,
you know, how would you call this today? I don't know, test cases or something. It's very boring work, right? So thanks for kicking David to do this. But that would be one way forward, and today, with better tooling, we probably wouldn't do it that way anymore, I hope. So that's one way forward, if we believe in it, but then you need to keep adding to it, and it's at best...
The other problem with it really is that, as networks get faster, and with the data center stuff mentioned earlier, forget it: the hardware effects are so overriding that you can take your ns simulation and throw it away. And for the data center case, we really don't have any way to do anything large scale other than running it in the data center, and it's getting even harder to compare. The one thing, since I'm rambling here, the one thing I think is going to be interesting here,
that isn't all sort of gloom and doom, is QUIC, because it has a whole new set of information about the path that is "better", quote unquote, than what you get with TCP, and so there's a lot of new meat for congestion control researchers to play around with and come up with new schemes that actually, you know, improve things compared to what you can do with TCP. So that's my hope for where a lot of research will happen in the future. That's something I hope will happen, though not in the QUIC working group; no, it won't happen in the QUIC working group.
That means that the nice guys cannot get on the network, because it will be known that they are facing pigs, and when they face the pigs they lose. And that means that you end up having to design your software as having two modes: there is the nice mode, and if your detection code tells you that you have a pig on the network, you go to pig mode.
OK, yeah. So my point is that we should really recognize this idea that this is the state of the network: if you want evolution, we will have to recognize that we have the nice mode and the pig mode, and the pig mode is when you're competing with CUBIC and you have to be a pig too.
Do you want not to be able to run HDTV during primetime? If you build enough capacity where all the customers can get HDTV during primetime, nothing else matters, and that is the effect that has driven things, such that the Internet doesn't care much about congestion control. You have bottlenecks at the edges that are at a few megabits per second, or even tens of megabits, or even a gigabit per second, which is a small fraction of the core; no individual flow can ever cause congestion in the core.
What is important is the fact that the research community and a large number of other people are still solving irrelevant problems: problems where the solutions (I mean, there are solutions) are irrelevant to today's Internet, and we need to fix that. One of the things that we don't have is a good way of defining whether or not a protocol is safe. I recently read in a paper a comment that said: oh well, we're using TCP,
so we know that we don't have to worry about congestion collapse. No: it's trivial to write applications that will cause congestion collapse. Unfortunately, you can cause congestion collapse with DNS. So it is a complicated problem; it is a very complicated problem to define what a safe protocol is, and it is really the problem that we care about.
I wanted to add something on the test suite stuff. I think another reason why it didn't proceed further is that it's actually not that easy; it might be boring, but it's not that easy. I tried to implement it, and the recommendation there, because the idea was to make it as realistic as possible, is to use traffic traces that have a lot of short flows, and congestion control doesn't do anything for short flows; it's mostly slow start.
So when I was implementing it, I basically couldn't say anything about it, and then I came up with my own test cases for my own stuff. So it's actually not that easy; it's not clear what to do there. The other thing I want to point out in this area is that there's also RMCAT, which has some test cases as well, and I think they took a different approach: they took more simple, artificial test cases. But there the idea was also not to evaluate a congestion control as such,
or whatever have you... whether the RFC categories actually mean what they say. David Black, speaking more, I think, as an individual than as a working group chair; Matt can be the judge of that. He used the word "safe", and I think he was using it in the context of congestion control algorithms not doing damage to each other. I seem to have been spending all my time in a space where "safe" means traffic that is not congestion controlled: it's not going to implement CUBIC, or Compound, or whatever have you.
What are the minimum guardrails that have to be put on that traffic, so it doesn't damage the stuff that is congestion controlled? You'll see this turning up in a number of things that come out of TSVWG recently, the UDP guidelines and the circuit breakers drafts, which are now RFCs, in particular. And I think that broader question deserves some attention, because as much as we will work on interesting congestion control algorithms in the transport area, which is all very good and important,
I was going to say something else, but I'm going to piggyback on what David just said, which is: I think this is exactly one reason why Reno has stuck around for quite a while. It's incredibly easy to implement, it's hard to get wrong, and it works. That's a pretty good reason, I mean.
I think that having something as simple as Reno to recommend, knowing fully well that anybody who's building a transport protocol is going to experiment with congestion control, because there are no interop problems as such, is, I think, a healthy place. That's where we have been for quite a while: this is the reason that we have BBR, this is the reason we have Compound, or CTCP, and I think that's not a bad place to be: have one basic requirement and then have a bunch of things, of course.
I understand this is a hard problem: how do you specify how they coexist? I don't have any... I mean, as Lars is pointing out, I remember seeing this quite a while ago too, and it's hard; it's definitely a hard problem. The test suite thing, I don't think, scales. We can come up with a test suite, and I'll tell you now: you start working on it now, you're going to get done with it in over three years from now, and at that point it's already going to not be useful.
Hi. I'm trying to read between the lines of what Matt and some others are saying, and I'm trying to compare it with the situation in the security environment: you have a couple of people who, from a security context, invent cryptographic algorithms; you get them in, but you obviously don't want all of those, because more algorithms mean more problems, let alone the analysis of those.
If that's how it works in practice, however, the impression I get is that you have to be a really big player to actually get something adopted, because those players then also go ahead and deploy their stuff right away, and then tell you after a few years: we deployed it and it works pretty well. And that's how they actually get the work done.
So one of the reasons why I speak of a test to check for this: it's actually trivial to write an application that causes congestion collapse. You have a loop that does a fork, and then the child launches a query, and you put a sleep in the outer loop or something like that, and it behaves very nicely under normal load.
So the consequence of running out of capacity is that it latches up at a fifty percent loss rate, and it does not recover until the load goes down to below half of the bottleneck rate. This is a mess. It's not really fixable in DNS, because DNS doesn't have enough stability, but it's really, really easy to write perfectly ordinary-sounding applications that take a very conservative TCP implementation and still cause congestion collapse, and these really need to be viewed as transport issues, because it's transport dynamics.
Well, the research question is: how could you imagine creating a test environment, a test suite, to look for regenerative load? There are two different things: one is whether the presented load rises as the performance goes down, and the other is whether the overhead rises as the load goes up.
Sam: so one thing that seems to be coming up from this conversation is that there is a lot of community knowledge about congestion control, folklore maybe, that isn't being written down, and I think we are really losing something as a community. If we could find documents in that form, there would be a community example to build on.
I mean, even so: the point of using congestion control is to avoid a congestion collapse. Just by having an adaptive congestion control, the idea is that if you see increasing load, you reduce your load. But if you combine this with other mechanisms, or you have implementation errors or whatever, there's still something that can go wrong.
Lars again, I guess. So I'm one of the authors, and I think I speak for all of the authors when I say: if you make us open that thing up again, we could. But what it says for UDP actually, I think, captures even what Matt is worried about with overload.
N
It
basically
says
that
you
should
keep
an
RTT
sample
if
you
can
and
then,
if
you,
you
know,
keep
keep
it
at
most
one
message
outstanding:
unless
you
want
to
do
something
more
complicated,
like
implement
and
actually
regression
control
algorithm,
but
it
also
has
a
default
timer
when
you
can't
and
I
think
it's
like
three
seconds,
because
that's
what
TCP
uses
for
the
centenary
transmit,
but
it's
not
only
it's
not
only
dns.
That
has
this
thing,
but
they
are
they
already
sort
of
and
then
the
UDP
guidelines.
N
R
R
L
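For illustration only (this is my sketch of the behavior being paraphrased, not text from the UDP guidelines document, RFC 8085), a simple UDP request/response sender following that shape, with at most one message outstanding and a conservative 3-second fallback timer when no RTT estimate exists, might look like this in C:

```c
/* Hedged sketch: one-request-at-a-time UDP sender with a receive
 * timeout derived from a smoothed RTT when available, else a
 * conservative 3 s default, in the spirit of the guidance above. */
#include <sys/socket.h>
#include <sys/time.h>

#define DEFAULT_TIMEOUT_SEC 3   /* fallback when no RTT sample exists */

/* Send one request on a connected UDP socket and wait for the reply,
 * keeping at most one message outstanding. */
static ssize_t send_one_and_wait(int sock, const void *req, size_t req_len,
                                 void *resp, size_t resp_len, double srtt_sec)
{
    struct timeval tv;
    double wait = (srtt_sec > 0.0) ? 2.0 * srtt_sec : DEFAULT_TIMEOUT_SEC;

    tv.tv_sec  = (long)wait;
    tv.tv_usec = (long)((wait - (long)wait) * 1e6);
    if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
        return -1;

    if (send(sock, req, req_len, 0) < 0)
        return -1;

    /* Block for the reply (or the timeout). On timeout the caller should
     * back off before retrying rather than retransmitting immediately. */
    return recv(sock, resp, resp_len, 0);
}
```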
So I was going to say, actually piggybacking on this and trying to go back to a previous thread about CUBIC itself: CUBIC certainly qualifies as BCP, not necessarily as a standard, but I don't want to call it "best". So maybe we should come up with a categorization called MCP, most current practice, or something like that. Deployed current practice; there we go.
One of those classifications would be quite useful, I think, for us; we've struggled with this for a while. Just because it's deployed we are not going to standardize it, sure, but let's document it in some form, because that's what the CUBIC document is: we're not agreeing that it's the best thing on the planet; we are saying it's deployed.
It would be quite useful to have that, and I think that actually does go towards achieving some of these goals, maybe. So I'll propose this: no congestion control work gets standardized; they all show up in ICCRG; there are only RG documents, because there is no end state to any one of those things. It's a fine world to live in. All we need is documentation; they don't need to get standardized at all.
Yeah, this was on the definition of TCP friendly. I just wanted to inject that, when Colin was speaking, it came out somewhat garbled in the room; what we were saying was that there's a lot of important folklore and knowledge in this conversation that isn't written down. The community should try to document that in a form that can be published, so it can be referenced.
So there is a real question of, you know... we sort of have all the same transport folks ending up going to all of these meetings, which happens to be a happy accident. It just doesn't mean that the work is well distributed among these groups, or, I don't know, whether, as Praveen says, there's a good process that anybody could follow to figure out exactly where to go for any of this stuff.
I really think that we should leave congestion control where it is, which is evolving all the time; we should embrace that. I don't think there's a reason to say that this is the standard that we recommend. We will never get anybody to use a standard that we recommend; it's always going to be in the past.
Yeah, I concur very strongly with that. One of the clues about all of the congestion controls, about how well they're doing, is to look at the control frequency. Years ago I gave a slide, something like the one that Yuchung showed earlier, showing, you know, the packet loss rate needed to run at some ridiculously high rate.
Well, some normal high rate today is 0.0-something percent, but the real issue is, for CUBIC, for Reno, under those environments the correct loss rate is one loss episode every many tens of minutes. You can't control a system that way, and in these controls, the real fundamental issue is that the control frequency has to be in the right range, and it has to not change as you increase the data rate.
We understand this now, that in order for it to scale right... and that means that the loss rate... I don't want to go into details. The problem with CUBIC is that, because of the way the exponential phase goes, as the scale gets larger the control frequency doesn't keep dropping. I don't remember the numbers, I haven't worked much with CUBIC, but I believe it stays at a few seconds.
The amount that it overshoots, when it overshoots, increases, and so it slams everybody else on the network, and that effect gets worse as the network gets faster. And as a consequence, people who are competing with CUBIC on big-scale networks often get drive-by losses that are very large, that they did not contribute to.
I can't remember what it says about congestion control, and so I think it's probably useful, and not such a bad thing, for the IETF to have a default pointer that points to some document that says you should do something. I'm not even sure I care which thing that pointer is pointing at at the moment, but that's probably a reasonable thing, and it's probably not so horribly wrong for some person who's doing this from scratch.
We see all these competing stacks already deployed by the different platforms, and behind each platform there is a company. So I think, originally, if we go back 20 years, the continued discussions were evolving around an environment in which everybody assumed the open Internet: everybody had nice behavior, so everybody needed to be friendly, and I think that is probably where the TCP-friendliness term actually comes from, that environment. But our reality now is that we are having an Internet which is essentially a competitive environment.
There are different players, companies; they have competing business interests; their companies' objective is essentially to try to make their customers happy, and in such an environment, what I'm talking about is probably not very easy right now. But what if we look at a different approach to this whole issue?
It would probably be some guidance to help people understand that the user, the idea, is that their application is always going to deliver some kind of good service. In other words, just let the algorithms run a race, and let that identify which is the best algorithm, which ones have already come out on top, or, from that, which algorithms will be most robust and consistent in their performance.
Thank you. I can just add to this point that we have, I don't know how many years of congestion control research, with tons of algorithms out there, and we see very limited deployment of new algorithms, and I think it's because people just don't know which one to use. We've recently seen a little bit more deployment, because one case is WebRTC, where the traditional congestion control doesn't apply; and the other one is:
if they want to use TCP, the only thing they can go to at the moment is New Reno, and then at some point they look at their transmissions and their quality-of-experience metrics, whatever that is, and they see that it sucks, because TCP Reno is not really optimized for today's network, and then they say: oh, that's TCP, it doesn't work, I have to write my own protocol; and in the end they just use a different, or the wrong, congestion control. And that's as long as we don't give any further guidance than "use TCP Reno".
So I came here to actually sort of agree with Tim. I don't actually think we have a big problem, and I don't think the case that Mirja describes... it's pretty theoretical. I mean, who here writes a new stack and then deploys it on an amount of systems that can move the needle anywhere on the global Internet? Almost nobody, right? The big stacks are incredibly tightly controlled.
There's a lot of engineering going in, and I don't think it's an arms race either, because we have... going back to BitTorrent and Skype: BitTorrent got hammered because they basically broke people's Skype calls, and if Netflix started interfering with, I don't know, Skype or some Google service, all hell would break loose. So it is in everybody's best interest to avoid that situation, so I think we're actually pretty far from an arms race here.
I do agree that we want to have what Tim said: we want to have something that people can start reading and start learning from, and, you know, New Reno is... yeah, it's not ideal, but it's also not terrible, and it's sort of easily understandable. The other analogy I was going to make is that transport is very similar to security in terms of the IETF: both of those areas are about telling other people no, no, no.
You can't send that packet, right? Or: no, you can send that, but indicate so. And in security, crypto schemes change, security protocols change; if you look at TLS, there's a bunch of cipher suites that we're deprecating and there's new ones we're putting in, based on what we understand about what the attackers can do and what the capabilities are, and that's fine. It's a continuously changing thing, and it's similar in transport; I think our pace of change is slower, but it's the same thing.
We had New Reno, and maybe now we have CUBIC, which is better in some cases and worse in others, and we're going to have more specialized ones that are never going to really run on the Internet anyway, and that's okay. If you have something better, we're going to update the pointer and we're going to say: look at, you know, BBR, for example, in a couple of years or something. It's okay; I don't think it is a big problem, and I don't think there's an arms race necessarily happening.
To add to that: buffer bloat is a problem, and we know, for example, that CUBIC is worsening it, and at this point it's not an arms race; but if you were trying to compete for the same bottleneck link, you would have to implement CUBIC yourself, right? So there is a problem; it's not like there is no problem.
Okay, now, can that be described? Maybe that's something that could be an experimental document out of ICCRG, I don't know. But very few of them have sort of catastrophic failures, and buffer bloat is something that a congestion controller can't really do anything about anyway; it's operating over the path that it's been given. I'm not saying we shouldn't fix it; we should, we definitely should, but I don't think it's necessarily a congestion control problem.
...scalability on the positive side, and make it compatible for normal situations with Reno and CUBIC; but if the round-trip times become bigger, or the rates become bigger, we can improve and actually correct what's going wrong. So I believe there is an opportunity, and I welcome everybody to join that discussion.
There are a lot of mutually conflicting constraints on all the players, but I want to point out one that maybe didn't occur to people: if you're selling advertising, for instance, and somebody clicks on an ad, and the entertainment that they happen to be watching interferes with the ad loading, you've done a very bad thing to yourself. And that ad comes from somebody else's server using some unknown technology, or I should say the content does.
To say the least. But in fact we do need to have a story, especially in the cases where BBR, for example, does very poorly relative to CUBIC under some easily demonstrated conditions. What's the story you tell? Part of the story is we'd like CUBIC to just go away, but we can't cause that to happen. In the transit case we don't have fairness; well, guess what, we don't have fairness today. It never existed, it never will exist; it just has different symptoms in different cases. We want to avoid starvation.
I've been trying to understand the Internet for a long time, longer than I would care to even admit, and part of the challenge of trying to understand the Internet is that by the time I've figured it out, it has changed; it's different. And I think this whole notion of TCP friendly is sort of difficult to think about these days, because of the fact that the net has just been changing, and so I'm actually used to the fact that the net keeps changing by the time
I think I've understood it. And just a week and a half ago I watched a video of Geoff Huston, a talk he gave, I think, three or four weeks ago at APRICOT 2017, and there's a YouTube video; it's hard to find his talk by searching on YouTube unless you know exactly what to search for, and it's "APRICOT 2017 panel: forces shaping the network"; that's on YouTube.
Instead of trying to explain why that might be true... anyway, I found that... I've watched that talk two times and I've already recommended it to about a dozen people, and I realized that it might be relevant to people who are trying to understand what we should do for congestion control in the future, and that if you watch his talk it might reset you, because it causes you to stop and wonder what you actually believe anymore about what the Internet is. So anyway, I thought...
Done, basically? Sorry, we're out of time already, so thank you very much for the discussion. I think this was a very good discussion, and I want to go through the minutes and also the Jabber chat and see if there's anything we could actually write down and conserve for the future, and give some guidance; but other than that, I don't see any action points right now. I think we just operate as we're operating right now. And thank you for your presentations.