From YouTube: IETF95-TSVAREA-20160405-1740
Description
TSVAREA meeting session at IETF95
2016/04/05 1740
Welcome to the transport area open meeting. [unclear] First, we need a minute taker; any volunteers for taking the minutes? Thank you, Brian. And then the jabber scribe, who looks at the jabber room in case there's somebody who has questions in jabber. Would anyone like to be jabber scribe? Thank you, Allison. So, as was said, the blue sheets are going around; please sign them and keep them going.
We just started. Spencer, [unclear]. Well, so the agenda for today is pretty full, at least on this slide. So: welcome is done, blue sheets are done, jabber scribe is done, and we have a minute taker. That's the agenda, and it's time to bash the agenda. We'll talk for the first 15 minutes about administrative issues out of the area; there's nothing else, really, to be mentioned. Then we have a small BoF announcement from Gonzalo about ACCORD. For that he didn't prepare any slides, which is a pity. Then we have a talk from Felipe about rekindling network protocol innovation with user-level stacks, and at the end we're going to use the remaining time to discuss open issues and current and future hot topics in transport. The intention there is to see if we in transport are actually covering everything we should be covering, in terms of transport protocols, signaling protocols, whatever. If you have specific ideas, you are free to get to the open mic at the end and express your opinion.
Okay, let me conclude bashing the agenda. We'll go over to the current set of working groups we still have. You can see this on the left-hand side; those are the active working groups. On the right-hand side you see the groups which have been closed since the last IETF meeting, listed in the chronological order in which they got closed, which is [unclear], conex and storm. They all finished their work successfully, and, yeah, that's it. No questions? No!
Then Spencer and I made some nice pictures today about the recently closed working groups, because at this IETF we have this nice red button up here, which is the "close working group" button: if something goes wrong, we just hit the button and it's gone. And Spencer was saying that's how you do it on the slow days. Okay, good. Then, you might know or you might not know, we had an area secretary, which was Linda Dunbar.
She helped us in the past in running those area meetings, getting slides together, asking for the minute takers and so on, and she's stepping down as area secretary. The main reason is that since the last meeting, or the meeting before, we have a so-called triage team, which is Allison Mankin and Wes Eddy, who are actually helping the transport ADs and running the reviews and so on, and we just said that this task is being merged into the triage team.
Then, and I'm not sure whether we announced this last time already, we'll talk about this: we now have a new transport area, well, transport and services area, review team, which is replacing the old transport area directorate. Right now it has a fixed number of 13 members. It's not me anymore, but the future ADs, which is Mirja and Spencer, might consider growing or shrinking it. You can see the list of people who are currently on the area review team.
So, like I say, we think we have a pretty good description. We are interested in what people think, and please send comments to the tsv-area list.
Gonzalo: I've been asked to give a brief heads-up on ACCORD. I gave a similar one in TSVWG; how many of you were there?
Yeah, Spencer was right, actually; I thought the overlap was going to be much bigger. Anyway, so this is about the ACCORD BoF, and let me give you basically the same heads-up that I gave in the working group. The origin of this BoF was that, basically, operators started seeing a lot of encrypted traffic on the internet, and in particular on radio networks. The reasons for that are manyfold, but an important part is...
...the IETF actually recommending that protocols use security by default and that encryption is used all the time. So the problem is that some of the techniques they used to do radio management, or management of traffic over the radio networks, actually rely on having access to unencrypted traffic. So when the amount of traffic that is encrypted is really high, they cannot use those techniques any longer. As a consequence, GSMA, which is the association of mobile operators, held a workshop with the IAB.
It was called MaRNEW, and basically they were discussing how to look at the problem. They were specifically looking not just so much at the architectures, like 3GPP, but more at what is actually deployed and what is actually in use. And I would suggest that you go and read the report, because it just came out last week, I think, and that will give you an idea of all these discussions.
Those discussions were basically scoped to a smaller group, because it was an IAB workshop; this is an IETF BoF, so the scope is much bigger. We're going to be continuing those discussions and investigating, basically, what are the proposals and solutions that we could use in order to address some of those problems. So, as I said, we will be discussing requirements, pain points and potential solutions. This is a non-working-group-forming BoF.
So if we actually come up with something that could look like a solution (as I said, for example, one of the proposals is to use diffserv for that), it would come back to the right working group, or something new would be chartered; but it will not be an ACCORD working group working on that type of thing. So it's more like, let's say, open discussions: we want to explore the issue and see what comes out of that.
So, having said that, do you have any questions or suggestions? Yeah, good point: on Thursday. The BoF is on Thursday; I forget which slot, I think it's in the morning, actually. Anyway, have a look in the agenda: ACCORD is on Thursday.
In the morning. Okay, so it's the morning session. If you want to get familiar with the topic, as I said, read the MaRNEW report, and we have uploaded a lot of material there, you know, presentations and the like, the agenda; so you can just go to the materials page and have a look at everything. Okay.
So the motivation here is: if we could extend layer-four functionality, we could address a lot of problems, and there are a lot of people working on things like initial windows, TCP Fast Open and so forth. There are also things like encryption, tcpcrypt and so forth. So clearly there is quite a need for improving transport protocols; but is it really possible to still deploy layer-four extensions? We know there are a lot of middleboxes out there, but we've done some measurement studies, and we showed that, for well-designed TCP extensions...
...it is still possible to deploy these new extensions over about eighty-six percent of the paths. So if the paths actually work, how come we're not able to deploy new transport layers so easily on the Internet? Well, the actual implementations are in the OS stacks, of course, and this is for many reasons: high performance, it provides isolation between applications, and you provide socket APIs; and then you have new OS versions that adopt these new protocols and extensions.
Part of the problem, though, is that the OS release cycle is pretty slow, on the magnitude of years; and once you actually have the OS supporting a feature installed, that doesn't mean that you're actually using it, because defaults mean that certain features are not turned on. And even when they're there, many system admins are sort of loath to actually turn them on, because why would you fix something that isn't broken just to improve the performance of X? So, I don't want to really go into the details of this graph...
L
Very
much
I'll
just
give
you
the
overview.
This
is
basically
trace
from
Maui
from
2007,
3,
2
2012,
and
what
it's
doing
its
measuring
deployment
amount
of
certainty.
Cp
features
such
as
selective
acknowledgements,
timestamps
and
windows
scales,
and
the
sort
of
that
this
is
for
Windows
and
Linux,
and
some
of
these,
such
as
selective
acknowledgement
in
Windows,
was
deployed
in
Windows
2000,
and
you
can
see
in
the
curse
that
even
in
2012
you
don't
have
full
deployment.
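As a rough illustration of what such a deployment measurement has to do, here is a hypothetical sketch (not the tooling used in the talk; the helper name and sample bytes are invented) that parses the option list of a TCP SYN and reports which of the features just mentioned the sender offers. The option kind numbers (3 = window scale, 4 = SACK-permitted, 8 = timestamps) come from the TCP specification.

```python
import struct

def tcp_syn_features(options: bytes) -> set:
    """Walk a TCP option list and collect the extension-signalling options."""
    features = set()
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:                 # end-of-option-list
            break
        if kind == 1:                 # NOP padding, single byte
            i += 1
            continue
        if i + 1 >= len(options):     # truncated option
            break
        length = options[i + 1]
        if length < 2:                # malformed length
            break
        if kind == 2:
            features.add("mss")
        elif kind == 3:
            features.add("wscale")
        elif kind == 4:
            features.add("sack_permitted")
        elif kind == 8:
            features.add("timestamps")
        i += length
    return features

# Example SYN options: MSS 1460, SACK-permitted, timestamps, NOP, window scale 7
opts = (struct.pack("!BBH", 2, 4, 1460)
        + bytes([4, 2])
        + struct.pack("!BBII", 8, 10, 1, 0)
        + bytes([1])
        + bytes([3, 3, 7]))
print(sorted(tcp_syn_features(opts)))
```

Running this over SYNs captured in successive years (as the MAWI-based graphs do) would show, per OS, how slowly negotiated features actually appear on the wire.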
So what we did is we built a system that provides OS support for multiple of these user-level stacks. This is the overall architecture, and I'm going to walk you through it from bottom to top. You have the NIC at the bottom, and then we are based on the software switch called VALE, which builds on the netmap API from the University of Pisa, and inside that switch...
L
We
have
a
module
that
basically
works
on
three
tuples,
so
networks
tax
and
applications
register
three
tuples,
and
then
this
module
make
sure
that
packets
are
directed
in
the
right
to
the
right
stacks
on
RX
and
on
TX
and
make
sure
it
filters
such
that
stacks
don't
send
packets
to
from
the
wrong
three
couple,
and
then
we
add
virtual
port
to
the
switch.
Those
virtual
courts
are
connected
to
the
user
level
networks
tax,
and
on
top
of
that
we
have
the
the
actual
applications.
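The 3-tuple registration and filtering just described can be sketched in a few lines; this is illustrative Python, not the actual MultiStack kernel module, and all names here are invented for the example.

```python
class Switch:
    """Toy model of a switch that steers packets by registered 3-tuple."""

    def __init__(self):
        self.owners = {}                  # (proto, addr, port) -> stack name

    def register(self, stack, proto, addr, port):
        key = (proto, addr, port)
        if key in self.owners:            # the OS must agree the port is free
            return False
        self.owners[key] = stack
        return True

    def rx_steer(self, proto, dst_addr, dst_port):
        # Direct an incoming packet to the stack owning its 3-tuple;
        # None means "fall through to the host stack".
        return self.owners.get((proto, dst_addr, dst_port))

    def tx_allowed(self, stack, proto, src_addr, src_port):
        # TX filter: a stack may only send from 3-tuples it registered.
        return self.owners.get((proto, src_addr, src_port)) == stack

sw = Switch()
sw.register("userstack-A", "tcp", "10.0.0.1", 80)
print(sw.rx_steer("tcp", "10.0.0.1", 80))                    # -> userstack-A
print(sw.tx_allowed("userstack-B", "tcp", "10.0.0.1", 80))   # -> False
```

The real system enforces the same invariant in the kernel, so a buggy or malicious user-level stack cannot spoof traffic from a 3-tuple owned by another stack or by the host.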
We provide namespace isolation through the standard 3-tuple, we have very high performance (I'll show some graphs in a minute), we run both on FreeBSD and Linux, and it's open source; I'll give you the link later on. Okay. So, to give you some performance numbers, the first thing we did is measure just raw TX of UDP packets. What we have in this diagram is the experimental setup; so this is TX. At the very top we implemented...
...essentially, it's not really a stack, it's just UDP, and on top of that we implemented a packet generator that sends packets as fast as possible. That connects to a packet ring connected to the switch, and what we can do is create multiple of these rings, and each of these rings can be assigned to a different CPU core, so we can scale the performance of this with multiple CPU cores.
L
These
packets
then
go
down
to
the
actual
kernel
module,
and
so
the
switch
and
the
kernel
module
will
make
sure
that
if
that
packaging
applications
said,
I
will
send
that
of
port
34.
Make
sure
that
that's
actually
the
case,
so
the
application
needs
to
register,
make
sure
the
port
is
free.
The
OS
agrees
that
it
is
and
then
goes
in
packets
and
the
module
will
check
that
it
keeps
up
with
a
registration
at
it,
and
then
it
goes
out
to
the
neck.
Okay, so that's the basic setup, and those are the results. With UDP we can do different packet sizes, and what this is showing is that, for any packet size of 128 bytes and bigger, we can saturate the line rate with a single CPU core, and for minimum-size packets, with two cores we already saturate the entire 10-gig pipe. So that's TX. Then for RX the setup is basically the same, but we're going up: instead of a packet generator application we're using a packet receiver application. And again we get...
...this sort of nice curve, where a single CPU core is mostly enough, for most packet sizes, to get to line rate. Okay, so that was a single stack and a single application. What happens when you add multiple stacks and multiple applications? You add more virtual ports, essentially, to that switch. On TX, what essentially happens is you still have this filter checking that each port is sending out of the 3-tuple it registered with, and we see pretty much line rate.
Okay, RX is pretty similar, and we get this graph: you can see that with 64-byte packets, as we add ports, the rate goes a little bit down. What happens here is that there's a reduced number of packets per syscall when there are more ports. Each time you do a syscall on a port you receive its packets; since the syscall has a certain overhead, your rate goes a little bit down, but you still get 10 gigs with those few ports.
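The effect described here (more ports means fewer packets per syscall, so slightly lower aggregate rate) can be captured with a toy cost model; the per-packet and per-syscall costs below are invented placeholders, not numbers from the talk.

```python
# Assumed, illustrative costs; real values depend on the OS and hardware.
PER_PACKET_NS = 60.0      # per-packet processing cost
PER_SYSCALL_NS = 400.0    # fixed overhead of one receive syscall

def rx_rate_mpps(total_batch, ports):
    """Model: total_batch packets arrive per polling round, spread over
    `ports` ports, and one syscall is issued per port per round."""
    time_ns = total_batch * PER_PACKET_NS + ports * PER_SYSCALL_NS
    return total_batch / time_ns * 1e3   # packets per nanosecond * 1e3 = Mpps

for ports in (1, 4, 16):
    # The modelled rate declines as ports grow, since the fixed syscall
    # cost is amortized over fewer packets per port.
    print(ports, round(rx_rate_mpps(256, ports), 2))
```

This is only meant to show the shape of the curve in the slide, not to reproduce its absolute numbers.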
Question: You mentioned the packet generator; is this UDP or TCP?

This is UDP; that's why we have the packet sizes. Right.
So, the summary: these are basically the numbers from the graphs, which are not always easy to read from the graphs themselves. It's four gigabits per second for minimum-size packets with a single CPU core, 10 gigs for all packet sizes with two CPU cores, and 10 gigs for 250-byte packets with a single CPU core. Okay. So, yeah, as I slightly mentioned, so far this was a very dumb UDP stack.
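For reference, the line-rate targets behind these numbers follow from simple Ethernet framing arithmetic: on the wire, every frame is accompanied by 20 bytes of fixed overhead (7 bytes preamble, 1 byte start-of-frame delimiter, 12 bytes inter-frame gap) in addition to the frame itself.

```python
def line_rate_mpps(frame_bytes, link_bps=10e9):
    """Maximum packet rate, in millions of packets per second, for a
    given Ethernet frame size on a link of link_bps bits per second."""
    wire_bytes = frame_bytes + 20          # preamble + SFD + inter-frame gap
    return link_bps / (wire_bytes * 8) / 1e6

print(round(line_rate_mpps(64), 2))    # minimum-size frames -> ~14.88 Mpps
print(round(line_rate_mpps(250), 2))
```

This is why "saturating 10 GbE with minimum-size packets" is the hard case: it means sustaining nearly 15 million packets per second, while larger frames need far fewer packets for the same bit rate.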
L
So
what
we
did
is
we
took,
or
we
call
a
micro
TCP,
which
is
a
very
simple
TCP
stock,
just
minimalistic
TCP
stock,
and
we
built
a
very
small
HTTP
server
on
top
of
that
and
that's
the
graph.
Basically,
we
compare
this
against
injects
with
with
TSO
with
the
standard
OS
stack
because
remember
we
can
run
our
user
level
stack
on
the
same
system
as
the
whole
stack.
...so we can do nice comparisons. What you can see there is that the bar all the way on the right is our bar, the micro TCP, and we can outperform the host OS stack. This is not a big surprise, because our stack is doing much less work. The point is not so much to say that we're beating the standard TCP in the host; it is to show that we get pretty good performance, in the multiple gigabits per second, even with an HTTP...
...server running on top. I think this is something like a hundred thousand requests per second, which is a decent number given that we have a sort of low-end server. Okay, so, to conclude: I presented MultiStack, which is OS support for user-level stacks. The idea is to be able to experiment with, and try to deploy, new transport protocols in a more timely fashion. I hope I convinced you that maybe it's possible. This is, of course, a research project.
Question: A couple of questions. Number one is: did you consider a potential performance hit from the fact that you have to run multiple stacks? In your experiments, was there just one app, maybe multi-threaded, or is the OS supporting any number of applications running stacks, and what performance hit does that have for you or for the OS?
Right. So the stacks in these experiments were always the same, but they were completely independent from each other; so they had the same weight, but, you know, the experiment shows what the hit might be. Of course, the unrealistic part is that the stack we were using is not a very heavy stack; it was a minimalistic one.
Comment: [unclear]. So the multiple instances of stacks, actually, we think, is a feature, not a bug. You actually get a lot of advantages, like isolation, resource accounting, and also cache locality. So, to answer the previous person's question, we think this is an advantage. But one thing I don't know...
No, I agree. I don't think... I wouldn't recommend that any of you take the TCP stack out of the kernel and sort of adapt it and put it on top of this, because that's not a trivial port; and, on top of that, who would maintain it afterwards? You can get a nice research paper out of it, but who's going to maintain it, right? What's nice about...
When I said "who would do this": it's Google, right? But anyway. By the way, [unclear]. I wonder how much of your benefit comes from netmap: even if I just put this on top of raw sockets and go through the kernel, without any special hardware support. Do you have a guess how much, or whether you get most of the boost from netmap?
Jana Iyengar: Hi. I love this work; I've seen the paper, when it came out a while ago, and it's excellent. I think this is definitely a general direction in which the industry is moving; there are very strong clues in it that it's happening. But the way this work is couched: every time I read it I find it's smooth sailing, and then I hit...
...the solution. The solution, however, seems to be something that allows for rapid deployment on the server side, which is not the critical block right now. The blockages are largely happening in client-side operating systems, which we can't push forward; and, to that extent, TCP as a protocol does figure into that problem: the extensibility of TCP is limited, and changing TCP is hard because it's inside the operating system on the client.
Right, yeah, so we've had these discussions internally; you've hit the nail on the head. For sure, one of the problems is that you require this kernel module, right, and, as you say, what should be targeted is not servers or data centers, where we've got control of stuff anyway; you should be targeting clients, right: Windows users, Linux users, on laptops, whatever. And for those, they would need netmap running on them, and they don't have that, right.
I do like that this work is being done in the public, and I think it's an excellent use of netmap as well. So I would love to see more work that looks at UDP performance (sorry, your paper evaluates UDP, yeah), because, not just QUIC, there are proposals and ideas that people are now thinking about for building transports on top of UDP. And you said that the client-side operating system was a block; I'd say that it's not just the client side...
...it's really the middleboxes that are blocking. Even Windows 10 now has upgrades every year, so you'd be able to ship kernel upgrades over there, but nobody's going to be updating those boxes in the middle, so we have a long tail there. And I would love to see more work on sort of optimizing server-side and client-side performance for UDP-based transports; that work, I think, would be fantastic to see happen. Yeah.
You're absolutely right, and in the paper we didn't go into that argument too much, because middleboxes and UDP don't like each other too much. But we do use UDP a lot, especially in performance work, because it's deterministic: we can specify packet sizes, whereas TCP is a much more complicated beast when you're trying to tease out what the underlying performance is. Thank you.

You're welcome. Chris?
Yeah, so in the paper we use the term "well-designed extensions" to mean not just anything you want to put out there, because the middleboxes will drop a whole lot of things. But yeah, you're absolutely right, and this is one graph; there's an IMC paper from a few years back that explains a lot of the effects, including the points you're making.
When you go to the client side, yeah, there is another problem: when we do a user-level stack, that means that the stack is now a library inside the application, and that means that the path to update it is by updating the application.

Yeah, though some applications do get updated pretty frequently.

Yeah, but a whole lot of them don't.
Comment: [unclear]. What you're presenting is funny to me, for the group as a whole. Right now you're describing having this user-space stack, which I'm one hundred percent in favor of; at least, with questions like the previous person was mentioning, on having potentially the stack built into your application, and maybe being able to plug it in or not. But basically what you're doing is you're defining a type of API underneath you, and in your case you're pushing netmap, or whatever it is...
...DPDK, or whatever it is; you might imagine having some library that [unclear] has support for netmap, or, you know, something similar, and it has its own stack and uses that. In the community, at the same time, we're working on things like TAPS, right, where we're trying to abstract, from a higher level above, what we're providing underneath. So we've just divided our API into two, and carved ourselves a little spot, a little box, that we now try to make interoperate with the rest of the network.
That has the issue of not allowing the protocols to evolve, right, and we're basically fighting on three fronts: underneath, above, and, basically, to the network side, which, I don't know... It's a funny puzzle that we're trying to fight against, which presents a good opportunity for the community, but a big fight, right, because everybody in the different layers has their own interests and things that they point towards.
Absolutely; it's completely a tussle. I mean, the stuff we work on is only on two of those fronts, which is either compatibility with the sockets API, or high performance with some other packet I/O framework such as DPDK or netmap. And pick one: what do you want? Do you want compatibility, or do you want speed? We pick speed, but then we're not compatible anymore and we need to chase the applications. And what do you do? That's why I was suggesting...
...maybe you can do some sort of abstraction layer. That's one of the things I was suggesting, which is what you're working on; but then people have to comply with that abstraction layer. It's already better than our approach in terms of compatibility, but maybe you sacrifice a little bit of performance; maybe you don't, you know.
I think performance at the OS side will be okay; I mean, the client side is not going to have that issue, and in the server you can optimize the hell out of it, whether with custom hardware or software. But it is an interesting tussle. You're basically providing, in your case, weapons, right, so we can fight, and we actually now have a way to start deploying and figuring it out. In the case of Google, say, they have their own stacks, controlling both sides, and try to figure it out in the middle.
Jana Iyengar again. I said something earlier suggesting, recommending, that you look at UDP performance; I wanted to ask you if you actually had, and I had a more specific recommendation. Have you looked at the difference, with netmap and this stack, in performance over UDP versus TCP, and how different they are? And, with respect to that, one thing: I'm assuming you're using some sort of TSO-style...
...offload for some of the TCP ones. Yes, yes; that's at the end of the presentation, but all the other baseline tests were UDP, yeah, right. So what I can tell you is, not in this work but in some of the other stuff, we build high-speed virtualized content caches, and we still use the netmap API for those, and I guess you must absolutely turn TSO on, otherwise...
So, in the interest of thinking about evolving protocols on UDP: yep, that is a limitation that still stands. So there's scope there for work that needs to be done in terms of figuring out how to reduce that cost: when you're doing offload, how can you get something that's equivalent to TSO for generic UDP-based transports?

Yeah, I completely agree. Thanks.
Brian Trammell: I think I've actually already told you, like a year ago when you were working on the paper, that I like this work; I still like the work. So I wanted to go back to one thing you said, about sort of UDP versus TCP middlebox interference.
It's actually way more complicated than that. I'd like to tell you to go grab your favorite time machine and come back to the MAPRG session yesterday, because we're doing some sort of active measurement trying to tease this out, and it's actually a lot more complicated: UDP and TCP are broken in different ways...
...by middleboxes in the internet. A second thing that I want to say: there's currently some work going on in the IP stack evolution program; basically, I guess, to go back to Nacho, we're another group of people who are going to use the weapons you're providing us, in this slightly different way. So I'd encourage you to actually take a look at that, to continue working on this work, and we'll be in touch. Okay.
That kind of works, right? Okay, so we have about, I guess, 40 minutes to talk about current and future hot topics in TSV, and what we'd like you guys to do is to think about: what are the hot topics you've been talking about with others in the hallways or in side meetings? Let the larger community know; or, what are the hot topics you would like to discuss in the IETF in future?
So, it's speed measurement, and I think that there's certainly an interaction between what is being developed for the future of transport and what we will ultimately measure for speed. I think we can figure out how to measure the speed of IP packet transfer, but the minute you go anywhere above that and start to have significance for user applications, it's a very complicated problem.
We've been working on this for a while; Matt Mathis, of course, and others have been tackling this for years. But we're on the verge now where the regulators are not going to leave us alone anymore, and we need to have something that we can all agree on here, that I believe is going to be useful and relevant and fair to the service providers. And it may mean that, in the end, there are multiple methods of measurement to do that.
Christian Huitema from Microsoft. You're asking what kind of interesting stuff we could do, what's popping up that is new. At the high level, what is very new is not new, but it's a new focus: the focus on latency. A lot of the work that was done in transport historically has been on the fair sharing of resources, and achieving a fair sharing of bandwidth in particular, but we did not really have a focus on latency.
Of course, one thing we are doing is the work on active queue management, but we have very little synchronization between the active queue management and the evolution of transport. For example, there was a message about FQ-CoDel explaining how deploying fair queuing in a router is going to actually pessimize your performance if a lot of your traffic is background traffic that would otherwise have slowed down.
Brian Trammell: Mainly I got up here because I wanted to yell "rat hole!" at Al, and then he took the rat hole and threw a bunch of lawyers in it, which makes it even more fun. And then Christian said something about latency, and then I remembered there was this workshop that ISOC did a few years ago about sort of reducing internet latency; yeah, ISOC and RITE put this together.
How do you measure the latency of an access network so that you can actually give service providers something other than the one-dimensional number that they can compete on? We got nowhere with it, and I think that might be a rat hole filled with rat holes filled with lawyers; but, going beyond sort of the basic pipe: step one is going beyond the basic IP transport measurement to, you know, something on top of that, and then step...
...two is then getting some sort of number that talks about the interaction between latency and load and capacity. I mean, there's been a lot of emphasis on reducing bufferbloat, and that's been good; but the idea that "some queue is good, therefore more queue is better, and therefore all the queue is the best", we've kind of managed to beat that out of the industry, or at least the part of the industry that listens to us. But it would be interesting to go into, okay...
...well, what are the other points along that curve, other than just what we can get through reducing queueing and being a little bit smarter about how we use it? I'm going to have to sit down and think about what Christian just said about making the transport evolution work better with the AQM. But I will note that there are a lot of things that we're thinking about, sort of, in the SPUD work, for those of you who are interested in it; we just did some new documents.
We posted the links to those to the spud and stackevo-discuss lists yesterday. There we're talking about, sort of, knobs that you can put on the traffic that are essentially the traffic expressing its preferences about the AQM thresholds it would like to see. So we're at the very beginning of that, but I don't think we have anything like... I don't even have consensus with myself as to whether or not the stuff we're thinking about...
Before you run away: when you said we had this latency workshop a while ago, a couple of years ago, I just thought we should actually go back and look at the report and see which to-dos we identified at that point in time. One to-do was, for example, ECN support; another one was AQM, and I think this was like the starting point of the AQM working group; and there were other to-dos as well.
Gorry Fairhurst: First, responding a little bit to Christian: I kind of agree with what he said, but I also don't believe that we do transports really for fairness. Fairness is just kind of a thing that comes about because we don't want starvation, we don't want congestion collapse; so it's more of a mechanism, and the FQ in FQ-CoDel is flow queuing, the idea being that we don't get starvation. And I...
...think these things are so important. One of the things we seem to be missing is combining things like diffserv and AQM with ECN, and putting this together as a package to try and solve some of the problems we're seeing other people propose much more exotic solutions for. I think we have a number of good techniques. And on the fairness thing: the idea of competing with traffic just came about in the past, but now we have better techniques, and we have ways of doing this.
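A minimal sketch of the flow-queuing idea mentioned here: packets are hashed by flow into separate queues and the scheduler round-robins across queues, so a heavy flow cannot starve a light one. This is illustrative only; real FQ-CoDel also runs the CoDel AQM on each queue and uses deficit round robin, both of which this omits.

```python
from collections import deque

class FlowQueues:
    """Hash each flow to its own queue; serve queues round-robin."""

    def __init__(self, nqueues=4):
        self.queues = [deque() for _ in range(nqueues)]
        self.next = 0

    def enqueue(self, flow_id, packet):
        self.queues[hash(flow_id) % len(self.queues)].append(packet)

    def dequeue(self):
        # Visit queues in round-robin order, skipping empty ones.
        for _ in range(len(self.queues)):
            q = self.queues[self.next]
            self.next = (self.next + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None

fq = FlowQueues()
for i in range(10):                 # flow 0 is a heavy sender
    fq.enqueue(0, ("flow0", i))
fq.enqueue(1, ("flow1", 0))         # flow 1 sends a single packet
print(fq.dequeue(), fq.dequeue())   # flow 1 is served on the second dequeue
```

With a single FIFO, flow 1's packet would sit behind all ten of flow 0's packets; with per-flow queues it waits behind at most one, which is the starvation-avoidance property being discussed.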
I would be super curious to hear if people think they have seen something super interesting in a non-transport working group. Let's make it a show of hands: who has visited a non-transport working group in the last two days? And BoFs don't count. Okay, not too many. Okay, we need to bring more other people into this meeting. Okay, so, last chance to go to the mic: three, two, one. So then we're done! Thank you very much.