From YouTube: IETF105-TSVAREA-20190725-1550
Description
TSVAREA meeting session at IETF105
2019/07/25 1550
https://datatracker.ietf.org/meeting/105/proceedings/
I: This is our agenda. Since we just started, you have a chance to weigh in on the agenda right now if you want. We have, as usual, a couple of slides about what we did in the last couple of months and what the working groups are doing right now. Then I have a point where I would like feedback from the community about where in the IETF to do multipath work, because it's coming up in different places and I want to make sure that people are aware and talk to each other; maybe you have better ideas than I do. Then we have a talk by Christian about congestion defense in depth, and finally we get an update from Robin, which is a remote talk. He presented some QUIC-based logging schemes last time, and he has pushed out two drafts and a lot of work, so we thought we'd give him some time to update us on what he did. Any questions about the agenda?
I: As always, we would like to thank our review team. We got quite frequent reviews in the last couple of months, and that's always very helpful. You can see it's not a huge load, but it's important to have those reviews in quickly, and yeah, some good feedback from these people. Thank you very much.
I: And this is also the usual slide, which kind of summarizes very briefly our view of the current status of each working group. The one thing to mention is that there is one working group less. Big thanks to everybody who worked on that working group and did the work in that group; we closed it successfully, and the RFCs are out there now.
I: Most of those things have happened already, because it's Thursday. The only thing still on tomorrow is MAPRG, and usually there are a couple of transport-related discussions and presentations there. But you can also go back and watch the recordings: there was ICCRG, with some interesting TCP-related and congestion-related talks, and there was something that touched transport quite a lot in Dispatch.
L: In general, let me make one point on the WebTransport thing. They also had a side meeting here, and it seems like the outcome of that side meeting is that they're going to think about maybe doing a BoF in Singapore. I don't know what area they're going to send that to, but if it doesn't end up in the transport area, even though it's got "transport" right in the name, we should make sure that there is encouragement for transport people to show up, because it definitely needs transport eyes on it during the process.
M: David Schinazi. To add to the WebTransport topic: we have just created a mailing list, webtransport@ietf.org. It's currently under the applications area, but, as Brian said, there will be transport components and more, so we'd love to hear from knowledgeable people; please send emails to the list.
I: Okay, that gets us to the point where I really need some feedback from you guys. We have a couple of efforts where something is happening with multipath, and it seems that it's spreading. There is of course MPTCP; however, they are basically at the end of their charter. They don't really have a new work item right now, and there are people interested in MPTCP in general, but it doesn't look like they have any standardization work to do soon. My big question is: should I close the working group, or is there another option? But we have also had a lot of discussion about making QUIC multipath-capable; it's even in the charter. It will not be in version 1 of QUIC, but it will come up again. There has been a proposal for multipath DCCP in the TSVWG working group, and of course there are other discussions related to multipath in PANRG and ICCRG.
I: The important point here is that I want to make sure that everybody knows what the other people are doing, that good thoughts are brought to the different groups and not only discussed in one part of the community, and also that, you know, we don't do the work twice: we solve the problems once and then can apply the solution somewhere else if applicable. And the other problem I really would like to address is that new people from outside, because these protocols are becoming more mature and more people are getting interested in them, know where to go: that they have a place to ask questions, that they know where this discussion will happen in the IETF, and that it's as easy as possible for them to find the right place. And I have one more quick slide, which basically breaks the questions down a little bit. So: do we need a new group in the IETF?
O: Could I suggest a non-working-group BoF for Singapore to go over all of these, possibly along the lines of the dispatch mechanism we use in other areas? At least for Singapore, a multipath interest BoF might be a good way to get everybody in one place and have a better discussion.
P: Right now, I think MPTCP could probably be closed. If there's any sort of maintenance stuff that pops up, you know, TSVWG or TCPM would sort of be the natural homes, right? I sort of saw the DCCP multipath proposal; I didn't hear about SCTP. Frankly, I don't really care about either of those two. However, unsurprisingly, I care a lot about QUIC. The main multipath work, right, that we're going to do is for QUIC, and in my mind at least, it needs to be done in the QUIC working group, because it ties in a lot with the protocol details. With TCPM and multipath TCP: TCP was already quite an established protocol, one could say, by the time we did multipath TCP, so I think it was okay to have it in a different working group. But doing multipath QUIC in a working group that isn't the QUIC working group will, I think, be very difficult. I would rather just give QUIC, you know, a third slot or something like that; or maybe we keep the second one and use it for this. We absolutely want to learn from what multipath TCP did; we don't want to reinvent the stuff. The problem I see is that there's a lot of architectural thinking that was done already before the IETF work started, in Trilogy and elsewhere, right? And so it would probably be good to figure out what the principles are that underlie multipath TCP and apply those to QUIC. The protocol mechanisms in multipath TCP are what they are because they had to be, because it was TCP and we needed to get through middleboxes. In QUIC we can hopefully do this a lot cleaner and simpler, but those principles, I think, would be valuable to figure out, if I could only remember what they were, and to see how we can apply them. So that could be something that ICCRG could do, or you could have a joint session, or a workshop, or something; I don't know if it's a BoF.
I: In theory, I would like to do it exactly that way; I think that's the right way. However, in practice, what we have right now is, for example, the problem that some of the TCP people don't show up in MPTCP anymore, and the problem that some of the core TCP people don't show up in QUIC anymore. So we really have to make sure that the right people show up at the right places, and I'm not sure if that's the perfect solution for it, but...
P: We just, like, let the bits go and got the underlying protocol correct first. I think this will now probably ramp up, especially with Jana's and Martin's work on the simulator, and then the qlog stuff that we're going to see, where you can actually do congestion control work with QUIC, because we now actually have a protocol that runs. So I'm hoping that it will sort of correct itself, but maybe I'm optimistic.
E: Spencer Dawkins. Mirja, thank you for bringing this topic forward. It seems to me that, to the extent that we can come up with principles: like, the point was not to do QUIC, right? The point was to be able to do QUIC and prove that we can then do other transport protocols, which may even be different variants and versions of QUIC that behave in different ways. So I think that trying to think about things at the level of what the principles were is really worth doing. And it seems to me that questions like how you pick which paths to use for which packets are maybe not tied to one protocol, so maybe, Wesley, there are things like that that you can spin off. If you ask about cross-area possibilities: we've had conversations in the IETF for a long time about applications that pop up that want to manage their paths themselves.
Q: Okay, so I've actually been doing multipath routing and transport to aircraft for at least 15 years; I published a paper back in 2004 at the IEEE Military Communications Conference, so I have a practical background in it, but not a lot published. Just going from first principles: it's the job of routing to concatenate links to create paths, and it's the job of transport to then use the paths with which it's presented. But if we want to do better than that, it might also be the job of transport to give routing a hint about the kind of paths that it might like, and it might be the job of routing to give transport a hint as to the kinds of paths that it was able to construct. So I think this is intrinsically a cross-layer topic, and it's not just that we need the different people to talk to each other.
L: The hat is actually kind of messing with my brain. It sounds like a multipath interest BoF is probably a good idea for Singapore. One of the outcomes of that could be: hey, look, it's the usual suspects in the room. If we hold what I think we'd call a multipath interest BoF, you're going to get a lot of routing people showing up, and that's a good thing, just to make sure that when we're talking about the principles, we're talking about the principles in the same way. As to where the work ends up after that: it's definitely in the transport area.
L: Somehow. But I don't want to have, in the mic line right here, the BoF that we haven't had yet. A reflection on sort of a difference between MPTCP and MP-QUIC, said a little bit less pejoratively: the way that TCP was made multipath was constrained by the design of TCP, in a way that the way QUIC will be made multipath is not necessarily constrained by the design of QUIC, just by the network. Okay, yes, it's constrained by the network.
L: Constrained, yeah; I agree that QUIC does not have the same set of constraints, right? QUIC has a much larger design space that it can explore for how it does multipath, and that in itself suggests to me that there does need to be some work, separate from QUIC, on sort of design principles: transport-independent design principles. I think that's work that we have to take on. I have no idea where to do it; I'd like to discuss that in a BoF in Singapore.
P: That's where I can jump in; Lars. I'm going to ask a really quick clarification question. When you say multipath, do you think it is in scope for a transport session to use path diversity that is deeper in the network than the first hop or last hop? That is, do I target multipath or multi-interface? Because multipath TCP originally tried to do multipath, but what we actually delivered is mostly multi-interface, and I think for QUIC the interest specifically now is multi-interface support, right?
R: MPTCP was right to be chartered as a separate working group, because there was a ton of protocol work to be done, and that's primarily what happened. We did not really discuss the congestion controller that much; it was a separate document, and that was fine, and you could replace it with whatever you want. That was effectively what the protocol document said, and I think that's appropriate. Similarly, if you think about QUIC as the next thing that we want to do multipath in, I agree completely with Lars.
R: The QUIC working group is where that discussion ought to happen. I don't think that we need to charter an MP-QUIC working group; what I'm trying to say here is that there is no requirement for an equivalent of the MPTCP working group. Outside of that, I actually don't know what is left to be done. If all we're talking about here, I mean, if you're talking about doing multipath DCCP, then again, this...
R: ...is all protocol-level work that we're talking about, not congestion control work or anything like that. I don't think it makes sense to have these efforts in a common place just because they have the word multipath in front of them, because the work that is ultimately being done there is protocol work. You want the protocol experts to be in that room: you want the QUIC experts in the room where MP-QUIC is happening, and you want the TCP experts in the room where MPTCP is happening.
I: I don't disagree, but I think there are things left that may come up more in the future, when people use these protocols even more: for example, congestion control, questions about scheduling and smaller optimizations, interface questions, which paths to set up and when, right at the start or not at all. And these questions are all very similar across protocols. Some of them might clearly be more in the research area than in the transport protocol work, but then, as you're standing there: do you think there would be space for another multipath research group in the IRTF? Because it's also partly covered by other groups a little bit, I mean.
R: There's room if you create room for it; you all get to call that, and it's not my place to say anything there, right? But again, I don't know how this overlaps. In terms of the congestion controller, that could go into ICCRG. The scheduling itself I would not touch with a ten-foot pole, because I've done work on that aspect of it, and it's difficult to nail down any particular approach; you won't come to consensus on anything meaningful there.
I: What I tried to express on this slide was that I want to make sure that the people who have the expertise, even if they've been working on a different protocol, actually come to the new working group where this work is done. And currently I see that this is often not happening, because we have too many places where we do the work. Even if they are different protocols and they each need work done, we have places for them; the problem is that we have too many places.
I: And the same basic problem applies to the second point on this slide: people who are interested in MPTCP, as soon as I close the MPTCP working group, will go outside of the IETF if they think we don't have a place for it anymore. So I want to make sure that they can look at the agenda, know where to go, and know where to talk to these people, because everybody is still around and open to talking to you, right? So those are the problems I have, yeah.
R: So I can speak to where the MPTCP follow-up work should happen. I have a very quick opinion, which is: I think TCPM and TSVWG are both wonderful homes for continuing work. That's what we've done in the past: SCTP, when its working group closed, moved to TSVWG, and that seemed natural, and here we have two potential homes and they both seem natural. But in terms of how to get the right people in the room: we've struggled with that in QUIC as well.
R: The loss recovery document: I've tried to bring it up in TCPM and present it, and I've done that several times, but we have only a few people engaged. Gorry is there, and Bob has been in all of it, but the full force of TCPM is not there, right? We have a handful of people who are in both groups, but you're not getting the views from a lot of the other people in TCPM. I don't know how to solve that problem.
I: I mean, even with QUIC, right, you have the same problem. We could have split up the QUIC working group into a QUIC HTTP-mapping working group and a QUIC core protocol working group, which could have addressed this problem, but maybe caused other problems, whatever. So a QUIC core group and an HTTP-mapping group, yeah: this could have addressed the problem we have right now. I don't want to change it, and I'm not proposing it, right? But this is what I'm talking about, because making the decision about who has to do the work and how to frame it is actually hard.
R: Yeah, in the multipath case, I honestly think that the protocol work is much more than anything that you do above it; I don't think there's a lot of abstraction work to be done. Like you said, the scheduling and the congestion controllers are independent. The congestion controller can go to ICCRG; I don't think we'll do anything useful with the scheduling. That leaves the protocol work, and that should belong in the protocol homes.
H: A few questions. Besides agreeing with Lars and the previous speaker, I believe that actually having a BoF is a bad idea, and the reason I say so is that it totally depends on your overall timeline for addressing this work. If you do consider this work to be used also inside other SDOs, essentially ATSSS from 3GPP, Release 17: practically you cannot get anything into 3GPP anymore before at least March next year, so forget about having any type of multipath QUIC there before then.
N: Colin Perkins, speaking as an individual. Just in response to that: I don't think that, if we have a BoF to discuss this sort of thing, it is going to make the slightest bit of difference to the timescale of the QUIC work. Anything to do with multipath QUIC protocol specifics clearly has to go to the QUIC working group. I will let Lars chip in on the timescale of the QUIC working group, but I would be surprised if everything is finished and ready for multipath work in that timeframe.
N: Can we generalize congestion control, most of the congestion control, across the transport protocols, or are the protocol-specific differences enough to cause problems? I'm not sure we've investigated that well enough to know the answer. Maybe we can; I don't know. Similarly, things like path-aware routing, ECN, retransmission scheduling: I think we perhaps need a conversation to figure out how general this can be and how much of it is unavoidably protocol-specific.
M: This sounds to me a lot like a premature optimization, in the sense that the reason we form working groups in the IETF is that we have people that don't have a home, and so we take those birds of a feather and help them flock together in a BoF to make a working group, so everyone is happy and sings Kumbaya. But in this case, I don't see anyone with a draft saying "I don't know where to put this."
I: I mean, again, I agree, but it doesn't address the problems I have. It doesn't address that the people with the expertise we need have to go to multiple groups and potentially have to say the same things over and over, right? And it doesn't address the problem that people who come from the outside don't know where to go, because they don't know the inner structure as well as we do.
M: Creating a new working group will not get people to show up. I work on QUIC and I will keep going to QUIC, and if we do multipath in QUIC, I will go there. If I don't go to MPTCP, I won't go to this other new working group either. At the end of the day, maybe I will choose to, depending on the drafts there; but I will go for the drafts on the agenda, not for the working group.
M: ...and then leave halfway through, and people do that all the time. For example, I started this particular meeting slot in another session, because I wanted to see that, and then came over here. We're all busy; there are conflicts, and that will always be the case. If you want cross-pollination, and it's been said before, you want to encourage people to review, and that's very hard; but adding process and working groups is not how you solve this. I don't know how to solve it, but I don't think this will help.
K: But basically, I mean, as was just said, there is no evidence that the people doing multipath are not participating in QUIC; they are. So there is no problem to solve there. There is something that could be said about asking ICCRG to look at the issue of multi-link congestion control from a theoretical point of view. That would be very interesting, because that's the kind of stuff that gets picked up; I mean, if I look at the work that QUIC did on congestion, that's what research teams do, yeah.
I: It might be that we just go on as we are, but, as I've said already, I also heard from people who would be interested in this, and I heard from people who thought that having a broader communication and discussion would be useful. However, I think the community needs to propose something here. So if you're actually interested in this, and you can scope it in a way that you think will make for a useful discussion, then please work on that. And now, Christian: you need slides, right?
K: The idea is that if you want to check something in to the Windows kernel, which I know well, well, there will be quite a process before you do that. And similarly, if you want to check something in to Linux, you can definitely check it in to your own branch and make your own copy, but if you want to have it in the main distribution, there will be some process; I mean, people will check.
K: "What have you been doing? Show me the RFC where that new algorithm is defined. Or, if there is none, then I want to see your simulation; I want to see something; I want to get faith in what you are doing." So there is effectively a gatekeeper that stands between you and deployment and imposes friction that limits the amount of crazy innovation you can do. And the thing is that application-level transports dissolve that; I mean, if you look at the QUIC interop spreadsheet...
K: ...clearly there is independence there, and they can do that because the way you ship a transport now is with an implementation as a library. So you do an application update; for example, if the Google folks want to update the transport running in Chrome shipping on Windows 10, they don't have to ask permission from Microsoft, and they don't have to wait for Microsoft to implement it in TCP first. They don't do that.
K
They
just
update
the
code
and
they
ship
it
is
there
and
Google
is
an
example,
but
you
could
see
any
kind
of
application
if
you
have
an
app
and
it
speaks
to
a
server
and
you
can
update
at
the
same
time
the
server
undercutting
your
up,
you're
good
to
go.
You
can
do
what
you
want
so
this
guy
here
the
gatekeeper
has
been
removed.
K: Okay, more on that later. So I would say that being able to do that kind of innovation is actually very good; there is no question about that. And in fact, what we have done by letting people do transport at the application level has opened new opportunities for development and research, and it's not like there is no need for that. I mean, we were mentioning multipath before, but there are also the open problems of migration, of estimating which link you go to, of how you mix real-time traffic into a non-real-time transport...
K: ...how you deal with loss independently of congestion, how you deal with the fading of radio links, etc., etc. All of these are problems that are partially solved in the transports that we have, but in fact there is work to do, okay? And because we have these application-level transports, if you are running a research lab, for example, you can tell your grad students: hey, take that and try. They don't have to recompile a kernel; they just have to recompile the app, and they can try.
K: You can run an evaluation of the algorithm in a single afternoon, so that's quite okay, and I'm sure that, because we have done that, we are going to see many more theses, PhD, master's, whichever, being written and worked on, and Lars was telling me that's actually something that is happening. So that's very good. Do not ever quote me as saying that I think it's a bad idea; it's a really good idea. Next slide. But there is this guy, okay? And assume that you are working not in a research lab...
K
You
are
working
for
some
kind
of
coma
company
and
you
are
developing
a
new
version
of
the
transport
that
you
are
going.
That's
going
to
go
between
your
application
and
your
server,
and
what
is
these
guys?
Gonna
tell
you,
let's
cut
you
Google,
you
beat
Facebook
no
back
to
work.
It's
absolutely
going
to
tell
you
something
like
that
or
some
variation
of
that,
okay
and
and
that
could
go
very
wrong
very
quickly.
I
mean
the
first
way
that
can
go
wrong
is
competitive
congestion
control.
K
So
we
know
that
we
have
seen
that
already
remember
the
time
where
brazo
were
competing
on
how
many
TCP
connection
they
could
open.
At
the
same
time,
so
that
they
could
have
n
times
the
congestion
window,
I
mean,
if
you
have
application
level
transport,
that's
much
easier.
Just
had
to
change
a
constant
in
the
code
to
get
the
same
effect.
K: Already you can see people doing it; I mean, that was actually in the Google QUIC code. There is a factor in the Google QUIC code that says how many cubic congestion controls you want to emulate, and you can compile with that factor. I think the default was 2, I believe, but you could set it to 4 or 16 if you wanted, okay.
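The "emulate N connections" factor described above can be sketched in a few lines. This is a generic MulTCP-style model of the idea, not the actual Google QUIC code; the function name, starting window, and loss pattern are illustrative assumptions.

```python
# Sketch of an "emulate N connections" congestion controller (MulTCP-style).
# Illustrative only: not the Google QUIC implementation, just the idea that
# one flow can mimic the aggressiveness of N parallel AIMD flows.

def run_aimd(n_emulated: int, rtts: int, loss_every: int) -> float:
    """Return the final congestion window (packets) after simulating
    per-RTT AIMD behaviour that acts like n_emulated parallel flows."""
    cwnd = 10.0
    for t in range(1, rtts + 1):
        if t % loss_every == 0:
            # On a loss, back off as if only one of the N virtual flows
            # halved its share: a much gentler decrease for large N.
            cwnd *= 1.0 - 1.0 / (2.0 * n_emulated)
        else:
            # Additive increase: each of the N virtual flows adds one
            # packet per RTT.
            cwnd += n_emulated
    return cwnd

single = run_aimd(n_emulated=1, rtts=100, loss_every=20)
aggressive = run_aimd(n_emulated=16, rtts=100, loss_every=20)
# Under the same loss pattern, the N=16 controller ends with a far larger
# window, i.e. it grabs a much bigger share of a shared bottleneck.
```

Changing one constant (`n_emulated`) is all it takes, which is the speaker's point about how cheap this becomes once the transport lives in the application.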
K: And someone is going to invent a super-fast version of a new TCP that is not bothered by these stupid losses in the network and goes as fast as it can. I mean, they'll do that; I think they already did, actually. But now they can put that in their app and ship it, and that is what competitive congestion control looks like.
K: That's not even the most interesting case. Suppose that you are managing your bottleneck and you do some smart sensing, which we can do, and it says: damn, I'm competing with a cubic connection, and they are hurting my delay-based congestion control. What can I do? Well, I know the specifics of cubic; I know what I can do. I can send a spike of traffic, very briefly, and then go on doing what I was doing before. I know the spike of traffic will have caused a loss for the cubic connections at the bottleneck.
K: Now, that works against cubic. You can absolutely do it to BBR too: you look at the technique BBR uses and how it probes, and you know that it will be sensing at one specific point in the flow. At that precise point, you send a spike of traffic, and they're going to back off for six transmission windows. That's great: for the next six transmission windows, you get the network for yourself. That's what I call adversarial congestion control, and I think that's a new area of research in which we can find many PhDs.
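As a rough illustration of why a brief spike pays off against cubic: RFC 8312's window function gives the time K a cubic flow needs to climb back to its pre-loss window. The attack framing is the speaker's thought experiment, and the numbers below (a 100-packet window, a 100 ms RTT) are assumptions for illustration only.

```python
# How long does a cubic flow stay depressed after one induced loss?
# Uses the standard constants from RFC 8312: C = 0.4, beta_cubic = 0.7.

def cubic_recovery_time(w_max_pkts: float, c: float = 0.4, beta: float = 0.7) -> float:
    """Seconds for cubic to climb back to w_max after a loss event:
    K = cbrt(w_max * (1 - beta) / C), from W(t) = C * (t - K)**3 + w_max."""
    return (w_max_pkts * (1.0 - beta) / c) ** (1.0 / 3.0)

k_seconds = cubic_recovery_time(w_max_pkts=100.0)  # roughly 4.2 s
rtt = 0.1  # assumed 100 ms round-trip time
rtts_suppressed = k_seconds / rtt  # roughly 42 RTTs below the old rate
# A spike lasting a fraction of a second can therefore suppress a competing
# cubic flow for tens of transmission windows.
```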
K: But basically, if you have an application that keeps winning over well-behaved congestion control because it is doing one of those smart adversarial congestion control things, what will happen is that the users of the other apps are going to be pissed off, because their application doesn't work, and they are going to complain. And if you are going to have one of those competitive things that emulates 256 connections, it will have some kind of effect on the network: on the bottlenecks between two networks, etc., etc.
K: Now, normally I don't expect that to have more than local effects, but with a little work and a good software update, who knows? We have already seen software updates bring down big companies, so it can very well happen. At that point you want to call back the master sorcerer, but he's not there anymore. So, hey, next slide. Who's seen A Streetcar Named Desire? Yeah. So that's this lady, Blanche: "I have always depended on the kindness of strangers."
R: After all that, thank you, Christian. Jana Iyengar. I think this is a very important question, and it's an important question because, as you point out, we are going into sort of a new space where we are going to have potentially an ease of development and deployment of congestion controllers, and that's going to change things somewhat. It's not clear to me how much, so let me be specific here.
R
You
might
remember
that
about
I
want
to
say
almost
15
15
to
20
years
ago,
video
applications
started
to
get
deployed
on
the
internet
and
they
were
deploying
their
own
condition
controllers
in
the
application
space.
Yes
and
again,
people
were
crying
murder
at
the
time
and
we
seem
to
have
survived
that
I'm,
not
saying
that.
R: As you pointed out, there's a gatekeeper, and I agree with you that there has been a bit of a gatekeeper in the Linux kernel. But when we're talking about congestion control on the internet, for the most part we're talking about content providers like Google, cloud providers, etcetera, etcetera. Now, not having something in the kernel has not stopped them from deploying whatever they want to deploy; not having something upstreamed into Linux has not stopped them from experimenting and deploying. Maybe as a good example...
R: ...there are measurements, there are various things that are done there. And I guess what I want to say is that the kernel has not been, in the past, the gatekeeper of this particular thing; it has been the people who deploy stuff. So yes, is Google a gatekeeper of this? Absolutely it is: it's a gatekeeper of what they deploy on their servers. Is Netflix a gatekeeper of this?
R
R
Yeah,
so
the
line
is
closed
for
now,
so
I
was
lost
in
so
I
I.
What
I'm
trying
to
point
out
here
is
that
thinking
about
the
kernel
as
a
gatekeeper,
I
think
is
a
fallacy,
because
the
people
who
are
gatekeeping
at
the
kernel
were
not
and
are
not,
experts
in
congestion
control.
This
is
a
long-standing
problem
and
the
second
thing
is
what
you're
talking
about
here
with
isolation
of
users
and
so
on.
I
agree
with
you,
that's
a
useful
strategy
for
doing
things.
It's
already
happened,
a
fair
bit.
Yes,
it's
what
happen.
Yes,.
P: So I used to worry about many of those same things, and I've since chilled out a lot, and there's some reason for it, right? One reason is: there are roughly two kinds of applications on the Internet. There are the ones that ship enough bytes to really do some damage, and then there are the ones that don't. So the long tail: what the long tail does, who cares, right?
P: I think that is actually now the protection that we have, for the most part. And then, to get back to what Jana said, we do actually see, over the last 10 to 20 years, that access networks sort of isolate users to an upstream link, so you can at best sort of harm yourself. And we saw that when, like, BitTorrent went down that path, right? Because the VoIP calls would die if somebody turned on BitTorrent, and that was not good, and then they fixed it; it was all self-interference.
U: Second, on a lot of those things: I've actually thought about this problem on and off for two decades. Matt Mathis; I'm sorry. There are a bunch of different mechanisms that defend the internet, and up until fairly recently, recovery code in TCP wasn't good enough, and if you pushed harder on congestion control, you took more losses and therefore ran slower, and so there was this built-in mechanism.
U
The recovery code has gotten good enough in the newer stacks and some of the newer protocols that this is no longer true. And yeah, I know, I thought about that too. I have actually run some TCP implementations in situations where they were sustaining above 50 percent loss, because cwnd was pegged at some large number, and they just ran happily along repairing all the losses. You can do this. But there are other things: there are topological constraints; access links are a small fraction of the core.
U
The things that worried me, I'll be perfectly honest: I am NOT on the BBR project, and I fretted about BBR v1 and some of the situations where it could get in trouble in the public Internet. And the telemetry that we had, that the BBR team was looking at very carefully, was whether or not, running BBR in parallel, we were being the bully: the BBR flows had lower round-trip times, which meant lower queue occupancy, and so there was a single metric that we were looking at.
V
I have experience with video conferencing, where RTP is the similar thing, because it's a transport that runs in the application layer. Now, in that case, in the beginning, you're talking about congestion control; there was no congestion control at all, because everybody tried to use as much as they could, and they had mechanisms inside to get over packet loss up to a certain percentage. So they didn't care much, and I said:
V
Maybe the other one would use a trick so that I'll be able to show that my product worked better. Okay, so I think it's a very important problem. By the way, this happened mostly in a safe environment, because in H.323 there was a gatekeeper: before you can start the call, you had to ask for bandwidth from someone. It was not enforced, but at least there was the process, and the application tried to abide by it. But the moment you leave it open to them to decide whatever they run:
V
They can do whatever they want. They wouldn't ask you what to do. Then, if they don't care what the network is about, and they can get over losses, why should they care at all? So I think it is important to have some enforcement for that, and maybe to have some proactive congestion control, instead of just waiting and either giving them the bandwidth or dropping packets, to do something about it.
S
Victor Vasiliev, so, a few thoughts. First, many of you might remember, I think about a decade ago, BitTorrent decided that they no longer wanted to use TCP and would switch to UDP, and the media, including I think the major press, raised a panic about BitTorrent switching to a super-aggressive congestion algorithm. I'm glad that never materialized, and this has basically continued.
S
So that's basically my take. But I do agree that if we deploy AQM more widely, this will be better, not because I'm afraid of collapse, but because it would mean people would be able to run congestion control algorithms that are more robust to stochastic loss which is not induced by themselves.
K
I see, that's a very important point. I mean, that's a point I should have made: the other big reason for deploying isolation between users is that it's a basic way to enable freedom of innovation for those users, and that's kind of the flip side of the story. If you only harm yourself, then you can try what is best for you; you don't have to be compatible with CUBIC, yeah.
S
The thing about BBR versus CUBIC competition is that, and I'm not sure how well understood it is, it's extremely parametric: as you scale your TCP buffer from like half a BDP to ten BDP, you will see either one or the other dramatically suppress the other's traffic. That was a big problem with BBR v1, but it worked kind of fine because most buffers were kind of in the middle, and BBR v2 has a lot of heuristics to make this work even better. So.
W
It's not the only safety that Google has. There's one called the bandwidth enforcer for Google-to-Google traffic, and another one, which I'm not sure has a public name, that serves the similar purpose for traffic facing the Internet. I'd like to point out that traffic out of Google Compute Engine has to run the gauntlet of both of them, and so it's well controlled. I think any responsible CDN or cloud provider is going to have an externally facing, global congestion controller of some sort, because otherwise your own mistakes will hurt you.
W
I know I'm not supposed to say that, but you know, I don't work there now, and it's in the minutes, but we did that, and the CUBIC traffic didn't stop, and that wasn't all bad. So that was also partly just the fact that they worked out to be somewhat more compatible, as is obviously the case.
N
Hey, Colin Perkins, with no hats. Roni mentioned RTP and video streaming earlier. I'm going to be a little cynical, and probably get fired as chair of the RMCAT working group for saying this, but I don't think RTP congestion control matters, on the basis that by the time you hit persistent congestion, the user has given up because the video is unwatchable anyway. I'm also tempted to say that all the traffic is video anyway; the on/off dynamics of MPEG completely break congestion control. Well, it doesn't matter, because it's all application-limited, and all from the large providers.
M
Hi, my name is David Schinazi, and I used to be a gatekeeper. So, these days I have this laptop, and using it I can check in code to Google QUIC and, as you say, with the magic of code reviews, could break things. Last year, at the same time, I had the same laptop that Eric had, which allowed me to check in code to the kernel, and actually now at Google QUIC we have tests, whereas there, there weren't any. So the gatekeeping is probably better in Google QUIC, I mean.
M
There was a network in Switzerland that was really unhappy with them for a while, yeah. So, on all your points about how this is getting worse: I really, really disagree. I saw some people pitch some things I personally think are worse: companies telling us that, hey look, we make all your traffic faster, where, when you looked deeper, it was just a thing in the kernel that disabled the congestion control that was in TCP. That said, while I completely disagree with your premise, I fully support your conclusion.
M
If you want to go around to router manufacturers telling them the world is about to end, please continue, so that we might be able to solve bufferbloat that way.
L
Actually, as it turns out, I want this slide back, because I didn't do anything except remind Christian that he had to do slides, so I have no idea why my name is on it. Despite my name being on it, I agree, actually, that there is a little bit of a problem; I think it's maybe a slightly different problem. I want to go back to something that some other Google guy said.
L
Security and reliability here are flip sides of the same thing: they're all essentially transmission safety problems, and the work that has been done in some networks to isolate users, or isolate processes from each other, or to isolate good traffic from evil DDoS traffic, is all kind of the same work, to the extent that putting better AQM in the Internet makes that work better.
K
Well, thank you for your attention. I accept the point that we have many more defenses today than people would think, and that's great. I accept that it will be very hard to do something more aggressive to the Internet than CUBIC, as we know it, and so we should not panic, but we should do isolation, because it's a good thing. Yeah, okay.
X
Right, great. So, a quick recap on qlog for those who are not into qlog yet. Next slide, please. So the main motivation is this: we want to help debug QUIC implementations. It's difficult to make these kinds of visualizations and tools just using packet capture files, because they don't contain things like congestion control information. So skip the next slides and then go to the next slide, the next one. So, what we proposed to help with that: it would make tooling development much easier if everybody would just print the same type of log.
X
That format we of course call qlog. It's a relatively simple scheme based on JSON, because we want to make web-based tools, and we wanted it to be human-readable for people not using our tools. So skip the next slide. And so, we presented that at the end of last year, and, a bit of a surprise to me, a lot of people seemed quite enthusiastic about it. A lot of people had some interest; among those was Mirja, and she said: why would you do this just for QUIC and HTTP/3?
X
What we currently have is two different drafts, two documents: one that we're calling the high-level schema, which is basically the protocol-agnostic, general-purpose part of what we're doing, and then the other draft is the QUIC-specific stuff. The second one is very easy to understand, right: QUIC has packets, it has...
X
You can see it is kind of the metadata that helps you interpret the events on the right side. So the simplest form is that we have a title and a description of the current file, but the most important field there is the protocol type; as you can see, its current value is QUIC_HTTP3.
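To make that shape concrete, here is a minimal sketch of such a top-level structure in Python; the field names (title, description, vantage point, a protocol type of QUIC_HTTP3) are assumptions reconstructed from the description above, not a copy of the draft.

```python
import json

# Hypothetical minimal qlog top-level object: file-wide metadata
# (title, description) plus the protocol type that tells tools how
# to interpret the events that follow. Field names are illustrative.
qlog_file = {
    "qlog_version": "draft",
    "title": "example trace",
    "description": "a hand-made qlog file for illustration",
    "traces": [
        {
            "vantage_point": {"type": "server"},
            "configuration": {"protocol_type": "QUIC_HTTP3"},
            "events": [],  # event rows go here
        }
    ],
}

print(json.dumps(qlog_file, indent=2))
```

The point is only that the metadata lives next to the events, so a tool can decide how to parse the event stream from the protocol type alone.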
X
That means the events in that part of the file are from the other document. But if we would expand this to more protocols, next slide, we would change the value of that field, and suddenly you would have, for example, TCP or HTTP/2. This was of course kind of the high-level approach, which turned out to still leave us something protocol-agnostic while working on the QUIC-specific thing. I'm going to give a little bit more detail on that, just some examples. Next slide.
X
So, in the high-level schema, for example: some people were worried about the overhead of JSON, so we have some file-size optimizations, so that you can skip logging the same value in the same file multiple times. We also like to make this very flexible; so, for example, we've already worked with Facebook. They feel that they have a lot of custom events specific to their implementation. We support that: they can also log those without a specific schema.
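One way to picture the file-size optimization mentioned here is to declare the per-event field names once, up front, and then log each event as a bare array, so the repeated JSON keys are not written for every event. This is a sketch of that idea; the field and event names are illustrative, not normative.

```python
import json

# Declare the column names once at the top of the trace ...
event_fields = ["relative_time", "category", "event", "data"]

# ... and then each event is a plain array, avoiding repeated keys.
events = [
    [0, "transport", "packet_sent", {"packet_number": 0}],
    [13, "transport", "packet_received", {"packet_number": 0}],
    [13, "recovery", "metrics_updated", {"cwnd": 14720}],
]

def as_dicts(fields, rows):
    """Expand compact rows back into per-event dicts for tooling."""
    return [dict(zip(fields, row)) for row in rows]

for ev in as_dicts(event_fields, events):
    print(json.dumps(ev))
```

A tool reads the header once and zips it against every row, so the wire format stays compact while consumers still get named fields.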
X
The bottom one is, I think, one of the most interesting things: we want to do end-to-end logging. So you have not just the client and server, but you also have several in-network intermediaries that you want to log. If you do that with pcaps, you would probably end up with four individual files without much context, and it would be difficult to bundle them together. So one of the things that qlog does is allow you to aggregate different traces into the same qlog file and then provide some context.
X
So, for example, here we have the vantage point, saying from which point of view this trace was taken. This can also allow us to add different protocols into the same file. For example, let's say you have an HTTP/3-to-HTTP/2 setup with two implementations: you would have HTTP/2 traces from a user-space implementation and then TCP-level traces, maybe from the kernel, and you could combine them in the same file to nicely keep everything together.
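The aggregation idea just described can be sketched like this: several traces, each tagged with its own vantage point and protocol type, bundled into one file instead of four loose captures. The helper and field names are again assumptions for illustration.

```python
def make_trace(vantage_type, protocol_type, events):
    # One trace = one observation point plus its own event stream.
    return {
        "vantage_point": {"type": vantage_type},
        "configuration": {"protocol_type": protocol_type},
        "events": events,
    }

# A user-space HTTP/2 trace and a kernel-side TCP trace from each
# endpoint, kept together in a single qlog file with context,
# instead of four individual pcap files.
bundle = {
    "qlog_version": "draft",
    "traces": [
        make_trace("client", "HTTP2", []),
        make_trace("client", "TCP", []),
        make_trace("server", "TCP", []),
        make_trace("server", "HTTP2", []),
    ],
}

print(len(bundle["traces"]), "traces in one file")
```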
X
If you want to share that with a colleague, you would have to take a screenshot right now, right? So instead of doing that, we say: let's just embed those settings into the qlog file as well, so you can just share the adjusted qlog file with the colleague. They can open it up in the same tool and view the exact same data that you were looking at before.
X
So those are just a couple of examples of what is in the high-level schema. Next slide. In terms of the event definitions, we currently have about 40-50 events; I think it will be about twice that by the end. Some of the most important ones are, I think, of course, the ones that are not in the typical packet capture, so the things related to congestion control, or when packets were lost, that kind of stuff. But of course, next to that, we also log your typical packet-tracing-type events. Next slide.
X
Using this, we kind of noticed that it's not enough in all use cases. For example, we had this case where we clearly saw the acknowledgement frame coming in, and after that it was still logging a packet lost for one of the just-acknowledged packets, which is very weird; that should never happen. And we found out that it was an optimization, where we parse the ACK frames but we don't immediately process them. So there was sometimes kind of a long time in between those two, and it was time enough for the loss event's timer to fire. So it would come in and declare the packet lost, even though the ACK frame just wasn't processed yet. So the only way to really solve that, next slide, is to add yet another event, a more explicit event specifically for packet acknowledgements, because the receipt of the ACK frame and the packet being acknowledged might not be directly correlated.
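The distinction being drawn here, receiving an ACK frame versus actually processing the acknowledgement, can be sketched as two separately timestamped events. The event names and timestamps below are hypothetical, chosen only to show why one event is not enough.

```python
# Two explicit events: one when the ACK frame is parsed off the
# wire, one when the acknowledgement is actually processed. If only
# the first existed, a loss timer firing in the gap between them
# would look like a spurious packet_lost in the log.
log = []

def log_event(time, name, data):
    log.append({"time": time, "event": name, "data": data})

log_event(100, "frame_parsed", {"frame": "ack", "acked": [5]})
# ... deferred ACK processing: the loss timer could fire here ...
log_event(180, "packet_acknowledged", {"packet_number": 5})

gap = log[1]["time"] - log[0]["time"]
print(f"ACK parsed at t=100, processed at t=180, gap={gap}")
```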
X
So that's one example. I recently did a longer talk with a lot more information, which you can find in the YouTube link at the bottom of the slide. So, as you can see, we've made a lot of progress on the text. I'm happy to say we also made some implementation progress. Next slide. We currently have five different QUIC stacks that are outputting qlog directly.
X
What I forgot to say on the last slide is: you might end up with 80-plus events, but I do not expect everybody to implement those 80 events. Most people will probably have enough with about 20 of those events. And so a lot of these stacks are outputting qlog, but they're definitely not outputting every possible event; they're outputting the main basic set that is supported by our current tools. Those tools are currently not really extensive; right now we mainly have one that is being used.
X
Next slide. So the nice thing about having other people look at this is that we've been getting some feedback from other implementers. It seems that for a lot of people the overhead actually is quite okay. Facebook reports that they're logging 20 billion QUIC events per day. I don't think they're actually logging qlog events at that scale yet, but it's nice to know that it's possible, that it's a possible way forward.
X
Someone mentioned that they would like to use qlog as a way to implement tests. So, instead of augmenting the code with unit testing, they can just look at what was happening in the qlog and try to decide whether the test failed or succeeded that way, which I think is a very interesting use case. And then the last sentence there was very interesting for me, because when we started this, I asked people: why isn't anybody doing this yet? Why doesn't this exist? And a lot of people said:
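That testing idea amounts to asserting over the event stream after a run, rather than instrumenting the implementation itself. A minimal sketch, with made-up event names and a made-up pass/fail criterion:

```python
# Events as they might appear in a qlog trace after a test run
# (names and data are illustrative).
events = [
    {"event": "packet_sent", "data": {"packet_number": 0}},
    {"event": "packet_received", "data": {"packet_number": 0}},
    {"event": "packet_sent", "data": {"packet_number": 1}},
    {"event": "packet_acknowledged", "data": {"packet_number": 1}},
]

def count(events, name):
    return sum(1 for e in events if e["event"] == name)

# Example criteria: no packet was ever declared lost, and the run
# produced the expected amount of traffic. The code under test
# needs no test hooks at all; the trace is the test interface.
assert count(events, "packet_lost") == 0
assert count(events, "packet_sent") == 2
print("test passed based on the qlog trace")
```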
X
So each vertical rectangle is one QUIC packet, and I really like this visualization, because it immediately makes clear the differences between the different prioritization approaches, right? And yet it only took me about a day to create this visualization based on the qlog format. So it's a lot of profit for not a lot of work. The bottom one was even faster: this was implemented by one of our PhD students in about two hours, and really helped us to debug the prioritization implementation as we were going, so I really like that.
X
qlog enables this kind of quick-and-dirty, fast visualization that helps implement some of the complex systems as well. So we've got a lot of good feedback. We've also got a lot of what I would call constructive feedback. Next slide. So there are also people that say that there is some overhead. I'm a bit ashamed to admit that our own implementation gets very slow if we enable all the qlog logging, but that's because we're using JavaScript and a very crappy logging framework. But still, there is a cost.
X
Some people still don't like JSON. Some people's implementations aren't fit to output qlog as we have it right now. And then one of the most important ones, the one at the bottom: even besides the possible performance overhead, there is also a certain maintenance burden in adding this logging. Now, when something changes in QUIC, you don't just need to update your business logic; you also need to update your logging code, and so they would rather wait until things settle down a bit.
X
I think that's quite ironic coming from people implementing QUIC, right, but it's still a very good point to make. So, next slide. We've seen that a lot of stacks that have these reservations right now are still trying to move towards qlog. They mainly do that by using what we're calling converters: they have some kind of internal logging format, and they write a small program that transforms it into qlog so that they can use the tools. This works just fine; I think it's a good approach.
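A converter in this spirit can be tiny: read the stack's internal log lines, map each to a qlog-style event, and emit one file. Everything here, the internal format, the mapping, and the event names, is made up for illustration.

```python
import json

# Hypothetical internal one-line-per-event format: "time kind detail"
internal_log = """\
12 SENT pn=0
25 RCVD pn=0
30 LOST pn=1
"""

# Map the internal event kinds onto qlog-style event names.
KIND_TO_EVENT = {"SENT": "packet_sent", "RCVD": "packet_received",
                 "LOST": "packet_lost"}

def convert(text):
    """Transform the internal format into a qlog-like document."""
    events = []
    for line in text.splitlines():
        time, kind, detail = line.split()
        pn = int(detail.split("=")[1])
        events.append({"time": int(time),
                       "event": KIND_TO_EVENT[kind],
                       "data": {"packet_number": pn}})
    return {"qlog_version": "draft", "traces": [{"events": events}]}

print(json.dumps(convert(internal_log), indent=1))
```

The stack keeps its native logging untouched; only this small standalone script has to track the qlog format, which is exactly the maintenance trade-off described above.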
X
So that's about what I had to say. Next slide. Of course, I'm a bit biased, but I'm very happy with what we've been able to do in the year since we started. I'm very happy with all the support from the community; thank you, everyone has been great. However, next slide, there remain some serious open questions.
X
One of the major pieces of feedback from last time was that we need to think about privacy and security. Sadly, that is one of the areas that has, you know, seen the least amount of progress since then: one, because I'm not an expert on that, and two, because we haven't really had a lot of feedback on that.
X
So if you have opinions on that, and ways that we can integrate this into the format, please let us know. And then, it seems like this is working quite well for QUIC, but that doesn't mean that it will work as well for other protocols. So that's maybe a discussion we still need to have. Next and last slide.
X
So maybe we don't have to have all these discussions right now; we also don't have time. So please join us on GitHub. We also have a brand new qlog mailing list; thank you for that, Mirja. I'm also planning on launching the QUIC tools info website, which is going to gather all the QUIC tooling that exists, not just ours, but everything that other people have made as well, to work as kind of a central hub.
X
If you want to go further than that, feel free to add qlog to your QUIC stack. It would be fantastic to have someone try to do this for other protocols as well, if you're able to do that. And of course, we surely shouldn't be the only ones making open-source visualization tools, so feel free to do so.
S
I have to say sorry: I've not had time to make the Google QUIC tracing compatible with qlog; we're working on it, but we still have time. I wanted to note, about the extension from QUIC to other protocols: our old, very old logging format was designed to be compatible with TCP and QUIC, and it didn't make anyone happy, neither TCP nor QUIC people, because it turns out that, while the protocols are similar at some level, they're also substantially different in many important ways.
R
So the main thing with all this is: it's great to see you continuing to do work on this, and I will soon be starting to work on integrating quicly with qlog as well, so I hope to be engaging with you on that soon. In terms of the question of which working group, etc.: I honestly think that the work, as it's going along, is great.
R
I think that there should be just more iterations on how people are integrating, and then, as people start using traces, which hopefully will happen as people start to do more performance testing, and again, we are trying to get the QUIC implementers to do more performance testing shortly, there will be evolution at that point. And I think there's an organic engagement that's happening right now, which is wonderful.
L
Let's see what we can do about Singapore, Robin. Thank you for this; thank you for giving the talk at 11:00 p.m. As somebody who really encouraged you to bring this forward, I'm kind of shocked and amazed at how far this has come in one meeting cycle. This is really, really cool. I will reiterate what Jana said: given the amount of progress that's going on, sort of, on the code, naturally:
L
How can we generalize this to, sort of, transport development logging stuff later? I actually had a question about the privacy and security stuff, but since we're kind of short on time, I will take that offline with you. Again, thank you very much. I'm kind of shocked and appalled at how awesome this is. Thanks.
I
Thank you very much, Robin. Unfortunately, we will close this meeting with some sad news. Sally Floyd is in the hospital, and many people in this room knew her very well, I guess. So we will ask you to come to the front and take a picture for her, and we will send it to her as a memory. Thank you.