From YouTube: IETF109-TSVWG-20201118-0730
Description: TSVWG meeting session at IETF 109, 2020/11/18 07:30
https://datatracker.ietf.org/meeting/109/proceedings/
C: I was raising my hand to volunteer; I can do it. I'd be happy to have assistance. I see Andrew offering in the chat, so yes, please. Great, and thanks, Brian, also.
C: The link that I saw on the note-taking tool from inside the Meetecho here is pointing me to something that is new for this session, right. So that's notes-ietf-109-tsv2, which I'll put here in the chat, just to clarify.
E: So I might get it wrong as well, right. There are separate strings for the Meetecho session, because it's two sessions, but the agenda page actually links back to the same notes for the two sessions. That's actually probably something we should talk to Meetecho about, for the second time, but let's use the link that I posted, so they're all in the same place, just to keep you all from having to merge things. Okay, I will.
B: Started, yeah. So I think we've already spent time wrestling with tools here, so I think we should probably just jump into the agenda. Today we have Greg on the operational guidance, then Bob on the other L4S drafts, and if there's any time left, there's some L4S and 5G work that Ingemar will discuss. So if there's nothing else, or any agenda bashing people want to do... I think we'll go straight to it.
H: In that case... In particular, it provides guidance for operators of hosts that would like to utilize L4S, and guidance to operators of networks, in particular networks that have deployed single-queue RFC 3168 bottleneck links; and then recommendations to researchers as well, on things that they could look at to try to understand the scope of the issue and any severity metrics that could be derived from that.
H: There was a comment recently on the mailing list from Jake and Pete Heist that perhaps this draft should also mention FQ bottleneck links that implement RFC 3168.
H: So, just to summarize situations where coexistence of L4S and classic traffic has been discussed on the mailing list.
H: In all these cases, L4S traffic and classic traffic coexist well, except for the one in red, where there's a problem that's been identified: classic flows will receive less throughput than L4S flows when they are coexisting in a bottleneck that is a single queue with classic ECN, and that's the focus of the draft. There is an asterisk on the FQ classic ECN case, getting back to Jake and Pete's comment, that there are scenarios in an FQ classic ECN bottleneck where we don't see fairness being enforced by FQ.
H: One of those is the case where you've got mixed traffic in tunnels that are going through the FQ. FQ is known to not provide fairness for the flows that are inside tunnels, and in this case, if you have a tunnel that has a mix of classic and L4S traffic, there's a further unfairness between the flows that are within the tunnel.
K: Can I... do you hear me? Yes, please go ahead. Okay, so this looks fine and very systematic, except that the first row in this table is pretty much under discussion. It says DualQ Coupled AQM provides reasonable fairness across a range of conditions, which is true, I admit. The problem is whether the range of tested conditions actually is sufficiently broad to allow extrapolation onto the wider internet, and I do not believe that this is currently true. That's just the point I wanted to make.
H: So, a little bit more detail on this specific issue of the classic ECN bottleneck.
H: And specifically, the L4S flows effectively aim for a higher CE marking rate than classic flows do, and so, when you've got a mix of the two flow types sharing a single-queue RFC 3168 bottleneck, the L4S flows will outperform the classic flows. So this really matters when you have multiple long-running, capacity-seeking flows, and the phenomenon appears to be worse in situations with moderate BDP levels.
H: So, on the order of 100 to 150 packets, give or take, seems to be where the problem is most pronounced; less so in low-BDP connections or in high-BDP connections. So for low data rate connections it's not as big of an issue, and it appears that, potentially, for the high-BDP case it's when CUBIC is operating in its cubic mode.
H: Maybe that's the reason why it's less impactful at high BDP. Okay, but in the cases where the impact is at its worst, it's been shown to result in more than a ten-to-one flow rate ratio, and there's an example of some of the data that's been shown on that. A number of experiments have been done over the past year or more on that, but you can see Figure 1 in that link for an example.
H: Next slide. Yeah, so the draft again has these sections on guidance for the different audiences.
H: The "operator of an L4S host" section breaks down the hosts into two categories. One is a host that would be deployed in a constrained set of networks, or serving a constrained set of clients, where there are opportunities for coordination and pre-launch testing across that constrained set of networks or clients, and the draft talks about what some of those options are. And, as I mentioned, this is a work-in-progress draft.
H: So there are some to-dos left in this section, and we would invite input from the working group on more details that can be added in these sections, as well as others.
H: So, in particular, one thing that seems potentially useful in the case of the CDN or ISP servers is to do some pre-launch testing to assess the presence or absence of these bottlenecks, and it would be nice to have some specific tests called out, or specific requirements around those tests, in the draft, to give guidance for those who might be interested in deploying hosts.
H: The second kind of sub-bullet under section 3.1 is in-band detection and monitoring. The L4S ID draft has some mandatory features that hosts need to support to facilitate this: to attempt to detect RFC 3168 in the bottleneck, and then the server could fall back in real time. So when it detects the potential presence of RFC 3168, it immediately discontinues the L4S congestion response and utilizes the classic congestion response.
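As a rough illustration of that real-time fallback (an editor's sketch, not the normative algorithm in the L4S ID draft; the class name, thresholds and feedback interface are all invented for the example):

```python
# Hedged sketch: a sender that starts with the scalable (L4S) congestion
# response and falls back to the classic response when the CE-marking
# pattern looks like a single-queue RFC 3168 AQM. Threshold values are
# placeholders, not values from the draft.

class EcnFallbackSender:
    def __init__(self):
        self.use_l4s_response = True  # scalable response until proven otherwise
        self.ce_rounds = 0            # consecutive round trips carrying CE marks

    def on_round_trip(self, saw_ce: bool, queue_delay_ms: float) -> None:
        self.ce_rounds = self.ce_rounds + 1 if saw_ce else 0
        # An L4S AQM marks at a very shallow delay target; sustained CE marks
        # arriving together with a deep queue suggest a classic 3168 AQM.
        if self.ce_rounds >= 3 and queue_delay_ms > 20.0:
            self.use_l4s_response = False  # discontinue the L4S response

    def cwnd_after_ce(self, cwnd: float, ce_fraction: float) -> float:
        if self.use_l4s_response:
            return cwnd * (1.0 - ce_fraction / 2.0)  # DCTCP-style reduction
        return cwnd / 2.0                            # classic halving
```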
H: An alternative would be to flag that path, or that host or client, as being potentially behind a 3168 bottleneck, and then discontinue using L4S for future connections to that host. And the second subsection in section three covers other hosts that are serving a much wider set of networks or clients, where that pre-launch testing may not be feasible.
H: Still, the in-band methods apply, with either a real-time or non-real-time response; of those, real time might be a little bit more feasible. There also could be some per-destination path testing that could take place in addition to the in-band detection and monitoring, and the draft has a placeholder for specific tests there. Again, feedback or input would be helpful to flesh that out, as well as the other components of this section.
D: Yeah, Greg: where do you expect the code that implements those tests to reside? Is that residing in the protocol being used by the servers involved and other hosts, or do you expect separate test functionality?
H: So the pre-launch testing seems to me to be separate testing code that would be provided on an L4S host, which could be used to detect whether RFC 3168 exists or not.
H: The non-real-time response would presumably be somewhat different code; more operational management code that could control whether or not L4S is enabled on a per-host basis. So I think it's a mix, but the draft certainly adds some detail on that.

Okay, just please make sure that...
M: So I was wondering, as a follow-up to David's question: what kind of granularity do you expect this to be done at? As in, per what? Destination prefix, per host, or what?
H: That's a good question; it probably depends a bit on the network. One scenario that has been highlighted on the mailing list is a residential ISP, where the individual customer, so a single IP address, may have an implementation of a single-queue RFC 3168 bottleneck in their network, and so the results of this testing could be different on an IP-address-by-IP-address basis.
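Since the granularity question came up, here is a minimal sketch of what the non-real-time, per-destination option could look like at IP-address granularity (the cache structure and the 24-hour expiry are assumptions for illustration, not anything specified in the draft):

```python
import time

# Hypothetical per-destination cache: remember which client IP addresses
# appeared to sit behind a classic RFC 3168 bottleneck, and skip L4S for
# future connections to them until the entry expires.
class L4sPathCache:
    def __init__(self, ttl_seconds: float = 24 * 3600.0):
        self.ttl = ttl_seconds                # paths change, so forget eventually
        self._flagged: dict[str, float] = {}  # ip -> time the path was flagged

    def flag_classic_bottleneck(self, ip: str) -> None:
        self._flagged[ip] = time.monotonic()

    def l4s_allowed(self, ip: str) -> bool:
        stamp = self._flagged.get(ip)
        if stamp is None:
            return True
        if time.monotonic() - stamp > self.ttl:
            del self._flagged[ip]             # expired: try L4S again
            return True
        return False
```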
H: Okay. Any other comments there?
H: All right, next slide. Then the next section of the draft talks about recommendations for an operator who has deployed single-queue RFC 3168 bottlenecks.
H: It currently provides a list of potential options for configuring that bottleneck, or those bottlenecks, to work better with L4S traffic. I won't go through all of these in detail on the call, but on the first one there's a question on the mailing list as to whether routers are actually able to support that, I think.
H: Certainly it's well known that software routers can do that, but the question would be whether, in hardware routers, there are restrictions that would prevent the differential treatment between ECT(0) and ECT(1). In that case, the option is effectively treating ECT(1) as Not-ECT, and only doing Congestion Experienced marking on ECT(0) packets.
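For concreteness, an editor's sketch of that option: an RFC 3168-style single-queue AQM that treats ECT(1) as Not-ECT (the AQM's decision to signal congestion is assumed to have been made already; only the codepoint handling is shown):

```python
# Two-bit ECN field codepoints per RFC 3168.
NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

def apply_congestion_signal(packet) -> bool:
    """The AQM has decided to signal congestion; return False to drop."""
    if packet.ecn == ECT_0:
        packet.ecn = CE   # normal RFC 3168 marking for classic ECN traffic
        return True
    # ECT(1), i.e. L4S traffic, is handled exactly like Not-ECT here:
    # drop instead of mark, so an L4S sender sees loss and applies its
    # classic loss response.
    return False
```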
H: How feasible are all of these? Are there any downsides to any one of these? Are there other options that have not been identified so far that could be added to this section? And, further, details on the configuration of these different approaches would be helpful.
H: Okay, Gorry.
A: Okay, I just wondered if anybody who was joining the meeting had any eyes on this which might help us understand the particular routers that operators are using; or maybe they're a vendor.
M: Routers do exist that can do 4.1, treat ECT(1) as Not-ECT. Jonathan pointed out in the chat that it's formally contrary to RFC 3168. I don't think the feature, or the ability, is particularly common, but it does exist in some routers.
O: Can I just pick up Andrew's point about it formally being contrary to 3168? I don't think it is, because it's not saying you change ECT(1); you just treat it as Not-ECT, so you're effectively being a non-ECN router for the ECT(1) codepoint.
A: Bob, while you're on the line: wasn't there something in PCN? Anyway, so we have specs that have played with ECT(1) in the IETF in a different context.
H: I think it's also been suggested that the working group could consider formally deprecating treating ECT(1) identically to ECT(0). I think, as was pointed out, it's not required to be treated the same, but it could be formally deprecated, and that might nudge things along towards implementation.
H: Okay, next slide... nope; a couple of people in the queue, Bob.
C: Let's see, I thought 8311 did that, and placed some requirements on it. Is that not correct, that it has deprecated the treatment of ECT(1) as necessarily the same as ECT(0)? I thought that's already been released as a standards-track RFC.
D: You're saying this is subtly different, I think. It goes back to 3168; in particular, 3168 allows drop of ECT-marked traffic, and endpoints that comply with 3168 have to be prepared to deal with drops, because somebody got backed up and couldn't mark, and had to drop.
H: All right. You know, I'm short on time here, so: section five in the draft, the role for researchers. Pretty lightweight text in this section so far, so we'd certainly appreciate input from researchers on things we could add.
H: I'll point out as well that it would be nice to have some specific test requirements in this section, and not just the fairly high-level statements here; but we do have at least the high-level statements of a couple of items that researchers could look at.
H: Obviously, detecting 3168 through measurement campaigns would be an interesting step; there's been some attempt at that in the past. But, more usefully, out-of-band measurement of L4S versus classic performance might be something that could be done as well. And then, not in the draft yet, but something that was pointed out off-list to me: research...
H: Next slide. So there have been some additional mailing list comments recently, in the last few days. Sebastian had a comment to reword the phrase "more precise flow balance" in reference to DualQ. Jonathan Morton pointed out that if the working group had gone a different path, then we wouldn't have to write this draft. And then Jake had a couple of other comments around adding some references that help illustrate the unfairness issue, and then raised a question.
H: So, last slide. I think this is time for the working group to consider whether this is something we adopt. As I said, I've been the editor, folding in comments and text provided by others.
H: There have been several who have contributed text, but I would hope that adoption will bring a commitment from the working group to flesh this out and provide a full list of what we're aware of, in terms of guidance for making L4S work well in the presence of 3168 bottlenecks. So...
B: Yeah, I think starting an adoption call on the mailing list is probably a good idea, but I think we'd like to open the floor to hear some other people's thoughts, because I think this is something that came about as a mitigation to some of the comments and concerns people had when we decided to make the ECT(1) decision at the interim meeting.
H: I think that's overstating it. As I indicated in the first few slides, they're known to exist, these 3168 single-queue bottleneck implementations; what's not known is the prevalence of them, believed to be rare.
H: At least, studies so far have not shown them to be widely deployed in bottleneck locations. And again, as I discussed earlier, the impact of them existing is not catastrophic; it's not a major issue in a lot of cases. It's only in the case of long-running flows sharing those bottlenecks, with moderate BDP levels, where the result is something that we want to track and avoid. So I wouldn't say it has anything to do with making networks in general prepared for L4S; it's really just this specific scenario.
N: So, when it comes to these long-running flows sharing a single queue: does that take into account the possibility of tunnels encountering an AQM, even if that AQM...
H: Yeah, as I mentioned earlier, the asterisk on the earlier slide: flow-queueing bottlenecks group tunnel traffic into a single queue, and so they don't provide fairness for tunnel traffic, or for flows that are within a tunnel. Yeah, that's a known issue with FQ systems. And on top of that, if the mix of flows within that tunnel contains both L4S and classic, then the entire tunnel is clearly only seeing a single CE marking...
H: Well, at least in existing AQM implementations, that tunnel traffic would see a single CE marking probability, and you'd see further unfairness within the flows inside the tunnel. That's true. It's been discussed that it's fairly straightforward for FQ implementations to support the L4S marking thresholds.
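A sketch of what that "straightforward" FQ extension might look like per flow queue, with the shallow immediate marking step for ECT(1) alongside the existing classic AQM (the 1 ms threshold and the helper names are assumptions for illustration):

```python
ECT_1, CE = 0b01, 0b11

def fq_enqueue(queue, packet, classic_aqm) -> None:
    """Per-flow-queue handling in a hypothetical L4S-aware FQ AQM."""
    if packet.ecn == ECT_1:
        # L4S traffic: immediate, unsmoothed step threshold on sojourn time.
        if queue.sojourn_time_ms() > 1.0:
            packet.ecn = CE
    else:
        # Classic traffic keeps the existing CoDel/PIE-style treatment.
        classic_aqm.apply(queue, packet)
    queue.push(packet)
```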
H: So it's not a problem with FQ in general; but yes, in the case of tunnel traffic over an FQ bottleneck, there is that second unfairness in there as well.
H: You know, thank you, Stuart; that's definitely good feedback to help understand the scope of this issue, or maybe the lack thereof.
H: All right, Jana.
Q: Hi. I loved, Stuart... I love the "zero percent to many decimal places", by the way; I'd love to know... yeah. The thing I wanted to say about this draft is that, in general, it makes a lot of sense to have an operational guidance draft in the working group for this. I think the questions that are being raised and addressed in this draft are very useful, and I think this ought to be adopted.
Q: I've taken a look at the draft; I'll admit to not having read it carefully. It seems very thin, but I suspect that can be resolved as it gets into the working group. I would definitely encourage adoption of this. The one thing I would also suggest is maybe to articulate a little bit more... to articulate the goal as being not to promote fairness, but to avoid unfairness. I think there is a difference between the two, but again, that isn't...
C: Yeah, so I did a presentation at MAPRG, the interim MAPRG; I think it was just after the last IETF, and it was an interim with... that was at the conference, right? Yeah. And we did not see zero percent; there were several networks in which we saw non-zero percentage deployment of ECN when we looked according to ASN. So I would be very interested, Stuart, if you get a chance...
C: I don't know if you're planning on publishing the data that you see, but not only did we see non-zero percent in a number of networks; we saw growth in adoption over the last year. So I just wanted to mention that, although I agree the deployment is relatively low.
C: When you look in the aggregate, it was only zero percent down to two decimal places for us overall, and I think we are probably getting a different data set than what, you know, an iPhone would see in general. But "zero percent to many decimal places" was not our observation, so it might be good to sort of see if we can link those up a little better, and maybe even get an ongoing...
P: The one suggestion I would have: when we first collected data, we were seeing some plausible amount of ECN marking in Argentina, notably, and in France, and it led us to a false sense of optimism at the start. When we dug into it a little bit deeper, it turned out to be bogus; it wasn't actual working smart queuing with ECN.
P: As far as we could tell, it was just bugs. So yeah, if you have some actual working ECN, that's very interesting, because...
C: Thank you. We did note that when we started our project; we were looking for it also, and did fail to replicate that. I think I reported that in the presentation I did. But nonetheless, there were other networks where it's there, and by country it was hard to find anything that remotely approached some of the numbers you reported in the initial study, and I understand that was found to be bugs.
C: I think that's also been reported, and maybe mentioned during that talk; but, looking at individual networks, there are some that do appear to be doing real ECN traffic, yeah.
O: Yeah, just to make sure everyone's caught up on this as well.
O: You know, Jake's data was important, but then there's also the question of: was that single queue or FQ? Because obviously, in the last few years, fq_codel has been deployed with ECN support, and so I would have expected to see some of that, and I'm assuming that's what Jake was finding. But we obviously now don't know, without even more difficult tests, how to distinguish between an FQ producing CE and a single queue; because the FQ obviously does the scheduling itself and isn't going to allow this domination of one flow by...
B: Yeah, and I think this has been a useful discussion. I think we're going to plan to take an adoption call to the mailing list.
D: One quick reflective note for the chat: we're going to run over into the break. Sebastian suggested that we take no more than 25 minutes of the break, which seems like a reasonable suggestion.
A: Can I ask that we also check how many people have read the adoption draft, this or the previous version, and therefore get some view of whether people are being exposed to this? Sure. Do we know how to press these buttons? I...
A: So click on the bar chart and tick "raise your hand" if you have read it, or do not raise your hand if you have not read it, and we'll collect that as we go into the next talk, so that we can run this adoption call later on the list.
O: I'd rather you do it, to be honest, but... well, I suppose I could... whatever. No, you do it. Okay. Is that okay?
O: I guess I could just start by saying what the next talk is going to be, before we get the title slide. It's a status update on L4S and the three main L4S drafts, other than the one Greg has just been talking about: that's the architecture draft, the L4S ID, and the DualQ Coupled AQM. Right, that's fairly small on my screen.
A: Yeah, there were 15 people who said that they'd read one of the operational drafts, one of the two revisions, and there were 19 people who said that they hadn't, presumably intending to. So a reasonable number of people have read this, and that's useful input to our process as chairs. Thanks ever so much; that poll's closed. Back to you, Bob.
O: Okay, yeah. Actually, I don't know: was that meant to be a precursor to a working group adoption, or what's going to happen with that?
O: Okay, so there are the three drafts, with their revision numbers. Next slide, please. A one-slide recap of what L4S is and how it's motivated: it's motivated by a desire for ultra-low queueing delay for all internet applications, so that you don't have the management problem of having some favored over others. And the key idea is getting smaller sawteeth in response to congestion, so that you don't have the compromise between low queueing delay and poor utilization.
O: So you can get your AQM right tight, and you also get scalable throughput, which is the "S" part of L4S. Otherwise, as has been known since early work in the 1990s, which Sally Floyd published as RFC 3649, you get longer and longer times between each sawtooth as you try to go faster, and you get less tight control. Next.
O: I'll try to speed up. Just a bit of news on various implementation things that have been going on since the last time we gave a status update, which was April, I believe, or maybe July; I can't remember. The Low Latency DOCSIS working group has been continuing doing interop testing; currently three independent implementations, two cable modems and one CMTS, and two implementations now completely pass all the functional tests. There was a... I know, before I left Cleveland...
O: There was a large set of conformance tests and functional tests that were defined against the spec. Next one. This work hasn't started yet, but the only new news is that it's got a date for when it's planned to start. That's the Data Plane Development Kit, the open-source libraries for virtualization for a variety of CPU architectures, and a DualQ Coupled AQM implementation there is planned for Q1 next year. And then there's a lot of publications coming out of ns-3.
O: Sorry, by publication I mean published source, out of the ns-3 efforts. There's the simulation model of Low Latency DOCSIS itself, which was used by the Low Latency DOCSIS working group to test everything; that's in the ns-3 app store, and that link takes you to it off the slides. It includes not only the DualQ Coupled AQM but also Low Latency DOCSIS queue protection, which I think will be useful news to some people.
O: Then Tom Henderson, who was largely pulling all that stuff above together, also pointed out that L4S support has been added to the CoDel and fq_codel models in ns-3, and it's about to be released.
O: It's been added, and is about to be released, in the FQ-PIE and FQ-Cobalt models in ns-3, and it's already in the Linux fq_codel. And, in addition, the IETF DualQ Coupled AQM, meaning the one that's in the IETF draft, the sort of reference pseudocode: the code for that is also modeled in ns-3, and has been for some time. Right.
O: Right, just a quick heads-up: we have an ICCRG session on Friday, 05:00 UTC, 14:00 Bangkok time... and I think that's right. No, no: midday, 12:00 Bangkok time. I'll give a quick outline of what's going to happen.
O: We have a congestion control that gives really low latency over a range of conditions, as in that paper; you know, reasonable RTTs and link rates. But pieces of it are still missing; that's absolutely obvious. So it's easy to think up conditions where it doesn't work well and show that, and we admit progress has been slow covering up all those holes and fixing it all.
O: So, you know, as we've said many times, we, being the core team, the design team working on L4S, work primarily for network companies, being network operators or vendors for networks. And we've got a bit of a background in congestion control over the years, but we're not the guys that will produce this. So we're having to do it in spare time, because it's not funded by our companies, and so on.
O: We might generate some interest and get a bit of a buzz around this; but at the end of the talk we want to start a conversation about what is needed to get that buzz going again. Because, after the initial buzz, we imagined that this had good enough potential that people would want to work on it.
O: We imagined a different way of building congestion control, if you like. Whereas mostly a congestion control has been given as a blob to the community by researchers, we were thinking it'd be great to have something where different components were brought in by different groups, as more of an open-source project. And that didn't happen, and that's probably our fault, nearly certainly our fault as the core team.
O: We just, you know, didn't do it right; didn't make it open enough at the start. But now something worse has happened: we've sort of got a toxic codepoint war. You know, it's good to have a red team picking holes in everything, but it's now getting to the point where it's, I think, being a bit more destructive, in that there are problems that aren't really anything to do with L4S congestion controls in particular that you could...
J: Bob? So yeah, there was some confusion about the deadline for this meeting, and that's not your fault. However, I and many other people are not going to be able to stretch this until 55 past. Is there any way you can bring forward, like, the key takeaways, and then people who are interested would be welcome to stick around? Okay.
O: So, the rest of the talk is about the three L4S drafts, because we've been putting a lot of work into them to get them finished and ready for working group last call, in our humble opinion; that stands for, or it's an anagram of, OHIO. The only bit left, really, is what the status of this L4S ops draft is: it's an individual draft, so we didn't refer to it yet, so we need to put in a reference to that, and describe it a bit in the L4S ID; but otherwise we're done. Of the three drafts, the L4S architecture has had two revs since the last status update, and I guess the main change... I mean, there's been some very significant change in that, and I will go through them, but, top level...
O: Really, it's put FQ up there with DualQ, as sort of alternatives in the architecture. The DualQ is no longer "the architecture"; it's FQ as well. And in the other one, the L4S ID draft, there's been a lot of changes.
O: Mainly, you know, a lot of clarity changes, but there have also been a number of normative text changes, based on conversations on the list where we've been tweaking the wording, if you like; and some new stuff about smoothing signals and pacing, to make them requirements. Because that draft is primarily a draft about how to write a congestion control draft, or how to write a draft about network equipment, and it's giving the requirements as to what has to be in that draft.
O: So there are some SHOULDs and MUSTs on what has to be in those things, about pacing and stuff. And finally, the DualQ Coupled AQM: that's very stable; I couldn't find anything to change in that, just some minor editorial stuff. So that's the top level. Martin, if you want to go now.
Q: Okay, thank you. Well, at a high level, Bob, I don't know that we need to go over all the L4S drafts again, but I want to ask what the intent here ultimately is. Are we asking whether we should go to last call? Are we...
Q: Are there any major issues pending? To the extent that I've seen, my understanding is that the biggest issue I've seen, and forgive me if I've not seen all the issues, is the competition with classic ECN at FIFO queues. Is that the biggest issue that's pending? Because I'd like to speak to that, if that is the biggest issue that's pending.
O: I guess, against that, there's the point I've just made, that our congestion control is not up to speed with the aspirations in this draft. But one of the questions at the end is: is the intent that that is part of the experiment, rather than... Right. So that's...
Q: So maybe I'll just take a minute to speak to both of those points, because I think my point will cover them both. At a high level, I feel like this is ready for an experiment. I absolutely think that we should outline precisely what the outcomes of the experiment ought to be, meaning: what are we measuring, what are we looking to see? But it absolutely seems like we are ready to launch this experiment.
Q: If the competition with classic ECN is a concern, I might argue that it only becomes a real concern, or a reality, if L4S becomes very successful.
Q: So if what we are talking about is potentially that we could get this out the door and try to deploy L4S, then let's do it. I don't know that we need to wait for much else at this point.
D: I was going to add to Jana's comment; I think Jana is going to suggest that a path to a successful working group last call on the main L4S drafts runs through solidification of the L4S operations draft. That's where the 3168 issue could settle.
A: Are you taking that as a comment that we're not ready?
D: I'm not sure. I think, as part of working group last call, the working group has to judge whether there is a reasonable approach to dealing with 3168.
A: Yeah, I concur with that, and I think that is... I mean, just to clarify for anybody who's watching this or listening remotely: the working group last call doesn't confirm the document. The working group last call gives people a last call for comments on it, where we have to check in detail that things are right; and the document could be frozen after that, or the document could be published, or we could have certain pieces of work identified that need to be done, or indeed it could be dropped.
A: We're seeing several things in the Jabber. Does anybody want to speak? We can take notes from the Jabber.
Q: I'll add one to that, for no particular reason but because I'm here in the queue: I don't think that we need to wait for the ops draft. I mean, I think the ops draft is useful.
Q: It's a brand new one, and I think that putting it on the critical path rushes the draft. I think it's worth spending some time getting it ironed out, just in terms of getting the text right and everything else; but I still don't understand why it needs to be on the critical path for getting these to experimental status. If people know how to deploy this stuff, then they can do it.
C: So I'm in the queue after Jana, but I see Colin and John are still ahead of me. Or should I just speak? I guess, yes, I will. So, you know, the tests that Jonathan and Pete have published lately do show some bad problems. I guess the claim is that this is, as I understand it, essentially sufficient to call the L4S experiment likely to fail.
C: So I'm not sure if the intent is to sort of just fix that after seeing the experiments, and I'm not sure why that wouldn't happen in a lab before moving it forward. But I guess the sentiment here seems to be: let's get this rolled out so that people can try it, or something. I think there's already an implementation people can try, so I'm a little confused what the holdup is on running an experiment, just in terms of gathering data; I'm not sure how that relates.
C: You know, that's just my sort of take on the experiments I've seen so far. But in terms of... I guess the goal here is to get wider operational experimentation, and that's waiting on an RFC rather than on a draft, and I'm a little confused on how that even relates, exactly. So I don't know who wants to speak to that, but...
L: So I'm not sure who's running the queue, but I guess I may be next. I don't think we need to wait for the ops draft here; I'm confused as to why we're not just proceeding with this already. The problems we've seen identified seem very minor and unlikely to be a practical issue, so I don't quite understand why we're not proceeding with the experiment. So yes, I would encourage us to move forward with this.
I: Okay, so I guess it's my turn. I will echo what some people have said in the chat: a little active queue management of what is next would be helpful from the chairs. So I'm sort of struggling to know what the experiment actually is, right.
R: Yes, so I think the experiment is... I think the problems that have been detected with DualQ actually make certain assumptions about deployments, so part of the experiment is actually to figure out if these deployments do exist and are a problem, because we assume that's not the case. You know, that's something we can figure out from larger-scale experimentation. And, to also hit on the last point...
R: I think part of the experiment is also to give these new kinds of congestion controls room to evolve, and therefore we need some deployment of the queuing scheme in the network. So I think we're ready for that part.
S: Maybe the thing I want to say is that the experiment is mainly for people to start working on the congestion controls, to get real experience of what is going on, and to get engagement. Because a lot of application developers, I guess, are interested in using this and want to see deployment, and also to get convinced that deployment will happen, before they really engage and solve all the problems. And the problems we see today are mainly... well...
S: The main problem is, of course, classic ECN fallback, and the longer we wait, the more classic ECN will be an alternative, because people stop believing that L4S will actually really happen. That's the main danger. And I don't think a lot of people are convinced that the problems in the current implementation cannot be solved. It's a huge opportunity to experiment and to gain better experience.
S: In the beginning, we thought, okay, Data Center TCP would be the reference, and we didn't need to work on the congestion control. But in the meantime, this Prague has become a kind of reference implementation status, and is now even asked to prove everything that it can gain. I think that's a wrong expectation, especially from the people working on that.
S: If people are really concerned that there is no improvement possible, then, okay, we should stop here, or wait until this is really the case; but I don't have the impression that people are thinking in that way. People are waiting for deployment, and really to get things running and see this. Also, what's important is that we get a lot of traction in implementations. If you're going to wait...
S: I think the confidence in this whole experiment and this whole topic will slowly disappear, which is a bigger danger to progress than whatever concerns there are now, which are, I agree, minor, and mainly performance improvements or performance issues with the congestion control itself.
S: So I'm saying we should not wait, definitely not; the experiment is still an experiment. We should engage, and more people will automatically work on it.
B: Okay, and then Jana.
Q: To Lars's question of what the experiment is: I think he is right that we should articulate precisely what the experiment here is, and I think that answers some of the questions. I'll emphasize just two points in there. One of them is that history has shown that just getting this deployed can itself be an experiment. What I mean by that is, even though it does... yes, writ large, you know...
Q: Are we just going to throw this out there and see what happens? Getting something like, for us, the ECN marking itself deployed is a pretty big experiment. If you succeed in that, then I'd be very happy.
Q: But yes, encouraging endpoints to experiment with different congestion controllers is also an additional piece there; but for them to do that, we need the bit out there in the network. So I would say that we launch the experiment to see if we can actually deploy this into the network; and once we do that, we know that we can build controllers, and we already have some example controllers that, even though they are not perfect, will do something pretty good with it.
Q: I would also ask a slightly different question, which is: do we think that by keeping it in committee, so to speak, keeping it as a draft in the working group, we gain anything? If we don't gain anything, my question would be: what do we gain by keeping it in the working group, rather than making it an experiment out there?
K: Thanks. So, the concern I have at the moment is that the operational safety of all of the L4S design and implementation over the wider internet has not been proven sufficiently at the current time. Speaking about deploying what we have seems rather premature without data showing that it really is safe over the long haul, and not just over the short-RTT, low-hop-count links between end users and CDNs; it seems rather incautious.
K: I think L4S simply isn't at the point where we should talk about last call now; the data is missing. It's sad that, after seven years, the data is still not there, but that's how it is. I'm pretty sure that this data can be produced relatively quickly and presented; and if it's safe, no matter whether I like it or not, then make it an experiment. But if it's not safe, let's just drop it. But let's make this based on data, and not just on how we feel about it.
B: I see Koen jumped back in; maybe you want to address that comment before we go to Greg.
S: It's a range where the congestion control that we have now is not optimal yet. For the users themselves, a very simple fallback is just: okay, if your round-trip time is bigger than a certain amount, we fall back to CUBIC, and the problem in itself for our reference...
S: ...congestion control is solved, and it could be used as a reference for experiments. And of course it's better to work on improving the performance of L4S for longer round-trip times as well; but, on the other hand, what's the value of that? Because if you have a round-trip time of 160 milliseconds already, that extra one millisecond or 15 milliseconds is not going to make a big difference.
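The simple RTT-based fallback Koen describes reduces to a one-line policy; a sketch, with the threshold invented for illustration (the talk only says "a certain amount"):

```python
RTT_FALLBACK_MS = 25.0  # placeholder threshold, not a value from the talk

def pick_congestion_response(srtt_ms: float) -> str:
    """Fall back to the classic controller on long round-trip paths."""
    return "cubic" if srtt_ms > RTT_FALLBACK_MS else "prague"
```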
S: So we should also see what the relevance of the problems that we see today is in practice; I think they are zero. It's just a matter of: if there is interest, for people with a very long round-trip time, in having a buffer of one millisecond instead of 15 milliseconds, it will be worked on; otherwise the congestion controls will automatically fall back. But all these things, that's experience; we need people working on it, the right people, with the right incentives, working on that and improving it.
S: If everybody looks at us and expects that we are going to make this ultimate, super-perfect congestion control, then that is the wrong way of progressing; we will probably starve at the end, from exhaustion. I mean, let's engage; let other people get the opportunity to work on it. I see only one main problem, which is not that big: the classic ECN detection. And the longer we wait, the more classic ECN will be the alternative.
S: So I don't see any gain in waiting; let's go forward. The question is: is the network part okay? Then we should go ahead. Is the congestion control okay? I think, now, it's not perfect, but it can be improved. So why stop? I would say: go ahead, make sure we can go to working group last call as fast as possible, and the rest will follow.
B: Okay, thanks. I have Greg, then Gorry, Lars and Sebastian, and we have to be quick; there's only a couple of minutes left.
H: Yeah, I just want to point out that the original definition of ECN is 20 years old, and so far it sounds like the deployment of that in networks is zero percent to some number of decimal places.
I: Okay, Lars. I didn't want to sound... oh, my audio is really garbled. I didn't want to sound like I was going to require, like, the perfect congestion controller for this experiment, right. But, you know, asking for a fabric upgrade is a pretty tall ask, and so is not being able to motivate it to the application developers that you also talked about; you know, no application can use this without there being a congestion controller that delivers some sort of benefit to that application, right.
I: So you can throw this in the network, but if it can't be exploited by the app, it's not going to be a very great deployment story, right. So we can argue about whether what we currently have as a congestion controller is good enough, but I think this is an integral part of this experiment. If you think back, right, we've done a few classes of this before. Either we've done queueing improvements that intended to improve existing TCP traffic without TCP changes, right; we know how to do that.
I: We've also done TCP changes to try to, you know, work better with whatever queues they found in the network, and we know how to do that. And then we've done ECN, right: we changed both; we changed the network and changed TCP, and it was a pretty integrated design, right. It was pretty minimal, but it was integrated, and, in my mind, this needs to be a similarly integrated activity.
S: Okay, so I agree; it's very important to have both. And looking at the past: in the past we had the congestion control for ECN, which was very simple, just do the same as before but on a mark instead of a drop, and afterwards we didn't see any, or hardly any, deployment in the network. Now it's the opposite.
S: We have a network side with a lot of interest, with a lot of plans to deploy, with standards already defined, and people working on products; and now we're saying: let's wait until the end hosts, which are usually the easiest ones to change things in, are at par. I think we are blocking this at the moment. We have good engagement in networks, and plans to deploy.
S: Let's release this. I don't see any... and, well, I would like to ask other people as well: I don't see anything that cannot be solved in the host at the moment. We're gonna need...
S: So, anyway, the short thing is: we have something which is good, which can be used at the moment already, within a specific range, with some corner cases not covered. But if that's the question: we have that already; we can do the experiment. Applications can use it already, within certain constraints defined in the operational guidelines.
D: Okay, I think the chairs need to take what's been said here under advisement, and we will consult with the AD about the path forward here. Gorry, did you want anything to...?
O: Well, I don't necessarily want to wrap up the talk, but I just want to state a fact that I believe: the people who are doing these network implementations would not deploy it unless there was an RFC out there, and, you know, the process of going from working group last call to that...
A: I think it would be worthwhile taking a sense of those people left in the room. We know people have already left the meeting, so this can only be imprecise; but, if you think that it would be a good idea to have a working group last call in the next period...
A: There we go; at that point, 18 people here said that they would review this if it was presented for a working group last call.
A: I'm happy with that. Bob's got additional slides that will be in the proceedings.
O: Yeah, I mean, they're basically saying what's in the draft. I can also post on the mailing list; well, I've already done that for the last set of deltas, but I can say what the main changes are that haven't been discussed on the mailing list.