From YouTube: TSVWG Interim Meeting, 2021-05-10
A: Welcome to this TSVWG virtual interim meeting, specifically on L4S today. We'll show the IETF Note Well, which applies to this meeting. Please read and understand it.

A: The understanding I have is that Meetecho will be available for future virtual interims, but wasn't set up in time for this one. So it's a little bit different; we're hoping we can use the Etherpad for blue sheets and also keep minutes there.

A: We're hoping people will use the Webex chat mainly to draw attention to when they want to be put in the mic queue, and also to respond to polls that we might ask. If you could keep a response to just "support" or "not support", that would be ideal, I think, because it's clearest, rather than us trying to parse sentences.

A: Otherwise there is the normal TSVWG Jabber room set up, and I think Gorry is planning to monitor that during the meeting for anything that comes up that you want channeled, if for some reason you're unable to speak yourself.

A: Are these things all clear? Any other changes we should make?
A: Okay, good. So the agenda today: we really set this up primarily to work through the two main big open questions that we as chairs thought still existed around the L4S topic, and we were hoping to work towards some conclusion to these things and understand the working group consensus, if there is one.

A: First, around whether there's a sufficient description of the transport requirements to use the L4S ID, such that the working group is comfortable that different implementations could be done for multiple protocols, and there are no concerns that these are impossible to implement or unclear in some way.

A: So I'm hoping Bob, or actually Koen, will give an update on what's been going on with that draft recently, but hopefully fairly briefly. Then we'll go straight to an open discussion on issues or questions people have on these things, and if that doesn't take the full 40 minutes, that's fine. Then we'll move to the L4S ops draft; Greg will give us the status in the same way, quickly, and then we'll have time for open discussion, and then we'll try to wrap up at the end.
A: Maybe we'll start some consensus calls, if that looks like it makes sense to do so. The main goal of the meeting: I wanted to set this up so that we had ample time to discuss the topics. I think in the last couple of IETF meetings we've run short of time for discussion on these things; we've mainly had the meeting time for status updates and then had to quickly either move on or adjourn, and haven't maybe driven some of the discussion to conclusion. Depending on how things look, I think we're hoping to start consensus calls, to make sure that we're understanding the group's feelings on these things by the end of the meeting, and those would of course carry over to the mailing list.

A: We don't want to spend a lot of meeting time talking about alternatives to the L4S design, or significant changes that would lead us onto tangents. We'd like to keep with the established support we had in the past for the use of ECT(1) as the identifier. Specifically, that's what I mean by not spending a lot of time on alternatives: I don't think we should be spending time on conversation that deviates from the use of ECT(1) as the L4S identifier.

A: That's it. If all that is clear, I'd like to hand it over to, I guess, Koen, for a discussion of the L4S ID requirements.
C: The screen is coming through as well? Yes? Yep, then we can go. So I think there has been a lot of activity, good discussions and progress on the L4S ID draft. We had the Prague requirements survey, targeting congestion control developers, looking at feasibility and realizability, and to see whether there is broad support. We had a lot of both responses and discussions based on the responses.

C: Because there was a lot of work, we focused mainly on the normative text; there are still a lot of editorials to do as well. There is now a draft version 16 that Bob just recently released.
C: So it's still not finished, let's say, but we focused mainly on the normative text, and I think it has already resulted in a clearer and more to-the-point set of requirements. Maybe briefly about the requirements survey: we had a new response from Apple, which Vidhi shared, thanks for that. I updated the web page, added it, and the consolidated version is also updated, so you can have a look at it. The main contribution, let's say, was mostly in line with Apple's comments. The main new thing that popped up was an objection to the "SHOULD detect loss by counting in time-based units" requirement, so that is updated now, based on Apple's and Google's feedback.

C: The objection was that time-based counting is maybe not the only, or a sufficient, way to allow for such reordering.
C
So
also
it
might
be
better
to
express
again
the
requirement,
not
the
mechanism,
so
to
allow
alternative,
implementations
and
and
potentially,
more
robust
implementations
and
also
so
feedback
from
google
new
mentioned.
Okay,
iraq
is
actually
doing
already
more
than
the
time,
based
only
so
we
came
to
the
conclusion.
Okay,
it
should
not
be
time
based
only
it
should
be
adaptive
interval
and
we
we
should
refer
to
the
rfc
of
rack,
so
that's
good.
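The adaptive-interval idea can be sketched roughly like this (a simplified illustration of the RACK approach from RFC 8985; the function names and the window-update rule here are illustrative, not text from the draft or the RFC):

```python
def deemed_lost(pkt_xmit_ts: float, latest_delivered_xmit_ts: float,
                reo_wnd: float) -> bool:
    # Time-based loss detection: a packet is only declared lost once a
    # packet sent more than reo_wnd seconds after it has been delivered,
    # rather than after a fixed count of duplicate ACKs.
    return latest_delivered_xmit_ts - pkt_xmit_ts > reo_wnd


def adapt_reo_wnd(reo_wnd: float, min_rtt: float, reordering_seen: bool) -> float:
    # Illustrative adaptation: keep the window at a fraction of the
    # minimum RTT, and grow it (up to min_rtt) whenever the path is
    # observed to reorder more than currently assumed.
    if reordering_seen:
        return min(reo_wnd * 2, min_rtt)
    return max(reo_wnd, min_rtt / 4)
```

The point made above is that the requirement should be the outcome, tolerating the reordering actually seen on the path, rather than any one particular mechanism like this.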
C
So,
for
the
rest,
a
short
status
I'm
going
to
go
quickly
through
these,
so
there
were
objections
on
the
document
only
requirements,
so
they
are
now
only
they
have
been
removed
and
only
played
by
a
general
advice
in
making
documentation
available
if
possible.
C
So
again,
the
mechanism
is
not
important,
but
the
results,
and
so
it
is,
there
is
consensus
to
make
safely
coexist
with
renal
congestion
control
to
replace
that
with
a
classic
congestion
control
such
as
reno,
because
there
is
an
rfc
and
it's
an
example
as
required
by
and
seems
there
is
another
rfc
which
describes
this
more
in
detail
5033.
E
Where
did
we
wind
up
on
the
concern
that
the
repeated
mentions
of
reno
might
focus
too
much
attention
on
it?
I
think
I
saw
at
least
one
reference
to
cubic
in
the
new
draft,
and
that
was
good.
C: Yeah, so there are two aspects. We also focused on replacing "TCP", so that it's not only TCP, because a lot of people get primed by the idea that it's only about TCP; you'll see that this is removed too. And Reno we minimized. So maybe it's a good remark, because for CUBIC there is also an RFC now, right?

C: So, depending on the timing, this might indeed then be a possibility: to also include CUBIC, with an RFC reference. Right.
C: Okay, going further: there were comments on the "SHOULD scale down to fractional congestion windows" requirement.

C: Not everybody was convinced that it is a problem, and there are already implementations, in use even today, that cover that problem.
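The problem being covered can be illustrated with pacing: rather than clamping the congestion window at one packet, a sender can keep a fractional window and stretch the gap between packets. This is a generic sketch of that technique, not the draft's normative mechanism, and the names are made up:

```python
def pacing_interval(cwnd_bytes: float, mss: int, srtt: float) -> float:
    # The average sending rate is cwnd/srtt. Sending one MSS every
    # mss*srtt/cwnd seconds achieves that rate even when cwnd < 1 MSS,
    # so multiplicative back-off keeps working below a one-packet floor.
    return mss * srtt / cwnd_bytes
```

With a window of half an MSS and a smoothed RTT of 100 ms, this sends one packet every 200 ms, so the flow really does halve its rate instead of stalling at one packet per RTT.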
C
So
so
today
it's
a
shoot,
but
I
think
we
should
make
sure
if
it
occurs
on
the
internet,
those
that
implement
such
reduced
back
off,
they
will
back
off,
while
others
are
not
so
maybe
a
final
call
to
to
say:
should
we
keep
the
shoot
anyway?
It's
an
important
output
of
the
experiment.
I
think
whether
it's
it's
disadvantage
for
the
ones
who
implement
it
or
whether
it's
a
very
often
occurring
problem
on
the
internet
or
not.
C
Okay,
there
were
also
a
lot
of
comments
on
monitoring,
fail
back
and
replacement
to
clarify
things.
So
we
have
clarified
monitoring
it's
both
on
on
live
traffic
unless
there
is
an
alternative
or
external
monitoring
on
the
part,
also
clarified
adaptation
and
replacement
conditions.
C
C
So
it's
hopefully
this
text.
This
is
clarified
now
and
we
also
aligned
these
requirements
with
the
operational
guidelines.
Draft.
C
There
is
also
a
smaller
discussion
based
on
on
reduced
rtt
bias,
so
there
is
mainly
the
conflicts
between
the
must
and
as
much
as
possible.
I
think
it's
also
important
to
make
sure
that
this
irtt
bias
is
meant
only
for
rate
conversions.
So
after
a
long
time,
clearly,
it's
not
for
slow
start
for
getting
up
to
speed
reduction
on
on
a
strong
marking
signal
if
the
marking
signal
gives
100
percent
continuously.
C
All
of
these
things
are
better
to
scale
with
the
rtt
to
prefer
preserve,
also
stability
and
efficiency.
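The scaling point can be made concrete with a little arithmetic (an illustrative model, not a requirement from the draft): if the additive increase per RTT is scaled with the square of the RTT relative to some reference RTT, then the rate, window over RTT, grows at the same speed in wall-clock time for every flow.

```python
def cwnd_increase_per_rtt(mss: float, rtt: float, rtt_ref: float = 0.025) -> float:
    # Scale the per-RTT window increase with (rtt/rtt_ref)**2 so that
    # d(rate)/dt = mss/rtt_ref**2 is the same for every RTT.
    return mss * (rtt / rtt_ref) ** 2


def rate_growth_per_sec(mss: float, rtt: float, rtt_ref: float = 0.025) -> float:
    # rate = cwnd/rtt, and the window grows by cwnd_increase_per_rtt
    # once every rtt seconds, so the rate grows by this much per second.
    return cwnd_increase_per_rtt(mss, rtt, rtt_ref) / (rtt * rtt)
```

A 10 ms flow and a 100 ms flow then converge towards a new rate equally fast, which is the sense in which the requirement is scoped to rate convergence: applying the same scaling to slow start or to the response to a 100 percent marking signal would hurt stability.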
C: So I think we can still improve the text a bit there. There's another discussion on whether rate fairness is absolute or more gradual, so that we still say "as much as possible", or in other words compromise between stable throughput for low-latency services and optimally using short periods of available bandwidth. Also, for a low-latency flow, is it really important to quickly vary the bandwidth? So yeah, we can still discuss and maybe fine-tune. I think limiting it to rate convergence is definitely important; all the other things might also be an outcome of the experiment, I expect.

E: Actually, I'm on the other side of the discussion from the first folks about "MUST ... as much as possible". The short story of my view is that "MUST do X as much as possible" is, in practice, a SHOULD, and I don't think there's disagreement about what we want the implementers to do. I think we're having a surprisingly long discussion over the precise words to use, and Koen's point about rate convergence, I think, is much more important than whether it's a MUST or a SHOULD.
C: Yeah, that's one aspect, of course; there is this discussion. But I also saw people thinking: if we need to be RTT-unbiased, well, there are a lot of mechanisms that are still RTT-dependent and need to be RTT-dependent. So still, I think, in addition to that discussion, it's also important to make sure this applies only to rate convergence. But yeah, something to further discuss, I guess.

G: Going this way again, on RTT bias: I think this is a good point. I don't remember reading this in the draft. Is it updated, for example not to apply to slow start? I don't know what language to use here, but these are good points that you have on the slide.

C: Yeah, indeed, this is a recent discussion. I don't know, Bob, what the latest version in your slides is, but I think we kind of agreed that rate convergence is part of the wording here. I didn't check the latest, -16; maybe we can check offline. But definitely it needs to be discussed on the mailing list in that case. Okay.
C: So there are still requirements that were not changed, with only some minor typos and clarifications, otherwise the same. Maybe other topics that are ongoing: one is about guarding with DSCPs and whether that can protect, whether there is already a robust detection scheme. On the other hand, I'm also thinking it would kind of defeat the purpose of the experiment, which for me is mainly the adoption level; that is the success criterion of these experiments, whether it's something good that people want to use.

C: Also, there is a recently discovered replay-protection interaction limit. It seems that secure tunnels are limiting the reordering resilience, which can cause problems when ECT(0) is marked to CE in a first bottleneck and later, in a second bottleneck, ends up in the dual queue and gets priority.

C: So there is actually expedited forwarding of the CE mark, but with this replay protection in tunnels it can cause packets in the same round-trip time to be dropped. So it is a problem, and it's a problem only for ECT(0) users. At this stage the impact is limited to drops within the same round-trip time, as I said, so it shouldn't be too much.
C: It seems there are also similar issues with Diffserv, if services are mixed in a single tunnel. Clearly, solutions need to lie in making this reordering resilience bigger, or scalable: as windows become bigger and bigger, a very small or limited reorder window in these mechanisms causes more and more problems, not only for this case. Of course, this is a new topic that probably needs some more discussion.

E: Yeah, I don't think it's quite as simple as described here. I'm not sure it's a good idea to dive into this, but in particular I don't think the ECT(0)-to-CE marking is the only cause of the problem.

E: As opposed to diving into this now, which would be a rat hole, I will take a note to try to go into this on the list. But I don't believe that the summary of the problem space is correct.
A: I think Sebastian is in the mic line.

C: Yeah, maybe it's best to further discuss this on the list, or, if it's an important topic, it can of course be discussed afterwards as well. But let me quickly go through the presentation.

H: Yeah, my comment was that at least this should appear in the security considerations.

C: Good. Another open topic: there is a set of new end-of-experiment requirements added, and there is a discussion topic on what the preferred end-of-experiment treatment of ECT(1) packets is: must they be treated as if they were not-ECT, or as ECT(0)? That might also be an interesting discussion, which we hopefully don't have to turn into reality. But okay.
A: I would like to clarify. There are some open issues, but is it the correct sense that you think this is maybe one revision away from a working group last call, and that that's a quick revision? Or am I wrong about that?

C: Yes, I think, if the discussion goes like it is now, we can, let's say, make decisions on the open points soon. We did already a lot, I think.

F: To comment on that: yeah, there are a lot of edits from Gorry's points and David's; most of them are non-normative now, so hopefully any discussion on the latest normative changes can go on in parallel to sorting out the editorial stuff. But there are some sticky issues left, so yeah.
E: I was going to ask how much of our favorite issue, RFC 3168 coexistence, hits the normative text in this draft.

F: It's no longer new, but it's still there, and I would love it if a lot of the 40 minutes of discussion was about that: what is required and all the rest of it, and whether it's consistent with, you know, the L4S ops draft and so on. Because there hasn't been any discussion about the text on the list since we posted it last week, but there's been a lot of discussion about a lot of other things.

F: So, Koen, have you finished?
A: Yeah, so I would like some feedback. I would like to open up the mic lines and encourage people to share their thoughts on whether you share what has just been expressed about the maturity of the draft. Specifically, I think we were looking for a sense of whether people that build transport protocols think this is doable for their transports, and whether the requirements described are all clear enough and reasonable enough that L4S transports could be developed.

K: All right, thank you. So, starting with what's on the screen right now, this business of the detection heuristic: one thing I noticed is that there doesn't seem to be a normative description of how such a detection heuristic should be implemented.

K: There's a description of what it should do and what it should achieve, but not how it works, and I don't believe there's a working implementation that we can test to see how effective it is.
C: Yeah, that was... let's see.

F: It just sometimes thinks an L4S AQM is a Classic one; you know, that's the base we're starting from. So we agreed that it's not something you'd want to be happening automatically, because it would be finding false positives, but it's a start in your monitoring, and then, if you're out of band, offline, doing your testing, you can go and check a bit more carefully after that.
F: Well, I wouldn't say it needs to be normative. In fact, there's hardly any algorithm, and this has been discussed off-list by other people, about which there is normative stuff in this document, and that is very deliberate. I'm not talking only about the detection; I mean just in terms of a congestion control. It's not a normative document about a congestion control; it's a document about what other congestion control documents, and implementations, have to do.

F: Yes, it would be great if there was just one way of doing this, or if there was one perfect way of doing this, but there isn't, and there are ideas on how to do it different ways. We don't want to stop that, so I'm not sure that we need to specify an algorithm.
C: So, as source code in the L4STeam Prague repository... oh no, it's not part of that repository. Probably, Bob, the source code is accessible, right? And there is a white paper which even describes a lot of extra ideas that can be used, but that were not, let's say, explored by us, or only partly.
C: But the code that is available is highly parameterizable, and there are a lot of parameters that can be tuned to detect specific cases. And it's true, it's our, let's say, real-world deployment or real-world experience that needs to find out how these cases can reliably, and automatically, be detected. That's based on the algorithm that exists, and you say it's not working; it is working, and it can be tuned to be, at this stage at least, very...
F: Now, hang on, it shows extremely few false negatives out of millions of tests, extremely few false negatives. And I'm sorry, if you're worried about that few false negatives, on an issue that doesn't really, you know, stop everything from still making progress, then, you know, I'm out of here. That's just asking too much.

E: I think I'm hearing a discussion here that ought to be reflected in the draft, as opposed to left to a white paper that's referenced, particularly the discussion of when and how the parameters would need to be tuned.
K
I
would
agree
with
what
david
said
and
also
just
point
out,
that
it
is
not
just
long-running
flows
that
are
affected.
We
have
data
showing
that
short
flows
are
also
a
problem
here.
F
F
The
the
experiments
have
taken
short
loads
of
short
flows,
hitting
long-running
flows
and
done
them
with
cubic
and
then
replace
them
with
l4s
and
shown
that
they
don't
make
a
difference,
except
in
one
or
two
very
small
cases.
So
yes,
of
course,
flows
are
affected
by
other
flows.
But
what
do
you
mean?
There's
no
unfairness,
issues.
C: That's vague, and related to where it is described: currently, I think, it's in the operational guidelines draft.
F
Now
it's
referenced
in
the
appendix
that
this
refers
to
is
the
appendix
a15
for
rationale
and
bit
higher
up
it's
you
know,
and
and
it's
that
that
white
paper
is
summarized
in
the
reference,
sorry
in
the
appendix
and
it's
which
metrics
you
have
to
measure
and
what
the
main
results
are
and
the
weaknesses
of
it
and
everything
is
all
in
the
appendix
that
this
refers
to.
It
is
already
in
the
draft.
A: Now, I might make a suggestion. If I understand correctly, some of the people in this meeting have been implementing transports for using the L4S service, and obviously they must have been doing something in this regard. So it would be great if we could hear what they're thinking about this: whether the white paper reference was okay for them, or what was described there was clear enough, or whether the goals of eliminating false positives and false negatives to the greatest extent possible were hard for them.

A: Basically, what their experience was trying to implement some kind of classic bottleneck detection algorithm.
F: Can I add something I think would be useful to put in this draft? There's a very short section in the white paper that describes an out-of-band test that's very simple, for detecting not only whether something is a classic, you know, 3168 AQM, but also whether it's a single, shared queue or multiple queues. And it's just two flows; it's just something...

F: That's much more difficult to do passively, because you've only got the one flow that your congestion control is controlling. That test is very reliable, and we could put it in this draft. The whole idea is that once you've got a candidate that you think might be a Classic AQM, you can then just run that test, and that's it. And basically there are four states.
F: You know, you look at the delay and the throughput of the two flows, and if the delay ratio is greater than one and the throughput ratio is greater than one, and so on; the four possible states of the matrix tell you which type of AQM it is.
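The reduction step of that two-flow matrix can be sketched in a few lines (the quadrant labels and the mapping to AQM types live in the white paper's table, so only the comparison step is shown here; the names are illustrative):

```python
from typing import Tuple

def probe_quadrant(delay_ratio: float, throughput_ratio: float) -> Tuple[bool, bool]:
    # Run two probe flows through the candidate bottleneck and compare
    # their queuing delays and their throughputs. The pair of booleans
    # (delay ratio > 1, throughput ratio > 1) selects one of four
    # states, each mapping to one queue/AQM type in the matrix.
    return (delay_ratio > 1.0, throughput_ratio > 1.0)
```

Each of the four (True/False, True/False) combinations is one cell of the matrix described above.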
C: Maybe something to add, if you can hear me... I believe so, yeah. So QUIC also allows a lot of possibilities there. It shouldn't be only TCP: in QUIC you can have multiple flows with different congestion controls, and potentially even with different ECN treatments, so there different types of traffic can go side by side, and heuristics, or at least the detection mechanisms, can more easily be used.
E: The VPN reordering issue is going to require more discussion on the list. It's come up very recently; there have been some very long discussions, and it's very hard to tease out from them what's going on there. I think there's an actual problem there, but I want to take that to the list, as opposed to spending the next half hour on it.

H: Yeah, all right. We've talked about a lot of these things as L4S topics. On this one, I think we really need to focus in on what L4S does differently, not just the generic problem of Internet paths and the reordering thresholds in things like IPsec, because this is a topic that's going on all over the IETF at the moment.
K: Okay, besides the two things that have already been mentioned, the other thing I wanted to bring up was the success criterion for the experiment.

K: The safety case for L4S, and how that affects how much, or how widely, it can be deployed on the Internet; for example, whether it satisfies the RFC 4774 Option 2 criterion, or whether it remains an Option 1 proposal, which requires a more stringent containment.

K: Right, but Option 2 is what says that it should detect what sort of path it's running over and adapt to that, and that seems to be the approach this is trying to aim for, with the detection-based adaptation, which we don't need to discuss any more today. But that doesn't sound like an Option 3, which to me sounds more like inherent coexistence rather than adapting.
A: Go ahead, Bob. I think the only other person in the queue is Gorry at the moment.

F: Okay, I was just going to say that the amount of "unfairness", to use that quoted word, that you get still allows you to make progress, and that's why we've moved to this operational, L4S ops draft approach, where you can allow that not-ideal situation to persist for a short while and then you sort it out, maybe on human time scales rather than automatically.

F: And so, therefore, I think, you know, we're not talking here about something that blocks the Internet, or starves something, takes it down to its minimum window. They have a balance; it's just not a fair balance. And, you know, in particular in situations when you have a congested situation, meaning that all the flows go to a low rate, it gets closer to one-to-one, not further away, which is important for safety.
F: So, you know, all these factors have to be taken into account in how you classify what this situation is. But I'm surprised to hear David saying that, though maybe not completely surprised at this stage.

A: Okay, and I want to move on. So Gorry is in the queue, and then Jake, and then Pete following.
H: I was responding to whether deployment was a criterion for success, and I think in the IETF we've defined a number of different experimental RFCs, some that have seen quite wide deployment and some that have seen practically no deployment.

H: I think we just have to decide whether we wish to go forward as a working group and try a new version of ECN. We've been talking about it for a long time, and I just applied a little bit of pressure, perhaps, to people to make that decision, because there are going to be things that don't work so well, perhaps, with the new system; there are going to be things that work better; and there are going to be things that need to be changed. But at some point we have to decide.

A: Great. And Jake, you're in the queue.
B: All right, thanks. I was asked, I guess... so, Appendix A.1.5, the part clarifying the coexistence with classic congestion control: it talks about this concept of intervening on administrative time scales. I think Bob mentioned that just recently also. My question is about...

B: Is this intended to capture a notion that, when a user calls to complain to somebody, no matter who they call to complain to, all the plausible candidates that they're going to call are going to feed back into a channel that can take action? Or, I mean...
F
Mean
if
so,
is
that
no,
no
sorry
jake
can
I
answer
that
straight
away.
Please,
the
the
structure
of
all
the
all
the
requirements
from
the
one
that
was
on
the
screen
is
that
the
monitoring
must
go
on
all
the
time.
It's
not.
It's
not
doesn't
require
a
customer
to
phone.
You
know
monitoring
it.
The
hosts.
F
And
and
then
the
question
of
whether
you
respond
to
that
in
real
time
which
it
recommends
you
do
or
you're
allowed
to
have
done,
pre-validation
test
pre-deployment
tests
or
whatever,
so
that
you
don't
have
to
have
your
machines
always
doing
all
the
monitoring
all
the
time
that
that
was
the
process
that
we
tried
to
capture.
In
those
words
with
all
the
possible
options
of
it,
which
maybe
would
need
the
l4s
ops
draft
to
understand
it
all.
But
the
other
s
draft
isn't
isn't
that
different
from
what
it
was.
F
So
it's
the
general
idea
is
that
it's
recommended
that
you
do
everything
in
real
time.
But
and
if
you
don't,
you
must
still
do
the
monitoring
in
real
time.
Obviously,
because
you
can't
monitor
stuff
out
of
real
time.
B: Okay, okay. So this basically relies on essentially the absence of false negatives. Yeah, okay; just checking to make sure I understood. Thank you.

A: All right, Pete; and then following Pete I've got Sebastian and then Koen. I think we'll cut the line after you, so Pete, you're up now.
N: Oh right, just on the interaction between L4S and non-L4S flows: the L4S transports, at least Prague and SCReAM, share the same response to CE marks as DCTCP, or at least that's what it says now in Section 4.3.
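For reference, the DCTCP-style CE response being referred to is, per RFC 8257 and simplified here: keep a moving average of the fraction of CE-marked bytes seen in each RTT, and reduce the window in proportion to it.

```python
def update_alpha(alpha: float, frac_ce_marked: float, g: float = 1 / 16) -> float:
    # Once per RTT: exponentially weighted moving average of the
    # observed CE-marked fraction (RFC 8257, with its default gain).
    return (1 - g) * alpha + g * frac_ce_marked


def reduce_cwnd(cwnd: float, alpha: float) -> float:
    # Multiplicative decrease scaled by alpha: a fully marked RTT
    # (alpha = 1) halves the window, like a classic loss response,
    # while a lightly marked RTT barely reduces it.
    return cwnd * (1 - alpha / 2)
```

This is the sense in which the scalable response differs from a Classic one: the reduction tracks the marking level instead of halving on any sign of congestion.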
A: Could the topic, or the question, be characterized as: do Prague and SCReAM essentially perform like Data Center TCP in terms of this? Is that your question?

N: Not exactly. I'm talking about the interaction between L4S and non-L4S flows when they meet in a shared 3168 queue, which was starting to be discussed in terms of short and long flows. I'm just trying to establish, when we're talking about those interactions and the sort of harm that we see there: are we really talking about the same harm that you can see between DCTCP and conventional TCP?
O: Okay, a short one as regards SCReAM. One should keep in mind that SCReAM is fed by, you know, data from video encoders, and they'll typically have a frame cycle, with frames that vary quite a lot in size. If you use SCReAM with L4S, it will actually accommodate the headroom for those larger frames, which means that the queues will pretty often be empty. So it's not like the infinite data transmission that you would run, for instance, with DCTCP.

O: So there you have the difference. But I must say I haven't tried it out against normal TCP; I suspect strongly, though, that SCReAM will be the one that suffers the most in that respect.
E: Folks, please take a look at the minutes. I'm doing my best to capture the discussion as we go along. Ingemar, I think I caught your comment but didn't use your words.

H: So I'll let it go for now, thanks. So, Pete, can I just check: were you talking about the protocol, which I think is specified in the IETF only for particular cases, or the congestion control behavior? Which part was it: the DCTCP congestion control part? What was it of what's called DCTCP?
C
Thanks
and
and
maybe
to
to
add
there
like,
tcp,
proc
and
and
congestion
control
or
data
center
tcp
or
it's
congesting
automatically
are
only
let's
say
examples.
There
are
other
mechanisms
that
could
fit
there,
maybe
not
the
aimd
or
well.
So
it's
it's
not
really
defined
right.
C
The
only
let's
say,
interface
between
different
flows
are
their
marking
rates
and
the
marks
that
they
can
use
to
keep
the
buffer
under
control
and
to
share
a
fair
rate.
C
Classic
tcp,
it's
it's
another
story.
E: Attempting to rephrase Pete's question: is the experience of what happens when you mix, say, DCTCP with Reno or CUBIC flows in a shared 3168 queue applicable to what we can expect to see when we mix L4S and non-L4S flows in a shared 3168 queue?

F: I was going to say I'm not really sure why the question's being asked, because it's not intended that DCTCP is used on the public Internet; we built Prague for that purpose. And so it has slightly different behaviors, and those behaviors are diverging all the time; for instance, the additive increase in particular no longer pauses during the congestion-window-reduced period, it just doesn't have to increase on every ACK, and that makes it a lot smoother.
N: Real quickly, just to say that the reason for the question is: we have done testing on short and long flows and shown the harm in that, and I posted data earlier today which, if people want to look at those results, you can. But are we really just testing the interaction between TCP and DCTCP? And is it sufficient to just point to the DCTCP draft, which says we can't deploy it, as enough?

F: Right, so... I mean, certainly the startup behavior and search of TCP Prague is only going to be, you know... well, I won't say less aggressive; it's not going to cause any more queuing.
F: So if you're talking about short flows versus long flows: I mean, typically the problem has been that DCTCP was too lame against classic congestion controls. But, to be honest, that's why I was a bit flummoxed about you two asking about short flows, because I hadn't seen your mail this morning about new tests of short flows against long flows. But certainly we've done a lot of those; we've done loads and loads of tests of that sort of thing.

F: But I'll have to have a look at your tests; I'm still not really sure what the question is getting at.
F: So certainly don't use DCTCP if you're trying to test anything, because that's no longer...
F: Sorry, there were a few words that sort of dropped out there. "Does not consider DCTCP"... what?
E: Has this settled the RFC 3168 coexistence question?

F: Yes; if you're just talking about coexistence, then we're not expecting TCP Prague to coexist in a Classic queue with CUBIC, unless you're doing this testing, either live or out of band, and then you find that you've got a Classic queue and you deal with it, either by turning off your Classic AQM or whatever; you know, the text that's on the screen at the moment. So yeah, I mean, if you produce results that say the two don't coexist very well...
F
Yes,
we
know
that
and
that's
sort
of
the
whole
point
of
the
alphas
ops
thing
and
the
monitoring
and
all
the
rest
of
it.
A
Yeah, and speaking of that: Sebastian, it looks like you left the queue, so I've got Koen in the queue, and then I'd like to move to Greg's part of the meeting.
C
Yeah, I just wanted to say, about Gorry's point, that indeed, if L4S is successful, I assume most low-latency applications, interactive applications that want low latency, will use L4S to get low latency and also a smoother throughput. So it might not be, in the end, the service for quickly grabbing link speed when it's available, and it's also not important for interactive applications to use that rate quickly.

Hopefully, it ends up being a better ECN service and a new kind of selector for applications to determine whether they prefer low latency over top speed and throughput variations, and to only have a small queue, compared to downloads, which might want top speed and don't mind having a very big queue in the network, where, if capacity becomes available, it can quickly drain that queue and keep that queue full.

In case there are variations, or even if the throughput goes down, that queue grows quickly, instead of the L4S queue. So I think this is indeed a good point, and we should look to the future, and future evolutions, and think in terms of: in the future, what is still the importance of ECT(0)? Which I think is minimal. And hopefully, if the experiment goes well, it will end up with end systems that have to decide anyway whether they use ECT(0) or ECT(1) or Not-ECT.

There is a clear choice for both: either use ECT(1) for low latency, or use Not-ECT to avoid maybe complexities with old ECN AQMs, which are still there, so that they can be decommissioned as soon as possible. But okay, that's my view, and I think we should look to the future and not stop ourselves from any further evolution.
A
Okay, great. And from what I can tell, we have some follow-ups to do on this draft, and I guess some of the open items we talked about are closely linked to the guidelines work that Greg is going to talk about the status of. So actually, I'd like to not try to check consensus on the transport requirements right now, but maybe do that more towards the end, since some people's thoughts on them might be linked to the discussion we'll have on the operator guidelines.
L
Okay, sure, will do. All right, so this is just a quick overview of the updates in the L4S ops draft. Last week I posted the first working group version of this.

I was the editor of the individual drafts, and I continue to be the editor of the working group draft, and again, I'm just the editor. Several folks have contributed the text that exists in the draft, and I certainly do welcome suggestions for additional text if it's needed. Just a quick status: there were three individual drafts that had been discussed in previous IETF meetings, the working group adopted this draft in late March of this year, and then last week I uploaded the first working group draft version.
L
If you want to look at all the edits: the major changes were a new Section 3.1, which summarizes the recent studies on deployment of RFC 3168 AQMs in the Internet, and there are three that are discussed. Jake's study, which he presented at a MAPRG interim last year, indicated a small number of ASNs that had significant deployment of RFC 3168, based on the graph that was shown in that presentation.

Actually, in all these studies there's no direct detection of FQ versus shared queue, but nonetheless that, I think, is the largest study where there's detailed data available.

The next study showed incidences of CE marking on paths across the world, and three countries were mentioned as having a prevalence that was, you know, exceeding the global baseline that it had shown. There's no explanation currently on a couple of those: for China, one percent of the paths, we don't have, as far as I know, further information on that, nor on Mexico, where 3.2 percent of paths showed it. The third one was France, where six percent of the paths showed prevalence of CE marking by an AQM.
L
That seems to be largely consistent with the comments made on the mailing list about a large ISP in France having implemented FQ-CoDel in their DSL gateways. So that's sort of the Apple result. And lastly, Pete's data from a small cooperative ISP in Czechia, where a subset of the backhaul links had an FQ-CoDel implementation that was deployed by the organizers of that cooperative.

And then, aside from those backhaul links, there were a number of other paths where CE marking was observed, or a CE marking or ECE response was seen, and that corresponded to roughly ten percent of the paths that did not have the FQ-CoDel provided by the ISP itself. So that small ISP seems to potentially fall into the category of one of the small number of ASNs that had a significant percentage of paths with RFC 3168 deployed.

In that case, I believe the discussion on the mailing list pointed to the likelihood of that being FQ in the majority of those cases, although again there was no direct evidence of FQ versus a single queue or shared queue, but it was based on, I think, knowledge of the ISP and the participants in that cooperative ISP.
L
The discussion on the mailing list led us to believe that the dominant deployment there was most likely FQ-CoDel. So that's the new Section 3.1. After publishing this draft -00, Sebastian pointed to another paper which had some data around CE marking.

I responded on the mailing list that I will include a link to that paper, or reference that paper, in an update of the draft, although the data that was observed there seemed to be more puzzling than enlightening. My recollection is they saw five percent of packets... this was a study of a single link at Equinix in New York City, and five percent of the packets showed non-zero values in the ECN field, but of those, 94 percent were CE marked, which does not seem to be indicative of actual classic ECN...
L
...behavior, with that high a marking probability. All right, so moving on. I also added a Section 6 that talks about actions that can be taken by an operator of an FQ bottleneck.

It's a relatively short section. It talks about, ideally, updating those FQ bottlenecks to be L4S-aware, which would be the first recommendation, and then points to some of the same remedies that are available to an operator of a single queue.
I
Thanks. So, in that paper that you mentioned, the five percent number truly looks a bit fishy, but they also report 0.3 percent ECN bits for ports 80 and 443, with a believable ratio of ECT(0), ECT(1) and CE proportions. So I think that supports the number from Jake that you report as "Akamai 2020". So, okay, okay.

L
Okay. And then finally, a new Section 7 was added discussing the conclusion of the L4S experiment, both if it's deemed successful and the RFCs move to Proposed Standard status, as well as the potential that it is concluded as an unsuccessful experiment and we would like to reclaim the ECT(1) codepoint.
L
All right, so this is the outline of the document, again now with a new Section 7.

One dimension that is provided in the outline here is whether the server is intended for, or is deployed in, a context where it's serving a small number of networks or a small population of end hosts (for example, CDN servers or servers operated by an ISP), versus the other category of hosts that are operating more generally and are serving content across a wide variety of endpoints across a wide variety of networks.

The other dimension that Section 4 goes into is what type of content, effectively, the host is serving. Is it a general-purpose server serving a wide variety of content, for example a web server, which may be using TCP and/or QUIC and is serving different file sizes and kind of general-purpose content, as opposed to a specialized server that is implemented to serve a particular type of content?

So an example that the draft mentions is a cloud gaming server that is running a real-time video codec and serving content in, you know, perhaps long-running sessions, but more real-time rather than file-transfer-type applications. So there are different expectations depending on which of those types of hosts we're talking about.
L
The draft does try to put the onus for any kind of 3168 detection, and mitigation of any issues that might result, on the server, as opposed to the client, since that can be more easily managed; the operator deploying the server is clearly in a better position to understand what types of networks and what type of content is going to be served. So that's the outline. In terms of to-dos and discussion...
L
The second one is discussion of the risk of incorrectly classifying a path. So again, what is the result of a false positive or a false negative in terms of detecting 3168? And then the third one: the draft talks about, potentially, in certain cases, a host maintaining a list of paths, or endpoints, on which 3168 was detected, and it was requested that we add more information on how a host might attach to or maintain that list.
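One hypothetical shape for the list being discussed here, a sketch only and not text from the draft: a small per-host cache keyed by destination prefix, with entries that expire so stale detections eventually get re-probed. The class name, TTL, and prefixes below are all invented for illustration.

```python
import time

class Rfc3168PathCache:
    """Hypothetical per-host cache of destinations where classic
    RFC 3168 (non-L4S) AQM behavior was detected on the path."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._entries: dict[str, float] = {}  # prefix -> detection time

    def record_detection(self, dst_prefix: str) -> None:
        """Remember that classic ECN behavior was seen toward this prefix."""
        self._entries[dst_prefix] = time.monotonic()

    def is_classic(self, dst_prefix: str) -> bool:
        """True if a non-expired detection exists; expired entries are
        dropped so the path eventually gets re-probed."""
        t = self._entries.get(dst_prefix)
        if t is None:
            return False
        if time.monotonic() - t > self.ttl:
            del self._entries[dst_prefix]
            return False
        return True

cache = Rfc3168PathCache(ttl_seconds=1800)
cache.record_detection("203.0.113.0/24")
print(cache.is_classic("203.0.113.0/24"))   # True
print(cache.is_classic("198.51.100.0/24"))  # False
```

How the host actually detects 3168 behavior, and at what granularity it keys the cache, are exactly the open questions raised in the discussion.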
L
...L4S here, but it's kind of a general issue that arises with VPN implementations. And then there's the discussion of re-marking ECT(1) to Not-ECT, which is listed in the draft as a kind of last resort: bleaching ECT(1) to Not-ECT. That does violate one of the requirements in the ecn-l4s-id draft, and so we either need to agree that that violation of that requirement is acceptable as a last resort, or eliminate that as a potential remedy. So that's what I've got for slides.
E
I figured I'd queue myself as an individual here. I don't see a mention of marking DSCPs on L4S traffic to increase operators' ability to deal with it. This draft seems like the right place to have that discussion.

L
Yeah, that's a good point; that should be listed as further discussion. And yeah, we can certainly add that as an option, if there's a general view that it is a worthwhile mechanism that can be used by an operator.
L
Whether that would require endpoints to do some specific implementation in order to support it or not; if it's relying on your network boundaries to look for a particular DSCP and, if it's present, bleach ECT(1) to Not-ECT, and if it's not present, don't: there are a number of options there, and it'd be nice to be clear about what we think is a worthwhile usage of DSCP in this context.
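The network-boundary variant being described can be sketched as a single re-marking rule. This is an illustrative sketch, not a mechanism from the draft: the guard DSCP value is invented, and only the ECN codepoint constants come from RFC 3168.

```python
# ECN field values in the IP header (2 bits), per RFC 3168;
# ECT(1) is the L4S identifier in the ecn-l4s-id draft.
NOT_ECT = 0b00
ECT1    = 0b01
ECT0    = 0b10
CE      = 0b11

GUARD_DSCP = 42  # hypothetical value an operator might choose

def boundary_remark(dscp: int, ecn: int) -> int:
    """At a network boundary: if the guard DSCP is present, bleach
    ECT(1) to Not-ECT; otherwise leave the ECN field untouched."""
    if dscp == GUARD_DSCP and ecn == ECT1:
        return NOT_ECT
    return ecn

print(boundary_remark(GUARD_DSCP, ECT1) == NOT_ECT)  # True: bleached
print(boundary_remark(0, ECT1) == ECT1)              # True: untouched
```

Note that this rule only ever touches ECT(1); ECT(0), CE, and Not-ECT pass through regardless of the DSCP, which is what keeps it a narrowly scoped last-resort remedy.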
E
Yeah, right, okay. I think, for this question, this makes sense. I will make a note to myself to send a note to the list about what it would mean just as a network mechanism, without endpoints actually getting involved in reacting to received DSCPs.
M
I used to be an engineer; now my days are full of meetings, and I feel like a manager, so I apologize that I've not tracked all the work as closely as I would have liked. My goal is, I'm trying to get a low-latency internet deployed in my lifetime.

I've seen throughput go from 300-baud dial-up modems to kilobits to megabits to gigabits, but round-trip times are still stuck at about half a second or worse, and I'd like to see us fix that before I die. RFC 3168 was published 20 years ago, and it's still not widely deployed.
M
So some observations I've made: even with some kind of FQ, statistically putting flows into hash buckets, queues will be shared between different flows, and the super-low-latency flows want to be protected from other flows sharing the queue. That implies we need some kind of input queue selector, like L4S with ECT(0) versus ECT(1).
M
We also have the consideration, because I work for Apple and we make end systems, that the end systems want the lowest delay they can get. That means if the bottleneck link is L4S or similar (whatever this new, better thing is, whatever we call it in the end), if that super-low-latency queuing is available, we want to make use of it; but if the bottleneck is classic ECN, we don't want to give that up.

In private conversations with colleagues, I've heard talk about using DSCP codepoints, instead of the ECN bits, as that queue selector, and I joined this meeting today hoping to hear more discussion of that. It turns out I was mistaken and that was not the focus of this agenda, but that is something that I think we will have to pursue. I know DSCP codepoints are defined as per-hop behaviors, not end-to-end behaviors, but maybe, if we decide that's not right, that's something we can change.
M
So that's my plea: let's get this done before we're all dead. And my slightly more focused question is: maybe we should pursue DSCP marking so that an endpoint can indicate "I support classic ECN, and I also support this new thing, so give me what you've got."

E
Stuart, this is David. I have promised to swing the baseball bat at the yellow-jacket nest labeled "DSCP marking". I will do that on the list. I suggest you get your Kevlar armor out of storage; it's going to be interesting.
M
Yes, I fully appreciate this is not easy. I'm just really depressed that we're stuck in such a logjam, when everybody on this call shares the same high-level goal, which is lower latency for flows on the internet.

One comment I'll make: it's very, very common for people to think that some traffic wants throughput and some traffic wants low delay, like that's an either/or choice, with the classic example being "I don't care about delay": I can sit and watch a two-hour Netflix video from start to end, and it works fine today; that's a problem we've solved. But if I get bored and decide to skip ahead to chapter seven, then suddenly, on that TCP flow, the video streaming client has got to abandon the media segments it was requesting and request a different segment. So even things that we think of as bulk transfer also benefit from the better agility given by lower network delays, so I actually think most traffic on the internet benefits from lower delay.
L
Yeah, I agree. And I think my general observation of the situation we're in is that we're talking about issues of coexistence with classic ECN in a fairly small percentage of flows, or situations, in this very small proportion of links that have implemented a shared queue with classic ECN, and essentially all of those are solvable, fixable situations, right? All those classic ECN implementations can eventually get replaced with L4S-aware implementations, and we should keep in mind that our goal, at least I think a common goal, is to improve the latency performance of the internet, and maintaining pure 3168 deployments as they are today shouldn't be a strong goal.
M
We know other vendors who've got products in the pipeline, and these products, once deployed, are not going to go away for a decade. I'll give one example from my personal experience: my home internet connection is a cable modem, and most of the time the download bottleneck is at the CMTS.

So the queuing at that CMTS is what dictates my delay, and if CableLabs is successful, that will be upgraded to something better soon. But that connects to a separate Wi-Fi access point, and as long as I'm close to it, it gets hundreds of megabits per second and the CMTS remains the bottleneck.
P
Yeah, thank you. I'll start by saying that I agree with what Stuart has been saying, and the longer he talks, the more I agree with him. But I wanted to mention one other thing here: I've been working on a couple of drafts about multipath in QUIC, in the QUIC working group, and two things have come up there. Basically, well, a big thing that I've come up with there...

So it seems to me that Stuart's point about relying on being able to have some of the packets go to the left and some of them go to the right, because for some of the packets going to the left, you know, the senders don't care, is something we really need to think seriously about. Like Stuart, I have not watched this group's work as closely as I did when I was the area director, but I'm very pleased to see the discussion about things like possible DSCP guards and things like that, that might be able to make it move faster and be more realistic in whatever deployments look like. So I do want to give the working group kudos for continuing to come up with good ideas, even though I know this has been difficult.
B
All right, I came into the queue when Greg mentioned "small", you know, in terms of the scope of breakage, I guess, or the difficulties; I forget the exact phrasing. But the point I wanted to make is that 0.3 percent might sound pretty small, but with 4.5 billion internet users, that means some 13 and a half million end users are potentially affected here. And I think there's a good and legitimate question of how many broken flows we're talking about, and how badly they're broken, with the coexistence questions. To this end, I did put out an email a couple of days ago about the plausibility of a flag day, and there's one point I want to highlight here in this context, which is that I think it's a really useful thing to talk about our breakage budget: how many flows, what the user experience is, and sort of what's going to be okay. Because, to Stuart's point, you know, we're going to be stuck with some 3168 queues for a long time.
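The scale point here is simple arithmetic; the sketch below just reproduces it using the round numbers from the discussion (it is not a measurement).

```python
# Back-of-envelope check of the scale claim: ~0.3% prevalence
# applied to ~4.5 billion internet users.
internet_users = 4.5e9        # round number used in the discussion
classic_ecn_fraction = 0.003  # the ~0.3% prevalence figure

potentially_affected = internet_users * classic_ecn_fraction
print(f"{potentially_affected / 1e6:.1f} million users")  # 13.5 million users
```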
B
But if we do have a credible plan for making the world safe for coexistence by deprecating the old ECT(1) behavior, then at what sort of level would we be willing to push that forward and induce some level of breakage, while satisfying ourselves that it's within sort of a manageable scope? So I would encourage further discussion on that point.

I thought Bob's response to that was promising, as well as the others that have responded so far, but we'll have to take that to the list, obviously. But yeah, that's it. Thanks.
L
Yeah, and again I'd point out that the other aspect of that is: what is the timeline for remedying those situations where there is some, quote, breakage? You pointed to a fairly small number of ASNs where clearly, or presumably, the ISP was deploying the 3168 bottleneck.

So that's a small number of people that need to be made aware of the redefinition of ECT(1) and CE, and maybe a large swath of those situations can be remedied either by not deploying L4S on those networks, and/or by getting those networks upgraded to L4S awareness. And then, on the 0.3 percent baseline: moving as quickly as we can to implementations that can continue to support 3168, if that's of interest, but do so in an L4S-aware way, so that we don't have the coexistence issue.
A
All right, I see four people in the queue, and I'd like to use the last five minutes to see if we're getting close to consensus on this draft's aims, so I'd like to have people keep that in mind when they comment. So, Bob, you are next in the queue.
F
Yep. I wanted to pick up on the point that Jake was making, which was why I originally came to the queue, even though Jake hadn't talked about it then, in that there's been a lot of talk about 3168, and everyone is sort of not really distinguishing between multiple flows in a 3168 queue and, you know, single flows in those queues.

Something that came up in Jake's conversation that I thought maybe was a big difference between us, and maybe the one the group should focus on, is that there seems to have been a presumption that any mixing of queues, say due to a hash collision in a 3168 queue in an FQ, is problematic. And this comes to the point Pete Heist made about short flows, and what is a short flow. Because the point I made is that unless you've got time to allow the flows to converge, if there's a little bit of impact as the flows happen to coincide in a queue, but they're, you know, a one-megabyte flow lasting for less than a second or whatever, it doesn't...
F
You can't really worry too much about the fairness. It's only when you get to the much longer flows, where humans can actually tell the difference, that you really need to be worried. And so I was very concerned at the calculations Jake had done, you know, sort of back-of-envelope numbers as to how likely it was that these things are going to appear in a queue, which looked to me more like birthday-paradox-type numbers, as though there were large numbers of flows possibly colliding, when you need to first of all worry about how many long flows ever collide, even if they were in a FIFO (you know, how many of them collide in time, if you like) before you know whether they might collide in the hash.
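The kind of back-of-envelope number being debated can be sketched like this. It is a toy model only: it assumes n simultaneously active long-lived flows hashed uniformly into k buckets, which is exactly the simplification being questioned here, since the real question is how many long flows ever overlap in time at all.

```python
from math import prod

def p_any_collision(n_flows: int, n_buckets: int) -> float:
    """Birthday-paradox probability that at least two of n_flows land
    in the same of n_buckets, assuming uniform independent hashing."""
    if n_flows > n_buckets:
        return 1.0  # pigeonhole: a collision is certain
    p_all_distinct = prod((n_buckets - i) / n_buckets for i in range(n_flows))
    return 1.0 - p_all_distinct

# With 1024 buckets (a common fq_codel default), a handful of
# simultaneous long flows collide only occasionally:
print(round(p_any_collision(2, 1024), 4))   # ~0.001
print(round(p_any_collision(10, 1024), 4))  # ~0.043
```

Even this toy model shows the collision probability stays small unless many long flows are concurrently active, which supports the argument that the time-overlap question has to be answered before the hash-collision one matters.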
F
So I think that's an area where the working group needs to focus, and that would also be a focus of this draft. Because if we haven't really got a problem in FQ systems, apart from the VPN one, and we have anyway got a solution for modifying those FQ systems to separate out ECT(1) and ECT(0)...

Obviously, you know, they'd have to be modified. But if we're talking about modifying them anyway, to change their ECT(1) behavior to Not-ECT, why not change it to do the L4S thing, which is really simple in an FQ-CoDel or Cake system, and you're already in the code, apart from just a classifier? Now that, I think, would massively cover a huge area of what people seem to think is a problem. And I didn't realize that maybe they were thinking this was a problem because they were imagining there were all these flows coincident with each other, which may not be the case. So we need to focus on characterizing that problem.

And I know we've only got five minutes; I just wanted to quickly respond, or give some tutorial if you like, to Stuart and Spencer about the DSCP thing. So it's not as a classifier, Stuart; that's not what's being proposed.
F
I mean, it could be as well, but this idea of the guard DSCP is not as the classifier; it's as something that sort of walks along beside the packet, if you like, to tell you which domain it went into. And it's also related to the question of whether the receiver has to tell the sender what it got, and I think that's a non-starter if it does. I'm hoping to see that from David afterwards, yeah.

F
...to have it, though? Well, no, I'm just... I don't think we've got a problem with needing a classifier. We've got a classifier. What we haven't got is separation of the two ambiguous meanings of...
C
Yeah, I also wanted to respond to Stuart's remarks. So, if there is currently a push for ECT(0) deployment, I hope ECT(1) is kind of covered in a way so that you're also putting in the requirement of at least not marking these ECT(1) packets as if they are ECT(0), and potentially, hopefully, also thinking future-safe, to mark them in a scalable way, or to support an L4S mechanism in some way. And a second topic is the question, in the future...

I guess, from an applications point of view, if there is a mixture of deployments, your low-latency traffic will clearly use L4S, and probably, if latency is not the problem, you will use either ECT(0) or drop. So maybe also thinking about what the value of ECT(0) is in the future is something that should be considered. And then, on Diffserv:

I agree with Bob that, as far as I've seen in all the discussions, there is no bulletproof solution, even with guard DSCPs. Because if a network lets Diffserv through, and there are ECT(0) or ECT(1) marks there, and there is a classic ECN, single-queue AQM, nothing is stopping that AQM from marking those ECT(1) packets with CE, even if there is a Diffserv codepoint on them, in the same queue as the ECT(0) traffic. So I don't see, directly...
M
I would like the even-lower-latency stuff even better, but I don't want to make a bid in the network saying "please give me L4S" and then find that's not on the path and the bottleneck was actually classic ECN, and then I gave that up. So, as an end-system developer, I think my first priority is: I set ECT(0) to get ECN marking when the bottleneck supports that. If I can additionally put an extra mark on the packet that says "but I also support L4S"...

F
Quickly: yes, I totally agree, and that's what we're aiming for. Have a look at the "plausible flag day" (or whatever) thread; it's in there.
A
Right, I'd like to give Jonathan time to say something brief before we try to check consensus. I think you're...

K
Thank you, okay. Thank you. I'd just like to mention that the existing technology of FQ-CoDel (I believe that's RFC 8290) already provides a combination of low latency and reasonably high throughput, in a way that's already being deployed, and this can be applied both at the last mile and at the Wi-Fi access point, which are usually, for consumers, the most significant bottlenecks.
A
Okay, thanks. And Greg, I'm going to steal the presentation window from you.

So, I think this is actually quite close to what Martin just sent to the TSVWG mailing list, saying: I think we're getting a sense of the understanding of classic bottleneck deployment, a sense of the possible conditions where there's maybe an issue that could arise, and we also have a list of things that can be done to help mitigate that, and we know how difficult some of them are and how effective they are. So I want to get a sense from the group of whether this is essentially converging to something that's going to be publishable and make L4S suitable for experimentation in parts of the internet.
E
Do me a favor, as the frantic taker of minutes: paste the question, and what the responses mean, into the Webex chat, from which I can copy them into the minutes, and then summarize the responses, so that we know what the question was that people thought they were responding to. Absolutely.

Q
All right, now, we're not taking that one. So, just a friendly reminder: there's a raised-hands tool here, which might work better.
Q
I know it exists in the app; I'm not sure if it's just in the browser. I don't see it... well, there. So, in the app there's a little emoji-plus-reactions button, kind of next to Share, and you can have an emoji, but you can also raise your hand, which would then show up in the participants window.

I don't know if somebody with a browser can verify that it exists in some way in the browser. In the browser it's probably...

Now, so I guess we've now at least identified that it exists. And yes, I agree it's horribly unintuitive, but it's there, and it might work better than trying to use a chat window to count heads, because you can then sort the participants list by raised hands, and so it should be relatively easy to count.
A
And, in addition, I think I see some people have typed "support" who haven't figured out how to raise their hand, so this is probably a little bit of a mixture.

Yeah, and now, okay: please only raise your hand if you disagree with the statement shared. And I know I saw several in the chat already.

Okay, well, that's half a dozen in the hand-raising, and I think, maybe there's one more in the chat, if I read it correctly. So, okay, I think that's good. All right, well, I don't have... I think that was the real thing I wanted to accomplish this meeting. David or Gorry, do you have anything else you want to do before we close?
H
From my perspective, it's useful just to tell the list that the chairs went through the document and looked for textual things that would also come to us, and that maybe we'd stumble on later in the process. So we've also sent in a set of comments, which we posted to the list, and hopefully the authors will be able to address many of these in some way, so that, as we get on with the process, we have something that we can all clearly understand and be clear about.

F
Yeah, and as the editor that's got to cope with all that: how would you like it? I mean, I've written responses to most of them, but I haven't posted them on the list, just because of the amount of list traffic at the moment; I wanted to focus on the particularly normative text. So, do you want... it's just, I wanted to try and group some together that seem to be related, respond to those, and maybe put the numbers of them in the subject line.
H
I really don't mind how it's done, as long as we end up with a document at the end that people can actually go through line by line in the working group last call and say they agree or disagree with. Because our comments are mainly to try and make it clear what it is that the document is claiming. So I don't think you have to do blow-by-blow accounts, but you can. It depends on how you want to use the bandwidth of the mailing list, right.