From YouTube: IETF110-TCPM-20210312-1200
Description
TCPM meeting session at IETF110
2021/03/12 1200
https://datatracker.ietf.org/meeting/110/proceedings/
A
Okay, this is the usual Note Well. I think people are already familiar with this one, but just in case: by participating in the IETF, you agree to follow the IETF processes and policies. If you have any concerns about that, please check the URL on the slide and read it carefully. Also, as a reminder, this session is being recorded, so it will be published eventually. Please go to the next slide.
A
Okay, this is logistics. We really appreciate Richard being the note taker, thanks so much, and Michael will take care of the rest. And then a simple reminder: when you submit an Internet-Draft to this working group, please include "tcpm" in your draft name so that we can track the status of your draft.
A
Okay, please go to the next slide. This is the agenda for today's meeting. First, the chairs will talk about working group status. After this we have two presentations for working group items. One is YANG TCP from Michael, and after that Bob will talk about two ECN drafts: one is Accurate ECN and the other one is Generalized ECN.
A
After this we have seven presentations for non-working-group items. First, Vidhi will talk about the CUBIC draft, and, as you might know, we have initiated a working group adoption call on this one. We will make our decision and then make an announcement very soon, but at this moment the draft is still listed as an individual document.
A
After that, Yoshi will talk about the aggregated option for SYN option space extension, and, if time permits, Alexander will talk about RTO-dependent flow-level generation. That presentation does not have a specific Internet-Draft yet, but the topic could be of interest to this community; that's why we placed it on the agenda.
A
This is the RTO considerations draft, and the second one is RFC 8985; this is the RACK draft, and we appreciate everyone's effort on these, thanks so much. We also currently have 2140bis for the base draft, and that draft is currently under IESG evaluation. Moving on to the next slide, please. Okay, currently we have seven working group documents.
A
The new version of the draft has been published, but we don't have a discussion of this draft in this meeting; we will have a chance next time. The next one is the YANG TCP draft; this draft has been updated since the last IETF, so we will have a discussion of it in this meeting. And the next one is the RFC 6937bis draft.
A
That draft has recently been accepted as a working group draft, so the discussion has just started. The last one is the idio draft; currently this draft has expired, but as far as we have checked with the authors, they intend to continue this work, so we are just waiting for the new version. That's the current status. Any questions on the working group status?
B
Okay, thanks a lot for giving me the opportunity to talk about the YANG draft.
B
I hope that you can hear me. This is joint work with Vishal and Mahesh, and some of the content also has contributions from two of my students who have done some work on this YANG module, and I'd like to report on that in this talk. So next slide, please.
B
The first slides are just a brief recap of what this draft is about; I'll go through them rather quickly. This is a relatively new working group item that defines a very basic YANG module for TCP configuration.
B
We have discussed quite a bit about the scope of this document, and we decided that it should have a very narrow scope. That means it currently includes four different parts in the model. Two of them are very similar to the old TCP MIB: that's basically the statistics and the basic TCP connection list.
B
If you go to the next slide: I've mentioned that one of the things to be aware of is that there are two TCP-related YANG modules in the IETF. There is the one that I present here, and the other one that's done in NETCONF. Given that there are two models, I try to better explain the difference between them, at least from my point of view, because they both evolve.
B
The basic difference is very simple: the draft in NETCONF basically models parameters of TCP as they're relevant for an application, and the specific application in that draft is actually the NETCONF client and the NETCONF server, but the model is probably applicable to other TCP-based applications as well. So that's the item that exists in NETCONF.
B
There is some overlap between those two models, and that's why we also, at the moment, import some parameters into the TCPM model from the other one. But from a high-level perspective I'd like to emphasize that there's a different scope, and I hope that this clarifies why we have two different models in the IETF for different purposes.
B
If you now go into more detail on the specific model that we discuss here in TCPM, the full model is shown on the next slide.
B
If you go to the next slide, you will see the typical tree diagram that's used to show the content of a module. This is the complete list of the parameters that exist in the model; you can see it fits on two pages, so it's not a very complex YANG module. It basically consists, as I have already mentioned, of a connection list, that's on the left-hand side; the counters that are similar to the TCP MIB, on the right-hand side; and some imports from the other model that I've just mentioned, on the bottom part of the left side. So that's the model, and as mentioned before, it's a relatively simple model because we decided to keep it as simple as possible and not try to boil the ocean. Next slide. The next slide talks about what we've done since the last meeting.
B
We've not updated the document a lot, but we started to look at how we could get an implementation of that model. For getting there, I've started a small research project with two of my students: I basically told the students to try to come up with an implementation of the model, and they have at least started to do that. On the following slides I'm reporting some of the lessons learned from that implementation; the implementation is still work in progress.
B
So this is an early report. Nonetheless, there are some findings that are probably useful to report, and that probably require some updates to the document, which is shown on the next slides. We will try to publish the code for this prototype when it's available, but we are not there yet. So now, regarding lessons learned from these prototyping efforts: there are three. The first one concerns the connection list.
B
The connection list is relatively straightforward, but it's modeled in the YANG module as read-write, and that's something that could create confusion, because you could assume that you could use this model, for example, to create a new connection. However, that would be difficult in reality, because you typically need an application endpoint for the connection as well.
B
So the fact that this connection list is read-write originates from constraints in the YANG semantics, not because we want to allow creation of connections through the model. The plan for the model, therefore, is to better document that the read-write status of the connection list is needed because of YANG semantics.
B
So that is a relatively straightforward change that we plan to make in the next release of the document: it's basically better documentation and an explanation of why the connection list has to be read-write. That's the first finding; if there are no comments, I'll go to the second finding. The second finding, or comment from the students, is that it's very hard to understand why we actually import the client and server groupings from the NETCONF YANG module.
B
This was originally added to this model to align the two documents, but it's relatively difficult to understand why it's actually needed if this model actually models something that deals with the stack configuration. So, at least in the opinion of my students, there is no clear use case for that, and one easy improvement, which would be quite a simplification of the model, would be just to remove those two imports.
B
If an application needs a client or server TCP configuration, it can directly import the groupings from the NETCONF model and then have the corresponding parameters. So there's no clear use case for doing that inside this model. The basic proposal for the next release of this draft is just to remove those two groupings.
B
And if there are no comments on that, I will move on to the last finding: in the statistics section of the model we have an RPC to reset the statistics.
B
Resetting counters is something that's typically available in routers; most router operating systems have such a configuration option. The same applies to some other operating systems: FreeBSD, for example, probably can do that. However, we tried to prototype the model on Linux, and on Linux it's pretty hard to reset the counters, because there's no easy kernel support for that. So that's why we could discuss whether to make that reset RPC optional; this would imply that it's a feature in the YANG module.
B
Yeah, just to be clear: even if it's optional, it would still be there in the YANG module. We are not suggesting removing it, because it's clear that it is useful. The authors themselves are not entirely sure whether we should make it optional or not. I mean, if a base operating system doesn't support it, it could also deviate from the YANG module.
B
The last slide is about other feedback that I got rather recently in discussions with Juniper and Nokia. They had a recent TCP-AO interop between routers, and since this model is about TCP-AO as well, I chatted with Melchior and Craig, and they reported one of the findings from their interop: namely, that for interoperability of TCP-AO it's required to have very clear descriptions of how to set the send ID and the receive ID, which are two parameters we have in the YANG module, because at the moment this can cause confusion in our YANG module.
B
That's something that might be of general interest, but, as I said, there are some specific lessons learned for the YANG module as well. That's actually all that I wanted to present today, and of course I'm open to further suggestions here.
E
Yeah, a question, or I think more a remark: you said that the read-write status of connections is something which looks suspicious to the students. I agree you can't create a TCP connection, but there might be some use in closing a connection; at least in HTTP, we have it in the MIB that you can close the connection.
E
Possibly. We have this in SCTP, because I think Ericsson wanted it, and at least FreeBSD can do it from the command line: you can kill a TCP connection.
F
Hi, I have a question. As you guys remember, I was pretty negative about this document when we adopted it, and the reason that was brought up at adoption time for why we needed it is that NETCONF wants it. It turns out now that NETCONF is still doing their own YANG model, and it's not even referencing this one; and if you look at the datatracker, no other document at the moment references this one. So I just want to ask again: why are we doing this? Thank you.
B
So the one document where a reference actually should be in place is the one in IDR. You will notice that Mahesh is a co-author of both, so we are working on getting the reference from the IDR model to this one for the TCP-AO part. That's the one where you need the reference. From the NETCONF one you don't need a reference, because you would not reference something like a stack configuration from an application model; you don't do that in reality. So, okay.
B
And other than that: I mean, of course, I know that we can discuss whether to do this or not, but you can see, for example, things like the last slide. Having clear documentation of how to configure certain parameters might be pretty useful even to people who don't implement a YANG module. So there might be value in just documenting how to configure certain things, even if the exact semantics of the YANG module are not implemented. But that's, of course, my personal view.
G
He had some places in his code where it put ECN capability on a RST and others where it didn't. He realized that would give clues to the internal state of the TCP state machine, so he suggested we add this to the drafts, which we've done: if you're going to set ECN capability on a type of packet, you set it on all of them, to avoid leaking information about the internal state.
G
On this one there's been a bit more substantial activity. Well, I'm not saying Richard's activity wasn't substantial, but there have been more people involved in this one. This is the Accurate ECN draft that's headed for Proposed Standard.
G
Next: the problem was to try to make sure there can be more than just one congestion indication per round trip. Next: the solution was to use three of the flags in the TCP header that were already used for other things, but overload them as a three-bit counter, and also to define an optional TCP option, if possible. Next: so, activity.
G
Since the last cycle, or rather in the last cycle, Joe suggested one possible other way to do the two orderings for the fields of the TCP option, because it all depends on which codepoint is being used more. Currently, or at the last meeting, the draft had two option kinds, and Joe was concerned...
G
They
they're
not
necessarily
plentiful,
so
he
suggested
using
the
first
bit
of
the
first
field,
but
there
was
some
light,
push
back
on
that
and
actually
very
early
on
in
this
pre
pre-working
group
draft,
a
use
of
the
one
bit
first
bit
of
the
field
had
already
been
discussed
and
rejected.
So
we've
left
it
as
it
is
next.
G
So
this
one
was
again
related
to
that
tcp
option.
There's
some
text
in
the
in
the
draft
that
makes
sure
that
there
could
be
forward
compatibility,
meaning
that
other
option
lengths
might
be
defined
in
the
future.
G
So
it
says
that
when
a
peer
receives
a
tcp
peer
receives
this
option,
it
should
not
reject
it
and
I
actually
must
not
reject
it
if
it
doesn't
have
one
of
the
valid
lengths
so
that
other
links
can
be
added
in
future,
but
they're
only
understood
by
machines
with
the
logic
to
understand
them,
and
there
are
some
texts
about
how
middle
boxes
that
are
claiming
to
be
transparent
should
not
or
must
not,
reject
such
unknown
length
if
they
haven't
been
updated
for
a
new
standard.
G
Michael
pointed
out
that
this
created
a
covert
channel
so
that
different
lengths
could
be
used
which
I've
identified.
Now
in
the
security
considerations,
I
I
I
said
I
wasn't
personally.
I
wasn't
particularly
worried
about
it
because
there
are.
G
This
is
a
known
thing,
but
it's
good
that
it's
prompted
an
early
sector
security
directorate
review
and
we
could
just
sacrifice
forward
compatibility
here
and
and
just
just
say
that
these
only
valid
lengths
are
allowed,
but
I
don't
think
there's
any
real
need
here,
which
is
why
I
haven't
caved
and
just
taken
it
out,
because
it's
not
really
a
new
covert
channel.
G
If
you
go
back
to
rfc
1122,
it
says:
tcp
must
ignore
without
error
any
tcp
option
it
does
not
implement
and
obviously
you
you've
got
a
covert
channel
on
any
unknown
option.
You
know
you
pick
a
number
that
isn't
used
and
that
is
a
covert
channel,
so
this
isn't
really
a
new
covert
channel
and
also
current
intrusion
detection
systems
already
close
off
all
these
unknown
options
and
unknown
lengths
by
blocking
them.
So
I'm
I'm
not
worried
about
this.
G
We
can
have
a
discussion
on
the
list
if
you
want,
but
it's
it's
it's
important
to
have
pointed
it
out,
but
I
don't
think
I'm
worried
next.
G
Yes,
thank
you
yoshi
or
whoever's
moving
the
slides,
michael.
Maybe
I
don't
know
so
now
we
have
three
slides
on
an
area
that
does
need
a
bit
of
thought,
so
sort
of
heads
up
and
the
shakespearean
question
is
to
act,
acts
or
not
to
hack
axe.
That
is
the
question.
G
Now
we
have
two
bullets
in
the
text
that
are
the
context
for
this
that
were
designed
to
ensure
that
when
congestion
starts,
it
rapidly
gets
fed
back
and
that
that
you
have
a
continuous
update
of
the
number
of
c
c
marks
when
you've
got
a
lot
of
them,
because
we
have
only
a
three
bit
counter
in
the
protocol,
so
that
will
be
wrapping
fairly
fast.
G
So
the
the
first
of
these
bullets
says
when
there's
a
transition
change
to
ce
marking
on
either
a
packet
or
a
a
data
packet
or
not
it
it.
If
it's
from
not
ce2ce,
then
send
it
immediately.
G
The
second
one
says
after
nc
marks,
you
should
sorry
you
must
immediately
send
an
ack,
but
n
can
be
anything
between
two
and
six,
but
it
should
be
two
because
the
counters
can
only
count
eight
and
and
while
reviewing
this,
I
realized
that
that
would
also
mean
you
would
have
to
send
an
ack
of
an
act.
G
If
there
was
a
series
of
acts
or
with
ce
marks
on
it,
and
then
we
thought
well,
actually,
let's
not
necessarily
remove
that
possibility,
because
that
might
be
useful
and
actually
it's
the
principle
of
not
acting.
Acts
in
tcp
doesn't
really
apply
here,
because
it's
new
information,
there's
new
congestion
information
on
the
axe.
So
it's
so
it's
not
as
if
it's
just
not
new
information
and
therefore
doesn't
need
hacking.
G
But
we
thought
a
bit
about
this
and
we
thought
the
case
is
where,
where
this
applies,
the
only
cases
where
this
applies,
as
far
as
we
can
think,
is
where
you've
got
a
volume
of
data,
as
shown
in
the
picture
and
the
the
acts
are
obviously
coming
back
from
that
that
and
in
this
case
they're
shown
red,
which
means
they're
ce
marked,
they're
congested,
experienced
and
then
there's
this
possibility
of
acting
every
n
of
them
in
this
case
every
every
second
one.
G
It
shows
that
the
host
on
the
right
is
actually
getting
some
feedback
of
its
acts
once
the
data
stops.
Obviously,
if
the
data
was
still
going,
that
feedback
would
be
on
the
data,
because
the
counter
just
goes
with
the
data
and
and
then
there's
potentially
a
ping
pong
back
to
the
other
end,
if
that
sequence
of
acts
is
also
congestion
marked,
but
it's
sort
of
unlikely
that
large
amounts
of
them
that
this
is
the
sort
of
corner
case
where
everything
is
c
marked.
G
So
that's,
that's
the
that's
the
question
now.
If
we
move
on
to
the
next
slide,
yoshi
pointed
out
if,
if
we
do
acax,
there
is
a
another
wrinkle
in
that.
If
the
data
direction
changes
as
shown
here,
the
data
is
the
thicker
arrows
and
it
changes
quickly
before
an
act
has
arrived,
then
that
those
acts
will
look
like
duplicate
acts,
because
the
right-hand
host
doesn't
know
that
you
know
it
just
looks
like
the
round
trip.
G
Time
suddenly
got
shorter,
and-
and
so
we
realized,
though,
that
maybe
this
this
is
okay,
because
you
a
you,
can
detect
it,
but
it's
already
a
bit
of
a
corner
case
and
if
you've
negotiated
sac
or
timestamps.
Actually,
then
you
can.
You
can
eliminate
this
by
just
checking,
if,
if
you
think
it's
dupac,
if
it's
not
got
zach
on
it,
then
it's
not
a
dupac
so
and
even
if,
even
if
you
didn't
bother
to
add
that
to
your
code,
you'd
get
a
spurious
retransmit
in
a
corner
case.
G
So
it's
not
particularly
worrying
to
me
anyway,
but
other
others
may
think
otherwise.
So
now
we
come
back
to
on
the
next
slide,
the
original
question
of.
Do
we
act,
acts
or
not,
and
there
are
two
positions
we
could
either
just
prevent
acts
of
acts
completely
and
that
the
green
text
there
just
says
only
acting
or
if
there's
outstanding
data
to
acknowledge.
So
then
it
won't
be
a
duplicate
and
it's
perfectly
valid
to
do
so.
G
The
other
one
is
to
take
the
opportunity
to
feedback
ce
on
these
acts,
but
damn
penny
ping
pong,
and
so
we
thought,
if,
if
we
make
the
numbers
here,
n
should
be
three
and
must
be
in
the
range
three
to
six.
It
will
damp
it
quite
fast.
It
won't
make
it
any
more
complicated
so
that
data
and
acts
can
be
treated
the
same
and
actually
there
are
complexity
and
simplicity.
Arguments
on
both
sides
in
that,
in
the
first
case
they
say
the
by
not
feeding
back
acts
of
acts.
G
You
can
get
a
a
build
up
of
congestion
marks
on
one
end
and
then
when
data
is
sent,
you
get
this
sudden.
Multiple
wrap
of
the
counter
at
the
other
end
and
you'd
have
to
deal
with
that.
In
the
code
or
you'd
have
to
ignore
it
and
then
deal
with
the
consequences
so
possibly
missing
a
load
of
congestion.
G
Yep, I've written all this up on the list, and I'd appreciate any comments on that, on the list or, if they're quick, here on the call. And I think the next slide is just a wrap-up one.
G
I think, and you can jump through this, some editorial work has gone on; thanks to Gorry for the review. And then finally, the status on the next slide: again, once we've resolved this ACKs-of-ACKs thing, I think we're ready for working group last call. And as I just pointed out, the Generalized ECN one depends on this, and we did say back in April 2020 that we'd wait for L4S for a bit and otherwise go ahead with this. So I don't know what you plan to do about that.
G
The definition of transparent is that the wire protocol doesn't look any different from one side to the other. So it's probably not, you know... certainly it would be very difficult to make everything look the same and have the same timing. What was behind the question?
I
Well, I mean, I think you can probably see why this wouldn't work for a terminating proxy. Just my interpretation of the word transparent is that the endpoint is not aware that it's not talking to the end server, which would include PEPs.
J
Yeah, I just want to add one small comment on the ACK-of-ACK thing. I think this is really a decision about what we want to support, because if we don't ACK ACKs, then there is a risk, a very minor one, but there is a risk, that the counter might wrap and we might lose some information.
J
If we have this ACK of ACKs, we have more accurate information. Currently we don't need this information; currently we can't do anything with it. But in future this information could be used for ACK congestion control.
J
So, you know, I think that's really the question we have to answer here: whether we want to support already something that is maybe needed in future for ACK congestion control, or whether we want to leave this as a separate topic for the future, which might then need more effort.
G
Can I just come back on that? Yoshi, if you can go to slide six or seven? Maybe seven, I think. Okay: it's not just for ACK congestion control. In this case, where you're exchanging volleys and one end sends and then the other, you've got congestion information...
G
No,
that
one
yeah!
That's
right!
You've
got
congestion
information
if
you're,
if
you're
on
the
right-hand
side
before
you
start
sending
you're,
potentially
getting
congestion
information
about
the
the
direction
you're
about
to
send
in
from
the
act
stream
of
the
that
you've
just
been
sending
on
that
direction.
G
So that could be used now to maintain your congestion window before you actually start sending a volley.
K
Okay, sorry, there might be echo without the headphone, and sorry if the volume is low. Hello everyone, I will talk about the updates that we have made to the CUBIC informational RFC 8312 to adapt it to the recent advancements in transport protocols. Next slide, please.
K
Some of the variables and constants you already know, and for reference I have put them at the bottom of the slide. One important variable that we are adding in the revised draft is cwnd_start, which is basically the congestion window at the start of congestion avoidance. Here's why we have done that: some modern networking stacks implement RFC 7661 for a better estimate of network capacity for rate-limited applications.
K
Now, this RFC uses either the pipe value or the loss flight size value to determine the slow start threshold and congestion window after a congestion event. This means that cwnd_start could be smaller than the product of W_max and beta_cubic. The second reason to use cwnd_start is that if fast convergence is applied to make room for new flows, W_max is further reduced, but we don't want cwnd_start to change when this happens.
K
With that in mind, let's take a look at K, which is basically the time it takes to increase the congestion window size at the beginning of congestion avoidance up to W_max. In RFC 8312, after the congestion event, cwnd is set to the product of W_max and beta_cubic, and the same product is used at the beginning of congestion avoidance. This resulted in the simplification of K's equation, which you see on the left. Now, as I discussed earlier, cwnd_start could be completely different from the product of W_max and beta_cubic.
K
Here I'm showing the window increase function used by CUBIC on receiving an ACK during congestion avoidance. RFC 8312 says CUBIC computes the target increase rate during the next RTT using the W_cubic(t + RTT) equation. This equation can have three possible outcomes, as shown on the slide. For the first case, we have added a lower bound for when the target could become smaller than the current congestion window; although this might not happen, it is an added safety guard.
K
The
second
case
is
more
common
during
the
app
limited
periods
where
time
t
keeps
increasing,
even
if
the
sender
is
app
limited
so
similar
to
what
linux
already
does,
we
have
added
an
upper
bound
to
make
sure
that
the
growth
in
congestion
avoidance
is
slower
than
doubling
the
condition
window
every
rtt,
and
the
third
case
is
the
generic
case,
where
we
will
use
w
cubic
t
plus
rtt,
as
it
is
next
night.
Please.
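The three cases described can be read as a clamp on the per-RTT target. A minimal sketch under my reading of the talk (the 1.5 factor implements "slower than doubling per RTT"; the names are mine, not from the draft):

```python
def clamp_target(w_cubic_next, cwnd):
    """Clamp CUBIC's target window for the next RTT.

    w_cubic_next: value of W_cubic(t + RTT); cwnd: current window (MSS).
    """
    if w_cubic_next < cwnd:
        return cwnd              # case 1: lower-bound safety guard
    if w_cubic_next > 1.5 * cwnd:
        return 1.5 * cwnd        # case 2: cap growth below doubling
    return w_cubic_next          # case 3: generic case, use as-is
```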
K
So, in the revised draft we switch to the more precise method based on the segments acknowledged. To do so, we first initialize W_est to the congestion window at the start of congestion avoidance, and then we increment it based on the segments that are getting acknowledged.
K
Next slide, please. There is another update in the draft for the AIMD region. If we use the AIMD approach based on the segments acknowledged, we have to make sure that once W_est reaches W_max, we set alpha_aimd to 1, which is similar to the NewReno behavior. I've created two graphs with some sample values to depict what I'm trying to convey.
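A hedged sketch of the per-ACK W_est bookkeeping being described; the switch of alpha_aimd to 1 at W_max mirrors NewReno's one-segment-per-RTT growth, while the argument names and the pre-switch alpha value are illustrative, not taken from the draft:

```python
def update_w_est(w_est, w_max, cwnd, segments_acked, alpha_aimd):
    """Advance the AIMD window estimate on an ACK (units: segments).

    Grows by alpha_aimd segments per cwnd's worth of acked segments;
    once w_est reaches w_max, alpha_aimd becomes 1 (NewReno-like).
    """
    w_est += alpha_aimd * (segments_acked / cwnd)
    if w_est >= w_max:
        alpha_aimd = 1.0
    return w_est, alpha_aimd
```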
K
After that, we follow the W_est function for CUBIC and the AIMD function for NewReno. If you look at the left: once the congestion window has reached W_max, if we continue to use the same alpha_aimd for W_est as we saw on the previous slide, CUBIC will have slower growth compared to NewReno.
K
This is the last slide. There are a lot of other updates that will make the life of an implementer much easier. Firstly, we have addressed some edge cases by adding a lower bound for the congestion window. We have also added definitions of variables and constants, along with their units, which should come in handy. Christian, who implemented CUBIC for picoquic, had raised an issue asking us to document CUBIC's behavior on spurious losses, and we have added a whole section for that. We have also updated the terminology to what makes the most sense in today's networking world.
K
We have also documented the broader deployment of CUBIC over the past decade. As some developers refer to the CUBIC research paper for the pseudocode, we have highlighted the differences between the research paper and the CUBIC Internet-Draft for better understanding. And the thing that will be most useful is the pretty LaTeX math for the various equations and formulas, which will make your life easier when you're looking at those equations.
A
Yes, thanks, Vidhi. One question...
N
So the idea is to document test vectors for the two algorithms that are specified in RFC 5925 and RFC 5926, and also to include the variant where options are not included in the message authentication code. So it should have everything needed for an implementer to really implement support for the TCP Authentication Option.
N
There are a couple of new drafts since the last IETF, where we presented this. Version 01 of the test-vectors draft was published shortly afterwards. During testing we actually found that there was an issue in the key derivation function, and we updated the draft accordingly, and after that we got interoperability with two vendors. We managed to get a routing vendor into our lab, and there we were happy to see that it fully works with regard to the test vectors.
N
So both algorithms, and also both option versions, work fine. We were also collaborating with a second vendor during last fall; that vendor also gave us a test version of their new implementation and was working with us. So we have now two vendors working. The second one has only implemented the SHA-1 algorithm so far, so we are waiting for the AES-128 one, but anyway it looks pretty good.
N
As such, the IPv6 part is pretty minimal regarding the implementation: only a few lines of code were changed in our implementation, so it should be pretty straightforward, and the header handling is really the only thing that is different. So in that sense we don't expect any issues, but still, we don't have interop for IPv6.
N
The plan for IPv6 is probably to get feedback and maybe have a hackathon, if we get hackathon attendees. Let's see; I have already initiated this discussion on the hackathon mailing list, but of course this is among the future ideas there.
A
Thanks, thanks. Okay, so I think this draft has a very clear, straightforward vision, so people can easily understand its purpose and usefulness. The chairs have discussed this one, and I think the draft is basically ready for an adoption call. So what I would like to check is: if you have any concerns about running an adoption call on this one, please speak up right now.
A
Okay, let's start the call. If you agree to adopt this item as a working group item, please click "raise hand", and if you disagree, please click "do not raise hand". If you don't have a specific opinion, you don't have to do anything.
A
Okay, stop. Six raised their hand, and zero disagree. Okay, thanks for the statistics. We will make a decision and then make an announcement on the mailing list later. Thanks so much; thank you, Juhamatti.
G
I'm going to point out that, normally, with an adoption call you also ask how many people have read the draft.
M
So, first of all, a reminder of the motivation for this draft. Delayed ACKs are a widely used mechanism intended to reduce protocol overhead; however, they may also contribute to suboptimal performance in different scenarios, such as the ones identified here: large congestion window scenarios, meaning a congestion window size much greater than the MSS, and small congestion window scenarios, meaning a congestion window size up to the order of one MSS.
M
Well, as in the previous version, there are two formats, but the novelty is that those are distinguished by the value of the length field. The two formats are the one on top, which is to announce support of the option, and the second one (there's one bullet that didn't appear here, for whatever reason), which is the main one and conveys the values for the main parameters of the option.
M
Then we have also introduced a change in the main format, which originally was seven bytes in size. The new format (again, there's a bullet which is not displayed in the middle of the slide; anyway, the new format is the one below) has been reduced to six bytes.
M
Then there was also an email yesterday by Michael Scharf, suggesting a possible new arrangement for the main format by which we might decrease that format by one further byte. That would mean that if we go down that path, the values for R and N might be lower; that is, we might only be able to represent a lower range of values for those parameters. As a reminder, R is the ACK ratio:
M
after how many full-sized data segments the receiver will transmit an ACK. And N is the number of subsequent data segments for which we are requesting immediate ACKs when R is set to zero. So yeah, we may want to discuss and consider that as well, and by the way, it would be good to know whether the working group thinks it is necessary to represent values for R and N greater than 63.
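The R and N semantics just described can be sketched as receiver-side logic. This is an illustrative model only, not code from the draft; the class and method names are invented, and the handling of N when R is zero is an assumption about the intended behavior.

```python
# Illustrative sketch of the option's two parameters (names invented):
#   R: send one ACK after every R full-sized data segments
#      (R == 0 requests immediate ACKs).
#   N: when R == 0, the number of subsequent segments to ACK immediately.
class DelayedAckPolicy:
    def __init__(self, r, n):
        self.r = r                              # negotiated ACK ratio
        self.n = n                              # immediate-ACK count for r == 0
        self.unacked = 0                        # segments since last ACK
        self.immediate_left = n if r == 0 else 0

    def on_segment(self):
        """Return True if an ACK should be sent for this data segment."""
        if self.immediate_left > 0:             # still in immediate-ACK phase
            self.immediate_left -= 1
            return True
        self.unacked += 1
        if self.r > 0 and self.unacked >= self.r:
            self.unacked = 0                    # ACK every r-th segment
            return True
        return False
```

With R set to 2 this ACKs every second segment; with R set to 0 and N set to 3 it ACKs the next three segments immediately.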
M
Next, please. We have also added content in the Security Considerations section, because this new option would also create some new form of attack, by which an attacker impersonating some legitimate sender may communicate an R value to a receiver that is too high in a small congestion window scenario or, conversely, too low in a large congestion window scenario.
M
One example of the latter case could be communicating such a low R value to some battery-operated receiver, with the aim of leading it to transmit too many ACKs, which may contribute to draining the available energy at a much faster rate. So the goal in general may be to damage or degrade communication performance, or to degrade the performance of a device; and so we discuss possible mitigations.
M
Next, please. After the publication of version 02, there was also a review by Yoshi. By the way, thanks a lot to everyone who has given feedback: Michael, Jonathan, and also Yoshi, for the very useful comments and feedback. The main point in Yoshi's review is that we may want to provide guidance on when or how a sender would use the features in this option: for example, about setting the ACK frequency in some cases or many cases, perhaps in those small or large congestion window scenarios.
M
Also, we may want to provide guidance, for example, for when reordering is not supported, and also about how to set the N parameter. Specifically, there was a question about whether we might want to be able to set R to zero for the entire connection, which would mean requesting immediate ACKs for the entire connection. Perhaps we could enable something like that by reserving some special value for the parameter N to represent infinity.
M
So next, please. Considering that there is now at least some relatively stable basis for the document, we were wondering whether the document could be ready for working group adoption.
M
Yeah, that's a good point; we need to look into that area. Currently we were not actually assuming any specific baseline we would compare with or against.
A
Okay, so I basically have a concern about ignoring reordering. I think this was the comment from Jana, I guess. If we ignore reordering, fast retransmission may not work. In the case of QUIC this might be okay, because QUIC has a timer-based retransmission scheme, but in the case of TCP, RACK is not always supported. So sometimes, you know, we—
A
We use fast recovery and loss-based retransmission algorithms, but if we ignore reordering, they might not work, and then we might have to wait for a timeout. So I would just like to hear more precisely about the use case for ignoring reordering; otherwise, to some extent, it's risky from my point of view.
M
Yes, so I agree that we may need to clarify in the document and provide guidance on when it's safe to use this feature, and advise that otherwise there can be the issues that you just mentioned.
H
Hi, thank you for this. This is perhaps also a question for the chairs. This is certainly interesting, but the real question here, to me, is how valuable this is and how much real-world impact it can have.
H
This is following up on Gorry's question, and maybe yours as well, Michael. If we don't have a real implementation, if we don't understand what the implications of this are in the real world, and if we don't understand what real value this has, I don't know how to evaluate whether something like this should be taken up in the working group or not. I think it's a perfectly fine mechanism; it seems reasonable, and all of that is fine. But the real question is: do we gain anything from working on this? And I don't know.
M
Well, perhaps that's not quite it. We don't have implementation experience with using this option, but as motivation there is a set of different scenarios mentioned in the introduction, which can in some cases be categorized into those large or small congestion window scenarios. So yeah, at this moment we have not been able to run experiments with implementations and measure the performance when using something like this, but we have some reference use cases.
L
I've been asking about use cases, so here is one typical use case for this. I need to admit I haven't read the draft in full, but one use case is 5G systems that will run on millimeter wave, in which case you might easily have user equipment that is power-limited in the uplink, which can give, in some cases, quite asymmetric behavior depending on how networks are deployed. Then it's good to have some kind of negotiation with the sender.
L
That could be one use case where this can be applied, and it can also send a signal to 3GPP that this is taken seriously and that we are trying to devise methods to avoid more acknowledgements than necessary; then you avoid the ACK aggregation and discarding features, and whatever else might otherwise be the option.
C
Can you hear me clearly? Okay. This presentation is about a new function in MPTCP; we call it subtype capability negotiation. First, I should be honest that currently this is just an idea. I have no demo to show for this function.
C
As we all know, version negotiation has been defined in the MPTCP protocol. This page lists some cases for version negotiation, and we have found that, within a given version, all subtype messages are mandatory and fixed. This is why I present this idea. Okay, go to the next page.
C
So we found that if a new subtype message type is added in a future extension, a higher version should be released to support it, and maybe a new subtype should be allocated. The next issue, which I think may happen, is the case where dynamic function deployment is supported in MPTCP.
C
That means that a sender may not know the subtypes supported by the receiver in an MPTCP session. As a result, invalid packets may be sent during the transmission from the sender to the receiver, and the receiver, since it cannot support them, will discard them, which causes wasted overhead on the receiver side. Next page.
C
So I think for subtype capability exchange there are, from my understanding, four scenarios. The first is that the peers in a session support the same MPTCP version, including the same subtype sets. The second is that the peers support the same version but with different subtype sets.
C
Third, the peers support different MPTCP versions that include the same subtype sets. The last one is that the peers in the session support different versions with different subtype sets. I think the last two scenarios are more complicated, so the current version of the draft covers only the first two scenarios, where the peers are on the same version but may have different (or the same) subtype sets.
C
Okay, next page. I think a typical flow for this function between two endpoints is like the one shown in the figure.
C
I suggest that this exchange happen during connection establishment. The sender will carry its subtype capability in the first message when it wants to set up the connection, and when the server receives the subtype capability, it will determine and cache it.
C
Then, in the response, it will carry its own subtype capability, possibly adjusted depending on the capability cached from the initiator. So the two parties will know each other's subtype capability, and then, during data transmission, before they send a subtype message they know whether the receiver supports it or not, and they can avoid invalid sending. Next page. Okay, this page just gives one possible solution for the protocol design, and maybe it's not feasible currently.
C
There may be some problem with backward compatibility, and I will consider it later. First, I think maybe the MP_CAPABLE option can allocate some bytes to indicate the capability for each subtype: there would be some bits, each corresponding to one subtype, to indicate whether the sender supports that subtype or not.
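The per-subtype bitmap idea described above can be sketched as follows. This is a hypothetical illustration, not the draft's wire format: the subtype numbering follows RFC 8684's subtype registry, but the encoding and function names are invented for this example.

```python
# Hypothetical sketch of subtype capability negotiation: each bit position
# corresponds to one MPTCP subtype number, and a set bit means "supported".
def encode_capabilities(subtypes):
    """Pack a set of supported subtype numbers into a bitmap integer."""
    bitmap = 0
    for s in subtypes:
        bitmap |= 1 << s
    return bitmap

def negotiated(sender_bitmap, receiver_bitmap):
    """Subtypes usable on this connection: the intersection of both sides."""
    common = sender_bitmap & receiver_bitmap
    return {s for s in range(16) if common & (1 << s)}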
C
Maybe this is not mature yet, but I think we may consider it in depth in a future version. Okay, next page. Currently this is just our idea, and I'm not sure whether this requirement or scenario is interesting or useful. If it is, I can go ahead and complete the work in depth, and consider more use cases for it.
A
Questions? Yeah, one comment I have is that I might want to see some more tangible use cases. Basically, what you are saying is: if the endpoint knows which subtypes are supported at the initial stage of the connection, then you can avoid sending wasted packets. But, let's say, I implement it so that I send a specific option in the middle of the connection, and if it fails three times in a row, I then disable this feature—
A
This is also possible, right? So there are pros and cons, and I would still like to see which one is better. So if you could provide a more useful use case, that would be great. That's my initial impression, but that's my personal opinion.
C
I mean, I'm not very sure which extension of MPTCP this would apply to, so I am still researching that. I will discuss with my team the possibilities here.
P
Can you hear me? Very good, okay. Yeah, I want to talk again about multipath TCP robust session establishment (RobE), something we have been driving for several IETFs. It's a joint initiative between Huawei and Deutsche Telekom. Next slide, please. Just a short recap: multipath TCP RobE tries to overcome an issue we see with regular multipath TCP, namely that if, in regular multipath TCP, the initial flow cannot be established, then there is no connectivity. We address this with three different proposals: RobE-SIM, RobE-TIMER, and RobE-IPS.
P
We try to overcome this as follows. With RobE-SIM, we simultaneously establish communication over multiple paths, or all available paths. With RobE-TIMER, we try to set up a second communication when the first one fails, after a timer expires. And with RobE-IPS, we use available information to decide whether to use a first path or a second path for establishing the communication. Next slide, please.
P
All these proposals are well described in the draft, from our standpoint. We tried to get adoption, or rather we requested adoption, last time, but that failed because of some feedback during the IETF 109 session, where two issues came up. The first one was: does the existing IPR allow implementation in an open source initiative? And the second issue was: is there a way to publish the existing implementations in Huawei open source? The following slides will deal with these questions, so next slide, please.
P
So, coming to issue one, the license clarification: this is a slide provided by Huawei, and I will try to summarize it shortly; it will not explain everything. Huawei is part of the OIN community, the Open Invention Network, and within this Open Invention Network, everyone who is part of the community has to share their patents which are Linux-kernel-related free of charge.
P
So that means, from our understanding, that this is a good first step, because everyone can profit from being part of this OIN community and have access to these patents free of charge, at the end.
P
Then the open source status: RobE-IPS, which is one of the solutions we propose, is more or less indirectly open sourced by Huawei, because it's part of their products, and everything which is under the GPL has to be published by them; you can find that under the link mentioned here. Nevertheless, independent of the open source status, we also provided some prototypes at IETF 108 and gave some insights into the results we were able to collect with these prototypes.
P
So there are some initiatives we have, which I've already shown: first, that the different RobE proposals work, which we have demonstrated with prototypes, and second, that parts of these prototype implementations are already open sourced. Next slide, please.
E
Yeah, I'm not asking as a chair, but as an individual. You said that if I'm implementing this for Linux, it's okay?
P
Sorry, go ahead, please.
E
So what does it mean for non-Linux implementations?
P
So, to be honest, maybe Jiao can jump in here, because this information was provided by Huawei. At least it says that when someone is a member of the OIN, this means all the IPR-protected developments can be shared for free. Whether this also applies to FreeBSD, for example, I don't know; but maybe Jiao can give some more information.
C
Okay, the license information was provided by our legal person, so I cannot answer questions about the license personally. If you have any question about the license, you can write it in the minutes; I can forward it to my company and to our legal person, and they can give us formal material. I think that is a good way to discuss the problem with the license.
B
I understand what your suggestion is, but I do think that if you are thinking about this, you should write it down, so that people can understand what your thinking is and comment on it.
F
Hi, speaking as an individual. There's one example we have in TCP with an RFC that had IPR declared on it, if I remember correctly, which was Eifel, and the existence of the IPR basically meant that nobody ever implemented it, and we expended working group cycles on it for no reason, for no benefit. And so, personally, I mean this is even more complicated, because there are two parties declaring IPR that are potentially conflicting.
F
I don't know what this orange box is trying to tell me, but nobody will implement this with this confusing IPR, so we should not spend any time discussing it in the working group. If you want this to be implemented for TCP or multipath TCP, it needs to come with no obligations, not with ones that are not understandable.
F
Linux is used for both commercial and non-commercial purposes, and if you want to implement it in the Linux kernel or FreeBSD or any other operating system, does the user need to check whether each application is making use of this commercially or not? That just doesn't work; a license can't be tied to the use, right? And specifically also with the GPL, if I remember correctly, the license holders, the IPR holders, need to give the license for free; otherwise, the thing doesn't get merged.
F
And so, I mean, this is such a complicated setup that is created with this document that I don't think we should adopt this. And this has nothing to do with the technical quality, I want to make this clear. This is purely based on the licensing, because I don't think anybody will want to run the risk of changing their implementation by touching this, and so it's dead, I think.
E
Yeah, as an individual and as a FreeBSD committer: I wouldn't implement it. At least I wouldn't commit this to the FreeBSD tree, because someone might use FreeBSD, not knowing that he's running a business, and not knowing that turning this sysctl on or off gets him into needing to pay something.
O
Just echoing from Jabber: Tom Jones and Jonathan Morton are saying the same as other people here. If the IPR is not clear, then they won't upstream it.
I
Yeah, just very briefly, before we ask Markus to bring us back on track: it might be useful, maybe not right now, but either on Jabber or on the list, if people are actually interested in this but not with the licensing, to let him know that before he goes and fights a legal battle, rather than just having him go do all this and then people don't like the work anyway. So, just some comments.
A
Oh, this is a little off on the slide for some reason. Okay, but it's okay. So the purpose of this draft is basically simple: it tries to provide a solution for the option space in the SYN segment. I think this is a very old and well-known issue, so I don't have to explain the background in great detail; I'll just skip this slide.
A
Okay, there are several previous proposals for this issue, but I'm going to skip this slide as well, because people already know them. Basically, our approach combines two techniques: one is the aggregated option, and the other is delayed option negotiation. By combining the two approaches, it tries to save option space in the SYN segment.
A
A simple format means just a code point and a length, that's all; for example, SACK-permitted, the TCP Fast Open cookie, or the EDO option are examples of this kind of option. The reason why these options have a very simple format is that they are used just as an indication of a feature. Indication of a feature means the endpoint simply claims, "I want to use this feature."
A
That's it; this is essentially one bit of information. But because of the TCP option format, each option needs to use its own code point and length at least, plus an ExID if it's an experimental option. Because of that, each option needs to consume two or four bytes of option space. So the idea here is: if we can aggregate them into a single option, then we can free up some option space in the SYN segment. That's the basic idea of the aggregated option. Please go to the next slide.
A
In this case, the client sends normal options such as timestamps and window scale, and in addition to them, an aggregated option, and inside the aggregated option the bits for options A and B are set. Then, on the server side:
A
the server, in addition to sending the normal options, also sends an aggregated option, but in this example only the bit for option B is set; the bit for option A is not set. In this case, option A is not negotiated, but option B is negotiated and will be used in this connection. So in this way, by using the aggregated option, we can negotiate multiple options within a single option, so that we can save the space. That's a basic example of the aggregated option.
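The negotiation just walked through can be sketched in a few lines. This is an assumed layout for illustration only, not the draft's wire format: the option kind, the flag-bit assignments, and the function names are all invented here.

```python
# Minimal sketch of an aggregated option carrying several one-bit
# "I want feature X" flags in a single TCP option: kind, length, flag byte.
AGG_KIND = 253                         # experimental kind, chosen for illustration
FEATURE_A, FEATURE_B = 0x01, 0x02      # hypothetical bit assignments

def build_aggregated_option(flags):
    """Serialize the option as: kind, total length (3), one flag byte."""
    return bytes([AGG_KIND, 3, flags])

def negotiate(client_flags, server_flags):
    """Only features whose bit is set by both ends are enabled."""
    return client_flags & server_flags
```

In the slide's example, the client sets both bits and the server sets only option B's bit, so only feature B survives the intersection.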
A
So, moving on to the next slide, please. Now I'd like to talk about delayed option negotiation. The key idea of delayed option negotiation is that it basically extends the MPTCP option exchange scheme for more generic purposes, so it's supposed to be very easy to implement and supposed to be middlebox-friendly. Please go to the next slide. Before I explain delayed option negotiation, I'd just like to explain the MPTCP option exchange. The figure on the left shows the window scale option exchange.
A
It's simple: the client sends the window scale option in the SYN segment, and the server sends back the window scale option in the SYN-ACK; that's it. But in the case of the MPTCP option, it needs to use four segments to negotiate, and the important point is that in the SYN packet the MP_CAPABLE option is very small, just four bytes, so that it can save some option space in the SYN segment.
A
Instead, when the client sends the third segment, the ACK for the SYN-ACK, it sends a rather big MP_CAPABLE of 20 bytes, and in the fourth segment the server sends another big option, 22 or 24 bytes. So that's the MPTCP option exchange scheme. Moving on to the next slide, please. Now I would like to show an abstract view of TCP option exchange: sending an option basically serves two purposes.
A
One is the indication of a feature, and the other is additional information for the feature. Indication of the feature means the endpoint claims, "I want to use this feature"; additional information for the feature is, for example, in the case of the window scale option, the window scale parameter. Usually, in a normal option, the indication of the feature and the additional information for the feature are integrated into a single option, but in the case of MPTCP, the two pieces of information are basically separated.
A
So when the client sends the SYN packet, it just sends the indication: "I want to use MPTCP," that's it. Then, in the third segment, it instead sends the additional information for the feature; that's why the size of the MP_CAPABLE option in the ACK is very big. And the server side also sends it back as a confirmation, for a reliable option negotiation scheme over the network. So that's the MPTCP option exchange scheme. Please move on to the next slide. Basically, delayed option negotiation utilizes
A
this MPTCP option exchange concept and applies it for more generic purposes. It supports a four-way exchange and a five-way exchange; the figure on the left shows the four-way exchange. In the case of the four-way exchange, the option negotiation scheme is very similar to MPTCP: when the client sends the SYN packet, it just sends the indication of the feature in the SYN segment, so that it can save option space in the SYN segment. In the case of the five-way exchange, in addition to the indication in the SYN segment, when the server sends the SYN-ACK,
A
it also sends only the indication of the feature, so that it can save option space in the SYN-ACK as well. So, depending on how much option space it wants to save, the server can choose the four-way exchange or the five-way exchange. That's the basic idea of delayed option negotiation. Please move on to the next slide.
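The server-side choice between the two exchanges can be sketched as a simple decision function. This is an illustrative assumption about how an implementation might decide, not something specified in the draft; the thresholds and names are invented.

```python
# Sketch: pick the exchange variant based on remaining SYN-ACK option space.
# If the full option still fits, answer in four segments; if only a short
# indication fits, defer the details to later segments (five-way exchange).
def choose_exchange(synack_space_left, full_option_len, indication_len=2):
    if synack_space_left >= full_option_len:
        return "four-way"
    if synack_space_left >= indication_len:
        return "five-way"
    return "not-negotiated"
```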
A
This is an example usage of delayed option negotiation. Basically, delayed option negotiation utilizes the aggregated option. The client sends the SYN packet with, in addition to the normal options such as timestamps and window scale, the aggregated option, in this case with the bit for option A set. The server side does the same thing: it sends back the normal options and the aggregated option, with the bit for option A set inside. Then, when the client receives the SYN-ACK,
A
option A is negotiated, so in the third segment the client sends the normal format of option A, to convey the additional information for option A, and it also sends a two-byte aggregated option to indicate that this packet contains additional information. The server side does the same thing: it sends the option in its normal format, plus a two-byte aggregated option to indicate that this packet contains additional information. Then, in the fifth segment, the client sends the aggregated option; this is an indication of the confirmation.
A
So this is the basic usage of delayed option negotiation. Please move on to the next slide; this is my final slide. As I explained, this approach does not extend the option space in the SYN segment. Instead, it compresses options in the SYN segment and moves some part of the SYN-segment options to non-SYN segments. So it's not extending the space, and you might think it's just moving the problem to another place, because if we do this, maybe the non-SYN
A
segments may get crowded and run out of option space. But in this case, if a non-SYN segment is running out of option space, we might be able to use EDO, because EDO cannot be applied to the SYN segment, but it can be applied to non-SYN segments. So by combining EDO and this approach, we can solve the option space issue. We believe this is a practical approach.
A
So, unlike previous approaches, we don't change behavior much: we don't send multiple SYN packets for a single connection; we don't do that. It may sound a bit tricky, but this approach is more practical, so it's easy to implement. That's the basic idea. This is work we have just started, so for more detailed information, please read the draft and then please provide feedback. Thanks so much.
A
That's a very good point. Yeah, actually MSS is tricky, and window scale is a little tricky. What I'm thinking is that in the future, if people try to invent new options, they might run into the option space issue, and then, even with a cool idea, sometimes we don't have the option space, and then people cannot activate the feature. That's the scenario I'm concerned about.
Q
You can hear me clearly and loudly? Not much time is left, so I'll try to make it fast. My name is Alexander Azimov; I work for Yandex. This is a follow-up to a discussion that happened a few months ago on the mailing list, and it wasn't that fruitful. Next slide. So this time I tried to do my homework and carefully read all the documents that are related to flow label generation.
Q
Thanks. These two documents discuss applications, both stateful and stateless. On the slides there are a few quotes which I'd like to highlight. The first clearly states that a flow is connected to, but not necessarily mapped one-to-one onto, a transport connection.
Q
So the flow label must not be used alone in the hash function. The second part of the discussion concerns network functions; let's move to the next slide.
Q
RFC 6294 discusses use cases for the flow label, such as packet classification using the flow label, with additional limitations and significant technical requirements: the flow label should be strongly fixed to a selected flow; routers supporting flow-label-based state should support processing of unknown flow labels; and, of course, garbage collection.
Q
Mostly it mimics what we are using for quality of service or in MPLS label networks, and since the document is already ten years old, we can say with high confidence that there were no real deployments of a flow label using the stateful approach. Next slide.
Q
So RFC 6436 finally finishes the argument and states that the flow label should not be used as an end-to-end signal of any kind. It also states that the flow label may be used only as part of a hash function for stateless load balancing. Next slide.
Q
This RFC expands the idea of stateless load balancing: it adds support for load balancing with the flow label to different kinds of encapsulation, for example GRE or IP tunnels, and of course, nowadays it's also important for SRv6. Next slide, please.
Q
It also covers the caveat with fragmented packets. And once again, speaking for myself, I'm not aware of any implementations of this kind, and if there were some, they should have already had experience working with Linux boxes.
Q
Then a very important update to this functionality happened: upon a negative routing event, which most of you will know as an RTO timeout, the hash from which the flow label is derived is recalculated. Next slide.
Q
Now the hash, and the corresponding flow label, is changed after each RTO timeout, both if it happens for SYN packets and if it happens in the middle of a TCP connection.
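The rehash-on-RTO behavior described above can be modeled in a few lines. This is an illustrative model only; the real logic lives in the Linux kernel's TCP stack, and the class and attribute names here are invented, not kernel identifiers.

```python
# Illustrative model: the IPv6 flow label is derived from a per-socket
# transmit hash, and an RTO re-seeds that hash, so ECMP-style load balancers
# re-hash the flow onto a (likely) different next hop.
import random

class FlowLabelState:
    def __init__(self, seed=1):
        self.rng = random.Random(seed)
        self.txhash = self.rng.getrandbits(32)

    def flow_label(self):
        # The IPv6 flow label field is 20 bits wide.
        return self.txhash & 0xFFFFF

    def on_rto(self):
        """On a retransmission timeout, pick a fresh hash (and flow label)."""
        self.txhash = self.rng.getrandbits(32)
```

Each RTO gives the connection a new flow label, which is what lets a session "jump" off a broken path, as the experiment later in the talk shows.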
Q
Next, next. Okay, it needs some time to refresh, at least in my window. Why is this important? Because in certain scenarios with multiple alternative paths, it gives a TCP flow a self-healing property: in case of packet loss, the session will jump from one path to another path. The more alternative paths you have, the higher the chance that a network outage will have zero effect on your services.
Q
What do you need to make this work? You need to enable flow-label-enriched load balancing on egress devices. In this particular example, if we have an outage on device S1 and we lose a packet, the socket hash will be recalculated, we'll get a new flow label, and the packet has a chance to reach the destination over a path that is not affected. Next slide.
Q
Here is an experiment that took place on production traffic in one of the Yandex data centers. We used a top-of-rack switch with four uplinks, and on one uplink we created a blackhole, so we dropped all the traffic that was coming from the servers through it. On the left is the output of our data plane monitoring, which shows about 20 percent packet loss; on the right, the disruption that happens to services. Next slide, please. Now we enable flow label hashing.
Q
With flow label hashing on top of the RTO-rehash feature, the picture changes dramatically. As you can see, the packet loss in our data plane monitoring, which is not aware of any TCP, is still the same, still about 25 percent loss, but services are not affected anymore. Of course, this also requires tuned RTO timer values, but anyway, it wouldn't be possible without the flow label changing at the host. Next slide.
Q
And this is now the default behavior: Linux kernel developers have improved the stack and enriched TCP over IPv6 with self-healing properties, although they haven't standardized it in any way. Today we have multiple paths not only in the data center environment, so most users may see improved quality of service from these changes. If everything were great, I would end with this slide, and you wouldn't have to keep suffering from my Russian accent. Next slide, in a hurry.
Q
A side effect occurs if an RTO happens on the client side during an upload, for example: the flow label change may redirect the session to another node, for example to another TCP proxy. In this case, the TCP session will end up with a timeout. We have faced this issue with our users and had to switch off flow label hashing on edge routers, though that helps only in a subset of cases. In general, if you see more timeouts during uploads in IPv6 than in IPv4, now you know why.
Q
In TCP we have two sides, it's obvious: the client, who sends the SYN packet, and the server, who responds with a SYN-ACK. The problem that I described may occur only if a client changes the flow label while connected to a stateful anycast service. So we can keep the current defaults for the server unchanged and recalculate the hash on each RTO timeout, while, in contrast, clients should not do hash recalculation by default, although there should be a knob to enable it in a controlled environment. Next slide, and it will be the last one.
Q
To sum up: we have multiple RFCs related to flow label generation, though some of them look obsolete. There is non-standard behavior in the Linux kernel, and maybe other operating systems I'm not aware of, that improves connection stability in case of packet loss that affects a subset of the available paths.
Q
Unfortunately, it also introduced a mistake that may result in connection timeouts during uploads. It can be fixed, and from what I learned on the mailing list, it will be fixed, which is good. But I believe it's also important to update the related standards and reflect what is happening in real-world deployments.
A
Okay, one question I have: this depends on the socket hash, right? But not all implementations use that hash to generate the flow label, and we don't want the standard to depend on a very implementation-specific hashing algorithm. That's my understanding.
Q
We
are
defined
how
to
you
are
modulating
the
cache,
but
it
can
be
specified
that
the
hash
is
calculated
calculated
for
the
socket
without
specification,
the
the
algorithm
itself
and
how
it
is
related
to
not
only
flow
label.
It's
how
it
is
related
to
behavior,
okay,.
A
Oh, I think it's interesting to see people practically using the flow label, and if you go to other working groups in the IETF you'll see people debating whether this is used, how this is used, and whether it changes. So this is part of a much bigger discussion, and it's really good to see this presentation here.
Q
Thank you very much. I can say that at least three hyperscalers at the moment are using the flow label this way.
G
Bob, hi there. To save time, I've put some references in the chat for a document called "The Social Cost of Cheap Pseudonyms". The problem with the flow label, if you start using it to represent other fields that say what the flow is, is that it's just so easy to shard flows and take advantage of systems that depend on flow labels, which are already cheap pseudonyms. So I encourage you to look at those references and think about them. Thanks.
G
What about an FQ scheduler? Yeah, they're pretty sticky.
R
Yeah, I just felt I needed to answer what Bob said, since I'm very interested in FQ-based AQMs.
R
So
one
obvious
way
to
guard
against
this
would
be
for
the
the
flow
mechanism
to
look
at
the
port
numbers,
as
well
as
the
flow
label,
and
to
perform
a
fairness
between
flow
labels
within
the
same
five
tuple.
R
Just
one
example
of
how
to
deal
with
that
situation.
Q
Just a last comment on top of this discussion, if you don't mind. From the feedback of this working group, and also from the field, because I have also learned how it behaves operationally, I believe the feedback is mostly positive and it might be important to standardize it. Though I'm not sure that I want to author this document, I will try to contact those kernel hackers in Linux and.