From YouTube: IETF101-TSVWG-20180322-1550
Description
TSVWG meeting session at IETF101
2018/03/22 1550
https://datatracker.ietf.org/meeting/101/proceedings/
B: Okay — we have note-takers I want to thank, [name unclear] and Paul Congdon. Gorry is going to try to keep an eye on jabber from up front here. Welcome back to the three-ring circus, better known as the transport area working group. I'm David Black, one of your chairs.
B: Our third chair is a chair in absentia. Okay — this is the new and improved Note Well, complete with BCP numbers. It applies to everything that goes on in this meeting, and you're expected to be aware of it. Okay, we have a couple of note takers, we have jabber session scribes, and we will be soliciting, as we go along, reviewers for the various drafts that come up in the meeting.
A reminder: please use "tsvwg" as part of the draft name — draft-yourname-tsvwg-whatever — and that way the chairs will notice it when we're trying to figure out what's going on. Another reminder: document quality relies on reviews, so please review documents in your working group, and review at least one document of another working group. If you'd like the documents you care about to be reviewed, please return the favor and review other people's documents; they may review yours. Okay, on to document status.
B: I think at one point I saw a slide from Spencer that only had two RFCs on it; I found two more by carefully setting the search date to the beginning of the Singapore IETF meeting week. We've had four RFCs published since I last put a set of slides like this together. The first two are a couple of SCTP RFCs that are part of the massive WebRTC draft hairball — or tarball, if you like; we managed to extract those two out.
B: The really good news is that, thanks to the diligent efforts of a number of people in the room, we are no longer on the critical path to getting WebRTC done — thank you very much, everybody. We still have one draft that's stuck against some other WebRTC drafts, but WebRTC will sort that out. Two more RFCs: an ECN experimentation draft, and the diffserv-to-Wi-Fi mapping draft. The draft on the screen is the remaining draft that's stuck in the WebRTC hairball. Okay, so Spencer — right now you get it easy!
B: I believe so. Okay — we have a working group last call that ends March 30th, on the DSCP IANA process changes draft. This is needed for the diffserv Lower Effort PHB draft that we're going to talk about later in the session. I think I heard Spencer say something about a tidal wave earlier: we are about to send five internet drafts into working group last call, and we hope to do last call on all of these before Montreal.
The good news is they only come in three batches. After we get the DSCP IANA draft dealt with, we're going to last call the others.
B: We expect to working-group-last-call the diffserv Lower Effort PHB draft. The ECN drafts — on encapsulation for lower-layer protocols, and on tunnels that use shim headers — will be last called together; I believe those are about ready for last call. In addition, the two FEC update drafts also might be ready; we expect to working group last call those before Montreal.
B: There are another seven additional working group drafts that we'll talk about: the SCTP NAT draft; three drafts on the L4S low-latency service; the UDP options draft; tunnel congestion feedback; and datagram packetization layer path MTU discovery (DPLPMTUD), which is a recently adopted draft. Then a couple of slides on related drafts.
B: There are two drafts that you will see on today's agenda. One is transport header encryption impact, which is Gorry's draft, and the other is the priority switching scheduler draft (Finzi). Let me say a word about what we're going to do with that one. There's been some offline discussion, and it appears the best path for that draft is publication as an independent submission.
B: However, when that draft is presented we're going to be asking for reviewers, because what we'd like to do is, in essence, empower the draft authors to send it off to the independent submission editor with a stamp of approval from TSVWG that says: we've looked at it, and we think this is a good thing for the independent submission editor to publish. So that's what's going to happen later this session. Three more drafts are not on today's agenda.
B: The generic multiplexing draft. Tom Herbert has a new draft on Firewall and Service Tickets; he's trying to do things in-band — I'm looking at Tom, assuming he's going to sort out how this relates to all the work going on in the I2NSF working group. Congestion control for guaranteed bandwidth: this draft actually turned out to be a TCPM-related draft, but I left it here because they put "tsvwg" in its title.
There was some discussion of this in TCPM on Monday, and there's been quite a bit of list discussion over in intarea. There are three more drafts that are relevant to us: tunnel MTU considerations — there's actually a whole draft on tunnels in the internet architecture in intarea; fragmentation fragility, which is also an intarea draft; and SOCKS version 6. All of these drafts are being handled by the intarea working group.
B: Okay, so you're now in the middle of the chair slides; we're going to be bashing the agenda a little bit. We've done the Note Well and just went through document accomplishments and status, so what comes next is the milestones review. We have a few milestones that have gone past, but we're close.
B: The RFC 4960 errata and issues draft should be submitted to the IESG in the next week or two. The other two orange drafts up here are the two ECN encapsulation drafts; we're going to move those dates out to June 2018. Both drafts have new versions, and both are believed to be very close to ready for working group last call. Our intent is not to have—
B: Later in the session we can discuss likely timing for those, and then there's this December date on the remaining drafts; as we get to Montreal we'll try to figure out what's going to get done this year, and what's going to have new dates in the next year.
Okay, a little bit on the agenda. We've done document status and charter accomplishments; this is agenda bashing, and I'm going to use the opportunity to say a few words about the tunnel congestion feedback draft. Right now this draft is currently sort of stuck.
B: I think we had a discussion with our ADs about this draft back in Prague, and the determination at that point was that a mechanism without a worked example of usage doesn't go anywhere. With luck, Bob — who is good at going to the mic — can tell us that there's a worked example of its usage about to emerge.
I: I'm not trying to repeat your words for you. I happened to be talking to him — the service function chaining working group wants to use this with MPLS for doing load balancing, essentially, across service functions — and I pointed him at that draft, and so he's willing to jump in and help finish it.
B: What they were referring to there is the BANANA drafts, which are intarea stuff — some individual submissions — and there's a BANANA draft on ECN that is also trying to use what is effectively tunnel congestion feedback to do load balancing. Okay: right after I get done with this wonderful monologue, we're going to do the Lower Effort PHB draft, which will be Gorry and Roland; and we have a presentation of the priority switching scheduler, which is Anaïs Finzi's material.
B
We
have
the
packetization
layer
path,
MTU,
discovery,
draft
the
trees
and
cap
drafts
l
fresh
drafts,
and
then
there
is
some
new
activity.
That's
been
proposed
over
in
aqua
t8o
2.1
called
congestion,
isolation.
Paul
con
is
going
to
talk
about
that
should
get
us
to
the
break.
We
may
pick
up
the
feck
frame
dress
before
the
break.
If
we
do
well
on
time
that
would
be
a
items.
B: He sent me some slides, and I think I know what they're about. Then a couple of SCTP drafts, which I think are going to be fairly quick; and then, if you were at the plenary last night, you heard a little bit of discussion about sort of the overall impact of encryption on the internet — we have a draft on transport header encryption. I think that's it.
C: This draft opens the Pool 3 DSCP values to require publication of a standards track or best current practice RFC — and if my AD wants to bash the name, he can; I don't care. There's a photo of Balmoral; it's close to Aberdeen. And this is the purpose — oh, he wants to try and bash the name already.
Okay, here we go — as I said, Spencer Dawkins gets to speak; here's his response.
C: I am serious — we might get a new name before we finish it. And which castle did you use? That's Balmoral; that's where we are. The overview of the draft: it simply changes this pool, in the IANA registry which already exists, from being a local-use registry to being a standards-action registry, and we believe that there are no bad effects of doing this. But we're the IETF, so we write a document to make sure everyone else believes that there are no bad effects of doing this. That's the purpose of the document.
C: Yeah — apologies: if you read the last version, the -00 version, it had everything talking about "Iona" — there's a picture of Iona, in Scotland, as well — not IANA. My spellchecker had a wonderful time over the draft, and I was submitting it very quickly; don't read that version, read -01. And seriously, because it's only one registry item we're changing, I think this is now ready for any comments you may have; it's in working group last call. Thanks to the people who have already sent me comments — I know of one or two changes I—
C: —those I do have to make, so I will include them in the final version. I've already fixed one typo, and I realize there's some inconsistent capitalization, and some inconsistency with the IANA registry in terms of the placement and ordering of some of these things. If you've got any other comments, please tell me. Yeah.
C: Yeah — we have talked about this in previous meetings, and the reason for presenting this here is to try and make sure that when we go to IETF last call, we also tell the other working groups: heads up, there is a change here. We don't think it's bad, but there is a change.
J: Spencer Dawkins. David answered the question that I was going to ask, so let me ask my next question: if those networks are using the local-use DSCPs now, don't they have to explicitly shoot themselves in the foot to start having problems?
B: Diffserv asserts that it behaves best when you have a completely correct configuration of the network perimeter. Now, a network with a completely correct configuration of its perimeter is in the same category as Santa Claus, the Easter Bunny and the Tooth Fairy — but I don't think there are going to be major problems, and a network that is using these as local-use code points will have the ability to reconfigure to one of the many other local-use code points.
C: The summary is still basically that when this completes working group last call we're going to do a write-up, and we're going to make sure we have feedback from everyone in the IETF — it's much more important that this is well known before it's published than when it's published. The purpose of the document is to tell everyone in the IETF that it's happening, and then IANA, finally, makes the change. That's my slide. David — okay.
L: Gorry presented the draft — now the working group draft — in order to open pool 3 for standards action. Several editorial changes, so not really much new here. I actually added updates also to the recently published Diffserv-to-IEEE-802.11 mapping RFC, and I think there's one more draft or RFC to be updated, but I'll come to that later. So, review comments: really no gaps, just some comments, many editorial, so I'll try to maybe shorten some—
—phrases, as suggested.

B: Roland, before you go on — could you go back to the previous slide and let me do a real quick quasi-administrative item. I want to make sure that we can report that the sense of the room is that the first bullet is what we want to do. We've had long discussions about that, and I think it came down to code points 1 or 5; and there's some data that I think Brian Trammell sent to the list that suggests that, as we suspected, 1 is probably the better choice.
L: So I just sent an email to the list that will try to clarify that. My proposal is to write that LE users should use a lower-than-best-effort congestion control, because of the reasons given, and then also what happens if you don't do that — the implications, which may be negative: if you don't use that kind of congestion control, then you can expect some problems.
I: Bob Briscoe. I just wanted to try and explain, face to face, what I was trying to say — and that is that at the moment the text essentially says: if you don't want to harm people, you shouldn't harm people. The condition shouldn't be what you want to do, or what the application wants to do; it should be whether there will be harm. It's about: if the traffic is unlikely to harm anyone, then you don't need to do an LBE congestion control, but otherwise you do. It's not about whether you want to harm people, because that's just saying psychopaths are okay — and psychopaths are going to do it anyway, sure.
C: Well, okay — Gorry Fairhurst, from the floor mic. I'd much prefer an approach that says SHOULD use a LEDBAT-style, less-than-TCP congestion control if you're using the LE class — a "should", not a "have to". If you don't, then there are going to be interactions between an application that uses multiple classes, and maybe interactions with other traffic; and if we explain that more clearly in the wording, then I think that's much better guidance than simply kind of waving—
C: —our hands. I'm a little bit scared, as an individual, about the use of MUST do something when we can't actually force anyone to do it, and it's not clear how you'd check that interaction; but I do like the idea of saying SHOULD, and really meaning should, and then explaining what the ramifications are — then people can make an intelligent decision. Oh, great.
B: Chairs running around — yes, we've now done the chair changes. This is David, from the floor mic, and what I was going to observe is that from an operator's point of view, SHOULD versus MUST is kind of irrelevant, as the operator is going to have to defend against this regardless of which word we put into the RFCs. And I'm inclined to agree with Gorry that a SHOULD is appropriate, because I don't think anything changes in what the network is going to have to do if we put a MUST in.
I: My point wasn't about — sure, I don't care about the should... well, I do a bit, but it's more about: it just doesn't make sense saying that if you don't want to do harm, then use an LBE — because if you don't want to do harm, you will use an LBE. It's about whether you're going to do harm.
B: And I think we should take a closer look at the wording, because the real concern for the network is not whether a flow does harm; it's whether the flows using this, in aggregate, would do harm — and that's a little bit different, and not something an end user always has enough visibility into.
I: Let me explain where I'm coming from. I have in mind — I'm thinking about L4S, and I'm writing mappings for diffserv and L4S. At the moment you can do a really good less-than-best-effort congestion control with a scalable congestion control, because it jumps up really fast; but I don't want anything that doesn't behave as LBE in the L4S queue, because it will screw the latency. So it affects that. But, I mean, that's a special case, maybe — but it's from the network operator's point of view.
L: This is fine, yeah — maybe we can discuss it on the mailing list. I checked the PHB guidelines from RFC 2475, and I just came across guideline G7, which talks about tunneling. I don't know whether we have to state any extra text about tunneling — I don't know, I'm not a tunneling expert; I know they do use tunneling.
J: Spencer Dawkins speaking. For any normal draft I would be happy to handle stuff in AUTH48, but I'm not sure that all the authors will still be living when the hairball is published — which makes it less likely that that's going to be effective. I'm happy to drop a note out to the RFC editor now; if you would send me what I need to tell them, I'm happy to do that and let them do the things that they do.
C: We probably should let the RTCWeb people know what we've decided about the code point pretty soon, because we shouldn't change a document that they think they've been referring to. So we will let other people know of the consensus — once the IETF finally confirms which code point it is — and they will then be aware of this upcoming change. So we can handle that bit.
B: A quick reminder: the destination for this draft is publication by the independent submission editor, and the goal of this is for people to understand this draft. I'm going to be soliciting a few reviewers, because we'd like to be able to say that TSVWG thinks that publication of this draft as an independent submission is a good idea. So — Anaïs? Your first name, I mean — the floor is yours.
N: I'm going to talk about our priority switching scheduler. Sharing the capacity of a link is an important issue for mixed traffic, as you all know, and there are many existing solutions — like weighted fair queuing, deficit round robin and so on — but they are complex to configure and provide only soft guarantees. So our objective with this new priority switching scheduler is to achieve a service closer to GPS, and to obtain more predictable available capacities.
N: In fact, we want to ensure a more predictable output rate. We have a use case, too, as an example of how we want to use it: the idea is to make the AF class more predictable in the diffserv core architecture. Just as a teaser, here I have shown two figures with the AF output rate as a function of the EF input rate, when you vary the scheduler weight. You can see that the range is larger with the WRR — the red curve — compared to our scheduler.
N: Next, the use case I just talked about, showing the benefit of using PSS in a diffserv core network; and finally the security considerations, which are still a work in progress. So, PSS in a nutshell: PSS is based on the burst limiting shaper, and the key idea is to use a credit counter to change the priority seen by the priority scheduler. So you have non-active PSS queues, which are regular priority scheduler queues, and other queues with an active PSS.
N: So that's the key idea. As you can see, there are two things to establish: how the credit changes, and how to select the priorities we use. We have three parameters per controlled queue: a max level for the credit, a resume level, and the reserved bandwidth, which is used to compute the credit slopes. I will explain more about how it works. Here is an example: on the x-axis you have time, and on the y-axis the credit of the controlled queue we are looking at.
N: We are transmitting several packets from here, and while we are transmitting packets from the controlled queue we are increasing the credit at a rate I_send. When the credit reaches the max level, we change the priority from the high priority to the low priority; and because we are using a non-preemptive static priority scheduler, there is some non-preemption, leading to saturation of the credit at the maximum level. Then we send all the other types of traffic, so we decrease the credit at a rate I_idle. When the credit reaches the resume level, the priority is switched back to the high priority — and again, due to non-preemption, we keep decreasing the credit until it reaches zero, or until the end of the transmission of the packet.
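The credit mechanism just described can be sketched in a few lines. This is only an illustrative, time-stepped simplification: the names level_max, level_resume, i_send and i_idle are stand-ins for the presentation's parameters, not the draft's notation.

```python
class PSSQueue:
    """Toy model of one PSS-controlled queue: a credit counter that
    demotes the queue at a max level and promotes it again at a
    resume level, following the burst-limiting-shaper idea above."""

    def __init__(self, level_max, level_resume, i_send, i_idle):
        assert 0 <= level_resume < level_max
        self.level_max = level_max        # switch to low priority here
        self.level_resume = level_resume  # switch back to high priority here
        self.i_send = i_send              # credit gained while this queue sends
        self.i_idle = i_idle              # credit drained otherwise
        self.credit = 0.0
        self.high_priority = True

    def tick(self, dt, sending_from_this_queue):
        if sending_from_this_queue:
            # Sending from the controlled queue accumulates credit;
            # non-preemption makes it saturate at the max level.
            self.credit = min(self.level_max, self.credit + self.i_send * dt)
        else:
            self.credit = max(0.0, self.credit - self.i_idle * dt)
        if self.credit >= self.level_max:
            self.high_priority = False
        elif self.credit <= self.level_resume:
            self.high_priority = True
```

In a real scheduler the priority change takes effect between packet transmissions, since the scheduler is non-preemptive.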
N: So this is an example that is true, but I think a bit misleading — I think it's the best way to explain it first, but it's a bit misleading because it may give you the impression that when the priority is high we only send the control traffic, and when the priority is low we only send the other traffic.
N: So, another example. Here we start by sending a few packets, so we increase the credit at rate I_send; then, for some reason — either because other traffic has a higher priority, or because there is no more control traffic enqueued — we can send other types of traffic, so we decrease the credit at rate I_idle. Then maybe we get control traffic again, so the credit increases again.
N: So we again have the change of priority to the low priority, and we can continue with sending other types of traffic; and if all the queues with higher priority are empty, then we can send the control traffic even though the priority is low. And finally, the last example: if all the queues are again empty, the credit decreases. So you can see that we have a fairly simple mechanism.
N: So I think — I hope — I have convinced you that it's an interesting scheduler, but here is a use case to emphasize it. We use the core diffserv architecture, which I think you will know, and here is what we usually obtain with such an architecture: on the x-axis is the weight of the AF class, and on the y-axis is the output rate. The aim is to make the AF class more predictable.
N: So we first studied how it works with the current architecture. Let's say you know that the input rate of the EF class is around 50%, and it varies from 25% to 75%. Then, as you know, the output rate will also vary: for example, if you set the weight at 0.5, the AF output rate will vary between 12.5% and 37.5%. That is quite large, and we can conclude that the AF output rate is uncertain when the EF input rate is unknown.
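The arithmetic behind those numbers can be checked directly under a simplified model, assumed here for illustration: EF is served with strict priority, and the AF class receives its WRR weight of whatever capacity remains.

```python
def wrr_af_output(ef_input_pct, af_weight):
    """AF output rate (% of link): af_weight times what strict-priority
    EF traffic leaves over (simplified work-conserving WRR model)."""
    return af_weight * (100.0 - ef_input_pct)

# EF input varying between 25% and 75%, with an AF weight of 0.5,
# leaves the AF output rate anywhere between 12.5% and 37.5%.
worst = wrr_af_output(75.0, 0.5)  # 12.5
best = wrr_af_output(25.0, 0.5)   # 37.5
```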
N: Our goal is to make the rate of the AF class more predictable. Our proposal here is to add the PSS and to use it to change the priority of the AF class: the EF class still has the first priority, so no change there, while the AF class sometimes has a higher priority than the default class and sometimes a lower one. As a result, we add a kind of fairness to the priority scheduler, shown through simulations.
N: We obtained these curves. We set the parameters to obtain the same red line as before, and what we obtain is that for an EF input rate below 50% we follow the red line, which is very good; and when we increase the EF input rate further, we obtain the minimum between the remaining capacity and our red line — so here, at 75% EF input, the remaining capacity is 25%.
N: So, if we compare both schedulers, we obtain a much larger area of uncertainty with the WRR compared to our PSS proposal. And what is interesting: if we want to provide a minimum guarantee for the AF output rate with the WRR scheduler, we have to set the weight at 0.5 to obtain a 12% output rate; but with our proposal we only have to set it at 0.5%, meaning we have 0.75% that can be given to other priorities if needed.
N: So, to conclude on this use case: the EF class is not impacted by the change — I think that's important; when we know the EF input rate, we can easily get the same behavior with PSS as with the weighted round robin; and when the EF input rate varies, that is where our PSS has the better behavior, because the AF output rate is much more predictable — a result we corroborated with simulations.
N: In the simulations we didn't see any changes; and even if you look at it here, you see the EF class has the first priority, and it's exactly the same with our PSS. So the maximum impact is only the maximum frame size of one lower-priority packet; on the EF side, that doesn't change. Okay.
B: My mistake in the introduction — this is not going to be published as a standard; this is going to be an independent submission. We're looking to get some reviews, so that we can give the authors, effectively, a note to send to the independent submission editor: TSVWG thinks this is a fine thing to publish, it's good work. Right — okay, would you be interested in reviewing the draft?
P: I appreciate the work, and I understand what it's good for. To be honest, I'm not sure whether I'd like to entertain so many classes on a 100-gigabit trunk, honestly.
G: Since we last met in Singapore there have been several updates. The draft has been adopted; we have taken some implementation experience and updated our state machine — we've renamed some states, and we've added a new state for UDP transports. We've added text on considering search algorithms, based on discussion from the list, and probably the most important thing there is providing useful growth while you are still searching, so we can feed that forward. And we added text on QUIC — we think QUIC can support everything we need in the draft.
G: Well, that was QUIC last week — not necessarily QUIC this week; we ran experiments trying to get rid of things. We've also added text on PTB handling and other mechanisms that signal Packet-Too-Big-style messages. So, in Singapore the state machine looked like this — just a quick overview.
G: We enter through the initial state — we have this for unconnected transports — and then we validate connectivity and move to PROBE_BASE. In PROBE_BASE we need to validate that a base MTU is going to work; once we do that, we move into a search state, and from there we can perform a search. We search up until we hit a maximum MTU, and then we are done, and we can settle on that.
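The state flow just described — validate a base MTU, then search upward toward a ceiling — can be sketched roughly as follows. The state names, the base and maximum MTU values, and the fixed probe step here are illustrative placeholders, not the draft's definitions; the draft also has timers, probe counts and an error state that this sketch omits.

```python
BASE_MTU = 1200  # assumed base PMTU to validate first
MAX_MTU = 9000   # assumed ceiling for the search

class DPLPMTUDSketch:
    """Simplified upward-search state machine for packetization-layer
    path MTU discovery: BASE -> SEARCHING -> DONE."""

    def __init__(self, step=200):
        self.state = "BASE"  # first confirm the base MTU works
        self.plpmtu = 0      # current usable packet size
        self.step = step

    def next_probe_size(self):
        if self.state == "BASE":
            return BASE_MTU
        if self.state == "SEARCHING":
            return min(self.plpmtu + self.step, MAX_MTU)
        return None          # DONE: no more probes needed

    def on_probe_acked(self, size):
        if self.state == "BASE" and size >= BASE_MTU:
            self.plpmtu = size
            self.state = "SEARCHING"
        elif self.state == "SEARCHING":
            # Each acknowledged probe raises the usable size right away,
            # so the transport benefits while the search is still running
            # (the "useful growth while searching" mentioned above).
            self.plpmtu = size
            if size >= MAX_MTU:
                self.state = "DONE"
```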
G: What we've changed is that we have renamed the initial state, and we've also added a timer mechanism, so we can figure out whether we can actually establish connectivity. One thing that's probably still missing is that in the probe-search state we need a pointer out, so that we can incorporate different search algorithms.
G: We know we need to at least check the 5-tuple, so we know it's coming to the right connection, and we're pretty sure that the implementations we have can do this. But we also have questions about how we treat the PTB signals we receive, and what we should do with the new information we get from the network.
B: A comment on this one: I don't have any good answers for you, but I have a place for you to dig. In the slides I started the meeting with, you saw a reference to draft-bonica-intarea-frag-fragile; take a look at that — it will probably tell you a little more about PTB signals. Thank you.
H: At least for SCTP we can do a verification, and we do this based on the verification tag. So it's not just the five-tuple — we have something a blind attacker would have to guess. And I think we have to validate the Packet-Too-Big messages, because these can just come from a box in the middle; it only gives you an upper limit, so you have to probe it. Yes.
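The validation idea in this exchange — match the quoted packet against per-connection state plus something a blind attacker must guess, and treat the reported size only as an untrusted upper bound — might look like the following sketch. The field names are illustrative, not a wire format.

```python
def ptb_is_plausible(quoted, conn, reported_mtu, current_probe_size):
    """Sanity-check a Packet-Too-Big report against connection state."""
    # The quoted packet must match an existing connection's 5-tuple...
    if quoted.get("five_tuple") != conn.get("five_tuple"):
        return False
    # ...plus a value a blind attacker would have to guess; for SCTP
    # that is the verification tag carried in every packet.
    if quoted.get("verification_tag") != conn.get("verification_tag"):
        return False
    # A PTB only gives an upper limit, so it is only actionable if it
    # is smaller than a size we actually tried to send; it still has
    # to be confirmed by probing at or below the reported size.
    return 0 < reported_mtu < current_probe_size
```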
R: Sorry — I do intend to do that. The issue of authenticating messages from the network to the end system is, of course, much broader than just this problem, and it would be good to revisit whether or not there might be some new solutions; there is some more work there.
R: I believe the other draft admitted the possibility of just completely ignoring ICMP messages — yeah, we have that in this draft as well — and we definitely did see cases where they were byte-swapped; there definitely needs to be a sanity filter on the signals you get.
R: I don't remember other cases. The other thing — this may not be evident from reading the other document — is that our intent all along had been to facilitate jumbo discovery, and it turns out that, after putting all that work into that draft, what it really did was move the problem down a level: the NICs need to do buffer carving before they can talk to the switch, to ask the switch—
R: —what MTU you can use; and by the time you've buffer-carved the NIC, it's too expensive to restart it, and so it never attained that goal. That was, I'll say, an off-topic agenda item that we didn't adequately vet — it was our motivation for doing all that work, but it didn't pan out. Thanks to the authors for reviewing this.
T: I think one thing I've come to is that the PTB is a nice optimization rather than a must-have. But one thing that, in order to make it useful as an optimization, is worth really calling out and thinking about — and we started a discussion on this in QUIC — is how it interacts with load balancers: because of ECMP, routing it back to the right server in most clustered server environments is an interesting challenge.
T: There are a few different approaches that may be worth calling out or pointing people to. But certainly on the validation side, I've seen NATs do interesting things to PTB messages — causing them to not be useful anymore, or not rewriting them properly — as well as at least one TCP optimizer that did creative things as it was passing back the PTB message.
C: Two bits of feedback immediately from me on those — I'll note them on my card as well. The first thing is: please tell us about strange Packet-Too-Big messages, because it might help us verify things correctly. If you want to send us an email — anybody who's seen a really strange PTB message relating to this — it would be really helpful input. The second thing is—
G: We still have two tweaks to do to the state machine, and so there are still constants and timers which don't have values. And this says when to set the maximum packet size, but I think we really mean when to feed this new MTU to the applications we're providing with service. And then we have a bigger issue, trying to deal with inconsistent results, as we've spoken about.
G: We will then come back to checking with implementations — there was updating of the SCTP implementation, the UDP options implementation is tracking, and I think we hope to have at least a prototype in one of the QUIC implementations by Montreal. And, as Gorry said, we're very interested in people's experience with weird networks and weird MTUs, and when things get really strange in their networks. Thanks.
J
Heading off what I think Lars is going to say: I do hope we can make some progress on this, because it does matter. When we tried to advance IPv6 to full Internet Standard, this is what broke. This is the one where we did the IETF last call and people came back and said, "path MTU discovery using ICMP — yeah, I'll get right on that." So, I mean, this is broken.
U
Yeah — I wonder what I will say. At the moment most of the QUIC implementations do something very simple: you basically just try to send a packet of a larger size, and if that works, you declare victory and move on. And that's okay — it's not great, but it's okay. The other thing: having a functional and better path MTU scheme is not necessarily something we need to ship with the very first version of QUIC.
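The simple approach described here — send a larger probe, and if it is acknowledged, adopt the larger size — can be sketched as follows. This is a toy illustration, not any particular QUIC stack's code; the candidate sizes and the `send_probe` callback are assumptions of mine.

```python
# Minimal sketch of "try a bigger packet, declare victory" probing.
# `send_probe(size)` is a hypothetical callback that transmits a padded
# (non-user-data) datagram of `size` bytes and returns True if it was
# acknowledged before a timeout.

BASE_PLPMTU = 1200          # QUIC's minimum guaranteed datagram size

def probe_pmtu(send_probe, candidates=(1400, 1500, 4096, 9000)):
    """Walk upward through candidate sizes; keep the largest acked probe."""
    pmtu = BASE_PLPMTU
    for size in candidates:
        if size <= pmtu:
            continue
        if send_probe(size):     # probe acked: the path carried this size
            pmtu = size
        else:                    # probe lost: stop, keep the last good size
            break
    return pmtu

# Example with a fake path that silently drops anything over 1500 bytes:
path_mtu = 1500
print(probe_pmtu(lambda s: s <= path_mtu))   # -> 1500
```

A real implementation would also retry probes and re-probe periodically, since a single loss here is treated as "doesn't fit".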
U
It would obviously be nice, right, but it's not blocking us from shipping, because we can quickly rev. That said, I haven't read the draft, because of other things to do this week, so I don't have a good feeling for how complicated these schemes would be. If we're talking about something that is, you know, a paragraph of text, I'm pretty sure that many implementations will probably try to put it in for v1.
U
If it's something that has more transport complexity, maybe not — but we can build this later, right. So in my mind — and this is sort of not with the chair hat on — I see us, after we do v1, actually working on two things: we're probably going to do a v1.1 that's just bug fixes and little things like that.
C
So, speaking as a chair rather than an author: this document has very small paragraphs about how the individual transports, including QUIC, would implement this. It would be good if the QUIC working group — at least somebody in there — reviewed those bits to check they align. Currently they're just pulled from the QUIC spec, and that might be all that's needed: they don't define how it works, but they say there are mechanisms in QUIC which can do these things that you need. So we can get that fixed.
B
I'd prefer the slightly earlier discussion, where QUIC would likely be upwards compatible with this, with this likely not coming in QUIC v1. When I put the state machine diagram up, Lars — this is not the same; this is not a paragraph of text, right, it's a little more work.
G
If you go past my last slide, there's a pointer to our GitHub. All of the work we're doing on this, and my later presentation, is just happening in the open, so you can just go and pull it whenever you want, and I'll be very receptive to people who actually want to look at code, over email.
V
As someone who has tried to run jumbo frames over a load of different networks and tried to actually get this to work, my conclusion is: we need PLPMTUD for everything. All the applications need to do this; that's just the way it is. The operators should build networks that work and send PTBs, but they don't always, so all applications need to do it. PLPMTUD should be generic.
V
Right — but if I turn that on, even if my network equipment is wrongly configured, or somebody put in something that wasn't properly configured, TCP still works. We need to do this for everything, so I'm extremely supportive if this helps application programmers to get that done. Is there an overall recommendation we can put out to make sure that this becomes the best common practice — that all protocols should do PLPMTUD?
V
There are problems with the implementations. Oh, and one more thing: when this fails and the MTU starts going down, I want telemetry to know that it happened — logging, or a cache, or whatever — something generic that can tell me, like an SNMP counter or a netstat counter.
J
Spencer Dawkins — because the IESG was invoked, and you're sitting in the room with two of the fifteen, so you get a clue at least. I don't know — I mean, the problem I would have taking stuff like this forward, you know, the BCP thing: right now it's really easy to say, you know, it barely works.
J
I think that the work that you all are doing to make it work better, and to make it work for people who don't have it now — I think I'm talking about UDP here — I think that's great. I think you're right about the running code and more experience, and I hope you're very wrong about it being more fun than you think, because I have to do the AD evaluations.
T
People set the TCP MSS to something low enough that they don't run into problems, and even then they still run into problems when that breaks. And I think there's a risk that — like, to Michael's point — if we don't do something, getting jumbo frames is never going to happen, or we won't have a path there. So I think it's key to kind of figure out...
T
...how we do this for TCP. And even the Linux black hole detector implementation, as of a year ago — I haven't looked recently — was also really busted in a bunch of ways: its behavior when it kicked in was not well designed and could use a bunch of improvements, and it treated IPv4 and IPv6 the same in ways that didn't make sense.
H
Michael Tüxen. One comment again regarding the question of why this isn't used for TCP: for TCP, if you send probe packets, you probe with user data, so you are taking the risk that your probe packet is dropped and you have to retransmit it. And it's not only a problem that you have to retransmit this data; these retransmissions might also affect the congestion control, and this is what Matt's document talks about.
H
If you have a packet loss, figuring out whether it's related to a Packet Too Big message — and so is not a congestion event — or, if it's not that, doing the congestion control response: that's much more complex than doing this, where we have the assumption that we don't probe with user data, so we don't have to do a congestion control response, and we are not impacting the congestion control with the probe packets.
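The contrast drawn here — padding-only probes versus probing with user data — can be illustrated with a toy loss handler. Everything below (the field names, the halve-cwnd response) is an invented sketch of the idea, not code from any stack: a lost padding probe needs no retransmission and no congestion response, while lost user data needs both.

```python
# Toy loss handler contrasting padding-only probes with user-data packets.

def on_loss(packet, cwnd, probed_size):
    """Return (new_cwnd, new_probed_size, retransmit_needed)."""
    if packet["is_probe"]:
        # Probe loss: just conclude the probed size doesn't fit the path.
        # No retransmission, no congestion response.
        return cwnd, min(probed_size, packet["size"] - 1), False
    # Ordinary data loss: congestion response (halve cwnd here, as a stand-in
    # for whatever the controller does) plus retransmission of the data.
    return cwnd // 2, probed_size, True

cwnd, size, rtx = on_loss({"is_probe": True, "size": 4096}, cwnd=20, probed_size=9000)
print(cwnd, size, rtx)   # -> 20 4095 False
```

The point is structural: keeping probes out of the user-data path keeps PMTU search and congestion control from entangling.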
V
Michael Abrahamsson again. I think I actually am more interested in the black hole detection — that it goes down in packet size when it detects that I'm not getting the packets through. Optimizing it and bringing it back up is actually of less interest; it's great if it works, but I'm more interested in making it work and logging the event of "OK, I have now lowered the path MTU". Build in PLPMTUD, and I want all my applications...
I
Okay, right — this one was recently updated: it's on propagating ECN across IP headers separated by a shim. The executive summary is that this draft is finished as far as either editor is concerned, both in terms of socialization around all the areas it refers to and in terms of the content, so it's ready for working group last call. And that also means the other one that's been waiting for it is ready to go, and I've updated that: the ECN encapsulation guidelines. So — this was a proposed standard.
I
There was an RFC 6040 on tunnelling of ECN, but it had two problems. First, it omitted any mention of shims, and they have their own interesting characteristics. A shim, as shown there, is something that, when it's on the outside, you wouldn't be able to forward on; it's not enough to forward on its own. And you also get them with layer 2 headers stuck in between — usually two IP headers, but sometimes something else, where you don't know what's inside. So that was one thing.
I
The second problem is that it would make a large number of existing implementations invalid just at the stroke of the RFC Editor pressing a button, and so we had sort of failed to take a position. But thinking about it, what we now do is not make the implementers responsible for that, but at least make the operators responsible for configuring their ingress, if they can, by saying those words there — "if it does not or might not propagate ECN", and "if possible" — which means they've got an out if the vendor hasn't allowed them to configure it.
I
The operator must configure the ingress to zero the outer. Right — and David says, when he reviews this, he's going to make sure that that is always the case in every case. All right, so that's sort of the top level. But then, when you look at what this affects, there are all these outputs — mostly of the internet area, some of the routing area — over the years, and this is just a selection, that are widely deployed. We've been through this before, but the main ones... I meant to push the red thing.
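The operator rule being described — zero the outer ECN field at the tunnel ingress when the egress does not, or might not, propagate ECN — might be sketched like this. The function and constant names are mine; the two branches loosely mirror RFC 6040's normal and compatibility behaviour, and this is an illustration of the rule, not a normative implementation.

```python
# Sketch of the ingress-side decision for the outer ECN field of a tunnel.

NOT_ECT = 0b00   # ECN field value meaning "not ECN-capable transport"

def outer_ecn(inner_ecn, egress_propagates_ecn):
    """Choose the outer-header ECN field at tunnel ingress.

    egress_propagates_ecn: True only if the operator knows the egress will
    propagate the outer ECN field when decapsulating.
    """
    if egress_propagates_ecn:
        return inner_ecn   # normal mode: copy the inner field to the outer
    return NOT_ECT         # unknown/legacy egress: zero the outer field

print(outer_ecn(0b01, True))    # -> 1 (ECT(1) copied to the outer header)
print(outer_ecn(0b01, False))   # -> 0 (zeroed, so markings can't be lost)
```

Zeroing the outer means congestion marks applied inside the tunnel would be discarded anyway, so it is the safe choice when propagation at the egress is uncertain.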
I
There are a couple we've left out this time; for all the previous ones, the couple of us have written text into here that updates all these RFCs. So this is why this has to be a proposed standard: because it's updating other proposed standards. We're updating a load of internet-area proposed standards all in one go, which they're going to thank us for. The ones that it doesn't update I'm going to come to on the next slide: mobile IP — rather, proxy mobile IPv6 — and the last one here...
I
...the network service header, which is a recent output of the service function chaining working group, and in that case we're not updating it because we're doing a new draft for that one only. Let me explain. So there's what RFCs we're updating, copied from the updates header; the exceptions are proxy, or just plain, mobile IP. The main reason we dropped that — and it would have been nice to finish it, because then the scope would have covered everything, we believe — is just that the attempt to co-opt the necessary expertise ran past the cutoff.
I
I had a volunteer, but then he just kept saying "yeah, I'm really keen to do this" and didn't get around to it. So I thought: we just can't wait any longer, let's just go. So that's just left in the draft as "this would be advice as to how you do it", but we haven't done it. The network service header — this is a bit of a sorry tale.
I
I actually noticed this was coming along and went and approached the authors of the architecture RFC for the service function chaining working group, and they said, "well, we're not doing the header here; this is cross-layer, it's not part of the architecture." I said, well, cross-layer is very much architectural. But anyway, it now turns out that they've published the shim header — there's now an RFC — and they omitted ECN support. So after all this effort, we've now got another one that hasn't got ECN support in it.
I
So Donald Eastlake has prepared a draft, and I've got co-authorship with him, to fix that now. The problem is, when you add ECN after it's already been there without ECN, you've then got to deal with all the incremental deployment problems: you know, what about cases where the encapsulator doesn't support it and the ingress does, and all the rest of it.
I
But what we're going to do is just write into this one that — because this is something that is likely to be a single operator setting up a service function chaining domain — the operator is responsible for making sure all the nodes around the edges support this, if they want to support ECN.
I
Yeah — you just refer to RFC 6040 if you do it. The main thing is, if you do it when you first design it, it's easy; it's all the incremental deployment that's the problem if you don't do it right at the start. We need to get...
W
On the service function chaining: I think you just said the solution is that it's all within a domain, or within an operator. But in the BoFs earlier they were talking about this stuff, and inter-domain seemed to be very much in the focus of that, so somehow that should be thought about. Okay.
I
Well, yeah — the other reason we said let's just make this a management problem is that with service function chaining most of your functions are virtualized; it's easier to update things. So, you know, if that's not how it's going to go... but I mean, if this is at a BoF, that means it's not yet happened. And we've got a new draft to pick it up. Okay — we'll try to finish this additional draft fairly quickly.
I
My personal view is that this as a whole is an intra-operator thing, so any aspirations otherwise are just aspirations. Right — so the 6040 update for shims we reckon is complete, and I reckon the guidelines for adding ECN to protocols that encapsulate IP, which I just mentioned to Tom, are complete too.
I
So you've got incremental deployment, even though you've got to deploy a number of parts. Obviously the protocol isn't actually a part on its own — it's sent by the host and read by the network. When we had the BoF on this, we decided not to set up a new working group, so I always just give a status update of all the different parts in TSVWG, even though some of the parts aren't in TSVWG. First of all, the second slide is about standards.
I
Then, moving on to the standards sort of things in the IETF: we've updated nearly all of them — every one of those drafts is updated. Only the DualQ update — I'm going to talk about that; it's been through two revisions this time. And the other one: there's a new draft on the interaction between L4S and DiffServ, which I'm now going to talk about, and I've updated the first two at the second level.
I
There's a measurement paper being published on the other one in the IEEE Communications Magazine, and TRILL ECN support, which is mainly being done for L4S, is now in the RFC Editor's queue. And there was a talk on QUIC ECN in the QUIC working group; a decision on that going into release v1 of QUIC is also likely.
I
So that's the status of L4S. As I said, I'd talk about the DualQ; I'll jump straight to the third bullet, because the overload handling was just explanation. We put in some terminology, because the implementers that are working on it were sort of probing into the various variables inside it and we didn't have terminology for those.
I
So we were talking past each other when we were trying to describe what we were doing. The other main thing is that we've started to put in more specific text on flexibility for classifiers: not only ECT(1), which is the primary one, but some possibilities for using addresses, protocols, or DSCP in a sort of operator-specific way — which is what I'm just about to talk about, at least the DiffServ part of it. Okay, please.
I
Right, so this is a new one — you can see it's an individual draft, a first individual draft. And you might ask: well, why do we have to talk about an AQM and DiffServ together, when the other AQMs didn't do that? It's mainly because there are two queues here: we're already using a classifier, and you need to know what order you do things in with other classifiers — and what the point of it all is.
I
So the main thing is that DiffServ controls bandwidth, and by virtue of that it can control latency in some cases. The L4S architecture is purely about latency: it deliberately does nothing about bandwidth, but then you can add ways of controlling bandwidth — one of which being DiffServ — around the outside of it.
I
If you want to — it sort of decouples the two, rather than forcing one to be done because you want to do the other. So you can think of it as two queues as far as latency is concerned, but only one as far as bandwidth is concerned. Essentially, the structure of this draft is that there are four types of interaction with L4S, and the main one is none.
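The "two queues for latency, one for bandwidth" idea can be illustrated with a sketch of the kind of probability coupling the DualQ Coupled drafts describe: a single base probability drives both queues, squared for the Classic queue (matching a Reno-style response) and scaled linearly by a coupling factor k for the L4S queue. This is my reading of the drafts, purely illustrative, not a normative implementation.

```python
# Illustrative sketch of DualQ-style coupled marking probabilities.

def couple(p_base, k=2.0):
    """Derive per-queue probabilities from one base probability p_base.

    p_base is the output of the base AQM controller (0..1); k is the
    coupling factor between the two queues.
    """
    p_classic = p_base ** 2           # Classic queue drop/mark probability
    p_l4s = min(1.0, k * p_base)      # L4S queue (scalable) mark probability
    return p_classic, p_l4s

pc, pl = couple(0.1)
print(round(pc, 4), round(pl, 4))   # -> 0.01 0.2
```

The asymmetry is the point: at the same base probability, the L4S queue is marked far more often than the Classic queue drops, which is what lets scalable and classic flows share bandwidth while the L queue stays shallow.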
I
Sorry — there are four types of interaction between L4S and DiffServ, and the main one is that quite often, certainly in the public Internet, there is only one DiffServ PHB: the default, best effort. So the interaction is none, and that's important, because the whole point of L4S was to give you a low-latency service for everything, and then you wouldn't need any more DiffServ complexity. There's also stuff about...
I
...if you are in that "none" case but you're seeing other DiffServ code points, how you map them to one or the other queue — which is on the next slide. Then there's a load of stuff — really nice ASCII art in this draft, two examples of which you see here. That's not ASCII art, by the way; that's some...
I
Actually, every column except the most right-hand one is just a copy out of RFC 4594, except where it's been updated by other drafts. Next slide. Now I'm going through the right-hand column, which is essentially saying: we reckon it would be quite safe, if you are just doing a dual queue with L4S and Classic and you're not, as an operator, supporting any other PHBs, to put things in as follows.
I
You could put EF, Voice-Admit and CS5 — that's the signaling — into the low-latency queue, and if any of the others have ECT(1) on them you could put them in there as well, because they are essentially saying "we are nice traffic". All right — and then there are footnotes there about that, so I'll just let you go and read and comment on that; this is just a heads-up that it's there. So, here we go: the AQM — the DualQ Coupled — we're getting more evaluations of it for other links than DSL, which is what it started on.
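The suggested mapping — EF, Voice-Admit and CS5 into the low-latency queue, plus anything marked ECT(1) — might look like the classifier sketch below. The DSCP numbers are the standard codepoints; the rule itself is just the "would be quite safe" suggestion above, not a normative table.

```python
# Toy classifier for the dual-queue case described in the talk.

EF, VOICE_ADMIT, CS5 = 46, 44, 40     # standard DSCP codepoints
ECT1 = 0b01                           # ECN field value ECT(1)

LOW_LATENCY_DSCPS = {EF, VOICE_ADMIT, CS5}

def select_queue(dscp, ecn):
    """Return 'L' (low-latency queue) or 'C' (Classic queue)."""
    if dscp in LOW_LATENCY_DSCPS or ecn == ECT1:
        return "L"
    return "C"

print(select_queue(46, 0b00))   # -> L  (EF voice)
print(select_queue(0, 0b01))    # -> L  (best effort, but marked ECT(1))
print(select_queue(0, 0b10))    # -> C  (ECT(0), classic traffic)
```

Note the primary classifier is still ECT(1); the DSCP entries only widen it for an operator who supports no other PHBs.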
I
TCP Prague — there are a number of parts being pulled together; next time around there'll be more on queuing. The DualQ Coupled — they'll be adding the policing stuff, hopefully. And please review the DiffServ one. What I want to ask the chairs on that one, though, is that I don't want it to hold up everything else, because it's sort of more informational — it's about how this works with DiffServ — whereas the other ones are more about specifying how L4S works. Right, I'm out of time.
X
Yeah — so, as David said, this is a proposed project in IEEE 802.1. The belief is that there would be a lot of interest here in the group, so that's why we're sharing it. We agreed back in November to create what we call a project authorization request, which is how we begin a standard there. The motivation for this is described in a report that is available at that link; it's an output from an industry connections activity that the IEEE is also doing, which is sort of a pre-standardization activity.
X
I talked about that at the HotRFC. So the status is that we decided to delay approving this project until we got further feedback and input, and sort of improved some of the simulation data as well — that's why I'm here. The hope is that we would revisit this in July; that's when we would create it. So, what are we talking about? Congestion isolation: it is an amendment to 802.1Q.
X
The goal is to eliminate or reduce the head-of-line blocking that those congested flows are creating — kind of to do some automatic separation, if you will, of mice and elephants, where the elephants might be the ones causing it. And what's important here is that this is intended to work in conjunction with the end-to-end congestion control that we define here in the IETF. So, assumptions about the environment: these kinds of high-performance data centers...
X
Right — assumptions about how people are building these high-performance data centers. These are basically routed links: everything is a switch, but it's doing layer 3 forwarding, with a lot of commercial silicon being used to do this. So that's primarily an L3 Clos network.
X
Now, a lot of people don't like that because it has issues, but if you really don't want to drop a packet, that's still kind of the option to keep in place there. Typically now, since these are all routed links, we're not necessarily using VLAN tags — we may be — but we're used to DSCP code points to indicate things. So this is not a big layer 2 fabric. Now, 802.1 already has some congestion...
X
...management tools, as I mentioned: per-priority, or priority-based, flow control. In our standards we have eight classes of service, and you can enable flow control on a per-class basis. So if you want to create a lossless class, you turn that on; then, when congestion occurs and an egress queue is filling up, or an ingress queue is filling up...
X
...you push back to your upstream neighbor, where they pause — and that blocks all traffic in that traffic class. Some of the downsides of that, of course, are that there may be other traffic in that class that is going somewhere else and we're now blocking it; and if that pause remains on for a long time, it spreads and creates a problem of congestion spreading, which causes buffers to blow up and latency to be all over the map. So a lot of people don't like this.
X
A lot of people turn it off in the middle of the network, specifically because of these problems. There's some discussion that maybe you still have it on at the very edge, with the server, because of mismatching of buffer sizes and things. The other tool that 802.1 has is a thing called congestion notification.
X
Now, this was built a while ago, specifically for the layer 2 fabric — so in this picture, imagine it being an all-layer-2 network. We needed to do something to support non-IP protocols like Fibre Channel over Ethernet or RoCE version 1, which didn't tolerate loss very well, so we wanted to create a layer 2 lossless environment. In this solution, congestion is detected and a message is sent across the layer 2 fabric to the source, and a rate adapter — a rate...
X
...a reaction point, perhaps on the NIC, slows down the traffic to avoid the congestion. It didn't necessarily get a lot of deployment in the real world — a lot of complexity; people didn't like that one either. So what we're trying to propose here is congestion isolation. What, again, is it? It's working in conjunction with the layer 4, end-to-end, congestion control.
X
The goal is to try to build these data centers bigger and faster, and to support lossless transfers as needed, but in a way that's not as impacting as the priority flow control is. We want to do this agnostic of the types of flows there are. And one thing is very obvious: as we go to faster links, like hundred-gigabit Ethernet, the amount of buffer per port per gigabit has really been dropping.
X
So there's a lot of pressure on keeping buffers small, but again that can exacerbate the problem of loss here. So we really want to reduce the frequency of using that per-priority flow control — or the need to use it at all — and really eliminate head-of-line blocking. So again, just a quick summary: you identify the flow that's causing congestion, you reschedule it to a different traffic class, and we want to do that in a way that doesn't create an ordering issue...
X
...although there's been a lot of discussion that maybe ordering isn't quite as bad as it used to be. And then we want to signal our upstream neighbor, so that that neighbor can do the same and eliminate the head-of-line blocking that's caused by pausing the traffic class. We saw this picture before: when we don't have this solution in place, if I send a pause for a traffic class to the upstream switch, a blue flow here that might be going somewhere else, around the congestion, gets blocked as well.
X
So that's the head-of-line blocking we're trying to eliminate. When we detect the red flow here that's causing congestion, the idea is to move it into a different traffic class, schedule it differently, and signal our neighbor to do the same; it'll move it too, and that frees the traffic class for the non-congested flows. So that's the mechanism at the highest level. In a little more detail: we would again be identifying the flow that causes congestion — well, most routers and switches have means of doing that today.
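The sequence just summarized — identify the congesting flow, reschedule its subsequent packets into a separate queue, and signal the upstream neighbor once a threshold is crossed — can be sketched as a toy model. All class, queue and threshold details here are invented for illustration; the real proposal is the 802.1 project text linked from the slides.

```python
# Toy model of the congestion-isolation sequence described in the talk.

class CongestionIsolation:
    def __init__(self, signal_upstream, threshold=8):
        self.congested = set()        # flows moved to the congested queue
        self.queue_depth = 0          # depth of the congested queue
        self.threshold = threshold    # depth at which we signal upstream
        self.signal_upstream = signal_upstream

    def on_congestion_detected(self, flow_id):
        self.congested.add(flow_id)   # step 1: isolate the offending flow

    def enqueue(self, flow_id):
        if flow_id in self.congested:
            self.queue_depth += 1     # step 2: reschedule into the CI queue
            if self.queue_depth >= self.threshold:
                self.signal_upstream(flow_id)   # step 3: tell the neighbor
            return "congested-queue"
        return "normal-queue"

signals = []
ci = CongestionIsolation(signals.append, threshold=2)
ci.on_congestion_detected("red-flow")
print(ci.enqueue("blue-flow"))   # -> normal-queue (no head-of-line blocking)
print(ci.enqueue("red-flow"))    # -> congested-queue
ci.enqueue("red-flow")           # second packet crosses the threshold
print(signals)                   # -> ['red-flow']
```

The blue flow never waits behind the red one, which is exactly the head-of-line-blocking fix the slides illustrate.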
X
When we're marking ECN bits, we do it either probabilistically or by some method by which we're identifying the flow that's causing the congestion. At that point, subsequent packets of that flow that come in to the downstream switch we would want to put in a different queue, and that queue would now be scheduled differently — perhaps at a different priority, perhaps with a new technique that would eliminate out-of-order scenarios.
X
Eventually, if that congestion persists, we may hit a threshold on that queue, and then we would signal to our neighbor: "hey, this flow is causing a problem" — telling the upstream neighbor to do the same. The objective now is that the upstream neighbor will be scheduling subsequent packets in this other traffic class, and we've eliminated head-of-line blocking. If the congestion still continues to persist and we want to be lossless, we might issue a per-priority pause, but now we're pausing only the congested queue.
X
So the goal here is that we're again leaving the non-congested queue unblocked. That, in a nutshell, is kind of the detail; there are a lot of subtleties about it, and we're happy to talk about them. We've done some simulation, and there's a need to do more; all the detail is available at these links. The way we did this was setting up a kind of two-tier Clos network.
X
We used sort of a RoCEv2 traffic model, and we organized the randomness of the servers talking to one another into two factors: one, we created persistent incast with a set of the traffic, and then the other servers just do many-to-many traffic, and we would select flows that match this distribution.
X
We also modeled a couple of different approaches: one, the one that we're proposing in the project, which is just having a single congested queue alongside a non-congested queue; but there are implementations that have already been doing, say, mice-and-elephants separation on the front end, where they may already have that. So we also modeled what happens if we add congestion isolation to that environment as well. We had both of those, and for the results here I'm just giving you a little snapshot.
X
What we looked at here was: how do we reduce packet loss in general? We looked at two different scenarios. One was where I don't tell my upstream neighbor about the congestion and just do local rescheduling for that flow; that had about a twenty-six percent gain over just a pure ECN mechanism. But if we do signal to our upstream neighbor, we're effectively adding more buffering to allow these flows to be delayed while the ECN control loop catches up, and we saw larger savings in packet loss.
X
Because of that reduction in packet loss you get better throughput and lower latency — a number of obviously beneficial factors. We also looked at, if you are using PFC, how much we reduce the use of it — because we know it has bad implications, but it's still your last-gasp effort in order to not drop a packet — and what we've done is reduce the frequency of those pauses significantly.
Y
They will send the XON when they want it to go back on. So generally they have a buffer fill level where, if the buffer gets up to this threshold — the high-water mark — they send the XOFF, and then when it gets down to a low watermark they send the XON, so that hopefully the buffer doesn't run empty before things start coming again.
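The XOFF/XON watermark behaviour just described is a classic hysteresis loop; a minimal sketch, with arbitrary watermark values of my choosing:

```python
# Two-watermark pause/resume hysteresis, as described for XOFF/XON.

class PauseState:
    def __init__(self, high=80, low=20):
        self.high, self.low = high, low   # high/low watermarks (fill units)
        self.paused = False

    def update(self, fill):
        if not self.paused and fill >= self.high:
            self.paused = True    # crossed the high-water mark: send XOFF
        elif self.paused and fill <= self.low:
            self.paused = False   # drained to the low-water mark: send XON
        return self.paused

p = PauseState()
print(p.update(50))   # -> False (between watermarks: no change)
print(p.update(85))   # -> True  (high-water mark crossed: XOFF)
print(p.update(50))   # -> True  (hysteresis: still paused)
print(p.update(10))   # -> False (low-water mark reached: XON)
```

The gap between the two watermarks is what keeps the buffer from oscillating between full and empty on every packet.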
V
Correct. So my point here is that at a hundred gig, one microsecond is ten packets — ten 1500-byte packets, approximately. If I tell it to pause for one microsecond — right, at 100 gig I'm asking it to pause for the equivalent of tens of 1500-byte packets. Is this actually feasible, or am I off?
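The arithmetic here roughly checks out; a back-of-the-envelope sketch (my own, with illustrative parameters) shows that one microsecond at 100 Gb/s carries a bit over eight full-size 1500-byte packets — "about ten", as the speaker says.

```python
# How many full-size packets fit in one microsecond at a given line rate?

def packets_per_microsecond(rate_gbps, pkt_bytes=1500):
    bits_per_us = rate_gbps * 1e9 * 1e-6    # bits on the wire in 1 microsecond
    return bits_per_us / (pkt_bytes * 8)    # divide by bits per packet

print(round(packets_per_microsecond(100), 1))   # -> 8.3
print(round(packets_per_microsecond(10), 2))    # -> 0.83
```

This is also why per-microsecond pause granularity gets harder as links speed up: the same pause duration holds back ten times as many packets at 100 G as at 10 G.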
X
Sure — we're trying to eliminate, or not use, PFC at all, right. And yeah, we have talked about, separately, the ability to do, say, hierarchical queues or things like this; those are sort of out of scope — we wouldn't necessarily standardize them, they're kind of implementation-specific. So that has been...
I
Virtual queues — I mean, what we called pre-congestion notification here years ago — which is where you can notify congestion before the link is actually buffering. That gives you more headroom, so you can start slowing down the other end before you've actually got to that point. So we can talk about that, yeah.
Z
So we actually do have this, and it's not 1500-byte packets; it's usually larger. I'm from Oracle — we actually use two-thousand-byte packets or four-thousand-byte packets. And yes, the buffering is a pain, and yes, we don't like PFC, but we are also trying ECN: we have PFC at the edge, not at the core, and ECN mostly works, except for the initial burst. So the real key to this is that if the end-to-end ECN works, you won't need this.
X
It's in support of ECN, right. The idea is — let me go back to this picture — we're trying to provide enough time for that blue control loop to do its thing. The assertion is that, as we get to faster and larger data centers, there can potentially be more data in flight, and the buffer sizes are not keeping up with that performance. So we want...
X
...we want to make ECN more responsive in those kinds of environments, and so what we're doing is effectively using the switch buffering for the flows that are causing congestion, to give time for ECN to kick in. That's the fundamental, high-level objective there.
B
Also — I'm the one who advocated for having Paul come present here, because you've got two interacting control loops. When that buffer starts to back up, you're going to get pushback via congestion isolation, as opposed to priority flow control, first; that's going to eat switch resources, and you're going to spend switch resources on that link-level pushback until the ECN control loop kicks in and reduces the overload, at which point you can recover the switch resources. Interacting control loops are no end of fun, and that's something that I think this...
X
Yeah — using techniques similar to those you would use to mark the packets with the ECN bits in the forward direction. We have a specified mechanism in our congestion notification; it's a kind of random sampling. It is possible that you might pick the wrong flow, probabilistically, but the expectation is that we're using the same mechanisms that are specified, or that routers are using, to set ECN bits.
AA
Even if you take one away, they will try to fill it again. So it actually only helps you if you either take away a flow that really is sending at a constant bit rate higher than the link capacity, or if you can take all the adaptive flows away and you only have constant-bit-rate flows that are below link capacity. Otherwise, you will still have congestion.
X
AA
Okay, what I actually want to say is that there are two kinds of congestion, and one is simply just the feedback signal for the congestion controller. So the goal of the congestion controller is to drive congestion, to get a signal, to understand where to adapt. Mm-hmm. So as long as your congestion is not, you know, going higher and higher over and over again, that's like the normal operation, and I'm not sure if it makes sense to intercept there.
X
AB
X
Yeah, you would, I mean, whichever... yeah, you would have effectively moved all the flows that are congesting. You know, the scheduling of that congesting queue can be done in a way that, when the congestion subsides, it's almost as though you've added some buffering in the middle of the queue, right? So, yeah, you could be putting all the flows that you're marking in there.
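The isolation idea in this exchange, moving flagged flows into a separate congesting queue that the scheduler serves at lower priority and lets drain once congestion subsides, can be sketched as follows. This is an illustrative model only, not the IEEE 802.1 design; the class and method names are invented.

```python
# Hedged sketch of congestion isolation: flows flagged as congesting are
# steered into a separate queue. A strict-priority scheduler serves the
# well-behaved traffic first, so the isolated queue drains when load drops,
# behaving as if extra buffering had been inserted for the congesting flows.
from collections import deque

class IsolatingSwitchPort:
    def __init__(self):
        self.normal = deque()
        self.congesting = deque()
        self.isolated_flows = set()

    def isolate(self, flow_id):
        self.isolated_flows.add(flow_id)

    def enqueue(self, pkt):
        target = (self.congesting if pkt["flow"] in self.isolated_flows
                  else self.normal)
        target.append(pkt)

    def dequeue(self):
        # Serve normal traffic first; the congesting queue drains only
        # when the normal queue is empty (i.e. congestion has subsided).
        if self.normal:
            return self.normal.popleft()
        if self.congesting:
            return self.congesting.popleft()
        return None

port = IsolatingSwitchPort()
port.isolate("elephant")
for pkt in [{"flow": "elephant"}, {"flow": "mouse"}, {"flow": "elephant"}]:
    port.enqueue(pkt)
order = [port.dequeue()["flow"] for _ in range(3)]
```

Note how the short "mouse" flow is no longer stuck behind the isolated "elephant" packets, which is the head-of-line-blocking relief being discussed.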
AB
AB
X
Which is the kind of call to action, if you will. So again, the next steps are continued technical review and getting feedback. We're gonna try to do some additional simulation. I would love to work with anybody here who was interested or willing, or who has simulation environments, so we could talk about how we could leverage them to model this. We have our own, but you know you need third parties to get convincing data. I'd love to find out how this works with other congestion control algorithms, like BBR, and, you know, time-based schemes as well.
X
So we expect a motion in July to approve this project, or to continue to delay it, based on where we're at. So how can the IETF help? So again, there's already a working relationship between the IETF and IEEE 802. This would be a great topic for discussion, and for discussing the next steps there; I'm happy to contribute there if possible. You can send feedback directly to me, and we also, as I mentioned, the motivation for all this work is written up in a draft report.