From YouTube: TSVWG Interim Meeting, 2020-02-20
A
I still... David, while we're waiting, I don't think he's joined yet; I think we can complete the agenda bashing. So this is the agenda that I'm sharing now and that we plan to use today. Does anyone want to make changes to it? Speak now or forever hold your peace.
B
The only call I think I'd make on the agenda is that we want to be generous to the Some Congestion Experienced folks, because we've cut them short a number of times in the past, and it'd be nice to balance things slightly here. So about 30 minutes, or if it runs a bit long, it runs a bit long. Makes sense.
B
We've given people the customary three minutes; should we go ahead and get started? I think so. Yes, all right. The title is "ECT(1) and the Internet". I'm speaking on behalf of all the TSVWG chairs; however, the usual disclaimer of responsibility applies: all the good stuff in here is shared responsibility, while the really disastrous, obvious mistakes are solely my doing.
B
Okay, this is the usual Note Well slide: this is an IETF meeting, so the Note Well procedures apply. The chairs have been trying to figure out what our path forward looks like, and this one-line Churchill quote seemed to be a pretty good description of where we find ourselves: "This is not the end. It is not even the beginning of the end. It is, perhaps, the end of the beginning."
B
So what do we think the end of the beginning might look like? Backing off from the deep technical details of both proposals, SCE and L4S, which are important, there is a core decision to be made about ECT(1): how ECT(1) is going to be used in the Internet. In the issue tracker this is issue number 2000, though you'll see in a minute that we plan to frame it a bit differently.
B
The chairs plan to set total content guidelines for those few slides, review them, and post them at least a week in advance, after which we get to frame and moderate the Vancouver meeting discussion, beyond which we hope a small miracle happens and we actually make a decision. This is wishful thinking, but we need to be optimistic. For now, we're going to review the next two slides for content. The question of what happens next, before we put up the slides, is going to be the real piece of fun.
B
Okay, for the North Americans: the coffee's not working, but I'm impressed that I don't hear anything yet. All right, continuing. Hang on. This is the proposed first slide for Vancouver, which is our attempt to frame the decision, and we're framing the decision around the ECT(1) code point. We'll walk through this slide and describe a few subtleties. The background here is RFC 4774, which is a document on specifying alternate semantics for the ECN field.
B
At the time 4774 was published, it assumed that a DSCP would be used as the signal for alternate ECN semantics. We now have two proposals that are planning to use the ECT(1) code point as that signal, and the decision that we think the working group should be making is: how does ECT(1) signal alternate semantics to the network? Is it an input, for example as used by L4S as a classifier for queue selection, or is it an output, for example an indication of a lesser degree of queue congestion, as in SCE?
B
Any comments or questions on the content of the slide? I would point out that the "e.g."s here in (a) and (b) are very important, because the decision being made is about how ECT(1) is used. It doesn't automatically approve the corresponding proposal going forward, but it gives you some indication of which way the wind is blowing.
A
Sure. This muted... David, oh yeah, he does show up as connected with audio. One thing I would also note, maybe as one of those things that need Area Director attention: there's an assumption here that we can only do one or the other, L4S or SCE, and that the working group can't do both, because that's not viable for the Internet. So, since we have some AD presence here, it's useful to make sure it's understood that that's the assumption.
F
Maybe a remark from me, Koen De Schepper: do I understand the decision between (a) and (b)? If you say it's purely input or output, of course both have consequences, and do we take the consequences into account as well? I mean, it needs to be founded on some mechanism behind it that is provable to work, no?
B
The decision gets made in the context of discussing the specific proposal to use it, which is what the working group proceeds to next, after making that decision. But the decision is broader in scope. In particular, see the comment I sent to the mailing list about how whatever we do here going forward, even if it doesn't pan out, is going to leave behind running code that knows what decision was made here, and that's going to be with us for a while.
H
B
Who was speaking there? Steve Blake? All right. I would be interested to hear what the proposals' authors think. I see a question from Sebastian: could we require coupling any ECT(1) overloading with a DSCP? It's an interesting question. My personal opinion, and this is now not speaking as working group chair, is that I don't believe either L4S or SCE wants to do that.
I
B
Agreed. However, I know at least in the context of L4S there was an individual draft that was digging in this area, and it's basically been abandoned. I hope I'm not putting words into Bob Briscoe's mouth by saying that. Bob, can you speak to us, or do I need to try to relay stuff off the chat?
J
B
If someone wants to do a DSCP-scoped deployment, I think we're in a different place with that, as Gorry sort of summarized: if you go forward with a DSCP-based deployment, then RFC 4774 is your friend, and it's relatively straightforward to figure out what we're trying to do. You will, however, have interesting running-code issues with the common practice of bleaching DSCP to zero at network or autonomous-system boundaries. Noted.
H
Steve Blake again. Can I make another comment? Deploying any of these schemes requires changing the data path and configuration of network nodes, so for any current practice around what happens with DSCPs, I question the relevance, considering that anybody deploying either one of these experiments is going to have to be tinkering with these boxes anyway.
B
All right, since Steve has so invited us to go to the next slide, let me bring it up; I think it speaks to his point. The chairs thought this was important enough to go ahead and include in the proposed decision. The title of the slide is "friendly coexistence with competing traffic", and we observe that both proposals have chosen option 3 from RFC 4774 for how traffic using the new ECN semantics interacts with traffic that doesn't. Now, the full name of RFC 4774 option 3, which is in section 4.3 of RFC 4774, is "friendly coexistence with competing traffic", and for the purpose of clarity, or at least of motivating what's going on here:
The competing traffic that's interesting is that which uses existing TCP congestion control, such as Reno or CUBIC, and the coexistence focus is at unmodified nodes, since this is going to be let loose on the entire Internet, and in particular at shared-bottleneck queues with a single-queue ECN AQM. The reason for that focus is that fair-queuing network nodes don't have coexistence problems: they very cleanly separate the competing traffic out into separate queue instances and deal with it quite nicely.
B
The
scenario
of
that
is
important,
for
this
slide
is
traffic
competition
at
a
shared
bottleneck.
You
with
the
CCNA
QM
starvation
of
one
class
of
traffic
is
not
shuttle
outcome,
and
in
this
scenario,
if
you
don't
starvation,
the
competing
traffic,
which
is
using
TCP
congestion
control
such
as
you
know,
are
cubic
drives.
The
bounded
queue
occupancy
level
and
both
Republicans
need
to
explain
how
to
deal
with
the
scenario
in
order
to
move
forward.
J
This is Jake Holland again. I'm not sure about saying that the FQ network node, that fair queuing, is out of scope here. I'm not quite sure it's accurate to say there are no coexistence problems: I think that building a standing queue, even in an FQ node, for example, could be considered a sort of a problem, and there can be hash-bucket collisions and such. But I think you're saying that these should be de-emphasized in terms of the decision focus, I think.
B
And the point... what did I do there? Sorry. The point here is exactly that, but the discussion needs to focus on the shared-bottleneck queue, because that's where the big problems are; the FQ network nodes are not a cause of big problems there.
I
Well, David's editing, so you can see it, and the story here: I'm not convinced that this is a showstopper; I'm convinced it's something we need to think carefully about and write some text about. If we really are deploying a new technology in an experimental case, then we might want to make some decisions here, but we can't just do that without considering it.
A
David, I would recommend we take that offline. These slides are out now, people can look at them, and we can talk about them between now and Vancouver on the mailing list, if we're mostly agreed that this is a fine tack to take for the working group. And I would suggest that we move on and start talking about SCE, I think.
E
Obviously we're not expecting a decision to be made today, but this slide set was prepared before those slides were put up. I believe SCE is in a good position to do so, after about a year of actively working on this particular problem. We didn't get much time to show that in Singapore, so we're grateful to have this opportunity instead.
E
The baseline latency is 80 milliseconds round trip, which is a typical Internet path, and you can see that there are small excursions above that of just a few milliseconds as CUBIC probes for capacity, which in this case is 50 megabits a second, and the received SCE marks tell it to back off at the peaks of the latency. Overall, CUBIC is using maybe ninety percent of the available capacity, much more than Reno would.
E
And of course a dumb drop-tail FIFO would look far worse in terms of latency. We know this works well in practice on the public Internet; in fact, we're using something very like this to talk to you right now. So: can we improve the link utilization and remove those latency excursions? That's the question here, and this is our approach to solving it.
E
A bit of history. In the late 1980s, Van Jacobson established that packet loss was a congestion signal; it's still often the only available signal that can reliably be detected, and from it the familiar AIMD sawtooth pattern emerged. At the time, packet buffers were typically only eight packets long, but hardware vendors soon learned that deeper buffers increased throughput.
E
The
expensive
agency,
so
ecn
was
introduced
so
that
second,
in
congestion
no
longer
required
a
topic.
Backus
wastes
capacity
and
incurs
application
latency
soon.
Transmissions,
AQL
micro
essence
then
appeared
which
could
reduce
latency
without
destroying
the
ability
of
the
cue
to
absorb.
First
deployment
has
been
slow,
but
is
now
significant.
E
Cns
congestion
experienced
lacks
a
reliable
mechanism
in
TCP
due
to
the
sticky
ECE
suitable
on
slags
feedback.
This
also
makes
it
imprecise
and
heavy
handed
CD
by
contrast,
is
fed
back
concisely,
but
underlying
through
an
additional
plan
see
he
itself
is
the
killer
to
spare
ECT
wanna
code,
unambiguous
output,
signal
form
of
the
networks.
E
This is absolutely fundamental to SCE's design and has not changed since a year ago, when we first presented it; some of the details have changed. Comparing this signaling alphabet with L4S, we see some clear parallels: each SCE mark is essentially semantically similar to each L4S CE mark, and the same end result is achievable with either signaling system, assuming they operate in isolation and on a steady path. But you can see that L4S would have some more difficulty fitting into the same network as RFC 3168 ECN traffic, responding to a sharp decrease in capacity.
E
To
reiterate,
SCE
retains
the
mobile
ECM
signaling
in
its
entirety.
Then
the
marks
outgoing
traffic
was
ECT,
similar
informing
the
network
that
it
will
correctly
process
a
see,
EEMA
little
box.
May
then
Sigma
CD.
The
receiver
will
inform
the
center
with
ECE
and
the
center
acts
was
congestion
window
reduced.
L
As the control loop stabilizes, then you switch to the RFC 3168 bottleneck, and (it's a little small, but) there are red CE marks at the bottom that show where the latency peaks occurred. Then we transition back to the SCE bottleneck, and we see a little spike in SCE marking at the beginning, and then the control loop stabilizes and the flow continues.
E
The next test has to do with capacity reduction. Steady-state conditions are all very well, but the Internet is not so forgiving: at a minimum we should expect a competing flow, or ten, to start up and share our bottleneck. We simulated this with a 10% and a 90% reduction in path capacity, followed some time later by restoration. Here we see the 10% reduction, which SCE flows achieve entirely through SCE signaling in this particular case, and without more.
E
There's just a slight peak in the SCE signaling at the beginning of the capacity reduction. When we do a 90% reduction in capacity, there are some latency effects, and this does actually require some CE marks to deal with; but since SCE marks are seen amongst them, the SCE sender knows to do a big reduction, and this is done entirely transparently, and you can see the effects.
E
But,
honestly,
this
is
a
pretty
clean
response
to
a
severe
event
and
it
also
recovers
cleanly
when
the
capacity
return
look
at
the
signaling,
and
we
can
see
that
there
is
a
large
pink
in
the
SCE
marking
and
we
can
also
see
a
significant
amount.
O
ce
marking
accompanying
that,
along
with
some
seeds
of
new,
are
responses
around
here.
E
Non-SCE traffic. First, let me emphasize that when SCE traffic meets existing infrastructure, such as a drop-tail FIFO, a dropping AQM, or an RFC 3168 ECN AQM, it behaves just like any other conventional TCP deployed today. This means SCE endpoints can safely be deployed on existing networks; obviously we'll go into more detail on that in Vancouver. It's when an SCE middlebox is deployed that coexistence with conventional traffic has to be addressed.
L
What we're looking at here: when you see the throughputs, I should point out that you're looking at the kernel's delivery rate, so that's a bit more accurate with these tight control loops than what you can see from network-level results. This is a CUBIC flow starting up, and then after 10 seconds or so a CUBIC-SCE flow does its slow start; that's the trace in orange. So you can see that the CUBIC flow in green has its drops in link utilization.
E
Now,
for
those
cases
play
even
mfq
is
too
much
leads.
The
model
Casa
technique
called
approximate
fairness,
which
is
implemented
in
certain
high-end
cisco,
and
so
this
is
a
technique,
that's
known,
to
work
up
to
100
bits
a
sec,
so
this
is
biases
the
condition
signals
of
different
flows
based
on
their
relative
occupancy,
a
listen
Akil
most
of
the
required
information
was
already
present
in
our
earlier.
Cnq
is
cheap
nasty
killing
implementation.
E
The RTT dependence is one of those somewhat mathematically fundamental things. With a window-based congestion control you're obviously sending a certain amount of data, the size of the congestion window, per RTT; and obviously, if the RTT is longer, then your throughput will be down by the same factor.
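The relationship described here is just throughput = cwnd / RTT; a tiny illustrative sketch with made-up numbers:

```python
def throughput_bps(cwnd_bytes: int, rtt_s: float) -> float:
    """Average send rate of a window-based flow: one window per round trip."""
    return cwnd_bytes * 8 / rtt_s

short_rtt = throughput_bps(64_000, 0.010)  # 64 kB window at 10 ms RTT
long_rtt = throughput_bps(64_000, 0.020)   # same window at 20 ms RTT
# doubling the RTT halves the throughput
assert short_rtt == 2 * long_rtt
```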
E
So our approach to scalable throughput is simply to implement CUBIC, and to adapt CUBIC to understand SCE signals. Now, CUBIC was of course designed to deal with "long fat networks", as they used to be called, and what we do on an SCE mark is simply to rescale the polynomial growth curve to its inflection point, putting it at the current congestion window at the current time; then we apply the usual SCE response to both the current congestion window and the inflection-point level. This we find works pretty well.
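A minimal sketch of the re-anchoring just described, not the presenters' actual code; the class shape, the SCE_BETA backoff factor, and the time units are illustrative assumptions:

```python
CUBIC_C = 0.4    # standard CUBIC scaling constant
SCE_BETA = 0.98  # assumed per-SCE-mark backoff factor (illustrative)

class CubicSce:
    """Toy CUBIC window curve with the SCE re-anchoring described above."""

    def __init__(self, cwnd: float, now: float = 0.0):
        self.cwnd = cwnd
        self.w_max = cwnd   # inflection-point window level
        self.epoch = now    # time origin of the current cubic curve
        self.k = 0.0        # time offset of the inflection point

    def window_at(self, now: float) -> float:
        t = now - self.epoch
        return CUBIC_C * (t - self.k) ** 3 + self.w_max

    def on_sce_mark(self, now: float) -> None:
        # Rescale the curve so its inflection point sits at the current
        # congestion window and the current time...
        self.w_max = self.cwnd
        self.epoch = now
        self.k = 0.0
        # ...then apply the gentle SCE response to both levels, so growth
        # resumes slowly from just below the previous operating point.
        self.cwnd *= SCE_BETA
        self.w_max *= SCE_BETA
```

After a mark, the curve is flat around the new operating point and only later accelerates, which is the plateau behavior the talk relies on.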
E
Another difficult environment we would like to emphasize is bursty networks, and this happens particularly when you have a wireless link: this could be Wi-Fi, 3G, LTE, probably even 5G; there will be some burstiness, and high-fidelity congestion control tends to overreact to this. Now, queues are meant to absorb bursts, and the sender needs to let them drain out smoothly.
D
Most of my questions have escaped me, but the LFQ mechanism to me just seems impossible in hardware queues. I'd describe it as requiring FIFOs, which is how hardware queues are built; but thinking about how it's done, the hardware would need to be able to reach into the queue at some arbitrary place to pull packets out for transmission.
E
You'll
notice
that
we
mentioned
having
three
fine
phones
and
the
only
tears
that
we
we
circulate
the
package
rather
than
pulling
stuff
out
of
the
middle
and
having
thief
I
host,
makes
that
relatively
efficient,
as
opposed
to
having
only
to
one
being
for
the
spark
you
and
one
for
the
Pope
you
by
having
a
third
queue.
We
have
an
efficient
way
of
recycling
recycling
recirculating
the
bulk
in,
and
then
we're
only
pulling
from
the
head
of
any
particular
fiber.
E
Where they're sharing equally at some point; but I believe the total capacity utilization is higher than it would be with two conventional flows. Now, we do also see that the SCE flow is running at a lower throughput than the conventional flow, and that's because it is being controlled to a lower latency level; but then the approximate fairness is having to compensate by biasing the congestion signals of the burstier flows, and that's the result.
L
I think maybe it's just that with the single-queue variant, where obviously in a single queue there is no such thing as per-flow rates, we view that as also a way to transition: where you've got two SCE flows, then as you increase the number of SCE flows you'll be able to utilize more of the link, and then eventually, when they are all SCE flows, they should have lower latency overall. So it's more of a gradual transition in the case of a single queue.
D
It strikes me that the data you're presenting is somewhat anecdotal: it's a handful of cherry-picked cases, a short time series of throughput and latency. I think to make a convincing case you really need to demonstrate the behavior across a much wider range of scenarios, and present the data in a way that's more useful for comparing performance. So, for example, CDFs of latency are nice, but looking at the tail latency rather than just trying to pull numbers off a short time series.
K
Maybe three minutes for each of them. I'm sure everyone's read them, because they're quite short; that's understated British humour there. That's mainly the three L4S drafts. The DualQ Coupled AQM draft is in pretty good shape: we had a look at it, had other people review it, and couldn't find anything particularly wrong apart from things like terminology and wording, so we've got some updates to do to that.
K
So I'd ask those that didn't like the style of it to have a look again. Last time I said we could probably find the problems ourselves, no need to list them all; but if you can now suggest text, or identify where the problems are, when you review it again, that would help. I'm thinking particularly of Jake, who was particularly concerned about that. Okay, so next.
K
It went in there right in the early days, probably about five years ago, that an AQM must mark ECT(0) packets under the same conditions that you would drop non-ECN-capable packets. That's sort of stronger than even what RFC 3168 said, because 3168 doesn't say you must mark ECT(0) packets; it just says, you know, if you support them, you must. So we needed to fix that.
K
We've
said
if
you,
you
need
not
market
easier
packets,
but
if
you
do,
you
do
so
under
the
same
conditions
and
you
drop
not
easy
t
packets,
so
that
was
just
a
bit
of
a
gap
and
we're
asked
to
add
information
about
how
to
monitor
deployment
of
l4
s
for
any
harm
on
the
internet
where
any
harm
it
might
be,
causing
so
I've
added
a
bit
on
that
and
then
loads
of
other
minor
edit.
So
I
think
the
identify
draft.
It's
I'm
happy
with
it
now,
but
it
needs
reviewing
they're.
K
Just
two
things
I'd
like
to
add,
and
in
addition
to
that,
there's
Jake
is
ask
if
we
can
add
some
material
on
at
least
documenting
an
STD
based
alternatives.
So
I've
got
a
response
back
on
the
list.
The
other
things
I'm
not
happy
with
that,
may
need
to
be
added
after
every
review
by
Brian
carpenter,
but
in
some
wording
he
wanted,
which
was
recommend
to
standard
use.
B
Bob, this is David. I can try to help with that; I understand where Brian's coming from on that one, and yeah, you're digging in the right place: that particular concern is specific to how diffserv was specified. I'll volunteer to help with the text. Also, I'm going to assume that when you work over the "need not mark ECT(0) packets" text, that will turn into a SHOULD or a MUST, right? That's right.
K
Any questions? Okay, should we move straight on then? All right. There's not much to say about the AQM code for this meeting: DualPI2, plus FQ-CoDel with a threshold (we haven't really got a name for it; call it CoDel-TH, for threshold: FQ-CoDel with a threshold). That code is stable; I mean, the FQ-CoDel threshold code is easy, so all the AQM code is really quite stable.
K
I've divided the requirements into the sort-of-safety ones and the optimizations, the performance improvements, and then I've got those repeated. I'm not going to read the left-hand side of the slide; it's a copy of an earlier slide, but these are requirements that we set ourselves, and that other people set, back in 2015, to make scalable congestion control safe on the Internet. So, next slide.
K
That was the status of this traffic-light chart in November '19, and if you flick back and forward to the next chart, which is the status now, you should be able to see the differences. The main focus is in the text on the left: what we've been working on is what's not greyed out, and you'll see "reduce the RTT dependence" (we've now got that running and working, and it's going to be talked about later) and also the classic-ECN bottleneck detection code.
K
We've got that working, and evaluating it over a wide range of cases is in progress. We should definitely have that all sorted soon, and I'm hoping, you know, today or tomorrow, once everyone on the team has had a chance to look through it, we can post a large set of results on that. There's something like 4,000 different scenarios, with all the plots.
K
You get this potential unfairness between L4S and classic flows: it's TCP Prague, the L4S congestion control, that's much more aggressive, and that's the whole point of the queue-isolation solutions. But if you haven't got such a solution at your bottleneck, the idea is that the end system is meant to detect that it's in a 3168 bottleneck and start behaving more like what 3168 would expect.
K
So here you see roughly, I don't know, twelve-times unfairness in the middle of the plot, in the worst case. This is the sort of thing that was being mentioned earlier: you need multiple link rates and multiple round-trip times, and you need to show where the mean is, and some whiskers to show what the typical deviation is across the number of tests, rather than just one, what Greg called cherry-picked tests.
K
To be clear, this is about the problem; this isn't the solution yet. As I say, we want to summarize all these 4,000 scenarios we've done into those sorts of plots, but we're just going to run that all through first. It works almost well, and we're still tweaking things, because it's a heuristic algorithm; I wouldn't say it's an elegant mathematical algorithm. It's purely based on finding a sweet spot.
K
So if there's a brief period where the algorithm is getting it slightly wrong, it will just push slightly towards the other behavior, and then it will come back again; and if it has transitioned right over to classic, and there's a slight period when things look a bit more scalable, it won't immediately jump to the other side. It might transition a bit, but then go back to classic. We can see that happening in the tests, and I'll post all the tests after the session, or in a couple of days' time.
K
So people can see this classic-ECN variable, that's on the horizontal axis there, and how it moves in real time as the classic bottleneck is detected. Just to be clear: it always starts as scalable, and we can use the per-destination cache to record what appeared to have happened for this host before, so it may not always start as scalable.
K
If last time it ended up looking classic, it could start straight away as classic. But it starts as scalable because the behavior, even if misjudged, isn't that worrying in the first second or so: for fairness you're talking about long-term behavior of long-running flows, not really about dynamic behavior. And it starts maintaining the scores straight away: from the first CE mark it maintains the underlying metrics right from the start, and the reason for the algorithm behaving like it does in the plot is so that it doesn't immediately jump.
K
I'll describe the plot now, actually. The bottom, the x-axis, is the score based on the metrics that it's measuring, which I've got on the next slide, if I remember right. The y-axis is this value C, which is what drives the behavior: it goes from zero to one, and if it's zero it's scalable, and if it's one it's classic. So that's how you get your hysteresis: by having this sort of piecewise ramp, and it will move faster across the transition the more strongly classic it's detected to be, or move faster back the more clearly scalable.
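A toy illustration of the ramp just described (the step size and clamping are assumed for illustration, not the real heuristic): each round trip, the evidence score nudges the blending value C between 0 (scalable) and 1 (classic), so brief misclassifications only shift it slightly.

```python
def update_blend(C: float, score: float) -> float:
    """One per-RTT step of the blending value C (0 = scalable, 1 = classic).

    score is in [0, 1]; values above 0.5 are evidence of a classic
    (RFC 3168) bottleneck, and stronger evidence moves C faster.
    """
    step = (score - 0.5) * 0.2       # assumed gain, purely illustrative
    return min(1.0, max(0.0, C + step))

C = 0.0                               # always start as scalable
for _ in range(3):                    # three RTTs of classic-looking evidence
    C = update_blend(C, 1.0)
# C has crept toward classic rather than jumping there in a single step
```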
K
Instead of (I can't remember the exact variable names) the reduction being: reduction equals cwnd times alpha over 2; instead, you do the max of alpha over 2, which is the Prague one, and C times the fixed alpha you'd get for a classic congestion controller. We use ABE for that congestion control, because this is ECN. So, once the thing has moved over to classic,
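The max-based blend just described can be sketched as follows; the ABE_BACKOFF constant and the function names are illustrative assumptions, not the actual implementation:

```python
ABE_BACKOFF = 0.2  # assumed classic/ABE-style reduction fraction per event

def window_reduction(cwnd: float, alpha: float, C: float) -> float:
    """Window reduction per congestion event for the blended response."""
    prague = cwnd * alpha / 2          # scalable (DCTCP/Prague) response
    classic = C * cwnd * ABE_BACKOFF   # classic response scaled by the blend
    return max(prague, classic)

# fully scalable (C = 0): only the alpha-proportional response remains
# fully classic (C = 1): at least the full ABE-style reduction applies
```

The max guarantees the response is never weaker than the blend-scaled classic one, which is why the sawtooth can never grow larger than ABE's.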
K
It will give you just the full, solid, large sawtooth, as you would get with ABE; and if it's at the other end, you'll get the adaptive sawtooth, which is what you get with Prague. They're shown small here, because they usually are, but the whole point of it is that it's adaptive, so they can go large; in fact they can't go larger than ABE's, because that max will pick that up, and so it can respond faster than ABE would.
When it's coming down, that is. Just a point that there's an active link in the slides to the paper on this, and we will put the results in that paper, or a link to the results, probably both: a selection of the results and a link to the full set. It gives the rationale for it all, and everything. So, next.
K
One level deeper... I think we're running out of time, because we want to talk about RTT independence, so I won't go one level deeper; you can read it in the paper the slides point you to. Next. Oh, I guess it is important to say it's working, and... I don't know, are you on the call?
B
There's a question in the chat which you might want to take up offline, because it looks like a rat-hole invitation. In essence, there was a comment on hysteresis and a risk of oscillation, which I think is going to boil down to explaining what the damping properties of this control loop are if you attempt to drive it into oscillation.
K
And that's true; it's stickiness, not hysteresis, sorry, wrong word. If it's oscillation within one of the extremes, nothing happens. If there's oscillation into the transition area, the whole point of using a transition rather than a modal thing is that it will just give you a slightly more classic than scalable congestion control as we go through those oscillations. That's the whole point of it not being modal. Understood?
B
Please, let's just take this offline, because there is a concern, when you have an oscillating control loop, about whether the oscillations are stable or increase, or whether the control loop has damping properties that will tend to damp out the oscillations over time; and that's not something we have time to dive into in this meeting, I don't think. Okay.
K
Very briefly: it essentially steps; every round-trip time it will push more or less one way, depending on what the measurements are, so it's damped in that sense. It doesn't jump straight over: the score has to be high every round-trip time for it to keep moving in one direction.
K
So when I posted it, I did ask: please do criticize the design, because that's the best time to criticize something, not waiting until it doesn't work. So if you don't think it's going to work, or whatever, go ahead and look in there and tell us what you think might be better, and so on. Please do.
F
We get a ratio of 15 and a half. Next slide. So there is also the argument: okay, we have in the DualQ two different targets, two different queues; so there too, two flows with the same base round-trip time, using the different queues, L4S or classic, with the different queue targets in the middle, get a similar unfairness. So it's not only a problem, let's say, of the AQM, which cannot know what the round-trip time of the flows is.
F
We developed a new Prague add-on to steer the round-trip-time dependence, and this code is available and will be released soon. We also recorded a demo video, which we can show if there is time. So the goal is that in Prague, instead of the rate, which is proportional to 2 divided by the marking probability times the round-trip time, this round-trip time can be replaced by a mapping function, a kind of target round-trip-time function, that we can define inside Prague.
F
So, for instance, we can define this function as a function of the round-trip time; it can be anything, and here we just use the round-trip time again, but add fourteen and a half milliseconds to it, and you will see the problem solved, right? This is also what we can demonstrate: this really works.
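A small sketch of the substitution just described; the rate model (rate proportional to 2 divided by p times RTT) is as stated in the talk, while the function names and example numbers are assumptions:

```python
def prague_rate(p: float, rtt_s: float, offset_s: float = 0.0) -> float:
    """Relative steady-state rate: proportional to 2 / (p * mapped RTT)."""
    return 2.0 / (p * (rtt_s + offset_s))

p = 0.05
# without the mapping, a 1 ms flow outruns a 15.5 ms flow by 15.5x
plain = prague_rate(p, 0.001) / prague_rate(p, 0.0155)
# with a 14.5 ms offset the imbalance shrinks to about 1.9x
mapped = prague_rate(p, 0.001, 0.0145) / prague_rate(p, 0.0155, 0.0145)
assert mapped < plain
```

Adding a constant to the mapped RTT compresses the ratio between short- and long-RTT flows without changing the response to marking probability.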
F
But of course, is this now the magic equation that we want? That's a little bit up for discussion. Maybe in the next slide you see there are different reasons why we want to have a smarter mapping function, a round-trip-time mapping function. Of course, we want long-term throughputs balanced across flows having different round-trip times. We also want to handle shorter flows faster: typically, because we have a shorter round-trip time, we can handle them faster and accelerate faster at small round-trip times.
F
Well, there can be many ideas behind this. We have already been exploring these things, and it's all possible. The only thing is, of course, the question: this smart function, do we decide on it as an IETF, or is it application dependent, and can applications decide on it as well? That's probably the discussion we have to go through in the future.
F
Let me give a small overview of what is actually done, what is changed in TCP Prague to have RTT independence. So the first point is that we control the additive increase to behave as a flow at the target round-trip time, so we treat the same amount, or frequency, of marks as at the target round-trip time. We leave the multiplicative decrease unchanged, to preserve the responsiveness as much as possible and preserve the low latency. We also changed the EWMA update frequency to the target round-trip time.
F
Then we also ensure that they converge to the same alpha, even on a step, because there is different behavior on a step and with a random-probability marker like a PI controller. All of these situations we got managed. Next slide. There are a few tweaks that we also had to correct or change a little bit, to have smoother throughput and better stability: we modified the base behavior of DCTCP, which does not increase for a full round-trip time.
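The first modification, scaling the additive increase to the target round-trip time, could be sketched like this (a hypothetical illustration, not the actual TCP Prague code; the function name and the 15 ms target value are assumptions):

```python
# Hypothetical sketch of RTT-independent additive increase: a flow with
# an RTT shorter than the target grows its window more slowly per round
# trip, so its window growth per unit of wall-clock time matches that
# of a flow running at the target RTT.

def additive_increase_per_rtt(measured_rtt_s: float,
                              target_rtt_s: float = 0.015) -> float:
    """Window increment (in segments) applied each round trip."""
    if measured_rtt_s >= target_rtt_s:
        # At or above the target RTT: classic one segment per RTT.
        return 1.0
    # Below the target: scale down so growth per second is unchanged.
    return measured_rtt_s / target_rtt_s
```

With a 5 ms measured RTT and a 15 ms target, the flow adds only a third of a segment per round trip, which over one second gives the same total growth as a 15 ms flow adding a full segment per round trip.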
K
M
Can I ask a question in between? Yeah, okay. So this looks great, but how do you make sure that this increased RTT independence of TCP Prague will actually solve the dual-queue problem? Do you have any plans of making TCP Prague mandatory and then enforcing that on admission to the LLQ, or how do you want to deal with this issue?
M
F
M
Okay, let me try to expand on this before you go answer. The issue is, and I'm not sure whether everybody is conscious of that, that the dual-queue sharing between the two traffic classes is approximate, and this is evident from the old TCP Prague behavior without the increased RTT independence: at short RTTs, the LLQ will crowd out traffic in the other queue. You argue that increasing the RTT independence is the solution to this problem, and that is fine.
M
All I want to know is how you make sure that all the flows that actually enter the low-latency queue, and can crowd out stuff in the classic queue, actually use a transport that's sufficiently RTT independent, so that you have the guarantee that's required. I only see some sort of admission control, but please go ahead.
F
It is the same question: how can you guarantee today that somebody is not using a congestion control, or is using TCP without congestion control? Do we have a policer on each bottleneck? No. We have policers in the network at the right places, where it is important, for instance between users. Yes, so it's not a generic mechanism for the network; it falls short of policing. It can be added on top, and it will be added on top where it is important. Where it only hurts yourself, the operator probably won't make the small investment to put in a policer or an FQ or whatever.
M
Cool. My concern is not an AQM under my own control. My concern is a badly managed AQM somewhere upstream of me, where I have to live with the consequences of a somewhat unsafe design. I just wondered how you can make sure that this thing, which is known to have the potential side effect of starving classic traffic, is exceedingly unlikely to do so. Just telling me what the operator might do is not going to give me a warm fuzzy feeling.
B
I
F
F
F
We first show a DualPI2 AQM. As you know, it's a DualPI2 AQM, and at the top we will use Prague and at the bottom Cubic, and first show the problem, let's say, when it's not round-trip-time independent. Then, if we use Prague with round-trip-time independence, you'll see that problem solved. And it's just this equation, which, in the end system, with its target-RTT mapping function, just adds the difference between the queues, the two targets.
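The mapping just described might be sketched as follows (a hypothetical illustration; the queue target values and names are assumptions, not taken from the demo):

```python
# Hypothetical sketch: the end-system mapping function adds the
# difference between the two queue delay targets to the measured RTT,
# so a Prague flow in the shallow L4S queue responds as if it saw the
# deeper classic-queue delay. Target values here are illustrative.

CLASSIC_TARGET_S = 0.015  # assumed classic-queue delay target (15 ms)
L4S_TARGET_S = 0.001      # assumed L4S-queue delay target (1 ms)

def mapped_rtt(measured_rtt_s: float) -> float:
    """RTT fed to the congestion response instead of the measured one."""
    return measured_rtt_s + (CLASSIC_TARGET_S - L4S_TARGET_S)
```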
F
Okay, press play. So you'll see, first we start a Cubic flow and we add a DCTCP or Prague flow to it. Base round-trip time is 0 milliseconds. So you see there is quite a bit of rate unfairness. When we click window fairness, at the bottom here we show, instead of the rate, the window, and you see that window fairness is more or less still obeyed. That's expected.
F
If we increase the base round-trip time, the window fairness is still more or less obeyed; Cubic is a bit slower. So if we switch them to rate fairness, you will see that with a bigger base round-trip time the difference between the flows is less. So this is what we have today: it's acceptable, both flows work, but it can be improved. If we go back to zero milliseconds base round-trip time and use the round-trip-time-independent Prague, we again start a Cubic flow and a DCTCP-based Prague,
F
Independent flow, this time. You see the rate now is fair. It doesn't matter what base round-trip time we take, because Prague has this mapping function to add fourteen or fifteen milliseconds to the measured round-trip time, and so it behaves the same as a classic flow which experiences a real fifteen milliseconds more. So this could be a solution, as you see. Also, if we change the number of flows, it doesn't matter.
F
F
Questions? There are, okay: what if the round-trip time of the other flow is large? That's, of course, again, this is a solution with this round-trip time plus 15 milliseconds, which still does not solve the fact that there are differences between different RTTs, right. So a better solution would be, let's say, to make Prague round-trip-time independent up to 15 milliseconds or 5 milliseconds, so that all flows are round-trip-time independent up to a number, and okay, the others--
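The alternative just mentioned, making flows RTT independent only below some floor, could look like this (a minimal sketch; the 15 ms floor is the speaker's example and the function name is illustrative):

```python
# Hypothetical sketch: clamp the RTT used by the congestion response to
# a floor, so all flows with a shorter real RTT behave as if they were
# at the floor, while longer-RTT flows keep their natural dependence.

def floored_rtt(measured_rtt_s: float, floor_s: float = 0.015) -> float:
    """RTT used by the congestion response."""
    return max(measured_rtt_s, floor_s)
```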
F
K
Yeah, this is just to say: Ilpo, who I noticed is on the call, has implemented the full AccECN spec in TCP, for the AccECN part of TCP Prague; previously the TCP option part wasn't implemented. That's based on Olivier's interim implementation, based on Mirja's implementation. He has sort of fully reimplemented it for upstreaming.
K
He's also done some review, given some review comments on the design, which was tweaked as a result, and there are some ongoing discussions about that tweaking. So the immediate plan is to upstream that to Linux. And AccECN is sort of independent of L4S; it's needed for other purposes as well, like in DCTCP itself. I know Microsoft want it within their datacenter, so this isn't necessarily part of L4S or of any controversy.
K
B
K
K
K
K
K
Yep, so issue 16 is the classic ECN fallback, and I've added "FIFO" in brackets to the subject line of the issue because, as was pointed out in the chat, this problem only occurs when you've got RFC 3168 in a shared queue, not in an FQ. And I'm quite happy for this issue to remain open until we've demonstrated that the classic ECN fallback works reasonably well, and we're making good progress on that.
K
The issue actually is two separate issues. It's about: is this problem prevalent, and is there a solution? And each of those is moot if the other one is not true. So if the solution works, whether the problem is prevalent or not isn't so important; and similarly, if there's no prevalence, the solution is not--
K
So I would encourage those that are looking for this problem to see if they can find any evidence of it. I know Apple are still trying to find the problem, and in the data they presented a couple of years ago now, showing where CE marking was on the internet, they found that the larger part of that was difficult to disentangle, not easy, but there is the marking out there. We just need to find some that's in a single queue, I must admit. All right.
K
You know, we're taking quite a bit of time working on this classic ECN fallback thing. I hope there's something out there that needs it, but it would be a bit of a waste of time if we don't find any. A detailed solution design, as I say, was posted, and that was in the earlier slides, and it has then been implemented, is being evaluated, and is working well at distinguishing FQ-CoDel, PI squared and CoDel; we're using CoDel as the--
K
K
J
Is the solution moot if there's no prevalence? Just because we cannot observe prevalence does not mean that, you know... If we adopt this position, particularly as sort of part of the adoption of an experimental RFC, I think it countermands previously established IETF consensus that AQM is beneficial.
K
That's fine. I mean, I agree with you in theory: if there's a standard out there and it's not being deployed as a single-queue solution, it could theoretically be deployed, but we can't sort of stop the world for every possibility. You know, there are other examples of things that have been deployed, and sometimes you have to take a punt, basically. If something has not happened, at some point you have to say: okay, this is not going to... Sure, sure.
J
And that's reasonable, except I think in this case there are many solutions that are in fact deployed. A single-queue PIE is available today in Linux, so anything Linux-based doing some queueing stuff has it. Additionally, we see it, I think it's on by default, you know, it's present by default in many Cisco devices. So it's not so clear to me that--
B
K
K
I don't think the question here is whether SCE precludes... sorry, whether L4S precludes SCE. It's whether there's anything the community might want from it that it can't get from L4S. I don't think we need to just allow it to experiment, or somehow bend over backwards to allow two experiments, with both aiming to achieve the same thing. I don't--
B
--think we can't. I think the crucial observation in there, and unfortunately I couldn't get the last word in before the call dropped, is that what the two experiments propose to do are fundamentally incompatible uses of ECT(1), and we will have to select one of the two uses. It can't be both an input and an output to the network, yeah.
K
K
K
B
K
If we can go back one, thank you, David, I'll just cover the other one, which is about the CE codepoint semantics. So issue 21 was raised as a general concern about the ambiguity of CE, and two specific concerns have been raised; that is, the classic ECN one, and, actually, 22, about reordering.