From YouTube: IETF103-RMCAT-20181108-1120
Description
RMCAT meeting session at IETF103
2018/11/08 1120
https://datatracker.ietf.org/meeting/103/proceedings/
C
There's a nice big URL, which you also can't read; it's cut off by the pop-up mirroring the screen, which has the URL for the remote participation on it. Slides: all the materials are on the datatracker. We have Meetecho, we have Jabber; Jonathan Lennox has kindly volunteered to monitor the Jabber room, and also to be the big red button pusher for me in Meetecho. So if there are remote questions, they will let Jonathan press the button to let you in at the appropriate time. We also have a mailing list.
C
Please consider subscribing if you haven't done so already, and I will be taking notes. So, our agenda today: we've got this brief introduction and status updates; we'll then talk about the eval test draft, from Zahed, for ten minutes or so, though I suspect it will effectively take less than that. Xiaoqing will then talk about the video traffic model draft and the updates on a NADA implementation in, I guess, Firefox, and then Julius will finish by talking about their implementation experiences, and then an issue with the coupled congestion control drafts.
C
The requirements draft has been in the RFC editor queue for about the last four years, and is blocked on cluster 238, along with many other things, so hopefully that will appear relatively soon. The evaluation drafts, that's the eval test, eval criteria and video traffic model drafts: we have had working group last calls on all of those. Eval tests and the traffic model we're going to be discussing later today.
F
Yeah, you're good. So, so, essentially, we have an update of this one that addresses all comments, except for strengthening the security considerations section; that requires a bit more thinking, and wasn't easy to be done in the quick shot that we gave it. The other requirements seem to be in an OK shape now, so this could move ahead, and we hope that we'll be getting the security considerations section updated also soonish.
F
Right, so we would want to discuss what people would need to consider: what they might leak as side-channel information from their congestion control mechanisms, or their timing of packets, and whatnot. This is nothing that is of influence in itself, right; the draft seems to go pretty easy on that, yeah.
C
Yeah, so for the eval criteria we will wait for you to do an update to revise the security considerations, and then do a brief working group last call. We'll discuss the other two in a few minutes. Wireless tests: we have previously decided that this was ready for working group last call once the other evaluation drafts are done, and I see no reason to change that decision. So hopefully, in a few weeks, once everything else is out of the way, we can working group last call that as well.
C
The feedback message was discussed in AVTCORE, and we'll talk briefly about the hackathon stuff in a second. The congestion control feedback draft, which gives advice for how to use the feedback message, has, I think, currently expired, but we will update that once the feedback message has been completed, to reflect the final state of the feedback message. The two interfaces drafts, the framework and the codec interactions: I think the current plan is to reconsider those once we've moved the candidates forward to, er, standards track, and to leave them at this time.
C
We have a set of milestones, as usual; they are out of date. What I am proposing, and what I've discussed briefly with the co-chairs, is that the milestone for submitting the requirements and evaluation drafts, which was August 2017, be moved to December 2018, and since those are essentially ready to go, we will actually then meet that milestone, after having moved it for 18 months. The, the, the interactions, this is the codec interactions milestone:
C
I suspect that's aspirational and optimistic, but we may as well be aspirational and optimistic, and we can always change it again at some other meeting if we don't make it. For moving the congestion controls to proposed standard, what we're suggesting is to push it forward a year, and again it's likely that that will slip, but we may as well be optimistic and try and encourage people to do it.
H
Mirja Kühlewind. So, that milestone was meant to provide some kind of, er, deployment measurements. So if you deploy your congestion control, you want to capture certain metrics, to figure out that the loss rate didn't go to, like, infinity or whatever, and report that back. But given there's no work ongoing on this, it's probably okay to remove it.
C
Looking at implementing the congestion control feedback draft in the hackathon, I think they made some good progress. We had a discussion of that in AVTCORE earlier this week, and we resolved most of the issues, and you'll be seeing an update to that draft in the next week or so. Jonathan, anything to report for this working group?
A
Yeah, I think the one thing that came up at AVTCORE, where we felt like we needed more feedback from people actually employing congestion control algorithms, is how the feedback mechanism should handle duplicated packets. Because the feedback is indexed by sequence number, obviously, if a packet is duplicated, you can only report on it once, and the question is what would be the most useful report in that case. Is it to report on the first arrival time?
A
Or the most recent arrival time? Should we, I don't know, maybe add some additional flag, which would be a format change, saying, you know, this packet got duplicated, something strange happened, or whatever? So I feel like we need to know what is going to, you know, be the most useful way for the congestion algorithms to get the information. So if people who've done congestion algorithms could give feedback, that would be helpful.
H
Yeah, Kühlewind. So, arrival time: I think you should choose the first one, because that's when you actually process a packet, and then you just throw away the duplicate, right, so that doesn't matter. But you also have ECN feedback in there, right? You might want to, like, I mean, it can also happen that you get the packet, you send a report back, and then you get the duplicate, right; then it's already too late. But if you actually happen to receive the duplicate before you send out the report, and one of them had a congestion marking...
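A minimal sketch of the merge rule being discussed here (an illustrative assumption on my part, not text from the feedback draft): report the first arrival time for a duplicated sequence number, since that is when the packet was actually processed, but let an ECN-CE mark on any copy survive into the report.

```python
def merge_duplicate(report, seq, arrival_ms, ecn_ce):
    """Merge a (possibly duplicated) packet into a per-sequence report.

    report maps seq -> (arrival_ms, ecn_ce). The first arrival time wins,
    because that is when the receiver actually processed the packet, but a
    congestion (ECN-CE) mark seen on any duplicate is preserved.
    """
    if seq not in report:
        report[seq] = (arrival_ms, ecn_ce)
    else:
        first_arrival, seen_ce = report[seq]
        report[seq] = (first_arrival, seen_ce or ecn_ce)
    return report

# Original packet arrives unmarked, its duplicate arrives CE-marked:
r = {}
merge_duplicate(r, 100, 10, False)
merge_duplicate(r, 100, 25, True)
# r[100] == (10, True): first arrival time kept, CE mark sticks
```

The alternative raised in the room (a new flag saying "this packet was duplicated") would be a feedback-format change rather than a merge rule like this.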
G
Okay, so, so, this is a status update of the eval test draft. We went to working group last call with the -06 version, and yeah. So we got some comments and reviews; actually, I'm very pleased to see, like, a detailed review from the reviewers, and comments, and we authors, especially me and Xiaoqing, actually worked through the reviews and comments and updated it. So, next slide. We have a -07 version now, addressing mostly all the comments we got; thanks for the reviews. So, next one.
G
So, let's focus on what we have changed. I mean, there are typos, things that are quite trivial, that came up in the review, the thorough review, so I'm just focusing here on what was basically changed that I think you guys should be aware of; obviously, you can do the diff. So, the thing is, like, I think one word is missing here.
G
So, the thing here is like: we had this fixed capacity, the path capacity, and then we said, like, we'll use some other traffic, like non-responsive UDP traffic, to change the path capacity, and then me and Sergio kind of worked through a new formula, which basically is more, I think, more like what it is supposed to say. We have a table with all the path capacity and path capacity ratios.
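One plausible reading of the relation behind that table (my assumption; check the draft for the exact formula): with a fixed physical bottleneck of C kbps and a target path capacity ratio r, the non-responsive UDP cross traffic is sent at C * (1 - r), leaving r * C of the bottleneck available to the media flows.

```python
def udp_cross_traffic_rate(bottleneck_kbps, path_capacity_ratio):
    """Rate of non-responsive UDP cross traffic needed so that the media
    flows see `path_capacity_ratio` of the physical bottleneck capacity.

    E.g. a ratio of 0.5 on a 4000 kbps link means sending 2000 kbps of
    UDP cross traffic, leaving 2000 kbps for the media flows.
    """
    if not 0.0 <= path_capacity_ratio <= 1.0:
        raise ValueError("path_capacity_ratio must be in [0, 1]")
    return bottleneck_kbps * (1.0 - path_capacity_ratio)
```

Varying the UDP rate over time then emulates a time-varying path capacity on an otherwise fixed bottleneck, which is what the test case uses it for.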
G
Just great. So, anybody have any comments?
G
Have we applied this to, er, the current table, and seen, like, well, this is the right thing? We kind of did that; yeah, I think it's fine. Then the next one: we kind of, er, we had missed the expected behavior for two of our use cases. It was basically kind of hidden in the text, but we didn't really write it out as a separate section, so that's what we did. So, test case 5.7: this is about short-lived TCP flows, and basically the expected behavior here is like:
G
When you have short TCP flows, the media bitrate kind of goes to the minimum and stays there; when the short-lived flows are gone, it gets back into the steady state. And obviously what it should show is like: it has gone down to the lower minimum bitrate, it doesn't stay there forever, so it actually ramps up, ramps up to the steady state. So that's, like, the expected behavior we added. Then test case 5.8, that's, like, pause and resume.
G
This is basically, I think, everybody showed the results with the expected behavior, so we just wrote it down. This is pretty simple: like, when you pause it, then basically the rest of the flows should go and grab the bandwidth almost equally; they should converge. And when you again, kind of, like, start your paused media, it should be able to find its way to some common level of convergence, where all the three flows are at steady state.
G
Okay, fine to me as well. Next slide. So, security considerations: there, it used to say, like, security considerations have not been discussed in this memo. I personally thought that's, like, an understatement, so we kind of wrote a new security considerations section. So what does it say? It says, like, these congestion control test cases, you're supposed to run them in simulators or in a controlled environment where you know what's, what's happening; it's basically a lab environment or your simulators. So we don't expect anything, er, anything that impacts the desired result.
G
So you should, er, which one, okay; there's some copy-paste thing here, so this is not exactly what it says, but the idea is, like, well: you should not have some traffic that impacts your results in a controlled environment. And then we also refer to the eval criteria: per the security considerations there, every congestion control will have to write its own security considerations.
G
So, when you are testing something in particular, you should be considering that. And then we got a suggestion, like: we should also mention that we should not let the test things make it into the Internet, where our congestion control algorithm has not been tested; something like, you should not leak this lab thing into the Internet and flood the Internet and break the Internet; it's easy to do that. So I think that's a very valid comment, so we just incorporated that one. But this is kind of like, yeah.
I
Gorry Fairhurst. Did you really need to have the last sentence as well? I agree with the last sentence's intent, but I suggest it should be a separate paragraph when you actually write it, because the congestion control thing is a little bit different, and I think you probably need to be more explicit than "avoid leaking". If you just say this isn't to be used in the general Internet, or something, that's a little bit more concrete, so that, er, somebody who's silly enough to think that that might be...
G
So, if you have some suggested text, send it to me. And I, we think, we think we also found a typo, but I don't think, like, we need to update just for that one; but if you think, like, this needs to be updated, a lot of sentences, then we should update it. So that's it; what's next? That's my question to the working group.
E
So basically, after the working group last call, sort of within that period, we gathered a collection of reviews. There were sort of two batches; that's why we kind of updated the draft twice, first to -04 and then to -05, to address a collection of comments, both from Colin and from Jiantao Fu. So I'm listing out the main one here, which is adding a set of security considerations.
E
We also pointed to that risk. Second, this is the text that we have added to the security considerations, and I thought it would be convenient to post it up here for people to review and provide comments; I can read it out briefly. So basically, we mentioned that it is important to evaluate RTP-based congestion control schemes using realistic traffic patterns, so as to ensure stable operations of the network.
E
Therefore, it is recommended that candidate RTP-based congestion control algorithms be tested using the video traffic models presented in this draft before wide deployment over the Internet. So it's sort of, not necessarily like a security consideration for the traffic models that have been presented, but rather more of a recommendation for people to try out their candidate scheme using the more realistic traffic patterns. If there are any other suggestions or comments, I'll be happy to incorporate those as well. And I'll...
E
We just finished that round of updates, and fixed a couple of other terminology issues, you know, sort of the issues mentioned by Colin, basically avoiding the use of, er, the RMCAT acronym in our draft, as well as fixed a couple of issues when we ran it by the idnits checker. So, our pass here, just in case people have comments regarding the text of the security considerations.
E
Then, moving forward, we later also got some additional comments from both Michael and Jake; those are more sort of editorial in nature. One is ensuring, you know, more consistent use of third-person narrative, and the other one is that Jake pointed out some confusion in our use of the term "transient". So we made a pass and tried to say either "transient period" or "transient state" throughout several pieces of the draft. Those can be easily identified if you click on the diff of the draft.
E
Other minor wording changes, I don't think it's worth, er, going through here. Basically, we have addressed all reviewer comments. And, by the way, for the more detailed review comments on the mailing list, after we updated the draft, as I remember, I've also provided sort of point-by-point responses to the reviewers. So we're just posting it up here, just to make sure that if there are additional comments, or, you know, any follow-up comments, we can accommodate them as well.
E
Yes, thank you. So, this is the update related to the draft; by the way, maybe a brief mention of the draft update status. As Colin showed earlier, the NADA draft has, has completed working group last call, and Martin has provided his shepherd review comments, and I believe we have, we have actually updated the draft, to version -09, to address the individual, er, sort of review comments from Martin. That was a fairly, you know...
E
Er, I just want to mention it up here too, to make sure that the status updates are in sync with what the chairs have in their minds. So, so, going after that, we have been mostly focusing on the implementation effort of incorporating it inside the Mozilla Firefox browser. Next page, yes.
E
There's a slight delay, by the way; I'm waiting for the slide to show up on my side. So, just a brief review: the way we have made code changes is that most of the changes are in, er, one of the subfolders, and that's one of the modules within the webrtc code in the Firefox code repo. We've basically taken the hack of replacing the default send-side bandwidth estimation module, which is in charge of estimating the recommended bandwidth, with an alternative, and that's called the NADA bandwidth estimation.
E
And of course, we made changes to a couple of other files to, you know, serve to enable that change, to enable that switch between the two. And also, this version is from before the hackathon, so we have kept the receiver-side behavior intact, which means that we do not really have one-way delay measurements as feedback, and we also have not modified the, the feedback interval provided from the other end.
E
So we sort of have a, er, slightly different version of NADA running in our current implementation. So, instead of using relative one-way delay as the congestion signal, as specified in the draft, in what we consider an intermediate version we're using the relative RTT as the congestion signal. And also, right now, we're ignoring, er, we have not observed any packet losses in our tests, so our code right now has not added the handling for packet losses.
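The "relative RTT" stand-in described here can be sketched in a few lines (an illustrative assumption about the bookkeeping, not the Firefox code): track the smallest RTT seen so far as the base, and treat the excess over it as the queuing, i.e. congestion, signal.

```python
class RelativeRttEstimator:
    """Track the base (minimum) RTT and report the queuing component.

    In the intermediate Firefox port described above, relative RTT
    (current RTT minus the smallest RTT observed so far) stands in for
    the relative one-way delay that the NADA draft specifies as the
    congestion signal.
    """

    def __init__(self):
        self.base_rtt_ms = None  # smallest RTT seen so far

    def update(self, rtt_ms):
        # A new minimum lowers the base estimate.
        if self.base_rtt_ms is None or rtt_ms < self.base_rtt_ms:
            self.base_rtt_ms = rtt_ms
        return rtt_ms - self.base_rtt_ms  # relative RTT, always >= 0
```

One consequence, visible in the transatlantic results later: reverse-path queuing inflates the RTT and so shows up in this signal, which the one-way-delay version would not see.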
E
So it's kind of a slightly simplified version of the, er, NADA congestion control algorithm. I also want to point out the default behavior of the RTCP feedback: the interval typically swings around the default of one second; I'm going to show some data on that, and I believe it's sort of plus or minus fifty percent of the target feedback interval. Which means that we're actually operating this congestion control module at a, like, feedback interval higher than what we have designed it for, so we're also curious to see whether it breaks or whether it still works. And finally, just for convenience, we added a bit of our own logging, using the existing webrtc logging framework, so that we can pull out the stats to facilitate the graphing. And next page.
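The "plus or minus fifty percent around one second" behavior is the classic RTCP-style timer randomization, which can be sketched as follows (a sketch of the swing being described, not the actual webrtc code):

```python
import random

def next_feedback_interval(target_s=1.0, rng=random.random):
    """RTCP-style randomized timer: the gap between feedback reports is
    drawn uniformly within +/-50% of the target interval, so 0.5s..1.5s
    for the default 1s target (matching the plotted interval swings)."""
    return target_s * (0.5 + rng())
```

The randomization avoids synchronized reports across receivers, but it also means a delay-based controller designed for a fixed, short feedback interval sometimes waits 1.5x the target between rate updates.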
E
So these are, that's an overview of the code changes. And next page, please. So, the way we ran our tests: we basically set it up with one client on each side, and so, first off, with all these tests, at least one side, the modified version, runs on a Mac, and runs the modified, you know, Firefox Nightly build. The collection of changes is summarized below: basically, for the sender side, we now use the NADA bandwidth estimation module.
E
We have revised the maximum sending rate to be four megabits per second, and also explicitly configured the default video height to 720p; in the default Firefox behavior, the video height is at 360p, and that also automatically limits the maximum rate that's allowed, that's specified for sending. And finally, sort of for logging, we enabled logging of stats only on the outgoing flows in the NADA module. And because we don't require any change on the receiver side, we actually went with a, er, true Chrome, you know, unmodified, for the receiver side.
E
We use appr.tc, that's an online, you know, WebRTC tool, to enable the calls; so we don't do anything other than changing the congestion control module. So, we ran three sets of tests. The first one is really a sanity test: I have two laptops in my home, you know, connected to the same Wi-Fi, and I kind of have fairly unlimited, er, home Wi-Fi. So this is really to measure...
E
...you know, whether things work and how fast the rate can ramp. Here I'm showing the three sets of graphs. The top graph is showing the rate calculated by the NADA congestion control module; the middle graph is the, the relative RTT that, you know, they derive from the RTCP feedback messages (of course, here, you know, being in a local environment, the base RTT is around one millisecond, so that's almost just the, you know, the actual queuing delay); and the, the bottom graph shows the feedback interval of the individual reports.
E
Basically, it swings plus or minus fifty percent around the one-second target; I kind of looked into the code to confirm that behavior too. This one basically shows that the NADA algorithm, because of its accelerated ramp-up design, can start from the minimum rate of 200k and ramp up to, you know, the maximum of four megabits within 15 seconds or so. So it does ramp up pretty quickly. And then, even though this is, er, on the unlimited home network, it's still Wi-Fi, so occasionally we do see...
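To get a feel for the "200k to 4 Mbps in 15 seconds or so" number, here is an illustrative multiplicative ramp (not the actual NADA gamma-based accelerated ramp-up rule; the 25% per-interval growth factor is my assumption, chosen only to match the reported timescale with one update per one-second feedback interval):

```python
def ramp_up_trace(start_kbps=200.0, max_kbps=4000.0, growth=1.25, steps=20):
    """Illustrative multiplicative ramp-up: with ~25% growth per 1s
    feedback interval, 200 kbps reaches the 4 Mbps cap within 14 steps,
    consistent with the '15 seconds or so' observed in the sanity test."""
    rate, trace = start_kbps, []
    for _ in range(steps):
        rate = min(max_kbps, rate * growth)  # grow until hitting the cap
        trace.append(rate)
    return trace
```

The point of the sketch is only the timescale: a multiplicative increase covers the 20x range from floor to cap in roughly log(20)/log(1.25), about 14, feedback intervals.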
E
So that is all the delay, appreciable, yeah. So then we upped the game a little bit: I have a colleague, actually, Jiantao Fu, sorry, Jiantao Fu; he's also one of the co-authors on the draft. He is based in San Jose, California, so we basically carried out the same exercise, except that now the sender, that's my side, is based in Austin, Texas, and he's in San Jose. So there is a little bit of a, you know, kind of a setup that...
E
...mimics more of a typical remote call. I was sending out the call from the Cisco office, whereas he, the receiving end, is on home Wi-Fi. Oh, by the way, all these tests are carried out with bi-directional audio and video calls, and I'm only reporting on the, the statistics of the outgoing stream; the other side is kind of unmodified behavior. So, with this one, what we are seeing is that the base RTT is actually very high.
E
It's around 160 milliseconds. And then also, as we can see, even after you subtract the base, the residual, the relative RTT, is fairly spiky, and sort of spikes up, sometimes quite outrageously, especially in the very beginning and, you know, towards the, the, you know, towards the end of the call. So, in reaction to that, the NADA congestion control module indeed sometimes drops to a fairly low rate, and then, after the RTT recovers a little bit, ramps up again.
E
So we're no longer seeing a very, er, more flat line like in the controlled lab tests; we do see these adaptations going back and forth, you know, sort of swinging back and forth. But, maybe on the other hand, the good news is that it does not get starved throughout; it does have the capability of ramping back up again whenever there is the opportunity to really, you know, use the bits. And I see there's a question; yes, go ahead, yeah.
E
And this is just, there's much we have not, er, much we want to try first with a modified receiver behavior. We obviously are interested in testing whether, you know, faster feedback helps; in my opinion, it will help, because, you know, it reduces the, the feedback loop, right, the delay in the feedback control loop. Yeah, it's just that we wanted to show kind of an intermediate result.
E
Yes, oh yeah, sure. By the way, I want to mention that, from, from our perspective, we are interested in, you know, experimenting with other forms of feedback, or more frequent feedback; it's actually on our to-do list. It's just that, for the results we're showing here, we were sort of making staged changes, so we want to, for now, keep the receiver as-is, right.
E
Sort of, you know, this way I'm able to make the code changes and call anyone I want, without, you know, doing anything special on the other side. But then, going forward, we'll be interested in exploring, you know, sort of understanding a bit better, what new forms of feedback help, and to what extent. Yeah, next page, next.
A
So, relaying for Sergio: he says he realized during the hackathon, we were not aware of transport feedback, I mean, we were not aware there was a flavor of transport feedback already implemented. So, Jonathan Lennox, speaking for myself: Chrome implements the Google transport-cc extension; I believe Firefox, despite the fact that it has libwebrtc in there, does not currently implement it, does not really have that enabled.
E
Yeah, so, er, the final set of results is between me and Sergio, so it's a transatlantic call from Austin, er, to Switzerland, and again, one side is from the office, the other side is from the home, where I'm reporting again the base RTT; here, you know, 300-plus milliseconds, just to show kind of the challenge. But then, I guess, this time we got slightly lucky, in that most of the time, if we don't see outrageous RTTs, NADA actually is able to, you know, reach the four megabits per second.
E
By the way, the connectivity, right, even from the home, etc., is fairly high, but then occasionally we do see those, you know, one-second, two-second RTT spikes, and NADA will react to that and then try to ramp up again, yeah. And we're also seeing the corresponding feedback intervals, er, reports at the bottom. So, sort of similar results, and for us this is the first set of, at least, you know, sort of running-in-the-wild tests, compared to all previous controlled lab tests or simulations.
E
So we're actually happy enough that, you know, nothing's getting really broken, or, you know, the congestion controller didn't either flood the network or keep starving all the time. But then, of course, we are very much interested in making it more reactive, and then trying to see how, how well it can go. Next page.
E
As a summary of observations, maybe two things to highlight. In the, let's see, er, you know, home unlimited scenario, the accelerated ramp-up feature of NADA allows us to ramp up to the maximum rates fairly quickly; sorry, it's not 15 minutes, it's 15...
E
...seconds; that was in the title of my slide. And the rate dips briefly due to occasional RTT spikes. And then, once we get onto the remote connections, we, you know, the algorithm reacts to RTT spikes, and because of the long feedback, er, the fairly, you know, sort of infrequent interval, it does take, you know, sort of, it does see wider rate dips; but otherwise it sustains the maximum streaming rates, you know, in the presence of random fluctuations.
E
Yeah, right. And basically that motivates us, and, you know, looking sort of at what to do next: obviously, the hackathon and the feedback messages are something, er, the result of it, we want to be able to incorporate; and the mention about the existing transport feedback implementation, that's something we'll look into. But we're interested in moving it all the way to the implementation as described in the draft, in the sense that we want to use the feedback message as defined, as well...
E
...as, you know, move to all the calculations based on relative one-way delay values. We're also interested in, er, you know, testing the impact of the different feedback intervals, because that, intuitively, is a very, you know, sort of key parameter; and then tackling the handling of losses, er, packet losses, at least adding those statistics, and adding in how our existing algorithm handles it. And so that, that's about, that's it for the update for this draft. Okay.
K
Thank you. For everybody who doesn't know me, I'm Julius Flohr; I'm a PhD student from the University of Duisburg-Essen. Next slide, please. And, as you might be aware, we've done an implementation of the, er, NADA congestion control algorithm in the OMNeT++ discrete event simulation framework, and we finally managed to update it to the latest version of the draft; so this talk is supposed to be a short, independent validation of the ns-3 model...
K
...you already have up on GitHub. Next slide, please. And since, since the last version of the draft has had a focus on NADA's competitiveness with loss-based flows, I'm focusing here on test case 5.6, where we're comparing how NADA behaves with long-lived TCP flows. But in, in this scenario we've used SCTP instead of TCP, because I don't know the TCP model in OMNeT++ very well, and I know that the SCTP model is quite good, because it shares lots of code with the FreeBSD kernel; and SCTP in this case also used the New Reno...
K
...congestion control. There's only one thing I'm not quite sure about, because, so, I don't know whether the ack strategy the SCTP model uses is the same as TCP in ns-3. Because, as you will see in, in the following graphs, the, the frequency of the delay spikes you see, induced by the loss-based congestion control, is higher for my plots; I don't know where those come from, I guess this has something to do with TCP in ns-3. I can just tell you that SCTP acks on every other packet.
K
So, in this test case we have a bottleneck with one megabit, 16 milliseconds of propagation delay, and a maximum queue of, er, 300 milliseconds. This test case also specifies, I think, 30 milliseconds of jitter; we don't add that, that to our model. And also, we've been using the perfect video encoder, which means it immediately, always, generates the perfect desired sending rate. And also, please note that in this version of our algorithm there's currently no accelerated ramp-up; this means it may take a while for, for the algorithm to converge to the desired rate.
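The "perfect video encoder" mentioned here can be sketched in a couple of lines (an idealization by definition, so this is a model, not real codec behavior): every frame carries exactly its share of the requested rate, with no encoder rate lag and no frame-size variation.

```python
def perfect_encoder(target_kbps, fps=30):
    """Idealized video source: each frame is exactly target_rate / fps
    bits, so the encoder output matches the congestion controller's
    requested rate immediately and exactly (real codecs only approximate
    this, which is what the trace-based codec runs later explore)."""
    bits_per_frame = target_kbps * 1000.0 / fps
    return [bits_per_frame] * fps  # one second's worth of frames
```

Comparing runs with this source against runs with the trace-based codec isolates how much of the observed rate behavior comes from the congestion controller versus from encoder rate-tracking error.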
K
So, on the left side we see the reference implementation in ns-3, and on the right side are the results generated with our implementation, and, as you can see, they're basically the same, with, with the exception that our frequency of the delay spikes is higher, as I mentioned, and I don't know why that is; but it converges to the same operating point. Yeah, next, next slide, please. This is just the same scenario, but with a larger bottleneck of 10 megabit; so again, basically, the results are the same.
K
This was the reason I only brought, like, these two cases: because, across the board, our curves look very similar, so it's just not that interesting. Next slide, please. So this one, I guess, you haven't seen yet. This is the new implementation of NADA as it interacts with the synthetic video codec, the trace-based codec by Cisco. Here, again, the results look good, but for some reason the RTP flow tends to converge to a lower sending rate.
K
Maybe Sergio or Xiaoqing are able to, to say something about this; I don't know if their behavior is the same, yeah. But, but that's basically it, yeah. Again, sorry, the confusion is: we have essentially evaluated a large parameter spectrum with both implementations, and the behavior we've seen has been basically the same. If you're interested in, er, in some special simulation scenario, I'd be happy to implement it and push the results to the mailing list.
K
We've also been playing with the coupled congestion control flow state exchange. Next slide, please. And so, we've implemented version 7 of the document, and we've come across an issue, where we find that the document talks about application-limited scenarios for the passive version of the algorithm, but not for the active version of the algorithm. I, I don't know what, what the reason for this is.
K
So, what we have is: the congestion control of an RTP flow sends a new, er, desired sending rate to the FSE, and it computes a delta to what the FSE computed in the last iteration of the algorithm, and sums all of these deltas up, er, periodically. So what we get is an estimate of the, the bottleneck capacity, and what you can do is, you can use that estimate of the bottleneck capacity to share it, according to, to weights, among all flows you have in your, in your flow state.
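A toy sketch of the delta-aggregation just described (names and structure are mine; the FSE draft's actual update rules differ in detail): each flow's congestion controller reports its new desired rate, the FSE folds the delta into a shared aggregate, and the aggregate is re-split among the flows in the group by priority weight.

```python
def fse_update(flows, flow_id, new_rate):
    """One FSE update step, sketched.

    flows maps flow_id -> {'rate': last rate reported by that flow's
    congestion controller, 'weight': its priority}. One flow reports a
    new desired rate; the delta versus its last report is applied to the
    shared aggregate (the bottleneck-capacity estimate), which is then
    re-shared among all flows in proportion to their weights.
    """
    aggregate = sum(f['rate'] for f in flows.values())
    aggregate += new_rate - flows[flow_id]['rate']  # fold in the delta
    flows[flow_id]['rate'] = new_rate
    total_w = sum(f['weight'] for f in flows.values())
    return {fid: aggregate * f['weight'] / total_w
            for fid, f in flows.items()}
```

In this sketch an application-limited flow drags the aggregate down for everyone, which is exactly the active-version issue raised here: nothing caps a flow's share at what its media source can actually produce, nor redistributes the leftover.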
K
Next slide, please. Yes, so this is, like, the same scenario again, with that fix, and, as you can see, both flows now get the maximum amount of, er, rate possible. Next slide, please. So, yeah: question is, is this an issue we should address? Question is, how do we get this rate maximum into the flow state exchange, maybe in the register message? But also, we have two distinct cases, where we are not limited by what our congestion control could deliver, but maybe by what the media source can deliver.
M
Michael Welzl. First of all, thanks a lot for playing with this; I mean, quite nice to see. Second: so, the reason we didn't have the leftover rate in this version is just, I mean, I was now looking back at the history of the old presentations: you know, we had it in the passive version, and we ended up simplifying it and making it active, and in the whole simplification process...
M
...we removed that, just to address the feedback from the group, but I'd be very happy to have that back in; so I think it is an issue that we should address. I think there is a small, a small issue with the, er, approach, but that could be fixed, and it's probably not, not a big thing. Actually, if you could go back a few slides.
M
Right now I'm not 100% sure what's, what's wrong here, but something seems wrong, because it seems that, if there is a leftover rate to be added to the, er, FSE's calculation, the very first flow, the very first flow, is always going to get zero, right? So something here seems to be in the wrong place. Oh yeah, it might be, sorry, yeah, yeah. So this is a, it seems like a minor thing; I mean, it's definitely doing the right, the right thing, yeah.
M
Just adding that up: it may be that the, er, yellow part has to go below the if block, or something like that; it's just a minor detail, and, we think, it's easy to fix, yeah. But in principle, I mean, I, I like this a lot; I think this should happen. I remember, that is, I believe I remember, not 100% sure, but that may have been a part of the discussions that led us to remove it. There was also the second question that was on the last slide.
C
So, what I'm hearing is that this seems like a real issue and we should address it, and the question is just how we go about addressing it. Does that make sense to people? Okay, so I'm seeing a couple of people nodding. Okay, so this draft is with the RFC editor, so at this point we can either do this in AUTH48, or we can pull it back to the working group.
C
My belief is that this is probably too substantive a change to just make in AUTH48. So, if the area director is comfortable with that, what we probably want to do is try and pull this back to the working group, make the change fairly quickly, and then, you know, briefly last call it and send it back. The area director is nodding; she's here, hiding behind Jonathan.
I
Gorry Fairhurst. I'm, I'm trying to write some security considerations text, and I wonder if it perhaps applies to many of the drafts from the working group. So I will send comments specifically on this one, but I think there is a number of security considerations that point to one another, and maybe the chairs or the working group want to have a quick look at the security considerations across the current set of drafts.