From YouTube: IETF105-RMCAT-20190725-1740
Description
RMCAT meeting session at IETF105
2019/07/25 1740
https://datatracker.ietf.org/meeting/105/proceedings/
A
Good, so welcome everyone to the RMCAT session. We also have the slides up now, so this is good. So let's get started. This is the Note Well; I think by now you are all familiar with it. It guides the procedures for the meeting, so if you are not familiar with it, then please read it.
A
We now have a scribe and a minute taker, so thank you for that. This is the agenda for today: we will start by going over the status of the working group and the different documents. Then we have a presentation on NADA, some updates on the draft and some evaluation results; Xiaoqing will give this presentation remotely. Then Colin is going to give an update on the RTCP feedback work, and after that, any other items that anyone wants to bring up.
A
So let's start with the overview of the status. We are finishing up most of our documents. For the congestion control algorithms, two of them were already RFCs before the meeting, and we have NADA, which was submitted to the IESG. It got some review from Mirja in the AD review, and the authors updated the draft after that. So we have this on the agenda for today, and then it will of course go back and continue the process.
A
Then we have the coupled congestion control draft, which was in the RFC Editor's queue when an optimisation was detected for using the bandwidth fully. So we took this draft back to the working group. The authors updated it, and it was with the chairs for review at the last meeting. That has been completed, and we now need the final document from the authors before it goes back to Mirja.
A
Safiqul, who is the first author on the draft, has been on parental leave. He is coming back in August, and then we will have the updated version, and then Mirja will have a look and decide what needs to be done with it for it to go back into the queue. And, of course, the GCC congestion control draft is one that we don't expect will continue.
A
Then we have the various other documents. We have the requirements draft, which is in the RFC Editor's queue, and there is now hope that this whole bundle will be resolved quite soon, so this draft will probably finally move on to RFC. We have the different evaluation drafts, and one of them has now become an RFC: RFC 8593, the video traffic model. The eval test draft is in the RFC Editor's queue as well, also waiting for the requirements draft to finish up, so it will hopefully come out in the same bundle.
A
Colin will give us an update on the feedback draft in AVTCORE, and then we have the draft in RMCAT that will be updated once the AVTCORE draft gets finished. And we have some of the other old drafts that we don't expect to go on at this point. As for our milestones, the evaluation criteria and those drafts are being submitted, so we still need to finish up the final ones.
A
Then we had some milestones related to the evaluation results and how to proceed beyond the experimental part. We have some presentations today on the evaluations, and at some point we need to figure out how to proceed: whether we will continue moving drafts onto the proposed-standard track, or whether they will end up as experimental drafts, depending on the interest and needs. Okay, so that completes the overview.
C
Hoping the audio goes through; okay, just to double-check. Works fine? Thank you, great. So I'm giving an update on the NADA draft, as well as on some updated implementation and evaluation results from integrating it into the open-source Mozilla browser, in a development branch. Next slide, please. First, the update on the draft: recently we got the AD review from Mirja.
C
Thanks for that. On the draft she raised a collection of questions as well as some editorial suggestions, and I pushed the updated version, version 11, basically uploaded the draft this morning, and I wanted to summarize the changes here. These changes were also summarized in my email response to Mirja before the edit. So, first off, the question was around the impact of the rate adjustment: how the rate shaping buffer size influences the target video encoding rate as well as the sending rate.
C
Mirja raised some good points, some concerns about what happens if the video encoder, for instance, is an unresponsive one. What if the rate shaping buffer builds up to, you know, an outrageous size? Would that basically become counterproductive with respect to the actual rate sent over the network? So, based on those concerns, I have carried out some simulation-based results that I will show later, as well as adding some caveat-noting text in the current draft.
C
So there can be an instantaneous, you know, big frame ingested into the rate shaping buffer, and its size is between 2,000 to 4,000 bytes typically, depending on the I-frame; that's where the numbers come from. The simulation results that I'll show later will show the impact of the rate shaping buffer when we use the recommended parameters, and the corresponding code changes have also been pushed to our open-source repo for this, on GitHub now.
C
All these text changes are now in version 11 of the draft, and in addition, as I mentioned, this is an optional feature, not really part of the core congestion control module. The second point raised by the AD review was to add text on the security considerations, so we did that, and I'll be fleshing out the security considerations text on the next page. And finally, there were some editorial suggestions; we incorporated all of them in the draft. And I see Mirja at the mic; there's a question, go ahead.
C
"You said that this is optional. Can you explain why it's optional?" Optional in the sense that the implementation is not in the congestion control module, but rather in the sender module. In the draft we have a diagram showing that it basically revises the rate calculation from the congestion control module but modulates the sending rate.
C
So that's why it's an external mechanism, to handle instantaneous, sort of, additional bursts of traffic: the discrepancy between what the video encoder produces and what we want to send over the network. Was that clear? Yeah? Okay, cool. So then on the next page I'll just show the text on the security considerations.
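The sender-side mechanism described here (the rate shaping buffer absorbing the mismatch between what the encoder produces and what the congestion controller wants sent, while its occupancy nudges both rates) can be sketched roughly as follows. This is an illustrative model only, not the draft's exact equations; the scaling weights `beta_v` and `beta_s` are invented placeholders.

```python
class RateShapingBuffer:
    """Illustrative model of a sender-side rate shaping buffer.

    The congestion controller produces a reference rate r_ref, while the
    video encoder emits frames at roughly its own target rate. The buffer
    absorbs the mismatch, and its occupancy is used to push the encoder
    target down and the sending rate up so that the backlog drains.
    """

    def __init__(self, beta_v=0.0005, beta_s=0.0005):
        self.occupancy_bits = 0   # bits currently queued in the buffer
        self.beta_v = beta_v      # weight for the encoder-target correction
        self.beta_s = beta_s      # weight for the sending-rate correction

    def enqueue(self, frame_bits):
        """A new encoded frame arrives from the video encoder."""
        self.occupancy_bits += frame_bits

    def dequeue(self, sent_bits):
        """Bits actually transmitted onto the network."""
        self.occupancy_bits = max(0, self.occupancy_bits - sent_bits)

    def video_target_rate(self, r_ref):
        # Encoder target is pushed below r_ref while a backlog exists.
        return max(0.0, r_ref - self.beta_v * self.occupancy_bits)

    def sending_rate(self, r_ref):
        # Sending rate is pushed above r_ref to drain the backlog.
        return r_ref + self.beta_s * self.occupancy_bits
```

With an empty buffer both rates equal the reference rate; a backlog lowers the encoder target and raises the sending rate until the buffer drains, which is the modulation role described above.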
C
Next slide. Yes, so this one, maybe I'll briefly mention it, and of course people are welcome to read it. It's the same text now in the draft, and I quoted the same text in the email. There are maybe two aspects to the security considerations. The first one is to say that, because everything is feedback based, we do want the feedback messages to be integrity checked; there is the potential vulnerability to attacks based on feedback messages.
C
Secondly, kind of inspired by Mirja's comment, this rate shaping buffer mechanism can potentially hold up large chunks of data if the encoder is not a responsive one, and our current mitigation in the draft is to limit its impact on the outgoing traffic. So in the worst case the video itself will suffer, but the traffic over the network will still be congestion controlled.
C
So if people have suggestions or notes on the text in this section, feel free to either mention it here or discuss further on the mailing list. And if that is fine, maybe we can move on. These are some of the results, just showing the before and after of introducing the influence of the rate shaping buffer. These are simulation results from the ns-3 simulator.
C
I think I picked three out of the test cases to show in this section, and then there are a few more at the very end, just because qualitatively we see very similar behavior, except that in the one on the right, because of the slight modulations of the rates added by the rate shaping buffer, there's a little bit of jitter. So this is the first test case, with a single flow. Next page, please.
C
Test case 5.6: this one is the competition against TCP. I thought these side-by-side simulations could help to characterize the impact of the rate shaping buffer for the parameter values that we picked. The end of this slide deck also includes the remaining four test cases, just for reference, but in the interest of time
C
I won't go into them; they show very similar qualitative comparisons. Next page, please. So now, for the second half of the presentation, I wanted to give an update on the implementation, both on the implementation status and some evaluation results; this is of the NADA implementation. We have rebased our modifications to the mozilla-central Nightly branch onto a much more recent version compared to our last presentation.
C
The main difference is that this May version of the Mozilla application now also incorporates a much more recent version of WebRTC. We did realize that since our earlier modifications there had been a lot of code changes in WebRTC, so we have now rebased onto the newer version. And if people are interested in checking out the code, we have pushed it up under Sergio's account and also under my account, as a development branch.
C
So at this link, for instance, if you look in there you'll be able to see our code if you want. Right now it's slightly pending some further cleaning up, but the main logic is there. In terms of the main code changes in this version of the implementation, what we did is that we replaced, switched out, the use of plain WebRTC's estimator.
C
There was a delay-based, sorry, I think it should be "delay-based bandwidth estimation" module; we replaced that delay-based bandwidth estimation module with our own NADA one-way-delay-based congestion control, and this is in the congestion control module
C
within WebRTC. And here, compared to our previous presentation, in this version we follow the draft more strictly, in the sense that we now use the relative one-way delay as the congestion signal. So the calculation of the congestion control signal now follows the draft more closely.
C
I should mention that at this stage we haven't yet added the influence of packet losses; that is pending further investigation. But in terms of reacting to delay, and using the relative one-way delay as the basic congestion signal, we're now in sync with the draft. And the reason we are able to do this is that, with this version, we interoperate with an unmodified Chrome or Firefox: they both support transport-cc feedback.
C
That actually reports per-packet information, including the sequence number as well as the send and receive timestamps, which turned out to be super convenient for us to gather all the per-packet information at the sender. The interop here is sort of embedded in the code itself, and that is how the transport-cc feedback message is generated. So with that, we can operate against an unmodified Chrome or Firefox browser, so long as both sides support transport-cc. And finally, just for convenience:
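The per-packet send and receive timestamps in the transport-cc feedback are exactly what is needed to form the congestion signal mentioned earlier. A minimal sketch of deriving relative one-way delay from them (the unknown clock offset between sender and receiver cancels against the running minimum; names are illustrative):

```python
def relative_one_way_delays(send_ts_ms, recv_ts_ms):
    """Compute per-packet relative one-way delay from feedback timestamps.

    The raw difference recv - send includes an unknown clock offset
    between the two endpoints. Subtracting the minimum observed
    difference cancels that offset (and the base propagation delay),
    leaving queuing delay relative to the least-delayed packet.
    """
    raw = [r - s for s, r in zip(send_ts_ms, recv_ts_ms)]
    base = min(raw)  # stands in for clock offset + propagation delay
    return [d - base for d in raw]
```

In a live implementation the minimum would be tracked over a sliding history rather than over all packets, but the offset-cancellation idea is the same.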
C
Our implementation also exports statistics, using a mechanism similar to the standard WebRTC logging framework. And that's kind of the main set of changes in the code. In terms of implementation, we basically implemented our module and modified some of the switching logic in WebRTC to use the NADA-based bandwidth estimation, as opposed to the original delay-based bandwidth estimation. And if that is good, I can maybe explain a bit more of the test
C
setup we have carried out. It follows more or less the same formula, in the sense that we set up bidirectional audio-visual calls between a modified Firefox Nightly, with our NADA relative one-way-delay-based bandwidth estimation, and an unmodified Chrome browser. So basically, the blue stream, from client A to B, will be congestion controlled using the proposed NADA scheme, whereas the reverse, green stream, from client B to A, will be congestion controlled using Chrome,
C
so whatever the built-in algorithm is there. The reason why we wanted to do bidirectional calls, and also check out the performance of the reverse direction, is mostly just to consider that as a reference. But of course I do want to call out that the comparisons between the blue and the green are not really side-by-side comparisons; this is more just for our reference. The other thing we did for our setup is that we fixed the screen resolution.
C
So basically, in the browser you can configure the default resolution, so we set it to 720p, and then also inside the code we set our max rate to 3 Mbps. But then I looked into some of the code in the Mozilla browser: there is a table of per-spatial-resolution baseline rates, which ends up limiting the sending rate for the resolution of 720p, and that is indeed what we observed.
C
And finally, as I mentioned, we do log the statistics of the outgoing flow, based on both our calculation itself as well as the per-packet information from the feedback messages. So from that we do have per-packet information on how the blue stream went. And then, also to double-check those results, Chrome conveniently provides graphs of ongoing statistics in both directions.
C
So we also have a few screenshots, just to corroborate with whatever we measure locally. So let's go through a few of the evaluation scenarios and what we measured; next page, please. The first scenario is really more of a sanity check. I have two Macs in my home; I have, you know, Google Fiber and then Wi-Fi. So basically it's a local connection between two clients connected through home Wi-Fi. Ah, sorry, I forgot to mention: both clients are on the same home Wi-Fi AP.
C
What I'm showing here are three graphs, and the semantics will stay the same for the rest of the presentation. The first one shows both the rate calculated by the congestion control module, in blue, as well as the arrival rate reported by the receiver, based on the per-packet information, in red. So, for instance, once in a while the network introduces spikes of arrivals at the receiver, and we do see them
C
reported back in red. And, as I mentioned before, even though our congestion control algorithm, in this case uncongested, specifies the sending rate, the reference rate, to be 3 megabits per second, the actual sending rate is limited to 2.5 megabits per second. The middle graph shows the delay evolution.
C
We report both the congestion control signal, which is the relative one-way delay, in blue, as well as, as a reference, the round-trip time measurement on a per-packet basis, in black. And the bottom graph shows the instantaneous packet loss ratio, measured over a window of five hundred milliseconds, I believe, but with a sliding window: we report one data point for every 100 milliseconds.
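The loss metric in that bottom graph, an instantaneous loss ratio over a trailing 500 ms window with one data point every 100 ms, could be computed from a per-packet log along these lines (a sketch; the tuple layout is invented for illustration):

```python
def windowed_loss_ratio(packets, window_ms=500, step_ms=100):
    """Sliding-window loss ratio from a per-packet log.

    packets: (send_time_ms, lost) tuples, sorted by send time, where
    `lost` is True for packets that never arrived. Returns one
    (time_ms, loss_ratio) data point every step_ms, each covering the
    trailing window_ms of packets.
    """
    if not packets:
        return []
    end = packets[-1][0]
    points = []
    t = step_ms
    while t <= end:
        # Packets whose send time falls in the trailing window (t-window, t].
        in_win = [lost for ts, lost in packets if t - window_ms < ts <= t]
        if in_win:
            points.append((t, sum(in_win) / len(in_win)))
        t += step_ms
    return points
```

This linear scan is quadratic in the log length; a real implementation would keep running counters over a ring buffer, but the reported numbers would be the same.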
C
So even though you do see occasional, some very high, delay spikes and bursts of packet losses coinciding with those big delays, the overall loss ratio as reported is only 1.6 percent throughout this session. So most of the time there are no losses, but when there are, they are very bursty. And also, in this case, because it's a local connection, the baseline RTT is only five milliseconds. But then, believe it or not, even in a local session, one thing about the RTT measurement is that the highest point is 1.2 seconds.
A
(From the floor.) Also a clarifying question: in the path characteristics the max RTT is 1.2 seconds, but in your delay measurements you don't have that high a value. So is this over a longer time period, or how does that work?
C
That's a great question. To respond: I limited the graph to 400 milliseconds at the top, just so that we can see anything, because otherwise, if we showed everything at full scale, the blue and the red would be locked together; the spikes would just run high up off the top of the graph. This is more of a visualization challenge. But for that question: that's also why I'm always reporting the maximum RTT separately; usually it's not more than a single data point.
C
So if folks are good with this graph, we can move forward, and of course feel free to ask questions, because I'm always happy to chat about how I measure things and all kinds of observations. The second screenshot, from the same experiment, shows the view reported by Chrome. So keep in mind that Chrome is client B.
C
So when it says it's a receive stream, it really refers to the blue A-to-B stream, which is controlled by NADA, whereas when it's reporting on any of the send statistics, it's the one from B to A. I'm just adding those labels for reference, so we can see here. The other thing is that the reported rates are, for some reason, in bytes per second.
C
So whatever number you see here, if you multiply it by eight, it should more or less corroborate the red curve from before, for the direction of A to B. For the reverse direction, within the NADA module we really don't have any visibility into it, so it was also good to check how the reverse flow was doing. I want to mention, because of the scale, that the blue stream was operating more or less at 300 kilobytes per second, so that's approximately 2.4, 2.5 megabits per second, whereas the reverse direction
C
is a little bit lower; it's more at 200 kilobytes per second, so that's more like 1.6. So, for some reason, the reverse flow was operating at a lower rate, and that's also reflected in it operating at a slightly lower spatial resolution, even though with the same frame rate of 30 frames per second. So this is a local flow, by the way. We are looking at more or less a typical, probably kind of a best-case, scenario, but everything is over real Wi-Fi.
C
So there are still, you know, spikes of delay. Next page, please. This next one corresponds more to a typical remote call: this is between me and a colleague in San Jose. Both of us are in a Cisco office, but both of us are on enterprise Wi-Fi, so enterprise Wi-Fi with enterprise-grade wired connections right behind it.
C
So, next page. In this scenario I'm again showing the reported rate in both directions from the Chrome-reported statistics, the corresponding frame resolution, as well as the frames per second. So in this case, number-wise, both of the streams reach around three hundred kilobytes per second at the maximum rate.
C
But maybe one thing I do want to point out here is that apparently the NADA-controlled stream is able to ramp up much faster compared to the Chrome default behavior. And also, in the middle, there was, maybe, a glitch, and it took Chrome somewhat longer to recover, whereas on the NADA stream the recovery is typically very fast. So one thing that I didn't really talk about, but want to mention, is that in NADA we have designed the rate adaptation to operate in two modes.
C
So that turned out to be the main, kind of the one, feature that really helped. And if folks are good with this one, I can move on to a second case, between Austin and San Jose. The main difference is that, again, this is between the two locations, but in this case client B is behind a home Wi-Fi network. So we see much noisier delay measurements and, in this case, even though physically it's a similar location, a much higher baseline RTT. Just maybe for reference: the previous setup, between the two offices,
C
was 60 milliseconds. Here, even though the sender is at the same location, just because the receiver was behind home Wi-Fi, behind a cable modem, the baseline RTT is now a hundred and ten milliseconds, and the high delays and the corresponding bursts of packet losses are much more frequent within a similar duration. But other than that, yeah.
C
In this case the reverse direction does suffer quite a bit, so we see fairly low rates and lower resolution, and the frame rate drops quite a bit. I want to mention that, visually, obviously I also have some screenshots, but since it's all people's faces I wasn't so sure whether to show them here. Of course, at this resolution and frame rate, and also encoding rate, the visual effect was not very good in the other direction.
C
And then, moving forward, next page: this is again between Austin and San Jose, and this is a case where we were a bit curious about what happens if you have a background, you know, ongoing data-downloading flow. So in this case client B, behind the home Wi-Fi, also has ongoing BitTorrent traffic, and we verified that it saturates both the uplink and downlink directions.
C
So in this case we did see that the flow from A to B now operates at much lower rates, around 650 kilobits per second. And the other thing I want to mention, but I don't really have an explanation for here, is that in this case we also see that the actual rate deviates from the target rate in a peculiar way. So for this one I just want to mention this observation, but it's pending further investigation to find out why. Now, delay-wise,
C
the delay is typically around 125 milliseconds, with similar maximum RTT and loss ratio as before. So this mainly shows that, in the presence of persistent background traffic, the video flow itself only operates at a much lower rate. This is how it operates right now, in the face of a competing flow.
C
So I have two more experiments to go over. And in this case I was working with another colleague, and this is a call over an even longer distance, across the Atlantic, because we really wanted to stretch the path RTT and see what these types of remote calls can actually sustain. So this one is between Austin, Texas and an office in Switzerland, and we listed two cases. In this first case, client B is connected by wire, so the only wireless hop is between my laptop,
C
you know, and the local enterprise Wi-Fi network. So maybe, compared to previous graphs, now the path is much cleaner. There are still occasional RTT spikes, and even a big one, but overall with even lower losses and a more stable calculated rate and actual rate. I should mention that in this case the baseline RTT is 180 milliseconds, which is very high.
C
So I was happy that this didn't break it; it still worked well. Next page. And this one shows the corresponding screenshots for both directions. I guess by now you are all very familiar with what to look for in these graphs: the sending rate, as well as the frame resolution and the temporal rate, the frames per second. I forgot to point out that, in this case, for instance, the reverse direction is quite typical of Chrome: it typically takes it a bit longer to ramp up. Next page.
C
One final set of experiments: this is again between Austin and the Switzerland office, and in this Switzerland office the Wi-Fi connection happens to be very choppy. So even when we had, let's say, occasional calls over WebEx, etc., sometimes even the audio itself was largely delayed. So instead of using the wired connection for client B, in this case we switched back to enterprise Wi-Fi, and we see a similar behavior, kind of similar to the previous case with a home connection.
C
I should look up the loss number; there was a glitch in the feed. But I should mention it should be similar to the previous ones, around one or two percent, probably. Next one. For this one we didn't capture the corresponding resolution and rate information. What I should mention is that throughout our experiments we have specified the input resolution for the NADA-controlled stream in Firefox.
C
I think this should be the last slide in terms of experiments, so the next slide is more a summary of our observations so far. I think we tried three different combinations. The local session is a very easy one; the other two are what we think of as typical remote-call situations.
C
In these cases we typically do have sufficient bandwidth, but then, because of the presence of the wireless link, we do see that there are occasional delay spikes. So our observation of the NADA-controlled flow is that it tends to ramp up fairly fast to the maximum allowed rate, that's within a few seconds, and also that the sending rate will react to occasional big delay spikes, but it recovers very quickly, thanks to the other mode of operation, the accelerated ramp-up mode. And we have examined how frequently the modes switch.
C
So we do see that the switching between the accelerated ramp-up and the gradual-update modes is fairly frequent, and it is triggered by those frequent delay spikes, and also just because we have stringent criteria for the flow to operate in accelerated ramp-up mode. And finally, maybe the reassuring part is that we have tested these calls with path baseline RTTs up to 180 milliseconds, and you see that the flows still seem to operate well.
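The two-mode behaviour summarized here (accelerated ramp-up used only when the path looks completely clean, gradual update otherwise) might be sketched as below. The threshold, gains, and rate bounds are illustrative placeholders, not the draft's values; the real criteria for entering accelerated ramp-up are more stringent, which is why the mode switching is so frequent under delay spikes.

```python
def choose_mode(queuing_delay_ms, loss_ratio, delay_thresh_ms=10.0):
    """Accelerated ramp-up only under a strict no-congestion test."""
    if queuing_delay_ms < delay_thresh_ms and loss_ratio == 0.0:
        return "accelerated-ramp-up"
    return "gradual-update"


def update_rate(r_ref, mode, congestion_signal, max_rate=3_000_000):
    """One reference-rate update step for the chosen mode."""
    if mode == "accelerated-ramp-up":
        r_new = 1.1 * r_ref  # multiplicative increase on a clean path
    else:
        # Gradual update: nudge the rate down in proportion to the
        # aggregate congestion signal (delay spikes push it down).
        r_new = r_ref * (1.0 - 0.001 * congestion_signal)
    # Clamp to an allowed operating range.
    return min(max_rate, max(50_000, r_new))
```

A delay spike flips the flow into gradual update and pulls the rate down; once the path is clean again, accelerated ramp-up restores it within a few iterations, matching the fast recovery observed above.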
E
(From Tencent.) I think it's better to show a third graph for each scenario, Chrome to Chrome, because the data on the reverse path could differ: the reverse path may take a different route, and then the delay is going to be different. So it's better to also show Chrome to Chrome between the same two endpoints; then you will see whether NADA to Chrome might be better.
F
Jonathan. Thanks. In terms of figuring out what a typical bandwidth-limited case would be for you, I'm sort of spitballing here, but it might be: if you can find somebody who has a cheap home connection, like DSL or something, that might be a useful case. I think that's probably it; I mean, I'm going to have to find a colleague who lives somewhere remote, somewhere more rural, or who doesn't feel like spending a lot for home bandwidth.
F
It can be hard to find among engineering colleagues, I understand. Yeah, I think that feels like it might be it: somebody who, you know, lives out in the woods and all they can get is DSL or something like that. Okay. Another thing to try, though I guess this probably has a lot of other complexity, might be mobile data, but that might have a whole lot of other complexity. So.
A
And a comment from the phone: in terms of comparing the two, maybe you can also run them in parallel, so that you would have one call with NADA and one call with Chrome, in the same direction, at the same time. If you do them after each other, the path conditions will also not be exactly the same at the two points in time. They may be more similar than going in different directions, but still, the Wi-Fi over time, and then the path over time, may change.
G
Comcast, you know, like the not-so-great provider in the US, or, I don't know, I'm guessing that's where the naming is coming from, but it allows you to basically set, on the command line, bandwidth limitations for given ports and protocols and so on. So maybe that's something to explore.
C
Thank you. I think we can definitely try artificially limiting the bandwidth and see what we get. And actually, early on in some of the evaluations we did some of those too, but maybe we'll try those as well. I think we're really trying to hunt for typical scenarios where the bandwidth is limited, just more like a real-world type of test. So yeah, I think we'll pursue both methods.
J
All right, hi, I'm Colin Perkins. I can now actually see my slides, which helps. So I want to run through the RTP congestion control feedback draft. I feel I should be looking at the rest of the room, but then I can't see this at all.
J
Will do. So the congestion control feedback draft is obviously something we've been discussing for a while. We got a bunch of feedback based on hackathons at the last few meetings, most recently at the Prague meeting, and we submitted the -04 draft to reflect the implementation experience and the hackathons. Not lots of changes in the draft: a few changes, no changes to the packet format. Everything is clarifications, and we discussed these in AVTCORE earlier in the week.
J
Most of the clarifications are around signalling and the precise details of what you report on, when, and how you split RTCP packets up, that sort of thing. There are two changes I wanted to highlight for this working group. One is about the feedback reports on FEC and retransmission packets, and one is the response to lost feedback.
J
So, on the first subject, the thing we've added to the draft this time is a statement that, if you are sending separate RTP streams that contain forward error correction or retransmission packets with their own SSRC, then when you're sending congestion feedback, you also send congestion feedback reports on those streams, that is, congestion feedback on the FEC traffic and on the retransmission traffic, on the basis that this traffic is taking up bandwidth and potentially causing congestion. So you should report on it.
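This new requirement (feedback must also cover FEC and retransmission streams carried on their own SSRCs, since they consume bandwidth too) amounts to grouping every received packet by SSRC, with no stream excluded. A toy sketch, with invented structures that only mirror the draft's per-SSRC layout:

```python
from collections import defaultdict


def build_feedback_blocks(received):
    """Build one congestion-feedback report block per SSRC.

    received: iterable of (ssrc, seq, arrival_ts_ms) for ALL packets
    seen at the receiver, including packets on dedicated RTX/FEC SSRCs.
    Returns one block per SSRC with the covered sequence range and the
    per-packet arrival info.
    """
    by_ssrc = defaultdict(list)
    for ssrc, seq, ts in received:
        by_ssrc[ssrc].append((seq, ts))
    blocks = []
    for ssrc, pkts in by_ssrc.items():
        pkts.sort()  # order by sequence number
        blocks.append({
            "ssrc": ssrc,
            "begin_seq": pkts[0][0],
            "num_reports": pkts[-1][0] - pkts[0][0] + 1,
            "packets": pkts,
        })
    return blocks
```

The point of the change is simply that an RTX or FEC SSRC must produce its own block here rather than being filtered out before this step.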
J
The second change was to add a section talking about the congestion response when congestion control feedback packets are lost, and this is two things. Firstly, it points out that, like all RTCP packets, it's possible that the congestion control feedback packets are lost, and any RTP congestion control algorithm must specify what it does in response to the fact that feedback is being lost. It can't just assume the feedback will be delivered reliably.
J
It could do much as it was doing before, but if a significant number of feedback packets are being lost, you should fairly rapidly reduce the sending rate down towards zero, on the assumption that if all the feedback is being lost, then it's likely that something significant has failed on the path. And it points out that the circuit breaker RFC provides some further guidance here on just when you should stop sending entirely. (Does that have a microphone? Yes, it does.) Okay. And those are the main changes. Then there are two, hopefully very minor, open issues with the draft.
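The loss-of-feedback rule described here (ramp the sending rate down toward zero while feedback keeps failing to arrive, and eventually cease sending, as the circuit breaker RFC advises) might look like this inside a sender's feedback timer. The interval count, backoff factor, and cease threshold are illustrative, not normative:

```python
def rate_after_missed_feedback(rate_bps, intervals_missed,
                               backoff=0.5, ceases_after=5,
                               min_rate_bps=8_000):
    """Reduce the sending rate for each feedback interval with no report.

    Models the draft's guidance: halve the rate per consecutive missed
    feedback interval, floored at a small keep-alive rate, and return 0
    (stop sending) once `ceases_after` intervals have passed with no
    feedback, in the spirit of the RTP circuit breakers.
    """
    if intervals_missed == 0:
        return rate_bps              # feedback flowing: no action here
    if intervals_missed >= ceases_after:
        return 0                     # circuit-breaker-style full stop
    return max(min_rate_bps, rate_bps * backoff ** intervals_missed)
```

Once feedback resumes, a real controller would ramp back up through its normal increase path rather than jumping straight back to the old rate.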
J
It has previously been suggested that we add some discussion of how you convert between the per-SSRC sequence numbers, as proposed in this draft, and the unified sequence number space in the other draft, and talk about what the trade-offs are between the two approaches. These are fairly straightforward, in that the per-SSRC sequence numbers let you do per-SSRC congestion control.
D
On the comparison with the Holmer draft: as far as I remember, within the design team there were some comparisons done, and I have compared the results, like, what the compression is and what kind of benefit or cost we get out of it. But I still have a hard time digesting why we need this comparison in this draft.
J
At a previous meeting we were asked to provide such a comparison. I have no particular desire to do it, but it was one of the things we were asked to do at a previous meeting. And what benefit it brings to the draft: none whatsoever, as far as I could tell. It just positions it; it makes it clear why you would implement certain things.
D
I mean, yeah, in the design team we did calculations and we saw that, yeah, it could be better, I mean, there could be another way of doing the signalling, and we agreed that this working group would do that, and this is what our output is, right? And then, if somebody thinks, okay, they have a better feedback format and they don't want to really comply,
D
because then we have others, like NADA has its own congestion feedback, SCReAM has its own congestion feedback format. Shall we be comparing all of them? If we are not comparing all of them, then why are we picking up this one? I actually really have to understand why we need to do that at all.
F
I don't remember exactly what the group said, but my argument for it would be: because the Holmer draft is actually widely deployed, and to convince people to switch from that to something standard, you have to show them, no, this isn't going to hurt you, and in fact it hopefully will help you. Yes, and
F
basically, you know, the extremely aggressive variable-length encoding in the Holmer draft doesn't actually help you that much in the normal case, where you're just reporting on a few packets. So this isn't going to be, you know, some huge explosion in your feedback, despite the much simpler packet format. Yes, and
J
yeah, and I'm not sure I would even go so far as to have a comparison of the bandwidth costs in this. I think the main advantage of this is that it provides per-SSRC feedback, which gives you the flexibility to do differential congestion control for the different sources, whereas the Holmer draft gives you a single sequence number space.
F
But no, I think the big advantage is not having the header extension, especially as, I'm pretty sure, generally, in most cases, you'll actually want to get all header extensions out of your average RTP video packet. So losing the generic header extension overhead is actually a fairly large win. I think that's the bigger win: losing the per-packet header extension overhead.
F
Jonathan again. What Colin said in AVTCORE is that he hopes to get all this done before his students come back for the fall semester, because then he doesn't have time anymore. So that is the hook.
J
Yes, that is the hook.
A
So basically, our hope is that before the end of the summer we will get the new draft with all this requested text. Then we'll be able to do a working group last call, which indeed we will make sure to send to both AVTCORE and RMCAT.