From YouTube: IETF112-RMCAT-20211112-1430
RMCAT meeting session at IETF 112, 2021/11/12 14:30
https://datatracker.ietf.org/meeting/112/proceedings/
A: It is now 15:30, I see, so I think we can start. This is the RMCAT session. I'm Anna Brunstrom, I'm here with Colin Perkins, and we are the co-chairs.
A: Let's see if I managed... so this is the Note Well. We are on Friday, so you're probably all familiar with it by now; if not, do read it, of course, because it does apply to the meeting.
A: Thank you. Did you want to add something else, or no? That was why you left the queue now, so great. I think we don't need a separate Jabber scribe, because we have the Jabber now integrated in Meetecho, so we will keep an eye on that.
A: So, let's see. As Colin hinted, we have not met in a while. This is the agenda we have for the day. We're going to have an update on our algorithms. We met last time at IETF 106, and at that point we decided we had delivered our algorithms and most of our documents, and that we would go idle for a while to try and gain experience with the algorithms, to then come back and see how we would proceed after that.
A: At that point we were thinking to go idle for about a year, but then of course the pandemic arrived, and we were hoping to also be able to meet in person. But now it's high time to come back and see what has happened to the algorithms and what we have left as work in the working group. So after we finish this introduction part, we will move to the update on the status of the RMCAT algorithms: first an overview from the chairs.
A: Then we will have Ingemar presenting some updates on the SCReAM algorithm from RFC 8298, and David presenting some updates on the shared bottleneck detection from RFC 8382.
A: The documents have been in various queues, some of them waiting for each other. We have the SCReAM and NADA algorithms, the two congestion controls, and then we have the coupled congestion control and the shared bottleneck detection. The last of those, the NADA algorithm, was published after our last physical meeting, and the coupled congestion control had a dependency on it, so it was also published after we last met. They were all submitted at that point in time, and now they are all RFCs.
A: So three of these four documents were all published in January of this year, when this whole sequence of documents was completed and all published. We already had the video traffic models published before, and then we have one active working group document on the feedback side; this is the draft that Colin will talk about.
A: When we last met, this document was waiting for the RTP Control Protocol feedback message for congestion control, which was handled at that time by AVTCORE. That is now published as RFC 8888, so we should now finish up this document that was depending on it. We have that on the agenda.
A: Otherwise, we will move on to the algorithms to have an update on those. We have four algorithms, and Ingemar, you're in the queue because you will be presenting soon, I assume. Is that why, or did you want to ask something?
A: No, it's there, I see it. I will just give the overview first and then I'll let you share. Oh, I'm on the overview slide, so no worries. So we have the four algorithms. We have SCReAM, and Ingemar will give us the update on that. Then we have the NADA congestion control, and Xiaoqing updated the status on that last year: they had the full implementation of NADA completed in the Mozilla browser, and this was kind of the completion of the work for them.
A: As far as the authors and chairs know, there have not been any ongoing activities after they completed this implementation, and Xiaoqing and her co-workers have moved on to other projects or other employers after that. So I think there is no ongoing work on NADA as far as we know, but there is the open source implementation.
A: Then there is the coupled congestion control, and there is some ongoing research on the topic by the authors. They have some students working on coupling the video and data flows in Chromium, but there are no results or finished things to report; still, some ongoing research. And then we had the shared bottleneck detection, and David will give us the update on that.
A: So with that, I will now stop my sharing, and Ingemar, I think you should now be able to share. I will give the word to you to give us an update on SCReAM.
D: Okay, hello. I hope you hear me well.
D: This is an update on what has been done in terms of experiments, and the future, around SCReAM, which was developed by me. It was published in late 2017, so it's been a couple of years since. You can skip to the next... okay, I need to click on number two here. To start with: where SCReAM is, or isn't, used.
D: We don't have much experimentation around that. But perhaps something that was a bit below my radar: you can look at the AVTCORE presentation, where Mathis Engelbart and Jörg Ott presented; they have integrated SCReAM with QUIC, with a Go QUIC implementation. So they have some results there, and some issues they found when implementing SCReAM there.
D: More use is in cloud-rendered gaming, and there we're not talking about one or two megabits per second; it's more like 30 megabits per second video with high quality, and we have done quite a lot of experimentation there. Then we also have remote driving, where we have a remote-controlled car that we have been driving around in various projects, and that also includes high-bitrate video with multiple cameras. Also, SCReAM has a kind of wrapper around the congestion control that makes it possible to do some benchmarking, for instance over 5G networks.
D: On cloud-rendered gaming: here we have been much focused around the development of L4S, and SCReAM was picked because, well, it's not the perfect algorithm, I don't claim that, but it's a known algorithm, and it was quite easy to dig up statistics around how it worked and what kind of latency you got. That later on evolved into implementing a GStreamer plugin to do real experiments with cloud-rendered gaming.
D: We were able to try out the experiments we'd done, with and without L4S, and we found a couple of issues with the SCReAM algorithm in the process, mostly related to how video encoders behave, especially when you frequently update the target bitrate; I will come back to that later. And then we have remote driving, where we have done quite a lot of experimentation.
D: We have had students who have integrated the SCReAM algorithm with the remote-controlled car, and we've tried it out in various environments, over 4G and 5G networks.
D: It has been a pretty good test platform to benchmark how it really behaves in poor coverage, for instance. Unfortunately, we don't have much data to show, because of a bit of sloppy behavior: we run the tests and then we drop all the data. It has been a bit of ad hoc testing, pretty much, and for demo purposes; hopefully we will do that better later on.
D: Then we have this benchmarking tool that is available for public use. What it does is emulate a video encoder: you can pick either a fixed bitrate or make it rate adaptive, and you can do the rate adaptation over quite a large bitrate range, from tens of kilobits per second up to more than 500 megabits per second. The upper limit depends on what kind of laptop you are using, or the virtual machine that you have at the other end, because it becomes a bit CPU-consuming, especially given the packet pacing that is involved; so it's not ideal for really high-bitrate applications, but up to something like 500 megabits per second you can try it out. I believe the highest I got was almost 900 megabits per second over a one-gigabit line. You can also model I-frames and variable frame sizes.
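To make concrete what "emulating a video encoder" can mean here, below is a minimal sketch of such a frame-size model. It is not the actual SCReAM benchmarking tool, just an illustration: frame sizes are drawn around a mean set by the current target bitrate and frame rate, with periodic, larger I-frames; all parameters are hypothetical.

```python
import random

def emulated_encoder(target_bps, fps=50, iframe_interval=100,
                     iframe_scale=4.0, jitter=0.2):
    """Yield per-frame sizes (bytes) roughly emulating a CBR video encoder.

    target_bps      -- target bitrate set by the congestion controller
    iframe_interval -- every Nth frame is an I-frame (hypothetical model)
    iframe_scale    -- I-frames are this many times the mean frame size
    jitter          -- relative size variation of the other frames
    """
    mean_bytes = target_bps / fps / 8
    # Shrink the non-I-frames so the long-run rate still matches the target.
    p_scale = (iframe_interval - iframe_scale) / (iframe_interval - 1)
    n = 0
    while True:
        if n % iframe_interval == 0:
            size = mean_bytes * iframe_scale
        else:
            size = mean_bytes * p_scale * random.uniform(1 - jitter, 1 + jitter)
        yield int(size)
        n += 1
```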
D: You get something that is similar to how a video encoder in constant bitrate mode behaves; it's not identical, of course. And you can measure round-trip time, the estimated queue delay (which is a by-product of the SCReAM algorithm), the throughput you get, both the transmitted bitrate and what is acknowledged at the other end, and packet loss and CE marks. It is capable of trying both classic ECN and L4S.
D: The finding so far is that the window-based congestion control that SCReAM is — more or less like TCP, actually, but with a few more bells and whistles — is probably good, at least for cellular access, because you can have radio resource reconfigurations that happen, and you can have handovers that can cause transmission pauses.
D: If that is not handled, you quite easily end up with a lot of data sitting in the transmit buffers in the network, and that causes head-of-line blocking for subsequent video, for instance.
D: So the benefit of a window-based congestion control is that, in cases like this, you have a lot of packets in the sender that you can discard if they are all deemed too old anyway. And then you can, locally on the sender side, force an IDR frame, so you get quite fast recovery because of that; that has been shown in the experimentation.
D: With the remote-controlled car, for instance, in really bad coverage you can end up in those situations. On the feedback rate: we have had a lot of discussion on that, and I've done some work around what feedback rate is applicable. As it is today, it is something like one feedback per 16 RTP packets, and that is probably overkill; but to be honest, I haven't done much experimentation on where the pain limit is here.
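As a rough back-of-the-envelope illustration of why one feedback per 16 RTP packets may be overkill at these rates (the payload size is an assumption, not a figure from the talk):

```python
# Hypothetical numbers: 30 Mbit/s cloud-gaming video, ~1200-byte RTP payloads.
video_bps = 30_000_000
payload_bytes = 1200

rtp_pkts_per_s = video_bps / (payload_bytes * 8)  # ~3125 RTP packets/s
feedback_per_s = rtp_pkts_per_s / 16              # ~195 RTCP feedback packets/s
print(round(rtp_pkts_per_s), round(feedback_per_s))
```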
D: Something like 80% of the development work... doing the network congestion control has been quite easy, but the video encoders, yeah, sometimes they are really strange animals. You have video encoders that are sluggish when you set the target bitrate: it can take a while for them to adapt to the new target bitrate. And sometimes the video encoders can become confused by frequent updates; for instance, the NVENC encoder can give what's called a systematic offset.
D: So if you set it to 20 megabits per second, you will get something like 22 megabits per second on average. That is not a disaster for SCReAM, but it can give some kind of sub-optimal behavior, and it is not always visible if you use just endpoint-based adaptation; it's more when you use L4S that you start to see those anomalies. Also, I-frames are really problematic in congested situations, because they can be really large.
D: So there are encoders that transmit I-frames only when needed, and also gradual decoder refresh, also called periodic intra refresh, can be useful if that is implemented in the decoders and the encoders. Another way, used in the remote-controlled car at lower bitrates, is to compress the I-frames even harder: you set the QP max and QP min values to keep control of that.
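A hypothetical sketch of the two sender-side workarounds just described — compensating a systematic encoder rate offset, and clamping the QP range at low bitrates so I-frames are compressed harder. Names and thresholds are made up for illustration; real encoder APIs differ.

```python
class RateOffsetCompensator:
    """Counter a systematic encoder rate offset (e.g. asking for 20 Mbit/s
    but measuring 22 Mbit/s) by scaling the requested target bitrate."""

    def __init__(self, ema=0.9):
        self.ema = ema    # smoothing factor for the gain estimate
        self.gain = 1.0

    def compensate(self, target_bps, measured_bps):
        if measured_bps > 0:
            ratio = target_bps / measured_bps
            self.gain = self.ema * self.gain + (1 - self.ema) * ratio
        return target_bps * self.gain

def qp_limits(target_bps):
    """Clamp the encoder's QP range at low bitrates so I-frames are
    compressed harder; the thresholds are purely illustrative."""
    if target_bps < 1_000_000:                 # below ~1 Mbit/s
        return {"qp_min": 30, "qp_max": 45}    # coarser quantization
    return {"qp_min": 20, "qp_max": 40}
```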
D: So there are some issues with the video encoders: even though your congestion control algorithm works like it should, the video encoder is essentially a black box that can cause a lot of issues for you.
D: Then, what is to be done in the future? Currently there is some kind of L4S implementation in the running code that is not in the standard, so I have some plans to do an update, possibly based on an improved L4S implementation later on. As I say, that is not planned right now, but hopefully later on.
D: I believe that was the general presentation, and there are four minutes left. Do we have questions?
D: The results we have so far are, to a degree, unofficial; to the degree that they are official, they are in this demo here. And here — you probably won't see it well, but you can see the amount of marked packets you see in the... sorry.
E: Yes. From what I understand, the original design of SCReAM was motivated by wireless clients. Do you think that original design is still part of the model, or do you think it would be equally optimized for things like data-center-to-data-center traffic? I'm really wondering whether or not it makes sense to start considering the full topology when we look at congestion controls, and whether a one-size-fits-all model is maybe not appropriate.
D: I can tell you that, I believe, most of the focus was on the last mile; 4G and 5G access was what was driving it. I haven't thought about the data-center-to-data-center case, for instance, whether it is applicable there, and I somehow suspect that other algorithms — if we look at Prague or BBR version 2 or something like that — could be more applicable there.
E: As a follow-up, do you think that the direction matters? Were you mostly optimizing for send from a client, or were you mostly optimizing for receive from the client?
E: Was SCReAM mostly trying to optimize the transmit video of a mobile endpoint, or the receive video of a mobile endpoint, or did you try to optimize both directions equally?
D: Yeah, I believe the actual SCReAM algorithm doesn't really optimize for that. If you look at it and think about the jitter buffers, for instance, that is up to the GStreamer pipeline, for instance, to handle, and it doesn't do retransmissions and that stuff; it just handles the congestion control. It's like TCP without retransmissions, more or less, but with another rate adaptation algorithm that is more sensitive to the delay increasing.
F: Okay. Some work that's happened, probably since the last time we were here, which was quite a while ago: there's been some work putting basically the RFC version of SBD into Multipath TCP and Multipath QUIC, and that's what we'll be talking about.
F: With the statistics that we collect, but a different way of grouping — a dynamic, online clustering method — and then we compare that with what was one of the most promising ones in the literature, a different type of method, a cross-correlation method that originally was an offline method, and also our online adaptation of that. In the paper we look at TEACUP testbed experiments, simulations, and some experiments over the Internet as well.
F: We look at the effect of different path delays on how well you can detect bottlenecks, and the effect of having lots and lots of parallel bottlenecks; and of course there's more in the paper if you want to read it.
F: Now, the RMCAT version is based on summary statistics, because originally one of the requirements was very low feedback overhead. So with this method we send the summary statistics, not very often, and it also allows us to detect bottlenecks at senders or receivers and work with both. It's a divide-and-conquer method, the one that's written up in the RFC, and it's very simple, very light: we take our flows and our measurements and first divide them into which flows are experiencing congestion and which are not. Then we take that group and subdivide it according to an estimate of the oscillation, then we look at the variance — how they vary — as a summary statistic, and then an estimate of the skew; and then, if the packet loss is high enough to be statistically relevant, we look at packet loss as well, dividing into smaller and smaller groups until we end up with the final grouping.
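A schematic sketch of that divide-and-conquer grouping, in the spirit of RFC 8382 but simplified: the statistics are assumed precomputed per flow, the tolerances are invented, and the RFC additionally only uses packet loss when it is statistically significant.

```python
# Tolerances are invented for illustration; RFC 8382 defines the real ones.
TOL = {"congested": 0, "freq_est": 0.1, "var_est": 0.2,
       "skew_est": 0.1, "pkt_loss": 0.01}

def group_flows(flows, keys=("congested", "freq_est", "var_est",
                             "skew_est", "pkt_loss")):
    """Recursively split flows on one summary statistic at a time.

    Each flow is a dict of per-flow summary statistics computed over an
    observation interval. Flows whose statistics stay close to each other
    end up in the same group, i.e. are declared to share a bottleneck.
    """
    if not keys:
        return [flows]
    key, rest = keys[0], keys[1:]
    groups = []
    for flow in sorted(flows, key=lambda f: f[key]):
        if groups and abs(flow[key] - groups[-1][-1][key]) <= TOL[key]:
            groups[-1].append(flow)   # close enough: same sub-group
        else:
            groups.append([flow])     # start a new sub-group
    result = []
    for g in groups:
        result.extend(group_flows(g, rest))  # subdivide by the next statistic
    return result
```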
F: The most common question we were asked is: why don't we use a clustering algorithm? That tends to be difficult, because we don't know the number of groups — the number of groups keeps changing, and the flows in each group can keep changing — but we have a novel iterative one that we present, and it sort of does closest-neighbor weighting by an inverse square law.
F: Now, the other main way of doing this sort of thing is to correlate the different flows. But because one-way delay is so noisy, if you don't use summary statistics to help with the noise, then you need to filter it some way, and this uses wavelet filtering techniques, which are easily implementable in an online way as well. The original one, though, takes the whole delay trace of every flow, puts it into MATLAB, and gets MATLAB to work out the optimal filter characteristics for the whole of everything.
F: Okay. Firstly, looking at detection delays: how long does it take after a bottleneck starts — we knew ground truth in these experiments — before we can say there's a shared bottleneck, and then, after it goes away, how long before each of these algorithms can say it has stopped? It's not that surprising that the offline crystal-ball method, in green, works out the start of a bottleneck very, very quickly.
F: It's not so good at detecting the end, because it was never designed to do anything but detect bottlenecks, not their stopping. Our online version of that, in yellow, is similar — not quite as good, because it's not optimal, I suppose.
F: Then we have the RFC-based ones. DC-SBD is the dynamic clustering one, and RMCAT SBD is the straight RFC, and you can see that the summary-statistic ones introduce some lag: it takes up to about four seconds, sometimes more, to detect the start or the stop of a bottleneck.
F: If all of the paths that the flows are going over have exactly the same delay, then what we call the source lag difference is zero; but if the difference between the flow with the shortest propagation delay and the one with the longest propagation delay is 1000 milliseconds, then the source lag difference would be one thousand. What you see here is that, as the difference between the propagation delays of the different flows increases, the summary-statistic methods handle that very well; but for the correlation ones, the more lag we get, the less well they handle it, and you can see that in the efficiency measure we use.
F: We use the F1 score for accuracy in this case, which is the harmonic mean of precision and recall, and they drop off quite sharply after you get above 100 milliseconds in this case.
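For reference, the F1 score mentioned here is the harmonic mean of precision and recall:

```latex
F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
```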
F: Now, the other thing we looked at is what happens if we have lots and lots of parallel bottlenecks at once — how does that affect the efficiency? We find that the RFC version, after we get above four parallel bottlenecks, starts to drop off in its accuracy. The dynamic clustering method that we have improves on that with the summary statistics, and the cross-correlation, wavelet-filtered methods stay pretty flat regardless of how many parallel bottlenecks we have.
F: Summary statistics introduce a lag, so that slows down detecting and un-detecting, and if you have high numbers of parallel bottlenecks, then a different grouping mechanism, such as the one we proposed here, improves on that. Now, a limitation of all of these methods — and pretty much all shared bottleneck detection methods...
C: Sorry, so David, did you look at more complex layer-2 bottlenecks, where you've got some sort of algorithm going on underneath to allocate bandwidth, and the bottleneck is changeable, maybe not characterizable? Did you manage to touch any of that aspect when you were doing your exploration and thinking?
C: Sure. So I'm thinking about a radio link that's actually doing some sort of adaptive bitrate method, maybe to counter rain fade or movement of the parties, and then your bottleneck doesn't fit any mold. Presumably you have to kind of reject this one as being difficult to characterize.
B: Yes, okay. So hi, I'm Colin Perkins, and I'm going to talk quickly about some of the overheads of congestion control feedback in RTP, which is the final working group draft we have in this group.
B: It looks at the various compound and non-compound packets, the different feedback profiles, the different reporting extensions and so on, to try and work through the calculations and show what sort of overhead you get from sending congestion control feedback in some typical cases.
B: It looks at two relatively straightforward cases, the first being the voice-over-IP case and the second a simple point-to-point video case. In the voice-over-IP case, we're looking at a two-party voice call sending so many voice packets per second, and wanting to send congestion feedback every frame, every second frame, or so on.
B: We're trying to set the RTCP reporting interval so we're sending RTCP reports every few audio frames — maybe every second frame, every fourth, every sixteenth frame, or whatever. Obviously there are different ways of configuring RTP and RTCP, and we can send the feedback reports either as regular compound RTCP packets or as non-compound packets.
B: And we can mix and match between these: we can either send compound packets every time, or we can send compound packets with non-compound packets between them.
B: So the draft works through the format of a bunch of these packets. It starts by looking at the format of a compound RTCP feedback packet and at the size of the various headers: if we're using UDP and IPv4, you've got a certain size of header.
B: A certain size of source description packet, a certain amount of congestion control feedback, the SRTCP authentication tag and so on. I'm not going to talk through all the numbers here, but the draft works through counting the number of bytes in each of these to figure out the packet sizes.
B: With non-compound packets you obviously get a much smaller packet, because you don't have the source description packets and the sender reports. A real system would then send a mix of these: it has to send occasional compound packets in order to keep RTP working, because you need the occasional sender reports and source description packets.
B: In between those, it will send non-compound packets that carry just the congestion control feedback, to reduce the overhead, and obviously you can change the balance between these. What we're seeing on the slide is something that sends one compound packet followed by a couple of non-compound packets, in a repeating pattern; in some cases you could just send the compound packets, or you could have one or two or three or whatever non-compound packets in between.
B: The draft then looks at how you do the calculation to figure out the overheads. It goes through the RFC 3550 reporting interval calculation and shows that the reporting interval depends on the number of participants in the session, the average size of the packets, and the RTCP bandwidth fraction that has been allocated, and that lets you work out the RTCP reporting interval. If we want to send a congestion control report every certain number of frames, you set the reporting interval to be...
B: ...that number times the framing interval, and you work through the maths, and that gives you a fraction for the bandwidth overhead that you'll get if you want to send RTCP reports at that particular rate in order to get congestion control feedback.
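A minimal sketch of that calculation, simplified from the RFC 3550 rule that the reporting interval grows with the group size and the average RTCP packet size; the 140-byte compound packet size below is an assumption for illustration, not the draft's exact count:

```python
def rtcp_bw_needed(frame_interval_s, frames_per_report,
                   avg_rtcp_bytes, members=2):
    """Invert the (simplified) RFC 3550 rule T = members * avg_size / rtcp_bw:
    given the desired reporting interval (every N media frames), return the
    RTCP bandwidth that must be allocated to achieve it."""
    T = frames_per_report * frame_interval_s   # desired reporting interval (s)
    return members * avg_rtcp_bytes * 8 / T    # required RTCP bandwidth (bit/s)

# 20 ms audio framing, feedback every 2nd frame, ~140-byte compound packets
# (assumed size): ~56 kbit/s, in the ballpark of the 57 kbit/s quoted below.
print(rtcp_bw_needed(0.020, 2, 140) / 1000, "kbit/s")
```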
B: This table shows that if you have your audio framed into 20-millisecond packets and you're sending RTCP feedback every second frame of audio, using only compound packets, your RTCP bandwidth is 57 kilobits per second for the congestion control feedback. As you increase the number of frames — if you report every eighth frame — this drops down to 15 kilobits, and if you increase the audio framing interval, the overhead drops down more again. This is only sending compound packets, but it's showing what you can do.
B: If you do the same thing sending non-compound packets in between — this is alternating compound and non-compound packets — you see the overheads reduce somewhat, and if you send more non-compound packets in between, you can reduce the overheads further. It shows the sort of example behavior you get as you play with the different parameters, to illustrate what the overhead of these mechanisms is.
B: It then does the same sort of calculation for a point-to-point video conference, assuming we've got two people both sending audio and video streams, all bundled onto a single 5-tuple. You've got four active SSRCs — one audio and one video for each of the two participants — and we're trying to send congestion control feedback every so many video frames; and this is the sort of feedback you get in this case.
B: You've got your UDP/IP headers, and, because there are two SSRCs, you've got the feedback being aggregated into the packet; you've got one compound RTCP packet from the reporting source and one from the non-reporting source, assuming you're using the reporting groups and the aggregated feedback mechanisms, and then you've got the authentication tag for SRTP.
B: Again, I'm not going to talk about it in much detail, but the draft works through this. The non-reporting packet — assuming we're sending compound packets — has an empty sender report, a source description with the CNAME, and the reporting group source packets in it.
B: The reporting source has a complete sender report; it has the CNAME and the reporting group packet, and it's got the congestion control feedback. Because you're using reporting groups, this is the congestion control feedback for both SSRCs, since they're co-located and they're seeing the same thing. The draft works through counting up the size of these various packets and figuring out what the overhead would be, and that turns out to be quite large.
B: In this case, you've got a lot of overhead because of the SSRCs and the various reporting packets: a couple of hundred bytes of overhead, plus the feedback.
B: If, for example, you're sending media at 200 kilobits per second, with 16 frames per second video...
B: ...and you're trying to send a report for every frame, that gives you one video packet per report and probably three audio packets per report, depending how you do the audio packetization, leading to an RTCP bandwidth of about 67 kilobits per second, which is about a third of the video rate. As you go down the table, as the media rate increases, the overheads of the reporting go down, and the table shows how this varies with the frame rate you're choosing.
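A quick consistency check on those figures, using only numbers from the talk (the per-interval byte count is derived, not stated):

```python
fps = 16              # one congestion control report per video frame
rtcp_bps = 67_000     # quoted RTCP bandwidth for this configuration
media_bps = 200_000   # quoted media rate

bytes_per_interval = rtcp_bps / fps / 8   # ~523 bytes of RTCP per report
overhead = rtcp_bps / media_bps           # ~0.34, about a third of the media rate
print(round(bytes_per_interval), round(overhead, 2))
```

That is consistent with "a couple of hundred bytes of overhead plus the feedback" per compound packet, with two compound packets per reporting interval.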
B: Similarly, you can use reduced-size RTCP packets by playing with the way you configure RTCP. I'm not going to walk through the details, but the draft does that, and it shows that the required bandwidth drops down, as you might expect, and reduces the overheads.
B: So that's a very quick version of what's in this draft. In the latest version, what I've done is bring the calculations up to date with the published version of RFC 8888, and correct a couple of minor mistakes in the way the calculations were being done. The previous versions had placeholders for multi-party and screen-sharing scenarios; I've taken those out, because when I tried to work through them, I realized there were way too many variables to easily characterize them.
B: If we wanted something in there that would actually be representative of the different ways of configuring it, this draft would balloon out to 50 pages long, and it didn't seem worth the effort, because the scenarios in there, I think, illustrate the fundamentals of the behavior.
B: So that's where we're at with this version. As I say, it illustrates the various factors that influence the overhead of using this congestion control feedback, and I think it hopefully gives some useful hints to implementers as to how you might configure and use the format.
B: From my point of view, I think this is done. It's got everything I want to put in it, and I think it illustrates the point. So my recommendation would be that, if the group thinks this is useful enough to publish, we should take it to last call; if there are nits or details to sort out, we can do that.
A: Okay, thank you, Colin, for the update. Do we have any questions for Colin?
B: I think it's useful, yes. It's obviously only covering some of the basic cases, but I think it shows the point of how to configure things.
A: This also moves us into our last item on the agenda: we have a few minutes to see how to wrap up and what our outstanding work points are. And, of course, this draft that Colin just presented is our last open working group document that we had not concluded, and from the author's perspective this draft is complete.
A: So the suggestion here, I think, is to send it out for working group last call and feedback from the working group. There, of course, it's very good to get feedback on the current status of the document, and whether you see this as a good piece of advice to publish from the working group. It has been a while since we talked about this document, and I'm not sure if the participants have had time to read it, so I think we have to take it to the list to get more feedback.
A: At the moment, I think what we maybe need to discuss is the SCReAM side, because I think this is the algorithm that has seen the most use, and, as you had on your slide, Ingemar, an update that brings in the handling of L4S is maybe something that is still useful as an experimental algorithm, I would expect. So we are now also open to comments on any of the open points from the rest of the participants, and on what your view is.
I: Hey, sorry. I basically think that that last remaining document should be AD sponsored, and the group should probably close, the last remaining one being...
A: So I think, for the open document, the hope was that this was very close to being done.
B: I mean, the document has basically been around for like eight years or so, right, and it's occasionally getting updates, which is great; but, you know, I would either start working group last call now or make it AD sponsored.
J: Okay, just to add to what Lars said: for SCReAM, this 8298-bis is an experimental document, so we could actually even do it in ICCRG, if that's a better venue for it.
A: I think it would also be very good, of course, if some of the algorithm authors could read it; that would be my hope.
H: With my AD hat on: I have not seen any huge activity, or really any activity, on the mailing list. And NADA, as was reported in 2020, has been implemented in Mozilla Firefox; I have no information on whether that is in use. SCReAM: Ingemar is doing some trials, and that's good to have, and we can actually fix some updates and all these things, so that's fine. Other than that...
H: I don't see anything happening on SBD. Previously, in one of the IETF meetings, there was one proposal coming in — somebody asked, can we make another congestion control algorithm, and all this — but I don't think they're coming back. And today, from this meeting, I get the view that there might not be too much energy to carry the working group on.
H: So I think, for this document, the last one that we have in the working group, we can go directly to working group last call and push it to IETF last call very soon. Other than that, I don't... I am...
H: I cannot convince myself why we should keep this working group, and if we need to, we can actually keep the mailing list alive, right? So if there is anything — some people who want to come in in the future, or some updates that want to be discussed — the mailing list could still be there. That's actually what I think. And yeah, I think this working group has produced really good documents.
H: I have been part of this working group from the very beginning. It was a quite good journey, but I would love to see this work being deployed and adopted; other than that, I don't see any further output from this working group.
A: So, as I said, we had decided to have a pause to see if there would be more deployment of, and experience from, the algorithms, but, as I said, we as chairs also don't see that any of the algorithms would be candidates to be pushed forward at this point. So that part of the charter, I think, is not relevant at this point in time. I think we all have the same view on that, yeah.
A: So we can, of course, also coordinate on how to wrap up that last document — if you have other comments on that one — but otherwise I think the plan is that we try to finish off the last document and then we are ready to close the group. This is my understanding of the situation, and the sense of the group as well. We are a little bit over time, but Gorry, you are in the queue; you wanted to add something.
A: So, anything you want to add, Colin?
B: Mostly I'm going to add that I still can't work the mute button. I was just going to say thanks, everyone, for participating. As you say, it was good to get an update and come to closure on the group, and yeah, we should try and get that working group last call done very quickly, I think.