From YouTube: IETF99-RMCAT-20170719-0930
Description: RMCAT meeting session at IETF 99, 2017/07/19 09:30
https://datatracker.ietf.org/meeting/99/proceedings/
A: Thanks.

B: Okay, the agenda has been circulated, so we have not such a long agenda today. We'll start with the status update on the different documents, then we have a discussion on the feedback design team and the progress there, and then we would like to discuss next steps, as we are now starting to finish up our initial experimental candidate algorithms and the associated drafts.
B: As for the working group document status: on the candidate algorithms, the coupled CC algorithm has been submitted to the IESG. The shared bottleneck detection has passed working group last call, and there was quite some discussion from the feedback that came in there; I think all those issues are now resolved, and David updated the draft earlier this month. So this one is now progressing into write-up.
B: Then we have the evaluation drafts, and those, I think, are now the ones we need to start getting out, after the candidate algorithm drafts have moved on. We discussed the eval test draft at the virtual interim, so the eval test was ready for working group last call. Then, for the eval criteria, there was some outstanding issue with the TCP part there that you were looking at.
B: I think this was the only outstanding point that needed to be resolved, and we wanted to send the eval test and the eval criteria together for working group last call, as they are so closely tied, so that people can look at both of them and see that they are in sync. So as soon as we have the update for that done, we can move both of those two to working group last call, and then we will of course need some reviewers for those drafts.
B: We've had not so much review enthusiasm on the last drafts that have been circulated, so please try and help review those; I think it would be very good. Also, we have different authors on these and they need to be in sync, so if you can also review each other's drafts to see that — yeah.
D: I think the review is appreciated, I would say. The eval test, of course, has been implemented by several people, so I would be much more confident about the correctness of — actually — both the drafts, because people have had to actually implement part of the eval test, and parts of it by reading the eval criteria, and some things have been pointed out in the past based on implementers' experience. So I would believe — I'm much more confident in the correctness of those drafts. But reviews are much appreciated. Yeah.
D: What I'm saying — I was just going to say that we've implemented at least half of the traffic model, I think the trace-based one; we haven't done the synthetic codecs, but I think there was some rumor from Cisco that they were implemented in ns-3. So if anyone's actually done that, it would be good to see — if someone's actually independently run the code base and gone through it and seen.
D: So, as I said: the second part of the draft we've implemented, so we're confident that it kind of works as we expect it to. The first one has not been independently verified, so there are, of course, results that we've seen, but no one has confirmed them. So it would be nice if someone could confirm it.
C: I had discussed with them again, and they said they are working on getting the code ready for open source. What they want to make sure of is that it's good-quality code; they're almost there, so the code might be getting open-sourced pretty soon, and people can track that. I think the talk here is referring to this ns-3 part, because I believe this traffic model is already open source, I think.
B: What is actually the relationship between these documents? Because this has been moving around a little bit. Then we have the work around the feedback, and that is also something that we have on the agenda: Colin will present an update on the feedback overhead, and then we have the associated feedback message draft that the design team has been working on.
B: These are the milestones. We are now, I think, going to be able to send out the drafts in July, so we are on track for the next few milestones. We of course updated the dates for these — that's why we are now on track on those. And then in the discussion at the end we will get back to the other milestones and how to approach the work around the evaluations of the others, now that we have the candidates, because our next milestones are several milestones related to that, actually.
C: Okay, so I have a very short status update on this, on the design team. What we have been doing: I think at the interim we showed that we had almost resolved all the issues. The only thing that was still there was whether we need optimization, in terms of compression or something else; there were some proposals. And since the interim meeting the design team has been meeting several times, and what we have been doing —
C: We have been analyzing and comparing what is already deployed against what the RTCP bandwidth demand would be if somebody deploys this current format. The idea was that if we can compare our numbers with a workable solution, we have certain margins to see how efficient or inefficient it is, and we have an idea of how much the compression would matter — and we have kind of come to a conclusion.
C
I
think
Colin
is
going
to
present
the
comparison
results
from
the
design
team
that
we
have
been
looking
at
and
their
current
conclusion.
So
far
we
have
is
like
we
don't
see
a
great
increase
in
or
do
sorry
a
decrease
in
our
on
the
requirements
on
artistic
bent
and
if
we
do
some
sort
of
optimization
so
I,
we
believe
from
a
design
point
of
view.
C
This
conformant
is
pretty
good
and
after
Colin's
comparison,
I
think
I'm
going
to
ask
whether
we
can
conclude
this
work
in
our
MCAT
and
movie
to
a
geek
or
4/4
instrument
track.
So
I'll
ask
that
question.
So
keep
bear
that
in
mind.
Colin's
presentation,
where
the
tasks
discussion
to
the
working
group.
G: Obviously, when you work out — when you think about — the overhead of sending congestion feedback in RTCP, there's a bunch of things it depends upon, right. It depends upon how often you want to send the feedback. It depends on what information you're feeding back — what type of feedback is needed — how you're formatting that information, whether it's being sent as a compound or a non-compound packet, and how many media streams you're sending.
G
We've
got
two
approaches
to
sending
this.
This
feedback,
which
I've
considered
we've
got
that
design
team
feedback,
which
is
what
I
presented
in
I
guess.
The
sole
meeting
I
went
through
the
details
of
that
and
I
also
just
did
an
analysis
of
the
overheads
of
using
the
mechanism.
Google
has
been
using,
or
one
of
the
two
mechanisms
Google
have
been
using
in
the
the
Aaron
Katz
transfer,
wave
congestion
control
extensions,
which
I
think
was
presented
here
a
year
or
two
back
okay,
so
we
looked
at
two
scenarios.
First,
one
is
voice
over
IP.
G: As a result of that, you want a reporting interval which is the framing interval times how many frames you want — how often you want the feedback. And you can send the congestion feedback either in the regular compound RTCP packets, along with the sender reports and the SDES, or you can use a non-compound packet and send it in between the regularly scheduled packets. We're sending N_nc non-compound packets between every compound packet, and we use that parameter in the analysis later.
G
The
format
of
the
packets
which,
looking
at
the
format
Stefan
Stefan,
has
proposed
the
format
of
the
feedback.
It
was
like
this.
You
see,
there's
a
fixed
header
with
some
SS,
our
C's
based
sequence,
numbers
packet,
counts,
reference
times
and
so
on.
There's
then
a
block
of
what's
labeled
as
packet
chunks
and
then
a
block
of
what
label
those
received
deltas,
the
packet
chunks
IVA,
just
a
bit
vector
of
V
of
this
packet-
was
received.
This
one
wasn't
or
there
are
alien
encoded.
G: Obviously, the size of this packet varies depending on how many packets you're sending feedback on, and it varies depending on the loss patterns, because you've got a loss RLE or a bitmap, and you can choose whether to send that as an RLE or as a bitmap — but the overheads were there. If you're only sending a single stream, this works. If you're sending multiple RTP streams, it's designed with — it's got — a transport-wide sequence number.
G
So
it
relies
on
putting
an
additional
sequence
number
in
an
RTP
header
extension
and
they
then
send
one
feedback
packet
from
one
SSRC,
which
reports
on
those
additional
sequence
numbers,
and
there
are
a
bunch
of
issues
about
how
that
fits
with
within
RTP
in
general,
which
I'm
not
going
to
talk
about
and
talk
about
the
events,
but
for
the
voice
over
IP
case.
You
don't
need
this
header
extension.
G: You also send non-compound packets which just have the congestion control feedback — an RTCP packet within a UDP packet. The size of course varies depending on, you know, the packet loss patterns and so on. I'm analyzing a couple of cases. If we start with the best case, which is that no packets are lost, then you can just put in a single RLE chunk which says everything was received, and that takes two bytes for the packet chunks.
G: For the receive deltas, the best case is that all of the timestamp deltas are small enough to fit into one byte. Therefore you need one byte for each packet you're reporting on: if you're reporting on N_R packets, that means you have N_R bytes for the deltas. And you've got 28 bytes for the UDP and IP headers — 20 plus 8.
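The best-case packet size described here can be sketched in a few lines. This is an illustrative reconstruction: the talk quotes the 2-byte chunk, the one byte per delta, and the 28-byte UDP/IP headers, while the 20-byte RTCP fixed-header figure is an assumption, not a number from the slides.

```python
# Best-case size of one transport-wide congestion feedback packet, as
# described above: no loss (a single 2-byte RLE chunk saying "all
# received") and every receive delta small enough to fit into one byte.

UDP_IP = 28          # 20-byte IPv4 header + 8-byte UDP header (from the talk)
FIXED_HEADER = 20    # assumed size for SSRCs, base seq, counts, ref time

def best_case_feedback_size(n_reported: int) -> int:
    """On-the-wire bytes needed to report on n_reported packets."""
    chunks = 2              # one run-length chunk covers everything
    deltas = n_reported     # one byte per reported packet
    return UDP_IP + FIXED_HEADER + chunks + deltas

print(best_case_feedback_size(50))  # reporting on 50 packets -> 100 bytes
```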
G: Okay. So you're sending some compound packets, and between them you're sending some number of non-compound packets. You can work out the average RTCP packet size in a very straightforward way, plugging in the number of non-compound packets being sent, just by averaging out the sizes of the compound and non-compound packets.
G
If
you
troll
your
way
through
RFC
3550,
you
find
that
the
the
expression
for
the
reporting
interval
reduces
number
of
number
of
participants.
Number
of
SSR
sees
times
the
size
of
the
rtcp
packets
divided
by
the
tcp
bandwidth
you
plug
in
the
size
of
the
the
average
size
of
the
packets
based
on
the
size
of
the
compound
and
non
compound
packets.
G: You decide you want the reporting interval to be the framing interval times the number of packets you're reporting on; there's some algebra, and you end up with this expression for the RTCP bandwidth. The RMCAT CC feedback draft walks through that in a bit more detail, and this is exactly the same thing as I presented at the Seoul meeting.
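The algebra being described can be sketched as follows. The function shapes follow the RFC 3550 expression quoted above; the 110-byte average packet size plugged in at the end is an illustrative assumption, not the slide's exact value, so the result only roughly lands near the 1.8 kb/s figure mentioned in the talk.

```python
# Sketch of the RFC 3550 reporting-interval algebra described above:
# interval ~= n_participants * avg_rtcp_packet_size / rtcp_bandwidth,
# solved here for the RTCP bandwidth needed to hit a chosen interval.

def avg_rtcp_size(compound: float, non_compound: float, n_nc: int) -> float:
    """Average packet size with n_nc non-compound packets per compound one."""
    return (compound + n_nc * non_compound) / (n_nc + 1)

def required_rtcp_bw_bps(n_ssrc: int, avg_pkt_bytes: float,
                         framing_interval_s: float,
                         frames_per_report: int) -> float:
    """RTCP bandwidth so the reporting interval equals the framing
    interval times the number of frames reported on."""
    interval = framing_interval_s * frames_per_report
    return n_ssrc * avg_pkt_bytes * 8 / interval

# VoIP-style example: 2 SSRCs, 20 ms frames, one report per second,
# 110-byte average RTCP packet (an assumed, illustrative size).
bw = required_rtcp_bw_bps(2, 110, 0.020, 50)
print(f"{bw / 1000:.2f} kb/s of RTCP")  # -> 1.76 kb/s
```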
G
The
best
case,
you
know
the
smallest
to
pick
values
as
you
increase
the
number
of
frames
on
which
you're
reporting
new
you
report
less
often,
then
the
rtcp
bandwidth
goes
down.
Similarly,
if
you're,
if
you're
increasing
the
size
of
the
packets
again
the
rtcp
bandwidth
acoustic,
in
the
best
case,
you
can
get
down
to
1
point
8
kilobits
per
second
nor
that
number.
But
in
the
best
case
you
hit
your
sending
one,
our
tcp
packet
per
second
for
one
point:
they
kilobits
per
second
of
a
TCP.
G: Okay. You can do the same analysis for the design team proposal. The design team proposal sends packets with an extended report, and the extended report has the design team format for reporting on the timestamps, the packet loss run-length encoding, and the ECN feedback. If you work through the analysis — I had all the details of this in the presentation — you find that the size of these is 132 bytes plus twice the number of reports.
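The two per-report size formulas can be put side by side. The 132 + 2·n figure for the design team format is from the talk; the transport-wide line reuses the best-case numbers quoted earlier, with the 20-byte fixed header again an assumption, and whether either figure includes UDP/IP framing is not stated on the slides.

```python
# Best-case per-report sizes (bytes) for the two formats discussed.

def design_team_size(n_reported: int) -> int:
    # Figure quoted in the talk: 132 bytes plus two bytes per report.
    return 132 + 2 * n_reported

def transport_wide_size(n_reported: int) -> int:
    # Earlier best-case numbers: 28 UDP/IP + assumed 20-byte fixed
    # header + 2-byte RLE chunk + one-byte receive deltas.
    return 28 + 20 + 2 + n_reported

for n in (25, 50, 100):
    print(n, design_team_size(n), transport_wide_size(n))
```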
G: The other caveat is that I've made best-case assumptions for the size of the packets. However, even if you make the worst-case assumptions — and I did do this analysis, though I haven't put it on the slides — it still wins out, well, though not by as much. So for the voice-over-IP case Stefan's proposal does win, but it reports slightly less information.
G
So
in
this
case,
if
you
don't
need
the
EC
and
feedback,
then
this
shows
that
you
can
optimize
things
and
get
away
and
whether
we
use
Stefan's
proposal,
whether
we
use
something
else.
We
do
there's
clearly
a
little
bit
of
benefit
in
in
optimizing
this
and
yeah
in
the
most
extreme
case,
you're
going
down
from
two
kilobits
to
one
point:
four
kilobits
about
TCP,
it's
a
little
more
significance
at
the
higher.
G: Okay, the other case we looked at was a video conference: a point-to-point video conference, two participants, each one sending audio and video.
G: So in this case we've got four SSRCs — for each participant, one for the audio and one for the video — and we're assuming they're all bundled together onto a single five-tuple, so this is all a single RTP session. We're assuming a video framing interval of T_F, so 1/T_F frames per second, and we're trying to tie the RTCP reporting interval to some number of video frames. As I said, maybe you want to send an RTCP report every video frame, or every second frame.
G: The reporting source ends up sending a sender report with two report blocks, one for each of the audio and video SSRCs. It has an SDES packet with the CNAME and the reporting group identifier, and it has the XR sub-block which contains the congestion feedback. If you work through the analysis, it works out as 2 bytes for each of the audio packets and 2 bytes for each of the video packets.
G: Okay. We're then making a couple of assumptions. We're assuming everything is constant-rate video, because otherwise the analysis is not practical, and we're assuming all the frames are equal size, so we're just chopping up the bitrate into equal-sized packets. Audio is assumed to be sent at 50 packets per second, with 1500-byte video packets. It's the same RTCP calculation as for the voice case; you just plug in different numbers for the sizes of the compound and the non-compound packets.
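Under the stated assumptions — constant-rate video chopped into equal 1500-byte packets, audio at 50 packets per second — the number of packets one report has to cover can be sketched like this. The function name and the rounding-up choice are illustrative, not from the slides.

```python
# How many audio and video packets fall inside one RTCP reporting
# interval, under the assumptions stated above: audio at 50 packets/s
# and constant-rate video chopped into equal 1500-byte packets.

import math

AUDIO_PPS = 50
VIDEO_PKT_BYTES = 1500

def packets_per_report(video_bps: float, fps: float,
                       frames_per_report: int) -> tuple[int, int]:
    """(audio, video) packet counts covered by one report."""
    interval = frames_per_report / fps
    n_audio = math.ceil(AUDIO_PPS * interval)
    bytes_per_frame = video_bps / 8 / fps
    pkts_per_frame = math.ceil(bytes_per_frame / VIDEO_PKT_BYTES)
    return n_audio, pkts_per_frame * frames_per_report

# 1 Mb/s video at 30 fps, reporting every video frame:
print(packets_per_report(1_000_000, 30, 1))  # -> (2, 3)
```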
G
If,
for
example,
you
think,
while
megabit
per
second
30
frames
per
second
video
joined
up
with
free
earth,
free
videos
like
this
very
thoughts,
audio
packets
and
we
need
122
kilobits
per
second
of
rtcp,
which
is
taqwa
sends
with
a
media
rate.
So
that's
working
through
a
bunch
of
different
media
rates,
a
bunch
of
different
frame
rates
and
we're
only
sending
compound
packets
here,
we're
not
filling
in
with
non
compound
packets.
And
you
see
the
the
overheads
vary
from
about
30
to
35
percent
of
films,
about
6
percent.
G: Yep. So this, I think, is looking at — it's trying to work out — the overheads of, you know, putting the compound packets in for the audio and the video for the Google proposal, which is just straightforward: you send the reports with the reporting group extensions and so on, doing that in the most optimal way. If I understand right, this is not quite what the Google implementation does.
G
I
think
they
do
something
slightly
different
I
think
this
is
actually
lower
than
what
they're
doing
and
it
matches
what
the
design
team
proposal
is
to
do
for
the
compound
packets.
If
you
get
to
the
next
one,
think
sorry,
can
you
end
up
with
two
220
bytes
and
you
have
a
aggregation.
The
Google
proposal,
also
for
the
month
that
sends
non
compound
packets.
So
the
compound
packets
are
just
regular
packets
and
the
events
and
non
compound
packets
that
have
the
congestion
control
feedback
in
them,
and
that's
the
only
congestion
control
feedback.
G
It
sends
the
non-compliant
packets
here
we're
assuming
again,
if
we
start
with
assuming
the
best
case.
There's
no
packet
lost
the
deltas
all
fit
into
one
byte.
You
end
up
with
the
non
compound
congestion,
feedback,
packets
being
50,
bytes
Heather,
plus
the
number
of
audio
packets
plus
number
of
video
package
surety
in
the
best
case,
the
NA
and
the
env
come
from
the
received
timestamp
deltas
and
then
it's
48,
bytes
heather,
plus
2
bytes
for
the
asura.
G
So
you
put
those
numbers
in
with
the
Google
proposal
and
you
come
these
TCP
bandwidth
requirements
and
again
we're
assuming
best-case
timings
and
were
currently
excluding
the
overheads
we
require
that
TP
header
extension,
it's
like
if
you
assume
the
worst
case
so
the
worst
case,
we're
assuming
the
packet
loss
is
unpredictable,
so
you
need
to
send
a
bit
vector
rather
than
the
rles
and
we're
assuming
the
timing.
Variation
is
large
enough
that
you
need
two
bytes
per
packet
for
each
one.
So
that's
that's
the
best
case.
G: Well, for the voice over IP case the — Stefan's proposal — the Google proposal wins out, provided you don't need ECN feedback, because it doesn't send ECN at all. For video, once you take into account the extra cost of the RTP header extension it requires, Stefan's proposal loses slightly in most cases to the design team proposal, although there are a couple of cases where it wins out. The RTCP bandwidth for the video, especially for Stefan's proposal, is very dependent on the data rates, the packet loss patterns, and exactly how many packets are reported in each case.
G
So
my
I
think
my
conclusion
is
that
yeah
in
general,
given
that
I
think
we're
mostly
interested
in
the
the
video
conferencing
in
the
more
the
more,
we
won't
be
flexible
and
support
a
wide
range
of
scenarios.
I
think
the
design
team
proposal
is
the
right
way
to
go,
especially
if
we
want
to
include
ecn,
although
if
we
do
care
about
highly
optimizing
the
voice
over
IP
case
and
don't
care
about
ECM,
we
can
clearly
get
it
with.
I
G: Yeah, definitely. And I'd say one thing I haven't done is, you know, simulate, for example, a 15-person conference and see what the overheads are there — so there's a bunch of scenarios we haven't looked at here, beyond the two simple cases the design team has been considering. I mean, one of the things we had in the Seoul meeting is that there are other savings you can make. This is making the assumption that you're sending 18-byte —
G: — sorry, 16-byte — reporting group identifiers and 16-byte CNAMEs and that sort of thing, and you can certainly send smaller identifiers in some of those cases and get wins that way. You can play with the number of non-compound versus compound packets and that sort of thing. So there are other optimizations you can make.
C: We have been looking at the other factors that have been proposed, like the run-length encoding and the separate vector, and we have now also compared this with a deployed solution. Some of the numbers that Colin mentioned might not be exactly as deployed, but I think the numbers we have included are, if anything, a bit bigger than what is actually there. But still, I mean, I think we have done — we have seen this comparison, and we have our conclusion.
D: The VoIP scenario, right? Yeah. So the reason I bring it up is to see — because that's the scenario where we can be the most optimal, right. And the savings there are not in the congestion feedback, because I can see it's about 20 percent off, right — the yellow is 20 percent lower than the blue. Now the question is: do we want to make some savings, if —
G: (from the floor mic) I think partly the difference is the ECN feedback — I mean, one is reporting ECN and one isn't, although that certainly doesn't account for 20 percent, but there is a difference there. I think also it's partly that one is sending feedback in every packet and one is sending — I mean — the compound packets; that will make a difference, yeah. And we're also formatting the things slightly differently: one uses an XR block and one uses a regular RTCP packet type, so there are slight differences in header overheads, I think.
C: We know that, but this is with compound — I mean, you can, I think, play with this non-compound combination and get pretty close figures here also. But what we — I think that was Colin's point, or what I want to show — is that this is the case that might be there, and this is not that far off.
D: Jonathan Lennox — sorry, I'm just trying to make sure: you're doing a VoIP call where you're sending a compound packet every other voice frame, essentially?

G: No.
C: So, I mean, I'm not hearing people who are super upset about this, but what I am hearing is that we can do engineering around it to actually make up the difference. And this figure — the slide we're looking at — is not even using the same kind of values: the alternative is reduced compound, and we have only reduced-size there.
D: I was just going to ask — going back to what Jonathan said before — are we going to document strategies on this? Because you could do a lot more savings by just doing non-compound, or by spacing the compound packets further apart and putting more non-compound in between. Because once this draft moves to AVTCORE, I guess it would need to come back here once in a while to be discussed, but some of these strategies might impact congestion control algorithms.
C: We decided, okay, we can have a guideline for how to use this. I mean, there were other things in the discussion, like the adaptiveness of the RTCP feedback and all these things, and so the decision was: okay, the design team will be producing just the packet format, take it to AVTCORE, and publish it; and then, if the design team continues doing this adaptive thing or the guideline thing, it can produce another document.
H: I participated in the design team, and, I mean, from my perspective I don't see the bit-rate difference as that significant — and, I mean, you can tweak more, I think, in the space we've been talking about: how often you send compound packets, etcetera. But the important thing here, I think, is to have a structure which aligns with how RTCP would normally be used, so that any future extensions in other dimensions will also be able to ride along on those non-compound packets, etc.
H: I would expect that when you write this up as a packet format in AVTCORE, you'll probably at least discuss a bit where it's transmitted, etc., because of its intended use. So I think you're going to get into that discussion — maybe just at a guidelines level — about how you use it while being reasonably efficient.
G: In some ways I think it would probably make sense to put the discussion about how you tune RTCP and how you use the packet format into that draft, rather than into the draft that just documents the performance — especially since that part is more RMCAT-specific, and the format will therefore more likely end up as an AVT draft, I think.
C: The point I might make on that is: the problem is not how much you tune it. Because the algorithm and the feedback happen at the endpoints, if an algorithm needs something specific about the feedback, that's going to need to be negotiated — and I think that might need to be in the draft that's going to specify it: whatever, you know, signaling negotiation you have to do along with this message, I would imagine.
G: (Colin walks up to the floor mic.) Sorry — from the floor mic: I don't believe this needs any additional —
G: Well, I think the only additional signaling this will need is one item to say "use this feedback". Everything else is just the existing tuning knobs we have in SDP: there are things like tuning the RTCP bandwidth fraction, and the various different features already have signaling extensions.
D: Just going back — in the end there's a sequence number that you report on. I don't think there is a way, anywhere in the mechanism, to say that you report on fewer sequence numbers, or faster; I think you just live with whatever the other endpoint actually sends. So the receiver decides to report on whatever it's receiving, yeah — it sends those feedbacks, and the sender decides whatever.
B: Yeah, we can do that. I mean, my impression is that we have had some discussion, but most people seem okay with the proposal. But to formalize it, let's take a hum on it. So the question is going to be: if you're in favor of moving the document forward, hum when I call the first time; and then, if you're against moving it forward, hum in the second hum.
C: So I think that's it. I think, then, the working group also has to decide whether the design team will still work on some of the other things that we discussed, like the guidance and all these things. But I think in AVTCORE it will again be treated from a design point of view, right — so within AVTCORE the design team will again have to do some revisions, to incorporate the AVTCORE things, yeah.
I: Charlie, this is an individual submission, so, I mean — it's on the agenda for AVTCORE on Friday, and AVTCORE will decide whether it wants to adopt this as a milestone. And, you know, obviously the fact that it's currently draft-dt-rmcat does not stop AVTCORE from saying this will be the basis for a draft-avtcore feedback message or whatever. If it wants to adopt it, then we'll take that up — and maybe people are okay.
B: Okay. Then we will move on to the last point on the agenda: the discussions on the outstanding parts of our milestones. First, we would like to revisit the work on the interactions between the applications and the RTP flows. Some of this started quite early, and there were some different documents circulated — also some of this before the current chairs were active — and we have a number of related drafts. There is the rmcat CC codec interaction draft, which expired some time ago.
B
We
have
the
framework
draft,
which
was
an
is
an
individual
draft
still
at
the
moment,
but
that
came
about
as
a
request
from
the
working
group
and
then
an
even
older
draft
on
orange
cat
app
interaction.
So
we
would
like
to
ask
the
working
group
how
you
think
we
should
what
documents
do
we
actually
need
here
and
in
what
order
do
we
need
to
produce
them?
Because
this
this
discussion
has
been
kind
of
going
for
us.
So.
D: I'll start with the last one, because I think the last one got split into the CC codec interaction, and the other idea was to keep some part of the app interaction there. I'm not sure if it's been overtaken by events, because I think two years have passed in between, and whatever recommendation RMCAT had wanted to make for the app interactions has probably already been done. I don't know if we still want to document that — the interaction between the application and the congestion control — and then the CC codec interaction was the lower bit.
C: It's a bit interesting if I say that — like, whether the app interaction framework document should have all these kinds of things again — because that was where we started in the beginning, saying, well, we have one document doing all the interaction things, and then we split it up. But I do believe the framework document has another purpose: basically we will have, as I believe, three kinds of experimental congestion —
C: — control algorithms, and we need some sort of framework to actually see how implementers can use those things. So the framework document definitely serves another purpose. But I don't know — would it be good to just say something there about app interaction or CC interaction? Because there are modules that you talk about — like, what this is and that — and the idea was that the framework will have the modules of codec and app, and then app and CC, and the app interaction will come and say —
C
This
is
how
we
interact
with
this
frame,
or
maybe
that
that's
it.
Maybe
we
will
do
work.
We
want
to
work
with
framework
document
and
say
like
how
much
we
need
to
put
in
the
other
interaction
documents
that
might
be
wanna
prospects.
Let's
work
on
this
frame
or
document
and
come
back
to
their
other
interactions.
When
we
have
a
bit
clearer
idea
and
if
the
working
group
really
things
like
discovered
and
framework,
then
we
can.
E: Just one comment. The reason why we split it up is because it's really for different audiences: the framework is for somebody who wants to work on congestion control mechanisms; the codec interaction is for somebody who wants to work on codecs; and the app interaction was for, yeah, developers — a different set of people. But if the feedback is already where it needs to be, we don't necessarily have to push it to an RFC.
B: So this clearly is an opportunity, right, for the next steps that we need to take, as we now have three candidates and we're now going to move on. We don't know the outcome of that process yet — if there will be one, if there will be multiple, or, you know, how the evaluations turn out — but if there will be multiple, then it could clearly be a benefit to have some common terminology for the audience that we propose these to.
C
Yeah
I
I
think
maybe
I
said
what
I
think
we
discussed
a
bit
in.
You
know
around
authors
and
also
with
the
chairs
like
what
is
the
future
of
this
framework.
If
everybody's
already
going
forward
with
the
condition
contract
India,
the
idea
was
like
okay,
when
is
like
it
going
for
a
standard
track
track,
it
happens.
This
document
would
really
and
the
the
final
output
of
our
mcat
will
conform
with
this
framework.
So
there
is
a
point
of
what
she'll
work
in
with
the
framework
I
think
the
framework
didn't
get
adopted.
C
I
would
further
think
like
if
you
want
to
work
on
it
out
of
this
document
in
the
next
update.
Perhaps
we
work
with
issues
updated
and
then
work
on
it,
so
otherwise
I
mean
I
to
heal,
to
layer
on
as
the
individual,
and
then
we
spend
quite
a
lot
of
time
on
fixing
all
this
thing
and
then
might
be
no
use.
So.
B: This is one reason we wanted to bring it up here, right — so that we can see what path we think we should take. Because, I mean, we have an option to now say: we have three candidates, now is a good time to work on the framework, and see that we agree on terminology and whether the other documents are still needed or not.
D
We
need
someone
that
find
them
users,
and
otherwise
we
should
so
the
first
one
is
the
most
mature
of
it.
So
I
think
we
just
like
need
to
figure
out
what's
missing
from
there
and
maybe
even
get
a
round
of
review.
They
think
they
were.
It
was
quite
ready
when
the
0
1
and
the
0
to
submit
it
so
I
think
the
first
one
should
move
on
the
the
only
thing
I'm
concerned
about
the
framework.
Is
that
since
it's
three
documents
don't
take,
the
terminology
are
not
conforming
to
framework
I.
D
Have
this
feeling
that
it
might
be
overtaken
by
events
again
like
you
would
have
a
framework
document
which
probably
defines
everything
in
a
nice
way
and
and
then
maybe
the
burden
is
for
the
new
algorithms
that
come
that
may
be
proposed
later
on
for
four
experimental
they
might.
That
might
be
guidance
to
them
to
use
the
framework
document,
because
so
that
I
feel
there's
work
for
framework
to
be
done.
I'm
just
afraid
that
we
will
get
this
thing
done
and
then
maybe
be
overtaken
by
events
and
no
one's
actually
going
to
ever
use
them.
I.
D: And I understand the overlap between the authors of the proposed algorithms and the framework document. So, assuming they can't work on both of them simultaneously, they'd probably prefer to work on the algorithms themselves and take them to completion — but that's what they like. It's an energy question: if these documents do not need it, then who are we writing it for?
B
I think that if, in the end, we are going to push multiple algorithms for Proposed Standard, then using a common set of terminology for the same things would be required. Right now they are all experimental, which in some sense means they are not so aligned. But if we are going to push multiple documents, then I think you could see there is value in having them.
B
But
I
mean
at
this
point:
we
don't
know
the
outcome
of
that
process,
so
I
mean
that
could
be
an
argument
for
saying
that
you
know
we
keep
it
around.
We
can
update
it,
but
we
don't
put
the
effort
on
completing
that
now
when
we
know
more,
what
we
think
will
happen
with
the
candidate
algorithm
somewhat
to
push
forward.
We
also
see
more
what
we
need
in
terms
of
aligning
the
terminology
and
and
the
framework
for
the
documents
regarding
the
the
codec
interactions.
D
I don't think so; the framework came much later. I think it was called the app interaction, or app-codec interaction, or something like that, before it got split into two documents: all the app stuff got removed from the original document, and only the CC-codec part was kept. I actually do not have any recollection of the framework. Maybe ask Ken, since he is the common author between them, clearly.
C
I think there has been a long-standing history behind these documents, especially the codec interaction. As far as I remember, the codec interaction was pretty ready, but then again, as you said, we started to work on the RMCAT framework document, and the app interaction and codec interaction documents were meant to interact with the framework. So the idea was that we should use that common terminology and so on.
C
That
was
the
only
thing
I.
Remember
actually
that
but
I
don't
know
like
I
thought,
like
the
coding
interaction
did
in
its
expired
because
of
lack
of
enthusiasm
from
the
working
group
to
work
with
it,
and
that
was
my
thinking
thinking
but
yeah
it
was.
There
was
discussions
like
okay
having
that
interaction
and
having
the
interaction
with
the
framework
makes
a
lot
of
sense.
C
So,
let's
work
on
the
framework,
because
at
that
time
it
was
the
urgency
from
the
working
group
to
work
on
the
framework
document
like
let's
work
on
it
and
align
the
other
and
the
performance
to
the
framework
document.
But
then
they're
working
moved
aside,
realized
like
it
will
be
a
longer
process
to
actually
finalize
the
framework
document.
So
we
we
said
like
for
sake
of
progress.
We
don't
hold
up
the
proponent
algorithms
just
because
of
the
ephemeral,
definite
move
on,
and
that's
why
we
didn't
really
work
a
bit
more
with
framework
document.
D
By
now,
I
think
the
CC
codec
one
was
always
important
because
it
actually
says
what
the
CC
and
the
codec
would
do
like
like.
Are
you
gonna
increase
and
decrease,
and
that's
why
I
think
the
reliance
on
the
framework
is
only
between
those
components
in
the
framework
we
can
of
course
go
back
and
look
at
the
framework.
D
I
haven't
had
the
look
at
the
framework
document
recently,
so
I
can't
really
say
if
it
touches
more
than
one
component
or
more
than
the
two
components
that
are
named
in
the
named
in
the
title,
every
worthwhile
to
check
it
out.
Maybe
he's
ahead
and
I
can
take
some
time
off
line
this
week
and
actually
come
back
with
what
we
think
may
be.
The
next
steps
for
this.
C
That's the whole point, I think: we're discussing whether it is needed, and if needed, whether we should resurrect these drafts and start working again, maybe from the very beginning. And I don't believe all three and the framework need that much work just to align. As I was a common author, I know that for the codec interaction and the framework, we have been thinking about this alignment from the beginning.
D
Might
have
stopped
it
and
that
might
have
been
the
fec
thing,
because
that
might
be
the
that
might
have
been
one
of
the
things
that
the
Sisi
codec
says
that
when
you
have
to
up
like
increase
the
bitrate,
you
might
want
to
do
it
opportunistically
or
something.
And
then
it
probably
interacts
more
with
some
thing.
Like
an
FCC
thing,
as
a
map
as
an
extra
component,
which
perhaps
the
framework
wants
to
like,
have
some
ideas
about.
G
Yeah, if some of you are willing to look at these and come up with a concrete proposal for what to do going forward, that seems reasonable. Obviously, if there's no one willing to do the work of progressing these, if people don't fancy them enough to do some work, then we're not going to go anywhere. But if you're willing to work on them and still feel there's a need for them, then yes.
C
Again,
maybe
maybe
you
can
ask
this
question
like
there
are
a
couple
of
implementers
here,
I'd
say
like
they're
working
with
some
sort
of
implementation
of
this.
They
can
actually
say
like
whether
this
kind
of
document
is
useful
for
them
for
implementing
already
component.
So
that
would
make
quite
a
lot
of
sense
to
me
actually
to
stop
start
working
on
these
things.
So
anybody
implementing-
and
this
condition
control
from
this
company
I
can
say
like
what
they're
thinking
I
think.
B
The dates clearly may need some adjustment, but I think the main question we had was whether this is work that we should push forward now, or something that we should put on hold for later, and what the feeling from the working group was on the need. But, for sure, if people are willing to work on it and see that it's useful, then good.
B
Still
it's
not
so
many
people
involved
in
the
discussion,
so
in
that
sense,
if,
if
the
ones
that
are
interested
want
to
have
a
further
discussion
on
it
and
come
back
on
the
mailing
list,
with
with
how
you
would
suggest
for
these
documents,
I
think
that
would
be
be
useful.
And
then
we
can
have
a
further
discussion
on
it
and
I
think
for
the
for
the
implementers.
B
Okay,
then
we
move
on
to
the
second
the
discussion
item.
We
wanted
to
bring
up
in
terms
of
how
to
move
the
work
in
the
working
group
forward,
and
this
is,
we
now
have
a
number
of
candidate
algorithms
that
we
are
pushing
out
this
experimental
and
the
next
steps,
for
those
algorithms,
of
course,
is
to
have
evaluations
and
see
if
some
of
them
are
suitable
to
be
pushed
forward.
B
As
for
standards
track
and
as
a
working
group,
we
need
to
kind
of
agree
on
what
type
of
evaluations
and
how
we
should
follow
up
on
this
work.
So
that
is
the
point
that
we
would
like
to
have
some
input
from
the
working
group
on
what
you
see
there.
We
have
a
number
of
starting
points
that
can
play
a
role
in
this.
Of
course,
we
have
the
evaluation
drafts
and
you
can.
We
can
go
back
and
and
check
against
those.
B
At
some
point,
each
of
the
candidate
algorithms
are
identifying
aspects
that
should
be
evaluating
during
the
experimental
stage
part
and,
of
course,
in
some
sense.
We
also
need
to
look
back
at
that
and
see
how
that
turns
out.
We
will
need
some
deployment
experience
from
the
algorithms
and
we
also
have
the
the
anise
and
the
three
open
source
module
that
was
discussed
at
the
interim.
That
could
also
allow
experimenting
in
a
controlled
environment
with
visual
algorithms,
possibly.
D
So,
in
my
opinion,
I
think
the
last
one,
the
industry
or
mcat
or
open
source
module
is
basically
for
taking
things
from
a
draft
or
an
idea
to
to
experimental
I.
Think
most
of
the
most
of
the
candidate
algorithms
have
done
something
similar
either
like
in
a
benchmark
test
bed
or
inside
ns3
or
ns2
as
I
would
definitely
remove
the
last
one,
because
I
think
that's
what
we
used
for
the
first
round
of
evaluation.
I
think
there
are
a
lot
of
papers
been
published
for
some
of
these
paper.
D
These
things,
so
that's
also
another
so
I
think
both
scream
nada
and
GCC
have
had
some
track
record
on
that
and
I
guess
so.
The
first
draft
of
evaluation
results
is
that,
like
just
pointers
to
to
results,
because
people
have
been
presenting
their
results
and
some
of
them
have
been
doing
it
and
in
some
form
of
test
beds,
and
so
on
so
forth,
some
curious
what
the
working
group
thinks.
The
first
thing
is
and
like
what
would
be
the
what
would
trigger
to
the
second
bullet.
G
So
I
think
that
that's
the
discussion
we
want
to
have
I
mean
some
of
this
can
clearly
be
or
could
clearly
be
formalized
evaluations
using
a
set
of
criteria.
We've
come
up
with
some
of
it
could
just
be
reports
on
deployments
and
new
real
world
experimentation,
and
the
discussion
we're
trying
to
have
is
to
what
extent
we
need
you've
a
formalized
criteria
for
this
and
what
extent
we
say.
Okay,
come
back
in
a
couple
years,
when
you've
got
some
experience
with
some
documentation
of
that
experience,
and
then
we'll
have
a
discussion.
E
Me
a
cool
event,
so
at
this
point,
I
would
really
like
to
see
some
results
from
using
it
like
in
the
wild
on
the
internet,
not
only
in
a
test
bed
and
the
point
about
having
a
draft
about
this
and
publishing.
The
draft
is
also
to
give
then,
when
everything
is
finalized,
give
people
some
kind
of
additional
documentation
about
which
candidate
is
the
right
one
to
choose,
and
that
definitely
needs
also
to
relate
to
like
real
world
experience
and
not
only
test
fit.
B
And
I
think
also
one
thing
that
would
be
still
interesting
is
comparisons
between
the
algorithms,
because
I
think
a
lot
of
the
results
are
still
being
done
individually
for
each
algorithm,
and
even
if
you
know
the
test
scenarios
are
specified,
there
may
be
some
differences.
So
it's
not
always
easy
to
compare
those
results.
I
think
there
is
also
still
room
for
actually
comparing
the
three
and
they
maybe
also
with
the
common
feedback
format,
and
so
that
is
still
a
space
right
right.
F
Getting it in is the first step, and making sure it works. The second step, once you have it in, is: how do you then evaluate how well it's working in the real world? That is not necessarily a particularly easy thing to do beyond simple subjective comparison, perhaps.
E
I
think
I
had
a
different
color,
I
wanted
to
say,
I,
think
the
bar
for
getting
a
document
or
an
experimental
RFC,
getting
something
something
documented
internet,
experimental,
I,
see
shouldn't
be
too
high,
but
to
get
it
cheaper
Poston
that
we
can
actually
set
the
bar
rather
higher,
because
there
is
already
documentation.
People
can
try
and
work
with
it
and
kind
of
moving
into
proposed
on.
That
is
really
saying
kind.
We're
sure
that
this
is
something
that
works
well
and
we
can
recommend
you
to
use
it.
D
So
what
I'm
saying
and
this
time,
I'm
gonna,
probably
use
call
stats,
I
hope
and
not
only
as
an
individual,
but
as
a
company
that
measures
things
I
think
we
can
definitely
help
in
this.
So
if
Mozilla
or
Firefox
or
any
other
browser
would
develop,
builds
that
could
be
put
in
the
wild.
We
could
definitely
help
build
an
app
which
could
measure,
and
we
have
ways
to
do
that
and
like
of
course,
the
question
of
subjective
quality
is
an
interesting
one
and
I
think
that
we
can
look
at
as
we
move
forward.
D
But
this
is
something
that
we
could
really
help
with,
given
that
we
have
large-scale
instrumentations
of
apps
and
so
on
so
forth.
So
if
there
would
be
a
way
to
control
this
from
the
application
to
choose
a
congestion
control
or
whatever
mechanism
the
browser
vendors
can
come
up
with,
we
definitely
be
able
to
help
devaluation.
C
Yes, I would like to thank Mozilla and callstats.io for saying that they're willing to help. And for your information, for SCReAM, which I'm talking about now, we have open source code, so it would be very interesting to see if somebody integrates that one and tries it in the wild. And yes, I would like to second what Colin said: maybe the implementation need not use the common feedback format.
E
I don't think that's a discussion we could have now, in the discussion of setting a bar. For right now, we're probably good to see some first results and then have a discussion about what the right metrics to track are, and then you can actually say whether this is ready or not ready. I don't think you can make a decision on this right now.
D
So
I
had
another
question
which
was
so
I
think
a
few
years
ago
we
presented
FEC
based
congestion
control
at
the
time.
I
think
the
the
feedback
from
the
group
was
to
pursue
a
generic
path
which
we
never
actually
got
to
like.
There
was
not
enough
excitement
with
the
proponents,
and
the
proponents
have
moved
forward
so
I
think
just
as
a
pointer,
we
would
be
submitting
one
more
experimental
draft
soon
for
a
fact-based
congestion
control.
B
And
I
think
it
will
also
be
valuable
to
report
back
on
the
experiences
in
those
meetings.
So
when
people
I
mean
both
if
there
are
new
proposals
coming,
of
course
that
can
be
discussed
at
the
meetings
and
also
if
we
have
experience
reports
from
the
algorithms
that
are
deployed.
That
would
also
be
interesting
for
the
working
group
to
hear
about
I
think.
B
Okay,
so
then
we
would
encourage
people
to
bring
your
experiences
with
algorithms
and
if
we
can
get
real
implementations
and
real
experience,
that
will
be
very
helpful,
of
course,
but
I
think
otherwise.
We
close
this
discussion
for
now.
If
there
are
no
other
comments
and
if
so,
we
also
close
the
meeting.
So
thank
you
all
for
attending
and
for
your
input-
and
this
also
help
finish
up
and
review
the
drafts
that
we
still
have
in
the
pipeline.
So
we
still
also
have
some
of
the
old
work
here
to
finish
up.