From YouTube: IETF101-RMCAT-20180321-1330
Description
RMCAT meeting session at IETF101
2018/03/21 1330
https://datatracker.ietf.org/meeting/101/proceedings/
B: Okay, welcome everyone. This is the RMCAT session. I'm Anna Brunström, and this is my co-chair, Colin Perkins, and we have our third co-chair, Martin Stiemerling, online in Jabber. I think this is the Note Well; most of you should be familiar with it. If not, please read it carefully. The rules of course apply to this meeting, as well as all other IETF meetings.
B: Thank you. And you have all the meeting material online, as usual. You should, of course... and, you know, our mailing list is also available for all the discussions. So, we have a fairly short agenda today. We want to go through the status of the working group documents, then Colin will give an update on the feedback message for congestion control.
B: This draft is now with AVTCORE, but it's still good, I think, to get an update in RMCAT on what is happening there. Then we have Jörg, who will give an update on the eval criteria draft, and we will have Xiaoqing remotely, giving an update on the video traffic model. We were hoping to maybe also have some experimental results, but we don't have that ready yet, so I think we will have some of those, from multiple parties, at our next meeting.
B: So let's start with going through the status. For our algorithms: the SCReAM draft has now been published as RFC 8298, so thanks to the authors for pushing that through the whole process. We have the coupled congestion control draft, which is still in the RFC Editor's queue, waiting for another draft.
B: The shared bottleneck detection draft has been submitted to the IESG, so that is also being processed. And then we have another draft which has passed its second working group last call; it was updated at the last meeting, and then we had some updates from the authors. Martin is the shepherd for that document, and we hope to get that shipped, maybe by the end of the week. Then we will have all of those candidate drafts out to the IESG, or already published. The GCC draft is still with the authors, and its status is unclear.
B: The requirements draft was the first draft that was made, and that is still in the RFC Editor's queue, waiting for completion of other drafts. For the evaluation drafts, we have a number of those: the eval test draft is ready for working group last call, and we have the draft describing the wireless test cases, which is also ready for working group last call. They have been waiting, at least the first one, for the eval criteria draft, because we wanted to ship them as a bundle.
B: So hopefully, we have that on the agenda today; we'll talk about that, and we hope to get them all going for a working group last call very quickly. Then we have the video traffic model, which Xiaoqing will give an update on, but that is also ready for getting reviews and going to working group last call in the end. I think the feedback message draft was completed in RMCAT.
B: It has been handed over to AVTCORE, but we still have a draft, Colin's draft, that looks at what this means in terms of the overhead and so on. This draft will be updated once the draft in AVTCORE is finished; it will be updated to reflect the final version of the feedback message. And then we have some drafts related to the interfaces that we haven't really decided what to do about.
B: I think that we are probably going to get back to them once we have finished the evaluation drafts, unless there are any updates on that. Varun and Zahed had an item some time ago, to think about what we should do with them, but I think not much has happened there, if I'm right. So I think we will get back to that once we get the evaluation drafts out, and we also looked at the next steps for how to take the candidate drafts further.
B: We need to think about whether we can have some first draft also reporting on evaluation results from the algorithms, but we will update these drafts and the milestones in a bit. That was a short update on the status of the documents that we have in the working group. Are there any questions or comments on that part?
D: Okay, so, changes since the last meeting. There's been a bunch of technical changes here, primarily to specify the clock rates and the format of the timestamp fields, and to clarify what gets reported on in which packets, and what to do if no reports are received. I'm going to talk through all of these in a little bit more detail in the following slides. Plus, some clarifications around the feedback timing; the RMCAT congestion control feedback draft will be updated to match this.
D: So, of the technical changes that are there, the ones I'm thinking may impact the work of this group: the first is that we've now specified in more detail what happens with the timing. The previous version of this draft had a report timestamp field in it, and didn't specify the clock rate or the format for that report timestamp field.
D: The previous version of the draft specified that they were offsets counting milliseconds from the report timestamp. I've updated the draft to say that they're now counting 1/1024ths of a second, because that gives you an exact difference from the report timestamp and avoids odd rounding errors; because of the formats, things just work out a little bit neater, and that clock should be accessible, because it's using the same format as the report timestamp, simply cutting it down.
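To illustrate the point about exact differences, a minimal sketch of encoding an arrival-time offset in these units; the 13-bit field width and the clamping behavior are assumptions based on later revisions of the feedback format, not details confirmed in this session:

```python
def encode_ato(report_ts: float, arrival_ts: float) -> int:
    """Arrival-time offset before the report timestamp, in 1/1024 s units."""
    delta = report_ts - arrival_ts       # seconds; >= 0 for past arrivals
    ticks = int(delta * 1024)            # power-of-two scaling: no decimal rounding
    return max(0, min(ticks, (1 << 13) - 1))  # clamp to an assumed 13-bit field

def decode_ato(ticks: int) -> float:
    return ticks / 1024.0                # back to seconds, losslessly

print(encode_ato(100.0, 99.9))           # 102 ticks, i.e. ~99.6 ms before the report
```

Because 1024 is a power of two, these offsets line up exactly with an NTP-style fixed-point report timestamp (assumed 16.16 format), which appears to be the "neatness" being described.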
D: So the question for this group, I think, is: are these appropriate for the candidate congestion control algorithms? Is this an accurate enough report timestamp, and an accurate enough offset granularity? My belief is that it is, on the basis that the offsets are virtually identical to the ones before, they just fit slightly more neatly, and the report timestamp was ill-defined before, but the suggested ranges matched the timestamp format I've picked here. So that's the first question to the group, and if there are any comments or feedback on that, it would be appreciated.
D: Okay, the other real technical change here was to specify what happens if no packets are received. We've clarified that if no packets are received from a particular SSRC in a reporting interval, then we don't send a report block for that SSRC, but you should still send a regular sender report or receiver report, which indicates that there is some feedback being sent; we just leave this report out in that case. I believe that is sufficient. An alternative would be to send a report block with the beginning and ending sequence numbers equal to each other, and just report on the last packet received. Again, that's problematic if no packets are ever received, but, you know, I guess that's an edge case; we just have to specify that if nothing's ever been received, you don't send any of these reports at all. I don't personally think there is any need for that, but if people think it's useful to have an empty report block, then we could add that.
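A small sketch of the rule as described; the names and structures below are illustrative, not taken from the draft:

```python
def build_feedback(interval_arrivals: dict[int, list[int]]) -> dict:
    """interval_arrivals maps SSRC -> RTP sequence numbers seen this interval."""
    blocks = [
        {"ssrc": ssrc, "begin_seq": min(seqs), "end_seq": max(seqs)}
        for ssrc, seqs in interval_arrivals.items()
        if seqs  # no packets from this SSRC this interval: omit its block
    ]
    # The regular SR/RR always goes out, even with zero blocks, so the
    # sender can tell that feedback itself is still flowing.
    return {"rtcp_report": "RR", "cc_feedback_blocks": blocks}

# One SSRC active, one silent: only the active one gets a block.
print(build_feedback({0x1234: [100, 101, 103], 0x5678: []}))
```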
D: So, yeah, to the extent you care, you just send the reception report, with the last sequence number received unchanged since the previous last sequence number you received, so you can tell nothing's been received. I think that's the next one. All right.
The other thing we now specify is guidance on the sequence number ranges that you should include in the packets, and that just says that each report should report on a consecutive range of sequence numbers. If you do send overlapping reports, I've specified that the information in the later report updates the earlier report. Because packets could arrive late, I guess you could do that, but I expect that such reports will arrive too late to be useful to the congestion control anyway; although, if you wanted to do that, it wouldn't break anything if you did. And I put in some rules that sequence number ranges that are significantly different from the last range should be ignored, and for the wrap-around cases I have guidelines again.
F: I think this is probably a topic for the next one in here, but we could maybe simply add guidance on what a congestion control algorithm should do in the presence of feedback packet loss. Because assuming everything's great and there's no congestion when the first feedback packet is lost is probably a bad idea; but also assuming every single one of those packets was lost, and slamming yourself to the floor, is probably also a bad idea. So...
F: I guess if one RTCP feedback packet gets lost, and it covered, you know, 30 consecutive packets, then determining that as 30 packet losses per second and slamming yourself to the floor is probably a bad idea; but assuming everything's great and there was no delay ever is probably also a bad idea. So we want something in between.
D: We probably need some guidance that says: if you just lose one of these, then don't worry about it and wait for the next one, but if you keep losing them, then you should slow down. We're going to need to specify that somewhere. Yeah, you're right. And I don't know how the candidate congestion controllers handle that; maybe that's an issue for them.
D: I mean, I think, assuming we pick this as the way in which we're providing feedback, the congestion control algorithms will have to specify what they do if the feedback gets lost, and the response. I'm not sure they do currently, but certainly when they get to a draft at Proposed Standard, if that happens, then they should take that into account.
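The kind of rule sketched in this exchange might look like the following; the feedback interval, thresholds, and back-off factor here are invented for illustration, not taken from any draft:

```python
FEEDBACK_INTERVAL = 0.1    # s between expected feedback reports (assumed)
TOLERATED_MISSES = 1       # one lost report: just wait for the next
BACKOFF_FACTOR = 0.85      # multiplicative decrease per extra missed report
MIN_RATE = 50_000          # bits/s floor, so we never slam to the floor

def rate_after_feedback_gap(now: float, last_feedback: float,
                            rate: float) -> float:
    """Called periodically by the sender; returns the adjusted sending rate."""
    missed = int((now - last_feedback) / FEEDBACK_INTERVAL)
    if missed <= TOLERATED_MISSES:
        return rate                          # a single loss: don't react
    extra = missed - TOLERATED_MISSES
    return max(MIN_RATE, rate * BACKOFF_FACTOR ** extra)

print(rate_after_feedback_gap(1.55, 1.0, 1e6))  # 5 missed reports: backed off
```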
D: Okay, so that's all I have to say. Please have a look at the draft. If you're implementing one of the congestion control algorithms, or defining an RTP congestion control algorithm, please give us feedback on whether the choices we've made here make sense. I believe they do, but I'm not one of the implementers of a congestion control algorithm, so I would like some feedback from those people. The idea is that, with any luck, we can take this to working group last call after the next IETF, depending on what happens in AVTCORE.
I: All right, so my name is Jörg Ott. I'll have a brief discussion of draft-ietf-rmcat-eval-criteria, version 06. Next slide, please. This has been a while: the draft expired about a year ago, not quite a year ago, and from my digging through the mailing list, there hasn't been any traffic related to this since about the middle of 2016, so before the draft expired.
I: So, since this is evaluation criteria, one would usually expect that everybody would be looking at this, because they need to get the evaluation done. Apparently it was clear enough, so everybody went off and did some evaluation, and now we would like to close this, but come to a conclusion in a way that actually matches what people have been doing; that would be pretty useful. As a quick reminder, this draft provides a bunch of recommendations.
So, we had three open issues documented in this draft, of which I don't quite remember how they actually came about, but that's okay, it's been a while. The first one was related to a comment that came up at some point, about whether one should be using Jain's fairness index for comparison. Looking at this, our suggestion would be, since we haven't really been using it, to drop it.
C: From the mic: I think I like the suggestion, because we have been looking at fairness things, obviously, especially in the test case scenarios that we define, and the evaluations presented here are using those kinds of things. None of us used the Jain's fairness index; we just looked more into throughput fairness and all these things, so I don't think it makes sense to talk about it anyway.

I: Okay, thank you. Okay, we have a question on Meetecho.
J: Hi, this is Xiaoqing. Can you guys hear me? (Yes, yes.) Okay, great. Yeah, so my question is, and I think dropping the Jain's fairness index is fine: do we still have something in the eval criteria draft that addresses the fairness concern? Because I think, at least qualitatively, you know, maintaining fairness across streams is a good thing to strive for.
I: There's nothing in the draft, as of now, that explicitly discusses fairness for evaluation. There is this comment, of course; there's discussion on how it deals with background traffic, and there is text in there, in other places, that it shouldn't be K times worse than another type of traffic. And then there's, of course, the baseline that we have: if something goes horribly wrong, there's always the circuit breaker fallback. But there is no explicit definition of a fairness metric inside.
J: So my proposal would be to not chop the discussion on fairness altogether, because at least it is an important issue. We may admit we don't have an objective... we don't have a, you know, quantitative way to measure it, but, for instance, you know, requiring things to not exceed K times is a good criterion either way, even though it's not, you know, it's not a score. So we...
I: Maybe the one thing, right... so the one thing I want to comment on is: if we want to have some specific metrics or something, then we need to give more guidance than just mentioning something like, oh, by the way, fairness is also important, because that doesn't give any actionable guidance to somebody trying to evaluate something.
J: Reporting on the ratio of throughputs, basically. I think I'm hearing a little bit of equating, you know, having no specific metric with saying we don't care about fairness. Personally, I don't think that's true; it's really a question of degree, of how concretely we can measure it. Not having a single-score metric does not mean that we cannot evaluate it in some way, and at least, you know, reporting on the relative ratio of throughput per flow is concrete enough, right? That's an alternative that we can stay with.
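For reference, the two reporting options being weighed here, as a quick sketch; the throughput numbers below are invented:

```python
def jain_fairness_index(x: list[float]) -> float:
    """JFI = (sum x)^2 / (n * sum x^2); 1.0 means perfectly equal shares."""
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

def throughput_ratios(x: list[float]) -> list[float]:
    """Per-flow share of total throughput: the reporting style suggested here."""
    total = sum(x)
    return [v / total for v in x]

flows = [1.9e6, 2.1e6, 0.8e6]                 # example bits/s for three flows
print(round(jain_fairness_index(flows), 3))   # one number: ~0.887
print([round(r, 2) for r in throughput_ratios(flows)])  # [0.4, 0.44, 0.17]
```

The ratio view makes visible which flow is being shortchanged, while the index compresses everything into one score, which matches the argument being made above.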
C: When it comes to fairness, I think, in the recommendations... so I think dropping the fairness thing as a whole, as a topic, since nobody's using Jain's fairness index in this anyway, it doesn't have to be mentioned. I mean, the other option is, like, if you put in the Jain's fairness index, then we run every simulation again to actually produce a Jain's fairness index on our own test results, and I don't think...
K: Unfortunately, I didn't follow this work, but I was present during the chartering exercise, and some of the language about fairness in the original charter (I think it's changed since I was involved) was my language, or at least I liked it. I have reason to believe, in sort of a fundamental theoretical way, that two different RTP flows that are sharing a queue, and trying to avoid causing congestion, can't signal enough information to attain any sort of strong fairness. What's more important is protecting flows from starvation, which I don't know if you can define here. But although we all have this philosophical interest in some definition of fairness, if the convergence time is hours, it doesn't matter; it's pointless. And the information that the two senders get between each other is such that that's likely to be the case.
B: So I think we have the proposal from Xiaoqing, who offered to send some text that replaces the fairness index but has some discussion about the issue. We have the test cases that do have the relationship between the flows, right? So in some sense we are covering fairness in our test cases, but I think we agree that we don't need Jain's fairness index.
J: Can you hear me? Yeah, so I was saying: yes, I can send some draft text whose main role is to discuss it. Okay.
I: Another issue that we had was on the loss generation model, which is section 4.4, and there is currently just a short list of what one could do; nothing is really specified. The suggestion was that we just mention which options exist, and suggest that, for now, we go with independent losses, which are essentially random losses.
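The independent-loss model being proposed amounts to a Bernoulli drop per packet. A minimal sketch, with an illustrative loss ratio (the draft's suggested ranges, if any, are not quoted here):

```python
import random

def iid_loss_filter(packets, loss_ratio: float, seed: int = 1):
    """Drop each packet independently with probability loss_ratio."""
    rng = random.Random(seed)        # seeded, so evaluation runs are repeatable
    for pkt in packets:
        if rng.random() >= loss_ratio:
            yield pkt

survivors = list(iid_loss_filter(range(10_000), loss_ratio=0.05))
print(1 - len(survivors) / 10_000)   # empirical loss ratio, close to 0.05
```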
J: This is Xiaoqing, yeah. One... so I agree with the suggestion here; it makes sense. I wanted to ask: for random packet losses, i.i.d., do we have a suggested range? If I remember correctly, there is a range in the eval criteria. So we would say, basically, we do the evaluation using i.i.d. random packet losses, and there are suggested packet loss ratios that we would like people to try?
L: (Handley) When it comes to understanding congestion control models: one of the things we discovered with TFRC, many years back, was that it worked really nicely with random losses; where it started to have difficulties was where the losses were strongly correlated with your data rate. So, for example, where you were the only flow on a link and you filled up the link, it's easy to end up in oscillations in that case, and you won't notice that if you're looking at this kind of thing.
I: Yeah. So if there's a reference to things that one would have to look into, we might actually document these kinds of past observations in some form, and then say that you may want to use alternative models and not just test with this one; otherwise we would need to fight over, I don't know, 5, 10, or whatever different loss generation models. So maybe this is a sensible way ahead here. Yeah.
I: Okay, then, finally, we have the jitter models, which probably, again, could be done reasonably well by modeling a queue, and this is what some of the models actually do. There are three options described for this. The suggestion is kind of partly already implemented in the current draft, but the issue was still stuck out there when I was looking for issues.
J: Yeah, so I just want to report on what we have used for the evaluation of NADA. In our test platform, early on, we actually added a random jitter model that's a uniform distribution, plus-minus 30%... I mean, sorry, 10% of the measured delay, if I remember correctly. And for the NS-3 simulation platform, we have not yet figured out a mechanism to introduce, you know, sort of externally induced jitter, so all the jitter there is now basically induced by the queuing behavior.
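Our reading of the jitter model described here, as a sketch; whether the perturbation applies per packet, and exactly how the 10% is anchored, are assumptions:

```python
import random

rng = random.Random(7)

def jittered_delay(measured_delay: float, fraction: float = 0.10) -> float:
    """Perturb a packet's delay by Uniform(-fraction, +fraction) of itself."""
    return measured_delay * (1.0 + rng.uniform(-fraction, fraction))

delays_ms = [round(jittered_delay(0.050) * 1000, 2) for _ in range(5)]
print(delays_ms)                     # values spread across roughly 45-55 ms
```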
J: I just want to report on what we have been able to do, yeah, from the evaluation side and the implementation. And I agree that the presence of jitter does impair the algorithm's behavior; so, for instance, early on in our design of the algorithm, after introducing the random jitter, it did introduce a change in how we process the measured delay, to account for that. So having it in the evaluation sounds like a good idea.
C: Halfway through a sentence here: there was an inconsistency in a number between the eval criteria and the eval test draft, and we said we should mention the TCP test model in only one of them. And in fact, right here, I think it got updated with that, so I don't think we have an issue there.
B: Yeah, so the plan is: we get an update of the draft, Xiaoqing provides some text for fairness, and otherwise the authors update the rest of the outstanding issues, and then I think we can send the draft for working group last call, together with the other evaluation drafts. So we will have a little bit of an extended working group last call, so you can review all of them together. And as it has now been some time since these drafts were updated, it would be great if we can get some reviews during that process.
J: So, in terms of the status of the draft, we did send a refreshed version in January this year, mainly with minor edits. Most of the changes in the draft, in terms of content, have already been aligned with our open-source implementation of the few video traffic models. And just maybe to remind people what this draft is: it talks about some general design, you know, some desired behavior, for a synthetic video traffic source, as a way to feed simulation-based evaluations before people, you know, get out and drive around and collect data, and also for repeatability, for the purpose of evaluation. In our draft we mainly distinguish two phases of the codec traffic output behavior: one is during the transient, in response to abrupt changes in the target rate submitted by the congestion control module.
J: The other is the steady-state behavior, where typically the target rate does not change much, and the output from the video traffic source just fluctuates around the constant target. The other thing the draft does is cover three categories of synthetic traffic models: a statistical one, a trace-driven version, and also a hybrid one, which combines the trace-driven steady state with the statistical behavior during the transient. In the draft we also talk about why we do the combination and why we think the hybrid version makes sense.
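A rough sketch of how such a hybrid source could switch between the two phases; the transient length, the lognormal fluctuation, and the trace rescaling are our assumptions, not the draft's specifics:

```python
import random

rng = random.Random(42)
TRANSIENT_FRAMES = 30     # assumed length of the transient window, in frames
TRACE_MEAN = 3000.0       # mean frame size of the recorded trace, in bytes

def hybrid_frame_size(trace_frame: float, target_bps: float, fps: float,
                      frames_since_rate_change: int) -> float:
    """Bytes for the next video frame under a hybrid traffic model."""
    mean = target_bps / (fps * 8.0)               # target mean frame size
    if frames_since_rate_change < TRANSIENT_FRAMES:
        # Transient phase: statistical model (lognormal fluctuation assumed).
        return mean * rng.lognormvariate(0.0, 0.3)
    # Steady state: rescale a recorded trace frame to the current target.
    return trace_frame * (mean / TRACE_MEAN)

# 1 Mbps target at 30 fps, 10 frames after an abrupt rate change:
print(round(hybrid_frame_size(2800.0, 1e6, 30.0, 10)))
```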
So, as I mentioned before, the content of the draft is now synced up with our open-source code implementation, and in terms of the types of traffic models it is at par. We believe, from the authors' point of view, there are no outstanding issues, and we would like to obtain review input from the working group. Next slide, please.
J: A little bit of a status update, sort of, again, a status recap for the open-source code. We call it syncodecs, standing for synthetic codecs; it is open source on GitHub, with the link below. In this implementation we have, basically, implemented a collection of synthetic codecs, ranging all the way from what we call the perfect codec, which is really just, you know, mimicking ideal CBR behavior at fixed frame sizes, with no randomization at all in the synthetic output from the codec.
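The "perfect" end of that range is easy to picture; a sketch of an ideal CBR source follows (the actual syncodecs implementation is C++, and its API is not reproduced here):

```python
def perfect_codec(target_bps: float, fps: float):
    """Yield (timestamp_s, frame_bytes) for an ideal CBR source:
    every frame identical, so the output rate tracks the target exactly."""
    frame_bytes = target_bps / (fps * 8.0)
    t = 0.0
    while True:
        yield t, frame_bytes
        t += 1.0 / fps

gen = perfect_codec(500_000, 25.0)       # 500 kbps at 25 fps
for _ in range(3):
    print(next(gen))                     # (0.0, 2500.0), (0.04, 2500.0), ...
```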
J: We have also added a simple content-sharing codec, which tries to mimic slide-sharing behavior as a traffic source, as a follow-up on some of the discussions on the mailing list. And the latest addition was the hybrid codec, which combines the statistical model for the transient phase with the trace-based model for the steady-state phase. One thing I forgot to mention is that all the parameters in the statistical model have also been updated, based on our previous presentation, using the data collected from the HD video traces with browser-based encoding. So, again, from the authors' point of view, we feel that we have checked off all our, you know, to-do items, in terms of feeding in the right content as well as the recommended parameters for the draft, and we are providing this open-source implementation also for people to double-check and try out. Next slide.
J: And also, maybe as an example: for the evaluation of NADA, we have been using the syncodecs module as part of our NS-3-based evaluation. So this table just shows... I mean, right now we don't have any implementation of the LTE test cases in NS-3, but apart from that, this table shows what codecs we have tried for each group of test cases.
J: The first collection, wired, corresponds to the eval test draft, and the second, Wi-Fi, corresponds to the Wi-Fi section in the wireless test draft. And we're providing the link to the collection of results we shared last November, for the evaluation of this algorithm in NS-3. Next one.
J: Yeah, so, basically, as a recap, obviously, of our next steps in terms of the evaluation effort: we would like to add additional evaluations for the Wi-Fi test cases, using the few, you know, extra codecs; the code is already there, so it's really a matter of combining them and reporting on the results. And on the LTE test case implementation, another colleague of mine is already working on that item, so we hope to also add evaluations in NS-3 using those codecs. From the draft point of view, though, we think the draft is, you know... I mean, there are no more pending edits from the authors at this point, and we would like to obtain review input from the working group. And given that we're now in the phase of trying to ship all the evaluation drafts for consideration for working group last call, I think this...
C: I'm a co-author of this draft. I have been reading it every time Xiaoqing does an update, so it's in pretty good shape, actually, and it's quite a good reflection of what is in the code, so that's a good thing. Just to clarify: this additional evaluation of the Wi-Fi test cases, that has nothing to do with the draft, right? That's a different thing Xiaoqing was mentioning. So, yeah, I think it's ready for working group last call.
F: (John Maddox) I was just thinking, given the discussion at the chairs' lunch: I was wondering if this would be an interesting topic to do a hackathon on at the next IETF, to have people working on RMCAT-related stuff, if they were interested. That's something that might be worth asking people about, or whatever, because...
B: It's a good suggestion. We can think about that.