From YouTube: IETF100-AVTCORE-20171116-0930
Description
AVTCORE meeting session at IETF100
2017/11/16 0930
https://datatracker.ietf.org/meeting/100/proceedings/
D: Just want to take the opportunity, since PAYLOAD is not meeting: there's the draft payload format for flexible FEC, and we're hoping to start a working group last call on it, since there's a dependency on it from rtcweb, and I'd like to take the opportunity to ask that people pay attention. I will also try to send a note to the rtcweb and payload lists that they should review it. Maybe I should also send it to AVTCORE, at least for people who are not following payload.
E: We still have a number of things, as you see, in the RFC Editor queue, which are part of cluster 238, the WebRTC bundle cluster, and possibly other stuff. The bundle draft needs to be published first, and then we have one document which is waiting on ourselves — the frame marking draft — so at least we know what we have hanging over us. Other than that, multipath RTP: is Varun here? He was mentioning it last night.
G: I think, as Ben said, maybe you can take it up later if it turns out to be more important, because, as I said, there has been some work proposed for RTP — over QUIC and stuff like that — and that certainly changes the demography when it comes to multipath and things like that. So I actually don't feel any interest in it right now.
D: This is also one of the documents that was sitting there for a while without its milestone advancing, so we took it on to finish this document. As you see, the 03 version was submitted back in 2014, and Marvin and myself did editorial changes on the document — mostly editorial changes to the structure — based on discussion we had with Magnus about what should be done, because Magnus and Colin are the previous authors of the document.
D: Of course, they are still authors — sorry about that. Okay, so they're still the authors of the document. Sorry. The 04 and 05 are bigger editorial changes in the document structure: it's updated to reflect the current state of the multiplexing documents and the options, which changed between 2014 and today. We also tried to do everything to keep it up to date with the current documents, like the several things that were added in the meantime.
D: It's not ready for publication. The 04 and 05 — I mean, the reason we had both the 04 and the 05 is that in the 04 there were some typos, and it was just a fix of typos. So if you want to compare to a previous version, you have to compare it to 03, and I think Magnus put the diff on the mailing list.
D: It's not ready for publication. There are some open issues, based on the review from all the authors, in section 7, and the next step is to update the document to reflect the open issues. It was reviewed only by Colin and Magnus; we need more reviewers, probably for the next version.
I: So this was something we presented at the last IETF. It came out of the RMCAT working group. We had a design team devising a couple of RTCP feedback formats to carry the information we need to do congestion control — to be able to drive the congestion control algorithms that RMCAT is developing. We presented the design team version of that draft at the last meeting.
I: Okay, so the primary change we've done this time: the previous versions of the draft had two different feedback formats. They had an XR block for regular reports, and they had a transport-layer feedback packet for early feedback reports, which could be sent in non-compound packets.
I: The feedback we got from this group at the last meeting was that this was unnecessary and we should just use a transport-layer feedback packet for everything, and just put it into a compound packet if we were using it for our regular reports. So the change we made was essentially to take out the XR block format, leaving just the transport-layer feedback packet format in the draft. The transport-layer feedback packet format is exactly the same as the one we discussed before; we've just taken out the alternative.
I: We put the report timestamp, which all the offsets are relative to, at the end, because the transport-layer feedback format says that everything has to start with an SSRC. So we just conformed to the usual style of those: you stack a bunch of those per-SSRC blocks, and then you have the report timestamp at the end. So the change for the draft is just to keep this format, take the other format out, and fix the IANA considerations.
I: So, next steps. I think the primary open issue we have is just quite how the timestamps work. I think the draft is currently specified to use milliseconds for the arrival time offsets, and it's a little vague about the report timestamp clock rate in the format. We need to align those to the extent possible and pick sensible rates and sensible formats.
I: What I think would make sense for the report timestamp would probably be to use the middle 32 bits out of the NTP timestamp, just like we do in the LSR field in the sender reports, so we're using the same clock we already have for RTCP. That doesn't quite line up nicely with the milliseconds which we've got in the arrival time offsets.
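The "middle 32 bits of the NTP timestamp" that Colin describes is the same compact form RFC 3550 uses for the LSR field of receiver reports. A minimal sketch of how that compact form is derived (helper names are invented for illustration):

```python
NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP (1900) and Unix (1970) epochs

def ntp_timestamp(unix_time: float) -> int:
    """Full 64-bit NTP timestamp: 32 bits of seconds, 32 bits of fraction."""
    seconds = int(unix_time) + NTP_EPOCH_OFFSET
    fraction = int((unix_time % 1) * (1 << 32))
    return (seconds << 32) | fraction

def compact_ntp(ntp64: int) -> int:
    """Middle 32 bits: 16.16 fixed-point seconds, as in the RTCP SR/LSR fields."""
    return (ntp64 >> 16) & 0xFFFFFFFF
```

The compact form has a resolution of 1/65536 s (about 15 microseconds) and wraps every 65536 seconds (roughly 18.2 hours) — which is why it doesn't line up exactly with millisecond arrival-time offsets.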
I: We then need to review the candidate congestion control algorithms in RMCAT, make sure this fits with what they need — gives the right information, has acceptable overheads and so on — define some SDP signaling for it, and go through and do some editorial cleanups. Other than that, I think we're in reasonably good shape; it's nice to be able to say the main issue is figuring out the timestamps.
K: Yeah, I mean, considering the goal of having this be kind of agnostic to whatever sender-side congestion control algorithm is needed, I think we need to ensure that both the kind of timestamp scaling and the kind of reporting are such that the receiver behavior is well defined. And it needs to be, if it's going to be in the RMCAT documents — then it probably needs to be a normative reference, and so, I mean —
I: I'm not sure it's normative in quite that way. I mean, I think all it will specify is: you configure an RTCP bandwidth fraction, and you use the existing parameters as appropriate for your congestion control, and it will give some guidance for how you do that. I think what will actually happen is that the candidate congestion control algorithms will say: use this feedback format and configure your RTCP bandwidth appropriately — you know, according to the media rate you have — and there are some guidelines for how you do that.
K: I mean, I think part of my point was maybe a bit more about the cases where you have — okay, when are you actually sending this? Are you including it even if you've seen zero new packets during the reporting interval? Things like that. So I think there are a few questions around this, say, when you have sparse traffic or maybe an overload mode. That's —
M: Yeah, so I think I kind of agree with Magnus on this point. Think of an endpoint that's just a dumb endpoint, that knows nothing about congestion control, but that wants to participate well with anything that is doing sender-side congestion control. I think you want it to already have enough smarts — without looking at any RMCAT work — to be able to send the right feedback at the right times, without having a congestion control implementation inside of yourself as a receiver. Yeah.
I: Maybe — I don't know. I mean, I think they're separable: I think the format is a separable issue from how you configure it. And I'm not sure the format draft wants to say you have to configure it this way, although we could probably say these are appropriate ways you could configure the bandwidth fraction for it. So —
K: I think I'm fine with not tying them too closely. Would this document need to specify when to send a feedback message based on packet arrival — whether it has received a packet or not, etc. — or in cases when the packets start to go towards overflow? I really don't think this document should do that. But, I mean: if you receive zero packets, are you sending a feedback packet?
K: But I mean, the feedback here is about received packets. If you receive no packets, are you sending a feedback message to indicate that you didn't see a packet? You need to at least have that discussion. And I think that goes to another part which I was thinking a little bit about: what happens when you have losses of these feedback messages, and whether you have any overlap or anything like that. That's a separate question, but these kinds of issues become evident.
G: I think I kind of agree that we need to write all this down. I mean, the interactions draft is perhaps a better place to describe these details, but maybe we need to write something like: okay, go and look at that if you would like implementation guidelines. I was not really convinced that this document needs to have implementation guidelines — this is just a format, to me. But anyway, maybe it —
I: It doesn't need congestion control implementation guidance. I mean, I think I agree that it needs something that says what to do if you get a hundred percent packet loss or something, or when the sender stops sending you packets — that sort of thing.
M: I think maybe a practical example would be helpful to show the issue. Suppose we say it's only configured through, you know, the RTCP fractions — but those are maximums, which, you know, allow you to do whatever you want. If the draft expects that we configure it only through that, that still leaves a lot of leeway to say —
M: Should you always fully saturate your RTCP bandwidth, or should you choose to be very conservative and only use, you know, what you think is reasonable as a receiver? I think that leeway sort of depends on what the sender's congestion control algorithm expects. So I think that's the mismatch: just using the RTCP bandwidth fractions to configure this misses that. Okay, if somebody says, you know, fifteen percent RTCP bandwidth — does that mean you should always saturate your RTCP bandwidth to the fifteen percent, and send —
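To make the leeway concrete: with the usual RTCP sizing rules, the bandwidth fraction only bounds how often a receiver could report, not how often it should. A rough sketch (the numbers and the function name are illustrative, not from the draft):

```python
def feedback_interval(session_bw_bps: float,
                      rtcp_fraction: float = 0.05,
                      avg_rtcp_packet_bits: float = 800.0) -> float:
    """Seconds between reports if a single receiver fully uses its RTCP share.

    session_bw_bps:        media session bandwidth in bits per second
    rtcp_fraction:         share of session bandwidth allowed for RTCP
                           (RFC 3550 suggests 5% by default)
    avg_rtcp_packet_bits:  average size of one feedback packet, in bits
    """
    rtcp_bw = session_bw_bps * rtcp_fraction
    return avg_rtcp_packet_bits / rtcp_bw
```

At 1 Mbit/s media with the default 5% share and 100-byte feedback packets, a lone receiver could report roughly every 16 ms — or far less often. The fraction alone does not settle which a sender's algorithm should expect.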
M: But I think the practical thing is that implementers should have pretty clear guidance on what to do. If I was going to implement this as a receiver, I'd have to have extreme knowledge of all the algorithm specs, and then still have enough wiggle room to maybe throw off some senders' algorithms while, you know, working well with some other senders' algorithms. So I think a little bit of practical guidance for an implementer of a receiver stack —
I: I don't think that makes sense. I really think what this is doing is saying: configure the RTCP bandwidth fraction, which will tell you how often you would normally send your feedback, and when you send feedback, report on all the packets you got during that interval. Yeah — I think it's that straightforward.
G: Go ahead — I kind of agree with Colin here. I mean, you don't want to say — because, as you said, and as Mo mentioned, the sender algorithms could be different, with different demands, so I don't think this document has to deal with that. What it can definitely do, on the saturation kind of thing, is point out: well, you have other feedback to also send — you have FIR and packet-loss reports and all these types of feedback.
G: So when you're saturating your RTCP bandwidth, you need to think about those things, like: shall I be saturating all the RTCP bandwidth I have allocated on this one, only for congestion control feedback? And then, to send a FIR, do I need to wait, or what? So I think we can say something in the document like: think before you saturate the RTCP allocation with this feedback.
G: If you are using some other feedback message, you need to also account for that one — just a hint so people don't make mistakes. So this part I agree with. But I don't agree that you really need to specify "within this interval you need to send feedback", because that's congestion-control-candidate specific, and it should not go here.
K: And think through these similar, very basic things that it has to deal with. And maybe think through what will happen if you actually lose one of these feedback messages, etc. — how does it look for the sender that receives two reports when the one in between gets lost? Do you see packet loss? Yes —
A: I feel like — yes, so it may note the RMCAT drafts and, you know, the various algorithms, as what somebody implementing a media sender's congestion control would need. But this draft should be everything a media receiver and feedback sender needs — plus the RTCP specs and so on, obviously. I mean, if somebody is just, you know, providing feedback and not implementing congestion control at all, what would they need to read?
A: You know — so, are you expected to send feedback at every early feedback opportunity? Because early feedback is optional: it's up to you, if you feel this is an appropriate time, whether you send it. So is this now a case where you're expected to always send it? I think that matters, especially because, I'm fairly certain, if I remember the details, the rules say that for losses you send it immediately.
D: I was not following the RMCAT work much, but I'm wondering: if the receiver doesn't support these messages but it sends ECN — for example, the ECN marking — does RMCAT address this case, where, when we're negotiating, you don't support this one but you support the other one? So you can still get the information — I mean, you can get the information from other RTCP feedback, if it is supported.
D: As I say, I think that depends on the implementation and on the RMCAT candidates. Some of them also support other types of feedback; some might not. Obviously, if the congestion control doesn't do ECN, then obviously it's not going to work with that feedback, right? So we can give general guidance.
G: So the idea here is: this CC feedback is what we agreed in RMCAT is the necessary information to do the congestion control. One of the signals is ECN, but it's not the only one. Some of the algorithms, like SCReAM or NADA, can react on ECN — but how efficient it would be to react only on ECN, that has not been discussed.
G: If you read the draft — in the analysis part we did — these are the pieces of information: packet arrival time, offset, delay, things like that, and ECN. So ECN is part of it. Yes, we discussed this in RMCAT, and some of the algorithms actually have it implemented: SCReAM has an implementation of ECN reaction, and I think NADA also has that. But whether ECN signaling alone will do the trick — that is not part of the evaluation, at least in our evaluation.
G: I think it is likely that if you get additional signaling, you want to use it — fine. But, yeah, I mean, in RMCAT we have not evaluated how efficient we are if we take only ECN as a signal. We haven't evaluated that; we don't have results on it in RMCAT. Just for your information.
N: Let me be a little more brutal: RMCAT is about delay-based congestion control. If you don't have information about delay, you cannot do delay-based congestion control. You might have fallbacks, but in order to have delay-based congestion control you need information about delay — you need this thing. Anything else is fallbacks. Yes.
G: Yeah — again, I mean, we have a section called design rationale where we actually talk about how people already have some of the XR blocks and some of the signaling that they can do. But, yeah, we have not talked about whether those are good enough. I mean, the rationale was that there is no way to signal arrival time — there is no such signal right now. Yeah, but —
A: This is something where I would like to see some implementation experience before we finalize it, because there are a lot of these details, and also, you know, how does it behave in practice, in real networks? I know some of the RMCAT candidate algorithms are looking at such real systems, hopefully.
M: This might not take the full 20 minutes. So — Mo Zanaty, on behalf of Espen and Suhas as well — on the RTP header extension for frame marking. Next slide, please. So, version 06. A quick review of what this work is for: the main motivation is to be able to support RTP switching topologies, where we have a middlebox that does not transcode the media — it just switches the media — and often the payloads of that media will be encrypted.
M: That's in order to be able to do intelligent switching. A more forward-looking aspect of this is that you could potentially support arbitrary payload formats — even new ones, even undefined ones: if the middlebox has support for this header extension, it could understand these other formats, understand how to switch them intelligently, without necessarily having to implement anything specific to that payload format.
M: So, what could be renderable and what could not. And then, in case of congestion, the middlebox can make better decisions about which packets are better to drop towards downstream receivers that have congested links, and it could drop entire layers if the codecs are scalable. Endpoints can also take advantage of this, the same way as middleboxes: when there's packet loss, they can understand better what the implications of that packet loss are for frame rendering. Next slide. So this is the header extension; there are two variants of it.
M: You see the length field: it can be either L=2, which actually means 3 bytes of header extension payload, or L=0, which is only one byte of header extension. The one-byte version is for non-scalable streams. It only has some flags: start-of-frame and end-of-frame flags, an independent-frame flag, and a discardable-frame flag. The longer extension then adds layer IDs — temporal layer IDs, and spatial and quality layer IDs — and picture indices: temporal layer 0 picture indices. So for scalable codecs you can do more.
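For the short (L=0) form, the flags Mo lists fit in a single byte. A sketch of parsing it — the exact bit positions (S, E, I, D in the four high bits, low bits reserved) are my reading of the frame marking draft, so treat them as an assumption:

```python
def parse_frame_marking_short(byte: int) -> dict:
    """Parse the one-byte (L=0) frame marking extension payload.

    Assumed layout from the draft: the four high bits are the flags
    S (start of frame), E (end of frame), I (independent frame) and
    D (discardable frame); the remaining low bits are reserved.
    """
    return {
        "start":       bool(byte & 0x80),  # S: first packet of the frame
        "end":         bool(byte & 0x40),  # E: last packet of the frame
        "independent": bool(byte & 0x20),  # I: temporally independent frame
        "discardable": bool(byte & 0x10),  # D: no other frame depends on it
    }

# Example: a single-packet independent frame has S, E and I set.
flags = parse_frame_marking_short(0xE0)
```

A switching middlebox would read these flags per packet without touching the (possibly encrypted) payload, which is the whole point of putting them in a header extension.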
M: Any scalable codec would have the same kinds of implications — not necessarily the exact bits that were discussed from the VP9 payload draft, but very similar things also exist for H.264 and H.265. Basically, there was confusion about how to map some of the VP9 bits — the P and the U bits — to the frame marking definitions of the I and the B bits. So we made some changes to clarify how this mapping can be done, in section 3.1 and in the definitions of the extensions for the scalable and non-scalable streams.
M: Okay — all right. So the I bit was basically changed in definition to only signal temporal independence, and so in that case it's basically the inverse of the VP9 P bit. The VP9 P bit is, I think, "predicted frame", which means the opposite of an independent frame: the P bit means this frame depends on — is predicted from — prior frames. In the frame marking header extension, the I bit means this is an independent frame, like an intra frame.
M: There was discussion about when you could do up-switching to higher temporal layers, and I think the rough consensus was that the scalability structures recommended for this draft would be restricted to simpler structures that only support temporally nested hierarchies. So we changed that: it's recommended for temporally nested hierarchies, and it's not recommended for other scalability structures. So basically, in that case, the U bit is always set.
M: The VP9 U bit would always be one for all the frames, because you could always up-switch to a higher temporal layer at any time. Next slide, please. A little more detail about the I bit here — I think Bernard was asking what the I bit means. So we changed it to mean only temporal independence. For non-scalable streams it's very straightforward: it's basically still I-frame or P-frame kinds of semantics; we just modified it to say it's only about temporally prior frames. Next slide.
M: This is the more important case, for scalable streams. The I bit is again still just temporal independence. So you can set it to one even if there are spatial or quality enhancement layers that this frame does depend on. Previously you couldn't set this bit unless the frame was truly independent — had no other dependencies at all — but now you can set it even if it has dependencies on lower spatial or quality layers. So that's the main change, which aligns it with the VP9 P bit.
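In its simplest reading, the alignment with the VP9 P bit reduces to an inversion — a sketch of my reading of the mapping described above, not normative text:

```python
def frame_marking_i_bit(vp9_p_bit: int) -> int:
    """Frame marking I bit as the inverse of the VP9 P (inter-predicted) bit.

    Per the change described above, I signals *temporal* independence only,
    so dependencies on lower spatial or quality layers do not clear it.
    """
    return 0 if vp9_p_bit else 1
```

With the draft restricted to temporally nested structures, the VP9 U bit is separately inferred to always be one, so no extra bit is needed for it.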
M: Okay, next slide. For scalability structures — again, this is related to the VP9 U bit, which allows up-switching to a higher temporal layer — we clarified that by basically aligning it with the LRR draft in its nomenclature: we use the same definition of temporal nesting that LRR uses. So here it's only recommended for the case of temporally nested scalability structures, and it's not recommended for any other cases of scalability. That means the VP9 U bit is basically always inferred to be one. So now, Bernard —
B: If I'm trying to remember the case you had in your draft, Jonathan: it was a case where we needed to up-switch in scalability, and basically there had been a temporal dependence and then finally there isn't one, and you needed a way to signal that this is the place where you could up-switch — particularly, I think, if you have more than two spatial layers. And I think what we're saying here is that at the up-switching point the I bit would become one.
M: And I think Sergio also replied that he was wary of requiring temporal nesting, because he didn't want to have to change the VP9 implementation. But I think all of the implementations that I've seen only implement temporal nesting. I have not seen a VP9 encoder yet — certainly not libvpx — that will do a non-temporally-nested scalability structure.
M: Next slide. So this is not an update in the draft — this is an explicit non-update. There was a discussion about whether the discardable bit should be expanded to have different levels of discard priority. That was not put into the draft, so there are no changes for it. I mean, Roni has a more granular definition of priority markings for non-scalable streams in his draft. That's it — next slide. So I think we're ready for working group last call; we don't know of anything that's open.
D: So, for example, if I have a contiguous number of frames that are droppable, I can mark the ones that are better to be dropped first. So if I don't need to drop all of them, what is the preference — which ones to drop? It provides more flexibility in adapting to different bit rates accordingly.
D: Basically, you can try to do some heuristics without that, but if you have to reorder the packets, this provides better granularity for it. So that's the first usage. The other one is to allow differentiation between reference and non-reference B frames: I first drop the non-reference frames, and then I can drop the reference ones. Again, it's more granularity in what I'm dropping.
D: The last one is that key frames can also be dropped, because the closer they are to the end of the GOP, the less important they are than the one at the beginning — because it's assumed you'll soon have a full frame that you can resynchronize on. So in the worst case — I mean, if you want to go this way — you can have this as the priority that's the last to drop. Okay. So these are the cases that we analyzed and tried to bring forward. Next slide.
D: Okay, so we talked about non-scalable ones, and there is a reason for that. I mean, mostly, the use case we had was when you're coming to Wi-Fi, where the situation is that you have to drop — because, you know, at home, when you're trying to watch a lot of videos at the same time, you have to drop. And —
D: Okay — at least for the P and B frames it's not really scalability. I think the only case you could call scalable is the reference versus non-reference B frames. Also, if you have contiguous B frames and you want to mark which ones can be dropped — then again, it's not really scalability; it can be any other decision that you make.
M: I think when we discussed this before: frame marking's goal was to make sure that something is decodable — so if you're a middlebox, you know that you're forwarding a decodable stream without error — whereas this is more useful even for something which may not be decodable artifact-free, where you're trying to minimize the artifacts.
M: You could use this to minimize the amount of artifacts, but you don't guarantee that there are no artifacts: you're allowing some level of artifacts, but trying to minimize them by having more granular priorities or something. Whereas the other frame marking draft is talking about artifact-free video: what can you do to guarantee that you deliver a fully decodable, fully renderable, artifact-free experience to all the receivers. And —
D: When I read your presentation now, it's a bit different from what I understood you want. The presentation — and you're right — was discussing mostly the case of which one to pass, not what to discard. Okay, but my understanding of the document, and from the document, is that it's not only that: the document also says it's also about what you can discard, and discarding doesn't necessarily mean that it would be without any effects.
M: Just to clarify: the other frame marking draft does say — the definition of the discardable bit is that it is discardable and still allows a fully decodable media experience. That's the definition of the discard bit: you can only set it if something is truly discardable and does not impact decoding of other frames. So you could not set it for something that would degrade — you know, cause artifacts in — the stream.
D: No, no — zero. Zero is the one to drop first, the highest drop priority. Okay, so that's why I said it's the same as before; I didn't change it this way — that's what I meant by that. So, as I've said, there's no problem with supporting it or not, because if you don't support it, you simply ignore these bits. And if you support it and they're not being set, then you see only zero-zeros, and that's what —
P: So, first, I don't think you need a parameter. But independently of that, there's an old idea — this is stuff from when we did the H.264 NAL reference IDC thing: those two bits in the NAL unit header for H.264, which are also mirrored in the payload format. In the initial proposals those were meant to be encoder-selectable, and not specifically tied to the NAL unit type; that was done later in the committee.
D: And I think it's not that relevant, because, from the stream itself — I mean, it's just that if I want to be able to decide, from the stream, which one to drop and which not to drop, then I act accordingly. Okay, I don't care how it's related to other streams; if I see these in this stream, these are —
P: No — so the middlebox has decided, at this point: I need to drop something. Then it looks through its queue — which is hopefully mercifully short, otherwise we have too much delay, right — and if it finds something that's droppable, it picks one of those with the lowest of these indices. That's —
M: I was wary of doing this in the context of frame marking, because I thought it would be difficult to actually specify what senders should populate this with, and so I didn't want to hold up frame marking for a potentially open-ended discussion about how to actually signal these bits properly. In the absence of specifying how to signal them, I feel it's a little impotent: if the document doesn't say anything about what you actually do with these bits, then it's a little —
M: Right — because, for example, the frame marking bits are very deterministic, so it's clear how a sender must send them; there's no ambiguity, so we don't need a lot of rules and discussion about how they should be marked. This is something where it's more subjective: there are, you know, different ways that senders may choose to mark them.
D: Which to drop — I mean, which need to be dropped — but it's for this specific stream. That's all; it doesn't affect any other streams, it's just for this stream — I mean, however you decide which ones to mark and which ones are discardable. But that's what you're saying: you're telling the middlebox that these are, for you, less important and these ones are more important, so it could first drop the ones that are less important. That's all.
I: Good, all right. So QUIC, if you haven't been paying attention, is a new transport protocol. It layers above UDP and is initially intended to replace TCP in the HTTP protocol stack. So there are browsers and servers out there which are running HTTP over QUIC over UDP. QUIC may or may not be a good thing — lots of people have opinions on that — but it is certainly something which is happening.
I: There are two things which I think it would be useful to do with QUIC. It would be useful to run QUIC in a peer-to-peer manner in the long term, so it's not just for interacting with web servers and we can use it for other things that need peer-to-peer transport. And I think it would also be useful to be able to have QUIC and WebRTC traffic coexist on the same UDP ports.
I: So, thinking those both through a little more. The first goal: initially, QUIC is being specified as a client-server protocol, for HTTP over QUIC. If it's going to be general-purpose, it's going to end up being used in a peer-to-peer way. And if it's being used in a peer-to-peer way, it will need to work through NATs, which means we need a NAT traversal scheme.
I: In order to do that, we must be able to demultiplex STUN packets and QUIC packets running on the same port — or we must reinvent STUN as part of QUIC, and STUN has been painful enough as it is; redoing all that work in the context of QUIC is not my idea of fun. Maybe other people like the job security, but I think we need to find a way of multiplexing STUN and QUIC packets.
I
So, a WebRTC endpoint co-located with the server: yes, in principle we could run that media over QUIC, RTP inside QUIC and so on. In principle we could do the key exchange using QUIC and then just use that to key the SRTP, because there's a DTLS handshake running inside QUIC. But in the short term that's a lot of specification work, so if we want that use case we'd want to be able to demultiplex the DTLS and the SRTP packets from the QUIC packets on the same port.
I
In addition, there are people talking about using QUIC as a replacement for the WebRTC data channel in peer-to-peer use cases, which would use the STUN stuff, and we'd also need to do all the rest of the multiplexing and demultiplexing with the media and so on. In both of these cases it would be useful to be able to demultiplex QUIC packets and the packets being sent by WebRTC: STUN and TURN and DTLS and SRTP, and so on.
I
Of those two, to my mind the essential one is that we be able to demultiplex STUN and QUIC, so we can run QUIC in the peer-to-peer way. While we're doing that, it would seem useful, if we can, to also support muxing all the other protocols used for WebRTC.
I
So how does WebRTC currently do the demux? Well, currently, if you have a WebRTC endpoint, when a packet arrives you look at the first byte of the packet. If the first byte is in the range 0 to 3, you forward it to your STUN implementation. If it's 16 to 19 and you're running ZRTP, you send it to your ZRTP implementation. If it's 20 to 63, it goes to your DTLS; 64 to 79, it's TURN; 128 to 191, it's RTP or RTCP.
I
You forward it to the appropriate bit of your stack based on the first byte of the packet. And yes, this is a kludge. It's a horrible kludge. It happens to work by luck. We've kind of semi-formalized it in RFC 7983, but clearly it's a kludge. There's not really, if I remember right, a good extensibility strategy for this in 7983; I don't believe there's an IANA registration policy for how we add new things in here. It's a mess which happens to work because we got lucky.
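The first-byte dispatch just described can be sketched as follows. This is an illustrative reconstruction of the RFC 7983 scheme, not code from any real implementation; the function names are made up for this sketch.

```python
def classify_first_byte(b: int) -> str:
    """Map the first byte of a received UDP payload to a protocol,
    following the ranges described in RFC 7983."""
    if 0 <= b <= 3:
        return "STUN"
    if 16 <= b <= 19:
        return "ZRTP"
    if 20 <= b <= 63:
        return "DTLS"
    if 64 <= b <= 79:
        return "TURN channel"
    if 128 <= b <= 191:
        return "RTP/RTCP"
    return "unknown"


def demux(packet: bytes) -> str:
    """Dispatch a packet based only on its first byte."""
    return classify_first_byte(packet[0])
```

For example, a packet whose first byte is 0x80 (128) lands in the 128 to 191 range and is handed to the RTP/RTCP part of the stack.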
I
If I'm remembering right, and Magnus will probably clarify, a lot of this was done because things happened to work out. I think if you're running a STUN server, then you'll see STUN and TURN packets on that port, but probably not the others. If you're running a WebRTC endpoint, you see STUN and DTLS and SRTP, but probably not the others, and so on. So we didn't necessarily need all of these combinations for every endpoint, but things happened to work out that way, and it's all been defined and documented as such.
I
How does QUIC fit with all this? Well, there are two types of packets in QUIC: long header packets and short header packets. A QUIC long header packet is used for the initial handshake, version negotiation, that sort of thing, and then it switches to short header packets once the connection is going. In long header packets, the first bit of the packet is set to one. That's then followed by a 7-bit type field, a connection ID, packet numbers, versions and the payload.
I
Currently packet types one through six are defined, and with the connection ID there, that gives you a set of values for the first byte of the packet which range between 129 and 134 for long header packets, and that conflicts with the range RTP and RTCP are using, if you look at the first byte of an RTP or RTCP packet.
I
For QUIC short header packets, the first bit is 0. There's then a bit to indicate whether there's a connection ID present, a bit called key phase, which is to do with the keying and the security stuff that I don't quite understand, and a packet type field; currently packet types 1 through 3 are defined. It's followed by an optional connection ID, a packet number and the payload, but again we only really care about the first byte.
I
Given those values, when you look at whether the connection ID is there or not and whether the key phase bit is set or not, this gives short header packets where the initial byte fits in the ranges 1 to 3, 33 to 35, 65 to 67 or 97 to 99, and those conflict with STUN and DTLS and TURN if you try to multiplex them all. So, as it is, these just do not go together with RTP and STUN and TURN.
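That collision can be checked mechanically. The sketch below assumes the draft-era short header layout described above (top bit 0, then a connection-ID flag, a key-phase bit, and type values 1 to 3) and compares the resulting first bytes against the RFC 7983 ranges; it is an illustration of the argument, not anything normative.

```python
def short_header_first_bytes():
    """Enumerate first-byte values of draft-era QUIC short header packets:
    0 | conn-ID flag | key-phase | type (currently 1..3)."""
    values = set()
    for conn_id in (0, 1):
        for key_phase in (0, 1):
            for ptype in (1, 2, 3):
                values.add((conn_id << 6) | (key_phase << 5) | ptype)
    return sorted(values)


def conflicts(values):
    """Which of these values collide with the RFC 7983 first-byte ranges?"""
    ranges = {"STUN": (0, 3), "ZRTP": (16, 19), "DTLS": (20, 63),
              "TURN channel": (64, 79), "RTP/RTCP": (128, 191)}
    return {v: name for v in values
            for name, (lo, hi) in ranges.items() if lo <= v <= hi}
```

Running this gives exactly the byte values quoted above (1 to 3, 33 to 35, 65 to 67, 97 to 99), with the first three groups colliding with STUN, DTLS and the TURN channel range respectively.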
I
So what do we want to do if we want to run these together? How can we address this problem? Well, in the QUIC working group on Tuesday, Martin Thomson talked about this. He walked through four options. Option one: just rely on the crypto. All of these packets are authenticated in some way, so you can check the signature on all the packets, try to authenticate them, and use whichever works. And it's horrible.
I
Yeah, I think that's the important thing: if we have something which is probabilistic, you can fall back to checking with signatures in the worst case. We don't want to do it, but we could. Option two was to rearrange the QUIC packet formats: make it so that all of the QUIC packets have the top two bits set to one. That avoids the collisions entirely and puts them into a space that's not used by anything else, but the QUIC folks don't like that.
I
They want to be able to use all of those bits, potentially, but it's a possibility. Option three was to add a single fixed-octet shim to the start of QUIC packets when used peer-to-peer: you put a single byte at the front with a non-conflicting value and say this byte will always be present when QUIC is running peer-to-peer, which in practice would probably turn into "this byte is always present if you're using QUIC".
I
The last option: we do the security handshake in QUIC and then just export the key, so we don't need to run DTLS. We only use long header packets during the handshake, so the fact that those can conflict with RTP doesn't matter, because once you've done the handshake you can switch to RTP; you never send both at the same time. And yeah, okay, that mostly fits if you squint a bit and pay attention to the sequencing. Clearly none of these are ideal.
I
So, if we take a QUIC packet and look at the long header packets: currently the packet types which are defined range from one through six, and I think type one is version negotiation and the rest are various other features. Packet type one is special in that it's sort of mandated that everything must understand it, and the rest are specific to particular QUIC versions.
I
So the suggestion here is that we renumber these packet types: version negotiation changes from being packet type 0x01 to packet type 0x7F, and all the rest just drop down following that, so we number from the top down rather than going bottom up. That doesn't do anything to QUIC, right? It's a trivial renumbering of the packet types; it isn't changing the semantics, it doesn't change anything else, but it changes where they fit into that space in the first octet.
I
Similarly for the connection ID field in the short packets: rather than having a one indicate connection ID present, we can flip it around, so zero means the connection ID is present and one means it's absent. That is perhaps the opposite of what you'd expect, but there's no reason why we couldn't define it that way, and it flips those into byte ranges that also don't conflict. And again, it's a trivial change to QUIC. It doesn't change any of the semantics; it's just a slightly different encoding of the bits.
I
So if we do this, the QUIC long header packets then fit in the range of first-byte values 255 down to 250 for the currently specified QUIC packet types, so they avoid conflicts with everything else that we might want to multiplex. QUIC short header packets, depending on exactly which bits are set, fit somewhere in the range of 64 to 127, and it's mostly only a small portion of that range that conflicts with TURN.
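A quick check of the proposed tweak, under the same illustrative assumptions as before: long header types renumbered downward from 0x7F (with six types currently defined), and the short header connection-ID bit inverted so that one means "absent", which is the common case in the peer-to-peer scenario discussed here.

```python
def proposed_long_header_first_bytes(num_types: int = 6):
    """Long header: top bit 1, types counted down from 0x7F."""
    return sorted(0x80 | (0x7F - i) for i in range(num_types))


def proposed_short_header_first_bytes():
    """Short header with no connection ID: the inverted flag bit is 1."""
    return sorted((1 << 6) | (key_phase << 5) | ptype
                  for key_phase in (0, 1) for ptype in (1, 2, 3))
```

The long header bytes come out as 250 to 255, clear of everything in the RFC 7983 table, and the short header bytes fall at 65 to 67 and 97 to 99, inside the 64 to 127 region, with only the 65 to 67 group overlapping the TURN channel range.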
K
I think there's actually no collision with the TURN channels, because if you run a TURN channel, you know that everything is going to come through the channel: you have to give the packet to your TURN client first, it will unpack it, and then you demux it further.
I
Yes. I don't see why anyone would want to do that anyway, but yeah, not from the client perspective. So, assuming that we don't use channels for the QUIC short header packets, and assuming we don't need to keep connection IDs on the QUIC short header packets in the peer-to-peer case (I got the impression that the QUIC people didn't think they needed them), and assuming we don't worry too much about TURN.
I
Then we seem to be okay. That said, the QUIC people might want to use more of this space in the future, so we might need to give some guidance about what happens if that's the case. The QUIC people might just decide they want to expand the range of packet types; they might want to redefine the QUIC header in some future version of QUIC to use more packet types; or they might just want to grease the space and send other values of the type field, just to make sure it's still possible to send them.
I
I mean, one way of looking at this is that we're updating RFC 7983 to say: okay, QUIC fits in these values, and those are the values QUIC will use when multiplexed. That, I suspect, probably works reasonably well from our point of view, but I think the QUIC people would dislike it.
I
From my point of view, the essential thing here is multiplexing STUN and QUIC. Yes, there is work on RTP in QUIC; there's also been talk of getting rid of the DTLS, because you can do the handshake in QUIC and just specify how to extract the keys from the QUIC handshake and use that for keying.
I
Those are long-term things. Specifying RTP over QUIC is not going to be something that happens next week; it's going to take a couple of years, I suspect. It's quite a big change; they're quite different transports. So I think in the short term it would be useful to be able to demultiplex. In the long run I expect RTP will end up running over QUIC, and it would just be the STUN that we care about.
M
My concern is that we'd have so many different competing options, not necessarily conflicting, but suppose you supported them all. Suppose the QUIC folks actually work on better NAT traversal: then what do we do in ART? Then QUIC has native NAT traversal, native TLS and native media carriage bindings, maybe a QUIC RTP encapsulation or something like that, and we also have this totally parallel universe of AVT transport.
I
Step back: if we make this change to QUIC, then it becomes possible to demultiplex them if you care to do it. It's not that you have to do both, or have to be able to demultiplex them.
I
I think it clearly would make sense to be able to say QUIC is using these ranges of that first byte, and if you wish to be able to demultiplex, then that's great. I think the problem is the QUIC folks actually want to claim all of that first byte. They happen to only be using a subset of those values now, but they want the option of using the rest of them, and they want the option of greasing that byte. So I think, from a QUIC point of view:
I
What we have is that we happen not to conflict with the packet types defined now, but a QUIC implementation will probably grease and use the whole of the rest of that byte. It could, though, be instructed not to do so if it wishes to run with these other protocols on the same port, because that particular implementation wishes to coexist. So I suspect the question of whether we update 7983 is actually a much more complicated one than the question of whether we rearrange the bits.
E
Adam Roach, as an individual. I'm glad that you concluded by saying you can run anything you want over QUIC, because I think that's kind of key to why this seems like it's doing a whole lot of work that isn't really necessary. If we can put these protocols on top of QUIC, then it seems like the problem is reduced to: how do we do STUN and QUIC at the same time? We don't need to worry about the rest of this.
E
Whatever it is you're trying to slam inside here, that's then an architectural design, as opposed to something we kind of noticed happened and then tried to sort of squeeze in around the edges when we realized that what we really want to do doesn't actually fit, but maybe with a hammer and enough moving of bits around it will.
I
I don't believe that's a problem. For QUIC version 1 they're not going to define new packet types, and it's QUIC version 1 we care about for multiplexing with RTP, because by the time there are more versions of QUIC, I expect RTP will be running inside QUIC. And they're never going to get down low enough; they're never going to define enough packet types that it starts conflicting with STUN.
D
As for the top part: you have the top bit, starting with a one for the long header and always a zero for the short header, and with all the rest you can do whatever you want. Now, for the first byte, the only limitation that's being added is that we're definitely saying that for the long header we'll start with 1 1 and not just with 1. Okay, that's the limitation.
I
Let me finish running through it. For the long header packets, we have no problem with STUN; that's always going to be the most flexible. By renumbering the fields we happen to gain coexistence with RTP and all the other things for QUIC version 1, and you may or may not think that's important, but for STUN there's no problem. For the short header packets, if we flip the connection ID bit, then we also get coexistence with STUN, and that's important, okay. And if we don't, then they conflict with STUN packets.
Q
Yeah, I think this comes down to the question of whether we need to support a transitional period, I guess, where you can, for example for WebRTC, do QUIC for the data channel but still use RTP at the same time. Or, if later on everything goes over QUIC, then we no longer need to worry about that. But the other question is: is QUIC now only the first of more protocols?
F
Campbell, as an individual. A couple of things went by in the line before I got up here that I'm not sure I interpreted correctly, so my point might be moot. But when we start talking about demuxing things other than STUN, RTP or whatever, and doing it because it might be nice, because we might use it someday: I would rather give people guidance to not do it unless we need it. That's not what ports are for.
K
Yeah, I've actually been thinking about this connection assumption. It's probably fine, assuming that you actually have all the bits, so you can lock on the IP addresses and UDP ports and not run multiple simultaneous connections to the same peer, which I guess is actually what the limitation comes down to. Yes.
K
I mean, I was thinking about some of these middleboxes, et cetera. One probably starts with a fully open binding on the port, and then you need to narrow it down, so you apply the filter after you've done the ICE handshakes and so on and need to move on. So I don't know how kludgy the implementation stage is of actually going from fully open and then binding it down to a particular remote address and port.
I
I think, for the next steps on the QUIC side, what I propose to do is submit a pull request to the QUIC spec that makes those two changes. I think that will then be discussed in the QUIC group, and they will see whether to take it up or not. Talking with Martin Thomson, who is one of the editors of the QUIC spec, he seems amenable to that.
I
Obviously, though, the group would need consensus that this was the right thing to do. From the AVT point of view, or possibly the transport area point of view (I don't know what the right home for it is), we should consider whether we need to update 7983, or whether this is something QUIC-specific, or who knows what we do.
F
I don't have an answer for that today. I would lean towards: if all we're doing is simply changing 7983, that's probably here. But there are implications, as I said, that reach pretty far and wide, so I think some conversations need to happen behind the scenes to give thorough guidance on that. I think we can assume for now that if the work is done, this is probably the place it would be done, but that "if" is still a pretty big if.
S
So in this chapter we introduced a new metric, which we call the effective loss index, and we also define the XR report block format to cover this new metric. This new metric will be used to measure the effectiveness of packet loss repair. We compare it with two existing RFCs, RFC 7509 and RFC 5725, and we describe how it differs from 5725.
S
Actually, this approach doesn't need to report packet by packet, so we can save the packet-by-packet report block overhead. And compared with RFC 7509, we're more focused on providing statistical results for the total packet loss change, so we can track how the packet loss changes over time. Next.
S
So what is the effective loss index? This is the new metric we introduce here. It is actually a simple metric to measure the effectiveness of packet loss repair: in some cases we want to measure the degree of packet loss that remains when we apply a packet loss recovery mechanism, and we give an example shortly.
S
So how do we calculate this effective loss index? We assume the following: there is an RTP receiver that receives the stream of RTP data packets, and the receiver can be placed either in a middlebox or in an end host. We set a time window, and the packet loss repair mechanism is applied chunk by chunk; the chunk size can be the time window size. Then we look at each chunk.
S
If a chunk has some unrecovered packet loss, then for that chunk the packet loss repair mechanism was ineffective. So we introduce a formula: we measure the total packet loss with the packet loss repair mechanism applied, and we compare it with the pre-repair packet loss. The pre-repair packet loss is something that has already been defined in RFC 3611: it follows the loss RLE (run length encoding) metric.
S
So we can compare this pre-repair packet loss with the effective loss threshold that we set, and obtain an effective loss factor. We can then average these effective loss factors over the received chunks.
S
We multiply by ten thousand, so we can translate the fraction into an integer. Okay, to give more of an idea of this effective loss index, here is a simple example. Suppose we receive nine packets. We can group them into three chunks; each chunk has three packets with consecutive sequence numbers: one, two, three; four, five, six; seven, eight, nine. In each chunk, suppose we lost two packets, and suppose the threshold was set to, for example, one.
S
If the loss is greater than the threshold, we set the effective loss factor to one; if it is less than the threshold, we set the factor to zero. So we can calculate this kind of effective loss index.
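The calculation just described can be sketched as follows. This is a minimal illustration under the assumptions stated in the talk (fixed-size chunks, a per-chunk loss threshold, factors averaged and scaled by ten thousand); the function name and exact comparison are ours, not taken from the draft.

```python
def effective_loss_index(losses_per_chunk, threshold):
    """Per chunk: factor 1 if the pre-repair loss count exceeds the
    threshold, else 0.  The index is the average factor scaled by
    10000 so a fraction can be carried as an integer."""
    factors = [1 if lost > threshold else 0 for lost in losses_per_chunk]
    return round(10000 * sum(factors) / len(factors))
```

With the example above (nine packets in three chunks, two lost per chunk, threshold one), every chunk's factor is 1 and the index is 10000; if only one chunk of the three exceeded the threshold, the index would be 3333.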
Next: this is the new XR report block format we propose. For the block header, we use the same header defined in RFC 3611, and in addition we introduce the new metric parameter field, plus six bits of padding. Next: to calculate this effective loss index, we need to configure two parameters, the threshold and the chunk size (or batch size).
S
We
use
a
different
different
term
may
be
a
bit
confusing,
but
we
think
maybe
channel
is
more
better,
so
we
can
use
SBP
to
signal
these
two
parameter
and-
and
actually
this
is
the
the
the
only.
This
is
one
way
we
may
use
some
as
a
out
of
mana
maximum
or
with
just
a
manual
configure
to
do
to
configure
these
two
parameters.
Next,
so
there's
some
discussion
in
a
bt
call.
Many
missin
I
didn't
follow
that
and
my
colleague
actually
follows,
so
we
have
to
consideration
for
these
two
parameter:
wines
effect
loss
ratio.
S
The other is the chunk size. For the threshold, as we already discussed on the previous slides, we can use SDP to signal this kind of parameter. But how do we set this threshold? It should be based on the RTP application: for example, an FEC application may consider the FEC coding mechanism or coding rate. Here we give a simple example: the threshold can be set to the number of packets in the FEC stream. And similarly for the chunk size.
S
That's a good question. Actually, we thought about this and discussed it with our colleagues. This looks like a new metric, but it can be based on the existing metrics defined in RFC 3611 and also on the post-repair packet loss metrics, from which you can derive this measure. It's more like a pre-repair loss: it uses the pre-repair loss count to compare with the threshold we set, and that pre-repair packet loss can be calculated based on the loss RLE defined in RFC 3611.
S
In addition, there is another document that defines the post-repair loss count. It defines two parameters: the post-repair loss count and the repaired loss count. You can sum these two parameters together to get the pre-repair packet loss. So this is the new metric we introduced here, yeah.
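The derivation mentioned here is simple arithmetic: the post-repair block carries both a post-repair loss count and a repaired loss count, and their sum gives the pre-repair loss for the interval. A trivial sketch:

```python
def pre_repair_loss_count(post_repair_loss: int, repaired_loss: int) -> int:
    """Pre-repair losses = losses repair could not fix + losses it fixed."""
    return post_repair_loss + repaired_loss
```

So an interval with 3 unrepaired losses and 5 repaired losses had 8 losses before repair, which is the count the effective loss factor compares against the threshold.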
C
No, I was just commenting on this. I'm not a measurement guy, so I can't comment on whether it's useful, but it clearly fits the architecture, it doesn't cause any harm, and we have plenty of space, so there seems no reason not to do it. Although whether it's useful, I cannot comment. Thank you.