From YouTube: IETF111-AVTCORE-20210726-2300
Description
AVTCORE meeting session at IETF111
2021/07/26 2300
https://datatracker.ietf.org/meeting/111/proceedings/
A
Enter the queue by clicking on the hand icon when you want to speak. You also turn on your audio and your video yourself. Video and audio are separate, so make sure you turn on audio even if you also turn on video; otherwise we can see you but not hear you, which is frustrating for everybody.
A
The Note Well applies; hopefully you've seen this before if you're ready for this meeting. If you have any questions, please contact the chairs or an AD, or you're responsible for your own legal advice, if that's relevant.
A
Okay, we have an agenda, we have notes. Shuai Zhao has volunteered to take notes, though he'll be presenting one of the items, so it'd be good if we had somebody backing him up while he's presenting VVC. But thank you, Charlie, for volunteering.
A
Well, anyway, hopefully somebody can cover it. Oh, he can do his own notes. Okay, Justin says you can do the easy notes. Great, perfect.
A
Okay, do we have a Jabber scribe? Anybody want to volunteer for Jabber? It's basically just this: if somebody is not able to speak, their microphone isn't working or whatever, and they need to say something, you repeat what they said. Hopefully it's not hugely challenging if somebody volunteers. Though I guess, Bernard, we should decide which of us is doing queue management and which of us is doing Jabber.
A
Cue management? Okay, great, because I'm doing the slides; that's probably a good split. Okay. Other meetings this week: there's a side meeting on video ingestion over QUIC; see the link on the side meetings page. That's right, 1800 UTC it is.
A
Okay, yeah. I guess I was confused, because I thought... yeah, okay. Well, hopefully we'll all be able to figure it out; basically, an hour before the first session is the important thing. So yeah, okay.
A
And sorry, here's our agenda. If anybody has any comments on the agenda, let us know; otherwise we will get to it. So, our draft status: we've had a number of things published, so that's exciting.
A
That's basically cluster C238 clearing, so that's very exciting for everybody, not just this working group. VP9 is in the RFC Editor queue; that's exciting for me, because that was my work. It's a MISSREF on LRR, which is itself a MISSREF on a framework draft.
A
JPEG XS is in AD follow-up. Frame marking: we believe that the existing draft is basically done. There was one open issue, where we had a discussion on the list and decided that the existing text was fine, so I think that's ready for write-up; Bernard and I will decide who's doing the write-up on that.
A
It possibly should be me, because I've been around longer, and this has been a very long-lingering draft, as you can tell by the working group name in the title. We dropped Tetra because nobody seemed to be interested in it. If anybody is interested in it and wants to pick up working on the draft, let us know; but for the time being it didn't seem like anybody was interested in doing the work.
A
We've adopted the EVC and VVC drafts, and 7983-bis, which I'm embarrassed to say I forget which one that is, and Cryptex. Oh, multiplexing, right, right, yes, yes. And then two of those are on the agenda for today. And Cryptex: is Sergio here? Yes, he is. Excellent. Sergio, do you wish to... all right.
H
Hey. The test vectors were added into draft -02, and they have been implemented by Jonathan in Jitsi and by me in Medooze. So the implementations that are using this are ready; the tests are ready to be merged and used. There is only one pending issue, one that Jonathan opened today: to clarify the AAD for the AEAD modes. But it is just editorial.
H
So the idea is to request that we progress this forward with working group last call. I don't know if we need to push a new draft before this is requested, or what would be the best way to progress it; but everything is done, at least on our side, apart from this minor issue.
C
It'll do, okay, all right. Oh, that's great. Sorry. All right, so this is going to be a quick update on the draft revision. Next slide, please.
C
So it's almost two years since we submitted the first IDs; now we are at revision -10. Since IETF 110 we have actually resolved all the editor's notes; the current revision has addressed all of them. The big part is that we have added the complete SDP offer/answer section, thanks to Stephan for that. We still had a few bugs, fixed already, and a missing IANA consideration.
C
But besides that, I think the current revision -10 is ready for working group last call. Next slide, please. There are only a few things probably worth mentioning; you can also see the detailed feedback from the mailing list. The first one is editor's note 11, regarding the encoding scheme for VVC.
C
We suggest using base64 instead of hex (base16), which comes from the HEVC language. The justification is that some of the binary fields in VVC are actually 32-bit unsigned integers, such as the bit-rate and sub-layer ID parameters, et cetera. So we suggest using base64 here to trade off encoding space versus speed.
C
So if you have comments, please leave them on the mailing list. The next one is editor's note 19, regarding the depacketization buffer NAL units parameter. That was left as a copy-paste from HEVC; we tried to remove it because we didn't find any very practical usage for that parameter.
C
I mean, the goal of the parameter was to try to reduce the startup delay, but we didn't see any critical usage for it, so we just removed it, along with any related language. Then editor's note 20: we completed the whole list of SDP parameters.
C
Editor's note 24: that is one of the open items. We introduced the R bit in the fragmentation unit (FU) header at the last meeting. The R bit in the FU header means the last NAL unit of the coded picture, so we used R as a placeholder, but we actually want to change it to P, as in picture.
C
Since it's not reserved anymore, and it's actually put to some use, I think P is probably a better name. But if you have other suggestions, please comment on that one as well. Next slide, please.
C
So this is really the update to section 7, regarding the SDP offer/answer, all of it. The addition is really focused on sections 7.2.2 through 7.2.4. Again, most of the HEVC language was updated for VVC itself, such that we added subsections covering the non-scalable and scalable media formats, multicast, et cetera. I didn't put all the details here.
C
There were a lot of things missing; this was really easy to fix. The section 11 IANA considerations are similar to HEVC as well. Next slide.
C
I believe this is the last slide. So, up until revision -10, we believe we have addressed all the action notes we had before, so we think it's ready for working group last call. Any remaining bugs or informative issues we can fix along with the comments coming from working group last call.
A
All right, so yeah. I think that seems reasonable to me. If nobody has any comments, I think we should be able to go to working group last call.
H
What was the section that you wanted to...
K
...get working group review on?
C
Well, ideally, all the sections. The last big chunk is section 7, mainly the SDP offer/answer part.
C
I think the slide itself has a link already; I put links on everything. Obviously you can click on it, and it will link you to the new text.
E
Things ready to go? Yeah. Today I want to briefly talk about a long-ish journey looking at RTP over QUIC that has kept us busy for a good part of a year now. There has been, as was shown at the January interim, a similar draft presented by Sam Hurst from BBC; I'll briefly talk about that later. Our focus is somewhat different and the details differ, but they are pretty similar in spirit, as we also noted in the draft itself.
E
So I'm going to talk about RTP over QUIC. Next slide, please. The motivation is twofold: on one hand, the basic mechanisms to actually carry RTP packets inside QUIC; at the same time, trying to come to a sensible interaction between a QUIC library, which collects all kinds of statistics information, and an RTP implementation, and possibly a higher-layer congestion control implementation, leveraging that lower-layer information and then adjusting how we would need to use RTCP: which packets we can avoid in order to become more efficient.
E
A meta point of all this is also to figure out, over time, how to properly interact with QUIC congestion control if we do our own congestion control at the media level, since we probably can't just rely on what QUIC offers. Next slide. In terms of related work, I just mentioned Sam Hurst's QRT draft, which was presented earlier this year. As I said, our draft is roughly similar in spirit, but we do differ in a number of details, and maybe also a bit in scope.
E
Both drafts use QUIC's unreliable datagram extension. You know, QUIC is a reliable transport protocol, all encrypted and so forth, with multiple-interface support, avoiding head-of-line blocking and allowing for stream multiplexing; and it also has an unreliable datagram extension, which could be feasible for carrying RTP packets inside. However, that datagram extension is also subject to congestion control: not to flow control, but to congestion control.
E
Applications that would like to use QUIC datagrams would need to provide their own multiplexing. There has been a discussion in the context of HTTP/3 to use flow IDs for that purpose: essentially a 16-bit identifier prefixed before the payload to identify different channels. One would use SDP for signaling; I'll have an example for that at the end, but that's not the main focus of this draft.
E
RFC 8888 then provided congestion control feedback in detail, with precise reception timestamping and loss reports. That should be sufficient to supply enough information for the RMCAT congestion control algorithms to work, which is the goal that we also seek to serve here.
E
Obviously, any transport protocol will roughly gather similar pieces of information, those kinds of statistics, and so does QUIC; but it has kind of limited resolution in places. I'll come to that in a slide or two.
E
The information may not be as exact as the RTCP feedback that we can get, it's not always clear when a datagram frame will actually be transmitted, and not every datagram frame might be ACKed, and so forth. So we get a coarser granularity.
E
So this kind of congestion control, or RTP adaptation mechanism, sits in the middle between a QUIC implementation, from which we expect that it supplies us with unreliable QUIC datagram frames that we have a way to get out of the implementation...
E
...and some parts of its local state information, namely when datagrams got ACKed or reported to be lost, including statistics on when the respective reception would have happened, plus RTT statistics, so that we could actually parameterize our congestion controller. That congestion controller, towards the upper interface, assuming that we are going to use one of the RMCAT algorithms, is supplied with the ACKed packets, delay, RTT estimates, and possibly ECN marks if available; and it is then supposed to provide a bandwidth estimate for the media encoder.
F
Yeah, I just had a question relating to the ACKs and losses of datagram frames and the RTT stats. There's work going on in the W3C on the WebTransport API, and as far as I can tell it doesn't meet your two criteria. The ACKs are not a signal: you basically figure it out at the app layer, but it isn't in the API; and I don't think it provides the RTT stats that you need for this RTP over QUIC. Have you looked at that?
E
We have been super busy with looking at the internal interfaces, at what we actually would need, and we haven't yet finally confirmed what the final data points are that we would be looking at. So no, we haven't looked at the W3C spec specific to WebTransport; that's something to be put on the list. At the moment we are trying to establish what the right granularity of information is in the first place, before we ask somebody else to provide it. What's the timing...
F
...for that spec to be complete? Well, that's part of the thing: the stats aren't in, they're kind of fuzzy, and they aren't in the current experimental release. But that's something that could presumably be added. The ACK issue is a little bit more fundamental.
F
We've fooled around with a couple of different ways of doing that and haven't come up with anything particularly satisfactory, so that may be a little bit more of an issue than the stats, which can probably be added later.
K
And if we were going to do that for WebTransport, we would probably just want to select something like a real-time mode, and we wouldn't need all this stuff exposed to the application. We would just say: run using the signals that you have directly available to you, because you're running inside the actual implementation itself.
E
Actually, I want to come back to that question at the end, because this is kind of the discussion we'd like to have then; probably not exhaustively at that point, because we believe there are different ways of actually doing this, and I'm not sure what the best way is to sort that part out. But I'm happy to take any suggestions and insights on that.
E
Taking silence as a yes; quickly, next slide.
E
So this tries to summarize what we can usually expect. On the top, from RTP plus RFC 8888: red packets are supposed to be lost, grey packets are received, and wherever there's a small black marker, that is the reception timestamp that you would get as feedback using the respective feedback packet. From the congestion control feedback in QUIC we don't have the luxury of having every packet reported in detail; in some early version of the draft that kind of information was available.
E
So we have to... An ACK may be cumulative, but then you only know what the timestamp of the triggering packet was, not of the previous datagrams. This is what I meant earlier by getting only coarser information. So we need to come up with ways to interpolate what the other reception timestamps were, and we could get RTT stats from QUIC, if they were exposed, in order to figure out when datagrams actually got received.
E
You would also need to understand which RTP packet number went into which datagram frame, and into which QUIC packet number that went, in order to get precise reporting back. Then you could keep lists of both inside the implementation, match them, and run some kind of congestion control algorithm.
E
Okay, next slide. So we have been building a small pipeline that runs automated experiments, where we have been using GStreamer, initially starting with the SCReAM implementation, because that was available, or with plain RTP. Combining that, we extended the quic-go implementation and built an interface to have it talk to GStreamer, also providing additional logging facilities, and then exposing the intrinsics of the state from the QUIC implementation towards the RTP implementation that we have inside GStreamer.
E
This setup comes with a nice dashboard, in order to figure out how things are performing and to do extensive parameter-space exploration of how the different mechanisms work, and also how the different approximation or interpolation algorithms could actually fit. That's currently in progress; nothing for the spec, but it's a bit of the background that we look into. I'll come back to the congestion controller; let me just do a very brief excursion to signaling. Next slide.
E
So, QUIC is connection-oriented, so we need to support connection-oriented operation if we want to signal this in SDP. If we want to bundle multiple RTP sessions into the same QUIC connection, we propose using the bundle mechanism, and by default we will probably also multiplex RTP and RTCP, possibly also using the rtcp-mux-only extension. A simple SDP example is on the next slide, but that's nothing for AVTCORE.
E
Next slide. That's nothing in depth for AVTCORE to look at; it's just included here for completeness, to show how that could in theory look. Some parameters would need to be registered, and so forth. Next slide.
E
What is interesting, and that was alluded to in the comment earlier, is how we are actually going to make this work. In such an environment we might have a QUIC library that gets linked into some process that has some real-time streams: some non-real-time reliable streams on the left, one or more RTP flows on the right, and possibly also other datagram flows that may or may not be real-time sensitive. All of them go into a QUIC implementation that currently runs a single congestion control mechanism, which is usually turned on.
E
The question is: it's unspecified how the different types of packets would actually interact with one another. The datagram spec says that datagrams may be delayed; an interface might provide an expiry time for datagrams, and if they weren't transmitted by that time, they could be dropped.
E
But overall, there isn't really well-defined resource sharing among those things. We possibly have priorities among QUIC streams, but it's not clear how this relates to the other things: whether reliable streams are considered more important, more worthy, than the unreliable ones, because you could easily drop those.
E
Normally I would expect that an arbitrary congestion control mechanism that is built into QUIC might not necessarily fit best with what the application, or a real-time application, would need. So one related question could be whether there is going to be some resource-sharing mechanism, where a certain capacity, or a relative capacity share, is set aside for real-time congestion control, is then managed on its own, and could then be independent, or live within...
E
...the confines defined by the overall QUIC congestion control. There are probably many different options, and one of the parts of our experimental setup is to actually try out which of these work, and how well. But we don't have, and QUIC doesn't have, an answer yet as to what might be best practices here; I can see all kinds of things go wrong in different settings.
E
So this is a bit of a report on what we have been digging through for some time now, and we are continuing to do this. The general question: is this useful?
E
What other questions did we miss that one should be looking at? One came up on congestion control: what might be worth how much. This is obviously to be seen in the larger context of all the other media-over-QUIC efforts, so this is mostly for information, but it seems to be relevant to AVTCORE. Thanks.
K
Great that this showed up in the agenda here. I think that many of us have been kind of thinking about this for several years, and a couple of things. One, it's proven to be a bit harder than it looks. And two, we were sort of waiting for QUIC to show up as something real, essentially for QUIC v1 to be done, because I think the QUIC team didn't really have time to think about this while they were trying to get QUIC v1 out the door.
K
So now it's probably a good time for us to take another run at this. I do think that many of us have looked at sort of the grand unification theory that you had on your previous slide: datagrams and reliable traffic and RTC stuff all using a single channel.
K
I would advise that in the first run we not try to solve that. I think that's one of the hardest problems; there's really no known solution that can be applied there. Just getting it to the point where you could actually run RTP over QUIC, even to its own destination, over its own QUIC five-tuple...
K
...I think that would be a huge win. So just figuring out a way to do that, and ideally doing it where the application doesn't have to do a ton of logic, where we can basically tell the browser implementation "plug in the real-time congestion control mechanism": I think that would allow us to basically encapsulate RTP directly over QUIC datagrams, and I think that would be really nice.
A
Yeah; Jonathan Lennox, as an individual. I guess my first question is to ask back to you: is this useful? Which is to say, what does this actually gain us over RTP over UDP, or SRTP over UDP? Because, obviously, QUIC proper gains a whole lot over TCP; there's a lot of interesting work on why it's better than TCP. But it's not entirely clear to me why this is better than RTP over UDP, as a really compelling story. So that's my question.
E
Well, if you think in terms of RTP over DTLS over UDP, with all kinds of multiplexing and port-saving mechanisms put into place so that you only need one single flow pair in the end between the two endpoints, that would probably be roughly the equivalent of what we are talking about, right?
E
So I guess the main question for us is: if we are going to establish something more than just a single RTP session, and we are looking, for example in the context of WebRTC, at mixing data and real-time flows, then it seems to be a useful thing to reduce the amount of brittleness that comes from having to set up multiple independent transport connections. That seems worth a try, in order to possibly simplify the operation.
E
Okay, that's, I guess, one take on it. The other part is: it appears to be useful to be able to understand what you should actually be providing, and then maybe also to be able to quantify the gain in the end.
E
If there is one. If there is no gain, then it might be useful to have found that out, rather than having guessed. So it's also an experimental study, to see how useful, and how effective or efficient, this turns out to be in the end. Okay; not sure if this answers your question satisfactorily.
H
I have some mixed feelings about it. I agree with Justin in that I don't think all this multiplexing of several RTP sessions in the same transport should be something... at least I don't see how it could be. I see a lot of potential in RTP over QUIC, especially if we consider WebTransport and the future uses of WebCodecs with WebTransport, or things like that.
H
So maybe the only thing that I would say is that I think it would be interesting to see how to mix, or how to be able to negotiate, RTP over QUIC over an already established WebTransport session. I think that's the only case where, for me, it would make sense to multiplex that RTP transport over something else; but I don't see exactly the point of doing it over a plain QUIC connection or not...
H
Yeah, I mean, most of the most successful WebRTC and RTP deployments today are not using it; we are happy having one session, with one transport, sending all the data. So I'm not sure why this multiplexing, or the flow ID, is required at all. But then, on the other side, just having a way to map RTP over QUIC...
J
The classic "muted or unmuted yourself at the wrong time, and you don't hear your announcement". Sorry, go ahead. So, Jörg, I'm curious about, I guess, kind of the inverse of Jonathan's question: not "why bother adding QUIC", but when you do add them together, what is obviously broken, or what needs more attention because there are conflicts? You mentioned congestion control, which seems pretty obvious; you mentioned feedback.
J
What would make this a good transport or a bad transport? It seems to me like all of the dynamics of aggregation and fragmentation and everything become totally different, and we'd have to re-evaluate how codec payload formats do aggregation and fragmentation. Do you want to either break those apart, or map them, or have guidance about prohibiting them, or not doing certain things over QUIC because of the way that QUIC is going to internally handle these datagrams? Or do we ask for an API...
J
...that exposes more control over the QUIC datagram aggregation and things like that? You mentioned the delay, but I'm even more concerned than about the delay about how a transport functionally works for something that's fundamentally a packet push, not a stream-oriented transport.
E
Well, we wouldn't necessarily have that here, because from what we looked at in the implementations, they keep the packet sizes and the payloads intact; so they're not undoing any kind of sensible payload-type choices that you would have been making at a higher layer.
E
So nothing gets fragmented. Something may get padded if there's space, for example to let an ACK in the opposite direction go in, or some other tiny frames might fly along with the datagram frame. So there I wouldn't be worried: we have a fairly constant number of bytes of extra overhead, but that is not that much, so it wouldn't fundamentally screw up your MTU expectations. And you might benefit from actually querying the transport protocol to ask: okay, what MTU can...
E
...I expect right now? That would indeed be a useful addition. But otherwise, I think none of the assumptions that RTP would usually make, besides the packet size and the media transmission, would be undermined. When it comes to immediate transmission versus delays, I would expect that that is a key thing where one could give guidance to the QUIC working group on what to do and what not to screw up.
J
Yeah, a little bit. I'm just thinking practically, as someone who cares about media and is writing a complex media stack. It has a scalable codec, it has differentiated frames, it has all these things, and it would try to do those things intelligently on its own and not assume that there's any intelligence in the transport; it's assuming RTP to be a packet pusher, right. But if you're over QUIC, now you either have the potential to use something else as an intelligent transport, or the intelligence may defeat...
J
...some of the things, may have a bad impact on some of the things that you're trying to do. Those are the dynamics that I was more worried about.
E
I see your point. If I could wish for something, then I would have a prioritization between my RTP stream, which I hand to a transport instance of QUIC, and any other traffic that might go into that same transport instance of QUIC, so that I can do all the smarts on my own and have fairly good control over my own rate adaptation, and whatever else I might need, without it being interfered with by somebody else.
E
As long as I'm not trying to violate circuit-breaker-like limits on the overall capacity that I could possibly use. In an ideal world, from a media perspective, probably any kind of QUIC congestion control would stay out of the way unless there's some other data to send, and maybe then things could get shared, or unless you try to exceed what is meaningfully acceptable for the path.
E
So I believe, as was mentioned earlier (I don't know exactly by whom), that this is a very hard problem to solve, and so maybe some simple first steps could work here. You clearly can't afford pushing real-time media into an elastic blob of buffer, about which you know nothing of when it's actually going to spit the packets out again and how it's going to shape them. So there must be some more direct access; otherwise it's not going to work.
F
I just wanted to comment on a few things. I know Sergio had a question about the flow ID: in HTTP/3 that's used for multiplexing, potentially between browser tabs, and there's a concept of pooling, so it may have a use. Even if it wouldn't be something you would use on a raw QUIC connection, for WebTransport over HTTP/3 that could be a use of the flow ID.
F
You know, depending on what it thinks the available bandwidth is. And that's interesting, because you mentioned the APIs: it turns out that the WebCodecs API doesn't really allow you to do that; it's a much more conventional encoder interface.
F
So from that paper, at least, I learned that the conventional bit-rate target may not be the greatest thing for an encoder API; and then, from your talk, maybe also that the WebTransport API may not give you what you want. So the interface looks like a pretty promising thing to do research on.
E
Yeah, absolutely. So I saw the talk earlier tonight, and I'm very much aligned with this idea. It reminds me of the little bit of state exposure that we had in the past, back at the time when we had reference picture selection, in order to allow different states inside the encoder to be explicitly referenced; but that didn't choose between different encoding alternatives for the next frame, right.
F
Yes, yeah. The other thing I just wanted to mention is that there are some things that have occurred before which might be useful: people have been using the WebRTC data channel now to transport audio and video for low latency, and it works.
F
It's working much better than one would have thought, and the reason it's working much better is that they're using partial reliability, which is also what was proposed in the RUSH work. So they're essentially taking a frame and not waiting forever for it to retransmit; they're setting some kind of timer. And I would just mention that that's become extremely popular in game streaming.
F
So there are potential uses of reliable streams in QUIC which are not at all similar to what people would do in TCP, right: you kind of use a separate stream for each frame, and that has some weird dynamics. So it might be worthwhile to look at some of the dynamics of that as well, because it may not be as bad as trying to send media over TCP used to be.
E
Yeah, we actually had that in the original considerations, as one of the alternatives, but we scrapped it for the time being for time reasons. We had looked at that option, since it was mentioned as one of the first ways to transport media over QUIC: coping with head-of-line blocking for individual frames by using a new stream ID for every frame.
E
But
that's
that's
definitely
also
worthwhile.
Also.
It
might
also
be
interesting
to
see
where
the
intersection
between
that
and
a
flavor
of
rtp
ish
could
come
in,
but
that's
another
thing
to
explore.
K
Jonathan's comment, basically: let's try to identify what it is that we want to do here, because I think there's a pretty broad continuum. What I see you proposing is a pretty minimal approach, where in many ways QUIC is basically acting as a replacement for DTLS, versus the approach of putting this all over QUIC reliable streams — which was the other approach, the one serving RUSH, I guess — where a lot more of the transport functionality is being leaned on. I think those are two slightly different use cases, and I do think that even if we just say we're replacing DTLS, and maybe ICE at the same time, we're still getting pretty good value.
K
Not because it's necessarily any better, but because it makes it easier to deploy: if you have a cloud-hosted service and you consume things over QUIC, you'll be able to use some of the load balancing and other machinery that you're using for your other QUIC deployments, which today you have to build yourself if you're doing RTP. So even if we don't get any performance benefit out of the box just from using QUIC, I think we'll get a deployment benefit, and that by itself, I think, is interesting enough to pursue this work.
A
Yeah — at the time, I mean, this would have finished. That's correct, I think. From a chair point of view, I'd say the working group thinks this is interesting work and you should keep working on it, but we're not going to adopt anything anytime soon — at least that's my thought on this idea.
A
Okay, who's presenting the next one? Is it Sergio? Are you on?
H
First of all, sorry for not being able to present this in the planned interim, but there were certain events that prevented us from doing the work, so we are continuing it here. What we have done in this new draft is mainly two things. First, as there was quite a strong opposition to using the term "generic" — many people claimed that it was not possible to do this in a generic way — we have narrowed it down to certain codecs that we plan to use; and if it needs to be extended in the future, it will be easier to just cover those codecs on a one-by-one basis. That's also why the name has changed: it is no longer the generic RTP packetizer but a multi-codec RTP payload format, to reflect this. It's the same draft, just a small change in its philosophy. Second, since several people were asking why we were doing this on a frame basis — which was causing a lot of problems that we needed to solve — rather than doing the end-to-end encryption on a per-packet basis, we have also taken up the exercise of thinking about how SFrame would look if applied on a per-packet basis instead of a per-frame basis. Take it as an exploration, not necessarily a proposal.
H
So you can think of this as a theoretical exercise, just to provide answers to the questions that we were receiving in past meetings. So, how would an SPacket work? What is an SPacket? It is just applying the SFrame encryption to each RTP packet payload instead of to a full frame.
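The difference between the two can be written down compactly. A rough sketch — the `seal` function below is a mock that just appends a truncated hash, not real SFrame crypto, and `TAG_LEN` and the sizes are assumptions: SFrame encrypts the whole frame and then packetizes the ciphertext, while SPacket packetizes first and then seals every RTP payload individually.

```python
import hashlib

TAG_LEN = 4  # stand-in for an SFrame auth tag (real CCM/GCM tags are larger)

def seal(data: bytes) -> bytes:
    # Toy "encryption": append a truncated hash as a mock auth tag. This is
    # NOT real SFrame crypto, only an illustration of where tags get added.
    return data + hashlib.sha256(data).digest()[:TAG_LEN]

def packetize(payload: bytes, mtu: int):
    # Codec-agnostic split of a payload into MTU-sized RTP packet payloads.
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

frame = bytes(3000)  # one encoded video frame (hypothetical size)
mtu = 1200

# SFrame: encrypt the whole frame once, then packetize the ciphertext.
sframe_packets = packetize(seal(frame), mtu)

# SPacket: packetize first, then encrypt every RTP packet individually.
spacket_packets = [seal(p) for p in packetize(frame, mtu)]

# SFrame pays one tag per frame; SPacket pays one tag per packet.
print(sum(map(len, sframe_packets)))   # 3004: frame + one tag
print(sum(map(len, spacket_packets)))  # 3012: frame + one tag per packet
```

This is where the per-packet overhead discussed below comes from: the tag count scales with packets per frame, not with frames.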
H
Also, even if you are applying SFrame on a per-packet basis, you still need to do most of the work — you need to solve most of the issues that you have with SFrame. You will not need a new packetizer, because you are going to use the RTP packets as the input to SFrame, but you will still need to negotiate the support of both ends in the SDP.
H
The counter is incremental, so not all the header fields have the same length, and not all the frame counters have the same length either. Basically, if you calculate the overhead from the frames per second and the packets per second: SFrame has a fixed overhead with respect to the bandwidth, while the SPacket overhead increases as you send more bandwidth, because each frame will have more packets. So you could have —
H
So
you
could
check
the
the
overhead
in
video
between
and
in
using
these
two
different
encryption,
algorithms
ace
or
a
c
cm
or
gcm,
and
with
this
frame
always
taken
out
into
account
30
frames
per
second,
you
will
have
3
kilobits
per
second
as
overhead,
while
with
this
packet,
you
have
24
or
23
gigabits
per
second,
and
with
dcm
you
will
have
5
kilos
per
second
in
overhead
in
this
frame
and
on
a
and
40
you
are
doing
this
packet.
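The shape of those numbers follows from simple overhead arithmetic. The per-frame and per-packet byte counts below are assumptions chosen to roughly reproduce the CCM figures quoted above; the actual values come from the slides, which the transcript does not include.

```python
def sframe_overhead_kbps(per_frame_bytes, fps):
    # SFrame adds a fixed header-plus-tag once per frame, so the overhead
    # depends only on the frame rate, not on the video bitrate.
    return per_frame_bytes * 8 * fps / 1000

def spacket_overhead_kbps(per_packet_bytes, fps, packets_per_frame):
    # SPacket adds the header-plus-tag to every RTP packet, so the overhead
    # grows with the number of packets per frame, i.e. with the bitrate.
    return per_packet_bytes * 8 * fps * packets_per_frame / 1000

# Assumed: 13 bytes of header+tag, 30 fps, 8 packets per frame.
print(sframe_overhead_kbps(13, 30))      # 3.12 -> roughly the "3 kbps" figure
print(spacket_overhead_kbps(13, 30, 8))  # 24.96 -> roughly "24 or 23 kbps"
```

The same functions also show the next point made below: spatial or simulcast layers multiply the SFrame term once per layer, while audio (one packet per frame) makes the two schemes identical.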
H
This is not taking into consideration that if you are sending, for example, spatial scalability, you will have to add this overhead per spatial layer; or, if you are doing simulcast, you will have to add the SFrame overhead for each spatial or simulcast layer.
H
Also, this is only applicable to video. If you are mixing audio and video, the total overhead will be lower, depending on the mix between the video bitrate and the audio bitrate you are sending, because, as we said at the beginning, for audio the overhead is exactly the same, so there is no gain there. And note that the counter is incrementing per frame in one case and per packet in the other.
H
With packet encryption, the SFU or the receiving side needs to know whether a packet it receives has encryption turned on or off — that is, when it receives a packet, whether it needs to pass it through the SPacket decryptor or not.
H
So the proposal — the same one we had for SFrame — is to negotiate one new payload type in the SDP and multiplex all the different negotiated codecs on it, in order not to increase the number of payload types being negotiated. Otherwise you would need, for example, one for each H.264 format — so you would be needing eight, and you would be doubling or even tripling the number of payloads in use.
H
We want to negotiate one payload type for the media, one for its RTX, and then the opaque payload type. And it will probably require some work for audio, because we will probably need to negotiate an additional payload type per clock rate for each codec — I don't know if it would be proper RTP to just switch the clock rate of the packets depending on something external. So I think we will probably need to do just that.
H
So again, as you are multiplexing the payload types — the codecs — inside the same payload type in order to reduce the number of payload types being negotiated, you need to signal the original payload type of the packet, which we call the associated payload type. We propose sending it in a header extension, just to signal which original payload type has to be restored once the packet is decrypted. And since there was one free bit —
H
— we propose to use that bit as a kind of iframe indication, to be able — without the full metadata, if you are just forwarding video and not doing SVC or anything complex — to know when you can start forwarding a new video stream, or, for example, if you are multiplexing between different SSRCs, when you can switch the multiplex and start sending the new stream.
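A one-byte header extension along the lines just described — 7 bits for the associated payload type plus the spare bit reused as a keyframe/switching-point flag — could be packed like this. The exact layout is an assumption for illustration; the draft defines the real wire format.

```python
# Hypothetical layout for the "associated payload type" header extension:
# bit 7 = keyframe/switching-point flag, bits 6..0 = original payload type.

def pack_apt(payload_type: int, keyframe: bool) -> int:
    assert 0 <= payload_type <= 127  # RTP payload types fit in 7 bits
    return (int(keyframe) << 7) | payload_type

def unpack_apt(byte: int):
    return byte & 0x7F, bool(byte >> 7)

b = pack_apt(96, keyframe=True)
print(hex(b))          # 0xe0
print(unpack_apt(b))   # (96, True)
```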
H
— and it will be decodable. Then there is the SFrame metadata; this is probably the least clear part. The content will be encrypted anyway, but the SFU still needs to do the switching, and for SVC and simulcast some SFUs need to know the resolution and the bitrate of each stream in order to do the switching, plus the codec information — like the profile and level — and several other pieces of metadata that could be needed. All of this metadata is currently extracted from the RTP payload, or even from the video codec itself, by parsing the actual data stream of the codec.
H
So
this
is
soft,
obviously
not
possible
in
if
you
encrypt
the
the
either
the
frame
or
the
packet
in
any
way.
So
you
we
need
to
provide
a
way
to
provide
this
metadata
in
the
in
a
header
extension
that
is
not
encrypted
between
the
the,
in
this
case,
the
the
the
sender
and
the
svu.
H
So, next slide, please. What would be the delta if we implement everything required, going from per-packet encryption to per-frame encryption? The only thing really missing is the RTP packetization, which, at least from my point of view, is kind of easy to implement, because you just need to split the bytes of the SFrame payload into several RTP packet payloads up to the MTU, without any codec-specific boundaries.
H
As
a
again,
the
the
main
problem
was:
how
do
we
specify
the
the
what
is
a
frame
in
all
the
the
codex?
But
if
we
narrow
the
the
the
of
the
potential
codes
to
be
used
to
just
the
excuses
for
bpa,
bp9
and
ev1,
it
is
easy
to
to
specify
what
what
is
the
assad
by
format
for
each
of
them.
I
mean
there
is
already
a
lot
of
standards
to
apply
either
just
going
to
the
specific
codec
and
codec
and
and
reference
where
they
are
defined.
H
I
mean
mp,
mp4
and
with
media
contents.
The
formats
are
ready
to
do
so
same.
Does
web
codec
and
same
is
going
to
do
media
very
quick.
So
I
don't
think
that
it
is
something
reasonable
to
say
that
we
can
specifically
define
which
are
these
frames
for,
and
this
frame
format
for
each
of
the
the
packets
that
we
are
going
to
support
in
the
or
that
we
could
support
in
these
multi-codec
packetizers.
H
We were not agreeing in the past about SFrame, or how to carry SFrame in RTP. But obviously, whether we do SPacket or SFrame, end-to-end encryption is a must for RTP and we need to find a way to solve it. So, whether it is SPacket or SFrame, all the mechanisms that we have presented before need to be covered somehow.
J
So you did this work to see how it would look at the per-packet level instead of frame-level end-to-end. But from that investigation, it sounds like you would prefer not to do that — you don't see a compelling benefit or reason to do it, and you would prefer to stick with SFrame as the primary option for people. Is that right?
H
In
fact,
I
don't
have
a
strong
opinion
in
one
versus
the
other.
I
think
that,
for
example,
what
would
if
we,
if
we
are
able
to
solve
or
the
problem
that
we
have
presented
for
this
packet,
it
will
be
80
or
90
percent
of
the
world
needed
for
a
stream.
J
Yeah. I think it was maybe Colin Perkins who had the strongest opinion that RTP payload formats were designed to handle splitting the frame payload into packet payloads explicitly per codec, because they wanted codecs to have the flexibility to find the best mechanism for that themselves. In either case, we have to have an encapsulation format — which I think is what you're calling that opaque payload type — and that encapsulation format has to be there whether you're doing SFrame or SPacket.
J
So it's really just a question of whether or not we remove the frame-to-packet packetizer from the crypto layer and put it back into the codec layer, letting the RTP payload formats live as they are. I think that was the biggest pushback before, and I think Colin was the loudest voice I remember in that camp, yeah.
H
But if we can agree on the common mechanism between SPacket and SFrame, the rest is just a matter of closing the remaining gaps — something very small. Because even if we decide to go with SPacket, we will have almost everything we need completed, and if we go with SFrame, we will have like 80 or 90 percent of what we need in place. So if we can agree on this 80 percent at the beginning, we can leave the other 20 percent for later.
J
Got it, okay. One final comment: that one bit of metadata you added seemed a little out of place, because you said we're going to have to have metadata anyway from something else — some other RTP extension or payload header or something — but you stuck in another bit of metadata. I think it's intended to be an iframe bit, or a switching-point bit — but is it the first packet of a switching point? Well, it's —
H
If
imagine
that,
for
example,
if
you,
if
you
are
not
doing
sbc
or
anything
complex,
probably
you
will
not
need
the
db1
dependency
descriptor,
and
you
only
need
to
know
when
you
can
start
sending
the
forward
in
the
stream
with
an
iframe,
and
I
think
that
we
can
just
we
have
one
extra
bit
in
the
in
that
it
is
in
the
in
the
the
header
that
we
propose
to
carry
the
the
associated
pilot
type.
F
Yeah. So here, the opaque one — that could be any of them, it could be VP8 or VP9, right? It's a payload which — so, in a weird case where you negotiated only opaque, how would the endpoints know what codec was coming?
H
Well,
our
proposal
right
now
is
that
a
you
can
only
send
a
one
rtp
packet
inside
the
pack
if
you
have
negotiated
it
in
the
in
the
zp.
So
this
knows
this
does
not
replay
the
sdp
negotiation,
so
you
have
to
first
negotiate
the
standard
packetizer,
for
example,
for
vp8
or
finite,
and
then
you
can
use
it
in
within
opaque
because
you
need
the
associated
pilot
type
to
know.
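In other words, the opaque payload type does not carry codec identity by itself; the receiver resolves it from the SDP-negotiated payload types via the APT extension. A small sketch with invented names and payload-type numbers:

```python
# Sketch (assumed names/numbers) of how a receiver resolves what is inside
# an "opaque" payload type: the codec's normal payload type must already be
# negotiated in the SDP, and the associated-payload-type (APT) header
# extension says which one a given opaque packet wraps.

negotiated = {96: "VP8", 98: "VP9"}   # from the SDP, as usual
OPAQUE_PT = 100                        # the single multiplexed opaque PT

def resolve_codec(pt, apt=None):
    if pt != OPAQUE_PT:
        return negotiated[pt]          # plain, unencrypted packet
    if apt not in negotiated:
        raise ValueError("opaque packet wraps a non-negotiated payload type")
    return negotiated[apt]             # encrypted packet: APT gives the codec

print(resolve_codec(96))        # VP8 (cleartext path)
print(resolve_codec(100, 98))   # VP9 (opaque path, resolved via APT)
```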
F
What's
what's
inside
so
you'd
negotiate
the
clear
text
one
and
then
basically,
once
you
found
out,
the
other
person
did
bpa
to
rp9.
You
would
put
that
into
the
opaque
and
clarify
it
with
the
apt
header
extension
right,
yeah,
okay,
back
up
a
slide
up
relating
to
the
incompatibility
with
insertable
streams
back
one.
H
If we want to use the same model that we have today — exposing the data to JavaScript, so the encryption is done in JavaScript — then, instead of sending frames to the insertable-streams process, you will have to do it after the RTP packetization and send all the RTP packets to the JavaScript. I'm not sure, in terms of performance, whether it would be feasible to send all of this communication between the JavaScript and the C++ side.
H
It would still be insertable streams, but working on a per-packet basis instead of a per-frame basis, and I'm not sure whether, for the people that are using this for other things — not for end-to-end encryption — it would be worth it for them or not.
F
Okay,
another
question
kind
of
to
separate
out
the
changes
between
s
packet
and
s
frame.
Do
you
just
in
particular,
I
know
the
overhead
was
one
reason
behind
s
frame:
do
you
need
necessarily
to
have
the
authentication
tag
on
every
packet
in
s
packet.
H
If
you
are
just
applying
the
the
s
frame
to
each
of
the
packets,
yes,
if
you
probably
there
is
something
that
could
be
done,
but
I'm
not
agreed
to
guys.
So
I
cannot
tell
if
we
can
do
something
smarter
if
we're
just
trying
to
to
reduce
the
the
attack.
But
if
you
try
to
do
this
frame
authentic
engraving
in
upper
packet
base,
you
have
it.
I
mean.
F
Okay,
all
right,
thank
you.
L
Harold
and
I'm
a
little
bit
puzzled
by
this
approach,
what's
seen
what
it
seems
to
me
that
you're
doing
is
defining
two
ways
of
accomplishing
the
same,
approximately
the
same
thing
and
then
and
then
planning
to
discard
one
of
them
where
we're
closer
to
the
end
game.
Is
that
the
correct
description
of
your
plan.
H
So
if
the
the
requirements
that
or
what
we
need
to
implement
for
forex
packet
is,
we
will
need
first
frame.
H
So
is
if
at
least
we
can
agree
on
what
we
need,
regardless
of
what
we
decide,
we
have
like
all
the
work
done
for
us
packet
and,
like
eighty
percent
done
for
eight
percent
of
the
work
work
done
for
explain
and
if
we
start
to
discuss
about
you,
we
are
going
to
do
a
spaghetto
stream.
Maybe
we
can
be
discussing
in
this
20
percent
forever.
H
I
would
say
that
at
the
end,
we
will
not
have
two
specs.
We
will
have
to
choose
one,
but
I
won't.
I
don't
want
to
to
have
this
discussion
block
until
the
the
people
that
want
a
packet
encryption
or
the
people
that
will
not
perform
encryption
agreed
to
a
common
ground,
so
at
least
if
we
can
work
within
this
month,
while
we
discuss
the
the
other,
the
other
thing
in
parallel,
it
would
be
easier
also.
H
I
think
that
if
you
want
the
s
frame
or
one
of
the
claims
that
that
they
were
done
in
the
in
the
past
meetings
is
that
doing
the
the
end-to-end
ingredient
in
upper
packet
base
was
much
more
easy
to
specify
that
doing
do
it
within
a
winner
or
in
a
perfect
base.
So
and
this
what
we
have
so
here
is
that
the
word
needed
is
almost
the
same.
A
Yeah,
so
I
had
a
question
about
how
can
I
grab
the
flights
myself
to
talk
about
this?
Yes,
renegotiation
when
you
say
it
duplicates
or
duplicates
the
number
of
payload
types.
That's
only
if
you
actually
are
supporting
opportunistic
s
packet
arrest
frame
right,
if
you're
saying
I'm
willing
to
do
you
know
s
frame
or
nothing,
then
it
would
not
do
not
expand
the
number
of
payload
types
required,
though
the
signaling
might
need
to
be
a
little
more.
A
You
need
to
put
the
information
about
the
associated
prototypes
in
the
in
some
other
signaling
information,
but
following
and
I
think
in
most
cases
I
don't
feel
like
in
most
cases
you
people
would
want
to
be
doing
opportunities
opportunistically.
They
want
to
be
doing
this.
I
I
know
I'm
doing
you
know,
I
think
I
I
might
or
might
not
get
end-to-end
encryption.
It's
probably
not
a
good
security
property.
A
So I think the other interesting difference between SFrame and SPacket is this: the frame metadata isn't just useful for SFUs, right? Frame metadata is useful for decoders, so they can do proper reassembly. So if you're doing SFrame, then you need that frame metadata for the decoders as well.
A
Whereas if you're doing SPacket, then you're reusing the existing payload type formats, so the decoders can use the metadata that's already in the existing payload types. For instance, VP9 — which I know very well, having just finished writing the payload format — has somewhere between four and on the order of 20 bytes per packet of extra metadata, which also speaks to your question about overhead.
A
So if SFrame is only using the AV1 extension and not including that, then that would be additional overhead for SPacket.
A
So,
and
I
think
the
the
question-
I
don't
I'm
not
sure
anybody
who's
ever
I
mean
I
agree,
s
packet
will
be
easier
to
specify,
but
that's,
I
think,
that
overhead
of
it,
I
think,
is
possibly
an
additional
argument
for
s
frame,
but
it's
you
know,
I
I
think
what
now
that
you've
said
no
we're
not
doing
it's
not
for
every
possible
payload
type.
We
know
carrots
we're
defining
it.
For
these.
You
know
four
or
five
palette
types,
which
is
say
the
four
video
codes
we
care
about
plus
opus.
A
A
I
think
that's
the
main
thing
that
we're
saying
before
you
know
it's
a
simple
specification:
it's
you
know,
take
the
frame
chop
it
up
and
make
sure
you
have
the
the
and
getting
the
metadata
right
is
hard,
but
hopefully
you
can
outsource
that
to
80
watt
or
whoever
else,
so
I
would
still
probably
be
inclined
toward
that
frame
for
the
overhead,
but
I
think
that-
and
I
think
this
you
know
taking
this
approach
of
saying
we're
doing
it
for
these
six
payload
types
you
know
to
start
with,
and
additional
things
can
be.
M
Yes — from Apple. Just to comment on the idea that, on a connection, you would only use encryption for all streams: that might not necessarily be true.
M
If you look at video streaming, for instance, websites often start without encryption for the first 10 seconds and then switch to encryption. So it's really application-specific, and I would think that having the flexibility to select encryption or not is kind of a bonus there. And the second thing:
M
We are really talking about SFrame here, but there might be other transforms, and some websites, for good or bad reasons, are starting to want to do that — maybe not SFrame, but something different, for some other purposes. If we allow the opaque mechanism to be reused for various things, that's also a benefit.
F
Jonathan, I think you were still asking a question. Sorry — I didn't hear Sergio's answer, as Sergio is muted.
J
Yeah, to answer Harald's question from earlier: I don't think anybody wanted to do SPacket. What happened was that people did not want to write the SFrame packetizer, or did not have confidence that the SFrame packetizer would work well for all the current codecs that people care about and, more importantly, future ones. So I think that was the crux of what Jonathan just mentioned.
J
You know, whether the NALU headers end up inside the SFrame headers or not — something like that is what I think worried people: if you get into the nooks and crannies of all the payload formats, that's not an easy thing to normalize, not even for currently deployed codecs, not even for the handful — the three or four codecs people think they care about in the WebRTC context. And going forward,
J
it's going to be harder and harder to make all those assumptions hold, and I think that's why there was an aversion to having confidence that this generalized packetizer would work for everything. So it wasn't that people wanted SPacket; they were afraid of the fragility of the SFrame general packetizer. Maybe it's 20 percent, like Sergio says; maybe it becomes 80 percent of an addition — we'll see — but I think that's the crux of where we are on this.
H
So
at
the
same
so
it
there
is
always
I
mean
you
always
have
to
define
what
a
frame
a
video
frame
is.
So
it
is.
You
have
to
do
the
work,
but
I
think
it's
quite
easy
to
do.
For
example,
in
a
codex
is
already
into
it
that
media
work,
quick
is
going
to
do
that.
So,
if
why
other
people
can
do
it,
we
can't.
J
Yeah,
so
I
agree
with
that,
but
that
has
another
wrinkle,
so
you're
talking
about
elementary
stream
formats
binding
to
container
formats
and
those
are
usually
different
than
the
rtp
payload
formats
right.
So
in
r2,
payload
format
don't
have
the
same
container
format
binding
as
as
the
containers
right
you
don't
have
to
put
start
codes
in
you
know
and
or
or
now,
length
codes
in
in
the
rtp
payload
formats.
So
those
little
nuances
you
know,
is
what
we
need
to
decide
on
whether
we
say
okay,
everybody's
going
to
do
a
a
a
container
binding.
J
So
don't
even
tell
them
to
write
an
rttpl
format.
Just
do
an
elementary
stream
to
container
binding
and
we
take
the
container
binding
and
we
gen
we
generate
a
generic
packetization
of
the
container
binding.
You
lose
all
the
semantics
of
you
know
the
rich
semantics
that
you
think
the
codec
may
have
had,
but
you'll
get
these
normalized.
J
You
know
semantics
of
the
general
of
the
general
packetization
and
you
don't
even
need
an
rtplo
format
at
all.
You're,
not
encapsulating
rtp
anymore
you're,
encapsulating
a
a
container
format
and
that's
we
haven't
talked
about
doing
that
yet.
But
that's
another
wrinkle
that
another
direction
you
could
go
in.
F
To follow up on what Mo just said, with respect to QUIC: there is an inherent question in what we've been looking at. If you look at the difference between RUSH and RTP over QUIC, I think it's fair to say that RUSH is most likely using a container format — and that's, in fact, what WebCodecs produces, right? It produces a container format.
J
Just to clarify: the codecs will specify their elementary streams, but they won't specify their container-format binding — another group will generate the binding from the elementary stream to a container format. That's usually pretty simple, with a small, minimal overhead. It's not as complex as an RTP payload format, because it doesn't care about things like retransmission and feedback and so on. But it's not the same as the elementary stream, and for every codec I know of, they're all different.
A
Wanted
jonathan,
I
mean
the
concern
about
the
container
formats.
Is
that
you
know
one
of
them.
We
don't
want
to
give
up
on.
You
know,
obviously
on
temporal
and
spatial
scalability
and
the
container
formats.
Typically
don't
do
that
or
at
least
don't
do
it
in
anything
like
the
same
way
and
a
lot
of
what
the
brazil.
You
know,
the
stuff
that
the
payload
formats
do
is
things
like?
Okay,
hey,
I
missed
a
air
temporal
lower
frame
but
got
the
lower
tempo
layer
frame.
Can
I
go
ahead
and
feed
this
with
the
encoder?
A
Do
I
need
to
wait
for
the
retransmission
and
all
that
logic
is
not
something
that
a
container
format
that
assumes
hey
you're,
getting
a
continuous.
You
know
blob
that
doesn't
have
packet
loss,
that's
just
you
know,
coming
over
tcp
or
sitting
on
disk
or
something
like
that.
That's
none
of
that's
gonna!
It's
not
gonna
worry
about
any
of
that
stuff
and
that's
a
lot
of
what
formats.
F
So I guess we're at the wrap-up and next steps. What I was hoping is that we would take note of the action items that we have to follow up on — our next steps on the various documents. Does that make sense, Jonathan?
F
What are our next steps on this whole SFrame/SPacket thing?
A
I
don't
I
wouldn't
say
we
have
anything
that
we
have
consensus
to
adopt,
so
I
would
say
that
authors
should
continue
to
work
on
it
and
talk.
You
know
talk
to
the
various
people
were
you
know
who
would
be
the
relevant
people
and
he
needed
to
continue
to
work
on
it,
but
not
ready
for
adoption
as
a
working
provider.
Yet
it
would
be
my
personal
opinion,
but
if
anybody
disagrees
with
me,
let
me
know.
A
I think it's more that people are still sort of confused — I don't think people think anything is wrong, so much as they're still sort of confused about how it works. So I think it needs more explanation. And in particular, I honestly think the authors should decide: do they want to propose SPacket or SFrame — don't try to do both of them. Just make a concrete proposal, and, like I said, maybe work out what this looks like for even just one of the payload formats — one of the video payload formats — and say, okay, maybe the others are still TBD, but give us a concrete example: this is precisely what it looks like, this is how it would actually work, to the point where somebody can at least get a reasonable picture — you know, maybe not every detail.
B
The notes — it's my fault — basically sound like you're saying "request publication" or "do a write-up" on frame marking, okay?
F
Yeah
so
yeah,
I
think
because
we
believe
we've
resolved
all
the
issues
on
frame
marking.
There
was
a
discussion
of
one
last
issue
and
nobody
seemed
to
object
to
the
proposed
resolution.
So
I
think
it's
ready
for
publication.
Do
you
have
a
comment
on
that?
Do
you
agree
or
disagree.
A
Perfect,
there
was
also
note
to
the
effect
of
I
think
we
had
in
the
chair
slide.
Something
about
you
know
that
edc
might
also
be
ready
for
working
class.
Call
to
the
authors.
F
Just
bbc,
I
think,
is
I
don't
know
if
stefan
wenger
is
here,
but
I
think
it
was
just
bbc
that
was
ready
for
working
groups
call
and
cryptex.
D
I saw both Shuai and Stephan unmute themselves, but I don't hear either one. So for EVC we'll have to go through a few more rounds.
A
Right — well, thank you, everybody. We will see you at other sessions at this IETF, and then either in person or again virtually in Madrid — or over a virtual bridge, depending on the state of the world.