From YouTube: IETF110-AVTCORE-20210311-1200
Description
AVTCORE meeting session at IETF110
2021/03/11 1200
https://datatracker.ietf.org/meeting/110/proceedings/
A: Yeah, Alex Gouaillard has volunteered to take notes. I'm sure he would appreciate it if anybody else wanted to join him in the CodiMD.

A: Okay, good. And Bernard seems to be bouncing on and off, so maybe he's having network issues. Maybe we should wait until he's ready.

B: And this is... he will be your stand-in AD for this session; I think it's 4 AM for Murray. So... nope, here: Murray just came on.

A: I'm your chair; Bernard is, hopefully, your other chair soon. Alex Gouaillard is taking notes on the CodiMD; if anybody else wants to help out there, I'm sure he'd appreciate it. Let's see, meeting tips: hopefully, if you're already successfully here, you know most of these things. Please use your headphones when speaking, if possible, to avoid echo, among other things. Let's see.
A: And you can enter the queue by using the raise-hand tool; you'll need to unmute yourself manually. Video is separate from audio; video is appreciated if you want, but remember you have to unmute audio separately as well. The Note Well applies; hopefully everybody is familiar with it. If not, you can follow the links on this slide here, or go to many other places on the IETF website.

A: These here are the links to various things. The Jabber room is bridged here in the web, in Meetecho, so you don't need to do anything special for that; it's just there in the dialogue-balloon tab. And if anybody is not able to talk out loud for whatever reason, maybe your family's asleep, you can say "mic" in the chat. If we could have a volunteer to relay: does anybody want to relay from the chat? If not, I'll try to keep an eye on the chat, though that'll be hard if I'm also trying to chair here.

B: Jonathan, I'll keep an eye on the chat. Wow, great, thank you.
A: So if anybody has any comments, let me know. We've got a few working group items that we're going to deal with relatively quickly, and then we'll spend most of the time on SFrame encapsulation. Any comments on that?

A: Good, all right. We've had a number of drafts published since the last IETF, so that's very exciting. Many of those were in Cluster 238, so that's even more exciting. So, good work; keep up the teamwork.

A: We've done the publication request on the multi-party RTT mix. We've had a few working group last calls, many of which we'll have commentary on. We have one expired draft, which we're going to mention in a moment, and we adopted three new work items.

A: Frame marking will move to Experimental because of its implementation status, and we're removing the dependency of VP9 on it.
A: So there are still a few items to resolve, but I think we're doing pretty well there. I don't think we have this on the agenda, but I don't remember.

D: Yes, I did just publish an update that addresses all of the remaining informational things, and I tried to dig into the issues that the implementer reported.

D: I think we need to drill down a little bit more, because from what I can tell it's nothing to do with VP8 specifically; it's in both VP8 and VP9, and it's really a non-issue. I don't see anything that mandates SFU rewriting of anything, or a requirement for incrementing by one. So maybe we need to, you know, follow up on the list and figure out what really needs to happen there, if anything.

A: All right, yeah. So let's probably raise that on the list.
A: The TETRA draft: it expired more than a year ago, the chairs emailed the draft authors, and there was no response. So our proposal is to drop this milestone and work item for now, and if interest returns it can obviously be adopted again. Does anybody have any objection to this?

A: I'll mention this on the list also, but it sounds like there's no objection, so that's good.

A: So, status of this: we published -11 after the working group last call, and all the authors have acknowledged their BCP 78 and 79 obligations.
D: I raised a question on the list about what looks like an omission in the VP9 draft. I don't know if it was intentional or not, but something that was in VP8 was omitted in VP9, and it impacts frame marking; and I think it'll impact anything similar to frame marking, like the AV1 dependency descriptor, or anyone that wants to do any kind of intelligent marking or analysis of VP9 streams without having to go really deep into the bitstream.

D: Basically, it's difficult to identify whether something is a reference frame or not just by inspection of the headers, and I'm guessing that may be an omission or an oversight, because it was there in VP8 and maybe it just, you know, fell off the table. Yeah.

A: I think part of that is, you know, it's somewhat of a complicated question whether something is a reference frame once you have temporal scalability, because are you going to want people to rewrite that as a dependency of the layers that are actually being forwarded?
A: But I'll take a look at that in more detail. I mean, I don't think we want to try to actually change the bits on the wire at this point, given how much VP9 deployment there is, but I'll try to take a look and see if I can figure that out. It may be that you do need to dig into the actual VP9 bitstream, which I agree would possibly be unfortunate, but I'd rather not change it.

A: Like I said, we've got a very wide deployment of VP9 at this point, so I'd rather not change the bits on the wire if possible.

A: I mean, I guess my question is: are people actually...? I think it was, you know, those of us who were designing the VP9 payload didn't have any use for that flag. So we didn't think of it when we designed it, because we weren't using it.

A: So I guess the question is: are there people using that flag from VP8, rather than using temporal scalability, which serves, I'd say, much the same purpose with somewhat more flexibility. But it might be better to answer this on the list when I'm more awake.
A: Bernard, I see you now. Are you actually here and able to, because that would be awesome if you were. Yes? Oh, excellent, good. I saw you were bouncing in and out before, so hopefully you're stable now.

A: All right, so next: we do have some IPR disclosures on the VP9 draft, yeah.

E: I had sent those to the list earlier, and they have been posted as they came in to the list that they related to. So the question for the group is: is there anything anybody thinks we should do about this? We did not hear any suggestions on the list. Or should we just go ahead?

E: So, let me put it this way: is there anyone who objects to moving forward with VP9, given the disclosures that we've seen?

E: Anyone? Put up your hand. I don't think so.
A: And finally we have... So Christer raised some questions on the SDP issues. As a meta-issue, he sort of raised that we probably should be sending these things to the SDP directorate, but we didn't, so we'll try to do more of that going forward. But specifically on this, he raised some issues on the SDP negotiation.

A: My proposal: there are three parameters on VP9: profile-id, max frame rate, and max frame size. My proposal is that the profile-id needs to stay unchanged in answers and updated offers; if you want to change the profile-id, you negotiate a new payload type number. Whereas max frame rate and max frame size are declarative.
A: Either side can add or change those in either an answer or an updated offer. And then, relatedly, Christer also pointed out there wasn't any language saying what somebody who receives an SDP, as a media sender, actually has to do in response to these; and the important question there is whether, for max frame rate and max frame size, you SHOULD NOT or MUST NOT exceed them.

A: Excuse me. So I'd say there, my inclination would be to say SHOULD NOT, but I see Justin in the queue, so he can weigh in.
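(As a rough illustration of the rule being proposed here, the sketch below checks an answer against an offer. It assumes the VP9 fmtp parameter names profile-id, max-fr, and max-fs; the helper functions are hypothetical, not text from the draft.)

    # Minimal sketch (not from the draft) of the offer/answer rule under discussion:
    # profile-id must not change between offer and answer for the same payload type,
    # while max-fr and max-fs are declarative and may be added or changed freely.

    def parse_fmtp(fmtp: str) -> dict:
        """Parse 'profile-id=0;max-fr=30;max-fs=8100' into a dict."""
        out = {}
        for part in fmtp.split(";"):
            if "=" in part:
                key, value = part.strip().split("=", 1)
                out[key] = value
        return out

    def answer_is_acceptable(offer_fmtp: str, answer_fmtp: str) -> bool:
        offer, answer = parse_fmtp(offer_fmtp), parse_fmtp(answer_fmtp)
        # profile-id is negotiated per payload type: the answer must echo it unchanged.
        if offer.get("profile-id", "0") != answer.get("profile-id", "0"):
            return False
        # max-fr / max-fs are declarative limits of the receiver; any value is fine.
        return True

    assert answer_is_acceptable("profile-id=0;max-fs=8100", "profile-id=0;max-fr=30")
    assert not answer_is_acceptable("profile-id=0", "profile-id=2")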
F: SHOULD is probably appropriate here, simply because primarily it's a pixel-rate limitation, if one exists; and for a screen share or something like that, you may find yourself exceeding one of these dimensions. It just looks like a pretty heavy hammer, given that this is marginally advisory.

A: All right, that was my inclination too. So, like I said, the other important question is: for both of these things, will any existing browser applications choke, or will they just read this as advisory and try to decode everything? My suspicion is they try to decode everything, but I just wanted to make sure that was the case.
A: Are you still in the queue, Justin, or... I don't know if I need to remove you. All right. So yeah, it sounds like I'll make that a SHOULD, and I'll do the rest of these; just one more comment there, and then I'll publish a draft -12. And then our belief is that we don't need to do anything further; we don't need to do any additional last call, because these are basically just the resolutions for the issues from the working group last call. So we'll do a publication request once -12 is out; well, I guess once we also address the issue that was just raised.

A: All right, so that's it for that, and next we have JPEG XS.
G: So, next slide, please.

G: Thank you. So the SDP questions from Christer were about Section 6.2.4 in our RTP payload spec, which basically contained just a single sentence saying that all parameters are declarative; but we rewrote it to be more correct and to contain more information, to describe what we actually want.

G: So that's a rewrite, and it's a full paragraph that was added to the document. And then there was a question on optional versus mandatory parameters, but we believe this was already clear in the text, because we really have one part where it says mandatory parameters and then another part where it says optional parameters.

G: So I don't think this was really an issue, unless we misunderstood the question. To be clear: all parameters are optional, except for the rate parameter and the transmode parameter. And then we used wrong terminology in the text, so we replaced the wording "SDP object" everywhere with "SDP media description".
G: I think that's correct; unless somebody tells me it's not, please let me know if it's still not correct. And then there was another remark: basically, one of the optional parameters, the interlace parameter, was accidentally spell-corrected to become "interlaced" in revision 7 of our draft. So I changed this back to be "interlace" again, as intended, and this aligns with the other RTP standards and also with the ST 2110 specification. So it should be "interlace".
G: That was a good fix. And then we also had a small side question on the clock rate: indeed, we originally specified that the clock rate SHOULD be 90 kilohertz, but that left an extra liberty, and I think the comment was correct that it's better to say it MUST be 90 kilohertz. So we changed that and made it a mandatory setting, and I submitted draft -09, which normally addresses all the questions and remarks.
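(A minimal sketch of the now-mandatory clock rate check; it assumes the JPEG XS media subtype jxsv, and the validation helper is hypothetical, not from the draft.)

    # Hypothetical check that an rtpmap line uses the mandatory 90 kHz clock rate
    # for JPEG XS ("jxsv" is the media subtype assumed here).
    def jxs_rtpmap_is_valid(rtpmap: str) -> bool:
        # Expected form: "a=rtpmap:<pt> jxsv/90000"
        try:
            _, encoding = rtpmap.split(" ", 1)
            name, clock = encoding.strip().split("/", 1)
        except ValueError:
            return False
        return name.lower() == "jxsv" and clock == "90000"

    assert jxs_rtpmap_is_valid("a=rtpmap:96 jxsv/90000")
    assert not jxs_rtpmap_is_valid("a=rtpmap:96 jxsv/48000")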
G: So I think, yeah, the text is done, and I would like people to review it, maybe again, or to move it forward in the pipeline. So that's it. Thank you.

E: Yes, all right. So then the next step, I guess, would be chair review and a publication request.

E: Go ahead; please put that in the minutes, so we know what the next step is on everything. Thank you.

G: Okay, yeah. And if any questions pop up, I'm available, at your disposal, so let me know. Thank you.

A: So yeah, we chairs will do our chair review, make sure everything's been reviewed, and then we'll go with the publication request, and we'll figure out whether Bernard or I is driving.
F: Next slide, yeah. So the one thing remaining right now is inclusion of the test vectors into the document, you know, to demonstrate proper encryption using Cryptex. Once we have that, I think we should be ready for last call. Sergio is currently working on those test vectors.

A: Cool. Anybody else have any comments? If not, hopefully they'll be able to get that done soon.

E: Justin, do you think we should be ready for implementation very soon? For implementation, yeah.

F: Yeah, I don't know the time frame of when that would actually be implemented within Chrome or whatever; that's no longer exactly my remit now, but I've discussed it with the team here, and it's probably something that people start to look at, you know, in the coming months.

I: I'll just point out that, for our RTP implementation, Cryptex is included, with the test vectors that are used in the spec and also by Jonathan in his implementation; so it should be just a matter of putting them in.

A: All right, that's it. Next is...
C: Yes, oh, that's great, okay. All right: morning, cup of coffee. So this is going to be a quick update for the EVC draft. Thank you. So, just a kind of quick recap of what we decided for the EVC draft a couple of meetings back: no [unclear], also no support of multiple RTP streams mixed over multiple transports, and also no DON-based signaling for [unclear] support. So that has all been agreed.

C: I believe it was two meetings back. And the last two items, no support of frame marking: that's really the decision the co-authors made since the last interim, given frame marking's removal to Experimental, so it doesn't make any sense for us to support it anymore. And then also we removed the SLI and also the RPSI support.

C: That's also basically trying to align with the VVC draft. Next slide, please.
C: Next up. So the EVC draft has been adopted as a working group draft since IETF 108. We recently were very honored to have Youngkwon Lim from Samsung join as co-author; I'm not sure if you want to say a few words after this, so I will leave that to him. In the recent revision, which is the -01 working group draft, we mostly focused on the EVC codec specifications.

C: I think everything is getting there; there are only a few editor's notes left, but I think we're going to get this codec specification part done very soon.

C: You actually skipped one, yeah. There you go. So there are still a few placeholders we're trying to address in the draft, and the major thing we have left is Section 7, the optional parameters for the SDP.
C: We believe we have agreed that the EVC payload draft will follow VVC, and the VVC draft is almost getting there. We have a new version, which I'm going to talk about later, that has most of the parameters down, so we're actually going to start to fill in these blanks in the EVC draft.

C: We will probably follow closely along with the VVC draft, so I think it's reasonable for us to target working group last call around summer 2021.

C: You know, at that time I think VVC should be done already, I mean working group last call, hopefully. Yeah, that's pretty much it; we've only had two revisions since 109, and that's all the updates I have.

J: Thanks for having me as an author of this Internet-Draft. Personally, I've been involved, you know, for...
C: Yeah, I think basically around summer is really reasonable for us, without stressing too much on the timelines. Sounds good, thank you. All right, still me again, and good morning. Here's the update for VVC; next slide, please.

C: So we did three revisions since 109, mostly trying to... there's lots of echo... all right, so mostly trying to address the editor's notes, as we promised at the last meeting. There were 24 of them, so lots of them, but we're making pretty good progress; I think only four or five are left at this moment.
C: This is a kind of quick look at what happened in -07; as I mentioned, we have a new one, -08. -07 was the first batch of editor's notes we were trying to resolve, basically the OPI-related SDP parameters. Stephan Wenger sent out a really detailed rationale for why we want to do that, so I don't want to just repeat whatever he said already.

C: Yep, yeah, right here. Okay, as I mentioned earlier, it was a late submission, so please go ahead and review them. We did more than a dozen updates and I probably don't have enough time to go through each of them, but here I'm just going to bring up some things that we think are very important; we will send the update to the mailing list.
C: I think it's also reasonable. And actually, over the last two meetings we have been asking for working group opinions regarding whether we should remove the SLI and RPSI RTCP feedback modes and not support them. From our point of view we don't see the usage of them, so we simply removed them.

C: The one last thing I want to mention in this meeting is the reserved R bit in the fragmentation unit header. That's also something we brought up a couple of meetings back already, but we never proposed anything, so in this meeting I have a slide to discuss what we propose for the R bit.
C: Next slide, yeah, right here. One of the things we actually removed is one of the informative notes under the marker bit for the RTP header; similar content, which we had copy-pasted from HEVC and then aligned with VVC, but the co-authors just found it confusing, so it was removed.

C: Also, SDP parameters inherited from HEVC, things like maximum video bitrate, picture size, and so on: just from our experience, and the co-authors', there's little usage, practically no usage at all, of them from HEVC, so we decided not to support them. If you have any different opinions we do want to hear them; maybe we missed something. Next slide, please.
C: Next, next, here we go. So here's the reserved R bit that we talked about, in the fragmentation unit header. We have brought this up since IETF 107, and the proposed text in the following is really from our production team; they have their usage for it. So what is this?

C: What we propose here is really to use the R bit to signal whether this is or is not the last NAL unit, in the fragmentation unit header. When it is set to one, the R bit simply says this is the last NAL unit for the fragmentation unit; or it can be set to zero, which means it is not the last one.
C: Concept-wise it's similar to the M bit, the marker bit, in the RTP header, but we don't see much usage of the marker bit on the video side. Maybe we missed something, but that's the current conclusion we reached, and we think that's a pretty good usage for the R bit; otherwise we just leave it blank and call it reserved again.
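(A minimal sketch of the proposed R-bit-style signal, assuming a simple fragmentation loop. The field names and the dictionary layout are illustrative only and do not reflect the actual FU header bit layout in the VVC or EVC drafts.)

    # Illustrative only: fragment one NAL unit into FU payloads and carry a
    # "last NAL unit" flag, in the spirit of the proposed R bit.
    # The real FU header layout is defined by the payload draft, not here.
    def fragment_nal(nal: bytes, mtu_payload: int, is_last_nal: bool):
        fragments = []
        for offset in range(0, len(nal), mtu_payload):
            chunk = nal[offset:offset + mtu_payload]
            fragments.append({
                "start": offset == 0,
                "end": offset + mtu_payload >= len(nal),
                "last_nal": is_last_nal,  # proposed R-bit-style signal
                "payload": chunk,
            })
        return fragments

    fus = fragment_nal(b"\x00" * 3000, 1200, is_last_nal=True)
    assert fus[0]["start"] and fus[-1]["end"] and all(f["last_nal"] for f in fus)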
C: That's going to be the last one. So we only have four editor's notes left, down from 24, which is great progress, and we only have one large section left, which is the SDP offer/answer section; that is not going to be very difficult, given that we have all the SDP contents already. And we are targeting working group last call around June 2021.

C: I think probably earlier than that, but I think June should be a good time. And that's all for VVC.

E: Yeah, I wanted to thank you, by the way, for making good progress removing things that weren't required. That actually makes the spec much more readable now.

C: All right, sure.
A: Yeah, all right. I just had one comment; I'll probably better raise this on the list, but on the R bit: I'm just curious how your use of the R bit interacts with spatial scalability.

L: I'll start a little bit and they will take over, okay? Yeah, sure. Okay, so I'd like to present a few slides, up to getting to a set of requirements that we gathered with Sergio and Alex, and maybe from there take questions.
L: That said, they do that currently in a very ad hoc way, and that's true; we can say it relies on a few hacks, as presented in the link shown and as discussed at IETF 109.

L: So we think it would be good to rationalize what is being done by these products, make sure it stays as consistent as possible with the RTP ecosystem, and in the end provide either technical solutions or guidelines.

L: So, with Sergio and Alex, we looked at that, and we thought that the current architecture, like the RTP media pipeline, can probably be adapted to allow these products to do what they want to do without too many changes.

L: And that's why it relies on this idea of a codec-agnostic packetizer, and the proposal we are trying to explain now. So at the last AVTCORE meeting, and also at Tuesday's SFrame meeting, there were a lot of questions about this packetizer and how it would deal with SVC or redundancy mechanisms, so we have slides that explain that and I will present some of them now. Next slide.
L: Then, in a browser like Chrome, there will be a transform, implemented in JavaScript, where the encoded frame and the metadata are exposed, and the transform, using JavaScript, will typically encrypt the frame. With the current API design the metadata is read-only, so you cannot change it, and that makes sense.

L: So after the transform we now have some transformed data; in the slide it's in red, and we keep the same metadata. The transformed data is no longer valid H.264 or valid VP8 content, but the metadata is still applicable to what was the underlying content.

L: I just want to point out also that the metadata can also be audio. For instance, the voice flag that you can find in some RTP header extensions is typically produced by the encoder, and you cannot use the transformed data to try to generate RTP header extensions like the voice flag, or any kind of RTP header extension, because it's encrypted, basically. And that mirrors well what any proxy will also have to deal with: a payload that is encrypted, where it just uses RTP header data to do its processing. The next slide, which might be a bit more interesting, is: what about VP9 SVC?
L: So there it's very, very similar. We have a raw frame, and the encoder will produce, like in this example, three subframes, so probably a base frame and then two dependency frames, and each subframe is related to some metadata. So you have subframe one, subframe two, subframe three, and each one has metadata; and then there will be a transform, and again the transform will be applied separately to each subframe, and so you will have transform one, transform two, transform three, and so on.

L: After the transform, the packetizer will first deal with transform one plus metadata one, and it will produce a first set of packets. Again, it's the same strategy: the payload is done, you packetize against the MTU, and you only rely on metadata one to generate headers. And you first transmit the base-layer-related packets, then the dependencies, transform two, then transform three; and in a typical case they will all share the same timestamp, like in a single-stream, single-transport case.
L: I would prefer to go to the requirements, there are only two slides, and then we can take the queue, if that's okay. Okay, thanks. So, getting back to the point Colin made in the last meeting and in Tuesday's SFrame meeting: we do not remove anything. There are two places, two different processings: there's a codec-specific processing which happens before the transform, and a codec-agnostic processing that happens after the transform. The transform itself can be codec-specific, or, in the case of SFrame, we hope it will stay codec-agnostic, because that's simpler. But clearly the RTP media pipeline is codec-specific, and the codec-specific processing can even be application-specific, in the sense that here we are saying VP9 will produce three different subframes; you could say an application will tell its H.264 encoder to produce two subframes as well, with whatever heuristic it wants, and then the subsequent pipeline should be roughly the same. And the last slide, before taking questions.
L: The first thing is that we do not want to disrupt existing mechanisms, like the mapping to RTP streams, redundancy mechanisms, or existing feedback mechanisms as well. All of that should work well, and maybe they could be optimized later on, but that's fine; they should still be usable out of the box. And in terms of applicability, we really want to be able to apply it to simulcast, and to support single RTP stream on a single media transport for SVC.

L: We also think we need to do negotiation there, to negotiate that payload format, and we also think that, to succeed...

M: Sorry, it just took hitting unmute a couple of times to get it to work. Okay, so, can you go back a couple of slides? Anyway...
M: Let me go forward one slide; this one might be easier. So, first of all, describing the packetizer as being after isn't really accurate; this is sort of wrong. I mean, what came out of the encoder had to be packetized, at some level of packetization, before it got passed into the transform, okay? There may be further packetization later of a different type, but it was packetized up there. And my concern is around having two places to put information. So, the whole idea of some sort of generic type of information content for generic codecs: at some level I don't think we'll ever achieve it. There will always be new things that come up, like when we move to holographic content; even the 360-degree stuff probably would need some metadata that isn't covered by the current draft. So there will always be extensions to it. But I do like the idea: particularly for flat 2D codecs, we should be at the point of getting pretty good at this. So I support the general idea, but the question is: does it go in the payload section or the header section? And the RTP architecture has really gone down the path that the header is where we put this information, and I don't see that as being incompatible with this information. I just think that here, basically, you've already packetized at the encoding, at least partially packetized, you're passing into a transform, and as long as the transform can modify the header that's going to end up in the RTP, as well as the data that's going to end up in the payload, that's just an implementation detail of your specific implementation. It's not something we should drive the whole standard by; you should just do that, and then, after the transformation, we can have this metadata sitting in the header data, where we already have a whole bunch of machinery to do it. And the reason I like that better is that I'm really worried about the case of having the data in two places and not matching, and that will inevitably happen. I think this is just trying to flip the architecture of RTP, of where we put this information, which is completely wrong, and you don't need that for any reason; you can just put it in the right place and have the same information you have right now, and that's no big deal. So, could... I mean?
L: Yeah, so... I agree, it's chunking the data, and either the encoder or the application is doing that, and what we're saying is that the chunking is really application-specific, right, and as part of the chunking...

L: To me, in our use cases it's codec-specific, but I don't see why some applications would not be able to specify: yeah, I want to use H.264 in a specific way so that this slice is its own subframe and I will output it separately. That's fine as well; I don't want to preclude it, I don't want to define it. Yeah, right. So, about the data being somehow redundant:

L: If you look at frame marking information, or the dependency descriptor, you will have things like: is it a keyframe or not. However it is articulated, it's also in the payload at some point, and so there's always redundant data there, and that's fine.
L: One of these redundant pieces of data is for the SFU, and the other one is for the end recipient, and we need to extract some of that information from the encoder for the SFU. This extraction should be carefully studied, because it has privacy implications, and I would like us to provide guidelines there, so that we say: okay, this particular information is not privacy-sensitive, it's useful for the SFU, so let's allow it, you can use it, it's safe; and this other information, maybe not, you know, like audio level, maybe not, things like that.

M: Look, I agree with you on that. I think my question is about why that data needs to... The RTP architecture is to stick that in the header; why does it now need to be somehow stuck in the payload? That's my question: why don't you do the same thing but stick it in the header? No, but we are putting it in the header.
L: Yeah, and if you look at the VP9 packetization, somehow you have the information in the payload that it's a keyframe, and it's also in the RTP header. And what we're saying is that, since the payload is now encrypted, we don't care whether it's there or not; we will not use the payload information, and we really need to put this information that is really crucial in the headers.

E: So yeah, I would like to follow up on what Cullen just asked, and here's my framework for asking these questions.

E: Actually, I think your architecture maybe is a little bit too tied to the insertable streams architecture; but even if we're just talking about the Chromium codebase, right, there's more than one architecture, because you can also do this with WebCodecs, where you actually get access to each of the encoded chunks, you can call them encoded frames, and you have the ability to build your own RTP header as well as getting access...
E: ...potentially, to the payload. And the reason I'm asking this, following up on Colin's question about application-specific versus codec-specific, is that we've just heard, for example, that some of the codecs will not use frame marking; and the issue, I think, that comes up when we discuss this is that the metadata isn't necessarily going to be encoded in the RTP header the same way for every codec. For example, for something like H.264 AVC with temporal scalability, you might find frame marking is fine, right?

E: But what we're saying here is, I think there's an issue if you're saying that every codec will have its metadata encoded the same way in the RTP header; you're going to get into issues there. Whereas in the WebCodecs kind of framework, right, you can get the metadata; the metadata actually comes as part of the encoded frame, but then you can do with it whatever you want. You could decide: I'm going to take this metadata and output frame marking, or I'm going to put the dependency descriptor in the header instead, or anything else that the IETF or anyone else might come up with. So that yields a different conception here. So I would just ask you to try to be clear about what codec-agnostic means exactly.
E: Does it really mean that everything is the same for every codec? I think that's a promise that might be difficult to fulfill over time. Or does it mean that just the generic process is somehow the same for every codec, but codecs might handle it differently? I'm not asking for an answer right now, but just to think this through.

L: Can I provide a temporary answer? That's maybe something we are discussing with Sergio, and we do not align precisely, but I think it's a very good question. My understanding is that the general processing, like the processing that we're seeing there, could be applied to any codec. The packetizer itself, like the construction of the payload using the MTU, can be codec-specific.
L: My understanding is that it's very difficult, and that's fine, and I'm not sure that existing implementations currently need it; for instance, implementations using H.264 or VP8 might stick with frame marking, and for VP9 or AV1 they will use the dependency descriptor.

L: I don't think that decision impacts how we define the packetizer and the payload format, because we have an extension point, which is RTP header extensions, and we can combine the payload format with various RTP header extensions; and basically it's the application designer who will do that at the end of the day.

A: All right, Jonathan here; I'm next in the queue, as an individual. I feel like you are making life too easy for yourself by showing the encoder half of this picture.
A: I feel like, if you showed the decoder half, then a lot more of the complexities would be apparent. In particular, because basically the decoder process, which is to say the depacketizer (the de-transport hopefully is obvious, but the reassembly in particular), and then before the decoder proper, is where a lot of the complexity arises, which you need to worry about. So in particular, the thing that you need to be able to do, as a receiving pipeline, is to tell: do the packets I've received contain enough information for me to successfully decode a video frame, or do I need to NACK something and wait for it to come in before I actually feed things to my codec? And that's in the presence of arbitrary packet loss and packet reordering and so on and so forth. So that's where I think a lot of the trickiness is going to arise.
L: Yeah, so that's true, and I thought about adding slides for the decoder, but I was maybe too lazy; and it's true that the encoder is a good example so that we can start making all these issues precise. So yeah, I can go ahead with providing information for the decoder; you're right about the packet reassembly.

L: So in the VP9 SVC case, and maybe Sergio can correct me if I'm wrong, you will set the marker bit, which will tell you: okay, you got transform one, you should not expect to receive transform two, and then you're good. And yeah...
E: But hold on one minute, Jonathan, right: you may have the DD info on the sending side here, right, because you could decide to encode that metadata with the DD and send it. But you might not, right, if you're doing frame marking instead; and if on the sender you've got the DD info but decided not to use it at all and just do frame marking, then you might not have it on the decoder side. So I think your question is still valid.

A: Yeah, I think so. So we need to figure out if there needs to be at least some minimal amount of information in the format itself. And then, relatedly, as you mentioned, on the question of how things are chunked into the stream: you seem to be saying, well, the question is whether it's codec-specific or application-specific.
A: If it's an encoder choice how the raw media stream is chunked into the pre-transform frames, then you need to figure out how much of that decision the reassembly pipeline needs to know. Does it need to be prepared for any chunking decision whatsoever that the encoder might have felt like doing, or do we need constraints in the spec on what encoders are allowed to do?

J: Yep, all right.
K: Hi, yeah. So I am a little concerned that, for something which is supposed to be codec-agnostic, all the discussion we are hearing is about a very small number of currently popular codecs for interactive video applications. RTP is used for a lot more applications, in a lot more different scenarios, and with a lot more different types of codec than we are currently hearing about, and we hope it will continue to be used in a bunch more scenarios in the future; and I think I'm not seeing something which is really codec-agnostic.

L: Yes. So if you take the previous slide and replace H.264/VP8 by Opus, it works fine; and, for instance, audio is less of an issue there, there's just less metadata to provide to make it work, to make end-to-end encryption work, and we want to have it in scope. So we definitely want to have audio and video, and the various kinds of metadata.
I: Yeah, but anyway, I mean, this is not meant to be a replacement for RTP packetization, is it? I think that, in fact, as we talk about later in the spec, we are only going to make the negotiation of the codec-agnostic packetizer be done within, or together with, the normal packetization. So it is not precluding or preventing the use of any other codec or packetization that is already available or will be implemented in the future; it is only meant to be used when you are using SFrame or another transformation that may encrypt the content, so that you cannot use the standard RTP packetization.
L: I would also say frame marking is an RTP header extension that you may want to use, or may not want to use, in your application; frame marking does not make any sense outside of a given scope, and within the payloads these packetizers produce it would make sense.

L: I think that this should be... I mean, the idea is to be able to work with all the audio and video codecs; not real-time text, for example. So that's a reduction in the scope of...
E: Just one question from the chair: I think we've had at least three questions about the meaning of the term codec-agnostic, and I would suggest that that needs to be clarified in the draft, just the meaning of that term; I think Colin had that question, and we've had at least three questions about it. Especially when you start talking about differences between codecs and still use the term codec-agnostic, it's a bit confusing. Thank you.
K: Yeah, I mean, it also seems that, in order to decode this, and in order to packetize the media sensibly, you need to know what the underlying codec is, and you need to know what the equivalent of a NAL unit, or a frame, or an independently decodable piece of the codec output, is in this particular codec. And all of this is payload-format-specific, it's all codec-specific; so, unless you're just arbitrarily chopping, you need to know the underlying payload type, and the signaling for RTP is also based fundamentally on the idea of knowing the payload type.

K: Now, I think the idea of end-to-end encryption of the payload is a perfectly reasonable one, and we've seen a number of examples of different applications and different organizations defining ways of doing that.

K: But this seems like something which can be done within the existing RTP model of payload formats, which identify a codec, by defining a common way of encrypting the output of particular codecs; and it would seem that that can be made to work with these middleboxes and these selective forwarding units in a way which is compatible with the architecture.

K: So I think, before throwing out the entirety of the way RTP has defined media handling, and the entirety of the way RTP and the associated signaling specify codecs, it would be useful to see if we can make this work within the existing architecture, and I see no reason why we can't do that.
L: So, from what I understand, what you're asking for is: you propose something; is it efficient, and can we compare it with another way of doing things that would be codec-specific?

K: I'm saying that the RTP architecture fundamentally relies on knowing the codec and fundamentally relies on things being codec-specific, and I think you can achieve your goals in a way which is generic and works with the middleboxes, and allows you to do the same sorts of transformations, whilst doing it in a way which fits with the architecture.

K: Right. I think it would also be possible to define a common way of transforming a payload, and signaling that such a transform has been done, using a different payload type to indicate that; and that would also let the decoder know how to decode it.
I: No, but I think that what Colin is saying is something different. I mean, I think what he's asking is for SFrame to not work at a frame level but work at a per-packet level. So I think that it is reasonable, but I think that SFrame would have to change in order to be able to implement this.

L: And that's why I was asking about efficiency, because we can do this approach where it's a per-packet SFrame transformation or a per-frame transformation, and we will see there will be differences in terms of efficiency as well, and that's something that people might...
K: I think you can split things... I think this can be done in an agnostic way. I think you can encrypt a frame at a time and split the resulting encrypted frame up in a way which is consistent with the RTP architecture, and signal it in a way that's consistent with the RTP architecture.

I: You would have to leave the first bytes of the payload in the clear, so it is a very big effort, I mean, if we want to go there. It was also on the first slide, where we explained why the current packetizations do not work with the current codecs. And also, I don't think it is a good thing to be signaling that you are sending normal, for example, H.264 data when you are actually just sending encrypted data.
H: The danger of implementation-driven standards work: really, as nice as this may be normally, here there are just too many oversights. So, on the terminology front, I think we have something like a semi-codec-agnostic thing: it's agnostic to certain codecs, but not to other codecs.

H: So the applicability of this technology is limited to a number of codecs, and what they want to do is to create a mechanism that would work with all the codecs they had in mind when they were doing this thing, but hopefully without assuming, or without hoping against hope, in my opinion, that it will apply to all future codecs; and we have already had an example. So the example here of why it wouldn't work with future codecs is, for everyone...
H: I just heard, from the thing that Jonathan pointed out, that you need the DD in order to make sense of this type of stuff in the decoder when you have scalability. There are tons of video codecs which do not necessarily have the same concept for temporal scalability, at least not as the DD has it, right? SHVC doesn't have a DD; the video parameter set doesn't express everything the DD expresses.

H: So, okay, the way to do this would be: don't call it generic, or make an applicability statement: this specification is intended to, and believed to, work with the following codecs, and then comes a list; and tone it down a little bit in the title, and then go for it. That's how I would approach this generic-versus-non-generic problem.
H: Let's not forget that the vast majority of bits that are going over RTP in this world today are not WebRTC bits, and I know WebRTC is the IETF technology and blah blah blah. But let's not forget that the vast majority of interactive video today is not using WebRTC, and I would not like to see others use this type of draft as a pusher for IETF technology, with no other merit than being a push.
L: So I agree with you that it will be very difficult to come up with SFUs that could be implemented in a fully codec-agnostic way, and we're not trying to solve that. What we are trying to do is to migrate the information that might be codec-specific from the payload to the RTP header extensions. And you took the example that the DD, the dependency descriptor, will be good for AV1, and we think so; we are not saying that the DD will be good for all codecs.

L: Hopefully the header extensions that we are planning to use will support enough codecs, enough video codecs, enough audio codecs, that there will not be a huge replication effort; but certainly it will evolve.
I: Also, I would be interested in knowing the specific details about VVC and how this codec does not work with the dependency descriptor, I mean, because I think it's interesting. So if you can provide on the list how the dependency descriptor may fail to include the data from VVC, that would be great.

H: If you provide that... please, I will, I will try, I'll try. So, that said, you know...
H: I think, if you are going in the direction of trying to resurrect your dream of creating something generic, at least with respect to the known codecs as of this day, then I will try to do some work there; but don't expect me to specifically argue with you about whether this is true or that is true. If you were continuing to insist that this is an agnostic thing, then it's your burden of proof to show that it is agnostic, and not my burden of proof to show that it is not agnostic.

H: You see my point, yeah. So I'm helping on a best-effort basis, but not on, you know, an obligation basis. Thank you.
F: You know, in terms of basically how it breaks up packets to fit within the MTU: I think it can be argued fairly easily that things that produce essentially datagrams can be generically packetized. The only other sort of requirement that's been placed here is that there be some way of splitting the metadata and the payload; and assuming that can be done for a given codec format, then the actual payload can be packetized using this generic packetizer.

F: So the question that then remains is: can the metadata be described in a generic way? And I think that's what a lot of people are latching onto here: can this packetization be generic or not? Because it definitely is a complicated problem whether or not the metadata for all codecs can be described in a generic way; I think that's a totally reasonable thing that people can differ on. The thing is, though, I don't think that that is really fundamental to this document.
F: Without encryption, SFUs are forced to look at these pieces individually, in a codec-specific way, to determine what their behavior should be; and even if we still had things that did not fit the generic packetizer or the generic metadata, SFUs could continue to do that. So what I would want to do is just sort of say: okay, take that off the table; this can still be useful, even if it doesn't have completely generic metadata, because it still allows the transmission of the end-to-end encryption formats and the separation of metadata and payload, which are the main things required for this to actually work.
I: Yeah, I completely agree with you. Also, just one thing: the metadata that is needed is only what the SFU requires in order to work. I mean, we are not trying to provide all the metadata in the world for it to work, only the specific data that is required for the SFU to work.

F: Exactly, yeah. Now, whether that can be fully generic is still an open question, but I think that we have a good set that seems to work well for, you know, the codecs that are known today; and even if there are ones that go beyond that tomorrow, like I said, it's not really a critical issue, SFUs can deal with that custom metadata. The only critical piece is that you have things that produce packets, and then those packets have their metadata and actual payload separated; and I think that all the codecs we're talking about can do that.
K: To add to that, and then I'll go back to the queue: I'm not sure I agree with Justin's statement that we can build a generic format. I think, at the minimum, you need to know where the independently decodable units, you know, the NAL units or whatever it is for that codec, start and finish, so you can packetize it in a way that you don't split these units across packets, and that makes it specific.

K: You need to identify the codec, so you know how to decode it; and, as Justin said, you need to separate out the headers and the metadata information from the payload and from the encrypted content. And this is fundamentally what RTP payload formats do; this is the fundamental point of the payload format concept, even if all you're specifying is how you split this thing up into pieces, how you label them, and how you separate out the contents from the headers.

K: That's what a payload format does; it doesn't have to be more complicated than that. So I think you're just describing an RTP payload format, and, rather than fight against that architecture, just accept that what you are describing fits within the architecture and work with the architecture, rather than trying to throw it away.
F: I mean, I think those things that you just mentioned: yes, we do need to identify the individual units, but really every payload format is doing this, and it does seem kind of strange that every time we have a new format we have to go define really the exact same transforms for that particular format. Whereas if we got a structured sort of input from the encoder that said, here are the frames and independently decodable units, then you could have a fairly generic way of describing that on the wire. And I think that's really all that's trying to be done here: if you have this meta-information that describes what it is you're trying to packetize, you could then have a single packetization format to basically fit this into RTP packets and MTUs accordingly, and then separate out the meta-information so that that stuff is not encrypted, you know, for the SFU. That's really all it is, and I don't really feel we're throwing away RTP here; we're just basically saying, here's the same thing we're doing every single time with a new payload format, and trying to come up with one way so we don't have to redo this all over again for every single encryption-slash-format tuple.
F: I mean, well, that's what we have here, you know, first...

M: To break this down: I don't think that is what you have here at all. So let me try and give some very pragmatic suggestions to get to what you're saying, and I think this gets to the heart of my comments. One: I think that we mean very different things by packetizer, and this is just terminology; what's important is that we line up on what we mean. Taking the stream of bits out of the encoder and figuring out the logical places to break them, based on what you'd be willing to lose, so breaking at the NAL units or whatever the moral equivalent of that is in your codec, is something that's happening in the top half of the slide that's currently displayed; it's happening before the transform, okay? And that is what most people think of as packetization.
M: I think we should get a different word for that second transform, where we slice them smaller, because I think that's causing some of the confusion in the discussion. So, one thing is about packetization, then I'm going to talk about metadata for a second, and then SFrame. So, packetization: I think we have some confusing terminology, which is really not helping the conversation here, in a huge way.
M: Then, because packetization is inherently codec-specific, there's no way for packetization not to be codec-specific, right, in the way I define packetization. Obviously, splitting things up at MTU size is by definition guaranteed to be codec-agnostic, right, totally agree, right? So that's part of the terminology confusion. Then we have the metadata. So, look, audio level is already a great example: for audio types, a fairly generic way, that works with most things I can imagine of type audio, of sending that up to the SFU.
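(For reference, the audio level example mentioned here is the RFC 6464 header extension, which carries a one-byte payload: a voice-activity flag plus the level in dB below full scale. The encoder below is a minimal sketch, not tied to any particular stack.)

    # Minimal sketch of the RFC 6464 audio level extension payload byte:
    # 1 bit voice-activity flag, 7 bits level in dB below full scale (0..127).
    def encode_audio_level(level_dbov: int, voice_activity: bool) -> bytes:
        level = max(0, min(127, level_dbov))
        return bytes([(0x80 if voice_activity else 0x00) | level])

    assert encode_audio_level(30, True) == bytes([0x80 | 30])
    assert encode_audio_level(200, False) == bytes([127])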
M: That's great; I don't think anyone would object. Let's do some of those for video as well, whichever ones make sense. We don't have to claim that they work for every video type ever; we could say, look, here are some, we believe these ones work for this list, here's a header extension, here's a definition of some metadata, and we believe it works for at least these common WebRTC video codecs, and we suspect it'll probably work for some 2D video codecs in the future.

M: And future video codecs could figure it out; we don't have to pretend that it will work for all video codecs that haven't been defined yet, and we don't have to pretend that future video codecs might not need additional metadata for the SFU for some reason, right? And we get what we want here on the metadata. And then the third thing is SFrame. I think that part of what's causing confusion here is that we're sort of overreaching on a generic transform here.

M: Well, I think that what I see we need to do here is to be able to indicate in the SDP, in some sort of negotiation way or whatever, that, look, this VP9 was SFrame-encoded. And let's just step back a little bit to think about what would be the way, if that's all we wanted to indicate, and we wanted to be able to do that for a bunch of the major video and audio codecs, how we would do that in the SDP.
M: I keep hearing the efficiency issue here, but where the overhead is high is actually the audio packets, because they're small, right? And actually it makes no difference, it's exactly the same whether you do it per packet or per frame, because the audio packets are always smaller than an MTU: there is no efficiency difference between doing it for every RTP packet and for every frame. For the video packets, I mean, we're talking about adding maybe one extra IV or something; it's a very small number of bytes that we're adding as overhead, and maybe you gave me the estimate the other day that we'd add one or two percent bandwidth for the large video packets if we did it per RTP packet versus per frame. I think we should really seriously reconsider that decision, and think about whether it would make more sense to just have SFrame applied per RTP packet versus per frame; but, to Colin's point, regardless of how we come down on that decision, I think that we can easily find a solution that continues to work.
M
So
you
know,
and
part
of
the
reason
I
like
about
that
is:
it
makes
the
losses
of
the
packets
more
decoupled
from
each
other
so
that
it
makes
some
of
the
recoveries
and
other
things
easier
to
deal
with,
but
you
know
anyway,
there's
pros
and
cons
to
that,
but
the
saving
one
or
two
percent
bandwidth
on
large
video
flows
is
just
totally
irrelevant
like
I
see
no
no
gain
in
that
whatsoever.
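(A back-of-the-envelope version of the per-frame versus per-packet overhead comparison; the frame size, MTU, and per-unit overhead below are assumed example values, not numbers from the discussion.)

    # Illustrative overhead comparison: applying the SFrame-style overhead once per
    # frame versus once per RTP packet. All sizes here are assumed example values.
    FRAME_BYTES = 6000        # one encoded video frame (assumption)
    MTU_PAYLOAD = 1200        # payload bytes per RTP packet (assumption)
    PER_UNIT_OVERHEAD = 16    # header/IV/tag bytes per encryption unit (assumption)

    packets_per_frame = -(-FRAME_BYTES // MTU_PAYLOAD)           # ceiling division
    per_frame_overhead = PER_UNIT_OVERHEAD                       # one unit per frame
    per_packet_overhead = PER_UNIT_OVERHEAD * packets_per_frame  # one unit per packet

    print(per_frame_overhead / FRAME_BYTES)    # roughly 0.3% of the frame
    print(per_packet_overhead / FRAME_BYTES)   # roughly 1.3% of the frame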
F: Yeah, okay. I mean, I agree with a lot of what was just said; you sort of just jumped ahead to that. I mean, I think that we should agree on what is meant by packetization. The authors, and I, have been using the term for basically taking something that's already been broken up into, you know, NAL units or equivalent, and then how do you put that on the wire; and I think most of us agree that, once you have NAL units, putting that on the wire is fairly straightforward.

F: I like Cullen's presentation of the audio level as the same type of thing we're trying to create here for video; I think that's a really good analogy, and I think it's something that is worthwhile to pursue. As it relates to the actual encryption and the overheads, I would still disagree that a one-to-two-percent saving is worthless.
F
You
know
there
are
people
who
work
on
transforms
for
codex
and
spend,
like
you
know,
big
parts
of
their
careers
to
save
one
to
two
percent.
But
I
agree:
it's
not
the
driving
factor.
You
know
here
it's
an
additional
bonus
and
I
I
think
that
the
chief
benefit
of
processing
things
as
frames
rather
than
as
rtp
packets
is
largely
just
the
logical
separation
that,
if
you
can
split
it
across
multiple
frames
like
you're,
not
forced
to
you
could
break
everything
into
slices.
F
Agreement
here
so
I'm
gonna
move
on,
but
I
think
there's
something
we
can
work
from
here.
E
Okay. So, as Jonathan said, we'll let Sergio and Youenn finish their presentation.
I
I
So the idea is that what we will get from the transform is a blob. It's a binary blob that we can't look into as it is: we are talking about SFrame, it is going to be encrypted, so we don't know where the boundaries of any underlying codec OBUs or NAL units or whatever are.
I
So what the packetization is going to do is split this blob, these bytes of the frame, into several RTP packets, and there is no boundary that we have to preserve, because if you lose one packet you need all the packets in order to be able to decrypt the original frame. The packetization will just split them into several RTP packets, ensuring that they don't exceed the MTU, and on the last one the marker bit will be set.
I
The marker bit will be set the same as specified in RFC 3551, and in case the video codec supports spatial scalability, for each of the spatial frames, whatever they are called in the specific codec, because there are different names in VP9, AV1 and things like that, they will be sent in order.
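A minimal sketch of the packetization just described, assuming the frame arrives as an opaque encrypted blob; the function name and the 1200-byte payload budget are assumptions for illustration.

    def packetize_encrypted_frame(frame: bytes, max_payload: int = 1200):
        # No codec boundaries (OBUs, NAL units) are visible or preserved:
        # the encrypted blob is simply cut into payloads that fit the MTU.
        chunks = [frame[i:i + max_payload] for i in range(0, len(frame), max_payload)]
        # The RTP marker bit is set only on the last packet of the frame,
        # as in RFC 3551-style packetization.
        return [(chunk, index == len(chunks) - 1) for index, chunk in enumerate(chunks)]

    # All (payload, marker) pairs of one frame share the same RTP timestamp;
    # losing any one packet means the whole frame cannot be decrypted.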
I
So the idea is, as Colin was saying, that we are not going to negotiate this new packetization format on its own. It will always rely on the negotiation of the standard one, in order to try to reduce the number of payload types that we have in the SDP, because in WebRTC at least we are already using the payload types in the 35 range and things like that.
I
So we are going to use a single payload type for this generic packetization format, and then we need to multiplex the codecs inside that payload type. We are doing that by sending the original, or associated, payload type, which causes a minor network overhead, and it also requires negotiating different generic payload types for audio, one for each different clock rate.
K
K
I
Otherwise we would have to duplicate or replicate the number of payload types, and in WebRTC we may exceed the maximum limit. I mean, in WebRTC we already have 20 or 30 payload types, so going to 60 is a no-go, because at some moment in time we will reach the point where we don't have any free payload type to use.
I
So that's why we have decided to use a single one, and just include the original payload type within it.
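A minimal sketch of that payload type multiplexing, assuming, purely for illustration, that the associated payload type travels as a single byte in front of the encrypted frame; the actual placement discussed in the draft may differ.

    GENERIC_PT = 100                      # assumed: the single negotiated generic payload type

    def wrap(associated_pt: int, encrypted_frame: bytes) -> bytes:
        # Prefix the original (associated) payload type so the receiver can
        # tell which codec this frame belongs to after demultiplexing.
        return bytes([associated_pt & 0x7F]) + encrypted_frame

    def unwrap(payload: bytes) -> tuple:
        return payload[0], payload[1:]

    # The RTP header always carries GENERIC_PT; the per-codec payload types
    # are only referenced inside the payload, so the SDP needs one extra
    # entry instead of one per codec.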
L
Just to mention that the draft is exposing various ways of doing the same thing, some of which would consume more payload types and some of which would consume fewer, and we are certainly interested in getting feedback on what would be most appropriate. So we are presenting this approach, which we think is fine; we know it has some drawbacks, but we think it works, and we welcome reviews of the current draft and of the other alternatives.
K
I
Yes, so we are using a single one instead of, for example, doing what we do with RTX, where we have one RTX payload type for each of the media payload types, which causes the payload types to be doubled.
I
If we used the same approach with this generic packetization, we would have to triple the payload types, and again, in WebRTC, with all the H.264 profiles, VP9 profiles and VP8 profiles, we are reaching the limit where we don't have any free payload types anymore. So that's why we have decided to do payload type multiplexing. Yes, we could just use one payload type per codec, but then we may have the problem that it is not implementable.
I
I
It will just be recovered from a header extension, and as we have one extra byte there, we are going to use that byte to indicate it. For audio it is not really useful, but we can avoid having to send more information in a metadata header extension for the SFU in case it is not a scalable video stream.
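A minimal sketch of carrying that single extra byte as an RFC 8285 one-byte header extension element; the extension id value is an assumption and would come from SDP negotiation.

    def associated_pt_extension(ext_id: int, associated_pt: int) -> bytes:
        # One-byte header form: id in the high nibble, (length - 1) in the low
        # nibble; a single data byte means the length field is 0.
        return bytes([(ext_id << 4) | 0x00, associated_pt & 0x7F])

    # Example: associated_pt_extension(ext_id=5, associated_pt=98) yields the
    # two bytes 0x50 and 0x62.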
I
I
Maybe one thing that could be used is the Opus TOC to know the frame length, which could be useful in recording scenarios. There are several solutions that could be used, like, for example, trying to bring back frame marking, or the AV1 dependency descriptor, which is the one currently proposed in the draft, and we think that it fits.
I
Regarding redundancy, this does not require any change to RTX. It will just keep working with NACK and RTX as it works today. For FEC, both ULPFEC and FlexFEC will also work without any change. The only thing is that the heuristic used by the application to decide which packets should be more protected or not has to be changed a bit, to reflect that the payload is encrypted.
I
I
Also, RED can still be used. The only thing is that the redundant and primary data will be content that has already been encrypted. So with RED the encryption will not be applied to the full RED packet, but to the redundant and primary data parts, and there is overhead added by the transformation, because you will have the overhead of the SFrame headers for both the redundant data and the primary data. So basically you are sending the data twice.
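A minimal sketch of a RED (RFC 2198) payload in which both blocks are already SFrame-protected frames, which is why the SFrame overhead, along with the data itself, ends up being carried twice; the field packing follows RFC 2198, but the helper is illustrative only.

    def red_payload(block_pt: int, redundant: bytes, primary: bytes, ts_offset: int) -> bytes:
        # Redundant block header (4 bytes): F=1, block payload type,
        # 14-bit timestamp offset, 10-bit block length.
        red_header = bytes([
            0x80 | (block_pt & 0x7F),
            (ts_offset >> 6) & 0xFF,
            ((ts_offset & 0x3F) << 2) | ((len(redundant) >> 8) & 0x03),
            len(redundant) & 0xFF,
        ])
        # Primary block header (1 byte): F=0, block payload type. With the
        # associated payload type carried once in a header extension, both
        # blocks necessarily share the same associated payload type.
        return red_header + bytes([block_pt & 0x7F]) + redundant + primary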
I
There is a small limitation when using what we call the associated payload type for multiplexing in an RTP header extension: RED only allows us to have one header extension, so that value has to apply to both data blocks. That means you cannot send one block that is of one payload type and another block that is of a different one.
I
E
So at this point, Sergio, I think we're almost out of time; we only have two minutes left. What I'd like to do is just try to understand from the working group how we move forward. One option, since we have exhausted even more than the hour here, is: should we be thinking about an interim to allow some more time to get into this, say sometime during late May? Does that make sense? I don't know, Jonathan, if you have an opinion before our time runs out.
A
Yeah, I do think that some sort of interim would probably be useful to continue discussion of this. There's a lot of interest and not a lot of convergence, which sounds like the sort of area where an interim is ideal. We can work out the timing on the list if we do one.
L
Whatever it is, in the meantime I would encourage people to read the draft and also start sending emails to the mailing list with issues on the current working draft, so that we can continue on the mailing list and not wait for the interim.
K
E
Okay, I think we're just about out of time for this session. So maybe, Jonathan, do you want to take maybe one final question, and then we can call it a day.
A
Yeah, I mean, Colin, you're first in the queue, so if you have something that can be handled in one minute... no, I guess he dropped himself out. Mo, I guess, if you have something quick.
D
Yeah, just something quick. I think we're all probably more concerned about the same things than we realize. A lot of the stuff that Colin was mentioning really is mostly applicable to the new codec that I presume the people that presented this care about, which is AV1.
D
I think maybe people don't realize that AV1 is basically identical to H.265 as far as high-level syntax. It has the same kind of structure, and it has the same sub-frame units that H.265 has; there are tile groups.
D
And I think the people that want to deploy AV1 should take a good hard look at the AV1 packetization to figure out whether they care about any of those things, or whether they are willing to forego them end-to-end.
E
Thank you very much, Mo, and I think on that note we will declare the session at IETF 110 over. Thank you.