From YouTube: AVTCORE WG Interim Meeting, 2021-01-28
A: Great, okay, thank you. So we have been reminding people: please sign in to the virtual blue sheets, which are at this CodiMD link. We have a note taker. Here's the Note Well, the reminder of IETF policies such as the patent policy. And then, sorry.

A: More about the Note Well: it's set forth in BCP 79. As a reminder, when you're participating you agree to follow IETF processes and policies. Definitive information is in the documents listed below and in other IETF BCPs.

A: We've turned that on. There's no registration required to attend the meeting, so hopefully you've gotten in, but you do need to fill in the virtual blue sheets. Actually, you can fill in the virtual blue sheets without a Datatracker login, so that's not needed, I don't think; I could be wrong. And then, if you want to join the session Jabber room, you can do that via the IETF Datatracker meeting icon; clicking on that will get you into the Jabber room.

A: Please use headphones or an echo-cancelling speakerphone, and state your full name before speaking. A few other little things: to enter the queue, you type +q in the chat (I guess, Jonathan, you'll handle the queue), and you leave it by typing -q. If we do have a hum, and I'm not sure we will, you raise your hand with the hand-raising tool and lower it by clicking on the hand-raising tool again when you're called on.

A: You need to enable your audio if you've muted yourself; to do that, you click on the mute/unmute icon. You don't have to use video if you don't want to, and many of you are not. So that's it; I think you've got that. Okay, so the agenda has been uploaded, and there's this CodiMD page which also has the agenda if you want to look at it. I mentioned the Jabber room and the Secretariat; we have a Jabber scribe and note takers. Okay.

A: So here's the agenda for the meeting. We've gone through the preliminaries. Hopefully we'll do the JPEG XS payload format, we'll talk about the frame marking working group last call, and the VP9 payload format.

A: We've published a whole ton of RFCs. Many of them were in cluster C238, so those are now out, yay. We have four drafts that have completed working group last call, three of which we're going to talk about today. We have one expired draft; I guess we have an action item, Jonathan, to follow up on that. We've adopted three documents: the EVC draft, Cryptex, and 7983bis. And that's about it for documents. All right, so I'm going to turn it over to Tim.
F: Hi everybody, my name is Tim Bruylants from intoPIX. I am one of the authors of the JPEG XS payload format for RTP. In December there was a working group last call (WGLC), but no response was given on the reflector to the last call, and so it was said that during this meeting we would see what we had to do to proceed further.

F: In the meantime I have taken some actions. People from Fraunhofer actually joined the AVTCORE mailing list, and people from the VSF, the Video Services Forum, also joined this mailing list in order to be able to respond to a future WGLC.

F: If it's still required, that is. And then, from the JPEG committee, I asked them to draft a liaison letter to the IETF, to the AVTCORE working group. This letter was issued last week and will be sent by the ISO secretariat. I don't know when it will arrive here at the IETF, but I hope it's soon. It's just a letter saying that the committee would also very much appreciate the support of the RTP spec.

F: So, next slide please. My question is a little bit: what is the next thing to do in order to move the draft forward? That's what I want to ask here.
F: Because, of course, I don't know the quality of the text. It would be very much appreciated if people with a lot of knowledge of writing this kind of text and specifications could also read it and verify that everything is okay.
A: So I guess two action items: one is to reissue the working group last call, and the other is for Jonathan and me to review it, plus any other volunteers.
F: Yes, and I know that at least two other people will also cross-read and presumably respond. I hope they will approve, of course; we'll see. Okay.
A: All right, the next item is the frame marking working group last call, so a little bit of detail about that. It was announced on the 21st of November and concluded on December 6th.

A: We had Stephan Wenger, who provided an initial response and then a more detailed one, and Sergio and Dr. Alex basically plus-oned that; we'll talk a little bit about what Stephan posted. I posted a review, which is mostly nits and a few other things, but the main comment came from Stephan, and this is what he said. I won't read all of it, but you can get the general idea. He basically inquired…

A: So that was Stephan's question, and that's what Dr. Alex and Sergio also plus-oned. Stephan provided a little bit more detail on his opinion, saying that the problem is to try to provide the MANE or SFU with sufficient information to do its job of selective forwarding, and to do that you have to abstract away from the syntax of the various codecs.

A: It's something the SFU maker wants, because they want to reuse the same logic independent of codec, but it's hard to do. Stephan then provides some historical info on the fact that this has been tried starting in 2000 (I didn't realize it was that old), and we've gotten it wrong every time, despite many, many eyes looking into this.

A: So that's the basic comment from Stephan. Based on that, we decided to call for implementation experience, to try to fill in what Stephan was asking: what were the implementer experiences, what has happened, what's gone wrong, what's gone right. We got a couple of responses.

A: I'll try to paraphrase here; if people think I haven't gotten this quite right, feel free to get in the queue and speak up. We got one response from Sergio with respect to VP8 and VP9 experience. As I believe it, one issue with VP8 was that the picture ID needs to be consecutive, so if a forwarder drops a frame it basically needs to rewrite the picture ID, and that created a problem because it meant you couldn't just have end-to-end encryption over the entire frame with VP8.

A: You would need to modify the TL0PICIDX and the picture ID. This is not a problem specific to frame marking; it would occur with any other RTP header extension as well.
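To make the rewrite problem concrete, here is a minimal sketch (an illustration added for the reader, not something presented at the meeting) of what a forwarder has to do to keep the 15-bit VP8 pictureID sequence gap-free when it drops frames; the class name and the whole-frame-drop assumption are hypothetical:

```python
class Vp8PictureIdRewriter:
    """Keeps forwarded pictureIDs consecutive when whole frames are dropped."""

    def __init__(self):
        self.dropped = 0  # frames dropped so far, modulo the 15-bit space

    def on_frame(self, picture_id: int, forward: bool):
        """Return the rewritten pictureID, or None if the frame is dropped."""
        if not forward:
            # Dropping a frame would leave a gap the decoder treats as loss,
            # so shift every later pictureID down by one instead.
            self.dropped = (self.dropped + 1) % (1 << 15)
            return None
        return (picture_id - self.dropped) % (1 << 15)
```

Because the pictureID lives inside the VP8 payload descriptor, this rewrite forces the SFU to modify payload bytes, which is exactly what end-to-end encryption of the whole payload forbids; that is the conflict being described.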
A: Because of that, it wasn't clean to implement this with VP8, and, others can speak up, but I don't believe frame marking was ever supported with VP8 in Chrome. I believe that's true.

A: There was also an issue with the VP9 P and U bits for the temporal up-switch (temporal and spatial up-switch); at least, Sergio had issues mapping that into frame marking and figuring out what to do. In addition, the more complex VP9 KSVC scalability modes were not suitable for use with frame marking. So, basically, issues with VP8 and VP9. Then Jonathan replied back about H.264, which was implemented in Chrome basically to offer support for temporal scalability, the three layers of H.264 AVC in Chrome; it assumed temporal nesting.

A: So every frame was a valid up-switch point, and you didn't have the issue with the P and the U bit. It was contributed to the webrtc.org codebase, and I guess it was used by Vidyo and perhaps some other folks. That code was subsequently removed, I think because it wasn't universally applicable; it wasn't used for VP8 and VP9.
D: All right, just to clarify: the VP9 P and U bit issues were what triggered the update to frame marking to add the restriction of temporal nesting. That was, I think, at least four or five versions ago. So I guess the real question is: do we think the temporal nesting restriction is too severe, in which case there needs to be a solution for a more generic form of marking?

D: Or do we believe that Stephan's objections mean that no form of marking will ever truly be relevant for any codec, because every codec naturally has its own expressible ways, and it's not right to try to club them all together in some generic abstract way? I think those are the two main issues that we need to address.
A
Those
that
that's
great
great
summary
mo
just
a
question
on
my
part,
all
the
the
temporal
modes,
I'm
aware
of
like
l1
t2
and
all
that
stuff
they're
all
temporally
nested
right.
A: Right. I think the point on the previous slide, and Sergio can correct me, was that it was really more with a spatial up-switch than with a temporal one. I think that's true, right? Yeah.
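Since temporal nesting keeps coming up, here is a small illustrative check (my simplified formulation, added for readers, not anything shown at the meeting) of the property the restriction relies on: in a temporally nested stream, no frame reaches back past the most recent frame of a lower temporal layer, so every frame is a safe up-switch point.

```python
def is_temporally_nested(frames):
    """frames: list of (tid, refs); refs are indices of referenced frames.

    Simplified nesting rule: a frame must not reference anything older
    (in decode order) than the most recent frame whose temporal ID is
    strictly lower than its own.
    """
    latest_by_tid = {}  # tid -> index of the most recent frame with that tid
    for i, (tid, refs) in enumerate(frames):
        floor = max((latest_by_tid[t] for t in latest_by_tid if t < tid),
                    default=-1)
        if any(r < floor for r in refs):
            return False
        latest_by_tid[tid] = i
    return True

# An L1T2 pattern (TIDs 0,1,0,1, each TID-1 frame referencing the most
# recent TID-0 frame) passes:
assert is_temporally_nested([(0, []), (1, [0]), (0, [0]), (1, [2])])
# A TID-1 frame reaching back past a newer TID-0 frame is not nested:
assert not is_temporally_nested([(0, []), (1, [0]), (0, [0]), (1, [1])])
```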
A: Yeah. Justin, do you want to say anything about the experience in Chrome?
H: Yeah, to be honest, I don't remember; I'm not as close to this anymore, and I don't remember the exact issues. My overall understanding is that SFrame and that sort of approach has largely obsoleted this, from the Chrome perspective.

H: It doesn't surprise me that some of the things like KSVC and such are just explicitly not covered here.
E: Sorry, I think that was me. What I was saying was: the primary useful case I saw for it, which is also what it says on my slide, is the H.264 case, where you're actually retrofitting information that the codec's payload format doesn't carry.

E: So it actually provides information that you otherwise don't have, compared to the things that do try to do this natively.

E: It's very hard to make it as rich as the payload specs do, especially for anything scalable. So I guess my question is: is this something that we can still consider useful for this one use case of basically retrofitting temporal scalability onto AVC?
A
So
I
think
basically,
the
questions
are
those
that
mo
just
said.
You
know
where
we,
where
do
we
go
from
here?
I
would
just.
A
D: Yeah, just to address the comments about different codecs: I don't believe we've ever had anything where the different codecs have caused frame marking to work differently. I don't believe there's anything different between H.264, VP8, or VP9, on either temporal or spatial scalability, that prevents effective marking.

D: The only problem is whether or not the streams are temporally nested, and, like Jonathan mentioned, in practice most implementations are temporally nested; but that's not to say that more creative things couldn't be done with a codec.

D: So I don't think it's really a question of which codec you're carrying; it's really a question of whether you're carrying a complex, dynamic scalability structure or a common, simple, static scalability structure. Frame marking was updated to only represent the simplest, most basic temporally nested scalability structure, but it can do that for any codec. It could do that even for AV1. It doesn't matter whether you're using H.264, VP8, VP9, or AV1.
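For reference, this is a sketch of parsing the long-form frame marking extension as I read the layout in draft-ietf-avtext-framemarking (S, E, I, D, and B flags plus a 3-bit TID in the first byte, then optional LID and TL0PICIDX bytes); the field positions should be checked against the draft rather than taken from here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameMarking:
    start: bool         # S: first packet of the frame
    end: bool           # E: last packet of the frame
    independent: bool   # I: frame decodable on its own
    discardable: bool   # D: no other frame depends on this one
    base_sync: bool     # B: base-layer sync point
    tid: int            # temporal layer ID, 0..7
    lid: Optional[int] = None        # spatial/quality layer ID, if present
    tl0picidx: Optional[int] = None  # base-layer picture index, if present

def parse_frame_marking(ext: bytes) -> FrameMarking:
    b = ext[0]
    fm = FrameMarking(
        start=bool(b & 0x80), end=bool(b & 0x40),
        independent=bool(b & 0x20), discardable=bool(b & 0x10),
        base_sync=bool(b & 0x08), tid=b & 0x07,
    )
    if len(ext) > 1:
        fm.lid = ext[1]
    if len(ext) > 2:
        fm.tl0picidx = ext[2]
    return fm
```

The point being made in the discussion is visible here: nothing in these fields is codec-specific; the difficulty is entirely in deriving them from each codec's bitstream on the send side.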
D: As long as you're not doing a dynamic, complex scalability structure, what it conveys is still the same. The real question, I think, is Stephan's point of whether or not that little bit of information is useful with a complex codec and complex scalability structures.

D: You may want a lot more information, in which case you're better off just trying to expose the first few bytes of the codec payload, and maybe that's something that could be done for SFrame: instead of trying to normalize some common format across all the codecs, just expose the payload headers of the codecs, the first however-many bytes. Maybe that's a better direction, if people think they need all the flexibility that the codec provides.

D: If people don't expect to really innovate on their scalability structures in the codec, then I don't see why we couldn't just keep a simple, static, normalized descriptor, like either frame marking or the AV1 DD, or whatever SFrame decides to come up with. So I think that's really it.
G: Yeah, but there is a small thing that I don't agree with: we say that this works for every codec, but there is a part that is specific and has to be specified for each one. So we don't even have a mechanism that works the same for all codecs; it works differently for each codec, so we have to implement it per codec in an SFU.

G: I have to implement frame marking for H.264, I have to implement frame marking for VP9, I have to implement frame marking for VP8, I have to implement frame marking for AV1. So even if it can be applied to any codec, its implementation is different, so it is even less helpful in that regard.
G: Well, I'm not sure about that. I mean, just about frame marking: the AV1 dependency descriptor, for example, is meant to be used without being codec-specific. Another question is whether it will be able to match all the different codecs in the future, but at least it is not something that is specific, or different to implement, for each of the codecs, I mean.

G: It was an implementation-specific issue that prevented me from implementing it; it would happen with anything, but it is something that is specific to Chrome, to libwebrtc. That is why I did not implement frame marking for VP8; it's not that it could not be implemented.
A: Well, I think, yeah, Jonathan has had some suggestions about how to make VP8 implementations more robust so they wouldn't create this issue. But if the issue is there in the implementation, and the idea is to support end-to-end encryption, you're going to have an issue with any RTP header extension.
D: Well, yeah. The original goal behind frame marking, and the same goal for the AV1 dependency descriptor, is for receivers of the payload to be able to do things without digging deep into the actual codec payload. That always helps the people receiving it, but it never helps the party that has to generate the actual marking, right?

D: The encoder-to-packetizer layer always has to incur the complexity of understanding the payload format and building the right header. Even if we came up with a perfect normalized header, there's always going to be that complexity of understanding the codec bits and lifting them up into the header. So I think we should abstract that out. And I think Stephan was arguing that no matter how well you think you're doing that, it's never going to be good enough, so don't even bother.
G: Yeah, I completely agree about the sender side. The only thing is that, with frame marking, if I had to add a new codec in my SFU that uses frame marking, I would still have to implement something at the SFU, because there is a codec-specific part; it's not the same for VP8 or VP9 or H.264. So even there you have to do something. I mean, it is not complex; it's just that you have to do it to support new things.
A: Yeah. Also, I think there's a little bit of an issue in that we've discovered, at least in the case of VP8, that having an RTP header extension may not be sufficient to do forwarding. In other words, if the idea was that you didn't have to parse the payload, didn't have to touch it, and didn't have to know anything about it, I think we've demonstrated, at least with VP8, that that's not realistic.

A: So maybe we can get people to opine on the questions you've just asked, Jonathan. My personal opinion is that it doesn't make sense to throw away the document; there's definitely something we've learned there.
D: Well, I guess I would tend to side with the usage more than anything else. If people think they're going to use this, or something like this, then I think it makes sense to continue work on it. If no one expects to use this, and no one expects to use something like this, then it's not worth iterating on the doc anymore.

D: So I guess that's the real question. With all the activity happening in both the AV1 dependency descriptor and eventually with SFrame, I struggle with Stephan's argument that this just can't be done, because everyone is still trying to do this; there's still energy in trying to do this. Whether or not the restriction to a simple payload scalability structure is acceptable would be the next-level question.
A: Yeah. One of the weird things about the follow-on efforts is that they seem to hold some of the same beliefs that I think we've shown aren't true for frame marking. The dependency descriptor was designed for AV1, and I think it does work for that, but one of the reasons for developing it was to solve the VP8 problem, which I think we've just demonstrated it does not do. So I think there are things we've learned.
D: Yeah, and I would also highlight the recent issue in frame marking for VVC, about whether or not a gradual decoder refresh can be signaled. That same problem would exist with the AV1 dependency descriptor, and currently there's no solution for it there either. So it's inevitable that interesting things in the codec may come up and surface that aren't captured in any normalized descriptor.
A: So let me ask you something, though: what do you think is actually doable in any RTP extension? I think you're making a good case that if you're trying to create a generic descriptor that says you never have to parse the payload of any codec, and that you'll get full functionality, that's an unrealistic expectation, not just for frame marking but for anything. But people want to do this.

A: That's just not realistic. So is there a trade-off between optimal forwarding and the amount of work for parsing everything? There could be some point where you say: yes, I acknowledge that I can't handle everything, but I've saved myself enough work, and I get enough agility, that it's fine; I accept that trade-off.
I: So I don't see you getting away, in that environment, from selectively implementing some support, etc. Therefore, I think generalizing as much as possible for those use cases we know we can get to work is probably the best approach here. And then I don't know if the answer is: can we close down the scope on frame marking, saying that this is good-enough support for this? Or are we already moving on to: no, we tried this, it's not good enough, but if we clarify the scope and do another extension, we can do a version two which actually works for our intended purpose? It's not going to cover every case, but that's what we're dealing with. So I think it's something to think about, how we deal with this.

I: It might be, I mean, if no one really implemented frame marking, it might be that the best step is actually to go to v2 directly and say: okay, let's try another approach here, and be clear on which scope of functionality we want to cover. That might be the simplest path, and then we actually get better implementation support for that version two, without having two different versions in the market, and therefore not publishing this one, other than maybe as informational.
H: Yeah, I think my comments are largely aligned with what Magnus was indicating. I do think Stephan's comment holds, that there's always going to be header stuff that we can't fit in this generic framework, and I do think that means we have to package our stuff differently.

H: You could imagine SFrame just allowing, essentially, a way to send that metadata in the clear; the SFU would still have to parse it, but it wouldn't have to puncture the encryption, or interfere with the encryption functionality, which I think is really the fundamental piece here. So I think what that suggests is that there's some sort of frame marking thing where you get, I think, 90% of what you want from this generic thing, and then, if you want to do more, you can just basically send the bits, the metadata for the codec, in the clear, and the SFU will then have to parse that. I think that would really be a useful framework for a solution.

H: I think, then, the question that comes to us is: is it really 90% that you get from this generic setup, or is it more like 10%? I don't have a good read on that, and that's where I'd love to get the folks who worked on implementing this at Google to weigh in; we could pull in Harald or some of the folks there and just try to get a sense: is this the 10% or the 90%?
A: Yeah, my understanding is that at this point the bar is being useful for multiple codecs. If you're going to do this, you want it to be usable for VP8, VP9, H.264, and AV1; if you got that subset, that would be a bar that would certainly get it into Chrome. I believe that's what people believe can be done in Chrome, if that's accurate.
G: We don't know what future encoders are going to look like, but I think the realistic goal should be to at least make it work for all the current video codecs: mainly H.264, and especially VP8, VP9, and AV1. I think that should be a realistic goal. Obviously we don't know what's going to happen in the future, but at least that.
A: So while we've got you, Sergio, what is your opinion on what to do with the document?
G: I don't see frame marking being useful at this stage, but just removing it is not right either; it's something that we have done, so deleting it doesn't seem nice. I think the idea of keeping it, at last, as informational, and probably moving the VP9 frame marking part out of the VP9 payload draft and into this document, so that it is there at least for historical reasons, would be fine; but remove it from the payload specs.
J: Yeah, Tim Hansen here, just tracking back to what Justin said. I mostly agree with the idea that something useful here would be useful, but as somebody who's implemented a thing that looks a lot like an SFU: there are two problems you're trying to solve here, one of which is finding the bits that you need, and that's actually sometimes more difficult than interpreting them. So I think if we skipped the interpreting step and actually just found the bits, that would be relevant.
D: Yeah, answering the question about whether this is 90% or 10% of what you need: of course the authors are going to be biased, because we wouldn't have created this if we didn't think it already captured at least 80% of what we need.

D: So when you compare it to other things, like what's going to happen eventually for SFrame, or what's already in progress for AV1, 90% of the actual use of the codec is already captured in this simple frame marking extension. I think it already covers 90% of common meeting services, certainly all the ones that I've ever either used or worked on. That doesn't mean it should dumb down implementations to only use those features. So I'm struggling with whether, if we publish anything, we should add text to say: if you need to use these types of advanced features, here's how you should do it, or here's how you can do it in addition to frame marking. It's not that you can't use frame marking; it's that in order to get the extra semantics you need, you can use frame marking plus this other thing.

D: But I think, if you look at the core of this, the exact same one-byte thing is going to be in any implementation, in any other proposal; you're going to have that same one-byte thing: frame start, frame end, intra frame, independent or not. All those same things exist in any kind of abstraction you could ever come up with.
G: Yeah, that's the part I disagreed with. Obviously we cannot be scientific here, but I disagree with the percentage of how useful frame marking is because, for example, KSVC is not supported, and we are using it now for screen sharing. If I recall correctly, Google Meet is using it, or will be using it, for screencasts. So it is something that is not covered, and it is widely used.
D: Oh yeah, sorry, I couldn't hear what you were saying earlier. The reason we didn't add information about VP9 in the spec was that the working group didn't want to add stuff for mere drafts. But if people want the details about how to use the VP9 modes, including KSVC, and how you set the P and U bits, we could certainly add that to the document.

D: That is, if people don't have an objection to having that documented in this spec. The updates were made specifically to allow those things, but we didn't document how to use them with VP9, because we were told to remove the VP9 section from the spec.
E: Any descriptor is going to need to include the information in frame marking, and the question is: is there ever a case where the information in frame marking is sufficient? If you need to have frame marking plus some other information that you get somewhere else, I think it's not terribly useful.
A: Exciting. So, just trying to come up with what the next steps are: what do you think we ought to do, Jonathan? Do you want to have some hums, or where do you think we should go from here?

A: What would the first one be? Would that be on something like the publication status?

A: Okay, did everyone get that? So the first question is: should we publish this document at all? Is that the hum, Jonathan? Yeah.
A: I think your question, Roni, was about whether future codecs should be required to support it. You'd like to, yeah.
A: Can you see everyone who's raising their hand, Jonathan? I can't from where I am.
A: Where we are now. Okay, so what's next? Publication status, yeah: do we still want to do this as Proposed Standard?
E: I didn't know if you had Experimental on your list of intended statuses.
A: So which hum do you want to go ahead with? I guess, I don't know; well, why don't we hum on Proposed Standard and then see where we are.
E: Yes. All right, I only see one. And something other than Proposed?

E: I think I saw seven, so yeah, it sounds like consensus for something other than Proposed Standard. Do people want to speak as to Experimental versus Informational, or is that getting too much into the weeds for hums?

E: Oh, are you still in the queue? Yeah, okay, you are; sorry.
D: Yeah, just a procedural issue: are we humming on, are we discussing, the current revision of the document, or any possible future revision, including changes?

D: It wasn't clear whether we're saying: what's in there now, just publish that; or we want to document some things, or maybe even make a normative change to enhance something. That part wasn't clear to me.
E: Apparently nobody has an opinion on that. I guess, obviously, we could take this and completely rewrite it to be the AV1 DD, in an extreme case, but that's…
A: Yeah, just my personal opinion: I think it's important to publish it and document the issues, because if we don't at least do that, the things we have learned will get lost, and that would be bad. So I think we have to document what we have. I wouldn't personally try to fix every issue with it, because I think that would turn it into something else. We have something that was implemented, we learned from it; let's just try to encapsulate that in the document and publish it.
I: Yeah, I think that's the right way forward. Okay, whatever's easiest: if there are small, easy fixes, roll them in; otherwise just document it and push it out. And I would say Experimental, with the sense that we know certain parts of this might work fine, other parts are a bit shaky, and the experiment is to see whether there's actually anything there and to get some experience with it.
A: Thank you, everybody. We will bring it to the list, obviously; I think we have good guidance. All right, so the VP9 payload format. Jonathan?
E: Yeah. Fortunately, the only open issue with this had a lot to do with frame marking, which is why I wasn't too worried about rolling frame marking over. So I would say, and I'll ask if people agree with me, that maybe the thing to do is move the description of how to do VP9 with frame marking into the frame marking document.

E: Take it out of the VP9 spec and go ahead and publish the VP9 spec. Does anybody disagree with that plan? Sounds good to me.

E: Right, exactly. Especially because, like I said, this VP9 spec as written is what's been shipping in Chrome for, I think, at least four years now. So we want to get that published, and I apologize for it taking so long. Looks like a lot of plus-ones in the chat room, so yeah, we'll go ahead and do that. Let's make sure that gets in the notes, and we can sort of catch up on time.
A: Okay, all right, so we're going to turn this over. I guess, Youenn, you'll be presenting on SFrame RTP encapsulation.
N: Hi Bernard, can you share the slides? Or, I believe I am sharing; can you see them? No? Okay, well, that's okay. Are you seeing them now?

N: Okay. In the meantime: Sergio and I, based on our last meeting…
N: Okay, let's go to the goals. From the last meeting, Sergio and I started to dive into how to support SFrame and insertable streams, both of which break the assumption that nothing happens between the encoder and the packetizer, so the packetizer can no longer assume that the encoded data is intact. Certainly we could try to update SFrame or insertable streams to try to handle that, but it's not really tractable to handle that with existing codecs, nor with future codecs.

N: So it might be better to find a solution that is more generic, meaning one that minimizes the impact on the intermediaries, like SFUs or browsers. And maybe, if we have a good solution, it can actually get us some bonuses, like simplification: no need for new packetizer implementations in browsers, that is, for web APIs. Next slide.

N: So we broke things into three parts. First, given the change from SFrame or insertable streams, we need to change the processing model a bit: the packetizer can no longer really split frames, so it's really up to the application doing the transform to actually do it. On the wire, of course, some changes are needed, so we thought of using a generic packetization with side-channel information, so that intermediaries can still do their current processing on packets; after all, a browser that is receiving content is also an intermediary to the web page. And if we change what goes onto the network, of course, we need a way to negotiate it. So let's look at all three of these things, starting with the processing model. Next slide.

N: The proposal there is hopefully simple and straightforward. The encoder generates a frame and the application modifies the frame; the idea would be that the application may be able to split the encoded frame into individual subframes, and the generic packetizer would then work on each individual subframe as an independent frame to transmit.

N: If we look at an example, like using H.264 with SFrame, it's almost like today: the H.264 encoder encodes a frame, we encrypt it, and then the packetizer sends it as one frame. In other applications, like SVC, where on the decoding side you might actually want to decrypt different layers independently, it's good if the application splits each layer into its own individual frame, which the packetizer will then process independently. And the same could be applied even to H.264: if, for whatever reason, an application wants to split an H.264 frame in two, because there are good reasons for that, then it can be done. It's really up to the application to do it. Next slide.

N: Once we have individual frames, we need to send them on the network. There, the idea would be: given that the frame is just opaque data plus metadata, we put the opaque data in as the payload, and the packetizer does nothing to the payload; it would not prepend data, would not parse it, would do nothing except split it into RTP packets. So we get a very dumb packetizer, very simple to implement.

N: Of course, it works with any codec, and all the complexity is left to the frame metadata that is sent as an RTP header extension. If you look at SFrame, the idea would be that what is double-encrypted, highly protected, is in the payload, and what still needs to be exposed to intermediaries goes in an RTP header extension. With Sergio, we started to enumerate what's needed, and it's really similar to what frame marking is actually doing: codec, profile, frame type. We think it's also interesting to look at what insertable streams exposes prior to the transform, as a source of inspiration.
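To make the shape of this concrete, here is a small illustrative sketch (my own reading of the proposal, with invented names, not code from the slides) of such a dumb packetizer: it never parses the opaque payload, only slices it to an assumed MTU budget and marks where each frame starts and ends so the far side can reassemble it.

```python
MAX_PAYLOAD = 1200  # assumed per-packet budget after RTP/SRTP overhead

def packetize(opaque_frame: bytes, metadata: bytes, rtp_ts: int, seq: int):
    """Split one opaque (possibly SFrame-encrypted) frame into RTP packets."""
    chunks = [opaque_frame[i:i + MAX_PAYLOAD]
              for i in range(0, len(opaque_frame), MAX_PAYLOAD)] or [b""]
    packets = []
    for i, chunk in enumerate(chunks):
        packets.append({
            "seq": seq + i,
            "timestamp": rtp_ts,               # same timestamp for the frame
            "marker": i == len(chunks) - 1,    # last packet of the frame
            "extension": metadata if i == 0 else b"",  # metadata sent once
            "payload": chunk,                  # untouched opaque bytes
        })
    return packets
```

Note that nothing here depends on the codec; everything an intermediary might act on would come from the header extension, never from the payload.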
N: Okay, go ahead. The depacketizer processing is very simple: it needs to know when a new frame starts, which is just RTP processing, and when it ends; that's about it. Then what it does is concatenate all these blobs into one big frame and pass it over to the transform, which will do its processing before sending it to the decoder.
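Again as a sketch under the same assumptions (and matching the hypothetical packet layout above), the receive side reduces to reassembly driven purely by RTP-level signals; a real implementation would also handle loss and reordering via sequence numbers, which this omits.

```python
class Depacketizer:
    """Reassembles opaque frames without ever inspecting payload bytes."""

    def __init__(self, on_frame):
        self.on_frame = on_frame  # callback(payload: bytes, metadata: bytes)
        self.buffer = bytearray()
        self.metadata = b""

    def on_packet(self, pkt: dict):
        if pkt["extension"]:           # first packet carries the metadata
            self.buffer.clear()
            self.metadata = pkt["extension"]
        self.buffer += pkt["payload"]
        if pkt["marker"]:              # last packet: hand the frame upward
            self.on_frame(bytes(self.buffer), self.metadata)
            self.buffer.clear()
```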
G: Also, I would like to add that we wanted to clarify some common ground before going into detail about what a frame is, because we can get lost deciding whether we should go down to slices, or whatever, or frames. I think there is a lot of work that we can agree on before going to that fine detail, which will be difficult to agree on but easy to implement.

G: So I think that is our goal now. Once we agree on that, we can decide how we apply it to the codecs, whether we want to go deeper, to slices, or just have frames, or whatever. But we felt that if we started discussing that specific topic, we might miss all the other things that would allow us to progress further. Okay.
I: Yes, I think that's actually quite reasonable to look at, because in some sense this RTP payload format is going to be very easy to misuse. You can use it in any way: you could even throw multiple sources, multiple streams, into one single SSRC, and be done with all the considerations of how efficiently the repair mechanisms work, etc.
O: Hi, can you hear me? Yes? Yeah. Kind of echoing Magnus's comment: it's not clear how much, if any, of RTP is left once you've done this, and maybe it is time to replace it, but…

O: This seems to be just saying that all of this effort we've put into payload formats and signaling and different source identifiers and different recovery mechanisms, all of that, isn't important, and we can just do it all in a generic way.

O: So this seems like a really quite startlingly large change to be proposing as a payload format, and I'm a little nervous about scope here.
G: But that is not about these slides; what we were asked to do was present this and discuss whether it is relevant or not. We did that last time, and we agreed that we were going to present this anyway. Regarding what's left of RTP: we are only talking about replacing the codec-specific RTP packetization. Everything else, the NACK, the FEC, everything else is still RTP. So I would say that it's…
O: I'm not sure I necessarily believe that, but I think my more fundamental point is: if you have decided the mechanisms by which RTP supports this are not appropriate, then rather than building something which isn't actually using 90 percent of RTP's features and calling it RTP, maybe we should just do the new thing.
N: If you look at how it's done for audio, for instance, it's done in a certain way, and video is different. But for sure, I agree that all the work that was done on codec-specific packetization has some benefits, and with this approach we lose some of those benefits. So we need to document this and compare it against the benefits, and if we see that we're gaining more with this approach than we are losing, then I think we should go ahead.

N: That's why it would be very beneficial; there's a GitHub repo where we could start discussing all the benefits and what we are losing, and it would be great if you could provide input there, raise issues, and state precisely what we are losing. So I…
N: Yeah, but it's good to be able to continue the discussion in between meetings, I mean.

N: But we need to get to precise issues. If it's like: oh, we've not done this, we decided not to go there for 20 years, and now we are planning to go there, and we should not try to do it because that's not how we've done things; that's not great. If it's more like: hey, there's this issue and you cannot solve it with what you're proposing; then that's great, we can try to tackle those precise issues and we can make…
O
Progress
on
what
I'm
saying
is
not
don't
consider
this.
What
I'm
saying
is
that,
if
you're
going
to
build
something
which
basically
throws
away
most
of
rtp,
then
throw
away
rtp
and
do
it
different
use
a
different
base
because
you're
not
using
most
of
rtp's
features
here,
you
know
the
result,
isn't
really
rtp
anymore.
So.
A: Yeah. What might be helpful, and I don't think we'll be able to do it here, but at some point, maybe in the next meeting, Youenn, would be to talk about how insertable streams work, so everyone's on the same page. This may in fact be another issue, but insertable streams does try to use the RTP infrastructure; it has its own issues, but that might be a useful thing just to get everyone on the same page at some future meeting.
H: Sorry, my headset cut out right as you prompted me; I didn't hear the prompt. I was going to agree with you, in that we are using a significant amount of RTP's features, everything from the header to recovery mechanisms to SSRCs.

H: The features that are used for things like recovery in video are, I think, a pretty small fraction. So anyway, I agree we should try to get the issues documented, here on the mailing list or on GitHub or whatever, and then we can try to understand: are these really critical issues? There's a lot of great stuff there.
I: What I see as the issue here is that Colin and the others are talking past each other, because you need a document that actually describes how you can utilize these features. I know that some implementations will not use them, but at least have something that describes: here are the features of RTP, and here is how you actually use them, so you can get this benefit from existing RTP, for example repair and all these other things. That needs to be described. And looking, for example, at some of the grouping, and how the streams relate to codecs, especially for the scalable codecs, you may actually have some flexibility to do things which weren't that easy to do with RTP, for experimentation and so on, within this framework. But I think we need something which actually talks a little bit about the architecture of this change and how you use RTP from that higher level.
G: I agree with that; I mean, we have to have it. The only thing is that I think it is too early to have it, also because it depends on some discussions that are happening in SFrame. In the end we will have to have this, for sure, because otherwise we cannot expect to have any interoperability between the endpoints.
D: Yeah, I think the mental model that I have for SFrame's impact on RTP is similar to RED or FEC or other things that also impact packetization. So I think we shouldn't look at this as redefining the packetization of the fundamental codec; we should look at this as a wrapper. We're providing a privacy wrapper, just like RED provides a redundancy wrapper and FEC provides a resilience wrapper, I think.

D: If we look at it that way, then there probably are still a lot of pieces of RTP that can be used, and we just define the extra bits that these extra transforms add, in the same way that RED or FEC would have defined them. And we still have the fundamental RTP packetization described in whatever negotiation you're using; if you're using SDP, you still negotiate the actual codec and its actual payload format parameters.
A: Just wondering: Jonathan, a question. We have the rest of this presentation plus another one; how do you propose to manage the time?
E: Yeah, absolutely; it's not that long until March either. We seem to be having an active discussion on this, which is, I think… How many more slides are there on this?
N: Yeah, I'm still not exactly sure what exactly is being asked of us to provide, what additional information is requested, so it would be good to clarify precisely the next steps. Certainly one next step is to have a draft.
O: Right, I mean, that was what I was going to say. I think we need a draft that describes how this fits with the rest of RTP, because clearly the people who are pushing this think they have provided an overview, but I still have a bunch of questions; it doesn't seem like we have a clear understanding of how this works, from my point of view.
B: Youenn and Sergio, I think I know how we can move forward with that. I propose, in the spirit of time, that we take that offline; the three of us can speak a little bit and then we come back to Colin.
A: All right, so we have six minutes left. Is that enough time to talk about QUIC RTP tunnelling? I suspect maybe not.

A: You'll talk quickly? If you're willing to do that, why don't we go to that, then. Okay, Sam, you have the floor.
L: Thank you very much. Hello, my name is Sam Hurst, and I'd like to talk today about my proposal for carrying RTP sessions over a QUIC transport, which I'm calling QRT. Next slide, please.

L: We currently use RTMPS, which is a little archaic at this point, with no support for the more modern video and audio codecs which, thanks to the hard work of everyone in AVTCORE, we could otherwise use: things like HEVC and VVC and VP9, whatever we wanted. So we've been looking at other protocols, such as Haivision's UDT-based SRT and the Video Services Forum's RTP-based RIST.

L: Our requirements for an IP contribution protocol include strong encryption for authenticity protection, as well as good codec support and low-latency streaming, and the ability to carry multiple RTP sessions on one sort of logical packet flow. We noticed that RIST wraps RTP sessions in a set of GRE-over-UDP sessions, and QUIC's concept of streams gave me the idea of doing this over QUIC instead. Next slide, please.

L: In addition, it has other features, like connection migration, which allows a QUIC client to move between different networks for reliability and performance reasons. And while QUIC is normally a fully reliable protocol, there is a datagram extension which allows you to carry data that is acknowledged by the protocol, but where the application can control any ARQ behavior, retransmissions, and that sort of thing on top. Next slide, please. So, with QUIC's datagram extension frame, QRT is a relatively simple mapping of RTP on top of the DATAGRAM frame within the QUIC transport. I've deliberately designed the system with the goal of supporting the carriage of several RTP sessions down a single QRT tunnel, and I'm replacing the traditional RTP/RTCP port pair with an opaque flow identifier, which is similar to how QUIC streams work, if you're familiar with that. QRT uses QUIC's selective acknowledgements to satisfy the conditions of the RTCP generic NACK framework, so the application will feed that information back in for that. Next slide, please.
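Based on the description above, here is a rough sketch of what one QRT datagram might carry; the field order and the QUIC-style varint for the flow identifier are my illustrative assumptions, not the draft's normative layout.

```python
def encode_varint(v: int) -> bytes:
    """QUIC variable-length integer (RFC 9000, section 16); assumes v < 2**62."""
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 2**30:
        return (v | 0x80000000).to_bytes(4, "big")
    return (v | (0xC0 << 56)).to_bytes(8, "big")

def qrt_datagram(flow_id: int, rtp_packet: bytes) -> bytes:
    """One QUIC DATAGRAM payload: a flow identifier, then a full RTP packet.

    The flow identifier stands in for the RTP/RTCP port pair, which is what
    lets several RTP sessions share a single QUIC connection.
    """
    return encode_varint(flow_id) + rtp_packet
```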
L: This is just an example use case to show how QRT can carry several independent RTP sessions over several QRT flow identifiers; it's just an example remote production that I came up with. Everything in the blue box is a single tunnel, and we're running multiple RTP sessions down it. Next slide, please.

L: And this is another example, which shows one of QUIC's beneficial features, connection migration: once the client comes into range of another network which has fewer hops or potentially more bandwidth, it can swap to the other network without interrupting the connection, as far as the client application is aware. This can also be used in the case where one network connection dies or becomes unusable for any other reason.

L: As for the future of the draft, I've got a list of different things which I'm looking into adding, which I've got on screen there; I'm happy to consider any other features as well. I notice that a few questions have just popped up on the screen.
C: Yeah, this is just a silly question, but I should ask it for the chairs: this draft's file name is targeted at the QUIC working group. Is AVTCORE the right place for this, such that refiling it with a different draft name would get it more attention from the right people? Thank you.
E: I think it's probably sort of an open issue what the right working group is, but I'd be inclined to say it's probably in scope here, possibly with some charter tweaking if necessary; we need to look over what the current charter says.
C: This seems very timely, because QUIC has open room to take on newly chartered activity now that they're getting version 1 out, so it seems like an excellent time to have that conversation. Thank you.
I: Yeah, may I clarify, as transport AD here: the intention for the QUIC working group is really to focus on QUIC extensions and the like, and for what is just a protocol mapping onto QUIC, like this, I would say that mostly belongs in the working group that owns the original protocol; that's where the mapping should be done. So I would say this is probably an AVTCORE matter more than a QUIC one. If there's a need for extensions and so on to realize this, then it's a matter of interacting with the QUIC working group.
L: Yeah, so far there's no actual interaction on that side with any extra extension frames in QUIC or anything; it's purely a protocol mapping at the moment. It depends.

L: Okay, right, that's fine. I think the name mainly came from 'QUIC RTP tunnelling', and it just happened to raise questions.
H
So
we've
looked
into
this
several
times
over
the
past.
You
know
five
years
and
one
of
the
key
sort
of
underlying
problems
ends
up
being
that
both
quick
has
congestion
control,
and
you
know
the
rtp
has
its
own
congestion
control
ideas,
and
these
things,
like
are
hard
to
kind
of
decouple
like.
Is
this
something
that
you
have
been
able
to
solve?.
L: So, by using the datagram frame, we don't entirely negate any sort of congestion or flow control in the QUIC space, but we're essentially just sending UDP packets and using the QUIC transport encryption.

L: Yeah, I've been trying to keep up to speed with exactly what congestion control means for the datagram frame at the moment, and this is something which I'm really hoping to experiment with when I get that far; I'm sort of just lobbing packets backwards and forwards at the moment. So I don't really have a firm answer on that just yet.
A: Since we're almost out of time, I just want to get clear on what the next steps are. I guess, Jonathan, you suggested republication as an AVTCORE document?
L: Absolutely, yeah. Any and all review comments on this are great. On the next slide, which is my last one, I've actually got a link to a GitHub project where I collate issues and things, so you can raise stuff there, or you can send me an email directly, or do it on the mailing list and I'll see it. I'm more than happy to take feedback from anyone.
A: Okay, thank you very much, everybody.
E: And there's one comment on the chat: please try to bring this to the attention of the QUIC working group as well, even if this is going to be the group that probably owns any subsequent work on it, just so that that group is aware of it. That's Jana pointing that out in the chat.
E: All right, well, thank you, everybody, and we will see you whenever our next meeting is. Have a good day.