From YouTube: IETF109-AVTCORE-20201119-0500
Description
AVTCORE meeting session at IETF109
2020/11/19 0500
https://datatracker.ietf.org/meeting/109/proceedings/
B: I believe we are now at our starting time. We've now let people begin to come in, but I think we're probably ready to start, so let's see.
Bernard: Let's see... okay, yes, good. So if you're in the Meetecho room, you should be seeing the chat. If you joined early and it's not working, reload the page; Meetecho was having a brief issue, which they have fixed. So, other than that, yes. See the notes here.
B: Yeah, here are the instructions on how to use Meetecho. If you've been in one of these meetings previously, you probably, hopefully, know this, but if not, here are some tips. Important to note about the queue: when you come to the front of the queue and we call on you, you have to unmute yourself, rather than us unmuting you. And if you want to send video, that's great, but you have to send audio also, separately.
B: Please be familiar with these, and if you have more questions, ask somebody offline or follow the links on the page about this meeting. We have the agenda, we have the notes, and we have the Jabber room. Bernard and I are the chairs; Brian Rosen will be Jabber scribe. If you aren't able to or don't want to speak, you can type something into the chat and he can relay it for you.
B
He
fix
it
with
mike
and
we
have
stefan
wenger
taking
notes
on
his
own,
but
if
somebody
else
also
wants
to
take
notes
on
the
kodi,
md
are
also
on
their
own.
That's
also
always
appreciated.
Please
send
them
to
us
or
let
us
know
where
they
are
after
the
fact
now
we're
gonna
do
the
slides
tab.
B: Our agenda. The one agenda change we had is that Colin Perkins, who was supposed to be the first item, has to be in the, I think, COINRG or some other IRTF group, because he's the chair of that. So we're going to bump him later, until he's able to come here. He's in the Jabber, so we'll be able to let him know when it's time, but he's not yet here in the Meetecho. Hopefully he'll be able to join, so we'll move Gunnar first, and then we'll see if Colin has joined; if not, we'll go on to Justin and so forth. Other than that, we have all our...
B: I don't think we have all the participants yet, so we'll have to see if everybody is here, but we'll get to things as we can get to them.

B: And Stefan. Okay, yeah. We should be able to... well, the next slide is working group items that are not on the agenda, so we can talk about that briefly.
B: And then, speaking of which, here are the draft statuses since the last meeting. We have published one RFC, which was the TSVCIS payload, so good work there. We've got four documents that are in AUTH48-done, which basically means they're waiting on other documents in their cluster to finish AUTH48, as I understand it. So we're done as far as we're concerned, unless something exciting blows up, but hopefully those should be published once their clusters clear. Other than that, we've got two things that are...
B: I think both of these are probably me falling down a little bit. One is VP9: we said last time it was ready for working group last call, and we didn't do it, so we'll try to do that now. I think, Bernard, you should probably be the one to take that, since I'm an author on it, so we should probably do that after the meeting. And then frame marking: Mo made some changes after the last IETF and said these are changes which are, you know, substantial enough that they ought to see another working group last call, which I then completely failed to respond to, and I apologize; it's my fault.
E: Actually, if we could mark action items for both those working group last calls in the notes, that would be helpful.
B: Yes. So the Tetra payload draft is expired; it expired, I think, at the beginning of this year. It's not clear to me that anybody is still interested in pursuing that, so we might want to drop that from the milestones.
B: But probably we should ping the authors first. So, let's see, if you could put something in the minutes that the chairs should...
B: ...should follow up on that, and if not, remove the milestone, unless anybody else thinks they want to take it over. And then, finally, we had a call for adoption on EVC, which seemed to be favorable, so I think the authors should resubmit that as a working group item: there were some people saying yes and nobody saying no. I'll say you can resubmit that as draft-ietf-avtcore. And Stephan, I see you have your question.
I: Hey, oh, it's right here. Did you say we should upload a working group draft for the EVC?
B: Yeah, yeah. Start from the current individual draft as your baseline.
B: Yeah, all right. And finally, as I said, thanks to the Secretariat, and I guess also Meetecho, everybody here, and the ADs, for everything they've done to make this all work as well as possible. And now, I guess, let's get started. So I think we're going to start with...
B: Okay, just let me know when you want me to advance the slides. Actually... make sure you're sending audio, Gunnar. We only see your video, not your audio.
K: The idea is that we shall not prescribe presentation, but we have some background requirements from the ITU-T presentation-level standard T.140, and it says what you shall present. Even if you have multiple parties sending simultaneously, you need to present it readably, in at least phrases or sentences, with the source indicated; you should have multiple insertion points, so that the text can flow from multiple parties at the same time; and the approximate timing of the entries shall be visible.
K: So this is one way you can present multi-party real-time text with three participants. Next slide, please. And next slide. There is another view in a kind of chat style where you have... oops, the window wrapped it, but you saw it. It looks like a traditional chat style, but the text is flowing in multiple insertion points, so that you can follow more than one participant's entries in real time. And next slide, please.
K: Good presentation of text requires multi-party-aware actions by the receiver. So the receiver must be able to handle this RTP flow and distribute the received text from multiple simultaneously sending sources, so that it can be presented in a readable way, even if it arrives simultaneously.
K
One
is
for
multi-party,
aware
endpoints
and
with
rfc
4103
format
with
one
source
per
packet
in
the
csrc,
so
each
packet
must
not
have
more
than
one
source
and
the
the
source
is
indicated
in
csrc
and
the
other
one
is
fallback
for
endpoints
who
do
not
understand
this
multi
party
format.
So
there
the
mixer
needs
to
make
the
mixing,
and
that
is
not
at
all
as
functional
as
the
main
solution.
But
it's
needed
because
there
are
real-time
text.
K: A series of good comments came from Brian just at the beginning of November, and one important question was: why do we need to change between two sources in a slow and clumsy way? If we have more than one source in the mixer, why don't we just interleave them? That made me rethink and introduce a much more efficient interleaving method, and that was included in the version I submitted yesterday.
K: The new way is that when there is text from more than one source available for transmission in the mixer, you send the next packet with all available text from the source with the oldest remaining unsent text, and you send it 100 milliseconds after the latest packet. I have this timing so that we don't overload.
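The interleaving rule described here can be sketched roughly as follows. This is a minimal illustration, not text from the draft; all names and the queue structure are assumptions.

```python
class RttMixerScheduler:
    """Sketch of the mixer rule: every send interval (100 ms), emit one
    packet carrying all pending text from the single source whose oldest
    unsent text has waited the longest."""

    SEND_INTERVAL = 0.1  # seconds between packets, per the 100 ms rule

    def __init__(self):
        self.pending = {}  # ssrc -> list of (arrival_time, text)

    def enqueue(self, ssrc, text, now):
        self.pending.setdefault(ssrc, []).append((now, text))

    def next_packet(self):
        """Return (ssrc, text) for the next packet, or None if idle."""
        candidates = {s: q for s, q in self.pending.items() if q}
        if not candidates:
            return None
        # Pick the source with the oldest remaining unsent text.
        ssrc = min(candidates, key=lambda s: candidates[s][0][0])
        text = "".join(t for _, t in candidates[ssrc])
        self.pending[ssrc] = []  # all available text from that source is sent
        return ssrc, text
```

With two sources queued, the source that has waited longest goes out first, one source per packet.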
K: So the receiver now needs to look at the timestamp of the latest received text from each source, and when it receives anything more, it evaluates the original timestamp of the redundant text, checks whether text was already received from that source with those timestamps, and recovers the text if it was not received as original before. The marking of possible loss is...
K: ...more tricky now. It's harder to detect whether there was a complete loss, rather than a possibility to recover from the redundancy, so this "lost" sentence is in fact wrong. You can have... oh yeah: if you have lost three packets anywhere in the last second, then there might be a loss, and this is the bad side of the...
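The receiver-side recovery rule described above can be sketched like this. It is an illustrative reduction, not code from RFC 4103 or the draft; the generation list and field names are assumptions.

```python
class RttReceiver:
    """Per source, remember the RTP timestamp of the newest text already
    delivered.  Each packet carries redundant generations as
    (timestamp, text) pairs, oldest first; anything not newer than what
    we have seen is a duplicate, anything newer is recovered."""

    def __init__(self):
        self.latest_ts = {}  # ssrc -> highest timestamp delivered

    def receive(self, ssrc, generations):
        """generations: oldest-first list of (timestamp, text)."""
        out = []
        last = self.latest_ts.get(ssrc, -1)
        for ts, text in generations:
            if ts > last:      # not seen before: recover it
                out.append(text)
                last = ts
        self.latest_ts[ssrc] = last
        return "".join(out)
```

A retransmitted generation is silently dropped, while new text (whether original or recovered from redundancy) is delivered exactly once per source.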
C: Gunnar, there's a question in the chat room that says: how will this "lose 3" mechanism work with new packet recovery mechanisms like RACK?
J: Yeah, I'm Daniel; I asked the question. I actually joined the queue, but nobody called on me. So, RACK is a new packet recovery mechanism: instead of fast recovery on having three lost packets, what we do is use time, and so we resend packets based on time. I'm just wondering how this is going to interact with the mechanism of looking at three losses.
K: But now we changed it, so that when we have received... we have a redundancy scheme to be able to recover from packet loss, and now we first look at the timestamp of the RTP packets and look back to see whether we have received the redundant part before, comparing its original timestamp with the timestamp in the redundancy header.
K: No, right: we're using RTP, and the redundancy of RFC 2198, which transmits every chunk three times. So you're right: it is UDP and it is RTP, and this draft is just a small addition to the original real-time text transmission, RFC 4103.
C: Yeah, so I will check -10 to see how you've resolved the rest of my comments and look at this new mechanism. If I think the text is okay, I do think the document at that point should be ready for working group last call. And I want to remind participants that this is high-priority work. There are implementations waiting for this to solidify so that they can do it, and there are planned deployments in Canada related to emergency calls.

C: We really don't want to have multi-party-unaware endpoints if we can deal with it, but at any rate, this is very high-priority work for some people, and it will get deployed very quickly. So please, let's finish. Thank you.
B: Right, yeah. So let's try to get that to working group last call, if you think all open issues are resolved. Whoever's taking notes, please also note there that we'll send that to working group last call shortly. On the other document, my suggestion would be, if we do want to publish that as informational...
B
I
mean
I
originally
said
no
and
then
I
thought
other
people
said
yes,
so
I
said:
okay,
that's
fine,
but
if
other
people
are
saying
no
now
I'm
happy
to
say
no,
I'm
not
do
so.
B
C: Yeah, I'll be on the list shortly; I mean, I will review -10 shortly and send any comments I have.

B: Sounds great.
B: All right. And then we'll figure out what to do with the other document, if anything, later. Yeah, we can wait with that question. All right, so is that everything for this? I think so. So let's move on. Justin's next; we still don't have Colin. So let's see, let me share Justin's slides.
O: Let's see how this thing works. Okay, so this is a continuation of the topic that we discussed at 108, where we managed to make good progress. It's a somewhat modified presentation and an updated document.
O
You
know
the
overall
problem
statement
is
that
you
know
basically
srtb
only
encrypts
the
actual
payload
of
rtp
packets,
and
you
know
historically
that
was
not
really
a
big
deal.
You
know
most
the
information
that
was
covered
in
you
know,
header
extensions
was
considered
to
be
non-sensitive
next
slide.
O
But
now
we
are
actually
sending
a
fair
amount
of
stuff
in
header
extensions,
and
some
of
these
have
even
been
noted
in
their
defining
rfcs
is
something
that,
where
they're
carrying
information
that's
somewhat
sensitive,
in
particular
ssrc
audio
level,
which
actually
carries
like
the
volume
of
you
know
not
just
who
is
speaking,
but
the
exact
sort
of
you
know,
or
a
quantized
energy
level
and
video
content,
type,
also
kind
of
indicates
whether
someone
is
sending
video
or
a
screen
share,
which
can
be
used
to
indicate
oa,
who
is
speaking
in
a
meeting
and
maybe
who's
presenting.
O
So
there's
definitely
some
information
that
you
know
you
could.
Then
you
know
mine
out
of
this.
You
know
header
information
and
even
if,
like
you're,
not
you're
covering,
like
you
know
all
the
actual
details
of
what's
being
discussed,
you
can
still
learn
things
about
the
clients.
Do
they
have
hardware
codecs?
Do
they
support
hdr?
What
type
of
application
is
it
where,
like
you
know,
of
course,
our
trend
is
to
try
to
move
away
from,
like
letting
any
of
this
information
be
carried
in
clear
text.
O: RFC 6904 did try to address this, but only by encrypting the actual payloads of these extensions, rather than the extensions and their IDs themselves. So even if you were to encrypt the bodies, you'd still be able to tell what extensions were being sent around by virtue of the IDs, because most of the SDPs that exist right now in WebRTC implementations are fairly static, and so you can probably figure out what's being sent around just by looking at the numeric IDs.
O: What we're going to do is encrypt the entire extension block, including the IDs and their lengths, and while we're doing that, we'll also encrypt some of the other things that are kind of easy to encrypt in the SRTP packet, most notably the CSRC identifiers. We discussed at IETF 108 whether there were more things that could be encrypted.
O
But
it
turns
out
that
the
timestamp
and
the
ssrc,
sorry,
the
sequence
number
and
the
ssrc
are
used
for
the
key
schedule
for
encryption
and
then
like
there's,
some
other
things
needed
for
demuxing
like
the
payload
type,
and
you
know
some
of
the
other
bits
that
we
decided
that
you
know
it's
more
trouble
than
it
would
be
worth
to
actually
try
to
go
and
encrypt
those
pieces.
If
you
really
wanted
to
do
something
and
basically
secure
all
that
information,
you'd
be
better
off
with
an
encapsulating
protocol
like
dtls.
O
So
overall,
like
this
seems
like
a
pretty
simple
thing
we
can
do.
We
can
use
the
existing
srtp
encryption
mechanism
to
basically
use
the
offset
of
where
you
start
encrypting
rather
than
starting
at
the
payload.
We
just
started
earlier
and
cover
the
csrc
and
extension
block
and
it's
we
don't
change
any
of
the
actual
where
the
lights
are
in
the
packet.
So
it's
compatible
with
all
existing
rtp
parsing
code.
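The "start encrypting earlier" idea can be made concrete by comparing where the plaintext would begin in each scheme. This is a simplified sketch (no validation, fixed-header RTP only), not code from the draft.

```python
def cryptex_plaintext_offset(packet: bytes) -> int:
    """Under the scheme described above, encryption starts right after
    the fixed 12-byte RTP header, so the CSRC list and the
    header-extension block end up inside the ciphertext."""
    return 12


def classic_srtp_plaintext_offset(packet: bytes) -> int:
    """Classic SRTP instead skips past the CSRCs and any RFC 8285
    extension block, leaving them in the clear."""
    cc = packet[0] & 0x0F              # CSRC count
    offset = 12 + 4 * cc
    if packet[0] & 0x10:               # X bit: header extension present
        ext_words = int.from_bytes(packet[offset + 2:offset + 4], "big")
        offset += 4 + 4 * ext_words    # 4-byte ext header + body
    return offset
```

For a packet with no CSRCs and a one-word extension, classic SRTP would leave 8 extra bytes unencrypted; the new scheme covers them.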
O: RFC 5285 defines a mechanism for identifying that 5285 header extensions are in use, which is the thing we use to figure out how to encode the header extensions: they put what they call a "defined by profile" block in there, a 16-bit integer of a certain form that tells us things are laid out in a certain way. Here we've basically introduced a new 16-bit integer which says that we're doing 5285 header extensions, but this time they're all fully encrypted.
O: If you want to get into the details, there are actually two things defined in 5285: one where the ID and the length are contained in a single byte, and another where the ID and the length are each individual bytes, known as the two-byte header extensions. Here we have two new 16-bit defined-by-profile identifiers, for the one-byte and the two-byte ID/length combos. Martin, did you have a question? I see you're showing up as green on my screen.
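A receiver can distinguish the encrypted forms from the classic ones by the "defined by profile" value alone. The encrypted values below are the ones the eventual RFC assigns; the draft under discussion here may have used different ones, so treat them as illustrative.

```python
# "Defined by profile" values for RTP header extensions.
ONE_BYTE_PLAIN = 0xBEDE        # classic one-byte form (RFC 8285)
TWO_BYTE_PLAIN_PREFIX = 0x100  # classic two-byte form: 0x100 in the top 12 bits
ONE_BYTE_ENCRYPTED = 0xC0DE    # one-byte form, fully encrypted (assumed value)
TWO_BYTE_ENCRYPTED = 0xC2DE    # two-byte form, fully encrypted (assumed value)


def extensions_encrypted(defined_by_profile: int) -> bool:
    """True if the extension block (IDs, lengths, and bodies) is
    encrypted under the scheme described in the talk."""
    return defined_by_profile in (ONE_BYTE_ENCRYPTED, TWO_BYTE_ENCRYPTED)
```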
O: So that's the basic mechanism. There are some special cases.
O
One
of
them
is
that
you
know
if
you
want
to
send
a
packet
that
doesn't
have
header
extensions,
but
does
have
csrcs
that
you
want
to
encrypt
the
only
way
that
you
can
tell
their
counterparty
that
the
header
extensions
are
actually
being
encrypted
is
by
adding
a
you
know,
the
32
bits,
basically
for
an
empty
extension
block
where
you'd
have
the
16
bits
for
the
you
know,
defined
by
profile
thing
saying
that
the
sections
are
encrypted
and
then
their
other
16
bits
can
basically
say
it's
just
a
you
know
a
zero
length
extension
block,
and
that
was
pointed
out
by
sergio-
and
I
think
that's
a
pretty
clean
solution
to
that
particular
problem-
may
not
actually
be
a
real
world
case.
O
Most
of
the
time
you
know,
extensions
are
almost
always
being
sent,
but
only
these
covers
that.
Let
along
this
end,
the
actual
capability
is
signaled
via
sdp
in
the
draft.
It
talks
about
a
new
thing,
called
xmap
encrypted
jonathan
pointed
out
that
xmap,
you
know
that
that
sort
of
string
has
its
own
connotation
about
the
extension
map
between
the
actual
extension
string
and
ids
and
since
we're
not
really
encrypting
like
the
the
map,
so
to
speak,
you
know,
maybe
we
ought
to
have
a
different.
You
know
attribute.
O: ...that indicates this particular mechanism. We haven't really come up with the exact... string naming is hard, but for want of a better attribute, I'm now suggesting a=cryptex. And then, finally, for the way this would actually be configured by the application, we have some precedent here with WebRTC and how it sets things like the bundle policy.

O: Here you'd have a policy that can be set for cryptex, and you either say "negotiate", in which case you'd offer cryptex but still be okay if you didn't get it from the other side, or "require", where you'd offer cryptex and fail if the remote description did not include the cryptex SDP attribute. Next slide.
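As a small sketch of the negotiation side: an answerer could check a received offer for the suggested attribute before deciding whether to enable the mechanism. The attribute name was explicitly still undecided in the session, so "a=cryptex" here is an assumption, and the offer text is illustrative.

```python
def offer_has_cryptex(sdp: str) -> bool:
    """Check a (hypothetical) SDP offer for the a=cryptex attribute
    suggested in the talk.  With a 'require' policy the session would
    fail when this returns False for the remote description."""
    return any(line.strip() == "a=cryptex" for line in sdp.splitlines())


OFFER = """v=0
m=video 9 UDP/TLS/RTP/SAVPF 96
a=cryptex
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
"""
```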
O: So, overall, not a lot of open issues; implementations are progressing. Sergio is working on getting this into libsrtp, so we should be able to actually implement it and test it out pretty soon. I know Jonathan has also been helping with Sergio on getting actual test vectors, so we can make sure that everything is producing the encryption output that we expect, and I'm hoping that we can have a working implementation in the next couple of months.
B
I've
implemented
in
the
bitsy,
srdp
library
and
sergio-
and
I
are
getting
you
know,
agree
on
the
test
vector.
So
that's
a
good
sign,
good
good.
E
So
what
is?
Is
there
an
action
item
here,
such
as.
B
Yeah
and
I.
B
Yeah,
so
does
anybody
have
any
opinions?
I
mean
I
I
I
would
think
it
sounds
like
you
know
my
my
first
attempt
at
encrypting.
Her
extensions
was
over
complicated
failure.
So
I'm
happy
to
do
something.
That's
easy
and
I
implemented
this
in
a
day
so
on.
So
it's
definitely
easy.
So
does
anybody
else
have
any
opinions
on
whether
this
is
a
good
idea
and
just
whether
he
should
adopt
it.
O
So
harold
asks:
can
we
lose
one
bite,
header
extensions,
while
we're
at
it?
You
know
this
is
the
kind
of
thing
where
we're
actually
at,
like
16
header
extensions.
I
think
in
in
what
you
see
right
now,
so
I
can
ask
I
understand
why
harold
might
be
proposing
that
the
one
bite
is
actually
a
nice
optimization.
You
know
you're,
basically,
saving
you
know
one
byte
for
each
header
extension,
which
you
know
while
not
a
lot.
O
You
know
I
always
think
of
like
codec
engineers
go
to
quite
some
lengths
to
save
even
like
one
byte.
So
I
I
don't
really
see
any
real
value
at
trying
to
you
know,
remove
it
at
this
point
in
time.
It
would
also
sort
of-
I
think
you
know
the
separation
of
concerns
here.
Is
that
we're
not
really
changing
5285
any
other
way
than
just
kind
of
encrypting?
You
know
I,
I
think
he
keeps
things
nice
and
simple.
L: Right, yeah. The reason why I ask this is because we've seen a number of people saying: oh, I get to this one-byte/two-byte cliff, and I'm afraid of what will happen if I cross it. So I'd like to not have people turn off extensions at random, just to get under 14.
L: Okay, I'll let it rest... well, yeah, it's okay to leave it the way it is.
E: As an individual, I just wanted to say, just my personal opinion: I think we should call for adoption. I think it's ready.
P: Hello, yeah. So, just to clarify: is this expecting all middle boxes...
O: Well, it depends. If there are, like, packet capture tools, they can continue to work with this if they're just trying to look at what packets and what lengths are being sent. But if you're trying to understand the packet, then yes, you would need to be able to decrypt this, and an SFU or something would certainly be expected to be decrypting this and processing it in clear text.
P: Okay, so if SFUs are expected to be processing this, then I think the CSRC concerns aren't that big of a deal, because it would take someone that's actually participating in the RTP session to get confused by those scrambled CSRCs. So I think that's probably...
O: I think the way you look at this is that it basically upgrades DTLS-SRTP, which is a point-to-point encryption system. So for talking to the SFU, it would implement the actual decryption for SRTP, both for getting at the payload and for getting at the encrypted CSRCs and header extensions.
P
Okay,
so
if
you
were
to
do
s
frame
or
anything
like
that
or
any
other
kind
of
perk
or
any
other
kind
of
end
to
end
crypto,
then
this
wouldn't
be
part
of
the
end-to-end
context.
O
Yes,
okay,
so
I
I
was
just
going
to
comment
back
towards
harold
harold's
comment
about
one
bite
or
something
which
I
I
I
see
the
value
of.
I
get
it,
but
I
don't
think
it
really
has
anything
to
do
with
this
draft.
I
think
we
that's
a
very
separable
thing
and
we
should
do
that
separately
if
we
want
to
do
a
one
bite
thing
and
of
course
it
could
work
with
this
mechanism
too,
so
I'd
rather
not
try
and
put
it
in
this
draft.
O
I
think
we
should
do
that
separately,
and
you
know
this
draft
just
tries
to
clean
a
difficult
part
of
our
encryption
that
no
one
really
ever
implemented,
and
it
just
makes
it
with
the
hop
by
hop
encryption
of
some
stuff.
That's
really
hard
to
do
with
our
existing
rfcs.
So
that's
it.
B
Okay,
so
I
it
sounds
like
people
are
in
favor
of
adopting
this
so
we'll
put
out
a
call
on
the
list
for
any
other
opinions.
So
note
takers
make
sure
we
have
that
slowed
it
down
that
we're
going
to
do
that,
but
it
sounds
like
at
least
from
people
who
are
here.
B
People
are
in
favor,
okay,
and
I
think
we
now
have
colin.
So
we
can
go
to
rtb
conjunction
control
feedback.
Are
you
saying
you're
ready,
colin.
F
B
F
Sorry
for
being
late,
I
had
to
it's
okay,.
F: Okay, so the -08 version of this draft went to IESG review a little while ago. It got a bunch of comments from the TSV review team and from Gen-ART; IANA had a couple of clarifications, and there were a bunch of AD review comments.
F: Okay, so starting with the editorial stuff; I hope absolutely none of this is at all controversial. We added some clarifications to the abstract to explain what the scope was and why we needed this. We removed some references to terminology defined in some RFCs which didn't define any terminology.
F
We
clarified
that,
since
our
tcp
packets
don't
contain
a
sequence
number
you
have
to
infer
based
on
the
time
since
you
last
last
got
an
rtcp
packet
as
to
whether
there's
any
any
feedback
packets
been
lost.
F: We made a clarification that if multiple feedback packets are lost, it's the media sender that should reduce its transmission rate, rather than the RTCP sender that should reduce its transmission rate. And since the REMB draft had expired many years ago, we put in a reference to the generic concepts rather than to that draft. Next slide.
F: Okay. I hope this is more readable for everyone else than it is for me. There are also some technical changes in the draft. This one, I think, is sort of borderline between technical and editorial: there was a field labeled the L field, which indicates whether the packets were received, and we just flipped the name of the field around.
F
So
it's
now,
the
r
field
which
indicates
over
the
packets
were
received
to
avoid
confusing
everybody,
and
we
clarified
that
if
a
packet
is
lost,
then
the
bits
in
in
the
the
the
the
field
indicating
its
arrival
time
offset
and
its
ecm
mark
must
be
set
to
zero
and
must
be
ignored
and
the
must
be
ignored
is
a
new,
normative
requirement.
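The per-packet report being discussed is a 16-bit word: the R bit, a 2-bit ECN field, and a 13-bit arrival time offset. A decoder honoring the "must be ignored" rule for lost packets might look like this sketch (field layout as in the feedback format under discussion; the dict shape is my own):

```python
def parse_packet_metric(halfword: int) -> dict:
    """Decode one 16-bit per-packet report: R (received) bit, 2-bit ECN
    mark, 13-bit arrival time offset.  For a lost packet (R=0) the
    other fields are zero on the wire and must be ignored, so they are
    simply not reported."""
    received = bool(halfword >> 15)
    if not received:
        return {"received": False}       # ECN/ATO bits ignored per the rule
    ecn = (halfword >> 13) & 0x3
    ato = halfword & 0x1FFF
    return {"received": True, "ecn": ecn, "ato": ato}
```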
F
We
also,
I
think
this
was
in
response
to
magnus,
that
there
are
an
announces
that
the
main
change,
the
the
previous
version
of
the
draft
said
that
when
you
were
negotiating
this,
you
you
either
put
in
the
the
sdp
attribute
to
indicate
this
format
or
the
sdp
attributes
to
indicate
the
the
rfc
6679
ecn
feedback
format,
but,
but
you
didn't
put
in
both.
So
if
you
put
this
one
in
you,
it
said
you
must
not
put
the
other,
the
other
formatting.
F: So, essentially, that enables the negotiation there, and that's a change to the way the signaling works. The other addition was the second paragraph here, where we add some suggestions saying: follow all the guidelines in Section 7 of RFC 6679, which is the ECN-for-RTP document; these are the guidelines that talk about how you probe to make sure ECN is still working, and that sort of thing. Next.
F: Yes, so the questions are: is the working group okay with these changes? If the answer is yes, then I think the IESG clicks the approve button and this goes to the RFC editor; if not, well, we'll need to do an update and a new review cycle.
Q: Yes, so about the ECN thing: I think the most important recommendation, which was kind of missing there, was that if you get CE marks, you send the feedback immediately. That aspect wasn't picked up, and that recommendation is quite important for the performance of the reactions to ECN CE marks; that's maybe the most important takeaway from that recommendation, rather than anything else. So...
Q: ...is that clear enough, or does that need to be... Yes, I mean, in general it was more about what was said in the presentation. If you actually go follow and read those guidelines, that will be apparent, and I have cleared my DISCUSS on this, so I thought it was clear enough. It's just that you need to follow and read the section to realize it; it didn't come through in the presentation.
Q
No,
I
don't
know
I
mean
I
do
expect
people
to
actually
be
able
to
follow
references
before
the
guy
doesn't
actually
look
through
them,
because
then,
if
they
do,
I
think
it's
fine.
It
says
that
in
this
context
of
the
presentation
in
the
session,
it
wasn't
clear
that
that's
what
maybe
the
aspect
which
maybe
realizes.
B: Well, I mean, as an individual I think all of these are fine. And I'd say, if anybody here has any issues with this, call them out now; otherwise, personally, I would say the working group is probably fine with this, because, you know, it's a discussion between you and Magnus anyway, and that's the people we'd go to to see whether the working group agrees with this. So...
F
Yeah
I
mean
I
mean,
but
the
main
technical
change
here
is
the
signaling.
So
if
people
are
okay
with
that.
B: All right. I think, since we're not getting any complaints here and we didn't get any complaints on the list, I would say the working group is okay with these changes.
N
Okay,
yeah,
I'm
good
with
that.
Is
there
a
reason
not
to
just
post
a
link
to
the
diffs
on
the
mailing
list
and
yeah.
N
N
H: Yeah, okay. Yes, so these slides are meant to be a summary of the topic, and most of you have probably already followed the discussion that we are having in the SFrame working group, but just to frame it properly: what I'm going to do is give a brief introduction now of what SFrame is and what problems we have with the RTP encapsulation of the encrypted payload.
H: So, as you know, I'm going to talk especially about WebRTC. It has a peer-to-peer encryption model, meaning that all the content between both ends of the DTLS connection is encrypted. But if you are using a media server, the media server has to actually decrypt the content that is sent between them in SRTP, in order to perform the forwarding between the peers.
H
So
in
in
this
case,
the
all
the
data
that
this
is
coming
in
the
or
that
it
is
inside
the
media
server
is
soon
encrypted,
and
it
is
obviously
a
first
and
this
view
has
attached
to
it.
So
it
is
something
that
in
many
cases
it
is
something
that
it
is
not
required,
and
it
is
also
subject
to
attacks
that
may
happen
in
the
media
survey
itself.
So
next
slide.
H
The
idea
of
of
s
frame
and
is
to
be
able
to
do
an
end-to-end
encryption
that
will
just
include
the
inner
encryption
on
top
of
the
hub,
I
hope
ingredient.
So
when
these
contents
go
through
the
media
server
or
any
intermediary
server
in
the
past,
they
are
still
encrypted,
so
they
cannot
be
accessed
by
any
intermediate
peer.
H
The
also
just
saying
again,
this
is
just
a
quick
introduction
about
this
frame.
Yes,
if
anyone
just
has
not
been
following
the
the
the
s
frame
working
group,
the
the
main
idea
and
the
difference
with
other
approaches,
as
for
example
perk-
is
that
in
they
have
a
different
approach
about
the
key
management
system.
The
the
goal
also
is
to
minimize
overhead
when
adding
the
in
the
we're,
not
in
the
end-to-end
encryption
in.
H: And we are also trying to be independent of the underlying transport, so it can be applied to RTP but also work with other transports, like, for example, QUIC. And we require that no special handling for RTX or FEC is needed, in order to minimize the changes required to implement it in both SFUs and endpoints.
H: So, next slide, please. One of the main differences with SFrame is that it does not work per packet; it does a full-frame encryption. So it happens at a different stage from what, for example, PERC is doing: the encryption actually happens between the encoder and the RTP layer. It encrypts the frame as it comes out of the encoder, before the RTP packetization happens; and on the receiver it happens after depacketization.
H: ...but before passing it to the decoder. So this causes problems with RTP packetization, and it requires some new mechanism, or at least checking how this interacts with the current RTP payloads, especially the video codecs; with the audio codecs you essentially have one packet per frame, so it is not as critical as with video. And in the web browser, on the W3C side, we have...
H: I did a quick demo of how to, for example, introduce face detection data into the media frame, so it is transported alongside the video frames from end to end. So what is coming into the RTP packetization is obviously not a media frame anymore; it is something different, and that interacts with how the RTP packetization is done.
Q: Yeah, a clarification question here around one aspect; I think this is your core question here. When you talk about a video frame: is the SFrame encrypted encapsulation going to be on the codec's smallest independently usable unit, whatever that's called? I think you need to be clear about that, for when you have scalability, etc.
Q: Its relation to RTP and RTP streams, etc., depends on whether you have one or multiple scalability layers in one or multiple streams, etc. So I think we need to be very careful here with terminology, and clarify these aspects, so that the payload format is actually recommending that you do the right thing.
H
Yeah, but for example spatial scalability and the independently decodable units — which can be slices in H.264 or tiles in AV1 — are not specific to RTP. So no, they're not.
Q
But to design a payload format that works as well as possible for generic use across as many codecs as possible, I think it's very important that, in working on this payload format, you establish the terminology and clearly map out that you can have these different cases — so that we ensure we get a payload format that is actually useful and flexible enough to cover the different use cases, depending on the codec and the encoder implementation you want to support.
H
Q
B
O
Yeah, there you go — an SFrame. And this is definitely something we thought about. In fact, the implementation of SFrame that exists right now already has to deal with multiple layers and multiple encodings, and we talked about this notion of an IDU, an independently decodable unit, which could be separate individual spatial layers, but then also sub-units — individual tiles or slices within a particular layer.
H
Yeah, so these were kind of the conclusions, or the ideas, that we gathered yesterday. We presented much more detailed information about how the current video payloads interact with SFrame in the slides that were presented two days ago; if you want more details about the specific interaction between the codecs and SFrame, it is there. But the conclusion is that not all the video codecs support SFrame easily — the video codecs' RTP —
H
Packetization of the video codecs requires different processing in order to be able to adapt to carrying an encrypted payload. This obviously causes problems in the specification, because you have to start checking in a very detailed way how this has to be done for each codec, and checking the security properties of each of the transformations — because in some cases you have to send unencrypted parts of the video frame in order for the SFU to work — and it will also create problems in interoperability.
H
Because the interoperability matrix will grow as the number of codecs increases. Also, it was seen that for the SFU to work, it requires some metadata for the frame that must be carried unencrypted, so it seems it is better to have it in a header extension than inside the RTP payload, which is what, for example, VP8 or VP9 —
H
That is what they are doing right now. And, if possible, it would be good to have some kind of solution that could be reused for other protocols that are not RTP-based, as SFrame is also agnostic of the transport.
H
So this is the main idea of this slide: to introduce the topic here, so we can decide what needs to be done in the proper group. Next slide please. So, in terms of next steps — again, this is just to trigger the discussion — there are three main items I think should be done. One is to define a new video packetization format that is codec-agnostic and allows transporting the encrypted frame as an opaque blob.
H
That blob is the output of the insertable media frames, and it can be whatever the application decides to send. Then, as I think Magnus said, we'll have to decide if we need to support partial decodability and how we support it. The other topic is whether we need to define a new header extension to carry the metadata that is required for SFU operation — specifically the layer switching when you are using spatial or temporal scalability.
H
We have different options here, just to name a few — I don't want to say which is my favorite, or to propose any solution, just to list what is available. One is the frame marking that has already been designed here. It is also possible to use the AV1 dependency descriptor that has been worked on in AOMedia, which is also meant to be codec-agnostic.
H
So we will have to check how we are going to link this new packetization format to the negotiation of the standard ones. I mean, if you are going to have, for example, H.264 with a normal RTP packetization alongside this new codec-agnostic video packetization —
H
You have to negotiate both, or decide how they are paired. And here I think it could be very interesting to look at the payload format restrictions work for negotiating the video codec formats in a codec-agnostic way, which is something that has already been worked on and is available in other work. So, this is it — again.
H
B
I mean — yeah, we're happy to have you bring a draft, and then the working group can consider whether it wants to adopt it. Presumably, since SFrame is an IETF working group, the inclination would be to say yes, though obviously the details matter, along with all the other process considerations. In terms of SDP negotiation, if it's just negotiating the properties of this meta payload type, that's generally a thing that can be done in a payload format.
F
B
F
So one of the things about the way RTP has been designed, right from the very beginning, is that it was explicitly and intentionally designed not to be codec-agnostic.
F
You know, the reason we have particular payload formats was that the group believed there was value in that, in terms of increasing robustness, allowing partial decodability, and so on. And over the many years that this working group has existed, it has declined to standardize codec-agnostic payload formats on a number of occasions.
F
Now, maybe the constraints have changed — the types of applications people are building nowadays are perhaps different from the ones they were building five, ten, or twenty years ago.
F
But if we're going to change that sort of fundamental underlying assumption of how we do the processing, we should do it deliberately, rather than just sleepwalking into it.
F
B
Okay, Mo — you're up.
E
P
I couldn't hear when you gave me the floor — with the peer connection thing, when you hit unmute too fast, you lose the indication that you actually have the floor. I think there may be a little bit of tension between what Justin was presenting and SFrame, because the premise of end-to-end is that we don't want to —
P
We think that the middle boxes could be adversaries, and in that sense, if you want to encrypt as much as possible from your adversaries, then I wonder about the things we're thinking need to be done even for the RTP headers and CSRCs and whatnot — whether they're being done for SFrame, assuming anyone ever does SFrame over RTP instead of over some other transport.
P
So I don't think that impacts Sergio's problem here too much, but it just makes me wonder about the overall threat models we're targeting for end-to-end, versus just "encrypt wherever".
Q
Yes — so this is partly a response to Colin here about RTP, and I agree with the "deliberate" point. That's why I think it's very important that we describe this very well, and why I think it's so important to say: okay, we are actually boiling this problem down to saying that, for a codec that's going to be encoded and packetized in RTP —
Q
You still need to say how it is structured into what you consider the smallest independently decodable units, and let RTP handle that. I think we're getting down to the level where RTP's ALF principle was intended to be, but we're generalizing it just slightly. I think you will still need a bit of a per-codec description in the future. Even if you are never intending to send it other than as SFrame over RTP, you would still need to sit down and actually define: okay, what do I actually put in each unit?
Q
What do I consider as the independently decodable units? How does that relate to the timing, the decoding order, and things like that? That actually is relevant, because there are aspects of this which require quite careful design. And that's why I think it's very important, in the preparation of this document, to stay at the high level and really say: okay, what do we have?
Q
How do we deal with decoding order? How do we deal with time? How do we deal with different scalability layers? And actually go through that, so that we know what metadata is needed, and what, from robustness and layering principles, needs to happen so that you can do repairs in a reasonable way on IDUs and things like that. So, thank you.
H
Yeah, I absolutely agree with that. Obviously we have to do the new packetization format, and it will have to be done thoroughly. I don't want to prevent any kind of discussion — but, for example, regarding the partial decodability —
H
I have not seen much use of it in real life, so I would love to get more information about it. For example, in AV1, even if it is supported in the codec with tiles, the RTP packetization format and the dependency descriptor do not allow the SFU to handle it. So even if it is supported in the codec, it is not actually being used at all.
D
Q
Yeah — since SFrame is authenticated encryption, you are going to end up in a situation where you either get the whole IDU or not, and I think that actually helps clarify this question a lot: we're not going to see partial decoding.
Q
If you get the first n bytes of a slice, you could attempt decoding it; but in this case you're not going to be able to decrypt and verify it, and therefore you have to throw it away. I think we should just accept that fact, deal with it, and note it as an aspect of why you need to think through what your IDUs are.
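The all-or-nothing property being discussed can be demonstrated with a toy encrypt-then-MAC construction — the XOR keystream here is NOT secure and merely stands in for a real AEAD: a truncated ciphertext fails authentication, so the receiver must discard the whole unit rather than attempt partial decoding.

```python
import hashlib
import hmac

def protect(key: bytes, unit: bytes) -> bytes:
    # Toy encrypt-then-MAC: XOR keystream derived from the key (NOT secure,
    # stands in for a real AEAD), then an HMAC tag over the ciphertext.
    ks = hashlib.sha256(key).digest()
    ct = bytes(b ^ ks[i % len(ks)] for i, b in enumerate(unit))
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def unprotect(key: bytes, blob: bytes):
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        return None  # truncated or corrupted: discard the whole unit
    ks = hashlib.sha256(key).digest()
    return bytes(b ^ ks[i % len(ks)] for i, b in enumerate(ct))
```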
O
I just wanted to comment on Mo's point about how this stacks up with cryptex.
O
I hadn't really thought about it too much, sort of figuring that most of the data in header extensions is actually something that the SFU usually wants to peek at — audio level being one obvious example, and the transport control feedback that Colin just talked about perhaps being another. We might want to audit that and see if there are things that we wouldn't want an SFU —
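As an example of the header-extension data an SFU peeks at: the client-to-mixer audio level extension (RFC 6464) is a single byte, where the top bit is the voice-activity flag and the low 7 bits are the level in -dBov. A sketch of parsing that byte:

```python
def parse_audio_level(ext: bytes):
    """Parse the one-byte client-to-mixer audio level extension (RFC 6464):
    the MSB is the voice-activity flag, the low 7 bits the level in -dBov."""
    octet = ext[0]
    return bool(octet & 0x80), octet & 0x7F
```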
O
— to look at. My guess, though, is that that stuff is probably, for the overall threat model, much less of a concern than the actual media, which in most implementations right now is currently being exposed.
O
So I wouldn't want to get too caught up in those details, but it's probably worth looking at a little bit as part of the cryptex effort.
H
Yeah — also, I think this is one of the use cases for insertable media streams, because it allows you to put metadata in the actual payload that is encrypted end-to-end, even if it started as kind of a hacky way of adding information to the video frame.
F
E
A
Let me try to phrase this. Traditionally, the mapping between a codec design and the —
A
Here we are talking now about a generic RTP payload format. So is the plan to add all the mappings for the codecs — for whoever is interested, you or anyone — into your document? Or is the plan to update the traditional documents, which would be the RTP payload formats of the codecs themselves, to add an additional section on how to do the SFrame mapping? What's the plan there?
H
Ideally, we will not have to reference the codecs when we specify the packetization. I mean, we assume that you have some format that is defined by the video codec.
H
H
A
I'm almost certain that's not going to work. And the reason is — Sergio, in your presentation you showed very clearly that you had to follow somewhat different trains of thought for the three codecs you were worried about: for VP9, for AV1 and for H.264. Yeah, but you know —
H
Yeah, but that is only because we are using the packetization defined for each of the formats, so you need to define which parts need to be encrypted and which parts do not. Since the input is not what the packetizer expects, you have to adapt it before you can use the packetization. But if you are defining a new packetization format, you don't really need to have those constraints.
B
We do need to move on, because we're getting a little tight on time. So — I think I saw Jon and Mo; sorry, I did close the queue. But just to briefly interject: I think most of these are issues that would probably be addressed in the SFrame group, not in this group, I suspect — or at least they will be discussed there first. So, Harald and —
L
So the thing that happens, if you don't have a specific RTP format for SFrame, is that we get RTP decoders trying to peek inside the packets and failing.
L
I think it was Justin's point that we have not designed the —
L
L
Yes, this is about how you tell the RTP packetizer and depacketizer, especially, that there are no user-serviceable parts inside this packet.
B
O
O
I mean, yeah — I think the comment is just: we need to be very careful about this. The SFrame group is not chartered to change the way that every future RTP codec is done in this working group; that is way out of scope. If that needs to happen — and I'm not saying we shouldn't do it —
O
It may be a very good thing to do, but that's basically what Colin said earlier, right? We're talking about changing the architecture here, so I think it will involve this working group to sort out this problem.
E
But Cullen, is it fair to say that SFrame will produce some material on what exactly the unit is — let's say the units that are to be encrypted, whether it's the decodable units, and stuff like that?
O
I think it's fair to say that if the SFrame group and this working group don't come to the same solution, it's going to be a show. But we've got to be coordinated, and it's the same people in both places, so I'm not worried about that. I do think, though, that this is going to involve some hard discussions in this working group, not just punt it all to the other group.
O
B
K
B
All right, so — should I, or whoever's presenting... yes.
I
Can you guys hear me all right? — Yes, we hear you. — Let's see, look at the screen. All right, so this is the update for the VVC RTP payload format. Next slide, please... next slide, please. Okay, so this is a recap of what happened in previous meetings. As you know, VVC is at FDIS already, so it's under ISO approval and publication — the spec is stable.
I
I
We said we're going to support only a single stream over a single transport, and we also removed the boundary packet for a cleaner RTP environment. We also agreed to include frame marking as an optional feature for SFU performance. However, this is something we probably need to reconsider.
I
Based on the current discussion on the mailing list — I have a specific slide for that, so we can discuss it when we get to that slide. Next slide, please.
I
The next slide is a quick update on the -05 version. First, we want to welcome our new co-author on board, who has been providing very detailed and valuable comments, including proposing lots of great text for the draft — so we're very happy to have you, and we know you're going to be a great asset for this work.
I
I
We are going to move forward with those sections. Another thing we want to discuss is a recent mailing list discussion regarding whether or not we're going to support gradual decoding refresh (GDR) and CRA in the Full Intra Request response, and whether or not we should support GDR in the frame marking sections. Those discussions really come from the mailing list — and thanks, Martin, for bringing this up. We believe it has a good amount of impact on both the VVC draft and frame marking.
I
Next slide. All right, just a quick overview of what happened in section 1, which is mostly the overview. As mentioned in section 1.1.3, we summarized the high-level picture partitioning for VVC, showed the differences compared to HEVC, and introduced the new subpicture feature of VVC.
I
I
So here reviews are mostly needed. As mentioned, we authors started to put out a list of SDP optional parameters, focused on section 7.2 — it's 15 pages in total, I think a perfect bit of reading before sleep for a good night — but most of it is really copied from the HEVC draft and adapted with our understanding of VVC. Of course, please review them.
I
We have a lot of editor notes in there — pay attention to those. We do have a suggestion for requesting reviews: we plan to push ourselves, and you guys, to move this forward. We'll be starting mailing list discussions for each of the proposed parameters in the draft. We anticipate a two-week review period for each of the topics — a parameter or group of parameters — with proposals per comment received; then a new revision will be uploaded, followed by discussion of the next set of parameters.
I
So, in response to the mailing list, that's our plan. Right now we're going to initiate the discussion on the mailing list, and hopefully by the end of the year we should be able to go through the whole list. That being said, that's only one of the reasons we're trying to move this forward — comments are welcome at any time.
I
It's just much easier if you can comment within the proposed review period.
I
Next slide, please. The same goes for the SDP offer/answer section: we'll be starting the same process for the relevant sections. Currently we just put a placeholder in that section. It's the same two-week review period, which we're going to initiate on the mailing list.
I
We do hope the proposed two-week review period will be okay for everyone, but do comment if you think more time is needed — though I think two weeks focused on one or two parameters should be sufficient. Next slide, please.
I
So here's the — what's it called — fun stuff, which I expect we should spend some time on. I'm not sure how many people are following the current discussion on the mailing list; this is regarding gradual decoding refresh (GDR) support for VVC.
I
So I kind of summarized the key points. It all comes down to whether or not we should support GDR in VVC, or whether there's any interest in this working group in supporting it — because this is not only for VVC itself; it potentially has an impact on AVC and HEVC as well.
I
There are two specific discussion points going on in the mailing list regarding GDR. One is regarding Full Intra Request mode, and the other is whether we should support GDR in frame marking, since we have frame marking support in the VVC draft. The first one, brought up by Martin, is whether we should allow GDR as a response to Full Intra Requests.
I
I
And if it's not in FIR, where it should go. My current understanding, based on Martin's proposal, is that clean random access should not be allowed as the response to a Full Intra Request. I'm not sure we have come to an agreement on the mailing list.
I
The second, as I mentioned, is the discussion regarding whether we should support GDR in frame marking. The discussion is around the confusion about the I bit in the short and long versions of the frame marking draft.
I think it needs to be clarified whether the I bit covers both IRAP and non-IRAP intra pictures, or whether it only indicates a random access point in general.
I
So I think we should ask ourselves the million dollar question here. We already agreed — I think two meetings back — that we support frame marking. But do we really want frame marking in every payload format? Does anybody see an existing implementation, or a large deployment, now or in the future?
I
I
can propose a couple of options here. The first option — if we really do want frame marking, let's assume we first see deployment of it — then there are a couple of solutions. This is regarding GDR: the first option requires a modification or clarification of the frame marking draft.
I
So what we can do — obviously in our draft — is specify that the I bit in the VVC draft shall be equal to zero when the picture is a GDR picture. That's basically saying that frame marking cannot signal GDR separately. So again, you can see this is not a complete solution to the whole problem, but it would work.
I
It will work, I think. The second option, which also goes two ways, comes from Mo's suggestions on the mailing list —
I
The first is to add whatever is needed to support GDR signaling specifically — like with the picture recovery count equal to zero in mind — or to take a totally generic approach and signal GDR in general, not just for VVC. The problem with that approach is that we would have to put a lot of consideration into a redesign of frame marking. One last option for us, maybe the best one —
I
I
think, is that we just don't support frame marking. Then we only need to address whether GDR needs to be supported in the FIR response message. I believe I have one more slide.
I
think we can come back to this slide for some discussion once we're finished.
I
Yeah — the last slide is simple, then we can go back. All right, this is nothing new; this is just a request from our last meeting: whether we should support the slice loss indication, or the reference picture selection indication, feedback modes. As far as we know, there's not much implementation out there, but we want to hear about things we don't know.
I
So please do let us know if there's any implementation; we're going to send out a mailing list inquiry for this question as well, so please do reply. Otherwise, I think we're just going to remove that, if there's no implementation. One last thing: we currently call this the reserved bit in the FU header, and we're still trying to figure out a good usage — right now it's just called "reserved".
I
think we can probably start a thread on the mailing list, or somebody else may have a good suggestion.
I
So please do share your thoughts on the mailing list. Yeah — that's all I wanted to summarize.
I
I
think people have questions regarding GDR — okay, well.
P
Yeah, I think it's beyond GDR. What we've been talking about for the last 20 or 30 minutes is basically high-level syntax design — whether or not it can be optimized for each codec, or whether we need to pick something that works for all codecs. In this case, GDR is a good example of something that doesn't currently work in any of the frame markings; even in the new AV1 dependency descriptor, GDR signaling wouldn't work.
P
So I think this underscores the question of whether or not we're trying to design basically a uniform high-level syntax for all payload formats — something that could be, as Justin said, hoisted up into a header instead of into the payloads.
P
I think that's really where we are, because it's not that nobody wants frame marking. Not every single person is using the specific frame marking draft, but everybody does some form of frame marking — whether proprietary, whether it's SVC, whether it's AV1 DD, whether it's the frame marking draft — everyone is doing some sort of frame marking for real conferencing systems. The question is: do we really try to harmonize all of that, make it interoperable, and bring it up into something common? Or do we —
P
— just admit that it's too hard a problem, and let individual payload formats and individual implementations go off and signal things in the way that they want?
P
I think we should start from the first question — we do want some answers, right? Do we need to support GDR in the FIR response message, as brought up by Martin? Is there any interest in the working group in supporting that?
B
I
D
Yes — to Mo: I really want to get your opinion on what the intention of the I bit is in your frame marking draft.
D
Is it just for marking an intra picture, or is it for indicating random access points? Because currently the text in frame marking says it's just an intra picture, while the text in the payload format drafts interprets it as indicating a random access point. So they're not consistent, and we need to know your intention in the frame marking draft first.
P
Yes — the intention is for it to be a random access point. It's just that other drafts don't use that same terminology: there's the IDR terminology, and the VPx and AV1 terminology, and they all differ. So maybe we should clarify in the draft —
P
— exactly what that's referring to; but yeah, it's intended to be the random access point. Okay.
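For reference, the non-scalable (short) form of the frame marking extension under discussion is, as I read the draft, a single octet whose top four bits are S (start of frame), E (end of frame), I (independently decodable — the bit in question), and D (discardable), with the remaining bits reserved. A sketch of parsing it:

```python
def parse_frame_marking(octet: int) -> dict:
    """Parse the one-byte (non-scalable) frame marking extension:
    S = start of frame, E = end of frame, I = independently decodable,
    D = discardable; the low four bits are reserved in the short form."""
    return {
        "start":       bool(octet & 0x80),
        "end":         bool(octet & 0x40),
        "independent": bool(octet & 0x20),
        "discardable": bool(octet & 0x10),
    }
```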
D
A
I have two things, and I don't want the second to be forgotten. On this particular discussion: we're discussing tiny little details that are probably somewhat use-case driven. My personal opinion on GDR: GDR means that you have multiple pictures that together create the random access point, and that's something which would break a whole bunch of deployed conferencing system designs.
A
It would be hard to implement. I want to see a real use case for that before I would support it. That's on the GDR thing. But my other point was —
A
We had, a little earlier in the slides, this idea of a staged review of these 15 pages' worth of HEVC-sourced stuff. I would really like a resolution there: can we move forward in this direction? Because otherwise we'll go into working group last call with a giant blob of data and text, and that is way too much detail to review.
B
Yeah, I think I agree that asking individual, focused questions on the mailing list — saying "this is what we propose to do; let us know if you think differently" — is probably a better approach than giving people a huge blob of text to try to digest. So yeah, I think that staged review sounds like a good approach to me.
A
B
Yeah — and I think we actually probably need to cut this off, because I do want to get through a few more topics and we're almost out of time. So, okay — do you have any other answers?
D
A
I
A
A
A
Most likely — most likely that will be the same. The key point is more that, at the moment, the complete SDP section — the offer/answer section and all that stuff — is empty in the EVC draft, and we want to populate that before we submit the EVC draft. Then we will just make changes, where possible, on both drafts simultaneously, based on the discussions.
J
B
B
B
E
Just a recap: this draft is an update to RFC 7983, and it documents QUIC multiplexing. It describes the multiplexing solution, provides some guidance on handling the overlap with TURN channels — which is not an issue in WebRTC — and then it potentially updates the DTLS content type entry on the IANA page to reference the new RFC; no other change is needed. And there is a caveat, which is that it applies to QUIC version one, but not necessarily to subsequent versions.
E
B
So — I think most people said yes, but there was a comment from Martin, which I thought was the most important, and I wanted to get his input here. Martin, if you could just speak to your comment and give your view.
B
R
It very much does — but obviously, in the settings where this is used, that's something that's under the control of the endpoints that are opting into this. I did have a read of the draft, and I thought it could be a little clearer that, when it talks about QUIC, it is talking specifically about QUIC version one, throughout.
R
D
B
In that case, I would say that yes, the conclusion was reached, and we will want to adopt this. I was just unclear what the outcome was of that conversation with you, because everybody else said yes, absolutely, with no caveats — so I just wanted to make sure you didn't think this was going to have bad ecosystem consequences.
E
B
Recorded. Well: (a) yes, we're adopting this as a working group item; but (b) Martin has some comments that I think need to be addressed — the things you said in the previous slide might need to be further caveated and clarified — and also perhaps cite Martin's grease draft, depending on its status, saying "don't do that if you're doing this", basically.
B
Okay, well, that was easier than I feared. All right — so we'll figure out a milestone for that, then probably take your existing draft as the working group item, and then we can incorporate Martin's comments in subsequent drafts.
B
Okay, so, finally — and I have to figure this out; my setup isn't great for this — the JPEG XS slides are a PowerPoint, so we have to figure out how to share that, because I don't have PowerPoint.
G
G
Yes, I'm sorry — I was not aware of how to upload slides to the datatracker. That's fine; the PowerPoint is —
G
Next time I will try to do it the proper way, like everybody else did.
B
G
Let me already start a little bit to save some time, because I only have two slides, which is pretty short.
G
I should have put some more emphasis on the actual RTP draft that we created for the payload of JPEG XS — but okay, it's my first IETF meeting; I need to learn. So, my name is Tim Bruylants. I'm working for a company called intoPIX, and I've been an active member for almost 15 years in ISO SC29, in the WG1 working group.
G
More particularly, that is the JPEG committee, and I've been working on JPEG XS for some years now. This is an already officially published ISO standard for image-based compression, with very good properties for what we use it for. Many organizations are already using JPEG XS, but in particular there are the SMPTE standards — with the move to IP — that were designed for putting audio and video over IP networks, and this is backed by the AIMS alliance.
G
So there's a lot of traction, and a lot of changes going on in the pro-AV markets, to move more and more to IP-based audio/video networks.
G
And so in this respect there is ST 2110-22, to send compressed video in a constant-bitrate fashion over IP networks. On this topic, we actually think we need an RTP payload for JPEG XS RFC, in order to be able to support the sending of JPEG XS encoded video over RTP as part of the SMPTE ST 2110-22 standard. We are also collaboratively working with the VSF, the Video Services Forum, on technical recommendation TR-08, where they are actually standardizing the transport of JPEG XS video in ST 2110-22. Next slide.
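[For context: any RTP payload format, including the JPEG XS one under discussion, rides behind the 12-byte fixed RTP header of RFC 3550; the draft then defines JPEG XS-specific payload header fields that follow it. A minimal sketch of that fixed header, where the payload type 112 and the other field values are arbitrary illustrations, not values taken from the draft:]

```python
import struct

def build_rtp_header(payload_type, seq, timestamp, ssrc, marker=False):
    """Pack the 12-byte fixed RTP header (RFC 3550, version 2).

    Sketch only: no CSRC list, padding, or header extension. The
    JPEG XS payload header defined in the draft would follow this.
    """
    byte0 = 2 << 6                                   # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

hdr = build_rtp_header(payload_type=112, seq=1, timestamp=90000, ssrc=0x1234)
print(len(hdr))     # 12
print(hdr[0] >> 6)  # 2 (RTP version)
```

[In an ST 2110-22 sender, each packet of a compressed frame would carry this header followed by the codec-specific payload header and a slice of the JPEG XS codestream.]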
G
So we've been working on this RTP payload spec, and currently we are at draft version seven. But for me it's unclear how to proceed, or whether the working group here is willing to adopt this.
G
Well, there is one open issue. We have some customers that we are working with, and they are already implementing this kind of payload packetization, and they have an issue with something we wrote in the current draft about interlaced video, if the video is encoded in interlaced mode, with respect to the timecodes. So this is an open comment that we will address.
G
So we will issue a new version at the latest next week.
B
Yeah, so I think, if that's the only remaining open issue, the right thing to do is to submit a new version of the draft with that resolved, and then the chairs will issue a working group last call for people to review the document, to make sure we have working group consensus that everything is good. Once we've done that, we'll see.
B
If
there's
any
remaining
comments,
we
can
re
address
any
of
those
comments
and
then
we
can
submit
it
for
publication.
So
that's
that
would
be
the
plan
so.
G
So
how
do
I
contact
you,
the
chairs,
to
to
tell
you
that
we
are
ready
and
that
we
uploaded
to
the
so.
B
Yes, absolutely. Okay, and I think that is all the presentations. We had one more thing on the agenda, but I don't think the presenter is here, and we didn't get his slides, and we're out of time anyway, so that worked out. I'm sorry we didn't get to that; hopefully we can figure out where the disconnect was and have that talk at the next meeting.
B
So
I
will
see
you
all
on
the
next
meeting
and
the
mailing
list
and
if
anybody
has
any
any
other
last
comments
before
we
close,
I
think
we're
thankfully,
in
the
break,
but
mitako
doesn't
cut
us
off
for
another
seven
minutes.
If
anybody
else
has
any
last-minute
things
to
say
otherwise,
no
all
right!
Okay!
Well!
Thank
you
all
for
attending
it's
a
great
evening.
Okay!
Thank
you.