From YouTube: WebRTC interim meeting 2022-12-07
B
We abide by the W3C patent policy, which is described at the link, and only people and companies listed on the status page are allowed to make substantive contributions. Our next meeting is January 17th, 2023.
B
We have slides up on the wiki. This is now being recorded. Do we have volunteers for note-taking?
C
I'm going to, yeah, I'll take notes in IRC.
B
All right, so we operate under a code of conduct that's described at the link. We're all passionate about WebRTC, but let's try to keep it cordial and professional. I think people probably know how to do this by now, but we use +q and -q in the Google Meet chat to get into and out of the speaker queue. Please use headphones or an echo-cancelling speaker, and state your full name, although we probably know you by now. So, just a note about document status.
B
Just because something's in the repo doesn't imply it's been adopted; even within a document, portions may not have consensus and should have notes to denote that. And we use CfCs, and we'll be doing a bunch of those in January, as we described. Okay. So here's what's on the agenda for today: Tim is going to talk about the network-of-users project, we'll talk a little bit about use cases and encoded transform, and finally WebRTC-PC. All right, Tim, you have the floor.
D
So, a few months ago we had this conversation about trying to broaden the input to this group, specifically around developer feedback, but without the encumbrance of people physically having to join the W3C or be invited experts, or that kind of complexity. So I spun up this thing, which is essentially just a GitHub project called webrtc.nu, network of users, and the idea is just to give a feedback path to this working group.
D
I invited a random 12 people who are WebRTC developers to help me guide it, and I'm very grateful to them for doing that. The first activity we came up with was to try and do a survey of some questions that were maybe unresolved, or a bit fierce: things that this group had discussed at some length without necessarily coming to a consensus about what the developer views would be. You can see it in the diagram.
D
The idea is we just add some feedback into this loop. Next slide, please. So we did this survey, and we got some interesting results.
D
I think probably the best way to do this is: if you have questions about any of these individual survey questions and the results, it's probably best to deal with them as we go through, rather than at the end, because I think they're all fairly disparate, and so the conversation naturally goes with the graphic, I find. This one's not desperately interesting, except that we felt we should find out where people had heard about this survey, so we'd have some context from them.
D
I think the only surprise to me is the extent to which LinkedIn is used by WebRTC developers. Maybe we're all looking for jobs, I don't know, but anyway, I was slightly surprised about that one. It shows that we did try and reach out to various sources, and there are another three pages of these, to try and get a reasonable spread of developers. Next slide.
D
So we asked about how people felt about async functions in JavaScript, and the general consensus, by a small margin, was that they make life easier; the other huge vote was that they make life easier but do add some complexity. So actually, that's a pretty resounding positive note for async rather than callbacks. I mean, we did actually put a "yay callbacks" option in there, and almost nobody voted for it. So that's kind of interesting, I think. Next slide.
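To make the contrast just described concrete, here is a minimal sketch of the two styles the survey asked about; showStream is a hypothetical helper that attaches the stream to a video element:

```js
// Legacy callback style (the "yay callbacks" option almost nobody voted for):
navigator.getUserMedia(
  { audio: true, video: true },
  stream => showStream(stream),  // success callback
  error => console.error(error)  // error callback
);

// Promise-based async/await style, which most respondents preferred:
async function start() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
    showStream(stream);
  } catch (error) {
    console.error(error);
  }
}
```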
D
So the next thing we thought we'd try and find out about is to what extent the APIs obscure, or hide, or cover, or whatever, the SDP, and the shocking answer:
D
It pretty much doesn't. Over half of the respondents have to rewrite their SDP even though they don't want to, and well over half do it because they want to or because they need to. So it's a huge percentage, and I think that's, well, in my view, I think that's a failure of our APIs.
D
What we didn't do, and what might be interesting to find out, is why: what those things are that they feel they have to use. Maybe there are APIs that exist, that we've done, that they just don't know about, and they're still rewriting the SDP because they used to have to two or three years ago. But it's still, you know, 60 of the 65 responses, whatever that percentage is, that say they have to rewrite the SDP. That's not good, I think. Next slide, please.
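For context on what "rewriting the SDP" usually means in practice, a minimal sketch of SDP munging; the inserted b=AS bandwidth line and the regex are illustrative only:

```js
async function negotiate(pc) {
  const offer = await pc.createOffer();
  // "Munging": string-editing the session description before applying it,
  // here inserting a bandwidth cap after the video m= line (illustrative edit).
  const munged = offer.sdp.replace(/(m=video[^\r\n]*\r\n)/, '$1b=AS:500\r\n');
  await pc.setLocalDescription({ type: 'offer', sdp: munged });
  // ...then ship pc.localDescription to the remote side over signaling.
}
```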
D
So then we asked about data channel usage: what people's usage of data channels was, and how they felt about the main-thread issue. The surprise to me on this one is the percentage of people who don't use data channels. I was quite surprised to see 28% of the respondents don't use data channels, and there's another nine percent where it doesn't even cross their desk.
D
Even so, I was genuinely quite surprised by that number, but that's maybe my bias showing. I think the other conclusion from this is that there's quite a large number of people who would like to use data channels in workers, so that's useful learning. I think we maybe knew that, but it's good to see a number for it. Next slide, please.
E
Last two slides, so it seems there are two good points, about data channels and previously about SDP munging. Do you think you can reach out to these guys to ask for more details about why they are doing SDP munging, or what their use case is for data channels in a worker?
D
So yes and no. We can't reach out to these guys, because this is a blind survey. In order to collect email addresses we'd have to have an entity that could do GDPR and all of that stuff, and so we decided it was much better not to ask people who they were and not collect that data.
D
So we can't go back to them and ask them, but we could potentially do another survey, or have another method of finding that out, and that might well be something we should do in the future. But strictly, no, we can't go back to them and ask them. Unfortunately; I don't know whether it's unfortunate or not, but we can't.
D
Yeah, so next slide, please. Yeah, this is the other one that was a bit of a higher-level thing, but we wanted to find out what people's experience was in terms of what a new developer feels about being asked to work on WebRTC, and how easy the APIs were to learn and pick up. A vanishingly small number thought that it was easy to pick up, just over half thought that it was complex but okay, and 38% thought it was very difficult.
D
A vanishingly small number said that it was a reason why they'd try and change jobs. But the fact that we've got "very difficult" and "complex but okay" being the hugely dominant numbers is, I think, yeah, I mean, okay, it's a complicated subject, but maybe this is something where we could at least try and reduce the "very difficult". Again, there are no specific answers about what we should do, but I think it's still an interesting observation that this result is as bad as it is. So yeah, next slide, please.
D
We were interested in the APIs that this working group produces. I think there is some interesting scope for talking about the relationship between native APIs and this working group, but we were very clear; I think every question actually says "in browser JavaScript" or "web workers" or something that makes it pretty specific. Actually, you're right, the last one doesn't, but all of the previous ones do, so I think by implication we would be talking about this working group's APIs rather than native ones.
D
Thank you. And so, yeah, if anybody's got any other questions about the survey and the results, I put this big slide together where you can barely read all of them, but if anyone's got any other questions, now is a good moment to dig into the detail of any of them.
D
Okay, so next slide then, please. So the massive question is: do we think this was useful? Has the group learned anything? Do we think this is something we want to do again? Do we have more questions we would want to ask next time? And a huge tip of the hat to Patrick Rockwell, whose idea this was and who put it into action and made it happen; I really do appreciate that. So, do you think we learned anything from this?
E
Like 50%? That's very high, and it would be good to understand what we are still missing, or whether it's just a lagging of implementations. Adoption, basically.
D
I mean, personally, I was slightly surprised by some of them, particularly the munging one. In daily practice I do munge things, but it's pretty rare, so I was surprised that other people still do, and the answer is they do. So yeah, that one was worse than I'd expected.
D
Okay, any other points on that? Okay, I think I've got one more slide.
D
No, okay, so this, I think, is, yeah, there we go, sorry. So what else do we think we could do? What would the next steps be? Do more surveys? Should we do more targeted outreach, trying to find out, you know, who is munging, and get them to answer some questions for us?
D
The other thing that I think is interesting, which comes back to the question about our relationship with native APIs: I think there is some interesting, I'd almost say best practice, happening in some of the other APIs, and maybe we could adopt some of it into the WebRTC API. So, you know, things that are maybe happening in Pion or mediasoup or somewhere like that: little methods or techniques or APIs that are there that we should maybe be adopting.
D
That's the kind of thing I'm thinking about with the last two of those. But yeah, this was a moderate success in my view, and so I'm also up for playing this game again, but with a slightly different angle. So, feedback from you about maybe what you'd like to do.
E
Yeah, so it would be interesting to get more details about some of these issues, like SDP munging and data channels. One question that might be interesting is also about the developer tools on the web. The web inspector is great for some of these things, like debugging JavaScript, debugging layout and so on. Do web developers have ideas about what they would like to get in terms of support for debugging WebRTC within the web inspector, or within Chrome, Firefox or Safari, for instance?
D
But is that an API? I mean, I guess I'm asking: is that in the remit of this group?
E
Okay, yeah, that might be my personal interest. I do not think that browser tools would end up in APIs, except maybe WebDriver, but I was thinking web inspector, so yeah.
D
Yeah, I mean, I think certainly something that was maybe running parallel with stats, but for diagnostics and debugging, would obviously be an API, and it would be something we would do in this group. But I think probably something like web inspector falls outside; I don't know. I mean, I kind of want to make sure that we're not putting things into this group that don't make sense.
B
Yeah, we're starting to get a few issues filed about SDP munging, but the big question has been: are there a few new APIs that people would want, or is it a zillion things?
B
And so, if we knew that there were like two or three things we could do to get rid of SDP munging, that would be great, as opposed to, you know, having to do a million things.
D
Right, right. Okay, yeah.
G
Yes, so I'm not that surprised that a large number of people are SDP munging among the people who are actively using WebRTC for production stuff, because not all the browsers have implemented the necessary APIs. In Firefox we're landing spec-compliant setParameters, only in Nightly, in a couple of days, and people also need setCodecPreferences, which is not in all browsers, and there's a new one, setOfferedRtpHeaderExtensions. We're still adding APIs to minimize SDP munging, and I think that's what we expected. If you're doing another survey, I would love to know what browsers people are targeting, and also maybe ask them: if they were to start again today, would they write it the same way, or would they still do SDP munging, for example? Okay.
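A short sketch of the munging-avoiding APIs named here, setParameters and setCodecPreferences; the H.264 filter and the bitrate numbers are illustrative, and a real application would keep rtx/red/ulpfec entries alongside the preferred codec:

```js
async function configureVideo(pc, track) {
  // Cap the send bitrate up front instead of munging a b=AS line:
  const transceiver = pc.addTransceiver(track, {
    sendEncodings: [{ maxBitrate: 500_000 }],
  });

  // Prefer H.264 without editing the m= section by hand (simplified filter):
  const h264 = RTCRtpReceiver.getCapabilities('video').codecs
    .filter(c => c.mimeType === 'video/H264');
  transceiver.setCodecPreferences(h264);

  // Later adjustments also go through setParameters rather than SDP edits:
  const params = transceiver.sender.getParameters();
  params.encodings[0].maxBitrate = 250_000;
  await transceiver.sender.setParameters(params);
}
```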
G
Also whether people are using new features like media capture transform and, you know, background blur, and have a breakdown of features that people want or are using. It would be nice, I think, to gauge interest in that work.
D
Okay, yeah.
D
We have to think about how we would come up with that question list, but maybe that's something the chairs could come up with and send to me, or something, I don't know. Harald?
C
Yeah, I was thinking the same, as noted: we need to know why this SDP munging occurs. And yes, and, yeah, I know that Philipp Hancke has been doing some attempts to measure it, by just looking at the SDP and seeing what modifications are done.
D
Cool. Actually, Fippo is in the queue, so maybe we can hear about that in a second, yeah.
C
But the finding that async is not strongly resisted by almost anyone was nice. Let's put that to rest.
D
Yeah, oh well, yeah, probably.
D
Okay, so, yeah, I think we need to think about how we might find the answers to those questions, or rather how we might find things that you can't find from the inside, Fippo. So you've got the big statistics, but now you're asking the question: why are those like that?
D
Well, maybe the thing to do is to put that into an issue on the webrtc.nu GitHub, and then people can discuss it there, and we can come up with a method for trying to get the answers to it. I'm not quite sure what they're going to be yet. But yes, sounds good. Okay, brilliant, thank you, yeah. Anything else?
D
I have one more question, which is: do we have any feelings about adopting ideas from native APIs? Do we think that the space is so different that it's not a useful thing to look at? Or do we think that it is potentially fruitful, we just haven't done it? Or do you think that everybody does it anyway, and there's no point in making more effort because it's already happening?
D
That's a good question. I was actually thinking about WebRTC APIs in terms of, you know, things that are occurring in Pion or mediasoup or Janus, or one of those (probably stick with open ones, because it's simpler), but, you know, things that are happening there: methods and tricks and constructs that are there that maybe are relevant, but just not implementable in the current APIs on the web.
E
In general, when we design web APIs, we try to know what the OS APIs are, and we try to compare and see whether that's good and so on. So I would guess that it's always good to look at native APIs when designing web APIs, but then, of course, you need to spend time there. So generally, I think it's good to look at.
B
I do think there's an interesting question there, Tim, in that we've been moving towards WHATWG streams, which typically don't have a good native analog, so we're actually in some ways making it harder to use native, or making it more difficult to translate, than it was previously.
B
Thank you, Tim. All right, so we're going to talk a little bit about use cases. There are two major items: one is talking a little bit about three use cases that will potentially be going to CfC in January, and then Harald has a bunch of issues on some potentially new use cases.
B
So we'll talk about those. All right, so these are the use cases we've been working on. Tim contributed them, and so I wanted to show people where they are at the moment and maybe get a little bit of feedback.
B
The second one is what is called low latency, which means it's not quite at the latency levels of WebRTC, and so that can include fan-out. But this is the game streaming one, and the goal here is typically to send high resolution and high frame rate, typically only in one direction, down from the server to the client, although it can also be peer-to-peer. And this is an ultra-low-latency scenario, so things like relays don't really make that much sense.
B
Although there can also be RTC going on, I think we're mostly focusing on the server-to-client kind of flows here. And so we have three requirements currently: there's one relating to controlling aspects of the data transport, and then there's basically the ability to do the video at the high resolution and frame rate, and controlling the jitter buffering and rendering delay. So N37 and N38 relate to performance.
B
One of the questions that came to mind is that frequently in this space there's an interest in hardware-accelerated decode, particularly at the higher resolutions and frame rates. It's pretty much a requirement, particularly on mobile devices: unless you've got hardware decoder acceleration, you're just going to run the mobile device battery down to zero very quickly.
B
So we don't really look into that particularly; it doesn't explicitly say that in the requirement, and I know that Henrik, for example, has been looking at things like encode and decode errors, so there might be more there. I'd be interested in your feedback.
B
Another thing that was pointed out is support for custom FEC, because this is ultra low latency: there may not even be time for a retransmission, and so, particularly at the high resolutions and frame rates, you may want to do something there to make sure that you don't have to retransmit.
B
The question there is: because the media typically flows from the server to the browser, the server can do whatever it wants. It can have its own custom transport, change the congestion control, do any of that. So that's not actually a requirement on the browser API. And even in the peer-to-peer game streaming, typically the server, or one of the peers, would be a game console where you have a native app, so it can also do whatever it wants in the custom transport. So the question came up:
B
Does N15 really need to be there, or does it just apply to the game input going from the game console back up to the server? So is this, you know, just the direction changes or keyboard input or something? Is that what that's about? I don't know. If people have any comments on this.
C
So I joined the queue, okay. My worry about this particular way of framing requirements is that they're not requirements, they're implementation options, right? So I think the way I would like to formulate N15 is that the application must be able to deliver data with very low delay. N37, I think, looks fine, but both N15 and N38 are really about needing to deliver the data, audio and video, with low delay, and that's what the requirement should say.
C
I think that's okay. I mean, it's necessary to do it in order to achieve the use case. I'm not sure it's actually necessary to do anything about the APIs for that, but probably detecting whether or not you're able to do it is an important criterion, and I don't think we have that now.
B
And, you know, dropping down to software can also often become unacceptable, because it'll create thermal and battery problems. So I don't know that in WebRTC you can really know you're going to have hardware decode. I guess there are some WebRTC stats that tell you whether you're using it or not, but.
B
Yes, it is part of Media Capabilities. I guess in theory you should be able to ask: okay, at this resolution or whatever, should I be getting it? Media Capabilities will tell you that, I guess. Fippo, you're next in the queue.
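A sketch of the Media Capabilities query just described; the "webrtc" configuration type is in the Media Capabilities spec, powerEfficient is the closest available signal to "hardware-backed", and the codec and numbers are illustrative:

```js
// A sketch only: ask Media Capabilities whether this decode is likely
// hardware-backed before committing to a high resolution and frame rate.
async function probeDecode() {
  const result = await navigator.mediaCapabilities.decodingInfo({
    type: 'webrtc',             // WebRTC-specific configuration type
    video: {
      contentType: 'video/VP8', // illustrative codec choice
      width: 1920,
      height: 1080,
      bitrate: 5_000_000,       // illustrative target bitrate
      framerate: 60,
    },
  });
  return result.supported && result.smooth && result.powerEfficient;
}
```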
B
Okay, Youenn.
E
Yeah, about N37: it seems that it's focusing, strictly speaking, on the user agent, and N38 is about the application being able to control the jitter buffer and rendering delay. I was wondering whether there's an interest in injecting some JavaScript into the rendering pipeline for game streaming, or whether it's so low latency that you do not even want that.
E
What would use it? Yeah, transforms, things like that. I don't know whether for game streaming there's a use case; I guess, yeah, you're right. But requestVideoFrameCallback is used, maybe for capturing the frame rate of what is being rendered, for instance, and maybe there are some stats like that that they're using; I'm not quite sure there.
D
I don't know exactly what, or, you know, is it just matching up the timeouts or whatever? Like, there's no point in changing the heartbeat interval if the other end isn't playing along. So I think these things may well need API points so that they coordinate correctly with the server side.
B
Right. I guess one of the questions, Harald, is: it talks here about the SCTP heartbeat interval. I guess you have to support the heartbeat, but the RTO values are actually involved in the implementation of this, and that could be purely on the server, right? It decides when it wants to retransmit. So I guess it would only be for the console data sent up that you would actually even use this, potentially.
B
Okay, so then we have the low latency broadcast fan-out. As I mentioned, this one is a little bit more tolerant of latency. I actually have edited this.
B
We have sporting events, church services, webinars and company town hall meetings. I'm not sure the sporting event would necessarily be low latency as opposed to ultra, because I've heard scenarios where people don't want to hear other people cheering before they've seen the play, so they might want a sporting event to be very low latency. But the church service, the webinar and the company town hall are things that often can take a little bit more latency and therefore can use the fan-out to improve scale.
B
Scale is often more important than just the latency, and this is with limited interactivity. So again, we have three requirements here, one of which is N15. And again, the media: typically this can be interactive, so occasionally you can get media coming up from the client to the server, but it's mostly from the server to the client. Then here we have support for DRM.
B
So that's one question. But certainly I added "uncontainerized" here, because I think that's where we're more likely to be going. And then we added N39, which is the fan-out requirement: to be able to forward media from one node to a third node, and that brings up a bunch of interesting requirements about doing that.
B
The new thing here is to add N39, and that is being done currently by folks like Peer5 and others, who are doing it on the data channel at the moment, but it also could be done in the WebRTC A/V path.
E
So if we're trying to push the envelope, it seems that you want to use RTP for the transport; you don't want to use SCTP when you are fanning out, right? So maybe that's something that could be said, because otherwise we already have what we need.
B
N39 brings in two things, I think, and this maybe needs to be teased out, right? A lot of the stuff that's done today is containerized, but the thing is, since they're not using DRM, that's kind of useless code in some ways. So I think N39 is also assuming that it's trying to get rid of the containerization.
B
You know, that doesn't have to be done via WebRTC A/V; it could be done in a data channel too. I suppose you could move the raw media around, but right, the uncontainerized encoded chunks, anyway.
E
So, yeah, I'm wondering whether the use case is trying to say: okay, we want to get lower latency than what forwarding with a data channel is achieving. For instance, these are the things I'd like to understand, compared to what we can do today. And the second thing is: it's not really clear to me what "applications require access to" covers; maybe it's the frame, as well as information from the RTP header.
E
So then it's at the packet level. It's not really clear to me whether we are trying to provide access to frames or to packets, and this relates again to the latency thing.
E
If we really want to have low latency, then probably you want something at the packet level; but if we are okay with waiting for all the packets of a given frame to come, and reassembling them and so on, then the frame might be good. And N39 is not very clear about this as well, so we can work on improving it there.
C
Sorry, sorry about the interruption. I was actually thinking that picking a particular level was part of the solution space, not the problem space.
G
I was on the queue on that one. I want to echo that, and say I'd also apply Harald's comment from earlier: the requirements should be requirements. On N39, it's not clear to me that applications do need this access. For instance, you could take a transceiver's receiver track right now and add it to a second peer connection, and I think in implementations today that's not very optimal, because a decode and re-encode will happen, but there's no reason user agents couldn't optimize that away without involving JavaScript, and if we're shooting for low latency, that might be a win. So I think the requirement should be framed more as: we want ultra low latency in these cases. I also have a question, since WebRTC is peer-to-peer: is this use case only about P2P relays? Because that's what seems to be the case from the experience points; I think Peer5 and all these are doing client relays, right? So there's no use case here for server-based fan-out, correct?
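The existing-API relay just described, sketched minimally; pcFromServer and pcToPeer are hypothetical names for the ingest and fan-out connections, and renegotiation and signaling are elided:

```js
// Relay an incoming track to a third node with today's APIs.
// The user agent currently decodes and re-encodes at this hop, which is
// exactly the step an implementation could, in principle, optimize away.
pcFromServer.ontrack = ({ track }) => {
  pcToPeer.addTrack(track);
  // ...then renegotiate pcToPeer over your signaling channel as usual.
};
```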
B
Well, the way this usually works is: you receive the media over one transport, and then you do the peer-to-peer fan-out, and that receive may not be WebRTC; it may be low-latency HLS or something like that.
B
You might want to move to either some replacement for low-latency HLS, like MoQ or something, where it comes down as raw media, or you may want to essentially uncontainerize it and push it out. And between the data channel and the WebRTC thing, I don't know the difference in latency.
C
I mean, people could try. The QUIC thing is that... oh.
H
Sorry, can you hear me okay? Yeah, I was just going to point out that, using the data channel, the thing I would worry about is the congestion control. But if you're doing it from server to client, theoretically you could run a good latency-sensitive congestion control on the server. I don't know of anybody that's done that; it's theoretically possible, I just.
B
Although that's probably more in the ultra-low-latency category than in the low latency category. In most circumstances I'm aware of, and in this particular one, they're using something like low-latency HLS to get the initial feed, but it could be whatever, you see, I guess.
B
Okay, the last one is virtual reality gaming, and then we'll turn it over to Harald. This one has only two requirements currently, which are the ability to sync data and then the content security policy stuff.
B
But we did get some questions about whether this was really complete or not, because we're seeing virtual reality games doing a lot more things, like supporting spatial audio, which sometimes they implement with bring-your-own-codec. So there was a question about whether this use case really has a complete set of requirements or not, and whether it needs to be augmented.
E
Just one thing: I don't understand why there's CSP there, especially for the virtual reality gaming; so that's the first feedback. The other feedback is that at TPAC there were some virtual reality folks who were saying that sometimes they draw video frames in advance, trying to predict what the video frame should be, so that it's ready in advance and so on, and it seems that they might also want some kind of metadata.
E
So maybe there are some additional requirements in there. I'm not an expert there, but I guess we might want to reach out to the folks doing standardization there in XR or VR, and try to get more requirements.
B
Yeah, that's actually a good point, Youenn. We probably should reach out, because I think when this use case was formulated, that group didn't exist.
B
Okay, so I'm now going to turn it over to Harald for discussion of the one-ended use cases. I'll be up as well.
C
That means someone else needs to take notes, because I'm going to be talking. Volunteers?
B
I'm moving the slides; if someone can do the slides, then okay.
C
Caribou will take over in a second, yep. So, okay, well, okay, we'll try. So I started working on this and called it encoded data access, and I was asked to come up with use cases.
C
There's a use case for alternative generators, where things come from sources that are not WebRTC but are encoded before they get to WebRTC; that's the next two slides. And of course we can have it the other way around: feeding frames to WebCodecs, MSE-type mechanisms for ingestion, and so on: other destinations not using WebRTC for decoding, but wanting to use the WebRTC transport. So those were a few, not very complete, formulations. So let's go to the next slide.
C
So this is a more formal formulation of several of the same things; people can read that at their leisure.
C
Yeah, so that's the case where you want to do timing adjustment of the frames, in the sense that, if you want to make your own decisions about how much to delay things, or speed things up or slow things down, then you need to get into the pipeline before the jitter buffer, and then either have a jitter buffer following that or just avoid the buffer altogether. I see Tim came on the queue.
D
Yeah, I just wanted to say that it's not necessarily even slowing things down and speeding things up; there are other ways to deal with, particularly, overrun that aren't necessarily that. So it's not just speeding up, basically.
C
So there have been a couple of requests coming into Google from people who have a video camera, or something else, that delivers H.264 data, and it gets into the browser, but it doesn't get into the browser as a media stream track; it gets delivered by some other means.
C
But that also requires taking signals from the RTP sender to go back to the camera and say: don't produce so much data, don't produce big frames, we don't have room enough on the wire. Of course, we hope that most people doing this will have room enough on the wire and that it will be okay, but we are the congestion controllers.
B
I have a comment, Harald, which is: I've actually heard this use case in public safety, where the cameras are traffic cameras, and they wanted things like recording services and also machine learning, to figure out if there are accidents on the highway, or fires, or things.
B
The dispatcher workstations now often use the browser, but they don't want each workstation fetching it separately. They might have, like in New York City, a hundred workstations, so you don't want to be hitting the traffic cam from a hundred different places. So they have the feed coming in once, and then it has some resemblance to the fan-out case, where they want to fan it out to the workstations, but they don't want to load the traffic cam with, you know, 100 requests.
G
So, just a quick comment: if we're talking about video cameras, my angle is always to try to see whether we can use existing APIs for this. So one way to do that, perhaps: I don't know anything about this video camera and how it delivers pre-encoded data, but if we wanted to support it; and probably self-view isn't a big deal with traffic cameras, so it's hard to gauge based on how it works.
G
But if it's built into an OS, into the OS's existing camera system, one approach would just be to see how JavaScript could minimally describe this. It could be as simple as: you get a media stream track, and if you play it locally, then it would have to decode it; but if you add it to a peer connection on which you've also negotiated H.264, the user agent then has all it needs to optimize and skip an encode/decode step, perhaps. Those were my thoughts.
E
I'm next on the queue. I would echo Jan-Ivar's approach. I'm still fuzzy about this use case, because it seems that one node in the network is generating H.264 data and then sending it to the browser, which will do the RTP transmission. So it's adding delay, and if there's some packet loss, then the browser needs to understand it and then say to the video camera: hey, please generate a keyframe, and so on.
E
So there are a lot of things there, and I'm not sure I understand why there should be a browser in the middle, between the network node that is sending H.264 and the receiver that is receiving the RTP. Yeah.
E
Usually, when you have a browser, you have a web page, and there's a user clicking on links; and, as far as I understand it, here it would not be a user that is navigating or whatever. It's clearly a native application that is running in the background, so maybe it's a webview, and they would like to reduce the size of their applications by reusing JavaScript APIs for that.
E
But clearly it's not the typical browser case, where your user is clicking and then is getting this H.264 data and sending it to another node. So getting the context would help to understand whether it qualifies as a use case for a browser, or if it's more like a webview kind of thing.
C
So, in setting up this particular use case, I started out with the assumption that we have a video camera that is not supported by getUserMedia. Right, yeah, it's not. So that's that. If you assume that it's supported by getUserMedia, then of course we can do the tricks that Jan-Ivar suggests, but the use case I was trying to outline is the one that is not supported by getUserMedia.
B
It isn't just traffic cameras in a browser; I mean, you can see web pages that let you view traffic cameras. This is a scenario where it's in the public safety access point, and they're trying to show the feed.
D
RTSP. So I think a couple of things on this, one of which is that I do like Jan-Ivar's point: if you've already got it and you choose not to display it locally, then somehow the user agent could figure out that it doesn't need to re-encode it. I think we're going to need hints to do that, and that still applies to anything that you might have set as the source on a video element, right, anything that qualifies for that.
D
It would be really nice to be able to do something to it that made it something you could apply to a peer connection without having to re-encode, if re-encoding isn't necessary, and some hints about not doing it: if re-encoding is necessary, then don't do it, that's an error. So I do like that method, but I think we're going to need some hinting, some kind of API surface, to indicate what we expect.
D
And to be told when it doesn't happen. And I have a kind of, maybe, clarifying use case that might make things a little clearer. So if you've got an in-car camera (for, you know, maybe security or police or whatever, a dash cam, any use), and that has a limited-bandwidth uplink to a monitoring station, and the monitoring station may well be a web browser, then the user
D
there wants to show something to a third party, and they want to send it over. You see, so you've got some kind of constrained link between a mobile, probably, camera, remote to an endpoint that is a browser, and then from there you want to try and get exactly that stream sent over to a colleague, on a temporary basis.
D
We've seen a couple of those use cases, and what you end up doing is rendering it locally and then capturing it and re-sending it on. I think what we're trying to do here is say: hey, if we know that that's the case, we can give a hint and say, don't bother to re-encode it.
G
Yes. It seems like, if low latency is the goal, the quickest way, if I'm sending it over RTP, would be for the native app that implements the traffic camera to support negotiating WebRTC and sending H.264, and then the browser endpoint would be the receiving end of the H.264.
G
Sure, but a native app then developed specifically for the traffic camera: there seems to be a middle step here, where the thing that does the networking could also support WebRTC.
C
If you take away the assumption that the H.264 stream arrives outside the WebRTC context, then yes, many other different approaches are possible. But I'd like to get a solution to this use case. Anyway, should we just go on to the next slide? I think we have enough points on this one.
C
And the third one: this one has also come up in a couple of contexts, including wait signals, yeah, pre-recorded segments; otherwise-live situations where, for some reason, you want to send out something that is not live video.
C
If you look at the screen just now, you will see a number of examples of exactly that happening. And this is presuming that the desired transmission mechanism is RTP, rather than sending the picture by other means and having the recipient play it. So again, this is a use case that is relevant under certain constraints, in this case that the outgoing media is RTP and the media is pre-recorded.
E
Yeah, my main comment would be: I'm not sure I understand the use case, in particular why the desired transmission mechanism should be RTP. For instance, we could say, hey, Peer5 is doing something similar: you're getting pre-recorded content over HLS and then you're fanning it out with data channels. So clearly, for this use case to be meaningful, it means that we cannot use the data channel, or we do not want to use the data channel, because RTP is better; and what would help is understanding exactly that. Why is RTP better in that particular use case?
C
So in the scenarios I was envisioning while I wrote this, it would be that you already have an RTP stream which carries my face, and, for whatever reason, I want to switch out my face and put in something else, and that something else is a pre-encoded static picture. And I want to do that without any special handling on the receiver side.
D
I've lost track of the queue, but I have a kind of side comment on this, which is: I don't understand how, if it's pre-recorded, you're going to generate a keyframe. The best you can do is wait for the next one or catch an old one.
C
Yep, I've outlined a number of possible things, and in another case we can switch sources. So yeah, you can jump back to the previous keyframe, you can... It's very application-dependent what it should do, and therefore it's not necessarily a good fit to decide in the browser.
B
Yeah, one reason that one might want to do this is: I mean, it could be done, for example, by rendering the pre-encoded content over MSE or something, but the problem with that is MSE often has much poorer thermal performance than RTP does with the WebRTC API.
E
So, in any case, to make progress there, since it's probably a GitHub issue, it would be good to comment on it: we are targeting optimization, we are targeting thermals, and so on, so that it's very clear what we are trying to achieve. Yeah.
H
Well, you already had the custom FEC on a previous slide, I think, but there's also the question of a custom codec: a codec that is implemented in WebCodecs but not implemented in WebRTC, that you want to send over RTP.
H
In both of these cases, I think we need something that's a little more like an activation-type API. And similarly, if you're writing an SFU, which I think was one of the use cases, and you want to have RTP over QUIC on one side and RTP over UDP on the other, you need to do some kind of congestion control.
H
So these are all kind of similar things that I think have partially been mentioned, but maybe not as explicitly: that we might need something at the packet level, I do think.
H
Maybe it's possible to use a frame-level API, which we haven't defined yet necessarily, but a frame-level API, and just produce frames that are really small, and if the browser told the client what the MTU size was, it could make them fit. I think that's cool, but I don't know if that's sufficient. I'm not trying to design the API here; I'm just trying to point out that there might be cases where you want that control.
E
Yeah, all I'm trying to do in this space is to understand whether we want an API that is targeting the packet level, like: should we control RTP header extensions and so on; or (and maybe we need both, I don't know) an API that is actually at the encoder side, like: I want to provide some encoded media data, and then.
E
I have a feeling that we are trying to do both with the same API, and I fear that it might be hard. So one discussion that would be great would be to understand whether there are two different sets of use cases that might lead to two different APIs (maybe a low latency one and maybe an optimization one), or whether it's more like a single use case and we think one API is good enough there.
H
I think you're right; there might be two different sets of use cases: ones that care about the frame-level activity, and others that care about the packet-level activities. Of the ones I put on this slide, custom FEC is a good example of caring about the packet level; it's not the same.
E
Yeah, so maybe what we could try is to have use cases where some requirements would be: hey, we need to handle packets, we need to read packet data, we need to write packet data; and some other use cases would be: hey, we need to write frames. This distinction would help when then applying these requirements to APIs.
H
Right, so I think that might be a good approach. The part where it's a little fuzzy is when you think about audio, where frames are packets, more or less, and so, if there's a way that you can send and receive custom audio codecs, you can kind of turn that into anything.
E
I would think so. There are two different API levels, because you have header extensions, for instance, and you have metadata that is attached to the encoded data, and these are two different things, right? And you might not want to expose one scheme for... maybe we will not be able to expose one structure for all of these. So that's something we should look at as well.
C
Anyhow, not being satisfied with just talking about possible use cases, I thought we should try to sit down and design something that would be useful in some of these use cases. So yeah, this is the picture in my mind, which we also used at Kranky Geek.
C
What's this? You have multiple sources of encoded media (RTP, cameras, wherever JavaScript gets things from), and we have multiple destinations for encoded media (wherever JavaScript might want to put them: displaying them on the screen through the decoder, or sending them over the internet). And if we could do the same thing as we do with media stream tracks in the normal case, and just couple everything to everything, that should be useful. Next slide.
C
There are a number of things where I feel we're not clear enough to actually make specific API proposals yet. I mean, congestion control must work, which means that there must be signals coming from whatever the destination is, to say: you're sending too much, slow down, I can't handle this; or: I can handle more than this, send me more.
C
We must allow stream repair to work, so that if something goes wrong we can get back requests for keyframes, and we may have to allow negotiation of resolutions: having messages saying I want a more detailed picture, I want a smaller picture. Destinations have opinions, and if we have destinations beyond what we have had so far, they may have different opinions and different behaviors. Next slide.
C
API proposals for the requirements that we know we need to satisfy, and then continued discussion, collection and firming up of the other requirements we have. So I have a sketch for an API that I think can be extended to do the right thing with this connection of signals in various directions.
C
I took that to the hackathon and got some positive noises about it from a couple of people, but not really much of a discussion. And for the clone requirement I wrote up a PR that adds the clone method to the encoded frame API, and it was quite correctly pointed out that I also have to include in the PR that we delete the requirement that a frame has to stay within its own sender or receiver.
D
I have a slight twitch about incremental API development. I think it's an okay approach, provided you've got some sort of overarching flavor set. It doesn't have to be the detail, but you have to have a rough idea of what kind of flavor of API you're going to produce; otherwise it becomes inconsistent, or ugly, and that's a risk we'd like to try and avoid if we can.
D
It's tricky; it's to do with who the target audience is and what other APIs they're going to be using, so that it's consistent with those. Like, is this something that fits naturally, wraps around a WebCodec, or is it something where people are going to be bit-banging and constructing an RTP packet themselves? Getting the general rules of the game, and the edges of the thing, at least described, I think, is useful.
C
Yeah, my natural feeling was that the obvious APIs to fit in next to are the WebRTC APIs on the one side and the WebCodecs APIs on the other side of those.
C
So, what I want on that point is that, if I have a proposal for something that is part of an API, part of a solution, we don't have to wait for all the parts of the solution to come in before we can get that one pushed into provisional specs, or extension specs, or whatever specs; but get it somewhere where we can say that, yeah, we agree that this is an okay thing to do.
C
Good; it's an okay approach. I don't want to say that it's the only approach, but we seem to agree that it's one possible approach, and proposals will be studied on their merits and noted in the minutes. So, check.
E
I think I'm next on the queue. I would echo what Tim is saying, basically. We need to understand the big picture, precisely what we are trying to solve, like packet or frame; these are some basic questions, and it would be good to try, from the various use cases, to derive what we want to solve, and then we can decide whether the API examples make sense or not. But before that, it's really hard to be able to go there. I agree that it's good to look at WebRTC and WebCodecs.
E
With WebCodecs, you can already create encoded frames, so it's already available; there's no issue there. WebRTC encoded transform: that's an interesting API. By its name it's a transform, and it has been designed this way, and people are using it for other purposes. I think that's fine, but that does not make it like: hey, if they're doing that, then we should design an API based on that. I think we should try to design the best API we can, and if the best API is actually very different from encoded transform and not related to encoded transform, that's fine. We should try to design the best API, not the API that requires the least work for user agents.
F
Yes, lots of small issues; I'll try to keep it brief. Next slide. The first one is generateKeyFrame, which we discussed during the October interim. The resolution was to pass an array of arguments to generateKeyFrame, and pull request 165 implements the resolution: it takes a list of rids, which must be negotiated; an empty list means all rids; and there's no return value. The proposal I have for the working group is to merge that PR, and I think by now you've read the PR, on the next two slides.
E
I think we discussed the rids; we did not discuss the return value. So that's the main item that is remaining for discussion, right?
E
And so, the reason why it's a promise: there are two reasons, at least for the script transformer case. First, you need to validate the input, and validating the input, like the list of rids, for instance, you might not be able to do synchronously in the script transformer, so using a promise there helps that case. The additional reason why we wanted to have a promise was as an easy way for developers.
E
They will await the generateKeyFrame, and then, depending on how they write their JavaScript, they can basically do a read, and with the promise, when it resolves, the next frame read will be a keyframe. That's also nice for the script transformer. I think that for the sender we do not really care; although maybe that's useful, I don't really know.
E
Yeah, you can do a read and then check whether it's a keyframe, but the point is: if it resolves, you do an await, and then you know that the next read that you are doing will be your keyframe. So you can do without it, but it's potentially a nice utility. And the fact that it's a promise means you can reject asynchronously, which is good for checking the rids, because in the script transformer you do not necessarily have the rids synchronized with the main thread, or even the encoder, as well. So having a promise helps a bit there as well.
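A sketch of the worker-side pattern being described, using the shape under discussion (PR 165: generateKeyFrame takes an array of rids and returns a promise); the 'hi' rid is hypothetical, and the exact surface may change before the PR is merged:

```js
// In the RTCRtpScriptTransform worker (sender side):
onrtctransform = async ({ transformer }) => {
  const reader = transformer.readable.getReader();
  const writer = transformer.writable.getWriter();

  // Request a keyframe for one negotiated simulcast layer and await it;
  // once this resolves, the next frame read should be that keyframe.
  await transformer.generateKeyFrame(['hi']);

  while (true) {
    const { value: frame, done } = await reader.read();
    if (done) break;
    await writer.write(frame); // pass frames through unchanged
  }
};
```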
G
Just a quick comment: this PR removes a lot of algorithm text. Is there a reason for that?
F
The reason is that the current algorithm makes a lot of assumptions about having different video encoders, and that is not true at all for some software encoders.
E
So yeah, the model was like: hey, if you have three rids, you have three encoders, conceptually, because you have three different streams, and we were detailing that; and then, if you have the first keyframe, you would resolve. But with multiple rids, when do you resolve? So that's why there are all these cases. If you remove the per-encoder promise resolution, and if you say, oh, we don't care about having orderings for each encoder, I think you can simplify things.
E
The main thing is not really about all the changes. I think we got a decision for everything, which is aligned with the PR, except for the return value. We agreed previously to not provide the timestamp, but it's not clear whether we decided whether we would return a promise or return undefined, and that's what I think we need to decide.
E
So I think we are using... let me check. The original proposal was a promise, and the implementation in Safari is returning a promise.
F
We also have issue 601, expose RTP timestamp in video frame, and there we have the resolution to file specific issues on specific specs; we filed issue 167 on encoded transform and issue 88 on media capture transform. The proposal I currently have for this working group is to add timing metadata to the RTCEncodedAudio/VideoFrame metadata: in particular, the receive time, as defined in rVFC, and the capture time, also defined in rVFC. And this is different from the metadata we currently have, because the current metadata consists only of things from the RTP header, basically.
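A sketch of how the proposed fields might surface next to the existing RTP-header metadata; receiveTime and captureTime are the proposal being discussed here, not shipped API:

```js
// Inside an encoded transform: existing metadata plus the proposed fields.
const transform = new TransformStream({
  transform(encodedFrame, controller) {
    const metadata = encodedFrame.getMetadata();
    // synchronizationSource, payloadType, etc. come from the RTP header today;
    // receiveTime and captureTime are the proposed rVFC-style additions.
    console.log(metadata.synchronizationSource,
                metadata.receiveTime, metadata.captureTime);
    controller.enqueue(encodedFrame);
  },
});
```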
E
But my main question would be: what's the use case?
B
Yeah, I mean, the weird thing about this, Fippo, is that rVFC obviously assumes it gets carried by the pipeline all the way through. So if this isn't part of the RTC encoded metadata, I don't know how rVFC can actually work.
E
So
yeah
I
haven't
looked
at
beforehand,
so
I
do
not
remember
like
the
use
cases
and
so
on
it's
difficult
to
me
to
to
decide
what
would
be
my
input
there.
C
Okay, these are all read-only values.
F
For audio, it also is used to indicate a jitter buffer flush, so that might be useful if you have a jitter buffer that you wrote yourself.
F
Basically, it's a resolved version of the payload types, which is not easy to do in a worker; I presented that at the October virtual interim, and I have a specific slide on that as well. And we have other codec-specific metadata that is not RTP and not timing: for example, the width and height of the frame, which might only be available for incoming frames, or things like the audio level for audio packets and the voice activity bit. Next slide.
D
No, I'm talking about the index field. It loops, I think, and you deduce whether it's looped or not. So which number are we talking about: one that loops, or not?
G
So, just a comment that these are supportive of use cases that don't have consensus yet. Is that right?
G
So we should probably either not merge them or add a note.
H
I was just wondering if RED even matters for audio, if there's anybody using it. I guess we can enter it there for completeness, but.
C
So, conceptually, rids are defined as restriction identifiers, and I think we should just add them to all of these and not populate the value when they're not present.
D
As a worked example of how badly audio is currently supported (I popped it in the chat): Lorenzo Miniero made an attempt to integrate Lyra into WebRTC, and the only way to do it appears to be to abuse L16 in order to get things flowing. So that kind of tells you that we're not there yet.
C
So, the RTCEncodedAudioFrame is in section 5.5 of our spec.
G
Now
I
had
on
a
search
for
whole
words.
Let's
sometimes
can
mess
me
up.
F
The problem, which is going to affect how we handle use cases, is that this can't be modified. The proposal I have to get out of this mess is to add the RTP timestamp to the respective metadata object, then deprecate it on the main object, remove it from implementations (breaking things), and then later re-add it as defined in WebCodecs, which defines the timestamp as a read-only long long timestamp, because that's the actual media time, as you sometimes call it. Does that make sense?
E
So, for our use case, you could add the metadata to the metadata dictionary without deprecating the other one, I guess, but that's messy as well. I guess, since the use cases are fan-out or something like that, which haven't received consensus yet, we should delay working on it a little bit until we get proper consensus on the use cases.
F
Okay, I'll do a PR for adding it to the metadata, but nothing else yet. Good. So, moving on to the next issue: mime type metadata. The October resolution was to add the mime type, but Harald raised an additional question of whether the raw mime type is sufficient. It is okay to tell you if the encoded data is VP8 or H.264, but it is not sufficient to tell you what H.264 profile and level the data is encoded in. So the question is: do we need the fmtp, which we have in the codec statistics?
B
Yeah, in WebCodecs the codec type includes both of these things: it's both the mime type and the fmtp stuff in one parameter.
E
Yeah, I would like WebCodecs consistency, I guess, more than stats consistency, but I might be biased there.
F
I think the raw mime type will already allow developers to choose a codec-specific encryption profile, which is one of the use cases for that. You might remember trying to encrypt H.264 and being surprised about the NAL unit types, yeah.
E
But what use case of the fmtp would you actually use?
B
Well, it's primarily the profile levels, right? And maybe, because I think there's one other thing, right, which is, so if...
E
If we wanted to use WebCodecs, for instance, would it be easier if the fmtp were actually concatenated with the mime type?
E
Yeah, sure. I'm just trying to see what's the most convenient, because obviously, if you have the mime type, then whether you keep two members or one member, you can do the JavaScript to go from one to the other and vice versa. So, yeah, I don't know what web developers would prefer; it should be minor in any case.
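To make the trade-off concrete, the same H.264 configuration expressed both ways; all values are illustrative:

```js
// WebCodecs style: profile/level folded into one codec string.
const webCodecsStyle = {
  codec: 'avc1.42e01f', // H.264 Constrained Baseline, level 3.1
};

// Stats/SDP style: mime type and fmtp kept as separate members.
const statsStyle = {
  mimeType: 'video/H264',
  sdpFmtpLine:
    'level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f',
};
```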
B
This one is looking at the differences between WebCodecs and encoded transform. So WebCodecs defines the encoded chunk metadata like this, which includes a decoder config, the SVC metadata and alpha side data, and then within the SVC output metadata we have the temporal layer ID. The reason it was done this way was to provide structure, so that we weren't dumping everything into a single bag, and we could also extend the SVC output metadata dictionary.
B
So, even though only the temporal ID is in there, here's where we were going, which is basically to put into WebCodecs all the data from the dependency descriptor, into the SVC dictionary. That included the frame number, which is used in the dependsOn IDs (essentially the dependencies of a frame), and the spatial layer ID in addition to the temporal layer ID. But it turned out that wasn't enough to do forwarding decisions.
B
You also needed the decode targets, to essentially know what you were trying to do (which is essentially the resolutions and frame rates), and then we also had to have something called the chain links. The reason for this is that it turns out that satisfying dependencies isn't enough for a frame to be decodable, and this is important to understand: a receiver may have received the dependencies, but if those dependencies themselves were not decodable, that is, if the chains were broken at some point, then it's possible to satisfy the dependencies but still not be decodable.
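An illustrative check, with hypothetical names rather than a spec API, of the two conditions just described: all dependencies received, and the chain back to a decodable frame unbroken:

```js
// Hypothetical shapes: dependsOnIds lists direct dependencies; chainsIntact
// reports whether the relevant chain is unbroken back to a keyframe.
function isDecodable(frame, receivedIds, chainsIntact) {
  const depsSatisfied = frame.dependsOnIds.every(id => receivedIds.has(id));
  // Dependencies arriving is necessary but not sufficient: if the chain was
  // broken earlier, those dependencies may themselves be undecodable.
  return depsSatisfied && chainsIntact(frame);
}
```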
B
So if we compare this with the RTCEncodedVideoFrameMetadata, there are a couple of problems that come up. One is that the types aren't aligned: you have differences in the types for the temporal ID, the spatial ID, and the frame number versus the frame ID, and for the dependsOn IDs the types are different too. Also, in the encoded transform we have some missing info: the decode targets and the chain links aren't there.
B
So what I'm proposing here is to develop a PR to try to harmonize these things, so we don't have this completely different WebRTC and WebCodecs usage of SVC.
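A rough sketch, not an agreed design, of what a harmonized SVC subdictionary could carry on both the WebCodecs and encoded-transform sides; field names and values are illustrative:

```js
const exampleSvcMetadata = {
  temporalLayerId: 1,
  spatialLayerId: 0,
  frameNumber: 42,                 // encoded transform calls this frameId today
  dependsOnFrameNumbers: [41, 38], // the frame's direct dependencies
  decodeTargets: [0, 1],           // which resolution/frame-rate targets it serves
  chainLinks: [37],                // chain information needed for decodability
};
```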
E
Yep, and the idea is to have a dedicated dictionary for that as well?
B
Yeah, so I think we basically need to get, like, Dale Curtis and everybody to agree to have consistent metadata here.
E
So, Bernard, is the proposal that at some point some of the existing video frame metadata properties would be removed, or moved to a new dictionary?
B
Basically, this is the way they are now, and we're doing a subdictionary for SVC specifically, and I guess my proposal would be to use the same subdictionary.
E
Okay, so we would change RTCEncodedVideoFrameMetadata to have an SVC metadata dictionary, and then move some of the existing properties into that dictionary. So.
B
Yeah, that's what we did in WebCodecs, actually: we had them at the top level and moved them into SVC. So we did actually deprecate temporalLayerId, which was formerly at the top level, and moved it into the SVC dictionary.
B
The thing would be to get it into WebCodecs first, because right now only the temporal layer ID is in there; so get it in there and then reference it. Yeah, yeah.