From YouTube: WebRTC WG meeting 2023-05-16
A: So today we're going to cover a number of things: use cases, some mediacapture and WebRTC extension issues and PRs, some new work (ICE controller and RtpTransport), and we have a bunch of future meetings that are listed below. All right. So the slides are up on the wiki, and we're being recorded, just a reminder, and it will be made public. Do we have a note taker?
A: I think you all have figured out how things work, but we do run a tight speaker queue, so raise your hand to get into it and lower it to get out of it. We will mute you if you jump the queue, so please don't do that, and tell us your full name before you speak, so we can help get it in the minutes. All right, I don't think we'll use polls today, but if we do, you know what it does. All right.
A: So, just about document status: we have to say this every meeting, and sometimes it gets confusing, and we will be talking about use cases. The point is that editors' drafts do not represent consensus, where CRs do. It's possible to merge some PRs that lack consensus if there's a note attached, and we'll be dealing with a use case document where that's been done liberally, and maybe that's not such a great idea; we'll see.
A: Okay, so we're going to talk about use cases; Tim's going to lead that discussion. Then we'll talk about extensions, ICE controller, RTP, and wrap up, so we'll try to be good about time. Also, thanks. All right, so I'm going to hand it over to Tim; we're going to see the NV use cases, yeah.
C: So the extended use case document has just changed its name, and I asked myself whether it's even useful, since it's kind of caused us some difficulty recently, I think, and, you know, the answer is yes, I think it has some use.
C: I mean, it's been cited recently in new feature work, so it is something that crops up, and so it has a use. And can it be improved? Well, almost certainly; I think there's quite a lot wrong with it at the moment, and the purpose of these 20 minutes or whatever it is, is to try and look for guidance and come to some sort of agreement about how we might improve it and what we think it's for. Next slide, please.
C: So as preparation for this I reread it, and it references RFC 7478, so I read them both together, and, you know, honestly, it's pretty unsatisfactory.
C: I mean, kind of a random example, but 7478 is really dated. It isn't what we're doing with WebRTC (I mean, it sort of is, but it isn't), and it talks about things that I haven't talked about for 10 years, like telephony terminals, so it's kind of out of date in some ways. And then if you look at the NV use cases document as well, it talks about things that aren't actually use cases. Like, funny hats isn't, in my view, a use case.
C: It's a feature on an existing use case, but the requirements for funny hats are still valid and useful, and what's more, the resulting API points are fantastically popular and useful way beyond the hats, or even video conferencing; there are users outside video conferencing. So that tells me that maybe there's something wrong with the way that we're treating this document. And, yeah, the other way around:
C: Like section 3.9 (and we'll talk about the status of the rest of the document in a minute): it's got no consensus, but it's already been done by WISH and WHIP, effectively, and so we find ourselves in the weird position that this document is sort of an orphan or something, I don't know. It's kind of been overtaken by standards and events and things, and I think that's a shame, because it's still got a use. Next slide, please. So I must thank, I think, Bernard for filling out the details of this. I don't intend to go through a huge amount of this full status in a lot of depth, but I think it's worth running through it quickly, so that we've got a sense of where we are with it. Yeah, we've changed the name, because NV isn't what we're doing anymore; this is extended use cases. And there are nine consensus use cases, but some of them have non-consensus requirements, and there are seven non-consensus use cases in there.
C: There are 31 open issues of varying ages, but 18 of them have been open for two years, and so, yeah, it's not in a particularly healthy state from that point of view. And that begs the question: what should we do with non-consensus use cases that aren't progressing? What should we do with non-consensus requirements? And what do we do with use cases where there are no requirements or proposals?
C: Next slide, please. So, just to dive into a few examples here (I'm probably not going to read the whole thing): we've got the multi-party gaming, for which the requirements have consensus and there are some proposals. We've got the mobile calling service, for which there are non-consensus requirements, but it has consensus.
C: We have the video conferencing with a central server, which, to be honest, is the core use case that everyone's using WebRTC for, and all the requirements have consensus and there are relevant proposals. Next slide, please. And then we've got more use cases with consensus, say the file sharing, where there's a proposal and it's got the consensus. There's the Internet of Things, for which we don't have consensus on one of the requirements, but we do have a proposal that's associated with that requirement to some extent. And then we've got virtual reality gaming, where we've got consensus and that's in a good state. And then funny hats, which, as I say, I don't think is a use case.
C: I mean, I'm not even sure that some of these others are, but it's in a good state from the point of view of the paperwork. And then the machine learning one is interesting, because it effectively has a bunch of the same requirements as funny hats, but it's a different use case. And, Bernard?
A: Yeah, that's a weird one, Tim, because it really begs the question of what these use cases are for, and one thing adds some confusion: in the IETF (and maybe Dom or someone can explain the W3C process) we have an applicability statement document and a use case document; they're not the same. So in the IETF there's no requirement that the use cases cover everything that the standard can do, right? The applicability statement will tell you the stuff that the standard is inappropriate for, right? But I don't know, like, in the W3C, is there an idea...? The reason we have machine learning here, even though it doesn't give you any new requirements, is people said, well, if you don't put it in, people will think it can't be used for that. But at the IETF that's considered an inappropriate argument, and an argument for another kind of document, which is the applicability statement.
C: So let's park that, because I think we'll come back to it. I think it's well worth raising at this point, but hopefully we'll get a little clearer on where we get to with that when we're a little further through.
C: Thank you. So, yeah, here we go: we've got a bunch more use cases with consensus that are plain-vanilla video conferencing, and there's a sort of semi-theme here, which is that video conferencing tends to get consensus and other things sort of tend not to, but that's probably not statistically accurate. And that one's got encoded transform as a proposal that's on the go. Next slide, please.
C: And then we've got a bunch of non-consensus use cases, for things like low latency streaming and low latency broadcast with fan-out and decentralized messaging, and I would point out that we've got non-consensus on these use cases, but people are out there doing them, right? And so it's kind of an odd situation: we sort of don't believe that they're use cases, but they obviously are, because people are using them now. I'm not quite sure what that means.
C: But it slightly bothers me. Next slide, please. And then we've got, as I said before, the low-complexity signaling: we've said there are no requirements, it's non-consensus, and there are no relevant proposals, except that WISH is exactly that, and that's what's happening. And likewise one-way media, where WISH is doing this and a bunch of other people are doing this. But we have no consensus on the use case, and we've got a ton of open issues on it, and the requirements are in a non-consensus state. So we're kind of in the weird situation of almost saying we don't think you should be doing this, when people are out there doing it, which I think is not a positive thing. Next slide, please. So, stepping back, I thought:
C: Well, you know, what is this use case document for? What's the use case for the use case document? Who is the intended readership, and what will they do with it? And then, of course, how should it evolve? Because I think one of the things that we're not doing well with this document is letting it evolve.
C: It's not a living document, but it's also not definitive. So it kind of falls between two kinds of documents that I think would be more desirable. So I put two, actually not very funny, hats on to think about that. Next slide, please.
C: Where have we got to? It's maybe, to some extent, a progress meter, but also a yardstick to see whether the changes that people are bringing to us are suitable, whether they fit what we've said we're trying to do. And then I think it's also desirable to have a way of maybe decoupling the scenarios from the requirements; it's very easy to dive in with the requirements, or even the solution.
C: The sort of purpose of this document, from my perspective, is to help decouple, and to help shape the scenarios and the requirements so that we understand them. But also, I think we need a place to define our direction, our aspirations, because particularly RFC 7478, to some extent the original definitive work for this, is out of date.
C: We've been doing this a long time, and the world has changed, and I think we need some kind of forward direction on this. And I'm happy, in a minute, for people to bring up other things to add to this; I don't think it's exclusive at all. It's just what I came up with. Next slide, please.
C: And then the other hat that I could put on, and did, is a developer who's building on WebRTC every day, using the APIs, consuming them. And what I want from a document like this is confidence that my API usage will continue to be supported, and a guide to see what I might be able to use in the near future, what sort of features are coming down the pipe that might be sensible to put in my product roadmap; and a place, potentially, to ask for new API features, a place to say: hey, I've got this, you know, aquarium with turtles in it and I need this in it, so you have a place to raise that discussion and what features you might need to meet the turtle use case; and a way to know what is possible to do now.
C: Now, I realize that's kind of difficult, because it's a changing API space, but I think as a top-level guide to, well, you know, these are the things that work, it's pretty desirable. And again, feel absolutely free, in a minute, to come up with other things to add to this list, because I don't claim that it's exclusive. Next slide, please.
C: So the reason all this matters is, and I had it brought home to me very firmly recently when I was in a Slack chat and this floated past (it wasn't addressed to me; I just happened to be in the chat), it says: we've got an answer from the WebRTC team; they basically confirmed that they're doing what they're doing intentionally, since they believe the tech is meant for two-way audio conversations.
C: That's not true, but that is what they were told, and that is the sort of pervasive view, and I think this document can help move the line back to other use cases, rather than having these folks wander off and, you know, reinvent something else, or do something different and use HLS or whatever it is; they want low latency. But anyway, I think that tells me that we do have a problem. Next slide, please.
C: So I have some proposals, and again, tip of the hat to Bernard for kind of tidying these up and adding to them. So: a proposal to rename it, call it extended use cases; I think that's actually in process at the moment, but I think it's a good thing to do. You know, there are other APIs that are coming along that are filling in the gaps in the other spaces, and I think P2P is the thing that WebRTC uniquely does, and I think we should try and bring the emphasis back (which is actually already in RFC 7478), bring a little bit of emphasis back to that. And I think we should take out use cases that are now met by other standards, things that, you know, WebTransport or WebCodecs or whatever does.
C: We should be removing the use cases and their associated requirements if they're already done somewhere else. We do, though, I think, need to include use cases that have no requirements but that extend RFC 7478, particularly things like the IoT stuff, which is dear to my heart; it doesn't even crop up in 7478, but it's relevant and interesting and a good use case for this technology, I think. But then, more practically, on a kind of document level: I think we should remove use cases that don't add new requirements, with the exception of the ones that extend RFC 7478. I think there's a little tension there, and I think there's a discussion to be had about how that requirement balance goes. I think proposed API changes should probably all include changes to the use case doc.
C: Like, why are we changing the API if there's no use case for it? So I think we need to tie this into the process a little deeper, and that also comes back to what the relationship is between this document and the explainers. I think explainers are very useful, but maybe this document should contain pointers to explainers, so that it becomes much more of a living document, I think.
C: The other thing that we've struggled with in this document is the input, where the input comes from. I think, if we can, we need to find a way of broadening that. I'm happy to use webrtc.nu as a feed-in from developers, if that's thought to be useful, or we have some other shape that allows us to bring stuff in here. So we now have a queue. Oh, next slide; I think this is the discussion point slide.
C: We have a queue; let's see who's in it. Yeah, discussion time. So we've got eight minutes for discussion. Who have we got in the queue? Harald? Oh well, yeah, okay.
A: I think these are a lot of very good suggestions, Tim. I'd like to just give my opinion on each of them.
A: I do actually think that the peer-to-peer aspect is important, because it comes up a lot, and it's been a confusing point in other use cases and other working groups as well, where what I see from developers is often that they disagree with the way other working groups have done the use cases. As an example, streaming use cases: many of them actually require peer-to-peer operation. Game streaming is a good example of something where, you know, I've talked to game streamers, and it's very popular in WebRTC. In fact, all the major game streaming services use WebRTC, and many use it not just client-server but also peer-to-peer. So what I hear coming up from developers is: if it's peer-to-peer, they're forced to use WebRTC, and a lot of things that other groups think are client-server are actually peer-to-peer. So I think that one is a really good one.
A: Certainly, when you say "met by other standards", I would like to see wide usage, because there are people who claim, for example, "we're the best game streaming stuff", and when I talk to developers, they basically say: nope, I'm not interested in that stuff, I want to use WebRTC. So I think it's not just that they're met by other standards, but, you know, that the other standards are actually being used. And then, you know, if they don't get consensus, I do agree with this.
A: If something's been lying around (as you said, some of these things have been lying around for two years), are people asking for the stuff? But some of the stuff lying around for a long time is things where the issues have just not been answered, and I think you have a really great point that leaving these things in the document is probably not a great idea. And then, you know, you can always have another PR to clean it up.
A: But if it's lying in there with no consensus, and if the requirements where the issues are don't get fixed, just rip it out, I think. But there is a very big question I'd like to try to get some guidance on here, which is: are we saying that if there isn't a use case for it, it can't be done? Obviously people don't care; like you said, Tim, they're doing it anyway, so do they need our blessing?
C: I mean, just to pick up on that one: I do think it's a risk, particularly for smaller software houses. Like, if you're a small developer, and you don't know for sure whether the API point, whether your use case, is met accidentally, because it just happened to be a side effect of something that Google wanted in Meet, or whether it's something that's defined as part of the standard, because it's something that the standards body thinks needs to be done and it's been documented, I think there's a big difference there. And, you know, my quote from the Slack chat indicates that; I mean, the next piece of that Slack chat was a long discussion about whether they could move away from WebRTC, which is a shame, because, as far as I'm concerned, what they're doing is absolutely core to what we should be supporting.
A: It's not like you can go to Safari and say, you didn't do this use case; you know what I'm saying?
C: No, totally agree there. But when somebody comes up with a change to the API that says, okay, we want to change the API in some way, some API shape, and it removes the ability to do that, then there's no way of saying, from the document point of view: hey, but we agreed that this was something we wanted to do. You know, so I agree that it doesn't have any effect outside the standards space, but it does within. Let's see; Bernard, if you can finish the points, then your response, and we'll get to the others as well, yeah.
E: Yeah, so one of the things that surprised me in our handling of this document has been the problems we've been getting with consensus on use cases. I mean, the most recent example I personally encountered was the one-way use cases, where I claim that we have developers who are eager to do them, and the working group says: no, we don't have consensus that these are valid use cases.
E: That seems bizarre to me, and an even worse one is the one that's been hanging around forever, or was kicked out, with trusted JavaScript and conferencing, because that's what everyone's doing, and we don't have consensus on the use case, so we delete it from the use case document. That's nuts, right? So I have a problem with the distance between the use cases document and the use cases that I need to support in the real world.
F: I hear a lot of things I really like. I'd want to give a plus one to getting our use cases in order, a plus one to having a place where developers can ask for things and see a status of what it is, and then I think it would make sense to have a status for everything.
G: Yes, so a lot of good points; thank you, Tim. I definitely agree this needs a cleanup, and I like a lot of the points you're proposing here. One thing I don't think you mentioned is scope. I think we should be clear about scope for this, and that this is only for this working group. You mentioned removing things that have other standards, and I think it's useful to look at the history of this document.
G: The next version was supposed to be WebRTC 2.0, and it was an important part in our lives where we'd finished 1.0 and we were deciding what we were going to do next. And there wasn't a monolithic 2.0; instead, it was more of an unbundling of WebRTC into other specs. So that's why a lot of use cases are in there, like you mentioned, WISH; that doesn't need to be in there now that WISH exists, does it?
G: Does WISH existing mean that the use case stays in, or that it goes out? And I think it's important for it to serve our process, the W3C process, which is that we're not supposed to document everything for web developers; other websites can do that. I think the purpose of these use cases is to drive our discussions in the working group and on GitHub. So when someone opens an issue saying, hey, I have a problem, solve it: we don't have a great leader, so we have to have some kind of system that says whether this is in or out, right? So I would support having use cases drive the work that this working group does, and if there are other working groups, like WebCodecs, WebTransport, WISH, that's for them to organize, I think.
C: So we're now at time. I'll leave it to the chairs to tell us when we're going to pull the plug, and Tony's in the queue, yeah.
D: I just had a quick point: I think I agree with removing use cases met by other standards when they have wide adoption in the other places. My only thought was, I just hope this doesn't mean that we end up sort of accidentally siloing, where you can't have, like, peer-to-peer or those other things just because it happened to be in another standard. Do you think there's space for having, like, the need of a use case for integration between WebRTC and these other standards in this kind of document?
H: In terms of scope, I would echo what everybody's saying, that this document is for us. So mostly we are the main users, and the driver is to add new things or add new features; so that's the scope. I agree, however, that it's not very successful, and other working groups or bodies are using explainers, which is a bit better.
H: In an explainer you can have the use case, the requirements, and some examples, plus eventually some API or proposals, in a free format, and then it's much easier for us to actually match all of these. Currently, you have a use case in one document, you might have a requirement somewhere else, and so on, so it's a bit more difficult. So maybe we should try to use more explainers and fewer separate use case documents.
C: I think I like that, and I think the relationship with explainers is probably central to the solution. The question that's still unclear to me is what remains in a use case document. Is it just, like, a headline use case and a pointer to an explainer, and a list of those? Which I think is actually possibly a doable thing, but we need to think about what that will be, and I'm happy to have this conversation on the list. Now, Peter, you're back in the queue somehow.
C: So my proposal is to take this to the list. I want input from people, and I'm prepared to put some time into doing some of this, but I want to make sure that we understand what it is that we're trying to achieve. I want to broaden the scope a little more than just formally for the W3C, but, like, maybe we'll discuss that elsewhere.
A: For next month, what I'd like to do, Tim, is to continue this discussion in next month's meeting. I will try to create some PRs to address the proposals discussed here, and then we can talk about them, you know, the merit of the individual PRs, but basically a bunch of removal PRs. Sounds good; next month. Thank you.
A: Okay. Henrik and Fippo, media capture and WebRTC extension stuff, and we'll start with 134, PR 164.
I: Yes, this is a process problem we have, because the WebRTC extension spec, in order to implement the header extension API, is trying to modify the JSEP RFC, and Mozilla's Eric Rescorla objected to that, which is valid. And we are currently in the state that RFC 8829bis, the successor to RFC 8829, is in the RFC Editor's queue, so it's close to publication. Agreement was reached to make a small adjustment to the text, which will hopefully get into that new RFC once it gets published; there's a discussion in progress on the IETF list.
I: Okay, the next slide is about the question of when keyframes are generated, and WebRTC is very light on that; it doesn't even mention the term keyframe at all, and we have issues from that, because it is a side effect of some API calls. For example, setParameters may cause keyframes to be generated when you change the scaleResolutionDownBy factor, but that may not be true in some cases, for some codecs.
I: So the proposal to solve that is to allow requesting a keyframe explicitly when you call setParameters. That makes this implicit ability to cause a keyframe that we have today explicit, and in terms of semantics it is going to be similar to the RTCP FIR message defined in an RFC: at the earliest opportunity, the encoder is asked to generate a keyframe.
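As a rough sketch of the proposal (the flag name, its placement inside the encoding, and the mock sender class are all assumptions for illustration, not the settled API), the intended semantics might look like this:

```javascript
// Hypothetical sketch of the proposal above: setParameters() gains a
// per-encoding requestKeyFrame flag with RTCP-FIR-like semantics.
// MockRtpSender and the flag name are assumptions, not the real API.
class MockRtpSender {
  constructor() {
    this.encodings = [{ active: true }];
    this.keyframesRequested = 0;
  }
  setParameters(params) {
    // Apply the new encodings, but do not persist the one-shot flag,
    // since it is a request, not a lasting parameter.
    this.encodings = params.encodings.map(({ requestKeyFrame, ...rest }) => rest);
    // If any encoding asked for a keyframe, behave like an incoming FIR:
    // ask the encoder for a keyframe at the earliest opportunity.
    if (params.encodings.some((e) => e.requestKeyFrame)) {
      this.keyframesRequested += 1;
    }
    return Promise.resolve();
  }
  getParameters() {
    return { encodings: this.encodings };
  }
}

// Usage: change scaleResolutionDownBy and explicitly request a keyframe,
// instead of relying on the implicit, codec-dependent behavior.
const sender = new MockRtpSender();
sender.setParameters({
  encodings: [{ active: true, scaleResolutionDownBy: 2, requestKeyFrame: true }],
});
```

The one-shot nature of the flag (it does not survive into `getParameters()`) is exactly the point of contention raised in the discussion that follows.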
G: Oh, thank you; sorry about that, yeah. So, looking at the PR, I see it's a requestKeyFrame boolean, which is a bit odd in that it's not really a parameter, is it? So why not just make it a method, I guess? But this is bikeshedding; I don't necessarily have a problem with the proposal, other than that I understand there's already an API for this, so I'll let others talk to that. But, yeah, it seems like:
G: Maybe it could be a counter, or maybe it should be a separate method, unless there's a specific reason I missed on the previous slide about combining it with setParameters for some synchronization. And, like, which one?
E: I don't like the concept of having setParameters set something that's not a parameter. I would rather, following in the tradition of setLocalDescription, which silently generates ICE candidates, say: okay, we'll make a new call called setParametersAndSendKeyframe, which guarantees that they're done asynchronously; but setParameters should, in my opinion, I mean, getParameters should return...
E: Oh, if we can make it. That's a long queue; let's go to the next.
K: It might be a better compromise to make it work with getParameters; that would require some re-validation, but I don't think that's necessarily a really hard thing to do. One thing that I see that might be an issue is that the symmetric API on a receiver would be very different, because we don't have setParameters on receivers. So do you have any idea about what we should do there, if we wanted to add an API that does similar things?
G: Yeah, so, quickly, I forgot one thing to ask. Just because the browser's not doing the right thing doesn't mean we need to jump to a JS API; a JS API should only be needed if the browser can't figure this out. So I think first we should look at whether, maybe on changing active, the browser should send a keyframe, and we could standardize that, instead of relying on applications to, you know, fix my browser. Thanks.
H: Yeah, two things now. The first thing is: in encoded transform there's a sender/receiver API and there's a transform API, and they are different; each of them partly solves different issues. So I guess this one is mostly targeting the sender API, and it makes sense to me that it's synchronized with setParameters.
H: So we could bikeshed there, I guess. If the previous behavior is not good, then maybe it could be a policy that holds throughout the connection, if you do not want to provide the setParameters-specific parameters. But I guess the transform API itself would still remain, or are you suggesting removing it?
J: Next, thanks, yeah. I think this makes sense to me. It seems like there are two reasons you're going to want to deactivate the top layer: either the person left the call, or they're switching down. So it's, like, semantically two different setParameters, but, yeah, I feel like most of the time, almost all the time, you're going to want to send a keyframe when you're removing a higher layer, except for that last-subscriber-leaving-the-call case. And I think also, by extension:
J: This means that when the sender throttles an upper layer due to bandwidth constraints, we should also send a keyframe in that scenario, although that doesn't involve an API change. Just like most of the time: we could almost always send a keyframe and not even require an API change.
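The rule of thumb above can be written down as a tiny decision helper; this is only an illustration of the argument being made, and the reason names are invented:

```javascript
// Hypothetical helper capturing the rule of thumb above: when a higher
// simulcast layer is deactivated or throttled, the remaining layers almost
// always want a fresh keyframe, except when the layer went away because
// the last subscriber left the call. The reason strings are invented.
function wantsKeyframe(reason) {
  switch (reason) {
    case "subscriber-switched-down":
    case "bandwidth-throttled":
      return true; // remaining subscribers need a decodable starting point
    case "last-subscriber-left":
      return false; // nobody is watching; a keyframe would be wasted bits
    default:
      return true; // when in doubt, refresh
  }
}
```

If the browser applied a rule like this automatically on layer deactivation, most applications would not need an explicit keyframe-request API at all, which is the point made just above.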
L: Sure, can I go? I'll try to be fast. We all know and love scaleResolutionDownBy. It lets you do something like this: you capture a 720p track, you apply some expensive video effects, and then you send simulcast, say two layers of simulcast. But if the server tells you that 720p is not needed, you can deactivate the top layer and you'll just send the 360p; you just set the resolution down by two. The question is now: why are we applying expensive video effects on our 720p track if we're only sending 360p?
L: You can do this, but it's racy. So if you change the size, you will probably send the wrong resolution on the wrong layer, and you will generate keyframes unnecessarily. And, you know, you can try to work around this and deactivate layers to avoid the jumping up and down, but that will also generate more keyframes, and, per the previous slides, like what people talked about, maybe even more keyframes. So how about:
L: We add a scaleResolutionDownTo API, where you specify the resolution you want to send, rather than a relative term. So if you say, I want to send 360, then you'll send 360, and it doesn't matter if the track changes size, because the encoder just sends 360. And there exists a similar API in libwebrtc, so maybe we can experiment. Thoughts? Interest? Peter?
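A sketch of what the scaleResolutionDownTo semantics might look like (the function and field names are assumptions; the proposal is not specced): the target acts as an absolute cap, and frames are never upscaled, which is the "it's going to be a max" behavior discussed next:

```javascript
// Sketch of possible scaleResolutionDownTo semantics (names are
// assumptions): scale uniformly so the output fits within the requested
// resolution, but never upscale a smaller input; the target is a max.
function outputSize(input, target) {
  const factor = Math.min(
    1, // never upscale
    target.width / input.width,
    target.height / input.height
  );
  return {
    width: Math.round(input.width * factor),
    height: Math.round(input.height * factor),
  };
}

// A 720p capture asked to send 360p is downscaled; a 180p capture asked
// to send 360p passes through unchanged, regardless of later track resizes.
```

Because the output depends only on the requested target (clamped by the current input), a track changing size mid-call no longer races with the configured layers the way a relative scaleResolutionDownBy does.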
L: Well, it's just like scaleResolutionDownBy: you would be allowed to do whatever you want, but, just like with scaling down, we don't want to upsize. So if the track becomes tiny, tiny, tiny, we're not going to upscale it; so it's going to be a max, right. A max.
F: Right, and if you say 720, 360, 180, and then you feed in something that only has, say, 360, or the bandwidth limitations are such that 720 can't be sent, it just drops the top layer?
L: Yeah, so I think there are probably more discussions needed on a potential pull request, like portrait mode and stuff like that as well. But the general idea is: do whatever we already do with scaleResolutionDownBy, but remove this dependency on the input frame, to avoid the race. Gotcha.
H: Yeah, I've got mostly the same question as Peter, and so I just want to know whether there are, like, workarounds; I guess you can set that to false and then control it with active, yeah.
L: I think there are workarounds; I'm just not sure how good they are in practice.
K
I
think
an
API
like
this
would
be
great
I,
don't
really
agree
with
a
single
value
for scaleResolutionDownTo,
as
mentioned
in
other
places,
there's
a
problem
of
orientation.
If
you
have
a
portrait
or
landscape
capture,
use,
that
value
will
be
very
different,
will
have
a
different
meaning,
so
we
might
want
to
talk
about
the
API
shape
there.
K
There's
also
something
that
Peter
said,
which
is
what,
if
you
feed
a
frames
that
are
smaller,
with
lots
of
different
resolutions
that
are
bigger.
Maybe
we
want
to
have
an
extra
mechanism
as
well
to
be
able
to
stop
sending
a
layer
if
a
frame
size
is
too
small
to
prevent
having
many
different
simulcast layers
that
are
the
same
size
if
I
say
1080p, 720, 360
and
I
feed,
360.
K
I
don't
want
to
have
three
layers
that
are
360.,
so
we
might
want
to
copy
a
couple
of
these
with
other
apis
I,
like
the
direction
of.
L
Okay,
what's
the
conclusion,
it
seems
like
generally
positive,
but
details
need
to
be
fleshed
out.
Should
I
should
I
provide
a
pull
request.
H
L
L
L
The
original
issue filed
talked
about
the
you
know,
calling
this
excessively
means
excessive
number
of
objects
created
on
the
GC
heap
not
being
ideal
for
performance,
and
it's
it's
a
lot
of
talk
about,
should
the
API
be
asynchronous
or
synchronous
should it return a dictionary,
i.e.
a
copy
or
an
interface, i.e. a reference
to
the
same
object.
So
there's
there's
two
main
API
shapes
that
have
been
discussed,
there's
more
too,
but
I'm
trying
to
keep
it
simple.
L
We
have
track.getStats() returning a dictionary, or track.audioStats / track.videoStats
to
return
an
interface
and
next
slide.
So,
first
of
all,
the
question
is:
how big of a problem is garbage collection?
so
the
intended
use
case
is
to
poll
stats
once
per
second
per
track.
MediaStreamTrack
is
not
accessible
from
real-time
threads.
So
arguably,
you
know
should real time push
the
requirement
or
not
and
Jan-Ivar mentioned
the
GC
nurseries
like.
If
you
do
call
this
a
lot.
L
L
Yeah
because
I'll
try
to
Encompass
the
the
counters
to
what
you
could
do,
but
anyway,
so
we're
only
talking
about
local
capture
tracks.
So
it's
not
a
lot
of
tasks
1 or 2 per second,
if
you
have
an
audio
and
video
track,
but
what
I
want
to
at
least
people
to
have
in
their
back
of
their
their
mind,
is
because,
typically
when
it,
when
we
add
stats
eventually,
someone
asks
for
more
stats.
So
next
slide
humor
me:
what if we start to have more?
L
If
someone
wants
stats
that
are
also
available
on
remote
tracks,
for
a
video
conferencing
use
case,
you
could
end
up
with.
You
know:
do
some
math
here
you
could
have
2,000-plus stats per second
and
what
I'm
kind
of
fearing
whether
that's
valid
or
invalid
I'll.
Let
you
decide
what
I'm
fearing is if we're
in
a
situation
where
we
need
to
do
IPC
in
order
to
grab
stats,
so
fear
next
slide.
L
So
the
two
main
things
I'm
concerned
about.
Yes,
one
is
the
excessive
task
posting
for
apps
that
are
only
occasionally
interested
in
stats,
and
the
other
thing
I'm
concerned
about
is,
if
there's
cross,
process,
metric
collection
and
unnecessary
IPC.
So
next
slide,
it's
been
pointed
out
that
the
first
problem
can
be
avoided
with
the
mutex
and
that's
true
as
long
as
you
make sure to cache the stats and
carry
it
in
the
next
type
of
distribution
cycle.
L
For the second problem it has been proposed (next slide) that you can
piggyback
the
metrics
update,
because
it's
only
a
few
bytes
of
counters
you
can.
You
can
piggyback
on
other
IPC
messages
and
what
I'm,
not
very
comfortable
with
with
that
solution,
is
that
it
assumes
IPC
happens
anyway.
L
So
what
if
the
source or sink
lives
in
different
processes,
you
know,
should
we
be
forced
to
send
a
bunch
of
IPC
messages
just
to
update
these
stats?
That
may
or
may
not
be
read
next
slide.
L
But
at
the
end
of
the
day
this
this
is
what
we're
talking
about
or
some
version
of
it
open
to
suggestions,
but
we
need
to
decide
synchronous
or
asynchronous
interface
or
dictionary.
I,
don't
think,
there's
a
huge
difference
personally,
but
you
know
your
mileage
may
vary
and
I
I
would
like
input
on
if
my
concerns
are
valid
or
invalid
and
on
the
next
slide.
This
is
the
last
slide
before
we
get
to
the queue.
L
I
have
three
proposals
proposal
a
promise?
Yes,
that's
proposal,
B
interface
and
proposal.
C
is
you
know,
let's
make
everyone
happy?
We
we
have
a
synchronous
API,
but
we
just
say
that
it
should
be
the
latest
snapshot.
So
you
could
do
a
batch
updates
if
performance
ever
becomes
a
problem.
Yes,
let's
go
to
the
queue
Jan-Ivar.
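Proposal C (a synchronous API whose value is the latest snapshot, refreshed in batches) could look roughly like this sketch. The class name, the counter source, and the refresh interval are hypothetical assumptions for illustration, not spec text.

```javascript
// Minimal sketch of "proposal C": a synchronous stats attribute whose
// value is the latest snapshot, refreshed at most once per interval so
// reads never block on cross-process work. All names are illustrative.
class TrackVideoStats {
  constructor(fetchCounters, minUpdateMs = 1000) {
    this._fetch = fetchCounters;   // e.g. reads shared-memory counters
    this._minUpdateMs = minUpdateMs;
    this._snapshot = null;
    this._lastUpdate = -Infinity;
  }
  _maybeRefresh(now) {
    if (now - this._lastUpdate >= this._minUpdateMs) {
      this._snapshot = this._fetch(); // batch update, no await needed
      this._lastUpdate = now;
    }
  }
  framesDelivered(now = Date.now()) {
    this._maybeRefresh(now);
    return this._snapshot.framesDelivered;
  }
}

// Polling faster than the refresh interval just re-reads the snapshot.
let counter = 0;
const stats = new TrackVideoStats(() => ({ framesDelivered: ++counter }));
const first = stats.framesDelivered(0);    // first read takes a snapshot
const cached = stats.framesDelivered(500); // within the interval: cached
const fresh = stats.framesDelivered(1500); // interval elapsed: fresh snapshot
```

The point of this shape is that reads stay cheap even if an app accidentally polls every event-loop task, while the snapshot contract keeps cross-browser behavior predictable.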
H
Yes,
so
the
use
case
here
is
getting
stats.
It's
not
getting
real-time
information
and
that's
why
you
call
an
API
and
you
get
the
result
and
if
you
want
it
like
10
seconds
later,
you
will
call
again
the
API
and
there
will
be
some work
being
done
and
that's
good
in
in
the
interface
case.
H
You
get
the
interface
object
and
then
the
user
agent
will
start
the
processing,
no
matter
whether
you
start
to
actually
get
data
from
it,
and
this
is
more
suitable
for
real-time
kind
of
processing,
where
you're
actually
creating
a
lot
of
data
repeatedly
every
task
here,
basically
every event loop task
and
I.
H
Don't
think
that
this
use
case
here
is
targeted at that,
because
the
name
of
the
API
is
getting
stats
and
stats
are
like
yeah
one at a time,
like
every
second,
you
you
want
to
get
some
results,
but
you
you
don't
want
to
get
them
like
every
event loop
task.
So
that's
why
I
would
tend
to
go
with
a
promise
based
get
stats
and
if,
in
the
future
we
need
like
real-time
information
like
that,
then
we
will
be
free
to
add
additional
apis.
H
And
that's
something
that
is
already
being
done
in
webrtc
PC,
for
instance,
you
can
get
stats
about
frames
being
received,
and
you
can
also
have
request
video
frame
call
back
or
media
capture
transform.
If
you
want
to
know
precisely
at
at
what
time
the
video
frame
is
being
received-
and
these
are
two
different
apis-
and
this
is
fine-
so
propose
okay
and.
L
G
Oh
yeah,
so
unfortunately
Paul
couldn't
be
here
today,
so
thank
you.
Henrik
for
adding
slides
to
present
to
apis
I
thought.
I
should
give
a
little
a
bit
of
input
since
you
advocated
for
one
side
here,
Paul
made
a
good
comment
on
the
issue.
I
think
involving
IPC.
It
says
this
is
not
available.
G
And
with
the
real
time
or
not
real
time
is
the
Spectrum
it's
hard
real
time
like
in
a
real-time
audio
processing,
and
there
are
softer
flavors
in
real
time,
such
as
Graphics,
rendering
at
60hz,
so
I
think
I
just
feel
that
this
working
group
has
fallen
into
a
pattern
that
it
likes
and
it
might
be
good
to
look
at
other
working
groups
like
the
media
working
group
that
not
maybe
not
everything,
is
a
stat.
G
But
at
the
same
point
you
mentioned
queuing
a
lot
of
tasks,
for
example,
and
using
a
mutex.
You
don't
also
need
to
you,
you
don't
have
to
use
a
mutex.
You
can
also
do
lockless
implementations
of
this,
which
should
so
there
are
ways
to
implement
this.
That
doesn't
require
locking
and
in
fact
the
w3c
design
guide
says
we
should
not
lock
in
a
getter,
so
lockless
would
be
one
option.
I
think
it
comes
down
to
use
cases
if
the
use
case
really
is
just.
G
This
is
a
feeder
API
for
my
Graphics
library
and
I'm
going
to
call
it
once
per
second,
then
you
do
want
a
copy
of
the
data,
in
which
case
you
know
a
dictionary
might
be
fine,
but
I,
just
like
the
working
group
to
consider
that
other
working
groups
that
also
deal
with
real-time
has
apis
already,
like
media
element,
current
time
and
audio
context,
latency
that
update
just
fine
and
have
simpler
apis,
because
they're
attribute
to
just
read,
which
clearly
is
simpler
than
having
to
remember,
to
do
an await
and
to
you
know,
allow
other
application
state
to
and
yield
to
other
application
State
application
code
in
order
to
just
read
some
information.
L
I
mean
the
the
use
case
is
pulling
once
per
second
and
and
I
have
trouble
imagining
why
you
would
want
to
call
it
at
higher
frequency
from
a
JavaScript
main
thread
or
equivalent
and
also
like,
if
we
knew
for
sure
that
we're
only
going
to
Target
this
we're,
never
gonna,
add
any
any
more
metrics,
then
fine,
yeah
I
think
the
piggybacking
works,
but
so
my
main
concern
is
well
one.
L
Is
the
ergonomics
I,
don't
see
a
big
difference,
whether
you
put
the
await keyword
in
front
of
it
or
not
and
and
secondly,
like
I
guess
my
concern
is
one
month
from
now
someone
says:
oh,
what
about
these
other
metrics
and
and
then
we've
painted
ourselves
in
the
corner
which,
again
with
proposal
C,
you
could
probably
get
around,
but
I
I.
Just
don't
see
why?
Yes,
you
and.
H
Yeah
also
to
mention
with
that
with
promise-based
approach,
it's
very
clear
that
you
call
the
API
and
you
will
gather
the
result.
If
you
have
a
synchronous
API,
then
you
have
to
identify
what
is
the
frequency
at which it should be updated
and,
for
instance,
in
the
media
element
current
time?
The
use
case
is
that,
it's
being
it
will
be queried for synchronization with other data,
for
instance,
so
you
have
to
update
it
very
frequently,
but
in
another
in
an
implementation
like
that,
maybe
Safari
would
say.
H
Oh
the
use
case
is
that
so
we
will
be
very
lazy
and
do
it
like
once
every
second
or
so,
and
other
browsers
will
not
do
that
and
then
you
start
to
have
like
different
behaviors
and
then
the
spec
will
have
to
say:
okay,
you
have
to
update
it
to
that
kind
of
frequency
and
so
on,
and
it
starts
to
not
be
great
I
think
so.
That's
why
we
have
a
promise
based
approach.
The
contract
is
a
bit
like
clearer
as
well
and
fits
the
use
case.
G
So
I
just
like
to
quote
Oprah
to
say
we
should
make
decisions
out of love, not out of fear,
because
we
can
all
play
the
fear
game
like
what.
If
in
the
future,
someone
will
actually
need
to
read
these
stats
to
do
real-time
audio
processing,
and
then
we
would
have
wished.
We
had
a
you
know
an
attribute,
API,
so
well.
I
think
we
should.
G
Decisions-
I
guess
yes,
but
you
know
there's
always
for
every
decision.
There's
a
you
know
what
didn't
happen
so
I
think
we
should
make
decisions
based
on
the
best
information.
We
have
now
not
necessarily
try
to
anticipate
every
future
use.
J
H
Another
question
would
be
if
we
have
to
design
it
as
a
real-time
API.
Would
we
feel
confident
that
the
synchronous
API
will
be
the
best
choice
or
not,
and
I
don't
know
so
we
would
have
to
think
about
it
precisely
without
having
the
exact
real-time
use
cases
that
we
are
trying
to
to.
Think
of.
So
that's
why
it's
also
fuzzy
to
try
to
address
these use cases.
While
we
do
not
really
have
the
precise
information.
G
And
also
since
it
was
mentioned
on
the
issue,
a
variant
of
The
Proposal
a
would
be
to
sorry
a
proposal.
B
would
be
to
have
the
attributes
directly
on
the
track.
In
case
the idea of a separate interface
was
confusing
to
people.
G
L
G
I
think
a lockless
implementation
would
be
since
you're
reading
a
value
from
another
thread.
There
are
there,
are
papers
and
stuff
on
how
you
can
do
that
without
requiring
a
lock,
but
it
just
involves
a bit of tricky expert code.
G
I
think
we
would
like
to
hear
what
what
other
members
feel.
E
I'll
kind
of
see
that
the
specifying
and
implementing
an async interface is simple,
probably
performant enough
and-
and
we
can-
we
can
do
that
pretty
quickly.
Why
not.
L
Okay,
well
I
guess
I'm
done
with
these
slides.
G
Yeah,
sorry,
sorry!
So
if
it's
just
Mozilla
on
this
issue,
let
me
talk
to
Paul.
It
sounds
like
there's
not
a
lot
of
other
interest
in
an
attribute
API,
so
we'll
try
to
discuss
and
come
back
to
you.
M
Okay
and
I
would
like
to
continue
our
discussion
on
the
ice
controller,
Slash
webrtc
ice
apis.
So
to
recap:
the
discussion
thus
far
at
the
previous
interim
meeting
Peter
and
I
proposed
a
set
of
improvements
to
the
API
that
will
incrementally
allow
applications
to
control
ice
to
an
increasingly
greater
extent.
M
There
was
a
positive,
possibly
cautiously
positive
response
to
it,
so
I've
gone
ahead
and
I
would
like
to
start
the
discussion
on
the
first
of
those
set
of
improvements,
so
I've
written
up
an
issue
on
webrtc
extensions,
repo
I'll,
try
and
keep
this
brief.
So
we
have
enough
time
for
discussion,
but
the
first
increment
is
basically
to
prevent
the
removal
of
a
candidate
pair.
M
The
main
use
case
for
this
is
to
maintain
connection
redundancy
and
gradually
build
on
this
to
be
able
to
switch
the
connection
to
another
connection,
candidate
pair,
so a canonical
use
case,
for
this
would
be
when
you
have
several
network
interfaces
or
several
options
to
connect
your
call,
and
let's
say
you
can
choose
between
relay
and
not
really
or
between
Wi-Fi
and
cellular
or
Wi-Fi
and
ethernet.
M
B
M
M
So
there's
a
couple
of
different
approaches
to
this.
We've
talked
about
these
a
bit
before,
but
to
summarize,
we
can
do
a
cancelable
event,
so
this
is
when
the
ice
agent
has
decided
to
remove
a
candidate
pair.
But
before
that
removal
actually
happened,
there
will
be
an
event
that
the
application
can
cancel
and
prevent
the
removal
from
taking
place,
and
the
other
is
the
ice
agent
continues
to
do
what
it
does.
M
The
application
just
provides
inputs
to
the
ice
agent
by
setting
certain
attributes
on
a
candidate
pair
and
to
reiterate
in
either
of
those
cases
the
existing
Behavior
does
not
change.
It
is
only
when
the
application
takes
some
steps
to
change
the
Behavior.
Anything
different
happens
so
on
the
next
slide,
going
a
bit
more
into
cancelable
events.
M
So
yeah
the
basic
idea
is
the
ice
agent
when
it
decides
that
it
wants
to
remove
a
candidate
pair,
it
stops.
It
pauses that action
temporarily
and
lets
the
application
know
that
it's
about
to
remove
a
candidate
pair,
and
then
it
waits
for
that
event
to
finish
dispatch.
If
the
application
calls
prevent
default
on
the
event,
the
ice
agent
does
not
remove
that candidate pair.
It
can
come
back
and
propose
removing
that candidate pair
at
a
later
time.
That's
completely
fine.
M
This
is
the
way
other
events
work
in
the
case
of
touch
or
form
submit
events.
So
this
is
an
established
pattern
in
some
use.
Cases
on
the
left
is
what
the
API
proposal
looks
like
so on RTCIceTransport
There's
a
new
event
that can be fired
when
the
candidate
removal
is
being
proposed
and
then
on.
The
right
is
how
an
application
would
use
it.
So,
in
the
event,
listener
called
prevent
default.
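The cancelable-event shape just described can be sketched with a plain EventTarget. The event name `icecandidatepairremove` and the `IceAgent` object here are illustrative assumptions standing in for the proposed API, not shipped behavior.

```javascript
// Sketch of the cancelable-event proposal: the ICE agent fires a
// cancelable event before removing a pair, and preventDefault() vetoes
// the removal. Names are illustrative, not the shipped API.
class IceAgent extends EventTarget {
  constructor() {
    super();
    this.pairs = new Set();
  }
  proposeRemoval(pair) {
    const event = new Event('icecandidatepairremove', { cancelable: true });
    event.candidatePair = pair;
    // dispatchEvent returns false if a listener called preventDefault().
    if (this.dispatchEvent(event)) {
      this.pairs.delete(pair); // not vetoed: removal proceeds
    }
    // Vetoed: the agent may propose removing the pair again later.
  }
}

const agent = new IceAgent();
const backupPair = { id: 'relay-pair' };
agent.pairs.add(backupPair);
agent.addEventListener('icecandidatepairremove', (e) => {
  if (e.candidatePair === backupPair) e.preventDefault(); // keep redundancy
});
agent.proposeRemoval(backupPair); // the backup pair survives
```

This matches the pattern of other cancelable events like form `submit`: the default behavior runs unless the application explicitly opts out for a given pair.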
M
So
changing
the
automatic
Behavior
instead,
so
as
a
another
couple
of
ways
to
do
this,
so
one
approach
we've
already
talked
about
in
the
past
is
to
have
a
removable
attribute
on
a
candidate
pair.
M
So
on
the
left
again
is
the
API
proposal,
so
RTCIceTransport
can
have
an
event
that is fired when a candidate pair is added,
and
the
application
can
set
the
removable
flag
on
that
to
true.
or it can also set removable to false. By default it's true,
and
if
this
is
false,
then
the
ice
agent
does
not
remove
the
candidate
pair.
The
flag
could
be
set
either
in
the
event
listener,
or
it
could
be
set
at
a
later
time
as
well.
We
can
discuss
a
bit
towards
the
end.
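The attribute-based alternative just described could be sketched like this. The `CandidatePair` shape and `agentTryRemove` helper are hypothetical illustrations of the removable-flag semantics, not proposed spec text.

```javascript
// Sketch of the attribute approach: each candidate pair carries a
// removable flag (true by default) that the application can flip, and
// the agent simply consults it before removing. Illustrative names only.
class CandidatePair {
  constructor(id) {
    this.id = id;
    this.removable = true; // default: agent manages lifecycle as today
  }
}

function agentTryRemove(pairs, pair) {
  if (!pair.removable) return false; // application pinned this pair
  pairs.delete(pair);
  return true;
}

const pairs = new Set();
const wifi = new CandidatePair('wifi');
const cellular = new CandidatePair('cellular');
pairs.add(wifi).add(cellular);
cellular.removable = false;      // keep the backup path alive
agentTryRemove(pairs, cellular); // returns false, pair kept
agentTryRemove(pairs, wifi);     // returns true, pair removed
```

Unlike the cancelable event, the flag can be set at any time, not just inside an event listener, which is the trade-off raised in the discussion points below.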
M
M
Next
yeah,
so
another
approach
to
do
this
is
instead
of
removable,
attribute,
there's
a
Timeout
on
a
candidate
pair,
and
then
we
leave
the
life
cycle
management
of
the
candidate
to
the
ice
agent,
the
application
just
States
what
the
timeout
should
be
for
a
candidate
pair
and
then
the
ice
agent
can
remove
the
candidate
pair
once
that
timeout
has
expired
and
the
semantics
of
the
timeout
is
the
duration.
M
Since
the
last
ice
check
occurred
on
this
candidate
pair
or
since
data
was
sent
or
received,
so
it
works
pretty
much
the
same
way
and
the
net
here
is
that
the
application
could
actually
reduce
the
timeout
from
the
default
value
as
well.
So
you
could
have
a
candidate
where
you're
getting
removed
earlier
than
it
was
planned
or
you
could
set
it
to
the
max
value,
in
which
case
it's
equivalent
to
not
having
the
candidate
removed
at
all.
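The timeout semantics just described can be sketched with a small predicate. The field names, the default value, and the use of `Infinity` as "never remove" are assumptions for illustration only.

```javascript
// Sketch of the timeout variant: the application sets a per-pair
// timeout, and the agent removes a pair once that long has passed since
// the last ICE check or data activity. Illustrative, not a shipped API.
const DEFAULT_TIMEOUT_MS = 30_000; // assumed default, not from the spec

function shouldRemove(pair, nowMs) {
  const timeout = pair.timeoutMs ?? DEFAULT_TIMEOUT_MS;
  if (timeout === Infinity) return false; // max value: never removed
  return nowMs - pair.lastActivityMs >= timeout;
}

const pair = { lastActivityMs: 0, timeoutMs: 5_000 }; // shortened timeout
shouldRemove(pair, 1_000); // recently active: kept
shouldRemove(pair, 6_000); // idle past the timeout: removable
pair.timeoutMs = Infinity; // equivalent to never removing the pair
```

This keeps lifecycle management with the ICE agent: the application only supplies the number, and can shorten it below the default as well as extend it.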
M
So
on
the
next
slide,
so
that's
a
discussion
point.
So
let
me
quickly
go
over
these.
So
a
is
there
a
completely
different
approach
that
we
could
take
to
this,
and
then
we
don't
need
to
necessarily
take
the
same
approach
to
all
of
the
events
that
are
in
those
set
of
proposed
improvements.
M
So
we
could
decide
that
maybe
it's
best
to
leave
the
life
cycle
management
of
candidate pairs
to
the
ice
agent,
and
so
here
we
do
attributes,
but
then
for
for
figuring
out
how
how
often
to
do
checks
or
how
to
switch
candidate
pairs.
Cancelable
events
makes
more
sense
over
there.
M
Like
that's,
I,
think
a
reasonable
question
to
ask
and
then
in
case
of
attributes,
is
it
better
to
just
set
attributes
to
have
setable
attributes,
or
would
it
be
a
better
option
to
have
methods
that
can
fail
if
setting
the
attribute
does
not
make
sense,
so
one
example
of
this
would
be
the
ice
agent
is
already
in
the
process
of
removing
or
it
has
already
removed
a
candidate
pair but the event hasn't fired yet.
what
happens
when
the
application
sets
removable
to
false?
M
So
does it
not
get
silently
ignored,
or
should
that
be
an
error
condition
and
then
lastly,
it
may
be
reasonable
to
say
that
this
is
not
the
thing
that
we
should
be
tackling
right
away.
Maybe
there's
there's
something
further
along
in
the
list
that
we
should
be
talking
about
instead,
so
that's
the
discussion
points.
L
F
There
too
much
background
noise
I
mean
there
is
I,
think
that
this
makes
sense
as
the
first
thing
to
do.
Although
it
we
should
admit
that
it
is
more
useful
once
you
can
select,
which candidate pair you're sending with.
It's
no
surprise.
I
like
the
removable attribute
I,
do
think
it
would
make
sense
to
have
a
set
removable
or
sorry
the
removable
attribute
and
having
a
set
removable
method
in
case
it
can
fail.
E
I
see
a
slight
problem
here
in
that
removal
of
candidates
isn't
timer
driven.
E
So
I
don't
think
that
proper
ice
implementation
can
will
fit
well
with
the
timeout.
So
I
think
that,
because
of
this,
the
cancelable
event
is
is
a
better
fit
because
that's
and
that
allows
the
ice
engine
too,
do
whatever
it
thinks.
It
is
correct.
According
to the ICE protocol.
G
Yeah,
so
sorry,
I'm,
not
an
ice
expert
so
of
the
proposals
proposed
I
kind
of
without
seeing
the
whole
picture,
I
kind
of
like
prevent
default
on
the
remove
one,
because
otherwise
I
don't
see
a
purpose
of
why
you
need
a
callback
for
adding
one
if
it's
the
only
way,
if
the
only
purpose
to
get
notified
of
adding
is
so
that
you
can
improve
it,
that
seems
like
more
API
surface
than
we
need
so
I
like
prevent
default.
G
It
seems
like
a
decent
enough
thing
if
the
goal
really
is
to
prevent
the
user
agent
from
removing
a
pair,
but
it's
still
not
clear
to
me
what
Behavior
you
actually
expect.
What
do
you
expect
to
use
data
to
do
by
by
preventing
it
from
removing
this
thing
like?
What
is
the
actual
functionality
that
you're
looking
for
and
could
that
be
expressed?
G
Is
the
problem
here
that
I
know
this
ice
transport
does
not
let
you
actually
look
at
all
the
pairs
you
can
get
selected
pair,
but
that
seems
to
be
the
only
use
for
where
we
return
RTCIceCandidatePair.
So
it's
this.
This
is
an
Insight
problem
into
the
state
of
things.
You
know
if
there
are
other
grander
designs
that
might
help,
but
other
than
that
yeah,
based
on
the
information
available.
I
would
pick
the
the
first
option,
A,
which
was
to
prevent
it
in
default.
G
M
Right,
so
let
me
see
if
I
understand
so
sorry,
so
this
is
definitely
the
first
set
of
improvements
so
enumerating
all
the
candidate
pairs
that
are
active
or
that
are
around
at
a
certain
point.
That's
certainly
one
of
the
further
proposals,
so
you
would
want
to
do
that
as
well
at
some
point,
but
the
main
reason
to
keep
candidate
pairs
alive
as
opposed
right
now,
is
to
be
able
to
switch
to
a
different
candidate pair
at
a
certain
time
in
the
future
when
the
application
decides
start
so.
M
Time
exactly
yes,
which
is
also
for
that
on
the
list
of
post
improvements.
G
M
C
Yeah,
so
that
makes
me
wonder
what
happens
if
it
should
be
removing
it
like
if
you
like,
you're
gonna,
you
get
an
event
which
says
that
it's
removing
this
kind
of
the
pair,
because
it's
not
working
or
because
IPv6
has
disappeared
or
something
I
don't
know
like
you,
don't
want
to
prevent
that
default.
M
Right,
so
removal
is
meant
only
for
candidate
pairs
that
actually
makes
sense
to
have
around.
So
yes,
of
course,
if
one
of
your
network
interfaces
goes
down,
any
candidate pairs
associated
with
that
should
be
deleted.
That's
different!
That's
a
deletion
event
versus
a
removal
event,
which
is
this
is
a
redundant
candidate
pair.
That's
not
necessary
anymore,
and
so
the
ice
agent
decides
to
remove
that.
G
Yeah,
that
seems
confusing,
like
maybe
the
application
just
wants
to
keep
a
log
of
what's
being
removed
and
deleted
so
you're
saying
it
would
it
would
only
fire
the
event?
Maybe
it's
a
bikeshedding
issue
like
voluntarily.
You
know
remove
working
pair
or
something
but
yeah,
but
maybe
that
could
be
worked
out
in
bikeshedding,
but
it
seems
confusing,
because
I
thought
you
would
get
the
event
for
every
candidate
pair.
That
was
tossed.
M
So
I
do
also
want
to
mention
I'm still not 100% sure
it's
relevant
here,
but
there
is
the
possibility
for
the
user
agent
to
for
cancelable
events
that
aren't
actually
canceled.
So
an
event
could
be
cancelable,
but
this
instance
of
the
event
isn't
cancelable.
So
that's
perfectly
valid.
G
M
B
Yeah
I
I
think
it's
mostly
indeed
a
bikeshedding
issue
and
the
term
removal
is
indeed
a
bit
ambiguous
and
you
might
assume
that
it
includes
both
deleted
and
what
you
call
removed
candidates,
but
I
think
we
should
fix
it
by
picking
the
right
name,
possibly
explaining
it
better,
but
still
keep
separate
events
for
things
that
are
recoverable
and
things
that
aren't
and
things
on
which
the
developers
should
act
and
things
on
which
it's
mostly
an
FYI.
So
I
do,
like
the
distinction.
J
It
I
think
the
RFC
uses
uses
the
word
prune
for
or
optionally
for
these
for
this
situation,
yeah.
M
Yeah
so
in
my
very
first
ice
controller
purpose
and
I
did
use
the
term
prune.
Maybe
we
want
to
come
back
to
that
so
again,
that
might
be
a
naming
thing
that
we
can
bikeshed.
A
A
I
would
note
that
we're
approaching
the
end
of
time
for
this
segment
so.
M
M
So
to
maybe
summarize
so,
it
seems
like
I'm
hearing
most
about
for
the
cancelable
event
approach,
I'm
happy
to
write
up
a
PR
on
that,
and
then
we
can
continue
the
discussion,
not
if
that
makes
sense.
F
All
right
last
time,
I
talked
about
an
idea
for
something
called
RTP
transport,
and
there
was
some
feedback
and
I
wanted
to
follow
up
on
that
feedback.
The
three
main
things
that
I
heard
were:
what
are
the
use
cases?
Is
there
a
gap
between
somebody
using
RTP
transport
with
WebCodecs
versus
webrtc?
You
know
if
you
want,
if
you
were
using
webrtc
and
you
wanted
to
move
to
WebCodecs
and
web
RTP
transport,
how
much
trouble
is
it
and,
lastly,
providing
examples
instead
of
WebIDL?
F
F
For
example,
people
want
to
be
able
to
send
custom
data
along
with
the
audio
and
video,
either
in
the
same
packets
or
separate
packets,
but
in
the
same
congestion
control
context,
with
RTP
to
an
endpoint
that
speaks
RTP
for
audio
and
video,
but
they
want
to
include,
say:
3D
avatar
data
people
want
to
be
able
to
do
their
own
packetization,
sometimes
because
they
have
a
new
codec
like
HEVC,
which
is
supported
in
WebCodecs,
at
least
for
decode,
or
they
want
to
do
their
own
style
of
packetization
for
existing
codecs,
for
example.
F
H.264
has
lots
of
ways
you
could
do
packetization,
and
sometimes
people
want
a
particular
one
and
not
what
webrtc
happens
to
spit
out.
More
generally,
there
are
lots
of
low-level
controls
on
codecs
inside
of
web
codecs,
and
people
would
like
to
have
those
controls,
but
also
be
able
to
have
a
real-time
peer-to-peer
transport
which
webrtc
provides,
and
if
we
provide
RTP
transport,
then
that
combine
well
together.
F
F
Some
people
want
to
bring
their
own
codec,
do
some
Wasm audio
or
maybe
even
use
web
GPU
for
ML
based
codecs,
and
they
want
to
be
able
to
send
that
over
RTP
or
generally
control
lots
of
things
bring
your
own
bitrate
allocation
or
your
own
FEC
or
your
own
RTX,
or
your
own
Jitter
buffer
and
along
with
custom,
rtcp
messages
and
RTP
data.
There
are
existing
RTP
endpoints
that
sometimes
have
their
own
RTP
data
RTP
messages
that
they
want
to
do,
and
so
this
would
allow
for
better
interop.
F
F
Next
slide,
all
right,
so
what
about
the
Gap?
So
this
is
the
diagram
I
showed
before
RTP transport on one
side,
WebCodecs on the other,
the
app
in
the
middle,
but
there's
stuff.
If
you
wanted
to
replicate
webrtc
today
that
you'd
have
to
provide
next
slide,
in
particular,
there's
packetization,
depacketization
and
Jitter
buffer
Jitter
buffer,
probably
the
biggest
one
next
slide.
F
So
don't
be
scared
of
this
I'll
explain
so
on
the
left,
instead
of
one
WebCodecs box, which I just split into
audio
encoder
and
decoder,
video,
encoder
decoder
and
then
I
said.
Okay.
What
if
we
provided
a
couple?
Other
objects
to
fill
that
Gap
in
I
alluded
to
this.
F
In
my
last
presentation,
for
example,
you
could
have
an
RTP
packetizer
or
a
Jitter
buffer,
both
for
audio
and
video,
and
then
the
app
would
kind
of
just
have
to
tie
these
things
together
and
there
would
be
a
lot
smaller
gap
between
WebCodecs plus RTP transport
and
the
existing
webrtc
next
slide.
F
Basically,
if
you
created
a
transport
and
a
packetizer,
and
you
got
an encoded frame from WebCodecs,
then
you
would
just
pass
that
into
the
packetizer
and
it
would
give
you
packets
that
you
can
send
on
the
transport.
It's
pretty
straightforward.
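The flow just described, an encoded frame in and packets out to the transport, might look like this toy sketch. The packet shape and the `packetize` helper are assumptions for illustration; they are not the proposed API surface.

```javascript
// Toy packetizer in the spirit of the proposed flow: take one encoded
// frame from WebCodecs, split it into MTU-sized RTP-style packets, and
// hand those to the transport. The packet shape is illustrative.
function packetize(frame, { mtu = 1200, ssrc = 1, payloadType = 96 } = {}) {
  const packets = [];
  for (let offset = 0; offset < frame.data.length; offset += mtu) {
    packets.push({
      ssrc,
      payloadType,
      timestamp: frame.timestamp,
      sequenceNumber: packets.length,
      marker: offset + mtu >= frame.data.length, // last packet of the frame
      payload: frame.data.slice(offset, offset + mtu),
    });
  }
  return packets;
}

// A 3000-byte encoded frame becomes three packets at a 1200-byte MTU;
// each packet would then be sent on the RTP transport.
const frame = { timestamp: 90_000, data: new Uint8Array(3000) };
const packets = packetize(frame);
```

A real packetizer would follow the codec's RTP payload format (fragmentation headers, etc.); this only shows where the object sits between the encoder and the transport.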
Next
slide,
depacketization
I
was
thinking
could
be
just
built
into
the
Jitter
buffer.
F
So
if
you
created
a
video
Jitter
buffer,
you
just
take
the
RTP
packet
that
comes
over
the
transport
stuff
it
into
the
video
Jitter
buffer
with
a method, say insertPacket,
and
then,
when
the
video
Jitter
buffer
assembles
a
frame,
it
would
have
a
callback
that
says
here's
an
assembled
frame
and
then
you
could
pass
that
to
WebCodecs
to
decode
next
Slide.
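The insertPacket / assembled-frame callback shape just described can be sketched with a toy reassembler. The class name, the marker-bit completeness check, and the frame shape are illustrative assumptions, not the proposed API.

```javascript
// Toy video "jitter buffer" matching the insertPacket /
// assembled-frame-callback shape described: packets for a frame may
// arrive out of order, and the callback fires once the frame is
// complete. Illustrative only; a real jitter buffer also reorders
// frames and absorbs network jitter.
class VideoJitterBuffer {
  constructor(onAssembledFrame) {
    this.onAssembledFrame = onAssembledFrame;
    this.frames = new Map(); // timestamp -> received packets
  }
  insertPacket(packet) {
    let parts = this.frames.get(packet.timestamp);
    if (!parts) this.frames.set(packet.timestamp, (parts = []));
    parts.push(packet);
    // The marker-bit packet tells us the total packet count for the frame.
    const last = parts.find((p) => p.marker);
    if (last && parts.length === last.sequenceNumber + 1) {
      parts.sort((a, b) => a.sequenceNumber - b.sequenceNumber);
      this.frames.delete(packet.timestamp);
      this.onAssembledFrame({
        timestamp: packet.timestamp,
        data: parts.map((p) => p.payload).join(''),
      });
    }
  }
}

// Out-of-order delivery still assembles the frame in order; the
// assembled frame would then go to a WebCodecs decoder.
const assembled = [];
const jb = new VideoJitterBuffer((f) => assembled.push(f));
jb.insertPacket({ timestamp: 1, sequenceNumber: 1, marker: true, payload: 'B' });
jb.insertPacket({ timestamp: 1, sequenceNumber: 0, marker: false, payload: 'A' });
```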
F
The
audio
Jitter
buffer
is
a
little
bit
different
because
at
least
the
implementation
of
the
webrtc
that
everybody
uses
it
the
decoding
and
the
error,
concealment
and
the
time
dilation
are
all
kind
of
tied
together.
So
what
you
really
want
to
come
out
is
not
assembled
audio
frames,
but
rather
in
fact
you
don't
really
assemble
an
audio
frame
and
depacketization
is
so
straightforward.
What
you're,
really
trying
to
do
is
get
decoded
audio.
F
So
here
you
would
have
something
where
you
insert
audio
packets
and
at
the
end
you
have
something
you
can
render
a
straightforward
way
to
do.
That
would
be
with
a
track,
but
that
has
kind
of
cross
worker
problems.
So
we'd
have
to
think
of
a
way
to
allow
the
audio
jitter buffer
to
work
in
it
being
a
worker
and
be
able
to
render,
despite
that,
so
there's
that
issue,
but
for
Simplicity
they
just
said:
hey,
there's
a
track
here
on
this
slide
next
slide,
there
are
some
other
things
in
the
Gap.
I
haven't
talked
about.
F
F
I
did
want
to
provide
more
examples,
because
that's
what
was
requested
as
part
of
the
feedback.
This
is
an
example
of
custom
data.
You
could
say:
okay,
my
3D
avatar
data
is
going
to
be
payload
type,
126
and
I'm,
going
to
pick
an
ssrc
and
I'm
going
to
just
pack
in
the
data.
However,
I
choose
it's
custom
and
then
I
call
send
RTP
packet,
where
I
specify
the
payload
type
ssrc
and
payload
next
slide.
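The custom-data example just described might look like the sketch below. The transport here is a stub standing in for the proposed RTCRtpTransport, and `sendRtpPacket` is the hypothetical method under discussion, not a shipped API.

```javascript
// Sketch of the custom-data example: pick a payload type and SSRC for
// 3D avatar data, pack the payload yourself, and call a hypothetical
// sendRtpPacket on the transport. The transport object is a stub.
const sentPackets = [];
const rtpTransport = {
  sendRtpPacket({ payloadType, ssrc, timestamp, payload }) {
    sentPackets.push({ payloadType, ssrc, timestamp, payload });
  },
};

const AVATAR_PAYLOAD_TYPE = 126;      // chosen from the dynamic PT range
const AVATAR_SSRC = 0xabcd1234 >>> 0; // app-chosen SSRC

function sendAvatarData(pose) {
  // Custom packing: the app fully controls the payload bytes.
  const payload = new TextEncoder().encode(JSON.stringify(pose));
  rtpTransport.sendRtpPacket({
    payloadType: AVATAR_PAYLOAD_TYPE,
    ssrc: AVATAR_SSRC,
    timestamp: 0,
    payload,
  });
}

sendAvatarData({ x: 1, y: 2, z: 3 });
```

Because the avatar packets ride the same transport as the audio and video, they share the same congestion-control context, which is the point of the use case.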
F
An
example
of
bring
your
own
packetization
might
be
that
with
web
codecs,
I
create
an
hevc
decoder
and
then
I
create
my
own
depacketizer.
Just
my
own
custom
thing
and
then
on
on
the
transport
I
get
the
RTP
packet
I
hand
the
RTP
packet
over
to
the
depacketizer.
Once
it's
decided,
it
has
a
whole
frame.
Then
it
hands
me
that
and
I
decode
it
using
the
hevc
decoder
next
slide.
F
Oh,
that's
it
so
discussion,
I
have
a
few
questions.
One
of
them
is
about
use
cases.
Do
we
need
more
use
cases
like
what
what
is
required
to
convince
people
that
we
need
to
do
something
here?
Another
one
is:
do
we
need
more
examples?
F
I
just
provided
a
few
simple
ones,
but
if
we
want
more
complete
examples,
perhaps
what
exactly
do
people
want
more
generally,
should
we
try
to
fill
this
Gap,
or
should
we
leave
it
up
to
JavaScript
libraries
I
think,
there's
a
good
case
to
be
made
for
the
audio
Jitter
buffer,
the
others
I'm,
not
so
sure
and
Where
Do
We
Go
From
Here.
Somebody
suggested
writing
an
explainer.
Should
we
do
that?
A
A
So
what
one
of
the
questions
I
had
was
about
handling
of
ssrcs
and
stuff
and
whether
that
would
could
be
considered
a
gap
because,
like
if
you're
trying
to
use
this
to
build
some
conferencing
thing,
you
have
a
ton
of
ssrcs
that
are
coming
in
and
you
know
you
got
to
keep
track
of
them
figure
out
where
they
go.
Is
there
some
case
for
help
you
know
and
and
for
helping
to
manage
these
things?
A
Like
you
know,
at
least
in
WebRTC,
you
just
can't
receive
an
ssrc
out
of
nowhere,
you
know
has
to
be
negotiated,
so
it
has
to
be
registered
somehow
that
you're
expecting
it
to
show
up.
A
A
Another
question
is
about
the
depacketizer
here,
like
I
I,
actually
think
so
you-
and
originally
you
were
talking
about
the
video
jitter
buffer
kind
of
building
in
the
depacketizer.
But
here
you
get
to
do
your
own
and
I
guess:
are
you
saying
that
there
would
be
a
generic
video
buffer
that
would
be?
You
could
somehow
distinguish
the
depacketizer
from
that.
F
So
we've
got
two
things
there,
one
about
the
ssrcs.
You
you
I,
think
what
you're
highlighting
is
that
another
part
of
the
Gap
is
an RTP demuxer.
I
was
thinking
that
this
would
just
hand
you
the
packets.
You
do
the
demuxing,
but
if
you
wanted
the
demuxer
logic
provided
then
I
I
suppose
you
would
want
an
object.
F
That's
like
RTP
demuxer,
where
you
say:
okay
here
are
some
SSRCs, a MID and a RID,
and
you
know
right
when
it
pops
out
give
me
some
identifier,
maybe
the
mid
so
yeah
an
RTP
demuxer
is
something
that
is
in
the
Gap,
that
I
didn't
bring
up
and
could
be
provided.
A
That
would
do
it'd
be
nice
in
that
you'd
you'd
have
like
a
Handler
for
that
particular
stream
right.
That
might
be
different
from
other
handlers.
So
you
could,
you
could
actually
know
you
know
you
could
get
that
particular.
You
might
want
to
do
something
different
when
you
get
that
stream
versus
other
lights.
Yeah.
F
Yeah
that
makes
sense
on
the
depacketization
side.
The
audio
and
video
are
a
little
separate
here
for
video.
I
was
thinking
that
if
you
took
a video jitter buffer,
and
you
said
my
maximum
time
is
zero,
then
basically,
you
end
up
with
a
depacketizer;
for
audio
depacketization
is
so
trivial
that
I
don't
know
if
it's worth it.
A
The
last
question
I
had
was
about
audio
video
sync,
because
that's
the
thing
that
seems
to
be
the
nastiest
one
of
the
nastiest
things
and
I'm
wondering
you
know
if
you're
just
outputting
these
tracks
that
are
independent,
I
haven't
really
tried
it
with
MediaStreamTrackGenerator.
But
people
have
been
asking
for
samples.
I
know.
When
I
wrote
the
web
transport
samples
they're,
always
asking
me
hey.
Can
you
show
how
sync
would
work?
You
know
they're
afraid
that
this
will
just
be
a
big
mess.
If
you
just
output,
separate
audio
and
video
tracks,.
F
G
Yes,
so
if
you
go
back
to
the
use
cases,
thank
you
first
for
for
these
slides
they're.
Very
helpful.
I.
Do
worry,
though,
that
we're
abandoning
the
existing
webrtc
API
a
little
bit
when
we're
saying
that
to
solve
problems
like
custom
data
along
with
audio
and
video,
for
example,
has
been
a
long-requested
feature,
and
so,
if
our
answer
is
going
to
be
well
to
do
that,
stop
doing
what
you're
doing
now
and
then
use
this
RTP
API,
so
I'm
a
little
worried
about
that.
G
Similarly,
so
my
first
reaction
would
be
maybe
we
should
solve
time
synchronization
with
data
on
the
data
Channel
with
some
time
codes
and
there
might
be
other
ways
to
solve
it.
So
I'm
worried
that
by
focusing
on
switching
API,
basically
we're
saying
we're
not
going
to
solve
these
problems
in
the
old
API,
and
the
other
thing
I
hear
here
is
that
people
want
to
use
web
codecs
as
a
use
case
and
I
think
it's
worth
looking
at
well,
why
is
webrtc
not
exposing
the
same
codec
choices
as
WebCodecs?
G
Again, there might be users that would like these features without having to go to this very low-level API. And the third point is that both Chrome and Firefox now ship WebTransport, which offers this low-level API using WHATWG streams. So I have a lot of questions. This API right now seems to be using events and operates at a very low level by sending packets. Basically, the range goes, I think, from the top of the list:
G
There are some general use cases, but it gets more and more specific; the last one is basically "I want RTP", right? It's just "I want custom RTCP and RTP data". So I'm not sure that these all necessarily need to be solved, or should be solved, with the new API. Maybe we're stepping on WebTransport a little bit, and I understand WebTransport is not P2P.
F
F
There are situations in which RTPTransport would be more suitable than WebTransport, at least at the moment, and maybe for a long time, because of interop. On the question of WebCodecs: I actually did design an alternative to using WebCodecs with RTPTransport, where it was like this:
F
We define media senders and receivers that are like an RTP sender and receiver, but only the media half, and then you tie it to the transport half, and then you can control things in between. But it ended up being almost exactly like WebCodecs.
F
So there were a lot of duplicate steps. We could go down that road, where we say: instead of having WebCodecs and an extra thing, we just bundle those together and say, okay, there's this thing that's the media half of an RTP sender and receiver. If people are interested in that, we could try going down that path too. But I felt like there was a lot of duplication there with WebCodecs, and there are situations where these objects might be useful outside of RTPTransport.
F
Lastly, you talk about abandoning the WebRTC API. Part of what I'm trying to do with this "fill in the gap" part is to lower the difference, so that if you're using the PeerConnection and then drop down, it's not such a big gap. But I don't have much sympathy for someone saying "hey, we're abandoning PeerConnection", because it would be great if we never have to see PeerConnection again and everybody uses something better.
G
Sure, I'm just saying: are we saying we're not going to solve custom data along with audio and video in the existing WebRTC PeerConnection API? Firefox has just implemented jitterBufferTarget, for example.
F
G
F
There are a lot of things here that we can continue to do incrementally, like the keyframe control, but I don't think we'll ever get through all of it, and certainly not at the pace that we're going; it will basically take forever. App developers that want to do stuff might just give up on RTP and want to use... I don't know what, right now.
F
H
Yeah, so since we're on the use cases slides: I have people asking for some of the same things, and I think that's fine. For things like "let's try to do HEVC", I think there are some niche things where it will be hard to convince user agents to actually support them, and then having an RTPTransport is actually good. The issue is: what is the future here?
H
Is the future that we'd have RTPTransport and then it's JS all the way, like the WebTransport-and-WebCodecs approach, or are we still planning to evolve the RTCPeerConnection? One thing about PeerConnection is that it's all tied together; it works smoothly and consistently together, and it has some performance advantages as well.
H
So it would be interesting to see, with existing WebCodecs applications, whether it's easy for an application doing real-time media to actually get very good performance compared to PeerConnection and to be able to deploy smoothly. That would help.
D
H
Understanding whether RTPTransport would be the future or not, in terms of how to evolve this: I think an explainer would be good. It would also be nice to have something on the packetizer, a tutorial, and performance. I would think that WebTransport-based apps might want something like that, for games or whatever, if at some point WebTransport is good enough.
H
So in the short term I would be against developing something like that. I think we should check whether JS is not sufficient. In most cases JS should be sufficient: an audio jitter buffer using AudioWorklet, channel reversal, and so on. It's expensive, because you need to be very good at JS programming, but you should be able to get there. And we should concentrate on our own effort, which is RTPTransport for peer-to-peer; that's what we're trying to solve.
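The "audio in JS" pipeline mentioned here usually hands decoded samples to an AudioWorklet through a pre-allocated ring buffer (in practice over a SharedArrayBuffer with atomic indices). This single-threaded toy shows only the index arithmetic, not the cross-thread part; the class and method names are invented:

```javascript
// Fixed-capacity ring buffer for audio samples: the decoder side pushes,
// the AudioWorklet side pops. When full, excess samples are dropped rather
// than blocking, which is the usual real-time trade-off.
class SampleRing {
  constructor(capacity) {
    this.buf = new Float32Array(capacity);
    this.read = 0;   // next index to pop from
    this.write = 0;  // next index to push to
    this.size = 0;   // samples currently stored
  }
  push(samples) { // returns how many samples were accepted
    let n = 0;
    for (const s of samples) {
      if (this.size === this.buf.length) break; // full: drop the rest
      this.buf[this.write] = s;
      this.write = (this.write + 1) % this.buf.length;
      this.size++; n++;
    }
    return n;
  }
  pop(count) { // returns up to `count` samples, oldest first
    const out = [];
    while (out.length < count && this.size > 0) {
      out.push(this.buf[this.read]);
      this.read = (this.read + 1) % this.buf.length;
      this.size--;
    }
    return out;
  }
}
```

A real implementation would store `read`/`write` in an Int32Array over a SharedArrayBuffer and use `Atomics.load`/`Atomics.store` so the worklet's `process()` callback never takes a lock.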
H
E
So I very much want the PeerConnection to explode into separate objects, that's all agreed, but I'm a bit afraid of trying to do it by defining a new object at one end and then saying: okay, there's this large thing over at the other end called WebCodecs, and you just have to fill in the gap yourself.
E
E
We've defined a dividing point within the object model that divides the codec from the packetizer, and I'm not very happy with the way we ended up with the API for instantiating that. So I'm working on proposing improvements, but I think we need to focus. I think we'll get there faster if we define APIs like an RTCP sending API, or an RTP send and receive API, and say: here's...
E
J
Yeah, what I really like about this, well, it'll probably never be an official use case, but RTPTransport makes me picture these really nice SFUs, where all the congestion control, NACKs, RTCP, and header extensions are nicely packaged in RTPTransport, and you can easily make an SFU that interops perfectly with the client. And then, yeah, I also agree with everything
J
Jennifer is saying: I would rather try to let people do as much as possible without having to fully unbundle everything. Like, if you just want to be able to use WebCodecs,
J
you shouldn't have to do all this unbundling. And yeah, I'm also slightly worried about leaky API boundaries, like congestion window pushback, where transport congestion control messages can end up telling the encoder to drop frames coming in from the camera.
F
A
Yeah, I'd like to pick up on one last thing, which has been a little bit of a loose end. Both the insertable streams API (encoded transform) and media capture transform have used WHATWG streams, and quite a while ago Youenn raised some issues about WHATWG streams that I don't think have been completely put to bed. We have a bunch of demo code now, but one of the big questions with this API, I think, will be:
A
Should it be streams-based or not? I think the danger of not using streams is that there can be some issues. It's very possible this API will need to run in a worker, particularly along with WebCodecs, and if you don't do things right you'll have issues, because we don't have transferable MediaStreamTracks.
A
If we aren't able to leverage transferable streams, I think you can have issues functioning with workers. I don't want a situation, and we've seen this, where a lot of developers are transferring individual encoded chunks or individual video frames to workers; it's not working very well. So I think the streams discussion is an important thing to pick up and figure out.
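For context on the streams question: the insertable-streams shape is a ReadableStream of frames piped through a TransformStream, and in a browser those stream endpoints can be transferred whole to a worker with postMessage, instead of posting each frame individually. A minimal self-contained sketch using plain objects as stand-in frames (the WHATWG streams globals used here exist in browsers and in Node 18+):

```javascript
// Stand-in per-frame processing step (a real app would transform an
// encoded chunk or VideoFrame here).
function markProcessed(frame) {
  return { ...frame, processed: true };
}

// Build a ReadableStream from an array of frames, pipe it through a
// TransformStream applying markProcessed, and collect the results.
async function runPipeline(frames) {
  const source = new ReadableStream({
    start(controller) {
      for (const f of frames) controller.enqueue(f);
      controller.close();
    },
  });
  const transform = new TransformStream({
    transform(frame, controller) {
      controller.enqueue(markProcessed(frame));
    },
  });
  const out = [];
  const reader = source.pipeThrough(transform).getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    out.push(value);
  }
  return out;
}
```

In a worker-based design, `source` and the transform's writable side would be transferred once via `postMessage(..., [stream])`, so frames flow to the worker without per-frame messaging; that browser-only step is omitted here.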
D
F
D
H
The streams API issues are being resolved; it's underway in the streams spec, at least. So, okay.
H
Maybe not this one, but transferring ownership and all these things, they will be.
A
A
F
All right, well, I'm taking away as next steps to make an explainer and to explore ways to incrementally move in this direction, something like that. But I do appreciate all the feedback that I got, so thank you.
A
Okay, well, we actually got to the end of the working group, so I'm going to say thank you to everybody. I didn't have an animal at the end, which is usually our reward for getting done, but thank you, everybody, and we will meet again in June.