From YouTube: WEBRTC WG meeting 2023-01-17
A
So, the usual introduction slides. If you don't know what the patent policy is and the rules for substantive contributions, you'd better learn, and learn quickly.
A
Once you have this one up, you're supposed to have an icon somewhere that lists all the people who are in the queue.
A
So
repeat,
document
status
and
all
that
all
that,
let
me
see
we
have
too
cold
for
consensus
running.
B
I think that, at least for the first one, the low-latency use cases, there are some issues. So we probably need to resolve the issues, and then there will be consensus.
A
Well, I think — abstaining doesn't count against consensus, so I think that, based on the current call for consensus, we can merge at any rate.
B
I think it's already merged. It's just that there's a warning that there's no consensus. I guess I would prefer that we remove it once we have resolved the issues.
B
Well, I'll change this then, yeah.
A
Oh yeah, the usual rule for a call for consensus is that if we don't have objections, we definitely have consensus, and abstain doesn't count as an objection. So we'll look at the issue that was raised on face detection and see if that's easily solvable.
A
And by now it's been running those, but we'll have more, because getting those markers should actually mean something.
B
If the person is not here for the first one, I can probably say a few words, and then we can start from there.
B
So I don't remember the exact slide for issue 2795, but we decided to remove the url from the RTCPeerConnectionIceEvent object. Following from that, there's one thing that is currently broken in the webrtc-pc spec, which is that RTCPeerConnectionIceEventInit, the dictionary used to create the event, has a candidate field and a url field. So the url field is probably useless and we should probably remove it.
B
If algorithms or events are actually using it, then we need to update the algorithms — so that's the first thing. The second thing is that usually we are able to properly shim events, and here it's RTCIceCandidate: currently you cannot create it that way. When you call the constructor, you cannot create an ice candidate with a url value that is not undefined, and the question is: are we good with that?
B
Or do we want to allow applications to create an RTCIceCandidate with all values? Meaning that, for instance, they could properly shim RTCPeerConnectionIceEvents with candidates that have urls, or they could shim, say, a pair of candidates with one having a url and the other not having one, and so on. So that's the question there. So, the two questions: the first one is on the ice event init.
B
Hopefully everybody will agree that we should go ahead with that one. And the next one is on the RTCIceCandidate constructor: do we want to allow setting the url parameter of the ice candidate through the constructor?
A
Not being able to — having a constructor that is not able to create all values, that's a problem for testing, so...
A
I would like to see that RTCIceCandidateInit can take a url, so that you can generate those candidates.
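Harald's suggestion, if adopted, would make the init dictionary look roughly like the sketch below. This is illustrative only: the `url` member is exactly the proposal under discussion, not a shipped part of webrtc-pc, and the candidate string is a made-up placeholder.

```javascript
// Hypothetical RTCIceCandidateInit carrying the proposed `url` member.
// `url` is NOT in the shipped dictionary; it is the extension being debated.
const candidateInit = {
  candidate: "candidate:1 1 udp 1686052607 198.51.100.7 56500 typ srflx raddr 0.0.0.0 rport 0",
  sdpMid: "0",
  sdpMLineIndex: 0,
  url: "stun:stun.example.com:3478", // proposed: which server produced the candidate
};
// In a browser this would then allow a fully shimmable candidate:
// const cand = new RTCIceCandidate(candidateInit);
```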
F
Yeah, so I don't have a strong opinion on this. Just a comment that the lack of a constructor argument does not prevent JavaScript from adding or modifying attributes, as far as I know. Is that true on an interface? Wait — yeah.
B
You can probably add a getter and update it, but it's not the same code path, so you will have edge cases where things will not work properly.
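The "different code path" point can be sketched without any browser API: redefining the attribute on an instance shadows the prototype getter for JavaScript reads, but it would not change whatever internal value the browser's own algorithms consult. `FakeIceCandidate` is a stand-in class for illustration, not the real interface.

```javascript
// Stand-in for an interface with a readonly `url` attribute (a prototype getter).
class FakeIceCandidate {
  get url() { return null; } // readonly attribute's default value
}

const cand = new FakeIceCandidate();
// A shim can shadow the getter with an own data property...
Object.defineProperty(cand, "url", { value: "stun:stun.example.com:3478" });
// ...so JavaScript readers now see the shimmed value, but nothing
// browser-internal would go through this property — hence the edge cases.
```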
A
And the RTCIceCandidate interface specifies all its attributes as readonly, so no, you can't update the url.
B
Okay, it's a small issue, so maybe we — yes.
B
Quickly, then — I guess nobody is very strongly minded either way.
F
So would this mean that if an ice candidate is JSON-stringified, it would include the url? No?
F
Well, they're linked, I think, because if you create an ice candidate with a url and then you JSON-ify it, you would expect to get the same candidate init back, and also we—
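The round-trip expectation raised here can be sketched with plain objects: if a candidate is constructed with a url, serializing it should yield an init dictionary that reconstructs the same candidate. The `url` member is the proposed addition, and `candidateToJSON` is a hypothetical helper, not the spec's actual toJSON steps.

```javascript
// Hypothetical serialization that round-trips the proposed `url` member.
function candidateToJSON(c) {
  const json = {
    candidate: c.candidate,
    sdpMid: c.sdpMid,
    sdpMLineIndex: c.sdpMLineIndex,
    usernameFragment: c.usernameFragment,
  };
  if (c.url !== undefined) json.url = c.url; // include only when present
  return json;
}

const init = {
  candidate: "candidate:1 1 udp 1 198.51.100.7 56500 typ host",
  sdpMid: "0",
  sdpMLineIndex: 0,
  usernameFragment: "abcd",
  url: "stun:stun.example.com:3478",
};
const roundTripped = candidateToJSON(init); // carries `url` back out
```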
F
And also, RTCIceCandidateInit is the argument of addIceCandidate. We should also consider what it would mean for addIceCandidate to look at the url field, given the concern here, yeah.
F
Yeah, so we can actually skip this, because these were merged — the slides were from an earlier slide deck. People are welcome to look them over and see if they disagree with them, but the changes are small enough and there were no real objections on GitHub. So we went ahead and removed duplicate rids in the proposed sendEncodings.
F
And similarly, in createAnswer's sendEncodings, there's a PR to do the necessary work to defer pruning of the encodings to the answer. A previous PR had some mistakes in it that are fixed by this PR. Next slide — and some more updates here, and that's it.
H
I have a couple of issues here — why don't you go to the next slide and I'll introduce it? So the first one is issue 4043. We've discussed this before, at the July 2020 WebRTC Working Group meeting, and basically we had a resolution that Florent was to submit a PR, and we are now developing PR 139. So let me introduce this.
H
The basic use case is a situation where you want to do mixed-codec simulcast. That would be a situation, for example, where you want to use AV1, but you're only going to get decent performance at the low resolution — say you might have some hardware acceleration, but it can't go to the largest resolution — or, for example, on a mobile device, AV1 will just suck too much power and create too much heat at the higher resolutions.
H
So basically you'd want to send AV1 at low res and then something else at the higher res — VP8, VP9, H.264, whatever. And actually doing this with formal simulcast has the following advantages: you don't require multiple senders, so you're not creating an AV1 stream and then, for example, a separate VP9 stream; you're doing it in a unified way. That gives you unified resource allocation and some graceful degradation between the layers, which multiple senders wouldn't give you.
H
The other thing is that if you're doing it with simulcast, you're potentially compatible with addTransceiver — I'll describe why — and then you can also potentially switch the codec without offer/answer, as long as the codecs you use are within the ones that were selected by offer/answer. Okay, so that's a little bit of background. Basically, we've discussed this issue before, and now we're going to talk a little bit about the approach, which was previewed previously. Next slide — okay.
H
So, in looking at this, we looked at what ORTC did, which is that they basically put the payload type in the RTP encoding parameters. However, this doesn't work with WebRTC, because the payload types aren't negotiated yet. So you can't call this with addTransceiver, because you don't know the payload type yet; it's negotiated in the offer/answer.
H
The alternative, which we have discussed previously and which we're pursuing in PR 139, is to put the RTCRtpCodecCapability into the encoding parameters instead. So that's how we describe the codec: not through the payload type, which we don't know, but through the codec capability, which we do know. And so, a couple of constraints here. As we said, we don't know the payload type yet, so you can't set that—
H
—but also, the codec you select has to be one of the ones that's in capabilities, because if it's not, then it's not one of the potential codecs that you can have. Next slide. So this is an example of how it would work. For example, if you called addTransceiver with the following sendEncodings, what you would get is a quarter resolution — well, you put two codecs in here.
H
Actually, in this particular example, one is AV1 and one is VP8, and then at the full resolution you would have only the VP8. And I put in the dictionary form of the codec capability from webrtc-pc, because it shows you that the mimeType and the clockRate are required. You could also put in the channels and the sdpFmtpLine if that was needed to differentiate — which it would be, I guess, with H.264 — but that's basically what we're talking about. Next slide.
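The example Bernard describes might look roughly like this. Note the hedges: the `codec` member on each encoding is the extension proposed in PR 139, not a shipped part of RTCRtpEncodingParameters, and the rid values are made up.

```javascript
// Mixed-codec simulcast sketch: AV1 at quarter resolution, VP8 at full.
// The `codec` member on an encoding is the proposed PR 139 extension.
const sendEncodings = [
  {
    rid: "q",
    scaleResolutionDownBy: 4.0, // quarter resolution, where AV1 is affordable
    codec: { mimeType: "video/AV1", clockRate: 90000 },
  },
  {
    rid: "f",
    scaleResolutionDownBy: 1.0, // full resolution falls back to VP8
    codec: { mimeType: "video/VP8", clockRate: 90000 },
  },
];
// In a browser this would then be passed as:
// pc.addTransceiver(track, { direction: "sendonly", sendEncodings });
```

As noted above, for H.264 the codec dictionary would also need an `sdpFmtpLine` to disambiguate profiles.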
C
Right — thank you, Bernard. So issue 126, which Henrik created, primarily wants to address, you know, a problem, but the API proposed would also cover the use case for mixed-codec simulcast.
So the problem that we have is that we have some applications that want to be able to select which codec is used, but we don't want to do that by doing a renegotiation to put the codec we want to use first in the list.
C
This is a very clunky mechanism that requires many different calls — setCodecPreferences, then doing all the steps of the negotiation, and then finally updating the scalabilityMode, because if you change codec, you probably want a different scalability mode. So that is a very, very annoying mechanism, and we think it's really prone to a lot of issues, because we will probably have people trying to renegotiate while changing these different parameters.
C
So, in order to address that, we think that we could probably have a single call to setParameters that is able to change the codec that is used, and possibly the scalability mode, for changing the SVC mode that is used. And so I put up some examples of how it could be used.
C
We set a codec, and then we set some scalability mode. We call setParameters and, yes, we change everything, as you would think it would work.
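The single-call flow described here could be sketched as below. The `codec` member of an encoding is the proposed extension under discussion (not shipped API), and `selectCodec` is a hypothetical helper operating on a plain parameters object.

```javascript
// Hypothetical helper: pick a codec (and a matching scalability mode) on
// every encoding in one pass, so one setParameters call applies the change.
function selectCodec(params, mimeType, scalabilityMode) {
  for (const encoding of params.encodings) {
    encoding.codec = { mimeType, clockRate: 90000 }; // proposed member
    if (scalabilityMode !== undefined) {
      encoding.scalabilityMode = scalabilityMode; // keep mode consistent with codec
    }
  }
  return params;
}

// In a browser, the whole switch would then be a single call:
// await sender.setParameters(
//   selectCodec(sender.getParameters(), "video/VP8", "L1T2"));
```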
C
Those are simple examples of how it could be used, and you would be able to do that during a call and change which encoders are used on all the layers, very simply. Next slide, please. So there are a few things that we might want to define. What happens when you renegotiate? For example, if you select AV1 and you renegotiate — even following the other API, we wanted to avoid that, but it may still happen — what should happen if the codec that was selected is no longer available?
C
Should you have an error in setLocalDescription or setRemoteDescription? Should you remove the codec value and use the first negotiated codec? That might be the logical choice there. Also, should we have a single value for the codec, or should we have an array, as was presented a few years ago?
C
Whether we have a single value or multiple values, it doesn't really help with the problem of choosing the scalability mode that matches the codec that is used, which some people have raised as a problem in issue 126. I think that a single value is probably enough; maybe having an array would be preferred by users, where we'd pick the first value in the array that is available. And the question also that I want to ask is about renegotiation.
C
Is it a problem for people who want to use this API? Maybe they will just negotiate once against an SFU that is able to handle every codec, and then, if you need to switch to a different codec — because someone enters the call with a low-power device and they can only use VP8 — maybe you can just use that new API to switch to VP8, talk to your SFU, and the SFU will only forward VP8 to the members of the call that only support it.
C
And then you don't really have a problem, and you basically never need to renegotiate a transceiver. You might still want to do it to add new transceivers if other people join the call, but you don't necessarily need to do that to change your capabilities, basically. So those are some questions that I have, and I think we can probably talk about it for a few minutes if you want — and there are some people who have raised their hands; the first one would be Bernard.
H
Hi Florent. Yeah, so I have some questions about some weird cases that I have seen in the field. One of them is a situation where you have a hardware encoder, but somehow the resources get grabbed out from under you, and now you have a software encoder that's not giving you the performance you want.
H
It does. Are there situations, particularly for the addTransceiver case, where you think that having the array would make sense? In the case I just described, I guess you're kind of in a pickle no matter what, so I don't think it matters, right — whether you have one or more than one.
C
Right. So, for the problem with hardware encoders that have limited capacity: if you don't have a software fallback, setParameters should actually throw an error when you are not able to acquire the hardware resources.
C
That would cover those use cases. But otherwise, if you have a software fallback, it's really hard to have behavior that is always good, because if you run out of capacity on your hardware encoder, you might just use the software fallback — and we don't have a control that allows you to only use the hardware encoder and then trigger those errors that we mentioned, if that's what we wanted.
A
So, just stating opinions: I think the renegotiation problem is pretty easily solved if we just say that when you set the encoding, it must be valid, and when you negotiate — even for the first negotiation — you remove anything from the internal slot holding the capabilities that isn't in the negotiated codecs, and you do that for renegotiation too. Then we should say that, for ease of use, we should have an array, and we should use the first entry in the array.
C
Over to you, Henrik.
D
Yes. So I have slides — I'm not going to go into them now, but they somewhat overlap with this, and it's somewhat a separate issue, which we'll talk about later. But regardless of that, I do have a preference for having a single codec value in the setParameters call, because — so, my opinion is that there should either be a sensible default or we should disable the stream, because when you introduce scalability mode you get into these combinations, and there are different trade-offs.
D
You might have a different number of layers, for example. So I would keep this API surface as simple as possible.
C
I guess it could be possible, if the selected codec doesn't match, to use some of the mechanisms that you propose in the next slide: throw an error, and the application handles it and reconfigures the sender.
F
Yes — so, apologies if I haven't followed this too deeply, but I know that in the API so far we've tried very hard not to have setParameters and the negotiation methods control the same properties, whether it be active or scaleResolutionDownBy, because it creates inherent application races between setRemoteDescription and local JavaScript access methods like setParameters. So it's possible that what Harald said just solved that, but I'm not positive.
C
I think that the negotiation is just about which codecs are allowed in the transceiver and not necessarily which one is used, and so those are different but related problems. And this API would allow you to pick not the first one in the list, but any codec in the list, and that's a little bit different. So the usage is not the same, right, at the moment.
H
Yeah, all right, just one comment: I think there may be a difference in how things will work before and after offer/answer. After offer/answer, you have selected the potentially-used codecs, and I think at that point, in setParameters, the codecs in the list have to be within that envelope — so you check them against that, not against getCapabilities — regardless of whether there's just one of them or two of them.
H
I think after offer/answer you could probably have just one — although, to be clear, the one you choose doesn't affect createOffer or anything like that. So there's no overlap with setCodecPreferences; it's just choosing which one of the ones in the envelope is actually being sent. Now, the tricky thing is addTransceiver — remember, there's no offer/answer yet, potentially.
H
So in that situation — you know, particularly if you haven't called setCodecPreferences — if it's just one, then it can be any one of the codecs that's in getCapabilities. And again, it's a little bit weird, though, because if you have called setCodecPreferences, you could actually try to put something at the head that's not in your addTransceiver call, and there you'd have a bit of a contradiction. So I think there may be—
H
As Jan-Ivar said, it's possible there could be a little bit of conflict there, and I think we have to think this through in the PR, to be exactly clear about what happens in that situation. Do you have any thoughts, Florent?
C
I think, yes, we should check against the capabilities if we don't have any codec preferences, and write something that makes sense and that we do in other places. I do believe that one of the main problems — this is what I mean when I talk about renegotiation — is when you renegotiate and a codec goes away: what happens to all the encodings that were using that codec? Choosing a behavior there is what would matter, which is why I mentioned that maybe setRemoteDescription should throw an error.
C
Maybe we should offer more tools to inspect what is in the SDP, to know which codecs would be applied to a sender, and to allow the application to reconfigure its encoders before applying the description. Maybe there are different ways to approach this; I guess we can gather feedback in the issue as we write a spec proposal for this in more detail.
C
But yeah, I think we are running out of time for this, so we should probably move on to the next slide and issue. Thanks.
D
All right. So this is somewhat related and also a bit different: how to deal with encoder errors. When you call setParameters, if you're doing something that's not supported, we know what to do, because we can reject the promise and do nothing. But we don't know what to do if an encoder error happens later, asynchronously — either something like the hardware error that was previously mentioned, or — and I'll come back to this later — what about negotiation, and so on?
D
Anyway, if an error happens right now, the spec doesn't say what should happen, and one issue with this is that an app may decide on different codecs, scalability modes, and numbers of active layers depending on which codec is used. So maybe you want to use VP8 simulcast, or you want to use, you know, SVC for VP9 — and then you're in a case where not even the active flag matches.
D
So, arguably, because we have this combination of codec, scalability mode, and number of layers, I'd say there's no sensible default. I mean, you can have the current defaults, which always work, but you'll end up sending more layers than you might have intended. The second problem is that the browser doesn't know which preferences the app has. So why do we delegate this to the browser, if the app knows how to deal with it? Arguably, APIs already exist for taking action.
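The argument that the app, not the browser, knows the right fallback could look roughly like this in application code. The shape of the error notification (`failedRids`) and the `codec` member on an encoding are assumptions for illustration — no such event or member is specified.

```javascript
// Hypothetical app-side fallback: on an encoder-error notification, switch
// only the failed encodings to a codec the app prefers.
function applyFallback(params, failedRids, fallbackCodec) {
  for (const encoding of params.encodings) {
    if (failedRids.includes(encoding.rid)) {
      encoding.codec = fallbackCodec;    // replace only the failed layer's codec
      encoding.scalabilityMode = "L1T1"; // app picks a mode the fallback supports
    }
  }
  return params;
}

// In a browser, an app might wire this to some future error event, e.g.:
// sender.onencodererror = (e) => sender.setParameters(
//   applyFallback(sender.getParameters(), e.failedRids,
//                 { mimeType: "video/VP8", clockRate: 90000 }));
```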
H
Yeah, so my question is whether it's always active-equals-false on all layers. Say, for the mixed-codec simulcast, it's the AV1 codec that lost its resources — could you just keep sending H.264? Do you really have to stop that one too, even though it wasn't the source of the error?
D
Yeah — so it was brought to my attention earlier today, actually, that you might want to know which layer the error happened on, right.
F
Oh yeah, so, two comments. One you addressed, I think: when you said "callback", I hope this means an event.
F
Yeah. And the other one was on your previous slide: you said that the downside here was that the browser might not know what to do in that situation and might switch to an inferior encoding — did I get that wrong? Because of this, is setting active to false by default necessary? Why not just fire the error handler, fall back, and let the JavaScript intervene if we can?
D
Well, if the current encoder doesn't work, you can't keep using it, right? You'd either have to stop encoding altogether or you'd need to switch to some default, and I was arguing that there's no sensible default if we allow combinations of codec, scalability mode, and active. But there might be a default that does make sense. My concern, if we have some default, is that we send extra keyframes or do something unexpected.
D
I'm not sure if it's a big or a small problem, though. So, Youenn, do you want to go next?
B
Sure, yeah. So I'm wondering what the definition of "error" will be. Like, one user agent might think, "hey, this is an error," and another user agent will say, "oh, it's a transient error, I will be able to fix it," and so will not send the error — like, for instance, if a process that hosts the encoder or decoder is crashing.
B
In that instance, you might think it's an error, but in fact the process will be relaunched, the encoder will be re-instantiated, and you will be able to recover very nicely. So I'm wondering whether it's encode error / decode error, or whether it's more like, "hey, something changed" — like, the setup, the parameters that we are actually using, have changed; web application, are you fine with these new parameters? — and so on.
B
And if it's not that, then yeah, the question would be how we can define errors properly in an interoperable way, and that might be difficult.
D
That's a very good point; I think that's true. For some errors you might want to just notify the app, but it's not necessary for the app to act — for example, you fall back from hardware to software, or something happens that you can recover from — and other errors might be different. So I think it would be good to have consensus in general: should this only be encode/decode errors, or should it also include the fallback mechanism, for codecs being removed from negotiation?
A
We shouldn't stop anything unless the error forces us to stop. So active-false on all layers — that should only happen if you're actually using the crashed encoder on all layers, for instance. And if the encoder is capable of going on, perhaps at a reduced rate, well, we shouldn't stop it from doing that.
H
Just one last point: I agree with both Youenn and Harald here, but I just want to make clear that I don't think this is for recovery, right? Like, if something was lost on the receiver, you're not getting into the stream of PLIs or NACKs or anything like that, right? This is just for real errors, not for—
D
Yeah, real errors — but, you know, the point is still valid that there might be fallbacks, right? Like hardware-to-software, stuff like that.
H
Even if the error is recoverable, I would assume that you wouldn't have this event; as Youenn said, the browser would just do what it does to bring it back, because—
H
Right — well, how about this: I do like the "change" framing. Like, you failed over to software, but now you look at stats and you discover that its performance is really terrible now, and not really acceptable — or on a mobile device you're draining the battery, so—
A
You were told that it was happening. We are over time, so please finish the argument, and we'll drain the queue and then move on.
I
Tim here — so, yeah, I like this, but I think the name's wrong. I don't think it's an error; I think it's a codec availability change.
A
But okay, moving on. We have — oh, this is what people asked me to present, to mention: Philip and I got into an argument on how setting the RTP header extensions works.
A
Yeah, so the TL;DR is basically: I originally specified language that says, "okay, here's the list of extensions I want to modify; go modify them, and everything else will be unchanged." Philip suggested a much cleaner approach, which is more compatible with the way we deal with parameters: you get the list of all the parameters, change the ones you want to change, and then use that parameter list in the setter for the header extensions to be offered.
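The read-modify-write pattern being argued for can be sketched as a pure function over the extension list. The getter/setter names on the transceiver are deliberately left as illustrative comments, since the exact API surface was the subject of the argument.

```javascript
// Read-modify-write over header extensions: change only the target entry and
// return the full list, so everything else is passed back unchanged.
function setExtensionDirection(extensions, uri, direction) {
  return extensions.map((ext) =>
    ext.uri === uri ? { ...ext, direction } : ext
  );
}

// In a browser, roughly (method names are assumptions, not shipped API):
// const offered = transceiver.getHeaderExtensionsToOffer();
// transceiver.setOfferedRtpHeaderExtensions(
//   setExtensionDirection(offered, "urn:ietf:params:rtp-hdrext:sdes:mid", "stopped"));
```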
F
Yes, I basically agree with Philip here, and it's more web compatible, because it forces JavaScript to deal with the pool of what the browser supports. Otherwise, they might pass in hard-coded values that might not be good fallbacks. I also opened an issue.
A
Any last words on the subject?
A
Okay, I still have the slide, so I'll just keep clicking. You've seen some of this before; some of it is new: going beyond the "bump in the stack" and into what we call media manipulation.
A
People might argue about which ones are more important than others, but they're awesome, and I have a PR explaining how I think this should work on encoded media, which has had some comments, especially from Youenn. Anyway—
A
I did an API design for frame handling, which is basically: create a frame from metadata and data, and modify a frame — because sometimes you just need to modify the metadata; you don't need to touch the data, and you especially don't need to copy it — following the same pattern as we discussed: get the metadata, modify it, and submit it.
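The "get metadata, modify, submit" pattern can be sketched with plain objects. None of these names are shipped API: a real proposal would operate on encoded-frame objects, and `retimeFrame` is a hypothetical helper showing the no-copy metadata edit.

```javascript
// Hypothetical metadata-only edit: build a new metadata object and reuse the
// frame's payload untouched, so the (potentially large) data is never copied.
function retimeFrame(frame, deltaMicros) {
  const metadata = {
    ...frame.metadata,
    timestamp: frame.metadata.timestamp + deltaMicros,
  };
  return { data: frame.data, metadata }; // same data reference, new metadata
}

const frame = { data: new Uint8Array([1, 2, 3]), metadata: { timestamp: 1000 } };
const shifted = retimeFrame(frame, 500); // payload shared, only timestamp changes
```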
A
So I think our APIs so far have been reasonably successful in using streams for handling frames, but reconfiguration requests — like "change bitrate" or "generate a keyframe" — are more event-like: you don't expect them on a regular cadence, you don't expect a stream of them; you expect something to happen. So let's use an event-like mechanism for them.
A
Then have a function that says: "okay, you take frames from this stream; I'll give it to you." My fiddling around at the hackathon suggested that we didn't need anything more than this, really.
A
Comparing this with other models that we have tried: in the model that Chrome uses for stream creation, the source is easy to emulate, but it doesn't expose the sink — I could probably adapt that one too — and signals, okay. In the current model, the handling is done implicitly, but relaying a signal is easy. And then there's the Apple model, which is what is currently in the spec.
A
So
working
in
this
group
decisions
we
need.
We
need
to
agree
that
these
use
cases
is
within
scope.
That
means
more
CFCs
and
agree
to
accept
proposals
for
apis
that
support
these
use
cases.
Requirements
such
as
the
one
I
just
sketched,
but
not
that
at
the
point.
Yet
where
we
can
say
and
oh
this
interface
works.
This
interface
is
the
wrong
one.
I
think
we
got
this
got
to
see.
Okay,
what's
the
what's
the
proposal
we
need.
H
I have two questions. One is about whether there's an assumption that the packetizer is one of the ones that's already in the browser, or whether you could do packetization yourself. The use case would be, say, HEVC, which is supported in WebCodecs if you have the hardware, but is not supported in WebRTC. So could you make that work, you know, by getting frames out of WebCodecs — encoding or decoding them with WebCodecs — and then doing it? That's question one, anyway.
H
With respect to the use case — I mean, it's not discussed in the use case. So anyway, that's a question about the use case: whether it should, you know, define what the required packetization functionality is. Yeah.
H
The second question is about workers. If you're doing something like WebCodecs, you're often doing the encode/decode in a worker, so I'm just wondering about the required functionality for interacting with workers: does this implicitly require being able to have an RTP sender or receiver in a worker?
A
I am very unsure about that, actually — okay — because the model of the API I suggested gives you a stream, and this stream can be transferred, and the stream then has, after that, no interaction with the main thread.
G
Can you go back a slide? Sure — okay. So you're asking if we think these use cases are in scope, and if we are willing to accept proposals, right? Yes? Okay, so my answer would be yes on both of those, for myself, but I'd like to know — similar to Bernard's question — whether that scope, and the proposals, can include ones that do allow for custom packetization. In particular, I'm interested in applications being able to do custom FEC, which I believe does require some kind of packet-level control.
A
Good question — and I think that none of the use cases I have on my list require custom packetization, but I think there are use cases for custom packetization; we would need to add them. And if we manage to separate out the packetizer as a component with a defined upstream interface, then, in quote marks, "all" that is needed is to define a downstream interface for it.
A
But
I've
always
I've
been
hesitant
in
the
past
to
go
into
exposing
packet
level
to
JavaScript
and
then
still
somewhat
nervous
and
somewhat
yeah
hesitant
to
go
to
go
down
that
route.
If
all
our
use,
cases
that
we
currently
have
describes
are
satisfiable
on
the
on
a
higher
level
on
the
on
the
frame
level.
H
Just a point, Harald: we're now at time, and I want to make sure that we know what the next steps are on this issue. Are we to run a CfC on the use cases? What exactly do you want to have happen?
F
Nothing prevents implementations, as far as I know, from skipping the encode and decode in those cases and optimizing them. But you did mention the ability to modify frame metadata and other controls, which becomes more important, I think, and highlights the need for an API there. But other than that — well, that's my point.
F
Yeah — with streams, I would say this: even though one might argue that the Streams API wasn't necessarily the best API for our use of it, encoded transform nonetheless included streams. At the end of the day, JavaScript is going to plug in a TransformStream, which is basically going to be callbacks called by the browser on JavaScript.
B
Yeah, I think Bernard and Peter's question about packet versus frame is a question I have asked also, several times, and I think we should concentrate the debate there. If we see that we actually want a packet-level API, we should try to figure out the security issues and the model, and it will probably be very different than if we stay at the frame level. So that's why I do not want to spend a lot of time digging into a frame API.
A
So my current take is that I haven't yet seen a use case written up that warrants a packet-level API, so I'm very happy to see the use case, discuss the use case, and then decide upon the use case.
A
But at the moment, what I want to do is possible to do at the frame level, which is for me a reason for pursuing the frame-level API.
A
So let's see the use cases — we have some mentioned — and, Peter, that's a headline, not the write-up; please supply the body.
J
Over the last few years, we've discussed many topics related to screen capture in this working group. It's been very interesting and we've made significant progress, but one thing that we've been kind of missing is significant participation from web developers, and I think that one of several reasons here is that screen capture is distinct from WebRTC.
J
One
of
those
is
to
transmit
it
remotely
and
you
could
even
transmit
it
remotely
using
web
ODC,
but
maybe
using
some
other
means,
or
maybe
you
do
something
else
with
the
stream
such
as
just
saving
it
to
disk,
or
maybe
you
do
something
interesting
before
or
after
transmitting
it
and
all
of
those
are
topics
which
are
not
truly
connected
to
webrtc
and
the
developers
that
are
interested
in
that
are
not
necessarily
also
interested
in
webrtc,
and
vice
versa
and
I
think
that,
in
order
to
increase
developer
participation
in
matters
related
to
screen
capture,
it
would
be
good
if
we
have
a
screen
capture.
J
So please join us. Any questions?
J
So, in quick succession, I would like to discuss the fact that we've got one spec called Capture Handle Identity, which is somewhat tied to a Capture Handle Actions spec, and I think that, at least for Capture Handle Identity, this group should probably hand it over to the screen capture community group, and I would like to lay out my case here. So just a quick reminder for everybody.
J
This API has been implemented in Chromium and shipped, and it's been gainfully employed by all sorts of web applications; for example, Google Meet and Google Slides use it, and we've got plans of extending it even elsewhere in Google.
J
So I've proposed multiple extensions to this spec, but we seem to be misaligned on the vision here, "we" being me and, I believe, Jan-Ivar: you've got a different vision, and therefore you've got a different spec called Capture Handle Actions, and also we seem to be at somewhat of a disagreement on several of the extensions I have proposed.
J
For example, exposing crop targets on the capture handle. So I think that at this point it would be good if each party took its own spec and ran with it, extended it until the point where it becomes a fully fleshed proposal, and then we can propose these different specs to different groups, and a group can make a decision of either adopting one of those (next slide, please) or maybe even synthesizing both of them together. And I think that we will enjoy better iteration speed.
J
If we do that; if we first split them and then come back later. So that's my proposal. Next slide, and I give up the mic to everybody who wants to grab it.
F
Thank you. So, on the earlier slide, when you say that screen capture is not part of WebRTC: I think traditionally the WebRTC Working Group has been in charge of every API that produces or consumes a MediaStreamTrack, other than the HTML video capture spec, so I think there's good history there. We have tried to give away media capture to another W3C working group, but no one has taken it, and I don't think it would be progress to do it.
F
When you said the Actions spec was my spec: the Actions spec is not instead of the Identity spec, it's a supplement that addresses a subset of use cases. So I think historically, the way we, what's the term, when we consider... what's the slide here? Can you go back a slide, please?
F
Could I go back another slide? One more.
F
Thank you. Un-adopting, I'm so sorry. I think traditionally, when we're talking about un-adopting a spec, it's because members either have no interest in it or they do not want to implement it at all, and I think, for Mozilla at least, we are definitely interested in this, which we showed when we adopted this back a year ago. So we have no change in opinion on this; we think this is a valuable API that should be developed, so we would be opposed to un-adopting it.
H
Yeah, so procedurally, I just wanted to clarify what will happen here. Are you proposing a CfC on just un-adopting Capture Handle? Is that the scope of the CfC?
J
So, procedurally, it's up to the group to decide how this would best be done. But what I would eventually want to happen is for a version of Capture Handle Identity to be incubated by me and whoever else wants to participate, and eventually proposed, and I think that the place to incubate it now would be the SCCG.
J
So, the Screen Capture Community Group. And whether that means that this is un-adopted by the current group and given to the SCCG, or that the SCCG creates a copy of the document: I think any version of this works for me, but we should probably choose the one that works best for everyone.
H
Okay, so a CfC in the WebRTC Working Group about the un-adoption. But just to be clear, this does not include the un-adoption of any other screen capture proposal, right? So that still remains in the WebRTC Working Group. Just trying to understand the boundaries between the community group and this one. Yes.
B
Thank you. I don't think that anybody needs the WebRTC Working Group's approval to fork the spec, update it, do whatever they want with it, and the community group, I think, is fully able to do that without us doing anything, I think.
F
There was a characterization of disagreement earlier. I would just like to add that, in my view, all the disagreements are minor and usually over API differences, and there's general agreement on all the main use cases and desired behaviors. Thanks.
E
Yeah, it is factually correct that the community group could decide to fork the spec and do whatever it wants with it. I do think that this creates uncertainty and fragmentation in terms of what is being implemented and discussed, and it makes building a consensus view harder. My personal inclination would be to bite the bullet and indeed figure out how to solve these disagreements, which, I kind of agree with Jan-Ivar, are I think on the API shape, not on the purpose or overall direction.
E
But if that's no longer realistic, then I think I would prefer a cleaner break over an ambiguous situation where we would have a spec in the working group that nobody is really investing in, and a spec in a CG where one implementer is putting all of its effort. That feels like a disservice to the community.
J
I'm going to invite myself to speak next; putting the other hand down, thank you very much. So I think that the disagreements about the shape are a bit bigger than you have characterized, Jan-Ivar. I think that it is a bit difficult to make forward progress right now, and I think that, given that the major contributor to one of the specs is interested in forking it and continuing it elsewhere, I don't really see who would benefit from it staying.
J
Okay, so will the chairs take the AI to do that?
H
That's my assumption, unless there's... yeah. Going forward, for the minutes: we'll start the CfC on that.
J
Awesome, thank you very much. I think I'm also presenting next, so a couple of slides forward, please. Thank you very much. So, next topic: auto-pause of capture, or "stop in the name of love, before you break my heart". Next slide, please.
J
So, a quick reminder: when you capture a surface, that surface can change at any moment. There are two main ways this can happen.
J
Maybe three. First, either the user or the captured application could navigate the top-level document, in which case, suddenly, instead of capturing a.com you're capturing b.com. Maybe the user intended that, maybe they made a mistake, maybe the application actually caused that. Another option is that the user could choose to start capturing something else. With Chrome that would be using "Share this tab instead", a button that you can see on the bottom left, and with Safari that could be by changing between shared windows and screen, which is a recently released functionality, and very well done there. Next slide.
J
So this can be problematic, because applications might want to tailor certain actions to what they're capturing. For example, maybe if you go from a.com to b.com, you want to prompt the user to make sure that they really wanted to do that. Maybe if you're capturing different things, you want to crop to different sub-targets of those surfaces. Maybe you want to apply different constraints, like a different resolution or frame rate if you're capturing a window versus a screen. Maybe you want to change the encoding parameters.
J
Maybe you want to save that to different files, so each time you capture something else, it goes in a different file. You could come up with other examples, and when that happens, you kind of need two things. You need an event to happen, letting you know that, hey, something changed, and you also want to kind of have the time to change whatever you want to change before more frames are produced and potentially immediately put out on the wire. Next slide, please, Bernard, if you can do that. I think, Harald... okay, all right, this here.
J
Thank you. So here's one proposal; initially, if we could focus on the bigger picture, I know that maybe there are a lot of edge cases here. Youenn, you've raised your hand. So do you want to go now, or do you want to go first later?
J
Sure, but just... spoiler alert: I'm going to talk about your recent ondevicechange, or something like that; not devicechange, onconfigurationchanged. So I'm going to touch on that too. So assume, for the sake of argument, that we had one more concept... right, pause. I assume this is still for later. Previous slide, please. Previous slide, please.
J
I'm sorry, could we go down to 49? I think that we're on 51 right now. Yeah, thank you. And 50, please, sorry.
J
So which do you want? It looks like 150... I'm sorry, it just changed while I was not looking. Thank you very much. So, currently there are two concepts. There is muted, which means that the track is not producing frames, but this is outside of the control of the application: it's not something that the application initiated, and it can also not stop it. So, for example, maybe the user has minimized a window, and you are not producing frames for minimized windows.
J
Another one is that maybe a surface switch occurred, so the user chose to capture a different window, and, spoiler alert, maybe a config change. We take this event and we expose an event handler for it on the track, and if somebody sets that event handler, then, whenever something like this happens, the track is going to get auto-paused, an event is going to be fired, and then the application could, if it wants to, unpause from that event handler or from a later point. Next slide, please.
J
Thank you. So here is a generic example where, basically, you say: hey track, here's my event handler. And whenever you get auto-paused, you check: hey, do I need to adjust something? If not, just unpause the track; or if I need to adjust something, I can adjust it. Maybe I prompt the user, maybe I apply a different crop target, maybe I, you know, change the encoding, etc., and then I unpause, and that's it. You plug in your own implementation of adjustmentNecessary and adjust, which are just for illustration purposes here, and you get your own behavior that handles this. Next slide, please. So, I would like to get back to this later, but obviously we want to somehow get the default behavior that, if you don't set an event handler, you just get automatically unpaused, and that means that everything is backwards-compatible: existing applications don't need to change anything.
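The flow on the slide can be sketched outside a browser as follows; `MockCaptureTrack`, `adjustmentNecessary` and `adjust` are illustrative stand-ins (the latter two are named on the slide as placeholders), not a shipped API.

```javascript
// Hypothetical shape: if an onautopause handler is set, the track auto-pauses
// on a surface switch and the handler decides when to unpause. Without a
// handler, nothing pauses (the backwards-compatible legacy behavior).
class MockCaptureTrack {
  constructor() { this.paused = false; this.onautopause = null; }
  surfaceSwitched(surface) {
    if (!this.onautopause) return;     // no handler: never auto-pause
    this.paused = true;                // pause before more frames flow
    this.onautopause({ target: this, surface });
  }
  unpause() { this.paused = false; }
}

// App-supplied logic, just for illustration purposes:
function adjustmentNecessary(surface) { return surface.type === 'window'; }
function adjust(track, surface) { /* e.g. re-apply constraints, crop, encoding */ }

const track = new MockCaptureTrack();
track.onautopause = (event) => {
  if (adjustmentNecessary(event.surface)) adjust(event.target, event.surface);
  event.target.unpause();              // resume frame delivery
};
track.surfaceSwitched({ type: 'window' });
```

The design point is that the pause happens before the handler runs, so the app gets a guaranteed window in which no new frames hit the wire.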
J
We can get back to that later if relevant. Next slide, please. So...
J
The last thing I want to say is that there is a similar use case in the recently introduced onconfigurationchange, where basically you get an event whenever something changes. And I would like to ask: maybe what you want to do on configuration changes is to get the event, but also to pause the track until you decide if you actually want to continue receiving frames given the new configuration. Next slide, please. And I think that Youenn goes first.
B
Yeah, so I think the use case makes sense, and it seems good to solve the issue. I don't think the API shape is at the right level; it's probably at the level of the source, because you don't want one track to continue working while another one is not working. So it's probably on the capture controller that you actually want to have this kind of "hey...
B
...something has changed, do something". And then it could be async, like respondWith in a fetch event or, as Harald mentioned, for the encoder error thing, and we could reuse that pattern, so that there will be only one event on the capture controller instead of as many events as there are cloned tracks, for instance. But...
B
Let's try to dive into the API shape, if everybody agrees with solving the issue.
J
Okay, next, thank you very much. So, Jan-Ivar, you kind of left the queue and re-entered it. Who goes next, Youenn? What's the gentlemen's agreement?
I
Right, yeah. So does this cover audio as well? Your list of potential reasons for pausing seems fairly video-oriented; would it cover audio as well?
J
That's an interesting question. I think that it could also be a distinguishing factor for Youenn's suggestion of putting it on the capture controller instead. By the way, I've also been talking with Ben recently, and maybe a source object is actually also appropriate. But in either case, I think it's possible that you would want to pause one but not the other. I think that it's much easier to kind of have a glitchless operation with video, where the user does not readily notice when one frame is a bit late, than with audio. But I'm open to discussing this more.
I
Yeah, I mean, I'm thinking of the case where, you know, you get the incoming GSM call and you suddenly lose your input source.
I
You know, you lose your microphone source, and if that came up as a paused event, and you could then substitute kind of music-on-hold or something for what you were sending, then that might be a use case that would be worth adding to this, if we don't have another way of solving that problem.
J
For this particular example, it could be that I've not thought about it enough yet, but it sounds like the muted event is sufficient, because you could get a muted event and then just unplug your track from wherever it's going right now, and then, when you get the unmute event, if you decide that you need to make, you know, a renewed decision, you can. But I could be mistaken here. I don't know if this is critical to discuss right now, given the queue. I mean, yeah.
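The mute/unmute approach J describes can be sketched with a plain EventTarget (available in browsers and Node 15+); `mixer` and the source names here are stand-ins for whatever sink the audio track actually feeds, not real APIs.

```javascript
// Stand-in sink that records which source was connected, in order.
const mixer = { history: [], connect(src) { this.history.push(src); } };

// On `mute`, unplug the live source and substitute a fallback (music on
// hold); on `unmute`, reconnect the live source and let the app re-decide.
function wireFallback(track, liveSource, fallbackSource) {
  track.addEventListener('mute', () => mixer.connect(fallbackSource));
  track.addEventListener('unmute', () => mixer.connect(liveSource));
}

const track = new EventTarget();
wireFallback(track, 'microphone', 'hold-music');
track.dispatchEvent(new Event('mute'));    // e.g. GSM call takes the mic
track.dispatchEvent(new Event('unmute'));  // mic comes back
```

This covers the music-on-hold use case with the existing mute/unmute events, without needing a new pause event for audio.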
A
Which is my worry every time we talk about source. We might want to define that the capture controller is a source, just like the VideoTrackGenerator, maybe; but definitely worth solving. I'd like to mention again that we have this thing in the Event interface called preventDefault, so that you can say, when the event handler returns, whether the default should happen.
F
You answered it, oh yeah. So, yeah, I initially took down my hand because I thought Youenn covered it. I agree with you that this probably belongs on the capture controller, which I think, for a single capture, is indistinguishable from a source; not for multiple captures, of course, but that seems fine. I'm a bit worried, again, similarly to what we had earlier with active false, that it sounds like the browser... it seems to me we shouldn't terminate output, because that puts up a new roadblock for JavaScript that they have to, you know...
F
I would love to have a different shape where the default isn't to terminate output; I don't see a reason for that. But also the name: MediaStreamTracks are also used for many things other than screen capture. I think that would be a reason, and if that's moved to the screen capture controller, I think that resolves that issue. But if it's not, then I have other issues, like the mediacapture-extensions issue 39; we've talked about fixing the double-mute problem and other things, so I worry that having muted, unmuted and paused can be confusing.
J
The audio was not fully clear on my end, so I might have missed a bit of this. So, with respect to putting this on the capture controller: it makes some initial sense.
J
One thing to worry about is that, whereas the track itself is transferable, and could even be transferred to a different tab, the capture controller is not transferable, and that is by design. So that kind of worries me; it's going to be kind of difficult, you know, to use the track after it's moved. But maybe it remains to be seen if that's a major concern. With respect to default action...
J
I could have misunderstood you here, but I think a core component of the proposal is that, if an event handler is not set, things do not get auto-paused. So the legacy behavior, that if the user changes something the application keeps on getting frames, is not going to change with this proposal, and also not with any other proposal I intend to bring up.
J
So, the third bullet point here says that there is guidance for event handlers to not have side effects, and one way to get around that is to use some kind of function like setPauseHandler, and I think there is a precedent in the form of setActionHandler. I would have to check that again, but there is some precedent for doing that. Also, Harald has mentioned something that I did not fully understand, but I intend to eventually research preventDefault.
A
preventDefault is used in a couple of other APIs; it came as a surprise to me to learn about it. You specify that there is a default action that is carried out if the event does not prevent it, and then you have a function, a function that is literally called preventDefault, that you fire at the event to say, carry your message back to your master: don't do the default thing. And this would seem to suit the use case perfectly.
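The pattern A describes is standard DOM event machinery, shown here with a plain EventTarget; only the `autopause` event name is hypothetical, while `cancelable`, `preventDefault()` and `defaultPrevented` are the real Event mechanics.

```javascript
// The spec would define a default action (e.g. auto-unpause) that runs unless
// a handler calls event.preventDefault(). For a cancelable event,
// dispatchEvent() returns false when the default was prevented.
const track = new EventTarget();
track.addEventListener('autopause', (event) => {
  event.preventDefault(); // "carry the message back": skip the default action
});

const event = new Event('autopause', { cancelable: true });
const runDefault = track.dispatchEvent(event);
// runDefault === false and event.defaultPrevented === true here
```

The browser (the dispatcher) would then only perform the default auto-unpause when `dispatchEvent` returns true, which addresses the backwards-compatibility goal without a separate setter function.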
J
So I guess the next steps here would be to analyze whether this could be done on the capture controller, or on the track, or, alternatively, on a source object, and maybe present those in the next meeting. Does that sound reasonable?
J
I think that we're behind schedule anyway. So, Henrik, sorry for taking some of your time. No...
D
No problem, and thank you. Next slide, please. So, this is an old issue; we discussed it a few months ago, and I'm bringing it up again because the PR has been going a bit back and forth, flip-flopping, and I think it would be easier to talk about it; getting agreement on how it should be written is what I want to achieve by bringing it up again. So, the whole problem, just for context, is: you know, if your frame rate is low, it's hard to know why it's low.
D
Is it the camera not producing frames, or is the user agent dropping frames? And if they're dropped, are they dropped, you know, for performance reasons, or because they're decimated in order to achieve the desired frame rate of the settings? I think the reason the PR has been slow, other than me being slow, is that there's a balancing act between needing to be specific enough that we all agree on what we measure, but also being vague enough to allow different implementations. And, as Youenn mentioned a minute ago...
D
The source is not really exposed, so we're kind of trying to measure something that's not really JS-observable, like the counter of frames passing through, basically. So my proposal, which is also one of the comments, is to phrase it as requirements rather than specific steps; video playback quality is an example of this. And I think that the way to achieve a good balancing act here is to require that each frame is categorized into one of the distinct categories, because then you can say that, you know, getFrameStats...
D
Here are the categories I came up with. I also updated the PR to say that this is only supported on tracks that support frameRate as a setting. Each frame would be characterized as either being considered deliverable or delivered, if it, you know, either was delivered to a sink or would have been delivered to a sink if one was connected; or decimated, if it was discarded in order to achieve the frame rate the settings target; and otherwise, if it was dropped for any other reason, the main example being the system under heavy load, it's categorized as dropped. So getFrameStats would just return this dictionary, framesDelivered, framesDecimated, framesDropped, based on the number of frames that have been categorized into each category. Does that make sense to people?
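The proposed categorization can be sketched as simple counters; the names mirror the slides (framesDelivered, framesDecimated, framesDropped), but everything here is an illustrative mock of the proposal, not a shipped API.

```javascript
// Every frame from the source is counted in exactly one category.
const counters = { framesDelivered: 0, framesDecimated: 0, framesDropped: 0 };

function categorizeFrame(outcome) {
  if (outcome === 'delivered') counters.framesDelivered++;       // reached (or would reach) a sink
  else if (outcome === 'decimated') counters.framesDecimated++;  // discarded to hit frameRate setting
  else counters.framesDropped++;                                 // any other reason, e.g. heavy load
}

// getFrameStats() would just snapshot the counters; the total is their sum.
function getFrameStats() {
  const { framesDelivered, framesDecimated, framesDropped } = counters;
  return { framesDelivered, framesDecimated, framesDropped,
           totalFrames: framesDelivered + framesDecimated + framesDropped };
}

['delivered', 'delivered', 'decimated', 'dropped'].forEach(categorizeFrame);
const stats = getFrameStats();
```

Note the `totalFrames` field anticipates B's later suggestion that the sum of the categories should equal the frames the camera generated.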
I
Yeah, I have two questions. Well, I'm uncomfortable with "decimated"; it has a kind of fairly specific meaning, which I don't think you mean here, like it's a factor of 10. So that's probably not what you had in mind. And my other question is: what are we expecting the developer to do with this? I mean, I like the idea of describing it at a higher level, rather than digging down too deep into the mechanics, but, even further back, what are we thinking?
D
You can measure deltas between the camera settings and what you're actually achieving. If you want to experiment, like with performance experiments, I think you want to be able to know if you're dropping frames or not; necessarily, you might want to reconfigure the camera.
D
It's also useful for debugging, basically: when there are bugs where your frames are being dropped, I think it would be good to know if this is, you know, a camera issue or some other issue. So, for example, if it's a drop issue, you might want to reconfigure.
B
Yeah, I was wondering: I understand framesDelivered and framesDecimated. I was wondering whether the total number of frames that have been generated by the camera is the sum of all of them. Yes? And if so, maybe we should have framesDelivered, framesDecimated and framesGenerated instead of framesDropped.
F
I'm sorry. So, in a low-light condition, its setting is 30 frames per second and the camera is only producing, say, 15 frames?
D
So, are you... I get the sense that there's less pushback on the approach of using these categories, but the input is that perhaps I want to rename "decimated" to something else, and we should replace framesDropped with totalFrames. But the overall approach, does that make sense to people? I get a thumbs up from Harald.
H
Yes, so we moved to the sum-up slide. Okay, so I think we have...
J
Could I request an action item for those interested to chime in on issue, I think, 255, regarding auto-pause?
J
It would be nice, before I give my next iteration of a proposal, to get maybe a more concrete proposal of what it would look like if it were exposed on the capture controller instead.
J
What about... so, the question is whether we want to separate audio and video in that case, because I think that auto-pause in audio could be a bit more problematic. Then again, if you are changing surfaces, then maybe a glitch would not be noticed, because these are different audio sources anyway. It's an open question here; I guess we start with something that pauses video and audio at the same time.
J
Okay, so in that case, I guess the action item can be on me, to flesh out two different proposals and to bring them up to the group to decide between, or maybe just one, if I am convinced of the merits of the capture controller approach, of course.
H
Why don't you go to the next slide? Because I think it's the first time in a while that we've actually gotten through. And the one after that, yeah. So we have a butterfly; that's what we get for actually finishing things on time. Anybody have any idea what kind of butterfly this is? I will just tell you, it's from South America. It is a... yes, a blue morpho. Mike English, you are correct.