From YouTube: WEBRTC WG interim 2023-02-21
A: Links to slides are up on the wiki as well. Do we have a scribe?

A: A few things. We've recently changed: we're now using the hand-raising tool to get into and out of the speaker queue. So if you want to speak, please raise your hand rather than just speaking, and then you'll be recognized and you can talk.
A
But
if
you
jump
the
queue
we'll
mute,
you
please
use
your
headphones
or
network
canceling
speakerphone
when
speaking
and
state
your
full
name,
and
then
we
may
use
the
Paul
mechanism,
not
sure
we're
going
to
do
that
today,
but
we
could
to
get
a
sense
of
the
room
a
little
bit
about
document
status,
just
because
something's
in
the
repo
doesn't
mean
it's
been
adopted
or
that
it
has
consensus.
We
run
calls
for
consensus
and
calls
for
adoption
on
a
mailing
list.
We'll
talk
about
those
in
a
minute
and
editors
drafts.
A
Don't
represent
working
consensus
but
working
to
editors
to
have
stones
but
working
groups
too.
A: Okay. So here's the agenda for today, and we're going to give people warnings: we will try to keep to a strict time schedule, because we have a full schedule, so we'll give you a warning two minutes before time is up and then we'll move on. All right, here's a summary of the CfCs that we've run since the new year.
A: We had a CfC on low-latency streaming use cases that concluded on January 16th; people were generally favorable. We had five in support out of six, but six issues were filed, so we need to address those; I guess, Tim, we probably need to sync up on fixes for those. Then we had a CfC out on the face detection API, which we're going to talk about today. That also concluded on January 16th and had six responses.
A: Five in support, with one objection, and five issues were filed, three of which were new. Then we had the one-way media use cases CfC, which concluded on February 6th. We got seven responses on that: five in support, one objection, one no opinion. Ten issues were filed relating to that CfC, so the author, which would be Harald, needs to deal with those. And then we had a CfC on recycling leverage CPC that concluded on February 10th; 12 responses were received, all in favor.
A: So that's a summary of all the CfCs that have gone on in the last two months. All right, I'm going to turn this over to Henrik and Fippo. We've got 20 minutes on WebRTC stats and WebRTC extensions, and this is what we're going to talk about. All right.
C: Okay. Byron Campen from Mozilla found an issue: we have totalRoundTripTime defined as mandatory to implement, while responsesReceived is not, and that's correct. If you look at round-trip time, the common pattern is that you divide totalRoundTripTime by the number of responsesReceived. So what we currently have as MTI doesn't make sense, because we need either one of them, or both of them, or neither of them.
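The dependency being described is that the useful application-level number, the average round-trip time per candidate pair, is derived by dividing the cumulative total by the response count, so mandating one stat without the other is of limited use. A minimal sketch, using a plain object shaped like an RTCIceCandidatePairStats entry (mocked here rather than taken from a live `pc.getStats()`):

```javascript
// Derive the average STUN round-trip time for a candidate pair, the common
// pattern described above: totalRoundTripTime / responsesReceived.
// `stats` is a plain object mimicking an RTCIceCandidatePairStats entry.
function averageRoundTripTime(stats) {
  // Without responsesReceived, the cumulative total cannot be averaged.
  if (!stats.responsesReceived) return undefined;
  return stats.totalRoundTripTime / stats.responsesReceived;
}

// Mocked stats entry; in a real app this would come from pc.getStats().
const pair = { totalRoundTripTime: 3, responsesReceived: 24 };
console.log(averageRoundTripTime(pair)); // 0.125 (seconds)
```

In a real application the entry would be found by iterating the report from `pc.getStats()` and filtering on `type === "candidate-pair"`.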
D: Yes; sorry for being late to add a comment, which I added to the issue tracker. But I think, based on Firefox's ICE stack, Firefox does not compute totalRoundTripTime. We were under the impression that the spec was moving toward using smoothed round-trip time instead, which is an SCTP stat, only to realize later that, apparently, there's some working group history there: the group agreed that was a good idea, but then no one had implemented the SCTP transport stats, so that got removed from the spec.
D: Okay. And in general I just wanted to express the sentiment that I think Mozilla is not super happy about implementing totalRoundTripTime, since we don't currently compute that, and it sounded like earlier the working group had decided that it was a poorer metric even than the SCTP one.
D: But if there are cases where they are not the same, you know, that would be challenging, I guess. But so far we're just talking about whether they should be MTI; we're not preventing anyone from implementing these, right? It was just a question of what should be the guaranteed minimum for web developers, which I think is important to establish.
E: It gives totally different information from the SCTP or RTCP metrics. If ICE switches from one candidate pair to another, the RTP round-trip time will likely change accordingly, while the total round-trip time on the candidate pair should not, so, I mean, if...
E: I think we should just keep them both as mandatory to implement. But since Firefox has not implemented them, and we don't have uniformity on it, I wouldn't object strongly to making them not mandatory. But you know you can't exchange one type of RTT for another.
F: They won't have the RTT measurement, which would not make sense in this case. I think for the ICE candidates we should have both, because I don't think you can measure total round-trip time without sending STUN binding requests, or some kind of STUN messages.
F: So if you don't implement it, then you won't have the responsesReceived and you would not have a totalRoundTripTime. I would rather have both of them, if possible; and if Mozilla is not going to implement it, then maybe there's a case for it not being MTI. But having only one of them as MTI doesn't make sense without the other, because you're measuring both anyway.
I: Yeah, I mean, I think it would be nice to have them both MTI, but I don't actually see that they're directly correlated. There are things that you could do with one without the other: knowing you're still getting responses is useful, and that the response count is going up, even if you don't know what the total round-trip time is. So I don't see them as being irretrievably tied together, but it would be nice to have them both MTI, I think.
I: I mean, the point about not being able to derive one without the other is true, but you don't have to expose them both, and there are uses that you could make of one that don't use the other. And particularly, maybe in this future programmable-ICE world, actually exposing them both might be much more useful.
I: That might actually be a useful thing; it will crop up at the end of this session.
F: So my finding them related came from the fact that totalRoundTripTime is already MTI, and totalRoundTripTime without responsesReceived is kind of meaningless. In that sense, getting responsesReceived as MTI makes a lot of sense as a consequence of that decision.
F: We can either make both of them MTI, which I think is what we're leaning toward, but it kind of depends on Mozilla's position.
D: I guess our position is... I mean, we see the value of having a round-trip-time measure for most web developers, but in the ways that it is different from the SCTP transport one, I guess I'm assuming that most applications, 90%, will just want the round-trip time of the connection and would take either. Is that not most of the cases?
I: There's a third measure, which is the currentRoundTripTime.
I: So, I mean, short term you use that; what we're looking at here is the long-term average, to try and get a sense of whether the current value is different from the average. That's probably the most interesting thing to look at: how wildly is it varying?
D: So I would argue, since this is already implemented in several browsers, that us changing the decision here doesn't really impact much one way or the other. It doesn't impact other browsers, and Firefox basically... so it's a question of how strongly we want to push on that one browser.
A: So do we have any notes or next steps for this? I think we're going to need to move on to the next item.
E: So everyone just hit thumbs up for MTI and thumbs down for not, and we'll see if there's a clear indication. Okay.
A: Why don't we do that? One, two, three.
E: We won't get unanimity, I think; we declare this no consensus at the moment. Okay.
B: Next. Yeah, what should I do in terms of edits? Should I adjust there?

A: Please continue the discussions on GitHub, basically.
J: (I guess that's right, Henrik?) Yes, thank you. So in getStats, powerEfficientEncoder and powerEfficientDecoder expose whether or not hardware is used for encoding and decoding, and to address privacy concerns we added a hardware exposure check, which says only to expose this if the context is capturing, i.e. camera. The problem with this is that it does not work in cloud gaming use cases that don't capture. And there is a general problem.
J: It's not just this spec; there's some inconsistency between specs exposing hardware capability, and we're not consistent about when that is allowed or not. So, for example, Media Capabilities already exposes this, although it's a capability rather than what's currently used, so there's a subtle difference there; but it has a privacy considerations section, which is rather vague anyway. Next slide. I'm proposing to resolve this inconsistency, and to fix the problem for the cloud gaming use case, by adding a step to the hardware exposure check.
J
That
sets
essentially
that,
if
power
efficient,
the
power
efficient
attribute
is
exposed
in
media
capabilities
for
the
same
configuration,
then
we
should
expose
it
and
get
stats
as
well.
So
it
solves
the
current
issues,
but
please
note
that
it
appears
that
browsers
today
do
always
expose
in
power
efficient.
So
we
would
expose
this
as
un.
Yes,.
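The proposed gating step can be sketched roughly as follows. This is only an illustration of the proposal, not shipped behavior: `mediaCapabilities` is injected (mocked below) in place of the real `navigator.mediaCapabilities`, and the in-use flag stands in for the provisional powerEfficientDecoder stat being discussed.

```javascript
// Sketch of the proposed gate: only surface the hardware-decode-in-use bit
// in getStats if Media Capabilities would expose powerEfficient for the
// same configuration.
async function maybeExposePowerEfficient(mediaCapabilities, config, hwDecodeInUse) {
  const info = await mediaCapabilities.decodingInfo(config);
  // If the capability is already observable there, exposing the in-use bit
  // in stats reveals nothing new about the configuration itself.
  return "powerEfficient" in info ? hwDecodeInUse : undefined;
}

// Mock standing in for navigator.mediaCapabilities answering for a VP9
// decode configuration.
const mockMC = {
  decodingInfo: async () => ({ supported: true, smooth: true, powerEfficient: true }),
};
const config = {
  type: "webrtc",
  video: { contentType: "video/VP9", width: 1280, height: 720, bitrate: 1000000, framerate: 30 },
};
maybeExposePowerEfficient(mockMC, config, true).then(v => console.log(v)); // true
```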
B: Yeah, so I think the subtlety, as you mentioned, is that in one case Media Capabilities is saying "hey, I've got hardware decoding", and in your case, the stats case, that's not what you want. You want to say "hey, I actually have a decoder; have a look at the decoder, a hardware decoder", and that's more private information, because you can use it as a side channel.
B: If one page is holding a few decoders, and then another page comes along and tries to allocate a decoder and sees that it can't, and so on, then you have a channel, an information channel, between the two pages, and that's what we are trying to avoid. Media Capabilities is not in that territory; that's why it's different there, and that's why I don't think we can just say "hey, Media Capabilities is exposing powerEfficient."
J: My opinion is that yes, you're absolutely correct, but I think it's rather obscure; I have a hard time imagining this being used. But yes.
I: Yeah, are we sure that you can't tell that you've got a hardware decoder anyway? I would be very surprised if you couldn't just look at the RGB output in a canvas and decide whether it's gone through a hardware decoder or not. If you were only looking for a couple of hardware decoders, I'm pretty sure you could tell there.
B
Is
something
done
to
canvas
in
some
cases
to
prevent
fingerprinting
so
having
web
pages?
Do?
That
is
good,
because
then
you
you
can
first
them
enough,
so
that
real
web
pages
canvas
will
will
be
working
and
video
printing
will
be
broken.
A: Yeah, a couple of things here. Remember that we also have encoder APIs and decoder APIs that are separate from capture, and in those cases you can't really link them to capture.
A
Also,
you
can
really
tell
if,
if
the
acceleration
is
not
there
because
you'll
see,
particularly
with
codecs
like
ab1
you'll,
see
a
dramatic
drop
in
performance,
and
so
it's
not
like
you
can't
tell,
but
the
problem
is
that
the
the
app
has
to
collect
information,
and
meanwhile
the
user
will
be
experiencing
some
huge
delay
as
an
example
of
that
they'll
see
the
performance
segregation.
So
it's
it's.
Basically.
A
This
is
useful
to
let
the
app
kind
of
warn
the
user
that
something's
happened
so
that
that's
what
it's
that's,
what
it's
used
for
in
practice
in
the
cloud
gaming
cases,
so
it
doesn't
really
accomplish
anything
to
to
to
put
on
a
there's.
No
there's,
no
real
Privacy
Information,
that's
being
leaked
here.
The
app
can
get
everything
at
once
anyway,
but
it
just
takes
longer
and
meanwhile
the
user's
sitting
there
and
their
game
is,
you
know
not
working
so.
D: Yeah, so this is more of a question: the cloud gaming use case here can already use Media Capabilities, right? So they can already get the information.
D: And wasn't there, on the issue, a similar API proposed to detect the fallback? Does that have the same problems?
A: Just to be clear: just because Media Capabilities says you should have hardware doesn't mean you'll actually get it, and we've seen situations where you go from hardware to not having hardware mid-session. So you're in the part of the game where you really want responsiveness, and all of a sudden you lose your hardware acceleration and the experience degrades, so the app wants to know right away. (Makes sense.)
J: I would argue that if we want to improve the fingerprinting situation, then we should do it in a way that works for both use cases. I don't want us to be in a situation where each spec tries to solve more or less the same problem differently.
B: I think you're right there. I think WebCodecs has the same issue, because with WebCodecs you instantiate a decoding task and you might not know whether it's hardware or not. I think there's an issue there, and I agree that the solution should be the same for both, basically.
A
Yeah
I,
you
and
I've
actually
proposed
a
PR
I'm
going
to
propose
a
PR
for
exactly
that,
because
it
is
the
same
in
web
codecs,
yeah.
I: (Tim) Yeah, I just want to say this feels like the wrong place to fix the specific cloud gaming problem, because what they actually want is an event which says that you've lost your hardware codec: you requested it with capabilities, and you got it, and now it's gone away. What you want is to be told that.
J
Yeah
each
each
specific
use
case
like
a
separate
API,
where
you
can
use
an
encoder
or
decoder,
would
likely
have
its
own
API
for
the
exposing
that
bit
of
information
like
get
status
is
specific
to
webrtc.
Web
codex
would
be
a
different
set
of
apis.
So
the
question
isn't
like
having
one
spec
to
expose
all
the
same
information.
The
question
is:
can
we
have
one
spec
where
this
the
the
question?
I
Yeah
but
I
mean
I
I,
don't
want
to
dig
into
this
too
far,
but
I
I
feel
that
the
being
told
that
you've
had
something
taken
away
that
you've
been
using
is
less
of
a
privacy
violation
than
being
able
to
poll
to
find
out
whether
you're
going
to
get
it
and
I.
Think
so.
I
think
there's
a
like
a
shading
here
of
what
what
you're,
exposing
in
terms
of
privacy.
B
So
there's
so,
it
was
claimed
that
it's
not
privacy
issue,
because
the
web
page
can
actually
detect
that
information
already.
B
So,
if
that's
the
case,
if
you
like
frame
rate,
is
working,
for
instance,
you
can
observe
it
in
like
two
or
three
seconds,
so
maybe
we're
just
talking
about
an
optimization,
but
it
might
be
good
to
actually
nail
it
down
so
that
we
have
all
this
kind
of
information
to
make
a
potential
judgment.
If
we,
if
we're
doing
this
change,
like
we've
done
in
the
past,
the
pin,
working
Loop
will
say
hey.
This
is
not
correct
and
they
will
be
absolutely
right
and
we'll
need
to
provide
them.
A: So what is the next step here?
B
I
guess
Henrik
wanted
to
do
these
four
with
proposal.
A
and
B
I
think
it's
it's
good
to
have
a
some
feeling
about
in
the
room
whether
it
should
be
like
web
codex
and
webrtc
should
be
just
one
issue
and
a
decision
is
made
and
is
consistent
or
not,
and
then
we
can
maybe
move
this
discussion
to
FIFA
World
or
see
or
media
workload.
J
So
can
we
do
a
poll
for
whether
or
not
we
should
have
one
place
to
point
to
just
not
solve
this
in
every
other
spec?
First
of
all,.
J: I saw both thumbs up and thumbs down, so I conclude that there's no consensus there.
B
So
so
maybe
another
question
is
whether
it
should
be
media
working
group
or
we're
actually
working
with
that
device.
This
proposal,
a
discussion
I,
think
that
since
media
working
group
is
owning,
media
capabilities
and
also
e-zoning
web
codecs
I
would
plan
to
have
this
discussion
there
instead
of
words.
H
Yeah
I
think
that
this
implementation
was
kind
of
after
the
blocking
the
exposure.
So
why
don't?
We
have
a
way
to
have
some
approaches
to
allow
the
cloud
game
to
get
this
information
in
a
meanwhile
I
I
believe
that
this
question
was
going
on
for
quite
a
long
time-
I
I'm,
not
quite
sure,
but
oh
should,
should
we
have
a
way
to
have
some
information
in
a
meanwhile
for
the
discussion.
That's
my
question.
J
But
one
proposal,
so
so
what
happened?
If
this
was
working
in
one
version
and
then
the
next
version
it
stopped
working
because
we
implemented
the
this
recently
added
the
Privacy
gate,
one
like
just
to
mitigate
the
the
immediate
pain.
Could
we
add
this
as
a
feature
at
risk
and
then
file
an
issue
on
the
relevant
working
group,
or
should
we
just
wait.
B
I
think
waiting
is
fine.
Bernard
has
a
PR,
so
I'm
guessing
there
will
be
discussions
at
the
next
media
working
group
meeting.
Okay.
Hopefully
there
will
be
progress
one
way
or
the
other.
D
Yeah
I
just
wanted
to
clarify,
since
I
gave
a
thumbs
down
that
I
think
if
someone
could
show
with
the
polyfill
basically
a
way
that
the
existing
applications
could
detect
us
already,
and
it's
merely
a
moment
of
saving
like
a
second
or
something.
I
would
find
that
very
convincing.
Okay.
J: All right. In the meantime, Bernard, you said you were filing an issue about this; could you just add a pointer to that in the issue for this, and then we can continue the discussion? Yeah.
A: I think the pointer is actually already in the issue, because I referenced 730. But anyway, all right, yeah, I will do that.
J: Yes. So, regarding the header extensions APIs, we have a lot of open issues, and I'm hoping to get to all of them. In a previous interim we decided to change the API shape to this get/modify/set pattern, so I think there is already consensus there. But there are other open issues, like the name "headerExtensionsToOffer", which is a confusing name, and there are several issues related to FrozenArray being used. So, on the next slide...
J
I'm
hoping
to
you
know,
resolve
all
of
this,
and
hopefully
the
first
slide
is
basically
the
same
as
before.
Just
other
names,
so
I
was
for
the
gets,
modify,
set
pattern.
I'm
proposing
get
header
extensions
to
negotiate
for
what
you
want
to
negotiate.
You
have
the
setter
as
well
and
then
for
what
has
already
been
negotiated.
You
have
get
negotiated
header
extensions.
So,
first
of
all,
can
we
get
a
decision
about
these
names
because
then
we
can
create
a
PR.
J
The
alternative
name
suggestion
would
be
get
header
extensions
and
get
current
header
extensions,
because
that
mirrors,
Direction
and
current
direction
already
exists.
Opinions.
D
Oh,
this
looks
good
to
me:
can
I
so
we're
still
talking
about
dictionaries
here?
Basically
right,
yes,
yeah
I
I,
don't
have
a
strong
preference
on
name
but
I
think
either
works
for
me
thanks.
D
Oh
clarifying
question:
it's
not
clear
from
the
web
ideal
here,
but
what's
the
intended
use
of
this.
J
So
next
slide
I
can
I
can
share
an
example,
so
you
use
get
header
extensions
to
negotiate.
You
have
a
loop
and
you
have
some
amazing
extension
you
want
to
enable.
So
you
set
the
direction.
So
this
is
why
it
can't
be
a
frozen
area.
That
would
be
a
painful.
You
should
modify
it.
You
do
set
header
extensions
to
negotiate
the
result.
Oh
it
should
be
extensions.
Plural.
J
You
do
your
offer
answer
and
hopefully
get
negotiated.
Header
extensions
will
reflect
if
the
other
endpoint
agreed
sounds
good.
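The get, modify, set flow just described can be sketched as below. Since the proposed getHeaderExtensionsToNegotiate / setHeaderExtensionsToNegotiate methods are not shipped, the transceiver is mocked with plain objects, and the "amazing extension" URI is a placeholder:

```javascript
// Sketch of the proposed get/modify/set pattern on an RTCRtpTransceiver.
// Mock transceiver holding header-extension capability dictionaries.
const transceiver = {
  _ext: [
    { uri: "urn:example:amazing-extension", direction: "stopped" }, // placeholder URI
    { uri: "urn:ietf:params:rtp-hdrext:sdes:mid", direction: "sendrecv" },
  ],
  getHeaderExtensionsToNegotiate() {
    // Returned as a mutable copy; this is why a FrozenArray would be painful.
    return this._ext.map(e => ({ ...e }));
  },
  setHeaderExtensionsToNegotiate(extensions) { this._ext = extensions; },
};

// Get, modify the one extension you care about, then set the result back.
const extensions = transceiver.getHeaderExtensionsToNegotiate();
for (const ext of extensions) {
  if (ext.uri === "urn:example:amazing-extension") ext.direction = "sendrecv";
}
transceiver.setHeaderExtensionsToNegotiate(extensions);

// After a real offer/answer, getNegotiatedHeaderExtensions() would reflect
// whether the other endpoint agreed; that needs real negotiation, omitted here.
console.log(transceiver._ext[0].direction); // "sendrecv"
```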
B: Is it a fully synchronous API? (Yes.) Okay, so there's no need for hopping to another thread or whatever to check things?
J: No. There are some more complexities, which is this slide. But it sounds like we have consensus on naming, so that part is ready for a PR. Okay. So the next question is: what do we do about directions not being, perhaps, what you expect? Because direction could be what you want to negotiate, or it could be what you're capable of negotiating, or it could be the default value, etc. So I'm just doing a rundown of cases where you might not...
J
You
might
not
get
what
you
want
so
in
the
first
1A
here
is
what
if
the
capability
of
the
header
extension
is
receive,
only
so
you've
asked
to
send
and
receive
something
like
it's
only
applicable
in
the
receiving
case
or
on
the
1B
is,
is
what
if,
what,
if
you're
you
are
trying
to
negotiate
a
receive
only
header
extension,
but
the
transceivers
direction
is
send
only
so
clearly,
that's
not
applicable,
and
the
second
the
point
two
there
is
well
what,
if
the
extension
is
not
supported
by
the
remote
endpoint,
and
my
proposal
to
all
of
this
is
that
we
just
silently
downgrade
the
direction
like
we
don't
throw
an
exception.
J
I
think
that's
the
most
web
compact
friendly
way
to
to
deal
with
things.
So,
if
you,
if
you
get
set
and
then
we
can
just
down
downgrade
it
to
receive
only,
but
in
all
the
other
cases,
I
would
say
that
we
we
just
what
you
set
remains
and
then,
if
the
other
end
point
does
not
support,
it,
then
will
not
modify
the
header.
Extensions
to
negotiate
will
only
modify,
get
negotiated,
Henry
extension,
so
this
matches
how
Direction
and
current
direction
works
right.
A: Thank you, Henrik. Okay, so we're now going to try to go through some SVC and media capture extensions issues. This is issue 73 in webrtc-svc. It arises because there are two types of simulcast that are supported in VP9 and AV1: there's traditional simulcast, with multiple encodings, each with their own SSRC and RID; and then in VP9 and AV1 we also have something called S-mode simulcast, which is multiple encodings within a single SSRC. So, issue 73...
A: The question is: should we allow both simulcast types to be configured, and if so, when? The specification currently rejects configuration of S-modes if there's more than a single layer, so you can't do both at once. The reason for this was a concern that mixing simulcast types would complicate SFU and browser implementations without providing any real value.
A
So
then,
florante
asked
and
he'll
we'll
show
an
example
in
a
minute
whether
you
can
simultaneously
negotiate
the
codec
and
the
simulcast
type,
how
you
do
that
most
elegantly
and
here's
the
example.
A
So
in
this
particular
example,
you
don't
know
whether
you're
going
to
get
vp9
or
ab1.
You
might
get
vpa,
and
so
it's
proposing
three
layers
with
two
of
them
turned
off,
and
one
of
them
has
this
s
mode,
S3
T3,
which
is
three
simulcast
layers,
each
with
temporal
scalability,
that
one's
active
and
it's
a
send
only
transceiver.
A
So
then
you
do
this,
you
don't
know
what
you're
gonna
get
and
it
turns
out
that
you
end
up
with
vp8
instead
of
ab1
or
or
vp9,
and
then
so,
when
you
do
get
parameters,
you
end
up
with
three
layers,
but
not
what
you
expected
instead
of
S3
T3,
you
end
up
with
L1
T3,
because
that's
the
default
for
vp8
and
then
what
you
can
do
is
you
can
then
update
it
and
and
turn
on
the
other
layers.
A
So
this
is
a
situation
where
you,
basically
you
don't
know
what
codec
you're
going
to
get.
So
it's
more
elegant
because
you,
you
basically
would
just
do
a
set
parameters
and
you
could
recover,
depending
on
what
codec
you
got.
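The recovery step just described, detecting at getParameters() time that the S-mode request fell back (e.g. to VP8's L1T3 default) and activating the other encodings instead, can be sketched as a helper over a parameters-shaped plain object. The field names follow webrtc-svc's encodings/scalabilityMode; the recovery policy itself is only an illustration, not anything the spec mandates:

```javascript
// Given getParameters()-shaped params, check whether the single active
// S-mode encoding (e.g. "S3T3") survived negotiation; if the codec fell
// back (e.g. VP8 defaulting to "L1T3"), activate the other encodings so
// you get traditional multi-SSRC simulcast instead.
function recoverSimulcast(params, requestedMode) {
  const active = params.encodings.find(e => e.active);
  if (active && active.scalabilityMode === requestedMode) {
    return params; // S-mode simulcast negotiated as requested; nothing to do.
  }
  // Fallback: turn on every encoding for traditional simulcast.
  for (const e of params.encodings) e.active = true;
  return params;
}

// Mocked result after the negotiation ended up on VP8.
const params = {
  encodings: [
    { rid: "a", active: false },
    { rid: "b", active: false },
    { rid: "c", active: true, scalabilityMode: "L1T3" },
  ],
};
recoverSimulcast(params, "S3T3");
console.log(params.encodings.every(e => e.active)); // true
```

In a real application `params` would come from `sender.getParameters()`, and the modified object would be passed back via `sender.setParameters(params)`.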
A
So
basically,
what
this
argues
for
is
being
able
to
do
to
mix
allow
an
S
Mode
as
long
as
there's
only
one
active
layer
and
so
I
wrote
a
PR
for
this
in
PR
86,
which
essentially
says
you
look
for
whether
there's
more
than
one
active
encoding
and
then,
if
you
have
an
S
Mode,
you
throw
so
so
that's,
and
this
is
done
in
both
ad
transceiver
and
set
parameters.
You
basically
say
hey,
it's
only
invalid
if
you
have
more
than
one
layer
active
and
you
configured
an
S
Mode.
A
So
let
me
talk
about
the
discussion,
so
in
issue
73,
we
then
had
some
discussion
about
this
and
one
one
question
was
in
the
example
here
in
the
ad
transceiver.
If
you
attempted
to
activate,
read
H
or
F,
you
would
throw
because
you
you
couldn't
do
that
and
that's
fine.
If,
if
you
want
to
prohibit
the
browser
from
sending
both
s
in
traditional
simulcast
at
the
same
time,
then
that's
what
you
would
do,
and
that
would
be
the
consequence
but
Harold
raised.
The
question
overall
is
I.
A
Guess
the
general
a
point
of
philosophy
which
is:
are:
how
far
should
we
go
in
preventing
people
from
being
stupid
and
I
guess?
Another
question
is:
is
it
always
stupid?
For
example,
you
know
what,
if
you
wanted
to
send
six
encodings
and
you
you
wanted
to
do
three
layers
each
with
two
or
something.
Is
that
necessarily,
is
that
something
we
have
to
prohibit,
absolutely
that
this
could
never
be
done
and
if
so,
why?
The
problem
it
seems
to
me
is
really.
A
The
biggest
issue
is
that
traditional
sfus
won't
handle
S
Mode
anyway
and
there's
a
couple
of
reasons
why
they
probably
can't
handle
an
S
Mode
one
is
that
at
least
today
I,
don't
believe
that
any
of
the
implementations
set
the
rid
with
an
S
Mode.
A
So
if
the
SFU
is
relying
on
the
rig
to
switch,
it
won't
have
that
information,
and
so
it
could
switch
essentially
multiple
encodings
to
a
browser
which
the
browser
wouldn't
be
able
to
deal
with
and
because,
if
there
is
no
rid,
the
only
way
the
SFU
can
switch
is
if
it
parses
the
ab1
payload
and,
of
course,
older
SUV
won't
necessarily
be
able
to
do
that.
So
that's
that's
kind
of
where
we
are.
We
have
a
PR
that
that
actually
meets
I.
A
Guess
Florence
request
that
that
we
able
to
be
able
to
make
the
example
work.
But
the
question
is
when
a
philosophy
that
Harold
raised,
you
know
how.
How
tough
should
we
be
on
on
folks
to
prevent
them
from
shooting
off
their
own
foot?.
B: So I would ask the alternative question, which is: how hard is it to do in user agents?
B: If user agents already support it, I would say, hey, we already have support, it's just a matter of exposing it, and maybe it's fine. But if it's taking some time to do, it's a budget, and we might well want to avoid spending this budget, because maybe no one is asking for it; and if some people ask for it in the future, then we might be able to reconsider. So I would tend to be conservative there, keep the PR as is, and throw.
L: I agree with you that it's something that is not necessarily easy to implement in current user agents. But, on the other hand, I don't think we can be implementing everything all the time in all versions of user agents, especially future ones, future versions of the tech. I think we might want to have a mechanism that allows us to reject configurations that the current agent is not supporting, with proper justifications that application developers can use.
L
I
understand
that
it's
not
great
for
compatibility,
but
the
reality
of
things
is
that
video
configurations
like
that
are
complicated,
and
it's
unlikely
that
we
will
get
even
compatibility
for
some
Advanced
modes.
E: Yeah, my thinking goes with that: the spec shouldn't, and shouldn't need to, stop people from being stupid, if doing so complicates the implementation or people's ability to understand the spec.
E
So
obviously,
as
long
as
they
don't
support,
s3t
T3
any
configuration
with
the
stage
GT3
and
it
is
going
to
be
rejected,
and
that
should
be
legal.
We
don't
have
a
mandatory
to
implement,
set
for
or
and
promotes
except
to
S1,
L1
or
L1
T1,
so
I
think
both
the
spec
and
the
implementation
can
be
simpler
if
we
don't
have
to
have
a
rule
about
which
modes
are
allowed
and
which
combinations
and
we
can
still
reject,
what's
not
supported.
D
Yeah
so
I
I,
like
the
API
that
Florence
suggested
here,
and
it
seems
like
a
good
way
to
negotiate
things
so
I'm,
not
so
mistaken,
I,
don't
not
hearing
any.
It's
I
haven't
looked
at
the
pr
in
detail,
but
it
sounds
like
the
pr
goes.
Some
other
way
of
loosening
the
Restriction
we
have
today
and
it
sounds
like
Harold
wants
to
go
further.
So
unless
I'm
wrong,
I'm,
not
hearing
any
objections
to
at
least
take
the
first
step
and
maybe
discuss
going
further
later.
A: So basically the suggestion is to take the PR as is, which just basically allows Florent's example; and then, I guess, Florent, if you think that it can be loosened further, you'll suggest that. Is that reasonable?
L: We can take it in steps, sure; if people want to do more complex modes later on, we can do that. (Yep, okay.)
D: So now we're switching over to media capture, where in Firefox we're trying to implement permissions.query. You may be wondering why we are talking about the Permissions API in the media capture spec. Well, it's because the permissions spec has, perhaps wisely, in order to not talk so much about the different domains in the permissions space, pushed a lot of the specific language into the individual specs.
D
So
our
spec
now
has,
for
instance,
a
section
about
the
missions
integration
where
it
discusses
things
like
passing
in
a
device
ID
which
the
spec
allows
for
her
device
permission
models
which
only
Firefox
currently
supports.
Unfortunately,
no
one
Implement
implements
device
ID
yet,
and
that
means
that
there's
some
friction
when
we're
trying
to
implement
this
in
Firefox
so
currently,
and
one
of
them
has
to
do
so.
There
are
two
issues
to
discuss.
D
One
is
permissions,
query
in
per
camera
and
mic
permission
models
per
device
permission
models
and
I
have
a
second
slide
for
a
different
issue.
D
So
so,
right
now
our
permissions
Integrations
section
in
our
specs
says
if
the
descriptor
does
not
have
a
device
ID,
which
is
an
argument
to
permissions
query,
not
get
user
media
of
the
same
name.
It's
semantic
is
that
it
queries
for
access
to
all
devices
of
that
class.
D
So
all
cameras
are
all
our
all
microphones
and
since
no
one's
implemented
device
ID
yet
sites
today
will
use
this
to
query
and
call
permissions
query
with
the
name
for
camera
and
if
they
don't
get
back
granted
or
if
they
get
back
prompt,
they
may
enable,
for
example,
showing
an
extra
permission
screen
that
says.
Hey
this
website
needs
to
use
your
camera
and
microphone.
Can
we
please?
Please?
Can
you
please?
Please
turn
that
on,
but
the
site
here
is
actually
trying
to
ask.
Can
I
call
get
user
media
unprompted
they're?
D
Not
really
asking
do
I
have
access
to
all
cameras
which
is
tricky
for
us
in
Firefox,
because
we
cannot
answer
yes
to
the
latter
question
even
after
the
site
has
gotten
the
camera,
because
permissions
is
per
device,
not
all
by
default
in
Firefox.
D
So
the
proposal
is
that
it
would
be
more
web
compatible
to
say
if
the
descriptor
does
not
have
a
device
ID.
It's
semantic,
aesthetic
queries
for
access
to
any
device
of
that
class,
and
this
should
remain
compatible
with
all
device
permission
browsers
and
additionally,
sites
can
use
device
ID
to
inspect
individual
cameras
or
devices.
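Under the proposed "any device of that class" semantic, the site pattern being described looks roughly like this. `permissions` is injected (mocked below) in place of `navigator.permissions`, and the decision function is only illustrative, not something from the spec:

```javascript
// Decide whether to show a pre-permission explainer screen before calling
// getUserMedia. Under the proposed semantics, query({ name: "camera" })
// without a deviceId answers "can getUserMedia succeed unprompted for ANY
// camera?" rather than "do I hold permission for ALL cameras?".
async function needsPrePermissionScreen(permissions) {
  const status = await permissions.query({ name: "camera" });
  // "granted" means getUserMedia should succeed without prompting.
  return status.state !== "granted";
}

// Mock standing in for navigator.permissions in a per-device browser that
// has already granted the site one specific camera.
const mockPermissions = {
  query: async () => ({ state: "granted" }),
};
needsPrePermissionScreen(mockPermissions).then(v => console.log(v)); // false
```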
E
Harold,
not
really
an
objection.
I
think
it's
reasonably
compatible,
but
I
know
that
if
you,
then,
if
you
call
or
get
a
query
without
an
ID,
and
then
you
call
get
this
media
with
an
ID,
you
can
still
get
permission
tonight
or
pop-up
or
whatever,
because
the
change
in
semantics
is
that
as
long
as
the
camera
exists,
that
you
will
get
when
you,
when
you
prompt
it,
when
they're
called
get
used
media
without
the
device
ID
and
we
have
implemented
the
ketus
media
with
device
ID
and
then.
E
Yeah,
it's
a
change
change
in
semantic,
but
I
think
it
just
meant.
Change
in
somatics
is
okay.
D
Yes,
I
think
Harold
is
correct
and
but
then
luckily
in
that
case,
so
if
you
have
a
device
idea,
you
should
pass
it
to
both
apis
I
think
is
the
answer.
B: Yeah, that's fine by me; I think the proposal makes sense.
D: Actually, there's an example in the spec to do that, but it requires that you have called getUserMedia first, because you have to be able to call enumerateDevices. Once you have full enumerateDevices access, you can do a for loop over all the devices you get back and check the permission state of each one, which is a bit of a fingerprinting vector. So it's probably good that that doesn't work without a prior getUserMedia success.
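The enumerate-then-query loop just described can be sketched with the browser calls stubbed out so the control flow is visible. This is an illustration only: in a real page `devices` would come from `navigator.mediaDevices.enumerateDevices()`, and `queryState` would wrap `navigator.permissions.query(...)` with the proposed per-device descriptor.

```javascript
// Sketch of the per-device permission-state loop (stubbed browser calls).
// devices: array of { kind, deviceId } objects, as enumerateDevices returns.
// queryState: async (name, deviceId) => "granted" | "prompt" | "denied".
async function permissionStatePerDevice(devices, queryState) {
  const states = {};
  for (const device of devices) {
    if (device.kind !== "videoinput") continue; // cameras only, per the example
    // Per-device query, using the proposed deviceId member of the descriptor.
    states[device.deviceId] = await queryState("camera", device.deviceId);
  }
  return states;
}
```

The fingerprinting concern follows directly: each iteration reveals one more bit of per-device state, which is why gating this behind a prior getUserMedia success matters.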
B
D: I believe that's orthogonal, but that's a good question I also had when I looked at the spec. From what I could tell, the Permissions spec right now seems to say that it's as if you didn't pass in a deviceId, which I guess might be a little surprising, so we might want to change that if we can, but I'm open to ideas there.
B: Okay, well, I recommend commenting on the issue and then we can go from there. Okay, try to continue.
D: Right, great, thanks. Next slide.
D: Right, so here's the second issue I promised. This is also with permissions.query, and it happens with a non-persistent permission model, so this would be Safari and Firefox. I had to pick on a particular service here, because each service has different UX.
D: In this case it's whereby.com, showing you a screen that says the service wants to use your camera and microphone in order for others to see and hear you, that your browser will request camera access and microphone access, and you have to click a button. This is only ever shown once in Chrome, but it's shown for essentially every meeting in Safari and Firefox.
D: So that means, and in Safari and Firefox you still get a permission prompt, which is an extra click, that thanks to this UX it's actually two extra clicks per meeting. Next slide.
D: The reason for that, I believe, is that for users in Safari and Firefox, the fact that they granted camera or microphone the last time they used the site, and the time before that, counts for nothing in the Permissions spec. There's an issue filed on the Permissions spec to perhaps add another permission state that says "granted last time", but I think that's a long-term solution, and I'm hoping for a better solution that we can implement more quickly in Firefox. But I'm getting ahead of myself.
D: As a result, many video conferencing sites today offer a smoother user experience to returning Chrome users than to returning users in other browsers, because the spec basically ignores past non-persistent permissions entirely. This sets sites up to expect granted permission and to treat anything less as a user-training problem, which we saw in the earlier screenshot. Unfortunately, this diverges site UX for returning users between browsers that persist permission and browsers that don't.
D: So this would mimic persisted permissions, in the sense that users shouldn't have to have granted persistent permission to get sites off their back, so to speak; I'm just using that language to shorten the slide, since I'm running out of time, not saying one model is better than the other. Sites can still use deviceId, which they'll have on return visits, if they really want the real answer for that specific device, and there are some use cases that have been brought up, like telehealth sites.
E: So you're asking us to persist permission in a browser that has decided not to persist permissions. This sounds insane to me: either the browser should permit persistent permissions or it should not; the browser should decide. The idea that you have a stored permission for the non-specified device, that might actually be feasible, but it's still a store.
E: So yes, if you want to store permission, fine, but then you are in fact storing permission. Saying that the browser does not store permission when it actually does, I don't see the logic in that, and we shouldn't describe it like that.
B: Just to say that we implemented the Permissions API, and when the user has set some permissions, the user agent sometimes changes the value exposed to web pages because of fingerprinting issues. In particular, if the user is denying access, we change it to "prompt" in some cases to mitigate fingerprinting, and I think the spec might want to provide some hints like "hey, you might want to do that as a user agent", but I don't think the spec is disallowing such behavior.
M: Hi everyone, my name is Sameer Vijaykar, and today I would like to talk a bit about a proposal for an ICE controller API. If you could go to the next slide: in short, the proposal is to allow applications to have more visibility and control over which connection the peer connection uses for transport. On the next slide I lay out some of the motivations for this.
M: The user benefit we're trying to get at with this proposal is to improve the reliability of calls by allowing the application to actively manage which connection is used for transport. There are a couple of reasons why I think it makes sense for the application to decide this. Firstly, the application knows its use case; it understands the use case better than the ICE agent itself, so it has more information to decide on some trade-offs, such as the choice of network interface, IPv4 versus IPv6, what protocol to use, and whether or not to use relay. Another reason is just to acknowledge that the ICE agent might not always have an answer for what the right thing to do is in every possible situation. So the proposal is to give applications a bit more flexibility to experiment and figure out what works best, aside from the established standards: essentially, to allow applications to use the default behavior as it is today, but also to build on top of it in situations where that makes sense.
M: So why do we need a new API; what's possible today with respect to ICE? With respect to local candidates, the iceServers and iceTransportPolicy can be set in RTCConfiguration, but that pretty much limits what you can do with local candidates to restricting to relay candidates or all candidates; that's pretty much it, and attributes of local candidates cannot be changed through the existing APIs.
M: With respect to remote candidates there's a bit more flexibility. An app could decide to, say, filter out some of the candidates if it wanted to influence which network configuration is used for the peer connection, but once a remote candidate has been supplied to the peer connection through addIceCandidate, the only way to undo that would be to initiate an ICE restart.
M: So an ICE restart is another thing an application can request, and the last capability is to get statistics on the active ICE candidate pair with getStats. That's pretty much the extent of what's possible with the API today, unless I've missed something. On the next slide is the overall shape of the proposed API.
M: The idea is to allow observing some actions that the ICE agent performs: sending connectivity checks, selecting a candidate pair for transport, and pruning candidate pairs. We're proposing that not only can applications observe these actions, the application can actually block them as well in some cases. And on top of that, the application can request these actions from the ICE agent for a specific candidate pair that the application has chosen.
M: The next slide is a more concrete version of this proposal. There's an interface, RTCIceController, which can be supplied to the peer connection through RTCConfiguration. There are some informational event handlers for updates about the candidate pairs themselves, that is, when a candidate pair is added, updated, destroyed, or selected for transport, and then there are some proposal events. These are events that can be canceled by the application by calling preventDefault in some cases.
M: Essentially, the proposal events let the app know when the ICE agent is about to send a connectivity check, select a candidate pair, or prune certain pairs, and the app can decide to block those actions. Lastly, the methods allow the application to request those actions from the ICE agent.
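The cancelable proposal-event mechanics just described can be simulated outside a browser. This is a minimal sketch with the ICE agent stubbed out; RTCIceController and the event names come from a proposal that may change, and the Event-like object here only mimics cancelable DOM events.

```javascript
// Stand-in for a cancelable proposal event (mimics DOM Event semantics).
class ProposalEvent {
  constructor(candidatePair) {
    this.candidatePair = candidatePair;
    this.defaultPrevented = false;
  }
  preventDefault() { this.defaultPrevented = true; }
}

// The "ICE agent" fires this before acting; if the app's handler calls
// preventDefault, the agent skips the action.
function fireSwitchProposal(handler, candidatePair) {
  const event = new ProposalEvent(candidatePair);
  if (handler) handler(event);
  return !event.defaultPrevented; // true => agent proceeds with the switch
}

// Example app policy, as on the slide: block switches to TCP candidates.
const onswitchproposal = (event) => {
  if (event.candidatePair.remote.protocol === "tcp") event.preventDefault();
};
```

The key design point is that not registering a handler leaves the default (browser-controlled) behavior untouched.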
M: This example shows the overall mechanics of how the API might work. On the left are callbacks for the proposal events; in this case the application is deciding to take full control of ICE by canceling all the proposals generated by the browser. On the right is basically the application doing ICE by itself: deciding which candidate pair to ping and how often to do that, and, once it has gathered information from those connectivity checks, deciding which candidate pair to use for transport and which candidate pairs to prune away.
M: In this case, what the application does is register a callback for the switch proposal event, and it decides that if one of the candidates in the proposal is, let's say, a private IP, or TCP, or an IPv6 host candidate, then the app is going to block that switch. On the right side is an example of what an app might do with respect to RTT.
M: It registers a callback for the candidate-pair-updated event, which gives the app updates about certain statistics on the candidate pair, and for the active candidate pair the app monitors the RTT over time. Let's say it decides that if the RTT goes above a certain threshold, it starts pinging other candidate pairs more frequently, and then it might decide to switch to one of the other candidate pairs instead of the worsening candidate pair it has at the moment.
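The RTT-monitoring policy just described boils down to a small pure function. This is a sketch under stated assumptions: the threshold value and the field name `currentRoundTripTimeMs` are illustrative, not part of the proposal; in practice the numbers would come from the candidate-pair statistics events.

```javascript
// Illustrative threshold; a real app would tune this for its use case.
const RTT_THRESHOLD_MS = 300;

// Decide which candidate pair to use, given the active pair and alternatives.
function pickCandidatePair(activePair, otherPairs) {
  if (activePair.currentRoundTripTimeMs <= RTT_THRESHOLD_MS) {
    return activePair; // active pair is healthy, keep it
  }
  // Active pair is degrading: consider the lowest-RTT alternative.
  const best = [...otherPairs].sort(
    (a, b) => a.currentRoundTripTimeMs - b.currentRoundTripTimeMs)[0];
  return best && best.currentRoundTripTimeMs < activePair.currentRoundTripTimeMs
    ? best
    : activePair;
}
```

Under the proposal the switch itself would then be requested from the ICE agent rather than performed by the app directly.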
M: So that's an example of something that's possible with the proposed API. To wrap up, on the next slide are some caveats.
M: The proposal pretty much only covers control of candidate pairs by the application. It doesn't affect the gathering of the local ICE candidates themselves, so the only levers for that are still iceServers and iceTransportPolicy in RTCConfiguration.
M: Then, even though the proposal lets the application decide which candidate pair to ping, the actual STUN pings are still constructed by the ICE agent; the application can't construct the contents of a STUN ping. And finally, just to keep the initial proposal simple, we've restricted when an RTCIceController may be used: only if the bundle policy is set to max-bundle.
M: So only one transport is negotiated. The other restriction is on the use of ICE candidate prefetching. There are some suggestions in the full draft on how the API might be extended, but this is just the initial proposal, to keep the API simple.
N: All right, so I left some comments on GitHub and an issue. This is very similar to what I proposed a long time ago, which I think I called "flex-ice", and I think it's great to give the web app more control over the ICE agent, so in general I think this is a good direction.
N: I did try to analyze the differences, the things that might be missing here, and the things I think might be better. For example, I think it might make sense to put this on the ICE transport directly rather than having a separate object.
N: But I left the comments there, so I have ideas for how it could be better, but I also think that in general this is a really good idea.
D: Oh yeah, I think solving some of these problems makes sense. It's hard for me to tell, based on the reasons that were given; the thing that was being asked on the earlier slide seemed reasonable.
D: It didn't sound like a big deal, but then I got a bit of API shock when I saw the full controller interface. So I would love to make sure we can prune this down, to find out what is essentially missing, given that we have an existing API. And I think I agree with Peter: if you could work with someone like Peter to maybe reshape this into a smaller API that could fit into the existing transport, that might be easier to swallow.
M: Yeah, I did look at the flex-ice proposal. It is somewhat similar, but I think there are some areas that don't quite overlap. One of the main reasons for putting this in a separate interface, rather than on RTCIceTransport, was to allow it to be set up before actually setting up the peer connection.
I: Yeah, I echo everybody else: I think it's something that would be useful to do, but I have some reservations about the specific shape. It's not obvious to me that an event API with preventDefault is the cleanest way to do this; it seems like a slightly unnatural way of doing it, but to be fair I haven't thought of a better one yet. I think it needs to be quite explicit whether you're allowed to change a candidate on your way through one of those events; maybe that's already covered in the detailed document, but I think we need to be very clear about it. And the final thing: I would really like to add a feature which isn't in there, which is the ability to restore an existing candidate that worked before in a previous session.
I: If you've got a pair of devices that talked two hours ago, and now they need to talk again and nothing else has changed about the network, it would be nice to essentially be able to restore an existing candidate pair that you're pretty sure will work, with as little fuss as possible. I realize that's a big ask of this API, but it would be really nice to at least leave space for it somewhere.
A: Okay, yeah, I think I was in the queue. My comment is that the WebRTC-NV use cases have specific requirements for ICE, and I don't think this API meets all of them. For example, one requirement is that the ICE agent must be able to maintain multiple candidate pairs and move traffic between them, and I didn't notice that in the API. So anyway, tying it to the actual use cases and requirements would be helpful, thanks.
E: I think it's my turn, and I think it actually meets that specific one: a move is selecting a different candidate pair. But I don't think the API is going to get very much smaller, and there is a reason why this particular thing ended up using the cancel-the-default approach.
E: What that was: we were kind of saying, okay, if you don't want to handle something yourself, you shouldn't have to specify anything; that's the default. But if you want to handle it yourself, you should be very deliberate about whether the browser or the application handles it.
E: And preventDefault seemed to fit that pattern. It's a paradigm we haven't used in this working group before, so I was a bit surprised when I came across it from a completely different place, but I think it does what it needs to do, by the way.
E: Some of the things we would like to use it for are the ones that I presented at the IETF about a year and a half ago, so reading up on that document might give some more ideas. Okay, next.
B: This is Youenn, just to echo what Bernard said about the requirements and use cases: connecting them with the proposal would be good. And one thing I was wondering, with the preventDefaults and so on, is that the API seems to be tied to the peer connection; I'm not sure, I've only looked at the API for like 30 seconds. But I would get excited if we were able to use this API, maybe not right now but in the future, where there's no peer connection: you create an ICE controller, you create a transport, and then you can do your own thing, like with WebTransport or whatever. That might be something that is good to do, so that's something to consider as part of this API proposal as well, if it has not been taken into account already.
M: So I'm not hearing any violent disagreement or suggestions not to do this. I will take the feedback, come up with some more use cases, try to address the feedback about exactly where the API might live and the shape of it, and maybe I can talk about that in a month.
O: Yeah, as we know there was a CfC on face detection last month, and the summary was five in support and one objection. The concerns were put up in three separate GitHub issues, and we have explained the reasons both on the mailing list and in the issues, so I won't rehash everything, and we'll keep enough time for discussion. The first one was applicability: for Android, Pixelbooks, and iOS devices I think this is a non-issue.
O: Almost all devices in the ecosystem have had face detection for at least five years, in some cases, like Android, ten years or so. On Windows, the topic of camera driver support was highlighted. Well, driver support helps achieve the power and performance numbers which we have highlighted in the explainer.
O: It's just that performance would not be the same as presented in the explainer, because we haven't built a proof of concept for that scenario yet. Next slide. The second objection was regarding generality. Many will remember that we initially started the API trying to pre-optimize for many things which the platforms can't give us today, like contours and all that stuff, and the guidance was: let's work with the minimal set first, what's achievable presently, and work towards an MVP. My impression was that we sort of had agreement on the overall API shape anyway.
G: Yeah, sure, thanks. What I propose here is to allow defining multiple segments; each segment can be defined, in the syntax shown, by a center point or a bounding box.
G: Using these primitives we can define several segments in images, which don't necessarily need to be faces but could also be something else. In the Segment dictionary we have the type, a string, which would be one of the N segment types; in this example I have human face, left or right eye, or mouth, and this is easy to extend for other types of objects, like cars or animals or whatever needs to be detected or described later. After the type of the segment there is an id, which is similar to the id that we have proposed before. This id can be used for tracking the objects that are segmented out of the video frames, and it's also the identification of a specific segment.
G: The next field is partOf, which lets segments be grouped: for example, an eye would be part of a face, and in that case the partOf field of the eye would refer to the id of the face, so this allows a hierarchy of segments. Then we have the probability; this would be similar to the probability that we have already proposed for face detection, but in this case it would be used for any type of segment.
G: So that's basically the proposal that we think would solve the problem raised in issue 79.
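The proposed Segment metadata can be illustrated as plain data. This is a sketch: the member names (type, id, partOf, probability) follow the slide, but the exact WebIDL and type strings are assumptions. partOf refers to the id of the enclosing segment, so a sub-segment like an eye can be resolved back to its face.

```javascript
// Example metadata for one video frame, using the proposed Segment fields.
const segments = [
  { type: "human-face", id: 1, partOf: null, probability: 0.97 },
  { type: "left-eye",   id: 2, partOf: 1,    probability: 0.91 },
  { type: "mouth",      id: 3, partOf: 1,    probability: 0.88 },
];

// Walk the partOf chain to find the top-level segment a sub-segment belongs to.
function rootSegment(segment, all) {
  let current = segment;
  while (current.partOf !== null) {
    current = all.find((s) => s.id === current.partOf);
  }
  return current;
}
```

Because id is stable across frames, the same structure also supports tracking a segmented object over time.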
O: Yeah, so the third point was about variance of results, and I think this is a quality-of-implementation issue, which allows innovation to happen in implementations without breaking the API contract. There are precedents: echo cancellation can be done by the OS, the UA, or the web app, and hardware encoders in WebCodecs also do not have uniform results across the spectrum.
O: For the face detection case, it will return a rectangular bounding box. Some platforms will be able to do a great job at low power, and some would need a bit more power and might not be able to achieve the highest standards; we understand that. But if we use the fallback path I talked about on Windows, using the WinRT APIs, there should not be much variance within the Microsoft ecosystem.
O: Hopefully that will take care of this concern. Also, I just wanted to highlight that organizations who can afford their own models are free to use the Model Loader API; here our objective was to give web developers access to the same resources that a native developer would have. That's it. So maybe we can quickly have a discussion on these three face detection issues first, before I move on to the next topic.
B: Yes, I think it's a good summary. I think there are three issues, and the first and third need discussion. For the second one, I think we can always evolve the API shape, so to me it's not a blocker. One and three probably need to be discussed further.
A: Well, the segmentation one I think is a good step forward, because that would essentially allow the standard metadata to be used for multiple purposes. So I think that one is a good change.
A: I mean, essentially both one and three are really about developer acceptance: will developers find the API available enough to be willing to depend on it? That's the real question, right?
O: So, as I explained, apart from Windows it is available almost everywhere, as you saw. And on Windows, initially I was looking only at the MIPI-based cameras, like the Surface and the hero devices, the expensive ones, but the WinRT API contract applies to any camera system, so it would also work there. So I think availability is taken care of now.
B: Just to say, you mentioned that this kind of API is available to native developers, and there are some native applications that are using it. Maybe not a lot, but I know some applications that are doing so. If this API were brought to the web, maybe they would use it, or maybe they would use WebML, I don't know, but I don't think this API is precluding WebML in any case.
O: It's complementary. If you're a developer on Zoom or Meet, what you will do is of course try this API; if it meets your standard you'll use it, because this is going to be the most performant path. If it does not, then you take the rectangle and do your own stuff. We are just giving a pretty high-performance, complementary path, which you can decide to use or not.
O: Okay, so since there was one blocking concern, from Bernard: do you think there is anything we can do to unblock this issue and land this, so that we can start implementing the code and give developers a trial, like trying it behind a flag?
A: The segmentation change is fine with me; I think it could resolve that issue. The other two are really not blocking, in my opinion; they're really about developer acceptance, and that's not to be determined by what I think, it's determined by the trials and all that.
A: Okay, and it's not the working group's opinion either; I mean, that's up to the Chromium process, not the working group, I don't think.
O: Yeah, that's fine. What I meant was, once we land this PR we can start upstreaming the code. So the question "can we land the PR?" is the working group's opinion, I guess.
B: I think it would be good to add these comments on GitHub. If you could comment on the issues that one and three are not blocking landing the PR, but issue two is blocking it, then we can validate that the PR is aligned with that.
O: Right, so these three PRs have been in a bit of a limbo for some time. To refresh: they are just on/off switches to bubble up features offered on native systems to the web, so I do not see a variance of results within a particular native ecosystem.
O: The availability on Apple is restricted to the M-series. Google's ChromeOS team wants to origin-trial at least one and two with us, and Whereby were also pretty interested. With Zoom the talks are still going on; they are trying out some of the things which we have passed on to them.
O: Availability on Windows might be a bit low today, but that is one hundred percent going to change very quickly, within a few months. So what can the next steps be for these PRs? I know, Harald, you had a sort of opinion on the eye gaze correction part, but these days, with NVIDIA's own solution, things have changed.
O: Quite a few people were excited to see that feature. And I would just highlight that the ChromeOS team and Whereby have specifically asked for one and two for origin trial, definitely, and three they are evaluating. So, with this kind of information.
O: You said that Google did a user study where a big part of the group was worried; it looked too realistic, too spooky, because it was too realistic.
E: Because it was close enough to realistic that it only occasionally broke.
O: Yeah, things have improved since then; that was last year when we talked. But I'm specifically looking at the first two, the next steps for face framing and lighting correction, because there was a good ask from a team in Google to start an origin trial on specific boards.
E: I mean, the only way we learn whether or not it's reasonable now is to try it, yeah.
D: Well, yeah, I don't know that we can necessarily sign off on these PRs here in the meeting, right? I haven't looked at these in a long time, if I have looked at them at all, so maybe give us some time to look at them and provide feedback.
E: I think the current state is that eye gaze correction is a draft. So let's get to the others, or not. Can you?
O: Okay. Is there anybody within this group objecting, so that I can fix that before calling, or?
K: So, about the configurationchange event: the configuration of a track may change dynamically, outside the control of the web application. An example is when the user switches background blur on or off through the operating system. The application might want to know when that happens, but at the moment there is no direct way to do it. A configurationchange event would solve that problem.
K: The first question is whether this should apply only to tracks that originate from a camera, that is, getUserMedia tracks; it would maybe be odd if it did not also apply to getDisplayMedia tracks. The second question is: should configurationchange be a state event or just a plain event? Platform design principles may suggest a plain event, but it would be good for developers to have a state event anyway, and there is some research on that too.
K: So we would prefer that. The next question is: should all capability changes also trigger the event? I don't see an issue with that, so I would say yes. And the last question is: what kinds of changes should trigger the event? Most notably, the problem is that we don't want to trigger an event on every frame, so we should exclude settings which change automatically while in an automatic mode, for example while focus mode is in a continuous mode.
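The filtering rule K describes can be sketched as a predicate over setting changes. This is a sketch under stated assumptions: the setting and mode names (`focusDistance`, `focusMode`, `backgroundBlur`) are illustrative examples, not a spec-defined list, and a real user agent would define the exclusion per setting.

```javascript
// Settings whose values are driven by an automatic mode, mapped to the mode
// setting that controls them (illustrative mapping).
const AUTO_CONTROLLED = { focusDistance: "focusMode" };

// Decide whether a change to settingName should fire configurationchange,
// given the track's current settings. Changes produced by a continuous
// automatic mode are excluded so the event does not fire on every frame.
function shouldFireConfigurationChange(settingName, settings) {
  const modeName = AUTO_CONTROLLED[settingName];
  if (modeName && settings[modeName] === "continuous") {
    return false; // auto-driven change, e.g. focus hunting
  }
  return true; // e.g. backgroundBlur toggled via the OS
}
```

The background-blur example from the discussion passes the filter, while per-frame autofocus adjustments do not.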
B: Overall that makes sense to me. For question four, I think it's per source type: for a peer connection source the spec should say whether it fires or not based on this, for capture tracks it should be the same, and so on. The overall approach, where for automated settings we should probably not trigger on everything, kind of makes sense as a general rule, but each spec that defines a source should basically define this.
B: It should define when it triggers the event. And for question two, I guess I have a small preference for exposing something, but I'm fine either way.
A: Yeah, I think we're out of time now, so we need to move on to Elad's segment. Yes, okay, thank you.
P: Okay, I'll start, thank you. I'm here to talk again about auto-pausing MediaStreamTracks. Just a reminder of last time: we discussed the fact that it is now possible, in at least two different browser engines, to switch which surface you're sharing while you're sharing, and this is driven by the user.
P: In the case of Safari, the user can interact with the browser and say that they want to share a different window or screen, and in the case of Chrome you can share another tab. Next slide, please. Now, when that happens, the application might want to change something. There is a list of examples here, but let's focus on one, the one I've highlighted in red: maybe you want to change the cropping parameters, because maybe you're initially capturing your own tab.
P: The problem is that even if you get events when that happens, which you might get through Capture Handle, it might be too late to respond to them, because frames might already be on the wire by the time you handle the event. Using events alone, there is no way you could ever be fast enough, right?
P: So what I'm suggesting here is a mechanism for auto-pausing the track whenever something relevant happens. Next slide, please. We discussed this last month, and one of the proposals that came up, if I'm not mistaken from Youenn, was to expose this on the source, and I thought that was interesting.
P: But you can have multiple tracks from one source, maybe cropping each differently; you could do all sorts of stuff, and that means that maybe you want to auto-pause one of them but not the other. So I don't think there's a good reason to do this on the source, and there are other arguments, for example the fact that we don't actually ever have an object for the source at the moment. That's my counter-argument here. Additionally, the idea of exposing this on CaptureController also came up, and I think the same objection applies there.
P: So, next slide please. I propose this API, which is slightly modified from last time. I would ask you not to pick at it with a fine comb; we could change a lot of things here, so let's just look at the general picture. Basically, you opt in using setAutoPause. Normally tracks are not auto-paused, right, we're not changing that in any way, but maybe you want to opt in for some reason.
P: You get a promise back, and the promise resolves once you know that the auto-pausing, which might take some time to circulate through the system, is now on and you can rely on it. We can discuss this promise; if it's controversial we can get rid of it. Now assume you did that. You also want to get events whenever a track auto-pauses, so you've got the handler. Notice that setting the handler on its own does not actually trigger the auto-pause behavior.
P: These are two separate matters because of a design principle that says that setting event handlers should not have side effects. Once you've set that up, every time something relevant happens, for instance the top-level document of the tab you're sharing was navigated, or the user clicked "share this tab instead", or other things in the future, maybe configuration changes, you get notified.
P
when that happens, you get an event. You know what changed, and you, the application, have time to respond to the change. The response can be immediate, like within the event handler itself, or a bit later, because it could be that you need to communicate with the other tab that you're capturing, or that you want to communicate with the user; the sky is the limit. Okay, once you've made all of the changes that you need, you call something of the general shape of unpause. It can be a different name.
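The flow described above could be sketched as follows. This is purely illustrative: setAutoPause, onautopause, and unpause are placeholder names from the proposal slides, not a standardized or shipped API, and the mock class just models the described behavior.

```javascript
// Sketch only: models the proposed opt-in auto-pause flow.
// All names here are placeholders from the proposal.
class MockCaptureTrack {
  constructor() {
    this.autoPause = false;  // tracks never auto-pause by default
    this.paused = false;
    this.onautopause = null;
  }

  // Opt in; the returned promise resolves once the pausing machinery
  // has circulated through the system and can be relied upon.
  async setAutoPause(enabled) {
    this.autoPause = enabled;
  }

  // Stand-in for the user agent reacting to e.g. a top-level
  // navigation of the captured tab, or "share this tab instead".
  _sourceChanged(reason) {
    if (!this.autoPause) return;
    this.paused = true;                        // frames stop flowing
    if (this.onautopause) this.onautopause({ reason });
  }

  // The app calls this once it has finished reacting to the change.
  unpause() {
    this.paused = false;
  }
}

(async () => {
  const track = new MockCaptureTrack();
  await track.setAutoPause(true);
  track.onautopause = (e) => {
    console.log(`auto-paused because: ${e.reason}`);
    track.unpause();                           // immediate, or a bit later
  };
  track._sourceChanged('top-level navigation');
  console.log(`still paused: ${track.paused}`); // → still paused: false
})();
```

Note that assigning onautopause alone changes nothing here; only setAutoPause turns the behavior on, matching the stated design principle that setting event handlers should not have side effects.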
P
These are the clarifications; I think I've gone through them, and the next slide, I think, should say "Discussion", and I do indeed open this up to discussion.
D
Oh yeah, so the key feature here is the auto-pause, so leaving the events alone for a moment. I'm not sure that I agree that an event cannot be done in time, because clearly, if the concern is only that you get the event ahead of switching, you could do that: you could fire the event ahead of switching.
P
What happens if the user clicks "share this tab instead" in the meantime and the application just, you know, basically busy-hangs? Will it keep on getting new frames?
D
Well, in any case, the user agent could decide to wait to actually do the switch until the event is fired. I'm not saying that's ideal; I'm just saying that it is technically possible. I'm not advocating it, just making the point that it's not necessarily a given. But I think I have some general concerns, because I really love the feature that Chrome and Safari, I believe, have added where you can switch the source.
D
It puts the user in control, and there's an inherent tension here. Just like people have physical camera shutters: there have been API proposals to let apps know about the camera shutters, which I think would defeat the purpose of having a physical camera shutter in the first place. There's a tension here between
D
whether, if an app can detect that I'm changing an input through the user agent, it can also prevent the user from doing that, or impose some kind of restriction on it. I also have to deal with some apps that might do that. There's an inherent conflict here. I'm just saying there's an inherent conflict; I'm not saying there aren't reasons to do this.
P
Okay, but this is already possible. Nothing has changed, because it is possible for the application to detect this anyway; we're not making it that much easier for the application to actually figure out that you've changed the source. It was already possible, and the application could already try to pester you to, you know, switch back to the original source, etc. So all of that is important.
D
I agree with you that an application could go to some lengths to detect this anyway, but when it comes to APIs, it's really about what behavior we encourage from sites. So if we have an API that turns on auto-pause, a lot of applications might immediately turn that on because they get more control, and then we have to talk about the other side effects of that.
D
But let me be more specific. I'm not totally opposed to having some kind of solution to this problem, but I would echo earlier discussions about having it on the track. Let me focus on that location problem first. Having it on the track, you have to deal with video tracks and audio tracks, and also MediaStreamTrack is the endpoint not just for screen capture.
D
So I worry that with these APIs, setAutoPause, we're having screen-capture-specific things added to track, and then we have to think about what happens if I capture from camera, from microphone, from canvas, from Web Audio, from all these things. So I agree there's a benefit to having one track that has this behavior while another one doesn't, but I'm not sure that outweighs the complexity you're then imposing on web developers, getting other tracks, video tracks, synced, and I
P
Thank you very much; I would like to respond to that too. If you've got more ideas, you know, I don't want to hear all of them at once, because I won't be able to remember them. So first: in the past I've already proposed a couple of times that we actually need to subclass MediaStreamTrack to have different subclasses, one in the dimension of audio versus video, and one in the dimension of whether the user is sharing a screen, a window, a tab, or something else. So I think that if the concern is really that we're exposing an API that's only relevant to some of those, not all of those, then let's revisit my earlier proposal. But so long as we don't actually revisit that, we've got other examples of things that are only valid in some cases, and what happens is that if you call them in an irrelevant context, you just get an error, and that's okay. I think that we can do that here too.
D
Are there any other comments? I'd let other people have some input as well.
B
Yeah, so in terms of use case, that's fine, I think. On the API: track versus source.
B
You seem to have use cases for track. It would be good to document those on the GitHub issues, so that we make a conscious decision, because it's probably making things a little bit more complex. I don't know whether we need the flexibility of the current API, where you create a track and at some point you decide that it should auto-pause, and at some point you decide that it should not auto-pause.
B
So maybe we can say that when you create the track, that's the time when you decide whether you want it to auto-pause or not, because in that case the API shape might be simpler and smaller as well.
B
I also do not like the events there. It might be good to have a callback, and then you have another callback, or you use a promise, for instance, so that you delay until you're done, basically. That might be another pattern that we could explore in terms of the API, actually.
P
If I could, you know, one point at a time, because otherwise I get overloaded. So you've just mentioned that you want this at the point when we construct the track. We don't actually construct the track; getDisplayMedia returns it. Do you mean to say that you would like this to be part of the getDisplayMedia options?
B
That's one thing. I'm not saying I want this; I'm just saying I want to discuss this option, yes, because we are talking requirements here, and we need to understand whether the requirement to set auto-pause and unpause dynamically is a good thing or not. So we should make a conscious decision there, and based on that, we will go to the API.
P
So, in that case, I would say that I think the answer is clear from the example of clones, because you might want to auto-pause one of the clones but not the other. That means getDisplayMedia is not actually the right time to do it: getDisplayMedia is too early, and after getDisplayMedia resolves, you already have the track. So I think that we don't actually face a decision here; I think our hands are tied. It needs to be on the track.
B
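The clone argument can be sketched in code. This is illustrative only; the per-track flag below is a stand-in for whatever the final API exposes, and the class and method names are hypothetical.

```javascript
// Illustrative only: models why per-track (rather than per-source or
// per-getDisplayMedia-call) auto-pause matters once clones exist.
class FakeTrack {
  constructor(autoPause = false) {
    this.autoPause = autoPause;
    this.paused = false;
  }
  clone() { return new FakeTrack(this.autoPause); } // copies, then diverges
  setAutoPause(v) { this.autoPause = v; }
  sourceChanged() { if (this.autoPause) this.paused = true; }
}

const original = new FakeTrack();   // e.g. kept for a local self-view
const forWire = original.clone();   // e.g. sent over the peer connection
forWire.setAutoPause(true);         // only the transmitted clone opts in

original.sourceChanged();
forWire.sourceChanged();
console.log(original.paused, forWire.paused); // → false true
```

Because the two clones can only diverge after getDisplayMedia has resolved, a one-shot option at capture time could not express this; the state has to live on each track.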
It really depends on the API shape. Well, we'll discuss the API shape, but I wanted to discuss that, and also track versus source. It would be good if you documented the cases, and then we'll derive the API, and yeah, we should get to a decent API soon, I guess. Thank you.
P
I think that I've presented use cases before, but I can document them in writing if that helps. And, just a second, I'm sorry that I cut you off before you made the second point that I was afraid I would forget.
B
Yeah, so again it's the API shape, where you are adding an event, you are adding
B
an attribute (originally an attribute) and adding two methods, and I would hope that we can come up with something simpler. As I said in the past, a service worker, for instance, fires an event, and on this event you can delay the resolution of the event, and whenever the event is no longer delayed, then you would be unpaused, or something like that.
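The service-worker-style shape alluded to here is the `ExtendableEvent.waitUntil()` pattern. A rough model of how that could look for auto-pause follows; the event and dispatch names are hypothetical, and only the `waitUntil` idea is borrowed from the real service worker API.

```javascript
// Models the ExtendableEvent.waitUntil() pattern from service workers
// applied to auto-pause: the capture stays paused until every promise
// passed to waitUntil() settles, then the user agent unpauses it.
// No separate unpause() method is needed in this shape.
class FakeAutoPauseEvent {
  constructor() { this._pending = []; }
  waitUntil(promise) { this._pending.push(promise); }
}

async function dispatchAutoPause(track, handler) {
  track.paused = true;
  const event = new FakeAutoPauseEvent();
  handler(event);                         // app registers its cleanup work
  await Promise.allSettled(event._pending);
  track.paused = false;                   // implicit unpause
}

// Usage: the app delays unpausing until its own async work is done.
const track = { paused: false };
dispatchAutoPause(track, (e) => {
  // e.g. notify the captured tab, or ask the user
  e.waitUntil(new Promise((resolve) => setTimeout(resolve, 10)));
}).then(() => console.log(`paused: ${track.paused}`)); // → paused: false
```

This collapses the proposal's event handler plus explicit unpause() call into a single event, at the cost of tying the unpause to promise settlement rather than an explicit app decision.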
P
You've got an alternate API shape that you could lay out in the issue. If it solves my use cases, of course I'm going to be happy with it.
P
I've not found a way of doing that, at least not without doing a transform that sometimes drops frames and sometimes does other things, and even that does not really work for me. But of course, if you've got a solution here, you're welcome to post it on the GitHub issue, and if it works, then it works. Okay.
D
Yeah, just to reiterate: maybe, if you could have a timecode or something on a video frame, there might be a way to do this en route without necessarily stopping everything, and that might give a better experience and give you all the more control, perhaps. But, and I just wanted to make this point before Tim brought it up, there's also the idea that "pause" here is going to be a little confusing in the MediaStreamTrack space, because we also have other issues: we have enabled/disabled,
D
we have muted/unmuted, and now we're going to have pause and unpause, not to mention that we're talking about having other unmute methods, and then unpause. So that's part of my reasoning: I would love a simpler API, just to echo that, and I was hoping moving it to the controller would alleviate some of that. But maybe renaming could help as well, calling this something else; maybe it's a type of configuration change, or capture change, or something like that.
P
So here's my problem: if we have a good reason to put it on CaptureController, of course we should. But right now it seems like the reason is elegance, you know, not reusing names, or not introducing more names into an already crowded namespace.
P
But in terms of usability, you know, wanting to do different things with different clones is compelling to me, and because of that, I would like to push back on putting it on CaptureController.
D
Understanding that specific use case would be good; it seems like an optimization to me. Yeah, you said that some sources would, for instance, not have to intervene, so maybe the self-capture view wouldn't flicker if this is only needed for sending over the wire. Is that the
P
It seems like it's more future-proof to say that clones are independent.
A
Yeah, so folks, we're over time. Do we have any next steps for this, or things we need to follow up on?
P
Yeah, I would actually like to understand that, because from my end, this is the second time I'm presenting about this, and it's not clear to me if there's any action item left on me.
B
What I think is that there are individual topics, like sub-issues, like track versus source, so we should get to a decision there. Maybe not today, but maybe we should have a specific discussion on GitHub and then end up with a decision. And there are also other decisions on the requirements; we should discuss those as well. And the API shape, I think, is also part of that, though I would guess the API shape should come after we finish these earlier discussions.
P
I don't think that helps unless we also expose, you know, not just the timestamps. I don't understand how
B
the timestamp helps you. Yeah, group targets, for instance, would not be visible with a timecode, so you
A
Okay, well, I think we're done for today. Thank you, everybody, and we'll see you next month. Thank you.