From YouTube: WEBRTC WG VI April 2020
Description
W3C WEBRTC WG virtual interim meeting on April 28, 2020
B: Thank you, Henrik. Okay, there's an IRC channel to do that in. We are recording this meeting right now, and there are links to all the drafts in case you want to get more info. So here's what's on the agenda today: Harald will talk a little bit about insertable streams; as I mentioned, Sam will talk about an extension to content hints; then we'll have several issues on media capture and capture output. That'll be the rest of the meeting. All right, Harald, over to you.
A: Here we go. So, insertable streams: this is a project that we started in order to address some of the use cases from WebRTC NV, and the particular shape of the first version was given by needs at Google. But the point of the insertable streams model is that we should be able to give JavaScript more power over what actually goes on the wire. Next slide: just a reminder of how WebRTC media works.
A: We chose to focus on the encoded frame, or the encoded run of samples in the case of audio, and say: okay, now it's going out towards the wire; you have the chance to look at it, manipulate it, do something with it, and then have it sent over the wire. And similarly, on the incoming side, you get what came across the wire, shaped like a frame, and you get to say what to do with it before handing it back: can you take care of that, media pipeline? So post-encode, pre-decode is the point where this was implemented.
A: So what we have is in Chrome Canary; it's being tested with applications that do web group calling, where there is a pre-existing end-to-end encryption solution that we just want the web client to fit into. We have the Jitsi Meet link to an article that actually came out first with something that people can try with encryption.
A: Okay, here's some information that's interesting: for AR, augmented reality, applications there are effects that you want to apply at the receiver, but the sender is the easiest place to figure out what needs to be done, and performance seems to be adequate for those purposes with insertable streams. That is the bottom line, which I was very happy about.
A: It's got a field called type, a field called timestamp, a field called data, and we've got what's still the most open part of it, which is additional data: things you need to know about the frame in order to do the right thing. We're not sure what that has to be yet, but we know that there has to be something there. Next step.
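For concreteness, a minimal pass-through transform that just looks at those fields. The names here follow the experimental Chrome shape under discussion, not a finished interface, and audio frames may lack `type`:

```javascript
// Sketch only: field names per the experimental Chrome origin-trial shape.
function inspectFrame(encodedFrame, controller) {
  console.log(
    encodedFrame.type,              // "key" or "delta" for video frames
    encodedFrame.timestamp,         // RTP timestamp of the frame
    encodedFrame.data.byteLength    // size of the encoded payload
  );
  controller.enqueue(encodedFrame); // pass the frame through unchanged
}
```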
A: So this is the core of the demo that you can see on the WebRTC samples. It says: okay, you create the transform function, you connect it to the encoded video streams that you get from the encoder, and therefore what comes from the encoder will be sent through that transformation function before it goes out to the packetizer and across the network. It's actually that simple.
A: So next slide. It becomes very interesting when you use workers, because of another new feature called transferable streams, which means you're able to take the stream that comes out of the encoder and pass it to a worker, and then the platform will actually send the frames to the worker through that stream, and your main-thread JavaScript doesn't have to see the frames at all.
A: This actually has two nice features, as well as performance. The other one is that it is a very easy separation of concerns, because you can then say that all the transformation stuff and all the setup you need to do for the transformation itself can be done in the worker. And you can imagine that the worker could be something that you pick up as a component from someone else, or you could even say that this worker actually conforms to a standard and is just given a URL defined in the standard.
A: You can sign up for an origin trial, or just turn on the experimental features in the browser if you have Chrome 83, which is currently in beta. We're working to synchronize with the way WebCodecs defines frames, so that these two ways of addressing these things are actually handled the same way, and we want to look at how we can extend the same concept to raw media, where frame handling is obviously more of a problem.
F: So yes, a couple of things. First, I had a question about the use case: it says the application must be able to insert processed frames into the outgoing media path. Does "processed frames" mean access to raw media or access to encoded media? Because this API, just to be clear, only exposes encoded frames, right?

A: Yes, there's no access to raw data.

F: Yes, that's right. So...
F: I see. I think I see two problems. One is: I think Mozilla is still forming an opinion on a position on the spec as far as it being used for end-to-end encryption, so I expect that will be forthcoming. I think we have some concerns whether this is true end-to-end encryption or a sort of poor man's version, since there's no key management, and I believe in its current shape that's the only thing... well, it can be used for that, and it can also be used...
A: ...it's another way of looking at media; it's got its own special synchronization properties and its own special requirements for synchronicity, which are applicable for some web applications and really painful for others, so I think that's different tools for different purposes. I thought about exploding this off into tracks, but I think of that as opening up the sender and receiver objects and seeing the pieces inside, and I think that's a larger design effort.
A: So I think eventually we will explode the sender and receiver objects, to say that they are actually a chain of objects with connections between them. Actually, there are things that go in both directions inside the objects, which makes it more complex than a stream graph.
H: Harald, this is Max from Microsoft. First of all, I guess: great work; this is a very awesome feature, and I think everyone was looking for it, so thank you there. And I guess you folks managed to implement it very fast as well, which is awesome to see. But here's the question, right: I guess you also highlighted that one of the major questions here is, or was, performance, and here you mentioned that some measurements happened and performance is adequate, right, given the current limitations, of course. So can you be a bit more specific?
A: It is a bit of a stretch to say that we have done measurements. What we have actually done so far is to put up a PeerConnection with this in it, wave our hands, and see that it does not break up and it does not look horribly slow. So measuring how many milliseconds of extra delay we incur is something that's definitely on the list of things we have to do. Anything else?
I: Well, in the testing done so far, it has been doing all of the end-to-end encryption: all the operations to actually do the encryption, and dealing with how to efficiently handle the frames to try to reduce frame copies in the internal system. Those have been the main performance issues we have seen so far.
J: Currently we have a pipeline that has one observation point, which is the output of the encoder and the input of the decoder, which makes it very easy, or very flexible, in the way you can optimize it. For instance, you could think that if you're doing end-to-end encryption and you have keys, you do not want these keys in the web page's process.
J: So if you look at the graph there, you might want to put transport, network adaptation and encoding in a process that is not running JavaScript; you use the PeerConnection API to modify it, but you do not run it in the process that is running JavaScript. On the other hand, the display, the pre-processing and so on is really tied to the application, so there it's fine to put it in JavaScript. So adding one entry point in the middle of that block is something we need to be very cautious about.
J: I think that we really want to be sure that there's no power lost, at least, and I think that we want to be in a world where, maybe temporarily, it's fine to have an entry point there; but in the future, for things that are really important, like end-to-end encryption, we actually want that to be implementable by the browser and not by the JavaScript application.
J: So this proposal makes me worry if we are just stopping at: oh yeah, we expose the API now, end-to-end encryption is implemented in JavaScript, and that's good enough. I don't think it's good enough, and I don't want to stay just in the middle of the bridge. I want to go to a world where we have an end-to-end encryption spec that is a standard, every browser is implementing it, and applications can rely on it.
J: I would prefer that to be left to the browser at the end of the day. Just like currently for DTLS we are saying that some crypto is not good enough anymore, so we want to obsolete it, I want browsers to still have this ability of saying: oh, for end-to-end encryption we no longer want that crypto, it's not good. And if we do that in JavaScript, my fear is that we will not get there.
A: If the structure is right, it's possible: if we get consensus on the component that is able to do the key management and the encryption, then it's possible to use this architecture to invoke such a component. But waiting until we have the architecture, I'm afraid, will leave the users exposed for another five years.
B: A question about the synchronization with WebCodecs, Harald. So in WebCodecs we've been looking at a couple of things that are maybe slightly different, I don't know. Instead of an ArrayBuffer, we've been talking about potentially a handle to a GPU buffer, which is probably more useful for the raw video case; I can't think of a use for the encoded video case.
A: In this case we can say: oh, as long as you don't violently break the contracts, it will sort of work out okay. So we can get away with not solving that problem in the first instance, but it's one of the things that I've been scratching my head about, how to do right. I imagine that we might have to expose the stream of feedback signals and stats, so that the web application can also insert itself into it, but we don't have even a sketch of that one yet.
B: Are there things that we could be doing, Harald, based on the existing experimental trial, to try to get at some of the performance issues? Like, I thought of maybe a demo that tried to simulate the CPU cost of some of the raw video use cases, like the funny hats, just to see. One concern I had is that the crypto demos are in some sense easier, because they're more likely to have cache hits than, say, a machine-learning kind of thing, which could take...
A: One of my first thoughts was to at least place instrumentation in the demo, to be able to say: okay, this frame arrived at the receiver ten milliseconds after it departed from the sender. There are a few very simple steps you can take in order to get measurements, and then we can get those steps actually contained in the test fixture and run as benchmarks.
J: Yeah, so somehow this API proposal is really about metadata that you want to put on the encoded frames, and also end-to-end encryption. So, for instance, with end-to-end encryption you will probably not hide the size of the frames, so there's no issue about the bandwidth, but you...
K: So our proposal was to add a new content hint for audio tracks called "speech-recognition". This will allow developers to say what their tracks are going to be used for. It also allows the platform to make changes based on that flag. The pros here are that this is a minimal amount of change:
It's only adding a content hint, and it only really modifies that one draft, although we will have some defaults for what we set different constraints to. And this will allow for prototyping and gathering feedback from developers, in the sense of an origin trial in Chromium. The cons are that, I believe, content hints are not used by all browsers, right.
K: So maybe "speech" is something more like communication, and then "speech-recognition" is maybe something more like dictation; that could help differentiate between the use cases specifically.
K: But in order to get that to happen, you have to specify that you want to do speech recognition, and so this is kind of allowing for that to happen, if the implementer wants to do that. Other than that, the other kinds of optimizations that happen are along the lines of what's listed here, but they haven't necessarily been standardized across platforms.
A: So the next step is to merge the PR. If the group says that it's okay to merge it, then it merges, and then we see if we can get enough iterations in when that time rolls around. If the group comes to consensus that this is a bad idea, then we have to drop it. I think it looks okay now; I think we should merge it, yeah.
G: If I recall correctly, I think the pushback the last time this was presented was that it was vague what it was doing, because there was this catch-all thing, and I think this content hint is the same; it's just that placed as a content hint it sounds more appropriate. But regardless of whether it's merged, my question would be, I guess: do we have a well-defined list of what it's supposed to do, or is it up to the implementation to figure that out?
K: So the specs do help in describing what exactly the difference between the two different types is, and ideally things should go according to that spec. In terms of implementation, I think part of it is that if there weren't an implementation, it wouldn't be possible to see exactly what's going on; but the specs do define what the difference between communication and speech should be.
F: Right, so I'm going to start out with a quiz. This is a time-warp quiz, back to 2014 or 2018, and the question is: the devicechange event fires when? For people who haven't looked ahead on the slides, they can guess: (a) the user inserts or removes a device; (b) the enumerateDevices list changes; or (c) those are the same thing. Anyone want to guess? I guess everyone's looked ahead, so you can go to the next slide, and the answer is: those are the same thing. And so, at the bottom...
F: I have two users. The one on the left is basically looking for an end user to insert a device: they have a communications app where they want, as an app choice, to switch to a new device the user inserts, on the thinking that they probably want to use that device. On the right-hand side there's a different user, caching the enumerateDevices list; they have their own cache for certain reasons, I'm not sure why. Next slide.
F: Because now the list is populated with more things, but this user doesn't know whether anything was inserted or not. So they can't take it as a strong signal that the user probably wants to use this device because they just inserted it, and the guy on the right is wondering: well, is my cache stable or not now? So next slide: sad customers there. This is just a recap of how JavaScript under the old spec might have detected new devices, and it's basically by doing a simple subtract.
F: The two use cases for this are basically before and after the initial getUserMedia. Before you have permission, the user may have no devices, which means you're not supposed to show them a feature: if they don't have a camera, don't show them the camera button. Then they insert a device, and suddenly they now qualify for the feature.
F
Excuse
me,
and
after
done
it
maybe
during
a
call,
the
user
inserts
a
preferred
device
over
the
one
they
have
now,
and
this
is
kind
of
broken
because
we
no
longer
have
device
IDs
free
gum.
So
there
are
cases,
an
issue
where,
if
you
were
to
just
look
device
IDs
after
gum
you
it
would,
you
might
just
have
a
single
camera,
but
before
gum
it
had
no
device.
Eddie
in
Africa
MIT
has
a
device
ID,
so
the
algorithm
of
Bob
will
say
that
now
there's
a
new
device.
So
that's
not
great.
F
So
there's
no
right
way
around
that
apps
doing
device,
detection
will
are
broken
and
will
have
to
update
and
basically
what
the
minimum
they'll
have
to
do
is
probably
call
a
numerate
devices
again
right
after
gum
success
to
update
their
cash
list.
Otherwise,
they're
going
to
suffer
pause,
false
positives
next
slide.
F
So
let's
see
what
the
spec
says
today,
device
change
says:
the
device
device
and
fight
the
ax
men
should
fire.
When
the
set
of
media
devices
available
to
the
user
agent
has
changed
and,
interestingly,
it
does
not
say
to
the
applet
available
to
the
application
it
says
available
to
the
user
agent,
meaning.
F
In
the
system
got
inserted
things
like
that,
but
then
it
also
says
elsewhere
when
the
new
media
or
input
and
or
output
devices
are
made
available,
which
is
passive.
Language
is
hard
to
read
into.
Is
it
made
available
by
the
user
agent
itself?
Words
made
available
from
the
oweth
to
the
user
agents.
So
again,
the
use
case
that
I
claim
is
important
to
support
is
inserting
a
device
as
a
strong
signal
that
they
want
to
use
it.
F
So
now
we
have
problems
where,
if
you
now
fired,
I
might
change
whenever
the
government
changes
that
this
can
be
indistinguishable
from
a
user,
inserting
a
device
at
that
very
time.
So
you
have
a
race
we
get
using
media
and
inserting
a
device
as
if
that
developer,
I
cannot
determine
whether
this
is
a
new
device,
so
it
let's
say
I
follow
the
algorithm
I
figure
out
it's
a
new
device
that
wasn't
there
before.
F
As
far
as
I
know,
but
and
I
you
know,
I,
don't
know
whether
that's
actually
happened,
because
they
usually
did
something
or
whether
that's
an
artifact
of
our
new
model.
So
I
don't
know
if
I
should
switch
to
it
immediately
and
if
they
didn't
insert
anything
I,
don't
want
to
accidentally
switch
thing
to
a
secondary
device
because
they
just
joined
a
call
with
a
primary
next
slide.
F
So
I
don't
see
a
way
around
this
other
than
trying
to
this.
The
single
event
in
2018
cannot
can
no
longer
support
both
those
users.
So
the
only
the
changes
and
also
apps
will
still
need
to
update
to
call
the
new
rate
devices
after
getusermedia
anyway,
or
this
is
not
good
or
they're
going
to
get
false
positives.
So
I
have
proposal
a
which
is
the
law
of
the
JavaScript
list
to
change
without
an
event
when
getusermedia
succeeds
and
then
only
fire
the
device
change
event.
F: ...when devices are added or removed by user action, or I guess OS action; you could argue that, there could be some discussion. Proposal B: alternatively, we can say that only changes to the JavaScript-visible list cause devicechange to fire, meaning that if you call enumerateDevices you'll see a difference. That would actually mean not firing devicechange in a lot of cases where we fire it today, because before you call getUserMedia the spec actually says to fire the event.
F: So the proposal would be: you would only see that, excuse me, if you go from zero to one device; and then we would invent a new event, called deviceinserted or newdevice, fired if one of the devices was actually added by user action. There's no proposal for removal, because that's not really interesting; there's no use case to detect that. Then I can take Q&A.
J: We shipped the behavior of firing the devicechange event after getUserMedia from the start. Our fear was that, since we were the only browser doing filtering before getUserMedia and not after, websites might not always call it, or be stuck with "oh, there are only two devices" or whatever; so we decided to fire the event to be sure it was working.
J: I looked at a few websites in Safari, and I can see that some of the websites show UI when I plug in a device, but they do not show that UI until I have called getUserMedia, so they are doing some filtering to work around that bug; somehow they managed to handle it properly. We have not received any feedback saying "oh, you should change it", and we have not changed it. I think that we could actually change it.
G: So if we do proposal A, devicechange can continue to do what it was supposed to do from the start, and it's also deterministic. You already have an "event" for the first time you need to call enumerateDevices, because that's the promise being resolved, so you don't need a very similar API that does almost the same thing, I think. However, that would be assuming we don't break things by doing proposal A.
H: For proposal A: isn't getUserMedia not the only way to actually get access to device labels? The user can deny in the prompt, and then go to the omnibar and allow from there, which actually means that we should start exposing labels from the get-go, right? So when is a good moment for devicechange to fire, for the application to understand that permission is there and device labels are available? There's no very explicit mechanism.
F: The only thorny part: this used to work well before we started. For privacy reasons we're only removing knowledge of devices, the hidden devices, so the only problem is when devices become unhidden: it could look like devices were added. There's no problem the other way, where things start out known and then become hidden, so the devicechange event already covers that: you can already tell when things are removed from devicechange.
H: A list of size one initially, regardless of the number of devices; and then I remove my active device, right? The list still has size one, right, or will it say zero? Then, yeah... well, say I have multiple devices, right: I have two devices, and without permission I will see one device, right? And after removing one of the two devices we'll still see one device, won't we?
F
That's
correct,
yeah,
that's
that's
a
privacy,
and
that
is
also
we
have
the
same
issue.
If
you
add
a
device
before
gum
because
before
gum
you
can,
if
you
don't
have
permission,
we
don't
allow
you
to
see
multiple
devices
and
respect
currently
says
to
the
fire.
The
device
change
event,
even
even
if
you
insert
a
secondary
device.
But
when
you
look
at
the
list,
you're
still
going
to
see
that
you
don't
have
one
device.
So
that's
the
same.
Yeah
I.
F: So if we go into detail, I think the browsers would actually have to be careful: if the user inserts a device right around the time getUserMedia is called, they probably want to wait to surface that until the JavaScript has had time to process the successful getUserMedia call and enumerate devices, and then fire the devicechange event, which they're allowed to do; but there's not a lot of help in the spec at the moment.
F: Well, as Paul said, the problem is that the enumerateDevices list right now is not allowed to change without the event, so proposal A would be to allow it to change. Right now you can call enumerateDevices and you can tell whether you have zero or one of a device, right? So the use case there is the one I mentioned: the user does not have a camera.
A: Then, when we wake up, we should not start sending media until the user has had a chance to interact with the machine, for instance by unlocking it, and then we should fire unmute. We do that; but what we still haven't thought about is how to specify it, and I don't know how, so that's where we are. So the questions to the group are: is the suggestion reasonable? And it's just a suggestion.
F: I have a couple of concerns here. One is that I don't think the machine actually suspends in Chrome, so I don't think it's about suspension. I think if JavaScript is fully suspended, or the OS is fully suspended, then there would be no time to fire muted or anything like that. I think that's what's happening, right; so I actually opened a separate issue.
F: The other thing is that you can lock up your machine and also not suspend it, right. So what I came up with, just from observation, is that there are some decision points in audio playback: when you close the lid, if you're playing YouTube or whatever, audio playback will stop. So one idea I had was to piggyback on that: basically, if audio output ceases, input should also cease, and mute should be fired in that case.
G: So what happens with user-chooses is: if there are multiple devices left after constraints processing, you will be shown a picker by the browser UI, where the user selects the device, instead of letting the user agent decide a default for you. But then, if we have this user-chooses UI, it makes me question required constraints a bit, because right now, you know, if you have two cameras, one camera is facing me and the other camera is facing my room...
G
So
the
the
main
proposal
is
that
if
you're
using
the
user
chooses
constraint,
then
you
know
you
can
still
use
the
vas
ID
to
filter
at
devices.
But
if
you
use
any
other
constraints,
then
if
using
those
constraints
results
in
reducing
the
set
of
devices,
then
you
should
ignore
them.
So,
for
example,
you
can
force
the
HD
setting
on
a
device
that
supports
both
HD
and
and
low
resolution,
but
you
may
not
exclude
the
device
from
the
set
of
devices
that
may
be
selected.
G: This could be used, for example, to toggle between a front camera and a back camera, but you could hide whether other devices are available. So in this case you could have a fake device ID that is just a stand-in for other devices, and then you can use that to re-prompt. And then the most aggressive flavor, which seems far in the future, is to always rely on the user-chooses picker: basically deprecate device IDs, and then the only way to select a device is with the user-chooses approach.
G: So where are we heading with devices and device IDs? Are we intending that user-chooses is the new way of choosing devices, in which case we should expose as few device IDs as possible? Or are we planning that we want things to basically work the same way as today, even if we do have the user-chooses API?
F: My feedback is: I think this presents a false choice, because it sounds like you're saying that because we're going to have an in-browser picker, we should not support required constraints anymore, and I would love to keep those questions totally separate. I think there are thorny questions in that.
B: Now, today, in the existing model, that's not a concern. Why not? Basically, in the screen-capture model, that's what we were afraid of, right: that the app could figure out what applications were running and do something devious to get the user to share their banking app or something, right? My question is whether that is a concern here, although things are not alike; we have to kind of decide what the security threat is.
G: Maybe, but we don't need to abandon the required constraints. My understanding with user-chooses is that it's used when you need a tiebreaker, right: there are multiple options, so instead of picking the default we ask the user. So we can leave it at that, which is fine, but this is where I want to know where we are heading. Because if we do have this picker, it does seem to solve a lot of things that are currently solved by the application...
G
It
seems
to
serve
to
some
extent
the
same
purposes,
but
the
old
way
of
doing
things
which
we
could
continue
to
support.
But
the
old
way
of
doing
things
does
seem
to
have
this
privacy
concerns,
so
it
would
be
good
to
know.
Do
we
want
to,
in
the
future
continue
down
that
path,
or
do
we
want
to
try
to
step-by-step
limit
the
number
of
devices
and
more
rely
on
user
chooses?
G: And we do need to know, if we do implement the picker, whether we should filter the list of options shown to the user or not. So I don't know how we could implement user-chooses without saying either: yes, we do want to filter the set of devices, which we kind of do today; or no, we don't want to do that. So the...
F: I think the option I like the least is to change too much based on user-chooses, because then browsers would fall victim to the fact that web developers usually want to test with one browser, so it can actually be hard to operate in that world for other browsers. So if there's...
B: Anyway, yeah, I think that... okay, so that's the next step: there's an experiment. Yeah, I'd like to apologize for not being able to get to the rest of this; I guess we'll try to schedule another interim to get through the rest of the slides, but I do think we accomplished something today.