From YouTube: W3C WEBRTC virtual interim April 14 2019
A
W3C virtual interim, April 11th, 2019. Okay, so here's all the issues that we are hoping to get through today. We've got six on WebRTC-PC, a bunch of them relating to simulcast, two on screen sharing, and then three, really five, relating mostly to privacy, that Youenn is going to go over, relating to Media Capture Main, and then, if time permits, and it may be quite tight.
B
Yeah,
okay,
so
should
we
rename
whoever
TC
WP
key
folder
to
we're
at
CPC,
so
in
the
blue
BT?
There's
a
trend
to
Rene
to
have
the
top
folders
match
the
name
of
a
stack.
The
name
of
a
spec
is
Weber
TCP,
C,
so
I
think
we
should
try
to
do
that.
We
are
also
trying
to
link
the
web
at
CPC
spec
to
the
WPT
test,
so
it
makes
sense
also
to
have
like
this
matching
in
the
folder
name.
B
So there are some advantages in terms of tooling and in terms of consistency. The downsides I see are that there are some existing WPT PRs that might actually need to be rebased, and there might be some work to be done in the continuous integration system to actually handle that.
C
It comes up all of the time, so I was thinking that we could write something like: if there's congestion, each layer should be given a reasonable share of bandwidth so that it remains useful. For instance, if you really reduce the frame rates, then all channels should be reduced proportionally; you shouldn't see the 5 fps channel go to zero and the 60 fps channel going on at 60.
F
Or vice versa. So isn't this also why we have simulcast? So, for example, a different alternative might be that you give precedence to the lower-quality layers, because they take up less bandwidth and you can have them at high quality, and the SFU would recognize that it's not getting all the frames it expects on the higher layer and it would forward different layers. Isn't that more consistent with what SFUs look out for and achieve? Yeah.
C
I do think that an SFU might have a big problem in detecting that case, when it gets both the high layer and the low layer and it just happens that the high layer is coming in at worse quality than the low layer, I mean. How can it tell, I mean, if the high layer goes away? It's-
C
You wouldn't want to lose synchronization anyway. Remember that the common case is that you're talking to two people, and one is on the phone and one is on the desktop; the one on the desktop gets the high layer and the one on the phone gets the lower layer, but both of them want to see you also.
F
With that said, I feel that this issue only arises when something, you know, something changes in the network quality so that you no longer have the ability to send all the layers with full quality, right; otherwise, yeah, don't send, you know, don't send them, I guess, if you can't send it, even so.
J
If the envelope doesn't fit, we're back to square one. Like, if it's one megabit, half a megabit, and 250 kilobits per second, and you have only one megabit, what do you do? I think that was how I understood it: the problem is that you have one megabit available and you have three layers, which are 1.75 megabits together, so do you proportionally go down, or do we drop the lowest? Like, yeah.
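The proportional option being discussed can be sketched as follows. This is only an illustration of the arithmetic (1 Mbps, 0.5 Mbps and 250 kbps layers against a 1 Mbps budget), not an algorithm from any spec; the function name is invented.

```javascript
// Illustrative only: scale simulcast layer bitrates to fit a budget
// proportionally, so no layer is starved while another keeps its full rate.
function allocateProportionally(layerBitratesKbps, budgetKbps) {
  const demand = layerBitratesKbps.reduce((a, b) => a + b, 0);
  if (demand <= budgetKbps) return layerBitratesKbps.slice(); // everything fits
  const factor = budgetKbps / demand; // e.g. 1000 / 1750 in this example
  return layerBitratesKbps.map((rate) => rate * factor);
}

// The example from the discussion: 1000 + 500 + 250 kbps into 1000 kbps.
const scaled = allocateProportionally([1000, 500, 250], 1000);
```

The alternative raised in the discussion, dropping the lowest (or highest) layer outright, would be a policy choice layered on top of a calculation like this.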
E
Requirements to the application, and, you know, as such, it's up to the user agent to make the best of it. So clearly, if I have two layers and the first layer is, like, scaled down by 16 or something crazy and the second layer is scaled down by one, I would hope that a browser would allocate more bandwidth to the second layer than the first.
D
Well, I don't think we need to spend too much time on this. Basically, the stats have so far not reflected simulcast; we've sort of assumed that you have a sender, then we have an RTP stream, and that's it. But with simulcast you have multiple RTP streams per sender, so the solution is to have multiple RTP streams, in this case RTCOutboundRtpStreamStats, per sender, and the only problem is-
D
We haven't been very consistent with where we place some things. Some of them, like frame width or audio level, have been placed on the sender or the track stats, but those properly belong to the RTP streams, because each RTP stream would have a different resolution and so on. So the bad news is that we have to move some stats that have been there for a long time, but if we do that, we'll have a picture that reflects what's going on.
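The per-stream shape being described might look like the sketch below. The record layout is a rough mock of getStats()-style entries for illustration only, not the actual RTCStatsReport shape.

```javascript
// With simulcast, several outbound-rtp entries can belong to one sender,
// so per-stream fields like frameWidth live on each entry, keyed by SSRC.
function outboundRtpBySsrc(statsEntries) {
  const bySsrc = new Map();
  for (const entry of statsEntries) {
    if (entry.type === 'outbound-rtp') bySsrc.set(entry.ssrc, entry);
  }
  return bySsrc;
}

// Mock report: two simulcast layers from one sender, plus an unrelated entry.
const report = [
  { type: 'outbound-rtp', ssrc: 1111, frameWidth: 1280 }, // high layer
  { type: 'outbound-rtp', ssrc: 2222, frameWidth: 320 },  // low layer
  { type: 'codec', payloadType: 96 },
];
const streams = outboundRtpBySsrc(report);
```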
J
The stats issue was: how do you correlate stats on the receiver with stats generated at the sender? One thing is the SSRC. So if we go through the next few slides, it's an animation of some sort: user A generates SSRCs H and L, so those are the source SSRCs, high and low. They are sent to the SFU, which, going on to the next slide, sends B1, which is being received by user B, and now, due to network congestion or conditions, the SFU might be swapping H and L.
J
And about CSRCs: they're not normally used by SFUs, but they're used by MCUs quite often, yeah, and in that case they actually take a bunch of videos and mix them. That means they crop them, change their width and height, and merge them all and send it down as one, with the CSRCs. So it makes sense for them to do it.
D
Yeah, so I don't think there's anything we need to do, and I think, if you want to deliver information about what stream you're sending, from the SFU, you'd probably use something like, okay, I think you'd use CSRCs, but it would be up to the SFU, so, right, we can't mandate it, but we can recommend it: hey, this is a good idea. Yeah.
A
Yeah, remember it could be an MCU, right; it's still, the debugging is the debugging, but you've got the timestamps, you've got the CSRCs, you've got packet loss and other stuff. You can combine that for a lot of different analyses. I don't see anything wrong with it, you know.
J
So one simple thing that came up was that, with layer dependencies, we were asked to expose what layer dependency or scalability mode is being used. So if you have that literal scalability mode somewhere, we would expose that, and then the next question would be, typically what we see is the variation in resolution and frame rate, so, next slide, which would be, so, a scalability mode. I think this is non-controversial; if everyone agrees, we can say yes to this one.
J
For a while, and we thought that we've done this for DSCPs before, where we have packets sent and each packet has a different DSCP code point, or may have a different DSCP code point. We could actually do the same thing here. We could have counters per packet and per frame resolution, which would say, like, here are the resolutions that were available in the SVC.
J
So slide 22 is about counters for frames and packets, and slide 23 is about frame rates. Instead of talking about frame rates, we are actually talking about inter-frame delay here, see; just on the sender side, you see what the frame delay of the current packet or frame was compared to the previous frame, and you basically count up the appropriate frame-delay buckets or bins. So-
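A minimal sketch of the binning idea being described, with made-up bucket boundaries; the real counters and their bin edges would be whatever the stats spec ends up defining.

```javascript
// Count each frame's delay since the previous frame into histogram bins.
function binInterFrameDelays(timestampsMs, binEdgesMs) {
  const counts = new Array(binEdgesMs.length + 1).fill(0);
  for (let i = 1; i < timestampsMs.length; i++) {
    const delay = timestampsMs[i] - timestampsMs[i - 1];
    let bin = binEdgesMs.findIndex((edge) => delay <= edge);
    if (bin === -1) bin = binEdgesMs.length; // overflow bin
    counts[bin] += 1;
  }
  return counts;
}

// A ~30 fps sender emits frames 33 ms apart; if an intermediate node drops
// every other frame, the receiver sees 66 ms gaps instead.
const senderBins = binInterFrameDelays([0, 33, 66, 99, 132], [40, 70]);
const receiverBins = binInterFrameDelays([0, 66, 132], [40, 70]);
```

Comparing the two histograms shows the shift from the 33 ms bucket to the 66 ms bucket that the discussion uses to illustrate a dropped temporal layer.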
C
It worries me that, well, we still have no idea how, and this does give, more or less, a version of what sizes you're getting and the times you are getting frames at, but it doesn't say anything about the scalability properties that you wanted to achieve.
C
For instance, if you're sending in L2T3 and, I mean, the intermediate node decides to drop a frame in the T3 layer, then that would mean that you would count up in 33 instead of 66, no, 66 instead of 33, because you waited twice as long as usual, right. But you can't use that to diagnose that you actually had a drop of that layer. Right.
J
Yes, I guess if they were all, let's just say, T2 or T3, and one layer got dropped, so in the simplest case it would be 33 milliseconds, all of them were 33 milliseconds apart, I suppose. So on the sender side, all the bins would be 33 milliseconds, showing like 1,000 packets, and a third of your packets got dropped because one of the T3, the highest, gets dropped. So you start to see a different distribution on the receiver side: instead of seeing all 33 milliseconds, you see some 66 and some 33 milliseconds.
D
Constraints, right. So the constraints are a bit underspecified for remote tracks, and currently the language says that the track settings for a remote track will only be populated with members to the extent that data is supplied. This refers to what's negotiated, or the RTP data itself, which means that certain members are not applicable, and so it's a bit vague, you know, what you would have from the RTP data. So it's not very well specified, and what happens if you do apply constraints?
D
So then the status is: Chrome has already shipped width, height, aspectRatio, resizeMode and frameRate, and these are implemented as downscaling or dropping frames, which I think is the only sensible thing you can do with a remote source, since you can't configure it via constraints; you can only, you know, take what you get and then do something with it. Or, you know, we could always reject, but Chrome has already shipped it.
D
What I want to propose is that we specify what these do in terms of downscaling and dropping frames, and, going forward, we just say that anything we haven't explicitly specified is not applicable, and then, if we want to add other future constraints, we'll discuss them separately.
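The "downscale or drop, never configure the remote source" semantics can be pictured with a toy calculation. This is a hypothetical illustration of the geometry only; the names are invented and it is not the applyConstraints algorithm.

```javascript
// A remote track can only be scaled down, never up, so clamp the scale at 1.
function downscaleToFit(frame, maxWidth, maxHeight) {
  const scale = Math.min(1, maxWidth / frame.width, maxHeight / frame.height);
  return {
    width: Math.round(frame.width * scale),
    height: Math.round(frame.height * scale),
  };
}

// Receiving 1280x720 but constrained to at most 640 wide:
const fitted = downscaleToFit({ width: 1280, height: 720 }, 640, Infinity);
```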
E
Yeah, Jan-Ivar, right here. Yeah, I think for us it's great to have this specified so that there's no implicit meaning to this, and I like what we're adding. So, well, I think the plan was to add similar language to Media Capture Main, to say that constraints don't automatically apply to every spec that creates a MediaStreamTrack, but it's up to the individual specs to specify what should be implemented there, and nothing else is to be implemented that way.
E
All right, so this is me. So the problem here is one we've touched on before, at least in an earlier meeting, which is that the bundle spec has kind of painted us into a corner a bit, because calling stop on the first transceiver when you're in the have-remote-offer state, which means you're calling stop between setRemoteDescription(offer) and setLocalDescription(answer), could actually be lethal, in the sense that it will stop all your transceivers, which would be surprising, especially if you've implemented negotiationneeded and you're not really doing this as part of your explicit handling of incoming offers.
E
You may be very surprised by this, especially since it will appear to be racy, because you can't actually know unless you special-case it ahead of time: if in stable state, then call stop; otherwise don't call stop, and call it later. So, as far as I understand, this is impossible to fix in bundle at this point. Basically, the short explanation is that stopping the first transceiver, which in bundle would be the offerer-tagged m-line-
E
Stopping the first transceiver is actually quite safe in stable, because all that happens is the transport will move somewhere else when you do a create-offer. However, when you're already receiving an offer from the other side, bundle does not allow that transport to migrate somewhere else in time to be part of the answer. So that's the problem. So there are really primarily two use cases for transceiver.stop() that we see.
E
There's the high-level one, which affects everyone, I think, which is to relinquish resources after an app is done with a transceiver. And consistent with this, we're also adding language to the spec now that once transceivers become stopped, they can eventually be removed from getTransceivers(), so they will not appear there anymore, but that will happen later, at the same time that m-lines become available for recycling, so that's after another cycle of negotiation. So imagine you have a button click.
E
Then, when the button is checked, you add a transceiver with a track and stream, and when you uncheck that box later, you stop the transceiver. This is safe in this case because these are not bidirectional transceivers; in this example we're just creating a transceiver, we're not using addTrack, we're using addTransceiver, so we get one transceiver only to send in each direction, great. But still, the above code will only work 99.9% of the time, because a button click here does not check the signaling state of the peer connection.
E
So, once in a blue moon, this will stop all transceivers: if the transceiver is the first one, if you're using bundle, and if, timing-wise, you were just unlucky, that a remote offer happened to have come in just a few milliseconds before. So now you're no longer in stable, and the peer connection API goes: great, you want to reject the m-line, perfect, we'll put that right in the answer, send it back. And now suddenly everything stopped and you don't know why. And that's the second use case.
E
It's a lower-level expert that wants to do this on purpose: reject an offered m-line in time for the answer, and, you know, with safety gloves. So, next slide. So I tried to come up with a solution, kind of like a two-step stopping procedure, similar to how we have direction and currentDirection, which I think people are familiar with. So, looking at that, staring it down for a bit, it became clear that modifying JSEP's definition of stopped.
E
It was not gonna fly, and it's best to leave that alone, because JSEP is late in the process and there's really no need; amending it felt like the wrong thing. Also, some differences from direction became clear: stop as a terminal action means you can make certain assumptions earlier, because once they're stopped, they won't be unstopped. And it also has, excuse me.
E
Let me get some water. It also has an instant side to it, which is that it will immediately send RTCP BYE, as well as fire ended events on local tracks, as well as the negotiated effects. currentDirection is also different, in that currentDirection is the result of negotiation that happens after you come back to stable, whereas stopped is an input to negotiation that needs to happen ahead of negotiation.
E
If it were to happen after, it would just mean a subsequent negotiation would be needed. So it's best to design something on top of JSEP and leave JSEP alone. That way we're not disrupting the model, and we can get this in without disrupting 1.0. So a revised goal is to have the stopped transceiver state safely survive the have-remote-offer window, and to do that, next slide, the proposed solution is:
E
A two-step stopping procedure, inspired by direction and currentDirection, but one that leaves JSEP alone by defining a new stopping property in webrtc-pc only. And I can read the screen capture here: stopping, of type boolean, read-only. The stopping attribute indicates that the stop method has been called on this transceiver, but the transceiver has not yet been stopped. If stopping is true, it means this transceiver will be stopped in a queued task.
E
As for the negotiationneeded event, which only happens in the stable signaling state, I'm getting ahead, yeah, so there's a stopping internal slot for that. So, in short, this means that a stopping transceiver will be stopped on the next tick or once we're in stable state, whichever is later, and that means we sort of fly over the dangerous have-remote-offer window and-
E
So, the way I described this, with a bit of a twist, is: basically, proposal A, everything is like today, but we define a new stableStop method that sets this new stopping attribute, which triggers negotiation, causing the transceiver to always be stopped from stable state. And then proposal B is the same as A, except:
E
We rename stableStop to stop, and we rename the old hazardous stop to reject. And I'm just kidding, because B is the only real proposal here; I just used A as a rhetorical device to explain that there are very few downstream effects of this. This is merely another abstraction on top, and the goal is to make stop the safe method that people can use regardless, without thinking about state.
E
For the first key use case, you know, a button click: on button click, transceiver.stop(), and then the advanced expert can still use reject, which is the new name for the old hazardous stop. That's on the next slide; I just show the Web IDL this would entail. There will now be two boolean attributes, stopping and stopped. I know it's kind of late in the game, and maybe that should be an enum with different states and stuff, but I don't really think we need that; it's similar to direction and currentDirection.
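The two-step semantics being proposed can be mimicked with a small toy model. This is purely an illustration of the state flow, with invented class names; it is not the spec's queued-task algorithm, and for brevity it flips stopped synchronously on reaching stable rather than on a later tick.

```javascript
// Toy model: stop() marks the transceiver as "stopping" right away, but it
// only becomes "stopped" once the connection is (or returns to) stable.
class ToyPeerConnection {
  constructor() { this.signalingState = 'stable'; this.pending = []; }
  onStable(cb) {
    if (this.signalingState === 'stable') cb();
    else this.pending.push(cb); // defer past the have-remote-offer window
  }
  setSignalingState(state) {
    this.signalingState = state;
    if (state === 'stable') {
      this.pending.forEach((cb) => cb());
      this.pending = [];
    }
  }
}
class ToyTransceiver {
  constructor(pc) { this.pc = pc; this.stopping = false; this.stopped = false; }
  stop() {
    this.stopping = true; // visible immediately
    this.pc.onStable(() => { this.stopped = true; }); // takes effect in stable
  }
}

const pc = new ToyPeerConnection();
const tc = new ToyTransceiver(pc);
pc.setSignalingState('have-remote-offer'); // a remote offer just arrived
tc.stop();                                 // safe: only marks it as stopping
const duringOffer = { stopping: tc.stopping, stopped: tc.stopped };
pc.setSignalingState('stable');            // answer applied; now it stops
```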
E
Right, exactly so. And that's pretty much it; oh, I do have one more slide. Yeah, I also have an FAQ on another slide, just to preempt any questions, that slide, so yeah. So, when exactly will the transceiver be stopped? It would be stopped right before your negotiationneeded callback is called, in fact on the same queued task, and it doesn't matter if you're listening to these events or not; that's when it would happen, so basically on the next tick if you're in stable state already. If you don't use negotiationneeded, will this break me?
E
No, you will be fine, because the following should continue to work: you call transceiver.stop() and you immediately createOffer and setLocalDescription. That is actually thanks to the way the createOffer/createAnswer algorithm is already written, to pick up state changes from JavaScript before succeeding, and, you know, I checked: the order of queued tasks in JavaScript here guarantees that we will pick up the queued stop there. And the last question: what happens if I call stop in have-remote-offer? The answer will be unaffected.
E
You won't be stopped until you return to stable, where negotiationneeded fires again to trigger a second round of negotiation, and this allows for safe stopping of any transceiver, even in bundle. And if you think that's odd behavior, it's actually what would already happen today if you were a few milliseconds late in calling that method.
E
Yes, immediately before your negotiationneeded callback will be called, so that's in some ways very similar to today. It's just that it's one tick later; it's not immediately when you call the method. So, when you call, if you write transceiver.stop() and you immediately check the state, stopping will be true but stopped will be false; but then, inside your negotiationneeded callback, or on a later tick, basically, stopped will turn true quite quickly.
E
The only exception is if you're in this dangerous, if you're in the have-remote-offer state, which isn't always dangerous, but can be dangerous if you have bundle and you're addressing the first transceiver. If you're in that have-remote-offer state, then you will see stopping, but it won't be stopped until-
E
The thing is, though, that once you go back to stable; but my problem is, if you cannot get back to stable, you have a programming error, and, this happens in Chrome today, you basically have to toss away your peer connection and create a new one today, right, because you don't have rollback.
C
The issue, so, so I think we have, I think that seems to be consensus. Okay, let's do this; I'm giving up on that one. And so we can go forward with this, I mean, really review it and all that, and we'll discuss the issue of where the media should stop flowing separately. Great.
D
So, even if you say video: true, audio: true, you might not get audio; it depends on the capabilities and the browser, and, you know, it relates to the OS and all that. And we added restrictOwnAudio to say that the browser has to attempt to remove audio produced by the application, and this is to avoid the echo problems, and one way we do that is to just remove the application's audio from what's being captured. And the problem is that "the application" is vague.
D
Does it mean the browser, document, origin, window, tab, etc.? And what do you do if you have iframes; you know, do you remove audio from the children, from the parent iframe, and so on? And my proposal is to say: "the application" means the current document, so that you do not have to remove audio from parents or children.
D
Now, that doesn't prevent that, because audio is optional; it would be perfectly valid to implement this as removing all audio of a tab, right. You can always remove more audio, but what "application" refers to, and what this constraint refers to, is not getting audio produced by the current document.
D
Right, so the same kind of problem again: the audio constraints are not very well specified. The question is which audio constraints apply for getDisplayMedia, and there's a few. We have volume, channels, sampleRate, sampleSize, latency, and channelCount, and then there's a few ones that obviously don't apply. But these first ones could apply or not apply depending on what we decide, and they just don't seem very useful. So I propose that we just don't support any additional audio constraints.
E
I mean, you could solve this with Web Audio in a number of ways, I think, like with a gain node or something like that. I'm not getting, no, but yeah, yeah, GainNode, that's what it's called, yeah. So they-
B
So Safari is now shipping empty device IDs when you do enumerateDevices and device-info permission is not granted. This is done to enforce mitigations when an origin had previously been granted capture access and the page is reloaded or there's a new page. So the new page does not have device-info permission, and in that case we're giving back empty device IDs. We did that for two reasons. First, we do not want to expose information in general that is not useful; second, it is difficult to implement correctly, providing device IDs-
B
Device IDs that would not be empty strings, in that specific case. And second, if we were to implement it and have, like, consistent device IDs after granting getUserMedia permission for new pages, we would need to basically always pass the same device ID again and again. But the issue is that the setup of a user might change over time, which means that the device IDs might change over time. So exposing that info would still be a privacy, like, a small privacy leak, but still a privacy leak.
B
So that's why we think that we will keep the empty device IDs, because it's very simple and does not cause any fingerprinting issues. But the issue is that it's not spec-compliant, because the spec is saying the same identifier must be valid between browsing sessions of this origin, but it must also be different across origins, and of course the empty string is the same across origins.
B
I think the second "must", that it should be different across origins, is just there to disable correlation of a user between various origins, but the empty string is already providing that: it does not allow correlating a user across origins anyway. So it's a non-issue there. So we would hope that the spec would allow the Safari implementation to be spec-compliant. So, basically, we would like the spec to allow specific values to be non-unique as long as they do not relate to a specific user or specific device.
B
So the idea is that when you call enumerateDevices and the page does not have device-info permission, we always return empty-string device IDs. The device-info permission is granted, for instance, when the page has been granted getUserMedia once; in that case, this specific page will get all the device IDs, and the device IDs will be stable over time, as long as the page over time is getting getUserMedia access.
B
Let's say a page calls getUserMedia and it received the track, and it now has the device ID, so it will store it, in IndexedDB say, and later on the page is reloaded, and getUserMedia can be fed with this specific device ID, and it will be used by the browser to actually select the right device.
B
And you can also pass an empty-string device ID as part of getUserMedia, and basically we will say: okay, we will give you a device. It might not be your preferred device, it might not always be the same device, but you will still be able to get one. We did that because some web applications actually do that, and we do not want to break them either.
E
And I think a lot of people in the working group who cared a lot about this are gone, so I'm an odd advocate here for this. It does kind of break the original model, though, which was that you call enumerateDevices and you learn the devices that are available to be used with getUserMedia, and so I could see JavaScript being broken now, where I have a saved device ID: let me run enumerateDevices first to see if the user still has that device, and on Safari I'd learn-
B
The point, in general, I would think, is that the web application pattern you actually presented there is a bit broken. We have ideal constraints; so, basically, instead of trying to enumerate devices to see whether the device ID is matching, you should just call getUserMedia with deviceId as an ideal constraint and not worry about it, and this is what we were-
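That pattern, passing a previously saved ID as an ideal rather than required constraint so the browser can fall back if the device is gone, might be sketched like this. The helper name and savedDeviceId are invented for illustration.

```javascript
// Build getUserMedia constraints from a possibly stale saved device ID.
// Using { ideal: ... } means the browser prefers that device but still
// returns another camera if the saved one no longer exists.
function buildVideoConstraints(savedDeviceId) {
  return {
    video: savedDeviceId ? { deviceId: { ideal: savedDeviceId } } : true,
  };
}

const constraints = buildVideoConstraints('abc123');
// In a browser you would then call:
//   navigator.mediaDevices.getUserMedia(constraints);
```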
B
So, initially, we were thinking that maybe enumerateDevices could return absolutely no device at all, but we were worried that some webpages would be broken by that, thinking that there's no camera or no microphone. So that's why Safari is still providing some camera and microphone devices currently, even when getUserMedia is not granted, and that's okay; that's where we want to live, I'd say, right.
E
So I would point out that Firefox, I believe, does implement a device ID there; I would say it's not impossible, I would say it would be possible for Safari, I think, to provide a device ID that was not empty, that could be used later. And I think it was a useful data point you had; I think you're right that in the Firefox implementation it would leak some information, because what we do is we typically hash the value of the real device's internal raw ID.
E
If, yes, a user changed the device, that would leak some information, although it wouldn't seem too hard to fix: replace that internal raw ID with some default string that just basically said "the default device" at all times, or just "default device", or something like that, and that way you would never leak. That said, I think-
B
This one is again closed, and I think that we are actually mandating enumerateDevices and getUserMedia to delay processing, so it's closed as well. Okay, and the next one, yeah, we're doing the same: the user agent must not report an OverconstrainedError if the context does not have the needed permissions. So we went this way, so it's closed as well.
D
Why do we have the overconstrained event? So getUserMedia gives you a track with the capabilities and the settings that you asked for, but even if you get what you asked for, you might not get what you asked for; for example, in poor lighting conditions you have a lower fps than the camera is set up to capture, and currently this would trigger an overconstrained event, and there are several problems with overconstrained. One is that it mutes the track, which makes the track unusable.
D
It can make it silent or black, and it also contradicts the definition of mute, which is that the muted state reflects whether the source can provide media at the moment; and also, if we talk about WebRTC tracks, mute and unmute mean something else: the result of negotiation. So it's also something that you can't really fix.
D
Now, you could argue, devil's advocate, to say: well, about it being undesired; oh, it's very much undesired, because it mutes the track, but a possible solution could be to not mute the track, and maybe it would be desired. And about it being redundant: sure, it's redundant, because you can do this in other ways, but if you force the application to implement ways to check whether or not you're matching the constraints, then, sort of, it's redundant because you're duplicating browser logic and you'd have to do polling and stuff like that.
D
So, you know, it might be possible to fix the overconstrained event, like by not muting, or by adding an attribute on the overconstrained event and so on, but I think this API is sort of not really doing what it was designed for, and it seems a bit overkill for just making sure, like, hey, am I getting the right frame rate. You shouldn't really need this for that.