From YouTube: W3C WEBRTC WG VI on February 27, 2020
C: I don't know — this was IETF 104, so this is pretty old. Hopefully we'll get an update on this, to understand where we are with respect to the SFU support, but it looks like I guessed right from this: some of the SFUs, like Jitsi, do not work well with the WebRTC [unclear], but some other ones do, like Janus and mediasoup.
D: You all know a camera or microphone may be temporarily disabled using the track's enabled attribute, and the sole use case for inventing that was that you could basically do microphone mute and camera mute during a WebRTC call. This code shows how you would implement that, using three buttons that are commonly shown in calls like this, and the equivalent JavaScript code, where you basically set the enabled property on the track.
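The pattern described here boils down to flipping `enabled` on the relevant tracks. Below is a minimal sketch of that, assuming a `stream` obtained from getUserMedia; the helper name and the commented-out button wiring are illustrative, not from the slides.

```javascript
// Toggle mute by flipping track.enabled, as described above.
// A disabled video track renders as black; a disabled audio track is
// silent. The hardware stays acquired, which is the crux of this talk.
function setMuted(stream, kind, muted) {
  for (const track of stream.getTracks()) {
    if (track.kind === kind) track.enabled = !muted;
  }
}
// In a page (hypothetical button ids):
// camBtn.onclick = () => setMuted(stream, "video", camBtn.checked);
// micBtn.onclick = () => setMuted(stream, "audio", micBtn.checked);
```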
D: As far as I know, this is semantically complete, in that that's sort of the ideal interface for a JavaScript web page — that's all they need to say — and it works in all browsers' permission models. Unfortunately, some browsers don't turn off the camera hardware light, and that is unfortunate, because users demand privacy and being able to, you know…
D: So, unfortunately, this does not work in all browsers, and sites want this behavior, so they work around it, and this is a good example of where the spec not being explicit about behavior actually causes web-compatibility headaches. Here is something I threw together based on what I've heard people are doing in the simple case. What they're basically doing is stopping the track when you hit mute, and then basically calling getUserMedia again when you unmute. Of course, in practice, you know, that may be all you're doing, but if you're going to do this right…
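A hedged sketch of that workaround, with the getUserMedia call injected as a parameter so the logic can be exercised outside a browser; the function and state shape are illustrative, not from the slides.

```javascript
// Sketch of the stop/reacquire hack sites use today: stop the track on
// mute so the camera light goes off, and call getUserMedia again on
// unmute. In a page, `getMedia` would be
// navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices).
async function setCameraMuted(state, muted, getMedia) {
  if (muted) {
    if (state.track) state.track.stop(); // relinquishes the hardware
    state.track = null;
  } else if (!state.track) {
    // May cause an extra permission prompt in some browsers,
    // which is the web-compat problem being discussed.
    const stream = await getMedia({ video: true });
    state.track = stream.getVideoTracks()[0];
  }
}
```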
D: I've heard from some people doing this that, well, we want to wait three seconds, because users might just be toggling these buttons, so we don't want to do that right away. So there are a lot of corner cases here, and it can be quite yucky to get this right, and the irony of all this is that this does not work in all browsers' permission models: unfortunately, you will get extra permission prompts in Firefox.
D: So we're very sad about this, because in a call like this, if you temporarily mute your camera, and then you do something and you come back three seconds later — or a minute later — and then you unmute, it's asking you, like: do you want to share your camera with this website? It's like, duh. So that's what we're hoping to fix. Also, what we've found on some websites is that this often does not work with the microphone either.
D: But the hack is usually only done for cameras, and that means, if you have a Microsoft LifeCam, for example, which actually has a light for the microphone, that light stays lit — which has the same problem of: am I muted, or am I not? We don't know. And there are also other odd corners with using these hacks: you end up adding and renegotiating with new tracks, and you have to use replaceTrack, and it's kind of messy. So, next slide.
D: So this is a refresher of how Firefox works: we relinquish the hardware device when all tracks are disabled, and we reacquire the hardware device when any of its tracks are re-enabled. We say that it takes about a second to reacquire the device, and if we can't reacquire the device — because maybe it's no longer available, the cable got pulled, or another application is using it, on some OSes — we fire the ended event on the track.
D: We have a live indicator that's always on for at least three seconds minimum — and the spec says, you know, the privacy indicators must remain observable for a sufficient amount of time — and we also have an extra privacy indicator in the URL bar while a track is muted, like this. We also have a privacy mitigation planned: to fire the user-agent mute event if this happens and the page does not have focus. So the proposal here is to enforce web compatibility around this behavior in all browsers.
D: Then the user agent may, using the device's deviceId, set blah blah blah to false, which is the way of saying you can turn off your live privacy indicator in this case — and that refers to the in-browser software indicator, which is the closest approximation to hardware lights that we have. Inasmuch as we have both physical and logical privacy indicators, basically the least surprising thing to users is that these indicators align.
D: So I have a PR ready that basically would mandate this; I can read through it. When a live track sourced by a device exposed by getUserMedia — and this is important, because the spec talks about other tracks as well; this is only about camera and microphone — becomes muted or disabled, and that brings all tracks connected to the device to be either muted, disabled, or stopped, then the UA must relinquish the device within three seconds. And yes, this is very specific.
D: It says "within", and at the same time that's a long enough time for a reasonably observant user to become aware of the transition — those are phrases stolen from elsewhere in the spec, about privacy indicators. And then — this is a mouthful, but let me continue — the UA must attempt to reacquire the device as soon as any live track sourced by the device becomes both unmuted and enabled again, provided the document has focus at that time. If the document does not have focus at that time, the UA must…
D: All right, so this came up — and you'll notice this all ties into device selection. We have this powerful constraints language, whose goal was to be able to pinpoint quite well, declaratively, what kind of device you want without, you know, listing the devices, and we did well there for a while. We had facingMode, which can tell you the front or back camera on a phone, back when phones had at most two cameras, and that seemed sufficient. Of course, flagship phones today have multiple back cameras.
D: They can have a wide-angle lens, a regular one, and a telephoto, so that kind of falls apart then, because, you know, users might not want us to default to the wide lens for every WebRTC call. Now, a WebRTC call usually uses the front-facing camera, but there are other use cases for using the camera, like, you know, scanning a barcode, filming some event that's happening, stuff like that.
D: The secondary camera, which is the telephoto, has a 52-millimeter focal length, and the wide lens has a 12-millimeter focal length. Now, I tried to put these numbers into actual focal lengths — to any non-photographer, the math person would say this is the distance between the lens and the sensor, and of course the actual angle would also depend on the size of the sensor, using some math.
D: Now it turns out, when I put this into the algorithm, that did not work at all, so it looks like these instead are actually something called 35-millimeter-equivalent focal lengths, which is actually a measure that indicates the angle of view. And, you know, this is because photographers — this is basically legacy; we're not the only industry with lots of legacy, apparently. The industry has gotten used to 35-millimeter film cameras, and this is how they basically talk about angles of view.
D: So there are basically two proposals here. Proposal A is to add a new constraint called focalLength, which is this 35-millimeter-equivalent focal length — not the actual focal length — just because that's what apparently a lot of drivers expose, and people may be used to talking about that, even though it is confusing. And then you could do a min and max to avoid wide lenses: you can avoid all wide lenses with less than 25 or 26 millimeters of focal length.
D: Proposal B is to call it angleOfView, and basically there's a conversion for how to turn this focal length into an angle of view, and if we put 35 millimeters into that, then it seems to work out to numbers that make sense. So those are the two proposals, and there's a bonus question about where to specify this — whether we should specify it in mediacapture-main or maybe mediacapture-image.
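For reference, the conversion behind proposal B can be sketched like this. It assumes the common convention of using the diagonal of the 36×24 mm film frame (≈43.3 mm); the function name is illustrative, not a proposed API.

```javascript
// Convert a 35mm-equivalent focal length (mm) into a diagonal angle of
// view (degrees), using the 36x24mm film-frame diagonal (~43.27mm).
function angleOfView(focalLength35mm) {
  const diagonal = Math.hypot(36, 24); // ≈ 43.27 mm
  const radians = 2 * Math.atan(diagonal / (2 * focalLength35mm));
  return radians * 180 / Math.PI;
}
// The phone numbers from the slides come out plausibly:
// wide 12mm → ~122°, telephoto 52mm → ~45°.
```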
A: Thoughts? Thoughts? I'm not a camera person, so to me angleOfView seems more intuitive, because I get it without having to Wikipedia the math. But, as a side point, I think the fact that the user of an API has to care about this is kind of ridiculous — though that goes into the browser picking the camera, having the picker or not, which is not what this issue is about, right?
D: Well, you care — well, you might be writing an app where you want the wide-lens camera. So, you know, you don't care until you do care, right? That's constraints: you don't have to specify them, but then you run into a problem where, like, half of the users using your app end up with these telephoto images, and they're asking you how to fix it.
E: Well, a question, because I am a camera person and, yeah, I do understand it. I mean, in photography we always talk about the focal length in 35 millimeters — I mean, we are used to that — and I don't understand what this angle of view means at all. I mean, if you're talking to someone that is a camera person, I'm not sure that they will understand what you are talking about with angle of view rather than focal length at all.
C: Focal length is more familiar to photographers, but your example, Jan-Ivar, is a little bit unsettling, because, as a photographer, I don't think of a telephoto as being 50 millimeters — I think of that as usually being 300. So if I were writing this app, I might do something ridiculous, like ask for a 300-millimeter focal length and end up with nothing, yeah.
D: But, yeah, I don't know what the good answer is. I tried to put links in this — if you find the slides, there are links to different websites that describe this, like the concept of 35-millimeter-equivalent focal lengths. There's also the conversion link here, which will take you to a page where you calculate the angle of view and the field of view — and all these terms, like, they are real terms.
D: It's just a legacy issue that photographers have, that when most of them talk about focal length, they're not actually talking about the length between the lens and the sensor, even though that is what it, you know, arguably means, right? So I don't think there's a great answer here. I mean, we could support both constraints — that would be proposal C. Yeah, it's too much, yeah.
B: So I think we have — it sounds like we have rough consensus of the people present to go forward with proposal A, and for that to be turned into a PR, yeah. And the mailing list — the working group is, of course, free to object to the resolution from the meeting at any time. Sure.
D: We do, however — like, Firefox Nightly will now remove personally identifiable information from device labels, so that is an improvement. Which means you might still see full labels in Firefox's in-browser camera and microphone picker, but you would see a filtered name, and, you know, that's what goes to content. So, to recap here, the point is basically — I'll read through this — in 2020, exposing all your devices to the web, beyond the ones you're using, goes against the principle.
D: So, yeah, we're kind of — yeah, maybe that was a mistake, yeah: trusting JavaScript with getting this right in all cases. I'm not going to talk about that, other than that it seems like a workable approach regardless of what the spec says, although it does change the API a little bit; I'll come back to that. On the picker-style API: how far do we want to go? I kind of broke it up into two goals, because I want to focus on one of them.
D: Well, goal one is to get rid of in-content device selection, and also get rid of label in enumerateDevices. We could always talk about getting rid of track.label as well; I thought we might keep track.label and just call it "camera 1" and "camera 2", because, you know, that might give you some information about what you've already picked, which is less, yeah.
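The "camera 1 / camera 2" idea could look roughly like this sketch, which strips real labels from an enumerateDevices-style list; the helper is hypothetical, not spec text or shipped behavior.

```javascript
// Sketch: replace real device labels with generic, numbered ones, as
// floated above for track.label / enumerateDevices without labels.
// Device IDs are left intact so selection still works.
function genericLabels(devices) {
  const counters = {};
  return devices.map(d => {
    const base = d.kind === "videoinput" ? "camera"
               : d.kind === "audioinput" ? "microphone" : "speaker";
    counters[base] = (counters[base] || 0) + 1;
    return { ...d, label: `${base} ${counters[base]}` };
  });
}
```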
D: The reason labels are in enumerateDevices is so you can have a list of reasonable choices, and so that goes to goal two: somehow prevent browsers from granting all devices of a kind by default, which I think the majority of browsers still do. I don't know if we'd ever reach consensus on that, so I'm going to focus on goal one. So in the new slides I have, I tried to think of a minimal approach.
D: So I tried to break down the different — I call them use cases, but the different ways in which getUserMedia is called, how that kind of works today, and how they could work if we didn't have labels. The one we think about all the time is the new visitor: you call getUserMedia with video: true — maybe you're an app that wants the front-facing camera.
D: You know, the site — or rather, they don't think about it until it doesn't work, and then they go, like: hey, why doesn't this stupid site remember the camera I used last time? So, of course. But there are also these web pages — usually demos — that just call video: true every time, and the unfortunate thing there is that…
D: Ironically, that's sort of the API you would have wanted if we had an availability-style API, where this was configured and browsers had well-configured defaults that users could set. So, just mentioning that. And the third use case, which is the new one, is: how do we change or add a second camera in a world without labels? Here a picker is required, so I'm imagining a new deviceId constraint called existingDeviceId, and I'm going to get back to that.
D: I'm just mentioning also a fourth use case, which is actually an anti-use-case, where people lazily call getUserMedia with the same constraints over and over again to get the same stream, because they don't know about stream.clone(). This is an unfortunate reality: if we changed the semantics of getUserMedia, that would result in a picker here. This is basically why even Firefox does not always show a picker for getUserMedia — because people do this. Next slide.
D: So that was an obstacle, but not a deal-breaker at this stage. For the new-visitor and repeat-visitor patterns, Firefox already has an in-browser device picker for this, and it lists the devices between min and max, with the ideal constraint, if present, chosen by default; otherwise, if there's no ideal constraint, we choose the system default. But it only appears if permission is absent, and there's a strong emphasis on permission and on defaults, because you get the prompt — and this is actually not very…
D: This is what I learned from the experiments: if we just use our picker here to replace in-content device selection, it's not a great fit, because you'll click on — you know, let's say you have a camera with a label, and you click on that label, and then you get this again. Well, there's such a heavy emphasis on the default — what should the default be?
D: In Firefox, you can put in the deviceId constraint without exact, and that would control the default in the picker, but should that default be the device you already have, or — if you have two devices — should it be the other device? So it's not clear what deviceId means here. And also: you click on one button, you get this drop-down, and it has a default in it.
D: You have to click on that one to get a second drop-down, so there are too many drop-downs. Also — sorry, I'm jumping ahead here — for the new visitor and repeat visitor, that's not a problem; this is actually what we want. We want to encourage the default; users who don't want to change it don't change it, and there's no obstacle to other browsers adding something like this if they want. Next slide.
D: Actually, if you're replacing an existing device, it might actually be useful, depending on the UX, to know what you're replacing, or what this is in addition to. So this is the code: you provide existingDeviceId, you give it the deviceId of the camera track you already have, and you call getUserMedia. The mere existence of this new constraint tells the UA this is a request for a device other than the one we have — which is non-ideal — and it always prompts the user.
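A sketch of what calling the proposed constraint might look like — existingDeviceId is a proposal from these slides, not a shipped API, and the getUserMedia call is injected here so the shape can be exercised outside a browser.

```javascript
// Hypothetical sketch of the proposed existingDeviceId constraint.
// In a page, `gum` would be
// navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices).
async function requestOtherCamera(currentTrack, gum) {
  const constraints = {
    video: { existingDeviceId: currentTrack.getSettings().deviceId }
  };
  // Per the slides, the mere presence of existingDeviceId signals
  // "a device other than this one", which always prompts the user.
  return gum(constraints);
}
```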
D: Basically, we don't want to over-specify that: of course, we don't know what the JS intends to do in the end — whether to replace the device or supplement it — and that's up to JavaScript. The other good thing with passing in the existing deviceId is that, if you're on platforms that have limitations on how many devices you can have open at once, the browser can now do the tricky part.
D: You can still call this API while your existing call is ongoing, and the browser on that platform can then decide to temporarily mute — like we just talked about earlier, we can mute tracks and then relinquish hardware devices — temporarily show a preview of a different device, and do tricks like that to ameliorate all these failures that JavaScript has to deal with today, if we were to try to do a picker on an Android phone, I'm sure.
A: I have a question, right: if you have a track and you want to change it, and you want this UI to show up again, it makes sense to tell it that this track wants this. But when you specify it with an ID — like, why do I need to know the ID of the camera before I call this API? And if I do use this API to sort of swap out what this ID means…
D: All right, let's do that — next slide — and I think I'm addressing that there. So the pros of this: a minimal API would suffice. It commits us specifically only to replacing the current in-content device selection that would otherwise be missing, and we would retain the option of going further later.
D: Okay — but it's important, to answer your question: this would signal our intent to solve the post-getUserMedia selection problem only, right? So there's a larger question here, because, for Firefox's sake, we already have a prompt even when you initially join a call. And so, in case other vendors are not as keen on doing that, this approach would first try to address just the absence of in-content device selection, and that's the least disruptive to existing models that I could think of. And now, B, to bring up—
D: A good point on the cons of this is that this constraint is still barely more than a glorified boolean — I mean, it does pass in some value to the user agent — and unless we prevent it, JavaScript may exploit this new API. People may go: cool, Chrome has a new picker; let me do whatever — you know, it's weird that I have to do all these constraint things with IDs, but let me do that, because I want the picker.
A: Why can't we just do the equivalent of blank constraints? I have my track — whether or not I know the ID — and I want to re-prompt for this track. I call some API, it does the re-prompt, and either I get a new camera or I get the same camera, and the track doesn't die. I don't have to add JavaScript code to keep track of which track object it is I want to use, and I don't have to know what the device ID is.
D: Well, I'm trying to avoid adding new methods, but yes, that is something you could do. A challenging part of the model is that you can clone tracks, so you can have many tracks off one source. So if on one of the tracks you do some kind of replaceSource method — you could do that, but it's an odd way of, you know, basically requesting the same thing.
D: I mean, people can already call getUserMedia many times — and we're not proposing removing device IDs here, by the way — so you could still build in-content device selection, just calling them "camera 1, 2 and 3", and you could just call getUserMedia in the regular way. So this is trying only to solve a very specific problem: that you no longer have labels to find the right devices.
D: The idea you're proposing, though, would require having a track at the outset, which is kind of what this existingDeviceId does as well. But, you know, if we wanted to prevent JavaScript from using this outside of post-selection, we could, for example, actually demand that the track whose ID you pass in has to be active, or we could throw. So we could.
D: Those are the two viewpoints, and I agree a little bit with both of you, and that's the conflict, I think, yeah. So we can go on to the next slide; I talk about that a little bit there. So a better boolean might be a chooseUserMedia method, unless we change the semantics of getUserMedia. I wanted to throw these words out a little bit — these options — just so we have words to talk about it.
D: We introduce a new "semantics" constraint — a dictionary member, a semantic constraint — that defaults to "browser chooses" and can be set to "user chooses", kind of like Plan B and Unified Plan. Then we implement the picker, with "browser chooses" as the default, and browsers are prepared to avoid redundant pickers, preferably by sites using exact and stream.clone(). And then, at some point in the future, we flip the default to "user chooses", and optionally…
D: So what about sites that pass in video: true — like demos? And maybe I'm biased, because I've read a lot of demos and fiddles, but demos and fiddles don't bother with this, so they would get a picker every time, if the constraints didn't boil down to one device, or you didn't pass in some kind of default deviceId that meant "always give me the first device in the list". So that's not backwards compatible — no, it's not web compatible, as was pointed out. So option B is to concede the status quo, with the above syntax.
A: What I like about the semantics bit — it's similar to the SDP semantics — is that a lot of websites don't care whether they get a prompt or not; some care, some don't. With this one, for all the cases where you don't care, it wouldn't matter if we flipped the default behavior, and for the cases where it does matter, fixing the regression caused by it is a one-liner, right?
A: It allows you to switch over, and then you can, over time, deprecate the old one. It would be harder, I think, if you have to call a different method — because you can't... it might be very similar, but yeah — it would be harder to switch the default and deprecate the other one, because you're stuck with... well.
D: What I like about "semantics" — well, I wrote it — is that it was very easy to talk about and explain, and it is kind of a mea culpa: like, we messed up, and we have these two behaviors, and here they are. So it's very open about how things actually work. And it also captures — it's sort of like resizeMode: when we added the resizeMode constraint, it actually captured that, if you don't specify it, browsers currently have different defaults. So, well.
A: The pro with "semantics" — the downside with the chooseUserMedia separate-API approach — is that even the use cases that don't care what the default is are forced to care what the default is. That's one thing. Secondly, there's no way for us to tell who uses the old API just because that's what they were using and they didn't care, and who uses it because they explicitly made a decision: hey, I want the old behavior. But, yeah, in either case you do end up with that at some point.
D: Now, what I don't like about "semantics" is that you could argue, then, that if someone takes the trouble to specify semantics: "browser chooses", then even Firefox should drop something, right? We should just say "allow camera or microphone?" and not specify which. So that kind of interferes a little bit with user-agent territory, because we do per-device grants. And — I don't have a slide for this, but in UX land…
D: …then you run into trouble, because the user might think: I do want to share my camera, but I want to share the back camera, so I'm going to hit Deny — and now you're blocked. So as soon as you mention specifics about which camera, users are going to want to change it, and that's how we ended up with a picker in Firefox. So there's that.
D: We could do that using an existing-device constraint, which we might have to specify anyway. So that's the one option I have for going with the existingDeviceId: even though it's kind of clunky, it might be useful information to the UX that we'd need anyway. But, yeah — sorry, Henrik, you were saying we should first discuss whether we want — yeah.
D: It's a good thing that we already — like, you can go to the WebRTC samples, and if you don't have initial permission, you will see "camera 1, 2 and 3", so JavaScript can do this already. So it's not a total deal-breaker in my mind. You know, it's ugly, but it's only going to affect people who have multiple devices in the first place — and I haven't really seen a device picker on a phone, because there's usually just a flip button.
D: So there's that. Even with the "user chooses" semantics, the idea is that if you're able to reduce the choices — whittle them down to one — then you will not need a picker, though you still need permission. The difference is: if you haven't whittled it down to one, you will get a picker whether you have permission or not.
B: So this is a status report. We presented the insertable streams idea at TPAC, and since then we've been trying to implement it and see if it's actually implementable. It turned out that what we originally specified was hard to implement, so we implemented something that worked instead — that's the nature of experimental implementations.
B: So we managed to get the proof-of-concept implementation done, and we managed to actually document what the API turned out to be. There's a link to the explainer here, which is what contains the JavaScript and the examples and so on. It changed considerably from what I originally suggested, and I haven't updated the documents yet, so just read the explainer, nothing more, if you don't want to be confused. Next slide.
B: So when you call one of these createEncodedStreams things, you get back a pair; you then take the readable of the pair, pipe it through a TransformStream, and then pipe the output of the transform back to the writable. That's it — now we have inserted processing into the producing path. And the code on the left is actually JavaScript — it's pure JavaScript — that does a transform of the data.
B: It inverts all the bytes, and appends a few bytes at the end just to be nasty, so that what comes out, if you don't reverse the transform, will not be comprehensible to the video decoder — so we called it a MAC. And we have a similar receiver transform that can be inserted on the receiver side; it's all in the example. But the question was whether that's before or after encoding, or—
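The pattern just described can be sketched as follows; the byte-inverting transform stands in for the one on the slide, and the commented-out createEncodedStreams wiring is the experimental Chrome API from the explainer, not a standardized interface.

```javascript
// Sketch of the insertable-streams pattern: pipe the sender's encoded
// frames through a TransformStream. Inverting every byte stands in for
// real processing; applying the same transform again undoes it.
function makeInvertTransform() {
  return new TransformStream({
    transform(frame, controller) {
      const bytes = new Uint8Array(frame.data);
      for (let i = 0; i < bytes.length; i++) bytes[i] ^= 0xff; // invert
      frame.data = bytes.buffer;
      controller.enqueue(frame);
    }
  });
}
// Wiring, per the explainer (experimental Chrome API at the time):
// const { readable, writable } = sender.createEncodedStreams();
// readable.pipeThrough(makeInvertTransform()).pipeTo(writable);
```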
B: We have had suggestions for a couple of other insertion points in the pipeline, like before or after the jitter buffer, for applications that want to do their own data buffering, or just make sure that they bypass the buffer — but that's for further study. You could end up with an enormous number of these insertion points, yeah.
B: If you go back a slide, there's one particular thing: we realized that if the first frame sneaks out before we wire up the insertion, that's a bad thing, because if you get one frame that's untransformed, and then suddenly switch encoders without any signaling, then you have no idea what you're going to get. So the force-encoded variables in the RTCConfiguration are just to say: if you see this boolean set to true, then don't process any frames before the JavaScript has done its insertion.
B: So we're kind of trying to relate this to other efforts. I mean, transferable streams was the thing that made it all possible, but it's not released in Chrome generally yet — it's available behind a flag — and we didn't have to modify anything about that proposal in order to use it. So far we are looking at some of the implementation details, seeing if we can tune them a little bit, but the API is really what we want. And we're also working with the WebCodecs folks to say: okay, this API actually wants to be exactly the same as if you had inserted a WebCodecs object in the pipeline and took the output out the back. So the video frame and audio frame types we're defining are intended to be harmonized with WebCodecs — whether they copy ours, or we copy theirs, or we kind of synthesize something that works for both, that's to be seen. So.
B: Not at the moment — there's no interaction with WebGPU. I know the WebCodecs folks have tried to talk with the WebGPU guys before, because when you want to implement a codec and don't have specific hardware for it, a GPU is a mighty fine thing to have — but we're not directly in touch with them.
E: One question: is the intention to support writable streams for, you know, another use case? I mean, why not just insert the transform into the native stream? Can you go back one slide, please? Yes.
B: If we wanted to do things like multi-step pipelines, or a pipeline that drops data, or a pipeline that delays data, that's completely simple to do with transformed frames when we expose the streams. But if we were to push the transform down instead, then we would have to reinvent a whole bunch of concepts that transform streams already specify. So it seems like a powerful abstraction that we shouldn't try to modify.
D: And there's an implicit understanding here that when you specify this, the sender no longer sends directly — there's a default, right: when you don't specify this, it just goes through, and when you specify this, there's a point at which you go through this path only?
B: Yes.
D: How would you set it back?
B: You wouldn't.
B: When I first experimented with streams, the first thing I did was to try to build a pattern for — take a stream that was connected in one place and connect it to another stream, and it turned out that (a) the spec didn't support it, and (b) both of the implementations of streams didn't support it — they were broken in different ways. So we decided: oh, let's not explore that. Okay.
B: That could be fun to try. I mean, in theory, you could have senderStreamA.readable.pipeThrough — pipe to senderStreamB.writable, and then have sender B write to sender A's readable, and I don't think that would work, because the configurations would be kind of messed up, right?
D: I do worry — I love streams, and I love pipeThrough and pipeTo in particular — but at the same time, I do worry that it's a little too flexible, and it might encourage people to do things they maybe shouldn't. Like, if I were to pipe this through a drop transform stream — you know, let's say I have a transferable stream from a worker, but as soon as I insert any kind of other — let's say I insert a second step, where I do something in plain JavaScript on the main thread…
D
C
E
One minute, just one minute, yeah. If calling it to get the stream is going to change the behavior of the video sender, I couldn't call it "get", because if I go to the spec, "get" does not modify any behavior at all. Yes, call it captureStream or videoStream or whatever, but with "get", at least when I do a get, I don't expect the state of the video sender to change.
B
B
Yes, there's another slide that just outlines what we do next, so after that one... I thought this was... not this one, but the next one. Yep. So we have a real customer that wants to use this for their trial, an origin trial. So we will have a Chrome origin trial on this. We will synchronize with WebCodecs as appropriate, and once we know that the API works and the names are not too horrible, we will propose it for standardization.
B
A
It's another update, a recent PR. It's an explanation of how we want to calculate the end-to-end delay. So the problem is: how do you get the time that passed between the capture at one endpoint and the playout at another endpoint? The RTCRtpContributingSource gives us the timestamp that tells us the playout time, but we don't have the time of capture. Next slide. Oh, and this picture, by the way (previous slide), shows the simple case where the capture system and the sender system are the same system. Anyway.
A
So there's a header extension called abs-capture-time, which gives you the capture timestamp and an estimated clock offset, which I'll get back to later. If we have the capture timestamp, we can also solve another problem, which is how do we measure audio/video sync; the solution is to expose the capture timestamp. But I'll talk more about the end-to-end delay, just one of the reasons why this is needed beyond existing mechanisms. Next slide.
A
A
So luckily, with the existing RTCP mechanisms, you can estimate the clock offset. The assumption is that it takes half a round-trip time to get from A to B, and looking at the RTCP information, you have the sender's timestamp from when they sent their RTCP packet, you have the round-trip time that you can divide by two, and you know the time on your own clock when you received the packet, so you can estimate it.
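[Editor's note] The estimate described here is simple arithmetic, so a small sketch may help; the function and variable names are invented for illustration, and times are assumed to be in milliseconds on comparable timebases.

```javascript
// Estimate the remote-to-local clock offset from RTCP-style data:
// assume the report took half a round-trip to arrive, so at local
// arrival time the remote clock read (remoteSendTime + rtt / 2).
// The offset is the difference between that and our own clock.
function estimateClockOffset(remoteSendTime, rtt, localArrivalTime) {
  const remoteNow = remoteSendTime + rtt / 2;
  return remoteNow - localArrivalTime;
}

// A remote timestamp can then be mapped onto the local clock.
function remoteToLocal(remoteTime, offset) {
  return remoteTime - offset;
}
```

As noted later in the discussion, a real implementation would smooth this over multiple round-trip-time samples rather than trusting a single report.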
A
A
A
Well, here's where the header extension comes in: abs-capture-time's estimated clock offset would be an additional field that contains, in this case, the sender's estimate of the clock skew between the capturer and the sender. Which means, if you have the original capture timestamp, which you want for A/V sync (you do want the original as sent), you then add this estimated clock skew provided in the extension by the sender.
A
So, in the WebRTC extensions spec, we just merged this, which basically just exposes these values from the header extension if the header extension is present; if it's not, these are missing. But by exposing the raw information, the web app can handle all the possible error cases itself, whereas if we were to expose a pre-calculated end-to-end delay, that would have had a lot of assumptions baked into it.
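[Editor's note] A rough sketch of the calculation an app could do with the exposed values. The field names (captureTimestamp, senderCaptureTimeOffset) follow the merged PR being discussed, but `source` here is just a plain object standing in for a contributing-source dictionary, the sign conventions in the comments are my assumptions, and the receiver-to-sender clock offset is supplied by the app (e.g. from the RTCP sketch earlier).

```javascript
// Sketch: end-to-end delay from a contributing-source-like object.
// captureTimestamp:        capture time on the capturer's clock (ms).
// senderCaptureTimeOffset: sender's estimate of (sender clock - capture clock).
// senderClockOffset:       app's estimate of (sender clock - receiver clock).
// playoutTimestamp:        playout time on the receiver's clock (ms).
function endToEndDelay(source, senderClockOffset, playoutTimestamp) {
  // If the header extension was absent, the fields are missing.
  if (source.captureTimestamp === undefined) return undefined;
  // Capture time expressed on the sender's clock...
  const captureOnSenderClock =
    source.captureTimestamp + (source.senderCaptureTimeOffset ?? 0);
  // ...then on the receiver's clock.
  const captureOnReceiverClock = captureOnSenderClock - senderClockOffset;
  return playoutTimestamp - captureOnReceiverClock;
}
```

The point made above is exactly that this combination logic, and every error case in it, is left to the web app instead of being baked into the browser.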
A
A
Yeah, there is more information if you follow the links, but you can employ your own strategy. The example I showed on these slides is where you use the RTCP information, and an actual implementation would be a little bit more sophisticated, like sampling multiple round-trip times and trying to get some smoothed average, but that would be implementation-specific.
B
B
Back to you, Harald. Yep, RTP header extensions. The nice thing about RTP header extensions is that there's a bright line in the sand where, if you add RTP header extension number 15, some things that used to work don't anymore, because that's where you have to go to two-byte header extension IDs. So.
B
And those are the reasons why we wanted to say: okay, applications need to have some control over which RTP header extensions get negotiated. They get negotiated in SDP, and SDP munging, well, it's not what we'd like to ask people to do. So instead, we'd like to have an API that controls which RTP header extensions get negotiated, with a get and a set. Next slide.
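[Editor's note] The get/modify/set shape being proposed works over lists of `{ uri, direction }` capability entries. Since the transceiver calls themselves are the part under experiment, this sketch only shows the list manipulation; the helper name is invented, and "stopped" is the direction value the proposal uses for an extension that should not be negotiated.

```javascript
// Sketch: mark one RTP header extension as not-to-be-negotiated in a
// list of { uri, direction } capability entries, leaving the rest as-is.
function stopExtension(extensions, uri) {
  return extensions.map(ext =>
    ext.uri === uri ? { ...ext, direction: 'stopped' } : ext
  );
}
```

In the proposed API, an app would read the current list from a transceiver, run something like this over it, and write the result back, instead of munging SDP.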
B
B
So the next steps here are: we're going to implement this as a Chrome experimental feature, we're going to ship it behind an origin trial and let the people who really care experiment with it, and then, yeah, we'll have learned something. Is this actually useful? Is this a shape of API that we want to keep? And then we'll come back and revise or accept.
B
C
B
Okay, let's go forward. Do we have the next slide? Gotcha. Hints, yes. So, status update again: we now have the degradation preference spec included in content-hints, and yeah, I think Youenn is a little bit happier with this spec, because there's much more language on what is actually happening when you set the content hints, and there's language about how content hints and constraints interact, including saying that, okay, constraints: if you really ask for a constraint value, then you'll get it.
B
But if you don't ask for something specific using constraints, then the content hints are going to influence your settings, so that if you tell the content hint that what you're sending is music, echo cancellation, which tends to make music kind of incomprehensible, defaults to off. So, obviously... And we did send out the call for consensus for publishing a new working draft, which has only favorable responses so far, but not very many of them, so anyway; we're probably at the stage where it makes sense to ask.
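[Editor's note] The precedence just described (an explicit constraint wins; otherwise the content hint steers the default) can be condensed into a tiny decision function. This encoding is my paraphrase of the described behavior, not spec text; "music" is a standard audio contentHint value, but the function name and shape are invented.

```javascript
// Sketch: does echo cancellation end up enabled?
// explicitEC:  the value the app constrained, or undefined if unconstrained.
// contentHint: the audio track's contentHint ('', 'speech', 'music', ...).
function echoCancellationEnabled(explicitEC, contentHint) {
  if (explicitEC !== undefined) return explicitEC; // explicit constraint wins
  return contentHint !== 'music'; // 'music' flips the default to off
}
```

So an app that explicitly constrains echoCancellation keeps full control, while one that only sets `track.contentHint = 'music'` gets the music-friendly default.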
C
C
Yeah, so we're using this now; we use it in every screen share and it seems to work, and...