From YouTube: WEBRTC WG meeting at TPAC 2020 - part 1
Description
Recording of the WEBRTC WG meeting on October 20, 2020
B
That's in the #webrtc channel. So, the agenda for today: I'll give a brief talk on the state of the working group, we'll talk about tests and implementation status, move over to stats, and then Media Capture and Streams and the other capture specs. So today is largely existing specs, and then on Thursday we'll talk about new work. On Thursday, Harald will talk about insertable streams, we'll discuss end-to-end encryption, where we see SVC, some things relating to getDisplayMedia and getCurrentBrowsingContextMedia, and we'll wrap up any agenda-bashed things.
A
That's new protocols, new APIs, and all this. This is what I have said before; basically there are just a few updates. WebRTC 1.0 should just work across all browsers, all networks and so on, and we should have low-level data access. People want to do funny hats, voice compression, background blur or whatever you want, and they need to do it with high performance.
A
Nothing much has changed, except that some new ideas have popped up, like the in-browser device picker and deprecating the pick-by-constraints mechanism, but the community's sense of it is still that this is working as intended. There are lots of products that depend on it.
A
Next: screen capture. Security is still a troublesome subject, as Jan-Ivar will pontificate on a little bit later today, I think it's today. But we can't live without it. I mean, basically, today the world's work lives on video conferences, and screen share is a totally vital part of video conferences. But we do need to get a TAG review done and push it forward.
A
We need to have the privacy and security review, the TAG review, all the boring stuff, and close those open issues. MediaRecorder is another one that heavily needs updating, but we don't have the editors for it. But the third one on the slide, stats identifiers, is luckily...
A
There is interest across the communities, but nobody came up with any specific proposals for doing any of that in a more efficient way. And we need to be aware that we have people who care deeply about security and privacy issues, and they want to make sure that we write specs that respect security and privacy.
A
We're very close to getting those documents shipped, and we want to look at the new APIs, because if we don't make forward progress, we're not moving. But the use cases and requirements are key, and raw media is kind of the current burning issue for a lot of people. So we've taken on a couple of efforts, a couple of proposals, to access raw media and deal with it, and we'll discuss that on Thursday too, with my famous keyword called "breakout box". I kind of like the keyword.
B
All right, we have Carine and Dr Alex. Carine, do you want to present and turn your own slides?
E
Thanks. Oh, you want me to start the slides?
E
All right, so I'm going to be fast. First, for continuity: because I presented the previous years, I don't have a lot of new content. I have 15 slides, but I can do that a little bit faster.
E
The first little bit of feedback is that it's still difficult to compute how much we are testing, right? We tried to put some coverage metric in place in 2017 and we were forced to remove it; Dom did it again using something called ReSpec for the WebRTC spec itself. That is also used and extended to support Google unit tests and KITE tests for AV1 and raw media. So there would be capacity to extend it, but right now we don't really know how much of the document is actually covered.
E
That's correct, but my understanding was that for the W3C, what is important is only the JavaScript APIs themselves, right? But yes, we will get back to that: when you need to test the network, and when you need to test simulcast, you cannot really test the JavaScript API in itself, beyond just "does it exist". There are some error messages that are complicated to generate and require some programmatic access to the network, and things like this.
E
So if you look at the tests we do have, we can see that there is a small increase in the number of tests, and a small increase in how many tests work on the four different browsers.
E
Because of the way testing is dealt with, the perception for some people that are not browser vendors, and they expressed this, is that it's extremely difficult to actually push any kind of test through; or simply, a lot of people slowed down and lost interest. I think this question will come back in different presentations from different people, and we have to decide whether we want to push this more or not, and what we really do about it.
E
Here you can see that we're running on macOS Firefox 69, Windows Firefox 69, macOS Firefox 71 and so on and so forth, and for each of those we're running all of the tests, and we can check the distribution. If you sum up all the tests, it's 18,000, but that's arbitrary, because it depends on how many configurations you run. What is really interesting is the percentage: if you use the percentage provided by the interoperability view from WPT, it's going to take the common denominator, right?
E
Yeah, it's moving now, all right. Yeah, a little bit too fast. If you go back here, you see that it's telling you the interoperability, in green, of the number of tests that work across all four browsers: it's 44%.
E
KITE can run WPT just like WPT does, so you can have multiple tests in one file, in which case you have a lot of false negatives. Let's say you have 18 tests in one file and one is failing: then it's going to consider that file a failure. We prefer to run it in an individual mode like this, where each of the tests, whether they're in the same HTML file or not, is actually separated, and the error message is given nicely on the right-hand side.
E
Safari Tech Preview 14.1 is a little bit behind, but that's because of the way we test it; I double-checked. We can see an improvement across all the browsers. So now the question really is: are we happy with that? Is there anything we want to improve? That's a different question. Simulcast and SVC have been a different problem. We knew from the beginning of the working group that WPT wouldn't be able to test P2P usage, and that you need to have some kind of network instrumentation to be able to really test ICE.
E
There were different efforts at the beginning of the working group by Mozilla and other people, and then at TPAC in Sapporo in 2015 we decided to include simulcast in 1.0, and that became an even bigger problem. So we came up with KITE. We spoke in Lyon about simulcast being frightening, because we didn't know how much work it would be.
E
We had the simulcast loopback, or the simulcast playground, introduced and contributed by Philipp Hancke this year, which helps give a little bit of visibility from WPT itself on the one-browser setting. But really, we have no real reference test and no indication today of how well that thing works. So now that Henrik has finished the test per simulcast layer, and we did several hackathons and sprints on this, my feeling is that it's good enough, but I do not have numbers to back that up. So.
B
The loopback test disclosed that there were major protocol bugs. For example, I believe Firefox didn't support MID, so that test failed. It turned out there were some pretty big bugs hiding in there.
E
And I don't have the answer for that, really. What we did last year was three different hackathons, working with as many browser vendors and as many open-source WebRTC SFU vendors as we could, to try to have a map. So this is the old one; for example, in that one, Firefox does not support RTX. No, they do now, so it would need to be updated.
E
Unfortunately, it's a lot of work, and when IETF turned virtual, it became difficult to maintain. The proof is that this year we didn't sustain that effort; we could not, and nobody stepped up and continued this, or any kind of effort in that regard.
B
So we're going to have to wait till the pandemic is cured before we go back to this, or...?
E
We could test it against a pipe from Tim, or that would be equally okay, right? We should do that. Now, that's a lot of effort, and even the basic tests are not finished today, and we would like a volunteer to do that. So I don't know; I really don't know. I don't have the answer to that one. I feel like we're missing an opportunity, but I don't know how to do better, to be honest. We have Tim in the queue.
J
Yeah, I mean, I'm not offering to be the test site for all of this, but I am offering to help talk to the SFU vendors and, if we have a concrete proposal, discuss with them what we might do. There was a proposal to do a hackathon at the IETF in Vancouver, but, you know, events overtook us, and I genuinely think that would have taken place, either in real life or, if we had more warning, we might have been able to organize something.
E
With respect to the hackathon in Vancouver, it was explicitly stated that it was a hackathon for people that didn't want to use WebRTC and didn't care about the browser and the JavaScript part of things. So, from the W3C point of view, I'm not sure that these are people that are motivated to work with us to make sure that the JavaScript and the browser implementation is good. Did I read it correctly?
J
I mean, given that it was an IETF event, I don't think you can deduce from that what the response to a W3C request would be. I think they're orthogonal. It's certainly a conversation...
J
...we could have. There are certainly a lot of people who are interested in having simulcast work. And yes, there are a few people; I mean, the SFUs have moved on a lot in the last year, they've got much more public APIs, and there are a couple of new ones. So I think it's worth going back to that group and seeing if we've got a concrete proposal we want to put to them.
E
Okay. In 2019, before every hackathon, one or two months beforehand, an email was circulated to everybody to announce the plan, get feedback, and so on.
E
I don't think that was the case at Vancouver, and it was explicitly stated: "we don't want to work with JavaScript". So let's do it, let's propose and see what's coming. It looks like it's a different group than the one that contributed to the table that I'm showing here, but the more the merrier.
B
So, yeah, okay. Carine, I think these are your slides.
D
So, as you should know, we are now using Process 2020, which has a few differences compared to the previous requirements for Proposed Recommendation. The first criteria are unchanged: we have to show implementation experience, namely two interoperable implementations of each feature; wide review, of course, but we already did that; and we have to close all the issues that came in during the Candidate Recommendation review period.
D
And not make any substantive change compared to the previous CR. Actually, now a CR is either a Snapshot or a Draft; for the purposes of the Patent Policy, we consider the CR Snapshot, and so we are not allowed to make changes between the CR Snapshot and the PR, because it could invalidate the reviews and it could invalidate the patent policy commitments. And, as in previous years, we may remove features that were marked at risk in the previous CR.
D
We have not made any substantive change since then, and we have one feature that is marked at risk. Can you go to the next slide, please?
D
One question that is open is: do we have more testing issues about to come, maybe related to simulcast, for example, or something else in WPT? Next slide, please.
D
So, as Dr Alex said, we had improvements with regard to WPT testing. In the slide, you have a link to the interoperability report that shows the...
D
...the table for today, and there is a summary of what's implemented and what has no implementation. The voice activity flag is already marked at risk in the spec; then there's a list of things that have only one implementation. Well, actually, for the purpose of the interop, we consider that Chrome and Edge are only one implementation, so the big chunks that have only one implementation are the DTLS transport, the ICE transport, setStreams, and the data channel "closing" state.
D
So far, we don't have a report for simulcast, so we need to add something to that interop report, so we can present this to the Director when we request a Proposed Recommendation transition.
D
We were previously also using the Confluence results, the IDL interface tracker, with some scripts filtering the WebRTC ones. This Confluence report now roughly matches what WPT tells us.
D
The red areas are in the same places. The table is still interesting, because it shows the test results by area, more than what the WPT tests are doing.
D
It meant that some areas weren't necessarily bug-free. So I think that the improvement is quite noticeable. Next slide.
D
So, for working group discussion: there is a proposal that we could make.
D
We have two main features that don't have a double implementation: the ICE transport and the DTLS transport. The other ones, setStreams and the data channel "closing" state, should be fixed fairly quickly. Based on our implementers' input, the transports are going to be implemented, but not with the high priority that we would probably like. Now, Process 2020 allows us to modify a Recommendation to normatively correct bugs that we have in the spec.
D
And they will be implemented, without naming any deadline; but since it's going to be a living standard, if we discover bugs, we would be able to correct them inside the Recommendation using the Process 2020 procedure.
D
So the proposal is to not delay the first edition of WebRTC 1.0 any further, and to consider that those features that still don't have double interoperable implementations should not be a showstopper. That's what we propose: we ask the Director for approval to go to Proposed Rec in those conditions, and have a living standard afterwards.
B
Well, I have a question; this is Bernard. We've been talking about the testing problem for a while, and there have been some recent events that I think underline some of the limitations of WPT.
B
For example, I think it was one or two weeks ago, and Cullen is probably familiar with this: we had a breakage in the multiplex/demultiplex code which broke a whole bunch of applications, and in looking into it, this was not something that was covered by WPT. It was kind of basic IETF functionality. And I understand that WPT has two functions: one is as a gauge for whether you advance in the W3C process, but the other is that it's actually used for validating check-ins.
B
So I have a basic question: obviously KITE tests are not being run to validate check-ins, so is there any way to address all of this? Because, you know, it's one thing to develop the functionality; another, in other words, to keep it actually working.
A
How did they do that? Well, it turns out, if you just... it may not catch all cases, but it caught at least this case.
B
So there may be more bang in the whole loopback thing than...
C
Maybe to address your broader question: I think, in general, WPT is this open-source project where proposals can be brought up for additions to the platform. Right now, indeed, the kind of stuff you can do is limited by what the test harness exposes, and apparently, well, that's already quite a bit, but some things would definitely need a more powerful testing platform.
C
I guess the key part of adding something more powerful (and maybe KITE is too powerful, I don't know, but something like KITE, or KITE-light, or something) is that it would require buy-in from the community, and in particular from the browser vendors that are using it as part of their regression testing. And I assume, if the thing that needs to be deployed is too hard to deploy in a CI environment, then it's unlikely to happen; but until and unless that discussion happens, I wouldn't necessarily assume that it's not possible.
E
So, Dominique, I agree with you; this is the concept behind WPT. Just for transparency (and Harald was involved in all the discussions): for two years we tried that. Actually, today, KITE can generate results in the wpt.fyi format, so we can send results from KITE to the database and see them alongside the same results. And we proposed to do that: to run the WPT tests for them, and provide them with results in the same format for all the platforms that the official harness could not handle today.
E
So, for example, Android Chrome, Android Firefox and iOS Safari. We sent that proposal in April, and we sent the data set; that was April 2020. Can you confirm, Harald?
B
Yeah. One other thing I know: for WebTransport, we've had the same issue, right, and what we're doing now is writing these little servers, and that's how we're doing the tests for WebTransport. I don't know if that's a viable thing: you can have little servers that run as part of your WPT test, no?
E
I mean, we could do that for the specific case of simulcast, right, where you need a server running. WebTransport, that's the same problem: you need a QUIC server to be able to run your test, because it's client-to-server. But, I mean, when you are able to generate the same format that is used by the tool, and you propose to do it for free and to maintain it, and it's not happening... I think this is not a technical problem we're speaking about. So, for the specific case of...
E
...an SFU, we can maintain an SFU and run the tests ourselves. In Lyon, we promised to make the tests and the SFU open source, which we did. I think Apple is running it from time to time to test their implementation. But you can make the effort and put it on the table; you cannot force people to use it, even if there is no alternative, right?
C
So, I mean, again, I hear your frustration with what has happened with your proposal, Alexandre, and I fully understand it. But first, you know, it may have failed for a thousand reasons. As you know, there was lots happening in the world in April of 2020, and so, unless you've heard very clearly...
C
...that this is not something anyone is interested in, then I would not necessarily give it up. I understand you have other things to do, but I wouldn't necessarily take this as a definite conclusion that it has no possible future. I would also say that integrating results into wpt.fyi is probably not as powerful as running the tests themselves as part of the WPT infrastructure. A big part of why WPT is so useful is that browser vendors are able to run it as part of their own local CI.
C
Without any external dependency; and, you know, even running an SFU somewhere doesn't actually work for vendors, I assume, because of their need for a no-dependency environment. Anyway, I'm not suggesting that it is easy, or that it will be trivial to get accepted or anything, but I wouldn't say that this is impossible, if we can indeed show that the effort we need to get there is proportionate to the value we expect for the WebRTC ecosystem.
I
Okay, so I'm going to give an update on WebRTC stats, focusing on what has happened since last TPAC and where the implementation is at today.
I
And so, if you remember, last TPAC there were a lot of issues and discussions, and I would say that the primary focus was to enable simulcast, and we did this by moving a lot of things around. So, just to recap: the old stats hierarchy that we had was not compatible with simulcast, because we put a lot of things in the track stats.
I
So we had one track stats per attachment, and it was a mix of track metrics, sender and receiver metrics, and encoding and decoding metrics. And we had the outbound-rtp stats object, but it was famously not per layer. So you might have three simulcast layers at 30 fps each, and it shows up as one outbound-rtp with 90 fps. Next slide.
I
All
right,
I'm
the
one
doing
the
slides
there
we
go
so
the
the
major
update
this
year
is
that
the
simulcast
stats
migration
has
completed
the
outbound
and
inbound
stats
objects
now
contain
the
encoding
and
decoding
metrics
that
were
previously
found
in
track
stats.
We
have
outbound,
rtp
objects,
being
generated
per
simulcast
layer
and
track
related
metrics
have
been
moved
to
a
media
source
and
in
the
spec
we
moved
the
track.
I
I think I forgot to draw an arrow between the transport object and some other objects, but in any case, this is basically the overview of the entire stats spec in terms of the dictionary objects. And if you're familiar with the WebRTC APIs, this maps pretty closely to the objects you see in the APIs, with the addition of RTP metrics and some minor differences, like the transport object being a mix of the DTLS and ICE transports. But it's a pretty good view of things. So, just to show you what is actually implemented today.
I
This slide: scratch, scratch, scratch, scratch. So there are some things missing, but there's not a lot of metrics missing; it's mostly objects. So, next slide. Here, let's overview what's missing. The remote-outbound-rtp side of the RTCP metrics is missing, and this includes stuff like round-trip times and other information from the RTCP reports.
I
We have remote-inbound-rtp, but not remote-outbound-rtp. I think the primary use case of this is trying to estimate end-to-end delay and round-trip time, and there's a WebRTC extension that does offsets to the sender capture time.
I
I think, in terms of implementation effort, it might be interesting to view those as one bulk; but that's in webrtc-extensions, and this is the stats spec. Other than that, we have the sender, receiver and transceiver objects; they're missing. They mostly show the relationship between these objects, so that's already available in the API.
I
But if you want to figure out, inside the getStats report, what's related to what, you want this. In terms of the actual metrics, there's not a lot here: you can get the transceiver's mid, but the sender and receiver objects wouldn't actually contain the encoding metrics; those would be in outbound-rtp or inbound-rtp. Other than that, we've recently added SCTP transport metrics and ICE server metrics, and they have not been implemented.
I
So, all in all, most things are available, but there are some things missing; some of them are available outside of getStats, but some of them aren't.
I
Lastly, to give an update on the mandatory stats in WPT: we show that we have 66 out of 77 mandatory stats implemented in Chrome and Edge, as of M87. I could click the link, I guess, if we want to show the rest of them. Firefox also has a lot of green, but also a lot of red, and Safari is also a lot of green and a lot of red.
I
I wanted to give an update on the percentage of all metrics implemented, but I couldn't find those numbers before this slide. I know that more than 170 metrics have been implemented; I don't know how many metrics we have in total. Hopefully not too many more than that.
I
The simulcast stats do not cover SVC, because they are structured a bit differently. At last TPAC, we came up with a proposal for how we would do the SVC stats, but we never merged a change for that, because we haven't merged the API change to actually support SVC.
I
Today, if you want SVC, you have to rely on a hack that says "VP9 simulcast equals SVC", which is not the way to do it. So this is blocked on having proper APIs for it.
G
Well, I don't have any; I won't talk about stats specifically, but we will talk about the document, yeah.
N
I believe we want to have implementation experience on the SVC spec first, before adding the stats for it, yeah.
C
Yeah, so I think that question has two or three aspects. One is: how are we doing with issues? Are they all closed? Can they all be closed soon? Are some of them "next generation" issues?
C
Another one is implementations. We've discussed that we don't have double implementations of all MTI, and I guess that means even less so for all the stats. Which brings us to the third point, which I think is going to be key for that particular aspect: using Process 2020 to manage it as a living standard, which I have somewhere on my long and confused list to make a specific proposal for. But that would be my suggestion: we lock into the spec the things that we know are interoperable and implemented twice, and mark the rest as "this will come in the next iteration of the Rec", basically.
M
All right, that sounds reasonable, I think. When do you think we could have at least worked through those steps?
M
I think we added a few based on the conversations on the PRs last year, but I guess you're talking about the implementations, not the specs themselves.
M
Yeah. Henrik or Jan-Ivar, do you have any thoughts on that?
I
No, I don't have an update on that. I'm personally not spending a whole lot of time on getStats right now, and I haven't heard anyone ask for this other than, well, back when it was added to the spec.
P
I think for Firefox we're committed to implementing the mandatory stats, but not committing to a timeline at the...
P
...moment. And I think the SCTP stats in particular are low priority, since they can be shimmed, as far as I understand, and they only provide usage metrics of the APIs, which you can...
M
Right. So, I think, a very high-level question: do we have enough stats from a spec perspective? If there are any missing, please create an issue on the issue tracker. And in terms of implementation, I think what Henrik is saying is that if anyone wants to bridge the gap between libwebrtc and the JavaScript layer, CLs are welcome.
M
We had one a couple of months ago, or at least at the beginning of the year, and I think we moved one of them that was more controversial, networkType, out from the spec into the other spec, Henrik's spec.
M
So I think that's where we stand. I think there's only one PII marker right now in the whole spec, which is, I think, the codec implementation: decoder implementation and encoder implementation. Those are the only ones, and since it's not mandatory to implement, I think people can leave it as null.
P
There's a similar issue in webrtc-pc, raised by PING, on exposing hardware capabilities through getCapabilities and SDP, and I think our pushback there was that this information is also available in other APIs, like WebGL and WebGPU, and that any permissions should probably be at a higher level, in a different spec.
M
Right, so I think that's why the codec implementation thing is just behind a PII flag; it's up to the browser vendors to implement it, or put something there. Like, they could just say whatever the user agent string is. In many cases, I think what people want from it is: is it a hardware implementation or a software implementation? That's the only thing people are looking at, like one bit of information, and if anything, which software implementation.
P
Firefox... I'm just gonna do it from here.
P
Yeah, I think this is enough; I don't have any animations or anything. So, all right, we had a joint meeting with PING last week. It helped us review the media capture APIs for camera and device enumeration. Twelve issues were filed back in the beginning of the year, when they started this review; four are still open, eight were closed, and seven PRs were merged from the review.
P
Hopefully most of this audience was also in the joint PING calls. I'm not going to double up too much on slides or do a review of what we did, but I'm going to review the open privacy issues that we presented to PING, where we showed them our consensus on our proposed solutions. So I'm just going to go through the solutions, and then there are some other issues we can dive into for media capture in general.
P
So
640
was
the
only
reveal
labels
of
devices
the
user
had
given
permission
to.
We
agreed
that
labels
are
bad
for
web
compact
and
privacy,
but
it'll
take
time
to
get
rid
of
we've
improved
exposure
quite
significantly,
because
device
ids
are
now
not
in
enemy
devices
except
during
active,
live
camera
capture
or
microphone
capture,
and
shortly
after
in
the
same
document
session,
labels
of
non-granted
devices
are
still
needed
during
capture
to
support
sites
implementing
device
pickers
in
browsers
that
don't
grant
all
devices
at
once.
P
That
would
be,
for
instance,
firefox,
so
the
long-term
solution,
unfortunately,
is
is
long-term
and
that
is
to
move
toward
in-browser
pickers
for
camera
and
which
we
have
moved
to
midi
capture
extensions,
which
is
our
new
repo
for
for
holding
items
that
are
incubated
to
be
into
considered
for
later
wreck
inclusion
in
media
capture
main
and
right
now,
it's
just
in
brazil,
camera
picker
and
I
think
some
other
things
like
channel
audio
channel
layout
beyond
stereo
that
kind
of
stuff.
P
Now
there
were
some
short
term
issues
mentioned
we're
going
to
do
some
pr's
for
those
that
labels
may
contain
private
information.
So
we
should
encourage
sanitation
and
clarify
that
they're
for
display
purposes
only.
P
We
have
some
web
developers
comparing
them
to
model
and
manufacturer,
which
is
not
desirable,
which
is
why
labels
are
bad,
so
we're
going
to
close
this
issue
once
those
close
short-term
issues
are
resolved
and
revisit
when
we
do
in-browser
picker
extensions,
so
645
limit
enumerate
devices
so
that,
if
you
only
share
camera,
you
can
only
integrate
cameras
and
the
consensus
is
to
do
that.
P
This
is
what
chrome
is
implementing,
so
breakage
risk
should
be
low,
should
enumerate
devices
default
to
return
an
empty
list.
That
was
a
request
from
ping,
unfortunately,
not
very
web
compatible
because
people
rely
on
these
booleans
to
websites
rely
on
these
booleans
to
decide
whether
to
show
ux
or
camera
and
microphones.
P
So,
instead
we're
going
to
add
we're
going
to
continue
to
return
those
booleans,
but
the
spec
allows
user
agents
to
fake
devices
which
safari
has
an
option
for
and
we're
going
to
add
a
note
for
that.
P
Any
questions
please
interrupt
and
we
also
have
input
device
info
get
capabilities,
meaning
you
can
do
call
enumerate
devices
during
live
capture
and
get
the
capabilities
of
all
devices.
This
is
also
needed
to
ensure
to
maintain
the
app
constraints
during
the
picker,
as
well
as
for
to
match
the
initial
gum
request
long
term
again
in
browser
picker
with
constraints-based
and
browser
pick
it
would
obsolete
this
need.
P
I
should
clarify:
there's
a
difference
between
you
can
have
it
in
browser
picker
and
still
have
a
control,
constraints-based
selection,
where
you
still
offer
the
user
choices
within
the
constraints
of
the
app
which
is
separate
from
you
could
also
go
with
an
in-browser
picker
that
has
no
constraints,
so
the
question
of
whether
to
keep
constraints
is
sort
of
orthogonal
to
a
picker
in
a
way.
P
There is no consensus here, but the feature is at risk, so we're going to revisit that later with the in-browser picker. And now we're into regular issues, and I think you had some issues here, but I don't see slides for them.
P
Okay, so, in order to make progress, I tried to triage some of the different specs and find issues that would be helpful to have in front of the working group. So issue 660 is handling rotation for camera capture streams.
P
What happens when the phone is in portrait? We can test that in the different browsers, and what happens is, basically, the constraints pretend everything is always in landscape. So you can constrain against the values as if they're landscape.
P
However, when you turn your phone into portrait, or if you have your phone in portrait and call getUserMedia, the constraints are applied in landscape, everything is applied in landscape, except when you call getSettings: the values you get back are rotated if you're in portrait mode. So only getSettings is rotated; the constraints are not, capabilities are not, constrainable properties are not. So the proposal is to specify this, because that's what browsers are doing. Any objections?
P
It's also simple, because it avoids a lot of issues. We don't have the overconstrained event anymore. So if you had min/max constraints, they might apply in one aspect, but if you rotate the phone they might no longer apply. So it's actually quite a simple, if not exactly elegant, solution.
Q
P
Not specifically for applyConstraints, but I think we would want to specify that it works the same as the initial getUserMedia call.
P
All right, if there are no questions, then I'll move to the next slide. Issue 735 is about the fitness distance: right now it says "may", and this is a suggestion to change this to a "should".
P
This is basically... there are two things. There's the settings-selection algorithm, which is used in both getUserMedia and applyConstraints; that already says "should". But getUserMedia also uses it for device selection, meaning: pick one camera over another based on constraints, and that one has a "may".
P
I think we need better web compat around device selection, and that's important for both this spec and other specs like Image Capture. So, for example, if you specify, you know, "I want 1080p but also a 60 hertz frame rate", which device is more important to the app? And you could specify, you know, "I want the device from last time, but if you don't have that device, I want a new device that's HD 1080p, ideally". So, you know, rather than figure out which is exactly more important...
P
There's a million corner cases. "Predictability trumps usefulness at the edges", as I like to say, which means that it's more important that it works the same across browsers than how intuitive the constraints syntax is.
P
You can always come up with combinations that are not intuitive, but as long as they produce the same results in all browsers, I think we're good. And similarly for pan, tilt and zoom: you could specify "I want 1080p, but I also want to be able to zoom".
P
So getting a stronger consensus around fitness distance would help. I mean, we have consensus; stronger implementation requirements around fitness distance would ensure that we could get better web compat in Media Capture Main, but we could also fix some current spec bugs in Image Capture, which I'll discuss on a later slide.
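For reference, the per-constraint fitness distance being discussed can be sketched as a small pure function. This is a simplified reading of the Media Capture Main formula for ideal numeric values (exact match is 0, otherwise |actual - ideal| / max(|actual|, |ideal|)); the `totalDistance` helper and its argument shapes are illustrative assumptions, not spec text.

```javascript
// Sketch of the per-constraint fitness distance: an exact match is 0,
// otherwise |actual - ideal| / max(|actual|, |ideal|), so distances are
// normalized into [0, 1] and comparable across properties.
function fitnessDistance(actual, ideal) {
  if (actual === ideal) return 0;
  if (actual === undefined) return 1; // value not reported by the device
  return Math.abs(actual - ideal) / Math.max(Math.abs(actual), Math.abs(ideal));
}

// Summing per-constraint distances ranks candidate devices/settings;
// the selection algorithm would prefer the smallest total.
function totalDistance(settings, idealConstraints) {
  return Object.entries(idealConstraints)
    .reduce((sum, [name, ideal]) => sum + fitnessDistance(settings[name], ideal), 0);
}
```

For the 1080p-versus-60Hz example above, a 1280x... 60Hz candidate and a 1920-wide 30Hz candidate each miss one ideal, and the normalized distances decide which miss costs more.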
Q
This is, I mean, "may" and "should"... A "should" does not change conformance requirements, so user agents can still do whatever they want, be it a may or a should. I don't think that we can do a "must" without providing additional statements, like: hey, if the user is actually selecting a device, then the fitness distance wouldn't matter. And the same applies to pan, tilt and zoom.
Q
So, I don't know, I'd like at least to specify some guidelines then, that we could at least agree to, like the device picker thing, or maybe pan-tilt-zoom as well, or permission-related things, right, so that the "should" is a bit stronger in a sense. The additional issue I have with fitness distance in general is that in a lot of cases you end up with fitness distances which are exactly the same between devices.
Q
P
Yeah, well, I think, even if there is no web compat issue, even if browsers are doing the right thing today, I think it may have always been a mistake to use "may" here, because may is quite weak. It's sort of like: most browsers won't do this, but they may. I think the reason it was a may was actually because of Firefox, because we have a permission prompt.
P
So, actually, I'm not opposed to strengthening fitness distance further, and I'm not opposed to adding more guidelines, but I don't think any of that stands in the way of fixing this. I think the "may" here is more of a bug. And, for the next slide, the one exception that I think caused the may was that in Firefox we do show a prompt sometimes, where the end user gets to override it.
P
So basically, constraints tell us what is most important to the app, but then the user is more important than the app. So the proposal PR here is to add the "should", but then add a clarification that the user agent may also use internally available information about the devices, such as user preference.
P
Q
I
It takes us one step closer, right? The ideal would be where everything is very well specified and testable, and "should" and "may" are both not testable, but at least we clarify the intent and say "do this unless you have a good reason not to", rather than "do this if you feel like it". So I think we can do this and then iterate on further improvements separately.
P
Great. All right, next slide. So I want to discuss... not a lot has happened, unfortunately, on media capture extensions for camera and microphone, but I want to present some of the slides we showed to PING last week, so apologies to people who were part of both meetings. Long term...
P
Now, we have had great progress in audio output capture within the last year, where we now actually have an in-browser picker there, and it's selectAudioOutput. I'm mentioning it here because it's a setup for camera and microphone, to contrast the differences. So you can call selectAudioOutput, which gets you a picker, and then, only if the user selects something to share, you get an id returned and also exposed in enumerateDevices.
P
This is great because it works, and then you can take that id and call setSinkId with it. This works without microphone permission, which is unlike the old API, and it means you can redirect audio from any source. So that's a great new feature. Actually, it's off in iframes by default; it needs allow="speaker-selection". Firefox is planning to implement this soon, and we want to thank Safari for driving the design on this one.
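The selectAudioOutput-then-setSinkId flow just described can be sketched like this. It is a minimal sketch: the `mediaDevices` and `audioElement` parameters are injected (normally `navigator.mediaDevices` and an `<audio>` element) so the flow can be exercised outside a browser; in a real page, selectAudioOutput must be called from a user gesture.

```javascript
// Sketch of the selectAudioOutput flow: selectAudioOutput() shows the
// in-browser picker; only if the user picks a device do we get a deviceId,
// which we then route audio to with setSinkId(). Works without mic permission.
async function routeToUserChosenSpeaker(mediaDevices, audioElement) {
  // Must be invoked from a user gesture (e.g. a click handler) in real pages.
  const device = await mediaDevices.selectAudioOutput();
  await audioElement.setSinkId(device.deviceId);
  return device.deviceId;
}
```

If the user dismisses the picker, selectAudioOutput rejects, so callers would typically wrap this in a try/catch and simply keep the default output.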
P
So there's still... we also added a deviceId member. These are not constraints; it just has the same name as a constraint.
P
Hardware light on for three seconds, or turning on camera/microphone in-browser notifications for three seconds. And again, the id only appears in enumerateDevices if the call succeeds. So, leading into: since we have this, why don't we add selectCamera and selectMicrophone?
P
Well, there are a couple of complicated reasons for that. Web apps want constraints on camera selection, like resolution. Web apps want some discovery. There are some emerging use cases where, you know, Twitch streamers now are using two cameras, where they can show their face and their dog; WebVR might be other, more complicated uses, where it's not just a simple web conference. It's easy to think that the camera is only for web conferencing.
P
Users want sites to remember their configuration and not pick a device every time, although that was somewhat solved with selectAudioOutput. And also, user agents differ in permission models: some browsers have on/off permission states, others have one-shot, or both, and there's still innovation in that space, which I think we should protect. And, more importantly, what should the migration path be?
P
getUserMedia, unlike setSinkId, is already implemented in all browsers, so what site is going to upgrade to prefer a less powerful, less established API? So, for now, the consensus goal that we have is to get rid of labels and capabilities of non-captured devices. Going further than that, we don't have consensus right now, but that would be things like putting more spec limits on permissions, or limiting capability exposure to the currently selected device or the in-use device.
P
So what I presented to PING here was basically the "user chooses" semantics, which I call getUserMedia++ here, where we have a migration path where we can fix existing getUserMedia and enumerateDevices. But we have no commitments yet for anything like selectCamera and selectMicrophone. So, if you look at the "during capture" part... so, before capture, we're very private now, which is good.
P
So the remaining issue is that we still share all labels of all devices, and we share all the capabilities with the website during capture, so that they can build their in-content device picker. So we would like to get rid of that, and that's what the middle option here does.
P
But what it would also do is make getUserMedia usable for implementing in-browser pickers. And one of the criticisms would be that, if we were to flip the default, this would mean people with multiple devices might see...
P
People with just one device would see no difference, but people with multiple ones might see differences, on more like demo sites and other sites that haven't implemented a strong device-selection, device-management policy. So they might be prompted every time, because the websites use video: true every time. Whether that's more or less annoying than the site picking the wrong camera every time... it's probably less annoying than that, but more annoying than if it just happened to 50/50 pick the right device every time.
P
I
Next step question: wouldn't removing labels... So, if you go back to the slides... one more slide... step three, "remove all labels from enumerateDevices"... all right, I think it's slide 71... okay, we're way too far back. Oh, there you go, yeah: remove labels from enumerateDevices and application capabilities, and then maybe flip the default. So my question is: if you don't flip this default, and you don't actually, you know, force-migrate everyone to "browser chooses", how could you possibly remove labels? Right? Because if the browser isn't doing the picking, then the application is doing the picking, and then what will they display? It will be "USB device", one, two, three, four.
P
Well, so the steps are: first, we implement "user chooses" in all browsers; that's the big one. Once people have that, sites don't have to use in-content device selection anymore. So how do we get them off of doing that? Well, first we have to offer the ability: they can call getUserMedia with the "user chooses" semantics in order to invoke the now newly wonderful in-browser pickers.
P
So now there's a: how do we get sites to call that API? We remove the labels from enumerateDevices, or we threaten to remove the labels from enumerateDevices, which means that it doesn't really break sites; it just makes them less attractive, which is perfect, because their pickers will still work, but they'll say "camera one, camera two, camera three", and their users complain to the site, and the site goes: oh, we need to call the new API to get the labels. Okay.
R
Sure. Was there a question? Yeah, I guess... this is Cullen here. I mean, the breaking... I mean, this... So look, the idea of getting a better way to do this...
R
I'm all in favor of this. This is one of the worst parts; I think, you know, this is one of the most constantly identified problems of people using WebRTC systems: the difficulty of controlling which input devices they're using. So, I mean, I'm really glad to see what we're doing on that. But the idea that the way we're going to get people to move to the thing we wish they would do is by making their life really painful for the thing that they currently do...
R
I would be surprised to see Chrome and Webex and everyone else do that. I mean, if you talk to people about whether they would actually break that...
P
P
Sure, so the migration strategy does not put in timelines. So we could offer the new functionality a year ahead of any... you know, we might not be able to remove labels for a couple of years, and then we can build the new APIs.
P
R
P
Well, well, they're all here. Do you like what we're doing?
R
No, I mean, like, no, they're not all here. I mean, like, go pick the major apps that use WebRTC. (Sorry, I thought you said vendors; I meant browser vendors.) The people making the JavaScript, making the app? Okay, the editor is not the browser vendor, right? Okay.
P
R
P
In order to remove the... in order to remove the permission... sorry, the privacy leak here, we have to remove the labels. So at some point... and this is an avenue where we would lessen the pain of doing so by offering new APIs. I think that was my point with the first slide: if we implement a new API and never get rid of getUserMedia, we still have a privacy issue, because all the trackers are going to use the old API.
Q
Yeah, I think web developers will probably look at what is new; they will look at the old getUserMedia and the new one, and if the device picker is a better user experience, we will probably have a nice path of deprecation. But that's really the thing there: it should be much better than what they can do.
J
I
Is that correct? The user will get what the user wants, because the user will be able to pick it in the in-browser picker. The unexpected thing would be the in-app picker, which would still show, you know, "device one, two, three, four", and then I don't know if it would pick those properly. I think it would, but it would make less sense than the in-browser prompt at that point.
I
Q
J
You could go a step further, and you can make it so that the in-chrome prompt limits what comes back from the in-page prompt, so that what they see, they select; like, this is for the site that's done nothing, okay? And you do the last two steps simultaneously: the user comes to the site, they do something, they choose the camera, which happens correctly, and then the in-browser... the within-the-page picker says "you've chosen the camera", effectively, because it lists the only one that's still available.
I
Yeah, good for sites that... you know, the three-year-old site that didn't update. If we take away the list of devices, you know, switch the default, so we only return the one and only device that the user did pick, the user would be super happy...
I
...you know, with their first choice. But if they later go "oh, hang on a minute, I want to change the camera", and the only way to change the camera inside of this three-year-old app is to look at enumerateDevices...
I
The app will think that there are no other options and will probably not re-prompt the user. So you could either get stuck with the wrong camera in this edge case, or you do list all the devices, but you, you know, label them one, two, three, four.
P
I think we're a couple of years down the road here, so I don't think we need a decision at this point, and I think we're actually happy to let user agents decide how backwards compatible they want to be. For instance, maybe only some browsers... maybe Safari will want to deprecate their labels first, in their own interest, maybe not. And then websites...
P
The challenge then becomes: oh, labels don't work in Safari; maybe, oh, there's a new API we should use. I mean, that's not very different from what we have today for setSinkId, for example. So, in any case, this is what was presented to the PING working group, and they seemed to like that. And the next plan is for Firefox to implement selectAudioOutput, and we hope to gain experience from that, because there are some UX challenges that are similar in that area.
P
P
All right, then we have the next slide. It says...
P
I
Thanks. All right, cool. I think the only thing that goes away in a 1.0 implementation is calling enumerateDevices before you do the getUserMedia call and seeing no labels. Is that correct? ...No? Right, you only have one device in that case, audio/video true/false.
P
So in 1.0 it already says that you don't see any labels or device ids, except for one camera and one microphone, until you're actually actively capturing camera. So that's stricter than it used to be. It used to be that you could have persistent permission to devices and then you would get labels, and the spec now says that's not sufficient: you need to actually, actively be capturing in the document, or have been capturing in a document...
P
...the same document. And the reason for that is web compat, because that gives us much better web compat between browsers that have persistent permission models by default, like Chrome, and browsers that don't, like Safari and Firefox. And what we've basically done is deprecate the enumerate-first strategy of device picking, and now most websites, like this one, have a device-first strategy.
P
You ask the users for the device they used last time, or the OS default, and then, once they see that camera and have granted permission, you have a, you know, gear-symbol options panel where you can switch around between your devices. And that seems to be good for most sites. There's been some pushback.
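The device-first strategy just described can be sketched roughly as follows. This is an illustrative sketch, not spec text: the `preferredCameraId` storage key and the injected `mediaDevices`/`storage` arguments are assumptions of this example (a real app would use `navigator.mediaDevices` and something like localStorage).

```javascript
// Sketch of a device-first flow: ask for the device used last time via an
// ideal deviceId constraint (falling back to the OS/browser default), then
// remember whichever device was actually granted for next time.
async function deviceFirstCapture(mediaDevices, storage) {
  const lastId = storage.get('preferredCameraId'); // hypothetical app storage key
  const stream = await mediaDevices.getUserMedia({
    video: lastId ? { deviceId: { ideal: lastId } } : true,
  });
  // Record what we actually got; an ideal constraint never fails outright.
  const settings = stream.getVideoTracks()[0].getSettings();
  storage.set('preferredCameraId', settings.deviceId);
  return stream;
}
```

Only after this call would the app enumerate devices for its gear-symbol options panel, which is exactly when labels become available under the stricter 1.0 rules described above.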
P
I think Big Blue Button was one... that's another one that had a different user flow, but such is life. This was in response to PING, and we're very happy with our improvements in that area.
P
All right, other capture specifications. I see there are some, and we have slides per spec here. So this first one is screen capture; I think that's issue 60.
A
P
P
Yeah, I think that makes sense and we should, but, terminology-wise, I think we should only use the phrase "browsing context" and avoid the words "current document", just to avoid confusion. So we can align on the language there, I think.
I
So I agree with this. Just a question, because here we talk about changing the current document: I think there's language somewhere that you can't change the source, or am I misremembering? I'm just wondering if we need to clarify something there, like what is the source, because it sounds like the source changes if you change the tab.
P
We should be clear, then, that the source is the browsing context, which is the frame within which the document is displayed, if you will; it's a container. So we're sharing the container, and you're seeing the current document at any time. So if you navigate, if you go through the back/forward cache, for example, with the back and forward buttons, you would see the captured content change, and I think most users understand that when they're sharing a tab, the capture doesn't end if you hit the back button.
I
Oh right, right. And also, I'm thinking of the case where, in Meet, if you click on a different tab, there's a UI button in the browser that says "share this tab instead", and then you actually move the capture to a different tab.
P
P
I'm happy with that. I'm just suggesting we clarify that part, because I wasn't sure if Chrome's tab switching was against the spec or not, but I'm happy to hear it's not.
P
Oh, just... I think it would be a problem if the permission prompt, the picker, made it look like you're sharing one tab, and then later you can share a different tab.
P
So if you had a choice that said "current tab", for example, that might be a way around that, or "the active tab" as a special smart choice, if you will. If you have other UX ideas there, we could probably add that as an issue to the spec. I mean, we should be allowed to innovate, but it'd be nice to double-check that it makes sense.
P
As a concept, but not an exposed surface? Oh.
K
P
The spec is venturing into trying to dictate UX a little bit here, which I think is fair, given the security implications. But yes, you're right, there's no exposed surface other than the track label, where I think we have an open issue, but we're not proposing a slide here, because we don't have a good solution yet for that.
Q
So some people are asking to be able to know whether a given tab is same-origin or not, or is a whitelisted origin or not, and if it's not the right origin, then they would mute the track or disable the track, basically.
Q
A
All right, this part is easy. It's relatively easy to go.
P
I
One... is there... These are again old slides from TPAC last year. All right, let's delete them.
P
No worries. Image capture, which I promised to come back to. So in image capture there are two problems that were discovered this year with pan, tilt and zoom constraints. Right now the spec has, kind of... I hate to use the word hacky...
P
But it's basically: whenever you see a true... So the goal, for context: they needed a way to call getUserMedia and say "I want pan-tilt-zoom functionality", because that requires elevated permission, but they didn't want to specify a value for zoom, or especially for pan, because if the camera is currently panned, they don't want to alter the default value. They don't want the camera to move just because they're getting permission to the camera. So they invented this true union, where you could specify true.
P
Instead of a value, that basically says: "I don't have a value yet for pan, tilt or zoom, but I want that functionality". But the way it was implemented is a bit unfortunate, a bit of a bug, because it does not...
P
I think we assumed that it would influence fitness distance, but it does not. So the proposal there is to expose true and false as first-class values for these constraints. The trickiest part is mostly the WebIDL for implementers, but the net effect on users is quite intuitive,
P
I think, which means that wherever you can specify a value for pan, tilt or zoom, you could also use true or false, and then that basically gives you access to the existing fitness distance algorithm in Media Capture Main. And it's not that complicated, because the input is either a value or a boolean; if it's a boolean, then we have fitness...
P
K
P
I should clarify that, because of a change in Media Capture Main, required constraints are now opt-in, which means that there's a big impact on Image Capture: you cannot use required constraints anymore.
P
A
P
Got it, right. Okay, so there are some permutations there that you could use, but they're not adding much new functionality, although you could use applyConstraints to... well, this isn't really necessary for applyConstraints, because you already have a camera, and it either has pan-tilt support or it doesn't. So this is largely redundant for applyConstraints, yeah.
Q
P
Okay, well, there's already a "should" there, so that should allow you to innovate, and I think we're happy to have better ideas, but we also want to fix the bug that is in the spec right now, I think. Because right now, if you ask for pan, tilt or zoom with true, and then you specify one other constraint, like 1080, there's really no preference for the pan-tilt-zoom capability at all, the way the spec is written.
L
D
P
Right, so, yes: the constraints would then narrow down the selection of choices first, and then, from that, if there are any pan-tilt-zoom cameras that made the cut, then, you know, you would be able to show a permission prompt based on the camera that you're asking to use it for. And if that's more than one camera... permission is somewhat orthogonal to that, I claim.
P
I agree that user agents should try not to grant more permission than needed for what has been returned to the site.
P
So that's sort of orthogonal to this a bit, because this is more about getting web compat around the way that applications describe their demands.
P
All right, if there are no other questions, then we can move this to "ready for PR".
P
All right, so the second part of that problem is that it's a bit underspecified, or unspecified, whether non-pan-tilt-zoom cameras actually satisfy the default values, because regular cameras have one-to-one zoom, right? So does zoom: 1 give you regular cameras, or does it guarantee you an adjustable-zoom camera? Same for pan and tilt. So proposal A is to say they do, which means that specifying true would prefer... or zoom: 2, which would imply, since no cameras have zoom 2 by default...
P
...that would prefer an adjustable camera, whereas asking for zoom: 1 gives no camera preference when it comes to adjustable zoom. Proposal B is that they do not qualify, and then we could specify that in prose in Media Capture Main, and my suggestion here was: for all camera constraints... for all constraints not in the list of inherent constrainable properties...
P
...if the constraint name is not supported by the device, the fitness distance is one. And the list of inherent constrainable properties is something we added recently; it has a list of basically deviceId, facingMode and one more thing, which are properties that are inherent to the camera. For instance, a camera has a facing mode or not (the website doesn't... the browser doesn't always know), but we want to exclude those from this new rule, because facingMode does not imply "give me a camera that can flip", right?
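Proposal B as described above can be sketched as a small rule. This is a sketch of the proposal, not adopted spec text; the speaker names deviceId and facingMode plus "one more thing" as the inherent properties, and this example assumes groupId as the third, which is an assumption of this illustration.

```javascript
// Sketch of proposal B: for constraints outside the inherent list, a device
// that does not support the constrained property at all gets a fitness
// distance of 1, so constraining zoom (true or a value) steers selection
// toward cameras with an adjustable zoom.
// The third inherent property is assumed to be groupId (illustrative).
const INHERENT = new Set(['deviceId', 'groupId', 'facingMode']);

function unsupportedConstraintDistance(name, deviceCapabilities) {
  if (INHERENT.has(name)) return 0; // excluded from the new rule
  // Unsupported constrainable property: maximum distance.
  return name in deviceCapabilities ? 0 : 1;
}
```

With this rule, a fixed-lens camera scores distance 1 against any zoom constraint, while constraining facingMode does not penalize a camera that cannot flip.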
I
If zoom: 1 is no camera preference, is this the same as not specifying zoom at all?
L
L
The spec then also says that zoom is usually a ratio, but it doesn't say that it must be a ratio, so it could be that it would default to something other than one, right.
P
P
I
Where did we land on requiring that you get the zoom camera? If I have one zoom camera and one non-zoom camera and I say zoom: true, will I necessarily get the zoom camera, or would I only maybe get the zoom camera?
P
Well, we removed required constraints, so there's no way for the site to demand a pan-tilt-zoom camera. But, assuming we fix the fitness distance, then, if you put in, you know, pan, tilt, zoom, then, unless there are competing constraints, you would very much get that camera.
P
I don't know if the spec actually allows the user agent... the users... to opt out of pan-tilt-zoom; they might, so you might still not get it. So there's no guarantee that you can get a pan-tilt-zoom camera, is the short answer, but you can check: once you've gotten the stream, you can use getCapabilities on the track to figure out whether you can adjust these values or not, which is what you're going to do anyway, because we never standardized min and max ranges for pan, tilt or zoom.
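The request-then-check flow just described can be sketched like this. It is a minimal sketch under the assumptions of this discussion: bare `pan/tilt/zoom: true` requests the capability without moving the camera, nothing guarantees a PTZ camera is granted, and the `mediaDevices` argument is injected (normally `navigator.mediaDevices`) so the flow can be exercised with fakes. The halfway-zoom adjustment is purely illustrative.

```javascript
// Sketch: ask for pan/tilt/zoom with bare booleans (no value, so the camera
// is not moved on grant), then inspect getCapabilities() on the resulting
// track, since min/max ranges are not standardized and the user agent may
// not have granted a PTZ camera at all.
async function openCameraWithZoomIfPossible(mediaDevices) {
  const stream = await mediaDevices.getUserMedia({
    video: { pan: true, tilt: true, zoom: true },
  });
  const [track] = stream.getVideoTracks();
  const caps = track.getCapabilities();
  const canZoom = 'zoom' in caps;
  if (canZoom) {
    // Adjust within the device-reported range, e.g. halfway (illustrative).
    const mid = (caps.zoom.min + caps.zoom.max) / 2;
    await track.applyConstraints({ advanced: [{ zoom: mid }] });
  }
  return { track, canZoom };
}
```

This mirrors the point above: the capability check after capture replaces any up-front guarantee, because required constraints are gone.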
Q
P
Well, well, I think... I don't know if I agree with that. I know that with constraints we have a lot of syntax.
P
That is overkill, but it's well implemented, and, at some point... I think it's more important that it's predictable across the different APIs than that necessarily every use case, every corner case, is addressed. So... is that something we can continue discussing?
A
F
Hey, Jan-Ivar, do you want to say something about implementation and the various...
P
Plans? I think, for Firefox, we don't have any immediate short-term plans.
Q
P
I
To me, it seems confusing to mix picking a camera with, you know, acting like reconfiguring that camera after it's picked, right? Maybe I want the camera with a very high capability to zoom.
P
P
That way, I think... since these are edge cases anyway, if someone puts zoom with any kind of value, regardless of whether it's a boolean or a number, you're asking for a camera that has adjustable zoom. I think that just makes sense, and proposal B would clarify that in the spec right now.
Q
If everybody agrees with that, zoom: true in getUserMedia and numerical values in applyConstraints, so that's what makes sense, we should just state that in the docs and build the spec around those ideas.
P
Q
Yeah, I guess we could do that. I hope I will get backup when I file the issue.
P
A
P
Right, so... all right, cool. So there's some... This is an old issue, just to move mediacapture-fromelement along a little bit; hopefully there's agreement on this one. There's some old unimplemented language that had this weird behavior: a MediaStreamTrack can only end once, so a track (an audio or video track in an element, in the element sense), if it's enabled, disabled, or re-enabled, would be captured as two separate tracks. I don't think anyone's implemented that, so the proposal is to instead tie the MediaStreamTrack lifetime...
P
...the MediaStreamTrack you get from captureStream: tie its lifetime to the audio track and video track in the element. This is for element.captureStream(). The only downside is that the MediaStreamTrack would then produce nothing at times, basically pausing on the last frame when disabled or restarted, but it seems a lot simpler to understand the model that way, and it also fixes infinite cycles: basically, if you do element.srcObject = element.captureStream(), or if you create a different cycle through a second element...
P
The srcObject load algorithm actually saves us there, because it removes all selected enabled tracks, and that will cause the captured stream to end. So as soon as you assign srcObject to something else, then what it is already emitting will end, so you don't have to worry about cycles.
P
A
P
P
B
Okay, thank you. We'll be meeting on Thursday at this same time, using the same conference parameters, and we'll be focusing on new work.