From YouTube: WebRTC Working Group Virtual Interim October 12, 2017
A
And assuming that goes well, we'll also discuss Media Capture issues, with an editor's draft to follow. So, current status of WebRTC-PC: at least as of this morning, we had 100 open issues, with only three blocking advancement to CR, and that's our main focus today. Once those issues are resolved, we will go to CR.
A
So, a reminder about TPAC: hopefully you've all registered. I believe the discount deadline is up, but in any case, if you want to go, you need to get in your travel reservations and registration ASAP. We have put out a draft agenda and we're looking for feedback on it, so we can make the best use of time; in particular, if you're looking to present, please let us know. So, a little bit about this virtual meeting.
A
C
So, is everyone okay with merging these? Yeah, I'm still trying it against the main branch.
D
E
D
C
Yeah, I agree: the behavior should be that you can continue receiving media, because the new DTLS association uses a new generation of ICE candidates. The previously selected candidate pair is going to be used exclusively for the old DTLS association; in fact, media is going to need to be received on it. I don't see any reason why it shouldn't be decoded, but as far as whether this is explicitly called out by an RFC, I'm not sure. Okay.
D
C
Yeah, so the previous slide was basically the entire contents of the other PR, which is again clarifying the scope of the objects and whether or not they get recreated. The second slide is only showing half of the PR, sort of; the other half is just the descriptive part.
D
I'm [inaudible], and I don't object — sounds good. Okay.
B
F
B
Next slide, right. Bernard? Yes — yeah! So this one is about the text we have in the WebRTC spec on centering, scaling and cropping. First of all, it's in conflict with what JSEP says, and I think Justin and Cullen and many others have pointed this out. Then I started to look a bit more carefully at what we actually say in the WebRTC-PC section, and I am highlighting it here.
B
Here we talk about the desired video size and the destination size, and what "desired" and "destination" mean is sort of undefined here. My first guess would be that the desired size is the size of a video element on the receiving side that is used to render the video. But the sender will not know about that.
B
B
However, this desired size and destination size could also mean matching the size that could be sent, and that's what is in the example in the WebRTC-PC text. But I think that could lead to unintended effects. I have an example here: if you have a 1920 by 1080 video with a 16:9 aspect ratio, and the maximum size that can be sent is 1440 by 1080, and the video element on the receiving end also has a 16:9 resolution but smaller in size, then you would actually throw data away.
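The 1920×1080 example can be made concrete with a small sketch (the helper name and shape are illustrative, not from the spec): a center scale/crop to a 1440×1080 send size discards columns that a downscaling receiver could have kept.

```javascript
// Illustrative sketch, not spec text: what a naive center scale/crop
// rule discards when fitting a source frame into a maximum send size.
function centerCrop(srcW, srcH, dstW, dstH) {
  // Scale so the source covers the destination, then crop the overflow.
  const scale = Math.max(dstW / srcW, dstH / srcH);
  return {
    croppedW: Math.round(srcW * scale - dstW), // pixels discarded horizontally
    croppedH: Math.round(srcH * scale - dstH), // pixels discarded vertically
  };
}
```

For the 1920×1080 → 1440×1080 case this yields 480 discarded columns, even though the receiver's smaller 16:9 element could have displayed the full frame after a plain downscale.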
B
B
B
D
B
B
B
H
E
B
B
Well, yes. JSEP says that the sender should apply downscaling instead, and the downscaling must not change the track's aspect ratio. My PR 1570 proposes that we remove the WebRTC-PC text on center scale/crop and replace it by referencing the JSEP text. I also added a note that this can have visible effects — I mean, if there is an aspect-ratio mismatch between the track and the video element — and then we can move on. So I...
E
D
D
I think JSEP is talking about what happens with the codec and the encoder and how it's changing the video stream, but now we're talking a little bit about a slightly separate issue: how the video from the media track gets mapped into the encoder. What the encoder does is controlled by JSEP, and how it scales things; but you know, if we have an imageattr that says four by three and we've got a sixteen-by-nine camera, something has to happen, yeah.
B
D
C
B
B
B
B
D
I don't think it really says that; it just gives you the best track it can. Let's say I open getUserMedia with no constraints on my camera and I get a media track. Okay — when I get a media track, it doesn't mean I'm even going to use it for WebRTC. I might do something else with it, right? Full resolution of the camera: that's what it's going to give me. And now, when I pass that to WebRTC, it needs to be... you know, something needs to happen. Yeah.
I
Without that text, would we end up with the browser defining whether we have square or non-square pixels, or whether it crops or letterboxes — those kinds of things? Is that covered somewhere? If we remove this text, is it still covered what should happen in that case?
D
C
D
F
D
But what we're writing here is not saying the browsers can do whatever they want — it's saying they can't do anything, and that's not going to work. We have to write some text. If you think the answer is that the text should be "the browsers can do whatever they want", we could do that. That is the original text that was in here six months ago or longer; Martin objected to it, and that's how we got down this whole path. Then we'd just go back to that.
D
D
It is — it says center scale/crop, right. It says: here's what you take, what we thought at the time. I understand why Justin doesn't want to change it — because it loses data, right. But the text we have in the draft right now says: take whatever you got from the media stream, here's what you need for the encoded stream, and the way you get from one to the other is center scale/crop.
D
D
D
I mean, I think where Justin would go with this is: on all of these things, you really have two choices. You can either crop and lose data near the edges, or you can allow black to be inserted to get the aspect ratios to match. Justin made a very strong argument — and I found it very compelling — that, particularly for the screen-sharing case, cropping data is the wrong thing to do, and what we should do is insert black if that's needed, not crop. Yeah.
B
E
E
I
I
F
F
D
F
E
I
So this kind of ties into what I'll bring up later, but if it's done similar to the frame-rate thing — let's say "reduce frame rate" versus "downscale" on the RTP sender — there could be something like a "balanced" mode. Essentially, if you're doing tab capture or screencasting, you can have the default be "do not crop whatsoever", and then for the webcam case...
I
D
D
A
I
G
F
Stretch could be handled like resizing. We're perfectly fine with saying "only use those values", but I think inventing new names for something that is mostly exactly the same thing is going to cause more confusion, not less. No, I'm...
D
K
H
H
K
H
D
So let me make a concrete proposal, in the interest of getting this done. These ones look excellent, and it looks like the two modes we are discussing would, in CSS, be called "fill" and "contain". I'm proposing we add a new knob — which I think Harald said needs to be on the sender object — and it can have three values: fill, contain and auto. "Fill" corresponds to what we used to have, center scale/crop, and "contain" corresponds effectively to scale and pad.
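A minimal sketch of the two modes being proposed (the mode names follow the proposal; the helper itself is hypothetical): the crop mode fills the destination and discards edges, while "contain" scales to fit and pads the remainder with black.

```javascript
// Illustrative sketch of the proposed "fill" (center scale/crop) and
// "contain" (scale and pad) modes; not spec text.
function fitFrame(srcW, srcH, dstW, dstH, mode) {
  if (mode === 'fill') {
    // Cover the destination, discarding overflowing edges.
    const s = Math.max(dstW / srcW, dstH / srcH);
    return {
      outW: dstW,
      outH: dstH,
      croppedW: Math.round(srcW * s - dstW),
      croppedH: Math.round(srcH * s - dstH),
      paddedPx: 0,
    };
  }
  // "contain": fit entirely inside, letterbox/pillarbox the rest.
  const s = Math.min(dstW / srcW, dstH / srcH);
  const outW = Math.round(srcW * s);
  const outH = Math.round(srcH * s);
  return { outW, outH, croppedW: 0, croppedH: 0, paddedPx: (dstW - outW) + (dstH - outH) };
}
```

Here `paddedPx` is just a rough count of black rows plus columns added; with the earlier 1920×1080 → 1440×1080 example, "fill" discards 480 columns while "contain" keeps everything and pads 270 rows.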
D
I
B
G
B
H
H
B
D
B
B
C
H
C
C
First of all, this is something that controls the SCTP-level priority and also the QoS priority, but per the rtcweb-transports document it also controls the relative transmission capacity that different encodings get, where each successive priority level uses twice the transmission capacity of the level below it. So the example given is: audio is "low" and video is "high", and in the same time that a thousand bytes are sent for audio, four thousand bytes are sent for video. But basically, I don't see how this could be useful at all, because those ratios of one to two are not...
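The doubling rule being described can be written down as a small sketch (the weights follow the rtcweb-transports description of priority levels; the helper names are illustrative):

```javascript
// Illustrative sketch: each priority level gets twice the transmission
// capacity of the level below it, per the rtcweb-transports text.
const PRIORITY_WEIGHT = { 'very-low': 1, low: 2, medium: 4, high: 8 };

function splitByPriority(totalKbps, streams) {
  // streams: array of { id, priority }
  const totalWeight = streams.reduce((s, x) => s + PRIORITY_WEIGHT[x.priority], 0);
  return streams.map(x => ({
    id: x.id,
    kbps: (totalKbps * PRIORITY_WEIGHT[x.priority]) / totalWeight,
  }));
}
```

With audio at "low" and video at "high", video gets four times the capacity of audio — the thousand-versus-four-thousand-bytes example above.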
E
C
There are just four relative values, so you can't have encodings that are, you know, more than eight-to-one relative to each other. And it's also a problem that it mixes up this relative bitrate with QoS priority, because I would say it's a pretty common case that you want something that's using fewer bits to actually have a higher priority in network use — in the example of audio versus video, you likely want audio prioritized.
C
C
F
If you want to send something at high priority at a low bitrate, the meaning of the text was: just set it to high priority and send it, and then all the bits it needs will go out, and when you run out of bits, the rest of the bitrate will see that. But if you need to start dropping packets, those high-priority things do not get dropped — is that what you're saying?
I
But, say, in the example where you have a thumbnail: maybe you want the thumbnail to be sent at all costs, but you might not want the thumbnail to max out its bitrate. You might be fine sending the thumbnail at 50 kbps, so long as it arrives on the other end, whereas something else may need to take all of the available bitrate while still not being as high a priority as the thumbnails. That's an imaginable case, right? Yeah.
I
You might not want 50 to be the max — you might want 50, but then use more than 50 as soon as you can. So we're like, well, no: if there's nothing else, the thumbnail goes from 50 to 500; but then you want it to stay at 50 until there's basically free bitrate, while always staying high priority.
C
J
And if we have the ratio that we're locked onto, it's not clear how it even locks on. What do you mean — are we saying that the priority is orthogonal to the allocation ratios? I think that's a good approach to take, but I'm just saying Harald is suggesting that it's not, and that the locking-on of the ratios is tied to a max bitrate, so that, as long as there are more bits available than what the high-priority thing needs for its max bitrate, then basically you can go off of...
I
I
I
C
I mean, basically: we also have text that says this field controls the relative transmission capacity and the QoS priority. I'm just saying we separate that into two things, and allow a lot more ratios than just one, two, four, eight. There may be existing problems with this, but my proposal here is just splitting it up and making it more flexible.
F
F
C
D
Like, I get the idea of trying to separate these — okay, that sort of makes sense. But the thing is, I'm not sure that relative bitrate is ever what you want. It's almost always sort of "give this guy this much, and then move on to giving other people that much", which I think lines up with the way Harald was thinking about it better than a relative ratio does. I mean, like, so...
C
D
Look, let's reason for argument's sake: just let's say there's no congestion whatsoever. I mean, can anyone on this call guess what the relative bandwidth of a screen share versus an HD video stream is? We're not talking about the case where there's no congestion, though. No, but we are once we talk about sort of trying to... I mean, how should I allocate if I want reasonable quality? I wonder what this is.
C
G
D
D
Usually, from a developer's perspective, thinking about how to set bitrates is very difficult, but thinking about relative importance is a little bit easier. I'm not arguing that they shouldn't be two orthogonal things; I'm just saying that relative bitrate may be a very difficult knob for developers to work with. Alright.
I
Are you seeing something more like: I want to send this in QVGA; if I have more available bitrate, first increase this stream to VGA, then that other stream to VGA? And then, like — do you want something more like an order of what things should happen, instead of available bitrate: send this at 5 fps at first and then, if you can, go up to 15 fps? Do you mean to have an order of which things should happen? I...
D
D
As in: for my application, you know, audio is most important, followed by screen shares, followed by the video of the person; or, video is critical to me, followed by audio, followed by, you know, fifty thumbnails. And they generally want the bandwidth allocated to what they think is important to them until it's "pretty good", whatever that means; then they want bandwidth applied everywhere else; and then, once everything is pretty good, they'll move the things that are important to them to "excellent". Now, how do they...
D
C
Maybe the conclusion is that we need a way to set, you know, a minimum bitrate as well for each encoding — sort of this level you're talking about where things are good enough and you should move on to the next — and priorities would govern it such that, if you can't meet all the min bitrates, you try to meet the min bitrates of the highest-priority encodings first. Then, once you're meeting all the mins, you scale them according to this ratio, right?
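That two-phase idea — satisfy min bitrates in priority order first, then split the remainder by the relative weights — might look like this (all names are illustrative, not spec text):

```javascript
// Illustrative sketch: phase 1 meets min bitrates, highest priority
// first; phase 2 splits the leftover capacity by priority weight.
const WEIGHT = { 'very-low': 1, low: 2, medium: 4, high: 8 };

function allocate(totalKbps, encodings) {
  // encodings: array of { id, priority, minKbps }
  const out = new Map(encodings.map(e => [e.id, 0]));
  let left = totalKbps;
  const byPriority = [...encodings].sort(
    (a, b) => WEIGHT[b.priority] - WEIGHT[a.priority]
  );
  for (const e of byPriority) { // phase 1: mins, high priority first
    const given = Math.min(e.minKbps, left);
    out.set(e.id, given);
    left -= given;
  }
  const totalWeight = encodings.reduce((s, e) => s + WEIGHT[e.priority], 0);
  for (const e of encodings) { // phase 2: remainder by weight
    out.set(e.id, out.get(e.id) + (left * WEIGHT[e.priority]) / totalWeight);
  }
  return out;
}
```

For example, with 1000 kbps total, audio at "low" with a 50 kbps min and video at "high" with a 300 kbps min, both mins are met first and the remaining 650 kbps is split 2:8.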
I
D
...and everybody else backs down — that's how we got here originally. We decided, well, we'll just have a few priorities that people choose from — important, not important — both for the bitrate and for the QoS. And then people were like, yeah, but don't starve the bottom ones, and that's how Harald added the one-two-four-eight, as a sort of rough approximation. Look, it's a rough and ugly approximation; I'm not arguing for it. I'm just saying maybe the bottom one should...
J
I think there are two problems. One is that the QoS — basically the DSCP markings — and the bitrates are coupled in these things, and so one part of this is: should we couple those? The other part is: is the one-two-four-eight what we want, versus being allowed to be more flexible? So I think these are two separate things, and probably those approaches stand regardless.
C
J
I
C
D
Right. So look, I agree with separating them. I agree that this relative ratio — particularly the one-two-four-eight — is probably not what we want. I'm just not sure that relative bitrate is exactly the knob that we want to give people to try and deal with this issue.
A
I
This is going to be the same thing anyway. So, my name is Peter Boström. I worked on WebRTC's video implementation until about a year ago, and we ran into an issue where the assumptions made inside the browser didn't match the reality that we were facing. So, some background for this: this doesn't only apply to video, but if I sound video-oriented, it's because of that background with video. Either way, like we were saying before, browsers make assumptions based on how content gets ingested.
I
So what I'm suggesting is that we provide some way to override this browser default behavior, so that if you're screen sharing but you care more about motion than you do about individual frame detail, you can say "this is motion-style content"; and if your webcam is really a capture card, you can say "I prefer detail for this stream". And for audio, you could override and say "this is music", or maybe "this is already pre-mixed content — you don't need to do any processing on it".
I
But the idea with putting it on the MediaStreamTrack is that you put it in one place, and then all specs — or all browsers — that make use of it can make use of this content hint however they want, which means that you don't have to rewrite the MediaStream Recording APIs to be able to use this. You can just infer it; and then, whenever your spec is in conflict with a content hint, the spec wins.
I
So if you explicitly override some behavior, the specific override still wins. In the previous example, if this were applicable to cropping, then maybe your screencast would default to not cropping; but if you explicitly say "crop", that would take precedence. Does that make sense, kind of?
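The "hint sets the default, explicit settings win" rule can be sketched like this (the defaults chosen here are hypothetical examples for illustration, not what any browser actually does):

```javascript
// Illustrative sketch: a content hint picks the processing defaults,
// but an explicit application override always beats the hint.
function effectiveAudioProcessing(contentHint, overrides = {}) {
  const defaults =
    contentHint === 'music'
      ? { echoCancellation: true, noiseSuppression: false, autoGainControl: false }
      : { echoCancellation: true, noiseSuppression: true, autoGainControl: true };
  return { ...defaults, ...overrides }; // explicit settings win over the hint
}
```

So "music" flips the (assumed) noise-suppression default off, but an application that explicitly asks for noise suppression still gets it.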
E
Yeah, that's a good explanation of the issue. The argument for putting it on the RTCRtpSender would be that there are certain things that occur where you may lose this information — like passing audio through Web Audio and so on — or you'd have to put in a bunch of text elsewhere to say that this state is somehow preserved if you pass through Web Audio and generate a new track on the output.
I
E
I
I
If you want to be sure that it's actually propagated... So I think that is one of the points for having this kind of content hint, instead of adding a bunch of knobs on the RTP sender as well. And I think what this is going to bring up is that this kind of ties in to finer-grained things, like what quantization to use, etc.
I
I think quantization levels are something that we should really not expose to web developers, because even on the WebRTC team here, I think most people don't have a good notion of what QP means. So sending this out to the open web would probably create a very misunderstood parameter that people copy off of Stack Overflow rather than understanding. But they can probably easily say "I want to prefer motion in my content" or "I want to prefer detail", right.
I
I
Correct. I think there are two kinds of discussion in here. One is: do we think a content hint that does not actually define specific behavior makes sense? And then: where to put it — does it make sense to put it on the track, or does it make sense to put it on all sender configuration objects? And in that context, remember that the RTP sender is not the only object that consumes these.
D
E
D
I mean, I'm in favor of this; we've discussed this before — it came up very long ago, right. Whether something was music or speech was something we could specify, and whether spatial or temporal resolution was preferred, I think we previously had. Don't we have those in SDP? Right, not the sort... Oh.
A
I
A
D
A
D
Bernard, I'm 100% on board with "we should just tell it what our intention is" — like, "this is screen content" or whatever — and the codec may or may not support that; abstracting this away from, you know, "set parameter 22 in the codec". I'm totally on board. But it seems like — why wouldn't this be a control, at an abstract desire level, at the place where the codec is, or the echo canceller, or wherever the media processing happens?
B
I
One of the problems with that is that, as a developer, you then always have to play catch-up, because you can't infer what your defaults are. Anything that we add to a future web spec, a user or a web developer would have to be in the loop on, and the implementations would kind of deteriorate, because we'll add defaults that are good for speech but wrong for music. So if you can't say "it's music, stop messing up my stuff", then we will keep doing that.
B
E
E
E
There is a big choice here, as we said: it's between putting it on the sinks — and also, you know, the sources, for things like getUserMedia for the microphone-music case — or whether it's just on the sources somehow and then filters down through all the tracks and the like. Which also means you have to be able to change it on the tracks: if you capture it as whatever, the application may decide it wants to change it, you know.
I
D
Do you want to do echo cancellation on music? Right — for instance, if your microphone is in the same room where you're playing it. Sometimes you don't; it's an independent parameter. I agree that the default for speech is that you do echo cancellation, and the default for music might be different. So I get the idea, but we still need echo-cancellation controls regardless of whether we add this. Agreed.
E
I
So I think a better example is noise suppression, because you might be like: "I like my music, I want it to be noise-free" — that's not unreasonable — and then you'd have to actually go into what it's doing, which is cutting out transients, etc. So it will mess up your snares, and I don't think the web developer should have to know that.
K
E
C
D
I
So I also think one of the good things to maybe agree on, or not, is: are we doing the right thing by letting the user specify "I have a thing, it kind of looks like this — handle it as best as you can", to make sure that it's really only the browser vendors who have to deal with this really complicated problem of what to turn on and what not to turn on?
I
D
I
I
There are implementation-specific knobs that aren't even exposed. Oh yeah — okay, say there's a goog intelligibility enhancer or whatever. That would definitely not go well on music, but it's also not in any spec, and Chrome is completely free to do it. And if it assumes that everything is speech, it's going to do that for music as well. I think the...
D
That would not be good; they already do that internally. I don't want to be in the situation where it sets things that I can't override. But yeah, no, I agree with what you're saying, I get your point. I think we're going to end up with what Dan said — an "auto" default added to lots of things — if we go down this path. I think we just need to see what it looks like when you follow this path down, but it seems like no one's really objecting, I think.
K
Good, yeah, it's definitely worth discussing. I'll point out that one of the reasons we have constraints was that we had a similar kind of situation, where browsers wanted to be able to say "I want to do something smart", and on the other hand developers said, "but I don't trust that your smarts are what I want", right. So it's not that there aren't smarts that the browser can use — and the example you give about...
K
...you know, developers not knowing about new features, or new features being introduced that they weren't aware were going to come up and now have to deal with. I just think we need to think it through carefully, because every time we start down the road of "let's make sure the browser can do something smart", we start getting pushback from developers who say, "and I want to change it". Yeah.
I
I
And I guess, if you want to have some mode where you're like "no processing whatsoever, please", then I don't think that's mutually exclusive with this proposal. But a good thing to really keep in mind is that this proposal also means it would be good to have "auto" settings basically everywhere.
C
I
For some notion of "everywhere". Maybe this would not tie directly into the bitrate allocation, or maybe it would, because maybe you prefer detail content over motion content while allocating bitrate — I don't know. So it's good to keep in mind where, or how broadly, this stretches into other things.
G
But thinking about constraints — I mean, one use case was also that two tabs could apply different constraints, and if they were compatible, both could kind of consume the track the way they wanted. In this case, would it be reasonable for one tab to interpret an audio track as speech and another tab to interpret it as music?
K
K
How are we going to work that into the already complex SelectSettings algorithm? I'm not saying we shouldn't do it, but figuring out exactly what that means — specifically in the case of track interactions with the same source, like you just described — we just need to think it through. It's not that we can't do it, but I think there are probably broader ramifications than have come up so far.
I
One other good thing to keep in mind is whether or not these settings should back-propagate. So if you have, say, an RTP receiver and you say "actually, this is music", should that turn off the WebRTC — whatever it's called — processing where it fills in packets and does packet smearing and resampling, etc.? Like, should you be able to? Or if you clone the track and you set the hint property on one of the children, will that influence the default on the parent? Probably not.
E
E
And this is part of why I'm tending to lean towards setting this on sources and sinks, not on tracks, because it gets kind of confusing what happens when you start having different requirements for different pieces. When you put it on the sinks and the sources, it's much more clear what your intent is, I think.
I
F
E
Yes, that's right, but the point is it's not a property of the track that is changeable by messing with the track. Well, my point was that when you did getUserMedia, you could say "this is music", or apply constraints or whatever...
I
Whether it's music or not can also definitely change over time, depending on how you want to do it. So, if I'm doing a live recording where I'm talking to the audience and I bring up my guitar, it's no longer speech. Also a good point is whether or not we care about that — whether anyone will actually use it in practice enough to require a spec change, I mean.
I
The strongest argument I had for putting it on the track, I think, is not having to change every single spec that can use it. So if there's another way to say "any consumer should have this kind of property on it", or to have a common property that you can refer to from other specs, then you only have to change this one thing properly.
I
So if you have a content-hint object, then the RTP sender could refer to the spec that defines it, and how you open the camera could also refer to the same spec entry — just so we don't end up in a situation where we have eight different incarnations of the content hints that are all out of date. Does that make sense?
E
F
I
Yeah — so, context for this: this is already implemented behind the Blink experimental flag in Chrome, so if you want to try it out, there's also a demo page. It does not expose everything that Chrome could do, of course, because we could be a lot more aggressive — in terms of, say, quantizing video a lot more to prioritize high-motion video — but we don't. So, if you want to play with it...
I
K
I
So no, I'm not; the only thing I've done for the implementation, to be honest, is just flip the screen-share handling based on however you decide to override it. So it's really just a minimal version of what we have for video right now. But one thing that I did was design the enum so that you can extend it with additional values without breaking existing implementations, so any unknown value will be a no-op.
I
So if you want to add some depth-specific thing, because you start to have depth capture, that would not be in conflict with implementations that do not support it — they would just ignore that value. So you could then set — sorry, this might be a tangent — but what you'd have is: if you set "motion" first and then set, say, "depth-capture-motion", then on an implementation that doesn't know the latter, it would just not be set and you'd still keep "motion". So you'd be able to have increasingly specific hints.
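The extensibility design just described — unknown values are a no-op, so more specific hints can be layered on later — can be sketched as follows (the hint values and holder shape are hypothetical):

```javascript
// Illustrative sketch: setting an unrecognized hint value is a no-op,
// so older implementations silently keep the previous, known value.
const KNOWN_HINTS = new Set(['', 'motion', 'detail']);

function makeHintHolder() {
  let hint = '';
  return {
    set(value) { if (KNOWN_HINTS.has(value)) hint = value; }, // unknown → no-op
    get() { return hint; },
  };
}
```

An application can set "motion" and then try a more specific value like "depth-capture-motion"; an implementation that doesn't recognize the latter keeps "motion".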
I
K
I'm not so much concerned about extension specs being able to also extend this — I presume that you would allow for those kinds of extensions. It was more that, you know, when we originally created the Media Capture spec, we were thinking of its use primarily for WebRTC, but of course other groups use it, and they use it for other purposes. So, more and more, for anything that we add into that spec, we have to be aware of how that addition might impact other specs that have extended it.
I
A
A
K
Sure, I'm happy to do that. In fact, if we have time for any of the people doing extensions to join us — I don't actually know if that's in the agenda yet or not — you know, like depth capture: if they wanted to come in and talk, this discussion might be a great time to have them join.
K
B
D
One of the requests that one of our groups made to me long ago — which I sort of ignored, but it fits into this well — is, for speech, the difference between near-field and far-field speech. How you process them is very different, and often the application knows, from the type of microphone, which it is. If there were a way to extend that type of information into here, I'd be really interested in seeing whether we can meet those goals as well. So I can see the number of categories growing.
I
Definitely. I really started with the video use case, and I'm less familiar with the speech and music thing; I just know that there are some things that we do that are wrong for music. So I'd be happy to consider other values as well. Mostly, I want us to hand it over to a working group rather than it being my baby extension spec that no one will touch.
I
A
A
K
And I actually changed the order of the two of them, because I think this one we can do pretty quickly; the other one may require some more work. So what I'm going to do is try to explain the concern that Jan-Ivar had, then I'll give my opinions, and then we can go into discussion from there. So the question is: does getSettings reflect configured or actual settings? I think one of the best examples is the following.
K
Measured values sometimes deviate from their target settings. For example, if you have a camera motor — you know, the camera is panning — or it's going from thirty to sixty, the actual value may not be the same as the setting value, or the target setting value, for quite a while. In a system-overload case or a low-light case, your frame rate may fluctuate and may actually be different from the target it was set to. So then the question is: is getSettings returning the actual values, or is it returning sort of the configured values?
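A toy model of the question (an entirely hypothetical shape, not the real MediaStreamTrack API), where getSettings() reports the configured target even while the measured rate has drifted:

```javascript
// Illustrative model: the configured (target) frame rate and the
// measured one can diverge; getSettings() here reports the target,
// matching the group's earlier conclusion.
function makeTrackModel(targetFrameRate) {
  let measured = targetFrameRate;
  return {
    degrade(actualFps) { measured = actualFps; }, // e.g. low light drops the fps
    getSettings() { return { frameRate: targetFrameRate }; },
    measuredFrameRate() { return measured; },
  };
}
```

So a track configured for 30 fps that drops to 20 fps in low light would still report 30 from getSettings() under this reading.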
K
And I'll tell you, from my perspective: this question has come up before. We discussed this specific topic — and Cullen gave the example of the camera pan — probably three or four years ago, and at that time the conclusion was that the only thing that made sense for us was to use the target value rather than the actual measured value.
K
That was the only kind of sane response. If anyone has something else that they'd like to propose, or something else they'd like to bring up in this context, please do. I will say that the next-to-last bullet on this slide — about, you know, an aggressive OverconstrainedError on over-constraint — I'm not sure exactly what Jan-Ivar had in mind there, so if anyone would care to comment on that as well, feel free to.
D
I think I can comment on that. I mean, basically, Jan-Ivar has always totally hated mandatory constraints and is trying to eliminate them, so the one he's standing on right now is frame rates. And my position is: if your application sets a minimum frame rate of, you know, twenty-nine point five frames per second, and it drops down to 20 frames per second because of low lighting on the camera...
D
Jan-Ivar does not think we should get an error — he thinks it's better just to carry on. I think you should get an error, because that's what was asked for as a mandatory constraint, so you'd get an error. We both agree you'd try to get the frame rate back if you can — that's what I think the mandatory constraint should do. On the pan thing, though, I agree: what I think we're doing is that all we offer is that the definition of the value is the desired pan position of the camera.
D
D
Well, okay — with getSettings, should we return the current setting? And I think, if the setting is... sorry, let me rephrase your question.
I
Did anyone ask: if you have users looking at your API call getSettings, and they can't tell whether it's the settings that you set or the settings that are in place on the actual physical object — should you name it getConfiguredSettings or something, just to avoid the confusion?
E
K
Well, that's an interesting one. I would tend to say that the current setting is what it is configured to be — meaning that, you know, maybe you have two different tracks, and one has required an exact frame rate of a certain value, and another has required an exact frame rate of a completely different value.
K
K
K
With one minute left in this call — that's a great point, actually. That might be a good discussion topic for TPAC: whether we have really crossed the threshold where we have to add something big and new to the spec, namely the ability to get the actual values, as opposed to doing the measurement in JavaScript, as Jan-Ivar suggests.