From YouTube: WEBRTC WG interim 2022-02-15
A: Welcome to the February meeting of the W3C WebRTC Working Group. We're going to have a 90-minute meeting this month, and then we'll go back to the usual two hours in the months ahead. A reminder of the IPR policy: we abide by the W3C patent policy, and only companies and people that are listed on the status page are allowed to make substantive contributions.

A: Our future meetings are on the third Tuesday of the month, so the next ones will be on March 15th, April 19th and May 17th, starting at the same time of 8 a.m. Pacific. So hopefully we'll see you folks for the next couple of meetings as well. A little bit about the meeting: we have the meeting info up there, and the link to the slides is on the wiki. We do need a scribe — somebody to take notes on what has happened. We are being recorded and the recording will be public. So, do we have a volunteer for note-taking?
B: Really? Okay. I mean, I feel like I did half of it last time, so...

B: Yeah, I could do that, yeah. I'm slightly hesitant because I'm really stunningly ignorant on both of these topics, so I don't really understand what's going on.
A: Okay, that's right! That's a good point — keep us on track. So let's try to be really clear about what decisions mean. The W3C code of conduct: we operate under it, so let's try to keep the conversations cordial and professional. A little bit about the meeting tips: we may run a queue. I don't know if we have that many people, but if so, type +q and -q in the Google Meet chat, and then we'll try to manage it and give everybody time. Of course, use headphones because of the echo, and wait for microphone access so we can actually hear you. Thank you. I don't think we'll be using polls today. Just a reminder about document status: we're trying to be clear about that. Just because something's in a repo in the W3C doesn't mean it's been adopted.
A: Okay, so for today we have WebRTC-SVC and multi-capture. As I mentioned, because we only have two things on the agenda, we're giving people a little bit more time: on WebRTC-SVC we'll go until 8:45, and then we'll move over to Elad and go to 9:20. All right, so WebRTC-SVC. We're going to try to cover four issues, and then I will just give you a sense of a rather long PR that I submitted to try to address 57, 58 and 59. But we won't try to get consensus on that, because it's too big; we'll just let people review it and try to figure it out. So we're going to start with 49. Folks may have seen there's an API called Media Capabilities that has added functionality for scalability modes.

A: Now, there's a bigger issue, 95, which was, I think, filed in webrtc-extensions, which is overall about getCapabilities in the entire WebRTC API. But we won't be talking about that at the moment, because there are some other associated PRs with that one. The only question we're trying to answer right now is: do we in fact need sender.getCapabilities, or can we get everything we need out of Media Capabilities? That's the question. So basically this Media Capabilities API now includes something called configuration.video.scalabilityMode.
A: As I'll describe, to get back the info on the scalability mode, you need to put in the width, height, bitrate and frame rate, and I'll show you what the API surface looks like in a minute. But basically you input these things in configuration.video, and then you get back whether the scalability mode is supported, smooth, or power-efficient.

A: And so here I've got all the modes that are in the document, and basically you do a forEach over each of the modes. The idea here is you choose a codec, and then for that codec you iterate over all the modes, and you can figure out if each one is supported, smooth, or power-efficient.
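The probe loop just described can be sketched roughly as follows. This is an illustrative sketch, not the speaker's actual test: the function name `probeScalabilityModes`, the fixed width/height/bitrate/framerate values, and the mode list are assumptions; in a browser you would pass `navigator.mediaCapabilities`.

```javascript
// For a chosen codec, iterate over scalability modes and ask Media
// Capabilities whether each is supported / smooth / power-efficient.
async function probeScalabilityModes(mediaCapabilities, codec, modes) {
  const results = {};
  for (const scalabilityMode of modes) {
    const info = await mediaCapabilities.encodingInfo({
      type: 'webrtc',
      video: {
        contentType: `video/${codec}`,
        width: 1280,       // required by the API, values here are arbitrary
        height: 720,
        bitrate: 1000000,
        framerate: 30,
        scalabilityMode,
      },
    });
    // Each result carries the three booleans discussed in the meeting.
    results[scalabilityMode] = {
      supported: info.supported,
      smooth: info.smooth,
      powerEfficient: info.powerEfficient,
    };
  }
  return results;
}
```
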
A: So this is basically a little test of whether you can in fact duplicate what's in getCapabilities. A few little weird things about it: you have to put in the width, height, bitrate and frame rate, and of course in WebRTC these things can change — the bitrate and frame rate are adaptable. So you don't really know: maybe I know what the max frame rate and max bitrate were, but what is the bitrate and frame rate at this moment? You don't really know, so that seems a little odd. But they are required, so you do have to put them in, and the question is: do they matter? For example, if you put in what you thought was the max frame rate or max bitrate, could you get a different answer if it was less? Do these things make any difference — can you put in an arbitrary value and still get your answer, or not? That's a little interesting. Or, if they did matter, would they only matter for smooth and power-efficient, not for supported?
A: The other thing is the content type. I just put in video/ plus the codec, like video/VP8 or VP9 or AV1 or H.264, but the question is what else goes into the content type — I'm thinking of things like profiles; we'll talk a little bit more about this later — and does that matter? Would I get a different answer if, for example, I put in H.264 with a different profile? Would I get a different set of scalability modes being supported? Just observing this and playing around with it, the other odd thing is that Media Capabilities currently returns a superset of the scalability mode values returned by sender.getCapabilities.

A: That is, doing this little iteration and calling getCapabilities, they don't return the same set of scalability modes, which is a little bit odd, because we're basically talking about, I think, the same thing — what's supported in WebRTC.
A: I think that's what we're looking for, and if I'm going to get an error by putting it into these APIs, then it probably shouldn't have been returned in discovery. So the meaning of the word "supported" should be that it actually works in the API — the WebRTC API in particular. Another question is: is that always true? For example, if I got an answer that something was supported at a given width, height, bitrate and frame rate, and then I call the API, do I have a reasonable expectation that it'll actually work? Or could it fail because the width, height, bitrate and frame rate changed? That's my question. The other thing, which I'll note in a minute, is that L1T1 is not a valid scalability mode in any of the APIs, so there's a question about whether that makes sense or not.
A: So let me just show you the results here. Basically, I'm going to show you a bunch of slides that compare the results from Media Capabilities with those from sender.getCapabilities for different codecs, so you can see what the differences are.

A: So here is H.264, and all this is is a screenshot of the little test that I wrote. For Media Capabilities, it says that H.264 supports L1T2 and L1T3 — so two temporal layers and three temporal layers — and that's about it; nothing else is supported. But if I call getCapabilities, I actually don't see any scalability modes for any of the combinations of packetization mode, profile level, and level asymmetry. And by the way, all I was putting into Media Capabilities was essentially what's in the MIME type in getCapabilities, so video/H264.
A
And
I
you
know,
I
guess
it's
conceivable,
at
least
in
get
capabilities
that
you
could
have
different
answers
for
the
different
values
of
the
level,
asymmetry,
packetization
and
profile.
But
I
didn't
I
didn't
try
that
in
media
capabilities,
so
anyway
there's
a
different
answer
here.
There
are
more
more
things
available
in
h.264
under
media
capabilities,
but
at
least
my
understanding
and
correct
me
if
I'm
wrong
harold,
is
that
these
things
are
actually
the
scalability
modes
are
actually
not
supported
for
h264
I
mean
they're
supported.
A
D
D
A
D
Yeah,
I
would
think
so
as
well,
so
so
when
I,
because
I
added
things
and
made
a
capabilities,
and
then
I
I
talked
to
those
that
I
believe
the
nude
like
what
was
the
correct
answer
and-
and
I
didn't
really
at
that
time-
know
like
what
was
really
right-
what
you
actually
could
set
things
but
see
this
was
more
like
a
low
level.
Okay,
what
is
right,
okay,
makes
sense
for
the
encoder
yeah.
A: So basically Media Capabilities is returning what's supported in WebCodecs instead of WebRTC.

A: For VP9 it's again similar, because getCapabilities says only L1T2 and L1T3, and then you get a whole bunch of additional things for VP9 in Media Capabilities. It's basically advertising spatial support, so you can get things like L3T2 and L3T3, and then a whole bunch of the key modes are supported in Media Capabilities as well. Again, I think this is everything that the encoder can do — you could get all this stuff with WebCodecs, but I don't think you can get it with WebRTC. And then AV1: it's closer, but one thing that's interesting here is what's in Media Capabilities.
D: Okay, so what I'll do — I would probably make a CL just reducing the set of supported scalability modes for Media Capabilities, making them the same.

A: I mean, you could do that. Or — I'm just interested: if I set S3T3, would this actually come out in WebRTC? I haven't actually looked at the bitstream, so I can't tell you if it will, but that would be kind of the test: whether that bitstream actually came out in WebRTC.
A: Right, yeah. I assume this distinction can be fixed so it actually outputs. I think what I'm hearing here is that we have agreement that Media Capabilities should only output modes that are actually supported in WebRTC when you use a type of "webrtc". Do we have agreement on that?

A: Okay, so conclusion one is that this seems to be a bug. I guess my other question relates to the width, height, bitrate and frame rate, because in WebRTC, sender.getCapabilities is kind of a static thing; we assume none of these things matter.
A: And so, as you said, you're just looking up a static set of modes, and I think that implies that whatever width, height, bitrate and frame rate you put in there doesn't make a difference. Is that right, Johannes?

E: That is actually the case. There are some decoders that have issues with width and height limitations.

A: Right, so that makes sense. I was expecting that smooth and power-efficient could change, but that the basic "supported" would not change. The thing is, it doesn't say that in the spec, so it's kind of hard to know whether you can depend on that. I don't know if you would be willing to say that the supported scalability modes would not change.
A: Right. So the other thing — and I think we've been discussing this in the other PR — is how things like the profile level, packetization mode and the level-asymmetry flag would work in here. Did we decide to put those in the content type, I forget, or in a separate attribute — or are we still discussing that?

D: I'm not sure it was decided either way. I think in the current implementation the profile level can be put in the content type.

A: Right, yeah. So basically, if I wanted to get exactly the same information as sender.getCapabilities — depending on the profile level and all of that — then I would put that into the content type as it is in the SDP fmtp line. Is that right? Yes, yes.
A: So then that does correspond to pretty much exactly what I get out of sender.getCapabilities. Okay, well, thank you. I think we've more or less resolved most of the differences here. I'm not going to write up a PR, but I think we're getting a lot closer to being able to replace sender.getCapabilities. Based on what I've just heard, I think we may need to clarify a little bit in the spec, just to make clear what's expected, but we're getting closer. So if you're asking if there's a conclusion for this one, Tim, I think not quite yet — we need to fix some bugs and get some clarifications into Media Capabilities — but I would say it's looking like it could be a substitute for sender.getCapabilities.
A: Let's put it that way. All right. Oh, the other thing worth mentioning is that L1T1 isn't supported in any of the codecs, and I guess my question for Harald is: is the intent that it would eventually be indicated as supported somewhere? Was that the intent?

C: I have not submitted any changelists yet that show it as being supported.
A: Okay, all right. So why don't we go on and talk about the other bugs. Issue 57 was that handling of unknown scalability modes is underspecified.

F: Do we have time for — I was on the queue for a while, so I was just checking.

A: Sorry, I wasn't handling the queue. Maybe, Harald, you could look at the queue, since I was talking. But anyway, Jan-Ivar, speak up.
F: Yeah, so I had some questions, but then you had the same questions on your slide, so I think you're posing the right questions; I was just going to attempt some answers. With type "webrtc" explicitly, I don't see what the purpose would be of returning any values that weren't available and usable in addTransceiver. And as to whether addTransceiver can fail due to differences in width and height: I don't think it can, because at the time you're adding a transceiver, you don't know what those parameters are going to be.
D: Like power-efficient and smooth — how those should be handled. Chris Cunningham, I guess, would say that's a matter for the Media Capabilities spec, so I guess it's best to just raise it there.

A: Yeah. So I think, Jan-Ivar, the answer is you're asking good questions, and I think they would probably need to be answered before we could decide that getCapabilities can be done away with.
A: Okay, so we're back to 57. Harald wrote up a little test here where he sets a scalabilityMode of "total nonsense" and then retrieves the encoding's scalabilityMode and asks what it is going to be — and at the moment Chrome's implementation returns "total nonsense", which doesn't make much sense.

A: So the proposal on this one is to add to the operation of setParameters: if the scalabilityMode of any encoding is not supported by any codec in parameters.codecs, then reject with an InvalidModificationError. And to add to the operation of addTransceiver: if it's not supported by a codec in sender.getCapabilities().codecs, reject with a newly created OperationError.
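A minimal sketch of the validation step in that proposal. The helper name `checkScalabilityMode` and the `supportedModes` map are assumptions for illustration; `codecs` plays the role of `parameters.codecs` for setParameters, or of `sender.getCapabilities(kind).codecs` for addTransceiver.

```javascript
// Hypothetical helper, not spec text: reject unless some codec in `codecs`
// supports the requested scalabilityMode. `supportedModes` maps a codec's
// mimeType to the scalability modes it supports (assumed shape).
function checkScalabilityMode(scalabilityMode, codecs, supportedModes, errorName) {
  const ok = codecs.some(
    (c) => (supportedModes[c.mimeType] || []).includes(scalabilityMode));
  if (!ok) {
    // The real API would reject the returned promise with a DOMException:
    // InvalidModificationError from setParameters, and a newly created
    // OperationError from addTransceiver.
    const err = new Error(
      `scalabilityMode ${scalabilityMode} not supported by any codec`);
    err.name = errorName;
    throw err;
  }
}
```
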
A: So I'd like to ask the working group if you think the proposal makes sense. We're talking here about addTransceiver, actually, so you don't know what codec is going to be selected — that's why it's "any codec" here.
C: Yeah, but one note about the text here — can you go to the next slide — is what happens if we delete sender.getCapabilities(kind).codecs.

E: There are a lot of uses of the capabilities' codecs, even just for setCodecPreferences, so I'm not sure it's something we could say is removed. But it's certainly an issue: we need to make sure the mode is not really supported by any of the codecs of that kind.
A: Right, and we will get to that in a minute, Sergio — that's the second question. But at least for now, what this means, Sergio, is that, as you said, you could ask for L3T3 and have VP9 as your preferred codec. If you had VP9, you could want that — but then you actually got VP8. So then the question is what you actually get. All we're saying here is that you're not going to get an error. You could ask for L3T3, and then, after negotiation succeeds, you should probably call getParameters to figure out what you've got — and if you've got VP8, you're not going to have L3T3. But we'll get to that in a minute.
E: I'm just worried that we have two different types of errors. Maybe that's something that is...

A: Well, no, it's different, because one is setParameters and the other is addTransceiver. setParameters typically returns InvalidModificationError, and addTransceiver typically returns OperationError, so in other respects they are already different between the two APIs.

A: I'm still working on the PR to incorporate both of these, but I think...

E: It looks good to me.
G: Maybe it's better not to tie it to whether it is supported by the codec or not, but just to check whether the mode is in the list, because as a developer I would not like to have to check two different errors. If, as we are going to do anyway, we can know whether the scalability mode is supported once the codec is selected, then — at least for me — it would be better just to throw an error if the scalability mode is total nonsense and not in the list, but not tie it to whether it is supported by any codec, because we will have another check for that specifically.
A: Well, I think the reason it's proposed the way it is, is that you could use setCodecPreferences, for example, to limit the codecs — you could make it only VP9, or only VP8 — and the difference would be: if you made it only VP8 and you put in L3T3, then you would in fact get an OperationError, whereas with what you're proposing you wouldn't, because it would be legal for some codec, just not the one you set in setCodecPreferences.

E: You'd need to renegotiate, but if you do setCodecPreferences before addTransceiver and before you have negotiation, then we should be able to detect that you're requesting something that is not supported by the codecs the user wants to use.
F: If a renegotiation is needed, then I could see some support for — I sympathize with the idea that there are two ways to check. If you call setParameters, then in order to figure out whether it worked, you have to check for errors if the mode is in no codecs, but you also have to check getParameters if it was in some codecs yet still not going to be used — and that's simply a little awkward.

A: Yeah. So, Sergio, does this make any more sense now, or do you still think it would be better to just check all of the modes, whether each is a legal mode or not?
G: With two encodings, one supported and the other one not — when you call setParameters, you still don't know which one will be used. So this is what I'm — not against, but I think that saying it is "supported" or not may be misleading, because you never really know.

A: How about this: we write down that the PR will include what's in the proposal, but we'll continue to discuss it in the PR review. Because people have questions about the errors as well, it's possible there will be a change, but the instruction to me as editor is: I will put what's in the proposal into the PR, and then we'll continue to discuss it.
A: Well, it's not quite ready yet, but anyway: 58 is about what happens if you specify no scalability mode. So in this little test we don't put anything into sendEncodings, and then the question is: what should the scalabilityMode be if you retrieve it? Currently it's missing, and the proposal is: don't do that — return the default mode of the most preferred codec.

A: So if you set this and do a getParameters — again, before we've completed negotiation, I guess — that would return the default mode of the most preferred codec. So, for example, if VP8 was most preferred, it would be L1T2.
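The fallback rule proposed here could look something like the following sketch. The function name and the default-mode table are assumptions for illustration (L1T2 as the VP8 default matches what was reported in the meeting for one implementation, not any normative list).

```javascript
// Illustrative sketch of the issue-58 proposal: if an encoding has no
// scalabilityMode set, report the default mode of the most preferred codec.
const DEFAULT_MODE = { 'video/VP8': 'L1T2', 'video/VP9': 'L1T2' }; // assumed

function effectiveScalabilityMode(encoding, preferredCodecs) {
  if (encoding.scalabilityMode !== undefined) {
    return encoding.scalabilityMode; // an explicit request wins
  }
  const mostPreferred = preferredCodecs[0]; // first codec is most preferred
  return DEFAULT_MODE[mostPreferred.mimeType];
}
```
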
G: I mean, because we are not setting the bitrate or the other parameters — I don't remember the spec exactly, sorry for that — I think the behavior should be consistent with the rest of the parameters. If we didn't set it, is the reported value the default for the codec, or for the one that has been negotiated?

C: Well, if you have done setCodecPreferences and then you do getParameters, you should get the information that there is a valid mode for at least one of the codecs that you set in the preferences.
A: Okay, yeah. It was surprising to me that the VP8 default is L1T2, but it's useful to know. So, is this proposal acceptable to the working group? I say yes — okay, all right.

C: Same decision as for the previous one: it will be included in the PR, and discussion will continue in the PR.
A: All right. Issue 59 is: what is the scalability mode if the preferred scalability mode is not supported on the requested codec? So this is Sergio's question — the weird situation: you put in L3T3, you don't know what you're going to get, and here we play with setCodecPreferences to make it VP8, so you're not going to get L3T3. So what should getParameters return? At least as of now it returns L3T3, because that's what the user requested, and you haven't completed negotiation, so you don't formally know that you didn't get VP9 or something else that would support L3T3.
A: So, before negotiation, it's the mode that you asked for in either addTransceiver or sender.setParameters; if you didn't request anything, it goes back to the default mode of the most preferred codec; and then after negotiation, it would be the currently used scalability mode. So basically, before negotiation — before we found out that we got VP8 — you would still see L3T3. But then, if you got VP8, I guess the answer would be, Harald, that you'd get L1T2, because that would be your default.

C: If VP8 was selected by the—

A: By negotiation, right. Yeah, then it would be L1T2.
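The before/after-negotiation reporting discussed here can be condensed into a small decision function. This is a sketch of the discussion, not spec text, and every name in it is an assumption.

```javascript
// Illustrative sketch of the issue-59 behavior: before negotiation,
// getParameters reports the requested mode (or the most preferred codec's
// default); after negotiation it reports the mode actually in use, which
// may differ if a codec without that mode (e.g. VP8 for L3T3) was chosen.
function reportedMode({ requested, negotiatedCodec, modesByCodec, defaultMode }) {
  if (!negotiatedCodec) {
    // Pre-negotiation: echo the request, else fall back to the default.
    return requested !== undefined ? requested : defaultMode;
  }
  const supported = modesByCodec[negotiatedCodec] || [];
  return supported.includes(requested) ? requested : defaultMode;
}
```
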
E: I believe this proposal is about the same as the PR from the previous issue: getParameters always returns the mode that is currently in use, which absolutely makes sense.

A: Yeah. So do we have consensus for what's being proposed here in 59?

F: I have a concern here about the way getParameters and setParameters are structured: you have to call getParameters as the basis for what you're going to set.
F: So it has to be something that setParameters would accept. And I don't know if we've covered this before, but getParameters is synchronous while setParameters is not, so you could actually have a race here with negotiation, potentially, on setParameters.

C: That's why we did — oh yes, we solved that. Let me see, did we put setParameters on the operations queue?
G: For example, I configure the codec and I don't change the preference in the browser, but I only return, say, VP9 or AV1 — a codec that has more scalability modes. Why does the scalability mode have to change, when in the end it is going to be supported anyway? Because if I do getParameters and then setParameters, it will be overwriting what I put in at the beginning.

A: Yeah. So how about this as a resolution: we will discuss this in the PR, but it sounds like we do have agreement post-negotiation; the question is then...
A: What to return pre-negotiation — yeah, we'll try to put the post-negotiation behavior into the PR. Okay, so I'm almost out of time here, but I just want to point people to PR 64, which contains what will hopefully become the fix to issues 57, 58 and 59. I basically added a whole behavior section which attempts to describe the behavior of addTransceiver, setParameters and getParameters for all of the things we've just talked about, and more. It doesn't currently quite conform to what we just said, but I'll try to fix it, and hopefully people can review it and we'll get to an acceptable PR fairly soon.
E: Yes, I have a question regarding 59. If we request, let's say, L3T3 with codecs VP8 and VP9 in that order, would it be acceptable or possible in any way for the browser to actually use VP9 instead of VP8 — as if to say: oh, you want L3T3, so probably it would be better to use a different codec than the first one in the list?

A: Okay, so basically we've got this PR 64. It isn't quite right yet, but we can fix it, have people review it, and try to make progress. I think we're a little bit over time, so I don't want to steal everything from Elad — I want to turn it over to you, Elad.
I: Okay, thank you, and no worries. I imagine my mic is working correctly. Perfect. So I'm here to talk about multi-capture — the case where, instead of capturing just one tab, one window, or one screen, you want to capture a few of them. One slide backwards, please — thank you. So let's discuss a couple of use cases. Use case number one: maybe you're recording a couple of surfaces and you want to later mesh them all together into one video, so you're capturing a couple right now. Maybe you're doing that for compliance purposes — you need to show that everything you did is always captured and can always be referenced later — and if you've got a couple of monitors, you still want to do it on all the monitors. And maybe — and this is the case that's most interesting to me personally — you're teaching something, and you're capturing a couple of tabs or a couple of windows and streaming them all up to the cloud, and each individual student chooses which one to get back in full resolution at any given moment.
I: So you're streaming a couple of things, they are separate streams, and the server just multiplexes for each user separately — but this is beside the point. Next slide, please. So, can you do that now? The answer is yes, you could, but there are going to be a couple of problems with the way you do it. Currently you would call getDisplayMedia, and you would have to either ask the user ahead of time, "okay, how many of those do you want?" and then call getDisplayMedia that many times, or make the user press "I want one more" every single time. That would be necessary because you would need a new user activation, and you would also need to know that the user wants one more. And when that happens, you're also burdening the user with the task of remembering what they've already selected.
I: So you can imagine that if they're trying to select, let's say, three or four different things, now they need to know: "oh, I chose A"; then they get prompted to choose again, they choose B, and then they need to choose C. But wait a second — what if they're not labeled like that? It's not A, B, C; it's dog, cat and pigeon. You're not always going to easily remember which ones you've already selected.

I: So what if we could solve this — and this is just one possibility; we could discuss others — what if we just had another function next to getDisplayMedia, and it was called getDisplayMediaSet? When you call that, each user agent would choose its own UX, but here is something thrown together that Chrome could use.
I: What if you just had checkboxes, for example, and the user could choose as many of those as they wanted? As you can see here, it's patently obvious which ones have already been chosen. So before you actually press Share, it's difficult to choose the same one a few times or to forget one of them, because you can see a holistic picture of everything you've already selected. Now, this is UX — it's not necessary; we don't actually mandate it — but it would be possible with a getDisplayMediaSet approach, and obviously all of the other problems would also be taken care of: only one user activation, the user cannot forget what they've already chosen, they can look and see whether the selection makes sense as a whole, and overall there are a lot fewer user clicks, so applications are going to be happy. Yes, Tim?
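A sketch of how the proposal might look from the application's side. getDisplayMediaSet is not a shipped API: the name, the promise of an array of MediaStreams, and the maxStreams-style option are all assumptions taken from this discussion; in a browser the first argument would be `navigator.mediaDevices`.

```javascript
// Hypothetical usage of the proposed multi-capture API: one user
// activation, one picker, and the user may tick several surfaces at once.
async function captureSeveralSurfaces(mediaDevices, maxStreams) {
  const streams = await mediaDevices.getDisplayMediaSet({
    video: true,
    maxStreams, // upper bound on what the app can actually handle
  });
  // One MediaStream per selected surface; return their video tracks.
  return streams.map((s) => s.getVideoTracks()[0]);
}
```
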
I: Sure, but the name is just for illustration; it doesn't even have to be a new function. On GitHub I suggested that we could actually do it as an extra parameter to getDisplayMedia. I think deciding the exact shape — the name and so on — comes after deciding that this is interesting and that we want it; then we can discuss what it's going to look like. Jan-Ivar?

F: Yes, yeah. I also have reservations, but I can express them at the end — they weren't specific to this. But yeah, I share the view: if we wanted this — which I'm not necessarily agreeing to — we could add an options argument to getDisplayMedia, for sure. I think that would be better.
I: Okay, so as I mentioned, if now is a good time: on GitHub my initial proposal for this was to add another parameter, and the parameter would be the max number of streams that you, the application, are able to handle — you don't want the user choosing 20 if that's of no use to you. And by default you get one, which is the current behavior, if you've not specified an explicit value.

F: I don't believe I agree that the use case is a rewarding one, so maybe I should save my objection. I have a question: how many captured surfaces do you think a teacher would set up this way?
F: I'm not a teacher, but I would imagine if I had three groups, A, B and C, and I wanted to give each one of them a captured surface, it would be very important that A gets what's intended for A, B gets what's intended for B, and C gets what's intended for C, and that they don't get mixed. So an applica—

I: So I showed one use case and you showed another. My use case means that all of them are equal, whereas you have a different use case where order matters and where it's really important not to mix them. I agree that in your use case we probably don't want to use getDisplayMediaSet; we probably want to use getDisplayMedia, and the application should probably also show the teacher later: "hey, by the way..."
I
This is what you chose for group A, that's for B, that's for C, here's a preview. Would you like to start sharing remotely? That's your use case. I'm not aiming to solve absolutely every single use case that exists. I think that I've shown some use cases that are compelling.
I
And I really don't see why it would be better to force an application, for a teacher that wants to share three or four different tabs or three or four different windows, to keep forcing the user to come back to the application, click "I want to share one more", then try to navigate both the list as well as everything else and find it. And I would challenge you to tell me why you object to this API on the merits of this particular use case, not on different use cases.
F
Well, I don't think we need to solve this problem. I don't think we would implement this, because this is easily solved, better I claim, with existing APIs. I don't see that the user is necessarily burdened by being able to multi-select in the browser prompt. I don't think they think of a browser prompt as different from other UX, so they can already pick this, and I think we've already seen that we're in the process of solving prompt problems where we can.
F
The application can already pick a default category, so the prompt can already open, assuming we're sharing tabs, with a list of tabs.
F
But I can't imagine a teacher that would assign, let's say, three or more things and not care about which one goes to
I
Which group? Well, there might not even be groups. It could be that I am teaching somebody how to code, and I've got Visual Studio Code, and they've got some reference material open, a couple of those, and I want to share all of them. I don't want to stream all of them as one single video; I want to send all of them in full resolution, and each given student, when they want, can just zoom in on one of those at full resolution.
F
So I won't debate whether this is a useful use case or not, but I think that's a very marginal, very specific use case that can already be solved in other ways. And Chrome Status right now says that the usage of getDisplayMedia is much less than one percent, like a fraction of a percent. So the set of people who would want a special Chrome UX for picking multiple things at the same time, and not be guaranteed any order, is very slim, and I don't think we would implement this.
F
Well, definitely. I would like to hear that, yeah, but definitely consider whether this use case could be expressed and explained better. Yes, for sure.
C
I just want to add that I see a point in this use case, and the fact that you can't control which order the streams appear in is a bit of a pain, because then you have to figure out some way to identify which one is which, for the use cases where that matters. But that's a different set of problems.
B
I do think that one aspect of this would be useful, which would be to highlight the surfaces that are already being captured.
B
I mean, I can definitely see that if you want to capture two, knowing which one you've already captured has some merit. From my own practice I can't think of an instance where I wanted to capture more than two, but yes, it would be nice to remember whether I'd already captured the slides or the notes or whatever it was. So I can see merit in somehow indicating that you've already captured this one.
I
Yes, it is a pain point that we don't let the user order this. So first, we could amend that: we could say that the order should be influenceable by the user. Or, if we just return all of them, then as Harald said, Capture Handle works. But it could also be the application that gets all of those and just lets the user see them through previews, lets the user reorder them, and then streams them later. Even in that case, we still save a lot of clicks by allowing the user to select all of them at once.
F
So I would encourage browser vendors that want to explore new UX like this to experiment, and I think they can already do that, by calling getDisplayMedia several times, like 10 times, synchronously, and doing a Promise.all. In that case a user agent can easily detect that, because the calls all come from the same JavaScript task. So if it wanted to, it could create a bundled prompt and say: oh, this page wants to capture multiple surfaces; rather than asking the user the same question multiple times. And that's perfectly fine, I think, for a user agent; it doesn't need any spec change to experiment with the UX in those cases. Okay, but this of course would assume that you would have to come up with some limit, some arbitrary limit, like no more than 10 captures, for instance, and then you would just reject the remaining promises if the user only picks three, for example.
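The pattern described here, several synchronous getDisplayMedia calls settled together, with an arbitrary cap and the unused calls rejected, might look like the sketch below. The stub is illustrative only; it simulates a user agent that fulfills as many calls as the user picked surfaces and rejects the rest.

```javascript
// Illustrative sketch, not a real browser implementation: N synchronous
// getDisplayMedia calls from the same task, bundled via Promise.allSettled.
// `picked` simulates how many surfaces the user actually chooses.
function makeStub(picked) {
  let granted = 0;
  return function getDisplayMedia() {
    // A user agent could detect same-task calls and show one bundled prompt;
    // here we just fulfill the first `picked` calls and reject the rest.
    if (granted < picked) {
      granted += 1;
      return Promise.resolve({ id: `surface-${granted}` });
    }
    return Promise.reject(new Error('NotAllowedError'));
  };
}

async function captureUpTo(getDisplayMedia, limit) {
  // Call synchronously `limit` times (the arbitrary cap), then settle all.
  const attempts = Array.from({ length: limit }, () => getDisplayMedia());
  const settled = await Promise.allSettled(attempts);
  return settled.filter((s) => s.status === 'fulfilled').map((s) => s.value);
}
```

With a cap of 10 and a user who picks three surfaces, this yields three streams and seven rejected promises, matching the behavior described above.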
I
It's not clear to me what the barrier to entry would be for that to be successful, and also, when we spoke about this out of band, you and I, I had some problems with that. But it's going to be a bit difficult to discuss all of those details on a video call; I think it's better to continue that on GitHub. So if you want to suggest that option on the GitHub issue, I would love to answer it there.
F
Sure, I think I added that comment with the Promise.all already, or didn't I? I can do that if I didn't. Sorry, I thought I had.
I
Okay, next slide.
I
Sorry, Bernard, next slide please. So before I continue with this particular slide, let's just clarify the premise: we've heard me, who pitches this, and we've heard Jan-Ivar, who opposes this, and currently absent are all of the potential customers that would want this. So I would just mention that we're going to proceed with the discussion under the assumption that I could convince Jan-Ivar that this is actually of significant enough interest that we should pursue it. In that case it would be interesting to discuss: do we want a new function, or do we want an extra parameter? And I believe at least Jan-Ivar has mentioned that he would prefer a parameter. Am I right?
F
Yes. I think, actually, there are unrelated issues, for instance with feedback I've given on Capture Handle and other things, that suggest that a common problem we have is how to return more useful information from getDisplayMedia. And one of them, actually, might ironically be to pass in a controller object, which suggests also having an options object, a second parameter to getDisplayMedia.
F
So I could see that, and spec-wise that's also a common way to add things, like for fetch, for instance, rather than minting new methods, which dilutes the language. Whenever we talk about screen capture, it's good to be able to refer to a specific method rather than a plethora of methods depending on the configuration.
C
It's still a matter of taste. We have C++ style guidelines that have at various times outlawed overloading, outlawed default parameters, allowed default parameters, encouraged optional parameters, and encouraged overloads. Yeah, yep.
I
Okay, we can continue with that later. Another thing that I would like the working group's feedback on is: have I neglected to think about any issues that we could have with having multiple types of display surfaces? Like, for example, should the user be allowed to choose two tabs, one window and a monitor, or is that a problem? Maybe we want to allow the user to only choose, okay, N tabs or N windows or N monitors, and you can choose whichever, but you cannot mix and match.
C
I do see an issue with outlawing mixing, in that if I want to show you my web page and how it renders on the monitor, outlawing mixing will prevent me from doing that. It's a very specific use case, but I don't like outlawing things unless there's a good reason to outlaw them.
F
Well, I think we should be careful in interpreting silence from the working group to mean that there are no issues, when the first question should be whether we want to adopt this, and whether we think this is a problem worth solving. If we don't feel that, then no one's going to have any further comments.
A
Elad, I think the audio is a little interesting, because it's really the system audio, right? So it doesn't matter what you chose; you're only going to get that. Am I right?
I
That's the last bullet point. I guess you could get audio. I guess it's operating-system-dependent whether you could get window audio; at least from tabs you can get audio. So as long as the user only chooses tabs, they could have separate audios or not, but then it becomes a little bit tricky, because the user cannot easily... I guess the UX could allow the user to say "this with audio, this without". And when it comes to several monitors, then it does become... I mean, the UX could show that, okay, you're either sharing audio for all or for none. But then the question becomes, okay, so which of the several streams has the audio track? But then you could say it would be the first one, right? So I think there are solutions to all of these problems, and I mostly wondered whether there were any other problems I was unaware of.
G
Yeah, because, I mean, for knowing if there is some problem to be solved: if there are other people using it, it could be worth it to discuss it. But I think that there are too many corner cases here. Yeah, if there are only one or two services using it out there, maybe, and they already have a solution, which is calling getDisplayMedia several times.
G
Are there many people already having these issues, or is this just a problem that is not necessary and we are trying to solve anyway? Or are we making a specification that is not going to be implemented in Firefox, in which case, for the developer, it would be as useful as not having it implemented anywhere, see?
I
So, just a second. Obviously I'm here pitching this because of
I
a partner, and I can see and talk to others, of course.
I
Useless, yes. It's not clear to me at which point we actually make the cutoff and say this is enough partners, or they're big enough, or they've got enough usage. It's an interesting question in general, not just for this particular proposal. Well...
G
But if it is something that is already possible to do, and we are just adding a way to do it differently, and it doesn't have much adoption, then I'm less prone to specify it, especially if there could be a lot of corner cases that could be dangerous. But anyway, that's my two cents.
I
There are a couple of services whose entire claim to fame is basically that they...
F
All right, may I add that I think Sergio had a great point about whether we could measure this. Presumably these users are able to make it work today by calling getDisplayMedia many times during the same session, and that seems like something we could measure. I think measurements that prove there's a significant number of users doing this would be very convincing.
I
Understood. If I can come up with these measurements, I will. I think it's more likely, though, that I would come with testimonies of interest from parties that want to use this. It's always difficult to say how much of something does not exist because it's too hard at the moment, and the new API is going to shift that, versus how much it is just, you know, a corner case. And I agree, I don't think the measurements are going to be very interesting either way, and that's why I would actually prefer to proceed by getting new support, testimonies of support.
B
Yeah, I'm a bit nervous about using metrics as the only reason for doing this, or the only reason for not doing this. I think if it's an API that clearly does something useful, and there are some people who would use it, then we should seriously consider it. I'm not convinced of that yet, but I'm prepared to listen.
C
So we seem to have resolved that whether or not this proposal should be pursued further is dependent on whether we can identify people who want to use it.
C
And we should also note that the question has been raised about whether permission prompt bundling is able to achieve the same functionality.
I
By the way, I've just thought about another use case, and this one is specifically for Bernard. Imagine that the user captures multiple different documents, all of them belonging to Microsoft Office, and at any given moment you only want to share remotely the one that's currently active. The user just says: hey, actually, I'm collaborating on all four of these. Then whenever one becomes active, it can communicate back to Teams: hey, I'm active, start streaming me now. Using Capture Handle you solve the ordering issue, so the user doesn't actually need to tell you which is which.
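The idea that Capture Handle lets the captured page identify itself, so selection order stops mattering, might be sketched as follows. The track objects and handle strings below are stand-ins for the real API; a real MediaStreamTrack from getDisplayMedia exposes getCaptureHandle() per the Capture Handle Identity spec, and the document names are invented for illustration.

```javascript
// Sketch only: mapping captured tracks to documents via Capture Handle,
// so the capturing app does not rely on selection order. The stub tracks
// mimic MediaStreamTrack.getCaptureHandle() from the Capture Handle spec.
const tracks = [
  { getCaptureHandle: () => ({ handle: 'doc-b' }) },
  { getCaptureHandle: () => ({ handle: 'doc-a' }) },
  { getCaptureHandle: () => ({ handle: 'doc-c' }) },
];

// Build a lookup from the captured page's self-announced handle to its track.
function indexByHandle(trackList) {
  const byHandle = new Map();
  for (const track of trackList) {
    const info = track.getCaptureHandle(); // null if the page opted out
    if (info && info.handle) byHandle.set(info.handle, track);
  }
  return byHandle;
}

// When 'doc-a' announces it became active, stream its track regardless of
// the order in which the user originally picked the surfaces.
const active = indexByHandle(tracks).get('doc-a');
```

In a real page one would also listen for capturehandlechange on each track, since a captured page can update its handle over time.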
I
Right, exactly. So at the moment, what a user like that would do is click something that only exists in Chrome and in Edge, which is "share this tab instead". But they wouldn't even need to do that. You would save them even that effort if you just made sure that as soon as they switch to another tab, making active another tab that is one of the ones currently being shared, you just start streaming that one, right?
I
So if you want to file a statement of support, I'll send you the issue.
B
So, Harald, just to recapture that second action item, which was about permission prompt bundling: who's going to look into that?
C
You suggested it, so either you can sell the idea to the people who want to use multi-capture, or...
F
Well, yeah. Well, I committed to posting an example on the issue, which I believe is issue 204 in screen-share, where we're looking at multi-capture, right?
F
I think it also would be useful, as my feedback from this, to track how many times people use getDisplayMedia in a session, and to get better telemetry on those things. So I encourage other vendors that are interested to track that as well, but my prediction will be that the getDisplayMedia numbers are already so low that it is hard to find.
C
Calling it multiple times requires that you have a cap on the number of things you want to capture, and that you have some sane way of signaling how many the user actually selected; I'm not sure.
F
Well, now... well, sorry, I'm confusing things; that was my idea of calling getDisplayMedia multiple times in parallel. Yes, then you need to have some kind of cap. But an even simpler approach would be: you just call getDisplayMedia once and allow the user to drive the process and click the button again. It's a very manual setup, but I don't think this sort of pressure to save clicks over a manual setup is necessarily correct, because there's so much complexity here, like audio.
I
No, but there are use cases that were simply not feasible. If we take again the idea that one tab has a video conference and four other tabs have different documents, then at the moment, for Chrome and Edge, if we looked at our statistics, we would find that an application would just ask you to share once, and then rely on the fact that you're going to manually press "share this tab instead" each time, because it was just not feasible to call getDisplayMedia four times.
A
Yeah, I think there might have been times during this actual meeting when we would have used something like this. You know, for the demo of Media Capabilities I probably would have put up another tab, but it's just too much hassle to switch tabs, so I didn't bother.
G
I'm going to be a bit blunt, but anyway: Elad, if you had just brought this example from the beginning, we would have understood from the start what you are trying to do, because the other use cases didn't make much sense. Now, with the use case about being able to share multiple documents in Teams and in Google Meet, that seems a good fit; this API is much easier to understand.
C
Not to close this off, but let's... so the decision is that Elad will continue on use cases, yeah. Hopefully.
B
One thing we didn't ever bring up in any of this, which is the other alternative, is to share the whole screen, which is what the users are going to end up doing at the moment. If the option is to pick four windows, none of them are going to do it; they're just going to share everything, because it's just quicker. They'll hide their email, they'll iconify their email, and then share everything. So that's why I think that the statistics path is going to be misleading.
B
Okay, right. I mean, I think if the app has engineered it such that each of them is separately streamed, then obviously they have to be separate streams. But if you're using a more general-purpose app like this, then what you're going to do is shrug; instead of flipping between things, you're going to share the whole screen, and then maybe manage that screen by maximizing different windows on it, using controls you already have. So I just want to emphasize that.
H
Right, yeah, sorry, this is not about multi-capture; going a bit off agenda, because we are running out of time, I guess. I just wanted to ask the group for some review comments on the Media Capture Extensions PRs we have put out. It's been some time, and with examples and everything, I think we have tried to incorporate all the comments Harald and others gave last time.
H
So it would be great, I mean not here but on the PRs, if you could just put up some comments, and if you want any more clarifications from our side, we'll be happy to put them up.
H
Okay. Would it be okay...