From YouTube: WebRTC WG meeting 2022-05-17
Description
See also the minutes of the meeting https://www.w3.org/2022/05/17-webrtc-minutes.html
01:05 Future meetings
04:00 WebRTC
04:00 RTP Header Extension Encryption
12:00 Issue #2735: webrtc-pc does not specify what level of support is required for RFC 7728 (RTP pause)
16:00 CaptureController
36:14 WebRTC Encoded Transform
39:29 Extensions to Media Capture and Streams
39:30 PR #61: Add support for background blur and configuration change event
52:44 PR #59: Add powerEfficientPixelFormat constraint
1:02:14 Dynamic Source for Screensharing
1:35:30 WebRTC
1:35:38 Simulcast issues
1:45:36 Next meeting
B: So today we've got a lot of things; we may not get through all of them, but: WebRTC Extensions, webrtc-pc, CaptureController, encoded transform, capture extensions, and dynamic sources.
B: Just, you know, can you attend, yes or no? Can you make it? Let's say that, okay.
A: Okay, I launched it as an actual poll just to make it easier. Okay, it looks like everybody says yes, except for a single no, so.
A: Should we conclude this? It looks like it's four yes, one no, and one "I cannot make it either way", so.
B: Okay, all right. A little bit about this meeting: the link to the slides is up on the wiki. We do need a volunteer for note-taking.
B: Okay, all right. We do have a code of conduct in W3C, so let's keep it professional and cordial.
B: For speaking, use +q and -q in the Google Meet chat to get into and out of the speaker queue, so speakers can manage their own queues when doing their slides. I won't go over the understanding-documents stuff.
B: Okay, so I think what I'll do, Dom, to give more time: I'll probably only deal with one or two issues from the WebRTC Extensions stuff, and then we can move to CaptureController, to get ourselves back on track.
B: Okay. So this is the list of stuff that I have. We probably won't get through much more than two of them, but let's start with RTP header extension encryption. I don't know, is Sergio here? Finally.
B: Well, not yet, okay. So cryptex has completed IETF Last Call. Sergio is, I think, working on finishing up the last changes in response to the IETF Last Call comments, so we're getting towards the point where...
B: ...it's almost done. We had an API proposal at the September 15, 2020 meeting, and the basic idea here is to encrypt the RTP headers on all the m-sections within a bundle, or none of them. So the way it works is that you always attempt to negotiate cryptex.
B: So you put a=cryptex on all the m-lines in a browser that supports cryptex, and then you can set the policy: "negotiate", which means you'll accept it not being supported by the other side, or "require", which means the other side has to support it or you'll fail. And then on each transceiver you get a little attribute that tells you whether it was negotiated or not. So that's basically what we had in the API.
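The negotiate/require behavior described above can be sketched as a small pure function. This is an illustrative model only: the names `rtpHeaderEncryptionPolicy` and `rtpHeaderEncryptionNegotiated` come from the slides and PR 106 and may change; `applyHeaderEncryptionPolicy` itself is a hypothetical helper, not a browser API.

```javascript
// Sketch of the proposed policy semantics (attribute names from the
// slides / webrtc-extensions PR 106; subject to change). Given the
// policy and a list of booleans saying whether each m-section of the
// remote offer included a=cryptex, decide the local outcome.
function applyHeaderEncryptionPolicy(policy, offeredCryptex) {
  if (policy === 'require' && offeredCryptex.some(c => !c)) {
    // "require": fail negotiation if any m-section lacks cryptex.
    throw new Error('cryptex required but not offered on all m-sections');
  }
  // "negotiate": answer with cryptex wherever the offer had it, so the
  // rtpHeaderEncryptionNegotiated attribute can differ per transceiver.
  return offeredCryptex.map(c => ({ rtpHeaderEncryptionNegotiated: c }));
}
```

With "negotiate" the per-transceiver attribute simply mirrors what the peer offered; with "require" the whole offer is rejected if any m-section is unencrypted.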
B: So what can happen is that a browser can send an offer to a peer that doesn't support it, or of course you can get an offer from a peer that doesn't support it. Cryptex, the extension, is what's called bundle transport, and it only requires that you have cryptex in all the m-lines of a bundle group. But you can have multiple bundle groups, and you might not be bundling, so it doesn't require that all m-lines be identical.
B: So here's an example that Harald came up with, where basically one bundle is for audio and the other is for video, so everything's not on the same port. This would be an offer you got from a non-browser, and what's happening here is that for some reason the audio is doing cryptex but the video is not: maybe it's a legacy video system that doesn't support cryptex.
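The per-bundle-group property in Harald's example can be illustrated with a minimal helper. This is a sketch for illustration: `cryptexPerBundle` is a hypothetical function, and the parsing is deliberately naive (real SDP parsing is more involved).

```javascript
// Sketch: check which bundle groups in an SDP offer negotiate cryptex.
// Cryptex signals header extension encryption with an "a=cryptex"
// attribute; within a BUNDLE group it has to be on all m-sections or
// none, but different bundle groups can differ, as in the example.
function cryptexPerBundle(sdp, bundles) {
  // Split the SDP into m-sections keyed by their a=mid value.
  const sections = sdp.split(/^m=/m).slice(1);
  const byMid = {};
  for (const sec of sections) {
    const mid = /a=mid:(\S+)/.exec(sec);
    if (mid) byMid[mid[1]] = /a=cryptex/.test(sec);
  }
  // A bundle group uses cryptex only if every member m-section does.
  return bundles.map(group => group.every(mid => byMid[mid]));
}
```

For an offer with an audio bundle carrying a=cryptex and a video bundle without it, this returns one flag per bundle group, matching the "audio yes, video no" situation discussed.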
B: So, if you put in "require" as your RTP header encryption policy, the browser would reject this offer. But if you put in "negotiate", basically what will happen is that the browser would send cryptex for audio but not for video. So you can get these situations where it's sending it on some bundles but not others, but I think that's basically what will happen if it's negotiated, and "negotiate" probably has to be the default.
B: There's a PR for this now, PR 106, and I'd encourage people to review it. Basically, we have the header encryption policy of negotiate/require, and it tries to explain what that means exactly.
B: I don't know if we need more examples of how this works, but this is what's in the PR. It is a little complicated, because you do have to change some of the steps in webrtc-pc to set the policies and things like that, and then we have this attribute of the transceiver, the rtpHeaderEncryptionNegotiated attribute.
B: So what do people think about this? Any comments?
B: Oh, previously? Yeah, this would be... oh.
F: This looks good to me. I'm just wondering whether we want to simplify it and say, within a bundle or not, we're just reporting cryptex or not.
F: I don't know if it simplifies implementations, but it simplifies the checks that we would do a little bit: you would check in only one place whether cryptex is negotiated or not, and not for each transceiver.
B: Yeah, it would be because of this weird situation here, right? The question is: how would you tell somebody that it was not on any of the video m-lines, but it was on the audio m-lines?
B: The problem is that we have no concept of bundle in webrtc-pc, so there's no attribute for the bundle; it's just for each of these mids.
B: It is a bad idea, but the question is: do we prevent that from happening?
G: Yeah, I think that's reasonable to do. Changing that is really awful, and writing tests for it is going to be another matter entirely. So let's just keep it simple, unless we see a use case, I guess.
B: Yeah, okay, I think we've got as much as we can out of that one. So I do want to try to cover issue #2735. It's been changing a little bit. This is what it looked like when the issue was originally filed; what this is about is some confusion that I think may exist about RFC 7728.
B: The filer basically said he was confused about what level of support is required for RFC 7728, the RTP stream pause, and the problem is that there's no level of support required for RFC 7728.
G: I can speak to this very briefly. There is one place in webrtc-pc where it specifies paying attention to the simulcast rid paused flag, the little tilde that you put in the SDP in the simulcast attribute. So we've got one place, in setRemoteDescription when processing remote answers, that specifies looking at that paused flag from RFC 8853 and then setting the active bit on the encoding parameters to reflect that state.
G: But if you're going to support that flag, that little tilde, that requires support for RFC 7728, and so my question was: if we're going to be paying attention to that, don't we need to support 7728? It seems like people are kind of leaning towards removing that language from webrtc-pc, so just ignore that little rid pause flag, and that makes the problem go away.
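The one behavior in question, mapping the RFC 8853 tilde onto the encoding's active flag, can be sketched as a tiny parser. `ridActiveFlags` is a hypothetical illustration of what setRemoteDescription is currently specified to do, not an actual API.

```javascript
// Sketch: parse the rid list of an "a=simulcast:send" attribute and
// map the RFC 8853 "~" pause prefix onto the active flag that
// setRemoteDescription would set on the matching encoding (the one
// place where webrtc-pc currently touches RFC 7728 pause semantics).
function ridActiveFlags(simulcastSendList) {
  return simulcastSendList.split(';').map(entry => {
    const paused = entry.startsWith('~');
    return { rid: paused ? entry.slice(1) : entry, active: !paused };
  });
}
```

For "hi;~mid;lo" this marks the mid layer inactive; removing the language from webrtc-pc would mean the tilde is simply ignored instead.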
B: Yeah, my overall question is why the tilde is even in there, because it seems like it's a problem: in setParameters you can set active or not, and that's not supposed to cause any SDP renegotiation or any activity at all in SDP. And there is something called the video layer allocation header extension, which is essentially sent along with the RTP packets.
H: The implementer mixed two different features.
B: Yeah, so I think this probably shouldn't be there, is the answer.
B: Okay, since we're a little bit behind, why don't we skip over these issues, because they're a bit complicated, and move to CaptureController now.
D: Yes, thank you, and sorry for my voice; I'm a little under the weather today, so I'll go a little slowly. All right, so this is... yeah, next slide, please.
D: So I'm going to propose a new API that would cover two issues, or maybe three, and the first of those is conditional focus. So, just to recap getDisplayMedia...
D: We have Capture Handle, identity and actions, which exist to let apps keep the user in the VC tab. And we're also hearing from some screen-recording apps that it's too soon to focus the new screen, because the user hasn't hit record yet, and they're asking for a way to focus later, which is not part of the proposal at the moment. But this is about providing a place where we can put potential controls and have these discussions later. So: where to put a focus-control API, and why not the video track?
D: What that means is that you can have one source and many consumers, where each track basically has its own constraints, and the constraints mechanism arbitrates between consumers for some properties on a source that are not shareable; you can use constraints to negotiate. But in practice most browsers implement things like downscaling and frame decimation per track, so these tracks can have independent settings, and that's fine for some APIs. We have new methods on the track, like cropTo, that are basically per clone.
D: So if you crop one track clone, it doesn't affect another. However, imagine a new track focus() method, say: that would affect all clones, and that's a leaky abstraction, because the user's focus is not an inherent property of a single video track but of the source. And the other problem with putting it on the video track is: why only the video track? What about the audio track? Do they both have this property? If so, why? Why not?
D: Yeah, and so there's a similar issue with Capture Handle actions: there's a method to send a capture action to the captured page.
D: It's an API to send supported actions to the captured page. We're still discussing it, it's still early times for it, but it also seems misplaced on MediaStreamTrack, where it is right now, and the reasons are the same: basically, one source, many consumers. Unlike cropTo, sendCaptureAction would affect all clones, again a leaky abstraction, because progression of the captured page isn't the property of a single video track. And same thing: would it appear on both the audio and the video track? Who knows.
D: And similarly, the same argument can be applied to getCaptureHandle: the spec already notes that there's no consensus yet on whether getCaptureHandle belongs on MediaStreamTrack or on a dedicated controller object that is neither clonable nor transferable.
D: So this is the idea. It uses a pattern that's familiar to those who have used an AbortController with fetch: you first create a new CaptureController object, and then you pass it in as an argument to getDisplayMedia. This does a couple of things upon success.
D: This controller is associated one-to-one with this getDisplayMedia call, and that's a new advantage because, unlike say mediaDevices, where if you get an event you don't know which specific capture it concerns, since you can have multiple captures going on at the same time, here you have an object dedicated to this particular capture. So we could do things like, if we wanted to later...
D: This could solve identity, where you could put origin and handle directly on this controller object, and you could have a focus method on the controller that returns a promise. The idea here is that if you don't specify a controller, you get backwards-compatible behavior: the browser will automatically focus the window or tab. But if you specify a controller, it will not automatically focus unless the application calls focus(), and the application could call this even before calling getDisplayMedia, or shortly after.
D: So maybe this would open the door to discussing potential future changes there, but I don't see a problem with, for example, allowing up to one second after the call; that seems fine, as long as the end user can associate the focus with their action, and that seems similar to transient activation. The problem we used to have was correlating this focus method with a single getDisplayMedia success; we were talking about "within the next task" and really tight control there.
D: This kind of solves that, because we have a dedicated controller object for this capture only. And same for actions: we could also put getSupportedActions on this controller, and if they are supported you would get a non-empty array, otherwise an empty array, and you can send an action using controller.sendAction. Next slide, for example, and that's it.
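The proposed pattern can be mocked to show its semantics: no automatic focus when a controller is supplied, and focus() callable before or after capture starts. This is a sketch of the slide proposal only; `MockCaptureController` and `startCapture` are stand-ins invented here, and the real API shape (CaptureController, focus) is still under discussion and may change.

```javascript
// Minimal mock of the proposed CaptureController pattern (shape from
// the slides; names may change). Passing a controller to
// getDisplayMedia suppresses the automatic focus switch; the app then
// decides by calling controller.focus(), or never calling it.
class MockCaptureController {
  constructor() {
    this.focused = false;
    this.captureStarted = false;
    this.pendingFocus = false;
  }
  focus() {
    if (!this.captureStarted) {
      // Proposal: focus() may be called before capture starts and
      // takes effect once getDisplayMedia succeeds.
      this.pendingFocus = true;
      return;
    }
    this.focused = true;
  }
  // Stand-in for getDisplayMedia({ controller }) resolving.
  startCapture() {
    this.captureStarted = true;
    if (this.pendingFocus) this.focused = true;
  }
}
```

Real usage would be roughly `const controller = new CaptureController(); await navigator.mediaDevices.getDisplayMedia({ controller });` followed by an optional `controller.focus()`.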
A: Yes, hello. I would need to think about this a bit more, but generally the controller pattern seems nice to me; if this helps us move forward with all sorts of proposals, all the better. A couple of questions. Would you be open to exposing a getter for the controller on a track, or would you consider that this actually goes against the grain of your proposal? That's number one. And number two: what about an API, proposed right now, that returns several streams?
D: Sure. On the first question, I think it would be a feature to not expose it on the track, so I would discourage having a method to get to it, because the goal here was to return an object that the owner, the caller of getDisplayMedia, can keep to themselves and not necessarily distribute to every downstream media consumer that they may have, for clones and that kind of stuff.
D: Because again, if you have a single track, that might be one clone of many. And if a JavaScript application wanted this, they could easily add this controller as a custom attribute on their track, right?
A: That's one thing they could do, but another thing you could do is say that when you clone, or when you transfer, you don't actually transfer the controller; you do that separately. But yeah, I guess you make a good point that if somebody really wants to attach it to the track, they can do it themselves.
A: So yeah, good point.
D: Okay, great. And the second question: yeah, I haven't really put anything about multi-capture here, and I'm open to discussing that if we think it's useful. I haven't thought it through; intuitively I would have expected one controller per capture, but if you can think of use cases for having a shared one, we could also entertain that, I guess.
A: I would say that if you're capturing several things at once, with an API that is not currently standardized but has been proposed, then you might actually have completely different actions for the different things. And I was kind of leaning towards a model in which, depending on what you capture, a window, a tab, or a screen, you might have a different subclass of MediaStreamTrack, with different things exposed.
A: It looks like a controller would have a bit more trouble doing that, unless you also subclassed the controller and gave it different properties. Because imagine focus, for example: it makes sense for capturing a window or a tab, but probably not for capturing a screen. But I imagine you would still want to expose focus regardless, and then it would raise an exception.
D: Yeah, so I put a little disclaimer at the bottom here, sort of my speaker notes down there. The idea of focus would be that, for a captured window or tab, it would resolve when the tab or window got focused, but no earlier than getDisplayMedia.
D: Right, so the goal is to get an owner object for your capture, and given that there can be different kinds of capture, I mean, we could always discuss making different subclasses of CaptureController. I think that's a bit excessive at this point, but you know, I think we can always...
F: So I think that the idea to separate functionalities that are specific to a source, and not to a track, is a good one, so we should do that. We tried to add that to the track and it's not a great model, so having it elsewhere, where there's a single object, is a good idea.
F: I'm not sure about the API surface, but we can probably iterate on it. For instance, focus is one thing and supported actions is another set of things, and maybe it's not just one object but two objects.
F: We should dig into that: apply this idea of separating these functionalities out of the track, and see where they should be put, in the same object, in a different object, in a controller kind of style, or in an object that is created by getDisplayMedia itself, and so on. Let's continue discussing it and improving it.
A: I would have a quick question: assuming we agree that we want to go this way, how quickly can we actually agree that this is the shape and get it done? Because otherwise it could be that discussions that started a very long time ago, for example conditional focus, would now get scheduled behind yet more discussions.
D: Right, yes. I would say that I'm not a huge fan of subclassing. It has some benefits, I understand, for, you know, a method that's not there, but that's usually just going to throw in a different place: you're going to get a TypeError instead of a can't-focus error or whatever, so it doesn't seem that different to me. I think the main benefit here is to have an object where we don't have one today, and I would certainly welcome getting this ready.
D: I understand conditional focus is probably the most immediately useful, so I think we can hopefully go this direction for that, so we don't end up...
D: Yes, but also no one implements this yet, so we could definitely implement both of those. I mean, if you have a CaptureController object, it seems like one useful property would be: what type of source do I have? And I don't see that, but...
A: We do have that already. A quick question: what do you mean that focus would throw elsewhere? If we subclass, I would expect that the developer would check for the focus method right before calling focus, if they know that this object doesn't necessarily have it.
D: Sure, but the benefits of subclassing that you get from C++, for example, where things don't compile and you catch it earlier, are not present in JavaScript. So you'll get JavaScript developers who just call focus because it worked when they tried it; then the user picks a different capture type, something returns a different type, and "focus is not a function of undefined" is the error they would get, right? So I'm saying subclassing doesn't buy us that much in JavaScript.
E: So, to a large extent, and assuming that conditional focus is the one topic where we have the greatest clarity, maybe CaptureController should focus, so to speak, on this, and when the time comes to look at supported actions, we can look at whether they share and offer the same properties across the two needs.
A: Okay, nobody objects. What do we think this means for backwards compatibility, I mean, if CaptureController does not exist on the platform? "controller = new CaptureController()" is probably going to throw, so that looks less than the best solution; hopefully there is a better one.
E: My statement: so, concretely, if we do focus on focus, I guess this makes it a screen-share proposal rather than a capture-handle one, where it's right now being discussed.
B: Right, but still. There we go, all right.
F: Yeah. So, a couple of meetings ago we agreed to add generateKeyFrame to RTCRtpScriptTransformer, so that when you're adding a transform, a WebRTC encoded transform, you're able to control the encoder and ask it to generate a keyframe, and currently that's working great.
F: So when the promise is resolved, you know that you will be able to read and get the keyframe very quickly. It was mentioned during the PR discussions that if the promise could return the timestamp, the keyframe timestamp, it would be a nice addition, and that's true: it would probably make life a little bit easier. But since generateKeyFrame is currently able to generate several keyframes from different encoders...
F: ...we would need to generate multiple timestamps and potentially resolve with the last one, so it does not work great with multiple rids. So, next slide: the idea is basically to change the parameter from a sequence to just one parameter. The old version returns a promise with no value and takes a sequence; the new version would return a promise that resolves to a timestamp, and there would be an optional rid, meaning you can select which encoder you want.
F: So if there's agreement, we can go to the next topic and mark it, and then the editors will be able to finalize the PR. Okay, now moving on to media capture extensions.
F: In the past we discussed background blur, and we know that background blur is being deployed in web pages a lot these days; a lot of websites are actually using canvas, WebGL, whatever web technology, to implement background blur, and it's working. It's also supported by OSes: iOS and macOS, for instance, have built-in background-blur support, and Riju and others did some measurements, I believe.
F
I
believe,
and
they
can
talk
about
the
measurements
if
the
perf
measurements
are
more,
but
basically
it
might
be
two
times
or
even
more
power
efficient
to
use
the
os
background
blur
than
the
web
technician
background
blur
and
also,
if
background
blur
is
already
applied
by
the
os,
it
does
not
make
any
sense
for
the
website
to
apply
background
blur
on
top
of
it.
F: The proposal is to add a backgroundBlur constraint. It would be a capability and a setting as well, and it would allow applications to identify whether background blur is supported by the OS, and also whether a given track is already blurred by the OS or not. And, of course, with applyConstraints you would be able to switch background blur on and off.
F: So the proposal is also to make backgroundBlur a boolean constraint for now; OSes are more or less like echo cancellation, it's enabled or not, and we could start with this. I think we can evaluate the usefulness of a zero-to-one kind of double constraint later, and if we go there we would need to understand how to define it: what does zero mean, probably no blur; what does one mean, probably full background blur; but what is full background blur? So we will need to work on that. That's why, currently, the proposal is to start with a simple boolean backgroundBlur, and I'd like to get feedback on that.
F: Yeah, I'm not exactly sure about the migration path. One migration would be to add a new constraint which would be a double, for instance. Maybe by the time we implement that, all OSes will have changed to a double and there will be a clear definition. But yeah, starting with a boolean seems like a safe approach to me right now.
F: Okay, so I guess we can probably work on merging the PR, finalizing the PR for the editors. So, next slide.
F: The second part of the same PR is the introduction of a new event called configurationchange, and the reason is that some OSes may change camera or microphone settings outside of user-agent control. For instance, you're in Safari, or in an application, and you grab camera access, and at some point the user might decide to enable background blur through the OS UI.
F: So initially the stream was not background-blurred by the OS, and afterwards it is, and some OSes will not allow web applications to switch it off. But in any case there's a change of settings, it went from false to true, and web applications might want to be notified of that. Currently the only way would be to basically call getSettings over and over and over. So the proposal here is to add a configurationchange event that will tell the web application:
F: "Hey, if you're interested in understanding what happened, please call getSettings and getCapabilities and look at whether these things are good for you." It would be a simple event; we would start with a simple event, no fields, nothing, and there's a small example there that says: whenever the configuration changes, we toggle background blur on or off if we can. Dom, you're on the queue.
F: So yeah, I forgot to mention that it would be on the track object, and it would apply to any settings or capabilities. We could restrict it to only some values, but we would need a good reason for that, and I feel like it might be useful in the future for other properties as well.
D: So this made me think about the previous slide: this is applyConstraints on each track clone, right? So how do you deal with it if this is a global thing, OS-only? I guess you're not supporting exact, for example?
F: So, on iOS, if you enable background blur through the OS, then the web application cannot do anything. So the idea would be for the backgroundBlur capability in getCapabilities to go from {false, true} to just {true}, and then if you try to applyConstraints with background blur exactly false, it would reject.
F: Yeah, right, and if you try to apply it with a non-exact constraint, it would stick to true, basically, even if you set it to false. That's the idea.
D: And I suppose, if a user agent wanted to implement this in the browser, they could do it per camera, or per track, basically, if they wanted to; that would be okay? Okay, thank you.
F: That's a good thing to discuss. For instance, I know that on iOS, if you disable echo cancellation in Safari, we would not be able to disable it for only one track. So there are precedents for things that leak already, if you apply constraints on one track and not the other.
D: So, one more thing: a side effect of this is that applications can turn this on and off with the blur constraint. Is that something you had in mind?
F: So, to me, if the OS is not applying background blur, then it would be nice to allow the browser to enable and disable it; that's something we could do at the browser's convenience, for instance.
H: You're on the queue. Yes, I was wondering what other settings and capabilities would trigger the configuration change. You mentioned OS-level ones; which ones are they? And would it also apply if the same device was opened by multiple pages and one of them was, for example, changing the brightness or the exposure mode and different things like that?
F: Yeah, that's the same thing, for instance, on macOS: if two applications are capturing, they might compete over camera settings, and what we try to do in the browser world is usually to hide this. We can sort of do that and continue doing that; the configurationchange event will be at the discretion of the user agent on that.
F: Okay, and it's true that there might be some security or privacy issues, like if you're capturing across two different origins, so you need to be cautious, and in the PR we probably need to add some warnings there. In Safari I don't think it would be possible, because only one page can capture at any point in time. And if the track is, for instance, muted, we should probably not fire the configurationchange event, even if a configuration has changed, yeah.
F: Okay, so I think that for PR 61 there's consensus to move forward, provided we improve the PR in terms of the things we discussed, privacy and so on. Is that correct?
F: So, Henrik from Google uploaded a PR about adding a powerEfficientPixelFormat constraint, and it seemed good to describe it here before making further progress on the PR. The issue is that some cameras generate motion-JPEG video frames, especially for some resolution and frame-rate combinations, and the OS will typically decompress...
F: ...these video frames before feeding them to the user agent, which usually expects YUV or other non-compressed video frames. If there is hardware decoding, then the impact of actually capturing motion JPEG is okay, but if MJPEG decoding is expensive, and on some machines it is, then there's a potential power impact. Apparently Google made some measurements and validated that it was an issue, and some applications would like to be able to avoid these configurations, to avoid the motion-JPEG decoding penalty. So the proposal there is to add a boolean powerEfficientPixelFormat constraint that would be exposed in capabilities and settings, and you could use it when calling getUserMedia or applyConstraints as well.
F: Yeah, I like it. It's probably another small fingerprinting issue, meaning you would be able to detect that some cameras support motion JPEG, but since it's after getUserMedia has been granted, I think it's fine.
F: If you require something specific, for instance, I don't know when motion-JPEG video frames are used, but let's say it's for super-HD resolutions, and you actually want to capture super-HD resolutions because you just want to capture a single frame, for instance, and nothing more, in that case maybe you're using motion JPEG and you don't care about it. For a typical video-conferencing system, I would guess you would like to enable it.
F: The issue is that the default value would still be to allow power-inefficient pixel formats, because we're not sure about potential backwards-compatibility issues.
F: Yeah, I would be happy to reverse the default, so that by default we would not use power-inefficient pixel formats. My guess is that there's still leeway for the user agent...
F: ...to not select motion JPEG if it can; but if it cannot, like if you're asking with an exact constraint, for instance, and there's only motion JPEG, then you would still select it, and that's fine. But I would hope that user agents in general will try to favor not selecting power-inefficient pixel formats.
E: But I guess that's a bit what I mean: if you're asking for an exact resolution then, as you say, you probably don't care about the power; you understand that you're asking for something weird and specific. If I'm just, you know, doing my getUserMedia so that I can send the video somewhere, I really don't want to have to tell you to do that efficiently. Just do it!
F: Sure, but the point is also: let's say you ask for 640x480 and you actually get 1024 to avoid motion JPEG; then it's good that you understand, through settings and capabilities, that the user agent just tried to avoid it due to motion JPEG.
D
F
D
Acknowledged, yeah. And also, I think user agents might have different defaults for this. Maybe for desktop it's not a big deal to use Motion JPEG; maybe on mobile phones it would be.
D
A
D
F
Yeah, yeah. If we were to restart from scratch, I would probably try to make this powerEfficientPixelFormat on by default, but it's probably too late, so I think it's fine like this.
E
I mean, I guess as long as user agents can already do the right thing, then I'd be fine with exposing the information. It's just — if this makes it so that they can say "oh, it's not my responsibility", then I think that's really the worst possible outcome.
E
F
So for device selection, I totally agree. The applyConstraints thing is fine as well. There's currently no way you can discover the different formats of the camera — and that's really sad; that's something I think we should do — but in any case, having this constraint available for applyConstraints is good for device selection.
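A minimal sketch of how that might look with applyConstraints — assuming the PR #59 name, and assuming the constraint is expressed as non-required (`ideal`) so that a Motion-JPEG-only camera still works:

```javascript
// Build a non-required constraint set: "ideal" lets the user agent
// fall back to an inefficient pixel format when nothing else exists.
function pixelFormatConstraints(preferEfficient) {
  return { powerEfficientPixelFormat: { ideal: preferEfficient } };
}

// In a browser, after getUserMedia has produced a video track:
//   await track.applyConstraints(pixelFormatConstraints(true));
const c = pixelFormatConstraints(true);
console.log(c.powerEfficientPixelFormat.ideal); // true
```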
F
I don't think it makes sense at all, which is why powerEfficientPixelFormat will not be able to be a required constraint if you call getUserMedia. All these constraints cannot be used as required constraints for getUserMedia — that's a good thing — and we would follow this rule for powerEfficientPixelFormat.
D
Yeah, I agree with that. Some users may only have Motion JPEG cameras, and we don't want to exclude them from using a website.
F
Okay, sounds good. Then we will mention that in the PR and proceed with finalizing it.
D
A
Hello, thank you. So I would like to talk about sharing, and specifically sharing tabs, but actually we can generalize the idea to sharing anything. Usually, when a user gets ready to share a presentation or to explain things to people, they might have several surfaces that they might want to switch between. For example, I might have something for editing code, MDN, Wikipedia, Stack Overflow — all of them ready, scrolled to just the right place — and I'm just about to start giving the presentation. I share.
A
I show the first thing, everything's great, people can't stop themselves from applauding, and then I need to start sharing the next thing — and here it gets a bit tricky. Next slide, please. Because how do I even go about starting to share something else? If I'm not using Google Meet, I need to go to the tab for the video conference, right? So I need to find that one out of potentially many tabs and many windows, and then I need to trigger another call to getDisplayMedia, right?
A
I try to keep presenting, and then I need to switch surfaces yet again, and it goes on and on — so, obviously not ideal. Next slide, please. So there's an obvious solution, and the solution is that right now Chrome, using an extension API, allows you to just go to the tab that you want to share instead and press a little button saying "share this tab instead".
A
Obviously, we would like to ship that for absolutely all websites out there, but that's a little bit problematic. Next slide, please. Because until now this did not exist, many applications are built with the assumption that it does not exist. If they detect something — especially if they establish a connection using Capture Handle, though they can do it using other means — they might start relying on that, right? They could expose controls.
A
They might rely on the fact that they're self-capturing. All sorts of things could be assumed that are correct initially but, if the user presses "share this tab instead", would no longer hold. Next slide, please.
A
So here we've got one example. Currently, if you're using Google Slides or Google Docs, you can embed Meet inside, and it kind of assumes that you're capturing the current tab. If you try to capture any other tab, it tells you "hey, what are you doing?" and shows you an error. Sure, it could also transmit that other tab remotely, but that's a product decision, right? That's not a decision for the user agent.
A
If, for some reason, the application does not think that the user should be able to choose anything else, so be it. And if the user has already chosen the right thing, and all sorts of nice things are happening, and the user is happy, we don't want to confuse the user by showing them a button which, once pressed, breaks everything. Next slide, please.
A
Similarly, let's say that we've got two tabs going, and one of them is capturing a slides deck, and we can actually remote-control that tab. Then maybe we switch to capturing Docs — or maybe we started by capturing Docs — and we can actually scroll in Meet, and that scrolls the captured tab too. The user might think: hey, that's just a general thing that can happen, right? Like, if I capture another tab, I can scroll it from this tab. Then they try to share another tab — it doesn't work — and they're upset.  Next slide, please.
A
The application could not have controlled that. Basically, all of those webby pet peeves that we have with web applications exist because they're kind of limited in what they can do in comparison with native applications — they don't fully control the experience. When we can give them a bit more control, we can make them a bit more polished.
A
So, next slide, please. I suggest that the solution here is mind-bogglingly simple. All we need to do is just expose an extra boolean. The application can tell us if it's interested in dynamic sources, and then it's up to the browser to decide what dynamic sources it wants and how it exposes them — all of the normal things. It could even disregard what the application says, or not; that's up for discussion. But at least the application...
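A sketch of what such an opt-in could look like. The member name `dynamicSources` below is purely illustrative — the actual name, shape, and default were exactly what the group went on to debate:

```javascript
// Illustrative only: "dynamicSources" is a placeholder name for the
// boolean proposed in the meeting, not a shipped API.
const displayMediaOptions = {
  video: true,
  audio: false,
  dynamicSources: true, // app opts in to "share this tab instead"
};

// In a browser:
//   const stream =
//     await navigator.mediaDevices.getDisplayMedia(displayMediaOptions);
console.log(displayMediaOptions.dynamicSources); // true
```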
A
F
Yeah, I'm a bit surprised by this API surface. We need to think a bit more about the use case and so on, because it seems that — maybe, like for the self-capture thing — the user agent can already do some heuristics. So that's one thing. But in any case, I'm not sure that the web application can decide whether it wants this source-change support to be true or false at the time it calls getDisplayMedia.
F
Maybe it will want to look at the actual track — like, oh, is it a self-capture or is it not — to actually decide whether it wants to be able to change source. So maybe there should... I wonder whether you had discussions about a more flexible API surface, or whether it's this particular API surface.
A
So, thank you. You've mentioned two things; I'll go in reverse order. If you want to give the application even more power, to let it make an even more informed decision later, I'm up for it, and we can discuss exactly how to do that. As for heuristics, which was the first thing you mentioned, I would like to push back on that. I think that heuristics are going to serve some applications, but not others.
A
That we can discuss. I would argue that, because we don't want to break or even inconvenience existing applications, we can just default to the existing behavior of sources not being dynamic. But I'm open to discussion, because the important thing, I think, is to allow the application to tell us what it wants.
D
So, yes, I'm a bit concerned about this API because it would let the application limit the user's choices. And I like the feature in Chrome that lets you switch — I think dynamic switching is a good idea — but it also raises some questions. Like, if the user changes source, is that another opportunity? Will it focus the new source? Will it not focus it?
D
It's not clear to me how that would work. Also — and this might be fine for Google Meet, which would set this responsibly — this would be available to any web application, and they could turn this feature off for their users, and that doesn't seem as desirable as having this decision in the user agent. So I tried...
D
Google Docs has the integrated Meet feature now in Chrome, and I tried it, so I'm wondering if this is actually related to that. The problem there seems to be keeping the user from picking another source, which in current Chrome is a problem already. When I start presenting, I actually get the Chrome picker that defaults to this tab, but there are still other options that show other tabs, and if I pick some of those I get what you mentioned — why did we get to click this button?
D
I click this button and Google Docs now tells me: sorry, you can't do that, you have to pick the same tab. So I'm wondering if this is a subcategory — if the problem this is really solving is that we want self-capture here, and we have a specific use case for self-capture. Maybe the user agent should just determine this based on whether this is self-capture or not, because self-capture seems very different from capturing any source.
D
I can't see what kind of app would break in the general case of not picking self-capture. For instance, you might still have...
D
You might have actions, for example, or Capture Handle identity, but you could solve that the same way you would solve it for navigation, for example.
D
So there's really no inherent difference, to me, between capturing another tab and navigating from document A to document B in that tab, versus picking a different tab that happens to have document B in it.
D
A
Okay, you've said a couple of things, and I hope I'll be able to respond to all of them; if I forget one, please remind me. The first thing you said is that this would allow us to limit user selection.
D
A
I think you should look at it the other way around. Right now there is no user choice — you cannot actually press "share this tab instead". So this is a boolean, and we can look at it as actually introducing new behavior: we're not limiting the user, we're actually giving them a new choice, responsibly.
A
Second — you didn't actually say it exactly this time, but I think we've discussed this before — you were, if I'm not mistaken, concerned that this could be used to kind of nudge the user towards certain things. I would just argue here, publicly, that this could not happen any more than before, because basically the moment the user chooses to select something is the moment where you could push them.
A
You've mentioned how Google Docs uses preferCurrentTab, and yes, that's true, but that's unrelated to this particular thing — that happens before we even show the "share this tab instead" button.
A
This would be handled: at least with the Chrome implementation of this UX, you must actually be focused on the new tab at the moment you press this, so it's a non-issue. It could be that you're going to choose a different UX, and then maybe you will want to standardize a bit more about what it does with focus, and I'm sure we'll be able to find a good compromise there. And the fourth thing that you said was that it seems to you like...
A
...this is specific to self-capture, and I will admit that self-capture is the easiest example of when this is interesting. Then I would go back to what you said: okay, why not use a heuristic? I would claim that if you've got two different applications — let's say Google Docs wants one behavior and Microsoft Office wants another — you wouldn't want different browsers to behave differently, to, you know, employ different heuristics. You wouldn't want one application to...
A
...be ill-served just because the other one was there first, or is louder, or whatever. It makes sense for this to actually cater to the specific application. I cannot come up with a heuristic that would fit all of them; otherwise we would just hard-code that behavior. And the last thing that you said — I forgot that one. Could you remind me?
D
I'm not sure that... well, I didn't actually suggest that this could be used to coerce the user. I think my main comment here is that it'd be useful to see if user agents — you mentioned that this is a way to enable features, but I think it would be better to leave the application out of that decision. This seems early to standardize to me, because this is still user-agent space.
D
In my opinion, it's good that browser vendors are experimenting with new features in this area, and I would hope that they could do that — and maybe make a special case for self-capture. I don't understand why letting all applications control this feature is necessary.
A
Oh, okay, I will respond to this, but I did actually recall the fifth thing now. The fifth thing you mentioned was navigation, and you asked why this is not the same — and, by the way, sorry if I put words in your mouth about coercing the user; I thought you said something to that effect in the editors' meeting, but maybe I misremembered. So you're saying it's good that user agents are experimenting with this, and I respond that we've got applications, even just inside Google...
A
We've already got applications that are pulling in different directions: one of them wants to have "share this tab instead" and one doesn't, and a heuristic does not actually allow us to give them different behavior. But even if it did — what happens if Microsoft comes next and says that the heuristic we employ is exactly the inverse of what they would like? How do we make a fair decision there?
A
Is it okay if I also respond on navigation before handing the mic back to you? Okay. So, at least for self-capture, navigation is interesting in that it just stops the capture, so it's a non-issue there. And when it comes to navigating a captured tab, you are correct that the user can do that and break things. But there are some problems that we cannot solve today; it doesn't mean that we should not try to solve other problems. So that is my answer on navigation.
F
A
I am completely open to going one way or the other as the consensus goes, so it can be a hint or it can be a requirement. It is not important to me because, basically, in Chrome we intend to just abide by the hint. But if you would like it to be a hint, so be it.
D
Well, I'm also not sure about the effects of a source change, regardless of this API. One of the effects of Chrome changing its source on the application today, I mean, is that constraints are going to update.
D
A
So long as you only change between different tabs, I don't think that constraints need to change, and that is our current implementation. Ideally, one day we would want to support changing more dynamically between more things, but that's for the future, and we can tackle the challenges then. Right now, if you start sharing another tab, the most that can happen is that the resolution changes all of a sudden — but that can also happen if you just resize the window that contains the tab.
D
Right. So I'm not supportive of this, but in the interest of progress I'd like to at least settle on the default here. It seems like requiring applications to provide this before you get a source change in one browser... it would be better to flip it: it sounds like the constraint here is for applications that don't want a source change, and the default should be what benefits the user. So maybe something like "prevent source change" — at least, that's...
A
Okay for me — yes, if you would like that, that's okay for me. But just so I understand: you're saying you're not supportive. Does that mean you're not supportive but you'll not block it, or are you saying you're not supportive and you're going to block this, but just in case your mind changes in the future, let's bikeshed? Which interpretation is it?
D
Well, I'm not convinced that the user agent couldn't figure out a heuristic here on its own that would work for both Chrome and others — Microsoft, for example. I mean, it seems to me that the question here is that source change is harmful on self-capture because it violates the assumptions, and this is much clearer once we have getViewportMedia — because I assume that, as currently written, this getDisplayMedia constraint is also reused by getViewportMedia, so I would object to source change on that.
A
We can say that this has no effect on getViewportMedia, if you're concerned about that. And about Microsoft versus Google, et cetera: well, what happens if we decide, okay, on self-capture you don't get the "share this tab instead" button, and then Loom suddenly comes over and says: hey, users often start recording the current tab, then they switch and start recording another — why have you made that impossible for us? This is exactly what we're looking for.
A
F
I think that a hint is a way for both Chrome to move forward and for others to try heuristics, and a good default — which would be that in Chrome the icon will be shown — seems like something we can maybe get consensus on as well.
A
I agree. I think that the hint would be an easy compromise. What do you say?
D
F
Yeah, but it might be a good heuristic, and if we see that it's a good default, then we can either standardize it later on, or user agents will still be able to implement it no matter what.
A
Just so I'm clear, are you saying yes? Can I say: it's a hint, we're going to bikeshed the name, we're going to bikeshed the default value, but essentially we're in agreement here?
E
Yeah — I guess the time is gone. I mean, I think I'd like to understand better the relationship with self-capture. I take your point, Elad, that maybe there are apps that would need this beyond self-capture, but then maybe we can discuss a broader thing once we have these apps knocking at our door. But if there is this consensus emerging around the hint, that sounds like a pretty good approach to me.
A
I acknowledge that it's a little bit less convincing than self-capture, but: what if, right now, I capture Slides in a different tab and I start remote-controlling that, and then I find out that I cannot actually remote-control other things? Let's say that the entire experience is about being able to present slides, nothing else — say it's a new video conferencing tool that's built only around that. It's not the most convincing example, but it is possible.
A
E
Fair enough. I mean, again, if the hint gets us where we want, I think it's probably fine. But yeah, in terms of use cases...
E
A
D
So I would flip it and say that being able to control next/previous slide and that kind of stuff is a property of the captured page. So I'm assuming that if I navigate from one page with this ability to a second one, also with the ability, that would work really well now with Capture Handle, right? So I don't see a difference between tab navigation and switching from one presentation to another. When it works, it sounds like it's great, you know.
A
There's a bit of a question here, because let's say that I capture Docs and now I can scroll the doc up and down from inside Meet, and suddenly I think: hey, that's a general ability — as a user, I don't really know why it works, right? And then I "share this tab instead" to Wikipedia, and it stops working.
A
It is mostly a concern for self-capture, and you're right there.
F
I would think the Wikipedia argument is not very convincing, because the capturing page is responsible for showing UI explaining what is happening — like, oh, there are no more controls because of that — and you can explain it.
F
So the only case where I would see it happening is if you change the source, the web application discovers it, and it decides to stop the track and say: no, please go back to this — these kinds of things. And they can do that regardless of the button.
A
Yeah, I acknowledge again that this is mostly interesting for self-capture, but I think it is enough if we've got even only two applications that engage in self-capture, where one of them wants it and the other one doesn't. If we cannot have a heuristic that keeps both of them happy, then this hint is going to make both of them happy.
F
A
I'm going to be very convinced by this argument once getViewportMedia is implemented in all browsers and adopted by web developers. Because at the moment — and this has been reiterated for the last year and a half while we've been talking about getViewportMedia — we don't even know of applications that can actually adopt it. getViewportMedia has two requirements, and one of them is not even finalized. Mozilla is actually...
A
Some people in Mozilla, other than Jan-Ivar, have not actually consented to that mechanism being introduced, let alone to the getViewportMedia gating. So there are a lot of ifs here, and I would be surprised if getViewportMedia comes in less than a year. You could contradict me here.
D
Well, I mean, I would also love to see Chrome implementing this, and Chrome has no such blocker on Mozilla. And also, I don't believe that's true — I think we are amenable to Document Policy, at least in this case — so don't say that we're blocking on that.
A
Okay. And we've got a lot of web developers who say that they would not be able to adopt this, so we will have to keep getDisplayMedia for a while. Even if we moved with all possible speed on getViewportMedia — which of course we should — it would probably still take quite a long time yet.
D
So my main concern is that we keep adding features to getDisplayMedia for something we know is unsafe, and in some respects I'm worried that we're getting farther and farther away from implementing getViewportMedia.
A
I understand. I've got the inverse concern: for a year and a half now we've been discussing getViewportMedia, and it was used as the rationale for not doing certain things — and yet, a year and a half later, it's still not here, and we don't know if it will ever be implemented or ever be adopted. We cannot keep on discussing this.
A
D
I don't think there are any big outstanding issues that would prevent any vendor from implementing it, so hopefully we could see progress there, and I agree that would be good. As far as this goes, it sounds like if this were a hint that we could deprecate eventually, that might be okay.
D
I would like to see a path here toward — assuming getViewportMedia happens — browser vendors trying to make this move, and maybe this could be deprecated at that time.
A
Is anybody taking minutes?
D
A
B
There are only 12 minutes left, so I'm not sure we can do much more on capture in this meeting.
A
That would work for me. I guess that if we've got more time — maybe, if Jan-Ivar could suggest what kind of shape he would like, and if everybody else is happy with it, we could just record that we've made a decision.
B
Okay, so you want to talk about some of those, Byron?
B
Why don't we just try to introduce it, then try to... you know, I think they're all related, as you said, yeah.
B
And I don't know if you'd like the slides, but where do you want to start? Is there one that you think might be...
G
So I think the previous slide that you were just on is fine, and I'm not really going to follow the slides one by one because, again, I think we are short on time. But ultimately, what all this cluster of issues boils down to is: the simulcast IETF spec basically means that we have to allow a remote description to remove a rid, or remove a simulcast encoding, at will.
G
There's nothing in that specification that allows us to say: you know what, we're not going to allow a remote endpoint to reject a simulcast encoding in an answer to our offer, or forbid a remote endpoint from issuing a new offer — a re-offer — that removes a previously negotiated simulcast encoding.
G
So I think that some of the language in webrtc-pc was written with the expectation that these things not be allowed, but there are also parts that seem to allow it — although some of that language is aimed at an initial answer, not a re-answer.
G
Basically, what it boils down to is: we've got to decide how we're going to handle this discrepancy, and I laid out a few different things that we need to keep in mind. One is, of course, that we just have to allow the removal of simulcast encodings initiated by a remote description — there's just not really any way around that, I don't think.
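As an illustration (not spec text) of what honoring such a removal could mean for an implementation, here is a sketch that prunes the local sendEncodings down to the rids a remote re-answer kept. The helper and its inputs are assumptions made for this example:

```javascript
// Sketch: keep only the simulcast encodings whose rids survived in
// the remote description. Parsing the remote a=simulcast / a=rid
// lines into `remoteRids` is elided here.
function pruneEncodings(sendEncodings, remoteRids) {
  const keep = new Set(remoteRids);
  return sendEncodings.filter((enc) => keep.has(enc.rid));
}

const local = [{ rid: "h" }, { rid: "m" }, { rid: "l" }];
// Suppose the remote re-answer only kept "h" and "l":
const pruned = pruneEncodings(local, ["h", "l"]);
console.log(pruned.map((e) => e.rid).join(",")); // h,l
```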
G
As far as adding new simulcast encodings via a remote description in a renegotiation: I think there's not really any reason to allow that, because the spec doesn't force us to allow it. You know, if a re-offer has a new encoding, we're under no obligation to accept it, so I think we probably should reject it. And of course, you can't add a new encoding in an answer that wasn't in the offer.
G
That's just a syntax error. So unless there's some concrete use case for allowing additions to the simulcast envelope after the initial negotiation, I don't think we should allow it.
G
Yeah, I mean, I'd say unless somebody comes up with something pretty compelling, I think it makes sense to just disallow rids being added in renegotiation. The other wrinkle is a little bit harder.
G
The meaning that the order has in a simulcast attribute is related to transmission priority, which we aren't really paying attention to right now in webrtc-pc. We've got an extension spec that allows the transmission priority to be deliberately specified from the JavaScript API, and the language in the simulcast spec around how you interpret the order of rids in the simulcast attribute is kind of hand-wavy — it's should-level.
G
There's some should-level language on how to interpret that, but because it's should-level, we could probably get away with saying: no, we're not going to do it that way, we're going to set it explicitly — which is fine. But if we do get a re-offer with rids in a different order than they showed up, or were negotiated, previously...
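To make the ordering question concrete, here is a toy reader for the send direction of a simulcast attribute. Real SDP handling is more involved (alternative rids separated by commas, paused streams prefixed with `~`, and so on), so treat this as a sketch only:

```javascript
// Toy parser: extract the send-direction rid list, in attribute
// order, from a line like "a=simulcast:send h;m;l". It strips a
// leading "~" (paused) marker and ignores comma alternatives.
function sendRids(simulcastLine) {
  const m = simulcastLine.match(/^a=simulcast:send (\S+)/);
  return m ? m[1].split(";").map((rid) => rid.replace(/^~/, "")) : [];
}

console.log(sendRids("a=simulcast:send h;m;l").join(",")); // h,m,l
// A re-offer that reorders (and pauses) rids:
console.log(sendRids("a=simulcast:send ~l;m;h").join(",")); // l,m,h
```

The question in the discussion is whether a reordering like the second line should ever be reflected back into the order of the sender's encodings, or simply be ignored.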
G
...that doesn't end up impacting the order in the sendEncodings slot, which is something to keep in mind, I guess. And I think I mentioned — trying to figure out which issue it was — that I laid all of those out; it's one of these, yeah, it's 2723. I laid out some of this in a comment.
G
So that might be acceptable. I doubt that you're going to see any simulcast receiver implementations just fiddling with the order of the rids, so maybe it wouldn't cause too much trouble. But it is syntactically valid SDP, and just barfing on it is probably a base-spec violation — although you could just say: you know what, we're not even going to try dealing with this.
G
That's really the essence of all of these bugs: issues around modifying the simulcast envelope and rid renegotiation.
G
And then I guess there's this other one — this is 2722 — where a re-offer can stop the sending of encodings entirely, which is... yeah.
G
Yeah, and I think Jan-Ivar has already sort of proposed some language somewhere, so that's probably a different issue, although it's related. So that's pretty much everything I have to say on those.
B
So what is the next step here? Maybe, in the issues, we can refine the recommendation.
G
And that's me done. Okay.
B
Well, thank you, and I would encourage people to take a look at this. It is a complicated set of issues, but I think we've become aware that the simulcast aspects of the spec do have some problems, as we discussed.
B
All right, thanks. Okay, so I think we're done for today. As we've said, the next meeting will be on June 7th, and we'll be taking agenda items.