From YouTube: WEBRTC WG interim 2021-09-20
Description
See also the minutes at https://www.w3.org/2021/09/20-webrtc-minutes.html
01:30 Next meetings
04:10 Status of recent CfCs
05:35 WHATWG Streams
06:29 Agenda review
07:00 Conditional Focus
30:13 getViewportMedia
46:27 Display surface constraint
1:12:41 Echo Cancellation
1:37:46 Wrapping up
1:43:27 October meeting
A: We had some overflow from today's meeting, so we're going to schedule an October virtual interim. We've put out a Doodle poll for the week of either October 4th or 11th. We're going to try to close that poll pretty quickly, because it's possible the meeting would be the next week, so please go and put your entries in right away; we're going to close that poll September 28th. We've also made the overflow slides available.

A: So we have almost a full agenda for the October meeting already; those slides are available at the link, and I've also posted it to the working group mailing list.

A: Okay, so about this meeting: the slides have been linked, and the slides are on the wiki. We do need to get a scribe, and we have a volunteer for that.
A: All right, a little bit about document status. Many people know this, but hosting within the W3C doesn't imply that a document has been adopted by a working group; that requires a call for adoption.

A: Editors' drafts don't represent working group consensus. Working group drafts do, once they're confirmed by CfC. It's also possible to merge PRs that may lack consensus if a note is attached indicating controversy. It's also possible to run CfCs on issues, PRs, or sections of a document; we'll be talking about some of the open ones in a moment. Also, about the W3C Code of Conduct: we operate under that code of conduct, and the link is here. Of course, we're all passionate about improving things, but let's try to keep it cordial and professional.
A
If
we
can
a
little
bit
about
this
virtual
meeting,
it
is
being
recorded
to
get
in
the
google
me
chat
to
get
in
the
queue
please
type
plus
q
and
minus
q,
and
then
we'll
we'll
call
on
you
and
and
maintain
a
speaker
cue.
A
Please
also
use
headphones
or
an
echo
cancelling
speakerphone
and
wait
for
microphone
access
to
be
granted
before
speaking,
and
you
know
state
your
full
name,
etc.
A
I
don't
think
we
will
be
doing
any
polls
today's
meeting,
but
we
could
do
it
if
we
need
a
sense
of
the
room,
don't
think
we
will
okay,
so
we've
got
two
recent
cfcs,
one
on
republishing,
a
media
capture
and
streams
at
cr
we
hadn't
had
the
last
cr
was
well
over
a
year
ago.
I
think
almost
18
months.
So
it's
time
certainly
to
republish
at
cr
that
completed
on
september
17th.
A
I
think
we
have
five
positive
responses
so
far
and
I
think
yanivo
you'll
write
up
the
summary
right,
I'm
not
sure,
there's
much
to
summarize
other
than
going
ahead,
but.
A
Yeah,
so
the
set
the
one
that's
ongoing
now
is
a
cfc
on
transferable
media
stream
tracks.
That's
a
section.
That's
been
been
added
to
the
media
capture,
extensions,
two
responses.
So
far
that
will
complete
on
september
27
2021.
A
Potentially
so
please
read
that
section
you
know
play
around
with
it
and
and
comment
on
the
cfc
all
right,
so
as
people
know
a
lot
of
what
we've
been
talking
about
in
some
of
the
specs
like
media
capture,
transform,
as
well
as
in
other
groups
such
as
web
transport,
relies
on
the
what
wd
streams
specification,
and
there
are
actually
quite
a
few
issues
in
that
spec.
A
I
just
noticed
some
of
them
here
and
they
do
relate
to
some
of
the
pipelines.
We've
been
talking
about
setting
up,
and
I
think
in
the
october
meeting
in
particular,
we
will
get
into
a
lot
of
these
and
their
impact,
and
so
I
just
wanted
to
point
out
that
these
discussions
are
ongoing
in
the
what
wg
streams
repo
itself.
A
D
A
Of
this
does
relate
to
potentially
to
the
transformer
media
stream
tracks
as
well.
Okay,
so
here's
what
we're
gonna
do
for
the
agenda.
For
today
we
basically
have,
I
guess,
a
lot
with
three
topics:
conditional
focus,
get
viewport,
media
and
display
service
constraints,
and
then
we
have
harold
on
echo
cancellation
and
then
20
minutes
for
wrap
up
the
next
steps,
we'll
try
to
keep
to
the
time.
D: Thank you very much. So I've got a little riddle, but you can see the answer pretty quickly, and I guess that you can guess. The question is: we've got two scenarios here in a video-conferencing application. In one scenario we're capturing a thesis draft, and in the other we're capturing a presentation. The question is, what's in common for both of these? The answer is a lot of things, but one of the things that should not be in common, and currently is in common, is that when you start capturing from the video conference — if you start capturing the thesis draft, the browser immediately switches focus to that document. When you capture the presentation it does the same, but sometimes that makes a bit less sense, because maybe you can actually control the presentation directly from your video-conferencing application.

D: You can say next slide, previous slide, and there's not a lot more that you might want to do. There is some, but a lot of your interaction is going to be just that. Focusing that application and then forcing the user to find their way back, while they're talking to an esteemed audience such as yourselves, puts the speaker a bit under mental pressure; not exactly the best experience for them.
D: These slides are just for those people who are going to read them later, but we can see that here. As I said, focus would immediately switch when capturing, and that makes sense, I think, because if you are capturing a text editor, well, it's relatively likely that you're going to edit some text, and maybe you don't need to see the faces of the other people (although maybe it would be nice), and you really need to start getting access to the text. Next slide.

D: So the question is: okay, sometimes we capture an application that we want to immediately focus, and sometimes not, and how do we know? The browser doesn't know; the browser is agnostic. All it sees is some JavaScript. It has no idea what kind of application is running in the capturing tab.
D: It has no idea what's in the captured application, in the captured tab or window. Maybe it could employ a couple of heuristics, but generally it's not likely that it would be able to do a very good job. But the capturing application — well, it knows everything there is to know about itself, and sometimes it might even know something about the captured application. For one thing, the capturing application could know that, hey, my users usually do this or usually do that.

D: Or, if we're using Capture Handle — which is an as-yet-unspecified proposal I've made, but hopefully I can convince the working group that it's worthwhile —

D: then it would know exactly what it captured. Sometimes it would have no idea, but sometimes it would know: hey, I captured Google Slides, or I captured some other application, maybe Wikipedia, and in those cases the decision of whether to switch focus is very informed. Next slide, please.
D: So we will notice that, because the user could choose more than one thing, it is not really possible — it is less useful — to create an API that allows you to decide ahead of time whether you want to focus the captured application or not.

D: If you can make the decision immediately after capture starts, then you encompass everything that you would have gotten if you had made the decision ahead of time, because you could still hardcode the decision of "always focus" or "never focus", but now you also get the opportunity to examine what you got and make a decision based on that. So I am proposing an API that basically says: hey, as soon as the capture starts, you get your promise resolved — the promise that getDisplayMedia returned.
D
It
is
resolved,
it
is
resolved
on
a
microtask
and
so
long
as
that
microtask
is
not
terminated.
You
have
an
opportunity
to
call
focus.
You
say
you
want
to
focus
or
not
after
that,
it's
too
late.
You
no
longer
do
that
now
in
the
next
slide,
which
I
please
do
not
switch
in
the
next
slide,
are
notes
and
covering
some
edge
cases,
so,
for
example,
okay.
But
what
if
the
micro
test
on
which
you
resolved,
runs
for
five
minutes?
D
Well,
you
don't
want
to
allow
that
so
or
what,
if
the
application
does
not
call
the
focus
function
at
all,
you
probably
want
to
behave
the
way
you
did
before,
but
assuming
that
those
edge
cases
are
not
encountered,
you
just
call
focus,
you
say,
hey
focus
or
don't,
and
then
you
can
make
the
decision,
based
on
all
of
the
things
that
we've
discussed
before
you
could
hardcode
the
decision
based
on
the
fact
that
you're
capturing
application
of
a
certain
type,
you
could
say,
hey
if
it's
a
window,
focus
that
if
it's
a
it's
that
I
don't
vice
versa
or
you
could
even
use
capture
handle
and
try
to
see
exactly
what
you
capture
captured.
D: The only thing that you cannot do here is wait for the first frame and try to read it to make a decision based on that. That is one thing that maybe would have been even nicer, but unfortunately the API I'm proposing does not allow you to do that just yet. Next slide, please.

D: So here we have a discussion. I will not walk you through this wall of text, which of course you're welcome to read later if the proposal is of interest to you, but basically you get a microtask. Before the microtask even fires, you don't have a handle to the track, so you cannot do anything. While you're on the microtask, that's your time to shine: you call the API and things happen. After that, you just get an exception raised.
D: If you don't call it, nothing happens; it's the same behavior that happened before, or rather, basically we just say the user agent makes its own decision. Chrome intends to keep on doing what it did before. Jan-Ivar, do you specifically want to ask now, or should I just finish this particular slide? — Oh no, go ahead.

D: I was just getting my +q in. — Cool. So yes, one thing that we do, for example, is to prevent an attack of late focus, where maybe you put a button on the page for the user, you hold up the microtask, and just as the user is getting ready to click, you switch focus.
D: All sorts of attacks like that — the way you prevent them is you say: hey, you've got exactly one second to do that. One second — the CPU could sometimes be starved and you might end up not making it, in which case, shame: you're going to get no exception, you'll just not get the change in behavior. But one second is a lot of time for a computer, and most of the time it's going to be enough, and for the user it's not terribly disruptive. And, of course, the usual stuff.

D: You cannot focus more than once, and, to prevent competing calls if you clone the track, we just ignore anything that's on a clone; it's just an error. Yes, I will listen to the queue now.

D: I've got one more slide — okay, I can do that one; I think it's not terribly long.
D
Okay,
so
next
slide,
please
so
the
last
slide
is
in
the
particular
api
that
I'm
proposing
I'm
proposing
exposing
this
new
method
on
a
subclass
of
midi
stream
track,
so
that
only
if
you
capture
a
tab
or
window
would
this
even
be
observable.
This
particular
method-
and
I
just
want
to
mention
that
this
this
does
not
really
limit
us
in
the
future.
So
here's
an
inheritance
tree
that
we
could
eventually
go
to,
which
is
you
know
right
now.
D
We
only
have
the
blue
ones,
I'm
offering
suggesting
that
we
had
the
red
one,
but
at
the
end
we
will
be
able
to
break
everything
apart,
really
fine,
so
that
you've
got
one
class
for
tab
for
window
for
screen.
For
anything,
that's
not
even
a
display
capture
and
we
just
group
together,
tab
and
window
under
one
under
one
common
ancestor
that
we
call
focusable
media
stream
track,
and
we
only
expose
focus
on
that.
C: Yes, hi, can you hear me? — Yeah, I think you're good. — Yeah, so thank you for the slides. I think this is a reasonable problem to solve, because browsers today have implemented specific behavior that's narrowly focused on the presenting use case. I do have some concerns with the API surface, but I do think the problem's worth solving. Specific to the API: since focus switching would be global to the user,

C: I don't feel there's a need to put this API on the track at all, which would remove the need to subclass track. We could have a navigator.mediaDevices focus method that you could invoke. It's not like you're going to call getDisplayMedia twice and have two tracks, so that precision to target the track doesn't seem necessary. I'm hoping that is solvable. The other one is that I think microtask is too narrow.

C: I appreciate the security measures there, and I like the one-second busy timeout, but putting it at the microtask level would prevent shimming. So I would recommend queuing a task instead, and I think that would give you the same protection. Also, I had a question: if you can't look at a frame, how would the app know whether to focus or not, based on the target?
D: Yes, so I'll take those in reverse order. The app will know because it can still call getSettings(), and getSettings() allows you to say: okay, was it a window? Was it a tab? Was it the screen? And also, with Capture Handle, you might sometimes be able to say more. So that's how. A frame would also — although that is, I wouldn't say, the holy grail, it's definitely very useful.
D
It's
not
so
easy
to
to
look
at
the
frame
and
tell
exactly
what
it
is
right.
It's
very
easy
to
make
a
mistake.
Applications
could
be
just
moving
through
something
if
I
capture,
for
example,
youtube
well.
Okay,
most
of
the
tab,
most
of
the
frame,
I'm
capturing
is
a
video
you
know,
could
be
a
little
bit
difficult
to
parse.
So
this
is
more
like
plans
for
the
future,
but
for
the
immediate
future
it
is
a
lot
easier
to
just
look
at
the
metadata.
D
C
The
last
one
was
putting
the
api
on
the
global
instead
of
the
track.
D
So
on
the
global
I've
not
considered,
so
maybe
I'm
I'm
willing
to
listen,
probably
not
live
like
that.
I
need
some
time
to
process.
The
immediate
thing
I
say
about
this
is
that
it's
not
so
easy
to
distinguish
clones
from
non-clones
and
you
could,
but
I
don't
know
if
these
are
reasonable
concerns.
We
might
end
up
agreeing
to
do
that.
Just
that.
So
let
me
think
for
a
microtest.
D
In
fact,
our
implementation
in
chrome
behind
the
flag
at
the
moment,
uses
microtask
that
we
schedule
immediately
after
get
display
media
if
you're
open
to
specifying
it
like
that,
then
we
can
talk.
One
problem
I
can
see
here
is
that
you
could
keep
on.
So
when
I
say
microtest
I
do
not
actually
specif,
I
don't
remember.
If
no,
I
don't
know.
Yes,
I
do
specify
it
in
the
specs
so
nevermind.
D
C: Well, we generally try to make APIs shimmable in the web platform — I mean, adapter.js is a good example. Any kind of shim that shims an async method might need to call the original method, and that means, implicitly, that you have to queue another microtask in order to do that. And the security properties you want here also seem met by queuing a task, I think, so I'm not sure why a microtask was chosen.

D: Yes or no. If we don't have a clear boundary of "now it's done, and even if you have not called focus, this is going to happen", it means that all existing applications that do not call focus will experience a one-second delay for the focus change. But if you queue — if you say yes—
C: No, no, I'm not saying we should wait a second. My understanding of the one-second delay was that it's only for malicious busy-waiting applications, correct? Because you're only awaiting a microtask, which is really fast. So I think queuing a task — awaiting a task instead of a microtask — is also a negligible difference in time, so I think they're both good.

C: I think we can keep discussing that. My main concern is where the API lives; right now, I don't think we want it on the track. Okay.
D
One
thing
to
think
about
just
to
say
it,
so
everybody
can
think
about
this
is
that
I
think
that
if
you
queue
a
task,
I
think
it's
a
bit
more
likely
that
it
would
exceed
one
second,
even
though
yeah,
but
if
it
does
exceed
the
man
the
one
second,
then
it
would
switch
focus
anyway.
We
can
talk
about
this.
It
is
an
interesting
proposal.
Thank
you.
A
Okay,
you
went.
E: Yeah, so on the same topic, I would say: cloning of tracks is known, and when you subtype tracks it starts to be a bit messy, because if you clone a track, will it be the base type or will it be the derived type, and so on? So if we can avoid subtyping track, as it seems possible to do, we should try to do that; that would be simpler.

E: I was a bit surprised, so that's the first comment. The second comment: I was a bit misled by the one-second issue.

E: I was thinking that somehow you were saying microtask, but then we would wait for somehow one second. I think that, given we want the default behavior — which is the usual case — to do focus by default, we really want that to happen very fast, and now we have some JavaScript that is executed in between, and that's why I would hope we can keep that very short, so that it's done very quickly.
E
So
the
micro
task,
like
you're,
doing
a
weight,
gain,
display,
media
and
then
synchronously
you,
you
do
the
focus.
That
seems
fine
by
me,
which
seems
to
be
what
you're
going
with
lad.
So
I
I
think
it's
fine
to
me
and
the
additional
condition
that
yeah,
if
you're
busy
looking,
then
maybe
there
will
be
additional
mitigations
by
the
browser.
That
seems
good
as
well.
D
Yes,
I
just
want
to
clarify
that
I'm
not
suggesting
either
one
second
or
at
the
end
of
the
current
microtask,
but
rather
both
so
an
application
that
does
not
hold
up
the
the
main
thread
for
too
long
is
just
going
to
experience
that
the
first
micro
task,
the
micro
test
on
which
get
displayed
the
micro
task
on
which
the
get
display
media
promise
was
resolved.
Sorry,
it's
a
bit
of
a
mouthful,
so
this
one
as
soon
as
it
terminates
focus
happens,
even
if
it
is
not
called
focus
right.
D
Just
the
decision
is
made
okay,
but
as
a
backup
in
case
it
was
faulty
or
there
was
a
cpu
hack
or
it's
even
malicious.
There
is
the
one
second
backup,
but
you
normally
don't
hit
that.
E
So
I
I
like
the
the
first
one.
I
would
need
to
think
more
about
this
one.
Second,
where
you
can,
you
can
call
focus
and
it
would
still
succeed.
I'm
not
sure
I
like
it,
but
at
least
the
basic
proposal
without
one
second,
I'm.
I
think
I'm
fine
with
it.
G: Now, to the cloning issue: I think we have one case of a MediaStreamTrack subclass where a clone produces the base class. I think that was a mistake and we should change it. And when we add new features like this to a MediaStreamTrack, we basically have two alternatives: either we subclass, or we add the function on the base class and just specify that it fails whenever the track is not the right type. I mean, subclassing seems somehow tidier, but most of the time — like 99.9 percent of the time…
D: If I could say something else to support this view: I think that, as we go forward, we're going to add a few more APIs, and some of them are going to be specific to specific types. So it would make sense if the inheritance tree actually matched those types and allowed us to expose things only where they belong, and not have to mess up the implementation with too many exceptions returned from methods that are just irrelevant.
E: I believe that to do that, we would need an important set of APIs where we're sure we actually want to subclass and it's inferior not to subclass, and then we could try to change clones, so that a clone would be subtyped, and so on. But if we have only one motivation, and it's just one method, I would propose to not go there. So it seems there are multiple APIs?
D
That's
that's
true,
and
also
just
to
I'll
give
you
three
right
away.
So
capture
handle
conditional
focus
and
region
capture.
All
of
them
do
not
apply
to
capturing
an
entire
screen
or
to
capturing
a
media
stream
track.
That's
resulting
from
a
get
user
media
call.
C: Yeah, I just want to clarify that I'd be opposed to this API if it relies on subclassing track. So I would really want to hear an explanation first for why we cannot move it to a global, and the other APIs you mentioned are not on the agenda today, so I think we can skip the subclassing question.
G: So, Jan-Ivar, will you be describing the effects of moving this API to a global somewhere?

C: Well, you could do a navigator.mediaDevices dot-something, and if you want more feedback, I'm happy to work with everyone.

C: I think we're interested in solving this problem with a different API shape — a slightly different API shape.
E: I'd like to explore a different shape as well and see what's better, so that's something we should discuss, and I also would like to discuss this one second. But other than that, yeah, working there seems fine in general, I guess.

C: Sorry, clarifying question: the one second is an additional requirement, right? So it would only kick in if JavaScript is busy-waiting for one second, correct?
D
Well
almost
correct,
it
is
an
additional
requirement.
That's
100,
correct
that
it
would
only
kick
in
then,
is
not
correct,
because
you
could
also
get
cpu
delays,
so
it
could
be.
The
javascript
application
is
100
legitimate,
tries
to
handle
it
as
quickly
as
possible.
Cpu
does
not
get
scheduled
and
it
happens.
I
expect
this
to
be
a
relatively
uncommon
edge
case
right,
but
it's
a
restriction,
not
an
allowance.
E
D
Jennifer.
Thank
you
very
much
for
the
question.
A: Okay, anything else on this topic?
D: Yes, so here I'm presenting together with Youenn and Jan-Ivar, if you would like. Basically, we would like to give a recap. We've been speaking a lot about getViewportMedia, and we've got a couple of items of consensus — consensus between us, of course, not with the working group — that we would like to present, and we would like to hear the working group's opinion. So far we've been talking about getViewportMedia.

D: Maybe I should give a quick recap of what getViewportMedia is. It's an API allowing you to capture the current tab's viewport — basically all of the content that you see in the tab in which getViewportMedia is called. If there is occluding content, it gets captured; if there is occluded content, it does not get captured. It's basically the entire tab: what you would get if you called getDisplayMedia and selected the current tab.
D
So
this
is
a
bit
dangerous
because
you
are
capturing
exactly
what
the
application
can
influence
the
most
and
because
of
that,
we've
agreed
on
a
couple
of
items,
one
of
them
that
the
entire
site
has
to
be
crossover.
Sorry,
the
tab
has
to
be
cross-origin
isolated,
that
all
of
the
documents
on
the
tab
in
the
tab
have
to
have
an
or
to
express
opt-in
via
a
header,
currently
we're
talking
about
using
document
policy.
But
the
concept
is
just
that
they
need
to
obtain.
D
We
can
still
talk
about
the
mechanism
and
an
embedded,
iframe
or
an
embedded
document
can
only
call
this
if
it
is
been
if
it
has
explicitly
been
given
the
permission
to
call
it
so
top
level
document
can
usually
call
this
and
specifically
called
out
privileged
iframes.
D
So
the
questions
to
talk
today
are
hey.
Does
anybody
have
any
interesting
opinions
here.
D
Second,
we
have
had
some
discussions
about
whether
you
actually
capture
the
entire
tab
or
just
the
viewport
of
the
iframe,
from
which
the
the
api
gets
invoked.
We
agreed
on
having
the
entire
tab,
we've
been
me
and
dean
evar,
but
everybody
is
welcome
to
have
an
opinion
and
the
last
one
is
that
we're
tending
towards
using
document
policy
as
the
opt-in
mechanism,
and
we
would
like
to
hear
thoughts
here
as
well.
Geneva
have,
I
said
everything
we
wanted
to
cover.
C: I think that's good, and I think the key part is that what's listed under "proposed" are the questions to the working group, right? What we're proposing together here is that getViewportMedia, in order to progress, will capture the entire viewport, not part of the iframe, and we're wondering if the working group has any objections to that. And the second part is that we're hoping to bikeshed on the Document Policy names; the current contenders are `Require-Document-Policy: viewport-capture` and `Document-Policy: viewport-capture`.
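Spelled out as response headers, the opt-in under discussion might look like the following. This is a sketch of the transcript's summary, not final spec text: the `viewport-capture` policy name is still being bikeshedded, and the pairing with the cross-origin isolation headers reflects the "entire tab must be cross-origin isolated and opted in" requirement stated above.

```http
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
Document-Policy: viewport-capture
```

with `Require-Document-Policy: viewport-capture` available to an embedder that wants to require the opt-in of its subframes.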
C: Well, in earlier proposals, if you were calling getViewportMedia from an iframe, the default was to only capture that iframe. But that's a little more complicated than that, because you're capturing the viewport of that iframe, which could theoretically include other content that occludes the iframe.

C: So it's really a cropping tool, and cropping is important, but we felt that, in order to make progress on getViewportMedia, we wanted to separate the cropping issue from getViewportMedia, and this lets us do that. So, by default, getViewportMedia will always give you the entire viewport, and this lets us discuss cropping in a separate issue.

C: All right, I'm not hearing any objections.

C: Great, and the second is a bikeshedding one. We figured, in order to get some progress here — any objections to calling it `viewport-capture`?
G: So is the tradition for Document Policy for it to be a noun-verb?

C: I believe so. Sorry, Document Policy I can't speak to, but permissions policy used to be called Feature Policy, and there were names of features; that's where that came from.

G: I mean, you could have had `capture-viewport` instead, but just do what's traditional.
H: Tim Panton is in the queue. — Tim? — Yeah, just to say that those two are kind of connected: if we hadn't agreed to having the iframe capture the whole viewport, then the second one wouldn't make sense. So if we go back on the first decision, we need to unwind this one as well.

D: I'm sorry — if I understand correctly, what you're saying is that, if we look at recap (c), we're saying that we're already delegating the permission to capture the entire tab to the iframe, and therefore it is not a problem that we are indeed capturing the entire tab?
H: No, no, no, it's just semantics: if the iframe isn't going to capture the whole viewport, then you shouldn't call the policy `viewport-capture`, because it's only capturing the iframe. So these two things are correlated, or linked, or dependent, or something.

C: I could speak to that, as someone who was originally arguing for the narrower default: it's really a cropping tool and not really a security feature, because of how the document object model lets you move frames and have other content on top of frames. So it wasn't really a security measure.
C: Well, I think the name of the method is getViewportMedia, and I think it's a subtlety that we're always capturing a viewport, which is a rendering of things, rather than something tied to the document. And there's also, from our end: I hope that the document policy here is general enough that it's not necessarily tied to one API, because we might want to change the API in the future, and we might want to reuse this opt-in mechanism in other future APIs — for example, maybe getDisplayMedia at some point as well.

C: All right, I'm not seeing anyone else on the queue, so I think — can we record rough consensus on this?
D: Okay. A bit of foreshadowing for the next few months: I also intend to suggest a cropping API that would work independently of getViewportMedia and might complement it in the way that Jan-Ivar seeks. So, basically, if an application does want to capture only the iframe's viewport, it will have a way, if my proposal is later accepted.

D: I'm done with this slide. Anybody else?
C: There's a TBD there on user activation; did you mean to talk about that or not?

D: I did not, but I think we're a bit ahead of schedule, so if you would like to talk about things—

C: Well, yeah — I'll go first. I think getViewportMedia should require user activation, like getDisplayMedia and getUserMedia.

C: I'm proposing that getViewportMedia require user activation.
D: So I'm not really objecting; I'm just saying that we should probably think about this a bit. I see certain cases in which user activation makes sense, and somewhere it seems to me like it could just be automatic thinking, right? It could be that it does not actually confer any added benefit.

D: So, for example, opening a new tab — opening a link, right: if we require user activation, that prevents the application from spamming the user with 200 new tabs. Here, I don't really understand what we would be protecting against. It seems like we would at least be making one use case a bit more frictive, and that one use case is: you open a new tab — a single new tab, with user activation — and now the new tab immediately loads something and says, hey—
E: I think it's a good topic to discuss with user activation. Maybe user activation should be updated so that when you click on the link and open the tab, there's an audio player and it will autoplay, because there's user activation as well. But in general I think that, given it's a high-privilege API, we should go with user activation. We know it's safe; we know it's helping some use cases — like, we don't want to spam

E: people with, for instance, prompts. So we should just do it, and if somebody complains later and identifies a use case that we should fix, then probably it will be a user-activation fix, maybe not a getViewportMedia fix.
E: I would tend to do the opposite, at least in Safari. With getUserMedia we were not able to remove this user activation, but when you're capturing and you want to call getUserMedia again, there will be no additional icon telling you, hey, this page is capturing — because it's already capturing and the user already said, okay, I'm allowing capturing. If we remove user activation, in some cases the page, while you're not interacting with it, you will suddenly see—

C: Yeah, I would tend to agree. And also, if you're selecting a tab, we might also need to account for the user selecting the wrong tab. Even if they see, like, maybe a thumbnail or something like that, it's hard to tell what the document is. So once you open the full tab and they go, oh no, this is the wrong document — if it's already shared to the group, that seems less safe.
B: Yeah, and I think the general argument that Youenn was alluding to — that there may be reasons to escape the existing rules of user activation — I don't know that there is anything so specific to our particular capture here that it wouldn't apply to other cases where user activation is required. I think, even architecturally speaking, it's best if we start looking at it from the generic cases rather than the specific one.
D: So from my side, I would prefer to continue this discussion a bit later. I don't think it's going to block us, and I want to have some more time to think about this before I debate it further.
E: That sounds good. Just to mention that user activation is really hard to get added later, so we need to have it before anything ships — or not; we need to solve that issue and make a decision before anything ships. That's really important, because otherwise it will become a nightmare to actually change behavior that is shipping.
D: Web-compatible — the challenges are different in adding and removing restrictions; some of them are about getting consensus, and some of them are about existing applications. In both cases I prefer to try to get it right, and that means I need to think just a bit more before committing. But also, you know, I'm just one person; other people might have other opinions.
A
Okay,
you
exhausted
this
topic.
A
Why
don't
we
move
to
display
surface
constraints.
D
Yes. So thank you very much for listening to me for such a long time; I'll try to keep it brief here. Basically, right now, when you call getDisplayMedia, a browser that is perfectly spec-compliant does not allow you any way of influencing what the user chooses; the intention was to not limit the user's choice.
D
But then later the spec says that, by only applying certain constraints after the user makes the choice, it prevents influencing that choice.
D
So one thing I want to cite is that currently we're already influencing the user's choice, because we are presenting one of the options first. Chrome shows screens first, which is not the safest thing, but it's kind of difficult to change. Apple, sorry, Safari, only shows one choice, so that limits the user's choice, and Mozilla, I believe, is working on also introducing tabs, but right now it's only screens or windows. The order in which they're offered could also be said to be influencing the user.
D
So the question is also whether influencing the user is such a bad thing. Yes, influence could be used maliciously, but it could also be wielded productively. For example, you could try to push the user towards the safer choice, or towards the more relevant choice. So, for example, you could say "hey, I just need a tab, don't show me your entire screen, no need". Next slide, please.
D
Developers have expressed interest in allowing us to either limit or influence the user's choice, and they cited all sorts of reasons. For example, it saves clicks: if the application knows that it wants just a tab, it doesn't have to get the user to navigate to that choice, or if it just needs a window, it can show that option first.
D
Second, a lot of applications want to capture something with audio, and audio is not supported on all types of capture, so you want to push the user towards the types that support it. Also, some browsers can capture certain surfaces better; tabs, for example, can be captured at a higher fps, so you want to push the user towards those.
D
You also want to surface relevant options, as we've already discussed. To give an example, let's say Meet knows that you're going to capture a slide deck. If it were to offer you slides first, that's going to make it less likely for you to make a mistake, and that would be nice.
D
Also, some applications want to push users away from risky things like showing the entire screen, and some applications intend to break off the capture immediately if they see that you have captured a screen or window or something else that they do not approve of. Then it's just a waste of your time as the user if they can't use that capture to begin with, so the more they can prevent you from making that mistake, the better. Next slide, please.
D
So the proposal we're dealing with right now is that we could introduce a hint: displaySurface as a constraint. You specify the constraint as ideal, so the browser sees ideal "browser" (tab), ideal "window", or ideal "monitor" (screen).
D
Sorry, I forgot: the user agent shows you the same prompt it otherwise would, but it may shuffle the options that are offered to the user, so that those things which the application asked for appear more prominently. And if the user agent believes those to be more risky, first of all, it could ignore the hint.
D
Second, it could employ additional warnings. For example, if the application tries to get you to share the screen, maybe it could explain how dangerous it is to capture the current screen, and so on. Next slide, please.
D
So here is some text of how you could potentially use that constraint. Very simple: you just add displaySurface, ideal, and then the type of your choice. At least for Chrome, what would happen here is that you still get all of the options, but where normally "entire screen" would have been the first pane highlighted, in this case it would be the Chrome tab pane.
C
Oh yeah. So when we discussed this on GitHub, I thought we made some progress on this, but we also talked about some give and take, some security mitigations: perhaps not list the requesting tab in the tab list, not list the requesting window (the browser window) in the window list, and those things.
C
So I think I would like to see some of those mitigations put into the spec in order to allow this, and if we can do that, I do agree that there is some benefit here.
C
I should also say that for constraints in getDisplayMedia, the min constraint and the exact constraints are already disallowed, so it would have to be ideal constraints. And I think there's some value in the argument made here, at least for the way the Chrome picker is built, with these sub-panes.
C
I think it makes some sense in practice to let some of the sub-tabs other than full screen be the active one, because full screen isn't really a safe choice either.
C
The picker is not necessarily bound by these same rules, but what we're talking about here is exposing a constraint that would let the application serve a hint. As you said, I think that might work, but I would like to see some of the security mitigations that we talked about, maybe even normative language that we do not return the requesting tab.
D
So I would like to apologize that your suggestions, and what we discussed about them, did not make it into the slides; the slides are a screenshot of the spec PR that I produced before we discussed this. My understanding of what we discussed, though, is a bit different from what you have now mentioned. I remember that we discussed a recommendation that the user agent should employ additional warnings before letting the user choose the current tab.
D
I don't think we discussed completely removing this, and I think you even said that currently there are enough use cases of self-capture out in the wild that we can't just shave them off. So that is where we differ, and I hope you're open to using a recommendation that says "warn the user". This is important.
D
Warning the user can even go as far as completely hiding the option for self-capture and adding a lot more friction to get there: you could have another pane, you might have to click "advanced", you might get a couple more warnings. It's completely up to the user agent. But I don't believe in removing that option.
D
And in addition, I'll just mention that if you were to remove that option, you would potentially be pushing users towards sharing their entire screen when they just want to share a tab. First you would be removing the current tab; then you would have to remove the current window; then any user that wants to show one of these would have to share the entire screen. And if you want to remove that too, then I think we're completely out of scope.
C
Right. The problem, though... yes, I think I would like to have some strong recommendations on user agents, and it sounds like you would be amenable to that; I think that would make a lot of sense. And those would be specific warnings, not general warnings of any sort, but warnings specific to choosing your own tab.
C
Yeah, so yes, that sounds good. I would just have liked that we could go a little further with normative language. Specifications don't have that much authority over what user agents do in user interface and user experience, which is appropriate, I think. But there was an opportunity here to perhaps have normative language, since we could actually detect the case where getDisplayMedia returns its own (the self) tab, which is the concern we have, so we could normatively prevent it.
E
Yeah, I have a few questions. The first would be: why should we allow "screen" at all? Why should we allow a webpage to say "hey, I want to capture the whole screen", since it's the least secure approach and we actually want users to move away from that?
E
It's fine to add a constraint that says "hey, let's try to focus on capturing less", but not screen. So that's the first comment. My second comment, on the API: maybe I have a bias, I generally dislike constraints, so why should we go with a constraint like displaySurface, with values like "browser", if we can just add an additional parameter, which would be a preferred display surface, as a second parameter?
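The alternative shape being suggested, a dedicated option instead of a constraint, might look roughly like this. The option name preferredDisplaySurface is purely illustrative here (nothing by that name was specified or shipped), and the two-argument call shape is a hypothetical contrast with the constraint-based PR:

```javascript
// Illustrative sketch only: "preferredDisplaySurface" is a hypothetical
// option name used to contrast a dedicated parameter with the
// constraint-based proposal. Not a specified or shipped API.
function buildGetDisplayMediaCall(preferred) {
  const constraints = { video: true, audio: true };
  const options = preferred ? { preferredDisplaySurface: preferred } : {};
  // Hypothetical call shape:
  //   navigator.mediaDevices.getDisplayMedia(constraints, options);
  return { constraints, options };
}

const call = buildGetDisplayMediaCall("browser");
console.log(call.options.preferredDisplaySurface);
```

One argument for this shape, made in the discussion, is that a plain string parameter leaves room for future values that are not constrainable track properties, such as "a subset of tabs".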
E
And then you pass a string, which would be either "browser", "window", or even something else we might add in the future, once we find we actually want other kinds of things. A subset of tabs, for instance: say "hey,
E
I want to capture any Google tab", whatever; I don't really know. But it seems to me that moving away from constraints in general would be good. And the third thing is: I think we discussed adding security mitigations explicitly for the cases where we think getDisplayMedia is a big issue, which is the self-tab or same-origin-tabs issue.
E
I'm looking at the slide there, and there's a picker. Since you're probably prototyping this, I was wondering what Chrome's plans are in terms of mitigation, in terms of UI, to try to help the user understand that self-capture is a potentially dangerous thing.
D
So no, we've not prototyped this; it is straightforward enough that a prototype does not look necessary.
D
Just choosing another pane is relatively easy in code. In terms of employing additional warnings, that is an open question and we would have to approach the right people; I'm not a UX designer, and I could not begin to imagine what would be appropriate here. On what you said about using an extra parameter instead of constraints: I, at least, do not have any problem with that; that's okay with me, though maybe others would have a different opinion. And on what you said about removing the screen option:
D
That is interesting, because screen is the most risky. But I think we would be entering slightly dangerous territory, because you would be pushing browsers towards making or keeping the screen as the default option, so that there would still be a way for an application to signal that they want it by just not saying anything. Whereas you could more easily change the default selection to something else if the application still had a way to invoke the legacy behavior.
E
It seems like Chrome might have a migration issue, but in terms of API, I think that if we were starting from scratch we would not allow defaulting to screen; that's the most risky option. And in terms of the Safari implementation:
E
Currently we only allow screen. At some point we will have a picker, and I'm pretty sure that the default will not be screen, and I don't think we have a use case where apps should still be allowed to have screen as a default.
C
Sorry for cutting in here, but I just want to say that Firefox does not default to screen. And I'm sorry, Bernard, for jumping the queue, but it's relevant to this: I do agree with you, and we should disallow the full-screen constraint.
D
Or at least ignore it; having some constraints is better than none. But I think the on-ramp for Chrome and Edge here would not be meaningless, it would still be useful, and it would be of no cost to Safari or Firefox, because you can just ignore it. The spec text I'm suggesting, the PR I've posted, allows the user agent to ignore the hint, and you would be okay to do that.
D
No, because once you introduce this, and if a lot of applications start using it, and we see UMA user metrics showing that enough applications have already migrated to asking for the least risky thing, and maybe that users generally don't pick the risky thing, then it would be a lot easier to motivate a decision to make that no longer the default.
D
It's easier to make a small change in the perceived behavior of the application, I'm sorry, of the browser. But we can discuss that later. What I mean to say is that if, right now, the default is to show the entire screen first, then it's a big UX change to make it no longer so.
A
Yeah, I wanted to comment on that. I understand you, Youenn and Jan-Ivar, were concerned about displaying screen, but I wanted to go back to the developer requests, because I think there's a contradiction here.
A
Maybe we can talk about how to get rid of it. In terms of the list of things that developers have been asking for, one of them is to discover the surface type that supports audio. Am I correct that the only surface type that currently does that is full screen?
D
No, tab. On Windows it is tab or full screen; on all other operating systems it is just tab.
A
Okay. How about when you say "save clicks on the journey towards the user's historic preferences": would that actually mean somehow being able to capture, when they start looking at the different display surfaces, what they clicked?
D
No. Currently the application knows exactly what type of display surface the user chose, right?
D
Maybe I misunderstood, so I hope that my answer will be relevant. Currently, without any changes: somebody calls getDisplayMedia, the user chooses something, and the application knows exactly what they chose.
D
Yes, and that is all that is meant here. As far as I know I am quoting Pexip, if I'm not mistaken, but at least if I were to repeat the argument for my own sake, it would be: you just see "oh, this user has chosen tab nine out of ten times; let's offer them tab first". Okay.
A
So you're not asking for any more than that: just what they chose, nothing more than that. Okay.
A
Okay, and as far as the option that supports high-fps capture, are you saying that's tab, typically?
D
In Chrome, if I'm not mistaken, yes, but it doesn't have to be; that's just one example. Any kind of technical preference towards some kind of surface, for example away from windows because they don't have audio, is called out specifically, but it's also another instance of this type of preference.
A
So my question was: say you didn't have screen. The overall concern is that you could use this to try to push somebody towards screen capture, which is the most risky option, right? I think that's the concern people have been expressing. I'm just wondering: say we took that out of the picture, so you weren't allowed to push somebody towards screen. I'm just trying to understand whether any of the developer requests would then not be possible.
D
They're all dangerous... so am I hearing that, except for the part where we also support pushing the user towards the screen, we are done with everything else?
D
Yes. We have never suggested pushing the user towards a specific window. In fact, there is no mechanism by which you could specify that, if you look at the PR, and there is also no way for you to even know what windows are available.
D
So I think Tim is next.
H
Just to say that I dislike the idea of having a heuristics-based "most commonly selected" thing. It's a complete nightmare to test, and it's unpredictable what happens. If we can avoid doing that... everything else here is useful, but heuristics-based pickers that move things around are a nightmare.
H
Well, no, on this list, in your description, there was "historic preferences", yes.
D
"That's a heuristic, surely?" No: that is an application explaining why they want to have this hint, but they can use the hint for whatever they like. They are thinking that they might use it to push the user towards the historic preference, but the user agent will not do that itself.
A
I think we've run out of time on this item and I'm going to move to the next one. But before we do that, do we have a summary of where we've gotten in this discussion?
A
I think we're going to move on to echo cancellation. Harald, yes?
G
Okay, echo cancellation. It's been in the spec forever, and it's kind of always been some kind of weird magic to me, but I tried to drop some background in.
G
So this is really august technology, but I'm presenting it anyway. You know how echo cancellation works in principle: you get an audio signal in and you feed it into the room.
G
It's not as simple as just subtracting, because the room transfer function is complex: delays and distortions and non-linear effects and echoes and reverb and all that stuff.
G
In some cases it's not simple at all, because headphones especially are trickier beasts than you want them to be. Some of them have rather strong acoustic or electrical coupling between the headphone and the microphone, and some outputs aren't using the native speaker; they're using some other speaker for room noise.
G
So I think we should just do this, but I see that there's a queue. Let's discuss.
A
Okay, wow. So who is first? I'm lost. Is it Jan-Ivar?
H
Well, I'm jumping the queue. I'm wholly in favor of doing something in this area; I think it's vitally essential. What I don't understand is whether this will cover the output from Web Audio, which is a really common use case: you get a bunch of tracks in from WebRTC and other places, mix them in Web Audio, and then throw them out into the room, but you still want to listen to the microphone in the room and put that somewhere.
C
Yes. I think the Mozilla view here is that we don't need this API to do correct echo cancellation. We're not sure why the user agent would need JavaScript's help on this one, and it seems like this is something that implementations could fix.
C
The implementation, the user agent, should already know if headsets are being used, basically, for example.
G
I mean, if you have a headset plugged in and there's also noise coming out of the speakers, which one do you cancel, and why?
C
So the view from Mozilla: I spoke to some Mozilla audio engineers, and their view was that we did not need this API.
C
I believe we have access to the rendered audio, so it would cover things like Web Audio. Our audio engineer, Paul Adenot, has been helping websites, also working with Chrome in this regard, on Mozilla Hubs for example; I can provide some links. More than that I cannot say: I don't have Paul's full knowledge of echo cancellation, so I can't comment deeper on that.
E
I think the user agent has more knowledge than the web application: it knows exactly where it's rendering audio, on which paths, which microphone is connected to which speaker, and so on, so it can do a good job at echo cancellation. For instance, if you know that you're using a mic that is physically close to a speaker, which is the case in some setups, then you can actually do some nice tricks.
E
I do not know how web applications would be able to do that, or to provide knowledge that the user agent and the OS already have. So I don't understand how Safari would be able to actually make use of such a hint, and I do not know how web applications would be able to use it either.
A
Okay, I think I'm in the queue, and I have some questions about this. Basically, Harald, you're saying what Chrome currently does is use the sum of all audio outputs from the peer connection. I'm just trying to understand why: is the desire to improve the Chromium implementation, or to give applications the ability to do their own implementation? I'm just trying to understand the goal here.
G
But my response, my suggested response, was: why should we try to reproduce that bug? Let's just fix it. And so what I'm hearing is that the two other browser vendors' representatives are saying that they think the browser can figure out what to cancel automagically. And I think Tim was next. Thank you.
A
Well, I'm not quite finished yet, because I've heard requests from applications to be able to have, basically, adjustable noise cancellation. Think of it as a transform stream, or an echo-cancelling transform stream. So I can understand why somebody might want to build that in an app, and they might not have all the access that the browser has.
E
I guess you could still do the one-input, one-output transform, and then, as another parameter...
G
Now, that would be an embedded transform, because the actual echo cancellation requires two signals.
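The "two signals" point can be illustrated with a deliberately naive sketch. A real echo canceller must adaptively estimate delay, the room transfer function, and non-linearities, as noted earlier in this discussion; even so, the crudest conceivable form already needs both the microphone capture and a render (reference) signal, which is what makes a one-input, one-output transform insufficient:

```javascript
// Deliberately naive illustration of why echo cancellation is a
// two-signal operation: it consumes the microphone capture AND the
// reference (render) signal. Production AEC adaptively models delay and
// the room transfer function; this just subtracts an already-aligned
// reference, which real cancellers cannot assume.
function naiveEchoCancel(micSamples, referenceSamples) {
  if (micSamples.length !== referenceSamples.length) {
    throw new Error("signals must be aligned and of equal length");
  }
  return micSamples.map((s, i) => s - referenceSamples[i]);
}

const mic = [5, 3, 1, 4];       // near-end speech plus echoed far-end signal
const reference = [2, 1, 0, 3]; // far-end signal being played out
console.log(JSON.stringify(naiveEchoCancel(mic, reference))); // "[3,2,1,1]"
```

This is why the transform shape sketched next in the discussion takes a reference stream as an extra parameter rather than operating on the microphone alone.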
E
Right. What I'm saying is that you have the input, which is the microphone; you have the output, which would be the echo-cancelled stream; and then, when you create the transform, you pass a parameter, which would be a stream: the reference stream that is being played and that you will use for cancellation.
G
Yeah, so that's an interesting thing to do, but it's not this proposal. Okay.
H
Yeah, there's another way of looking at this, which is to say, to address Youenn's point particularly: are there situations where you are playing something out of the speaker that you don't want to cancel? And I claim that there are, very occasionally, situations where there's something you're playing out of the speaker which you would like to be captured by the microphone.
H
It's rare, but it does happen. Some kind of background music or something like that, right.
C
Is it coming from the browser? It could be. Well, in that case, if it's coming from the controlling application, then it has the audio and it can insert it itself.
H
I think it's a rare use case, but we stumbled over it with yoga studios, where they want to make people feel like they're in the room. So they want the music to sound like... I mean, you can't do it, but they would like the music to sound like you're in the studio.
I
This would not help in fixing Web Audio echoing and interfering with WebRTC. I think the use case that Tim is proposing is quite interesting. I have also heard from people that want to do echo cancellation, but not only on the audio that is being played or captured by the current browser, for example in the case where you want to have multiple participants in the same room.
I
You might have one main participant doing the mixing, or a participant who wants to perform, and they want to cancel the audio between the different mixes. So what I have heard is that they would not even want to cancel only their own audio; they want to be able to take the different streams considered remote and the different streams considered local, and to be able to apply echo cancellation on all of these, on all of this mix.
I
So I think this matches what Youenn said, and also Bernard, about the transform: something that we can modularize, to be able to implement echo cancellation not only on what is being played or captured locally, but on different streams received from different parties. So I think there are three different things to consider. First, Chrome still has issues with echo, and this will not solve them.
I
There are some use cases that are interesting here, but I don't think they are very clear yet; for example, Tim explained the issues about turning off echo cancellation. So maybe it's worth working a bit more on the specific use cases that need the API. And I also think it would be interesting to work on a different approach, maybe not here but in encoded transforms, to be able to expose only the echo cancellation part of WebRTC to the application.
G
What I'm hearing now is that we have significant opposition to making an API out of this at all, because the browser should be able to do the right thing automatically.
G
I found it interesting to see that you would like to cancel only the output from the browser, because if you have two applications running on the machine and both are producing audio, should the audio from the other application be cancelled or not?
C
I think we've seen different implementations. There was an RNNoise version at one point that would remove all evidence that you were at an airport, for example. You could make the subjective comment that you might actually want that background noise in some cases, if you're in a stadium or something, so we already have some ability here. I think echoCancellation: true is mostly focused on cases like this, though. More use cases are definitely interesting, but it's not clear if they belong in this working group.
E
I would also mention some efforts done by OSes, where you can have different echo cancellers or different audio rendering styles that the user can select himself, like full canceller or natural filtering or things like that; that might also be in scope there.
F
I think the motivation for Chrome (I'm not part of the audio team) is that since we have the Audio Output Devices API, Chrome can have output on different output devices, and the echo canceller can have only one output reference signal to do the cancelling. This is so that you can choose which reference signal you want to use for it.
F
The canceller has one output reference signal, and that's what the motivation is. Other things, like the Web Audio issues, are separate bugs that the audio team in Chrome is working on fixing, and they are not related to this.
G
I haven't seen much comment on the shape of the API. So if we were to conclude that there was a need for selecting the reference signal in such a simple, minor way, then it sounds like this API could possibly be okay. But at the moment we don't have any trace of consensus that such an API is needed. Is that where we are?
A
Well, are you suggesting that we need a CfC on the mailing list for this?
B
So, to summarize what Harald is saying: he would like to see comments on the issue on whether this API is needed or not. He hasn't seen much comment on the shape of the API. If we were to conclude that there was such a need, this API may be okay, but there is no consensus on the need for such an API.
A
Okay, so we've reached the wrap-up stage of the meeting. I just wanted to make sure that we understand all the action items going forward, things we need to do to follow up on after the meeting.
B
Yeah, I guess at the end of the day that's really the main question I have: what timelines of implementation are we looking at? If you all are going to run and implement this tomorrow, then I have no concerns; if this is going to be a projected implementation roadmap for the next three years, then I have more concerns.
C
Well, I think the benefit is that it would reuse a lot of the same permissions. We want the same privacy mitigations, like, what I want to say is, in-browser privacy indicators, stuff like that.
A
My concern would be that Screen Capture, at least so far, hasn't even gotten to CR, and I don't think it has that great prospects of getting there, I mean, other than what a lot might do. So in some ways it might hold it back. This seems like it might have the potential to go to CR before Screen Capture.
C
Well, that's hard to say, right? Adding a more interesting API might actually garner more interest in finishing the spec. Actually, we haven't done an analysis of why Screen Capture isn't at CR yet; we've been busy with other things, and that's another thing.
D
So one thing that I had to consider is that writing and reading specs is not super easy, and it would be easy to confuse getViewportMedia and getDisplayMedia and their respective requirements if we put them in the same document. If we were to use two different documents, then that's far less likely.
E
I would also say that, in terms of testing, we'll probably have a separate folder for WPT, like...
D
Harald, if you've got other arguments, I would be very happy to hear them. You know, I don't have a very strong opinion here.
G
I said I see the arguments for making small specs; that's why I've done so many of them.
C
All right, yeah. I'm kind of leaning more toward a single spec, but I guess we're not going to resolve that in this meeting.
A
Okay, are there other items to follow up on as a result of what we've been discussing today, any actions in the working group that we need to take?
A
G
A
So,
basically,
we
had
a
whole
bunch
of
discussion
on
media
capture
transfer
and
that
we
just
weren't
able
to
fit
into
this
meeting.
So
I'm
proposing
that
we
basically
devote
the
next
of
the
october
interim
entirely
to
to
that,
and
so
here's
kind
of
a
sketch
of
what
that
meeting
agenda
would
look
like.
Basically,
as
we
said,
we
still
have
this
open
cfc
on
transferable
media
stream
tracks
only
had
two
responses
so
allocated
some
time
in
case,
there's
anything
to
discuss
from
that
cfc.
A
Then,
as
we
noted
in
this
meeting,
I
there
have
been
a
number
of
issues
raised
in
what
wg
streams,
which
are
probably
relevant
to
this
discussion.
A
I
don't
know
if
I
could
prevail
on
you
yanibar
to
present
some
of
those
or
at
least
discuss,
what's
what's
open
and
what's
what
the
situation
is,
and
so
that
I
think,
is
an
important
part
of
of
some
of
this
is
understanding
what's
going
on
with
what
wg
stream
some
of
the
limitations
and
where
we're
going
with
that,
then
u.n
had
a
discussion
of
media
capture,
transform
issues.
A
A
So
this
would
be,
I
think,
a
meeting
where
we
have
two
hours
to
kind
of
get
into
all
this.
It's
it's
quite
detailed
in
particular,
some
of
it
like
the
what
wg
streams
issues
I
wasn't
personally
paying
a
lot
of
attention
to,
but
I
think
it
probably
deserves
it.
E
Just
a
few
comments
that
I
believe
the
web
working
with
streams,
issues
and
media
capture
and
transform
issues.
It's
roughly
the
same
topic.
A
And
yeah
they're
similar,
but
it's
it's
like
it
seems
like
some
time
to
just
discuss
the
nature
of
the
what
w
dream
stuff
before
we
get
into
it,
because
I'm
not
sure
people
are
paying
attention
to
it
and
it's.
We
were
also
thinking
that.
Maybe
we
could
invite
some
people
who,
who
are
the
you
know
the
authors
of
streams
and
editors
to
kind
of
give
us
a
sense
of
whether
some
of
those
issues
are
going
to
get
resolved
or
how
difficult
they
are
or
yeah.
E
That's what I was trying to do in the slides. Since there's two weeks, I can try to improve them a little bit. I think Jan-Ivar pointed out that I should, for instance, explicitly mark the WHATWG Streams issues; there might be some in the documents.
A
Yeah, I think with a full two hours we can do a better job of getting into all of it. So this is roughly the proposed agenda for October. I'm sorry that it wouldn't leave much time for anything else, but I feel fairly confident we can use almost the entire time just for discussion of all this stuff, because it's also very important, not just to this working group but to other working groups that are using streams, to understand the issues with pipelines and so forth.
A
So I guess I apologize in advance to other folks who might want time on the agenda in October, but I think people will agree that we can probably fill the entire time with this agenda.
C
Yeah, so to clarify that slide: Youenn was set to present, and did cover the WHATWG Streams issues, at least mentioning their existence. They were mostly filed to address the way we want to use WHATWG Streams for real-time streams, and that's where all the issues are coming from.
A
Yeah, I think these are the issues I pulled out of the issues list that I think might be relevant. I don't know if there are others, Jan-Ivar, that would be on this list as well, but I think these cover most of the ones that are referred to there. I guess the thing is, I wanted to understand to what extent some of these things are going to get fixed, or not going to get fixed, and what the status is with respect to WHATWG Streams.
C
Yeah, some of them are more promising than others, I would say.
A
C
Well, it's a stream-based proposal; we already presented a non-stream-based proposal months ago.
C
We didn't have any takers on that, but we do feel that this is a good compromise that at least allows you to do things like cloning and applyConstraints instead of tee, for example, which is an issue. But I still feel we need solutions in the WHATWG Streams spec for things like tee.
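The tee() concern can be illustrated with plain WHATWG streams, runnable outside a browser. This is a minimal sketch with invented helpers (makeFrame, makeSource, demoTee) and a mock object standing in for VideoFrame: tee() hands the same chunk reference to both branches, so a frame closed by one branch is closed for the other branch as well.

```javascript
// Mock of a VideoFrame-like object: must be close()d to release it.
function makeFrame(id) {
  return {
    id,
    closed: false,
    close() { this.closed = true; },
  };
}

// A source producing the given mock frames, then closing.
function makeSource(frames) {
  let i = 0;
  return new ReadableStream({
    pull(controller) {
      if (i < frames.length) controller.enqueue(frames[i++]);
      else controller.close();
    },
  });
}

// tee() duplicates the stream, but not the chunks: both branches
// receive the very same frame objects.
async function demoTee() {
  const [a, b] = makeSource([makeFrame(0), makeFrame(1)]).tee();
  const ra = a.getReader();
  const rb = b.getReader();

  const fromA = (await ra.read()).value;
  const fromB = (await rb.read()).value;

  // Both branches observe the same object for chunk 0 ...
  const sameObject = fromA === fromB;

  // ... so if branch A "consumes" (closes) the frame, branch B's
  // copy is already closed too; a real VideoFrame would be unusable.
  fromA.close();
  const bSeesClosed = fromB.closed;

  return { sameObject, bSeesClosed };
}
```

Which is one way to read the preference expressed above for cloning and applyConstraints over tee when two independent consumers are needed.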
A
So in any case, I posted a link to these slides on the list, and also a link to the issues. And maybe we should have a list of the issues that are being... well, as for the proposal: will we have an actual document to discuss, Jan-Ivar?
A
Well, if it's possible in the two weeks or whatever we have, that would also, I think, advance the discussion.
A
We can work on that. Okay, so anyway, I think the goal is to post all this stuff to the list and try to get discussion going on the list and in the specific GitHub issues, and also to have a spec, so we can have a full two hours with people having familiarized themselves. And it also wouldn't be too bad if we had demos that addressed some of the concerns here. I know, Jan-Ivar, you've been working on a few fiddles to try to illustrate some of the problems.
G
So yeah, Jan-Ivar, will you clean this list up to get rid of the closed ones?
C
There's one closed issue here, which was an attempt to have the source manage the lifetime of video frames, and it was unsuccessful. The fact that it was unsuccessful shows that streams can't really do this: it's not easy, with streams, to have a source manage all the video frames and their lifetimes, which is a problem, and it's a problem when you combine streams and WebCodecs.
C
And the issue we have some hope with is the last one, which is to have a pipeline without buffering built into every transform stream. If we can address that, it might help with the applicability of some of the other things.
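The per-stage buffering point can be made concrete with plain WHATWG streams (measureBuffering and the stalled sink are invented for illustration): with the default queuing strategies, each stage in a pipe chain keeps its own small queue, so a stalled consumer does not immediately stop the source. For live media, every chunk sitting in those queues is added latency.

```javascript
// How far does a default streams pipeline run ahead of a stalled
// consumer? Each stage has its own internal queue, so a slow sink
// does not stop the source right away; chunks pile up stage by stage.
async function measureBuffering() {
  let produced = 0;
  let sinkReceived = 0;

  const source = new ReadableStream({
    pull(controller) {
      controller.enqueue(produced++);
    },
  });

  // Identity transform with the default strategies
  // (writable highWaterMark 1, readable highWaterMark 0).
  const identity = new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk);
    },
  });

  const stalledSink = new WritableStream({
    write() {
      sinkReceived++;
      return new Promise(() => {}); // never finishes the first write
    },
  });

  source.pipeThrough(identity).pipeTo(stalledSink).catch(() => {});

  // Give the pipeline time to settle into its steady state.
  await new Promise((resolve) => setTimeout(resolve, 50));
  return { produced, sinkReceived };
}
```

Queuing strategies (highWaterMark) can shrink some of these queues, but a queue per stage is built into the model, which is what the issue mentioned above is getting at.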
H
I feel like we've dived down into this repeatedly, and then we end up arguing about what the use cases are. Maybe if we started with just a very brief statement of the goals here, it might make the rest of the discussion more focused.
C
Yeah, well, the slides that you and I were set to present include goals. They're twofold: one is to identify the issues with using streams, and the other is to identify some of the issues with the MediaStreamTrackProcessor and MediaStreamTrackGenerator API and its exposure to the main thread. The API proposal, being stream-based, addresses the latter part more, and we also feel it's an API that's more idiomatic to JavaScript, a more natural API that solves some issues that that proposal had.
A
Yeah, if I could try to answer your question, Tim, maybe in a more general way: I think, implicitly, in multiple working groups, we've been adopting the streams model in the belief that we can express pipelines with it. That includes pipelines with a lot of sophisticated things, like, for example, taking an input track and doing special effects, like background blur or funny hats, and then piping this through to an encoder and to a serializer, and then to something like WebTransport or maybe an RTCDataChannel, sending that over the wire, and going through the same pipeline on the decoder side. So we're building all these APIs, and also developing specs for things like RTCDataChannel in workers, to make all this possible, and yet we have all these WHATWG Streams issues that raise questions about various aspects of the pipeline model.
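The pipeline shape being described can be sketched with plain WHATWG streams. Every stage here is a stand-in (mockCaptureSource instead of MediaStreamTrackProcessor output, toy effect, encode and serialize transforms instead of real effects, WebCodecs, and a wire format): the point is the pipeThrough chain the working groups have been assuming, not any specific API.

```javascript
// Shape of the send-side pipeline under discussion:
//   track -> effect ("funny hats") -> encoder -> serializer -> transport.
function mockCaptureSource(frameCount) {
  let i = 0;
  return new ReadableStream({
    pull(controller) {
      if (i < frameCount) controller.enqueue({ id: i++, pixels: "raw" });
      else controller.close();
    },
  });
}

// Stand-in for a background-blur / funny-hats stage.
const effectStage = () =>
  new TransformStream({
    transform(frame, controller) {
      controller.enqueue({ ...frame, pixels: "blurred" });
    },
  });

// Stand-in for a WebCodecs-style encoder.
const encodeStage = () =>
  new TransformStream({
    transform(frame, controller) {
      controller.enqueue({ id: frame.id, bytes: `enc(${frame.pixels})` });
    },
  });

// Stand-in for packetization before WebTransport / RTCDataChannel.
const serializeStage = () =>
  new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(JSON.stringify(chunk));
    },
  });

async function runPipeline(frameCount) {
  const packets = [];
  await mockCaptureSource(frameCount)
    .pipeThrough(effectStage())
    .pipeThrough(encodeStage())
    .pipeThrough(serializeStage())
    .pipeTo(
      new WritableStream({
        write(packet) {
          packets.push(packet); // "send over the wire"
        },
      })
    );
  return packets;
}
```

The decoder side would be the mirror image: a transport source, a deserializer, a decoder, and then a sink such as a MediaStreamTrackGenerator.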
A
I think your suggestion is a good one, to try to provide an overview of what the problem is, but it's a pretty big problem that is being brought up here, because it also covers things like memory handling and garbage collection of all of this, due to things like cancel.
A
So do you think that I've accurately characterized the bigger problem? I don't think this is just about mediacapture-transform; it's about understanding streams and all the issues we need to address to make it really work as a pipeline model.
C
I'm not Youenn, but I can provide some clarification there: we're not putting into question WHATWG Streams piping as a concept for all use cases, only for real-time use cases. The difference is having a source... Yes, oh, I was trying to answer, but you can do it.
E
Yeah, sure. So streams were initially defined with some use cases in mind, and with some structures, like ArrayBuffers, in mind, and so on. With video frames it's a different object, and with real-time constraints it's different constraints. If we really want to have a good media pipeline based on streams, we need to identify the issues that are specific to that and how we can fix them.
E
And in addition to that, as you mentioned, beyond that there's the long-term goal of having not just a video pipeline but a full end-to-end pipeline, and whether that will work out or not. We're trying to look at these issues, but we have a particular focus on VideoFrame and mediacapture-transform.
A
So anyway, I think between now and this October meeting it might also be good for people to post demos to the list touching on some of the issues that are here. I know Harald mentioned that some people have done some fairly extensive things. In some of the demos we've seen, Jan-Ivar, I've only managed a couple of things strung together in a pipeline, but the more the merrier, I think, to understand the whole thing and some of the limitations here. It might also help if there's code out there that people can look at, to understand to what extent some of these things can and can't be handled.
F
A
The other thing I'll mention is, I think it's very important for us to get a handle on this prior to TPAC, which is why we're trying to schedule a virtual interim before that. Remember, we have our working group meeting there, which was to kind of present an overview, so understanding the overview, I think, would be important before presenting it.
A
But the other thing is: we have two hours with the Media Working Group, and some of these issues may make sense to bring up with them, because they haven't really been thinking about streams.
A
But if the whole point of what they're doing is to eventually supply encode and decode and other services for streaming, then I think, if we find something, we need to share it with them as well.
E
Yeah, and I would go with your idea of doing the homework even before the next interim: if you can, look at the slides, and if you have...
A
Yeah, so anyway, I think we've got a ton of work to do over the next couple of weeks. In particular, I'd also mention, I know Dom made a request: the TPAC folks are asking for demos, so some of the things we're talking about here might be very appropriate for TPAC demos as well. Anyway, I think we're out of time, and I want to thank everybody for participating in the meeting; hopefully we'll have another very good one.