From YouTube: WEBRTC WG interim July 22, 2021
B
Welcome to the WebRTC Working Group meeting in July. The group abides by the patent policy, and only people from companies listed on the status page are able to make substantive contributions.
B
So, a few things about this virtual meeting. The meeting info is posted, and here are some links to the latest drafts. The slides are also linked to on the wiki. We do need a volunteer for scribe. Do we have somebody volunteering to be the scribe? Yeah? I can take it. Okay, thank you, Tony. As we noted, the meeting is being recorded and the recording will be public. Next.
B
Okay, so a few things, just some basics about the documents that we just pointed to.
B
One thing to note is that just because they are hosted in the W3C repo does not imply they've all been adopted by the working group, and I think there's at least one, and maybe more, that are not adopted by the working group yet. Working group adoption requires a call for adoption on the mailing list. So that's something to keep in mind: the repo does not necessarily tell you the status.
B
The second thing is that editors' drafts do not represent working group consensus unless they are confirmed by a call for consensus on the main list, in which case they do. So those are some things that may not be obvious about the status of documents in the W3C, but they bear keeping in mind. Next.
B
So the W3C has a code of conduct. You can go back one. It is described in this link, the Code of Ethics and Professional Conduct. It is quite long; there are a lot of shoulds and should-nots, so you probably ought to read it. If I can boil it all down: we're all passionate about improving the web, but let's try to respect each other and keep the conversations cordial and professional if we can. There are a zillion other things, so it's worth reading the code of conduct. Next.
B
Okay, so a few tips on this meeting. We're going to be managing the queue using Google Meet chat, not the hand-raising tool. So if you want to get into the queue, to ask a question or something, you type +q, and to take yourself out you type -q, and I'll be running the queue on my pad of paper here. We're also asking people to use headphones or an echo-cancelling speakerphone.
B
To avoid echo, please wait for microphone access to be granted before you speak; don't just interrupt. And state your full name, because that will help the notetakers know who said what. Also, I don't know that we'll use the poll mechanism; we might, and if so, we'll use it to gauge a sense of the room if that would be useful. I don't think we have any polls to do, but we might. So that's basically how we're going to use the tools that we have. Next.
B
Okay, so here's what's on the agenda for today. Youenn is going to talk about MediaStreamTrack transfer; we're going to have about five minutes for the slides and then ten minutes for discussion. Then we'll talk about the alternative proposal for media capture transform, again with about 20 minutes for the slides and then 15 for the discussion. And then we'll have Jan-Ivar teed up at 11 a.m.
B
An hour for discussion and wrap-up. I will give a warning two minutes before your time is up, and once the time has elapsed we'll move on to the next item. We're trying to keep on schedule and also let people have enough time for questions.
D
Thank you. So at the May interim meeting we talked about the transferable option, and we recorded consensus, as per the minutes, on the transferable option. There was another option, the serializable option, but the working group feeling and consensus was to go with the transferable option. And just to recap, that is, to move...
B
Youenn, I have to interrupt you there. This misrepresents the W3C consensus process. Making a PR to add something to a non-consensus document does not indicate consensus.
B
This
is
a
document
which
is
not
reached
working
group
draft
status.
So,
basically,
can
this
consensus
has
to
be
confirmed
on
the
mailing
list
for
the
document
or
for
the
issue
we'll
talk
about
how
to
do
that
in
a
minute,
but
it
is
not
accurate
to
say
that
this
or
anything
else
in
media
capture
extensions
has
working
group
consensus
at
this
point.
D
I just want to say that in the minutes from that meeting, consensus was recorded from the working group to go with the transferable option. That's just what I'm trying to state.
D
So in the meeting there was consensus. Let's see. One thing that we discussed during the editors' meeting was the lifetime being strongly tied to the original document, and it was not very clear whether there was consensus or not on that particular point. So that's why we are back to the working group. The lifetime being strongly tied to the original document is a conservative option.
D
It's conservative in the sense that existing MediaStreamTrack sources are doing that, the security and privacy infrastructure assumes that as well, and, as I tested it, implementations are doing it as well. So we started drafting MediaStreamTrack transfer; part of it is merged in PR 21, but some PRs have not landed yet. So the proposal is to finish MediaStreamTrack transfer. Next slide.
D
So the proposal would be to do the editorial work in two steps. The first step, in media capture main, which is PR 805, would be to clarify that by default all sources are tied to the creation context; this is existing usage and behavior, according to my testing at least. That does not mean that we can't change it: we leave the door open to new types of sources with a different lifetime. If there's a new source, then the spec can state that its lifetime works this way.
D
If the lifetime is not stated, then it would be the default lifetime, which is defined in media capture main. And then, based on this step one, if we agree on it, we clarify MediaStreamTrack transfer behavior as derived from this step. So we have PR 30 in media capture extensions, which would state that if the original document goes away, then the transferred track will get ended. We still leave it open to new types of sources with a different lifetime.
D
But
currently
we
do
not
have
examples
of
such
sources,
so
it's
just
better
to
describe
what
we
have
and
what
we
plan
to
to
implement.
Okay,
two.
D
Yeah, so that's the end of my slides. I would welcome feedback from people in this meeting on this proposal.
B
I would note that just presenting slides at a meeting, or people not objecting, doesn't imply consensus. We can issue a CfC on individual issues, or we can issue a CfC to promote media capture extensions to a working group draft, or some variation on that. So that's one thing to talk about; there are also, of course, questions or opinions about what Youenn has just presented.
B
So let me look at the queue.

B
Okay, Harald.
F
Yeah, so I had a quick look at the pull requests. I do wonder if we can...

B
Which one, Harald, 805?

F
805, okay, yeah. I got back from my vacation yesterday, so I'm not quite caught up yet. I do wonder if it's appropriate to make a pull request, on something that we had hoped was finalized, to specify behavior.
F
That actually should be testable at this stage. But it does deal with the issue that was raised, which was that we haven't said what determines the lifetime of a track, or the endedness of a track, and I think it's reasonable to do as this PR proposes.
B
So, Harald, do you have an opinion on how to review things like this? Do we, you know, bring in people who know HTML if they need to look at something, if that's not us?
F
The
way
we
structure
the
extensions
to
media
capture
or
and
to
webrtc
for
that
matter,
it's
it's
a
bit
old
because
we're
we've
kind
of
said
that
we've
we're
lumping
a
lot
of
things
together
and
in
a
single
document
I
mean
midi
capture.
Extensions
draft
also
has
janiverse
proposal
for
a
for
a
browser-based
picker,
which
I
don't
think
there's
consensus
for
at
all.
F
Yes, I mean, we do need notation within the document on what has been consensus-called and what has just been added, because I do believe that we need a fair amount of flexibility for editors to add stuff and fix stuff.
A
Yeah, so I agree there are interesting process questions here, but I feel we're eating into the discussion time on this particular PR. I would also agree that, you know, for better or worse, the extensions documents... yeah, I share some of those concerns as well. But I don't think that we necessarily need to do calls for consensus based on documents; we can do those on individual issues, which I think was proposed as well.
A
I
also
like
the
pr-
I
think
it's
I
I
think
it
highlights
things
that
are
understood
and,
to
the
extent
they
weren't
understood,
they
preserve
existing
assumptions
that
were
there
prior
to
this
new
api.
So
I
think
this
is
a
good
way
to
iterate
without
making
any
decision
about
future
changes.
So
I
think
I'm
good
to
merge
this
pr
and
both
pr's.
A
What
was
the
second
one?
The
second
one
mentioned
on
the
slide.
Sorry,
I
lost
one
number
30
and
next
time.
Okay,.
B
Karine notes we use notes inside drafts for things that need particular feedback. Thank you.
A
Also one more thing: yes, we could go to the list with a call for consensus on every single issue that we discuss, but in order to make practical progress when we have meetings, maybe it's wrong to call it consensus; it might be better to say there were no objections, and that lets us proceed in good will without necessarily...
B
If there are no objections... let me put it this way: are there any objections to merging 805 and 30 within the group?
B
Okay, so how about this: we'll run a CfC on 805 and on 30 separately.
B
I think we still need to talk about how we're going to track all of these things in an extension document. I think 805 is simple, because the document, you know, media capture main, is already on the standards track and moving along. But 30 is a little trickier, because the document itself does not have consensus. So anyway, the action item, I would say, is to run a weak CfC on 805 and 30 and see what happens.
B
Okay, next item. So this is discussion of the media capture transform alternative proposal. We're going to have 20 minutes for Youenn to present it and then another 15 for discussion. Go ahead.
D
Okay, thanks. Before diving into the proposed API, which is really illustrative at this point, I just want to recap the goals, and I think the API has two goals. The first goal is to enable safe and efficient access to MediaStreamTrack video frames: face detection, object tracking, encoding, rendering to a canvas, things like that.
D
These
are
known,
and
we
don't
have
good
apis
for
that,
so
we
need
one.
The
second
goal
is
what
is
called
the
funny
hat
thing
or
the
background
synthetic
background.
The
weather
use
case,
where,
typically
you
take
a
camera
track
and
on
each
frame
of
the
camera
track,
you
will
do
some
transform
like
background
blurring
and
then
you
have
a
transform
track
and
what
you
really
want
is
to
use
transform
track
as
if
it
was
camera
track.
So
you
want
to
use
it
pipe
it
into
peer
connection.
D
Do
whatever
you
want,
and
that
includes
not
only
the
video
frames
but
also
the
other
things
that
a
media
streaming
track
is
handling
like
muted.
If
a
camera
track
is
muted,
then
maybe
the
ui
of
the
application
will
look
at
it
and
it
might
be
good
to
be
able
to
hang
on
transform
track
and
not
camera
track.
Similarly,
if
you
buy
the
transform
track
into
peer
connection,
it
might
be
good
to
move
your
muted
to
transform
tracks
so
that
peer
connection
will
send
one
frame
per
second,
which
is
the
typical
behavior.
D
There are some links there, from Google and Mozilla, where basically you want to store audio coming from a microphone, for instance, in a ring buffer. That is done in a worklet, via an audio worklet, but you do the audio processing in a worker, and these examples provide libraries and examples to do that. And it should be noted that the same worker can do video processing.
D
So the second question is: why not use WHATWG streams? In my mind, at least, a MediaStreamTrack is a readable stream of video frames, but there's more than just video frames: there's muted and unmuted, enabled, constraints, capabilities, settings. So there's a lot of things in addition to a queue of video frames; it's a queue of information. And the issue really is that a MediaStreamTrack is lacking a reader and a JS constructor, and that's why I think it's best to add this API directly, instead of trying to go through a readable stream of video frames, which is missing some information.
D
So if you go from an existing track to a readable stream of video frames, you're losing some information, so I think we should be able to keep all this consistent.
D
The WebCodecs team identified some edge cases and they were not confident with streams, so they went to callback APIs. Some WebRTC Working Group members had similar issues with regards to MediaStreamTrack performance and transferring, and also it might be good to stay consistent with closely related APIs like WebCodecs or Web Audio, for instance. Next slide, and this time we dive into the API. So, do we want an equivalent of the readable stream reader? We do not need to.
D
We
just
need
to
get
the
frames
and
that's
why,
if
you
look
at
existing
apis,
there's
html
video
element
request
video
frame
callback
which
is
used
just
for
that
purpose.
It's
applied
on
the
video
element,
but
the
goal
is
to
take
the
frames
of
the
media
stream
track
and
put
it
put
them
in
a
canvas
and
do
processing
on
them.
So
we
could
try
to
do
something
very
similar
and
that's
what
is
done
there.
D
So,
as
I
said,
the
callback
is
returning
a
promise.
So
if
you
take
the
example
track
process,
video
frame
equal
transform,
basically
transform
function
in
the
example
being
async,
it
returns
a
promise
and
the
promise,
as
long
as
it
is
not
settled,
the
user
agent
will
not
call
the
callback
again
so
that
enables
back
pressure
that
enables
you
to
say:
okay,
I'm
not
ready
yet
to
get
a
new
frame,
so
user
agent.
Please
please
stop!
D
I will tell you when I'm ready for another frame. And the second thing is that once the promise is settled, in the proposal, the video frame gets closed. So that means that as long as you're in the async function, the transform frame is live and everything is fine, but as soon as you exit, which is exiting asynchronously, then the video frame will get closed, and that can prove to be a good default.
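The promise-driven delivery loop described here can be sketched in plain JavaScript. This is a simulation, not the proposed browser API: `deliverFrames` plays the role of the user agent, `processVideoFrame` is the hypothetical callback slot, and plain numbers stand in for video frames.

```javascript
// Plain-JS simulation of promise-based back pressure: the "user agent"
// loop will not deliver the next frame until the promise returned by the
// callback settles. All names here are illustrative stand-ins.

async function deliverFrames(frames, processVideoFrame) {
  for (const frame of frames) {
    // Mirrors "the user agent will not call the callback again
    // as long as the promise is not settled".
    await processVideoFrame(frame);
  }
}

async function demo() {
  let inFlight = 0;
  let maxInFlight = 0;
  const processed = [];
  await deliverFrames([1, 2, 3], async (frame) => {
    inFlight += 1;
    maxInFlight = Math.max(maxInFlight, inFlight);
    await new Promise((resolve) => setTimeout(resolve, 5)); // simulated work
    processed.push(frame);
    inFlight -= 1;
  });
  return { processed, maxInFlight };
}
```

In the real proposal the source keeps producing frames while the callback is busy and drops them; this sketch only shows the serialization guarantee (at most one callback in flight).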
D
The consequence, of course, of returning a promise is that if a promise takes too much time to resolve, frames are dropped, and this is fine. It's like Web Audio: if the Web Audio processing callback takes too much time, you might lose audio chunks, which is terrible for audio; for video it's okay.
okay
and
the
additional
benefit
of
process.
Video
frame
as
a
callback
is
that
there's
consistency
with
media
stream
track
events.
If
you
have
a
muted
event,
then
you
know
that
the
callback
will
normally
be
called
until
the
nbt
event
is
fired,
the
same
for
ending
event.
So
we
have
like
something
like
a
very
consistent
api
surface
next
slide.
D
So I did a small write-up of trying to use the callback versus the stream-based approach to encode a track with WebCodecs. As you can see, on the left is the proposal; on the right is the stream-based version.
D
It's a little bit more verbose, because you have to create a processor, create a reader and so on. And the additional thing, which is highlighted in green, is that you have to call frame.close() explicitly. Of course, things do not always go well; maybe the awaited transform will reject, and then the frame will not be closed, and we do not like that, because if you keep a handle on this frame, unfortunately, then you cannot garbage-collect it, and even if you do garbage-collect it, it's bad behavior, as per the VideoFrame definition, to garbage-collect a video frame that is not closed. So yeah, that's the proposal for goal 1, which is reading video frames. Next slide; let's go to goal two.
D
Goal two, if you remember, is basically to be able to generate a MediaStreamTrack, and the idea is really to go from a getUserMedia track to a transformed track. The idea there is to reuse a model that we know is useful, used and working, which is the ReadableStream JS constructor model: a readable stream can be created from an object which implements some callbacks.
D
So that's what I'm doing here. If you look at the interface, there's a new method called createVideoTrack that takes a dictionary as input, a video source. This dictionary has two callbacks, start and stop. The start callback is called inside createVideoTrack, as soon as you create the track. The stop callback is called when the created MediaStreamTrack is stopped, by calling MediaStreamTrack.stop().
D
What is interesting is the start callback, which takes a video source controller as input, so that using this controller you can enqueue video frames or stop the track. It's very similar to ReadableStream, except it's doing something specific to MediaStreamTrack. On the left is a simple example where we're going from a getUserMedia track to a transformed track by using a JS function called transform.
D
You
can
see
that
most
of
the
thing
is
done
in
start,
but
there
are
two
things
that
are
worth
looking
at,
which
is:
if
the
original
track
is
ended,
then
we
will
stop.
We
will
end
the
transform
track
as
well
and,
conversely,
if
the
transform
track
is
stopped,
then
we
will
stop
processing,
video
frames
of
original
track
to
limit
cpu,
and
then
you
can
use
transform
track
instead
of
original
track
and
it's
it
should
work.
Fine
next
slide.
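The start/stop contract just described can be mocked in plain JavaScript to make the shape concrete. Everything here is an illustrative stand-in for the proposal: `createVideoTrack`, the controller object and the returned track are not a shipped browser API, and strings stand in for VideoFrames.

```javascript
// Pure-JS mock of the proposed createVideoTrack(init) shape, showing the
// start/stop callback contract. Illustrative only; not a real browser API.

function createVideoTrack({ start, stop }) {
  const controller = {
    frames: [],
    ended: false,
    enqueue(frame) { this.frames.push(frame); }, // would enqueue a VideoFrame
    close() { this.ended = true; },              // ends the generated track
  };
  start(controller); // start runs as soon as the track is created
  return {
    controller,
    get ended() { return controller.ended; },
    stop() {         // track.stop() triggers the source's stop callback
      stop();
      controller.ended = true;
    },
  };
}

// Usage: a source that enqueues two frames and records when it is stopped.
let sourceStopped = false;
const track = createVideoTrack({
  start(controller) {
    controller.enqueue('frame-1');
    controller.enqueue('frame-2');
  },
  stop() { sourceStopped = true; },
});
track.stop();
```

The dictionary-of-callbacks shape deliberately mirrors the underlying-source argument of the ReadableStream constructor, which is the model the presenter says is being reused.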
D
So the typical question there would be... oh yeah, that's another slide. I tried, based on these two APIs, to look at whether we can do a rough equivalent of MediaStreamTrackProcessor and MediaStreamTrackGenerator, and I think it's doable and it's not a lot of code, which means that with this API, if you want to go to the streams world, you have to write a little bit of code.
D
So the typical question would be: why not pass a writable stream directly to createVideoTrack? That would be somehow equivalent to MediaStreamTrackGenerator, right? The answer is that passing a writable stream to createVideoTrack is not enough, given that a MediaStreamTrack is more than a queue of video frames.
D
It has muted and unmuted, for instance; we need to be able to handle that. And if we look at level two, because this is level one, we can see an example of how this API can be extended very easily.
D
So
why
support
it
while
supporting
muted
and
muted,
if
the
original
track
gets
muted,
for
instance,
your
computer
gets
locked,
then
the
camera
will
be
muted
and
the
web
application
will,
for
instance,
send
the
information
to
whoever
you're
talking
to
that
the
camera
is
disabled
and
that's
fine.
It
can
be
done
on
the
get-go
media
track,
but
it
would
be
very
good
to
be
able
to
do
the
same
thing
on
transform
tracks
so
that
you
isolate
the
application
that
is
providing
the
track.
D
So yeah, it's good to start simple. As I said, we can add processVideoFrame, we can add createVideoTrack, stay there for a while and have good functionality, but it's very important to make sure we can easily extend the API to get the full API and the full functionality at the end of the day.
D
So
I
looked
at
the
support
for
track,
enabled
meaning
meaning
if
you
set
false,
on
transform
track,
enable
being
able
to
set
false
on
original
track
and
abort,
which
might
be
useful
in
some
browsers
so
that
the
camera
will
be
will
be
shut
down.
It's
very
easily.
It's
it's
doable,
it's
in
the
full
proposal
and
the
same
for
capabilities,
constraints
and
settings
it.
It's
it's
handy.
D
If you can change the width and height of the transformed track, for instance, it would be very handy if the transformed track would say: oh, I'm requested to change resolution, so let's ask the original track to actually do it, and let's answer based on that information. There's an API proposal for doing that, and that makes it very handy. Next slide. Now I will scare you.
D
This
is
an
example
where
what
I'm
doing
is
exactly
doing
that,
meaning
trying
to
have
a
transform
track.
That
is,
that
has
the
same
behavior
of
as
camera
track,
except
there's
some
transform
for
each
video
frame.
So
on
the
left
you
will
see.
I
don't
have
enough
time,
I'm
guessing,
so
you
will
look
at
the
example
yourself,
but
I
think
it's
not
a
lot
of
code
and
it's
working,
fine,
you
use
you,
do
a
processing
in
a
worker.
You
transfer
it's
not.
It
does
not.
Look!
It's
not
horrible.
It's
pretty
simple!
B
You actually have six minutes, Youenn, so you actually have a little bit more time; you can go to 10:45. Anyway, go ahead.
D
Great, thank you. So beyond goal 2, which is trying to transform using JavaScript and be as transparent as possible, we can also look at extending the API to handle native back-pressure signals, things like HTML media elements, peer connections, Web Audio.
D
I
think
we
need
to
do
further
investigation
there,
because
it's
it
might
be
very
easy
to
write
code.
That
is
only
working
for
a
browser
or
it's
only
working
for
a
specific
sync,
and
so
the
api
proposal
allows
to
add
it
easily.
But
I
think
we
we
need
to
take
our
time
there
and
implement.
Add
it.
If
we
really
need
the
feature
and
if
we're
ready
to
define
what
is
a
thing
and
how
it
behaves.
D
The
other
thing
which
I'm
excited
about
would
be
to
extend
the
api
to
handle
transform.
It
would
be
great
to
add
dedicated
transforms,
like
you
put
wasm
in
the
middle,
and
you
go
from
one
meter
stream
track
to
another,
and
everything
is
done
for
you,
and
that
would
be
great
because
it
can
help
offering
you
use
good
practice,
good
practices.
You
have
efficient
code,
for
instance,
you
can
do
optimizations
like
zero
memory
copy
in
some
cases
so
and
the
proposal
there,
it's
very
natural
to
actually
do
it.
D
Next
slide:
oh
yeah,
that's
that's
almost
the
end,
so
I
will
have.
I
will
have
more
time
for
discussions,
that's
cool,
so
my
tentative
conclusion,
maybe
it's
biased
by,
but
I
think
the
the
proposal.
D
The
first
advantage
is
that
it's
simple
you.
You
have
two
major
stream
track
methods,
one
you
interface
the
controller.
I
believe
it's
easy
to
learn
and
use
it
blends
well
with
existing
apis
that
people
are
already
using.
So
that's
an
advantage.
It's
also
safe
and
efficient.
There's
no
buffering
by
default.
D
So, since we are almost ready for MediaStreamTrack transfer, and given the example I gave where you do the transfer, it should be fine and not a lot of hassle to actually start with this option initially, and we can learn from it afterwards. I believe it's powerful.
D
If
we
just
start
with
reduce
api
set,
we
get
most
of
the
functionality,
but
we,
if
we
go
with
more
and
more,
we
can
probably
add
more
apis
and
we
will
have
full
metastream
track
shimming
support
and
I
believe
it's
doable
in
a
consistent
way
and
as
shown
in
the
slides,
I
hope
I
think
it's
flexible,
there's
a
clear
path
towards
native
transforms.
B
Okay, so we're going to go to questions, and I'll manage the queue; please add +q if you want to get in. I think I'm the first person in the queue. Youenn, thank you for that. I'd like to just ask a few questions relating to the back pressure. It's a couple of slides ago, in the very beginning, where you talk about the promise, so maybe, Harald, if you can go...
B
To that slide. The way that works, I think it's accurate to say that by default the queue length is one? Yeah, it's a previous slide. So I'm just wondering, Youenn: say I'm doing something fairly computationally intensive.
B
Well,
it
doesn't
really
matter
whether
it's
in
a
worker
or
not,
but
for
some
reason
the
the
length
of
time
I
take
is
such
that
another
callback
would
be
coming.
So
you
know.
B
Yeah, is it possible for me to ask for a queue of, say, two? Or will I automatically lose the frame?
D
So if you look at VideoFrame.clone(): the VideoFrame is a refcounted object, so clone currently increments a counter and close decrements a counter. It's not a processing-intensive operation, so yeah, you can clone it. You will have a count of plus one, and then it will be on your side to actually decrement the counter, and that's it.
B
I see. But I'm still trying to understand. So I've cloned it, and I've done this every time, because I don't know whether I'm going to exceed my time window. If I clone it, are you saying that the API will manage a queue for me? How do I define how long that will get? Like, if I'm cloning everything, will it stack up, conceivably?
D
It's up to you. You can use a readable stream if you want to; you can create an array. It's really up to you to do what you want. My guess is, basically, let's say you have a queue of 10:
What you will do is maybe apply back pressure at some point, to say, okay, now I'm not ready to take more. If it's repetitive, then you should decrease the frame rate, because you don't want to create new frames, right?
D
And you can do it however you want, because what is good for one application might not be exactly suitable for another application, and that's where JavaScript is shining.
F
And
this
chain
media
stream
track
is
an
augmented,
readable
stream.
That's
false,
mini
stream
track
is
a
control
surface.
That's
all
it's
defined
as
and
well
that's
spec
language
really,
but
it's
false.
F
You have a wrong picture in your mind, sorry about that. But I did find the controller concept really interesting, because, as you know, we've been struggling with exactly how to communicate the stuff that is not frames, in the MediaStreamTrackGenerator controller case too.
F
But since we weren't able to get a proper discussion on that to figure out what the requirements were, we didn't get to specify it. But my main problem is, I don't see the use case for this level of control that you're proposing. I mean, we have demonstrated that using streams works. This is a more complex API, and when you look at it, it looks as if you're saying that,
okay, we will implement an API that is new and different and implements half of streams, and we will require the user to implement the other half of streams in order to have a working solution. So what exactly, what use case, were you aiming for when you decided that this was a better API?
D
So goal two, for the generating part, as I said, was: you have an original track, a getUserMedia track, and what you want to do is apply a transform, and that's the funny-hat thing.
D
And I will answer your question: if you take MediaStreamTrackGenerator, for instance, it's blocking the muted and unmuted state. So if you apply a transform and then send it to a peer connection, you lose, for instance, the one black frame per second that normally happens for muted media stream tracks, and people are using that. We added it because there's a purpose, and it's good that we have an API that will not block that.
D
It will still behave the same, and that's cool and that's useful, yeah, I believe so. So I'm happy to provide more information and more use cases in the draft if that's not clear enough.
D
I provided some use cases and I'm providing an API that is fulfilling these use cases, and the MediaStreamTrackGenerator current proposal, or past proposal, any of the past proposals that were made for MediaStreamTrackGenerator, are not fulfilling these use cases.
E
Hello, yeah. So I work mostly on the WebCodecs side of things, so actually, on this slide, I just have a comment on something here. We did move away from streams for WebCodecs itself, but we moved to streams in the breakout box, because we had a video track reader and we thought that streams were actually just fine for the use case of getting video frames out of a camera.
E
So
it's
true
that
we
had
that
using
streams
with
web
codex,
which
is
a
different
thing
than
media
stream
track,
would
have
been
too
complex,
but
it
was
fine
for
us
for
this
case
and
then
on
topic
of
request,
video
frame
callback,
so
request.
Video
frame
callback
is
not
just
a
callback
api.
It
also
has
a
very
specific
timing
that
is
closer
to
request
animation
frame.
E
requestVideoFrameCallback is guaranteed to eventually cut through and be able to run, whereas that's not necessarily the case if you don't have that type of API. And also, requestVideoFrameCallback pulls the latest frame from the video element right before it runs, so you always have the freshest frame, which I don't know will necessarily be the case for a callback API.
E
So
I
guess
with
bernard's
question
about
back
pressure
and
cueing
things
yourself.
I
wonder
how
do
you
make
sure
they
have
good
fresh
frames
and
also
is
it
possible
to
capture
every
frame?
With
your
proposal
I
mean
I'm
just
thinking
of.
Do
people
ever
want
to
record
the
entirety
of
a
stream
while
losing
as
few
frames
as
possible
and
then
having
a
promised
backpacker?
D
Yeah, no, that's fair. So, talking about the last thing, the video frames: if you look at Web Audio and AudioWorklet, you're not guaranteed to get every audio chunk; if you're taking too much time, it is not guaranteed. In practice
it's fulfilled, because if you lose an audio chunk, it's terrible, it's very bad, and it's true that this is helped by having the audio JavaScript processing in the audio thread. My guess is that the same can be done for video; you do not need that exact timing, and I can do some experiments.
D
I
haven't
done
experiments,
but
my
guess
is
that
if
you
do
the
process
video
frame,
then
thank
you
in
webcodec,
then
you,
you
will
in
practice
not
lose
frames,
except
if
you
are
like
in
in
a
very
exhausted
api,
and
in
that
case
I
believe
it's
very
good
that
you
that
you
start
losing
frames,
because
otherwise
you
will
be
screwed
and
in
terms
of
precise
latest
frame,
we
we
need
a
spec
right
and
there
are
some
details
there.
D
I
think
there
is
some
flexibility
in
how
the
user
agent
can
actually
decide
what
it
can
do,
for
instance,
if
it
knows
that
the
pixel
buffer
pull
is
like
20,
then
maybe
storing
like
one
two
or
three
frames
is
fine,
but
if
it
knows
that
the
buffer
pull
is
eight
or
six
or
even
three,
then
the
user
agent
might
do
a
different
thing
and
might
drop
frames
much
sooner,
and
that's
one
thing
that
I
have
some
concerns
with
media
film
track
processor,
where
you
can
set
the
storage,
because
the
storage
to
me
depends
on
the
device
and
the
capabilities
of
the
device
and
particularly
the
pixel
device
size.
A
Yes, so first, Youenn, thank you so much for this presentation. I'm very glad we're discussing this; this is what we need to be doing. First, some things I like: I like that it relies on transferable MediaStreamTrack and only exposes new APIs in workers and not on the main thread. This is essential to Mozilla.
A
I even like the shape of createVideoTrack and how it takes a dictionary; that's sort of reminiscent of the underlying-source argument to ReadableStream. And I also like solving the problem of the controlled lifetime of video frames, but again, I don't feel that that's unique to callbacks.
A
H
A
I
A
Go
ahead,
okay.
Another
thing
I
feel
like
there's
a
goal
three
missing
here,
which
is
that
we
want
to
exfiltrate
media
stream
track
video
data
to
javascript
syncs
such
as
web
transport
or
rtc
data
channel
and
I'll
come
back
to
that
later,
because
the
improvements
I
feel
like
we
should
make
here.
A
I agree with Harald that MediaStreamTrack and stream are not the same concept, so it's not fair to compare them, and it's also unfair to fault streams for not handling things like ending, muted and applyConstraints. Streams never promised to solve that, and I think the streams polyfill you showed demonstrates that well, and that a better comparison is that streams are better than promise callbacks. And this is where I dislike basically just the callback parts, because I feel this reinvents streams in at least three places. One is the sequencing of callbacks: you're using a promise from the callback to only process one video frame at a time, and that's equivalent to a push-source ReadableStream, which has this guarantee built in. The other one's back pressure.
A
The JavaScript can use the same promise to signal back pressure to the producing source, and that's great, but that leaves JS to figure out for themselves how to, for instance, correctly propagate any back pressure that may exist downstream from themselves up to the source, and this is where the Streams API is the web platform's robust, composable pattern for back pressure.
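The composable back-pressure pattern being referred to can be sketched with plain WHATWG streams, which are global in browsers and in Node 18+. This is a generic illustration, not any of the proposals under discussion: the numeric chunks stand in for video frames, and the deliberately slow sink stands in for an encoder or network sink.

```javascript
// Minimal sketch of back pressure propagating through a pipe chain:
// the source is pulled only when downstream has capacity, so nobody
// has to forward back-pressure signals by hand.
async function demo() {
  let produced = 0;
  const source = new ReadableStream({
    pull(controller) {                    // called once per downstream demand
      controller.enqueue(++produced);     // stand-in for a captured frame
      if (produced === 3) controller.close();
    },
  }, { highWaterMark: 1 });               // keep at most one chunk queued

  const effect = new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk * 10);     // stand-in for a video effect
    },
  });

  const received = [];
  const sink = new WritableStream({
    async write(chunk) {                  // a deliberately slow consumer
      received.push(chunk);
      await new Promise((resolve) => setTimeout(resolve, 5));
    },
  }, { highWaterMark: 1 });

  await source.pipeThrough(effect).pipeTo(sink);
  return received;                        // resolves to [10, 20, 30]
}
```

Because pipeTo() only pulls from the source when the sink has capacity, the source is never asked for more "frames" than downstream can absorb; that is the built-in propagation a callback design would leave to application code.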
A
I feel like we should embrace that. For instance, if you want to pipe this to WebTransport, you want to be able to create a pipe for that data, even if you still have to handle things like ended and applyConstraints outside of that. And the last thing is that, yes, AudioWorklet uses callbacks; a video frame is a chunk in a video stream, basically. What's inherently different from AudioWorklet's synchronous callback API is that those APIs were narrowly inserting synchronous buffer processing in JavaScript into the native media stack.
A
But here we also want to exfiltrate data to JS and do it asynchronously, and this is why composition is good and best practices become critical. So I think we need a good JS API that informs and guides users and implementers about the right way to use these APIs to get back pressure correct, because that's a really tricky thing to get right.
A
So I feel like we have synchronous callbacks in AudioWorklet, we have asynchronous promise callbacks here, and then we have streams. I don't think we need three concepts there, and, to quote the ReadableStream spec, the platform is full of streaming abstractions waiting to be expressed as streams: multimedia streams, file streams, inter-global communication and more. The Streams Standard enables use cases like, and I quote, "video effects: piping a readable video stream through a transform stream that applies effects in real time". Okay.
D
Go ahead. So just two points. First, you said you liked the lifetime handling of the video frame, and that was part of using a callback, because with a callback it's very clear that you have a callback and you will have it for the lifetime of the callback. If you start to call read() or pipeTo(), you start to not have the same thing, especially since what you will probably do is pipe it to a transform stream, and the transform stream now needs to do two things.
D
It needs to close it and keep the back pressure. So the callback there is, I believe, much simpler to use. The second thing is WebTransport: you don't need to pipe a video frame into a transport. What you need to do is go from a MediaStreamTrack to WebCodecs, then to WebTransport, and you will see that WebCodecs is not using streams in the middle, and everything is fine with it, apparently.
D
So I would say that that's fine. If you want to do back pressure between the camera and WebTransport, then either we need to change WebCodecs, and we need to prove that it's very useful, or we need to let JavaScript handle it, and that's what is being done here, which is consistent with WebCodecs.
A
Okay, can I ask? I thought the video frame lifetime was until the promise was resolved.
D
So in the WebCodecs example, you will send the video frame to the WebCodecs encoder and then, when you exit, the video frame object will be closed. It does not mean that the resource will be released; it will be owned by the encoder until it gets encoded, and that's how the WebRTC pipeline works currently.
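The ownership hand-off being described can be illustrated with the shipping WebCodecs API. This is a browser-only sketch, shown for illustration rather than as runnable code here, and sendChunk is a placeholder for application code:

```js
// Browser-only sketch: frame.close() ends this script's reference to the
// frame, but the underlying pixel resource stays alive inside the encoder
// until the frame has actually been encoded.
const encoder = new VideoEncoder({
  output: (chunk) => sendChunk(chunk),  // sendChunk: placeholder sink
  error: (e) => console.error(e),
});
encoder.configure({ codec: "vp8", width: 640, height: 480 });

function onFrame(frame) {               // called once per captured VideoFrame
  encoder.encode(frame);                // the encoder now co-owns the resource
  frame.close();                        // release our reference immediately
}
```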
A
So it's not synchronously released after the callback returned, right?
H
Yeah, so I wanted to point out a couple of details that are not well expressed or conveyed by this slide. The concept of tying the flow of data, strongly coupling it to the track, and forcing you to transfer the track so that you can access the flow of data in the worker has some problems. For example, in this example, although you weren't as minimalistic as you could have been for the stream example, which could have been a lot shorter if you had used piping, for example: what do you do when you want to close the track?
H
Because you no longer have the track; you had to transfer it so that you could access the frames. So now, if you want to stop the track, the example would not be so short, because now you have to communicate with the worker to tell it to close the track. So if you want to, for example, use this...
H
...to have a self-view, then you're going to do it directly with the track, but the track is not in the window, it's in the worker, so you lost the track too. So now you have to complicate the example, and you maybe have to make a clone of the track to keep it on the main thread, so you can operate with that, and then you have to manage two tracks, one here and one in the worker.
D
Can I take just five seconds to answer that? This is related to MediaStreamTrack transfer, not really to this proposal, which is adding a callback.
H
That's a separate issue; I'm discussing the proposal as a whole, because of course that's not inherent to using callbacks. I mean, you could have another object, put the callback in another object and transfer that other object. That's basically what streams is: it's another object that you can transfer, instead of having to transfer the whole track, and that's why...
H
I also disagree that the track is the stream, because I see the track as the control surface of the media flow, but the media flow is separate. And that's the model that we have: we do everything on the main thread, which is the control thread, using the track; you put it in the media element.
H
But then you are ending up with an object that looks more and more like streams, which is what happened to your generating part, which is basically a framework very similar to the one for streams, and in part it also serves as a workaround for the fact that you have to transfer the track, so that you can turn the other track into sort of a replica, so that you can forward the operations automatically to the track that is in the other context.
H
So it's not that it's very short because it's a callback; I mean, there are consequences to making it that short, because you're tying it to the track instead of having it in another object that you can transfer separately, treating the media flow separately, which is the actual model on which the MediaStreamTrack API works.
D
I would be glad to continue the discussion, so if you could file it on GitHub, that would be great, and then we can continue it. Thanks.
B
Okay, I think I'm next, and then Harald, and then we'll move on. So I had a question; you and I got a little bit confused about how you'd implement the default track behavior, which is to send black.
B
So basically, when I mute the track, I'm not going to get a callback, right? So do I, as the application, now have to set a timer and create my own black frame and send that every second? Or, you also talked about how you could bring across the mute behavior to the transform track, and that would cause it, like if it went into WebRTC, that would cause the encoder to turn black. So, yeah.
D
Apparently we are over time, so I can answer that offline, probably. Okay.
B
All right, so we'll move to the next series of presentations, which is from Jan-Ivar. Over to you, Jan-Ivar.
A
All right, thanks. And if we finish early, maybe we can come back to some of this, if there's interest. So I have two issues around screen capture, 182 and 158, and I'm going to just jump right in. Next slide. It's basically: we, the working group, need to define our scope and the problems we want to solve. I think we need to recognize and better integrate web presentations in the platform. Looking at what we have today, you might think that's in scope, but it's actually a little unclear, because, you know, the URL for our spec is screen-share, but the name of the spec is Screen Capture.
A
This means web presentations, no offense to PowerPoint or other native applications, but that's where we are. Unfortunately, the number one use case is unsafe, and that's well documented in existing specs. So I think this working group must fix this. That's my first call to action. Next slide.
A
So here's a link to a presentation, it's on YouTube, that I gave at the W3C AC meeting in April, where I put up a slide about safer presentations, and it was based on my understanding of what's been presented in this working group so far, in the broad sense of what problems we're solving.
A
So again, I'm trying to make sure we're all on the same page about the problems that we want to solve. What I said was that more and more people are presenting online, but they may be inadvertently sharing things they didn't mean to, and there are hidden risks, too hard to explain even to technical folk, with sharing untrusted dynamic web pages. And we, the working group, think presenting can be more web-friendly and safer at the same time, by integrating tab capture directly with web pages, for them to capture themselves.
A
Now, security is critical, so this would require site isolation and permission, because rendering may reveal unobvious sensitive information, like your browsing history. All right, so next slide. So this was basically talking about getViewportMedia, which we've been discussing, but I think what's also come to light, through discussions on GitHub and earlier meetings, is that we also need to fix getDisplayMedia.
A
So I opened an issue on that, and today's choices, and this is pulled from, I think, all the browsers; most browsers have these choices, they're named different things, but they boil down to: the screen picker comes up, and the end user can pick between sharing their entire screen, a window, or a tab. At least all of these choices are unsafe; you might overshare.
A
What's on your desktop... if you share a tab, even if you have a nice screenshot and a picker where you see "oh, there's my web application", you're actually not sharing just that web application. You are sharing its container, including its backward/forward cache and everything you've done previously. So if, on the first slide, you accidentally hit the back button, you're going to share what you were doing before the meeting.
A
Potentially, if there was a previous page in your cache, to your entire meeting, and that's unacceptable, I think, and unsafe. And you might also flip between tabs while searching for a document, things like that, not to mention that there are active malicious attacks on the web's same-origin policy, specifically from sharing web surfaces. So the proposal is that user agents should come up with a new, safe choice, which is called, well, I'm calling it "web page", which is a little... we'll get back to what that actually means.
A
But I think that's the best I could come up with for what users understand. What it basically would be is the top-level browsing context, where the top current document is both site-isolated and opted into HTML capture, but the user agent would turn off capture if there was some cross-origin navigation.
A
So it's more than a single page, technically; maybe you could call it web app or website, but you know. And user agents could give preferential placement in the picker for these things, and you see that in the screenshot to the right, which is actually a mock-up based on a feature that's already available in Google Slides. All I had to do was change the first tab...
A
...the name of the first tab, from "this page" to "web page", and add a little caption at the bottom to show the name of the document. And also, you know, as extra candy, we can then consider... we now have a safer environment, because we have a safer container for sharing, where we can add more specific APIs, where it's more logical to add APIs like communications between capturer and capturee, because now there is a well-defined capturing context.
A
So I was going to ask what people feel about that, but I guess the format is that I finish my slides and then we discuss, so I'm going to move on. So, without assuming that everyone agrees on that, because we're going to discuss that later: assuming we want to go forward with something like that, what would the spec need, to facilitate these new choices in user agents?
A
Because specifications are mostly concerned with JavaScript APIs, and they don't typically dictate much about what user agents do, which is good; I think that's the right breakdown. But we need a few things. Editorially, we need to name this new source as a concept, whether it's a web page, website or app, and we want to loosen the elevated-permission language in the spec today for these sources, to allow user agents more flexibility in actually promoting and pushing users toward these sources, because there's existing language in the spec...
A
...that prevents that. And we might even encourage user agents to give preferential placement to them, specifically over tabs, for example, which provide similar functionality but in a more risky way, maybe with a "should" statement.
A
All right, sorry, so yes. And also, we need to specify how sites can participate: they need to satisfy window cross-origin isolation, document-policy html-capture, and require-document-policy html-capture. Of these, cross-origin isolated exists; the other two need to be added. And hopefully these could be the same site requirements that can be shared with getViewportMedia. So, do we want to discuss this one, or... I have a separate slide, which is kind of a separate item.
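As a concrete sketch, the opt-in being described could end up looking like a set of response headers on the captured site. The first two headers are the existing way a page becomes cross-origin isolated; the "html-capture" document-policy values are only proposed in this discussion and do not exist in any spec yet:

```http
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
Document-Policy: html-capture
Require-Document-Policy: html-capture
```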
B
We have two people in the queue right now: Youenn and Elad.
D
Okay, I agree with the analysis; tab capture is not great.
D
I would much prefer that we try to move tab capture to be safe, and maybe mute on navigation and mute on user activation, try to do some things like that. It seems that it would be easier, if we want to have pickers that are more natural and easier to understand.
A
All right, okay. So how would you then expose, let's say we have success, getViewportMedia? I assume you would require cross-origin isolation and document policy then, right, in your mind? Yes? So, yeah, I guess I'm thinking that once we have those benefits, we should look into ways to maybe extrapolate that value into getViewportMedia, for user agents that want to, or feel like they could accomplish that user-interface task.
A
Would you be opposed to having the spec help along with those things, and specifically to loosen the elevated-permission language around such sources, even if you don't implement?
D
Why not? I'm open to that discussion, yes. Okay, great, thanks.
G
Yes, thank you. So I think that, generally, we see eye to eye when it comes to where we want to be eventually, and where we don't really agree is about the way there, and we've discussed these topics for a few meetings now, so forgive me if I refer you back to, you know, the entire history.
G
I think that there are some dangers in trying to prevent additional improvements to the existing mechanisms, or imagining that we could one day just do away with them, forgetting that people are using them at the moment and that people have alternatives. So if we do not keep on improving what we currently have, or if we cut off support for that, then users will be pushed off of the web platform and towards installing native software, which is not any safer than what we currently have, let alone what is being proposed.
G
So, while I'm very happy with this as a direction, I don't really understand why we're tying things like controlling... sending the other tab messages, like previously, next slide, with this. Why can we only send that... basically, this is Capture Handle, right? You're saying, with Capture Handle, you're not very happy with that unless it's site-isolated.
G
I don't really understand why we say that, and I think that the vision needs to acknowledge that we're probably not going to get off of allowing the user to share the current screen, window or tab, even if it's not site-isolated, probably for years to come, because the entire web is not just going to become site-isolated overnight. And if users cannot share certain websites, then they're just going to install extensions and they're going to install software, and I don't know if this is where we want to push them.
A
Well, I would respond that, yes, there are iterative proposals in the working group; I'm not presenting those today. I'm presenting the long-term view that I hope we can have working group consensus on and, moreover, the problem that we want to solve, and there might be short-term solutions and there might be long-term solutions.
A
So I think the parts that we agree on, where we have made progress, like with site isolation and document policy: I think those right now are blocking getViewportMedia as well, so we need to start.
A
We need these to bear fruit now and produce something in the spec. So, yes, I regret putting in the mention of other APIs, but my intent was not to say those APIs are blocked unless we have all this. What I meant to say is that this is an environment where I actually like your proposals, so I would take it as a positive that we can then argue about the...
A
I think there are some valid concerns about how we iterate if we can't get everything at once, and I also feel like we need a long-term direction. And it sounded like you mostly liked what you heard here, apart from the impression I didn't mean to give, which is that it might hold up other things.
G
So, having heard those arguments before, I'm a bit worried that accepting this as a general direction would make us move slower in the iterative process.
A
Well, I think our objections on the other proposals you're referring to remain, from Mozilla, regardless of this; they're based on principles, not on this particular long-term view.
A
Are there any objections to starting to put some of the site requirements, which would need to be specified, into the specs? That is also a blocker for getViewportMedia.
A
The next issue, exactly. Assuming we can agree on cropping, would you be positive toward specifying these requirements for cross-origin isolation and document policy?
F
Because the issue of what things we can give elevated permissions to has been decades in not being solved, I'm quite worried about the best being the enemy of the good here, so we need to make sure that we can make progress on what is ready for adoption now. I'm in favor of getting these concepts specified, but I'm not willing to state a position on which particular things should be blocked or permitted.
A
All right, thank you. I think that allows me to proceed, at least with the specification of those things. Thank you. Do I have two minutes, to the next slide?
J
Oh yes, I think that Harald raised an interesting point about the scope: site isolation is not really WebRTC per se. I think it would be interesting to get input from the security folks, other working groups, the TAG as well.
A
Yes, to clarify: cross-origin isolation already exists; that's existing platform technology. So the new part would be document policy.
A
We have buy-in from Chrome security, at least, and others have looked at it as well. So, yes, there are other people to involve there, correct, but I would be in support of this working group doing it.
B
Okay, so are people okay with letting Jan-Ivar cover the additional slide? We do have half an hour left, so I think we can do it. Go ahead, Jan-Ivar.
A
All right, great. So this is really about cropping, which is what Elad alluded to, and there was a proposal on GitHub, which, apologies Youenn, I just put a slide together; this is really your proposal. Let me know if you want to talk over it, but I really liked it; I think Elad liked it.
A
So it might mean we can progress on something. The idea is to have basically three APIs that call the same underlying algorithm, taking a viewport as internal parameter, which is responsible for permission policy, prompting and creation of a track. So if you want the full tab, if you want the viewport of the entire tab, you call navigator...
A
...mediaDevices.getTabViewportMedia. If you just want your own, if you're an iframe and you only want your own document, you call document.getViewportMedia. And there's also a similar method on the iframe element, and the benefit there, as I understand it, is that you can capture the viewport of that iframe, even if it's a cross-origin document. With that...
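Spelled out in application code, the three proposed entry points would look roughly like this. None of these methods exist yet; the names come from the slide being discussed, and the exact shapes and options are still under discussion, so this is a hypothetical sketch:

```js
// 1. Capture the viewport of the entire tab (proposed):
const tabStream = await navigator.mediaDevices.getTabViewportMedia();

// 2. From inside a document, capture only that document (proposed):
const selfStream = await document.getViewportMedia();

// 3. From a parent document, capture a specific, possibly cross-origin,
//    iframe element (proposed):
const frameStream = await someIframeElement.getViewportMedia();
```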
B
Yes, yeah.
A
Yeah, okay, thank you. And there are some follow-up questions around permissions policy and site isolation, which we already mentioned, and...
G
Questions, yeah, a lot. Yes, I'm sorry, I didn't mean to interrupt, but I think this is relevant: what's the difference between document and iframe? I could be mistaken, but wouldn't document just be the way to get your own iframe?
A
Yes, but I think you had... the benefit of iframe is to solve the problem I think you mentioned: if the iframe is a cross-origin document, how would you capture it?
A
So if it's a same-origin frame, then you could do, you know, iframe.contentDocument.getViewportMedia, right; you can reach in. But if it's a cross-origin document, earlier proposals would have forced us to postMessage it and transfer the MediaStreamTrack, and this avoids that. Is that right?
G
Okay, I thought that document.getViewportMedia looks to me like a sub-case of iframe, but maybe either I'm missing something, or this is just a way of exposing basically the same thing, but on different surfaces.
G
navigator.mediaDevices.getTabViewportMedia, this one has an essential difference, right: it gets the entire thing, and maybe you would even want different permissions for that. Or do you intend to use the entire...
A
And there would need to be a permissions policy, hopefully separate from display-capture.
A
Yes, well, in the spec, yes. And then, if some browsers have difficulty implementing one over the other, then feature detection becomes easier, which was Youenn's point here, I think.
D
I'm fine with it as well. I think it's probably fine as well, yeah.
A
Okay, so we could strike the second one. And, for others' benefit: the earlier proposals had ideas of passing in an iframe DOM node as an argument, and passing a MediaStreamTrack around, to do this kind of cropping, so this seems like a cleaner solution.
A
On this issue, yeah. No, this is... yeah. It sounds like... I'm not hearing any objections on this one, so hopefully...
G
...I can proceed on this. So, philosophically... I cannot find the correct word right now, but philosophically, for lack of a better word, this looks very nice to me.
G
The problem I see here is that one thing that we would want to have is to be able to crop to an arbitrary frame, and here this claims to give you that, because there are transferable MediaStreamTracks; but in practice we don't have them, and in practice it's unclear when we'll have them, and it's kind of unclear when we'll have them working efficiently and correctly, and on all browsers.
A
Well, I think this was aimed to solve that. I mean, you have iframe.getViewportMedia, so you have access to arbitrary iframes in same-origin top-level documents, so you can now capture cross-origin documents. The only thing you cannot...
A
...if there are iframes inside... if you want to capture your parent, then getTabViewportMedia gets you the whole thing. So you're inside an iframe, inside another iframe, and you don't want to capture everything, so you want...
A
Well, maximum flexibility is never a good thing, right? I mean, so, from the use cases we've heard so far, I'd like to know more about the use case. We still have the option to do it the old way, where at some...
A
...level. So I'm confused now: you liked this proposal, but...
G
I'm on the fence. So right now I'm watching a screen where you're actually presenting to me, but it could also... I'm sorry, Bernard, did you say something? No? So I'm...
G
...a screen where there's a presentation on the left, which you're presenting, but let's assume that this was actually, you know, my own slides inside of slides.google.com, and on the right side I've got a meeting at meet.google.com, and what I would like to be able to do is to capture, from the Meet pane, the other one, and I want to do that independently of which is the child of which, and how many other iframes exist in between.
A
Well, if you have a child-and-parent relationship, then I don't understand this side-by-side: are they siblings with the same parent, or is one a parent of the other? If one is a parent of the other, then... and you want the iframe to capture...
G
I'm saying... so we're not designing this exactly for the case of, you know, this particular application, but rather we want to support arbitrary applications, and I can imagine that in some applications, you know, Meet is going to be inside of Slides... I'm sorry, A is going to be inside of B, and then sometimes B is going to be inside of A. I cannot tell you ahead of time who has a child, and also maybe I'm going to have both the Slides iframe as well as another iframe...
G
...you know, Wikipedia, or, you know, some IDE, or anything in addition, and I just want to crop to that very one thing. I don't want to capture my parent, who's holding all of those; I don't want to capture a child; I want to capture, you know, an arbitrary iframe.
A
Yes, well, if you have some buy-in from the parent, then you have this iframe.getViewportMedia API, and it's not like this would ever work without some buy-in from the parent, because your iframe, the capturer, is going to need a permissions policy specified or it will not be able to call this method, and the target is going to similarly need to have a document policy in order to be capturable, and all this has to be coordinated by the parent.
A
So, yes, I think, from a top-level-down approach, these APIs would hopefully be sufficient, but I'd be interested in the use case where that wasn't sufficient, and I'm sorry, I couldn't really deduce that from what you said here, so maybe we could discuss that offline.
G
I think, in one minute: I think that the problem is that it's theoretically sufficient, given that we posit that we have transferable MediaStreamTracks and that they work perfectly. But if we don't, then suddenly, well, okay, somebody can capture just the right part of this portion of the screen, but they cannot truly efficiently transfer that track to where I want to be holding the track.
G
So if whoever was initiating the capture could just, you know, with buy-in from everybody, like if he had some kind of permission token, or a very unique identifier, or anything like that, that was expressly given to him through all of those chains of iframes, and then, just by specifying that to the browser, you could say "hey, I want to crop to this", then that would be very useful for me, and I think for other applications.
A
Well, unfortunately, I can't speak to such a hypothetical proposal, which is not before the working group today, but I will say that, if your concern is that transferable MediaStreamTrack should fail to deliver this in the future, yeah, I'm happy to reopen this, to add some more APIs to solve this.
B
What would you propose we do? We do have, I guess, 19 minutes left. Should we go over the plan for action on various things?
F
So I'd like to say, now that I've had half an hour of thinking after Youenn's proposal, I'm of the opinion that I would like to say that, no, we should go with the streams-based proposal; we should adopt ideas for handling non-frame signals separately, and we should adopt the current proposal as a working group document.
F
The other alternative I see is that we could ask Youenn to complete this work into a document, rather than a collection of slides and examples, because there are so many things that are either unclear, unspecified, or shoot-yourself-in-the-foot.
F
I'd like to hear other opinions, because when we end up evenly split, we often don't reach either a consensus or a significant majority.
B
Yeah, I guess a question I have, and people can feel free to get into the queue: there are some elements that Youenn described that were not covered in any previous proposal, like the need to handle mute and unmute, some of which could be applied to streams, I think, just as well, as you said, and there were other issues that were raised by people posting issues to Youenn's GitHub.
B
I guess my overall question is, you know... I think at this time it's fair to say that both proposals are free to evolve, so I mean the streams proposal can certainly incorporate things from what Youenn had, and vice versa, and it's also fair to say that there will probably be a lot of issues and other things, so Youenn's proposal could evolve.
B
But at some point it would be good to actually make a decision. We don't have a meeting till September, so we have kind of two months here in which that can occur.
B
But I think it's fair to say that we don't want to let it linger a lot beyond September. Is that fair?
B
Guess
my
question
is:
what
what
to
what
does
the
working
good
feel
needs
to
be
done
to
make
a
decision?
I
mean
where
you
know
we.
E
B
H
Just
want
to
say
that,
for
the
mute
part,
we
were
working
on
pr
to
add
to
add
that
to
the
streams
based
proposals.
So
this.
E
H
Something very similar: adding a field to the MediaStreamTrackGenerator to kind of simulate the hardware mute button. So it wouldn't be to explicitly set the muted field,
H
to directly control the muted field of all connected tracks (since the browser is free to mute or unmute), but to kind of simulate that: if you set it, then they will all be muted, similar to the way a microphone would be muted if you muted it via hardware. So it would be similar; maybe the details might differ a bit, but for the mute part specifically, that's what we intended to do.
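A minimal sketch of the idea described here, assuming a hypothetical writable `muted` field on MediaStreamTrackGenerator; no such setter has been standardized, and the event behavior in the comments is the semantics discussed above, not shipped spec text:

```javascript
// Hypothetical sketch: a writable field on MediaStreamTrackGenerator that
// simulates a hardware mute button. `generator.muted` as a setter is an
// assumed API from this discussion, not part of any shipped specification.
function setSimulatedHardwareMute(generator, muted) {
  // Setting the field would cause the user agent to fire mute/unmute
  // events on every track connected to this generator, as if the device
  // were muted in hardware; scripts would not mute individual tracks.
  generator.muted = muted;
}

// Browser-only usage sketch (MediaStreamTrackGenerator is assumed to
// exist, as in the mediacapture-transform proposal of the time):
function demoMute() {
  const generator = new MediaStreamTrackGenerator({ kind: 'video' });
  generator.addEventListener('mute', () => console.log('track muted'));
  setSimulatedHardwareMute(generator, true); // all connected tracks mute
}
```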
A
Yeah, I think ideally I would love to see Harald and Youenn work together on a proposal, because I feel both have things that I like and both have things that I dislike. I don't like Youenn's callback/promise API, and I don't like that the existing spec exposes this on the main thread. So I'm hoping there are very good ideas in both that we might be able to combine, and if people need help, I could also volunteer to try to produce something like that.
A
Well, maybe we could iterate on API shape, at least.
B
Yeah, I think both proposals are free to do that. I'm curious whether you do have an opinion: in September, presumably, as you said, people will have improved things and filled them out, and we'll have, hopefully, two complete proposals.
A
Well, it sounds like there are fundamental issues we disagree on that are unrelated to API shape, like the exposure on the main thread, so maybe that's something that could be discussed separately, without waiting for proposals.
H
Well, maybe we can try to reach consensus on everything, and for the parts where we don't reach consensus, we use whatever mechanism we have to resolve that.
A
I should mention that, on exposure to the main thread, there's a similar issue in WebCodecs that's going to affect any users of this API, and I believe Chrome's intent-to-ship of the current proposal here mentioned that they were going to abide by the decision in WebCodecs.
K
So, strictly speaking, aligned with the decision in WebCodecs, right? Right.
H
Yeah, which is something we have to do, since we are depending on WebCodecs.
A
Right, although in theory I don't think it necessarily follows from the WebCodecs decision that we should make the same decision, because I think there are even more reasons to limit MediaStreamTrack producer (sorry, processor) on the main thread. So I think I would have stronger arguments than I have in WebCodecs, even though, at the other end, I understand that...
C
Tim here. Yeah, I was just gonna say that I kind of disagree and half agree with Harald about whether you could or should constrain APIs in workers. I think you should, but only if they're different flavors of worker. If you've got a generic worker, then it should do everything that you'd expect; audio worklets, for example, are very constrained, but then they're a different flavor.
C
So I think that feels reasonably natural to developers. And just tracking back to the issue about Youenn's proposal: I feel like the promises are doing a lot of heavy lifting, and there are a couple of things that he's using promises for where, when you look at the code, it's actually going to be much uglier than the snippets.
C
Now, it's unfair to say that while he's not here, and I didn't want to jump into the discussion because it didn't seem totally relevant at that point. But I feel that we're actually iterating, very slowly, towards something that is actually a joint proposal. I think if everybody just felt like they were heading in roughly the same direction and iterating towards it, then maybe you'd get there.
B
Yeah, they'll certainly iterate. I don't know whether they'll iterate in the same direction, and we don't really...
C
Or Youenn saying, oh, you can mock streams like that. Actually, there's a kind of polyfill of streams like that, and we're actually iterating towards a solution to a common set of capabilities, maybe achieved in different ways. I feel like we're closer than maybe the emotion in the room suggests we are, if you see what I mean.
B
Are there things we can be doing to try to track things or help things along? Just wondering, as a working group process.
F
Well, we can ask him to actually get the stuff to be a proposal that we can share ideas from, share snippets or procedures from, rather than where it is now, and say that this is not a pull request on mediacapture-extensions, it's a new document. So that's the direction, at least.
C
Can we expect a kind of comparable set of movements on your side, or do you feel that that proposal is already sufficiently complete?
F
I do personally feel that the proposal is not complete. We had a proposal for feedback signals, right, that travel in the opposite direction of the frames, and that turned out to be underspecified and underutilized, so we deleted it again.
F
We need to add something back in, and if we can generalize a pattern that does the mute and unmute and handles the black frames and silence properly, then I think that would be a win. And if we don't have to disagree with Youenn about that, I think that would be a win in the working-together sense.
B
Just a question (we will be open to changes, but just a question), Harald: do we have agreement on the extent of the need? Because I know you just talked about mute and unmute and things like that, but there was also feedback for other elements, like constraints, right, that you had originally thought about. Do you still feel that that's something that needs to be in there? Because he didn't talk about that.
F
No, he referred to it kind of sideways, when he said verbally, "and we can extend this for other purposes."
H
Note that our proposal specifies how tracks connected to a generator should do applyConstraints.
H
What it doesn't do is the other direction: saying that the constraints of the track should be propagated to the other track, because I think you'll run into issues there, since constraints are a property of the tracks, not of the source. So it's okay to inform the source about the constraint change, so that applyConstraints can reject, for example because it contradicts the constraints of some other track that is also connected to the source, and the implementation cannot satisfy both, for example by using cropping or some other mechanism.
H
But so our spec does explain what is supported and what cannot be supported with, say, applyConstraints. It's just not in the direction of trying to connect the constraints of, say, the original track and the transformed track; it's restricted to the generator track. Yeah.
A
So, yes, since we all agree that MediaStreamTrackProcessor and these APIs need to work well in workers, the idea is that if I want to play this back in a self-view, I would transfer the resulting track back to the main thread and assign it to video.srcObject.
F
You wouldn't have a MediaStreamTrackGenerator in a worker. You would generate the stream in a worker, then transfer the end of the stream back to where you want the track to be, and then you have it in the video.
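The flow being described might look roughly like this, assuming the mediacapture-transform classes as proposed at the time (MediaStreamTrackProcessor, MediaStreamTrackGenerator) and transferable streams; the function and file names are illustrative, not from any of the proposals:

```javascript
// main.js: capture on the main thread, process frames in a worker, and
// play the result back in a self-view by piping the returned stream into
// a generator whose track is assigned to video.srcObject.
async function startSelfView(worker, videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  const processor = new MediaStreamTrackProcessor({ track });
  // Transfer the raw frame stream into the worker for processing.
  worker.postMessage({ readable: processor.readable }, [processor.readable]);
  worker.onmessage = ({ data }) => {
    const generator = new MediaStreamTrackGenerator({ kind: 'video' });
    data.processed.pipeTo(generator.writable);
    videoElement.srcObject = new MediaStream([generator]);
  };
}

// worker.js: apply a transform off the main thread, then transfer the
// processed end of the stream back (identity transform shown here).
function handleFrames({ data }) {
  const transform = new TransformStream({
    transform(frame, controller) { controller.enqueue(frame); }
  });
  const processed = data.readable.pipeThrough(transform);
  postMessage({ processed }, [processed]);
}
```

This matches the caveat raised next: only the streams are transferred between threads, so it works without transferable MediaStreamTracks.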
A
I'm wondering: was there some miscommunication here, or...
G
No, I think I like this approach, with the one caveat being that it doesn't really accommodate the fact that we don't yet have transferability of streams... tracks, sorry. If we could find a way to have a stopgap for that problem, that would make me even happier.
A
A good approach. So would you be opposed to iterating on this approach? I'm trying to detangle issues, right, because if we have everything blocking on everything else, we'll never get anything done. So, in the interest of separating these issues, would you be okay with moving forward with the PR on this?