From YouTube: WebRTC WG meeting May 25, 2021
B: Yeah, I think that's okay. Does Dom have any objection to that? Okay, all right. So this is the agenda. Just a little bit of a warning: we're going to do more strict time control. Harald will be the bouncer, so we're going to give a warning two minutes before your time is up, and then once the time has elapsed, we will move on to the next item.
B: It's going to be fairly strict, because we've been running out of time and people get angry that they don't get time to present what they want to present. Okay. So the next session is Elad's, for 25 minutes or so. I'll turn it over to you, Elad.
D: Thank you very much. Can everybody hear me? I've just joined from my other account. Yeah, perfect. So this is about Capture Handle. Capture Handle is a way for two applications running in two different tabs to communicate, but to communicate just the bit of information that is needed to jumpstart the rest of their communication; it also assumes that they're already communicating in some way. Let me get to that in a second. So let's assume that we've got one VC application.
D: It could be Google Meet, or let's call it VC Max. So that's a fictional application that we have, and it's capturing, using getDisplayMedia, another tab that's running an application like Microsoft PowerPoint or Office 365 or anything else. Let's just call it a presentation application. So, two applications: a VC capturing a productivity suite. Next slide, please.
D: So now let's assume that the presentation application has all sorts of APIs for getting a message saying: hey, I want you to change to the next slide, or I want you to change to the previous slide, go to full screen, leave full screen, anything we can imagine. But what exactly it has is out of scope; let's just assume that it has previous/next slide.
D: Just because that's easy. We also assume that this depends on a session ID, right, because you don't want to change the slides on somebody else's session. And let's also assume that there is some kind of identification and authentication taking place; that's outside of our scope.
D: But the very minimum that we can assume is needed is a session ID. And when I start capturing something with getDisplayMedia, I don't generally know what I'm capturing; it's up to the user to decide. I might know that I captured a tab, but I don't know what tab, who's running there, etc. So that's what I want to focus on: how can we safely discover a reasonable amount of information to allow collaboration, if collaboration is possible, between these two applications?
D: So one hacky way of doing that, which you could do right now without any changes to any browser, is that the application that thinks "hey, I might be captured" could just show a QR code, and whoever ends up capturing it could read that QR code, extract whatever information it wishes out of it, and then use it. For example, it could get a session ID and then, through some shared back end, it could try to authenticate and say: hey, I want to start communicating with this session.
D: If it has permission, it could, and then over that secure channel it could start sending next slide, previous slide, etc. So that would already work, but it has many downsides. First, it's not ergonomic.
D: Second, it's not efficient, because when you capture something you have to continuously scan and try to read this QR code. Of course, it doesn't have to be a full-blown QR code; it could be something a lot more minimal and less user-visible, just a couple of pixels, but you still need to scan them at every iteration. And it's also ripe for disruption, even unintended, just by somebody else...
D: ...drawing over that particular part of the screen. And last of all, when you're capturing, you always need to take into account that whatever you think you're reading, none of it is verified, right? I need to somehow verify all of it. For example, I could verify a session ID by trying to open a communication channel with whoever owns that session, and they could display, you know, a challenge and a response, or anything like that. Next slide, please.
D: So what I'm suggesting is that we say: okay, if this is already possible, let's try to make it better. First of all, of course, this needs to be opt-in. The application would not, like in some previous suggestions (actually not my suggestion, but never mind), always expose its title or anything like that; rather, it would have to say: hey...
D: ...I want to expose, and here are the things I want to expose, and here's who I'm willing to expose them to. So, there are two things it could expose: it could expose its origin, and it could expose a self-selected string. That string would often be a session ID of some sort that's meaningful to the application. The next thing is, it needs to be able to say: hey, I want these origins, or maybe any origin, to be able to read the information I'm exposing. And I'll get back to that...
D: ...why that is useful, a bit later. Then, on the capturing app, you will see: hey, here's what I'm capturing. I'm capturing meet.google.com, I'm capturing slides.google.com, or any other thing, and here's the session ID that it claims. Now, one benefit here is that at least part of this information is automatically verified, because for the origin, the browser can guarantee that you cannot expose somebody else's origin. Basically, when you're exposing, you just say: do I want to expose my origin?
D: Yes or no; you don't self-report the origin. And then, once you know that the session ID, or anything else that is coming, is coming to you from a verified origin, you can choose to trust it or not. So, for example, if one Microsoft application captures another Microsoft application, maybe it chooses to say: okay, whatever information is delivered in the handle, we deem trustworthy, or not; it's up to them. Next slide, please.
D: So here is just one illustration of how this could be used. Let's say that you're on the captured side and you're some kind of presentation application. You call setCaptureHandleConfig, meaning: that's my capture handle, that's what I would like to show. You could say exposeOrigin: true, or you could say false.
D: Then the handle can be anything. It could be a session ID, or it could be any other string. Maybe here we see a human-readable string that says: hey, go here to understand what kind of API you could use with this particular ID, and here's the ID as part of it; that could be parsed on the capturing side. And then we say permittedOrigins is any origin, just to make it simple. Next slide, please. Thank you. And then, on the capturing side, we could say: okay, I called getDisplayMedia.
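The captured-side call just described might look like the following sketch. The dictionary keys (exposeOrigin, handle, permittedOrigins) follow the origin-trial shape named in the talk; buildCaptureHandleConfig and the "session:&lt;id&gt;" handle convention are hypothetical, purely for illustration.

```javascript
// Captured side (e.g., the presentation app). buildCaptureHandleConfig and
// the "session:<id>" handle convention are hypothetical; the dictionary keys
// follow the origin-trial shape described in the talk.
function buildCaptureHandleConfig(sessionId, exposeOrigin = true) {
  return {
    exposeOrigin,                    // the browser attests the origin; it is never self-reported
    handle: `session:${sessionId}`,  // self-selected string, often a session ID
    permittedOrigins: ['*'],         // or a specific list such as ['https://vc.example']
  };
}

const config = buildCaptureHandleConfig('abc123');
// Guarded so the sketch is inert outside a browser with the origin trial enabled.
if (typeof navigator !== 'undefined' && navigator.mediaDevices?.setCaptureHandleConfig) {
  navigator.mediaDevices.setCaptureHandleConfig(config);
}
```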
D: I got something. Missing from this example is the part where you say: okay, is this a tab? And then, if it's a tab: okay, is it exposing a handle? Once I see it's exposing a handle, I could say: okay, is it one of several collaborative applications I know? And if it is, then I'm going to parse the handle in a certain way, extract the session ID, and then run whatever code I want; maybe I create an adapter for previous and next with whatever API.
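The capturing-side flow just walked through (call getDisplayMedia, check whether a handle was exposed, match the origin against known collaborators, extract the session ID) might be sketched like this. KNOWN_COLLABORATORS, parseSessionId, and the "session:" convention are assumptions; captureHandle in getSettings() follows the origin-trial shape described above.

```javascript
// Capturing side. The collaborator list and the handle-parsing convention
// are hypothetical; a real app would use whatever its partners publish.
const KNOWN_COLLABORATORS = new Set(['https://slides.google.com']);

function parseSessionId(handle) {
  // Expects the self-selected "session:<id>" convention from the captured side.
  return handle && handle.startsWith('session:')
    ? handle.slice('session:'.length)
    : null;
}

async function captureAndDiscover() {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const [track] = stream.getVideoTracks();
  const { captureHandle } = track.getSettings(); // undefined unless a tab exposed one
  if (captureHandle && KNOWN_COLLABORATORS.has(captureHandle.origin)) {
    const sessionId = parseSessionId(captureHandle.handle);
    // ...hook up a previous/next-slide adapter keyed on sessionId here...
  }
  return stream;
}
```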
D: One thing to note here is that you could actually even drop exposing the origin if you wanted to, because if your ID is sufficiently unique, then you could just get a value, go and talk to whichever server the only collaborating application (or a small number of collaborating applications) you have runs, and say: hey, does this ID belong to you, by any chance? And then you could use that to establish communication.
D: Next slide, please. Sorry, yes, thank you. So I've already implemented this, and it's already going into origin trial in M92 of Chrome. Of course, we are open to feedback; this is just an origin trial, just an experiment. And this is what it looks like in M92.
D: The captured side says: hey, here's what I would like to expose. Then the capturing side says: hey, I would like to either get notifications whenever the handle changes, which can happen if you navigate, and can happen if you just load a different slides deck in the presentation software, or anything like that; or I could read the exact value at whichever time I wish.
D: Sorry, Bernard, are you the one controlling the...?
D: Thank you very much. So then we want to think: okay, are there any privacy or security concerns with an API of this shape? My argument is no, and I think that is because of the following. First, it's opt-in, so you could very easily just not use it. Second, it's user-driven, so it hardly ever takes effect; it's only when the user actually starts display capture, and we don't...
D: We don't influence the decision anyway; it's completely user-driven. Then, theoretically, it's a no-op, because it's already possible, just not as ergonomically.
D: So now I just need to make sure that everybody is looking at the second presentation, because soon they're going to shift out of alignment with one another. So, next: it's more robust than steganography. That's nice; that's not really a privacy or security issue, but it's good to have, right? Once you start relying on the mechanism, you would like it to be robust. Next:
D: One misunderstanding of this could be that maybe the captured app realizes that it's being captured, and the answer is no, it does not. It only discovers...
D: ...that it's been captured if the capturer decides to tell it, which is (a) already the case, and (b) perfectly fine, because anything that we could have against the captured application finding out is because it could mess with the capturing application. But if they're collaborating, if the capturing application chooses to give that information to the captured application, that kind of presumes that this is okay by it.
D: Next: you could use a handle that's partially opaque. You could write, in a very understandable form, exactly what you are ("I'm Wikipedia article XYZ"), or you could have something that's only understandable if you have access to another API. The last thing is: this gives us the ability to rely on the origin, because you cannot spoof it. And one more thing you could kind of worry about: hey...
D: ...can I attack the capturer by spamming it with events? The answer is no, because it's going to cost you as much as it costs the capturer, which you don't even know exists; and the capturer could avoid that by just not registering to read your handle, and it could always deregister that handler. Next slide, please. Yes; so, previous slide, please. And that was what I had to say. I will shift the mic over to Jan-Ivar later, but first I wonder if there is any other input from people.
F: This is good. So, a few thoughts. First, the name might be a bit too generic: "capture". When people think about capture, they're thinking about camera and microphone, or maybe that's my bias. It's very specific to screen capture, so maybe we could bikeshed a name that has more focus and more meaning.
F: The reason is that you're passing some arbitrary data from one web page to another, and if you do not pass the origin, then you get data from an untrusted source and it becomes a bit more complex. And people might want to say, like in your WebIDL, exposeOrigin defaults to false, which I don't think is the right thing to do.
F: I would start very simple: always expose the origin, and if people come to us saying "hey, I want to not show my origin," then we can talk with them. Otherwise, I'm not exactly sure whether we even want this event mechanism; but other than that, if we have a one-time blob of data that is exposed from the captured page to the capturing page, I think it's probably fine.
D: For applications to use that in a meaningful way... I think that we will be able to show some examples very soon. And I just want to mention that this is not one-time, as you said, because once you capture a tab, the tab can be navigated. Which means that, for example, if we take a slides deck: once the tab navigates away, you don't want to keep sending messages to the old thing that you no longer have on screen.
D: You might want to send them to the new slides deck to which it has navigated.
F: I would say that in most cases navigation will probably not happen, and also, if you have tight coordination between one and the other, you might be able to navigate to a collaborating destination anyway, so you might be able to reconcile things in some ways. It's something that is doable without this event mechanism anyway. So I think...
F: ...I would start very simple and reduce the scope of the API a little bit as well, just to start very simple, which should hopefully cover most of the cases.
D: I agree with simple, but not too simple. For example, saying that exposure of the origin happens automatically is something I need to think about a bit more, but I'm more open to that, because it does not reduce the usefulness of the feature. But to say that it is one-time is, I think, 100% problematic, because navigation to another website can happen through the URL bar.
D: So it's not controlled by the app in any way; the user just moves it, and it could be to another collaborating or non-collaborating site, and the browser does not know, right? So the browser needs to say: hey, this capture handle no longer applies. You don't have to keep calling me; I'll give you an event, so you know it no longer applies, and maybe something new is going to apply in a second, or maybe not.
F: Maybe that's something we could discuss. In that case, if you really want to go with an event, I'm not sure that settings is the right way of doing things. Maybe we should invent something new, because it does seem a bit odd that you have settings that will change and then you need an event somewhere else; it seems like a really new mechanism. If we want to go with settings, I would go with something that is not mutable and be done with it.
D: I'm not sure I understood, because getSettings was for when you want to read the capture handle at the moment; and if you want to register an event handler, the event handler is on the track, not on the settings. Then, when you get an event, the event includes the new capture handle. So basically, every time the capture handle changes, you get it with the event, so you don't actually have to call getSettings at any time.
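A sketch of that pattern: read the current handle once via getSettings(), then listen on the track for change events. The capturehandlechange event name matches the origin-trial behavior described here; watchCaptureHandle and the onHandle callback are hypothetical names.

```javascript
// Observe capture-handle changes on the capturing side. Hypothetical helper;
// the event name follows the origin-trial behavior described in the talk.
function watchCaptureHandle(track, onHandle) {
  // Read the current value once...
  onHandle(track.getSettings().captureHandle);
  // ...then rely on events; each event carries the new handle via the
  // track's settings (e.g., after the captured tab navigates).
  track.addEventListener('capturehandlechange', (event) => {
    onHandle(event.target.getSettings().captureHandle);
  });
}
```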
F: I think yes; so, okay, I would just say that we have an attribute for that and we have an event, and I would follow the same pattern in terms of trying to go with getSettings. But that's a small API bikeshed, I guess.
C: [question not transcribed]

D: Not necessarily. For example, if you're capturing another tab that happens to be in the same origin, you could use a BroadcastChannel. This does not really talk about how you're going to use the session, just how you discover what you've captured; you then establish communication in a completely orthogonal way. I just aim to give you a strong enough mechanism that it would apply to as many scenarios as possible.
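As a sketch of that same-origin case: once the capturer has extracted a session ID from the handle, a BroadcastChannel is one orthogonal way to reach the captured tab. The channel-naming convention, helper names, and message shape here are assumptions for illustration, not part of the proposal.

```javascript
// Same-origin only: BroadcastChannel does not cross origins, so this path
// applies when capturer and captured tab share an origin.
function channelNameFor(sessionId) {
  return `capture-handshake-${sessionId}`;
}

function openControlChannel(sessionId) {
  const channel = new BroadcastChannel(channelNameFor(sessionId));
  // Kick off the handshake; the captured tab would listen on the same name.
  channel.postMessage({ type: 'hello', role: 'capturer' });
  return channel;
}
```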
D: Yeah, I think Jan-Ivar wanted to speak next.
E: Yeah, so I have some minutes after too, but I just want to talk specifically about this. I do agree this is a useful problem to solve. I think this is part of the things we need to solve to have better-integrated web presentations.
E: To be precise, okay, well, I would say that the reason we chose to pursue getViewportMedia was to address the fact that capturing a web surface is the scariest source that you can capture through getDisplayMedia. That's caused a lot of grief, because it's led to pushback against things like making web sources the default in pickers.
E: It seems like we're introducing multiple solutions that are perhaps competing, and this solution would then further enshrine the idea... what does it say about the working group and what it intends to signal? What is the intended API that people should use for capturing web surfaces? And specifically on the API: in your earlier slide, you said the discussion scope is discovery of the generic bare essentials.
E: I also think this API could be smaller. I agree you should always include the origin, and maybe even have a browser-produced ID, to lessen... I'm not very keen on having another communication channel, potentially, where we have to validate a lot of long inputs. If we have an unlimited string that JavaScript can insert into any API, it's going to be abused one way or the other.
D: Sure. So, first, it does not have to be unlimited, right? Implementation-wise, we've limited it to, I think, 1024 characters, and we could discuss what the right limit should be and whether it should be throttled; maybe you're not allowed to change it more than a certain number of times, maybe there's a leaky bucket until the next update.
D: All of this can be discussed. But going back to the point of getViewportMedia, I will repeat what Harald said, but with less gravitas: we have not agreed yet that getViewportMedia should supplant getDisplayMedia. And if we've all agreed, and you've agreed too, that this is a useful thing to have, then I don't think that we should delay giving it to the web because we have plans for 2023 for getViewportMedia. And also...
D: ...I think that this is healthy competition between the two APIs, because I think you could see how both of them would exist. I don't really think that it has to be one or the other, and if it is... yeah, no, no, I'm sorry. Yes, I have a tendency to go on.
E: No, no, I think I phrased myself poorly. What I meant with getViewportMedia was rather an approach that relies on better security foundations, like site isolation and opt-in to capture. That, to me, is the main thing I think we had a breakthrough on in the working group: that we could provide better-integrated web presentations in a safer way. And I would like an API like this to work in conjunction with that, but I would perhaps want to limit it to site isolation and opt-in capture.
D: Then I think that once we have getViewportMedia, we can discuss whether this API still applies and how we could move off of it if we believe it to be detrimental. But right now, I think that it is very useful and I don't really see how it could be abused; I'm open to listening. But right now, especially the suggestion of having a browser-assigned ID, which makes everything less powerful and requires an additional...
D: Maybe we could jump to your slide and you could give the full presentation of what you suggest, so other people would have a better picture of what I'm responding to.
E: So, yes, this is my crystal-ball slide, which you should never contribute to the internet, because it's a prediction, and in two years we'll see: it'll be out there and be either totally correct or totally false. So I'm doing a crystal-ball moment of: what would it look like in two years' time?
E: Excuse me. This means they can capture themselves using getViewportMedia, with permission from the user. And the use case there is: the user clicks the button in the presentation program to join an integrated meeting. That's the starting point; they're actually in the presentation program. But also, the same sites can register for preferential positioning and treatment in getDisplayMedia.
E: I just used slightly different words here, you know, "register intent: presentation," because that's what the API really is. What getDisplayMedia has is a powerful picker that's built into the browser, while getViewportMedia is great when you're already on the permission page, in integrated meetings.
E: If you will, sites that registered their intent to be a presentation program appear in that picker, and then here you would also get an ID back, which could be used in a similar way to what Elad described, and it would be exposed in the track of the capturer, so that you could then correlate and do the necessary things like what we've integrated. Next slide.
D: So one thing that I've not written down, and I think I've not said before, is that if we look at it from the VC applications' point of view, I don't think that they would ever agree to let go of being able to capture an arbitrary tab. They would never be able to say: okay, we can only capture something that collaborates with us, calls getViewportMedia on our behalf, and then shares it with us. That is way beyond 2023, I think.
D: They will always have the flow that starts with the current app, with the VC application, and captures an arbitrary other surface. And in the case that this surface is a tab, which I can tell you is quite often, at least for Chrome users...
D: So, to the intent thing: I have to say it sounds like just an invitation for applications to jockey for position, potentially even by making false claims, saying "I'm a presentation / video game / productivity suite, whatever; please put me wherever." So I don't think that we should rely on that in order to determine position in the picker in any way; that's number one. And for the browser-assigned ID, I would say that I just don't see why it's necessary.
D: I do know why it's detrimental, and that is that it forces another translation step, right? It also means that you have to have collaboration over a cloud, whereas if you have an arbitrary string, you might be able to say all you want to say, and that's it. You could say: hey, I'm a bank; try to warn the user that he might not really have wanted to capture me...
D: ...he's probably made a mistake. Or you could say: you know, I'm not going to give you my session ID, but here's an API that you could use. There could be a lot of things that you could say that would be useful without a shared back end. So, first, you're blocking all of those; and for the ones that do have a shared back end, you're enforcing an additional step, whereas maybe you could talk to each other with a BroadcastChannel. No, that's not good enough!
D: Actually, the BroadcastChannel, I guess you could use if you are same-origin; you could ask, "who is that?" So I take back that particular argument, sorry. And also, with the browser-assigned ID, you're no longer exposing the origin, unless you also meant to add that, which maybe you have. So maybe you meant to say: okay, first you have to opt in before you get the browser-assigned ID, and then you get both the browser-assigned ID as well as the origin.
E: Now let me respond, let me respond. So, yes, I think what viewers should look at here is, whether you have a browser-assigned ID or an application-provided ID: what is the smallest surface...
E: ...that does the job, that does the bare essentials? And my argument is only that all you need is a browser ID, and it's a simpler API. Now, as far as jockeying for position: what is the order of the tabs that are shared in the picker in Chrome today?
D: So first I'll say that this is up to the user agent; you could do a different order if you would like. But the order right now is order of last activation by the user. So it's not alphabetical, as you might have thought. Sure, in which case you could jockey for position, but the user... no.
E: No, I just meant to say that we can keep that ordering; that's fine. But I don't see anything wrong with this: if web pages have gone through the effort of site-isolating themselves and opting into capture, those are inherently safer to share. I see nothing wrong with giving that group as a whole preferential treatment over sites that haven't gone through that. I don't think that's going to lead to jockeying, as far as position within that group.
F: Thanks. I would just mention, now that I understand the proposal a bit more, that the idea that you can update the session ID is really like a one-way message channel, and we already have a construct, MessageChannel, which is both ways; and we have WindowProxy, we have the service worker Clients API. We have a lot of these APIs that allow one context to talk with another. So I would look at these constructs and see whether we should not try to use them.
D: I'm sorry, but I think that is a misunderstanding, because all of the existing mechanisms, like peer connection, etc., assume that you already know who you want to be talking to. Whereas here you've got an existing channel: you're basically getting all of the pixels off of whomever you're capturing, but you don't know who that is. So they can send you messages, like the embedded QR codes, etc., but you don't know who that is.
D: So what I'm offering here is not intended to be this communication channel, but rather an identification mechanism.
D: Sure; if we can have a bootstrap mechanism that can change when the tab is navigated, that would work for me.
E: Well, I think this is an interesting problem that the working group should definitely try to solve, but I think the issues are: what security properties should there be, and what's the scope of the solution for identifying; how narrow does it need to be; and anything else. Oh yeah, I think we should always just expose the origin. I don't know where the idea came from to hide the origin.
B: Okay, we're out of time; we should go to the next one. Okay, we will.
D: Slide, thank you. So let's imagine a post-COVID world in which we also have cheap and instantaneous travel, and therefore, instead of meeting remotely as we are right now, we're sitting in one room. Maybe all of our companies got acquired by the same company and now we're all working together, and we're sitting in a room, and we take turns presenting to a shared screen that's on the wall. And there is a PA system, a set of speakers, whatever.
D: So it would be nice if, when we start capturing a tab, we could also say: hey, kindly stop relaying your audio to the local speakers; this is being captured and we're doing something useful with it. So what I'm proposing here is a new constraint called suppressLocalAudioPlayback, modulo bikeshedding on the name, of course. And if we go to the next slide, we can see the current PR state, modulo...
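The proposed constraint, as described, would ride along with getDisplayMedia's audio constraints. A minimal sketch, assuming the suppressLocalAudioPlayback name survives bikeshedding; buildAudioConstraints is a hypothetical helper.

```javascript
// Hypothetical helper: assembles the audio constraint set for getDisplayMedia.
function buildAudioConstraints(suppress = true) {
  // suppressLocalAudioPlayback is the proposed (bikeshed-able) constraint name.
  return { suppressLocalAudioPlayback: suppress };
}

async function captureTabWithSuppressedAudio() {
  // Capture tab audio and ask the browser to stop relaying the captured
  // tab's audio to the local speakers while the capture is live.
  return navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: buildAudioConstraints(true),
  });
}
```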
D: ...some new comments that have just come in and that I have not been able to address yet. Thank you, Jan-Ivar. So basically, with this constraint we say: no longer play the audio of the captured tab over the speakers. But one thing that we might want to consider is: okay, what if there are multiple captures? In which case we say...
Okay,
we
resume
audio
playback
on
the
local
speakers
when
the
last
one
to
suppress
goes
away
so,
which
I
think
is
relatively
obvious,
and
one
thing
that
yanivar
asked
for
is
for
us
to
explicitly
say
that
the
captured
tab
should
not
be
made
aware
that
it's
been
audio
suppressed,
which
follows
from
the
general
principle
of
you
don't
want
to
let
the
tab
know
that
it's
being
captured,
so
I
100
agree
and
once
we
find
the
right
warding
for
that,
I'm
open
to
one
question
I
had
was:
if
there
are
any
mechanisms
currently
that
could
discover
that.
D: But maybe this is a nitpicking kind of discussion that we could continue after the presentation, so I'll open the floor now for feedback. Actually, I can...
E: ...clarify that. The reason there wasn't actually to prevent the site from knowing it's captured, because it would actually be able to read this from the setting itself. My concern was more about how it would be implemented, because I worry that without that clarification there are two potential ways a browser could implement it. One would be to basically mute audio from the tab, but the document is none the wiser that this is happening; that's the one I think we want.
E: The alternative might be that someone might misconstrue this to mean that sinks, like, you know, audio tracks in a peer connection, would somehow mute, and that would be JavaScript-observable, and I don't think that's what we want.
D: That's okay by me. What did you mean by getSettings? Because, maybe I misunderstood, but getSettings would be on the capturing side, so there we are exposing it, right? So that's not... oh, on the captured side. Sorry, you're...
D: On the captured side, I don't think that there is a way in JavaScript to mute the page; otherwise I would not have suggested this particular thing, because then you could always just send a message to the captured application saying, hey, kindly mute yourself.
F: Okay, suppressLocalAudioPlayback: I'm not quite sure about the name, because you're not suppressing, you're suspending it, and maybe it will resume at some point. So maybe the name is not quite correct and we should change it.
F: My main question would be whether the user agent could not already do that in, like, most cases. For instance, maybe suppressLocalAudioPlayback should be true by default, meaning that if the page is starting to capture and you have a prompt, and in the prompt you say, "yes, I want the audio to be captured," maybe the user agent should be smart enough to say: oh, I will suppress your local audio playback. Or maybe it will...
F: ...the user agent will decide whether to suppress local audio playback based on what the capturing page does: if it's playing this local track, then it will suppress on the capturing side. Or maybe the user agent could provide UI to control that, either on the getDisplayMedia prompt or on the tab icon, where you can mute.
E: So I can jump in a little bit and say that Firefox does actually have a mute icon in the tab, and this is why the PR says the user agent should stop relaying: because we imagine that, since double-muting states are a bad idea, it could actually flip this state, so that users, if they wanted to, could flip this state manually. But I think it's still useful to have a constraint, to maybe have the application provide its intent here.
D: I would like to answer you; you said three different things. Why not just change the default behavior? To that I would say that it would not really work, because you could be capturing something just in order to record it, and not in order to share it. So you could imagine that in some cases you would want to suppress, or suspend, local audio playback, but sometimes not. So you cannot just change that. As for adding any UI elements, I think that is a very open question.
D: I think that we already have a very big picker and it's going to be too difficult; I don't think that any user agent's UX people are going to agree to that. I think that's a bad idea. Even if you were to convince them that it's a good idea, you would probably want to influence the default state when the picker comes up, or maybe you want to... I don't think that...
D: Thank you. So I would say that, sure, it's true that even after the making of the decision, the user needs to be made aware, and potentially even be able to reverse that decision. But I don't think that the heuristic that says "hey, let's observe what the capturing page does and decide based on that" is going to lead to a consistent end-user experience.
F
D
F
So the user will be shown a picker, and the user will click on "share audio", and in some cases the audio will disappear and in some cases the audio will not disappear. That's potentially problematic to users: they will not understand why it's disappearing and why it's not disappearing, and it's the capturing webpage that controls that now. So that's why I'm a little bit hesitant to actually go down that road.
D
That is true, but you could show the user, like, a flashing mute sign or something that shows: hey, the reason this tab is now muted is because — and that is better than offering them a choice, because offering a choice gives a cognitive load. Whereas if you just say "hey, this happened, this is why", that might be more something that UX people might be amenable to. But yes, I agree that we could continue discussing this on GitHub.
E
But as far as that goes, I didn't hear any other objections, Youenn. So if you and Elad can resolve this on GitHub, I hope we don't need to get back to the working group at large.
D
Cool. So, third and last topic by me — sorry, everybody, but I didn't get to present last time and therefore I'm getting some time; I hope you can weather this. So: region capture. Let's assume that we've already agreed on having getViewportMedia in the spec.
D
Of course, there is an implicit decision here that getViewportMedia does not already do some cropping — I'll get back to that implicit assumption in a second — but assume that we have getViewportMedia, which lets you capture the entire tab, everything on it. And what now? Applications can comprise multiple parts, and those multiple parts can be cross-origin from one another, and they can also kind of shift in size and location — especially if you change the size of the window; then the browser decides to move things around.
D
Sorry, the next slide, please. I am not sure how to phrase this — yeah, thank you very much; one less, I think.
D
Yes. So let's assume that we've got this particular collaboration between a productivity suite displaying presentations and a video conferencing application that draws the talking heads on the side. And so, when you capture — let's say you're using the "share me" button that calls getViewportMedia — what you want to capture is just the part that does the presentation. Moreover, you might only want to capture part of that, because maybe there are some speaker notes that would also be embarrassing if discovered.
D
D
Number one is performance, because the browser, instead of having to keep handing over very big frames to the application, might hand over frames that are a quarter of the size — so why not? And the other reason is that it's very difficult, almost impossible, for cross-origin frames to communicate their size and location within the entire viewport when you change the window, and make sure that you never miscrop even a single frame.
D
Thank you. So we've agreed, hopefully — or at least we pretend to agree while I'm speaking — that the browser needs to crop. Next slide, please.
D
So there have been some discussions between me and Jan-Ivar about, okay, who needs to — maybe we don't need to crop; maybe getViewportMedia itself is gonna crop: whichever frame you call getViewportMedia from, you only capture that frame. But I would like to remind everybody that we're not talking about element-level capture here, because you would still be cropping.
D
You would still only be capturing whatever the user can see, so occluded content is not going to be captured and the occluding content is still going to be captured. So essentially, what we would be promoting here is a pattern whereby I have an invisible frame or div or whatever behind all of the content.
D
I massage it to be exactly the size I want, and then I am able to capture whatever I want, even if it's not in my own frame — which is possible. So we don't get any of the intended security of "only capture the frame", because it's very easy to hack your way around it, but we're pushing developers to manage their own capture region, which is dangerous, because they are not aware that, oh, when the window changes, I'm gonna miscrop a single frame.
D
They don't care about that, and they might not even think that it's so terrible, but we would like to teach them to do better. So I think that a more developer-friendly solution would be something that would allow them to have an arbitrary capture — cropping to an arbitrary target — but in a way that cannot be used incorrectly. Next slide, please.
D
So, assuming for the sake of argument that we agree on all the previous points, the question is: okay, how do we indicate which arbitrary target we would like to crop to? And then there are two or three options I could think of. Number one is to just give a DOM node reference, like many other APIs, but the problem is that this would not work cross-origin.
D
D
The other option is to use an ID. But then the question is: okay, but the global attribute id — you know, the basic one — is document-global, not page-global. To which I would say: okay, but maybe we could specify that it only applies if it's really page-global, and it is trivial to get a page-global ID.
D
If you want, you just randomize 128 bits, and it's very unlikely not to be globally unique; and then we can have some definition of what happens if it's not global despite having to be. Another option is to say: okay, let's add another ID — a UUID thing — which is kind of a much-bigger-scope project, but it would be: what if HTML elements just had a UUID that was assigned by the browser and therefore was guaranteed to be unique? That would work, I think, but it would be kind of difficult to standardize.
D
But if we have this use case and maybe a couple of others, maybe we could push that through. And of course, all of this assumes that, you know, you've got collaborating frames — because they're part of the same application — and they just postMessage the ID from one to another.
D
D
I need to refresh my memory on which one is the last one — yeah. So I just wanted to remind, at some point, that you could still get the "I crop to myself" behavior with this approach, right? Like, it's not either/or. You could still say, like: hey, I've got an ID —
D
I just apply that, and I capture myself only. So that works. And the other one I've already gone over, and that is that we're not talking about element-level capture; that's another tool, it's a useful tool, and we probably want to support both eventually, but I'm only talking about the current tool right now. And then the last thing that I thought we could discuss before I ask for feedback — next slide, please — is how do we, assuming that we use an ID — whichever ID we end up using —
D
Do we want to say, okay, as soon as capture starts, we say "crop to this"? In which case we don't need to have some kind of barrier to say, like, okay, and now cropping has started — you just know that cropping was there all along, so you start cropping and you can use the very first frame. But then it might be a bit more difficult to say:
D
Okay, I want to change the target — that requires an additional API. And then, if you've got the additional API, we might just want to support both off the bat, and that is one that says: okay, here's a track, start cropping to whatever; and then maybe you get a promise that resolves when you can guarantee that all the next frames are already going to be cropped, or something like that.
D
So we could synchronize that way. And then, if we have that, it's kind of easy to say: okay, I don't wanna crop to this, I wanna crop to that. So I think that we probably want to support both flows — and that's all I had to say. I welcome feedback.
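The two flows just described can be sketched with a hypothetical `cropTo()` that returns a promise resolving once all subsequent frames are guaranteed cropped. The mock track below only illustrates the call sequence — none of these names are settled API:

```javascript
// Stand-in for a captured video track with the hypothetical cropTo() method.
// A real implementation would resolve the promise only after the barrier:
// "all frames delivered from now on are cropped to the new target".
class MockCroppableTrack {
  constructor() { this.cropTarget = null; }
  async cropTo(targetId) { this.cropTarget = targetId; }
}

async function demoFlows() {
  const track = new MockCroppableTrack();
  await track.cropTo("slide-area");   // flow 1: crop known from the start
  await track.cropTo("speaker-view"); // flow 2: "crop to that instead"
  return track.cropTarget;
}
```

Supporting both flows then reduces to whether the first `cropTo` call can be folded into capture start.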
E
All right, so I have some feedback. You're totally right that the security properties are for the entire page, but I don't necessarily see that as a problem, and I definitely don't see it as a problem with only one of these solutions, because the only difference I see here is whether you postMessage an ID or you postMessage the track. It doesn't really seem —
E
I mean, the same security properties apply. And as far as a security issue, I think it's already understood that an iframe will not be able to call this method without having that permission delegated from the top page. So the top page remains in control, and I think we just need to document these security properties. And that's true with either solution: just because I captured a specific element doesn't give me any security properties, because that element can be behind —
E
It can be invisible; it can be all kinds of things. And also — and I know we should also say we have agreement, which is good, that no one's proposing coordinates, which I love, so at least there's that — but I do worry that whenever you rely on an element, the element can move, and then you have to define what happens to the capture.
E
This is why I think it's better — I think it would be more conservative — to tie it to an iframe, because we already change the size of the capture when the window changes its dimensions, and I think that makes sense for iframes as well. If we go down to individual elements, that seems a little over-specific to me.
D
There is a difference between posting a track and posting an ID, and that is that when you're posting the — the question is basically: are you pushing the developer towards saying, I don't even wanna capture from this frame, I'll just call it from behind —
D
— at the size I want. And so, let's take a step back. Let's suppose, for the sake of argument, that posting a track or posting an ID is the same. You are still ending up with a much more restrictive API if you say that whenever you call getViewportMedia, you automatically only get the iframe. What if I don't want to get the iframe? What if I want to get the entire tab?
D
Yes — but that still assumes that there is an iframe, or a top level, that is the same size, and for some reason you have to communicate with it. Even though it could be that you are embedding a page that you don't want to communicate with — like, you just want to give it the permission and that's it. Like, I'm just hosting a site on Glitch or whatever, and it's like: here's the application, I don't need to know what you are.
D
E
F
Yes, just a first comment on your saying "we could provide both tools". Whenever I'm hearing "we could provide both tools", I'm thinking: why should we provide both tools? We really need a good justification to provide both tools. That's the first thing.
D
F
D
F
On a previous slide, you said there were two options — I do not remember exactly, but —
D
I think that what I said was more along the lines of: let's provide only this tool, and it also gives you the functionality of the other tool. What I said was: if you only crop to an ID, you could always crop to yourself, because you yourself have an ID that you can crop to — which is kind of a hack.
F
If somehow — but anyway, I think that, in general, it seems that there might be agreement that capturing a sub-frame, or capturing an element, kind of makes sense. In a similar vein, we are seeing that full-screening an element makes sense to websites.
F
So maybe we could at least get consensus on that here, and then we can discuss exactly how we could do it — for instance, whether we want this to be very dynamic, or whether it's at capture start that we want to say: hey, this is the element or the iframe, the scope where we want to capture, and it will not change. And once we have had those discussions, then we can figure out maybe the particular API.
F
C
A
Yeah — and the devil is in the details. Should we then move on to our next subject?
D
D
Okay. So — but I think the biggest open question for getViewportMedia is whether it should already crop to itself, or whether it should always get the entire viewport. And I hope that I've managed to convince you that it should always get the entire viewport unless the caller specifies otherwise.
F
F
E
D
D
But that means that whenever I want to iframe some VC, I need to set up a very elaborate collaboration with it, where basically it sends me a message saying: hey, I want you to capture everything on my behalf — oh, I'm sorry, I want you to capture the interesting part on my behalf and send it over. Or — assuming you don't want to crop — okay, so it might not even be a VC; you're embedding something that you want to allow to capture now.
E
So I wasn't following, but I think, in general, you have to have some buy-in from the target being captured, because they're getting a signaling channel here already: they can resize the frame whenever they want and have a significant impact on the output.
D
The outline of my argument is this. First, I'm trying to say: hey, getViewportMedia should always get the entire viewport, and not a version cropped just to itself — unless we add something. So the default needs to be: get the entire viewport. And if I convince you of that, then it would be easier to convince you that, okay, now we need to also add cropping on top of that. So the first argument is that the entire viewport is what you should get by default from getViewportMedia.
D
I claim that when you iframe something — when you embed something and you give it permission — that's not a lot of work; that's a reasonable amount of work to expect of the application. But now you're saying: okay, but every time you embed something and give it permission, you also need to commit to setting up some kind of API between you — a postMessage-based API for "if you ever want to capture, send me this message and I'll capture and send it back".
E
I just want to clarify: I still believe that having getDisplayMedia tied to a frame is the capture solution.
A
E
It would be good to know what the working group needs to make a decision on which approach to take, because we've spent a couple of sessions now, both Elad and I, presenting different avenues, and we haven't really heard much about what the working group prefers.
A
D
F
Okay, so, moving on to another world, where the data is no longer raw — it's encoded. So we're shifting to WebRTC encoded transform and SFrame. There we are talking about SFrameTransform, which implements the SFrame algorithm natively, and SFrame processing can generate errors: like, you do not have the key ID, or the message that you receive has an error — like, parsing fails, or you're decrypting and you're validating the authentication tag and it's not correct.
F
So there are a bunch of errors there, and since it's a native transform, it's good if there's a hook provided so that JavaScript can register for various events and say: hey, oh, there's a missing key ID, maybe I should do something; or: oh, there are a lot of authentication errors, so maybe there's a potential attack — what should I do? Maybe I should drop the connection. So I think it's useful to surface these errors to the JavaScript application.
F
Here's another question, which is whether the native transform by itself should have a default behavior in case of, for instance, an authentication error. For now, there's no default behavior, so it will continue working and it's up to the JavaScript to actually do something — but that's something we could continue discussing in the future. So the proposal — yeah, so the question that we are trying to solve here is whether we should surface these errors to JavaScript, and how we could do so.
F
So the proposal is to expose error event handlers on the SFrameTransform object. These events can be used if SFrameTransform is used standalone — so you're assigning an SFrameTransform to sender.transform, for instance; and in this case, well, it would probably be receiver.transform, but anyway, it's the same. So in case you're missing a key ID, then you run some JavaScript that will handle the unknown key — so probably, in that case, it will register a new key and processing will continue on the receiving side.
F
We are seeing an example where the SFrameTransform can be used as part of an RTC script transform. So the example is doing something like: a first transform, which is parsing some metadata or generating some metadata, and then the second transform is the SFrame transform.
F
So let's say we have an authentication error there. Maybe there should be, like, a method accumulating the errors, and if there are too many authentication errors, the JavaScript application will call peerConnection.close() or something like that, because probably you're under attack.
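A hedged sketch of that surfacing — an error event with a type field, in the spirit of the PR being discussed — plus the "too many authentication failures, close the connection" policy just described. The event and field names (`onerror`, `errorType`) are assumptions, not settled API; only the counter logic is concrete:

```javascript
// Counts authentication failures and reports when a threshold is crossed.
class AuthErrorGate {
  constructor(limit) { this.limit = limit; this.count = 0; }
  record() { this.count += 1; return this.count >= this.limit; }
}

// Wires the hypothetical SFrame error event to application policy
// (not invoked here; `transform`, `pc`, and `keyStore` are app objects).
function attachSFrameErrorHandling(transform, pc, keyStore) {
  const gate = new AuthErrorGate(10); // threshold is an arbitrary example
  transform.onerror = (event) => {
    switch (event.errorType) {
      case "keyID":          // unknown key id: register a key and continue
        keyStore.registerKey(transform, event.keyId);
        break;
      case "authentication": // repeated failures: probably under attack
        if (gate.record()) pc.close();
        break;
      case "syntax":         // parsing failed; left to the application
        break;
    }
  };
}
```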
F
So that's the proposal here: a simple error event, with a few properties that are defined in PR 103. Feedback?
B
Yeah, I have a question, Youenn: how does this interact with NACKing? So, you know, I guess by the time you've gotten this, it's already passed the UDP checksum and everything, correct? So, basically, there's no need to feed that back into the NACK mechanism, right — is that what you're saying? Correct.
F
B
Yeah — do you need — I'm just thinking whether you need more info, like, you know — because I know this is the native transform, so it's a little bit different from a JavaScript one. But, you know — yeah, go ahead. Okay.
F
So the SFrame specification is still a bit light, but at some point it will define some processing, and in those processing steps it will say: oh, this is an error. And the idea is that we will expose all the errors from the SFrame spec in this event handler. So far the spec is still a bit rough, so we have the list that I've put there, but it could be extended, and it should basically match the SFrame spec.
B
Yeah, I'm just wondering — just because it's a native transform — what kinds of actions JavaScript will be allowed. I mean, normally — for example, in IPsec you'll have replay windows and everything, you know; you'll have certain things that your whole implementation will do. But here we're just saying it's mostly native, so it's only stuff that the app — you know, most of it is handled in the browser, I guess. Yeah.
B
F
B
Yeah, I'm just wondering whether there's anything else you might want to do, like go get a key and rerun things with a new key — you know what I'm saying — any other actions that JavaScript might want to take.
F
I'll take it afterwards. Okay — I guess if you want to replay things, then the script transform will probably try to buffer things, like buffer the frames — the encrypted frames — and do something. The native transform, on its own, will not try to buffer or replay things, because that's really application-specific code. So if we see that it's happening a lot, maybe the native transform will not be useful enough, and then we will say: okay, we need to do something more and add some more intelligence to the native transform.
F
F
No, no — if you have, like, a missing key ID, and you're not reacting to that, then maybe the key ID will be changed, or maybe you will get this event again. There's a slight issue in terms of whether you will be flooded with events — that's a potential concern there — but in practice I'm hoping it's not.
E
F
I don't think so. I don't think there's any SFrame-specification-defined error there.
A
C
A
B
F
Anyway, I'm hoping that in most cases you will register the key, and the SFrame specification is trying to provide some guidelines so that you do not end up with two thousand missing-key cases — because in that case you will probably drop chunks on the floor, and you will have to ask for a new keyframe and so on, and it will be bad. So I'm hoping most applications will try to avoid getting into those errors.
B
Yeah, I think there's a need for another event for a re-key: if the key schedule changes, the JavaScript may need to know that, and go fetch another key. It may get incoming packets that indicate that it's a different key ID — that the key got changed on the other side.
F
But — oh, okay, that's a different mechanism. Yeah, we could think about that. So you're saying: if the key ID is changing, even though I already registered the key — so everything is functioning — then maybe I want an event to know that the key ID changed. Is that correct?
B
Right. Like, I'm the receiver and I start getting packets; I don't have the key — somehow the key exchange, I didn't get it — and I start getting something with a new key schedule. I'm going to start getting these errors, but, you know, I need to have an event: hey, dude, you need another key, or the key schedule changed.
B
F
Yeah — because there's a type for the error. So if you have an authentication error, it's different from a key-ID-missing error, and from a parsing error.
B
A
So we've got three minutes left, and I think we have rough consensus that we should continue pursuing this. This, of course, has to track the SFrame spec pretty closely.
F
A
And it's only specified for SFrame. It's possibly a pattern that other transforms can use, but in the specification it's only on the SFrame transform.
F
Okay, can we go to the next slide then, and record that we can proceed with the PR modulo changes? So the next issue is WebCodecs encoded chunks versus WebRTC encoded frames. Currently, WebRTC encoded transform has its own object for representing encoded frames, and WebCodecs also has the same thing, and they're kind of similar, but they have differences.
F
The current WebCodecs proposal is to have immutable encoded frames; on the other hand, for WebRTC encoded transform, the idea is really that you take a frame and you change the data of the frame, but not anything else — you will not change the metadata, you will not change the timestamp, you will not change anything else — and then you send it to either the decoder or the packetizer. And to have good properties:
F
The data ownership in WebRTC encoded transform is transferred at the write step, so JavaScript, even if it tries to change the data after sending it, will not change how the packetizer or the decoder will process the data. And also, since we're RTC-specific while WebCodecs is more generic, there's some specific metadata exposure.
F
So the question is: should we try to bring all of these together, or should we just have two similar code paths — two similar constructs that are slightly different? Next slide. So, yeah, we could go with the status quo, which is currently to have two different constructs: we keep a frame for which we can change the data attribute, and we can reuse, of course, a lot of subtypes, like the keyframe type.
F
The type of the frame would probably be shared, and the metadata — the application metadata — and several things would be shared: the timestamp and so on. So we could use duck typing, basically, like WebSocket is doing with data channel, wherever it makes sense. Or we could try to merge the two APIs — and then, if we merge the two APIs, we need to agree either to go with WebCodecs immutability, which is not matching very well with the transform model —
F
So, initially, I was thinking about going with just one construct, but now I think that we could also use two different constructs. It's not a lot of complexity to implement both; there's not a lot of shared code — there would be some shared code — and it doesn't seem like it will harm developers too much either. So I would go with the easy path and keep the status quo, but I'm happy to hear from others.
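The duck-typing overlap can be made concrete: both objects carry a frame type ("key"/"delta"), a timestamp, and a data buffer, so bridging is mostly a field copy. `EncodedVideoChunk` is the WebCodecs construct; the WebRTC-encoded-frame field names here are assumptions, since the drafts were still moving at the time:

```javascript
// Extracts the fields the two constructs share, dropping anything
// transform-specific (duck typing in practice).
function sharedEncodedFields(frame) {
  const { type, timestamp, data } = frame;
  return { type, timestamp, data };
}

// Bridging one way (browser-only; not invoked here): build a WebCodecs
// chunk from a WebRTC encoded frame.
function rtcFrameToChunk(rtcFrame) {
  return new EncodedVideoChunk(sharedEncodedFields(rtcFrame));
}
```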
A
A
You're right about the processing model being slightly awkward when you have an immutable frame — you probably have to do special tricks to make it effective. But if you don't have the same type and you want to connect a WebCodecs transform to an encoder transform —
A
B
Well, traditionally, in WebCodecs we've defined methods to transform into different data types, so that, you know, you could do a transformer — you could have, like, a WebCodecs method to, you know, transform it into something else.
B
So I don't know that this is a huge obstacle — the difference, I think, could be smoothed over — but I think it is something that does need to be looked at more, because, for example, we should be able to use WebCodecs with end-to-end encryption too, right, in an easy way.
F
Yeah — so for end-to-end encryption, SFrameTransform is taking RTC encoded video frames, it's taking RTC encoded audio frames, and it's taking BufferSource, which is a raw buffer, basically. So it would be very easy to take VideoFrame, but the model would be a bit different, because then you would need to create a different video frame — but I think it's fine. We would —
F
We would need to define a way to copy all the metadata — so really a clone — and it would be specified natively. And if we go down that path, maybe we could add a method to do that in JavaScript. I would dislike the idea that, in WebRTC encoded transform, you have to create a new object and copy all the properties manually — and, of course, miss some — just to change one field. So we need —
F
If we go down that path, we would really need to add a new API.
B
Yeah, I think we're out of time for this item, but what — what did we —
E
E
I had a quick comment — just that, to quote Youenn from earlier, paraphrased: to have both seems problematic (in a different context). But has anyone in WebCodecs considered an approach where these encoded chunks could be, instead of immutable, lockable or something like that, Bernard? You know.
B
Yes — but I think we don't have time to talk about that here; we should bring it up, yeah.
B
My recommendation — my recommendation, Youenn — is to actually raise an issue in the WebCodecs repo relating to the immutable/locking issue, because I think that's part of what you brought up.
F
Yeah, but I tried to push for mutable, and clearly I got feedback that they do not want it.
F
I can take an action to file an issue on the WebCodecs repo — yeah, I'm not sure where it will go.
A
Right — and since we have changed out the means of getting streams in the spec, it would be nice to have the change of frame format land at the same time. Break it once — break it once, break it twice, and you're not going to be popular. Okay, next topic.
B
A
A
A
We have had an open item on discussing whether ReadableStream and WritableStream are the right approach, and I believe that we have had sufficient discussion, and a sufficient lack of alternatives, that we can conclude that this is the right approach: developers like it, and we have not seen significant downsides of using it identified for our use cases.
F
Yep. So I would say that the GitHub thread is very active, and with Guido we made a lot of progress.
F
We tried to draw some temporary conclusions, and I think that we have consensus on some points, not on others. But at least we had consensus on the idea of getting feedback from more people, which maybe we could try to do at the next interim. And the second thing is that it might be time, for instance, for me to provide alternative APIs — you're mentioning that there's a lack of alternative APIs.
F
I can take an action for the next interim to actually provide alternative APIs — or maybe scope it down to just one alternative API — so that we can discuss things. I think Guido raised the valid point that I was mentioning that ReadableStream/WritableStream has a cost, and he mentioned that, to evaluate the cost, you really need to see alternatives, and then you can compare — and I agree with that.
F
So that's why I'm fine with trying, maybe with Guido, to first summarize the GitHub issue discussions for the next interim and, second, to present alternative proposals.
F
Would you be fine — you know, would you be fine with just trying to prepare slides to summarize this issue for the next interim?
G
A
C
Could somebody send me the URL for that, so it gets into the notes? — It's on the slide.
C
A
A
— that we do not take a decision on whether or not to go with ReadableStream/WritableStream now. Okay: we attempt to get the information we need to make a decision in one month from now.
A
Of course — but I have not seen anyone suggest that we have a call for adoption now. Oh —
A
And the main-thread approach proves the thing should be possible, which is the basic thrust of our initial proposal. And the other approach, I would say, is that worker processing should be mandatory — which is the encoder-stream version that Youenn is advocating — and main-thread processing should be hard or impossible.
A
A
F
— the rest of the working group. But just before that: I would not say that what I'm suggesting, or what Youenn is pushing, is to make worker processing mandatory. It's already possible to do it on the main thread through canvas, and it's working, right? Right.
F
I think what we are suggesting is to first try to solve the use cases of raw media capture in a worker environment, as a first step, because we know — I think we agree — that it's the best way; and then, as a follow-up step, we work on main-thread processing, maybe. That's what I'm suggesting — it's not "worker processing should be mandatory and forbid the main thread". That's not what I'm saying.
E
Well, I would paraphrase it as saying that main-thread processing is irresponsible. I think we should have APIs that guide people strongly toward workers, without necessarily making it totally impossible to then transfer the result back to the main thread if you really want to. And I think this is doubly important with things like funny hats, because you often want the self-view: if I have a funny hat, I want to see myself with the hat, as well as the other side seeing it.
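The worker-first pattern being argued for can be sketched with the track-processor proposal of the time (`MediaStreamTrackProcessor` / `MediaStreamTrackGenerator` — Chromium's proposal, not a WG decision): the main thread transfers the streams, and the per-frame work, funny hats included, runs off the main thread.

```javascript
// Main thread (browser-only, shown as comments):
//   const processor = new MediaStreamTrackProcessor({ track: cameraTrack });
//   const generator = new MediaStreamTrackGenerator({ kind: "video" });
//   worker.postMessage(
//     { readable: processor.readable, writable: generator.writable },
//     [processor.readable, generator.writable] // transfer, don't copy
//   );

// Worker: pipe frames through a TransformStream; the per-frame effect
// (e.g. drawing a funny hat) would happen here, off the main thread.
globalThis.onmessage = async ({ data: { readable, writable } }) => {
  const transformer = new TransformStream({
    transform(frame, controller) {
      controller.enqueue(frame); // effect processing would go here
    },
  });
  await readable.pipeThrough(transformer).pipeTo(writable);
};
```

The self-view case then just attaches the generator's track to a local video element as well.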
E
A
B
Yeah, I can actually give a comment, because I've been working on an application that actually only uses the main thread, and it works fine — I was surprised that it worked so well. And the comment was that it's a game-streaming thing that uses MSE, right? And, you know, you have to look at whether, in that kind of situation, MSE doesn't work in a worker thread.
B
So, you know, if you do this and you say something like WebCodecs decode, it essentially provides the same functionality as MSE; and if MSE doesn't work on a worker thread, then you might want to do this on the main thread. I'm not saying that you couldn't improve it by also supporting it in a worker thread, but I don't see why you would prohibit it on the main thread, if all you want to do is substitute, like, MediaStreamTrackGenerator and WebCodecs for MSE.
B
I'm just saying — yeah, I'm just saying it's moving to a worker, but it isn't in a worker today. And if you just want to use this as a substitute for MSE — you know, nobody has said that MSE shouldn't be allowed, or should only be allowed in a worker, right? It does function. So I do understand Jan-Ivar's argument that some things might be such heavy processing that they should be done in a worker — but not everything is.
F
It should also be noted that MSE is not real-time. It means you do a fetch on the main thread, you get a chunk, and then you put the chunk into the media pipeline, and there is a lot of buffering there. But it's buffering of the encoded data, not of raw media data, and that makes all the difference. This API is targeting raw media data: we are talking about taking the input of the camera, doing that on the main thread, and hoping that it keeps up.
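The encoded-versus-raw distinction above is a size argument as much as a timing one. A minimal sketch of the arithmetic, assuming I420 (a common raw pixel layout at 1.5 bytes per pixel; the resolutions below are illustrative, not from the discussion):

```javascript
// One raw I420 video frame costs width * height * 1.5 bytes: a full-resolution
// Y plane plus quarter-resolution U and V planes. Buffering encoded chunks is
// cheap by comparison, so a main-thread stall on raw frames backs up megabytes.
function i420FrameBytes(width, height) {
  return width * height * 1.5;
}

// e.g. a single 1080p raw frame is about 3 MB, versus tens of KB encoded
const bytesPer1080pFrame = i420FrameBytes(1920, 1080); // 3,110,400
```

A queue of even a handful of raw 1080p frames is therefore tens of megabytes, which is why the buffering strategy that works for MSE's encoded data does not carry over to this API.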
C
It should be easier to do it in the worker. The assumption should be, where you introduce a developer to this concept, that the example does it through a worker; and if, for some debugging reason or some other complexity, you end up wanting to do it on the main thread, then okay, maybe that should be possible. But I really think the default should be off the main thread, and I'm thinking about the low-spec multi-core machines.
G
On that difference: well, I have to check that for myself, but actually what I've heard from the media people in Chromium is that it's on low-spec machines where workers do not make a difference, so that argument... Maybe we have someone from media here; I don't know if Thomas has some comments there.
H
I do not know about that portion specifically. Well, we do always offload everything; we're not actually doing any decoding on the main thread, just to clarify, but I think maybe everybody knew that. I don't know about the performance on low-spec hardware, though.
G
An example of that: they have an application, an internal prototyping application for effects, and they write it using a language that transpiles to JavaScript, so it's not plain JavaScript. It's a production application, but internal; not a video-conferencing application, just one to evaluate and prototype effects and things like that. And they say that using a worker there is not only unnecessary but also complicates their application.
G
For effects, the main thread is actually the best solution. So that's an example of use cases where the main thread is actually the right choice; those examples exist, we just need to ask enough web developers. So making it exceedingly difficult to use the main thread for use cases like this is kind of counterproductive.
I
Hey everyone, my name's Matt. I work on the Xbox game streaming cloud gaming product. I've been listening in for a couple of iterations, but I wanted to introduce myself first.
I
The thing I wanted to offer to this discussion is some practical discoveries from running a cloud gaming product in a web browser. One of those is that we actually hit quite a lot of major GC hiccups trying to run everything on the main thread, because, you know, we build a web app like everyone else does: we put in literally tens of thousands of npm packages.
I
We don't really know in great detail what the fine-grained behavior of those packages is, so the main thread gets pretty blocked up. Then major GCs come along, and some of them take long enough to cause hitches in your frame processing when you're trying to do media stuff. So we do quite a bit of media processing on the main thread, because things like MSE have to happen there.
A
I think we have total consensus that worker processing needs to be very easy, probably so easy that it is the first thing we present when we do examples.
E
I don't see any other hands, so can I go? All right. I would just say that, yes, I think the spec should be very clear that we expose this to workers. I don't think we should go out of our way to prevent the result from ending up on the main thread, but I also think Surma has a great presentation on YouTube; you can find one that says...
E
One use case that hasn't been mentioned, which I would call "Zoom on the web", is not really that real-time, because the end result is WebCodecs, and we don't actually mention WebCodecs as a use case in the breakout box spec. But with the one that is mentioned, funny hats, you do have a real-time budget that's quite short, and doing that on the main thread would, I think, be prohibitive. Maybe not on your fastest machines, but we want it to work everywhere.
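That "quite short" budget can be made concrete. A small sketch (the specific frame rates are assumptions for illustration, not figures from the discussion):

```javascript
// Per-frame processing window at a given frame rate. Everything that touches
// a frame (capture, funny-hat transform, encode hand-off) shares this budget,
// and on the main thread it is also shared with layout, GC and app script.
function frameBudgetMs(fps) {
  return 1000 / fps;
}

// At 30 fps the whole pipeline gets ~33 ms per frame; at 60 fps, ~16.7 ms.
const budget30 = frameBudgetMs(30);
const budget60 = frameBudgetMs(60);
```

A single long GC pause or a blocked event loop eats the entire window, which is exactly the hitching described earlier in the discussion.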
A
Oh, it turns out not to be; that's beside the point. I tried it anyway.
A
I see some softening, in that we seem to agree that it's not fatal for the API proposal to make it possible to do work on the main thread.
A
So I think we need to come up with specific proposals, and of course one specific proposal is actually an exercise, which is the media capture.
A
So we had some interesting discussion on what is a source.
A
I agree. They are in two different pull requests at the moment, which is the only reason why they touch on each other. So we can combine these two in four ways. Yes.
E
Yeah, I would say, and Harald said he promised to come back with a solution, that I also like the idea of transferring MediaStreamTrack. So one solution to the earlier problem we discussed might be, for instance, to only expose the method that allows processing on the MediaStreamTrack in the worker; and then, if you get a track back that you would usually want to use on the main thread, you can always transfer it back.
E
As to the question of move versus copy on the list, I lean toward move, because tracks are already cloneable. You can create these clones, but it's a leaky API: JavaScript already today is forced to hunt down every clone it has ever created with track.clone() and make sure to call stop() on all of them. If they don't do that, they might get a resource leak, like the hardware camera light staying on until garbage collection happens.
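A hypothetical sketch of the bookkeeping burden just described: with copy/clone semantics the app must remember every clone it ever made and stop each one, or the capture stays live. `CloneRegistry` and the fake track below are illustrations for this discussion, not any spec's API; in a browser the real object would be a MediaStreamTrack.

```javascript
// Tracks every clone so the app can reliably release the capture device.
// Forgetting even one clone keeps the camera light on.
class CloneRegistry {
  constructor(track) {
    this.tracks = new Set([track]);
  }
  clone(track) {
    const c = track.clone();
    this.tracks.add(c);
    return c;
  }
  stopAll() {
    for (const t of this.tracks) t.stop(); // stop the original and every clone
    this.tracks.clear();
  }
}

// Stand-in for MediaStreamTrack so the sketch runs outside a browser:
function makeFakeTrack() {
  const t = {
    readyState: 'live',
    stop() { t.readyState = 'ended'; },
    clone: makeFakeTrack,
  };
  return t;
}
```

With move semantics this registry becomes unnecessary: there is only ever one live handle, so a single stop() releases the device.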
E
You could of course make the same argument the other way: you could always copy by default, and if you didn't want the copy you could call stop(). But there's a cost to having these copies, so I would pick the option where you don't end up with unnecessary copies if you make a mistake, because every track is going to have its own constraints, which is going to be a burden upstream if it has multiple clients; it needs to resize media to order, and also once we start adding transforms...
E
These are also going to be part of a single track and not all tracks. And we haven't even actually talked about it, but I assume that if you clone a track that's already been transformed, you're cloning the transformed result, which we also said we should make sure to state somewhere.
B
We're almost out of time, about two minutes left. Do we have a summary here? Next steps?
B
I'd have to understand the performance impact of these two different things. I mean, just because you have a copy doesn't necessarily mean that there's additional overhead in the track itself, right?
A
Okay, if you remove the lifetime-management special handling, then I think I could live with serializable as well as transferable.
F
Okay, we could do that and file another issue for how we define it precisely.
B
I think we had consensus for transferable at least first. Is that right, Harald? Yes.