From YouTube: WebRTC WG, January 19 2021
Description
W3C WebRTC WG meeting, January 19 2021
A: So, welcome to the first interim meeting of 2021. We'll be talking about insertable streams, a cropping control for MediaStreamTrack, and some open issues from WebRTC Extensions.

A: Just one request: we're trying to schedule a February interim, so we have a Doodle poll open. Please fill that out. The poll will close today, so hopefully we'll find a time and date that everyone can come to. But please do fill it out.

A: A little bit about the meeting: it is being recorded; recording has been turned on. Can we get a volunteer to be a scribe?
A: Okay, all right. Here's the menu for today: I have insertable streams, then the cropping control, and WebRTC Extensions stuff. All right, Harald, over to you.
E: Okay, next slide. So, that's a quick status report; there's really nothing to discuss for the working group. Now, on the proposal for raw media insertable streams: we do have an implementation, so we have it solidly working for video behind the flag, as an origin-trial thing. As usual we have performance, like I said, roughly equivalent — well, we're within an order of magnitude, at least, of production canvas-based solutions. We're trying to beat them, but we're not quite there yet. We did make some changes to the spec.
E: Based on the discussion at the last meeting I removed my stage 2, so we only have the separate track processor and track generator objects. In the process of implementing it, we made a few changes to the spec. We renamed the track processor and track generator to MediaStreamTrackProcessor and MediaStreamTrackGenerator — not just because we like long names, but because polluting the namespace with very unspecific names is a bad thing. And we found reason to add a buffer-size argument to the track processor.
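The processor/generator pair just described connects a track to WHATWG streams. As a hedged illustration of that plumbing — not the browser API itself, which needs a real MediaStreamTrack and a browser — the same readable-to-transform-to-writable shape can be sketched with plain objects standing in for video frames:

```javascript
// Sketch only: plain objects stand in for VideoFrames so this runs outside a
// browser. In the real API (as renamed above) the readable would come from
// something like new MediaStreamTrackProcessor({ track, maxBufferSize }).readable
// (the buffer-size argument name is an assumption here) and the writable from
// new MediaStreamTrackGenerator({ kind: "video" }).writable.

async function processTrack() {
  // Stand-in for MediaStreamTrackProcessor.readable: a source of "frames".
  const readable = new ReadableStream({
    start(controller) {
      for (const id of [0, 1, 2]) controller.enqueue({ id });
      controller.close();
    },
  });

  // The app-supplied per-frame transform (e.g. a video effect).
  const transform = new TransformStream({
    transform(frame, controller) {
      controller.enqueue({ ...frame, processed: true });
    },
  });

  // Stand-in for MediaStreamTrackGenerator.writable: collect the output.
  const out = [];
  const writable = new WritableStream({
    write(frame) {
      out.push(frame);
    },
  });

  await readable.pipeThrough(transform).pipeTo(writable);
  return out;
}

processTrack().then((out) => console.log(out));
```

The point of the shape is that the app only ever touches standard streams; the track-specific pieces sit at the two ends.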
E: That is almost ready; as Gilda says, the change lists are in review. And for the control signals: we're proposing changing the names of the control-signal tracks to writableControl and readableControl, just because having "writable" and "readable" for something that is not a transform is a little bit confusing. So there will be spec updates; I'll have more to report on that in February.
A: I have a question, Harald. What do you think — is there anything in the spec that would be needed to improve the performance beyond canvas, or is this purely an optimization issue in the code?
E: We're trying to make it an optimization issue. I suspect that we might come back and want to talk more about buffer handling, because buffer handling is kind of the place where the rubber hits the tarmac, preventing takeoff, or whatever. We have to figure out, first, that this is indeed a problem that you can't solve within the spec, and second, we have to come up with some bright ideas for fitting stuff into the spec that makes the right things happen.
E: Next step, yep. So insertable streams on the WebRTC side is basically a "wire out" on the side of the sender and receiver: you pick up two wires, connect them through your own kit, and that handles the frames; everything else goes on as usual. That means the only connection you have is these two wires, and the piece you're connecting into is still connected on the back side.
E
So
when
we
had
the
discussion
about
pickup
box,
you
saw
my
stage
2
and
stage
3,
and
everyone
said
now
stage
two
doesn't
make
sense:
let's
not
do
it,
including
the
guy
in
charge
of
implementing
it.
So
we
said:
oh,
let's
not
do
that,
and
so
going
back
to
insertable
streams,
it's
kind
of
yeah.
The
same
principle
should
apply
here
too.
E: It's reasonably obvious that when you have encoded data, there's much more going on inside — in terms of feedback signals, congestion control, keyframe requests and all that stuff — that is going to hurt you if you don't do it properly, so there has to be a feedback channel. And the experience with insertable streams is also that it really does not play well with SDP: if you insert your own encoder or encryptor or whatever, what comes out is not, in fact, what came in, and pretending that nothing has happened to it in terms of SDP is a problem.
E: It's called RTCRtpSender because that's what it's called; we don't change the names of existing objects, we just redefine them a little bit. And then you can have a second class that is also an instance of the base sender, and that exposes these streams.
E: So on the sender side we can add another call, addFrameStream, which will cause the peer connection to create a frame-sender object rather than the RTCRtpSender object, doing exactly the same thing as addTrack; and similar modifications elsewhere. And when receiving, we can do the same thing as we do with data channels: we tell the peer connection what kind of receiver we want.
E: So when we have control messages, it seems logical to use that facility for letting the object you insert say "I want this kind of codec" — and it can actually receive the same thing from upstream if you actually connect it with the track: you can propose a codec, you can accept a codec.
E: So there are a number of challenges to this, including that the audio/video buffer requires that you deliver data every 10 milliseconds, on the clock — every 10 milliseconds, on the clock — and of course doing garbage collection on dynamically allocated buffers is a challenge when you're delivering every 10 milliseconds on the clock.
E: Well, we have to have a design, finish it up, implement it, and see if it actually can be made to work before we can say that — but I think it's worth trying.
E: I don't know. I suspect that we would need to say that there's a buffer pool: here's how you say that you take things out of the buffer pool and put them back into the buffer pool. But I think there has to be a pool, and I think it needs to be available both to the scripts and to the native side.
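The buffer-pool idea — reuse allocations instead of garbage-collecting one per 10 ms chunk — can be sketched as a simple free list. All names here are made up for illustration; this is not a proposed API.

```javascript
// Illustrative sketch of the buffer-pool idea: reuse fixed-size ArrayBuffers
// instead of allocating (and later garbage-collecting) one per 10 ms audio
// chunk. BufferPool, acquire and release are invented names for the sketch.
class BufferPool {
  constructor(byteLength) {
    this.byteLength = byteLength;
    this.free = []; // returned buffers waiting to be handed out again
    this.allocations = 0; // counts real allocations, to demonstrate reuse
  }

  acquire() {
    if (this.free.length > 0) return this.free.pop();
    this.allocations += 1;
    return new ArrayBuffer(this.byteLength);
  }

  release(buffer) {
    this.free.push(buffer); // make it reusable instead of leaving it for GC
  }
}

const pool = new BufferPool(480 * 2); // e.g. 10 ms of 48 kHz 16-bit mono audio

// Simulate 100 "10 ms ticks": acquire a buffer, use it, give it back.
for (let tick = 0; tick < 100; tick++) {
  const buf = pool.acquire();
  new Int16Array(buf).fill(0); // stand-in for filling the buffer with samples
  pool.release(buf);
}

console.log(pool.allocations); // 1: one buffer served all 100 ticks
```

The interesting design question raised above is exactly who gets to call acquire/release: if both the script side and the native side share the pool, neither has to allocate on the hot path.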
G: I was going to say: I like this, but I'm kind of nervous about some of the suggested directions — particularly about getting SDP in at that level. I think the data channel stuff — the idea of using data-channel-style, kind-of-arbitrary labels — that style is much easier to manage, and we get involved in far fewer standards battles that way, which I think is a huge win. So yeah, I'd want to keep an eye on that.
E: That requires the generic RTP encapsulation format — Bernard can speak to that — if you want to use RTP.
G: I think you'd also need some tag that would go along with it, which would let you know what you're expecting it to be. I don't think the mid is enough to say what the protocol is, essentially, or what the rules for handling it are. You need some kind of arbitrary label or protocol tag or something as well, I think.
I: So I struggle with some of the assumptions here — that we need a feedback channel, for one. You mentioned congestion control and keyframe requests, but this is all client-side, and in JavaScript. And as for possible applications: a jitter buffer in JS is nice, but could you talk more about the other applications for this, beyond that?
I: Yeah, it's a little hard to comment; it's fairly —

E: Yes, yes, yeah. It's very, very —

I: Right. If I were to nitpick — you know, bikeshedding on names — names like "track" and "stream" are quite confusing at this point.
I: So you're breaking out a new class for the sender to have a writable, but then the application example is for the receiver — so I'm assuming a similar API there.
I: Some of the concerns I have: this still feels like wires on the side, a little bit; and it sounds like we're now back into encoded data, and there are other proposals with raw data, and it's a little confusing. It kind of seems like we're trying to break up these APIs so you can get access to data, and the question is what kind of class should hold the representation of this data — and we're building them in two places, in peer connection and in raw media.
I: And I guess, also for the feedback channel: how certain are you that a stream is the best representation for that, rather than methods? Because they sound like a way for the sink to control things about the delivery.
I: For instance, if there's a feedback channel, it would have to be some kind of protocol, I assume. I mean, it's interesting, but I'm not sure I'm sold on the direction; learning more about those things I mentioned might help, yeah.
G: I think my final piece of feedback would be: let's not forget about Web Audio. Giving these audio examples triggers me into thinking that, actually, this would be something that would be nice to plug into the front end of Web Audio — and it looks like doing that is going to be quite clumsy.
E: My hobby horse says that I want to write more demos, so that I can show not only that it's possible to connect these things, but that it actually works when you try. We had a demo that broke and surfaced an important bug the other day, which was, I think, an excellent use of demos.
H: Yes — so this is also related to the same area. I guess I'm still a little fuzzy about how the work you presented fits in, whether it's in insertable streams, but anyway: at the meeting in December — or November, maybe — we started discussing going to a world where WebRTC insertable streams would be defined as transforms.
H: So I presented half of the slides as a demo before; here are the remaining slides. Just to recap: there's an ongoing Safari experiment which basically implements WebRTC insertable streams using a transform model. Basically, you add a transform attribute to sender and receiver, and then you can set a transform — like SFrameTransform, which is purely native and implements SFrame — or you can have a script transform, which allows implementing the transform in JavaScript, in a worker. Next slide.
H: Okay, maybe I will — can you mute? There seems to be an open microphone.
H: Thank you, okay. So the questions that were raised during the meeting — I remember three. The first one was: should we expose streams or frames? Then: should we expose the feedback channel, like Harald mentioned on the other side? And also, it would be good to evaluate the API complexity.
H
So
I
decided
to
go
with
some
examples
there
and
the
goal
is
not
really
to
to
look
at
the
api
exposed
in
workers,
but
mostly
to
look
at
the
api
in
window
and
see
whether
it
can
adapt
and
see
whether
it's
complex
or
not.
So
the
first
example
next
slide.
H
So
we
we
imagine
that
there's
a
script
transforming
window
and
when
you
instantiate
the
script
window
screen
transform
in
window,
then
there's
the
script
transformer
you
know,
worker
that
is
created
the
script
transformer
is
responsible
to
implement
the
actual
transformation
from
frames
in
a
worker
and
the
script
transform
is
there
to
allow
the
webpage
to
control
what's
happening
in
work,
so
you
create
a
transform.
You
say
you
assign
it
to
sender,
then
there's
a
port
so
that
you
can
exchange
messages
between
the
street
transform
and
the
transformer.
H
So
that
you
can
send
keys
or
whatever
and
on
the
worker,
there's
the
creation
of
a
transformer.
So
here
I
took
an
event
like
on
rtc
transform,
but
it
could
be
different
things.
That's
not
the
point
there,
but
what
we
can
see
there.
So
let's
say
that
now
we
have
the
worker,
the
worker
has
a
transformer,
so
the
transformer
can
send
messages
back
to
the
transform
we
can
say
it
has.
It
can
expose
readable
and
writable
just
like
current
instable
streams,
and
you
can
do
pipe
through
pipe
2.
H
Just
like
you
would
normally
do
and
that's
working
so
there
it's
the
stream
based
variant
but
of
course,
on
the
right
side
of
the
slide.
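The stream-based variant just described — the transformer hands the worker script a readable and a writable, and the script pipes one into the other through its own TransformStream — can be sketched as below. The transformer object is faked with in-memory streams so the sketch runs anywhere; in the real model it would arrive via something like the onrtctransform event mentioned above, and the transform step would be a real SFrame-style encryptor rather than a toy tag.

```javascript
// Fake the transformer the platform would hand to the worker: a readable of
// encoded "frames" and a writable that forwards them onward (to the
// packetizer on the send side). Plain objects stand in for encoded frames.
function makeFakeTransformer(frames, sink) {
  return {
    readable: new ReadableStream({
      start(c) {
        frames.forEach((f) => c.enqueue(f));
        c.close();
      },
    }),
    writable: new WritableStream({
      write(f) {
        sink.push(f);
      },
    }),
  };
}

async function workerMain() {
  const sent = [];
  const transformer = makeFakeTransformer(
    [{ data: "frame0" }, { data: "frame1" }],
    sent
  );

  // What the worker script does with the real transformer: the pipe "dance".
  await transformer.readable
    .pipeThrough(
      new TransformStream({
        transform(frame, controller) {
          // Toy stand-in for an SFrame-style per-frame encryption step.
          controller.enqueue({ data: `enc(${frame.data})` });
        },
      })
    )
    .pipeTo(transformer.writable);

  return sent;
}

workerMain().then((sent) => console.log(sent.map((f) => f.data)));
```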
H
You
can
see
that
you
can
change
a
little
bit
the
model
there
just
on
the
worker
api
and
then
bang
it's
frame
based
and
it's
sort
of
fine
as
well,
so
there's
no
longer
readable
or
writable,
but
there's
on,
for
instance,
an
on-frame
event
and
you
you
can
do
things
and
you
can
write
and
and
that's
working
so
it
it
shows
that
the
fact
that
we
are
checking
stream
or
frame
we
can
decide
later,
but
the
transform
model
that
we
that
we
apply
there
can
can
use
both
models
and
in
terms
of
javascript.
H
Example:
two
yep,
so
there
I
decided
to
go
with
another
model.
Let's
say
that
we
have
a
script
transform
and
we
can
only
instantiate
it
in
worker
and
it's
transferable,
because
we
implement
the
transform
as
a
transform
stream,
as
defined
in
the
word
for
image
streams,
specification
and
it's
transferable
there.
You
can
see
that
the
window
html
page
is
very
similar.
H
You
still
have
a
transform
attribute,
you
assign
it
and
that's
all
on
the
worker
side,
you
need
to
create
an
rtsp
script,
transform
it
has
a
callback
called
transform
frame
which
takes
which
has
a
different
signature.
It
takes
a
frame
and
it
takes
a
context,
that's
by
definition
of
the
transform
stream
in
what
program
workforce
spec.
H
The
context
is
interesting
because
you
you
need
it
to
enqueue
data
like
a
regular
transform
stream,
but
we
could
also
decide
to
extend
it
so
that,
for
instance,
you
could
get
information
like
betrayed
you
could
we
could
extend
it
to
say
hey,
I
want.
I
want
to
record
the
keyframe
okay,
let's
go
for
it.
The
good
thing
there
is
that
in
example,
one
example
two
we're
seeing
that
it
can
be.
It
can
be
extended
and
if
you
look
at
the
exposed
api
there,
it's
it's
not
a
big
api.
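The callback variant being described — you hand the platform an object with a transformFrame(frame, context) method and use the context both to enqueue output and, in an extended form, to reach feedback knobs like a keyframe request — can be sketched as follows. The "engine" below is a stand-in for the browser; the transformFrame/enqueue/requestKeyFrame names mirror the slide discussion, but all of this is illustrative, not a specified API.

```javascript
// Stand-in for the platform side: drives the app-supplied transformer by
// calling transformFrame(frame, context) once per frame. In a browser this
// loop and the context object would be provided by the implementation.
function runEngine(transformer, frames) {
  const output = [];
  const keyFrameRequests = [];
  const context = {
    enqueue(frame) {
      output.push(frame); // forwards the frame onward (packetizer/decoder)
    },
    requestKeyFrame() {
      keyFrameRequests.push(true); // proposed feedback knob, recorded here
    },
  };
  for (const frame of frames) transformer.transformFrame(frame, context);
  return { output, keyFrameRequests };
}

// What the worker script would define: an SFrame-ish transform that asks for
// a keyframe whenever it rolls over to a new encryption key.
const myTransform = {
  transformFrame(frame, context) {
    if (frame.newKey) context.requestKeyFrame();
    context.enqueue({ ...frame, data: `enc(${frame.data})` });
  },
};

const result = runEngine(myTransform, [
  { data: "f0" },
  { data: "f1", newKey: true },
]);
console.log(result.output.length, result.keyFrameRequests.length); // 2 1
```

Note how little surface the script sees: one callback plus whatever the context chooses to expose, which is exactly the extensibility point discussed above.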
H
So
in
terms
of
complexity.
Well,
we
just
have
one
constraint
there
and
a
few
attributes,
so
it's
smaller
than
the
example
one.
It
might
be
a
little
bit
less
flexible.
We
might
see,
but
it's
very
up
to
us
to
design
what
we
think
is
the
right
trade-off
between
a
bigger
api,
more
powerful
and
simpler
api,
less
flexible,
for
instance,
and
next
slide.
H
So
I'm
back
with
example,
three
with
the
transform
transformer
case
and
I'm
looking
now
at
whether
we
can
implement
implement
feedback
control.
So
there
you
can
see
that
the
window
api
stays
the
same,
because
if
you
have
feedback
control,
you
probably
want
to
have
knowledge
of
the
feedback
control
when
you
do
transform.
So
it's
good
to
have
in
the
same
place
where
you
get
data
to
transform.
H
Also,
the
like
the
bitrate,
and
maybe
you
want
to
change
the
bitrate
or
you
want
to
request
your
keyframe
or
like
whatever,
but
you
keep
that
at
the
same
place
in
the
worker
and
there
you
can
see
that
I'm
adding
a
transformer
on
bitrate
change,
even
for
instance,
if
it's
useful,
maybe
it's
an
event.
It
could
be
a
stream
like
it.
It
could
be
a
readable
control
stream
in
the
transformer,
and
then
you
can
pull
data
from
it.
It's
fine
as
well
on
the
right.
H
I
added
like
transformer
dot,
request,
keyframe
it's
a
method
which
means
that
the
encoder
will
do
its
best
to
produce
a
keyframe
as
soon
as
possible,
which
might
be
nice
in
the
case
where,
for
instance,
you
are
changing
keys
and
you
actually
want
when
you
change
the
keys
to
start
with
a
keyframe,
that's
fine
as
well.
What
we
see
there
is
that
with
the
introduction,
for
instance,
of
the
transformer
construct,
but
it
could
also
be
the
context
construct
from
the
previous
example.
H
We
have
placeholders
where
we
can
extend
apis
to
expose
knobs
that
might
be
of
interest
to
fine-tune
the
transform
or
to
fine-tune
a
little
bit
what's
happening
before
the
encoder
or
after
in
the
network,
or
vice
versa.
It.
H
Well
it's
really
up
to
to
us
to
to
do
that,
but
it
does
not
change
the
the
model
that
we
have
a
sender
with
the
transform
attribute
or
that
we
have
transform,
because
the
concept
is
really
that
you
have
data,
you
want
to
transform
it
and
you
want
to
to
send
it
to
the
next
step,
and
you
might
need
a
lot
of
information
to
actually
do
that
and
it
might
come
from
the
feedback
control.
H
It
might
come
from
the
window
application
and
we
we
should
make
this
api
as
comfortable
and
as
easy
as
possible
to
get
all
the
data
when
you're
doing
the
transform
of
the
frame.
H: The model that is exposed there can do both; it's something that we need to decide, but it seems orthogonal to using a transform. We might want to expose the feedback channel — it's unclear yet, but we might want to, and this model allows it: you can expose a readableControl, you can expose new methods, you can expose events; it's really up to us. The main thing here is that we keep it close to where we're doing the transform. And in terms of API complexity —
H
The
proposed
window
interface
is
quite
quite
simple.
You
had
like
for
doing
s
frame,
it's
stream
transform
and
then
you
have
this
transform
attributes
for
the
gs
with
create
transform,
of
course,
it's
more
powerful.
So
we
will
need
more
constructs,
but
given
the
examples,
it
seems
that
we
can
really
either
decide
to
go
with
a
few
constructs
or
more
constructs.
H
Based
on
that
next
slide,
I'm
wondering
what
the
working
work
consultation
is
and
I'm
making
a
few
proposals
there
and
seeking
feedback
from
the
work
move.
So
the
proposal
one
would
be
to
add
up
to
transform
based
api
in
those
environments.
So
that
means
basically
add
this
attribute
and
proposal.
Two
would
be
to
add
up
this
frame
native
transform,
which
has
been
identified
as
a
really
important
use
case.
So
we
should
do
it
sooner
rather
than
later,
and
the
proposal
3
is
to
continue
api
design
for
script
transform.
I: Yeah, I like this direction. I think for the script transform, you mentioned the two options — whether you create the transform in the worker or you create it on the main thread — and that's an interesting one, because it addresses a core problem that we have with all this data: it's too much data to handle on the main thread. So we want to make sure we make APIs where — since the control surface, where you hook all this up, is on the main thread — people don't accidentally do the wrong thing.

I: And I like the approach. Maybe it's a bit redundant to have a port and stuff like that to communicate, but I like that the API guides you to do the right thing: the fact that, where you create the transform, you have to specify a worker makes it clear that this is called on the main thread and the transform is separate. So it includes a separation that I like, between control on main and data on the worker. So I really like that. And as for proposals one and two — yeah, I think we should do that.
E
What
I
am
nervous
about
is
adopting
the
chance
and
api
that
says
that
you
can
only
use
transforms
in
particular,
since
I
believe
that
we
we
want
to
we,
we
have
applications
that
will
want
to
use
the
foreport
model
of
the
of
transforms.
I
mean
in
out
and
out
that's
four
four
ports.
Well,
the
transformers
catch
out.
So
far,
it's
only
two
ports.
E
It's
only
yeah
one,
one
way
through
so
I'm
nervous
about
adopting
the
the
transform
base,
as
our
only
only
api
as
a
as
an
optimization
api
over
that
this
quicker
to
to
call
them
connecting
the
streams
I'm
fine
with
it
and.
E: We have to get a handle on feedback control. We have to get a handle on how to configure — how to tell the next step what to expect, how to create the next step for what it is expecting — and that's more or less independent of the API model, actually. So yes to the SFrame native transform; I'm hesitant to say that we should adopt the transform API.
H
So
going
back
to
your
point
about
the
transform,
which
is
two
ports
and
you
might
want
at
the
end,
to
have
four
ports
currently
in
one
example,
we
have
the
transformer
which
has
a
getter
which
would
be
readable
and
a
navigator
which
would
be
writable.
So
we
have
two
ports.
H
We
could
certainly
add
another
getter,
which
would
be
readable,
control
and
another
which
would
be
writable
control.
And
then
you
have
the
four
ports.
So
it's
it's
really
at
the
next
step.
So
it's
really
in
proposal
three,
but
we
will
identify
whether
two
or
four
are
needed.
Currently,
I'm
not
sure,
but
I
don't
think
I
think
it's
independent
from
proposal,
one,
whether
we
will
expose
feedback
control
or
not,
and
whether
feedback
control
will
be
exposed
as
streams
or
as
functions
or
events
or
what
whatever
of
the
api.
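The "four-port" shape under discussion — a data pair plus a control pair, so feedback flows beside the frames — can be sketched like this. The port names follow the readableControl/writableControl rename mentioned earlier in the meeting; the wiring, the message shapes, and the helper names are all illustrative.

```javascript
// Fake a four-port transformer: media readable/writable plus a control pair.
// In the real model the platform would supply these; here they are in-memory
// streams so the sketch runs anywhere.
function makeFourPortTransformer() {
  const sentFrames = [];
  const sentControl = [];
  return {
    // Media path: frames in from the encoder, frames out to the packetizer.
    readable: new ReadableStream({
      start(c) {
        c.enqueue({ data: "f0" });
        c.close();
      },
    }),
    writable: new WritableStream({
      write(f) {
        sentFrames.push(f);
      },
    }),
    // Control path, platform -> script (e.g. bitrate changes)...
    readableControl: new ReadableStream({
      start(c) {
        c.enqueue({ type: "bitrate", value: 300_000 });
        c.close();
      },
    }),
    // ...and script -> platform (e.g. keyframe requests).
    writableControl: new WritableStream({
      write(msg) {
        sentControl.push(msg);
      },
    }),
    sentFrames,
    sentControl,
  };
}

async function workerMain() {
  const t = makeFourPortTransformer();

  // Read one control message and react by asking for a keyframe.
  const { value } = await t.readableControl.getReader().read();
  if (value.type === "bitrate") {
    await t.writableControl.getWriter().write({ type: "requestKeyFrame" });
  }

  // Pass frames straight through on the media pair.
  await t.readable.pipeTo(t.writable);
  return t;
}

workerMain().then((t) => console.log(t.sentFrames.length, t.sentControl));
```

Whether the control pair should be streams at all, rather than methods or events, is exactly the open question in proposal three; the sketch only shows that the four-port shape composes with the same plumbing as the two-port one.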
H
So
that's
why
I'm
not
nervous
about
adopting
an
attribute
which
would
which
will
be
transformed,
because
at
the
end
of
the
day,
it's
just
a
switch
to
say
hey.
I
want
something
I
want
to
customize
something
so
give
me
the
context
with
all
the
apis,
where
I
could
customize
things
and
that's
what
we
will
define
and
we'll
define
the
context
and
all
the
apis
that
allow
us
to
do.
The
full.
I
Customization,
so
what
I
like
about
the
transform
is
that
it
seems
easy
to
do
the
same
things
but
harder
to
do
the
wrong
things
so
with
insertable
streams.
I
do
worry
that
it's
not
clear
how
you're
supposed
to
connect
the
pipes
together
and
there
are
ways
to
connect
them.
That
could
cause
a
lot
of
weirdness
and
might
require
a
lot
of
extra
tests
and
undefined
behavior.
I
So
and
if
we
have
an
s
frame
native
transform,
we
still
have
to
answer.
Well.
How
do
you
add
that
to
your
sender,.
I: Meaning that I'm also for proposal two — but I think we also want proposal one.
F
Yeah
yeah
right,
I
I
like
the
direction
of
this,
but
it's
hard
to
comment
without
seeing
like
how
would
you
achieve
exactly
the
same
thing,
but
with
the
existing
api
surfaces?
Would
you
do
the
same
thing
but
transfer
you
create
the
transform
objects
on
the
main
thread,
and
then
you
transfer
them
over
to
the
worker
or
how
does
that
work.
H
Correct
you,
you
will
you,
will
you
will
get
the
readable
and
the
writable
streams?
You
will
create
a
worker
and
then
you
will
call
post
message
with
veritable
and
writeable
to
the
worker,
and
then
you
will
do
like
the
readable
pipe
through
pipe
to
dance
in
the
worker.
So
that's
how
it's
done.
Currently
I
can
like.
There
are
examples
that
are
doing
so
and
that's
fine
there
it
will.
H
So
if
we
add
new
apis
like
I
want
to
request
a
keyframe,
then
you
will
probably
want
to
call
request
the
keyframe
in
the
worker,
not
in
the
window
environment.
So
we
will
need
to
transfer
this
object
that
allows
to
request
keyframe
to
the
worker
or
we
will
need
to
add
this
api
in
readable
or
writable
and
so
on.
So
that
becomes
more
difficult.
H
While
if
we
have
a
transform
with
clear
text
in
worker
and
window
in
worker,
we'll
be
able
to
expose,
we
have
a
placeholder
for
exposing
various
kinds
of
apis,
which
is
very.
H
That's
one
option:
we
we
need
to
decide,
but
but
certain,
but
that's
fine,
I'm
fine
with
it.
Another
option
would
be
to
add
frame.
Events
like
you
have
an
event
for
each
frame,
and
then
you
have
an
api
where
you
can
write
the
frame.
It's
it's
also
doable.
This
way.
G: So again, I think this is a good direction; I'm fairly happy with it. The only thing, again looking at Web Audio: the behavior we see in Web Audio is that you can have multiple listeners on an output, so the one-in, one-out model isn't true for Web Audio — you can have 17 things out — and I can see uses for that. Obviously there are uses for that in audio, but I think there are some in video as well, so it would be a shame if the script transform prevented you from doing that. Maybe there's some kind of fan-out transform or something you can apply.
H
You
can
certainly
do
that
if
the
the
frame
is
cloneable,
if
you
can
clone
the
frame,
then
you
receive
a
frame,
you
transform
it,
you
send
it
back
and
then
before
sending
it,
you
clone
it
and
you
post-message
it
to
another
worker.
That
will
do
something.
It's
really
up
to
the
javascript.
So
it's
really,
the
definition
of
the
rpc,
encoded
image
or
frame
video
frame
will
tell
us
whether
you
can
do
that
or
not,
and
we
can
certainly
decide
outside
of
this
proposal
to
make
it
happen.
H: You can certainly, for instance, if we're exposing streams, take the readable stream and tee it — that's the term — and then you can transfer one branch. If we are exposing just frames, then you can create a ReadableStream in JavaScript that takes the frames; you can then tee that readable stream, send one branch on, and keep the other for your own business. Okay — that's also doable.
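The tee() fan-out just mentioned is standard WHATWG streams machinery, and can be shown with plain objects standing in for frames. One caveat worth noting: tee() does not clone chunks — both branches see the same objects — which is precisely why the cloneability of frames matters for this pattern.

```javascript
// Sketch of the fan-out idea: ReadableStream.tee() splits one stream of
// "frames" into two branches, so one copy can continue to the normal sink
// while the other goes elsewhere (e.g. is transferred to another worker).
async function fanOut() {
  const source = new ReadableStream({
    start(c) {
      [0, 1, 2].forEach((id) => c.enqueue({ id }));
      c.close();
    },
  });

  // Note: tee() shares the underlying chunks between branches; it does not
  // clone them. A frame consumed (or mutated) on one branch affects the other.
  const [toSink, toOtherWorker] = source.tee();

  const sink = [];
  const other = [];
  await Promise.all([
    toSink.pipeTo(new WritableStream({ write(f) { sink.push(f); } })),
    toOtherWorker.pipeTo(new WritableStream({ write(f) { other.push(f); } })),
  ]);
  return { sink, other };
}

fanOut().then(({ sink, other }) => console.log(sink.length, other.length)); // 3 3
```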
I: So I had one API question: what happens if I assign the same transformer to two different sender.transform?

H: So currently, in the Safari implementation, it will fail — it will reject with an exception. I think Harald mentioned in the past the question of whether it's fine to allow a sender transform to be overwritten: you have a first transform, you assign it, and then you reassign a new, fresh transform. That we should discuss — either throw an exception or allow it. Basically, this will tell us how we specify the transform — whether we do, for instance, option two, or whether we just put the data in and send it — but we can achieve both models as well.
I
Is
the
transformer
reusable
after
the
stream
has
been
stopped
closed.
H
So
currently,
if
the
transfer
is
the
transform
is
assigned
to
a
sender
or
receiver,
it
will
not
be
reusable.
I
do
not
see
huge
value
in
that
and
it
was
simpler
to
implement
it.
This
way.
I
But
no
yeah
that
has
been
my
experience
too.
Dealing
with
readable
writable
streams
is
that
when
you
set
up
these
pipe
chains,
you
imagine
you
know
it's.
You
always
have
to
create
new
objects.
It
seems
like
yeah,
because
they're
only
good
for
that.
That
round
would
that
make
sense.
H: So I'm in favor — and I heard some other voices in favor, I think. Are there any objections? Can we just write it down in the minutes, then, that there's consensus to go forward with the proposals?
A
Can
we
can
we
be
clear
in
the
minutes
which
what
our
action
items
are
with
respect
to
proposals?
One
two
and
three.
H
I
think
the
proposal
the
goal
would
be
to
provide
pull
requests
for
proposal.
1
then
pull
requests
for
proposal
2
and
for
proposal
3
it's
too
early
to
make
a
pull.
A: Okay — I think, Elad, you have the floor.
C
Thank
you,
so
I
would
like
to
make
a
suggestion
for
introducing
cropping
make
a
cropping
mechanism
for
midi
stream
track
module,
of
course,
any
kind
of
bike
shed
and
if
it's
on
midi
stream
instead
or
something
like
that,
that
is
okay.
For
me.
C: Yes, thank you. So I've been filled in that I'm probably not the first one to introduce a proposal to add some kind of video or photo editing capability to the browser, and that such proposals have been met with some resistance in the past — because it's a slippery slope, and we never know where this might stop, and a lot of those things can be achieved otherwise. I would like to say that in this particular case, I think I've got an argument.
C
I
cannot
tell
if
it's
new,
because
I've
not
been
here
in
the
past,
but
I
hope
that
it
is
new
and
I'll
present
it
with
the
next
slide.
Please
so
first,
I
would
like
to
claim
with
absolutely
no
authority
that
there
is
at
least
this
part
of
the
mandate
of
the
browser.
C
The
browser
needs
to
allow
applications
to
run
and
they
need
to
be
able
to
do
all
sorts
of
things,
and
if
it's
a
real
nobel
task,
it
needs
to
be
a
an
application,
needs
to
be
able
to
do
it
well
and,
furthermore,
good
applications
make
implicit
guarantees
to
the
user,
and
if
an
application
cannot
make
a
good
guarantee
to
the
user,
and
that
guarantee
is
very
reasonable,
then
the
browser
needs
to
make
it
possible
for
the
application
to
give
us
that
guarantee.
C
So
I
want
to
claim
that
if
we
have
this
for
a
following
application,
that
presents
slides
to
users
and
also
lets
them
see
this,
the
local
user
can
see
the
remote
users
video
feeds,
while
sharing
their
own
tab
or
at
least
part
of
it
to
remotely
it
could
be
that
the
application
which
is
to
promise
the
user
hey
you've,
got
some
private
content
here.
Speaker
notes
your
name,
your
address
all
sorts
of
things,
but
we're
gonna
crop
that
away
now.
C
This
is
a
guarantee
given
by
the
application
and
communicated
by
the
application
right.
So
each
application
is
going
to
communicate
that
fact
differently
and
there
will
be
different
things.
In
this
example,
we
can
see
that
there
is
some.
There
are
some
things
on
the
screen
which
are
not
really
privacy
sensitive
right,
that's
the
nature
of
the
smoke,
but
we
still
don't
want
to
share
them
remotely,
namely
the
video
feeds
from
the
remote
participants
should
not
be
shared
back
to
them.
C
But
you
can
imagine
imagine
this
mock
also
with
a
couple
of
speaker
notes
that
the
speaker
produced
for
themselves-
and
I
would
like
to
claim
that
right
now
this
cannot
be
cropped
away
so
easily.
Now
there
is
an
underlying
assumption
here,
and
the
underlying
assumption
is
that
get
this
stand,
media
slash,
get
current
browser
context,
media
or
one
of
the
other
names
has
already
been
accepted.
C
We're
still
debating
how
it
should
be
accomplished,
but
we've
kind
of
accepted
that
it
needs
to
be
accomplished
that
we
need
some
kind
of
a
get
east
media
interface,
in
which
case
the
application
already
knows
that
it's
getting
its
own
tabs
media,
which
means
that
it
knows
where
it
wants
to
crop.
But
that's
a
bit
difficult
to
do
in
javascript,
as
I
want
to
to
show
so
next
I
tab
slide.
Please.
C: Was this produced before or after such a resize? Because if a resize happened, the event gets posted back to the event loop, and that's not synchronized with the composition and drawing of the frame and the capturing of it. And to further complicate things: it could be that the application spans multiple frames, which means that even knowing where anything on the screen is could be a matter of posting messages back and forth.
C
So
it's
even
more
difficult
and
to
further
complicate
it.
It
could
be
that
there
are
peer
connections
involved,
which
means
that
if
you,
it's
actually
likely
in
this
particular
case
use
case,
and
if
you
make
a
mistake,
if
you
miscrop
something,
then
before
you
realize
your
mistake,
that
frame
is
already
on
the
wire
and
about
to
embarrass
you
to
your
colleagues
next
slide,
please.
C
So
I
want
to
quickly
look
at
three
known
solutions
and
then
look
at
the
solution
that
I
recommend
so
the
first
known
solution,
we've
just
discussed
it's
like
on
resize
event,
handlers.
C
You
know
by
the
time
that
you
handle
the
event
might
be
a
bit
too
late,
so
we
cannot
use
that
so
long
as
we're
using
peer
connections.
Next
slide.
Please.
C
Now,
you
might
say:
okay,
you
could
hold
on
to
the
frames
that
you
are
just
about
to
send.
Make
sure
that
no,
if
and
no
event
happens,
no
on
a
resize
event,
no
one
zoom
event
etc
hold
that
frame
for
a
while
and
then
transmit
it
after
you
know,
you
could,
for
example,
have
a
barrier
event
where
you
post
to
yourself,
and
if
you
see
that
by
the
time
that
event
you've
just
posted
to
yourself
comes
up
again,
no
such
event
happened.
C
You
know,
okay,
I
don't
think
that's
a
good
solution,
because
that
would,
by
the
design,
introduce
delay,
which
is
not
something
we
generally
want,
and
there
are
other
problems,
but
that's
enough
next
slide,
please
another
known
solution,
as
far
as
I
can
tell
is
request
video
frame
callback,
and
we
can
see
that
it.
C
If
I
understand
correctly,
it
gives
a
best
effort
kind
of
a
best
effort
service,
but
even
if
it
were
to
give
the
guaranteed
service,
because
you
need
the
because
the
content
can
be
between
different
frames
which
are
cross-origin,
it
would
still,
I
think,
not
help,
because
you
would
still
need
to
post
an
event.
It's
going
to
be
a
well,
it
would
still
be
problematic,
but
it
is
best
service
next
slide.
Please.
C
So I want to say that once we've looked at these possible solutions, we're kind of left with two general approaches. We can say: okay, your problem was that you were still capturing while a resize was going on. So how about, as soon as there is a resize or a resume or whatever, the browser pauses the capture and notifies the application.
C
The application then calculates the new coordinates, says okay, everything's ready, and then it resumes the capture, and that capture can get paused again. That was my initial approach. It ended up spiraling out of control in terms of complexity. I could go into that if it's necessary, if anybody's interested, but I don't think that's a good solution.
C
The other approach is to say: okay, the browser is going to do the cropping. Of course this also involves a lot of other side effects. For example, it is probably going to be a bit more efficient, and it's much easier for an application to use. It's not a bad solution, in my humble opinion. It's got one problem.
C
There is only one problem with that, and that is that if the div happens to be in a different frame which is cross-origin, then it's a bit difficult to refer to it, and I've got a solution to that too. Next slide, please.
C
One more slide; yes, thank you. So in this very minimalistic code sample, you can see that the very last code line is mst.crop2. Normally, you would expect the argument there to be the actual div element.
C
Instead, let's just use some kind of string-based id. That string-based id is obviously serializable, which means it can be postMessage'd between frames, which means, if we go with that solution, any code in any frame can, given user consent, capture and crop to content in any other frame. So we maintain a high degree of flexibility, of course.
C
Thank you. So, to summarize, what I'm suggesting is that, given that we already have video of the current tab: basically, if the MediaStreamTrack on which this API is operating was received in any way other than calling getCurrentDisplayMedia, I'm sorry, getCurrentBrowsingContextMedia, or getDisplayMedia, it will just fail. But if it was obtained using that API call, then the application already knows that it is of the current tab.
C
So, just to walk through this code example: a div can define a crop id, and I'm perfectly fine using any name you would like. The idea is that we probably don't want to overload the element's existing id or something like that, because those might not be unique across frames, etc., so we just define a crop id. Obviously, you'd only need to specify it on an element that the application actually knows
C
it wants to eventually crop to; any other element would just not have one. Then you get it, programmatically or hard-coded or whatever, pass it to whatever frame is to actually run the capture, or possibly your own frame, and you just call mediaStreamTrack.crop2 (again, name up for debate). And then one last thing that I think would be up for debate is whether we want to make the id kind of an unguessable token, or whether we want to say it's up to the application to maintain uniqueness.
C
If there are several, they could choose it randomly, but they could also hard-code it to whatever they want. I'm guessing that an unguessable token would not hurt anybody, but it would also mean that whatever security implications we might not be able to think of, it would be a bit better for those; it would be harder for developers to shoot themselves in the foot. But that's, I guess, one of the least important parts of my suggestion. So I'm done talking.
F
You need to be able to tell the browser to do this to fulfill the use case, but I'm wondering why this is a separate step, right? Because it looks like you're capturing into one track and then you are cropping it in a separate step, and I'm wondering if you could simply, in the argument to getTabMedia, reference the div element as an object reference directly, and then the capture only captures that.
C
Yes, I'm open to that possibility. I thought that it would be a little bit less usable; it's got both benefits as well as problems. One problem is that then you cannot switch the crop region, which is something an application might wish to do. I don't think that the one I'm specifically building this for needs that, so I would be perfectly open to that, but it's up to you.
I
Jan-Ivar here. I have a couple of points. One is that getTabMedia is currently not specified; I mean, it's being discussed in the working group, but it's not a done deal as far as an API yet. And the problem it seems to solve here is one of over-capture, and specifically, so far I've only heard a use case for this being for this new method that has not been defined yet. So I think it's a little premature. That's my first comment.
I
I'm also agreeing with Henrik that right now you're proposing a cropping tool on MediaStreamTrack, which would be a generic cropping tool, and then we have to think about all the sources that produce a MediaStreamTrack, like canvas capture, peer connection, element capture, and it's not clear to me how the coordinate system of a div would map to those sources, or to a camera track.
I
Even so, if getTabMedia is the only source this cropping applies to, my first comment would be: then we should limit it to that API. Like Henrik mentioned, Firefox has tried to add constraints in the past to crop a...
I
Sorry, we got some background noise here. I was going to say: we've tried constraints in the past to crop the capture of screen capture specifically, and other things, and the question again comes down to the coordinate system, because an application would have to deal with these coordinates; also, this media capture hasn't been specified.
I
Yet one of the things I proposed last time was that we solve this at the point of capture. So in this case, for getTabMedia, the obvious solution to me would be that you specify this during capture: you capture this div, basically, and don't capture everything else. Then we sidestep the whole problem of coordinates.
C
So, I'm not interrupting you, am I? (No, I think that was my comment.) Okay. So, as you said, you've just brought up three different points. The first point is that there is yet another thing against putting it on MediaStreamTrack, and that is that it would just have to fail for MediaStreamTracks which were obtained elsewhere. So that's a point for taking Henrik's suggestion, and I agree.
C
I'm not sure if you are alluding to that or not, but just for the benefit of other people: I think that we would still not want to be able to capture an HTML element, to say "okay, I want to capture this div itself", because if parts of it are obscured, the user might be consenting to capture something they're unaware of. So I think that still holds. I know you and I have had a bit of a track record here; just for the benefit...
I
The discussion we had, which was about: you're still capturing the viewport, but you're clipping it to an iframe or a div, so that you would only be capturing the intersection with what is already viewable, for privacy reasons; but you can also then crop within that using native HTML things, whether they be iframes or divs, or both. I only mentioned iframes last time.
I
We could probably talk about extending it to other elements. But the overall point is that the problem here seems to be over-capture in the first place. So if we can solve over-capture, we don't need cropping, and I think that would be better, because then we don't get into coordinate systems; you know, display pixels on different OSes don't even have the same sizes all the time, and I can see a lot of confusion.
C
Yes. So what I would argue here is that we are not going to communicate to the user that we're only capturing a div, because the user is not actually in a place to understand what each div contains. Which means that the user is going to give permission to capture the entire viewport, and in the case that an application wants to sometimes change that,
C
then, if the user has already consented, it would make no sense to make the application query the user for yet another consent if it wants to now sub-capture a different part. So you begin by capturing the entire tab, and then you crop it as you wish at different times. I think that creates a better user experience.
I
Well, I think permission is separate from this. The application gets permission to capture the entire page, but the application also gets controls to only capture an iframe or a part of it, and I think those things are separable, at least in my mind: you give the tools, and in the way you specify the capture you say, you have permission to capture the whole page, but we're only going to capture a part of it. That seems fine to me with regard to permissions.
C
At the moment, calling either getDisplayMedia or getCurrentBrowsingContextMedia... (I can hear some feedback; I think it's from... okay, thank you very much.) So at the moment, getDisplayMedia prompts every single time, and I think that is even in the spec, and getCurrentBrowsingContextMedia is modeled after that, in that if you query multiple times, you're going to get multiple prompts for the user.
I
Yeah, you wouldn't want to do that; but since you're capturing your own page, you already control everything: you can re-lay-out the div, you can do whatever you want, so it shouldn't...
C
And another argument that I wanted to make is that, well, I guess you could always put everything else inside a div, so that much at least is true, that you could always capture the entire thing. So for that reason I would be okay with also making this a getDivMedia instead of getTabMedia. Another argument against that, and by now we're into particulars...
C
So maybe. But another question here would be whether this could be confusing to a web developer, that they think they're capturing the div; would they understand that they're only capturing, you know, a sub-portion of the viewport?
C
That would not actually be usable for us, because we want any iframe to be able to capture any other iframe, basically. If we look at the previous use case, where we said you're capturing the entire tab (which is, by the way, still something we're pushing for, because some people don't want to crop at all), then if you can do that, you also want to be able to crop from any place.
I
Okay, okay. So then you would pass it in, then, or you would have two methods; you would have two MediaDevices APIs for the same thing.
C
I think that streams are not serializable, and therefore you cannot postMessage a stream, but I could be wrong there. (Yeah, that's correct; the MediaStreamTrack in this case is not serializable. And the same thing for giving access: you cannot say "this part of the DOM", like "this div"; that's similarly impossible.)
I
I should also clarify, since we're talking about cropping as a sub-feature of getTabMedia, and since we're talking about security properties: I've objected to some of the security properties of that proposal, and I think it needs to be locked down with CORS or COOP or similar things. So I want to make sure that this isn't perceived as me agreeing to that API.
C
Of course not. And, just in case: I've also not received the complete green light from Chrome security either, so this is still under active discussion everywhere.
H
Related to your crop-id thing: I think we should have a clear use case for a top-level page wanting to capture only a given element in an iframe that is cross-origin, you know, because it's adding quite some complexity to add a new crop-id attribute or id string. If we were able to restrict that to a DOM node, like "I want to capture this iframe", then it's a more natural API than what is proposed there.
H
So I'm fine going with an API which is less natural, but the use case should be very clearly defined, yeah.
C
Sure. If we could jump to slide number 26, then I can show it again. I think the group at Google that works on this (I think their current name is Google Workspace) has had a public blog post, so we can talk about it. Number 26 for me is a few more up; it's the only one with an image. Yes, thank you very much.
C
So this one is actually taken from a public post, so we can discuss what we see here. And what we see here is that you have one frame that runs Google Slides and one frame that runs Google Meet, and the Google Meet frame is the one that needs to run the capture, crop it, and plug it into a peer connection.
C
So that is what... sorry, it's more than the...
H
As well, I'm sorry. Well, another possibility would be that you have two iframes, really: one which is just about the content, and then you could have some controls, or... I mean, yeah, that's another...
C
Possibility. The parent here is Slides and the child here is Meet.
C
On the left you have the top-level application; on the right you have the iframe that needs to capture. Right.
C
So what we would have here is: the top level would just communicate the id via postMessage to the other side, and it would then crop.
I
Right, but this website also contains other app-specific logic, like sending it over a peer connection and all that stuff. So it would hopefully be trivial, even if there's an iframe there for structural reasons (because there are different origins), to have within that iframe other, smaller iframes or divs whose sole job is dictating the dimensions, the extent, of the capture.
I
Sorry, yeah, I'm trying to wrap my head around this. So you need to reach the iframe that you're going to capture from... that would be done by the capturer capturing itself, right? So that's hopefully not the issue.
C
For me, the part you have on the right here in this sample application captures everything, but then it needs to crop to the...
I
...or communicate. Privacy-wise, that's a little more concerning, though. Wouldn't it be safer to have the thing being captured capture itself and join the call that way?
C
Sure, that's one option, but I don't think this is really a problem if this is an unguessable token which the application being captured chooses to even have: you wouldn't have it unless they actually injected it there, and then they chose to postMessage it. And I would also argue that, if not for the crop region, you would have been capturing everything, so if anything you're removing content; you're not gaining any extra information here.
F
That's right, because this can change however you want. I think it's important that what is asked to be captured is actually everything, like the whole tab; and in that case, if you're capturing everything anyway, then I don't think we need to get into details about which iframe we are capturing, right? Because we're capturing the whole tab, and I think the cropping can just be considered an optimization, really.
D
So, can you remind me where the get-tab-media thingy is being discussed? Is it on the screen-share repo right now?
D
Okay, so yeah, maybe raise an issue there where you summarize your findings and where we could continue the conversation. Of course it's linked to the discussion on getTabMedia, but I think continuing the discussion on the orthogonal bits would be useful.
A
Okay, everybody, we're out of time. We did not get through everything this time, so I guess we have leftover material for the February interim. Please fill in the doodle poll so we can get a date and time, so we can meet again.