From YouTube: WEBRTC WG interim 2021-10-14
Description
See also the minutes at https://www.w3.org/2021/10/14-webrtc-minutes.html
03:24 The Streams Pipeline Model (Youenn)
44:11 Alternative Mediacapture-transform API
1:28:53 Mediacapture Transform API
1:49:07 Wrap up and next steps
B
Okay, this is the W3C WebRTC Working Group meeting. We abide by the W3C patent policy, and only people in companies listed on the page are allowed to make substantive contributions at this meeting. We're going to cover some issues with WHATWG streams and then talk about the mediacapture-transform API. So, a little bit about the meeting: these are links to the specs, and the slides are up on the wiki.
B
Okay, just a note about the code of conduct: we operate under the W3C code of ethics and professional conduct. While we're all passionate, let's try to keep the conversations cordial and professional. A little bit about tips: we're going to run a queue like they do in the IETF.
B
So
if
you
want
to
get
into
the
queue
type
plus
queue
and
then
when
you
want
to
leave
type
minus
q
in
in
the
google
me
chat,
so
we'll
be
using
that,
of
course
use
headphones
and
wait
for
the
microphone
access
to
be
granted.
Before
speaking,
it's
also
helpful
to
state
your
full
name
just
for
the
recording,
although
we
can
probably
recognize
you
by
now.
I
don't
think
we'll
be
doing
any
poll
we'll
be
using
the
google
me
pull
mechanism,
but
we
may
take
polls
in
the
chat.
B
Okay, just a few notes on document status. Just because something's in the WebRTC repo doesn't mean it's been adopted by the working group; that requires a formal call for adoption. And just a reminder that editors' drafts don't represent working group consensus, but working group drafts do, and we can merge PRs without consensus as long as we have a note. Okay, so here's what we're going to try to get done today: Youenn's going to talk a little bit about the streams pipeline model and some of the issues that are under discussion there.
B
We'll
then
get
into
an
alternative
media
capture.
Transform
api
proposal
by
you
went
in
yanivar
and
then
harold
will
speak
on
the
existing
proposal
and
then
we'll
try
to
make
sense
of
it
all
in
the
wrap-up
and
next
steps
we're
going
to
try
to
keep
pretty
rigid
time
control.
B
So
we'll
I'll
give
a
warning
two
minutes
before
time
is
up
and-
and
we
will
move
on
once
we
get
to
the
timeline
okay,
so
until
8
30
we'll
have
u.n
present
on
some
of
the
streams
issues
that
he's
found
in
that
around
their
discussion.
In
the
what
wg
streams
working
group
you
and
you
have
the
floor,
sure.
D
Sure. So yeah, as Bernard said, this presentation is about topics and issues we discussed, partially with Jan-Ivar, when we dived into exploring whether streams are a good fit for media pipelines. The goal is to identify blocking issues, which may or may not allow us to adopt streams as a founding API for a media pipeline. So, maybe next slide.
D
You can use JavaScript to connect sources and sinks, or you can use transform streams. The example shows the angle that everybody would like to have, which is to go from the camera up to the network with just streams. As we'll see, we will just focus on a video pipeline here, and we want to discuss the issues around threading and video frame interaction with streams. So let's look at threading first; next slide.
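For reference, a minimal sketch of the "camera up to the network with just streams" pipeline being described, assuming the mediacapture-transform API (MediaStreamTrackProcessor) is available; the `funnyHat` transform and `networkSink` are hypothetical placeholders:

```js
// Sketch only: camera → transform → network, entirely with streams.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getVideoTracks();
const processor = new MediaStreamTrackProcessor({ track });

// Hypothetical "funny hat" stage: consumes VideoFrames, emits VideoFrames.
const funnyHat = new TransformStream({
  transform(frame, controller) {
    controller.enqueue(frame); // real code would draw on and replace the frame
  },
});

// Hypothetical network sink; the point here is the shape of the pipe.
const networkSink = new WritableStream({
  write(frame) {
    // serialize and send the frame (details elided), then release it
    frame.close();
  },
});

await processor.readable.pipeThrough(funnyHat).pipeTo(networkSink);
```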
D
In the web audio case, the spec clearly states that the audio graph is set up in the main thread but runs in the audio thread. The spec is very clear about that, and all implementations abide by it; and the nice thing is, of course, that the audio thread is a high-priority thread, so you get very good performance there. In our case we're using a generic approach, streams, and the situation is less clear.
D
If
we
wanted
to
not
assume
that
we
would
need
to
optimize
pipe
2
and
pipe
through
operations
and
maybe
next
slide,
we
can
illustrate
that
it's
it's
not
easy.
So
the
example
one
there
is
the
case
where
you're
doing
the
funny
hat
thing
from
a
camera
to
an
html
media
element.
So
let's
say:
there's
a
native
transform
there
and
you
can
pipe
two,
the
media
element
in
some
ways.
So
there's
no
javascript
and
there
you
have
a
call
to
play
through
and
a
call
to
pipe
too.
D
So
from
reading
the
spec
from
this
example,
we
don't
know
whether,
where
will
flow,
the
video
frames,
it
might
be
my
infrared,
it
might
be
a
background
thread,
it's
really
up
to
the
user
agent,
and
that
makes
things
really
complicated
from
a
web
developer.
Basically,
that
actually
wants
to
get
guarantees
and
the
worst.
B
Youenn, would that be true even if this code example were to be written in a worker? You think it still might flow on the main thread?
D
Oh yeah, sorry. The safest assumption is that it's running where we call pipeThrough and where we call pipeTo, okay.
D
So if we call pipeTo in the main thread, we don't know whether it will be main thread or a background thread, but potentially we can assume it's main thread. If it's called in a worker, we would say, oh, it's probably called in the worker thread.
D
If you go with example 3, which is using tee after the native transform and using pipeTo there, it's really unclear where video frames will actually flow. Will it be optimized by the user agent, or will it not be? We cannot predict that. So that's why I think the safest assumption we can consider is that the thread where the setup is done is also the thread where the piping is done.
D
So one potential related idea is to transfer the stream to a worker, for instance, to get rid of main-thread issues. Transferring a stream is currently specified in the spec as: you take the stream you want to transfer and you pipe it to a special identity transform that will actually send the content over to the worker thread, or the worker context.
D
Let's
say
if
you
just
basically
implement
the
spec
like
that,
you
you
don't
avoid
a
main
thread
blocking,
because
the
source
is
still
in
main
thread,
so
you
have
to
do
optimizations
like
like
the
clever
ones
they
they
did
in
chrome.
D
The
third
issue
here
is
that
this
optimization
is
not
standard
and
it's
really
difficult
to
make
transparent
web
developers
as
discussed
in
the
github
issue.
The
links
are
in
the
slides.
D
I
also
actually
tried
chrome
last
week
and
the
current
implementation
around.
That
era
is
not
compliant
on
a
number
of
points
as
well,
and
the
second
issue
really
is
that
light
pipe
2
and
python.
It's
very
hard
to
predict
whether
the
optimization
the
transfer
optimization
will
actually
kicking
on
or
not,
and
the
fact
that
it
kicks
in
is
really
important
from
a
birth
point
of
view.
So
again,
this
next
slide
for
some
examples,
so
example,
one
is
the
typical
case.
Where
chrome
will
optimize
the
thing
you
have
a
camera
for
a
stream.
D
You
transfer
it
to
work.
It's
working
fine.
Now,
let's
say
that,
for
the
purpose
of
the
example
we
add
a
native
transform
between
the
camera
and
the
stream
will
actually
transfer
there.
We
don't
know
what
will
happen,
maybe
it
will
be
optimized,
maybe
not
example.
Three
is
again
the
same.
Let's
say
that
you
have
a
camera,
readable
stream.
Let's
say
you
t
and
you
transfer
one
of
the
teeth.
What
will
happen
in
terms
of
optimizations?
We
we
don't
know,
and
the
same
applies
to
non-camera
streams.
D
Will
peer
connection,
get
display,
media
or
get
viewport
media
be
optimized
or
not.
We
we
don't
really
know
it's.
The
spec
doesn't
say
anything
and
let's
say
you
transfer
a
media
stream
track
like
get
report
media
to
another
frame,
and
you
then
take
a
stream
that
you
transfer
to
worker
will
be
optimized.
We
we
don't
know
and
from
a
web
developer
perspective,
there's
no
way.
You
know
whether
it
will
be
optimized
or
not.
This
is
very
different
from
web
audio.
D
Where
you
have
a
spec,
you
know
it
will
it's
guaranteed
that
it
will
run
in
a
background
thread
in
an
optimal
thread.
So
that's
why
next
slide?
If
we,
if
we
look
at
the
idea
of
using
transferring
streams
for
our
purpose,
we
we
see
that
it's
really
a
generic
tool
like
strings
are
it's
designed
for
flexibility
and
it's
working
fine,
but
we
we
cannot
guarantee
performance.
D
The
good
news
is
that
we
have
media
slim
track
transfer,
which
is
a
dedicated
tool
to
mediate
track
and
there
we
can
design
it
so
that
we
guarantee
performance.
We
can
guarantee
that
the
the
transferring
will
be
optimal
and
that's
the
proposal.
I
think
we
should
go.
D
We
should
go
with,
which
is
to
build
on
mediation,
track
transfer,
and
then
we
we
have
no
hard
requirement
to
re,
resolve
the
github
issue
to
actually
unable
to
have
a
spec,
compliant
optimized,
video
frame
of
optimized
stream,
video
frame
transfer
mechanism,
and
still,
if
with
that,
we
have
like
optimization
like
type
two
is
optimized.
Python
is
optimized.
It's
it's
a
bonus,
it's
great.
D
You
have
more,
but
at
least
you,
you
you're
sure
that
you,
you
get
a
decent
level
of
performance
because
you
are
in
a
worker
and
it's
as
if
french
will
go
in
the
worker
thread,
instead
of
being
blocked
by
the
main
thread.
D
Let's
look
at
pipeline
buffering,
that's
the
next
slide.
So
as
we
discussed
in
the
in
the
past,
buffering
with
streams,
so
it's
specific
to
streams
happens
at
each
transform,
stat
step
in
the
media
pipeline.
D
So
in
the
slides,
there
are
like
two
examples
and
the
typical
stream
pipeline
is
the
one
on
the
top
where
the
pipeline
is
filled
greedily,
which
is
great
when
you
want
to
process
all
chunks
like,
for
instance,
when
you
have
a
big
document
and
you
unzip
it
progressively
and
you
you
transform
it
previously,
like
you,
you're
doing
utf-8
or
whatever
the
processing
can
be
optimized
so
that
it
happens,
sort
of
in
parallel,
it's
up
to
the
user
agent
and
you're
also
sure
that
you
have
the
slowest
delay
possible
between
the
compression
and
the
renderer.
D
But
in
our
case,
where
we
have
a
video
media
pipeline,
it's
sort
of
different
first,
you
might
not
want
to
process
all
frames
if
there's
a
one
second
or
video
frame,
for
instance,
you
might
prefer
to
skip
it
and
get
the
freshest
frame,
for
instance,
and
that's
what
media
stream
track
processor
is
is
doing.
Actually,
if
you
do
not
process
frames
fast
enough,
it
will
drop
frames
so
that
you
next
time
you
will
get
the
freshest
frame
and
to
to
have
that
and
to
also
reduce
buffering.
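For reference, a sketch of that behavior as the mediacapture-transform draft describes it: the processor keeps a small internal queue (maxBufferSize) and drops the oldest frames when the consumer lags, so a slow reader still sees fresh frames. `heavyProcessing` is a hypothetical stand-in:

```js
const processor = new MediaStreamTrackProcessor({ track, maxBufferSize: 1 });
const reader = processor.readable.getReader();
for (;;) {
  const { value: frame, done } = await reader.read();
  if (done) break;
  await heavyProcessing(frame); // may take longer than one frame interval
  frame.close(); // frames that arrived meanwhile were dropped, not buffered
}
```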
D
The
second
pipeline
on
on
the
silver
second
below
is
the
one
where
we
are
doing:
processing
sequential,
which
has
a
cost
of
course,
but
it
has
also
some
benefits
in
in
our
case,
and
I
think
that
in
general
we
it's
safer
to
to
get
the
second
pipeline
model.
For
me
for
video
frames,
and
the
first
issue
is
that
the
second
pipeline
cannot
be
built
currently
with
streams.
D
You
can
only
basically
build
the
the
first
pipeline
next
slide,
and
so
this
is
the
real
issue
because,
as
I
said,
syncs
are
really
hoping
to
get.
The
freshest
video
frames
and
also
video
frames
are
big
and
scary
sources,
and
also
it's
not
very
clear
from
the
web
developer
perspective.
What
happens
because
the
buffering
is
sort
of
sort
of
hidden
to
to
your
application?
D
So
we
really
need
a
solution
to
turn
off
the
frame,
so
universal
issue-
11
11,
11
58,
which
and
chances
are
good
high-
that
we
will
get
a
solution
there
and
the
solution
displayed
here
might
be
using
a
watermark
highway
to
mark
of
zero
and
calling
an
api.
D
When,
when
you
need
it,
I
I
don't
think
the
plan
is
to
have
noble
frame
by
default,
meaning
that
what
we
would
like
to
get
as
a
default,
which
is
the
safe,
safe
behavior,
will
probably
not
be
what
streams
will
be
doing
by
default,
and
so
this
requires
web
developers
to
learn
about
that
which
might
be
good,
but
also,
if
they
use
streams,
they
will
end
up
into
the
first
model,
which
might
have
side
effects.
D
So
it's
a
bit
sad
that
there,
if
we
use
streams,
we
we
end
up
with
not
using
what.
What
is
the
default
behavior
of
streams.
D
Next slide. And it's also a bit sad because, generally with streams, you don't really care about the buffering; the idea is that streams will handle it for you with backpressure. But in our case, for a media pipeline, maybe some limited buffering might be good: if you have two long tasks in the pipeline — like you're doing encoding and the funny-hat thing — then limited buffering between them is sort of important to reduce delay, but nothing else. So it would actually be good to allow some limited buffering, but it's not really easy with WHATWG streams to do that. First, the buffering is done in their own queues, and it's mostly opaque by design.
D
So
you
set
up
the
pipeline,
you
set
the
high
water
mark
and
that's
basically
it
if
you
want
to
change
a
little
bit,
because
there
might
be
some
other
constraints,
then
you
basically
need
to
redo
your
pipeline.
So
you
redo
your
setups
so
that
that
that's
that's
not
great
plus.
The
idea.
D
...the idea is, yeah, you're using non-default stream approaches, which are not easy to understand. I think streams might be able to cover the use case, but in terms of complexity, I'm not sure it's a good trade-off, honestly.
D
So that's it for buffering. Let's go with teeing now.
B
Question
I
think
there
were
some
questions
on
the
buffering
issue.
Is
that
what
you
wanted?
We're
in
the
queue
for
you
on
avar,
so.
D
So
how
should
we
proceed
there,
because
I
I
only
have
like
until
10
minutes?
Why
don't
we
finish
the
slides
and
then
we'll
we'll
take
the
questions
if
that
that
works?
Yeah,
okay,
so
let's
go
with
t.
As
we
know,
mediastreamtrack
have
built
in
support
for
multiple
consumers
and
t
is
the
way
to
do
this.
Multiple
consumer
thing
with
strings
a
typical
example
in
in
this.
D
In
the
slides
there
is
we
we
do
some
effects
on
the
video
and
then
we
we
key
the
the
end
results
to
do
both
rendering
and
encoding
in
parallel,
and
the
use
case
might
be
doing
analytics
in
parallel
to
rendering,
for
instance,
and
t
is
valid
api
it's
already
used
for
the
stream,
so
we
should
be
able
to
use
it
and
we
should.
We
should
not
be
in
a
place
where
we
would
say.
Oh
no,
please
not
your
st.
We
will
find
something
else.
We
we
should
be
able
to
use
this
so
next
slide.
D
As
we
discussed
this
a
couple
of
times.
We
know
that
t
is
broken
if
a
stream
is
steered
in
branch,
one
and
branch,
two,
the
same
object,
so
the
same
frame
f
will
be
presented
to
both
branches,
and
if
one
branch
is
closing
f,
then
the
overbranch
is
only
getting
a
frame
which
is
already
closed,
so
cannot
cannot
really
use
it.
D
The
potential
solution
here
is
to
use
a
structured
clone
as
discussed
in
issue
1156,
and
it
should
be
fairly
straightforward
to
adult
I
mean
if
you
compared
to
the
other
case
with
buffering
there.
It's
stretcher
cone
equals
true
and
we,
I
hope
we
know
what
structured
clone
means.
So
that's
that's
kind
of
okay.
D
So, are we good? Not really. We need more changes to tee than just structured clone, because if you apply structured clone, then you create hidden buffering, and if the branches are not consuming at the same pace, you end up with issues. The typical issue will be that the camera will run out of buffers, and branch one will starve of data because branch two is not consuming enough of the data. The typical solution, in a typical real-time processing flow of video frames, is to drop frames as needed, like MediaStreamTrackProcessor is doing; that way branch one can continue at its pace and branch two can get fresh frames as well.
D
Understanding
of
the
issue
right
now
is
that
there's
no
clear
solution
to
that
issue,
in
particular
streams
by
design,
are
not
expected
to
drop
frames
internally
and,
if
you
add
external
mechanisms
outside
of
streams
to
read
as
fast
as
possible,
then
you
end
up
into
issues
with
potentially
with
pressure
as
well
and-
and
that's
that's
really
a
bummer,
because
it's
very
specific
to
what
what
working
group
screens
as
we
said,
as
we
saw
in
july
gs,
promise
based
callbacks.
D
There
can
solve
this
issue
in
a
couple
of
lines,
so
it
should
be
possible
to
to
solve
in
a
in
a
good
way,
but
since
streams
have
a
model
and
the
model
is
not
aligned
with
the
goal
there,
it
it
makes
things
more
difficult
than
than
it
should
be
and
yeah,
but
that's
an
issue
that
we
we
need
to
solve.
Somehow
if
we
want
to
use
strings
there.
D
The expected use of VideoFrame is to call close as soon as possible, because video frames are scarce resources; usually they stem from a buffer pool, and if the buffer pool is exhausted, then you're not in a good situation, and we do not want to rely on garbage collection there.
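A sketch of the discipline being described: whatever consumes a frame releases it on every path, including errors, instead of leaving it to garbage collection. `consume` is a hypothetical stand-in for the sink's real work:

```js
const sink = new WritableStream({
  async write(frame) {
    try {
      await consume(frame); // whatever the sink actually does with the frame
    } finally {
      frame.close(); // always return the buffer to the camera's pool
    }
  },
});
```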
D
So, whether we call VideoFrame close or not: VideoEncoder does not call close — it internally calls clone — while MediaStreamTrackGenerator is calling close, so if you pipe a MediaStreamTrackProcessor to a MediaStreamTrackGenerator, you end up with close being called automatically. So we're seeing that we have two different models here, and streams could be used both ways; there's no way to enforce, say, that a writable stream will always call close, for instance.
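The two lifetime models side by side, as a sketch. With WebCodecs, encode() works on an internal clone, so the caller still owns — and must close — its frame; a MediaStreamTrackGenerator sink instead takes ownership of frames written to it. `handleChunk` is a hypothetical output callback:

```js
const encoder = new VideoEncoder({ output: handleChunk, error: console.error });
encoder.configure({ codec: 'vp8', width: 640, height: 480 });

// Model 1 (WebCodecs): the API clones internally; the caller closes.
encoder.encode(frame);
frame.close();

// Model 2 (mediacapture-transform): the sink closes frames written to it,
// e.g. processor.readable.pipeTo(generator.writable) closes each frame for you.
```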
D
It's not done that way, and from an API perspective it's difficult and error-prone for a web developer to actually understand where they are — in which world they are, whether a library will go in world "close" or world "not close, it's up to you" — since there's no API contract. I'm not quite sure how we can solve this.
D
What
possibly
one
possibility
would
be
to
go
with
some
form
of
dedicating
handling
in
the
stream
of
video
frames
object
like
a
subclass
or
something
like
that
where
we
would
have
built-in
video
frame
memory
management.
Let's
say
that
you
call
you
enqueue
a
video
frame,
then
the
video
frame
will
be
cloned
and
when
you
remove
it
from
the
queue
from
the
internal
queue,
you
will
call
close.
D
I
haven't
heard
any
work
in
that
direction,
so
it's
something
we
need
to
solve
and
if
you
look
at
the
pipeline,
if
you
start
to
change
the
pipeline,
you
will
need
to
cancel
things,
and
if
you
cancel
a
stream
that
has
some
video
frames
in
its
queue,
then
you
rely
on
garbage
collection
to
get
the
video
frames,
and
this
will
happen
if
you,
for
instance,
tear
video
frame
and
you
one
of
the
key
video
frame
will
have
buffers
in
its
queue
and
then,
if
you
cancel
one
of
them,
then
you
can
only
use
garbage
collection.
D
So
we
need
a
solution.
I
don't
know
what
we
can
do
there,
but
it's
certainly
something
that
we
we
should
do,
and
I
have
two
minutes
for
the
conclusion.
So
that's
good
next
slide.
So
yeah
next
slide,
so
we
we
need
yeah.
We
need
to
solve
those
issues,
buffering
t
and
lifestyle
management
specifically
for
video
frames,
a
stream
studio
frame.
D
My point is that we need to solve those issues, or have very good confidence that we'll solve them, before selecting the API model, because if we select a model, it means that we want developers to go into that model. So we need to be very sure that it's a good model; once we pick a model, we need to be sure of it.
D
We
if
we
use,
if
we
select
streams
which
has
its
own
benefits,
we
should
exam
streams,
integration
with
exiting
a
new
apis,
and
that
means
video
decoder,
video
encoder.
I
mentioned
barcode
decoder
and
face
detector,
which
are
also
existing
apis
or
at
least
proposals,
and
it
does
not
seem
that
we
are
going
into
that
direction.
D
So that's it for my presentation, and there were some questions.
B
Yes,
we're
gonna
open
it
up
for
questions
now
for
yannibar.
E
Yes,
sorry
about
that
so
yeah.
I
just
had
a
couple
of
comments
on
the
slide.
E
The
controller
release
back
pressure,
I
think
the
goal
there
is
you
and
mentioned
that
that
won't
be
the
default
behavior,
but
I
do
believe
the
idea
is
that,
specifically,
if
you
create
transform
streams
which
have
a
readable
and
a
writable,
if
you
put
a
high
water
mark
of
zero
there
today,
the
pipeline
just
stops
so
yeah.
E
I
think
the
idea
is
that
if
you
specify
a
transform
stream,
you
should
be
able
to
specify
a
transform
stream
with
a
zero
high
water
mark,
which
means
no
buffering,
and
then
the
transform
stream
will
automatically
call
release
back
pressure
when
something
downstream
from
it
reads
from
it.
I
think
that's
the
main.
What
we're
trying
to
solve
is
the
concept
of
bufferless
transform
streams,
which
would
be
helpful,
and
the
other
thing
was
about
dynamic
buffering.
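A sketch of the bufferless transform being described. Today a high water mark of zero stalls the pipe; the idea under discussion is that such a transform would release backpressure only while a downstream read is pending. `process` is hypothetical per-frame work:

```js
const bufferless = new TransformStream(
  {
    transform(frame, controller) {
      controller.enqueue(process(frame)); // per-frame work, no queueing ahead
    },
  },
  { highWaterMark: 0 }, // writable side: don't buffer frames waiting to transform
  { highWaterMark: 0 }  // readable side: don't buffer transformed frames
);
```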
E
It's
true
that
the
the
convenience
feature
of
providing
high
water
marks
is
a
static
one
that
you
set
up
at
the
time
of
the
pipe,
and
you
have
to
tear
down
the
pipe
again,
but
nothing
prevents
javascript
to
write,
transforms
to
implement
their
own
buffering,
which
could
be
dynamic
and
they
can
even
drop
frames.
They
just
can't
have
a
zero
high-water
mark
to
be
totally
buffer-less
today
and
that's
what
we're
trying
to
solve
in
that
issue
and
on
the
third
and
last
on
the
stream
reliance
on
a
government
collection.
E
I
think
that
pops
up
as
an
issue
in
error
situations-
and
I
would
suggest
that
we
might
be
able
to
solve
that
by
having
the
the
producer
of
the
frames
own
on
those.
So
there
might
be
some
so
error.
Situations
is
a
narrower
scope
of
problems,
but
we
still
need
to
solve
them.
D
Yeah,
I'm
thanks.
I'm
I'm
not
totally
optimistic
in
solving
that
at
the
source
level.
Honestly,
I'm
I'm
looking
forward
to
the
solution
there,
which,
if
we
have
a
solution,
then
I
I'll
be
happy
to
remove
my
concerns,
but
so
far
I
do
not
see
how
it
will
work.
So
my
my
understanding
is
that
and
more
generally
with
lifetime
management,
there's
no
api
contract
right.
D
If
you
can
use
you
don't
know
whether
close
will
be
called
or
not,
and
it's
depending
on
the
api,
and
but
I
mean
I
like
consistency
and
I
like
that
we
would
go
with
just
one
model
if
we,
if
we
had
like,
if
we
would
implement
our
own
stream
based
video
frames,
we
would
probably
when
we
queue
a
video
frame
into
the
internal
queue.
D
B
Okay,
I
had
a
few
questions
you
when,
in
the
current
model,
where
we
don't
have
high
water
mark
of
zero
in
the
transform
frames,
have
you
noticed
a
significant
delay
building
up
because
I've
been
trying
to
construct
these
pipelines
with
up
to
like
five
steps,
and
I
haven't
haven't,
noticed
enormous
delay.
D
It
really
depends
what
what
happens
so,
what
you
should
try
is,
for
instance,
on,
like
let's
say
you
have
a
render,
which
is
a
transform
stream.
Let's
say
it's
at
the
end
of
a
pipeline.
Let's
say
that
you
add
a
one
second
delay
and
you
look
at
you
look
at
which
video
frame
you
will
get
and
so
on,
and
you
will
see
that
things
will
will
not
be
will
not
be
great
there.
It's
true
that
what
what
you
happen
to
have
is
like.
D
...a transform, and if you're processing things very quickly, things will be good. If one step is taking some time, like encoding, for instance, it might not be as good, especially if the delay is changing, if you're cutting an I-frame or a keyframe, things like that. And also the issue is that currently, in the camera pool, you might have like 10 video frames, and if you build a five-step pipeline, you have five frames that will automatically be allocated, and it's fixed.
D
It's
totally
fixed.
So
now
you
only
have
five
remaining
slots
and
you
need
to
be
sure
that
the
encoder
which
you
do
not
have
control
with,
will
actually
release
natively
the
frame
so
that
the
camera
can
can.
D
...to it. And if you add to that the fact that maybe it will go out of process, and so on, maybe five is not enough — and that's on devices that have 10 frames. Maybe some devices will have a lower capacity, with a smaller number of buffer frames, and then you will end up with glitches where the camera will stop producing until you release frames, and you will have a frame rate that is decreasing and changing.
D
That's
not
great,
because
you
actually
just
did
five
transform
in
your
pipeline
and
yeah.
We
should
not
be
invested.
B
In
that
issue,
so
one
thing
I
noticed
by
the
lack
of
streams
integration
within
web
codecs
is
that
right
there
are
actually
two
cues
here:
there's
the
encoder
queue
and
then
there's
the
streams
queue,
and
so
you
kind
of
have
to
manage
these
two
cues
yourself
and
in
web
codecs.
There's
no
is
really
the
the
cue
is
not
transparent
at
all.
You
have
to
kind
of
keep
track
of
it
yourself
with
the
pending
output
counter.
D
Yeah, and I want to mention the promise-based callbacks.
Okay,
harold.
A
Yeah,
so
the
discussion
is
discussion,
not
just
questions.
I
will
notice
a
couple
of
observations.
One
is
that
the
web
codex
did
have
a
stream
based
api
for
a
while,
after
we
developed
the
media
stream
check
processor
and
made
this
change
track
generator.
They
said
we
don't
need
to
have
another
one,
so
they
dropped
them
so,
and
we
have
had
actually
very
few
reports
on
people
having
having
trouble
with
these
so-called
issues.
A
I
mean
the
the
discussion
of
iran
24
the
the
issue
of
first
of
of
sarah
buffering.
Oh.
A
I
mean
the
no
no,
the
the
issue
of
transfer
was
a
bit
depressing
because
otherwise,
because
it
started
out
with
a
few
messages
from
people
who
understand
street
streams
very
well,
and
then
it
got
the
couple
of
back
and
forth
between
me
and
y'all
nevada
and
then
nothing
so
I'm
eager
to
see
that
discussion
come
back,
and
my
impression
is
that
the
real
problem
here
is
that
the
streams
model
has
been
somewhat
somewhat
confused,
with
the
act
with
the
streams
shim
implementation,
which
has
these
problems
well.
A
On
the
comment
of
t,
I
had
had
a
fun
experience
of
reading
the
the
cl
that
added
the
tea
to
the
specification
and
all
the
people,
all
the
things
that
people
were
worried
about
with
tea
when
they
added
it
to
the
specification
were
in
fact
the
same
thing.
We're
worried
about
t
is
a
bad
design
and
it's
fairly
trivial
to
write
like
this
much
javascript
and
have
the
tea
you
want,
and
the
tea
you
want
is
actually
quite
dependent
on
on
your
application.
A
One
of
the
nasty
pieces
of
tea
that
you
didn't
mention
is
that
it
doesn't
respect
high
water
mark
on
the
on
the
downstreams,
which
stunned
me
when
I
discovered
that
so
t
is
bad.
I
realize,
on
the
contract
point
I'm
just
taking
trying
to
take
off
the
point
so
that
we
get
to
it
within
the
next
five
minutes.
Are
the
contact
points
yeah?
A
I
think
it's
natural
to
say
that
a
downstream
either
have
to
call
close
or
pass
the
thing
on
to
something
that
the
frame
onto
something
that
calls
close
for
video
frames
and
and
and
that
and
that
they
shouldn't
depend
on
upstream
to
do
anything.
We
do
have
a
problem
with
disrupted
pipelines,
as
you
say,
in
that
disrupted
pipelines,
don't
seem
to
have
a
good
method
of
letting
the
user
define
what
should
happen
to
what's
in
the
pipeline
when
it's
being
disrupted.
A
There's
some
of
the
some
of
this
is
issues
with
with
the
description
more
than
the
implementation,
and
some
of
these
are
issues
that
we
need
to
solve,
but
they're
not
fatal,
like
the
t
thing
is
just
because
it's
possible
to
do
something
badly
is
doesn't
mean
that
that
that
all
usage
usage,
so
this
is
bad.
D
Can I answer, or is it too late — no time?
D
We
have
two
minutes:
two
minutes
yeah.
I
agree
with
you
that
tea
is
bad
and
we
can
try
to
salvage
it.
It
will
be
difficult,
and
this
is
this-
is
due
to
the
use
of
strings.
As
you
said,
you
can
use
javascript
to
do
your
own
key
and
it
will
be
done
in
a
much
better
way
and
that's
why
promise
based
callbacks
is.
Is
the
right?
It's
right
approach.
D
If
you
actually
want
to
use
t,
because
that's
what
we
will
use,
you
will
use
promising
promise
based
callbacks
internally
and
you
will
asynchronously
iterate
over
the
stream.
So
in
that
case,
why
should
we
even
use
things
on
the
closed
case
and
on
other
issues
that
you
that
you
were
seeing
they're,
not
fatal?
D
I
welcome
all
the
transfer
streams
as
well.
I
welcome
changes
and
I
welcome
proposals
that
will
fix
these
concerns.
D
If
we
do
not
have
proposals,
then
what
should
we
do?
It
just
means
to
me
that
we
I'm
not
confident.
I
want
proof
that
reads
will
be
solved
and
I
want
proof
before
we
actually
decide.
Otherwise
we're
not
in
a
good
place,
and
all
of
that
is
because
we
select
streams
which
has
a
defined
model
which
sort
of
match,
but
not
entirely,
and
I
want
it
to
match
entirely
and
so
that
it's
the
perfect,
perfect
thing
and
then
I'll
review.
D
If
we
fix
these
issues,
I
I'll
be
happy
to
to
use
streams
because
they
they
offer
some
some
things
that
are
great
to
have,
but
they
currently
offer
also
very
bad
things,
and
I
want
to
remove
the
bad
things
before
we
go
further.
E
At the same time, I wouldn't say that I disagree with Youenn, but I don't think the issues are huge, and I don't think we should block picking an API surface over them, because, frankly, one implementer is already shipping an API, and I think we need to pick and standardize an API as soon as possible. And with promise callbacks: you're right that the WebCodecs lifetime model for video frames makes it tricky to keep track of things with streams.
E
Promise callbacks may solve that very easily, but the problem is that a promise callback also doesn't work everywhere. There are areas where streams work better — for instance, with audio, which we're not talking about yet — where, for instance, you encode some audio and that doesn't mean there's a one-to-one between encode and getting a callback. Streams can handle that, but at the cost of losing the simple one-to-one input/output correspondence.
D
I
will
I
welcome
pros
and
cons
between
promise-based
callbacks
and
streams
callbacks,
it's
true
and
streams.
It's
true
that
there
I
only
showed
the
issues
I
have
with
strings
and
I
I'll
be
happy
to
also
get
a
list
from
you,
our
geneva
or
lab
about
promise
based
callbacks
and
to
identify
what
are
the
issues
and
then
we
can
compare
issues
of
both
models.
B
Okay,
I
think
we've
got
to
move
on
to
the
next
presentation,
which
is
yanivar.
E
And
unmute
all
right.
Thank
you
for
that
setup.
You
covered
more
threading
than
I
anticipated,
so
this
is
great.
So
the
first
off.
Why
are
we
here?
Well,
threading
is
a
big
issue,
and
I
wanted
to
highlight
the
build
on
what
you
and
just
said
that
today,
the
status
quo
in
in
specs
and
the
cross
browsers
is
that
the
real-time
media
pipeline
is
off
the
main
thread.
E
That
means
that
the
midi
stream
track
interface
is
purely
a
control
surface
on
the
main
thread,
but
that
media
flows
in
parallel
either
on
the
dedicated
media
thread
or
other
threads
multiple
threads.
So
when
you
call
get
user
media,
you
get
a
track
back
and
you
attach
that
track
to
appear
connection
you're
pretty
much
done.
E
The
browser
just
takes
care
of
it
for
you
and
if
you
apply
constraints
to
that
track
to
reduce
the
resolution,
that's
an
asynchronous
call
that,
where
the
actual
change
in
the
bitflow
habits
off
of
the
main
thread.
E
So,
as
I
illustrate
here,
you
can
have
a
media
thread
where
camera
frames
move
through
a
downscaler
to
an
rdp
sender,
and
it
goes
out
on
the
network
and
the
main
thread
is
not
doing
anything
it
doesn't
get
in
the
way
of
that
I
mean
that's
important.
Media
delivery
never
ever
blocks
on
the
main
thread
next
slide
now.
E
This
is
true,
even
in
webrtc
encoded
transform,
where
we
recently
added
apis
to
the
sender
where
javascript
can
modify
the
encoded
data
and
and
there's
a
bit
of
a
deja
vu
here,
and
that
chrome
originally
released
an
api
that
was
on
main
thread
and
we
standardized
an
api
that
was
off
main
thread.
E
So
here
again,
the
media
thread
has
a
send
where
the
bits
go:
sender
encodes
it
and
it
sends
it
to
a
javascript
encrypt
function,
that's
only
exposed
on
worker
and
it
sends
it
out
over
the
network
and
again,
nothing
blocks
the
main
thread
and
that's
for
encoded
video,
not
even
raw
video.
So
since
we
recognize
the
importance
of
this
for
encoded
media,
we
should
be
even
more
concerned
about
exposing
raw
uncoded
media
to
main
thread,
because
it's
a
lot
more
data
next
slide.
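For reference, roughly what that standardized shape looks like (WebRTC Encoded Transform); the options object and the encryption step here are placeholders:

```js
// main.js — attach a worker-side transform to the sender
const worker = new Worker('encrypt-worker.js');
sender.transform = new RTCRtpScriptTransform(worker, { side: 'sender' });

// encrypt-worker.js — encoded frames never touch the main thread
self.onrtctransform = ({ transformer: { readable, writable } }) => {
  readable
    .pipeThrough(new TransformStream({
      transform(encodedFrame, controller) {
        // encrypt encodedFrame.data in place (details elided)
        controller.enqueue(encodedFrame);
      },
    }))
    .pipeTo(writable);
};
```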
E
So
the
premise
here
is
that
the
main
third
is
bad,
and
so
how
do
we
back
that
up?
Well,
chrome
dev
summit
in
2019
had
an
excellent
presentation
by
cirma,
which
is
basically
the
main
thread
is
overworked
and
underpaid,
and
I'm
quoting
literally
here,
there's
a
link.
If
you
can
follow
the
youtube
where
therma
says
we
are
setting
ourselves
up
for
failure
here,
we
have
no
control
over
the
environment.
Our
app
will
run
in
the
main
thread
is
completely
unpredictable.
E
What
takes
two
milliseconds
on
a
modern
flagship
phone
might
take
20
milliseconds
on
the
next
low-end
phone.
How
can
we
escape
this
unpredictability
and
later
in
the
presentation?
The
answer
is
web
workers.
E
He
also
outlines
the
the
frame
deadlines
that
we
have
under
60
frames
per.
Second,
for
instance,
we
have
60
milliseconds
to
complete
all
processing
of
a
frame
before
it
got
goes
to
rendering
or
it's
gonna
jack.
E
The
video
is
gonna
stutter.
So
on
high-end
devices
you
can
have
you
know,
90
frames
per
second
120
and
even
some
devices
with
144
frames,
whether
we're
less
than
seven
milliseconds
of
a
deadline.
To
finish
what
we're
doing
and
then
so.
The
point
here
is
contention
on
the
main
thread
is
common
and
unpredictable,
and
that
leads
to
intermittent
that
can
lead
to
intermittent
missed
deadlines
and
video
stutter.
E
In contrast, contention on a dedicated worker thread is relatively non-existent, because it is a controlled, dedicated environment. Next slide. Now, we mentioned that WebCodecs recently made a decision to expose to window, and here's what they wrote: there is consensus — and we agree — that media processing in general should happen in a worker context.
E
So
where
is
this
working
group
on
on
this?
We
have,
unfortunately,
a
media
capture
transformed
document
that
says
it's
not
an
adopted
working
group
document.
E
That
means
it's
non-standard,
because
there's
no
consensus
and
google,
unfortunately
has
shifted
in
m94.
So
there's
a
time
limit
here.
I
think
for
us
to
try
to
standardize
something,
or
this
becomes
the
factor
standard.
Now.
Why
wasn't
it
adopted?
My
my
thoughts
on
this?
Is
that
the
problems
I
had
with
it?
I
should
say.
E
Also, ergonomics-wise, these APIs are thread-coupled to the main thread, which is very unfriendly to workers, as I'll show in subsequent slides. So we need to standardize an API without these problems and reclaim the URL. This also was designed before MediaStreamTrack was transferable — for which we just had a call for consensus that was positive — so we can rethink the API. Next slide.
E
So,
since
media
stream
track
is
now
transferable,
we
need
to
go
and
look
at
first
principles
from
a
worker
point
of
view.
The
question
here
is
a
worker
encounters
a
track.
How
does
this
access?
How
does
it
access
its
data?
E
Does
it
post
message
it
to
the
main
thread
to
ask
it
to
create
a
media
stream
track
processor
and
post
message
to
readable
back,
or
does
it
access
a
readable
on
that
track?
The
former
makes
no
sense.
It
shouldn't
have
to
go
back
to
main
thread
to
get
access,
and
this
also
doesn't
necessarily
get
us
off
main
thread
in
all
cases.
No
process
next
slide.
E
Does
it
have
to
post
message
to
ask
main
thread
to
create
a
generator
and
post
message
that
writeable
and
maybe
a
track
phone
back,
or
does
it
call
some
api
to
get
a
writable
and
a
track
again?
The
former
makes
no
sense
that
workers
should
be
able
to
create
a
track
directly
without
having
to
ask
main
thread
to
do
it,
and
this
points
out
that
mediaseam
track
generator
is
a
weird
object.
E
So these questions are getting a little harder — apologies. This is a more difficult question, but a very relevant one, which Youenn already posed: how does a worker send processed video frames, say over WebTransport, while also maintaining a self-view that has high resolution, even though the transport might need to drop frame rate? We mentioned tee already, and all those issues.
E
Another
option
would
be
to
clone
the
original
track.
But
then,
if
we're
doing
like
a
video
replacement
like
me
in
the
sky
here,
you
have
to
do
that
processing
twice
and
that's
undesirable,
or
do
you
post
message,
constraints
to
the
main
thread,
create
a
generator
apply
constraints
to
a
clone
from
the
generator
create
a
media
stream
track
processor?
From
that
clone
and
post
message,
the
generator
is
writable
and
the
the
producer
is
readable
back
to
the
worker.
E
No,
you
would
want
to
have
these
apis
available
in
the
worker,
and
this
is
also
an
argument.
I've
heard
is
that
mediastream
tracks
they
need
to
be
on
main
thread
anyway,
right
because
you're
going
to
assign
them
to
video
source
object
or
add
them
to
a
peer
connection.
E
So
here's
an
example
where
send
track
in
this
case
is
solely
created,
so
it's
a
track
clone
that
never
leaves
the
worker
and
is
used
there
to
natively
downscale
post-processed
frames,
instead
of
trying
to
drop
frames
or
down
scale
using
a
transformer
or
using
t.
E
E
And I also want to talk about that sentence I rushed through here: post-messaging a MediaStreamTrackProcessor's readable or a generator's writable back — or transferring them at all to a worker — actually violates the intent of WHATWG's transferable streams, which have tunnel semantics. They're a special kind of identity transform which has the writable side in one realm and the readable side in the other, to implement cross-thread transforms.
E
They
only
transfer
one
side
of
the
stream,
basically
the
readable
or
the
right
one
and
they
create
exist
tunnels
on
purpose
not
solving
not
to
solve
creating
them
on
the
wrong
thread
in
the
first
place
and
without
spec
breaking
ui
optimizations.
E
That
means
media
is
doing
track
process
where
readable
remains
anchored
to
main
thread
on
one
side,
even
after
transfer
which
to
me
makes
mediastion
processor
a
broken
api
built
on
broken
assumptions
and
next
slide.
E
So
an
alternative
proposal
and
here's
a
link
to
issue
1559,
which
also
has
a
spec
document
and
more
details
you
could
look
at.
I
would
encourage
you
to
look
at
after
this
presentation
since
I'll
be
going
through
the
basics
here,
the
goals
are
to
align
the
api
with
transferable
medium
tracks,
for
simpler
api
surface
and
remove
the
aforementioned
blockers
to
standardization,
by
exposing
real-time
media
pipeline
to
workers
and
not
main
thread,
and
also
encourage
user
workers
by
making
it
simple
friendly
to
workers
and
the
default,
and
also
discourage
usage
on
the
main
thread.
E
This
proposal
also
starts
with
video
to
in
order
to
reach
an
agreement
and
because
we're
trying
to
present
an
api
to
the
working
group
that
will
be
adopted,
and
we
do
so
by
removing
exposure
on
the
main
thread,
which
is
what
mozilla
and
safari
want.
We
focus
on
video
for
now,
which
is
safari,
has
been
interesting
and
we're
also
using
we're
still
using
streams,
which
is
what
chrome
mozilla
prefer.
E
So
you
might
wonder
we
just
had
all
this.
You
unpresented
all
these
reasons
why
streams
are
bad.
E
That
means,
if
you
see
a
medium
stack
on
main
media
stream,
track
on
main
thread.
You
won't
see
this
attribute,
but
if
you
post
messages
to
a
worker-
and
you
look
at
it
there-
you
will
have
this
readable
attribute,
so
I'm
showing
the
javascript
only
the
worker
side
of
it
here.
You
went
showed
example
slides
earlier,
where
that
includes
the
post
messaging,
so
here
a
worker.
This
is
a
sort
of
read-only
example
where
we're
just
reading
and
we're
sending
it
over
web
transport
where
we
get
a
track
in
the
worker.
E
We
await
we
basically
just
pipe
it's
readable
through
some
kind
of
a
wrapper
for
web
codex,
and
we
also
have
to
transform
it
to
serialize
it,
which
also
means
chunking
it
and
stuff
for
you
know,
and
it
this.
This
might
not
be
how
you
would
send
things
over
web
transport,
but
this
is
a
simple
example
right,
so
it
says
sync
and
that's
it.
The
key
here
is
that
the
attribute
is
only
exposing
workers
and
that
keeps
data
off
the
main
thread
next
slide,
so
a
little
more
complicated.
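A sketch of that read-only worker example, following the shape in issue 1559 (a proposal, not shipped API); the WebCodecs wrapper and serializer stages, and the endpoint URL, are hypothetical:

```js
// worker.js — `readable` is exposed on MediaStreamTrack only in workers
self.onmessage = async ({ data: { track } }) => {
  const wt = new WebTransport('https://example.com/ingest'); // assumed endpoint
  await wt.ready;
  await track.readable
    .pipeThrough(createEncoder())     // hypothetical WebCodecs wrapper
    .pipeThrough(createSerializer())  // hypothetical chunking/serialization
    .pipeTo(wt.datagrams.writable);
};
```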
E
Let's
say
we
want
to
read
and
write.
We
want
to
basically
have
a
self
view
here
or
show
local
view
only
and
direct
your
attention
again
to
the
blue
box
in
order
to
so
the
previous
slide
was
basically
the
stem
track
processor,
and
this
slide
is
made
into
a
generator
where
we
need
to
put
data
back
into
a
track,
put
humpty
dumpty
back
together
again,
so
we
exposed
only
on.
E
A
new
interface
called
video
track
source
that
has
a
writable
attribute
and
a
track,
and
also
a
boolean
muted
attribute,
because
sources
in
the
midi
stream
track
model
you
can
mute,
sources
can
mute
themselves
and
all
that
means
is
that
downstream
tracks
get
muted
events
fired
on
them,
so
here
again
the
worker
receives
a
track.
E
So
what
we
want
to
do
here
this
is
an
example
from
web
codex
that
I
modified
it's
basically
a
crop
example
where
you
use
offscreen
canvas
to
crop
some
video,
that's
basically
the
simplest
example
of
modifying
bits
that
actually
works.
So
we,
the
worker,
gets
the
track.
We
create
a
video
source
object
and
we
post
message
the
sources
track
back
to
main
thread
where
it
can
be
assigned
to
a
source
object,
and
I
know
what
we
do
the
same.
E
E
...as on the previous slide: we pipe the readable through a transform stream, which is shown below as a crop, and then we pipe it to the source's writable, and that's it. This aligns better with the mediacapture-main spec, which separates sources from tracks, and this makes it easy to extend the source interface later. The source critically stays in the worker, while its track may be transferred cleanly to the main thread, and this is also extremely clean and simple in how it interacts with track clone and even structured cloning.
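A sketch of that crop example under the proposed VideoTrackSource (names from the proposal, not shipped API); `crop` stands in for a hypothetical OffscreenCanvas-based transform:

```js
// worker.js
self.onmessage = async ({ data: { track } }) => {
  const source = new VideoTrackSource(); // proposed interface
  // The source stays in the worker; its track transfers cleanly to main.
  self.postMessage({ track: source.track }, [source.track]);
  await track.readable
    .pipeThrough(new TransformStream({ transform: crop }))
    .pipeTo(source.writable);
};

// main.js
worker.onmessage = ({ data: { track } }) => {
  videoElement.srcObject = new MediaStream([track]);
};
```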
E
That's
actually
a
flaw
with
me
to
subtract
generator.
Is
it's
sort
of
a
pseudo
object?
That
is
a
track,
but
it's
also
a
subclass
of
a
track.
So
if
you
call
track
clone,
you
don't
get
another
major
stream
track
generator.
Now
you
get
a
plain
track,
so
this
solves
that
next
slide.
E
All
right
process
video
and
apply
constraints,
so
an
issue
we
have
is
to,
if
we're
going
to
do
any
kind
of
processing
like
say
we're,
cropping
a
video
you
might
want
to
have
a
self
view
of
that.
That
has
a
high
frame
rate,
but
you
might
also
want
to
lower
the
frame
rate
depending
on
where
you're
sending
in
so
here's
an
example
that,
where
you're
sending
over
a
peer
connection,
which
is
basically
you
send
it
to
the
worker,
do
the
processing
send
the
track
back
and.
E
Sorry,
this
one:
yes,
so
you
assign
the
so
when
you
get
the
track
back
to
main
thread,
you
assign
it
to
you
admit
you
take
a
clone,
basically
and
assign
it
to
the
source
object
and
then
on
the
track.
You're
sending
you
can
now
apply
constraints
and
you
can
have
a
lower
left
resolution
and
lower
frame
rate
on
what
you're
sending
so
you
can
send,
for
instance,
30
frames
per
second,
while
the
self-view
doesn't
start
to
stutter,
but
it's
still
a
good
60
frames
per
second.
If
that's
what
the
camera
source
was.
E
That's great. Next slide.
E
Now
that
works
great
for
peer
connection,
but
what,
if
you're
sending
over
web
transport?
Well
with
web
transport?
We
really
wanted
to
encourage
you
to
use
open
the
web
transport
in
the
worker.
So
this
is
the
same
example
basically,
and
it
shows
how
you
can
how
media
stream
tracks
now
are
useful.
Also
in
the
worker.
Again,
we
receive
a
track.
We
get
a
source
from
it.
E
We
clone
that
source
and
we
send
the
original
source
back
to
main
thread
for
self
view
at
60
frames
per
second
and
then
now
we
have
the
clone,
which
is
called
send
track.
In
this
example,
we
can
apply
constraints
to
it.
Do
anything
we
want,
just
as
if
we
were
on
main
thread
and
now
we
can
open
a
web
transport
connection
and
we
can
read
from
that
send
track.
E
So
what
we're
actually
reading
from
we're
doing
two
things
here:
we're
reading
from
the
track
readable,
sending
it
through
the
cropping
and
piping
it
to
the
source
writable
and,
at
the
same
time,
we're
reading
from
the
send
track
readable,
which
is
the
processed
downscaled
data
and
sending
it
out
to
web
transport
I'm
using
pipe
through
here,
which
is
a
bit
clever
thing
an
earlier
example.
I
had
a
promise
all,
and
these
are
really
two
operations
that
are
separate,
but
it
shows
you
that
you
can
natively
downscale
and
that
medicine
tracks
apply
constraint.
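Putting the pieces together as a sketch (proposal API; `track` is the received track, and `cropper`, `serializer` and `url` are hypothetical): the original track feeds a full-rate self-view on the main thread, while a worker-side clone is natively downscaled with applyConstraints and sent over WebTransport:

```js
// worker.js
const source = new VideoTrackSource();                      // proposed interface
const sendTrack = source.track.clone();                     // worker-side clone
self.postMessage({ track: source.track }, [source.track]); // 60 fps self-view

await sendTrack.applyConstraints({ frameRate: 30 });        // native downscale

const wt = new WebTransport(url);
await wt.ready;
await Promise.all([
  track.readable.pipeThrough(cropper).pipeTo(source.writable),
  sendTrack.readable.pipeThrough(serializer).pipeTo(wt.datagrams.writable),
]);
```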
E
So this is the summary slide. The benefits that we see: a simpler API that takes advantage of transferable MediaStreamTrack; there are fewer new API objects to learn; and it's friendly to workers.
E
It
satisfies
core
principles
from
a
worker's
point
of
view
without
main
thread
and
tangled
apis
means
that
the
worker
has
a
data
object.
It
can
do
something
about
it
and
critically.
E
It
does
not
block
real-time
media
pipeline
on
the
main
thread
by
default
and
the
source
stays
in
the
worker
separate
from
his
track,
which
gives
us
clean
transfer,
apply,
constraints
is
available
in
the
worker,
there's
parity
and
more
with
media
stream,
track
processor
and
generator
and
both
features
and
brevity,
and
I
have
a
link
to
a
javascript
fiddle
and
that
this
highlights.
E
I
think
harlow's
also
going
to
show
that
the
differences
between
these
apis
aren't
that
drastic
as
far
as
the
number
of
lines
of
code
and
it's
more
about
whether
it's
allowed
in
workers
or
not,
and
whether
it
relies
on
transferable
streams,
ui
optimizations
and
there's
a
source
muted
attribute
which
we
threw
in
and
that's
it.
E
I
also
have
some
appendix
slides
that
I
might
have
time
to
go
through
since
I'm
a
little
early.
If
we
look
at
appendix
a
I
might
as
well
go
through
them.
There
was
mentioned
earlier
that
we
want
promise
callbacks,
not
streams.
E
Well, you can actually do that, because with every readable — and this applies to any streams-based proposal — you could pipe that readable to a writable, and you're basically being called back: your async function is going to be called. This is basically a promise callback, is what I'm trying to say. And here's what Youenn wanted to show, I think: that you can now do operations like encode, and you can guarantee that frame close will be called.
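The appendix point as a sketch: piping a readable to a WritableStream whose write() is an async function is, in effect, a promise-based callback, and it can guarantee the frame is closed on every path:

```js
await track.readable.pipeTo(new WritableStream({
  async write(frame) {
    try {
      encoder.encode(frame); // WebCodecs clones the frame internally
    } finally {
      frame.close();         // guaranteed, even if encode() throws
    }
  },
}));
```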
E
And appendix B, the next slide: I added this slide only in case people want to have more than one readable from a track, but so far it seems like people are more than happy to suggest cloning the track for these things, which came up to avoid tee, for example. So I don't think this is that critical, but I wanted to throw it out there. And I think that's it; I'm going to skip the last slide and we can move to discussion.
B
Okay,
so
we're
have
discussion
until
9
30.-
I
guess
okay,
so
we
have
harold
and
then
in
the
queue
yeah.
So.
A
The examples where you post a message to the main thread to get a MediaStreamTrackGenerator, get the stream generated, and post it back — that's just wrong. I mean, obviously MSTG and MSTP were designed to be available in the same context the track is available in. So as soon as tracks are available on workers, MSTP and MSTG obviously need to be there too.
A
I
was
not
very
happy
with
the
quoting
of
chris
cunningham's
message
for
on
on
the
on
the
web
codex
decision,
because
one
of
the
heavy
things
he
said
was
that
the
availability
of
related
apis
is
a
strong
reason
why
developers
need
to
be
able
to.
A
He
did
mention
that
his
decision,
they
did
not
did
not
foreclose
the
discussion
on
msdbs
and
mr
g,
but
so
it
doesn't
foreclose
it
either
way,
and
the
third
point
was
the
the
picture
of
transferable
transferring
streams
where
you
said
that
it's
a
pipeline
between
the
region,
context
and-
and
this
is
nation
destination
context-
that's
completely
true,
but
you
imply
that
the
sword
contacts
this
the
main
chat,
which
is
not
true.
A
So
these
are
just
things
I
would
say,
misrepresent
the
con.
The
context
they're
not
direct
comments
on
the
actual
api,
we'll
revisit
the
discussion
about
whether
or
not
they
should
be
available
on
matrix,
but
the
shape
of
the
api.
I
like
it
it's
very
much
very,
very,
very
similar
to
what
I
propose.
E
Okay,
I
I
I
apologize.
If
I
misrepresented,
I
see
that
the
mediastrip
track,
processor
and
generator
is
exposed
to
dedicated
worker,
and
I
guess
I
missed
that.
But
however
I
don't
know
that
they're
transferable.
E
Which
raised
the
question
of,
but
so
I
would
still
say,
though,
that
the
model
that
people
seem
to
so
it's
your
suggestion
that
people
should
so.
There
are
two
ways
to
do
things
now
right,
so
you
can
now
either
create
you
have
your
track
on
main
and
you
create
a
media
stream
track
processor
there
and
you
transfer
it's
readable
to
the
worker
or
you
transfer
the
mediasting
track
to
the
worker
and
you
create
a
major
stream
track.
Processor.
There.
A
The
moment
is
that
we
can
we
people
can
implement
the
first
one
now
and
because
we
don't
yet
have
transferable
media
stream
tracks
or
availability
of
mediums
interact
on
worker.
We
haven't
worked
through
that
yet
codewise
and
so
right
yeah.
Once
we
have
it.
Yes,
you
have
two
ways.
A
E
No, I agree, but they should remain what they were designed to do, which is to create tunnels between threads, and then we wouldn't have to implement these magic optimizations in order to get things entirely off the main thread.
D
Yeah — this is the one where you say tee is bad and we should not use it. I agree.
D
Currently,
I
hope
we
will
be
able
to
use
it,
and
the
reason
is
that
the
the
small
example
is
not
creative,
either
you're
losing
back
pressure,
and
I
know
univer
you're
a
big
fan
of
back
pressure
and
you're
you're
losing
it
there,
which
is
not
great,
and
I
believe
that
we
might
be
able
to
make
it
work,
I'm
not
quite
sure
yet,
but
we
might
be
able
to
add
it
to
either
two
medium
track,
transform
maybe
or
we
can
fix
t,
but
we
should
be
able
to
to
have
back
pressure
and
so
there's
an
issue
there.
D
Definitely
in
general,
I
think
that,
in
terms
of
api
shape,
if
we
assume
that
we
think
stream
is
the
right
way
to
do,
I
think
the
api
shape
is
is
good.
It's
correct.
That's
that's
solving
some
of
the
issues
that
I
had
with
the
prior
proposal.
D
I
think
that
in
general,
in
media
capture
main,
we
have
the
concept
of
a
source
and
we
have
a
concept
of
a
track
and
we
have
we're
defining
the
relationship
between
the
source
and
the
track
and
the
fact
that
we
are
introducing
introducing
a
video
track
source.
A
javascript
object
that
represents
a
track
source
is
a
good
thing
and
it's
very
similar
to
readable
stream.
D
You
can
have
a
native,
readable
stream,
but
you
can
also
have
a
readable
stream
with
the
javascript
source
and
the
javascript
source
is
also
an
object
that
abides
to
specific
things,
and
so
I
think
we
should
go
there.
It
will
be
easier
to
extend
things
if
we
think
if
we
think
so,
we
remove
some
edge
cases,
so
it
will
be
much
cleaner.
D
So
that's
why
I
I
would
tend
to
go
with
this
way.
As
I
said,
there's
no
surprise
for
me
to
prefer
not
relying
on
transferable
streams
there.
I
think
that
we
should
rely
on
media
stream
track
transfer,
which
will
be
more
reliable,
which
will
be
a
typed
way
of
transferring,
and
since
it's
typed
they
are
like
requirements
that
that
will
be
fulfilled
while
with
streams
you
don't
know
whether
these
same
requirements
will
be
fulfilled
because
things
are
generic
objects.
So.
B
Okay,
thanks
ewan
yeah.
E
So
I
can
respond
yeah.
I
may
have
made
a
mistake
in
the
example
of
which
track
I
cloned,
because
I
I
chose
the
clone
to
be
what
you
would
apply
constraints
to,
but
I
think
it
might
work
if
we
flip
it
around
right,
so
that
the
we
send
the
original
track
right.
D
I
don't
I
don't
think
so
it
I
don't
think
it
will
work.
The
thing
I
would
like
to
to
get
to
is
like,
if
you,
if
you
drop
a
frame,
if
you
have
like
a
source-
and
you
have
like
a
five
pips
track
and
a
35
tracks,
then
you
should
you
should
get
some
back
pressure
that
says
hey.
D
I
actually
want
30
frames
per
second
to
the
video
track
source
and
we
have
no
guarantee
there
there's
nothing
there
and
it's
due
because
there's
no
back
pressure
on
the
writable
stream
and
if
we
start
to
introduce
back
pressure
on
the
writable
stream,
which
we
might
want
to
do,
it
would
be
a
good
exercise
to
do
then
we're
in
a
good
situation,
because
promise-based
callbacks
have
that
and
we
will
not
be
able
to
have
that
with.
We
may
just
interact
as
video
track
sources,
so
we
should
be
able
to
so.
D
Javascript
will
need
to
handle
it.
So
I
think
we
can
make
progress
there
if
we
really
want
to
go
with
students.
E
Sorry
bernard,
can
I
just,
can
you
jump
to.
B
E
47,
since
I
ended
a
little
early,
this.
B
E
E
E
I think Youenn is concerned about backpressure, and he's right. So I did actually write a fiddle that I got working in Chrome, even, that uses tee, and if you use the tee solution, one of the things it does for you is that you do get backpressure — but at the cost of having to solve the tee problem. I was able to solve that by polyfilling a synchronized tee through structured clone, thanks to Mattias Buelens, and the only odd thing is the frame dropper — it's the four or five lines from the bottom; you have to have a transform stream, basically, that drops frames. So that would be a way of solving the tee problem, but this is why we're here. So yes, it's true that using a track tee and apply constraints is a workaround for this, basically, if we cannot solve the tee problem.
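A sketch of such a frame dropper. A plain TransformStream would buffer rather than drop (its backpressure simply delays the writer), so this version wires a raw readable/writable pair that keeps only the freshest frame when the consumer lags:

```js
function frameDropper() {
  let latest = null; // most recent frame not yet delivered
  let wake = null;   // wakes a pending pull when a frame arrives
  const readable = new ReadableStream({
    async pull(controller) {
      while (!latest) await new Promise(r => (wake = r));
      controller.enqueue(latest);
      latest = null;
    },
  });
  const writable = new WritableStream({
    write(frame) {
      if (latest) latest.close(); // consumer is behind: drop the stale frame
      latest = frame;
      if (wake) { wake(); wake = null; }
    },
  });
  return { readable, writable };
}

// usage: cameraReadable.pipeThrough(frameDropper()).pipeTo(slowSink)
```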
B
Yeah
yeah
a
bunch
looking
through
these
and
that
these
didn't
make
sense
to
me
right
because,
basically
you
would,
I
think
you
wouldn't
post
message
to
the
main
thread,
because
you
basically
create
the
track
processor
on
the
main
thread
and
and
transfer
the
stream
or
and
what
you're
saying
you
would
just
transfer
the
track.
So
I
don't
these
slides,
don't
matter
yeah,
I
don't
think
they're
distinctions
between
the
two
apis
right.
E
So Harald is right: there are a lot of similarities between the APIs, but the difference is that MediaStreamTrackProcessor creates a new class on the producer side, whereas in mine and Youenn's proposal it does not create a new object. In that case we just add a readable to the track, which we think is sufficient and simpler, because a readable already has locking semantics, and once it's locked, you know that it's being consumed; we don't need a new object there —
E
Other
than
to
to
add,
if
you
want
to
add
some
options
that
you
couldn't
add
to
the
track
itself,.
B
I believe this is a quote from Chris Needham, not Chris Cunningham, who is a working group chair. And, as Harald said, there was a discussion of where we are with respect to worker support overall, and I think one of the big points, as Harald mentioned, was that, for example, Youenn has recently proposed RTCDataChannel in workers, which is not widely implemented — but that's an example of something that today is only on the main thread. And so, as Harald mentioned, the big issue — a lot of what was constraining people to the main thread — was lack of consistent worker support across the API set anyway.
E
Anyway — so yes. The other thing I want to mention is that MediaStreamTrackGenerator is a bit of an odd duck in that it's also a track, and that's where Youenn and I feel we should have a separate API on the producer side; that's a cleaner model that fits neatly with the sources and sinks model in media capture. As far as areas where worker APIs are not available...
E
...yet, I think that's actually a good use case for transferable streams, because once you have set this up in the worker, you can, if you need to, transfer these streams using tunnels back to the main thread, for those few APIs — or those browsers — that haven't implemented all APIs in workers yet. And that does not require breaking the transferable-stream semantics; I think that's the proper use of them.
D
I don't know who's in the queue — you plus myself, but I'm not sure.
A
Yeah
the
that
this
was
a
response
to
that.
If
you
have
have
to
tell
some
place
up
the
stream
that
your
desired
frame
rate
is
30,
then
back
pressure
cannot
carry
that
information
right
right,
because
back
pressure
cannot
tell
the
difference
between
I'm
slightly
late,
and
I
have
to
wait
for
one
other
frame
and
I
only
want
every
other
frame.
So
then,
a
frame,
dropper
or
separate
tracks
with
separate
separate
frame
rates
is
the
way
to
go.
A
We
need.
We
need
those
kinds
of
signals
to
be
carried.
It's
one
of
the
areas
where
I
that
I
have
always
said
that
we
needed
to
to
to
work
on
with
the
with
the
with
streamspace
proposals
this
the
the
singles
that
have
to
go
in
the
other
direction,
and
we
haven't
gotten
around
to
it
yet,
and
that
worries
me.
D
I tend to agree with Harald that it's not always desired, but there are cases where it really depends on whether the source is pull or push. We are mostly concerned here about pull sources, but in case it's push, then it's actually useful to do that kind of thing, and I wanted to mention that need to propagate things up to the source.
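Roughly, the pull/push distinction in stream terms; a sketch with hypothetical producer callbacks, not tied to either proposal:

// A pull source does work only when the consumer asks for the next chunk,
// so backpressure naturally throttles it.
const pullSource = new ReadableStream({
  async pull(controller) {
    controller.enqueue(await produceNextFrame()); // hypothetical producer
  },
});

// A push source (like a live camera) enqueues on its own schedule;
// backpressure can only be observed after the fact, e.g. via desiredSize.
const pushSource = new ReadableStream({
  start(controller) {
    onFrame((frame) => controller.enqueue(frame)); // hypothetical callback
  },
});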
D
You might be right that in some cases we should not use backpressure, but we certainly need to provide that information in some way. You mentioned the signal mechanism that we removed, and I agree with that; we should work on it and fix it. Related to Bernard: you mentioned that the data channel is only available on the main thread, and that's true in some browsers.
D
That said, that does not remove the idea that going to a worker is a good thing, because you go to the worker to get video frames, and you will not be able to push the video frames into the data channel as-is. You will need to do some processing in any case, and it's better to do that processing in the worker. You set up all the things there, and sadly, maybe the last bit will be done...
D
You will need to go back to the main thread, but you still have some gains, because you're doing the heavy things, like serialization, packetization, encoding and so on, in a worker, which is better. And you will be ready, whenever people implement data channel in workers, to actually embrace it and use it. So that seems good for that particular use case. There may be other APIs that are missing related to media processing.
D
We definitely need to look at them, but my understanding is that we're in a good place there, and if the issue is implementations lagging, then we should push implementers to do more things in workers.
B
Okay, so there's nobody left in the queue. Do we want to move on to Harald's presentation a bit early? It could leave some more time for discussion. I think that might be good.
F
Yeah, I wanted to add, on the issue of main-thread exposure and so on: on one hand, it's the availability of many other APIs that are not even specified to be available in workers, so it's not just implementation but also specification. But also, we have first-hand requests from application developers that there are valid use cases that don't want to do these in workers, because of the nature of the application. So it's a use case that...
F
We can say that we don't want to support the use case, but I don't see any reason why we couldn't make it a bit harder, adding something to the API so that by default it doesn't work, if we don't want it to work by default. But there are use cases for this on the main thread that are valid.
B
Okay, we have Dom in the queue.
F
Well, first, there are use cases where the worker is absolutely a net negative. What the worker gives you is that, if you have some free resources, it allows you to actually use them more efficiently.
F
So,
but
if
you
don't
need
those
resources,
so
the
worker
is
just
only
introducing
a
cost
and
it's
not
giving
you
any
benefit
that
has
happened
to
applications
are
not
like
a
big
conference
conferencing
websites
that
are
doing
real,
real-time
stuff,
but
more
like
editing
applications
for
composition
for
for,
like
dual
applications
where
you're
creating
effects
and-
and
you
are
editing
them
there-
that
that
that
sort
of
application
doesn't
benefit
from
the
worker
and
and
the
worker
only
introduces
complexity
on
ones
on
one
hand
and
and
and
actually
extra
resource
consumption.
E
Even though there might be some non-real-time cases like canvas capture, we want to protect the parity with the existing exposure in browsers today, which is to not expose raw media on the main thread, except in the cases of some really slow paths like canvas getImageData and that kind of stuff.
B
All right, so the discussion period for this is over; turning the floor over to Harald. Thank you.
A
Such as the ones that we have on the WebRTC GitHub samples: we need to make sure that the samples always show realistic stuff that will work in real time, and when we don't need hard linkages to the main thread, they should be off thread. So we always need more examples, and to improve the ones that are there.
A
Experience with streams that don't come from cameras: the link I posted earlier was generating one stream from slides and one stream from a camera and mixing them, and of course those two streams are not synchronized.
A
We have a sharp difference of opinion on whether or not they should be available on main thread, and we have some oddities of what the things are derived from. When we did the stream generator, we decided that this was not a stream object, not a subclass of a stream object; it didn't have a relationship with it. That's because...
A
The ways we have of generating streams today generate the stream, and we don't have any examples, in other places, of generating a stream from a stream; clone isn't the same thing. So having a stream generator that was itself a stream was a new concept, and we didn't want to do that. So it's a separate class.
A
Well, the proposal on the table from Jan-Ivar is to limit this to video only. So we seem to have very closely similar models, and we also seem to have a distinct set of differences that we can discuss as separate issues. The underlying model seems to be gelling together in one place: we have tracks that go to streams and streams that go to tracks, and that's a good thing, so we're getting there.
B
Okay, I have Dom and then Jan-Ivar; or maybe Dom was already there.
A
That depends on your interpretation that the stream source is actually in the main thread. My contention is that the stream source isn't in the main thread; it's off thread, in its own context.
E
Well, I don't find any support for that. Even if you call the original camera source off-main-thread, the WHATWG Streams spec does not seem to consider that an optimizable path. And also, as Youenn pointed out, this would create action-at-a-distance examples where, because you cannot optimize all the time, we can only optimize some of the time, and that means JavaScript developers might scratch their heads wondering: well, this worked a minute ago, why doesn't it work now?
E
So I think hopefully we...
A
He referred to objects that were already enqueued when you transfer the stream, which is, I think, a side effect of describing the queue as if it were in the original context, the context they were transferring from.
E
In any case, the issue posted shows that a lot of things that have to do with errors, cleanup and such are still tied to the realm in which the object was created, and that would be the main thread if you create it there. So I...
A
I'd like to see worked examples of that, and specific pointers to the text. I personally find the Streams spec almost impossible to navigate, so if you can help me find the places where these tie-ins are, I would be very happy.
E
Well, as far as the intent of the Streams spec goes, I had a slide for that.
A
And I disagree with your reading of the intent, so I'm asking you to point out the language.
A
Yes, but the idea that the source is in the originating thread is an interpretation.
D
I don't... You do that in the context of the stream, and that's how it's done. The things you can do are like pipeTo and things like that; there's some leeway in the spec, from the start, to actually optimize this. But for the rest, I don't think there's any allowance in the spec, and Adam Rice (I quoted Adam Rice, who was a spec author) said so at some point, yeah.
E
Got a couple more; keep going down, this one there, that's the one, yeah. So the first paragraph is a literal quote: it says that transferable streams are a special kind of identity transform which has the writable side in one realm and the readable side in another realm, to implement cross-realm transforms. I think you can't get more explicit than that: this is about transfer between realms, not about getting things that were in different realms onto the same realm.
A
Well, which realm is the writable side in?
E
So you're thinking, for MediaStreamTrackProcessor... So if you create that, well, if you create a MediaStreamTrackProcessor on one thread A, and...
E
It's the same problem, yeah. The example I'm showing is a MediaStreamTrackProcessor, but the same slide for MediaStreamTrackGenerator would be that instead of "source" it would say "sink" on thread A, and instead of "readable" it would say "writable" on thread B, and the errors would go the other way.
E
Based on the language directly above that, which references a writable side and a readable side.
E
If you follow the links in the spec (and I didn't write the spec; I added links in the slide that follow the same links that are in the spec), that takes me to the writable side and readable side of a transferable stream. So it looks to me like the intent of the WHATWG working group was that transferable streams...
E
Sorry, I had a question for Harald: is your stance now that exposure of raw video and audio on the main thread is okay, and has that changed since the working group already arrived at a worker-only proposal for WebRTC encoded transform? How do you reconcile your opinion with that?
B
Okay, we're out of time for this discussion.
B
Okay, all right, so we've now reached the wrap-up and next steps portion of this, and we've left about 10 minutes, in which time we can get feedback on some of the distinctions that have been made between these two things. So one of the things we could do, for example, is ask for a kind of sense of the room.
B
Does that make sense, to just try to get people's opinion on some of these issues? Okay.
E
Well, I guess, if you're asking me, since I just presented the proposal at this meeting, I would...
E
...want to get a yes or no on whether the working group thinks that what I presented is the direction the working group wants to go, and answers could be yes, or yes if it also solves audio.
A
We have two possible starting points for the specification, and we can modify from them. I don't see a reason at this point to ask the working group to...
E
Well, I think we had the same question when you had your presentation a year ago.
D
Okay, can I add another problem, which is that it seems everybody is taking for granted that all these streams issues will be solved, and I didn't see much progress on the difficult issues. I'd like to get a sense from the working group whether they feel that, to be successful, we need to solve these issues, or whether we just don't care and will do it whatever.
D
Because I think that, with the current state of the WHATWG Streams spec and features, it's not good enough, and I'm not feeling confident about either of the two proposals without strong progress on the streams side of these issues.
E
My concern with the proposal of two questions is that I think MediaStreamTrackProcessor and generator have an unfair advantage here, in that they've already been implemented in Chrome without working group consensus, and they offer audio and video. So I think we should ask the working group: what's the minimum API surface that we can agree on to start standardizing? And I would actually hope that we add audio.
E
Not everyone might agree on that, but we don't have enough time to stand up an entire replacement for MediaStreamTrackProcessor as a whole. So I think it's unfair to say that while we already have...
C
If I may: what I understand to be the questions Jan-Ivar is looking for, and I think a question all of us might agree on, is that there are two different API shapes. One is using readable and writable, and it happens to be exposed in workers, but that's not fundamental to it; and there is a generator-and-processor API shape, which happens to be exposed on the main thread, but that's not critical to it.
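For readers outside the room, a sketch of the two shapes being contrasted; `transform` stands in for any TransformStream. The first shape is the processor/generator API implemented in Chrome; the second is the attribute shape from the alternative proposal and is not a shipped API:

// Shape 1: separate processor/generator objects wrapping tracks.
const processor = new MediaStreamTrackProcessor({ track });
const generator = new MediaStreamTrackGenerator({ kind: 'video' });
processor.readable.pipeThrough(transform).pipeTo(generator.writable);

// Shape 2: readable/writable hang directly off the track itself (proposed).
track.readable.pipeThrough(transform).pipeTo(otherTrack.writable);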
E
Thank you, Youenn. I think that might be a way forward: if we can make it clear that the questions we're asking are about shape, separate from what it might imply, and then maybe have a separate question for main-thread exposure, and whether to solve audio or not. If we can have those be totally independent questions, I think you're on to something.
C
So I think the question for streams, though, is, and there seems to be clear interest in streams and what it solves, the question that Youenn is raising is how confident we are that the real issues that remain can be solved, and solved on a timeline comparable to the one we want to achieve with this work.
C
So I'm not sure that's so much a design decision as a trust-in-the-ecosystem kind of evaluation. For that one, I don't think it's necessarily a sense-of-the-room kind of feeling as much as, you know, more research needed, more input from streams people needed, because the working group might have trust and be wrong, or not have trust and be wrong. So I would probably separate that question from the sense-of-the-room kind.
B
Okay, I have a draft of potential questions. Let me show them.
B
I was just thinking of doing it in the chat. The three questions are: one, should media capture transform be based on streams; two, should it be video only; and three, should it only support workers?
G
So, I guess there's a lot in the queue; just one comment on those questions. I could live with yes or no as the answer to every single one of those questions, and I don't know enough about the details of the implementations to have strong input. So for some people on the call here this may be... I don't know; I mean, I won't provide any input, but you know, it's complicated.
G
We may be discussing a detail, and we may need more things that help us understand the consequences of the answers to these questions before some of us can really provide very useful answers. And I totally hear that that's very unactionable and does not make me feel very happy about resolving these things either, but I don't know what to say to any of them. So there you go.
B
Okay, well, we're almost out of time, but we can do the questions on the mailing list as well.
E
Could you change question two to "should media capture transform solve audio"? I think that's the real question there.
F
No, no, not that your proposal supports it; it's just that it makes it annoying, because you have to create a worker. You can always transfer the stream to the main thread.
D
Maybe, to help Cullen decide, each of these issues should have its own summary somewhere, so that somebody can read the summary and the person concerned can decide, because I feel it will be difficult for people outside the room to understand. Like the fourth question, for instance.
C
Yeah, and I think a way to frame this summary, these pros and cons, is to explain what will change for developers given the different answers to these questions, because at the end of the day that's really the kind of input the rest of the people in this room can give. It's not "this will be terrible to implement in my browser"; it's "this will make implementing this or that application more difficult or easier", I mean, on the API shape thing.
B
Okay, so I think the next step, as I understand it, is to ask these questions on the mailing list, because I think we're out of time in this meeting. Is that where we go next?
E
I think question four is hard to grasp for someone who hasn't seen the whole presentation, both presentations.