From YouTube: WebRTC WG interim March 25, 2021
A: So the slides are published on the wiki — or the pointers to them are. Do we have a scribe?

B: Who prepared... would it be possible for someone else to do it? I'm trying to grab a snack, okay.

A: Or someone else — can we get a volunteer? It doesn't have to be complicated, just write down the...

A: I can do it if you'd like; it's not very complicated. Basically we're recapping what we said at the last meeting, which is to try to find a proposal that would be simple enough to implement but still catch more bugs than we're finding. So the resolution from the last meeting was to build a content reflector that speaks the WebRTC stack, but as minimal as possible — just echo back what it gets over WebSockets. So Jeremy Lainé and Fippo built a demo.

A: It was only 60 lines of code and it pretty much seemed to do what we needed it to do. It was based on the aioquic — sorry, aiortc — Python library, and it parsed RTP, RTCP, SCTP, etc. There are three tests that have been written, I think, since the last meeting: one caught some NACK issues; there was one focused on RTCP BYEs, making sure that BYEs weren't sent when they shouldn't be; and then a VP8 simulcast test. So I think we have a pretty good proof of concept that this works.
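(For orientation, a browser-side test driving such a reflector might look roughly like the sketch below. This is a hedged illustration only: the signaling URL, the echoed-answer handling, and the assertions are hypothetical, not the actual demo or WPT code.)

```ts
// Hypothetical sketch: negotiate with a reflector that echoes signaling
// over WebSocket, then wait for the connection to come up so a test can
// assert on protocol behavior (e.g. NACK handling, no spurious RTCP BYE).
async function testAgainstReflector(signalingUrl: string): Promise<void> {
  const pc = new RTCPeerConnection();
  pc.createDataChannel('probe'); // make sure there is something to negotiate

  const ws = new WebSocket(signalingUrl); // placeholder endpoint
  await new Promise((resolve) => { ws.onopen = resolve; });

  await pc.setLocalDescription(await pc.createOffer());
  ws.send(JSON.stringify(pc.localDescription));

  const answer = JSON.parse(await new Promise<string>((resolve) => {
    ws.onmessage = (e) => resolve(e.data as string);
  }));
  await pc.setRemoteDescription(answer);

  await new Promise<void>((resolve) => {
    pc.oniceconnectionstatechange = () => {
      if (pc.iceConnectionState === 'connected') resolve();
    };
  });
  // A real test would now inspect pc.getStats() for the behavior under test.
}
```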
A: Fippo has rewritten the server and stripped it down a little bit, using the ORTC API that's also in aiortc. So it does less, which is actually kind of an advantage here, because it enables you to capture things like STUN or DTLS if we needed to do that. So basically we have a pretty good prototype and pretty good tests, and the question is: what do we need to do to actually get this into WPT?

A: And I think, as I recall, there's some bureaucracy there, and it hasn't been instantaneous. You know, it doesn't seem like this is a particularly complicated thing — it's like 60 lines of code — but getting it hosted, I guess, is an issue.

A: So the question, I guess, for the group is, bureaucratically: what do we need to do to get this 60-line server up there, hosted and maintained?
D: We need to do an RFC proposal on their GitHub, and then it will be discussed by the core WPT team, and if they approve it, then we would be ready to land the PR. I'm guessing that if the PR is already there, we could do the RFC request and show them what the PR would be, so that they can have a good sense of it.

A: Yeah, I looked for the PR; I didn't see it, at least under the webrtc label. I don't know — Harald, do you know if it was submitted, or if it's in there somewhere? It wasn't submitted yet? Okay. Okay, so I guess we can write down that the next step is, I guess, to submit the PR and then get feedback from the WPT folks on it. I guess — is that the next...
D: Step, yes. And I guess we need to file an RFC — there's a process in WPT to actually do that kind of stuff, and I think it's called an RFC — so we probably need...

A: So anyway, we should take that offline, to see exactly who submits the PR.

A: Okay, so anyway, that's great — that's not an issue. So I guess your suggestion, Harald, makes sense, which is for Fippo to submit a PR, I guess. And is that everything we need to do?
F: So, just to reiterate what you are saying: the pull request should include — or should be complemented by — an RFC where we describe the reasoning, or the motivation, for the change. And I posted in the chat the link to the WPT RFCs.
A: Okay, okay, great — so I think we understand what the next steps are. There is one kind of question, which is how far we go. I think the things that have been done so far seem to me like good things to do — like the NACK test, the RTCP BYE test; it basically allows us to find a few more protocol things.

A: I don't know that we have enormous ambitions here, like trying to write a conformance suite for all of WebRTC — that would probably be beyond our interest or capability — but it does seem useful to discuss what the scope is. I don't know — Harald, do you have any thoughts on what kind of things make sense and what would be too much?
A: Yeah, I mean, presumably people will spend effort on tests for things that actually, you know, have been an issue, so I don't think we're in any danger of anybody going overboard and writing too many tests. That's probably not a realistic problem.

F: I was going to say: WPT can only usefully be used for things that are surfaced from a browser API perspective, so that already scopes, obviously significantly, what would be done.
D: I would look at WebSocket there. WebSocket is a protocol defined in the IETF and an API defined in W3C, and WPT is testing things like the handshakes — things which are defined in the IETF but are really relevant to any WebSocket implementation in the browser, yeah.

A: Okay, and there's also an example there of some other interop stuff that's been done that's up on the web. Okay, I think that's it for the testing discussion; we have our next steps. So now: Harald, insertable streams use case study.
E: So this is the pipeline that would be desirable. We would want to take packets from the RTP depacketizer and put them in an encoded data buffer, because buffering takes much, much less room when you do it while the data is encoded. Then, once you need the packets, you run them through the WebCodecs decoder, and then do all the algorithmic filtering on the raw data — because all your data is much, much easier to deal with when it's raw — and then play it out in the browser. All this has...
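(A rough sketch of that pipeline, assuming the WebCodecs API shape; the depacketizer hookup and the playout/processing functions are placeholders, not real APIs.)

```ts
// Placeholders for the parts the browser or the app would provide.
declare function processRaw(frame: AudioData): AudioData; // algorithmic filtering
declare function playOut(frame: AudioData): void;         // playout sink

// Buffer while still encoded: encoded chunks take far less room than raw audio.
const buffered: EncodedAudioChunk[] = [];

const decoder = new AudioDecoder({
  output: (frame) => playOut(processRaw(frame)), // work on raw data after decode
  error: (e) => console.error(e),
});
decoder.configure({ codec: 'opus', sampleRate: 48000, numberOfChannels: 2 });

// Called as the (hypothetical) RTP depacketizer produces chunks.
function onDepacketized(chunk: EncodedAudioChunk): void {
  buffered.push(chunk); // queue management happens before decode
}

// Called when playout actually needs more data.
function pullNext(): void {
  const chunk = buffered.shift();
  if (chunk) decoder.decode(chunk); // signal processing happens after decode
}
```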
A: So, Harald, where does the wasm principally come in? It looks like the beginning of that pipeline is just kind of the standard pipeline — where does the wasm get inserted?

H: Okay, what is the RTP depacketizer here? Is this a WebRTC receiver, or a WebTransport receiver?

E: I mean, once we have this, you could theoretically feed the same pipeline from something else, but this is the one we wanted at the moment.
D: Okay, so is it like hooking a MediaStreamTrack to a video element, or is it like using an audio worklet to render, or...?

E: The normal case for insertable streams is that when the incoming track is created, the browser sets up depacketizing and decoding, and the track fires ontrack.

E: Then you can call createEncodedStreams on the receiver — or, in the new API that is being prototyped, fire an event at the relevant worker that contains the encoded streams, so that you can hook into them.
E: It would be nice if we had, instead, an API that allows us to do something like this: create a peer connection with some extra parameters; create an output that's connected to a MediaStreamTrackGenerator — we have an API for that; create a worker that the PC and the output get connected to; and then, in the worker...

E: ...we could create a decoder, set up the feeds, and when you get the event — or the message from the main thread saying "hey, start working" — it connects the input stream and the output stream and the decoder, all inside the worker.
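(Something like the following sketch, using the existing extract-and-post-the-streams approach; createEncodedStreams() is the Chrome prototype API, and the message shape here is made up.)

```ts
// Main thread: pull the encoded streams off the receiver and hand them
// to a worker. `receiver` comes from pc.ontrack; createEncodedStreams()
// is the (prototype) insertable-streams API, hence the `any` cast.
declare const receiver: RTCRtpReceiver;
const worker = new Worker('pipeline-worker.js');
const { readable, writable } = (receiver as any).createEncodedStreams();
worker.postMessage({ cmd: 'start', readable, writable }, [readable, writable]);

// pipeline-worker.js: connect input, transform and output off the main thread.
onmessage = ({ data }) => {
  if (data.cmd !== 'start') return;
  data.readable
    .pipeThrough(new TransformStream({
      transform(frame: any, controller: TransformStreamDefaultController) {
        controller.enqueue(frame); // decoding/filtering would happen here
      },
    }))
    .pipeTo(data.writable);
};
```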
E: That is just for handing this particular type of stream to this type of worker. Or we could do what we did with the first version of insertable streams and say: okay, let's just extract the streams and send them in a message, using existing APIs.

E: And I'd also like guarantees that once audio frames are emitted from JS, they get played out with no extra jitter — because the whole purpose of the other thing is to create a stream of audio samples that don't sound jittery to the user, even if the input was jittery to begin with. So we want to make sure that the browser doesn't add any more jitter.
D: So, if I understand things — yeah, the diagram there is quite useful. So there's the RTP depacketizer, and then there's a decoder, and then you have decoded data that either goes to a media stream track or could go somewhere else.

E: Well, there will be code there — whichever language it is in. But since buffers are so much smaller, if you want to keep a buffer of 150 milliseconds, it's easier to do that in encoded form, provided that you can get it through the decoder and through the signal processing afterwards. As I said: signal processing after decode, and queue management before the decode.
D: You also maybe want to inject JavaScript after the decoder, which currently is only feasible once you pipe the media stream track to Web Audio, and then you get the data from audio worklets.

D: Which is — yeah. And so it seems there are these two things: you inject JavaScript before the decoder, and we already have that; and you want to inject JavaScript after the decoder, which we may or may not have, because we have it after the media stream track via the audio worklet, and maybe you want it sooner. And you also want to disable all the algorithms that are done by the RTP receiver as well, which is an API that we do not surface right now. Is that a fair summary?
G: I'm worried about the idea that you can decouple the codec from this. I'm just thinking about a specific instance: if there's a packet that's late, then you might want to decide "actually, I'm going to get Opus to backfill it" — or not, depending on whether I'm prepared to wait a bit longer for it to show up. I mean, maybe it's an out-of-order packet, or maybe I send a NACK and I'll get it in the end.

G: I'm thinking specifically of, you know, how the current NetEQ basically does pitch shifting: if it thinks that the incoming audio is faster than its own clock, then it'll pitch-shift it, which is, like, disastrous for music. There are other algorithms you could apply, but I don't think that this pipeline would let you do that. I think it's trying to separate things that actually you want to entangle a bit more to get those effects.
H: So, to try to throw some rocks at it a bit: the use case perhaps seems a bit narrow, in that it's for people who are not happy with WebRTC's NetEQ part, but at the same time are not willing to go as far as encoding their own audio.

H: Right, so yeah, I'm just wondering — I mean, it sounds useful, narrowly — but are there enough other tools that almost do the same thing to warrant adding this? Because I think, so far, we had almost come to an understanding that insertable streams was more useful in the video realm, because we already had audio worklets, for example.
D: Yeah, I also like that we have these kinds of use cases now. Currently we are defining one point where we are injecting JavaScript, which is at the encoded data buffer level, and it's good to have as many use cases as we have now, to actually make sure that the API we are designing will be able to either evolve or fulfill these use cases. Maybe they're not well defined, but we have them in our own map, and we're making sure that we are future-proof with the current API.
E: Well, it does permit it: if you can live with doing the jitter-buffer part on decoded data, it works.

E: The jitter buffer is not really about how to delay things; it's how to find an adequate buffer size, and to control running samples in and out of that buffer.

E: So it's certainly possible to prototype this with the whole algorithm done after the decoder, in which case we could use the media stream transform instead.
H: Just for my understanding: would another way to phrase this be that a lot of folks would like to do their own codecs and use their own algorithms, but at the same time they really miss RTP? Is that right?

E: So we have a depacketizer feeding into a decoder, and you actually have to tell the peer connection to apply only this many building blocks to the Lego tower.
G: QUIC transport... but just to come back to audio: I think it's still not enough to say that you want to handle it after it's been decoded, because there are things you want to do where you want to influence the codec. Because, like I said, whether you ask Opus to make one up, or you wait and grow the jitter buffer, is a really important decision, and you want to have that in your hands.

G: So in my audio worklet thing, you'd have some kind of weird interface where what was coming in was encoded audio, and then you could feed it into a codec and you'd have your fingers wrapped around that codec somehow. Now, I'm not sure I've quite got that in my head correctly, but I do think that you can't avoid the step before the decoder, and having some influence over the decoder.
H: Just from an API point of view, though, we're talking about pulling cables apart and plugging in different cables instead, and...

H: It sounds like it's always either/or. So I kind of liked — someone, I forget who, said it here — maybe there's a switch we could just flip instead, that would change what comes out of the soft-serve machine, so to speak.
A: All right, yeah, all right. So this next topic is quite similar, in that the idea is to present the relationships between a number of the APIs we've been talking about, and their potential relationship and use together. I think we've been seeing that, you know, we have the MediaStreamTrack insertable streams, we have WebCodecs, we have WebTransport, we have the WebRTC insertable streams, and potentially other things coming down the pike.

A: You may have seen that there's now WebNN that may potentially be standardized. So this is just trying to describe what the relationship between all of these things might be, and the ways we might plug some of these Lego pieces together. So this is a diagram of the API relationships when you're sending. We acquire a media stream track somehow, through the good old media capture APIs, and then we're attempting to transform this into a raw stream of video frames, and we do that through the MediaStreamTrack...
A: ...processor, to do things like special effects or other machine-learning stuff. Then the individual video frames go into the video encoder — now we're into the WebCodecs API — and it outputs an encoded video chunk. Then we are taking that encoded video chunk and attempting to send it through potentially a variety of APIs: it could be WebTransport, or it could be — and this is kind of the other side of what Harald was talking about, where you might actually want to just have an RTP transport — where you send this stuff over RTP.

A: So, taking that: this would go into the RTP packetizer and then out over RTP — or conceivably even over data channels.
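(The send side of that diagram, sketched end to end — with the caveat, raised again below, that MediaStreamTrackProcessor does not have consensus yet, and the WebTransport URL is a placeholder.)

```ts
async function sendPipeline(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();

  const transport = new WebTransport('https://example.com:4433/media'); // placeholder
  await transport.ready;
  const wire = transport.datagrams.writable.getWriter();

  const encoder = new VideoEncoder({
    output: (chunk) => {
      const bytes = new Uint8Array(chunk.byteLength);
      chunk.copyTo(bytes); // one copy out of the chunk, then onto the wire
      wire.write(bytes);
    },
    error: (e) => console.error(e),
  });
  encoder.configure({ codec: 'vp8', width: 640, height: 480 });

  // MediaStreamTrack -> raw VideoFrames (proposed MediaStreamTrackProcessor).
  const reader = new MediaStreamTrackProcessor({ track }).readable.getReader();
  for (;;) {
    const { value: frame, done } = await reader.read();
    if (done) break;
    encoder.encode(frame); // raw VideoFrame -> EncodedVideoChunk
    frame.close();
  }
}
```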
A: So on the receiving side, it looks like this. You're getting an encoded video chunk over one of these transports — RTP or the data channel or WebTransport — and you're getting it as an EncodedVideoChunk. You're then feeding it into the VideoDecoder of WebCodecs, getting a raw video frame out of that, and then the MediaStreamTrackGenerator is taking the stream of those video frames and outputting a track, which you're then rendering via a video tag. Does that make sense, Harald?

I think — I think that's a correct description.
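(And the receive side, under the same caveats; real code would carry the chunk type and timestamp in its own framing rather than improvising them as here.)

```ts
async function receivePipeline(transport: WebTransport, video: HTMLVideoElement) {
  // Proposed MediaStreamTrackGenerator: its writable side takes raw VideoFrames.
  const generator = new MediaStreamTrackGenerator({ kind: 'video' });
  const frames = generator.writable.getWriter();
  video.srcObject = new MediaStream([generator]); // render via <video>

  const decoder = new VideoDecoder({
    output: (frame) => { frames.write(frame); },
    error: (e) => console.error(e),
  });
  decoder.configure({ codec: 'vp8' });

  const reader = transport.datagrams.readable.getReader();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    decoder.decode(new EncodedVideoChunk({
      type: 'key',                         // stand-in: framing would say key/delta
      timestamp: performance.now() * 1000, // stand-in timestamp
      data: value,
    }));
  }
}
```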
A: Some of the things that may go here — these are the paths on the receiving side, how all this stuff might happen. There are a variety, and this is more like what Harald just presented, because now, if you're talking about RTP, you're getting stuff out of the RTP depacketizer to create your encoded video chunk.

A: So a couple of questions come to mind. I was just purely thinking about efficiency, but I think there are probably a bunch of use-case questions that come out of this — like, out of all the use cases that could be encapsulated in these two diagrams, which are the ones we care most about? But I also had some questions about the performance of the pipelines described here, particularly relating to copies that would occur — and we just had this discussion in WebCodecs.
A: Can that be sent by WebTransport, or some of these other transports like data channel or RTP, without an additional copy? Typically that encoded video chunk would be residing in a GPU buffer, and you essentially want to take that and send it directly from that buffer.

A: If you possibly can. It's probably not possible with no copies, because you have process separation, so you essentially can't take something in the GPU and just — you know, in networking we would normally do that from user space, directly down into the driver, but in the browser you would probably need one copy, though hopefully not another one. And then, on the receive side: you basically want to write these encoded video chunks into a GPU buffer so WebCodecs can handle them, and that would be, you know, WebTransport, RTP or data channel.
A: So that's kind of a question about whether the Lego pieces of the transports fit together — including whether they fit together with WebCodecs — the way you'd want them to. And then...

E: Overall, yeah. So there's a special case where the WebAssembly — the wasm — or JavaScript actually wants to access the buffers using WebGPU, in which case there's no need to move them out of GPU buffers. But in fact we would have to move them into GPU buffers if they weren't already there, right? No — just to complicate matters.
A: Right. And some of these things — yeah, I didn't get into wasm, but in some cases you might want to, for example in the case of an audio codec, write your codec in wasm, and you don't want to be copying stuff back and forth — which I think wasm does do, right? You get a copy there. So that led to questions in my mind. You know, there's nothing in Web IDL which denotes a buffer as a GPU buffer, right — it just is what it is.

A: We've had a discussion in WebCodecs about, you know, can we denote something as a read-only buffer — which is what it would be in the GPU; you wouldn't write over it — and I guess the answer right now is no.
A: Does that have any implications? I had questions about streams, for example: does BYOB help any of this in streams? I think the answer to that is also no. For example, you know, can you describe something coming out of WebCodecs as a bring-your-own-buffer, because essentially WebCodecs already allocated the buffer — and then, would that improve the write efficiency?

A: In WHATWG streams, I think the answer to that is currently no, although that's probably not the answer we eventually want. And do we understand, in particular, in all of these operations, when copies do and do not occur? Even if we can't fix it, can we document it well enough so developers will understand: "hey, if I do this, it's going to cause, you know, stuff to be transferred between the GPU and main memory, or generate additional copies"?
B: That would be platform-specific — there are so many ways to store buffers. For example, on macOS you can have IOSurfaces that are usually pretty efficient on both CPU and GPU, and other buffers you have to copy.

D: Yes, that is correct. And it's not even macOS: Apple Silicon devices are different from Intel devices as well. So it's not something that I would try to expose to the web in general. Actually, in terms of process separation — Enrique is mentioning IOSurface — typically what happens is you have a process where you capture images from the camera, so you get an IOSurface, and then you send the IOSurface to the web process. Sending the IOSurface involves no memory copy; you're just sending a handle to some shared memory.
D: So there's no memory copy there, at least, and it's the same for encoders: encoders take an IOSurface and will return you shared memory, so that you can do out-of-process codecs with a limited loss of efficiency, for instance. But again, as Enrique said, it's very platform-specific and very implementation-specific.

B: Another example would be Chrome OS where, if you know that you're going to touch it on CPU versus GPU, you might want to allocate this buffer in a different format to start with. So sometimes it's relevant to know what this buffer is going to be used for in the future, so that you can, you know, find the best format or mapping.
H: So, one disclaimer here: I think this presentation mentions interfaces that don't have consensus yet, like MediaStreamTrackProcessor and Generator, and for Mozilla — we don't necessarily agree with those yet, and some of these questions are part of the reason; it's not clear yet what the overall picture will look like. At the same time, I do appreciate — it's really helpful and valuable — that implementations try things out and see what is fast and what is not. So this is a good discussion.

H: I think, for streams, the bottlenecks, probably — I could see you could have browser objects that are video frames, for example; theoretically those could still be streamed. But the two problems seem to be: how does JavaScript access those GPU buffers, and at some point they're going to transition to becoming bytes — and I think that's where bring-your-own-buffer might help — which means you would always have at least one copy, as far as I can tell. Does that sound right?
A: Yeah, yeah — certainly at that point you would have a copy. And also, some things have been optimized just in the last few weeks; for example, if you clone a video track, that generates a reference and doesn't generate a copy, currently.

A: But the reason I wanted to raise this is also for us to think about the use cases — what we're trying to do — and also some of the implications of how these APIs work together.
A: We've had the luxury, I think, in the WebRTC Working Group, of working on a single API that was looked at by a single group of people, and now we don't have that luxury anymore — because, looking at this, there are at least three W3C working groups involved here, actually maybe four, maybe more. So there's this communication between all those groups, and I don't know if there's even one person who attends all of them — I suspect not.

A: It might even be greater than four — maybe five working groups.
A: So I'm wondering: are there any next steps that come to mind here — things that we should be doing going forward, or thinking about? One of them was, I think, to continue on what Harald had: to think through which of the use cases we think are particularly important.

A: There are certainly many that you could have here, but maybe articulate them.

E: One integration step I think we should aim for is to make sure that the encoded transform — the WebRTC encoded transform — uses the same encoded-buffer format as WebCodecs, yeah.
H: And yeah, also maybe reaching out again to — there was a zero-copy meeting at TPAC, wasn't there? Yeah, there are folks there that can help, yeah.

F: I was going to react to that. So there was this discussion during the TPAC breakout on reducing memory copies; one of the outcomes of that was the creation of a WICG repo, which saw some early activity, but I haven't seen much recently.
F: I think it might be useful to bring some of those use cases, and also the early lessons in terms of buffer format, to that repo, to rekindle the discussion. I mean, I agree that coordination with the WebCodecs spec is probably key, but as we discussed then — and I think that's still true, and maybe even truer now — there are also intersections with WebGPU, with WebAssembly, with WebNN, and I'm probably still skipping a few more.

F: So having a place where we are at least getting some visibility into these conversations would, I think, be really useful.
A: Yeah, I think WebCodecs is paying a lot of attention to this, because a lot of the work in WebCodecs is specifically oriented towards keeping things within the GPU. So I'm not that worried about WebCodecs being inefficient, because it really can't be. But the big question is, when it passes things in and out, can all the other APIs use them in that form? And particularly, we had a discussion yesterday about encoded video chunks from WebCodecs: can they be used by transports without a copy? So that's thinking of WebCodecs as input to something like WebTransport or RTP or data channel — and does that automatically force an additional copy?

A: Anyway, okay, we should probably move on from this. Jan-Ivar: getViewportMedia.
H: Yes. So I put together this slide as an update slide, because we got some good news last week from Elad Alon: from the Google Chrome security review, we now seem to have agreement on how to protect getTabMedia — which has now been renamed to getViewportMedia, which seems to be the frontrunner as far as naming goes.
H: On the naming first: "viewport" seems to best capture exactly what we're capturing — no pun intended. The definition on MDN is that it refers to the part of the document you're viewing which is currently visible in its window, and content outside the viewport is not visible on screen until scrolled into view.

H: There are still some open questions of whether an iframe would only see itself, or itself and its parents and other children. But back to the main point, which is the security parts, which I think maybe we could have some time to discuss today.
H: So there seems to be agreement on site isolation, but it's still a little unclear whether we mean full Spectre isolation, which would be COOP plus COEP. To recap: there's an opener policy and an embedder policy which, when used together, turn the magical window.crossOriginIsolated boolean to true, and that means you're fully Spectre-protected — that means that neither your embedded content nor your opener are from, you know, foreign origins. So that's fully protected. But in this case, since we're only — I thought, for now...
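(For reference, these are the two response headers in question; serving both on a document is what turns window.crossOriginIsolated on.)

```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```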
H: And I think — so that's an open question for the working group — it might be simpler to go with both still, even if we think it's unnecessary, just because it's easier to talk about site isolation and everyone has the same idea. And secondly, there needs to be some opt-in header for documents, so that documents can opt in to being captured in iframes, and that's still to be determined. And another outstanding question there is: how would this fail?

H: Well, it should probably fail to capture to begin with. But what if it later navigates a frame, or opens a new frame, that's not opted in at that point? We could either block the loading of such frames, in a more traditional COEP model, or we could allow it and kill the capture, and have it terminate with an error somehow.
H: So those are the questions where we do seem to have agreement, and it's just worth mentioning to the working group that we seem to be in agreement about them. Resources are mostly covered by site isolation: site isolation already provides a mechanism where the web does not load cross-origin data by default.

H: So images already have to opt in to this model to be included in the first place. But there are two ways: you can use the cross-origin resource policy that says "I allow this image to be embedded by other origins", but there's a separate allow list where you can also give access to the actual bits of the image — which means that you can allow your image to be shared on other sites but still be opaque. And I think everyone's idea here is that we would still allow capture of that.
H: So that is a concern, and our rationale for not protecting those images is that it would be too arduous to add headers to all of these resources — all of these images. And these images are already not protected from Spectre, because they would live in the same process as the iframe that's including them, and we probably could never protect them 100% anyway. So that's the update. Elad, do you want to add anything?

C: I could, but I'm not sure how much time is slated for this. I wouldn't want to monopolize the discussion — you know, if you just wanted to give an update.
H: Okay, well, in that case, I guess — is it premature to get the temperature of the room on the first three questions there?

C: I've got one question which I think — maybe — so, everything with security requires a lot more time to think about, but I think that maybe we could talk about the failure mode, because to me it seems very important to make sure that it doesn't block loading. And I'll give you an example — which is, admittedly, just one example, but I think it is a very standard kind of example. Suppose that you have Google Slides, okay, and there are multiple slides there, and most of the content is first-party. You start capturing, everything's okay — suddenly somebody embeds YouTube.
C: I think it would make more sense to break off the capture — and let's say that somebody just navigates there. If you couldn't load, you wouldn't ever be able to load, even before you know if the user is interested in capturing. Most people are not screen-sharing, right? Most people would just be browsing through the slides. So I think the least destructive way of this failing is if you just break off the capture.

H: Right, and that's a valid point. And I think the counter-argument — not to, you know, dismiss the difficulties — is that if I'm using Google Slides and I prepare a presentation for several days — which I have — I think the last point at which I want things to fail is during the actual live presentation. Because I think we are talking about a feature here that, if successful, would hopefully be the primary target for applications like Google Slides — at least during the pandemic — where they put all this effort into a presentation, and then at the point of presenting you're not even aware that your audience might see something... well, yeah, you might not.
C: The content will fail, yes. I've got a counter-counter-argument, and that is that the application can know, and warn the user: "hey, by the way, you won't be able to present this." That's something the application could do, and that would work. Whereas if you just fail loading, it's like: "but I was never going to present — so why can I not load?" Basically, okay, so the application can no longer load any third-party content, except from a very short list of collaborating websites.

H: It would have to say — basically, it would have to turn off this header at that point, and say: "your presentation, unfortunately, is not shareable."
H: Sorry, just quickly: any preference for — and, Elad, do you know if the Google team wanted full isolation or not?

C: I'll get back to you on that. I think I have an answer, but I don't want to speak prematurely and make a mistake, of course. Of course.

D: And in terms of killing capture, I would say mute — or, you know, mute or unmute — but not kill the capture, really. But anyway, yeah. So...
F: Sorry, just on the failure mode: you said Google Slides could detect that this was going to fail, but would they get access to the header information to determine that it won't work?

C: So, if you have slides, you can generally just run through all of them: if all of them just have first-party content, then you just don't issue a warning; whereas if any of them have third-party content, you know which third party it is — it's no surprise for the application, so the application doesn't need to guess. It wouldn't get it 100% correct all of the time — like if some remote third party that used to collaborate, that used to include that header, that used to be okay, suddenly falls back to some kind of backup server...
C: ...that is misconfigured — of course, then there would be a mistake. But generally, when you embed third-party things, you can just check them against, you know, a list of known safe sources.

F: Okay, I'm still not — I mean, still not entirely sure about that, but I can continue on the GitHub issue, I guess.
D: So, to summarize the issue: when you call getUserMedia, the user agent — once the user has granted permission — needs to select device settings when starting capture, and the spec is saying you should pick settings that have the lowest fitness distance, and that's fine. The issue is that in many, many cases there are a lot of setting values that have exactly the same fitness distance, and there the specification does not provide any guideline at all.
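(A concrete illustration, as a sketch: the bare request below leaves every tied choice to the user agent, while the second one pins down what the page actually cares about.)

```ts
async function demo(): Promise<void> {
  // Bare request: many settings combinations have the same (zero) fitness
  // distance, and the spec says nothing about which one the UA should pick.
  const anyAudio = await navigator.mediaDevices.getUserMedia({ audio: true });
  console.log(anyAudio.getAudioTracks()[0].getSettings()); // the UA's arbitrary choice

  // A page that cares should say so, which makes fitness distance meaningful:
  const music = await navigator.mediaDevices.getUserMedia({
    audio: { echoCancellation: false, channelCount: { ideal: 2 } },
  });
  console.log(music.getAudioTracks()[0].getSettings());
}
```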
D: So the question in this issue is: should the specification say more? Should we give guidelines? Should we add normative text or not? Or is it really an issue at all — should we just say we're good and forget about it? On the long thread in GitHub there were two positions: one was "yes, we should say more, it will help interop"; and the other position was "no" — first, because this will make future evolutions more difficult.

D: So that's my summary of this issue and the current positions. Next slide.
D: So what I did was look at whether it's a real problem or not. The spec is saying that constraints should be provided by the web page for all important properties. So I went to a few websites — like Webex, Whereby, GTC; they're all video-conference websites — and I looked at which constraints they set for getUserMedia.

D: I haven't looked at whether they're using applyConstraints later on — maybe these websites are doing getUserMedia and then applyConstraints; I don't know, I just did a quick check there on macOS Safari. But anyway, even applying constraints later on would be suboptimal, as it would require recalibration of the camera or the microphone, and we know that these websites actually don't do that recalibration.
D: So the assumption that the spec took — which was to say that web pages will do the right thing and provide all the constraints that are really important to them — is not followed; web developers are not doing that. So I would say, from this little experiment, that it's a real problem and that we should try to solve it. Next slide.

D: If a user agent, realistically, is saying echo cancellation equals off, then there will be an impact on users, and of course it will have a bigger impact on user agents with a small number of users: if a big user agent says "okay, echo cancellation is off," web developers will probably adapt; but for all the other user agents, it's not really possible.
D: Next slide. So Chrome, Firefox and Safari are doing roughly the same thing — not in the same ways, but roughly: they're saying echo cancellation is true by default, and 640 by 480 at 30 fps is roughly the default as well.

D: So the first thing to note is that we have properties that have OS default values, like sample rate and sample size — they usually have OS default values. In that case we should just use them. Just like, by default, we are saying: if you do not know what device to pick, pick the default one — and the default one is usually defined by the user or by the OS. So we should acknowledge that and add in the spec: "hey, if you have an OS default value, please use it if you're not sure." For properties without OS defaults...
D: ...second: give some currently known heuristics from some browsers, and give those numbers — and this "echo cancellation is true" — as examples of what is done today. Which does not mean it's fixed: somebody in the future can revise it, and it's not mandatory to follow, so user agents can still change it whenever they want.

E: So what happens when we embed defaults in the spec is that we...

D: That would not be mandatory; those would be non-mandatory guidelines. So it's not "you must use 640 by 480 at 30 fps"; it's "currently, as of the latest information we have, user agents tend to do that, so please be aware of that and make good choices when you select default values." That's all we would say; we would not add normative text for properties without OS defaults.
H: I think that's a good point, and good research here — these do seem to be the defaults actually in effect. In particular, I happen to know echo cancellation: for instance, if you turn it true or false, the other ones — noise suppression and auto gain control — follow, because the browser is trying to guess: "oh, this person cares about..." — is this a WebRTC call, versus do they want natural music?

H: But I do want to clarify a couple of things, though. The spec is actually very specific: it says that when the fitness distances are equal, it is entirely up to the user agent — I mean, there can be no doubt in the spec that it says that — and I think it even recommends (specs can't normally mandate what user agents do, but they recommend) following system defaults when that is available. Okay, so a stereo microphone is a good example.
H
I
get
a
new,
I
buy
a
new
microphone
and
I
use
it
in
the
call
I
would
expect
stereo.
At
the
same
time,
we
get
complaints
from
people
who,
on
you
know
with
4k,
displays
that
just
they
do
get
display
media
and
pc
and
track
and
be
done
with
it,
and
then
they
complain.
Why
is
my
machine
so
slow?
So
there's
a
conflict
between
the
camera.
Microphone
is
not
just
repair
connection.
H
At
the
same
time,
peer
connection
is
the
primary
sync
of
it.
So
we've
been,
the
browsers
have
been
stuck
between.
Users
often
want
the
most
for
non-peer
connection.
You
probably
want
the
highest
best
quality
for
any
property.
At
the
same
time,
we
don't
want
peer
connection
to
break
or
buckle
over,
so
it's
become
the
reason
for
644,
830
and
echo
cancellation
are
all
to
satisfy
the
primary
think
and
to
make
provide
good
defaults
for
that
use
case
yeah.
So
so
I
I
think
it
might
be
useful
to
have
non-normative
notes,
but
implementation
advice.
D: I think that for OS defaults, the idea would be that it would be a "should," and for implementation advice — like properties without OS defaults — it would be an implementation note, and there would be no "should" — maybe a "may," but not a "should."

E: I like the idea of OS defaults, I think, for those things where that is well defined and discoverable. Although, if you go with OS defaults on Windows, it turns out that you get a bunch of signal-processing things, some of which have actually been landed by the OEM, and it is a real hassle to get them all turned off, sometimes.
D: I do not think we can constrain OSes, so there would not be a mandatory list, but we can mention some properties that we know usually have OS defaults, like sample rate and sample size. But again, the list of these properties would not be mandatory; it would just be explanatory.

F: One possibility, instead of documenting it in the spec, might be to document it, like, in MDN browser-compat-data: have the default values across browsers, which would give this flexibility on mobile and non-mobile, and surface it to developers as well, so they know what they get if they don't pick the constraints themselves.
H: Well, that's perhaps outside our purview — but also, we might even put a year on it too, like: "in 2021, these were the defaults."

D: Maybe. Okay, I'll go ahead and start the PR then, and we can follow the discussions there. Those are all the slides — and there are 10 minutes for 10 slides, so I should be quick.
D: Okay, issue 64: transferring data channels. We talked about it in the past, so there's a slide to say: yeah, it would be nice to transfer data channels. I've done the experiment; it's live in Safari — it will be in Safari Tech Preview reasonably soon.

D: It's working in-process, it's working out-of-process, so you can send the data channel to a worker, you can send a data channel to a service worker; it's not blocked on the main thread — it's great. Maybe it's not highly tested yet, so we still need to do work, but it's working. Next slide.
D: So, to make the implementation simple, I implemented some rules that allow you, or not, to transfer a data channel. Basically, you can transfer the data channel as soon as it is created — not after; that's how Safari is working — and you cannot transfer a data channel if there is data buffered for being written, because if you transfer it, it's complex to still try to write things. So it still allows you to create a data channel...
D
You
get
the
data
channel
and
as
soon
as
you
have
it,
you
transfer
it
to
worker.
It
should
work.
The
same
thing
should
happen
for
on
data
channel
even
handler
you,
you
set
local
distribution,
you
do
the
negotiation,
you
have
another
data
channel,
even
handler
and
bank.
You
can
send
it
to
worker.
It
will
work,
that's
what
the
safari
prototype
is
doing
in
on
the
github
issue.
There
are
a
few
alternatives
that
were
mentioned.
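(A sketch of those rules in use, assuming the transfer semantics of the Safari prototype — transfer immediately on creation or receipt, before anything is sent.)

```ts
// Window side: transfer channels to a worker right away, before any send.
const pc = new RTCPeerConnection();
const worker = new Worker('dc-worker.js');

const dc = pc.createDataChannel('chat');
worker.postMessage({ channel: dc }, [dc as unknown as Transferable]); // nothing sent yet: allowed

pc.ondatachannel = ({ channel }) => {
  // Same rule for remotely created channels: hand them off immediately.
  worker.postMessage({ channel }, [channel as unknown as Transferable]);
};

// dc-worker.js: the channel now lives (and fires events) in the worker.
onmessage = ({ data }) => {
  const channel: RTCDataChannel = data.channel;
  channel.onopen = () => channel.send('hello from the worker');
  channel.onmessage = (e) => console.log('got', e.data);
};
```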
D: You can relax the rules and say you can transfer the data channel even if a write is happening, and maybe you will lose some events — you might miss some messages, you might miss some state updates — but that's part of the idea: you relax the rule and you lose some of the nice properties of the stricter rules. So I think we should go ahead and make a proposal — write a PR, probably in webrtc-extensions — and choose between the two alternatives.

E: For reliable data channels, it would be a serious breach of contract if we lost messages out of the middle. That would basically say...
H: I think this sounds good as well. The only thing: if we're worried about ondatachannel and timing, we could perhaps limit this only to negotiated data channels.

H: Right, right — but then you create the JavaScript objects earlier, and they can be transferred earlier.

D: Well, the number of tests that I wrote is not enough to cover all cases, so currently both are supported. Maybe we'll discover that there are some limitations and we need to restrict it, but so far it seems that we should be able to support both.
G: Did we resolve the issue about whether you can see the label of an incoming channel? Like, if somebody creates a channel — the far end creates you a channel — and, on receiving it, you'd like to pass it off to a worker.

E: That sounds like an invitation to being flaky, so I would prefer not to do that.
D: Well, if you — I mean, if you create data channels that are already opened, then you can send synchronously as well, and in that case we do not want to transfer if you've already sent. So that's one thing we need to cover.

D: And so the idea is really: once you start sending, you shouldn't be able to transfer, basically.

D: Okay, it seems there's consensus, so I'll try to start writing a PR. And I have three minutes for six slides, so we're on track.
D: Yeah, the last one — I was hoping to write more slides, but I didn't have time; that's why the last one is really a summary, but I will try to provide more slides anyway. Transferring MediaStreamTrack: why do it?

D: Elad had the issue about the use case where maybe the iframe wants to get the viewport and send it — say, from slides to meet.google.com — and google.com will do the networking. There's also the possibility that, if you can transfer the media stream track to a worker, you can do MediaStreamTrack to WebCodecs to WebTransport, all of that in a worker, which would be very handy. Next slide.
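(That use case, sketched under the assumption that MediaStreamTrack becomes transferable as proposed; the worker URL and wiring are illustrative.)

```ts
// Window side: capture, then move the track wholesale into a worker.
async function startCaptureInWorker(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();
  const worker = new Worker('capture-worker.js');
  // After this, the window-side track would be neutered (e.g. ended).
  worker.postMessage({ track }, [track as unknown as Transferable]);
}

// capture-worker.js: track -> WebCodecs -> WebTransport, all off the main
// thread, reusing the same encode loop as the send-side sketch earlier.
onmessage = async ({ data }) => {
  const processor = new MediaStreamTrackProcessor({ track: data.track });
  const transport = new WebTransport('https://example.com:4433/up'); // placeholder
  await transport.ready;
  // ...read VideoFrames from processor.readable, encode with VideoEncoder,
  // and write the encoded chunks to transport.datagrams.writable.
};
```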
D: So, can we do it? Can we even do it cross-process? Because there we are not talking about a data channel; we are talking about a large amount of data, especially video. The thing I would first like to say is that media stream tracks are already flowing out-of-process all over the place.

D: Browsers do these cross-process things very efficiently, and it's working. So what we're talking about there, with this notion of transfer, is that we might actually trigger additional hops, which might have some impact — but it's not as big as what we might think it could be, and this is only for the out-of-process case, not for in-process transfer, meaning you transfer from a window to a dedicated worker.
D: Yeah, so how can it be done? It could be done very similarly to data channel: you define the transfer algorithm, you define a neutered behavior — so maybe, when you transfer a media stream track, the original track will get ended — and clearly the lifetime of a transferred media stream track needs to be tied to its creation context. So if you call getUserMedia in one place, you transfer the track elsewhere...

D: You do not have the same support as transferable streams out of the box: for instance, you don't have the muted events, the ended events, getSettings, applyConstraints — all of those things that make life really easy and nice. You could still do it by using postMessage back and forth, but it would be a lot of work for web developers, and my intuition tells me that we should try to work on transferable MediaStreamTrack support.
H: Yeah, I think this has legs. I think — with the time allowed — I do have some thoughts, but I don't know if there's enough time; but I think it's worth looking in this direction.

H: Yeah, there are some unique questions, like how a track clone, for example, would be treated for some sources — we might have to make it a per-source decision. For things like camera, a track clone is already independent; they can be cross-process, for other tabs. But for other sources — like canvas, for example — you might have concurrent reads on data if we're not careful.
H: And the other question would be lifetime: would the lifetime end when the original document ends?

D: Yep, yeah — that's part of it. Like data channel, we need to do that; otherwise it would be a nightmare in terms of security, privacy and implementation, right.

D: No, you can already do that by taking your media stream track to Web Audio, getting the bytes, and sending them via postMessage to another origin. You can do the same with canvas: media stream track to a video element, to canvas, and then send the RGB data over to a cross-origin frame. So my guess is that we should not put constraints there.
D: I think that one use case that Elad mentioned was that there would be an iframe from slides.google.com, and they would want to transfer it to another frame, being meet.google.com — which is cross-origin, but same-site.

D: There's one more slide, but we're already five minutes late, so I'm guessing that I will try to expand my last slide for the next meeting.