From YouTube: February 22, 2021 WEBRTC WG interim
B: Yes, there was a chime that said "recording is started." Harald is on mute.
A: All right, so here's what we're going to try to cover today: we'll talk a little bit about testing, then media capture and the WebRTC extensions stuff, and then go on to insertable streams. Hopefully we'll get it all in within 90 minutes. So, about testing.
A: So what we need to test is the kind of things we'll be working on, and that's things related to peer connection — which is where we see stats, priority, DSCP; we still have some of that work. We have extensions such as WebRTC extensions, WebRTC-SVC, insertable streams. Then there are extensions to capture streams and output specs, and then there are some standalone specs, most of which we won't talk about because they're outside this working group.
A: Now this new work poses some challenges. For example, in webrtc-stats: testing whether the stats are correct, not just whether they're retrievable, which is what we do today. For priority, DSCP and content hints, today we just test whether attributes can be set and retrieved, not whether they actually do what they should do. For example, a "text" content hint should activate the AV1 content coding tools, but we do not test for anything like that today. For the WebRTC extensions, we don't, for example, test whether requested RTP header extensions or encryption are delivered.
A: It would be good if we could test end to end — that is, whether content that's encoded or encrypted can actually be decrypted or decoded — but we don't do that today, certainly not in WPT tests. For MediaStreamTrack insertable streams, performance testing would be desirable if we could do it, and then for media capture and streams, backward-compatibility testing.
A: So these are some of the things it would be good if we could do, and we're going to chat a little bit about where we might go. One particular harbinger of things to come is the work that's been done on AV1 WebRTC integration testing. Dr Alex and Sergio have worked on developing a custom end-to-end test suite, purely for AV1 WebRTC integration testing. It is complex because it was required to demonstrate the operation of the entire system, so you've got these WebRTC endpoints communicating via an SFU.
A: Many interactions were found between the RTP stack, AV1, the header extension, scalability modes, SFU behavior, and so on. So it wasn't just simply testing AV1; it was the interaction between all of it. There are JavaScript tests in progress, but they all require this SFU, and in fact the number of tests was so large that they had to develop an entire spec — which actually used ReSpec — to show all the test coverage. Anyway, the big question, after they had done all this work, was how to validate it.
A: So here are some questions that we're hopefully going to answer, at least some of them, during this discussion. How do we ensure that recently added features don't regress? You know — you check stuff in, somebody builds an entire service on it, and then it breaks. Some ideas are to add server support to WPT tests; this is what's being done in the WebTransport working group. But as we've seen with the AV1 end-to-end test suite, there are limits on what people can do for build validation.
F: That way we really get the packets — in this case two packets from Chrome — and we have all the data you would commonly expect from an RTP parser. You get all the contributing sources, you get all the header extensions, you get the marker bit, you get whether the packet has padding, the sequence number, the synchronization source, the timestamps — everything you need — and based on that you can write the tests you want.
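The kind of test-side RTP inspection described here can be sketched as a small parser over the fixed RTP header from RFC 3550. The field layout below is the standard one; the function name and the idea of feeding it packets relayed by a test server are illustrative, not part of any existing WPT harness.

```javascript
// Minimal parse of the fixed RTP header (RFC 3550, section 5.1).
// `bytes` is a Uint8Array holding one packet, e.g. as relayed by a
// test server that forwards the browser's RTP to the test.
function parseRtpHeader(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const b0 = view.getUint8(0);
  const b1 = view.getUint8(1);
  return {
    version: b0 >> 6,            // always 2 for RTP
    padding: !!(b0 & 0x20),      // P bit
    extension: !!(b0 & 0x10),    // X bit: header extensions present
    csrcCount: b0 & 0x0f,        // number of contributing sources
    marker: !!(b1 & 0x80),       // M bit
    payloadType: b1 & 0x7f,
    sequenceNumber: view.getUint16(2),
    timestamp: view.getUint32(4),
    ssrc: view.getUint32(8),
  };
}

// Hand-built header: version 2, marker set, PT 96, seq 1000,
// timestamp 3000, SSRC 0x11223344.
const pkt = new Uint8Array([
  0x80, 0xe0, 0x03, 0xe8,
  0x00, 0x00, 0x0b, 0xb8,
  0x11, 0x22, 0x33, 0x44,
]);
const header = parseRtpHeader(pkt);
```

A test can then assert on `header.marker`, `header.payloadType` and friends instead of trusting the browser's own reporting.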
F: It is a bit more complex if you want to run it in production, because it currently has offer/answer handled by the peer connection on the server, but in practice the tests might want to generate the SDPs themselves. Something like an H.264-only offer should be under the control of the test, not the server. Thankfully aiortc is an ORTC-style library, so we could hopefully just take the ICE transport and DTLS transport and leave all the other logic with the test, in JavaScript.
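One way a test could take control of something like an "H.264-only offer" is to post-process the offer's media section, keeping only the payload types whose a=rtpmap entries match the wanted codec. This is a simplified string-level sketch (single m= section, no RTX/apt or fmtp cross-references), not part of any spec or library:

```javascript
// Keep only the payload types whose a=rtpmap codec name matches
// `codec` (e.g. "H264") in a simplified, single-m-section SDP.
// Real filtering would also rewrite RTX "apt=" references and
// handle multiple m= sections.
function restrictToCodec(sdp, codec) {
  const lines = sdp.split('\r\n');
  const keepPts = new Set();
  for (const line of lines) {
    const m = line.match(/^a=rtpmap:(\d+) ([^/]+)\//);
    if (m && m[2].toUpperCase() === codec.toUpperCase()) keepPts.add(m[1]);
  }
  return lines
    .filter((line) => {
      // Drop rtpmap/fmtp/rtcp-fb lines for removed payload types.
      const m = line.match(/^a=(?:rtpmap|fmtp|rtcp-fb):(\d+)/);
      return !m || keepPts.has(m[1]);
    })
    .map((line) =>
      line.startsWith('m=video')
        ? line.split(' ').slice(0, 3).concat([...keepPts]).join(' ')
        : line
    )
    .join('\r\n');
}
```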
G: Yeah, that's the question I had. I like the idea — it integrates very well with WPT, so that seems great. The thing I'd like to understand is which tests, or which area, we will start testing based on that. Do we have a precise idea of which tests we will write first, and how we will do that?
C: The most frustrating parts have sometimes been that when I do things that should cause a keyframe to be sent, I can't actually check that I have been sent a keyframe; and when I do layered encodings or simulcast, I can't actually check that there are multiple SSRCs coming in with different parameters. And of course the same goes for congestion control and stats.
H: Yeah, I think this sounds great, and hopefully, if Philipp and others can contribute some initial tests that others can copy from, that should help a lot. I had some questions, though: is aiortc able to receive simulcast, for instance?
A: I think the answer would be yes, Jan-Ivar. You'd need some of the same kind of tricks that Fippo's been using for the simulcast playground.
D: I have a question: how does the RTP traffic actually end up on a WebSocket? Is this a server living in the background, or is everything hosted in JavaScript?
I: Let me just throw out — I'm not going to do this, so obviously it would have to be someone who's willing to do it — but it almost seems like the most wonderful thing for testing, from my point of view, would be: we take sort of the WebRTC library or something like that, and we make a test client that's not just an echo-packet server but a full-blown WebRTC client, and it negotiates everything and does everything.
I: But it has the added thing that it effectively has a packet filter underneath it, so we can grab the raw packets going in and out of it, as well as whatever it negotiated or whatever else. That way we don't have to recreate tests that do all of the SDP and everything else, with all the problems that brings — we just use the WebRTC client to do that, but we get direct access to see exactly what the packets it sends and receives look like on the wire.
G: Do you know how well maintained this aiortc implementation is?
A: Yeah, and I guess the question is also — since a lot of the things you want to use it for are new things — whether it would keep up with the new stuff, right? As an example, the new RTP header extension encryption stuff. Well, if it's only about capturing packets, I think that makes it easier, right? It doesn't necessarily have to keep up with every feature.
G: That seems reasonable, yeah. Maybe we should start discussing with the WPT folks; we might need to write an RFC proposal to the WPT GitHub repo. Maybe we should start that in parallel, since it seems there's some confidence that this will succeed from a technical point of view, and there's interest in terms of testing. So maybe we can start that as well now.
A: Okay, moving on to discussion of media capture and streams. I'm going to turn it over to Jan-Ivar and Youenn.
H: All right. So, yes, this is one of the older issues in media capture main, so we wanted to bring it up for discussion.
H: First I'm explaining a tainted canvas, for those who are not aware. All browsers contain some cross-origin protections for media, and that means you can play and show images and videos from all kinds of different origins in a media element.
H: So you can, for instance, play a video file here, and you can draw it to a canvas, and you can write some text on top of it. But because the source material is cross-origin, the canvas then becomes tainted — I show that with a little virus symbol here. That means you can still play this.
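The cross-origin rule being described boils down to an origin comparison (scheme, host, port). The helper below is a deliberately simplified reduction of that check — the full HTML "same origin" algorithm also covers opaque origins and CORS-approved responses, which this sketch ignores:

```javascript
// Simplified same-origin test: two URLs share an origin when their
// scheme, host and port all match. Media from a different origin
// drawn to a canvas (without CORS approval) taints that canvas,
// blocking readback paths such as toDataURL() and captureStream().
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return (
    ua.protocol === ub.protocol &&
    ua.hostname === ub.hostname &&
    ua.port === ub.port
  );
}
```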
H: Next slide — and I should say the reason for this tainted-canvas rule is that, even though JavaScript itself does not have access to other sites' cookies, browsers will conveniently include those cookies for other sites when you request any resource from another origin. So that's why we have to protect that data: so that your dating-site self image, or your bank-account check numbers, cannot be captured by other sites. And because of ads — the ad ecosystem works that way.
H: So if we look at video-element captureStream and canvas captureStream, which are the two methods we have for capturing from a video element and a canvas respectively, this is sort of what the specs say to do today — because both the video element's source content and the now-tainted canvas contain cross-origin information.
H: If you try to capture them, I believe you either get black or — this diagram might be wrong — I think one of them produces black and the other one might give you a security error if you try. But I believe the gist is the same: there's even a recommendation in both specs that we should protect cross-origin content by sending the equivalent of muted. But Firefox is a little different — if you go to the next slide.
H: Next — yes. So in Firefox — it's actually mozCaptureStream — we do actually support it, on the left there.
H: We felt that we need an authorization model for MediaStreamTracks, and internally we have to track the origin for a MediaStreamTrack for security anyway. So that led us down this path of tainting them, instead of what the spec suggested, which was to mute, either permanently or temporarily.
H: So there's some pushback: is this useful? And it's unclear. I mean, I tried to come up with some use cases for tainting. One was in the earlier slide: you would notice that you could actually use this for captioning, where you could draw the video to a canvas on requestAnimationFrame, every frame, then draw text on top of it, and then show the result in a media element, which a lot of browsers have.
H: Is that performant? Probably not very. Is that useful? Maybe. Could it be optimized? Maybe. Other use cases might appear. Identity is another reason we went down this path, because it relies on similar concepts of attaching an origin to a piece of media — but that didn't really gain a lot of traction, so we're not sure there's going to be any future consideration of that, some have mentioned.
H: There are MediaStreamTrack manipulation features, like cropping, that are still being suggested, and the use case here would be that such features, if implemented on a MediaStreamTrack, would work on cross-origin content as well — whereas the raw-media-access APIs we've discussed so far would be limited to same-origin data, since they actually expose the data to JavaScript.
H: So, long story short: there aren't that many enticing short-term use cases, but should we allow some more exploration in the space before shutting it down? As far as media capture main goes — which is the main goal here, this being the oldest issue in media capture main — we want to close it so that we can drive this specification to Rec.
H: So proposal A is to basically mention the current state of things, but add a statement like: "If the user agent supports tainted MediaStreamTracks, then all sinks of a MediaStreamTrack must protect the data of cross-origin media in such tracks from being exposed to the application, for instance by replacing the data with muted output." The intention here would be that every spec that implements a new sink for a MediaStreamTrack shouldn't have to go through and look at all the specs for sources; there'd be a central place, which they would have to normatively reference — media capture main, at least. So it would be nice to say something here about these protections, so that things aren't implemented without them. Proposal B would be to try to band-aid this in media capture from element, which is farther away from Rec. And proposal C would be to say nothing, and maybe open an issue on this in media capture extensions. Thoughts?
G: This is Youenn. I don't like proposal A. I think that if we add it in media capture main, that means we would need to have two implementations, we need to show interoperability, and so on. And so far we have no proof that it will be used anywhere other than in media capture from element.
G: So I would try to avoid proposal A. I'm fine with proposal B, or at least I'm fine with trying to make it consistent between capture from a canvas element and capture from a media element — the two approaches should be similar, really, and they're not. Until then, I would be aligned with proposal B, if Mozilla thinks it should be described somewhere, but I would track it in media capture from element, not in media capture main.
G: If in the future we see that origin isolation is interesting — for identity, say — then maybe we will integrate the concept of tainting tracks back into media capture main, or into media capture extensions. But currently I'm not confident that this concept will be used widely, so I don't think we should include it in media capture main.
H: Well, to speak to that: what the track concept of sources and sinks allows today is that you can capture from an HTML media element that is cross-origin, and you can send it over a peer connection. And where should the implementer of a peer connection look, to know what to do? Do they now need to normatively reference the from-element spec?
G: What are we trying to solve there? If we are trying to solve something related to media capture from element, we should track it there, not in media capture main. I don't think proposal A improves the spec. Implementers of media capture from element will implement media capture main, and then they will look at that spec and see, for instance, that tainted tracks can only be consumed by HTML media elements — "oh, okay, I need to implement that," for instance.
H: Well, but that's potentially a different implementer than the implementer of a peer connection or media recorder. So the issue is, I think, that one solution — where we sort of mute things at the source — is well contained within one spec, because you're not letting tainted media into the sources-and-sinks MediaStreamTrack ecosystem at all.
H: It's to basically enforce that that doesn't leak. This wasn't meant to introduce something that needed testing, beyond mandating the protection. We're trying to avoid saying that we can't ever have tainted tracks in the MediaStreamTrack ecosystem in a particular user agent.
G: So we have a concept of tainted tracks. We are not sure whether we like it or dislike it. We are not sure whether it will be supported in all browsers or not. So I'm not a fan of adding it back into the spec — currently, media capture main does not describe it.
G: So I would keep it that way, and I am fine with a different place where we would define the concept of tainting MediaStreamTracks. When we have consensus amongst browsers and the web-developer community — yeah, then we should really make it a first-class citizen — we can either take that document to Rec, or we could merge that document into media capture main or some other document.
G: Okay, Youenn. I'd like to move on to discuss other items — can we do that item as the last one of the meeting? Sure.
A: Okay, we have a few things from Henrik.
D: Oh right, yes. So this is an old issue about invalid TURN credentials. You set the credentials with setConfiguration, but if it's a non-parse error — like invalid credentials, or you're unable to reach the host — then we don't describe where the failure should occur. So that's the problem: where do you surface the error?
D: We do already have onicecandidateerror today, which covers — let's see — the STUN error codes, sorry, I forget — but it doesn't cover the TURN errors.
D: So my proposal here was basically to piggyback on the onicecandidateerror event and say that, for TURN errors, we also use the error code 701.
A: I think this is well within the original reasons for creating it.
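From the application side, the proposal would surface through the existing onicecandidateerror event. The sketch below hedges: error codes 300–699 mirror STUN/TURN error responses from a server, and 701 is the "no response" bucket — reusing 701 for these TURN failures is the proposal under discussion here, not shipped behavior.

```javascript
// Classify RTCPeerConnectionIceErrorEvent.errorCode values.
// 300-699 mirror STUN/TURN error responses; 701 is used when no
// response came back at all. This proposal would put unreachable
// TURN hosts / bad-credential failures in the 701 bucket too.
function describeIceError(errorCode) {
  if (errorCode === 401) return 'unauthorized (bad TURN credentials)';
  if (errorCode === 701) return 'server unreachable / no response';
  if (errorCode >= 300 && errorCode <= 699) {
    return `STUN/TURN error response ${errorCode}`;
  }
  return `unknown error ${errorCode}`;
}

// Browser-side wiring (guarded so this sketch also loads elsewhere;
// the TURN URL and credentials are placeholders).
if (typeof RTCPeerConnection !== 'undefined') {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'turn:turn.example.org', username: 'u', credential: 'c' }],
  });
  pc.onicecandidateerror = (e) =>
    console.log(e.url, e.errorCode, describeIceError(e.errorCode));
}
```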
D: Okay. So this is about the Opus codec. In Chromium you can achieve stereo by SDP munging.
D: In SDP we have this attribute, stereo=1, and it means "I'm okay with receiving stereo." But if the stereo line is missing, then that is implicitly the same thing as stereo=0, which means "I prefer to receive mono." This attribute always talks about what you prefer to receive. However, regardless of the value of this attribute, Opus decoders must support stereo.
D: So that seems pretty backwards, and I also note that we don't look at the channel count on the MediaStreamTrack. So the problem is: how should we support stereo the right way? Next slide.
D: So the proposal is to make stereo=1 the default value in the SDP, and the way to decide how many channels to send is to look at the track's channelCount together with the stereo attribute. If you're talking to an endpoint that doesn't support stereo, and it's overriding this stereo=1 default to stereo=0, then you have an indication that it doesn't want stereo.
D: So you can keep sending mono; but if everyone's okay with stereo, then you send stereo based on the MediaStreamTrack property. This would change behavior in the case where you have a multi-channel MediaStreamTrack and you're just doing an offer/answer exchange, relying on the defaults — this would start sending stereo.
D: But if you want mono, then you can either modify the answer SDP, or you can just request the track with channelCount 1, or use Web Audio.
D: Because not including the stereo attribute implicitly means that you don't want stereo.
D: Oh — I haven't looked into that. That's the answer to that.
I: It just allows the sender to say whether they're likely to send stereo or not. I mean, look — I'm not sure it dramatically changes your proposal, but I think you should use it as well to make what you want happen. I think it helps solve your problem a little bit.
D: Well, so I guess — you have to be prepared to decode stereo regardless, so in terms of compatibility it wouldn't matter. But if we want backwards compatibility with today's behavior of defaulting to mono, then announcing the intent to send stereo with that other attribute might make sense. The thing I don't like about that is that the next question becomes: how do you control what it should be set to?
I: Yeah, well — I think you have to have a way to allow people to force mono, regardless of what you think about receiving stereo in Opus. It's true the decoder will decode it, but that doesn't mean it'll work, and it generally means that at least one of those streams will be thrown away. So you've got to allow people to force mono negotiation — you can't not, right?
D: But an endpoint that doesn't support stereo can always reply with stereo=0, sure — that would solve that problem. And I think if this is a browser endpoint that doesn't support it, then it certainly should overwrite it to zero.
D: But it only has an effect when you're asking the other endpoint to do something, because the browser has to be prepared for stereo regardless — so, setting the local description...
D: Does it need to? Because I'm thinking you could set it locally, without munging, to stereo=1 — since it doesn't have an effect — and then, before sending it over to the other endpoint, you say stereo=0.
H: And also, the spec does not allow setLocalDescription with a modified, munged offer.
D: You could do that, but I would want to avoid firing onnegotiationneeded based on applyConstraints on the track. That seems several layers removed from the peer-connection level.
D: Yeah, I think the sender is already able to control this. If the receiver really doesn't want it, it can munge after setLocalDescription, and if you want to properly negotiate this from both endpoints, you can do that in application-logic land and then leave it to the sender to do what you asked for.
H: So I guess, if channelCount defaults to one on MediaStreamTracks, then yeah, it would effectively mean sending mono for most applications. Yeah, I think this could work.
D: So can I record that the proposal is fine, but only if we verify that this won't cause backwards-compatibility issues — so, don't land yet?
D: Okay, well — I think we have enough information for this issue, but more data is needed.
G: Okay. So, speaking about transferable data channels now. We've heard in the past of several websites doing their video pipeline or audio pipeline — decoding or encoding — in workers. Zoom, for instance, is doing it, and is either using a data channel on the main thread or WebSockets to send and receive compressed data.
G: There are other websites — like Parsec, for game streaming and remote desktop — that are doing similar things. Generally the web, with OffscreenCanvas and so on, is adding more and more features to process audio and video in background threads, in workers.
G: For WebSocket or HTTP we already have support for doing the transfer in workers; that's not the case for data channels. In the past we discussed the feasibility of creating data channels in workers, particularly in service workers, but maybe we can reduce the scope of the problem here and solve existing issues on existing websites. The idea of the solution for websites like Zoom or Parsec would be to create data channels as we do today, in the window, and then transfer them.
G: So we reduce the scope, and we hope it will be simpler. Some issues will not be solved: for instance, if you want a persistent data channel — you're navigating from one page to another, same origin, same service worker, and you would like to keep the data channel alive — this proposal is not a solution for that. But still, we think that what we provide with a transferable data channel might be a good opportunity for websites. Next slide.
G: In terms of what would be new to specify, we would need to specify a transfer algorithm. Given that a data channel is basically a pipe — a readable and a writable stream, plus some more events — it would be very similar to what the Streams standard defines for transferable streams, so it would not be that difficult.
G: We would need to define what a neutered data channel is, because we probably don't want the data channel to remain usable on the sending side after transfer, so there's some work that needs to be done there. In terms of the existing spec, when I looked at it, a lot of what is in the spec would not need to be changed — the creating and closing algorithms, method definitions, garbage collection as well — so I do not anticipate a large refactoring of the current spec in terms of implementation.
G: So my question to the working group is whether there's interest in both defining and implementing transferable data channels.
H: The alternative sounds a little appealing too. So — just making sure I understand — if we disregard the alternative, you're not proposing any changes to the API? Yeah.
H: So the advantage of that would be that it would probably be easier to do and we'd get it quicker, but it would keep the same event-based API that we're accustomed to. I think the only counter-argument I see is that your alternative also sounds appealing to me as an alternate approach — because it sounds like, if data channels had already supported readable/writable streams, then we sort of could have already done this without making data channels themselves transferable.
G: Hopefully, yes. In most cases, if the application is careful enough about that, then I believe browsers will be able to optimize it — or they might have some difficulties optimizing it in all cases, but that would be fine. I don't know where the WebSocketStream proposal stands; if it was already shipping in all browsers, I would say "yeah, we already have it, so why not piggyback on it?"
A: Yeah, I personally like the simpler proposal, because I think there's a lot of code written against the existing data channel API that would use it immediately. So you'll basically get a lot of usage quickly.
G: For out-of-process you will need to copy data or use shared memory, which is also feasible. Whether you use readable stream and writable stream or a transferable, you have the same issue. In any case, I was thinking that we could start with just the dedicated worker, because that's the major driving scenario, and we could also study out-of-process transferable data channels — which I guess would be iframe to iframe, basically. That's something I could prototype, for instance, and report on whether it's difficult. I do not anticipate any difficulty, honestly, except that it will be less efficient — but yeah, sure, it's not the same process, so you need to get the data from the network process (SCTP usually runs in a network process), and then you would do the processing of this data in another process. So in any case you will have a perf penalty.
C: By the way, WebSocketStream had an origin trial that seems to have ended in something like M87; it did not make M89, and I don't see any sign of it in 92.
C: Well — Ricea's note of 10 days ago was "I'm making progress on the standard, but it's slower than I'd like; I will push it to a later milestone." So I would say: heavy going still.
C: If we managed to do the thing that lets a data channel produce a stream of streams, that would also be a solution that could be transferred. But at the moment I like Youenn's proposal; it seems somewhat straightforward, if he thinks they can manage the cross-process issue without too much trouble.
G: That's worth the question, yeah. Just on the cross-process thing: the ugly option would be to say we're only allowing postMessage transfer to dedicated workers. That's something we could define, but it's really ugly, and I don't know of any API on the web that does that. So that's why I'm saying we should at least try to do it and see whether it's very hard or not.
H: I think the main reason I would support that is that blocking on WebSocketStream would probably make this take a lot longer. And I also happen to know — I've been working on polyfilling WebSocketStream on top of WebTransport, actually — that it's a bit of an impedance mismatch, because both WebSocket and data channels operate on messages of arbitrary length, which doesn't really fit a readable/writable stream directly. So you end up with a stream-of-streams concept. So it might be...
H: You know, the alternative might take a lot longer to get right. Even if it doesn't end up directly mapping to WebSocketStream, the idea of having an RTCDataChannel with a readable/writable pair sounds desirable long term — but it's still not clear how that fits with its inherently message-based nature.
H: So yeah, I think it's worth exploring transferring an RTCDataChannel.
A: Yeah — particularly because a bunch of the use cases here involve high volumes of data transfer, so the copy issue is going to be a problem.
G: So, issue 48: exposing the script transform in workers. At the last meeting we discussed the transform attribute on sender and receiver, which is a way to say "hey, I want to do a transform" — that's all in the window environment — and there's a need to complement that with what happens in the worker when you're setting a script transform that will be executed there.
G: So there are a few decisions we could make on the shape of the API. The first part is what we do in the window environment, which is to say "hey, this transform is now a script transform, and the script transform should run on the worker" — that's the fourth line there: sender.transform = new RTCRtpScriptTransform(worker).
G: Then we need something to happen in the worker to say "hey, there's a new transform — so, worker, please start doing something." On the slide there are two examples. One is an event-based variant: every time you create a new script transform for a given worker, an rtctransform event is fired, and then you can get the transformer — which is the counterpart, in the worker environment, of the transform — and you start doing things.
G: The other variant, on the right of the slide, is based on AudioWorklet: you create your class, which extends a script transformer, and then you register this class; and then every time you create a transform, what the worker does is create your MyTransform object.
G: It works well — it's used in worklets — but there's a little bit more API. So my proposal here would be to start with an rtctransform event. I don't know whether we should decide now; I will go through the slides, and maybe we can get back to this discussion topic to get feedback from the working group. So, next slide.
G: The first option is: let's say we have an rtctransform event. Then we say this event has a dictionary, called RTCTransformEventData, and it has two fields, readable and writable, and also options — and the options are actually given as input from the window environment.
G: So it's more work, since you will need to describe this script transformer, but it also makes it much easier to expose additional API surface, like more state getters — am I on a sender or a receiver? what's the bitrate? — and in case I need to request a keyframe, we could add a method to the script transformer object as well. So it makes things more extensible and easier to manage.
G: So that's option two — we would keep readable and writable there. And option three — next slide — is that we would still go with a script transformer as an object, but this time we would try to avoid using readable and writable. There are a number of ways we could do that: we could add an event, like an onframe event, and then an enqueue method.
G: We could use a callback — TransformStream has a transformer transform callback — and we could try to allow JavaScript in a worker to set that callback, and do things like that.
G: That's feasible as well. Compared to option two, we keep the same extensibility, and I believe options two and three are sort of shimmable one with the other anyway. So it's a decision we need to make at some point, but they are similar. So, next slide, which tries to conclude this issue.
G: I believe an interface is more extensible, so that's why I would go with a script transformer interface. And the first decision I'd like us to make is whether we prefer readable/writable or an event-based method; my proposal here would be to stick with readable/writable streams for now. There are some pros and cons, so it's worth keeping in mind that we can continue investigating, and if we find something better we can certainly change the API — but if we do not find anything, we could still stick with readable/writable.
H: Because you have a readonly attribute — RTCTransformEvent's data/transformer — which is a dictionary: I don't think that's allowed, because dictionaries inherently use copy semantics, so you cannot hold a reference to one, such as from an attribute, and attributes cannot have copy semantics.
C: So I keep hoping, on the feedback messages, that in order to complete the loop we'll have to get to a four-field version — readable, writable, readable controller, writable controller. But having a standardized message that they use for this, instead of doing an ad-hoc message, would make life a little simpler, especially if you can have a standardized message that they include on any of these.
G: Yeah — postMessage takes an argument and creates an event which carries this data, so it's certainly feasible to do it. Maybe we can also start with the first decision: do we like the rtctransform event, for instance?
G: Good question — so yeah, whenever you create the new RTCRtpScriptTransform you provide a worker. The first thing you would do in that constructor would be to enqueue a task in the worker event loop, and you would get the options and serialize them and so on, and then you would fire the event in the worker event loop. So it happens once per created RTCRtpScriptTransform object.
H: Okay, so it's a one-time event. Yeah, okay, thanks. I think that event sounds reasonable, unless there are other ways that are common in workers to do similar things.
G: The thing I found was the worklet approach, where you register your transformer and so on. That's what is implemented currently in Safari.
G: Okay, so getting back to exposing a dictionary or an interface.
G: We can start with an interface. I personally prefer an interface, since I feel we will need some extension points — maybe it will be the four field values you're mentioning, maybe it will be methods — so I feel more confident with a transformer interface, personally.
G: Okay. And about sticking with readable stream and writable stream, for now at least — is that okay with everybody?
G: Okay, okay. So I will put a pull request on webrtc-encoded-transform (insertable streams) based on what we've been talking about.
A: But we did get through what we talked about. So thank you, everybody, and we will post the minutes and recording as usual.