From YouTube: WEBRTC WG meeting April 27, 2021
D
Yes, most of you already know Dr. Alex passed away last week. It was in Thailand. This has been devastating, especially for his family and all the people that were with him in Singapore and the rest of the CoSMo and Millicast teams. Many of you have known him even longer than I have, so I don't think it is needed to explain how big his personality was.
D
I would only say that he was omnipresent; he was involved in everything, and, at least for me, it would be very difficult not to feel his presence in anything we are doing, because he was involved in almost everything. I still can't process the news; it's like it hasn't happened, and I feel it in every step I take.
D
I think I can still hear his voice in my mind, and it is really difficult to think about him as having passed away. We are taking care of his family in Singapore and in France; we are trying to help them with all the process, because it has been very difficult repatriating his body to France. We are trying to set up some kind of memorial or charity fund in his name, but we have to respect his family's wishes, and everything is still too recent. I know that many of you have asked me how to send something for the funeral, anything like that, but I will ask you to wait a bit more until we are able to find a good way to pass our condolences to their families.
B
Yeah, I'd like to say, Sergio, that one of the remarkable things about Dr. Alex was that he enjoyed doing things on the cutting edge, even if it was very high risk: in particular, the work he did on AV1, to test it and write the test suite, and all of the work he did on WebRTC testing. They were the kind of things that don't get a lot of recognition and that are very hard to do, but he really loved that kind of thing. He never shied away from the cutting edge.
E
Worth saying, maybe, if I may (I can say this because I'm not part of any of the browser teams): I think the fact that we've got several browsers implementing WebRTC now is in no small part due to Dr. Alex proving that there was a demand for it, by doing plugins and needling people back in the day that, you know, yes, Safari did need to be able to do WebRTC, and yes...
E
You know, IE did need WebRTC, and there was demand, commercial demand, for it. I think his willingness, as Sergio says, to take on difficult things and go do them allowed him to do that, and we're all much the richer for it. I think it's a real tribute to him.
G
Yeah, I also wanted to say that Dr. Alex is probably the first person from the WebRTC community I actually met in person, and it was very inspiring and a very warm welcome for me into the WebRTC community. That was just great to see. His ability to connect people and to put in positive energy was also great.
A
When we got things to Rec, we took on certain restrictions in the process, which means that, basically, once you have what W3C calls Rec, you're not supposed to add things that are untested, unproven, and unimplemented. The W3C process does allow us to recycle a recommendation that is at Rec stage and add new features, but they need to be features that have an appropriate level of implementation, and so...
G
Yeah, and I also think it makes some sense. I would just maybe add a subtlety: maybe you can have consensus on an extension, and then it's promoted as a Working Draft and blah blah blah, and you work on it and work on it and so on.
H
Yes, so Jan-Ivar here. I think I largely agree. I'm only a little concerned about the webrtc-extensions repo itself, because it's not a document of a specific proposal; it's a place where all kinds of APIs can be added, whether they're good ideas or not.
H
And so there is a separate question, I guess: whether, for new ideas of a certain size, they'd probably better start with trying to get consensus on a working draft. I was concerned that we don't skip that process of getting group consensus on a working draft for a new API by it landing in webrtc-extensions. But other than that, I think the process sounds good.
J
I have an idea for that. I mean, for webrtc-pc we added stuff, and then at some point we added these notes that said "implementation at risk", and I'm wondering if we could do that, but backwards, here: where we add boxes saying, you know, "consensus reached" or something. Would that make sense, so that we don't...
J
...put new documents in webrtc-extensions... yeah, like, by default they're at risk, but once we think we have an agreement to implement them, we mark them as ready to implement or something.
H
Well, maybe as a way to deal with extensions that exist now; but as a process going forward, I would rather we don't have documents there that don't have consensus to begin with, because I think seeing documents in the W3C repo can be very confusing to people about how much consensus they have. So I'm not sure readers will see the absence of such a label as a deterrent, or understand that this does not have consensus.
A
Frankly, manipulating people's implementation decisions by manipulating labels... well, it could maybe work, but in practice it often doesn't, so I would hesitate to make things more complicated. And frankly, we have added a lot of stuff that didn't have consensus at the outset, because we're pretty unable to come to concrete discussions without a concrete proposal.
K
There are now mechanisms, both in ReSpec and Bikeshed, the tools we use to edit the specs, to display data about which browser implements which feature. So, provided we also maintain that data, or make sure it gets maintained in MDN (because that comes from MDN), then we could have that displayed along with a feature description, which gives more clarity about the exact usability of the said feature.
H
Yes, we also have other tools. We have incubator specs, we have private repos. I don't see a problem with discussion issues being filed in private repos, for having the working group just discuss immature ideas, the things that need more bake time; and maybe we can take a cue from other organizations and have more consensus ahead of adding spec prose into W3C documents.
H
I think that would be great, but we're talking about the early end here; as far as the late end, near Rec, I think I largely agree with that part, that side, yeah.
A
So, in the interest of time, I suggest: please make more comments on the PR, and we'll probably send out a note to the main list saying we think this PR is ready for adoption. Say yes or file a bug.
B
Yes, please put that in the minutes; I think both of those are action items. Thank you. Okay, a little bit of discussion on testing. Two meetings ago we had a test proposal, which is to create something called an echo server, and a prototype was done.
B
It
seemed
to
work
well,
basically,
what
it
did
is
it
captured
packets
that
were
there
was
a
server
that
captured
packets
that
were
sent
to
it
and
sent
them
back
over
websockets,
and
so
it
allowed
made
it
possible
to
test
a
bunch
of
protocol
behavior
which
hadn't
been
testable
before
and
fippo
wrote
a
couple
of
tests
to
test
whether
knacks
and
buys
were
being
sent
appropriately,
bpa
simulcasts
lots
of
stuff.
B
So this seemed like an interesting idea, and so we've been trying to move forward to get this server put up on WPT. I guess, Youenn, you've been following up with Jeremy, and there was an issue with the CRC-32 library, and no objection, I guess, from Jeremy, but the work hasn't started. So is there anything, any action item, we need to take, Youenn, to move forward?
G
Maybe we should file an actual issue on the aiortc repo. Okay, I didn't have an answer from Jeremy Lainé. I was thinking to wait for this issue to be fixed before asking the WPT folks, but I can also try to do that in parallel. I don't know what the WebRTC working group thinks about that.
K
I think it's probably a good idea to at least start writing up the RFC, so that the group, for instance...
B
Okay, so for the minutes, please note that we'll move forward on the RFC and, in parallel, try to address the licensing issue. Okay, thank you. All right, so we're moving on to the rest of the discussion.
B
So this is a little slide about something that had been in the icebox for webrtc-extensions. The idea was to introduce a requestKeyFrame API. It's a little bit of a misnomer, because it's not about requesting keyframes, it's about generating them. The reason it was put in the icebox was that there were existing mechanisms that people were using to cause a sender to generate a keyframe, but we found they have some limitations.
B
So an example of something that people are using, I believe, is sender.replaceTrack, where you pass an argument of the same track you already have. Now, all the spec says is that you should seamlessly transition to sending the new track; it doesn't say you should send a keyframe, and developers have found that, as a result, this is not reliable. Sometimes it generates a keyframe on some browsers, sometimes it doesn't; I think there's a theory that there's a race condition that makes it unreliable.
B
Another thing that's been suggested is twiddling the active attribute in the encodings: you can set active to false and then back to true, and that, I guess, could cause a keyframe to be generated again.
B
The spec doesn't require it, and the problem is, if you do that, you'd probably cause a stutter: you might miss the video, it might go black or something. So, because the spec doesn't mandate keyframe generation, it's not clear you can really depend on any of these in an application. And there are no tests, and it probably would be valid to object to adding a test, given that the spec doesn't actually say you should generate a keyframe.
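For illustration (not from the meeting slides): a minimal sketch of the two workarounds just described. Both use shipped APIs, but no spec text obliges either one to actually produce a keyframe, which is exactly the problem being raised.

```ts
// Sketch of the unreliable workarounds discussed above.
async function hackKeyFrame(sender: RTCRtpSender) {
  // Workaround 1: replaceTrack with the track the sender already has.
  await sender.replaceTrack(sender.track);

  // Workaround 2: toggle `active` on the encodings (may cause a stutter).
  let params = sender.getParameters();
  params.encodings.forEach((e) => { e.active = false; });
  await sender.setParameters(params);
  params = sender.getParameters(); // re-read to get a fresh transaction
  params.encodings.forEach((e) => { e.active = true; });
  await sender.setParameters(params);
}
```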
B
So the proposal here is to have a new API. I would suggest it be called generateKeyFrame rather than requestKeyFrame, because it's not really about requests, it's about generating them, with an argument of the encodings that will be reset. You can specify the encodings, I guess by number or maybe by RIDs, so you pass in a sequence, and the effect of this would be the same as receiving an FIR with SSRCs that correspond to the target streams.
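As a sketch, the proposed shape might look like the following; `generateKeyFrame` and its rid-sequence argument are the proposal under discussion (PR 37), not a shipped API at this point.

```ts
// Hypothetical sketch of the proposed API; encodings are identified by rid.
async function resetSimulcastLayers(sender: RTCRtpSender) {
  // Per the proposal, equivalent to receiving an FIR for the SSRCs that
  // correspond to the "hi" and "mid" simulcast encodings.
  await (sender as any).generateKeyFrame(["hi", "mid"]);
}
```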
B
Now
the
effective
of
that
is
not
neces,
it
may
differ
between
implementations,
for
example,
in
chrome.
It
causes
all
streams
to
be
reset,
not
just
the
one
that's
referenced
with
an
ssrc,
because
one
encoder
is
being
used.
You
kind
of
have
to
reset
the
state
of
the
encoder,
not
just
a
stream,
but
that's
that's
okay.
Maybe
it
basically
has
the
same
effect
as
nfir.
B
So
this
is,
we
do
have
potential
developers
who
would
use
this
if
it
were
defined
because,
like
we
said
the
existing
spec
doesn't
doesn't
have
a
reliable
and
well-defined
way
to
do
it.
Any
comments.
H
Could you talk about why web developers are doing this, the use case?
B
Okay,
basically,
I
think
it's
in
a
in
a
case
where
people
join
or
leave
a
conference
where,
where
you'd
want
to
basically
generate
generate
a
keyframe,
potentially
also
as
an
example
when,
when
your
conference
server
switches
between
simulcast
encodings,
you
need
to
you
need
to
generate
a
keyframe
as
well.
D
I have found a slightly different use case. You could argue that this could be targeted by, yes, RTCP FIR or Picture Loss Indication. But there is one use case where you cannot do it: if you are using end-to-end encryption and you are doing the encryption via insertable streams. You change the key because someone is joining, and as the key exchange is asynchronous, not synced with the media, it can happen that the key is not sent exactly when, or is not set on, a keyframe. So you want to make sure that when you are sending them with...
D
...it was specified, or was originally proposed by Harald, due to that. Because when doing end-to-end encryption, it is something that is controlled in the client; you cannot, for obvious reasons, control it on the SFU, and due to the nature of the asynchronous key exchange, you need to control it on the client side, not on the SFU.
G
So
talking
about
the
end-to-end
encryption
case,
usually
encryption
will
be
done
not
in
main
thread
so,
but
that's
why,
when
I
first
initially
heard
about
this
use
case,
I
thought
it
would
be
better
to
put
this
api
in
webrtc
and
could
it
transform
context,
which
is
where
you
set
the
key.
So
you
will
also
request
the
keyframe
roughly
at
the
same
time,
and
you
have
like
good
ordering
behavior
in
this
use
case
that
maxim
expressed
on
the
github
issue.
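A sketch of that placement, assuming a hypothetical `generateKeyFrame()` on the transformer object of the encoded-transform worker context; the details were not settled at this point in the discussion.

```ts
// worker.ts - hypothetical sketch: trigger the keyframe from the same
// context that rotates the E2EE key, so the two stay ordered.
(self as any).onrtctransform = (event: any) => {
  const transformer = event.transformer;
  const transform = new TransformStream({
    transform(frame: any, controller: TransformStreamDefaultController) {
      // ... encrypt frame.data with the current key here ...
      controller.enqueue(frame);
    },
  });
  transformer.readable.pipeThrough(transform).pipeTo(transformer.writable);

  // On key rotation (hypothetical method, per the discussion):
  // transformer.generateKeyFrame();
};
```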
G
I think it's fine to use sender.generateKeyFrame, but FIR would be fine as well. I would be much more interested in having generateKeyFrame if there was a constraint, maybe not a MUST but a SHOULD, that you only trigger a keyframe on a specific encoding and not on all encodings.
G
That way we would be in a better situation than with FIR. Of course, there might be encoders that are doing everything in parallel, and it might not be implementable, but in most cases it's implementable today, and we should just do that.
B
Yeah, so that's why the argument has been provided, Youenn, so that implementations can do that. I guess it would be, you know, a separate argument whether it's allowable to reset the whole encoder or not, but...
B
Okay, so we're going to update PR 37, I guess with Harald's permission, to go in this direction and have more discussion.
G
So, just a slight comment on PR 37: I think it was based on chaining the operation, and I'm not sure we want to chain operations there, because we probably want to generate the keyframe as soon as we can, right?
A
Yeah, the reason why I suggested chaining the operation was because, if you do generateKeyFrame and replaceTrack, then you want to be very sure that the generateKeyFrame consistently happens either before or after the replaceTrack.
A
So, since replaceTrack was on the operations chain, I thought the simplest way to make sure it was consistent was to add it on the operations chain, and if there's a reason why this is not racy, then I'm fine with not doing a chaining of it.
H
Well, maybe we should... some concern was raised here also about inconsistent behavior of replaceTrack, so maybe we could address that at the same time, and maybe say something like: replaceTrack should generate a keyframe as well.
B
Yeah, I mean, the weird thing here is that it says it should be seamless. If you think about it, the most seamless way to do a replaceTrack on an existing track is to do nothing at all, right? So it wasn't clear to me; it certainly didn't even seem to imply generating a keyframe in that kind of a case.
B
Okay, I think we have the next step, which is basically: comment on the PR and try to get it closer to being finished. All right, so I'm going to turn the floor over to, I guess, Harald and Guido.
I
So, Guido, do you want to speak to this? Okay, you can start, and then you can talk whenever you want to add something. Yeah, so basically we want to report on the current status of breakout box, and we are proposing to adopt it as a working group document, as an initial point to discuss and address all these issues.
I
Actually,
since
it's
used
frequently
together
with
web
codecs
and
it's
replacing
an
an
api
that
was
originally
part
of
webcodex,
so
it's
easier
for
for
existing
participants
to
to
migrate
by
keeping
the
summary
in
trial
during
trial
has
more
than
50
sign,
ups,
of
which
at
least
eight
are
experimented
experimenting
with
record
box.
The
feedback
from
developer
has
been
positive
in
general,
many
appreciate
the
ability
to
do
processing
on
workers
and
yes-
and
this
there
are
among
sign
ups.
I
There
are
applications
in
active
development
by
zoom
and
by
google
me
next,
so
I'm
going
to
go
over
the
issues
that
are
opened
on
the
on
the
repository
just
to
explain
what
the
status
is
with
regards
to
those
issues
and
then
we'll
have
also
opportunity
to
discuss
even
more
since
jun
has
plenty
of
slides
with
with
additional
feedback.
I
So
the
the
first
issue,
or
well,
not
the
first
one
but
the
the
first
one
I'm
going
to
talk
about
is
the
initial.
Whether
using
readable,
unrightly
streams
is
the
right
approach,
and
my
argument
is
that
it
is
at
least
at
this
time,
because
using
streams
allows
us
to
leverage
a
lot
of
mechanisms
that
are
already
proven
in
production
like
direct
support
for
workers.
I
It's
a
it's
api,
that's
being
used
on
other
api,
so
developers
know
it.
It
has
a
long
period
model
and
it's
suitable
for
what
we
want
to
do,
which
is
basically
providing
a
stream
of
of
media
chunks.
The
other
question
is
would
be
whether
separating
readable
unwritable
is
a
good
idea.
I
Instead
of
having
a
transform
approach
like
like,
we
are
adopting
on
on
the
encoded
transform-
and
in
this
case
our
position
is
that
separating
readable
and
writeable
allows
us
to
better
support,
more
use
cases
like
not
just
the
the
transform
case,
but
also
the
customs
in
case
which,
which
doesn't
require
the
output
to
be
another
track
just
reading
from
the
track
and
doing
something
with
it.
And
this
is
the
way
zoom
is
using
it
together
with
web
codecs
and
what
transport
to
implement
their
custom
communication
particles
the
same.
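A minimal sketch of that custom-sink pattern, using the breakout box and WebCodecs APIs as they existed in Chrome's origin trial (names could still change):

```ts
// Read raw frames from a camera track and hand them to a WebCodecs encoder;
// the output chunks could then go over WebTransport, as Zoom does.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getVideoTracks();

const processor = new (self as any).MediaStreamTrackProcessor({ track });
const encoder = new VideoEncoder({
  output: (chunk) => { /* e.g. ship the EncodedVideoChunk over WebTransport */ },
  error: (e) => console.error(e),
});
encoder.configure({ codec: "vp8", width: 640, height: 480 });

await processor.readable.pipeTo(new WritableStream({
  write(frame: any) {
    encoder.encode(frame);
    frame.close(); // release the frame's resources promptly
  },
}));
```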
I
Yes, another issue is memory management for incoming frames. For example, what if the upstream track or source produces more frames than can be consumed by JavaScript? We have two mechanisms for that. Video frames and audio frames from WebCodecs have a close() method that allows releasing the resources associated with that frame, so it allows dropping frames with precision; and the MediaStreamTrackProcessor also has an input buffer, a circular one, that has a maximum size.
I
So
if
it
allows
the
developer
to
control
how
to
handle
a
certain
burst
of
data,
but
if
the
system
cannot
cope,
then,
then
the
the
all
frames
from
that
buffer
are
dropped
automatically
and
yeah,
and
if
you
look
at
it
like
it
could
also
be
a
problem
with
the
platform,
sources
and
tracks.
Like
let's
say
you
have,
you
have
some
some
source
that
produces
a
lot
of
a
lot
more
frames
than
than
then
can
be
consumed
by
the
things
connected
to
it,
and
the
implementations
need
to
deal
with
that
anyway.
I
Next,
then,
the
another
issue
would
be
the
opposite.
What
what
happens
if,
if,
if
it's
more
more
like
on
the
consumer
side,
like
you
use,
the
the
generator
generates
more
frames
than
a
sink
can
consume.
So
the
question
is:
how
how
does
the
application
know
about
it
and
yeah.
I
Well,
the
idea
is
that
that
this
is
intended.
This
is
intended
to
support
the
real
time
use
case,
so
so
a
generator
should
produce
frames
at
a
reasonably
real
time
pace,
but
even
if,
if
if,
if
that
doesn't
happen,
there
are
solutions
to
notify
an
application
that
it's
going
to
fast
like,
for
example,
the
platform
syncs
can
use
their
existing
reporting
mechanisms
like
error
events
in
media
elements
or
maybe
even
peer
connections,.
I
Channel
to
to
report
that-
or
we
could
even
have
other
other
solutions
like
events
on
the
generator
or
yeah
so
yeah,
this
is
open
to
discussion,
but
there
are.
There
are
ways
to
to
deal
with
it
yeah
and
next
and
the
other
one,
which
is
actually
the
first
file.
The
issue
is
to
add
a
high
quality,
facebook
tracking
api
to
discourage
porn,
discriminatory
implementations,
so,
basically
adding
a
platform
native
transform
for
phase
body
tracking.
I
In
regards
to
the
x,
I
think
we
can
adopt
a
similar
approach
and
the
one
we're
taking
in
encoder
transfer
and
just
define
a
standard
transformation
for
this.
I
We
can
probably
specified
on
on
a
separate
specs
since,
since
the
details,
I
well
I'm
not
an
expert
in
the
area,
but
I
would
guess
that
the
details
might
not
be
trivial.
I
So next we have another slide. Basically, what we propose is to adopt the spec as a working group document, which means agreement that we need this, that the problems that we're trying to solve actually need solving, and that we agree to...
B
I will say that, in terms of developer feedback that I've heard, it was a little bit confusing that WebCodecs used VideoTrackReader and Writer, and then that was pulled out in the second origin trial without breakout box being documented in its place. So we had a bunch of very puzzled developers who were trying VideoTrackReader and Writer and had to be redirected to breakout box, and initially there weren't any docs to provide them. So that may be the reason that you have 50 WebCodecs people and only eight on breakout box.
H
Yeah
yeah
we're
here.
Sorry
all
right.
Can
you
hear
me
yeah,
oh
yeah,
so
yeah?
I
think
some
concerns
in
that
I
believe.
As
a
matter
of
process,
we
agreed
to
address
the
problem
space.
The
working
group
agreed
to
address
the
problem
base
of
raw
media
access
and
that
we
adopted
the
problem,
but
not
necessarily
the
specific
specification
of
these
specific
apis,
and
so
it
leaves
us
on
unofficial
draft.
H
At
the
moment
the
github
repo
seems
to
have
mostly
pr's
from
from
one
company
and
doesn't
have
broad
involvement,
and
I
think,
there's
also
been
some
process
confusion
about
what
the
status
and
what
the
next
steps
there
are,
and
so
I'm
glad
that
we're
trying
to
address
that
now,
but
also
what
we
talked
about
earlier
about
working
drafts.
H
It's
our
understanding
that
chrome
is
potentially
planning
to
ship
this,
not
just
as
an
origin
trial
which
has
already
happened
but
as
an
actual
feature,
even
though
the
document
hasn't
really
been
adopted
by
the
working
group.
So
that
is
concerning
from
a
process
standpoint,
and
I
think
we
want
to
make
sure
this
has
better
review
an
overview
from
tag,
for
instance,
and
much
more
horizontal
review
before.
H
If,
if
working
draft
is
going
to
become
the
point
at
which
features
are
going
to
ship
on
the
web,
I
think
we
have
to
be
be
more
cautious
before
approving
a
draft
until
it's
reached
a
water
consensus.
H
Right. At the same time, I think this raises an interesting challenge, where work is happening in multiple working groups, and it looks like WebCodecs and WebRTC perhaps should work more closely to make sure things don't fall between chairs, so to speak, no pun intended.
H
The
one
thing
that,
for
instance,
stands
out
is
that
the
web
codex
apparently
had
a
video
track
reader,
but
never
had
an
audio
track
reader.
So
by
deprecating
video
track
reader
in
favor
of
breakout
box.
Breakout
box
also
handles
audio,
and
I
think
no
one,
no
one
caught
that
as
a
an
area
that
hadn't
been
widely
discussed
or
discussed
at
all.
Perhaps.
G
We are at the W3C, not the IETF, so I would go with comments as they come. That's fine. I see that, Dom, you want to speak.
K
Yeah,
I
just
wanted
to
comment
on
the
process
bits
so,
firstly,
in
terms
of
getting
horizontal
reviews,
some
of
these
storytelling
reviews,
we
can
only
request
once
we've
adopted
the
document
in
some
form
like
I
know,
privacy
interest
group
doesn't
want
to
do
early
reviews.
They
want
to
get
things
that
have
some
level
of
consensus,
so
there's
a
bit
of
a
chicken
and
egg
issue
if
we
were
to
to
wait
to
get
them
all
with
all.
K
That
said,
I
think
if
there
were
going
to
be
early
shipping
of
working
glass,
I
think
with
our
ships.
This
trust
needs
to
accept
the
responsibility
of
possibly
having
to
break
existing
applications.
While
the
working
group
decides
to
revise
the
api-
and
I
think
in
particular,
until
we
get
to
some
clear
level
of
consensus-
that
this
is
the
right
api
shape,
then
you
can't
really
raise
your
card
or,
but
this
is
going
to
to
break
the
web.
I
think
this
is
like
a
two-edged
sword.
I
Yes, well, I just want to say that, on horizontal review, we have this under review by the TAG; we requested a review a couple of months ago, but, you know, the TAG is not super fast when it comes to reviews, so, yeah, we're still waiting for the feedback. And we didn't intend to ship this before adoption; we were requesting adoption as a previous step to us considering shipping this.
G
Trying
to
also
so
on
on
this
similar
topic,
we
could
also
look
at
what
happened
for
webrtc
and
credit
transform.
G
Last
year
in
june,
we
agreed
to
adopt
the
working
draft,
knowing
that
the
space
was
good,
but
the
api
had
some
potential
issues,
then,
if
I'm
not
mistaken,
even
though
the
spec
did
not
reach
first
public
working
draft,
it's
still
not
the
case,
chrome
shipped
the
original
proposal,
and
then
in
november
we
decided
to
change
the
api,
and
now
we
have
a
spec
with
a
new
api.
H
Oh yes, just a comment there: I saw people raising hands, and I want to make sure that anyone who wanted to speak now has an opportunity. I'm actually not sure how to track raised hands in my Google Meet.
L
Hi
this
is
chris
cunningham.
I
I'd
raise
my
hand
briefly
the
point
about
audio
track
reader
not
being
in
what
codex
was
not
just
a
matter
of
incremental
progress
on
that
specification.
It
was
always
a
planned
thing
and
at
the
point
where
what
codex
recognized
the
overlap
with
breakout
box
and
and
the
basically
the
superior
solution,
a
breakout
box,
it
was
removed
from
the
webcodex
spec.
G
Yep
so
last
month
I
had
a
slide
on
my
feedback
for
media
capture
transform,
but
it
was
the
last
slide
of
the
meeting,
so
we
were
not
able
to
to
go
to
it.
So
this
time
I
did
a
few
more
slides
to
describe
a
bit
more
my
feedback,
so
the
first
slide
is
in
the
current
draft.
We
have
a
transform
for
both
audio
and
video
and
for
video.
We
we
know
that
we
need
a
better
solution
than
canvas
we
have.
G
We
have
african
canvas
which
is
sort
of
okay
in
workers,
but
still
we
need
like
to
capture
camera
data.
We
need
something
better
for
audio,
it's
a
solve
issue.
Well,
you
have
audio
worklet,
which
is
working.
Fine
zoom,
for
instance,
is
using
audio
worklet
for
microphone
and
for
remote
audio
rendering,
and
it's
working
well
for
them.
A
I just don't agree with this. AudioWorklet is a synchronous interface which runs bang, bang, bang, bang on the clock. Applications like the NetEQ API that I talked about last month cannot be done in Web Audio, because it does not fit its processing model. So I don't agree that AudioWorklet solves all the cases.
B
Chris
cunningham,
I
know
you've
been
working
with
the
beta
testers
of
web
codex.
Do
you
have
a
comment
on
the
audio
versus
video.
L
We
did,
we
did
basically
hold
the
same
position
as
you,
and
initially
you
know,
like
audio
worklet
seems
like
this
should
work.
We
received
feedback
from
developers
that
kind
of
along
the
lines
of
harold's
comment
that
the
real
time
constraints
of
audio
worklet
mean
that
if
you
do,
if
there's
any
sort
of
cpu
hiccup
or
whatever,
that
you
end
up,
dropping
audio,
which
is
not
a
problem
with
the
breakout
box
proposal.
G
Yeah, I tend to agree. If you look at a typical implementation, you usually use a ring buffer, and this is typically what could be used with AudioWorklet. With the ring buffer you can take like two seconds, three seconds, four seconds, whatever it will allow you; if you want to use less memory, half a second will make it work. So what I'm saying is that AudioWorklet should solve all issues.
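A sketch of that ring-buffer pattern, assuming a SharedArrayBuffer shared between the worklet and whoever drains it; index handling is simplified for brevity.

```ts
// ring-processor.ts (AudioWorklet global scope) - sketch, not a full
// implementation: the worklet only copies samples into a shared ring
// buffer; a worker drains it at its own pace, absorbing CPU hiccups.
class RingBufferCapture extends AudioWorkletProcessor {
  private ring: Float32Array;
  private writeIndex: Uint32Array;
  constructor(options: any) {
    super();
    const { ringSab, indexSab } = options.processorOptions;
    this.ring = new Float32Array(ringSab);       // e.g. 0.5-2 s of samples
    this.writeIndex = new Uint32Array(indexSab); // shared write cursor
  }
  process(inputs: Float32Array[][]): boolean {
    const input = inputs[0]?.[0];
    if (input) {
      const w = Atomics.load(this.writeIndex, 0);
      for (let i = 0; i < input.length; i++) {
        this.ring[(w + i) % this.ring.length] = input[i];
      }
      Atomics.store(this.writeIndex, 0, (w + input.length) % this.ring.length);
    }
    return true; // keep the processor alive
  }
}
registerProcessor("ring-buffer-capture", RingBufferCapture);
```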
G
We might still argue that it might not be as convenient, whatever, but I would go further and say that, if we look at media capture transform versus AudioWorklet: AudioWorklet is more reliable, for sure; it's a high-priority audio thread, while a worker usually is not a high-priority thread; it's usually interactive, or depends on the page status. AudioWorklet is also usually more resilient, because it has been designed so; for instance, there's no synchronous XHR in AudioWorklet, while there is in workers.
G
It's
true
that
there
are
some
things
that
are
that
might
be
missing
in
the
work
like
compared
to
the
black
box
model.
Like
control
signals,
I'm
still
unclear
about
its
use,
it
can
probably
be
extended
or
shimmed-
that's
my
guess
at
least
so.
We
can
probably
discuss
audio
worklight
extensions,
so
the
problem
I
would
make
there
is
that
various
consensus
to
work
on
raw
video
access
and
websites
are
very
excited
about
it,
so
we
should
define
it
and
focus
basically
on
that
v1.
G
The WebKit bug was very simple: there was a bug where the main thread was blocked waiting for some user gesture, and the rendering pipeline, even though it was fully off the main thread, was still posting one-frame tasks from the background thread to the main thread. If the main thread is blocked, then the frames are queued in the task queue, and that's bad.
G
I
don't
think
so
I
can.
I
can
look
at
the
implementation,
but
it's
a
very
usual
mistake
to
so.
For
instance,
I
haven't
looked
at
the
code,
but
I
can
look
at
it.
I
can
do
some
experiments,
that's
needed,
but
there's
a
max
buffer
size
parameter
in
media
stream,
track
breakout
box
and
if
you
check
max
buffer
size
at
in
the
main
thread,
if
you
decide
to
check
max
buffer
size
in
the
main
thread,
then
you
have
this
bag.
I
Media goes to media; it doesn't touch the main thread.
G
The
question
is:
if
the
worker,
where
is
the
run
for
max
buffer
size
check,
for
instance,
is
it
done
in
where
the
readable
stream
lives
or
is.
G
...until the readable stream is transferred. So that's something you should look at, I'm pretty...
G
...sure they have fixed it. So let me just continue this slide, and then you can follow up. What I'm saying is that the current API is difficult to implement without main-thread blocking risk. There's a signal here: Chrome had this issue, apparently, but fixed it recently; Chrome had this issue as well for WebRTC insertable streams, and I don't know if it's fixed or not. Maybe it's fixed, in which case that's good.
G
If you have a maxBufferSize of 16, then you might buffer 16 frames on the main thread; then you will transfer them to the worker. The 16 frames will be somehow transferred, but there will potentially be a time where from 16 you will go to 32, except if you're doing the unqueueing in the thread that is producing the frames, which is a possibility.
G
So
all
of
that
to
say
it's
hard
to
implement,
maybe
you
could
get
it
right,
but
that's
hard
and
the
benefits
are
not
huge.
We
could
do.
You
could
slightly
change
the
api
so
that
implementers
do
not
end
up
with
those
issues
and
we
could
we
would
be
able
to
by
slightly
changes
the
api
define
a
specific
algorithm
that
people
implementers
would
implement
and,
at
the
end
of
the
day,
come
with
an
of
the
main
thread
without
mainstream
blocking
implementation.
G
The example I'm taking is: you implement transferring of a MediaStreamTrack from or to workers. There was consensus at the last interim to investigate this, and this is probably shimmable in Chrome using transferable streams and breakout box. And if you have a MediaStreamTrack in a worker, then we might be able to add an API to expose frames in workers or worklets, and then the processing of MediaStreamTrack content is done in workers or worklets. Next slide.
G
So
I
just
did
an
example
there.
So
the
red
things
are
what
is
missing
in
today's
browsers,
so
you
set
up
a
worker,
you
get
user
media,
you
get
to
track,
and
then
you
do
post
message
track
and
you
put
track
in
the
transferable.
G
Then
you
receive
a
message
where
you
have
a
track
and
you
could
do
like
this
track
and
frame
event
or
you
could
get
a
readable
stream
on
writable
stream
from
it
or
you
can
do
whatever
you
want.
I'm
not
proposing
this
on
frame
event,
even
though
that's
a
one-liner.
G
I
will
do
something
like
some
frames
or
I
don't
know,
but
you
can
do
processing
and
it's
much
easier
to
do
it
there
than
to
have
to
pipe
these
smooth
events
from
the
main
from
the
page
to
the
work
next
slide.
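Roughly, the flow on the slide; the track transfer itself is the hypothetical part, since no browser shipped it at this point.

```ts
// main page - sketch; transferring a MediaStreamTrack is the missing piece.
const worker = new Worker("processor.js");
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getVideoTracks();
worker.postMessage({ track }, [track as any]); // hypothetical: track transfer

// processor.js - once the track lives in the worker, frames can be exposed
// there (e.g. via breakout box) without ever touching the main thread.
self.onmessage = async ({ data: { track } }: any) => {
  const processor = new (self as any).MediaStreamTrackProcessor({ track });
  const reader = processor.readable.getReader();
  for (let r = await reader.read(); !r.done; r = await reader.read()) {
    // ... per-frame processing here ...
    r.value.close();
  }
};
```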
L
This was done, you know, by one engineer in under a week's time. We have confidence from, you know, the code that indeed it does bypass the main thread entirely after the transfer occurs, but we also produced a pretty slick demo, for, you know, a web page.
L
Basically
that
does
the
transfer
and
then
simulates
main
thread
contention
by
busy
looping
on
the
main
window,
and
then
we
verify
the
receipt
of
frames
on
the
worker
and
and
we
we've
demonstrated
that
the
frames
are
not
lost,
that
that
no
amount
of
main
thread
busy
looping
affects
the
delivery
of
frames
to
the
worker,
and
I
think
it
could
be
insightful
for
us
to
share
that
demo
link.
L
Unfortunately,
I'm
in
my
phone
right
now,
but
guido
and
harold
both
have
that
link
from
earlier
discussion
if
they
could
maybe
paste
it
into
the
meeting
chat
so
that
it
could
be
recorded
in
the
minutes
and
just
available
for
everybody.
G
Then you might potentially end up with some issues, because some frames might still be in transit from the camera thread to the main thread, and then at some point, I'm guessing, the main thread will actually send them back to the worker; but there will still be a time where they will be sitting there, waiting for the main thread to actually forward them to the worker. So there are edge cases that you probably will not be able to solve very easily.
A
So, if you are claiming that he implemented something different from what he wrote, then either his writing was bad or his code is bad; but you cannot argue that this algorithm is not actually what was intended to be in the spec. I mean...
G
Action,
I
can
take
an
action
to
actually
follow
up
after
the
meeting.
I
would
mention
a
simple
case,
which
is
you
go
from
main,
so
you
go
from
you
transfer
from
main
page
to
worker
and
then
from
worker
to
back
to
the
main
page,
and
in
that
case,
if
you
are,
I
would
guess
fully
not
going
for
the
worker
and
you
close
the
worker,
your
truck.
We
will
probably
continue
which
is
nice,
but
it's
not
as
per
spec.
A
If
you're
claiming
that,
if
you're
claiming
that,
if
you
transfer
a
stream
back
to
the
from
the
worker,
to
the
main
thread
that
it
will
congest
because
of
operations
on
maintenance,
yes.
G
Maybe I can help there. So maybe I can take an action to write this up precisely after the meeting, and maybe you could try to summarize what you understood; well, I was probably not clear there.
H
I
was
just
going
to
make
a
higher
level
argument
here,
which
is
that
if
we
want
to
avoid
main
thread,
we
should
also
discuss
what
is
the
best
api.
I
mean
we
know
script.
Processor
node
was
a
bad
idea,
so
why
are
we
talking
about
an
api
that
expo?
We
can
say?
Oh
we're,
exposing
it
to
the
main
thread,
but
we're
going
to
optimize
it.
User
agents
are
going
to
optimize
it,
so
that
never
is
on
the
main
thread.
H
One difference, though, is that if we transfer the MediaStreamTrack, we can then later make decisions that say: let's not expose this API at all on the main thread. That's not an option today, where part of the exposure is on the main thread.
G
So, let's continue. I just wanted to point out that the existing APIs are heavily based on MediaStreamTrack: the Web Audio connection, MediaRecorder, the HTML media element, and there the piping between all of these inputs and outputs looks pretty good. If you look at MediaStreamTrack and WebCodecs currently, at least, there's JS that is executed for every audio or video frame, either input or output: readable stream, pipeTo, and so on.
G
Maybe,
but
again
you
might
have
issues
with
with
main
thread
and
if
you
want
to
pipe
the
output
of
web
codec
to
a
video
element,
how
will
you
do
that?
You
probably
need
to
do
that
in
a
worker
and
do
things
so
it
seems
that
there's
a
possibility
to
extend
webconnect
to
support
media
stream
track
as
input
and
output,
and
maybe
that's
something
we
should.
We
should
look
at
next
slide.
G
So
I
just
wanted
to
say:
hey
why
not
taking
encoder,
dot,
encode
and
password
stream
track,
or
why
not?
The
decoder
could
produce
directly
a
track,
and
then
you
could
very
easily
set
it
to
the
source
object
that
doesn't
prevent.
So
this
could
be
done,
maybe
in
the
main
thread,
for
instance,
if
we
really
want
to
without
issues
of
having
very
big
objects
being
stuck
in
the
main
thread,
so
that
could
be
a
possibility.
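None of this exists; purely a sketch of the shape being floated here, with WebCodecs consuming and producing tracks directly.

```ts
// Hypothetical sketch only: encoder taking a track as input, decoder
// producing one as output, so the heavy per-frame objects never have to
// sit on the main thread.
declare const cameraTrack: MediaStreamTrack;
declare const videoElement: HTMLVideoElement;
declare function sendChunk(chunk: EncodedVideoChunk): void;

const encoder = new VideoEncoder({ output: sendChunk, error: console.error });
(encoder as any).encode(cameraTrack); // hypothetical: track as input

const decoder = new VideoDecoder({ output: () => {}, error: console.error });
videoElement.srcObject =
  new MediaStream([(decoder as any).track]); // hypothetical: track as output
```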
L
I want to say that, having implemented WebCodecs as it is defined, with access to every frame, and having heard feedback from origin trial participants, and having discussed with the people who own Blink and V8, you know, the JavaScript timing requirements that WebCodecs is going to need to be performant in this regard...
L
This
is,
as
far
as
I
can
tell
not
an
issue
actually
that
doing
codec
io
on
the
main
thread
in
a
worker
in
particular.
Let's
say
every
10
milliseconds
is
is
is
just
fine,
and
the
additional
point
I
wanted
to
make
is
that
the
suggestion
to
use
the
media
stream
track
instead
of
every
frame
works.
Fine,
if
you
want
to
just
do
straight
pass-through
from
the
user's
camera,
let's
say
to
the
encoder,
but
actually
most
users
that
we've
talked
to
don't
want
to
do
just
straight
pass-through
they
want
to.
L
Instead, they want to do things like background blur and background replacement, or add funny hats, and these sorts of things, for which having access on a per-frame basis is important; and then they make their modification, and then they feed that modified frame into the encoder. Of course, you could do things like modify the frame and then feed it back into a MediaStreamTrack, which then goes into the encoder, but this seems like a lot of hoops to jump through, and the sort-of argument that you touch less JavaScript this way is definitely false.
L
We are supportive of still, you know, pursuing the MediaStreamTrackGenerator as a way to render media; but also, most of the folks that we've spoken to in the origin trial prefer the lower-level primitives of rendering their frames themselves, via canvas, via AudioWorklet. This is a more powerful, and actually kind of more intuitive, way to go about it for folks who have, you know, spent a decade in video stacks and that sort of thing.
G
Yeah,
I
agree
with
that.
I
think
that
in
general,
it's
perfectly
fine
to
have
like
per
frame
processing
in
worklets
or
in
workers.
What
I
was
meaning
there
is
that
if
somebody
wants
to
do
the
easy
path
and
wants
a
main
thread
api,
then
media
stream
track
creation
or
piping
2
is
probably
fine
and
it's
probably
better
than
readable
stream.
H
Sorry
that
that
was
my
hand
from
earlier.
My
only
comment
would
be
that
if
you're
gonna
have
funny
hats
in
the
web
conference,
you
probably
want
a
selfie
too,
though,
so
you
know
what
you're
looking
like.
So
maybe
the
local
real-time
case
isn't
that
different
from
encoding
to
to
send.
L
But
even
your
selfie
would
need
the
modified
hat
and
so
you're
still
accessing
your
frame
on
a
per
frame
basis,
and
you
know
again
my
point
stand,
I
think
about.
Yes,
you
could
feed
that
back
into
a
track
for
the
sake
of
like
having
the
video
element,
render
it
for
you,
but,
but
why?
You
know
why
not
just
render
yourself
to
canvas
directly.
G
Well,
if
you,
if
there
are
some
so
I'm
saying
both
options
are
fine.
If
you
look
peer
connection
peer
connection,
you
need
a
track
so
there,
why
not
having
something
that
is
easy
to
use
and
that
would
not
require
javascript
to
be
executed,
and
that
would
be
easy
to
implement
by
the
user
agent
anyway.
But
let's
go.
We
need
to
keep.
G
Sorry,
contrary
channel
yeah
in
the
interest
of
time,
I
will
be
quick.
I
looked
at
the
controlling
channel
and
I
did
not
really
understand
everything,
I'm
guessing
that
it's
something
that
is
still
under
design,
which
is
fine.
G
I
I
provided
here
some
potential
issues
with
the
current
api,
based
on
my
understanding
of
what
this
api
is
trying
to
solve.
I'm
not
quite
sure
I
understood
precisely
everything
it
was
trying
to
solve,
because
the
details
in
the
spec
are
it's
not
very
detailed.
I
guess,
of
course,
with
additional
work.
It
could
be
detailed
server,
but.
G
Yeah, I just wanted to mention another small issue, with muted and enabled tracks, for instance. So the signals that we're mentioning, like muted, and there may be other signals as well: we already have something in MediaStreamTrack, so it feels like we need more work there for signals. That's my understanding, that's my current feeling. Next slide.
G
Yeah,
let's
keep
this
one.
I
want
to
go
to
the
end
anyway.
So
tentative
summary
from
my
current
reading.
A
real
video
media
access
api
is
a
great
idea.
G
We
are
seeing
people
that
really
want
it
audio
from
what
I'm
hearing
there's
no
consensus,
currently
that
it's
needed
or
not
needed,
but
that's
something
we
we
need
to
continue
debating.
My
understanding
is
that
the
liberty
working
group
is
the
right
place
to
do
this
work,
because
there
is
knowledge
in
that
era.
So
that's
good.
The
current
proposal
from
google
needs
more
work.
That's
my
understanding.
G
I
would
tend
to
think
that
we
should
focus
on
video
and
core
functionality
for
v1
and
in
parallel
document,
additional
requirements
and
use
cases,
for
instance,
audio
controlling
signals
and
so
on.
I
really
think
we
should
fix
the
other
mainframe
thing
before
it
to
me.
It
seems
like
a
requirement.
G
So
that's
it
for
me,
I'm
guessing
that
somebody
wants
to
say
something
before
we
go
back
to
the
working
loop
adoption
slide.
H
Yeah, I would say I agree with Youenn. I added one slide just to show that I think his main concerns, making sure that we don't duplicate work with AudioWorklet and that we keep things off the main thread, are both valid. And I also hear the camp that wants streams, but I think these can be done orthogonally.
H
So
in
interest
of
adding
that
that
nuance,
I
I
modified
some
of
you
in
earlier
examples
by
adding
streams,
primarily
for
back
pressure.
I
just
want
to
add
that
back
pressure
doesn't
necessarily
mean
buffering,
and
I
believe
that,
even
in
real
time
you
might
want
to
have
some
kind
of
back
pressure
signal
in
order
to
implement
your
strategy
strategy,
whether
that's
buffering
or
dropping
frames.
H
So
I
hope,
hopefully
that's
something
we
can
keep
in
mind,
and
maybe
I
can
bring
parties
more
together
and
that
so
I'm
showing
some
examples
here-
and
I
think
the
rub
here-
is
that
we're
in
a
tricky
spot,
where
we
we're
dealing
with
very
high
priority
real
time
on
one
end,
but
we're
it's
bleeding
into
use.
H
Cases
where
we
might
want
to
do
things
like
take
real-time
microphone,
audio
and
encoding
it
and
sending
it
through,
for
example,
web
transport,
which
is
the
first
example
here,
and
in
that
case,
being
able
to
know
to
the
extent
that
audio
worklets
callbacks
are
synchronous.
H
Otherwise
that's
going
to
leave
a
lot
of
up
to
the
javascript
application,
to
figure
out
that,
if
it's
not
done
in
time,
it's
going
to
get
called
again.
It's
going
to
need
to
use
state
a
lot
of
things
in
order
to
not
potentially
explode
cpu
and
memory
usage.
That
kind
of
things.
B
But, you know, there is the issue of changing the encoder rate, so I don't know that you can really do a pipe-through like that that'll get all the way to WebCodecs, right? You kind of have to feed the rate, the throughput of WebCodecs and WebTransport, back into the WebCodecs encoder rate.
H
All
right
well
as
far
as
control
signals
that
might
come
up
there.
Yes,
the
idea
using
pipe,
you
don't
have
to
use
pipe2,
but
the
we
had
to
solve
this
with
datagrams
and
web
transport,
which
is
the
closest
we
have
to
real-time
networking
and
what
we
ended
up
with
was
that
instead,
you
know
the
earlier
proposals
we
had.
H
We
had
javascript
just
hammering
on
the
right
function
in
the
loop
in
order
to
fill
the
underlying
syncs
buffers
in
order
to
send
datagrams,
and
then
the
implementation
would
drop
as
it
saw
fit.
That
was
a
bad
design.
We
ended
up
with
using
back
pressure,
and
but
it
only
it
assumes
that
the
producer
is
going
to
not
just
call
writer
dot
write
but
also
await
writer
dot,
ready.
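A sketch of that pattern with WebTransport datagrams, assuming the `datagrams.writable` shape being discussed at the time:

```ts
// Backpressure-aware producer: await writer.ready before each write,
// instead of hammering write() in a loop and letting the UA drop blindly.
const transport = new WebTransport("https://example.com:4433/audio");
await transport.ready;
const writer = (transport as any).datagrams.writable.getWriter();

async function sendDatagram(payload: Uint8Array) {
  await writer.ready;    // the backpressure signal
  writer.write(payload); // safe to write now; the drop/buffer strategy is ours
}
```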
A
Avoiding... we have had now one and a half hours, and we're like one-third through the agenda. Yeah, sure, that's...
A
I
mean
I
mean
the
adoption,
okay,
because
all
the
points
that
I
believe
that
I
have
a
strong
opinion
on
almost
all
the
points
raised,
and
I
it's
clear
that
I
don't
agree
with
the
people
who
have
raised
them
so
we'll
have
to
discuss
this
further.
We
don't
have
time
on
the
two
hour
two
hour
call
for
us.
B
All right, so please put that in the minutes: we will take this discussion to the list.
G
Sounds good. Okay, getting to WebRTC encoded transform, and hopefully it will not be very long. So, just as a recap: the initial proposal, when the draft was adopted, was createEncodedStreams, which makes it easier to do things in the main thread than off the main thread, and the notion of a transform was not very clear as well. So the working group consensus was to move to a transform model and to define a script transform and an SFrame transform, with off-main-thread processing by default.
G
That's
what
is
now
in
the
spec
and
that's
what
is
available
available
behind
a
flag
in
safari
as
well,
but
we
still
have
in
the
spec
and
also
in
chrome
the
create
and
collect
streams,
api
which
is
roughly
doing
the
same.
It's
the
same
purpose
right.
It's
trying
to
solve
the
same
purpose,
so
my
proposal
would
be
to
remove,
create
and
code
streams
from
the
specification
as
a
way
to
complete
the
shift
to
a
transform
model
and
as
a
bonus.
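For reference, the two shapes side by side; `createEncodedStreams` is the legacy Chrome API, and `RTCRtpScriptTransform` is the adopted transform model that keeps the processing in a worker. A sketch, assuming a `processChunk` transform supplied by the application:

```ts
declare const sender: RTCRtpSender;
declare const worker: Worker;
declare function processChunk(
  chunk: any, controller: TransformStreamDefaultController): void;

// Legacy shape (proposed for removal): main-thread streams on the sender.
const { readable, writable } = (sender as any).createEncodedStreams();
readable
  .pipeThrough(new TransformStream({ transform: processChunk }))
  .pipeTo(writable);

// Transform model: attach a worker-side transform to the sender.
(sender as any).transform =
  new (self as any).RTCRtpScriptTransform(worker, { side: "send" });
// worker.ts:
// onrtctransform = ({ transformer }) =>
//   transformer.readable
//     .pipeThrough(new TransformStream({ transform: processChunk }))
//     .pipeTo(transformer.writable);
```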
A
...there are use cases that don't fit in the transform model.
A
I
have
tried
to
raise
some
of
those
earlier,
but-
and
there
are
also
use
cases
that
that
I
don't,
I
think,
don't
fit
well
with
the
with
with
the
idea
that
the
only
thing
that
you
can
send
send
your
streams
off
to
is
is
is
a
is
a
worker.
G
So I asked for use cases and scenarios in the past, and I haven't really seen them, so it's difficult to make progress there. So, Harald, how can we make progress there? Because that's slowing down the specification, and we certainly do not want to keep both; I think we all agree on that.
G
I
would
be
more
in
favor
of
improving
the
script
transform
when
we,
when
we
have
the
use
cases
that
you're
mentioning
then
keeping
the
create
and
credit
streams
which
is
which
has
known
issues.
G
You
can
do
that
very
easily
in
in
the
worker.
You
can
even
post
message
the
chunks
if
you
want
to
do
that
in
the
main
thread
as.
H
Well,
yeah
y'all
never
hear
I'm
generally
supportive
of
un's
proposal.
Here
I
don't
think
we
should
have
two
apis
to
do
the
same
thing.
I'm
also
concerned
about
main
threat
exposure,
so
this
is
a
plus
one
for
me.
E
Other
comments,
I'm
kind
of
in
in
to
punch
pantone
here
and
kind
of
of
the
view
that,
if
we're
introducing
a
new
api,
we
should
introduce
them
one
at
a
time
introducing
two
related
apis
that
are
kind
of
45
degrees
to
each
other.
They're,
not
even
orthogonal,
is,
is
going
to
be
super
confusing
to
to
web
developers
who
again
wander
into
this.
I
I
I
genuinely
wouldn't
know
which
of
these
to
pick
up
for
a
given
problem
so
giving
people
the
option
is
kind
of
making
a
possibility
for
many
mistakes.
I
think.
G
So
one
option
could
be
to
separate
the
two
purple
like
in
two
different
specs.
E
I
think,
if
we're
doing
that,
we
need
to
be
clear
about
which
use
cases
which
ones
are
solving
and
then
that
allows
people
to
say,
hey
well.
My
use
case
looks
like
this.
I
should
be
using
this.
This
api
and
my
use
case
looks
like
that,
and
I
should
be
using
that
api.
If
we
don't
do
that,
then
people
are
just
going
to
get
super
confused.
Well,
I'm
confused
already
so.
G
Yeah,
I
agree
that
use
cases
for
main
thread
access
would
be
good
to
have.
H
And,
and
almost
regardless
I
mean,
if
we're
exploring
things
like
transferring
the
stream
track,
it
seems
like
if
you
really
wanted
main
thread
and
we
had
transferable
media
stream
track
you
could
you
could
do
that.
B
Well, I would like to talk through the use cases again, because I do agree with you, Harald, that there may be some that can only be handled via the encoded streams.
G
We discussed that in the past, and the agreement was to get use cases for main-thread access, and so far I haven't seen use cases that could not be fulfilled by the script transform. And we know that off-the-main-thread is good, and that most people like it, so we have advantages there for sure, and we don't know of any use case that cannot be fulfilled with the script transform so far.
G
So
if
they
are,
I'm
fine
looking
at
them
and
discussing
them
but
haven't
looked,
I
haven't
gotten
any
right
now
until
now,.
K
Yeah,
my
sense
is
that
I
mean
the
design
point
that
we
shouldn't
have
two
api
is
very
similar
to
the
you
know
very
similar
things
with
like
a
complex
caveat
about
whether
it's
on
thread
of
thread.
I
think
that
points
toward
removing
the
duplicate
api,
no
matter
what
and
if
the
main
thread
use
case
do
emerge
and
cannot
be
solved
with
the
current
design.
Then
we
should
have
that
new
conversation,
but
the
fact
that
we
came
up
initially
with
this
and
now
have
a
better
proposal
for
of
thread
discussion.
G
Yeah, I think we need to skip this one; I think we should go to the next items.
B
We'll hand that over to you, Jan-Ivar.
H
All
right,
so
this
is
get
viewport
media
just
want
to.
I
wasn't
sure
that
I
would
have
time
to
present
this
today,
so
lights
are
going
to
be
short,
some
good
news.
I
presented
some
good
news
last
time
too,
so
this
is
an
incremental
update.
I
I
believe
we
have
mostly
agreement
about
from
the
the
main
interested
parties,
so
we
could
hopefully
have
consensus
as
well
of
only
yeah,
so
background
get
report
media.
Since
we
have
a
larger
audience,
it's
a
it's.
H
An
alternative
to
get
display
media
get
display,
media
is
screen
sharing
of
whatever
the
user
picks
get
viewport
media
is
narrowly
defined.
The
problem
would
get
display
media
was,
it
has
specific
security
risks
and
capturing
web
services
that
could
be
used
to
attack
the
same
origin
policy
and
that
kind
of
stuff
so
get
viewport.
Media,
in
a
nutshell,
is
a
new
api.
H
That
web
pages
can
basically
capture
themselves
quote
unquote
and
in
order
for
that,
a
lot
of
the
same
security
risks
would
exist
if
we
allowed
pages
with
iframes
across
origin
content
that
could
be
exposed,
including
rendered
content
and
rendered
iframes
with
information
that
was
populated
with
your
local
cookies
for
all
kinds
of
websites.
So
we
all
agreed
now
that
we
need
to
secure
this
api
behind
site
isolation.
H
It
means
only
expose
this
get
viewport
media
method
if
you
have
window.cross
origin
isolated,
which
is
the
co-op
plus
co-op
security
model.
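As a sketch: `crossOriginIsolated` is a real property, while getViewportMedia itself is the proposal here, not a shipped API.

```ts
// The method would simply be absent unless the page is cross-origin
// isolated via the COOP + COEP headers.
if (window.crossOriginIsolated &&
    "getViewportMedia" in navigator.mediaDevices) {
  const stream =
    await (navigator.mediaDevices as any).getViewportMedia({ video: true });
  // the page is now capturing its own viewport
}
```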
H
So
I
think
there's
agreement
now
with
on
that,
and
we've
also
agreed
that
resources
other
than
iframes
are
on
their
own,
because
they
were
would
be
vulnerable
to
spectre
attacks
anyway,
if
they're
rendered
into
the
process
and
also
earlier,
we
discussed
failure
modes
and
there's
also
agreement
that,
rather
than
dropping
capture,
it
generalizes
better
to
just
block
loading
entirely
of
frames
that
haven't
opted
in.
H
So
what
the
so
that's
all
great,
but
we
still
need
to
agree
on
the
shape
of
the.
We
need
an
additional
opt-in
header,
opt-in
to
capture
the
documents
that
are
in
iframes
can
specify
to
say
they
agreed
to
be
captured
and
there
was
another
proposal
there.
There
was
the
we
found
some
problems
with.
There
were
two
proposals.
One
was
to
add
an
html-capture
header
attribute
to
a
to
a
co-ep
as
part
of
a
co-op
require
corp.
The
the
problem
is
that
co-op,
which
is
the
embedder
policy.
H
Now, html-capture is a less safe, in some respects, model, and that can be confusing, because you're now opting into riskier, not safer, profiles; and Chrome security also pointed out that it would probably require a fetch metadata request header as well, a fetch COEP or something like that, and it got complicated.
H
So
the
proposal
from
chrome
security
was
to
introduce
a
an
off
by
default
document
policy
instead,
and
that
would
let
us
have
that
seems
cleaner
in
that
the
top-level
level
document
could
then
have
a
required
document
policy
of
html
capture,
meaning
that
any
iframe
for
any
iframe
to
load.
Those
iframe
documents
must
say
document
policy
html
capture,
so
they
they
often
otherwise
they
will
not
load
and
that
this
avoids
having
to
define
separate
fetch
metadata
request
headers.
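As a sketch of the header exchange (names follow the Document Policy incubator draft plus the html-capture proposal discussed here, so they may change):

```
# Top-level document's response: demand the opt-in from every embedded frame
Require-Document-Policy: html-capture

# Each embedded iframe document's response: opt in, or it will not load
Document-Policy: html-capture
```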
H
So
that's
the.
I
think
the
proposal
on
this
questions
thoughts
on
any
of
this,
so
we
can
figure
out
if
we
have.
F
I
think
there
are
no
questions.
I
think
we
can
continue.
H
Well, so, on the items before that: I think it sounds like we have consensus on the last proposal. I think Mozilla is favorable; the only issue we have, unfortunately, is that document policy hasn't been fully adopted yet, which is why it's an incubator spec at the moment. So with that caveat in mind, I think, if we can resolve that internally and get an updated position on document policy, it seems like a reasonable approach.
F
Yeah,
I
would
also
like
to
bike,
so
I
don't
html
this
capture,
but
in
the
interest
of
time
I
would
prefer
to
do
it
separately.
Also,
it's
not
going
to
be
the
end
of
the
world.
For
me,
if
it's
called
html
capture
names
are
everybody.
All
names
are
bad.
H
Yeah,
fair
enough,
I
think
we're
getting
consensus
on
direction
here,
so
I
just
wanted
to
add
some
slides
about
requiring
there's
still
a
problem
of
sites
even
on
same
monitoring
sites
that
they
might
be
able
to
harvest
user
information
from
things
like
browser,
history
from
link
purpling
and
that
kind
of
stuff
forms
and
the
so
there's
still
going
to
be
a
permission.
H
Gating
on
this
feature
and
we're
hoping
to
require
a
lot
of
the
same
protections
that
get
display.
Media
has
use
a
gesture
and
not
storing
a
granted
permission,
having
required
privacy
indicators
and
additionally
require
permission
to
use,
explain
to
the
user
the
risks
and
also
probably
disallow
things
like
iframe.contentwindow
get
the
get
viewport
media.
H
Sorry,
it's
an
old
typo,
their
previous
name
and
there's
also
a
risk
of,
even
though
we
want
to
let
user
agents
give
the
option
to
sanitize
information.
That's
a
double-edged
sword,
because
if
you
sanitize
about
way
important
extensions
like
ad
blockers,
we
could
risk.
We
don't
want
to
accidentally
create
an
ad
blocker
blocker
mode
and
the
last
one
is
there
was
some
contention
around
originally.
We
said
we
should
mute
pause.
H
While
the
document
was
not
visible,
I
know
security
concerns,
because
the
site
could
then
go
to
town
and
show
all
kinds
of
links
and
rendering
patterns
to
try
to
deduce
large
amounts
of
user
information.
H
So there are decisions we can make there; maybe the tab needs to be visible at the point that you're requesting permission, but if you want to go to a different site and look at a Wikipedia page while you're presenting, maybe the capture shouldn't stop. And for reference, I think both Chrome and Firefox, behind a pref, allow capture of background tabs at the moment in getDisplayMedia. So there are also some use cases there, so this is still unclear.
H
Also,
there's
an
issue
about
cropping
and
there's
a
proposal
here
to
basically,
instead
of
generating
scripts
and
tracks
instead
of
generating
media.
That
needs
cropping,
maybe
add
tools
build
into
the
capturing
api
itself
so
that
they
can
narrowly
specify
something
that
needs
to
be
captured
so
that
you
don't
need
to
crop
it
later.
H
And
so
this
is
an
iteration
of
what
I've
proposed
in
the
past,
which
is
that
we
captured
the
intersection
of
the
viewport
of
the
top
level
browsing
context.
H
Active
document
and
the
document
say
iframe
that
is
requesting
the
capture
for
the
smallest
visible
surface,
which
is
beneficial
because
we
end
up
only
capturing
what
the
user
sees
rather
than
the
entire
page
or
the
entire
iframe,
and
the
that
also
means
that
you
capture
all
everything
that
might
be
on
top
of
that
iframe.
So
it's
really
just
a
clipping
api,
so
it
doesn't
totally
solve
that.
H
The
an
iframe
can
capture
its
parent,
but
it
probably
does
it
in,
except
in
odd
8,
in
odd
edge
cases
where
the
top
level
page
would
purposely
overlay
something
on
the
iframe
and
we've
also
discussed
transferring
media
stream
track.
F
So I would like, if we could stay on the same one before we go here, just for a second. Two points were raised which I am not so sure about. First of all: I don't think that there's any problem with capturing the top-level page from an embedded page, because this permission is actually delegated, right? We've got the html-capture, which we wish to rename...
F
So,
of
course,
that's
part
of
it,
and
second,
is
that,
even
if
you
only
allow
the
iframe,
if
you
click
to
the
iframe
itself,
the
top
level
could
still
draw
something
above
it
so
kind
of
teaching
the
web
developer,
that
it's
okay,
you're
just
capturing
the
iframe.
That
is
a
misleading
lesson
that
could
backfire,
if
only
understood
after
things
hit
production.
So
I
think
that
this
could
be
confusing.
F
Additionally,
if
we
could
go
to
the
next
slide,
I
think
that
there
is
a
typical
use
case
that
we're
all
experiencing
almost
experiencing
right
now,
we're
all
together
in
a
vc
and
we're
all
watching,
slides,
present
being
presented
to
us,
and
I
think
that
there
is
a
reason
why
there
is
a
sense
in
structuring
an
application
so
that
the
presentation
software
would
be
the
top
level
under
certain
circumstances.
F
And
then
you've
got
the
vc
pane
on
the
side,
which
is
its
own
frame.
And
when
that
is
the
case,
there
is
one
reason
why
you
would
still
want
to
call
get
viewport
media
or
get
display
media
from
the
iframe
of
the
vc.
And
that
is
that
you
can
then
use
restrict
on
video,
and
you
can
then
immediately
get
rid
of
all
of
the
audio
that's
coming
in
through
the
vc
and
your.
F
So I think that there is sense there; and also there is the general objection that making things transferable is not as easy as it might initially appear. So these are the things I wanted to raise on that topic.
F
So restrictOwnAudio, as it is currently specified, says that the document that calls this captures the entire surface, including audio, but does not capture the audio that it itself is producing. I think that Henrik Boström is here, and I think that he was involved in originally specifying this, so maybe he could give a better summary of what it says.
F
So if the VC iframe in this case were to call this, then it would intend to supply this constraint, and it would mean that only the audio from the VC would get stripped away from the capture. Am I correct?
J
It
I
think
it
would
would
remove
any
audio
from
so
I'm
confused
because
you're,
you
would
be
capturing
a
part
of
the
same
page.
So
by
definition,
I
think
this
constraint
would
just
remove
any
audio.
J
If we actually want to capture a part of a page, then it might become relevant which audio is muted.
F
So
I
I
think
this
kind
of
comes
before
we
even
decide
about
capturing
a
part
of
the
page
right
like
because
in
the
way,
the
way
I
see
it,
we
crop
the
video
later
right,
but
suppose
that
we
weren't
cropping
anything.
If
you're
calling
this
from
a
side
from
an
iframe,
it's,
I
think
it
only
should
only
remove
the
audio
that
comes
from
that
very
iframe
or
its
descendants.
F
I
yes,
I
think
so
that
would
be
the
okay
and-
and
if
that
is
the
case,
then
my
argument
stands
that
it
makes
sense
for
a
vc
application
to
say:
hey,
I'm
capturing
all
of
the
tab
or
the
window
or
the
screen,
but
I'm
capturing
all
of
the
tab,
but
I'm
just
removing
the
audio
that
I
am
producing
myself,
and
that
leaves
us
only
with
the
media
content
that
comes
from
the
presentation,
which
is
exactly
what
we
want.
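A sketch of that call, assuming restrictOwnAudio as described in the screen-capture draft (not implemented in any browser at this point, per the discussion below):

```ts
// VC iframe captures the whole surface but strips the audio it itself
// produces, leaving only the presentation's media.
const stream = await navigator.mediaDevices.getDisplayMedia({
  video: true,
  audio: { restrictOwnAudio: true } as any,
});
```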
H
At the same time, I think in earlier slides, for cropping, I proposed that, by being more targeted in what we capture for video, we avoid cropping; and I would say the same thing here: by being more targeted about what audio we're capturing in the first place, we remove the need for exclusions like restrictOwnAudio. But I think Tim Panton's hand is raised.
E
Yeah
just
kind
of
saying
that
this
illustrates
the
differences
between
video
and
audio
quite
starkly
and
and
it's
the
sort
of
this
is
the
sort
of
problem
that
I
would
choose
to
solve
in
in
web
audio,
rather
than
to
push
it
down
into
the
user
agent
like
specifically,
which
tabs
audio
goes.
Where
is
the
sort
of
thing
that
exactly
matches
to
what
you
can
do
well
in
web
audio,
and
I
think,
I'm
kind
of
disinclined
to
make
it
a
spec
item?
But
I
guess
that's
not
my
problem
really.
H
And
yeah,
I'm
glad
henrik
is
here,
because
I
also
struggle
with
I'm
pretty
sure
that
when
we
defined
restrict
on
audio,
we
weren't
even
thinking
about
iframes.
So
I
think
the
intent
there
was
good.
The
intent
was
always
to
allow
an
application
like
this
to
capture
things
other
than
itself,
and
I
think
we
can
we
can
iterate
on
if
that
ends
up
not
working
correctly
for
iframes.
I
think
that's
something
we
can
iterate
on,
but
it
seems
specific
to.
F
...the issue. I mean, we could always change restrictOwnAudio; I don't think it's implemented in any browser yet anyway, so it is a limited argument that I'm producing here.
J
Yeah,
I
I
just
remember
from
the
discussion
that,
like
the
intent
was
to
not
capture
audio,
if
that
would
produce
echo.
But
when
we
talked
about
this
constraints
it
seemed
very
platform
specific
and
source
specific.
Whether
or
not
you
would
be
able
to
distinguish
different
parts
producing
audio.
So
we
added
a
class
saying
that,
if
you're
not
able
to
adequately
remove
the
undesired
audio,
then
the
user
agent
should
just
mute
the
stream
entirely
so
the
whole.
The
whole
problem
with
iframes,
I
think,
is
not
fully
explored.
F
Okay,
so
I
see
that
we're
somebody
is
navigating
forward
and
but
we're
at
time
I'm
happy
to
stay,
but
I'm
not
sure
I
don't
think
other
people
would
be.
H
So, just on the slides we just covered: I didn't hear any major objections to this overall direction, other than, on the last slide, the restrictOwnAudio point from Elad. Is that a fair assessment?
F
Yes,
but
we
so
if
you
could
go
one
slide
above
then
I
I
think
that
we,
I'm
sorry
one
slide
before
earlier,
not
later
this
saying
that
we
will
only
capture
the
part,
the
iframe
I
understand,
let's
avoid
the
iframes
intersection,
etc.
Basically,
let's
assume
for
simplicity
that
all
of
the
iframe
is
visible,
so
you're
only
capturing
the
iframe
and
whatever
the
top
level
document
happened
to
have
drawn
on
top
of
it.
B
All right, I think we're done for this month. Please, everyone, fill in the Doodle poll for next month, so we can select a day and time. I think we might probably have to go two hours next month as well.
L
I'll post it in the IRC chat.