From YouTube: WEBRTC WG interim 2022-01-18
Description
See also the minutes of the meeting at https://www.w3.org/2022/01/18-webrtc-minutes.html
02:17 Recent CfAs and CfCs
03:23 Agenda review
03:42 WebRTC NV Use Cases
30:26 Media Capture Transform
41:33 WebRTC Extensions
51:41 Capture Handle
A: Today we will cover the WebRTC NV use cases, Media Capture Transform, a little bit of encoder transform, and Capture Handle. The next meetings are on February 15th and March 15th, both at 8 a.m. Pacific; the February meeting will be 90 minutes and the March one 120.
A: Okay, the slides have been published on the wiki. We do need a scribe to write down the decisions that we make; do we have a volunteer?
B: I can scribe after the first section, but I think it would be tricky to scribe for the first half hour.
A: We are using +q and -q in the Google Meet chat to get into and out of the speaker queue, so please use those to manage the queue. Also, please use headphones, and tell us who you are so we can keep that in the minutes.
A: I don't think we'll have Paul's today. A note on understanding document status: just because something is in a repo doesn't mean it has been adopted; we need a call for adoption. We'll talk about some of those.
A: And the editors' drafts don't represent working group consensus, but the working group drafts generally do. Okay, so here's a summary of some of the recent calls for adoption and calls for consensus.
A: We had a discussion at the December 21 interim; Tim submitted a PR, which we merged, to address some of the feedback. We're going to talk more about that today and hopefully get a little bit closer to convergence. We had a CfA on Region Capture, which concluded on December 13th; the spec is now in the W3C archive, so we're done with that. And we will talk a little bit about a CfC on PR 125, which was for the WebRTC encoder transform, the keyframe API.
A: We'll talk more about that; there was some concern about race conditions and syncing. And then we have a CfC on an intent to discontinue the Media Capture Depth Stream Extensions, which will conclude on the 26th, so it's not done yet. Please express your opinion; I think I saw one or two more opinions come in, but please send us your thoughts on that. All right.
A: So here's the agenda for today. We're going to go to Tim and Kegan on WebRTC NV use cases, then Harald will talk about Media Capture Transform, I'll just have this one item about encoder transform, and then we'll turn it over to Elad for Capture Handle. All right: Tim and Kegan.
B: So yeah, just very briefly: I've updated the NV use cases with a pull request, which has now been merged. I've tried to reflect the discussion, and I would very much appreciate it if people could go in and look at the language that I've tightened.
B: ...and make further suggestions. I've removed the requirements; I think we'll end up with new requirements that reflect the discussion that we had, but for the moment there are a couple of placeholders in there. So basically, if you can go in and look at that PR and comment on it, I'd really appreciate it. And then the other thing that we talked about last time was: what did decentralized messaging mean? What did we need?
B: What did the people who were trying to do that actually need? We came to the conclusion that none of us really knew, but we thought there was maybe something there, and that understanding what the minimal acceptable changes were was the thing we didn't really have a grasp on. So I invited Kegan to come and give us some expert opinions, given that he has basically tried to do this and encountered whatever the problems are. So I'd like to give most of this time to Kegan's experience, and then a discussion around what we may have learned from it and what we could potentially put into the document. Huge thanks to him for agreeing to come and spend some time helping us. So, over to Kegan.
D: Okay, hello, my name is Kegan. I'm a software engineer working at Matrix. I'm going to provide a little bit of background on Matrix first, for context, and then go into more detail about our use case with WebRTC and service workers. So, Matrix is an open-source project for secure, decentralized communication.
D: The Matrix.org Foundation maintains a protocol specification, similar to the HTML Living Standard, along with client and server implementations available on GitHub. The network is federated and has similarities with email and XMPP. Voice, video, and messaging in the browser use WebRTC already, and Matrix handles the signaling for that.
D: One of the problems Matrix has is that while it's trying to be decentralized, in practice most users will actually sign up to one or two main servers, for example matrix.org. That main server has resulted in a more centralized network than we would like. So one of our goals is to instead allow users to automatically sign up onto the network just by visiting a website, effectively making the service peer-to-peer.
D: Can we go to the next slide, please? Thank you. So, some early browser experiments with peer-to-peer Matrix were done in 2020.
D: Our core idea was to compile a Matrix server down to WebAssembly and then run it in a worker thread in the browser. We weren't the first people to try to do peer-to-peer things in browsers, so we made use of Protocol Labs' libp2p set of libraries, which is also used in services like IPFS.

Unfortunately, we couldn't get WebRTC working in service workers, so we ended up using WebSockets to a relay server, which not only handled signaling but also handled all the data packets. So no data went directly between the browsers, which isn't very peer-to-peer at all, because if the relay server dies, any existing sessions can't exchange messages at all. But it was the best we could do at the time.
D: Can we go to the next slide, please? Thank you. In 2021 we made our own overlay network protocol called Pinecone, which works on a range of devices over a range of transports, including Bluetooth and WebSockets.
D: We're currently limited in terms of which peer connections can be established from a browser, as we can only really speak directly to directly addressable nodes; in this example, that's dendrite.matrix.org. We can't speak directly to the iPhone and Android nodes in this graph from the browser, because of the lack of WebRTC support in workers.
D: Can we go to the next slide, please? Thank you. So we have a few requirements for any potential solution that's going to work for us. We need to run the server in a worker, be it web or service, as we do a lot of compute-heavy tasks which would otherwise affect the UI. We'd like workers to live as long as reasonably possible, as having the server alive longer results in less network churn and creates a more stable p2p network. And we need to intercept fetch requests from the browser.
D: So, to clarify a few things: we don't have any desire to run a long-lived process on a user's browser; we see this as an anti-goal and a privacy invasion. In our experiments with service workers, there are a few inconsistencies between browsers anyway on how long the service worker lifecycle is. Firefox seems to terminate service workers after 30 seconds of inactivity if you're not intercepting any fetch requests, while Chrome keeps the service worker alive much longer, possibly until memory pressure; I'm not entirely sure. In addition, we can't really use fetch as an alternative to actual WebRTC, as we need the ability to discover and communicate with nodes on the local network.
D: Now the final slide, please. Thank you. So that hopefully gives enough context to understand our specific use case and the proposals listed here. I think the most acceptable proposal is probably B or C, though I'm not sure if this committee has the remit to support intercepting fetch requests in web workers, or anything like that.
D: I don't know how difficult it is to allow web workers to create WebRTC connections. I'd envisage that the lifetime of the connection is still tied to the lifetime of the tab, but effectively the WebRTC APIs would now need to be thread-safe, as the public API could be called from the main window and web workers simultaneously. If and when changes are made, we'll probably use off-the-shelf libraries provided by libp2p just to try it out and verify that our use case has been handled. And that's basically it.
D: Yes, so that's effectively proposals C and D: actually letting workers make WebRTC connections, or control data channels. Making new connections would be preferable, but yeah, something along those lines would work. It'd still be a little bit messy, but it would be tractable at that point.
E: All right, so, questions. I think I'm first on the queue, so I'll ask my question: did you consider shared workers?
E: Well, with web workers most people think of a dedicated worker, which is scoped to one page. As for shared workers, and I'm not an expert either, I just learned of them recently myself: my understanding is that they allow you to share between multiple tabs without becoming a full-fledged service worker. I'm being recorded now, so I would guess that a shared worker cannot outlive the last tab, but I'm actually not certain about that. It seems better suited, and that might remove some of the concerns with service workers.
D: Yeah, that would certainly sound pretty good, pretty useful for us, because obviously it's not ideal if you have three tabs open and they're all running these servers: you're doing the computation three times on each tab, whereas if you can just share it all, that obviously helps an awful lot.
E: That's good, then. So, just so we don't rely on my interpretation of it, maybe I'll rephrase it as a question to you: does your service need to outlast the last tab?
B: Yeah, just to try and clarify in my mind: the only real reason why you would want data channels in service workers is to be able to intercept fetch, and that's just so that the client doesn't have to be changed to support running a local service. It's not that it would be impossible to do what you want; it's just that it makes the code tidier.
D: Yeah, and it's partly because of the ecosystem in which Matrix exists, right? Because by all means we could modify our own SDKs, add a shim layer so that it does a postMessage to the service worker instead of calling fetch, but that would only work for that SDK. It's not going to be a more general kind of solution, and it would be quite invasive for anyone else who then wants to add support for similar peer-to-peer capabilities.
D: So it would probably affect uptake in terms of how many clients would support peer-to-peer.
B: You say you could patch it, but that assumes there aren't fetches being done by the user agent rather than in JavaScript: fetches for CSS, say, or something that was actually in the page. Service workers will intercept those as well, whereas you can obviously monkey-patch the JavaScript to do whatever you want. But I suppose what I'm trying to get to is...
E: And just on what I said earlier about shared workers: I need to go and look up whether intercepting fetch is one of the things that's unique to service workers, or whether shared workers also offer that.
E: All right, thank you. So then, you said you could; so for this particular application, it sounded like you would be okay with intercepting those by overriding fetch in JavaScript libraries, or is that not ideal either?
D: I mean, that wouldn't be ideal, because it would probably affect uptake in terms of how easy it is to add peer-to-peer support. Having to tell the world "okay, you need to replace all instances of fetch with this custom blob" is much more difficult than saying "shove a service worker in and it just magically happens."
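The "shove a service worker in" path described above could be sketched roughly as follows. This is a hypothetical illustration, not Matrix's actual code: the `/_matrix/` path prefix and the `p2pRequest` helper are assumptions, and only the small routing predicate is concrete.

```javascript
// Decide which requests the worker should answer itself instead of
// letting them hit the network. "/_matrix/" is an illustrative prefix.
function shouldIntercept(pathname) {
  return pathname.startsWith("/_matrix/");
}

// In a service worker (browser only), the interception itself would be:
// self.addEventListener("fetch", (event) => {
//   const url = new URL(event.request.url);
//   if (shouldIntercept(url.pathname)) {
//     event.respondWith(p2pRequest(event.request)); // assumed p2p helper
//   }
// });
```

The point is the one made above: with interception in the worker, the page's own fetch calls (and even user-agent fetches for CSS and the like) need no changes at all.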
G: There's already a proposal for making the data channel transferable, and the proposal is to make them transferable everywhere. And I guess the new requirement would be to be able to instantiate a peer connection in any worker, meaning shared worker, dedicated worker, and so on.
G: In general you want long-lived objects like a peer connection in a stable context, and a service worker is not really a stable context. But still, there's no harm in exposing a peer connection in a service worker either, because if you're exposing it to a worker, it means your implementation is ready to expose it in a service worker as well.
G: A service worker that is running without a tab is a really evil thing; it only happens in very specific cases, which we try to reduce as much as possible, and I don't think WebRTC would cause much more harm than just fetch. But maybe...
G: The current model is not really like that. A service worker can be created to process push messages, for instance, and you have the Push API, which is usually behind a prompt; you will be prompted, "hey, do you want this website to push you messages?" In that case, from time to time you will receive push messages, and the service worker will be running in the background without you being able to notice it.
G: I would say that, in general, we are reluctant to have push messages triggering some JavaScript without a user-visible action. So if a service worker is not notifying the user with some notification, then the browser will say there's something fishy there, and it might throttle your push messages; it might even stop sending push messages. Because if a push message alone is able to trigger arbitrary code on a device, that's a really bad thing. So that's why it's really restricted in terms of security.
I: Yeah, I just had a question about data channels and service workers: is it possible to keep the data channel around for more than one request in a service worker?
G: That's a good point. It really depends on each browser, because each browser might kill the service worker or not, with heuristics that are not currently shared, and if the browser ends the service worker then in practice you lose the data channel, so you will need to recreate one. That's true.
E: To clarify: so far, we've only specified transfer of data channels to dedicated workers. Are they transferable anywhere?
H: Well, data channels don't have constructors, so the only way to get a data channel is to have a peer connection, and so far we haven't specified transferability of either data channels or of peer connections.
E: Oh yeah, I found the link; so it's window and worker, which I guess is dedicated worker. Link in chat.
G: I think it would be good to file an issue. It would be good to mention Ben's issue as well, as a sub-issue on GitHub, because maybe there will be something specific to the data channel transfer, or we need to understand this issue a bit more as well.
B: But where would you put that issue? It feels like it's gone beyond the use cases document.
H: To me, it feels like we should be writing an explainer, because all that's needed in the spec is probably a little bit of patching over here and over there; what's missing is seeing how it's all supposed to fit together, what we're going to enable.
B: So where would you think the explainer lives? At least some of it could be in use cases, but maybe that ends up being the wrong level for the use cases.
H: I mean, I think I'd like to see a document, probably in extensions: just a markdown file saying, here's the description of the use case, of an implementation of the peer-to-peer use case, and here are the five changes that are needed in order to make this feasible.
A: Well, I think you can make a start with some of the things we've been talking about. I mean, it might not be finished, but...
A: Okay, great. I think we're done with this segment and we can move on to Media Capture Transform.
H: Next slide. So I looked through the open issues and I found one, two, three, four, five, six, seven, eight, going all the way from "is this the right approach?" to "how do we handle video rotation?", and now a new one.
H: We should leave that open as a placeholder, so that we can say we'll close it once we have the fixes we need in the streams spec, so that we're completely sure that it's a feasible approach.
H: Number 26 had a very generic title, "API for tuning a MediaStreamTrack's internal state", and the discussion had quickly settled on: how do we mute the track?
H: VideoTrackGenerator has a muted attribute, thanks to Jan-Ivar for providing that API, so I closed the issue; we're done, we have a proposed solution.
H: Memory locality is a question really about reusing buffers, so that we can cycle around the buffer pool without ever having to rely on garbage collection. I have held that as an issue we need to keep open, but it's kind of exploratory in nature, so we don't need to answer it before going to First Public Working Draft. 34, on the relationship to WebGPU, is the same kind of issue.
H: We have a solution proposed in the WebCodecs PR, which might lead us to say that this is solved by the ways to manipulate VideoFrames, and call it done. But again, I don't think this blocks First Public Working Draft. And there's 65 on video rotation, which I think is actually a metadata issue on VideoFrame, and I suggest we just move it there.
E: All right, yeah. I also looked at the issues and I think I agree with most of what you presented there. I did open issue 71, which is more of a clarification about what the consensus is or isn't. I took a look over MediaStreamTrackProcessor: the introduction section still says that if the track is an audio track, the chunks will be AudioData objects.
G: I think, in the same vein, the use cases in section one identify audio: in three of the four use cases that the specification explicitly aims to support, they are based on audio as well. If you just read that, you'd think audio is supported, and this should be clarified as well.
H: We do have the note saying that there's no consensus on audio, but with it we need to carve out the pieces of audio that are scattered around the document.
E: Right. And there's also an open question about whether that applies just to VideoTrackGenerator, that is, the absence of an AudioTrackGenerator, or whether it also covers MediaStreamTrackProcessor; it would be helpful to have that clarification. And, though this is bikeshedding and I guess we can keep going on, it's still perhaps a little awkward that you have MediaStreamTrackProcessor and then you have VideoTrackGenerator, while MediaStreamTrack is the name of what we're processing. So I think it makes sense, but it's a bit awkward.
E: It would be nice if we didn't have to pass in the kind option; make the default value for kind "video", for example. Small things like that. Yeah, actually...
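The ergonomic tweak being suggested here, defaulting `kind` to `"video"` so that most callers can omit it, could be sketched as a tiny init-normalizing step. The `normalizeGeneratorInit` name is made up for illustration and is not part of any spec.

```javascript
// Hypothetical: fill in kind: "video" when the caller doesn't pass one.
// Explicit values still win over the default.
function normalizeGeneratorInit(init = {}) {
  return { kind: "video", ...init };
}
```

An explicit `{ kind: "audio" }` would still override the default, so the option stays expressible if audio support is ever agreed on.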
G: Isn't there an issue about MediaStreamTrackProcessor not being needed? Because that was one of my pieces of feedback at some point; maybe I forgot to file this issue.
H: The earlier proposal was to make the MediaStreamTrackProcessor be a track, but the consensus we ended up with, as in this proposal, is that it takes a track. So it's...
G: I think it's worth discussing, because that's one of the things that might change; or at least I still think we should try and keep this section as it is.
H: Okay, I'll take that. I'll make a PR for that and then we'll see, and after that we can issue the First Public Working Draft call.
A: So I think we're largely done with this item, yep, and we will issue the CfC shortly.
A: Okay, on to just one little item about WebRTC Extensions.
A: We talked about PR 125, which was to add an API to request keyframes. It has three APIs: one to generate a keyframe from the script transformer, another one on RTCRtpSender, and then an API to request a keyframe. So we had the call for consensus, which ended on January 17th.
A: I think maybe myself and Youenn were the only ones who commented, but the thing that came up was a concern about synchronization. PR 125 differed from Harald's original PR 37 in that the timestamp is not returned.
A: So there's a question about whether that matters; whether there's a situation where you might need the timestamp, I guess to figure out exactly which keyframe was the one you asked to be generated.
A: The second thing was, we got into a discussion about synchronizing the generation of the keyframe with a key update. So typically, what happens...
A: So there's a synchronization issue as well. So I guess the first thing is, I'd like to get some opinions from people on whether the timestamp, which isn't in PR 125, matters. Do you have an opinion, Harald?
G: Yeah, so issue 127 is just about the SFrame transform; it's not about the script transform.
G: Yeah, and the question is whether we really need that, because in general what you want to do is apply the new key, and you want to apply the new key on the first keyframe that happens. If it happens that you requested a keyframe and it's actually a second keyframe that gets triggered, you might still want to start on the first keyframe that you receive; you want to start as soon as possible.
G: We do not need the update to give a timestamp. What we can do is probably just resolve the promise when the next read will be called, and that's what is in the PR.
A: Okay, but it should be very close. Well, I did want to talk about that specifically. One of the problems for the SFrame transform was that by the time the keyframe promise resolved, as you said, the corresponding frame may already have been encrypted, so the timing isn't guaranteed.
A: So essentially the question is whether you could do something like a Promise.all: you know, sender.generateKeyFrame() and then the sender transform's setEncryptionKey, and whether that would work in any of the cases. Are you saying it might conceivably work for sender.generateKeyFrame()?
G: So if you're applying generateKeyFrame on the RTCRtpSender, then everything is difficult to handle, because you're on the main thread while the frames are flowing in another process, so it's very difficult there to do those things. That's why in that case you really need a script transform, and with the script transform you will be in the same thread, so I'm fairly sure this will actually work there: you generate a keyframe, you wait on it, and then you apply the encryption key.
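The scheme described above, where a new key is applied on the first keyframe actually seen rather than on one specific requested frame, boils down to a small piece of bookkeeping. This is an illustrative sketch only, not part of PR 125; the `"key"`/`"delta"` frame types follow `RTCEncodedVideoFrame.type`.

```javascript
// Track a pending encryption key and switch to it on the first keyframe,
// as described above. Illustrative helper only.
function makeKeyRoller(initialKey = null) {
  let current = initialKey;
  let pending = null;
  return {
    setKey(key) { pending = key; },        // a new key was requested
    keyFor(frameType) {                    // called once per encoded frame
      if (frameType === "key" && pending !== null) {
        current = pending;                 // roll over on a keyframe
        pending = null;
      }
      return current;
    },
  };
}
```

Inside an RTCRtpScriptTransform worker, one would call `roller.setKey(newKey)` together with the proposed `generateKeyFrame()`, then use `roller.keyFor(frame.type)` when encrypting each frame, so delta frames keep the old key until a keyframe arrives.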
A: Yeah, I guess we'd have to have some kind of guarantee that that would function.
G: Yeah, I think what would be really good is for some experiments to be done with the current support, which would prove whether or not it's good enough. There's already support in Safari, so people can try it in JavaScript and see whether, in terms of resolution, it will work for them or not. If not, we can certainly try to change when the promise is resolved, or the type of the promise, if that really helps. So, for the script transformer...
E: I put myself on the queue; I'm sorry for not providing timely feedback on this one, but I did look at the PR and wondered why we had two APIs, one on the sender and also one on the transform. But I let it go. And now I see here where the await Promise.all comes in.
G: So I think it was a request from the Microsoft Teams people, right? So I don't have all the details there.
G: Yeah, it wasn't related to encryption. It was more related to simulcast kinds of things, where you might have a new participant entering the call and you might want a keyframe, rather than waiting for a keyframe to happen naturally or for a FIR to happen. Something like that.
A: Okay. So anyway, I guess we're willing to explore whether we can get Promise.all to work. And my other question was: would we have to go to a single atomic method that did everything at once? I think, at least so far, the answer seems to be that we don't need that, right?
G: So for issue 127, where we interact with the SFrame transform and generateKeyFrame: if you're setting the new key on the SFrame transform, you need to understand whether the SFrame transform should ask for the keyframe and apply the new key right on that keyframe. So we need some parameters, or we need to define some behavior, for the SFrame transform. There's clearly something missing for the SFrame transform, and we need to add support for it.
A: But you're not saying that we need to generate the keyframe and set the encryption key in one method, a single method, to avoid...
A: Okay, so yeah: potentially setEncryptionKey might also implicitly call generateKeyFrame to roll it over. Okay, all right. So anyway, I think we have our potential ways forward on this, and I guess we have to work more on a PR to address issue 127 before merging the PR.
G: I don't think that's needed, because the PR is adding support for the script transform, right? It can be used without the SFrame transform, and it's providing benefits for the people that do encryption in JavaScript. For shipping the SFrame transform, I believe we need to fix issue 127.
F: Yes, sorry about that, I'm having a bit of technical difficulty. Okay, I'll try like this. So thank you very much. I'm going to be presenting again about Capture Handle. I'm first going to recap a bit of what we've discussed previously, then I'm going to make the proposals for the extension of the original API, and then, if the crowd is interested, I'm going to discuss a bit why we need both and why the new thing is not enough to obviate the old thing.
F: So first, a recap, a reminder of the premise. We're assuming we've got one application that's a virtual conferencing application, something like Meet, Zoom, or Teams; we'll just call it the VC app. And somebody is presenting something in a tab; that tab can be Google Slides or any other application.
F: Let's call it the slides app. Right now, when the user chooses what to capture, the user knows that he is capturing the slides app, and that he is capturing one of potentially several sessions of that slides app that he's got open in his browser. But the virtual conferencing application does not. What that leads to is that if the user is watching the virtual conferencing application, he cannot easily interact with the other application that he is presenting. It would be difficult for the user to change to the next or previous slide, and it would be even more difficult to do anything more robust, like interacting with the page: moving the cursor around, clicking on links, etc.
F: Next slide, please. So, as mentioned previously, I suggested a mechanism that would address this, and one problem with that mechanism was that it assumed the two applications have set up some kind of collaboration ahead of time. For example, it would work well for Microsoft Teams and Microsoft PowerPoint, and it would work well for Google Meet and Google Slides, and it could even work in conjunction: it could be Google Meet and Microsoft PowerPoint. But they would have to set up this collaboration ahead of time, and it's kind of difficult to go and set up collaborations with absolutely everybody that's out there. But if they do set up a collaboration, that mechanism worked. So I will now give an example of the old mechanism in play. Next slide, please.
F: So in this case we've got a captured application, an application that knows it could potentially be captured, not every single time, but sometimes. So, ahead of time, it calls setCaptureHandleConfig, and there it says exactly what it wants to expose should it be captured.
F: So in this case we see it says exposeOrigin: true, which means: "hey, whoever captures me may discover my origin." It sets a handle, which is freeform text; it can put whatever it wants there, and it's up to whatever is capturing it to know how to parse it and what use to make of it. And then it sets permittedOrigins, which is basically a way to limit who can read that.
F: So in this case, if we look at Meet and Slides, for example, Slides could give a session ID: "hey, I'm Slides session number one-two-three," and if Meet ever captures it, it knows how to communicate with session one-two-three through the Google backend. Next slide, please.
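The captured side just described might look like the following sketch, following the Capture Handle Identity shape as presented; the origin and session ID values are placeholders, and encoding the session ID as JSON is purely an app-level convention.

```javascript
// Captured side (the "slides app"): declare what a capturer may learn.
const config = {
  exposeOrigin: true,                              // capturer may see our origin
  handle: JSON.stringify({ session: "123" }),      // freeform text, app-defined
  permittedOrigins: ["https://meet.example.com"],  // who may read the handle
};

// Browser-only call from the proposal, guarded so the sketch is inert
// outside a browser that implements it:
if (typeof navigator !== "undefined" &&
    navigator.mediaDevices?.setCaptureHandleConfig) {
  navigator.mediaDevices.setCaptureHandleConfig(config);
}
```

Because `handle` is freeform, using JSON here is just a convention between the collaborating apps, exactly as described for Meet and Slides.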
F: So, in that example, there is a bit of code here, but most of it is not so interesting. Basically, once you've already got the capture in place, first you're going to say: okay, if there is a change of capture handle, I need to react to that. But now, here is the capture handle that I have.
F: Let's see who that is. And oh, if it happens to be one of the origins that I'm ready to work with, then I'm just going to use the ID, and maybe I'll expose some user-facing controls: next slide, previous slide. And whenever the user presses those, I know that I need to send that to session one-two-three. Next slide, please. Now, the problem with this mechanism is that, as mentioned, it only works when you're capturing something that you've set up a collaboration with. And, sorry, okay, that seems to be an unrelated message.
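The capturer-side flow just walked through (react to handle changes, check the origin against a known list, then extract the session ID) can be sketched like this. The `capturehandlechange` event and `getCaptureHandle()` accessor follow the proposal as presented; the origins and the JSON convention are placeholders.

```javascript
// Origins we have a pre-arranged collaboration with (placeholder value).
const COLLABORATING_ORIGINS = ["https://slides.example.com"];

// Given a capture handle ({ origin, handle }), decide whether we can show
// session controls, and for which session.
function sessionFor(captureHandle) {
  if (!captureHandle) return null;
  if (!COLLABORATING_ORIGINS.includes(captureHandle.origin)) return null;
  try {
    return JSON.parse(captureHandle.handle).session ?? null;
  } catch {
    return null; // the handle is freeform text; it may not be our JSON
  }
}

// Browser-only wiring, per the proposal:
// const [track] = stream.getVideoTracks();
// track.addEventListener("capturehandlechange", () => {
//   const session = sessionFor(track.getCaptureHandle());
//   // if non-null: show prev/next controls and signal via the backend
// });
```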
F: So, in order to address the other case, I've spoken to Jan-Ivar, who was really concerned that the original mechanism was only useful for collaborating applications, and wanted something a bit more generic, albeit a little bit less powerful, or at least less flexible. And so I split it: we'll call the old thing Capture Handle Identity and we'll call the new thing Capture Handle Actions, and of course the names are far from final. Next slide, please. So one possible way to go about this is we could say:
F: Okay, there's a list of actions that we know capturing applications and captured applications might actually want to pursue: first slide, previous slide, next slide, last slide. It doesn't actually have to be slides, because you could also be capturing something like YouTube, or anything that has a playlist, and then those actions could have slightly different meanings.
F: So what you would do is: on the captured side, you would declare that you support certain actions, and then you would register a handler for should those actions be fired. Next slide, please. And then on the capturing side:
F
You would be able to say: okay, even if I'm not capturing something that I know, something that I'm tightly interwoven with, I might still be capturing something that the user understands. So let's see what kind of actions there are. For example, if I see that the application I'm capturing claims to support "previous" and "next", then I can just expose those buttons in the capturing application, and if the user clicks one, I just send that intent to the other side. I say: hey, the user clicked "next"; I have no idea what that means, but you're probably able to handle it.
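The two halves just described — the captured page declaring and handling actions, the capturing page reading the advertised list and forwarding the user's click — can be simulated in one place like this. The class and method names are hypothetical, sketching the shape of the proposal rather than any shipped API.

```typescript
// Hypothetical sketch of Capture Handle Actions (all names are illustrative).
type CaptureAction = "first-slide" | "previous-slide" | "next-slide" | "last-slide";

class ActionChannel {
  private handlers = new Map<CaptureAction, () => void>();

  // Captured side: declaring support means registering a handler.
  setActionHandler(action: CaptureAction, handler: () => void): void {
    this.handlers.set(action, handler);
  }

  // Capturing side: read the advertised actions to decide which buttons to show.
  supportedActions(): CaptureAction[] {
    return Array.from(this.handlers.keys());
  }

  // Capturing side: forward the user's intent without knowing what
  // "next" means for this particular captured page.
  sendAction(action: CaptureAction): boolean {
    const handler = this.handlers.get(action);
    if (handler === undefined) return false; // not supported by the capturee
    handler();
    return true;
  }
}
```

The capturer never interprets the action itself; it only exposes buttons for whatever the captured page advertised.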
F
There is an issue here where it could be that you're sending the action exactly as the — sorry — exactly as the page is changing, but that kind of race exists for anything where the user presses something. So I think, on the one hand, it's not an issue specific to the actions; but on the other hand, I think it also shows that if you have anything where you're interested in only sending the message to this particular captured page, then with identity...
F
No worries. Let's see — I think that actually now would be a good time, because the next... let me see, just a second. Yeah, I think my next slide is the one I'd basically only show if the crowd is interested, so I'm ready for questions.
C
You embed from your top web page... I don't want to derail this very specific — narrow, or maybe not narrow, but very specific — use case, but I also don't want us to miss an opportunity to achieve a greater bang for the buck. So I guess I wanted to mention this in particular: in the past, years ago, there had been discussions about something called Web Intents, which was a way for a web app to expose an API for other apps to use.
F
Understood. And I think Youenn actually brought to our attention that there is something called Media Session where something like that does exist — I'm not sure exactly whether you can also expose capabilities or not, but it does exist for the case where you don't actually capture — and you mentioned that we might want to piggyback on that. I would like to ask Youenn if he sees any way that we can piggyback on that, because it seems...
F
It
at
least
a
cursory
glance
did
not
make
it
clear
to
me
how
we
could
tie
the
two
things
together
because
with
a
media
session,
as
far
as
I
understood
you,
the
application
knows
immediately
that
this
comes
from
the
user,
whereas
for
the
case
of
capturer
and
capturing
this
doesn't
necessarily
have
to
come
from
the
user.
G
So, from a user's perspective, capturing is a bit similar to this: you're on YouTube and you're using Picture-in-Picture to control YouTube while doing some other stuff. Picture-in-Picture is a user interface implemented by the user agent or the OS; it has some UI like next and previous, you can click on it, and at some point the JavaScript from YouTube will receive an event saying: hey, you need to do something, because the user wants to go to the next track, for instance. And so Media Session defined an API with some predefined actions, like play, pause, stop, next track and so on — these are called actions.
G
So it seems very similar from this standpoint. From a security point of view it's very different, because on the one end it's the browser that is sending these events, so you can think of them as trusted, while on the other end it's the capturer that would try to send these events to the captured page. So you probably need to add information like the capturer origin, or you need to think about potential security issues there. But as far as the definition of the actions and so on, it seems like we should be able to reuse the effort as much as possible and add additional APIs.
G
If we need to, it seems potentially useful for YouTube to be able to say: hey, I'm a Media Session page, PiP is working great, and I'm also allowing the same event handlers — plus some security checks — to be called by another app, if we find some getDisplayMedia being used, or some other APIs in the future that would be able to hook up one page with YouTube. So that's...
F
I'm definitely happy to look at that, and if you've got a concrete suggestion, I would love to hear it. But generally, from my own look at it — and maybe I just misunderstood your intention — it did not look to me like something that I could piggyback onto, for several reasons. One of them, already mentioned, is that I want to allow the application to send the message; it doesn't necessarily need to come from the user.
F
Second is that the actions in Media Session do not actually map onto everything that we would want, so some of them are irrelevant or inappropriate — for example, toggling the mic on and off on the captured site does not seem to me like something that we would want. I also can—
D
G
So I think that's a very good point. One thing we need to understand is whether what we're after is a predefined set of actions, or an application-specific set of actions, because these are two very different use cases. If what we are after is very generic, application-specific actions, then Media Session is probably not what we're after; but if what we are after is the classic Media Session controls kind of thing, then we already have some definitions and there's already some code. Of course, with this use case there's a need for additional APIs: for instance, you need the capturer to expose some proxy to the Media Session, and on the Media Session side there's also a need to say: Media Session, I'm okay receiving orders from this origin or that origin.
G
F
E
Yes, so I would say I like this as it's proposed. I also looked at Media Session — in particular I looked at nexttrack and previoustrack — but I don't think Google Slides actually responds to nexttrack/previoustrack, and I could also see a presenter having a slide with, for instance, YouTube on it, and there is audio, and then it's actually confusing.
E
So I definitely think next and previous slide should not be overloaded onto nexttrack and previoustrack, and even the security issues you mentioned make me think that we're better off with separate APIs for this, even though there's perhaps some synergy — in the setActionHandler, maybe we can have a similar type of API.
G
So, quick question, Jan-Ivar, related to my question about a fixed set of actions, like next slide/previous slide, versus application-specific, very generic ones: what's your thought on that?
E
I would like to standardize on just the four, or even the two — I would even take two. Next slide and previous slide, I think, are the core features.
E
My ask is that this be part of the initial capture handle API, so that there is something for everyone — so that we have generic standardized actions, and then the customizable ID for specific applications, so that—
G
Would you like to implement both? Do you think... what if next slide/previous slide were in Media Session? Because we might talk with those folks and say: hey, we are thinking about next slide/previous slide, what do you think? And they might say no, or they might say yes — and if they say yes, then maybe it means there's something we should do there.
E
I don't see why we couldn't have both, and why we couldn't explore both in parallel. Because if you had buttons for next/previous slide, the user could push those, or the VC app could trigger it — which hopefully would also come from the user pushing buttons in the VC app, but of course there's no guarantee about that.
H
I wonder if we're tackling the relationship between this and the Media Session API the wrong way around, because as far as I can tell, Elad's proposal calls for a more generic interface than the Media Session actions. So I would rather...
F
So the way I see it — and Harald, I would like to hear your opinion — is that there can be a relatively small set of actions that are useful in general, and anything that is user-defined already requires collaboration, because you need both the captured page to be able to handle it and the capturer to be able to send it.
F
So for that you need Capture Handle Identity, where basically they set up some kind of out-of-band communication channel using the identity — using identity exposure, actually — and then it no longer matters; you would no longer be using any of the actions at that point.
G
That's
why
maybe
harold's
point
might
in
practice
work
without
capture
capture
it
to
deal
with
capture.
F
G
F
C
So — I see the queues, so I'll jump in. I agree with Harald's assessment that the architectural relationship is probably the other way around: Media Session being a specialization of what a capture handle is, looking at it.
C
My
own
suggestion
earlier
was
that
it's
actually
a
broader
architectural
situation
of
coordination
between
web
apps,
which
typically
would
be
done
through
post
message.
Today
and
again,
I
sense
that
it
may
be
a
matter
of
defining
a
mechazine
mechanism
for
declaring
that
you
react
to
a
postmessage
set
of
commands
and
maybe
having
a
an
easy
way
for
well
having
a
way
to
standardize
some
of
these
comments,
so
that
you
can
expect
to
find
them
in
some
context.
C
But
again
I
like
there
is
this
trade-off
between
finding
the
best
generic
solution
and
fulfilling
this
specific
use
case.
I
I
don't
think
we've
quite
hit
the
right
middle
point
in
that
trade-off.
I
think
it
needs
more
exploration,
as
has
been
suggested,
but
again,
that's
probably
a
point
where
reasonable
people
can
disagree.
E
Yeah, so I think my main concern with user-definable actions is inputting strings and then accidentally creating another message channel, which I really, really want to avoid.
E
I
think,
having
a
couple
of
standardized
enums,
if
you
will
and
then
an
id
with
the
idea,
you
can
bootstrap
your
own
back
channel
using
post
message
and
existing
technology
so
that
we
don't
have
to
create
new
technology.
For
that,
and
I
think
that
is
why,
as
soon
as
you
allow
arbitrary
strings,
you
know
people
can
put
megabytes
into
it.
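The concern about arbitrary strings can be illustrated with a simple guard: if incoming actions are validated against a closed, standardized set, an arbitrary string — or a megabyte payload — is rejected instead of becoming a covert message channel. The names below are illustrative only.

```typescript
// Illustrative guard: only a closed set of standardized actions passes,
// so free-form strings cannot be smuggled through as a side channel.
const STANDARD_ACTIONS: ReadonlySet<string> = new Set([
  "first-slide",
  "previous-slide",
  "next-slide",
  "last-slide",
]);

function acceptAction(incoming: string): boolean {
  return STANDARD_ACTIONS.has(incoming);
}
```

Any richer, application-defined messaging would then go through the ID-bootstrapped postMessage channel instead, where the existing origin checks apply.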
F
Okay, I'll go next. So I think that if we can at least agree — and hopefully I can present about that, if necessary — that both Capture Handle Actions and Capture Handle Identity are necessary, then maybe we can move forward with Capture Handle Identity and keep Capture Handle Actions under discussion.
F
So, is there anybody who is perhaps unclear, or who disagrees that both are needed?
G
So I would call one P1 and the other P2, somehow, and I would tend to push more effort into the actions and less into capture handle identity. And about us in general: I think we had this discussion on the mailing list as well, and some people expressed the same points as Jan-Ivar, stating that they would prefer a fixed set of actions, because this way they feel capturing might be used more easily than if the captured page had to run JavaScript specifically on the capturer's messages — because executing JavaScript based on capturer input requires some trust, and websites will have difficulty trusting a large number of VC providers.
F
In
that
case,
if
I
could
just
if
we
could
go
to
the
next
slide,
I
think
that
I'm
not
sure
what
un
referred
to
as
p1
and
what
is
p2,
but
I
would
like
to
argue
that
capture
handle
identity
is
actually
the
p1.
So
if
we
could
go
exactly
thank
you,
so
I
just
want
to
explain
why
identification
is
necessary,
even
if
we
assume
that
we
standardized
capture,
end
elections
and
that
you
know
it
provides
everything
that
everybody
here
wanted.
F
So,
first
of
all
is
that
with
identification,
the
identification
part,
it
is
sorry
so
you're
able
to
do
two-way
communication.
That's
number
one!
You
cannot
do
that
with
the
actions
number
two
and
the
list
here
is
a
bit
different.
Yeah
number
one
actually
was
that
you
would
be
able
to
actually
send
some
information
from
the
capturer
to
be
the
capturer
without
actually
requiring
the
capture
to
send
the
message
and
therefore
alert
the
capturing
to
the
fact
that
there
is
a
capture.
F
So
if
I'm
a
capturer
and
I
capture
something,
I
can
already
understand
whether
I
want
to
start
communicating
or
not
without
giving
that
piece
of
information
that
you
are
being
captured.
Another
thing
is
that
we
get
the
ultimate
amount
of
flexibility
in
messaging
right,
because
once
you've
got
an
identity,
you
set
up
a
peer
connection.
You
set
up
some,
you
know
shared
some
communication
through
a
rest,
api
or
anything
of
the
sort.
F
All
of
this
is
possible
and
we
don't
limit
ourselves
to
any
kind
of
model,
and
it's
actually
also
possible
to
tie
into
whatever
kind
of
credential
model
the
capture
and
the
capturing
already
share
on
the
backend,
whether
certain
applications,
certain
actions
are
privileged
and
others
are
less
privileged,
etc.
F
Instead
button,
you
are
actually
allowing
the
user
to
let
the
certain
things
follow
you
right,
because
the
moment
you
press
share
this
tab
instead
and
you
start
capturing
the
other
one.
There
is
an
event
fired
that
lets.
The
capturer
know
that
hey
there
is
another
capture
handle
right
now
and
it's
almost
as
if
the
user
can
control
unloading,
the
old
iframe
and
loading
a
new
one
inside,
and
these
are
all
things
that
you
don't
actually
get
with
actions.
F
So I would say that, if anything, the identification part is the P1.
F
E
So, since I didn't answer your first question: I see — and I hate to be difficult — but I kind of see a minimum set of actions, of next slide and previous slide, as a blocker for identity, because otherwise we can't rebut... we want to be able to rebut the notion that we're adding APIs specifically for narrow collaboration between sites, which I fear will mostly benefit major sites with resources in both a VC app and a slides app.
F
Owning both domains — are such websites illegitimate? Is it not legitimate to own both the video conferencing application and the slides application? Of course they're—
E
Sure. I think I'm rehashing a point that was made by others — Sergio, I believe, said this as well — that it'd be better to have an API that we could standardize, that would do next/previous slide across properties, taking the time necessary to have a longer set of actions. Sorry — Youenn, go ahead.
G
Yeah, sorry for interrupting you. I was just about to say that, generally, when you have a generic mechanism, you need to spend a lot of time, because it's very difficult to get right. So what's appealing about this very small set — previous slide, next slide — is that we can probably make it work, hopefully quite fast, and without getting it wrong; while with a generic mechanism, we really need to spend a lot of time to check all the edge cases and so on.
G
F
Okay, but I believe that capture handle — I don't recall exactly, but I think it was first introduced, first proposed, and also implemented in Chrome behind an origin trial, more than six months ago. So there was plenty of time for issues to be surfaced, and if Mozilla and Apple had any concerns, we could have hashed them out a long time ago.
G
I
think
I
raised
some
issues
that
are
still
not
the
best
I
for,
for
instance,
the
origin
and
so
on
and
honestly
it's
a
generic
thing.
So
we
we
on
our
side.
We
need
to
spend
a
lot
of
time
to
iron
out
all
the
details
and
to
validate.
D
G
It's okay, and I think that doing this amount of work is more difficult than spending, like, one or two hours on the very reduced case of next slide/previous slide.
F
Do you have any kind of estimate of how long this would take — something that could be noted in the minutes?
G
No... yeah, correct: the generic thing is the capture handle, and the narrow thing would be the actions, which would be scoped to next slide/previous slide. I'm not talking about the Media Session generic thing.
H
G
So I'm not sure I clearly see the difference — but if the action part is the two, next slide and previous slide, made to work for next slide/previous slide, and the identification is the generic approach: yes, that's correct.
H
E
There's also a part where you have to register your actions, and that is at the same time as you would reveal your identity, right? Not—
E
H
So, you see — you would like to marry them, but you'd also like to narrow the action mechanism; yet at the moment, you don't have anything that needs to narrow the identification mechanism, right?
E
Well, I like the slides that Elad presented here — and I'm going from my memory of capture handle. I understand there are some other issues, where sending events while the page is navigating might have some security implications that I haven't fully come to peace with yet. But I feel that, with the next and previous slide actions that Elad has proposed, it at least addresses my concern with capture handle as a concept, and it can move forward as a concept.
H
When
we
try
to
decide
multiple
things
at
once
and
set
up
all
what
I
I
regard
as
rather
arbitrary
linkages
between
them,
the
whole
set
becomes
very
hard
to
make
forward
progress
on.
So
I
would
like
to
see
if
we
can
detach
the
issues
into
pieces
where
it's
cleared,
which
parts
are
relevant
to
which
particular
concern.
G
Yeah, thanks, Harald, for stating the two parts; that's helpful for this question. To me, the most difficult part is the identification part, and that's the one where we need to spend a lot of time and do things right. By scoping the use case precisely — which is understanding next slide/previous slide — and getting the identification part right based on that, I'm hoping we can have a good discussion, validate the model, and be as fast as we can on that subject. So that's why I think it's good to provide a good scope there, which Elad has done, which is great. And I would say we use the narrowed-down scope of actions as the thing we want to support for identification, and we validate the identification model based on it.
F
You
and
I
have
since
checked,
and
it's
almost
been
a
year
since
we
started
the
discussion
about
identify
identification.
So
if
it's
possible
for
you
to
fast
track
discussions
there
with
me,
so
that
we
could
finish
that
part
relatively
you
know
within
the
near
future.
I
think
that
would
be
preferable.
G
Sure
I
I
think
that
there
are
still
issues
with
origin,
for
instance,
in
initial
you
there's
the
case
where
you're
saying
the
capturing
has
to
provide
origin
or
not,
and
there
are
things
like
that
that
we
discussed
in
the
past.
It
hasn't
changed
for
a
long
time
and
we
haven't
reached
consensus
and.
H
It's
not
the
same
thing.
Not
reaching
consensus
is
not
the
same
thing
as
you
haven't
replied
to
the
repairs,
and
I
mean
the
the
word
has
to
provide
origin
or
not
that
doesn't
parse.
You
can't
have
a
has
to
attach
or.
C
So,
just
to
provide
a
framework
for
this
additional
discussion,
so
first
I'm
hearing
we
want
to
solve
the
use
case,
I'm
hearing
from
yanivar
that
the
narrow
use
case
needs
to
be
driving.
The
discussion,
in
particular
as
jun,
was
suggesting
the
identity
discussion
need
to
be
compared
to
that
narrow
use
case.
So
I
guess
now.
The
question
is
how
and
where
we
want
to
have
the
conversation
right
now.
Capture
handle
is
a
ycg
committee
group
report.
F
Before
anybody
answers,
I
would
just
like
to
also
remind
everybody
that
this
feature
has
seen
a
lot
of
web
developer
interest
by
zoom
by
microsoft
and
gt
ring
central
et
cetera,
and
these
are
only
the
people
that
we've
reached
out
to
I'm
sure
that
we
could
get
even
more.
F
So
I
think
that
the
reasonable
thing
to
do
would
be
to
adopt
and
to
iterate
over
the
few
things
where
we
don't
have
consensus
and
hopefully
at
the
at
an
increased
pace.
E
I
I
think
I
would
support
that
if
we
can
get
some
commitment
to
include
the
minimum
actions
that
a
lot
proposed.
F
G
Probably — I haven't looked at the capture handle spec for quite some time, so I'd need to refresh my memory, but...
C
But
I
guess
we
we
could
start
the
call
for
adoption
as
a
trigger
for
everyone
to
refresh
their
mind
on
the
specific
proposal
and
whether
they
have
burning
issues
they
feel
need
to
be
addressed.
So
my
real
question
was
not:
are
you
going
to
say
yes
to
the
call
for
adoption,
but
do
we
feel
in
the
rough
right
space
to
start
the
discussion.
G
I
would
I
think
that
we
it's
going
to
make
progress,
so
we
can
we
going
with
all
four
options
he's
in
in
the
path
to
make
progress.
There.
F
I've
got
two
questions.
One
do
you
ever
want
to
so
in
case
it's
possible
for
you
to
maybe
bring
bring
forth
a
more
detailed
proposal
of
what
you
mean
with
media
session
just
in
case.
We
want
to
go
with
that,
and
if
you
find
that
this
one
is
not
very
easy
to
reduce,
maybe
we
could
go
on
with
the
proposals
is
of
actions.
G
E
I would strongly prefer the same document, because the actions, in my view, address a concern with the identity part of the same spec.
C
F
A
Back to you — yeah. Well, I think we're in the wrap-up phase. So the question is: do we have a clear list of things we need to follow up on?
C
So
here
I
heard
the
chairs
to
start
a
call
for
adoption
for
capture
handle
to
raise
an
issue
on
a
media
station
alternative
to
the
proposal,
and
I
guess
that's
probably
the
gist
of
it.
F
I'm sorry — what link do you need? I'm sorry for not paying attention.
G
Yeah, but that's the one. That would be good too, if you want to refer to some action items, where I will probably add some follow-ups.
A
Yeah,
so
the
other
things
we
have
to
do
is
the
announce
the
adoption
of
the
first
public
working
draft
right
for
media
capture
transform.
I
think
that
was
another
item
or.
A
It's
called
for
consensus
right
and
right
for
that
and
any
other
any
other
immediate
action
items.
A
C
Yeah — Harald committed to a pull request to remove the audio bits from the spec beforehand.
C
For
the
env
use
case
discussion,
we,
I
think,
have
also
a
plan
of
team
leading
the
work
on
the
explainer
and
the
use
case.
A
bit.
A
And
they
use
vrs
okay,
and
I
think
we
have
our
action
items.
Yeah.
B
If
somebody
noted
those
down,
it
would
be
great
to
put
them
in
the
in
the
irc,
because
I'm
I
didn't
you
were
talking
faster
than
I
can
type.
A
Okay,
well
thanks
everybody.
I
think
we've
actually
completed
all
of
the
items
for
today.
Our
next
meeting
will
be
on
february
15th.