From YouTube: WEBRTC WG meeting at TPAC 2020 - part 2
Description
WEBRTC WG meeting at TPAC 2020 (virtual), October 22, 2020.
A
So everyone's seen this chart by now. Basically, there's the entire set of boxes that go into WebRTC, and what we're now looking at is not the stuff to the left, which is all the networking stuff that everyone loves (except the ones who don't). We're looking at the stuff on the right.
A
There was a nice picture, I think it was given at an earlier talk last week, that had the MediaStreamTrack in the middle, and then you could feed cameras, microphones, desktop captures, incoming streams into it, and pull stuff out on the right: presenting it on a canvas, on a video tag, on a recorder, or media capture, or something else. But the MediaStreamTrack is kind of this central idea in the local media processing pipeline, and what we wanted to do is open it up.
A
Nowadays, people are using ethernet cables. How simple. Anyway, a breakout box then permits you to disconnect some of the wires that run from one end to the other, and perhaps patch some wires onto other wires, and that's the mental model that you would want to have when making it simple to understand the three stages of the breakout box. Because the first thing we have now is a self-contained unit, the breakout box stage one; you can apply constraints to it.
A
I have my MediaStreamTrack that comes from my video input, and I'm going to turn that into a processed MediaStreamTrack that has this way of pulling out, and then I can create what's called, in streams parlance ("stream" is such an overloaded word, sorry), a transform stream, which is: you put data into it and you get data out of it, and what you do in the middle is called the massage function.
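The transform-stream shape described here can be sketched as follows; plain objects stand in for real VideoFrames, and `makeFrameTransform` is an illustrative helper, not a proposed API name:

```javascript
// A rough sketch of the "breakout box" middle stage: a TransformStream whose
// transform() callback is the per-chunk "massage" function. Plain objects
// stand in for real VideoFrames; makeFrameTransform is hypothetical.
function makeFrameTransform(massage) {
  return new TransformStream({
    transform(frame, controller) {
      controller.enqueue(massage(frame));
    },
  });
}

// Demo: push three fake frames through the transform and collect the output.
function demo() {
  const source = new ReadableStream({
    start(controller) {
      for (const id of [1, 2, 3]) controller.enqueue({ id });
      controller.close();
    },
  });
  const out = [];
  const sink = new WritableStream({
    write(frame) { out.push(frame); },
  });
  return source
    .pipeThrough(makeFrameTransform(f => ({ ...f, massaged: true })))
    .pipeTo(sink)
    .then(() => out);
}
```

The same pipeThrough/pipeTo wiring applies unchanged when the source readable carries real frames.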
B
Like... I have a question, Harald, which relates to the buffer handling: is everything necessarily in main memory, or can it be GPU buffers?
A
We're reusing the VideoFrame (and AudioFrame, for that matter) that were defined for WebCodecs, and the WebCodecs folks are thinking that they have found ways to handle a main-memory object that is basically backed by GPU memory, so that as long as you don't have to convert, or you can run the converters on the GPU, you can just ship this thing around with a pointer down into GPU memory.
A
So I don't know if that's going to work in the first version, frankly, but it's definitely something that we want to design to be possible, so that we can either get it to work at once or get it to work in a later version.
A
I think that this would require marking the buffers to be in the format of a WebGL surface, or texture, whatever you call them, as opposed to just the pixels in main memory.
F
I believe you can see these video frame objects as bitmaps, which are structures that can be backed by GPU memory, and then you can run... I'm not an expert, but I think you can operate with WebGL on those objects, so everything stays on the GPU.
H
Right, but the transform stream in this case was a main-thread transform stream, yeah.
H
It might depend. You're transferring ownership, so it's a little... so you're relinquishing ownership when you transfer something, right?
I
Yes, just to say that your example is correct as well; you could do what you want to do, and what Harald wants as well, so it can be done both ways.
C
So there's some... I think this is coming back sort of to where Tim was, a little bit. Like, I get where this is going. I'm thinking about the cases where... I mean, if the camera is producing compressed media, this means we're going to decompress it to operate on the frame here, then recompress it and send it again.
A
No, that depends on the compressor. I mean, the current example we have is HD cameras on the USB 2 bus, and they produce Motion JPEG, and no sane person wants to end up with Motion JPEG. So Henrik actually has some slides right at the end for dealing with, or rather avoiding, Motion JPEG.
A
But at the moment, the easiest way to think of Motion JPEG is that it's another pixel format, just a very old one.
C
Sure, I mean, most newer cameras are not doing Motion JPEG anymore, but for the ones that are doing H.264 or something like that... yeah, I don't think it changes the story much. Like, I think the question is roughly: would you be open to having the possibility of being able to get the compressed packets, regardless of whatever the format is, passed into... you know, exposed into the JavaScript as an alternative? Well, look.
A
Yep, I mean, I think that's an interesting question. If we can get things done without transcoding, it's obviously cheaper, right? But having it in an encoded form that we're not going to decode might actually be more suitable for inserting the camera straight into the encoded side, which is downstream of the encoder. And the nice thing about raw frames, apart from them being raw, is that they don't depend on each other.
C
All right, sure. So, I mean, I totally understand it depends on the codec, so some are temporal, some are not. But, I mean, this would really open... this opens up some of the stuff that Bernard was saying long ago, that we ought to be able to have plug-in codecs, right? Particularly if you think about very low bandwidth audio codecs or something like that. I don't know; I think there's a lot of options here.
A
We're trying to keep things in sync, so that if we do the breakout box, we can break out the raw stream, feed it into WebCodecs, and then take what comes out and put it into a peer connection.
H
Yeah, I'm just going to say, a note on WebCodecs is that they've so far rejected streams, I think for performance reasons. So one concern I have with this, and I'm glad it can be done in workers, but since the API, the control surface, is on the main thread, I'm worried that this might encourage sites to try to... I mean, doing raw media on the main thread seems like a horrible mistake to me.
H
So have you considered having this API throw if the pipe contains any main-thread JavaScript? And might there be other avenues? Like, the reason we have to do this post-messaging is because the MediaStreamTrack is not transferable; have you thought about that as an option?
A
So, no, I wouldn't try to enforce use of workers in order to use this API.
H
Well, I mean, there are other concerns, though. Like, if you have a camera track that has privacy indicators, that's necessarily tied to the current document, so you wouldn't be able to do this. I'm hoping this won't ever be possible to send to a service worker, for instance, because then your camera access may outlast the page, and that would be bad. So this is limited...
H
Could we take steps here, and maybe to start with say: let's only allow it in a worker to begin with, see how that works out, and then see if there are main-thread needs, and whether the main thread is performant, once we are further along with implementations, because we don't even know if it's going to be performant in workers.
I
Correct. I completely share the concerns, and I think that there will be some presentations later on that try to do something along the lines you're suggesting for insertable streams, and I guess the same model could be applied to these new streams as well.
B
We're going to need to shut down this conversation and move on to the next presentation, to keep on streaming.
A
My answer is that I'm interested, but I think that it might not be strictly a mod on this interface; it might be a mod on the insertable streams, right, on the other side.
H
Okay, fair enough. Can I also just mention that next week there's going to be a breakout session on memory copies? Because I think a lot of the WHATWG Streams parts that we're going to rely on here, byte streams and bring-your-own-buffer reading, are not well defined or implemented yet, so...
K
Okay, yeah. Can you copy the link to that memory copy thing into the chat, please? Yes, I'll do that. Thank you. Yeah.
I
Thank you. Next slide. Okay, so going back to insertable streams: Harald presented it in June, and one use case among several was end-to-end encryption, and we know that it's a desired feature.
I
So, since we were excited by that, we decided to do an experiment. The experiment is mostly pen and paper right now, though, depending on our discussions, we might want to start building a prototype.
I
So, with pen and paper, we decided that, yeah, it would take too long to redo the whole world, so we decided to build upon existing technologies. So there's SFrame, which is the underlying format. Basically, SFrame is an IETF... it will be an IETF spec; I think the IETF is working on the SFrame spec.
I
We also want to use CryptoKey to convey key material, and CryptoKey is nice because you can either generate a CryptoKey from JavaScript or you can generate it from native code. You can have under-the-hood properties like: oh, this key is not exposed to JS, or this key is exposed to JS, and in the future we could add a lot of properties to crypto keys without changing this proposal. And of course we want to integrate with peer connection; that's why insertable streams comes into play here.
I
So we finalized an initial draft, like a mock-up of an API, which is available; there's a link in the presentation, so you can look at that. And specifically it's...
I
It has quite some similarities with the medooze SFrame implementation, which is a kind of SFrame-in-JavaScript implementation, and it's done in a worker. So the worker has a kind of API that it exposes to the main thread, to the pages, and it's quite close. So it's quite nice, in a sense. Next slide.
I
So there is an example: you take a stream, you create a peer connection, you add a track, and then you have a sender. Since you want to encrypt, you need a key, so there's a key management. The key management, you don't really know what it is; it may be implemented in JavaScript, because currently that's the only way. Maybe in the future it could be MLS implemented in user agents, whatever. At the end you get a CryptoKey, and then, in red...
I
So both the transform and the SFrame sender stream are in red, because it's not in the peer connection API and it's not in the insertable streams spec yet, but we think that it's a good model. If we were to remove the transform attribute there, what we would do would be the typical: take the readable stream, pipe it through this SFrame sender stream, and then pipe to the writable stream. So it could be done as well with the current insertable streams API.
I
Of course, in this model the setup is done in the main thread, but the actual processing can be done wherever it is more appropriate: it could be done in a dedicated crypto thread, it could be done in the media pipeline. It's really up to the user agent, and that's nice, because it allows flexibility and possibilities to optimize things, while keeping the web developer's life very easy. It's just a few lines of code; nothing to care about except doing the setup properly.
I
So, next slide. Conclusion from this experiment: we think that it's feasible to add a native end-to-end transform, and we also think it's a timely work item for the WebRTC Working Group, because the IETF is working on its part, and there are a lot of customers that want it.
I
I have other slides as well, but I guess we could discuss a little bit first and then move on to other parts of it.
K
I'm just going to give a status update. It looks like the IETF is going to make a working group for SFrame, and that is going to be open to all different kinds of key management. So there may be something interesting to be done in collaboration with our working group, and Richard here can tell you a little bit more about that, if you want.
E
This seems like a very sensible step toward leveraging that up into, you know, the long-range vision of having stuff protected from JS. Obviously not yet a complete vision; I think we need to get key management that is out of the hands of JS as well. But this seems like we can do it stepwise: get this frame locked in, get this thing to do the SFrame transformation, and then we can add another thing that encapsulates the key management.
E
Sensible to me, and happy to collaborate, to help facilitate the collaboration with the relevant people at the IETF on it.
I
Yeah, that's really the design: to think that there's a key management system that will come in the future, and that's okay, it will take some time; but we think, with good confidence, we could already have SFrame APIs that would be used initially with JavaScript-provided keys, and would be able to migrate, without deep changes to the APIs, to protected keys.
L
I think I am, but yeah, I was also happy to let Richard go, you know, since they were saying things I agree with. Yeah, and this seems like a reasonable next step along the way. I like having...
L
I like the fact that, as you said, this conceals which thread it's in, which makes life a lot easier, because obviously it sucks to have it on the main thread, and the fact that this conceals it is really nice. And, like, you know, we'd rather people use this than use JavaScript, and so this encourages them to do the right thing rather than the wrong thing.
L
One thing that I guess I was thinking about, and I apologize, guys, I was trying to look this up in insertable streams before... sorry, while I was waiting in the queue: what happens if you get an integrity error in stream processing? That's not specific to this; this would happen as well with the JavaScript SFrame implementation, but how does one handle integrity errors?
I
So, you can provide a signing key, and based on the signing key, if the signature is wrong, then there's something that should be done. I guess that, depending on the options we could provide as part of the creation of the SFrame sender stream, we could provide some flexibility there. My initial guess is that the default behavior would be: if you're not providing signing keys, you do not validate them, so you're okay with that; and if you're providing a signing key, you probably want to drop the frames that are wrong.
L
Right, that was an answer to a question I was about to ask, so great. My question is dumber. My question is: what happens if the verification fails? Does insertable streams have an API for saying that there's a failure, or do you just... like, how do you actually, as a practical matter, deal with the fact that a frame can't be verified?
I
Yeah, it's up for... it's not present in the API. So I guess that, if we were to do that, there would be WebRTC stats that would allow you to see that frames are dropped, with the reason being SFrame, for instance. So we could provide that. We could also provide, like... for instance, there's the issue where, when there's key rolling: oh, I receive a new frame and I do not yet have the key, for instance.
I
So I was thinking, yeah, maybe we should add a callback there, but I'm not sure we actually want to add that complexity. And I think that, yeah, you don't have a key for the next frame, so you will drop it, and you will get the next one, and you will need an I-frame; or we will buffer a little bit, and if we cannot buffer more, then we'll say sorry, we dropped it on the floor, and once you get the key, we'll ask for a new keyframe, for instance.
H
Okay, so I was going to say that, from an API side, having a sender.transform attribute, I like that better, because it's more contained. Because with insertable streams I often wonder: well, what happens if I don't listen to the incoming stream at all and just generate my own data from, say, WebTransport or something? What happens then? You know, can you totally...
I
Yeah, there are some slides about that shortly.
I
So it seems that there's consensus that we should go in that direction, right? Am I hearing well?
I
Looks good. Next slide. So I looked further and started to think: oh, we have insertable streams. So an insertable stream, when you're sending, is receiving valid encoded media content, but it might produce whatever content it wants; so it could be valid encoded media content, or it could be invalid media content.
I
Let's take the SFrame example, for instance. SFrame will produce invalid media content, in the sense that the media content will be encrypted, and even worse, it will prepend the media content with some keys, key IDs and so on. So what happens there?
I
Actually, the user agent pipeline might expect to get valid media content for packetization and depacketization. So if you have H.264, with various NALUs, you will not be able to packetize correctly, because all of this is completely crappy compared to what you expect, and on the decoding side it's also difficult for SFUs.
I
So for SFrame particularly: I believe that in the future we might have a generic packetizer that might work with codecs like AV1, for instance, and that might be totally fine; we wouldn't need to do anything wherever there's this packetizer. But we don't have it yet, so what can we do there?
I
One possibility, if we really want to have it for H.264 or VP8, would be to add an optional adaptation transform, which is basically: you take whatever the insertable stream is producing, and then you wrap it into a valid insertable stream output.
I
So let's say that you have a keyframe: it's being encrypted, then you take the blob and you create a big NALU based on that, and you send it; and then the SFU will be fine, and the user agents will be fine as well, without any change. So one possibility would be to define this adaptation for H.264 and VP8, which are the mandatory video codecs, either within the SFrame transform or standalone.
J
You have Dr Alex in the queue.
K
Actually, I think that the job has already been done, or, I'll put it that way, there is a solution for that. One of the problems with just insertable frames, when you change the packets, is that you've kind of already announced which encoder you would use, and you've already done the SDP offer/answer.
K
The dependency descriptor has been fully described in the annex of the AV1 RTP payload spec, and Sergio finished an open-source implementation this week. So I think I can liaise you with the people that have done that work, and that will be part of the work in the IETF SFrame working group anyway, and I know there is a draft document for exactly this problem, so yeah.
I
Yeah, that would be nice. I still think that we need something in general, even without SFrame; for instance, if an insertable stream is modifying the stream, you need to send it, and you don't want to change a lot in user agents for that. But I would be very fine with, initially, an SFrame-specific solution; that would be great as well.
E
Yeah, I think that's also one of the things that was making me a little sad about SFrame: there was this risk that we were throwing away useful information as we were making the transformation. So yeah, I think it would be worth talking about ways to do that, both at the API level and at the protocol level.
I
Okay, good. Next slide, which is coming back to what Jan-Ivar was suggesting: the transform attribute versus exposing readable and writable. So insertable streams are transforms by nature. Meaning, if you look at... so I took Harald's nice block images and just put Ts in the middle, which are the insertable streams there, and if you look at that, you clearly see that this is a transform, right? Insertable streams are not producing content; they are transforming content. And Streams...
I
The Streams API has support for transforms, and this model is actually validated and being used today with Compression Streams and the Encoding API. These are APIs that are implemented in Chrome, and the Encoding API is also implemented in Safari; so, with WritableStream, TransformStream and ReadableStream, it's available in Safari Technology Preview, and it's working fine there. So it seems that using the transform model for insertable streams might have some benefits. Next slide.
I
So I took various examples there. On the top left you have the native transform, which is exactly the same as before, as I presented earlier. There's a current API example where you have the pipeThrough/pipeTo thing, which is in the main thread, and that's fine. And I thought: hey, why can't we add, like, a new RTCTransform that would take a worker, for instance? So you create it in the main thread; it's there.
I
Maybe it has postMessage under the hood; that is already defined, so you provide some nice benefits to the web developer, and it's just one object. It seems fine as well. What about a worklet? That's another approach. There you can see that there's some more API: like, you add a worklet to a peer connection and you call addModule, but addModule is already defined for worklets in general.
I
You don't have to redefine it. So it all seems feasible with the transform model there, and that could provide us a nice choice in the future, when we are ready to actually decide, based on experiments or on additional thoughts and studies. Next slide.
I
So, some observations there. The API with transform is ergonomic; there's an intuitive use of transforms, and it also allows mixing native and JS transforms very nicely, just like people are doing today. Usually you take the TextDecoderStream API, which is a transform, and you pipe the fetch API response body through it, and then you take that output to do something else. People are used to that, and it's working fine. So why not piggyback on this development style here?
I
That makes a lot of sense to me. There's a potential for a good threading model, because we could provide the API so that it runs off the main thread by default, and then there would be no need for transferring streams in most cases. I hear that transferring streams is possible. I implemented ReadableStream, I implemented WritableStream, I implemented TransformStream, TextDecoderStream and a lot of things; I did the integration with the fetch API, and I know that transferring streams, in the very easy cases, can be done almost optimally: no memory copy, and not blocking on the main thread. But there are edge cases where you start doing things and, bang, the optimal way is no longer possible. And that makes me think that we should allow transferring streams, but we should not require its use in most cases; it should be off the main thread by default, and in some specific cases, yeah, let's transfer it. Next slide.
I
Yeah. So, worker or worklet? I don't know; most use cases will probably be fulfilled by either worker or worklet. Worklet has some advantages: it might be more efficient, like you could do synchronous processing and reuse existing native threads in the media pipeline; worklet might be less error prone, nicer to use, for instance. But I feel like we need more studies there. Maybe we need more feedback from the Web Audio Working Group; maybe we need additional feedback from native transform stream implementation experiments.
I
So, to me, we should keep that in mind, keeping it in scope of the discussion; but on my side, I'm not ready yet to provide any guideline on whether we should pick one or the other. And that's the end of the presentation.
H
So, Jan-Ivar here. I like the direction, and I like the intent, which seems to be to get this off the main thread, and I agree it's also early to decide exactly on the API shape. But I think what we should do immediately, even if we use the existing postMessage solution, where you create the transform stream in the worker and then post messages to the main thread, or the other way around...
I
Okay, we could also simply only expose a transform, and in that case you no longer expose a readable and writable stream on the main thread, at least the native ones; you would need to expose them elsewhere, which would be a worker or worklet. And in that case you no longer allow the potential footgun of a web developer being lazy and trying to do the easy thing, and it's working on a big laptop but it's not working on a mobile device.
I
I don't know of any... you would need to require some synchronization, some main-thread synchronization of data and timing. But when you're on the main thread you might be blocked, so yeah, I don't know of any use case here.
H
Just for context: a problem with running a lot of event-based things, or anything, on the main thread is that, I'm assuming, this is going to be running off the main event loop, and performance-wise, for real-time things, it's often very bad to rely on the main event loop, because so many other things may be queued.
I
Yeah, even for audio processing, which might not be a lot of data, it's very sensitive to latency, and that's why there's an AudioWorklet, for instance. People are using it to actually process captured data; then they're encoding, then they're sending it through a data channel. We know who is doing that, and it's sort of working.
A
...fail separately or be impossible for different reasons might just possibly be a bad idea. So it's possible that defining the transform is easier for users, but I have this nasty feeling that something is going to come and bite me. For the cases where we absolutely don't want JavaScript to see the data, like native transforms, then absolutely, this is a much, much better API for that.
I
Yeah, I agree with that. I think that the medooze SFrame transform, so the implementation done by Sergio, is roughly similar to that in a sense, except it does not have the transform getter, a transform attribute; but it could be a basic brick for doing that.
H
Yeah, I think my main concern is that sites would start using this on the main thread and depend on it, when they should have used workers but just didn't, because it was fast enough on their fast machines; and then later, if we try to decide, oh, that was a bad idea, and turn it off and throw a failure, we will break a lot of sites. Maybe it'd be better to encourage them to move to workers early.
B
Yeah, thank you very much, Youenn. Okay, so we'll try to go quickly through WebRTC-SVC. So, a little bit of an update: we have four open issues, of which two are recent reviews, one from the TAG and one from PING, and then there are two extensions. It has been implemented in Chrome behind the experimental web platform features flag, so you can try it out.
B
Okay, so at last TPAC we talked about how to maintain the scalability mode table and diagrams. Last year, the idea we had was: hey, make this someone else's problem, someone like AOMedia. And so we originally were thinking of referencing the AV1 bitstream specification and just saying: hey, here are the mode names, and here are the diagrams.
B
However, since then, the disadvantages of this have become apparent. First of all, WebRTC-SVC applies to multiple codecs, not just AV1; there are new codecs coming down the pike, so just referencing an AV1 spec may not make sense. Another problem is that it was pointed out by implementers of AV1 that there are some limitations with respect to SVC and AV1 that result in very unnatural names and diagrams for some modes; and also, the diagrams in the AV1 spec weren't very good.
B
Just so people don't get confused: it was felt there were better names than the ones that were in the AV1 spec. Also, we now have our own scalability mode diagrams in section 9 that people seem to think are a lot clearer than the ones in the AV1 spec, and we've now updated section 6.1, which talks about adding new modes, where we require submission of dependency diagrams: you have to submit the diagram along with your mode name to get into the table.
J
Would be a great fit here.
B
Yeah, that's a great point, Dom, because basically the spec itself is pretty simple, but almost all the changes would be to this registry, the table. Yeah, okay. So the TAG review came up with a comment, and they asked: why is the scalability mode a DOMString rather than an enum? And they pointed out that enums do offer automatic error checking, for example of a scalability mode value.
B
In setParameters you can't set a scalability mode that isn't advertised as one of the supported ones; you have to put it in the RTCRtpCodecCapability scalabilityModes list before you can set it. So there are some non-automatic checks going on, but not automatic ones. And some of the discussion of this revolved around what we just talked about, which is that the table has changed quite frequently and is expected to continue to do so.
B
Just since last TPAC, for example, we had originally taken out the S modes and then added them back; we've just renamed some of the KEY and KEY_SHIFT modes; and we suspect that as new codecs come about, we'll add new modes. So this table is constantly changing, and that raises the question of whether automatic error checking actually adds any real value or just adds confusion.
B
For example, we already have, as I mentioned, the non-automatic checks, so you can't really set a mode that isn't supported; you'll get an error. And the problem, if you have the automatic checks, is that it could result in different behavior between browsers, for example when you get a new mode or a rename or something. For example, the current implementation in Chrome was done before this mode renaming.
B
So, for example, the L3T3_KEY_SHIFT mode is not supported in any browser, but an older implementation could treat that as an illegal value, while in some other browser, which had been updated, it would become legal. And because of the potential for this table to constantly update, you could have this different behavior in different browsers, whereas the current mechanism would give you the same behavior.
B
And, like I said, having different behavior in different browsers doesn't make sense to me. If you can't set a mode you don't support in either browser, how does it help you to have automatic checking when the table's changing all the time? That may not make any sense.
B
So basically, here's the situation. The only thing that WebRTC-SVC does, and it doesn't change any of the methods or how they behave, is add to the capabilities dictionary by including a list of supported scalability modes. So what's the additional fingerprinting surface from this? Well, WebRTC-SVC is currently only supported in Chromium-based browsers, but there are a couple of those. So you do get generic info on the user agent: like, you know it's a Chromium-based browser, but you don't necessarily know which one it is.
B
The other thing is that we had issue 49, which was filed in webrtc-extensions, and basically that issue complains that the existing synchronous getCapabilities method is not usable for exposing hardware capabilities. In other words, the person who posted that issue wanted to be able to describe whether the codec was supported in hardware or software, found themselves unable to support hardware adequately, and proposed an async method instead. So we simultaneously have the PING people saying it's leaking hardware, and then issue 49 says no...
B: I want to actually leak hardware, and I can't. Both of those things probably can't be true at the same time. So here's what we're proposing to do: we are going to update the security and privacy section to include some of these things.
B: Some of it is already there, but we want to make it a little bit clearer. Then we have the generic getCapabilities issues, in issue 2460 and issue 49, and we can basically take those on there. But basically the position I would like the working group to take is that there's really nothing specific about WebRTC-SVC with respect to leaking hardware capabilities.
B: Yeah, I think that's an issue we should discuss, but kind of outside of this particular PING bug.
J: Yeah, I agree with your analysis, Bernard. I wouldn't bother with the note about user agents, because that's describing a temporary situation and it's not adding fingerprinting surface anyway. But given what you described, I agree that there is really nothing exposed in SVC that's adding anything there.
B: I think we should discuss adding an async method, but that's separate. I would add a note to the issue in WebRTC-SVC just to say: hey, this is the generic issue and it really has nothing to do with SVC. And then in a minute, Henrik, we'll get to the generic slides which Jan-Ivar was going to present to PING but never got to. But basically, Henrik, the idea is we can talk about an async method, but it has nothing to do with this particular PING bug.
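For context on the async-method idea, a hypothetical calling shape is sketched below. Nothing here is a standardized API: `queryCapabilities` is a made-up wrapper that just moves the existing synchronous query behind a promise, which is roughly the shape issue 49 asks for, since an async method would give the user agent room to probe hardware (or gate the answer) before resolving.

```javascript
// Hypothetical async calling shape for a capability query. A real async
// method could defer resolution until the user agent has probed hardware;
// this sketch only shows what the change would look like for callers.
function queryCapabilities(syncGetCapabilities, kind) {
  return Promise.resolve().then(() => syncGetCapabilities(kind));
}

// Mock of the existing synchronous RTCRtpSender.getCapabilities shape.
const mockSync = kind =>
  kind === 'video' ? { codecs: [{ mimeType: 'video/VP8' }] } : null;

queryCapabilities(mockSync, 'video').then(caps => {
  console.log(caps.codecs[0].mimeType); // "video/VP8"
});
```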
J: On the process bits, I don't know that the right approach is necessarily to re-close the issue. I don't think going back and forth on this will help, but I think what this gets us is clear consensus that the working group doesn't believe this exposes new fingerprinting surface, and then, once we have stated that consensus, we can work with PING on figuring out the next steps, right?
H: Yeah, sure. The question, Dom: what I just learned from the chairs' meetings is that horizontal review and wider review must show that we've addressed all comments, so would that include this?
B: Okay, so here are the generic slides that Jan-Ivar prepared for the PING folks about the generic issue of getCapabilities. You didn't get to present it; do you want to?
H: Sure. I'm mostly referencing the documents that you put together, Bernard. It's basically saying that a site can learn about a visitor's underlying hardware without a permission prompt; that was the issue as filed. But then it's basically saying, as you've said here, that the same information is available in the SDP, which is inherently needed for peer-to-peer signaling, and the use cases for that are data channels, receiving media, or sending media other than camera, microphone and screen, for instance canvas or element captureStream. Next slide.
H: So, there it is, yes. We already have fingerprinting notifications in the spec about getCapabilities and createOffer, and I think, Bernard, you had a graphics hardware and fingerprinting document that linked to some of our thinking there: basically, the same information can be inferred from other sources such as WebGPU, WebGL and the Performance API. So the proposed solution is to include a note relating to implementation issues of hardware capabilities, and then, I think, to ask for this to be dealt with outside of WebRTC.
H: That is, if someone wants to propose a permission prompt for exposing graphics hardware information, it would happen elsewhere. Is that correct, Bernard?
J: To me, the real point is that a lot of this fingerprinting surface is indeed already exposed in other APIs, and getting a proper solution or mitigation to this requires coordination across these various APIs. I've tried to get PING to take ownership of this, but I don't think...
J: I mean, I guess we need to figure that story out. My sense is that we cannot stop until this gets figured out, because it's a really complicated issue, but I think we at least need to be very clear as to what our argument is. And, you know, I could be wrong on the negotiation thing, but if I'm not, then I think we need to be careful in how we present this and, as a result, in what kind of mitigations would be possible or not.
H: Well, I don't think telling JavaScript "don't look at the SDP" is going to work, but maybe if we end up solving SFrame and browser-based keying down the road, we could at some point encrypt the SDP. That's a whole different ball of wax. Okay.
B: Okay, anyway, I think that's probably it for now. I'd like to turn it over to Tim, who's going to talk about new WebRTC NV use cases. Cool.
G: Thank you. So, yeah, the theory here is that there are a bunch of people out there who are doing things with WebRTC which kind of mostly work, but don't work as well as they should. So I went and talked to a few of them and came back with these use cases that are just on the edge of what WebRTC 1.0 does, and I think that means they're good candidates for improvement in NV.
G: Basically, you know, these are things that we could fix for the next version. So, next slide, please. The first one is P2P broadcast: things like auctions and live events. People want low latency, lower latency than HLS and DASH can do, and you can do that with WebRTC, but the moment they do that, they lose a bunch of features that they kind of love.
G: So it's a question of how many of those we can put back in, and then whether we can also give them the other thing that we do with WebRTC, which is to work behind NAT. If you think about an auction site that's maybe actually on a fiber-to-the-premises link, they've got the bandwidth to broadcast to their 500 bidders or 100 bidders or whatever, but they maybe don't have enough public IP addresses to do that, so: kind of bonus points.
G: I'm not really claiming that. So the idea here is to try and maximize the reuse of existing high-latency streaming assets: create stats that allow people to see the things that they would have got from DASH, and also, and this is a big ask, I know, clarity on the rules of autoplay. A lot of these sites are receive-only, essentially, and autoplay on video and audio is something I get more bugs from than I get from anything else.
G: So we just need to make that work somehow, and it doesn't. And then there's a final thing here: maybe we could do something tricksy with allowing DRM to work, by taking our WebRTC media and piping it up through the DRM architecture. I mean, it's sort of like SFrame, only it would be nice if they could use their existing encryption. So, yeah, that's the idea for that one. Next slide, please.
G: Yeah, so this is about cameras in hospitals, factories, whatever, and the use case is that quite often there's a nearby user. So there's a camera, maybe watching a baby or a patient or something, and the user is maybe sitting at a nursing station or downstairs watching the baby monitor, so they're probably on the same network. And then what happens when the router goes down?
G: How do they maintain the linkage between those two devices, which are probably on the same network, although not definitively, without needing to go back to the external internet and making that a dependency? So it's kind of trying to raise the availability of these things. Next slide, please.
G: The assumption you can make is that these endpoints have already been connected, and probably recently, and what you want to be able to do is reconnect without recourse to non-local servers. A potential way of doing that would be to make STUN and offer/answer in some way optional or predictable for reconnections. And, you know, maybe you do that with service workers; I don't know how you do it, but it's a desirable feature, I think. Next slide, please.
G: The next one is the decentralized internet. A lot of people are using the data channel for a bunch of interesting things, and Matrix is one of the ones that I'm particularly interested in; that's an entire peer-to-peer mesh internet messaging network. But what they really want is a home for their data channel, because the page lifecycle just doesn't work for these encryption services, essentially, or for a bunch of other things that you want to try and do in that space.
G: If you push them into a service worker or somewhere, then you would allow a page to issue a fetch which could then be resolved either with a hit from cache, as it is at the moment, or over HTTPS, as it is at the moment, or by a service running in the service worker that's doing some P2P over the data channel. And it's that third option that we can't do at the moment. Next slide, please.
this
one's
about.
G
If
you
see
there's
a
bunch
of
people
out
there
who
are
doing
communications
apps
on
on
mobile
phones
and
smartphones,
and
you
find
that
they're
building
them
as
native
apps
and
they're,
typically
native
apps
lit
wrapped
around
libweb
otc,
and
you
ask
yourself:
why
aren't
they
aren't
they
progressive
web
apps?
And
the
answer
is
that
the
robustness
isn't
there?
You
can
lose
the
call
way
too
easily.
G: You know, an incoming GSM call will just drop the WebRTC connection, and likewise, if the user accidentally browses off the page, then they lose the call. From the user's point of view it's kind of understandable, but the person on the other end has absolutely no idea what just happened to that video call; there's no information whatsoever.
G: The ambition here is to try and at least allow the far end to have a sense of what happened: the fact that this call has gone away, or may come back, or whatever. So, some way of indicating that this is happening, and of sending some sort of message to them in terms of audio or play-out or something. I mean, I don't know how this is done, but that's my suggestion. And ideally you want an API to allow reconnection of a pre-existing call after a GSM interruption.
G: In my experience. And then the third one, which is kind of a big ask, I realize, is to be able to park a track in the service worker so that you can move tracks between pages, so that if you're a support worker and you're co-browsing with somebody, you can keep the audio running whilst you're talking them through how to use their banking site or whatever. Obviously there are huge security restrictions one has to apply to that, but I think it's a desirable aim.
G
So-
and
this
is
a
again
absolutely
huge
ask,
but
if
you
look
at
like
the
way
that
people
are
trying
to
deploy
webrtc
in
their
businesses,
you
find
that
actually,
in
a
lot
of
cases,
they
have
to
kind
of
get
a
bunch
of
departments
to
get
buy-in
because,
like
you,
have
to
spread
webrtc
out
over
a
bunch
of
servers
and
a
lot
of
that
comes
down
to
the
way
that
that
the
signaling
is
done
in
overweb
sockets
and
I'm
kind
of
put
my
hand
up
to
having
been
responsible
for
at
least
part
of
that.
G
But
there
are
simple
cases
where
we
make
it.
Maybe
could
could
minimize
the
signaling
such
that
you
could
bring
up
a
channel
with
pretty
much
no
signaling
at
all.
I
mean
if
you
look
at
whip,
that's
kind
of
that's
a
heading
in
that
direction,
but
but
next
slide
please.
I
think
we
can
go
a
little
further
than
that.
G
Potentially-
and
I
don't
know
if
we
want
to
talk
about
this-
and
maybe
it's
the
wrong
forum,
but
but
in
fact
a
url
like
that
actually
has
everything
you
need
to
bring
up
a
data
channel
like
you.
Could
con
you
could
compress
the
whole
of
of
of
the
offer
answer
to
to
something
as
simple
as
that
and
we've
done
a
little
a
couple
of
little
experiments
with
it
and
it
and
it's
doable
again.
G
There
are
some
some
interesting
compromises
you
have
to
make
like
isolite,
for
example,
and
you
have
to
assume
who's
going
to
be
the
server
and
the
client.
So
there's
like
a
bunch
of
the
optionality
in
the
normal
offer
answer
goes
out
and
you
kind
of
define
it
by
config,
but
it's
doable
anyway,
so
yeah
that
I
think
that's
pretty
much
all
I
wanted
to
say
just
that
that
you
know
those
are
the
the
options
that
I
think
we
should
be.
Maybe
looking
at
adding.
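For reference, the WHIP direction mentioned above boils signaling down to a single HTTP POST of the SDP offer, with the answer coming back in the response body. A sketch of the request a client would build follows; the endpoint URL is made up.

```javascript
// Build the HTTP request a WHIP-style client sends: one POST of the SDP
// offer, Content-Type application/sdp, answer returned in the response body.
function buildWhipRequest(endpointUrl, sdpOffer) {
  return {
    url: endpointUrl,
    method: 'POST',
    headers: { 'Content-Type': 'application/sdp' },
    body: sdpOffer,
  };
}

// In a browser this pairs with a peer connection roughly like so:
//   const pc = new RTCPeerConnection();
//   pc.addTrack(...);
//   await pc.setLocalDescription(await pc.createOffer());
//   const req = buildWhipRequest('https://example.com/whip/room42',
//                                pc.localDescription.sdp);
//   const res = await fetch(req.url, req);
//   await pc.setRemoteDescription({ type: 'answer', sdp: await res.text() });
```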
K: People are used to RTMP: they want to put in a URL, point it at a server, and be done with it, and today with WebRTC they cannot. If they have a software solution, that's okay, but when they have a hardware encoder, like a lot of people have, then it's not okay anymore, because they can implement the WebRTC stack or take an existing library, but then, for the signaling, every time they want to add a new platform...
K: They have to have an implementation, and when the hardware has shipped it's too late. So WHIP was a solution to that as well: it makes life easy for everybody that is developing a hardware encoder. Hardware encoders by default support YouTube, Vimeo, Dailymotion, all of that, but supporting WebRTC services is made almost impossible by the lack of something simple. So the goal of WHIP is that use case, with a simple URL, and I agree there is a...
A: There's an interesting trade-off in the URL way of setting up connections, which is that, if you send out the information in this form, you have no idea who you're talking to, so you're pretty much limited to saying that.
G: That's where the fingerprint comes in, yes. So you can't lose the fingerprint completely, but the assumption is that maybe you make that checkable with HTTPS or something: you can check the host name with HTTPS and you get a guarantee that you're talking to the server you think...
G
You
think
you're
talking
to
but
yeah
sure
that
you're
making
absolutely
you're
making
a
set
of
compromises
about
the
use
cases,
as
alex
said
that
you're
you're
aiming
at
a
particular.
You
know,
you're,
aiming
at
a
particular
service
that
has
a
particular
set
of
capabilities
and
like
that
means
that
a
lot
of
the
optionality
is
at
least
potentially.
You
can
get
rid
of
it.
I: So the question would be: can it be done in JavaScript, or what is missing natively for doing so? If it cannot be done natively and we need to do something, that's fine. If it can be done in JavaScript, I would guess that we could consider adding native support as well, if we see that it's gaining a lot of usage, because if there's a lot of usage it means it's a good pattern, and it's fine to actually ingrain it into the platform.
G: So the thing you can't do in JavaScript is the ICE ufrag. If you could set the ICE ufrag and password in JavaScript, as opposed to having to use the ones you've been given, then this model would be possible; but because you can't set it in JavaScript, you can't do this. For everything else you could make it up: you could just use templated SDP and make up what you thought the answer would have been, but for the ufrag you don't know that.
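A rough illustration of the templating idea: everything in a remote description can be synthesized from a template except the peer's real ICE credentials, which is the gap being pointed at. The template and values below are placeholders, not a working session description.

```javascript
// Fill a minimal SDP template from config. The ICE ufrag/pwd are the one
// pair of values that cannot be invented: they must match what the remote
// ICE agent actually uses, which is why predictable credentials (or the
// ability to set local ones from JS) would be needed for signaling-free setup.
function fillSdpTemplate(template, values) {
  return template.replace(/\{(\w+)\}/g, (_, key) => {
    if (!(key in values)) throw new Error(`missing template value: ${key}`);
    return values[key];
  });
}

const answerTemplate = [
  'a=ice-ufrag:{ufrag}',
  'a=ice-pwd:{pwd}',
  'a=fingerprint:sha-256 {fingerprint}',
  'a=setup:passive',
].join('\r\n');

const sdp = fillSdpTemplate(answerTemplate, {
  ufrag: 'abcd',                    // must match the peer's real ufrag
  pwd: 'secretpassword1234567890',  // likewise
  fingerprint: 'AA:BB:CC',          // placeholder DTLS fingerprint
});
```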
I: Okay, sounds like a good issue could be filed there. I guess the other thing: you were mentioning service workers in a lot of cases, and I was thinking about it a long time ago, and I still like it, but I think shared workers might be a better place than service workers.
G
I
think
that
the
issue
here
that-
and
this
needs
really
thrashing
out
with
the
when
by
looking
at
your
use
cases
carefully,
but
it's
to
do
with
how
you
map
the
life
cycle,
like
it's
all
about
this
life
cycle,
and
I
think
I
suspect
that
some
of
them
won't
work
in
in
I
don't
know.
Basically
it
needs
needs
pretty
careful
examination.
H: Just a high-level comment: you mentioned five use cases, but I think the last two are not actually use cases. Call robustness and reduced-complexity signaling sound like betterments that you might even ask for within existing use cases, right?
H: And unmute, yes. So I originally had 15 minutes to do this; I reduced it to 10 and added five minutes for a proposal if necessary. So this is more of a refresher I was asked to do about the same-origin policy. Wikipedia says it's an important concept in the web application security model, and I think most of you know this; I'm just setting this up, as I think it's going to be important in discussions coming later.
H: The same-origin policy prevents malicious scripts on one page from obtaining access to sensitive data on another web page through that page's DOM, which translates into: the script can load a lot of things, but it can't necessarily see them or read them back. And then, what I highlighted in red: a strict separation between content provided by unrelated sites must be maintained on the client side, to prevent the loss of data confidentiality or integrity. And it goes on to describe a site.
H: This Wikipedia entry is fairly old, so it describes a user visiting a banking site that doesn't log out. If it weren't for the same-origin policy, other malicious sites, sites the user goes to later, might be able to access user data by sending requests to that banking site, which would be authenticated with the user's session cookies for that banking site. So that's why we have the same-origin policy. Next slide. Wikipedia goes on to say it applies only to scripts, and this is where we shouldn't believe everything we read on Wikipedia.
H: You might take that to mean that you have access to cross-origin media and other things that are not scripts, but that's actually not what it means. It means the barrier exists between the JavaScript and the content, which means that a lot of things may be presented to the user, but the JavaScript is not authorized access to them. So that's true for images.
H: So if you try to draw an image into a canvas, you're allowed to do that cross-origin, but the canvas becomes tainted, which means that if you try to get image data out of it, you will get a security error. Same for videos: you can play them fine, ads play videos, but as soon as the JavaScript tries to access the content by drawing it to a canvas and then reading it back. The user could still see it if you draw it to a canvas, so you can create a snapshot and display it, but you cannot.
H: The JavaScript cannot get access to it. This extends to peer connection as well, in that you can capture a video element into a stream, add it to a peer connection and send it, and all the browsers today will send black instead of the content. That's because, if they didn't, it would be very easy to evade the same-origin policy by basically creating peer connection one and peer connection two, doing a local loop, and sending the data to yourself.
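The tainting behavior described above can be sketched as follows. Canvas APIs only exist in a browser, so the helper takes the context as an argument, and the mocks below stand in for a clean and a tainted 2D context; in a real page, `getImageData` on a tainted canvas throws a `SecurityError` DOMException.

```javascript
// Try to read pixels back from a canvas 2D context. On a canvas tainted by
// cross-origin content, getImageData throws a SecurityError; this helper
// turns that into a null return so callers can tell "displayable but not
// readable" apart from "readable".
function tryReadPixels(ctx, w, h) {
  try {
    return ctx.getImageData(0, 0, w, h);
  } catch (e) {
    if (e.name === 'SecurityError') return null; // tainted: render-only
    throw e;
  }
}

// Mock contexts standing in for a clean and a tainted canvas.
const cleanCtx = { getImageData: () => ({ data: new Uint8ClampedArray(4) }) };
const taintedCtx = {
  getImageData: () => {
    const err = new Error('The canvas has been tainted by cross-origin data.');
    err.name = 'SecurityError';
    throw err;
  },
};

console.log(tryReadPixels(cleanCtx, 1, 1) !== null); // true
console.log(tryReadPixels(taintedCtx, 1, 1));        // null
```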
H: Next slide. So here's a quiz: what are the obvious screen-sharing risks? The options I give are: (a) you may accidentally share something unintended, the wrong tab, or you may be scrolling too fast, or other things on your desktop; (b) non-visible parts may be included when sharing a window, parts that aren't on screen right now or are hidden behind other windows; (c) sharing system audio may reveal other apps or sites that I'm using; or (d) exposure to active attacks on my browsing sessions.
H: The last one is obviously not obvious; that's my point here. So, next slide. And then: do I, as a regular user, understand these risks, and can I navigate them?
H: It's tempting to say yes on the first one, and maybe I understand the second and third one, but the last one is something not even a web developer can protect themselves from. So, next slide, to explain that. The slide I was asked to present is "getDisplayMedia and the same-origin policy", and the TL;DR is that that's not a thing, because getDisplayMedia violates the same-origin policy by design. Sharing a web surface under attacker control may expose the user to active attacks on cross-origin content.
H: The malicious site can pop up iframes from other sites on the web, and it can embed media from other sites on the web, and by displaying it, it now has an incentive to produce these pop-ups, because it can now record them, something you couldn't do before this API. So browsers are supposed to require elevated permissions in the spec, i.e. differentiate when the user picks a web surface to share versus a normal one. Firefox does this at the moment.
M: Okay. So, first of all, it could be that your proposal is going to invalidate everything I said, and if so, I'm sorry that I've taken all your time; I'll try to be quick. As we know, the state of the art is that getDisplayMedia allows you to capture display media, and there is a trade-off here that you've alluded to, which is, on the one hand, by not allowing the application to constrain the selection of the user...
M: By saying you can only select a tab, or you can only select the entire screen, we don't let the application nudge you in a certain direction, and that makes a certain trade-off. On the one hand, the user is more likely to select the wrong thing; on the other hand, the application cannot shove the user in a dangerous direction. We'll get back to that in the next slide. But first I'll say what I'm proposing: I'm proposing a similar API...
M
That
would
be
initially
constrained
to
only
selecting
the
current
tab
and
it
has
some
obvious
benefits.
It
enables
certain
user
journeys,
so
you
can
have
an
application
that
says
hey.
I
would
like
to
share
myself
to
a
meeting
so,
for
example,
if
I'm
right
now
in
google,
docs
or
google
slides,
then
I
could
have
a
button.
There
saying
hey
share
this
one
to
the
meeting.
M
I've
got
somewhere
else,
or
maybe
I've
even
got
it
in
a
pane
next
to
me,
or
anything
like
that,
and
I'm
sure
that
everybody
can
think
of
other
use
cases.
So,
for
example,
you
can
locally
record
a
gaming
session.
You
can
record
a
reproduction
of
a
bug.
I
know
that
somebody
want
we've
got
some
partners
who
wanted
to
capture
a
screenshot,
so
there
are
all
sorts
of
things
that
you
could
do,
but
of
course
we
all
knew
about
this
before
right.
M: So if we move to the next slide, we will try to dive deeper into the security trade-off.
M: The security trade-off, or privacy trade-off, would be that, well, the user can now share their own thing. They are presented with a very simple prompt. That prompt, by the way, we can make very scary; we, being the user agent, can make it very scary, can make it repeating, can make it staggered by time.
M
So
maybe
the
user
is
not
allowed
to
click
until
it's
been
three
seconds
like
any
user
agent
can
think
of
how
they
can
make
sure
that
the
user
just
doesn't
just
click
it
away,
but
once
this
is
done,
the
user
shares
exactly
what
he
is
expecting
to
share.
He
stays
in
the
same
tab
and
at
least
in
the
case
of
chrome,
he's
going
to
see
the
same
old
blue
frame
around
the
theme.
So
you
would
immediately
understand
that
hey
I'm
sharing
this.
M
On
the
other
hand,
we
you
could
claim
that
the
application
can
now
nudge
the
user
exactly
to
the
most
under
to
the
to
share
the
thing,
that's
most
under
its
control,
and
that
is
true
now.
M
Our
argument
would
be
that,
except
for
the
fact
that
we
nudging
there,
this
is
actually
more
secure
and
the
reason
is
it's
more
secure
is
because
normally
with
well-behaved
websites,
you
eliminate
the
biggest
risk,
and
that
is,
in
my
humble
opinion,
miss
sharing,
sharing
the
wrong
thing
and
just
as
just
because
I
think
I've
got
enough
time.
I
can
say
that
when
you
let
the
application
know
that
I'm
sure
I'm
capturing
myself,
then
you
enable
all
sorts
of
desirable
behaviors
like
you
could,
for
example,
crop.
M
So
if
I
now
get
the
entire
tab
and
they
get
the
permission
to
do
the
I
get
the
entire
tab.
Maybe
I
even
say
you
know
what
I'm
not
even
going
to
share
all
of
this
remotely
I'm
going
to
crop
it
and
share
just
a
part
of
it,
and
that's
it.
That's
all.
I
had
to
say.
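The capture-then-crop flow being proposed could be sketched roughly as below. `captureCurrentTab` and `track.cropTo` are hypothetical names standing in for the proposal, not shipping APIs; the crop-rectangle clamping helper is the only part that runs anywhere.

```javascript
// Clamp a requested crop rectangle to the captured surface, the kind of
// bookkeeping a "capture this tab, then crop to one region" flow needs.
function clampCrop(surface, crop) {
  const x = Math.max(0, Math.min(crop.x, surface.width));
  const y = Math.max(0, Math.min(crop.y, surface.height));
  return {
    x, y,
    width: Math.min(crop.width, surface.width - x),
    height: Math.min(crop.height, surface.height - y),
  };
}

// Hypothetical usage in the proposed API shape (not a shipping API):
//   const stream = await navigator.mediaDevices.captureCurrentTab();
//   const [track] = stream.getVideoTracks();
//   await track.cropTo(clampCrop({ width: 1920, height: 1080 },
//                                { x: 100, y: 100, width: 800, height: 600 }));
```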
B: So we'll give the floor back to you, Jan-Ivar.
H
Okay,
thank
you
and
yeah,
and
thank
you
eli
for
bringing
this
to
our
attention.
I
think
this
is
a
good
use
case
that
is
going
to
require
us
to
to
think
of
how
to
solve
this.
So
I
wanted
to
include
and
so
based
inspired
by
your
proposal.
H
Last
week
we
had
a
joint
meeting
with
the
media,
entertainment
and
media
and
entertainment
interest
group,
and
this
came
up
as
potential
interesting
future
features
that
the
working
group
might
be
working
on.
So
I'm
going
to
present
those
slides
here-
apologies
for
the
different
layout,
so
just
to
recap
what
I
told
them
is
today
web
get
display
media.
H
The
today
would
get
display
media
web
surfaces
may
be
captured
only
if
the
user
picks
them
carries
significant
risks
yeah
and
that's
why
the
api
prevents
sites
from
inference
influencing
choice
behind
elevated
permissions.
So,
ironically,
it's
it's
ironic,
then
that
sharing
native
apps
is
actually
safer
than
web
apps.
So
this
is
unfortunate
since
we'd
like
to
promote
web
over
native,
and
it's
creates,
as
you
mentioned,
prohibited
ux
flow
for
use.
H
Cases
such
as
presenting
a
google
doc
or
recording
in
this
meeting
the
next
slide,
so
a
better
integration
seems
like
it's
something
we
should
aim
for.
So
what,
if
web
pages
stream
themselves
into
a
conference
which
was
brought
up
by
you
by
by
elon
and
google?
I
think
here
and
so
the
page
could
use
existing
tech
peer
connection
to
join
an
ongoing
meeting
and
stream
itself
there
if
it
could
capture
itself
the
good
things
here.
I
think
my
takeaway
here
is
that
the
document
only
needs
to
capture
itself
and
but
to
be
secure.
H
The
document
must
be
origin,
isolated
as
a
matter
of
policy,
and
that
would
be
across
origin
policies
so
that
the
document,
because
trying
to
ensure
same
origin
with
a
dom
object
model
otherwise,
would
be
near
impossible.
I
think
because
the
document
the
javascript
can
make
changes
to
the
document
at
some
point,
unfortunately
course
only
allows
opt-in,
which
is
not
strong
enough,
since
rendering
a
document
from
another
origin
is
different
from
reading
it.
H
You
get
a
lot
more
information
about
the
information
than
that
in
that
other
site,
so
we
think
a
new
policy
would
be
needed.
It
might
be
something
like
just
hand
waving
here,
cross
origin
and
better
policy
disallow
next
slide.
H
So
if
we
can
have
that,
which
is
a
big
if
this
would
be
more
secure,
but
it
still
would
need
permission
because
rendering
itself
may
contain
private
info
things
like
link
purpling,
that
tells
you
your
browser,
history
form.
Autofill.
Might
you
know
browsers,
might
fill
in
your
address
credit
card
info
that
kind
of
stuff,
and
this
will
be
captured
in
the
rendering
extensions
like
lastpass.
H: Interestingly, that would put something like this in scope for the WebRTC working group, which might not be a bad thing, though I think this would be something that DOM security engineers might want to look at. Next slide. And there's already another issue, 145...
H
That's
asked
for
something
that
I
think
is
related
to
this
adjacent
to
this
is
to
capture
screenshots
from
dom
a
lot
of
browsers.
Have
this
screenshot
ability
right
now,
but
there's
no
exposure
to
javascript
so
depending
on
the
use
cases,
since
this
would
only
be
secure
if
you
on
pages
that
are
using
this
new
made
up
origin,
isolated
concept,
you
could
in
theory,
then
imagine
you
could
draw
image
document
to
a
canvas
and
the
canvas
becomes
tainted.
You
can
still
display
it.
H
But
again,
if
you
try
to
read
data
out
of
it
from
javascript,
you
would
get
a
security
error.
Unless
you
have
this
policy,
in
which
case
you
might
be
able
to
get
your
own
image,
data
you'd
still
need
permission,
since
it
exposes
the
same
origin,
rendering
with
private
user
information,
so
we
might
have
to
since
get
image.
Data
does
not
return
a
promise
we
might
have
to
but
unfortunate,
but
fortunately
create
image.
H
Bitmap
does
so
we
might
be
able
to
do
something
there,
and
these
I
just
put
a
fiddle
in
there
as
well,
to
show
that
these
apis
currently
allow
the
allow
these
calls,
but
they
produce
security,
error
for
cross-origin
content,
but
not
same
origin
content.
I
think
that's
my
last
slide.
I
H
H: Yes, I was more concerned with getting away from "browsing context", because a browsing context is the container within which multiple documents may live. So I think it would be concerning; I think we wouldn't want to capture the browsing context, because if you hit the back button or the forward button, suddenly the user might not expect the capturing to continue. So "document", or something lower in the document model, might be doable, yeah.
G: There's a side case there about actually not really wanting to share the whole of your 4K screen. So it'd be kind of nice to be able to say, you know, the left-hand tab or something; not tab, that's a bad answer, a left-hand div.
A: So that's the huge difference between drawImage, or capturing documents, and the browsing context.
D: Yeah, it's possible in Chrome to start sharing one tab and then, temporarily, you move and show a different tab by clicking "share this tab instead"; you show something, and then maybe you go back to your slides and you say "share this document instead". I do think it's very useful to have it at the tab layer.
D: In my mind, this API works very closely to the existing stuff; it has more or less the same privacy concerns as the existing stuff. So, to me, whether it should be put on document as the API name, or live where getDisplayMedia lives today, seems more like bike-shedding rather than, you know, the core, the meat of the issue. So, are you suggesting that this should be in a different working group because of the name?
H
Well,
I'm
not
sure
that
the
necessarily
the
task
needs
to
be
solved
is
within
this
working
group,
depending
on
how
we
define
the
task
that
needs
to
be
solved,
I
mean,
if
we,
if
we
can
all,
we
can
already
capture
from
a
canvas.
So
if,
if
canvas
had
this
ability,
there
would
already
be
a
way
to
do
it,
I
guess,
but
my
my
main
takeaway,
I'm
not
necessarily
married
to
an
api
surface
here,
it's
more
about.
H
I
like
the
proposal
from
that
a
lot
is
bringing
up.
I
like,
I
like
the
idea
and
the
functionality
that
is
brought
up
here,
but
that
I
I
believe
pretty
strongly
that
we
need
to
secure
it
and
make
it
safe
so
that
we
disallow
cross-origin
content
from
being
captured.
D
What
what
about
the
the
primary
use
case
of
get
display
media
today,
which
is
I'm
sharing,
slides
and
then
you
know,
while
I'm
sharing
slides,
I
want
to
show
you
something
so
I
move
over
to
a
tab.
I
show
up
the
video
and
then
I
move
back
to
the
slides.
I
continue
my
presentation
and
those
two
web
surfaces
are
the
only
surfaces
I
ever
intend
to
share
and
I
use
get
displayed
media
to
do
this
and
then
I'm
a
stupid
user
and
I
share
my
entire
screen
and
then
I
share
something
accidentally.
H
Right
so
I
I
would
not
necessarily
put
one
privacy
concern
against
the
other.
I
think
we
should
solve
all
of
them
and
I
I'm
pointing
out
the
difficulty
in
trying
to
move
get
display
media
in
the
direction
of
being
more
lenient
towards
sharing
web
services.
When
I
believe
at
the
moment,
browsers
are
several
browsers
are
violating
the
spec
by
not
providing
the
elevated
permissions.
H
I
I: So if we look at this meet.google.com page that we are all looking at, does it contain cross-origin content, for instance? Is there a case where you could get away without any prompt and still get, like, 80% of the use cases that people are thinking of, and leave the prompt cases to getDisplayMedia, maybe with some small additions or better prompts as part of getDisplayMedia?
H
Just briefly, I'd say I think no, because even without the cross-origin issue, rendering still exposes things. Google Meet is not going to throw up a lot of links, but a malicious site might throw up a lot of links, and the purpling of visited links would tell you what the user's browsing history is.
I
It's very similar to password fields: when you're capturing, you would not want to see the password's letters, even for a few milliseconds; you would want the captured image to not show any letters. So maybe for links we could do something very similar: the page is doing something, and you render it in a special way that is safe, so links will not be decorated, and maybe that's fine.
H
Right, and extensions as well, like LastPass. There's a big space here, and in the worst case we're ending up with a different rendering target for HTML. So it's a tricky area.
B
So, given that we only have one minute, I don't know if there's any wrap-up or next steps to talk about, Harald.
A
So, the only question I forgot to ask at the end of the breakout box presentation was: is this ready for working group adoption, or should we discuss some more?
I
Yeah, I think there's interest. In terms of API shape, I think we should first consolidate a little bit on insertable streams, then apply something very similar to the breakout box thing. Other than that, why not.
A
So that means we adopt the principle of: yes, we're going to do something here, but we're not sure of the API shape yet. That's in line with a lot of the comments; let's see how it goes. Okay, together with the chairs we'll send out a call for consensus on that. Yep, sounds good. We have a decision on something, at least.