From YouTube: WebRTC WG meeting 2022-06-23
A: We're going to talk about the WebRTC Working Group rechartering, Region Capture, Region Capture extensions, and face detection today. In terms of future meetings, we'll have one on July 19th, we will skip the month of August, and then we will have our TPAC meetings.
A: So far we have WebRTC Working Group meetings scheduled for the 12th and 13th of September, and then a joint meeting with the Media Working Group and the Media and Entertainment Interest Group on the 15th of September. So that's our schedule for the next couple of months.
A: Okay, we have the slides up on the wiki. We have a scribe.
A: Just a reminder: this meeting is being recorded. Please use headphones or an echo-cancelling speakerphone, and state your full name so we can get it into the minutes. I don't know if we'll be doing a poll today (I suspect not), but we have the mechanism available. Just a few words about document status: just because something is in the W3C repo doesn't mean it's been adopted.
A
We
use
a
call
for
adoption
to
do
that
and
editors
drafts
don't
represent
working
group
consensus,
but
working
group
drafts
do
once
they're
confirmed
by
a
cfc.
It
is
possible
to
merge
brs,
but
you
should
attach
a
note
if
it's
controversial
all
right.
Here's
the
issues
for
discussion
today,
we're
going
to
start
off
with
a
recharger,
then
get
into
region
capture
and
face
detection
tom.
C: Yes. So, as you probably know, our working group operates under a charter that we need to renew every so often, in general every two years. The current charter under which we operate is expiring by the end of September, and the process to get a charter renewed needs some processing time, including review by the W3C Advisory Committee.
C: As you probably know, this group has been responsible both for doing all the integration in the real-time communications protocol since its inception, and also for doing most, if not all, of the work in terms of capturing media that can be sent over this real-time communication protocol. One recurrent discussion we have each time we look at this charter is whether we should split that work and migrate the media capture work to another group. In the issue, Harald suggested that a potential group that could hold such work would be the Media Working Group, which is obviously dealing with media.
C
So
anyway,
I
guess
what
I
would
like
to
hear
from
you
all
today
is
first
whether
there
are
any
our
changes
that
we
need
to
discuss
for
this
new
charter
beyond
the
list
of
deliverables
and
on
the
particular
questions
around
where
the
work
on
media
capture
should
be
done,
whether
it
should
be
moved
to
another
group
or
not
whether
there
are
work
items,
we
should
consider
dropping
from
the
charter.
C: I haven't looked into it. It depends on whether the charter is flexible in terms of scope or not. But I would say, if we were to suggest that, clearly we would need very strong coordination with the Media Working Group, and that may indeed include having them recharter; I don't know off the top of my head whether that would be necessary or not.
C: So I don't think we can afford to wait until September to have the discussion. But if we were starting the discussion now, and the conclusion was that we would need to keep operating under the current charter for, say, three more months, we could extend the current charter while we go through the administrative process. But I don't think we should wait until September, when we know it would be too late no matter what, to have the conversation, if that's a conversation we want to have.
D
What
I
know
is
that,
from
an
administrative
point
of
view,
every
time
we
change
charter
the
scope,
even
though
it's
just
migrating
one
item
to
the
other,
then
on
our
side
we
have
to
to
do
some
assessment
and
so
on,
and
it
can
take
time
and
so
on.
So
it's
not
it's
not
free
to
do
the
migration
at
least
first.
A
So
dom
are
there's
some
additional
meanings
or
other
things
we
should
be
setting
up
on
this
topic
before
september.
C
Yes,
we
should
have
meetings
with
the
very
least
media
working
group
chairs
to
discuss
whether
they
would
be
willing
to
do
so
in
the
first
place
and
then
coordinate
on
what
it
would
take
to
make
it
happen.
But
the
first
part
of
the
question
is
really
for
us
to
say:
is
it
something
we
want
to
do
or
something
we
want
to
explore,
or
is
it
just
not
a
priority,
and
I
guess
un
was
saying
we
should
have
good
reason
to
make
that
decision,
because
it's
not
cost-free
yeah.
A: One thing I would observe is that there is a whole bunch of different working groups working on specs that interact with VideoFrame, and I think this is causing problems; we're seeing it already, and also with encoded chunks, where we're getting out of sync in different specs and things are falling between the cracks. So I do think there's definitely an issue there with too many working groups involved in that work.
E: So I've got a question, and that is: how busy is the Media Working Group in general, and what is their capacity for taking on additional work? Because I would mostly be concerned about whether this increases the pace at which we make progress, or decreases it.
C: Well, some people in this group are also involved in the Media Working Group, so maybe they can share actual insights; I'm not. I will say that they do have a number of other work items, including WebCodecs and Media Capabilities, which obviously come with their own set of challenges, controversies, and coordination needs.
B: I'd say that this is something that the chairs need to discuss with the Media chairs and report back to the group as soon as possible.
A: Yes, I think, given the timeline, we probably need to do that. Yep.
A: All right, so the next topic is Region Capture. We've got two things to discuss today: issues 17 and 18; 17 will probably take most of the time. We'll have a presentation from Jan-Ivar about proposing sync usage for CropTarget, then Elad will present the case for async, then we'll have a shared discussion, and then we'll talk about issue 18.
F: All right, thank you. So yes, this is about cropTo, which you pass an argument to, to crop self-capture to an element in a way that is consistent, doesn't rely on coordinates, and has some benefits. The issue threads on these issues, issue 17 and related issues, have gotten really long and impenetrable.
F: So we have slideware to highlight what the outstanding issues are, in order to invite the larger working group to participate in decisions. You'll hear two views, and I'm up first, and I'll try to explain the API as I go for those who haven't followed closely. I intend to present arguments that I hope we can discuss from first principles, and that I hope can be rebutted with counter-arguments, to find solutions for them or show why they're not a problem.
F: This is our process and we're following it. It's the working group that has the domain knowledge to design APIs, and it's not the TAG's job, for instance, to call winners or losers on API design, especially not in a First Public Working Draft. If that were the process, we could just disband this working group and save a lot of time. But what the TAG does provide is a list of design principles, and among them are... sorry, can we go back a slide?
F: Yes, so among those principles, I'm going to mention three: one on consistency, one on constructors for new objects, and this one which you see here, which is to use synchronous APIs when appropriate. You will see there are some exceptions to this, and one of them is inter-process communication.
F: So right now, the current API... at first I was going to write "current API"; the problem is that what's in the spec right now is what I mean, and there's a note in the spec that says the current API does not have consensus, which was added for the First Public Working Draft. So I'm calling it the non-consensus API here, just to be clear that these are equal proposals in my mind. So currently you have to mint a CropTarget, and this is for a reason I'll come back to later.
F: The question here for the working group is: does this need to be an asynchronous method? Right now the spec, in the non-consensus note, says you have to await CropTarget.fromElement, and then you mint a CropTarget object instead of the element. Now, why would we do that? It's because we might need to postMessage this element to another realm, and you can't do that with elements, so we get this sort of reference handle instead. So what I'm proposing is that we don't need this to be async.
F: You can just create a new CropTarget and pass in the element, and that would be a synchronous API. This is because the purpose of this target is to associate a serializable identifier with an element, which we'll call minting, and the spec says calling fromElement with an element of a supported type associates that element with a CropTarget. CropTarget is an intentionally empty, opaque identifier, and its purpose is to be handed to cropTo as input. As I've currently specified it, this cannot fail for asynchronous reasons, but an issue has been opened, which I'll talk about later, about exposing resource failures. And so here's an example where you have an iframe: you mint a CropTarget from an element in an iframe.
F
It
could
be
a
different
origin
and
you
post
message
it
to
a
top
level
document
that
is
doing,
screen,
capture
self
captured
through
screen
sharing,
and
it
has
a
video
track
upon
which
you
call
crop2
with
that
crop
target.
If
this
were
all
the
same
document,
you
could
just
pass
in
the
element.
So
crop
target
serves
a
really
useful
purpose
here,
but
the
testimonial
requirement
here
is
that
cropton
must
accept
it
at
this
point.
F
So
this
is
where
this
might
be
confusing,
but
I'll
work
through
it
this,
if
you're
not
familiar
with
the
api.
Hopefully
this
might
actually
help.
My
point
here
is
that
multiple
actions
need
to
happen
before
we're
cropping
anything,
and
this
diagram
shows
that
vertically
you
have
time
progressing
and
on
the
left,
you
have
a
top
level
document
in
the
middle.
You
have
the
iframe
and
some
notes
about
optimizations
on
the
right.
So
the
first
thing
that
happens
on
any
random
page
on
the
web.
F
It
doesn't
know
if
it's
been
captured
yet
is
that
you
mint
a
crop
target
for
an
element
because-
and
this
could
be
done
ahead
of
time,
which
is
shown
here
or
it
can
be
done
later.
So
a
decision,
then,
is
what
what
does
the
javascript
do?
Next?
F
It
can
do
nothing,
in
which
case
nothing
happens
so
or
you
can
post
message
it
to
the
top-level
document
and
where
it
receives
this
token,
and
then
nothing
can
happen
or
the
user
pushes
a
button
which
to
present
something
and
which
creates
the
top-level
javascript,
calls
get
display
media
or
get
viewport
media
and
prompt
user
for
capture
the
user.
May
cancel
or
the
user
may
select
a
different
capture,
in
both
case,
nothing
happens.
And
finally,
we
see
that
the
application,
the
top
level
document
application
decides
to
crop.
It
has
a
screen
sharing
track.
F: So user agents can jump the gun here to optimize this, and as I've shown on the right, you can optimize quite early. But if you do, you run a lot of risks, and you then have to actually handle the case that maybe you prematurely optimized, because you don't actually know that you're going to be capturing it. So that's the main problem with early optimization.
F: The good thing, call it the unoptimized IPC if you will, is that you don't have to: you rarely need to optimize. The first step of optimization is knowing what to optimize, and not necessarily thinking you need to optimize every edge case, and there are a lot of edge cases here, some of which I've shown with the red symbols. Next slide.
F
So
another
reason
for
why
it
needs
to
be
async.
Is
that
there's
a
new
request
on
the
spec
to
change
the
original
spec
said
it
was
infallible,
but
there's
a
new
request
to
say
that
we
want
to
be
able
to
fail,
reject
the
promise
from
the
minting
process
because
of
resource
extortion,
errors,
and
presumably
this
is
because
chrome
has
implemented
some
optimizations.
F: They allocate cropping resources this early, which I've shown has a lot of complexity, but rather than having a fallback, they would rather basically fail to web developer JavaScript.
F: But let's look and see if that creates any problems. This would allow random JavaScript in would-be captured documents to exhaust cropping resources, and that seems bad.
F: So if you have a JavaScript library that you're importing that has nothing to do with cropping, it could cause action at a distance by basically calling this API over and over. It's not behind any permission, so any JavaScript can do it, basically doing a denial-of-service attack on the cropping function without user permission, and defeating cropping. Now, defeating cropping may actually expose private user information in unsuspecting, poorly written apps, which is a foot gun, because that path may never have been tested.
F: What would happen if allocating cropping resources fails? We know there are bugs; if you don't test something, the worst will happen, which is that the app, instead of cropping to exclude important information, will now show and screencast everything. And I hope I've also shown that early resource allocation is inherently unnecessary.
F: So what you have to do instead, to combat that, is provide local per-document limits, which obviously must be smaller, and these would risk becoming exhaustible much sooner, to the point where even well-designed apps that sit in an open tab for a long time could eventually run into this limit. So again, web compatibility issues, and web developers are unlikely to ever expect or check for exhaustion. And again, the better approach would be to have the user agent handle this and not fail.
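The exhaustion argument above can be sketched as a toy model (the per-document budget, mintWithLimit, and carelessLibrary are all hypothetical, purely to illustrate the claimed foot gun): once minting is fallible against a local budget, unrelated imported code can spend the budget and make the app's own mint fail.

```javascript
// Toy model of a fallible, budget-backed mint; purely illustrative.
const PER_DOCUMENT_LIMIT = 100; // hypothetical per-document budget
let minted = 0;

function mintWithLimit() {
  // Rejects synchronously once the document's budget is spent.
  if (minted >= PER_DOCUMENT_LIMIT) {
    throw new Error("resource exhaustion");
  }
  minted++;
  return { id: minted }; // opaque token stand-in
}

// An imported library that has nothing to do with cropping can still
// burn the whole budget: action at a distance, with no permission gate.
function carelessLibrary() {
  for (let i = 0; i < PER_DOCUMENT_LIMIT; i++) mintWithLimit();
}

carelessLibrary();

// The app's own, legitimate mint now fails; an app that never tested this
// path may fall back to capturing (and screencasting) everything.
let appTarget = null;
try {
  appTarget = mintWithLimit();
} catch (e) {
  // e.message === "resource exhaustion"
}
```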
E
All
right
performance,
I'm
very
sorry,
but
yes,
I
think
it
might
be
better.
If
I
interrupt
now
is
that
okay.
F
I
think
I
would
prefer
to
get
through
to
the
end,
because,
okay,
we
have
a
pretty
tight
time
line,
so
I'm
trying
to
spend
two
minutes
per
slide.
So
I
think
I
only
have
two
more
slides.
Does
that
work?
F
It's
this
one
and
one
more
slide.
Yes,
thank
you
so
performance.
F
So
a
part
of
the
spec
that
doesn't
have
consensus,
since
the
user
agent
must
resolve
the
promise
only
after
it
has
finished
all
the
necessary
internal
propagation
of
state
associated
with
a
new
crop
target,
at
which
point
the
use
agent
must
be
ready
to
receive
the
new
crop
target
as
a
valid
parameter
to
crop.
To
no
question
no
problem
with
the
last
part,
but
the
issue
here
is
that
what
does
this
mean
because
modern
browsers
use
process
separation
of
iframes,
which
means
there's
going
to
be
ipc
of
both
state,
propagation
and
post
message?
F: So, looking at the earlier example of the iframe, I've added some arrows here about what would happen; arrows up mean out of process, basically. You call CropTarget.fromElement, and the spec says to do state propagation, which is really, you know, pre-optimized resource allocation for cropping, which would need to go to the browser process. So you send one IPC, and it also says...
F: We can't resolve the promise until that has succeeded. So that's the second IPC, coming back to say that the resource allocation has succeeded, and only now does JavaScript get to have the target that it can then postMessage, which is a third IPC.
F: So this API actually serializes those three IPCs, which is slower than running the IPCs in parallel. You could have the state propagation happen and still resolve immediately, or basically have it be synchronous, in which case it would be up to cropTo to handle this race, and it can handle either approach. So the API I'm proposing here is, I think, faster, simpler, and still optimizable, and it satisfies the design rule that the API implementation will not block, you know, one of the exceptions for synchronous APIs.
F: So this is my conclusion slide. On the previous slide, I also wanted to say that you'll hear Chrome saying that they'll need this, and I hope I've shown that they only need it because they have no fallback.
F: That risks things like denial of service of the cropping resource, exposing potential privacy risks, and potentially fingerprinting information.
F: I think the burden is on user agents to optimize within the API, because a failure of optimization doesn't have to mean the failure of the operation: implement a baseline in cropTo to fall back on. And I think what we're also seeing here is that, when optimization and implementation influence API decisions, since optimization is quite difficult, we see new issues being opened on the working group, and I think that's because optimizations will have side effects.
F
There
might
be
new
information
later
about
how
to
optimize
better,
and
I
think
my
prediction
will
be
we'll
see
new
issues
being
opened.
If
we
don't
close
this
gap
and
basically
said
optimization
should
not
influence
the
api
and
it
would
not
be
a
good
use
of
working
group
time.
Let's
be
done,
there's
no
inherent
need
and
no
developer
benefit
to
it
being
async
and
async.
F
Apis
goes
against
the
w3c
design
principle
I
mentioned
earlier,
which
is
consistency
with
a
media
source,
get
handle
having
an
object
constructor
for
new
objects
and
being
synchronous
when
possible.
There's
a
developer
cost
as
well
to
asynchronous
apis.
That
are
quite
general.
Every
await
is
a
preemption
point
in
javascript.
Javascript
is
single
threaded,
so
a
permission
point
a
single
thread.
That
means
you
don't
have
to
worry
about,
locking
in
order
to
read
data,
but
as
soon
as
you
do
an
await,
you
have
to
reassess
all
state
or
risk
databases
and
also
it's.
F
It
spreads
like
wildfire
because
much
like
cons,
non-cons
methods
and
simple
applause.
If
a
method,
you
call
is
asynchronous,
you
have
to
be
asynchronous
and
the
caller
of
you
has
to
be
asynchronous
so
which
is
unfortunate
but
reality
and
also
having
multiple
process
failure.
Points
for
extremely
rare
resources
is
also
risky,
because
web
developers
would
might
not
expect
or
test
for
it,
and
I've
also
shown
that
it's
slower
delaying
when
post
message
can
happen
and
I'm
happy
to
discuss
performance.
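The claim that every await is a preemption point can be demonstrated in plain JavaScript, independent of any capture API (a minimal sketch; the shared counter is purely illustrative): state read before an await can be stale after it, because other code runs while the async function is suspended.

```javascript
// Shared mutable state, visible to all code on the single JS thread.
let counter = 0;

async function readAcrossAwait() {
  const before = counter; // read shared state synchronously
  await null;             // preemption point: the function suspends here
  const after = counter;  // must be re-read; it may have changed meanwhile
  return { before, after };
}

async function main() {
  const pending = readAcrossAwait(); // runs up to its first await, then yields
  counter = 1;                       // mutation while the reader is suspended
  return pending;                    // resolves to { before: 0, after: 1 }
}
```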
F
There
might
be
other
cases
like
if
you
wanted
to
have
multiple
resources,
and
you
wanted
to
do
like,
like
the
one
area
it
might
be.
A
benefit
is
if
you
want
to
do
this
kind
of
flip
book
between
multiple
targets
that
you've
prepared
already.
But
in
that
case
I
think
it
can
be
shown
that
we
can.
E: Yes, so I asked... yes, thank you. Okay, so I'm going to go over it.
E: Sorry, and I'm going to have to say a couple of things.
F: Well, first, I think we said we'd take any questions about what I proposed, and we'll also have general discussion at the end, right?
E: Sure, I've got two questions, but I need a quick setup for this. So first, you've mentioned first principles, and I think that, yes, we should debate this on first principles, but the TAG does not only offer design principles; it also sets the meta discussion, right? It talks about how we can even have a productive discussion. So I think that it is not necessarily productive for us to discuss other browsers' optimizations, specifically when we talk about resource exhaustion and how it could serve for fingerprinting.
E
That
is
a
valid
concern,
but
nobody
is
forcing
you
to
actually
allocate
resources
to
nobody's
forcing
you
to
actually
exhaust
them
and
to
allow
allow
this.
So
if
you
think
that
this
is
a
bug
in
chrome's
implementation,
that's
very
good
and
we
can
solve
that
bug
over
time,
but
nobody's
forcing
you
to
replicate
this
bug.
E
Similarly,
nobody
is
forcing
you
to
propagate
state
only
when
you
at
one
point
or
another,
the
way
that
that
which
is
currently
phrased
you
could
have,
you
could
just
resolve
the
promise
immediately,
never
allocate
anything,
never
fail,
and
I
just
don't
understand
so.
If
you
could
just
explain
to
me
in
what
way
the
current
text
is,
limiting
your
implementation
and
forcing
you
to
make
design
decisions,
you
do
not
wish.
F
Well,
I'm
not
here
to
talk
implementation
so
much
as
the
api,
and
I
think
that
the
api
should
be
designed
based
on
principles
that
the
tag
has
put
forth
and
I've
shown
three
of
them,
and
I've
also
shown
some
problems
that
I
hope
can
be
counter
argued
from
first
principles.
Why
they're
not
important
or
why
they're
not
how
they
can
be
solved.
I
just
don't.
F
Okay,
fair
enough,
I
don't
this
is
for
the
entire
working
group,
so
I
don't
understand
your
question.
E
My
question
is
clearly
so
unless
the
api
forces
you
to
replicate
chrome's
problems,
what
does
it
matter
if
chrome
has
made
problems?
It's
got
problems
in
your
implementation,.
E: No... okay, so thank you; I will start now. So, as Jan-Ivar has mentioned, and thank you for introducing the API, this is an API for cropping video tracks to targets. It was proposed one and a half years ago by yours truly; Chrome has implemented it, and Google products have actually started using it on origin trial, and soon in actual production, when it's shipped. But right now it's already in use in very big products such as Meet, Docs, and Slides, and it's battle-tested.
E: So, as we know, cropping involves a target. The target can be in another document, which means that you cannot pinpoint the element without actually having a CropTarget, and the question here revolves around whether the minting of this token should be sync or async. I'm going to make a claim that I'm going to explain.
E: Next slide, please. So, just to remind everybody of Chrome's implementation: we're not mandating that anybody else follow the same implementation, we're just explaining Chrome's direction. First of all, you've got the captured document on the left. The captured document mints a token, which involves an IPC in Chrome's implementation, not necessarily in anybody else's, and once you've got this CropTarget, it's not only the captured document that has it: the browser is now aware of it, and the browser knows.
E: Next slide, please. So, as mentioned: simpler, more performant, and you'll notice that there are no race cases here, because by the time the capturing document gets the token, that token is ready to be used. Whereas with some alternatives that have been debated at length on issue number 17, there was basically an IPC being sent in both directions at the same time, one to the browser process and one to the capturing document, and you can imagine that there are cases where one IPC would reach faster than the other. Next slide.
E: Please... oh, sorry, one more... okay, next slide. So basically, I claim that we've got an argument here between a party that needs this to be async and a party that just wants it to be sync, and then the question is why. Chrome needs it because we've got a certain set of trade-offs; other sets of trade-offs are possible, but we've got our own philosophy and we want to implement according to it.
E: Mozilla could implement according to their own philosophy regardless of whether this is sync or async, but they want this to be sync. Now, that's fine, but for that they need to show that there is an actual problem with making this async, regardless of implementation, or that one of the constituencies is actually negatively impacted by this. Next slide, please.
E
So,
as
we
know,
this
is
the
text
it's
a
little
bit
much,
but
that's
the
original
text
with
a
reduction,
so
the
needs
of
users
come
first.
Then
web
developers
then
browser
implementers,
spec
authors
and
then
a
surprise
at
the
end,
let's
examine
how
almost
all
of
them
are
impacted,
because
I
think
spec
authors
are
a
little
bit
irrelevant
right
now.
Next
slide,
please
so
users,
users,
don't
know.
If
this
is
sync
async,
they
don't
care.
Users
want
their
software
to
be
performed
available
on
all
browsers
and
available
yesterday.
E: Well, we can guess what they think, or we could ask them, and we have asked them and they've provided feedback. They've said that they simply don't care if this is async. Most developers that are going to use this are already maintaining very large, very complex applications, millions of lines, and they don't care.
E
If
we
just
add
the
award
await
now,
we
could
think
that
in
theory,
maybe
this
slots
right
into
the
place
one
place
where
it
would
otherwise
be
a
problem,
but
so
far
no
web
developer
has
come
and
explain
that
this
was
an
issue
for
them
and
also
github.
We
have
discussed
if
this
were
an
issue,
how
we
could
have
actually
gone
around
this
issue.
E
So
the
third
constituency
is
implementers
and
I
happen
to
be
an
implementer
and
to
represent
other
implementers
in
chromium,
and
we
say
that
this
is
imperative
for
us,
so,
whereas
the
other
two
constituencies
did
not
care
we
care
and
because
it's
of
no
impact
to
them
now,
it's
our
turn
next
slide.
Please,
so
is
there
a
problem
with
if
the
constituencies
are
not
the
thing
that
are
going
to
make
us
decide
to
make
this
synchronous?
E: There is none. The needs of implementers actually come two steps above theoretical purity, so that would not be a reason for us to make this sync when one implementer really claims that it is necessary for this to be async. Next: the TAG actually discussed this particular API.
E
So,
first
of
all
about
the
api
as
a
whole,
they
said
we
reviewed
it
and
we're
satisfied,
but
then,
when
this
debate
raged
for
a
few
months,
tag
also
made
a
comment
on
the
meta
discussion
here,
and
I
think
that
we
should
actually
pay
attention.
So
let's
read
first,
they
said
so
sanguan
said
I
have
looked
at
this
discussion
and
think
that
developer
ergonomic
gains
would
be
minimal.
I
don't
see
a
significant
gain
in
terms
of
ergonomics
for
developers.
E
We
think
then
he
said
it's
fine
to
be
a
sync
async,
so
this
looks
good,
but
then
so
this
is
first
principles.
But
now,
let's
talk
about
the
meta
discussion,
please
so
dan
said
we
can
feedback
that
interoperability
is
an
imperative
and
then
he
said
the
issues
of
interop
are
many.
This
is
something
that
should
be
guiding
the
working
group's
work.
E: So I've demonstrated that all constituencies either do not care about this or actually need it. So there is no reason to go synchronous there.
E
Furthermore,
I
think
that
if
it's
a
more
c
flexible
solution
right
because
if
you've
got
a
synchronous
api,
it's
trivial
to
make
it
asynchronous
and
by
making
it
synchronous
where
actually
we
would
be
constraining,
implementations
such
as
trump's
implementation,
and
I
think
that
would
be
inappropriate,
and
I
think
that
this
debate
has
gone
on
for
too
long.
I
think
that
we
can
do
much
better
and
move
on
to
more
productive
work
by
just
going
with
the
interrupt
solution.
E: Thank you very much. And yes, Bernard, I'm ready for your question.
A: You're muted... okay, yeah. So one point that you made a lot is that, by making this async, in the IPC architecture you know that when the CropTarget returns it's ready for use. Can you describe some of the things involved in getting it ready for use that might create a problem with a sync implementation? Because we've seen in WebRTC that there are situations where we've done sync APIs that turned out to actually need to be async.
E: Thank you. If you could go a couple of slides back, to where I had the diagram, that's going to help me a little bit.
E: So we can imagine that if this were sync, then we've got two options, right? We could either block until we register things in the browser process...
E: We wouldn't want to do that. And the other way we could go is: we could embed all of the information that's necessary in the CropTarget when we mint it, do that in the render process on the top left, on the captured side, and simultaneously send it both to the capturing document as well as to the browser process, so that when the capturing document goes and calls cropTo and asks the browser, hey, I want to start cropping to this, yes or no...
E
This
would
already
be
there,
but
the
problem
is
that
it
is
not
guaranteed
that
the
crop
target
is
going
to
reach
the
browser
process
first
from
the
capture
document
and
not
for
the
from
the
capturing
document.
So
now
it
could
be
said
that,
yes,
this
is
an
edge
case.
You
can
optimize
for
that.
I
think
that
the
enabler
said
that
on
github
at
the
very
least-
but
I
don't
think
that's
worth
the
complexity
because
now
we
need
to
have
complete.
I
think
un
is
also
common
to
that
effect.
E
So
now
we
need
to
introduce
very
complex
code
that
says:
okay,
but
if
it's
already
here,
I
do
one
thing,
but
if
it's
not
here,
then
I'm
going
to
communicate
back
with
the
original
render
process
and
check.
Hey.
Are
you
still
here,
if
you
just
posted
this
to
me,
etc
and
sure
such
implementations
would
be
possible,
but
there's
always
a
trade-off
between
complexity
and
everything
else,
and
I
we
do
not
believe
that
this
trade-off
would
be
favorable.
Yes,
even.
D
So
I
I'm
surprised
that
you
you're
saying
that
this
is
a
must
on
past
discussions.
I
think
we
agreed
that
this
was
implementable,
synchronously
in
chrome
or
in
in
any
browser,
but
my
understanding
was
that
you,
you
were
favoring
chrome's
current
trade-offs,
meaning
that
you,
you
prefer
that
you
do
the
pre-allocation
at
the
time
the
crop
target
is
created
and
not
at
the
time
of
crop
to,
and
this
is
the
main
motivation
for
an
asynchronous
api.
D
That's
my
understanding
and
from
what
I
understand
from
universe
approach,
slides
and
that's
also,
my
understanding.
D
I
pointed
out
that
in
the
last
that
this
is
a
fingerprinting
issue,
this
is
a
potential
interrupt
issue
as
well,
so
that
that's
why
I
think
the
current
chrome's
trade-offs
with
pre-allocating
things
might
be
fine,
but
it's
it's
also
a
food
game,
and
so
in
general,
I
think
that
you
prefer
your
current
approach,
but
you
still
agree
with
being
able
to
implement
the
synchronous
api
with
similar
trade-offs,
except
that
you
think
that
the
implementation
might
be
a
bit
more
complex.
So
that's
that's
really.
D
I
think
the
core
of
the
discussion
there
is
that
both
are
implementable,
but
what
you're
saying
is
that
the
synchronous
api
in
chrome
would
be
more
complex.
D
I'd
like
to
to
finish
what
I'm
saying
so
that
was
my
first
point.
Maybe
I
misunderstood,
but
I
I
I'm
trying
to
narrow
down
exactly
why
you're
asking
for
an
asynchronous
api.
I'm
also
surprised
by
the
two
presentations
on
the
fact
that
they're
both
claiming
that
both
approaches
are
more
efficient,
that
both
approaches
are
faster.
They
cannot
be
faster,
one
needs
to
be
faster
than
the
other,
or
maybe
it's
different
cases.
But
it's
it's.
It's
a
bit.
D
It's
a
bit
misleading,
so
to
me
the
fact
that
in
one
case
you
can
do
things
in
parallel
and
in
the
other
case,
you're
doing
things
severely
think
one
after
the
other.
That
means
that
one
will
be
faster
than
the
other,
at
least
for
the
user
of
a
web
application.
D
Usually
synchronous
api
are
more
efficient,
except
if
you
know
that
there
will
be
something
blocking
like
getting
to
hardware
getting
into
files
hoping
to
thread
and
so
on
here
I'm
not
I'm
not
hearing
that.
So
I
think
that
a
synchronous
api
is
helping
a
bit
web
developers.
It's
not
a
huge
deal,
but
it's
still
a
tiny
important
thing.
E
Okay,
I
would
like
to
answer
and
you've
made
two
points.
The
first
point
included
the
claim
that
about
resource
allocation.
I
think
that
is
irrelevant,
because
resource
allocation
is
not
mandated
by
making
this
async
it's
a
design
decision
that
we've
made
and
we
can
always
roll
it
back.
You've
pointed
out
correctly
the
deficiencies
of
that
and
it's
up
to
us
to
decide
what
to
do
with
that.
We're
not
mandating
that
you
do
anything
similar
just
by
making
it
async.
D
Can I answer that? You're saying: we implemented this, and based on our implementation we learned a lot of things, and we think we should build on that to design the api. And what we're seeing is that, with your current implementation, there are some potential issues with regard to security and privacy, and they are not solved.
D
So I would feel much more comfortable if they were solved, and then you would say: hey, we solved all these things, and now we can say that this is good. Because you're asking for an asynchronous api exactly for that reason, and that reason is actually that your implementation has some issues — and that's why it's weakening the case for asynchrony.
E
No, that is not true. It would be very easy for us to change the implementation in the following way: each iframe has its own limit, and therefore there is no leakage of information between iframes. That would be trivial, so I don't think this is relevant — and you don't need to have a limit on anything, right? So I think this is a red herring. Your second issue was about which one is faster, and that isn't something that needs to be debated.
E
It is possible that for each implementation things will be faster in different ways, but the claim that was made about serializing multiple ipcs was unfortunate. I didn't have time to counter it, and the counter is this: you can create the crop target whenever you want. You can do it while the user is interacting with the prompt; you can do it before you even display the prompt to the user. So these are not in serial; these are quite—
E
Yeah, you create a crop target and you post it, but that does not actually gate cropTo — and cropTo is the thing that's going to take a while. That is what needs to be fast, right? Because when the user selects "I want to capture this tab", and now you want to crop that tab—
E
That's when you need to be fast. Everything that came before that, the user did not notice: the page loaded, the page minted the crop target, the page posted the crop target between its various sites — and all of this time the user was just moving slowly toward starting the capture. By the time we started the capture, all of this cost is already paid.
D
So I think what you're saying is that, with the synchronous api, creating a crop target and sending it to the recipient is faster.
D
Let's not debate that: if you post-message a crop target, getting it synchronously is faster than waiting for the actual resource allocation to finish.
D
And just think: you create a crop target, you post-message it, and you receive it on the page. You're saying that it's not faster than if you do it async? I am—
D
This cost is— so I think we are agreeing that, if you create a crop target, post-message it and receive it on a page, then the synchronous api is faster than the async one. What you're saying, then, is that the actual big latency item is cropTo, and that you think it's negligible to optimize the crop-target generation code path, because cropTo will take a lot of time. Is that correct?
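The timing argument on both sides can be sketched outside the browser. A minimal Node mock — not the real CropTarget / getDisplayMedia api; all names and delays here are invented — showing how an asynchronous crop-target creation can overlap with the user's interaction with the capture prompt, so its cost is paid before cropTo is ever called:

```javascript
// Minimal mock (not the real browser api): illustrates how async crop-target
// creation can run concurrently with the user's surface-picker interaction.
const events = [];

// Hypothetical stand-in for CropTarget.fromElement(); the 20 ms delay
// pretends to be the allocation / IPC round trip.
async function fromElementAsync(el) {
  events.push("allocate:start");
  await new Promise((resolve) => setTimeout(resolve, 20));
  events.push("allocate:done");
  return { id: el };
}

// Stand-in for the user taking 50 ms to pick a surface in the prompt.
async function userPicksSurface() {
  await new Promise((resolve) => setTimeout(resolve, 50));
  events.push("prompt:done");
  return "track";
}

async function main() {
  // Allocation and the prompt overlap; allocation finishes while the
  // prompt is still up, so the eventual cropTo() is not delayed by it.
  const [target] = await Promise.all([
    fromElementAsync("slide-area"),
    userPicksSurface(),
  ]);
  events.push("cropTo:" + target.id);
  return events;
}

const done = main();
done.then((e) => console.log(e.join(",")));
// → allocate:start,allocate:done,prompt:done,cropTo:slide-area
```

The same overlap is available under a synchronous design; the disagreement in the turns above is only about where the remaining IPC round trips land relative to cropTo.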
E
I don't want to say that everything you said was correct, but I can say the following, and you can decide. I'm saying that everything that comes before cropTo, and before getDisplayMedia is called, is irrelevant — all of those costs are indeed negligible. The cost that matters is: once the user decides to start the capture, and the capture starts, if you now call cropTo, how quickly does that happen? And because there is now only a single ipc round trip, it is faster. That is my claim.
I
Yeah, I don't know anything about the implementation, but from the point of view of a developer who actually has an app — a small, few-hundred-line app that would do this — I have a mild preference for it being synchronous, because it's slightly tidier and slightly easier to use. But I'd sacrifice that if I was convinced there was a user benefit or a developer benefit, and I don't see a developer benefit to it becoming async. I imagine there might be a user benefit in terms of the pre-crop — somehow not seeing the whole thing and then it shrinking down to the crop area. If there's a visual twitch that's otherwise unavoidable, then obviously getting rid of that is a huge win. That would be my only overwhelming argument: if somebody could convince me that that was going to happen, then I would kind of be interested, I think, you know.
I
The other issue, from the developer point of view, is whether the top-level page sees failures — whether you're interested in any kind of failure in resolving cropTo or in obtaining a target. Is there an interesting crop-target failure mode? Because that would also convince me that it needs to be async. But I haven't heard that, so I'm kind of unconvinced that it needs to be async — but I'm listening for reasons, from a developer point of view.
E
The claim has never been that this can only be implemented as async. The claim has always been that there are multiple sets of trade-offs; one of those sets of trade-offs requires this to be async, and there is one implementation that wishes to go for that set of trade-offs. The other implementations are not going to suffer, developers are not going to suffer, and users are not going to suffer — and therefore it's quite appropriate to care about that implementer's needs.
I
I don't think it's even negligible. I've had a case where something low-level becomes an async method, and then you have to spend an hour making sure that it gets compartmentalized and doesn't spread its way throughout the whole damn code. So it's not trivial, but it isn't insoluble by any means.
E
Okay, so we've got google developers saying it's negligible, and Tim Panton says it is not negligible, but it is solvable. Did I get that correctly?
E
I did not mention any kind of— I didn't say that you were less important. I was just listing all of the feedback that we've received so far.
F
Since you mentioned there were no downsides — nobody is claiming that chrome's optimizations aren't useful; clearly they are. But I'm confused: it sounds like you're claiming chrome cannot do the same optimizations with a completely synchronous api, and I don't understand that, because you haven't actually explained why it needs to be async, except that, instead of falling back to something slower, you want to surface a failure to the web developer.
F
Why can't you implement a fallback? And let me list the downsides again. To users: an attack on the cropping service to defeat cropping might leak private information, and resource leaks expose fingerprinting info. To web developers: async is harder to use, it introduces races, and with the proliferation of async, web developers have to deal with failure. And for spec authors: user agents have been opening new issues on the spec because they're chasing an optimized-implementation-only design, right?
F
I would still like to hear why it must be async.
F
Fair enough. I think I hear ease of implementation, and I think one of the rules is that complexity in the api trumps complexity in the implementation. That seems to be the trade-off here. Where did they find that one?
B
Now, the priority of constituencies says that the concerns of web developers come before the concerns of implementers. That's not the same thing as saying that the more complex api that we've—
E
I do not see any benefit in continuing this discussion; I think it has gone on long enough. We've got many other things to discuss on first principles and much more progress to make elsewhere. It could be, you know, that this is the suboptimal decision here — maybe you are making a mistake — but this is quite a small mistake, and our time would be better spent making other small mistakes that advance the web, and not dwelling on this particular one.
E
You've been hearing it now for quite some time. I'm sorry, we're already over time for the next presenter.
C
I can suggest two paths. One would be to organize a formal vote of the working group — each member organization gets one vote, and we use that to determine the path forward. The other approach is to leave the issue open. I guess what I'm hearing is that a lot of it is bound to implementation experience, so maybe we leave it open until we get other implementation experience that shows an alternative path and brings more clarity as to what the right trade-off should be.
F
I'd like to note that there's also no consensus on merging anything for issue 48, which means that, as currently specified, there's no way for this asynchronous method to fail.
B
Well sure, there's no consensus to not merge anything on issue 48 either, so you can't argue it one way or the other. And at the moment there's nothing in the specification that allows this function to fail.
I
Making it async doesn't stop the other browser implementers doing it right and making it synchronous, effectively.
D
Okay. So issue 18 is about the name to give the target. When I read the spec and saw CropTarget, I wasn't sure exactly what it meant: it's made of two very generic terms, crop and target, and it wasn't very clear to me what the scope was. I wasn't even sure crop was a good idea, given it's used a lot for images, so I was thinking that maybe it would be used quite a lot in css or html and so on. It appears—
D
—html is mostly about masking and clipping, not cropping, so crop is not used a lot in web apis, and it's probably fine to use it. As I see it, target does not bring much value. So I thought to myself: ideally, we would like a name that is very clear about what the object is, what its definition is, and so on. The next slide looks at different definitions that we could have for the target, and we can define it in different ways. It could be—
D
It could be a reference to an out-of-process bounding box — that's another definition we discussed. Or it could be — and it's a stricter definition — just an object whose sole purpose is to be given to cropTo, meaning that it's not specific to screen capture, not specific to elements, not specific to bounding boxes, and so on. Ideally, we would pick a definition, and then we would pick a name that highlights that this is really the definition we want.
D
So I looked at all three definitions (next slide), and I think that initially I was inclined to go with the fact that it was an element reference, or maybe a bounding box from an element reference. But cropTo is a MediaStreamTrack method currently dedicated to screen capture, and maybe in the future cropTo might be used for other tracks. For instance, if you have face detection somehow, maybe you want to do face-detection cropping as well, and maybe cropTo would be fine for that.
D
I don't know. So that's why I changed my mind a little bit, and I thought that the third definition — an object whose sole purpose is to be given to cropTo — is the definition that we want. Then we can have multiple instantiations somehow: it could be based on an element, or it could be based on something else. And based on that, I think CropTarget is less good than CropRegion.
D
This way, we're clear that it's not related to targeting an element or whatever. So I would favor CropRegion, which would also align with the spec name — is it Media Capture Region, or Screen Capture Region? I don't know — there is "region" in the name of the spec: Region Capture. So I'm thinking—
D
What about CropRegion? It seems to me that we could make it clear in the spec that the object's purpose is really to be given to cropTo — we'd change the name a little bit as well — and then we would no longer have to deal with the fact that, yeah, it's just a reference to an element and so on, because we will—
D
We would think, in terms of the api, that it's a region that might change, and we can crop an incoming track — which has a lot of video frames coming in — based on this crop region, which is just an abstraction for whatever is beneath it that will define the region. And that's it.
I
Okay, yeah. I just wanted to say that, if we're going to change the name — or come up with a better name — I think we should add the fact that it's a token. It isn't a region; it's a token to a region. It isn't a target; it's a token to a target. If we're going to make a name, we should say that it's an opaque thing, and that should be in the name.
D
Well, it might no longer be opaque, depending on what it is. If it's an element-based region, yes, it's opaque; but if it's something else that we might want to instantiate through other means, maybe it might no longer be opaque, somehow.
E
I have similar reservations about CropRegion. I also read it initially as being like a set of coordinates that I would be able to read, which is wrong, so that would be misleading. Additionally, it kind of sounds like it's static, right — it's a region — whereas a crop target is something that can move, because the target can move, and that is important. The entire reason that we have a crop target, and not a set of coordinates, is that that thing can move, and that can happen asynchronously from the capture.
B
Now, frankly, this is bikeshedding.
B
I looked through issue number 18, which has been open since february. There are exactly three people who have spoken on it — it has stalled — and one other person claims that CropTarget is perfect. I see no benefit to anyone in changing it, and I see benefit to—
D
Okay, what about the possibility to clarify that? Well, we could do that in the future, I guess, because there were discussions about, yeah, is it an element reference, and so on?
D
And I think I came to a point where I think it's no longer an element reference; it's just an opaque thing. As for the fact that it's an element reference — maybe in the future it will not be based on an element; it might be based on something else. Maybe I'll file a clarification issue on that end.
E
We're 10 minutes behind schedule; I'll do my best to make up the gap. So now we're going to talk about cropping non-self-capture tracks. Currently, you can only crop a capture of the current tab — I'm not sure why — so I suggest that we allow cropping arbitrary tab captures. I think that there are compelling reasons; for example — the next slide, please.
E
So, yes. What we see here is that the local user can see all of the slides, including the sidebar on the left, but nevertheless they can crop and only show the slide — not show the meeting on the right, not show the other slides on the left — and that's perfect for the user. That's currently possible because the video conference is in the same tab as the slides. Next slide, please.
E
So I don't see why we can't extend this to the point that, if we've got two tabs — let's say that I had, you know, the meeting on the right-hand side in one tab, and I was capturing the slides in the other — maybe I still want to crop. If these two applications already know how to coordinate and collaborate, why can't they do that if it's in two separate tabs?
F
So you could see — if a popular webrtc service says, basically: if you give me this cropping region, I will crop to it — and if that becomes a common api, a lot of sites might be able to use that to basically crop to an empty little square, for example, and then basically block and censor out information.
F
I'm not saying that — I'm making the point that a site could decide to offer this service, and it's something we should at least think about. I'm sorry.
F
Or
not,
and-
and
my
point
is
merely
to
to
point
out
you're
asking
for
comments
right.
So
one
of
the
comments
is
what
are
what
are
the
consequences
of
these
decisions
and
I'm
not
saying
maybe
we
should
let
pages
have
apis
that
censor
themselves,
but
I
I'm
at
least
pointing
out
that
that
might
be
one
consequence.
E
Of this — thank you for clarifying. I remember that this particular point has been made in the past, and that is why we narrowed the scope to self-capture. I think that there is no merit in this point, and I've pointed out the argument, so I wonder if you've got the counter-counter-argument. I'll just repeat the counter-argument: it is that no sane capturing website is going to take a crop target from an unknown source, received over an unknown medium — because how would I even get your crop target?
F
I think what I explained could easily be built, so I don't accept that it is unlikely. But at the same time, I just wanted to make the working group aware of this potential so that we can discuss it.
F
I'm not saying that that will happen. I'm just saying that it's something I'd like us to consider when deciding whether this is a good idea or not. I don't see any other immediate problems with it.
I
So I'm in favor of this. I think, given that the token is an opaque token, and it can only have got there by inter-page communication that both parties willingly went into, the risks are probably pretty slim. And if you disable it, you get the whole screen — you get the whole thing anyway. So I don't—
E
Jan-Ivar, would it be okay with you if this were an AI in the minutes? Sorry — I'm suggesting that the minutes reflect the following AI: either somebody—
E
Action item — thank you, yes, sorry. So I'm proposing the action item that either you, or somebody else, comes up with a compelling case not to extend the api, or we do so by the next meeting. Next.
F
Well, let me try it differently, just briefly. Right now, screen sharing is a power tool where the user is in full control, and we have some new specs now, like Capture Handle identity, which, combined with cropping, moves toward a place where web applications can gain more control over what was a user power tool. Like, right now I can be in a webrtc call with my accountant.
F
I can share a page of my bank statements on my bank's page, and whether that's a good idea or not is on me as a user, right now. But by changing what screen capture is — from a power tool to something where web developers have more say in the matter — we might get into situations where these sites are cooperating, and the bank now knows it's being captured, and it will now redact a lot of the information that I want to convey to my accountant.
E
We need to have some timeline. Would you like, by the meeting after next?
E
Okay, next slide, please. So, next topic: making crop targets stringifiable. I think that it generally helps to make a crop target stringifiable: you can expose it over Capture Handle, and you can message it over shared cloud infrastructure.
E
It's just, you know, a basic thing that kind of helps, and previously there has been some opposition to this — or maybe let's say hesitance; I don't want to misrepresent. Youenn, you're here: if you could explain to me again why you were opposed to this, or hesitant, or whichever is the correct adjective, that would be most helpful.
D
Sure. So there's a way, for instance, to get a URL from a blob object, which might be megabytes of data. You create the URL from the object, and as long as you have not revoked the URL, the blob is basically not garbage-collectible — because the string can be recreated. Even if the string itself gets gc'd, it might have been stored elsewhere, in indexeddb, or it might be recreatable by concatenating two strings. So it becomes very difficult to actually garbage-collect a crop target if you're doing resource allocation.
D
You would need to keep your resource allocation for the whole lifetime of the element, and that's basically not great. It's much better if, when we have a crop target and the crop target gets gc'd, you then send an ipc message to the capturer saying: hey, this particular thing that we created in the past — now you can release its resources, and so on. If we stringify it, then that becomes very difficult; you basically cannot do that anymore.
E
I'm sorry, I did not understand this previously, and I still don't understand it. Because you can keep the string — you might even garbage-collect the string — but you can garbage-collect the element that minted it, and what's going to happen then is that, when you deserialize it and get a new crop target based off of this string, that crop target will just be obsolete. It will be meaningless, and trying to use it will just yield an error.
E
Sorry, I didn't say that. Let's look at how this could happen. Okay, so let's say that we get to the point where we are actually allowing— okay, let's do it even with self-capture. Let's say you embed an iframe, that iframe gives you a crop target, and then you hold on to it, but you actually detach the iframe. The iframe gets—
E
The
garbage
collected,
its
element
gets
garbage
collected
you're
still
holding
a
crop
target,
but
trying
to
to
crop
to
this
crop
target
is
not
gonna
help
you
by
much.
If
you.
D
Yeah,
but
what
I'm
saying
is
that
that's
not
the
typical
case,
the
typical
case
is
you
create
a
crop
target
from
the
iframe?
Then
you
send
it
and
you
stop
you,
you
stop
using
it
at
some
point
and
the
crop
target
gets
gc'd
and
then
you
remove
resource
allocation.
What
you're
saying
is
that,
with
a
string,
we
can
stop
resource
allocation
when
the
element
gets
gc
and
I'm
saying
okay,
that's
correct,
but
since
we
are
not
allowing
studentification.
E
Confused
because
it
is
explicitly
stated
by
the
spec
that
the
crop
target
does
not
actually
extend
the
life
the
lifetime
of
the
element,
so
both
cases
you've
got
something
either
string
or
crop
target
that
is
independent
of
the
lifetime
of
the
element.
So
I
I
really
don't
understand
the
problem
here.
D
So when you create a crop target in chrome currently, you're sending an ipc message, you're storing something in the database, and then you send back the crop target. And this thing in the database is currently useful until the crop target object is gc'd or the element is gc'd, because whenever either— no, sorry, no.
E
That's wrong: it's the element. It's until the element is garbage-collected, not the crop target. The crop target can outlive everything, and that's okay — the resources will be freed, and it's just fine.
F
All right, so I think I'm next in the queue. I agree with Youenn here: I don't think there's a sufficient use case to stringify — I haven't heard the current use case.
F
If
we
only
cared
about
cropping
self
documents,
we
wouldn't
have
had
crop
target
either
we
added
crop
target
to
solve
cross
process
iframes
and
I
think
that's
sufficient.
I
agree
with
the
garbage
collection.
The
reason
why
a
crop
target
is
better
is
it
is
garbage
collectible
as
soon
as
any
reference
is
dropped
from
it
and
it
cannot
leave
the
machine
which
I
think
has
some
benefits
the.
F
So if you really wanted strings, you could store your crop targets in a map — I have a string that is associated with a crop target, and it's an easy lookup. However, if you were to pick a WeakMap, you would see the problem: it would allow the crop target to be garbage-collected, which means that the strings are no longer usable.
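The map described here is a few lines of application code. A sketch — the "crop target" below is a plain placeholder object, since the real one only exists in the browser — where the app mints its own string names, and only the holder of the map can turn a name back into a usable object, so the string itself stays harmless:

```javascript
// App-level workaround: keep a private string -> CropTarget map instead of
// making CropTarget itself stringifiable.
const registry = new Map();

function register(name, cropTarget) {
  registry.set(name, cropTarget);
}

function lookup(name) {
  const target = registry.get(name);
  if (!target) throw new Error("unknown crop-target name: " + name);
  return target;
}

// The string can travel anywhere (capture handle, signalling, etc.);
// it only becomes a usable object again inside the page holding the Map.
register("slides", { kind: "crop-target", el: "#slide-area" });
console.log(lookup("slides").el); // "#slide-area"
```

A plain Map keeps the crop target alive as long as the entry exists, which is exactly the trade-off discussed next: a WeakMap would let it be collected, but then the string would dangle.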
D
As I understand it, in chrome's implementation, when you create a crop target, the crop target will last forever. In other browsers it might not last forever, and it's good to be able to grab back these resources early on. The crop target object allows that, and if we stringify it, then we lose that.
E
So, for example, one way that you can solve this is to put the uuid on the element, and then, when somebody tries to create a new crop target from the old uuid, you just search for it. If you find it: great success. Otherwise, you say: hey, this deserialization failed.
I
Yeah. So the previous thing we were talking about, I supported on the basis that it was opaque, and now we're making it un-opaque; I'm less comfortable with that. I think it's much more difficult to think about what the consequences are of it not being opaque, because opacity guarantees that the two things are collaborating consciously to exchange that token within the process, whereas the moment you stringify it, it can go out via some third-party advertising server, and you don't know what the consequences are.
E
So the only difference between an opaque interface and the string is the equals-equals operator. With the string, if you have the same string, you know it refers to the same element. It could be that multiple strings refer to the same element, but once two sources give you the same string, you can compare. Everything else is completely opaque if it's a uuid, right? It doesn't tell you anything about the time it was minted; it doesn't tell you anything — it's a randomly assigned uuid.
I
But
from
the
developer's
point
of
view,
which
is
where
I
was
coming
from
there
is,
there
is
a
difference
which
is
that
you,
there
is
only
there's
a
very
limited
number
of
paths
that
you
can
get
a
valid
crop
target
from
one
in
a
frame
to
an
outer
frame
from
one
tab
to
another
tab
and
those
are
through
very
limited
number
of
interfaces.
The
moment
it's
a
string.
I
There
are
an
infinite
number
of
parse.
They
could
be
sent
over
a
webrtc
data
channel.
They
could
be
put
in
a
qr
code.
They
could
be
like
there's
an
infinite
number
of
paths
and
we're
now
saying
that
all
of
those
parts
are
completely
safe
and
the
the
developer
knows
of
the
consequences
of
all
of
those
paths,
and
I'm
saying
that
requires
a
lot
more
thinking
about
than
saying.
We
know
that
it's
come
through
post
message.
E
Yeah, yeah, sure. Okay, so here I suggest again that, unless a compelling case can be made about why this is unsafe, and why this cannot be solved in terms of garbage collection, we proceed with this — because various generic concerns that we've not yet done the due diligence can always be raised. It is time to do the due diligence. I believe this to be safe. Can we move—
B
I'm for it — so I'm noting that we need to have a proper item for this one. The issues that people are concerned with seem all to be not with the stringifier but with the de-stringifier — how you can construct a crop target from a string — and that is not well understood yet. So I think we need to leave it.
K
Okay, I made a quick change here, but it's not visible — it's fine. I put up the thanks to all the people who have commented, especially Harald and Youenn, who — and I — have now made an explainer. Hopefully everybody had a chance to have a look. I've listed out explicit goals, which Harald wanted, and I think we are in agreement on most of the face detection goals. Actually, one was to attach to VideoFrame, and it is done; I think it was suggested to use requestVideoFrameCallback.
K
Now
it's
sort
of
done
harald
also
suggested
to
return
contour
instead
of
bounding
box.
That
is
something
we
can
again
discuss,
whether
we
want
it
in
the
mvp
or
not.
K
There
was
there
was
this
thing
that
we
should
use
space
detection
as
an
input
to
other
apis
like
background
blur
eye
case
detection
face
framing
and
other
stuff,
you
know
it
should
be
easy
to
use
space
detection
with
a
custom
say
eye
gaze
correction,
or
something
like
that.
So
I
listed
out
here
how
you
know
face
detection
can
be
used
as
a
building
block
for
eye
case
correction.
If
you
do
not
want
to
use
the
system
supported
eye
gaze
correction
or
something
else,
you
want
your
own
special
custom.
K
—eye gaze correction: you can use face detection first, then run a pca and hog and some random forest with some face landmark results, and you can create your own. Similarly with low-light adjustment: first you actually have to detect the faces, and depending on your face roi, the brightness and contrast changes. And face framing — obviously, you have to detect faces first.
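As an illustration of the kind of downstream consumer described here, a face-framing crop can be derived from a detected bounding box with simple arithmetic. Illustrative only — the function name and margin policy are invented, not part of any proposed api:

```javascript
// Given a detected face bounding box, compute a "face framing" crop
// rectangle with a margin around the face, clamped to the frame bounds.
function frameFace(face, frame, margin = 0.5) {
  const mx = face.width * margin;
  const my = face.height * margin;
  const x = Math.max(0, face.x - mx);
  const y = Math.max(0, face.y - my);
  const w = Math.min(frame.width - x, face.width + 2 * mx);
  const h = Math.min(frame.height - y, face.height + 2 * my);
  return { x, y, width: w, height: h };
}

const crop = frameFace(
  { x: 100, y: 100, width: 200, height: 200 }, // detected face ROI
  { width: 1280, height: 720 }                 // full frame size
);
console.log(crop); // { x: 0, y: 0, width: 400, height: 400 }
```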
K
Normally, what happens behind the scenes is, you know, you detect the face at a lower fps, and the rest is usually done using face tracking. But yeah — next slide, and I'll try to discuss more. So this is the one important thing I think everybody wanted to know about: the power and performance improvements of this. The first chart shows just camera access — getUserMedia — and the second chart is the way we are planning to do it.
K
Let
me
clarify
it
has
been
tested
on
the
latitude
this
9420
this
model.
Only
of
course
we
can
also
use
shape
detection.
K
You
can
see,
I
think
the
power
is.
I
put
up
relative
power
here,
not
the
exact
you
know
absolute
power,
what
it
just
to
avoid.
K
Idea
that
how
much
you
know
it's
almost
2x
power
improvement,
so
we
have
used
15
fps,
because
the
example
where
we
used
tensorflow
it
was
using
15
fps
to
have
and
to
have
this
apple
straples.
You
know
comparison,
we
put
the
fpss
15
and
we
can
go
to
the
next
slide.
D
I
was
thinking
that
the
first
two,
the
chromium,
let's
get
to
the
media
and
the
webrtc
api,
our
proposal
would
have
roughly
the
same
relative
power.
So
is
the
difference
that,
on
the
on
the
left,
you're
not
doing
any
phase
detection
and
on
the
right
you're
asking
the
driver
to
do
phase
detection,
and
it
will
do
some
additional
processing
to
actually
detect
faces.
And
that's
why
there's
the
additional
power.
K
Actually,
I
would
like
to
answer
that
because
it
was
on
his
machine.
He
took
it.
G
Yes,
that's
exactly
true,
so
when
we
ask
the
driver
to
do
face
direction,
it
will
do
some
extra
processing.
We
are
actually
using
the
intel
hardware
on
the
die
to
do
the
processing,
so
it's
not
inside
the
camera,
but
it's
on
the
actual
package
and
that
causes
some
extra
power.
K
Yes,
yes
on
android,
it
should
be
almost
same
because
by
default
it
should
do
face
detection
on
if
it's
in
auto
mode
or
or
other
oss.
Also,
I
I'm
not
sure
about
that,
but
at
least
for,
but
we
can
cross
check
that
also.
K
Yes,
thanks
erin
for
all
the
comments.
Let
me
start
with
your
number.
Three.
Can
I'll
start
with
number
three
detected
phases
as
required
id
and
probability,
so
id
is
very
important
for
face
tracking
and
if
you
just
want
to
detect
first
and
then
the
rest
you
do,
tracking
id
is
very
important,
so
we
should
keep
id
and
the
probability
should
be
optional.
K
I don't know — you can make choices based on that. For example, if the probability is a bit high, and you want something like funny hats — you want the hat on the head only — I think it might need a bit higher confidence score. For that reason, probability might be important. I mean, we can keep it optional also, and, you know, not force everybody to use it. So I am open to that, but we'll hear about this from others.
D
There is a very good use case there. So, since they own VideoFrame, we should engage with the people most familiar with VideoFrame to build the correct construct.
K
Yeah, right, sure, I'll mail them right away then. Okay, and now the important one, number one. So yes, I think we started off with a minimal set, then many comments came and we increased the size a bit. Now again we should try to fix the MVP. The mesh part in our goals, I think we can keep as next steps; for now, everything about mesh we can remove.
K
The contour is something Harald sort of asked for, and I will ask his opinion on that. And the landmarks, I think, are for face detection as an input to other APIs, just in case somebody wants to do custom stuff. Of course there is the ready-made part, like, say, face framing or something like that, but the landmarks might be a bit important, and we should keep them as part of the MVP, but again, open to discussions.
K
D
K
Yeah, right. So in my opinion we can remove mesh directly and keep only landmarks, but I will hear what Harald and others say.
K
Oh yeah, I think, yes, let's go ahead, Harald.
B
And so I still have a problem with this API. I mean, the power consumption thing is kind of nice to have, and now that it's attached to VideoFrame I think it's in the right place in the system. But I still have problems figuring out what I can use this for. I mean, the use cases I see in the docs, in the explainers, just go "yes, face detection is good", but what can I do with a square box? Or is this the right...
J
B
K
Okay, so I think one of the quick answers would be, for example, the landmark part: you can use this as input, because that is something you had suggested, like using face detection for some custom other work, at least the ones I put on the previous slide. Does that help, the landmarks for doing custom processing for eye gaze correction and others, or...
B
So landmarks will give me starting points, and you're suggesting landmarks could be fed as input to an algorithm like funny hats. But yeah, landmark detection alone is not enough to do funny hats.
K
Right, I mean, so...
K
Well, right now the platforms, as you know, are only giving a bounding box, but I think that's why the contour is there in place, but...
D
Actual functionality is useful, but let's go to the queue. Yeah, just one use case review that I mentioned in the past, and which maybe I should write down on GitHub, is the fact that some encoders take specific regions and will optimize encoding for that specific region. It's just a bounding box, and having it attached to the video frame would allow some optimizations there.
K
A
Yeah, no, I was about to echo what Youenn said. It's actually a big subject of research, segmentation within video coding, because you can basically optimize the encoding and allocate your blocks according to the regions; it shows fairly high gains. So I agree with Youenn that integrating that with VideoFrame is a pretty good idea, right. So...
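The encoder use case above can be sketched with a small helper. This is a sketch under assumptions: the `{x, y, width, height}` box shape and the idea of collapsing all faces into one region of interest are illustrative, not part of the proposal.

```javascript
// Compute a single region-of-interest rectangle covering all detected faces,
// clamped to the frame bounds, so an encoder could spend more bits there.
// Returns null when there are no faces (encode uniformly in that case).
function facesToRoi(faces, frameWidth, frameHeight) {
  if (faces.length === 0) return null;
  let left = Infinity, top = Infinity, right = -Infinity, bottom = -Infinity;
  for (const { boundingBox: b } of faces) {
    left = Math.min(left, b.x);
    top = Math.min(top, b.y);
    right = Math.max(right, b.x + b.width);
    bottom = Math.max(bottom, b.y + b.height);
  }
  left = Math.max(0, left);
  top = Math.max(0, top);
  right = Math.min(frameWidth, right);
  bottom = Math.min(frameHeight, bottom);
  return { x: left, y: top, width: right - left, height: bottom - top };
}
```

An encoder that accepts per-region quality hints could then be handed this rectangle along with the frame, which is exactly why carrying the boxes on the VideoFrame itself is attractive.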
K
We don't have time, but I want to quickly get some support from at least the chairs and the other working group members: do we want to start prototyping, or are there bigger concerns with the proposal? Yes.
F
Sorry, I was on the queue. I noticed the explainer says the face detection API should be anchored to VideoFrame; it says, "Goals: face detection API should be anchored to VideoFrame, defined in WebCodecs, instead of MediaStreamTrack", which I applaud. I think this is a good move. However, I do notice that farther down it still talks about MediaStreamTrack capabilities and constraints, so it would appear that it is in both places. And in previous meetings...
F
There was some concern that this API looked like it was solely for face tracking supported by hardware devices and drivers, and so I'm wondering if you've made any progress thinking about how this API could be used on sources not from a camera. Say you have a video: it seems like it could be useful to detect faces from other sources, like recorded video.
K
For recorded... okay, from recorded video. Well, in that case I don't think we can use the same underlying platform APIs, from which we get a big power-and-performance benefit. So even for recorded video, in Shape Detection they take each individual image and then do it, which breaks the tracking part.
K
You can only do detect, and, you know, a lot of the savings come from the tracking. So maybe we can do it, but is it something that is needed? Because I think most of the use cases are video-conference related, if I am not mistaken. So if it is something that's needed, then we can look again.
D
I would echo Riju's point that the first target is probably the camera. And for recorded video, it seems that if you have a MediaStreamTrack, you could have a transform stream that you would pipe to, and then it's up to the transform stream to either do it video frame per video frame, or to be smarter and do some kind of tracking from video frame to video frame, and maybe some kind of...
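The transform-stream idea above can be sketched roughly as follows. This is a minimal sketch, assuming a hypothetical `detectFaces(frame)` function; the per-frame logic is kept as a plain transformer object so it is testable outside a browser, and the browser wiring with MediaStreamTrackProcessor/Generator appears only in comments.

```javascript
// A per-frame transformer: runs a (hypothetical) detector on each frame and
// forwards the frame together with its detection result. In a real pipeline
// the result would be attached as VideoFrame metadata; here it is simply
// paired with the frame so the logic stands on its own.
function makeFaceAnnotator(detectFaces) {
  return {
    transform(frame, controller) {
      const faces = detectFaces(frame);
      controller.enqueue({ frame, faces });
    },
  };
}

// In a browser this transformer would back a TransformStream, e.g.:
//   const annotator = new TransformStream(makeFaceAnnotator(detectFaces));
//   trackProcessor.readable.pipeThrough(annotator).pipeTo(trackGenerator.writable);
```

A smarter transformer could keep state between calls (previous boxes, ids) and only run a full detection every N frames, which is the "do some kind of tracking" variant mentioned above.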
F
So, sorry, you could, for instance... we could maybe add ways to insert that video frame metadata from a transform stream, for example. Is that what you're thinking?
L
Yes, so we have added the possibility to add face metadata to a video frame's metadata. It's all in our explainer; that's in our API proposal in the explainer. So in our proposal it's possible to create new video frames with custom metadata.
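What that proposal describes might look roughly like this. The `faces` key and its shape are assumptions for illustration, not the registered VideoFrame metadata format; the helper is a plain object merge so it can be exercised without browser APIs.

```javascript
// Merge face detections into a frame's metadata object without clobbering
// whatever metadata is already there. In a browser, the resulting object
// would be carried by a newly constructed VideoFrame, per the proposal.
function withFaceMetadata(existingMetadata, faces) {
  return { ...existingMetadata, faces };
}
```

The point of the custom-metadata route is exactly this composability: a transform stream can enrich a frame's metadata and hand the result downstream without a side channel.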
B
Yeah, so a quick look at the explainer says that they're using constraints to ask the camera driver to produce this information, while the representation of the information is attached to the video frame. That seems a perfectly sensible way to go. What I'd like you to do is to make sure that you have written up in the explainer the exact use case for enhancing encoding, because that's the one I can see that's compelling at this time.
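The constraints-based request described above could be sketched like this. The constraint name `faceDetectionMode` is a hypothetical placeholder, not the explainer's actual name; only the object shape follows the standard advanced-constraints pattern.

```javascript
// Build a MediaTrackConstraints object asking the capture pipeline to start
// producing face metadata on frames. "faceDetectionMode" is a hypothetical
// constraint name used here purely for illustration.
function faceDetectionConstraints(mode) {
  return { advanced: [{ faceDetectionMode: mode }] };
}

// Hypothetical browser usage:
//   await videoTrack.applyConstraints(faceDetectionConstraints("bounding-box"));
```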
K
Okay,
I
I
can
add
that
encoding
part
there,
but
harald
any
any
further
suggestions.
Would
you
would
you
be
supportive
of
us
trying
to
prototype
this
on
chrome.
B
K
Thanks. I think Youenn was also supportive, and Bernard, and, I think, Jan-Ivar, I would like to get your comments on this also. Would you be supportive if we modify the proposal, removing the mesh and a few other things, and then we at least start prototyping in Chrome, and then maybe follow up from there, make changes from there?
F
I think I still have some concerns about whether this information would reveal a lot of differences between different hardware devices, that kind of stuff, and about making sure we have the right abstractions. I'd also encourage opening an issue on Mozilla standards-positions, so we can get a considered view from the entire company. Thank you.
A
I think it's definitely useful to prototype it. I would actually bring the issue of the metadata to the WebCodecs folks, because I think that's potentially another valuable point of integration.
K
Thank you. So, yeah, we are running out of time. My action items are: one, add more use cases, especially the encoder one; two, mail the WebCodecs folks; and three, start prototyping in Chrome. Okay...