From YouTube: WEBRTC WG interim, Nov 15 2022
See also the minutes of the meeting https://www.w3.org/2022/11/15-webrtc-minutes.html
02:30 Encoded Transform
23:54 WebRTC PC
24:10 Issue #2795: Missing URL in RTCIceCandidateInit
40:55 Issue #2796: A simulcast transceiver saved from rollback by addTrack doesn’t re-associate, but unicast does
45:26 Issue #2724: The language around setting a description appears to prohibit renegotiation of RIDs
50:16 Timing Model & WebCodecs
1:01:44 Face Detection
1:24:28 MessagePort on Capture Handle
1:41:59 enumerateDevices & Focus
A
We abide by the W3C patent policy, and only people in companies that are listed on this website are allowed to make substantive contributions. We've got a lot to cover. Today we're going to talk about Encoded Transform, a bit about timing models, some discussion of face detection, MessagePort on Capture Handle, and Media Capture Main. We have one more meeting this year and another one scheduled for January, and, as usual, that info is up on the wiki site.
A
The next meeting will be December 7th. I know the meeting's been moving around a bit, but that's what we settled on. Okay.
A
A bit about this meeting: the slides, as usual, are up on the wiki, we're being recorded, and do we have a volunteer for note-taking?
B
A
Thank
you,
Tom,
okay,
a
little
bit
about
the
code
for
conduct.
We
do
operate
under
it
and
we're
all
passionate,
but
let's
try
to
keep
it.
Correctional
and
professional
people
have
probably
figured
out
how
to
use
Google
meet
by
now,
but
if
you
want
to
get
in
the
queue
type
plus
q
and
minus
Q
to
get
in
and
out
use
headphones,
Etc
and
state
your
name
for
the
for
the
minutes.
Okay,
I,
don't
think
we'll
do
any
polls
today,
but
we
could
all
right.
A
So, just the usual reminder about document status: where a document is in the repo doesn't tell you if it's adopted; adoption requires a call for adoption. Editors' drafts don't represent consensus, but what the editors do. It is possible to merge PRs that lack consensus, with notes attached that indicate it. All right.
So here's what's on the agenda. We have 20 minutes for Encoded Transform discussions. We have a few issues from webrtc-pc. I'm going to talk a little bit about some timing model progress, and in general about VideoFrame. And then we're going to have a discussion from Riju on face detection, and then some sessions on MessagePort on Capture Handle and Media Capture Main. All right, a lot to do today, so I'll turn it over.
D
We didn't quite get it going this time; we got a few people saying "interested, but I can't make it," so nobody actually came. But I did some experimentation anyway, and I think I learned something. Next slide.
D
The intent with all this was to define how to connect these things into WebRTC senders and receivers, and to follow the WHATWG streams model of how to handle frames. That is, you tell the peer connection that you're going to work on it, and you do it on the main thread, but change the API around, so that saying "I want frames from this sender or receiver to go here" and...
D
..."I want to insert frames into the sender from here" are separate calls, because a lot of the use cases will only use one of them. All of this is described in the repository that I created for it. Next slide. So I did manage to get two demos written of how the whole code would look if all this worked. I made one very simple demo where the proposed APIs are shimmed in JavaScript: the frame counter demo, which counts...
D
...how many frames have passed by, and displays the count somewhere. This is fully supported by the shim that emulates the proposed API, so you can run it in the existing browser, which was kind of cute. I also did a demo of how you connect up things that take an incoming track and pass it out to an outgoing track without touching the bits.
D
Well, that is part of what you can't do with the current API, at least in the current browser. So I got the shim to start up and give me the expected error message very quickly.
D
So this is the frame counter example. The first line is how you get the stream of frames, in an object that also has the callback functions that you need to control bandwidth, generate keyframes, and so on. Then you define a class that does the processing for you, simple as that, and then connect it to the outgoing stream.
D
This
is
a
bump
in
the
stick,
but
the
thing
is
that,
with
this
API
there's
no
requirement
that
the
same
peer
connection
is
used
at
both
ends.
It
just
happens
that
that's
the
only
thing
that
works
at
the
moment
next.
D
So that's what I did, and what I think we learned: it's possible to write something using this API shape, and then see what we can support about it. But Peter wanted to tell us something about what he did, not at the same time, and some tests of what's possible. Peter, take it away.
E
All right, can you hear me? Yep, perfect. So I actually did my little hackathon at TPAC, or during TPAC, but during Harald's hackathon I copied it into this GitHub repository, so I did participate a little. Basically, I was trying to find out how much of what Harald's describing in a kind of one-way API...
E
...can we do with the existing kind of two-way API? Could we build a shim for one to the other? And I found that I could produce a one-way API for transport and another for codec, and I got it working between a client and a web server...
E
...running a WebRTC stack. But it lacks a few things, and I had some ideas for how to work around that. Looking at Harald's proposed API, though, it basically checks all the boxes. So I'll get into a little more detail; next slide.
E
The problem, or the thing that's lacking, is that you can't call the constructor where it says new RTCEncodedAudioFrame or RTCEncodedVideoFrame. That's the big problem. I did work around that, but it's pretty ugly; so I could make it work, but it would be really nice to have that constructor. And there's no congestion control feedback here to let you know how much you should write, which is one of the things Harald proposed.
E
So, next slide. On the receive side, you just read from the readable stream of the receiver, and you do something with the data, so that's actually pretty straightforward. Next slide.
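In today's two-way API (the `createEncodedStreams()` entry point shipping in Chrome), the receive-side read Peter describes might look roughly like this sketch; `handleData` is a placeholder for whatever the application does with the payload.

```javascript
// Build a transform step that hands each encoded frame's payload to the
// application, then forwards the frame so the pipeline stays intact.
function makeTap(handleData) {
  return (encodedFrame, controller) => {
    handleData(encodedFrame.data);
    controller.enqueue(encodedFrame);
  };
}

// Attach the tap to an RTCRtpReceiver using the existing two-way API.
function tapReceiver(receiver, handleData) {
  const { readable, writable } = receiver.createEncodedStreams();
  return readable
    .pipeThrough(new TransformStream({ transform: makeTap(handleData) }))
    .pipeTo(writable);
}
```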
E
If you want to make a one-way thing that looks like an encoder, you just have to read from the sender and do something with the encoded frame.data. But you don't have a way of controlling it, other than having your server send back, say, FIRs and REMBs, or something like that. You might be able to use RTCRtpSender.setParameters, but that would only be able to go lower than what the congestion control is doing.
E
So some direct controls might be nice; that's, again, one of the things Harald added in his proposed API. Next slide. On the flip side, if you want a decoder, you just write to the RTP sender, and again you would want a constructor so that you can pass something in. You can work around that, but it's hacky, and you don't have a way of knowing when that thing needs a keyframe, other than, again, signaling it out of band; I've got this working client to server...
E
...signaling if a keyframe is necessary. If we added those to the existing API, then we'd basically be able to do all the use cases I can think of, which is almost exactly what Harald said. And I looked at his proposed API, and it does all five of these things. So I think his proposal is really good, and I do think we could work around these things very hackily, but it'd be much better to add direct API points.
G
Yep. So, for these five things: I think there was already a proposal for the keyframe for the encoder, and we already have something in encoded transform, so that's already good. For the constructor for RTCEncodedAudioFrame and RTCEncodedVideoFrame, I'm not sure I'm really understanding why we need a video decode path, because...
G
...given the data, we have WebCodecs to actually do the decode, and then you can create a track for it, or you can render it yourself. So I'm not sure I understand the point of using peer connection for incoming data, because if...
F
E
G
And I think that Mozilla and Safari are actually prototyping things there.
E
G
At the same time, I heard a lot that we want these kinds of things to actually let the web application handle its jitter buffer on its own. So, right.
G
E
So I think that if WebCodecs is there and you want to do your own jitter buffer, then perhaps this is not needed. But if you wanted the same behavior that you would get out of WebRTC today (because, again, you can do this today), but more conveniently, because you don't want to write your own jitter buffer, then that might be a benefit here.
G
Okay, use cases would be interesting, like for the constructor for RTCEncodedAudioFrame and RTCEncodedVideoFrame; maybe for the others as well, but the others, I think, are much more straightforward in terms of use cases. So...
G
D
H
Oh yeah, so sorry, I'm having a bit of trouble getting to the high-level problems we're solving; maybe that was from last meeting. It seems to me that, if you wanted to forward: is this just instead of encoded transform, or is it a reimagining of it, or is it addressing small problems with it? I'm just a bit confused about how it all fits in. Like, for forwarding, we already have, excuse me, readable/writable stream APIs on a MediaStreamTrack, for example.
D
Well, were you at TPAC?
H
Well, if you could do a refresher in this context, it might help more readers than me; but if it's just me, that's fine.
D
Yeah, I think I can't reproduce the TPAC discussions directly. But, to my mind, the cases were quite compelling that it was not possible to address this in a sensible manner without significant extensions of the WebRTC encoded frame API.
D
Of course, whether this should be an additional API to that API, or a replacement, or an extension...
H
Okay, I see Peter's raised his hand. Is that in response to...
E
Yeah, yeah. Specifically, when you're talking about forwarding: you cannot forward well unless you know what the bandwidth estimation is on the send end, right? You've got stuff coming in, you want to forward it out, but you can't know what to do unless you have a bandwidth estimate. So that's one of the things that you would need.
E
H
Okay, yeah, I guess it wasn't clear to me. You know, I was expecting a presentation of "here are the things we lack, and here are the changes we want to make, based on the existing specs that we have," and it's not clear whether this is instead of, or on top of, but...
E
I didn't think it was the right time to say "here's what I think the API should look like," but Harald did have a link to what he thought it could look like, and I looked at that and thought: that's great, checks all the boxes. So I'm not particular about the shape of it, so much as: these are the things that, in my experience...
F
D
Now, this was also the reason why I chose this particular presentation format: to keep the discussion from TPAC going, and to not lock ourselves into an API shape until we have actually worked through whether the use cases that we want to support are possible, or simple, with the API shapes we want. We've had the experience before of locking ourselves into API shapes and then having to revise them later than we should have; I mean, the many, many iterations of the constraints.
E
For what it's worth, I think that these things can be added with fairly minimal additions. Adding a constructor: that's not a lot. A signal for a bandwidth estimate: that's not a lot, and these are generic. Keyframe has already been proposed. So I don't think this is a big delta from what we have.
D
So I didn't see a particular decision or consensus on this, but it seems that we need to enumerate and illustrate the use cases a bit more before we make a proposal for a change. So at least Peter and I will continue to iterate on use cases.
A
Okay, so here are the issues we're going to talk about today: 2795, 2796, and 2724. Okay, 2795.
G
Two more; oh, okay! So, at the last interim, we started to discuss the facility to expose the server-reflexive or relay server URL. You have a candidate, it's server-reflexive or it's relay, and you got it from a particular server, and it's nice to expose the URL on the candidate. In webrtc-pc 1.0...
G
...it was exposed on the candidate event, and at the last meeting it was proposed to move it to the candidate object itself. One thing we did not really talk about was whether...
G
...this new parameter would survive JSON stringification and recreation. So you have an ICE candidate; you translate it into JSON by calling toJSON, then you send it over the wire and reconstruct it, and then it should be the same. Before the introduction of the url parameter, there was full fidelity, meaning that all the different attributes are the same before and after serialization. That's why I added the JavaScript there, where you can assert that the...
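The fidelity assertion on the slide might look something like this sketch. In a browser, `reconstruct` would be `init => new RTCIceCandidate(init)`; the open question is whether `url` belongs in the list of members that survive.

```javascript
// Check that an ICE candidate survives toJSON -> wire -> reconstruction
// with all serialized attributes intact.
function roundTripsFaithfully(candidate, reconstruct) {
  const wire = JSON.parse(JSON.stringify(candidate.toJSON()));
  const copy = reconstruct(wire);
  return ['candidate', 'sdpMid', 'sdpMLineIndex', 'usernameFragment']
    .every((key) => copy[key] === candidate[key]);
}
```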
G
...we need to discuss this, because we did not really discuss it at the last interim. Next slide. So the two questions I have are: first, do we want to share server-reflexive or relay server URLs with remote parties? That is, do we want to expose it when you call toJSON? Do we want...
F
G
...to have it there or not, or do we want to omit this information by default? Of course, with JavaScript you can call toJSON and later on add the url to the resulting object yourself, if you want it; but what's the default? That's the first question. The second question is: do we want to keep the current RTCIceCandidate model? The current model is: if you call toJSON and you reconstruct the candidate, then it's always the same.
G
Do we want to keep that behavior, or are we fine with the idea that toJSON will be lossy, and that's fine, and developers will not care a lot? So these are the two questions I have to be working with. I put in my personal position, which is that there's probably no use case to expose these URLs to other parties.
G
So probably you would not want that, and I think we should keep the current model, which is simple and consistent. That would mean reworking our past decision and keeping the url information on the event, not the candidate. And there's a small piece of JavaScript there which allows you to very easily restore the behavior that is currently in the spec, meaning keeping the url on the candidate but not exposing it via toJSON. It's two lines of code, so it's fairly straightforward, and I...
G
...guess that for the third slide I will pass it over to Fippo.
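The "add it back yourself" option mentioned above really is small. A sketch, assuming a model where `url` stays readable on the candidate object but is omitted from `toJSON()`:

```javascript
// Opt in to signaling the ICE server url: take the (lossy) toJSON
// output and re-attach url explicitly before sending it to the peer.
function candidateToWireFormat(candidate) {
  const init = candidate.toJSON(); // omits url by design in this model
  init.url = candidate.url;
  return init;
}

// Usage sketch; signal() is a placeholder for your signaling channel:
// pc.onicecandidate = ({ candidate }) => {
//   if (candidate) signal(JSON.stringify(candidateToWireFormat(candidate)));
// };
```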
C
And typically you want to know what kind of connection you have: whether you're relayed or not, and maybe which ICE server you're going through. Currently that's done using getStats, where both url and relayProtocol have been available for a while, and I think putting it on the candidate object itself, instead of the icecandidate event, will automatically make it available there as well, which is much more useful. Questions?
B
G
In both cases you can do it; it's a question of what is most convenient and what is less error-prone, or less surprising, to web developers.
C
G
B
H
Yeah, so thanks for breaking down the reasons. It sounds like, Fippo, you were saying there already are items that are exposed on the candidate that are not part of toJSON, that you cannot reconstruct from the candidate line. Is that right?
C
H
G
I haven't checked that. That's not in the spirit of the spec, then. To me, the spec is saying: if you're calling toJSON and then calling the RTCIceCandidate constructor on the toJSON object, then you're creating the same thing, and the same values should be exposed on the candidate objects, on both of them.
G
If
that's
not
the
case,
then
maybe
we
should
make
the
spec
clear,
or
maybe
you
should
understand
why
this
is
the
case
and
whether
there's
a
bug
there,
because
that's
surprising
to
me
when
I
read
this
back
at
least.
B
C
H
B
C
G
Yeah, my understanding was that the event is sort of the local candidate, and the event is saying: okay, there's this one, and there's this other value. It's true that the candidate-pair-change event is exposing the local and the remote, and that's where it's actually interesting to have that; but other than there, maybe there's no use case.
H
For that: I mean, the way I see it, the ICE candidate is sort of a hybrid interface already. It's trying both to be what you need to send to the other side, but also a helpful "here's some extra local information" interface. And I think we already have that, so it just doesn't seem...
G
...that different. In that case, you probably need to be able to create an RTCIceCandidate that has a non-null url, because, for instance, if you create an event, you should be able to faithfully re-ship it, and currently we cannot do that. So that would mean we would at least need to change the constructor.
H
D
So, in my opinion, the candidate is behaving like a data object; there's no inherent behavior, and people expect to copy data objects. And people who see a toJSON method generally expect that it will return something that can be turned back into the same data object. And I think the idea of breaking that pattern, as we might have done already by accident...
D
But
we
shouldn't
repeat:
you
shouldn't
continue
this
one,
it's
a
bad
idea,
but
the
backwards
compatibility
problem
we
have
is
that
users
are
today
using
two
Json
as
if
it
was
package
up
candidate
in
a
the
in
in
a
form
that
only
presents
what
only
presents
the
stuff
I
want.
I
want
to
send
to
a
remote
party
and
that's
a
bad
thing.
D
G
D
Subclassing, yeah: subclassing candidates to have RemoteCandidate and LocalCandidate as subclasses would be an interesting idea.
H
Well
yarn
over
here,
where
would
chasing
technical
Purity
here
and
that
I
agreed.
It
is
a
word
already
and
we
have
an
interface
that
serving
two
purposes
if
we
subclass
I'm,
not
sure
that
will
get
us
out
of
it
because
it
still
depends
on
what
do
you
get
on
on
Ice
candidate?
What
interface
would
you
get?
H
You would get a local candidate; but if you then package that up, you're sending it to the other side, which we didn't want. So I think there's precedent in the web platform for a custom toJSON, where not every attribute of a web interface is necessarily enumerable, but you can still get the properties if you want them. So I kind of feel we already have the awkwardness, so I would like to see...
D
G
Let's continue that on GitHub and try to finish the discussion there.
D
H
We found some more corner cases, especially to do with rollback. RFC 8829 says that, on rollback, a transceiver must not be removed if a track was attached to it via the addTrack method; normally, a transceiver that was created by setRemoteDescription disappears if you roll it back. So an application may call addTrack, then call setRemoteDescription with an offer, then roll back...
H
That
offer
then
call
create,
offer
and
still
have
an
end
section
fine,
but
this
is
a
bit.
It
creates
a
bit
unusual
situation
where
an
ad
track.
So
if
a
set
remote
description
has
happened
on
first
simulcast
offer,
for
example,
if
you
then
call
add
track,
if
an
application
calls
ad
track,
you
can
actually
save
effectively
save
a
a
transceiver
that
set
remote
description
created
for
this
offer
from
being
rolled
back.
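The call sequence under discussion, as a sketch; `simulcastOffer` stands in for a remote offer carrying RIDs.

```javascript
// addTrack "saves" a transceiver created by setRemoteDescription from
// being removed when the offer is rolled back (per RFC 8829).
async function rollbackScenario(pc, track, simulcastOffer) {
  pc.addTrack(track);
  await pc.setRemoteDescription(simulcastOffer); // associates a transceiver
  await pc.setRemoteDescription({ type: 'rollback' }); // transceiver survives
  return pc.createOffer(); // still contains an m= section for the track
}
```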
H
Let's
switch
a
bit
of
a
magic
trick,
but
anyway,
so
this
is
already
a
corner
case
and
we
still
need
to
iron
out
how
this
is
going
to
work,
and
we
found
some
inconsistencies
here
that
in
Chrome,
which
was
the
browser
that
worked,
the
best
here,
I
think
still
would
Chrome
would
in
Chrome.
Simulcast
is
rolled
back
to
a
Riddler's
unicast
if
the
transceiver
was
created
by
had
track,
I.eu
called
AB
track
and
then
set
remote
description.
But
if
you
called
Central
description,
then
abstract
it
didn't.
H
So
the
proposal
is
to
roll
back
simulcast
to
Riddler's
unicast,
also
for
transceivers
created
by
set
remote
description,
and
what
this
would
do
is
it
would
make
things
more
consistent
and
it
makes
AB
track
behave
the
same
before
and
after
set
remote
description
and
it
avoids
having
to
deal
with
this
unicorn,
which
is
an
unassociated,
simulcast,
abstract
receiver.
Because
now
and
we
avoid
having
to
wonder
what
does
reassociating
an
abstract
receiver.
H
That
already
has
simulcast
mean
so,
and
this
seems
like
a
defensible
interpretation
on
our
of
RFC
section
41102,
which
says
that
rollback
discards
any
proposed
changes
to
the
session.
Returning
the
state
machine
back
to
stable
State,
meaning
that
we
we
also
have-
we
already
have
language
for,
for.
H
If
you
called
ad
track
to
create
a
transceiver
and
then
the
rollback
modify
that
receiver
and
then
you
roll
back
the
offer
put
it
back
the
way
it
was
before
and
undo
that
we
just
haven't
implemented
I
think
it
also
makes
sense
to
interpret
that
advice.
I
mean
we.
If
it's.
If
we
set
remote
description,
creates
a
transceiver,
then
then
later
application
fishes
out
by
attaching
an
ad
track.
H
At
that
time,
it
shouldn't
be
able
to
pull
out
a
simulcast
receiver
out
of
that,
because
the
simulcast,
the
thing
that
introduced
simulcast
was
rolled
back.
So
it's
sufficient
to
just
what
we
should
do
is
basically
restore
that
to
a
basically
plain
old,
unicast
transceiver,
which
I
think
goes
with
the
original
tent
intent
here
of
not
undermining
in
the
applications
intent.
H
D
So, action item: propose a PR? Yes.
H
All
right
so
a
separate
issue
also
to
do
with
negotiation
and
simulcast.
H
Last
month's
meeting
we
drastically
reduced
the
cases
of
where
central
motion
description
could
fail,
rather
than
reject
or
or
their
promise
promise,
and
so
we
we
got
it
down.
We
had
one
remaining
item
when
it
came
to
Red
negotiation
that
we
still
failed
over
and
we
basically
to
read
a
note
here,
like
a
change
in
red
values,
is
tolerated
from
remote
offers
to
receive
simulcast
as
long
as
at
least
one
red
matches
read
in
the
encodings
that
were
previously
negotiated
or
the
offers
to
no
longer
receive
simulcast.
H
Last
month
we
tried
to
move
the
spec
closer
to
implementations
because
implementations
weren't
rejecting
on
any
of
this,
and
we
stopped
at
this
one
thing
and
my
proposal
is
basically
to
remove
it
as
well,
because
it
turns
out,
implementations,
aren't
failing
and
it's
actually
simpler
to
just
drop
down
to
the
first
layer
and
answer
with
unicast,
which
is
what
the
RFC
8853
that
the
governs
offer
answer
would
like
to
have
happen
because
there's
actually,
according
to
that
RFC
there's
very
few
situations
where
an
Sr
et
cetera
description
offer,
should
error
out
due
to
an
inconsistency
with
something
previously
negotiated.
H
The
Proposal
is
basically
to
remove
this
section
and
next
slide
and
add
a
couple
of
a
single
line
further
down
in
green
that
in
the
answer,
we
basically
pick
up
this
new
situation,
where
we
said
if
description
indicates
that
simulcast
is
no
longer
supported
or
desired,
or
the
description
is
missing.
All
of
the
previously
negotiated
layers,
then
removal
dictionaries
in
the
send
encodings,
except
for
the
first
one
on
aboard
these
substance,
basically
answered
with
unicast.
H
So
this
would
match
local.
This
would
match
Roman
Safari
I
believe,
even
though
there's
a
file,
this
CR
bug
on
an
inconsistency
there
where
it
seems
to
leave
the
encodings.
D
This
is
this
is
the
one
where
you
discovered
that
the
Chrome,
the
disables
layers,
rather
than
removing
them
in
this
case
yeah,
doesn't
sound
unreasonable
to
me
once
we,
because
we
already
accepted
the
language
that
says
that
we
remove
layers
instead
of
just
disabling
them.
H
Yes,
I'm
very
happy
with
her
little
the
change
here,
so
it
seems
to
fall
in
line
which
I
think
is
a
good
sign
that
it
doesn't
introduce
new
behaviors.
It
just
extends
them.
D
I
think
this
works.
Will
you
will
you
add
that
add
tests
too.
H
Yes,
the
test
will
come
from
Firefox,
adding
some
parameters
good
shortly
cool.
Thank
you.
A
Hopefully we can get back on track here. This should be short: a little bit of an update on video frame metadata, which has been in discussion in the Media Working Group. Actually, hold on, let...
B
A
Okay, all right. So, a little bit of an update on what's been going on in the Media Working Group. The group created a video frame metadata registry, which now has registration requirements which, among other things, require a spec that's been adopted as a work item in a working group.
A
The process is that you enter registration requests as WebCodecs issues; an example is issue 607, which is human-based metadata, and we're going to be talking about that in the next presentation. A little bit about the relationship with the requestVideoFrameCallback specification, which was originally in a community group and now, I believe, will be merged into the HTML specification; I'll show you a little bit about that.
A
But basically, there have been questions about the relationship between the video frame callback metadata defined there and VideoFrame metadata, such as whether they should inherit, and also whether some of the attributes exposed in the video frame callback metadata should be exposed in VideoFrame metadata. So we'll talk about a few of the implications of this. This is the callback metadata that is in the requestVideoFrameCallback spec, and you'll notice...
A
...a few things: there's timing info for all aspects of the pipeline. For example, you have things like the captureTime and the rtpTimestamp, but also the receiveTime; and then, as you get closer to displaying, you get the processingDuration, the expectedDisplayTime, the presentationTime, etc. So, a whole bunch of timing info relating to various points in the processing pipeline.
A
What's interesting about this is that it mixes not only what I would call codec-related timing, but also things relating to RTP, like the rtpTimestamp. So this brought up a whole bunch of questions, at least in my mind, as to where this metadata is exposed in the various APIs we've been working on in this working group.
A
As an example: in media capture transform, when I convert from a track to a stream of video frames, should I expect some metadata to be in those video frames? For example, if I look at videoFrame.captureTime, will I see it? It seems like there are also assumptions about what will be going on in webrtc-pc. For example, if I use the rvfc spec and look at videoFrame.rtpTimestamp, the only way the timestamp could have gotten there would have been through RTP processing, as in webrtc-pc.
A
How does that work, exactly? And also, is the metadata passed across the pipeline? For example, when I go from the send side to the receive side, how does that work, exactly? An example of this: with media capture transform, say I convert to a stream of video frames, do some stuff, call VideoTrackGenerator, and then pass this to WebRTC. At the end of it all, on the receive side, is this metadata visible in rvfc, and at what points can I get at it?
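The question can be made concrete with a sketch. `frame.metadata()` is the WebCodecs entry point for registry metadata; whether the timing fields are actually populated end to end is exactly what's undefined today.

```javascript
// Pull out the timing fields under discussion; absent fields simply
// come back undefined.
function summarizeTimingMetadata(metadata) {
  const { captureTime, receiveTime, rtpTimestamp } = metadata;
  return { captureTime, receiveTime, rtpTimestamp };
}

// Browser-side sketch: read one frame out of media capture transform
// and inspect its metadata.
async function inspectFirstFrame(track) {
  const processor = new MediaStreamTrackProcessor({ track });
  const reader = processor.readable.getReader();
  const { value: frame } = await reader.read();
  const summary = summarizeTimingMetadata(frame.metadata());
  frame.close();
  return summary;
}
```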
A
In a media transform as well, for example: does it somehow get passed from the video frames into the track, then go through the WebRTC encoder and get passed on the wire, etc.? How does all that work? Another example would be: is this metadata available in encoded transform? For example, is it a property of encoded chunks in encoded transform, or in WebCodecs as well, or does it somehow get lost in there? Anyway...
A
G
I'm in the queue, yeah.
A
G
So I filed some of these issues. The plan is actually for captureTime and the rtpTimestamp and so on to move from video frame callback metadata to VideoFrame metadata, right? And then there is consistency, meaning that if you're creating a MediaStreamTrack from a VideoTrackGenerator, and video frame metadata like captureTime is set, then it will be available if you are actually displaying the track in a video element and calling requestVideoFrameCallback. So this should be very consistent there. And, like, media capture...
G
...transform will not preserve all these things by magic. It's just that if you pass a video frame that has this metadata, what happens is you're actually cloning the video frame, so the metadata will also be cloned. That's what is happening for webrtc-pc; for instance, the captureTime might be encoded as an RTP header extension.
F
G
So, if the RTP header extension is not dropped on the floor, it will end up on the receive side, whether you're calling requestVideoFrameCallback or getting the video frame from media capture transform on the peer connection, and that should work fine. In encoded transform, we are not exposing that metadata.
G
Maybe
we
should
like,
maybe
the
capture
time
in
the
and
could
it
transform,
is
something
that
is
useful
and
that
we
should
expose
at
the
and
call
it
video
chunk
level.
For
instance,
we
haven't
received
feedback
from
use
cases
there,
so
it
should
be
per
per
previous
cases.
G
Are
there
issues
for
this
working
group
to
discuss?
Maybe,
for
instance,
we
have
a
working
group
that
is
somehow
responsible
to
say
you
have
a
track
and
you
pass
it
to
the
media
element
and
you
you
process
it
and
you
render
it.
So.
How
do
you
compute
presentation
time,
for
instance,
which
is
something
that
is
exposed
to
video
frame
request,
video
frame
callback?
Maybe
we?
G
We
should
actually
have
something
there,
because
the
video
track
generator
is
allowing
you
to
to
set
various
timestamps,
but
I
will
what
will
happen
on
the
render
side
if
you're
using
a
video
element?
That's
not
something
that
we
are
defining
in
any
way,
so
you
can
do
data
buffer,
you
can
do
you
can
do
it.
The
user
agent
can
do
whatever
it
wants,
and
maybe
that
that
is
something
that
is
hurting
and
that
we
should
try
to
to
dig
into
and
give
more
Precision
so
that
user
agents
are
more
aligned.
G
D
I mean, for metadata that a particular processing element knows about (an encoder will probably add the encoded frame size, for instance), that will not be passed unchanged. But should we ask for a general principle that a processing element with metadata on both its input and its output passes along metadata that it doesn't understand? Because I don't think we can expect all processing elements to understand all the metadata all the time.
A
Yeah,
so
the
rules
for
the
registration
is
that
the
the
behavior
is
supposed
to
be
defined
in
the
specs,
so
the
registry
doesn't
Define
any
Behavior
right
it
just
cites
stuff,
and
the
problem
is
that
I
guess
what
you're
there
there
is.
No
general
rule,
that's
imposed
by
the
registry.
A
I mean, you could decide, though I don't know that you're going to get all the working groups in the W3C to agree on a general rule for every API. It would be possible, within a given spec, to say "this is what this spec does," but that's kind of about it. I guess what I'm saying, Harald, is that everything is undefined. If you have a proposal for a rule, it would be great to hear it.
A
G
I agree with Harald there. I think that, for instance, for WebCodecs the feedback we received was that JavaScript can handle the metadata passing from the video frame to the encoded video chunk, so WebCodecs does not have to handle it itself — let JavaScript do it. But for encoded transform it's a bit different, because we're passing video frames as a track, and then on the transform—
G
—we get encoded chunks, and maybe there it would be nice if we could pass the data from one to the other. And I guess the registry of the metadata could say: hey, for WebRTC encoded transform this metadata is preserved and is exposed as that in the encoded video chunk, for instance. I guess we should sort this out when we are seeing compelling use cases where this data should actually go there for a given metadata.
G
Currently we do not have any API for it, so you cannot stick in an arbitrary JavaScript object. If we go there, then that's another topic that we should discuss.
A
G
Yeah, I think we tried to file these issues on WebCodecs. The one we might be missing is the renderer side — media capture main, maybe, right? Are we able to clarify it? That might be the missing issue.
A
We should — and also maybe media capture transform. Okay, so I think what we'll do is try to file specific issues and bring them up in future meetings.
A
All right, thank you. So now we're going to pass the baton to the Intel folks to talk about face detection.
I
Yeah, thanks Bernard. Riju is traveling today, so I will cover for him and discuss the face detection section today. We have now updated our face detection proposal, which uses the new video frame metadata.
H
I
The video frame metadata allows returning video-frame-specific metadata to applications, and we updated our proposal so that it now adds a member to the video frame metadata which is used as the description for the faces found in that video frame. In that process we also renamed the detectedFaces member to humanFaces — this is to anticipate future extensions to the video frame metadata, so we wanted to be more specific.
I
So we removed most of the constraints that we presented before. We also removed the mesh representation. We still have the contour remaining, and we have removed the expressions. Something that is — sorry.
I
That was my bit. Something that is still remaining is the landmarks.
I
Next slide, please. So here are the new metadata members. What is nice about the video frame metadata is that it actually splits our proposal nicely into two parts. First we have the metadata that is used to describe the faces, and then we have the constraints, which are used to control what kind of metadata is generated from the media stream tracks. On this slide you can see our proposal for the metadata — the description of the faces.
I
A new member, humanFaces, goes into the video frame metadata; it is a sequence of HumanFace, which is defined on this slide. In the HumanFace dictionary we have four members: id, which is used for tracking faces between frames.
I
Then we have probability, which is the likelihood that the detected object is indeed a face. Both of those are nullable: if an implementation doesn't want to, or cannot, set the id and probability, it can set those members to null. And then we have the contour and landmarks, which are both sequences of points.
I
The contour in the case of HumanFace, and the landmark contours in the case of the face's landmarks. In the upper right corner you can see the definition of HumanFaceLandmark, which contains two members: the type of the landmark, and the contour.
I
There are a few different kinds of human face landmarks — basically eye, mouth, and nose — and there are some requirements from the video frame metadata registry that metadata has to fulfill. One of the most important, or maybe most practical, requirements is that the members must be serializable.
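The dictionary described above can be sketched in application code as follows. The member names (humanFaces, id, probability, contour, landmarks) come from the slides as presented; the proposal is not final, so treat this shape as an assumption.

```javascript
// Sketch of consuming the proposed face metadata off a VideoFrame.
// In a browser, `metadata` would come from videoFrame.metadata().
function summarizeFaces(metadata) {
  return (metadata.humanFaces ?? []).map((face) => ({
    id: face.id,                    // nullable tracking id, stable across frames
    probability: face.probability,  // nullable likelihood this is a face
    contourPoints: face.contour.length,
    landmarkTypes: face.landmarks.map((lm) => lm.type), // "eye" | "mouth" | "nose"
  }));
}

// Mock of what one detected face might look like.
const mockMetadata = {
  humanFaces: [{
    id: 7,
    probability: 0.93,
    contour: [{ x: 10, y: 10 }, { x: 90, y: 10 }, { x: 90, y: 80 }, { x: 10, y: 80 }],
    landmarks: [{ type: "eye", contour: [{ x: 30, y: 30 }] }],
  }],
};

console.log(summarizeFaces(mockMetadata));
```

Note that every value here is plain data (numbers, strings, arrays of points), which is what the registry's serializability requirement implies.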
I
Let's go a bit deeper into the details of a few members. I'm not listing all of the members on this slide anymore, just a few aspects of some of them.
I
For the id: the id would be an integer which is unique — one specific integer used for one specific face between successive frames, to track a face — and that is already supported by some of the platform APIs.
I
Then we have the probability, and we would like to define the probability to be a probability as in mathematics. The platform APIs offer different versions of this: in HAL3 on Android and Chrome OS there is a score, and in the Windows API there is a confidence, but it is a probability. And if an implementation doesn't want to implement that, it can set it to null.
I
The only difference, basically, would be that we would have a fixed number of points in the bounding box, whereas in the contour we can use a dynamic number of points, depending on implementation capabilities. The contour can also specify a bounding box, or it can also specify a specific center point — if we define that when the contour contains a single point, that point is the center of the landmark or face. Next slide,
I
please. The second part of the proposal is the constraints, which control how the face metadata is generated from a media stream track. Here we have two different members in each of the dictionaries. We would have, basically, the main constraint to control face detection — faceDetectionMode — which would be—
B
I
The landmark mode would include both the faces and the landmarks in the faces. And we would also like to have faceDetectionMaxContourPoints, because the application can use the max contour points to tell the user agent how complicated an algorithm it wants it to use.
I
In the simplest case we would have a bounding box with four points. If the implementation has some slower, more power-hungry algorithm which would return an accurate contour, it doesn't need to use that algorithm if the application stated that it only wants four points — that can save a lot of computation in some cases. That is the reason why we would like to have the field to specify the maximum number of points in the contour.
I
So we would have the faceDetectionMode in the constraints — with contour only, or landmarks, which includes both contour and landmarks — and then we would have the max contour points, which would limit the maximum number of contour points that the user agent would return to the application, for both the face and the landmarks.
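The constraint side described above might be assembled like this. The member names (faceDetectionMode, faceDetectionMaxContourPoints) and the mode values are taken from the slides and may well change; nothing here is a shipped API.

```javascript
// Sketch of building the proposed face-detection constraints.
// Mode values below are an assumption based on the discussion:
// "contour" for faces only, "landmarks" for faces plus landmarks.
function buildFaceDetectionConstraints(mode, maxContourPoints) {
  const allowedModes = ["none", "contour", "landmarks"];
  if (!allowedModes.includes(mode)) {
    throw new RangeError(`unknown faceDetectionMode: ${mode}`);
  }
  return {
    faceDetectionMode: mode,
    // Asking for only 4 points lets the user agent pick a cheap
    // bounding-box detector instead of a slower full-contour algorithm.
    faceDetectionMaxContourPoints: maxContourPoints,
  };
}

const constraints = buildFaceDetectionConstraints("contour", 4);
console.log(constraints);
// In a browser this would then be applied to a track, e.g.:
//   await track.applyConstraints(constraints);
```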
I
Here is a worker example — the interesting parts are marked in bold. Basically this is just using the video track generator and a transform stream to extract the video frames from a media stream track, sending those frames to be processed in the worker; the worker then gets the metadata from the video frame and just displays the contour points — in this case four points — on the console log. I'm not going further into the details, because it's hard to read code in a talk, but we can refer to this if needed.
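The worker-side step described above can be sketched as a TransformStream transformer that reads the (assumed) humanFaces metadata off each VideoFrame and logs contour point counts. The plumbing — MediaStreamTrackProcessor / VideoTrackGenerator and transferring the streams into the worker — is browser-only and is indicated in comments.

```javascript
// Transformer for a TransformStream: log face contours, pass frames through.
// The humanFaces metadata shape is the proposal from the slides, not final.
function makeFaceLoggingTransformer(log = console.log) {
  return {
    transform(frame, controller) {
      const faces = frame.metadata().humanFaces ?? [];
      for (const face of faces) {
        log(`face ${face.id}: ${face.contour.length} contour point(s)`);
      }
      controller.enqueue(frame); // frame continues down the pipeline unmodified
    },
  };
}

// In the worker (browser only), roughly:
//   processor.readable
//     .pipeThrough(new TransformStream(makeFaceLoggingTransformer()))
//     .pipeTo(generator.writable);
```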
F
I
Now I would like to ask whether this is a good proposal for everybody, or what changes we might still need to make to it.
G
I think I'm the first in the queue. Thanks — I think it's improving a lot, and I think it's exciting to see that work moving forward.
G
A few thoughts coming from your slides. There's probably no need for nullable dictionary members; maybe some of them should be required — I don't know. For instance, maybe contour should be required, because if none is required you could end up with metadata which is completely empty. Maybe that's not what you want; I'm not quite sure. I'm also interested in hearing from web developers — like, are they saying, hey—
G
Maybe they want the center point, or maybe they want the bounding box, which is four points, or maybe they want the best contour possible. So I was wondering whether a sequence is best there, or whether there should be separate member fields for these things. In one of your examples — I'm also not sure about faceDetectionMaxContourPoints. Do we need it now? I'm not sure; maybe it could be left for later.
G
Maybe we should say — a lot of web developers are about to say: I just want the bounding box; then we give them a bounding box object. And if they want the best possible contour, then it's a sequence and they have to deal with the complexity. Also, in one of your slides you're using exact in getUserMedia, and for new constraints we do not allow this — I would guess we would stick with that approach, so you might want to update your example there.
G
I
G
That's a good question. You will probably be the first one to add a registry entry. I was thinking that maybe it should be a living document, where the entry itself would be the document where the metadata fields are specified. The alternative is for the registry document to just say: okay, we are adding this member, and the actual definition is somewhere else — in media capture extensions. I guess either model is fine. I don't know — Bernard, do you have any opinion on what—
A
Well, like you said, the registration would be in WebCodecs, but it has to reference a spec. That could be media capture extensions, or — I think it's probably more likely media capture extensions than media capture main, because we're trying to bring media capture main forward.
I
One aspect — one thing to consider — is that the constraints and the metadata have a dependency: the constraints specify what kind of metadata is going to be stored in the metadata.
I
G
I think that makes sense. I think there was some feedback from the WebCodecs folks that they would like to review the new version of the metadata again.
G
But then it's best to have a document in WebCodecs which would define these things, and then when you want to update it, it can happen in WebCodecs land. So I guess the best thing is to move on with the PRs there, and then we will see how we should proceed.
F
Yep — so I like this; it looks useful and interesting. I had a couple of very minor points. I think it would be good to have some language about the lifespan and meaning of the id. In particular, it would be good to be confident that it doesn't correlate with anything in the face — so if you get two streams with the same face in them, you shouldn't get the same id.
F
Right, exactly — so just some language there about the properties of the id would be great. I mean, I think that's where you were going anyway, but I think it would be good to have the language explicit there. And then the other thing is—
I
Regarding the bounding box: you are definitely right that a four-point contour is not the same as a bounding box, but for the application it's trivial to calculate the bounding box from the contour points. And if in the future the contour becomes more complicated, then we would need two new fields — new members — and that would clutter the metadata; I would like to avoid that if possible.
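As noted above, deriving an axis-aligned bounding box from contour points is a few lines of application code:

```javascript
// Compute the axis-aligned bounding box of a list of {x, y} points.
function boundingBoxFromContour(points) {
  if (points.length === 0) return null;
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const { x, y } of points) {
    if (x < minX) minX = x;
    if (x > maxX) maxX = x;
    if (y < minY) minY = y;
    if (y > maxY) maxY = y;
  }
  return { x: minX, y: minY, width: maxX - minX, height: maxY - minY };
}

console.log(boundingBoxFromContour([
  { x: 12, y: 40 }, { x: 80, y: 8 }, { x: 64, y: 90 },
]));
// → { x: 12, y: 8, width: 68, height: 82 }
```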
B
D
Yeah — where was that?
A
H
Yes, so just taking a step back here: I think there's a larger question here of whether we merge this, and whether it looks good overall. From a privacy perspective, as far as I can tell, these are just optimizations — something that applications could figure out on their own; it would just be slower — so I don't see any inherent problem with it.
H
I don't know if it was mentioned, but it seemed to me that for this working group the relevant parts are only the constraints part, and maybe the metadata part should be in the realm of WebCodecs. I don't know if that's what you said earlier, but I would support such a split. Other than that, I think this looks good.
A
G
H
I think definitely a CfC would be good at some point. I don't know how much we want to clean things up first, or do it after that — I'm open.
J
Okay, thank you very much. So I'm going to talk about adding a MessagePort to capture handle. Part of my argument is that it should be tied to capture handle, but theoretically any MessagePort that allows a capturing application to communicate with a captured application is in the right scope for me.
J
So let's just recap — next slide, please — why we have capture handle. Capture handle allows an application to declare — sorry, one slide back, thank you — capture handle allows an application to opt into exposing metadata to a capturer. So, for example, right now we see that Bernard is controlling Slides in Meet, and theoretically Slides could declare: hey, I'm Slides, here's my session ID, and here's how you can phone home. And Meet can use that to communicate directly with Slides — for example, it can—
J
You know, it can be the name of a BroadcastChannel, if both of them are on the same origin; it can be an IP address; it can be a session ID that's mutually intelligible given some shared cloud infrastructure, like Google has. And when that happens, those two applications can now start communicating — and in the case of Meet and Slides, if Bernard happens to have the right kind of account, then Meet is going to allow Bernard to control the slides directly from inside of it. Next slide, please.
J
So, as we have mentioned, this communication happens over shared cloud infrastructure — in our case. Or actually—
A
J
It happens locally, using some hack that allows you to have a BroadcastChannel that is same-origin — because Meet and Slides are not really the same origin, but they can embed the same-origin iframe and communicate through it — and, you know, it's a bit hacky. Furthermore, once storage partitioning is implemented, it won't be possible anymore, for example with a site like YouTube.
J
So we have a case where, even if the same entity controls both applications, the applications might not be able to communicate locally; and communicating over shared cloud infrastructure incurs costs, is less robust, is less efficient, and makes use of a scarce resource in the form of bandwidth.
J
So what I'm proposing is a MessagePort instead — captured and capturer start communicating directly. Next slide, please. Now, there are a couple of challenges here, and I would like to start enumerating them. Number one: when you capture somebody, you capture a tab, but that tab can be navigated, right? So at any moment you can find that you're now capturing somebody else; or maybe the entity that you want to communicate with, you only start capturing it two minutes into the capture.
J
Similarly, Chrome, for example, lets you press a button inside of Chrome to share this tab instead — so at any moment the user can decide that you're capturing some other application, even if no navigation takes place.
J
So all of those complicating factors mean that the solution would not be just to expose a MessagePort, but also to expose some events that are tied in, so that you would know when you can start using the message port, and when you can no longer use that message port — and maybe now you can use a different message port, because you're capturing something else. And the nice thing here is that all of those problems have already been considered.
J
Now, when we go to examine what kind of solution we want to have here, I just want to remind us that we probably would like to keep the design principle that the captured application does not realize it's been captured unless it allows that. So the entity that picks up the phone and dials is going to be the capturer, and the captured side is going to pick up the phone and answer, or not. Next slide, please. So the API that I'm suggesting is as follows.
J
First, we see CaptureHandleConfig. This is the dictionary that we have nowadays, which the captured website can use in order to declare its metadata — without even knowing whether it's being captured. It can say: hey, do I want to expose my origin; here's my free-floating handle, which is a string and can be a session ID, for example; and it can also use an allowlist or blocklist—
J
—you know, to control who actually sees this. I suggest that we just add an event handler here, okay? And the events are going to be of the type: hey, have I started being captured, or stopped being captured — and if I have, here's the port for which this is relevant. You can imagine: the first time you get captured by a given entity—
J
—you're going to get the started event, and when for whatever reason it stops capturing you — because 'share this tab instead' was pressed, or something got navigated, or it just decided to terminate the capture — you get the same event, only with stopped, and then you know: oh, this port is no longer relevant. Next slide, please. And on the other side, the capturing side, similar things are going to happen, okay?
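The captured-side bookkeeping described above — started/stopped events, each carrying the port they refer to — can be sketched like this. The event shape ({ type, capturerId, port }) is hypothetical; it mirrors the proposal on the slides, which is still under discussion.

```javascript
// Track one MessagePort per concurrent capturer, driven by the
// proposed started/stopped events on the captured page.
class CapturedPorts {
  constructor() { this.ports = new Map(); } // capturerId -> MessagePort
  handleEvent(event) {
    if (event.type === "started") {
      this.ports.set(event.capturerId, event.port);
    } else if (event.type === "stopped") {
      this.ports.delete(event.capturerId); // that port is no longer relevant
    }
  }
  get activeCaptures() { return this.ports.size; }
}

// A page can be captured by several capturers at once, so each event
// identifies which capture (and which port) it refers to.
const state = new CapturedPorts();
state.handleEvent({ type: "started", capturerId: "a", port: {} });
state.handleEvent({ type: "started", capturerId: "b", port: {} });
state.handleEvent({ type: "stopped", capturerId: "a" });
console.log(state.activeCaptures); // → 1
```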
J
So right now, when you start capturing, you can examine the capture handle. If the captured site did not set anything up — namely, in this case, they don't set the handle — you just don't get any handle, and you don't get any port: hey, I'm capturing some kind of legacy website that hasn't been updated to set the capture handle. But if you do get one, you can see: oh, this website can actually support the message—
J
—port. If the captured site does support that, you can now call a method called getMessagePort, and you get a port; and the other side — the one that you're capturing — gets an event saying: hey, you are being captured, and that entity wants to talk to you, and here's the port. And now both sides have a port, and they can communicate. Likewise, if either side can no longer communicate—
J
—for whatever reason, both sides are going to become aware of that. So, for example, if the user presses stop, then both sides are going to know: hey, I've stopped. In the case of the captured website, it's going to get the event with the stopped type; and on the side of the capturer, it's just going to realize that the entire capture went away, and that's going to be its signal.
J
There should be an event for that. Or if, for example, the user — or anything — navigates the captured site's top-level document, there's already an event for that in capture handle, and that event is now going to be annotated with the messagePortInvalidated flag; and then the capturer will know: hey, this port is no longer useful, no need to send any messages over it.
J
I don't know which is first — Youenn from before, or Harald? Yeah.
G
There are some things that seem a bit off, but we can work on that. There are things like exposeOrigin, for instance: exposeOrigin equals false while you can use a message port — that does not make sense. We should probably mandate exposeOrigin to be true.
D
G
The handler in the dictionary seems a bit weird: usually you have events on objects, and then you can register a handler, addEventListener, and so on. So that's probably what we want there as well. I'm not sure we need supportsMessagePort, so maybe I would try to remove, at first, as much as we can — that's why I'm saying I'm not sure about supportsMessagePort, and the same for messagePortInvalidated.
G
I think that for messagePortInvalidated it might be good to discuss this with the HTML spec folks, because it's a known thing with MessageChannel: you have two message ports in two different documents, and one will go away and the other one will not know about it — that's already an existing issue. So there's nothing new here, and I wonder whether there's a good pattern, or whether there's something else that needs to be solved before we advance this issue there.
G
We should also maybe remove it at first and then discuss it later. And also the name getMessagePort — I'd like something like 'open' something, because getMessagePort seems like a simple getter — it's not mutating anything — but it's actually opening a message channel. It's revealing to the capturee that it's being captured, so maybe we should have a good name there that is saying: hey—
G
—it's not like a simple get, it's something a bit more involved as well. And yeah, the integration with CaptureController makes sense as well, I think. We have these two objects, so it seems good to use it — that's a good place to put the call, I think.
J
So what I'm hearing is: you agree with the use case — that's great — and you agree that the MessagePort is necessary. Not exactly in that order, but you said that you would like to rename getMessagePort to openMessagePort; I think that makes a lot of sense.
J
You mentioned that you would like to remove a couple of things, if we can discover that they're not strictly necessary, and I definitely agree with the sentiment; we can iterate over the particular details. Given the time limit, I don't know if we want to deep-dive into any of the details you've mentioned, but I'm definitely open to discussing them. Yeah.
J
One clarification that I need to give to the group — I've given it, so I believe that you and Youenn have seen it in the thread — but I also want to mention the complicating factor that a tab can be captured by multiple different capturers at the same time, which is part of why I want an event: because the page can be captured multiple times, and this pattern acknowledges that. As for what clarification I would like to ask — a few, and then I'll move on to Youenn.
J
G
I don't know about that. I don't like the event handler in the dictionary — that seems wrong to me — so we should change it. Whether it's on capture handle or mediaDevices, I don't have a strong opinion.
G
There, definitely — I mean, I guess that knowing who you're talking to, knowing the capturer: if I'm being captured, I think I need the origin of the capturing side before calling getMessagePort. That's why, conceptually speaking, having — yeah — I'm using capture handle to say I'm exposing my origin, and putting the APIs there, seems to make sense as well. So tying it like that might be fine as well.
J
I only partially understand the concern here, because I think that a web application that always checks the origin is just going to find that an undefined origin never matches what it allows. So if your allowlist has apple.com and apple.se — excellent, undefined is not a page that we—
G
—we shouldn't allow the origin to be undefined or empty. We shouldn't allow this.
J
We can iterate over that later — it's not clear to me why not, but sure. I'll just mention one more thing, Youenn: the tie to capture handle happens on both sides, and you've only mentioned so far why you don't like it on the captured side — you don't like an event handler in the dictionary. What about the other side? That side, sure, you could move it to mediaDevices instead; it doesn't make much of a difference to me. What about — it might even be better, right?
J
I need to think about this. I'm more interested in having it tied in the event hand— sorry, everything else exposed in events that are already fired for capture handle.
G
I would go with CaptureController there. I think that capture handle, if we want it, should be moved to CaptureController, because we don't want it tied to the MediaStreamTrack — it's a bad idea to tie it to the display track. We should tie it to the object that represents the source, and that's CaptureController. So we should move all these things to CaptureController.
J
Okay, this means that you want basically the same kind of — sorry, I don't want to put words in your mouth, but I'm doing it, so I'm acknowledging it — but basically the same events fired on CaptureController: the same kind of logic for the same kind of events, like hey, when the user navigates the top-level page, when the user closes the tab, when the user presses 'share this tab instead', etc. It seems like there will be some duplication there, but that's not a showstopper.
J
G
I'm not sure I follow, but let's go with it — we can discuss quickly, yeah.
A
We're almost out of time. Do we understand what the action item is for this discussion — what the next step is?
J
H
Sure. I really liked the first part of your presentation.
H
I think you got the requirements down right, and I agree with the overall use case. When it comes to API shape, I hope we can iterate on GitHub. I also had a similar proposal planned, but I decided it was too early — but I think you've highlighted some of the issues. So generally I would agree with you: we want to move away from MediaStreamTrack — I'm going to put it on CaptureController — and my proposal was also looking at postMessage, more of a postMessage API, but it also concerns the other direction. But I agree, it's—
J
Okay, so thank you. So, Bernard, I think that might answer the question of next items. I think we can have some convergence here, where basically we take this general structure, but we don't use the pre-existing capture handle events; we add similar events, but all on CaptureController. Basically it's going to be this pattern where the captured side says: hey, I'm fine with the capturer getting the message port—
J
—and the capturer says: hey, I want to start the connection; the other side gets an event; they start communicating. And if the user, or a navigation, or something like that breaks the connection, there's going to be another event that says: hey, this message port you used to hold — possibly one of several — this one is now invalidated. Maybe it's not exactly an invalidated flag, but some kind of event.
J
—does that — and Youenn would like us to check whether this problem was solved differently before, and if so, we can adopt that solution. Did I get that right?
H
I would say I agree with the requirements, not the shape. I think we still need to work on the shape — okay, so I'm not ready to commit on shape yet.
J
Okay, so the next step, I guess, is basically to schedule a slot for this in one month.
H
All right, I'll try to go fast on the remaining slides. So, a long-standing issue in media capture main has been that we have a bit of a broken foreground detection: getUserMedia was supposed to require both that a page be visible and that it have keyboard focus, and implementations differ on this — and Firefox is the one—
H
A
H
—that fixed it now, so it only requires focus on the browser window, not necessarily on the iframe. So a PR here was long overdue. There were some other challenges as well that we want to try to satisfy, so this PR tries to do two things. The first thing is to allow what we saw in Safari — I have a demo here you could click on, but I'll basically walk through it.
H
This is useful because it maintains anti-spying: you still have to click on the window, implicitly giving it focus, before anything is captured — so it maintains the anti-spying property and gives users a chance to see that a request was made. So I think we should allow this, and that means relaxing the current focus requirement a little bit. Next slide.
H
So the proposal is to push the up-front focus test that we have right now down into the algorithm, after the prompt, and up front we will replace it with an 'is in view' check. This is basically testing that the page visibility state is visible: if the relevant global object's associated document is fully active and its visibility state is "visible", return true — that is the 'is in view' check. And we also need to fix some—
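The proposed 'is in view' check reads as a simple predicate. The field names below paraphrase the spec language ("fully active", visibility state "visible"); the object shape is just for illustration.

```javascript
// Paraphrase of the proposed "is in view" check: the relevant global
// object's associated document must be fully active and visible.
function isInView(doc) {
  return doc.fullyActive === true && doc.visibilityState === "visible";
}

console.log(isInView({ fullyActive: true, visibilityState: "visible" }));  // → true
console.log(isInView({ fullyActive: true, visibilityState: "hidden" }));   // → false
console.log(isInView({ fullyActive: false, visibilityState: "visible" })); // → false
```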
H
—stalls. There's a place in the algorithm, after permission has been requested, where we basically say: if the user never responds, the algorithm stalls on this step. There's also a callout for when the result of the request is denied. I haven't changed the order of those, even though maybe they should be changed. But after that — where you see the green line — there's a new test for whether it has system focus, and again—
H
—the wording is a bit awkward, because it basically has to dispatch to the main thread in order to access the document.
H
So this has changed: this used to be a focus test of the document or the iframe, and it is now instead relying on a new HTML property, which is system focus of the top-level browsing context — and that's as close to browser focus as we have today. I have a PR up for the HTML spec to also add a new concept called user attention, which would solve this — which would more directly—
H
—capture that the browser has focus. There's an edge case here in some browsers: in some browsers, when you click the URL bar to type, as a user you actually defocus the page, and in others you don't — so this will take care of it. So that is the first part of the PR; any thoughts on this part?
G
I particularly like the—
H
G
—right side of the slide that we are seeing there. I didn't know about this system focus, but the way it works makes sense to me. I think it's close to what we have in WebKit.
D
H
No — yeah, the intent was after the getUserMedia prompt had been shown. Yes, it was not my intent to test after it has been responded to.
D
Yeah, so I'm worried whether it's actually stating the right thing in all cases, but I'll try to read the PR carefully. Just don't count me as saying yes yet.
H
I'll mark you as a reviewer, if I haven't already. Okay, thank you. All right — so there's a second part to this, because I was cleaning up a lot, so separately—
J
Yeah — I'm sorry, is it possible to ask for a quick clarification?
H
Sure — do you have a question?
J
Yeah, but I know that you've got five minutes and I don't want to steal your time. So, basically, I didn't really understand what you were saying about anti-spying.
H
Oh — it maintains the existing requirement of focus; this PR only delays the focus test until after the prompt, so it doesn't change anything else. So I don't think we need to go into — you know, I'm not trying to challenge the focus requirement in this PR.
B
H
All right, cool, thanks. Yeah, so the next slide is—
B
H
In keeping with this — trying to clean this up and make it more consistent — we had some users concerned that enumerateDevices would block on them, and they didn't have a good way to detect that; they wanted to call enumerateDevices on startup, on page load, for some reason. So in keeping with this relaxation, we also started to question: why do we have a focus requirement on enumerateDevices?
H
So this proposal is to drop that — and it's optional in the spec: a user agent may or may not implement it; it's an option that the user agent can also require focus for enumerateDevices. So this proposal is to drop that requirement and reduce it to being visible.
H
The rationale here is web compat — because then web browsers would work the same — and the other rationale is that for enumerateDevices the mitigation is supposed to be anti-fingerprinting, not anti-spying.
H
The focus requirement for getUserMedia was basically, put quickly, that requiring focus for turning on the camera or microphone seemed like a good privacy mitigation, because that meant only the page you're interacting with can suddenly decide to turn on and spy on you. But for enumerateDevices, there's no anti-spying necessary; the mitigation is anti-fingerprinting, so a clearer, simpler visibility check seems better. It's also better because iframes without focus today cannot tell when the browser window receives focus, and apps like the Unity frontend, who filed an issue, want to be able to make a deterministic check, which they could do with this PR. And they can now, because of the way the PR changes when we check the document, which we now do synchronously. It means that they can actually write this code: if document.visibilityState equals "visible", then they will know for a fact that they can call enumerateDevices and not have it blocked, which seems a desirable invariant.
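The deterministic check described here could be sketched as follows. This is an illustrative sketch, not spec or PR text; the helper name is invented, and the commented-out lines show how it would wrap the real browser API.

```typescript
// Hypothetical helper: under the proposed relaxation, enumeration
// proceeds exactly when the document is visible, so the outcome is
// knowable synchronously from visibilityState.
function canEnumerate(visibilityState: string): boolean {
  return visibilityState === "visible";
}

// In a page this would be used roughly as (browser-only):
// if (canEnumerate(document.visibilityState)) {
//   const devices = await navigator.mediaDevices.enumerateDevices();
// }
```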
H
Now, this opens up a concern, which is that if you have two or more browser windows open at once that are accessing the camera or microphone, or have had access to it since they were opened, it means that sites may now use polling of enumerateDevices, or the devicechange event, to time-correlate users across origins if the user inserts or removes a USB or Bluetooth device.
H
The solution is that we already have existing prose put in for that, which says: these devicechange events are potentially triggered simultaneously on documents of different origins; user agents may add fuzzing on the timing of events to avoid cross-origin activity correlation.
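The fuzzing that this prose allows could look roughly like the following; a minimal sketch assuming the user agent is free to pick any per-origin jitter. The function names and the 5-second default are invented for illustration, not taken from any implementation.

```typescript
// Pick an independent random delay per origin so that two pages
// cannot line up their devicechange timestamps to correlate a user.
function fuzzedDelayMs(maxJitterMs: number): number {
  return Math.random() * maxJitterMs;
}

// Dispatch the devicechange notification after the fuzzed delay.
// The spec puts no upper bound on the delay; 5000ms here is arbitrary.
function scheduleDeviceChange(fire: () => void, maxJitterMs = 5000): void {
  setTimeout(fire, fuzzedDelayMs(maxJitterMs));
}
```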
H
What I would emphasize here is that the spec puts no time limits on this kind of fuzzing, which means that any kind of time delay seems open to a user agent, and it seems fine for a user agent to put in as much time as it wants to quell any cross-origin activity correlation concerns, which could in fact be as long as, you know, focus.
H
And the next slide, just to go over the last changes of the PR to do this. It would basically be a new Boolean called proceed, that is the result of the existing "device enumeration can proceed" steps. This is done synchronously. And as a second step, at the beginning of the in-parallel step, we add the pause that exists today without access, and are careful to check the document on a queued task. This makes the initial check synchronous, based on the synchronous result I should say, which makes it deterministic. Thoughts?
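The two-step shape being described might be sketched like this. It is a rough paraphrase of the slides, not the PR's actual text; names like enumerationCanProceed and the placeholder device list are invented.

```typescript
// Step 1 result: the "proceed" Boolean, computed synchronously,
// so callers can predict the outcome at call time.
function enumerationCanProceed(visibilityState: string): boolean {
  return visibilityState === "visible";
}

// Step 2 normally runs in parallel: if proceed is false, the real
// algorithm pauses and re-checks the document on a queued task.
// This sketch just reports the blocked state instead of waiting.
function enumerateDevicesSketch(visibilityState: string): string[] | "blocked" {
  const proceed = enumerationCanProceed(visibilityState);
  return proceed ? ["camera", "microphone"] : "blocked";
}
```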
G
Yes, so your desire is to reduce a little bit of the friction of the web.
G
G
Right, and then all user agents get to the same place, and I think it's good to try that. My question would be: let's say that you implement this. Do you think that there will be compat issues, or do you think that the current relaxation is solving almost all the issues, the existing issues or the future issues?
H
G
G
You mentioned this, and for the existing Firefox issues, are you confident that this will fix the issues, or do you think that it will require adoption by web developers to actually fix the Firefox issues?
G
So the issue is, some web developers are complaining that this requirement is not good and that it's breaking their websites on Firefox. So I'm wondering whether this relaxation is good enough to unbreak the web applications, or whether it's good enough that.
H
G
Web developers will easily change their applications to actually unbreak the websites on Firefox.
H
Oh, I see. Yes, they would have to add that. If you go a couple of slides up: on page load, if visibilityState equals "visible", then you can call enumerateDevices. That is the way to, if you're concerned about being quote-unquote blocked, which isn't really blocked, but it's in the, you know.
G
B
J
It's not exactly an objection, but I'm sorry, I don't think that I fully understood this from the presentation, and I would really appreciate more time to look into this. But you know, I'm just one person here.
H
Okay, I'm open to discussing with you any questions you might have after the meeting, or at any time.
B
A
Since we're out of time, what is the next step here? Do we have a clear follow-up?
H
Well, I'd like to go ahead, yeah.
G
I was thinking that getting both user agent implementers' and web developers' feedback would be good, and so.
H
Well, I would hope that, as long as we can get user agent input, I think web developers should be happy, because this is a relaxation. It should only make them more happy, not less happy.
H
H
Although I do believe we already provide fuzzing, and I actually don't think we need to, because the focus requirement on enumerateDevices was always optional, right?