From YouTube: W3C WEBRTC meeting 2022-10-18
B: So today we're going to cover encoded transform, the media capture extensions, WebRTC-PC, the WebRTC extensions, and capture handle. We have two meetings before the end of the year — one is November 15th and the other December 6th — so please keep those in mind. I guess that's a total of only four hours together other than this meeting, so if we need other meetings, it'd probably be a good time to think that through and plan it.
B: So, a little bit about this meeting: a link to the slides is published on the wiki. We do need a scribe to take meeting notes — do we have a volunteer?
B: Thank you, okay. And we are recording, so the recording will be public.
B: We operate under the code of conduct, the W3C Code of Ethics and Professional Conduct. We're all passionate about improving WebRTC, but let's try to keep it cordial and professional. We'll be managing the queue — I guess the chairs, or Harald, would you do that? We'll have "+q" and "-q" in the Google Meet chat if you want to get into the speaker queue, and please use headphones or an echo-cancelling speakerphone, and you'll have to handle your own microphone.
B: Obviously, after you get called on, give us your full name. I'm not sure we'll use polls, but if we do, we can gauge a sense of the room. Just a note about document status: there have been some misunderstandings. Just because something is in the repo doesn't mean it's been adopted — that's a separate thing; we use a Call for Adoption process. Editors' drafts may not represent consensus, but the working group drafts do. All right.
B: So here's the agenda for today: we have encoded transform — Harald; the media capture extensions briefly — Henrik; WebRTC-PC — Jan-Ivar will handle that; Florent will do some more about the WebRTC extensions issues; and then we'll have capture handle. We'll try to manage the time so that we can give everyone their allocation. All right — so, Harald, you have the floor.
A: I looked at the minutes from TPAC and found that they were very consistent with us discussing a packet-level API as a concept. We did manage to assign action items — for, I think, explaining the use cases and architectural descriptions — none of which have been executed on, I think. But there are a number of other issues in the tracker, so in the interest of making progress on all issues, let's take the other ones here. Next.
A: The issue is that packets don't arrive in order on the network. Sometimes packets are lost; sometimes we ask for retransmissions, and that means some frames might turn up at the API before frames that should have come earlier. Decoders can't decode frames out of order, because the frames depend on each other, so there's a reordering step in front of the decoder — and when you transform things, it's of course simplest.
A: If you can get the frames in decode order, that makes things kind of obvious. But this means you have to have a jitter buffer in front of the transformer, and that in turn means that if your transformer introduces jitter, there's nothing that compensates for it afterwards. The current Chrome implementation has the jitter buffer after the transformer, which means we sometimes get frames out of order.
A: But we have talked about this earlier and said that it's rare enough that frames should just be in order. So we should pick one behavior, stick with it, and write tests for it.
B: Yes, Harald — this isn't the only place where we encounter this problem. So I'm wondering: are you thinking that the jitter buffer would somehow be explicit in the API?
B: Because, certainly, WHATWG streams can handle frames that are out of order. I mean, I assume they'll be reassembled — they'll just be out of order. Is that the idea: they'll be complete frames, just not in the right order? Right, yeah — I mean, you could handle the jitter buffer, I guess, as kind of a transform stream that you could offer to the app. Is that what you're thinking?
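The reordering step being discussed could, as B suggests, live in app code as a stage in the transform pipeline. The sketch below is purely illustrative — `FrameReorderBuffer`, its `seq` field, and the `skipTo` loss-timeout hook are hypothetical names, not part of any spec; it only shows the core bookkeeping such a stage would need.

```javascript
// Hypothetical in-order delivery buffer for encoded frames that carry a
// sequence number. Frames are parked until their predecessors arrive;
// skipTo() models giving up on a lost frame after a timeout.
class FrameReorderBuffer {
  constructor(firstSeq = 0) {
    this.nextSeq = firstSeq;  // next sequence number we expect to emit
    this.pending = new Map(); // out-of-order frames parked by seq
  }

  // Accept a frame; return every frame that is now deliverable in order.
  push(frame) {
    this.pending.set(frame.seq, frame);
    return this.drain();
  }

  // Give up waiting for everything below `seq` (e.g. after a loss timeout).
  skipTo(seq) {
    if (seq > this.nextSeq) this.nextSeq = seq;
    return this.drain();
  }

  drain() {
    const ready = [];
    while (this.pending.has(this.nextSeq)) {
      ready.push(this.pending.get(this.nextSeq));
      this.pending.delete(this.nextSeq);
      this.nextSeq++;
    }
    return ready;
  }
}
```

In a real pipeline this logic would sit inside a `TransformStream`'s `transform()` callback; note that, exactly as A points out later, any waiting here adds delay whose magnitude the app must bound itself.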
A: I'm actually thinking that we have three choices. We can say frames arrive in whatever order they arrive in; we can say the browser must reorder frames so that they arrive in order, including waiting for the lost frame, which doesn't sound good; or we could have a flag that allows the app to control which of the first two alternatives it wants.
F: Yeah, I think we discussed that in the past, and if I remember correctly we thought that in-order was what web developers were expecting, basically. I agree that it might make user-agent implementers' lives more difficult, but I would think we should look at use cases where having out-of-order delivery would be a benefit. We know that out-of-order is a potential footgun, and if we see use cases where out-of-order would be good to have, then we should try to find a solution there. But if we do not have use cases for out-of-order — given that we know it's a footgun and web developers will not account for it — we should stick with in-order, like we have currently in the spec.
F: Like, this counter might not be monotonically increasing, and maybe some JavaScript will say: oh, there's something weird there, so we'll drop frames, and so on, right.
A: And so — I joined the queue to say that, as a participant: yeah, my worry about in-order is the case where a frame is lost.
A: So if we accept that frames have to be in order, then we also accept that lost frames will cause a delay of some magnitude. But we have no idea what that magnitude is, and the app will have no idea what that magnitude is.
C: Yeah, I guess it could be — instead of picking one or the other — always in order, but with some kind of developer-configurable timeout, after which you simply assume the frame is lost. That way the developer has control over how much delay they might introduce, without having to re-implement the reordering all the time either.
F: Yeah, I think that having the option is appealing from a web developer point of view, but in terms of implementation it might be quite difficult to have both options, so we should try to have just one and not two. I think in general the potential issue might be if the transform is sometimes taking a few milliseconds and sometimes taking a long time; then it's true that a jitter buffer after it might be beneficial. But in practice, for what we are thinking of — like decryption, or getting some metadata and then passing the data to the decoders and so on — it should be fairly stable and fairly linear.
F: So I'm not sure we are getting much from putting the jitter buffer after.
G: No, I think I was going to say a related thing: if we are moving the jitter buffer to before the transform, then that has a material impact — either we wait longer earlier, or we increase packet loss, because you don't have the extra duration of whatever the transform is doing available to wait for these out-of-order packets. And, kind of a separate point:
G: We are going to have some risk of jitter being introduced during the transform, because we're just on a normal web worker, right? It's not like we're on a real-time worklet, so there could be scheduling delays, GC, and other kinds of delays that are out of the control of the web developer.
F: Maybe one more thing: currently the Chrome and Safari implementations, since they have the same back end, are out of order. So the question is: we have implementations that are not matching the specification — is Chrome planning to switch to in-order or not? That's also a good question, because if they never align, then we need to take that into account.
H: Yes, I just wanted to make the point that unless the transform has side effects, it doesn't seem like it would matter much. We're talking about taking too long if we have a jitter buffer ahead of time, but it's not going to affect — I mean, if you're just decoding, it shouldn't really matter, unless there are transform use cases that are time-dependent in other ways, maybe rerouting the information somewhere else or something. So use cases would definitely be appreciated.
A: So the footgun, then, with no reordering, is variable delay.
A: So we had the discussion at TPAC about the generic keyframe, but in particular people wanted to revisit it because they thought — I think it was Philipp, I think you said that the argument for needing partial keyframes was not presented well.
I: So the first proposal agrees with the consensus at TPAC that the return value should be empty, but it also allows the application to pass any subset of the rids into generateKeyFrame, which, if you want keyframes for just two layers, avoids splitting that up into two calls from the main thread to the encoder.
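The proposal described here — `generateKeyFrame` accepting a list of rids so one call can cover several layers — is not yet standardized; the sketch below is a hypothetical illustration of the call shape. The `selectKeyFrameRids` helper is invented for the example; only the idea of validating requested rids against the sender's encodings comes from the discussion.

```javascript
// Illustrative helper: check a requested subset of rids against the
// sender's configured encodings and deduplicate it, so a single
// (proposed) generateKeyFrame(rids) call can cover several layers.
function selectKeyFrameRids(sendEncodings, requested) {
  const known = new Set(sendEncodings.map(e => e.rid));
  const unknown = requested.filter(rid => !known.has(rid));
  if (unknown.length > 0) {
    throw new TypeError(`unknown rid(s): ${unknown.join(', ')}`);
  }
  return [...new Set(requested)]; // deduplicate, preserve order
}

// Hypothetical usage, assuming the proposed API shape lands:
// const rids = selectKeyFrameRids(sender.getParameters().encodings,
//                                 ['mid', 'hi']);
// await sender.generateKeyFrame(rids); // one call instead of two
```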
F: Yeah, I think at TPAC we said that one rid was good enough, and that it was simple enough, and that was good enough. It seems there's no use case for being sure that two layers will actually hit the same frame.
F: It seems that the use case is more like: if I'm calling the API twice with two values, then maybe I will sometimes create two keyframes for all the rids and sometimes I will not, and it's a bit more convenient in that case to pass the two values. I'm not really convinced by this argument at all, because I think at the end of the day the web application will need to know whether its encoder is creating keyframes for a particular rid or for all rids anyway; if you do not know that, you will not be able to build a good heuristic for when to call this API.
F: But on the other hand, it's a minor complexity to add a list, so it's not a big deal for me either way.
F: I was asking for a use case and I think people provided one; it's not a big deal. I think we already expose both the MIME type and the payload type in some structure, right? So maybe there's a pattern there as well. So that seems fine.
J: Right, okay — I just wanted to make sure that was considered. Okay, if it is.
I: The use case is very similar to the one-ended transform that you talked about at TPAC. We have a custom decoder that does rely on the RTP sequence number to detect losses in the audio, and for audio it's relatively easy to add that to incoming frames; for video it's much more complicated, and it is more complicated for outgoing frames as well.
F: So, getting back to the previous issue of in-order versus out-of-order: exposing the sequence number would expose that, and if people start to use the sequence number to say, "hey, this packet has been lost," and we are not doing in-order, then we might receive it later on and they might be confused, and so on. I think for this particular issue it's fine to expose it, but the more we expose that kind of stuff, the more in-order might be better.
A: That would add it to the metadata as a non-required dictionary member, so that it can simply be missing in the other cases.
H: Sorry, since I was away for a second there — my one concern is to clarify the use cases, because if we're adding metadata that makes no sense in the traditional transform case, I think we should be a bit skeptical, and I'd like to understand the larger use case of these one-ended transforms. That's my feedback.
F: No, I don't think it deserves to be in the frame-level API — that does not really make sense. It would be more at the context level: something like, you have a value, and you might have an event, and you get it from there. That would be the typical API surface we would use, because a frame is unrelated to it — it's coming from the encoder, right, so it doesn't have any MTU information.
D: All right, let's talk about track frame rates. With track.getSettings().frameRate you can tell the configured frame rate, but you can't tell the actual frame rate — and what is the actual frame rate? That can mean different things depending on where in the pipeline you measure it. The camera may produce fewer frames because of the lighting conditions, where the frames are never even created, or a frame may be dropped prior to being delivered to the sink.
D: For example, if you have performance issues — or it can drop later. But I think all frame counters are of interest: the frames that were produced, and what I call emitted from the track. We want both the camera frame rate, and we want to know about frame drops. There are some APIs already that measure frame rates at different points.
D: So, for example, RTCPeerConnection getStats has the frame counter, which I think is a bit underspecified, but it's implemented as the input frames to WebRTC; or you could look at the HTML media element video playback metrics, which I guess give the frames being rendered. But there are issues with these APIs and alternatives, and that is that the measurements happen later in the pipeline. So if a frame is dropped, you know, as soon as it's been produced, it's likely not going to be covered there. And there's also ergonomics.
D: I think it's bad if a non-WebRTC use case would be forced to use an RTCPeerConnection — like a dummy peer connection — just for the sake of getting a metric from a track, if it's track-specific. That's my argument. There's one more slide; I'll go to the slide with the proposal, and then we can do questions.
D: The proposal is framesCaptured — frames produced by the camera, for example — and framesEmitted, which are the frames that weren't dropped. And when I say emitted, I mean being passed over to a sink; so it's a frame that was neither dropped, muted, disabled, nor discarded. The asterisk is that frame drops could still happen later — for example, if you're passing it to WebRTC, the frame could get dropped after encoding but before sending, or whatever; if you're rendering it, it might get dropped before getting rendered, because reasons. But that's outside the scope of this API, which is only concerned with the path from the source to the track. Let's go to questions. Thank you — you're first.
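The two counters in the proposal are only useful as deltas over time; a minimal sketch of how an app might turn them into rates follows. Note that `framesCaptured` and `framesEmitted` are the names from this proposal, not a shipped API, and the snapshot object shape here is an assumption for illustration.

```javascript
// Derive capture rate, emit rate, and drops between source and track
// from two snapshots of the proposed per-track counters.
function frameRates(prev, curr) {
  const seconds = (curr.timestampMs - prev.timestampMs) / 1000;
  const captured = curr.framesCaptured - prev.framesCaptured;
  const emitted = curr.framesEmitted - prev.framesEmitted;
  return {
    captureFps: captured / seconds, // what the camera actually produced
    emitFps: emitted / seconds,     // what was passed on to the sinks
    dropped: captured - emitted,    // lost between source and track
  };
}
```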
F: Yes — so I think that all APIs that are using a MediaStreamTrack will allow you to get the number of frames that you are actually receiving. If you're using media capture transform, you will get the count of all the frames that you're using; if you're using WebRTC, it's the same; if you're using a video element, it's the same. So this part is solved.
F: You have roughly the number of frames you want. And it seems that what you want is: there's the sink and there's the source, and in between there might be some drops, and you might want to get additional information between the two — which seems to be the framesCaptured thing.
F: So maybe it's fine to have it. I'm not sure I understand the difference between framesCaptured and framesEmitted, and it seems to me that it might be very specific to a given pipeline. In WebRTC we have a source and we have sinks, and the source, if it's producing a frame, will emit it in practice. So I'm not sure it will be very easy to specify framesEmitted in a web-compatible, interoperable way.
D: That might actually make sense, because framesCaptured is the big missing piece in my mind; framesEmitted is a nice bonus, but like you said, you could get it — if WebRTC is your sink, you can look at its input frame rate, and it should be the same. More questions? Jan-Ivar?
H: Yes — so I would say, for framesCaptured, I have some understanding; I think that makes some sense, because the use case is a camera in low lighting, and the current constraint does not give you those measurements, which I guess we could revisit. That would be one option. For framesEmitted, I'll echo Youenn's concern,
H: which is that I think that's implementation-defined, and I'm not sure it's actually observable. Some concerns I heard about when it might produce lower numbers include: some implementations might not produce a frame if the sink doesn't want it. So then you have a downstream issue where you're not measuring upstream — and this is the problem with track, right? It's supposed to represent the source, but it's also a modifier.
H: If the problem is that, you know, you set a frame rate so it's decimating frames — hopefully user agents are good enough at decimating that there's not going to be that much difference from the configured value. But for low light it makes more sense. As for a new getStats method: yeah, maybe — or maybe we could put it as a constraint, but —
A: So I joined the queue to say that I think a frames method makes some sense because it's consistent across the APIs, but I can see the argument that it's the same value as what you would have exposed from other APIs — well, if it's the same value, it would be nice to expose it too. But let's consider framesCaptured accepted, then, and framesEmitted is still up for discussion — not adopted.
H: Simulcast — the story continues. We've got four issues to discuss; next slide. So this is the same slide as — a spillover slide from last TPAC. This was the thing we choked on quite early, and it was to do with rid length: if you put in 17-character rids, most browsers will choke on it, except Firefox, and this spec relies on other specs to define things in the IETF.
H: RFC 8851 still allows 256 octets, but in practice people use single-character rids, because anything bigger bloats the RTP header. So I think this working group still would like to try to limit rids to 16 characters for web compat. At this point — next slide — we had some progress, because there were some other differences: an erratum was published, I should say, on RFC 8851, where minus and underscore are no longer valid characters. So, yay, restricting. The feedback we got from
H: the people who pushed this erratum out was that restricting the size to 16 instead of 255 might be hard to justify as an erratum, but it might be doable in a "bis". For people who are not familiar with IETF lingo, don't feel bad — I don't know what a bis is either. I'm sure it'll be great, and it'll tell you that if you go with 16-plus characters, you'll discover a path of pain.
H: The slide is basically there to say that the next step we think is the way to go is to try a bis to say at most 16 characters, and there's a remaining question of the empty string, which I think we can just fix in WebRTC-PC and basically say, as a JavaScript input API: if we get an empty string, you have not provided a rid.
H: Well, I suppose we could make a decision that we don't allow more than 16 characters in our API. There was some pushback last meeting about specifying limitations on rids that weren't reflected in the IETF specs. So I guess we're trying to do the right thing in this case, but I'm open to other options.
J: It should be possible to update Chrome to support all the characters that we want, and as many characters as we want. It was not done originally because we didn't have two-byte header extensions, and so we felt it was safer to just go up to 16 characters. But now that has shipped in Chrome, and it should be possible to do if we think that's the right thing — and if people don't want to use it, which we probably shouldn't, then we don't have to use it.
F: It seems to me that if we're going with 16 characters, we should add a note in our WebRTC-PC spec: hey, this is the current limitation, and we are trying to push it to the IETF RFC; when the RFC is updated, then we can remove the note. But still, it's good to describe it somewhere, and it's easier in WebRTC right now, so it makes sense to do it in WebRTC-PC first, if we can.
H: One option might also be that we have a separate decision on the addTransceiver interface and a separate question for what to accept in incoming offers and answers. I guess it'd be nice if they aligned, though.
B: Sorry, yeah — I was just echoing what Youenn said. I think it should be a non-normative note, just to note that, you know, typically more than 16 won't be accepted.
C: Well, but it can't be non-normative if there is going to be an error when you do more than 16. Again, personally I feel that limiting it at 16 is absolutely the right thing to do, but it is normative behavior if one browser is going to accept 17 and the other isn't. My sense is that the protocol says it's legal to send 256 characters; that doesn't mean any implementation should let developers do something silly like that.
C: So I don't think limiting what we allow in addTransceiver goes against the protocol. I fully agree that in terms of what we accept from SDP — and I hope we still allow for that, because that's what the protocol asks — but yeah, in terms of our API, I don't think there is anything bad in saying: that's such a silly idea, we are not going to let you do it. There is no point.
H: All right, we're out of time on this issue, I think. So, next issue: the spec right now says that setRemoteDescription with an offer to receive simulcast — it has actually always said that it overwrites sendEncodings, but with our latest update we've relaxed it a little bit to clarify that it only overrides sendEncodings that don't have rids in them. That's an improvement on what was considered buggy before.
H: The current language is that if the length of sendEncodings is 1 and the lone encoding contains no rid member, then we overwrite it with simulcast. I think this makes sense, because the opposite would be to try to preserve some of the unicast parameters in the simulcast array, and that seems like a poor choice, because the unicast and simulcast arrays aren't super compatible — specifically, the lone unicast entry would usually have scaleResolutionDownBy of 1, whereas if you have simulcast, the first encoding is usually the smallest.
H: So this seems better: if you get an offer to receive simulcast, now you're doing simulcast, and your previous unicast settings are overwritten. That would mean that if you roll it back, it would have to undo that overwrite — basically by overwriting again. Now, rollback of setRemoteDescription is rare.
H: It's not used in perfect negotiation, so the principle of least astonishment would seem to be that if you roll back the overwrite, it undoes the override — and that means it would also undo any modifications you had made after the overwrite, if that makes sense. Because these methods can be called by JavaScript in lots of interesting orders, and one of them is: you get an offer to receive simulcast, you then call setParameters to modify attributes, and then you roll it back, and —
H: All right, so another issue with the PR is that modifications to sendEncodings from setParameters and from the negotiation methods can be racy.
H: Our setParameters API is a read-modify-store API: you read settings, then you modify them, then you set them asynchronously. So there's a time gap here, where this read-modify-store process may race with the negotiation methods — specifically: a remote offer to receive simulcast overrides the rid-free sendEncodings array, which we just discussed; a rollback of the above, which we just agreed to; and also, answers may prune all but one rid-bearing encoding from sendEncodings.
H: So what happens when this races with setParameters — what outcome would we want? We'd probably want the result we would have gotten had setParameters been allowed to complete first. So the proposal is: let setParameters complete first. We do a similar thing for addTrack, when addTrack is racing with setRemoteDescription.
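The app-side half of the race is the standard read-modify-store pattern sketched below. The `updateEncodings` helper is illustrative (not a spec algorithm); `sender` is anything with the `getParameters`/`setParameters` shape, and the race window H describes is the `await` between the read and the store.

```javascript
// The read-modify-store pattern that can race with negotiation:
// read the current parameters, mutate the encodings, then apply
// them asynchronously. Negotiation landing inside the await is
// exactly the race the proposal resolves in favor of setParameters.
async function updateEncodings(sender, mutate) {
  const params = sender.getParameters(); // read ...
  mutate(params.encodings);              // ... modify ...
  await sender.setParameters(params);    // ... store (the race window)
}
```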
H: So this picture shows adding a third sentence here, which basically says: if any promises from setParameters methods on the senders associated with the connection are not settled, then abort these steps and start the process over — which is language similar to bullet 2 above, which we do for addTrack.
H: There's a case for this being too broad. I would also have to cover setLocalDescription: setLocalDescription with an answer may also prune rids, so it would have to be "if remote is true, or it's an answer". We could do that.
D: The SDP modifies the parameters, but because you're doing setParameters again — I mean, the setParameters call is probably a modification of an earlier getParameters. So to do this correctly, you'd have to wait until the SDP is applied, then call getParameters again, and then do setParameters. But that requires a manual restart rather than applying it again, so I'm not sure what would change in the SDP case.
H: So we did consider that — this is in the steps before we run the success callback, so the success callback would not happen at this point. Our intent was to wait until the setParameters calls have settled: we don't want to try to reapply setParameters afterwards; we want to get them all done ahead of time so that there are no outstanding setParameters calls, and solve the race that way — which seems to be close to what would have happened anyway.
H: Well, it will never fail. So if you do setParameters — if you have simulcast right now and you modify simulcast — you'll get a promise resolving to say your simulcast parameters were set, and shortly after, there'll be an answer or something that removes those encodings again, and you're back to unicast. So, okay, that seems to fit our model. All right.
H: So we also have issue 2762, which contained some of my research on what implementations are doing — that they generally don't fail in a lot of cases — and we thought that seemed good. Last month we did merge a PR that moved the spec halfway towards what implementations are doing, but not all the way. Basically, the spec used to say that setRemoteDescription with offers that try to modify rid negotiation at all would fail, and that wasn't really compliant with JSEP; right now we've basically just relaxed it a bit.
H: Unless the rids match, we then fail the process. So it's a much more relaxed description: an SFU basically has to at least acknowledge one rid value that was previously negotiated, and that basically means a change in rid values is tolerated as long as at least one rid matches what was previously negotiated, or the offer is to no longer receive simulcast. The mismatched or omitted rids then effectively result in layer removal, by the existing language, in the answer.
H: So this relaxes validation a lot and satisfies JSEP, while simultaneously maintaining an existing spec invariant that layer pruning of rid encodings only happens in answers. This is different from implementations that actually expose layer removal on a remote offer — the problem being that all those implementations also fail to roll back that information.
H: Yes — so, basically, the idea is to go this far but no further, because by waiting for the answer, the nice thing is that JSEP already ensures that answers are within the envelope of the offer. So when an SFU gives the browser an offer, the browser is in full control of the answer and can deal with removal at that time, and I think that makes sense. It protects these existing environments, and it gives us the nice behavior that things can start out as unicast until you get an offer to receive simulcast —
H: Sorry — things can start out as unicast and can be promoted to simulcast, at which point layers can be reduced back to one, which effectively brings you back to unicast. But now that lone encoding has a rid member in it, which means it can't be promoted back to simulcast. So it's a one-way street, basically: you can add as many layers as you want on the initial negotiation of simulcast, and then you can prune layers down, turning yourself back to unicast, but you can't restart — there's no loophole.
H: Whereas if we had removed the rid member, that would basically be a loophole that would allow you to then expand the envelope again.
A: So we're closing in on the end of the time for this section.
H: In that case, the next slide continues the story here.
H: This is the existing language in setLocalDescription with an answer, where I made a slight tweak: instead of "offered" we say "previously negotiated", and this clarifies the way some implementations are already working — but at the wrong time, in the offer. We allow removing of all but the first encoding, but if the description is missing any of the previously negotiated layers, we remove those dictionaries. And the second statement is actually different from the first in that it allows browsers to trim — or prune — any layer, so that matches implementations.
J: All right, so we have a few issues regarding data channels that are mostly described in WebRTC Extensions at the moment. Next slide, please.
J: So at the moment we have RTCDataChannel, which is transferable. The send algorithm mentions that we need to check the maxMessageSize of the channel's associated RTCSctpTransport to know if we can send a message. It's a slot on the SCTP transport, and it can be updated during renegotiation — so, asynchronously.
J: So that's point one. Point two is that we can transfer a data channel at any point before it's used, which means the SCTP transport might not be created yet — it's perfectly fine to create a peer connection and create a data channel directly, even before or during negotiation. So we don't have a value for maxMessageSize that is available to copy into the transferred object.
J: So we have an issue there: we have an object whose value is going to be updated — maxMessageSize is set on the transport when it's created — and an object that is in a different thread, and they will need some communication to get updated about the value; or we need to agree that send() is going to block the worker thread to read the max message size, or —
J: — something like that. So a solution I would propose would be to add language to the "announcing a data channel as open" algorithm to make sure that, during that algorithm, we notify the object on the worker thread that there is a value for maxMessageSize — the first value, if it's been transferred before — and then the data channel could live on with its own value that is never going to be updated.
C: Yes — how confident are we that freezing maxMessageSize at negotiation is web compatible?
J: We asked people at TPAC; they didn't seem to think it was something that was needed. We need to make some checks about that, but it doesn't seem to be a very compelling feature that people rely on — well, for sure we need to make some checks. In any case, at the moment, in some current implementations, if you try to send a message that is bigger than maxMessageSize,
J
You
will
terminate
the
peer
connection,
the
data
channels
so
and
it
will
return
an
error
because
the
message
to
be
glitched
out
to
the
lower
level,
so
people
already
need
to
make
sure
that
they
don't
send
too
big
of
messages,
because
they
cannot
get
an
error.
So
it
shouldn't
be
too
much
of
a
problem.
J: I believe so. We can add some measurements to see what happens in Chrome, and we can move forward with a decision from there. I would still like to know what we should do about a maxMessageSize that lives on an object on a different thread than where it's used.

J: Yeah, we still have the issue of a renegotiation that would change the value, and then you would have a mismatch. With every renegotiation that changes maxMessageSize, we would have a mismatch between what the transport is configured for and what the data channel is allowing, and possibly the transport will have a smaller message size, which means some messages might be rejected.

J: Right, so we could go on with announcing the value, for sure, and then, copying the value, we would probably need to notify the worker-side object. Either way, you can add an issue about renegotiation while we measure in implementations.
H: Sorry, Jan-Ivar here. So far the slides are talking about internal slots, but are we talking about exposing an API for maxMessageSize on the data channel?

J: Sorry: the value would be the same for all data channels from the same transport, so it was considered unnecessary back when we didn't consider transferred data channels, because you would have access to the peer connection, and then the SCTP transport, and could read the value from there, for simplicity. Considering data channel transfer, it might be easier to have the value also readable on the data channel itself, knowing that if you have a negotiated channel, you might get an undefined value.
H: Yeah, it's not clear from the slides whether you're proposing just language to update internal slots, or new API. I thought I heard about exposing something in an event, and I think the design principles usually prefer that we don't add data on the event exposed to JavaScript, but instead have it be a property on the event target.

J: Right, yeah. What I considered was updating internal slots: adding internal slots and updating them in the open event. Then, when you get your data channel's open event, the maxMessageSize slot for the data channel would have the proper value, and possibly an accessor to read it.
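A rough sketch of that proposal: the worker-side channel's internal slot is refreshed as part of the "announce open" step, before the open event fires, so the accessor already reads the negotiated value inside the handler. All names here are hypothetical, not spec or platform API.

```javascript
// Sketch of the proposed behavior (hypothetical names): the
// [[maxMessageSize]] slot is unknown until the transport exists, and is
// updated as the first part of the "announce a data channel as open"
// steps, so the 'open' handler observes the negotiated value.
class WorkerChannelShim {
  constructor() {
    this._maxMessageSize = undefined; // unknown before the transport exists
    this._onopen = null;
  }
  set onopen(fn) { this._onopen = fn; }
  get maxMessageSize() { return this._maxMessageSize; }
  // Step run when the main thread announces the channel as open.
  announceOpen(negotiatedMaxMessageSize) {
    this._maxMessageSize = negotiatedMaxMessageSize; // update slot first
    if (this._onopen) this._onopen();                // then fire 'open'
  }
}
```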
J: Okay, that sounds good. It would definitely help things, and I will measure in Chrome to see what happens. All right, so the next issue, which is one that Jan-Ivar created: we have an issue in the language regarding transferred data channels.

J: We have an extension of the interface for transferability, and the HTML spec says that we should have a [[Detached]] internal slot on transferable platform objects, which we don't have. We have instead an [[IsTransferable]] slot, which is a little bit different, and which guards against transferring a data channel after we have already started sending from it. So there is a little bit of overlap.

J: So I suggest that we add the [[Detached]] slot to match the HTML specification, that we update the algorithms we describe regarding data channel transferability to actually use it, and that we also keep [[IsTransferable]] to prevent transferring data channels on which we have called send(). Effectively these would be internal specification updates only; the behavior would probably be the same where data channel transfer has been implemented.
H: No objection, just a comment: I think that when data channels are transferred, it's actually still woefully underspecified what happens to the object that remains on the main thread.

J: Okay, that's the next slide. So there was one issue regarding the [[Detached]] slot, and the next slide is still about the same issue. We talked about different points about data channels, and we are not sure whether a transferred data channel should be eligible for garbage collection.

J: What happens to an object once it's been transferred? There's still an object on the current thread, and it needs to be in some state. Right now, in the specification, that state is "closed", and that means this data channel would probably be garbage collected if there were no strong references to it.
B: I personally like proposal 2 better, for two reasons. First of all, I think proposal 2 allows you to get the data channels and see the detached ones. So that's one.

B: And the other thing is: garbage collection tends to wreak a lot of havoc with jitter and stuff, so it might actually be good to not have them be garbage collected anyway. So I've been leaning towards proposal 2.
H: So I think that we still have a lot of work to do to specify this. Transferable objects aren't magic; they're...

H: ...actually more like a clone, right? You don't actually transfer the object so much as you leave behind a sort of clone that no longer works. There's a movie (I won't spoil it for you) about whether it's a transporter, and you spend the rest of the movie trying to kill your clones. But anyway, when we specify other objects that are transferable, they end up having all their algorithms modified, like:

H: "If [[Detached]] is true, then abort these steps", and we haven't added that yet. So there's a question for the group here: how should these main-thread objects that are left behind work? Whether they're garbage collected or not (and they wouldn't be, because you could still have strong references to them) seems like a secondary question. The primary question to me is: how do these objects behave? What happens if you read their attributes, and those kinds of things?
J: In this specific case, since the current state in the specification is "closed": if you try to call any function on it (there are only two functions, close and send), close will say, if it's already closed, don't do anything, it's done; and send will throw an InvalidStateError if you are in the closed state. That matches what a [[Detached]] slot would do, if it were true, on most of our APIs.

J: It would be the same as it is implemented; it would just be made explicit that if the object is detached, then you throw. We can probably come back to this if we have time at the end, because now it's time for the next speaker.
K: We're here to discuss capture handle again, and a couple of extensions that I would like us to make. Just a quick reminder for people who have forgotten, or who were not here when we discussed this previously: capture handle is a mechanism by which a captured page sets some kind of string, and maybe also exposes its origin, to whichever other tab might end up capturing it.

K: So, for example, I could say "hey, ABC" and also expose my origin, and then, if anybody ends up capturing me, they get to read the "ABC". Why this is interesting: I could, for example, say "hey, my IP address is XYZ; if you want to communicate with me, try there." And Meet and Slides, for example, could use that: Meet could detect "hey, I'm capturing a Slides presentation; how about I start doing interesting things." Next slide, please.
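The pattern on the slides can be sketched roughly like this. setCaptureHandleConfig and getCaptureHandle come from the Capture Handle Identity proposal; the payload fields here are made up for illustration, and since the handle is a plain string today, applications pack structure into it themselves with JSON.

```javascript
// Captured page side: pack app-specific data into the handle string.
function packHandle(payload) {
  const handle = JSON.stringify(payload);
  // In a capture-capable browser, the captured page (e.g. Slides)
  // would then call (shown as a comment, since it's browser-only):
  //   navigator.mediaDevices.setCaptureHandleConfig({
  //     handle, exposeOrigin: true, permittedOrigins: ['*'],
  //   });
  return handle;
}

// Capturer side: parse the string back into structured data.
function unpackHandle(captureHandle) {
  // The capturer (e.g. Meet) would obtain captureHandle via:
  //   const captureHandle = videoTrack.getCaptureHandle();
  return captureHandle ? JSON.parse(captureHandle.handle) : null;
}
```

This round trip is exactly the unstructured part criticized later in the discussion: nothing tells an unrelated capturer where to look inside the parsed object.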
K: Some of the things that you could do with this: you could remote-control a presentation, but unlike with, for example, capture actions, you could also remote-control anything. We're already talking about a new mechanism that would let you specifically control a presentation more easily, but if you can actually start communicating, in any way you choose, with whatever you're capturing, you could do anything; you could send any type of message.

K: The sky's the limit, but some of the things that I would like us to discuss today are how you could tailor the encoding to match the content type, and how you could crop to regions of interest in the captured tab. Next slide, please.

K: The reason I would like us to discuss this is that I think there are a couple of gaps in the APIs currently specified, and we need to plug those gaps. First of all, it is unstructured: anybody can choose to structure their data in any way they want.
K: So, for example, let's take Slides. Slides could say "hey, here's my IP and...", or it could say "hey, my IP is here". You can imagine: it could structure it in any way. Another problem is that you can actually only work with strings. If we go back two slides, please (thank you), we will notice that there is JSON.stringify here, and JSON.parse, and while that is often sufficient, sometimes it's not: for example, for crop targets. Three slides forward, please.

K: So let's start talking about these problems. Number one is content type. For example, if Slides ends up being captured, it wants to let the video conferencing application know whether it's showing static content or video, and it wants to do that in a way that would be useful to any kind of video conferencing tool, not just to Meet. The problem is that if you can set any kind of string, you don't really know...
K: ...the video conferencing tool doesn't know where to look for that information. Specifically, the way we want to use this is with something called a content hint. There is a way for a capturer, somebody transmitting video or audio, to say: "in the case of an imperfect network, prefer to degrade resolution, or prefer to degrade frame rate." In the case of static content, you would prefer to degrade frame rate first; in the case of motion, you would most likely prefer to degrade resolution. And there's an analogous trade-off for audio. Next slide, please.
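One way a capturer might act on such a signal, as a hedged sketch: map the captured page's self-description to the standard MediaStreamTrack.contentHint values for video ('detail' favors resolution, 'motion' favors frame rate). The 'static'/'video' labels are hypothetical handle fields, not anything standardized.

```javascript
// Map a captured page's (hypothetical) self-declared content type to a
// standard video contentHint value.
function contentHintFor(captureType) {
  // Static content: keep resolution sharp, let frame rate degrade first.
  if (captureType === 'static') return 'detail';
  // Motion/video: keep frame rate smooth, let resolution degrade first.
  if (captureType === 'video') return 'motion';
  return ''; // empty string means "no hint", the spec default
}

// A capturer in a browser would then apply it to the track:
//   videoTrack.contentHint = contentHintFor(type);
```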
K: So, as you can see, we don't really know where to look for the content hint, and that gives us problem number one: structured versus unstructured. Tightly coupled applications don't really care whether it's structured or unstructured; they impose their own structure. If Meet sees that it's capturing Slides, it knows which protocol Slides uses. But other applications don't. Next slide, please.

K: This slide has other arguments that have come up for this specific use case. I don't think it's really needed right now, because I'm just trying to explain the difference between structured and unstructured, but if need be, we can come back to it, and people can refer back to it if necessary. Next slide, please. So, as mentioned, that's one use case: capture handle and content hint. Another one is crop target. Next slide, please.
K: Now, if I were to share all of this tab, it's going to be a bit embarrassing for me, because I'm actually trying to share just the video of Rick Astley, but I see that there's the playlist there, there are going to be a couple of ads tailored specifically for me, and maybe there are even some comments that I've already started writing.

K: For that we actually have a mechanism called region capture, and that mechanism allows you to say: "I want to crop the video to this content area." The problem is that it only works for self-capture right now, or for same-origin tabs, since same-origin tabs can transmit a CropTarget over a BroadcastChannel.
K: Otherwise, no. So how would this be useful? We've got a simple mock application here. Imagine that you're a video conferencing application. The first thing you do is ask: "hey, whatever you captured, does it have any crop targets?" If it has any crop targets, you can cycle through them, produce one frame of each, and present them to the user as thumbnails. And now the user can either say "I actually want to capture the entire thing; transmit that"...

K: ...or crop. That's what we'd be able to do if we could actually transfer crop targets. Next slide, please. As mentioned, a crop target is not stringifiable; and even if it were stringifiable, we would still run into the original problem of: okay, great, it's a string, but where do I find it among all of the other strings? Next slide, please.
K: So what I think we should do is probably impose some additional structure on the capture handle. We could say: there are a couple of fields that are always there; you can use them, or you can not use them. If you want to use them, here's a place where you could put the crop targets, and each crop target we can annotate with a name, we can tag with metadata.

K: We could even say that the suggested content hint is specific to that particular region. Next slide, please. Actually, I'm sorry, let's go back one; I think the last slide is a different topic, so let's get back to that later.
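A hypothetical shape for such a structured handle. None of these field names are standardized anywhere; this only illustrates the idea of "designated places" for crop targets and per-region content hints alongside the free-form app-specific part.

```javascript
// Hypothetical structured handle (illustration only, not a spec shape):
// designated fields that any loosely coupled capturer could look for.
function makeStructuredHandle({ appData, regions }) {
  return {
    appData,                           // free-form part (today's string)
    cropTargets: regions.map(r => ({
      name: r.name,                    // human-readable label for the region
      cropTarget: r.cropTarget,        // would be a real CropTarget object
      suggestedContentHint: r.hint,    // e.g. 'detail' or 'motion'
    })),
  };
}
```

A capturer that knows nothing about the captured app could still enumerate `cropTargets`, show thumbnails, and apply the per-region hint, which is exactly the loosely coupled scenario described above.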
H: I don't see anyone on the queue, so I'm going to put myself on the queue. You do have a slide after this one that seems to answer some of my questions: what about a MessagePort?

K: What about the MessagePort? There's a very simple answer: it's not structured. Which means that, yes, it's going to work just great for Meet and Slides. But what about Zoom and Slides? What about Meet and Word, Meet and PowerPoint? All of those combinations are not going to work.
H: So, as someone who pushed for setCaptureActions, I would actually suggest we use a MessagePort here, because it feels to me like we're getting too much into the space of specific applications, and I don't think we necessarily know; I'm not sure we want to be in the business of specifying all the different things that an application might need.

H: Yeah, I agree there's some benefit, in that it could allow innovation across presentation products and video conferencing products. But it's still not clear to me how far we need to go to cover all the use cases, and I also believe I gave this feedback back in February; that's issue 11.

H: It's basically titled "don't reinvent postMessage". So these slides are useful, and I think they show that there are a lot of potential use cases for this. But I think, at the end of the day... and, sorry, issue 11 also highlights some issues with setCaptureHandleConfig, in that the handle can already be used as a messaging channel, because it's not well specified that you can't update it continuously and basically create your own channel.
K: Let me read some of my responses to these arguments, delivered in the past, so that people who are here would also know my counter-arguments. Counter-argument number one: a MessagePort requires that the capturer inform the captured page of the presence of the capture. The moment I send you a message saying "hey, what's your suggested content hint?", you immediately know that I'm capturing you, because that's the only way you could get that message. That's number one. And number two: you've mentioned that the capture...

K: ...handle itself is like a MessagePort, but unidirectional. That is correct, but that is okay, and the reason it is okay is that it's very useful. For example, right now we're seeing Google Slides on our screens, and each slide is going to be slightly different, so setting the handle only at the beginning of the capture is not really useful. What happens if the next slide shows a video? What happens if the next slide shows a GIF?
K: So we actually need to be able to change the capture handle. Now, I agree that sometimes we do want a MessagePort, and that's why I've got a slide for it; I think a MessagePort is going to be very useful, but I think it's an orthogonal use case. The use case there is when you've got tightly coupled applications: applications that know each other, agree on a protocol, and know how to send each other messages and what the messages need to be structured like.

H: Right, yeah. So I'll just take a step back and ask what the requirements are, and whether the requirement is solved by a MessagePort. What is the external surface that is being proposed here? So...
K: What I'm suggesting... I'm making three different suggestions here. Suggestion number one is that I want to be able to transfer region capture... I'm sorry, crop targets... over the capture handle itself. Right now it's a string; we might want to change it to something other than a string. It would then, for example, be a dictionary, and that could include crop targets. That's suggestion number one, and that would work for me.

K: Suggestion number two: okay, so that works great for tightly coupled applications; what about loosely coupled applications? They also want to transfer crop targets. So how about we also designate a place where crop targets usually go? If you want, you could still put crop targets right in the normal handle, but everybody would know: crop targets can go here; that's a good place to look.

K: Sorry, that's number two. Number three is that I want to do the same for the suggested content hint. And, I'm sorry, I miscounted before; it's not three, it's four. Number four is that I want to add the MessagePort, to make life easier for tightly coupled applications.

H: Because... oh.
C: Sorry... thank you. Yep. So, yeah, I think the use cases seem very relevant.

C: I do wonder, if what we want is a unidirectional MessagePort, whether that's not what we should be doing, and then having it bidirectional in some circumstances. I guess what resonates with me in Jan-Ivar's argument is that if we have to decide each and every way we want capturer and capturee to coordinate, this may create a lot of churn for progress, whereas if we provide an open communication channel, maybe with some ways of documenting well-defined protocols for exchanges, this might create less churn. So, I mean, I...

C: ...guess I'm not opposed to a lot of what you described, but I feel that there may be a sweeter spot for loosely coupled applications to work with one another.
K: Oh yes, so again, my apologies for not making it clear. What I'm trying to argue here is that we need a couple of orthogonal APIs for orthogonal use cases. For the case of tightly coupled applications, we're already running into the problem that we cannot transfer anything except strings and streams, and I'm arguing that we need to change the field from a string to an object.

K: So that's number one, and that's my top priority. Number two, I'm arguing that, just as capture actions does not seek to cover everything, loosely coupled applications will not be able to do everything; they'll be able to do a very small subset of the potential things, and we're going to have to support those for them. Just as Jan-Ivar is working on capture actions for next and previous, and next and previous are by no means enough to cover everything.

K: And yes, there will be additional suggestions in the future, and we will have to evaluate them on their merits.

K: So let's start with the first suggestion: what does the working group think about changing the handle, which is currently a string, into an object?
H: I don't think anything is... yes, I didn't see anyone else on the queue, so I'll speak again. So, yeah, I would reiterate issue 11, "don't reinvent postMessage", and I think the concern here is that this is kind of like creating a message channel that someone may or may not be listening to, and I'm not sure that we should build that much application on top of it.
K: But that is intentional, and the intent is that YouTube does not actually need to know whether it's being captured by Zoom or by Teams or by Meet. You yourself in the past worried about self-censorship, and this does away with that: it means that if YouTube allows Meet to capture it in a good way, then any other video conferencing tool can just read the same thing and do the same thing.
H: Right, yeah. It's just... there's a lot to comment on here. For instance, I'm not sure...

H: If you're adding API surface in a browser, you normally expect the browser to react to it, and in this case you might be specifying all your capture handles, but it's not the browser that's going to react to these capture handles; they're just going to be surfaced, if I'm understanding correctly, to the capturing application, which may or may not do anything with them. Correct? And that seems a little odd to me: why are we involved in basically just carrying information across, in an anonymous way, without actually acting on it?

H: I mean, you could argue it's the same with setCaptureActions going the other way, but it's a little more formalized, I think, in that case. What will happen...
K: I do argue that this is the same, and I argue that it's not at all more formalized, because all you're delivering to Google Slides is "hey, somebody said next", and it can interpret "next" in any way it wishes. So it's not actually more formalized. But, I'm sorry, I think Harald is on the queue.
A: Yeah, so this is a question for Dom, really, but do we have any tradition of establishing standardized protocols over MessagePorts? That is, saying: these are the messages that you can pass across the MessagePort if you agree to follow this protocol?

A: Because I'm thinking that, if we add a MessagePort, and we have structured data over the MessagePort in a way that says, "okay, I'm something that looks like a presentation; please control me with next, forward, and so on", then it would seem logical to collapse all the other stuff into messages sent across the MessagePort, instead of, as Jan-Ivar keeps repeating, reinventing postMessage.
C: Yeah, I mean, I think ultimately this is kind of what we are doing, one way or the other: either we're doing it through a MessagePort, or we're doing it through the capture handle. Both feel like the same level of scariness, but I agree the path is better paved if there are other similar protocols out there; I can certainly check that.
K: So, from my side: I took to heart past criticism that this API is only useful, or most useful, for tightly coupled applications. I came up with a bunch of suggestions that address loosely coupled applications, and it pains me that the working group has not become more amenable to those proposals because of that. But that stated, if we could at least change the handle from a string to an object, I think we would address, at least for tightly coupled applications...

K: ...all of the use cases that are important to me, and I think that's going to be a good start. So I wonder whether there are any arguments for keeping the handle just a string, rather than making it a string or an object.
C: So, first of all, let me reiterate: I do feel that any improvements we can make so the mechanisms work better for loosely coupled applications would be a big improvement. Again, I wonder whether your proposal is the best approach to do that, but the intent I fully support, just to be extra clear on this. On the string-versus-object question:

C: I guess what I'm struggling with is: what if, in your object, you have something that cannot be transferred? I mean, would that be only serializable objects, or...
K: For me, that's the minimum suggestion, and I think it works even if we go with other suggestions later, so I would like to put that to a vote.
H: I don't think we're at that point in the process; I don't think a vote is the right next step on this. The concerns we have with capture handle, I think... I'm not sure going from a string to an object addresses them.

H: The original purpose of the handle was presented as an identifier, and now we're talking about passing objects over it, and I don't think that fits with the original narrative of the API. And if it's going to be so... it sounds like, if we want... I mean, I could be wrong, but I think you can actually postMessage something without actually knowing whether anyone's listening, right?
K: Over a BroadcastChannel, yes. But what happens if there is storage partitioning? How can I do it from YouTube to Meet?

K: Obviously I've been exploring this. I think this was presented at TPAC, and I've not heard any alternatives yet. I've come up with several suggestions so far, and what I'm hearing is: "hey, why don't we just use a MessagePort for everything?" My argument is that the MessagePort... if we go to slide number 51, please... oh, we are already there. So I was actually preparing to explain why a MessagePort is even necessary, because last time what I heard was: "hey, why do we even need that?"
K: Sorry, yes. So a MessagePort does not address all of the use cases that we have mentioned before, because, (a), it requires the capturer to expose itself; that's number one. And number two, it is not structured, which means it only works for tightly coupled applications. I've not heard anything better yet. Nobody has said that this is not an interesting problem to solve; nobody has mentioned a better solution for it; and we've now discussed it twice. So what's the next step?
H: Well, I think maybe we could try, offline, to work out some of the immediate use cases, and it sounds like one of them is that we need some clear requirements. I think I'm a little lost on how many of these we're solving with one API, and...

H: I'll just say: I'm not sure we should solve random websites specifying crop targets; I'm not sure we should allow random websites to specify crop targets.
K: By the way, if you look at slide 47, please: I think I made this one especially for you, because you have several times in the past mentioned that you're worried the website would set a crop target that ends up being a single pixel, or even less, and that this would kind of trick the video conferencing application. If you dive into the code on slide 47, this one, you will see that the process is purely user-driven.

K: It is true that we can construct crazy websites that are not usable at all and somehow manage to trick the user. But a useful website like YouTube is going to have a single crop target, and then the user is going to choose: "do I want to share all of the website, or just this region of the website?" That's going to be very useful for users, and I think this useful use case is much more interesting than theoretical possibilities that are very easy to protect against.
A: If you want to restrict some functionality to something other than random websites, you need to come up with at least a hint of a mechanism that will allow the browser to tell the difference between a random website and a non-random website.
H: Oh, sorry, yes. I meant that we've also discussed here potentially opening a MessagePort, and that would reveal a two-way communication channel, at which point the gig is up that you are being captured, and at that point it seems a lot of these use cases could be solved.

H: Yes, it depends on what the... yeah, I'm again having trouble seeing how... I mean, you're showing here how a capturing application might present thumbnails, but it's not necessarily how it...

H: I don't think a compelling case has been shown for why you wouldn't want to do that, and it's not clear to me what kind of objects you could pass this way, and couldn't that be a security concern? Okay...
K: So is there a reason why YouTube should...

A: At the moment, as far as I can tell, it's clear that a MessagePort has implications that make it not necessarily the best solution. So I see two proposals at the moment; one is to make the handle an object, with specific fields that are useful for specific purposes.

A: But there are two separate proposals here, and, that is to say, I think we should regard them as independent, and for either one we need to say "we should pursue this further" or "we should not".
K: Exactly. So I'm not hearing a reason against making the handle an object yet, except from Jan-Ivar. And, by the way, I would also like to mention that making the handle an object is also the lowest-complexity option of the bunch, and the least scary compared to adding a MessagePort, though obviously there are also engineering challenges, both in producing it and in using it.

H: That seems problematic to me from a web security perspective. Why is that? Because it's any object; it's not just JavaScript objects, it can be platform objects.
K: ...only anything that you could transmit over a MessagePort, so there's not going to be a difference here. In fact, probably less than you could transmit over a MessagePort, because there would be no transferable objects, only serializable objects.

H: I see, okay.

K: A large object... yes, that is actually one of the considerations I've had, and I've discussed this with Chrome security. I came up with this problem as well; that's basically why we had a limit on the string length to begin with. And then I came back and said I think I was overcautious here.
K: I don't think this is a credible attack, because the captured page would be attacking before it even knew whom it was attacking, and it would be incurring the same cost itself that it imposes on whatever it's attacking. And Chrome security agreed with me: it's not a credible attack, and it's not a problem.

C: You could use social engineering for that, right? I mean, you could make a website say "oh, share me with this", and that could create an interesting attack. It's not just about random sharing; social engineering can influence who gets to share what.
K: So, currently you don't actually look at the handle until you call getCaptureHandle, and what we could do is say: just make sure that you don't allocate any memory, that nothing happens, until getCaptureHandle is called. It's going to be problematic in that getCaptureHandle is currently a synchronous call, and this would have to make it blocking; but that would be our way out if we ever see it become an attack, and I think it's very unlikely to ever become an attack vector.

H: ...stringifying the capture handle... another...
K: I'm only hearing one person actually objecting to making the handle a string or an object, then. I'm sorry, I thought that you just raised it as a question; do you formally object to this, or...?

E: Just giving some food for thought.

K: Okay, so so far it looks like Jan-Ivar is the only one objecting to this, and I'm...

H: Sorry... I'm sorry, we're not making decisions at that level at this point, and there's nothing formal about... I don't think it's appropriate to ask for formal objections at this stage. Okay.
C: I mean, and just to be clear: at least some of my questions are about the security implications. What exactly would this make transferable that may not have been expected to be transferable? I mean, I don't know that there is an issue, but at least it raises more questions than I had internally when I said I would probably be fine with an object.

C: I mean, again, I think you also made good arguments that, in fact, the surface is similar to a MessagePort. But I think maybe having an explainer where you put all of that together into a consistent picture, where you say which security issues you've actually looked into, or why you think they don't apply, would be a way to address these questions.
C: And, given that obviously we're not clear on the next step, I guess this gets back to the chairs to figure out a path forward for Elad, so that you don't remain stuck on this.