From YouTube: WebRTC WG interim meeting 2023 04 18
B
We're going to try to cover a lot of things today; it could be a very time-challenged meeting. Chris Thatcher will be acting as time police during the first slot, and we'll try to keep him honest after that. The slides, or a link to them, are up on the wiki.
B
The meeting is being recorded, and we have a volunteer for note-taking, which is Henrik. A reminder about the code of conduct we operate under: please keep things professional and cordial, per our meeting tips. Please raise hands to get into the speaking queue and lower hands to get out of it, and wait for microphone access. We will mute you if you try to jump the queue, and please state your full name.
B
I,
don't
think
we'll
be
using
Colts
today,
all
right
so
a
little
bit
about
document
status,
just
because
something's
in
the
repo
doesn't
mean
that
it's
been
adopted.
We
use
a
call
for
adoption
for
that
and
within
editors
drafts.
If
something
doesn't
have
consensus,
it
could
have
a
note
to
that
effect
and
we
try
to
also
attach
bring
things
to
the
working
group
and
put
the
discussions
into
the
GitHub
issues.
B
So
we've
had
a
very
nasty
bug
or
series
of
bugs
going
around,
and
so
a
lot
of
people
have
been
very
sick
for
the
last
couple
of
weeks
and
we
like
to
wish
them
good
well
to
get
well
soon
and
among
those
is
Harold
Peter
thanks
I'm
here
anyone
else
caught
that
thing-
hopefully
not
but
anyway,
rest
up
and
Harold
will
be
back
to
us.
Hopefully
in
a
week
or
two
and
I
know,
Peter
I
think
Peter
and
Samir
are
here,
but
anyway
take
care
of
yourselves
all
right.
B
So
here's
what's
on
the
agenda
today,
it's
pretty
time
challenge
meeting.
So
we're
going
to
be
pretty
religious
about
sticking
to
the
time
allocations.
First
is
a
grab
bag
of
a
whole
bunch
of
different
issues.
Then
we
have
Peter
on
r2p
transport,
Peter
and
Samir
on
Ice
controller
Jane
Brown
with
play
out
to
lay
and
that
U.N
has
some
exercise
hopefully
will
be
good
and
get
to
them.
I
I,
don't
know
how
how
easy
that's
going
to
be
all
right.
A
Right, yeah. So when I presented the proposal a few meetings ago about adding a codec selection API, we failed to mention that it could also be used for audio encodings. I just want to make it clear that it will also be used for audio, for simulcast audio, for whichever user agent ever implements it, if necessary. The examples were about simulcast video and mixed-codec simulcast video, but audio is of course also concerned, since this is a generic field that is shared by both audio and video encodings.
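As a hedged sketch of what exercising such a per-encoding codec field could look like: the helper below is illustrative only, and the idea of assigning its result to an encoding's `codec` member before `setParameters()` is the proposal under discussion, not a settled API.

```javascript
// Hypothetical helper: pick a codec from a capabilities-style list by MIME
// type, suitable for assigning to encodings[i].codec before setParameters().
// This helper and the data below are illustrative, not part of any spec.
function chooseCodec(codecs, mimeType) {
  const wanted = mimeType.toLowerCase();
  return codecs.find((c) => c.mimeType.toLowerCase() === wanted) ?? null;
}

// A capabilities-shaped list (values are placeholders):
const audioCodecs = [
  { mimeType: "audio/opus", clockRate: 48000, channels: 2 },
  { mimeType: "audio/PCMU", clockRate: 8000, channels: 1 },
];
const opus = chooseCodec(audioCodecs, "audio/opus");
// `opus` would then be assigned to params.encodings[0].codec, staying within
// the negotiated envelope, before calling sender.setParameters(params).
```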
A
And yeah, that's it, Jan-Ivar.
D
Yes, I just want to call out that I don't think anyone's planning to implement simulcast for audio, but I think the interesting part here is probably more the second use case covered by this API, which is to change codecs without renegotiation. I wanted to make sure that was called out, because I think it might be useful for that even for audio, but I'm not an audio expert. So, yes.
A
Nope. And if you wanted to do that, then we would need new APIs; that's going to be other proposals from other people. For example, I don't know, creating an Opus codec with different fmtp parameters, so that people could create new things, but that is out of scope here. We don't want to create new codecs; we want to use existing codecs that are supported. That's it, thanks.
B
All
right,
so
what
do
we
have
for
the
notes?
I
guess
approval,
but
can't
have
to
be
within
the
negotiated
envelope
and
also
doesn't
introduce
simulcast
for
audio
a
summary.
B
Okay, typo.
D
Hi, I think I'm on the queue. So no objection specifically, but I think in the issue there was discussion of only adding some of these, right? And the slide is asking for more than was in the issue. Was there a change of heart?
D
Okay,
yeah
I
think
my
only
concern
would
be
that
webrtc
stances
in
Canada
recommendation
so
that
there
would
be
and
I
think
was
also
discussed
that
normally,
if
stats
are
not
implemented,
we
put
them
in
provisional
stats
instead
and
so
Bill
you're
saying
that
you
will
be
providing
implementations.
One
implementation
for
these
is
that
right,
correct,
okay,.
B
Okay, can we get a summary for the notes, Henrik?
B
Thank you. All right, so I have issue 146. You know, we've been talking about how to expose codec errors and software fallback, and Henrik filed an issue, and we did.
B
We
discussed
this
in
the
media
working
group
last
week
and
a
useful
suggestion
was
made
by
Eugene,
which
is
that
it
would
be
helpful
to
distinguish
between
a
data
error
and
a
resource
problem,
and
what
a
data
error
is
is
if
you
feed
an
encoded
chunk
to
a
decoder
that
and
that
can't
be
decoded,
and
the
reason
this
happens
often
is
because
Hardware
decoders
are
more
strict
than
software
decoders.
B
So
it's
usually
a
spec
violation
in
some
way,
but
a
software
decoder
doesn't
error
on
it,
but
the
hardware
decoder
will
and
so
that
in
that
situation
you
have
to
fail
over
to
software
because
it's
probably
going
to
recur,
and
but
the
developer
needs
to
capture
the
bitstream
and
investigate
the
interop
issue
like,
for
example,
it
could
be
some
something
that
typically
a
software
encoder
is
doing
at
the
hardware.
Encoder
decoder
can't
handle,
but
there's
nothing,
there's
nothing
to
show
the
user,
because
there's
nothing
you
can
do
about
it.
B
On
the
other
hand,
a
research
problem
is
quite
different
because
the
hardware
decoder
becomes
unavailable
and
it
could
have
been
because
it
was
allocated
to
another
application,
or
maybe
the
GPU
crash,
or
something
like
that
in
this
situation,
the
developer
would
do
something
different
like
they
would
tell
the
user
hey.
Another
application
is
affecting
your
performance
or
is
using
the
the
hardware
resources.
Please
quit
that
other
thing,
and
then
it
might
be
able
to
reacquire
it
or
if
it's
a
GPU
crash,
maybe
you
need
a
reboot
or
something
so
anyway.
B
This
is
the
approach
that
Eugene's
going
to
try
to
implement
in
web
codex.
B
It's
not
a
hundred
percent
foolproof
that
you
can
always
figure
out
where
it's
a
data
error
versus
a
research
problem,
but
anyway,
that's
that's.
What
we're
going
to
try
to
do,
and
that's
my
recommendation
here.
So
what?
What
do
people
think
of
this
approach,
trying
to
focus
on
differentiating
these
two
things.
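To make the distinction concrete, here is a minimal sketch of how an application might branch on the two cases. The error names used ("EncodingError" vs. "OperationError") are assumptions drawn from this discussion; how WebCodecs will actually surface the distinction is still under investigation.

```javascript
// Hypothetical recovery policy for a decoder error, branching on whether it
// looks like a data error (bad bitstream) or a resource problem (hardware
// became unavailable). The error names are assumptions, not settled spec.
function planRecovery(errorName) {
  if (errorName === "EncodingError") {
    // Data error: the chunk violates the spec in a way the hardware decoder
    // rejects. It will recur, so fall back to software permanently and keep
    // the bitstream for offline interop debugging; nothing to show the user.
    return { fallbackToSoftware: true, notifyUser: false, captureBitstream: true };
  }
  if (errorName === "OperationError") {
    // Resource problem: e.g. the hardware decoder was grabbed by another app
    // or the GPU crashed. Tell the user; retrying hardware later may work.
    return { fallbackToSoftware: true, notifyUser: true, captureBitstream: false };
  }
  // Unknown error: leave the decision to the user agent.
  return { fallbackToSoftware: false, notifyUser: false, captureBitstream: false };
}
```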
B
I can't see who's in the queue. Oh, go ahead.
J
Yeah, so the data error case is already handled by user agents, which fall back to software without the web application having to do anything, so that seems good as is. That's actually something that is done, for instance, in Safari in some cases where SVC is not supported by hardware: we fall back to software, and the web application has no API to be involved. I did not see why the web application should be notified of an event where the user agent can do the right thing.
J
We have a banner in the Safari UI that tells the user that this web page is using very intensive resources, and if they want the device to run more smoothly, they can close the page. So again, it might be another application that actually has the resource hooked.
J
Maybe
the
user
agent
might
be
able
to
know
which
applications
and
might
provide
better
information
than
the
web
application,
so
I'm
not
sure
how
the
web
application
can
help
the
user
more
than
what
the
user
agent
can
do
there
to
fix
the
resource
problem.
So
I
I
would
tend
to
wait
for
media
working
group
to
settle
on
this
issue
and
then,
once
when
we
are
working
group
has
settled
on
this
issue,
then
we
can
look
at
what
they
they
did
and
try
to
to
to.
A
I think in the first case, of a data error, there's also the case where the software decoder is also not able to decode the data, if you have data that is bogus, and that's something that needs to be handled as well.
B
What decoder had the problem?
A
Yeah
in,
in
some
cases,
it
might
be
something
that
is
recoverable
by
a
software
encoder
right
and
which
is
in
which
case
it's
great.
Sometimes
it
might
be
interesting
to
know
for
the
application
that
the
hardware
decoder
is
having
issues
with
potentially
well-constructed
data,
but
then,
if
you
run
into
issues
or
specific
code
like,
maybe
the
application
will
want
to
do
something
different
switch
to
different
codec.
A
That
is
better
supported
by
the
current
Hardware
or
maybe
accept
the
software
fallback,
which
might
be
fine
and
having
that
kind
of
information
would
be
great
for
the
application
and
have
automatic
fallback
is
not
necessary.
D
There might be different cases here. It was unclear to me whether the only option is to fall back to software in these cases, if that's the subset we're talking about, or other types of data errors. I see there were two proposals in the issue: one was on data decode errors, and the other one was on software fallback. So I think we should just be clear about which problem we're trying to solve.
B
Yeah
they're
they're
related,
though
because
yeah,
because
you
fall
back
because
you're
you're
having
a
problem.
Okay,.
F
Can
I
I
rest
my
hand
earlier,
okay,
yeah
I
think
these
issues
were
were
originally
filed
because
I
mean
in
this
discussion.
We've
talked
about.
Hardware
errors
are
something
we
can
recover
from,
because
we
can
fall
back
to
software,
but
the
problem
is
that
the
application
prefers
to
use
the
better
resource
sets
than
falling
back
to
software,
and
it's
it's
of
course
arguable
where
the
hardware
is
always
better.
F
But
the
problem
is
that
when
we
are
not
we're
excluding
the
option
to
change
codec
and
we're
excluding
the
option
to
change
coded
configurations
when
we
hide
this
information
from
the
application
and
I
don't
see,
what's
been
discussed
here,
that
I
don't
see
an
attempt
to
try
to
solve
this.
So
I'm
not
sure
what
Mark
as
ready
for
PR
means.
B
Yeah, well, it would have the information, so it could decide, hey, as an example: I have a data error on a hardware encoder, but my application can't stand software; the performance is just going to be too bad.
B
Yeah, in the Media Working Group the goal is to improve the error handling so that applications would potentially have this information available; they would get it out of WebCodecs anyway. I mean, just to be clear, this is under investigation: Eugene is trying to implement this for WebCodecs, and we're not sure he can do it reliably.
D
So
y'all
over
here,
I
raised
my
hand
earlier
yeah
just
want
to
mention
also
the
Privacy
issue,
which
is
why
I
think
it
makes
sense
to
wait
for
to
see
how
the
media
working
group
handles
that.
First.
B
Well,
like
I
said
it's
already
been,
this
is
the
discussions
already
had?
No,
and
one
of
the
reasons
is
you
don't
need
to
know
as
much
information
in
this
approach
just
is
it
data
or
is
it
resource,
and
that's
that
the
question
is
what
what
would
the
application
do
with
the
info?
It
doesn't
really
need
more
info
about
what
it
was
just
just.
Is
it
a
data
or
is
it
a
resource
problem.
B
But
anyway,
the
point
is
that
it
there's
investigation
going
on
there,
but
it's
it's
in
this
general
direction
to
figure
out
what
what
can
be
provided.
Okay,
we'll
move
on
to
the
next
one.
F
For
the
notes
should
I
say
that:
well
so
it's
it's
under
investigation,
I
mean
my
general
feeling
is
that
we
don't
want
to
expose
privacy
leaking
information
unnecessarily
right,
but
that
there
is.
Is
there
some
positive
feeling
about
moving
in
in
the
direction
of
exposing
as
data
or
resource
and
keep
it
vague,
but
to
wait
for
wait
for
the
media
working
group
to
come
to
a
conclusion?
B
Yeah, I mean, the question is: it's not clear exactly how this data-versus-resource distinction would be exposed. One way is to just have a different error for those two different things; there's no additional information being exposed, since you're going to get an error in WebCodecs either way. But the question is, would it maybe be a slightly different kind of error, like an OperationError versus an EncodingError. Anyway, there'll be more info coming. Okay.
D
For my notes, I don't know. I mean, if someone wants to provide the effort: we have people who file PRs without filing issues, so I don't know that we necessarily need consensus for someone to do the work, but I also don't think that the working group needs to be.
B
Right. I think Peter has said it: right, we've gotta move on. Okay, all right, next slide.
H
All right, oops.
B
Right
all
right,
this
one
is
issue
170
incompatible.
As
we
see
metadata,
the
incompatibility
is
between
web
codex
and
encoded
transform.
B
So
anyway,
I
wanted
to
try
to
spend
a
few
minutes
describing
what
has
been
proposed
for
the
SVC
metadata
in
web
codex,
and
why
there's
a
the
basic
issue
has
to
do
with
spatial
scalability
and
what
you
need
in
the
metadata
to
be
able
to
handle
things,
and
the
problem
is
that
dropping
spatial
layers
is
easy,
but
adding
them
back
is
not
easy
and
the
reason
it's
not
easy
is
because
of
the
dependencies
and
the
way
spatial
scalability
works
and
let
me
show
a
a
slide.
B
So
this
is
at
l2t2,
so
two
temporal
layers,
two
spatial
layers
and
in
this
situation
basically
there's
at
time.
T2
a
spatial
layer
gets
dropped,
it
could
be
dropped
by
the
SFU
or
it
could
be
dropped
by
the
sender,
and
once
that
happens,
if
the
receiver
receives
the
next
spatial
frame,
which
is
S1
T1
at
time,
three,
that
frame
will
not
be
decodable
because
the
dependency-
that's
that's,
there
didn't
get
received
for
some
reason.
B
So
another
example
is,
but
the
dependencies
alone
are
not
sufficient
to
figure
out
if
the
frame
is
decodable.
So
here's
an
example
where
the
spatial
layer
gets
dropped
at
Time
Zero,
but
at
time
two
the
spatial
layer
is
received.
Well,
that's
not
decodable,
because
it
it
didn't,
get
the
spatial
frame
at
Time
Zero
that
it
depends
on
and
at
time
three
another
spatial
layers
received:
that's
not
decodable,
even
if
time
two
was
received
because
again,
the
time
zero
dependency
of
the
dependency
is
not
resolved.
B
Another
problem
that
can
happen
is
even
if
all
of
the
frames
are
received
and
nothing
gets
dropped.
You
still
can
have
an
issue
with
what's
known
as
the
decode
Target.
So,
for
example,
say
you
have
a
a
mobile
device
receiving
this
stuff
and
it
it
doesn't
have
the
screen
resource
to
or
the
to
handle.
For
example,
4K
at
60
frames
a
second
well.
It
could
be
getting
all
these
frames,
so
the
bandwidth
might
be
available
to
send
it
all
this
stuff,
but
it's
not
within
the
decode
Target
that
it
can
handle.
B
So
the
receiver
could
still
be
dropping.
All
of
these.
All
of
these
frames
that
it's
getting
even
if
all
the
dependencies
are
met.
B
So
this
this
kind
of
describes
what
information
you
need
to
get
need
to
have
that
goes
in
the
metadata
to
to
make
the
right
decisions
both
in
the
SFU
and
in
the
receiver.
B
So
the
the
point
here
is:
the
receiver
needs
to
figure
out
if
a
frame
is
decodable
and
to
do
that.
It
needs
not
just
the
dependencies
but
an
unbroken
chain
of
dependencies,
and
it
doesn't
want
to
calculate
that
dependencies
on
the
fly.
It
needs
to
know
that
in
the
frame,
because
it
basically
would
have
to
go
and
build
the
chain
and
you'd
have
you
know
potentially
hunt
dozens
of
receivers
building
a
chain
that
could
have
been
sent
along
with
a
frame.
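The receiver-side check being described can be sketched as follows. The frame shape (`frameId`, `dependencies` as a list of frame IDs) loosely mirrors the metadata under discussion; the function and field names are illustrative assumptions.

```javascript
// Sketch of the receiver-side decodability check: a frame is decodable only
// if every frame it depends on was itself received and decodable, i.e. the
// chain of dependencies is unbroken. Shapes and names are illustrative.
function processFrame(frame, decodableIds) {
  const decodable = frame.dependencies.every((id) => decodableIds.has(id));
  if (decodable) decodableIds.add(frame.frameId);
  return decodable; // true: hand to decoder; false: discard
}

// Replaying the L2T2 example above: the S1 frame at time two is dropped, so
// the S1 frame at time three is not decodable even though it arrived.
const decodable = new Set();
processFrame({ frameId: 0, dependencies: [] }, decodable);  // S0, keyframe
processFrame({ frameId: 1, dependencies: [0] }, decodable); // S1 base
processFrame({ frameId: 2, dependencies: [0] }, decodable); // S0 at time two
// frameId 3 (S1 at time two, depending on 1 and 2) was dropped, never arrives.
const ok = processFrame({ frameId: 4, dependencies: [3] }, decodable);
// ok === false: the chain is broken, so the frame must be discarded.
```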
B
So
that
doesn't
make
a
lot
of
sense
and
also
the
receiver
needs
to
quickly
determine
not
only
is
the
receive
frame
decodable,
but
also
is
it
necessary
for
the
desired
resolution
or
frame
rate
or
Dakota
okay.
So
now
we
get
to
what's
in
RTC,
encoded
video
frame
metadata.
B
The
problem
is
it,
it
does
have
dependencies,
but
it
doesn't
include
the
chains
and
also
it
doesn't
include
the
crypto
targets.
So,
for
that
reason
the
information
isn't
sufficient
to
support
spatial
scalability,
also
it's
incompatible
with
web
codex
in
terms
of
the
way
it
way
it's
done
in
web
codex,
you
have
an
SVC
dictionary,
which
is
a
subdictionary.
B
Currently,
this
only
has
temporal
layer
ID,
because
facial
scalability
has
not
yet
been
implemented
in
in
web
codex,
and
the
processing
web
code
exists
to
wait
until
we
have
an
implementation
to
put
this
stuff
in.
So
we
do
have
a
PR,
though,
for
web
codex.
We
did
discuss
this
previously
in
the
Weber,
see
working
group.
The
advice
here
was
to
submit
a
web
codex
BR
bring
it
back.
We
now
have
a
PR
at
654,
and
this
is
what
the
proposal
looks
like
put
in
the
chain
links
as
well
as
the
code
targets.
B
So
all
the
info
is
there
also
there's
a
little
bit
of
a
difference
between
what
we
call
frame
number
and
frame
ID
they're,
not
quite
the
same
thing,
three
numbers
much
smaller,
it's
actually
it's
an
unsigned
long,
but
it's
actually
in
the
ab1.
It's
only
six
roughly
16
bits,
because
that's
all
you
need
for
for
dependencies.
B
So
anyway,
we
had
a
discussion
in
the
media
working
group
and
the
big
question
was:
what's
the
state
of
implementation
in
chromium
Peter
investigated
it
I
don't
know
Peter
could?
Could
you
say
just?
Is
there
a
a
bite-sized
summary
of
what
you
found.
G
I'd
have
to
go
check
my
notes,
real
quick.
You.
B
So anyway, VP9 probably has the most, but not the chain portion of it. The big question that came up in the Media Working Group was: hey, why is this stuff in encoded transform? Has it been implemented? The dependencies, spatial index, and temporal index, are they implemented, and if not, should that stuff be removed until we have a complete implementation that can actually test it out?
A
So you can have that information stored in the bitstream, with a well-defined descriptor, for VP8 and VP9, and you should get that information just fine. It's not in the bitstream directly for AV1, but we require the AV1 dependency descriptor, so if you provide it, you should get most of the information.
A
Somehow
I
believe
from
my
real
sentimented
testing,
I.
Think
with
the
behavior
that
we
have
doesn't
mean
there
are
no
bugs
or
anything
like
that,
but
I
think
that's.
The
constant
and
I
do
have
some
tests
that
are
relying
on
this
for
SVC
implementation,
encrypt,
so
very
confident
that
it
works
in
some
cases
at
least
internally.
Maybe
some
chromium
people
about
to
see
issues.
B
Okay,
so
we've
got
this
incompatibility
between
web
codex
and
and
yeah
RTC
encoded
frame
metadata.
So
what
what's
the
you
know,
we
went
ahead
and
did
a
PR.
What's
the
solution
to
fixing
the
incompatibilities
Authority,
you
have
a
suggestion.
B
Okay, Henrik.
F
Yes,
so
get
stats
in
guest,
as
we
recently
added
the
the
following
audio
capture
related
metrics
to
the
RTC
audio
Source
stats,
the
source
stats
as
a
stat
object.
That's
only
present
when
a
media
stream
track
is
attached
to
a
sender,
so
all
of
these
metrics
was
added
to
this.
Existing
stat
objects
with
the
class
only
only
applicable
if
the
media
source
is
backed
by
an
audio
capture
device.
F
The
reason
we
want
this
capture
device
is
or
these
metrics
is
that
they
allow
to
calculate
quality
metrics
that
are
not
available
elsewhere,
meaning
the
bullet
point
list
shows
the
metrics.
You
can
calculate
glitches
by
taking
the
the
ratio
of
dropped
samples
and
the
total
samples,
or
you
can
calculate
the
average
delay
by
taking
the
the
capture
the
day
divided
by
the
number
of
samples.
The
total
capture
delay
next
slide.
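The two derived metrics can be computed as below. The field names (`droppedSamplesDuration`, `totalSamplesDuration`, `totalCaptureDelay`, `totalSamplesCaptured`) are assumed stand-ins for the stats members being discussed, used here only to make the arithmetic concrete.

```javascript
// Derived audio-capture quality metrics from a stats snapshot. Field names
// are assumed stand-ins for the RTCAudioSourceStats members discussed above.
function captureQuality(stats) {
  const glitchRatio = stats.totalSamplesDuration > 0
    ? stats.droppedSamplesDuration / stats.totalSamplesDuration
    : 0;
  const averageCaptureDelay = stats.totalSamplesCaptured > 0
    ? stats.totalCaptureDelay / stats.totalSamplesCaptured // seconds per sample
    : 0;
  return { glitchRatio, averageCaptureDelay };
}

const q = captureQuality({
  droppedSamplesDuration: 0.5,  // seconds of dropped audio
  totalSamplesDuration: 10,     // seconds captured overall
  totalCaptureDelay: 4.8,       // summed capture delay in seconds
  totalSamplesCaptured: 480000, // samples: 10 s at 48 kHz
});
// q.glitchRatio is 5% of the capture; q.averageCaptureDelay is about 10 µs.
```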
F
And
so
so,
these
were
merged
into
the
stats
spec.
But
then
this
issue
is
filed
and
as
previous
discussions,
these
have
all
been
marked
feature
at
risk,
because
so
good
point
one
is
get
user
media's
frequency
used
outside
of
webrtc,
so
it
seems
like
a
bad
fit
to
have
these
metrics
in
get
stats
inside.
Webrtc
second
point
is
about
the
spec
talks
about
audio
samples,
but
we
should
be
talking
about
audio
frames,
and
this
is
actually
just
a
terminology
difference.
F
What
the
stat
spec
says
means
when
it
says
audio
frames
or
audio
samples
is
actually
the
same
thing
as
the
rest
of
the
world
means
when
they
say
audio
frames.
It's
clarified
in
the
terminology
section,
and
the
third
point
was
that
it's
not
clear
why
our
audio
may
be
dropped,
but
this
relates
to
when
processing
of
the
audio
samples
is
not
in
a
timely
manner.
Anyway.
The
main
problem,
I,
think
other
than
clarifying
things
here
is
that
it
gets.
F
That
is
a
bad
location
for
these
metrics
when
they
belong
to
the
track.
So
the
proposal
or
what
I
want
to
get
a
a
sense
of
a
direction
here
is
if
it's,
if
it's
okay,
to
work
on
a
proposal
PR
to
move
these
to
the
media
stream
track,
we
did
have
video
capture
metrics
added
to
musician
track
in
webrtc
or
media
capture
extensions.
So
I'd
like
to
do
the
same
thing
with
these
audio
capture.
Metrics,
it's
a
point
one
is
that
makes
sense.
The
second
point
is
naming
Nets
U.N.
D
I think this is the right direction, because having to create a peer connection to get some of these stats is always a little awkward.
F
Okay,
nice,
do
you
have
any
opinion
about
if
we
should
use
one
and
the
same
yet
stats
method
for
all
metrics,
in
which
case,
maybe
just
rename
existing
one
to
get
stats
or
should
I
correct,
create
a
separate,
get
captures
audio
capture,
stats.
C
I don't know how much we want to go over here, but I was mostly just going to motivate this PR in the encoded transform spec.
C
There
was
a
bit
of
confusing
history
of
I
took
over
from
a
previous
pr137
from
last
year.
It
was
abandoned,
so
some
of
the
historic
discussion
maybe
got
lost
but
yeah
very
briefly.
The
motivation
is
for
web
apps
that
are
doing
both
at
War
transform
and
an
encoded
transform.
C
It's
super
useful
to
be
able
to
align
the
encoded
frame
with
the
Warframe,
previously
sort
of
corresponds
to
it,
and
this
is
achieved
in
the
web
codex
spec
by
having
a
timestamp
that
matches
on
both
sides.
Unfortunately,
the
timestamp
that
we
have
in
the
RTC
encoded
frame
ended
up
being
the
RTP
timestamp,
so
the
the
pr
is
proposing
adding
a
presentation,
timestamp
I
know,
there's
some
discussion
on
naming
there
yeah.
It
is
unfortunate
that
that
time's
done
sort
of
unqualified
name
is
already
taken,
but
that's
kind
of
where
we
are
any
quick
thoughts.
Pizza.
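A small sketch of the alignment this enables: matching each encoded frame to the raw frame it was produced from by equal timestamps. The `presentationTimestamp` field name is the still-being-named proposal, and the data shapes here are illustrative.

```javascript
// Match encoded frames to the raw frames they were produced from, using a
// shared presentation timestamp (microseconds, as in WebCodecs). The
// `presentationTimestamp` field name is the proposal under discussion;
// the RTP timestamp alone cannot be matched against raw-frame timestamps.
function alignFrames(encodedFrames, rawFrames) {
  const rawByTs = new Map(rawFrames.map((f) => [f.timestamp, f]));
  return encodedFrames.map((e) => ({
    encoded: e,
    raw: rawByTs.get(e.presentationTimestamp) ?? null, // null if no match
  }));
}

const raw = [{ timestamp: 0 }, { timestamp: 33333 }];
const encoded = [
  { presentationTimestamp: 0, rtpTimestamp: 90000 },
  { presentationTimestamp: 33333, rtpTimestamp: 93000 },
];
const pairs = alignFrames(encoded, raw);
// pairs[1].raw is the raw frame with timestamp 33333.
```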
D
I
think,
overall
we're
supportive
of
this
direction,
but
just
calling
out
my
last
comment
there
about
the
name
bike
trading.
If
you
will
is
that
there
is
a
note
on
RTC
encoded
video
frame,
a
comment
that
says
will
eventually
reuse
or
extend
the
equivalent
defined
in
web
codecs.
So
assuming
we
can
solve
the
naming
issues,
it
seems
like
we're
at
a
Crossroads
here.
If
we're
still
going
to
do
that,
I'm
curious
what
the
working
group
feels
about
redefining
timestamp.
It's.
B
Becoming
a
bigger
and
bigger
problem,
the
antiver
I
I
would
agree.
I
think
this
is
we've
been
we're
heading
down
a
road
where
we're
recreating
web
codex
and
that's
going
to
be
a
real
problem.
C
It's
just
discussion
on
the
pr
the
best
place
for
that.
Do
we
think
I
guess
there's
not
really
much
other
option
thanks.
B
Okay,
so
webrtc
combines
media
and
transport,
and
sometimes
it's
a
good
thing,
and
sometimes
not
such
a
good
thing.
B
As an example, some of the things we're talking about here in WebCodecs are different in WebRTC for reasons that aren't always clear. If something can't be expressed in SDP, it's difficult to do, even though it could be done at the lower level: things like trying to control encoding parameters, or custom resilience, or monitoring some of the hardware acceleration things we've just been talking about. It's difficult to support bleeding-edge codecs, which could be bring-your-own-codec, or it could even be stuff.
B
That's
already
supported
in
the
system
like
hebc
or
codec.
That
requires
a
new
rtcp
message,
which
is
already
in
the
system
like
dp9
or
everyone.
Don't
currently
support
layer
refresh
so
Peter.
G
All
right
so
as
Bernard
was
mentioning,
there's
a
lot
of
inflexibility
because
webrtc
has
the
media
and
the
transport
so
tightly
coupled.
So
what?
If
we
shop
that
in
half
into
the
media
part
in
the
transport
card
next
slide?
G
What
it
would
look
like
is
on
the
media
side.
You
would
have
encode
and
decode
and
on
the
transport
side
you
would
have
RTP
packets,
being
nrtcp
packets
being
just
sent
and
received
next
slide,
and
then
the
app
says
somewhere
in
the
middle
next
slide
in
an
API
between
these
could
look
something
like
what
we've
talked
about
needing
for
encoded
streams,
where
the
media
side
of
things
might
have
feedback
such
as
any
keyframe
and
the
app
might
be
able
to
tell
it
here.
The
bit
rates.
I
want
you
to
use.
G
On
the
transport
side,
the
transport
might
say,
here's
an
estimated
bit
rate.
So
today,
we'd
like
to
focus
or
I'd
like
to
focus
on
the
side
on
the
right,
the
transport
side.
You
may
have
noticed
that
the
side
on
the
left
looks
a
lot
like
web
codecs.
G
So
we
kind
of
already
have
the
site
on
the
left
solve
to
a
degree,
and
the
question
is:
can
we
do
decide
on
the
right
with
RTP
next
slide,
so
imagine
for
a
minute
that
we
Define
something
called
an
RTP
transport
that
is
able
to
send
and
receive
RTP
and
rtcp
packets.
These
packets
are
encrypted
with
srtp
and
srtcp.
It's
congestion
controlled
just
like
it
is
now
probably
with
transport,
CC
and
goog
CC.
G
They
sender
the
apis
for
sending
would
be
that
you
stream
packets
in
and
then
those
packets
go
out
on
The
Wire
and
then
on
the
receive
side.
When
the
packets
come
in,
you
get
some
event
of
packets,
whether
we
use
events
or
what
WG
streams
I
don't
have
to
tune
the
Weeds
about
right
now,
but
just
imagine
you
have
a
way
to
get
the
packets
in
and
out.
G
We
could
make
sure
this
thing
supports
workers
and
if
the
left
side
of
the
picture
had
before
was
web
codex,
it
should
work
well
with
this
right
side,
that
is
RTP
transport,
and
ideally,
you
should
be
able
to
construct
one
of
these
things
separate
from
a
peer
connection,
which
would
require
a
details
to
your
connection,
which
at
the
moment
does
require
a
peer
connection,
but
we
can.
We
can
fix
that.
G
So
next
slide
like
I,
was
saying
if
you
want
to
make
it
so
that
you
don't
need
a
peer
connection
at
all,
we
could
come
up
with
a
way
to
construct
a
dtos
transport
which
would
require
a
nice
transport
we'll
be
talking
about
that
later
later
today,
but
even
right
now,
if
we
had
such
an
RTP
transport
that
could
be
constructed
from
an
art
from
a
details
transport,
you
could
do
it
by
constructing
a
peer
connection.
G
If you squint, you'll notice this looks a bit like WebTransport, at least when you're using datagrams with WebTransport, except that this can be peer-to-peer, whereas WebTransport cannot be peer-to-peer, and this would have built-in latency-sensitive congestion control, whereas WebTransport currently has that as an unsolved problem.
G
Basically, if we had a combination of WebCodecs and this theoretical RtpTransport, it would solve all of the use cases that I've listed here: things like being able to receive audio and video without constructing a sender object; obviously you can receive whatever you want over the RtpTransport without creating RTP transceivers. Next slide. Or the case where you want to control what is sent and received in relation to RTCP and header extensions, without negotiation.
G
Well,
there
is
no
negotiation,
at
least
not
in
the
sense
of
peer
connection
STP.
Obviously
the
application
will
do
whatever
it
wants
to
decide
what
a
sense
and
receives
next
slide.
G
If
you
wanted
to
control
the
RTX
and
red
and
FEC
a
mechanism
like
the
r2p
transport
would
have
very
low
level
control
for
doing
that.
In
fact,
the
application
could
do
its
own
FEC.
If
it
wanted
to,
then
it
could
decide
what
packets
it
wants
to
send
it's
very
low
level.
So
this
would
be
this
requirement
to
be
met
next
slide
when
it
comes
to
being
able
to
control
the
rate
adaptation.
G
NV38 does not have consensus, okay. Well, if we added a low-level audio jitter buffer, we should have controls like this; but obviously, if the application does its own jitter buffer in between WebCodecs and the RtpTransport, then it can do whatever it wants with the jitter buffer, including controlling this delay. Next slide.
G
Sorry, I'm having to remember what these are myself, on the spot.
G
And
finally,
we
get
to
the
point
where
you
get
an
idea
of
what
the
web
ideal
for
such
an
RTP
transport
could
be
again.
We
could
go
down
the
road
of
something
like
web
transport
does,
with
what
WG
streams
or
we
could
go
down
the
road
of
what
web
codex
does
with
it
with
events
and
callbacks
here,
I
just
presented
events
without
what
WG
streams
so
imagine
that
this
thing
has
a
Constructor
that
takes
a
details,
transport
and
then
the
main
RTP
send
and
receive.
G
Is
you
have
a
method
where
you
call
send
RT
packet
and
then
there's
an
event?
That's
like
hey.
You
received
an
RTP
packet
for
rtcp,
you
have
a
send
our
TCP
packet
and
then
you
have
an
event
that
says:
hey
the
internet
just
need
to
pack
it
and
then,
if
the
target
bitrate
changes,
you
can
invent
fires
and
then
there's
an
attribute
for
the
Target
send
rate.
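Modeled as a plain class, the surface just described might look like the following. Every name here (`RtpTransport`, `sendRtpPacket`, `onrtppacket`, `targetSendRate`, and so on) is part of the proposal under discussion, not a shipped API, and the packet handling is stubbed out.

```javascript
// Illustrative shape of the proposed RtpTransport. SRTP protection and real
// wire delivery are stubbed; this only sketches the API surface from above.
class RtpTransport {
  constructor(dtlsTransport) {
    this.dtlsTransport = dtlsTransport;
    this.targetSendRate = 0;      // bits/s, updated by congestion control
    this.onrtppacket = null;      // (packet) => {} on incoming RTP
    this.onrtcppacket = null;     // (packet) => {} on incoming RTCP
    this.ontargetsendrate = null; // () => {} when targetSendRate changes
    this.sent = [];               // stand-in for the wire
  }
  sendRtpPacket(packet) { this.sent.push({ kind: "rtp", packet }); }
  sendRtcpPacket(packet) { this.sent.push({ kind: "rtcp", packet }); }
  // Stand-in for the congestion controller updating its estimate.
  _updateTargetSendRate(bps) {
    this.targetSendRate = bps;
    if (this.ontargetsendrate) this.ontargetsendrate();
  }
}

const t = new RtpTransport({ state: "connected" }); // fake DtlsTransport
t.sendRtpPacket(new Uint8Array([0x80, 96, 0, 1])); // minimal fake RTP header
t._updateTargetSendRate(1500000);
```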
G
So
next
slide.
Here's
a
very
simple
example:
if
you
somehow
constructed
a
peer
connection
with
negotiation
of
sdp
for
icdt
lives
and
sctp,
then
you
could
get
the
dtls
transport
construct,
the
RTP
transport
and
then
you'd
have
send
RTP
packets
or
send
of
rtcp
packets
next
slide.
G
So
one
thing
that
I've
mentioned
that
we
could
discuss
in
more
detail
is
the
media
side
of
things.
I
think
that
web
codecs
is
a
great
fit,
but
again
it
lacks
the
Jitter
buffers
and
perhaps
packaging.
So
if
we
wanted
to
make
it
easy
for
applications
to
take
the
output
of
web
codecs
and
packetize
for
sending
or
de-packetize
and
do
Jitter
buffer
on
the
receiving,
we
could
expose
some
additional
objects
for
doing
factorization
or
Jitter.
Buffers.
I
have
written
some
designs
for
that.
D
Yeah,
so
I
I,
don't
believe
an
issue,
so
I
welcome
opening
an
issue,
so
we
can
discuss
this
on
GitHub,
so
I
don't
have
to
give
my
first
feedback
on
YouTube,
but
I'll.
Try
I
I
feel
like
this.
What
you
mentioned,
let
me
step
back.
D
Envy
use
cases
kind
of
predates
web
transport
in
a
way,
so
I
think
we
have
a
problem
there
that
a
lot
of
the
use
cases,
an
NV
imagined,
a
webrtc,
2.0
and
I-
think
a
lot
of
those
use
cases
have
since
been
met
by
web
transport,
and
you
also
mentioned
there's
overlap
with
web
transport
here
so
and
the
things
you
list
are
actually
not
use
cases,
they're
requirements
for
use
cases.
B
You
haven't,
there
are
the
the
ones
that
were.
Consensus
have
already
been
linked
to
use
cases
and
ND
use
cases
and
they're
already
been
have
working
group
instances
so
that
the
slides
describe,
which
ones
have
objections
and
which
one
don't
but
the
ones
that
that
weren't
linked
as
non-consenses
are
work
in
group
consensus,
signers.
D
Right
well,
in
any
case,
Firefox
has
just
implemented
web
transport
it's
available
in
Firefox
nightly.
We
welcome
everyone
to
go
and
test
it.
That
means
there's
now.
Two
implementations
for
web
transport
and
I
would
like
to
ask
the
working
group
how
many
low-level
apis
do
we
need
and
pushing
back
a
little
bit
on
the
slide
that
says
everyone
has
wanted
to
do
their
own
packetization
into
their
buffer
and
rate
adaptation.
D
So
far,
if
that
is
true,
welcome
to
come
and
try
out
web
transport
I.
So
far,
adoption
hasn't
been
indicative
of
that
large
in
interest.
So
I
would
like
to
push
back
and
also
specifically,
it
seems
a
bit
early
to
discuss
web
IDL
and
some
of
the
concerns
that
we've
already.
We
spent
a
lot
of
effort,
keeping
video
video
track
generator
of
a
main
thread,
for
example,
and
this
would
give
main
thread
exposure.
J
I think that... so, we see a lot of people trying to push a lot of stuff into encoded transform to do all of their customization, and this is wrong, and this approach is trying to clean up that mess somehow, so that's good. There's a desire to have an IceTransport, then you can have a DtlsTransport, then you can have a WebTransport-like approach where you separate both. So that's also good from that perspective, so I think it's the right approach.
J
The question Jan-Ivar is raising in terms of use cases is a good one: are there sufficient use cases to warrant the effort to implement this? That's a good question, and we should spend some time there. But once we gather enough information on the use cases and decide, yes, we want to do it, then this is the kind of approach we should take. Yeah.
A
I think that's an interesting approach that probably needs to be discussed more. To respond to Jan-Ivar's concern about overlap with WebTransport: I don't believe WebTransport will work in most cases that involve peer-to-peer, since WebTransport only works with a connection to a server, not between two browser instances. There's also, I imagine, an interest in keeping the existing infrastructure: all the SFUs that are working right now with RTP packets and that are not based on Media over QUIC.
K
Sorry, hi. So, yes, I mean, I think it's interesting; I have some concerns.
K
It does strike me as being very low-level, and as inviting applications to basically implement their own implementation of WebRTC in JavaScript. That's possible, but it's going to be very hard for people to do this and do it well. Doing it is one thing; doing it well is another, and that's not so easy, and I'm a little concerned about that. But I think it's worth discussing in more detail what this would actually get us.
K
Which of the use cases this would enable are already enabled elsewhere, as was mentioned, and what alternatives there might be to this design to meet those use cases. So I think it's worth opening an issue and discussing. I have some reservations about some of the details here, and about there being some alternatives available. One of the issues with WebTransport:
K
it's only client-to-server at the moment. That's not to say it couldn't be peer-to-peer, but that would require additional specification work to get there. The second issue, which I think was also touched on in the slides, is that the congestion control currently implemented for WebTransport is not real-time friendly. That could be fixed, however, and there is an "I want real-time congestion control" flag; it's just not implemented by anyone that I know of yet. In any case, that's my opinion.
G
Let me quickly reply to three things there. First, about WebTransport: I don't think you'll find someone who disagrees. I'm a fan of WebTransport, and I would like to see it solve the issues of both peer-to-peer and real-time congestion control, and I think it can. But it will never have the same interop with existing endpoints, which Florent was mentioning and which I also put on my slide.
G
Even if WebTransport becomes everything we want it to be, there will still be a reason to have RTP capabilities for web apps. And about the question of re-implementing libwebrtc: the reason I was earlier mentioning the audio jitter buffer, the video jitter buffer, and the packetization is that those are the main things you're missing that somebody would need to re-implement. The packetization is not so hard, but the jitter buffer certainly is; writing a high-quality audio jitter buffer is not an easy task.
G
So that's why I do have a proposal for how we could have an API for that; I could bring it to the working group to discuss, if you'd like. Also, in response to the question of congestion control:
G
part of the detailed design I have for this does address the question of how we keep the application from screwing up congestion control. Basically, we make sure that the browser can add the mechanisms it needs, both in RTCP feedback and in RTP header extensions, to do congestion control without the application screwing things up. So I do think that's a solvable problem. Related to that, which nobody brought up, is the question of SRTP sequence-number reuse and how to avoid it; I do have a solution for that as well.
G
I've thought this through a bit, and I'm more than happy to discuss it in great detail, but the overall question first is: do we want to have those discussions? If the answer is yes, then great. Do you want to respond? Anyone with hands up? Yeah.
K
Yeah, that's good. The one thing I would like to say, just expanding a little bit, is that, all things being equal, if I had a choice, I'd prefer to solve this through WebTransport, and perhaps there are some cases whereby...
K
you mentioned Florent's issue; I don't know, I'd rather solve it through WebTransport if we can. I think that's a cleaner, more forward-looking solution. So I want to see that we're going to get a clear win here that we can't get by doing other work, especially with WebTransport, for the use cases that we decide we care about. Bernard?
B
Yeah, just a comment that the IETF is not going to be using WebTransport for peer-to-peer: RTP over QUIC is being defined with raw QUIC, using peer-to-peer QUIC, which this working group decided not to work on. And in MoQ they're discussing whether to use WebTransport or not; it's not clear they will use WebTransport, and they may also go with raw QUIC. So, anyway, the decision on where to go for protocols is in the IETF, not in the W3C.
D
But in any case, WebTransport would merely be providing a data channel, so to speak (no pun intended), and we already have a data channel for peer-to-peer. So the question here is whether there are peer-to-peer use cases that aren't satisfied by existing technology; that's what I would like to see. I also gave a link in my earlier objection, in chat here, issue 100, where I point out that you can already use VideoTrackGenerator and existing APIs to send data in this fashion. That doesn't require...
D
so, without better use cases, it feels like a premature optimization, but I would love to hear reasons why that's not the case.
G
So, on the topic of use cases: we've been talking about them forever, and we have never gotten around to really making solid proposals for one-way APIs for encoded media, and I think that's a shame. It's been far too long. People want to be able to do things, and we're just talking about talking, forever, about what these use cases are, and eventually people want us to actually implement something.
G
What I'm suggesting is that this RTP transport, when combined with WebCodecs, and perhaps an audio jitter buffer, could be a solution to all of these things that we've talked about. I think it meets all of the use cases anyone's brought up, but I am getting tired of talking about use cases; it seems like we never get anywhere.
J
Yeah. So there was the promise that WebTransport plus WebCodecs would somehow be able to replace the WebRTC PeerConnection in JavaScript, that at least you would be able to do that, and that's not true; it's not really possible. And people actually want that.
J
So maybe they should be more clear about precisely why they want it, but they keep coming to us to ask for something like that. So I think it would be great to collect all the very precise use cases for that and try to solve it with a clean approach: like, let's do a transport-level API for RTP,
J
if that's really what the use cases and interest say we want, instead of trying to slowly extend things like WebRTC encoded transform, because that's really not the place where all that stuff should happen. There is pressure to do it in encoded transform, and that should not be our answer.
B
Okay, I think we're out of time for this segment. What Youenn just described is probably something that needs quite a bit of time, I think. So one of the things we may be thinking about is what to do at TPAC relating to this, and how to handle stuff like this; or we can try future meetings. But thank you. We're going to move on to the next slot, which is the ICE controller.
L
Hi, I'm Samir, excuse me, and I would like to continue our discussion about the WebRTC ICE improvements. If we could go to the next slide:
L
where we were at last time was that we had three different proposals for trying to do something like this, so there's a lot of options. One of the main pieces of feedback was that quite a lot of the API can be split into smaller increments, and: how do those increments map to use cases?
L
So that's what Peter and I have been trying to do since the last meeting. What we've done is organize all the different proposals into a single proposal. It's got lots of common ground, so we agree on quite a lot of things. There are a few cases where we have multiple ways to do something, but that's something we can discuss; those are fairly small issues. The second thing we've done is split the entire proposal into several smaller increments, each of which adds something significant in terms of capability.
L
So each of those is a good increment to implement, and all together they add up to pretty much anything we would want to do with an ICE controller, I think. So hopefully those increments make sense. If you could go to the next slide: that's the entire list of all the increments. I know that's quite a lot, so let me try to go over these in batches, starting on the next slide.
L
Here's the first batch, where the application gets to maintain several candidate pairs, control when those get removed (or cause some of them to be removed), and then select which candidate pair to use for the transport. Next slide: the next increment is about ensuring low latency, by observing the RTTs for the different candidate pairs and what states those pairs are in. Next slide:
L
this is about controlling ICE connectivity checks. So there are APIs to control how often connectivity checks are sent, to prevent the user agent from automatically sending them, to control the timing or the delay of those checks, and then to observe when responses are received for checks, or when checks are received from the other end. And then the next slide:
L
this batch is about controlling how local candidates are gathered, or regathered if necessary. Next slide: this is around keeping local candidates around, not having them pruned prematurely, or deliberately causing them to get pruned. And then the last slide: the last batch is about creating an ICE transport without a peer connection, and then supporting ICE forking by having a separate gatherer. Peter can talk more about the API shape itself for these increments.
G
Right. So we tried to break it down into very small increments. Each one of those line items is a slide here with a small addition to the existing API, with the exception of perhaps the very last ones, and we put them in what we thought was priority order: the most important thing to implement first.
G
For preventing candidate-pair removal we actually have two options. There are only a couple of places where we present two options, and that's because, even though we agreed on almost everything, Samir and I still aren't sure whether we prefer cancelable events or more direct control.
G
We can discuss that as a working group, but for today, just pick the one you like in your mind, consider the proposal as a whole with that option, and we can discuss which one it should be later. So: if it's a cancelable event, then on the ICE transport there would be an event indicating that a candidate pair has been removed, or rather is proposed to be removed,
G
I guess, since it's cancelable, and then the application can cancel that and say "no, don't remove that candidate pair." Whereas in the direct approach, there would be an event to say that a candidate pair got added, and then an attribute on that candidate pair saying whether it's removable or not; if you set that to false, then it can't be removed. Next slide.
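The two option shapes just described can be sketched side by side. This is a hypothetical illustration, not spec text: the event name ("candidatepairremove"), the attribute name ("removable"), and the class itself are assumptions made for the sketch. It does show the real platform semantics of cancelable events: `dispatchEvent()` returns `false` when a listener called `preventDefault()`.

```javascript
// Option A: cancelable event. The ICE agent proposes a removal and the
// application vetoes it with preventDefault().
class SketchIceTransport extends EventTarget {
  maybeRemovePair(pair) {
    const event = new Event("candidatepairremove", { cancelable: true });
    event.candidatePair = pair; // expando property for the sketch
    // dispatchEvent() returns false if a listener called preventDefault().
    const allowed = this.dispatchEvent(event);
    if (allowed) this.removePair(pair);
    return allowed;
  }
  removePair(pair) {
    pair.removed = true;
  }
}

// Option B: direct control. The app sets a "removable" flag once, and the
// agent consults it before pruning, no event listener needed.
function maybeRemovePairDirect(transport, pair) {
  if (pair.removable === false) return false; // agent must keep this pair
  transport.removePair(pair);
  return true;
}

const transport = new SketchIceTransport();
// Option A usage: always cancel, so every pair is kept alive.
transport.addEventListener("candidatepairremove", (e) => e.preventDefault());
```

The trade-off is visible even in the sketch: option A requires the app to keep a listener registered and react every time, while option B is a one-time declaration per pair.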
G
Now, this is something where you may want to know the state of a particular candidate pair, so we basically add an event and an attribute to the candidate pair. So, yes, we'd have to change the... right, is it already an interface? I can't remember; one of these objects is currently a dictionary,
G
and we'd have to change it to an interface. But assuming this one's already an interface, the state would give you something that looks a lot like the current ICE state for the entire connection, except this is a per-candidate-pair state, so that if you had more than one, you'd know which one is connected, or still checking, or whatever. Next slide.
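A small sketch of what a per-pair state attribute buys the application: with several pairs kept alive, it can pick out the usable ones. The state strings are assumed here to mirror the existing RTCIceTransportState values ("checking", "connected", etc.); the pair objects are mocks for illustration.

```javascript
// Given a list of candidate pairs, each with a (sketched) per-pair "state"
// attribute, return the ones in a particular state.
function pairsInState(pairs, state) {
  return pairs.filter((pair) => pair.state === state);
}

// Mock pairs standing in for what the proposed API would expose.
const pairs = [
  { id: "wifi", state: "connected" },
  { id: "cellular", state: "checking" },
  { id: "relay", state: "connected" },
];
```

With only the existing connection-wide ICE state, an app holding three pairs could not make this distinction at all; that is the capability the per-pair attribute adds.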
G
It can often be useful to know the RTT, or the result of outgoing checks, so we can add an event to the RTCIceTransport to know when a particular ICE check was sent and a response was received, or it timed out. We might need a better name for this; I apologize for the name here, the "ice check sent and resolved" event. But basically: for a particular candidate pair, what time the check was sent, what time the response was received, and, if it failed, why it failed.
G
Another incremental change would be to control how quickly a particular candidate pair sends checks, so that if you wanted a candidate pair in a background state, only checking every 25 seconds, say, you just set the interval to 25 seconds; or if you wanted it to be more frequent, every second, you can set it to every second with this attribute. Next slide.
G
If you wanted to prevent outgoing checks on particular candidate pairs, to get more precise control, then again we have two options: the cancelable-event approach and the more direct-control approach. With a cancelable event, there'd be an event that says "hey, the ICE agent proposes sending a check," and then you can cancel that if you want. The other approach is, similar to before with whether a candidate pair is removable or not:
G
now it's checkable. You just set that to false, and then no checks will be sent unless (next slide)
G
you call the sendCheck method. So if you want to control exactly when checks are sent, you can call sendCheck whenever you want a check sent. This gives more manual control than setting the interval, but it's also a little more hands-on.
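The two levels of control described above (an interval attribute for automatic checks, and a manual sendCheck method) can be sketched as follows. The names `checkIntervalMs` and `sendCheck` are assumptions for illustration; the timing logic is the point.

```javascript
// A pair with an interval gets automatic checks on the agent's tick;
// a pair with no interval only checks when the app calls sendCheck().
class SketchCandidatePair {
  constructor(checkIntervalMs = null) {
    this.checkIntervalMs = checkIntervalMs; // null => no automatic checks
    this.lastCheckSentMs = 0;
    this.checksSent = 0;
  }

  // Called by the (sketched) ICE agent on every scheduling tick.
  maybeAutoCheck(nowMs) {
    if (this.checkIntervalMs === null) return false;
    if (nowMs - this.lastCheckSentMs < this.checkIntervalMs) return false;
    this.sendCheck(nowMs);
    return true;
  }

  // Manual control: the app decides exactly when a check goes out.
  sendCheck(nowMs) {
    this.lastCheckSentMs = nowMs;
    this.checksSent += 1;
  }
}
```

A "background" pair would be constructed with a 25000 ms interval, a foreground pair with 1000 ms, and a fully manual pair with `null` plus app-driven `sendCheck()` calls.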
G
Sometimes it's useful for the application to know when things stop coming in, not just checks but also other media, and so we add two attributes to let you know the last time a check was received, or any packet was received. That way, if a particular candidate pair suddenly has no traffic coming in, then by polling this value you would know that it's been, say, over a second, and you can take some action accordingly. Next slide.
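The polling pattern just described, sketched in a few lines. The attribute names (`lastCheckReceivedMs`, `lastPacketReceivedMs`) are assumptions standing in for the two proposed attributes; if neither checks nor media have arrived on a pair within the threshold, the app can react, for example by switching to a backup pair.

```javascript
// True if the pair has received neither a check nor any other packet
// within thresholdMs of nowMs.
function isPairStale(pair, nowMs, thresholdMs = 1000) {
  const last = Math.max(pair.lastCheckReceivedMs ?? 0,
                        pair.lastPacketReceivedMs ?? 0);
  return nowMs - last > thresholdMs;
}
```

The app would call this from a timer (e.g. every few hundred milliseconds), which matches the "by polling this value" framing above: the proposal exposes timestamps, not a "went stale" event.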
G
Now, gathering local candidates when a new network interface comes up: this is very useful for mobile situations where Wi-Fi might come and go, and we have two options here. This time it's not cancelable versus non-cancelable events; it's more about how forward-compatible we want to be with eventually adding ICE forking, if we choose to. If we don't care about ICE forking ever happening, then we would put an attribute on the RTCIceTransport, basically saying "gather continually," which is to say: if a new interface shows up,
G
Wi-Fi shows up that we didn't have before, gather candidates for it. Then there's option B, where we want to be compatible with ICE forking in the future; then it would be better to put that attribute on a new object called the ICE gatherer, and expose the ICE gatherer through the transport. That's because, with ICE forking, you would want this gatherer to be shared amongst many transports.
G
So it's almost the same at this level, but I just wanted to point out that the attribute either lives in its own object, the gatherer, or not, depending on how compatible we want to be with potentially doing this forking. Next slide. Along with gathering candidates when new interfaces come up, there's also the question of: what if a previous interface, say Wi-Fi, wasn't working, because of, say, a captive portal not having been completed by the user, but now it is?
G
How can the application tell the ICE agent to re-gather it? Again, we can either put this directly on the transport, or on the gatherer, depending on how compatible we want to be with ICE forking; but either way, it's a gather method that basically says "re-gather." And there used to be an option for either "gather all of the network interfaces again, even the ones I'm currently using" or "only gather the ones that I'm not currently using, only the ones that failed previously," basically, and so there's a Boolean flag for that. Next slide.
G
It's often the case, as mentioned, that you would want to prevent the ICE agent from removing local candidates; you want to keep them around. Here again we have two options, one of them being a cancelable event: the ICE transport could say "hey, I propose removing this local candidate," and then the application can choose to cancel that. Now, that's not always possible.
G
With the more direct approach, option B, it would be similar to the candidate pair, where there's a removable attribute, except now it would be on a local ICE candidate, and this is the one that would need to become an interface instead of a dictionary. Basically, there'd be an event on either the ICE gatherer or the ICE transport, depending on whether we want to be compatible with ICE forking, and the object from that event, or from the getter (getCandidates), would have a removable field indicating whether it's removable or not. Next slide.
G
Then, of course, if you prevented removal of local candidates, you want to be able to remove them yourself. So in a world where we don't want ICE forking, that would be on the ICE transport, called removeLocalCandidates; and in a world where we want it compatible with ICE forking, that would be on the ICE gatherer, as removeCandidates. It's pretty straightforward either way.
G
Now we get into the slightly more complicated ones, the use without a peer connection and ICE forking, the two last things on the list. So: construction without a peer connection.
G
Basically, there would need to be a few things that exist at the peer-connection level pulled into the ICE transport: in particular, a constructor, a way to tell it to start, and a way to gather, with the particular fields that exist in the RTCConfiguration right now, such as the ICE servers and the ICE transport policy.
G
So it's not a lot of stuff. Oh, and then the role, indicating the ICE role. So it's not a lot of things; it's pretty small. But the benefit would be that if we wanted to do something like pass an ICE transport into a WebTransport constructor, to allow for peer-to-peer WebTransport, then we would need a way to construct an ICE transport without a peer connection. That's why we would want this. Next slide. And the last one is ICE forking.
G
What's funny is that, actually, if we do the previous thing with the ICE gatherer, then ICE forking
G
just falls out of the API. But if we wanted to do ICE forking with a peer connection, then we would need a way to pass an ICE gatherer into the peer connection, and it would actually be doable with just this one addition to RTCConfiguration, assuming the parts from the previous slide were added, with the constructor of the ICE gatherer.
G
Okay, we've got some hands raised, but I think there's one more slide I wanted to get to before taking questions. Next slide. Right. So the summary here is that we've broken this down into lots of small, incremental improvements that are each valuable on their own, independent of the others, so that, for example, if in the next year a particular browser only felt the need to implement half of them, it could do so without having to implement the whole thing.
G
None of them are particularly difficult to implement or use, with perhaps the exception of the last part, where you want to be able to construct this independent of a peer connection; but that's down the list. All together, these do almost everything anyone's ever asked to be able to do with ICE. So, one more slide.
G
The top question there is: does the working group want to pursue this direction, this unified thing, with this incremental approach? It's just what people asked for last time. All right, who's in the queue first?
G
Yeah, I was kind of assuming that, if you're going to be doing ICE work, you're willing to do max-bundle; otherwise you'd have to pass in more than one gatherer.
D
I do have some feedback, and I know there were a lot of different things, but regarding the question of cancelable events: I'm not necessarily opposed to them. I think their semantics may make sense in cases where you're obviously trying to intercept or prevent something the ICE agent is doing today; that might be a valuable pattern.
D
On the other hand, if the application might want to remove a pair not in reaction to something, but on its own, then I agree that Peter's method-based API is better. So if the two of you can agree on where each pattern works, it might be a mix. My other feedback is just about the details.
D
It's about the detail level: I do struggle to see the picture a bit. It's hard, from the Web IDL alone, to imagine; it's like trying to imagine a meal by looking at the recipe. I'm wondering how the JavaScript would look when using all these Web IDL APIs. So apologies if I don't have a clear view right now, but it seems to me that I did see a lot of defining of custom events.
D
I wanted to point out in chat, too, that there's a W3C design pattern that seems to prefer using plain events, and here, I would imagine, we would only need to create a custom event interface if there weren't already, say, a "pair" member, for example.
G
Maybe, yeah. Almost all of the events here were on the cancelable-event approach. The exception, I think, was getting the RTT value, and it might be possible to instead have an attribute, something like "latest RTT" or "latest check result."
J
It's a long list of interfaces, so I cannot sign off on all the interfaces there, but it's good to have a path forward that seems pretty clear. I'm hoping the first API bits are the ones that developers will start choosing
J
first, because then we can start prototyping and shipping things first, seeing that there's interest, and that also helps motivate all the remaining stuff. So I'm hoping that the first three there will add great value, so, yeah, let's start digging into that, and let's start being nitpicky about the API and the design and so on.
G
Yeah. When Samir and I made this order, we were basically thinking: okay, as application developers, which would we like? Or, when we get feedback from people, what are the things they're asking for the most? We tried to put it in that order. So, yeah, one option, certainly, is: let's really nail down those first three and then work from there.
L
I just wanted to address Jan-Ivar's comment about how the JavaScript actually looks when using this API. For the new proposal I do not have a full example yet, but on my GitHub, where I put the old proposal, I do have an example application that uses the older API, so it will look fairly similar.
L
Of course, it looks a bit different with cancelable events versus booleans, but it still hopefully gives some view of what it might look like. I also want to mention, regarding cancelable events versus the other approach: we will still have explicit methods in both approaches to actually do things like remove a candidate; it's just about how you prevent the default behavior from happening.
L
Do you say, at one point, that you want to stop the default behavior from happening altogether, or do you make that decision each time the event is about to happen? I think that's the main difference between those two approaches. We haven't nailed that down yet, but, yes, it might benefit from broader discussion, or we can discuss it later.
D
Yeah, just to chime in: there seems to be a big semantic difference, though, between only reacting to the ICE agent and wanting to control things on your own schedule. That seems to be the big difference between the APIs, so maybe in some cases one would be better than the other, and it might not be the same everywhere.
G
Well, to be clear, for things like removing candidate pairs, there's a method to remove them in both. It's really just about how you prevent them from being automatically removed: in one case you're saying "listen to this event and always cancel it," and in the other you're saying "just tell it to never remove this candidate."
D
Right, so we'd fire an event already. In that case we're trying to prevent an action, and that seems like where cancelable events might fit. But if the goal is to remove a candidate pair, rather than prevent it from being removed, then a method seems good, right?
G
For both; there were parts where Samir and I were not in agreement, and parts where we were. The part we were in agreement on was having a method that removes them. Yeah.
F
Generally speaking, I'm hearing support from all sides, and we should just get down and dirty with the specific proposals, starting at the top of the list. If that's successful, then we'll have more motivation to keep going down the list.
B
In terms of how to move forward in the meetings coming up: one question that's come up is how many people will be in person for TPAC, and whether we'll have an in-person TPAC meeting or not. So Youenn is giving a thumbs up; you'll be there at TPAC, is that right?
B
All right, so Youenn's the only one whose thumbs are fully up; everyone else, maybe not. Okay, and Youenn says thumbs down on being there alone. All right, so we're going to move on to the next item, which I think is playout delay.
D
Yes. So you may not remember, but there's an API called receiver.playoutDelay. It's implemented in Chrome as receiver.playoutDelayHint, but it's in the extensions spec, and there have been multiple issues opened around how it works. Basically, at Mozilla we're trying to implement it, and we found some new issues as well, so there's been some discussion. There seems to be some alignment, so we want to run this by the broader working group to see if that alignment holds.
D
Some of the issues are: does playoutDelay on video affect the jitter buffer of synced audio, and vice versa? Should we clarify the playoutDelay input value: is it jitter-buffer depth, or jitter-buffer depth plus the playout-path delay you might expect from the browser? Should playoutDelay be in milliseconds? Right now it is in seconds, which is a mistake it inherited from getStats; there's a design principle that says to use milliseconds in the platform, and getStats might have been my fault.
D
The Media Working Group is also using microseconds for video frames, so lots of fun. There's an issue on whether to call it playoutDelay or playoutDelayHint, and even though Chrome implements "hint" right now, Chrome also doesn't throw in some cases where it should, so they might be amenable to a new name.
D
For instance, there's an upstream bug about video delay actually being introduced outside the jitter buffer, by changing the timestamps, which seems wasteful, but may have been mostly to keep A/V sync; maybe audio was the most pressing issue at the time. But instead of focusing on the negative side effect as a control surface, we think it would be more desirable to control something positive, and the positive goal is jitter-free media. Next slide.
D
Instead, make the input value the target jitter-buffer depth, and nothing more. So the proposal here is receiver.jitterBufferDepth = a value in milliseconds, per the W3C design principles, section 8.3, which says to use milliseconds for time measurements, and basically let the application then compare that to the gradually matching playback-delay measurement it would already get from getStats.
D
I have a JavaScript fiddle here that works in Chrome, and it shows some of the JavaScript for how you would get those stats: basically, jitterBufferDelay divided by jitterBufferEmittedCount gives you a value in seconds, so you have to multiply by a thousand to get milliseconds, and you can compare that. The spec already allows implementations to change the value gradually over time, so you won't see it change immediately.
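That stats arithmetic can be written down directly. `jitterBufferDelay` (a cumulative total in seconds) and `jitterBufferEmittedCount` are real fields in the stats report; the object passed in below is a plain mock standing in for the report entry, since this sketch is not run against a live connection.

```javascript
// Average time each emitted sample spent in the jitter buffer, in ms:
// cumulative seconds / emitted count, times 1000.
function averageJitterBufferDelayMs(stats) {
  if (!stats.jitterBufferEmittedCount) return 0; // avoid division by zero
  return (stats.jitterBufferDelay / stats.jitterBufferEmittedCount) * 1000;
}
```

In a page, the app would iterate `await receiver.getStats()` to find the inbound-rtp entry carrying these two fields, then compare the computed average against the target it had set.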
F
So my understanding is that what you're proposing we should implement, and basically mandate, yes, is this as a control surface for the jitter buffer. My understanding is that this is what's already implemented, and the difference we saw between the jitter buffer and the actual delay is probably just a bug in how jitterBufferDelay is reported.
F
So overall I'm supportive of this direction, and if we're lucky, this is already what's implemented; then we just need to land a bug fix in what getStats exposes, rather than changing the implementation. My only thoughts are on milliseconds, changing the name, and whether or not to throw an exception, which were the differences.
F
I'm supportive of the proposal; I think it all sounds good. I would question whether it's worth changing the name if it's already implemented: is the migration worth the new name? But if we're changing units from seconds to milliseconds, then we'd have to change the name anyway.
D
That's very good to hear. As far as whether it would change the implementation: I think Chrome right now implements playoutDelayHint, whereas the spec says playoutDelay, so that's already a name change. If we changed the name to match Chrome, you would still have the issue that the spec says to throw on bad values.
F
The question is: what's the easiest path forward? If we change the spec to say playoutDelayHint, then there would be no migration needed; and if we change the spec to say "don't throw an exception" (I don't see the value in throwing the exception), then both the name and whether or not to throw seem like bikeshedding; it doesn't really matter. So the question is:
F
is it worth fixing these nitpicks if it requires a migration, or does it not matter much whether it's called this thing or the other?
D
I think that's a good way to phrase the question, but my answer would be that I would prefer a new name, because otherwise it's not clear what changed. There are also reasons, I believe, that we had for throwing above four seconds: if you add, say, 20 seconds of delay to a WebRTC real-time communication connection, it begs the question of what this thing really is and what our application is. I mean, this surface is trying to use WebRTC, which is fine-tuned for real time, less than 100 milliseconds, for two-way communication.
F
If you set it to five, we could either say: oh, you set it to five, we'll just interpret it as four. Or you could say: throw an exception, because it's bad input. In my opinion it doesn't matter a whole lot which one we do, but if you feel strongly about changing the name, for example, I'm not going to object; I'm just curious whether it's worth the name change.
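The two options on the table, clamping out-of-range input to the cap versus throwing on it, can be sketched as plain validation functions. The 0 to 4 second range comes from the discussion; the function names are made up for illustration:

```javascript
const MAX_PLAYOUT_DELAY_SECONDS = 4; // upper bound discussed for the attribute

// Option 1: silently clamp bad input into range
// ("you set it to five, we'll just interpret it as four").
function clampDelay(seconds) {
  return Math.min(Math.max(seconds, 0), MAX_PLAYOUT_DELAY_SECONDS);
}

// Option 2: reject bad input with an exception, as the current spec text does.
function validateDelayOrThrow(seconds) {
  if (seconds < 0 || seconds > MAX_PLAYOUT_DELAY_SECONDS) {
    throw new RangeError(
      `playout delay must be in [0, ${MAX_PLAYOUT_DELAY_SECONDS}] seconds`);
  }
  return seconds;
}

console.log(clampDelay(5)); // 4
```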
J
Henrik, you said that you think there might be a bug in Chrome's work, so that might be the first thing to check: is the Chrome implementation correct, or is there a bug? Then report that. If it's a different behavior, then a new name for a different behavior makes a lot of sense. Otherwise, if it's exactly the same behavior and there was just a small bug, then maybe it's not worth it.
F
D
I also am not a fan of the word "hint". We've tried to get rid of that word in the past, because of what it often suggests: I think mostly we want control surfaces that are testable and confirmable in multiple implementations, and "hint" gives the impression that it's optional to implement, or that if it doesn't work, that's okay.
F
I
I
agree
with
you,
I
think
I
think
the
name
was
a
mistake.
My
only
hesitation
is
how
much
how
important
it
is
to
change
the
name.
D
B
Okay, so we have some extra issues from Youenn, which we might actually get to. Wow, yeah, incredible.
J
Issue:
39:
let's
go
with
that,
so
we're
back
in
media
capture
main
where
we
have
tracks
and
we
can
be
muted
and
muted-
is
a
Boolean
and
there's
a
mutant
and
mute
events,
and
this
is
used
in
in
Safari.
You
can
see
on
the
right
where
Surfer
is
showing
some
Chrome
UI,
some
UI
that
allows
the
the
user
to
say.
Okay
I
want
this
webpage
to
no
longer
capture,
basically
temporarily
and
the
user
can
switch
on
and
off
capture
and
then
based
on
the
user
decision.
J
The
muted
state
of
the
track
will
be
updated
or
Q2
or
to
false,
and
this
allows
controls
to
be
on
on
Safari.
Whatever
the
website
is,
and
you
can
see
on
the
left.
A
typical
like
WebEx
style
UI,
but
websites
are
actually
implementing
to
let
the
user
decide
and
the
issue
here
is
that
you
can.
J
If
you
have
like
two
selections,
if
the
user
is
Select,
is
using
one
and
then
the
other
and
so
on,
it
becomes
very
complex
to
for
the
user,
and
it
would
be
good
to
be
able
to
sync
both
so
that
there's
a
good
way
for
the
website
to
influence.
Whatever
the
you,
the
Safari
or
the
Chrome,
is
actually
exposing
Android
verse
next
slide.
J
So if we look at OS-level indicators, currently on the camera side it's pretty deployed: there's the OS pill, and there are Safari, Chrome, and Firefox UIs as well. Based on that, websites tend to stop the camera when muting, when they really want to mute, so there's the desire to enforce muting. We did some effort to allow some kind of muting if the website sets enabled to false for all the tracks.
J
But
it's
it's
a
bit
error
prone
and
I
I'm,
not
sure
it's
being
used
widely,
but
stopping
tracks
is
being
used
for
OS
level
microphone
indicator,
so
it's
deployed
as
well.
Ios
has
it,
for
instance,
but
websites
do
not
tend
to
stop
microphone
when
muting
and
one
reason,
maybe
is
the
ability
to
detect
with
the
user,
is
speaking
so
WebEx.
Has
this
nice
feature
where
you're
muted?
But
if
you
start
speaking,
then
WebEx
will
show
some
UI
that
tells
you
hey,
you're
speaking,
but
you're
muted.
J
So
maybe
you
want
to
click
that
button
and
some
osc's
like
iOS
are
supporting
disability.
The
website
does
not
have
access
to
the
microphone.
Samples
of
the
application
does
not
have
access
to
the
micro
version
itself,
but
it
has
access
to
whether
there
is
some
speech
activity
ongoing
and
then
it
can
unmute
itself
and
so
on
natively.
J
So
if
we
have
that
and
it's
being
used
natively,
it
would
be
great
to
be
able
to
expose
that
in
websites
as
well.
So
the
proposal
here
is
to
have
a
straightforward
API
to
for
the
web
application
to
either
request
muting
or
unmuting
of
camera
or
microphone
next
slide.
J
So
they
are
like
three
different
levels
of
granularity
that
we
could
use.
It
could
be
at
the
track
level.
You
request
muting,
and
then
you
mute
the
track
source,
meaning
that
all
the
track
clones
are
also
muted.
J
We
could
have
some
API
at
the
device
level
like
input
device
info,
so
that
you
you
don't
care
about
the
track
itself
again,
it
would
be
request,
Mutual,
custom,
mute
and
it
would
mute
the
same
thing
in
the
track
source
or
we
could
have
even
a
navigator
API,
which
would
be
like
you
request
to
mute
all
capture
ongoing
in
a
web
page.
J
That's
that's
easier
for
UI
from
Safari
or
Chrome
or
Firefox
UI
to
to
expose
the
fact
that
yeah,
you
muted,
and
it's
not
like
one
camera
is
muted,
but
the
other
is
not
muted.
That
may
be
complex,
so
these
are
all
three
proposals.
I
think
that
I
would
go
with
either
the
first
or
the
second
one
personally,
but
I'm
welcoming
input
for
both
whether
people
sign
request,
mute,
interesting
request
and
mute,
interesting
and
if
so,
the
granularity
of
what
we
should
Target
foots.
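None of this is a shipped API; it is a proposal under discussion. Still, the track-level option, with the source-level propagation to clones described above, can be modeled as a toy sketch (every name below is hypothetical):

```javascript
// Toy model of the PROPOSED track-level requestMute()/requestUnmute().
// Muting applies to the shared source, so every track (clone) backed by
// that source observes the same muted state.
class Source {
  constructor() {
    this.muted = false;
    this.tracks = [];
  }
  createTrack() {
    const track = {
      source: this,
      get muted() { return this.source.muted; },
      // Muting is always allowed; a real user agent would not prompt.
      requestMute: async () => { this.muted = true; },
      // A real user agent could prompt the user here and reject; hence
      // the proposal makes unmuting a promise-returning "request".
      requestUnmute: async () => { this.muted = false; },
    };
    this.tracks.push(track);
    return track;
  }
}

const source = new Source();
const a = source.createTrack();
const b = source.createTrack(); // stands in for a clone of `a`
a.requestMute(); // effect is synchronous in this toy; a real UA is async
console.log(b.muted); // true: muting the source mutes all clones
```

The asymmetry (plain mute, promise-based requestUnmute) mirrors the point made below that user agents may want to re-confirm unmuting but have no reason to disallow muting.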
D
E
D
I think my preference would be to put it on the track, because I think that would leave it up to user agents what to do, and I think that might be fine.
D
J
So, still, for requestUnmute I want the user agent to be able to prompt, because maybe the mute happened, like, two days ago, and you unmute, and maybe you lost the user action; the user agent might want the user to again provide approval. That's why it's a promise, and that's why it's requestUnmute. For mute, I think, since you're releasing, it could just be a mute and that's all, because I don't think that user agents will want to actually disallow muting.
J
The mute use case is that if you have tracks, you clone them, you transfer them, blah blah blah: you need to chase each one of them to actually set enabled equal to false, and that's very error-prone. What you actually want is to mute the source; you do not want to disable each track. And when you set enabled equal to false, you might not actually want to mute the source: there may be some applications that do not want the track's source to be muted.
J
They just want this track to be silent, so I think it's better to keep a specific mechanism for it.
J
Yeah, and both then would be muted. But then, if you want to mute all of them, you need both of them to set enabled equal to false. That's the issue, really. So I think it's good: if one website wants to mute, it's better to be able to do that without too much coordination, and I feel that we might end up in issues generally. It's a little bit like getDisplayMedia.
D
J
Okay, I think so. Is there, like, agreement to go forward basically with this direction? I guess maybe it's just Jan-Ivar and I that are interested, but still, it might be good to record that. For issue 263, since we only have two minutes left, I think we can leave that for next month. Elad, I know what Elad thinks there, but it's better if Elad is in the room as well. So we can leave that for next month.
B
Anyway, thanks everyone. We finally got through a slide deck, or at least 99% of it, and we will see you at the interim next month.