From YouTube: WEBRTC WG meeting June 2018 Day 2 part 2
Description
No description was provided for this meeting.
C: So the purpose of this session is to make a list and discuss potential protocol dependencies of what we've been talking about. I'll leave off whether we're actually going to do this work or not; I'm just trying to make a slide with a list of the protocol dependencies, assuming we were to do the things we talked about. We will be developing these slides as we speak and talk.
C: So we have this list that Stefan and I put together. It may not be complete, so feel free to suggest additional things. For the security work that was presented, the question is whether there were any protocol dependencies in what was discussed, for any of the potential scenarios. We talked about some scenarios that only involve PERC double and some that required full PERC, but were there any that required anything other than those two things?
G: ...allows that method to be called more than once. Chrome already implements that draft, so I suppose, if we thought there was enough need for it, we could go back to the ICE working group and say: hey, there's a lot of interest in this. The draft didn't actually get adopted at the time I proposed it two years ago; I can't remember, there wasn't enough interest. And for candidate removal, I don't think there's anything that needs to be mentioned in the ICE working group.
G: So you're talking about retaining local candidates without pruning them. Yeah, or sorry, deactivating them, whatever word the ICE spec uses. And look, I went and read through the spec, and it doesn't say that the agent must do so, right. It doesn't say you must free the...
C: ...pair after you nominate. Okay, I guess the question is: when you do that, both sides might have non-nominated pairs. But how do you know which pairs are... I guess you just keep checking them to know which pairs are alive. That's the general idea, yeah; there's nothing that...
G: The second is including it in the STUN checks, which is useful for when you get a peer-reflexive candidate, and that is something that isn't covered by flex-ICE right now. There's no mechanism included for the app to say: when you send a STUN check, use this as the network cost, so that if we end up with a peer-reflexive candidate that doesn't match any candidate I signaled, the other side will still know the network cost. So that is kind of...
H: You didn't put anything in for adding arbitrary attributes?
G: No, and I didn't put anything in for network cost either. That is something that was available in slice, but not in flex-ICE. Now, that's not to say that slice couldn't do that; I just didn't put anything in at the moment. It's a bit of an edge case, but there might be situations where we...
C: So the next list was QUIC. For unreliable streams, I guess there was this basic support for unreliable QUIC, and that included stuff like maxRetransmits and packet lifetime. Well, those were the two proposals, or whatever they were. They would be the QUIC data channel proposal, if you wanted a protocol, if you wanted to write a shim on top of it, although that's not required for the QUIC work we were talking about here.
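For reference, today's SCTP data channels already expose the two unreliability knobs mentioned above through `RTCDataChannelInit`, and a QUIC data channel shim would presumably mirror them. The sketch below is an illustrative validator, not a real API; the one rule it encodes (the two options are mutually exclusive) matches how `createDataChannel()` behaves.

```javascript
// Illustrative normalizer for the two unreliability knobs discussed above,
// modeled on RTCDataChannelInit. Not a real API; a QUIC shim is assumed to
// expose an equivalent pair of options.
function normalizeChannelOptions(options = {}) {
  const { maxRetransmits, maxPacketLifeTime } = options;
  if (maxRetransmits !== undefined && maxPacketLifeTime !== undefined) {
    // createDataChannel() throws if both limits are set at once.
    throw new TypeError('maxRetransmits and maxPacketLifeTime are exclusive');
  }
  return {
    ordered: options.ordered !== false,           // default: ordered delivery
    reliable: maxRetransmits === undefined &&
              maxPacketLifeTime === undefined,    // no limit means reliable
    maxRetransmits: maxRetransmits ?? null,       // give up after N resends
    maxPacketLifeTime: maxPacketLifeTime ?? null, // give up after N ms
  };
}

// e.g. "send once, never retransmit" style channel:
const lossy = normalizeChannelOptions({ maxRetransmits: 0 });
```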
C: So I guess what we're trying to get at here is: is there some algorithm that's likely to be used that wouldn't be documented? Like, if you had a pluggable congestion-control enum and it had BBR, we'd probably need to reference something. Are there any missing references?
C: It would only be used for SCTP, at least on this slide. The way to think about it, Tim, is that just like regular SCTP, you know, New Reno or whatever, has an interaction with the various RMCAT-specified congestion control mechanisms, this would as well. How exactly that works is a research problem; I mean, you'd want to know that, but it's not specified in the algorithm itself. Yeah.
J: So if everything goes well, we should be able to proceed with this, yeah, within the next week. So first, my apologies: there was a series of misunderstandings and gaps in the discussions due to personal issues, but hopefully I have a full rewrite. I'm trying to get confirmation that it addresses the concern that was raised, and once I have that confirmation, I'll proceed immediately.
H: Very soon now. Awesome, and thanks to everyone who contributed. So, are we done? When I was writing this slide, that was more of a troll question, maybe, and perhaps not: we're nowhere close to being done, because there are several issues. One that has come up routinely is experimental stats, because when we started the document we said we wanted to document metrics that are well understood.
H: Now we've landed in territory where we are adding things that are not very well understood. We're looking at QUIC and we are doing SCTP; lots of new things: new objects, and the metrics on those objects. Some of them have never been defined before. So in this process we came up with things to do: if you want to propose a new stat, you make an issue, and you may get feedback on that issue.
T: So yeah, it was added as a compromise. There were stats that people wanted to add, but we couldn't get them to hit the standards bar. For example, what would be the definition of whether something is audible or not? The idea was that it's better to document something than to have some horrible goog-prefixed stat.
H: While we're talking about stats: well, there are two parts to it. One is a stat on an existing object, an object that already exists, and we just realize, through operational experience in the field or whatever, that there is a missing stat that needs to be added to, let's say, an existing object. That's a metric, just like one key in an existing dictionary.
H: So there's that aspect, and some of those are very well understood because they were defined somewhere; it was just something that we did not think was important early on, when we created the WebRTC stats. That's easy: you make a proposal, you follow the steps, it will get merged, and at some point the browsers will implement it. But then there's the second part of it, like Henrik brought up.
H: So there is that aspect, and I made a slide; later we'll come back to it. But then there's the implementation status. On a spec level, I feel very confident that we have done a good job. We've added a lot of new metrics over the last few years, and a lot of the metrics make sense.
H: Okay, thank you, Karthik, can you reach out, yeah. So this was just verification, and we're moving on to do some validations, because this is also something we promised at the last TPAC. We don't have really good results right now, but this is currently in the works. The idea is that if you set something at 30 frames per second, does the stats report 30 frames per second? And you can say: oh, if you have a 50 millisecond delay pipe in between, what is the reported RTT?
H: Is it 50, 60, 70? And if it's reporting 150, probably it's wrong. So there will be some validations that we'll add, and we hope to have some good results before the next TPAC. So there's more work to be done, and there's more work to be done by the browsers as well. One of the things that we learnt is that the implementations are incomplete: remote stats are missing in Chrome, and most other browsers do not report much, so there's not much to say here from a developer's perspective.
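The validation idea above can be sketched as a simple bounds check: configure a known input (a 50 ms delay pipe, a 30 fps source), then verify the reported stat lands in a plausible window. The bounds and tolerances below are illustrative assumptions, not numbers from any spec.

```javascript
// Minimal sketch of the stat-validation idea: the reported value should be
// at least the configured value and within some tolerance above it.
function validateStat({ name, configured, reported, tolerance }) {
  const ok = reported >= configured && reported <= configured + tolerance;
  return { name, configured, reported, ok };
}

// A 50 ms pipe: RTT of 50-70 ms is plausible; 150 ms is probably wrong.
const checks = [
  validateStat({ name: 'roundTripTime', configured: 0.050, reported: 0.062, tolerance: 0.020 }),
  validateStat({ name: 'roundTripTime', configured: 0.050, reported: 0.150, tolerance: 0.020 }),
  validateStat({ name: 'framesPerSecond', configured: 30, reported: 30, tolerance: 2 }),
];
```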
H: So, irrespective, it makes adoption of some browsers difficult, because they don't have all the telemetry. Developers start preferring a certain browser, and that browser might have different telemetry and different features, so it makes this whole thing a lot messier. So, some suggestions for NV. One of the things that we notice is that people implement features, but then they don't implement the stats, or they don't wire up the stats. So maybe we should make stats first-class citizens, make it in a way that it appears automatically; but I'm not proposing anything right now.
H: So it's kind of confusing how people are running these large deployments without having all the stats available. Are they only on particular browsers, or is something else happening? And from our own experience, one of the things that we realized was that, since the peer connection is not available directly, we've had to explicitly ask developers to give us the peer connection. So this is our model: whoever integrates callstats has to give us the peer connection. The other one is that people override the peer connection.
H: They load a JavaScript which moves the peer connection to the window, and then they have access to it. This makes it a lot more brittle, and I don't think either of these is a really good model; at least for us, from a business perspective, it's not a good model, but we would like to explore better ways of doing it. This is just a side motivation for us. So the alternatives were things like stats per object; I think that has been proposed in some cases, where stats are available within the object.
H: So I actually don't know if any of that is correlatable, because if things are milliseconds apart, or tens of milliseconds apart, you don't know: if you encoded a frame, has that frame been packetized, have the packet counters gone up or not? So in many cases this is confusing. With the current getStats you don't have that kind of problem, because you call getStats and you get everything as one coherent snapshot.
A: The problem with that particular model, with all these models, is that we have stats on objects that are not exposed as script objects, like candidate pairs. So implicit in saying that we have stats per object is that we'd have to expose a lot more objects, if only to get stats from them.
H: So I think the heart of the problem that I want to solve is the fact that we don't have that many implementations of the stats; the implementations of stats have not been kept up to date, even as per the stats document. So we've done a good job on the document, but the implementations are way, way behind.
F: We're very reactive on this subject, meaning that only when somebody comes to us and says "hey, I'm missing that stat" do we look at it, and if we can expose it, usually we expose it. We are not very proactive, in a sense; if it's not available, basically we are not doing anything, and it's up to whoever to see what's missing. So we do want to expose these stats at some point, but it comes after the other features.
T: I think one problem is that getStats as a whole is very important, but on a metric-by-metric basis, depending on the lower layer, it can feel like high effort for low impact. So it's reactive from Chrome's point of view as well: if someone is asking for a metric, that's the incentive to have it. I think a large problem is that the legacy getStats has been out for so long that people are sort of comfortable using it. So what I'm thinking of is, sort of, not spec work.
H: So we did that: we moved away from the legacy getStats in Chrome to the new one, and what we've had to do is call the legacy getStats and backfill everything in the remote stats, because we realized that some of those metrics are available in one API but not the other, and I think you have to do that in a transition period. Yeah, so we're doing that.
K: I would agree with a lot of that. I mean, Firefox doesn't have that many legacy stats, but we had a couple; we're trying to remove them and get rid of them. We actually have a lot of legacy junk, because stats have changed a couple of times, that we might get rid of and clean up. As for the long tail of various metrics, same thing: I think we're looking to people and partners to express interest. It's always easy to implement these things.
H: There's some overlap, and there are some browsers which have done additional work that other browsers have not done. So some people have done a good job in some directions and others have not, and the metrics which overlap are good; but then, for example, round-trip time has been implemented by some browsers but not by others. Round-trip time is the most important metric, I would say, for many people, jitter is another, and the implementation status of those metrics is also disputable.
E: Some stats are easy to test on a one-by-one basis; some of the stats actually change across time, or accumulate in aggregate, and so on, so the WPT will not be able to test all of them. Actually, we have added a lot of getStats support, but again, the variation between the browsers makes it difficult to test across them, and the lack of network instrumentation actually ties our hands.
H: This is one of the reasons why we are exposing even the validation: to allow people to just subscribe and get reports weekly, monthly, whatever, or on new browser updates. We're in the process of exposing a lot of this instrumentation externally.
E: A couple of them, off the top of my head: there are some stats that are about the remote peer, and there are some stats that are influenced by the network between the local and the remote side. So if you test within a tab, with two peer connections, you do not represent the other possible cases. Some of the stats can only be really meaningfully tested on an end-to-end basis, and some of the stats also need variation across time, with controlled conditions.
F: Well, if you aren't very picky, you can at least show that the dictionary value is there, and that the value makes some sense in the context of WPT, so at least you would get some coverage. And at least for us, what we are doing is exposing what the underlying engine is doing, so it would be good to validate what it is doing.
T: I think even if you can't test that, you can test that something makes sense. There was a bug where some byte counter included the header or the payload or something when it shouldn't have, and at that level, the exact number of bytes, it might not be testable in web platform tests. But you could set up a test where, say, you have a canvas-based track: you can check the counters and then produce one new frame, even though you might not know exactly how many bytes or packets are going to be sent.
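The relative check described above can be sketched without knowing exact byte counts: after producing one new frame, the byte and packet counters should be monotonically non-decreasing and the frame counter should go up by exactly one. The snapshots below are hand-written stand-ins for two successive `getStats()` results.

```javascript
// Compare two stats snapshots taken before and after producing one frame.
function checkCounterDeltas(before, after) {
  return {
    // Counters must never go backwards between snapshots.
    monotonic: after.bytesSent >= before.bytesSent &&
               after.packetsSent >= before.packetsSent,
    // Exactly one new frame was fed in, so framesEncoded advances by one.
    oneNewFrame: after.framesEncoded === before.framesEncoded + 1,
  };
}

const before = { bytesSent: 10000, packetsSent: 12, framesEncoded: 5 };
const after  = { bytesSent: 11300, packetsSent: 14, framesEncoded: 6 };
const result = checkCounterDeltas(before, after);
```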
H: I wouldn't want to dwell on this, because I think we have already done a lot of work here, and when we come out with the validation you'll see some of these results, and we'll have some caveats there. I'm sure not everyone's on the same page on this; there are different systems and different tools out there, but we will of course try to make sure that these tools are generally available, and if not, at least that the results are generally available.
F: On the WPT side, when you get the results, you can see for each browser what is supported, what is green and what is red. And if you see, for instance, three green and just one red, you might say: hey, maybe the priority is a little bit higher on this one. You can see some things like that. Yeah, yeah.
H: And we also did try to make sure there's only one side sending audio. That's why it's audio-only and video-only, so that you don't get this plethora of stats from both sides. And for some of the things that were about network testing, you can easily do hairpinning through a TURN server and bring it back, and you can control the link to the TURN server.
H: So there are ways that we've already come forward on some of this testing of networks and variable situations, which, when we release the validation results, we hope should become easier. But that takes us away from my original question: if I understood correctly, we are reactive as a community in building more stats, and now that we're thinking about doing NV, we are already aware of this. So how do we make sure that we do not do the same thing in NV?
T: I'm just thinking out loud, but usually there is a counter that goes up when something happens, and that should be part of the description of the interface; it's part of the operation that it's doing. If it's doing something in the background, it should say: when this happens, increase this counter. And then getStats would just be a matter of taking these values that you already have.
T: Whatever the variable slots are, the spec says they're updated in this place, and getStats returns the value of those slots; there you go. But you can't get around it: it's bad when a spec says "here's the entire feature" and then there's a separate document that doesn't go into detail about how you actually calculate these things. It just says: oh yeah, this should be the packet counter, and it's the first occurrence of a packet counter, right.
H: That's one of my things as a consumer of getStats: I'm happy that our document has this big coverage, in the sense that it covers a large scope. But what I'm kind of unhappy about, because I'm not the person who's implementing it, is that I'm looking for guidance, as one of the co-authors of the document: what can we do better? And if it means that we make it more in-your-face, that's probably the right thing to do.
Q: You know, you can get into some interesting problems, which is to be expected, they're experimental; but if people aren't careful, you'll end up with applications in the field that have simply ignored the fact that it's experimental and assume it's never going to change, right.
A: Usually you have an experiment with, say, two applications that are really part of the experiment and can be relied on to track what's going on, and fifteen applications that might discover the value and say "oh, this looks useful, we'll just record it" or whatever. The latter is what standards are supposed to prevent.
T: But I think having it in the standard, or exposing it in any way by default, would be a mistake; then these things stick around. It also gives people an easy out: they say "let's try out this value", they add it, it sort of makes sense, and then, well, they already have the value now, so why bother continuing the standardization effort? No, it shouldn't be available unless you do something like an origin trial: guard it behind a flag, and then release it.
H: For the second case I'm thinking of, it's a well-defined metric and we don't know how useful it is. Why wouldn't we just go ahead and put it in the spec? Because it's well defined, it might not be abused; people may not implement it, not all browsers will, some might, and it might never go away. The first one...
T: I'm less worried about stats that are well defined. "Let's add it whether or not it's useful, and then you can always remove it later": whether or not... I don't think you can. The problem is you can't deprecate stats the way you deprecate a function or an interface. There's no deprecation warning, and we have no idea how much the stats are used. There's still a distinction there, yeah.
H: And we have had that issue, for example, with round-trip time. I think the original one had a bad metric in it, it was in milliseconds, and then I think Firefox needed to change it, and it took a while. So, given that experience, I would not want to do that again: deprecating a known metric is hard, especially if there's a bug or something like that in it.
K: Sorry, I was going to jump back to the earlier point, Henrik's point: the RTCStatsReport interface is a maplike of string and object, which means technically you could add getters. We've tried to, and there are some limitations on our end, but in theory there's nothing in the spec that prevents you from using getters. But even getters are hard, because you have to deal with libraries that might JSON.stringify all your stats, and you don't want to mix two formats in the same call.
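The serialization hazard mentioned above is easy to demonstrate: because the report is a maplike, `JSON.stringify` on a `Map` yields `"{}"` and the entries are silently lost, so libraries that stringify stats need an explicit conversion first. A plain `Map` stands in for a real `RTCStatsReport` here, with made-up stat ids.

```javascript
// A plain Map standing in for an RTCStatsReport (maplike<string, object>).
const report = new Map([
  ['RTCIceCandidatePair_1', { type: 'candidate-pair', currentRoundTripTime: 0.023 }],
  ['RTCOutboundRTPVideoStream_2', { type: 'outbound-rtp', packetsSent: 42 }],
]);

const naive = JSON.stringify(report);                       // '{}', entries dropped
const usable = JSON.stringify(Object.fromEntries(report));  // keeps the stats
```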
A: And we do have a section in the spec called "obsolete stats" where we move everything that is considered useless. We kind of think that once you've used up a name, you burn that name forever: once a stat name has been in the spec, it's used up, and you can't use it again. So anything that's not useful, we move into obsolete stats.
H: And I think we had the same problem with RTT. It was in milliseconds as opposed to seconds, and we replaced it with roundTripTime; only the discerning eye catches it, but we did not overload the same variable with a different definition.
C: So, it has been pointed out that there's a bunch of missing things, so we thought we would try to fill in that portion of the discussion and also summarize a little bit of what we've talked about so far. In this meeting we've talked about the following aspects. Peter talked about API levels; I think the general interest was in B, or perhaps C. B is roughly the level of ORTC, and C is a little bit below.
C: We have talked about the security work, and discussion of that is ongoing. We talked about QUIC, and I think it is fair to say there is no consensus about that, at least yet. We've talked about RTP transport and wanted to talk more about it, so I think it's accurate to say there's no consensus about that either.
C: The encoder and transport separation: I think there's no consensus about that. We've talked about data channel objects; that seems to have consensus, because it's already in WebRTC 1.0 and nobody wanted to rip it out, so it stays. On ICE, we had some enthusiasm for flex-ICE extensions, but that was largely for use with QUIC; we didn't define anything else that could use it. And then we talked about scalable video coding, mostly for WebRTC 1.0. No, not yet; I'm just trying to summarize what we've talked about so far.
C: But if we're talking about an object model, there are other things that weren't talked about. We haven't really talked about the RTP sender or receiver. We haven't talked about the DTLS transport or other objects; or, yeah, a standalone SCTP transport, good point, or a standalone RTCDataChannel built on that. So the overall picture has not been filled in; we've mostly just been talking about what we don't agree about.
C: So the question is: are there things that have not been discussed yet that we actually do agree on doing? Some of those I've separated out: ORTC has some mandatory-to-implement stuff and some optional stuff. Examples of mandatory things are the RTP sender, RTP receiver, DTLS transport, RTCDataChannel, SCTP transport, and of course the ICE transport; in the case of ORTC, also the IceGatherer. Then there are some optional things.
C: The RtpListener is optional, but oddly enough, if you look at that section, it actually describes how RTP and RTCP packets are routed, and it's actually a very nasty little section, because it's even more complicated to specify than BUNDLE, the equivalent. You're basically trying to get the same result as BUNDLE, so you're compatible with WebRTC 1.0, and there seem to be a bunch of different ways to do it; it's been implemented differently, with results similar to BUNDLE but maybe not exactly the same.
C: In our discussion so far, we've mostly learned what we don't have consensus about. But the question is: do we have consensus on some things we've not talked about, and do we want to move forward on aspects of WebRTC NV that we haven't talked about, that we've just described? The potential answers are: yes; not now, but soon; or never, never do this. And so we wanted to solicit opinions from the group on whether we should even talk about the things we haven't talked about.
C: The concept in ORTC is that the sender and receiver have capabilities, as they do in WebRTC 1.0; in fact, it's 90% the same capabilities. That tells you what the object can do, and then you configure it with what you want it to do, out of those capabilities. So, for example, just like in 1.0, it'll have a list of codecs.
C: In addition to what you see in 1.0, it has a list of the parameters, like "I support these profile-level IDs", stuff like that. So you advertise these capabilities, and then you get to set things within the limits of the capabilities. That's how the sender and receiver work. Like in WebRTC 1.0, most of the capabilities are on the sender; the receiver just kind of sits there. In fact, your clarification, Jan-Ivar, on the decoding and encoding parameters and send and receive was very helpful.
C: It's largely the same set of encoding parameters as in WebRTC 1.0, with the exception of the SVC stuff, which we just talked about, which is there and isn't in 1.0 at the moment. The way these things go: today in WebRTC 1.0 you have senders and receivers with these things attached to them, so you get a sender and you can find its DTLS transport and find its ICE transport. In ORTC, you build it up from the bottom.
C: So typically what you'll do is create an IceGatherer, and very often, when you hook it up, it's not gathering yet; you hook up the pipeline first, because if you hook it up first, then you kind of guarantee that packets won't leak out the sides, they'll have places to go in the pipeline. So you'll create an IceTransport object and pass that IceGatherer to it, create the DtlsTransport attached to the IceTransport, and then you'll attach a sender or receiver to the DtlsTransport.
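The bottom-up wiring order just described can be sketched with stub classes standing in for the real RTCIceGatherer, RTCIceTransport, RTCDtlsTransport, and RTCRtpSender; the constructor signatures here are simplified assumptions. The point is the order: build the whole pipeline before you start gathering, so packets always have somewhere to go.

```javascript
// Stubs for the ORTC object chain; real constructors take richer options.
class IceGatherer { constructor() { this.gathering = false; } gather() { this.gathering = true; } }
class IceTransport { constructor(gatherer) { this.gatherer = gatherer; } }
class DtlsTransport { constructor(iceTransport) { this.iceTransport = iceTransport; } }
class RtpSender { constructor(track, dtlsTransport) { this.track = track; this.dtlsTransport = dtlsTransport; } }

const gatherer = new IceGatherer();               // 1. create, but don't gather yet
const ice = new IceTransport(gatherer);           // 2. ICE transport over the gatherer
const dtls = new DtlsTransport(ice);              // 3. DTLS over ICE
const sender = new RtpSender('videoTrack', dtls); // 4. sender over DTLS
gatherer.gather();                                // 5. only now start gathering
```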
C: So you essentially construct these objects, rather than having the chain done for you. And then, once you've got them constructed, you can basically do an equivalent of offer/answer, if that's what you want to do. You take the parameters, and you can pass them in any form you want: you could create SDP if you want, or you could just create JSON and send the equivalent stuff. The negotiation can be done in any style.
C: You take the capabilities, and the ICE and DTLS stuff like the fingerprints, package that up and send it to the other side, and then you do an intersection of the codecs: what codecs do I have in common? From that you derive, essentially, the sending parameters. If you read through the ORTC spec, there's actually intersection code, written by Fippo, that describes how that's done. That's been a very popular way of doing the signaling, and it's more similar to the H.323 style.
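The capabilities-intersection step can be sketched roughly like this: each side advertises codec capabilities, and the sender keeps only the codecs both sides support. The matching rule used here (name plus clock rate) is a simplification of what the actual ORTC example code does, and the capability lists are made up for illustration.

```javascript
// Keep only the local codecs the remote side also advertises.
function intersectCodecs(local, remote) {
  return local.filter(lc =>
    remote.some(rc => rc.name === lc.name && rc.clockRate === lc.clockRate));
}

const localCaps = [
  { name: 'opus', clockRate: 48000 },
  { name: 'VP8', clockRate: 90000 },
  { name: 'H264', clockRate: 90000 },
];
const remoteCaps = [
  { name: 'opus', clockRate: 48000 },
  { name: 'H264', clockRate: 90000 },
];
const sendCodecs = intersectCodecs(localCaps, remoteCaps); // opus + H264
```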
C: So, if you look through it, I can give you example code showing kind of what it looks like.
L: Yeah, go ahead. Yes, hi. Is it fair to say that, basically, the way ORTC works is that both sides just need to be configured? How you decide what you want to configure both sides to is up to you; presumably you're going to collect capabilities information from both sides, but in the end they just independently get configured, essentially.
C: Yeah, that's basically true. Of course, you still have the same obligation to configure them consistently. All right.
C: On offer/answer: yeah, if you're just going to do offer/answer, basically what you would do is take these capabilities; there are functions that Fippo has written to convert them to SDP. So you take your capabilities and such, call the SDP function, it generates the SDP, you send it to the other side, you get an answer back, and then Fippo has something that converts the SDP back to parameters, so you can push it down, if that's the style you happen to want. So, for example, in...
C: adapter.js, they have basically a complete implementation of PeerConnection on top of these objects, and they basically just try to mimic what was done in the addStream/removeStream API; there are pieces of it with addTrack/removeTrack. They haven't done the transceiver section, and there are some things we've learned from transceivers about how to make minor changes, because, remember, within ORTC the goal was to actually have complete compatibility with the WebRTC PeerConnection.
C: That's a great question, because the kind of capabilities exchange I've just described is not necessarily all that suited for an SFU case, because the idea is to take the intersection. As an example, if it's just me and you talking, Randell: the fact that you can do three streams of temporal and three streams of spatial, and that I can also send you that, doesn't necessarily imply that it makes any sense for us to do that in a pure one-to-one conversation, right. So in the example function...
C: We don't attempt to intersect that to figure out the simulcast case, because it doesn't make sense to do that. So at least the examples that are written so far use that kind of intersection mostly for the simple peer-to-peer case, where it's easy: you want an audio and a video stream, you take the intersection, and you get that. It's kind of a simple case. But for the simulcast case...
C: Basically, you'd have to decide how many streams you wanted to do. You could still take the intersection, and that would give you the codecs you have available and stuff like that, but you'd have to go in and add things to the intersection to suit what you want to do. So, as an example, you could have an intersection telling you you could send up to five streams, but that wouldn't necessarily mean you want to do that. Yes.
P: There's an even simpler case that kind of interests me, which is talking to a hardware device, a doorbell or something: as the manufacturer writing the JavaScript for it, you know pretty much everything about it. There are actually only, I think, four things you don't know.
P: The ICE password, you don't know that; you might even know the fingerprint, but you probably don't, right; the ICE ufrag; and you probably want to set the SSRC, right. But beyond that, you already know everything; there's no negotiation needed. You built the damn thing: you know what codecs it does, you know how many streams it is. Yeah.
Q: And that makes sense, given the ORTC API, that it would simplify cases like that, and that's great for those cases. What I'm just trying to make sure of is that it doesn't paint other use cases into a more complex corner than they would need to be in. Yeah.
Q
C
C
F
F
So that's one thing: there might be a concern that it's not sufficient to justify the cost of implementing new things. But I think that, at least if we want to move away from SDP, this is natural to do, and we already made that decision somehow, because we introduced objects in WebRTC 1.0, and it's complementary. So, two main...
K
C
Pretty close, I'd say. WebRTC 1.0 has at least 80% of the functionality of ORTC. It also has 80% of the... ORTC has 80% of the bugs of WebRTC, what else, and whenever you file a bug on it, 80 percent of the time you're also filing a bug on WebRTC. So that's something to keep in mind.
C
K
C
C
I can give you my opinion, which is that, you know, developers are a very interesting set of people, because they will go through hell if there's a rainbow on the other side, no matter what's in between. So no matter... people used to say, oh, the WebRTC 1.0 API, it's so horrible, it's got all this SDP, it's so bad, blah blah. But people built a lot of very complicated stuff, and they hopefully make a living at it, and they got it to work.
C
So if you tell them, oh, I've got a much better way of doing it, that does exactly the same thing as the other thing did, then, assuming their code already works, they'll say: well, it's very nice, but my manager doesn't pay me to rewrite all of my code to make it nicer. In general, that's a kind of characteristic I've learned, yeah. Unless the thing is completely falling apart, I mean given up for dead, you're not going to get a manager to let you rewrite it.
F
C
C
ORTC does SVC, which WebRTC 1.0 does not, but we've agreed to extend 1.0 to do the SVC, so with that you would be at 90 percent. The remaining 10% is the ICE gatherer, which allows you to do forking. I think we had a bunch of discussion; I don't recall whether we decided forking was... I don't think anybody said forking was terribly important. Some people may have liked it or not liked it. But you kind of already have 90%.
C
So that's kind of one of the reasons we've been having this discussion about new use cases: my personal experience has been that you have to have something not just cooler, but actually substantive in terms of business opportunity, such that some manager will say yes, and that could only be done this way. Because, you know, if it can be done through 1.0 by hook or by crook, the manager will say: you know, just do it then, do it in 1.0. And, you know...
Q
Speaking exactly to that point: if it's possible to do in 1.0 in a reliable fashion, you're certainly going to have better coverage and better deployment, you know, because people have 1.0 now, and the new stuff they don't necessarily have yet; and if they do, it's maybe iffy in some implementations. So the truth...
Q
C
T
Correct me if I'm wrong, by the way. I see it, as for other features that we've discussed, as sort of an unblocker: getting rid of PeerConnection allows us to split up transport more, it allows us to split things up, which allows more features. But in and of itself, why bother? Is there anything other than "you don't have to do SDP"? Like, what's the big selling point?
C
So we implemented TURN mobility, so the person could go between Wi-Fi hotspots and keep the game going. So that was kind of the mobile focus, and the gaming focus is really where the activity is today. So that was a very specific, kind of gaming, use case. Could that not be done with PeerConnection? No, it could not. It was attempted, and could not be done with PeerConnection. So...
Q
The ICE part of that? Because this is actually, in fact, exactly that use case; in fact I sort of implied that a minute ago. That use case goes directly back to one we'd talked about early on in WebRTC, which was, you know, something acting as a broker for connections, which is effectively forking.
P
Q
You're really putting an offer out there, and you could have multiple people responding to it. Okay, and originally that was something we were actually concerned about. The real question is: how did that end up not being possible in WebRTC 1.0, and what piece is missing that makes it not possible? Yeah.
C
C
It turns out, if you think about it, you're sending out this offer and getting multiple answers. That offer includes ICE stuff, but it also includes fingerprints, so you're basically creating DTLS transports with the same certificate as in your offer. So we had to support forking in the DTLS transport as well. And the thing is, basically, what you have to be able to do is create multiple ICE transports with the same local credentials but with different remote credentials. That's kind of...
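As a plain-data model of what is being described here: one set of local ICE credentials backs several transports, each paired with a different answerer's remote credentials and fingerprint. The object shapes and function names below are illustrative stand-ins, not the real ORTC or WebRTC API.

```javascript
// Illustrative model of ICE forking: one local ufrag/pwd shared by
// multiple transports, each with different remote credentials.
// These objects are a sketch, not real RTCIceTransport objects.
function makeLocalCredentials() {
  return { usernameFragment: "localUfrag", password: "localPwd" };
}

function createForkedTransports(localCreds, answers) {
  // Each answer (from a different responding peer) carries its own
  // remote credentials and DTLS fingerprint; the local side is the
  // same object for all of them, as in the forking scenario above.
  return answers.map((answer) => ({
    local: localCreds,
    remote: answer.iceCredentials,
    remoteFingerprint: answer.dtlsFingerprint,
  }));
}

const localCreds = makeLocalCredentials();
const transports = createForkedTransports(localCreds, [
  { iceCredentials: { usernameFragment: "a1", password: "p1" },
    dtlsFingerprint: "AA:BB" },
  { iceCredentials: { usernameFragment: "a2", password: "p2" },
    dtlsFingerprint: "CC:DD" },
]);
```

The sketch makes the constraint concrete: the local credentials are shared by reference across all forked transports, while each remote side differs.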
C
Q
C
A
C
A
C
C
P
P
To that point, I wouldn't underestimate the surprise you see on the faces of web developers when they first see the PeerConnection API and the SDP. It's about a week before they kind of even start talking to you again. Yeah, like, I know they will do it, and if they're sufficiently motivated they'll get through to the other end, but it costs me every time I have to do it. Yeah.
C
I mean, you generally lose about a week, but these are people who have been doing this all their life, and it's like: hey, a week out of my life, basically, as long as they can do the game there. No, it's amazing what they'll do. You could probably create bogus APIs and they'd learn them, even if they made no sense.
P
G
C
C
Straightforward, without SDP. Generally, what we found is the games developers who have already written to WebRTC 1.0 will continue to do so, and the ones who are greenfield are the ones who use ORTC. If they're just writing a new game, they'll, like, a hundred percent go to ORTC, but if they've been using 1.0, they'll continue to do it with all the new games, because they're kind of used to it already. Yeah, and...
C
Q
U
Q
C
More... I mean, part of the reason why we had all this use case discussion, with all of these use cases, is that we're attempting to develop a substantial portfolio of things that we could sell, yes, beyond just forking, to try to address these additional industries or whatever, to the point where you'd say: yeah, we've got some gaming, but we've also got files, and we've got this and we've got that, to kind of make it a more attractive package, instead of just saying, basically, if you're not a hardcore gamer you're not going to care about it.
C
Q
C
And the reason they're interested in it is because they'll have, for example, three thousand students, 99 percent of whom never send. Yep. So they like the idea of just creating a receiver and no sender, and that dramatically simplifies their programming of that very large kind of conference. I wouldn't say there are a million companies doing this, because it's kind of a relatively restricted niche, and there were only one or two people who tried to, you know, support that use case, but those have seen benefits from being able to instantiate a receiver with no sender.
Q
I would suggest that the biggest advantage, and the biggest issue, in those cases is how you handle the encryption and the forking of all the media and all that sort of stuff. That's where the real pain in that scenario is, not so much the building of a receive-only peer connection, or receive-only, you know, receiver. Yeah.
C
I don't know. The people I've talked to... often the conferences... some of them do involve PSTN, and that's a whole other level of complexity, and, you know, interoperation with SDES and all that. A lot of them seem to be internet-only, which simplifies it a bit. Yeah.
Q
C
Just because they don't have all the additional objects to deal with; particularly if you're receiving new streams, depending on how they do it, it can add a little complexity just to manage the direction of the transceivers. I'm not saying this is like a huge issue, because they have done it; so I wouldn't say that this is enabling something that wasn't doable before, okay, but it's just a little bit easier. Okay.
T
G
G
G
C
...is something that is deployed today in the financial industry. So, you know, basically the way I'm looking at this is: I'm looking at little niches. So you've got gaming; you have some financial people using this, you know, with the file transfer; you have some other things; and hopefully, at the end of the day, when you kind of look at all these things, you've got something substantial enough to convince your engineering manager to let you work on it, basically, right? Sure.
F
C
Yeah, some of the things can be backported; some probably can't. Like, we said we're gonna do the SVC stuff because it's being used, so why not, it's being used in 1.0, you know. I think the things that we really didn't propose to be done in 1.0 were some of the use cases Peter talks about: there's ICE forking, and QUIC, and the, you know, the media over QUIC, which frankly is a use case that maybe begins to engage entertainment. So that's kind of another...
C
Q
Q
A
G
G
Q
T
G
G
C
In terms of new methods on the existing objects that are already in 1.0 (which have, let's be clear, not all been implemented completely), the diff of the new methods isn't that much. You do have the constructors, but, you know, overall it is a new API, which implies that you have to kind of document things for developers, you have to talk to them. You know, it is like adding a new API like any other WebRTC API: there's a whole documentation thing.
C
G
C
You know, there is a fully functional code base out there if you want to look at it; it's called ortclib. It's got all the C++ objects in it already, so you can get a sense. It's already built on the existing Chrome media engine, so you don't need to change anything there, and of course there's also the ORTC factory in Chrome.
C
A
A
F
Q
Q
Q
Q
So, like I said, even with stuff that's previously, you know, in spec or in implementations, there is cost to doing these, especially in doing it. I want to make certain we have enough gain from it, because this already largely exists under the hood in 1.0, or equivalents of it exist there.
T
It does seem like relatively low-hanging fruit, and it unblocks other features that are of interest. It would be nice, at least for me, to get some of this written down: like, this is what you'll be able to do, and here's why you can't do it today; and that allows us to, yeah, get an understanding of what we're buying. Yeah.
P
K
K
C
F
So for me, again in terms of priority, this seems fine, but it's not urgent right now. Yes, according...
B
F
...the working group to get too distracted and to sink too much energy into that, compared to finishing both implementations; that is one aspect. Not that... I think that we will probably continue to dig into WebRTC NV. There are some nice new features, there's QUIC for instance, and once we get there we will probably see that, yeah, we will do that kind of as part of it, and that's fine. Yes.
Q
And I agree, and I think the use cases will help drive the prioritization, right. Writing those down and hooking them up, as in "this use case needs this piece, and this piece, and this piece", will help you, you know, decide on the prioritization, both on the spec side and the implementation side, and that's a good thing, and that'll help everyone. So I think, you know, the piece that will be the most painful will probably be the tests. Yeah.
Q
C
Q
K
So I have a question about features going forward. A prime example would be: I'm excited about readable/writable streams for data channel. Now, both PeerConnection's createDataChannel and ORTC's new DataChannel return the data channel object, right? Are we gonna continue that? Would we add readable streams to both? Yes. And, you know, there's a general...
K
C
Just my opinion is that, again, because developers hate to lose things: you can decide that you only want it in NV, as a method of trying to get them to move, or you can just decide: hey, if it's doable in 1.0, we're gonna do it there, just because that's the easiest way to get it out. In which case you can't have it only in NV; you know, you kind of have to...
C
N
B
F
F
Q
Yeah, I mean, given that there's no support in workers today, by design you don't have to pull the whole old API into workers; you could just do the new API. There are some downsides to that, in that, you know, everyone would have to use the new API, and people who have been using the old API already, as Bernard was speaking to, you know, maybe are happy just keeping what they already know and already have code for, and just having that code instantiated in the worker.
T
K
G
Q
Go ahead with PeerConnection, and I'll say: I can't think of a lot... maybe there's a few things in PeerConnection and all the subsequent APIs that are inherently problematic for a worker. The biggest issue is actually probably on things like getUserMedia and things like that, and how they handle transferring streams between the main thread and the workers. Yeah.
F
I agree, it's mostly a matter of implementation. You might have some objects that say: oh, I'm on a background thread, and then we go to the main thread. But if you're in a worker, that does not work; you need to post a task to a specific thread there. So you need to amend all of that, and that is work. Even that could... and if there's less code to port, maybe it's easier to handle.
Q
L
G
Q
Q
More closely, yeah. And I'll very quickly, because I don't want to derail us any further, say that I've talked to the main wasm architect at Mozilla, Luke Wagner, and he is happy to help with defining wasm worklets that could be used for doing whatever off the main thread. There is an issue with potential garbage collection and needing a JS context.
Q
If you allow these things to allocate JS buffers and so forth... you could use fixed buffers, and you could even define a wasm worklet (he suggested this) which does not even have a JS context and cannot allocate garbage-collected memory. It would just use fixed buffers, or buffers that are passed to it. That would effectively be the equivalent of portable compiled code that could be used, and you could call that from our existing encoder threads and so forth directly.
Q
Well, without needing any fancy setup or allocating a JS context for it or anything like that. So that's something we could look at exploring. So I'm excited about that. I mean, I have some slides where I have outlined some places where we could inject something like that, yeah. So perhaps we should talk about that, or I should get you in touch with Luke. Okay.
Q
Q
C
F
A
A
K
J
P
Q
J
W
G
Okay, so this is kind of an idea I had yesterday. Maybe it's a good idea, maybe it's not, so let me know. Basically, when you presented on worklets, I thought: well, it's cool that high-performance things can be done that way. So I went through and looked through the Chromium code, and I found that there are quite a few places.
G
These are some of the more high-performance ones. There's a long list of things that either use worklets or JavaScript callbacks in a similar way, and these are the ones that are in a place where you would expect things to be very fast, right: it's during some kind of animation, or layout, or painting, or audio. So I thought: well, where could we stick things in the media pipeline?
G
G
There are two versions of my crazy idea: one of them involves streams and one does not. So here's the version that does not involve streams. The idea is that you have worklets you can add to an RTP sender or an RTP receiver. For example, here you can say: all right, I want to be the packetizer, so my little worklet of JavaScript is gonna run off the main thread; it's gonna consume an encoded frame and it's gonna produce some RTP packets.
G
So if I wanted to do my own end-to-end encryption there, I could. And I put a little comment on the side, which is: maybe, when it's executing in that worklet, we could make sure WebCrypto promises resolve immediately, sort of effectively act synchronously, so you could use WebCrypto with that without all the async difficulties.
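The frame-in, packets-out shape of such a packetizer worklet can be sketched synchronously like this. The 12-byte header here is a stand-in for a real RTP header, and the function name is illustrative; a real packetizer would also write sequence numbers, timestamps, the SSRC, and a marker bit.

```javascript
// Sketch of a packetizer step: consume one encoded frame, produce
// RTP-like packets no larger than the MTU. The 12-byte header is a
// placeholder, not a spec-correct RTP header.
const HEADER_BYTES = 12;

function packetize(encodedFrame, mtu) {
  const payloadPerPacket = mtu - HEADER_BYTES;
  const packets = [];
  for (let off = 0; off < encodedFrame.length; off += payloadPerPacket) {
    const payload = encodedFrame.subarray(off, off + payloadPerPacket);
    const packet = new Uint8Array(HEADER_BYTES + payload.length);
    // A custom worklet doing end-to-end encryption would transform the
    // payload here before copying it in; we just copy it verbatim.
    packet.set(payload, HEADER_BYTES);
    packets.push(packet);
  }
  return packets;
}

// A 3000-byte "encoded frame" split at a 1200-byte MTU:
const frame = new Uint8Array(3000).fill(7);
const packets = packetize(frame, 1200);
```

Running this yields three packets: two full 1200-byte packets and a final shorter one carrying the remaining 624 payload bytes.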
G
Similarly, on the RTP receiver side, you could say: okay, when the packets come in, I will handle them in a worklet, and then I will produce encoded frames. So that would allow the application to be in the middle of the media pipeline and customize things if it needs to, but not be on the main thread, and still be performant. You could also do it with encoders, if you wanted to do your own encoders, decoders and jitter buffers; I know that's a less important use case.
G
So that's why I emphasized the one that could help with the end-to-end encryption. And then, if we added one more method, this would actually give you a rather hacky way of getting custom transports. So you could, for example, say: okay, set the packetizer to consume the encoded frame, but actually go and stick that in the QUIC transport; and when I get something from the QUIC transport, I'll inject it into the RTP receiver.
G
So that's the "inject worklets directly" version: by calling, basically, "stick the worklet here" on the RTP sender. That's the key part here. And here I also included... I figured that you might want the original method, in case the custom method wants to call the original, so it doesn't have to re-implement everything: you can just take what was packetized and then do something extra on top. And then there's the version with a readable stream.
G
This is like combining all the new stuff together: worklets and streams and the media pipeline, all in one. Okay, so go with me for a little bit. First, we define a thing I'll call an RTC worklet, which is basically: you take a worklet and a readable stream, they run off the main thread, and they produce a new readable stream. Okay, so that's your core thing you're working with. And now you go to some of our objects that are in the pipeline and you add readable streams.
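The "RTC worklet" primitive being proposed (a worklet plus a readable stream in, a new readable stream out) can be modeled in a self-contained way like this. The `applyRtcWorklet` name is hypothetical, and plain arrays stand in for ReadableStream objects so the sketch stays synchronous; the real thing would use ReadableStream or TransformStream off the main thread.

```javascript
// Synchronous model of the proposed RTC worklet: consume one stream of
// chunks, produce a transformed stream. Arrays stand in for
// ReadableStreams to keep the sketch self-contained.
function applyRtcWorklet(inputChunks, transformFn) {
  const output = [];
  for (const chunk of inputChunks) {
    // A worklet may emit zero, one, or many chunks per input chunk,
    // e.g. one encoded frame may become several RTP packets.
    output.push(...transformFn(chunk));
  }
  return output;
}

// Example "worklet": a toy encoder that tags each raw frame.
const rawFrames = ["frame0", "frame1", "frame2"];
const encodedFrames = applyRtcWorklet(rawFrames, (f) => [`encoded(${f})`]);
```

The same helper could model a depacketizer (many packets in, one frame out) by having the transform buffer its input and emit only on frame boundaries.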
G
So you can say: okay, a media stream track has frames; an RTP sender has encoded frames and RTP packets; an RTP receiver has RTP packets and encoded frames. And then you have some methods where you can set an altered readable stream, one that's been transformed, to be a new source of encoded frames or packets.
G
So here are some examples of what can happen if you do this. If you wanted to do a custom encoder or decoder, you could say: all right, RTP sender, I'm going to take frames from the media stream track, I'm going to transform them in an RTC worklet off the main thread, where my encoder exists, and then I'm going to use that as the source for the encoded frames, which will go right into the rest of the pipeline as-is. Same on the decode side.
G
If you wanted to have a custom transport for media, such as QUIC, you could say (and this is the key part): I will make an RTP sender, but I will pull the encoded frames out; I will transform them into a series of messages, or streams, or whatever you want to call it, serialized buffers; and I will provide that as a source of streams for the QUIC transport, which will then send them off as separate streams. And that can all happen off the main thread.
G
G
So if you had this primitive of an RTC worklet, which allows you to modify a readable stream off the main thread, and then you added these readable streams on these objects, and these places where you can inject the readable streams into these objects, we could have all of these use cases, but without any new objects other than the QUIC transport, obviously. We wouldn't have to break out the encoder from the RTP sender/receiver, or the RTP transport.
T
T
G
G
T
Q
So I can speak a little to that, having talked to Luke. I'm not an expert on wasm, but I did discuss this specifically, with regards to WebRTC, with him. This gets back to the thing I was talking about, with the JS contexts and garbage collection and so forth.
Q
If you are allocating and letting go of JS memory from these worklets, you will eventually get garbage collection occurring, which will cause long delays at random points while in your worklet. If you can avoid that by using fixed buffers for input and output, passed in or pre-allocated as part of the worklet or whatever, then you can, you know... you certainly can avoid the garbage collection problem, and you can potentially also avoid needing a JS context entirely.
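The fixed-buffer discipline being suggested looks roughly like this: every buffer is allocated once, up front, and the per-frame path only reads and writes into them. This is a sketch; the gain processing stands in for real signal processing, and the 128-sample frame size mirrors the Web Audio render quantum.

```javascript
// Sketch of GC-free worklet processing: pre-allocate input/output
// buffers once, then reuse them for every frame, so the hot path never
// allocates JS memory and never triggers garbage collection pauses.
const FRAME_SIZE = 128; // samples per frame, as in AudioWorklet
const input = new Float32Array(FRAME_SIZE);
const output = new Float32Array(FRAME_SIZE);

function processFrame(gain) {
  // Only reads and writes into the pre-allocated buffers; no `new`,
  // no array literals, nothing for the garbage collector to track.
  for (let i = 0; i < FRAME_SIZE; i++) {
    output[i] = input[i] * gain;
  }
}

input.fill(0.5);
processFrame(2.0);
```

A worklet written this way is also a step toward the "no JS context" variant mentioned above, since fixed buffers are exactly what compiled wasm code can operate on directly.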
G
Now, on the two parts of that: at least in the implementation in Chrome, when these worklets are executed, they can be executed in their own, what's called an isolate, which means it has its own little JavaScript world. It has no connection to the outside world, and you can specify: okay, it only gets these buffers, and that's all it works with, right.
Q
K
...for buffers, right. I know I said everything should be readable streams in NV, but if we always have a whole frame, that might be the one case where a single buffer might be better, because, you know, in this case the browser will be the one feeding the JavaScript the frame, so it can control that part and maybe make that something...
Q
The biggest concern I have with the worklet stuff... you were asking about what the overhead is of implementing the current stuff in worklets, so I'll go back to that question. The biggest issue is that there are certain things that the worklets are not going to be able to do. They are not going to be able to access, at this point, hardware encode/decode sorts of resources for codecs; though this will probably eventually change.
Q
Wasm has no support for SIMD-type operations, so video codecs, while possible, would probably not be a great idea; for audio it's much more possible. It might be less efficient, but not problematically so. And obviously, for other things that are less signal-processing oriented, they would probably not be a problem at all in terms of performance. I mean, it might be slower than a native C++ implementation or Rust implementation, but the difference would probably not be important or significant for those other things.
F
Q
F
Documenting this is great. The second thing is: yes, there's a model that might work. Will it work? We don't know, and we should monitor, or even push, the people that are doing audio processing using wasm to continue their investigation. And if it proves that it's working, in terms of efficiency, in terms of durability, in terms of repeatability of deployment on different devices, if it's working for Web Audio, we might also check for video frames as well for processing.
F
G
Q
Q
P
Q
K
The spec to look at here, I think (sorry to interrupt), is the audio worklet spec, because that's the one that's closest to us: it's media, and they've already solved all this. I added a link on slide 95; you can go look there. The key to it is to stay off the main thread entirely, right, and then you have these post messages, also in the audio worklet spec, that let you send control messages back and forth. So, for here, where you have mediaStreamTrack.frames...
G
K
So, yeah, you want to set up an environment... you know, packets come in from the network on a background thread already; you don't want to go to the main thread at all. You want to go straight into this audio worklet JavaScript code, let it do its thing, and then not touch the main thread at all, maybe just send some control messages. Oh yeah.
F
I fully agree with Randell on the fact that we are saying: hey, let's define a pipeline in JavaScript once, and it will run, and run well. Except that at some point there may be the problem of, like, packet loss, in which case you might create more keyframes. So you need to react to that and handle that properly.
F
It might create some CPU bursts because of that, and that might have other effects, because if you need to change your options from the JavaScript on the main thread, by doing postMessage things, for instance, it might delay and derail a little bit the feedback loop that you're actually trying to implement. That's what... and...
Q
F
Q
Q
Take, let's say, an encoder wasm worklet that you can use in this place: you need to define more than just "it takes a bunch of bytes in and throws a bunch of bytes out". Ninety percent of the work in doing this will be defining that control interface for that encoder. You can pass arguments into the worklet; arguments at creation are one thing, but you have to anticipate future arguments and so forth. Right now this is all hidden deep inside the code as an implementation detail.
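That point can be made concrete with a sketch: the bytes-in/bytes-out call is one method, and most of the surface is the control interface around it. Everything below (the class name, the method names, the config fields) is hypothetical, invented for illustration; no such API exists.

```javascript
// Hypothetical control surface for a pluggable encoder worklet. The
// encode() call is the "bytes in, bytes out" part; configure() and
// requestKeyFrame() are the control interface that would actually
// need to be specified.
class EncoderWorkletShim {
  constructor() {
    this.config = { bitrate: 1_000_000, keyFrameInterval: 60 };
    this.keyFrameRequested = false;
  }
  // Control interface: reconfigure in response to feedback such as
  // bandwidth estimation changes.
  configure(options) {
    Object.assign(this.config, options);
  }
  // Control interface: react to packet loss by forcing a keyframe.
  requestKeyFrame() {
    this.keyFrameRequested = true;
  }
  // The bytes-in/bytes-out part everyone thinks of first. This toy
  // version passes the frame through and tags it.
  encode(rawFrame) {
    const isKeyFrame = this.keyFrameRequested;
    this.keyFrameRequested = false;
    return { data: rawFrame, keyFrame: isKeyFrame };
  }
}

const enc = new EncoderWorkletShim();
enc.configure({ bitrate: 300_000 }); // react to congestion
enc.requestKeyFrame(); // react to reported packet loss
const out = enc.encode(new Uint8Array(10));
```

Anticipating future arguments, as the speaker notes, would mean versioning or extending the `configure` options bag rather than the encode path itself.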
Q
F
I think we could just step back a little bit there, in that there are some steps that we could identify to make progress: requirements; validation of the model with different approaches; and then we could refine our model, find some problems, identify them, fix them, and so on. So we can organize that already; that's pretty nice. And if you can document these requirements and these initial steps, that would be great, I think.
Q
There's considerable opportunity here. Okay, I think using worklets in this way, worklets in this fashion, is potentially interesting and enables some interesting cases. I think we need to investigate this with the wasm people and find out what we can do and what we can't, what the overheads are, and what the constraints on interfaces are: like, you know, can we use streams or not, and so forth?
T
T
F
A
G
A
Q
Q
I think we should get talking to Luke, and I should have Luke talk to whoever else is interested in this, along with some of the people who were doing audio worklet, like Paul Adenot (we've got the audio spec) and maybe Karl Tomlinson, who's implementing it for us, and whoever on your side, and whoever else is interested from Apple or wherever. And I think we should at least talk about it. Sounds...
P
Q
G
K
V
S
D
A
A
V
Okay, so this is the chairs' attempt to write up what we agreed, and how we move forward from here, and it's open for some discussion. So if you see or realize something is missing, say so, and we might add it if we think it's good. There are two slides now: one on 1.0 and family, and another one on NV. So, for 1.0 and family, we agreed to create extension specs and shift identity out from the main spec, and that one will have editors EKR, Martin Thomson and Cullen.
V
J
C
J
D
V
So then, for streams and extra knobs on data channel, we agreed that Bernard, Randell and Jan-Ivar, the Mozilla team, who started investigating that, will make a proposal; I assume that would be in another extension spec. Yeah, that's another to-do. For stats, I think the only thing we concretely agreed was to look into the ICE stats. Is that correct? Yeah, okay.
V
V
V
V
We also think we should continue the work on flex ICE; we have a lot of support for that, and, Peter, I assume you will take responsibility for that. There is also interest in workers and worklets, and Jan-Ivar, Randell and Harald agreed to investigate. And on end-to-end security: you would do a write-up of that part; I mean, you presented something this morning. Sure.
F
V
N
F
V
F
Q
K
One moment, yes. I think the confusion is that there are actually three things. Starting at the bottom: there's raw media using worklets; then there's other network data things using worklets, like encoding and decoding; and the third one is the PeerConnection-like objects being owned in service workers and shared.