From YouTube: WebRTC Working Group Virtual Interim April 4, 2017
A
Welcome, everybody, to this interim. We're hoping to make progress on outstanding issues within WebRTC-PC and, if time allows (I think we have one issue each in Media Capture Main and WebRTC Stats), we're hoping to get to those too. As usual, there will be updated drafts following the meeting. Stefan, you want to talk about going to CR? Yeah.
B
I can talk about that. As we have been announcing a few times now, we have the target to submit the request to transition to CR before the end of this month, so we need to deal with the open issues, at least those that we need to deal with; a lot are of an editorial nature. We hope this meeting will be fruitful. I also want to remind you about the thing I sent out on the list, I think last week: we would like to get feedback from implementers on stuff they are not planning to implement, so we can highlight that in the CR. Basically, the idea is to get that in front of people's eyes, and if people are really interested in something that we see a risk of not getting implemented, they can start pushing implementers and browser vendors. Okay.
C
It seemed to me that that's not something we have to worry about having two implementations of, and thus it would not be one of the things that we should be going through and trying to be concerned about. Is that correct? I think, you know, I'm not quite sure I understand which features we try to have two implementations of and which ones we don't, like this one.
D
So that again, I guess, is also up to us to decide. Typically, for a browser-targeting API, and you know the title is "Real-Time Communication Between Browsers", I think you would expect to find implementations in widely deployed browsers, since that is the main target. Here, though, we are talking about optional features, so I think there is probably more leeway for negotiating among yourselves first, and arguing it with the Director. So that's part of the things we probably need to get consensus on, specifically.
E
The requirement for implementations isn't a requirement on implementations; it is a requirement to ensure that the spec is actually comprehensible. So if two people understood it, then it is comprehensible, and, as they say, if nobody implements it, then we don't know if it's comprehensible at all.
D
Right, but again, at the end of the day the principle is mostly for the working group to decide what criteria it wants to use for moving on from TR, and then, once these criteria have been decided, they get presented to the Director, and, assuming the Director approves them, then whenever these criteria are met we can move to Proposed Recommendation.
D
Implementations nobody cares about would certainly be nice to avoid. But again, here we are talking about a specific optional part of an obviously hugely, widely deployed API. So, you know, I think there is lots of leeway to decide that yes, it's okay if we only have a smaller deployment or a prototype implementation.
A
Okay, so we have some WebRTC-PC pull requests and issues, and also, if we have time, Media Capture and WebRTC Stats, with one PR each. All right, so let's talk about the WebRTC-PC pull requests. The first one is "NetworkError not defined; it might not be needed". NetworkError is mentioned in potentially three places.
A
What event do you fire? And the second is: when you have an error occurring in the data channel, say above the transport, what do you do? An example is if it's closed with an error, or if there aren't enough stream IDs. One of the things you'll find, if you look in the issue, is people discussing a comparison to WebSockets. In WebSockets you have a simple error event, but a more complex close event with wasClean, code and reason attributes. Now, in WebRTC...
A
We now have the RTCErrorEvent, which carries multiple attributes, alongside a simple close event. So there's a contrast there between what we're doing in WebSockets and in WebRTC, and I'll talk about why that difference exists and see if people agree that we should maintain it or not. A little bit about what WebSockets does: it has a readyState as well, which transitions to closed when the WebSocket is closed.
A
So you have the simpler error event if the WebSocket connection has failed, and then on close you have this close event with wasClean, code and reason. Now, I believe the reason WebSockets was designed this way is that it's a client-server protocol, so you have a code that might come back from the server and an associated reason. In contrast, the data channel is peer-to-peer, so you might have a code that might be delivered in the SCTP transport, but not necessarily above that.
A
So here's what we're proposing and trying to rough out in the PR. When the data channel's transport closes with an error, we're proposing to fire the new RTCErrorEvent, and we're creating an errorDetail enum value of "sctp-failure" and also adding an attribute called sctpCauseCode, which is taken from the SCTP cause codes in the operational error. The message would then be set to the SCTP cause-specific information, possibly with additional text. And then, when the error occurs above the transport...
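As a rough sketch of how an application might consume the proposed event: the shape used here (errorDetail, sctpCauseCode, message) is taken from the proposal above and is still under discussion, so this is illustrative only.

```javascript
// Illustrative handler for the proposed RTCErrorEvent on a data channel.
// The event fields (errorDetail, sctpCauseCode, message) mirror the
// proposal in this discussion; they are not final API.
function describeDataChannelError(event) {
  switch (event.errorDetail) {
    case "sctp-failure":
      // sctpCauseCode would carry the SCTP cause code from the
      // operational error; message carries cause-specific information.
      return `SCTP failure (cause ${event.sctpCauseCode}): ${event.message}`;
    case "data-channel-failure":
      // An error above the transport, e.g. running out of stream IDs.
      return `Data channel failure: ${event.message}`;
    default:
      return `Error: ${event.message}`;
  }
}
// In a browser this might hang off the channel's error handler:
//   dc.onerror = (e) => console.warn(describeDataChannelError(e));
```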
A
The suggestion is to fire the RTCErrorEvent with an errorDetail of "data-channel-failure". Now, the alternative would be potentially putting a reason attribute in the close event, as opposed to (or possibly in addition to) the RTCErrorEvent, if people have opinions on whether there's any need for additional close event attributes. Today we just have a very simple close event for the RTCDataChannel. Does anybody have an opinion on that?
H
I mean, this is really a judgement call. In general, the overall intent was for the data channel API to be approximately similar to WebSockets; not necessarily entirely identical, especially in the startup and shutdown parts, though closer in the latter. But once you've created one, it should, you know, be roughly the same. So having the shutdown be the same or similar makes it easier to write code that could use WebSockets or data channels interchangeably. So there's a...
I
But, that said, when you write code and you want to use both WebSockets and data channels, even if we try to make it more like WebSockets, I think to make something that behaves reasonably you need to be prepared to handle errors for both kinds of transports. So I don't think someone can get away with just one error-management function; I think you need two, so I would make them almost similar.
A
I think the only difference, like I said reading this, my sense is: if you have a WebSocket failure, you get the error event, so you have to handle that in either case. The only difference would be that in WebSockets you get a little bit more information on the close, in particular whether it was a clean close or not, and whether there's a code or reason associated with that close. But I think you would know either way if it was an error.
J
Alright, so at the last interim we discussed having a way to configure the packetization interval. There was discussion about also having the ability to query the capabilities, like seeing a list of supported packetization intervals. We decided we're not going to do that; the feeling was that the ability to provide a hint, where the browser might not necessarily use this value but use whatever is closest, is sufficient.
J
So there you can see the content of the PR. The description there, about using the duration if possible and otherwise using the closest available duration, is the same logic that exists in JSEP for processing the ptime attribute in SDP. So basically this will behave exactly like the attribute in SDP, but it allows the application to specify it on the sender side, and if there was a ptime in the remote description it will override that value.
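The "use the requested duration if possible, otherwise the closest available one" rule can be stated as a small helper; the list of supported packetization times here is hypothetical and would come from the codec implementation.

```javascript
// Pick the supported packetization time closest to the requested hint,
// mirroring the JSEP rule for the SDP ptime attribute described above.
// `supported` is a hypothetical capability list (in milliseconds).
function closestPtime(requested, supported) {
  return supported.reduce((best, p) =>
    Math.abs(p - requested) < Math.abs(best - requested) ? p : best);
}
```

So a hint of 25 ms against supported times of 10, 20, 40 and 60 ms would yield 20 ms.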
J
The audio level is currently specified to be updated whenever an RTP packet is received, but there isn't any guarantee that that will be in sync with the audio currently being played out by the browser. So you could end up with, you know, audio level indications that are updated ahead of when the audio is actually heard.
J
This seemed like a problem. I couldn't see any way an application could get around it, because there's no attribute that, for example, indicates the current delay between the contributing source timestamps and when the value will be played out. So my suggestion is that somehow we change "updated when a packet is received" to "updated at playout time".
J
We need to talk more to work out the definition of playout time, but that's the general concept, and in the PR I just sort of brought up an example of how this would look. As you can see here, quote: when the first audio frame contained in a packet is delivered to the receiver's MediaStreamTrack for playout, the objects are updated. So this is the thing that may need some wordsmithing.
C
I was going to say, I agree with the problem, but if we do something like this, I think we should also keep the old definition of what we had, so you sort of have both. This is because, depending on how you're using statistics, you might want to understand what's happening on the wire, or what's happening after your jitter buffer.
J
Okay, well, yeah, in WebRTC Stats (I forget if the PRs are merged yet) there is also a suggestion to have contributing source objects. So we could have the stats version be updated as packets are received, and the one on RTCRtpReceiver, which is expected to be used to actually, you know, drive the UI and application logic, be synced with playout. That would seem reasonable to me.
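For reference on what the audio level value is: the object exposes a linear double in [0, 1], while the RTP header extension (RFC 6464/6465) carries the level as 0 to 127 in negative dBov. A conversion along these lines is commonly used; this is an illustration, not the spec text.

```javascript
// Convert an RTP header-extension audio level (0..127, in -dBov) to the
// linear [0, 1] value exposed on contributing/synchronization sources.
function rtpLevelToLinear(dBov) {
  if (dBov === 127) return 0;        // 127 represents digital silence
  return Math.pow(10, -dBov / 20);   // 0 dBov maps to 1.0
}
```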
K
So, one comment. Hi, this is Jan-Ivar; sorry I missed this last time. I'm a bit concerned that this method is synchronous. That would mean that an implementation, if it's doing media on a separate thread, would have to always send a runnable to the main thread to update this information, just in case someone were to call this method. Unlike, for instance, getStats, which is asynchronous: there the implementation has more leeway to only do this effort when the JS asks for it, asynchronously.
H
The only alternative to that would be to store this in some type of atomic that could be accessed from both threads safely, so you don't send runnables back and forth, which is doable but more complex, especially when you have lots of CSRCs and so forth.
K
It has come up before where it's been a bit controversial to have runtime-observable things whose state actually changes within a single event loop iteration. You can, like, busy-wait in JavaScript, listen for data and see changes; that's true for timers but for very little else. So I would consider making this an asynchronous method.
J
That hasn't come up yet; I assumed that it still would not be updated multiple times in a JS event loop iteration. We may be getting too far into implementation details here, but the implementation we're currently considering for Chrome is: the first time it's accessed within an event loop iteration, essentially retrieve it using a mutex from the other thread, and queue an event to set a flag that will cause it to be updated again...
J
...the first time it's called in the next event loop cycle. So I don't think it's very hard to have an implementation that is both efficient and respects the JavaScript single-threaded run-to-completion model.
H
The important thing is you'd have to, you know, have a reference to this, so you know where to get it from; so maybe it's not so much a big deal. I mean, Jan-Ivar, you can chime in on what you think: whether we're wrapping this in a mutex or using an atomic, it's the same difference, and then we are safe, and the biggest thing is...
E
Also, in Stats we have some variables that are intended for measuring the playout delay, for both audio and video; I think they are pull requests now. And again, the fact that the variables have got "estimated" in their names indicates that just figuring out what number to return isn't trivial, so I'm a bit afraid it could be more fun than we thought.
J
This should be a little less controversial. The getContributingSources method returns objects representing both SSRCs and CSRCs, and it wasn't completely clear to me when the SSRC objects are currently updated. The spec currently says: if the RTP packet contains no CSRCs, then the object corresponding to the SSRC is updated. That seems to imply it's only updated if there are no CSRCs, but cases have come up where an application may want the SSRC information even if a packet contains CSRCs.
J
Well, if you were going to map CSRCs to speakers in the UI, you would already need some out-of-band mechanism for associating CSRC identifiers with speakers, so that seems out of scope for the API. But it certainly may be convenient to have an indication on the object itself of whether it represents a CSRC or an SSRC. I was thinking of...
J
Okay, so I was going to just write a PR to change the type of sendEncodings to be a dictionary that only includes rid, with the assumption that the only intended use of sendEncodings was to affect the number of rids and their identifiers. But while doing this, I noticed that the simulcast example currently sets encodings with both rids and encoding parameters.
J
Ignore the fact that the member is called "encodings" rather than sendEncodings, and that it's scaleDownResolutionBy instead of scaleResolutionDownBy; I think that was just written against an earlier version of the spec. But clearly, at some point, the intention was that if you add a transceiver with these encoding parameters, it will both affect the result of the generated offer and, once an offer/answer exchange is done, we'll start using these parameters. So it's sort of a combination of...
J
...how addTransceiver currently works and RTCRtpSender.setParameters. So, next slide. The question is whether we want to keep things the current way, where sendEncodings is only used to affect what rids get generated, or make it a combination of what it currently is and RTCRtpSender.setParameters. That would be sort of an added convenience for the application, because otherwise they would need to do a full offer/answer exchange and then getParameters/setParameters; instead, you can just write, you know, the simple simulcast example on the previous slide.
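For concreteness, the kind of init dictionary under discussion is plain data. Whether members beyond rid (such as scaleResolutionDownBy) are honored at addTransceiver time is exactly the open question here, so this is a sketch of the combined behavior, not settled API.

```javascript
// Build an addTransceiver init for three simulcast layers. The rids
// drive SDP generation; whether the scaleResolutionDownBy values also
// take effect directly at addTransceiver time is the question above.
function simulcastInit() {
  return {
    direction: "sendonly",
    sendEncodings: [
      { rid: "f" },                              // full resolution
      { rid: "h", scaleResolutionDownBy: 2.0 },  // half
      { rid: "q", scaleResolutionDownBy: 4.0 },  // quarter
    ],
  };
}
// In a browser: pc.addTransceiver(track, simulcastInit());
```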
A
Yeah, my recommendation would be to allow it to function as an early setParameters, but with the same restrictions: like, you shouldn't be able to write something that's read-only, I think. And of course the rids would still be the only thing that influences the SDP, but you could kind of avoid another call that way.
J
Yes, that's what I'm thinking, and currently there's a send encodings slot on RTCRtpSender which isn't used anywhere. So if we did go this route, you could just imagine the setRemoteDescription steps saying: if this completes the negotiation of a transceiver, then, you know, follow the steps for RTCRtpSender.setParameters with the contents of the send encodings slot, etc., etc.
K
This is Jan-Ivar. We don't implement addTransceiver yet, and since we're talking about sendEncodings here, I just want to double-check that this will be compatible with the simulcast implementation Firefox has right now: we have addTrack, so you can call addTrack and then immediately setParameters before doing an offer/answer exchange.
J
Maybe that's a separate question. I don't think setParameters is very well specified right now, but in previous discussions I thought we had talked about only being allowed to set the parameters after a remote description has been applied, because that sort of negotiates the envelope of parameters. But this isn't specified anywhere I can see currently. So...
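The post-negotiation flow being referred to is the usual read-modify-write on the sender. Sketched here against a sender-like stand-in so the sequence is visible; the restriction to after-negotiation is, as noted, not currently written down anywhere.

```javascript
// Read the (negotiated) parameters, adjust within the envelope, and
// write them back. `sender` stands in for an RTCRtpSender; the helper
// itself is illustrative, not part of any spec.
async function setMaxBitrate(sender, bps) {
  const params = sender.getParameters();   // current envelope
  params.encodings[0].maxBitrate = bps;    // tweak one encoding
  await sender.setParameters(params);      // apply atomically
  return params;
}
```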
A
Okay, so another question is: say you call addTransceiver, how do you know how many simulcast streams you can send? Because you could, for example, add a bunch of rids and go over some internal limit. How do you know what that is? And yeah, so you could presumably throw, like, an invalid-parameters exception, or similarly the promise could be rejected in transceiver.sender.setParameters, as in existing implementations.
A
Some of the things you might want to know would be, for a given codec, the maximum number of simulcast streams that can be sent; if that was, say, one, you'd know that simulcast is unsupported for that codec. There's also a question of whether, in some implementations, there is a limit on the aggregate simulcast streams that can be sent, as opposed to just the ones within a given codec. So one idea we've kicked around for this is whether you need a maxSimulcastStreams attribute in the codec capability that tells you, for a given codec.
A
The problem is, with what we've been discussing, right, you call these things and there's no, like, error-level detail; you just basically get an error that says it didn't work. And in practice, supporting these encoding parameters, a lot of things can go wrong, so it's not always easy to figure out exactly what you did that it didn't like. Okay.
C
I don't really have a strong opinion on what we should do with this. I'm pretty sure that if we do add this, we're going to need to write it in a way that, at best, it's not a guarantee that something will work; it's a hint. Maybe it's a guarantee that if you go over this number it won't work, or something like that. But I think the most important thing will still be to clean up the errors for when it doesn't work.
C
But I think I would agree with your statement that it is very difficult to debug this. And it's not just a debug issue: often this development happens in real time, in production, because people have different machines with different capabilities and hardware and all the things mentioned. So I think we should try and deal with this somehow or other, so that we can get good information about...
B
However, the Chrome counters say that the legacy APIs are used a lot, and basically my question, looking at DOM perhaps for precedent, is: is there any precedent on what to do? I mean, we got the input that most specs have bits of APIs that aren't implemented everywhere without being taken out of the spec.
A
So I just want to talk a little bit about DTLS failures. Today, the DTLS transport object has an onstatechange event handler and a DTLS transport state of failed. So you know when the DTLS transport has failed, but there is no onerror event handler, and the question is... In particular, we've dealt with some issues relating to mismatched crypto suites. For example, a browser offers only ECDSA, but you have a peer that's a mobile app, based on old and moldy code, that only supports DTLS 1.0 with an RSA certificate.
A
So you can get a crypto suite negotiation failure, and the question is: is there any way of figuring out that that just happened? Currently there isn't. Now, there are a number of issues with security, and on the next slide I have the text from WebSockets. Basically it says you can't convey any failure information to scripts that would allow the script to distinguish any of the following situations; so really any kind of failure: hostname can't be resolved, packets can't be routed, server refused the connection, failed TLS handshake, failed opening handshake...
A
Any of that. So you're not supposed to provide information that would allow you to discern any of those. And here's the rest of it; this is where, again, they use a connection close code that's really the same for any of those. So the question is whether, with this same kind of logic, it's possible to give any information about the reason why DTLS failed, or whether you just, you know, look at the state, figure out that it happened, but have no idea why.
E
So yes, there is a mountable attack there, which is why those failures don't count: if we tell a would-be attacker the difference between "there is no host", "the host doesn't have that port open", and "the port is open but what's listening is not a TLS endpoint", that leaks information.
J
But to even get to the DTLS handshake with WebRTC, you need to get ICE connected, so it seems like the attack vector is a lot smaller. You can't use it to get information about the attackee's local network environment, because nothing on their local network is going to be an ICE server.
A
Okay, is there any sentiment about... Well, we can figure out later where the errors should be thrown from, but I'm going to move on for now.
B
Okay, so moving on to Media Capture Main. We've talked in the past about the fact that an iframe needs to be explicitly delegated the right to use camera and microphone, and the path we went down was to use this allowusermedia attribute, which parallels what is used, for example, in allowfullscreen and some other places. That is what we have in the spec right now, but there is a new proposal to base this on the new Feature Policy specification and add individual attributes for camera, microphone, and even speakers, and I...
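Under the Feature Policy proposal, delegation would move from a single allowusermedia boolean to per-feature names in the iframe's allow attribute. A sketch, noting that the feature names "camera" and "microphone" are the proposed ones and still tentative:

```javascript
// Set the proposed per-feature `allow` attribute on an iframe element.
// Feature names follow the proposal discussed here and may change.
function allowCapture(iframe, { camera = false, microphone = false } = {}) {
  const features = [];
  if (camera) features.push("camera");
  if (microphone) features.push("microphone");
  iframe.setAttribute("allow", features.join("; "));
  return iframe.getAttribute("allow");
}
```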
H
So I have a question about what "camera" and "microphone" would mean. The problem we get into here, once we start detailing things out, is that the list can grow because of variations on what you're referring to. For example, does "camera" include the possibility of screen capture or window capture, and so on? And what if there is a new type of capture; would "camera" cover it? I don't know that it would really apply.
D
I think that's by design, and the design can be criticized. Again, I think the Feature Policy spec is still early in its lifetime, but the idea is that, by design, embedded content should get as few advanced features as possible, and as a result you would want to grant access to these features explicitly and narrowly. So yes, if you wanted to grant screen capture access, you would have to have the right procedure for that.
H
I mean, if we're going to do this, we should make sure we have the full list, okay, and try to think about what might occur in the future. Now, obviously, you can always add entries, and if someone has written out this little list and didn't include some new thing that came out later, then you don't get it, which is probably the right reading for a future policy, right?
K
So the issue in Stats is: you enumerate stats dictionaries by type, and for inbound-rtp and outbound-rtp there's a boolean, isRemote, in the dictionary itself that you have to check, so basically four kinds of dictionaries are covered by two types. And that's a pain, because there's basically no way to enumerate these without checking that extra boolean, and worse...
K
...if you forget to check isRemote, you might be reading the remote data and think it's the local data. Also, over time the spec has grown the dictionaries: they started out symmetric and now they're not; there are members that are present or absent based on whether you're looking at the remote or the local one. In fact, the getStats example in both WebRTC-PC and the WebRTC Stats spec actually fails to test...
K
...whether isRemote is set, something I noticed after this PR, so those examples have been wrong and could be looking at the RTCP-derived remote data instead of the local data. So instead, the PR introduces two new types, remote-inbound-rtp and remote-outbound-rtp, in order to basically have more explicit definitions of the actual members you would see, and there's a lot of commonality that's been pushed into base classes.
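The enumeration pitfall and the fix can be seen side by side in a small filter over a stats report (getStats() resolves to a Map-like report); this is a sketch against the shapes described above:

```javascript
// Today: local inbound-rtp entries can only be found by also checking
// the easy-to-forget isRemote boolean.
function localInboundRtp(report) {
  const out = [];
  for (const stats of report.values()) {
    if (stats.type === "inbound-rtp" && !stats.isRemote) out.push(stats);
  }
  return out;
}
// Under the PR, the remote data moves to its own "remote-inbound-rtp"
// type, so the isRemote check disappears entirely.
```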
K
So I just jotted down some names here that you can see on the slide. Basically, what I'm showing here is only the differences: so RTCInboundRTPStreamStats would contain framesDecoded. The other benefit of mapping out the differences is that the very amorphous, circular-sounding associateStatsId can now be called remoteId, and it would look up the remote outbound RTP stream stats in this case; the document also reads a lot better.
K
I think they won't have a green column like here, but the definition of remoteId in the spec follows this directly: instead of pointing to an amorphous RTCStats, it looks up the actual type that you're linking to, so you can glean the structure directly from the WebIDL, which I think is nice. In fact, we used to have an example...
K
...example number one actually used to be a sort of explanation of this relationship, and I removed that in the PR because it seemed redundant now. So, to go through the dictionaries: the RTCRemoteInboundRTPStreamStats will then contain a localId, which points back in the opposite direction from remoteId, and a roundTripTime, which is unique to, you know, the RTCP-gathered data. They both inherit from the same common dictionary, RTCReceivedRTPStreamStats.
K
Conversely, on the outbound side, RTCOutboundRTPStreamStats would contain remoteId, targetBitrate, framesEncoded and totalEncodeTime, which are the ones that I think are only on the sender side, and then the remote outbound RTP stream stats would not have those, but it would have everything else.
K
Right, there are two ways to get to the RTCP-derived data: one is to enumerate and use isRemote, but the other is to use the remoteId. At least, Firefox encourages remoteId, and we actually implement remoteId, which was the old name for associateStatsId. So it's actually a backward-compatibility plus for us to go back.
back.