A
See the meeting notice: only people and companies listed there are allowed to make substantive contributions. So, welcome to an interim meeting. This one will be focused on testing. We're going to talk about the implementation status of WebRTC 1.0, go over the WPT issues and PRs, and talk about some principles for test design. Fippo will go over some example tests, we'll talk about cross-browser testing using WPT — that will be Lennart — and then Alex will give us an update on KITE.
A
An announcement about the face-to-face: we are going to have it in Stockholm, at the Google offices, June 19 to 20. We will support remote participation for those who can't show up. I think all this has been announced on the list, but just to make you all aware, so you can make travel arrangements if you want to come in person.
A
Okay, the WebEx info hopefully went around, so you've got it. Okay, so we're going to talk about basically what we said we would. The first topic is implementation status — is Dom on the call?
A
Basically, the WPT tests dashboard doesn't claim to have useful metrics for evaluating or comparing features. So instead we've been looking at the Web Confluence project, which looks at properties and methods that are exposed by browsers. Of course, this then doesn't bear on the interoperability or conformance part of it; it just says: hey, this stuff is there. So, Dom, you wrote a tool; it extracts data from the tracker.
B
Yeah,
basically,
it
looks
at
all
the
properties
or
already
interfaces
defining
the
WebRTC
specs
and
extract
the
data
from
the
web
conference.
Data
and
present
them
in
are
so
much
easier
to
read,
table
highlighting,
in
particular
those
methods
and
properties,
tab
less
than
to
implementation,
in
some
case,
dome.
A
The peer connection ICE error event, the ICE event URL, getSupportedAlgorithms; one which didn't make sense to me is the candidate attributes, which I thought we had implemented; setCodecPreferences; onstatechange of DTLS has only one supported implementation, although for ondtlsstatechange that was because of a name change; data channel priority; and a bunch of stuff that only one browser supports, some of which might get fixed at some point. So this is just pointing out the general green/red state of things and whether it's expected or not.
A
So one question that came up is: how do some of these results change when you use adapter.js? Which I don't believe the current Confluence tracker is doing. And one thing I noticed — I wrote a little script to look at it that way — is that there are a bunch of name changes we made that result in things being red where they actually aren't.
A
So
we
changed
on
BLS
state
change
too,
on
state
change
on
ice'
state
change
too,
on
state
change,
but
get
nominated
pair
got
changed
to
get
selected.
There
DPMS
send
their
capitalization
changed
so
a
bunch
of
so
the
because
of
the
name
changes
a
bunch
of
things,
don't
show
up
in
the
confluence
tracker,
and
also
there
are
some
things
that
can
be
shimmed
but
be
adapter
like.
A
If
you
have
an
ice
gatherer,
you
can
shim
get
local
candidates,
ice
transport
to
get
local
candidates
and
I
think
that
the
lack
of
the
ice
transport
attributes
may
be
due
to
this,
although
I'm
not
entirely
sure.
So,
the
question
is,
for
the
purposes
of
understanding
what's
been
implemented.
Should
we
perhaps
separately
track
results
with
adaptive
Joce?
Yes,
so
that
some
of
this
stuff
can
get
worked
in
you.
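For illustration, a minimal sketch of the kind of shim mentioned here, assuming an ORTC-style RTCIceGatherer with an onlocalcandidate event; this is illustrative only, not actual adapter.js code:

```javascript
// Hypothetical shim: approximate the WebRTC 1.0
// RTCIceTransport.getLocalCandidates() by recording the candidates an
// ORTC-style RTCIceGatherer emits. All names besides the ORTC event and
// the 1.0 method are made up for this sketch.
function shimGetLocalCandidates(iceTransport, iceGatherer) {
  const candidates = [];
  iceGatherer.onlocalcandidate = e => {
    // The gatherer signals end-of-candidates with an empty candidate object.
    if (e.candidate && Object.keys(e.candidate).length > 0) {
      candidates.push(e.candidate);
    }
  };
  iceTransport.getLocalCandidates = () => candidates.slice();
}
```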
F
I always saw adapter.js as a tool to mitigate the differences between the different browsers, not so much as something which helps with having a more spec-compliant browser. So I guess from that perspective I would say no, without adapter — but I don't feel strongly about it.
H
I was going to jump in and say: I do think it would be useful to show it with adapter.js, and I know that's probably not popular with those of you who work for browser companies, but it is still part of what we tell developers, right? If they say, "yeah, but you guys aren't up to date," we can say, "well, use adapter.js."
F
Are
we
are
we
talking
or
considering
you
only
the
two
options
of
like
either-or,
I
I
think
it
would
actually
be
be
helpful.
I
think,
like
Tom
had
like
probably
like
similar
suggestion
of
like
if
we
in
the
same
table
would
basically
show
something
which
which
says
like,
oh
by
the
way,
if
you
use
adapter
Jess,
these
change
in
the
in
the
following
way
like
they
become
more
green
or
whatever
right,
I
mean
that
might
be
very
helpful.
I
I
would
have
only
have
problems.
A
—with just adding another entry for adapter in the table. Because also — I think when I looked at adapter, I was a little surprised — it can remove things, so I don't think you want to make that the only way to look at it. Anyway, Dom, we can look into having a separate entry for adapter; I don't know if they might have to change the Confluence scripts or something.
C
I would say that it would depend on the feature. I mean, if something's implemented in two browsers and they interwork, then it's clearly done. If the functionality is implemented and interoperates with the help of adapter.js, it means that at least two browsers are close enough that we can paper over the difference.
C
It probably means that we understand the spec, which is the reason for the removal language: if two people understood the spec well enough to implement it, then the spec is at least possibly understandable. If two people implemented it, then we have a test that the spec is understandable. Yeah.
A
Just
just
to
keep
in
mind,
of
course,
this
is
just
at
the
confluence
level,
so
it
doesn't
really
demonstrate
any
Interop
or
conformance
or
anything,
it's
just
showing
the
day.
Okay
did
they
get
implemented,
but
anyway,
I
think
they
for
purposes
of
this
discussion.
It's
just
DOM
and
I
will
look
in
adding
an
adapter
judge's
column
to
it
and
have
that
available.
A
Okay,
so
next
part
of
the
meeting
WPT
and
lots
of
stuff
about
that.
So
just
wanted
to
show
you
briefly,
this
web
platform
tests
dashboard
many
of
you
looked
at
it.
The
major
thing
to
notice
here
is
just
it
is
a
bunch
more
red
than
the
confluence
tracker
and
we'll
be
talking
about
some
of
that
redness
and
some
of
it
is
caused
by
permission,
turn
timeouts
and
other
stuff
which
we'll
get
into
in
a
moment.
F
Sure, but yeah — I've been told that people actually look at this web platform tests dashboard and make their decision on whether to do something with WebRTC or in a native app, and to them this basically reads as scary, right? So if it's being used as a tool like that—
A
—that's what's happening here. Okay, all right, so let's talk about the status. First of all, we adopted a test-before-commit policy at TPAC in November 2017, so maybe we should talk a little bit about how that's working or not working. We've only had a few tests submitted as a result of that policy, specifically due to changes in the spec.
A
On
the
other
hand,
there
are
no
WebRTC
PC
PRS
that
are
currently
marked
needs
tests,
so
the
test
before
commit
policy
doesn't
appear
to
be
blocking
or
causing
some
huge
backlog
or
anything
of
that
nature.
So
just
wanted
to
check
with
the
group.
Are
there
any
objections
to
continuing
the
test
before
commit
policy
doesn't
seem
to
be
causing
and
problems
right?
A
We'll get into issues and PRs: 40 PRs have been merged since we went to CR, 11 are currently open, and six have been open more than 30 days. So we'll be chatting a little bit here about whether we need some process changes to improve the frequency of PR submissions or to improve the velocity of PR review. I'll turn it over to Soares to talk about the general issues in ownership — or do you want to?
G
Yeah, as you might see from the activity in web platform tests, there have been quite a lot of issues that are unresolved, and most of it is due to the lack of active ownership for the WebRTC directory inside of WPT, because most of the current owners are working on a volunteer basis. So a lot of the time there's a PR sitting there, and—
G
—if the owners do not have time to review the test, then it does not get approved and it does not get merged into the master branch. Also, we do not have many owners, so if the owners themselves submit a PR and there's no other owner who can review it — even if the PR has already been reviewed by others — that can actually block the PR from being merged. So I think this is a good chance for a call to action.
I
Let me make a little comment on that one. When we started to work on all those tests early last year, I was already running after everybody — all the browser vendors — to have at least one representative per vendor actually make comments and review the tests, and I eventually gave up. I really did not find a solution: someone who has both enough knowledge and enough time to be able to do a good review. I think providing some rules of thumb is useful, because it can help people who want to submit tests to go through a checklist themselves.
I
Eventually, we really need people who are knowledgeable to allocate time; otherwise we'll come back one year later and say, "well, the tests you wrote were not good enough," right? I just failed to find a way to address that last year, and I really hope we can address it this time.
H
I'll make one comment from other groups that I've been in: some of the most success we've had with test writing has come from QA people among those who are implementing. I guess that's basically what Alex is saying — the Quality Assurance people are the ones who are used to doing that already.
C
The fastest way to turn off new contributors is to let their PRs languish without review. So I'd say that, given the current state, I think we need reviewers first.
F
Is there an option to help with the backlogged PRs? Because I understand that, for people writing these tests, it's actually probably not easy to understand which browsers have actually implemented something — what the state of the implementation is. So I have a bit of a feeling that sometimes people submit PRs having tested against one browser, and then: oh, it fails on the three other ones — probably they haven't implemented it.
B
Would we be comparing these to something? I think that's partly what we ask PR reviewers to do: double-check that, in fact, the results aren't too bizarre. But as Alex was saying, that requires people with a reasonably good understanding of the implementation landscape.
B
Maybe,
to
clarify
what
test
reviewers
are
expected
to
do
is
reviewing
the
test
result
are
generated,
is
one
key
or
at
that
review
where
that
something
may
be
wrong
with
the
test,
but
nobody
expect
that
every
test
should
pass
on
every
browsers.
All
that
browsers
should
say:
oh
I
don't
pass
this
test,
or
this
session
would
be
committed.
Oh
man.
B
Clearly,
understanding
which
tests
pass
when
they
get
submitted
B's
and
you
bottom
hint,
but
it's
just
a
hint
and
there
may
be
test
cases
that
get
approved
despite
being
wrong.
I
mean
that
has
happened,
and
that
will
happen
again.
I
think
the
main
concern
is
finding
enough
test,
reviewers,
possibly
giving
them
some
guidance,
some
checklist
to
be
armed
for
the
reviews,
but
I
wouldn't
focus
too
much
on
the
fact
that
some
tests
may
or
may
not
get
the
right
implementation
results.
That's
that's
better
address
on
a
test
by
test
vegetating.
G
This is more related to the slides that we might discuss later on, which is about the helpers. The original plan for the web platform tests was a tree with as little helper code as possible, so that there are not many abstractions. But as time goes on and more tests are written, we are also adding more and more helpers, and the way we add them is by defining global variables and including them using script tags.
G
Because
of
these
are,
is
there
have
been
some
discussion
on
how
to
keep
track
of
using
the
usage
of
these
helpers
and
and
how
to
manage
to
help
us
so
so
I
guess
there
are
some
ways
that
we
can
keep
in
mind
as
especially
when
we
are
some
more
refactoring
than
frumble.
Maybe
we
can
move
the
helper
functions
to
a
new
helper
directory.
G
You
can
also
consider
about
our
using
the
new
words:
yes
module,
yes,
2015
features
such
as
previous
module
and
also
a
single
wait,
although
I'm
not
sure
about
the
convention
of
using
this
in
WPT,
because
because
this
might
break
the
test
running
in
older
browsers,
that
does
not
support
so
yeah.
This
desire
does
my
observation
and
point
of
discussion.
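To make the two inclusion styles concrete, a hedged sketch; the helper name and module path are hypothetical, and as noted above the module form would not load in browsers without ES2015 module support:

```javascript
// Today's pattern: helpers define globals and are pulled in via script tags
// in the .html test file, e.g.
//   <script src="/webrtc/RTCPeerConnection-helper.js"></script>
//
// A possible ES-module alternative (sketch only, not current WPT convention):
import { exchangeOfferAnswer } from './webrtc-helpers.js';  // hypothetical

promise_test(async t => {
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();
  t.add_cleanup(() => { pc1.close(); pc2.close(); });
  await exchangeOfferAnswer(pc1, pc2);  // offer/answer + ICE plumbing
}, 'offer/answer exchange completes');
```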
A
Okay, all right. So I want to talk about a few of the issues and PRs — we'll do a couple of those and then leave some of the dependency issues for further discussion. So, about the mock media stream data for WebRTC tests: basically, for browsers that don't have command-line flags, this is causing an issue.
E
Do we really need to be testing with a camera and microphone in order to test that the peer connection API works? I would say no, but I also know that, internally, browsers prefer to test with at least their camera and microphone stack. I don't know webdriver as well, but maybe that works just as well.
I
I was about to comment on the webdriver API. Apple released an implementation of the webdriver API that allows handling the permission, specifically for getUserMedia, but as far as I know it's Apple-only — the URL in the webdriver API is prefixed specifically for Apple — and it didn't make its way into the webdriver specification yet. So, number one, they implemented it in the Safari driver but didn't put it in the spec; and number two, it's webdriver, right, so WPT is not depending on it.
B
My understanding is that there is ongoing work to make some test cases depend on webdriver for testing. But I think, yeah, as was said: if we don't need getUserMedia streams — which for WebRTC is 99.9% of the cases — then we should not use getUserMedia and skip that issue altogether, using captureStream or the like.
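As a hedged illustration of that idea (assuming the "captureStream" reading is right), a canvas can yield a video track with no permission prompt at all:

```javascript
// Permission-free video source: HTMLCanvasElement.captureStream() produces
// a MediaStream without any getUserMedia prompt.
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
ctx.fillStyle = 'green';
ctx.fillRect(0, 0, canvas.width, canvas.height);  // paint so frames flow
const [track] = canvas.captureStream(10).getVideoTracks();  // 10 fps
```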
E
Well, anything that would make it greener would be good, I think. I had a separate question there, too: a lot of these web platform tests we're able to run in Firefox because we have these prefs files that set some preferences, and I guess the other option is whether these dashboards could be modified to accept that. The problem is that Firefox doesn't accept the command-line parameter.
I
For Edge, just the permission prompt has several hacks — the Edge team gave us a lot of options. You can manually accept the permission once, and basically it keeps it in memory forever, not just for 30 days; that's one way around it. There's the registry; there are a lot of ways to do it. But all in all it's not very stable, and from one revision to the next — whether it's Edge or the corresponding Edge driver — it isn't as resilient.
I
I think it's good to have the list — oh, there are a few I didn't think about — so I will have my team try a few of them, well, all of them, and see if it improves the greenness of the different dashboards we maintain a little bit. It would be nice if we could come to an agreement on which one we prefer to put in WPT specifically, right?
E
Just spitballing: if we decided to have a helper file — if we had a helper to just basically get an audio track, or get a stream with an audio track — then you could have "if Edge, then Web Audio instead of a real microphone." I mean, that might be one way to at least separate that concern from the web platform tests themselves, I think.
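A minimal sketch of what such a helper could look like; the function name is made up here, and the Web Audio branch is one possible stand-in for a real microphone:

```javascript
// Hypothetical helper: return an audio MediaStreamTrack, synthesizing one
// with Web Audio where possible so no permission prompt is needed.
async function getTestAudioTrack() {
  if (window.AudioContext) {
    const ctx = new AudioContext();
    const oscillator = ctx.createOscillator();       // tone generator
    const dest = ctx.createMediaStreamDestination(); // exposes a MediaStream
    oscillator.connect(dest);
    oscillator.start();
    return dest.stream.getAudioTracks()[0];
  }
  // Fall back to a real capture device (may require a permission hack).
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  return stream.getAudioTracks()[0];
}
```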
A
So, a potential action item: try adding such helpers and see if that works. All right, so 9213 — this I think Dr. Alex will get into, assuming we can get through this quickly enough — which is issues with generating RTP: some of the more sophisticated features, like contributing sources, need things like a mixer-to-client header extension, and simulcast tests need some kind of server — a mixer, an SFU. And then, Dr. Alex, I think you're going to cover this in your portion, right?
J
So, as we know, writing good tests is hard, and as we've seen in the past, making them pass in all browsers is even harder than that. I mean, we've mentioned it already: it's very time-consuming to test in all browsers, even if we just limit it to stable versions. And we've already identified that we have a problem with reviews — that they are not happening — and I'll show some examples where Chrome is exporting new tests, which is great, but they lack review from other browser vendors.
J
Dependencies are another issue, like the dependency on a transceiver to get a media stream track instead of getUserMedia. The question is: where do we draw the line? We need an agreement on that, I think, and automatic upstreaming without review is tricky — I think that's on the next slide. Oh no, cleanup first: web platform tests has a function to add a cleanup, which is executed after the test is done, even if the test errors out with an assertion, and from what I've seen, Travis CI sometimes has trouble—
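For readers following along, a minimal sketch of the cleanup hook being referred to, assuming the standard testharness.js promise_test API:

```javascript
// t.add_cleanup() runs after the test finishes, whether it passed or an
// assertion threw, so the peer connection is always closed.
promise_test(async t => {
  const pc = new RTCPeerConnection();
  t.add_cleanup(() => pc.close());
  const offer = await pc.createOffer({ offerToReceiveAudio: true });
  assert_true(offer.sdp.includes('m=audio'), 'offer has an audio m-line');
}, 'createOffer() with offerToReceiveAudio produces an audio m-line');
```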
F
Just one more piece of information there. For example, for Firefox, one of the problems is that it's not about the total number of open ports you see at the ICE level: internally, the Firefox process also holds more file handles, so you can run into the maximum number of file handles on the given operating system before reaching the ICE maximum number of ports. And the browser basically doesn't notify you about that, right? You basically just get some internal errors, and things go off the rails. Oh yes—
J
And we have the same problem for getUserMedia: if you don't release the track, then you might get strange issues — requesting a different resolution or aspect ratio might not work — so it should always be released after the test. It's probably very hard to do automatically, so code review really should look for that. I mean, in Chrome it came up recently.
J
Right — using the transceiver stuff to get tracks was a good idea, but it was mostly a workaround for the lack of fake devices on things like Travis, and in WPT in general. At least on Travis, Chrome fixed that in mid-March, and the Firefox support for that is a priority one in Philipp's team. And if you look at the media capture and streams dashboard, it looks pretty green and good in Chrome now, so we will probably see some improvements there soon.
J
Dependencies in helpers are really hard to spot — take the example of the ICE transport depending on the DTLS transport — and I think we need to agree that dependencies should be explicit, not hidden in some helper file, because that makes the review very, very hard.
J
Next slide, please. We've seen issues where calling RTCPeerConnection without an argument doesn't work in Edge, whereas RTCPeerConnection(null) does, and apparently, as you found out, Edge sometimes crashes if you don't give ICE servers in the constructor. The question is: are we going to be generous and always pass something to the constructor to make Edge happy and make more tests run in Edge, or do we not do that, because the spec doesn't require it?
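The two spellings in question, for reference; per the spec both should behave identically, since the configuration argument is optional:

```javascript
const pc1 = new RTCPeerConnection();      // reportedly problematic in Edge
const pc2 = new RTCPeerConnection(null);  // the workaround spelling
```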
C
What worries me about this is the cognitive load of requiring everyone who writes tests to remember that they need to pass something, because there's a browser out there that they're not testing on that is going to crash otherwise, right? They would have to test the peer connection with no argument all the time.
F
From our experience — I think it has been said already — if we do these helper functions, we probably want to run them in a separate test run, to be aware when they start failing. That was a lesson we learned in our past: otherwise, if someone modifies a helper or something like that and it starts failing, then suddenly all of your tests — or some tests — start failing and you don't know why.
J
As
this
becomes
a
game
of
testing
the
tests
in
the
end,
okay
next
slide,
please
so
I
had
two
examples
of
Chrome
App
streaming.
Without
any
new
review
in
the
web
platform
tests
repository,
one
was
replaced
track,
which
was
read
on
the
web
platform.
Tests
dashboard
in
Firefox
and
I
was
surprised
by
that,
because
Firefox
has
implemented
replace
tracks
for
four
or
five
years
now,
and
it
turned
out
that
the
tests
used
at
track
with
just
attracted
without
any
streams
which
is
currently
not
supported
in
Firefox.
J
That
was
trivial
to
fix
and
we
even
found
a
spec
issue,
because
we
really
thought
I
think
and
even
did
that.
So
this
shows
review
is
good,
but
I
wouldn't
automatically
without
any
review.
Merge
those
tests
into
web
platform
tests
and
the
other
issue
was
the
tag
out,
wrote
a
helper
function
that
checked
on
the
can
insert
DTMF,
which
is
not
implemented
in
firefox,
and
that
was
trying
to
work
around
the
chrome
issue
and
then
broke
the
test
and
Firefox
and.
E
And I think a larger issue here, from the last couple of slides: there's a safe subset of WebRTC — the sort of safe area that a lot of browsers support — and so ideally, whenever we're testing one feature and that feature doesn't work, we want one red test, not a hundred red tests.
F
Don't tests have any — I don't know what to call it — check at the beginning? Say this is trying to test transceivers: basically check whether transceivers are supported in the browser at all, and if not, kind of skip over it and don't actually try to run the test. Maybe that would help in certain scenarios, but—
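A rough sketch of that kind of up-front guard (one way to express it with testharness.js; the test body is illustrative): an unsupported browser fails one focused assertion instead of a hundred unrelated ones.

```javascript
promise_test(async t => {
  // Bail out early, with a single clear failure, if the feature is absent.
  assert_implements('getTransceivers' in RTCPeerConnection.prototype,
                    'RTCRtpTransceiver is not supported in this browser');
  const pc = new RTCPeerConnection();
  t.add_cleanup(() => pc.close());
  const transceiver = pc.addTransceiver('audio');
  assert_equals(transceiver.direction, 'sendrecv');
}, 'addTransceiver("audio") defaults to sendrecv');
```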
E
On the first example, replaceTrack: addTrack should, according to the spec, work without the stream argument, so that was a bug in Firefox. That part seemed to work as intended — I don't necessarily see a problem in that case; it just needs to get fixed there. The test is correct.
F
One
question
I
had
with
these
two
examples:
did
they
got
merged
through
github
PRS,
or
did
these
get
merged
through
being
written
on
the
chrome
site
and
then
I
don't
know
what
the
the
system
is
called
for?
Pushing
fixes
from
from
the
chrome
repository
automatically
into
WPT
I
know
that
we
have
something
like
that
on
the
Firefox
side
and
I.
Think
it's
like
a
policy
thing
that,
like
small
fixes,
can
be
up
streamed
automatically
through
the
automatic
process
and
new
tests
are
not
supposed
to
go.
That
way.
No.
F
The last part we don't have on the Firefox side. So in our case the import happens and then someone has to manually look at it and figure out what's going on, and most of the time it just gets marked by some non-WebRTC person as "oh, yet another failing web platform test." It gets written down as failing, and then we, the WebRTC people, don't even notice.
I
The replaceTrack PR ran through the Travis builds, and only when the two builds were successful was the thing merged, right? So I think the process is not too bad there; we just need to put in more builds. I still don't understand why the Firefox build on Travis didn't catch it during the process.
I
So, to answer your question: I actually looked at the PR today to see what happened in the process, and I can see on the thread that the two builds were successful on Travis. There was no manual intervention, so everything was done through bots, and they were still running the tests against Firefox.
J
Let's see: "addTrack with a single argument and no MediaStream should succeed" — that was failing in Chrome, because in the end Chrome didn't implement the transceiver model and didn't implement getReceivers — oh no, getTransceivers. I split it up into a setup function, which is basically like something in other testing frameworks like Jasmine or Mocha or Karma, that sets up the peer connection, calls getUserMedia, and gets the track from that.
D
If I may add a comment there — I think we talked about this — the async_test is also an issue, because it runs all of the test functions in parallel, and the cleanup function doesn't help there: if you have more than 20 tests, you also have more than 20 active peer connections at the same time. Yeah.
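To illustrate the difference at issue (a sketch of standard testharness.js behavior): async_test bodies all start immediately, so N of them can hold N open peer connections at once, while promise_tests are queued one after another.

```javascript
async_test(t => {                 // starts in parallel with its siblings
  const pc = new RTCPeerConnection();
  pc.createOffer().then(t.step_func_done(() => pc.close()));
}, 'async_test: connection stays open alongside other running tests');

promise_test(async t => {         // waits for the previous promise_test
  const pc = new RTCPeerConnection();
  t.add_cleanup(() => pc.close());
  await pc.createOffer();
}, 'promise_test: runs sequentially, one connection at a time');
```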
J
So we have the test, and it's now split up into five different tests, each making just a single assertion: that addTrack returns an RTCRtpSender; that the sender's track is set to the MediaStreamTrack you got from getUserMedia; that it creates a single RTCRtpSender that is in the getSenders() set;
J
That
is,
creates
an
RTC
receiver
and
a
transceiver,
and
that
way
we
can
get
the
chrome
on
the
next
slides
from
failing
the
test
completely
because
it
doesn't
implement
the
transceiver
model,
which
gives
a
receiver
after
app
track
and
it
doesn't
implement,
get
transceivers.
So
we
now
have
three
out
of
five
of
these
assertion
passing,
which
is.
It
shows
that
we're
at
least
on
the
way
to
get
to
our
end
goal
to
get
everything
to
pass
that
shows
up
is
the
HTML
result
which
gets
counted
in
the
dashboards.
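A loose sketch of that one-assertion-per-test split; the setup helper here is illustrative, not the actual helper from the PR:

```javascript
// Shared setup: create a peer connection, grab a gUM track, call addTrack.
async function setupAddTrackTest(t) {
  const pc = new RTCPeerConnection();
  t.add_cleanup(() => pc.close());
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  t.add_cleanup(() => stream.getTracks().forEach(track => track.stop()));
  const [track] = stream.getAudioTracks();
  return { pc, track, sender: pc.addTrack(track, stream) };
}

promise_test(async t => {
  const { sender } = await setupAddTrackTest(t);
  assert_true(sender instanceof RTCRtpSender);
}, 'addTrack() returns an RTCRtpSender');

promise_test(async t => {
  const { pc, sender } = await setupAddTrackTest(t);
  assert_array_equals(pc.getSenders(), [sender]);
}, 'addTrack() adds a single sender to the getSenders() set');
```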
J
So I had lots of discussions about that with Jan-Ivar and the others yesterday, and one of the things we ended up trying was to make getUserMedia and RTCPeerConnection wrappers that automagically clean up after the tests — if you're interested, we discussed that in the Chrome bug. Basically, we make a webrtc_test that acts like a promise_test but will clean up the peer connection and getUserMedia afterwards, and that lets us write a test like we see on the slide, with very little setup, very little boilerplate. The disadvantage—
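A hypothetical rendering of that wrapper, in the spirit described (the name webrtc_test comes from the discussion; the exact shape is guessed here):

```javascript
// Tracks every peer connection the test creates and closes them afterwards.
function webrtc_test(testFunc, name) {
  promise_test(async t => {
    const peerConnections = [];
    const createPeerConnection = config => {
      const pc = new RTCPeerConnection(config);
      peerConnections.push(pc);
      return pc;
    };
    t.add_cleanup(() => peerConnections.forEach(pc => pc.close()));
    await testFunc(t, createPeerConnection);
  }, name);
}

// Usage: no explicit close() boilerplate in the test body.
webrtc_test(async (t, createPeerConnection) => {
  const pc = createPeerConnection();
  const offer = await pc.createOffer({ offerToReceiveVideo: true });
  assert_true(offer.sdp.includes('m=video'), 'offer has a video m-line');
}, 'createOffer() with offerToReceiveVideo produces a video m-line');
```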
E
Yep
yeah,
that's
basically
my
criticism
as
a
while
is
that
bit
scarred
by
rappers
in
the
past,
like
in
some
of
the
Firefox
tests,
for
instance,
we
had
a
case
where
we
had
a
create
data
channel
that
actually
returned
a
promise
that
caused
people
some
confusion,
because
it
wasn't
actually
the
the
spec
API,
but
it
was
a
helper
function
that
was
trying
to
be
helpful
by
mimicking
the
spec
and
I
think
that
that
is
a
pattern.
I
don't
like
in
tests.
I
prefer
test
two
to
test
the
actual
API,
so
I
proposed.
E
So
simple
and
I
discussed
this
back
and
forth
a
couple
of
times,
I
tried
to
make
a
version
that
instead
of
cleaning
Apple
just
a
search
that
the
tests
had
cleaned
up
itself.
That
way
we
would
get
some
tests
of
some
coverage
and
some
way
to
making
sure
that
tests
were
doing
the
right
thing,
but
even
that
I
was
a
little
problematic.
So
basically
here's
the
same
test.
We
have
piece
you
you,
you
enforce
that
the
test
must
call
PC
close
and
close
attracts
itself,
but
it
the
assert
here,
would
actually
throw.
E
So,
in
the
case
where
the
assert
happens,
you're
missing
the
close.
So
then
you
know
dealing
with
this
in
JavaScript.
You
could
use
try,
finally,
and
all
that
stuff,
but
it
gets
it's
more
boilerplate,
so
the
pro
would
be
promoting
more
correct,
API
code
that
doesn't
skip
on
clean
up
the
con
would
be
that
when
the
asserts
happened,
you
would
get
leaks
or
if
you
had
one
mighty
clean
up
magic
again.
So
it's
not
really
correct
still
to
write
code
like
this
and
add
one
more
slide.
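A sketch of that assert-your-own-cleanup variant, showing the try/finally boilerplate it forces on test authors (the final assertion stands in for whatever leak check the wrapper would run):

```javascript
promise_test(async t => {
  const pc = new RTCPeerConnection();
  try {
    const offer = await pc.createOffer({ offerToReceiveAudio: true });
    assert_true(offer.sdp.includes('m=audio'), 'offer has an audio m-line');
  } finally {
    pc.close();  // must run even when the assert above throws
  }
  assert_equals(pc.signalingState, 'closed', 'test cleaned up after itself');
}, 'createOffer() test cleans up its own peer connection');
```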
E
So maybe it would just be better — next slide, please — to go with no wrappers, which I think is what we originally had, as long as you have well-named helpers. Maybe that would avoid using wrappers, and we could maybe even have an optional flag to run tests using those wrappers, which asserts that tests don't leak. I also had a question, I guess: would these wrappers somehow interfere with some tests and stuff like that? That would be a concern I have with wrappers.
C
Tests that do things like checking the internals of a connection should be using the raw peer connection constructor.
D
So, instead of running a test in a single browser, my idea is to run the test in two browsers which share the test instance. This is the architecture I came up with: there's a tiny signaling server that uses WebSockets; it allows exchanging any kind of data, and there is one route for the two browsers, with a specific URL.
D
So,
in
order
to
to
meet
up
in
order
to
run
the
route,
you
have
a
UID
a
test
index
and
the
role
that
is
assigned
to
the
brothers
and
you
activate
that
mode
by
passing
in
a
specific
URL
parameter,
which
is
called
cross
browser,
and
then
you
have
the
URL
parameter,
which
is
sorry.
The
URL
ya
picks
your
l
equals.
Is
man
bebe?
Well
you
all
of
the
signaling
server.
The
two
browsers
should
connect
to,
and
the
role
of
course,
because
all
you
need
to
assign
a
role
to
one
of
these
brothers.
D
Next slide, please. The browsers connect to each other via the signaling server and wait for each other, and then they start the test. On the left side, we have a test as it exists at the moment, without cross-browser support: what it does is create a data channel, and it awaits, on the peer that created the data channel, that this data channel opens — so the open event is fired — and then it's done. On the right—
D
—we have this cross-browser test function, which is basically an extension of a promise test. It has two new parameters. The first one is, of course, the test instance itself — that's nothing new. Then we have a signaling instance and the role. The signaling instance abstracts away the different kinds of modes we have: in single-browser mode, the test function that I've defined there will be called twice with different roles, and the two are connected by a signaling instance which I call loopback.
D
Certainly
so
it
just
redirects
the
data
from
one
of
the
test
to
another
and
back
again
and
across
for
the
node,
the
function
will
be
called
once
and
the
browser's
connect
by
the
WebSocket
signaling.
So
this
is
what
the
signaling
instance
is
for
at
the
very
end
of
the
function,
we
have
the
method
being
called
exchange,
candidates
and
exchange
descriptions,
which
is
just
some
high-level
helper
methods
on
the
signalling
instance,
which
wrap.
If
you
were
to
talk
about
exchange,
can
loops,
for
example,
it's
well.
It
takes
the
I
scanned
events.
D
Part
of
that
passes
that
into
what
the
signalling
and
everything
that
comes
in
from
the
remote
side
will
be
passed
into
the
P
connection.
So
that's
just
some
high-level
read
the
function
so
in
order,
if
you,
if
you
need
to
do
some
SCP,
mangling
modification
or
whatever,
then
you
can
still
do
that
so
yeah.
What
it
also
does
is
if
something
fails
on
one
side,
this
will
be
synchronized
to
the
other
side
and
that's
pretty
much
it
towards
test
so
and
they
have
to
be
rewritten
in
some
kind
of
way.
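Putting the pieces together, a rough sketch of the shape described in the slides; cross_browser_test, the signaling instance, and its exchange helpers are the presenter's prototype API, not part of testharness.js:

```javascript
cross_browser_test(async (t, signaling, role) => {
  const pc = new RTCPeerConnection();
  t.add_cleanup(() => pc.close());

  // Prototype helpers: forward descriptions and ICE candidates through the
  // signaling instance (loopback in one browser, WebSocket server across two).
  signaling.exchangeDescriptions(pc);
  signaling.exchangeCandidates(pc);

  if (role === 'offerer') {
    const channel = pc.createDataChannel('test');
    await new Promise(resolve => channel.onopen = resolve);
  } else {
    await new Promise(resolve => pc.ondatachannel = resolve);
  }
}, 'data channel opens between two peers');
```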
D
Am I going too fast? All right, so to the next-steps slide, thanks. Possible next steps from my point of view would be to modify tests to be compatible. We can also do some cleanup, adding async/await and that kind of stuff, which we talked about earlier.
D
What
would
be
well
would
be
necessary
is
to
update
the
the
Python
script
to
to
run
this
quest
as
a
mode
tests
automatically
by
following
a
defined
browser
matrix
and
at
the
very
end.
This
would
be
really
nice
to
have
the
result
pushed
to
the
WPT
dot
fYI
dashboard,
so
we
can
see
what
breaks
or
what
works
in
cross
browser
mode.
So
my
question
would
be
in.
Is
this
something
we
would
like
to
pursue
any
comments
or
any
questions?
F
Just a general warning: internally in Firefox, we have been down this path, and we actually gave it up because it cost too much. Well, I guess in our case it was slightly different — we tried to reuse existing tests for two different environments, running on a single machine and running on two different machines — and it cost too much overhead and too many problems, and we gave it up.
I
I was about to make the same comment — I remember you saying that after the steeplechase experience — and we took that into account when we designed KITE, specifically the WPT run with KITE. We said the WPT tests are made for one purpose, which is the compliance of the JavaScript API in one browser, right? There are only a few APIs that require more than one browser — they're specifically the WebRTC ones — and it's a very small subset compared to WPT.
D
So I'm not really sure — in order to use KITE, wouldn't that require rewriting tests anyway?
F
It's a question of whether you write your tests for two separate environments as two separate tests, right? I think what Alex is saying is: basically you have your WPT tests, which are really meant to run in a single browser only, and then you have your separate KITE tests, which are meant for interoperability testing, and what you're proposing is basically taking—