From YouTube: YUI Open Roundtable with Reid Burke on yo/tests and YUI
Description
YUI Open Roundtables cover topics of interest to the front end community.
A: More guests arriving as they come. I'd like to welcome Reid Burke of the YUI team to talk to us about yo/tests and continuous integration and some of the stuff he's been working on lately. So take it away, Reid.
B
Cool,
hey
guys,
so
yeah
I've
been
working
for
a
good
long,
while
I'm
testing
stuff
for
yahoo
for
yui,
and
lately
I've
been
working
on
bringing
no
tests
to
more
projects
besides
yui
at
yahoo,
so
yotes
has
been
really
successful
for
yui
and
we
continue
to
use
it
daily
for
checking
the
status
of
of
our
project
for
understanding
the
health
of
each
commit
and
or
making
decisions
to
release
or
to
integrate
features
or
not.
And
it's
it's
been
working
really
well.
B: Really, the thing that's new about yo/tests is that you can look at the status of particular code revisions instead of particular, say, builds, which don't mean as much when those builds could have been building different things.
B: So this is going to be useful for a lot more than just YUI, and a lot of people want it, so we're bringing this to other projects at Yahoo. Right now we're building out a version of yo/tests that will actually work with more than just YUI, and I'm really excited to be building that. The ultimate goal is to have all of Yahoo's open source projects available on a public instance of yo/tests, so that folks who contribute and submit pull requests and anything else can leverage our internal CI infrastructure to run tests, and then see the results in this interface.
A: Yeah, just some background history: if you look at the YUIConf videos, Reid has a talk about yo/tests there that gives a great background on the feature set. Have we had any movement since then, like on the number of browsers or the number of tests, or anything like that?
B
Is
it
is
so
yeah
we're?
Definitely
in
the
like
mid
tens
of
millions
and.
B
About
a
dozen
environments
yeah
so
so
yeah
so
yeah
I
mean
from
here
you
can
see
that
yeah
our
release
is
looking
pretty
good,
so
so.
A
That's
that's
what
it's
for
one
of
the
things
I
think
clarence
mentioned
you're
talking
about
these,
these
new
test
flows
that
you
were
working
on,
like
some
of
the
integrations
with
the
screwdriver
and
things.
B
A
B
Yeah,
so
we
have
tools
that
we
use
internally
for
building
projects
that
are
built
on
top
of
jenkins,
so
we
leverage
jenkins
as
internally
for
for
building
most
projects,
most
most
things
that
yahoo
use,
jinkins
and
folks
at
yahoo
have
built
things
on
top
of
jenkins.
That
help
us
understand
the
data
that
comes
from
jenkins
and
helps
scale
change,
and
so
I'm
working
with
the
folks
who
are
developing
these,
like
the
the
super
set
of
jenkins
or
this,
this
thing
that
that
controls
jenkins
to
integrate.
B
You
know
your
tests
with
that
internally,
but
our
of
course.
Our
goal,
though,
is
for
yo
tests
to
work
with
any
ci
system,
and
so
yo
test
does
not
say,
go
out
and
query
a
ci
system.
Instead,
it
expects
the
ci
system
to
be
have
its
results
pushed
to
it,
so
that
isn't
how
the
yo
test
that
we're
looking
at
here
on
the
screen
works
today.
B
This
version
actually
goes
out
and
we'll
go
and
ask
you,
know:
g
it'll
actually
go
out
and
query
jenkins,
but
the
scalable
way,
and
also
the
way
that's
more
flexible
and
allows
for
folks
to
use
more
than
just
unions,
is
to
just
have
really
any
ci
system,
including
travis,
and
so
I'm
really
looking
forward
to
the
day
where
we
can
see
on
our
publicly
hosted
geotest
instance,
a
both
our
internal
builds
and
also
our
travis,
builds
or
builds
with
really
any
ci
system
that
we
might
use
in
the
future.
A
And
part
of
the
benefits
of
this
just
to
click
through
just
to
give
a
bit
of
an
example.
If
you
look
on
say
dev3x
right
now,
we
have
three
of
these
stable
unit
failures:
they're,
not
real
figures,
they're
sort
of
flaky,
but
just
to
show
a
little
bit
about
what
that
looks
like.
Can
you
talk
a
little
bit
about
how
this
how
this
works?
B
Yo
test
is
kind
of
made
for
projects
that
well,
it's
actually
made
for
yui
and
that's
a
project
where
we
only
tested
about
three
different
browsers
first,
so
we
really
only
like
years
about
two
years
ago,
we
only
really
tested
one
version
of
say
firefox,
one
version
of
ie
and
one
version
of
say,
chrome,
but
when
we
finally
had
the
capability
to
test
on
more
than
those
environments,
we
found
that,
because
we
haven't
done
automated
testing
on
those
environments
ever
before
that
we
had
a
lot
of
test
failures.
B
B: We wanted to distinguish between unit tests that are unstable and unit tests that are stable. We have some unit tests that are known to fail for legacy reasons: they just weren't written well in the first place, and they only really worked well on the environments that were tested back then. But now that we're testing on multiple environments, we want to understand if code got worse, and it's very hard to do that when you have hundreds of failures every build, and some of them sometimes fail.
B
Some
of
them
don't,
and
so,
when
you
see
things
fluctuate
so
much,
it's
very
difficult
to
understand
the
health
of
the
project,
and
so
what
we
did
to
address
that
is
inside
of
your
test
is
have
a
notion
of
a
stable
and
unstable
unit
test,
for
example,
and
each
of
these
can
be
classified
as
stable
or
unstable.
How
that
works
in
our
new
system.
That's
being
built
right
now
is
that
the
ci
like
this,
like,
however,
you
integrate
with
the
ci
system,
you
can
determine
your
own
rules
and
your
own
logic
for
determining.
B
If
something
should
be
marked,
flaky
or
not,
and
based
on
that,
then
you
can.
Once
you
tell
you
a
test,
you
can
actually
you
know,
have
a
test
failure,
that's
important
that,
should
you
know,
cause
notifications,
you
know
for
like
pages
or
pages
or
whatever
or
show
like
a
big
red.
B
You
know
box,
you
know
with
your
failures
or
you
can
say
that
that's
something
that
should
be
recorded,
but
not
you
know
not
necessarily
have
to
be
acted
upon
by
an
engineer,
and
that
would
be
classified
as
say,
a
flaky
test
or
an
unstable.
B: And yeah, I'm pretty happy with the fact that this is flexible enough that you can write your own logic, in JavaScript or any other language, that can classify the tests for you as they come up.
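To make the idea concrete, here is a minimal sketch of what such user-supplied classification logic could look like in JavaScript. The record shape and the "stable"/"flaky" labels are assumptions for illustration; the conversation only establishes that you supply your own rules, not what yo/tests' actual interface is.

```javascript
// Tests known to fail for legacy reasons: record their failures,
// but do not page an engineer about them. The names here are
// invented examples, not real YUI test names.
const KNOWN_FLAKY = new Set([
  'dom-core: test-scroll-position',
  'event-touch: test-gesture-end'
]);

// Classify a single test result as "stable" (failures should cause
// notifications) or "flaky" (failures are recorded but not acted upon).
function classify(result) {
  if (KNOWN_FLAKY.has(result.name)) {
    return 'flaky';
  }
  // Example of environment-based logic: this suite was never
  // written with old IE in mind, so those failures never page.
  if (result.env === 'ie6' && result.suite === 'dom-core') {
    return 'flaky';
  }
  return 'stable';
}

module.exports = { classify };
```

A CI hook would run `classify` over each incoming result before handing it to the reporting system.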
B
Yeah
celic
is
basically
our
functional
tests
and
so
yeah
and
in
the
medial
tests
they're,
you
know
just
they're
just
different
kind.
You
can
have
any
arbitrary
number
of
of
test
types
right.
B
You
can
have
your
unit
test
or
your
functional
tests
or
smoke
tests
or
whatever
you'd
like,
and
those
would
be
reported
differently
so
and
the
current
you
know
system
that
we're
looking
at
now,
there's
only
what's
called
unit
cell,
which
selected
are
basically
functional
tests,
so
yeah
the
ui
would
look
a
little
bit
different,
but
that
hasn't
been
started
yet
so
far,
I've
just
been
working
on
the
back
end
service
that
will
that's
taking
in
all
this
information.
A
So
this
is
extremely
useful
because,
when
we're
doing
like
the
week
of
testing
for
the
release-
which
is
what
we're
doing
right
now,
if
someone
checks
in
something
and
it
starts
causing
stable
test
failures,
we
can
pinpoint
the
the
specific
check-in
that
caused
the
problem
and
it
really
eliminates
a
lot
of
the
uncertainty
around
you
know.
A
Who's
who
changed
something
or
what
happened
and
also
having
to
cut
down
on
the
number
of
manual
tests,
has
really
improved,
like
the
fact
that
we
can
just
do
this
so
quickly
like
we
don't
have
to
wait
several
days
to
wait
for
the
the
manual
test
to
be
back.
So
I
think
that's
another
huge
benefit
of
this.
I'm
trying
to
actually
go
to
one
of
the
tests
themselves
in
jenkins,
but
network's,
not
working
too
well
yeah.
A
So
this
is
really
awesome
stuff.
So,
in
terms
of
like
things
that
folks
out
in
the
world
might
see,
do
you
have
any
time
frames
for
that
or.
B
We're
yeah
so,
like
I
said
before,
are
something
that
we
really
want
is
to
have
the
status
of
all
of
our
projects
that
are
built
internally
to
be
shown
to
the
public.
And
so
that's
that's
our
our
goal
to
have
by
the
end
of
the
year,
but
I
think
yeah
that
we
could
probably
get
that
done
sooner.
But
I
don't
have
anything
specific
to
share
right
now.
B
Yeah,
that's
we're
building
this
from
the
start,
with
the
intention
of
open
sourcing,
and
so
so
yeah
it'll
be
under
a
different
name.
But
that's
that's
our
goal
so,
and
this
is,
is
this
tied
at
all
to
yeti
or
not?
Is
it
this?
Has
you
know
you
could
use
yeti
to
run
tests,
but
this
doesn't
depend
on
anything
any
particular
test
framework
or
ci
system.
It
is
really
up
to
anyone
can
write
something
that
would
deliver
test
results
to
the
system.
B
What
I
mean
by
delivered
test
results
is
simply
that
there
is
a
api
for
for
everything.
That's
that's
over
the
web
over
restful,
calls
that
you
make
and
that
are
authenticated
with
say
you
know
github
or
github
enterprise.
If
you
want
to
run
this
on
your
own
inside
of
your
own
organization
and
so
yeah
there's
really.
This
can.
A
B: So yeah, it's pretty standard stuff: just the name, the duration, what kind of test it is (unit, functional), whether it passed or failed, and if there's a failure, a stack trace or some more information. It probably doesn't store every bit of information that everyone would like to have, but right now it's enough for you to get real work done. Additionally, there's also a facility for monitoring performance aspects of builds, which is not in the version you're looking at now, right.
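The fields just listed can be sketched as a small record constructor. The field names here are assumptions; only the set of facts stored (name, duration, kind, pass/fail, failure details) comes from the conversation.

```javascript
// Build one per-test record of the kind described above. The stack
// trace (or other failure information) is only attached on failure.
function makeResult(name, kind, durationMs, passed, stack) {
  const result = { name, kind, durationMs, passed };
  if (!passed && stack) {
    result.stack = stack; // extra failure information, e.g. a stack trace
  }
  return result;
}
```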
B
So
there
is
a
way
for
builds
to
communicate,
say
how
long
they
they
took
or
or
how
long
certain
parts
of
it
took
and
those
things
can
then
be
say
graft.
So
so
that's
it's
pretty
it's
pretty
simple,
but
I
think
it's
it'll
still
be
very
powerful
when
you
get
to
organize
it
based
around
code
revisions
and
environments,.
B
I
don't
have
anything
to
share
about
code
coverage
at
the
moment
so
but
code
coverage
could
be
a
metric
city
but
yeah
code.
Coverages
is
a
pretty
difficult
problem
to
have
in
a
standard
way
across
many
different
projects
or
I'm
sorry,
many
different
kinds
of
languages
which
can
which
determine
code
coverage
in
different
ways,
and
so
our
focus
is
on
testing
and
and
also
reporting,
say,
aggregate
code
coverage
information,
but
also
I
mean
it's
yeah,
that's
that's,
basically,
where
we
are
right
now.
B
Code
coverage
is
important,
but
I
think
a
lot
of
folks
are
gonna,
get
a
lot
of
value
out
of
just
the
testing
side
and
also
some
aggregate
code
coverage
information.
But
I
look
forward
to
yeah
adding
more
helpful
features
around
code
coverage
at
some
point
so
or
with
the
help
of
the
community
there.
Any
potential.
B
Like
web
hooks
or
something
so
this,
this
really
only
works
with
github,
so
whether
it's
github
or
github,
that
you
run
as
an
instance
inside
of
your
own
organization,
so
so
yeah
this
this
you
for
the
front
end
you
log
in
with
github.
You
know
you
add
your
projects
that
are
from
github,
so
so
yeah.
B: Yeah, so we built this intending for it to really be useful for multiple different browser environments. But really, you can imagine, even if you use, say, Travis, you can run, say, Node tests on many different versions of Node, and those could show up as environments. Or you could have your own CI system that supplements Travis, like we do, and so we could show Travis results alongside your own CI system's.
B: Yeah, so that's something I'm working on right now, actually: building out the integration with Jenkins. That's just the baseline. What I mean by baseline is that if you have anything in Jenkins, you're guaranteed to get some basic information about tests, because Jenkins does a lot of work parsing different kinds of test types to show them in its own UI.
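Jenkins exposes that parsed test data through its remote JSON API (for example, `<job>/lastCompletedBuild/testReport/api/json`, with suites containing cases that have a name, a duration in seconds, and a status). A baseline importer might translate that report into flat per-test records; the output shape below is an assumed one, not yo/tests' actual importer.

```javascript
// Flatten a Jenkins testReport JSON object into per-test records.
// Jenkins reports case durations in seconds; convert to milliseconds.
function fromJenkinsReport(report) {
  const results = [];
  for (const suite of report.suites || []) {
    for (const c of suite.cases || []) {
      results.push({
        name: suite.name + ': ' + c.name,
        durationMs: Math.round(c.duration * 1000),
        // Jenkins marks a newly passing test as FIXED rather than PASSED.
        passed: c.status === 'PASSED' || c.status === 'FIXED'
      });
    }
  }
  return results;
}
```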
B
So
if
you're
already
using
that,
then
you
know
this
can
read
off
that
and
that's
the
lowest
common
denominator
inside
of
yahoo.
For
for
almost
everything-
and
so
you
know,
winds
up,
reporting
the
jenkins
in
some
way,
and
so
that's
what
we're
going
to
leverage
so
yeah,
but
going
forward
really
you
could
I
mean
we're
going
to
have
more
fine-grained
importers
that
could
understand
more
about
things
like
code
coverage
or
can
actually
supply
test
results
as
they
finish
inside
of
a
job.
So,
as
you
know,
you
know
so
like.
B
You
know
once
once
test
frameworks
catch
up
with
that
kind
of
concept,
but
we
definitely
are
excited
about
having
test
results
available,
the
instant
that
they
are
done
and
that's
really
just
a
function
of
support
inside
of
your
own
test
framework.
So
so
as
soon
as
your
test
framework
can
or
the
thing
that
you
know
this,
the
whatever
client
reports
to
you
a
test
can
support
that,
and
it's
ready
to
go.
We
can
you
can
do
that
so.
A
We
need
plans
to
have
integrations
with
external
testing
companies
like
say
sauce
labs
or
things
like
that.
B
So
this
doesn't
really
depend
on
you,
you'd
kind
of
I
mean
once
you
have
sauce
labs
running
in
any
ci
environment
that
you
like,
that
it
just
shows
up
as
test
results,
and
so
really
we
love
sauce
labs
but
yeah.
It's
really
just
we're.
Looking
at
integrations,
perhaps
with
say
you
know,
services
that
are
like
travis
so
or
you
know
things
that
you'd
run
in
you
know
that
are
like
traffic
save.
A: I know that you're pressed for time, so I wanted to thank everybody for coming today. This is pretty much a short roundtable, because we have a lot of things going on. Just a quick update about the next release, 3.17.0.
A
As
you
can
see,
all
the
tests
are
passing.
There's
no
unit
test
failures.
I've
been
monitoring
github
to
see
if
there's
been
any
any
new
issues
coming
up
since
the
release,
candidates
out
and
so
far
so
good
we've
also
got
a
number
of
tests
that
are
a
number
of
issues
that
this
release
fixes.
So
if
you
have
a
test
environment
out
there,
I
encourage
you
to
get
this
release.
A
Branch
get
an
instance
of
this
and
test
it
out
before
for
the
end
of
this
week,
and
if
you
run
into
something
please
let
us
know
right
away
with
the
issue.
I'm
gonna
get
him
outside
that.
I
think
we're
good
to
go
so
we're
gonna
close
out
our
round
table
early
this
week,
thanks
a
lot
reid
for
coming
and
talking
with
us
about
your
tests.
B: Sure. Oh, I'd like to share, if anyone is still with us, a statistic on how much we have tested. We started using yo/tests for YUI in about the last third of 2013, and so to date we have tested exactly 52,968,145 tests. I've gone through the system.
B: So yeah, we run a lot of tests, and this is why, when we do have pull requests, we always encourage folks to add new tests, because it makes everything that much more robust.
A: 52,968,145, of course, and that number's probably gone up since then, but it'd be nice to have an odometer or something you could run. Yeah, for sure. All right, thanks everybody for coming today; see you next week. Next week we'll have a guest talking about the ES6 modules that he's converted all of his YUI components over to. So, see you.