From YouTube: Kubernetes SIG Testing 2017-06-06
A: Yeah, so I actually gave this to some people at Google, because there are people at Google who are interested, but I think it'd be interesting for all of us in the Kubernetes community, since some of us are more or less familiar with Prow. So let me put a link into the SIG channel now.
A: Right now? Well, alright. Maybe someone else can do that, but there are links here. So, right: some background on Kubernetes, which I assume a lot of us are familiar with.
A: Oh, and yeah, Kubernetes is a popular project, just in terms of what it does on GitHub and in the CNCF.
A: There are tons of issues and lots of PRs going on, so we have a lot of automation needs, and we've kind of grown out of our ability to satisfy all of that with Jenkins. So we are building Prow, which is an app that runs on Kubernetes to handle these things. The key goal we had for Prow is that it's sort of an event-driven workflow.
A: The main concept is that it reacts to events, typically GitHub events, but they could also be timer events, and then it does something, such as changing a label or scheduling testing. The idea also is that we have different layers of test infrastructure, with Prow on top: Prow schedules pods, and the pods do something simple, which is to go through this bootstrap library, which calls Bazel to build, and then we eventually call kubetest to run the tests.
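As a rough sketch of that layering, here is what a single job pod might run, top to bottom; the paths and flags are illustrative guesses based on the test-infra layout described in this talk, not quotes from it:

```sh
# Everything below runs inside one pod that Prow has already scheduled.
git clone https://github.com/kubernetes/test-infra      # fetch the shared job tooling
./test-infra/jenkins/bootstrap.py --job=ci-example-e2e  # bootstrap: checkout, logging, GCS upload
# bootstrap invokes a scenario, which calls kubetest; kubetest in turn
# builds with bazel and drives the actual e2e suite:
kubetest --build=bazel --up --test --down
```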
A: The idea is that we have these different layers, each layer is only responsible for certain things, and the interfaces between those layers are simpler than they were in the past. Historically we had things in giant shell scripts and a bunch of different Jenkins plugins, and it was very hard to tease apart what actually happened in a given job; we want to make that clear. The third thing is that we want to make sure everything is accessible to the public.
A: Initially, I guess a year and a half ago, the state of the art was that the best way to debug something was to find your favorite Googler friend, since our Jenkins infrastructure was running on machines that we didn't feel comfortable making publicly accessible, and obviously that is not a super great experience.
A: At least not if you are one of the many developers who are not at Google. So we wanted to make sure everyone has an even playing field, where the best way to deal with test results is the same for everyone: it's accessible to everybody, and people inside and outside of Google all have access to the same tools. So the basic idea of Prow, like I was saying earlier, is that it responds to events, and initially you get some sort of event.
A: The main concept is that you get a GitHub hook event, and then Prow responds by creating a ProwJob. This is a third-party resource, and that third-party resource is picked up by Plank, which is sort of our controller for ProwJobs; it is going to create a pod, and from Prow's point of view it doesn't really matter what happens inside of that pod.
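For concreteness, a ProwJob third-party resource might look roughly like this. This is a minimal sketch: the apiVersion and field names are assumptions, and the real schema lives in the prow source:

```yaml
apiVersion: prow.k8s.io/v1   # assumed group/version for the third-party resource
kind: ProwJob
metadata:
  name: example-prowjob
spec:
  type: presubmit       # created in response to a PR event
  job: foo-unit-tests   # which configured job this run satisfies
  refs:                 # what the pod should check out
    org: org1
    repo: repo1
    pulls:
    - number: 123
```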
A: Prow just waits for the pod to finish, and then it will create the status line in GitHub to show what happened. And then we have Deck, which displays a nice dashboard; that's at prow.k8s.io. Inside of the actual pod there's the bootstrap library, whose job is to make sure that the repo is checked out at the right place; it then starts the actual testing, captures any log output, and uploads everything that happens to GCS.
A: So that was just a quick overview. The first thing, the bulk of Prow, is that it responds to events. If you're familiar with GitHub, you can configure GitHub to send events to an application: you can simply say, hey GitHub, anytime something happens in my repository, I want you to send a notification about it to this application.
A: It could be that I leave a comment, change a label, push a commit, or merge something. GitHub will then send all of those events to Prow, and Hook is the application on Kubernetes that is configured to receive those events and respond to them. There are a bunch of different plugins in the prow directory that will do different things.
A: Hook is going to look at the event and go: okay, great, I'm going to send this to the assign plugin, and the assign plugin is going to say: yes, this is something that I want to react to, and it reacts by changing the assignees to whoever you specified. The other big thing is, if you say "@k8s-bot test this", the same thing happens: that comment makes GitHub send an event to Hook, and Hook processes it.
A: Hook sends it to... I forget what the name of the plugin is, but that plugin will create a ProwJob and then wait for that ProwJob to finish. Over time we've actually had a growing number of people extending this by adding new plugins. There's actually someone, I forget their name, but they're from IBM, and...
A: They have a PR out to extend this so that we can respond to Slack. I think right now maybe it's just sending something to Slack, but theoretically you could also extend this so that, instead of just following GitHub events, we could make it respond to Slack events or other things. But yeah, basically Hook's job is to receive events and pass them off to a plugin, which can do whatever that plugin is configured to do.
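The mapping from repositories to plugins is configured in a YAML file alongside prow. A minimal sketch, assuming a layout like test-infra's prow/plugins.yaml (the job-creating plugin the speaker couldn't recall is presumably trigger):

```yaml
# Which plugins hook runs for events from which repositories (illustrative).
plugins:
  org1/repo1:
  - assign    # reacts to assignment comments by changing assignees
  - trigger   # reacts to "@k8s-bot test this" comments by creating ProwJobs
```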
A: So, right: when you request testing, the plugin will create ProwJobs just to signal that, hey, we want to eventually have some testing happen, and then Plank is the controller that is responsible for making sure we actually schedule pods that complete that test request. The way the Plank controller works is that it scans the Kubernetes cluster for ProwJobs, noticing if there are any new ProwJobs that it doesn't know about.
A: Usually a ProwJob comes in from a hook event, but there are also various other ways. There's the Horologium binary that runs in the cluster, which is responsible for handling periodic jobs that start on a timer, as opposed to a commit or some other sort of event. And then, theoretically, you can also create ProwJobs manually.
A: Just using kubectl, you can say: kubectl, create this ProwJob with these specifications, and if you use kubectl to apply that, then magic things happen. The Plank controller is scanning for these ProwJobs that need to exist, and once they do, it will create a pod that satisfies each ProwJob.
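So, assuming you have credentials for the Prow cluster, kicking off a job by hand could be as simple as the following sketch (the prowjobs resource name is an assumption about how the third-party resource is registered):

```sh
kubectl create -f my-prowjob.yaml   # plank notices the new ProwJob and schedules a pod
kubectl get prowjobs                # watch the controller pick it up
```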
A: Crier is the piece of Prow that is responsible for making sure that the green check marks, or whatever the check marks in your PR status are, match what is actually happening with the ProwJob. So the Plank controller is sitting there: it creates the pod for the ProwJob and monitors the status to see whether it failed or started or passed or whatever, and then it sends that status to the Crier binary, which knows how to convert those ProwJob statuses back into GitHub contexts.
A: So, reviewing the basic flow that all of our testing follows: GitHub pushes some event, most likely because I either merged a PR or made a comment requesting testing; that gets picked up by Hook, which sends it to a plugin, and that plugin creates a ProwJob.
A: The Plank controller detects this new ProwJob and creates a pod that satisfies that ProwJob request, based on the configuration. What happens inside of the pod, Prow doesn't really care about; it's up to the pod to do its thing. Prow just waits for the pod to complete, and if it completes successfully, it marks the job as passed, or whatever the actual result of the pod is, it updates that on the ProwJob, and then Crier writes the context back to GitHub.
A: That's part of Prow! That's a Prow plugin that handles that, yes. Okay, so let's look at an example.
A: Say we just have something that fails tests. The exciting new thing is that previously I would have had to leave a comment for each thing that I wanted to test, or leave one comment and it would retest everything. But now Brian actually just made a change to the plugin to accept "retest", so that if I just say retest, it will detect that, hey, these two jobs would love this.
A: There's a change going in place where... okay, there we go, yeah, okay, great. So lots of exciting things happening there. And then if you go to prow.k8s.io, we have a quick little dashboard that shows everything that is happening, and so we can look for that. Was it 47?
A: We can see that my test command has requested those three things to start. The yellow circle means it's in progress, the green check means it passed, the X means it failed, and the little hamburger icon gives you logs. One thing that's nice is that it shows real-time logs, so it allows you to debug your PR as it is happening; you can just refresh the page to see whatever the latest thing is. So this dashboard is pretty nice.
A: If you're wanting to debug stuff, like our batch jobs, it's a pretty common way for me to check and see whether our PR jobs are healthy or not.
A: Oh, and then, I was missing how you can just manually create the ProwJob, which I think is a pretty slick little behavior. How this works is, you can just say kubectl create this, and assuming you have access to the Prow cluster that this runs on, you can just create the ProwJob. Now, of course, creating the YAML is sort of a pain, so Prow makes this easier by...
B: While you were bouncing around Deck, I had a question about the individual pull request jobs versus the batch jobs. I'm guessing, I'm not sure, but it seems like when I first submit a pull request and all the jobs get triggered, those are jobs running just from my pull request; then the submit queue needs to do a final run of everything before it merges, and that's what happens as a batch? Or does everything get pushed through the batch thing? I'm just trying to figure out which of the boards I should look at.
A: So, if you look at the top of the queue, right, it's currently running this guy, and it's also running a batch of a slightly different set of PRs. But you can actually see that the merge bot talks to Prow: I guess 25 minutes ago the submit queue was free and this was at the top of the queue, so it validates it, like, hey, maybe something weird happened between when I tested this three days ago and now.
A: So it leaves a comment on the PR, which results in Prow retesting the PR. Then the submit queue, once it notices, since it's aware of what commit was tested, says: okay, great.
A: This PR has passed all of the contexts at head, so I'm going to merge it. The batch behavior is slightly different; I'm a little bit foggy on it, but I think there is something else in the merge bot that makes a request directly to Prow.
A: Let's see, I don't know if there's anything else... let's see. So, the PR dashboard. The PR dashboard is pretty fancy; hopefully everybody is somewhat aware of it by now.
A: One thing that you might not be as aware of is that if you just go to /pr, you can log in with your GitHub credentials, and then it'll know who you are, and that enables extra behavior, like if I am not really interested in a PR right now but I don't want to leave a comment on it.
A: It thinks a PR needs my attention because, basically, my reviewer gave me some comments a few minutes ago, and so if I then go reply to them, then once Gubernator gets the event it should take the PR off my needs-attention list; it should now need attention from my reviewer instead, and it will show up on his dashboard. There have been some recent improvements there too: it's now aware of things that I can approve, and so on.
A: So that's pretty cool; I think it's super helpful. If you're not using it and you have a bunch of PRs, I would encourage you to try it out.
A: Yeah, and that code lives in Gubernator. Let's see, so, configuring it: in the prow directory there's a config, and it has three types of jobs. We have presubmits, we have postsubmits, and we have periodic jobs. Periodic jobs are essentially things that run on a timed basis; most of our CI jobs, like our upgrade jobs, just run every hour or every day or however we configure them to run.
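A periodic entry in that config might look like this; a minimal sketch with illustrative names, where the interval field is an assumption about the config format:

```yaml
periodics:
- name: ci-example-upgrade
  interval: 1h   # started on a timer (by horologium), not by any GitHub event
```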
A: Here is what the configuration looks like. There are actually multiple orgs and multiple repositories that Prow is managing right now, and you just sort of list them: here it is saying, on repo1 I want to run the foo job and the other job, and then on another repo I want to run this other thing. So you can configure it to do whatever you want.
A: The rerun command is what sets the comment that will trigger the job... actually, sorry, the trigger is the regex that matches: if I say "@k8s-bot test this" or "@k8s-bot foo test this", this will cause the foo job to run. Then we also provide a concrete example of the command, which is what appears when the job fails: it'll give you a comment that you can leave to request the job to run again.
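Putting those pieces together, a presubmit entry might look like the sketch below. The field names are assumptions in the style of the prow config, but trigger and rerun_command behave exactly as just described:

```yaml
presubmits:
  org1/repo1:
  - name: foo-job
    context: foo-job                         # the status line reported on the PR
    trigger: "@k8s-bot (foo )?test this"     # regex of comments that start the job
    rerun_command: "@k8s-bot foo test this"  # the concrete comment suggested on failure
```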
A: Postsubmits are similar, except that they don't have the rerun command, because they just happen whenever a merge happens on a particular branch. By default a postsubmit will run on any branch, but if your job is only relevant for the master branch, you can configure it to just run on master.
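And a postsubmit restricted to master might look like this; again a sketch, where the branches field is an assumption:

```yaml
postsubmits:
  org1/repo1:
  - name: foo-postsubmit
    branches:
    - master   # only run when a merge lands on master; no rerun_command here
```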
A: Then there's what actually happens inside of the actual pod. Over time we want to make it so the pod largely doesn't have to worry about things: we'd love it if you could just assume that your pod starts, we mount the repository you want checked out at a particular location, and any test artifacts you want to leave, you put in a particular location. Those are the sorts of improvements we're planning for the future.
A: But right now, the image that your container runs needs to be aware of all of that, and so bootstrap is our library that handles it. Bootstrap's job is essentially to check out the repo, call the actual testing, and then upload the results to GCS. Within that we have a couple of different scenarios: for example, a lot of our tests are variations of e2e suites, and so we have a Kubernetes e2e scenario that we pass different flags to.
A: That's what distinguishes, say, the kops e2e from the GCE one, and the GCE serial from the GCE slow jobs. The other thing is, it used to be that there were a ton of different commands running in a bunch of different Jenkins plugins; the mode that we want now is that you clone the test-infra repo and then you call bootstrap.
A: The bootstrap Python binary sort of defines the interface into our testing, and all we want bootstrap to do is the logging; the scenario is what actually handles calling kubetest, and then kubetest is responsible, in the e2e scenario, for doing all of the e2e things.
A: So the goal is that you can mimic what any test job does by sending a command to kubetest, which, if you're familiar with "go run hack/e2e.go", is essentially what that is calling. So you just call kubetest, passing flags to build and stage and so on. That's what we do for our e2es: we put all of that into an image, and then we put that image into our pod spec.
A: Yes, so literally everything that our testing does is some variation of these two commands: clone test-infra, then call bootstrap, saying what job you want to run, what repo you want to check out, and where you want to upload the results. This has been a lot nicer than trying to keep all of our Jenkins plugins up to date.
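Spelled out, those two commands look roughly like this. It's a sketch: the bootstrap flag names are assumptions matching the "what job, what repo, where to upload" description above:

```sh
git clone https://github.com/kubernetes/test-infra
./test-infra/jenkins/bootstrap.py \
  --job=ci-example-e2e \              # which configured job to run
  --repo=k8s.io/kubernetes \          # which repo to check out
  --upload=gs://example-bucket/logs   # GCS location for logs and results
```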
A: At least, our experience has been that if you have a non-trivial set of Jenkins plugins, it is challenging to find a version of Jenkins that is compatible with all of the Jenkins plugins you have, or at least a working version of all of them at the same time. So, like I said before, bootstrap's responsibilities are to log the test output and check out the repo; it also handles timing out, and then it uploads stuff to GCS.
A: The scenario is mostly just to make sure that bootstrap, rather than having to call 50 different commands, can call a single command.
A: Ideally we'd like to get to where everything is just a single command, like the kubetest command, but in reality there are some complications due to historical reasons. For example, what we really do is set a bunch of different environment variables, and so we have the Kubernetes e2e scenario that handles all of that, and then kubetest is what actually does everything once the environment variables are set up correctly.
A: So here's an example command: you have build equals bazel, stage equals a GCS foo bucket, extract equals local, up, test, down, and dump artifacts. That is saying that for this e2e run we want to build Kubernetes using Bazel; we then want to upload this release to the foo bucket via the stage flag; and we then want to download and extract the version. Maybe a more interesting example of extract would be something like v1.6, or the latest.
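As a single invocation, that example reads roughly as follows; the flags are the ones named in the talk, though the exact syntax here is a sketch:

```sh
kubetest \
  --build=bazel \     # build kubernetes with bazel
  --stage=gs://foo \  # upload the built release to the foo bucket
  --extract=local \   # choose the version to test (could instead be v1.6, or latest)
  --up \              # bring up a cluster
  --test \            # run the e2e tests
  --down \            # tear the cluster down
  --dump=artifacts    # dump logs and artifacts for debugging
```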
A: Yep, so I actually don't know if there's a super easy way to do that. Let's see... so, I guess maybe this can be a plug for some other interesting tools that we have. Hopefully people are familiar with testgrid, which has all of our test results. If you want to see what happened with the 1.6-blocking jobs, you can go here; each tab is essentially a kubetest invocation that runs continuously on different commits. So this is showing what ran at 6 or 7 a.m...
A: Since there can be a bunch of tabs, it can be time-consuming to go through all of them, so each dashboard has a summary tab, which gives some statistics about how frequently each job fails. It looks like the build job has not failed at all this week, whereas the GCE job failed 25 times out of 14,000 runs, and the kubeadm kubernetes-anywhere suite looks like it's failed twice. Even more interesting, these specific tests are highlighted because they are consistently broken, as opposed to just flaking.
A: So if I go here, I can see that there's something wrong with this test, for who knows what reason, and it looks like in the reboot suite these reboot tests are failing too. Something that we have been investing in, which I would really like to see more of, is giving each SIG their own dashboard; I think this could help with triaging different bugs. And one thing that testgrid does, which is nice:
A: If we compare the regular GCE suite with the SIG CLI GCE suite, you can see that the CLI team has said: we're only interested in tests that have the kubectl client name in them; we don't care about the ConfigMap or the Deployment etc. tests.
A: You can do that filtering yourself if you click on the options, but this way it doesn't get excluded from coming up in the summary. Also, if you have new members on your SIG, they might not be familiar with the weird, complicated regex that you need to filter results, and for some people, some jobs might simply not be relevant to their SIG.
A: Each test group is essentially a GCS prefix where a bunch of test results are stored, and so here we're saying: use these test results, but filter down to only things that contain kubectl. This might be useful for other SIGs, to set up their own dashboards that give them a quick way to check the summary and go: hey, life is super happy in my world.
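A per-SIG dashboard like the CLI one could be declared with something like the sketch below; the field names are assumptions based on the testgrid config format, and the regex option mirrors the kubectl filtering just described:

```yaml
test_groups:
- name: ci-kubernetes-e2e-gce
  gcs_prefix: example-bucket/logs/ci-kubernetes-e2e-gce  # where this job's results live
dashboards:
- name: sig-cli
  dashboard_tab:
  - name: gce
    test_group_name: ci-kubernetes-e2e-gce
    base_options: include-filter-by-regex=kubectl  # only surface kubectl client tests
```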
B: A quick clarification on testgrid. First, there's a question from the chat about whether there's a good default dashboard that we should be pointing people at, instead of showing them 20 different red/green grids; could we just drop them straight onto a dashboard that they'd find useful?
B: Do you think the default kind of matters? Like, should I care about 1.7 right now, since we're in the 1.7 burndown? Should I really care about everything that happens with master? Am I coming here to troubleshoot what's going on with kops specifically, etc., etc.? So I think, Morgan, that'd be a great issue to file. The other comment I had was that I think the percentages you were showing on the summary tab are percentages of test cases, not job runs, specifically.
B: Yeah, well, I'm still going through a backlog of notifications, but if there's not an issue for that, I'll file one, just to clarify, if nothing else, the language: you know, "92 of 12,691". What would be really helpful for me, because the percentages are really low, is just understanding what the percentages are of, right?
A: No, I mean, I think that's basically it. Oh, there's Velodrome. Let's see, so one other thing is, if you go to GitHub, all this stuff should be in the readme. There are links to testgrid, the PR dashboard, Prow, etc. I guess one thing that's not on here, but should be, is Velodrome, which shows some statistics. This is built on top of Grafana and InfluxDB and Prometheus.
A: One fun thing is, I guess it'll show if we're running out of tokens for the merge bot; that is a problem that slows down the merge bot, which we would like to potentially fix. I think this one hasn't been working for very long, yeah. So this will show, for a given week, how many different pull request authors there were, and so on.
A: And we're working on metrics: we've been running some metrics to calculate how consistent jobs are. You can see that we sort of headed south in the past month. We're now calculating all these metrics automatically, and Cole is actually working on putting this into Velodrome.
B: I'm still not quite at the point where I can repro it on my own machine just yet, but it seems like bootstrap is the place to start either way. So if I want to understand exactly how Prow does it, I should be looking at bootstrap? I just want to get something up and running, with the cluster I have, from my own machine, that can maybe send the right set of flags to kubetest.
C: That was awesome, and I really liked it. It would be great if we had some kind of a tutorial, sort of like: hey, I've got a bug and I had a failure; what's the workflow to go through to diagnose it, to figure out what tests are failing and how to understand what's going on? For new Prow contributors that would be really super. Or something like that, what he just said.
A: Yeah, I think part of the problem is that historically everything has kind of been bespoke, right? That's sort of why we're trying to get to where every job calls "clone test-infra, call bootstrap", because then we can start giving best practices. But I think that's actually a really good idea, and we can work on something like that. Some of the challenge:
A: The e2e jobs are going to have a different debugging strategy than the unit test failures, and then we have the test-integration and test-cmd jobs, which don't even produce JUnit output right now, and so they don't show anything on testgrid. Over time, if we could get to where all jobs give the same kind of output, that would assist with making it easier. But yeah, it can be done.
B: That could be a great walkthrough for a future SIG Testing meeting, if somebody wants to try giving that a shot. This is part of the reason I'm trying to pay as much attention as I am to the latest release, even though I'm not a member of the release team: I would like to go through as much of the triaging and debugging process as my time allows, so that I can help with that sort of thing.
B: The only way we can improve our tests at the rate we'd like is if we actually grow the pool of people who know how to troubleshoot test failures, and at the moment we're caught in this chicken-and-egg problem, where the people who are really busy improving the test infrastructure are kind of also the only people who have insight into what the test infrastructure does. So I would say Gubernator is a really, really awesome place to start.
B: There are so many links to so many of the other things there. And then I think between this presentation, which I cannot promise I will have on YouTube today, but I will get posted as soon as I can, and the readme in the test-infra repo, most of the stuff is decently up to date; it's significantly more up to date than it used to be, as far as I know. And if there are places where that's not true, please come bug us in Slack, and we'll do what we can to resolve it.