From YouTube: Flake Finder Fridays #001
Description
Dan Mangum and Rob Kielty are back for the second episode of Flake Finder Fridays. In this episode they will walk through how to run Kubernetes e2e tests locally, as well as how they are packaged and run in CI environments.
Dan: All right, well, hello everyone, and welcome to another edition of Flake Finder Fridays. My name is Dan Mangum and I'm here with Rob Kielty. We are Kubernetes contributors; I'm a tech lead for SIG Release, and we have both served on the CI Signal team. The purpose of this show is for us to walk through a little bit of how Kubernetes testing is set up — particularly how it's run in CI environments, and how Testgrid and all the different tools work together.

Dan: But today we're going to be looking specifically at reproducing flakes and failures on your local machine. Rob is going to run most of it and look into a specific flake that we saw recently. We're not actually going to go into addressing it in depth, but we are going to talk about how it could be reproduced locally, and Rob is going to do that, because he's now a local end-to-end test wizard, in my estimation. So I'm going to pass it off to you, Rob, and let you run the show today.
Rob: Sure, thanks for that big introduction — I really appreciate it. The wizardry is only recently acquired, so do bear that in mind. I'm just going to share my screen; I'm going to have to get rid of that and pull that down there. Desktop one is not what we want, it's desktop two... I think... no, it's desktop one.

Rob: Excellent, yeah — not much to be reading there. So let's have a look. Basically, what I want to do is trace through the path of looking at a flake in Testgrid: drilling down from seeing it in Testgrid, to getting through to the code, to running it and trying to reproduce the flake. Let me just see here now; let's go to the start of my notes.
Rob: Okay, so essentially we're going to pick at a particular issue, and it's "Volume teardown and container start can race while a pod is being deleted and report an error." There's an issue here that was reported just before Christmas. It's not a very severe issue; it's kind of like the edge of a corner of an edge case. I'll just click on this link here and we'll have a quick perusal of the issue.

Rob: I probably already have it there on the top, but there's no harm in loading it live. So this issue here is: the test is checking that you can stop and start a container repeatedly, and there's an issue where, in the process of starting and stopping, a secret volume is not mounted, and when a second goroutine is trying to bring up the container, an expected volume mount isn't present, and that causes an error.
Rob: As far as errors go — the errors that we're interested in on CI Signal — this is what's referred to as a flaky issue: for the most part it works, and then intermittently it fails. I think I have what that looks like here. This is the end-to-end job that we're looking at — gce-ubuntu-master-default — and we can see that, for the most part, this test runs successfully and passes, but every now and again it fails. What I want to do here is just go from this line item in Testgrid, which I've filtered down to just cover this test.

Rob: As we were saying in tech rehearsal, this issue here is a big long string: it's in the Kubernetes e2e suite, it's part of [sig-node] Pods Extended, then we have Pod Container Status, and the assertion is "should never report success for a pending container."

Rob: So the first thing I want to do is just go from here and find this in code. If I hop onto my dev box here — Ctrl-b to get onto my Emacs — this is where we are in code. So if I just do a slash and search for "===" ...
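If you're hunting for a test like this yourself, the simplest starting point is to grep the repo for the string you see in Testgrid — for example (run from the root of a kubernetes/kubernetes checkout):

    grep -rn "should never report success for a pending container" test/e2e/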
Rob: The one thing I just want to point out here is that we are using a framework called Ginkgo, and that is what drives our end-to-end tests. What we have here is a ginkgo.Describe, and — it's difficult for you to read this, but — the Describe blocks allow us to organize our specs, the test specifications. And then there's the ginkgo.It — or, in this case, FIt; I'll explain that in a second.

Rob: So this is the assertion that we're making in this test, and we can see here that we have the ginkgo.It — "should never report success for a pending container" — and this is the end of the string that we see in Testgrid. So "should never report success for a pending container": this is the actual test that we're looking at.
Dan: Now, just to jump in for a second: could you also mention where in the Kubernetes source tree this file lives?
Rob: Sure, absolutely, yeah. So if I just do a C-x C-f here, I'll just open this directory.

Rob: If we look at this, the directory that I'm in is my checked-out fork of the kubernetes repo. That gets checked out into my home directory, and then we have go/src/k8s.io/kubernetes, and then in the test folder we have an e2e folder, and then we have node. Now, there are a couple of traps here — don't get too caught up in the weeds — but if we go up into test, you can see that, I think...

Rob: If we go into test, we can see we have e2e, and we also have e2e_node, and that was something that kind of tripped me up when I was going through this earlier in the week. There are other tests in here, but I know — from hunting around and looking for that string in my code, based on my fork — that our test is actually in here, underneath the pods directory.
Dan: The thing I wanted you to point out there is the e2e directory, right — and we'll make this distinction a little bit later on as well — that is for Kubernetes end-to-end tests, and this is a Kubernetes end-to-end test that is owned by SIG Node. There are also node end-to-end tests, which are pretty much exclusively owned by SIG Node — I believe there might be some storage ones in there — but those are for running specifically against a Kubernetes node.
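Roughly, the layout being described looks like this (a simplified sketch of the kubernetes/kubernetes tree):

    test/e2e/        # cluster end-to-end tests, run against a full cluster
    test/e2e/node/   # cluster e2e tests owned by SIG Node (our test lives here, under pods)
    test/e2e_node/   # node e2e tests, run against an individual node/kubelet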
Rob: And so you do have to hunt around to find tests that are referenced in Testgrid. Of course, we always have to remember that if you get stuck during the hunt, Slack is always there, and you can always ask questions in either the SIG Node room or the SIG Testing room; you should get some help there. I'm just noticing that I'm getting a lot of noise at the bottom.

Rob: I'm just going to get rid of that noise — Space Tab there for the last buffer, and C-x p.

Rob: The editor that I'm using here is Emacs — a particular distribution of Emacs called Spacemacs — and I have a tool here called kubernetes-overview that allows me, or should allow me, to visit a Kubernetes cluster and look at pods and nodes and logs, but there's a bit of configuration to do on that. So I'm just going to kill that to get rid of that noise; I'll kill that as well... and that's it... and that too. Okay.

Rob: So this is back to the test. Okay, so this is the code that I wanted to run this individual test on. I added that log — you can see it, possibly, in the git gutter there, that plus sign. This is a log line that I added just to make a change to the test, and I'll just hop over to how I bring up the local cluster.

Rob: So, Ctrl-b z on this. Here you can see that I'm in go/src/k8s.io/kubernetes, and I'm in a detached HEAD state at the moment. In order to bring this up — let me just zoom in there — in order to run up a local cluster, this is the command that I need to run.

Rob: I need to run it as root in order to get access to everything that I need to get access to — I think it's specifically a socket — and the other gotcha is that I need to introduce my PATH into the path of the command run by sudo. That's why I have PATH=$PATH assigned there; for some reason sudo -E doesn't work, but hack/local-up-cluster.sh will bring up a local cluster. And I suppose the thing to note here is that the NSA knows my root password now — and that I need to run this command from the root of my Kubernetes checkout. It takes a few moments for it to come up, and when it does, we'll get a bunch of instructions that we need to take heed of in order to run the tests. So this shouldn't take long.
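Put together, what Rob runs here is essentially the following (a sketch, assuming the standard GOPATH layout he showed; root is needed for, among other things, the container runtime socket):

    cd ~/go/src/k8s.io/kubernetes     # must run from the repo root
    sudo PATH=$PATH hack/local-up-cluster.sh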
Dan: [inaudible]
Rob: It did, yeah — I think it's a bit of a delay, yeah. It looks good; getting those bits across the Atlantic now. So, this info at the end: you leave this running, and then in another pane you can begin to make use of this cluster to run end-to-end tests. There are a couple of instructions here. The first thing is that it tells you where all the log files are — those are there.

Rob: Let me just say: to interact with the cluster that we've brought up, we'll need to export KUBECONFIG to point to /var/run/kubernetes/admin.kubeconfig. I pretty much have most of that already set up, and I'm just going to go to my pane where I've been running the tests — you can see previous runs there — so I'm just going to Ctrl-l there to get rid of all that noise.
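In other words, in the test pane (the kubeconfig path is the one local-up-cluster.sh prints; it can vary):

    export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
    kubectl get pods    # sanity check against the local cluster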
Rob: Let me just see if this will run: kubetest --up --test, and --test_args. The --test_args flag there passes parameters into the Ginkgo test runner — kubetest will ultimately call the Ginkgo CLI to run the tests that we're interested in — and what I want to do here is instruct Ginkgo to focus on the following string, which is actually interpreted as a regular expression. Because this is the test we're interested in — "never report success for a pending container" — it should only run that test. So I'm just going to pull the trigger on that, and we can see this run; this is indeed the test running.
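A sketch of that invocation (flag names as kubetest spells them; the exact quoting on Rob's machine may differ):

    kubetest --up --test \
      --test_args="--ginkgo.focus=should never report success for a pending container"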
Dan: Is the --up argument — and we can look at the docs in a minute as well — is it required, since you already have the cluster running? Does it recognize that there's already a cluster there?
Rob: We could Ctrl-C this and take it out to see if it still runs, because I think it might. But you can see that this test is up and running. I'm just going to Ctrl-s — that's just something to pause here, but that may or may not work... no, that didn't work either — but that double-equals there, "test change"...

Rob: That's my log line there, the one I introduced just to make sure that I could do that iteration of changing the test and seeing my changes. So this is the test running: it's stopping and starting pods repeatedly and checking the status. The sad part of this is that today we're not going to fix it.

Rob: It really is the corner of an edge of a corner case, which I think is why we're officially ignoring it for the moment. But that's the test running. If I go over here and run a "get pods", we can see the test in action with respect to bringing up and taking down pods: there's a set of terminating and pending, starting pods, and this is where the race occurs — between the stopping and starting of pods. And that's the test in operation.
Dan: ...time, and then executing some things in parallel, and sometimes there can be a race with that. One of the things that might be interesting to point out is in that test body there: the way that this is not flaking right now — and this is generally not a way to handle a flaking test, but, as Rob mentioned, we are aware of what's happening here — is that the regex up there is actually ignoring the specific error that was causing the flake, because this...
Rob: But I will say, this is so beautifully done by Jordan, because the variable name calls out the bug number. The bug in question — if you go to the kubernetes issues — it is indeed 8766. So in terms of leading the — you know, Hansel-and-Gretel-ing breadcrumbs behind him — he's done a nice job here. If we were to go down here — I've effectively, in my local checkout... let me just search...

Rob: Yeah — so on line 367 there, the code: t is the status. If we look at the definition of t, just a few lines above — at the top of that buffer there — t is assigned status.State.Terminated, and here, in this case, we are checking the exit code. This is where we get into the meat and two veg of this test. And the upshot, I suppose, is that if the message matches that regex, Jordan notes it on that line there — 369: "pod on node failed with symptoms of", and then the URL to the actual issue itself. You can't say fairer than that in terms of being self-documenting. And then what I did, just to force this, was, instead of just logging it, I returned an fmt.Errorf, and that forced the error. I suppose, to wrap that up, there's...

Rob: One final thing here that I want to point out and talk about, because I only learned it yesterday: it's possible to instruct Ginkgo to continuously run a test until it fails. One of the insidious and annoying things about a flaky test is the fact that it mostly passes, and when you're trying to chase something down, you want it to fail. So it's possible to continuously run a specific test, or suite of tests, until such time as it fails, and I'll just have a quick look here and speak to that.

Rob: So, going back to where I've run this, we can see those tests: we ran three tests, and they all passed. If I do this export here — I'll just bang that up to there — and export GINKGO_UNTIL_IT_FAILS, assigned true. Yeah, that works — sometimes fish gives out about export, and sometimes it goes "yeah, I'll take it." But if I run this now — and let's just drop the --up and see if that works... yeah, it does.
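That is, roughly (GINKGO_UNTIL_IT_FAILS is an environment variable read by the kubernetes hack scripts; shown in POSIX shell rather than Rob's fish syntax):

    export GINKGO_UNTIL_IT_FAILS=true
    kubetest --test \
      --test_args="--ginkgo.focus=should never report success for a pending container"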
Dan: Because it is just, kind of by definition, a flake, right? So you're not going to see it.
Rob: Absolutely, yeah. And here, I presume, is where the trickiness is: people have looked at this — I had a crack at fixing it — but there is that tree. I can almost visualize it as a matrix of threads of execution that are interacting with other components, and you can see how they get a bit confused at times, you know.
Dan: Yeah, if you look at that issue, it's actually pretty helpful — it's just an interesting thing to look at in terms of how the kubelet works in the container manager. So I'd definitely encourage folks to take a look at that. And then, you know, one of the things about functionality like this being tested at the end-to-end level, rather than the node end-to-end level, is that we're operating at the Kubernetes API level. Like Rob was showing just a moment ago, those pods being created and deleted — underneath, that has to do with the containers and the mounts, which might be something that we test more at the node e2e level. But we still see this behavior manifested a level up, when we're interacting with the Kubernetes API, which, obviously, is what users of Kubernetes are experiencing as well.
Rob: Cool — I think that's me on this one, awesome, unless you've...
Dan: Well, I know that we have a few folks in the chat at this point, including Ricardo and Adolfo, so thanks for joining, folks. If you all have any questions, especially on what Rob was going through there — while I take over and look at CI a little bit — definitely feel free to drop them in there or in Slack, and Rob can monitor that. But I am going to go ahead and steal the screen from Rob.
Dan: And Rob, can you just verify for me that you can see the HackMD I have up here?

Dan: All right. As I mentioned at the beginning, per usual, we will share this document and all the links that are in it — there are copious links in here at this point — and this section will be populated with some more of Rob's.
Rob: There's one shout-out that I'd like to give, which is that on the HackMD we've linked to an export of that org file that I worked through. And the big shout-out that I want to give is that if you're coming to this as a new contributor or a new test maintainer, the community repo — and we're driving to it there now — is great; sig-testing is the folder where I am working from to get through all of this, and it's the e2e-tests doc.

Rob: So this is end-user documentation for running an e2e test — and by "end user" I mean test maintainer. You may be tempted to go — and you can absolutely go, if you want a deep dive on this — to the source for kubetest, and you can look at the documentation around that, but that's geared more towards maintainers of the tools, whereas this documentation here is for end users of those tools.
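For reference, the doc Rob is describing lives in the kubernetes/community repo (path as of the time of recording; it may have moved since):

    contributors/devel/sig-testing/e2e-tests.md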
Dan: Yeah, absolutely, that's a great point. And there's lots of documentation around kubetest here, which — you know, kubetest is really just kind of a friendly interface for interacting with some of the scripts that are in the kubernetes/kubernetes repo. So definitely take a look at this; like I said, it is linked here at the "run Kubernetes e2e tests" doc. There's also a separate one for kubetest2, which we're going to talk about in a few moments. And then, as we noted, there are separate end-to-end tests and node end-to-end tests; if you're more interested in the node e2e tests, in the same-level directory, under the sig-node SIG here, we have some documentation on node e2e tests and how those can be run. Once again, those are testing at the individual node, kubelet level. All right.
Dan: So — Rob demonstrated how you can potentially reproduce a flake locally, and I want to look at the same test we were looking at there, but instead look at how it's running in CI. Because even if you can reproduce something locally, there may be certain constraints on doing that, or it may just be that the environment is literally different, right?

Dan: It's not going to look exactly the same as the CI environment, so it's useful to understand the differences between the two, because that may inform whether you can reproduce a flake locally, or whether there's a reason why it's only happening in CI. All right, so I'm going to hop over to Spyglass here, which we talked about on our first episode — you can go back and take a look at that if you like. This is an instance of a test failure; you can get here from Testgrid, just like Rob had up moments ago, by clicking on one of the little red boxes for this job.
Dan: You can see the job history up at the top, the Prow job YAML, artifacts — which is basically all of the different artifacts that can come from a job run — and then Testgrid, which will link you back to the view that Rob was looking at earlier. You'll see this is actually on a different dashboard.

Dan: Jobs can exist on multiple dashboards, so this one defaulted to going to the google-gce one; this is also on sig-release-master-informing, and you could access the same exact view from either one of them. Okay — so let's look more into what the actual environment we're running this in is, and how that setup differs from running in a local environment.
Dan: Well, the first thing to look at is definitely the Prow job YAML. This is the configuration for this job. All job configuration lives in the test-infra repo, but you can get a quick link to it from Spyglass for the specific job that you're looking at. Just to show where this actually lives in test-infra —

Dan: — excuse me — here is a link to the job. You'll see it's in test-infra, under config/jobs/kubernetes/sig-cloud-provider/gcp. You'll see that the name matches, and you'll also see that we have all of those different dashboards that we want this to show up on, reflected in the testgrid dashboards annotation.

Dan: We also have the testgrid tab name. So if we were looking back at Testgrid again, you'll notice that it doesn't say the full ci-kubernetes-e2e-... job name; it actually has a friendlier name. And then there's something I'd really like to call out here as a wonderful thing the config maintainers did: they gave it a description.
Dan: This is super helpful. If you're familiar with some of the test infrastructure already, you may be able to understand this just by looking at the image and the arguments that are being passed and that sort of thing, but it's always helpful to look at the description and say, "Oh, okay, I see: we're using kubetest to run end-to-end tests here." It even calls out the specific script that gets invoked to actually run that, and you'll see that Rob was using a similar script earlier when he was running it locally.

Dan: What does this job having an image mean? We're running in a CI environment, which is a Prow job cluster — and we'll probably need to have a whole show about Prow at some point — but since we're running in a Prow job cluster, we have to actually have a container image to run, right, because Prow itself runs on Kubernetes. This specific job is using kubekins-e2e, and you'll see that used in a lot of different places. So here we see it's from the k8s test images...
Dan: The image is kubekins-e2e, and we see this version tag here that looks like it incorporates the date, as well as potentially the branch it's built from and a snippet of the digest. So where is that coming from? Well, let's see if we can find it.

Dan: kubekins-e2e also lives in the test-infra repo — and we'll see in a minute what the implications of all these things living together look like — but it is just an OCI image that's built from a Dockerfile by CI when changes are made. You can go through here and look at exactly what's being added and that sort of thing. So let's see some of the things: it looks like we have CFSSL...
Dan: ...we are replacing kubectl, downloading a Go version — and some of these can be overridden by the arguments that are passed at build time. Importantly, build time is not the same as what we're seeing here at runtime; these are arguments that are passed at build time. The other important thing — besides the fact that this adds things like the kubetest binary itself, as well as some things from the kubernetes repo — is that this is built on a base image.

Dan: So, going back to our helpful guide here, let's take a look at the bootstrap image. Once again, this is only one directory up and then over to bootstrap. This is a separate image that serves as the base image for kubekins, as well as for a number of other images in this repo. Let's take a peek at what's going on in here. As you might guess from an image that's called bootstrap, it does a bunch of the setup and installation of common packages that we may want across the different images that are built on top of it. So you'll see things like installing Python, common utilities, the Google Cloud SDK, curl — common things here. These are just different utilities that we're going to want to use across these different images.
Dan: It also, importantly, adds things like entrypoint and runner, which we'll take a look at in a moment, as well as this scenarios directory — and scenarios are going to become very important here when we talk about the difference between kubetest and kubetest2.

Dan: But I want to look at entrypoint and runner. Specifically, entrypoint: this is set as the entrypoint in the base image, and we're not overriding that in the actual image built on top of it, so it'll serve as the entrypoint there as well. You'll see — if we look back at... let's see if I can find it... here we go — we're not actually overriding the command, right, that the container is executing.
Dan: ...if you will. And if you know more about OCI images, you might understand a little bit more of what's happening there, but you can just imagine that it's being consumed by the next layer. So in this entrypoint we have things like fetching the test-infra repo, then running runner.sh — which is also in this directory and gets added in the Dockerfile — and then executing jenkins/bootstrap.py. bootstrap.py is a fun script that you will see lots of.

Dan: It is taking its sweet time... there we go. You will frequently see: "bootstrap.py is deprecated; test-infra oncall does not support any job still using bootstrap.py. Please migrate your job to podutils." You will see this all over the place, and please do not use bootstrap.py if you can help it — but there are a lot of jobs that currently do use it, and there are a lot of folks working on moving off of it as well. Currently it is a script that's being used, so it's important to understand what's happening there.
Dan: It'll also inform, you know, what the transition means and that sort of thing. I also want to take a quick moment — it looks like there are some questions in the chat here. Let's see: Ricardo asks about kubetest and kubetest2, and — okay, it looks like we've already indicated that we'll answer that in a moment. Rob, if you want to take a peek at some of the other ones in there as well, feel free to address those. Yeah, cool.

Dan: All right, now we've gotten to the point where we know what the entry point for this job is, and so now we're going to go down what I like to call the long winding path, because you basically go through a bunch of different things that are set up to create a certain environment — to essentially do what Rob has done on his local machine.
Dan: So the first step, as we saw, is bootstrap.py. Once again, bootstrap.py lives in the test-infra repo; it's under the jenkins directory for legacy reasons — which is also a reason why we are no longer going to be using it. There's quite a lot in here, but the important thing that's going to happen is that it's going to choose from a set of scenarios.

Dan: If you remember, in the bootstrap image we added that scenarios directory, and if we look in the scenarios directory, we have things like kubernetes verify, kubernetes e2e, kubernetes bazel, and a couple of other things. You'll also see a very large deprecation notice here suggesting that you do not use it. But basically these are different ways to run a scenario of testing Kubernetes.

Dan: So if we actually look in bootstrap.py, it's going to choose one of these scenarios based on the arguments that are passed to it. Here you'll see that when we get to the job script, we're going to pass in the scenario — choosing from the scenarios directory which one it is — then build that command, and we actually run that command. But how does it know that we want to use end-to-end tests here?
Dan: Well, if we take a look back once again at the Prow job YAML, we are passing these arguments here to bootstrap.py. --timeout, obviously, is kind of self-explanatory — I believe that would be 70 minutes. I'm not actually sure what --bare means, but we can look at the arguments. And then we're saying we want this kubernetes_e2e scenario here, which means that's eventually going to get passed to the scenarios — so we'll run scenarios/kubernetes_e2e.py. All right, let me close a couple of these windows here, and let's head over to see what's happening in kubernetes_e2e.py.
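Putting the layers so far in one place, the call chain looks roughly like this (a sketch, not the literal command lines):

    # inside the Prow pod, from the bootstrap base image:
    entrypoint -> runner.sh -> jenkins/bootstrap.py
    # bootstrap.py runs the scenario named in the job's args:
    scenarios/kubernetes_e2e.py <job args>
    # which sets up the environment and finally invokes:
    kubetest --up --test --test_args=... <provider flags>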
Dan: Also, you can just look at the spoiler here and see that it's setting up an environment to eventually invoke kubetest — which, once again, is the same thing that Rob was running locally on his machine. So basically, in kubernetes_e2e.py we have a number of arguments that can be passed — you can take a look at all of them here — and you'll see that lots of these match the arguments that we passed here.

Dan: So all of these are getting passed along to the scenario, and this is informing us that we want to use GCP to set up this cluster — once again, this is a GCP Ubuntu test. You'll see these different parameters that get passed through, and if we look in the scenario you can see things like different GCP args; let's actually search and see if we can find some of them.

Dan: gcp-service-account — some of these are also going to get passed to kubetest itself. Let's see if we actually have kubetest mentioned here... yep, so we are going to pass a number of these to kubetest, and I believe that, yeah, some of these all go to kubetest. So this --test_args here — this is similar to what Rob was passing when he was running a specific test.
Dan: Importantly — I believe last show we talked a little bit about version markers, and we could probably have an entire show on that — this is saying which version of Kubernetes we actually want to test and where we want to get it from. This is essentially using the latest fast build, which basically means it's built for a single architecture, in this case amd64 Linux. So this is going to go to a directory, and if you want to see exactly where that's happening, remember: kubernetes_e2e.py is going to call kubetest.

Dan: So let's finally get over into kubetest. You'll see that for the different kinds of plugins you can run — for, you know, having a backing cluster to execute tests against — there are different directories for that.

Dan: We're going to download that version of Kubernetes and use it to bootstrap our cluster. So, whereas Rob was in that detached HEAD state of his Kubernetes checkout, we're going to use an actual published build. If you're interested in where that's coming from, you can take a look at the build-master and build-master-fast jobs on master-blocking here, which are jobs that we covered last episode, when we were talking about how we do some of our buildx stuff with Docker. All right. So now that we're at kubetest, we're going to eventually get to running this end-to-end test binary using Ginkgo. Once again, Rob said that we use the Ginkgo CLI to invoke these tests.
Dan: The node tests do a similar thing, where I believe the binary is called e2e_node.test. And when we get back to Rob in a moment, if we look at his output directory from the different builds and look at what's in there, we should see this actual e2e.test binary. I want to show exactly where this is happening, so going back to kubetest — I believe it's in main.go here — we're going to find... let's see... well, actually, let's just walk through the exact execution here.

Dan: So when this gets invoked, we're going to see that it parses some of the flags, and it's eventually going to do this new process control, and I believe this "complete" is what's going to actually execute it for us. Yeah — so it's going to go through things like acquiring Kubernetes, checking the version directory, and then, based on the different flags we passed it — like up, down, et cetera — it's going to do a variety of things.

Dan: Let me see if I can find the place where it actually invokes Ginkgo here. This is not the directory...
Dan: Let's see... e2e, ginkgo — this is likely where we're going to find it. Yep: ginkgo parallel and go flags — here we go. So this is where we're actually running the command hack/ginkgo-e2e.sh.

Dan: Once again, we cloned down the kubernetes repo, so we have access to these different scripts, and we've made our working directory there as well, so this should work for us. So if we look over at hack/ginkgo-e2e.sh...
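For orientation: hack/ginkgo-e2e.sh ends up running the Ginkgo CLI against the compiled e2e.test binary — schematically something like this (paths and flags simplified; a sketch rather than the literal command line):

    _output/bin/ginkgo --focus="<regex>" --skip="<regex>" \
      _output/bin/e2e.test -- --provider=gce <more e2e flags>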
Dan: So, basically, this is a whole path of passing arguments from one framework to the next, and as you can see, it's pretty complicated, right — there are a lot of different layers it goes through. One of the things that's helpful is actually going to the build logs, where we can kind of watch it transition from one to the other. So we're going to start off here with bootstrap.py.

Dan: If we continue to go down, you'll see that we're executing kubetest. So, once again, it went bootstrap.py, the kubernetes_e2e.py scenario, kubetest, and eventually — after we do things like downloading these different Kubernetes binaries — we potentially get to creating the Kubernetes cluster. And I believe we're going to get to it here: you can see all the different scripts that are being run, and all of these are in that hack directory. So the first part was the e2e up — kubetest --up actually translates to a script in kubernetes/kubernetes, and we're running that kubectl script — and once again, this looks very similar to what Rob executed earlier, or had previously executed.

Dan: We run the end-to-end status script, and we also eventually run this hack/ginkgo-e2e.sh, which is exactly what we saw here in this hack script. This is going to execute the Ginkgo CLI and run the same exact thing that Rob showed earlier on his local machine — but running it, obviously, in a GCE environment that we've bootstrapped. And after some time you'll begin to see some actual tests getting executed, based on the different things that we focused and skipped in Ginkgo.
Dan: So the last thing I'm going to talk about, before handing it back to Rob, is kubetest2. Everything is moving over to kubetest2, and, to be honest, I don't have a ton of familiarity with it, so we likely should have some other folks on to talk about it. But essentially, what it looks like to me is being able to have a more unified and modular tool, so that we're not passing arguments through this whole tree of things that are kind of loosely coupled.

Dan: You can actually tell what the goal is here just from looking at the directory structure of the repo. You can see that we have different kinds of backing clusters — an example here would be kind, or GKE, or GCE, which is what we're using in this case. And I believe someone in the chat asked about using kind to actually back your local cluster, yeah — and so...
Rob: A quick run-through of the script would say no, and I would imagine that it predates the existence of kind — I don't know that for a fact, though, so don't quote me on it.

Rob: But obviously here, if you look at the help output from local-up-cluster, you just see a page and a half of flags, and obviously, like you say, this is an attempt to modularize that, in terms of the back ends, as it were, that are providing the cluster.
Dan: Yeah, that's exactly right. And the way that the Kubernetes cluster is getting bootstrapped locally, on Rob's machine, is actually in that local-up-cluster script — so you can look at exactly what's happening there — but it's actually running the Kubernetes components directly, as opposed to having a unified thing like kind. Another thing I'll mention here for kubetest2 is that we have these different testers as well, so you can swap out Ginkgo for other components as well.
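For comparison with the kubetest invocation earlier, a kubetest2 run pairs a deployer with a tester — schematically (deployer and flag names as of the time of recording; a sketch, so double-check against the kubetest2 docs):

    kubetest2 gce --up --down --test=ginkgo -- \
      --focus-regex="should never report success for a pending container"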
Dan: And then you can also see that this is starting to happen — there are folks working on this as we speak. Here's an example of a GCE conformance job on sig-release-master-blocking, which basically means that it is critical, right — we would block a release if it was failing. And this was actually recently merged: switching over from the previous job, which was using kubetest, to the kubetest2 variant of it, making that the primary, and moving it onto sig-release-master-blocking. So definitely enhanced confidence in kubetest2 is coming, and it's exciting to see some of this switch-over happening.

Dan: So, yeah, I hope that was helpful in kind of going through some of the layers, and also just illustrating, potentially, that it is complicated — and we're trying to improve that, so it's not quite as complicated for folks — but hopefully going through it can give you an idea of how to troubleshoot when you see things. And I'll pass it back off to Rob to close out with anything.
Rob: Yeah, sure. I suppose one of the things to note is that work has been done on podutils to make it a complete replacement for bootstrap — and I think you might know this better than I do, Dan, but there was an issue around metrics, or logging, or something that podutils was lacking, and I think in the last couple of months that's been fixed. So the time to move away from using bootstrap.py is pretty much now — I think it's possible to do it now, whatever that missing feature was.
Dan: That's great to hear. Rob, was there anything else you wanted to show on your local machine, or anything else you wanted to say before we wrap up?
Rob: Yeah — no, I don't think so, other than to say that it's still running, trying to make it fail.

Rob: You know, so — in my notes, the big blurb that I put in is linked off the main HackMD, because I didn't want to pollute the HackMD with the mass of text; I think it would have taken it down, to be honest. So, yeah, I think that's pretty much it, unless there are any other questions that you have, or if you think there's anything...
Dan: No, I don't think so. It's definitely super helpful to see those executing locally, and I think, maybe next time — well, if there's a good candidate for an actual test that we show a fix to, that could be a good one — but we've both looked into a little bit of the node end-to-end tests as well.

Dan: So maybe we'll do kind of a sibling show to this one, talking about how node end-to-end tests get run. They're a little bit more mystical to a lot of folks, because they're a little bit more hidden from end users a lot of the time, so I think that would be a great one to look at. But this has been a super helpful show for me personally, and, I hope, for some of the folks that are watching, and some of the folks that will watch this in the future.
Rob: There is one thing that we missed, that we spoke about in our tech rehearsal, which is: what is kubekins, and where did it get its name? I asked Dan this, and Dan goes, "Well, I don't know," and he asked in SIG Testing. kubekins is a portmanteau — a mashing together — of Kubernetes and Jenkins. Jenkins used to be used for CI on Kubernetes back in the day, but not anymore.

Rob: So our CI job runner, like Dan was saying earlier, is Prow, and we don't use Jenkins, but there are artifacts from the time when we used to use Jenkins — and kubekins is, yeah, a portmanteau of Kubernetes and Jenkins.
Dan: Yep — and that's a little Kubernetes history for you all to finish up your week on. So maybe we'll make that a recurring segment; I like that. But thanks again to everyone for tuning in, and thanks to Jeff, as always, for running the stream. See you, folks!