From YouTube: Kubernetes SIG Testing 2017-08-29
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit#
A: Okay, hi everybody. Today is Tuesday, August 29th. You are being recorded to YouTube for this week's SIG Testing weekly meeting. Pretty light agenda this week: we're going to talk a little bit about how things are going on Prow, and the desire to move all the things away from Jenkins and mungegithub. This is basically just going to be rambling, off-the-cuff, and maybe Ryan or other folks who have some knowledge can chip in. And then some discussion about labels, and whatever else happens to come up organically. So first off, I liked the little comic that somebody posted in the SIG Testing Slack a little earlier, maybe it was yesterday: like, "only Prow, no mungegithub, only Prow." That was awesome.
A: That's, that's totally the spirit of this team right now. So, to recap briefly: Prow is "given a GitHub event, do a thing." Mungegithub is "sweep around GitHub things in a loop and munge them," whatever that means. And mungegithub has been an organically maintained codebase that's passed across a number of owners and hasn't been super well maintained, and Ryan and Cole Wagner have been doing a fantastic job of making it, making it a little more operational. But I guess, maybe unsaid:
A: as far as I can tell, the main reason you want something in mungegithub today is the convenience of being able to use the code that deals with owners and approvers; as far as I know, that stuff doesn't exist in Prow today. The other reason, which used to exist but I believe is no longer relevant, is mungegithub's ability to maintain, like, a shared cache of git- and GitHub-related information,
B: of what's, what needs to be acted on, everything. So it's possible that more of the mungers that are currently in mungegithub, the ones that do something each day, could be moved over to a GraphQL query that would get all the data in an efficient manner, and run as a separate cron job or something like that.
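For reference, the kind of batched query being described might look like the following sketch, which fetches every open PR's number, head SHA, and labels for a repo in one round trip against GitHub's v4 GraphQL endpoint. The query shape and the cron wiring are illustrative assumptions, not mungegithub's actual code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// A single GraphQL request can replace many REST calls: one round trip
// returns the state a daily munger needs to decide what to act on.
const query = `{
  repository(owner: "kubernetes", name: "kubernetes") {
    pullRequests(states: OPEN, first: 100) {
      nodes {
        number
        headRefOid
        labels(first: 20) { nodes { name } }
      }
    }
  }
}`

func main() {
	body, _ := json.Marshal(map[string]string{"query": query})
	req, err := http.NewRequest("POST", "https://api.github.com/graphql", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "bearer "+os.Getenv("GITHUB_TOKEN"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out map[string]interface{}
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Println(out["data"]) // a cron job would diff this against the desired state
}
```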
B: The one other concern I had was, we need to figure out, we need to figure out the monitoring and logging, because if you split everything out of mungegithub, all of a sudden the logs will be split around too. But that's a general problem we have with lots of our infrastructure, where if it breaks, we don't have great alerting right now; we just need to get that set up.
A: You used to be able to go look at Velodrome and look at that, like, metrics page, and it showed the submit-queue loop frequency. And as soon as we split the two instances up, suddenly the submit-queue loop frequency was just, like, as high as it could go, which was great, until I realized I now no longer know how frequently all of the other mungers are running. And so I think there's, there was at least an effort to take the HTTP server
that exposes the metrics endpoint, take that out of the submit-queue munger, and turn it into a feature that other things could list as a requirement. And I think that opens up the gateway to exposing potentially more metrics than just submit-queue frequency, if we find that mungegithub is still a thing we're gonna be leaning on going forward.
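As a sketch of what pulling that metrics endpoint out into a shared feature might look like, here is a minimal Prometheus-style exporter in Go. The metric name and the client_golang wiring are assumptions for illustration, not the actual mungegithub code.

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical per-munger loop counter: each munger increments its own
// label, so loop frequency stays visible even after instances are split up.
var loops = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "munger_loops_total",
		Help: "Number of completed munger sync loops.",
	},
	[]string{"munger"},
)

func main() {
	prometheus.MustRegister(loops)

	go func() {
		for {
			// ... run one submit-queue sync here ...
			loops.WithLabelValues("submit-queue").Inc()
			time.Sleep(time.Minute)
		}
	}()

	// The shared HTTP server that any component could list as a requirement.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9090", nil)
}
```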
A: Something else, I guess, as a general design philosophy that you touched on there, Ryan: it seems like GitHub is a place where a couple of, like, state-machine type processes exist. So you, like, scrape through the comment stream of an existing issue or pull request, and from that you sort of transition through the state machine to decide: yes, this issue is approved, or, hey...
B: Those ones where it's a comment stream, where you need to figure out if this is okay to test based on a previous approval, things like that, those are quite easy to do in Prow. We have enough tokens; we can just ask for the comments. So anything where the event is driven by a human, the number of human-driven events is low enough that we can generally do requests for it. It's the other state machines, where, say, some commit happened that would cause a rebase, that's not instantly available; we have to query GitHub for it.
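To make the easy case concrete: a comment-driven check really can be a couple of API calls per event. Here is a rough sketch using the go-github client; the okToTest helper and the bare "/ok-to-test" string match are simplifications of mine (a real plugin would also verify the commenter's permissions), not Prow's actual plugin code.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"strings"

	"github.com/google/go-github/github"
	"golang.org/x/oauth2"
)

// okToTest re-derives a small, human-driven state machine from the comment
// stream: has anyone said "/ok-to-test" on this pull request?
func okToTest(ctx context.Context, gh *github.Client, owner, repo string, number int) (bool, error) {
	comments, _, err := gh.Issues.ListComments(ctx, owner, repo, number, nil)
	if err != nil {
		return false, err
	}
	for _, c := range comments {
		if strings.Contains(c.GetBody(), "/ok-to-test") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ctx := context.Background()
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: os.Getenv("GITHUB_TOKEN")})
	gh := github.NewClient(oauth2.NewClient(ctx, ts))
	ok, err := okToTest(ctx, gh, "kubernetes", "test-infra", 1234)
	fmt.Println(ok, err)
}
```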
B: And that's kind of expensive. Those things where it's time-based or uncorrelated events are the more difficult state machines to migrate over, but there are only a few of those. The vast majority of them do not, do not need, like, the very stateful approach in mungegithub, and obviously the less state we have to maintain, the better. Plus there is the cold-cache effect when everything changes, and everything like that. So.
D: I have, I have some recent experience, having just implemented a munger; I guess maybe people didn't notice until it was too late, because I'd been told we would do everything in Prow. But I think one of the nice things about mungegithub is that if I write a munger, it runs both event-driven and time-driven, like there'll be a once-a-day resync. And I think, it sounds like Prow is definitely the way forward.
D: But just, there's, there's a lot of niceties mungegithub has, and I think Prow needs to make development of new plugins easier for someone who's uninitiated. Because if you go try to do one, you're like: oh, it doesn't have this capability; like, say, shared repo state doesn't exist, so I'd have to implement that before I implement my plugin, or I could just, you know, I could write a munger. So there's kind of, even though we don't want to write more mungers...
A: So maybe some of the design decisions, the intent, are still in his head instead of in a document somewhere. The way I always read into it is that we're trying to keep the number of dependencies for Prow very, very tiny, and so by being very explicit about what each plugin needs, as far as its level of GitHub interaction or Kubernetes interaction, that made them a little more composable or separable. But we may be at a point where that doesn't necessarily make sense going forward, I think.
D: There needs to be, I mean, I guess when you're starting out and your use cases are super simple, that sort of lowest-common-denominator thing makes sense. But as you get more and more plugins with more functionality, I think there is a need for a certain amount of framework to share development work amongst plugins, rather than duplicating the effort.
A: I would agree with that. There's probably an element of cart-before-horse where, if we really do have the momentum, then it might be worth looking at what a framework could look like, now that we have a better idea of what pieces are useful. But I think largely what I'm hearing is that we're getting close to an inflection point where it would behoove us to have some kind of roadmap or plan for exactly when and how we're proceeding forward.
A: Something that Jaice would probably be really interested in is whether anybody from this team is planning on making any significant changes to our testing infrastructure. While we have code freeze and release burndown going on in, like, the next couple of weeks, I would anticipate that we probably want to keep things stable, and any significant changes would best be withheld, or at least very widely broadcast and discussed. How do we, how do people feel about that? Does anybody know about anything coming down the road?
A: So in general, I, I would rather we be allowed to continue doing our work and stuff. Like, the big bright-line example to me would be if we suddenly decided to start using something brand new like Tide, instead of the submit queue, to do all of our merging, right? That would be kind of an unexpected change.
A: The release team is kind of used to the status quo right now, as far as the way in which things are tested and moved through the submit queue, and how code freeze is implemented, and things of that nature. And if we foresee any changes to that stuff, it's worth being extra vocal about it, and a little more prudent about whether or not it makes sense to do that. Does that make sense? Yeah.
E: Kalyn also mentioned that, a piece of it is anything that could destabilize the queue. I know that some of the changes to how the submit queue talks to, or looks at, issues and do-not-merge labels, some of the work that I did there, has, like, some potential to change how the submit queue works. But I do think we're isolated, in that we, I mean, we can (a) not update the configuration for these things, and (b) not deploy them.
A: I think Eric's little guidelines in the chat are right: cosmetic changes, or things that are display-oriented, are probably low-risk. The label change that potentially changes how we withhold pull requests from being merged is medium-risk, but I think we have plenty of time to get that implemented. And then, yeah, the actual mechanics of the submit queue itself would be high-risk. Jaice, you're off mute, did you want to say something there? Yeah.
F: I was just gonna say, I'm less opposed to changes as long as we understand where and how to get our signal. So if we have a really strong communication loop about changes, I'm not necessarily opposed to it. But for me it's, it's really: do I have the tools to, in any given moment, assess the health of Kubernetes, or the health of the release, or those things? So that's really where I personally draw the line. But that's, I mean, I think Eric's, yeah, Eric's guidelines are good.
G: I'd say historically, the things that have bitten us have been largely around, like, kubetest changes, and refactoring how bootstrap is working, or migrating jobs to use flags instead of environment variables. I feel like that's sort of the extreme risk, and so I still think, actually, we haven't developed all the, yeah, it'd be nice if we could do better about announcing kubetest changes, some way of communicating that, especially through the release teams, so that they know what we're doing.
G: One thing is, we do now show in TestGrid the commit for the kubernetes repo at the top, and then below that is a commit for the test-infra. So if we do update the image, that'll show a change in the test-infra commit, which provides something of a signal. But yeah, if we do change it, I agree with you, we should be better about providing you with some way of understanding that. Yeah.
A
That's
that's
exactly
where
I
was
thinking.
Err.
Thank
you
for
fuzzy
brain
today,
but
because
those
were
sort
of
the
changes
where
it's
like,
oh
well,
we're
just
sort
of
refactoring
and
changing
schemas
and
migrating
flags
around
you
shouldn't
care
about
any
of
that.
But
sometimes
there
were
you
know,
accidents
or
typos
or
whatever
oopsies.
That
College
like
whole
swaths
of
thing
to
fail,
and
so
just
like
being
verbose.
I
think
is
the
key.
A
The
key
thing
pair
on
the
on
your
comment
about
showing
the
testing
for
a
commit
underneath
of
the
kubernetes
commit
I
have
noticed
that
a
little
while
ago,
but
it's
it's
slightly
confusing
to
me.
Do
you
know
if
there
is
a
way
to
actually
display
the
names
or
like
row,
headings
column
headings
whatever
for
each
of
those
things
because
yeah,
it
seems
like
you
just
kind
of
have
to
know
at
the
moment.
I
click.
A: Definitely tribal knowledge only right now, a little bit. I think one other thing, just to try and close the loop on the Prow-versus-Jenkins thing: my, my understanding of why Jenkins is still involved is because we use Docker containers to manage all of the dependencies of a given build environment, and we do not wish to run those.
A: So the build scripts basically create a Docker container that has all the dependencies in it, and then they run things inside of that container. Which means, if we were to have this happen on Prow, we would need to allow the Prow job to connect to the Docker daemon on the node that is actually running the Prow job, which we don't want to do, because that's a publicly exposed node, and we don't trust the Docker-in-Docker approach to provide sufficient fidelity.
A: So instead, if we could use Bazel to manage all of our dependencies, we would no longer need to run a Docker image for all of the build dependencies to be correct. So the use of Bazel is pretty much the last thing that's holding us back from migrating everything off Jenkins, and that's actively ongoing, hopefully a 1.9 thing, I think. Did I get that right, Ben?
H: There's a physical mute button as well, all right. Not particularly much to add; that seems to mostly be right. Hopefully soon some of the PR e2es will be using Bazel builds as well.
G: There's still some things we're gonna have to tackle in 1.9, which may include also the presubmit jobs. But there's also a bunch of weird jobs, like, I think the node jobs, there's various jobs that do weird things, and those weird things we haven't migrated yet. So, so I think, like, the cAdvisor jobs, and node e2es, and I don't know about kubemark, but there are various things that are not yet migrated. But almost everything on the CI side has been migrated.
A: Josh, did, did that answer your sort of, your question?

C: Yes, that answered several questions, and then answered other questions before I could even ask them. So thank you, everyone; the brain dump is much appreciated. I'm still getting my head wrapped around all the pieces. So, just to follow up on some things that were said earlier: it sounds like, to me,
the, the Bazel builds will eliminate the need for any sort of Docker-in-Docker, which will therefore eliminate the need for having Jenkins be involved at all, even to be kicked off by Prow, because those would be Bazel builds instead, instead, hypothetically. And the other issue that was mentioned was the lack of backfill: because of Prow's inherently event-driven, stateless nature, there would need to be some sort of backfill mechanism, some sort of handle on time-based event firing as well.
C: Maybe that's just something that literally just does some munging and then fires off the equivalent events, or something. What, what, what would be the follow-up, the next action on that? Is there an issue, or should there be an issue, or more discussion? Or, or, also, the, oh, the other thing before I forget: the GraphQL API calls that we were able to eliminate some munging in favor of, that wasn't clear to me.
A: So there's, there's a triggers plugin, and it's got presubmit, postsubmit, and periodic stanzas. And I believe periodic, so, periodic doesn't do straight-up cron; actually, now that I say this out loud, it just says: run these Prow jobs on a periodic interval, like every two hours, every three hours. Like, like, for example, the BigQuery job.
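To illustrate that "interval, not cron" behavior, here is a toy periodic runner in Go; the periodic struct, job names, and intervals are hypothetical, showing the shape of the behavior described rather than Prow's actual scheduler.

```go
package main

import (
	"fmt"
	"time"
)

// A periodic stanza boils down to "re-trigger this job every interval".
// There are no calendar semantics: the clock starts when the component
// starts, not at fixed times of day the way cron works.
type periodic struct {
	name     string
	interval time.Duration
}

func main() {
	jobs := []periodic{
		{name: "ci-kubernetes-bigquery-metrics", interval: 2 * time.Hour}, // hypothetical job name
		{name: "ci-some-other-sweep", interval: 3 * time.Hour},            // hypothetical job name
	}
	for _, j := range jobs {
		go func(j periodic) {
			for range time.Tick(j.interval) {
				fmt.Printf("triggering %s\n", j.name) // the real system creates a Prow job here
			}
		}(j)
	}
	select {} // block forever while the tickers run
}
```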
G: So there's the Prow cluster, and then there's the Prow build cluster. Since the build cluster is running untrusted test code, we don't want that to have access to any credentials that have write access to the kubernetes repo, and so that's the main thing. And right now, there's not a way to have Prow choose which cluster a job is scheduled into. So theoretically we could have some,
G: you know, protection to make sure that none of our untrusted code is running on the same cluster as a periodic munger-type thing, but we haven't built that yet. So, essentially, if you, if your job needs to run periodically and needs access to GitHub credentials, then you need to run it as a cron job that schedules into a cluster that has access to write credentials. Whereas, like, the retester, all it does is leave a comment, so it basically just has, like, a random internet user's access.
G: So for that, we can run it as a periodic job. Or if it just needs credentials to a GCP project, you know, to create a VM or something, all of that can happen in the build cluster; that's, you know, minimally trusted code. But we don't want to have someone be able to, you know, write code that does things to the main repos.
A: Okay, that clarifies a lot, and it does sound like it'd be, yeah. Because right now, the path for migrating many of the mungers, the ones that, like, write labels and stuff: those need a GitHub token to be able to write those labels, and so we can't actually turn those into periodic Prow jobs. Those are actually just completely separate from Prow entirely, and it would be cool if we could do those as periodics, but the thing that's holding us back there is the ability to schedule to, like, a trusted cluster versus an untrusted cluster.
A: Josh, thanks for the suggestion; like, I, you know, a lot of this stuff could stand to be written down and documented instead of being so tribal, but at least having it recorded on YouTube is slightly better, because I think this has been the clearest articulation, or snapshot, of a roadmap for a little bit. Kara, you were gonna say something?
E: I definitely feel like the face-to-face stuff is more productive, or has the potential to be a lot more productive, so I would definitely value that. Yeah, totally, scheduling, scheduling.
A: All right, cool. Thank you, everybody, for your time, and I look forward to seeing you next week. Take it easy, thanks.