From YouTube: Kubernetes SIG Testing 2019-03-05
A
SIG Testing weekly meeting: you are all being publicly recorded, so you are, of course, going to be on your best behavior, and you can watch yourselves on YouTube later adhering to the Kubernetes code of conduct, which means you're not going to be a bunch of jerks. But croucher is. Am I right? Okay, sorry, just back from lunch, warming up. So before we talk about Prow being a jerk, let's have Eric talk to us about the container image promoter and where and how we want to run it.
C
That is what I was lobbying for initially, and that is mostly still the goal, but I guess the team feels that the most expeditious way of accomplishing that is to have it start running in a cluster, in the current cluster, and then sort of use that as a "hooray, look, we can now promote images, and lots of people can do this by sending PRs." Let's use that to gain momentum, then create a CNCF cluster and run it there.
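What follows is not from the meeting, just a minimal sketch of the promote-by-manifest idea being described: a manifest of images and digests lives in a git repo, people change it by sending PRs, and the promoter copies anything listed from a staging registry to the production registry. The manifest fields and registry names here are simplified stand-ins, not the real container image promoter format.

```go
package main

import "fmt"

// promotionEntry is a simplified stand-in for one entry of a promoter
// manifest: an image name plus the digests that should exist in prod.
// The real promoter manifest format is richer than this.
type promotionEntry struct {
	Name    string   // image name, e.g. "kube-apiserver"
	Digests []string // digests to promote, e.g. "sha256:..."
}

func main() {
	// Example registries; the real ones are configured per manifest.
	src := "gcr.io/k8s-staging-example"
	dst := "example.gcr.io/k8s-artifacts-prod"

	// Because this manifest lives in a git repo, "promoting an image"
	// is just a PR that adds a digest to this list.
	manifest := []promotionEntry{
		{Name: "kube-apiserver", Digests: []string{"sha256:1111"}},
		{Name: "kube-scheduler", Digests: []string{"sha256:2222"}},
	}

	// The promoter's core loop: for every (image, digest) pair in the
	// manifest that is missing from dst, copy it over from src.
	for _, e := range manifest {
		for _, d := range e.Digests {
			fmt.Printf("copy %s/%s@%s -> %s/%s@%s\n", src, e.Name, d, dst, e.Name, d)
		}
	}
}
```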
A
So, speaking with my SIG Testing hat on, I guess maybe I should actually ask the people who are on call how they feel about supporting this additional thing, but it sounds like maybe you all have already talked and are okay with it. With my K8s Infra working group hat on, though, I think I was asking whether there are any technical things that prevent us from doing this; that should be documented, because there is actually a cluster.
A
It's just supposed to be an alpha cluster that automatically self-destructs after however many days, I think thirty, to prevent us from relying on something that was stood up temporarily as the permanent solution. But we do already have the community DNS setup running on there, and we also have something else that's not coming to mind right now. Oh, the publishing bot is also running on that cluster. So there is a place to run things; it's more just whether or not there are support burdens that require that we run this here.
C
Yep, that sounds like a good point. I don't know why we can't run it in the same place as the publisher, but if it's getting deleted every 30 days, maybe it will be annoying to have to recreate the credentials and to have Prow schedule it and all that. But technically there are no reasons. Yeah, I'll add that and then see what they think. Okay.
A
I mean, our intention is to eventually have that be a long-lived cluster, but we have a number of policy things that we want to get right. That's why we're making sure the cluster is something we can recreate ourselves, you know, making sure we can press a button and have the thing come back up. Yay, okay. So, anybody else from SIG Testing agree or disagree with that? Anyone?
A
That was a temporary measure, because there are all sorts of policy questions, and doing-things-the-right-way questions, that need to be sorted out, and it's there to encourage us to continue to iterate rather than rely on a bunch of stuff that's organically done as a half-hearted measure: we're just going to make sure it blows up until we feel like we have something that's longer lived. Please show up to the meeting tomorrow at 8:30 a.m. Pacific if you'd like to find out more.
A
So the main thing I wanted to try and talk about today was the fact that code freeze is coming up on March 7th, code freeze for the Kubernetes 1.14 release, I should say. Now, I'm wearing my release lead hat, and so I'm super paranoid about whether or not Prow is unstable right now and whether there are any things coming down the pipeline that could make Prow more unstable, so I initially wanted to frame this discussion in terms of, hey...
D
So Prow overall is mostly okay. I think we had a couple of hiccups yesterday with Prow itself that were not really hugely user-facing and were quickly addressed. I think the bigger issue that we're seeing right now is with Tide. That being said, the issues are mainly limited to the status controller. Which means that, well, first of all, Tide separates its important syncing logic, for actually merging PRs, from updating its status context on PRs, and it seems that only the second half of things is actually really encountering problems right now.
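For context, here is a rough, hypothetical sketch of the split being described: Tide's merge/sync work and its status-context updates run as independent loops, so the status controller can struggle while merges keep happening. The function names and intervals are made up; this is not Tide's actual code.

```go
package main

import (
	"log"
	"time"
)

// syncOnce stands in for Tide's sync loop body: find PRs that meet the
// merge criteria and merge them. Stubbed out here.
func syncOnce() error {
	log.Println("sync: evaluating pools and merging eligible PRs")
	return nil
}

// updateStatusOnce stands in for the status controller: set the "tide"
// commit status on open PRs so authors can see whether they are in the
// merge pool. Stubbed out here.
func updateStatusOnce() error {
	log.Println("status: updating tide context on open PRs")
	return nil
}

func main() {
	// The two loops run independently; an error in one is logged and
	// retried without blocking the other. That matches the observation
	// above: the status half can have problems while merges stay healthy.
	go func() {
		for range time.Tick(1 * time.Minute) {
			if err := syncOnce(); err != nil {
				log.Printf("sync error: %v", err)
			}
		}
	}()
	for range time.Tick(1 * time.Minute) {
		if err := updateStatusOnce(); err != nil {
			log.Printf("status error: %v", err)
		}
	}
}
```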
D
So that went in on a Friday, and it seems that we started seeing problems yesterday, Monday. That is really the only recent change that we've made, so we were considering that we could revert it to try and fix things, but it doesn't appear to be the cause, and I'm a little bit hesitant to revert just because I don't think we have any evidence to suggest that's the problem.
E
I think one thing we have learned in the last couple of weeks is that there are a number of ways you get this snowballing failure behavior, either in Tide or in Prow in general. I know when we hit the content-creation rate limits from GitHub, that took down our cluster for two days, and I know that the prow.k8s.io cluster didn't have that issue because engineering there was able to mitigate it before that actually happened.
A
First thought: there's a lot of stuff in here that's probably not going to get done in this cycle, in this last stretch. I don't see anything super huge. I know there's been a lot of activity around Boskos and its use for AWS, but as far as I know we're not doing anything about that, or with that, right now.
B
It's not going to be in for 1.14. I should have spoken to Jason prior to this call. I know from our end there's a huge push to get CAPA out, which this is part of, and I was out part of last week for some personal stuff, so I didn't get to push on those items. There are some outstanding PRs. However, whether it's 1.14 or the day after, I don't think it really matters, you know.
A
We want to ensure that our contributors have the best, smoothest experience. Having them chase their tails around random flakes and test failures is unproductive for everybody involved, and if we change more things out from under them, or if we change things dramatically, we risk introducing bugs that could cause additional flakes.
E
Essentially, yeah. But I mean, I think there's a question about Prow, sorry, about the prow.k8s.io deployment specifically, and whether or not that becomes like a working group under SIG Testing, and whether or not it's going to be focused on deploying head daily during code freeze to the prow.k8s.io cluster, since there is a larger community around Prow. Sure.
A
We should work to split these two efforts out: one, the management of the codebase; two, the management of prow.k8s.io. But I think what I'm asking of this group is to try to use your best judgment and not land any massive refactors, and I think right now we don't have anything like that in the pipeline, so.
B
I mean, I guess that's what we're doing. You know, I have a pretty big Boskos PR, but it doesn't replace the existing janitor that kops uses. Although Justin did say, can we replace that? Can we replace those periodic jobs with this Boskos AWS janitor? And my answer is probably yes, but there's no reason you have to; you can continue to use what you have, and this would only affect the new CAPA accounts. I guess also, sorry, I would never have personally considered Boskos part of Prow, but.
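For context, a hypothetical sketch of what a Boskos-style janitor loop looks like; the client interface below is made up and does not match the real Boskos client API. The idea is a loop that acquires a dirty resource (here, an AWS account), cleans it, and releases it back as free, which is what would replace ad-hoc periodic cleanup jobs.

```go
package main

import (
	"fmt"
	"time"
)

// client is a hypothetical stand-in for a Boskos client; the real client
// lives in the test-infra codebase and has a different API.
type client interface {
	AcquireDirty(resourceType string) (string, error) // returns a resource name
	ReleaseClean(name string) error
}

// fakeClient hands out one dirty resource and then reports none left.
type fakeClient struct{ handed bool }

func (f *fakeClient) AcquireDirty(resourceType string) (string, error) {
	if f.handed {
		return "", fmt.Errorf("no dirty %s resources", resourceType)
	}
	f.handed = true
	return "aws-account-001", nil
}

func (f *fakeClient) ReleaseClean(name string) error { return nil }

// clean stands in for the actual janitor work: deleting leftover cloud
// resources (instances, volumes, and so on) inside the account.
func clean(name string) error {
	fmt.Println("cleaning leftover resources in", name)
	return nil
}

func main() {
	var c client = &fakeClient{}
	for i := 0; i < 3; i++ { // a real janitor would loop forever
		name, err := c.AcquireDirty("aws-account")
		if err != nil {
			fmt.Println("nothing to clean:", err)
			time.Sleep(10 * time.Millisecond)
			continue
		}
		if err := clean(name); err != nil {
			fmt.Println("cleanup failed, leaving dirty:", err)
			continue
		}
		_ = c.ReleaseClean(name)
	}
}
```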
F
On top of the stability stuff, I have two things. One is the next topic on the agenda; once we talk about that, it's something we probably want to fix. The other thing is I intend to move the reporting over to Crier at some point in the future, but I'm trying to avoid fighting with Tide over GitHub tokens. Last time, when I deployed it, I didn't point it at ghproxy, and that strangled our token usage. So I guess my best bet is to just wait until after code freeze to make that move.
A
Okay, yeah. I guess, just to reiterate as well: we're trying to settle down, or quiet, our test infrastructure and make sure we're not deploying any changes, because people will have their hands full trying to triage and troubleshoot individual failures and flakes, and if we start changing additional things on them, it's not going to be great. But I'm not trying to say don't ever merge any code in test-infra ever again, nor am I trying to say don't ever deploy anything ever again.
A
To be explicit: no, I'm not saying it's a deploy freeze. I'm just saying we're going to be really careful here, because I'm really against the concept of holding everything and letting a bunch of additional changes pile up; that's even riskier. We all just experienced that last month. I want to make sure this team takes on the least risk possible.
D
We should be in a state where we are not merging big new changes like that, especially given how things have gone for the past month. I think it's a great opportunity for us to try and stabilize things: instead of working on new features, maybe table those for a little while and just work on getting things really healthy, and see if we can hold that state for a couple of weeks.
F
Missing something in the system, yeah. Also, the pod GC logic is cleaning pods based on their start time. That's why we only see the error on the scalability jobs: they take a super long time to run, and when pod GC kicks in it will prioritize cleaning those ("oh, these started longest ago") rather than the finished jobs. I'm not sure if we should file a bug in Kubernetes to improve that behavior, or if this is considered working as intended.
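As an illustration of the ordering problem being described (not the actual garbage-collection code), consider a cleanup that ranks pods purely by start time: the longest-running pod is by definition the oldest, so it gets picked for deletion ahead of pods that have already finished. The pod fields below are a toy model.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// pod is a toy model with just the fields the example needs.
type pod struct {
	Name      string
	StartTime time.Time
	Finished  bool
}

func main() {
	now := time.Now()
	pods := []pod{
		{Name: "scalability-job", StartTime: now.Add(-10 * time.Hour), Finished: false},
		{Name: "unit-test-1", StartTime: now.Add(-2 * time.Hour), Finished: true},
		{Name: "unit-test-2", StartTime: now.Add(-1 * time.Hour), Finished: true},
	}

	// Ranking purely by start time: the still-running, long-lived pod
	// sorts first and would be the first candidate for deletion.
	sort.Slice(pods, func(i, j int) bool {
		return pods[i].StartTime.Before(pods[j].StartTime)
	})
	fmt.Println("by start time, first to delete:", pods[0].Name)

	// Preferring finished pods first avoids reaping the long-running job.
	sort.Slice(pods, func(i, j int) bool {
		if pods[i].Finished != pods[j].Finished {
			return pods[i].Finished
		}
		return pods[i].StartTime.Before(pods[j].StartTime)
	})
	fmt.Println("finished-first, first to delete:", pods[0].Name)
}
```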
A
Right, like I was going to ask if... I think this is one option. The alternative option is something like the Kubernetes verify job, where it ends up being an hour to an hour and 20 minutes, but is actually made up of a bunch of individual verify scripts, each of which registers as a test in that verify job. So, ten different jobs, or ten different long-running scripts, each of which runs in serial, and all of which roll up to report in a single job, I guess.
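As a rough illustration of that roll-up pattern (the script paths below are hypothetical, not the real hack/verify-* set): one job runs several scripts in serial, records each one as its own named test result, and exits non-zero if any of them failed.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// result is one script's outcome, reported as its own "test" inside the
// single verify job.
type result struct {
	Name   string
	Passed bool
}

func main() {
	// Hypothetical verify scripts; a real job would discover these from
	// the repo rather than hard-coding them.
	scripts := []string{
		"hack/verify-gofmt.sh",
		"hack/verify-typecheck.sh",
		"hack/verify-spelling.sh",
	}

	var results []result
	failed := false
	for _, s := range scripts {
		// Run each script in serial; its exit code becomes the per-test
		// pass/fail that rolls up into the overall job status.
		err := exec.Command("bash", s).Run()
		results = append(results, result{Name: s, Passed: err == nil})
		if err != nil {
			failed = true
		}
	}

	for _, r := range results {
		fmt.Printf("%-30s passed=%v\n", r.Name, r.Passed)
	}
	if failed {
		os.Exit(1)
	}
}
```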
E
The reason I asked for this job to be written, or at least for typecheck to be written, was so that nobody would have to sit through the whole round of requested tests just for fixing a spelling mistake ever again, which I think you can get behind, right? I think if this had been added as a follow-up to the current linting job, nobody would have ever noticed.