From YouTube: Kubernetes SIG Testing 2017-12-19
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit
A
Okay, hi everybody. Today is Tuesday, December 19th. Welcome to this week's SIG Testing weekly meeting. I'm Aaron Crickenberger. This meeting is being publicly recorded and will be posted to YouTube shortly. On today's agenda I put down a bunch of things that are just sort of at the top of my mind lately. 1.9 successfully went out the door, hooray, and I haven't heard of any show-stopping bugs, which is even better. That's pretty good for a dot-oh release. And there are a couple of things I've been trying to push through.
A
In the background, if anybody's interested, I can try running through the SIG Testing slides that I went through at KubeCon real quickly. I don't think there's too much, but I can breeze through them in like five minutes. Does anybody want to see them? Okay, I'm seeing thumbs up and nodding, so let me go find them and do the screen sharing thing.
A
So, the SIG Testing update I gave back on December 5th. I saved this quote from Clayton, which I thought just perfectly captured what we're about here at SIG Testing. I sort of assumed that people at KubeCon had heard me ramble about some of the stuff we do at community meetings, and maybe they've heard of some of these tools, maybe they haven't. I'd like to do a deeper walkthrough at some point that combines how all of these things link together, but that's not really what this talk was about.
A
Some of the numbers I found interesting were that over the past year we've run about three and a half million jobs, and that's just post-submit jobs. We've also run about 650,000 to 660,000 pull request jobs. So, ballpark, that's four million for the year, which is a lot.
A
Pull request jobs are, by definition, people submitting occasionally broken code, so we can't always expect those to pass 100 percent. The other statistic I found really interesting was that over the year kubernetes/kubernetes merged about 9,500 PRs and test-infra merged about 3,300, so it's a significantly smaller repo but with significantly higher velocity, which I thought was kind of cool. Go team, yay us. The other neat thing was that the number of distinct jobs has roughly doubled across the year.
A
It was kind of difficult to get the accounting right for the distinct jobs we had in November of 2016, because we were migrating from one naming scheme to another, but ballpark we were around 380 back then. Then there are some graphs where I try to convince you that it might be worth ignoring slow jobs. Here's where I tried to break it up: the blue lines and bars represent jobs that take less than an hour, and the purple ones are jobs that take more than an hour. This is for pre-submit jobs.
A
The longer a job runs, the more likely it is to fail. I'm not sure what we did over October and November to greatly improve the pass/fail rate for jobs that take longer than an hour, but it still kind of seems like if a job takes longer than an hour, you probably shouldn't even bother running it or paying any attention to it. That changes if you break it down to look at test results instead of job results.
A
The test-level numbers are significantly higher than the job-level numbers, but the job-level numbers are the ones we actually gate all of our pull requests on. It's a similar story for post-submit jobs, only here I bucketed things at two hours instead of one hour, because post-submits are, by their very nature, intended to run longer, more resource-intensive jobs, and the pass rate is pretty dismal for anything that takes over two hours.
A
Maybe a case is to be made here that we shouldn't even bother running jobs that take longer than two hours, because nobody has really been doing much to improve their pass rate. It's a similar story for the test cases. I'm not entirely sure what that dip in August was about, but roughly speaking we're passing a lot of test cases; it's just the jobs themselves, which are the signal we look at for blocking releases and other things.
A
One thing I thought was really cool was the time-since-last-merge graph. We saw some excitement with that more recently coming out of code freeze, so unfortunately I don't have that graph, which was maybe more insightful, but I tried to explain to you folks how we use the yellow line to show the queue depth and the green jagged sawtooth lines to show the time since last merge. If the yellow line is increasing and the time since last merge is also increasing past a certain threshold, we alert on that.
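As a minimal sketch of that alert condition, assuming hypothetical type and field names rather than the actual monitoring configuration:

```go
package mergealert

import "time"

// queueSample is one scrape of the submit queue's health signals.
type queueSample struct {
	depth              int           // number of PRs waiting to merge
	timeSinceLastMerge time.Duration // age of the most recent merge
}

// shouldAlert fires only when both signals look bad at once: the queue is
// growing and nothing has merged for longer than the threshold.
func shouldAlert(prev, cur queueSample, threshold time.Duration) bool {
	queueGrowing := cur.depth > prev.depth
	mergeStalled := cur.timeSinceLastMerge > threshold &&
		cur.timeSinceLastMerge > prev.timeSinceLastMerge
	return queueGrowing && mergeStalled
}
```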
A
There's also this sort of big hump on the right there, where I believe you can see that the little sawtooth down below shows merging is happening as usual, but the quantity of pull requests is not going down fast enough, and this, I believe, is where we collectively learned that if we can't do batch merges, we start to fall behind our merge capacity pretty badly.
A
Okay, this was fun. I showed this and said: look at how many jobs we have that have been canceling for over 90 consecutive days, in red, let alone the jobs that have been failing for more than 60 consecutive days, in orange. What do these jobs look like? This is the worst one. This is the serial job, the GKE serial job for Google GCI, and you can see how it's pretty flaky, and it's kind of difficult to tell what the appropriate thing to fix is here. This is sorted roughly by quantity of failures.
A
Here's an example of a release-blocking job. This was one of the upgrade jobs where the cluster was actually upgrading, but none of the disruptive network partition tests and the like were passing in any meaningful way, and here's an example of kubeadm also failing similarly. For these sorts of things I'm trying to figure out: how do we incentivize people to fix their tests? Is it a question of them not knowing that these tests are failing?
A
This would, you know, cut our cost by about 16 percent or so, which might be meaningful if we're talking about paying real money to run our tests. And then the final thing: at a community meeting, this was the list of stuff that I said we would do for 1.9, basically end-of-life as much as we could, start to put together a really well-defined support policy, and enable better support by non-Googlers. For better or for worse, this is the state of what I think we actually did.
C
I think we have some non-blocking jobs that are using it, but I think we're waiting for some credits or some quota to be able to run. We need to be able to run, I think, 12 clusters, 12 five-node clusters, at the same time, and last I checked we're working through quota things for that; but otherwise, yes, we are good to go, and there's a bunch of other jobs which are using it.
A
I see that; as in, Arun kind of tapped me on the shoulder pretty constantly at KubeCon about making sure we were moving forward in the right direction with that. I hear an interest in, once we've proven that we have the credits to meet our existing testing needs, the idea that we could expand the class of tests that we're running. I've heard of the potential for testing HA configurations, potentially driven by kops, through the Cluster Lifecycle SIG.
A
Okay, so on the topic of Tide. Sorry, finding the agenda; I guess I can just share this screen too, so we all know what I'm looking at when I'm talking about it. Okay, so I'm curious where we are on rolling out Tide. We had a lot of consternation over the submit queue being a little funky over the weekend. It's cleared itself out, which is great, but we had talked about, well, why don't we...
A
I've sort of been left with the impression that we have canary Tide on a number of repos and are pretty happy with it. We think that the UI kind of still needs some work, but there is a UI. I saw a PR by Joe Finney to set status contexts on PRs explaining whether they were in the pool or out of the pool, and I think that we don't really have any documentation on what Tide is from a user-facing perspective.
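Purely as an illustration of what a per-PR status context like that looks like on the GitHub side, using the go-github client; the context name and the description strings here are assumptions, not necessarily what that PR implements:

```go
package tidestatus

import (
	"context"

	"github.com/google/go-github/github"
)

// setPoolStatus posts a commit status telling the PR author whether the PR is
// currently in the merge pool. The "tide" context and the descriptions are
// illustrative assumptions.
func setPoolStatus(ctx context.Context, gh *github.Client, owner, repo, sha string, inPool bool) error {
	state, desc := "pending", "Not in the merge pool; missing required labels or checks."
	if inPool {
		state, desc = "success", "In the merge pool."
	}
	_, _, err := gh.Repositories.CreateStatus(ctx, owner, repo, sha, &github.RepoStatus{
		State:       github.String(state),
		Context:     github.String("tide"),
		Description: github.String(desc),
	})
	return err
}
```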
A
For people who are used to consuming the submit queue website, they're used to seeing a history of what's been merged and why, they're used to seeing information about how things are ordered, they're used to seeing a page that describes all of the labels and criteria that are necessary for a pull request to get merged, and they understand what the queue is. It's these sorts of expectations that I personally feel need to be documented a little bit better for Tide to be turned on for kubernetes/kubernetes.
A
I think I said that I'm currently trying to push for it on the kubernetes/community repo, which would be another high-traffic repo and, in my opinion, kind of a canary for what this process would look like. So I opened up a strawman pull request. My next step would be to tell the community that I'm going to do this before I would outright enable it for the community repo.
A
One of my questions is whether or not it's possible to have Tide running and have mungegithub minus the submit queue, so remove the submit-queue munger out of mungegithub, because there's still a number of other mungers that are useful to us, but let Tide take over the merge responsibilities. That's 100% possible, no problems with that. Okay, and so it could be that, since the community repo doesn't really have each pull request getting exercised through many, many tests, people are less confused about questions like: what are these tests?
A
Why are they failing, and where can I go see the test results? So a UI explaining this sort of stuff might be less necessary. Plus, turning it on for community would help establish trust that this thing actually works and get people comfortable with it. But I still feel like it's worth notifying the kubernetes-dev mailing list, opening an issue in the kubernetes/community repo, and letting people know that this is coming.
D
We've been trying to make Tide more self-documenting, so, for example, in the most recent set of changes I made to it, it links to a page that even explains what GitHub statuses are, things like that. I'm hoping we can just make things very, very obvious just glancing at the page, but I do think it would be useful to also link to a more detailed doc outlining what it is, why it's there, that sort of thing. I don't think that belongs...
A
My gut has always told me that kubernetes/community is the place for people to learn how to interact with the automation that is driving the kubernetes project and how that automation is deployed, as opposed to test-infra for docs that explain stuff that's a little more specific to our code, or how to take our stack and deploy it for your own project. Is that the right distinction, or am I making that up?
D
I think for most of the docs, things like the plugin help page are the way to go, where it just kind of documents itself, but we probably still want somewhere that says: this is Tide, we use it for merging, that kind of thing. Because I think most of the time when you visit Tide, or whatever, at that point you're already aware of it; you don't really need to be told that Tide is doing the merging and so on, hopefully. But there probably should still be a place that tells you that.
G
I'm Matt Farina. You don't see me around here very often, but I've started to get a little involved in the testing stuff, and one of the things that I've been talking with folks about lately is: say I don't know what Tide is, or Prow, or any of this, and I come in to do pull requests. I'm newer and I'm trying to get involved. I show up; how do I even know what the labels mean, and how do I even find that out? So you talk about a Tide site.
G
How do I even know that exists if I'm coming in from outside Kubernetes, or I'm not engulfed in it? Where does that documentation live? I like the community repo I see linked in there, but putting documentation on pages that somebody has to know how to discover, isn't that already a difficult findability problem?
F
I think, as well, the documentation for a developer should start from the comment that gets put on their PR; that should lead them to the thing, and they should be able to discover all of the components from there. And, like you're saying, there's a bunch of different questions that need to be answered in different ways, and also when we're rolling out new changes.
F
I don't think any of the other documentation is necessarily what people want to see either, because at that point they want to see: I'm going from mungegithub, I'm coming to Tide, what does that actually mean for me as an end user? They're not necessarily interested in how Tide works, and that's a different question; it doesn't necessarily have to live anywhere else. Just...
A
It's partially because we're really hopeful that the code is so self-explanatory that if people could just read it they'd figure it out, which is nice and all, but that doesn't really apply for the much larger set of developers, especially first-time contributors. So it's that question of: how do you have just enough docs that aren't likely to fall out of date? Because the stuff that's in community has fallen out of date; it's drifted behind all of the other stuff, and it's big and it's heavyweight.
A
So I like the idea of having the reference stuff, the details, hosted directly on the infrastructure, and then linking to that infrastructure as often as possible whenever our automation interacts with users in the comments on pull requests and so on. I still think there's that higher-level TL;DR summary needed for users to get there in the first place.
A
It should be user oriented. My other thing: the plugin help page is freaking awesome, by the way; it's so great to finally have that. I didn't know I needed to put a .html on the end of it, or I would have demoed it live in front of people at KubeCon when somebody asked where they could go to get this.
A
It's not there yet; you know, I set up that redirect for the bot commands page, and I can't yet point it at plugin help, because I think some people really like having just the table of commands. I also really like how plugin help has an explanation of what each plugin does and why, and example usages, but I'm wondering if they're competing to be two different pages rendered off of the same metadata.
A
I also sometimes get questions about which repos a given plugin is enabled for, and I have to tell people, well, what repo do you want it enabled for, and then choose that from the drop-down box. So if we think it's ready to just be exposed to a larger audience, we absolutely can do that, because I think right now I've just been organically trying to put it in front of people's faces.
D
If you take a look at the most recent Tide update, which needs a few tweaks but is now deployed, I'm really hoping we can get to a point where an end user can come to this page and mostly understand what's going on. I do think we need an introduction somewhere else that says, oh, by the way, we have a merge bot; but once you know that we have a robot, this is the dashboard to go to.
D
I'm hoping that it's clear enough just looking at the page that, okay, you need everything to be green to pass, and this is that. I'm still not happy with it yet, but if you see where it's going from where it came from, I think that part is in the right direction. I think we're just lacking any docs that say: oh, by the way, we have a merge thing, we have a test runner, and this is how you peek into logs.
A
Okay, well, I will attempt to file issues for areas of clarification. Matt Farina, if you're working on stuff in this area, I'm assuming you've got issues you're going to work against or file as well, and if anybody here is interested in tackling that stuff, please do so. It's the sort of thing where I really am interested in this, but every time I get really interested I also get sidetracked on other stuff, so I don't want to assign it to myself and then never do it.
A
But if it's still open after a little bit, I will do my best to get to it, because I learned a lot by just rewriting the document on OWNERS files, and I've noticed we've changed some of the stuff that implements that, you know, Blunderbuss and approve, in both mungegithub and Prow. If we can get rid of all of that from the mungegithub mungers, then that page can be shortened quite a bit anyway.
F
There was also one other kind of related topic that I put on the ContribEx meeting, which is that I think we need to have a clear avenue for comments for when we're about to make changes that aren't necessarily just in the background, the stuff that influences how people actually work with the repos, just to give the larger community a place to have comments and feedback before it goes live.
A
It found its way to SIG ContribEx, and I forget if I raised it there or not, but it never got raised to kubernetes-dev, and there are a lot of cooks in the kitchen who are bellyaching over things that could be answered. It's a matter of making sure that when people have questions about potential use cases, they know there is an answer for that question. Like: can I find all of the issues that were automatically closed by the bots?
A
Yes, you can, and here's the label query you can do to get them, rather than a lot of back and forth over, oh, should we do this or should we not. But I still think having that discussion up front, rather than just springing it on people, is the better, safer way to build trust.
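A minimal sketch of what such a query could look like with the go-github client; the specific label here (lifecycle/rotten) is an assumption standing in for whatever label the bot actually applies before closing:

```go
package botclosed

import (
	"context"

	"github.com/google/go-github/github"
)

// issuesClosedByBot searches for closed issues carrying the label the bot
// applies before it closes them. "lifecycle/rotten" is an assumed label name;
// swap in whatever label the bot really uses.
func issuesClosedByBot(ctx context.Context, gh *github.Client) ([]github.Issue, error) {
	query := "repo:kubernetes/kubernetes is:issue is:closed label:lifecycle/rotten"
	result, _, err := gh.Search.Issues(ctx, query, &github.SearchOptions{
		ListOptions: github.ListOptions{PerPage: 100},
	})
	if err != nil {
		return nil, err
	}
	return result.Issues, nil
}
```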
A
To that end, one of the other trust-building things I'm trying to do with the community repo is that I warned people I'm going to remove direct write access to that repo. We're going to take away all the teams except community maintainers, community admins, and steering committee, and I'm happy to even take away steering committee if I can convince the other people on the steering committee that that's a good thing to do.
A
The community repo doesn't use milestones, so I don't need the /milestone command there. Before I do that, though, and I'm not being really creative or thinking outside the box here, I don't know what cases I'm missing in terms of what people would need direct write access to be able to do. Most of the ones I've used on a daily basis as a member of the release team, either in the triage role or as the CI signal guy, are handled by the bots.
A
Might there be, like, a separate label query we could use if we needed a high-priority thing? The use case that came up this morning, and it was, I guess, not really a critical or urgent need, was that somebody had a PR that was likely to suffer a lot of rebases if many other PRs merged ahead of it in the queue, and so they wanted it to go in first.
F
Because it's kind of a moot point if we can, especially if we're running more, like if we were to do an exponential back-off and do a couple different batches from the larger pool. If the batch with, like, 100 PRs merges, then it doesn't really matter what the priority was, because it'll be a lot faster. I think that was just kind of an artificial number at this point, yeah.
C
Initially, the reason why we set a limit on the number of things we would batch together was because batching didn't exist yet and there were concerns about it being too hard to determine why my test started failing if we're merging too quickly, essentially. So, yeah, I think the concern would be that if you merge 100 PRs at the same time, then you have to search through 100 PRs to figure out which one regressed my job.
C
But I think we had, and I feel like everybody's very comfortable now, but really this was kind of just, you know, the idea that merging 100 PRs at the same time was too scary given where the community was. I feel like the community is pretty comfortable with batch merges these days, and so I think it'd be good to try and just merge as much as possible, and we could theoretically, you know, do cascading batches, you know, every power of two.
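A rough sketch of that power-of-two back-off, assuming a hypothetical helper rather than how the merge automation actually picks batches:

```go
package batching

// candidateBatchSizes returns batch sizes to try, largest first, halving on
// each retry down to a single PR. With 100 PRs in the pool it yields
// 100, 64, 32, 16, 8, 4, 2, 1.
func candidateBatchSizes(poolSize int) []int {
	if poolSize <= 1 {
		return []int{1}
	}
	sizes := []int{poolSize}
	// Start from the largest power of two strictly below the pool size.
	p := 1
	for p*2 < poolSize {
		p *= 2
	}
	for ; p >= 1; p /= 2 {
		sizes = append(sizes, p)
	}
	return sizes
}
```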
A
Sure. It would be useful to have data on who's actually impacted by this and how often it's happening, like who's actually bothering to fix failures in post-submit jobs, just how many people's lives we'd be making more difficult with this. The nightmare scenario that's always brought up as an example to me is the scalability jobs, which take 17 hours to run fully, and so if you're bisecting, that's a lot, just to jump between, you know, four or five commits in that log.
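To put rough numbers on that (the batch sizes are purely illustrative): bisecting costs roughly log2 of the number of suspect merges, so a five-commit window is about three runs of a 17-hour job, roughly 51 hours, while a 100-PR batch is about seven runs, roughly 119 hours, close to five days of wall-clock time per regression.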
A
Like I said, yeah, one of the slides I just showed you says we should ignore jobs that take longer than two hours to run; they're effectively meaningless, and whether people are actually making sure they go green is a whole different part of the grievance story. Okay, I'm way over time, I'm sorry, I'll just burn through the last couple of things. Erick, you have modified fejta-bot to automatically flag and close stale issues; thank you very much for putting that forward.
A
I think if there's a place where we document how the bot functions, that would be useful to point people to. Right now I think it's just in that kubernetes-dev thread, but I was able to understand pretty quickly the jobs that are hooked together, so it should be a pretty quick doc: a demonstration of, there's the bot that runs, these are the jobs that apply the labels, this is the job that closes things based on those labels, and so on. It's getting people's attention, which is always good.
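As a loose sketch of that flag-then-close flow, where the label names, durations, and reset command are assumptions about how such a lifecycle typically works rather than the bot's actual configuration:

```go
package stalebot

import "time"

// Hypothetical thresholds; the bot's real timings may differ.
const (
	staleAfter  = 90 * 24 * time.Hour // idle this long: add lifecycle/stale
	rottenAfter = 30 * 24 * time.Hour // idle again this long: add lifecycle/rotten
	closeAfter  = 30 * 24 * time.Hour // idle again this long: close the issue
)

type issue struct {
	lastActivity time.Time
	labels       map[string]bool
}

// nextAction says what the bot would do on this pass. Commenting
// "/remove-lifecycle stale" or freezing the issue resets the clock.
func nextAction(i issue, now time.Time) string {
	idle := now.Sub(i.lastActivity)
	switch {
	case i.labels["lifecycle/frozen"]:
		return "skip"
	case i.labels["lifecycle/rotten"]:
		if idle > closeAfter {
			return "close"
		}
	case i.labels["lifecycle/stale"]:
		if idle > rottenAfter {
			return "add lifecycle/rotten"
		}
	case idle > staleAfter:
		return "add lifecycle/stale"
	}
	return "skip"
}
```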
A
Steve, thank you for pointing out that somebody has proposed, or rather put together, a repo to house reusable test frameworks, and in the context of the discussion we had on the mailing list last week, around changing or expanding the scope of SIG Testing to include the actual tests themselves, I think that SIG Testing makes sense as the owner of the test frameworks.
A
So, like, the test/e2e framework, and whatever integration test framework there could be to make integration tests more reusable. It's not something that I personally have the capacity or bandwidth to drive; it's just something where I think it naturally makes sense for this group to own, much in the same way that SIG Apps is like an umbrella SIG but there are some projects underneath it, for Helm and charts and things like that, I think.