From YouTube: Kubernetes SIG Node 20210317
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Today I think we can start with the agenda item and then we can go into triage. Elana, do you want to kick it off?
A
Oh, I think Francesco is having audio difficulties with his headphones.
B
I think so. I put in earbuds. Can you hear me?
B
Great, yeah. I wasn't sure; I think it's just his audio. So I threw something on the agenda here. The first item is to talk about what's going on with the node pre-submit tests. This was prompted by a discussion with Ben, who I pinged. He said he might be able to make it today, but maybe we can discuss this last, because I'm hoping to see if he can make it.
B
Yeah, basically the TL;DR of this one is that SIG Node agreed some time ago, like last year, that we wanted to add CRI-O as a pre-submit test, so that as we move away from just doing the dockershim stuff we have one containerd pre-submit and one CRI-O pre-submit. How we want to do that in the long run is still open, like whether those are going to be separate jobs or one job in the same cluster but with different nodes.
B
I don't know the exact details of how we're going to end up doing that; I don't think that's really settled right now. So basically it was: oh, we should discuss this at the SIG Node CI meeting next week. And yeah, we will see if Ben can make it. It's still waiting on a final approval, so I think I pinged Aaron about that as well.
B
But Ben is cool with adding the job in the interim, and we're going to start by adding it as a pre-submit that doesn't report but gets run. Then, once we've got a bunch of data, we'll add it as one that shows up and is skippable, and then hopefully, after that has good data, we'll add it as required, because we've seen people breaking CRI-O. So that's that, but yeah, Ben had some really great ideas.
B
We chatted last week about what we should do with pre-submits and the future of pre-submit tests, and how we want to have pretty strict criteria in terms of what we're introducing as pre-submits. So I think that we meet the bar here, but he had some really good discussion points.
B
Yeah, well, and the other thing, on the flip side: I think that, like Ben said, one of the things we should be looking at doing is trying to remove as much of the implementation detail as possible from the test names, because, for example, I think we have some e2e job named something like blah-blah-pre-submit-ubuntu-containerd-node, and of course the problem is:
B
Then, you know, if you have that stuff in the name, people come back and ask: well, you're doing Ubuntu, what about Debian? What about Fedora? You're doing containerd, what about CRI-O? What about this, what about that? So I think there was some suggestion that maybe it should just be like runtime one, runtime two.
B
That way people don't pester us about their particular distro or container runtime or that kind of thing. The goal is to demonstrate that the pre-submits are running on multiple CRIs, not that it's about this particular CRI, or that the pre-submits mean anything about supportability.
A
Okay, are those two issues also related to that?
A
Yeah, this one we discussed a long time ago. Oh yeah, you just said it.
B
Really great question. I don't know what's using NodeFeature right now. For the liveness probe stuff I actually ended up using Feature, not NodeFeature, because that way I could use the pre-existing alpha jobs to test them, because the node alpha job was broken when I was trying to use it. So I figured I just want this to go into the regular alpha test suite, and in order to do that I just used Feature.
B
I would be totally fine with saying: let's get rid of all the weird node-specific things and roll our alpha tests into the alpha tests that run on every PR, because I think that might be a better way to do it. But...
A
Yeah, this specific issue was about the duplication of NodeFeature and NodeAlphaFeature. For every feature we had two tags, like NodeFeature:Foo and NodeAlphaFeature:Foo, and then, when you graduate from alpha, you just remove one of the tags.
A
So the idea was to just have Feature:Foo and then the alpha tag as a separate tag, yeah.
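For illustration, a minimal Go/Ginkgo sketch of the two tagging schemes being discussed; the feature name "Foo" and the spec text are placeholders, not real tests:

```go
package e2enode

import "github.com/onsi/ginkgo"

// Current scheme: a node alpha feature carries two tags that must be kept
// in sync, and graduation out of alpha means deleting the alpha tag.
var _ = ginkgo.Describe("[NodeFeature:Foo] [NodeAlphaFeature:Foo] Foo", func() {
	ginkgo.It("does the Foo thing", func() {
		// ...
	})
})

// Proposed scheme: one feature tag, with the maturity level expressed as a
// separate tag, so graduation only touches the maturity tag.
var _ = ginkgo.Describe("[Feature:Foo] [Alpha] Foo", func() {
	ginkgo.It("does the Foo thing", func() {
		// ...
	})
})
```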
A
I don't even remember what the idea behind the Feature tag was. My initial thought was that whenever you have a feature tag, there was a job that was enabling all the features, and that's why we've been running them separately, like jobs that enable alpha features or beta features, or just enable all features. But now those features are enabled by default, so if you're out of beta, and even in beta, you don't need to enable anything explicitly.
A
Yeah, it will be a very good first step. The protection one, I don't remember myself which one that is, 528?
A
Okay, on this one, yeah, this is still on me. I just didn't have time; it's very trivial. Who put it on the agenda? You just pulled it out of...
A
Basically, I need to take a test and make it such that it doesn't use any Linux-specific features, and that it doesn't just check that the pod gets scheduled with the runtime class but tells us that it's actually running. So yeah, it's just copy-paste and cleaning it up a little bit, so it shouldn't be too hard, I think.
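As a rough illustration of the kind of check being described, here is a minimal sketch (not the actual test): it creates a pod with a RuntimeClass and waits until it actually reaches Running, using no Linux-specific fields; the function name, namespace handling, and image tag are assumptions.

```go
package e2enode

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// runsWithRuntimeClass checks that a pod using the given RuntimeClass not
// only gets scheduled but is actually started by the container runtime.
func runsWithRuntimeClass(ctx context.Context, cs kubernetes.Interface, ns, runtimeClass string) error {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "runtimeclass-test-"},
		Spec: v1.PodSpec{
			RuntimeClassName: &runtimeClass,
			RestartPolicy:    v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // multi-arch/multi-OS test image; tag is a placeholder
				Args:  []string{"pause"},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Wait until the pod is Running, not merely bound to a node by the scheduler.
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, created.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return p.Status.Phase == v1.PodRunning, nil
	})
}
```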
B
I don't know if you watched the recording; I should catch you up on what we did last week. We basically went through the whole board, and every single thing on there we either closed, assigned to someone, or pinged about.
A
Cool, okay. I don't think the recording is available that fast, Derek. You froze, Derek. Sometimes there's a delay uploading recordings.
A
We still need to ping people about these LGTM'd PRs, so they're likely test-related.
B
Are all of them... the ones that need an approver, do they have milestones set? It looks like mostly no. We need to determine whether we want them in 1.21 or not, so maybe we could do that today and make a call: in or out.
B
Oh, yeah. Though, is this a comment change? I think we need a top-level approver for that one, or something like that. Okay, so I'll put a milestone on that one.
A
And this one is unclear, because Jill has a question from Clayton, yeah.
B
There are a couple of these sorts of tests, where there are these one-line changes that I've seen come in for review, and I'm kind of like: I don't know if this should be considered for inclusion in test freeze or not, because one line can sometimes be very...
A
Yeah, we still have a PR on the SIG Node main board where somebody removed an empty if block that was added by Derek a long time ago.
B
It has no assignee, I think.
A
And do I need to hide it, or who does that?
B
Yeah, I can archive it on the other board. Okay, for some reason there's not a handy shortcut.
B
Okay, and I will archive this from the SIG Node board.
B
It's shared, so sometimes we might not have enough approval powers, but if they're just touching stuff under the node folders, we do have enough approval power. So it depends on what they're touching.
D
Yeah, I talked with Rodrigo; he's the one that is bringing this up, so I'll talk with him again.
D
Yeah, I think we can hold this again. I'll talk with him; there's another one to enable it, and I'm not sure if it makes sense to bring this back.
A
Yeah, can you please put down the history? I can review it; it seems to be fine, I just need some history.
A
And it's quite aligned with what I want to do with the tags. See, we have a Feature tag, and I wonder whether we need this feature to actually be there, I mean, as a tag.
A
Okay, I think...
D
That's the same stuff. Oh, Sergey, I split these into three PRs, so that's the understanding there.
A
So we triaged these items. Any other topics for today? Do we want to discuss more about pre-submits?
B
Do we also want to look at anything that doesn't have an assignee, or, sorry, anything that does have an assignee, but the assignee is, like... Basically, my worry is the people who are like: oh, the bot told me to assign Derek, and then Derek will not look at it in any amount of time.
A
Yeah, I think he knew. I think there is a heavy reliance on timing, and our end-to-end environment is not known for being very fast.
A
If there are no more topics, let's switch gears; I'm going to triage the SIG Node main board.
A
So, Elana, you said that SIG Node a year ago decided to do CRI-O as well for pre-submits?
B
Yeah, I think if you scroll down I even have a comment linking to the agenda dates where we talked about this. But basically there had been a bunch of stuff, like people had gotten stuff in that broke CI for CRI-O, so SIG Node decided: let's add a CRI-O pre-submit job. But then we didn't do it until now, so we went back and did this, and I think Aaron, I just pinged him about it, because we need another approver for this one.
B
So yes, if you keep scrolling, yeah, there are the links to the notes. I added that to this week's agenda and linked the old agenda notes from when we had the meeting, and yeah.
B
There was a thing that needed to get fixed, and then I think this one's just been kind of sitting. But Ben and I had a long discussion about what we do about pre-submit tests and who owns them and that kind of thing, so now Ben is here and can talk to more people than just me.
E
Hey, yeah. So, where do I start?
E
So something we've been working on across SIGs is reducing the number of pre-submits. We had a lot, they're flaky, and for every pre-submit we have, we can't really share builds. In periodic jobs there's typically a continuous build that uploads builds, and then all the other tests just consume that build and pick it up instead of cloning the repo and building. In a pre-submit, though, whatever you need to run the tests, we have to build that for each PR.
E
For e2e that looks like building pretty much everything, so that is a lot of extra load, because we're doing that on every push to every PR in the repo. We already have something like 13 that we're still running, and we're trying to cut that down. As of today there is no written policy for this, and in fact anyone that owns any config directory for prow can just make a pre-submit for any repo and make it always run, blocking and required for merge.
E
That's something we really don't want going forward, and we are trying to identify who owns this and how we can establish policy for that, probably over the next quarter or two; we've known we need to do this. In the meantime, though, the general direction we're looking for is deduplicating things as much as possible and trying to cut down on run times where possible.
E
We also want to prefer things that people can realistically run themselves: if it's going to block their PR, we'd like them to be able to actually invoke it. Node e2e is kind of a bit of an exception there.
E
Node e2e and scale are something of a special case, where we know we're getting some signal there but maybe someone doesn't actually have the credits to spin up a whole bunch of cloud VMs for these tests. Pretty much everywhere else we're looking for unit tests, or kind clusters, or something that we can ask a contributor to debug, and we want them to run in at least under an hour.
E
We would really like 30 minutes if we can get it, because the time these take to run affects the merge rate we can get. If there's even one flake, we have to run them again; we're already looking at two hours just to validate one PR, so the longer it gets, the much worse this gets, and any long pre-submit causes problems. So what I'm asking with this is, for the long term:
E
If we can, it would be great to turn this into one pull-kubernetes-node-e2e that runs on different VMs in parallel with different configurations, covering whatever we want to cover. I imagine in the future that includes things like cgroups v2.
E
Maybe we want to cover crun instead of just runc. To do that, though, we don't want to keep adding a new job that goes and builds; instead, what we should be able to do is build the node e2e runner and the kubelet and any components we need once per push, and then fan out to N remote VMs.
E
The other thing we can do here: we're trying to avoid saying, oh yes, we test this specific thing, because what we're really doing is testing Kubernetes. The moment we start advertising, like, oh, we test, I don't know, Calico, then Cilium is going to come and say: wait, why aren't we in pre-submit? So we want to avoid people looking at this as: oh, we're testing compatibility with these.
E
We can't test the full matrix of these things, the release versions, all of the C*I interfaces. We just want to make sure we test enough of these things that we're confident in the Kubernetes side of them. So, similarly, we're looking to clean up the naming in the future so that we're not saying, like right now:
E
We have pull-kubernetes-e2e-gce-ubuntu-containerd, which is a mouthful and also suggests that we're testing Ubuntu and containerd, whereas those are just implementation details. We don't care that Kubernetes works on Ubuntu on every PR; we're using Ubuntu because it was the easiest way to get a cloud VM where we could install arbitrary versions of the CRI, so we could have newer versions instead of whatever COS happens to be shipping right now. But because we let that name in, now anyone that sees a PR thinks:
E
Oh, we're testing Ubuntu. So we'd like to get to the same point with the CRIs: we don't want people thinking, I don't know, that we're testing containerd at head. We're not doing their CI for them; we are using these things to make sure that the kubelet side of the CRI is in good shape. So the other benefit of merging these jobs down is that it gives us a clean answer for dropping all of those dependencies out of the test names.
B
Yeah, I talked about that a little bit earlier in the meeting before you joined, and I suggested similarly: the goal here is not that we have containerd and CRI-O and all of the other container runtimes we'd ever want to support thrown in here. It's more like we have runtime one and runtime two, and that way it's between, you know, CRI 1 and CRI 2.
E
We also want to be very clear that the versions of these things we run are totally at the discretion of the SIG that needs to test them, and we're not going to do something like: oh no, we also need to run a dependency at head, because what if we break them during their development, or something like that. That's their problem; the vendors and extensions should deal with that. We really don't want the test matrix of all the CSI, CNI, CRI combinations, but obviously...
E
Like I said, we definitely want to get some of these things running so that we don't just find out: oh, it only works here. We've found that sort of thing before: when we only had kube-up in pre-submit, we found e2e tests that only work if you have the firewall rules that kube-up generates. We don't want to get into that sort of situation.
E
So, in the meantime, I've said: let's just go ahead and merge the CRI-O one so we can make sure we're getting that coverage. But for the desired end state it would be really great if we could merge these down, and if we can get the different VMs running concurrently as well, so we can keep the test time down as we add more of these in the future for the different coverage scenarios we need.
E
We've had a pretty big push to do this for some of the other SIGs' jobs, but there's still definitely more to do there. Like I said, we still have something like 12 or 13 jobs that run on every PR and are actually required. It's a lot.
A
So your suggestion is to go ahead with this PR, and then, I mean, I'm trying to understand what will motivate the work to do this cleanup and make sure that we run a single node e2e test with both runtimes.
E
So I'm not sure how straightforward it's going to be to get the node e2e runner to a place where you don't have all of the parameters going into it assuming that all the VMs are homogeneous in terms of runtime. Typically the jobs I see today do have details like: oh, I'm using containerd in this node e2e run.
E
Ideally we could get to a point where that's a detail of any particular VM that we're running the suite with, and we can run multiple of them. But because I'm not sure how much work that's going to take, I don't want to say: oh, we have to do that now. Further, we don't actually have written policy for pre-submits. There are a handful of people like me or Aaron that are kind of running around keeping tabs on what's going on in Kubernetes pre-submits and saying: hey, wait a sec!
E
Don't add this flaky thing directly to pre-submit, or something like that. But we want to get to a place where there is some official policy and even some enforcement, to prevent completely arbitrary things being added, or tests that are extremely slow, or something like that. We have this for release-blocking jobs; we never got it in place for pre-submits. So since there isn't anything in place, it's not reasonable for us to say: oh, here are our demands for getting this pre-submit in.
E
This is clearly valuable signal, so I'd like to say: let's go ahead and merge it. We have something that we know we can run and get that signal now. But we're sort of asking to head things toward this state down the line, and we will plan to try to create some concrete policy; we haven't figured out who definitely owns this yet.
E
But a few of us have started thinking about what we have been doing ad hoc to get things to a better place, and these are the types of things we are looking for: fewer different jobs that we're running, where possible, and less overall run time, like wall-clock time, because that affects the merge rate and the developer workflow of sitting around waiting for tests.
B
So, on the node side, in terms of follow-up actions we want to take as a result of this: to me there's an obvious two. One is to rename the jobs, to take the specific platforms out of the job names, and two would be to figure out some sort of long-term strategy to only have one node e2e pre-submit. Does that make sense? Is there other stuff that we need to do?
E
I think that would be it. I might even suggest that it would be pretty reasonable to punt the renaming until we bring it down to one pre-submit. It's actually a rather painful dance to rename blocking pre-submit jobs, because they need to have a status on all of the PRs. So, for example, when we switched off of Bazel, we actually started running a non-Bazel unit test job for a bit, just reporting but not blocking on it, and then switched which one was blocking, and then spun down the other one.
E
There is a manual way: we can have someone with a token with sufficient privileges on GitHub relabel all the statuses on the existing PRs to a new name, or something like that. But it is manual toil and maybe not strictly necessary; it's more of a long-term state. I mean, for example, the ubuntu-containerd naming has been around for, I think, a couple of release cycles now. We should clean that up, but for now we actually have two different things and we need to understand the difference.
E
I think it's fine, it's just more of an end goal: when we're able to fold these back together, then obviously we don't need to name it pull-kubernetes-node-e2e-containerd-crio or something like that.
E
Yeah, so, you know, we'll be in a place where we have three for a while; it'll be helpful when we can fold those down. We've also done similar things elsewhere, where they maybe don't necessarily need all these different kinds of coverage. Like, we've talked to scalability and said: okay, you have these different scale pre-submits; what's the one you really need to be in pre-submit? Because we can also get really good coverage on most of these things in post-submit, and it's actually much easier to monitor.
E
You don't have the pre-submit noise from PRs that don't even build. So for most testing that we're doing in Kubernetes we're not expecting it to be in pre-submit; in pre-submit we're expecting things that are relatively quick, with a very high signal ratio, ideally things that contributors can run themselves where possible.
E
Think of things like static analysis, which, if we only ran it in post-submit, would be failing all the time, so it gives us value in pre-submit. Whereas some of the more esoteric features, like GPU support, may or may not need to be in pre-submit; that's probably not code we're touching super often.
E
Tentatively, Aaron and I have been sort of drafting: does this even make sense, what would it look like, and starting to figure out who we would need to go talk to about this. I think that's going to be next release cycle, no guarantees yet, but that's sort of what we're pretty strongly planning to do.
E
But I think we'd also want to be pretty light-handed with existing stuff. It'd be much more likely to apply to someone trying to add a new thing, and that's sort of what happened with those policies: we didn't immediately drop stuff that wasn't meeting the requirements.
E
It would be great if you could do this, you know, you've got to...
B
Yeah, that makes sense. I...
E
I think, you know, right now is not a reasonable time for anyone to do this. I would love to see some work towards this by, like, the next test freeze, a full cycle from now, but especially given that we may not even have policy fully in place, it's understandable if it takes a bit longer than that.
A
Yeah, I mean, since you're here: is there any work planned, or discussed, to kind of distinguish which end-to-end tests to run and which not to run, or at least which pre-submits? So maybe, if it's a one-line change for some type of fix, then don't run all the pre-submits, or at least not the heavy pre-submits.
E
Yeah, so that's actually been a tricky ongoing thing. One of the things that tends to drive it right now is just speed and reliability. So, with the main e2e suite, we have tests that are tagged as flaky, and we still run those somewhere in CI, but that's something that we're going to exclude in pre-submit. So we have some kind of rules around that.
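For context, a minimal sketch of how that selection works in the e2e suites: a spec carries a [Flaky] tag in its Ginkgo description and keeps running in periodic CI, while pre-submit invocations pass a skip regex so it never runs there (the sig and test names below are placeholders):

```go
package e2e

import "github.com/onsi/ginkgo"

// A spec tagged [Flaky] stays in the suite and still runs in periodic CI
// jobs, but pre-submit invocations exclude it with a skip regex such as
//   --ginkgo.skip='\[Flaky\]'
var _ = ginkgo.Describe("[sig-node] Some feature [Flaky]", func() {
	ginkgo.It("eventually behaves correctly", func() {
		// ...
	})
})
```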
E
The other thing that usually happens, though, is that outside of things that are known to be flaky or extremely slow, we just sort of run everything, and then keep an eye on the test times, and when we see that they're going up we look into it and address it. So, for example, SIG Storage had a lot of e2e tests, and the time taken by them overall was becoming a significant fraction of pre-submit.
E
So we worked with them to identify what they thought were the most important ones and to split some of the rest off into a separate test suite that they run sometimes, like when they're particularly working on storage stuff. So something like that may also make sense. It's a bit difficult to state as a blanket policy; it's kind of like the SIGs are going to know best: okay, when I'm testing the kubelet...
E
I really, really, really want to know that this thing works, but maybe there's some other feature that's not high-traffic. Generally, what we're looking for in terms of what needs to be in pre-submit is something where, when it's only in post-submit and it's in a release-blocking job, we're constantly seeing the release-blocking job go red because of bad PRs.
E
So what we wound up doing was adding our own linter tool that checks that the code type-checks for all platforms, so that the Go compiler can parse it and so on. That was something that has to be in pre-submit, because it is really frequent that someone makes a PR that doesn't compile on Windows anymore or something, because they added something to an interface and only implemented it on Linux.
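To illustrate the failure mode being described (all names here are hypothetical): a method gets added to a shared interface but only implemented in the Linux-specific file, so the package still builds on Linux while type-checking for Windows fails, which is exactly what a cross-platform type-check step in pre-submit catches without needing a Windows cluster.

```go
package hostutil

// Mounter is the shared interface; a PR adds DeviceOpened to it.
type Mounter interface {
	MountPoints() ([]string, error)
	DeviceOpened(path string) (bool, error) // newly added
}

// Each platform has its own implementation file, e.g. mounter_linux.go and
// mounter_windows.go, selected by build tags, with a compile-time check in
// the shared file:
//
//	var _ Mounter = &platformMounter{}
//
// If the PR only adds DeviceOpened to the Linux implementation, the package
// still builds for linux/amd64, but type-checking it for windows/amd64
// (for example `GOOS=windows go vet ./...`) fails, even though every
// Linux-only CI job stays green.
```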
E
And so something like that, where we have a clear signal: we didn't have this in pre-submit, we have a quality release-blocking job that meets solid requirements, and the release-blocking job is not stable because PRs keep coming through that break it. Then we need the pre-submit.
E
Otherwise, for most things we probably actually default to: it doesn't need to be in pre-submit, get it out of pre-submit, stop relying on pre-submit, add a periodic job, add alerting to your dashboard, and if something breaks, push a fix. Hopefully it's not frequent, but if it is a really frequent thing, then we'd be better off if we caught it in pre-submit.