From YouTube: Kubernetes SIG Testing - 2021-10-05
A: To do while I do my thing: hey everybody, welcome. Today is Tuesday, October 5th. You are at the Kubernetes SIG Testing bi-weekly meeting. I am your host, Aaron of SIG Beard, aka Aaron Crickenberger, aka spiffxp, at all the places. We are going to adhere to the Kubernetes code of conduct during this meeting, and it is publicly recorded and will be posted to YouTube later. The Kubernetes code of conduct basically says we will all be our very best selves to each other.

A: So, having said that, today's agenda is pretty light. Let's see — regularly recurring topics: welcome any new members or attendees. I feel like I know everybody here; it's nice to see you, Paris, I feel like we haven't seen you in a while.

A: And then, with that, I will hand over to Cole to talk about replacing Bazel in test-infra. I'm gonna make you co-host in case you need to share your screen to talk about this.
C: There we go — it was sharing my computer's audio instead of my audio; I've got that fixed now. Yeah, so I just wanted to bring this issue to everyone's attention. This is something that Chao has done kind of the prerequisite exploratory work for, but that we'd like to recruit more help on.

C: So this is an effort to replace the Bazel build system in the kubernetes/test-infra repo with not-Bazel — primarily using make, with whatever language-specific build tools are appropriate for each use case. Chao has kind of broken this up into some individual subtasks to make it a little more piecemeal and approachable, but we'd really like some help to try to tackle some of these things and chip away at it. There are a number of things motivating this.

C: The most recent is a dependency issue that we were having between the Bazel rules and a Google Cloud client library that we'd like to use. We're working around that by copy-pasting for the time being, but that's obviously not a good way to be managing code. So we'd like to be moving towards a better build system — Bazel has kind of been a pain point for us for a while — so yeah, we're looking for help there.
C: I'm good — that's pretty much what we've got.

D: Actually, I would like to mention that copy-pasting is not going to work, because of the nasty dependency.

A: That is good to know. I get very excited about this, but I also feel like I don't know how much time I have to really dedicate to it — I'm definitely interested in helping out, though. I suppose one question I'm having is whether this is blocking the Google Cloud Build work.

D: I think that's an increase in scope, to be honest; just breaking Prow out is not part of this work.
A: Okay — I didn't think it would be; I just thought I'd suggest it. There's a part of me that wonders how much stuff we have in the repo that truly needs to be migrated, or whether it's worth re-examining some of the stuff that we have Bazel doing for us — we could just end up dropping it if we're not maintaining or using it.

C: So this comment right here — I also linked it in the meeting notes — this issue is the tracking issue for the bigger effort. All of these individual issues that Chao has broken it down into should make this a little more approachable, because these are specific changes — more specific things than just saying "remove Bazel" — so this should be a little more approachable, yeah.
A: This is a challenge I have when I'm trying to scope stuff for other people to pick up and run with: I feel like somebody might need to see a little more step-by-step work here. For example, I noticed the way that Chao has been working on building and testing Go without Bazel is to create a canary job that is run manually as a presubmit, and to keep running it over and over until it works. So I would imagine that doing the same for all of the other work would be necessary, and then maybe helping to enumerate things.
A: I think you're enumerating the rules, which is good, but just trying to figure out what all the things are that need to be tested, or that could break, or whatever — that's something I can try to help out with. But I feel like maybe we should point people at the way Chao is doing the Go stuff and use that as the prototype: basically, do this, but for a different language, or a different set of rules, or whatever.
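A rough sketch of what that kind of canary presubmit looks like in Prow's job config — the job name, image, and make target below are hypothetical, not the actual ones Chao used:

    presubmits:
      kubernetes/test-infra:
      - name: pull-test-infra-make-canary   # hypothetical canary name
        always_run: false                   # only runs when triggered with /test
        optional: true                      # does not block merge while it is a canary
        decorate: true
        spec:
          containers:
          - image: golang:1.17              # placeholder build image
            command: ["make"]
            args: ["test"]

Once it passes reliably, flipping always_run and optional is what promotes it to a blocking presubmit, which matches the promotion flow described later in the meeting.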
A: Maybe — yeah, I don't know. It seems like the approach that Chao's using for Go works relatively well. But as a new contributor, I think I'd have a tough time with just "okay, so I'll get rid of the TypeScript stuff" — what actually has to happen? How do I make sure that everything still looks okay? Since TypeScript is involved in Deck, I need to make sure that it runs locally and also that it works remotely — and credentials and stuff: what do I need to really drive this stuff?

C: Yeah, I think that makes a lot of sense. I think for some of these, that is kind of a lot of what the work is: determining how far all these things need to be changed, I guess, and what things need to be changed. But yeah, I agree that we could definitely clarify that more, and what we could be doing is filling out more of the success criteria, or, you know, clarifying things.
A: Is the intention to keep the existing Bazel test job — the one that sort of runs everything — around until we finish all of these things? Or, I guess, I'm trying to figure out how we could switch stuff over piecemeal.

C: So I'd expect us to keep the Bazel test job in place, but the set of things that the Bazel test job is testing should be shrinking as we migrate individual things over to being built with make instead. So whenever one component is done — or can be handled with make instead — we'll make that canary job blocking: you know, promote it to a proper blocking presubmit and then remove the corresponding tests from the Bazel configuration. That's kind of what I was expecting.
A: So there's a part of me that's hoping that getting the Go stuff landed — since most of the tests that do this are written in Go — might give me some amount of speed-up, but I'm still gonna have to wait, you know, 20 or however many minutes it takes to run the Bazel job before my PR will actually merge. Okay — but I understand why; that's simple enough.
D: Sorry — I have a different opinion. My plan is to actually migrate everything over at once, instead of scoping it down, because I'm not quite sure how feasible it is to peel it out layer by layer.

D: It depends on the familiarity and the expertise we have with Bazel. I think it might be easier to just remove it all at once, and I also don't think removing any individual component would reduce the time it takes to build with Bazel, because I think most of the time you spend is actually downloading the workspace.

D: Oh, I forgot to add the issue for the linter — yeah, I will create a linter issue.
A: Is there an intent to land this by a certain time frame? Is this the sort of thing where I should say we're committing to it for the 1.23 cycle, or is this more "we'll get there when we get there, we just don't know"?

C: I think that depends a lot on whether or not we're trying to do a single switch. I'm kind of curious about the motivation behind doing a single switch — it sounds like that's because there's a perception that it'll be difficult to remove things from Bazel piecemeal.

C: It's just what I was imagining — that we just delete the Bazel test targets. Do you think there's more to it than that?
D: I think if we just delete one of them, hack/update-bazel might not be happy, because it likes to generate everything. So I'm not quite sure how we can do that, because currently the single Bazel test is "bazel test //..." — that means Bazel tests everything. So if we can make hack/update-bazel happy by removing some Bazel targets, I'm totally fine with that; I'm just not sure how that would work.
G: It also generates these rules for, like, groups of source files. If you're doing "//...", you can just add "//..." minus some other target, and it will only include targets that were either under the wildcard and not removed, or are a requirement of one of the remaining targets — you're still going to have transitive dependencies.

G: If you tag targets as manual, they are only going to run as a dependency of something else, or if you explicitly include them; they won't be caught by wildcards. We've done this in Kubernetes before — it's pretty easy to do, and there's also pretty decent tooling to auto-insert this for generated rules as a post-processing step.
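A rough sketch of the two approaches G is describing — the target and file names here are made up for illustration:

    # 1. exclude specific targets from the wildcard run
    bazel test -- //... -//config/jobs/...

    # 2. tag a generated rule as manual so //... skips it;
    #    it still runs if named explicitly or pulled in as a dependency
    go_test(
        name = "foo_test",
        srcs = ["foo_test.go"],
        tags = ["manual"],
    )

Either way, the remaining "bazel test //..." presubmit keeps shrinking as components move over to make.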
C: I was just going to say: if we could do it all at once, that would be great, but I think it's going to be easier — and, you know, safer, and probably better all around — if we can do it incrementally.

G: I don't necessarily think it's going to improve time a lot — that remains to be seen — but just not having two different ways something can fail will be nice, and having confidence that we are actually running things the new way. I don't think it makes sense... we're clearly not going to implement this in one shot anyway.

G: We had a lot of headaches — one of our main motivations for removing Bazel from kubernetes/kubernetes was just that we already had to do everything twice, because we never had the full release on Bazel. Just having everything work two different ways for building and testing and whatnot is bad enough on its own; that's worth removing just for its own sake, regardless of presubmit time.
C: And we've been talking about reducing presubmit time, and how, you know, reducing the size of the Bazel job will help that, which is great — but that's not really the goal, because once we're done with this, the Bazel job will be gone altogether. So in the meantime, if the Bazel job still takes the same amount of time that it does today, I don't think that's really that big of a deal.
A: I don't know that I have anything else to add to this conversation, other than: I will totally help out where I can. It might even mean fewer images — I've kind of held off on migrating some of the Bazel-related images over to community infrastructure, so it might be one less image for me to migrate.

A: I can probably help in the areas that involve Python and bash, and maybe some of the postsubmits, just to stay out of the way of the Go side of things.
A: That's made out of a bunch of different components and all sorts of stuff — the complexity there kind of eludes me. Versus, at the moment over in k8s-infra, managing our Prow build clusters is just a set of YAML files; I don't have to worry too much.

A: I just use the generic autobumper tool that you all maintain to bump the images, but I don't know if all of that component stuff that I see inside of the Prow stuff is what defines all of the...
C: There's two halves to the Bazel magic there: one half is the publishing and the other half is the deploying. So all the component things you've seen — that's the deploying — but there are similar rules for releasing the images as well, and I'll admit it is a very confusing mess in there. So I totally agree with that, yeah.
A: Yeah, that's fine. Okay — maybe some of the ways we've been doing stuff over in k8s-infra can help; they're not amazing or fancy, but maybe I can share some of that over here. I'd been tempted to break some of the other pieces of this repo out into their own repos so that I didn't have to deal with Bazel for them, but I'm fine working to chip away at Bazel in this repo.
A: Like, it makes sense to get the config directory into its own repo, because sometimes when I'm looking at diffs for this repo — my most recent example is trying to figure out why a job is failing, and it could be that the image that's running the job changed. So then I go look at the auto-bump PR, and I click on the link that says, well...
A: ...here's the commit that includes all the changes that could have landed in an image — and I click on that, and it's like 50 job config changes and then one change down in the images directory, and I have to manually scan through the list of all those PRs and hope somebody wrote a good enough commit message for me to notice it. Whereas if the config changes were in their own repo, I'd probably have a much easier time seeing the actual code changes or image-related changes.
G: I think I'm kind of blurring scope here, but that should be pretty easy to do once we decouple building from resolving the images we're deploying — once there's no Bazel magic in the deployments, I don't see any reason we couldn't move the config. I think the last thing would just be that you would need to make sure you wire up running the linter across the repos.

G: Okay, I'll follow up separately about that — it's an unrelated thing — but we should still be able to pin those, and probably should. Okay, okay.
E: I have a question — hey Cole, if I'm hearing correctly, it sounds like y'all have this done and figured out; do you need other people to help with this?

E: Okay — I'm also happy to do other targeted outreach for you, especially if you have a specific profile that you're looking for, meaning like "engineer has to know Bazel, has to know such-and-such" — we can do some targeted outreach for you. So whatever you need there, we can talk offline and I'll get you cooking with some folks.
C: Cool, yeah — I think that could be particularly useful for the TypeScript stuff. There's a couple of things where I think we just don't have that much expertise anymore, so yeah, that could be helpful.

A: Cool — thanks for chatting about that. Like I said, I'm really excited about this.
A: More than anything... I think I'm sharing a window — it's this: Boskos right now uses apiextensions.k8s.io/v1beta1 to define its CRDs, and as of Kubernetes 1.22 that goes away. So I copy-pasted the notes from the deprecation guide, and then I literally just did the lowest-effort PR where I made those changes to the CRD resource — and Jeff and Alvaro helpfully pointed out that, well, it doesn't work right now, and we are going to have some questions around the upgrade story and stuff.
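A rough sketch of the shape of that CRD change — the group and names below are placeholders, not Boskos's actual ones:

    # before: removed in Kubernetes 1.22
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata: {name: widgets.example.dev}
    spec:
      group: example.dev
      version: v1
      names: {kind: Widget, plural: widgets}
      scope: Namespaced

    # after: apiextensions.k8s.io/v1 moves to a versions list and
    # requires a structural schema per served version
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata: {name: widgets.example.dev}
    spec:
      group: example.dev
      names: {kind: Widget, plural: widgets}
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true

The newly required schema is the part that tends to raise the upgrade questions mentioned above.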
A: So, as I find the time, I was going to go through the Boskos example deployment and see what the upgrade story looks like there for this resource. But I just wanted to raise the visibility on this, because at the moment, for the k8s-infra Prow clusters, we are running in GKE's release channels, which means our clusters just automatically get upgraded to newer versions.

A: So I view this as really important. It's not urgent right now, but I feel like as soon as 1.21 rolls out it's going to become urgent, because ideally I'd prefer to avoid having to pin our cluster to a numbered version — I'd rather get our cluster able to upgrade to Kubernetes 1.22 when that is ready and stable and production-ready and stuff.
D: I have a couple of questions about this. Actually, I'm not sure if versioning would help in this case — or is it desired?
A: Yes — you can see my window. I can tell you how we're doing it right now for k8s-infra. We're still, I think, in the very early prototype stages with a tool called conftest, which basically relies on Open Policy Agent definitions. Open Policy Agent is a thing where you can write policies in its language, called Rego — I forget what that's a derivative of, or inspired by — but the idea is that it allows you to define policies against structured data such as YAML files: Kubernetes manifests, Terraform resources, things like that. And so you basically have some people who keep track of the release notes — especially, check out this page, the API deprecation guide.
A: Every time there is a release of Kubernetes, this page gets updated, and then we have people — it's usually one person on our side, but I believe there have been some other folks who have contributed to this — write a policy that says, "hey, this thing is deprecated, and you should use this instead." We're doing the same thing here, where it's a warning right now, but the idea is that if I really cared a whole bunch, I could turn one of these into — I think it's an error instead of a warning — and it would start blocking our presubmits.
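For a sense of what those conftest policies look like, here is a minimal sketch — the rule and message are illustrative, not copied from the k8s-infra repo:

    package main

    # warn (rather than deny) when a manifest still uses the CRD API
    # version that Kubernetes 1.22 removes
    warn[msg] {
      input.apiVersion == "apiextensions.k8s.io/v1beta1"
      input.kind == "CustomResourceDefinition"
      msg := "apiextensions.k8s.io/v1beta1 is removed in 1.22; use apiextensions.k8s.io/v1"
    }

Running something like "conftest test --policy policy/ manifests/" evaluates it; switching the rule from warn to deny is what turns it into a blocking check.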
A: There are other tools out there — I think I literally just heard of one today — that might have other people maintaining other rules. I kind of like that we're using Open Policy Agent just for our stuff, just for the things that we care about. But the short version of that answer is: we pay attention to the Kubernetes release notes, we look for deprecations, and then we add tooling that warns about those deprecations.

A: And we make sure that we have a PR-based audit process that lets us know when our clusters are upgrading from one version to another. Based on that, that's how I'm going to know when our cluster hits 1.21 — and when I'm going to start jumping up and down and saying we've got to fix Boskos, it's real bad.
D: That answers my question — that's awesome, yeah, it does. I feel like this is a generally helpful tool that other people might also benefit from, sure, because maintaining that list should not be duplicated work, right? If someone is already inspecting the release notes, why should someone else also need to do the same thing?

A: I agree with you, and that's the thing — I guarantee that whatever we're doing for k8s-infra, we're probably duplicating, like, ten other tools that people in the ecosystem have already written to do this sort of stuff. I think for us it's a learning experience, to use Open Policy Agent a little bit better, or just at all. I'm gonna stop my share. The dream I have with Open Policy Agent...
A: It's the sort of thing where you can define your policies once, and then there's, essentially, a webhook admission controller where you can apply those policies to deny resources hitting your cluster at deploy time. But you could use a tool like conftest to apply those same policies at build time, merge time, whatever.

A: So I've always liked the idea where we run a bunch of static analysis against our job configs — for example, it's a bunch of Go unit tests — but then we don't take all of that wonderful analysis and prevent jobs that would fail those tests from actually landing in the cluster. We're just sort of trusting the humans who can manually create Prow jobs, or could manually apply configs to the cluster.
A: Like, I'm happy to share our deprecation policy stuff, but I'm sure if we did some looking around we could find something that somebody else maintains and that is far more fully featured than just the list of API deprecations we happen to be tracking.

A: Okay — Paris, over to you.
E: Hey y'all — I am here for meta reasons. I serve on the governing board — I'm the Kubernetes representative on the governing board — and the governing board deals with money and other fun operational activities like that. So what I'm doing is trying to get SIG needs lined up for staffing, especially long-term staffing.

E: So I wanted to hear from the group to see if there are any subprojects that, long term, we feel need more folks. I'm not talking about interns; I'm talking about engineering resources — whether that's with service providers, whether that's with end users, whatever it is. We need to start getting the needs together so that we can communicate our needs to these folks, especially from a long-term perspective.

E: And I'm happy to leave it at that, too, if you want to think about it — like, I'm happy to drop that and run away. So that's where I come in; that's what I'm most concerned about, and I want to make sure that y'all are healthy and fruitful in your endeavors forever.
E: I really think y'all need an onboarding program — sorry to cut you off, Aaron. I think you all need an onboarding program, and I think we need a person to do that. I think y'all are super busy with the stuff on your plates right now, but I think getting more documentation and more onboarding together, to onboard more engineers into the group, would be super fruitful and helpful.
A: So there's a part of me that thinks the removal of Bazel will help tremendously, because I think any first-time contributor to the project who comes to this repo and goes "what is this Bazel stuff?" bounces away pretty quickly.

A: As for people not believing that they can take ownership of, or contribute to, any of the test infrastructure because they believe we've got it covered — that is factually incorrect. I have a list — I have hundreds of issues right now that describe how things are broken, or too painful, or could be better — and I could point people at any of these, but they all feel big and murky.

A: Like, will the project blow up tomorrow if none of these things are addressed? No. Will it crumble over time...
A: ...and get harder to contribute to? Maybe. So I have personal opinions on that stuff, but I think just lowering the barrier to entry — like, I'd love for a new-contributor onboarding thing not to have to include the word Bazel. If it's just "use go test like you are used to everywhere else", or "run make test or make build", whatever — that will hopefully simplify things a lot.

A: Yeah — the other, trickier policy thing that I have a problem with is that we as a project waste a ton of money on our tests, or we waste a ton of resources, I guess: money is one measure of that, and time is another measure of that. It's just, whatever — it doesn't cost actual money, it just costs us our time, but we'll just wait — and I feel like that's unfair to the contributors.
E: We need most people's time — take "new contributor" off the table. I want you to take that, I want you to erase that word, and I want you to focus on engineers who have Kubernetes experience, who will start to contribute because their employers tell them that they can and should. So let's take the idea of people coming to you with no experience in Kubernetes and/or Go and put that off to the side.
A: Okay — but let me go back to something that's a little less project-specific. The other problem — and I've just been experiencing the pain of it lately — is that it's too hard to figure out what changed. It's way too hard to answer that question, because "what changed" could be this repo, or one of three or four other repos.

A: It could be the image — then I've got to figure out what changed inside of the image — or it could be the infrastructure underneath, and answering all of those questions requires a different set of steps that usually involves a lot of clicking and copying and pasting, and maybe manually writing some tooling or scripts to go investigate things. Because, for example, we don't have a quick way to say "tell me the pass or failure status of all the presubmits that ran against the release-1.22 branch of kubernetes" — we just don't. There are all sorts of problems like that which would reduce the pain of troubleshooting if they were solved.
A: But again, if they're not solved, the troubleshooting will just stay like it is now, which is that there are a few technically deep people across the project who can do the troubleshooting — and when they go away, I don't know who will be able to replace them.
A: Yeah — literally this "make it work without Bazel" stuff: I'm hoping that a lot of it could be done by somebody who knows build-and-test stuff pretty well but doesn't have a ton of project experience, as long as we can give them a "well, this is the command you run to see if it works; just keep running that command over and over until it works." That's a helpful way to get people with less project experience involved.
F: Yeah, I'm wondering if there might be two things we should separate. One is that we have a concrete kind of project right here with the Bazel removal, I mean — and the other thing that was mentioned is finding people to become maintainers. The second thing is an entirely separate bag, I think, and I would say the main issue with that, at least for Prow, is that it isn't really a project...
G: But if they're getting paid to actually run it, I think that falls in the category where it might make sense to stick with it. Whereas as someone who maybe just has an interest in Kubernetes or something, it's going to be difficult to keep up with, unless you actually have a good reason to be dealing with Prow day-to-day, as opposed to whatever CI stack you actually use at work.

F: Also, for most of these other users, there is little to no contribution back to the project. I cannot say why, but this is how it is today. Maybe we could try to figure out why this is happening — maybe they're just happy and don't want anything to ever change, or maybe it's other reasons, I don't know.
E: The rumor is that Google and Microsoft — Google and Red Hat — have this thing taken care of, you see, and that is clearly not correct. But I really do think that it has a lot to do with the fact that you don't have an onboarding program. A lot of folks have come to this meeting thinking that you all have it taken care of, and that is incorrect.

E: So that's why I think one of the highest priorities for this group is to get an onboarding program together, so that people can feel like they can jump in and get started, beyond just "help wanted" issues. But as far as growing maintainers — Prow is the perfect example, because only a few of y'all take care of that. So do you all think that you're going to be taking care of it in three years?
E: Like, do you see yourselves as Prow maintainers in three years? And if not, then let's talk about how we can talk to these other 24 projects about growing maintainers for Prow — talk to them directly about how we need Prow maintainers to grow. Obviously it's going to take six months to grow people into maintainer-plus roles, so that's why I'm thinking ahead and thinking long term right now, and not necessarily short term, like "help wanted" issues.
A: The testing-commons subproject we ended up kind of retiring, or just putting into maintenance mode. To my knowledge, there's nobody who really has ownership or oversight of good, rigorous testing practices for the kubernetes/kubernetes repo; it's always been like this.

A: There is a group of people who are working on an e2e framework that is outside of kubernetes/kubernetes — I think there's some collaboration between some people from VMware and Apple and Amazon on the kubernetes-sigs e2e-framework — but there are zero plans to fold that back into kubernetes/kubernetes.
A: ...and the existing tests are difficult to read and write. We had Onsi come by a little while ago to talk about what it would look like if we used Ginkgo v2, and that might solve some part of it, but it definitely won't solve the problem that people have copy-pasted a lot of stuff, and I have no idea which parts of it are good or bad as far as patterns for doing a thing to a cluster, then waiting to see if that thing had an effect, and then doing another thing based on that. The tests are a mess, and nobody is in charge of cleaning them up.
A: Everybody's just kind of hoping that it doesn't break too badly, and nobody's working to simplify it.

E: I guess I'd challenge y'all, as maintainers, to think about what I said, which is: where do y'all want to be with the project as maintainers — reviewers, approvers, subproject owners, that's what I'd label a maintainer — in three years? Do you all want to continue to do this in three years? If not, let's think about how we're going to take care of the stuff that you work on right now.
E: But that's it — that was it. I would love — yeah, DM me with any thoughts that you have. Chairs, I'll reach out to y'all separately anyway, but let's get to thinking.

A: Cool — and that ding means we're at time. No, it means somebody received an email, but we are out of time. Thank you all for showing up today, and I look forward to seeing you all in two weeks at our next SIG Testing bi-weekly meeting. Have a happy Tuesday, everybody.