From YouTube: Kubernetes SIG Testing - 2021-09-21
A: Hi everybody, today is Tuesday, September 21st. You are at the Kubernetes SIG Testing bi-weekly meeting. I am your host, Aaron Crickenberger, aka aaron of sick beard, aka spiffxp, at all the places. This meeting is being recorded and will be posted publicly to YouTube later, where you can all watch yourselves adhere to the Kubernetes Code of Conduct by basically being your very best selves to each other.
A: On today's agenda we've got a couple things to discuss. I'd love to give as much time to other folks' items first; if there is time left over, I'll go over where we are with the items that we have proposed for the 1.23 release cycle.
B: Interesting, I don't know why I can't see it, but anyway. Hi everyone, good morning if you're on the West Coast; I'm not sure which time zone you're on. This is Chao from Google. I've been working on Prow for who knows how long; I kind of forgot.
B: Yeah, that's long enough, or not long enough; I can keep working on this, it's a fun project, to be honest. So basically, I'm not expecting this idea to be controversial or breaking to any of the existing Prow functionality.
B: It's more like an opt-in feature: a user can schedule a Google Cloud Build from Prow without wrapping it in a bash script that monitors the Google Cloud Build. This is pretty much the idea. We have two basic reasonings behind this. So first of all, we as Google's Prow maintainers or operators...
B: Well, especially the users who are not super familiar with Kubernetes, and we do have users like that. So the first purpose of this proposal is: we want to be able to support them without the toil of maintaining a GKE cluster, or any Kubernetes cluster.
B: The other thing we would like to make simpler is the security concern. We have seen quite a few teams use docker-in-docker to do docker operations, which is a security concern in Prow jobs, especially internal ones; each company has things that are private, or whatever they want to keep secure. It's actually an anti-pattern. So we would like to promote Google Cloud Build for these scenarios instead of using docker-in-docker.
B: So, if you are not already familiar: the current most popular runtime for Prow is Kubernetes (the agent type is kubernetes), and we will create a new agent type called cloud build. The current pod spec in the ProwJobs will be replaced by a cloud build spec, which will match Cloud Build's APIs.
B: So, basically, the idea is: we will let users embed a full-fledged cloud build YAML definition into the ProwJob, and if they say this job is using the cloud build agent, then Prow will be smart enough to schedule this cloud build onto the Cloud Build project it specifies. To do this, we'll also add a GCP project field.
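As a rough illustration of the shape being described, a minimal sketch only: the field names, the agent value, and the exact types here are assumptions based on this discussion, not the actual design.

```go
// Hypothetical sketch of how Prow's job config types might grow a
// Cloud Build agent. Everything here other than Agent/Spec is assumed.
package config

import (
	cloudbuildpb "google.golang.org/genproto/googleapis/devtools/cloudbuild/v1"
	corev1 "k8s.io/api/core/v1"
)

type JobBase struct {
	Name  string `json:"name"`
	Agent string `json:"agent"` // "kubernetes" today; the proposal adds a cloud build agent

	// Used by the kubernetes agent: the pod to schedule on a build cluster.
	Spec *corev1.PodSpec `json:"spec,omitempty"`

	// Hypothetical: a full-fledged Cloud Build definition embedded in the
	// ProwJob, matching Cloud Build's own API types.
	CloudBuildSpec *cloudbuildpb.Build `json:"cloud_build_spec,omitempty"`

	// Hypothetical: the GCP project Prow schedules the build onto.
	GCPProject string `json:"gcp_project,omitempty"`
}
```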
B: Just a little more context here: Google Cloud Build can only clone a single repo as part of triggering, so we will still use Prow's triggering. For PRs it will be through GitHub webhooks, and for periodics it will be just like the current cron tab we have; and Prow by nature will also support Pub/Sub to trigger a Google Cloud Build.
B: And due to the other limitations of Google Cloud Build, we will pretty much reuse all of the Prow pod utilities, like clonerefs, initupload, and entrypoint, all of these pod utilities that we invented for running Kubernetes pods. I've done a POC, a proof of concept, on GCB, and they can just be used for a Google Cloud Build out of the box. So we will reuse this set of utilities for cloning source code, for uploading artifacts, for capturing exit codes, etc.
B: I think this is pretty much what I want to cover here. Does anyone have any concerns or comments, or would anyone like to get involved as a reviewer?
B: So, yeah, I can't say no. Sorry, did I interrupt you?
A: I was gonna say: near as I can tell, what looks really obvious right away is that I can embed my cloud build YAML inside of the job definition instead of having to do a two-step thing, where I have a cloudbuild.yaml in some repo and then I have a job that is going to take a look at that cloud build YAML. That has its pros and cons; I kind of enjoy that people are able to update their cloud build YAMLs without having to go through test-infra approvers.
A: One of the great use cases is the kubekins image, which runs about 1800 of the 2400 jobs that Kubernetes currently triggers, and we build a variant of that for each release branch of Kubernetes, because different branches of Kubernetes have different versions of Go, and some of the older ones have different versions of Bazel and stuff. So it's basically the same cloud build file, but then there's a variants file that describes the different environment variables to use, which has been a pretty handy function for that and a number of other images.
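For readers unfamiliar with that mechanism: the image builder in kubernetes/test-infra pairs one cloud build config with a variants file of per-variant environment variables. A rough sketch of the idea only; the real file is YAML, and the branch names and versions below are invented for illustration.

```go
package images

// Illustrative only: one shared cloud build definition, plus a map of
// per-release-branch environment variables (the "variants"). The image
// builder runs the same build once per entry.
var variants = map[string]map[string]string{
	"master":       {"GO_VERSION": "1.17", "K8S_RELEASE": "master"},
	"release-1.22": {"GO_VERSION": "1.16", "K8S_RELEASE": "release-1.22"},
	// ...one entry per release branch, all sharing one cloud build file.
}
```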
A: So it feels like we would lose that in favor of a bunch of copy-pasted cloud build jobs, I think. What, if anything, did I miss? Yeah.
B: I got you, so you're absolutely right. I didn't mention this, but we did look into that utility. So probably I forgot to mention that the purpose of this design, or the end goal of this design, does not include deprecating that cloud build utility. So I think they are two different paths.
B: When we first started, Cole asked exactly the same question: why can't we just use that? And the problem... not the problem; I think the thing is, they serve different purposes. This design is: we want to fully adopt or support Google Cloud Build as a runtime, including running tests. We will support running unit tests, running integration tests, everything; as long as you can run it on Google Cloud Build, we will collect artifacts and display results on Prow natively.
B: I think that's the purpose of this design.
A: Okay. I mean, personally, once there are two different ways to trigger a Google Cloud Build, I'd rather resolve down to one, and I think copy-pasting job configs can be done with a pattern I've seen a number of other people use, where they generate job configs from something else; so we boil it all the way down to, yeah, configuration differences, and then generate a bunch of different jobs. So that brings me to my next question.
A: I feel like this would not be usable for a project that is the size of Kubernetes, because of the quota issues that we run into with Google Cloud Build. Like, I have a difficult time... you mentioned 30, but it's actually unclear to me how a public customer can request that that quota gets raised to 30, because it's not accessible through the quota interface.
A: So the default quota is 10 concurrent builds per project, and then I think there's something else about queuing ahead of that, and we've hit that a number of times. One of the use cases is: if somebody changes something in the kubernetes repo that changes a bunch of the e2e test images all at once, it will trigger more than 10 Google Cloud Build jobs, and at the moment we error out. So I'm assuming one of the things we would gain is that Prow would be able to treat this much like it treats pods.
A: ...pods to schedule. These Google Cloud Build ProwJobs could go to pending, which would be great, but I don't see how we avoid having to have...
B: Yeah, I can answer this question here. I don't think I've mentioned this limitation, but it does exist in the doc.
B: The 30-concurrent-builds limitation is the default for the public pool, which is going to be what we recommend to the users who want to adopt this feature. And, to be honest, I don't feel like we are going to replace or swap all of the Prow jobs to use this runtime, and I would still expect the Kubernetes community to keep using a Kubernetes cluster as their runtime for the majority of their jobs, yeah.
B: We could use image builder for this, but if there are more than 30 images to be pushed at once, I would say we may want to think about the private pool. Or, another way:
B: Actually, they can be pooled into a single Google Cloud Build, because the steps can run in parallel, and I don't see a limitation on the VM size that you can choose. So, for example, if you have 20 concurrent image builds in the same Google Cloud Build, you can just request an arbitrarily large virtual machine, and it should be able to build them concurrently.
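For what that pooling might look like: Cloud Build runs steps sequentially by default, but a step whose waitFor is "-" starts immediately, so several image builds can run in parallel inside one build on a larger machine type. A hedged sketch using the Cloud Build proto types; the image names, paths, and machine type are illustrative, not from the proposal.

```go
package builds

import cloudbuildpb "google.golang.org/genproto/googleapis/devtools/cloudbuild/v1"

// buildImagesInParallel sketches one Cloud Build that builds two images
// concurrently on a bigger VM, instead of two separate builds counting
// against the per-project concurrency quota.
func buildImagesInParallel() *cloudbuildpb.Build {
	return &cloudbuildpb.Build{
		Options: &cloudbuildpb.BuildOptions{
			MachineType: cloudbuildpb.BuildOptions_N1_HIGHCPU_32,
		},
		Steps: []*cloudbuildpb.BuildStep{
			{
				Id:   "image-a",
				Name: "gcr.io/cloud-builders/docker",
				Args: []string{"build", "-t", "image-a", "./images/a"},
			},
			{
				Id:      "image-b",
				Name:    "gcr.io/cloud-builders/docker",
				Args:    []string{"build", "-t", "image-b", "./images/b"},
				WaitFor: []string{"-"}, // "-" = don't wait for earlier steps; run in parallel
			},
		},
	}
}
```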
A: Okay, those sound like workarounds. I guess I'm still just kind of poking at the design assumption that we should treat GCP projects like the cluster field, where it's sort of a hard-coded thing. I'm just wondering if you want... oh, I see what you mean: okay, the GCP project being something that is dynamically allocated from a Boskos instance.
A: Okay, it sounds like you're not really targeting any Kubernetes-related Prow jobs for this; there are no Kubernetes sub-projects that you're talking to that want this functionality. Is that right? Right.
A: Cool, okay. If there are no other questions, let's go over to Antonio and the Ginkgo v2 item on the agenda.
D: ...you know, to be able to mark tests as not running in parallel with others. And I think that the labels filter is going to be nice, so we can get rid of the tags in the names, but I don't think that we can implement it without disrupting something. So I think that this will need a KEP.
D: I mean, I'm happy to work on that, but what I wanted to ask is: what is SIG Testing's position on this? And, to Onsi: what is his roadmap; when does he plan to have this stable, or the v2 to GA?
E: Sure, I can take the GA question, if you'd like, just as a starting point. Maybe just by way of apology to start: I spent many years pretty non-responsive to Ginkgo issues, because I had an insane job; it was super stressful. But I have a lot more time now, and so I've spent that time working on this v2 thing, really trying to take into account a lot of community feedback over the years, and to try to ship something that actually solves a lot of the pain points.
E: I know Ginkgo has a reputation of being... anyway, it is what it is.
E: I'm happy to make it better, and I'm excited by the release. So, in terms of roadmap: literally today I'm finishing the tests on BeforeAll and AfterAll, which is a big request. With an ordered context, I can just run one setup and run one cleanup, and run all my tests, and not have to repeat the setup and cleanup. And that's almost the last feature.
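For context, a minimal sketch of the Ginkgo v2 feature being described: in an Ordered container, BeforeAll and AfterAll run once around the whole group instead of once per spec. The specs themselves are placeholders.

```go
package suite_test

import . "github.com/onsi/ginkgo/v2"

// In an Ordered container, specs run in order, and BeforeAll/AfterAll
// wrap the whole group rather than each individual spec.
var _ = Describe("an expensive fixture", Ordered, func() {
	BeforeAll(func() {
		// one-time setup before the first spec in this container
	})

	It("writes a record", func() { /* ... */ })
	It("reads the record back", func() { /* ... */ })

	AfterAll(func() {
		// one-time cleanup after the last spec in this container
	})
})
```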
E: I've got a couple of small things still to go, and then just cleaning up the docs. So, long story short, I'm hoping that by the middle of October I can just call it GA and ship it. I might have a release candidate in, like, the next couple of weeks, and just give folks a chance to give me some feedback in a few weeks, but then I just want to get it done, and it's pretty close at this point.
E: The branch: my intention is for that to be stable, Antonio, so if you want to start... I don't think there will be many major changes, so I don't think I'll break things, but I can appreciate not wanting to pull it in until it goes GA.
D: Well, but that's one thing: I don't know if anybody's going to work on that, if I'm going to work on that; I don't have time now. I'm just, you know, preparing everybody, to get feedback and maybe just say: well, we don't plan to do this in one month, or during this year. So for sure this is not for this release; I don't have time.
A: I don't know the tags that we think this can help us remove, you know, looking at it from the angle of: what's the least disruptive way we can introduce this, and then, iteratively, how can we take advantage of the new features that Ginkgo offers? I am super, super grateful that labels are in here.
A: My first thought is just in the context of conformance, because everybody has these hard-coded strings lying around. But also, one useful feature we have is: since the SIG name is embedded in a test name, like sig-network, right, the easiest way for us to figure out all the tests that SIG Network owns is by regular expression; and not just at the Ginkgo level, but also in how we display all of our test results in the testgrid tool.
A: It's got the ability to include and exclude rows by regular expression on the name, so I don't quite know how to reconcile that difference just yet. It could be that we decide there are tags like that which are still useful to include in the test names, but other tags, like serial and disruptive and conformance and stuff, are maybe less important to include in the test name.
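For reference, this is roughly what Ginkgo v2 labels look like in spec code, selected via the new label-filter flag rather than a name regex. The label names here just mirror the tags discussed in the meeting; the specs are placeholders.

```go
package network_test

import . "github.com/onsi/ginkgo/v2"

// Labels can decorate a whole container or a single spec, and they do
// not become part of the test name. Selection then happens with, e.g.,
//   ginkgo --label-filter='sig-network && !Serial'
// instead of -focus/-skip regexes over the spec name.
var _ = Describe("Services", Label("sig-network"), func() {
	It("should serve endpoints", Label("Conformance"), func() { /* ... */ })
	It("should survive a restart", Label("Serial", "Disruptive"), func() { /* ... */ })
})
```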
A: So the policy stuff is kind of the biggest question mark for me. I don't know what other stuff you had in mind in particular, Antonio, that you were excited to try implementing to see if it could simplify our lives.
D: [inaudible]
A: That's a fair point. It's been a while since I've looked at the Sonobuoy codebase; I don't know if anybody here qualifies as a maintainer of Sonobuoy, but we can ask in the k8s-conformance channel. My impression last time I looked was that they basically just use the regular expressions: there are a couple of environment variables that they use, and then I think they have flags that map to what to set those environment variables to.
A: I don't think they actually expose the full environment variable, and so ideally, on their end, it would be as simple as eventually translating from a regular expression to a set of labels; and if it changes how we invoke things in parallel versus not, it would, you know, slightly change that. I think most of the heavy lifting could be on the Kubernetes side, in terms of how we declare and categorize our tests.
F: There's also, technically, that we publish the image that Sonobuoy consumes, which is also a release artifact, and I know there are people just using the image; you can basically run the conformance tests without the Sonobuoy tool. The tool actually doesn't do that much in the case of running conformance tests.
F: Yeah, so you can accomplish that, say, for your air-gapped testing, by just taking the release image, deploying your own manifest with the image, and then using kubectl to pull down and copy out the results. What Sonobuoy is doing is setting environment variables for that image. But there are also non-Sonobuoy things that are doing this that we should at least consider.
E: This is the thing: people use everything, right? Right, exactly, and who knows what the contract is, right; it just emerges. Just a couple of quick thoughts. One is: the label stuff is obviously additive, so you can take your time migrating, and, just frankly, you're gonna just end up with a different kind of mess anyway, because that's the nature of tags and labels, right? That's...
E: ...kind of like: pick the mess that you feel like you can best groom and handle, and that's fine. And the other major thing is: one of the main things I want to try to do is support the effort, and in particular I want to ask: is there a way to surface any major blockers or deal breakers? Because the last thing I want to do is go GA and then a month later v3 is GA, because I had to, you know, make a breaking change because we discovered something. And so, Antonio, thank you for pulling the branch in and just seeing that...
E: ...okay, you know, at a 90% level it's working. That's great. I'd love to just ask that question of, like: what are some things we can do to suss out whether there are any deal breakers here?
A
I,
like
I,
the
the
only
other
analogy
I
can
think
of
is
where,
like
jordan,
has
had
a
draft
pr
out
for
a
while
to
like
migrate,
kubernetes
kubernetes
over
to
a
fully
module-based
system,
because
we
thought
go,
117
was
going
to
yank
the
rug
out
from
so.
If
you
have
something
like
that,
going
for
ginkgo
v2,
I
think
we
could
use
you
know
we
could
like
iterate
on
that
and
see
that
the
pre-submits
test
against
that
and
that
would
hopefully
uncover
most
of
the
any
of
the
potential
deal
breakers.
A: That's really all I was asking for. Okay, okay, okay; yeah, no.
F: We could maybe merge that as provisional. I think, in terms of surfacing things to Onsi here, just the PR should help quite a bit to cover what Kubernetes itself is going to run into. I'm wondering, for everyone else, if we might be able to, with some abstraction on top that we use, and also, like, the two reporters...
F: Maybe we could have it so that we start using labels to construct sort of the legacy test names, and have the default reporter continue including the full, super-verbose test name with all the tag-style regexable things in it, and then have some opt-in to start using the labels instead. Yeah.
E: And my time frame is, like, flexible; if you guys are like, whoa, November, December, or January, like, whatever, I'm cool. I just don't want to ship something that's going to be a pain, and I'm just trying to keep myself accountable and get the ball over the line. But if I need to go slower, that's not a problem at all.
F: Honestly, because we still use vendoring and everything, I think the important thing here is just that we have a feedback loop with you, so we can collaborate. I think we're going to be pretty fine if we don't migrate right away; we have a lot of dependencies that are pretty old, and as long as they don't have, like, vulnerabilities or something, we're not too worried.
E: You know how it is. Okay. And then I guess my last thought is: if I can be helpful, I'm happy to, like, carve out a few hours and pair with someone, or just, you know... I'm happy to work together on the migration to v2, but also just in general, if it would be helpful to look at what you all have already and just poke at it: is there a better way to use Ginkgo? Does everybody use Gomega here? Like, just in general.
E: Yeah, so it should not break you. Without getting too much into the weeds: there's a mechanism to take a v1-style reporter and, with just a single line of code, integrate it into the new v2 world. So I've made that bridge really easy, and then, once you've done that, you can take your time actually building out tooling that uses the new reporting infrastructure, and it will probably serve you better than what we currently have.
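If I've read the v2 migration path correctly, the single line being referred to is the deprecated-reporter bridge; treat the exact names here as an assumption to verify against the v2 docs, but the shape is roughly:

```go
package suite_test

import (
	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/reporters"
)

// myV1Reporter stands in for an existing Ginkgo v1 custom reporter.
var myV1Reporter reporters.DeprecatedReporter

// ReportAfterSuite receives the final v2 report; the helper replays it
// through the old v1 reporter interface so existing tooling keeps working.
var _ = ReportAfterSuite("legacy reporter bridge", func(report Report) {
	reporters.ReportViaDeprecatedReporter(myV1Reporter, report)
})
```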
E: So it really shouldn't break you, but that's a good example of something where, if there's any way that you could just try it early and let me know if it actually doesn't work, that would be valuable.
C: Okay; if you will send me, or remind me, what the branch is going to be.
C: And I also think that it will be good for Vladimir to hear this, because of how the framework, in the second version, will incorporate the new reporter as well; I think they use it, right?
C: Does the framework that uses the reporter need any upgrade or anything?
E: Just one other quick thought on the reporting: the sort of inline, programmatic reporting behavior is a bit different, but, like I said, it's totally backward compatible; I can even, like, translate an existing reporter over, and it'll just work.
E: But what's really cool is the new JSON output; it has absolutely everything, and there's now support for annotating tests with additional data, and that'll end up in the JSON report, so you can do whatever you want. It's much more flexible, and I feel like "here's my JSON file, go and post-process it" is probably a much better interface than "please integrate a library into my code while it's running to send stuff to your database". And so I think, again: phase one, everyone just get over to v2.
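As a sketch of that "post-process my JSON file" flow: run the suite with ginkgo --json-report=report.json, then decode the file, which holds one report per suite. Field names follow my reading of the v2 types package, so double-check them against the release.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/onsi/ginkgo/v2/types"
)

// Reads a Ginkgo v2 JSON report (ginkgo --json-report=report.json) and
// prints each spec with its end state. The file encodes an array with
// one types.Report entry per test suite.
func main() {
	data, err := os.ReadFile("report.json")
	if err != nil {
		panic(err)
	}
	var suites []types.Report
	if err := json.Unmarshal(data, &suites); err != nil {
		panic(err)
	}
	for _, suite := range suites {
		for _, spec := range suite.SpecReports {
			fmt.Println(spec.FullText(), spec.State)
		}
	}
}
```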
A: Cool, thanks so much for stopping by. Antonio, did you have anything else you wanted to discuss on this?
A: All right, let's see. I still don't see Vladimir; I don't know what Vladimir's item is, unfortunately. I know Eddie's item on the agenda is that Prow now has an issue transfer plugin, and it is actively in use, I believe, within the kubernetes org. Issues can only be transferred within an organization, and I am not clear on whether it requires, like, certain privileges, rights, or something.
A: Hopefully we can get Eddie to give us a walkthrough of this later, but I definitely saw an announcement get sent out to kubernetes-dev and kubernetes-sig-testing, so: super. That one's been outstanding for a while, so it's really cool to see it finally implemented. Next up, unless anybody has anything more pressing, I was going to walk through our board for a little bit and just check in on where we're at for things we've committed to for this release cycle.
A: Okay, this is the SIG Testing project board. It is issues only. We've got the help-wanted column, which is filled with relevant things where, if anybody wants to help out, we're willing to help review or describe how to implement it. We've got a backlog, which I do not know if I actually sorted in priority order recently; it's kind of annoying to do that. We've got our in-progress column, and we've got stuff that's blocked. And one question I kind of had for the group is around prioritization.
A: I can't even find the issue right now: migrating away from google.com-owned container registries for images that are used as part of e2e tests. Like, for every single image involved in the CI of Kubernetes, whether that's something that's used to build Kubernetes, to test Kubernetes, to run Kubernetes, to run the jobs and do all this, I would like all of those to come from community infrastructure. So I have that in my head.
A: So one thing I could do is start to put priority/critical-urgent on the issues where I feel like: no, we really, really have to get these done.
A: Another thing I could do is start being more aggressive at kicking out anything that's nice-to-have, leaving us with, like, just the absolutely critical set of stuff to do. But then I feel like that leaves less opportunity for people who want to get involved, but are maybe a little not-so-sure about getting involved with something that is, like, high visibility and could potentially break things, and so on and so forth.
G: In my opinion, most of the things I have sent before, I have almost always added priority/important-soon. I only did that because I don't think those labels are too fine-grained or useful; for example, if I have something done and it could easily merge soon, or something like that, I would put that tag.
A: Yeah, if you go read the contributor docs, which I don't have a handy link to right now, there is something that describes, sort of organization-wide, what we feel all the priority labels mean. As an aesthetic choice, the fact that priority/important-soon and priority/important-longterm are the same color makes them basically indistinguishable for me, other than, like, maybe by the length of the label, but it's pretty tough when they're that long already. And important-soon has basically become a proxy for: are we planning to ship it this release?
A: That's how I use them. So I think I am proposing, like, stratifying a little more: putting priority/critical-urgent on some more stuff and seeing if that helps us gain a little more visibility into, like, what we actually care about.
A: I have experimented a little with Triage Party, but I don't feel like I have a good set of rules that really make it super obvious what we should be doing and what our triage outcomes are. But I can set that up, and we can start to go through sort of recurring Triage Party things collectively, so we're all kind of on the same page about what to do.
A: Okay, and I'm just looking at the chat here; our note says we should focus on the critical things and move everything else to Q1. All right, so I'm super happy to do that. I don't want to...
F: I think milestones help flag: we're working on this, and this milestone is when we expect to ship it; and the priority label is more like signaling to the community that we're aware this is something that needs to happen soon versus maybe later.

A: Okay, but even if we're aware something needs to happen soon, that doesn't mean that we have it staffed.
A: I'm gonna see what the bot says, and if it proves me wrong, I can dump everything in our backlog that we really need help on into help-wanted. But, like, everything in the help-wanted section is stuff we'd love people to help on; I see people assigned to them, but it can be difficult to see whether or not that actually means that something is happening with it.
A: Low barrier to entry, clear task, goldilocks priority, and up-to-date: so I can go through and re-groom everything to make sure that anything that meets these criteria is listed as help wanted.
A: So, for example, I have filled, I think, a role of, like, a new-contributor ambassador: somebody who is willing to help out a whole bunch. However (thanks, Antonio, good to see you), however, I'm kind of double-booked these days, trying to do the same thing for SIG K8s Infra as well.
A: I think SIG Testing has been... while I have you here, Ben, I had a question about one of the issues that's listed as in progress, which is that the unit test jobs should reflect trivially runnable local defaults.

A: My question was: what's left to do before we can call this done?
F: We could arguably close this one, I think, because we got the defaults right. I was quibbling that they're not trivially runnable, because they don't pass.
F: We fixed the "don't pass without root" part, but there's still something very strange going on with versions; I'm not sure if we got that fixed yet, I haven't had time to follow up, and that one might also be fixed now. I've been manually going and, like, getting a clean environment and just running the tests and seeing what happens, and it has been sad to see that, so far, every time I have followed back up and tried that, it doesn't pass.
F: No, it's been things where, like, it only works in CI or something like that. There were a number of unit tests that were requiring root; we have knocked that out, we don't run them as root, you can't do that in CI. But people would write unit tests for, like, say, some storage thing, and it wants to, like, interact with mounts or something like that, which a unit test shouldn't be doing.
F: Some of the other ones are just more inexplicable: like, we're supposed to have injected version metadata that things can depend on, and for some reason that seems to be fine in CI, but, like, locally it's not the case, so unit tests are failing on that. There's just been, like, a steady stream of: you can't actually clone Kubernetes and run the unit tests and just have them pass, which I would expect as a contributor.
F: I feel like that's a pretty big red flag to start with, and it tells you something about the state of the project. Part of it is just because it's too slow; there are just too many. So I don't think many people are actually doing this, but I would think particularly a new contributor might do this and actually run, like, the whole suite, and right, so far the experience on that has been awful.
F: It just doesn't work. And previously you needed to set options to match what we're actually running, like, it was not the default. Some of that's fixed; I'm not sure if it's all fixed yet.
A: I wanted to call attention to the two KEPs that we said we were going to do during this cycle. One is taking the kubetest2 CI migration to beta. I've added a comment at the bottom of the KEP issue which lays out all of the jobs that need to be migrated to kubetest2, and then all of the jobs that need, like, equivalents for kubetest2 runs; and we're also looking for a guide to help people migrate from kubetest to kubetest2.
A: The other KEP that we have is reducing Kubernetes build maintenance. This is effectively a no-op; Ben has done basically all of the work. The only thing that remains is to actually turn down and remove all of the older builds that still use Bazel, older versions of Kubernetes. Essentially, we're just waiting for the versions of Kubernetes that use Bazel to age out, and then we can fully remove it.
F: There's also some, like, related tech debt to clean up. For example, we still have code to make sure that you don't have Bazel build files in the repo; at some point it doesn't make sense to continue doing that, and we should expect PR reviewers to catch that. Like, it's not a matter of, like, stale PRs or something anymore.
F: I think it is pretty reasonable to go ahead and call that done once we no longer have any branches; otherwise I feel like we could leave it indefinitely, and they are kind of slow things to do, you know, we need to scan all the files, that sort of thing.
F: There's also the question of: should we continue running the, like, enormous Bazel build cache? We allowed some other sub-projects to use it, but if we're going to continue to run it, I think we should at least scale it down, because the majority of the load was the kubernetes repo, and that's the reason we set it up to begin with. I don't think the cost is justified once we no longer have any branches in Kubernetes using it, so we should at least make it smaller; so there's some infrastructure to turn down.
A: Okay, well, I said I would use the rest of our time, and that's what I did; we're at 11 o'clock Pacific.