From YouTube: Kubernetes WG K8s Infra biweekly meeting 20210120
A: Hi everybody, today is January 20th, 2021, and you are at the Kubernetes WG K8s Infra bi-weekly meeting. I am your host, Aaron of SIG Beard. You can also find me as spiffxp in all the places: Slack, GitHub, and Gmail. This meeting adheres to the Kubernetes Code of Conduct, which basically boils down to "don't be a jerk."
A: Okay, I think I said the spiel. I don't know if I remember having met some of the folks on here; does anybody feel like introducing themselves?
A: So, this billing report is publicly accessible to anybody who's a member of the kubernetes-wg-k8s-infra Google group, and anybody can join. This is what our spend looks like over the past 28 days.
A: There's nothing wildly surprising about this to me. Once the new year started and people came back from the holidays and started working on the project again, we started to see our weekly traffic show up again. A lot of the traffic is people downloading our images on a weekly basis, and also running CI jobs on our infrastructure on a daily basis.
B: And the numbers I see through the console view match up pretty closely, modulo probably a few hours of difference on the sync, so we're within spitting distance.
A: I forget what the next thing is. I don't feel particularly compelled to walk through the rest of this, but for people who are interested, we have other pages that break it all down by SKU and things like that. Does anybody have any questions about the billing?
A: Okay, let's stop sharing that. Let's see, action item review. I was bad last meeting and didn't actually take the time to write down action items as people were talking about things, but I know there was a lot of discussion around cert-manager, and I think Ricardo was going to look at helping out around this. Do we have any updates on that?
C: Some sort of warning based on days; like, I want to issue a warning 20 days or 10 days until expiration, so we can start trying to reach people if we see something expiring. My next step on this is probably trying to write some sort of plugin, so we can say whether we want to post this on some Slack channel or send an email to some mailing list, and bring to you folks the question of how we're going to control that.
C: Because we can't know who owns a certificate unless we put some sort of label or something like that on it, so we know who to trigger some action for; the certificate object doesn't have any other metadata. So I'm probably going to bring this discussion back some weeks later: how to map who owns which certificate, so we can warn people, and also revoke the certificate, or ping them and ask what's being done with it.
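The warning-threshold logic being described here can be sketched in a few lines. This is a hypothetical illustration, not the actual plugin: the owner field is an assumption (the ownership-mapping question is still open, as noted above), and a real version would source the tuples from cert-manager Certificate objects via the Kubernetes API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: warn once a cert is within the largest threshold.
WARN_THRESHOLDS = (20, 10)  # days before expiry

def expiry_warnings(certs, now=None):
    """certs: iterable of (name, owner, not_after) tuples.
    Returns warning strings for certificates inside the warning window."""
    now = now or datetime.now(timezone.utc)
    out = []
    for name, owner, not_after in certs:
        days_left = (not_after - now).days
        if days_left <= max(WARN_THRESHOLDS):
            out.append(
                f"cert {name} (owner: {owner or 'unknown'}) "
                f"expires in {days_left} days"
            )
    return out

# Example with fake data; a real plugin would list cert-manager
# Certificate objects and read a (yet to be defined) owner label.
now = datetime(2021, 1, 20, tzinfo=timezone.utc)
certs = [
    ("prow-tls", "sig-testing", now + timedelta(days=5)),   # flagged
    ("dashboard-tls", None, now + timedelta(days=90)),      # fine
]
for warning in expiry_warnings(certs, now=now):
    print(warning)
```

The interesting open question from the discussion is where `owner` comes from, not the arithmetic.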
A: Okay, yeah. I agree that sounds like something we don't have to work on today, but it would be really helpful for us down the line. I found the issue you opened, "update cert-manager".
A: I will link to that in the meeting notes where people can find it, and I also just added it to the v1.21 milestone, which will hopefully make it easier to find. I'm going to try and groom the k8s-infra information a little better this release cycle, so I've added milestones to the repo that line up with the current Kubernetes release cycle. While we're working on stuff during the 1.21 release cycle, issues that we think are important to do I'll put in that milestone.
C: Okay, thank you. I've put the PR into the docs as well, for anyone that wants to review it. I still need to write some sort of README, because we followed a different process and we split the manifest into smaller files, so it would be easier for anyone that wants to review; the cert-manager manifest by itself is something like 2000 lines or more because of the CRDs.
A: I did that. I don't have sufficient access to actually run the Terraform to verify that it does what it claims it does, but it mostly looks good. I left comments, and I think Justin and I, and perhaps some of the Cluster API folks, need to iterate on it a little bit.
A: I got the impression that some of the Image Builder subproject people, or maybe it was the Cluster API Provider AWS people, were a little blocked on this. So if you're watching this recording, or you know those people and that's the case, please encourage them to reach out to WG K8s Infra, and we can work through how to unblock them while making sure that we are setting ourselves a good example in how to do this.
A: Yeah, so ideally... I am the chair of SIG Testing, so I'd love to be a guy who helps you out on this, but given that my bandwidth is already stretched a little bit, I think it would be helpful for you to talk to some of the folks who are more directly involved with Prow on a day-to-day basis, because these days Prow is kind of its own subproject, independent of SIG Testing and Kubernetes and stuff.
A: For specific names: I know that Cole Wagner and Alvaro Aleman are pretty active when it comes to Prow questions and reviewing Prow PRs.
A: I also know that Chao Dai, whose GitHub handle is chaodaiG, was, I think, working on trying to help Knative migrate their Prow build cluster stuff. I will write those names down.
A: I can't talk and write at the same time, but basically, I don't think we've ever had two Prow control planes simultaneously connected to the same build cluster before. So there might be some things we need to change in Prow to support that as a concept; or, if the experts behind Prow say that's just such a fundamental assumption that we can't support it, we'll have to figure out how to do some kind of migration with minimal downtime.
A: Similarly, I'm not sure if there is prior art for having multiple Prow instances servicing the same repo at the same time, or the same org at the same time. So I feel like it's going to be an exercise of asking yourself what that process would look like, step by step, and then bouncing that off of the Prow experts and saying: hey, what would break in this? What do we need to change to support this?
D: Yeah, I know all of those names, so I can ask them.
A: Okay. I think it's really awesome that you are working on this. I have an issue open that's about developing a Prow migration plan, and so maybe I can help you on this. I tried to author a big old Google Doc about everything that was going to have to happen to migrate Prow in its entirety, and setting up a staging instance of Prow will help us identify some of those things.
A: So that's really helpful, but when I wrote that large Google Doc, I kind of left how we set up a staging Prow, and how we actually do the migration, as an exercise for future me or some future person, because it seemed like there was enough there for just migrating the existing CI jobs to new build clusters.
A: The only other thing that comes to mind is that we won't be able to reuse the k8s-ci-robot account in a separate Prow instance. The reason is that that account gets 5000 tokens per hour, and if we start to have two different Prow instances using GitHub API tokens for the same account, it's going to be really surprising and unpredictable as to when one of them is going to run out of tokens.
A: It does not need those, that's true. But as we look towards migrating the jobs, I personally don't want to go through the exercise of migrating to a bot that has a different set of permissions, as lazy as it is to use a bot that has org owner privileges and just be very, very careful with it.
A: And I am totally fine with the idea of running the staging Prow instance as just another app in the aaa cluster. That sounds good to me.
A: No, don't share my entire screen. Okay, I had one specific thing I was going to bring up, and then I will try walking through our board.
A: This one's mostly since Tim is here. I noticed that the audit files have been really horribly out of date. I used to be running that script as a human when I was making changes regularly, back in April or May of 2020.
A: It hadn't been run in seven months, so I re-ran it. This PR has seven months' worth of changes in it. I tried to break it down commit by commit, and then I tried to save all of the changes that are raising questions for me, or that feel like they shouldn't quite be that way, down at the bottom, and I prefixed their commit messages with "qq".
A: If you want to subject yourself to the pain of reading all this, or you feel like somebody should, then we can go through that exercise. I can try to spend some time attempting to tie individual commits to the issues or PRs that I think may have caused these changes.
B: I think you're a hero for doing this. Did you assign it to me? Because I haven't seen it. As you scrolled past, I was marked as a reviewer, but not an assignee, which means I almost certainly missed it.

A: Okay, I will assign it to you.
A: So, immediately after doing this monstrosity, I was reminded that I really, really don't want us to ever have to go through this again. So I really, really want someone to work on the task of setting up a job that will automatically run the audit script and automatically open a PR.
B: Yes, and to add to that: we think we've set things up in a way that the permissions on automating this should actually be pretty easy, right? You should just need to create a Google service account, which we then add to the Google group that we have for the auditing, and that should be it.
A: Yeah, I even got that far; I've linked to this in the issue. This was on my personal Prow instance: I set up a Prow job that got as far as running the script and getting ready to commit all the changes, but where I got stuck was actually creating the PR. I have links to how that's done for test-infra's job.
A: That job does auto-bumping, and I just need somebody's help to actually push this over the line. But I did get as far as creating a service account that just has audit privileges, and I can help make sure that service account is available on the appropriate Prow instance. We didn't run that job anywhere, anyway.
A: I'll tag this as help wanted. If you know somebody who wants to help out, this would be really greatly appreciated.
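The job being asked for could plausibly start as a Prow periodic along these lines. This is only a sketch: the job name and image are hypothetical, the script path is assumed from the kubernetes/k8s.io layout, and the PR-creation step (the part that got stuck above) would still need to be wired up the way test-infra's auto-bump jobs do it.

```yaml
periodics:
- name: ci-k8s-infra-audit          # hypothetical job name
  interval: 24h
  decorate: true
  extra_refs:
  - org: kubernetes
    repo: k8s.io
    base_ref: main
  spec:
    containers:
    - image: gcr.io/example/audit-tools:latest   # hypothetical image
      command:
      - ./audit/audit-gcp.sh                     # assumed script location
      # missing piece: commit the resulting diff and open a PR using
      # the audit-only service account discussed above
```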
B: So, I know that in the last part of last year everybody was busy doing other stuff, and I'll speak for myself: my attention shifted elsewhere for a while. This is important. We probably shouldn't do anything substantial until we get this pinned down; we should not take on any new major migrations or projects until we understand this.
A: Yeah, I suppose that's true. If it's about doing it expediently, then I can probably find time to work on it, but I'm trying to be good about helping others work on this stuff.
B: I'm with you; I probably could bump something to do this, but the goal here was to make a self-sustaining community, so hopefully we can get more people. And Ricardo, I know you keep volunteering for stuff, so don't you dare volunteer for this one, no.
C: I was going to suggest, as we have a small quorum here, probably sending this to the mailing list or Slack and asking people for help. Because I think we've got a small quorum here, but the mailing list is still big, right, and we have some folks that want to help but probably can't attend, so I don't know if this is going to be better. I can't volunteer myself for this one, I'm really out.
A: Okay, so to that end, here's my vision for this quarter: I really want us to work on issues that are in the same milestone as the rest of the Kubernetes project. There are many people out there who are developing version 1.21 of Kubernetes, so I've got a v1.21 milestone, and we can assign issues to this milestone to say this is what we want to work on this quarter.
A: Separately, I have the project board here, which tries to set up some different workflow phases for our tasks, and I've tried to break up our backlog into two kinds of work that I see us handling.
A: Basically, everything in the "infra to migrate" column should be describing, either at a Google project level or as an individual piece of infrastructure: hey, this is something the project uses or depends on, and we should move it to community-owned stuff.
A: So in the issue, I tried to break it down in terms of: projects that appear to host miscellaneous infrastructure; projects that are referenced by kubernetes/kubernetes CI; projects that host release or CI artifacts; and then projects that are used manually for end-to-end testing, like certain jobs that might require their own project that has some special permission set up, or whatever.
A: Migrating away from google-containers: we already did one half of this. We migrated k8s.gcr.io over to the community. It's great; we started spending a lot more money when we did that, which is super cool. The other thing that google-containers hosts is all of our binary artifacts, which live in two GCS buckets: kubernetes-release and kubernetes-release-dev.
A: Those handle a lot more traffic than the project's container images, and so if we wanted to migrate the most dollar spend, we would want to work on migrating the binary stuff first. But that said, it's less clear to me that migrating the most dollar spend will actually provide the most contributor benefit.
A: There are intermediary steps to help us proceed there. Let me think; okay, the first one is that we developed this container promotion process that I think is well understood by the community, and, generally speaking, everybody knows how to use it for container images. We don't actually have something that fully fleshed out for binary artifacts.
A: So I feel like we should figure out what our binary promotion process is. We have a couple of projects that have GCS buckets in the k8s-artifacts-prod project, but I don't think things are fully working for them.
A: I have an open issue to work with somebody from the CSI subproject to figure out why they're not quite able to push to one of their buckets; they're confused about whether they need to promote, or use staging, or what have you. The only other person I know who uses buckets would be Ben, for kind, but I don't think he uses artifact promotion either. So I think we need to figure out what our story is for that, and then, separately...
A: We need to identify all of the references where the kubernetes-release bucket is manually used, and work on how we're going to migrate those, in addition to deciding when we think it's safe to point the dl.k8s.io redirector over to the new bucket.
B: So maybe one of the early steps could be to set up a new bucket, mirror all the binaries there, then retarget dl.k8s.io and see what happens to the traffic. Or maybe not a bucket: set up a load balancer with a vanity name on it, so that it actually hides the bucket name. We can actually make that dl.k8s.io, so it won't be a redirect.
A: Okay, does that sound reasonable? I like that idea; I will work to create an issue for that. I keep saying it's time for me to type and take notes, but I can't talk and take notes at the same time. I like that idea. I have used Google's Storage Transfer Service to try syncing from one bucket to another; it worked pretty quickly.
A: It'd be tricky for alpha releases and stuff, because the most frequently you can schedule those storage sync jobs is every hour, but...
A: We could look into that too. It's unclear to me whether their tooling supports pushing to both, or if it's easier to have them push to one bucket and then set up something that syncs to the other. Sure, I think that's less of a problem for the release bucket; I think we could just ask them to manually ensure it shows up in both places for this experiment. But so, the other...
A: The other thing I was going to say is that we have this exact same problem for the kubernetes-release-dev bucket, which has less traffic than the kubernetes-release bucket, but it's important that it's updated more frequently; it's used a lot more by our CI. So I was planning on maybe offering a KEP on what it takes to stop using that bucket, or I need a really big umbrella issue that describes all the places where we need to stop...
A
Referencing,
that
bucket
directly
I've
started
writing
that
down
in
an
issue
in
kate's
I
o
I've.
We
also
discovered
that,
like
cube,
adm
and
a
few
other
sub,
projects
need
to
change
some
of
their
hard-coded
defaults
in
their
apis
and
so
currently
working
with
them
to
decide
whether
they
just
want
to
make
a
breaking
change
in
their
alpha
api
or
if
they
want
to
make
a
new
api
version
for
those
defaults
either
way
with
both
of
these.
A
I
still
feel
like
we're
going
to
live
in
a
world
where
we
will
want
to
keep
those
buckets
around
for
a
while
during
a
deprecation
window.
As
we
work
to
migrate
everything
over
to
the
new
buckets,
we
do
have
new
buckets,
they're,
kate's
release
and
kate's
released
them
instead
of
kubernetes
release
and
kubernetes.
Now
those
live
in
the
kate's
release
project
in
kate's
improv,
I
had
hopes
to
talk
to
release
engineering
a
lot
more
about
their
plans
and
their
ability
to
staff
this
effort,
but
life
life
has
gotten
in
the
way.
B: But if we're willing to take a flag day, where we can flip the vanity domain and say, look, 90% of traffic is going to the new bucket anyway, we could salvage the old bucket name, move it out of the org, and bring it back. But if we've already solved 90% of the traffic, and we're willing to take a flag day like that, I'm not sure that it's worth bringing back.
A: It's kind of related, but something else that could help us is if we were to turn on access logs for specific buckets. We had started to do that for the k8s.gcr.io traffic, and we needed to stop for reasons I can't quite remember. Do we think we're at a point where we could turn that on again?
B
We
could
try
justin
and
I
started.
We
got
an
initial
data
dump
and
we're
trying
to
figure
out
how
to
actually
consume.
It
realized
that
neither
of
us
had
time
to
follow
through
with
it
and
it
was
just
going
to
eat
money
while
we,
while
we
logged
stuff
that
nobody
was
looking
at
so
we
turned
it
off,
we
can
go
back
and
revisit
it.
In
fact,
I
think
I
have
some
tabs
open
to
to
revisit
that
for
the
last
six
months.
So
totally
we
can
go
back
and
revisit
it.
B
Gcs
is
the
same
like
again.
If
we
were
to
tail
tail
off
and
move
the,
what
we
think
is
the
bulk
of
traffic
like
dl.kates,
then
we
could
look
at
the
logs
and
see
what's
left,
I'm
still
not
sure
what
that
would
show
us.
Maybe
it
would
give
us
a
pattern,
maybe
it
wouldn't.
We
won't
really
know
until
we
try
it.
Okay,.
A
All
right,
so
I
hear
you
that
this
is
still
like
one
big
chunk
of
the
project
that
is
not
yet
under
the
community's
control.
It's
also
the
biggest
dollar
spend.
So
I
agree
that
it's
really
important
from
that
perspective,
but
I
also
personally
feel
like
it
is
really
important
that
all
of
the
stuff-
that's
kind
of
in
that
critical
path
of
the
project
ci,
is
under
project
control.
A
So
because
I
feel
like
the
migrating,
the
artifact
stuff
is
a
pretty
heavy
lift.
There
are
a
lot
of
other
projects
that
host
like
images
for
ci
or
infrastructure
or
ci
that
I
think
people
who
are
scared
by
this
sounding,
like
a
big
task,
could
more
easily
take
on.
So,
let's
see.
A
Here
I
I'm
not
quite
sure
how
to
walk
through
this.
I'm
sorry,
so
I'll
walk
through
a
couple
that
I
know
of
for
sure.
A
But
basically,
I'm
suggesting
that
everything
that
falls
under
the
projects
referenced
by
kubernetes
kubernetes
ci.
I
would
like
to
try
to
set
the
goal
that
we
get
all
of
this
migrated
during
this
release
cycle.
A
I
feel,
like
the
release
engineering
team
was
interested
in
working
on
this,
but
we'll
have
to
see
how
much
bandwidth
they
actually
have
so
there's
an
umbrella
issue
that
I
will
populate
with
links
to
all
of
these
projects,
but
my
hope
is,
we
could
close
out
everything
in
this
umbrella
issue
walking
through
what
they
are
briefly.
A: Can it be rewritten in a way that uses a different project? Then, you know, rewrite the test, set up the different project, and hooray, it all works. There's probably a common decision we'll need to make for these things: whether it is appropriate to cherry-pick changes to tests, or images used in tests, back to all previous supported versions of Kubernetes, or whether we would like to say no, we're not changing the tests in older versions of Kubernetes.
A
That's
a
good
point.
I
hadn't
thought
about
that,
like
my
preference
would
be
to
cherry
pick
the
test
things
back,
but
I
can
see
especially
for
image
names
how
that
might
be
perceived
as
some
kind
of
breaking
change
for
people
who
want
to
like
air
gap.
All
of
the
images
used
in
tests
for
like,
if
suddenly,
the
names
of
all
the
images
that
need
to
be
air
gaps,
change
they'll
have
to
like
reconfigure
their
testing
environment.
A
On
the
other
hand,
you
could
argue
that,
like
that,
the
tests
have
never
had
the
same
sort
of
api
contract
as
the
rest
of
kubernetes.
So.
A: So, authenticated test images is one; k8s-authenticated-test is another. This was part of what prompted the push to do all these CI projects: something Google-internal accidentally deleted this project, and we had to scramble to undelete it and make sure it was configured appropriately, and we realized this test should be using something the community controls that won't get suddenly deleted.
A
So
these
would
be
two
great
things
for
people
who
have
kubernetes
experience
to
to
work
on
and
they
are
they're
pretty
like
targeted
to
just
to
like
one
or
two
specific
tests,
slightly
broader
things.
Many
many
many
of
the
images
used
in
kubernetes
ci
come
from
some
place
called
kubernetes
e3
test
images,
so
instead
we
should
have
them,
use
gates.gcr.io
and
use
the
proper
repository.
A
So
for
a
couple
of
these
issues,
I've
got
companion
issues
in
kubernetes
kubernetes
that
try
to
spell
out,
like
all
the
work
that
needs
to
be
done
in
the
kubernetes
project.
To
make
this
happen,
it's
got
like
all
the
images
that
the
test
binary
uses
their
jobs
that
are
set
up
to
promote
these.
We
just
need
somebody
to
actually
go
through
the
practice
of
the
process
of
promoting
all
the
images
bumping.
The
versions
running
the
tests,
yeah.
A
Yes
and
cloudy
has
been
claudio,
and
antonio
have
done
a
lot
of
work
on
this,
so
I
feel
like
we
will
be
most
of
the
way
there
on
this
one.
A: Yeah, exactly. This is kind of like the test-infra version of kubernetes-e2e-test-images: all the jobs that run our tests use images that live in k8s-testimages, and so any job that runs kubekins pulls from this google.com repo.
A
So
I
feel
like
this
is
pretty
wide
reaching
when
it
comes
to
the
ci,
but
it
also
should
be
pretty
straightforward,
like
search,
replace,
pretty
mechanical
should
be
pretty
doable.
So
some
of
these
I
would
like
to
get
them
to
the
point
where
they
are
help
wanted
and
doable
by
new
contributors.
A: I don't know who put them there or how they got there, but they shouldn't be there. So again, I think this is mostly going to be a matter of figuring out who owns these tests, or who owns these images, and steering them to using the staging-project GCR image promotion process that the rest of the project has been using.
A: Yeah, but it's not just Prow, right? There are, like, GCP compute persistent [disk] ones; I don't know, it looks like some CSI artifacts. So maybe we just have some storage people to talk to, and then run that one by SIG Windows to make sure they're aware they should be catching that sort of thing, and then, I'm guessing, the GPU device...
A: So basically, if we get through all of those, that's going to be a lot; a lot of the project's CI will be entirely community-owned.
A
Yeah
there
are
a
couple
others
that
I
would
like
to
get
help
on,
but
I'm
not
sure
they're
quite
as
easily.
They
might
be
slightly
more
difficult
or
slightly
more
involved.
So
I
guess
arno.
I
have
you
in
mind
when
I
think
of
like
work
to
try
and
get
the
brow
staging
instance
up
and
starting
to
understand
how
prowl
is
wired
together
with
different
bits
of
gcs
infrastructure.
A
I'd
like
people
with
kind
of
your
level
of
knowledge
to
work
on
things
like
this,
where
there's
some
part
of
prowl.
That
depends
on
like
buckets
or
service
accounts
that
are
used
by
this
project,
and
we
should
stop
that.
A
I've
kind
of
been
talking
non-stop
for
about
25
minutes.
I'm
gonna
stop
doing
that.
Is
there
any
specific
things
that
people
want
to
bring.
B: There are so many things in here I want to do. I'm looking at them like, yeah, that's a couple days of work.
A
Yeah
I'm
moving
that
over.
So
that
was
me
about
a
lot
about
the
projects
that
we
have
yet
to
migrate
over
there's
also
a
lot
of
stuff
we're
trying
to
do
for
the
stuff
we've
already
migrated
over
and
we're
realizing
like
we
need
to
do
better
or
we
want
to
do
better.
A
So
the
audit
job
is
one
of
them.
Improving
cert
manager
is
another.
Is
that
people
have
attempted
to
promote
mutable
image
tags
and
they
shouldn't
be
allowed
to
do
that.
So
we
need
a
pre-submit
that
stops
them
from
trying
that
I.
A: The promoter will block that, but it doesn't block it in presubmit, which allowed somebody to merge something that didn't break the presubmit jobs but broke everything going forward.
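A presubmit check like the one being proposed might boil down to a small validation over the image references in a promoter manifest. This is a hypothetical sketch, assuming a policy where only digest references or semver-style tags count as immutable; the real promoter's rules may differ.

```python
import re

# Hypothetical policy: a reference is promotable if it is pinned by
# digest, or carries a semver-style tag like v1.2.3 or v1.2.3-rc.1.
SEMVER_TAG = re.compile(r"^v\d+\.\d+\.\d+(-[\w.]+)?$")

def is_promotable(image_ref: str) -> bool:
    """Reject references that use mutable tags like :latest."""
    if "@sha256:" in image_ref:
        return True  # pinned by digest, always immutable
    last_part = image_ref.rsplit("/", 1)[-1]
    if ":" not in last_part:
        return False  # no tag at all implies :latest, which is mutable
    tag = last_part.rsplit(":", 1)[-1]
    return bool(SEMVER_TAG.match(tag))

# A presubmit would run this over every reference in the changed
# manifest and fail the job on the first rejection.
refs = [
    "k8s.gcr.io/pause:latest",
    "k8s.gcr.io/pause:v3.4.1",
    "k8s.gcr.io/pause@sha256:" + "0" * 64,
]
for ref in refs:
    print(ref, "->", "ok" if is_promotable(ref) else "REJECT: mutable tag")
```

The point of running this at presubmit time, per the discussion, is to catch the mutable tag before merge instead of letting the promoter fail afterwards.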
A: That's something I have not had time to follow up on, and I guess I would like to make that question the interest of the release engineering subproject; it was more of a release thing. Not that I'm not also interested in it, but I'm trying, where appropriate, to find the right people.
A: So I'll try and drag this in here. I upgraded the Prow build clusters to version 1.16 of Kubernetes. We talked before about eventually moving to release channels, so the clusters will update to new versions automatically.
A
We
should
do
the
same
thing
for
the
triple-a
cluster.
I
forget
if
the
aaa
cluster
is
still
using
api
groups
or
some
of
its
manifests
that
go
away
in
116.,
so
I
will
see
if
I
can
find
the
pr
here,
let's
click
through
because
I
know
aren't,
I'm
pretty
sure.
Arno
worked
on
the
pr
that
addressed
this.
But
yes,
I
remember
reading
it.
A: Yeah, okay: this is about changing the RBAC rules to make sure they support the proper API groups. But I didn't want us to remove RBAC rules for the old things, because they weren't yet deleted. All right, anyway, I'll work with you to wrap this up. I was trying to raise it because I think we should get all of our clusters, not just the Prow build clusters, onto release channels.
A: I will take a look at this, thank you. I think I was freaked out about whether or not Terraform was going to do an in-place change for switching to release channels or not.
D: Last time I tried, what happens is basically that Terraform will subscribe the cluster to the release channel without doing the upgrades, and once you've specified the channel, the cluster will be upgraded in the next maintenance window.
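For reference, the Terraform change being described is a single `release_channel` block on the cluster resource. A minimal sketch, with a hypothetical resource name and location; as noted above, GKE applies the channel's version during the next maintenance window rather than upgrading immediately:

```hcl
resource "google_container_cluster" "aaa" {
  name     = "aaa"            # hypothetical; use the real cluster name
  location = "us-central1"    # assumed location

  # Subscribing to a channel does not trigger an immediate upgrade;
  # the cluster is upgraded during its next maintenance window.
  release_channel {
    channel = "REGULAR"       # one of RAPID, REGULAR, STABLE
  }
}
```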
A
So,
ricardo,
I
I
I
will
try
to
work
to
write
up
some
of
that
get
some
of
that
stuff
to
help
on
it
and
send
out
an
email
to
kate's,
infra
mailing
list
and
see
if
people
feel
like
I've
sort
of
got
the
right
set
of
work
set
aside
for
us
to
to
work
on
for
this
release
of
kubernetes.
I
don't
think
most
of
those
imagery
names
and
stuff
merit
a
cap.
A
It
could
be
that
the
the
larger
binary
artifact
stuff
does,
I
don't
know
we'll
find
out,
but
I'll
try
to
do
better.
This
quarter
about
keeping
this
up
prod
or
this
working
group
organized,
but
I'm
super
open
for
suggestions
or
help
on
how
I
can
do
better
on
this.
B: Aaron, I take umbrage with the "do better" language. I think you've done fine, and this is not about doing better; this is not your failing in any way. I just need to be on the record with this: this is a community effort, and it falls largely on the shoulders of about six people who show up and do work regularly. I deeply appreciate those six people, but we need to continue to expand it.
A
Thank
you
for
saying
that
the
way
that
you
did,
I
I
agree
with
what
you
are
saying.
I
would
like
to
find
ways
to
make
this
work
more
approachable,
because
I
do
feel
like
needing
certain
security
privileges
to
like
create
things
aside.
A
lot
of
this
is
pretty
doable
and
it's
really
really
valuable
and,
like
really
appreciated,
so
I
just
like
to
figure
out
how
to
encourage
the
group
to
grow
a
little
more,
but
I
will
try
to
stop
using
a
do
better
language.
A: Thank you, thanks. Thanks all, see you again in two weeks.