From YouTube: Kubernetes Weekly SIG Release Meeting for 20200309
A
All right, we will go ahead and get started. Today is Monday, March ninth. Calendar oddities for me: it's the first day after switching our clocks, and this time zone stuff always screws everything up. We have a couple of things on the agenda. I guess I should start by saying this is the SIG Release meeting; this will be recorded. We follow the Kubernetes code of conduct, so please adhere to that. The recording will be posted to YouTube after the meeting. We have a modest number of attendees today, and I've got a couple of things on the agenda. Rather than go through team updates — we've got the release engineering meeting every other week off of this one, and the release team right now is super active right ahead of the release — so, rather than being redundant relative to those things, why don't we just push into the open discussion section for the day? Do you want to start, Bart?
B
Because I don't exactly understand what you need. It's about establishing the Kubernetes build, staging, and release process, which is being created. There are a lot of points there, and I don't know what is being expected from us at this point. So I think that — Stephen, or maybe you, know.
A
It's muted — I am not intimately familiar with all of the points on the big list. Stephen's been doing a ton of work in the last couple of days; things are flying by on GitHub, and I think I need to spend the better part of my day just catching up with what all has gone by, and we don't have him today.
D
I want to say it's nothing right now, because our stuff primarily runs in GCB. So you were asking if we needed a cluster, and I don't think we need a cluster yet. The one I'm wondering about is the build process, right? The staging and release side runs via GCB, but the build process is the ci-kubernetes-build and ci-kubernetes-build-fast jobs, and those run in Prow.
D
Okay, yeah — I doubt it. Based on that, I think it probably makes sense for us to do what we've been doing, Tim, in the meantime, and let things run as they run. If and when Prow gets migrated, then we will be on new infra for the build jobs. I think that for the build jobs, at least — I mean, it's another test job.
D
And it feels like that part is going to be slow going. So, yeah, I think we'll take cues from y'all on where Prow is living and when. As for the staging and release process, at least, I've been making a few tweaks to our infrastructure. So let's see if I can — give me a second, I'm going to share some stuff. The computer's still doing computer-y things.
D
Just go to 911, because that is such an apt issue number for establishing the build, staging, and release process on k8s infra for us. So, some of the stuff I've been working on in the background — this issue has been open since October, and some of it is finally starting to land, around kind of this prod-like, late staging/prod-test phase. All right.
D
So our current GCB has access to push into either of those projects, so there isn't an image promotion per se, at least for us and anago. So that cutover kind of feels like a good time to also move us over to a new infrastructure for the staging and release process. All right, so right now our staging—
D
So I think this is where we landed, right? I just wanted to scribble something real quick about where some of the new bits are. So today, if you want to create a project in kubernetes/k8s.io — all right, there is one — we have a groups definition, right? So, of course, some more fun little YAML, where you can go and look it up.
D
It defines the k8s-infra-release-admins group, right, and this group has essentially partial admin access to some of the release GCP projects — the projects that we have. So we have an admins, an editors, and a viewers group, right. These are the sets of the release engineering subproject owners; then the patch release team and branch managers; and then, finally, the release manager associates. We use each of these groups to delegate access across the projects. Now, for—
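The groups definition being shown lives as YAML in the kubernetes/k8s.io repo. As a rough, illustrative sketch only — group descriptions and member addresses here are placeholders, not copied from the real file — an entry for the three groups just described might look something like:

```yaml
# Hypothetical sketch of a k8s.io-style groups definition.
# Member addresses are placeholders, not real members.
groups:
  - email-id: k8s-infra-release-admins@kubernetes.io
    name: k8s-infra-release-admins
    description: Partial admin access to the release GCP projects
    members:
      - subproject-owner@example.com   # release engineering subproject owners
  - email-id: k8s-infra-release-editors@kubernetes.io
    name: k8s-infra-release-editors
    description: Patch release team and branch managers
    members:
      - branch-manager@example.com
  - email-id: k8s-infra-release-viewers@kubernetes.io
    name: k8s-infra-release-viewers
    description: Release manager associates (view-only access)
    members:
      - associate@example.com
```

Each group is then referenced by the infra scripts to delegate access across the release projects, as described above.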
D
So if we look at the staging one, it kind of goes through and does some stuff: it loops over staging projects — and, later on, release projects — and it does a few things for each project. It sets a set of writers; makes sure that the staging buckets exist; sets a retention period for those buckets; makes sure that the project exists; and makes sure that you have view access, right. So you need—
D
You need a certain permission on GCP to allow you to actually view the console, so it's kind of a view-only mode that all the writers need before essentially layering more permissions on top of that. Then it makes sure that GCR, the container registry, exists; and then a few more things, like GCS being enabled and the permissions for the buckets; and then, finally, GCB, which is kind of our bread and butter for the stage and release process; and also that Prow can write into the project, right.
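The per-project loop just described is implemented as bash scripts in the k8s.io infra repo. Purely as an illustration of the shape of that loop — the project names, retention period, group names, and exact flags below are assumptions, not the real script — here is a dry-run sketch in Python that prints the kinds of gcloud/gsutil commands such a loop would issue:

```python
# Dry-run sketch of the staging-project ensure loop described above.
# All names, flags, and the retention value are illustrative assumptions.
STAGING_SUFFIXES = ["kubernetes", "releng"]  # hypothetical project suffixes
RETENTION = "60d"                            # assumed bucket retention period

def ensure_staging_commands(suffix: str) -> list[str]:
    """Return the commands the loop would run for one staging project."""
    project = f"k8s-staging-{suffix}"
    bucket = f"gs://{project}"
    return [
        # make sure the project and its staging bucket exist
        f"gcloud projects describe {project}",
        f"gsutil mb -p {project} {bucket}",
        # set a retention period on the staging bucket
        f"gsutil retention set {RETENTION} {bucket}",
        # give writers view-only (browser) access before layering more on top
        f"gcloud projects add-iam-policy-binding {project} "
        f"--member=group:k8s-infra-staging-{suffix}@kubernetes.io "
        f"--role=roles/browser",
        # make sure GCR exists and GCB can run in the project
        f"gcloud services enable containerregistry.googleapis.com --project={project}",
        f"gcloud services enable cloudbuild.googleapis.com --project={project}",
    ]

if __name__ == "__main__":
    for suffix in STAGING_SUFFIXES:
        for cmd in ensure_staging_commands(suffix):
            print(cmd)
```

Printing rather than executing keeps the sketch safe to run; the real scripts apply these operations idempotently on every run.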
D
So this is important for the image building and pushing jobs within Prow. Now, we have a special case for the release managers. Essentially, what this does is it takes one of these repos — again, kubernetes-release-test and the releng ones — loops over them, and makes sure that the release viewers — that k8s-infra-release-viewers group — has view access. This is so that our release manager associates can actually see the projects and see the logs for the various release engineering projects.
D
Yeah, so this one is going to change a little in a future PR, where we basically want to make sure that each of the release projects has the KMS API enabled. The reason for this is that, aside from that, the Cloud Build account for the staging project can write into the test prod project, right. So, going back to that list, there are a few things going on. We have a bunch of projects now, so we talked a little bit about—
D
—the projects. I've got to split this into the projects that we use specifically for releases and the projects that we use for release engineering — whether it be the tooling or something that eventually connects into the release process — so I'm kind of breaking these up, right? We have some active ones right now: kubernetes-release-test and google-containers. google-containers is the production project for Kubernetes releases, so if you've ever seen something that is k8s.gcr.io—
D
Oh — that is a vanity domain for gcr.io/google-containers, right. If you've ever seen gs://kubernetes-release/release, that is within this google-containers project as well, right; that is our prod project. kubernetes-release-test is the project that release managers primarily work in; that's the staging project for Kubernetes releases.
D
Coming to, you know, the newcomers on the block: we've got k8s-staging-kubernetes. This one is going to be used specifically for staging Kubernetes, but on k8s infra, right — and this is only for staging Kubernetes, so the only assets that should be in there are ones that are related to the release process, artifacts that we would want a user to pick up eventually. Then there's release-test-prod: this is our kind of pre-prod environment, right. This is what we're using to vet our — this is like our test prod environment.
D
Are we able to actually push images? Are we actually able to pull certain files out of it? We're testing the interaction between us and what production would look like. Great. And then, finally, there's k8s-artifacts-prod. k8s-artifacts-prod — this goes into, you know, tenancy, as in where the project is actually located, right. So this is Google infrastructure versus Kubernetes infrastructure, and who owns it — release engineering, for a few of these projects.
D
For Google, this one is a black box for us, right. And then, finally, for k8s-artifacts-prod, it's Working Group K8s Infra. This is where everything lands, right. So if you have been building images, pushing images to your staging repositories, and then promoting them, this is where they end up, right — k8s-artifacts-prod, with each of them namespaced. Each of the manifests is essentially namespaced to your project, right. So if you have a project like k8s-staging-build-image, your promoter manifests — the way we define how you promote—
D
—the images — will likely be something like k8s — so there are some geographic domains for each of the locations, right. There's a primary k8s.gcr.io, which is essentially a vanity — my words are killing me right now — so there is a US, an EU, and an Asia endpoint for each of these, and so you might see in a promoter manifest something like this.
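A promoter manifest of the kind being described pairs a staging source registry with the regional prod endpoints and lists images by digest and tag. The sketch below is illustrative only — the digest is a fake placeholder, the service account is invented, and the exact schema fields may differ from the real k8s.io manifests:

```yaml
# Illustrative promoter-manifest sketch (digest and account are placeholders).
registries:
  - name: gcr.io/k8s-staging-build-image      # staging source registry
    src: true
  - name: us.gcr.io/k8s-artifacts-prod/build-image
    service-account: promoter@example.iam.gserviceaccount.com
  - name: eu.gcr.io/k8s-artifacts-prod/build-image
  - name: asia.gcr.io/k8s-artifacts-prod/build-image
images:
  - name: kube-cross
    dmap:
      "sha256:0000000000000000000000000000000000000000000000000000000000000000":
        ["v1.18.0"]
```

The `dmap` entries tie each image digest to its tags, and the US/EU/Asia destination registries mirror the geographic endpoints mentioned above.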
D
So, at least for the case of Kubernetes, we're kind of special-cased here, where the artifacts that we have today will end up landing in the root. So this is currently a test endpoint — our test namespace for us to promote images into while we're using kind of this test prod situation — and then, once we're actually cut over to the new k8s.gcr.io on k8s infra, this will get dropped, right. So we will be pushing into the root of each of these—
D
—each of these endpoints, because this is kind of for breakage, right. We want to make sure that everyone who has been using images like kube-apiserver is still hitting — still using — the same image URLs, right. So — I'm going all over the place. Anyone have questions about this? Right, the second part.
D
So, on the release engineering side, there is the k8s-infra-sig-release prototype one. I'm not sure, Tim — I think you might be the only one here with access to that one — but I'm considering that one deprecated. Right, yeah — we never — I think we playgrounded it for a little bit, maybe one or two of us, but we never got that far.
D
We rely on the GitHub token of the k8s-release-robot to be able to commit, push, and create GitHub releases for kubernetes/kubernetes, right. So the idea is that the k8s-releng-prod project would act as a bucket for things that are important to our processes. One example would be KMS, where we want a central place: we have several jobs across multiple projects that would be dependent on the GitHub token of that user, right. So why not put them in one place?
D
I've also added that functionality for the kubernetes-release-test project; essentially, the project that is on Google infrastructure is able to reach in and decrypt that token. If you want to see that, it's basically this PR — so, basically, you have a keyring and a set of keys that you would specify.
D
So these are the service accounts for the k8s-staging-kubernetes, k8s-staging-releng, and kubernetes-release-test projects, right. So now, when GCB runs on these, if they need to, they will be able to. You see in these cloud build configs we kind of cut these over — we cut these tokens over, right.
D
One of them is enabling the KMS API for the other releng staging projects — you saw the PR for that. Moving the KMS keys — so, the PR for that. And making sure that the staging account has access to copy GCS objects into the prod test account. We need this because anago has access currently — our GCB has access currently — from the kubernetes-release-test project into the google-containers project, right. So, basically, when we stage assets within the staging run—
D
—anago has access — or GCB has access by way of anago, or anago has access by way of GCB — on the kubernetes-release-test project to push to both the staging-k8s.gcr.io registry as well as the k8s.gcr.io registry, right. Those are both registries that live on the Google side. So, as we're moving onto new infrastructure, we need to use the image promotion process for those images, which means—
D
—these images that we tend to care about — cloud-controller-manager, conformance, hyperkube, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler — we need to be able to promote them. The release manager needs to be able to issue a PR to promote them from staging to prod; you'll no longer have access to directly push via anago, and we shouldn't anyway, right. So, figuring out what that looks like — I was talking a little bit with Linus about this, and I opened an issue; it's somewhere, right.
D
So wouldn't it be cool if we had a simple CLI tool that would figure out all of the digests and tags on the staging side, read the promoter manifest that is already stored on the k8s.io repo, and then merge the two together, right? So if I say I'm releasing 1.18.0, and I have tags on the staging repo that say 1.18.0, I want you to take them.
D
I want you to merge them into the current YAML that we keep for the promoter manifest, right, and then from there you would get an output YAML file, and you would pop that into k8s.io, right. So you'd be able to see a simpler diff without having to hand-cobble updates to this file, since it's quite a few images — excuse me, it's multiple images, across multiple architectures — so I think it's prone to error.
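The merge tool being proposed here did not exist yet at the time. As a purely hypothetical sketch of its core step — merging the staging tags for one release into the existing promoter manifest — the logic might look like this, with manifests modeled as plain digest-to-tags dicts rather than the real YAML schema:

```python
# Hypothetical sketch of the proposed promoter-manifest merge step.
# A "dmap" maps an image digest to the list of tags for that digest.

def merge_dmaps(existing: dict[str, list[str]],
                staged: dict[str, list[str]],
                release_tag: str) -> dict[str, list[str]]:
    """Fold staging digests carrying release_tag into the existing manifest."""
    merged = {digest: list(tags) for digest, tags in existing.items()}
    for digest, tags in staged.items():
        if release_tag not in tags:
            continue  # only promote digests tagged for this release
        merged.setdefault(digest, [])
        for tag in tags:
            if tag not in merged[digest]:
                merged[digest].append(tag)
    return merged

# Example: promote the v1.18.0 digest, leave unrelated staging digests behind.
existing = {"sha256:aaa": ["v1.17.0"]}
staged = {"sha256:bbb": ["v1.18.0", "latest"], "sha256:ccc": ["dev"]}
merged = merge_dmaps(existing, staged, "v1.18.0")
```

The output dict would then be serialized back to YAML and opened as a PR against k8s.io, giving reviewers the simpler diff described above.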
D
So we're going to talk a little bit more about this soon, right. I want to take some time to write up all of what I just said and send that out to the release managers group, the release team, and SIG Release proper, as a follow-up to Linus's email about the cutover. I know he's going to be taking a look at this, but, potentially, if someone is interested in working on writing this tool, or adding the functionality to the current image promoter tool, that's a possibility, right.
D
And the reason for that is that we want to make sure that we do it soon after the release, right — it cannot affect our current release cycle. The release for 1.18 is scheduled for March 24th. So what we wanted to do initially was maybe consider doing this after the first patch of 1.18 went out, but, timing-wise, it's aligning that we do it pre-patch, right. So the idea is that we would flip the current gcr.io/google-containers to read-only mode, right.
D
So, in the background, what's happening is we're doing a backfill of the existing images in Google infrastructure and moving those into the new prod — k8s-artifacts-prod. And then, you know, once we're in freeze — what the freeze is supposed to eke out is determining whether people are still pushing to the repo post-announcement, right. Let a few days settle, and then proceed with the actual cutover.
D
So our side — moving our staging and release process over to k8s infra — will be happening in tandem with this. So we have until that time — a few weeks — to look at this a little bit more deeply. I will work on a timeline from our perspective, from the release engineering perspective, and get back to you; you'll see an email soonish.
D
Yeah, so last week I was essentially hounding Tim and Dims and Chris to get these projects started, because I have anago anxiety, and I know something is going to happen with that tool that is going to put us in a weird state, potentially, and I wanted to try to tease that stuff out as quickly as possible. So having these — the new official staging projects — means I can test the way that it should be, close to prod, now. And, yeah, stay tuned; I think it's all going well.
A
We had originally hoped to be driving conversation this month, ahead of the contributor summit in Amsterdam. Obviously, that's not happening now, so we're just out chatting with SIGs, trying to understand: have folks been following the discussion on the KEP PR, or do people have things that haven't been captured there? And then, especially for a couple of the SIGs: were this KEP to go implementable, do people think that we would be ready for it? So I wanted to throw that out there to see.
D
There is another idea within that which I really like, but people would have to be on board with it, and it's the idea of essentially extending the release cycle — making the support window a year by extending the release cycle and only having three releases a year instead of four, right. So that would ultimately turn the support cycle—
A
So right now we have sort of three-months-ish; it would turn into four-months-ish for a cycle. The bigger piece is that the lifetime of a given release under support would be slightly more than twelve-ish, with a little bit of ramp-up and a little bit of ramp-down afterwards. It — yeah, weeks versus months; never mind, yeah.
A
For the release team — that's a good point. That's so that—
A
For those of you — there's a variety of folks on the call who've been playing around with the tooling and seeing things go by — I'm wondering what you think about readiness. Right now we have these support branches, and if we accept that our tooling is potentially snapshotted relative to the stuff it is being used to build — there are multiple ways to go about this: we could just keep the tooling always working backwards-compatible, but sometimes we have breakages, and maybe we end up with branches of tools. So there's the tooling—
A
—that built 1.18 originally, and it keeps building 1.18 for a year, depending on which way we go about it. How do people feel — are our tools too fragile to keep around and keep patch support, or are we changing things too dynamically to keep backwards support for more than a few weeks or months? What are people thinking there?
E
What's the focus for the next three months, six months? I know tech debt is hard to keep a pulse on, on that front, especially as things change for us from release to release, but I've liked seeing that we're tackling tech debt and we're trying to make these tools easier to use overall — that is at least my perspective on that issue. But is there any commentary on kind of a longer-term plan, or things that you all want to see as chairs?
D
They explained a vast swath of things that are going on. Having that knowledge in multiple people's heads increases the maintainability; that means we can start to tackle different things that are maybe even hairier than just the supportability of the tools, right. So I see that as a primary concern. I also see: how do we support SIGs in what they do day to day, right?
D
Some of the tools that we're building right now are going to eventually be used for support like that, right. And then figuring out what our versioning strategy is for kubernetes/release, and what's kind of the order of operations for the way we should release, right. So, things like the publishing bot can, you know, take a look at the tags that exist in kubernetes/kubernetes and shard those tags out to multiple repos, based on the way that staging is configured and the publishing bot is configured.
D
Is it possible to do something similar for kubernetes/release, for the things that we use, right? Because the question became: if we depend on the tools — if multiple people depend on the tools, and multiple people depend on our versioning strategy for whatever reason — do we go with the versioning strategy that's in kubernetes/kubernetes? How do we do branches? When do we tag?
D
Why do we tag? Right now, I essentially tag — like, I should tag later today, right — I essentially want to tag the repo before we hit another release. So beta 2 is going out tomorrow, I believe, and it's cherry-pick deadline today for the upcoming patches — I believe that's 1.17.4, 1.16.8, and 1.15.11 — that are coming out soon; those are slated to come out on Thursday.
D
So, until we get to a point where we're actually using the tools tagged at that point in time, right now we're just kind of doing the tag as a snapshot, right. Some of the jobs that we have running within Prow are configured to use the latest tags of images, right. Well, that's because our tools are changing so rapidly that for us to change a tool, cut a tag, go back to testing, make another change, cut the tag again — maybe something's wrong—
D
Okay, do it again, right, and then come back to testing — it's a bit of a churn. So that's why some of the tools have been targeting latest, but I would like to get away from that. I would also like to have a project — which we do now, the k8s-staging-releng project — specifically there so that we can promote things into a prod project, right.
D
So the idea of having the kubepkg image, or the krel image, or the k8s-cloud-builder image that we use for our build assets, be tagged at a point in time and be in a production location — that's important to me too, right. So we're not quite there yet, but I think we have a lot of the pieces in hand. I think it's a bit of—
D
It will be trying to forecast what people need in the next six months, a year, 18 months, right. I think we're doing it so far, but, you know, probably the most important thing is going back to maintainability — making sure that we're writing things that are maintainable by multiple people, because it's not sustainable for only a few people to be able to touch these things safely, right. Yeah, cool.
E
No, I think that answers my question. I think I saw that too, and I was lucky enough to go to SCaLE this weekend and talk with the two contributors about really just removing that reliance on Google's infrastructure — they give us the credits, you know — and just kind of, like, hey, let's actually do that thing that they were incentivized to do. So, cool; that firms things up for me. Thank you. Cool.
D
We want to base those containers — so these are images that eventually get used by different containers, like kube-proxy or kube-apiserver, right. These are the images that are the basis for the images that we essentially put out in production, so it's important that they are maintained and protected and all that good stuff, right. So the idea is that we would maybe move to just using the debian-slim image as the base for most of this. So Linus has one PR up.
D
Alternatively — so there's some back-and-forth chat with Tim Allclair about the security posture of those images, and we kind of got to the point where we're like, well, what if we just throw them out? What if we just stop trying to maintain these? Because this is another situation where the only way to push those images — like, maybe someone flips a version number within kubernetes/kubernetes for the debian-hyperkube-base image. What does that version number flip do? Nothing. Absolutely nothing!
D
You still need a Googler to go build this image, make sure that it's pushed with the correct version number, and then promote it to k8s.gcr.io. So it's one of those things that we cannot maintain as a community, because we do not have access to maintain those things — which is, you know — as we're changing some of this, we've — you know, we recently moved the — I think I was talking about this in the last release—
D
—engineering call, but we've recently moved the kube-cross image, which was kind of behind the Google curtain, into k8s infra, right. So there are steps in the right direction. It's going to take a little work to remove those images, so when I have time I'm going to go poke back at that PR and see if maybe we can toss a few people at it. So I think that's most of the update for that one. If you want to talk about the CI signal—
F
Actually, I don't have any groundbreaking comments. I left a couple of ideas — in short, I've got some ideas and proposals of things in the CI Signal subproject doc. In general, I just wanted to swing by again and ask for some general life advice and wisdom from you, to possibly move this thing forward, or sideways, or moving in some direction.
D
So, yeah, that spun out a little bit, and my expectation was that I was going to open the issue and it was going to be super simple and non-contentious. I think some of the way I phrased the issue initially makes it sound like it's scope-creeping into SIG Testing, and that's not the intent. I think I specified on the issue that this is a brain dump, and these are some of the ideas of things that we could do, right.
D
The one that I'm most concerned with is making sure that we have people looking at the health of all of the jobs that are on the blocking and informing boards, right — and not just for the current cycle, but across all of the cycles. Anything that comes after that is icing on the cake to me, honestly. So I will come back and say: let's defer a lot of these decisions, right. Let's try, like—
D
Let's have the first action that the team takes be deciding what's in scope for the team. I think that's a good first action to have. So I will go back to that proposal, strip anything that is contentious from it, and say that those are the things that the team should go after analyzing first. So that sounds—
F
Yeah — another word for anyone that might be interested in this: I think throughout the discussion, myself and a lot of people threw out ideas of what would be in scope and what not, so possibly there are some other ideas and recommendations in there. So any feedback is welcome, from you and anyone who might be interested in this.