From YouTube: Kubernetes Release Engineering 20200331
A: Hello, everyone. Today is Tuesday, March 31st, 2020, and this is a new edition of the SIG Release Release Engineering subproject meeting. This meeting is recorded and available on the internet, so please be mindful of what you say and do, and please be sure to adhere to the Kubernetes Code of Conduct.
A: In general, just be awesome people. As I mentioned before we started recording, we've got a light agenda, but I want to let people know some of the things that are coming up in the next few days. The first thing I should say, since he is on the call: we've got a new promotion. I was chatting with Tim yesterday, and I've been thinking about it for a little bit, so Dan is going to be promoted to branch manager, starting with the new cycle. Dan has been doing some really awesome work.
A: A question that I got on Slack related to release managers: if you are currently a release manager associate, the job doesn't end, right. There's a very big difference between the release managers group and the release team. The release managers group exists for as long as you want to do the role; there's no rotation from quarter to quarter. As long as you're doing a good job and are actually active...
A: ...you will continue in that role. So, speaking of release manager group stuff, this cycle we're going to have some updates to the handbooks, as we usually do, and one of the things that I really want people to focus on is the idea that we're building a ladder, which implies that we need to help people move through the ladder by mentoring them and by allowing them to take on tasks that we might normally take on ourselves, in order to expand that pool.
A: We need to bring in new people. We have a few release manager associates today, and we want to bring the release manager associates eventually into the branch manager role, and the branch managers eventually into the patch release team role. The biggest thing I would like to see is the mentorship aspect. I'm seeing a lot of technical execution, which is great, but that technical execution needs to be paired with mentorship, because that's the only way we strengthen this ladder. So, yeah.
A: We will be considering: if you have been part of the release team in the past, if you're on the release team that just finished 1.18, or if you have general interest in becoming a release manager associate, you can reach out to me via DM. I'm going to be sending a note to the release team as well. I would like to seed the group with maybe a few more people and try to work on moving them through the ladder.
A: Depending on where you are, you might want to start at a certain time and maybe execute the entire release yourself, or be able to put it in a place where you can hand it off to someone in another time zone. To be able to do that handoff, we need more people available to pick up the load. As for the branch managers, we mentioned this in the private chat.
A: So Dan, you'll get access to the private chat and you'll get new awesome permissions soon. I noticed in the private chat that we were talking about scrubbing the cherry-pick sheets. If there is anything that the branch managers are confused about on the cherry-pick sheets, please let us know. One of the things to look out for on those sheets is that there's a little bit of nuance in approving cherry-picks; there's an increased level of asking: is this a bug?
E: So right now I'm working on some logic, in conjunction, to find windows, and I also have pull requests up here. The idea, and maybe you can tell me if this is the right idea, is to replace the preamble. We have two functions in this release script, and one of them is the setting one, which has, from my perspective, a high complexity. I will put it into our very, very...
E: I mean, most of my other PRs got merged; they were related to refactorings in the GCB manager, and this is mostly all done from my point of view. Testing in conjunction with the GCB manager is also pretty good, from my understanding. I also supported Carlos a little bit related to that, and I think...
A: The biggest thing is, now that we actually have a set of docs for all of the different krel tools, I want to make sure that we don't drift in our branch manager or patch release handbooks. The way it works today is that there's some documentation within the GCB manager, then most of it is actually in the branch manager handbook, and then the patch release handbook references the branch manager handbook.
A: The general thought is: if you make something available in the API, someone will find a way to use it. The reason that we have the GCB manager is, one, it's supposed to provide the substitutions to anago (that's its primary focus) so that anago can run in GCB. But the second part of it is that it's meant to be run manually, so that a release manager is the one doing it.
A: Today, anago can run in an interactive mode, which is what it does when running through GCB. There's a yes flag, right? So if we're working on the anago portion, the anago rewrite, the GCB manager should be able to pass that flag through.
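The flag pass-through being discussed can be sketched roughly as follows; the function name is made up, and "anago" and the `--yes` flag are taken from the discussion, so treat this as a hypothetical shape rather than the real tooling:

```python
# Hypothetical sketch: a GCB-side wrapper builds the command line for the
# underlying release tool and forwards a --yes flag so the tool can run
# non-interactively (skipping its confirmation prompts).
def build_anago_command(release_args, assume_yes=False):
    cmd = ["anago"] + list(release_args)
    if assume_yes:
        cmd.append("--yes")  # suppress interactive confirmation prompts
    return cmd

print(build_anago_command(["release-1.19"], assume_yes=True))
# ['anago', 'release-1.19', '--yes']
```

The point is only that the wrapper, not the inner tool, decides whether the run is interactive.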
C: Okay, so I have also been busy with some end-of-release stuff and then some work stuff. But one thing that I definitely need to get back to, and I'm hoping to at the end of this week, is push-build, because I got about 95% of the way there and then it just stalled. So hopefully I can get back to work on that. Otherwise, I'm getting up to speed a little bit more on the branch manager handbook and that sort of thing.
A: A few different things, right. The pull cluster-up job that runs in our repo, I think, does a push build at the end. The ci-kubernetes-build job will do a push build, and then ci-kubernetes-build has all of its variants, so all the variants across the different branches will do a push build as well. So just be aware that the tool that we build has to be able to slot into those spaces. Yeah, okay.
A: Cool, well, alright. Mark, or Jim, or Taylor: if y'all have been working on anything and want to chat about it.
D: Nothing too big. Beyond that, I have started to look at calendars and dates, starting to work through that and do some planning; more on that will come out this week, and I'm going to be going about getting the release team formed up. But aside from that, nothing too crazy. We're starting a party; it's called 1.19.
B: Yeah, sure. Sorry if I break up a little bit, my internet's kind of flaky, but I really wanted to raise awareness of this issue I opened up in the kubernetes release area. Making sure I didn't lose everybody: everyone's frozen on my screen, and I'm not sure if I just have terrible internet or if y'all are seeing the same thing.
B: The idea is to remove all of the human error that we've been seeing in the past releases, for 1.17 and 1.18, and make it a little bit more of an automated approach for docs: letting the docs team focus purely on what we do best (reviewing PRs, making sure the content's there, tracking enhancements) and moving some of this git rotation and branching strategy into more automation tooling. So check out the issue; if there are any questions or comments, I'd be happy to answer those.
B: We didn't merge that yet. What we wanted was good documentation in place for how to squash your PR, the essentials around the process that we're changing, and then once that's in place we're going to make that tweak. We were waiting for the 1.18.0 release to go out the door, which it has, successfully, and now we're working on some documentation around squashing PRs for docs and training some of our folks. That should merge, I'd imagine, in the next week or so. Okay.
A: ...the contributor guide, where you can, because it happens a lot. It happens a lot across all repos, but it's a style thing; everyone has a slightly different git workflow, so it depends on what you do or don't do. Also, if you pull in suggestions during PRs and you don't batch them, there are different implications for each style. So definitely, if you want, tag me on that PR once it goes up.
A: Awesome, thank you. So, all right, I guess I will go. I've been doing stuff and things and all that good stuff. We are moving into 1.19. We are also moving into a big change called the VDF. The VDF is our vanity domain flip, and the vanity domain is k8s.gcr.io. So I'm going to show you stuff; I should show you a PR.
A: ...for how to do image promotion, and we've seen different SIGs and subprojects start to sign up for doing image promotion. Those images are moving to a new place, because images are basically being promoted from a staging repository to a production repository, which you can see here.
A: So we have various config generations in the k8s.io repo. We also have, and I think I've shown some of this before on a call, a groups.yaml, which is pretty cool because you can do configuration for various ACLs via YAML. We all love YAML, right. But we've also got k8s.gcr.io, and then within that...
A: ...and manifests. So, as an example of a manifest, let's see the one for SIG Release: build-image. We have a promoter manifest which basically says: I've got a few different names; I'm going to bind the k8s-infra image group, which is the group that has IAM privileges over this GCP project; and here is the registry that it attaches to. This is essentially like a GSLB for the registry, which hits {us,eu,asia}.gcr.io/k8s-artifacts-prod/build-image.
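A promoter manifest of the kind described here looks roughly like the following sketch; the project, service account, image name, digest, and tag values are illustrative, not the actual SIG Release file:

```yaml
# Hypothetical sketch of a container image promoter manifest.
# All concrete values below are made up for illustration.
registries:
- name: gcr.io/k8s-staging-build-image          # staging source registry
  service-account: k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com
  src: true
- name: us.gcr.io/k8s-artifacts-prod/build-image  # one production destination
  service-account: k8s-infra-gcr-promoter@k8s-artifacts-prod.iam.gserviceaccount.com
images:
- name: debian-base
  dmap:
    "sha256:1111111111111111111111111111111111111111111111111111111111111111": ["v2.0.0"]
```

The real manifests live in the kubernetes/k8s.io repo; this only mirrors the shape being described in the meeting.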
A: No, not that one, but essentially: we have a tool that outputs these entries ordered by digest, so I wanted to make sure that those were in the same order. Essentially, if you run this command and you have the sigs.k8s.io container image promoter, you can output a file that looks exactly like that. So, moving forward, release managers will need to do that, and I'm going to write up a process for it.
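The "ordered by digest" property mentioned above is just a deterministic sort of the digest map, so that two runs of the generator produce byte-for-byte comparable files. A minimal sketch, with made-up digests:

```python
# Sketch: sort promoter-manifest image entries by digest string so generated
# manifests are stable across runs. The digests and tags are made up.
dmap = {
    "sha256:bbbb": ["v1.19.0-alpha.1"],
    "sha256:aaaa": ["v1.19.0-alpha.0"],
}
ordered = dict(sorted(dmap.items()))  # lexicographic order on the digest key
print(list(ordered))  # ['sha256:aaaa', 'sha256:bbbb']
```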
A: The first thing we want to do is skip the ACL check on the production registry, right, the new production registry. The way anago works today is that the GCB service account that runs the GCB jobs for staging and release has access to write to both the staging GCS buckets and the production GCS buckets. So we're kind of breaking that.
A: To make sure that this works, we have to split, or break, that assumption, because we won't be able to write directly to the production bucket, which is good. So the first thing we do here is skip the ACL check on the production registry, and this cleans up a lot of the logic that's involved. Basically, anago will do a pre-flight check, and the logic is really that we'll look at all the container registries. So it'll grab...
A: If you run as a user and you specify that user, and you specify a separate bucket (which is something that we don't really do), it will take that bucket, as well as the default staging bucket, as well as the production bucket, and during the first steps of anago it will check whether you have write access to those buckets by writing a file to each of them and then deleting it.
A: So you can see, instead of doing this loop over the registries, we're just saying: if it's the registry, just replace anything that might be an underscore with a hyphen and then get on with your day. We've also got this conditional here that says, if you're in nomock mode, we're going to skip the ACL check on the prod path. So that's the first piece of it. The next piece is that we're going to skip image pushes in nomock mode as well.
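The underscore-to-hyphen normalization mentioned above is a one-line transform; sketched here with a made-up registry-derived name:

```python
# Sketch: normalize a registry-derived name by replacing underscores with
# hyphens, since GCS bucket and GCR path naming disallows underscores.
registry = "k8s_artifacts_prod"  # made-up example value
normalized = registry.replace("_", "-")
print(normalized)  # k8s-artifacts-prod
```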
A: But just for now, we're going to validate the image manifests. And, let you know, there's actually a TODO in here, because this is currently a no-op. So this is just the testing phase; I'm walking you through my process for testing out some of this stuff, because essentially it needs to be...
A: It should link back to various places, but it's kind of a tangled thing that is related to establishing our stage and release process on k8s-infra, and you can see there's been quite a bit of work that's happened since a little bit before October. This is essentially our first execution of moving a thing, so check that out too. All right, back to the review. Any questions so far?
A: Okay, cool. All right, so we're skipping the image pushes, and then in the next one we are basically setting the test project. We're not removing that test project variable, but this is part of what I was talking about: here we're pointing to the new staging project on k8s-infra for the kubernetes staging and release process, so we're pointing it directly there instead of referencing the kubernetes-release-test project.
A: We're kind of in a split-brain mode, where we're still writing the artifacts (tarballs and what have you) to the old project, and then writing the container images into the new project. In between running the staging process, a release manager will need to go and do an image promotion, because the next step will do a validation of that image promotion. So, if you read the commit message, essentially what's happening is we've created a new function called validate remote manifests, and Sascha...
A: This is kind of why I put skopeo into the image, so that we could leverage this. We do a little minor cleanup where we use skopeo inspect to retrieve the image manifest remotely, and then we loop through the architectures and check for the architecture-specific digest within the manifests using jq.
A: It pretty closely resembles the release::docker::release function and could be consolidated, but I decided not to do that here, because we know release::docker::release works today, and we're adding new functionality at a critical time: we're actually doing an infrastructure cutover, so I don't want to introduce too many things into that equation. This is something that can easily be refactored later.
A
It's
just
something
that
I
don't
think
we
should
take
on
now,
so
we're
that
to
do
was
before
we
replaced
with
a
docker
release,
docker
validated
remote
manifest,
and
you
can
see
that
it
has.
It
takes
the
same
arguments
as
release
docket
release
right
and
then
going
into
this.
We
add
some
more
to
do
so
remind
us
to
turn
thing
remove
things
once
once
the
vanity
domain
flip
is
actually
successful.
We
cleaned
up
some
of
the
some
of
the
for
loop
population
here.
A
What's
a
find
instead
of
a
CD
echo,
this
is
this
is
a
little
bit
more
guaranteed
to
give
us
what
we
want
right,
so
we're
basically
looking
for
a
file
only
within
that
current
directly.
Only
within
that
current
directory
and
we
want
to
strip
off,
we
want
to
strip
off
the
file
path
right
and
just
have
the
base
name
of
the
file
so
release
doctor
release.
We
had
an
opportunity
to
clean
up
some
of
these
local
variables
because
they're
no
longer
required
following
some
of
the
locals,
as
well
as
some
of
the
local
rays.
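The find-with-basename behavior described here, listing only regular files directly in a directory and keeping just their base names, can be sketched as:

```python
import os
import tempfile

def list_basenames(directory):
    # Analogue of `find "$dir" -maxdepth 1 -type f` plus stripping the path:
    # regular files directly in the directory, base names only, sorted for
    # deterministic output.
    return sorted(
        entry for entry in os.listdir(directory)
        if os.path.isfile(os.path.join(directory, entry))
    )

d = tempfile.mkdtemp()
open(os.path.join(d, "kube-apiserver.tar"), "w").close()
os.mkdir(os.path.join(d, "subdir"))  # directories are excluded
print(list_basenames(d))  # ['kube-apiserver.tar']
```

Unlike cd-and-echo with a glob, this never returns the literal pattern for an empty directory and never picks up subdirectories.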
A: And then you can see that, for validate, the minor changes are really to the release::docker::release function, but the docker validate manifests function is pretty similar, basically because I wanted to get this logic out. Again, this is something that we can conditionalize within release::docker::release and change.
A: So this is actually the manifest list for that image, and then from the manifest list we're basically getting the digest. This is taking the list and doing some jq magic to pull the manifests and then select by platform and architecture, so this is where the architecture field would be, and then, if that's good, great, we're going to pull out the digest for that architecture.
A: This is useful to validate, once we start doing the image promotion, whether the digest that we supposedly pushed is the same one that we're promoting. So I think the next evolution of this will probably dump the promoter manifest somewhere, so that someone can just pick that up and then PR it to the kubernetes k8s.io repo. There's also an issue I opened for a tool that we could write afterwards...
A
To
kind
of
that
would
essentially
merge
the
two
manifests
to
emerge
and
existing
manifest
with
the
new
results
right.
So
that
way,
we'd
have
it's
it's
a
little.
It's
a
little
more
frictionless
trying
to
having
a
tool
that
will
automatically
merge.
You
know
for
you
right,
as
opposed
to
having
to
go
okay,
well,
I'm,
looking
for
119
alpha
2,
okay
for
API
server
did
I
catch
them
all
right.
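The merge tool being proposed would, roughly, union newly staged digest-to-tags results into an existing digest map; all digests and tags below are made up:

```python
# Sketch: merge new digest->tags results into an existing promoter digest map
# (dmap). New digests are added; tags for an existing digest are unioned.
def merge_dmap(existing, new_results):
    merged = {d: sorted(set(tags)) for d, tags in existing.items()}
    for digest, tags in new_results.items():
        merged[digest] = sorted(set(merged.get(digest, [])) | set(tags))
    return merged

existing = {"sha256:aaa": ["v1.19.0-alpha.1"]}
new_results = {
    "sha256:aaa": ["v1.19.0-alpha.1"],
    "sha256:bbb": ["v1.19.0-alpha.2"],
}
print(merge_dmap(existing, new_results))
```

Automating the union is what removes the "did I catch them all?" step for each component image.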
A: So, the reason that we put the validation (yes, Sascha, we can) in the stage, in the mock process, is that, one, we want to be able to test it, because we want to validate that the images actually made it to the staging area; and two, we can't easily validate that they made it to the production area, because we would essentially be doing a release.
A: Basically, we're making sure that there's no service interruption when we do that cutover. We're going to freeze the old registry, google-containers, right before we do this. We should probably freeze it today, actually; we're going to talk to Matt about this, but basically we'll give it a few...
A
Give
it
a
little
bit
to
make
sure
that
no
new
images
come
into
that
registry
right
and
then
we'll
do
the
cut
over
and
after
the
cut
ever
happens,
I
think
what
I'll
do
is
I'll
do
I'll
do
another
one
19
alpha,
writes
it
out.
That'll,
give
us
an
opportunity
to
go
through
the
entire
workflow.
Well
hole,
merge
this
PR
first
and
then,
and
then
do
the
Alpha
right.
A
So
they'll
give
us
an
opportunity,
and
it
also
give
me
an
opportunity
to
kind
of
like
see
what
the
user
experience
is
and
and
write
up
some
docs
on
how
to
do
image
promotion
within
this
right,
so
yeah,
and
especially
if
none
of
the
release
managers
have
touched.
The
image
formation
process
before
it'll
it'll
be
good
to
have
a
walkthrough
on
that.
A: So we have, let me share my screen again, what we've been working on in the background. Yes, all right: Go updates, and there have been quite a few. There's a lot to do when you do a Golang update, and I would say that the 1.13.8 updates were a little easier than usual...
A: ...to do. I want to make sure that we hand that over to the release engineering group and give them an opportunity to do this as well. One of the bigger things was that doing the Go updates was essentially contingent on having a Googler around, because we needed them to publish a new kube-cross image, and the kube-cross image was one of those images that lived on a Google registry. So one of the things that we did was move kube-cross over.
A: Really quick on the steps: essentially, you need to do a kube-cross bump. Now that kube-cross lives in k/release, it's in images/build/cross, and the Makefile and the Dockerfile are there, as well as a VERSION file. You bump the VERSION file, you bump the version within the Dockerfile, and then you send it for build; the build will trigger.
A: It's a promotion, and then after the promotion there is the actual bump within k/k, which has gotten a little simpler. We're basically bumping a number, the kube-cross version (sorry, there is something in the background), and then changing the dependencies in that YAML file and some Bazel stuff, which is why Bazel is connected to the discussion of removing Bazel. This did not work until we got a clue from Jeff about how to fix it.
A: Yeah. So after that there is a variants bump. The variants are basically a file for the GCB builder that controls the images that we get. If you have looked at test-infra and you've seen the kubekins-e2e image with a tag like `:<sha>-master`, that spec is basically here. It's saying that we're going to have a config that's going to be called master, we're going to use this version of Go, this k8s release.
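A variants file of the kind described here looks roughly like the following sketch; the keys and version values are illustrative, not the actual kube-cross configuration:

```yaml
# Hypothetical sketch of a GCB builder variants file: one build configuration
# per variant, each pinning a Go version and related version markers.
variants:
  master:
    CONFIG: master
    GO_VERSION: 1.13.8
    K8S_RELEASE: latest
    BAZEL_VERSION: 0.23.2
  '1.17':
    CONFIG: '1.17'
    GO_VERSION: 1.13.8
    K8S_RELEASE: latest-1.17
    BAZEL_VERSION: 0.23.2
```

Bumping the Go version for CI then amounts to editing the `GO_VERSION` entries for the affected variants.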
A: This is the version marker for the k8s release, the Bazel version (the old Bazel version), and so on and so forth. We want to make sure that we bump all of these files so that in CI we're starting to use the new Go version as well. And finally, there's just a maintenance update for the publishing bot, and you can see that this is just a lot of search-and-replace stuff: we're going to say that we're at 1.12.12 and we want to get to 1.12.17, and then for 1.13.4 we're going to 1.13.8. So this is just a bunch of search-and-replaces on these files, and that's kind of a walkthrough of the new process. And this is assuming that... well, if we look at Go 1.14.1, we can see that it's going in a not-so-smooth way, I guess is the nicest way to put it. Basically, there are some mlock issues with certain kernels for Go 1.14, so 1.14.0 is, I think, a non-starter for us, so we're looking at 1.14...
A: ...1, though I actually think that we're going to be looking at 1.14.2. There were race conditions discovered, I think, for etcd and bbolt specifically, so there is a change that needs to happen in bbolt, which I think was made but was not fully tested in a cross-compilation environment, so that might need to be redone, and there's more.
A: I think we're going to end up waiting. I hope to update that by the end of the week, with at least a general note about what direction we're going in, and I think we're probably going to be looking at going for 1.14.2, assuming all the right pieces are in place. I also want to make sure that etcd is updated in kubernetes/kubernetes, because we need to pull in that new default version. So, a few things to do there, but we will get it all done.
A: I'm trying to take on the biggest issues right now. For me, the biggest issues are things that cause Kubernetes to not be able to be released well, and things that move towards helping unlock some paths for people to move to new infrastructure. So if something supports a process that's ripping something out of Google infrastructure and putting it on k8s-infra, then that's part of where my priority lies, in terms of deciding what we do for the cycle.
A: ...the kubeadm out-of-tree work, which implies that we're going to be doing some work around our branching and tagging strategies for k/k and for kubernetes/release, so stay tuned for that. If you have not had a chance to take a look at the kubeadm out-of-tree KEP, please take a look at that. I will pop...