From YouTube: RelMgr: Golang update walkthrough 20200520
A: Hello, hello, everyone. Hopefully we are recording right now. I think we are, yep. This is a special edition, the May 20th special edition of the Release Engineering subproject meeting, kind of an informal thing where we're going to go over how to update Go for Kubernetes. Specifically, within this meeting we're going to talk about the Go patch updates, as opposed to the Go minor updates; the Go minor updates are potentially a little bit more involved and not entirely documented.
A: Currently we have Veronica and Marky on the call. Veronica and Mark, you're going to be responsible for doing the Go 1.13.11 update, and that will be an update for the master branch, so Kubernetes 1.19, as well as cherry-picks back to the 1.18, 1.17, and 1.16 branches. Right now we're on Go 1.13.9 in all of those branches, so as long as we're maintaining minor version compatibility, we should be fine. So, yeah, this is the issue.
A: So let's do these side by side and make sure that kind of matches up, right: kube-cross bump, image promotion, kubernetes/kubernetes bump, the cherry-picks, yes, kubekins variants, k8s-cloud-builder bump, and the publishing bot. Okay, that makes sense. Alright, so step by step. Some things have changed in kube-cross, just to start off. kube-cross is the image that we use within kubernetes/kubernetes that allows cross-compilation builds, our multi-architecture builds, so that image needs to maintain the same Go version that we're intending to bump Kubernetes to.
A: So really quick, we'll take a look at what that image looks like. That's in kubernetes/release, under images/build/cross, and then the Dockerfile. And I'll talk about some of the content of this repo too, so you can get a better idea of what's there and why. So we've got a Dockerfile, and this Dockerfile has been fairly similar for some time.
A: We did some optimizations of a few of these RUN steps, but going down the line: we're grabbing the Go version, whatever Go version is specified by the build argument, and we're also setting the Debian frontend to noninteractive mode, so basically, when we do apt-get commands, we don't really have to specify it every time.
A: The protobuf version should be in the compatibility range of the protobuf versions and libraries that are in kubernetes/kubernetes. We're creating a temp directory, adding some Go utilities like cover and goimports, cleaning the cache, adding etcd, then doing some cleanup on the apt side at the end and setting a null entrypoint. So that's the Dockerfile, really quick. Any questions on that?
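The build-argument flow just described can be sketched as a local invocation. This is a minimal sketch: the image path, variable names, and values here are illustrative assumptions rather than the Makefile's exact contents, and the docker call itself is left commented rather than run.

```shell
# Sketch only: the docker invocation is shown as a comment; the variable
# names and values below are assumptions for illustration.
GO_VERSION="1.13.11"            # Go toolchain the image should carry
KUBE_CROSS_VERSION="v1.13.11-1" # tag we intend to push
IMAGE="gcr.io/k8s-staging-build-image/kube-cross"

# docker build \
#   --build-arg "GO_VERSION=${GO_VERSION}" \
#   -t "${IMAGE}:${KUBE_CROSS_VERSION}" .

echo "${IMAGE}:${KUBE_CROSS_VERSION}"
```

The point is simply that the Go version is injected at build time via a build argument, so one Dockerfile can mint images for several Go versions.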
A: Okay, that wasn't the exciting part. Let's look at the more interesting things. So, for people who have been on the release engineering calls before, I've gone in the past into how we do image building, how we do image building from CI, and how we support the staging-to-production promotion process, and one of the tools that we use is something called gcb-builder.
A: So essentially, this is a command-line utility that wraps calls to GCB, Google Cloud Build. That call is really, I think, gcloud builds submit, and it emits that command with a set of substitutions. Substitutions are essentially GCB's name for variables, so we're passing in some user-specified variables, and some of these are niceties, like the git tag at the time that you're doing this build, or maybe the date, or maybe...
A: The tag will include the date, the commits since the last tag, and then a short version of the git SHA. We can check that out later, but what's really cool about it is the variants file: if you specify a variants file, that is, if you pass it a build directory that contains a variants file, it will try to build all of those variants unless you specify a specific one.
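The kind of call gcb-builder wraps can be sketched as follows. The flag spelling and the substitution names here are assumptions for illustration, and the gcloud invocation is left commented rather than run.

```shell
# Sketch of the wrapped Cloud Build submission; substitution names are
# illustrative, not copied from gcb-builder.
GIT_TAG="v0.3.1-27-g3fcf785"
SUBS="_GIT_TAG=${GIT_TAG},_CONFIG=go1.14,_GO_VERSION=1.14.3"

# gcloud builds submit \
#   --config cloudbuild.yaml \
#   --substitutions "${SUBS}" \
#   .

echo "${SUBS}"
```

Each `_KEY=value` pair becomes a substitution available inside the cloudbuild config, which is what lets one config serve multiple variants.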
A: We created two variants: one is go1.14, and the second one is go1.13. Some of the variables that are defined: CONFIG is just a general variable to let us know that this is the go1.14 config, so that we can pass this around during the Cloud Build run; then the Go version, of course, and the kube-cross version, which is going to be, basically...
A: This is the version that we're saying kube-cross is going to be by the time that we push it; we're going to tag and push at this version. Then the protobuf version and the etcd version. The etcd version matches what is in master and, I believe, is also in some of the other release branches. This is the last known good protobuf version that we were using; at some point we're going to bump that, but I want that to kind of happen...
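A variants file along the lines described above might look like the following. This is a hypothetical sketch: the schema and values are assumptions based on the description in this meeting, not the real file.

```shell
# Write a hypothetical variants file and sanity-check it.
cat > /tmp/variants.yaml <<'EOF'
# Hypothetical sketch of a kube-cross variants file; schema and values
# are illustrative assumptions.
variants:
  go1.14:
    CONFIG: go1.14
    GO_VERSION: 1.14.3
    KUBE_CROSS_VERSION: v1.14.3-1
  go1.13:
    CONFIG: go1.13
    GO_VERSION: 1.13.11
    KUBE_CROSS_VERSION: v1.13.11-1
EOF

grep -c 'CONFIG:' /tmp/variants.yaml
```

With no variant selected, gcb-builder would iterate both entries and run one Cloud Build per variant.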
A: So, within the Makefile and the cloudbuild.yaml... let's look at the cloudbuild.yaml first, right. So the cloudbuild.yaml has one step, and it's simply doing make all, a make all with a set of variables, and you can see that we're enabling the experimental functions for the Docker CLI. You'll note that there's an underscore on this variable, on this substitution, and what that means is it's a user-supplied variable, and by user-supplied...
A: ...we mean either the user themselves or gcb-builder; it's something that we're supplying to the GCB build before it actually runs. So that CONFIG variable, and basically all of the variables that you saw specified at the very top, those are getting passed into the make call. And then you can see the substitutions here; these are the valid substitutions for this build. You can see that we've set null values for each of these substitutions.
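The single-step config described above might be sketched like this. It is an assumption-laden illustration (builder image, variable names, and layout are not copied from the real cloudbuild.yaml), shown just to make the underscore-prefixed substitutions and the images stanza concrete.

```shell
# Write a hypothetical one-step cloudbuild config and count its
# user-supplied (underscore-prefixed) substitutions.
cat > /tmp/cloudbuild.yaml <<'EOF'
# Hypothetical sketch; field values are illustrative assumptions.
steps:
  - name: gcr.io/cloud-builders/docker
    entrypoint: make
    env:
      - DOCKER_CLI_EXPERIMENTAL=enabled
    args:
      - all
      - CONFIG=$_CONFIG
      - GO_VERSION=$_GO_VERSION
substitutions:
  _CONFIG: ''
  _GO_VERSION: ''
  _KUBE_CROSS_VERSION: ''
images:
  - gcr.io/$PROJECT_ID/kube-cross:$_KUBE_CROSS_VERSION
EOF

grep -Ec '^  _[A-Z_]+:' /tmp/cloudbuild.yaml
```

The empty-string defaults mirror the null values mentioned above; real values arrive via the substitutions flag at submit time.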
A: So that's why you'll see in the Makefile that we have some default versions of each of these variables. So, theoretically, you could just run the Makefile instead of submitting a build to gcb-builder, but for the purposes of promotion we do not want user-built or Makefile-only-built images pushed to the staging repository; those should be handled through gcb-builder. And then, finally, it's got this images stanza.
A: This images stanza will essentially make sure that these tags exist for this image. So you see here the PROJECT_ID; PROJECT_ID will map to whichever project this run is happening in, and we keep that parameterized for the sake of anyone who wants to do this and push images to their own registry.
A: They have the option to do that without having to unfurl the hard-coded k8s-staging-build-image, which is the staging project that we'll be pushing this to. So, going over to the Makefile, we can scan through it. We can see that we are setting a few things: the k8s-staging-build-image, as I mentioned, and then the prod registry, which is us.gcr.io/k8s-artifacts-prod/build-image.
A: So basically, if you remove the k8s-staging prefix from whatever your staging project's name is, you'll get an idea of the subdirectory the image will land in; so that's build-image. So, those defaults that I was talking about: we're doing a tag, and basically this will pull...
A: The CONFIG defaults to go1.14, the kube-cross version defaults to the latest, v1.14.3-1, and so on and so forth. So the all target is build and push, and build and push are kind of straightforward Docker things: we're doing a build with the build arguments, and then we're doing a push. What's special here is that we're also creating a manifest for the kube-cross version, and the manifest is then annotated with the architecture of the amd64 image.
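The manifest-list step just described can be sketched with the Docker CLI's manifest subcommands. The subcommands themselves (create, annotate, push) are real Docker CLI commands that require the experimental CLI mode mentioned earlier, but the image names and the exact flow here are assumptions; the docker calls are left commented rather than run.

```shell
# Sketch: stitch a per-arch image into a manifest list (commands commented).
REGISTRY="gcr.io/k8s-staging-build-image"
VERSION="v1.14.3-1"
ARCH="amd64"

# docker push "${REGISTRY}/kube-cross-${ARCH}:${VERSION}"
# docker manifest create "${REGISTRY}/kube-cross:${VERSION}" \
#   "${REGISTRY}/kube-cross-${ARCH}:${VERSION}"
# docker manifest annotate "${REGISTRY}/kube-cross:${VERSION}" \
#   "${REGISTRY}/kube-cross-${ARCH}:${VERSION}" --arch "${ARCH}"
# docker manifest push "${REGISTRY}/kube-cross:${VERSION}"

echo "${REGISTRY}/kube-cross:${VERSION}"
```

Annotating each entry with its architecture is what lets a single tag resolve to the right per-arch image at pull time.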
A: You know what, I'm not going to say that it's easy; we'll find out when we get there, okay. And then, finally, we push this manifest. We've pushed the tags, so we're pushing that CONFIG tag, and then we're pushing the architecture plus kube-cross version tag. So what we'll end up with in the staging registry is a variety of different configurations, or a variety of different tags.
A: Okay, alright, so we're going to close these up. So, all of that said, what do we do? We want to do a kube-cross bump; what's the first step? We want to go in here, and let's take a look at what we did for the v1.14.3-1 image. Pretty straightforward: all we did was change the... so, because 1.14 is the prevailing minor version...
A: ...that's the reason that we bumped this kube-cross version within the Makefile. For the 1.13 variant, we're not going to change anything within this Makefile; we're going to leave 1.14 as the prevailing version, but for 1.13 we're going to change the kube-cross version. So, if we've changed something, some content within the repo that changes the way this kube-cross Docker image is built, then we want to bump the revision.
A: So this is essentially the image revision, this "-5" here. We want to bump that if we're changing the Go version, so in this case we're going to go from 1.13.9 to 1.13.11: we're going to change the Go version here, and then we're going to change the kube-cross version to be v1.13.11-1. And then save your changes, push the PR, and then, in the background, we're going to do some kooky stuff with image building in postsubmits and all that good stuff.
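The two edits just described (bump the Go version, reset the image revision) can be sketched on a throwaway copy. The file name and variable names here are assumptions for illustration, not the real repo layout.

```shell
# Stand-in for the variant's version pins, on a throwaway file.
mkdir -p /tmp/kube-cross
printf 'GO_VERSION?=1.13.9\nKUBE_CROSS_VERSION?=v1.13.9-5\n' \
  > /tmp/kube-cross/Makefile.go1.13

# Bump the Go version everywhere it appears...
sed -i 's/1\.13\.9/1.13.11/g' /tmp/kube-cross/Makefile.go1.13
# ...then reset the image revision to -1 for the new Go version.
sed -i 's/v1\.13\.11-5/v1.13.11-1/' /tmp/kube-cross/Makefile.go1.13

cat /tmp/kube-cross/Makefile.go1.13
```

Note the ordering: the blanket version bump also rewrites the version inside the kube-cross tag, so the revision is fixed up afterwards.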
A: So, within the config/jobs/image-pushing directory, we have a file called k8s-staging-build-image.yaml, and k8s-staging-build-image.yaml contains the configs for the postsubmits that handle the image-pushing doodads. So the cloud build job that we were talking about earlier, this is where it gets submitted. It's using gcb-builder, and this is basically passing a bunch of gcb-builder flags to the Prow job. Alright, so we're saying use...
A: So this is the debian-base one, won't that be enjoyable, and then further down you'll see the kube-cross one. So a few things are happening with this image-building job: it's running on a trusted Prow cluster; it's doing a few things; it's reporting failures to release-managers@kubernetes.io, which is what we want; and it's reporting across a few different Testgrid dashboards, the release engineering informing, master informing, and sig-release...
A: ...image-pushes dashboards. So what we do is we only trigger these jobs on changes to the relevant directory. We don't need to build these images all the time; we only need to build these images when something in the kube-cross directory changes. And we say that we only want this job to run on master, and we want to use the GCB builder service account, which has access to reach into our build-image GCP projects and trigger a GCB build.
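The triggering rules just described map onto standard Prow postsubmit fields. The sketch below is hypothetical: the job name, cluster name, and path are assumptions consistent with the description above, not the real k8s-staging-build-image.yaml contents.

```shell
# Write a hypothetical postsubmit stanza and check the trigger filter.
cat > /tmp/postsubmit.yaml <<'EOF'
# Hypothetical sketch of the image-pushing postsubmit; values are
# illustrative assumptions.
postsubmits:
  kubernetes/release:
    - name: post-release-push-image-kube-cross
      cluster: k8s-infra-prow-build-trusted   # trusted Prow cluster
      branches:
        - ^master$                            # only run on master
      run_if_changed: '^images/build/cross/'  # only when kube-cross changes
      annotations:
        testgrid-alert-email: release-managers@kubernetes.io
EOF

grep -c 'run_if_changed' /tmp/postsubmit.yaml
```

`run_if_changed` plus the branch filter is what keeps the image from being rebuilt on every merge.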
A: So it's an account that lives in one place, that lives in one project, one of the k8s-infra master projects, and has access to trigger GCB jobs across multiple staging projects. We're saying that we're going to use the project-named GCB scratch bucket, and we're going to target images/build/cross as, essentially, the config and build directory for gcb-builder. And I think we can go into gcb-builder at a later time; I'm hoping to do some changes there, so stay tuned.
A: Cool. So we're looking at the post-release push-image-kube-cross job. After we have made our PR, it's been approved, and it's merged, this job should run, and depending on the size of the image, I think this takes 15 minutes or something, maybe less. Cool. And it's not giving me all the logs that I want, but it's given me enough to get more logs. So it says that we did job runs here and here.
A: Well, you need to be part of the IAM group for the build image, so, I think in this instance... interesting, interesting. So eventually we want to move the build-image ownership fully under release engineering. Today it is kind of shared ownership between myself, Tims, Christoph, Ben, and Linus.
A: So there's some introspection that you'll lack based on that, but I'll try to get the logs output to... I think that last step in transferring ownership fully over to the release engineering subproject is documenting it. So once we go through this call, Marky is going to draft some notes, I am going to refine the notes, we're going to get that merged in, and we should be able to proceed forward after that. But, so, going through these logs: you know, Docker build stuff.
A: Thank you, yep. But as we saw, this is a little bit more... a little sparse in terms of logs. So when we run variants, when we run jobs that have variants attached to them, I think the way the logs are output is slightly different. We intentionally obscure the logs, I think, because they get spit out randomly; they get spit out as both jobs are running, so it looks kind of weird on the screen to have both the 1.13 and 1.14 variants running at the same time.
A: Right, so we've got a manifest creation here, and we can see that we're creating the kube-cross-amd64 v1.14.3-1, and we're pushing those manifests, and we've pushed the Docker tags previously. And this is the tag that I was talking about: we've got the date, we've got the repo tag (so the repo was at v0.3.1 plus 27 commits, and the short SHA was 3fcf785), and then that CONFIG variable, go1.14. So this is...
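The tag just read off the logs can be assembled like this. It is a sketch: gcb-builder's exact format string is an assumption, but the components (date, git-describe output, config name) mirror the example in the log.

```shell
# Assemble a tag from the components seen in the build log.
DATE="20200518"
GIT_DESCRIBE="v0.3.1-27-g3fcf785"  # last tag + commits since + short SHA
CONFIG="go1.14"

TAG="${DATE}-${GIT_DESCRIBE}-${CONFIG}"
echo "${TAG}"
```

The middle component is what `git describe --tags` produces, which is why the tag encodes both the last release and the exact commit.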
A: You know, I'm looking for something that's in the 1.13 series, 1.13.9. So all of those images that were, you know, created on December 31st, 1969 and pushed very, very recently are available, and we can see those: two days ago, you know, two days ago, May 6th. So this is the one that was pushed two days ago, right, but the one that we care about is... can we just do 14? Does that work? Not well enough, alright. So the one that we care about, v1.14.3-1, pushed two days ago, right?
A: Super, okay, cool, I refreshed it right as it finished loading, okay, right. So, if you were to look at this manifest: for anyone who's not familiar with digests, digests are basically the immutable tag for a Docker image, or some Docker resource, or container image resource, something that fits the OCI spec.
A: And you can see these are a bit more tagged up, so you want to probably be consistent with these tags, so I have some fixes to put in for this. But we can see the v1.14.3-1, and it also pulled in that tag that gcb-builder passed it, which we saw in the logs earlier. And if we click into it, we can also see... we didn't have to click into it, but we have now, so we'll wait for this to load.
A: I have solved the problem before, but I think that it will also have to do with the Docker engine on GCB. Did I bother trying to solve it at the moment? No, but we should get those creation dates correct at some point. So if you want to file an issue in k/release, we can work on that; you can pick it up if you want. So, looking at this digest...
A: We have that one, the a7320 one, that we had before. So we know that this is the AMD64 image that maps to this manifest list. So that makes me happy, and now that we know that, now that we've verified that, we can do some more stuff. And so we're going to go on to the next step, and before we do: do we have any questions?
A: Okay, okay, so the next step is image promotion. So we've successfully minted a new image, a new kube-cross image, for some version; it's v1.14.3 in this case, and we've put that in staging. For it to be an official image, we need to promote that from staging over to production. So we're going to do that next, and we're going to look at this v1.14.3 as an example, and... nope.
A: Approvers are tagged on this, as well as the release engineering seats, so you'll probably have seen a bunch of these flying by. I like to try to tag y'all, or release engineering on the whole, on as many of these as possible, so sorry for flooding the inbox, but this is overall good to review afterwards.
A: So, simple: what we've done is, and I'll show you the expanded manifest list in a bit, we've added a new tag, the v1.14.3-1, and that's the kube-cross tag that we care about, and then we have this digest here, 0b8cca6a. And if we go back to that manifest list that we were talking about: what we did was we took this digest and, we're saying, we triggered the promotion for that. So essentially, what that does...
A: So, unfortunately, and something that was a little frustrating to me too: because these are essentially elements of the digest, a digest can have multiple tags, and we need to key on something that's immutable, so we key on the digest. And unfortunately, when we use some of these tools, the way the manifest gets spit out is sorted by digest.
A: So once we move over to the new infrastructure, or once we move over to the Kubernetes community infrastructure, we will not have access to push directly to production, and you could posit that we shouldn't have done that in the past anyway. So publishing to staging, that's fine, but pushing to production, not fine. So in between the staging and the release process...
A: ...a release manager will be promoting the images that get spit out into staging over to production, and then proceeding with the next steps. So, in order for that to happen, we want to make sure... because there are multiple images: there's the API server, controller-manager, scheduler, you know, kube-proxy, down the line, conformance, and those are available for multiple architectures. So imagine: right now we're only doing this with one digest.
A: Imagine we have to do that for multiple images across multiple architectures and make sure it's right; so I want automation to take care of that. A tool is being written right now that will essentially target a staging tag and a production tag and merge the manifests: it'll take an existing manifest that is already in the k8s.gcr.io repo, and it will splice in the changes from staging. So, if I specify a staging tag and I say this is going to be...
A: ...the images for this set are going to be the images for Kubernetes 1.20, it'll take those and plop them into the manifest, and then give me a YAML file that I can propose as a PR, just like this, but we're taking some of the human element out of that piece. So, ideally, the staging process will dump a YAML file in some GCS bucket, we pick it up, and we propose that as a promotion.
A: Okay, so we've hit a new repo, and a lot of the people on the call or watching later may not be familiar with what this repo does. So the k8s.io repo: it's the Kubernetes files for various kubernetes.io sites, and so it houses DNS zone files, it houses the aliases for kubernetes.io, and you can see some of the stuff here: configs for, like, cert-manager, for the things that we use.
A: You know, the various groups that are managed under the kubernetes.io domain, the community group, or the release managers group, or any of that stuff, live here as configs. So to take an example, we can see, right, we configure via YAML; these are the email addresses and stuff like that, so you can specify a group there. So that's one of the many things that this repo does.
A: The one relevant to this conversation is k8s.gcr.io. So k8s.gcr.io is split into two subdirectories, images and manifests, and together they're essentially a manifest, and the manifest in question is a config for the image promoter. So if we look at the manifest for build-image, the promoter manifest, we'll see a few things here: first, a set of registries, and the first is the staging registry.
A: So we're saying that the staging registry is the source for promotion, and then subsequently we have us.gcr.io/k8s-artifacts-prod/build-image, and then the EU and Asia flavors of the same endpoint. You can see that there's a service account specified: basically, this is the service account that we would use to promote the image from the staging GCP project over to the prod GCP project, and we're telling it where we want it to go.
A: So the reason that these are split into two separate directories is that we don't want changes to this; we don't want someone arbitrarily being able to go and push a PR and change where staging images get promoted to, unless it's someone that we want doing that. So, to kind of allay any confusion and, you know, prevent accidental PRs that would do that...
A: ...they were split into two separate directories, and you can see that there's an OWNERS specified here, and, oh, it's just a label, which means that the owners for this are the top-level owners for k8s.io. If we want to see those: right here, so Bart, Christoph, dims, Mike, Nikita, Aaron, and Tamaki.
A: And then, if we take a look at the OWNERS aliases real quick: I've added a few build-image approvers and reviewers, as well as release engineering approvers and reviewers. Now, these are the set of subproject owners; additionally, for the approver side, we have the patch release team in here, and for the reviewer side we have the patch release team as well as the branch managers, and that will come up in a bit. Any questions? Going too fast, too slow?
A: Alright, so now we go into images, and we've kind of seen the images directory already for the build image. So first, let's look at the OWNERS: we've got the build-image approvers and reviewers here, and we target these with sig/release and area/release-engineering. And then images, and that was that images file that I showed you before: we've got kube-cross in here, we've got debian-base and its various architectures.
A: And then, finally, go-runner. go-runner is kind of a distroless-plus-plus, I think; I was talking about that at the sig-release meeting this week. It's a distroless-plus-plus image written by dims, intended to kind of replace some of the Debian-based images, so that is happening slowly but surely. But yeah, for us, we care about kube-cross, so let's go to one more directory.
A: So if we look at this built with Bazel, there is a target somewhere, cip, it's one of these, yeah. So this target is basically passing a few flags to cip, which is the container image promoter. These are its flags, and we won't go into that; I have limited knowledge about how all of that works, but I know enough to do this part, alright.
A: Going back to that PR that we did: we're going to propose it as the... the promoter, the manifest list, what have you, to promote. So in this case we wanted to take the kube-cross manifest list. Not the... we don't care about the architecture one, because it's included in the manifest list, and the kube-cross images do properly publish the manifest. So we're only taking this 0b8cca6a, blah blah blah, and v1.14.3-1. So that's what this PR was.
A: Yeah, it depends. It depends on what you're promoting, so for kube-cross: if we're only doing one Go version, then it's just the one that we're looking to promote. Let's say I've changed both of the variants, and that might be the case where I want the full dump, and it changes this file, and now I'm looking for... like, it's messy, you know, alright, but I'm looking for the kube-cross stuff, alright, and then you can see all of these things are included.
A: So, going back to promoting staging images, or promoting things as part of the staging-to-release process for the Kubernetes release process: essentially that manifest edit command, that kind of munge, would do what I just did, take that snapshot and then merge the tags that I cared about (so that 0b8... digest, wherever it is) into this manifest, and it would spit out a YAML that I could just propose as a PR. Alright.
A: Okay, my computer's trying to kill me today, but anyhow: it says "begin promotion" at the bottom, and you'll see that it has targeted the v1.14.3-1 image for multiple endpoints: us.gcr.io, asia.gcr.io, and eu.gcr.io. So, assuming this job is successful, which it was, that image is now promoted to production and is available at us.gcr.io/k8s-artifacts-prod/build-image/kube-cross at that version. So I'm going to close these up. I dropped...
A: Let's see if it is merged. Okay, merged, awesome, cool. So this is where it starts to get interesting, because we're playing in kubernetes/kubernetes and we're at the mercy of multiple tests in multiple different styles. What gets interesting about this is that some of the tests are dependent on the Go version. Isn't that crazy?
A: So when you're changing the Go version, you have to be aware of that, and there are certain things that, because the Go version has changed, we need to provide a matching supported version of that thing. So one of those things, and that's why I opened up this PR, is the rules_go Bazel file. rules_go was moved to repo-infra, so the reason I'm not using the 1.13.9 PR as an exemplar is because of this. Actually, we can look at it.
A: Let's look at it. So if you look at rules_go, it basically defines different rules for using Go and Bazel together. So you can see these say: hey, you know, Go 1.14.3 and 1.13.11 are now supported, and here's the hunk of Bazel that you copy into your WORKSPACE to make that work for you. So in this hunk of Bazel...
A: ...you'll see that this is an http_archive; it defines an http_archive for Bazel with the rules_go rule set. It has a SHA for this version, this 0.22.5 version, as well as URL endpoints to pull it from; you know, it's providing those just on the off-chance one is down. And then there are a few functions that you pull in, cool, good: go_rules_dependencies and then go_register_toolchains, and then Bazel does stuff.
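A WORKSPACE hunk with the shape just described might look like the following. `http_archive`, `go_rules_dependencies`, and `go_register_toolchains` are real rules_go/Bazel names, but the sha256 here is a placeholder and the mirror URLs are illustrative; treat the whole snippet as a sketch rather than the exact hunk from the PR.

```shell
# Write a sketch of the rules_go WORKSPACE hunk and check it lists mirrors.
cat > /tmp/WORKSPACE.snippet <<'EOF'
# Hypothetical sketch; the sha256 is a placeholder, not a real checksum.
http_archive(
    name = "io_bazel_rules_go",
    sha256 = "0000000000000000000000000000000000000000000000000000000000000000",
    urls = [
        "https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.22.5/rules_go-v0.22.5.tar.gz",
        "https://github.com/bazelbuild/rules_go/releases/download/v0.22.5/rules_go-v0.22.5.tar.gz",
    ],
)

load("@io_bazel_rules_go//go:deps.bzl", "go_rules_dependencies", "go_register_toolchains")

go_rules_dependencies()

go_register_toolchains(go_version = "1.13.11")
EOF

grep -c 'https://' /tmp/WORKSPACE.snippet
```

The `go_version` argument to `go_register_toolchains` is the hook that ties the Bazel build to the same Go release the rest of the bump targets.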
A: ...up to a certain point, right. So I believe that recent bump, the bump that just happened, supports 1.13 and 1.14; I think they dropped Go 1.12 support in that one. So, things to be aware of: we're kind of in this sliding mode where, you know, Kubernetes releases aren't necessarily happening at the same time that Go releases are, and then Go's support cycle is slightly different from Kubernetes'.
A: So we have to be aware of things like: the 1.15 branch of Kubernetes, for example, was on Go 1.12.17, which is now an out-of-support version of Go. So this goes into, like, the annual support conversations, where, you know, if you're talking about enacting support for 1.15, then you have to consider the fact that 1.15 is using an out-of-support Go version. So do you update the version of Go?
A: Do you carry patches for the 1.12 version of Go? So it's important that we keep these as close as possible, in line with what's being released in Go, so that we don't have to worry about those kinds of problems. But they come up nonetheless; they will come up every release cycle. Every release cycle, towards the end of the cycle, someone goes, hey...
A: ...should we update Go, or there's a new version of Go out, and it's a few days before we're set to release. And for those of you who have been on the release team, all of you have been on the release team, you've seen that happen before, and it all depends on when the next Go comes out. I believe they release on about a monthly cadence; I'm not sure what their minor cadence is. So, anyhow.
A: So there are a few things going on in this PR, and we'll look at the Go stuff first... well, we'll look at the Bazel stuff first. So, as I was saying, this is kind of a hack that ixdy, Jeff Grafton, suggested, where we actually went into rules_go and pulled out the explicit SHAs for each of these downloads. The reason for that was, I think, rules_go had not been updated to the point of supporting Go 1.13.9 just yet, or we hadn't changed repo-infra.
A: So this is part of the reason that we've moved the Bazel updates over to repo-infra: so it's easier to manage those out of band. So that's the go_register_toolchains thing that I was talking about, and then we've got some obvious-ish bumps to versions. So I'm going to go into more detail; is everyone good on time?
A: Alright, so now we're on master, and we have properly reset this, and we're going to check out a branch; let's call it go-1.13. I'm not actually going to push this; Marky and Veronica can work on this. But let's do that, let's create this branch. We're opening the repo, and it's open to the right file already. So we have this really cool file called build/dependencies.yaml, and build/dependencies.yaml stores some interesting things. I'm going to make that a little smaller...
A: ...so it's easier to see more of it. It stores some interesting things. First, it stores a catalog... oh, hey Kat, how's it going. So, the dependencies: we basically sort them by name, version, and refPaths, and refPaths is a set of paths and a regex match for each path. So what this allows us to do, when we're doing the dependency bumps, is key into the files that we need to change, and hopefully it makes sure that we don't miss things.
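An entry in that file, with the name/version/refPaths shape just described, might look like the following. This is a hedged sketch: the paths and match regexes are illustrative, not copied from the real build/dependencies.yaml.

```shell
# Write a hypothetical dependencies.yaml entry and count its refPaths.
cat > /tmp/dependencies.yaml <<'EOF'
# Hypothetical sketch; paths and match regexes are illustrative.
dependencies:
  - name: "golang"
    version: 1.13.9
    refPaths:
      - path: build/build-image/cross/VERSION
        match: "\\d+\\.\\d+\\.\\d+"
      - path: test/images/Makefile
        match: "GOLANG_VERSION=\\d+\\.\\d+\\.\\d+"
EOF

grep -c 'path:' /tmp/dependencies.yaml
```

The verify tooling walks each refPath, applies the regex, and fails if the file's version no longer agrees with the pinned `version`, which is exactly the empty-then-failing behavior shown next.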
A: There's something, I mean, across a few of the tools, you know, things to be desired in terms of UX, but this works nicely for now. So we saw that came up empty, and that's good. Let's make it fail: now we're going to bump this version. So we can see that we're actually bumping this Go version now, and we should see a set of failures. Right, so this time it said: hey, the build-image cross version didn't match the version that you gave me, and the test images Makefile didn't match.
A: We're going to update it to that new kube-cross version, and when we run this again, it's not going to tell you all of the stuff that's missing, because it basically fails out on the first failure. So let's go resolve that first failure first. So I'm going to do a silly search for, you know, 1\.13\.9, and it's telling me where to go, and I'm going to replace this with 1.13.11. And I want to touch the repo-infra configure part, which needs to be added to the WORKSPACE.
A: I want to touch this... I'll come back to this one; this is the new kube-cross version, it has a separate entry for it, so we're going to come back to that. But also the Makefile for the test images, we're going to bump that one, and then this one for the... this is the sample API server, yeah, the sample-apiserver; we're going to come back to this one, because it's the kube-cross version. So we've satisfied...
A: That is dash-5, I believe, and I want to replace it with dash-1: v1.13.11-1. So I pop in the change, and now, if I run that again, it comes back empty. That alone tells me that I have hopefully bumped everything that I need to bump for a Go version update to be successful in kubernetes/kubernetes. If I do a diff... well, if I do a git diff with name status...
A: ...it's going to show me the files that were modified, and things within build and test are pretty much what we care about. So the cross version, the dependencies.yaml file itself, the WORKSPACE file, the Bazel file, the Makefile for the test image, and then the sample API server Dockerfile. And if I look at that diff again, we can see v1.13.9-5 to v1.13.11-1, and so on and so forth.
A: Right, and that is mostly it, great. I mean, the next thing you do is you propose this PR, you run through the gauntlet of tests, you get it to merge, and then from there, what are we going to do? We're going to cherry-pick this, alright; we're going to cherry-pick this back to the branches that it needs to be cherry-picked back to. So, in the patch scenario, we want to make sure that the patch versions stay up to date for each of the versions.
A: So, for the minor scenario, we want to make sure that the patches are up to date for each branch on the same minor version. So if release-1.18, 1.17, and 1.16 are using Go 1.13, we want to update those as well. So in this scenario, y'all will be cherry-picking back all the way to 1.16. I feel like we're out of time, but there are still a few more places to cover bumps; this one we can probably shift up, and we can talk about it later.
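The cherry-pick fan-out described above can be sketched as a loop over the target branches. kubernetes/kubernetes does ship a `hack/cherry_pick_pull.sh` helper that takes a target branch and a merged PR number; the PR number 12345 below is a made-up placeholder, and the invocations are left commented rather than run.

```shell
# Sketch: fan a merged master PR out to the release branches.
BRANCHES="release-1.18 release-1.17 release-1.16"

for BRANCH in ${BRANCHES}; do
  # hack/cherry_pick_pull.sh "upstream/${BRANCH}" 12345  # 12345 is a placeholder
  echo "would cherry-pick onto ${BRANCH}"
done
```

Each invocation opens a separate cherry-pick PR against one release branch, so the bump lands on every in-support branch sharing the same Go minor version.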
A: The k8s-cloud-builder version is basically... the k8s-cloud-builder is the image that we use to do staging and release for Kubernetes. So we essentially want to make sure that its patch version of Go matches the kube-cross image. So to do that, we basically do a FROM kube-cross; so the kube-cross image that we've just published and the new k8s-cloud-builder image, their versions need to match, they need to stay in sync.
A: So you would do these two bumps together, but we can have a second session to talk about wrapping some of this stuff up, and that will probably be, like... we can do this maybe after, Marky, after you send me some notes. Maybe I put up a draft to capture this first part of it, and then we can go into the next pieces. Sounds good.
C: I was going to say yes, and, though, I think, ideally, you or me... I'm going to have to go through it, stumble like hell, and that's where I'll start to really get the muscle memory, because I want to figure it out myself. So I'm just going to start doing my part, lining it up. I know I won't be able to do things until certain steps are done before, yeah.
A: I want to... I don't want to... I want to better pair the tasks, right. So, like, if someone does the initial bump, then maybe the other person does all the cherry-picks, and if you're handling the bump, you should probably also handle the promotion PR. That...