From YouTube: RelMgr: Golang update walkthrough (part two) - 20200526
A: Hello, hello, everyone. This is a special edition of the release engineering meetings. We are going to be walking through how to do a Golang update in Kubernetes, primarily kubernetes/kubernetes, and primarily for patch versions of Go, but I think the information is generally useful for image building and whatnot.

A: After that, we walked through what it looks like to bump kubernetes/kubernetes to use the new version of kube-cross. I think we explained it decently, but we did speed through it towards the end of the call. So if you have questions, let me know, and I can start off with that before jumping into the next stuff.
A: All right, so the next piece is k8s-cloud-builder. k8s-cloud-builder is an image that we use to build Kubernetes within the cloud. It's essentially the image that we use on the release management team, or the Release Managers group: the patch release team, branch managers, release manager associates. Oh no, can someone get Rob Kielty the password for the meeting?
A
We
so
we
use,
so
this
image
is
kind
of
like
a
kind
of
a
layer
on
top
of
cube
cross
that
provides
us
the
utilities
that
we
need
to
to
actually
build
kubernetes
and
once
we
build
it
to
put
it
places
right,
so
whether
that
place
is
GCS
are
pushing
images
to
to
Google
cloud
registry.
We
have
the
tools
within
Kate's
cloud
builder
to
do
that.
A: So let's take a look at k8s-cloud-builder. Again, to that point, you can see that we're building on top of the kube-cross version (we'll look at the variants files we were talking about in the previous meeting in a bit), we're adding some additional packages, and we are purging Python 2 and making sure that Python 3 is installed. That has to do with this wacky dependency chain that you can see here: k8s-cloud-builder imports kube-cross, kube-cross comes from the golang image, which comes from buildpack-deps:buster-scm, and then finally debian:buster. Through that chain we've kind of picked up some cruft. So we want to make sure that the Python deps are at Python 3; from there, crcmod and yq are installed; and then the Google Cloud SDK, right.
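The Dockerfile steps being described might look roughly like the following sketch (illustrative only, not the actual k8s-cloud-builder Dockerfile; the base image reference and package names are assumptions):

```dockerfile
# Illustrative sketch: build on a pinned kube-cross image, swap Python 2
# for Python 3, and install the utilities mentioned above.
ARG KUBE_CROSS_VERSION
FROM us.gcr.io/k8s-artifacts-prod/build-image/kube-cross:${KUBE_CROSS_VERSION}

RUN apt-get update \
    && apt-get purge -y python2.7 \
    && apt-get install -y --no-install-recommends python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Deps picked up through the golang/buildpack-deps chain should target
# Python 3 from here on.
RUN pip3 install crcmod yq
```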
A
So
the
Google
Cloud
SDK
includes
the
G
cloud
utility
as
well
as
gsutil,
right
and
and
for
us
G
cloud
and
gsutil
are
kind
of
our
bread
and
butter.
We
use
that
for
for
running
the
releases
within
within
Google
Cloud
build.
We
use
this
for
making
sure
that
the
right
users
are
running
the
builds.
We
also
use
it
for
copying.
You
know,
moving
around
buckets,
pulling
polling
data
via
gsutil,
so
definitely
required,
but
not
necessarily
required
for
cube
graphs
right.
So
this
is
more
cube.
A
Cross
is
something
that
should
be
able
to
run
without
user
intervention
and
somewhat
the
same
for
Kate's
cloud
builder.
But
there
are
things
that
are
not
related
to
cross
compilation
of
kubernetes
that
are
needed
for
the
release
managers
to
get
releases
out
to
you
right
so
G
cloud
and
GS
util.
Here
the
docker
CLI.
We
need
that
for
image
pushes
and
builds
and
finally
scope.
You
right--so
scope,
yo.
A: So skopeo: we use this just to verify that the manifests for the images have landed on the remote, whatever that remote is; we do a check. If you want to see what that check is, let's go look at it. It uses skopeo inspect. This is lovely, lovely bash, as we do, and towards the bottom we're basically capturing the raw manifest of the image and version and inspecting it. If that comes back with an error, then we know that we can't find the manifest list remotely. We're doing this using skopeo because skopeo checks remote locations, as opposed to docker, which might return a successful command if the image is available locally.
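The check being described can be sketched roughly like this (a minimal sketch, not the actual script from kubernetes/release; the image reference is illustrative, and the SKOPEO variable is overridable so the sketch can be exercised where skopeo is not installed):

```shell
#!/bin/sh
# Minimal sketch of the remote-manifest check. SKOPEO is overridable so
# the logic can be exercised (or stubbed) where skopeo is not installed.
SKOPEO="${SKOPEO:-skopeo}"

manifest_exists() {
  # skopeo inspect --raw talks to the remote registry, so a failure means
  # the manifest list has not landed remotely -- unlike docker, which can
  # succeed against a locally cached image.
  "$SKOPEO" inspect --raw "docker://$1" >/dev/null 2>&1
}

IMAGE="gcr.io/k8s-staging-releng/k8s-cloud-builder:latest"  # illustrative
if manifest_exists "$IMAGE"; then
  echo "manifest found on remote for $IMAGE"
else
  echo "manifest list not found on remote for $IMAGE" >&2
fi
```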
B: I'm going to clean them up entirely. Well, not the notes specifically, but the notes will give me data to produce a doc on how to do all this.

A: Beautiful. Yeah, now you're good, okay. So next up: how do we actually bump this version, right? It's very similar to what we saw happen with the kube-cross variants file, where we have these separate configurations that are essentially iterated over via the GCB builder. A few things that we're specifying there will eventually either be used in the Google Cloud Build run or used as part of the build arguments for the Dockerfile itself. So, going over to the k8s-cloud-builder one, let's look at the variants now. Pretty trimmed down, right? We have two variants: the cross-1.14 variant and the cross-1.13 variant.
A: So we're basically saying these are images that we've produced using some version of kube-cross, for Go 1.14, right, and the versions. Basically, we have a config variable set, we have a kube-cross version variable set, and then finally we have the skopeo version variable set, and you can see where these are utilized. The kube-cross version is utilized here, for knowing what image to import, which means we don't have to do any sed replacing or anything like that. It will run as two separate jobs if there are multiple variants specified, which there are. And then a little lower you'll see that we've set a build argument for the skopeo version, which will be passed in as a result of the build arguments that are here.
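The variants file being described has roughly this shape (an illustrative sketch, not a verbatim copy; the variant names, variable names, and skopeo version shown are assumptions):

```yaml
# Illustrative shape only; variant names, versions, and variable names
# here are assumptions, not a verbatim copy of the real file.
variants:
  cross-1.14:
    CONFIG: cross-1.14
    KUBE_CROSS_VERSION: v1.14.3-1
    SKOPEO_VERSION: v0.1.41
  cross-1.13:
    CONFIG: cross-1.13
    KUBE_CROSS_VERSION: v1.13.9-5
    SKOPEO_VERSION: v0.1.41
```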
A: Because we're not doing anything too crazy on the k8s-cloud-builder side, there is no Makefile; we're just doing this directly in Cloud Build. So we're producing an image with the tag k8s-cloud-builder:&lt;git tag&gt;. Remember, the git tag that's going to be passed in is going to be "v" plus the date. We're also tagging this as latest-&lt;whatever config version we've set&gt;, so this might be latest-cross-1.14 or latest-cross-1.13, and then finally we're setting a kube-cross version tag. This one is super important; this is the one that we key on, and I'll show you why we need to key on it in a second. So we're saying: build this, and we're doing a container structure test, which kind of just looks at this test.yaml and ensures a few things happen, right.
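The test.yaml mentioned here is a container-structure-test config; a minimal sketch might look like this (the specific commands checked are assumptions about what the real file verifies):

```yaml
# Illustrative container-structure-test config; the commands checked here
# are assumptions about what the real test.yaml verifies.
schemaVersion: "2.0.0"
commandTests:
  - name: "gcloud is installed"
    command: "gcloud"
    args: ["version"]
  - name: "gsutil is installed"
    command: "gsutil"
    args: ["version"]
  - name: "python3 is present"
    command: "python3"
    args: ["--version"]
```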
A: So the next bit of that file is the substitutions. These substitutions are very similar to the substitutions we saw on the kube-cross image: the git tag and pull base ref, which are both utilized by the GCB builder tool, and the config. Again, the underscores mean that these are user-supplied substitutions, or variables. We essentially use the GCB builder to wrap a "gcloud builds submit" call with a set of user-supplied substitutions, and also to spawn out different Cloud Build runs based on what's specified within the variants. So unless you specify a specific variant, it will run all of the variants in this file; it'll spawn out two separate Cloud Build jobs to do that. And then, finally, we are going to check that the images actually exist where we want them to; again, that's the git tag, latest-&lt;config&gt;, and then the kube-cross version. So let me stop there and ask if there are any questions.
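Put together, the call the GCB builder wraps looks something like this sketch (assembled but not submitted anywhere; the substitution values are illustrative):

```shell
#!/bin/sh
# Sketch: assemble (but do not submit) the kind of command the GCB
# builder wraps; substitution names and values are illustrative.
GIT_TAG="v20200526-abcdef0"
PULL_BASE_REF="master"
CONFIG="cross-1.14"

CMD="gcloud builds submit --config cloudbuild.yaml \
--substitutions _GIT_TAG=${GIT_TAG},_PULL_BASE_REF=${PULL_BASE_REF},_CONFIG=${CONFIG} ."
echo "$CMD"
```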
B: None for me.

A: Okay, cool. All right, so the reason why that kube-cross version is important: we want to make sure that we're going to be using an image that matches the image, or the version, that we're using for kube-cross. Say there's an instance where we bumped the version of Go in k8s-cloud-builder but did not bump it in kube-cross, right? Potentially, a release build runs into an issue. Maybe we're able to cross-compile, but we're not able to do end-to-end tests or something like that. There are reasons we don't want variations in the base image that we use for both of these images. That said, if we look at our staging cloud build (this is actually what the release manager submits for staging a Kubernetes build), we've got a few different anago commands happening here.
A: We're basically pointing to a project where we hold our crypto keys, and we're saying: pull out that GitHub token, because we're going to need it later. We're cloning the release repo at a certain org and repo. This allows release managers to test their PRs against kubernetes/release without necessarily needing to have them merged. So say you're changing something vital within kubernetes/release, maybe it's an image type, maybe it's a new tool that you're working on, and you want to be able to see if the staging jobs will still succeed with your PR. This allows you to basically specify these environment variables and run a krel gcbmgr run, cloning that branch, or cloning your personal org and repo for kubernetes/release. From there, you'll note we're using this kube-cross version, and we're compiling the release tools. And I know this goes a little outside of the Go update directly, but I think it's important to understand how the entire puzzle fits together, so I'm going to spend a little time on this. compile-release-tools is a simple bash script that by default will compile a few release tools: blocking-testgrid-tests, gh2gcs, and so on. blocking-testgrid-tests is exactly what the name implies.
A: It's a tool that checks for tests that are on some blocking or informing board for SIG Release. Those are the boards that we use to determine if a branch is fit enough to be released. gh2gcs is a new tool; I talked about it a little earlier on the release engineering call. It's responsible for downloading release assets from a GitHub release tag and then uploading those assets to a GCS bucket, and the previous release engineering call has a demo of it.
A: Check that out if you want to play with the tool yourself. kubepkg is responsible for building debs and RPMs for the project; this one is kind of in an alpha state, and we're hoping to bring it up closer to beta in this release cycle. And finally, krel. krel is our bread and butter; krel is the Kubernetes release toolbox. It is kind of a central point for the kubernetes/release repo, where we've been refactoring the old bash scripts that exist in the repo into subcommands of krel.
A: All right, so that's done, and I can just check out usage for krel if I do help. It'll do a few things. anago is the current release engineering script that we use to build and release Kubernetes. announce is for announcing Kubernetes releases; those announcement emails that you get, we'll be moving that functionality into krel announce. The changelog (the CHANGELOG for 1.19, 1.18, what have you) is generated using the changelog subcommand. ff is our fast-forward tool.
A: push is a rewrite of push-build, which is in the root of our repo, and it does what it implies: it, you know, kind of pushes the release artifacts to GCS. release-notes is similar to changelog; they use a lot of the bits of the notes libraries that we have in the repo to generate release notes in various forms. And finally, version. Again, that's what it implies: it'll get a version out of that.
A: So after compiling the release tools, we are then stepping into anago again, this time using the k8s-cloud-builder image and the kube-cross version that we've specified. We don't need to go into the details of anago (I think that's its own series of multiple calls), but essentially anago is doing a pre-build for the staging.
A: It's saying yes to essentially skip all of the user prompts, and it's saying that it's running in GCB. Running in GCB is our default mechanism for running anago right now; we do not run anago locally, and we should not run anago locally. Then, finally, there's some directory specified, and then the type. This is the type of release, whether it be an alpha, beta, RC, or official release. And then, stepping through again, you can see we're using these...
A: Yeah, and, you know, again, pretty common Cloud Build YAML stuff. We've got some tags that we specify for usage in querying GCP later if we need to. The machine type that we're using is the max machine type, so that we can handle all of that Kubernetes build goodness. And then, finally, we have an optional tag that I'm pretty sure is not being used currently. But let's go into the kube-cross version again, right. So, the kube-cross version: now, what happened?
A: The way we retrieve it is what matters, right. The way that we retrieve the kube-cross version is essentially: we're getting a set of branches, and the set of branches that's passed into the function depends on what branch you're trying to run a staging job on. If you're trying to run a staging job on master, it will look for something in master. If you're trying to run the staging job for a release branch, it will try to find the release branch and check the version there.
A: If it's a release branch that does not exist yet, say you're creating the branch via the staging and release jobs, then it will fall back to the master version, and the master version will be included in the slice of branches. And, yeah, okay, here, right here: o.Branch, which is the branch that the user passes in for the gcbmgr run, and then, finally, the git master branch.
A: So it will fall back to master if it is unable to find that o.Branch. The reason this is important is that it's going to look for this version within kubernetes/kubernetes. That's why we need to have those two versions in sync: we're essentially dependent on this version that's living in whatever branch of kubernetes/kubernetes we're staging or releasing.
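The fallback logic described above can be sketched in plain shell (a minimal sketch of the behavior, not the actual Go function from kubernetes/release):

```shell
#!/bin/sh
# Minimal sketch of the fallback: prefer the requested branch when it is
# in the known-branch list, otherwise fall back to master, where the
# kube-cross version always exists.
resolve_branch() {
  want="$1"
  shift
  for b in "$@"; do
    if [ "$b" = "$want" ]; then
      echo "$want"
      return 0
    fi
  done
  echo "master"
}

# A branch that the staging job has not created yet falls back to master.
resolve_branch "release-1.19" "master" "release-1.18"  # prints "master"
```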
A: So if you saw some PRs that were kind of flying around over the last few weeks, making sure that these versions were in sync and then cherry-picking that stuff back: this kind of goes back to what you saw at the end of the previous call, which was bumping the kube-cross version, or bumping this file, and updating the dependency files within kubernetes/kubernetes. So that hopefully ties it all together. Any questions?
A
When
we're
changing
the
variance
for
the
the
kate's
cloud
builder,
we
want
to
ensure
that's
I
mean
it's
it's
it's
fairly
simple
from
the
the
previous
call.
You
saw
that
we
went
to
the
variance
file
and
one
we
need
to
make
sure
that
this
is
a
published
version,
a
version
that
has
been
promoted
of
cube
graphs
right.
We
should
only
be
using
the
prod
images
there
and
we
point
this
to
when
we
point
this
to
the
current
version,
or
we
point
us
to
the
active
version
for
whatever
go
version.
A
The
active
keep
cross
version
that
maps
to
whatever
go
version
that
we're
we're
building
for
right.
So
in
this
case
for
the
cross,
114
variant,
we're
using
114
3-1
and
for
the
cross
113
we're
using
113
9-5
right.
So
the
update
that
Marquis
or
Veronica
would
be
making
is
just
changing
that
and
changing
this
right
and
pushing
that
PR
once
that
PR
merges
we're
gonna
see
this
do
stuff
in
test
grid.
A: The way we handle output is a little different and can probably use some improvement, but we don't output to the logs directly. We essentially reference where those jobs are running in the output of "gcloud builds submit". So that's one of them, and, assuming you have access to read these logs, you'll be able to go over here and check them out.
A: All right, let me make this just a little bigger. These are the jobs for the cross-1.13 variant and the cross-1.14 variant. You can see here that it's outputting a list of the images that were produced as a result, or the tags that were produced for an image as a result. And if you want to see the build log really quick: it's doing all the stuff we said it should do, installing gcloud, doing that container structure test, and then doing some pushes.
A: Then we can look at k8s-cloud-builder and see that we've got images that were tagged: this latest-cross-1.14; the cross version, v1.14.3-1; and then that extended kind of tag, which is the date, the git tag, the commits ahead of the last tag, and then "g" plus the short SHA; and the config name, which is cross-1.14. So we provide multiple tags there to give you different ways of querying these images. The "latest" pattern kind of comes from kubekins, and I'll talk about kubekins a little bit.
A: Based on... yeah, there we go, right. So it'll spit out this PR that says: hey, we've done some updates, and we've shifted from this commit to this commit; these were the images that were affected. And, you know, for here, these dates: we've shifted from that commit to that commit, and these are the images that are affected.
A: All right, so this should look familiar. We borrowed this, the format for a kube-cross image and the k8s-cloud-builder one, from kubekins. You'll see that there is a config, there's a variant (essentially a separate variant block), and a config name. This "master" config name is being produced as a result of that GCB builder run, and you're seeing it reflected in the image name that we're referencing.
A: The reason this is important is that eventually I would like us to move all of Kubernetes to a place where we can do auto-bumping for the images, the same way that we do in test-infra. So if your images happen to live in test-infra, then you can kind of take advantage of this today; but if your images are in a different repo, you can't, right? So, any questions on that?
A: All right, so back to the branch bumps. We've done the kube-cross bump, we've talked about the kube-cross image promotion, and we've talked about the k8s-cloud-builder bump just now. Funny that we didn't talk about the k8s-cloud-builder promotion, though, right? So if we look at this cloud build file... the wrong cloud build file, whoops.
A: Or, no, I was in the right one, sorry. You'll see that it references k8s-staging-releng, which is our staging project for release engineering. It references the image in this project, as opposed to a production version; the production version would be k8s-artifacts-prod/releng/k8s-cloud-builder, right.
A: Currently we are not promoting the k8s-cloud-builder image; currently we're not promoting any of our images. And when I say our images, I mean images that live as part of the k8s-staging-releng project. The reason for this is not much outside of the fact that we iterate pretty quickly on these images, and we iterate pretty quickly on the content of the repo, and what an image promotion essentially does is...
A: It adds an additional hop in being able to test and debug certain things. So, so far we have not done promotions. I think I would like to move us to a place where we're doing promotions for those images as well. You can see that the ones we have in the repo right now are k8s-cloud-builder, kubepkg and kubepkg-rpm, and then the releng CI base image. That one does not need to be promoted.
A: It's kind of one that we only use in CI, and, if you note from the test-infra images, they are also using kind of a staging repo for their images. So they're not promoting theirs either, as far as I know. Maybe that will change in the future. We have a lot smaller of a surface area to cover with our images.
A: So maybe it's possible for us to do that sooner rather than later, but I want to make sure that all of the images are adhering to this GCB builder variants pattern before we move forward, and I think we're just about there; most of the images have been switched over. You'll also note that if we were to go to the staging build-image project, there are a lot more images here. These are the ones that we care about.
A: Really: debian-base; our kube-cross builds land here; the pause image lands here. Yeah, so the build-image project kind of stores all of the images that we use to build something in Kubernetes, right, to build something related specifically to kubernetes/kubernetes.
A
So
that's
again
that
cube
cross
an
image
and
the
cube
build
image
that
are
built
as
a
result
of
that
in
NCI
and
then
the
debian
base
image
is
and
the
go
runner
are
images
that
we
use
to
base
our
core
production
images
off
of
right
so
like
if
we
were
to
go
over
to
the
base
image
exception
list,
we'll
see
that
we've
got
some
release.
Images
here,
debian
iptables
is
based
on
based
on
debian
base.
A
It
adds
some
utilities
to
allow
us
to
to
do
some
standard
out
standard
error,
redirects
right
so
James
is
talking
about
that
a
little
earlier
in
our
release,
engineering
meeting
and
then
we've
got
some
non
released
images
that
we
don't
necessarily
need
to
talk
about
here.
But
if
you
want
to
check
this
out,
I
will
pop
that
in
the
chat
switch
is
eluding
me
now
you
go
and
mark.
If
you
can
transpose
some
of
these
links
into
the
doc
as
well.
That'd
be
helpful.
You.
B
A
A
A: Right, and if we look at this PR, it's kind of that version bump that we were talking about. We explicitly pulled kube-cross out of kubernetes/kubernetes and put it in kubernetes/release. This allows us a little nicer development cycle for building kube-cross images: you can see that the tests for kubernetes/release run an order of magnitude faster, and there are fewer of them than the ones in kubernetes/kubernetes.
A
So,
while
we're,
while
we're
waiting
to
change
a
variant
or
something
and
rebuild
an
image,
we
can
do
that
fairly
quickly
in
kubernetes
release
and
then
from
there.
We
can
do
all
of
the
end-to-end
testing
for
that
new
image
within
kubernetes
kubernetes
right,
so
those
PR
has
become
a
lot
smaller
because
they're,
essentially
just
it's
kind
of
like
a
version
flip
right-
and
this
is
this-
one-
is
a
little
bit
more
complex.
A: This one is... yeah, okay, so this one's a little cleaner, at least at the beginning of it. Essentially, it's that version change. This is a file that is curled within that get-kube-cross-version function for gcbmgr, and it's also used in the build-image Dockerfile that's at the root here, which uses that kube-cross version. We're changing the dependency versions, and we're making sure that any new regexes are added to this dependencies.yaml, so that the paths and the regexes match if any new files are added that we depend on for that dependency specifically. So for golang there are a few places you need to change.
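The dependencies.yaml entries being described look roughly like this sketch (the paths and regexes here are assumptions, not the real file):

```yaml
# Illustrative shape only; the paths and regexes are assumptions about
# the kind of entries the real dependencies file carries.
dependencies:
  - name: "golang"
    version: 1.14.3
    refPaths:
      - path: images/build/cross/Dockerfile
        match: GO_VERSION=\d+\.\d+(\.\d+)?
      - path: images/build/cross/VERSION
        match: v\d+\.\d+\.\d+-\d+
```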
A: That version. Then there's also this kube-cross dependents list, where this first one is: okay, we're changing golang and we're changing kube-cross. We're doing them essentially in the same PR, and they have some overlapping places where we're going to look for those version numbers; see, same version here and here. And then, within test images, that Dockerfile: it's going to make sure that we're pointing to the newer version of kube-cross. And then repo-infra. So this... maybe I should touch on this soon, yeah.
A: So, repo-infra: essentially, repo-infra has taken over the management of rules_go, so people who are using repo-infra can take advantage of updated rules_go. And that comes up, actually, Veronica, Marky: this comes up for the Go 1.13.11 ones. So if we look at that (that's awesome), if we look at this bump, occasionally this will need to be added as a step.
A: We need to make sure that we track the repo-infra version as well. So in this PR you can see that the dependencies.yaml was updated to include repo-infra as a dependency. You can see that we updated the SHA for it, the strip_prefix, and the archive URL, to reference the new version. And then, if we want to see what happened in that new version...
A: The TL;DR of all of that is: if we're going to a version of Go that the rules currently in repo-infra don't support, we need to do a repo-infra bump first. One, we need to do the bump within repo-infra, and then we need to update kubernetes/kubernetes to use the new version of repo-infra. I'm trying to take care of that in the background, so y'all don't have to go through it, and it's merged in master.
A: So we've done this a few times now; we're getting familiar with what it looks like to bump a variants file. This process was a lot more complex before; the variants file is meant to simplify it quite a bit by parameterizing the build arguments within Dockerfiles. So, the kubekins-e2e image. One: let's talk about the variant bump really quickly.
A
You'll
go
to
this
variance
file.
You'll
update
the
master
one
first
right:
the
master
go
version
should
match
so
from
here,
depending
on
the
state
of
go
in
various
release.
Branches
will
depend
on
how
you
handle
this
update
right.
So
if
you
are
updating
so
first
you're
gonna
update
master
right,
whatever
merged,
whatever
merge
to
master
is
what
you
want.
This
version
to
be
right.
So
if
we're
going
to
1
13
11,
we
want
one
13
11
listed
here.
A
Experimental
should,
at
the
very
least,
match
master
right.
Ideally,
you
can
see
that
some
of
these
are
kind
of
the
same
versions.
Ideally,
what
we
want
experimental
to
be
is
a
little
further
afield
of
what's
configured
a
master
and
then
have
have
some
of
the
tests
like
canary
tests
target
target
that
experimental
version
instead
right.
So
in
an
instance
where
you're
looking
at
one
of
these
prow
bumps,
it
would
be
like
the
cubic
in
seed
II,
some
some
tag
experimental
instead
right.
A
You
use
that
on
a
use
that
on
a
job,
to
try
these,
these
experimental
variants
right.
The
upgrade
darker,
true
under
experimental
and
we'll
will
touch
on
that.
In
a
second
so
going
into
the
cubic
in
CTE,
you
directory
you'll,
see
this
docker
file
and
the
stalker
file
is
pulling
in
one
it's
pulling
in
that
old
Basel
version
and
the
reason
this
was
done
was
so
that
we
could
have
potentially
have
two
versions
of
Basel
to
switch
on
within
an
end-to-end
test.
A: This is useful for scenarios where we want to update Bazel in a repo, specifically kubernetes/kubernetes, but it could potentially be for other repos, and we basically want it to use that... what is the name of the file? The workspace something. There's a file where you can specify the Bazel version for the workspace, and if that file has a bump in it, CI will use that new version in the tests, as opposed to whatever is specified as the old version. From there...
A: Yes, so, copies in scenarios. If you have poked around in test-infra before and you've looked at the way some of our end-to-end tests are configured, you'll see that at the root of test-infra is a folder called "scenarios", and there are a few scenarios that we care about. Some interesting ones are kubernetes_build.py and kubernetes_e2e.py. These scenarios are written in Python; let me make that a little bigger. So, essentially, what these scenarios are doing...
A: This is eventually running push-build. So it's doing a build of Kubernetes and then eventually running that push-build script with the arguments that you specify. In the fast scenario, when you specify the fast flag for your end-to-end test, it will run a "make quick-release", and if you don't specify that fast flag, it will run a "make release", and then subsequently run...
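The branch described above boils down to something like this sketch (the make target names are the ones mentioned; the helper function itself is illustrative):

```shell
#!/bin/sh
# Sketch of the fast/full branch: the fast flag selects the lighter
# Kubernetes build target, otherwise the full release build runs.
pick_build_target() {
  if [ "$1" = "fast" ]; then
    echo "quick-release"
  else
    echo "release"
  fi
}

pick_build_target "fast"  # prints "quick-release"
```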
A: So: if you have feature flags on, if you're checking for storage specifically, yadda yadda yadda. The reason I mention the scenarios is that they are included in the bootstrap image, and the bootstrap image is just pulled into kubekins-e2e. You'll notice that in that PR we're primarily using kubekins-e2e instead of bootstrap. If you're using bootstrap for an image, you should try not to, because that image is not updated as frequently.
A: Going back into the Dockerfile, we've got a few things going on. We're setting some standards for the Go environment, we're reviewing some symlinking for the repos, we're adding some additional utilities and doing some cleanup, and we're adding the AWS CLI for AWS tests, though I don't think that's required anymore. And then, again, there are some more dependencies that we're pulling in that are specified as build arguments, which are then swapped in depending on what variant of the GCB builder run...
A: ...you're doing, right. So that Go version is the one that we care about. It's grabbing the Go tarball, it's, you know, unpacking it, and then moving it to /usr/local. And then the Bazel old-and-new version kind of comes into play here, where it'll essentially do some copies and write each to its own Bazel version, so that you can switch between those. I'm not sure if bazelisk or something is used to do that now. But... so, Jim:
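The Go toolchain step being described might look like this sketch in Dockerfile form (not the verbatim kubekins-e2e Dockerfile; the ARG name is an assumption):

```dockerfile
# Illustrative sketch of the toolchain step: fetch the Go tarball for the
# requested version and unpack it into /usr/local.
ARG GO_VERSION
RUN curl -fsSL "https://dl.google.com/go/go${GO_VERSION}.linux-amd64.tar.gz" \
      | tar -C /usr/local -xz
ENV GOPATH=/go
ENV PATH="/usr/local/go/bin:${GOPATH}/bin:${PATH}"
```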
A: Finally getting to your question: it's the upgrade docker arg. If it's specified "true", it will attempt to upgrade docker within the image, so the experimental variant will have a newer docker version, all right? This is a way that we can check against the most up-to-date version of docker in the package stream, to see if there are any inconsistencies. Honestly...
A: Looking at this, I probably feel that this should just be a docker version that we specify, and we can target different docker versions for the variants. The one thing to worry about is: how many places are we using this experimental variant, and how many canary tests do we have to actually vet that? I don't have that answer on hand right now, but, yeah, this is what it does here.
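The upgrade-docker argument described here might be wired up like this sketch (illustrative; the argument name and package names are assumptions):

```dockerfile
# Illustrative sketch: when a variant (for example the experimental one)
# sets UPGRADE_DOCKER to "true", pull in whatever docker is newest in the
# package stream instead of keeping the pinned default.
ARG UPGRADE_DOCKER=false
RUN if [ "${UPGRADE_DOCKER}" = "true" ]; then \
      apt-get update && apt-get install -y --only-upgrade docker-ce docker-ce-cli; \
    fi
```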
A: Cool. So the last bit is a bit of a cosmetic thing, and it's the publishing-bot bump. I think this is essentially just to represent the fact that we're using a newer Go version. So, to explain a little: the publishing-bot has a configuration file, rules.yaml, and the publishing-bot is responsible for looking at the staging directory within kubernetes/kubernetes, turning the contents of those staging directories into repos, and syncing that content across to those repos, right.
A
It has credentials, like a GitHub token or what have you, that are able to essentially create and manipulate repos. So we're publishing new tags to these repos based on the branch-to-directory mapping and so on and so forth, and it will try to maintain a version that maps roughly to a Kubernetes version. I think that versioning is, say we're on 1.17, it would be 0.17.whatever for those staging repos. So this is.
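The version mapping just described (Kubernetes 1.17.x publishing as 0.17.x on the staging repos) can be sketched as a tiny helper; this is an illustrative function, not part of the publishing-bot code:

```go
package main

import "fmt"

// stagingVersion maps a Kubernetes version like "1.17.3" to the
// corresponding staging-repo tag ("0.17.3"), per the convention
// described above: the major digit is replaced with 0.
func stagingVersion(k8sVersion string) string {
	var major, minor, patch int
	fmt.Sscanf(k8sVersion, "%d.%d.%d", &major, &minor, &patch)
	return fmt.Sprintf("0.%d.%d", minor, patch)
}

func main() {
	fmt.Println(stagingVersion("1.17.3")) // 0.17.3
}
```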
A
This is stuff that would land in code-generator, and if you expand it, you can see that we've got things like the apimachinery repo, the k8s api repo, client-go, and so on and so forth. So this one is more of a cosmetic change, because the Go version that these repos use is, I think, configured elsewhere and not directly handled by us. So this is essentially just representative.
A
The minor update for this stuff: minor updates are a little bit more complicated. I don't think that we have all of the details we need about how complicated they are; I think it depends. So if I go back into this PR for Go 1.14.3: I showed you the build image and the kube-cross version updates, the dependencies that we'll update. What I skipped over was this go.mod update for etcd's bbolt, and then also, effectively, what will be like the Go...
A
Some go.mod update for, like, the API server, and a fix for some validation that happens. So the thing about this validation in particular is... and then, if we scroll further, we'll see a test image update, and in tests, you know, a sample API server update, and then a bunch of vendor changes.
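As a rough sketch, a dependency bump like the bbolt one is typically done with the hack scripts in kubernetes/kubernetes, followed by regenerating the vendor tree (the script names exist in the repo; the version here is purely illustrative):

```shell
# Pin a new version of a vendored dependency, then regenerate vendor/.
./hack/pin-dependency.sh go.etcd.io/bbolt v1.3.5
./hack/update-vendor.sh
```

That regeneration is what produces the "bunch of vendor changes" seen in the PR.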
A
Something changed in the way that this error is output, how this invalid field is handled, and that change is specific to Go 1.14; it's something that changed between Go 1.13 and 1.14. So, to be successful with doing one of these minor updates, there are some additional things that need to happen. I would suggest, one, following the golang-announce list, so becoming a member of that golang-announce list.
A
But if we look at it, you can see from this conversation that it's much longer than the more simplistic bump for a patch Go release. You can see that Scalability has jumped in asking questions about potential performance regressions, and you can see that, running against the etcd tests, we had some potential problems here, which referenced the bbolt tracking issue.
A
So there's a bit of conversation about how specifically to handle this update for 1.14 in particular. There is an mlock issue that happens on specific kernels that we needed to watch out for, which was again waiting on various updates from other projects.
A
It requires watching issues in upstream Go. All of that to say that there is more involved in doing the minor than the patch, which is why I didn't want to cover that here, because it will depend on what minor bump you're doing. But we can see that there's an update here on the etcd bbolt compatibility fix, so hopefully, if that goes in soon, there will be a new etcd bbolt release. I'll come back to this PR.
A
C
A
...the image, right? So now that we're in a place where we have control over who in the community can build that image, we're able to kind of tie more of these threads together and understand how the entire process works. So I have tried to go over the things that I understand about it. This is fairly new, as of the last quarter or so, so expect this stuff to evolve.
A
If you're following the conversation on kubernetes-dev, you can see that there's a discussion around annual support cycles. We've been discussing enabling previous Kubernetes releases to be supported for more than the nine months that we have been doing in the project thus far. So that's, you know...
A
So right now it's essentially 1.18, 1.17, 1.16. Once 1.19 comes out, we would have normally started the end-of-life clock for 1.16, and essentially it would move out of support once we cut the first patch for 1.19. So when 1.19.1 comes out, we also cut a series of patches, 1.19.1 as well as 1.18 down through 1.16, and that would have been the last 1.16 patch.
A
What we're saying now is that we would add more support, so a longer patch cycle. I think what we're going to talk about is having this be date-based versus release-boundary-based. Saying n-minus-1, n-minus-2 kind of stuff doesn't really help; concrete dates would, saying that these releases are in support for about a year, plus roughly a two-month upgrade period. I think that's a little bit more robust when you're looking at moving clusters to new versions.
A
So the question that's been asked on that list is: can we actually say that? If we can't say that we can reliably update Go or etcd or what have you, then we can't really say that we can support things for X amount of time. You also have to consider the Go release cycle. Very often during the release process you'll see that we'll say, oh, there's a new version of Go out, should we update it?
A
Is it too late in the release cycle to look at doing a Go bump? That's going to depend on a lot of things. Is it a net-new minor version of Go? Is it a patch version? A patch version, as you saw once you step through the hoops, is like instantiating a bunch of different PRs; maybe there are 12 PRs or something in total that you have to do, but it's fairly simple.
A
I don't think that anything in that process, once you understand how everything works together, is particularly complicated. That level of difficulty changes when it's a minor version and we have to loop in Scalability. Given that a scalability test may run every 12 hours or every 24 hours, do we have enough time at that point of the release to allow Scalability to run tests and get reasonable signal about whether this is a Go update that we can move to? So that's the question.
A
That's kind of what's being asked on the list right now, and that's part of the reason for this work: to bring us to a point where we can more easily support the external dependencies that we have in the Kubernetes toolchain. And Lulu wants to say hi. So I think that's most of what I've got, but I'm happy to talk about various things related to this if you all have questions.
D
A
So that is, you know, a question of our supportability for that, and that's important because of the Go release cycle: at the time, we were on Go 1.12.17, and we were looking to potentially update to 1.13. There were some problems with Bazel, and there were problems in multiple places.
A
I don't remember all the background of that issue, but essentially Bazel stopped us from being able to upgrade Go. I think it was rules-related, and there were a lot of other things that had to be refactored in the WORKSPACE. But the move of the rules_go setup into repo-infra is supposed to make that easier, so hopefully, moving forward, we can keep more of the branches in lockstep. I would say my personal policy is to update as much as we can where it makes sense.
A
We should be on a continual update cycle with these Go releases. In my head, if I receive a notification from golang-announce that a new Go version is out, my next step is to file an issue saying we need to update to that version, and then assign someone on the team, whether it be me or someone else, to start walking through this process. So this is kind of what happened with the last Go release.
A
This is essentially what I did with Go 1.13.10, and then Veronica picked that up, and then 1.13.11 came out. So the issues have been updated to reference 1.13.11, but basically I think we should be doing it as much as we possibly can. One, it's kind of good posture to just keep them updated, but it also gives opportunities for multiple people in the SIG to learn the process, where previously we did not have that.
A
So that's kind of going into that repo-infra bump that I have on the 1.18 branch. I'm missing something potentially trivial in that PR to enable that bump, or it's a Bazel-ism that I don't understand, and that's why things are failing, so I'm figuring that out. That's also an instance where I believe the branch was not cut over to use repo-infra just yet.
A
So it's still referencing the Bazel Go rules SHA from the actual releases page for rules_go. So yeah, it gets harder and harder the further you go back, because we don't always know what's in the branch, we don't always know what was done at the time, and then we don't necessarily know what was cherry-picked back.
A
So if a feature happens in multiple steps across multiple PRs, did we actually cherry-pick all of those PRs back to the release branches? If not, there's going to be a diff between what's happening on master and what's happening on one of those release branches in that specific area, whether it be Go or Bazel or etcd or what have you, or even the Debian-based images, which I ran into issues with as well. So it all depends on how cleanly everything is packaged as a PR and then how cleanly it's all cherry-picked back.
A
E
A
For sure. I think that once we write the doc, I can start opening the gates a little bit more. The idea is that this is a task that's right up the alley of a release manager, so I want the release manager associates, the branch managers, and the patch release team all involved and helping to maintain this, because we've got a team of nine or so people right now, and I think that is more than sufficient.
A
Occasionally you take a bump, you take the cherry-pick, and everybody gets to learn this and improve on it. We'll be writing this from the context of what I understand of the process today, and that will surely evolve over time, and the docs will get better as we have different viewpoints on it. So I definitely want more people to get involved once we have a baseline set of documentation there.