From YouTube: Kubernetes Release Engineering 20200707
A: No, we've got to do it right now. Yeah, I know, that's totally fine. Jeremy, hope your arm is feeling better. Yes, we can do that. Give me a...
A: Okay, okay, so we will talk about the upcoming branch cut and version markers. So maybe this would have been good, as Carlos, on four-two... okay. So, Sasha, who is currently planning to do the branch cut for 1.19? I think we don't have a decision yet. Okay, so I will...
A: I will walk through the mess that we're jumping into in a bit, but I'm going to take care of the branch cut itself. Someone else can take care of the release part, but there are some idiosyncrasies about the branch cut that I'm making changes to, and it will be a lot worse if multiple people do it. I'll walk through that right now. Okay, so let me share my screen.
A: All right, so I made some... you may have seen some changes jump through, and, Lubomir, I am actually happy that you're on this call, because this is relevant information for you too. So I'll walk through this PR. Essentially what's happening is I'm trying to get rid of the generic version markers. So, for anyone who's not familiar with the version markers:
A: Essentially, the version markers are what we use across CI, in various jobs and various scripts, to target specific releases of Kubernetes.
A: If we look at the latest 1.19, and then we also look at the latest stable 1.18, you can see what the release markers point to. This latest-1.19 will point to the latest pre-release that is available for 1.19, which is 1.19 beta 2. So if I was to use this in a script, I could say, you know, I'm looking for release/latest-1.19, and target this for downloads; say I wanted to download pre-release artifacts from this bucket.
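As a rough sketch of how a script might consume one of these markers (the marker value and URL layout here mirror the public dl.k8s.io conventions, but the fetch is stubbed out, so treat the exact paths as assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: resolve a release version marker, then build a
# download URL for the release tarball. A real script would fetch the
# marker with curl or gsutil; here the fetch is stubbed so this runs
# offline.
set -euo pipefail

fetch_marker() {
  # stand-in for: curl -sSL "https://dl.k8s.io/release/${1}.txt"
  echo "v1.19.0-beta.2"
}

version="$(fetch_marker "latest-1.19")"
url="https://dl.k8s.io/release/${version}/kubernetes.tar.gz"
echo "${url}"
```

The same pattern applies to the CI markers discussed next, just against the development bucket instead of the release one.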
A: Okay, different place, interesting. Okay, if we look at it from...
A: ...the CI bucket: if I look at the latest-1.19 marker, it gives me 1.19, then a build number, plus... this is actually one of the CI build numbers, as well as the SHA for the current commit, or the last commit, that one of our jobs built against. So, going a little deeper, these are related to the...
A: These jobs are responsible for basically creating the builds that we extract in various jobs across test-infra, like the end-to-end jobs. So we have one of these kubernetes-build jobs across each of the release branches, but the curious part, you'll notice, is that we also publish an extra file: we'll publish a k8s-master.txt, which should match that 1.19 marker.
A: So the curious thing is that we also have a... where was I? Okay. We also have a build-fast job, and let me just make this a little bigger. So we also have a build-fast job which has this fast flag. Essentially, what this does is the equivalent of a make quick-release on the kubernetes/kubernetes repo.
A: The make quick-release, instead of building cross artifacts, that is, artifacts for multiple architectures, will only build the linux/amd64 artifacts. So what's interesting about this is that this condition only exists for the master branch, and what eventually happens as a result of that? If we were to look at some of the test jobs, we can see that...
A: If we look at build-master-fast and we look at the durations for some of these, we can see that the durations for the jobs are pretty fast: they're running, you know, about two minutes. And if I scroll through here, you can see that there are some that run for longer. We've got some failures, but we can see that some run for longer, and this is an example: 20 minutes is about what I would expect, 20 to 30 minutes, for one of the fast builds.
A: So what happens at the end of this build is that it writes a version marker, and it says: hey, I'm changing the ACL, but hey, I'm copying this latest-1.19.txt.
A: You can also see it's writing a latest-1 (so the major version) dot txt, and then a latest.txt file as well. So what's tricky here is that this is different behavior from the rest of the branches: the build-fast job runs every five minutes, and the regular build, or the cross build, job runs every hour.
A: The first thing it will do is go into the bucket and see if it can ls the build version, and then also the directory that is named after the build version, and then the kubernetes.tar.gz within that directory, and then the bin subdirectory within that directory. If it notices those files or directories exist, it's going to say that the build already exists, and exit.
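The exists-check described above can be sketched like so; a local temp directory stands in for the GCS bucket, and plain file tests stand in for the gsutil ls probes, so everything here beyond the shape of the check is an assumption:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "build already exists" short-circuit:
# probe for the directory named after the build version, the
# kubernetes.tar.gz inside it, and the bin/ subdirectory.
set -euo pipefail

bucket="$(mktemp -d)"            # stand-in for the GCS CI bucket
build_version="v1.19.0-beta.2.38+abc123"

# Simulate an earlier (fast) build having already published artifacts.
mkdir -p "${bucket}/${build_version}/bin"
touch "${bucket}/${build_version}/kubernetes.tar.gz"

build_exists() {
  [ -d "${bucket}/${build_version}" ] &&
    [ -f "${bucket}/${build_version}/kubernetes.tar.gz" ] &&
    [ -d "${bucket}/${build_version}/bin" ]
}

if build_exists; then
  echo "build already exists, exiting"
fi
```

Because a fast build publishes the same paths, this probe succeeds and the later build exits early, which is the root of the race being described.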
A: The problem with this is that the cross builds are often looking at the results of one of the fast builds. So the fast build is essentially racing the cross build, because it's using the same version markers and noting that it doesn't need to run, which is not true.
A: So the instances where we get cross builds on the master branch are essentially only the instances where the cross build, on its every-hour interval, beats the fast build. And this is problematic for making sure that we actually get new versions of the cross build, because if we've got the fast one running every five minutes, it's really likely that the fast build will write to the bucket first.
A: So I kind of went down a rabbit hole working on untangling some of this stuff, and there are some commits... I've left breadcrumbs across a bunch of these PRs where you can check out exactly how that's happening, explained in more detail. So what I wanted to do was set it up so that fast builds and cross builds would write to different locations within that GCS bucket, so that they wouldn't overlap.
A: The fast builds would not be beating the cross builds to the write. So, looking at this PR really quick: I kind of inverted some of the logic that I was doing before. What it previously wrote was type, then type-dash-version-major, and then type-dash-version-major.version-minor; after some iteration I landed on: if we have a fast flag, then we want to write type-fast, type-version-major-fast, and then type-version-major.version-minor-fast, and then, you know, we're going to update this build destination to write to this fast location instead. So this PR has merged; what's open right now is the test-infra side of it, which is here.
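The naming change can be sketched as follows (variable and marker names here are illustrative, not the actual script's):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the inverted marker logic: with the fast flag
# set, every marker gets a "-fast" suffix, so fast builds and cross
# builds no longer write the same objects.
set -euo pipefail

type="latest"; major="1"; minor="19"; fast=true

if [ "${fast}" = true ]; then
  markers=("${type}-fast" "${type}-${major}-fast" "${type}-${major}.${minor}-fast")
else
  markers=("${type}" "${type}-${major}" "${type}-${major}.${minor}")
fi

printf '%s.txt\n' "${markers[@]}"
```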
A: So we do a few things in this PR. We remove this cross marker that I was trying to play around with; essentially, I didn't want to cast the cross builds to another location, because the cross builds are pretty much what we always do. There's only one scenario, on the master branch, where we're writing a fast build. So we remove these cross markers here, and then within kubetest there is an extract_k8s.go file, which is responsible for recognizing the types of... they have extract strategies.
A: So we've added a ci-fast extraction strategy. So basically: if you give me an extract of ci/anything-fast, then I want you to instead look in this fast bucket, so that matches what we did on the push-build side.
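In spirit, the new strategy is just a pattern match on the extract string; a rough shell rendering (the real logic lives in kubetest's Go code, and the bucket paths are assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: route a ci/...-fast extract to the fast
# location, and everything else to the regular CI location.
set -euo pipefail

resolve_ci_prefix() {
  case "$1" in
    ci/*-fast) echo "gs://kubernetes-release-dev/ci/fast" ;;
    ci/*)      echo "gs://kubernetes-release-dev/ci" ;;
    *)         echo "unknown extract: $1" >&2; return 1 ;;
  esac
}

resolve_ci_prefix "ci/latest-1.19-fast"
resolve_ci_prefix "ci/latest-1.19"
```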
A: So: go look for stuff in that fast bucket. And then we also need to... so that's the extraction side, which is used for the CI jobs, specifically for the end-to-end jobs that we run across test-infra. Specifically for the build job, the build scenarios, what we've added is this --fast flag. Fast was already here, but it wasn't wired up to pass that flag into the push-build script afterwards.
A: So basically, what's happening now is we're saying: parse that argument, and store it as true so that we can use it later in various functions. One of the functions that we use here is check_build_exists; check_build_exists is the one that's spitting out the "hey, build exists, I'm going to exit out of this attempt at a build job." And we're saying, for the mode...
A: For the GCS bucket, I want you to take kubernetes-release-dev; the mode is ci, so slash ci, and then fast.
A: So, if you've been provided the fast flag, then I want you to look in the fast location instead. So that's kind of this PR. I know that was a bunch; any questions on that?
A: Okay, all right, so that leads to...
A: ...preparing the job configs for the 1.19 release branch cut. So I attempted to show Tim this yesterday, but Bazel hung my computer while I was screen sharing, so I'm not going to attempt that here. But to actually see some of what this PR is doing: we have a file in the test-infra repo called releng/test_config.yaml.
A: So this file contains essentially stubs for the generated tests: it has part of a test name and some important information about it, like the interval for the test, the arguments that you might use, and which SIGs own them. This file is quite long; you can see that some of the names are k8s-beta, k8s-stable-1, k8s-stable-2, so on and so forth.
A: Previously, these roughly mapped to the generic version markers: k8s-master, k8s-beta, k8s-stable-1, -2, and -3. The issue that we often run into with the k8s-beta version marker is that beta shifts depending on what portion of the release cycle you're in.
A: So if you're in development... if you're in the beta phase of the cycle, then beta is actually accurate, assuming that we've updated it to be accurate. So right now we're in the beta phase of the 1.19 release cycle, so if we were to look at the k8s-beta version marker, we would expect that version marker to be something around 1.19. We can see that it's 1.18.6, so: incorrect. So that means whichever jobs are using...
A: ...this k8s-beta marker are not getting an accurate depiction of the state of the world right now. Once we pass the beta phase, once we've actually released 1.19, say, at that point we should switch the beta marker back to master, essentially making it inactive. So we often run into a lack of understanding around when these markers should be shifted, or we just don't do it, and what ends up happening is...
A: ...that any job using these k8s-beta version markers is inaccurate. So I've been working over the last few cycles to try to push this towards some sanity, and, continuing down through this config (it's kind of long), you'll see that there is a k8sVersions...
A: ...block, and I passed it right here, so: k8sVersions. We have dev, beta, stable1, stable2, and stable3. Originally these were set to k8s-master, k8s-beta, k8s-stable-1, -2, and -3.
A: I had made the change, I think last cycle or the cycle before that, so that these point to explicit versions: dev is pointing to 1.19 right now, beta is pointing to 1.18, then 1.17, and so on. The reason for this is that we can now expect those jobs to do something known, without having to guess what the beta marker is at any one time. So when we were in the 1.18 cycle, this was accurate.
A: Okay, so we're dropping the test config for the beta jobs and we're adding a stable4 config. The reason we're doing this is kind of twofold. One, it's for what I already mentioned: the fact that these beta markers are confusing depending on what part of the cycle you're in; you basically have to know what that marker is doing for you to be successful writing jobs.
A: So what we're doing instead is establishing a stable4 marker, and essentially you can see all the stuff is just kind of shifting down: we're pointing at latest-1.18 for stable1, and then 1.17 and down for stables two through four. So this is essentially prepping us for the branch cut. We have a config rotator, a prepare-release-branch job, that we manually run during the branch cut, which will rotate these versions down.
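Conceptually, the rotation just shifts each slot down one; a minimal sketch (slot names follow the discussion, while the exact config keys and the next dev version are assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the version rotation at branch cut: each
# stableN inherits the version from the slot above it, and dev moves
# on to the next minor under development.
set -euo pipefail

declare -A versions=(
  [dev]="1.19" [stable1]="1.18" [stable2]="1.17"
  [stable3]="1.16" [stable4]="1.15"
)

rotate() {
  versions[stable4]="${versions[stable3]}"
  versions[stable3]="${versions[stable2]}"
  versions[stable2]="${versions[stable1]}"
  versions[stable1]="${versions[dev]}"
  versions[dev]="1.20"
}

rotate
echo "stable1=${versions[stable1]} stable4=${versions[stable4]}"
```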
A: Part of the reason we're doing this also goes back to the LTS discussions that we're having about extending support for Kubernetes to be annual. So there are instances where we'll need to have these jobs open for longer: 1.16 is going to be supported for longer than it was before, which means we need to continue running CI jobs against that branch.
A: So this marker is essentially to allow for that support, and also to kind of get rid of the beta marker in instances where stable1 is coming up; stable1 will now be 1.19 as we move into the branch cut and the RC. And then this PR goes into removing some of the k8s-beta test stubs and adding the stable4 ones instead, and you can see where this plays out in generating the test jobs, which will be a long file up here.
A: So there is an update-generated-tests script under hack/ within test-infra, and you can see that these are all shifting down one, essentially: k8s-beta to stable1, stable1 to stable2, so on and so forth. So this PR needs to go in before the branch cut, as well as the PR to publish fast builds.
A: ...so we can get out of that race. And I think there might be one more... yes, there is one more, that will tweak the generic suffixes. So, if you've ever seen... I'll stop for questions first: any questions on this?
A: ...a generic suffix before? Let's see if we can find one.
A: Right, so I'll copy this and give you an example here.
A: Right, so when you specify fork-per-release-generic-suffix in the annotations for a test job, when the config forker runs through this job, what it'll do when it forks the job to a new branch... So when we say fork-per-release: true, what we're basically saying is: I want a copy of this job in the release branch.
A: When you cut it, great. So this copy, it can't necessarily be the same job: maybe it doesn't run on the same interval, maybe it doesn't run against the same branch, maybe it has slightly different configurations, maybe it does an extraction of a CI build that needs to be different from the one that's done on master. So we can see the master one is ci/latest, and we can also see that the master job doesn't have any specific indicators about what branch it's running on.
A: So we kind of assume that it's master. So now, if we look at one of the release branches: let's just copy this job name, and if I go to kubernetes/sig-release, release-branch-jobs, and...
A: Right, we can see that when the job was forked, it was actually renamed to be ...device-plugin-gpu-beta. So when you specify that generic suffix, that's essentially what it'll do: it'll rename it in such a way that says, oh well, it's on the master branch, and I know that it's going to beta now, so rename it to have a beta at the end. What it doesn't handle well, unless you configure it in a certain manner, is these extractions.
A: So eventually the goal is to hopefully get rid of the need for these generic-suffix trues. If we were to look at a different job, let's see if we can compare. Let's do this one: ci-kubernetes-node-kubelet-features-1-18. And now let's compare this to...
A: ...the job that would be running on the master branch. We can see this in node-kubelet; we can actually see this just from Hound. You can see that there's a Testgrid config for ci-node-kubelet-features, and then within the actual job configs (maybe it's worthwhile opening this up anyway), you can see that this one has fork-per-release: true, but it does not have the fork-per-release...
A: ...generic-suffix: true set, which means that every time it forks across a release, when the config is rotated, it will fork with its branch name intact, or rather with a name representative of the branch that it's running on: 1.18, 1.17, 1.16.
A: So we want this to happen for really all jobs, so it's very clear what content each job is running against. And if we take a quick look here, we can also see that when we did the rotation for this job, looking at the extract and the repo: I said I want to check out kubernetes/kubernetes at master for the job that is running on the master branch.
A: If we compare that to the 1.18 one, we can see that the config forker did the right thing and checked out the 1.18 branch instead. Its name is correct, its Testgrid tab name notes that it's on the 1.18 branch, and it's also visible in the 1.18 Testgrid dashboard. So this is kind of the behavior that we want.
A: We want a job, when it forks, to indicate which branch it is running on, or running for, as well as popping up in the right tabs within Testgrid. So that's some of the impetus for this change: to make it a little clearer where jobs are running, to clean up some of their configs when they're forked, so on and so forth. So these are a few of the things that need to land before...
A: ...the 1.19 branch cut happens. And the reason I say that is, if it doesn't land... basically, the creation of the branch is kind of the one-time opportunity, or not a one-time opportunity, but a one-time automatic, or semi-automatic, opportunity for all of the numbers to shift in line with the branch that they're related to. If you don't do it properly before the branch cut, it means that anything after the branch cut is essentially a manual change.
A: So this would be someone painstakingly going through each of these configs and making sure that each of the flags, the Testgrid annotations, the branches that they're targeting, and the CI versions that they're extracting are all correct. So I want to try to avoid that. We have our RC1 for 1.19 on Thursday, and I'm going to try to get these PRs merged ahead of time. So that's that for that long presentation; any questions on that content?
A: Okay, Lubomir's hiding. All right, we're gonna move on to the next thing that's up, and I believe that is Sasha and Carlos. I believe you're both on now; do you want to chat about your PRs?
C: Yeah, we had an idea, I guess, a couple of weeks ago. We have this tedious task of copying all the commands, links, and everything that we usually use into the GitHub issue, like for adding the stages and that stuff, and we were thinking about creating, like, a krel subcommand to help us with this tedious work and generate the table for us.
C: Okay, what's this? The...
C: What I'm talking about is specifically this table here, until here: the mock staging, the mock release, the staging, the release. Most of the time we need to copy and paste and edit this several times during the process, and the idea is to partially automate this, using the krel subcommand to generate all this stuff. For example, can you guys see this screen? Is the zoom good?
A: Yeah, yeah. You're gonna make the text much bigger, like...
C: It generates the commands, it generates everything, based on the job we submitted. This is the idea. Yeah, and maybe in the future we can issue this command and it updates the GitHub issue automatically for us, as Sasha suggested; but I was like, first let's try this, and then maybe we push the automation that does that for us.
A: So I think this is awesome. I think copying and pasting the commands back and forth is one of my least favorite parts of the release process, and I'm sure that is probably the inspiration for doing this. I saw the PR fly by, and I have two comments. One, and I'm not sure if this is a side effect of the library, but...
A: ...(I thought that was going to be a sneeze) the first piece is: for markdown tables, you only need three of the dashes to generate the table. So if that helps with the output, if you can limit that to three of the dashes, that should make it read a little bit better, and it will mean that we won't have diffs if we decide to put this somewhere else and the table size changes. And the second one would be: can we call this history instead of generate? Generate...
A: ...does give you an idea of what we're generating, but history is kind of analogous to the history command, and it gives you the idea that we're getting GCB manager (gcbmgr) history, right? Yeah.
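Concretely, the three-dash point: a markdown delimiter row needs only three dashes per column, and the table renders identically however wide the cells are, so the minimal form avoids diff churn when column widths change (the table content below is invented for illustration):

```shell
#!/usr/bin/env bash
# Print a minimal markdown table: three dashes per column in the
# delimiter row are sufficient regardless of cell width.
set -euo pipefail

table='| Step | Notes |
|---|---|
| mock stage | command goes here |
| mock release | command goes here |'

printf '%s\n' "${table}"
```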
A: That's it. And Tim says: "I can't wait for copy-pasting to go away, because each item is just a step in a Jenkins or other pipeline." Excuse me. So yes, one day, one day, Tim, one day we'll have robots running this entire thing.
A: Okay, cool, all right. So the last one is Go, Go 1.15. Okay, Veronica and Marky are not on this call, but: we had the painstaking chore of updating to Go 1.14. That work started in February, I believe, and completed in June, mid to late June. The reason for this was a variety of things. There were some scalability issues that were fixed earlier in the previous cycle.
A: There were some issues with etcd, bbolt, and unsafe library usage that were potentially problematic for us, so we were kind of waiting until various updates happened on the Go side as well as the etcd side. Those finally happened. The tricky thing about waiting for updates is that you're also carrying different versions of the kube-cross image. There's some maintenance work: essentially, bumping to a new kube-cross version, promoting that image...
A: ...updating the k8s-cloud-builder image to support that new kube-cross version, rebasing the PR, fixing any test failures that come out of those PRs, changing the kubekins-e2e image within test-infra to support the new Go version, and then kind of iterating through that cycle over and over, depending on what version comes out. So the original plan was to update to Go 1.13.12 on master and then cherry-pick that over to the 1.18, 1.17, and 1.16 branches, because Go 1.14...
A: ...1.14.4 landed on master before 1.13.12. I'm going to say that the 1.13.12 updates should not happen on the 1.18, 1.17, and 1.16 branches.
A: We've also heard that there are a variety of things to watch out for in Go 1.14 that make its usage questionable for us, so we're going to be waiting for Go 1.15.
A: So the interesting thing about 1.15 is that it presents a bit of a dilemma for us, because we've not updated to 1.14 on the previous branches: we're essentially going to be jumping one, updating to Go 1.15 on master and then jumping to 1.15 within the active release branches. This is something that, as far as I know, we have not done before.
A: Also, Go 1.15 is not out yet. So this time around we're going to attempt to consume a pre-release of Go and see what happens: whether we have the container images that we need at hand, whether they're properly versioned, so on and so forth, whether we can actually go through the flow that we're going through right now to update Go with a pre-release version. That way we can kind of get ahead of it, you know, instead of spending four months trying to get a new version out.
A: Hopefully we can do it much sooner than that; usually we're able to turn it around a lot sooner than that. So we're going to try to align that for 1.19. I feel like we need to land that for 1.19, and, because of our extended support cycle, we also need to be able to pull this into the release branches.
A: We cannot have release branches that are two versions behind master and the most recent release branch, because that doesn't play well for us supporting release branches in general. If the Go version for our release branches goes out of support upstream, then we have problems on the supportability path overall. So this should be interesting.
A: Go, no go... As we have it right now, the upside of where we are in the release cycle is that, as we move into the RC1 that's happening on Thursday, there's still an RC2 and RC3 before the eventual release, as well as a blackout period. So we have a decent chunk of time to get some testing in, when there will be a lot fewer changes on the branches because of code freeze.
A: So the hope is that this should... I'm not going to say this is going to go off without a hitch; I'm not going to jinx myself like that, and I already know that there are going to be issues, but we should have enough time to get through testing for this ahead of the 1.19 release.
A: So Tim and I have an open thread with the Go team. I'll reach back out and talk a little bit about what's the expected release time for the RC, as well as when we can expect images to be available, because essentially what we need to get started with the process is the kube-cross image, which needs a current Go image from upstream. So stay tuned.
A: So, cool, we are at the end of our agenda. If there are topics that y'all want to discuss outside of that, feel free to just toss them out.
A: All right, well, we will call it. Thank you all for hanging out; as always, lovely to see all your faces. If you're on the release team, you've got more release meetings for the week. If you are not on the release team, the next meeting will be the SIG Release meeting next week, same time slot, so we'll catch you there, and see you on the channels and the interwebs and all that good stuff. Take it...