From YouTube: Kubernetes Weekly SIG Release Meeting for 20200505
A: So we have a fairly light agenda today, I think. The first thing that we want to cover is the new release schedule. We had some back-and-forth emails over the last few weeks, as well as meetings, to discuss extending the Kubernetes 1.19 release cycle, came to some agreement, and that PR was merged in recent weeks. So if you want to check that out, it's available on the agenda. The TL;DR is that the release cycle started on April 13th.
A: Enhancements freeze is our next milestone, on May 19th. June 25th is code freeze, in week 11. Docs completion is July 9th, the release is going out on August 4th, and the retro is on August 20th. So, looking at the schedule, you'll see that our usual 11-to-13-week cycle has been extended to 17 weeks. Right around that area also happens to be KubeCon.
A: There are a few code changes on the anago side, which is our release engineering tool for staging and releasing Kubernetes, and then there are also, I guess, mechanical or human changes that have to happen within the schedule: determining when the burndown meetings will start, when the additional RCs will go out, and the communications around how those are going out. And then there's also what happens around code freeze. So we usually have code freeze and code thaw, right? Code freeze is essentially when master slows down.
A: We implement a set of merge restrictions on master, so that only code that is targeted for the 1.19 milestone can merge. It's on the milestone maintainers to appropriately milestone those PRs, at which point they would merge; everything else merges after code freeze. So code freeze is maybe a little bit of a misnomer: you're still allowed to merge code, it's just that the way you merge code has changed; there's an additional requirement, which is the milestone. Post code freeze, there's a period called code thaw, which is essentially when we lift that merge restriction from master.
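The milestone restriction described here is enforced through Prow's Tide merge automation. A hypothetical Tide query for code freeze might look something like the following; the real configuration lives in kubernetes/test-infra, so treat this as an illustrative sketch rather than the actual config:

```yaml
# Illustrative sketch of a Tide query during code freeze: only PRs in
# the v1.19 milestone (with the usual labels) are eligible to merge.
tide:
  queries:
  - repos:
    - kubernetes/kubernetes
    milestone: v1.19        # the code-freeze gate discussed above
    labels:
    - lgtm
    - approved
    missingLabels:
    - do-not-merge/hold
```

During code thaw, lifting the restriction would amount to removing the `milestone` requirement from the query again.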
A: So that means that everything from that code freeze point, when we cut the RC, will need to be milestoned, and it will need to be milestoned for the relevant release branch, so release-1.19 once it gets cut. We're also still trying to think of when to shift from that point into doing cherry-picks, at which point master is reopened and you'll need to merge a PR in master and then cherry-pick it over into the release branch.
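As a sketch of that cherry-pick flow: once a fix has merged on master, kubernetes/kubernetes provides a helper script, hack/cherry_pick_pull.sh, that opens a cherry-pick PR against a release branch. The PR number below is a hypothetical placeholder, and we only echo the command as a dry run:

```shell
# Hypothetical sketch: after a fix merges on master, open a cherry-pick
# PR against the release branch using hack/cherry_pick_pull.sh from
# kubernetes/kubernetes. The PR number is a placeholder; we echo the
# command rather than run it, since it needs a real checkout and
# GitHub credentials.
PR_NUMBER=12345              # placeholder: the PR that merged on master
RELEASE_BRANCH=release-1.19  # the branch to cherry-pick onto
UPSTREAM_REMOTE=upstream

echo "GITHUB_USER=<your-user> hack/cherry_pick_pull.sh ${UPSTREAM_REMOTE}/${RELEASE_BRANCH} ${PR_NUMBER}"
```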
B: Nothing I can think of. I had one more schedule-related item at the end of the schedule, and that is that until June 1st I'm going to be moving the release team meeting to bi-weekly, just to give everybody a little bit more time, and just because it's a little bit slower. That correlates with all of those milestones I mentioned, so we'll still get those updates in a timely way.
A: Oh, hold on. So actually, what was initially set in the schedule was that KubeCon was targeted for the 13th through the 16th, so the idea was that we would have the retro post-KubeCon. Now that KubeCon has moved, Taylor, Jeremy, it looks like we have to move the retro date to fit that. So maybe we can do the retro on the 13th, if that sounds good; that's the Thursday of the week after.
A: So I guess it's worthwhile to give a little perspective on why we did this. There's lots and lots of stuff going on in the world right now, and Kubernetes, or upstream contribution to Kubernetes, might not be top of mind for everyone. We wanted to give people across a set of different personas some opportunity to, kind of, live life. For the contributors, this allows more time to merge code, and more time to merge code in a more thoughtful way for reviewers and approvers.
A
That
gives
them
more
time
to
merge
code
more
time
to
do,
refuse
more
time
to
create
enhancement,
proposals
or
update
enhancement
proposals.
You'll
see
that
kind
of
all
of
the
the
deadlines
have
have
been
expanded
across
the
cycle
right,
so
the
the
current
May
19th
enhancements,
freeze
deadline
was
was,
it
might
have
been
today.
A
So
the
the
people
who
are
so
basically
every
release
creates
every
release,
is
essentially
a
deprecation
period
right,
the
time
that
once
we
let
out
so
the
second
wheel,
the
second
we
let
out
or
a
little
after
we
let
out
118
115
moves
towards
its
deprecation
period
right
so
in
in
that
it
would
be
deprecated
at
the
at
the
next
patch
release
right.
So
when
one
18
one
goes
out,
that's
that's
the
point
at
which
we
would
deprecated
115
right
so
same
for
119
119
goes
out.
A
Preventing
massive
change
for
for
infrastructure
from
the
consumer
side
right,
so
so
people
that's
from
a
vendor
standpoint.
You
might
consider
to
be
customers
or
clients
and
then
and
then
from
a
service
provider
perspective
as
well
right.
Every
time
this
clock
starts
it's
kind
of
okay.
We've
got
to
go
upgrade
our
cluster,
so
we
have
to
make
sure
that
this
new
version
is
available
so
giving
them
more
time
to
to
be
able
to
do.
Those
changes
across
infrastructure
was
important
to
us
to
you.
A: There's a tracking issue for doing some updates to the k8s-cloud-builder image. So the k8s-cloud-builder image is based on the kube-cross image. The kube-cross image, which was formerly within kubernetes/kubernetes and has recently moved to kubernetes/release, is responsible for cross building, that is, building for multiple architectures, within kubernetes/kubernetes.
A: We also have these image building and pushing jobs. So the idea is that we make a change to the content of the kube-cross image, and when we merge that change it triggers, so Prow has a check to see if files have changed within specific directories, per repo, or however you configure your Prow job, and that triggers an image building and pushing job. So within kubernetes/kubernetes, basically, we push that image to staging.
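That file-change trigger is Prow's run_if_changed mechanism. A hypothetical postsubmit along those lines might look like the following; the job name, paths, and builder image here are invented placeholders, not the real job definition:

```yaml
# Illustrative Prow postsubmit: when files under images/build/cross/
# change on the default branch, run a Google Cloud Build job that
# builds and pushes the image to the staging registry.
# (Job name, paths, and images are hypothetical placeholders.)
postsubmits:
  kubernetes/release:
  - name: post-release-push-image-kube-cross
    run_if_changed: '^images/build/cross/'
    decorate: true
    spec:
      containers:
      - image: gcr.io/k8s-testimages/image-builder   # placeholder builder image
        command:
        - /run.sh
        args:
        - --project=k8s-staging-releng               # placeholder staging project
        - images/build/cross
```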
A: We issue a promotion PR to take that staging image and promote it up to k8s-artifacts-prod, which is our GCR bucket for prod images. Once that's done, we then take whatever the new tag is for that image and we bump it within kubernetes/kubernetes. So that gives us an opportunity to basically build the image out of band and then do the testing of the kube-cross image separately.
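The promotion PR edits a manifest consumed by the container image promoter, which maps staged digests to the tags they should carry in the prod registry. A hypothetical entry, with placeholder digest and tag values, might look like:

```yaml
# Illustrative image-promoter manifest entry: the promoter copies the
# image with this digest from staging into the prod registry under the
# listed tags. (Digest and tag values are hypothetical.)
- name: kube-cross
  dmap:
    "sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef":
    - "v1.13.9-5"
```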
A: Previously, we'd have someone kind of clear that PR within kubernetes/kubernetes, build the change locally, push the image, promote it internally at Google, and then rerun tests against that PR. That process kind of involves a few people, not all of whom are as active on the release engineering side, from the Google perspective, and then you have to deal with all of the tests that are within kubernetes/kubernetes to get to the point where your image is promoted.
A
So
a
while
back
I
want
to
say
maybe
a
month
and
a
half,
or
so
we
moved
the
maybe
longer
we
move
the
cube
cross
image
to
the
kubernetes
of
community
infrastructure
right.
So
the
some
of
the
process
is
similar,
but
we
also
enabled
it
for
a
post
submit
image
building
right
so
that
process
that
I
was
talking
about.
A
So
now
we're
able
to
promote
those
images
ahead
of
time
and
then
issue
a
PR
that
is
much
smaller
to
kubernetes
kubernetes
and
because
it's
much
smaller,
it's
easier
to
approve
yeah.
So
all
of
that
to
say
all
of
that
to
say
the
the
kate's
cloud
builder
image
is
then
built
on
top
of
on
top
of
the
cube
cross
image
right,
so
the
case
cloud
build.
Our
image
includes
the
tools
that
we
need
to
to
build
the
stage
and
release
kubernetes
right.
A
Here's
the
tricky
part,
because
we,
because
we
kind
of
key
off
of
the
the
cube
cross
image
version,
it's
there's
a
require.
There's
a
current
requirement
for
the
Q
cross
version
and
the
Cates
cloud
builder
version
to
be
in
sync
right.
So
if
we
change
content
within
Kate's
cloud
builder,
which
we
did,
it
would
also
require
a
bump
of
of
cube
cross,
which
is
kind
of
counterintuitive
based
on
the
way
the
dependency
works
in
the
first
place
right.
A
So
this
popped
up,
because
we
recently
added
a
a
check.
We
added
scope
you
to
the
Kate's
cloud
builder
image,
to
enable
a
check
to
ensure
that
the
manifests
of
the
image
manifests
for
each
of
the
core
images.
Api
serve
or
controller
manager,
so
on
and
so
forth
are
available
before
before
the
GCB
jobs
that
do
the
stages
and
releases
affirm
that
they're
actually
able
to
stage
and
release
right.
A
So
once
we
move
to
new
infrastructure
our
once
we
move
to
the
existing
infrastructure,
the
new
existing
infrastructure,
it's
we
can
no
longer
the
staging
and
release
GCP
jobs.
Push
directly
to
the
prod
registry
right.
So
once
that,
so
once
we
actually
cut
over
will
no
longer
be
able
to
do
that
right
and
we
must
promote
images
that
are
that
are
created
as
a
result
of
the
staging
process
so
to
to
ensure
that
those
staging
images
actually
exist
for
the
release
release
managers
to
promote.
We
added
this.
This
image
manifest
check
within
the
staging
process.
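A rough sketch of what such a manifest check can look like with skopeo; the registry, image list, and tag below are placeholders, and the skopeo invocations are only echoed as a dry run rather than executed:

```shell
# Illustrative manifest check: for each core image, verify that a
# manifest exists in the staging registry before staging/releasing.
# Registry, image names, and tag are hypothetical placeholders; we
# echo the skopeo commands instead of executing them in this sketch.
STAGING_REGISTRY="gcr.io/k8s-staging-example"
TAG="v1.19.0-alpha.1"

for image in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  # 'skopeo inspect' exits non-zero if the manifest is missing, which
  # is what a real check would key off of.
  echo "skopeo inspect docker://${STAGING_REGISTRY}/${image}:${TAG}"
done
```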
A: So I've picked up the release for 1.15.12. I've been kind of backed up with figuring out the schedule stuff, but now that that is out of the way, I've pushed PRs today. I've been doing some testing on, basically, refactoring the kube-cross image a little bit: I'm taking all of the content from the k8s-cloud-builder image and moving that into kube-cross. So basically we do a few things in k8s-cloud-builder: we remove Python 2.
A
We
ensure
that
python
3
is
installed,
we
add
scope
EO,
and
that
might
be
that
honestly
might
be
it.
Let
me
just
go
through
it
really
quick.
There
are
a
few
additional
dependencies
that
we
add,
we
add
the
the
G
cloud,
SDK
stuff,
we
add
docker
and
then
and
then
scope
you
right
so
that
stuff
is
moved
into
cube
cross
and
there's
a
PR
up
for
that,
which
is
the
next
item
on
the
agenda
and
yeah.
A: Okay, great. So the first one is basically just sorting the Dockerfile a little bit: the apt packages are alpha-sorted now, and we're making sure we're doing one per line, so it's easy to see exactly what we're adding or removing. There were some steps for cleaning, doing an apt clean and removing the apt lists, and that's just moved to the end of the Dockerfile now. Here is some more of the meat, where we pull in.
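The Dockerfile tidy-up described here, alphabetized apt packages one per line with the cleanup at the end, might look roughly like this; the package list is an illustrative subset, not the real one:

```dockerfile
# Illustrative sketch: apt packages alpha-sorted, one per line, so
# diffs show exactly what was added or removed; the apt clean and
# list removal happen at the end of the same layer.
# (The package list is a hypothetical subset.)
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        git \
        jq \
        make \
        rsync \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```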
A: So this next bit is interesting. This is called a variants file, variants.yaml, and if the GCB builder recognizes a variants.yaml file within the directory that you're targeting for builds, it will essentially fan the build out into however many variants there are, or you can specify one of these configs. So if I say, like, variant equals go1.14 or something, that will tell it to just build that variant.
A
If
you
don't
specify
anything,
it'll
run
all
the
variants
in
the
in
the
in
the
in
the
file,
which
is
a
good
thing
right,
because
normally
we
want
to
be
able
to
we.
So
basically,
this
allows
us
to
build
these
go
versions
in
parallel
right
and
we
can
specify
a
few
of
the
things
that
are
required
within
within
the
docker
file
for
cube
cross
right.
So,
like
the
cube
cross
version,
we
care
about
protobuf
at
CD
and
scope,
yep
right
and
then
obviously
the
the
code
version
right.
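A hypothetical variants.yaml along the lines described, with variant names and versions invented for illustration, could look like:

```yaml
# Illustrative variants.yaml: each entry under 'variants' is a build
# variant, and the GCB wrapper fans out one build per variant,
# substituting these values as build parameters.
# (All names and versions are hypothetical placeholders.)
variants:
  go1.14:
    CONFIG: go1.14
    GO_VERSION: 1.14.2
    PROTOBUF_VERSION: 3.7.0
    ETCD_VERSION: v3.4.7
  go1.13:
    CONFIG: go1.13
    GO_VERSION: 1.13.10
    PROTOBUF_VERSION: 3.7.0
    ETCD_VERSION: v3.4.7
```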
A
We
pull
those
versions.
We
we
do
some
defaulting,
but
if
you're,
if
you're
interested
in
doing
this
locally
you're
able
to
do
this
locally,
you
just
have
to
you
know,
provide
the
correct
environment
variables
right,
so
we
do
some
defaulting
there
and
then
you
can
see
that
we're
just
changing
a
few
of
these.
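As a sketch of that local path, assuming the Dockerfile takes the versions as build args: the variable names and versions below mirror the variants file but are hypothetical, and the docker command is only echoed as a dry run rather than run:

```shell
# Illustrative local build: set the same values the variants file would
# supply (the real tooling defaults these when unset), then pass them
# to docker as build args. We echo the docker command as a dry run.
# (All names and versions are hypothetical placeholders.)
GO_VERSION="1.14.2"
PROTOBUF_VERSION="3.7.0"
ETCD_VERSION="v3.4.7"

CMD="docker build --build-arg GO_VERSION=${GO_VERSION} --build-arg PROTOBUF_VERSION=${PROTOBUF_VERSION} --build-arg ETCD_VERSION=${ETCD_VERSION} -t kube-cross-local:latest ."
echo "$CMD"
```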
A: Alright. So you can build it two ways, and then, finally, it does a check to see that those images were actually pushed. So you can build it in two ways: you can build it via the GCB builder, which is the preferred way, or, the way this is set up now, it also enables you to build this locally.
A: If you want to test, right. And then finally, we swap out, I'm using the staging one now, since this PR is still in flight, but we're swapping out the k8s-cloud-builder image for the kube-cross image. That way, we don't have to worry about maintaining two sets of image versions when we're doing the stage and release process.
A
So
once
this
merges,
that
should
fix
that
should
fix
the
ability
to
do
the
115
build,
and
then
you
can
see
that
there
is
a
follow
up
PR
on
the
kubernetes
side,
which
is
testing
the
build
version.
Well,
we're
just
testing
the
new
cube
cross
version
right
so
I
know
that
was
a
lot
and
maybe
not
the
usual
content
that
we
do
for
a
cig
release
global
meeting.
But
are
there
any
questions
on
that.
C: I'm going to start with the unassigned issues and see, like, for ones in a fuzzy state, whether they're still relevant, whether they're close to getting closed by the bot or something like that, ping them and keep a list of them. I have created an issue in the sig-release repository, and the next step is that I'm going to take a look at the assigned issues for the release engineering project and ping them to see if something can be reassigned.
A: Sounds good, thank you for working on that. So, if you were part of the release engineering meeting, I think Marquis did a demo of Triage Party. Triage Party is a tool that the minikube maintainers have been using, as well as, I believe, Skaffold. Triage Party is essentially a nice pane of glass; if you've used something like Grenadier, it's kind of an extension of that with a little bit more functionality.