From YouTube: Kubernetes Release Engineering 20191111
A: Hello, hello, everyone: this is the release engineering meeting for November 11th. This is a meeting that is recorded and available on the internet, so please be mindful of what you say and what you do, please be sure to adhere to the Kubernetes code of conduct, and be awesome people. So we're going to get started with... I don't know, we don't have an agenda, so I don't know if there are any open topics that we want to go over before we maybe walk the board real quick.
A: Yeah, so I'm gonna be honest: I know very little about it outside of the fact that it runs and it hasn't given me problems to date. But I know that, over the last few cycles, we've discovered issues in the way that we parse release notes, how release notes get recognized, and I think that y'all have done a great job in cleaning them up. So I would say maybe we can look at it together; hold on.
A: Paragraph... or, okay, so let's see what it does: it scans git log for merges; "collects and displays release notes based on release-note-* labels", and blah blah blah, okay. So, as always, we source our bash libraries, common.sh and gitlib.sh, okay; some functions for extracting the PR title. It's...
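A minimal sketch of that first step, scanning git log for merge commits and pulling the PR number out of each merge subject (the real tooling in kubernetes/release does far more, including looking up the release-note blocks and labels on each PR; the throwaway repo, PR number, and sed pattern here are purely illustrative):

```shell
#!/usr/bin/env bash
# Sketch: list merge commits and extract the PR number from each
# "Merge pull request #N" subject, the way a release-notes scanner
# might begin.
set -euo pipefail

# Build a throwaway repo with one merged "PR" so the sketch runs.
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" checkout -q -b main
git -C "$repo" -c user.email=a@b -c user.name=t commit -q --allow-empty -m "initial"
git -C "$repo" checkout -q -b feature
git -C "$repo" -c user.email=a@b -c user.name=t commit -q --allow-empty -m "add feature"
git -C "$repo" checkout -q main
git -C "$repo" -c user.email=a@b -c user.name=t merge -q --no-ff feature \
  -m "Merge pull request #12345 from someone/feature"

# Scan git log for merges; keep only the PR numbers.
git -C "$repo" log --merges --pretty='%s' |
  sed -n 's/^Merge pull request #\([0-9]*\).*/\1/p'
```

From each PR number found this way, a scanner can then go ask GitHub for the PR's release-note block and its release-note-* label.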
D: I guess what we need to find out is what features we're missing in there, and see if we can bring them in; and I guess the switch, then, for anago to just call the other thing, the new thing, would not be terribly hard, yeah. Now, that should be fine. The only thing I don't remember is... we said we have three implementations; I don't know what the third one is. I cannot remember that, right.
C: It tries to find the correct latest version, which is either a release branch or master; okay, cool. Yeah, and we could do the same for the patch releases as well. Okay.
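A rough, hedged sketch of what "find the correct latest version" can look like over release branches (Kubernetes release branches follow the release-X.Y naming; the function name and fallback behavior here are illustrative assumptions, not the actual implementation):

```shell
#!/usr/bin/env bash
# Sketch: pick the highest release-X.Y branch, falling back to the
# current (default) branch when no release branch exists.
set -euo pipefail

latest_release_branch() {
  # Version-sort the local release-* branches and take the highest;
  # if there are none, print the currently checked-out branch.
  git -C "$1" for-each-ref --format='%(refname:short)' 'refs/heads/release-*' |
    sort -V | tail -n1 | grep . ||
    git -C "$1" symbolic-ref --short HEAD
}

# Demo on a throwaway repo with a few release branches.
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" checkout -q -b master
git -C "$repo" -c user.email=a@b -c user.name=t commit -q --allow-empty -m init
git -C "$repo" branch release-1.15
git -C "$repo" branch release-1.16
git -C "$repo" branch release-1.17

latest_release_branch "$repo"   # release-1.17
```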
A: Right; if you look at the release-1.17 branch, you'll only see the CHANGELOG and CHANGELOG-1.17, all right. So the CHANGELOG just links out to the relevant changelogs, and I guess these would be broken, or are links back to master; okay, that makes sense, yeah. So this is the table that you saw being generated. We should probably put the sha256 in here too, but this table is already long enough as it is.
B: Just to run the Go CLI and the bash one, compare the outputs, and, for the little tiny differences, start making some little patches like this. This doesn't have to be a massive change, but just to do a little bit of cleaning, and then to say, like, yeah, they're similar enough that we're comfortable attempting to put the Go one, instead of the bash one, in the call from anago; and I think...
D: ...would have been my approach: just testing what the different things do, and documenting that, and then, based on that, we can see which ones we wanna maybe replace, which features we would need to do so, and that kind of business; not really thinking about code changes yet, because I have no idea about the different tools. Yeah, they definitely do different things.
A: Okay, all right. So usually what we'll do is just grab everything from the release engineering area, the area/release-engineering label, and look in all of the repos. I think I probably want to extend this label to be a global label, because there are enough things that we touch across multiple repos that I haven't been able to tag as release engineering that are release engineering related; all right.
A: And just a note for people who are on the call who work on the release engineering stuff: regardless of what repo you're working in, whether it's sig-release or the release repo or somewhere else, please try to make sure that the issues are tagged as area/release-engineering and milestoned when appropriate. We can work on figuring out priority together, but I just want to make sure that at least they're easily grabbable while we're doing stuff like this.
F: So, as part of that effort, we ended up investing some time in Cluster API Provider GCP and Cluster API Provider AWS, and essentially we stood up CI jobs using those two that are in release-informing now, and they've been pretty green, so quite, quite happy how that turned out so far. So that gives us an opportunity to test Kubernetes latest from master, and Cluster API from master, and the Cluster API provider for the specific cloud provider, and all sorts of stuff; so we can set up various combinations.
A: That's a loaded search. I think they were landing on the release-blocking boards; that's where... weirdly configured; not weirdly configured, but, yeah, these... so they were initially named like "GCE 1.15 devs" or something like that. I'm curious about your thoughts on the usefulness of running the conformance tests against one of the stable version markers.
A: So I believe some of these were set to the markers, like the k8s-master marker, or k8s-beta, k8s-stable-1, k8s-stable-2, k8s-stable-3. I'm actually, like... yeah, I'm curious about the usefulness of looking at any of the stable markers.
F: I think... so, by default it uses 1.16, and then I had to do some special stuff to run it against master, so that's moved over, or at least covered. But, you know, if we want to add CI jobs, it would be variations of the CI job, and we'd put them on release-informing for that specific release branch; that would be easy to do. I can walk somebody through that. So...
A: It's actually already... it's actually already done, yeah. So this is... this is the conformance-latest job, right; the conformance-latest job uses the ci/latest marker, so there's a fork-per-release... so fork-per-release is true, and then, also, for the replacements, it will rewrite ci/latest to ci/latest-whatever-the-version-is, right. So you can see that the corresponding jobs are here.
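For reference, the fork-per-release behavior being described is driven by annotations on the job config in test-infra; a hedged sketch of the shape (the annotation keys follow test-infra's config forker, but the job name, image, and replacement string here are illustrative, not the real config):

```yaml
periodics:
- name: ci-kubernetes-conformance-latest   # illustrative name
  interval: 4h
  annotations:
    fork-per-release: "true"
    # On branch cut, the forker copies the job and rewrites the
    # marker, e.g. ci/latest -> ci/latest-1.17.
    fork-per-release-replacements: "--extract=ci/latest -> --extract=ci/latest-{{.Version}}"
  spec:
    containers:
    - image: gcr.io/k8s-testimages/kubekins-e2e:latest   # illustrative
      args:
      - --extract=ci/latest
```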
A: Use it all the time; I am a big proponent of Hound. Yeah, so this here... so it was one of the "manual release bump required" jobs. So basically it's something that we were copying over, creating new ones, every time we cut a release branch, and I don't really see... like, if it's wrong, we can put it back, we can put them back, but I don't really see the usefulness of checking out the release stable version markers, since we're already running against the latest, right, and there are periodic jobs.
B: Go back to what Tim said about the version of the test and the version of the code. If we don't have this run, sometimes it's possible for a PR to come in that changes behavior, and the testing of that behavior, and on head things remain self-consistent, but it's now different compared to what had represented conformance previously, as of that tag, the one that you're not seeing the value in. Does that make sense? Yes, I see; we're shifting.
A: So we need... so what's the way that... in that... okay, okay; this is... I think this is a longer discussion. Oh yeah.
B: Because this is something... but you're right; I think this is something that isn't really well articulated in terms of a test plan, or a philosophy of what we're intending to do. So the value can be unclear, and then, depending on if or how we're managing all of these tags correctly, or not, relative to a documented plan, we may not be ensuring the things that we want to ensure. Like, we don't have documented invariants here, and we don't control for them, and...
A: So that was something I wanted to bring up too: right now these tests are flagged for... there should be, like, a conformance owner for GCE, for the various providers. So we need to figure that out, because, basically, like, this test will go in latest; that's cool; latest runs; if it fails, it's on our board, right. So it's landing on master-blocking, conformance-all, and conformance-GCE, the GCE test. To be honest, I'm not sure who's watching it at any one time.
A: Let's go here. So I broke up the generated tests, the generated dashboards, and so, initially... for people who are watching CI signal: I lumped a bunch of dashboards, er, tabs, into the sig-release job-config-errors dashboard, and then moved some of the ones that are specifically generated.
A: The rest of them are Bazel-related, and I think I have a fix for that, to move them off this board; but you can see that most of the rest of them are kubectl skew tests, and you can see that they're targeting 1.14/1.13, 1.13/1.14, 1.13/1.12, 1.12/1.13, right. Like, the way that we define the tests... how does that even work? Does that mean it's a version of... so this is the part I got confused at, and I was like...
A: Right; even the test... if you look at the test name, right, the test name is stable-2 to stable-1, right; blah blah blah, stable-2 to stable-1, right. It's landing on the 1.12/1.13 tab, right, which is obviously no longer correct, right. So, right now, our latest is, you know, latest, 1.18; beta is 1.17; stable-1 is 1.16; stable-2 is 1.15; stable-3 is 1.14, right. So that means that this is, at the very least, named wrong.
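The marker arithmetic being walked through is simple to state: during the 1.18 development cycle, beta points at 1.17 and stable-N points at 1.(17 − N). A tiny sketch of just that relationship (the function name is made up; only the arithmetic is taken from the numbers quoted above):

```shell
#!/usr/bin/env bash
# Sketch: derive version markers from the in-development minor,
# matching the 1.18-cycle numbers quoted above (beta=1.17,
# stable-1=1.16, stable-2=1.15, stable-3=1.14).
set -euo pipefail

marker_version() {
  # marker_version <dev-minor> <N>: N=0 is beta, N>=1 is stable-N.
  local dev_minor=$1 n=$2
  echo "1.$(( dev_minor - 1 - n ))"
}

marker_version 18 2   # stable-2 -> 1.15
```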
A: So there's, like, a double extract, right, and the double extract will mean: I'm going to use one version of Kubernetes to test, via some parameter, against a different version, right. So they're testing skew for kubectl using... so it might be an older or newer version of kubectl against an older or newer version of the server components for Kubernetes, right. So I don't know what the appropriate order is; like, does this mean...
A: Does this mean I'm checking out the stable-1 version of the Kubernetes server components, and then testing against the stable-2 version of kubectl? Like, I don't know.
A: If it was not in either of these categories, it would land in sig-release version-all, right; so I switched that generation so that we could get rid of all of the version-all boards, right. And then, at this point, it's a matter of fixing, I think, the stuff that lives in test_config.yaml, which is... yeah, there's a lot of wacky stuff going on that I don't fully understand. So, if there is someone who is interested in investigating this, that would be super appreciated.
B: ...find it right now, but, your question on, like, when it's k8s-stable-1 or those different patterns: Aaron had written a document somewhere, and I thought it was either in test-infra or in the community repo, and I am not finding it off the top of my head, but it was... it is documented, which... what that ordering... this one?
A: Okay; option A is to trust the job name; I'm not sure that I feel we can do that anymore. "Upgrade cluster": okay, the cluster is upgraded to the new version, and then the old version tests run. "Upgrade cluster new": the cluster is upgraded to the new version, and the new version of the tests run. "Upgrade master": the master is upgraded to the new version, the nodes are left at the old version, and the old version tests run. Like that.
A: Yeah, I'm not sure anyone has that... if we were to do a show of hands, how many people have seen this doc, I'm not sure many hands would go up. If a test job's name ends with "upgrade-cluster", it means we first upgrade the cluster and then run the old test suite. If it's "upgrade-cluster-new", we first upgrade the cluster and run the new test suite. If it's "upgrade-master", we upgrade the master, keep the nodes at the old version, and run the old test suite.
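Since those rules live in a doc few people have seen, the naming convention just described can be captured in a few lines (the function name and output strings are made up; the three rules are exactly the ones stated above):

```shell
#!/usr/bin/env bash
# Sketch: map an upgrade job-name suffix to what is upgraded and
# which test suite runs, per the naming convention described above.
set -euo pipefail

describe_upgrade_job() {
  case "$1" in
    *upgrade-cluster-new) echo "upgrade whole cluster; run NEW test suite" ;;
    *upgrade-cluster)     echo "upgrade whole cluster; run OLD test suite" ;;
    *upgrade-master)      echo "upgrade master, nodes stay old; run OLD test suite" ;;
    *)                    echo "unknown pattern" ;;
  esac
}

describe_upgrade_job "ci-kubernetes-e2e-gce-stable1-stable2-upgrade-cluster-new"
```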
B: The other thing, I think, to reiterate: you asked, like, should we, or what should we use for labels, or should we maybe remove these stable/unstable... I think we've kind of just demonstrated and talked through how poorly this is understood; that, yeah, when you see stable-1 and stable-3, which scenario is this, in which order? And it's spread across three different documents, and, outside of test-infra and CI signal, probably not many people know it. And this goes back to what Tim said.
B: Like, nobody looks at these tests; even when you notice it, you're like, "hold on, what scenario... what's going on here?" This is very much a hurdle, and I think we should move to something that's more clearly explicit about what is under test. I get that having these markers the way they are means you can just sort of forklift a test by changing the backend indirection pointer.
A: And, I mean, we've seen this over and over; this cycle, at least... this cycle, almost every cycle, that's happened, yeah. Because there's a point, that you don't know, at which the version markers mean something different, right. So that is when the new branch is cut, and, assuming the release branch jobs happen soon afterwards, right, as they should, that's when the markers essentially slide over; which means everything that has been stable-1, -2, -3, beta, master, what-have-you, against that marker...
A: ...those issues; like, we will eventually fix those issues, but I want to make sure that we also have, like, a human touch on top of that stuff, because, like, I didn't even know that this... I didn't even know that this script is where all of the... all of the dashboard jobs were landing on the dashboards (my words are so bad today), the... yeah. So some of this had to be tweaked, and then, also, like, the stuff that did not have owners attached to it...
A: ...were either migrated to a place that had owners, or dropped from our boards. So there's still, like, more work to do on the generated and the job-config-errors stuff, but I think there's a slightly better understanding, and at least the ones that are landing in the release branch jobs config are actually targeted against a specific version, instead of an ambiguous version marker... a non-versioned version marker. Cool, all right. So, do we still want to do the board, or do we want to talk about something else before we go? Does anybody...
B: We're doing so much technical debt here... but, like, we're puzzling through it. So, if nothing else, for the people who watch this and are maybe intimidated by it: don't be scared. You know as much about this as we do, practically, and we're just trying to work together to figure it out and improve it; so anybody is welcome on that front. So, you want to just glance over the board and see if there's anything critical, egregious?
A: Okay; so, "block releases on non-synthetic clusters": that has been in progress, right; that's some of the stuff that James is working on, some of the stuff that George has been touching. "Determine image building process for release tools": now, this might... this might go away, since we're starting to shove a lot more stuff into... your screen again; oh, oh yes, that would probably be helpful. Okay, cool. So, "block releases on non-synthetic clusters": that's the work that Dims was talking about earlier.
A: The "determine image building process for release tools": so this is... this is a discussion that we're having about tightening up some of the images... some of the container images that we build for the release-tool-specific stuff. I think, since we've had that discussion, you know, we still have...
A: So we've got the k8s-cloud-builder; k8s-cloud-builder is basically the image that we use for any of the steps that we run through GCB, or most of the steps that we run through GCB. So, if you're interested in seeing that stuff, it is in gcb, and build, release, stage, right; so the cloudbuild.yaml, all right. So we're using the git cloud builder, and the git cloud builder again, but also the k8s-cloud-builder, to do...
A: Basically, this is an image that has all the things that you need to build Kubernetes, as well as any of the tools that we have on the periphery to actually stage and release Kubernetes, right. So you can check that image out: images/k8s-cloud-builder, and then the Dockerfile and the YAML there. So this is really... this has the kitchen sink in it right now; I've just tossed in everything that looks like we needed, but this is built on top...
A: It's now built on top of the kube-cross image, and kube-cross has recently bumped to 1.13.4, so this has been bumped as well. I do want to pare this down when we get a chance; if possible, it'd be great if we pare down kube-cross as well, so that we get to take advantage of a smaller image size. But you can see that this also has a cloud build attached to it.
A: The cloud build attached to it is... it's using the docker cloud builder, and it's just doing two tags: a git tag and the "latest" tag. So, basically, every time a file is touched within the images directory and it merges, the cloud builder will rebuild itself, right; and it rebuilds itself by submitting this template to GCB. I did a codebase tour...
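A hedged sketch of what a self-rebuilding image's Cloud Build template with those two tags can look like (the gcr.io/cloud-builders/docker builder and the overall cloudbuild.yaml shape are standard GCB; the project, image name, and substitution variable are illustrative, not the real k8s-cloud-builder config):

```yaml
# Sketch: build one image with two tags, a git-derived tag and
# "latest", using the docker cloud builder.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args:
  - build
  - --tag=gcr.io/example-project/k8s-cloud-builder:$_GIT_TAG
  - --tag=gcr.io/example-project/k8s-cloud-builder:latest
  - .
images:
- 'gcr.io/example-project/k8s-cloud-builder:$_GIT_TAG'
- 'gcr.io/example-project/k8s-cloud-builder:latest'
```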
A: If you're on the call for that, we can link it in the notes again; that walks through the PRs and all the things that go into building this. So I think that, you know, this issue becomes a little less necessary... or, a decent chunk of the work is done already, outside of paring down some of the images. We also have images in the builds: debs, rpms...
A: I think I want to... I'm planning to merge the deb and rpm tools eventually; I want to say soon, but I'm gonna be realistic and say eventually. So it'll be something like kubepkg, and then "kubepkg debs", "kubepkg rpms", and it will... so we'll do something similar to what we're doing for krel, right, so building a toolbox; but this will be a very small one that only does debs and rpms.
A: This is something that people can pick up if they want to; we'll make it a module, so people can "go get" it and, you know, use it without having to basically import all of the release tools that we use. So we have to merge the paths between, you know, between the deb and rpm tools and the way they work today; Tim and I have been poking at that over the last few release cycles, but there's still some work to do. So, yeah: coming soon.
A: This is something that's open as well: the trusted cluster. We use the test-infra trusted cluster for our release jobs, the staging jobs, but I think that we might... there was talk about maybe us having our own cluster, since our credentials are a little bit more sensitive than some of the stuff that's running through there.
A: So we should have that discussion and figure out, like, what the maintenance burden for us, overall, will be; and more stuff that we... yeah, some of this, you know: needing a way to trigger the jobs. Like, this has been happening as a result of the image builder, the GCB builder tool, that lives in test-infra. But I just noticed we are at time, and so we'll stop here. Anyone who is on the release team: the call... that is starting now.
E: Bye... I'm... so, that... all right.