From YouTube: Kubernetes 1.12 Release Burndown Meeting 20180920
A
All right, I think we actually have sufficient people here to get started. I am Tim Pepper. This is the 1.12 release burndown; we're in the final week. This meeting is being recorded and will be posted to YouTube shortly after the meeting. The agenda is relatively short today. Actually, I think we're roughly down to two issues right now, yeah.
A
We're gonna go ahead and cut a second RC tomorrow and redo our RPMs and debs, because we had some issues earlier this week, and I'm not convinced we're gonna have that straightened out for the RCs. Specifically, this is probably gonna be something that has to carry on into the next release, to figure out how we manage that better. I've opened a test-infra issue about resolving that correctly.
A
Then on Monday we'll thaw master. We'll double-check what's merged in the meantime, make sure something unexpected hasn't merged, and do our branch fast-forward to release-1.12. Let that get its final soak, and Thursday should be our release. I think we're pretty on track for that, modulo the two issues we'll talk about in more detail here. So, on the agenda: the test-infra stuff there I think we've talked about, and I'll get into the debs and RPMs a little bit later. Beyond that, we've had a whole bunch of GKE issues.
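(A minimal sketch of the branch fast-forward step described above, assuming an `upstream` remote and a `release-1.12` branch; this is illustrative only, not the actual Kubernetes release tooling.)

```python
#!/usr/bin/env python3
"""Minimal sketch only: the rough shape of the 'thaw master, double-check
what merged, then fast-forward the release branch' step described above.
The remote and branch names are assumptions for illustration, not the
actual Kubernetes release tooling."""
import subprocess


def git(*args: str) -> str:
    """Run a git command and return its stdout, failing loudly on error."""
    result = subprocess.run(["git", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()


def fast_forward_release_branch(remote: str = "upstream",
                                branch: str = "release-1.12") -> None:
    git("fetch", remote)
    # Double-check what has merged in the meantime before moving the branch.
    pending = git("log", "--oneline", f"{remote}/{branch}..{remote}/master")
    print("Commits the fast-forward would pick up:")
    print(pending or "(none)")
    git("checkout", branch)
    # --ff-only refuses to create a merge commit and fails if the release
    # branch has diverged from master, which is exactly the safety we want.
    git("merge", "--ff-only", f"{remote}/master")
    git("push", remote, branch)


if __name__ == "__main__":
    fast_forward_release_branch()
```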
A
The issues noted this morning look like they're triaged and probably addressed, and I agree: if this is the only thing we have going on, and we have test coverage that's comparable outside of GKE, we don't need to block on this. It also looks like it's on a trajectory to be in better shape, so that looks good.
A
Our storage and horizontal pod autoscaling stuff that was in flight all looks good as of today. The first major thing, then, is the state of upgrade, and I see that Tim St. Clair is on, and I know we have this issue around how we get the version strings. Do you have some stuff you would like to say?
A
And the SIG Release document basically has said, like, hey, the Cluster Lifecycle people have it together, they'll take care of stuff, but yeah, we should have better visibility into what all you're doing. The certificate issues, on certificate renewals, and the CLI change related to that, I was treating as a release blocker, but it looks like those have merged and I haven't seen any sign of anything additional coming there. It had been Liz and Lubomir working on that.
A
Sounds good. Then the final thing that was on my radar was the upgrade/downgrade failures, but these are outside of kubeadm, and it looks like this may be, yet again, our old friend taint-nodes-by-condition. Klaus has a PR pending as of early this morning. That's worrying, but we'll see; hopefully that PR gets its final review today and merges today, and we can see the test status there. I've also reached out to the sig here.
A
Excellent. So the next thing on the agenda, then, is the golang update to 1.10.4. We're expecting this to go smoothly. It's been merged since last night and I haven't seen any destabilization, but of course it could take a little bit if something weird happened, in particular around scalability. We won't know for sure till Monday, so that's basically a "watch this space" and be hyper-paranoid, I guess, just in case anything odd happened. And then that flows a bit into the final bullet here on image builds, so, Dims?
B
Don't think so, but there is definitely one thing left, which is the etcd one. Yesterday we were going back and forth on updating the revision number, you know, the .0 and the .1, because we are updating them. We are generating the manifest now, but for all practical purposes it's the same etcd version. The PR from the IBM person, Manjunath, did get updated, so we have to cut the manifests, but we don't have to update any code. So that's the good news.
B
I wouldn't say everywhere; it's just starting in the etcd repository right now, because they want to match the etcd version with the version that we are cutting for the image. That's why they're trying to link them: so we don't have to go look into the container image to see what version of etcd is in there, to avoid that problem.
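(A hypothetical sketch of the version-matching check described above: confirming the etcd version inside the image matches what the manifest claims. The image tag and expected version are assumptions for illustration.)

```python
#!/usr/bin/env python3
"""Hypothetical sketch of the check being described: verify that the etcd
version baked into a container image matches the version the manifest
claims, so nobody has to inspect the image by hand. The image tag and
expected version below are illustrative assumptions."""
import subprocess

IMAGE = "k8s.gcr.io/etcd:3.2.24-1"   # assumed tag: <etcd version>-<image revision>
EXPECTED_VERSION = "3.2.24"          # version the manifest claims to ship


def etcd_version_in_image(image: str) -> str:
    """Run `etcd --version` inside the image and parse the version string."""
    out = subprocess.run(["docker", "run", "--rm", image, "etcd", "--version"],
                         check=True, capture_output=True, text=True).stdout
    # etcd prints a line like: "etcd Version: 3.2.24"
    for line in out.splitlines():
        if line.startswith("etcd Version:"):
            return line.split(":", 1)[1].strip()
    raise RuntimeError(f"could not find an etcd version line in: {out!r}")


if __name__ == "__main__":
    actual = etcd_version_in_image(IMAGE)
    status = "OK" if actual == EXPECTED_VERSION else "MISMATCH"
    print(f"{status}: manifest says {EXPECTED_VERSION}, image ships {actual}")
```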
B
We have to do that; we haven't done it yet. There was a long snippet yesterday that was shared across multiple channels, and that's the closest that we have. Right now there's only a handful of people doing that work, but once 1.12 is out, more people will be doing this. So, yes, we have to document exactly what we need. Okay.
C
So in general, ideally, for a long, long time, Jeff and I and some old people who have been around forever have wanted to get rid of the release repository. The one problem we had, the only detractor, the only reason we could never do it, was because Bazel did not support some cross-arch build details. That was the only detractor we had, and I don't think, going forward, that it prevents us from doing things.
C
We could still have a single make artifact for RPMs and debs in the mainline repository, because we could just put the specs and the debs in there. We already have the generation through Bazel for the specs and the debs that are built as part of the mainline stuff. That way we'd have one single source of truth, which would be the k/k repo.
A
Yeah, that would be a notable improvement. Anytime you have parallel paths for build and release, and you're sort of doing one most of the time and then, at the end, releasing from a different process, with individuals doing dev/test on yet another separate path, that just always ends up causing grief, right?
A
Okay, well, it slightly worries me that we don't have official-path test coverage exactly, that we're kind of going along on the alternate path, but this is also something that's been around, like Dims says, forever, and we do have a few more people coming in and being interested in helping move all this in a better direction. So, hopefully.
G
About it, I believe, if I have the context right, and yeah, so it's triggering jobs right now. Let's see what happens; I'll keep a watch on it. The intent seemed to be to merge it, right? I added it to the milestone and we talked about it, and then people added priority/important-soon. I'm like, well, that's not gonna work. So.
A
Glancing through the lists today, really it seemed like the only new things that had come in were Klaus's PR, the issue that I'd opened yesterday that triggered that PR, and the issue that I'd opened on the GKE stuff. But basically it's looking like, aside from that DaemonSet upgrade issue, we've not got any blocking issues, and the PRs are looking quite good as well, so I feel like we're almost there, fingers crossed. Okay, any docs updates? Do we have anybody from docs on the line? I don't think so.
C
I don't know how to answer this question, but the cloud provider folks... there's no forcing function that people can use for some of this stuff, and the cloud provider docs are still a TBD, and have been a TBD for a couple of releases. It's a source of frustration, so I don't know if someone wants to poke them from multiple angles, not just the docs folks and myself, but we ideally would like to have some of those in place.
A
Yeah, I think that this is a totally reasonable place to bring that up, and I will chat with a few people. It's probably late for much to happen this release, unless they're sitting on something magical, but it's probably a big thing to note and put on a to-do list as the next cycle comes up, to have this be something that we really push on.
C
No, I was expecting that by now the out-of-tree cloud providers would be mature, because it's been touted for several releases, with documentation that outlines all the details, so that when folks want to set it up, the instructions are there, as well as the canonical location of where to go. What happens today is it's kind of a hodgepodge of information, so they go hunting, and lots of questions and issues get filed.
A
I've pushed on them this cycle to get a line in the sand, like: when are you gonna stop accepting even just maintenance, when is full deprecation, when do you see that trending? They're thinking kind of first or second quarter of the next calendar year, so 2019, but they've got to draw that line and then work back from it to what needs to be done.
B
Given the complexity, the last one is going to be the GCE one, but I'm pushing for, for example, the AWS folks to do their stuff as an external cloud provider and just vendor the k/k repository straight, just like we are doing on the OpenStack side. So I'm hoping that plan goes forward, at least on the AWS side; then what would be left would be the GCE one.
E
I saw it, so, okay. So the question here, and this was discussed earlier in this meeting: the problem is probably that kubeadm needs to sync with the SIG Docs generation of the reference docs, because last time, in the last cycle, they generated some documentation with outdated versions. This is not a big issue and we can solve it offline, but yeah, it's part of the problem with syncing the kubeadm versioning and the generation of the reference docs. I wanted to bring something else up here as a question.
E
So, from the discussion in the SIG Testing channel: how has the project historically dealt with the situation where you have a possible critical bug fix, but the only way to test it is via periodic jobs? Is there a mechanism to trigger periodic jobs on demand, or is there perhaps a way to move them to presubmits? Basically, what is the history?
A
That's the question, like, if we have something right now that we're not sure about. We kind of had to do this for the scalability issue with CoreDNS: do we merge something for an experiment and then unmerge it? And there's the complexity of getting that level of official-path CI on a very short-lived branch; there's a human scalability factor there that's an issue. But somehow we need something like that enabled, I suspect.
A
There's this really interesting essay that somebody from Etsy had posted about their "princess" environment, and I think it's a really important part of release engineering to have this parallel track that you can do official, normal stuff on, but not have to have merged to master. It's a relatively common pattern in large, complex systems to do that in your release engineering and DevOps, and it's something on my mind, to try to figure out how we could do that better here.