From YouTube: Kubernetes 1.12 Release Burndown Meeting 20180919
A: Welcome to the 1.12 release burndown meeting. It is Wednesday, September 19th. I'm Tim Pepper, the release lead. Thank you all for joining. This should be a fairly quick meeting today; I'm hopeful, as Testgrid status went positively green overnight in a lot of areas. As always, the meeting will be recorded and posted, so behave accordingly.
A: So let's dive in. I've tweaked the agenda just a little bit to focus on the core topical buckets of remaining issues. We did go ahead and cut the release candidate yesterday. It seemed like we had a number of merges that were showing positive test results, so it seemed like it made sense to go ahead and do that, and it also enables some follow-on work from Cluster Lifecycle around upgrade-related things, so it made sense to go ahead and get that ball rolling on the release.
A: We had announced earlier in the week that we're delaying to September 27th, so we basically have a week at this point. But once you factor in some of the slower-running tests and a weekend, the general goal here would be, kind of today, tomorrow, and Friday, to really be having some confidence, let tests soak over the weekend, and know that we are in good shape. It's looking like we're on that trajectory, which is positive. So I'm going to run through the list of our current critical issues.
A: We've been tracking a bunch of scalability things for the past two weeks. I would say that bubbled up into release-blocker status, and as of today we basically have word from the scalability folks that they're thumbs-up and go on everything, with nothing remaining as a release blocker. There had been a test flake, but they've gotten some good triage on it in the last 24 hours. It's something that's been present since 1.11, so they don't intend to block on it and will continue to figure out a fix.
A
It
probably
put
it
on
patch
release,
but
it'll
also
need
backboard
it
at
least
to
111.
So
where
we
go
on
scalability
we've
had
a
bunch
of
DNS
issues,
and
these
are
all
green.
As
of
today,
the
first
one
we'd
had,
or
actually
there
were
cluster
of
things
that
had
been
kind
of
underlying
issues
that
seemed
like
in
hosting
that
stuff
is
all
fixed
and
green.
A: The next one has just started and hasn't finished, so we're expecting the last of those to go green today. And then the CoreDNS topic, broadly scalability: we'd continued looking at that along with the CoreDNS folks, but it's deemed not a release blocker, in that we've chosen to revert the default setting there for kube-up deployments. It does need to be release-noted, though, so I've put a comment in the release notes that we should have a section there for known issues, and some tentative text is being worked on to describe the problem.
A: There is also a PR going in to collect a little bit more data — just collect some data — and then revert the patch, probably tomorrow. So no issues expected there. Folks are keen on resolving this, but it's not something that'll happen in 1.12 or be a blocker, and I'm not going to block merges like that right now, because it's not a risky thing and it helps the team to do that.

A: So the next section: storage. This has been a bit troubling over the past weeks. We keep having a small little issue come up and get fixed, then another issue comes up, so it's kind of been a treadmill there, but we're making progress. There's one more thing that had come up sort of yesterday or the day before, where the status wasn't clear. This is a CSI CRD issue. A PR did actually emerge early today. I'm a little bit worried on that one, so I want to watch for any additional stability issues that might come up in storage over the next 24-48 hours.
A: On the plus side, the SIG Storage folks have been extremely responsive and are tracking this. Some of the things we'd have to watch for and notice, the SIG is already on top of: noticing the failures themselves proactively and working on addressing them very rapidly. So I have that still colored yellow, just because of the latest change that went in today. Something about it worries me, just in terms of its size and lateness and all of the discussion that happened yesterday on it, so I'm kind of watching that, but it's trending green now.

A: The next bucket was around the horizontal pod autoscaler. The main issue definitely was fixed — Solly had been confident in the fix there, and it merged, and tests definitely went green. That had been a release blocker, but it's looking green there. There are two or three other little follow-on things, though: a closed additional issue got reopened yesterday.
A: With the re-fix — the re-fix is all correctly set, and Tide is working on merging it right now, I believe, so that should be positive, but again I'm kind of watching that space. And then another race was noticed, and that merge also looks like it's going in today, based on looking at the GitHub status and then Tide. So folks have been super responsive there, and it actually maybe helps that Solly seems to be traveling but is in a different time zone.
A: So, the latest change going in there: when I look at it — and much thanks, huge thanks, to Aaron for last week really describing how to figure out which commit is in which test run — the last failure that had shown up there did not have the latest commit that was expected to be involved in this one. So yeah, I think that's just the lag of stuff rippling through.
A: Very good question. Networking: we'd had an IPVS issue that came up, but it looks like everything is merged there, so that should be in good shape. And then the odd L7 GCE load-balancing issue: that one was claimed to be on track to be resolved later this week, but as of this morning the tests are green, so I think the fix must have come in a little faster. And Aaron just mentioned in Zoom chat the guide that he had written — really, really useful for anybody.
A: Alright, upgrade. I have a few questions here, and this is traditionally when upgrade gets a little funky, because — so we've had some issues, and unclear ownership, and unclear symptoms overlapping with other symptoms. As everything else goes green, it allows us to look a little more closely at upgrade and double-check that things are okay. Cluster Lifecycle seems positive on things, but we do have a merge that normally comes in around now — or a PR that should normally merge around now — for instance:
A: There have been a few little things going by in the last day around some certificate-renewal things and a CLI fix. While release-blocking, they look like really minor things, and the SIG is all on top of them, so I'm not worried there; I just want to see those merge today and that churn settle out. And then there's the image building that Dims had suggested — we had a bucket tracking that.
A: So, as always, there are things that folks are pushing in, but yesterday I did a little bit of holding on some things, just to limit the merges coming in, so we don't have a whole bunch of believed-okay, trivial stuff merging at the same time — just to meter the merges and make sure we get solid test data on just a couple of critical things we're looking for. So there are a few things coming through there, and one of the big ones that came up — big or small, depending on how you look at it:
A: Jeff Grafton noticed that golang released 1.10.4 almost a month ago, and we didn't catch that. It's got a couple of things that look important from a toolchain perspective to have in, so there's a high likelihood that will go in today. I would have it unheld so it goes in by itself, and we have a runway of a number of days yet to see clear, green test results looking at what changed. Because it's a dot-four release, I wouldn't expect an issue, but you've got to worry on something like this. We should have made the change three and a half weeks ago, and I think it would be sort of a shame, and kind of personally embarrassing, if we didn't have this in, given the lag since upstream released it. Yeah.
F: This is really — anyway, there's another issue. I put that in the notes, and I'm sorry, I can't follow the new outline quite as well yet. I'm super confused about where my issues are and how everything connects. Give me a second here.
A: Yeah, yep, exactly. So if you compare it that way — yeah, I'll double-check later, but if you compare the GCE versus GKE-like things that are rippling through, it looks like the normal trajectory there. And then, looking at the specific commits, I believe the last hash the tests hit either had a slight wobble and some expected things had actually passed, or, depending on which one you're in, some of the release-1.12 branch ones, I think, are slightly trailing.
F: All right. We have an open PR, 68830, which has passed all checks and has LGTM and approval, but it currently cannot be merged, because we have protected branches now, right? Oh.
F: Great, so that's cool — never mind me, then. The big one, updating the CSI e2e tests for the CSI CRDs, got merged. He mentioned that it actually fixes two bugs: it fixes the hostpath tests failing, and it also fixes that the attach-detach controller's CRD did not specify a validation schema.
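[Annotation: the PR itself isn't quoted in the transcript. As a minimal sketch of what "specifying a validation schema" on a CRD looks like — assuming the 1.12-era apiextensions v1beta1 Go API, with illustrative group, kind, and field names rather than the actual CSI CRD definitions:]

```go
// Minimal sketch (not the actual PR): defining a CRD with an OpenAPI v3
// validation schema using the 1.12-era apiextensions v1beta1 types.
// The group/kind/field names below are illustrative assumptions.
package main

import (
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "csidrivers.csi.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "csi.example.com", // illustrative, not the real CSI group
			Version: "v1alpha1",
			Scope:   apiextv1beta1.ClusterScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural: "csidrivers",
				Kind:   "CSIDriver",
			},
			// Without this Validation block, the API server accepts
			// arbitrarily shaped objects for the custom resource; adding
			// it is the class of fix being described in the meeting.
			Validation: &apiextv1beta1.CustomResourceValidation{
				OpenAPIV3Schema: &apiextv1beta1.JSONSchemaProps{
					Type: "object",
					Properties: map[string]apiextv1beta1.JSONSchemaProps{
						"spec": {
							Type: "object",
							Properties: map[string]apiextv1beta1.JSONSchemaProps{
								"attachRequired": {Type: "boolean"}, // illustrative field
							},
						},
					},
				},
			},
		},
	}
	fmt.Println(crd.Name)
}
```

[Without such a validation block, malformed custom objects pass admission silently, which is why its absence surfaces as hard-to-triage controller bugs rather than clear API errors.]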
F
That's
why
that
pull
request
is
so
big,
but
you
mentioned
earlier
that
that
you
were
a
little
worried
because
it's
pretty
big
and
been
this
big
follow-up
from
the
lot
of
working
through
and
reverting
and
fixing
it's
been
in
the
works
for
a
while.
It's
been
out,
it's
been
worked
on
for
like
a
few
weeks.
I
think
yeah
well
I'm
little
over
a
week,
but
but
the
issues
have
been
worked
through
for
two
weeks,
so
I'm
actually
fairly
confident
that
and
I'm
actually
a
little
less
worried
than
you.
Okay,.
A: Okay. Yeah, I really appreciate comments like that. I'm probably trending towards hyper-paranoid, so when others have a sense of maybe a little bit better comfort, it's really nice to hear. And we'll see — we will know in the next 12 to 18 hours or so, as all of that ripples through the test stream.
A: Any other issues or PRs or test failures that anybody else wants to mention? The open issue count has gone down — lots of things have closed. There are no open issues on failing tests right now, and everything I'm seeing in tests that occurred this morning is, I believe, expected to be fixed by stuff we have rolling through, so that's really positive. The open PR count has also gone down, and there are a number of those that I still think are liable to be finally kicked out.
G: So one of the bugs that showed up yesterday and was really quickly fixed came up because somebody was using the beta, and they ran into a problem with kube-proxy and kubeadm. The reason I was pushing yesterday for the debs and RPMs was because that will help with additional testing. Once I have the URLs for the debs and RPMs, I can write a small note on how to test it in a fresher environment. So that's why I wanted to do this.
F: We had a bug-fix PR come in just an hour ago — did you see that? — and it was, for some reason, labeled priority/critical-urgent. Once the...
A: I'd certainly rather see that looking tighter. It's sort of the same worry that I had with that CSI storage CRD: why is there this much deep, active conversation this late? But since it's documentation, and we have a week yet, I'm okay with them, even if they iterate on it. I mean, if someone like Claire approved it, it goes in, and they can have a follow-on to adjust it a little more. I feel like there's practically no risk on us for this one, so I would bias towards letting them.
A: So, the gap between the 7th and the 18th: the PRs were supposed to be ready and done — effectively merge-ready — at the beginning of the month, and the lag for getting everything in, like yesterday, was more about whether there were translations and the follow-on things within the docs team. So I don't know, but we'll watch it and see what happens there. But yeah, I also agree with Aaron's assessment there in the Zoom.
B: Okay — and we were just going to take into account some of the conversation that was happening yesterday and create a duplicates bucket, sort of, at the top of the document, for all the notes that have duplicate entries at the moment. So hopefully SIG leads could go through that and just let us know which label they should fall under — which heading they should fall under.
A: So, the one thing I had on release notes: because of the CoreDNS stuff, what's been determined out of scalability testing, and the decision to leave it reverted, is something deserving of a release note in terms of being a known issue. I know in the past we've had a known-issues section in the release notes, but I just realized there isn't one in the drafts, so I definitely have a candidate piece of text to add to that section. Yeah.
D: I guess just an update for you, Tim: yes, I am going to be reviewing the blog post, just to make sure everything looks sane now that everything has kind of shifted. We also have something scheduled media-wise for Monday, I think, so just a heads-up — I think Kristen, or Kelly maybe, would have reached out to you.

A: Yes, awesome.
A: All right then, the only other thing to briefly mention: branch management. We already talked about it — we cut the release candidate, no particular issues there, and I'm feeling like we're on track at this point to probably have our code thaw on Monday. As the week progresses I'll probably announce that out to folks — "probably" being, assuming we stay on a healthy trajectory here. So things are looking good for the release at this point; I feel like there are just a few more things to nail down this week.