From YouTube: Kubernetes 1.12 Release Burndown Meeting 20180918
A: There's just one issue to look at — and actually, ahead of that, I just pulled up TestGrid also, and we have green test results from them, but there's the issue they mentioned. Something was flaky, so I guess we're okay since we have green on the board. All right. Well, it's three past, so I'll go ahead and get started. Dims has his hand up, but — I saw that you commented on that issue; we'll get to it in a sec, okay?
Okay, it is Tuesday, September 18th. This is the release 1.12 burndown meeting, in the final week now — one week to go. I am Tim Pepper, the lead for the 1.12 release. This meeting will be recorded; it'll be up on YouTube just after we're finished here. Please behave awesomely and in accordance with our code of conduct. I think it's going to be a quick meeting today, actually, since we're getting closer on things. Just in case anybody doesn't have it, I will paste the agenda doc in the Zoom. So, I went ahead and made the call yesterday to delay the release a couple of days.
A: Basically, I was feeling like it was kind of a 50/50 whether we'd be ready come Monday — or really 50/50 whether we would be ready come Friday and still see good test results on Monday — and that's just not quite enough runway to feel confident that we would actually release on Tuesday. So delaying to Thursday gives us time to finish things, hopefully this week, regroup at the beginning of next week, get any absolute last-minute things in, still see good test results, and make the release call on Thursday.
E: But then what seems to be happening is that the retry logic bails out really early when there is a problem, because it's not checking for the condition where the cache is empty because the cache is still being loaded. So I threw up a patch, but then Wojtek said let's see if it actually happens again. So we'll wait for that and see if we need to use this patch. I actually closed out the patch; we can resurrect it if needed. So that was basically the update on that signal.
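To make the failure mode being described concrete, here is a minimal Go sketch of a retry loop that distinguishes a transient "cache still loading" condition from a real error; the names `ErrCacheNotReady` and `retryUntilReady` are hypothetical illustrations, not the actual Kubernetes code under discussion.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ErrCacheNotReady is a hypothetical sentinel meaning "the cache is empty
// because it is still being loaded", as distinct from a real failure.
var ErrCacheNotReady = errors.New("cache not ready")

// retryUntilReady keeps retrying while the only problem is that the cache
// has not finished loading; any other error aborts immediately.
func retryUntilReady(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// The bug described above: without this check, the loop bails
		// out early even though the condition is transient.
		if !errors.Is(err, ErrCacheNotReady) {
			return err
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryUntilReady(5, 10*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return ErrCacheNotReady // cache still loading on early calls
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err)
}
```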
E: Right, so by the time Monday rolls around we will still have another five runs, I guess, at least. Yeah, so we should have a better signal by then. But if you look at the previous runs, we never had — well, there's one sequence where there were four greens in a row; other than that, you know, it's always been on and off.
A: All right, so we'll continue to watch that space, and I will ping for an update. Well, I guess at this point we're pretty much past their business day, so we may not get Wojtek and Shawn — and I know they were up late a couple of prior nights, so they may have called it a day early, since they had things looking better — so I will ping them.
A: Hopefully that's not still an issue there. All right, so I'll move on to DNS. We've had a series of issues: there was a config map issue with DNS, there was an issue that was determined to be GKE-specific, and there is the CoreDNS scalability issue. Those three things we believe to have all been fixed, but there is still another issue, and I had sort of thought it might be related to the GKE failure.
A: So Mohamed opened issue 67600 this morning — it's "DNS configuration should be able to change federation configuration" — and that is visible on sig-release-master-blocking in the gci-gce-serial job. Now, I had kind of wondered whether that was related to anything to do with this GKE thing that was being changed. I don't know if that was specific to GKE or somewhere else in the network or what, because we don't get to see that stuff, and the fix for that only emerged at midnight in the latest run.
A: Okay, so I'm still curious whether those are related, but we do have an issue open to track it. And that brings up the other issue: the Google one is fixed, and the test that was failing just went green a little bit ago, so that is actually positive.
A: We've had a couple of things flying by, and each one successively is getting better. Right now, I think there's one thing that's consistently failing on storage, around in-tree volumes, and the sig was looking at it closely yesterday. There's a PR that had been under review. It looks like a relatively complicated review — this isn't a super-huge code fix, we're in the range of 29 lines of code, but there were a few questions that needed a little more figuring out — so I'm hopeful that this one gets maybe a bit of iteration today and emerges this evening.
A: So this is the fix to drain for evicting terminated pods and pods with local storage, and DaemonSets. The description describes some really slow performance cases where draining a node could be really, really slow, and we've had a set of tests that have been a little flaky in that regard. But I can't put a lot of hope in the idea that this is going to fix something magically great for us right now, if it's been around since 1.10. So, watching that space — it's a relatively large patch, so we'll see what happens there.
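For context on the drain behavior under discussion, here is a minimal, hypothetical Go sketch of the kind of filtering decision a node drain makes about which pods to evict; `PodInfo` and `shouldEvict` are illustrative names and simplified fields, not the actual kubectl drain code.

```go
package main

import "fmt"

// PodInfo is a simplified, hypothetical stand-in for the fields drain
// logic cares about; it is not the real Kubernetes API type.
type PodInfo struct {
	Name             string
	OwnedByDaemonSet bool
	HasLocalStorage  bool
	Terminated       bool
}

// shouldEvict mirrors the kind of decision a drain makes: skip
// DaemonSet-managed pods (the DaemonSet controller would recreate them
// on the node anyway) and only evict pods with local storage when the
// caller opts in, since their emptyDir data is lost on eviction.
func shouldEvict(p PodInfo, deleteLocalData bool) bool {
	if p.Terminated {
		return true // already-terminated pods are safe to clear out
	}
	if p.OwnedByDaemonSet {
		return false
	}
	if p.HasLocalStorage && !deleteLocalData {
		return false
	}
	return true
}

func main() {
	pods := []PodInfo{
		{Name: "web-1"},
		{Name: "logger-x", OwnedByDaemonSet: true},
		{Name: "cache-2", HasLocalStorage: true},
	}
	for _, p := range pods {
		fmt.Printf("%s evict=%v\n", p.Name, shouldEvict(p, false))
	}
}
```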
C: So I remember running into this issue actually at work, where draining nodes was just the kind of thing that was supposed to be a feature but didn't actually work. This looks like it's actually just kind of enabling that. I don't know if it's causing issues in the related tests, of course. But what I'm saying is, this sounds like something — I don't know why somebody added the milestone to it, which is why I'm sort of tentatively hopeful that it's relevant, Josh, but at the same time this is like...
A: I would like to believe this might be something quite useful to us. At the same time, it comes in a period where we have risk to mitigate, too, with this thing coming in, and the skeptical, cynical part of my brain says OpenShift's next release — the one that shifts over to Kubernetes 1.11 — is supposed to be finishing up tests and GA or something somewhere around now, based on a little bit of googling I did on the web, and they'd want to get an urgent fix into 1.11, which they would need — not that they couldn't carry the patch themselves.
A: Let's see — so that was storage, and then we have the other big one, the Horizontal Pod Autoscaler. I'm inclined to think this could be a larger part of the things we're looking at right now. So the PR that Solly had pushed for that yesterday got hung up after a bit in our test CI — issues with the GKE test that was consistently flaking 90% of the time or something. It actually finally got its test passes just a couple of hours ago.
A: I think it is pending merge right now, so hopefully that gets in here and we're able to see some improvements from that. This one, I think, could be implicated in a lot of the odd slowness we're seeing, so again, watch this space for the day. The final bucket, the fifth one, is sort of a question: we've had a bunch of upgrade test failures in TestGrid, and they're a little ambiguous in that they seem related to some of these other things.
A
So
if
these
other
things
merge
we'll
be
able
to
more
conclusively,
say
related,
not
related,
so
still
watching
that
space.
Somebody
has
added
a
networking
issue-
oh
maybe
that
was
what's
that.
Oh,
that
was
me
that
was
me
last
night.
Yes,
there's
a
networking
issue
this
in
the
sig
feels
it
needs
to
be
blocking,
so
that
is
showing
up
in
their
CI,
but
not
in
blocking
CI.
So
I
threw
it
in
the
CI
signal
here
they
have
something
a
fix
and
flight.
So
that's
kind
of
our
main
CI
related
issues.
C: Yeah, Nico is up on what he did, yeah, but there isn't really a lot of change since yesterday, except we closed an issue, which is yay. That one was really the PR that was just kind of sitting and sitting and sitting, and people were like, yeah, maybe we should merge this? It finally closed.
C: The metrics P1, yeah, exactly. And so — I think that was the one that was, yeah, there was a super fragile solution, and they decided to go with — I think they decided to go with that for this release and then do a more long-term, better solution later. Yeah, so that's what happened.

A: So we should be good on that. I'm going to delete that, and that does mean we have only three issues as of right now, and I'm not seeing failing tests, I believe. I'll go over the numbers and double-check. Okay.
A: Now, with that call made, and with us down to a short list of stuff, for the duration here most likely the call is going to be a targeted fifteen, twenty, thirty minutes at most, looking at a very short list of issues.
E: One bucket we need to start, I guess, is images that still need to be cut. The kube-dns image hasn't been cut yet, and the IPVS image has to be — you know, the network failure I think you mentioned just now, the IPVS stuff, an image may just have to be cut for that — and then we need to do the PR updates and then merge them. So I think we need to track them somehow.
A: Like, I wasn't remembering seeing it as I looked through the list — oh, it is there, and I'm probably not remembering it because I looked at it probably yesterday and was like, oh yeah, that's definitely not one we're kicking out, because yesterday we were looking from the "which of these can we cut" perspective. Okay, yeah, I will note those and make sure we're pushing on those. One other thing to mention, too, is a PR that should be coming later today or else tomorrow.
A: After we cut the release, the kubeadm folks will make a change around the version. On cutting the release today — we're cutting the RC release — I was kind of waiting to see the improvements on TestGrid, but since we're seeing improvements, I would expect that we will cut that. That enables the kubeadm folks to make some changes to the version strings in their code.
A: Okay, I will add the image-tracking stuff, and I'll probably just make a tighter, shorter agenda for tomorrow and the subsequent days that just has these couple of things we're looking at, just so we can really focus in. All right, folks, thank you. We will meet again — let me make sure I say this right, since the time varies — tomorrow's meeting is 9:00 a.m. Pacific, so map that to your local timezone, and it should again be relatively quick. All right, thanks a lot, everybody. Any questions or comments from the crew here?
A: So, early on we were trying to make things spread across time zones, and we've actually gotten a nice set of folks on from other time zones. I'll put a shout-out to Maria, who's there in a European timezone; she's been increasingly active this cycle, attending these meetings weekly, and based on her code contributions I think today she's going to become an org member. Nice.