From YouTube: Kubernetes 1.12 Release Burndown Meeting 20180917
A: All right, we're at the top of the hour, so I want to go ahead and get started. I'm Tim Pepper, your 1.12 release lead. This meeting is being recorded and I will be posting it to YouTube right after the meeting, so please behave accordingly and in accordance with the code of conduct and all of that good stuff.
A: The main thing today: we have a request from comms to make a call today or tomorrow on whether we're going to delay the release. Things are looking up a little bit compared to Friday, but at the same time we were largely waiting today on some scalability results that we don't have, because the test failed due to a new and different issue. So that has me pretty much thinking I will be announcing a delay, but I'm still slightly on the fence because things are improving. So let's go through the agenda.
A: Aside from that, we basically have the CI signal to worry about right now, so here's the list of things on the agenda. First, scalability: we have a new issue, #68735, that just came up. I got word from the scalability folks, but they're mostly in Poland, so time-zone-wise they're basically done for the day, and I'm not expecting to see anything change there.
A: Mouse click here — there you are. This is the real update: the CoreDNS issue, pods crashing in large cluster performance tests, #68613, and the comment says basically the tests are still failing, but not due to CoreDNS anymore, and Wojtek filed a follow-up issue for a different problem. But what I'm saying is, this is presumed to be the current reason for the failures.
A: Yeah, and then just with the time zone skew — I mean, they pinged sig-api-machinery, pinging Liggett, pinging Sauly, and Sauly is in Europe and has just gone to sleep, totally jet-lagged. So I'm not feeling super confident that during our today, Monday, we're going to get movement on this one to understand whether it's a root cause, a final root cause, or just the latest in the series of debugging. So I think, if nothing else, that means we can't make a go call until at least tomorrow, rather than by the end of today.
A
A
So
there
is
a
new
issue
around
entry
volume,
so
the
storage
folks
have
so
I
open
six,
eight
seven
four
four
this
morning
the
storage
folks
have
triaged
it
to
having
started
failing
right
around
a
particular
recent
storage
BR
that
was
fixing
something
else.
So
we
may
have
a
regression,
but
we
at
least
have
their
focus
on
it
today,
starting
early
in
the
day,
so
we
may
get
to
the
bottom
of
it
yet
today
and
the
the
next
stone.
A: Then the next one: the horizontal pod autoscaler issue, #68383. Holly believes it's fixed and has everything pending; if we get the last PR in there, we could have test results today. So those two things, if they're going green, that gives me a lot more confidence. And then the final sort of bucket of issues, the GCE/GKE issues — there was a merge over the weekend.
A: They think that what they described for the symptoms almost correlates with some of the things we're seeing for stability and instability on tests. And then the first thing in that list, issue #68653, the DNS one — "DNS should provide DNS for ExternalName services." Bowei?
E: So we were tracking that internally, and the latest is that the test pod is crashing, and that's why the test is failing. So this should be unrelated to the Kubernetes functionality, and since that test is passing on GCE, we could make a call that that's an acceptable failure for now. We are following up internally, but we just don't want to block at least the 1.12 release based on this failure.
A: Awesome. So with those sorts of things in flight today, I have a certain amount of confidence, but I'm making a kind of subjective call at the end of the day today, Pacific — so probably around 5 p.m. — if these sorts of things have merged in and we see a new set of test runs that are looking better. Okay, so that's one thing: basically the scale issue is what's remaining, and we'd still have a week of runway on that.
A
We
have
a
bunch
of
failures
in
the
upgrades,
so
cig
release
master
upgrade
tests
have
all
been
failing
for
quite
a
while.
If
you
tunnel
into
these
the
test
results
as
I've
looked
at
on,
they
match
up
either
to
the
known,
store
or
expected
known,
storage
issues
and
HBA
issues.
So
I'm
relatively
hopeful
that
these
are
gonna
clear
up
with
those
they've
been
really
fuzzy
because
of
the
the
GCE
GK
connection
and
the
HPA
stuff
and
dns
small
and
storied
all
of
these
in
flight.
A: We had had the TaintNodesByCondition performance stuff implicated in that set of things, so that still, for me, kind of feels like it's not a hundred percent conclusively resolved. As distinct from the scale and performance issues, there's still this general fuzzy space of things that are supposed to be running but aren't always running, and I feel like that could be the background issue on the DNS thing on GKE as well.
C: Tim, can I say a few things? Yeah — okay, so one thing I wanted to say was about CoreDNS versus kube-dns. For kube-dns we don't have manifests for those images, and a PR is going in, hopefully today, and they will cut a new set of images with manifests. So that should be done today, but we don't have to wait for that to happen, because other than the manifest generation there is no actual change in code in kube-dns, so whatever we ship today will be the same thing.
C: Totally get it — that's why I was trying to wrangle all the images this time, using the manifests as the reason. At least we now know that we are able to generate all these images, even if we don't know who actually uploaded them; that's a whole other thread that I'm CC'd on. But coming back here, the other thing I was checking was: are there any PRs to revert for this CoreDNS thing? And it doesn't seem like it, because the kubeadm folks already did CoreDNS in 1.11.
C: So the kube-dns business should be squared away with the PR that should be merged today; it'll contain the images, and then we're good there. Then the other one that I was looking at was kubeadm: it seems to be moving to 18.06, so they're going to allow Docker 18.06 as one of the supported versions — earlier it was 17-something, I guess. But the problem there is that people have run CI locally and it's working, and both the node performance —
J: That's fine. In that regard, I think I would defer to the judgment of sig-node, and it's not like we necessarily want to prevent sig-cluster-lifecycle from saying that they have a tool that stands up a cluster with 18.06, but it may not have gone through the same level of rigorous testing that other container engines have, because this change was introduced into the release process a little bit too late for that to happen.
A: Feature 566, that is being dropped, so docs does need an update for that — Zack had asked — and I'll make sure I put a comment on that and a few other things. I'll also make sure that Caitlin knows, because there was a slight mention somewhere in some of the communications around that feature. All right, issues — Quinn?
B: Mm-hmm — oh my gosh, I'm so sorry, I've been typing and not muted. All right, so let me grab this real quick. I've been seeing a lot of issues closed; two of them are genuine fixes, which was great, and then you've had that rollback that you already mentioned.
B
B
I've
pinged
people
and
people
seem
very
responsive
on
both
the
first
one.
The
metrics
view
is
actually
it's
kind
of
a
kind
of
a
band-aid
patch,
there's
also
a
longer-lasting
better
fix
in
flight
that
perhaps
the
sig
will
say.
Let's
keep
this
one
for
now
and
then
have
the
other
one
for
the
next
release,
because
it'll
fix
things
but
like
they're
tracking
it
so
I
think
that's.
Okay.
B: We have that new issue that was already mentioned, but that's supposed to fix the failing scalability tests — that's about the "unable to get full preferred group version resources" errors. Then I also looked into the failing-test and bug issues, and my main concern is a pull request — sorry, the test issue, #67606 — and perhaps the CI signal lead can talk a little bit more about that.
B: But they're not — and this has been the same response on the scalability test as well. It's just, "oh, it's fixed now." Okay: how, where, why? Perhaps I'm just not understanding enough, but I'm just getting a little bit, I don't know, cranky about it.
A: I'm a little worried here as well, but the symptoms on this one do overlap with the autoscaler issues, and they're furiously working away on things now. So if he's confident his things are fixed, that may have been what caused them to circle back and say, oh wait, maybe we do have an issue, and now they're furiously working on it. But this is one of the tricky aspects of the release team, I feel.
A: We're really dependent on information from people who very often don't record a whole lot in issues. Their issue that merged this morning — the scalability, or rather the autoscaling one — basically said nothing, and it only caught my attention because it mentioned that test. So yes, Aaron, dragging folks in: we've had Klaus in on the call, so he's expressing confidence in his side of things.
A: I'd block on that one, just because sig-storage has been highly active, merging a lot of features in this release, and their initial thinking is that this is probably a regression in a very recent PR. So I would start by biasing toward wanting to see it fixed for the release, unless they come back and say no, this is pretty much an outlier and we'll get it in a dot release. But that'd be my thinking initially, yeah.
A
I
think
they're
the
one
that
the
thing
that
was
failing
felt
like
our
norm.
Our
normal
unfortunate
set
of
test
fails
so
we're
the
this
is
one
of
the
other
big
issues
that
we
have
when
we
have
things
that
are
this
wobbly
in
the
provider,
it
becomes
a
secondary
risk
that
people
decide
okay
well,
this
is
this
is
flaky
that
tests
could
surely
not
be
related
to
what
we're
seeing
here,
we're
just
going
to
push
this
in,
because
it's
a
critical
fix.
B: Wait, now I'm confusing this. Yeah, there's an open pull request and it doesn't seem to be — okay, yeah, I need to triage that pull request. I need to add it to the milestone, and also the pull request is not ready; it's not passing tests, but, so —
A
Sadly,
this
is
one
of
the
things
that
gives
me
a
little
bit
of
confidence.
Each
of
these
issues
feels
like
it's
related
to
something
else,
we're
looking
at
and
often
in
a
release
cycle
at
this
state,
where
you
have
a
whole
bunch
of
worrying
things
that
almost
seem
interrelated,
the
magic
PR
merges
that
resolves
it
all.
So
it's
not
the
sort
of
thing
that
I
want
to
depend
on
for
hope,
but
it
is
a
common
yeah.
A: This one at least was one that was intended to merge, so that it came through — it's not like, why is this 1.13-targeted thing coming in accidentally? So it's one of those things. Even without that, this is something we always kind of watch for, because we've also had people who rigorously, correctly labeled everything to merge right now when it was intended to merge later. So we're always just an extra set of eyeballs, whether the humans or the automation, watching what's landing in master right now really closely.
A: Elongating the schedule, I feel, should generally probably be on the trailing side of things, as opposed to moving things around on the front side, because then people know for sure where their features are due heading into the release and when stability is due — and ideally that's always just happening, because CI signal is always just green — but that side of the equation seems like it should be more fixed.
A: If we were to unfreeze right now, so much would be coming into master that we would lose all CI signal on the master testgrid boards, and we'd be down to just the release-1.12 branch tests and whichever things we're cherry-picking in there. I think right now that creates extra work for us and reduces our signal visibility. So my proposition would be that I'll announce delaying the thaw until Friday at this point, but TBD still on the release date slipping, I don't —
A: Our expectation is that right now, while we're still frozen, the master branch CI is our leading indicator. Each day, as we're doing a branch fast-forward, we get the secondary confirmation that success has flowed into the release-1.12 CI boards, and in a deterministic way: we have a set of failures, it stops failing on master, we do the branch fast-forward, it stops failing on the release branch. That is a good level of test confirmation for us. Once we thaw master, that whole first pass goes away and we only get the second-order test. So right now, having both of those means we have a good sort of time series of understanding, and we would keep that just through next week. Does that make sense? Am I answering the question?
A
There's
something
else:
I
wanted
to
say
with
respect
to
that:
oh
yeah,
so
in
terms
of
the
the
actual
release
confidence
so
this
week,
having
master
fully
stable
if
we
get
everything
resolved
in
master
and
everything
resolved
flowed
into
master
112.
At
that
point,
I
don't
feel
like
it
matters
we're
cutting
the
the
release
from
the
branch
and
that's
kind
of
the
point
where
I'm
willing
to
say
like
we
thaw,
because
master
can
go
off
and
do
its
own
thing,
because
now
we
don't
care
what
destabilizes
there
is
it
that
that
was
the
fun.
A: I wanted to ask Aish, or anybody else: in the last release, and the prior release to that, we were in similar sorts of places at times, and part of this is kind of a relative weighing — how much trouble are we in — and some of that can be informed by prior similar situations. I'm taking some confidence from the extent to which sig-scalability is engaged right now, but I'm curious about others' thoughts.
G: So I can only talk about the last release, because that's all I have visibility into. We did have the scale tests, especially, failing until the last minute. The only difference was, again, our Shyam and Wojtek — I'm telling you, they're always on top of things — they knew why it was failing, and they had fixes that progressively got us a step closer, and eventually, just before our RC or something, it was green for a while, and then it again started regressing for something else. But, that said, it did go green.
A: I talked one-on-one with Wojtek late last week, and he was really confident. One of the things that came through in his confidence was that — and I think because historically we've had these wobbles — he didn't even want to experiment with trying to understand for sure what was going on with the CoreDNS stuff. It was like: we have this history of problems, we have this instability; based on our local testing, we're quite confident that this revert will solve things.
C: Some perspective: there's only one time I remember in the last four or five releases where we ended up fighting over whether a scalability failure for an SLO should stop the release or not. There's only one time we got to that point. But in almost all releases, like I said, we've had one problem or another close to the end, and I have full confidence in Wojtek and Shyam to resolve those things in time. So, you know, kudos to them for keeping us on track.
A: Watch this space, because this is the one thing that really gives me pause on making a call later today on whether or not to go ahead and delay the release: we've got this new issue, they've done their triage for the day, but they've gone to bed, and we won't really get the second-order kind of finalization of that triage until tomorrow, our time. So if it's the only thing outstanding at the end of the day today, I have confidence that they'll most likely get it.
A: The bits of interaction I've had with sig-cluster-lifecycle have given me confidence just lately, but it's fuzzy also, in that the tests have been red for quite a while and haven't moved; still, they seem comfortable with their portion of things. And this is also something that we should — it's probably more a sig-release thing, but it's going to come to a head, I think, in the 1.13 cycle as kubeadm goes GA. They ended up dropping the kubeadm cluster-creation and cluster-upgrade GA/stable announcements for 1.12. All of these tests that we're looking at are not theirs, so it's stuff that's happening from a directory of scripts that says "deprecated, maintenance only, known to be flaky." We don't have data to know how many people use these, or how many people are already shifting over early to kubeadm.
A: It, unfortunately, is going to be kind of one of those open-source-y lazy-consensus transitions where, in the next six months, we're just going to be using kubeadm and those things are going to fall away, as opposed to us getting resolution on how we get them greener while we're making a clean transition, so we'd have a good A/B test.
C: One other piece of context here: kubeadm itself doesn't actually go start a virtual machine and install the control plane on it, for example, right? That is the problem right now, and so we are hoping that the Cluster API stuff will get firmer sooner, so we can really replace kubernetes-anywhere and things like that, and what's there in kube-up, so we'll have something else to fail over to from the bash scripts.
A: Well, this is rapidly turning into a sig-release meeting; let's run through the rest of the agenda. I guess I'll mention — it is noted there on features — removing 566, that's the kube-dns / CoreDNS stuff. I will double-check with docs and comms that they have heard that message, but I think Zack is here.
H: No, we switched over to the new document and we're working through it, encouraging the sigs to come in and tell us what to keep or delete. So things are going well. Okay.
D: This is Caitlin. I don't believe we mentioned it anywhere, but I will double-check on that. What we're planning to do is, once we start publishing, send out the blog with kubelet TLS bootstrap and Azure virtual machine scale sets, and then basically not send the additional notable-features section until we have confirmation that all of those are going to make it in. And then — I joined a little bit late, but it did sound like, either way —
D: Yeah, I think, you know, if there's any chance we're going to slip, from a comms perspective the extra two days for us is great — but completely up to you on that, yeah.
A: There's some focus bouncing around, so it may be that announcing that improves focus on the last couple of things where people have thought they were under control. So I want to make that choice as soon as possible, just to really kind of draw the line and declare the couple of things we're trying to finish resolving, and get that last bit of focus sooner rather than later.