From YouTube: Kubernetes 1.10 Release Burndown Meeting 20180313
Description
http://bit.ly/k8s110-burndown
- Decision to push back the release date to 3/26
A: All right, welcome everybody. It is Tuesday, March 13th, 2018, and this is the Kubernetes 1.10 release burndown meeting, so we're going to proceed. If you're looking at the agenda and meeting notes after the fact, they're available at http://bit.ly/k8s110-burndown. I am Jaice Singer DuMars, your host and release team lead, and I'm going to switch my camera over to the less pastoral version of me. So welcome, everybody.
A: All right, so running through this real quick: I don't need to tell you all that we're in crunch week, because we are. Basically we're trying to get signal and all the things figured out, so we know what the health of the release branch is and can make sure that we're going to put out quality code that does not need to be patched immediately, and all that good stuff.
A: Wednesday is the end of code freeze. I want to talk about that today, because I'm not completely convinced that it's the right thing to do. Depending on what we talk about today, we'll be doing the final fast forward, and Caleb's going to have to cut yet another beta, so that'll be the beta to work through: I guess what might have been considered a release candidate before, or close to a release candidate. And then we're going to lift the code freeze.
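[Note: the "fast forward" mentioned here is the release team's periodic sync of the release-1.10 branch to master ahead of code freeze. A minimal sketch of the underlying git operation, assuming the branch has no unique commits yet; the real process goes through the release team's branch fast-forward tooling and CI checks rather than a raw push:]

    # Illustrative only: fast-forward the release branch to master.
    git fetch upstream
    git checkout release-1.10
    git merge --ff-only upstream/master   # refuses to run if the branch has diverged
    git push upstream release-1.10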
A: Metrics review, just real quickly. Open issues in the milestone: 14, down 4. We'll go into detail; some of this may have changed in the last couple of hours, there have been a lot of people looking at it, and Josh and Tim will go into this in detail in just a second, but I just want to quickly run through the other metrics. Open PRs in the milestone: there are 13, down 3. Our velocity in terms of merging PRs is looking pretty good.
A: It looked like the tests were failing for a bit; something about the linter was having some issue, and notes have been added to ping the people on these PRs. We need some LGTMs and approvals, and hopefully, if those people can look at those, that will happen. It's the same with the test blocking the backoff PR here. Otherwise, that's pretty much it for PRs in flight as far as I can tell; there are some cherry picks out on the one...
B: The yellows are blockers, or unknowns, that do have a PR open. And we've got two that have now been decided to be non-blockers, so those can go into the green category, which I will change in just a second. And then the greens are non-blockers, so two more should be added to the greens there.
B: Yeah, so I'm concerned about those from two perspectives. One is because they look pretty bad in terms of performance regression. The second is because we really don't seem to be able to get attention on them other than from a single engineer who, whatever good job he's doing, nobody seems to be reviewing his stuff. And you know, I've shouted at all the people that I know to shout at at this point, so I'm concerned. At this point, if you asked, "do those look like they're going to delay the release?", my answer would be yes.
A: Okay, so we definitely need to get some attention there. I guess I can take the lead on those, not a problem. I'll ping you asynchronously, Josh, to make sure that we synchronize on that, so I know who you've already contacted and who I need to contact, and we'll just start chasing those scalability things down.
B: Yeah, but there's also the issue with PVC protection. We have an issue with PVC protection moving into beta that basically prevents downgrade if you have any PVCs at all, which seems like a blocker to me, particularly because we don't have agreement right now even on how that should be fixed, let alone code to fix it.
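[Note: for background, PVC protection works by adding a finalizer to every PersistentVolumeClaim; a downgraded control plane that doesn't run the protection controller never removes the finalizer, so PVC deletion hangs. A minimal sketch of the kind of manual workaround being weighed here, assuming the standard kubernetes.io/pvc-protection finalizer name and a placeholder claim called my-claim:]

    # Illustrative workaround: inspect and clear the protection finalizer
    # so the PVC can be deleted after a downgrade.
    kubectl get pvc my-claim -o jsonpath='{.metadata.finalizers}'
    kubectl patch pvc my-claim -p '{"metadata":{"finalizers":null}}'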
B: This is really strange. Okay, I'm having trouble with Google Spreadsheets, because I'm seeing stuff on here that's actually closed and that I've removed, and yet it's back. Speaking of which, we do have one feature bug on there, which we definitely know is a Google Spreadsheets thing: that feature bug all the way at the top, schedule DaemonSet pods. It's actually been on the sheet for a while; Google Sheets was literally hiding it for months, and so we lost track of it.
B: If you look at the list of PRs for 1.10, it doesn't come up because it's not marked. It would have been nice for it to be marked automatically, but nobody's really figured out a way to make that happen, because GitHub does not relate PRs to issues. And then, for the rest of this, we have a bunch of issues, many of which are very recent issues that have not been confirmed.
K: Can I give a quick update about the scalability regressions? So, the first red row will probably soon go to yellow, because there are three parts to it. One of them is flakes in general: some kinds of flakes we have seen with the tests that are unrelated to performance regressions, and they've been there for quite a while; I will file a separate issue for that. The important part of that bug is actually understanding the increases in memory usage of the apiserver and the controller manager.
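[Note: the memory figures discussed here come from the standard Prometheus metrics that Kubernetes control plane components expose. A minimal sketch of how to spot-check apiserver memory, assuming direct access to its /metrics endpoint:]

    # Illustrative check: resident memory of the apiserver process, as
    # reported by the standard Go process metrics.
    kubectl get --raw /metrics | grep process_resident_memory_bytes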
K: We understand why we see the increase in apiserver memory: it's because of introducing buffering to audit logging. We've found the cause there already, and I've already sent out a PR to increase the threshold for our test, since the increase is expected. So we know the reason for that one. For the controller manager, I have done a couple of tests locally today, and I'm down to two PRs as the most likely cause.
K: I will post an update soon, but it's one of two PRs, and once I confirm, we will know what the reason is, and I will probably bump the threshold for that as well. So that should become a non-blocker by the end of the day today; it's already half a non-blocker. And the second regression, it turns out, is not so trivial, because it's showing this degradation in performance with pod startup latency, but it is not showing it consistently; it is slightly high and flaky.
K: We have a few green runs of our scalability test where we actually satisfy the SLO, but mostly it is crossing it. So it is a regression, but this might actually take a while, because I'm having to run the test a few times, two or three times, against the same commits, because it's just flaky. So this might actually cause a push in the release schedule, though I really hope we won't block on this, but this needs more discussion in any case.
K: We might probably let this be a non-blocker, considering that we are only flakily seeing this issue and we fail our SLO by no more than about five seconds, but still, this needs more discussion. And one more thing: I am actually out of office this Thursday and Friday. Wojtek could be there; I think he is out of office till Wednesday, and then I am out of office for a couple of days.
K: So hopefully Wojtek will follow up on this, and I'll talk to him. The other one with scalability, the 12th row, about Kubemark: I don't really think that's scalability related, that's purely a testing thing. I think I mentioned this to Jordan earlier; it's not related to scalability, it's just a bug with a flag in kubetest, so I would just call it a testing issue. And row 15, which is also a scalability issue, that definitely isn't a blocker, and it's not even a regression.
K: It's something we're trying to understand, and it has been happening for a long time: we see increased master component memory usage in the density test when it follows the load test. But yeah, it is not really truly a blocker. And I guess that's it; as of now, the most important one is the fourth row. So that's all from my side.
A: Okay, so you said that we need discussion; obviously I agree, and I know that Wojtek is going to be around. What is the best way to use your time between now and when you're gone? Because this one really does scare me, and I want to make sure that we're doing everything we can. Mm-hmm, yeah.
K: I'll give you my concern. The thing is, we need to run the density test, and most likely we need to do this by bisection. We cannot afford having more than one run of this at a time, because it's a 5,000-node test and we don't really have more capacity than that, so it pretty much means we need to run these serially, and the turnaround time for each iteration is about four to five hours.
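[Note: the bisection described is a standard binary search over the suspect commit range, which is why the four-to-five-hour turnaround per run dominates. A minimal sketch of that loop with git, where run-density-test.sh is a placeholder for whatever drives the 5,000-node density job and v1.10.0-beta.1 is a placeholder last-known-good ref:]

    # Illustrative bisection loop; every good/bad verdict costs one full
    # cluster run, so minimizing the number of steps is the whole game.
    git bisect start
    git bisect bad HEAD               # first known-bad commit
    git bisect good v1.10.0-beta.1    # last known-good ref (placeholder)
    # then, for each commit git checks out:
    ./run-density-test.sh && git bisect good || git bisect bad
    git bisect reset                  # when done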
B: No, everything else is pretty normal. Most of the things that are open are open because they're recent test failures or recently reported bugs. Not that some of them couldn't have turned into serious issues, but the reason they haven't been fixed yet is that nobody had even heard of them until, you know, less than 48 hours ago. Right, yeah, it's a normal turnaround time. Okay.
C: Josh actually covered most of this already. Okay, the good news is the GCE test is back to green now. The rest of the suite still looks pretty similar to yesterday, but we got a few PRs merged just now, an hour ago, so I'd wait for some results from the new runs, especially for the serial suite.
A: That's kind of what I'm leaning toward, but I mean, essentially we're setting a little bit of case law at this point, so I want to make sure that we do this correctly. There have been some other cases, with some of the etcd changes and whatnot, where downgrade was really painful. So, Caleb, you've kind of endured some of the slings and arrows of this sort of situation: what's your opinion?
D: I mean, I guess some prudence would probably be worthwhile. I mean, I guess we always regret...
A: I'd say use Saad as the proxy for getting this solved, and hopefully we'll get some resolution. So unless I hear otherwise, or unless it seems like it's a really bad idea, I want to plan on this being a known issue with a documented workaround, and I might look at putting that up there first.
A: Thanks. Okay, so just to recap: we'll document it as best we can, and then we're going to try and find a way to fix the test so that it is meaningful to the community if and when they need to do this operation themselves, so there's at least some prior art to follow about how to do this and how to delete that. Okay, really quickly, sorry this is taking so long: user-facing documentation?
G: Yeah, so as you all may or may not know, the release notes exist. They currently have a crap ton of extraneous information in them, and I have gone through and annotated where there are questions that we have, like: is this user-facing? Can you give an example? Can we link the docs? And so on and so forth.
G: If you wouldn't mind taking a look and just making sure that we don't have any questions on your PRs. We're also thinking that we might just go through and tag the original posters by their emails in those comments, to try and get those questions resolved; then it would be a lot easier for us to just finish up editing and get this all through. So we're in a good place, yeah.
H: Sorry, I was commuting. I was in the SIG Cluster Lifecycle meeting, and I remembered one thing that we used to do for kubeadm just before we cut a release; we did it for 1.8 and 1.9. It would be a single-liner that needs to be merged just before we cut a release, and in the 1.10 release it should not be merged in before then. So Timothy is going to create a PR for 1.10 for us, and I added a note in the document as well.
A: Great, and yeah, that's something we've done in prior releases, so thanks for helping coordinate that.
A: All right, so just a quick recap: we're going to push code freeze off till Friday. Actually, should I say Sunday night, so we have the weekend to play around with it? I mean, there's no real compelling reason to have it open on the weekend for anybody but us. So, oh yeah: Sunday night, 6 p.m. Pacific, we'll do that, and then Monday the 1.11 cycle kicks up.
A: Actually, that has to be... all right, okay: so Monday, midday or something like that, Pacific. Yeah, okay, cool, I'll do that. Okay, so just real quick, and I'm sorry to do this in the last minute of the meeting: if we do that, and we cut, and we open master back up on Monday, does that inhibit our ability to let anything soak long enough to have a realistic release state on Wednesday?
A: I'm kind of leaning... well, I'm kind of mixed on whether we need to do a full release candidate or not, meaning all-green signal and everything. We need that to actually be real, because as it is, we're going to be pushing it really close with this scalability stuff, and I don't want to have this happen.
A: I think that, given the situation with the CVE and where we are with scalability, those are both major user-impacting things. Okay, yeah: unless somebody vehemently disagrees with me, we're going to push the release off till Monday the 26th, and we're going to push code freeze off till midday the 19th. So basically the week of the 19th is going to be real busy trying to get everything cleaned up. Should we say rc1 on Monday?
A: Yeah, and for the blog post: I'm really struggling with what to do on project velocity, honestly. I might be inclined to just remove that unless there's some compelling news there. I hate to have no news, but I'd rather have no news than just repeat the same things over and over again. So we may want to check that.