From YouTube: Kubernetes 1.10 Release burndown meeting 20180305
A
Welcome, everybody: it is Monday, March 5th, 2018. This is the Kubernetes 1.10 release burndown meeting. I am Jaice Singer, the release lead for 1.10, and I'm happy to lead us through today's meeting. If you're looking after the fact and want to see the agenda and the things we talked about, it's available at bit.ly slash k8s 1.10 burndown. Without further ado, the timeline: for this week we need to close out docs PRs and get docs in good shape. Generally speaking, that includes release notes.
A
We need to figure out release health, really sprint health: basically sorting out what are flakes and what are real failures, making sure that nothing has gone into the release branch that's going to destabilize it, and continuing that trend. Marketing is proceeding: we're reviewing the blog and making sure that it syncs up with what's actually being delivered.
A
We start the Monday-through-Friday burndown meetings, and we see the end of code freeze and do the final branch fast-forwards and all that good stuff. So basically we're gonna release the floodgates, assuming that we're in a good position to do so; in the past we have delayed that for various reasons, and we can this time if we need to. Basically, after that point, once the branch reopens, we are in the business of cherry-picking specific things, so hopefully it's low-volume.
A
The reason that we do fast-forwards during this code freeze period is that it's just kind of a pain to do cherry-picks; having the release frozen allows us to do fast-forwards, because nothing should be getting into master that isn't specifically for the release. Next week we'll probably also do our first prediction on go/no-go. That's important because we're gonna have to make our best guess so that we can time marketing activities and all the stuff that goes along with the release.
A
Hopefully we'll be able to get some consensus on those things, on track or not. And next week we're probably all going to be focusing on docs in one way or another, because that seems to be where our roles are, and on cleaning up tests, cleaning up all the stuff to make sure that we have good signal. So, moving along: any questions or concerns with the timeline stuff? Anything in chat? No, no, no. Okay.
C
We haven't finished updating the issue tracking yet this morning from the weekend. What mostly seems to have happened is that we've gotten a whole bunch of new test bugs. They're actually only sort of new: there was a major test failure and it's been broken out into a whole bunch of individual test failures, because they belong to different SIGs. I'll let him talk a little bit more about that, because he's the one who opened all of them.
C
So, numerically, it's a bunch more issues. It's not actually a new failure; it's just being more specific about a failure that happened last week. In the meantime, a bunch of issues, I think, had been closed. I haven't gotten around to updating which issues have been closed yet, so I don't have details on that, but it looks like, in terms of stuff that's still open, we're mostly looking, as you'd expect at this stage, at test failures. I don't see a pattern, necessarily, in the test failures.
C
Right, the memory increase. So we had a major issue with a potentially huge memory usage increase by the API server. Analyzing that was being deterred by a scalability test flake. The scalability flake is supposedly now fixed, so I'm now waiting for an update on whether or not the memory usage increase is real.
B
Only to the degree that it touches the signal. I would say that if we have a conformance test that doesn't pass against the release we cut, it should be untagged as a conformance test, because these are the standards by which distributions are measured. It doesn't matter whether we ran it or not, or deemed it blocking or not: if it's tagged as conformance and we don't pass it, then that's bad, right?
D
So: the design of the upgrade test was that we upgrade from one version to another, and there were two suites; one would run the upgrade and the other one ran the tests. So, for example, when we upgrade from 1.9 to 1.10, we want to make sure both the 1.9 and the 1.10 tests are working. If there are individual tests that are not supposed to run, we can maybe just add a ginkgo skip with the feature tag for that specific suite, so that it does not run that specific test.
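The ginkgo skip mentioned here works by matching a regular expression against a test's full description, including any `[Feature:...]` tags in its name. A minimal sketch of that filtering logic (the function name, the sample test descriptions, and the pattern below are illustrative, not the actual suite configuration):

```go
package main

import (
	"fmt"
	"regexp"
)

// shouldRun reports whether a test description survives a skip regex,
// mirroring how a ginkgo suite filters out tests via --ginkgo.skip.
func shouldRun(desc, skipPattern string) bool {
	return !regexp.MustCompile(skipPattern).MatchString(desc)
}

func main() {
	// Skip anything carrying a [Feature:...] tag.
	skip := `\[Feature:.+\]`

	fmt.Println(shouldRun("[sig-storage] PV should bind [Feature:ExpandPersistentVolumes]", skip)) // skipped
	fmt.Println(shouldRun("[sig-storage] PV should bind", skip))                                   // runs
}
```

Because the tag is part of the test name, excluding a feature from one suite is just a matter of passing the right skip regex to that suite's invocation, without touching the test code itself.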
E
I mean, I think it would be more weird if the tests weren't bundled so tightly with the code that's under test, but I think in this case these kinds of weirdness are just going to happen, because there's not an external repository that contains these end-to-end tests, or these, like, acceptance-level tests.
D
Sure. So the major testing issue has been failing clusters in the serial suite. I started to use the GCE serial suite failure, and the upgrade one, as umbrella issues, and I opened five more issues for the individual suspects. Hopefully each SIG will start to triage theirs and their suite will become green. After the fix, the upgrade itself is actually successful now; it actually upgrades from 1.9 to 1.10.
D
And now they have revealed more actual test failures, as I mentioned. Maybe some of them need to be further triaged, but I also opened a list of the actual test failures. The other concern, which may be an actual concern, is that the downgrade test from master to 1.9 is currently failing. I assigned it to SIG Cluster Lifecycle, and hopefully they can triage that.
E
Just looking at the log output from the tests, it's the same two pods: one is a metrics pod and the other one is a Prometheus pod. They don't seem to be running. I don't know if it's a scheduler-specific problem or if there's some underlying infrastructure-setup thing going on. Okay, can you comment back?
D
Last thing on my list is the kubeadm e2e suite, which is still failing. There has been some work on the testing infrastructure, but it's still failing with something like "cannot find kubeconfig". But I saw a previous comment where people were saying SIG Cluster Lifecycle wants to get rid of the kubeadm e2e, like get rid of kubernetes-anywhere, so I'm not sure if that test needs to stay inside master-blocking.
F
I will add notes to the doc. Okay, so, hi Jaice, it's Jennifer, and I'm here for docs. Things are looking much better than they were on Friday; people have been super responsive to getting poked. My biggest concern right now is a small handful of alpha features where I'm still not sure what's going on, but it also looks as though some of those issues have been updated since I checked this morning. I haven't had a chance to check again, so things are moving fast.
A
I would like to circle around with you, though, because it looks as though there's still a small handful of features, not just the stuff folks have mentioned up to now, that might not make it in. So I want to be sure I'm understanding the state properly; I want to spend my energy bothering people who need to be bothered and not people who don't. Yep.
G
I have a spreadsheet with them, so I'm going to double-check all of them, because I'm actively working on the marketing material, for Natasha and for the marketing team, to push it forward. But if you find anything else that I did not notice in the features report on the spreadsheet, please let me know. Okay.
F
I think you did a strikethrough on one more. And then, the other thing folks might be interested in knowing: it looks like there's a whole clutch of storage-related features that may be covered in existing docs PRs, and I'm just trying to find the right person to confirm this with.
A
It looks like, at the end here, somebody wrote: the milestone command had a bug, but it's working now. That's awesome. Yeah, that's it. Cool, fantastic, okay. Anything else? Just one bit on my part: I'm at the Open Source Leadership Summit this week, so it's a little tricky for me to find time, and the Wi-Fi is obviously crappy. So Nick has volunteered to lead the meeting on Wednesday if I can't do it, which I probably cannot, so we'll see how that goes.
E
I do like the idea of flakes always being with us. I'm not going to follow the playbook here: I'm gonna deviate and not fast-forward the branch until things calm down on master, just because we've introduced some instability on the release branch by fast-forwarding. So then, unless anyone has any objections, I guess that's mostly it.