From YouTube: Kubernetes 1.12 Release Burndown Meeting 20180926
A: Well, I'm going to go ahead and get started, because I think I have everybody that I really need to talk to today on the line. So today is Wednesday, September 26th. This is, I would dare to say at this point, the next-to-last release team meeting for 1.12. I'm Tim Pepper, your release lead. This meeting is being recorded and will go up on YouTube shortly after the meeting, so please behave in accordance with our code of conduct.
A: Let me just post the agenda link, in case anybody didn't have it open there in the Zoom. And thank you, folks who've already started putting your names in there; I put mine. Also, please do so, just so we keep track of who was here. All right.
So, where are we? I actually made the status all green today. Our final soaking in CI is looking good. The one fix that we got pushed in yesterday caused the expected test to go green.
A: We have a green run, so the one remaining fix we had is looking good. Upcoming actions are around release notes and docs, and actually cutting and publishing the release; that's basically where we are today and tomorrow. Then next week we'll have a retrospective, so there are links in the agenda. If you have any thoughts or recollections from the release, things you think we did well or things we could do better, please drop a reminder in there and we'll be discussing those next week.
A: Next is test infrastructure, and I just realized today we don't actually have a SIG Test Infrastructure; there is an area, release-infra, and this falls more in that area. But our publishing bot is having some trouble. Nikhita has been looking at it, but to my knowledge she's in the India time zone, and I'm wondering if we can get some additional hands on that in North America, to hopefully get it all sorted out and resolved today. So there's an issue link there.
A: Even though we don't have a queue right now, there's effectively a series of patches waiting to merge, so I went ahead and issued a manual cherry pick of that patch. It's exactly identical to what's going into master; it was just so we could get it into the release, and we saw the test go green after that. So we achieved what we wanted. It was maybe slightly unorthodox in terms of the actual process, but I think that's a reasonable thing to do. So that test is green.
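For anyone following along, a manual cherry pick onto a release branch looks roughly like this. This is a minimal, self-contained sketch in a throwaway repository; the branch name, file, and commit messages are illustrative, not the actual 1.12 patch:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name "Release Team"

# Base history, then cut the release branch from it.
echo base > f.txt && git add f.txt && git commit -qm "base"
git branch release-1.12

# The fix merges to master first.
echo fix >> f.txt && git add f.txt && git commit -qm "fix flaky daemonset downgrade test"
fix_sha=$(git rev-parse HEAD)

# Manually carry the identical patch onto the release branch.
git checkout -q release-1.12
git cherry-pick "$fix_sha"
git log --oneline -1
```

The cherry-picked commit carries the same diff and message as the master commit, just with a new SHA on the release branch.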
A: Yesterday morning an issue came up, and it's not actually an issue, but there's a PR, around the version of crictl, the CRI control tool. This would have missed 1.12 on the original schedule, and it's looking like it's missing 1.12 on the current schedule; it's still not approved. So unless anything changes there... I've put a pretty clear suggestion, plan, and clarification of intent in the PR, asking the associated stakeholders to let me know if there's anything else, but at this point I don't see it as anything other than a known issue in the release notes. Unfortunately, it's a miss. It's not the end of the world. I mean, the only implication we could call out is that one of our alpha features in 1.12 wouldn't be usable with the current command line, and that is what it is; it'll be fixed in 1.12.1. So I'm not super worried about that.
A: So, in master we don't yet have the patch that's in release-1.12 which fixes this, but I would expect those tests to go green, since they did in 1.12. I'm not sure, though, and since so much else is going on in master, I also wouldn't be surprised if they didn't go green, but for other reasons.
A: The DaemonSet failures in the release-master upgrade job are on downgrade, and that is the specific instance that we fixed. As for the downgrade test: if you look in the agenda, there's a hunk of green text kind of in the middle, "expected and got a green run", with a link there. The equivalent test there went green overnight once the patch had merged.
A: Then, of the things that were not passing on upgrade: we had passes, we had flakes, and we had known expected failures once Cluster Lifecycle had switched their string over, waiting for the final release. As I triaged everything there, it was all looking good, and Cluster Lifecycle was giving the thumbs-up Friday and Monday as well, so I believe we're all good there.
E: No, I mean, I haven't been sitting here watching it as closely as you all have, so I don't know that I have anything constructive to say. You know, the release-1.12-blocking dashboard is red, and the release-1.12-all dashboard is red, but I'm presuming that's because the jobs that are red are jobs that you know are red, and you have reasons for why they are red.
E: Yes, it's worthwhile. There is a feature of TestGrid, which I believe may not be available in open source, or may not work super well in the open-source version, where you can tag test failures as known things and have that linked to an issue. That's something we could automate in open source today if, say, the test failure name showed up in the issues that were created, or something: we could link to the most recent GitHub issue that had that test failure in it.
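The automation being floated here could be sketched as follows. This is purely illustrative of the idea, not any real TestGrid or kubernetes/test-infra API; the issue data would come from somewhere like the GitHub API, and the field names below are assumptions:

```python
from datetime import date

def latest_issue_for(test_name, issues):
    """Return the newest issue mentioning test_name, or None if no issue does."""
    matches = [i for i in issues
               if test_name in i["title"] or test_name in i["body"]]
    # Pick the most recently created matching issue.
    return max(matches, key=lambda i: i["created"], default=None)

# Hypothetical issue records (numbers and dates are illustrative).
issues = [
    {"number": 68735, "title": "Flaky: DaemonSet downgrade test",
     "body": "Seen on the release-1.12-blocking board", "created": date(2018, 9, 22)},
    {"number": 68001, "title": "DaemonSet downgrade test fails in upgrade job",
     "body": "", "created": date(2018, 9, 15)},
]

print(latest_issue_for("DaemonSet downgrade", issues)["number"])  # prints 68735
```

A dashboard could then annotate each red cell with a link to the returned issue, which is roughly the "known failure" tagging described above.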
A: Exactly, yeah. So let me drop this in the Zoom, and then also put it where you noted in the agenda. I think issue 68735 was where they started tracking the flakes that had ended up being highlighted a weekend ago. I haven't looked at the details, because I've been looking at so many other details, but yesterday they said the failure was their known flake, and I'm presuming it's this one, but I don't know for sure.
A: So then, hopefully, 68735 is tracking that, and I don't know if you want to drop a 1.13 label on it, just so it's in your visibility as you come in. This is one of the other things that we struggle with right now: we have these known issues, but they're not on our radar for a bit as the release starts, and we just sort of accept things getting a little wobbly on master. So, yeah.
A: Well, I guess I'll move to the other couple of things in the agenda, then. Docs: I will ping Jen Rondeau about the status of things merging and going live there. Not too much in the way of concerns or worries there; I feel like Docs is really on top of this turn of the crank at this point. Oh, and we got the thumbs-up from Zack. And then release notes: I would say it's the kind of expected last-minute stuff; we're seeing a number of things come in now where people are realizing, okay, yeah...
A: Labeled issues and PRs are starting to go back up again, but this is also normal for this point in the cycle, where people realize things have definitely missed, and they're starting to tag them for 1.12 to mark the intent of a backport. They're figuring the thing out, they see it going into master, and say, hey, you know, we ought to backport this. So we'll see a little bit of stuff come back up there, and those are things I'm triaging through.
A: There are press folks interested in knowing what's in the release and how we're doing on things, and that's part of our job on the release team too: to be a little bit of a face for that sort of stuff. So some of that's going on, and those pieces are embargoed at the moment, just because we haven't actually released. The press doesn't want to announce the release of 1.12 before it's actually released, so the comms folks are managing that, and I guess that stuff will be going out tomorrow.
A: So, I think that is everything that was on the agenda today. I appreciate the discussion of things that look like retrospective topics, and I would really encourage y'all to open up that document. Actually, I see an anonymous kangaroo typing right now, so please do that. It's really important for us to try to capture some of these things in the moment, and then circle back around and see what we can do about improving in each cycle.
E: Tomorrow... I want to take up a little bit of the extra time. I posted it in the sig-release channel, but to remind folks: the triage dashboard is up and running again. This is usually something that's been super useful at this point in the release lifecycle, and I apologize that it's not been available for so long.
E: So what I've done is try to improve how quickly it runs, and make sure that it runs three times every hour now, instead of on an hourly basis. And I'll see if I can look into expanding its horizon as a final thing, because it currently looks back at builds from the last week, and maybe it would be more useful if it showed, say, the last two or three weeks. But it's a fantastic tool for saying: given this failure text, where else is this failure text happening?
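The schedule change described (hourly to three times an hour) would look like this in crontab terms. This is purely illustrative; the actual triage dashboard job configuration isn't shown in the meeting, and `update-triage-dashboard` is a hypothetical command name:

```
# before: refresh the triage dashboard once an hour, on the hour
0 * * * *     update-triage-dashboard
# after: refresh three times every hour (every 20 minutes)
*/20 * * * *  update-triage-dashboard
```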
A: So, life, yeah. That was sort of my theme for the release, the Game of Life: we have these iterations. And it's also sort of a play on TestGrid, for those of us who've been watching the little reds and greens marching across TestGrid for a while. So, yeah, Game of Life. This is life: we run another iteration in a stable distributed system, and I think I...