From YouTube: Kubernetes 1.13 Release Burndown Meeting - 20181129
A: Hello everyone, welcome to our 1.13 burndown for 11/29. I've added the link to the minutes in the meeting notes here, as usual. This session is being recorded and will go up to YouTube shortly afterward. With that, let's get started with the release status and timeline: I moved it to green, because we are now officially in code thaw and out of code freeze.
We did get two green runs in all of the release-branch jobs, so I'm assuming any failures we might see going forward can safely be assumed to be from master, unless we do see some systemic failures. As of now the release is still on track for next Monday, 12/3, pending any CI signal issues. Tomorrow we plan to cut our second RC. Hopefully it will have the latest release notes, because the draft is almost finalized and got a lot of reviews yesterday.
A: Okay, with that, let's start with test infra. Cole, or anybody around from test infra? If not, Christoph? Okay, on to Tide and the merge queue: it's been steadily hovering around 50-plus PRs in the queue since we lifted freeze. I don't see any incoming requests marked critical-urgent for 1.13 yet, so if they do come, we will handle them on an as-needed basis and then cherry-pick into 1.13 if needed. But other than that, I think Tide is keeping up with the backlog.
B: Why we're having all of these random storage test failures has to do with the tests being based on waiting for an event message rather than testing for successful deployment. That's a problem when you deploy a driver and then immediately try to schedule a pod, because if you're polling for events, you get a whole bunch of messages while the driver is not yet available.
B
So
as
a
result,
sometimes
the
message
that
the
video
test
is
waiting
for
never
shows
up,
and
that
would
certainly
be
consistent
with
the
behavior
that
we've
observed,
which
he
is
random.
Timeouts
like
if
you
look
look
at
GCC,
OS,
113
slow.
We
have
like
this
diagonal
line
of
random
timeouts,
mostly
on
storage
tests
I'm
going
across
the
line.
So making our tests flake less at this point would require two things. Number one: figuring out what's up with GCE and networking. Number two would honestly be refactoring across our test suite, at least the storage tests and possibly many other tests as well, to have them not depend entirely on event messages to determine success or failure.
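To make the contrast B describes concrete, here is a minimal sketch of the two wait styles, assuming the pre-context client-go call signatures in use around that time; the helper names, namespaces, and timeouts are hypothetical illustrations, not the actual e2e framework helpers:

```go
package flakesketch

import (
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForEventMessage is the fragile pattern: block until an event
// containing a particular message appears. If the expected message races
// with "driver not available" retries and is never recorded, the test
// times out even though the pod eventually came up fine.
func waitForEventMessage(c kubernetes.Interface, ns, substr string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		events, err := c.CoreV1().Events(ns).List(metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, ev := range events.Items {
			if strings.Contains(ev.Message, substr) {
				return true, nil
			}
		}
		return false, nil
	})
}

// waitForPodRunning is the robust alternative: poll the pod's observed
// status, which converges regardless of which intermediate events were
// recorded along the way.
func waitForPodRunning(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}
```

The second style converges on the object's observed state, so it does not matter which intermediate events were or were not recorded while the driver was still coming up.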
That refactoring is obviously not something that's happening in the next three days. So basically where we're at is: we have no unexplained failures, and we have lots of flakiness. The flakiness is all known. It's very worrisome, but if we were to hold up the release to reduce flakiness, we would be looking at holding it up until January.
C: So I ended up not putting it in those release notes. I'm going to put it in the release notes of the external sidecar container, because it's not actually an issue with any of the Kubernetes code, and the sidecar containers are released on a separate cadence, meaning you could have a 1.13.0 release with a sidecar container where this is fixed and it doesn't actually apply. We're going to capture it in the release notes of the sidecar container.
E: I know you have been paying a lot of attention to the flakes, and you had a list of all the individual tests that were seeing flakes, at least for the storage ones. Is there a list that includes all of the tests that have flaked, so that we can sweep those to see if they are also making bad assumptions about event handling?
B
I
was
actually
hold
on.
I
hid
a
spreadsheet
for
chronic
flakes
eye.
B
It's
not
updated
since
we
started
having
the
storage
issues.
Okay,
so
it
is
not
by
any
means
complete,
but
these
are
all
things.
Actually.
It
was
more
composed
looking
at.
Let's
keep
track
of
the
things
that
we
already
know
our
flakes
so
that
when
they
reappear,
we
don't
assume
their
new
failure.
E: Sure. I would like to make sure that the SIG owners for those tests know that there are issues with them and, if possible, to start addressing the test changes in 1.14 before other code changes are made. That would give us more confidence that it was just this event issue. I know the storage team is all over this; I didn't know if there were other SIGs that we needed to sweep and notify as well, ideally now that 1.14 changes can go in.
B: Michelle just came up with this theory this morning, so no, nobody has looked at anything else yet. But it would explain a whole bunch of the flakes that we've seen increasing in frequency over time, because obviously the more tests and the more stuff we add to the cluster, the worse the event-message problems get. So there are two things to actually do as post-release follow-up for 1.14.
Number one would be to complete this list of known frequent flakes and then look at those individual tests to see if they have event-messaging issues. The other thing, which I think is what we've learned from CSI here, is: if we're going to take a major in-tree component of Kubernetes, move it to a plugin architecture, take that to GA, and move a whole bunch of the main tests over to that plugin architecture...
F: Yeah, they are both about networking and IPVS. This one fixes NodePort services in clusters that use IPVS instead of iptables and are IPv6-only, which is quite a rare combination; realistically, I don't think any production user will jump straight to 1.13.0 with that kind of setup, so I think it can be merged for a patch release.
A: Either way, I held these PRs just in case, so that I can get some reply. But I believe even if it merged, it would need to be cherry-picked into 1.13 at this point, so I don't think my hold would actually affect 1.13.0; I just put a hold on because I wanted to get more clarity, since it was marked for 1.13.
A: Okay, the next item here is release notes. Mike couldn't make it to the meeting, but it looks like the final draft, or at least a copy of it, landed yesterday. As mentioned, we had a bunch of comments and reviews; those were resolved and merged late yesterday, so mostly it's LGTM'd. Once we cut our RC2 tomorrow, we should be able to see the latest changelog there.
A
Our
release
notes
team
will
continue
to
make
minor
edits
until
we
release
on
Monday
and
from
comm
side
I
think
everything
is
on
track.
We
are
having
a
couple
of
media
messaging
meetups
today
today,
AM
tomorrow
for
some
initial
media
posts
and
hopefully
more
to
come
after
the
release
next
week.
So
that's
it
from
me
anything
else.
Anybody
wants
to
break
up
questions.