From YouTube: Kubernetes 1.13 Release Team Meeting 20181126
Speaker A: We're waiting until we get to see CI signal, but the big CSI PR that was going to go in on Wednesday, to enable backward compatibility with 1.13, got merged, and the upgrade tests that were red the whole of last week turned green over the weekend. So that was one of the good things that happened, but we are seeing quite a few spikes as fallout after that. That's why it's still kind of in a yellowish-green status. As for the timeline, we are still in code freeze, and our RC1 is targeted for end of day tomorrow. As per the calendar, we would come out of code freeze at end of day 11/28, that's Wednesday. But one of the thoughts I had was, since most of the tests were passing, I was toying with the idea of lifting code freeze a day earlier, giving us more time to talk. But I would want input from Josh on CI signal before we make the call.
Speaker D: Which means that, given that failure in the video test, if we fast forward, this is also going to be a failure in sig-release-1.13-blocking. I don't know exactly what's triggering it, but we have a storage failure that's been 100% consistent, and so it really looks like something is broken. I think it's resolved, yeah.
Speaker D: GitHub went into a weird debug mode, okay. So yeah, we have subpath failures, and it looks like, as of this weekend, we have some other storage flakes that are happening. Individual flakes may only happen a third of the time, but for some jobs, like the cos slow job, some storage test is failing 100% of the time. It varies by run which storage test it is, but some storage test is always failing.
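The distinction being drawn here, a flake that fails some fraction of runs versus a test that fails on every run, can be sketched with a small hypothetical helper (this is illustrative, not actual release-team tooling):

```python
# Illustrative sketch: classify a test's recent results as passing, flaky,
# or consistently failing, mirroring the distinction above (fails ~1/3 of
# runs vs. fails 100% of runs, which looks like a real breakage).

def classify(results):
    """results: list of booleans, one per run, True = run passed."""
    if not results:
        return "no-data"
    failures = results.count(False)
    if failures == 0:
        return "passing"
    if failures == len(results):
        return "consistently-failing"  # likely a real breakage, not a flake
    return "flaky"  # intermittent failure

print(classify([True, False, True]))           # flaky
print(classify([False, False, False, False]))  # consistently-failing
```

A consistently failing storage test, as described above, would therefore be triaged as a breakage rather than waited out as a flake.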
Speaker G: There's somebody working on the subpath test failures right now; hopefully we'll have a PR by end of day for that, to clean that up. Oh god. The second issue that was shared with me here, the failing "detaching volume should not work when..." test, was actually introduced over the weekend as part of an unrelated PR. It's a new test that went in on the 23rd. I think that test is just inherently flaky, and there's another issue I think already opened for it, where the author, someone from Red Hat, is already investigating it.
Speaker D: Before we proceed onwards, honestly, particularly given that we have a couple of other things we're waiting for word on: there are new SIG Network tests that are failing. They're failing in non-blocking test jobs, but they're failing in several jobs, and so I'd like to hear from SIG Network on the significance of those failures, and then we also need to hear back from... well. This is, again, not necessarily considered blocking, but at least they do not consistently fail.
Speaker D: That's a new one, right? And then there's... did you say the soak job isn't running? Yeah, I think the affinity test is the only network one that's still open, and they just haven't had time to respond to it yet, it's new. And soak is not running, which again is a non-blocking job and wouldn't be a problem, except that there are some identified flaky issues that people were working on that are only really visible in the soak job.
Speaker D: One of the things that we're keeping our eye on, and this has actually been true for the whole course of the release: integration is flaky, apparently for a variety of reasons. Somebody just brought up a new issue with the CRD tests, and that's a potential blocker, because integration is both, you know, an integration test and a presubmit test.
Speaker D: Right, like I said, it's not that it's a new thing that's going to block 1.13 in particular; it's just that if we get a huge spike of flakes, you know, starting on Thursday or Friday, we'll end up delaying the release, because we won't be able to tell whether they're flakes or whether there's a different problem. Okay.
Speaker F: Sure, I understand you're giving that information verbally. It's just that with previous releases' burndown docs, I've been able to scroll back through previous entries and see what the issues were. Is there like a spreadsheet that I'm not seeing you updating, or are you telling me you're giving the information during your update, and the only way I can historically go back and see what happened is the recordings? Okay.
Speaker G: I think it's like three or four lines to remove it from the release; it's fairly trivial to remove. We just need to make sure that the side effects of removing it aren't catastrophic or anything that we don't expect. At the same time, if the fix is trivial, we might just move forward with the fix, or, alternatively, we might just hold off and say this is enough of an edge case that we do nothing, so it's possible there might not be any PR there. So we're investigating all three of those options. Okay.
Speaker J: Would you like me to give a quick update on what the week's plans are for wrapping up release notes? Mm-hmm, sure. So basically, Dave, a release notes shadow for 1.13 and the lead for the next release, has been awesome about leading the charge. Last week he went and wrangled most of the SIG leads to add in their major themes; a bunch of them...
Speaker J: ...you know, said there are no major themes for their SIGs, and we still have a few to chase down, but only like a handful, two or three or so. On the docket right now for the next release notes steps, sorry, I just wrote it into the... let me pull up the minutes. So next steps are: import the latest notes since the last time the doc was updated. You know, it's been like a week or two since the last import.
Speaker J: That's mostly a lot of the commits going in that enabled features or graduated features, so there's a long list of new features that just graduated; that'll fill out a few here and there. And then the content of the document is pretty much there; we just have to go through it all, organize the document so that, based on the specific content that we have this release, it kind of flows naturally, and then go through and copyedit all the notes so that they use a consistent tense.
Speaker A: I just, I see that Zod stepped out. There was some verbiage there regarding the backward compatibility not being supported, from how the CSI stuff was designed earlier. So I think I left a comment in the doc to clean that out now, considering we put the support back in. So just, if you could, yeah, if you could follow up on that, that'll be awesome. Sure.
Speaker F: I don't plan on having anything major; I'll probably have something in the release notes that's just a "hey, FYI, we're deprecating it." And then there is one of the pending docs PRs that Tim is waiting on: a migration guide about migrating from various versions, that's etcd 2 to 3. And I did not have any time to rip out the migration code.
Speaker F: So somebody could, in theory, use that, but I want to update the docs to at least reflect the state of today: to say that as of Kubernetes 1.6, etcd 3 was the standard, and then, as of Kubernetes 1.13, etcd 2 is no longer supported, but for earlier versions of Kubernetes you can still use this migrator tool to migrate from 2 to 3, if you so desire. I see Will got unmuted, though, so maybe you've got comments.
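The support statement above (etcd 3 standard as of Kubernetes 1.6, etcd 2 dropped as of 1.13) could be expressed as a small illustrative check. The function names and minor-version encoding here are hypothetical, a sketch of the stated policy rather than any real Kubernetes API:

```python
# Hypothetical sketch encoding the support statement above:
# etcd 3 has been the standard since Kubernetes 1.6, and etcd 2 is
# no longer supported as of Kubernetes 1.13.

def etcd2_supported(k8s_minor):
    """True if a Kubernetes 1.<k8s_minor> cluster still supports etcd 2."""
    return k8s_minor < 13

def etcd3_standard(k8s_minor):
    """True if etcd 3 is the standard backend for Kubernetes 1.<k8s_minor>."""
    return k8s_minor >= 6

print(etcd2_supported(12))  # True: can still migrate 2 -> 3 before 1.13
print(etcd2_supported(13))  # False: etcd 2 dropped in 1.13
```

In other words, per the transcript, a cluster would need to complete the 2-to-3 migration before upgrading to 1.13.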
Speaker L: Yeah, I know they've been reviewed a few times, so just a last LGTM would be great.