From YouTube: Kubernetes 1.13 Release Team Meeting 20181127
A: And the usual disclaimer: this is being recorded and will go up on YouTube shortly, so please be mindful of what you say. With that, let's get started. Timeline status: I moved it to yellow, mainly because we are pursuing a few fixes and CI is still not clear on the fixes that went in yesterday.

So I have moved the status back to yellow, but we will discuss it in detail. Mainly we are tracking all the storage breakages related to the CSI issue. As for the subpath failures, I'll hold off on talking about those until we get to the CI section, because that's where we'll mostly be focusing. Apart from that, we are tracking a fix for an actual bug around the node plugin.

Let me see, this is 7142, around kubelet removing the node ID annotation. After investigation yesterday, SIG Storage is putting a fix in for that, which is still in review. I also saw there was an end-to-end test being added for this. Do you have any insight into whether the test is also going to be ready to merge sometime today?
A: I mean, I know that's not a clear trend, but if this is the only outstanding issue as of tomorrow, it may be okay to lift the code freeze, and if it's still not something that can get wrangled in, then we'll probably have to make an exception. But I'll make a note of it.
A: Yeah, I'm hoping this doesn't affect any other CI signal, but I'll also keep an eye on that. As for code freeze, it remains, hopefully, as per schedule tomorrow. There's no early lift at this point, so we are still hoping to get this done by tomorrow evening, but we will see once we discuss CI signal. That said, let's move on to CI signal. Josh, do you want to call out the items there? I just added the notes.
D: At this point, we don't have any outright failing tests. What we have is a whole bunch of flakes, but it's enough flakes that there is never any time at which all tests are passing. Something is always flaking, and this is sort of an assortment between new flakes, flakes that have appeared during the 1.13 cycle, and flakes that are older but are happening more often now. I'm having some difficulty getting attention from SIGs other than Storage, and I suspect that's because a bunch of people are at AWS re:Invent.

In terms of sig-release-1.13-blocking, it is technically passing. There's a bunch of flaky things, but none of them are newly flaky; they're all things that have been flaky chronically. Upgrade is, as usual, my main concern: the new master upgrade cluster parallel job is failing.
A: I remember creating something similar to this, but it was in SIG Network; it was a CI signal issue, so we might have an existing issue for the same thing. So going back to the storage-specific ones there in the list: the flaky CSI subpath stuff. As of this morning, I have updated the issue there. We are still seeing a bunch of flakes, mainly in the slow jobs. At this point I saw a follow-up comment saying it's due to the external sidecar issue, so I would like to, you know, get a consensus.
B: My understanding is that the issue is caused by a delay in the sidecar container getting ready. I've opened up an issue against the sidecar container to look at that. That delay is consistently almost exactly a minute, and the tests basically are not waiting long enough for that. So sometimes they are, sometimes they aren't: if they wait long enough, they're green; if they don't wait long enough, they end up being red.
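For illustration only, here is a minimal sketch of the kind of readiness wait being described: a poll loop whose timeout decides green versus red against a roughly one-minute startup delay. This is not the actual e2e test code; the function name and the simulated delay are assumptions.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForReady polls check every interval until it reports ready or the
// timeout expires, mirroring the e2e pattern of waiting on a container.
func waitForReady(ctx context.Context, interval, timeout time.Duration, check func() (bool, error)) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for sidecar to become ready")
		case <-ticker.C:
		}
	}
}

func main() {
	start := time.Now()
	// Hypothetical sidecar that only becomes ready after ~65s of startup delay.
	check := func() (bool, error) { return time.Since(start) > 65*time.Second, nil }
	// A 60s timeout flakes against a ~1 minute delay; 2 minutes gives headroom.
	if err := waitForReady(context.Background(), 5*time.Second, 2*time.Minute, check); err != nil {
		fmt.Println("flake:", err)
		return
	}
	fmt.Println("sidecar ready after", time.Since(start).Round(time.Second))
}
```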
A: I didn't cross-check the latest failures to see if it was all hostPath. It looks like it is; at least the upgrade ones are only the hostPath driver. Okay, so at this point, let's go in order: are we then going to just move it to long-term, or to 1.14 importance? I think...
B: That's fine, and in the meantime we can mark it as flaky if you want to reduce the noise, and we'll continue to investigate it as well. If there is a fix to be made, it's going to be external to kubernetes/kubernetes; the only internal change would be updating the end-to-end test to pick up a new version of the container. We can do that if we find a fix within the 1.13 timeframe, but otherwise I wouldn't block 1.13 on this issue.
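As background on the "mark it as flaky" option: Kubernetes e2e tests conventionally carry tags such as [Flaky] directly in their names, and blocking CI jobs typically exclude them with a skip regex (for example --ginkgo.skip='\[Flaky\]'). A minimal sketch of that filtering follows; the test names here are illustrative, not the real suite.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Blocking jobs pass a skip regex to the test runner; this mimics that filter.
	skip := regexp.MustCompile(`\[Flaky\]`)

	// Illustrative test names; the tag lives directly in the test's name.
	tests := []string{
		"CSI plugin test using CSI driver: hostPath should provision storage",
		"CSI plugin test using CSI driver: hostPath should provision storage [Flaky]",
	}
	for _, name := range tests {
		if skip.MatchString(name) {
			fmt.Println("skip:", name)
			continue
		}
		fmt.Println("run: ", name)
	}
}
```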
D: That's fine. I mean, the questions for these are always: do we know why it's failing, is this something users will experience, and is it new? And the answer is yes, yes, and no. So this is one of those things where I kind of feel this should be in the release notes somewhere, just in case somebody is using these drivers for the first time, which they very well might be since CSI is going 1.0, and just say, you know, these drivers are known to take...

B: That makes sense. I can do that.
A: I'll capture this in the minutes: either increasing the timeout, or tagging the tests as flaky as well. So the next one was the detach volume issue.
A: So at this point it is blocking 1.13, because we don't know the exact cause. Michelle, do you have issues open for these two? I haven't opened them yet. Okay, and then David was looking at the other subpath issue. Okay, so those two are still failing; I mean, they're still flaky, and I see a whole bunch of those, in fact, in the master-blocking suite. And then there was a flexmount issue.
A: So I'll watch out for these. Oh, we don't have an issue open for flexmount, Josh; can we just track that to see where it lands? Thanks. Thanks, Michelle, for keeping us updated. And then the next few flakes, yeah: the network affinity issue. There's absolutely no response from the SIG there, but going back in triage I did see that this happened at least during 1.12, if not even earlier.
D: October 27th is when it got much more flaky. And by the way, while we've been talking, I looked into the kubectl port-forwarding one: not a new flake. In fact, it's actually less frequent than it was two months ago. Okay, not a blocker, although I am going to try to get SIG CLI to address it, since there is no open issue around it.
A: Then the last one was the soak job. I know that we had a run yesterday after it was kicked off again, but it was timing out; by the looks of it, it always times out. We've only seen a few fully green runs on that job. Was there a specific issue you were trying to verify with that job?
H: I can, let me bring up the list. Everything that was up for merge since yesterday could merge: lots of small fixes here and there. One new one, fixing a feature gate inconsistency between root-ca-cert-publisher cluster roles; it's a minor fix in cluster roles and it might have merged already. The other one is the fix for the race condition for the storage issue; there were some comments by Jordan, and it should merge by end of day today. The other one won't be for 1.13, the tests. And the last one is a sanity fix for docs; it's just one character and it's cosmetic.
A: And I'm, like, super nervous about changing it at this point, especially if we don't know, if we've never done this for a few releases now. I would like to know what's the urgency to update it. It's good to have the latest version of everything, but I'm nervous about changing it at this point. I would like to hold it until we get a clear signal on the other failures and flakes for 1.13, and then probably update this later. Okay.
K: The other thing that I had to mention real quick: I noticed (I linked it in the document) that we have some docs saying our post-release merge requirements are not the same as the normal dev requirements, which is not something that we've implemented. Or rather, it's something that is possible with Tide, but it's not something that we've ever done before. So I'm wondering if this was intended to be a new policy that we're doing this release.
K: Well, the main thing to keep in mind is that whatever we're doing for this branch can't go on for all eternity, because we can't have separate merge requirements for every release branch past the end of the release cycle. Otherwise we'd have more and more distinct sets of merge requirements, which would eat up our API token usage quota.
J: It wasn't enforced. I mean, that's fine then. I think this is partly kind of the aspirational intent that the stable branches are supposed to only get things that are critically urgent: bug fixes, or things related to a failing test, to ensure upgrade is able to succeed. But if there hasn't been enforcement of that, then that is what it is. Okay, well.
G: I think traffic is usually so low on those branches, and everything that goes into them has to be manually backported by the release branch manager anyway, that the enforcement (talking to the bot and putting SIG labels on something) wasn't really giving a lot of value there. Okay, and it helps with our API usage, yeah.
F: Yeah, I'll keep it quick. I wrote a more detailed update in the doc, but basically we're making good progress on final copy edits to the release notes doc, and I'm going to continue doing that. The plan right now is to copy the completed notes into k/k either Thursday or Friday, so that they're there for the release cut on Monday. So that's the plan and timeline right now, and so far so good.