From YouTube: Kubernetes 1.13 Release Team Meeting 20181121
A: If you have not already done so, and the usual disclaimer: this is being recorded and will go up on YouTube shortly afterwards, so please be mindful of what you say, as you always do. Let's get on with the agenda. The status of the release is still yellow, and we will talk about the main items on the agenda right now. Mainly, we had an interesting turn of events after yesterday's burndown. This is regarding the CSI 1.0 backwards compatibility issue.
A: So, if you've been following the burndown for the last couple of days: we enabled CSI 1.0 in 1.13 on Friday. We did see the 1.12 upgrade tests fail, mainly because CSI 1.0 is not backwards compatible with previous versions, and since Kubernetes 1.13, while it adds support for 1.0, is dropping support for the previous versions, clusters that are entirely on 1.12 fail the upgrade when trying to go to 1.13.
B: So, yes, you summarized it well. The issue was that 0.3 support was dropped. Feedback has been resoundingly "this is bad": you don't want to require drivers to be upgraded before you do an upgrade to 1.13. So we investigated a couple of options. One is to have some sort of graceful upgrade path, which would basically make sure that there is no disruption during the upgrade.
B: So it is disruptive. The alternative that we explored was making the upgrade non-disruptive but still requiring user intervention. This would basically mean that you deploy a half-and-half solution where, before you upgrade, you deploy a 1.0 driver alongside the 0.3 daemon set, and this allows a basically non-disruptive upgrade path, but it still requires user intervention.
B: I am currently running the battery of tests. I think we have sufficient tests on CSI, both for the older 0.x CSI and the new 1.0, that I would be fairly confident in the change, but I'm running the tests right now to verify that. If we accept this PR, then this becomes a non-issue, because Kubernetes 1.13 will support both CSI 1.0 and 0.x concurrently, so it doesn't matter when you do your driver upgrade.
B: This is the only change. Basically, what changed is that there is a generated client that we pull in from the container storage interface; we pulled the v1 client and we're using the v1 client. What this PR does is also pick up a 0.x client and use it in parallel, so it just adds a bunch of if-else and uses version negotiation to figure out which version the driver is: it will speak 0.3 if it's talking to a 0.3 driver, and 1.0 if it's talking to a 1.0 driver.
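To illustrate the shape of the change being described, here is a minimal, hypothetical sketch of that kind of if-else version negotiation between a 0.x client and a 1.0 client. The type and method names are illustrative assumptions, not the actual kubelet code.

```go
package csiversion

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the generated CSI gRPC clients; the real
// change vendors both the 0.x and the 1.0 generated clients.
type csiV0Client interface {
	NodePublishVolumeV0(volumeID string) error
}

type csiV1Client interface {
	NodePublishVolume(volumeID string) error
}

// csiDriverClient wraps whichever client version negotiation selected.
type csiDriverClient struct {
	v0 csiV0Client // set when the driver only speaks CSI 0.x
	v1 csiV1Client // set when the driver speaks CSI 1.0
}

// negotiate picks the newest spec version both sides understand.
// driverVersions would come from probing the driver; it is passed in
// here purely for illustration.
func negotiate(driverVersions []string, v0 csiV0Client, v1 csiV1Client) (*csiDriverClient, error) {
	for _, v := range driverVersions {
		if v == "1.0" {
			return &csiDriverClient{v1: v1}, nil
		}
	}
	for _, v := range driverVersions {
		if v == "0.3" || v == "0.2" {
			return &csiDriverClient{v0: v0}, nil
		}
	}
	return nil, errors.New("no mutually supported CSI version")
}

// NodePublishVolume is the "bunch of if-else": speak 1.0 to a 1.0
// driver, and 0.x to a 0.x driver.
func (c *csiDriverClient) NodePublishVolume(volumeID string) error {
	if c.v1 != nil {
		return c.v1.NodePublishVolume(volumeID)
	}
	if c.v0 != nil {
		return c.v0.NodePublishVolumeV0(volumeID)
	}
	return fmt.Errorf("no CSI client negotiated for volume %s", volumeID)
}
```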
D: Yeah, I'm not sure I have all the context. I think what I'm ascertaining is that there's a risky PR that we're concerned makes things worse. Do we have sufficient time to revert it, with Saad saying "I'll babysit it over the weekend, and if we're happy, great; if we're not happy, I'll revert it after the weekend"? Is that right, or what am I missing here?
C: The issue with backwards compatibility is that this is an issue for users: if we don't have at least one release of overlap in backwards compatibility that allows an upgrade to be performed, then we're basically putting some users in a situation where they can't upgrade from 1.12.
B: Yes, I agree, and it's a pretty bad place to be. Two points, though. One is that, of the CSI 0.x drivers I'm aware of, none of them are production; I don't think anyone is using a 0.x driver in production at this point. And two is that it would only affect the workloads that depend on those CSI drivers, nothing else in the cluster.
D: Can you help me understand what the impact of rolling back or reverting would be? What is the difference between calling it beta and calling it GA, versus rolling back this PR? I'm trying to understand whether it would be more than rolling back this one PR; it sounds like there would be a whole bunch more that would need to be reverted.
B: So if we were to roll back the feature for this release, that would involve rolling back a couple of different things. One is that CSI 1.0 would not go into 1.13; we'd have to roll it back to CSI 0.x. And then there are associated changes in the kubelet volume plugin manager that we would also have to roll back, and potentially some other changes that I can't think of off the top of my head.
C: No, I mean the affected driver. Okay: I have a CSI 0.3 driver providing storage for something important, and I upgrade my control plane to 1.13. Now, you've already said we can't deploy new nodes that need that particular storage driver until we upgrade the driver, without the compatibility change that you have.
B: Generally, when an upgrade happens: when the master is upgraded, nothing will be affected. When the nodes are upgraded, if you do nothing, then one by one, as the nodes are upgraded, those pods are going to get drained and scheduled to a different node. If they get scheduled on an old 1.12 node, everything will work; if they get scheduled on a 1.13 node, they're going to remain pending, and you're going to have to wait until the upgrade completes for them to work. The other alternative solution we were...
B: Makes sense. So I think the two action items for me, it sounds like, in addition to seeing this PR closed by end of day today, are: one, seek out user feedback on kubernetes-users, and two, continue pursuing the graceful upgrade with the manual user intervention option. Yeah.
A: Yeah, okay. See, it's never done until it's done. I was just thinking about how early we could have caught this, because this all emerged on Friday, and sure enough it just started showing the signals from Saturday. Is it just from the KEP, or from the design process itself? I don't know. I'm going to add an item to the retro just to see what that missing link is that should have brought this into discussion earlier, before it went into implementation and GA.
A: Thank you. Yeah, Tim says: "I agree. Prior practice has been to assure there's a forward path even if a user is on beta code; we don't just break users because they chose to engage and use beta features early." Sure, and yeah, thanks a lot to Josh and Jordan for actually lighting the right fires yesterday and the day before.
A: And so this leads on to the beta 2 discussion. I would like to hold beta 2 until things land and there's a clear signal; I would like the second beta to have the final, stable state of CSI if we are taking this change. So I had a chat with Doug earlier, and he was going to be available over the weekend. Doug, please let us know if that's not the case, but I would like to stall the beta 2 cut today.
A: I'll have to coordinate. I'll have to let Cluster Lifecycle know, because I know they're testing kubeadm pretty actively and they're waiting for a new tag. Worst case, I'll just have to coordinate with them. Is there a way to just tag something and reach out to them, instead of cutting another one?
A: Other than that, yeah, there are still about eight pending PRs in k/k. Most of them are stalled on presubmit failures. I did see a few integration failures yesterday; they just passed on retries. And there are a few GKE failures hitting a couple of those PRs; I have pinged the on-call here just to see if we can get help, and we are also seeing GKE failures in CI, so I'm trying to get some help there to unblock those PRs. We can talk more in the issues section, and about kubeadm.
A: Just infra. Cool.
C: We've heard a lot about it already; we don't have any new blockers. We continue to have a lot of noise from flakes. If you saw the CI signal report yesterday, I finally did an approximation of the math. I won't claim to have done the advanced prob-and-stat math, but I did what I consider a good-enough approximation of the probability of getting flakes on blockers, and it's unfortunately high.
C: That is, basically, we can expect, about once every five days, to get three different flakes in blocking or upgrade tests all at once, which means there's a pretty good chance that three or four days out from the release we're going to see that. So I'm trying to go through and document the individual tests that flake, so we know which things are a probable flake based on tracking.
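As a rough illustration of the kind of back-of-the-envelope estimate being described (the job count, per-run flake rate, and runs-per-day figure below are hypothetical, not the real release-informing numbers):

```go
package main

import (
	"fmt"
	"math"
)

// binomPMF returns P(X = k) for X ~ Binomial(n, p).
func binomPMF(n, k int, p float64) float64 {
	c := 1.0 // n choose k, built up incrementally
	for i := 0; i < k; i++ {
		c = c * float64(n-i) / float64(i+1)
	}
	return c * math.Pow(p, float64(k)) * math.Pow(1-p, float64(n-k))
}

func main() {
	// Hypothetical inputs: 20 blocking/upgrade jobs, each flaking on
	// roughly 5% of runs, independently.
	n, p := 20, 0.05

	// P(at least one flake on a given run of the board): 1 - (1-p)^n.
	atLeastOne := 1 - math.Pow(1-p, float64(n))

	// P(three or more different jobs flaking on the same run).
	atLeastThree := 1.0
	for k := 0; k < 3; k++ {
		atLeastThree -= binomPMF(n, k, p)
	}

	fmt.Printf("P(>=1 flake per run)  ~ %.0f%%\n", atLeastOne*100)   // ~64%
	fmt.Printf("P(>=3 flakes per run) ~ %.1f%%\n", atLeastThree*100) // ~7.5%
	// At roughly 3 runs a day, three simultaneous flakes would show up
	// about every 1/(0.075*3) ~ 4-5 days, which is the shape of the
	// concern raised above.
}
```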
C: Well, and then, after we do the release, I'm going to go through the big shuffle of moving stuff to informing, you know, submitting a PR for moving stuff to informing and all that, and that'll be partly based on the flake history. Because, from my perspective, any test that's flaking 25% or more of the time should really not be in blocking.
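A minimal sketch of how that 25% rule of thumb could be applied to a test's recent run history; the boolean result format and the helper names are illustrative assumptions, not an existing tool:

```go
package main

import "fmt"

// flakeRate returns the fraction of failed runs in a recent window of
// results, where true means the run passed.
func flakeRate(results []bool) float64 {
	if len(results) == 0 {
		return 0
	}
	failures := 0
	for _, passed := range results {
		if !passed {
			failures++
		}
	}
	return float64(failures) / float64(len(results))
}

// tooFlakyForBlocking applies the 25% rule of thumb from the meeting:
// a test failing a quarter of its recent runs shouldn't gate the release.
func tooFlakyForBlocking(results []bool) bool {
	return flakeRate(results) >= 0.25
}

func main() {
	// Hypothetical recent history for one upgrade test: 3 failures in 10 runs.
	history := []bool{true, false, true, true, false, true, true, false, true, true}
	fmt.Printf("flake rate %.0f%%, move to informing: %v\n",
		flakeRate(history)*100, tooFlakyForBlocking(history))
}
```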
G: Yeah, progress: there have been five or six PRs either closed or merged. Many others are stuck on some flakes, which we're hunting down and pushing past the tests, and they're getting merged. Most of it is on track. The riskiest thing is, of course, the CSI compatibility, which I expect should be fine. Let's track it over the next few days.
F: Yeah, so it's looking better again today; there's been a lot of activity on all the PRs. All the PRs are now mostly in review, or are ready for review. So it's looking good, but I'm a little bit concerned that Thanksgiving is coming up, so things might slow down a little bit. We'll see how things go.