From YouTube: Kubernetes SIG Storage Meeting 2021-01-14
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 14 January 2021
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.oneaxzv70p35
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: Okay, today is January 14, 2021. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. On the agenda today we are going to go through the planning session for the next Kubernetes release, version 1.21, and if there's anything that you want to discuss, please feel free to add it to the agenda. The link to the agenda document is in the invite. So, for the planning session: we now have dates for the 1.21 release.

A: The upcoming deadline is the enhancement freeze. This means, if you have a feature that you're working on for the 1.21 release, it needs to be declared before February 9th, and you need to have the design, the KEP, completed, merged, and approved before then. There are some changes to this KEP approval process.

A: Previously, you could just open an issue, and as long as it was active, the release team would ping you on the issue and ask you whether this should be in the release or not. They're no longer doing that. Instead, the release team wants us to register with them which KEPs are going into the next release.

A: Please make sure that if you have a feature that's going into 1.21, it has been added to this spreadsheet; otherwise it will not be tracked by the release team. The only people who have write access to that spreadsheet are the leads, so me, Xing, Michelle, and Jan. So just ping one of us, and we can make sure that your KEP is captured in that sheet.
A: So that's the first change. The second change is that you must also get a production readiness review completed before the enhancement freeze date, and there is a set of production readiness reviewers.

A: This production readiness review process and date is still kind of being debated; it's possible that they may separate the required date for the production readiness review from the KEP review. So it's possible that might get pushed back, but I wouldn't count on it. I would plan to make sure that the production readiness review is done by this February 9th date.
A: Okay, so the first item we have is delegating fsGroup to the CSI driver instead of the kubelet. This is an alpha feature. Hemant or Michelle, any updates on this?

B: I don't have a new update, but I'll be opening the CSI change today, actually.
B: We discussed this, right? We discussed it, and we have an issue open where we want to pass the GID, and we have a CSI call, I think next week. So hopefully I'll have the proposal out by then. Okay.
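For context on what "delegating fsGroup" means here: today the kubelet recursively applies the pod's fsGroup to a volume, and the `fsGroupPolicy` field on the CSIDriver object (beta at the time) controls whether the kubelet does that at all. A minimal sketch of that existing knob, with a hypothetical driver name; this is the field the new delegation design builds on, not the delegation capability itself:

```yaml
# Sketch of the existing CSIDriver fsGroupPolicy setting (not the new
# delegation capability under discussion). With "File", the kubelet
# applies the pod's fsGroup by changing ownership/permissions on the
# volume; "None" skips that entirely.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: example.csi.vendor.io   # hypothetical driver name
spec:
  fsGroupPolicy: File           # File | ReadWriteOnceWithFSType | None
```

The proposal being discussed would pass the group ID down to the driver so the driver itself can apply ownership, instead of the kubelet doing the recursive walk.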
A: Okay, so let's talk about CSI online/offline resizing, volume expansion. We're just fixing bugs for this release. Hemant, any updates on that?

B: Probably not, except that we need some volunteers to help out with some of the features, some of the bugs, sorry. Previously KK volunteered, but we haven't had any progress on that. So if anybody wants to pick it up, then ping me offline.
B: We have two or three issues that we want to fix, at least in this release: the resize issue when a PVC is deleted, and then the ReadWriteMany case. I'll be able to review them, but I'm not sure if I will have enough time to fix them myself; I'll try. So that's the status on those issues right now. Are there any volunteers for this?
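For listeners unfamiliar with the feature being discussed, volume expansion is driven entirely from the PVC. A minimal sketch, with hypothetical names, of the two pieces involved:

```yaml
# Expansion must be enabled on the StorageClass...
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                    # hypothetical class name
provisioner: example.csi.vendor.io
allowVolumeExpansion: true
---
# ...and is then triggered by raising the PVC's storage request.
# Shrinking is not supported, which is part of why failed or stuck
# resizes need the recovery work discussed in this meeting.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast
  resources:
    requests:
      storage: 20Gi             # edited up from e.g. 10Gi to request a resize
```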
A: Yeah, this is a good opportunity to slowly start getting involved with the SIG, if you've been sitting on the sidelines wondering how to get involved. This is an important set of bug fixes for a key feature, and you have the guidance of someone who is a veteran here, Hemant. So, if you're interested, please speak up now, or feel free to message Hemant or me or any of the leads after the meeting.

A: Okay, so yeah, if you're interested, feel free to reach out afterwards; we'd be happy to have you work on this. The next item is recovering from resize failures.

B: This is a design item again for 1.21, because it's still being tracked. So I don't have any update, except that we will work on the design for this.
A: Sounds good. The next item is SELinux recursive permission handling, Jan.

A: The next item is CSI in-tree read-only handling. I did not reach out to Humble yesterday, so I don't have a status update. Does anyone else know?

E: I know that Humble has been out of the office for quite some time. — Okay, yeah, I hope nothing bad is happening.
A: So Chang, if you could send a message, an email, to Humble letting him know. And again, feel free to go ahead and pick this up.

A: Preliminarily, I'll switch this and put Humble as the API reviewer.

D: That one, he already has a PR that is almost ready, right? So let me just pick up that PR. Is the link there? Yeah.

F: Yeah, I'll definitely follow up with y'all after I ramp up on the context.
A: Okay. The next item is issues related to unmounting volumes or mount points. Let's see, the last update here is that Andy's fixes and another fix got reverted due to some issues that need to be resolved. Any further update on this?
H: No. I think, basically, one of the fixes revealed a problem that's a little fundamental, so we potentially need to find an owner to try to look into it.

A: Do you want to tackle this in the 1.21 time frame, or delay it to the next quarter?

A: This is moving to beta this quarter. Patrick is driving this. I don't know if Patrick's on the call.

H: Yeah, I don't think he's here, but I don't think there's much of an update from last time.
A: The next item is the PVC inline ephemeral volumes; it's the original ephemeral volumes for CSI drivers, bringing that in line with the new ephemeral volumes. No, sorry, this is the new ephemeral volumes, excuse me; this is what Patrick is working on.
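For context, the "new" ephemeral volumes here are the generic ephemeral volumes, where the pod spec embeds a PVC template and the resulting PVC lives and dies with the pod. A minimal sketch, with hypothetical names:

```yaml
# Generic ephemeral volume: Kubernetes creates a PVC from this template
# when the pod is created and deletes it when the pod goes away.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-example               # hypothetical
spec:
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.2     # placeholder image
      volumeMounts:
        - mountPath: /scratch
          name: scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi
```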
A: Okay, we're going to mark that as no update. Then we have a second ephemeral volume item, which is the old CSI ephemeral volumes, and the ask here is to bring that API in line with the new ephemeral volumes; that's line number nine. So we said Matt Cary could potentially work on this, yeah?

G: Can you hear me? Okay, great. Okay, good, yeah. So this is something that we're interested in, but probably not for a release or two. We have some new customers on GKE here who are using ephemeral volumes in new ways, and we kind of feel like we want to evaluate what that usage looks like before we can have much of an opinion on how to do this. So I think the TL;DR is that I'm happy to work on this, but it's not going to happen for probably a release or two.

G: So if there's somebody who feels like this should happen sooner, I'm happy to cede it or work with them or something, but just from our time frame, this probably won't make much progress until the summer.
A: So if anyone on the call is concerned about delaying CSI ephemeral volumes for another release or two, then please reach out, and we can kind of hand this off from Matt Cary to you; otherwise we'll plan to have Matt and team work on this in a release or two. — Yep, exactly, thanks.

A: Okay, the next item is spreading over failure domains, which is a design item for the quarter, and the related volume group work. Xing, do you want to talk about these two?

D: Yeah, so I think on the spreading one I haven't made much progress, because that depends on the next one; we're still trying to wrap up the design for volume groups. I need to update the KEP and then I want to schedule another review meeting. So I think the target in this quarter is still design, trying to move the design along.

A: And for design we've got plenty of time, so that sounds good to me. The next item is the CSI out-of-tree move for the iSCSI driver, fit and finish. So this is the CSI iSCSI driver.
H: I have not had a chance to follow up with folks, but it's on my plate too.

I: Yes, so for the NFS provisioner, things are set up now. I have a flaky build that I need to fix, but things are good. For the NFS client provisioner, there's just one more PR to be merged on the test infra; with that, I think we are all set. — Nice.

A: We were unsure who's going to work on this. I think Mike was working on this in the last quarter; it's unclear if he's going to continue to work on it. I don't know if he's on the call. Doesn't look like it.
A: If anyone else is interested in picking this up, this would be a fun design project. The TL;DR is that we have a number of resources that are namespaced, like volumes and snapshots, and they're not just namespaced: they have a namespaced component and a non-namespaced component that are tied together pretty tightly. So moving those objects across namespaces is a rather difficult manual process, and if we had a generic way to automate this, that would be very nice.

A: So, if you're interested in working on a feature, here's a feature you can work on, and for this quarter it would just be design: it would be coming up with a KEP, a design doc, that says "here's how we think it should work," and getting approval on that. All right, moving on, we have CSI volume health, moving that to beta. Any update on this?
D: So yeah, I will need to update the KEP, put it into the new format, and then also do the production readiness review.

C: Yeah, so we've been discussing this on our regular Tuesday meetings. The latest update is that I have finished doing a code refactor, a split of the reusable parts and the sample parts, and I'm going to be pushing that to my repo soon. Then we'll go over that code at our next Tuesday meeting, and then I'm probably going to turn my attention to getting the KEP updated and approved for 1.21, because that's the first deadline that I'm going to face.
A: All right, thank you for that update. And if anyone's interested in following this design more closely, there are weekly meetings on Tuesdays; if you're interested, take a look at the SIG calendar. You should be able to see an invite, 10 a.m. Pacific time, for the volume populator meeting; feel free to copy that over to your own calendar and attend. Next up is the object storage API, COSI. Anyone on the COSI team interested in giving an update?

J: Yeah, basically what we are doing is, after the break, we sped up our development. We are at a point very close to integration. We did some local integration of the components for a simple positive case of creating buckets and the credentials.
J: We ran into a small issue that requires, hopefully, a small API change on the gRPC objects, so that will be discussed today in the community meeting so that we can go ahead. Apart from that, documentation of the project is very important to us, because it will help us onboard new contributors to the project. So we had a few PRs on the documentation side.

D: Srini, I think you guys also need to update the KEP; you need to do this.

J: That's exactly true, and we are waiting for the discussion in the meeting today, after this meeting, so that we can update the KEP on the API objects and we can continue with our integration. We're pretty close to finishing up the integration for the first phase; then we can move on and hopefully do the API review after that.
A: Yeah, so for everyone else, this is a pretty ambitious new project. The hope here is to come up with a standard like we have with CSI: CSI is focused on file and block, and COSI, the Container Object Storage Interface, is trying to get a similar standard going for object storage.
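To make the claim-style idea concrete: COSI's early design mirrors the PVC/PV pattern for buckets. The resource names below are from the draft v1alpha1 API and were still in flux at the time of this meeting, so treat this purely as an illustrative sketch:

```yaml
# Illustrative sketch of the claim-style object storage API COSI was
# designing (resource and group names were still being debated).
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketRequest                 # namespaced "claim" for a bucket
metadata:
  name: my-bucket-request
  namespace: default
spec:
  bucketClassName: example-class    # hypothetical class, analogous to a StorageClass
```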
A: It would be focused on the data path... not necessarily, sorry: it would be focused on the control path, not the data path. And so, if you're interested at all in that effort, there are multiple weekly meetings for this. There's kind of a design meeting...

A: ...immediately after this meeting on the same Zoom, and then there is a kind of coding or working meeting on Monday mornings as well. If you're interested in either of those meetings, again, they are on the SIG Storage calendar, and you can also reach out to Srini or Jeff or Sid, or anybody working on that project, if you're interested in getting involved. I think they also have a Slack channel as well. All right, moving on, the next item is changed block tracking. Xing, do you want to talk about this one?
D: Yeah, so we started working on the design for this in the data protection working group. The person looking at it... I don't see him on the call. I think last time, which was before the holiday break, he put up a design in a Google doc. There are still some design challenges, like how to come up with a common design that works for every vendor. So I think there are still things that we need to sort out, but we'll try to make some progress on the design in this quarter.

C: And Xing, as I understand it, working groups don't own things, right? The working group is for SIG Storage and SIG Apps, and anyone else who is interested, to get together and talk. But, like, we...

D: Don't own code. So we don't own code, but...

C: So if that particular design you're talking about, like the changed block tracking, turns into a deliverable, that would go to one of the SIGs.
A: So yeah, for folks who are unfamiliar, SIG Storage and SIG Apps combined have a new working group called the Data Protection Working Group, and it's very well attended. They have weekly meetings, and they discuss issues related to data protection, everything from volume snapshots to volume backups and, in this case, how to handle changed block tracking. So if you're interested in any of those designs, attending the Data Protection Working Group would be a good first step, and you can also take a look at the designs that are being laid out there.

K: Well, I just wanted to interject quickly here. My name is Ed Reed; I'm new to this community. I work for Microsoft.
K: We discussed the CBT design a little bit in the last meeting before the end of last year, and I have some additional information about the capabilities for Azure with managed disks and snapshots that I would like to feed into that group. Xing, how would I do that?

D: Yeah, yes, please. I think we actually need some information for Azure, because we couldn't find the API that gave us changed blocks.

K: Right, I have that information for you. — Oh, awesome, I'll follow up with you after this meeting, then.

D: Yeah, we actually have one-off meetings just for things like this, because otherwise, you know, it may not fit into that meeting.

A: That sounds good, and if you could add that meeting to the SIG calendar, that'd be great.
A: Oh, okay, I'll make sure of that. Yeah, perfect, thank you. The next item is the new ReadWriteOncePod access mode; we've got Chris from our team working on this. This is a long-running issue. If you have implemented a CSI driver, I think one of the things that you notice is the mismatch between the access modes that Kubernetes allows and the access modes that CSI allows, and what further complicates it...
A: Yeah, it's bad, it's confusing, and we need to fix this, and the person tasked with this is Chris. I don't think Chris is on the line, but that's a little bit of the background on what's going on. The goal for this quarter will be to come up with a design that we all agree on, and then to move forward.
A: All right, next we have CSI migration. This effort is trying to take all the old legacy in-tree volume plug-ins and get them migrated over to CSI.
A: There is the core code in Kubernetes responsible for this migration, as well as individual volume-plug-in-level code to enable that migration. For the core migration, Jiawei has been working on this along with Matt Cary. Jiawei, do you want to talk about the status of this?

L: Yes, so I just wanted to get a clarification: for the CSI migration core, we can only declare it GA once all the other CSI migrations are done, right?

A: So ideally, we move it to GA when we think that the API exposed by the core is kind of stable. In this case...
A: I think more of a concern would be: does it actually work and fulfill the needs of the volume plug-ins that are dependent on it? If we have at least, you know, two or three large volume plug-ins that are fairly different depending on it, and they are GA quality themselves, then I think the core is ready to move to GA.

L: Okay, gotcha, thanks. Yeah, so the status for this right now is that I have already started to collect all of the information that I have found, and I've started to draft some documentation on...
L: ...what the gap is between beta and GA. Also, I'm starting to contact the Cluster Lifecycle team to try to understand what other Kubernetes distributions we need to take care of, because, basically, we're taking the GCE PD as the first experimental plug-in to go GA.

L: So basically, we're trying to figure out what the things are that we need to do for the GCE PD to go GA, and then, after that is done, we can at least list the items that need to be accomplished by all the other plugins. So yeah, right now I'm still collecting all the work that we need to do, and I'll probably start a separate meeting after I've got enough information to talk about this.
G: Excuse me, I'll also add that we have created a project board under the Kubernetes SIG org, and hopefully we'll have that updated soon with tasks and such.

A: Awesome, yeah, it's really exciting to see the progress on this. We went beta on CSI migration in the Kubernetes 1.17 release, so it's been a while, and this is kind of hard, thankless work. So I appreciate the work Jiawei and Matt are doing here. Thank you. Oh...
L: Just one other note regarding this: I also had a PR to fix the CSI translation library, and that PR is still in review. And after that, I think another PR to migrate all the topology labels to GA is also needed.

A: I think it should be added; let me add an entry.
L: "Replace the topology label"... yeah, okay. Because I have heard that we're going to remove the beta label, probably in this release or the next release; I'm not sure.
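For reference, the beta topology labels being discussed and their GA replacements are:

```yaml
# Deprecated beta labels that the CSI translation library has to keep
# handling on nodes and PVs:
#   failure-domain.beta.kubernetes.io/zone
#   failure-domain.beta.kubernetes.io/region
# GA replacements (example values shown):
topology.kubernetes.io/zone: us-central1-a
topology.kubernetes.io/region: us-central1
```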
A: Okay, sounds good. Thank you for that update. Now let's get an update on the CSI migration for the individual plugins. Any update on the vSphere side, Divyen?

M: ...on supporting this on the older releases of vSphere, to declare it as GA, because this is a very hard requirement that we have right now, to upgrade vSphere to 7.0 U1. — Awesome. And is the plan to get...
B: Is there a plan to support storage classes... sorry, datastore direct names, or is it still defaulting to a storage policy? That was one of the deal breakers we had in some of this; I just want to check.

M: You mean translating the datastore into the URL, right, as part of the migration?

M: So when we support the migration on vSphere 6.7, that has a lower hardware version requirement than 7.0 U1. So with that, we are covered. — Awesome.
A: All right, cool. Thank you so much for that update, Divyen; it looks like lots of good progress on the vSphere side. Does anyone want to give an update for Azure Disk or Azure File?

H: Andy is working on investigating fixes for Azure File; there's a problem with the node authorizer, so yeah, he's working on that. I don't think there's an update for Azure Disk.
G: Yeah, we have started figuring out how to launch this. Currently, the only gaps we've seen are in testing and kind of the details of the launch process. We haven't finished evaluating that, so there may be some feature gaps that we haven't identified yet.

A: Okay, sounds good. Thank you, Matt. And then from the AWS side, Matt Wong?
H: Yes, so I think Matt and another person, I forget his name, but they are actively going to work on this.

A: And then we have OpenStack Cinder. We were trying to figure out who would work on this. Do we have agreement between Jan, Hemant, and Fabio?
A: So it sounds like this is a call-out for the rest of the community. If you are interested in OpenStack Cinder at all, this would be a project that needs your help. And what is at risk here is that, I think, eventually, if we have a set of volume plug-ins that are not migrated to CSI, we will consider just deprecating them. So the paths forward are deprecation or migration, and if we can't get anyone to commit to a migration, the obvious path would end up being deprecation.

E: It would help if we had, like, a list of, I don't know, requirements to satisfy before GA.
G: Yeah, so the main goal we have... our impression is that we think migration works, and we do want to add a bit more testing just to get higher confidence in that. But the main block we're working on now is exactly how we're going to roll it out in...

G: ...GKE, you know? Like, all of the issues around: if it does happen to break something for a particular customer, do we have a strategy for getting it rolled back effectively, and that kind of thing. So yeah, I think those are probably the main considerations you have: how to communicate to customers, how to switch it on, and what to do if something goes wrong, and also just improving testing to make sure you're confident that it does actually work well.
E: Oh yeah, I think OpenStack upstream runs the migration tests regularly. I can check how successful they are, but my gut feeling is that it's just, I don't know, a two-node cluster; it won't test much, and I'm not sure what the expected scale is and what else to test, actually.

H: There's also, in the main CSI migration enhancement issue, a couple of known gaps that are listed.

H: I would go through that list of known gaps and see if your plugin is impacted by them or not. That could be another factor.
A: Okay, I think that sounds good. Thank you for the update there. Next up is CSI migration for Ceph. We preliminarily put Humble's name down on this.

A: I think we need to follow up and see if Humble is actually going to be able to commit to this. In the meantime, if there's anybody else interested, please let me know.

A: Okay, the next item is the set of tasks that this group is working on that are co-owned by other SIGs. First up is SIG Node: user namespaces.
B: Oh, this... someone from, I think, SIG Node came and presented this to us, but it kind of didn't proceed much. The objective is to enable user namespaces in the kubelet, so that the UIDs could get shifted, which they currently aren't. And if the UIDs are shifted, then we were talking about whether we want to do something similar for fsGroup, or how it affects fsGroup.

B: No, we don't shift the user IDs in Kubernetes right now. The container runtimes support that, but it's not enabled in Kubernetes yet.

B: And I think that was raised in our call when it was presented to us by, I think, a different person; the ownership of the feature has changed hands, so I have not read the new KEP yet.
A: Yeah, if anyone's interested in helping shepherd this, let me know. It would mean kind of following up on the status, helping review the KEP, that kind of thing. I know Hemant is working on a million things, so it'd be great if we can get some help.

A: The next item is ungraceful node shutdown. Xing, do you want to talk about this one?
D: Yeah, so for this one, actually, right now the owner of this is actually SIG Storage. I asked Yasen about this; he said this actually should be part of SIG Storage, because there was a proposal to change the CSI spec, and there's also the pod GC controller, which he said is actually not owned by SIG Node; it's not owned by any SIG. So I think we, of course, still need to get SIG Node involved, but perhaps in a review capacity.

A: Okay, so that is interesting. Does anybody have cycles to pick this up as their main project for this release, or...?
D: Yes, I talked to him. He's still interested, but I think he's busy with a lot of things, so I need to sync up with him. I have a lot of questions on the approach itself, so I think this is something that we need to address: what's the right approach. So I need to sync up with him first, and if he gets some time, then schedule some meetings to talk about this.

D: So I opened the issue, and I will enter this one as a SIG Storage-led project then, because it does not look like it's a SIG Node one.

A: Yeah, that sounds good to me. We'll still call it SIG Node here, because we need to get input from SIG Node, but effectively...
A: All right, thank you, Xing. The next item is immutable Secrets and ConfigMaps. Wojtek is driving this; Michelle and I looked at the KEP and PRs and have approved them. I forget if they've been merged or not, yeah.
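For anyone following along, the feature itself is a single boolean on the object: once set, the contents can no longer be changed (the object has to be deleted and recreated), which lets the kubelet stop watching it for updates. A minimal sketch with a hypothetical name:

```yaml
# An immutable Secret: data cannot be updated after creation, only
# deleted and recreated. The same "immutable" field exists on ConfigMap.
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials           # hypothetical name
type: Opaque
immutable: true
stringData:
  password: example-not-a-real-secret
```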
A: The next item, with SIG Apps, is to address issues related to PVCs that are created by StatefulSets not being auto-deleted. KK or Matt, any updates on that?

G: Yeah, I'm not sure if KK is here; I don't think he is. Anyway, this is something that we missed pretty closely for 1.20. I'd love to get this in for 1.21, so I can try out the new spreadsheet process, but both KK and I have been busy, so we are not certain whether that will happen. But I think we can put it in as a goal. Okay.
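The shape this work eventually took, in a later release and as an alpha field (at this meeting it was still a KEP under discussion), is an opt-in retention policy on the StatefulSet; field names below are from that eventual API, not from this meeting:

```yaml
# Sketch of the opt-in PVC retention policy this KEP later introduced.
# The default remains the existing behavior of retaining PVCs.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                        # hypothetical
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete           # delete PVCs when the StatefulSet is deleted
    whenScaled: Retain            # keep PVCs for scaled-down replicas
  # serviceName, selector, template, volumeClaimTemplates elided
```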
A: Okay, then we have volume expansion for StatefulSets. Hemant?

B: The main issue is obviously... the main issue is how the rollback of a StatefulSet works when resizing is enabled, and yeah, the KEP needs to be updated. There was a KEP that was written by a person, Siddharth I think, and that was abandoned, so it has to be kind of picked up. If nobody takes it up, it will be one of the long-term plans for this group.

A: Got it, okay, thank you. Next is ExecutionHook. Xing?
D: Yeah, so Xiangqian and I are updating the KEP, trying to address some of the comments, and then, after that, we will need to set up some meetings to review those. So we are still hoping to target alpha. I need to talk to Xiangqian, because we also need to update the KEP and move it into the new format.

A: Okay, sounds good. Thank you, Xing. And then we have SIG Architecture, which wanted us to split the mount library out of k8s.io/utils. Any updates on that?
J: Yeah, hi, this is Srini again. Thanks to Dims, he has moved cAdvisor to use the Moby libraries, and that PR is merged, so cAdvisor no longer depends on the mount utils. And also thanks to Michelle: she did some groundwork, updating the KEP to the new format, and opened an issue.

J: I have a PR which is on hold right now; that's the last PR, I believe, to update the dependencies, but it has to wait until the version of cAdvisor is bumped in the vendor directory. I don't know when that's going to happen. I can chase that, but usually it happens at the end of the release. So, okay.
H: Also, can we now remove the mount library from k8s.io/utils?

J: Not yet. Yeah, we can now remove it, because there are no longer any users. Good point, yep, I'll do that; I'll work on that. — Cool, thank you.
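Concretely, the split moved the mount code out of the grab-bag `k8s.io/utils` module into its own module, so consumers change their dependency rather than pulling in all of the utils. A sketch of what a consumer's migration looks like (version shown is illustrative):

```
# Before the split, mount helpers came in with the whole utils module:
#   import "k8s.io/utils/mount"
# After the split, consumers depend on the standalone module instead:
#   go.mod:  require k8s.io/mount-utils v0.20.0
#   code:    import mount "k8s.io/mount-utils"
```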
B: All right, I had a follow-up question. There are other similar utils, like the ones for reporting stats, like the volume size, and then there are some resize utils, and generally drivers are copying that code, from what I saw, for things like how to report volume size. It's not a huge amount of code, but it's similar for resizing, like the filesystem resize code.

A: Okay, sounds good. And then, finally, we have user ID ownership in ConfigMaps and Secrets. Jiawei, I think there's no update on this one, right?
L: Yeah, I'll probably need to find time to work on this.

M: Yeah, so this issue was reported long before, but we recently observed it in one of our tests.

M: So what happens here is: we create a PV and a PVC, and they get bound with each other. Then, for some reason, the admin, instead of deleting the PVC, starts deleting the PV. But since we have the PV and PVC in the bound state, we are not able to delete the PV. But somehow, when we get out of that kubectl delete pv, we do Ctrl-C and get out of the terminal session...
M: We see the status of the PV change from Bound to, like, Terminating, and then, later, if the admin deletes the PVC, the PVC and PV both get deleted, but the controller is not given any callback. So in this case, the PV resource is getting deleted before the provisioner can get hold of that PV resource to fetch the volume ID and send it to the controller.

M: So this is the issue we are seeing, and it's kind of a big problem, because it is leaking orphan volumes, a lot of orphan volumes, if an admin or a DevOps persona or an end user is not following the proper guidelines to clean up the Kubernetes resources. — Do you know if this is reproducible across other volume plugins?
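The behavior being described hinges on the pv-protection finalizer: `kubectl delete pv` on a bound PV only sets a deletionTimestamp, so the PV shows as Terminating; when the PVC is later deleted, the finalizer is released and the API object disappears immediately, potentially before the external provisioner can read the volume handle it needs to delete the backing storage. Roughly what the stuck object looks like (names and values are illustrative):

```yaml
# A bound PV after "kubectl delete pv" has been issued and interrupted:
# the deletionTimestamp is set (status shows Terminating), but the
# kubernetes.io/pv-protection finalizer keeps the object around while
# it is still bound. Deleting the PVC then releases the finalizer, and
# the PV object vanishes before the provisioner sees it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                 # hypothetical
  finalizers:
    - kubernetes.io/pv-protection
  deletionTimestamp: "2021-01-14T17:00:00Z"
spec:
  csi:
    driver: example.csi.vendor.io
    volumeHandle: vol-0123         # the ID the provisioner needs for cleanup
  # capacity, accessModes, claimRef elided
status:
  phase: Bound
```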
A: Michelle, do you know if this is... I...

N: Hey, sorry to interrupt, but isn't it time for the object storage meeting now?

A: Yes, it is. So let's do this: I'm going to punt these two items to the next meeting. I apologize, Divyen and Sandeep; I didn't manage the time very well today. So let's go ahead and end the meeting today. I'll make sure to move these two items to the next meeting; in the meantime, attend the CSI meetings, and we can get the conversation going there. And I'll go ahead and end this meeting so we can hand it off to the COSI folks.