From YouTube: Kubernetes SIG Storage - bi-weekly meeting 20210325
Description
Bi-weekly meeting of the Kubernetes Storage Special Interest Group (SIG) - 25 March 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
B: Yeah, we made some fixes, and we'll try to get some more changes in for 1.22, especially copying the allowVolumeExpansion field, which requires a KEP, and then recovery from resize failures. There's a KEP already merged for recovery from resize failure, but I have to update it for the new design.
D: Last time we mentioned there was a PR reverted, so I checked that again. Because the race condition between pod deletion and volume management still exists, that PR will not be able to merge again until we resolve that race condition, and right now we don't have a complete solution yet.
E: I think this one's done? I think the docs PR merged and there aren't any more PRs pending, so we can mark this one as done, I think.
A: There was a test; I thought Chen was supposed to review that one yesterday. I'm not sure if that one is in, I forgot.
E: Cool, I think so, right? Jing, did you have something?
A: Merged, okay, thanks. Okay, next one is the starting or field domain one; I think this one depends on the next one. I thought you hadn't got a chance to update it, and actually I need to schedule a review meeting first, so I need to do that soon.
E: Michelle, do you know where this is? Is this a work in progress? I haven't seen anything since last time. Okay.
A: Okay, thank you. Next one is volume health. There is a docs PR that is being reviewed, so I need to update that and try to get it merged, and then Ryan is working on removing the agent logic from the external-health-monitor repo.
H: Just approved them, so hopefully soon.
G: I think we're continuing to tinker with the API design in the KEP. I guess the plan is to address all the existing concerns that prevented it from getting approved for 1.21, and to try to go alpha in 1.22.
G: The biggest change going on right now is an alteration to the way they handle the brownfield case, or the way that they handle sharing of buckets across namespaces, because with object storage, sharing is expected to be much more common than it is with block storage or file storage.
G: So they really want to make sure they get that use case right, and the current design is moving from having an object per namespace to having one object that all the namespaces can point to.
A: Yeah, he's actually having a meeting right after this.
A: So if you're interested, you can join that meeting right after this meeting is complete. Thanks, Ben. Next one is changed block tracking. We had a meeting to talk about the CBT use case, just to talk about how to describe the workflow, because there are a couple of pending things, like the potential security issue and how to address it, and also we're still waiting for feedback from the AWS EBS side on the API design.
A: So that's the status. Next one is the new ReadWriteOncePod access mode. Is Chris here?
E: Michelle, do you know the status of this? He has a provisional KEP out, so yeah, I think we can take a look at it, and at least if it's provisional we can merge it. I think Chris plans on working on the KEP in multiple pieces, so we're going to keep it provisional until everything is there.
J: Yeah, the Azure one: I made some updates to it, but it's not mergeable right now. I'm working on getting it to pass the Azure File test that it's failing; it'll be done soon. Okay, thank you.
I: Yeah, so for this one I think most of the work is done for 1.21. There's still one small bug fix that is still in discussion with Yang; we're still trying to figure out what's the best way to fix that. Other than that, I think in 1.21 we're all good, except for the topology updates, but I think Layla will take care of that, so it should be good.
A: Okay, thanks. Next one is vSphere. Support for 6.7 U3 is planned to be in the next 2.3 release; right now we're working on it. And then I think we still need to follow up on this one, the Windows support.
I: Yeah, I think it has been moved to beta, but I do know there is another issue, related to secrets, that has been raised. Michelle, do you know about that?
E: The regression? There is one, yeah; there is one regression that was identified, so there's a revert in progress. But from the migration standpoint, I think migration will still be beta, but there's going to be another issue that will have to be fixed after we revert the other change.
E: I think there's still a doc pending.
K: Yeah, this is the same update as last time: we are not turning it on by default upstream in 1.21, but this is still going out in GKE 1.21 in a month-ish.
A: Thanks. Next one is the OpenStack Cinder CSI migration. I believe this one is the only one that is on by default. Is that right?
A: I haven't seen him during this meeting. I thought he was back, but maybe he's still busy with other things.
A: Thank you. The next one is graceful node shutdown; I will still need to update the KEP.
A: Okay, thanks. Next one is the immutable Secrets and ConfigMaps; that one is done. Okay, so next one is auto-deletion of PVCs created by StatefulSets.
K: Yes, so I did not manage to make the code freeze; I guess I gave that update last week. So this is still in progress: the API is done, the implementation is still in progress.
A: Next one is volume expansion for StatefulSets. This is a KEP where... where is Matt? Are you also keeping track of this one, Hemant?
B: This requires, I think, a new owner. I haven't heard back.
K: Yeah, so this has actually recently come up a bit on Slack and stuff. I think there may be a slightly more general way to do it, just in terms of updates. Anyway, if anyone else is interested in talking about this, I'd be interested as well; I don't know if I can commit to it.
A: Okay, next one is the ContainerNotifier; the KEP has been updated, so we need to get API reviewers to review it again. We also reviewed the new proposal in yesterday's Data Protection Working Group meeting and got some feedback.
A: Next one is the Kubernetes [unclear] one; leaving the new owner's username here. Okay.
I: Yeah, so I don't think I got enough cycles this release, but I have another, outside contributor who said he's also very interested in this, and he will take it up. So I'll work with him to figure out what's the next plan.
D: Oh, I realized we never put CSI proxy in here, so let's give a quick update on CSI proxy. Right now it's beta, and we are working on moving it to GA.
D: But kind of at the end of this month. There's already a PR to move the major API groups, like disk, volume, filesystem, and SMB, into v1, and there is one small issue with checking whether a disk is formatted or not; after that's resolved, the PR can be merged. I think we're pretty much ready for GA.
A: Okay, thanks. Going back to this: we have passed the test freeze deadline, I think that was yesterday. The next deadline is next Wednesday, March 31st, when docs must be merged. So that's the next deadline, and we have an issue here.
L: Yes, so I have done a little bit of work on this since I logged the request to talk in this meeting. It turns out this is not a conformance test, as I originally thought, but there is still an e2e test that checks that, when you've got bidirectional mount propagation, a mount done by a container with bidirectional propagation turned on does propagate all the way up to the host OS.
L: The approach that seems to address the underlying systemd issue, while not causing any issues with any containers that we've ever tried it with, is to basically take the container runtime and also kubelet and move them down into a separate mount namespace, set up cleverly with mount propagation so that it still receives host OS mounts into that namespace.
L: Every other usage of container bidirectional mount propagation seems to work great: with CSI, and with other mechanisms where, say, container runtime hooks might need access to a mount point that was created by a container.
L: There is a link, I can't remember if it's in this document, I can just drop it in the chat as well, that's got sort of a graphical representation of how the mount namespaces are laid out now versus how they would be laid out in a potential implementation if we relax this test. But basically the thing that I'm trying to at least start with is just a discussion around: do we know of any use cases where something running in the host OS...?
L: So I have this issue that I've got open here, where I hope we can have some more discussion about it. I've also posted to the Kubernetes development list, as well as the SIG Node and SIG Storage mailing lists, looking for any information about known use cases where there's something at the host OS.
L: ...to see if we can uncover any use cases we haven't thought of. The driving force behind this is that OpenShift has some applications where we have sort of a resource-constrained environment, and we're trying to get the CPU utilization down as much as possible. This is actually going quite a long way toward reducing the systemd overhead, among some other improvements; we want to squeeze out as much CPU to make available for workloads as we possibly can.
E: On this, at least from my experience: sometimes, just for debugging and triaging purposes, we might want to go in and view all the mounts that are on the host. Is that still possible, with some alternative command or something like that?
L: Absolutely, it's actually very easy. Effectively all you need to do, in your shell, if you know where the namespace itself is pinned in the filesystem, is use the nsenter command, and that will basically drop you into that mount namespace; then, using the mount command or findmnt, you get the full listing of everything that's in there.
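For example, a minimal sketch of that workflow; the pin path below is hypothetical and depends on how the namespace was actually bound on a given node:

    # Join the mount namespace pinned at a (hypothetical) path and list
    # every mount visible inside it.
    sudo nsenter --mount=/run/example-mntns/mnt -- findmnt

    # Or drop into an interactive shell in that namespace and use mount(8):
    sudo nsenter --mount=/run/example-mntns/mnt -- sh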
L: Likewise, if there is some sort of legacy tool that needs access, it's actually very easy: again, you can use nsenter to wrap that command, so it spawns inside of that mount namespace, and then it will still have access to all of those container-specific mount namespaces. So there are workarounds.
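A sketch of that wrapping; the tool name here is a made-up placeholder, not something from the meeting, and the pin path is again hypothetical:

    # Spawn a legacy host-side tool inside the pinned mount namespace so
    # it still sees container-created mount points.
    sudo nsenter --mount=/run/example-mntns/mnt -- \
        legacy-backup-tool --scan /var/lib/kubelet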
L: That's a good question. Yes, if you run findmnt in the host OS namespace, you would get... yeah, even running mount, it sort of depends on how the namespace is created and what's done with it, in the proof of concept that I've done for OpenShift.
L: That's true, absolutely. And I guess, because by its very nature we would know that both kubelet and the container runtime would necessarily be inside of that mount namespace, you can always look at what the PID of kubelet is, find out what mount namespace it's in, and enter that one, and there you are, in fact.
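A short sketch of that lookup, assuming the kubelet binary is literally named kubelet on the node:

    KPID=$(pidof kubelet)
    readlink /proc/1/ns/mnt          # the host (systemd) mount namespace
    readlink "/proc/$KPID/ns/mnt"    # kubelet's mount namespace
    # If the two differ, join kubelet's namespace:
    sudo nsenter --target "$KPID" --mount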
L: Absolutely. So basically, this actually sort of originated from a Red Hat Bugzilla, and there are some changes in systemd that reduce this load. However, the estimate of how much it reduces the overhead is 30 to 50 percent, which is a good step in the right direction. However, it doesn't help us in OpenShift, where we're working on an older version of systemd that doesn't necessarily have the fixes.
L: It also doesn't get us all the way down to zero, and the change that would be required to get that overhead much lower would actually be a kernel change, to get more granular updates on the mount point changes.
L: So the upstream systemd, which OpenShift doesn't have access to, is better than the older version of systemd that OpenShift currently has. But there is still some significant overhead there, just based on the fact that it doesn't get a succinct list of exactly what changed in the mount namespace as mounts go in and out.
H: What actual changes are needed to enable this on the Kubernetes side?
L: I do have, actually, if you just scroll up to the top of the screen there, a link; sorry, I think it's my first comment after this. I have a link to a branch in Kubernetes that basically just fixes the e2e test, so that if it notices that kubelet is in a different mount namespace, it does the checking inside of that mount namespace instead of checking at the top level.
L: Yeah, the first link commented there, that's right. So if you dive in there, basically what this does is: it has a check to see whether or not we're in the same namespace, and then, if so, does the test differently. It also changes some of the language around the mount propagation flags themselves, because it does change the meaning a little bit compared to how it's documented.
H: Yeah, I think this sounds like a reasonable change overall, and probably good for older systems that are suffering from these performance issues.
A: Are we actually deprecating this already? I thought he was just suggesting relaxing the e2e test case itself.
L: ...Kubernetes, and to some degree it's kind of outside of the scope of Kubernetes, I think. I mean, I was looking through the Kubernetes documentation; sort of coming at this from OpenShift, there's a whole suite of install tools and stuff that are part of OpenShift, but looking at, like, the Kubernetes "how do I install Kubernetes" page...
L: I have a proof of concept that does this; I think it's very easy to adapt to any system that has systemd running, not just OpenShift specifically.
L: So if we look at this first diagram, this is basically how it looks today. You've got systemd, login shells, kubelet, the container runtime; everything is in the same namespace. When container A has mount propagation set to bidirectional and it mounts /run/a, everybody in the upper level sees it, and then container B, which has host-to-container propagation set up, also sees it, because it gets pulled down with all the other things in the host.
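As a rough illustration of the two propagation modes being described, a minimal pod sketch; the pod name, images, and hostPath are placeholders, not details from the meeting, and Bidirectional requires a privileged container:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: propagation-demo
    spec:
      volumes:
      - name: host-run
        hostPath:
          path: /run
      containers:
      - name: container-a               # "container A": can publish mounts
        image: busybox
        command: ["sleep", "3600"]
        securityContext:
          privileged: true              # required for Bidirectional
        volumeMounts:
        - name: host-run
          mountPath: /run
          mountPropagation: Bidirectional
      - name: container-b               # "container B": only receives mounts
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: host-run
          mountPath: /run
          mountPropagation: HostToContainer
    EOF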
L: Now, if you scroll down a little bit, there's another diagram that shows sort of what it would look like after. Basically, you can see here that anything that's mounted by systemd or login shells, as long as this new secondary namespace is set up properly with the appropriate propagation (it has to be slave plus shared propagation), but effectively:
L: Anything from the OS still goes down into kubelet and the container runtime, and from there it propagates down into containers that have host-to-container set up properly. And then, as you can see here, when container A mounts /run/a in its bidirectional space, that propagates up to kubelet and the container runtime; it also propagates down to container B, but it does not propagate up to where systemd and login shells can see it, by default, unless something in that namespace explicitly enters that lower level.
L: So you can see an example of that with container B, which uses host-to-container. Both today and under this proposal, if a container that uses host-to-container makes a mount in its own mount namespace inside of that area, it sees /run/b, but the parent namespace does not see /run/b. However, it does see all of the mounts that are below that volume as though they were locally available.
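A very rough, hypothetical sketch of that secondary-namespace setup, just to show the slave-plus-shared propagation idea; a real deployment would do this in the kubelet and container runtime service units, not an interactive shell:

    # Start a shell in a new mount namespace that still receives host
    # mounts (slave), then make it shared as well, so mounts created by
    # members of this namespace propagate among them but not back up.
    sudo unshare --mount --propagation slave sh -c '
        mount --make-rshared /
        findmnt -o TARGET,PROPAGATION /   # inspect the resulting flags
    '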
G: Does this mean that pods that are run with escalated privileges, so like just privileged pods, would end up in the same namespace as kubelet and containerd, but not...?
L: I think that by default that is true, and so if a privileged pod would want access to that top-level namespace, it would need to, like, nsenter into the /proc/1 namespace.
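For instance, assuming the pod shares the host PID namespace (hostPID: true) and has the privileges to enter namespaces:

    # From a privileged, hostPID pod: join PID 1's (the host's) mount
    # namespace to regain the top-level view of all mounts.
    nsenter --target 1 --mount -- findmnt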
L: Yeah, that's a good question and a good point. I'm going to make a note to see if I can figure out where to ask that question to get more details, and bring that up as a potential issue: a privileged pod that would expect to be in that sort of highest-level namespace would no longer be in that highest-level namespace.
L: That's right. And basically the way that I have done my proof of concept, with the change to the e2e test, is that the test automatically detects it. You can see that right before all these changes there's basically a single if, actually right there, line 198.
L: It basically checks what mount namespace kubelet is in, and whether that is the same as the mount namespace that systemd is in, and based on that it either executes the exact same logic as the previous test, or the new logic, which sort of understands that when you've got your container mounts segregated from your host mounts, the test is slightly different.
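Roughly, that detection amounts to the following comparison; this is a shell sketch of what the Go test does, and it again assumes the kubelet binary is named kubelet:

    # Compare kubelet's mount namespace with systemd's (PID 1).
    if [ "$(readlink /proc/1/ns/mnt)" = "$(readlink "/proc/$(pidof kubelet)/ns/mnt")" ]; then
        echo "shared namespace: bidirectional mounts must be visible on the host"
    else
        echo "segregated namespace: the host must NOT see bidirectional mounts"
    fi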
L: It still ensures that container mount propagation goes up from a bidirectional container to the container runtime level and back down to anything that has host-to-container, since that's critical for things like CSI. And it actually explicitly checks that, if you have this segregated mount namespace, the host OS, that top-level namespace, does not see those bidirectional mounts.
L: So if there aren't any other questions today: I mean, feel free, if you think of something after the fact, to come back to either the mailing list threads or the issue that I've logged there for more discussion. And I made a note to follow up with SIG Security, since I think that was a really good suggestion. Yeah, thanks so much for your time and for talking to me about this.
A: Okay, are there any other issues we want to discuss in this meeting?