From YouTube: SIG - Storage 2023-04-08
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A: And Alvaro, do you know where we are in the issue scrubbing? Is there a particular one we should start on when we get to that point in the meeting? If so, if you could add that, that would be great.

A: Okay, yeah, why don't you just go ahead and pop the link into the agenda for everyone so they can follow along too. With that we can also get started on the agenda topics, so welcome everyone to the May 8th SIG Storage meeting. The first item that we have is VM export topics — who would like to take this one?
B: So I put it in as a result of some conversations we had last week. Basically, virtctl on main already has support for exporting the entire VM YAML manifest, and we considered backporting this feature to the 0.59 release. I think we raised that it's not a really big change, and it might just make sense to pop it in; it's a great usability addition.

B: But if we think it's too much of a feature backport, then we can go the other way, which is just documenting the curl-and-certificate-files procedure. We should just discuss whether it's a viable backport, whether it's something that can be done. To me it makes sense because, first of all, it's a virtctl change.
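For reference, the manual procedure mentioned here boils down to fetching the export server's CA certificate and token and downloading the exported disk or manifest over HTTPS. Below is a minimal sketch in Python; the URL, file paths, and token header name are placeholders I am assuming for illustration, not values confirmed in this discussion.

```python
import requests

# All values below are placeholders for illustration only.
EXPORT_URL = "https://virt-export-proxy.example/volumes/my-vm-disk/disk.img"
CA_CERT_PATH = "export-ca.crt"             # CA bundle extracted from the export object
TOKEN = "my-export-token"                  # token secret referenced by the export
TOKEN_HEADER = "x-kubevirt-export-token"   # assumed header name; verify against the docs

def download_export(dest: str = "disk.img") -> None:
    """Stream the exported disk image to a local file over HTTPS."""
    with requests.get(
        EXPORT_URL,
        headers={TOKEN_HEADER: TOKEN},
        verify=CA_CERT_PATH,
        stream=True,
        timeout=60,
    ) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)

if __name__ == "__main__":
    download_export()
```

The virtctl backport under discussion would wrap essentially this flow for the user.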
A: Yes, I guess, barring any immediate concerns here, I would think it would be okay to prepare that PR, and hopefully it's a simple cherry-pick. It's usually the complexity of the backport, and the effect of the change, that we consider — so hopefully it's simple.
B: All right, and there's also a second topic. If you recall, when we were discussing demoing the export on vanilla Kubernetes, we understood that it's not so simple to set up the Ingress. So maybe we could make an issue out of this — some fun research for somebody, a good first issue. I really don't know how complex it would be to set up a real Ingress.
A: Yeah, I think it makes sense to have an issue. And I guess this relates to the doc PR — does the doc PR have... I guess it does; it lacks the Kubernetes instructions.

E: I wrote the document, the export document PR, and it essentially ignores the issue of how we get to these external links. I'm assuming that we have either a Route or an Ingress or something that would generate the external links, and the document shows how you do things if that is the case; it doesn't explain how to set that up in the first place.
A: Okay, all right. Anything else on the VM export topics?

B: Regarding the first one, what should we do? Should we open an issue about this, about the backport?

A: I would just open the backport PR, and if there are any strong concerns about whether to do it, they can be discussed there. So let's maybe keep it open for a little while longer than we otherwise would, to invite comments — assuming it passes the eye test right away.
B: Yep. We could actually defer it to the end of the meeting if you prefer, so we get to Michael's topic: persistent container disks.
D: Copy-on-write VM disks — one thing that I mentioned kind of in passing was: what about a persistent container disk, where the base image is in the container storage of the node and the copy-on-write layer is on a PVC somewhere? It turns out that David looked into that a while ago and created a PR, and it was just kind of closed without much interest.

D: So I think it may be interesting to talk about whether we should resurrect this: what's the real use case, and what are the advantages and disadvantages? That's what I'm wondering from the community, I think.

D: I think the main advantage is that if the base image is on the node, the VM can start up really quickly the first time — there's no population phase — and if you're not writing a lot of data, it won't take a lot of space in the PVC. But I think the advantages maybe go away if this is a VM that is going to be around for a while and is going to have a lot of activity, started and restarted a bunch of times. That's where the advantages aren't as clear, to me at least.
A: Mm-hmm, yeah. I guess I got confused for a second about this, just because I was thinking of how CDI imports container disks, so we already kind of have that workflow — although, as you mentioned, in some examples it could be faster. Although, if you used the node pull method on the persistent container disks, you might get a similar result, right? Because you've got the cached container image on the nodes already in that case.

D: Yeah, but you're still copying some data; if it's a big image, it could take a while.
A: I would point out another downside: when using this approach, we are introducing a qcow2 layer into the flow. The PVC that you supply to go along with it in this case would have a qcow2 file whose backing image references wherever the container disk image appears. I guess it should work as far as I know, but it's a pretty large step for us to be adding qcow2 layers into the primary API.

D: Well, I mean, don't we technically have that already for that weird host disk thing? There's... I forget, yeah.
A: Yeah — the host disk thing, I've been trying to, I guess on a good day, ignore that it exists, and on a bad day actively trying to kill it. I don't know who uses it; I'd be curious if anyone is actually using that. I think it's not a super great idea.

D: Yeah, well. Nevertheless, it was interesting to me that this was something that, I guess, escaped my radar when the PR came up, and it seems that, maybe based on the lack of interest in this PR, it's not something we really need to explore. But I wanted to bring it up to the community to see the opinions.

A: Yeah, thanks for raising it. Any comments from anyone about this idea? Any interest?
G: Yes, can you explain what the storage behind the scenes is? Later on we moved away from Ceph with Rook to use LINSTOR instead, because there was no deduplication, and copy-on-write without deduplication is very bad.

G: The solution that you were showing with copy-on-write — what kind of storage do you use behind it?
A: This is using QEMU and the qcow2 layer; that's what this copy-on-write refers to. I didn't review this PR, but from my understanding of what they're trying to do here: you would have your container disk, which gets pulled down, and then the disk image file that's inside of that container disk appears within the virt-launcher for the VM to access. Then the persistent volume claim that you supply in this example API would contain another qcow2 file that references the relative path to where that container disk image appears. What happens then is that QEMU is able to use that image chain.

A: So while the VM is running and it writes to the persistent data, the writes land in the qcow2 file on this PVC — "my-pvc" in this case — but reads come all the way through from the base image if the blocks don't exist in this PVC. So this is just standard QEMU image layering being implemented within the KubeVirt API.
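To make the layering concrete, here is a minimal sketch of what that image chain looks like when built by hand with qemu-img. The file names, mount path, and backing format are made up for illustration; they are not the actual paths or formats KubeVirt uses.

```python
import subprocess

# Hypothetical paths for illustration only.
BASE_IMAGE = "/var/run/kubevirt/container-disks/disk_0.img"  # read-only base from the container disk
OVERLAY = "/pvc/overlay.qcow2"                               # copy-on-write layer on the supplied PVC

def create_overlay() -> None:
    """Create a qcow2 overlay whose backing file is the shared base image.

    Writes land in OVERLAY; reads fall through to BASE_IMAGE for any
    block that has not been written yet.
    """
    subprocess.run(
        [
            "qemu-img", "create",
            "-f", "qcow2",     # format of the new overlay
            "-b", BASE_IMAGE,  # backing file (the container disk image)
            "-F", "raw",       # format of the backing file (assumed raw here)
            OVERLAY,
        ],
        check=True,
    )

def show_chain() -> None:
    """Print the resulting backing chain, e.g. overlay.qcow2 -> disk_0.img."""
    subprocess.run(["qemu-img", "info", "--backing-chain", OVERLAY], check=True)

if __name__ == "__main__":
    create_overlay()
    show_chain()
```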
G: Because these use a lot of storage. We are using the other one, LINSTOR, because we do the same thing, but we have deduplication there: if we have several copies, the usage of the storage is still small. Why am I bringing this up? I was part of the first meeting we had here on Mondays, and until now I haven't had any information about Ceph deduplication going out of the alpha stage, going to beta or something. Is there any information that I can find?

A: Alexander, I think you... I can't remember, was it you that looked into this? From what I recall, there were no immediate plans to move this out of alpha, but I feel like one of you guys took a deeper look into that.
D: I don't know — we can definitely check. He did not know, and I haven't heard anything since, but we can look into it again.

D: Yeah, I mean, I think the one thing is just that we still have access to these people, but with Ceph being under IBM now, I don't think we have the visibility that we did before.
A: And I think there are channels through which this feature request can be communicated in the community to the Ceph folks, asking them to take a look. I'm not sure if there are open issues or where those community things are, but I do think that it could be worked from the community angle, and if they see a large interest in it, there's a possibility to get some traction there.

A: Yeah, so my definite recommendation would be to go and interact with the Ceph community about that. I think that's just the way this stuff would work — this being the SIG Storage community meeting, that would be the channel I would recommend in this forum.

A: If you're a Red Hat customer you can do that stuff through support, but this one is about the community, so I would suggest going there. We could... I mean, Michael, I'm not exactly sure which repo or exactly where we found that information, but...
F: So I had a question about this. Somebody mentioned this deduplication thing, and the back-end store providing that. What I'm trying to say is: qcow2 and deduplication would not be mutually exclusive, right? QEMU can implement the qcow2 layer, and then ultimately the storage could do block-level deduplication and just merge all the blocks having the same checksums, and they both can coexist together.

A: Yes, that would be my understanding as well. They're two completely separate layers in the stack, exactly.
F: And the second thing — I've asked this question before and never understood it, so I'll ask again just to understand better. Is it common, like what we used to have in the container world, that the same base image is shared across multiple containers? Does the same thing happen in the KubeVirt world?

A: Yeah, so the pattern that's implemented today by a lot of people is typically that you would have a golden image, a prepared VM disk image, and that same image would either be cloned using the storage to multiple VM instances, or, if you're using container disk imports, it could just be imported multiple times.
A: A separate copy — no, we're creating essentially a clone. It depends on how you create those two, whether you use a CSI clone or you're just importing it twice, but we sort of leave the efficiency of managing duplicated data to the storage layer. So we don't really consider, for example, trying to have a shared base image — although at KubeVirt Summit I believe it was NVIDIA that was showing a strategy they had to take advantage of that more.
F: Yeah, and I think that's what this proposal of a qcow2 layer will allow you to do — sharing that base image and saving the page cache. Because what you mentioned, leaving it to the storage: the storage will only save you the actual blocks on the storage, but not the page cache, the memory part of it, which I keep pointing to. So there are two parts to it, right: saving the memory on the node, the page cache, and then saving the blocks on the disk. If the storage supports deduplication, great, then you save that.

F: But if your storage doesn't, this qcow2 layer will help you in both ways. You save the page cache, so you can pack more VMs on the same node, as long as they are sharing the same golden image and there is not much copy-on-write going on; and if the storage is very basic and doesn't support deduplication, then you save the space on the storage, because you're not creating clones of these base images. So from my perspective — because he was asking what the benefits are — especially the page cache: nowadays people say storage is somewhat cheap, but it's still the case that people want to maximize how efficiently they make use of memory, and they want to optimize that and pack as many VMs as possible on the node.

F: From my perspective, that's why overlayfs was widely successful in the container world — just because it provided the page cache sharing. So, something to think about: do we care about that particular optimization? My feeling is, as the images are big and if they are shared significantly, the community will probably start caring about it at some point in time.
D: An advantage of whenever we use container disks is that the base layers are a single copy on the node that is shared among all the VMs.

D: Yeah, we already do that for container disks now, but it's just read-only. In that persistent container disk case, there would be one copy of the base image that would be shared by all VMs on the node, yeah.

D: It's a disk image that is inside a Docker container, so it takes advantage of how Docker has shared layers and does its own kind of copy-on-write stuff. So basically, when a pod starts a VM that has a container disk, that layer — that image — is shared with all VMs on that node. It basically takes advantage of how Docker has these read-only layers.
A: So, Michael, I guess the one detail here is that when the VM starts, that image appears, but then we create a qcow2 file — I believe that's on the host, on a host-path type of storage — that uses the shared image as a backing file. So while the VM runs, it can write to the disk and it accumulates data in this ephemeral storage, but as soon as the VM is stopped, that storage is cleaned up today. That's why container disks, when used directly as a volume type, are ephemeral.
F: Oh, so it's sort of like there is an internal qcow2 on top of the base; all the writes go there, but they are temporary in nature and will be deleted, so the changes are not persistent — right, okay, yeah. So if the use case evolves more toward there being a golden image which is shared, but we want to keep the changes persistent, then a qcow2 layer that is more persistent is where I think it can help.
A: I think it would be really interesting if somebody did some scale testing. I guess you could measure this page cache sharing effect today by running a bunch of VMs using ephemeral container disks, and then using CDI to import the same exact container disk in a persistent way and running those VMs. You could then compare the overhead on a node of running, say, ten of these VMs using the container disk approach and ten VMs using the persistent-into-a-PVC approach, with the same exact operating system image. I think that would be an interesting test, to see exactly what kind of benefit we would get on a certain workload.
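A rough sketch of how one side of such a comparison could be sampled, assuming the script runs on the node under test (for example via a debug pod with the host /proc mounted). The metric choice here is my assumption, not an established benchmark for this test.

```python
"""Snapshot node memory usage while a set of VMs is running, so that the
ephemeral-containerdisk case and the CDI-imported PVC case can be compared."""

def read_meminfo(path: str = "/proc/meminfo") -> dict:
    """Parse /proc/meminfo into a {field: kB} dict."""
    info = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # values are reported in kB
    return info

def snapshot(label: str) -> None:
    m = read_meminfo()
    used = m["MemTotal"] - m["MemAvailable"]
    print(f"{label}: used={used} kB, cached={m['Cached']} kB")

if __name__ == "__main__":
    # Take one snapshot with N ephemeral-containerdisk VMs running, and
    # another with N CDI-imported (persistent PVC) VMs running, then compare.
    snapshot("baseline")
```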
F: Yeah, I remember we had done a similar thing for containers. Jeremy did some nice blogs and charts showing how page sharing allows you to pack more containers. You need to do some graphs, so I guess if somebody's interested in this, it will require a similar kind of effort: what are the benefits of sharing the page cache, practically? Mm-hmm.

F: Because that was one thing everybody seemed to care about — the memory on the node, and the OOM killer kicking in too often and killing the VMs or containers. So memory seems to be one resource people care about.
A: So I would say, because of that, it doesn't seem like a dead idea, but I think it's important to quantify the benefits. There's definitely some additional complexity here, in terms of managing that extra PVC and having a qcow2 layer active in a persistent virtual machine, but I think we'd have to kind of think through all those flows and make sure everything's handled, yep.

A: All right, any other thoughts or comments on the persistent container disks topic?
A: Okay, and on the Ceph deduplication — did we cover that with that action item? Did we cover it appropriately?

A: Yes, I'm fine with that. Okay, all right, sounds good. So yeah, I'll try to locate that repo; it should be reasonably easy to do. So, I think the last thing that we have on the agenda... oh wait, we should bounce back to the upstream flakes topic. So, Alex.
B: Yep. So if you just click on the CI search — we had this happen a few times. What happens is that, basically, the populator PVC is just not reaching Bound; it actually ends up in the claim Lost phase, which I believe is a little worrying. I think something should kick in and start rebinding so that the claim is not lost, but I am not 100% sure. You can see this has happened three times already, so I think it's worth a look.
A: Do we think there's anything interesting about the difference between this CI and somewhere else — something that might be exacerbating it in this environment?

A: So we have... yeah, we have the upgrade lane, which runs NFS storage; I don't know if the exact lanes matter. Do we run these same tests on a different lane and find them to not be flaky there?
B: ...it deserves the attention, because it seems, as we open more PRs, it's just going to keep happening.

B: Yeah, but it just gets evicted — it just gets evicted, and I had a little lead: it could maybe be the new nbdkit readahead filter.
E: Even before that, we have a thing that runs first that should catch the invalid files before it actually tries to do the import. We have two stages: the first stage is sort of the check stage — in that stage we should find this — and then in the second stage we actually import.
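For context, a pre-import check of that kind usually amounts to running qemu-img info against the source and rejecting files whose header is unreadable or whose declared virtual size is absurd. A minimal sketch follows, assuming a made-up size cap and test file name; this is not CDI's actual implementation.

```python
import json
import subprocess

# Assumed cap for illustration; CDI's real limits and checks live elsewhere.
MAX_VIRTUAL_SIZE = 100 * 1024**3  # 100 GiB

def validate_image(path: str) -> None:
    """Run `qemu-img info` and reject obviously invalid images before import."""
    result = subprocess.run(
        ["qemu-img", "info", "--output=json", path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise ValueError(f"not a readable disk image: {result.stderr.strip()}")
    info = json.loads(result.stdout)
    if info.get("virtual-size", 0) > MAX_VIRTUAL_SIZE:
        raise ValueError(f"declared virtual size {info['virtual-size']} exceeds cap")

if __name__ == "__main__":
    validate_image("cirros-invalid.qcow2")  # hypothetical test artifact name
```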
A: So are we not setting up the nbdkit pipeline to run qemu-img info?

E: ...info. So we use the prlimit, right? And the prlimit with the qemu-img info should fail, because...
B: Yeah, so we can scratch that nbdkit filter idea. Let me just keep looking for some kind of regression, because the pods are evicted for sure.

A: Sorry, are these files — the invalid files — created on every CI run, or are they an artifact that's just sitting somewhere?
A: Is it possible that the file got changed somehow in a way that... oh, it should always fail, I would guess, if it was changed, instead of just flaking sometimes. Anyway, I don't know if there's a super easy, obvious answer that we'll come to on this call. Is there somebody who would want to dig a little deeper into this?

A: All right, yeah, I was just wondering if somehow, maybe, they were once sparse and, wherever they're stored, they got populated somehow, and that was affecting the way that they're being accessed — but I'm not sure.
E: That wouldn't fit in GitHub. Like I said, the size that we put in there is so large that it's insane, so that should fail as well.
A: Okay, all right. So the last thing that we had was to take a peek at HPP issues, which is not a repo that we have looked into on this call yet. So let's start with the oldest one, which is support for ReadWriteMany, okay.
E: Yeah, essentially they want to use NFS as a backing storage and then have the hostpath provisioner split that off, but it might actually be better for them to just use the NFS CSI driver, which would essentially do the same thing for them. But that was the idea some time ago: take an NFS volume, put HPP on top of that, and then speed up the NFS volume using HPP.

E: Okay, but essentially it would just be passing a flag saying "allow ReadWriteMany access mode", and I would just generate ReadWriteMany volumes for that particular storage class. The price is not huge; I just haven't gotten around to it, yeah.
D: I thought with HPP CSI you can use PVCs from another storage class — yes, right — so couldn't you just inherit the permissions from that?

E: That's one of the ways I could enable it automatically if I wanted to. I haven't thought too much about exactly how to do it, because of the way we're doing it with HPP.
A: Yeah, Michael's idea seems like a really viable approach, where if the persistent volume claim template in the HPP CR specifies a ReadWriteMany PVC, then the volumes we create could have that access mode, right? Exactly, okay.
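A rough sketch of what "inherit the access mode from the PVC template" could look like in provisioning logic — purely illustrative, with simplified field names rather than the actual HPP code or CR schema:

```python
# Illustrative sketch of inheriting accessModes from a storage pool's PVC template.
# The field names below are simplified stand-ins, not the exact HPP schema.

def provisioned_pvc_spec(pvc_template: dict, size: str) -> dict:
    """Build the spec for a volume HPP provisions, inheriting accessModes
    from the pool's PVC template (defaulting to ReadWriteOnce)."""
    access_modes = pvc_template.get("accessModes", ["ReadWriteOnce"])
    return {
        "accessModes": access_modes,
        "resources": {"requests": {"storage": size}},
        "storageClassName": pvc_template.get("storageClassName"),
    }

# Example: a pool backed by an RWX-capable NFS storage class.
template = {
    "accessModes": ["ReadWriteMany"],
    "storageClassName": "nfs-csi",
    "resources": {"requests": {"storage": "100Gi"}},
}
print(provisioned_pvc_spec(template, "10Gi"))
```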
A: I think ReadWriteMany with Filesystem mode, or something like that, would be the requirement, but okay. So, in terms of this revisit of the issue, is there something we want to add here?

E: I just haven't gotten around to investigating it more, so...

A: I mean, I think one way that we could address it — I don't think we necessarily need to... this being an open source community, somebody who wants this feature is welcome to work on it, and we can be receptive. So we could maybe even just provide some advice on an approach.
A: Okay, so we have "extend VM disk size".

A: Okay, it seems that the person... yeah, we had a question from Alvaro from four hours ago. So since we asked the question about closing it, let's give the reporter some time to respond, and maybe, if we come around within two weeks or whatever — Alvaro, I would say, feel free to close it.

A: Okay, all right, so that brings us to the end of the agenda. We have just a couple more minutes before the scheduled end, so we can do a little open floor if anyone has any additional topics or quick things that they wanted to bring up today.