From YouTube: SIG - Storage 2023-07-31
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A: Just another minute or so. We do have a nice, full, and interesting agenda today, so I'm excited to get started on it. It is two minutes after, so I think we may be able to get started. I know, Shelley...
A: You've got something. Both you and Alex have design docs to introduce, and we do have only just a few people who aren't already aware of them, so I'm not sure if it makes sense to introduce those later in the meeting or what we want to do with them. It's up to you, if you'd like to start now, or maybe we can jump down to Alex's CRI-O topic.
C: So it's fixing a long-standing behavior where the file mode is not being passed to the device. In the Kubernetes case, that would be for block PVCs, for instance. What I think happens is that for a while you could get away without using this CRI-O feature gate that passes the security context for block devices, block PVCs, and this PR actually fixes that: you now do have to explicitly set a feature gate to be able to manipulate block PVCs as non-root.
C: If you click on the cherry-pick, the original PR is embedded in there somewhere; yeah, that has the full explanation. I think it's a big issue. I opened a thread in Slack; I'm trying to understand if this is desired, and a couple of things come to mind. First, I think this broke the HyperShift CI. I don't know if we have anyone from there on the call here.
C: Okay, so that was one issue; this is how it revealed itself. We had Oren from HyperShift just reach out, and we discovered that's the culprit. Then we have OpenShift, which I think is impacted: out of the box on OpenShift, I don't think block PVCs will be writable for non-root. So that's a worry.
C: Thank you, Brits. Yeah, I guess if this change is really desired, we have to explicitly set a feature gate basically everywhere: upstream, downstream, CIs. So.
A: Is there a reasonable course of action for us? I see that there's some discussion. Yeah, I'm trying to understand: is there a technical controversy at this point, or is it just a lot of work to backport things? I see these are backports for the CRI-O fix that we just looked at, right, all the way back to... yeah.
C: And thus they're also going to show up for the KubeVirt CI. Yeah, I guess I am a little unsure if this is desired; I mean, it could be seen as a breaking backport.
C: So, if anyone can chip into at least the OpenShift discussion, which I believe is the only place I could think of where I could get the CRI-O people's attention.
A: Okay, this sounds good. I'll definitely be looking into that discussion, and it would be great to have some others of you representing there. Thanks for reporting it here, Alex; it's definitely going to be impactful for our use case, one way or the other.
C: Yep, that's all for this topic, and we don't have to go to the QEMU one. So.
A: It may actually make sense to do this topic, because I see that we have Stefan here, who perhaps has some good context and maybe can offer advice. I wanted to call his name out, and maybe we can touch on it quickly and just see if there are any thoughts here, because I know that we have some people with knowledge of this.
C: Awesome. So it's yet another discussion about the qemu-img convert cache mode. We've had a bunch of these in the past, and recently another one surfaced in the form of this Bugzilla. Basically, someone has a home lab, which doesn't look too shabby to me, with an SSD in there, and they're hitting a bunch of OOM kills on imports. I mean, they wouldn't hit those OOM kills if they had used cgroups v2, but it's going to be a while until we get cgroups v2: we have a lot of OpenShift versions still doing cgroups v1, and supported Kubernetes versions that do cgroups v1. This Bugzilla convinced me to take another look at what we're doing with the cache mode.
C: So the claim by the author here is that by using writeback we're basically masking bad storage, running away from the issue of having bad storage, not the other way around. I believe the quote was that we're "using writeback to compensate for slow storage that cannot handle O_DIRECT semantics." So I was just going to get some thoughts on that.
C: I always thought that using the page cache wasn't wrong, that we would just be using the page cache and doing a good thing, but apparently this could be regarded as something you would do to overcome not-up-to-par storage. So I just want to get thoughts on that, if that's true, if somebody can advise, or maybe that's not something you can determine for sure.
E: So we have a bit of experience with this. One problem is that if you're copying a file, which I guess is what you're trying to do here with qemu-img convert, and these files are pretty big, gigabytes usually: if you use the page cache, you basically evict all the useful local pages of other processes, in favor of data that you're probably not going to read again immediately after it's written to disk.
E: We found it useful to be able to bypass the page cache by using O_DIRECT, and there are other techniques as well, actually, but qemu-img offers this direct mode. Because, if you think about it, your page cache may be on a machine which might have, you know, a few hundred gigabytes of RAM.
E
You
know
you,
don't
you
can't
really
afford
to
get
rid
of
10
gigabytes
of
page
cache,
that's
being
used,
you
know
to
to
Cache
sort
of
programs
and
other
data
unless
you're
actually
going
to
immediately
read
that
data
again
straight
after
writing
into
disk
which,
if
you're
just
copying
a
file,
is
not
usually
the
case.
E: So that's the sort of problem here. We did a lot of stuff in nbdkit using an alternate method, not using O_DIRECT, a different method to avoid trashing the page cache when you copy files. qemu-img convert has the ability to use O_DIRECT.
A: Thanks for that context. Do we know, I'm trying to remember from my RHV history, what the typical device that can't support O_DIRECT is?
A: Are we finding that most storage is able to support it? I believe that oVirt had a method where we'd try to open the device with O_DIRECT, and if that failed, then we'd fall back to this. So I'm wondering if that might be a good approach, but I'm just trying to figure out how often we'll fail on the O_DIRECT case.
E: I'm going to say NFS, but I also can't remember what the exact devices are that fail with O_DIRECT. Okay, so there is, I'll find it in a minute, but there is a really interesting posting that Linus did about how you avoid trashing the cache.
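(The cache-friendly copy technique being alluded to can be sketched as follows. This is a minimal illustration, not code from nbdkit or qemu-img; the chunk and window sizes are arbitrary, and it assumes a Linux host with Python 3. The idea is to periodically flush dirty pages and then tell the kernel the written range won't be needed again, so the copy never pins more than a small window of page cache.)

```python
import os

CHUNK = 1 << 20    # 1 MiB read/write unit (illustrative)
WINDOW = 32 << 20  # flush and drop cache every 32 MiB written (illustrative)

def copy_without_trashing_cache(src_path, dst_path):
    """Copy a file while keeping its pages out of the page cache.

    After every WINDOW bytes, force the dirty pages to disk and then
    advise the kernel we won't read them again, so the copy never
    holds more than roughly WINDOW bytes of page cache.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        written = flushed = 0
        while True:
            buf = src.read(CHUNK)
            if not buf:
                break
            dst.write(buf)
            written += len(buf)
            if written - flushed >= WINDOW:
                dst.flush()                       # push Python's buffer to the OS
                os.fdatasync(dst.fileno())        # make the window durable
                # drop the already-flushed range from the page cache
                os.posix_fadvise(dst.fileno(), flushed,
                                 written - flushed, os.POSIX_FADV_DONTNEED)
                flushed = written
        dst.flush()
        os.fdatasync(dst.fileno())
        os.posix_fadvise(dst.fileno(), 0, written, os.POSIX_FADV_DONTNEED)
```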
C: Actually, the next point is really similar to what you raised, Adam. There are a bunch of projects I could find, like OpenStack Nova, that opted for this approach of using cache=none if the target can handle O_DIRECT, and otherwise falling back to writeback. It seems like a convention, almost; I even found a few other projects that use this. Maybe, like you said, oVirt did the same thing.
A: Yeah, I'm pretty sure I recall a little Python function that checked if the device is able to support it, and the only way you know is by just trying to open the file with the O_DIRECT flag. Then, if it works, you can close it and launch with cache=none; if it doesn't work, you try writeback. So maybe that's something we should try. I mean, the problem is it may only solve this particular user's issue.
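(The probe-and-fall-back approach described here might look roughly like this. A sketch only: the original oVirt helper isn't quoted in the meeting, and `qemu_img_cache_mode` is an illustrative name, not a real API. It assumes Linux, where `os.O_DIRECT` is available.)

```python
import os

def supports_o_direct(path):
    """Probe whether a file or block device accepts O_DIRECT opens.

    The only reliable check is to actually try the open: filesystems
    that lack direct-I/O support (some NFS setups, tmpfs) refuse the
    flag at open time.
    """
    try:
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    except OSError:
        return False
    os.close(fd)
    return True

def qemu_img_cache_mode(path):
    # Pick cache=none when the target handles O_DIRECT, else writeback,
    # mirroring the convention used by e.g. OpenStack Nova.
    return "none" if supports_o_direct(path) else "writeback"
```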
D: So we had cache=none for the longest time, and people had issues with it. I was trying to find the actual GitHub issue so we have some context.
A: It was, yeah, I think that's what it was. It was slow, because I think you have to care about the block size when you're writing, and I don't know exactly how that works, but we found that if you write and then do a sync and wait for the sync to complete, the kernel was doing a more efficient job of batching the writes to the device than QEMU was, or something.
F: Yes, so this could be seen as a performance bottleneck, a limitation of qemu-img. Maybe qemu-img needs to think a little bit more carefully about the I/O pattern that it generates, so that it can take advantage of O_DIRECT without hitting this kind of poor performance, say on NFS. So that is one possible solution here: to go and optimize qemu-img. I think the reason why we haven't really done this, or invested in it too much, is because this is mostly a cgroups...
F: So, if this is something that is important to get working and make efficient, then I think raising a Jira story and basically requesting the virt team to go and optimize it is the way.
A: Okay, let me take a note of that. I'm trying to capture it: "switching to O_DIRECT", oops, "can have other benefits..."
A: Wow, I can't type this morning, I apologize. "...from the page cache."
A: "We noticed performance issues due to slow", I don't know how to write this, "I/O in qemu-img."
A: Okay, so yeah, we'll consider this; I think that is good. I guess I'll ask if anyone else has any other comments. I think we got some really good information around this, so I appreciate that from everyone. I guess we need to decide if we want to try using O_DIRECT again, but yeah, I think we would satisfy one person and make another person upset by making a switch like that.
D: So I think we discussed adding a field so people can actually pick their cache mode: if none is better for you, you can pick none, or maybe we default to none, and if writeback is better for you, you pick that.
A: Yeah, I think otherwise... I wonder if simply... okay, yeah, I think that's interesting. And then with writeback we'll do an automatic sync before we mark the DataVolume as completed, whereas with O_DIRECT we don't have to do that. So yeah, this is something. Okay, so it's another option.
A: I hate to expose a knob like this. This is complicated enough for the experts to discuss; trying to put an option like this in the hands of an end user is going to be fraught with its own peril. So.
A: Yeah, and it may not be provisioner-specific in terms of the configurations where this is problematic.
F: I think, if I were looking at it from the KubeVirt perspective, or the CDI perspective, what I would do is always enable writeback, and I would also want qemu-img to be modified so that it's very careful about how much page cache it dirties, so that it will not be killed under cgroups v1.
F: That way, you don't have to expose anything to the users, and it always works. Then at some point in the future, when cgroups v2 becomes widely used, maybe it'd be possible to think about whether to keep doing this. But essentially what I'm saying is that if we can make qemu-img never abort due to out-of-memory, which I think is possible, then you could just use writeback all the time and wouldn't have to worry about it.
A: Mm-hmm, yeah, I actually really like that approach, where you give it a certain number of megabytes of cache it's allowed to use, and I think it could flush that, then fill it again and flush, or something like that, in order to manage its usage.
A: Okay, sounds good. Let's take the rest of this offline, I guess, so we can move on to some other topics. Thanks again for all the input. If we open an issue in CDI, maybe we could take it there and discuss options, and then we'll decide if we want to improve qemu-img, or if there's something that we can give this user now to move them forward, and stuff like that.
A: So thanks for raising it, Alex. Let's jump down to Stefan's topic. If you're ready, you can bring that one up, and then we'll circle back up to the design proposals that we've got in the queue.
F: Okay, thanks, Adam. I just wanted to find out: I think in the past we talked a little bit about the ability to stream or populate images in the background, without having to download the entire backing file, or whatever you call it, the disk image that might be stored in the container registry. I haven't been following along, so I don't know.
F: Maybe this already exists and you guys have done it, or maybe you've hit some roadblocks, or maybe you haven't looked at it yet, but I was just curious about the status of it, and whether there's any stuff you need from QEMU in order to be able to implement this feature. The big advantage being you don't have to wait for a disk image to be populated before you can launch a new VM from the disk.
A: Mm-hmm, yeah, so we haven't, I would say; somebody else definitely correct me if I'm wrong, but the closest thing I can think of to us doing this is a feature we worked on to do HTTP CD-ROM source images. The idea is that it's much easier to have QEMU connect to a service that's providing storage, like an HTTP server, for hot-plugging CD-ROMs or changing media.
A: So we thought of that as a really good use case for something like this. It ended up being rejected in the KubeVirt community, because they did not want to have a network-type volume for virtual machines, and so we kind of dropped it. The use of our emulation of CD-ROM devices isn't really quite as interesting in a Kubernetes landscape.
A: There are lots of different ways to provision VMs and things, so the project, there are PRs out there and stuff, but it sort of got abandoned. In terms of regular bootable disks, like, for example, cloning an existing image for a new VM, it's not something we've focused on.
A: We've been more focused on taking advantage of CSI clones of existing imported images as a faster way to provision. So yeah, I don't know if anyone else has any examples of things that they've tried, or any other work in this area.
F: I want to respond to that. So if you have a PV that has the image you want to clone, then you can clone it using the Kubernetes storage APIs, as you mentioned. But what about when you have disk images that are stored in the image registry as disk.img files?
A: So the golden images are actually delivered, if we talk downstream, the flow is that, for example, the RHEL guest image will be delivered in the registry with the rest of the containers for OpenShift Virtualization. So the model that's employed downstream is that the master images would be stored there. We actually encourage anyone who wants to use the data import cron feature, and all the built-in golden-image workflows, to do the same with their own registry.
A: What happens then is that the data import cron logic will detect whether we have the latest version on the cluster, and it will trigger a CDI import in the background: the container is pulled to the node and the image is copied to the PV automatically. Then, any time a new, updated image is pushed to that registry, it's automatically pulled into a new PV, and we manage garbage collection. So this is the flow.
A: So what you'll find when you install the downstream product is that, usually by the time you're ready to create your first virtual machine after installing the operator, all the OS images have already been imported from the registry.
A: Yeah, I think so, and we definitely considered some of the streaming stuff; I still consider it interesting, the things that can be done here. There was actually an idea to extend the containerDisk specification. So today, the containerDisk spec has you basically pull the container down, and it has a file called disk.img in it.
A: I think it's maybe in a subdirectory or something, but it's a well-known location, and basically KubeVirt and CDI know that this is where the image is. But there was another idea that we had, where you could have a containerDisk that had a socket file, or some other thing, in it.
A: Instead of a disk image; and when that was there, QEMU would be instructed to connect to that socket, and what this would allow you to do is provide your own arbitrary implementation of supplying data to that socket for QEMU. So it would be a way to experiment with some of these streaming options, or qcow2 layers, for example.
A: If that's something that you wanted to try to implement for a particular use case, you could do that, and it keeps the implementation outside of the KubeVirt code, because, as everyone here knows, there are so many different ways to construct the QEMU block layer into different cool things, and it's difficult to support all of them. So this could be another interesting idea for something like this; I'm not sure.
A: Cool, all right. Any other comments or thoughts on that topic before we go on?
A: Great, okay. So let's pop back up to the top, and I'll ask Shelley if she'd like to discuss. Let's maybe give some context on what we're trying to do with this design document, and then we can encourage some folks to take a look at it and give their comments. Yeah.
B: Sure. So one of the new pieces of work that we're planning to do, and this is the design proposal for it, is basically having a way to use the new Kubernetes API for volume populators with VMs, so that we get VMs with PVCs populated with our disk images, while also benefiting from our storage knowledge, the way we use it in DataVolumes: if you omit, for example, the volume mode or the access mode, it gets filled automatically with our knowledge of the PVC storage.
B: So that's basically the idea, the motivation. The plan for doing it is basically to add to the VM a field similar to dataVolumeTemplates, which will be called volumeClaimTemplates. There, yeah, there's the API example; you can specify the information that you want, and the information that you omit will be filled in, similarly to how we do it in the DataVolume controller, in the PVC that we generate. That's more or less the major work that we plan with this design.
A: Okay, nice. I knew it was somewhere; I just forgot, so let's go down. All right, so here is what she was talking about, basically what we would do. This looks pretty similar to the dataVolumeTemplates section, but... yeah, go ahead.
B: You can see that for the source we have the dataSourceRef, a new Kubernetes field, which will mention the source that you want to be populated for your PVC. It can be an import, an upload, or a clone; you can look at our CDI volume populators doc in the CDI repo. You will need to create the source CR beforehand, as you can see here.
A: And yeah, I think, Shelley, you mentioned this is a lot of work, introducing a new API for something that's awfully similar to what we already have today. But, as you mentioned...
A
The
main
motivation
here
is
that
we
discovered
some
issues
with
data
volumes
and
how
they
behave
with
third-party
things
in
the
cluster,
such
as
backup
software
or
Disaster
Recovery,
workflows
that
make
us
want
to
move
away
from
the
data
volume
concept
as
a
sort
of
wrapper
for
storage
and
go
directly
back
to
the
PVC,
and
if
we
do
that,
we
should
find
a
much
smoother
integration
of
our
workloads
with
the
cluster.
Absolutely.
A: Cool, okay. So I guess at this stage you're looking for comments or input on the design PR, yeah?
B: We can discuss two things, if we have time and we want to. One thing is the cross-namespace issue that this faces: in order to do cross-namespace sources, this will have to wait for the beta version of the Kubernetes support for cross-namespace sources.
B: One thing that we need to see is exactly how we address that this is currently not supported with this new API; there's an open question underneath that. I did talk with Michael.
B: The whole concept of generateName: it's something that we couldn't use with DataVolumes, and it would be really cool to use it with this new API. It would allow us to just keep applying this YAML, and each time it will create a VM and a PVC with a generated name.
B: So we thought about how we can add the generateName to the PVC itself, but Michael raised a really cool idea: basically, in the template, put a constant name, like my-pvc, for example, and use this my-pvc name in the volumes list as well; but before creating the VMI, basically add a prefix of the VM's generated name to the PVC name, and that will create a unique generated name for the PVC.
A: Yeah, we'll have to think about how that works with restoring virtual machines, if there's any kind of, you know, DR workflows and things like that; if that still works. I guess it should.
A: Okay, so it would be interesting if you could... yeah, that's like a proposal; perhaps we could add a description.
A: Okay, you mentioned also the issue with cross-namespace populators. I think another thing that would be interesting is to see what it would look like in the YAML. So, for the dataSourceRef, is it just that you get to add a namespace, or is there some kind of... I can't remember. We should link to the cross-namespace population KEP, or the API...
A
The
alpha
API
page
I
can't
remember
if
there
are
third,
like
other
objects,
that
have
to
be
created
to
create
that,
like
authorization
and
if
that's
the
case,
how
would
that
be
managed?
Because
we
definitely
don't
want
those
authorizations
to
be
part
of
the
spec
I
would
say
so.
That
would
be
interesting
to
me
just
to
kind
of
see
how
it
works
and
then
I
think
this
API
can
be
added
before
we're
ready
and
only
work
with
intra
namespace
populator
CRS.
B
In
CDI
we
do,
we
are
doing
crosstain
space
with
with
creating
tokens
in
CDI,
and
so
we
added
namespace
to
the
Source.
If
I'm
not
mistaken,
Michael
may
may
correct
me
if
I'm
wrong.
G: We do cross-namespace with DataVolumes, and rather than, well, we still can't use the namespace field of the dataSourceRef on the PVC, so we add a special annotation that has the namespace.
G: With templates, it should be the proper support, in which case, yeah, you set the namespace in the dataSourceRef, and then there are resources that have to exist in the source namespace: resource grants that grant access to specific resources from other namespaces.
G: I think what will happen is, you know, the data import cron will be updated to create those things, or SSP, or some combination, to create the cross-namespace data sources.
A: All right, interesting. So yeah, if we could see an example of what that looks like, maybe as a separate chapter of this design document, I think that would be cool. And then, basically, if we find that we don't support that, I guess if the grant doesn't exist, which it wouldn't until it's supported properly, then that just wouldn't work yet, so it may be kind of a natural evolution to bring that feature in. Okay, so I wanted to just see if I could... I'd like to see...
A
If
anyone
has
any
other
comments,
we
do
have
actually
one
other
kind
of
important
design
proposal
to
talk
about
today
as
well.
Another
feature
to
call
attention
to
so
any
other
closing
comments
on
this
one.
A: Please do take a look at it and offer your thoughts; if you could, we'd appreciate that, I'm sure Shelley would. So let me go ahead and click on Alex's design doc, and why don't you give us the scoop on this one, and I'm going to try to...
C: Yes. So recently what we discovered is that for some storage backend solutions, specifically that was Ceph, Ceph RBD, there was this specific I/O pattern coming from Windows VMs that would just not play nice with the default parameters for utilizing the backend storage.
C
About
utilizing
the
back-end
storage
and
the
solution
for
that
ended
up
being
passing
a
lower
level
mapping
option
to
the
kernel
RBD
driver,
and
this
opens
up
like
a
whole
box
of
potential
issues.
For
example,
we
have
also
providers
like
Port
works.
That
would
give
you
out
of
the
box
a
large
number
of
of
ways
to
utilize,
your
storage,
a
bunch
of
storage
class
objects
and
each
of
them
uses
a
different
set
of
parameters,
and
only
some
of
them
may
be
preferable
for
VM
workloads.
C: But it may not be wise for us to piggyback on that, because that is suited to pods, whereas for us, as we've seen with the Windows I/O pattern, we have our own needs. What's best for pods by default might not be the best for us, basically. So what I'm suggesting is that we get this special virtualization storage class. Naming is not something we have to solve here, but basically it's just...
C
Instead
of
storage
class
kubernetes,
I
o
it's
just
a
storage
class
dot
cube
root,
IO,
and
that
would
tell
that
would
tell
us
what's
the
preferred
storage
class
for
VM
workloads
and
they're
about
there's
a
bunch
of
compatibility
concerns
like
cloning
between
one
one
variation
of
storage
backend
to
another,
for
example,
somebody
may
have
been
using
the
default
kubernetes
storage
class
for
a
while,
and
then
we
upgrade
to
a
version
that
has
a
vert,
specifically
a
word
storage
class,
and
we
may
want
a
clone
between
them
and
we
have
to
make
sure
that
works,
because
that's
very
basic
usage
of
the
product,
so
yeah
so
I
I'm
thinking.
A: Cool, and I think one of the interesting questions would be who's responsible for annotating the class. In Kubernetes this is sort of left open; a lot of times it's the cluster administrator, other times it can be the storage vendor.
A: Some storage vendors, when they install their operator, can check if there's currently a default class, and then they can set one of their storage classes as the default. So I was thinking of, for example, the Ceph use case: if they would like to apply this annotation to the storage class they've created for virtualization, they could do that, which would be kind of sensible, since it's created for virtualization. So I don't know if there's any discussion on that; that's what I was going to call out.
C: I think we'll be really close to what's happening in Kubernetes. I know a lot of operators, like the OCS operator, just put the default Kubernetes storage class annotation on the storage class they want to advertise; mostly I think that's the RBD one. So they just slap the annotation on it, and the user may not even know.
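(The selection logic such a class would enable can be sketched as below. The virt-default annotation key is an illustrative placeholder, since naming is explicitly left open in the proposal; the Kubernetes default-class annotation key is the standard one. The dict shape stands in for StorageClass metadata.)

```python
# Placeholder key: the proposal leaves the exact name open.
VIRT_DEFAULT_ANNOTATION = "storageclass.kubevirt.io/is-default-virt-class"
# Standard Kubernetes default-class annotation.
K8S_DEFAULT_ANNOTATION = "storageclass.kubernetes.io/is-default-class"

def pick_storage_class(storage_classes):
    """Prefer a class annotated as the virtualization default, falling
    back to the ordinary Kubernetes default class.

    `storage_classes` is a list of dicts shaped like StorageClass
    metadata, e.g. {"name": ..., "annotations": {...}}.
    """
    for ann in (VIRT_DEFAULT_ANNOTATION, K8S_DEFAULT_ANNOTATION):
        for sc in storage_classes:
            if sc.get("annotations", {}).get(ann) == "true":
                return sc["name"]
    return None  # no usable default advertised
```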
A: Yeah, cool, all right. So we're kind of running up on the edge, so I just wanted to make sure that I made some room for questions or comments on this proposal, if anyone has any thoughts or wants to weigh in; we'd also appreciate review on this idea.
A: Thanks, Alex. Let's see, I think the only thing that we left uncovered so far is the triaging of issues, but we are so close to the end here that I don't know if it makes sense, unless anyone's aware of a particular issue that we should try to cover in a minute or two, if there are any specific ones that somebody would like to call out.
A: If not, I would say let's defer that until next time; I think we're in pretty good shape on issues. I guess I'll just wrap it up, then. Thanks, everybody, for joining; we had some really great topics this time around.
B: Can I just add a reminder, Adam, about our new lightning-talk idea? Oh yes, for those who weren't in the last meeting: we are wanting to try to do some lightning talks, so if you want to share something in this forum, or you would like to hear about something that our group can share, please feel free to either add it here or send me a message; either way is good. So, just a reminder.
A: Oh, cool, thanks. So are you intending to present this one specifically? Have we scheduled that one yet, or are we just collecting ideas so far?
B: Yeah, I think we are collecting ideas. I think when we have enough time for a lightning talk in this meeting, and enough participants that, you know, will benefit from it, we can do it. Like last...
A: Okay, sure, that sounds good. Cool, thanks for the reminder; so yeah, it's right up here, you can reach out to Shelley directly. And with that, I'm going to close the meeting. So thanks again, everyone, for joining in and for your participation; it's great to see you guys. The next one will be in two weeks, and we hope to see you there as well. Have a great week. Thanks.