From YouTube: SIG - Storage 2023-06-05
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
B
He's going to be a little late, so we can start without him.
C
Yeah, right, so my topic is about the tokens we give out on cross-namespace cloning.
D
Yeah, given the way cloning currently works (unless there is something going on that I'm missing; I didn't look at this issue), but given the current design, with smart cloning the namespace transfer resource can get created early on, even before the source exists or is populated. So the assumption is that that resource can get created within that five minutes, and once it exists there should be no need for a long-term token, unless it gets deleted or something keeps it from getting created within five minutes.
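For context, the kind of cross-namespace clone being discussed is requested through a DataVolume whose source PVC lives in another namespace, which is what triggers the clone-token flow. A minimal sketch with hypothetical names (the API shape is CDI's v1beta1 DataVolume):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cloned-dv          # hypothetical name
  namespace: target-ns     # namespace the clone lands in
spec:
  source:
    pvc:
      namespace: source-ns # a source in a different namespace is what
      name: source-pvc     # requires the clone-token authorization
  storage:
    resources:
      requests:
        storage: 10Gi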
C
Okay, yeah, that makes sense. Then I wonder what I saw in that disconnected environment today. Maybe it was one doing cross-namespace cloning, but then tokens are not necessary. I don't know, anyway, yeah.
D
This is all going to change soon anyway. As we're working on the populators, there will be a long-term token all the time for the new flow, since the way we're doing things is changing a bit.
D
Yeah, pretty soon there will always be a long-term token.
E
Thanks, yeah, I was just gonna ask if you will share the screen. I can, yeah.
F
Okay. So basically it started with the fact that applying a garbage-collected data volume will basically fail.
F
That breaks a basic principle of Kubernetes for the CR, and it is also a must for GitOps, which uses the apply mechanism and expects the data volume and the others to exist after they are applied.
F
Now, the initial response in the thread was just to disable the GC, which has been the default for about half a year or a year, something like that, which will get us back to the state where we were before it. And the idempotency will be perfect, because that way you can apply the DV again and again and it's the same; it will succeed for sure. Alexander mentioned in the thread that disabling GC may cause some regressions in Velero restore; I'm not sure.
F
That caused some noise, so I suggested that PR trying to solve this issue. Michael was not that happy about it, but it was an open discussion there. I hope we can agree on something; I'd like to hear your voices about it. What do you think we should do? How should we try to tackle this issue: go with the PR, which as I understand it is solvable, or go with disabling garbage collection by default?
G
I would say that when I started using CDI I was pretty confused by the fact that DVs are automatically deleted after the import is completed. That was so surprising to me, because no other entities in Kubernetes work this way, and I wondered why this design was chosen for implementing this feature. We also had this problem, but we used...
G
But yeah, I agree that a DV is more like a Job, which should be deleted after importing. I think it might be named a different way, like a volume import job or something like that, but never mind, we already have what we have. And I would suggest that DVs should continue existing after you imported them. I think they should continue existing, but it's somehow confusing that they're being removed.
G
There were problems, yeah, but this garbage collection also works kind of weird to me, because we rule the DVs by an upper entity. So we introduced our own entities which create DVs, and these DVs are controlling PVCs and all this stuff. So when we set our owner reference on a data volume, it will not get garbage collected by the garbage collection.
G
No, for example, I have an upper resource, like a virtual machine disk or virtual machine image, and when I create this resource it creates a data volume, and for this data volume it sets an owner reference to the upper resource. And if the DV has this owner reference, it continues existing even after it has completed.
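A sketch of what's being described, with a hypothetical upper resource kind and names: the upper controller sets itself as the owner of the DataVolume via metadata.ownerReferences, and, as discussed here, a DV carrying such an owner reference is left alone by the DV garbage collector.

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-disk
  namespace: default
  ownerReferences:
  - apiVersion: example.io/v1                 # hypothetical upper resource
    kind: VirtualMachineDisk
    name: ubuntu
    uid: 1b4e28ba-2fa1-11ed-a261-0242ac120002 # UID of the owning object
    controller: true
spec:
  source:
    http:
      url: https://example.com/ubuntu.img     # hypothetical source
  storage:
    resources:
      requests:
        storage: 10Gi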
D
And in this case we don't garbage collect them, because we can't set... we don't necessarily have the permissions to set that.
G
Exactly, and the only thing I wanted to point out is the unexpected behavior, that every time it is different. I would like to have just one behavior. If it's a job, then it should be a job and it should be positioned like a job. If it's a resource which you can use and connect later to a virtual machine, then it should continue existing until it is removed.
E
Yeah, so I think one of the main motivations for introducing this was the backup and restore case, where we had some complications when trying to restore. When you restore a data volume there could be a race condition depending on the order: if the PVC has already been restored, then it would work one way, and if the PVC was not there, then our DV controller would try to create that PVC fresh and re-import.
E
So the idea which led to this was: let's garbage collect the data volume so that we're just dealing with PVCs, because that's all that really matters after it's imported. I think this was the idea, and what I would say is that I believe we're finding this idea has resulted in some collateral damage, which seems to be borne out by your experience as well: it's actually causing more confusion than it's clearing up.
E
We also handle the data volume case correctly with the Velero plug-in that we release, so it's kind of a solved problem for the time being, and the long-term solution really is to move beyond data volumes and be using PVCs and populators directly. So, just to summarize myself: data volumes have never fit as cleanly into the Kubernetes world as we would have liked, and we're finding this out and trying to correct course.
E
But you know, we took a couple of attempts to try to solve that, one of which was garbage collection, and I'm not sure it's turning out to be the panacea we thought it might be.
E
Yeah, so we're kind of left with this technical debt, if you will, around data volumes.
E
So what is the state of it upstream? Is it on by default in a default CDI deployment, or is it off by default? Okay.
E
I will point out one of the issues that's mentioned in the PR; it was cool to see that come out so quickly in the responses.
E
There is a use case where... so this PR makes it so that when you recreate a DV where there was a garbage-collected one, it just puts the DV back and doesn't do anything different. But I think there is a valid use case where somebody, like us, does want this check to stop you, because people could get confused: if there's a PVC around, they look for the DV, it's not there, so they're like, okay, I need to create this.
E
So I think this has the potential to create additional confusion, because it's now doing something special in another way, and it feels like a slippery slope, because then we'd have to correct that too. It pains me that a lot of great work was done to enable garbage collection, to enable this idea, and that we may have to essentially revert that functionality, but sometimes this happens, and I think the best way might just be to cut our losses and turn it off.
E
I think the easiest way to do that would be to switch it upstream in the CI lanes, turning it off, but leave it on for a little while in the default deployment so that we can iron out any issues, and then we could turn it off by default there upstream as well. I don't know if that makes sense.
G
Great. Can I just point out one more issue with DVs: the data volume spec is immutable, and it still contains the requested storage size for the PVC. In this way a user can be confused by the fact that he can't actually resize the data volume.
E
Because we didn't want to basically make the DV a pass-through for everything you can do with a PVC, because then people would start to say, well, I want to snapshot this DV, so why can't I? And then we'd have to implement that API. So essentially we wanted to say: once the data volume is done populating, its work is done. It's now essentially a dead resource, and then you should be manipulating the PVC directly.
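So a resize, for example, is done on the claim rather than the DV. A minimal sketch with hypothetical names, assuming the storage class supports volume expansion: you raise spec.resources.requests.storage on the PVC the DV created.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ubuntu-disk        # PVC created by (and named after) the DataVolume
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi        # raised from the original 10Gi; the CSI driver expands the volume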
F
Okay, so I think we'll continue the discussion after this meeting, because I want to minimize the risks here. You know, it's both in the CDI and KubeVirt worlds, so we need to think about how to do it the safest way.
E
Well, KubeVirt should be able to handle it. Basically, the way that's implemented is: if the DV is not there, we look at the PVC to see if it referenced that DV, and then we accept it, but the...
F
On that path you're right, but you know, the case where the DV is gone is not tested, so we need...
G
I think the safest way for now is to disable garbage collection, because the user will get the expected behavior: he creates the DV, it gets imported, and then he can use it. He can use the PVC as well, but in case he wants to resize it, yeah. For example, you can create a persistent volume claim and you can create the persistent volume in a different storage class and they will get bound.
G
So what's the problem with not allowing the user to modify the size? I think these two facts are somehow similar. Anyway, the user is protected by the fact that he can't edit the size of an already created DV, and I think this is not a problem.
E
Yeah, I mean, we like the deletion chain to work correctly, like the cascading delete. So this is the primary reason for the owner reference.
G
Yeah, I mean, what should the behavior be? If the user removes the data volume, should it remove the PVC itself?
E
I mean, this is the traditional and expected behavior, since the DV created the PVC. You can manually delete that owner reference, but typically this was the desired approach, especially when you're using a dataVolumeTemplates section in the VM, because that's intended to couple the disk resource with the VM, so that when you delete the VM its disks are removed. So this is traditional behavior that we wouldn't want to change.
E
So what I would say is that hopefully you would review what's happening with what we're doing with populators, and even try to use the standalone populators when those come out. I see that it looks like Alexander added this next item, which I guess we can kind of jump to, and that includes the populators, so I think the way of the future is to use those directly.
E
There are some conveniences the data volume provides to you that would be missing from the populators directly, in particular the storage profiles, which kind of allow you to omit access mode and volume mode, among other things. But I think it would be great to have some folks trying those standalone populators to help us, because what we're trying to do is smooth over these rough edges as we go forward, even working towards the CDI 1.0 API.
E
And sorry, I kind of stole your thunder a little bit, Alexander. So is there anything else that you wanted to mention about this release? No?
B
I just wanted to say that we did release the first alpha of 1.57, mainly so that we could import the API first into KubeVirt, so it would, you know, work properly with all the different phases.
B
Well, you know, people can try it. This release does not have the integration between data volumes and populators; it's just the standalone populators.
G
Are there any examples of how to use the populators? And from the user side, is there anything changed? I mean about creating the PVC we specified, yeah.
D
To make a couple of notes on this release and populators in general: the general note is that, yeah, we put some populators out there and you can use them. There is one bug we found over the weekend, so we'll probably do another release to fix that; basically, PVs aren't getting cleaned up correctly. But the other, kind of bigger, note is that the Kubernetes cross-namespace data source is still alpha, and until that's beta you're gonna have to use data volumes to do cross-namespace cloning, just leveraging our weird token mechanism that we talked about earlier. So until Kubernetes is beta on the cross-namespace data source, we're going to have to use data volumes for cross-namespace cloning.
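For reference, the alpha Kubernetes feature being referred to appears to be the CrossNamespaceVolumeDataSource gate, where a PVC's dataSourceRef may name another namespace and a ReferenceGrant in the source namespace authorizes it. A sketch with hypothetical names:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
  namespace: target-ns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:                      # the namespace field requires the alpha
    apiGroup: snapshot.storage.k8s.io # CrossNamespaceVolumeDataSource gate
    kind: VolumeSnapshot
    name: source-snap
    namespace: source-ns
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-clone
  namespace: source-ns                # lives in the source namespace
spec:
  from:
  - group: ""
    kind: PersistentVolumeClaim
    namespace: target-ns
  to:
  - group: snapshot.storage.k8s.io
    kind: VolumeSnapshot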
E
And then I guess the other thing which is due to be released soon is the data volume integration. What that means is that when you do use data volumes as part of your workflow, it's going to, transparently to you, under the covers, create PVCs that are populated using the populators.
E
So it's kind of exercising it that way, and it's supposed to be transparent to the end user. That's another thing we'll definitely want to get feedback on from the community, in terms of how it's working for your particular use cases, to make sure that it's still working. It's not really supposed to change anything in terms of the experience with the data volume, right, so.
D
So there is a VolumeCloneSource, which, well, at least the one that was released was just for PVCs, but we just merged the code for doing snapshots as well. This is again very preliminary; there's at least one major bug and it's very incomplete, so use it at your own risk.
D
So basically we have a bunch of custom resources that you can create. I'll try to find my PR, but it should give an example of how to use it.
E
They will cover the full suite of operations, but the major caveat that Michael mentions is that cross-namespace cloning does not work directly with the populator itself. You need some of that special secret sauce from the data volume controller to achieve that for the time being, but that should be remedied by that cross-namespace populators API from Kubernetes when it graduates.
E
Yeah, and this is something we should remedy. Granted, I kind of jumped out here and said we want people to try it, and at the moment it's a bit of an internal implementation detail. But since we're discussing these pain points around data volumes, I would really love to have...
E
Right, so you create this first CR; you'll recognize this syntax from the data volume. It's basically specifying the clone that you want to do, and then you just create a PVC directly and use the Kubernetes populators API, the dataSourceRef, to say: populate using the CR that you created above. And that's how it works.
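A minimal sketch of that two-object flow with hypothetical names, assuming the CDI v1beta1 populator CRDs from the alpha release: first the clone-source CR, then a PVC whose dataSourceRef points at it.

apiVersion: cdi.kubevirt.io/v1beta1
kind: VolumeCloneSource
metadata:
  name: my-clone-source
  namespace: default
spec:
  source:                       # same source idea as a DataVolume clone
    apiGroup: ""
    kind: PersistentVolumeClaim
    name: source-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:                # Kubernetes populators API: populate from the CR above
    apiGroup: cdi.kubevirt.io
    kind: VolumeCloneSource
    name: my-clone-source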
E
It's definitely going to be more native to the way Kubernetes works. But also just remember that we're aiming to ensure that if you just continue to use data volumes the way that you are today, you shouldn't notice this; it's meant to be transparent.
D
Yeah, so the data volume integration will create a temporary VolumeCloneSource resource for each operation and then delete it once it's complete, so at the end of the day you'll still just have a PVC and a data volume.
D
Just going back to the previous topic a little bit: I think evaps mentioned perhaps a bit of confusion about having a data volume around and the PVC around, and about mutable fields, like the requested size.
D
Maybe I missed it, but what were your thoughts on that? Do you think that, for example, the size in the data volume should mirror what is in the PVC, so that if a user updates the data volume we update the PVC size? Is that what you would expect, or is the current behavior fine, where it's sort of just static and never changes?
E
I think he's typing the other agenda item, so...
E
Right, so I just wanted to see: did you want an opportunity to answer Michael's question as well? He was curious about what you would expect the data volume behavior to be when the size is updated.
G
My opinion on that? Or, yeah.
G
I can say a little bit about that. We are developing our own interfaces for users to let them manage the images and drives for their VMs, and we let our users update sizes inside them. We have our own abstraction, like data volume; it's called virtual machine image or virtual machine disk, and if the user updates the size, it also updates the PVC size, and I think this is pretty understandable for every user. Yes, he can update the size on the PVC directly, but our controller just checks...
G
...whether the size of the PVC is greater than the size of the data volume, or of our own abstraction, and in that case it will do nothing. But I think it's nice to have this opportunity to update the size from the DV side.
D
Okay, so in your case the size represented in your resource is not directly the value. So if the user updates the PVC directly, it won't necessarily update your resource, but if the size requested in your resource is bigger than the PVC, you'll update the PVC.
G
Yep, exactly. I think it is pretty understandable, because the spec always represents what is expected, what the user needs. So the first time, the user specifies that he needs some data volume with a specific source, for example an Ubuntu image, and he needs this size to let this image be uploaded. In the other case, when he wants to extend this drive, he can extend it directly by modifying our data volume's spec.
G
Another thing is what to do if the user wishes to update the storage class. In this case I think we have to do nothing, because the upstream logic in Kubernetes does actually the same thing: if you create, for example, a StatefulSet and specify one storage class name and then try to update it, it will not let you do that. But if you remove the StatefulSet and create another one, it will just continue consuming the existing persistent volume claim with the different, old storage class.
D
Yeah, I think the storage class is only really updatable if the PVC is kind of pending because it couldn't get assigned: if it's nil and it couldn't get assigned a storage class during provisioning because there's no default storage class, then you can update it later. I think that's the only time the storage class on a PVC can be updated, and then updating the storage class in the data volume, you know...
D
Yeah, I guess for that small window we could make it happen, but I think with storage class updates there's only a small window for when that's valid.
E
Yeah, this is kind of the road we started going down with mutable fields, you know, because then we wondered what it would be like if you updated the source part of the DV and what the proper behavior should be, and it just starts to get a little complicated and not always clear.
E
So really, I think we sort of left it at: the data volume is a proxy for the PVC object, but we did stop short of allowing you to update the size. I sort of hesitate to start permitting that, because we really are trying to deprecate the data volume, so we're not adding a lot of new capability there. That one's a little cut and dried, but the rest of it not so much.
E
All right, shall we move on to the block-based qcow2 drives, just to give you a chance to bring that topic up? We've only got about five or so minutes here.
G
Well, okay, I'll try. First I would say that I would like to start a new project which will make it possible to use shared LUN devices. Like, you have a shared block device; you can use LVM to cut it up for the virtual machines. And I need qcow2 support for this approach, and I found that oVirt actually uses the following approach: it writes qcow2 files directly onto block devices, and I was thinking...
G
...how can we reuse this pattern in KubeVirt? Because in KubeVirt we are always expecting from CDI either the block device itself or a file system where we can place qcow2 files. If you have any thoughts on that, please go ahead.
E
Yeah, so we've discussed some of these ideas in the past, and one idea that would actually be super interesting to see an implementation for is container disks that are smarter, a smart kind of container disk. What I mean by this is: today...
E
...when we look at a PVC, if it's a file-based one we're looking for a disk.img file, and if it's block-based we're just accessing it directly. But container disks are file-based.
E
We were thinking it could be interesting to add a capability to the container disk where, if the disk.img file is not there, we could look for an NBD socket in the container disk or something like that, and basically teach KubeVirt that if you don't find the disk.img file, you just connect to the NBD socket when that exists. Then your container disk can basically implement the connection to the device in any way that you would want to.
E
So if it's a qcow2 file underneath, that's fine; basically whatever it is. This was an idea that we thought was interesting, because then we don't have to teach KubeVirt about different ways of constructing these, and it doesn't necessarily need to be container-disk specific either. It could essentially be any time KubeVirt connects to a PVC that's registered as file-based.
E
Now, you mentioned block; I don't know how we could necessarily make that generic, although with a container disk I guess you could, because you could have the container disk attached with the capability to talk to a certain block device. Anyway, I don't want to take a ton of time, but this was one idea that was kicked around in this area.
G
...it has time to extend this drive, yeah. And in the case of KubeVirt there is no opportunity to do that, because it's like two separate components: one is the CSI driver, which can handle this extension, and the second one is libvirt, which runs inside virt-launcher, yeah.
E
So this is 100% by design, because that logic tightly couples the virtualization layer with the storage layer, and as soon as you start to go down that road, things get complicated really quickly and it makes it difficult; you have to have lots of storage-specific logic within the virt layer.
E
There is a project called QSD, the QEMU storage daemon, and it's actually a CSI driver.
E
That's being worked on; I don't have a link for you right now, but maybe someone else on the call does. Essentially, QEMU can actually run in a mode where you're really just running the I/O layer of QEMU, exposing devices for consumption by another layer. So this is something that could be implemented in QSD, for example, because QSD could receive the, like...
E
That still preserves the isolation between the virt layer and the storage layer, because all of that could be hidden inside of the CSI driver itself.
G
Got it, thank you, thanks a lot, I'll take a look. My latest question is how KubeVirt is handling qcow files right now. I saw that it can use both: it can work with qcow2, and it can work the same with raw files if they are placed in filesystem data volumes. Is there any opportunity to specify what type the drive is?
E
Someone can correct me if I'm wrong, but as far as I know we're making the assumption of raw, and if I remember correctly, if you want to use qcow2 you have to state it in the domain XML. That's for security reasons, because someone can construct a qcow2 that basically fakes the qcow2 metadata and tricks libvirt into accessing it, in order to grant access to parts of a device that should not be accessed, like to bypass permissions.
E
I could be slightly wrong about that. We're not fundamentally opposed to the qcow2 format, but mostly we don't want to support qcow2 chains, like backing files and such, in the primary API yet. But again, if you follow that approach where you have the container disk with the NBD socket in it, you could put whatever you want behind that NBD socket.
E
It could be a chain of 100 qcow2 files that are put together by whatever method your deployment wants, as a method of experimentation. It's tough, because you kind of have to support a limited number of use cases to keep things maintainable.
B
To do qcow2 files, you can use a sidecar.
B
What we call a hook, yeah, where you change the actual device specification in the domain XML. I think that's what NVIDIA is doing in their setups, yeah.
E
Let me see; I was just looking at an example of this, so let me see if I can pull up a link to that, and then I will share it with you. We're kind of getting to be at time here, but I found there is a cool one, yeah. This is from Peter harachuk; I'm gonna share it with you, actually I'll just open it, whoops, and I'll share this link.
E
This is actually the code implemented to do it. You're basically creating a sidecar container, and then you use these hooks.kubevirt.io annotations to say which images you want added as a sidecar. I think this has to be enabled by a feature gate, by the way. This particular hook uses a parameter which is specified as an annotation on the VM, and if you look at this repo you can see how it works.
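A minimal sketch of that wiring, with a hypothetical hook image (and assuming the Sidecar feature gate is enabled): the annotation on the VMI asks KubeVirt to inject the sidecar, which can then rewrite the domain XML before it reaches libvirt.

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-with-hook
  annotations:
    # JSON list of sidecar images; the image here is hypothetical
    hooks.kubevirt.io/hookSidecars: '[{"image": "registry.example.com/domain-xml-hook:v1"}]'
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 128Mi
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo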
E
So you have the opportunity to receive the domain XML before it's passed down to libvirt and modify it, so you can do whatever you want to the XML.
G
Okay, I'll keep you updated. Thank you.
E
All right, so we did not get around to CDI issue triage today, and I don't want to dive in here since we're a little bit over time anyway. So I would just say thanks everyone for joining and for the great discussions, and we'll see you here at the next one.