From YouTube: SIG - Storage 2023-03-13
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A: All right, all right, so let's go ahead and get started. Welcome, everyone, to the March 13th edition of the KubeVirt SIG Storage call, and we'll jump right into the agenda. Alex, you have the first item: move the clone package out of CDI into the CDI API library.
B: So, as you know, KubeVirt and other external projects may in the future want to vendor in the CDI API library, and that's all fine, but today we also require that they vendor in the regular CDI package, and that is because we have this little leftover, which is the clone authentication bits.
B: There's a bunch of KubeVirt PRs that try to get rid of this, but I think the real way to go is to just make the change in CDI: move the leftovers to the API library and make sure that any external project only ever needs the API library and nothing else. I just wanted to get some thoughts on this — maybe there's a good reason not to do it.
D: Yeah, I don't know what other libs that clone package brings in — probably nothing that's not already imported.
B: Yeah, they just copy-pasted it. Oh, okay — so, all right, I get the point Alexander is making: we'd just be moving the problem into the CDI API library.
B: I think we already have some code there. Today we have the isPopulated helpers — a couple of helpers that tell you if a PVC is populated and such.
A: But isn't this kind of a separate thing? Some projects may just need the CDI API, and if other projects actually want to implement the same clone-permission logic, that seems like a separate thing. So having separate imports — I guess I'm not sure why that's problematic; it's just two vendored modules, or two import statements. What's the problem with them being separate?
B: Yeah, yeah — if you're going towards saying that there is no real problem today, you're right: it's just pulling in the clone package, and that doesn't bring any overhead at all.
B: It's just — I think if you make a huge PR, or make a deps update, and accidentally bring in something new that CDI brought in, and then somebody overlooks that — you know, the fun begins.
A: And why would that be mitigated if this code was in the API library?
D: Yeah, I think it's just — you know, with KubeVirt, if you import kubevirt.io/containerized-data-importer, there's a lot of stuff in that go.mod that's going to get parsed, and that means it's going to get included in the dependency graph. The CDI API module's go.mod is a lot smaller; presumably adding this to it won't increase it very much. So it's just dependency parsing.
D: I think, with importing CDI, we have some things in our go.mod that make it so you have to explicitly add a replace in your consumer go.mod. I think that's probably some of the objection to it. Yeah, I don't know.
D: We'd trim the dependencies a bit by moving it into the API directory, and maybe eliminate the CDI dependency from KubeVirt, but yeah, it's just about parsing the dependencies.
A: All right, so I guess the next step on this would be to continue the discussion in the PR. Any other comments from anyone on the line right now?
B: Yeah, so this is about — okay.
B: So recently we introduced cloning from snapshot, and the next step is to have the DataImportCron feature capitalize on this. You would have this natural source, and that would be your source for all VM disks, and any VM disk you make would be cloned from that snapshot. And, of course, Ceph will obviously opt into this, because with RBD layering it makes total sense to keep one snapshot and just clone from it. A couple of questions I had about this: firstly, the storage profile bit. The storage profile seems like the correct place to signal that cloning from snapshot scales better on certain storage, and I was just struggling with where to put this new configurable.
B: So yeah, my alternative suggestion is a new spec field — maybe something along the lines of a golden source type, something like that — and the golden source type is snapshot, and you know that.
B: And it could be an existing Kind from the Kubernetes API or something, so that's nice.
A: I'll put those in — if you could add that, just to help people contribute. The second thing is, yeah, this sounds like a classic naming-is-hard issue, and I just wonder: do we use the term "golden" anywhere else in the code? I'm not sure that's a...
A: ...it's kind of a casual term. I think what this is actually asking, or setting, is really specific to the DataImportCron controller. So it might be — you know, it's basically: how should the DataImportCron controller create or store — what's the format in which it's storing the imported images?
A: Just being really specific about what it is, instead of, you know — I don't know, that would just be my suggestion: try to find something like that. And yeah, clone strategy is a — yeah, we can't override that or use that in any way, because that's really one-to-one cloning.
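The more specific naming being suggested could look something like the sketch below. The type name, constant values, and field semantics here are illustrative assumptions, not the actual CDI API — just a way to picture a DataImportCron-scoped "how should imported images be stored" setting rather than a generic "golden" term:

```go
package main

import "fmt"

// DataImportCronSourceFormat is a hypothetical type describing how the
// DataImportCron controller should store imported images: as a plain PVC
// (the existing behavior) or as a VolumeSnapshot for efficient cloning.
type DataImportCronSourceFormat string

const (
	// SourceFormatPVC keeps the imported image in a regular PVC.
	SourceFormatPVC DataImportCronSourceFormat = "pvc"
	// SourceFormatSnapshot stores a VolumeSnapshot of the imported image,
	// which scales better on storage like Ceph RBD with layering.
	SourceFormatSnapshot DataImportCronSourceFormat = "snapshot"
)

func main() {
	f := SourceFormatSnapshot
	fmt.Println(f) // prints "snapshot"
}
```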
A: Just one other comment: I think we'll want to open this up for anybody who happens to know that another particular storage provisioner would prefer this strategy as well. It can definitely be added to, you know, the other storage profiles.
B: It's mainly about scratching off an option. We had this thought that we might want a DataVolume API that says "give me a snapshot instead of a PVC", but that seems kind of clunky. We already committed to mapping DataVolumes to PVCs very early on, and I think it's good that we did, and maybe having this kind of DataVolume API that says "hey, I want a snapshot in the end"...
B: ...so "don't give me a PVC" — it's kind of clunky, but I did want to raise it. So: should we be managing the volume snapshot ourselves?
B: Yeah, that was the original idea, but we later scratched it off and went with just snapshot sources. So somebody is creating their snapshots alone — manually or not — and those are served as the source. There's no smart clone happening; you know, CDI doesn't create this type of VolumeSnapshot.
A: Yeah, I think it's important to remember the evolution of how we're treating DataVolumes these days, especially in light of garbage collection and some of the other things we've done. With DataVolumes, we're really emphasizing that they're used for provisioning the contents of a PVC, that they're kind of useless after that, and that afterwards, once the PVC is prepared, we should be using the PVC directly.
A: So in light of that design decision, it doesn't make sense to me to have a DV API that says we want something that's already been imported to be stored in a certain way, because that's the consumption side, and at the consumption side we want to use Kubernetes resources directly. So I think it's more correct that, if DataImportCron decides it wants to maintain some snapshots instead of PVCs, it should be doing that on its own.
A: Okay, great, thanks for raising both of those. So the third major bullet point we have is to triage CDI issues, and I can jump into that.
A: Okay, so we are on this issue about adding an expected hash to DataVolume. I believe this is about making sure the import was successful by comparing a hash.
A: I think my biggest concern about this feature is that we do some manipulation of the image after it's been imported — so at what phase would we check? I guess if we have a stage where we're downloading the exact content, checking the hash before we start to manipulate it, it could be possible; but since we're doing streaming, inline conversions, I'm not sure that's very practical.
D: Yeah, we'd essentially have to always do the download-to-scratch step and then verify after that.
A: And I mean, I think we could potentially, in the presence of the hash field, change the way that it imports. But I do wonder: how critical is it to have something like this? It seems like a nice, reasonable feature otherwise. Well...
D: I think — I mean, HTTP import is totally insecure without a hash. With HTTPS, at least there's no man in the middle, but you may still want to validate that it's what you expect it to be — you know, that no one changed it on the other server side or something. Mm-hmm.
A: Okay, yeah — so we've been running with this for a long time. I'm just scrolling down to see the latest. Oh cool, Eric Blake has a comment.
A: So I don't want to take us down the rabbit hole of looking at this right now, but I guess it'd be interesting if there was somebody who wanted to take a look at that and see if it could be applicable, and then we can decide whether we want to further propose that addition.
A: Important yet — okay, okay, let me — oops, I'm having a hard time with the overlay — there we go. Okay: expand/extend metrics for the importer pod. Let's see what they're looking for: when calling the metrics endpoint of the importer, we only have a percentage of transferred data; it would be better to have raw values.
C: It doesn't know — whoever opened the issue, their account is no longer there; that's why it shows as ghost. Oh okay, yep. Well, I think this is related to the VDDK import, and I guess the percentages are not really useful: if you run into a large block of zeros it'll skip a bunch of stuff, so normally the percentage will jump and then the estimation is incorrect. So...
C: I think this is a relatively easy thing to add, at least for the values that we have.
A: Is the structure of the returned data such that we could add a field there?
A: It's a Prometheus endpoint, so you would just put whatever in there.
A: Okay — and do we have the transfer rate available for certain types of transfers?
C: So the thing is: if we're using qemu-img to do the streaming conversion, we don't. But we do have the rate if we're, say, saving it to a scratch space — you know, we use Go readers, and we have a reader in there that basically counts the number of bytes that have been transferred, so from that we can calculate speeds and things of that nature.
A: Okay, so yeah, I don't know what we want to do there. We've had some discussions in the past about surfacing additional phase information on the DataVolume for these multi-phased operations, so you could get an idea about that.
A: It hasn't really gone anywhere, because it can be kind of hard to really use that information effectively — you have to understand the internals of how the import is actually happening. So I'm not sure about it. It feels kind of like a won't-fix to me, or like there's not a ton of demand for this at the moment anymore. So I don't know if we want to try to close it, or if there's anybody that's interested in picking it up.
D: Other general things to note: you know, we're starting to implement populators, and with populators it's just a hard problem to report any sort of incremental progress — like, where does that happen? So yeah.
A: That's a good point — also worth noting is that, with our planned move to...
A: Okay, all right, yeah. I think for me this is feeling like one we should close, and if somebody really loves it, they could reopen it. So I'm just going to — if there's any disagreement with the previous comments, let me know.
B: I think we already talked about this one, maybe a little while ago — yeah, here, that comment.
A: "...the increased size of the volume, but if I log into the VM, I see the volumes..." — yeah, that's because you need to — so I know that, let's see: Maya doesn't appear here anymore, but she did work on a feature where there's a piece where we're actually signaling to QEMU to rescan the volume size when it's changed. So from that perspective, the model of the device that the OS has is updated.
A: The virtual machine operating system itself still needs to rescan and discover, but the problem is that how you would do this is OS-specific, and within the OS it also depends on how you're using your disk — whether you have LVM or classic partitioning. So it's not something that can really be done automatically.
A: It looks like they're just asking for lsblk or fdisk to show the new size. I wonder if that's actually fixed, then, based on what Maya's saying here: you enable a feature gate. Okay, that's from February of last year — a year ago.
A: Yep, I agree, it's a good first issue. Okay, so I think this is probably something that — well, it's been flagged as a good first issue; it should be pretty easy to fix.
C: If I understand what we're doing correctly — and I think Michael and Arnon should know better — it might sort of solve itself, just because we should be reporting errors better. Because, you know, what happens right now is: in the reconcile loop...
C: ...we get the error because the storage class cannot be found, and we error out, and there's no update of the status at the end. And I think, with the way we've split it out into the sync phase and the status-update phase, the status-update phase always happens.
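The sync/status split being described can be sketched as below — a toy reconcile where the status update runs unconditionally, so a failure like a missing storage class still surfaces on the object instead of being dropped. Names and structure are illustrative, not the actual CDI controller code:

```go
package main

import (
	"errors"
	"fmt"
)

// object is a toy stand-in for a custom resource with a status block.
type object struct {
	Status string
}

var errNoStorageClass = errors.New("storage class not found")

// sync is the mutation phase; here it always fails, like a reconcile
// hitting a storage class that cannot be found.
func sync(obj *object) error {
	return errNoStorageClass
}

// updateStatus always runs, reflecting either success or the sync error
// on the object's status.
func updateStatus(obj *object, syncErr error) {
	if syncErr != nil {
		obj.Status = "Error: " + syncErr.Error()
		return
	}
	obj.Status = "Ready"
}

// reconcile mirrors the split: sync first, then an unconditional status
// update, then return the sync error so the request is requeued.
func reconcile(obj *object) error {
	syncErr := sync(obj)
	updateStatus(obj, syncErr)
	return syncErr
}

func main() {
	obj := &object{}
	err := reconcile(obj)
	fmt.Println(err != nil, obj.Status) // prints: true Error: storage class not found
}
```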
A: Okay — Michael or Arnon, anything?
D: Yeah, I have no idea what it would do. I mean, yeah, it would be interesting to see what exactly happens now.
D: In certain cases it is faster to just download the file to scratch space and then write it to the target than to do the direct write to the target — like, significantly faster. But I think Richard spent some time investigating it, and he's — I think you...
D: ...in certain cases it's multiple orders of magnitude slower to do the direct conversion, so we either need to find a way to do the direct conversion faster, or default to scratch space. Mm-hmm.
A: Yeah, I think this topic has come up a couple of times in the past. There are enough corner cases where we're having issues with nbdkit, or the direct convert, that it's starting to feel like just doing the simple thing — always download, and then convert the file after we've already received it — might make more sense, but yeah.
A: We haven't really acted on those comments yet, so I'm not really sure what the best bet is here. I kind of wonder, when we adopt populators, if we ought to just simplify the logic at the same time.
C: At this point, it should be relatively easy to just add it to the list of plugins we're using — Richard created the retry plugin not too long ago, and we're already using that. So...
D: Yeah, I think we just have to see how much it helps, you know, because I think we want performance to be relatively in line, and also to be relatively deterministic about what we do — it's just more understandable, and having fewer different code paths is good too. So...
A: Okay, so that's kind of what we talked about. I'm not sure if we have somebody that wants to take a look at that or not, but it's here for you if you do. Let's go on to the next one — feature request: support retained PVs as sources.
D: We don't do that now, but — basically, I think this could work under certain circumstances: if the PV has its claimRef set to the appropriate name, it would work.
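The claimRef condition can be illustrated with a small sketch: a pre-created, retained PV is only bindable if its claimRef already points at the namespace and name of the claim that will be created. The structs here are simplified stand-ins for the corev1 types, just to show the check:

```go
package main

import "fmt"

// objectRef is a simplified stand-in for corev1.ObjectReference.
type objectRef struct {
	Namespace string
	Name      string
}

// persistentVolume is a simplified stand-in for corev1.PersistentVolume,
// keeping only the claimRef that pre-binds it to a specific PVC.
type persistentVolume struct {
	ClaimRef *objectRef
}

// claimRefMatches reports whether a retained PV is already reserved for
// the given PVC namespace/name; only then can the new claim bind to it.
func claimRefMatches(pv persistentVolume, namespace, name string) bool {
	return pv.ClaimRef != nil &&
		pv.ClaimRef.Namespace == namespace &&
		pv.ClaimRef.Name == name
}

func main() {
	pv := persistentVolume{ClaimRef: &objectRef{Namespace: "golden-images", Name: "fedora-source"}}
	fmt.Println(claimRefMatches(pv, "golden-images", "fedora-source")) // prints true
	fmt.Println(claimRefMatches(pv, "default", "other"))               // prints false
}
```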
A: Michael, would you be willing to just comment to that effect in there? — Sure, yeah. — Maybe a little more precise, yeah.
A: Okay, all right, cool — let's try to tackle one more, I think; we've made some pretty good progress today, so that's good. So let's do: support ReadWriteOncePod for DataVolumes.
A: No further comments since that initial report — oh, we have one from Alex last year.
B: Yeah, I don't think it's as straightforward. I think Michael and I discussed it briefly, and the first thing we thought about was the expansion pod, and...
B: But maybe.
A: And also, ReadWriteOncePod just generally isn't something we'd recommend for KubeVirt purposes anyway, because it doesn't enable live migration. So I'm just not sure why it's super high priority to support this.
B: Yeah, exactly — I wonder what the backing motivation behind the issue is.
A: It might be because — I'm wondering if, for — well, no, I guess — yeah, I'm not sure what that would be, so I guess we should see if it's...
A: Okay, all right. So I think for this one, that's kind of the next thing we need to see, and I find that, yeah, there's probably not a ton of motivation on that, other than the fact that the Kubernetes API allows it, so we should as well. But it doesn't seem like it's incredibly useful, and yeah, we'll need to think about how that would affect other KubeVirt features as well...
A: ...if we're taking the KubeVirt-centric mindset here, which we should be. Okay, all right — I'm going to say that I think we've probably tackled enough issues for today. That was actually pretty good; we got through quite a few. Any other topics at the end here before we close out for the week?
A
All
right
sounds
like
not
thanks,
everybody
for
joining
and
we'll
catch
you
at
the
next
one
in
two
weeks.