From YouTube: SIG - Storage 2023-01-30
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A
All right, so I guess we can officially get started here. Welcome everyone again to the SIG Storage meeting for KubeVirt. Why don't we jump right into the topics? The first one we have is discussing Michael's comment on the DV controllers. I'm going to switch over to that comment for context, and in the meantime, whoever wrote down the agenda item, please feel free to jump in and take the introduction.
B
I just edited it, actually; I only noticed it two days ago. Last month we did a really dramatic refactoring of the huge DataVolume controller we had. We split it into three controllers, and currently Alex is splitting it further for the clone controller. One of the changes was splitting the reconcile into a sync and an update status, similar to the way KubeVirt is designed, and Michael just added a note: currently we share data between the two functions, the sync and the update status, which...
A
Intro or not, right. So I'm just trying to think in terms of, for me, I have a general understanding that the sync part of the process applies some logic based on what our controller is doing, and the update status is generally, at least in Kubernetes, a separate process that observes the changes that may have been made to objects, in order to properly record what we assess to be the status of those objects. I think that's a pretty basic way of saying it. So, Michael?
D
The goal... I mean, that's basically it. The sync part of the controller's work is about doing the real work. These controllers are basically state machines, so sync figures out where we're at and does the next thing, and update status is about...
D
...reporting what the current state is. At a theoretical level, I think these are two operations that can be totally separate.
D
Obviously they both kind of take a look at the universe; in sync we do something about it, and in update status we update the status about it. If you look at KubeVirt, they're really separate, so the update status sometimes doesn't have the most recent changes: if you just updated a resource or just created a resource, update status may not see it yet, based on the way some of these resources are cached.
D
Their update status is a little late, but they're really two entirely separate functions, and there's really no state shared between them. I'm not totally opposed to sharing; maybe there is a case to share some data. But I think what's problematic now is that there's a struct returned by sync that has PVCs in it, DataVolumes in it, stuff like that.
D
Those
are
things
that
can
be
easily
retrieved
by
update
status
on
their
own
and
there's
no
reason
to
agree.
B
D
Then I think the next step is probably to enumerate the stuff that we can get rid of and the stuff that you think should stay, and we can see what makes sense. But in general, I think we should try to share state, or pass state along, as little as possible.
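The split being discussed, where sync does the real work and update status independently re-observes the world rather than receiving a struct of PVCs and DataVolumes from sync, can be sketched roughly like this in Go. This is a minimal illustration of the pattern, not CDI's actual controller code; the names (`sync`, `updateStatus`, `pvcStore`) are invented for the example.

```go
package main

import "fmt"

// pvcStore stands in for the informer cache: name -> phase.
type pvcStore map[string]string

// sync looks at the current state and performs the next action.
// It returns only an error: no shared result struct is handed along.
func sync(store pvcStore, name string) error {
	if _, ok := store[name]; !ok {
		store[name] = "Bound" // e.g. create and bind the PVC
	}
	return nil
}

// updateStatus independently re-reads the store and derives the status.
// It never receives PVCs or DataVolumes from sync.
func updateStatus(store pvcStore, name string) string {
	if phase, ok := store[name]; ok {
		return phase
	}
	return "Pending"
}

func main() {
	store := pvcStore{}
	_ = sync(store, "dv-1")
	fmt.Println(updateStatus(store, "dv-1")) // prints "Bound"
}
```

Because update status only reads from the cache, it stays correct even if the process restarts between the two phases; the cost is one extra lookup, which is the optimization trade-off mentioned below.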
A
It seems to me that if you consider that CDI were to be interrupted after the sync pass, but before the update status pass, any kind of context that's passed between those two operations could be lost, potentially causing a bug, right?

A
Is there anything... is it only for efficiency or as an optimization that we pass data, or would something get lost in the shuffle, for example, if CDI were to restart at the most inopportune time?
D
Yeah, I think it really is about optimization. If the sync can... yeah, I think it really is just optimization.
D
Yeah, and like this bug: basically the issue was there was, I think, a transient error or something, but...
A
Okay, so it sounds like we might have a next step, which is, I guess... it sounds like it might be Arnon who's going to be looking to enumerate, let's see, the elements that can be removed from the sync return value in order to have a clearer...
D
Yeah, so I think my proposal for going forward is that we can merge Alex's PR, and then Arnon or Alex, or whoever is going to continue the work, can create a separate PR where we can deal with the refactoring and have more in-depth discussions there, right?
A
Okay, great. Anything else on this one?
E
Can you hear me? Yes? Perfect, all right. New laptop, I'm gonna find the right input device. Yeah, I just wanted to make sure everybody was aware that the release schedule used to be every three weeks, but I think it makes sense to match KubeVirt, which is every three months now. Especially since the last, I would say, ten releases were basically fixing a few bugs and a minor feature, and it just isn't really worth doing a release every three weeks for that.
E
So that's all I have to say about that. That's why there hasn't been a release in a while: just because we want to match KubeVirt.
E
I think we should release a few days after KubeVirt, sort of like KubeVirt does with Kubernetes. I don't think we've officially designated a freeze day, but yeah, we probably should.
A
Yeah, I mean, development continues in the main branch regardless.
A
Yeah, I don't think we need any release candidates at this point. It's a solution looking for a problem, in my view, right.
A
Okay, all right. Thanks for that update. We'll take any discussion on that to the mailing list, if there is any. Okay, so let's go on to the next topic, regarding scratch space. Who wants to grab this one?
D
So this was about the import we do by default. If you guys recall, if we're importing a qcow2 directly, we don't download the file to scratch space. But in doing that, the way the qemu-img import process works is that it makes a ton of small requests to the HTTP server on the other end.
D
In this case, you make a lot of these requests, and we've seen a bunch of issues. Usually what people complain about is that it's really slow. If there's a lot of latency, or the server on the other side is doing some throttling, it's just way faster to download the thing to scratch space and then do the conversion. Similarly, there's this error here where, I guess, sometimes we're in the middle of converting it, but occasionally the server will return an error and then it just dies, like what happened here. And then I guess we fail and start over again. So...
C
So, nbdkit has two filters you can use to avoid both of these issues. It has a retry filter that will retry transient errors, and it has a readahead filter that will basically read forward, so it'll be pulling down data constantly and linearly into a memory cache, which will ensure that you won't need to be making lots and lots of small requests. Although it does depend somewhat on the exact topology of the network as to whether that really works well.
C
So there are ways to avoid this without needing to use scratch space, by adding these filters. Yeah.
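Conceptually, the retry filter described here wraps the underlying source so that a transient failure re-issues the request instead of killing the whole import. A rough Go sketch of that idea follows; the names (`readAt`, `withRetry`, `errTransient`) are invented for illustration, and nbdkit's real filter is configured on its command line rather than written in Go.

```go
package main

import (
	"errors"
	"fmt"
)

// readAt models a source of image data, like the curl plugin:
// read n bytes at a given offset.
type readAt func(off, n int) ([]byte, error)

var errTransient = errors.New("transient network error")

// withRetry wraps a reader so transient failures are retried up to
// maxTries times instead of aborting the whole conversion.
func withRetry(r readAt, maxTries int) readAt {
	return func(off, n int) ([]byte, error) {
		var err error
		for i := 0; i < maxTries; i++ {
			var b []byte
			if b, err = r(off, n); err == nil {
				return b, nil
			}
		}
		return nil, err // still failing after maxTries attempts
	}
}

func main() {
	calls := 0
	// A flaky source: the first request fails, like a throttled server.
	flaky := func(off, n int) ([]byte, error) {
		calls++
		if calls == 1 {
			return nil, errTransient
		}
		return make([]byte, n), nil
	}
	r := withRetry(flaky, 3)
	if _, err := r(0, 512); err == nil {
		fmt.Println("read succeeded after retry") // this path is taken
	}
}
```

The point of doing this at the filter layer is that the importer above it never sees the transient error, so a single hiccup no longer means failing and starting the whole import over.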
D
I
think
yeah
I
think
that
this
is.
You
know
this
is
just
generally
an
issue
that
that
keeps
coming
up
and
I
think
yeah,
I
think
those
are
definitely
two
things
to
investigate.
D
You
know
how
to
it's.
Just
that
you
know
this
directly
importing
is.
We
should
probably
revisit
and
find
out.
You
know,
I
think
by
default.
We
should
do
the
way
that
is
maybe
not
the
best
performant,
but
is
the
most
reliable
and
resilient
way,
but
anyway,
I
think
that
this
is
a
topic
that
I
don't
know.
I
think
this
is
worthy
of
potentially
like
a
research
Spike
and
coming
up
with
a
strategy
that
makes
the
most
sense
in
most
cases,.
D
They're mostly GitHub issues. Okay, I added another link here; this is the main issue, where the performance is a problem. I think these are the two main things. And we see a lot of the time people are forcing the scratch-space path by using, like, a gzipped qcow2 or something; that's actually what HyperShift does. So, I don't know.
A
Yeah
I
think
it
would
be
interesting
for
me
to
see
a
table
that
says
when
the
format
is
is
acts
you
know
we
have
we,
you
know
either
are
using
NBD
kit
in
this
case
or
we're
using
a
full
download
to
scratch.
Space
I
think
that's
been
collected,
yeah.
D
We
have
I
think
some
docs
somewhere
in
the
CDI
repo
that
I
think
well,
maybe
I
don't
know
that
it.
It
I
think
it
may
just
be
more
of
a
general
support
Matrix,
but
I
I,
yeah
I.
Just
think
that
this
you
know,
direct
cue
card
to
conversion
is
not
optimal
in
a
lot
of
cases
is
all
and
I
think
it
may
be
optimal
on
the
usage
of
scratch
space.
But
maybe
that's
not
what
people
care
about
the
most.
A
Oh
thanks
for
adding
that
Alex
so
I
know
that,
like
there's
a
related
bit
of
work
on,
maybe
Richard,
you
can
tell
us
if
there's
any
updates
on
it,
but
I
think
you
guys
we're
looking
at
doing
using
the
new
populators
API.
That
was
with
an
implementation
of
of
NBD
kit
within
that
for
more
specific,
targeted
use
cases
that
that's
using
that
am
I
remembering
correctly
or
is
there
some
active
work
in
that
area?.
A
Okay,
the
reason
I
bring
it
up
is
I.
Do
think
that
you
know,
as
we
start
moving
towards
populators,
there
is
a
really
good
opportunity
to
have
really
small
popular
implementations
that
can
be
used
and
targeted,
and
so
then
it
does
maybe
make
the
case
for
having
CDI
do
things
in
a
really
General
but
consistent
and
simple
way.
A
And
then
you
know
if
you
want,
if
folks
want
to
experiment
with
optimizing
that
flow
for
certain
cases
or
other
things,
there's
a
really
good
opportunity
to
implement
that
inside
of
a
populator,
and
that
can
be
the
laboratory
for
for
Innovation
I
guess
in
in
the
area,
so
just
kind
of
throwing
it
out
I
wasn't
sure
to
what
extent
we've
made
progress
there.
A
All right. So, yeah, I guess I don't know if we'll come to a conclusion, but we do have the filters, so I should write that note down.
C
I'll
put
a
link
in
the
issue.
Actually,
okay,
great!
You
do
need
to
use
these
because
I
mean
we
have
the
same
problem
adverty.
To
be
that
you
know,
if
you
just
use,
you
just
use
the
basic
curl
plugin.
It
has
all
the
problems
that
you
describe,
but
I
sort
of
assumed
that
you'd
be
using
the
filters
on
top,
which
is
which
I've
designed
for
this
they're
meant
to
exactly
the
same
thing
we
recover
from.
You
know
like
Network
failures
and
reading
ahead
and
there's
a
caching
filter
as
well.
C
You
can
use
so
I'll
I'll
try
to
pull
out
actually
what
B2B
does,
because
if
you
layer
the
filters
in
the
right
way,
you
can
you
can
get
kind
of
quite
reliable.
C
You have another filter below it to actually do the caching, and then you can control where that caching happens. Okay. And actually, I don't think the readahead filter is going to be that useful; I think we tried it in virt-v2v and it wasn't. It's sort of conceptually great, but it doesn't actually work in realistic cases. The cache filter, though, is pretty essential.
C
Okay,
remember
it's!
It's
only
caching,
a
certain
limited
amount.
So
it's
not
like
it's
doing
a
full
download.
If
you,
if
you
use
the
cash
filter,
you
can
limit
habitable
store
locally,
so
you
kind
of
get
the
best
of
having
a
sort
of
a
local
cache,
but
without
actually
having
to
have
unlimited
amounts
of
space
scratch
bits
around.
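The bounded local cache described here (keep some downloaded blocks around so re-reads don't go back over the network, but cap how much is stored so you never need unbounded scratch space) can be sketched like this. This is just an illustration of the idea using simple FIFO eviction; it is not how nbdkit's cache filter is actually implemented, and all the names are invented.

```go
package main

import "fmt"

// boundedCache keeps at most max cached blocks, evicting the oldest
// when full, so local storage stays bounded.
type boundedCache struct {
	max   int            // maximum number of blocks kept locally
	order []int          // insertion order, for FIFO eviction
	data  map[int][]byte // block offset -> cached bytes
}

func newBoundedCache(max int) *boundedCache {
	return &boundedCache{max: max, data: map[int][]byte{}}
}

func (c *boundedCache) put(off int, b []byte) {
	if len(c.data) >= c.max {
		oldest := c.order[0]
		c.order = c.order[1:]
		delete(c.data, oldest) // evict so the cap is respected
	}
	c.data[off] = b
	c.order = append(c.order, off)
}

func (c *boundedCache) get(off int) ([]byte, bool) {
	b, ok := c.data[off]
	return b, ok
}

func main() {
	c := newBoundedCache(2)
	c.put(0, []byte("a"))
	c.put(512, []byte("b"))
	c.put(1024, []byte("c")) // evicts the block at offset 0
	_, ok := c.get(0)
	fmt.Println(ok) // prints "false"
}
```

The trade-off is exactly the one described: repeated reads of recent regions are served locally, while the worst case disk usage stays fixed instead of growing to the full image size as a scratch-space download would.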
A
Okay, so if we take a look... if you are able to edit it, or add a comment to the issue about how the filters are stacked in virt-v2v, it's something that we could try to implement and see if it helps. Yeah? Sure. Great, thanks. All right, any other comments on this one for now?
D
Yeah, I added this one as well, basically from the community again. This is something that has come up a couple of times. It's a feature that is in KubeVirt that we don't support. So I think, if you want to, for example... say you have images...
D
...in registries that need authentication and authorization, there are ways to do it. And yeah, this is just something... if you look down, somewhere there's a reference to the KubeVirt PR, but it's really to have kind of parity with KubeVirt.
A
Yeah, it might be neat if we could actually use the same object, so that they don't have to create a parallel config for CDI specifically.
A
I mean, I guess when we're using the node-based pull approach, that would use the existing configuration.
D
Yeah
I
mean
I
think
a
lot
of
the
time,
though
you
know
I
think
I
was
discussing
the
other
issue.
You
know
you
don't
have
access
to
the
node
a
lot
of
times
so,
but
it
would
be
better
to
have
to
you
know:
do
this
I'm,
not
a
every
single
resource,
level,
kind
of
thing
or
hack,
up,
manifests
and
stuff.
Okay,.
E
Yeah, I think the main issue for the reporter is that their nodes don't have access to the registry that contains the images for CDI.
A
Yeah, I mean, the obvious solution there would be to configure your nodes with access to all the registries that are needed to operate your workload, and then...
E
Apparently,
in
their
environment,
it
has
to
be
set
up
like
this
and
I
I
talked
to
him
a
little
bit
and
it
was
weird,
but
you
know
apparently
cupboard
has
a
mechanism
where
you
can
provide
a
particular
pool
secret
for
the
images
used
for
you
know
the
it's
for,
like
the
the
the
control
plane,
not
for
the
actual
Imports,
it's
for
mainly
the
control
plane.
E
From what I could tell, it wasn't for the actual workload; it was for running the control plane. It seemed really weird to me, and my initial thought was, well, why don't you just configure your nodes to actually get access to it? But for some reason...
A
So
this
is
about
installing,
essentially
this,
like
the
CDI
deployment,
the
images
that
are
for
CDI,
not
the
like
the
virtual
machine,
disk
images
right;
okay,
because
that
those
would
be
two
completely
different
features.
So
it's
like
okay.
D
As the reporter reported it, it's just to get the images for the control plane, for the CDI deployment, whatever. Okay.
A
Well, it seems like we've got a working pattern to go from, so it'd be a relatively straightforward effort.
A
So yeah, unless anybody wants to put their name down on it right now, we'll leave the discourse to the issue, I suppose.
A
Right, sounds good; we can take that offline. All right, sounds good. So that is the end of the agenda as written. Does anybody else have a topic they'd like to bring up now?
A
Sounds
like
not
at
the
moment.
Nobody
has
anything
any
interesting
stuff
they're
working
on.
They
want
to
share
I
suppose
we
don't
have
to
force
it.
So,
given
that
I
guess
we
can
end
a
little
bit
early
today.
A
Great, all right. So hopefully we'll see you all back in about two weeks. I'll go ahead and pre-populate the agenda block for the next one, so that over the course of the next couple of weeks, if you have something that comes up, please feel free to add it. Everyone have a great week; talk to you soon.