From YouTube: SIG - Storage 2022-12-19
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A: Hopefully you can hear me and see my screen. And with that, all right, good. So why don't we go ahead and get started with the first topic? Michael, I'll open up the PR as well, just for context.
B: Yeah, this is an issue that's been floating around quite a bit: it comes up as GitHub issues, on Slack, and I even noticed last week that a project like HyperShift is kind of coding around this issue by doing some questionable stuff. Basically, the issue is that importing a qcow2 file via HTTP may be slow, and the reason for that is that we are converting from qcow2. The convert obviously produces raw images, and when we convert from HTTP we are doing that.
B: You know, inline. qemu-img convert can work with HTTP images, and so we just call qemu-img convert directly, passing the URL, and that makes a bunch of HTTP requests to the server, doing a number of small range requests.
B: So if latency to the server is high, or this is like some mirror, and there is, you know, throttling going on on the server side, it can be quite slow, because we just do a large number of HTTP requests. The nice thing about doing the conversion inline is that it doesn't require scratch space: you can just create the target PVC and write directly to it.
B: The alternative is to download the qcow2 file to scratch space and then do the conversion from there, and in a lot of cases it is faster to just stream the image down to a local scratch PVC and convert from there. So the question is: how do we want to deal with this? Do we make it configurable, to allow downloading the entire file before doing the conversion, or do we want to make that the default, or what? Because it seems that the performance issues are, you know, problematic for certain users, yeah.
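The tradeoff described above can be sketched as back-of-envelope arithmetic. This is a minimal model with entirely hypothetical numbers (image size, range-request size, round-trip latency, bandwidth are all assumptions for illustration, not measurements from the issue): inline conversion pays one round trip per HTTP range request, while staging through scratch space pays the latency once plus a local conversion pass.

```python
# Illustrative model only: why many small HTTP range requests can be
# slower than one streaming download followed by a local conversion.
# All numbers below are made-up assumptions, not measurements.

def inline_convert_seconds(image_bytes, range_bytes, rtt_s, bandwidth_bps):
    """Inline convert: one round trip per range request, plus transfer time."""
    requests = image_bytes // range_bytes
    return requests * rtt_s + image_bytes * 8 / bandwidth_bps

def stream_then_convert_seconds(image_bytes, rtt_s, bandwidth_bps, local_bps):
    """Staged: one streaming GET into scratch space, then a local convert pass."""
    return rtt_s + image_bytes * 8 / bandwidth_bps + image_bytes * 8 / local_bps

GiB = 1024 ** 3
# Assumed: 10 GiB image, 2 MiB range requests, 100 ms RTT,
# 1 Gbit/s network, 4 Gbit/s local conversion throughput.
inline = inline_convert_seconds(10 * GiB, 2 * 1024 ** 2, 0.1, 1e9)
staged = stream_then_convert_seconds(10 * GiB, 0.1, 1e9, 4e9)
print(f"inline: ~{inline / 60:.1f} min, staged: ~{staged / 60:.1f} min")
```

With these assumed numbers the per-request latency term dominates the inline path, which is consistent with the slowness being worst against high-latency or throttled servers.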
A: Because, I mean, do we know what size this image was? Because I can see that they're saying, yeah, after 21 minutes it's 32% completed, yeah.
B: If you scroll down, I did some testing; basically, yeah, it took like 10 minutes.
D: The nice thing about qcow2 is that it should be smaller, I mean, it should be smaller than raw. So, in my opinion, it would be nice to do it, I mean, if we can configure it and add the scratch space. But the scratch space doesn't need to be the original size of the raw image; it needs to be the qcow2 size, not the raw virtual size. Now, I always forget which, anyway; usually qcow2 images are smaller, yeah.
B: Yeah, you could be right. So typically, when we allocate scratch space, we create it the same size as the target, but we could, you know, maybe be a little smart about it and make...
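As a sketch of that sizing idea: `qemu-img info --output=json` reports both the virtual size and the on-disk file size of a qcow2 image, so scratch space could in principle be sized from the smaller number plus headroom. The JSON below is a fabricated example for illustration (the field names match qemu-img's JSON output, but the numbers are invented):

```python
import json

# Fabricated example of `qemu-img info --output=json` output for a
# qcow2 image; numbers are invented for illustration.
sample = json.loads("""
{
  "filename": "disk.qcow2",
  "format": "qcow2",
  "virtual-size": 10737418240,
  "actual-size": 1316290560
}
""")

virtual_gib = sample["virtual-size"] / 1024 ** 3
file_gib = sample["actual-size"] / 1024 ** 3

# Sizing scratch from the qcow2 file size (plus headroom) rather than
# the raw virtual size would need far less space in this example.
print(f"virtual size: {virtual_gib:.1f} GiB, qcow2 file: {file_gib:.2f} GiB")
```

Whether the importer can know the file size up front depends on the server reporting it, which is the content-length question raised next.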
A: Is it typically reliable to get the content length from most web servers? Like, is that going to be an issue?
E: I'm wondering, what was the original reason we were avoiding scratch space? It seems to me that in the past, when there were statically provisioned PVs, that would have made significantly more sense. Was that the only reason, or did we have more?
A: I mean, in the earliest of days, we never even had scratch space. The only way we could import, like in the very beginning, was finding a way to use the I/O readers, like the reader stack in Golang, to basically build a reader that could read directly into the PVC. That was the original design, and then I think we ran into cases where you just, frankly, could not do that. So then we had to introduce scratch space, but we never revisited whether we should use that everywhere.
A: And I would say, just adding a bit of information here: I have not noticed really any complaints with the scratch space implementation. I haven't seen any kind of issues coming across where people would say, you know, "I don't have enough space in my cluster to import this because it needs scratch space, but I would have been okay if you could do it directly." It just doesn't seem to be a commonly encountered issue, so it seems like scratch space is working well and people aren't bothered by the temporary increased storage requirements.
B: So what do we do about it? Do we always use scratch space from now on, with nothing in the API changing, or could we change whatever the default is and have some option to do it differently?
A: Yeah, I mean, I definitely have an opinion here; maybe it's not the only one. I would say that more options equals software that's harder to use, so in general we should just pick what we consider to be the right approach. I would say that if you need a workaround, like if scratch space is really an issue for you, you could, if you're in control of the file or you have a place to host it, host it in compressed raw format, which could still go direct to the PVC without a performance penalty. Or you could create a container disk and use a registry import, or something else, if you really were in a bind with this. But otherwise it seems like it shouldn't affect people too badly.
C: I would just like to ask about how it works. You mentioned HyperShift; does it pull from HTTP? How come they hit this problem?
B: Yeah, I mean, they use DataVolumes, and they're currently importing them via HTTP. You know, this is probably a temporary thing for now, but whatever images they need, like some RHEL image or CentOS, I forget which, are not getting pushed to a registry. So they're doing an HTTP import, but it's so slow that they're importing it into, like, a cache volume and then doing, you know, DataVolume cloning from there. So they're implementing their own caching mechanism, which, you know, is not ideal.
B: I think eventually the image will be hosted in a registry, so they'll get more efficient imports that way.
C: Yeah, I just wanted to make sure this issue will not pop up for registries, right? Because it's...
B: Yeah, there just isn't the automation, or whatever, to have the images that they want in a registry yet. Yeah, okay.
A: Yeah, and registry images are typically stored in qcow2, but the way that those are imported, it gets pulled down either into the local, like, Docker image cache, or we're basically pulling a tar file down and extracting it to scratch space, so it wouldn't affect the import side.
A: So this seems like just a single case where you have a qcow2, or, I guess it doesn't really make sense, but we technically support compressed qcow2 too; that one's probably already using scratch space, I would guess.
B: Yeah, I think anything that is compressed will be downloaded to scratch space first.
A: So this is a pretty common file format, but a very specific scenario for which we'd potentially want to adjust the default behavior.
C: I think that we have lots of people doing registry imports, and those use scratch space. So if nobody complained about this, I think it's fine; it backs up the claim that scratch space is not so terrible.
A: Okay, so, I mean, we can feel free to continue discussion, but I'm going to write a note down here that, in general, the discussion was leading towards changing the way that we do this to use scratch space.
A: All right, so that's added. And it seems like this should be fairly easy: you know, when we're kind of detecting what to do, we just say that we require scratch space. So we have everything we need in CDI to do this, basically; it's fairly simple.
A: Okay, so are we good on this topic? Should we move on to Alicia's topic?
A: Sounds like it. So why don't you go ahead with your topic? Can...
D: ...you hear me? Yep. So, I would like to make you aware of the design proposal I opened on Friday. It basically describes the integration of a new project that relies on seccomp, especially seccomp notifiers, and as far as storage is concerned, it could help solve two issues. One is SCSI persistent reservation.
D: This is a topic I've been working on for a while, and it doesn't really have a nice solution. For those of you who are not aware of SCSI persistent reservations, I linked some references. But basically, QEMU has to talk to a privileged daemon through a unix socket, and this is really not nicely supported in a Kubernetes environment.
D: The pr-helper daemon socket is passed to the virt-launcher, and this is similar to how container disks work, for example, and also how hotplug volumes work.
D: So, in the end, we would like to avoid bind mounts because of several issues, cleanup issues. So yeah, basically this proposal could fit nicely in these two use cases. It's pretty lengthy; we tried to write down a lot of the security implications. So yeah, if you have some time, please review it.
A: Are there any downsides that people should be considering, other than maybe just that additional dependency on a new project? Any other...
D: This is something that maybe you have already heard about from Kubernetes or even CRI-O, but a seccomp filter basically adds some additional latency to every system call.
D: We actually talk about this in the document; there is some additional latency. However, we have tried to minimize it by building a smart filter, and the latency is logarithmically proportional to the number of filtered syscalls, but we consider that this number should be pretty low.
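As a rough illustration of the logarithmic claim above (purely hypothetical numbers, not from the proposal): if the filter resolves a syscall with a binary-search-style lookup over the list of intercepted syscalls, the per-syscall cost grows with log2 of the list length, so doubling the list adds only one more comparison step.

```python
import math

# Hypothetical sketch of "latency logarithmic in the number of
# filtered syscalls": a binary-search-style lookup costs about
# log2(n) comparison steps per intercepted syscall.
def lookup_steps(n_filtered_syscalls):
    return math.ceil(math.log2(n_filtered_syscalls))

for n in (8, 16, 64):
    print(f"{n} filtered syscalls -> {lookup_steps(n)} steps")
```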
A: Okay, interesting, yeah. It would be interesting to be able to measure that at some point too, just to see what we're really looking at.
A: Sounds good. Does anybody else have any thoughts, comments, or questions? I think at this point it's probably a fairly new topic to many of us, so the call, or the ask, here is to review the proposal and become familiar with it, if it's something that you'd like to weigh in on.
A: All right, thanks for raising that. And I guess the next... what is the next step? I guess the next step would be to close on the design proposal, and you said that you were working on the sort of base implementation, so you'd be able to show a proof of concept of this at some point as well? Yeah.
D: So there is a link to the project, and this is already a proof of concept. There is also an example that points to one of my repositories where I have implemented it for persistent reservation, so you could check it. But yeah, we plan to do it in a nicer way; this is something very basic.
A: Alicia, I loved it. I have to say that I love the domain for this project; that's fun. It looks like there's some really good information on the project here as well, for people who want to dig in deeper. Yeah, great. Okay, any other thoughts or comments or questions on this topic?
A: Awesome, thanks for raising it. So I guess at this point, before we get to the administrivia at the bottom, I think we're open for additional topics or questions or anything that anybody would like to bring up.
B: Yeah, I don't see Andre on the call, but last meeting he brought up Ceph deduplication alpha status and Linstor, yep.
B: So basically, the recommendation was to take that topic to the Ceph mailing list or something. Well, I talked to Niels about it, and yeah, he did not know the status of deduplication in Ceph. He seems to think that we don't support it downstream, like in OCS, yet. So, okay, I don't think Andre is using OCS, but yeah, he recommended just taking it to the Ceph mailing list. So, I don't know.
A: Okay, did anybody have a chance to look into Linstor in more detail? I feel like maybe someone may have, but I'm not sure.
A: I think maybe Alexander did, but he's not on the call today. So that's just kind of a reminder; it seems like a pretty cool project to look into as well.
A: Okay, thanks for that, Michael. I'm just trying to decide if we want to capture any follow-up notes on that, so I will write... just, oops.
A: He wasn't... I'm just trying to think of how to word this. He wasn't certain where...
A: The sig-storage label, I guess on PRs or issues, can be used to attract the attention of this SIG.
D: I'm not sure; I'm a bit lost about what we plan to do with this label, actually. I also wanted to ask if you are aware: I know that we have sig-storage, sig-network, sig-scale, these kinds of things, but I don't have the impression they are really used.
A: Yeah, there had been some discussion about that, because we're definitely trying to do some refactoring of code. There's been an effort over the long term to try to organize the code in such a way that it's sort of clear which maintainers, or which folks, should review, or be kind of in charge of, certain bits of the code base.
A: That's an ongoing thing. At one point we had some plans to try to automate that, you know, KubeVirt-wide; we're looking for ways to continue to scale the development process, to make it easier and quicker for people to get the reviews they need. So yeah, I'm not sure what the latest and greatest is on any of those processes or automation, but at least for now, I guess it can be used manually, and it looks like at least Alicia is looking for that label, and I guess the rest of us...
A: When we're looking to help out with code reviews, or to get directed at areas we might be an expert in, we can take a look at that label as well.
A: Yeah, and that's a good point, because this could be another way for us to call attention to certain things in this meeting as well. I know at the first meeting we tried to take a look at some of the outstanding GitHub issues and things, and so if we could do that across repos within the KubeVirt org, by use of the sig-storage label, that might be a great way to keep a hold on that.
A: All right, anybody else have something they want to bring up?
A: On that subject, I don't think it's been approved yet, but as far as filtering release notes, when we come to that...
A: Great, okay. So, while other folks are pondering any other last-minute additions they want to bring up, I wanted to propose that we cancel the next instance of this meeting, which would occur on the 2nd of January. I think a lot of folks are going to be out still from the end-of-year holidays and stuff, and even if they're not, I don't expect there to be a ton of topics as folks are getting back from that. So it seems to make sense.
A: We could just give everybody a little bit of email catch-up time in lieu of that, and we could reconvene then for the following meeting, which would be, I guess, on the 16th.
A: So, unless there are any strong objections... I know that I won't actually be in the office that day to moderate or to run the call. So...
A: All right, so with that, I guess we can give folks a few extra minutes back on their schedule today. I just wanted to thank everybody for joining; I really enjoy the discussions that we're having here. So I hope everyone has a good holiday season, and we'll catch you all back in about a month's time.