From YouTube: SIG - Storage 2023-07-17
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A: All right, good morning, good afternoon. Let's go ahead and get started. It looks like we have a pretty full agenda today. First off, can everyone hear me? Okay.
A: Okay, cool. I'm sharing the agenda, so I have the— and there he is. Hello, Vivek. You have the first agenda item today, so why don't we just go ahead and jump right into that.
A: All right, let's see. Well, while he gets situated, maybe we can skip over his and go down to the next one here. So I see the configurable client upload server certificate issue.
A: Okay, I mean, we've seen a few issues of this kind lately, with different scenarios. "Add a method for updating the certs." I wonder — would it be easy to just extend the validity of the certificate by default, do you think, instead of adding a configuration? I guess so.
E: All this is configurable — there are parameters to configure our certificate rotation intervals, so they could do that. But the certs that we generate for upload are really only meant to be used internally, for communication between, like, the OpenShift router and our service, or between an Ingress controller and our service.
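The rotation intervals mentioned here are exposed on the CDI resource; a minimal sketch, assuming the standard `certConfig` fields (the durations below are illustrative, not recommendations):

```yaml
# CDI CR - certConfig controls how long generated certs live
# and when they are rotated (values below are illustrative).
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  name: cdi
spec:
  certConfig:
    ca:
      duration: 48h
      renewBefore: 24h
    server:
      duration: 24h
      renewBefore: 12h
```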
A: Okay. So I'm curious to learn more about the use case where this person is running into the issue, yeah.
E: Because, like, the DNS names aren't going to work for the certs, you know? Even if the duration is longer, your laptop doesn't have the same DNS as our server in the cluster, so the names will be off. So they have to do stuff with, like, /etc/hosts, you know.
E
You
know
configure
their
router
or
Ingress
and
have
their
router
or
Ingress
controller
have
properly
configured
start.
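As a concrete illustration of the /etc/hosts workaround described above — pointing the cert's in-cluster DNS name at a reachable address so the names line up. The address and service name here are assumptions, not taken from the issue:

```
# /etc/hosts - hypothetical entry mapping the upload proxy's
# in-cluster DNS name to a locally reachable address
192.0.2.10  cdi-uploadproxy.cdi.svc
```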
A: Maybe that's a good start. I'll add that as a comment and see where we can go with that. I think maybe with more details, we could better address the underlying issue.
A: Or maybe, if you're aware of it and you wanted to take it over, that could be fine too.
A: Yeah, I feel like I remember a bit from him on this particular topic.
F: Yeah, so basically what he is trying to do: he would like to use shared storage in order to implement ReadWriteMany, in order to partition the storage. And he did some measurements, and it seems that using the qcow2 format and the snapshotting from QEMU seems to be the fastest and the most performant.
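For reference, the qcow2/QEMU snapshotting being measured here can be driven with `qemu-img`; an illustrative sketch (the file names and sizes are made up):

```sh
# Create a base image, then a copy-on-write overlay on top of it;
# the overlay only stores blocks that diverge from the base.
qemu-img create -f qcow2 base.qcow2 10G
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2

# Internal snapshots within a single qcow2 file:
qemu-img snapshot -c before-test disk.qcow2   # create a snapshot
qemu-img snapshot -l disk.qcow2               # list snapshots
```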
F: So basically, this is what the topic is about, but I'm not sure if I wanted to add something on top. Recently we haven't discussed much on this, and neither Alberto nor Vivek are here — I mean, Vivek is here, but having audio issues. So yeah, that is a summary of the discussion; not quite sure.
A: Cool, yeah. It seems like a pretty good potential collaboration — similar goals.
F: Yeah, exactly — that's exactly the topic. But I'm not sure if Alberto got in touch with Andre. I don't have the latest news.
A: Okay, so we can kind of treat this as an FYI in this meeting for people to dig into — it seems like a cool project. I did encounter — and right now the name of it is escaping me — but there was another team at Red Hat somewhere which was also looking at doing some kind of a meta-provisioner. It was for local storage, and it would support essentially multiple backend storage types. So you could use NVMe devices, you could use standard local block devices.
A
You
could
use
local
LVN
any
number
of
things
like
that,
and
then
it
would
present
a
similar
kind
of
a
unified
PVC
provisioning,
so
that
also
seems
to
carry
with
it
a
bit
of
overlap
and
with
now
three
different
groups,
seemingly
attempting
to
do
something
similar.
It
really
seems
like
there's
potential
for
something
here.
A: Yeah, after the call I'll see if I can dig up that email thread and put a link to that project here as well, so we kind of have all three in one place. It would be kind of neat to converge the requirements into something and see what could be done. Yeah.
F: Exactly. I think Vivek has rejoined, but still has issues with his audio. But yeah — basically, you can get involved in this thread, and we just need to figure out what the final approach should be.
A: Okay, all right, that sounds good. Thanks for adding the agenda item, and Vivek — if you wanted to jump back in later on with more details when your audio is resolved, feel free, or you could add some more context into the agenda for everyone. Great, okay. So let's move on; we do have a bunch of things here. So let's go to Alex — your topic about the KubeVirt storage lane issues.
G: It's a carryover from last week. It maps to an issue on CDI, yeah.
A: We touched on this one, maybe before you joined. I've added a comment in there to request some more details on the exact use case, because Michael was talking about how the certificates are generally for internal communication, and so there shouldn't usually be an expiration problem. So we kind of want to understand a little bit more about what's going on there first — and also that it's configurable. So.
G: Okay, awesome. Then I'll just go for the storage lane update. Last week we bumped the CDI version in the KubeVirt lanes, and I think like a day after, we started seeing red lanes — you can see it in the link there. That's not a good look.
G: So Brian from the CI team noticed this; there's maybe an email thread, and there are PRs from Alexander. But earlier today we were looking at the memory consumption, and we believe that that's the issue. Basically, we get like 30 gigs per CI job, and we were exceeding that with the new CDI. So I think the question for this meeting is — maybe it's hard to answer — but does this make sense?
G
There
was
a
three
gigabyte
increase,
sounds
a
bit
much
for
just
you
know
a
few
popular
controllers
that
just
Loop
over
state
but
combine
the
CDI
bump
with
at
CDN
memory,
which
is
being
used
in
in
the
CI
lanes,
and
we
get
basically
we
get
like
leader.
Elections
happening
a
lot.
We
have
just
just
straight
up
dead,
API
servers
yeah,
so
that
makes
the
tests
perform
worse,
so
I
mean
we'll
we'll
get
we'll
get
to
a
point
where
the
lanes
are
green,
but
what's
the?
G
We
have
maybe
that's
concerning.
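A quick way to confirm where the extra memory is going is to compare per-container usage against etcd's. The namespaces and labels below are assumptions; adjust for the actual deployment:

```sh
# Per-container memory usage of the CDI pods (requires metrics-server)
kubectl top pods -n cdi --containers

# Memory usage of the etcd pods themselves
kubectl top pods -n kube-system -l component=etcd
```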
E
Yeah,
that's
a
good
question.
Yeah,
you
think
you
know
we
added
these
new
controllers
and
I
think
what
we
can
look
at
is
I
guess.
Well,
you
know
they
use
the
informers,
so
they're
cash
and
stuff,
but
it
should
be
a
shared
patch
with
everything
else
and
maybe
we're
caching,
some
new
resources,
but
every
gig
still
seems
like
a
lot.
E
You
know
yeah,
it's
like
no
new
binaries
or
anything
it's
just.
We
added
some
controllers
and
and
the
data
that
they
use
should
be
mostly
shared
I
think,
but
we
can
definitely
check.
A
I
think
I'm
trying
to
understand
it
seems
like
the
FCD
issue
is
maybe
not
directly
related
to
the
size
increase
of
of
CDI
because,
like
it's
probably
two
issues
right
where
you
have.
A: So we have more software deployed, and more demands placed on etcd — so kind of two things potentially that are converging to cause some problems here. Can I write somebody's name down as a contact that would be willing to look into this a little further?
A
I,
don't
know
if
there's
somebody
that's
willing
to
jump
in
and
and
take
a
peek
I
think
yeah
I
think
there's
yeah
there's.
Definitely
some
checking.
We
should
do
to
see
that
word.
G: Yeah, I think I could take a look at how CDI behaves in our repo — see if it's also consuming more. Okay.
E: Yeah, I can help out looking in both of those directions. I tried to make some changes to the way the tests were structured, but that didn't help things. But what Alex is saying seems to be consistent with some of the logs I've seen from tests and whatnot, where weird things are happening.
A: Yeah, I mean, there's definitely some — while we're maintaining multiple ways of achieving the same thing, since we have the compatibility going on, it's expected that we would be using some more resources, but that seems like a lot. Okay, cool. All right, so I think we have at least a rough plan on that one in terms of looking into it. Let's see if there are some updates.
B: Hey, so it's something that I thought I'd raise as an option for this meeting: to add lightning talks, meaning 5 to 10-15 minutes of talking on any subject that you want. You don't have to be a member of the team; anyone from the community can also raise a topic that he or she thinks is interesting for the entire meeting. And for people in the meeting, I will add an option of requests as well.
B: I don't know — that's basically it. People will just volunteer to do the lightning talks, or do them by request. I thought about doing a lightning talk on volume populators as a start, but I did demo it — we did demo it in the last demo session — so I'm not sure if there's anyone in this current meeting that hasn't heard it.
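For anyone who missed that demo: the volume-populator flow centers on a PVC whose `dataSourceRef` points at a custom resource, which a populator controller then fills. A minimal sketch — the populator kind, API group, and names below are hypothetical placeholders:

```yaml
# PVC requesting population from a custom source object.
# "SamplePopulator" and all names here are hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
  dataSourceRef:
    apiGroup: example.populator.io
    kind: SamplePopulator
    name: my-source
```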
B: So yeah, that's what I wanted to raise, and you can say what you think. If you already have ideas for a lightning talk, we can write them down.
A: First of all, I think it's a cool idea. I mean, I'm speaking to what looks like all Red Hatters now, so obviously you're all aware of the success we've had with the talks that we've done directly as a team. But I think bringing this out to this public forum will be super interesting, allowing people from different areas to contribute context here.
A
I
think
it
will
be
a
really
cool
idea,
so
yeah
I
mean
that's
my
thoughts
on
it.
I
guess
the
questions
that
I
would
have
around.
That
would
be
what.
How
would
somebody
propose
a
lightning
talk?
Do
we
want
to
have
them
just
put
it
on
the
agenda?
A
Do
we
want
somebody
to
help
kind
of
curate
them
so
that
you
know
like
you,
did
for
the
internal
ones
Shelly,
so
that
you
know
we
have
a
continuous,
like
stream
of
them
coming
along
like
one
per
session,
or
did
you
have
any
thoughts
on
that
stuff.
B
Yeah
I
thought
it's
possible
to
either.
You
know,
send
me
ideas
or
add
to
this
document
like
a
bullet
point
of
requests
and
a
link,
maybe
to
a
doc
that
does
a
list
of
ideas
for
lightning
talks
and
people
can
volunteer
to
do
them
and
I
can
raise
it
at
the
end
of
each
and
meeting
either
requests
and
someone
volunteers
to
take
it
or
someone
volunteering
to
do
something
on
his
own
accord.
A: Let me just add—
B: In the mail that we send at the end of each meeting, we can add it as a bold new item for the meeting.
A
Okay,
I'm
just
trying
to
think
of
a
little
summary
to
add
here
so.
A
Maybe
this
is
a
good
start
here,
and
so
maybe
I
can
just
copy
the.
A: Sounds good — I'll let you revamp this then, to be something that's kind of useful. But it seems that we can just keep it right here in the doc; it'd be a nice, useful place for it. I'm excited about this — I think it'll be cool to have some presentations in here from different folks. So that'll be cool.
A: Yeah, and I think over time, once we have the first couple of examples, that'll help to put it in people's minds.
A: Awesome. Okay — any other discussion on that topic, any other thoughts from anyone, potential ideas for lightning talks, etc.?
A: All right, easy enough. So the last topic — I don't see anything else having been added — would be to triage the CDI issues. So let's jump in there. I see we're at 2755. So this is the one that we— no, this is different. Okay.
E: Yeah, well, this is weird. I mean, this again — this client cert that they're talking about is really only meant to be used internally.
E: Yeah, maybe — I mean, reading this, it looks like — I don't know that they're using this; it looks like maybe their organization has a policy that they can't have—
E
You
know
certs
older
than
x
and
yeah.
The
problem
is
yeah
for
for
some
sorts,
and
this
was
something
we
never.
E
E
D
E
A: Cool, all right, sounds good. Yeah, I think a little more context would be needed. All right, so I'm going to jump back into the issues — where was that? Okay, I'm just going to see how far down that was — oh yeah. So this appears to be almost like the third one. But okay, let's go up to "metric names failing the linter".
G: No, I think they just excluded everything that's not standardized, like these ones. That's why it passes.
A: Do we have an issue with — so, yes, I guess, with the clone progress and stuff. I'm just kind of wondering: is that going to affect something like when we're listing data volumes on the command line, or something with virtctl or anything? I'm just kind of curious who might be depending on those names as they are today.
A: Okay, so we don't really have any issues with changing it.
A: We don't have to answer now. At least we've visited it and discussed it, so we can go back up and look at the next one. I guess the next is "clone without source: cross-namespace source must be created within five minutes of target".
E: I forget — I don't know the current state of it, but basically, yeah, with the source not existing, I think it may still be a problem with the non-CSI clones.
E: It may be fixed — I just forget the current state. It may be fixed for everything, though, because I think what we do now is: the webhook creates a short-term token on the data volume, and then pretty much right away — I'll have to check the latest — but I think we add a longer-term token to the data volume, and then, when we create the target PVC, there is a longer-term token added to that.
E
Okay,
but
yeah.
If
you
assign
it
to
me,
I'll
I'll,
verify:
okay.
A
I
would
say
that,
like
since
the
non-csi
case
is
really
kind
of
compatibility
at
this
point,
and
we
expect
most
people
to
be
using
CSI,
just
like
kubernetes
itself
does
and
that's
the
direction
that
things
are
moving
in,
like
personally,
I
would
feel
comfortable
with
not
fixing
it
for
non-csi,
just
because
at
some
point,
like
I,
think
we
should
stop
adding
features
to
that
flow
and
really
focus
on
the
future.
A
So
if
you
guys
kind
of
tend
to
agree
with
that
and
there's
no
disagreement,
I
can
place.
I
can
put
that
in.
A: Great, okay. Let's move to the next one — thanks, Michael! All right, next is "CDI upload concurrency performance issues" — somebody's testing.
D: Okay, okay, yeah. All right, sure, so—
D
This
is,
but
the
network
is
just
falling
over
because
they're
doing
50.
poems
at
the
same
time.
A: Yeah, I mean, we would definitely encourage people, if they're doing that aggressive of cloning, to be trying to use smart cloning. But I guess in the case where they're not able to, we should figure out how to handle it. But—
E
Yeah
yeah
I
mean
I
I
think
it
looks
like
you
know.
The
the
source
pod
is
posting
data
to
the
upload
server.
It
would
be
I
wish
we
had
the
upload,
but
a
lot
of
what's
happening
on
the
upload
server
side.
Maybe
we
do
in
a
previous
comment,
but
yeah
I'm
not
sure
what
we
can
do
here.
I
mean
clusters
are
sized
differently.
You
know
if
it
takes
well,
although
we
should
be
yeah
I,
don't
know
not
sure
why
you
know
the
upload
server
pod.
E
So
I
don't
know
if
if
there
are
a
lot
of
PODS
or
the
system
is
kind
of
over
committed
and
I
don't
know
but
yeah,
it
looks
like
what
happened
is
the
upload
server
part
started
said
it
was
ready
and
then
it
became
unready
and
when
it
becomes
unready
it
doesn't
get
like
the
it
doesn't
get
network
requests
anymore
through
that
service
address
you
know
so,
then
that's
what
I
think
happened
so.
E: Yeah, that's a good question too. I mean, you know, we could have some global configuration — like, you know, only X many concurrent host clones — but that would be a new feature, and it would really only be useful, I think, for constrained clusters or people that wanted to do that. But yeah.
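No such knob exists today — as noted, it would be a new feature — but as a sketch of what a global cap might look like on the CDIConfig, where the `maxConcurrentHostClones` field is entirely hypothetical:

```yaml
# Hypothetical: a global cap on concurrent host-assisted clones.
# This field does NOT exist in CDI today; it only illustrates the idea.
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDIConfig
metadata:
  name: config
spec:
  maxConcurrentHostClones: 8
```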
C: I don't know what I want to say here. I don't see — maybe.
A: Right, so we're in the retest phase here at this point, to get it to passing. So it looks like this one doesn't really need any additional attention from us here.
D: I want to know: when are we going to start making official arm64 CDI releases? Right now we're only creating our containers on a periodic, but not when we do an official release.
D
Essentially
done
we
can
do
it,
we
just
need
to
change
the
release
process
to
include
the
arm.
Binaries.
Okay,.
A: What is it we'll generate — "update the release script to also generate the arm binaries"? Is that what you're wanting?
A: We talked about — I don't know if this one's — let's see. Yeah, we clicked this one, so let's do this one, and I think there's one more after that. Okay: "could not get file descriptor for epoll call: no such file or directory".
A: Yeah, or if there's a — yeah, I don't know, that's interesting. Unless somehow this code is being called before the — like, if it's copying over from scratch or something, although it says we're deleting it here by now. So yeah, that's really interesting, but we've asked for the debug info, so we're kind of working towards resolving that. So nothing else to do here, I'd say. And then this is interesting: "CDI import from URL is massively slower than a manual wget or virtctl image upload".
G
Though
this
is
the
same
issue
that
keeps
coming
back
where
the
Ubuntu
I
think
mirrors
are
just
doing
really
bad
with
nvd
kit,
but
in
our
recent
version
we
have
a
filter.
That's
tackling
this,
so
I
replied
and
advised
to
give
that
a
try.
A
What
do
you
mean
by
a
filter,
an
NBD
kit
filter,
or
are
we
routing
certain
requests
away
from
like
by
not
using
NBD
kit,
I'm,
not
sure
what
you
mean.
G
So
there's
this
nvd
kit
filter,
that's
just
playing
nice.
With
these
type
of
mirrors.
It's
called
the
read
ahead:
filter,
okay
and
there
was
somebody
making
like
somebody
external
to
our
team,
made
the
pr
and
they
mentioned
that
on
on
their
machine.
It
reduced
the
import
from
90
minutes
to
one
minute.
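The readahead filter ships with nbdkit; a rough sketch of serving a remote image through it (the URL is a placeholder, not from the issue):

```sh
# Serve a remote image via nbdkit's curl plugin, with the readahead
# filter batching small reads into larger sequential requests.
nbdkit --filter=readahead curl url=https://example.com/disk.img
```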
A: Sounds good. All right, cool — so this one is waiting on the reporter to try those things out. Awesome, cool. So that is — we're up to date on the issues, and I'll say we left off at 2809. I'll just add that here.
A: "2809", I said. Okay, all right — we're right at the end here. So any other last-minute topics that anyone would like to share before we call it a call?
A
I'm
going
once
twice
all
right,
guys:
everybody
thanks
for
joining
and
have
a
great
week
and
we'll
catch
up
with
you
next
time.