From YouTube: SIG - Storage 2023-04-10
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A
All right, so I think we are... I see that a few more of you guys have joined, so welcome. So far, all we have on the agenda is to triage CDI issues. Thanks, Alvaro, I see that you've added that we're on 2576.
A
Yeah, it should have been in the calendar event, but I will add it to the chat so that you have it as well.
B
Okay, we'll do that later. My question is: has anybody tried to use WaitForFirstConsumer with KubeVirt virtual machines, and how do you handle it?
B
Node topology for creating CDI volumes, I mean: if you want to create a virtual machine with some node affinity rules or pod affinity, and you try to use CDI for that, the CDI pod for uploading data volumes is created first, and it does not use these rules. I was just thinking about how we can arrange this with such machines.
C
So we've thought about it and solved this issue in the current iteration. I think it'll be better once we get to populators, but we're working on that, so that's not there yet. But basically, what happens when you have WaitForFirstConsumer: when you create a data volume, nothing happens. It just sits there. Then, when you create a virtual machine that uses that data volume, KubeVirt will create what we call a doppelganger pod.
C
So
essentially
it
will
make
a
pot
that
has
all
the
same
resource
requirements
as
the
actual
VM
that
allows
the
scheduler
to
you
know
schedule
it
on
on
whichever
node
it
needs
to
be
scheduled
on
and
that
part
basically
starts
and
exits
immediately.
But
that
causes
the
PVCs
that
it's
associated
with
to
get
bound
as
soon
as
the
PVCs
get
bound.
Cdi
sees
that
the
PVCs
got
bound
and
will
now
start
to
import.
C
The actual VM will not start, because the actual VM is waiting for the data volume phase to be Succeeded, which is not yet the case, because CDI is importing. Once CDI is done with importing, all the data volume phases will be Succeeded, and then the VM will start on the node that the scheduler picked for the doppelganger, which should have the exact same resource requirements as the actual VM. So it should start.
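To make the flow described above concrete, here is a minimal sketch, assuming a hypothetical image URL and a hypothetical WaitForFirstConsumer storage class named local-wffc: the DataVolume stays pending until the VM's doppelganger pod binds the PVC, and the VM starts only once the DataVolume phase is Succeeded.

```yaml
# Minimal sketch of the flow described above; URL, names, and storage
# class are hypothetical.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-dv
spec:
  source:
    http:
      url: "https://example.com/fedora.qcow2"  # hypothetical image
  storage:
    storageClassName: local-wffc               # a WaitForFirstConsumer class
    resources:
      requests:
        storage: 10Gi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
      - name: rootdisk
        dataVolume:
          name: fedora-dv  # referencing the DV triggers the doppelganger pod
```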
B
Got it, thank you. Are there any requirements from the KubeVirt virtual machine spec which should be specified there to force it to create this kind of data volume?
D
There's this feature gate in CDI called HonorWaitForFirstConsumer. That's what enables all this machinery to happen, but there's nothing on the VM spec itself that is required.
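For reference, that feature gate lives on the cluster-scoped CDI resource; a minimal sketch of enabling it is below (in recent CDI releases it is enabled by default).

```yaml
# Sketch: enabling HonorWaitForFirstConsumer on the CDI custom resource.
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
  name: cdi
spec:
  config:
    featureGates:
    - HonorWaitForFirstConsumer
```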
B
Okay, thank you, and sorry, one more question: what will happen if the data volume is created, but the virtual machine consuming this volume is not?
C
And in case you don't really care which node the volume ends up on, you know, in case you have sort of what we call the golden image workflow, where you have a bunch of images that you want to duplicate...
C
You can put an annotation on the data volume to basically ignore the WaitForFirstConsumer, and it will just end up on a random node. There should be documentation on all of this in CDI, in the docs folder.
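The annotation being referred to is cdi.kubevirt.io/storage.bind.immediate.requested; a minimal sketch follows, with a hypothetical registry image as the source.

```yaml
# Sketch: force immediate binding for a golden-image DataVolume so the
# import proceeds without waiting for a consumer (names hypothetical).
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: golden-image-dv
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
  source:
    registry:
      url: "docker://quay.io/example/fedora:latest"  # hypothetical image
  storage:
    resources:
      requests:
        storage: 10Gi
```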
A
One of the things we have on our list to take care of is to add some of the storage documentation, like CDI-specific or more CDI-focused documentation, to the KubeVirt user guide, because that's usually the place where most people are looking for docs, and not in individual repositories. So that's just a note for us to take as well; that could be important here.
B
I have another question; it's more of an issue I mentioned on GitHub: CDI does not work on default QEMU processor models. I know that since Red Hat Enterprise Linux 9...
B
It's
changed
requirements
for
the
basic
CPU
architecture,
it
now.
It's
it's
86
version,
2
success,
64
version,
2
and
but
I
wasn't
unable
to
run
CDI
in
standards
with
all
machines
on
proximos
or
on
openstack.
A
What's the use case for that? What would you be importing? I'm trying to understand how it's used there.
B
Okay,
we
have
some
use
cases
so
when
our
users
want
to
clone
the
existing
PVCs
and
I
found
CDI
a
really
useful
tool
for
that.
B
For
cloning
and
distributing,
for
example,
if
you
have
some
example
database
with
content,
you
want
to
populate
this
contact
to
be
used
by
the
temporary
ports.
If
you
want
to
test
your
application
or
if
you
want
to
transfer
data
from
one
storage
provider
to
another,
so
then
CDI
might
be
really
useful.
In
those
cases.
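For the cloning use case just described, a minimal sketch of a DataVolume that clones an existing PVC (all names hypothetical):

```yaml
# Sketch: CDI clones the contents of source-ns/example-database into a
# new PVC backing this DataVolume.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: database-clone
spec:
  source:
    pvc:
      namespace: source-ns      # namespace of the PVC being cloned
      name: example-database    # hypothetical source PVC
  storage:
    resources:
      requests:
        storage: 20Gi
```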
A
Yeah, there's also, I'm not sure if you've seen it, an archive content type in your data volumes. You can actually import a tar file and it will expand it onto the root of the PVC as well. So that's another way to lay down filesystem contents into a filesystem PVC.
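A minimal sketch of the archive content type mentioned here, assuming a hypothetical tarball URL:

```yaml
# Sketch: contentType: archive makes CDI extract the tarball onto the
# root of the target filesystem-mode PVC instead of writing a disk image.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: archive-dv
spec:
  contentType: archive
  source:
    http:
      url: "https://example.com/data.tar"  # hypothetical tar file
  storage:
    resources:
      requests:
        storage: 5Gi
```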
A
I would say, if you're encountering certain issues, though, definitely post or create an issue and upload, you know, the logs or the error that you're seeing, so that we can take a look at that.
B
Yeah, it does not work on default processors, which are now set by default by many virtualization platforms and, I think, by many cloud providers.
B
Yeah, I'm just thinking... I saw many issues about that, and every time people solve them by changing the CPU model, sorry, by changing the CPU model to the host one. But I think that will not always work, because users don't always have the opportunity to change the CPU type for their virtual machines.
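For context, this is the workaround being described: an excerpt (not a complete VM definition) showing the CPU model field that people set to the host model, which only helps when you control the VM spec.

```yaml
# Excerpt of a KubeVirt VirtualMachine spec; only the CPU model part is
# shown, the rest of the VM definition is omitted.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm              # hypothetical name
spec:
  template:
    spec:
      domain:
        cpu:
          model: host-model     # instead of the default model
```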
A
Cool, thanks for bringing that up. So yeah, we can take a look at the issue further and drill down, see what's going on. All right, I see that we've got another topic added by Michael, I believe; why don't you go ahead with that?
D
There seems to be an interest in having, like, shared base images and, you know, writable layers, just for efficiency's sake.

D
I think it's just something that we should acknowledge that we're talking about, and we should get input from the community on it. For example, you know, it seems like we could pretty easily make, like, a writable container disk image, which would kind of be a shared base image, and just put the writable layer on the PVC. But I don't know if that's a good idea; it's just something we could probably pretty easily do.
A
One possibility I've always thought could be interesting: I've always found it tricky when you have, you know, qcow2 layers within a single disk image file, or, I'm sorry, within a logical PVC, I should say, to be clear. So when it's a single PVC.
A
I think an interesting use case could be that you could define the individual layers that you're using in your virtual machine spec, and if each layer was a PVC, you could actually have a shared base layer that was brought in, for example, by a registry import, and then you could have a qcow2 layer that you define in another PVC that sits on top of that and actually back-references it, so it has a backing chain reference to that other PVC.
A
So, I mean, that is the underlying, the main theory that we have had from the beginning: that with Kubernetes, people would be using snapshot-capable storage, and certainly it is much simpler from our end in KubeVirt if we don't concern ourselves with any of that stuff and we leave it to the storage.
A
And
that
is
the
case
a
lot
of
times,
but
in
the
like,
for
example,
in
the
presentation
from
Nvidia,
they
had
an
interesting
case
about
you
know
shared
like
base
operating
system
images
that
they
wanted
to
disperse
to
all
the
nodes
of
the
cluster
and
and
then
launch.
E
So I have a question, and pardon my ignorance, because I don't know the details: if the hardware provides the capability of snapshotting, what happens to the page cache usage on the node?
A
Based on my understanding, yeah, I think you have a point. It'd be interesting to see what kind of performance gains you would get in a typical case, when you have multiple VMs reading from the same shared layer.
E
Yeah, at least in the regular container world, when overlayfs came along, we had this big emphasis on sharing page caches, because it reduced our memory footprint on the node, and then people could pack more containers on the same node. So container density was the keyword, and of course we want to minimize resource usage, and overlayfs was good at it.
E
So I think the same basic principle will apply here as well: if you are using a QEMU or qcow2 layer for sharing the single base image, then you get that efficiency of sharing the page cache on the node, if I understand correctly.
A
Yeah, I think you would be right. This kind of system would definitely be difficult to manage, I think. It'd be interesting to consider how that would work, and, I mean, it's a different model, to be sure.
A
I'm definitely concerned about how it would affect some of the higher-level Kubernetes operations. For example, if they were to expand a PVC that was a qcow2 layer, how would that work? Would we need specific virtualization hooks to respond to that? How would it work with snapshots or clones?
E
Yeah, everything else needs to be considered; that whole thing should come together if we move in this direction for this optimization.
E
And another thing, I don't know, I was guessing: if it is a separate writable layer and a shared base image, then...
E
Then, since the base is shared, I was thinking it makes migration a little easier: you should be able to send just the writable layer to a different node.
E
So, if there are use cases where people are using local NVMe SSDs or something, and I don't know if that is the case or not, or I'm just imagining, that was another thing I was thinking. Does it make sense or not?
A
Yeah
I
mean
that
would
be
yeah,
so
that
would
be
interesting
like,
for
example,
I
could
imagine.
If
the
base
bass
layer
was
stored
in
a
registry,
then
that
could
be
pulled
to
any
node
that
wants
to
run
a
virtual
machine.
That
requires
that-
and
you
know,
you'd
have
to
be
careful
about
managing
versions
of
that
base
image
but
yeah.
It's
it's
interesting.
I
think
it
would
be
cool,
I
mean
from
a
cube,
vert,
API
level.
A
You
would
need
a
way
to
convey
in
the
VM
spec
that
you
have
multiple
PVCs,
that
build
an
astrological
VM
disk,
and
then
we
would
have
to
make
sure
that
I
mean
those
those
for
example.
The
writable
layer
would
have
to
reference
unknown
location.
A
Well,
I
guess
you
could
have
a
relative
backing
chain
reference
I
believe
they
got
rid
of
those
in
in
overt
I
I
think,
but
we
could
potentially
use
those,
and
you
know
making
sure
that
images
appear
in
a
reliable
path
which
I
think
they
already
do.
So
it
would
be
interesting,
I'd
love
to
see
somebody
try
that
I
know
Nvidia
has
and
they're
successful.
A
It
would
be
interesting
to
see
what
how
you
could
extend
the
API
to
implement
something
like
that.
If
you
were
interested
and
then
we
could
start
to
experiment
with
what
breaks
when
you
do
that,
mm-hmm.
D
...the PVC name, and the writable layer would go to a PVC, and we can already migrate container disks, so it would just migrate it. It would be a cheap way to try this out.
A
Yeah,
that's
an
interesting
way
to
to
think
of
it
because
yeah
and
really
I
think
that's
the
common
case.
It's
we're
not
trying
to
do
internal
snapshots
or
like
yeah
snapshot
like
qmu
snapshots
within
the
cute
cow
2
layers,
although
one
could
do
something
like
that,
but
I
mean
your
use
case
is
a
simple,
a
simple.
D
Here
can
be
so
here's
one
thing:
I
was
wondering
which
I've
never
tried.
Can
the
base
layer
be
like
a
you
know,
10
gig
image
and
the
Q
Cal
be
like
a
40
gig
or
something
because
I
think
that's.
D
But I don't know. I think that may be an interesting experiment; I don't imagine it would be too hard to whip together a prototype for that.
C
I think, generally, what...
D
What is interesting to me would be use cases from the community, because, you know, I think, and maybe it is worth communicating in some other way somewhere why we chose, you know, raw, and, you know, one PVC per VM disk. I think it is just an easier way to reason about things and probably makes the most sense for kind of long-running VMs that will be around for a while, but maybe not for VMs that are not as long-living.
D
But
you
still
need
some
persistence
and
we
want
to
optimize
host
storage
page
cache
whatever
it
could
be.
It
just
could
be
good
for
specific
use
cases.
D
I
think
so
yeah
there
was
something
about
they're
working
on
containers
that
are
bootable,
which
may.
A
So
it's
still
using
the
same
the
same
host
kernel,
but
somehow
the
kernel
provides
an
interface
that
allows
you
to
like
boot
from
an
empty
context
or
something
I'd
be
more
interested.
I,
don't
want
to
try
to
make
it
up
here,
so
I'd
be
really
interested
in
a
point
or
two
into.
A
Okay,
so
I'd
love
to
I'd
love
to
hear
more
about
that.
If
somebody
can
find
a
public
link
to
where
that
was
discussed,
I'd
love
to
see
it
added
here,
if
you're
able
to.
A
If
anyone
knows
where
that
is,
that's
super
interesting
of
an
idea,
Okay
cool,
so
does
anybody
have
any
other
topics
that
was
interesting
and
again
yeah
I
would
say
that
this
is
definitely
a
call
for
use,
use
cases
from
the
community
it'd
be
cool
like
we
got
a
really
cool
example
from
the
video
that
they
shared
at
cuberts
Summit.
So
if
you're
interested
and
hadn't
haven't
haven't
seen,
it
I
definitely
recommend
checking
out
that
presentation
when
the
recordings
are
released.
If
they
haven't
been
already.
E
A follow-up question, and it's sort of trying to get a sense: so if I understood correctly, Nvidia is using local storage there and they don't even require live migration. Have you heard of such use cases from others as well? Because typically, my understanding is that we have heavily relied on shared storage, where people have long-running VMs, so they require live migration and everything; that's the direction we are primarily focused on.
A
Yeah
so
I
think
there
are,
when
you
have
single
single
node
kubernetes
clusters,
I
think
there
are
some
people
Edge,
if
you
will
like
consider,
maybe
like
a
single
single
node
cluster
in
a
retail
store
that
wants
to
run
virtual
machines
and
in
this
case
you're
not
you
know
having
to
worry
about.
You
just
have
a
single
single
node.
A
So
in
those
cases
I
think
the
local
storage
is
making
more
sense
and
also
the
nature
of
the
data
is
a
bit
more
ephemeral
like
if
you
consider
like
a
retail
store
workload.
The
real
critical
data
is
the
transaction
log
and
that's
getting
uploaded
to
a
central
data
center.
So
whatever
else
is
is
stored
locally
is
a
bit
more,
it's
a
little
less
important,
so
you
don't
need
to
worry
about
replication
as
much
and
those
kind
of
things.
So
this
is
my
understanding
about
when
that
can
make
sense.
E
Anyway,
there
are
no
other
nodes
in
the
cluster,
so
where
would
the
lamb
migration
happen
so,
but
what
I'm?
Also
thinking
if,
if
you
do
Nvidia,
seem
to
have
a
cluster,
it's
not
a
single
node,
if
I
understand
correctly
and
and
cluster
with
non-shared
storage.
So
is
that
the
first
thing
I
think
the
first
time,
at
least
to
my
knowledge,
I
saw
somebody
talking
about
it.
I
was
curious
if
other
people
have
witnessed
similar
deployments,
similar
use
cases
being
talked
about.
C
Oh
I
I
think
Nvidia
is
looking
at
the
VMS
more
like
you
know,
normal
people
would
look
at
containers
like
you
know
if
they
get
destroyed,
it'll
just
get
started
somewhere
else
and
we're
not
that
worried
about
it
right
and
in
that
case,
like
migration,
doesn't
really
make
that
much
sense,
because
if,
if
you're
considering
it
just
like
a
somewhat
special
container
but
not
like
super
special,
it's
not
like
a
pet,
then
you
know
if
it
dies
and
it
starts
somewhere
else.
It's
fine!
So.
A
Especially
if
they're,
if
they're,
also
more
job
focused
where
they're
gonna
they're
gonna
run
for
a
short
period
of
time,
because
then
you
know
the
scheduling
issue
isn't
as
big
of
a
deal,
because
the
VMS
are
always
starting
and
stopping
mm-hmm.
A
Yeah, and I didn't have, like, something to link to, but in my head I've been thinking about live storage migration, which kind of came up here in the context of this previous topic, and really in the context of Kubernetes.
A
It's
if
you
want
to
drain
a
node,
so
you
can
perform
maintenance
or
upgrade
the
software
on
it
and,
as
as
all
of
you
know,
we
require
read,
write
many
storage
for
that,
so
I've
been
considering
as
another
project
or
interesting
yeah
thing
to
look
at
is
how
we
can
enable
migration
of
the
storage
generically
even
for
read,
write
one
storage,
even
if
it's
less
efficient,
so
I'll
probably
bring
that
up
as
another
topic,
if
I
create
an
issue
Upstream
for
that
in
the
future,
but
it's
also
related.
A
So
it's
cool
to
see
us
kind
of
always
going
back
to
the
basics
and
thinking
about
how
things
are
are
architected,
and
if
there
are
improvements
or
changes,
we
want
to
make
all
right.
Should
we
go
to
triage
CDI
issues
at
this
point
or
are
there
any
other
topics
from
anyone
before
we
get
to
that.
A
All right, sounds like we're ready. So we left off at 2576. Oh, so we've nearly gotten through the list. So we are on this one; I don't recall it.
F
So we've already seen issues like this one before; it's like the CRI problem that Alex reported. So, you know, that issue, I think.
A
Okay, so he's asking for documentation.
F
Yeah,
but
we
already
have
some
documentation
about
this,
so
right,
I,
don't
I
I
think
we
could
improve
it,
but
we
already
have
some
documentation
covering
this.
A
Okay,
all
right,
so
let
me
scroll
all
the
way
down
seems
like
there's
some
good
okay,
so
it
looks
like
yeah
I
see
and
it's
been
documented
for
those
who
come
later
so
I
think
we
could
probably
close
this.
Do
you
guys
agree.
A
Yeah, this is definitely a recurring theme. So, okay, let me close this and we'll go to the next one. All right, so we have: cannot upload to data volume in WaitForFirstConsumer state.
A
So
we
just
discussed
this
earlier
today,
something
similar,
let's
see
where
we're
at
so
Alvaro
looks
like
you
were
involved
in
the
in
the
issue,
so
yeah.
F
It
was
just
that
we
defaulted
on
our
way
for
first
consumer
to
true
and
156,
so
the
behavior
was
changed
and
the
user
wasn't
expecting
that
so
yeah.
It
wasn't
really
about.
A
Okay,
so
they
did
not
want
wait
for
first
consumer,
exactly
okay,
all
right,
so
is
this
documented?
Well,
what
do
we
need?
Is
there
anything
we
need
to
do
to
to
close
this
one
out
before
we.
F
So
we
already
have
documentation
covering
this
like
in
the
last
comment.
Another
user
suggested
to
update
the
labs,
the
CDI
labs
and
I
just
posted
a
comment
in
our
chat,
proposing
it
because
it's
true
that
they
are
very,
very
outdated.
A
Okay,
do
we
know
who
who
contributed
those
labs
to
begin
with,
because
we
could
just
talk
to
that
person
directly
and
see
if
they
would
update
it?
Is
it
Chandler.
C
That would be my first guess, and okay, if it's not Chandler, we can probably ask Andrew to figure out who it was. That's probably a repo we can modify to fix it, yeah.
A
Yeah
I
think
so
all
right
so
I'm
just
going
to
put
in
a
com.
An
item
in
the
agenda
find
the
owner
of.
F
On the next step: so I'm doing some research, I'm still doing some research about this, and I don't have any conclusions. The error seems to be happening in virtualization platforms when trying to virtualize new OSes with high CPU requirements, so I don't think it's really a problem on CDI's side.
A
Yeah, it definitely seems below our level. Although, I mean, it seems like a lot more stuff should be breaking, and so I'm surprised that it isn't. But yeah, I don't have the...
F
So
I
I
see
this
happening
in
like
in
forums
and
other
places
and
like
the
common
cause
seems
to
be
the
KVM
64
processor
and
even
in
Kimu
documentation.
It's
not
recommended
so
I
guess
it's
I
know.
Other
things
are
not
breaking
because
I
guess
KVM
64
is
not
really
used
that
much
okay.
A
Okay, so yeah, I guess that would be...
F
Another question that I have for Andre: you said that this is working with other related workloads, right?
B
Yeah
for
some
yes
for
some
workloads,
it
works
for
some
of
them
is
not
I
mentioned
in
a
in
and
down.
There
is
a
command
for
running
a
bit
operator
and
it
works
so
I
think
we
could
use
some
building
to
avoid
adding
not
sure
go.
Some
has
some
options
for
avoid
using
C,
maybe
if
we
would
avoid
using
C.
F
Maybe we should check if this started to fail when we started using, like, CentOS Stream 9 as the base image for CDI. I think that was probably the change that broke this.
A
I think we could, Alvaro; we could compare the Dockerfile that's used to build virt-operator versus CDI. Oh yeah, well, in this case I see virt-handler, so virt-handler is...
A
It
probably
has
to
do
with
yeah,
if
there's
certain
libraries
that
that
pull
in
G
libc
versus
just
a
pure,
because
it
makes
sense
that
like
vert
operator,
would
be
able
to
be
just
a
pure
go
Lang
binary
because
all
it's
doing
is
manipulating
kubernetes
resources
where
vert,
Handler
or
CDI
we're
doing
things
with
with
you
know,
qmu
components
that
are
going
to
be
written
in
C
and
then
have
glibcy
dependencies,
okay
yeah.
A
So
that's
I
think
this
is
a
a
bit
more
fundamental
of
a
problem
that
would
probably
take
a
long
time
to
fix
I.
Guess,
I
wonder:
is
it
possible
on
your
platform,
to
change
this,
to
use
the
recommended
like
host
CPU
setting
or
or
do
you
not
have
control
of
that.
B
Yes,
we
do,
but
we
are
developing
a
platform
which
is
able
which
able
to
install
on
Mini
Cloud
platforms
I
see
and
we
can't
control
all
those
environments
and
we
don't
even
know
on
which
environments
getting
to
be
installed.
A
Yeah, so let's do that; I'll have you add a comment. I don't know, Andre, if you could actually try that suggestion and see if you get some more mileage out of it in your environment; that might be a good next step.
A
Sounds
good
great,
okay,
let's
take
I
think
we
have
one
more
to
say
that
we've
finished
the
list,
so
I
think
we
this
one
looks
familiar
with
the
lost
and
found.
Do
we
have
a
PR
that
was
going
to
be
addressing
this
I
think
Michael
you
might
have
been
working
on.
It
has.
D
Yeah, so, yep.
D
So the reporter... I'm going back to one of the issues from earlier, about the device permissions. This reporter, you know, I know him through, like, the Slack, and for whatever reason they cannot set those device permissions. They're on an old version of CDI, 1.54, I think, where we ran as root. And yeah, for whatever reason they can't make those CRI changes, they can't update, so they're asking very kindly to backport this to 1.54 for them.
A
Okay, is that reasonable? Seems like... is it a simple one?
D
Yeah, I mean, we did it for the immediate retry issue. I think, yeah, I mean, we can't do this forever, yeah.
D
So I don't know. I mean, I think they would probably like for us to have an option to, you know, run as root, and that's something we discussed and decided to hold off on, yeah.
A
But yeah, at some point we won't be able to keep helping to maintain, like, old old branches after a certain point, but they're welcome to carry fixes in, you know, a local fork or something, if they need to. Okay, all right, so I think that is the last issue. We'll figure out what to do with respect to the backporting and stuff, and then that issue would be able to be closed, and so I think we're kind of at the end here.
A
Did anyone have any last-minute thoughts, questions, comments, etc.?
B
Just a short question, then, sorry: if you have a PVC or data volume in one namespace, and a user has no privileges in this namespace, and he's asking to create a data volume from this PVC, from the namespace he does not own, how does CDI ensure security?
D
There is also an explicit RBAC resource that we have, called, like, datavolumes/source in cdi.kubevirt.io. I'll find the documentation; I think it's documented somewhere, but there's also an explicit check. So what we do for our downstream: we have a golden images namespace that everyone can access.
B
Got
it,
and
how
is
the
name
of
this
namespace
is
explicit
or
can
I
specify
it
somehow.
B
I
want
to
create
golden
images
repository.
Yes,
yes,.
D
So
you'll
create
a
role
that
has
this
permit.
That
has
basically
I
create
data
volume.
Slash
source
is
the
resource.
I'll
find
an
example
and
and
show
you
and
then
you
would
create
a
role
binding
to
or
system
authenticated
user.
So
it's
basically
all
users
have
that
permission.
Then.
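A minimal sketch of that setup, assuming a hypothetical golden-images namespace; the explicit check mentioned above is against the datavolumes/source subresource in the cdi.kubevirt.io group.

```yaml
# Sketch: allow all authenticated users to use PVCs in the golden-images
# namespace as clone sources (names are hypothetical).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: datavolume-cloner
  namespace: golden-images
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-clone-to-all
  namespace: golden-images
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: datavolume-cloner
  apiGroup: rbac.authorization.k8s.io
```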
D
I think it's done in SSP, right? They do golden images.
A
Okay,
yeah
I'm
not
it'd,
be
interesting
to
yes,
if
we
could
find
the,
but
it's
going
to
be
in
one
of
these
projects.
That's
not
CDI
or
cube
vert,
which
kind
of
creates
the
sort
of
the
de
facto
environment
so
yeah,
but
sounds
like
Michael
would
be
able
to
give
you
some
specific
examples.
A
All right, and I guess I'll hold the... oh, let's see, yeah, he did place that in the chat, so it's there. So I think we're ready to wrap up here; it's about five past two. So thanks, everybody, for joining in and the participation. It was a good discussion, as always, so we will catch up with you guys at the next meeting in two weeks.