From YouTube: SIG - Storage 2023-02-13
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A
Let's see here.

C
Yeah, this is an update from the last time we got together. We discussed this issue, and I looked into it a little bit and came up with some options.
C
Basically, KubeVirt explicitly supports a field on their resource for imagePullSecrets, where you can set a reference, a list of secrets. So we can do something similar, but there are a couple of different considerations that I put in there. First of all, you can read the comment there, but basically it's pretty straightforward to add.
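For reference, here is a minimal sketch of the imagePullSecrets field being described; the registry URL and names are illustrative, not CDI's actual resources.

```yaml
# Minimal sketch of the Kubernetes imagePullSecrets field discussed above.
# All names here are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: cdi-controller-example   # hypothetical
spec:
  containers:
  - name: controller
    image: private.registry.example/cdi-controller:latest
  imagePullSecrets:
  - name: my-registry-creds      # a kubernetes.io/dockerconfigjson Secret
```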
C
Support for our core components, like our controller and all the stuff that runs in our install namespace, is straightforward. Where it's a little trickier is with our worker pods. KubeVirt has a nice little hack for that, since they basically have a DaemonSet, virt-handler, that runs on every node.

C
If someone sets imagePullSecrets, they basically add some sidecar containers to that DaemonSet that just pull the image down to every node, and then, as long as you don't use the pull policy Always, it should work. So we could do something similar. It's a little hackier because we don't have a DaemonSet already, so we would have to create something new.
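A rough sketch of that pre-pull trick, assuming a dedicated DaemonSet rather than KubeVirt's actual virt-handler manifest: a container references the private image so the kubelet pulls it onto every node, then just sleeps.

```yaml
# Illustrative pre-pull DaemonSet; not an actual CDI or KubeVirt manifest.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepull            # hypothetical
spec:
  selector:
    matchLabels:
      app: image-prepull
  template:
    metadata:
      labels:
        app: image-prepull
    spec:
      imagePullSecrets:
      - name: my-registry-creds
      containers:
      - name: prepull
        image: private.registry.example/cdi-importer:latest
        # Must not be Always, or the node's cached image is bypassed.
        imagePullPolicy: IfNotPresent
        command: ["sleep", "infinity"]
```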
C
The other option, which I think no one likes, is that we could copy secrets from our CDI install namespace to the worker namespaces for the duration of the work, but that may potentially make secrets available to users that shouldn't have them. And the last option is that we can leave creating these secrets up to the users: when we create the worker pods, we can set the reference to the secrets, but they are responsible for creating the secrets.

C
If you read the comments, basically one person would prefer the first case, and like two people are in favor of just letting users manage the secrets, which is the third option.
A
Would there be an option, instead of doing a DaemonSet, to basically, when this is configured, create a pod that has affinity with the worker pod? Just a quick pod in our own namespace that has node affinity with the worker pod, so that it causes the pull on demand.
C
I suppose. You'd have to make sure the synchronization works there, so we'd have to, like...

C
Yeah, that's even a little more hacky to me than doing the DaemonSet option, because I think the synchronization there is going to be weird. We'd have to poll for it, wait for it to... it's possible, yeah.
A
I mean, in some cases, I guess if you did no synchronization, you'd get an image pull error on the worker; but then, presumably, at some point in the future, after the other pod had completed, the container should be able to start, I would guess. We'd have to try that. But yeah, maybe that's not even as nice. I just feel like having a dummy pod running on every node, just to enable the image pulling, seems like it's adding overhead, just because you have limits on the number of pods that can run in a cluster. So adding those is a little bit... yeah.
C
Well, the problem, I think, with just starting these one-off pods is that we'd basically have to do it all the time, because images get removed from nodes implicitly. So I don't know. Just put your comments in here, people. I don't know that we're scheduled to...
A
Yeah, I wonder, if we did the self-managed option, about just having some kind of a label, so we know where we can actually run the workloads, because we know where these secrets are configured.
F
On this topic: I think there's an OpenShift operator that basically lets you manage option C. Per Michael's comment, something like option C but with wrapping over it. I have to dig it out; I remember it from when we had a problem with the proxy config map. So I'll have to dig out the name, but I think there's an operator that makes option C here a nicer experience.
C
Yeah, the problem is, we need an upstream solution for people as well. Like, I think Nvidia implemented the imagePullSecrets for KubeVirt. Okay.
A
All right, so, any other comments? Otherwise, you folks can add your comments to the issue as well.

A
All right, let's go to the next topic.
C
Yeah, this is another one from last week, where... oops.

C
Not last week, whenever the last time we got together was, where I had a suggestion that I thought was pretty straightforward, and I thought we agreed that we were going to make some changes regarding shared state between some of these controller function calls. But I don't know, there haven't been any updates since then, and I've been doing some work in the controllers, and I'd like to know where we stand with this.

C
It seems to me straightforward, or non-controversial, but I'm just wondering what the progress is.
A
All right, back to this one. Okay, so: CDI and disaster recovery.
C
Yeah, this is actually a new topic; well, something we've been looking into. Basically, KubeVirt is maturing, and people are starting to use it in some more complicated environments, and we're starting to see people want support for disaster recovery solutions. We've been looking into one specific case pretty deeply, which is what we call a Metro DR: primary and secondary sites very close to each other, with synchronous replication.
C
And the solution that we're using combines a bunch of different open source projects: basically ACM, which is multi-cluster management; Ramen DR, which is a way of managing volume replication; and some other projects, VolSync, and this replicated volume.
C
We came up with a couple of potential issues with the current implementation of data volumes. The way this specific case works is that when you're the primary, you're setting up the application initially: you apply some manifests in the GitOps repo, which will create a data volume and populate the data volume, and then that volume becomes replicated between the primary and the secondary site.
C
When you fail over, there is going to be a PV on that secondary site that already exists, waiting for a new PVC to be created with a specific name; basically the PVC from your application. And what was happening is that when we failed over, our data volume controller was essentially binding to that PV and rewriting over the data.
C
Last
week,
in
our
internal
knowledge,
sharing
I
I
presented
like
a
GitHub
gist.
If
you
click
on
that
simulation
thing,
you
can
see
what's
kind
of
what's
Happening
Now,
but
basically-
and
you
can
try
it
out
on
your
own,
but
basically
you
can
yeah,
we
don't.
We
don't
have
to
look
into
this
now,
but.
C
So what I'm proposing: if you go back, I have an in-progress PR to address this data corruption issue. Basically, we're adding an annotation to the data volume (it's pretty much described here) so that we'll check if there's a statically provisioned PV for that data volume before we create the PVC, and if so, we bind to that PV. That's it.

C
We don't do any population. I've started it, and I've implemented it fully for import; the rest of our sources are still to come. So yeah, the idea is that if you have a data volume and it's going to be in one of these replicated DR situations, you add the annotation, and we'll make sure it doesn't stomp over your data when it's reimported. So reviews and comments there would be great.
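A sketch of what the proposed annotation could look like on a DataVolume. The annotation name here is my reading of the in-progress PR; treat it as illustrative until the PR merges.

```yaml
# Illustrative DataVolume for the DR case above: CDI would check for a
# statically provisioned PV and bind to it instead of re-importing.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: my-app-disk
  annotations:
    cdi.kubevirt.io/storage.checkStaticVolume: "true"  # proposed, per the PR
spec:
  source:
    http:
      url: "https://example.com/disk.qcow2"
  storage:
    resources:
      requests:
        storage: 10Gi
```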
A
Yeah, and ideally... so, we've done some deep diving into one particular implementation of disaster recovery, but I think where we can definitely benefit is if there are others who have experience with other disaster recovery solutions. It would be interesting to understand if similar issues exist there, or if different issues exist.

A
This seems to be, I think, the standard way that you prepare a PV behind the scenes in a replica or failover type of situation. So hopefully this is the canonical way to achieve this, and therefore responding in this way should be a universal solution. But it never is, right? So, yeah.
C
Yeah, and to be clear, I think we definitely want to make data volumes work in this scenario. But we're going to start working on implementing PVC-based solutions: PVCs with populators that do the equivalent of our data volumes. And if your application is just dealing with PVCs, there's nothing special you'd have to do. So that's kind of the longer-term solution: to support populators, with data volumes as populators, and use PVCs directly.
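As a sketch of that longer-term, PVC-first direction, assuming CDI's volume populator CRDs (which were still being worked on at the time of this meeting), it could look roughly like this:

```yaml
# Illustrative populator flow: a CDI import source plus a plain PVC that
# references it via dataSourceRef; no DataVolume involved.
apiVersion: cdi.kubevirt.io/v1beta1
kind: VolumeImportSource
metadata:
  name: my-import-source
spec:
  source:
    http:
      url: "https://example.com/disk.qcow2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-disk
spec:
  dataSourceRef:
    apiGroup: cdi.kubevirt.io
    kind: VolumeImportSource
    name: my-import-source
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```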
B
I have a question; again, a very hundred-thousand-foot-view question. What exactly do we mean here, in terms of disaster recovery? Let's say I have a cluster: do we just create a copy of the data somewhere else, in a different cluster on different storage, and just restore from that if the primary storage in this cluster dies?
C
Yeah, so you can take a look at the specific projects down there. But essentially, the way they work is: OCM is basically multi-cluster management, and Ramen DR is the disaster recovery solution.

C
Ramen supports a couple of different replicated volume types. One they actually call a replicated volume, and this is synchronous replication for Metro DR. They also support asynchronous replication via a project called VolSync, which is another Red Hat project that asynchronously syncs volumes between clusters. Basically, Ramen DR is responsible for prepping the volumes, preparing them to be replicated, setting up the lower-level resources that allow for synchronous or asynchronous replication. And on a failover, it will basically get the system into a state that is ready for ACM, which is the cluster management.

C
ACM works by applying a GitOps model, just applying manifests. So the steps are basically: if you have a primary and you're failing over to a secondary, you make sure that the network fencing is done, and then you promote the secondary to primary. In the process of doing that, Ramen DR will create any PVs statically, and then ACM will apply the manifests, and any PVCs, or data volumes in our case, should bind to those PVs that Ramen DR set up.
C
So yeah, it's a complex orchestration of things, but that's basically what we've observed, and the one configuration that this specific annotation will help with. And the hope is that... well, you'll see when we go back, there's a...
C
The problem is that to make these replicated volumes work, I think we're going to have to have some specific annotations. Like, if we go to this next topic, for proper GitOps support:

C
We want to disable garbage collection. If other people don't know: the most recent version of, I guess, OpenShift and CDI makes data volume garbage collection the default, and that is not really GitOps friendly. Because if you have a manifest that defines a data volume explicitly, when you go to delete that manifest, which happens when you make a cluster not the primary anymore, it will not clean up everything; it will leave the data volume's PVC straggling.

C
So, right now, to support this specific DR configuration, I'm proposing my PR with that specific annotation on a data volume for handling binding to static PVs, as well as this annotation that already exists for disabling garbage collection.
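Combined, the two annotations being discussed would look something like this on one DataVolume; the garbage collection annotation exists today, the static PV one is from the in-progress PR, so verify both against current CDI docs.

```yaml
# Illustrative DR-friendly DataVolume: opt out of garbage collection
# (GitOps-friendly) and check for a statically provisioned PV first.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dr-protected-disk
  annotations:
    cdi.kubevirt.io/storage.deleteAfterCompletion: "false"  # keep the DV around
    cdi.kubevirt.io/storage.checkStaticVolume: "true"       # proposed, per the PR
spec:
  source:
    http:
      url: "https://example.com/disk.qcow2"
  storage:
    resources:
      requests:
        storage: 10Gi
```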
C
But, you know, I think this is again one very specific configuration, and we'd love to hear from other people in the community how they're dealing with DR.
A
Yeah, it seems that DR is really tied heavily to the underlying storage provider, because that's ultimately the place where the replication is happening. So we expect that, for example with NetApp Trident storage, they would presumably have a way to implement this with their storage.
D
I have some experience with that, from the desktop side, over the years. We are across 30 data centers, and all the users have a roaming profile across all these 180 countries we serve, and we have a way to sync the files across all the data centers for their profiles and files: My Documents, My Videos and so forth. But there is no disaster recovery for the desktop itself: when the user goes down, because the entire region is down, he's able to log into another region with his files.

D
But we do not recover the actual desktop; we, let's say, recreate the user's desktop on the fly during the user's login. This is how we do it. Okay, okay, cool.
A
Yeah, that seems like a nice, kind of cloud-native approach, where instead of relying on as much saved state, you have a small amount of state, that's like configuration, that's replicated, but the data-heavy portion can be dynamically recreated on demand. Which seems like a nice model as well.

A
I guess I'm curious: when the desktop is recreated, is it inconvenient for the user? Are they missing the data that might have been on their other desktop, or are they used to starting from a clean slate like that?
D
No. I'll remember the name in one second, but I'm going to explain how it works. This software runs on the Windows desktop, and we are creating a similar solution for Linux and Mac.

D
It does like a VHD(X) mount during the login of the user that has his profile, and in real time, when the user writes something, it's pushed back to the storage, the S3 storage we have in our central location, and this syncs all the files for the profile, My Documents, My Videos and so forth. Okay.
D
This
was
done
very
well
on
memory,
even
if
he
he
opens
the
file
and
write
something
he's
pushing
back
almost
in
real
time
in
less
than
milliseconds
he
pushing
back
to
the
profile.
He
always
he
can
lose
an
open
file,
but
not
the
content
of
the
openness
file
because
he
saved
in
a
different
state
in
the
S3
storage.
For
you
understand,
let
me
see
if
I
remember
hear
that
the
name
of
the
the
song.
A
Okay, that sounds good. And that sounds kind of like what I've been calling application-level DR, where the actual workloads are performing some synchronization. So that's interesting, yeah.

C
Definitely; the projects down below are concerned with more lower-level synchronization. Mm-hmm.
A
All right, so, on this one, I guess we're looking for reviews of the PR, and experience.

C
Reviews, comments, yes. It's still... I think the other sources should come along pretty quickly, but yeah, I have to finish the code too. So...
A
Okay, great. Anything else on the DR topic?

C
No, that's pretty much it. So yeah, for now, those are the solutions for the specific case that I've discussed, using a couple of annotations. And this is a topic we're all learning about, so I'd love to hear more from the community about other DR work that's going on.
A
Cool, all right; thanks, Michael. I'd like to jump over the triaging of CDI issues, and we can save that as a hygienic thing to do at the end of the regular agenda. So I see that LINSTOR has been added here to the agenda; why don't we take a few moments and discuss the updates here.
D
I remembered the software from Microsoft: it's called FSLogix. This was a company they purchased, and now it's part of Windows. If you're on Windows, you can use it.
D
I put the link in the chat, or you have it. Okay, great. I put this LINSTOR item in specifically because we are moving completely away from Rook and Ceph, because of the lack of functionality. The first and most important thing for us is deduplication, because it's at alpha stage on Rook and Ceph, and the copy-on-write, and they run this as storage.

D
We run KubeVirt on top of LINSTOR, and there is also the LINSTOR CSI driver that it uses behind the scenes to make the glue for KubeVirt. We use this daily; LINSTOR is very stable, and in less than a couple of milliseconds I'm able to do a clone of a disk, because we do copy-on-write, and the current clone solution from KubeVirt doesn't fit our needs, you understand. And also the IOPS for the storage, for the solution.
A
Okay. I assume the RAM disk storage is basically like a local cache that gets synchronized back?
D
When the server comes up, we bring it from the SSD to RAM, and when we shut down, we write it from RAM back to the local storage. Let me give you the link for how we do it; it's very simple scripts. Okay.
A
Cool, all right. Yeah, thanks for sharing some of this additional info. I know that some of us have taken a look at LINSTOR; Alexander Wels, for example. I think I remember... yes, I did look into that, and I was looking to see if I could make it part of our CI lanes. The issue I ran into is that LINSTOR itself, the actual back end, is proprietary, and therefore I really can't use it for our CI.
D
We have a deal with them that we can use it; that's why we prefer to use it.

A
For our CI: I'm assuming it would be nice for you guys to see us run LINSTOR in our CI, so you know that whatever we produce works with LINSTOR and there are no weird incompatibilities. But we can't use it in our CI because of the whole proprietary back end. So...
A
I think an easy short-term solution for that is probably something you're already doing (well, maybe not quite), which is that you're running an environment with all these components together that you care strongly about. So when you find issues there, and hopefully there are not too many unique issues related to your storage config that come up, since we're trying to be pretty storage-agnostic with these components, but when you do find those, obviously it's important that this stuff works everywhere.

A
So, you know, the community is happy to address issues that arise with particular storage, even though we're not able to test all of those combinations. All right, so I think, hopefully, we're pretty well set there anyway.
C
The instant clone... can you... I think I missed what that is. Is it basically that, you know, Ceph clones are not super efficient? Is that pretty much it, yeah?

C
Yeah, yeah, that's... yeah, so.
A
So Alex posted a link here on the agenda, under this copy-on-write topic, regarding something that he's working on. Ceph is able to clone pretty much instantly from a snapshot, but the current implementation that's already upstream creates a snapshot, then clones from the snapshot, then removes the temporary snapshot. It's those snapshot operations that are time-consuming.

A
So if you have a model where you have sort of a one-to-many relationship, which sometimes we call a golden image (it might be a base operating system image that you want to create a thousand clones of as you're spinning up new desktops, for example), in this case, if you make the clone source a snapshot instead of a PVC, then the clones that happen from that snapshot on a Ceph system are basically instantaneous. Yes, so we're having some really good luck with that.
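A sketch of a snapshot-source clone as described: the clone source is a VolumeSnapshot of the golden image rather than a PVC, which on Ceph is nearly instantaneous. Field names follow the CDI work referenced here; verify against the merged API.

```yaml
# Illustrative clone from a golden-image snapshot.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: desktop-0001
spec:
  source:
    snapshot:
      namespace: golden-images
      name: fedora-base-snap
  storage:
    resources:
      requests:
        storage: 30Gi
```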
D
The
missing
part
there
is
that
the
the
duplication,
only
okay,
because
we
don't
have
a
lot
of
ram
disk
on
each
server.
That's
why
the
duplication
is
also
must
have
we
achieved
that
with
ZFS
on
top
of
of
lean
store
for
you
understand,
okay,
is
there
a
way
to
do
it
over
a
rook
concept
that
the
publication,
because
I
was
aware
only
they
are
on
Alpha
stage
only.
A
Yeah,
that's
the
last
I
heard
I.
Don't
have
any
any
details.
C
No, he had no news either; he's a Ceph CSI developer, so I don't know.

D
Is there someone responsible inside Red Hat for the deduplication that we could ask?
A
Yep, that's an interesting feature for sure, for those who do want to use the platform. I think any time you have these models where you have a lot of derivative volumes that come from the same base operating system, there's a lot of opportunity there, yeah.
D
Since
we
are
talking
about
golden
image,
can
we
talk
a
little
bit
about
that
sure?
We
update
the
windows,
Linux
and
Mac
golden
image
every
week
and
we
have
created
a
cicg
pipeline
to
do
that.
We
was
able
to
dump
the
files
from
Microsoft.
D
The
latest
is
greatest
and
we
create
an
ISO
file
and
we
we
start
VM,
install
everything
and
now
then,
after
that,
create
the
golden
image,
for
you
understand
completely
automated,
but
this
is
something
that
the
missing
part
in
a
proper
way
is
how
to
after
I
create
the
golden
image,
how
to
replicate
across
all
the
the
Clusters
I.
Have
you
understand
today's?
This
is
a
sink
of
file
and
I.
Think
the
best
way
is
to
have
this
over
the
convert.
Api.
A
The DataImportCron API objects that we have... Arnon was the principal developer in that area, and I think the natural choice is to push the disk images into a container registry. If you do this, the DataImportCron logic allows us to check that registry for updates periodically; then the updated image is pulled down, and we have something called a DataSource that can be updated dynamically...
A
...to then point to the updated image. We can maintain a list of a certain number of previous images and garbage-collect the old ones. This feature is really nice, because when you set up your virtual machines to clone from the data volume that's managed by a DataImportCron object, then essentially you have automation that updates the VM disk images.

A
When you have an image that's ready to be deployed into your cluster, you just push the new tag, or the new version, up to the registry, and then the system will detect the presence of the update and roll it out to the rest of the cluster.

A
I think all of the clusters can be hooked up to that same container registry where you're pushing the image.

A
So then, essentially, you have a private registry, or a registry that's accessible to all your clusters, that you push your base images into from CI/CD. And then what ends up happening is each of these clusters will recognize the updated image, pull the image down to the cluster locally, and store it into a PVC where it can be instantly cloned in the future.
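A sketch of the flow just described, using a DataImportCron; the registry URL, schedule, and names are illustrative.

```yaml
# Illustrative DataImportCron: poll a registry for new golden-image pushes,
# import them to a PVC, keep a few previous imports, and keep the
# 'windows' DataSource pointed at the newest one.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  name: windows-golden-image
spec:
  schedule: "0 2 * * *"        # check the registry daily at 02:00
  managedDataSource: windows   # DataSource that VMs clone from
  importsToKeep: 3             # previous imports retained
  garbageCollect: Outdated
  template:
    spec:
      source:
        registry:
          url: "docker://registry.example.com/golden/windows:latest"
      storage:
        resources:
          requests:
            storage: 60Gi
```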
A
Yeah, so I would definitely encourage you to take a look at the documentation inside of the CDI repo, and Arnon maybe could dig up the link or something to help out, maybe popping that into this agenda for people who might be viewing the recording, and then I would just take a look at that stuff.

A
And, you know, you can ask the community for some assistance on things if they're not clear. But I think that should work without too much... maybe a little bit of redesigning how your flow works, but not too much new code.
A
All right. So, if there are no additional topics: I think it was suggested last time that we just take a look at the issue list. I'm not exactly sure what order to go in, but we're just kind of trying to make sure that we don't let any of these issues wither on the vine. So, I don't know, is there a suggestion? Do we want to start with the oldest ones?
C
Well, yeah, maybe this is special for the first-time run of this, but usually in the KubeVirt community meeting we just start at the top and go down until we've hit the date of the last community meeting. Okay.

C
Yeah, I mean, we may want to do a deeper dive this time, I don't know. Yeah.
A
We
have
a
few,
it
seems
like
we've
got
about
maybe
15
minutes
or
so
to
take
a
look
at
a
few
of
these.
So
why
don't
we
take
a
stab
at
it
and
I'll
take
a
note
of
how
far
we
got
and
then
we
can
continue
later.
A
It
looks
like
it's
a
little
bit
hard
to
see
what's
happening,
okay,
so
when
uploading
large
images,
oh
I
think
this
is
maybe
an
issue
where
the
I
wonder
if
the
we
reach
a
timeout.
If
the
upload
is
too
slow
and
then
it
ends
up
closing
the
connection.
No.
I
C
C
C
So
if
you're
uploading
data
and
it's
sending
sending
and
then
you
get
cut
off,
but
that's
not
related
to
the
token
but
I
think
this
is
something
that
comes
up
I,
don't
know.
This
issue
seems
to
come
up
a
lot.
I,
don't
know
why.
A
Okay,
well,
the
interesting
thing
here
is
the
upload
completed
and
then
so
I
wonder
if
it's
some
sort
of
like
redirection
like
when
we,
because
after
we
finish
uploading,
then
we
do
the
Prof,
the
post-processing.
A
Okay,
so
we
will
probably
won't
solve
it
on
the
call
with
everybody.
But
what
do
we
think
a
Next.
C
Step,
oh,
if
you
go
back
up
actually
there's
an
interesting
thing
there.
So
it
looks
like
Ingress
engine
X
log.
If
you
scroll
up
so
it
looks
like
so.
It
looks
like
the
upload
proxy
is
being
exported
via
Ingress,
an
Ingress
controller,
and
you
know
there
can
be
all
sorts
of
connection
handling
issues.
So
I
wonder
the
proxy
could
be
like
detecting
an
idle
connection
and
closing
it
something
like
that,
because
once
the
image
is
uploaded,
then
we
do
the
conversion
phase
and
the
client
is
still
connected.
A
Isn't that what we did the whole asynchronous thing for? Where you start the upload and you essentially disconnect the connection, except for the data that's being transferred, and then let the system do the rest, so that your client doesn't report a...

C
I don't see that message in the actual report here. It seems to do the 401 right after it finishes uploading, and I'm not quite sure why that...
F
Just a quick note: I am just looking at the code for routes, OpenShift routes, and I see that we add an annotation that basically says keep a long timeout for the client connection: 16 minutes.
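For context, this is roughly what such a timeout annotation looks like on an OpenShift route, plus the ingress-nginx equivalent a user-managed Ingress would need; the 960-second values mirror the 16-minute route timeout. A sketch, not CDI's exact manifests.

```yaml
# Illustrative upload-proxy exposure with long client timeouts.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: cdi-uploadproxy
  annotations:
    haproxy.router.openshift.io/timeout: 16m
spec:
  to:
    kind: Service
    name: cdi-uploadproxy
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cdi-uploadproxy
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "960"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "960"
spec:
  rules:
  - host: upload.example.com    # illustrative
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cdi-uploadproxy
            port:
              number: 443
```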
E
I added the asynchronous thing because, for the upload itself, your connection is not idle, since you're actually sending data. But once you're done uploading, we do the resize and conversion and other stuff like that, and that can take a little bit of time, especially if you have slow storage. That's where we originally were seeing the timeouts: the client would say "hey, that timed out, there was an error," but in fact there was no error, it was just doing its thing. And so we made an asynchronous connection that, once you got done uploading, it...
A
Okay, so who can I tag in here to follow up? Because I think we should probably... we may not solve it here, but we should probably try to. It seems like it's understood, at least, and maybe something that can be solvable.

A
All right, so let's...
A
Okay, so I'd like to go to the next-oldest issue, which is "dynamically resize virtual disk image based on size of cloned PVC."
C
This is solved... well, this happens when you start the VM: if you have the ExpandDisks feature gate enabled, the disk will get expanded. But yeah, CDI doesn't deal with that directly.
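For reference, enabling that feature gate is a KubeVirt-side setting; a minimal sketch, assuming the gate is still named ExpandDisks:

```yaml
# Illustrative KubeVirt CR fragment enabling disk expansion at VM start.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
      - ExpandDisks
```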
A
So this is working in conjunction with KubeVirt. Well...

C
Yeah, right. If... right, the...
A
Okay, so I'm seeing it got closed, but then we have a tangential question from the original. If you...
G
It's been mentioned on the bug already, but we do have a tool called virt-resize that could work here. I mean, it's designed to resize guest disk images.
A
It seems to me that this should be resolved. Maya has these PRs that she mentioned where we're doing the rescans. Okay, yeah.
G
Yeah, I mean, this is very true: it is very hard to resize a disk image reliably. And the way that virt-resize works is it literally won't let you resize in place, partly to force you to keep a backup, and partly because of the way it works: you can't, for example, move partitions around safely. So it always copies into a new PV, and that's probably the reason why it was suggested but then rejected.
A
All
right
so
I
think
this
one's
a
candidate
for
reclosure
again
so
I've
just
asked
the
reporter
to
give
us
some
details
and
let's
see
if
we
can
tackle
one
more
issue
before
we
go.
C
Yeah, we're still, to this date... basically, CDI code references a fork of library-go in my private GitHub, and it probably shouldn't. Okay. I opened an issue to make some...

C
I think I opened an issue in library-go. Anyway: basically, there are some functions we're using from library-go, but they're not public, and I just made them public in my fork; that's really the only difference. I think I submitted an issue. Or, click on that: openshift/library-go 540. Nope... right, yeah, let's see what's going on there.
A
It
oh
he's
asking
for
docs
on
the
public
method,
so
it
looks
like
there's.
Some
should
I
reopen
this
or
I
can
allow
you
to
reopen
it.
I
guess.
A
I
C
Right
yeah,
the
pr
was
not
created
by
me,
so
I'll
probably
just
end
up
creating
a
PR
that
does
the
same
thing
there,
but
with
Docs.
A
Okay:
let's
try
that
all
right,
so
I
think
that
is
good.
What
I'm
going
to
do
is
I'm
going
to
make
a
placeholder,
so
I
think
we
would
be
on
this
issue
next,
so
I'm
just
going
to
make
a
note
of
that,
and
we
can
just
continue
to
work
our
way
up.
Luckily,
we
we
aren't
super
unhygienic
with
respect
to
issues,
as
we
only
have
these
two
pages,
but
it'd
be
nice
to
get
through
them.
A
All right, I'll take that as a no. Great seeing everyone; thanks for the participation and for joining today, and we'll catch you in another two weeks at the next one. Have a great day.