From YouTube: OKD4 on NVIDIA GPUs on AWS Zvonko Kaiser & Ceph Operator demo Charro Gruver OKD Deployment Marathon
Description
Deploying OKD4 on NVIDIA GPUs on AWS - Zvonko Kaiser (Red Hat)
Deploying Ceph Operator and configuring MariaDB Galera on OKD4 demo - Charro Gruver
OKD4 Live Deployment Marathon
August 17, 2020
Day Zero Kubecon/EU Virtual
A
Okay, first introduce yourself and tell us a little bit about who you are, what you do at Red Hat, and all the other good things.
B
All the good things, okay. My name is Zvonko Kaiser, I'm team lead for the OpenShift PSAP team. The PSAP team is mainly responsible for enabling accelerators on OpenShift, and one prime example is the GPU.
B
We also enable other accelerator cards, like Solarflare, and we're doing this right now with Mellanox. In the process of enabling those extra accelerator cards we developed an operator called the Special Resource Operator, which is the base for the NVIDIA GPU operator, for the Mellanox operator, and for some other hardware accelerators that are using SRO to enable the hardware.
B
This is OKD 4.5, pre-installed on AWS. We've seen a lot of installations today, so I don't think we need to see another installation. We have three worker nodes and three master nodes, three CPU worker nodes, and now we need to add a GPU worker node. So the easiest thing to do right now is to go to the openshift-machine-api project.
B
And the only things we need to change in this machine set are the name, maybe the cluster name, the machine role and type, and the important setting, which is of course the instance type.
B
I have a pre-populated machine set here, which I use for this demo, and we are using a G4 instance, which has a T4 GPU, on AWS. Currently we get either V100s or T4s.
B
We can add more settings, but the most important one is the instance type, so we just create our new machine set. Oh, I forgot to say something: another thing we should look at is replicas.
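
For reference, a minimal sketch of that workflow on the command line; the MachineSet and infrastructure names are placeholders, and the instance type is the G4 instance mentioned above:

# A sketch of creating the GPU MachineSet: dump an existing worker MachineSet,
# change the name, replicas and instance type, and re-apply it. All names below
# are placeholders.
oc -n openshift-machine-api get machinesets
oc -n openshift-machine-api get machineset <infra-id>-worker-us-east-1a -o yaml > gpu-machineset.yaml
# edit gpu-machineset.yaml:
#   metadata.name (and the matching machineset labels/selectors): <infra-id>-gpu-worker-us-east-1a
#   spec.replicas: 1
#   spec.template.spec.providerSpec.value.instanceType: g4dn.xlarge   # G4 instance with an NVIDIA T4
#   (also remove status, uid and creationTimestamp fields)
oc apply -f gpu-machineset.yaml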
B
Fedora CoreOS is an immutable, container-only system, so to have the NVIDIA GPU operator working we developed the notion of a driver container. A driver container is kind of a delivery mechanism for out-of-tree drivers via a container, and we are currently working with NVIDIA to add Fedora CoreOS support, but for now we don't have any such containers available. So we are going to build our own driver container.
B
I explain in detail in the document how to tag the driver container, how to use the repository plus the name, and how to instantiate the NVIDIA GPU operator later on with some settings to make it run. Let's take a look at our nodes: okay, they are not yet up. When those two GPU nodes appear, we have a heterogeneous cluster.
B
In the past, people were manually labeling the nodes as a GPU node or as a CPU node to steer the right containers onto the right node. Since OpenShift 4.2, and we are currently working on adding it to OKD as well, we introduced NFD. NFD (Node Feature Discovery) is a project which exposes node features to the cluster, for example CPU flags, PCI devices, and other hardware that is present. So the first step is to bootstrap heterogeneity, so that we have the labels automatically applied to our cluster without manual intervention.
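
To illustrate what that looks like once NFD is running, the GPU node carries labels like the ones below; the node name is a placeholder and the exact label set depends on the NFD version:

# Inspect the feature labels NFD applied to a node (node name is a placeholder).
oc describe node <gpu-node> | grep feature.node.kubernetes.io
# Typical labels include, for example:
#   feature.node.kubernetes.io/pci-10de.present=true     # an NVIDIA PCI device (vendor 10de) is present
#   feature.node.kubernetes.io/kernel-version.full=...   # running kernel version
#   feature.node.kubernetes.io/cpu-cpuid.AVX2=true       # CPU flags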
B
We are currently working on adding it to OperatorHub for OKD, so that our customers can install it with just one click. But for now we have to check it out and deploy it ourselves.
B
We check out the NFD operator, change the branch to release-4.5, and we can just run make deploy with pull policy always. What this will do: it will deploy the NFD masters, which are responsible for labeling, and the NFD workers that run on the worker nodes for detecting the features.
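
Roughly, the steps described here look like the following; the repository URL and make target are assumptions based on the upstream cluster-nfd-operator of that time, so check its README for the current workflow:

git clone https://github.com/openshift/cluster-nfd-operator
cd cluster-nfd-operator
git checkout release-4.5
make deploy   # deploys the NFD operator; it creates nfd-master (labeling) and nfd-worker (detection) pods
oc get pods --all-namespaces | grep nfd   # the operator's namespace name may differ by release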
B
Otherwise you will get, for example, an illegal instruction error if you're running on the wrong node or on an older CPU. You can also extract whether SELinux is enabled, and for the driver container to work we need the kernel version that is running, we need to know which operating system it is. For example, the operating system release, it's Fedora 32, so by tagging the driver container in a specific way we can steer only drivers that are pre-built for this kind of kernel version and operating system onto the right node, and the PCI 10de label identifies the NVIDIA device.
B
One thing we need to do: we currently have a wrong CRI-O config in this version, and there's an upstream fix for that. So we need to reload CRI-O with a corrected config. That's what we're going to do now, to enable the hooks directory. The way GPUs are enabled in a container is that NVIDIA has written a prestart hook. A prestart hook is called during the runtime stages or lifecycle of a container, and the prestart hook is executed just before the command is run.
B
So this is the wrong hooks directory here, because we cannot write to /usr. We need to change this to /etc, create the directory under /etc, write the config, and run systemctl.
B
And reload and restart. So we prepared the host and the node so that prestart hooks are placed in the right directory and CRI-O is now able to pick up the prestart hook. Otherwise CRI-O wouldn't find this hook in /etc, because it would just look in /usr.
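
A minimal sketch of that node preparation, assuming the hooks live under /etc/containers/oci/hooks.d; the drop-in file name is an assumption, and on a managed cluster this would normally be rolled out through a MachineConfig rather than by hand:

# On the GPU node (e.g. via `oc debug node/<gpu-node>` and chroot /host):
mkdir -p /etc/containers/oci/hooks.d
cat <<'EOF' > /etc/crio/crio.conf.d/99-hooks.conf
[crio.runtime]
hooks_dir = ["/etc/containers/oci/hooks.d"]
EOF
systemctl daemon-reload
systemctl restart crio.service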
B
We are currently installing it via Helm; we are also working to add it to OperatorHub to have it in OKD, so later on it would be just a click in OperatorHub and instantiating a ClusterPolicy with the right settings, without fiddling around too much. But for now we have to do it via Helm: create a project for the GPU operator where we can keep our stuff, and then install.
B
There are a lot of settings we are setting here, which usually are encoded in the CR when instantiating it from OperatorHub. But since we are overriding the driver container, we need to add things like platform.openshift=true, that we have the default runtime CRI-O (because the GPU operator also works on Docker and containerd), our driver repository that we created before, a driver version, a toolkit version, and we are setting nfd.enabled=false, because the NVIDIA GPU operator is also able to install NFD.
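
A sketch of that Helm invocation; the value names follow the gpu-operator chart of that era and may differ in newer chart versions, and the repository, versions and project name are placeholders for the driver container image built earlier:

helm repo add nvidia https://nvidia.github.io/gpu-operator && helm repo update
oc new-project gpu-operator-resources
helm install gpu-operator nvidia/gpu-operator \
  --set platform.openshift=true \
  --set operator.defaultRuntime=crio \
  --set nfd.enabled=false \
  --set driver.repository=quay.io/<your-org> \
  --set driver.version=<driver-version> \
  --set toolkit.version=<toolkit-version>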
B
The image, this is the one we created before; we instantiate it with the right driver container so that we're running a Fedora 32 driver container. Then the device plugin is used for exposing the hardware to the cluster, and after each of those steps we are running a validation step. The NVIDIA driver validation will just run a small CUDA application, cuda-vector-add, allocating memory and doing some computations to verify that the driver is working. After that, the device plugin is deployed.
B
We are running a device plugin validation, allocating an extended resource and running the same cuda-vector-add on it, to verify that the device plugin and CUDA are still working. There is a custom node exporter for NVIDIA metrics; I will show the Prometheus integration for NVIDIA later on. And as said before, the container toolkit is there for the prestart hook.
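
Conceptually, the device plugin validation amounts to a pod like the following, requesting one GPU through the nvidia.com/gpu extended resource; the image name is illustrative, not necessarily the one the operator uses:

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    image: nvidia/samples:vectoradd-cuda10.2   # small CUDA vector-add test image (illustrative)
    resources:
      limits:
        nvidia.com/gpu: 1                      # extended resource exposed by the device plugin
EOF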
B
We have a bug in the NVIDIA device plugin daemon set: it's not restarting on error. This is from the early days when people were not using NFD and had the device plugin running as a daemon set that would run on any node, so the daemon set was also running on CPU nodes, and NVIDIA decided to just sleep on an error, which should be fixed in the next release.
B
The other step was just to prepare the container with the source code and the tools that we need, and the driver container will then figure out, on the cluster where it's running, the kernel version, install kernel-devel, headers and other tools that it needs for building, and then build the drivers and the kernel modules on the fly on the node.
B
And the complete stack is done for the parts that should be completed, and all the other parts are running. We can now look again at the nodes.
B
An MPI workload which runs TensorFlow, distributed by Horovod, is pretty simple: we just need to deploy the MPI operator, which creates an MPIJob CRD, and we can instantiate a CR with an MPI job.
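
A sketch of such an MPIJob CR; the apiVersion and fields follow the Kubeflow mpi-operator of that era, and the image and command are placeholders:

cat <<'EOF' | oc apply -f -
apiVersion: kubeflow.org/v1alpha2
kind: MPIJob
metadata:
  name: tensorflow-benchmarks
spec:
  slotsPerWorker: 1
  cleanPodPolicy: Running
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - name: launcher
            image: <horovod-tensorflow-image>
            command: ["mpirun", "python", "tf_cnn_benchmarks.py"]
    Worker:
      replicas: 2
      template:
        spec:
          containers:
          - name: worker
            image: <horovod-tensorflow-image>
            resources:
              limits:
                nvidia.com/gpu: 1
EOF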
B
Okay, what also works with GPUs is autoscaling. You can create a machine autoscaler, with min and max replicas, that references this very machine set that we created here. So if a pod comes in and the Kubernetes scheduler sees it's pending, the cluster autoscaler will kick in on AWS and create a GPU node. The NVIDIA GPU operator will take care of installing the NVIDIA stack, NFD will label the node so the operator knows where to deploy all the pieces, and the workload can run.
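
A sketch of the autoscaling piece: a ClusterAutoscaler plus a MachineAutoscaler with min/max replicas that references the GPU MachineSet created above; names are placeholders:

cat <<'EOF' | oc apply -f -
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec: {}
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: gpu-worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 4
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <infra-id>-gpu-worker-us-east-1a
EOF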
A
Actually, I think you're doing a pretty wonderful job of this, and not a lot of questions are coming in. However, the document that you shared with the notes, I think a few people are having some technical difficulties getting into it, and I'm wondering if, after this demo, you can make it into something that's more public than the Google Doc, so that we can have access to it. Maybe somewhere in the OKD repo, or wherever Christian and the team think is best.
B
Yeah, if there's something that's more publicly available, I'm happy to share it and update it as we proceed, and add all the thoughts that I have mentioned here that are still work in progress.
E
So there's this great hackmd.io tool, which is like an Etherpad, but it uses Markdown the same way GitHub does; you can actually store it in GitHub as well. And then you can put permissions on there similar to Google Docs, but it's more collaborative and more open. So maybe that's a good choice. Otherwise, a PR to the OKD repository would be super.
A
Yeah, that's it: put it in HackMD, we can all hack on it, add to it, and give our feedback on it. But I think getting it into the docs, how to configure GPUs and stuff like that, eventually is a, not long-term, medium-term goal as well. We tend to do a lot of documentation by blogging, so we can clean some of that up.
B
Yeah, maybe I should do an OKD GPU blog, but since I'm covering a lot of this, yeah, maybe I should do it on OKD. There are also some documents on how SRO and the NVIDIA GPU operator work internally. If you're looking on OpenShift for how to enable hardware accelerators, there are several posts describing what a driver container is and how to use entitled builds.
B
So you don't have to fiddle around with the host, because we said from the beginning: don't touch the host at all, keep it in a container, because then it's far easier for an operator to pull those images or update the drivers. And in conjunction with NFD, we have all the runtime information there to easily tag or pull the right container into the cluster. Then, after the driver container is deployed, we have the device plugin, which exposes the hardware.
B
Then we have the node exporter, which exposes metrics, and we also had in SRO a custom Grafana, which exposed a GPU dashboard, and NVIDIA is currently thinking of adding this as well, so that they have maybe their own Grafana dashboard just for GPUs and the metrics they have, because there's a flood of metrics they can expose: NVLink metrics if you're doing multi-node, all the bandwidth from node to node, and then all the GPU metrics and alerts, if you're overheating, if you're not using the GPU (because they cost a lot of money). So there's a lot of stuff that can be added to the GPU operator.
B
The only thing we supply is a custom CR and some manifests, and the operator takes care of the rest: ordering, state transitions, RBAC rules, and stuff like that. So we are at the end; the TensorFlow run finished here.
B
A GPU workload run multi-node with MPI and Horovod distributed, and that's the end of my demo here. I'm happy to answer any other questions if they came up.
A
Oh, I'm not sure whether you can bring up the OKD dashboard from here, just to end on a screenshot?
B
Yeah, yeah, just a second.
A
And you did make it look easy, so hopefully we can get this documentation up and accessible somewhere so other people can test it out.
A
And our next speaker is, I think, Justin Pittman is coming on board next, and then Joseph will be on after that. Yeah, Dustin's going to attempt to do OKD on oVirt using IPI, living dangerously on the edge.
E
Yeah, I just wanted to say, this has been a feature that has been requested a lot of times actually in the working group meetings, not only by Joseph but I think others as well, so it's really great to see this works, because we weren't able to definitively say it works, as nobody had tested it out. But that's super great to see.
A
I think you've actually done something pretty awesome, and we're really grateful that you took the time today to come and join us. I know we'll probably hook you up with a lot more questions afterwards; I'll get your email and your contact information and put that out there on the working group.
A
So if people have questions, they can reach out to you directly or post questions. Actually, that's probably a better question: if we have questions about the NVIDIA GPU and working with OpenShift and OKD, what's the best route to asking those questions?
B
If something is missing or we need to do more work, I'm tracking all the work upstream with NVIDIA; I'm the technical lead for GPUs on OpenShift. So for any requirement, I'm happy to help get those things upstream or to have more features included. Perfect.
A
And we're in talks right now to do an ML/AI OpenShift Commons gathering in October, co-located with the GTC event. That's going to be virtual then, so we'll probably see a lot more from you and other folks doing interesting workloads here.
A
So I look forward to all of that, and October should be a really interesting month, because we'll have both that GTC event and, I think, we're going to try to do something with Open Infrastructure around OpenStack and an OpenShift Commons gathering, so we'll be busy in October and probably the ramp-up to that. I'll also share the link in chat to the video from last week that you did.
A
That was a deeper dive into it, and we'll get all this up and running on our YouTube channel again in a playlist for today. Great work; I'm totally appreciative, and we all are, of all the efforts that go into the accelerator program.
A
So thanks. I'm not seeing any more questions anywhere, so what I'm gonna do is put Charro on the spot. We have about 20 spare minutes here. Which demo, Charro, would you like to try: Rook Ceph, or there was one other one, the MariaDB?
F
They need to go in order, because we have to have storage to provision in order to deploy a MariaDB Galera cluster. So we'll start with the Ceph operator and we'll go from there.
A
And this, by the way, is what we do to all new employees. We just throw them into the fire pit and let them soak in it until they're good enough to demo anything.
F
Let me clear this out. If you guys can see the screen, on the left the common.yaml file and the operator-openshift.yaml file, those are pulled verbatim from this project. You can see the path here to get to it, and it's release 1.4.
F
It's the latest release; I'm not quite brave enough to run directly out of master at this point. So back to the cluster that we deployed earlier this morning: you can see it is fortunately still healthy. There are a few errors being thrown, but it tends to do that in my home lab, network latency and such. So, here's what we're going to do first.
F
The three worker nodes that we deployed: what I didn't tell you when I deployed them was that I actually deployed them with an unused hard drive attached to the virtual machine. So it installed the operating system; it's using a SATA bus, so it installed the operating system on sda, but it's got an sdb sitting there that is not currently being used.
F
What we're going to do is create a Ceph storage cluster to serve up block devices on these worker nodes. The first step is that we need to label those nodes to give them a role of storage-node, and so I just applied that label to them. If I run a quick oc describe on one of those nodes, I can show you that it now has a role of storage-node.
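
What that labeling amounts to, roughly; the node names are placeholders, and the exact label key used in the demo is an assumption, a node-role label being what makes the role show up under Roles in oc describe:

# Label the worker nodes for Ceph (node names and label key are assumptions).
for node in worker-0 worker-1 worker-2; do
  oc label node "$node" node-role.kubernetes.io/storage-node=""
done
oc describe node worker-0 | grep Roles   # should now show: worker,storage-node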
F
Okay, so that's step one: we need something to tell Ceph what it's going to be working with. Step two is we're going to deploy this common.yaml, which, as you can see, is creating a whole lot of boilerplate that the Rook operator is going to need, and one of the things it did was provision the namespace for us.
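
The corresponding commands, using the example manifests from the Rook project's release-1.4 branch, the same files shown on screen:

git clone --branch release-1.4 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
oc create -f common.yaml               # namespaces, CRDs and RBAC boilerplate
oc create -f operator-openshift.yaml   # the Rook-Ceph operator, OpenShift variant
oc -n rook-ceph get pods -w            # watch the operator and rook-discover pods come up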
F
And this will take a little bit for it to bootstrap itself. The operator image right now is pulling down and installing. When the operator is up, it's going to create some workloads, some pods on each of those nodes that bear the label, so that they can discover the resources available on that node, and this will take just a little bit to run.
F
Okay, and there you see the three discover pods that are spinning up now.
F
Okay, the cluster.yaml file is the thing that actually defines our particular Ceph cluster, and again, the Rook project has a boilerplate copy of this for you to take and modify for your own purposes. This is the version of Ceph we're going to be running, 15.2.4.
F
Just like you see in a typical deployment, and here is the piece of magic that tells it where to find those devices that it's going to create the Ceph storage cluster on. Now our operator appears to be fully bootstrapped and up and running, so the next step is to go ahead and deploy our Ceph cluster on top of that. All right, now this is also going to take a little bit and you're going to see a bunch of activity here as the operator provisions this cluster; there are the three monitor instances you just saw spin up.
F
Then the Ceph cluster is up, and it is ready for use. It looks like we're still waiting for one of the crash collectors to go into a ready state, but everything else at this point should be usable.
F
For the volume, I've created a storage class here that I'm going to apply. Its name is rook-ceph-block, and it's going to use the new Rook CSI plugin that we just deployed as the provisioner.
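
Roughly what that storage class looks like; this follows the upstream Rook example and assumes a CephBlockPool named replicapool has also been created from the examples:

cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com     # the Rook CSI RBD driver
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
EOF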
F
All right, now we should have a persistent volume claim, and the key here, you can see, is that it is now bound to an automatically provisioned persistent volume that our Ceph cluster kindly handed out for us. We can see that from the command line as well; there it is, 100 gigabytes. Now, remember, if you were watching previously when we deployed the OpenShift cluster, we gave our image registry an ephemeral volume.
F
We need to remove that ephemeral volume before we give it the new volume, so a caveat here: any images that you had pushed in between, you're going to lose those, because we're yanking away the storage. You would have lost them anyway, because this is an ephemeral volume.
F
Okay, now I'm going to put our registry back into a Managed state. We're going to tell it to use the persistent volume claim, registry-pvc, and we're also changing the rollout strategy of this to Recreate, because I created a ReadWriteOnce volume.
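
A sketch of the registry change; the claim name registry-pvc is an assumption matching the PVC created above:

# Drop the ephemeral storage, then point the registry at the new PVC and switch
# the rollout strategy to Recreate (needed for a ReadWriteOnce volume).
oc patch configs.imageregistry.operator.openshift.io cluster --type json \
  -p '[{"op":"remove","path":"/spec/storage/emptyDir"}]'
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  -p '{"spec":{"managementState":"Managed","rolloutStrategy":"Recreate","storage":{"pvc":{"claim":"registry-pvc"}}}}'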
F
There's a shell script that is going to be run by the image when it starts up, which actually provisions the MariaDB cluster: it detects whether or not a cluster already exists, whether it's the first node in the cluster, and so forth. I've got a short tutorial written up on this that you can see, so I won't drill into it here. We'll just do the fun part and kick it off.
F
And now I'm going to do a podman build and build our MariaDB image, and you see I'm grabbing the route from the image registry to tag my image that I'm getting ready to build, so that I can push it to the registry. It generally doesn't run that fast; I ran this just a little bit ago to make sure it was going to work.
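
A sketch of that build-and-push, assuming the registry's default route has been exposed; the project and image names are placeholders:

REGISTRY=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
podman build -t "${REGISTRY}/mariadb-galera/mariadb:latest" .
podman login -u "$(oc whoami)" -p "$(oc whoami -t)" --tls-verify=false "${REGISTRY}"
podman push --tls-verify=false "${REGISTRY}/mariadb-galera/mariadb:latest"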
F
Let's create a service account. Okay, we're going to create a service account for MariaDB. The reason is that MariaDB is picky about its UID, and it's especially picky if it restarts and its UID has changed; it tends to get upset. So we're creating a service account, and we're actually going to run this privileged with this new service account so that it can run as any UID.
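
Roughly, that amounts to the following; the demo says privileged, while anyuid is the narrower SCC that lets a pod run as an arbitrary fixed UID, so the SCC choice shown here is an assumption:

oc create serviceaccount mariadb
oc adm policy add-scc-to-user anyuid -z mariadb
# the MariaDB workload then sets serviceAccountName: mariadb in its pod template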
A
It may be one of the ones that we need to, as a working group, add in as a community operator.
F
Yeah, and that's why I deployed it by using the operator configuration that is provided in the Kubernetes Ceph examples in the Rook project itself.
F
All right, so our second cluster node is coming up, and these do an ordered startup and an ordered shutdown, so that you can gracefully stop and start this cluster and it will retain its state. When this is done, we have a three-node MariaDB Galera cluster, that is, a full multi-master database cluster running in our OpenShift with provisioned storage. Wow.
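
To show the shape of such a deployment, a minimal sketch of a Galera StatefulSet: the ordered startup and shutdown come from the default OrderedReady pod management policy, and each replica gets its own Ceph-backed volume through volumeClaimTemplates; the image, headless service and sizes are placeholders rather than the demo's actual manifests:

cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb-galera
spec:
  serviceName: mariadb-galera        # assumes a matching headless Service exists
  replicas: 3
  podManagementPolicy: OrderedReady  # pods start and stop one at a time, in order
  selector:
    matchLabels:
      app: mariadb-galera
  template:
    metadata:
      labels:
        app: mariadb-galera
    spec:
      serviceAccountName: mariadb
      containers:
      - name: mariadb
        image: <registry-route>/mariadb-galera/mariadb:latest
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: rook-ceph-block   # the Rook-Ceph block storage class created earlier
      resources:
        requests:
          storage: 10Gi
EOF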
A
Well played, thanks Charro. That was pretty awesome, I think, and I keep emphasizing in the chat too: some of the operator work is the next thing on the roadmap that we're trying to get folks to work on, getting some of those default operators from OperatorHub into community.