From YouTube: Cloud Native Live: Introducing Kubestr - A New Way to Explore your Kubernetes Storage Options
B
It's more than just open source, it's about connecting with people, it's about being part of the community. It's about sharing what you know and helping others. KubeCon is the best place to get hooked into the community and learn from everybody, and let me tell you, people: this is just the beginning.
A
Every week we bring you a new set of presenters to showcase how to work with cloud-native technologies. They'll build things, they'll break things, and they'll answer your questions, so join us every Wednesday at 3 p.m. Eastern time. This week we have Michael and Sirish, and they're here to talk to us about Kubestr. And, as we mentioned in the video, join us for KubeCon + CloudNativeCon Virtual EU, May 4th to 7th, to hear the latest from the cloud native community. Just to note, this is an official live stream of the CNCF and, as such, subject to the CNCF Code of Conduct. Please don't add anything to the chat or questions that would be in violation of that code of conduct; basically, just respect all your fellow participants and presenters. And with that, I'll hand it over to Michael and Sirish to kick off today's presentation.
C
Yeah, thanks, Bill. Just a bit of a brief introduction: I'm Michael Cade, a senior global technologist here at Kasten by Veeam. I'm excited to get into what Kubestr is and how it works. Sirish, why don't you introduce yourself?
D
Sure, I'm Sirish. I work at Kasten by Veeam as well; I'm a software engineer there. I'm glad to present Kubestr to you guys.
C
So one of the key parts here is that, one, it's going to identify the storage that you have available within your cluster, and we'll get into what that demo looks like shortly. Two, it's going to validate: in this particular instance, it's going to validate that your CSI driver is configured correctly, i.e. that the snapshot functionality is up and running and working. And then, from an evaluation point of view, it's going to enable us to benchmark your storage, or a particular storage class, with FIO, another open source project that enables us to get some benchmarking stats out of your storage. And yeah, why not? Let's jump into it, Sirish, and actually jump into a demo just to show everyone what it is and how it looks.
D
Sure thing, everyone. Let me share my screen.

D
So kind of where Kubestr came about is, you know, we've had customers, and the customers generally have needed Kubernetes and have had a tough time figuring out their storage options. So we built this tool in-house and then realized that maybe there's a benefit to the community, and the community could actually use this tool and gather some insights into their storage.

D
So I downloaded the tool here, Kubestr, and normally what somebody would do when they want to see what storage they have is run something like kubectl get storageclass, and it would tell you the storage classes that you have on your cluster.
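The two commands just described can be sketched like this; the kubectl command is standard, and running the kubestr binary with no arguments is how the baseline checks are kicked off in this demo (assuming the binary was downloaded to the current directory):

```shell
# List the storage classes Kubernetes knows about: name, provisioner, etc.
kubectl get storageclass

# Kubestr's baseline run: checks the Kubernetes version, RBAC,
# the aggregated API layer, and the provisioners it can find
./kubestr
```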
D
This is a DigitalOcean cluster, so it has one storage class, do-block-storage, and this is the driver, or the provisioner. But that doesn't tell you a whole lot. If you run Kubestr on this cluster instead, it does a little bit more. The first thing is it checks that you have a valid Kubernetes version, it does some RBAC checks, it checks whether the aggregated API layer is enabled, I mean accessible, and there are some warnings here that come with the CSI.

D
The warning is there because CSI has been rapidly changing in the recent year or two. And now we have more information about our driver. Initially we just had a storage class, but now we headline it by saying this is the provisioner that you have on the cluster, and it points out that it's a CSI driver. It'll also tell you some additional information about the driver itself, what kind of features it supports, and then it points out the storage class, which is what we saw earlier, but also that there's a volume snapshot class here. And once you have both of those, you should be able to take a volume snapshot and hopefully restore from that volume snapshot. So Kubestr has those checks, or has the ability to validate whether your provisioner is set up to take snapshots correctly.

D
All right. DigitalOcean does a good job: when you create a cluster, it sets up all this infrastructure for you correctly. But for a lot of provisioners it's not always straightforward to set up that kind of snapshotting capability. There are many steps involved, you have to install many different types of objects, and even after all that you may still have some errors, and if you're new to Kubernetes it's really hard to debug those errors. So we thought this would be a handy tool to validate that.

D
So let's go ahead and run this csicheck, just to look at the options that we have here. In order to run this check, what are the things that you'd normally do? You create an application, which is a pod and a PVC. Then you take a snapshot of it, and that creates a VolumeSnapshot object, and then eventually you can restore using that VolumeSnapshot object. The cleanup flag lets you clean up those specific objects, and you can always run it in a different namespace that's not default. The required things are the storage class and the volume snapshot class. So let's run this check really quick.
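A sketch of what that invocation might look like; the -s and -v flag names are my recollection of the Kubestr README rather than something stated in the demo, so verify them with ./kubestr csicheck --help:

```shell
# Validate that a provisioner can take and restore CSI snapshots.
# Both names are placeholders for whatever your cluster reports via
# "kubectl get storageclass" and "kubectl get volumesnapshotclass".
./kubestr csicheck -s <storage-class> -v <volume-snapshot-class>
```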
A
And we also have a question in the chat from Carlos: is there any application for this tool to back up etcd?
C
Hopefully that answers your question, Carlos. I think this was another area where we thought, from a Kasten by Veeam point of view, that by someone coming in that wasn't affiliated to the storage vendors and the storage market, it actually allows us to create something that is more universal across all storage arrays, but also public cloud storage. But yeah, this isn't really focused on the backup of anything.
A
Okay, so maybe I have a question that might help the audience understand a little bit more. Could you maybe explain why someone would need to benchmark their storage in their cloud?
C
You would create your own set of tools to be able to go and run this test, and I think what we've found is that it's a very manual process, unless you know the depths of both Kubernetes and how you create a pod, create a PVC, create a persistent volume, attach it, and then allow your application to run.

C
There are people coming from a virtualization point of view, or a storage point of view, that maybe don't know Kubernetes well enough to spin up that pod and the PVC and so on, and this is just taking one of those tedious, potentially long-winded processes away and automating it for you. I'll get into a little bit more about the challenges later on as well, in between Sirish's demos, around things like making sure that we choose the right storage, especially when we've got so much choice.

C
It's kind of overwhelming, in that you've got various different types of storage, and I think this also helps narrow down that choice. It's great to have that choice, we're not complaining about that, but being able to choose the right storage option for the workload that you need is important, and to do that as fast as possible. Ultimately, especially in the public cloud, you'll pay for that storage, and if it's under-provisioned, obviously you're not going to have the performance that you need; if it's over-provisioned, well, you're definitely going to be paying more money, or over the odds, for it. So there are a few more challenges in there and I'll get to them. But I think if we jump back into the demo, Sirish, you're probably going to answer some of those as well, so yep, we'll get back to it.
D
No, we can answer these. I see the one from Rajesh asking: are there any limitations? There aren't really any limitations. To do the CSI snapshot-restore check we obviously need a CSI driver, and then, in terms of running the FIO check, as long as it's on Kubernetes and it's a Kubernetes storage provisioner, we should be able to run the FIO test against it. So yeah, there are no real limitations; it just needs Kubernetes.

D
That part is outside Kubestr's scope, but if you were able to connect SAN storage to a Kubernetes cluster, then you would be able to validate its performance and whatnot using Kubestr. It won't help you actually connect it, but it can definitely help you validate that you're seeing the performance that you want.
C
Yeah, and I think there might have been a broader question from Deepak as well: is it possible to connect on-prem SAN storage in one location to a public cloud Kubernetes cluster, say in AWS or in Microsoft AKS or something along those lines? And I guess the answer is yes, if there's connectivity in place. So it can be done.

C
There are some storage vendors out there that offer close-to-cloud type offerings, but this tool is not going to do that configuration for you; you're going to require that Direct Connect type situation, that connectivity. What Kubestr can do is validate that it's been configured correctly from a CSI point of view, and confirm up front that it is actually there and configured as a storage class within your cluster.
A
Cool. The questions keep coming in fast, so it's pretty popular. Does Kubestr extend beyond the standard CSI benchmarking checks, maybe something that is available out of the box?
D
What do you mean by CSI benchmarking checks? Let's put it that way. The kind of benchmarking that Kubestr does is storage performance; it's performance benchmarking. So whether it's CSI or a native in-tree provisioner, it's still all just storage and we treat it all the same. But is there anything more specific, Simone, that you had in mind about CSI benchmarking checks?
D
Of course. So, like I said, where we left off was we wanted to run a check to see if we can take a snapshot and restore from a snapshot, so that's basically checking if this provisioner is set up correctly.

D
It's waiting for this pod to become live, and once it's live it'll take a snapshot, and you should see a VolumeSnapshot object getting created here. And like I said, if you're used to Kubernetes these are not very complex things; these are things that Kubernetes operators do on a day-to-day basis, taking a snapshot, restoring their application. But if you're new to the landscape, it can seem like a bunch of different steps just to validate that you can take a snapshot correctly.

D
So hopefully this gives you that one-command tool that creates an application, takes a snapshot, restores it, and then validates that the data that was in the application originally is still there after a restore. So it does that end-to-end workflow without all the various steps it takes to do that manually. So that's just the csicheck tool. We also have the fio tool, so let's go ahead and run that.
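Kicking off the FIO benchmark described here might look like the following; the -s flag is from my memory of the Kubestr README, and the storage class name is the one from the DigitalOcean demo:

```shell
# Run the default FIO benchmark (4K and 128K random reads/writes)
# against the demo cluster's storage class
./kubestr fio -s do-block-storage
```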
D
I've seen this test take maybe around 40 seconds or so. So the pod spins up, and essentially the application here is an FIO application, and we have a 100 GiB PVC and it's running this FIO test. The default FIO test that we have here does 4K random reads and writes, and then 128K random reads and writes, so it's actually four separate tests, and you will see the output in a second.

D
So it shows you that it took around 30 seconds to run this test and the four different jobs that it ran, and let's maybe keep note of these numbers.

D
Let's say around 1900 read IOPS and 1300 write IOPS at 4K. They seem like reasonable numbers, and maybe you're like, "hey, that's good enough for me, I'll go with DigitalOcean for my application." But maybe you want to try another cluster, and I have another cluster here. Let's see.

D
Carlos: FIO stands for Flexible I/O Tester. So again, this looks a lot like what we had on DigitalOcean: it shows you the Kubernetes version, our RBAC checks are okay, the aggregated API layers are working correctly. But here I have two different provisioners: the CSI provisioner and the in-tree provisioner. You'll see the CSI provisioner has a volume snapshot class, and the in-tree provisioner doesn't have that kind of functionality, so you won't see that there. In this case we ran it on the CSI provisioner; let's run it on the in-tree provisioner this time. We're on SSD, hopefully the fastest storage option they have. Again, we can follow along.
D
Sure. So if you had an application, and maybe it's writing some information to a database, storing some records, you want to know if your storage can handle the bandwidth of your application. Maybe you have thousands of users; can it actually handle thousands of users writing data to your database at the same time?

D
That's something where you need to figure out what your I/O requirements are for your storage, and then you can figure out the right kind of storage to use. So maybe this will help you say, "hey, I cannot use slow storage like standard spinning disks, I need high-speed flash storage," and you maybe want to benchmark that. So that's a use case.

D
I'm sure there are other use cases out there, but off the top of my head that's what I could think of. Does that answer your question?
A
Yeah, absolutely, I think it totally makes sense. You don't want to have something that doesn't work for your application and gives a bad user experience. Seems like we have a couple more questions in the chat now too.
D
I'm not sure what hdparm is, but Kubestr is not necessarily going to optimize your storage at all; it's just to validate that your storage is fast enough for your application, or that it is suitable for your applications, put it that way.

D
I'm not familiar with that, but it's actually running the FIO application here. Maybe I can post a link to FIO.
C
Yeah, I think hdparm is...
A
Yeah, cool. There are a couple more questions in the chat now. Is there a way to benchmark volume provisioning and attachment with Kubestr, maybe to compare multiple CSI implementations from the same storage provider?
D
Yes. So if I understand the question correctly, you're asking: is there a way to run FIO with Kubestr, or to check if the snapshot functionality is working? You can do both, and you can do them with multiple CSI providers, whatever CSI providers you have in your cluster. So yeah, to answer your question: yes, it's possible.
A
Cool, and it looks like we also have an update from Joe about hdparm: it's more about getting the hardware parameters of the physical hard drive. So it looks like that's more on the hardware side, where Kubestr is more on the software side. Is that right?
D
Kubestr is on the software side, yeah. Joe, I'll take a look into hdparm and see if there's some sort of benefit that Kubestr can provide using it. And that's part of it: it's still kind of in its infancy, and we want to see what other tools we could add to this little toolbox to make it kind of a go-to for all storage needs in the cloud.
A
Yeah, and I can send a link; the link is here in the conversation if people are interested.
D
That's the direct link to FIO. So cool, if we get back to the demo really quick, you'll see that the IOPS we're seeing here are nowhere near what we experienced with DigitalOcean, and that doesn't mean that Google is worse by any means. It's just that maybe we haven't configured it correctly. And what you'll notice, if you look at Google's storage documentation, here, let me post a link...

D
That'll tell you what you can get out of your storage, and what they say is, if you use bigger volume sizes, you're going to see better IOPS. So our Kubestr test has the ability to specify a size. Our default is 100 GiB; let's try something bigger, let's do 300 and see what that gives us instead of 100 GiB.
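The size override being described might look like this; the -z flag name is my recollection of the Kubestr README, so check ./kubestr fio --help (the storage class is a placeholder):

```shell
# Bigger volumes tend to get more IOPS on GCE persistent disks,
# so re-run the benchmark with a 300Gi PVC instead of the 100Gi default
./kubestr fio -s <storage-class> -z 300Gi
```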
D
This one actually runs a little bit quicker than the previous one because, like I said, the bigger the volumes in Google, the better performance you're going to see. It takes about 40 seconds.
C
I think that's another thing to mention, Sirish, as well: this is testing block I/O, testing the IOPS within the storage that's available to you within your Kubernetes cluster. And as Sirish just said, changing the size of the persistent volume or persistent volume claim dictates what IOPS are available to you, but also think about the underlying node architecture.

C
Sirish has done an awesome CNCF blog on this; it's not a comparison, but it just goes to show the overwhelming choice that we have out there from a compute and storage point of view, and that no one is the winner across the board on performance. It does show that there are so many options here.
D
I just sent you the link, yeah, and you posted it out there. It's a link to the blog, and I go through the same kind of test among the four different cloud providers, DigitalOcean, GKE, Azure and AWS, and kind of compare and contrast, and see what tweaking little things here and there between the different providers gives you.
A
There you go. So it seems like there's a question from a user on LinkedIn: what are the best third-party storage backup solutions for AKS and Azure? And it seems like you could use Kubestr to help figure out exactly what your application parameters are, and which storage is best to meet the needs of that application. Is that right?
D
Yep, precisely. Like I said, it's hard to say what the best storage is without knowing what your application is. It really depends on your application and your budget, and then you can pick what storage works for you. And no, it's not writing 300 gigs of dummy data; it's a 300 gig volume, and I think the file sizes here are two gigs.

D
So it's two gigs of dummy data on a 300 gig volume, but the size of the volume itself dictates the kind of performance you're going to get, at least with Google.
C
Yeah, that's a good point as well. Out of the box, the default FIO test that Kubestr uses is a random read and random write at 4K block size as well as 128K block size, but there's a huge library, a repository of FIO configuration files, that you can bring to the test as well. So Sirish is there showing what that FIO configuration file looks like, so if you know that your workload works at an 8K block size, or you need to walk the file system in a reverse manner...
D
So yeah, you don't have to use our default test; it really comes down to your application. If you know your application's FIO signature, what your load looks like in terms of an FIO file, you can pass that FIO configuration in here, and then we'll run that particular FIO test.
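As a sketch of that, here is a hypothetical FIO job file for an 8K random-read workload, in standard FIO job-file syntax; the -f flag in the comment is my recollection of the Kubestr README, not something shown verbatim in the demo:

```shell
# Write a hypothetical FIO job file describing the workload's signature
cat > myworkload.fio <<'EOF'
[global]
ioengine=libaio
direct=1
bs=8k
runtime=30s
time_based

[randread-8k]
rw=randread
EOF

# Pass it to Kubestr in place of the default tests, e.g.:
#   ./kubestr fio -s <storage-class> -f myworkload.fio
```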
D
What else? We can also write all this output out as JSON. So if for some reason you are scripting, where you want to run multiple FIO tests, or whatever your needs are, you can easily get this output back as a parsable JSON file, and then maybe collect some data that way, or, if you're trying to make some pretty graphs or something like that to showcase something about your data, that's what you can do.
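The scripting workflow might be driven like this; the -o json flag is an assumption based on my memory of the Kubestr CLI, so confirm it with ./kubestr fio --help before scripting against it:

```shell
# Emit the benchmark results as machine-readable JSON so a script
# can collect them across runs or feed them to a graphing tool
./kubestr fio -s <storage-class> -o json > results.json
```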
C
I was going to cover some of the... okay, so there's another question: do you plan to have a CRD and a controller for Kubestr, so that it runs directly in the k8s cluster and is declarative?
D
Yeah, there were plans for that initially. For now, like I said, it's in its infancy and it is what it is right now; it's just an executable. But down the line we could make it a CRD-based operation. These are all things that we have in our pipeline.

D
We also wanted to make this more of a community-based tool, so we really wanted to get people pitching in, running their own FIO tests and telling us what their results are. So maybe a leaderboard or something, where other people can go and get a better understanding of how other people's storage is performing.

D
We're hoping to move in that direction, where we have more community involvement or community participation, but like I said, right now it's in its infancy. We just want to see if it's actually useful for people before we take that next leap and do some cooler stuff with it.

D
I'd suggest starting off by going to kubestr.io, installing it and running it. We're also open source, so go to our GitHub page; there are links for that on the website as well. Fork the repo, feel free to make some changes.
D
Sure. Like I said, the first few things are additional tests that may be useful. Some people want to be able to benchmark their object storage, so that was something I was looking into; this is persistent storage, which is not the same as object storage, so maybe we could foray into that, venture into that a little bit.

D
But like I said, the other bigger thing is to make this a more community-driven project: have some sort of a leaderboard, or some sort of place where people can come. You don't need 100 people running the same test 100 times each if they can just go and look and say, "hey, I have that exact type of infrastructure, I can see what kind of results I'm going to get." It makes that decision-making process easier. Obviously we'd like to get more involvement from some of the bigger storage vendors; maybe have them write specific FIO tests that say, "hey, if you're looking at a certain type of application, this is an FIO test that would benefit you, so you should run this particular test," for example. So there are a couple of different options on our roadmap; like I said, we're still kind of seeing what the most important need is right now and then moving on from there.
C
Yeah, I'd just add to that point as well, sorry, Sirish. So we mentioned storage providers, storage vendors; that obviously makes sense. But also database vendors as well: they're going to have a better understanding of what their database looks like at load.

C
Obviously that's not going to reflect every single individual customer of theirs, but if we can team up with database vendors and storage vendors and get a much wider view of what those database requirements look like, and start putting specific FIO tests around that to understand it, and then have a library of these results...

C
I think that becomes very useful. Over the last week that this has been released, I've been calling it the easy button for understanding your Kubernetes storage, and it really is. As Sirish very easily went through, it's a super simple way of being able to test against your storage, and it's a lot quicker than having to manually go and create pods, create PVCs, create PVs, and then understand how the application is actually going to run against those.

C
This is literally: download it to your macOS, your Linux box or your Windows box from GitHub, and get going. It's as simple as that. And then, to go back to the roadmap point, the other thing that would be nice, I think, around that leaderboard would be something around visualization. How do we visualize?
D
And yeah, to that point: right now we just give you the FIO results. Maybe down the line we could do some more analysis on the FIO results and tell the customer, or tell the user, what is impacting their performance; maybe you have a bottleneck because of the storage size or something like that.

D
That requires a little bit more understanding of the storage and of the results of the FIO test itself, but down the line we could see ourselves giving you a more descriptive output than just "here are your FIO results."
A
Cool, and it looks like there are actually people trying it out right now as we're on this. So: "I got this error message when Kubestr pulled the image: failed to pull image, user cannot be authenticated with the token provided. Is there a quick way to allow Kubestr to pull the image?" Some live debugging.
D
Yeah, I wonder what image it's struggling to pull, because the Kubestr images, I believe, are all publicly available; I think they're on Docker Hub. "Cannot be authenticated with the token provided"...

D
It's kind of hard to tell without a few more details. Do you want to maybe add some more details on our GitHub, create a GitHub issue and add some more details there, and I'll help you debug that in time.
A
Yeah, definitely. And it's cool to see people actually trying it out as we're live right now.
A
Going back to the leaderboard a little bit: just checking, have you seen any fluctuations in the IOPS over time from the different cloud providers? So if we do benchmarking and have a leaderboard, would it change over time, would it continually update? What have you seen so far?
D
Well, like I said, right now we don't have enough data to see if there have been any trends. But the cloud providers have so many different offerings; it's not always like, "hey, I have one type of storage and one type of node." They have nodes optimized for storage, or they have faster storage and slower storage, so you will see the cloud providers offering faster or better storage options.

D
As they progress as well, you'll definitely see fluctuation in what the cloud providers offer; they're trying to be competitive amongst each other as well.
C
The other thing I was going to mention was around CSI, especially in the public cloud. I've been playing around a lot, and I think, Sirish, you mentioned it: DigitalOcean out of the box is CSI-first. They deploy with CSI, you don't have any in-tree provisioner, which we know is going to become the norm; I think it's in the next version of Kubernetes, maybe even the one after, but it's very much CSI-first as we move forward.

C
And it potentially won't take that configuration route. I would say the pain point, as a newbie to this world, is the CSI driver implementation: having to install the driver is a little bit of a challenge at the moment. It's a manual task at the moment.

C
So clearly I'm the error there, but there's a task that you have to go through, and I've found that Kubestr gives a really good, quick answer on whether there's something wrong. In Sirish's demo he showed you all the good stuff: it says okay, the CSI snapshot was successful, and he could restore back to a clone. I've seen many more of the errors that come with that, where the snapshot wasn't successful, and it's just really short and sharp, to the point.

C
It lets me know what the problem is, and then we go and troubleshoot it. It's generally going to be down to you, probably your IAM or your secret credentials, at least in AWS, and again probably user error. But as we're in that transition period of CSI becoming the norm, this is going to be an awesome tool moving forward, and it's not just for that day-zero cluster rollout.

C
Today I was going to show the same thing. I've got an AWS cluster that I could jump into, but it's going to be very much the same stuff that Sirish has shown, so I don't know if we need to go in there.

C
I think the other thing, and Sirish might have touched on it, is that as long as you've got kubectl access to your cluster, that's where Kubestr will run. It will use the kubectl context to run these tests, so if you don't have kubectl, Kubestr is not going to work. Also, if you've got many different contexts, like Sirish showed, you're going to have to jump manually between those before you can run Kubestr against the other clusters.
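The context switching Michael describes is plain kubectl, for example:

```shell
# Kubestr runs against whatever the current kubectl context points at
kubectl config get-contexts

# Switch to another cluster's context before re-running the checks
kubectl config use-context <other-cluster-context>
./kubestr
```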
C
I think I was just going to show kubestr.io as the first resource, because on there is a very brief description of what we touched on at the very beginning, around identifying, validating and evaluating your Kubernetes storage and hitting that easy button, and it touches on that CSI challenge that I just mentioned. Then there's a short demo from Sirish on how to use it, and then some steps on where to grab the source files.

C
And then there's also the Slack that we've just started, so it's very fresh in there. This has only been rolling out since the first of April, so it's really, really new, and yeah, we're looking forward to the feedback from the community and to driving what this could be.
D
No, I think you covered most of it. Yeah, like you said, I'm excited to put it out there, excited to have people actually using it, and I really hope it benefits people and they can see its value and eventually contribute to it. So yeah, looking forward to that.
A
Okay, with that, thanks everyone for joining the latest episode of Cloud Native Live. It was super great to have both Michael and Sirish with us to talk about Kubestr. I'm really excited to see how this project grows and the community that comes up around it, and really to see those benchmarks, so you can track storage performance over time and see if what you pay for is what you actually get. I also really loved all the interaction that we had from the audience; there were tons of questions coming in.

A
Super excited to see that people were even trying it live as the session was going; super fun to see the interaction, and yeah, looking forward to seeing the audience again. Just a reminder: we bring you the latest cloud native content every single week, every Wednesday at 3 p.m. Eastern. And if you haven't gotten your ticket yet for KubeCon, there's a code for you to do it. So yeah, see you around the cloud native community. Thanks!