From YouTube: TGI Kubernetes 134: CSI
Description
Join Josh Rosso as he explores some cool stuff in the Kubernetes space! We'll start the episode out covering what's new in the cloud native space then get into some exploration! In this episode we'll be digging into provisioning storage for our workloads. We'll dig into CSI, which enables us to plugin drivers that can interact with underlying providers.
github.com: https://github.com/vmware-tanzu/tgik/tree/master/episodes/134
00:00:00 - Welcome to TGIK!
00:04:19 - Week in Review
00:19:57 - Local Provisioning and Primitives
01:00:11 - Intro to CSI
01:15:22 - Installing and Using a CSI Driver
01:51:21 - Goodbye!
Hey everyone, happy Friday, welcome to TGIK 134! So glad to have you all with us. As usual, if you're joining us, or maybe if you're new to TGIK, feel free to say hey in chat, and where you're signing in from if you feel comfortable doing so. Looks like we've got some familiar faces already. Waleed, great to see you. Haim from Israel, great to have you. Ibrahim from Tunis, Tunis in northern Africa, right? Great to have you. Rodolfo, welcome! Welcome, John Che. Glad you all are with us. Maddie, how you doing? Thanks for joining us as always. Christopher, Kristoff, sorry, from Germany, adding to the Germany population we have here on TGIK, glad to have you. Paul, hey Paul, nice to see you, thanks for joining us today. Martin from the Netherlands, glad to have you as always, Martin. Wujay, welcome. Tim, welcome back. We've got Fully Geared Bear today. Steve, it's great to see you, Steve, it's been too long. Morteza from Tehran, welcome, glad to have you.
We've got Daniel from San Jose. We've got Peter from Poland. Welcome, Jim from Manchester, and Lewin from the UK. I think there are just a lot of European folks joining us today, which is awesome, glad to have you. Marat, Dublin, CA. Dublin, CA, cool, cool. Fully Geared Bear asks: are you in IKEA? No, I'm in the title card.
A
For
today,
I've
I've
take
made
it
a
bit
of
a
habit
to
take
the
background
from
the
title
card
and
put
it
into
into
my
into
my
background
here:
jason
thanks
for
joining
us
mahmoud
from
new
york.
Glad
to
have
you
mona
from
germany
thanks
for
joining
us
mona
radha
from
scottsdale
great
to
have
you.
We
got
aviv
from
israel,
we
gotta.
A
We
got
a
good
amount
of
folks
from
israel,
too
very
cool
yusuf
from
morocco
and
yes,
north
africa,
good
good
and
we've
got
vijay
from
london
all
right
and
yes,
I
do
have
a
cat
paul
with
me
right
now.
My
my
girlfriend
and
I
are
watching
a
cat
right
now
and
she
jumped
on
the
desk
as
it
started.
So
if
my
green
screen
falls
over
during
this
episode,
you'll
know
you'll
know
what
caused
it
so
say:
hey.
A
This
is
jane
hello,
so
she's
she's,
just
as
stoked
learned
about
storage
today,
as
we
all
are,
I'm
sure,
but
I'll
I'll,
put
her
down
for
now
and
see
and
see.
If
we,
if
we
have
any
luck
with
her,
not
tearing
things
up,
can
we
pivot
the
show
just
to
the
cat?
I
think
that's
a
good
idea,
paul
it
honestly
it'd,
probably
get
more
viewers.
Knowing
the
nature
of
youtube
seems
like
a
very
good
idea.
A
Boxes
are
storage
and
cats
like
boxes
a
lot,
I'm
learning
that
they
also
like
paper
bags
a
lot.
So
it's
been
an
interesting
and
interesting
week.
So
again
thanks
everyone.
So
much
for
joining
us
welcome
to
episode
134..
This
will
be
a
fun
one.
I
am
especially
psyched
about
this
one
because
we'll
be
laying
a
bit
of
kind
of
foundational
knowledge.
Today
I'd
say
this
episode
will
be
a
little
bit:
tech,
independent
and
a
little
bit
more.
A
Like
the
you
know,
the
grocking
episodes
that
we're
used
to
duffy,
doing
where
we,
where
we
kind
of
talk
about
a
concept,
talk
about
how
the
pieces
work
and
it's
funny.
I
was
scrolling
through
our
tgik
episode
so
far
and
we're
pretty
we're
pretty
light
on
storage.
We,
in
fact
I'd
even
say,
like
we
have
an
episode
on
csi
with
secret
integration,
and
we've
got
a
couple
things
as
it
relates
to,
I
think,
like
databases
and
stuff,
but
not
a
ton.
A
So
I'm
hoping
this
can
kind
of
lay
the
foundation
and
then
maybe
we
can
explore
some
of
the
really
cool
stuff
that
I've
wanted
to
check
out
for
a
really
long
time.
For
example,
open
ebs
has
a
lot
of
really
cool
stuff
with
their
with
their
container
attached
storage
solutions
and
how
they
make
use
of
zfs
to
bring
in
even
more
kubernetes
native
solution
to
to
container
storage,
which
is
really
exciting.
A
There's
tons
of
cool
stuff
with
databases
going
on,
I
mean
you
know
long
story
short
folks
are
bringing
more
and
more
stateful
workloads
to
kubernetes.
I
think
we'd
all
agree
right
and,
as
that
happens,
it'd
be
cool
if
we
dove
into
some
more
of
those
topics
in
tgik.
A
So
here
we
are,
and
we've
got
nordine
joining
us
from
acharya
welcome
and
all
right.
Let's
talk
a
little
bit
about
what
we
got
going
on
this
week.
Everyone,
so
probably
the
most
exciting
thing
is
that
the
kubecon
and
cloud
native
schedule
is
out
any
chance.
We
have
someone
in
chat
who
got
a
talk
in
this
this
session.
A
If
so,
you
should
you
should
let
us
know
in
chat
and
let
us
know
what
the
name
of
your
session
is
too,
but,
as
always,
there's
an
insane
amount
of
amazing
talks,
and,
as
always,
I
myself
will
have
to
spend
endless
amounts
of
hours
on
youtube
after
the
event,
because
that's
how
I
actually
get
all
the
talks
that
I
wanted
to
see,
I
don't
know
about
you
all
but
like
without
the
youtube
pause
button
and
the
ability
to
flip
through
some
of
these
sessions.
A
I
think
I'd
be
dead
with
with
too
much
knowledge,
so
cool
cool
josh
get
a
cat
dancer
from
two
dollars
on
amazon.
Okay,
I
will
look
into
that
maddie
because
I
do
need
something
to
keep
her
occupied,
I'm
guessing
whatever
a
cat
dancer
is.
It
would
do
that
which
would
be
pretty
fantastic,
rory
you're
doing
the
sig
honk
panel
on
the
last
day,
with
duffy
ian
and
brad
congrats
rory
that'll
be
a
blast
to
check
out.
A
I
can't
wait
to
can't
wait
to
see
it
all
right
and
paul
you're
saying
that
you're,
the
storage
co-chair,
but
your
talk
didn't
make
it
okay
cool,
so
I'm
guessing,
I
don't
know
paul.
Would
you
be
doing
like
any
kind
of
panel
stuff
on
the
storage?
Oh,
okay,
sorry,
the
storage
track
co-chair.
I
see
I
see
so
you'll
be
responsible
for
making
sure
that
thing
doesn't
fall
apart
on
the
storage
track,
I'm
sure
there's
gonna,
be
some
amazing
storage
talks
all
right
cool.
A
So,
let's,
let's
continue
on
check
out
the
schedule
if
you
haven't
and
if
you
submitted
a
talk
like
myself
and
apparently
like
paul,
don't
be
discouraged.
If
your
talk
didn't
get
accepted
this
time
there
there
is,
you
know
it's
so
much
fun.
As
someone
who
spoke
at
cubecon,
it's
like
my
favorite
event
to
speak
at
it's
with
my
favorite
piece
of
technology,
my
favorite
ecosystem.
A
So
I'm
sure
a
lot
of
you
are
psyched.
Keep
it
going.
You
know,
keep
driving
it
getting
a
talk
in
there
and
and
we'll
we'll
look
for
your
name
on
the
next
schedule.
A
Can't
wait
all
right,
so
my
favorite
news
ever
is
that
it
was
a
boring
week
in
the
core
which
I'm
not
saying
that
that's
a
bad
or
good
thing,
but
more
so
that
there's
probably
just
small
things
that
happened
in
core
this
week,
probably
things
that
are
important,
but
we
don't
see
them
day
to
day,
and
you
know
it
also
is
ahead
of
kubecon,
so
that
kind
of
makes
sense
right,
we're
all
scrambling
a
bit:
okay,
rootless
mode
enhancement,
security,
people,
you're,
probably
pretty
psyched,
to
see
this
kind
of
stuff
being
talked
about.
A
So
a
lot
more
interest
in
you
know
a
more
formal
kep
around
what
it
means
to
run
in
a
more
secure
mode,
so
do
check
out
this
enhancement
description
inside
of
here
we've
got
information
about
the
poc.
We've
got
the
pr
for
some
components
like
cubelet
and
cube
proxy
in
rootless
mode.
A
I
feel
like
this
is
one
of
those
topics
we
just
talk
about
forever
and
now
you
know
with
the
kep
model
and
all
that
good
stuff
we're
getting
a
step
further,
a
step
further
in
bringing
some
of
these
core
kubernetes
components
into
a
rootless
mode.
So
good
stuff
can't
wait
to
see
if
this
gains
some
traction
and
then
once
it's
all
rootless,
maybe
we
can
coerce,
duffy
and
or
ian
to
come
back
on
and
talk
about
how
cool
this
is
so
very
cool.
The release cycles go faster than I could ever keep up with, so do check out some of the 1.20 alpha features and all that goodness. As 1.20 progresses, we will do more and more stuff on TGIK around some of those cool features, so can't wait. Marat says, "you're so rootless." I like that. All right, cool, cool, so good stuff this week.
A
Nothing,
nothing
too
crazy,
although
this
highlight
in
and
of
itself,
is
a
pretty
big
deal
so
check
out
the
check
out
the
schedule
for
for
cubecon
all
right,
but
we
do
have
some
ecosystem
stuff
going
on.
Let's
talk
a
bit
about
it.
This
is
actually
something
that
is
owned
by
vmware
and
some
folks
that
are
on
my
team
currently
that
work
on
this.
A
So
we
have
launched
a
I
don't
know
if
a
version
of
cube
academy
is
the
right
way
to
put
it,
but
an
area
in
cube
academy
called
cubic
01
and
then
get
kind
of
into
some
more
deep
topics,
and
as
that's
trended,
deeper
and
deeper,
there
has
certainly
been
a
need
to.
You
know,
expand
the
scope
and
talk
at
a
more
deep
level
about
networking
and
all
kinds
of
cool
stuff,
so
kubernetes
cube
academy
pro
has
been
announced.
It's
got
some
cool
first
sessions.
We've
got
some
cool
stuff
about
operational
considerations.
A
We've
got
a
networking
course
check
it
out.
It's
all
free,
it's
really
cool
and
if
anything,
this
is
pretty
cool
joe
beta
to
launch
off
one
of
our
first
events
under
the
pro
umbrella
is
going
to
do
and
ask
me
anything.
So
if
you
want
to
get
on
and
ask
joe
questions
like
joe,
why
yaml
that
will
be
your
that'll,
be
your
prime
opportunity
to
ask
those
those
revolutionary
questions
so
do
check
it
out.
A
Cool
all
right
looks
like
I
got
blurry
for
a
second
but
chat,
I'm
guessing
you
all
can
hear
me.
Okay
again.
Let
me
let
me
know
if,
if
all
is
well
on
my
on
my
stream,
I
don't
know
what
what
got
us
so
blurry
for
a
sec
there,
okay,
cool
working
now!
That's
all
that
matters
all
right.
So
this
is
a
a
little
comic
that
george
and
I
were
laughing
at
before
the
episode.
A
So
we
figured
we'd
share
it
with
y'all,
so
vmworld
just
ended
as
as
some
of
you
might
know,
if
you're
in
kind
of
the
the
vmware
ecosystem
in
any
way
and
in
a
galaxy
far
far
away
a
team
struggles,
deploying
kubernetes
and
then
you
know,
the
ceo
or
cto
comes
back
from
a
nice
session
at
vmworld
and
says:
hey.
I
learned
that
we
need
a
service
mesh
in
prod,
which
I
have
I've,
seen
the
story
play
out,
I'm
not
going
to
lie.
A
It's
not
just
a
hypothetical
funny
thing,
it's
it's
kind
of
true
and
then
the
devs
cry
out
what
the
sres
are
confounded
about,
seriously
you're
going
to
do
this
right
now
and
then
they
all
sink
in
the
sand
and
someone
asks
who
gets
the
ping
pong
table.
So
I
thought
this
was
pretty
hilarious.
This
is
a
good
find,
george
and
also
it
is
hilarious
because
it's
true
all
right,
anyways,
okay,
so
we've
got
some
yeah.
I
think
I
think
paul
service
mesh
is
definitely
the
new
blockchain
or
blockchain's
the
new
service
mesh.
A
I
don't
know,
I
guess
I
guess
more
of
the
former
and
steve
yeah
same
with
me
every
company
I
go
into
these
days.
It's
like
we're
doing,
service
smash
and
it's
always
like
okay
back
up,
let's
just
make
sure
now
is
the
right
time.
You
know
it's.
It's
kind
of
a
nuanced
conversation,
so
your
cto-
oh,
that's
one.
I
just
did
tech
world
with
nana,
is
doing
a
video
tutorial
series
on
prometheus.
A
I
had
not
checked
out
this
youtube
channel
before,
but
it
looked
like
it
had
some
good
stuff
around
programming,
kubernetes,
all
kinds
of
container
stuff,
so
there's
a
new
series
on
prometheus
and
if
you're
like
me
and
your
prometheus
chops
are
not
where
they
probably
should
be,
this
could
be
a
good
series
to
check
out.
So
we've
got
a
link
to
it
in
the
show
notes
do
be
sure
to
check
it
out.
A
We've
got
a
article
from
ibram
on
collecting
the
ckss
exam
preparation,
materials,
pretty
freaking
cool,
and
this
is
you
know
like
like
a
lot
of
the
certifications,
one
of
the
ones
that
goes
a
little
bit
deeper,
around
security
concerns,
and
I
was
looking
through
it
because
I
actually
haven't
taken
the
time
to
read
what
the
security
specialist
certification
has
going
on
and
it's
pretty
cool.
It's
it's
a
lot
of
like
the
day,
two
stuff
that
we
we
do
like
you
have
a
cluster.
Now.
A
How
do
you
make
sure
someone
doesn't
just
take
it
over
and
do
terrible
things?
There's
exam
prep
material
in
here
around
setup
stuff.
I
like
how
it's
really
set
up.
It's
got
like
you
know
the
kind
of
key
things
you
want
to
do
and
then
a
breakdown
of
resources
underneath
this
with
links
to
documentation.
A
It
goes
in
from
just
not
only
kind
of
the
basic
fundamental
stuff,
but
into
some
of
the
deeper
stuff
like
minimizing
your
os
footprint
right
doing
things
like
psps
oppas
security
contacts
to
set
your
user
of
your
container
on
and
on
and
on
right,
so
really
really
cool
resource.
If
you're
thinking
about
doing
the
kubernetes
security
specialist
certification,
you
should
check
this
out.
This
is
one
of
those
things
that
can
really
help
you
save
some
time,
bringing
all
your
resources
together.
A
So
pretty
freaking
cool,
all
right,
yeah
and
I
I
don't
know
either
steve
how
the
heck
do
you
get
drop,
downs
and
markdown?
That
is
a
pretty
freaking
cool
feature
and
yeah.
It's
really
cool
to
see
all
right.
A
And
we've
got
jacob
with
deploying
sql
databases
on
kubernetes
somewhat
aptly
timed
here.
So
this
is
a
a
post
about
actually
getting
sql
running
on
kubernetes
and
like
a
lot
of
organizations,
a
lot
of
them
are
in
this
place
where
they're
saying
hey.
I've
got
my
stateless
services
in
kubernetes,
but
what
about
my
stateful
ones?
A
How
do
I
run
something
like
mysql
and
I
think,
as
more
and
more
of
these
articles
come
to
light,
it'll
probably
give
people
more
and
more
confidence
that
they
can
run
their
stateful
services
in
cube
and
hopefully
not
scare
them
away
too
much
with
some
of
the
complexities
but
pretty
cool
stuff.
It's
also
got
content
around
backups,
which
is
obviously
key.
You
know
how
do
we
do
things
like
snapshotting,
you
know,
backup
complexity
with
databases
can
be
especially
weird
because
it's
like,
if
you
do
snapshots
on
an
interval.
A
How
do
you
ensure
that,
when
you
restore
state,
nothing
has
lost?
How
do
you
kind
of
set
up
something?
That's
more
transactional.
If
you
need
that,
there's
there's
lots
of
interesting
architectural
considerations
when
you're
talking
about
backup
so
do
check
out
this
article,
pretty
cool
and
we've
got
a
microservices
kubernetes
demo
coming
out
of
github
george
was
joking
abram
yeah
thanks.
You
know
no
problem
sharing
thanks
for
putting
that
awesome
resource
together.
A
We
really
appreciate
you
putting
in
all
that
time
and
and
do
let
us
know
all
the
all
the
secrets
about
making
drop
downs,
because
that
is
pretty
freaking
cool
cool.
So
this
this
post
here
and
I
should
I
should
shout
out
to
george
too-
who-
who
found
that
and
posted
it
in
so
pretty
cool
stuff,
another
repo
that
george
brought
up
this
week,
micro
services
on
cloud-based
kubernetes-
and
you
know,
as
we
get
deeper
and
deeper
into
cube.
A
I
think
some
of
us
are
getting
just
a
little
bit
tired
of
seeing
that
nginx
demo
or
that
what
was
the
one
you
were
talking
about
earlier
today.
Georgia
was
the
the
card
demo.
Do
you
say
it
card?
It's
like
k-u-a-r-d,
k-k-u-ard
demo.
You
know
they're
all
really
cool,
it's
great
to
show
simple
stuff,
but
in
more
complex
systems,
it's
cool
to
see
all
the
pieces
play
together.
A
You
know
with
an
application
we
deploy,
perhaps
with
some
load
being
pumped
through
the
application,
all
that
good
stuff
right,
yeah,
http,
ben
and
kurd
and
probably
saying
it
wrong.
So
it's
pretty
cool
to
see
these
pieces
come
together.
This
is
what
this
repo
has
in
it
like
it
a
lot.
A
It
not
only
brings
some
of
the
platform
services
together
in
the
app
together,
but
it
also
shows
you
a
way
that
you
can
generate
load,
which
I
would
imagine
would
be
important
when
you
go
into
the
tracing
stack
and
play
around
with
the
you
know,
the
service
mesh
parts
of
it
and
all
that
good
stuff,
so
really
cool
demo.
A
I'm
looking
forward
to
trying
this
I
might,
I
might
give
it
a
give
it
a
run
next
week
and
see
how
see
how
things
go
so
pretty
cool
stuff
thanks
so
much
for
for
bringing
that
together
and
we're
psyched
to
check
it
out.
Noelle
welcome
noel
glad
to
have
you
all
right,
cool
and
steve
says.
Sequan
cates
gives
me
nightmares
from
a
database
as
a
service
project
and
an
ex-client
and
with
kate's
1.6
yeah.
That
sounds
a
little
painful,
sounds
scary,
all
right
and
then
last
thing
that
I
will
call
out
is
the
percona.
A
I
hope
I'm
saying
their
name
right.
The
pekoner
percona
operator
with
open
ebs,
so
pacona
has
a
bunch
of
different
operators
and
they
have
a
new
post,
that's
pretty
short
and
sweet
about
how
you
can
do
the
extra
db
clusters
and
a
couple
other
things
like
mongodb,
with
open
ebs
as
the
back
end.
So,
as
I
had
mentioned
before,
as
we
get
deeper
and
deeper
into
csi.
A
Things
like
this
are
going
to
become
really
really
interesting
right
because
you
know
I
my
my
perspective
is,
is
kind
of
this
and
I'm
not
saying
that
everyone
shares
this
perspective,
but
it's
sort
of
like
we
have
csi.
We
have
a
bunch
of
ways
to
integrate
storage
into
containers
and
that's
super
cool
super
important,
no
question,
but
I
think
there's
definitely
this
opportunity
to
kind
of
re-imagine
storage.
A
So
there's
there's
really
interesting
stuff
going
on
in
the
space,
and
I
think
in
a
future
episode
checking
out
some
of
the
cool
stuff
that
these
folks
have
going
on,
and
the
open
ebs
folks
have
going
on
would
be
a
really
really
cool
way
to
kind
of
dive
into
a
201
style.
Tgik.
So
really
cool,
really
cool.
All
right
and
we've
got
the
details
on
the
foldable
headers.
Oh
see
is
it?
Is
it
actually
html
interesting
cool
all
right,
good
deal
and
noel
says
that
the
bookstore
demo
is
getting
boring.
A
You're
right,
I
think,
maybe
pet
store
and
book
store
are
starting
to
people
are
coming
for
their
lunch.
Let
me
put
it
that
way.
We
want.
We
want
fancier
demos,
I
think
so-
anyways
really
cool
stuff
thanks.
So
much
for
all
these
awesome
finds
this
week,
george,
and
with
that
I
say
we
get
into
it
sound
good.
Let's
talk
a
little
bit
about
storage
inside
of
kubernetes,
so
in
our
kind
of
path.
A
Today,
here's
what
I'm
thinking
we'll
cover
and
if
you've
got
any
additional
ideas
for
things
we
can
look
at
feel
free
to
throw
them
in
chat
and
we'll
either
push
them
off
for
another
episode
or
try
to
mess
around
with
them.
If
we
can't,
but
here's
what
I'm
thinking,
I
think
we
should
talk
about
the
primitives.
Of
course
we
should
talk
about
pvcs.
We
should
talk
about
pvs
and
kind
of
how
they
work
under
the
hood
and
how
everything
gets
wired
up
and
we'll
go
through
a
use
case
with
that.
A
That's
actually
like
kind
of
viable
right.
That
is
the
use
case
of
doing
local
storage.
So
the
idea
that
you
know
the
most
common
case
I
see
is
you
got
a
bunch
of
hosts
and
you've
got
some
storage
pre-provisioned
onto
them.
That's
really
fast,
like
blazing
fast,
nvme,
crazy,
stuff
right
and
you
wanna
very
easily
attach
pods
to
that
storage,
plain
and
simple.
So
we'll
talk
about
that
use
case.
A
While
talking
a
bit
about
pvs
and
pvcs,
then
we'll
get
into
some
csi
stuff,
the
container
storage
interface
and
talk
about
how
csi
is,
in
my
mind,
a
means
to
an
end
to
offer
storage
as
a
service
right,
which
is
which
is
pretty
cool,
so
that'll
be
that'll,
be
super
interesting
and
then
we'll
kind
of
bring
that
to
light
by
trying
to
bring
some
level
of
self-service
storage
together
all
right.
So all of that is what we'll cover today, and I think that should easily fill up an hour, if not a little bit more. And Martin, you said Sock Shop from Weaveworks is... okay, I'm guessing that's a pet-shop-type demo app; I'll have to check that out, very cool. Yeah, if you all have any more end-to-end example apps, you should let me know, because I'm always looking for them.
All right, so, clusters. Today we're actually going to have two types of clusters, everyone: we're going to have a local cluster, and we're going to have, not an EBS cluster, an Amazon-based cluster as well, and that will be all fine and good. So if we do a `kubectl get nodes` in this initial cluster, as you can see here, this is my Amazon cluster. We will use that today, but for local provisioning I don't even want to mess with Amazon.
A
Frankly,
what
I
want
to
do
instead
is:
I
want
to
go
ahead
and
provision
just
a
local
cluster
here,
so
I'm
actually
going
to
do
this
with
some
vms.
On
my
machine,
real
quick,
so
we'll
do
a!
Let
me
just
see
what
vms
I
have
so
we'll
do.
Sudo
verse,
sudo
vs
list
all
and
what
I'm
going
to
do
here
is
basically
bootstrap,
a
a
single
node,
a
single
node
cluster,
which
I
use
for
a
lot
of
testing
and
stuff.
A
Okay,
I
don't
have
anything
yet
so,
let's
just
do
a
quick,
vert
clone,
we'll
call
this
c0
looks
good.
We
have
a
host
right,
everyone
pretty
pretty
obvious
we're
going
to
have
a
host
of
some
sort.
A
Okay,
so
this
is
our
kubernetes
host
and
we
know
that
when
we
let
pods
come
into
the
host
all
right,
we
have
the
need
to
potentially
allow
for
some
additional
type
of
storage.
Now
this
is
probably
pretty
known
to
a
lot
of
us.
A
lot
of
us
have
done
pvs
and
pvcs
quite
a
bit,
and
the
pod
has
a
lot
of
options
for
how
it
can
get
its
storage.
A
Okay,
the
pod
has
options
like
the
ability
to
use
host
paths
for
storage
and
don't
don't
cringe
when
I
say
host
pass,
we'll
talk
about
the
the
downside
to
host
paths
pretty
soon,
but
you
have
options
like
the
ability
to
bind
arbitrarily
to
some
type
of
host
path.
A
You
have
the
option
to
bind
to
some
type
of
local
volume
as
well,
so
think
of
something
that's
kind
of
like
predetermined
on
the
host,
perhaps
again
ssds
and
things
that
have
already
been
set
up
and
then
another
really
common,
one
that
we
have
at
a
host
level
is
that
we
have
the
empty
dir,
which
is
a
really
common
one.
That
oftentimes,
like
the
primary
use
case
for
empty
dir,
is
an
easy
way
to
share
a
file
or
files
between
two
containers
in
the
same
pod.
Pretty
pretty
common
use
case
here.
So
we've
got.
A
A
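To make that last option concrete, here's a minimal sketch of an emptyDir shared between two containers in one pod. The names, image, and mount paths are placeholders for illustration, not something shown on stream:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-scratch
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /scratch/msg && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    # emptyDir lives on the node's disk for the lifetime of the pod
    - name: scratch
      emptyDir: {}
```

Both containers see the same /scratch directory; when the pod goes away, so does the data.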
Now I'm just going to shoot over to my virtual machine and make sure that's good. Okay, that looks good. So we're going to start up this virtual machine, which is going to be a `virsh start c0`. Lovely, get that started up, and then I'll just watch my leases here for a VM to come up, and we'll get into that.
This VM is for the local provisioner itself. But as you can see in this list of options, there are many, many kinds of cloud-provider, external, or storage-as-a-service options, everything ranging from the Cephs of the world, to the ScaleIOs, to Amazon EBS. Oftentimes these are the types of storage systems where we'll be thinking about: how do I go in and bring volumes onto the host?
So, how do I add new volumes that maybe come from some external place? What's nice about these kinds of volumes that we attach and detach is that they aren't tied to the host in a, should I say, ephemeral manner, as they are in a lot of architectures. If we lose the host, or if we reschedule the pod, can we bring the volume with us to another host?
Okay, and that'll be one of the use cases we talk about once we dive a little bit into local here. All right, so with that being said, let's just check out our host real quick. I've got an IP address, so I'm going to grab this IP and SSH into this host, and not put a space there. Think of this as my kind-style cluster...
...if you will, just not using kind. So I've got a host here; we will go ahead and init. What I usually do when I'm playing around locally, especially with virtual machines and storage, when I want a true VM experience, is run a `kubeadm init` and just set the pod network CIDR, so: `kubeadm init --pod-network-cidr=10.20.0.0/16`. All right, and then let's give kubeadm just a second to boot up. All right.
riko,
you
say:
cube
apps,
yep,
good
point,
cube!
A
Apps
does
have
a
lot
of
good
stuff
too.
To
deploy-
and
let's
see
here,
shock
sucks,
paul
yeah
paul
paul
says,
pour
one
out
for
the
wordpress
demos,
the
poor,
wordpress
demos.
You
know
I
have,
I
have
hope
paul,
don't
worry.
We
will
see
wordpress
demos
for
the
next
couple
decades.
You
will
not
have
to
miss
them
entirely.
A
Have no fear. Okay, and we'll grab this here; I'm just doing some initial setup. All right, so in theory I've got a cluster now: `kubectl get nodes`. Lovely. All right, so, chat, can you tell me why my node is probably not ready? Does anyone know what I have effectively missed?
Someone asked about libvirt support: on your Mac, or are you running a VM? My desktop is Linux, so I am very lucky that I just get to use libvirt and KVM natively without too much hackery. I love it; it makes my workflows super easy. And you all are right: I am missing my CNI. `kubectl get pods`...
What's that one indicator that always says we're probably missing our CNI? Usually it's when CoreDNS doesn't start. It's the symptom that should be on page one of the Kubernetes troubleshooting book, right? So let's get a CNI in here. I'm going to use Cilium in this case, so let me grab Cilium.
I had their quick start opened up here. We will deploy Cilium. Okay, lovely. All right, so we've got Cilium installed, and then, since I want a single-node cluster, I'm going to go ahead and untaint the master node so that I can schedule on it. So let's set that up. Let me just make sure those components started well, real quick.
So this will be a `kubectl get pods`. All that looks fairly healthy to me; hopefully Cilium is just starting up there. Oh, you know what will bother me: I think the operator has an affinity rule so it doesn't get scheduled on the same node. So close your eyes real quick, I'm going to run a `kubectl edit`...
We will not listen to best practice here, and we will edit this deployment, the Cilium operator, in the kube-system namespace. Let's change the replicas from two to one, and then I think we should be looking pretty... there we go, looking good. All right, so let's fix the taint thing. I've got this command handy because I use it all the time. All right, lovely.
So now we have a local cluster that is healthy, with a whopping single node on it. All right. Okay, Presslabs has a WordPress operator? Well, hey, if the WordPress demo uses a WordPress operator, I am a hundred percent game for that. That's my prereq for using a WordPress demo. Ctrl+L... yeah, Rory...
...I hear you on Ctrl+L, and I need, like, a shock collar or something: every time I type "clear" it shocks me once, and if I type "clear" incorrectly it shocks me twice, because I can never remember to hit Ctrl+L. It's so ingrained in muscle memory to type "clear", and I even have an alias for "c", and I still forget to use that. So I need some therapy to change...
...my behavior here, I think. Steve said, "I knew you'd have Cilium." Hey, it's easy to install, it's great. All right. Paul said get ready to judge him on his editor choice. Yes, I'll take all judgments, no problem, I can handle it. All right, so we've got the worker node, and we're...
...looking pretty good now. Question for you all, people who have done a lot of Kubernetes stuff before: what comes first? Let's talk chicken and the egg here. Is my PersistentVolumeClaim going to come first, if I'm going to be using storage that's already on this host, or will my PersistentVolume come first? Or does it depend? Tell me what you think in chat. What's our first step here going to be: volume or claim?
[We'll have a volume] that kind of represents something, and that thing could be a mount, say /mnt/fast/pod-a. We probably would never name it this, but you kind of get the point. So the idea is: can we make storage available, and then can we put in the PersistentVolume? Now, what we're doing here to start off is more like static provisioning.
I say that in the sense that we need to instantiate the PersistentVolume as the administrator before we can get an app attached to it with a claim. So, for the most part, y'all are right. What we're going to do here is go back into the cluster and set up a directory somewhere, so I'll just bring myself into sudo. And I am on the actual Kubernetes node in the cluster, just to make sure you know this isn't my local workstation.
So we've got a bunch of stuff, some block devices set up, all that good stuff. We're going to go in and make a directory in /mnt, and we're going to call this directory fast, and we'll call it pod-a, just for the heck of it. Which I should do with `mkdir -p`, because that path isn't in place. All right, lovely. So now if we do a `tree` against /mnt... which isn't installed, of course, so we won't worry about that.
We've got /mnt, we've got fast, and of course we've got pod-a. Now, like you all said, let's bring in a PersistentVolume that can correlate to this mount. Okay, that's where we'll start today. Getting back out into a normal user, we're going to call this pv.yaml and set up the volume example itself. Let's see if I can snag a volume example from the good old documentation, so you all don't have to see me fumble here endlessly. Volume examples... let's get a volume example going. You know what, I bet "local" probably has a volume example.
There it is, lovely. All right, let's talk a little bit about this API, and we'll dig a little bit deeper. Let me paste; again, I'm not on my workstation, I'm on the host, so it might be a little funky here, but you get the point. Okay, so here we go: we've got a PersistentVolume in place, and we're saying that its capacity is 100.
Now, this is what I'm declaring the storage as. Can anyone in chat tell us, and I'll quiz y'all a lot throughout this, especially on the basic stuff: what relevance does the storage amount in the PersistentVolume have here, if you catch my drift? Why would I declare the storage amount in the PersistentVolume API? Why is that beneficial to the system? We'll come back to that. So I've got the storage.
The volume mode I'm going to use here is Filesystem, because there are two primary modes: we've got block, or I should say raw block, and we've got filesystem. Raw block is not going to have a file system installed at all, so if we have a use case where an app wants super-fast raw block access without the overhead of an operating-system-managed file system, we can do that.
Now, correct me if I'm wrong here, if you all feel differently, but 97.99987 percent of the time you're probably going to end up with Filesystem, because most things expect a file system; why not just let it be handled? So Filesystem will be our mode, and then we'll talk a little bit about read/write modes. Now let me see what you all said in chat about the PVC. Exactly: so it matches with the PVC, and, like Rory said, same idea for the scheduler, to know what to put where.
So
in
a
way,
what
we're
propagating
from
the
storage
end
is
really
to
help
us
understand
at
kind
of
a
scheduling,
level
and
eventually
right
at
a
pvc
level
how
to
know
whether
a
pvc
is
qualified
to
attach
to
a
volume
you
all
are
exactly
right.
So
the
whole
notion
of
for
the
pvc
request
for
the
scheduler
spot
on
spot
on
now,
we've
also
got
access
modes.
So
here's
quiz
question
number
three
for
you
all.
What are the three access modes Kubernetes supports? And if you happen to know the acronyms, extra credit. So put in the chat what the three access modes are, and we'll come back to that. Next we've got the reclaim policy (persistentVolumeReclaimPolicy). This means: if I delete the claim, should the volume be removed? I'll just say yes to this, because it'll bring up an interesting case. Now we call out the storage class.
A
This
is
interesting
and
something
we'll
talk
a
bit
about
later,
but
for
now
I'm
going
to
put
in
the
path
which
is
going
to
be
mount
and
the
path
for
mount
will
be
fast,
pod,
a
all
right.
All
right,
you
all
are
racing
each
other.
Yes,
it's
about
who
can
type
fastest
you've
got
read,
write
only
we've
got
read,
write
many,
but
what
is
the?
A
What
is
the
third
one,
the
more
obscure
one
I
see
read,
write
only
and
I
see
read,
write
many.
There
is
a
third
technically
and
it
has
its
own
little
acronym
as
well.
Yes,
ian
you
got
it
read
many,
so
the
ability
to
read
many
now.
This
is
really
interesting,
and
I
guess
we'll
talk
a
bit
more
about
this
when
we
get
into
the
cloud
right,
but
most
systems
are
probably.
A
This
might
be
too
stout
of
a
statement.
Let
me
just
say
it:
most
systems
are
probably
going
to
support.
Read
light,
read
write
only
in
the
idea
of
read
write
only
is
that
we
attach
a
volume
to
a
single
node,
and
then
we
attach
a
single
pod
to
that
volume,
and
this
is
especially
common
with,
like
ebs
type
systems.
So
if
you
look
at
google's
persistent
disk,
if
you
look
at
amazon's
ebs
on
and
on
now,
there
are
patterns
that
sometimes
folks
want
to
support
and
read.
Write.
Many
is
one
of
them
read.
Write.
ReadWriteMany is a bit interesting, and maybe in chat you all can give your opinions here. I have a feeling about ReadWriteMany, and I'm totally open to other perspectives if you feel differently. I go into a lot of shops where ReadWriteMany — like an NFS file system supporting ReadWriteMany — is there to support some kind of obscure legacy requirement, like the idea of sharing objects or sharing data between files, that's way better served by a database, or a message broker, or something else in kind of a more modern architecture, if you will. So what I'm saying in a lot of words is: some things will support ReadWriteMany — Amazon's EFS does; basically, things that replicate kind of an NFS-like store can totally do that — but you might want to start by thinking: if I need ReadWriteMany, is it the right answer? Is it because I have a really good requirement for it, or is it because I'm trying to support some really weird legacy use case that's better served in a different way? Something to think about; it's all trade-offs. Yeah, Waleed, you bring up ReadWriteMany for registry HA — totally makes sense. As Ibrahim says, NFS supports all these access modes — network file share models — and this is true for EFS in Amazon as well.
You can have all these different modes. You're probably going to find ReadWriteOnce in a lot of systems, like EBS-based systems, but in a lot of cloud native apps ReadWriteOnce is exactly what they need, is how I would say it. And, as Yusuf said, some databases. Steve says it's always a smell — same here, Steve, I'm always kind of a bit curious about that.
So, okay, good deal. We've got the pieces in place. We've also got node selector terms. Now this is pretty interesting with local storage, because what it's going to be able to do is effectively tie the volume to a specific node, which can then be used for landing the pod appropriately as well — which is super interesting.
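Putting those pieces together, the PersistentVolume being built up looks roughly like this. Treat it as a sketch — the object name, the hostname (test-w), and the path (/mnt/pod-a) are just what's used in this walkthrough:

```yaml
# A statically provisioned local PersistentVolume, as assembled in the demo.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem        # the common case; Block would skip the file system
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/pod-a            # directory that must already exist on the node
  nodeAffinity:                 # ties the volume to one specific node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - test-w
```

The nodeAffinity block is what makes local volumes schedulable: the scheduler knows any pod using this volume has to land on test-w.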
So what we'll do here: let's just do a kubectl get nodes again, and make sure we grab our hostname, which is test-w, and we'll put that in place. So test-w — we've got that, we've got our PV. Now let's see if it works when we apply it, everyone. Let's do a kubectl apply for the — what did I call it? — pv.yaml, easy enough, and then we'll do a watch on kubectl get pv. Okay, all right: example-pv, storage declared at 100 gigabytes, reclaim policy Delete, ReadWriteOnce, with the local-storage storage class. Now, chat — everyone who's hanging out with us right now — what's my next step? How do I bind this thing? What's the object that comes next, so I can actually start using it with a workload?
What is next? PVC — you got it, Mona, you won the race there. I guess I wonder, when you stream, whether those of you who are farther away from where the stream's coming from have a disadvantage for answering in chat. We should have some kind of poll system or something; that'd be kind of cool. Anyways, let's look at a PVC. So inside of the local docs, I have got the PV in place — is there a claim example?
Let's see if I've got a PVC example. I just need a PersistentVolumeClaim — I just want an example; someone give me an example here. PersistentVolumeClaim... oh no, that's a pod. Where is my — okay, let's grab one from a different page then: kubernetes persistent volume claim... persistent volumes... persistent volume claims — there it is. Let's grab this. All right, so this will be our example; we'll go ahead and keep that up there.
Actually, I'm going to go back to that top session, now that I think about it: pvc.yaml, paste it in. Okay — Steve says "connects to the USB", exactly, that'll be your strategic advantage for sure. PVC, set paste, put that in. All right, lovely. So here is our PVC, everyone. Now, the PVC is going to bind us — think of it like this: the characteristics of the PVC are eventually going to try to bind it to a volume.
So a lot of the criteria we're putting in here is kind of helping us qualify whether there's a volume that is appropriate for this PVC to claim. I think of the PVC like an extra layer of indirection from the app: as a developer, unless I'm really digging into the systems stuff, most of the time I'm probably going to interact with at most the PVC layer, and then let the implementation details under the hood figure out how the volume should be selected, or come to be.
Is that how you all kind of think about it? So I'm kind of taking off my ops hat, I guess you could say, and putting on a bit of my developer hat here. We come in: we've got ReadWriteOnce — that makes sense; volume mode is Filesystem; let's ask for 200 gigabytes; and then our storage class selector will be local-provisioner. Okay — and I think that's what I called it for the PV.
So if we do a kubectl get pv — local-storage. Good thing I checked: the storage class is local-storage. All right, we've also got a selector, and I don't think I need a selector; that should be optional. So we've got a PVC asking for 200 gigabytes of storage, and then we'll go ahead and apply it — kubectl apply on the pvc file that I'm literally editing — and let's see what we get now. So if we watch again.
Okay. And Waleed said: did you define a storage class called local-storage? Now, what's interesting, Waleed, is that this will probably work without a StorageClass object — there are some reasons why we should probably include one, but this might work by default. Now it is Pending. What's your guess, everyone: why do you think we are in a Pending state rather than a Bound state with the PV? What do you think the cause is here?
Oh hey, check it out: my drink is green, so it is following the green screen there — that's pretty cool, I have a transparent drink. Yes — we think it is probably that the PVC size is greater than the PV size. My thoughts exactly; let's see if that is actually true. So, one thing that I love doing when trying to solve PVC issues is going right to the events, because the events on PVCs tend to be pretty freaking good.
So let's do a kubectl describe, since that will give us events — and: "persistent volume storage class local-storage not found". So, Waleed, what you said has something to do with this: local-storage not found. Let's just take a quick look here — kubectl get pv, get pvc. Okay, so we've got the storage class local-storage on both. So you might have been right, Walid — I thought that with the local provisioner you didn't need a StorageClass, but I think you were maybe spot on here.
Maybe I do have to define one, which is a little interesting — I'm kind of surprised, I could have sworn otherwise. Humor me real quick here; maybe you all are about to blow my mind. You're probably going to create StorageClasses anyways, but I didn't think you needed them for this scenario. Let's see: in the PVC, let's drop this down to 100 — or let's just do even 50, so it's within the bounds — and let's apply it again.
Oh, sorry — apply the PVC in this case. Okay, and it's not happy, because I cannot modify that field. That's another thing that's interesting about PVCs: you can't shrink them, you can only expand them. So if I apply this PVC fresh... okay, let's go ahead and take a quick look: we'll get pv and pvc, and now it's Bound — without a StorageClass object. So, interestingly enough, it was okay. So here's my running theory with local storage.
For reasons we'll talk about, you technically don't need a StorageClass. What I think was happening — I was like, oh yeah, if you describe the events you'll understand exactly what's going on, but I actually think that threw us off.

So what happens is: it looks to see if it can match something initially, and whether it matches a PV that was provisioned by that storage class — because there's a big difference here: we're not doing dynamic provisioning. And John said: you have a storage class there, though. No, John, I don't think I do — kubectl get storageclass; let's see if this is about to blow my mind too... no StorageClass. And here's why it works anyway.
So what's interesting here is that the PV is kind of saying: hey, I exist, and I was created as part of this storage class. So Kubernetes goes: oh, you're a PVC, and this volume was created by the storage class you're asking for — sounds great, let's go ahead and just wire it on up. Now, the condition I think we hit — and I'm not using CSI yet; we're not there quite yet.
The condition I think we hit is that, since it couldn't qualify a PV — because we had asked for too much storage — it then went: okay, well, I can't attach you to any PVs; who's the provisioner for local-storage, to start provisioning this thing? And guess what: I don't have a local-storage StorageClass, or a provisioner that can satisfy it. So, in short, it was trying to dynamically allocate the storage.
We don't need a StorageClass yet, but there are some reasons why we need to kind of think this through a little bit. So we've got it set up; now, of course, we should probably deploy an app real quick. As I was just complaining about nginx examples, I'm going to go ahead and just deploy an nginx example, because why not? Volume mount — kubernetes volume mount — let's see if there's a quick pod example; so, type: pod.
Okay — volumeMounts, volumes — good, good. Okay, let's go ahead and do this. We've now got our pod that we're going to set up, so let's go ahead and do a pod.yaml — and then, after we finish this up, everyone, we're going to completely throw this away, go full-blown CSI dynamic provisioning, and get really fancy. So let's go ahead and set the paste — I get halfway through a command and then I start talking, and it totally screws me up. All right: pod, we've got test.
We've got the test pod, and we're going to use the image I was just complaining about, which is nginx. And Steve says: quiz time — difference between a local PV and a hostPath volume. I think that is a great question, Steve. Does anyone know, even if it's just a difference or two, when you should use one over the other? What is the key difference between the local provisioner and the hostPath approach? Actually, let me put that more simply.
What's the difference between taking the local approach, which we're seeing, and taking the hostPath approach? Does anyone know — and if you can maybe tell me why hostPath can be a little dangerous, I'd be really interested to hear that too. Let me know in chat. All right: nginx, we've got the test container, we've got the volumeMounts. For the volume mount we'll just put in /var/lib/db, and we'll call it test-volume.
This is test-volume, and then, of course, my volume type here is not going to be EBS — it's going to be the actual local storage here. So: pod volume mount — where is — give me an example, so I don't have to type this thing out; I'm getting lazy, can't handle all the complexity here. So: pod example, persistent volume claim.
Basically, what we need to do here is reference the PersistentVolumeClaim. Persistent volume claim... and here's what we're looking for — there it is: persistentVolumeClaim. Wow, it's not that complicated of an API; I should have been able to figure that out. All right: persistentVolumeClaim, we've got that; we've got claimName. And the claim name, as we know if we do a kubectl get pvc — because remember: what is the app concerned with? Is the app concerned with the volume, or the PVC?
The app is concerned with the PVC. It's a layer of indirection, so we don't need to know about the provider-specific complexities of the volume. So we're going to go ahead and grab our PVC, which is called my-claim, and put that right inside of here — claimName: my-claim. And what do you all think, should this work? Maybe that comment will blow us up — so let's not mention anything about EBS just yet. This looks good: /var/lib/db, test-volume, my-claim — looks pretty good.
Okay. Noelle said: with hostPath there's no toleration like a local PV has. Marat, you said: hostPath mounts a file directory, while a local PV mounts a partition or a local disk — all right. And Haim, you said: when a pod is to be evicted to a different node, hostPath could be a problem. Yeah, I think that's a really good call-out: hostPath is super ephemeral, and it is not something that's meant to be moved around.
The key thing here — and this is a pattern I see a lot — is folks go in and say: I need to share some data between pods, and I need it to be provisioned, and I can't be going in like Josh just did with some automation, provisioning volume mounts and all this stuff. And they go to something like hostPath, which lets you allocate this kind of arbitrary directory on the system — which can have security implications, especially if you're running as root.
So on and so forth. And when you actually want that use case, a lot of times there's a way better solution for it — which I won't tell you just yet. Do you know what is a better volume type for when you need to do something like share some files between two containers in a pod? Does anyone know? All right — and Steve says: wrong mount path. Okay, let's see if I got my mount path wrong here. Let's see — did I get my mount path wrong? It started.
Yes — emptyDir, exactly. emptyDir is a much better way to share that data. An emptyDir abstracts it: you don't need to say "hey, mount me into /etc/whatever"; effectively, you're just given some storage your containers can use. You don't need to worry about some of the underlying complexities, and it can be a bit of a safer model. Now, to the point that you made, Radha, it is definitely still ephemeral — don't get me wrong — but it's a better solution to that use case.
All right — and Steve said: you're not using the PV for anything. You are correct, Steve; yeah, that is probably why it works. So it looks like our test pod is in place, everyone; let's go ahead and check it out. Here's what we'll do: let me just get the IP address of this node and we'll open up — oh geez, that is not my IP, that is a Cilium IP. Where is my main interface? There.
It is. Okay, we're going to open up two windows here real quick. Let's bump this down, do an ssh — josh at this address — put in my password, and we will go ahead and go into /mnt/pod-a. Okay, all right, lovely. I don't see anything here, you all don't see anything here, so let's go ahead and do a watch ls -la and keep that open. Now, let's do a kubectl get pod — we know we have a pod called test-pod — so, into our lovely nginx demo.
Here, let's go ahead and exec into test-pod and run /bin/bash. Okay — luckily that did exist in this pod — and let's look in /var/lib/db. So in theory, in the bottom buffer, where I'm looking at my host file system, I should now be able to see files written inside of here. And if we do a quick echo and say "happy friday tgik", and send that into db.txt.
Why are you not showing up? pod-a — oh no, it's /mnt/pod-a, yeah, should be good. Isn't that mounted in? Let's take a look at the pod definition — what did I do wrong here? Maybe the wrong path; maybe that's what Steve was saying. kubectl get — let's see what we got here; we will vim the pod file. Okay, what do we got? So: /var/lib/db.
Here's my claimName, and my volume is my-claim. I wonder why it didn't show up. It's clearly got this right, it's clearly got the test-volume, it's got the claim... "You created the file on the wrong path — I think you wrote the file in forward slash, not the right directory." Oh, did I? Oh, I didn't even cd into it — wow, terrible, terrible, terrible; good point. All right, let's go ahead one more time. And here I was wondering why this file wasn't echoing.
Well, it's because you wrote it to your root file system, Josh. So let's go into — sorry, I'm in the pod here, in /var/lib/db. Let's run the echo one more time — okay, and do we get anything? Let's see: we will cd into pod-a again on the host, and there it is. So check it out.
Now we have satisfied this: if the pod restarts, the data will still be here. In theory there are ways for us to plop this off and move it around, and all that good stuff. So now we have seen the static provisioner route, with the PVC and the PV bringing things together, which is a decent model.
Now I'm just going to mention a couple more things on local storage. So, with local storage, there are more elegant ways to do this. One of the things that has grown in popularity a little bit is the external — let's see if this is it... local... yes — an external static provisioner that can run separately. Without getting too much into the details of this — and if we ever want to do an episode on it, though I don't know if it'll justify a full episode — there are ways for us to go in and detect and understand paths that were set up for pods, and then use the external provisioner to automatically propagate the PVs.
A
Because
one
of
the
things
you
can
imagine
would
be
a
maintenance
nightmare
is
for
me
to
go
in
and
make
pvs
for
every
single
thing
and
probably
screw
half
of
them
up
and
so
on
and
so
forth.
So
a
pseudo-dynamic
way,
let's
say
of
doing
this,
I'm
not
going
to
call
it
dynamic
provisioning,
but
a
pseudo-dynamic
way
of
propagating
some
of
these
things
can
be
done
through
that.
There's
an
interesting
kubecon
talk
on
it
too.
So if you do some searching around, it can be pretty interesting. And Noelle said there's a local path provisioner by Rancher as well, so do check that out if you're interested in that kind of stuff. Now, you might again be thinking: well, Josh, this is definitely better than just pod ephemeral storage, but is it that much better? And again, you have to keep in mind your use case, and the model we'd get with a tie-in to a more dynamic provisioner.
You can actually do really cool stuff, especially if you've got really data-intense or I/O-intense workloads where you need to go in and provision these local SSDs that are really fast. So this can be a decent way to do it, but there is something to be said for far more dynamic provisioning using systems such as Amazon's EBS and otherwise. And Waleed, you asked which KubeCon — I am not sure; let's see if the search is this easy. If we do "external provisioner kubecon", let's see if it actually comes up.
No, none of these actually look like it. I'll tell you what — I'll find it and put it in the show notes. There's an interesting talk; I think it's actually Salesforce, so maybe if you throw that in your search it might come up better. I think it was two folks from Salesforce talking about how they use the local provisioner to enable fast disk on host for pods. And Steve said it might be 2018 — so: "2018 local provisioner salesforce".
That might be the key. Okay, now let's take it a step further here, and we'll kind of bring this all together with a more advanced example. I've got clusters running in AWS, and a lot of times when I run clusters in AWS, or Google Cloud, or VMware vSphere, or anything, there's a good chance I want a way to integrate with a storage provider. And this is something we've been doing since way, way before.
You know, back before the container days: when we had hosts in AWS, we'd attach an EBS volume and write data to it, and if we needed to recreate the host or do some type of migration or backup, we had ways to move the block volume — the block storage — we had ways to snapshot it, we had ways to keep its life cycle independent of the host. That can be really compelling in pod-land too, because our pods move around all the time.
Now, the CSI spec is on GitHub. If you haven't seen it, it's really well written. Saad Ali from Google — at least at one point he was at Google — helped write the spec with a couple other folks from, I believe, Mesosphere, actually, and the spec is really great: it talks about how we implement CSI and all that good stuff. So the idea here — and let's even back up a little bit: back in the day, when you would provision your EBS volumes.
When you would provision your EBS volumes, your Google persistent disks, your vSphere vSAN volumes maybe — what would happen is all of that was fully integrated. But what's interesting about that is it was integrated in-tree, in the Kubernetes project. Let's check this out real quick. If we go to github.com/kubernetes/kubernetes — the k/k repository, if you will — and then we go into tags and we grab... I don't even know; heck, the providers might still even be there.
Let's grab a really old tag, though — oh boy, there are so many releases; I need a more elegant way to do this. Let's try 1.16 — it'll still be in 1.16 — so 1.16.14. Okay, we'll grab that tag. Actually, I should do this a different way: we'll go back to tags, and I guess I could search, so why don't I do that. 1.14, let's say — lovely, 1.14.10.
A
and
then
in
the
kubernetes
repo,
which
of
course
I
can't
really
remember
the
organizational
structure
of
we
kept.
The
cloud
provider
integrations
in
which
folder
josh,
which
folder
if
anyone
in
chat
knows
that'll,
be
this
will
be
the
really
helpful
quiz
question
it
was
in
wasn't
plugin.
It
was
maybe
it
was
in
package
and
then
in
a
sub
folder
and
package
package
cloud
provider
there.
It
is
providers
yep,
samir
I'll,
give
this
one
to
you.
Since you said pkg. Basically, what we would do is ship the integrations for these things in the code base. Now, let me ask you all real quick: going back to the Kubernetes 1.14 days, and the 1.9 days, and however long this had been there — what's one of the big downsides you can think of to having the providers in-tree, the logic, the code for talking to things like Amazon EBS? What's the downside to this model?
A
Now,
while,
while
some
of
you
are
typing
that
up
okay,
we
will
we'll
talk
a
little
bit
about
we'll
talk
a
little
bit
about
the
architecture
of
csi.
Okay — and these are some of the biggest pieces of CSI that we're going to want to talk about today: the controller plugins and the node plugins of the driver. Now, let me check what you all said in chat. "Pushing out fixes quickly" — Maddie, absolutely right. Dev life cycles — Jeremy, I'm thinking you mean coupled dev life cycles, right? Steve says the code has to be open source. Waleed says: keeping code cadence between the provider and kube — similar to what I think Jeremy and Maddie are saying too — and, like you said at the end there, Walid, keeping them independent. So there's a whole slew of reasons that probably make a ton of sense. One is: why ship provider code for how to attach volumes to Google when you run your clusters in AWS, and vice versa? Why have that code in there, even if it's inactive?
One level is development: let the development happen independent of Kubernetes itself. And then, secondly, think about CVE remediation and problems and hot fixes and all those kinds of things: if you're integrated in the core code base, your fixes need to happen inside of that code base and need to be released with kube — so the ability to remediate and update just the storage integration stack is tied and bound tightly to Kubernetes. So CSI, the Container Storage Interface, is one of the ways — and it is now the way — that we decouple that logic and implement things that talk to back-end providers.
A
There
was
an
intermediary
option:
it
was
called
flex
volumes.
Did
anyone
anyone
who's
in
who's
in
chat?
Did
anyone
here,
use
flex
volumes
or
uses
flex
volumes
today,
I'm
just
curious.
If
so,
what?
What
are
you
currently
using
flex
volumes
for
steve
says
for
security
fixes
exactly
steve
that
out-of-band
ability
to
fix
security
is
is
key.
What
are
what
are
you
using
flex
volumes
for
steve
or,
what's
what's
your
most
commonly
seen
flex
volume
driver
so
flex,
volume
aside?
CSI is definitely the prevalent way that we use now — oh, you're saying, Steve, you did in the past with FlexVolumes. Yep — so FlexVolumes, last I checked, I think are in maintenance mode. With CSI, basically, we implement two pieces to the puzzle: one of these is the controller plugin, and the other one is the node plugin. Now, what's kind of confusing to me about these — but it makes sense — is that both of them typically have the driver code inside of them.
So when we deploy a CSI plugin, like we will for EBS, the controller plugin will have a driver, and the node plugin will also have the driver code — and this driver code can actually be, and oftentimes is, the same code. But their responsibilities are quite different, and they'll run as separate entities. Now, the node plugin: we're typically going to run the node plugin on every host, as a daemon. So when you think about the node plugin, everyone — if it sits on the host, what do you think is the primary Kubernetes component it talks to?
If
it
sits
on
the
host
and
it
sits
alongside
or
on
every
node,
I
guess
I
should
say
in
kubernetes
what
component
in
the
kubernetes
stack
do
you
think
talks
to
driver
exactly
noelle
the
cubelet
right,
so
the
node
plug
is
going
to
sit
in
it's
going
to
be
friends
best
pals.
If
you
will,
with
the
cubelet
no
question,
and
it's
going
to
run
exactly
while
as
a
daemon
set
now
the
controller
plug-in
typically
will
be
implemented
a
little
bit
differently.
This
will
run
somewhere
somewhere
right
as
a
deployment.
Okay — and what I'm getting at here is that it can also be a StatefulSet; frankly, the CSI driver implementer has flexibility in how they do this. They could even not have these two components be separate — there are a lot of different ways — but I can tell you with certainty this is one of the ways that Kubernetes does it, or sorry, that the EBS provider does it, the one we're going to use. So we've got the controller plugin, we've got the node plugin — now, at a very high level, in most implementations.
A way that you can think about this is that inside of the controller plugin, typically, the responsibility is for the driver to go out and use some provider — let's call this provider AWS, for example — and do things, do actions, in the provider. So, those of you who know a little bit about AWS: what kinds of actions, to satisfy container storage, do you think it might do in AWS?
What might it create? What might it delete? What are the kinds of things that you might expect it to do? Think about that, let me know, and I'll put some of those in the diagram here. For the node plugin, like we said, this one's going to sit on every host and talk to the kubelet — so the kubelet is going to have communication happening with the node plugin as well, and we'll talk a bit about that. So, Waleed, you said: mount capability, IAM roles.
So, what's interesting is that the node plugin — which has also got driver code in it — is not running in a mode where it's necessarily doing much with the AWS provider. It could be; they could implement it however. But this is kind of the model that we're going to see a lot: the controller plugin is going to make the volume, it's going to attach it to nodes, it's going to do all that good stuff.
Now, that being said, what's going to happen between the node plugin and the kubelet? And Maddie, you said EC2 instance stores as well — so you mean the actual volume stores that are attached to the EC2 instances, is that what you're saying? If so, you're totally correct: it could be looking at those as well. So: node plugin and kubelet. The kubelet and the node plugin are working at a much lower level, so these are going to be doing things like asking it to go ahead and create the file system.
So, once the volume is attached and it's on a node somewhere, there's a good chance you want a file system on it — unless you're using raw block storage. So the kubelet and the node plugin will go back and forth, and the kubelet will be like: hey, go ahead and put a file system on this thing — that's pretty important. So it will set up the directory structure, it'll go in, and it'll actually write the file system.
So again, there's kind of this back and forth, where the node plugin is working on: okay, so I've got an attached volume, and you want me to write a file system to it — let's do it. Oh, you've got a pod for this now, and you want me to take the file system I created and mount it into the pod — let's do this. It can do all of those things, essentially. Yep — all right, and then, Maddie, you said yeah.
The NVMe drives are incredibly fast for ephemeral storage usage — you're totally right, and we're going to talk a little bit about how we can attach some fast storage as well. Obviously there's the NVMe that might be on the host; we'll even talk about some of the really fast, high-IOPS options in AWS as well. And Steve said — exactly, Steve — snapshotting, resizing, attaching: there's a bunch of things that can happen here.
A
There
can
be
resizing,
there
can
be
snapshotting
and
snapshotting
can
even
have
another
controller
in
play,
which
we
might
talk
about
today
if
we
have
time,
but
these
are
all
the
kind
of
things
that
happen
at
the
controller
level.
These
are
all
the
things
that
happen
at
the
plug-in
level
and
again,
what's
going
to
end
up
happening
here
is
a
lot
of
the
code
bases
are
going
to
just
contain
both
of
these?
In
fact,
if
we
look
at
this,
is
the
code
base
for
the
aws
ebs
driver?
A
If we look in the driver.go file for just a moment: when you call new driver, what it does is figure out, all right, cool, you want to run a new driver. Do you want to run in controller mode, node mode, or all mode, where you're basically able to serve both of those types of requests? So this is a pretty common pattern.
A
This is something that we do in the vSphere CSI driver as well: we allow you to switch the same code base to act as either node mode or controller mode, and those modes, like we talked about, are satisfying these things. Okay, so let's bring this all together, and whoa, my green screen will fall over. Let's bring this all together and see if we can mount this stuff now. I guess the one thing that we should say, since I kind of got really into the implementation details quickly:
A
What will need to be created in order for all this stuff to happen? What I'm getting at here is: do I need to create a persistent volume again, like I did in the local example? What's going to be the object that triggers a lot of this creation to happen? So Steve says a storage class. No question, I've got to be able to give folks the storage class. And then Walid, perfect follow-up: the PVC itself, right? So the storage classes can be like our catalog.
A
I need something to attach to my app, and that's going to be able to automatically, dynamically provision the PV and go through this whole flow, which is a much more elegant experience than what I showed you on my local node. So: storage class, PVC. Let's see if we can get this thing set up, so we'll bring it all together here. And I do need to settle down, or else my green screen is going to go flying. All right, so we'll get out of here. Now I'm back on my host, everyone.
A
Just so you have an idea, we'll do a kubectl get nodes, and we are officially back in AWS land. Okay, and while we go through here, let me just make sure I can log into AWS too: AWS console. I'm going to be able to show you all what it's creating as we go through here. So, am I still logged in? I am, lovely. Okay, so this is AWS.
A
Obviously we are in EC2. Let's go ahead and throw it in dark mode, because who doesn't love a good dark mode, and we'll go into instances, and we will see that we have some instances in here. These are the TGIK nodes that we are using for this example. Okay. So, Imo, you said: what's the net protocol between the kubelet and the node plugin? Great question.
A
It is, as Samir said, RPC. It communicates over a Unix socket, and it uses gRPC to do the communication between the two. Great question, cool. Yeah, this is also the zone where my personal website is hosted too, so that's why you see octetz in there. Okay, let's get this thing rolling. First thing we've got to do: I have a cluster in AWS, and the good news is the CNI is all set up for me this time. Let's see what CNI got set up for me, I'm kind of curious.
A
I think it's Calico. Let's make the text a little bit more reasonable. And this is Calico, see, we're spreading the love in TGIK: we got one cluster with Cilium, one cluster with Calico. Now we'll do a kubectl, let's see here, we'll do a kubectl get storageclass and make sure that we have nothing in place here. It's all blank, I've got nothing set up. So what we'll do here is go in and grab the EBS provisioner, which will be the EBS CSI driver.
A
We'll see if we can get the GitHub page. There it is. This lives inside of kubernetes-sigs and it's called aws-ebs-csi-driver. Okay, and we just need to deploy this thing, and I know they've got some alpha thingy, so let's go ahead and do that: deploy driver, deploy driver. All right. And Steve said: waiting for an Antrea TGIK. Me too, we've got to get on that thing.
A
We should bring someone on from the core team to talk about it too, because my Open vSwitch skills are not where they should be, but it'll be really cool to explore it. So let's go ahead and apply. Great, and if all goes well, this should create the controller I was talking about, and it should create the node plugin, all that good stuff. So let's make the text more reasonable. And Rory, if you're still here, you're probably seeing me type in clear; in my head I know to hit ctrl-l, I just can't bring myself to do it in the moment.
A
kubectl get pods, all namespaces, okay. There we go, so we've got some containers creating. Notice we've got the CSI controller inside of here, and we've got the CSI node. This is a three-node cluster: one control plane, two workers. The CSI node will run on all of them, and then the CSI controller has two replicas. Now, the CSI controller will most commonly run in a leader-election mode, so one of these two will effectively be the leader as we get into provisioning.
A
Okay, all right. Yes, Walid, that is me, octetz.com, that is my little pet project. And I agree, Maddie, it would be super cool to check out the OVS stuff, especially with the ability to do flow analysis and all that, that would be pretty freaking cool. Okay, so we've got the pieces deployed, it's all up and running now.
A
Okay, so I think by default I'll have the privileges I need, but if we don't, it'll just break and we'll fix it, no big deal. So that should be all good, and I'm doing this privilege-wise through the instance profile, so I'm being really lazy. Steve's right, something like kiam, and I don't even know if people use kube2iam anymore, but those types of projects that can do more advanced workload-based identity, that's the way to go. But I'm being lazy and using instance profiles and giving them more access than they should have.
A
So just don't tell anyone. Okay: ebs-csi-node, ebs-csi-controller. But again, we don't yet have a storage class, so kubectl get storageclass: there is none. So let's look at the API, because I feel like StorageClass is a pretty cool one, and we'll channel our inner Duffie here and use the explain command. So: kubectl explain storageclass. Okay, and sorry, I'm trying to get my text size just right here. So, StorageClass: a StorageClass describes the parameters for a class of storage.
A
Okay, and we have a couple different options that we can set here, some fields for what our storage class offers. Should we allow expansion, which means should we let people expand their PVs over time, maybe go from 30 gigs to 40 gigs? We have topologies, which we'll talk a little bit about, and then, if we go down a bit, we have some more interesting stuff. One is the provisioner.
A
So when we set up a storage class, we want to make sure that the storage class is tied to the CSI driver we just installed; that's what we'll use the provisioner field for. Okay, we've got a reclaim policy. The reclaim policy is going to let us know what should happen when a volume needs to be reclaimed, or when the lifecycle of a PVC is altered or changed. And one that's really interesting is that we've got parameters, and this is what makes StorageClass, in the integration with CSI, pretty configurable: parameters is just a map of strings.
A
There's no structured data inside of it, so AWS can arbitrarily implement their own parameters that they support in their CSI driver, just like vSphere can, just like OpenEBS could, on and on. In fact, if we go to the repo and scroll down, there should be a table of parameters here. What are some of the parameters? We can say things like: this type of storage offers this file system, this type of storage offers this speed, here's what the IOPS is, here's whether it's encrypted, so on and so forth. Okay, so pretty cool stuff. All right, Maddie said IRSA. I don't know what IRSA is, or at least I don't know what the acronym stands for despite two years of running it. Oh, it looks like it might be a different IAM service, or maybe not an...
A
IAM... oh, it's for EKS, okay, and that would be why I haven't seen it yet. Cool, cool. Okay, good. Noel said the same thing, that you came on it with EKS. Okay, good deal, all right. So, assuming that I have my IAM set up correctly, and I don't have anything fancy in place like IRSA or kiam, I am going to go ahead and set up some storage classes.
A
We're going to set up one storage class and we're going to call it default-block. Now, default-block is going to be the default class. There's an annotation supported by the StorageClass API where, with this in place, if a user does not specify a storage class, it will automatically use this one. You can see we have the provisioner set up here.
A
So, in short, if somebody asks for this and it maps to the storage class, we know which CSI driver to go to. We also have volume expansion, so on one of them I'm going to enable volume expansion, and on another one I'm going to set up an interesting volume binding mode. So think about this for a moment: volume binding mode, WaitForFirstConsumer. The default, I think, might be Immediate, or it's WaitForFirstConsumer; anyways, the two common binding modes are WaitForFirstConsumer and Immediate.
A
So let's pretend we're at a company together, and I want to offer commodity storage to the developers. I'm actually going to be offering them hard-drive disk storage through this; this isn't even SSD. And then the second storage class is kind of more of the same, except I'm not going to wait for the first consumer, just to show you all how that works, and I'm going to use a pretty fast SSD type, and I'm actually giving it a specific IOPS value.
A
So it can ideally be blazing fast, and of course it will cost folks more. In short, developers using my platform will have the opportunity to request default-block storage or performance-block storage. Okay, cool. And Yucca says: more money for Blue Origin. Hey, at least it's going to something, so we'll get some more rockets in space out of it, I guess. And Yusef, you said: is there any limit for allowVolumeExpansion? That's a great question.
A
I don't know off the top of my head. It might just be an integer value; Kubernetes probably doesn't try to be really smart about that, because it would depend on the provider. So my guess is you can probably put some enormous value in, and then, worst-case scenario, when it goes to expand to your requested 90 terabytes or something (I don't know how big it can get, I'm sure even bigger than that), it'll pop back with a CSI provider error and say: hey, I can't go this high, I'm sorry.
A
So that's my best guess, but that's a great question, I'm not sure. And Steve says: makes Bezos richer. That's true. I think he's got enough money; we don't need to give him more EBS income. Okay, so let's do this thing. We've got the storage classes, let's go ahead and apply them: storage-class.yaml. All right, we've created two storage classes. So if we kubectl get storageclass, like if I'm a developer going in and asking what storage options are available to me, here they are.
A
Let's make this smaller. I have got the option for default-block, I've got the option for performance-block, I can see information about the volume binding mode, I can see info about whether it allows expansion, and I can see the reclaim policy, all of that. So all of that's fine and dandy. Now, here's an interesting thing that happens under the hood that you all might be interested in. When we deployed the EBS system, those little CSI node objects, you remember those guys, these little things that are talking to the kubelet?
A
They actually went through a flow that the kubelet calls something like node plugin registration; there's an interface for it. And what that means, at a deeper Kubernetes level, is that an object actually got created from the kubelet itself.
A
So if we do kubectl get csinode, you can see that this CSINode object got created as well, and this is a really interesting API. Again, the kubelet got registered with by the CSI node, and then it created these objects inside of the Kubernetes system. So if we do a kubectl explain csinode: okay, we've got a couple interesting fields here. CSINode holds the information about all the CSI drivers, so it told us things like how many CSI drivers there were.
A
If we go into the spec for a moment and explain, you can see info about the drivers; I actually wanted to go one level deeper, I think. Okay, and then inside of here there are a couple interesting fields that we can read about. One of those fields is the topology keys, which is going to give information, from the CSI driver, about what key can be propagated on the node to be able to know where things should be scheduled.
A
What can happen here, right? If you run your workers across AZ1, AZ2, AZ3, and you need to attach to an EBS volume, what do you think? I think Steve's having a flashback here. So yeah, Walid, exactly: the EBS volume we provision is bound to an availability zone. Yep, just like Noel said, it is bound to a specific AZ by default. Now, there are a bunch of things we can do to try to mitigate this, but let's just assume that's the default behavior. And if we do kubectl get nodes...
A
Actually, you know, let's kubectl get csinode for a second here. Okay, and let's grab the YAML from one of these. So: get csinode, we'll grab this, we'll put it through here, and we'll do a quick yq for color and also to start at the spec, because the managedFields thing is driving me a little bit crazy. What is that? drivers... kubectl csinodes... isn't that? Oh, I'm not doing... oh yeah, well, that'd be helpful. So: spec. Oh, -o yaml, okay, we'll do -o yaml there.
A
It is. So this is the spec on one of those CSINodes, all right, and there are a couple things that are important here. Allocatable: 25. That's saying, hey, on this node, for this CSI driver, you're only allowed to have up to 25 EBS volumes that you allocate.
A
I've started using yq religiously, because I just can't stand all those managedFields, and of course they're in metadata, so it's not going to help me at all here, but all those managedFields just drive me bonkers. Okay, so we've got that, we've got this. Let's see, where's the annotation I'm looking for... managed... there it is, up top, check this out.
A
So what we've done is we've propagated the CSI node ID, as you can see right up here, and then we've also propagated, or should have propagated, the topology. Interesting, why is the topology not propagated? That's super weird. I think by default it does that; I could be wrong, though. Interesting.
A
Oh, there's a krew plugin that helps me with that. Well, we'll come back to the annotations. I don't know why it would do that, but nonetheless, hey, at least the node ID showed up, I'll take that. So, oh, it comes in as a label. Dennis, I think you're right, is that what it is? Let's see: label, label, label. So that was annotations; labels.
A
Yes, good call, there it is, yes, it's in labels. So you can see here that the zone for CSI is right here, us-west-2a, and that is how it's seeing it. So I shouldn't have stayed stuck in the annotations realm, but good call, thanks for that, Dennis. Cool, yeah, let's check out that plugin, Steve. The managedFields thing is just, like, I feel like I'm scrolling through five pages of YAML before I find the spec half the time. So anyways, all right, let's get this thing wired up and try it out.
A
We've got the storage class, and we've learned a little bit about the CSINode object that propagates all that good stuff. I think we're pretty much good to start creating some PVCs. So, okay, interesting. Here's another quiz question, everyone. I've got this PVC right here, so let's do this: pvc.yaml. All right, and I've got two PVCs in place that I pre-created to save some time at the end of TGIK.
A
Nothing too crazy here, but notice that I'm not making a volume. That's an important detail, right? I'm not making a volume. One of these is going to use default-block, one of these is going to use performance-block, and both are going to be in file system mode. Okay, now a question for you all; this will be really about how much you were paying attention.
A
When I apply these two persistent volume claims, one of them is going to do something and the other one won't. Does anyone know? I know my class names suck, they're not that interesting. Do y'all know what's going to happen here? So think about this: when I deploy these two, there are going to be two different behaviors from the CSI driver.
A
What do you think? And Walid, it's actually not quite that. You are right that eventually one of them will be set up as an HDD and one of them will be set up as an SSD, but something slightly different. Morteza, you got it, and Noel, right above you: one of them is going to wait for the first consumer, which means in this flow the PV should not get created for both until we have a pod that is going to claim it or bind to it.
A
So let's check it out. If we save these up, and let's do a quick watch for kubectl get... what should we look at here? Let's look at, I guess, just PVCs and PVs, that's pretty good. All right, so I'm looking at PVCs, I'm looking at PVs, I've got nothing. So up here we're going to kubectl apply the pvc.yaml, and let's watch this bottom buffer. Two PVCs were created, they're both pending, nothing happening, nothing happening, and watch, it's probably going to break on me and we'll have to troubleshoot.
A
I guess it's going to wrap, so I'm not going to be able to make it look real good. Anyways: pvc-0, default-block, pending; pvc-1, performance-block, not pending, exactly for the reason y'all said. The SSD class is not set up for WaitForFirstConsumer; the default-block class is, and we can again look at that by saying kubectl get storageclass, and we can see WaitForFirstConsumer versus Immediate. Now, as we talked about, I am further funding AWS right now for no good reason, because if I go into volumes in my Elastic Block Store section here, and I look for things that have a CSI volume name, okay, you can see that one has been created, and I actually have two from an older cluster.
A
Let's get rid of those before I forget and they cause me heartache. But boom, we've got a PV, okay. So it's available, it's there. The hard-drive disk is not quite there yet, of course. So let's see if we can get these things scheduled and working, okay, and we'll play around with a couple things and wrap up on that.
A
So the PVC is there, we've got the claim. Now I think we need a pod, so let's go ahead and start looking at some pods. I've got app.yaml here, another lovely nginx demo; next time I'll do WordPress, I promise. So: nginx, nginx, our persistent volume claim is going to be pvc, and we're going to put this one inside of /var/lib/db. Now, as we talked about, this should go out to AWS when it sees it; in this case it should create the volume, because it doesn't exist.
A
It should also do something else, because once the app is bound, it's not just going to set up the volume. What's the next thing it's going to do? There's another operation, once the app is known, that it's going to do regarding the node inside of AWS. What do you think? Yeah, and actually, Steve, what's funny about the 500-gig volume is that I learned this the hard way forever ago: with the hard-drive disk...
A
I think you have to have at least 500 gigabytes to use it. That's why the limit's so high. Feel free to correct me if I'm wrong there, but I think there's some interesting thing there if you're not using gp2 or something. All right, so let's apply the app. Walid says: schedule the app, schedule it to the node. Yup, yup, and the other thing it's going to do is attach it to the node, right?
A
So if we watch this, there's an object that's going to represent the attaching, because there are kind of three steps: there's making the volume exist, there's attaching the volume to a node, and then, of course, there is mounting it at a lower level and making it actually available for applications to use. All right, so let's see if we can make this happen. We'll apply the app YAML.
A
We can see the PVC is now bound, so it actually does have a volume backing it. And you can also see, what I love about volume attachments is that you can use the API to see what node it got attached to. So the VolumeAttachment object is here, and we can see that it should have been attached to the node whose IP ends in 18. Okay, so if we go in and refresh our list of CSI volumes now, you can see... oh, this is an old one too, isn't it? Okay, anyways.
A
I know the in-use one is my new one, and you can see here that we are attached to this directory, but specifically to this node. So if we click on this and look at the IP, since that's what Kubernetes has in the node name, you can see that this is number 18, which matches up with what we've got here. So we've attached it, we've put the volume in place, and now, ideally, the pod has gotten it as well. Pretty cool stuff, right?
A
Now, here's something I'll show you that I think is pretty neat. We know that this attached to... what node is this? It's 18, right, the end of the IP is 18. Yeah, 18. Okay, so check this out. If we do a kubectl get pods for the namespace kube-system, and we'll get rid of this temporarily and make it clean...
A
We know that the CSI node... and you know what, I'm going to make the text real small here, everyone, just for one quick thing. Okay, so let's just make it crazy small; don't worry, I'm not showing anything crazy. So 18 is this node, which means that this is in fact the CSI node instance that's running. So let's make it real big again, okay, and then let's do kubectl exec, and we will do -it into this node, and we will do /bin/bash... wait, no, I don't need...
A
Okay, and I'll make it a bit smaller so I can find it. But what we'll see here, towards the top of the logs, okay, is that it ran... check this out. Okay, so NodeStageVolume got called right here, all right. When NodeStageVolume got called, what did it do? Well, the volume was attached, of course, but it didn't have a file system on it, right? So it sets up the target path.
A
It gets everything initially mounted up, and then, if you look closely, once it finds the device path, it's actually going to format that particular location on the host. It's going to verify whether the file system is there, and then it actually goes in and sets up the file system. In our case, by default, it's ext4, and at a low level that's what it's doing. So the kubelet's like, hey...
A
I really need you to make this thing a file system that's usable, go ahead and set it up. And then, if we go even further, when the pod got attached inside of here, which is what we call as part of NodePublishVolume, we can actually see, if you look closely, this is where the mount for that volume got brought into... and actually, here it is mounting, yeah... got brought into the actual pod itself and got bound, so we can start using it. So all this stuff is nothing crazy, right?
A
This isn't some really weird Kubernetes-specific thing: the volume got attached, the node part of the CSI driver said, hey, I need you to have a file system, and then it mounted it to the pod, and boom, now we've got storage there, just like that. And if we go in and look at our pod real quick to kind of wrap this up: kubectl get pod, and I think nginx was right here, right, so we'll go ahead and take a quick look at that.
A
So: kubectl exec, and we will go into it here, for this nginx, /bin/bash, and then right inside of /var/lib/db, which I believe is the same directory I used before, I can once again echo in and say hello tgik, just like that. So we'll call this db.txt, and that is the integration with the CSI driver, pretty cool. And we could go in and request the fast storage, same idea. We now have the ability, and this is what's really compelling about this kind of thing, to move the volume around.
A
So, in theory... you know, just because we're running a little bit light on time, I'll keep this example short and sweet. Okay, if I do kubectl get node now... obviously it'd be a lot cooler if I had the time to show you a node-failure condition, but let's just assume we're draining a node for a moment. Okay, so let's see if we can watch this happen. We'll do kubectl get pod, get volumeattachment... I'll start with volumeattachments.
A
We will get pods, we will get PVs, okay, so those are all the things that we'll get here. We'll watch this, okay, and make that just a bit smaller, and let's go ahead and drain a node. In our case we know we're running on node 18, so we'll do a kubectl drain and we'll drain this node. Now, there's a lot that can go wrong here, I admit, but I'm hoping we'll be able to prove an interesting model. So the pod is dead, okay, that got drained; the container is now creating.
A
My hope is we'll actually see the volume attachment update itself, and in our case we should see it eventually update to number 46, because that should be the new node that the volume goes to, and, in short, the container gets to keep its storage. We go down and, oh, check it out, we're already there: we're volume-attached on 46.
A
The container is hopefully going to start in a moment here, if all goes well... one thing that I... actually, it just started right then. Okay, this is great. So the pod is right here; we've moved to a completely different node. Let's now go into that one more time, so we'll do the exec command from before, pretty cool, right: exec, /bin/bash. Of course, it'll be a different pod name here, so let's kubectl get pods.
A
Let's grab this. Walid, you said: can we see the logs? Hopefully not... oh, great point, Walid, we'll check that out. So let's exec, and let's go ahead and bring this in, and we will paste the pod in; we're now in. If we go into /var/lib/db, and here's our grand finale for TGIK today, if we look in db.txt: there we go. Pretty freaking cool. Our pod has completely moved to a different node.
A
The storage has entirely come with it, and we have this area where we can now go in and schedule different stuff and have different classes. And if you look at the bigger picture for a moment, we've been really in the weeds about the details, but what we just implemented here is effectively self-service storage.
A
We can give clients of our platform the ability to choose storage they like, with different implications based on cost if we have a chargeback model. So it's a really cool way to manage all of this stuff. And again, maybe we want more than one CSI driver; maybe we want EFS to be attached in here too, and another provider. We have really boundless possibilities, and, like Walid said, this really is kind of a very pure form of software-defined storage.
A
So, to your point, Walid, let's finish up today by checking out the logs one more time, because I think that's a really good call-out. I'm going to have to make my text small one more time. So if we kubectl get pods for the namespace kube-system, okay, and do a -o wide, which is what I'm looking for. And Noel, you said volume snapshotting; I totally agree. Here's my thought: we should do a storage-disaster episode of TGIK.
A
What do you all think? Something where we talk about snapshotting, we talk about all these things, and we try to just blow up entire parts of the cluster and see if we can recover. And I'm not so much interested in etcd recovery in this case; I'm thinking more of blowing up parts of the cluster where our workloads are impacted, and seeing how snapshotting and recovery and all that stuff work with the CSI driver, and how we can integrate that with Velero.
A
That would be a really, really freaking cool episode, I think. If you all agree, let me know, but I think that would be really cool. So we've got 46; that's this CSI node here. Walid, let's check it out again, so I'll zoom in one more time. I will go ahead and exec... this time I'm going to exec into something that's in the namespace kube-system. It is called this, and it is in... I keep saying exec.
A
Why am I getting exec and logs mixed up today? Such an obscure thing to get mixed up. And then the container is going to be the ebs-plugin, okay, and then, like you said, Walid, let's see. Obviously there's going to be a lot in the logs. Let's see if we can tell: the file-system formatting and mounting, checking, attempting to mount disk, called with args, called with args, NodePublishVolume, NodeStageVolume.
A
I can't remember what the old log message was exactly, but I'm pretty sure, and I know for a fact, it should not be formatting it, because we went in and we saw it. There is one, you can see it, good. Steve said he does that all the time; you did it today too, yeah. I mean, it's a great practice to just kind of blow stuff up like that and mess around with it.
A
One of the things that we didn't talk about is resizing, but it's not actually that intense. How resizing works is basically: if you go in and you change the size of your PVC, like if you go from 500 gigabytes to 550 gigabytes, the resizer controller will see that change and it will send it right to the CSI driver, and the CSI driver will, basically, if the CSI driver were a human, go in and say: oh cool, you want to resize this thing? No problem, let me go into volumes for you.
A
Let me go ahead and, this is how the CSI driver talks, by the way, let me go into the volume, let me just modify this for you, and then put it at 550, and that's essentially what the resizer does. Now, there are some interesting implications here: your block size will expand, but it might take, and I've seen this take, a couple minutes. I don't know if it was just my test.
A
You need to give some runway for your file system to expand too, which I believe the CSI node will do automatically, but don't take my word on that; you'll have to check it out. Oh, Steve, you meant exec versus logs, yeah. I know, it's like either I'm typing in get logs or I'm typing in exec, but I'm never seemingly typing in what I want, which is a very unfortunate place to be. And Walid said: you still need to extend the file system. Yeah.
A
Maybe it doesn't do it automatically and you've got to go in and do it. I thought the CSI node had the ability to do that, but I honestly can't remember. So, all right, y'all, we covered an insane amount of stuff. We implemented local storage, we implemented persistent volumes, we drove claims into them, and we created a fully self-service storage system on top of EBS.
A
We consumed the storage, and we saw the inner workings of how all the stuff flows in the architecture of the controller plugin and the node plugin. If you are watching this and you do not use EBS, a lot of the drivers are implemented just like this, for example the vSphere driver, so know that what you've learned is generally universally applicable, and I hope you had a good time. This was a really fun episode.
A
I loved it, and I can't wait to see what extra stuff we do with storage now, like the disaster idea. Be sure, someone, to put that in GitHub; if you're still here, don't let us forget that. I'd love to look at OpenEBS, I'd love to look at a lot of cool stuff now that we've kind of opened that door.
A
So, oh, you love the diagram, good deal. Well, the diagram will be in the GitHub repo too, so look for that, Walid, I'll be sure to upload it. All right, y'all, this was an awesome episode. Thank you so much for joining us. We'll have someone on next week, be it myself, Joe, Cora, Paul, someone, but have an amazing weekend, everyone, and we will see you very soon. Later!