From YouTube: TGI Kubernetes 120: CSI and Secrets!
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
Hey everybody, welcome to TGIK, episode number 120. Good to see you all out there, hope everything's going well. Let's see how everybody's doing in our chat. We've got a few people, some of whom signed in pretty early. Today we got some folks — we got Pratik saying hello from India, good to see you. We have Paul saying hello back from California, Lomatin checking in, good to see you, Mr. Lew Matty. We got David Michael — say hello, TJ, good to see you — David Michael and Rory and Liam from the UK. We got Seve from Istanbul, got Mr. Josh Rosso checking in, and Lachlan is gonna be joining us again. I hope that we'll also see Rita today, because we're gonna be exploring some of the work again that they did. So it's two episodes kind of back to back where we're gonna be talking about some of the amazing work that those folks have been working on, which will be awesome.
We got Mark Martin checking in, we got Harry from Rotterdam, we got Anish saying hello, and Jeromy Pruitt — this is somebody I used to work with at Juniper way back in the day, it's good to see you, Jeremy. Sebastian from Hungary, Luca from Helsinki, and Ansel saying hello, good to see you all, and Simone from Italy — I don't actually know how to say that in Italian, but I'll just say hello, good to see you. Scotty Ray — Mr. Pence from Pensacola, Florida. Scotty is one of the people that has actually ridden a motorcycle, I think, all the way across the country and back a couple of times, so shout out for that — that's an intense ride. Rita has joined us, that's awesome. We got Bochy Han from Turkey, and Continue from DC, and Olaf saying hello, Mr. Pedro Acosta from Scotland, and Christian from Germany. Alright, we got lots of people joining us today from all over the world once again — awesome to see you all.
So in their documentation they present this on minikube. We're gonna be exploring it on kind, because you know how I'm all about that kind of stuff, but we'll also be digging in a little bit and seeing kind of how the pieces work and what some of the differences are in the way that it works. I am in the backyard again — you're absolutely right.
A
You
know
it's
great
to
be
out,
so
it's
great
to
be
outdoors,
but
also
like
my
home,
was
not
really
set
up
as
an
office
space,
and
so
it's
like
I
look
for
pretty
much
any
excuse
to
get
the
heck
out
the
house
yes
and
take
you
again,
thank
you
again
for
all
of
your
for
all
of
the
folks
who
contribute
upstream,
whether
docks
or
or
our
applications
or
any
of
that
stuff.
It's
it
really.
A
A
A
There was a note on this in the dev mailing list, but I don't remember exactly what the detail was, so if you're curious about that, maybe George can put up a link to it. Sascha Grunert has done a data analysis on PRs and issues in Kubernetes — let's check that out, that'll be kind of interesting. This is hosted on the Kubernetes blog, and George told me about this one, but I didn't get a chance to check it out, so I'm actually kind of curious.
So this is the story of data-sciencing 90,000 GitHub issues and pull requests using Kubeflow, TensorFlow, Prow, and a fully automated CI/CD pipeline. Fascinating. Choosing the right steps when working: getting the data — they got raw data from the GitHub API, and I'm sure they were probably trying to throttle you on the REST API. "We exported roughly 90,000 issues and pull requests in the first iteration into a massive 650-megabyte data blob." That is a huge amount of data.
This took — I mean, it's not a huge amount of data when you think about it, like it's just 650 megabytes, but think about the content, right? We're just talking about little pieces of text, which are not a megabyte each, I promise you. So really, that is a lot of data. That's a lot of text, structured or not. "This took me about eight hours of data retrieval time, because GitHub rate-limited me." That makes sense.
Digging in to what's happening here — there's some good detail on what happened to get the data set built. Mr. Joe Beda gets a callout here: he created the first GitHub issue, mentioning that the unit test coverage was too low. The issue has no further description other than the title, and no enhanced labeling applied like we know from more recent issues and pull requests. But now we have to explore the export even more to do something interesting — so some of the structured data wasn't structured in a consistent way, and exploring the data...
It's a pretty fascinating graph: created versus closed PRs over time. A lot of PRs are languishing there — you can kind of see that, that jumps right out at you. Labels, labels, labels. I'd love to see this one over time too, because in reality I think that label stuff is relatively new in the lifecycle of the project — it's not everywhere, and certainly wasn't always. Label usage by name for PRs: lots of lgtms.
That makes sense. Lots of cncf-cla labels, which are — you know, there's a bot on the GitHub repository to ensure that you have signed off on the CNCF Contributor License Agreement. Neat. It's probably release-note-none, but whatever. Lots of approved, lots of sizing. This is pretty cool — this is all just basically bot labels; most of these are bot labels.
Yeah, so if you're interested in this, this is actually a pretty fascinating thing — feel free to dig more into that as well. Someone asked: why is everyone moving to operators? I don't really want to get into the Reddit article so much, but I am interested in this comment on it. It was from a gentleman named Matt Butcher, who's a principal software engineer, and in this article I think he tries to get at it.
Right — operators are meant to have a reconciliation loop that will follow over time and give you the ability to codify things like operational knowledge about how to operate particular applications, or stateful applications, over time. Their responsibility is to watch those applications for health and understand how to fix them when things break. Whereas Helm is a package manager, right? Its goal is to provide you a tool that allows you to install things — and we're going to be using Helm today when we start playing with the secret store and some operator stuff — but when Helm installs a thing, it installs that thing and it records what it did, of course, but it doesn't stay around and reconcile the state of that installed thing over time.
It hands that off, right? That's the job now of your target environment — Kubernetes is responsible for managing that stuff over time. And for me, that's the distinct difference between these two things. This is where I think the difference is really succinct, you know: operators are trying to solve a different problem. So, great article by Mr. Matt Butcher — definitely check that out. Let me know what you think in the comments.
Cloud native ecosystem: there is a cloud native party happening on the 2nd of June. I know that we all need a little party sometime, and it's definitely been a while since a bunch of us were able to meet in person. So if you have some time on the 2nd of June — they're really trying to make sure that you have the opportunity to come and be a part of this, so look at the time frame, right? Pacific time it's 5 a.m. to 5 p.m.
Hopefully somewhere in that cycle, wherever you are in the world, you will find maybe a little bit of time to jump in and catch a session, or maybe hit the hallway track and say hello. My good friend Steven Augustus is going to be emceeing this, along with some other folks, and the presenters look tremendous. So Steven and Cheryl are going to be emceeing this, and then we have quite a few other speakers — we have folks from different companies talking about the different tooling that they're building.
There's just a lot of really good content on here, with a lot of really amazing folks, so definitely check this one out. This is actually a project I find fascinating: Krustlet, a WebAssembly Kubernetes kubelet written in Rust, which is fascinating. So if you're interested in that kind of thing, definitely check out Mr. Roffe's Krustlet talk — that'll be a really good one. And then this one — this is the last part I'm going to call out on the list here.
It's open all day — it's open for 12 hours that day — and I hope that you'll have some time to join. It's a free registration; it should be a good time. Webinars this week: we got an Octant webinar on June 3rd — that'll be the day after this cloud native party thing — and we got a Cluster API webinar coming up on June 11th.
This is a webinar talking about Kubernetes stuff — so Kubernetes is about six years old, the most loved infrastructure is YAML, so basically an as-code sensibility. I think we talked about a lot of these things in previous episodes, but if you're interested in joining that conversation, definitely check out the webinar. I'm shooting through this stuff because I'm trying to get to the show notes. Alright, so show notes, here we go: using the CSI secret store to enable an ingress controller with TLS. Well, that'll be interesting. Alright, so how are we doing in the chat? Everybody saying hello?
We got some folks — Dusted or Forgot, abc123, saying hello. We got Muzaffer saying hello as well, and Mr. Craig Peters saying hey, Subrata from Virginia, and my friend Joe from West Virginia. I thought — have you always been in West Virginia, Joe? I thought you were, like, in the city. G saying hello, and Suresh saying hello from Hamburg — good to see you, Suresh. And a plus-one for a WebAssembly episode — oh, that would be amazing. I don't know enough about WebAssembly to actually tackle that one, but that would be a really good one.
A really good one, for sure. Sanjay Singh, hello from Basel, Switzerland. And — I'm not gonna try to slaughter your name — hello from Phoenix, Arizona, good to see you, Mr. Hank Burke. And Paul Bauer says hello from Australia, good to see you. And AJ says hello from San Jose, and Morteza says hello from Tehran. "Always been in West Virginia since we worked together, but only 90 minutes from DC." Wow, okay, yeah — for some reason I thought you were closer to San Francisco, but I think that's just because I run into you sometimes. All right.
So I wanted to go into kind of what CSI is. CSI, by its definition, is the Container Storage Interface, and the idea of CSI — which I think is a pretty big scope, but a pretty awesome scope — is to provide an interface to improve not only the security of how we attach volumes and those sorts of things to containers or pods inside of Kubernetes, but also to provide a little bit more flexibility in the model that that uses, right? So, traditionally, inside of a stock Kubernetes cluster...
So let's just take a look at that real quick: kubectl get pods, and there's the local-path-storage stuff — I waited for those things to register. But there is a dynamic storage provisioner within kind already, and I kind of got into this in a recent post on mauilion.dev. So if you're interested in understanding more about storage options in kind, you can actually check out these top two articles — "kind: persistent volumes" gets into it, and I break down the default storage class that's provided within kind inside of that article.
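As a rough sketch of what that default provisioner gives you — assuming the kind defaults, which are a StorageClass named `standard` backed by the local-path provisioner — a claim like this is enough to exercise it:

```yaml
# A minimal PVC against kind's default "standard" StorageClass
# (local-path provisioner): the backing directory on the node is
# created the first time a pod consumes the claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
```

After `kubectl apply` and once a pod binds the claim, `kubectl get pv` shows the dynamically provisioned volume.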
Before the CSI stuff happened, what we're looking at here was still happening, right? So all of this is not related to CSI at all. This is actually just following the existing pattern within Kubernetes for volumes, in which you would have a dynamic storage provisioner either in-tree — provided by maybe the controller manager or something like that — or out-of-tree, wherein you have to define that external storage provisioner, and that might be defined by your cloud integration provider.
So if you are hosting your Kubernetes cluster in AWS and you've turned on the cloud integration provider for AWS, then you will probably already have a storage class — a default storage class — provided, and when you create a new persistent volume, that persistent volume will be taken from EBS volumes inside of your particular AWS account, right? And that's kind of one of the mechanisms and the way that that works — and this is sort of what came before CSI.
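For reference, that pre-CSI, in-tree AWS setup typically looks roughly like this — `kubernetes.io/aws-ebs` is the legacy in-tree provisioner, and the class name `gp2` is a common convention, not something specific to this talk:

```yaml
# Legacy in-tree EBS StorageClass: PVCs against this class are
# satisfied by EBS volumes created in the cluster's AWS account.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
```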
And if we do crictl ps now — a little tip here: in kind we use containerd inside of the kind node, and so I have to use crictl commands to actually interact with those things that are running inside of our environment, right? And so that's why you'll see me running the crictl commands. So we should be able to see our...
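For reference, these are the crictl equivalents of the docker commands used in the demo (run from inside the kind node, e.g. after `docker exec -it <node-name> bash`; the IDs are placeholders you'd take from the listings):

```shell
# List pod sandboxes and containers known to containerd via the CRI
crictl pods
crictl ps -a

# Inspect a specific container or pod sandbox, analogous to
# `docker inspect`
crictl inspect <container-id>
crictl inspectp <pod-sandbox-id>
```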
Oh, it's called "name" — sorry, that's where I was messed up. We'll see it in crictl pods — yes, okay. So for some reason, whatever reason, I actually named the container "name", which is a terrible name for anything, so fair. This is our container here, and so, just like with docker, we can do inspect, right, and we can look at the container or the pod. But let's start with our container, and I'm gonna...
...look at our pod. And the way this stuff maps up, when you attach a volume to a container, is that we're going to see that volume show up — displayed, I mean — when we look at the pod specifically. So here is our PVC: this is the actual attachment point for the volume that we created, and one of the interesting things that's happening here...
...is that the way the kubelet handles this particular attach is that it attaches this directly to the node, and then presents that volume as an attachment to the pod. Okay — that means that the kubelet is gonna be able to see all of those volumes attached, and the pod is only going to be able to see those volumes that are expressed to that pod. Does that make sense? Right now I only have just the one pod, one node, to keep things simple and somewhat clear, I hope.
PVC — so there's our file, right? So now, on the underlying host, inside of the node — I SSHed into the node, put this file there, called it this, and when I connected to the container I saw it show up. And that means that the host has read-write access to all the volumes that the container has, and that's actually how the attachment is happening.
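You can reproduce that observation on any node — the kubelet keeps each pod's volume mounts under `/var/lib/kubelet/pods/<pod-uid>/volumes/`, so a file written there from the host shows up inside the container (the UID, volume path, and mount path below are placeholders you'd find on your own node):

```shell
# From inside the node: every pod volume the kubelet manages lives
# under this tree, readable and writable by the host.
ls /var/lib/kubelet/pods/*/volumes/

# Writing a file into one of those volume directories on the host...
touch /var/lib/kubelet/pods/<pod-uid>/volumes/<plugin>/<pv-name>/hello-from-host

# ...is immediately visible at the container's mount path:
kubectl exec <pod-name> -- ls <mount-path>
```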
Now, the reason I'm giving you this background a little bit is because this is actually one of the areas where I think CSI really shows up, right — in that, instead of handling it in this way explicitly, we could actually handle this in a different way. We could make it so that we're expressing a particular volume attachment directly to the running container, rather than directly to the underlying host and then mounting it in.
So, in our case, when we created this deployment, we also defined a persistent volume claim, and that persistent volume claim triggered the storage provider to say: make a new directory and associate that directory, as a local storage connector, with a particular persistent volume. And if we look at the persistent volume, we can see some pretty interesting information, right?
So when you define a pod, it's going to attach to this particular persistent volume, and then node affinity kicks in: before we can schedule that pod, we have to determine that the pod can be scheduled in the same place that the volume is located, and that's actually how that part of the magic works, right?
So in our case we're actually keying on hostname, but if you were using AWS, for example, you might instead be keying on a particular availability zone, because storage doesn't traverse availability zones — storage remains within the availability zone in which it was created, right? And so, in this way, we can express node affinity with a persistent volume to ensure that, when we're selecting the node, or determining what nodes to consider, we are only considering nodes that have access to that particular volume.
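That hostname keying looks roughly like this on the PV object — the `spec.nodeAffinity` fields are the real schema, while the node name and path are illustrative; on AWS you'd key on `topology.kubernetes.io/zone` instead:

```yaml
# A local PV pinned to one node: the scheduler only considers nodes
# matching this selector when placing pods that claim it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /var/local-path-provisioner/pv-demo
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - kind-control-plane
```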
And yes, Muzaffer, you are correct — the actual mounting is happening at the host and not in the container, exactly. But that might be different with CSI, and that's actually where I think CSI gets pretty interesting. All right, I know that I've been throwing a lot of data at you. I hope this is going pretty well, but I think this is kind of important stuff. So this is sort of what we've talked about so far — and everything that we've talked about so far has nothing at all to do with CSI.
So CSI's goal is to provide a standard for exposing arbitrary block and file storage systems to containerized workloads — not tied tightly to things that are just Kubernetes. Just like CNI can be used across different things, CSI is also trying to be that sort of thing, right? It wants to basically provide a generic interface that you can use whenever you're dealing with containers. Kubernetes obviously does, and so it's a great consumer of that. It's been around a while.
It went GA in v1.13. There's some great content in here if you're interested in understanding more about the history — it gets into the design doc and some of the recommended mechanisms if you want to develop a CSI driver for Kubernetes. If this is something that is on your, you know, bucket list, then this is a great place to start. It's got a lot of really good documentation for what all of the different components of a CSI driver might look like.
One of the features expressed by the CSI spec — so it's kind of built into the spec here — is the idea of being able to solve secrets in a different way, right? So the CSI driver for secrets is here, and it kind of gets into how it's working and what kind of things we can do here. So this is actually a generic implementation that allows the CSI mechanism to provide a way to attach or inject secrets into a container.
That is meant to increase the security of it, right — to basically improve the security model for handling secrets — and they call that out in the documentation: we are handling sensitive information, and CSI drivers that accept secrets should handle this data carefully, because it may contain sensitive information. So they really get focused on the idea that this stuff has to be pretty reasonably implemented, because everybody that sees a secret is a potential attack vector for things that are trying to get a hold of those secrets.
...in which the parameters include the fast-storage provisioner secret keys. In this example, the external provisioner will fetch the Kubernetes secret object "fast-storage-provision-key" in the namespace "pd-ssd-credentials" and pass the credentials to the CSI driver named "csi-driver.team.example.com" in the CreateVolume CSI call. So in this way we can actually map a particular secret from some other entity back to the mount call that a particular volume is going to have. All volumes provisioned with this storage class will get the same secret.
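That example from the kubernetes-csi documentation looks roughly like this — the class, secret, namespace, and driver names are the docs' example values read out above, not something specific to this demo:

```yaml
# StorageClass-level secret: every volume provisioned from this class
# hands the same secret to the driver during CreateVolume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: csi-driver.team.example.com
parameters:
  type: pd-ssd
  csi.storage.k8s.io/provisioner-secret-name: fast-storage-provision-key
  csi.storage.k8s.io/provisioner-secret-namespace: pd-ssd-credentials
```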
So this is kind of a one-to-many thing: here's where you find the secret, and then, whenever you actually access this particular storage class by name, you're going to get that secret. And this gives you the ability to kind of rotate those secrets dynamically and all that good stuff. Per-volume secrets: in this example, the external provisioner will generate the name of a Kubernetes secret object and namespace for the NodePublishVolume CSI call, based on the PVC's namespace and annotations, at provision time.
This is neat, because it's a two-way communication, right? When we're defining the PVC, we're actually providing a hint to the provisioner, telling it where to find our secret and what parameters are necessary to understand about that secret, so that when the provider gets the call, it understands where to go and actually find that secret to present back. The first one was not dynamic — the first one was: here is where the secret is.
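The per-volume variant uses the templating syntax from the kubernetes-csi docs, where the secret name and namespace are resolved from each PVC at provision time — again, the class name and annotation key follow the docs' example:

```yaml
# Per-volume secret: the secret reference is templated from the PVC,
# so each claim can point the driver at its own credentials.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: csi-driver.team.example.com
parameters:
  type: pd-ssd
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.annotations['team.example.com/key']}
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}
```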
If anybody calls this particular storage class, give them this secret. And the second one, which is per-volume secrets, means: when I ask for a volume, I'm going to give you some hint, some context, about what secret I want, and I want you to be able to go and get that specific secret and bring it back to me in a secure way. That's a very cool difference between the two. And then down below here we have multiple operation secrets: a driver may support secret keys for multiple operations.
In this case, you can provide secret references for each operation. So here's the provisioner secret name and namespace, and here we're relating the node-publish secret name and namespace — this is the secret that it's actually related to. And I guess it's good in both: this is a combination of the first two examples, I believe that's right.
I like the idea of staging secrets, which is actually pretty cool. Lachlan, let me know if there's something in here that you would like me to cover a little bit more. And then node-stage secrets, which is the ability to stage secrets like we talked about before, and volume snapshot secrets — what is that going to do? The driver can facilitate handling secrets for the following operations — so you can take a snapshot of a volume.
There is some topology piece built into this. We got raw block volume attach: CSI doesn't provide a capability for querying block volumes, so Kubernetes will simply pass through requests for block volume creation to CSI plugins, and plugins are allowed to fail with InvalidArgument if they don't support block volumes. Kubernetes doesn't make any assumptions about which CSI plugins support block volumes and which don't, of course. So this is actually kind of highlighting another thing.
You can only express mount points — you can only express specific things to docker when it comes to attaching a particular volume or a particular device to a docker container. You can't just say "here is a device" and have the docker container format it for you and put XFS on there and make it all happy and then start up, right? That's kind of out of scope for docker itself. And so instead — oh, welcome, Steve.
Instead, that's actually kind of handed out to the CSI mechanism, or your dynamic storage provisioner, right? So, for example, in the AWS EBS case: when I have created a persistent volume claim, the call goes up to AWS and says "make me an EBS volume," and AWS creates the EBS volume, and then we determine where the pod will be scheduled based on fault domain or availability zone, and we schedule the pod there.
And then the kubelet makes a call to actually mount that volume onto that specific node, and in that process we have to make sure that there is a filesystem on that EBS volume. And then we express that volume up to the pod when it starts up — and this will be true for a number of different providers as well, right?
So many of these things are associated with other upstream projects, or other ways to attach, and each of these things, as we see them, has to have a filesystem on it before we can actually express it to our pod. And that's actually why it was done that way before, right? Something had to handle that action of putting a filesystem on it before attaching it to the pod, and typically that would happen right there, right before actually attaching it.
Okay, but this actually explicitly calls out in the spec here that this is something that we can handle with CSI, right? So when you're implementing that CSI driver, one of the things that you can do is make sure that a volume that is being attached has a filesystem on it, and you can handle all of that as part of your implementation.
Pod inline mount, volume expansion — being able to change things. There's also a lot of built-in stuff around snapshot and restore, which is really cool. Ephemeral inline volumes, which I think was relatively new a while ago — I haven't looked at this one in a bit, but this gives you the ability... there's the CSIInlineVolume feature gate, which I haven't enabled, so I guess we'll have to see. I think everything I need is actually already in v1.18 to play with this stuff.
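An inline (ephemeral) CSI volume — which is what the secrets-store driver relies on — looks roughly like this in a pod spec. The driver name `secrets-store.csi.k8s.io` and the `secretProviderClass` attribute come from the secrets-store-csi-driver documentation; the pod and class names are illustrative:

```yaml
# Inline CSI volume: defined directly in the pod spec rather than via
# a PV/PVC pair; the secrets-store driver mounts secrets read-only.
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
    - name: webapp
      image: nginx
      volumeMounts:
        - name: secrets-store-inline
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: vault-database
```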
They do highlight some of the challenges. Basically, you have to allow privileged containers — actually, I think this flag has been deprecated, so it could probably be removed from this documentation; it's no longer a viable note. Enabling mount propagation: another feature CSI depends on is mount propagation, which allows the sharing of volumes mounted by one container to other containers in the same pod, or even other pods on the same node.
This is the content to get this stuff rolling: we're gonna mount volumes — we're gonna mount Vault secrets through the Container Storage Interface, a CSI volume. So we're gonna see how far down the path we can get on this one. Does that all sound good to everybody? Everybody good with that? Yeah, I didn't think so.
I've been playing with Vault and stuff since, like, OpenStack days, and I remember being, you know, savagely burned by stuff with it back in the day. Unfortunately — or fortunately — I haven't actually explored it since, and it's something there's definitely a lot of interest in. So, cool — we're gonna walk through this, which is the documentation hosted on HashiCorp Learn, in which we're gonna use the CSI volume to mount Vault secrets into the pods. We're just gonna walk through this process.
We're gonna see how far we can get with it, and if we can get all the way through the whole thing, then we're also going to compare that existing volume model — with our default storage class and the Secret — and see how these things are different because we're leveraging CSI. Oh yeah, yeah, that was — that's good times. Alright, so let's move forward here. We're gonna see what the difference between those two mounts is. So we already got kubectl.
We're going to use a Vault Helm chart. One of the things I noticed about this, which is actually pretty cool, is that they require Helm 3 — they're not even providing a Helm chart if you don't have Helm 3, which is actually kind of neat. Vault manages secrets that are written to these mountable volumes. To provide these secrets, a single Vault server is required for this demonstration. Vault can be run in development mode to automatically handle initialization, unsealing, and setup of the key-value secrets engine.
So, although I do have — let's see, helm repo list — I do have HashiCorp's Helm chart in my repo set, I'm not actually pulling it from that; I'm pulling it directly from this archive. And the neat thing about that is that in your documentation you can be really explicit about what the configuration is, because you're gonna be pulling a very explicit archive of that Helm chart. You don't have to worry — we don't have to worry so much about whether the Helm chart has moved forward in time and the documentation has changed, and that's actually kind of neat. I think it's true that the older versions can support it, but I do like that. I perceive that there is a push toward supporting only Helm 3 for particular Helm charts, which is actually pretty cool. Yeah, so let's do our get pods, see if we got that going: kubectl get pods.
So this is basically enabling Vault to use service accounts as authentication tokens to access secrets, which is pretty cool. This actually just turned it on — we haven't configured it yet, so let's go ahead and configure it. Oh, neat. Okay, so there's actually a whole bunch of assumptions in this line. We should talk about it, because I think it's gonna be neat. Alright.
That is really cool — I like how they do that. But let's talk about it. So remember that right now I am exec'd into a pod, and the deployment of that pod was handled by the Helm chart, not by me. Because of that, they can make a bunch of different assumptions about how things are going to be configured, right? They can make the assumption that there is a service account token associated with this pod.
So if I look at that particular path — /var/run/secrets/kubernetes.io/serviceaccount/token — that's gonna be the token associated with the service account that this pod is operating as, and we could look at that here in just a moment as well. And then for the Kubernetes host, we're just going to use the internal KUBERNETES_PORT_443_TCP_ADDR, which in most cases is going to be your service CIDR plus one.
So in my case I think it's 10.96.0.1, but we'll look at it here in a second. And then the Kubernetes CA cert is also a part of the mechanism that gets mounted when you create a service account. So a service account actually puts in, like, three pieces of information — the namespace, your token, and the CA certificate — and it mounts them into every pod inside of your system by default.
But this is a JWT — a JWT token — for accessing the Kubernetes API, and to constrain the permissions of this particular token we have to understand what the permissions are of that particular service account. So I might show that real quick if there's some interest in it — let me know. "What is the @ symbol for?" Where do you see the @ symbol?
And I believe that that's a function of the Vault CLI command. It's a way that you can actually pass it content from a file. There are some different ways to do it — like, you know, file:// is another way — but basically it's just telling it that the content of this particular field can be found in the file at this particular path. Does that make sense?
We got that part done: the token reviewer JWT and the Kubernetes CA certificate reference files, written. Hey, they even document what these things are — that's awesome. And then, for the Kubernetes secrets store CSI driver to read secrets, it requires read permission on the mounts and access to the secret itself. So this is going to allow us — and actually, hold on.
You'll note that we're only giving read access. That means that, as the entity that has access to this, we will only be able to read the secret, not modify it, which is cool. So we've uploaded the policy for internal-app: anybody who references internal-app as their role — this policy will only allow read access.
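A Vault policy of that shape looks roughly like this — the policy name and secret path follow the Learn guide's pattern, so treat the exact path as illustrative:

```shell
# Write a read-only policy: anything bound to this policy can read the
# secret at this KV v2 path (note the extra /data/ element) and
# nothing else.
vault policy write internal-app - <<EOF
path "secret/data/db-pass" {
  capabilities = ["read"]
}
EOF
```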
A
So currently, the vault extension of the kubernetes secrets store only supports the KV Secrets engine. This extension verifies that the requested secret belongs to a supported engine by reading the mounted secrets engines, and the data of a KV v2 secret requires that — that's a typo, right — it requires that, after the mount, the additional path element of "data" is included. Finally, create a kubernetes authentication role named database that binds this policy with a kubernetes service account named secrets-store-csi-driver.
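Creating a role like that with the vault CLI would look roughly like this (the role and service account names are the ones read out above; the policy name and TTL are assumptions):

```shell
# Bind the read-only policy to the secrets-store-csi-driver
# service account in the default namespace, under a role
# named "database".
vault write auth/kubernetes/role/database \
    bound_service_account_names=secrets-store-csi-driver \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=20m
```

`bound_service_account_names` and `bound_service_account_namespaces` are the documented parameters of `auth/kubernetes/role/<name>`; together they restrict which service account tokens are allowed to log in under this role.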
A
This policy gives read access to the secret for this particular database that we created earlier, and then we've bound that particular policy to a service account named secrets-store-csi-driver, and we've associated that with the namespace default, which is interesting. So both of these two things allow us to constrain things in such a way that the service account, when trying to access vault and authenticate, is only authorized to access these particular secrets, and is only able to read them — not able to write them.
A
A
B
A
A
And what this lets me see is what type of permissions this particular service account has within kubernetes. So if I were to impersonate this — what I'm doing with the --as flag here is impersonating that service account and taking a look at the permissions that it has. I'm gonna make this a little smaller; I know that maybe makes it a little bit harder to read, but it's not going to make it impossible. And so we can look at the permissions that it has.
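The impersonation check being described can be sketched like this (the service account name is the one from this walkthrough; `--list` and `--as` are real kubectl flags):

```shell
# List everything this service account is allowed to do,
# by impersonating it with the --as flag.
kubectl auth can-i --list \
    --as=system:serviceaccount:default:secrets-store-csi-driver
```

The `system:serviceaccount:<namespace>:<name>` form is how a service account is spelled as an impersonatable user name.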
A
It has the ability to do selfsubjectaccessreviews and selfsubjectrulesreviews — basically, to understand what capabilities it has. It has the generic discovery stuff turned on, so it can hit healthz and livez and openapi and get information about the API server. And this is actually the one role that we're explicitly allowing it, which is a little bit different: we're allowing it token review access, right.
A
So if somebody gives this vault a token to review — like our application: when the CSI driver tries to authenticate to vault to get that secret, the way vault can determine that that is a trusted service account, or a trusted token, is by going through that token review process. And this API is actually exposed by kubernetes; it gives us that ability to say: okay, I've received a token — is this a viable token? And the kubernetes api server will say yes or no.
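The TokenReview API being described is a real kubernetes API; the request Vault sends looks roughly like this (the token value is a placeholder):

```yaml
# POSTed to /apis/authentication.k8s.io/v1/tokenreviews.
# The API server answers with status.authenticated: true/false,
# plus the authenticated user info if the token is valid.
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<service-account-jwt-to-validate>"
```

This is why the service account's RBAC role grants `create` on tokenreviews: that is the one write-ish permission the reviewer needs.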
A
A
What can this authenticated token do? That's where we get into the policy part, right — so that gives us the authorization piece to go with authentication. Once the token is authenticated via this token review process, then we can look at it and say: okay, well, within vault's RBAC system, that authenticated token has the ability to read these particular secrets in this particular context. I said this wasn't about vault, but you know how it is — we're gonna get into it anyway.
A
Let's do our checkout of the secrets-store CSI driver. Now again, I started this episode by talking about the fact that the secrets-store CSI driver is upstream, inside of the kubernetes project, right — this isn't vault-specific. Sorry about the dog; there's no way for me to quiet him down. But because of the checkout, you can understand that this is upstream code. This is not code that is part of vault.
A
B
A
Oh, you probably should go with the actual short name for this stuff, because likely, if there are any libraries or anything else inside of here, they're gonna use that path — it's not going to use github to do it. So, one other side note; just cleaning things up. Okay, and then we're going to move into the secrets-store driver, and we've got it checked out.
B
A
So this is the chart that's actually going to deploy our CSI storage stuff, and we can see that there are a few things that are happening, right. We're defining that service account that we talked about earlier, and we're putting it in the default namespace. So this is the name of the service account that will be used to authenticate to vault to go get secrets.
A
We are defining a custom resource definition, so there'll be actual CRDs that are registered with our cluster — we'll look at them here in a minute as well. It looks like they're using kubebuilder to define this. You have a SecretProviderClass kind being defined, and it uses the OpenAPI v3 schema stuff. So we can actually be pretty confident that the fields that are necessary will be populated, and if they aren't populated, it will not allow the creation of that custom resource object.
A
We talked a little bit about that part of CRDs before, in a previous episode, but yeah — having an OpenAPI spec is actually pretty cool. It also gives us the ability to do things like — so here's the API resources; we can see all of the CRDs that are being created. So here's our secretproviderclasses that were part of that CRD. But then, as of — I think it's like 1.16 or 1.17...
A
A
Yes, okay — so, storage... oh, we can actually see kubectl explain content for what a CSINode is, right. We can see kubectl explain content for a lot of these things, and it really means, as a platform operator, you have one place to go and look at the definition of what those APIs mean and how they can be used. Pretty darn cool stuff. Yeah, that's right, yeah, it's very fun — I am a sucker for this stuff, all right.
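The discovery commands being shown are standard kubectl; the resource names below are the ones mentioned in the walkthrough:

```shell
# List every registered API resource, including CRDs
# such as secretproviderclasses.
kubectl api-resources

# Print the server-side documentation generated from the
# OpenAPI v3 schema for a given kind.
kubectl explain csinode
kubectl explain secretproviderclass
```

This is the "one place to go look" point: `kubectl explain` renders the same schema the API server uses for validation.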
A
So, let's move on here. Let's see if we get our pods — I suspect we will. So we have our CSI store driver running; notice that there are two of them. So let's do a -o wide, and we can see that the reason there are two of them is because there's one running on each worker, right. So this CSI driver piece is actually — I believe this is a — yes.
A
This is a daemonset that's been deployed by that helm chart that we were looking at earlier. In fact, if we go back to our helm chart — our helm template — this is the daemonset that's being defined. It's consuming that service account that's been created, and inside of here we have the node-driver-registrar.
A
We actually have a number of containers, and if we go back to the documentation on the CSI side of things, we can see why these containers exist — they're responsible for how the implementation happens. Notice that we've installed vault, and that's all fine and well, but what we're talking about here is actually just installing the pieces that are necessary to expose that container storage interface. We haven't associated anything that provides for that storage interface.
A
Yet — we've only just installed the storage interface itself, right. So we're still missing the part where we say: hey, CSI storage interface, here's a provider for you. And we can actually even see where those things would show up, right. We have a plugin directory mounted at /csi; we have a registration directory mounted at /registration; and if we go down, we can see the volume mounts that are associated with that. The volume mount — well, looks like...
A
A
We also have — what is this guy? This is actually one of the storage objects: a CSIDriver object defining that particular storage object, the secrets-store storage object. So, cool: volume lifecycle modes are ephemeral, attach required is false, pod info on mount is true. This is basically defining the driver for secrets, but again, it's not specifically calling out that that driver will be satisfied by vault — it's just calling out that particular driver. All right, back to our docs. We got that applied; now, apply the provider-vault executable and the SecretProviderClass resource.
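The CSIDriver object read out above would look roughly like this as YAML (these are the real CSIDriver spec fields; the API group/version shown is today's, and an older cluster may use storage.k8s.io/v1beta1):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: secrets-store.csi.k8s.io
spec:
  attachRequired: false       # no attach/detach controller round-trip needed
  podInfoOnMount: true        # pod metadata is passed to the driver on mount
  volumeLifecycleModes:
    - Ephemeral               # inline volumes that live and die with the pod
```

`podInfoOnMount: true` is what lets the driver know which pod (and therefore which service account) is asking, which matters for the vault authentication later.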
A
This is where we connect them, right — this is where we connect that container storage interface with a provider that will allow us to consume secrets. Right now, what we've got so far: we've got the container storage interface, and we've got a vault implementation running, but we haven't actually glued them together — and this is where that's gonna happen.
A
If we look at the volumes that are being passed to this installer, we can see that we're actually mounting a path called /etc/kubernetes/secrets-store-csi-providers. And if we go back to our content over here, right — let's see — secrets-store-csi-providers, all right. So this is actually how we're gonna be registering this particular plugin: we're gonna be registering it as a provider, at that path, on the underlying host.
A
A
docker exec into kind-worker, bash — /etc/kubernetes/secrets-store-csi-providers: there it is. And this is being installed by that CSI secrets-store provider-vault: it's actually putting content there, and then the other daemonset — the actual driver — is consuming the content from there. And so, if we look at the log, we'll probably be able to see how that provider became a registered thing. This is actually just using a go binary file: provider-vault.
A
A
A
A
Oh, did I miss that — "received notify registration call". I was actually looking for something to say vault in there, but maybe that's just not gonna happen. Oh, no problem, Avenger — I was like, why would you do that with a daemonset, but I guess it makes sense. "Can anyone recommend a good CI/CD..." — I'm not gonna get into that; that's in the chat. All right, here we go. So we got that far.
A
Cool — that's "here I am actually doing it from the node's perspective", but we're going to just jump right into the provider. So we're going to jump into the container that is actually consuming it, and make sure that it can see it — and that's great; I love that we're kind of on the same page here. So, jumping in here: what we're doing is exec'ing into the CSI storage driver. Oh — that's probably a bug.
B
B
B
A
Boom, finally got it, all right. So, debugging the docs — what we're doing here with this command: kubectl exec'ing into one of our pods that is running this CSI driver; we're jumping into the secrets-store container, because the pod has multiple containers; then we're gonna pass standard input and output; and we're gonna run the stat command against /etc/kubernetes/secrets-store-csi-providers/vault/provider-vault — basically just making sure that, inside of the secrets-store...
A
A
That's pretty cool. The kubernetes secrets-store-csi-driver helm chart creates a definition for the SecretProviderClass resource — and we looked at that before, in the helm chart. The resource describes the parameters that are given to the executable to configure it. To configure it requires the IP address of the vault server, the name of the vault server, the kubernetes authentication role, and the secrets.
A
So we've registered it, and now we're gonna try and make, effectively, a storage class out of it. So let's take a look at it. So this says secrets-store.csi.x-k8s.io — it's in alpha still; it's v1alpha1. Its kind is SecretProviderClass. The metadata name for this will be vault-database. The provider will be vault, which is a registered provider.
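Assembled from what's read out here and over the next few paragraphs, the object looks roughly like this. The `objects` format below follows the early alpha vault provider and has changed in later releases, and the vault address and object path are reconstructed from the narration, so treat all of it as a sketch:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-database
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.default:8200"   # dev mode: plain HTTP
    roleName: "database"                        # the vault k8s auth role
    vaultSkipTLSVerify: "true"
    objects: |
      array:
        - |
          objectPath: "/db-pass"
          objectName: "password"
```

One SecretProviderClass, one fixed set of objects: everything that mounts this class gets the same secret, which is the "one-to-many" point made below.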
A
Now we finally just did that part, and then some of the parameters tell it how to go and find vault. The vault address will be http:// — that doesn't make me happy; I guess that's because it's in dev mode. Kind of wild that it's HTTP, but anyway — so it's going to access this content unencrypted, against the local vault inside of my cluster. And this particular piece, if you are not already familiar with it, is actually going to work because of service name discovery inside of kubernetes, right.
A
So the way this breaks down: it's kind of a short name. You could actually fully qualify it if you want to, and make it a little easier to understand, but what's happening here is that we're actually identifying a service named vault in the default namespace, and then the rest of the implied hostname would be .svc.cluster.local — which basically indicates that we're using service name discovery inside of kubernetes.
A
A
The vault skip-TLS-verify is true, which is interesting, because we're not actually using TLS at all, it seems — unless I'm missing something. And then the objects are an array. The object that we're actually going to be exposing here is that database password that we set earlier: the object name is password, and the object path is /db-pass.
A
B
A
A
We can call it anything — but look at the type, the volume type. So instead of hostPath, or rook, or, you know, even a PVC — any of those things — we're actually defining a type csi, right. And then, via that CSI interface, we're telling that volume to use the driver secrets-store.csi.k8s.io — that upstream plugin that we've turned on.
A
We want to mount this read-only: true — I mean, it's gonna be read-only no matter what, right, because we didn't actually provide write access to this secret; we're only providing read access to the secret. The volume attributes are telling it, basically, how to go and find that secret for us. So the only attribute we're gonna provide is the secretProviderClass, vault-database. Right — now, when we defined that class up above here, we gave it one password.
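The pod-side wiring being described would look roughly like this (the container image and mount path are illustrative; the pod name matches the one used later in the episode):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secrets-store-inline
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:                                  # inline CSI volume, not hostPath/PVC
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "vault-database"
```

The only attribute passed is the class name: the class, not the pod, decides which secrets show up under the mount path.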
A
So this is the one-to-many situation, right: anything that accesses that particular secret provider class is going to get this secret. We're not allowing for the dynamic creation, or a dynamic relationship, of that secret. We're just saying: this is the one secret, and when anybody accesses this class, it's going to get this secret. Just as a reminder here, if we go back.
A
What we've done is this basic provisioning of the secret, right: we've said, go find that particular secret name and associate it with the class. Okay — but we haven't enabled the functionality which would let us, when we're defining the actual mount, give more parameters, or give more specific information about which secret to go get. We haven't done that part; we've only done this part, all right.
A
A
A
A
A
B
A
So, interestingly, on the underlying node we still have access to the content of the secret. It's not directly attached to a container yet — and I don't believe that it's going to be, in this particular instance. And so we still have that attack vector: if somebody can get into the underlying node, they're gonna be able to get access to all those secrets; they wouldn't have to jump into the containers. Although, in reality, if you have access to an underlying host, you still have that problem.
A
Oh — sorry about that. Okay: this mount is actually mounted via tmpfs, right. So, even though it is exposed on the underlying host, that doesn't necessarily mean that the content will be there; it doesn't mean that it's persisted to disk. The distinction here is that, because it's not persisted to disk, if somebody were to get a hold of this disk and try looking for secrets on it — like, somebody gets a hold of the disk in a server, or whatever — like, that's your attack.
A
It is plaintext, yes, but — you know, I mean — this is where I'm gonna harp a little bit on understanding the security of things and the attack vectors of things. So understand your attack vectors; understand your threat model here, right. And it would actually be really interesting to see a threat model for secrets here — a threat model for the secrets CSI driver, and also for vault; I think vault has threat models. But what they're trying to do is ensure that they have a reasonable implementation of getting a secret into a pod without exposing that secret out to other pods within the cluster, without some intent, right. So, in this case — let me show you kind of what I mean here. So, if I did kubectl get secretproviderclasses.
A
A
This provider class is defined within a namespace, and that means that this secret is only consumable by things within that namespace — it wouldn't be consumable by things within a different namespace. It's only accessible within the namespace where that particular secret provider class has been defined.
A
Also, we're not persisting this to etcd, "encrypted" with base64 — we're using vault for that encryption. And in a consumption model, whenever anybody accesses this secret, we're gonna have an event related to the consumption of this secret that we can track and audit. We can understand when that secret was attached to a container, and we can understand what those container names are, and during what period of time that container had access to it.
A
We have the ability to ensure that the secret is not writable: nothing inside of the cluster has access to write that secret right now. Only somebody with access to vault explicitly, and the permission to write that secret, has that capability, right. So if my threat model included being able to modify secrets, then this pretty cleanly protects against it, because right now nothing has the ability to write that secret.
A
A
We're using service accounts to authenticate to vault to get read access to those secrets, but we were actually using a locally defined user — an admin user of vault — to be able to populate secrets. And this is part of that threat modeling exercise that I'm talking about: like, how do we think about that sort of stuff? What are we protecting against, and where do we see the challenges in it?
A
So I do think that this is actually pretty awesome, and I do think it provides a pretty good mechanism for interacting with, and granting secrets in a secure way to, pods — even though it's still exposing them on the underlying node. The threat vector of — sorry, the attack vector of — the node is already pretty intense, right.
A
If you can get root on the node, you pretty much already own the shop. Yeah, it does, yeah. Well, the vault helm chart does support TLS, and the vault helm chart also provides a pretty good mechanism for setting up an actually secure and reasonably implemented vault integration. That's not what we did here; we just did, like, the dev part of it, because we were showing off the CSI piece more. So that gets us all the way through it.
A
B
B
B
A
B
A
Oh — we can't, actually, right. So, because we've actually mounted — the node mount is read/write, and it doesn't look like it's reconciling that, so we've actually just hijacked the secret. Oh — I mean, it probably will work, but it's a pod, so I can't restart it; I'd have to redeploy it.
A
So, because of that, it's not gonna actually refresh that secret. We've basically just kind of broken the security model here by overriding the content from the node's perspective. So that gives us the ability to mess with the content that's actually mounted on the underlying node. Yeah — it only caches it; it actually only gets constituted on mount. It looks like it's not changing.
A
What happens if we restart? You know, this is great — I love this idea. This is the stuff that I think is absolutely fascinating, right; we're coming up with different theories. So, let's do kubectl get pods -o wide, and we can see that the vault driver for the worker is running here. What happens if we deleted that?
A
-o wide — we're back up and running, and it didn't overwrite it. So this is actually gonna be a pretty interesting attack vector, because, I mean, obviously, if you can write to the underlying node you have all kinds of crazy powers — it's not just this — but this is kind of an interesting way of being a little more insidious about your attack. So, depending on what you're actually trying to do, that might be kind of an interesting attack vector.
A
B
A
A
A
B
A
So what I've just done is: I've gone to the underlying implementation, in containerd, and I've just deleted that pod. That pod is dead, and it's gone forever — it's in the past, in the rear view mirror — and that means that any resources that were associated with that pod should also have gone away.
A
A
A
B
B
B
A
A
A
"If the UID is the same, then kubelet won't invoke the mount." Thank you, Anish — that's awesome. I love that y'all are, like, online for this one. That's a fascinating thing, but I agree — this totally is a bug. But the UID is — oh, I see, oh, I see. So even on the restart of a pod, it's not invoking the mount, which means that you're not getting that secret re-read.
A
kubectl delete pod nginx-secrets-store-inline.
A
We
no
longer
have
that
mount
mount
is
now
gone
right,
so
we're
no
longer
seeing
that
because
the
pod
has
been
deleted.
So
the
only
things
we
still
see
mounted
in
are,
like
the
actual
service,
account
token
for
vault
and
for
other
things
that
are
like
mounted
inside
of
this
host,
but
that
pot
has
been
deleted.
It's
no
longer
a
present
on
there,
and
so
now,
if
we
redefine
the
pod,
let's
see
what
I
probably
be
up
here
right.
So.
B
A
A
...pass — and we're back to the secret. I did delete the pod — I completely deleted the pod; the pod and the dependent container were completely deleted. But the bug that was highlighted — Tanisha's pointing out it's a different bug, because — or unless, unless maybe it's a different bug than you think it is. So, to just kind of really highlight this: if I do crictl pods, we can see that the pod ID is d0bede3732ff4, right, and if I do crictl...
B
A
A
A
So we can see this is the pod ID known about inside of etcd for this pod, and even though we restarted it, and we deleted the underlying pod container, we haven't generated an event that would allow etcd to delete this object and make a new one. And so, because of the mount — this is actually really interesting, because of the way this path has been defined.
A
A
"The pod sandbox should still remain the same if the container is killed." Yeah — oh, okay. So containerd has the idea of pods; containerd has the idea of sandboxes and of containers, right. But these don't line up 100% with the UID inside of etcd. So again, just to really make that super clear — and that was, I think, part of the disconnect for me; it was kind of interesting.
A
A
A
crictl pods, which shows me the sandboxes and the containers, all right. So — basically, from the perspective of containerd, the pause container is the sandbox, right. So I'm gonna check this one: crictl rmp -f — remove everything associated with this pod, or this sandbox; make it completely go away.
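The crictl side of this experiment can be sketched like so (run on the node; the sandbox ID is whatever `crictl pods` prints there):

```shell
# List pod sandboxes known to containerd.
crictl pods

# List containers; note there are no pause containers here --
# the sandbox itself plays that role.
crictl ps

# Force-remove a pod sandbox and everything associated with it.
# POD_ID is taken from the `crictl pods` output above.
crictl rmp -f "${POD_ID}"
```

`rmp` ("remove pod") operates on sandboxes, which is exactly the pod-vs-container naming collision being discussed.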
A
If I do a refresh up here, I see that that value hasn't changed. If I do a refresh down here, I see that it has changed. So this is kind of a naming collision, really. What's happening is: containerd thinks of the pod as a pause container — and that's why you don't see pause containers in containerd. If I do crictl ps — show me all the containers — I'm not gonna see pause containers, and that's because the pod sandbox is more representative of the pause container in containerd. I hope...
A
...that makes a little bit more sense — but that's actually part of the disconnect that I was having: if I check the pause container, would I also get a new ID? The answer's no. So, cool — yeah, that's what I wanted to show you. You know, I hope that was educational; I hope people dug it. I thought it was fascinating — we found a really interesting exploit, yeah.
A
Let me kick back to the camera here, so I can see how y'all are doing. Okay, cool — I hope that that was awesome; I enjoyed it very much. You know, I love playing with all of this stuff and kind of digging into it, and seeing how it's all wired up underneath. I hope you thought that was a good time. "If the pod was a deployment, would the UID change?" Yes, that's right — if we made it a deployment, then — but only if... actually, let me be really clear.
A
The UID for the pod would only change if that pod object was deleted, right. If we have a deployment, I would have to actually delete that pod to have a new pod created with a new ID, for the new mount to work. In fact — do you want to explore that stuff with me real quick, or are you ready to call it a day?
A
Everybody ready to call it a day, or do you want to look at another thing? "Nice, one more thing before I call it a day — you are in" — and that is good enough for me, all right. I'm totally a sucker for this stuff; I don't know why, it's just like my thing. Okay — so let's look at our pod again. I'm gonna convert this into a deployment.
B
B
A
Okay, so we got kind — no, we don't. Okay: so we've got apiVersion apps/v1; it's a kind Deployment; the metadata name is nginx-secrets-store-inline; the namespace is default; we've given it some labels. Inside of the spec we have one replica; we're matching on the secrets label; inside of the template we're setting that label — that's a good thing. Inside of our spec we are defining a single container: it is the nginx container, and it's named nginx. That's the volume; that's the volume mount — the class looks good to me.
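Reassembled from that read-through, the deployment would look roughly like this (the label key, image, and mount path are assumptions; the rest follows what's read out):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-secrets-store-inline
  namespace: default
  labels:
    app: secrets
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secrets
  template:
    metadata:
      labels:
        app: secrets          # must match the selector above
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "vault-database"
```

The point of the experiment: under a deployment, deleting the pod makes the replicaset create a brand-new pod object with a new UID, so kubelet performs a fresh mount.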
A
A
It's running on kind-worker, and it's the only one, okay. So then we are gonna retry our experiment — but this time, instead of deleting it at the node level, we're gonna delete it at the top. So — docker exec into kind-worker...
B
B
B
A
Go — dang it — into db-pass; cat db-pass; good, okay. And then we can show down here: kubectl exec into the nginx container, cat the mounted secrets-store — see, the override is still happening. Now, if we do a kubectl delete of that pod — actually, before we do that, let's do this, right. So this is the UUID of the pod on disk: 747... So that's the ID of the pod inside of etcd, as stored. So now, if we go ahead and delete that pod — delete pod.
A
A
I am actually really curious why the mount's not working, though — oh, it's probably because... you know why the deletion's failing right now? So, everybody: have a guess.
A
A
C
A
So I had to make another delete call, probably — but yeah, it had nothing to do with the node being cordoned. It had to do with the fact that I was actually inside the folder. You nailed it, first shot — that was awesome, all right. Now, that really is it, yeah. So I hope that was educational; I hope you got something out of it. I know I did; I know I really enjoyed hanging out with you all on this beautiful Friday afternoon.
A
So again, yeah — thank you, thank you all so much for tuning in, and I will see you next time — or somebody else will. Looks like we're actually pulling more people into doing some TGIK, so that'll be super exciting; I really hope that happens soon, because it's great to have more viewpoints, you know what I mean. Like, for me, TGIK is about perspective — it's about me bringing my perspective to a problem, and you asking me questions — and I think it's great to have that from a much different host.