From YouTube: TGI Kubernetes 065: Keep it a Secret!
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something secret with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
See https://github.com/heptio/tgik/tree/master/episodes/065 for notes and code.
Hey, good afternoon. This is Duffie, back doing another TGIK with you on this beautiful Friday. I'm here on the seventh floor of a building in San Francisco, the new VMware office at Second and Bryant. It's a gorgeous place to work from, with a beautiful view of the bay. This episode is going to be about secrets, and I'm really looking forward to catching up with you folks and seeing what you're up to. Let's do some of that right now.
Okay, there we go, that's better. All right, so we've got Olaf from Denmark, we've got somebody checking in from Russia — good to see you — Samia and Sadas, Anna Winkler from Boulder, Colorado, and my buddy Timmy Carr, one of my favorite people that I work with, along with some of my other favorite people that I work with: Rory and Stephen are both here with us.
Today we've also got Doc, Clutch, and Marco — y'all are really turning out for this one. We've got Sadas and Amin; George is going to help us out with some of the notes. We've got Steve Sloca checking in, Silvio from Basel, Gustavo from Chicago, and Christopher from Germany. So welcome, y'all. I'm really glad to see you here, and I'm really looking forward to this episode of TGIK.
The HackMD — George, if we paste the link to the HackMD, it's in there. We're going to do the normal format. Welcome to TGIK, everybody. I'm going to do the week in review. It's been a really interesting week for those of you involved in the security world: runc.
The epic breakout — the epic container breakout battle. There's also an update for Slack to get to. So let's just work our way through that. On the 11th, the Kubernetes community put up a blog post about runc and the CVE. It's a pretty big deal, and it's interesting in the way that it is exploited.
A
It
just
keeps
collapsing
on
me.
The
way
this
exploit
works
is
that
basically
pretty
much
every
containerization
or
run
time.
That's
out
there
that
leverages,
run
c
is
and
has
a
read.
Write
underlying
root
file
system
has
the
ability
to
be
exploited
by
this
breakout
and
it's
a
really
interesting
one.
A
Because
effectively
what
happens?
Is
you
run
inside
of
the
inside
of
your
container
with
privileged
access,
you're
able
to
replace
the
binary
that
is
being
used
to
act
as
run
c
on
the
underlying
node
and
thus
do
pretty
much
anything
you
want
with
the
underlying
docker
daemon,
which
is
you
know,
kind
of
a
big
deal?
So, for example, the stuff that's running in kube-system needs a pod security policy that allows it to run, or it will not be started — and we'll talk a little bit more about that later, possibly when we get into the fun part of the episode. So we've got — hey, Dan is here from SF, and we've got Nick checking in from Seattle. We've got folks coming in — Sam from North Carolina — and Olav is asking a question.
Admission control is literally one of the best security control points that you have within Kubernetes — it is one of the very best — and with admission control, pod security policies really give you a pretty significant amount of capability for securing things like this, or even mitigating things that are found that might affect your running environment.
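As a hedged illustration of what that can look like, here is a minimal PodSecurityPolicy sketch — the name and the exact field choices are my own example, not something shown in the episode — that disallows privileged containers and hostPath volumes, which are the kinds of knobs that matter for a breakout like the runc one:

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted-example
    spec:
      privileged: false                 # no privileged containers
      allowPrivilegeEscalation: false
      volumes:                          # note: hostPath is deliberately absent
        - configMap
        - secret
        - emptyDir
        - projected
      hostNetwork: false
      hostIPC: false
      hostPID: false
      runAsUser:
        rule: MustRunAsNonRoot
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny

A policy like this only takes effect once the PodSecurityPolicy admission plugin is enabled and the pod's service account is allowed, via RBAC, to use the policy.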
It's still invite-only for the moment, while we're still working out what's going to happen with it, but I want you to know that you're not without resources if you are new to the community. I still want you to spread this word with me: discuss.kubernetes.io is very much a great place to get involved, ask questions, and start a conversation about a thing you're trying to dig into. You are not without resources to reach out to for help.
This one is about building a Kubernetes edge (ingress) control plane for Envoy v2, with the idea being that you can actually host Envoy out in front somewhere and have it route traffic back down to your existing pods. This is not too dissimilar from some of the work that we did with Gimbal and some of the other tools. It's interesting to see other people repeating patterns, or digging into patterns, to try and solve sort of the GSLB layer for Kubernetes and how we can actually make that work.
HPA is a pretty interesting tool, but initially the implementation of horizontal pod autoscaling was really tied to the process metrics on the pod itself. You could key the growth of a particular deployment within Kubernetes on CPU usage or memory usage, but obviously the next question comes in: well, you know what, this is a Kafka cluster and I want to be able to scale it on queue depth.
I want to scale the consumers of a particular topic on queue depth, right? So how do I go about that? And the answer is custom Prometheus metrics. This article was pretty well written, and I believe it's a series, so I definitely recommend giving it a look over — and getting involved in SIG Instrumentation if you have other questions or if there's other stuff that you need from it. It's a really cool article.
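To make that concrete, here is a hedged sketch of what an HPA keyed to a custom metric can look like. The deployment name and the metric name (kafka_consumergroup_lag) are my own placeholders, and this assumes a metrics pipeline — for example the Prometheus adapter — is already serving the custom metrics API:

    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: kafka-consumer
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: kafka-consumer               # hypothetical consumer deployment
      minReplicas: 1
      maxReplicas: 10
      metrics:
        - type: Pods
          pods:
            metricName: kafka_consumergroup_lag   # hypothetical per-pod metric
            targetAverageValue: "100"

The point is the same one the article makes: once the custom metrics API is populated, the scaling signal no longer has to be CPU or memory.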
One of the other ones I saw this week that I thought was interesting was a post on writing custom controllers by the Banzai Cloud folks. Their blog is actually kind of impressing me over and over again — they've got a lot of really interesting stuff coming out of Banzai Cloud — even if you're not interested in writing a custom scheduler within Kubernetes.
I think this is a great article just for digging into the what, how, and why of scheduling at all inside of Kubernetes — really good stuff. I highly recommend it, and it also highlights the extensibility of Kubernetes, which I think is pretty critical. One of the questions that might pop up is: well, if I write my own scheduler, how will I actually make use of that?
The state is then persisted to etcd, but, unlike most of the other platforms that we've seen come before Kubernetes, what happens next is basically a game of distributed systems. The next piece that happens is that the controller manager looks: if there's a deployment or one of the other higher-level constructs, the controller manager will look at that newly created deployment and, depending on the controller that's being run by the controller manager, act on it.
Other stuff will happen: if it's a Deployment, it'll create a ReplicaSet; if it's a ReplicaSet, it will create pods. But once we get down to the pod level, the very next thing that happens is the scheduler — which is doing a watch on the API server — looks to see whether there are any pods that are not yet scheduled and that are keyed to the scheduler represented in this blue box. If it sees one, it does the work and schedules it to a node.
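That "keyed to the scheduler" bit is just a field on the pod spec. As a hedged sketch — the scheduler name here is a made-up example, not one from the episode — a pod that should be picked up by a custom scheduler instead of the default one looks like:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      schedulerName: my-custom-scheduler   # hypothetical custom scheduler
      containers:
        - name: app
          image: nginx

Whichever scheduler claims the pod then expresses its decision by binding the pod to a node, which is what ultimately populates the pod's spec.nodeName.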
Basically by populating that nodeName field. Then the kubelet — all of the kubelets in your system are constantly watching the API server for pods and other resources that are allocated to that kubelet specifically, and we'll dig into this one here in a little while too. As soon as a kubelet sees a pod scheduled to itself, it pulls a copy of that pod and all of the associated resources, and takes care of things like volume provisioning, et cetera.
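You can see that "allocated to this kubelet" relationship from the outside with a field selector. A hedged example — the node name is whatever your nodes are actually called; kind-worker here is just illustrative:

    kubectl get pods --all-namespaces --field-selector spec.nodeName=kind-worker

which lists exactly the set of pods that node's kubelet is responsible for.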
This is work that the scheduler does, and it does it by itself in its own reconciliation loop — same thing with the controller managers, they're all operating on a reconciliation loop — which kind of highlights the power of Kubernetes. But enough about that, let's move on. Another one I saw: a really interesting new scheduler, kind of based on the idea of, you know...
While we're talking about schedulers — I'm not going to get into explaining it, but I highly recommend it if you're interested in scheduling and in some of the different ways that schedulers could work within Kubernetes, or even within large distributed systems like Kubernetes. This is an incredible read and an incredible project, and there's lots of really interesting stuff happening with it, so check that one out.
Hello from Montreal, and hello from Paris, and there's my good buddy Mike. And we have — I'm looking at this like a blind man; let me make this text bigger. There we go. We also have Leonardo, and we have Nick again. Everything's looking pretty good. All right, back to our regularly scheduled program here.
That was our week in review — lots of stuff happening out there this week. And, as I said, please reach out: if you are in the community and you're shepherding new folks into the community, or you're trying to get them involved in the community in any way, remember that they have that resource, discuss.kubernetes.io.
They also have SIG Community, and they also have a weekly community meeting. The way that I generally onboard people is I bring them into the community: hey, check out the discuss list, come check out the Slack, come ask questions, come play with it. And — weird self-promotion thing — I also totally point them toward TGIK, because I feel like there's just such an amazing amount of content in there. So there's that. Now I want to get into the show and start playing with secrets.
All right. One thing that's neat about Kubernetes in general — and actually I think my friend Stephen Augustus was just commenting about this — is that you can always find the design docs for something that you're curious about. Generally speaking, I would say you can always find them. There are some gaps, and when there are gaps, highlight them, and we should probably try to do a retroactive design doc or a Kubernetes Enhancement Proposal for one of those things. In fact, I think...
For example, if I had a pull secret for a particular image repository — or even, you know, a TLS secret, a key and a certificate for a particular ingress — that I'm going to deploy into different Kubernetes clusters, it might associate with different classes of ingress, right?
Those things are usually environment-specific. Those secrets that I'm talking about — those pull secrets and certificate secrets — are environment-specific. I might want to pre-populate them into my environment, such that when I deploy my application, the secrets are already made available to my application; I don't generally want to deploy my secret with my application. Hey, it's Jimmy from New York — how are you doing, Jimmy? And so that's kind of one of the things that I wanted to highlight: the reason they exist as their own separate object.
I hope that makes sense — that was kind of what I wanted to expound on. Feel free to read through this document, and if you have any questions, start up a discussion inside of the Kubernetes Slack, or dig into it from that perspective, or even put them up here inside the chat — I'd love to hear from you.
The next thing that I want to talk about is something that a lot of people refer to, and that is that Kubernetes secrets are not secret. What does that mean? When I say Kubernetes secrets are not a secret — and this has been highlighted a lot recently, and actually, I think, through the entire time I've been working with Kubernetes...
What that means is that when you store a secret inside of the Kubernetes database — when you persist it to etcd — it's stored in plain text. It may be encoded in base64, but that's just an implementation detail. Sorry, I said encrypted: it is obfuscated in base64, how about that, because that's not encryption, but it is base64.
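That "not encryption" point is easy to demonstrate. A hedged sketch, with a made-up secret name and key: anyone who can read the Secret object can recover the value with nothing more than base64.

    kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 --decode

And, as we'll see below, anyone who can read the etcd keyspace gets the same value without even going through the API server.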
So first, before I get any further, I'm going to give a huge shout-out to this project right here — that URL at the top, sigs.k8s.io/kind — feel free to check it out. It's an incredible project. It's basically a project that is starting to become, you know, the conformance-test tooling, and there are a lot of other really great things about it.
But yeah, true — Jimmy says you could always have block-level encryption on disks at rest, but it's still not great. And it's true, you can, and the onion layering is the challenge, of course — but we're going to get into it when we start digging in. First, though, I want to finish my shout-out: kind is an incredible tool. Prior to using kind...
I believe, if you check the most recent one of these TGIKs that I did, I was using kubeadm-dind, and that thing is a bit of a resource hog when it comes down to it. Right now I have a kind cluster running on my laptop locally that we're going to be using to manipulate things.
So we can go through these two configurations, but I wanted to show you how all of this works — and I think it's an incredible thing. What you cannot do in kind yet is use other CNIs; right now it actually only comes with Weave, but I opened an issue and I know that they're working on it, so pretty exciting stuff. Okay: pods, all namespaces.
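For reference, here is a hedged sketch of the kind of multi-node configuration being described — the exact apiVersion depends on the kind release you're running, so treat this as illustrative rather than exact:

    # kind-config.yaml
    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    nodes:
      - role: control-plane
      - role: control-plane
      - role: control-plane
      - role: worker
      - role: worker

    kind create cluster --config kind-config.yaml

Each entry becomes a Docker container acting as a node, which is a big part of why the whole thing comes up in a couple of minutes.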
So starting these things up is a great way to just play with a local Kubernetes cluster — to get a Kubernetes cluster into a place where you can start playing with it. It's also very quick: this whole process, even with five nodes, takes maybe two or three minutes at the most, so very, very cool. On the multi-master one, where you have three control-plane nodes, they actually take care of all of that challenging stuff.
Underneath the hood — where you copy certs around and make sure that each master has the certs necessary to be able to start a control-plane node — it brings up a three-node etcd cluster, all of those things. It's pretty awesome. Jimmy brings up a point that says "rest in peace, minikube."
I think that's probably because he was actually part of that project, or certainly worked with a bunch of other folks at CoreOS who were kicking off some of the early versions of it. But there has been some really interesting work happening with minikube lately, and I don't know that it's out for the count. I'm running this all Docker-in-Docker style because I'm on a Linux machine, but I don't think that minikube is not long for this world or anything.
There's still quite a lot of really great capability there, and I don't think that kind is yet a direct replacement for it. But who knows — maybe someday.
And that controller manager coming up — what's interesting about this is that the controller manager is already up, but what's happening is that the kubelet is registering that controller-manager static pod with the cluster. So when you're seeing this in pending state, the pod is already running; it's actually just registering that static pod, which was known on that particular node (kind-control-plane), with the API server. Interesting point.
I actually put in a CFP this year for KubeCon EU in which I'm going to talk about static pods and how that works. So, since we're talking about docker ps — since we're talking about Docker-in-Docker — we're also going to take a look at how docker ps behaves. Here I have my five nodes and we can see their names, right: I have kind-control-plane, kind-worker, all of these things, but those are the only Docker containers I see from the host.
If I get into one of those nodes and do docker ps, I see significantly fewer containers, right — and this is what I mean by Docker-in-Docker. Basically, each of these Docker containers thinks that it is completely in control of its own Docker daemon and only reports on the things that are relevant to that particular container. So it's kind of like mirrors pointing at each other. It's very, very cool — I never get tired of stuff like that.
It's super awesome. So the way this is deployed is using kubeadm, and if you want to dig into the bones here a little bit — we're going to drift back and forth between high-level stuff and details — if you go into the /kind directory on any given node, you can see some of the stuff that was put into that node so that it could be used as part of the configuration of the node itself.
So here's the kubeadm.conf that was used to spin up this particular cluster. We told it we want version 1.13.1; there's an init configuration, there's a join configuration, and there's a kube-proxy configuration. Very little has been configured beyond the defaults. One of the things about kubeadm and some of the other tooling out there is that, in this particular case, if I did kubeadm config
print init-defaults — many of the things in the init defaults are actually defaults for the cluster, right? So there's a set of defaults for the configuration of the cluster that kubeadm just has wired into it, and then there's a bunch of different settings that you can actually specify with kubeadm.
All of that stuff is made available to you through the kubeadm API — leveraging things like the capability of specifying extra arguments to the daemons themselves, or even just passing different objects down, like a particular secret or a different volume that you want the kubeadm-managed pod to mount.
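A hedged sketch of what that looks like in a kubeadm config — the flag and the volume here are illustrative examples of the extraArgs/extraVolumes mechanism, not the exact file used in the episode:

    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.13.1
    apiServer:
      extraArgs:
        audit-log-path: /var/log/kubernetes/audit.log   # example extra flag
      extraVolumes:
        - name: audit-log
          hostPath: /var/log/kubernetes
          mountPath: /var/log/kubernetes

The same mechanism is what we'll lean on later to hand the API server an encryption configuration file.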
All of those things are exposed in kind — okay, it's really, really cool. So, docker ps again: I want to look at the running etcd pod here, so I'm going to do docker ps and grep for etcd. I can see that I have two containers running: I have etcd running, and I also have a pause container. Before I move into that, I want to show you what's actually been done to run etcd.
Most of the stuff here is actually just set by default when kubeadm is run, when it brings up an etcd node for you. We can see, for example, that peer and server certificates have been generated, and a dedicated CA certificate has been generated for the etcd cluster. That means that to interact with this etcd cluster you have to actually have a client certificate; you have to use TLS to interact with the etcd cluster.
And that's good, because — well, in this particular case it's not super relevant, since they're all hosted on the same node: literally, if you can exploit the master, you have etcd and you have the underlying host system, so you're not really gaining much real-world security from that TLS mechanism. But say you were going to try to scrape metrics from etcd — your metrics server...
The thing that's actually scraping those metrics, your Prometheus server — wherever that's located, it might be external to the cluster or it might be running on some other node. Now we have a reason to actually get some value out of having a TLS mechanism for etcd. So you have to think about — as Stephen mentioned earlier — oh, they expire in a year, Jimmy, and they can be rotated with kubeadm. But anyway, there are lots of good reasons to actually use TLS between all of these things. So, where was I?
We also have two volume mounts: /var/lib/etcd and /etc/kubernetes/pki/etcd. And if we look at the host path for those things — hostPath. I'm often known for saying things like "oh my god, hostPath," and the reason I say that is because it basically enables the pod to mount something in the underlying OS and make use of it, which, security-wise, is again kind of scary.
Let's see — so that's all of that good stuff. The next thing I wanted to do — John Eldridge had a question: is kind something you would use for CI? Yes, absolutely. Check out kind.sigs.k8s.io, or sigs.k8s.io/kind. Kind is absolutely something you would use in doing CI testing within Kubernetes.
In fact, I think that in the next community session somebody wants to do — or I saw somewhere that somebody's getting ready to do — a presentation on how to use kind to do exactly that. So it's really great stuff. But yeah, it's taken off; it's the way we're testing Kubernetes going forward.
So we are looking at a cluster of one — it's only a single etcd instance. These are the interfaces that it's listening on for peer connections and client connections. I'm currently coming in as a client, but I'm authenticating as a peer — so, interesting stuff.
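For anyone who wants to reproduce that poking around, here's a hedged sketch of what it looks like with etcdctl from inside the etcd container — the certificate paths are the usual kubeadm locations and the secret name is a placeholder, so adjust for your cluster:

    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/peer.crt \
      --key=/etc/kubernetes/pki/etcd/peer.key \
      member list

    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/peer.crt \
      --key=/etc/kubernetes/pki/etcd/peer.key \
      get /registry/secrets/default/my-secret

Without encryption at rest configured, that second command prints the secret's contents essentially in the clear.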
So definitely stuff you can do there. All right, what else? We've seen that it's not a secret, so let's look at how we can get to the point where maybe it is stored as a secret — because when we think about these things in the security way, we have a couple of different problems to solve. Actually, sorry, I'm skipping ahead. The next thing I want to do is talk about: well, now we know the secret is stored in plain text.
So, as a user — or as a cluster operator — I have a lot of capability with RBAC to limit things, but maybe not all the things that you would perhaps expect to be limited. What can I do as an administrator? Right now I'm running as an administrator: all namespaces.
I can see all the secrets in all namespaces available to me as an administrator. But what if I were actually going to try and do this as a different user — yeah, probably I'd go with Jimmy on that one — what if I wanted to create a way to limit other people's ability to see that secret? What capability does RBAC provide to me? So in this case, let's look at the roles or the cluster roles that are defined.
There we go. I have a number of different cluster roles that come out of the box when you enable RBAC inside of a Kubernetes cluster, and one of them is the view role. Let's take a look at that one: kubectl describe clusterrole view. We can see that inside here, nowhere is the ability to view secrets listed, right?
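So view deliberately leaves secrets out. If you did want to grant somebody read access to secrets in one namespace, a hedged sketch of the narrowest thing that does it — all names here are placeholders — looks like:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: secret-reader
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-secrets
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: some-app            # hypothetical service account
        namespace: default
    roleRef:
      kind: Role
      name: secret-reader
      apiGroup: rbac.authorization.k8s.io

The point of the built-in view role is exactly that it does not contain a rule like this.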
This is a cool tool that is actually not my creation — it was created by the Z Lab folks. I should put up a link for that; let's do that real quick.
There we go — this is the one I use, and it works pretty well. Basically, it's just wrapping kubectl to go get the token from the service account and then mint a working kubeconfig from it, which is pretty clever, all things considered. I'm not going to get into the details of how it works, but if you're curious about it, it's definitely worth checking out. What it wants as arguments is the name of the service account as the first argument, and then it will pass other arguments through as kubectl arguments.
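If you'd rather see the moving parts than use a wrapper, here is a hedged sketch of doing roughly the same thing by hand — the service account, cluster name, and server address are placeholders, and this assumes the older behavior where every service account gets a token Secret:

    # find the token secret backing the service account
    SECRET=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
    TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)

    # build a kubeconfig around that token
    kubectl config set-credentials sa-default --token="$TOKEN" --kubeconfig=sa.kubeconfig
    kubectl config set-cluster kind --server=https://127.0.0.1:6443 \
      --insecure-skip-tls-verify=true --kubeconfig=sa.kubeconfig
    kubectl config set-context sa --cluster=kind --user=sa-default --kubeconfig=sa.kubeconfig
    kubectl config use-context sa --kubeconfig=sa.kubeconfig

The wrapper does essentially this, plus embedding the certificate authority data properly instead of skipping TLS verification.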
So here's my kubeconfig with my embedded token — and this token is actually, oddly enough, stored as a secret inside of Kubernetes — and that's my certificate authority data and all of the stuff that's made available. So now, if I do kubectl get pods with --kubeconfig pointing at that new kubeconfig, I can see that there are no pods. But if I do get secrets...
I get this result, right: this user — the one defined by that service account in the default namespace, bound only to the view role — gets no secrets. So that means the secret itself may still be available to the pod, but this user is not able to access that secret. This gives me the ability to limit the capability of that particular user to access that secret. Let's look at some of the other interesting limitations that are out there. Do you have any questions about that before I move on?
Okay, so we've talked about how we limit users — we can do that with RBAC — but there's also another thing, which I think is actually pretty cool, for limiting the blast radius of an exposed secret. It's not a perfect thing — I don't think there are very many perfect things in the world — but...
Right, and that's pretty powerful. It means that if somebody were to get a hold of the certificate being used by the kubelet — say I exploited that kubelet to get a hold of this particular node's credential — I would not be able to do things like list all secrets. I would also not be able to get a particular secret unless that secret was associated with a pod on this particular host.
Okay, where I keep getting to here — my favorite: kubectl run debug, image equals an image,
replicas equals one, restart equals Never. So what this command is going to do is create a pod — one instance of it — and because I have said restart Never, I'm triggering some kubectl run magic here which will basically just generate a pod. If I do --dry-run -o yaml, I can see that this is basically what's going to be created, right? So now let's go ahead and take that manifest.
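A hedged sketch of that generator trick, with placeholder names (the image here is just an example):

    kubectl run debug --image=nginx --restart=Never --dry-run -o yaml > pod.yaml

With --restart=Never, kubectl run generates a bare Pod rather than a Deployment, and --dry-run -o yaml prints the manifest without creating anything, which makes it a handy way to scaffold a pod spec you can then edit — for example to add a secret volume — before applying it.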
Although you raise a good point: there is actually a problem where, if you create a CRD, you can't yet associate the OpenAPI spec that you write for that CRD with kubectl explain, which is kind of a bummer. I would have hoped that that would work, but I don't think it's there yet. So — my pod is running.
"Not-so-secret bash" — thank you, George; have a great weekend. "How does one find out why a pod sits in ContainerCreating for so long?" Typically, it's because it's pulling an image. So if I did kubectl describe pod debug, I can see that it was pulling the image for 118 seconds.
So kubectl describe pod shows you, among other things, a subset of the events that are specific to that pod. This is just so interesting: if you do kubectl get events, you can see all the interesting things that are happening — things registering, defaults being set — and you can see things that are specific to a given namespace.
Right — so if I were to exploit this underlying node, I could just do a search for all paths with this mount and I'd be able to get at that secret. This is true for any certificate; that's true for all of those things, right?
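To make that concrete, here's a hedged sketch of what that search looks like from a shell on the node — the paths reflect the usual kubelet layout, so treat them as typical rather than exact:

    # secrets mounted into pods show up as tmpfs mounts under the kubelet's pod directories
    mount | grep /var/lib/kubelet/pods | grep secret

    # and the files themselves are just readable plain text on the node
    find /var/lib/kubelet/pods -path '*volumes/kubernetes.io~secret*' -type f

Anyone with root on the node can read every secret that has been projected into any pod scheduled there.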
So how do I keep those secrets safe? I've got to secure them somehow, right? I've got to make it so that only the right things can actually read them. Oh — you know what I wanted to do before I go on?
I need to figure out a way to make them encrypted. So before I do that, let's do this: get secret.
Node authorization — that's what I'm talking about. With node authorization, what's happening here is: because this secret is not associated in any way with anything on this node, the node cannot get that secret. Only a node with that access allocated to it gets the secret. So the node authorizer is super critical in supporting this, and this is a lot of the really great work
that's been happening lately around reducing the surface area when things go bad. So that tells you that you have the ability to limit access to secrets for users, and you have the ability to limit access for nodes — but we've also just shown that secrets are stored in plain text in etcd. So if I had a pod that I could schedule on that master, I might be able to get hold of an etcd snapshot, and all of my secrets are right there — keys to the kingdom, the whole thing. It's terrible.
Let's talk about the next piece. We've been at this for about an hour now; I feel like we're making pretty good progress.
Let's do encrypting secrets at rest. This is a great page for reading how it all works. I'm going to start just by doing it this way, and we'll talk about why this way is kind of challenging. Felix asks: and is this not true for config maps? What does that mean?
Are you asking about the node authorizer, or are you talking about whether it's encrypted at rest or something? And then, what happens over time if pods move between nodes — is the permission still there if the pod is no longer scheduled? It's basically a call that happens; here's what's going on in the underlying implementation.
When the node, as identified by that node's certificate, goes up to the API server to request access to a particular resource, the API server makes a judgment about whether that particular node has a reason to ask. It's like: why do you need to know? And the node says: well, I have this resource scheduled to me, so I need it, right? Node authorization, yeah.
Yeah — so I can still do get pods, right, and the reason for that is the discovery mechanism: as an underlying node I may need to discover what other pods are out there. But I couldn't do something like delete a pod from another host. So I couldn't say — -o wide; kubectl delete pod -n ...; let's delete that one.
So yeah, node authorization: super powerful, very cool. All right, next up for our discussion: let's do this encrypt-secrets-at-rest thing, which will be pretty fun, and hopefully we'll have time to do some other fun stuff after this. But let's dig on through here. There is a Katacoda for this, definitely worth checking out — but you know I'm going to walk you through it anyway.
But if I were to remove this line, then the next one in the list would be aesgcm, and I could specify the key and some text — and we'll probably end up using these to do this as well. With this stuff I can actually say: okay, I want this particular provider to encrypt my secrets with this information. If I provide multiple keys, it can work with both — new writes use the first key, and older data encrypted with the other keys can still be read — and that also gives me the capability of rotating those keys, should the time come. So what I want to walk through next is
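For reference, a hedged sketch of what that encryption configuration file looks like — the key name and the secret value are placeholders (the key is just 32 random bytes, base64 encoded), and the exact apiVersion can differ between Kubernetes releases:

    # generate a key
    head -c 32 /dev/urandom | base64

    # encryption-config.yaml
    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: <base64-encoded 32-byte key>
          - identity: {}

The first provider in the list is used for new writes; the others — including identity, which means "no encryption" — are only used to read data that was written before this configuration existed.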
the manifest: cat kube-apiserver. Here's my kube-apiserver manifest, these are the flags it has available to it, and these are the volume mounts that are mounted in — and fundamentally what really matters is that I figure out a way to point that encryption-provider-config flag at a file that is available to the API server. Now, there are lots of good ways to do this, but I'm going to cheat a little, in the interest of time.
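A hedged sketch of that shortcut — the file locations and config filename are illustrative, not copied from the episode. The idea is to add the flag through the kubeadm config and then re-render just the API server manifest:

    # added to the ClusterConfiguration in the kubeadm config
    apiServer:
      extraArgs:
        encryption-provider-config: /etc/kubernetes/pki/encryption-config.yaml
      # putting the file under /etc/kubernetes/pki piggybacks on a mount the
      # static pod already has, so no extraVolumes entry is needed

    # regenerate only the kube-apiserver static pod manifest
    kubeadm init phase control-plane apiserver --config /kind/kubeadm.conf

The kubelet notices the changed manifest under /etc/kubernetes/manifests and restarts the API server with the new flag.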
I didn't mention that it was a phase — I basically just re-ran a phase, but I pointed it at a different kubeadm.conf. And now, if we look at /etc/kubernetes/manifests/kube-apiserver.yaml, we can see that the encryption-provider-config flag is populated, and it's pointed and configured correctly. Neat stuff. So that's how that works.
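Presumably the check at this point looks something like the following hedged sketch (the secret name is a placeholder): read the key straight out of etcd and confirm the value is now wrapped by the provider, then force any pre-existing secrets through the new encryption path:

    # the stored value should now start with a provider prefix such as k8s:enc:aescbc:v1:key1:
    ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/peer.crt \
      --key=/etc/kubernetes/pki/etcd/peer.key \
      get /registry/secrets/default/my-secret

    # rewrite all existing secrets so they get encrypted with the new key
    kubectl get secrets --all-namespaces -o json | kubectl replace -f -

Only writes that happen after the API server restarts are encrypted, which is why that replace pass matters.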
So, when the secret is encrypted — let's actually talk about how the flow works inside of this. I wish there was a chart here; maybe I'll make one. The way the flow works is: when you have an encryption provider available to the API server, just think of it as a step between the API server and persistence to etcd. It doesn't change the way that things interact with the API server; it changes the way the API server modifies, or mutates, that object before persisting it to etcd, right?
So that means that if you had multiple API servers, each of the API servers would have to have the same set of keys to be able to encrypt and decrypt secrets, and rotating would become more complex. But that's — so, the last thing I want to cover before we call it a day (it's already been an hour and a half; we're doing pretty well on time) — I wanted to show you this, and now we're going to pick it apart and talk about the problems that we've just highlighted, right?
So just now we saw the problem: when I replaced the secret — or if I change the order of objects in that underlying file — we don't see a change in the way that the secrets are actually persisted. The encryption happens regardless, because it only loads that file on start. It doesn't reload the file if it changes; it's not watching it for changes — and by "that file" I mean the encryption file, the one that has the encryption provider configuration in it. It only reads it that one time, right?
So how can we improve on this, right? How can that be improved? And that's where things get pretty interesting. Again, if that file were made available to each of my masters — each master's API server would have that file available to it — then instead of rotating the keys in one place, I would have to rotate my keys on each master in turn and make sure they all have that configuration. That's actually specifically why they allow you to have more than one key defined.
It's because, you know: on API server one I might put in the new key, so that API server one will start encrypting to the new value, and then I have to go into API server two and API server three and make sure that everybody has all the keys configured correctly all the time — and this represents a huge administrative burden for people who are trying to encrypt at rest.
The answer is: you can use a KMS provider for data encryption. I don't think we're going to have time to get through that in this session today, because I also want to talk about a couple of other things real quick, but I am going to talk through this doc and the next one here — because with this model it's very different, right?
If you're looking for a fairly generic solution for this — and I've actually looked at this one; it's pretty cool — this is something that Oracle built called the kubernetes-vault-kms-plugin. If you have Vault running and configured in such a way that your masters can communicate with that Vault server or service, then you can make use of this plugin directly, and you would run it again as a daemon.
You could run this in, maybe, a static pod on all of your masters, point it at a particular Vault configuration, and then make use of Vault as a KMS provider. So when you're actually making use of that encryption key, you can rotate it, you can replace it — you can do all of those things with basically just a locally running Vault inside of your own infrastructure. You don't necessarily have to make use of a cloud KMS — or, with Vault,
you can actually, at that point, delegate the encryption key to a KMS, like in GKE or one of those other places. But this acts as your middleware for manipulating which secret Kubernetes is using to encrypt the encryption keys for all of your secrets at rest within Kubernetes. And that is actually, in my opinion, pretty darn cool, because I spend a lot of my time playing with bare-metal environments, and so I really want to believe that this sort of stuff is possible without necessarily having to rely on a particular cloud provider.
Unless I choose to do it because the cost is low enough, or what have you. But you get the idea — pretty cool stuff.
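For completeness, here's a hedged sketch of what the encryption configuration looks like when a KMS plugin (Vault-backed or otherwise) is in the picture — the socket path and the provider name are placeholders for whatever the plugin you deploy actually uses:

    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets
        providers:
          - kms:
              name: vault-kms                             # hypothetical provider name
              endpoint: unix:///var/run/kms/vault.sock    # hypothetical plugin socket
              cachesize: 1000
              timeout: 3s
          - identity: {}

With a KMS provider, the API server encrypts each secret with a local data-encryption key and asks the plugin to wrap that key, so the actual master key lives in Vault (or the cloud KMS) rather than in a file on every master.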
The next thing I wanted to show you — and this might be the last thing we cover today, if you're all still with me — is another cool thing. It's called Kamus, and it's actually written by a guy named —
I believe that's it — I'm sorry if I'm slaughtering your name. Kamus is pretty interesting because of the way that it works. You know, all of these secret tools are something of a game of building a better mousetrap — in almost every case that's effectively what it is — but the idea behind this one, for my part, I think is actually pretty interesting, so I'll take you through it.
The idea behind this is that, while this person was working on another project, they were using Travis and saw the Travis secrets encryption solution — which, if you're unfamiliar, works like this: Travis makes the server's RSA key pair available, the one you would use to SSH in or whatever; it makes the public key available, you encrypt to that public key, and because the server has the private key, it can decrypt it. And I'm like: well, that's a pretty clever idea.
I like that idea. So how are you going to do this with Kubernetes secrets? And the idea, again, is: well, we can do this with a service account — we can encrypt secrets to a specific service account. Understand that within Kubernetes the idea is that you have maybe a service account per application, just like the one that we created for our kubeconfig, for our user, right?
So, if that's available to us, that means we can encrypt the secret so that only that service account can use it — as long as the service account exists before I'm trying to populate the secret.
I can kind of hear a number of your brains turning here in my mind right now. So: I have a wildcard certificate that I want to use for a particular ingress controller, but I want only the ingress controller to be able to decrypt that particular secret — so how's that going to work? In this case it's saying: well, if I associate that service account with that particular ingress controller, and I make use of an init container,
I can make it so that, before populating the secrets the ingress controller would use onto disk within that pod — or into a shared volume, like an emptyDir, for that pod — I can encrypt to that service account and have only that service account hold the capability of decrypting it and making it available to the pod, which is really cool. He's written a thr— it is Omer, Omer Levi Hevroni.
I've seen him in the Kubernetes security Slack a few times. So it's really neat — I definitely recommend giving this a look; I think it's a really interesting take on how to do it. This is the repo that gets into how it's all done; they have a threat model and a few other things that they describe.
I was like: that's a really cool one. And it's actually really useful for things like, you know, the — not the DevOps flow, the GitOps flow — where you have some mechanism that's pulling encrypted secrets down and you want to make them available; where you have something watching a repository and pulling down the latest version of code, that kind of stuff. This is a pretty good way to do it. Kubesec is another one that's interesting.
We have some shared secret that is known by the application at runtime, and we have an encryption key — like a public key — that is available to us at deploy time. We know that at deploy time we can encrypt to that public key and make the secret available, like in my repo or anywhere else, so that when it's pulled in at runtime I can decrypt that secret and make use of it. Kubesec leans on that pretty heavily.
Mozilla SOPS provides a lot of the same capability, but I think they've really taken that project and breathed new life into it, so it's an incredible tool for doing this as well. So these two tools are out there for moving the problem up the stack: instead of trying to encrypt between the API server and etcd — which is a good thing to do — what if we actually just made it so that only the application, as identified by that application's credentials, could decrypt that secret?
That would be a better model, because it limits our surface, right? It means that not everybody even has access to those decryption keys. And it's a better model because, if I had multiple applications all deployed in default, and I figured out that there was another high-value secret called cert-wildcard inside of the default namespace, I could just deploy a pod into the default namespace and make use of that secret, right?
With that, we've been at it for about an hour and fifteen minutes. I wanted to thank you all for tuning in — I hope this was helpful. "Now, if only they didn't use PT" — yeah.
Fair point, fair point. Anyway, I hope this was helpful. Thank you all for tuning in, and have a kickin' weekend. This is a three-day weekend for a lot of you, I hope — I know it's true here in the US; around the world, obviously, probably not. And on that note, I'm going to leave you with a terrible dad joke that I heard this week, which was: you know what he called — what is a group...