From YouTube: TGI Kubernetes 095: K-Rail
Description
Notes: https://github.com/vmware-tanzu/tgik/blob/master/episodes/095/README.md
Good afternoon everybody, and welcome to TGIK number 95. I hope you're all having a great day. I'm having a great day here in San Francisco, beautiful view of the bay out there in front of me. I'm kind of looking out toward the ballpark, and I can see the ballpark, and there are a couple of boats out there in the bay. Things are getting a little smoky, but not super smoky yet. There are wildfires happening in California again, all those dry hot winds blowing things around. Craziness. All right.
Well, today's TGIK is going to be on some of the security tooling that's kind of coming around and showing up, and I just want to give a shout out to everybody who's joining me here on the session. We've got Olaf Lehmann from Copenhagen, Denmark, and Martin from the Netherlands, good to see you both. We've got Dustin Decker right here from San Francisco, hello, and we've got Rory from Scotland and Shahar from Atlanta.
What else do we got here? Yeah, Bogdan from Bucharest, and hello Maddy, and Madden from Madrid, and Arseen from India, and Christine from Germany, Eelco from the Netherlands, and Grigor from Chișinău, Moldova, I probably messed that up, but Phillipe from Paris. Everybody tuning in from all around the world, it's such a great thing. Actually, this is my favorite thing. Hey Mani, good, good to meet you, and we have Alexandra from Brazil, and we have Andre from Brazil.
Also, I wonder, I mean, Brazil is pretty big, so I wonder if you're both kind of in the same general area, or if you're kind of anywhere in between there. So, we've got Villagestrong signing in from Mexico City, or maybe, you know, from Michigan. We got Robert from Dublin, how's it going Robert, good to see you all. So again, this session is going to be exploring a new project that was actually just recently announced by the Cruise Automation folks.
That project is called K-Rail. I chose this image, I thought this was such a fun image, because this metal bar is not too dissimilar from a K-rail, and, you know, K-rails are one of those things that skateboarders sometimes use to do slides and stuff like that. So a K-rail, for those who are unaware, you probably have a name for this wherever you are in the world, a K-rail is this cement barrier thing. You know how they usually use it for crowd control.
Let's see, looks like I missed a link here. Okay, so Nanci Lancaster, who is one of the people responsible for handling some of the work around managing the scheduling for all of the amazing talks that we see at KubeCon every year, has put out a request: if you are planning on actually attending KubeCon, please take a moment to go ahead and fill out your sessions in sched.com, and this actually helps them figure out what the right-sized rooms will be and that sort of stuff.
So if you have a moment and you're planning on attending KubeCon, this is a great way to go about it. So, I mean, a great reminder to go ahead and fill that stuff out. There's a link from that tweet directly, and we should really, you know, help Nanci out, she's awesome, and also I wouldn't want you to show up at KubeCon and not be able to get into the session that you're eagerly awaiting, you know.
I wonder if I'll have time for this, but the idea of this is: there are still 20 sessions without track hosts, and what they're looking for is somebody to kind of bring a little bit of continuity to it. It sounds like: make sure the session starts and ends on time, maintain the integrity of the session, sponsors should not be passing out anything to attendees or scanning badges.
You know, they're looking for volunteers to fill some of these roles at the con itself, so if you're interested in doing that, yeah, I mean, this is how you get involved, right? Everything we do, including KubeCon, is very much community focused, so if this is something that you'd be interested in doing, definitely.
Next up, we have Vito Botta, who has written a cool Velero backup notifier for email and Slack. So Velero, if you are unfamiliar with it, is one of the projects that we developed at Heptio, and the goal of Velero is to enable you to back up not just etcd state, or just the state of those resources within your cluster, but also any volumes that you have associated with them. So as we think about models for actually backing up and restoring stateful applications to a single cluster, or maybe even moving them from one cluster to another, Velero is some of the community-based tooling that we developed at Heptio, and it is now under the governance of VMware, to make all of that possible. So it looks like Vito has written a project that does Velero backup notifications, so that when your scheduled backups take place, like if you've gone ahead and set up a cron job to perform a particular backup, you actually get some notification of the result of that, and it looks like they've got a few different options.
So, like, all the developers on those projects, Velero, Contour, Sonobuoy, those things, the dev chat for those is in the Kubernetes Slack, and so the developers who are working on this piece, or even the folks in the community like Vito who are interacting with the teams that are working on that project, they can jump right into those channels and interact with the team, to ask questions about why a particular design decision was made, or whether a particular idea is a good fit for the project. All those things are available to you like that, and I think that really signifies the open source nature of what we're trying to achieve here. So, moving forward: Kubernetes patterns, the service discovery pattern.
So this is a nice blog by Mohamed Ahmed, and he talks about service discovery and how service discovery works within Kubernetes. This should actually be a pretty interesting read. So: Kubernetes deploys applications through pods, we know about pods, and in a microservices application the application is broken into a number of components, and they kind of need to be able to discover each other so they can communicate between those pieces, right? I think of this more in terms of distributed systems rather than microservices, but there you go.
So client-side discovery means the client is responsible for determining which service it should connect to, and it does that by contacting a service registry component, and in Kubernetes we do this with DNS, typically. And server-side discovery: in this model, the load balancing layer exists in front of the service. Oh, I see, yeah.
He does describe how the environment variables are set, although, interestingly, there is a bit of a race for those, right? These environment variables cannot be updated once the pod has come up. So if you want to rely on environment variables for service discovery, then you also kind of rely on defining services in an order such that those services are available to the pod when it starts up.
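To make that ordering race concrete, here is a small sketch. The service name `redis-master` and the values shown are hypothetical, but the `<NAME>_SERVICE_HOST`/`<NAME>_SERVICE_PORT` naming is the standard Kubernetes convention:

```shell
# Inside a pod that started AFTER the 'redis-master' service was created,
# Kubernetes has injected per-service environment variables, e.g.:
kubectl exec mypod -- env | grep REDIS_MASTER
#   REDIS_MASTER_SERVICE_HOST=10.0.0.11   (hypothetical value)
#   REDIS_MASTER_SERVICE_PORT=6379        (hypothetical value)

# These are fixed at container start: a pod created BEFORE the service
# existed never gets them. DNS-based discovery has no ordering constraint,
# the lookup works whenever the service exists:
kubectl exec mypod -- nslookup redis-master.default.svc.cluster.local
```

That is the race he is pointing at: env-var discovery depends on creation order, DNS does not.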
Definitely worth a read if you're interested in this stuff. And this actually reminds me, I was going to mention that I've actually been doing a series of things on grokking Kubernetes, and I'm going to continue that series, but I thought, you know, I'd take a break from it and do something else this week. I think the next up for the grokking series is scheduling, but we are going to get back to it, and I want to keep doing that series, I'm just taking a break this week. So just a heads up there.
Let's kick back to the crowd. We got Antoine from Paris, and we have George putting up the notes. Remember, you can get your notes at tgik.io/notes, that's where you can find them. And Maddy is asking: would you mind talking about how K-Rail works? Yeah, that is absolutely the goal of this session.
Like my kind cluster: I could expose that to an external endpoint you hosted, perhaps on AWS or on DigitalOcean, and that would give you, the audience, the ability to access the service that's running on my laptop. That's actually a pretty cool feature, and if you're looking for that sort of an environment or that kind of capability, inlets is actually a pretty interesting tool for that. So I'm not going to get into crazy detail here, but definitely check it out.
I had the pleasure of actually chatting with Alex about the TLS implementation and how all that works, and he's a very responsive person, so if you have questions about how it works, definitely reach out. Yeah, it is pretty neat. You know, I've been planning on doing a little bit more with that myself.
Deployments: you have the powerful ReplicaSet, and this is absolutely the case, right. So when you create a Deployment construct within Kubernetes, that Deployment construct is broken down into its component parts by the controller manager. As soon as the Deployment shows up, the Deployment controller in the controller manager will grab that Deployment, see that there are no ReplicaSets associated with it, and then create a ReplicaSet, and the ReplicaSet controller will in turn create pods. And so we've walked through that whole process.
But what's interesting is that, because of the way the Deployment construct works, you actually end up with multiple ReplicaSets, depending on when you change the configuration of that Deployment spec. So if you change the Deployment, we keep the old ReplicaSets around in case moving forward doesn't work. Like, how do we...
Actually, you know, if I updated my Deployment to the wrong image name, it would start CrashLooping, right, or ImagePullBackOff, or what have you, and the question is: how would I be able to go back to that previous one that had the correct image name, while I go figure out why my new image name is not populated in the container registry, right? And so this is what this is describing.
It's some of the capability of being able to basically move back to the previous ReplicaSet. There's an official command that you can actually use to manage this, and you can also govern how many ReplicaSets are actually kept, and that's where this revisionHistoryLimit is managed. And so, yeah, a good article on deployments, and definitely worth a read if you're interested in understanding those things a little bit.
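That rollback flow can be sketched with kubectl; the deployment name `web` here is hypothetical:

```shell
# List the revisions (old ReplicaSets) a Deployment has kept around
kubectl rollout history deployment/web

# Roll back to the previous revision, or to a specific one
kubectl rollout undo deployment/web
kubectl rollout undo deployment/web --to-revision=2

# How many old ReplicaSets are retained is governed by
# .spec.revisionHistoryLimit on the Deployment (default is 10)
kubectl get deployment web -o jsonpath='{.spec.revisionHistoryLimit}'
```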
So this next one: you can do this with kubectl get --raw, there are all kinds of interesting ways to actually get this data, and what they're describing here is how you send an API request, and what the result of that request looks like. It's kind of opening up the idea of interacting with the Kubernetes API as a RESTful endpoint, so pretty cool stuff.
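For example, you can hit the API server's REST paths directly through kubectl (the paths shown are standard API routes; `jq` is just used here for readability):

```shell
# Fetch the raw JSON the API server returns for pods in a namespace
kubectl get --raw /api/v1/namespaces/default/pods | jq '.items[].metadata.name'

# API discovery endpoints work the same way
kubectl get --raw /apis | jq '.groups[].name'
```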
Then we have a couple of submissions by Mr. Bobby Tables. We have the volume plugins post, FlexVolume and CSI, and then we have the bare-metal cluster install with Cluster API. Let's look at those two, interesting new projects, or not new projects, this has been around for a little while. So, the FlexVolume plugin, or "the minimal viable fishing-rod." What does it mean by that? A FlexVolume is an executable: a binary file, a Python script, a batch file.
So this blog post, I think, just digs into what FlexVolumes are and some of the ways that you can actually explore them, or extend Kubernetes to support volumes that perhaps didn't already exist or are more specific to your particular infrastructure. So definitely worth checking out.
Let's go back to our chat, see how everybody's doing. We have Buddy from Ukraine, we have Dimitri from NYC. "I talked about this the other week," I think you did, yeah. "Overheard Joe Beda: how to persist events we see on kubectl get events? Sorry if it's out of..." Oh, I see, yeah, we can talk about that, Mani, that's a good question. "Want to find time to play with Alex's thing," I agree. We have Sam from Melbourne, Australia, and Bogdan asking, oh yeah.
And in this presentation, this person explores what's necessary to effectively develop a provider against a virtualization library, right. So they first go through and kind of describe what Cluster API is. We've spent a fair bit of time on Cluster API within TGIK, so there are a number of different sessions that talk about Cluster API, even way back in the genesis, when we were still kind of playing with this idea, all the way up to what is available currently, so definitely worth checking out.
It's got a pretty good summary of what's there, like some of the primitives that are exposed by the Cluster API, and then, what's interesting is they talk about how, when we describe Machines or MachineClasses, we also have to kind of wire up an actuator, which is responsible for handling the heavy lifting.
If you will, or interacting with that infrastructure-as-a-service, to create the thing that you've asked it for. And in this document, I think they go through a pretty deep dive into what that provider is going to do and ways to leverage it. So in this example, they're talking about leveraging libvirt, or KVM, as a provider for Cluster API to interact with.
Because if you're looking at exploring Cluster API, or perhaps developing a provider or anything else like that, this gives you all the tools necessary to kind of play with this whole idea. This is an incredible topic and it's actually a really good blog post, so it's a really good jumping-off point. A good shout out to the amazing amount of work that this person did to really get this done, so nice work, I'll call them by name: nice work, Mani. That is awesome, so very, very cool, definitely check that out.
"Distributed systems such as Kubernetes are designed to be resilient to failures," which is true, there's the whole level-triggered reconciliation construct, we talked about this a little bit on TGIK too. "To keep a simple view, most parts of HA will be skipped, to describe kubelet-to-controller-manager communication only." Okay, so in this document, they're basically describing, you know, how does it actually work, and they provide some understanding of some of the tunables, with regard to the kubelet, that you can use to kind of,
excuse me, to tune how long the API server waits to decide whether a node is in a healthy state, and how frequently the node updates the API server with its current state, or with its current understanding of things. And they specifically talk about how some of these tunables are specific to the size of the clusters that you have, right, and how many events those things represent, and what sort of load that represents on the backing stateful store, like on etcd. So this is a really good
read if you're in the process of trying to understand, like, you know, why does etcd seem like it's always super busy. Here are some of the tunables that we can adjust to kind of improve things, depending on the scale of the cluster or the scale of the events and that sort of stuff. So that's definitely worth a read.
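The tunables in question are component flags along these lines; this is a sketch, and the values shown were the upstream defaults around that time, so check your own version's documentation:

```shell
# On the kubelet: how often the node posts its status to the API server
kubelet --node-status-update-frequency=10s ...

# On the kube-controller-manager: how often node status is checked, how
# long to wait before marking a node NotReady, and before evicting pods
kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s ...
```

Shorter intervals mean faster failure detection but more status writes hitting the API server and etcd, which is exactly the scale trade-off the post walks through.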
So if you're interested in understanding more about the release process, there's a recording of how this was done, but yeah, we're already talking about 1.17.0-alpha.3, 1.16 is out, and that brings me to the next piece that I wanted to talk about. And then, if you're interested in understanding more about how patch releases work and what the cadences are, information about how to get code fixes from one branch to another or into a different release, and also target dates for when particular releases are going to happen: this is a live document.
It's updated pretty frequently, so it definitely gives you a pretty good understanding of where we are with things. This can actually be found underneath the sig-release releases, patch releases, and the link to this document is right there in the notes. And I think, frequently, when people start adopting Kubernetes, they start asking questions like: okay, well, how often does it change?
You know, what does my adoption process need to look like for me to actually stay in tune with what's happening upstream? And we're developing things like, you know, Cluster API and those sorts of things to really address some of those issues. But 1.17 in two sentences? I am NOT gonna take a shot, yeah, I think there's a lot. I know, I don't think that 1.17 has been quite as heavy a change as 1.16 has been, but what I will tell you is this:
if you're looking to understand some of the stuff that's changed in Kubernetes over time, there's this great website that's put on, or managed, by Josh Berkus inside the community, where he does a weekly report of what's actually happening with the PRs that are merging and stuff like that. So, like, one of the things that actually happened recently was the billion laughs thing, and he's describing some of the PRs that have been merged to fix things. Things like removing hyperkube: in 1.17 that'll be removed.
The next thing I wanted to show you was this project called deprek8, and I was actually working with the gentleman behind this, Nicolas. deprek8 is actually pretty interesting, because what it's trying to do is enable you to leverage a project called conftest, by Gareth Rushgrove. conftest is a tool, I think, if I do another session on OPA, which I'm thinking about doing, it's definitely worth talking about the ecosystem around the tooling.
So, like, there are things you can do: you can leverage OPA to do admission controllers, and you can also leverage OPA and Rego to do a lot of really interesting functionality around conftest. In this week's example, I'm going to show you this one, which I think is actually super neat. Somebody, basically Nicolas, wrote the policy around reporting on
APIs that will be removed. So one of the questions that people are asking lately, and we actually have a proactive campaign around this right now at VMware, our CRE team does that, is basically just kind of: how do I go about validating that all of the manifests that I'm using inside of Kubernetes are actually up to date? Or, are the manifests themselves still going to hit those removed APIs in 1.16? How can I determine that, right? And so what this policy is doing
is it's going through, and it will actually just report back when it sees an API being used that has been deprecated. So, a very cool project, definitely worth checking out, and as an example of the output, he provides one here, right. If you go ahead and grab the deprek8 Rego and you use conftest directly, you could use something like helm template to go ahead and validate the configuration of things, right.
So in this example, he's going to grab helm template, he's going to do a test against the configuration that he has for Prometheus, and then he'll be able to see that, you know, there are a number of template files that result in APIs that have been removed, right. And that's actually a pretty handy thing for a lot of people who are actually working with it, so definitely check that out. I was really pleased with it, and actually, one of the things that Nicolas and I were talking about was:
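That workflow looks roughly like this; the chart path and policy directory are hypothetical, the pattern is rendering manifests and piping them into conftest with the deprek8 Rego policies:

```shell
# Render a chart's manifests and pipe them into conftest,
# pointing -p at the deprek8 Rego policies checked out locally
helm template ./charts/prometheus \
  | conftest test -p deprek8/policy -

# conftest then prints WARN lines for APIs deprecated in upcoming
# releases and FAIL lines for APIs already removed in the target version
```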
there are a number of ways to present that data to conftest. You can present that data as `---`-separated chunks of YAML, and you can also present that data as a list of YAML documents in Kubernetes List form, right. And one of the changes that he recently made was to enable both of those models, right. So it doesn't matter how you present it now to conftest: it could be a list of YAMLs, the way that kubectl lists it out, or it could also be just a...
That, when we get started here. So I think we should just go ahead and get started. "You mean EC2 bare-metal?" What are we talking about? Yeah, conftest is actually very cool. There's another one that Gareth wrote, which was kubeval, K-U-B-E-val, which is more specific to Kubernetes, right, kubeval.
Yeah, the instrumentation stuff is really neat. So kubeval gives you the capability to validate, you know, to do some strict schema stuff, and to do validation of things according to the, what do you call it, the swagger document known by your Kubernetes cluster. So this is another one of the kind of interesting things that you can use to validate manifests, to understand whether fields are missing or if you have a typo. But this is more specific to Kubernetes.
conftest is kind of more general. You can apply conftest across a variety of things, it doesn't even have to be Kubernetes. It could be any document, like any YAML or JSON, I think it's both, and you can use it to evaluate other things, so that's actually pretty cool.
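A quick sketch of the difference between the two tools; the file name and policy directory are hypothetical:

```shell
# kubeval checks a manifest against the Kubernetes OpenAPI/swagger schemas;
# --strict additionally rejects fields the schema doesn't know about
kubeval --strict deployment.yaml

# conftest, by contrast, evaluates any YAML/JSON (Kubernetes or not)
# against whatever Rego policies you point it at
conftest test -p policy/ deployment.yaml
```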
And I couldn't take any longer than this, right, let's see, really quick. So I'm going to create my kind cluster. We're going to use this kind cluster to kind of explore K-Rail and get it played with and stuff, and so we're going to walk through the install doc, and then we're going to play with what's there, and I'm going to show you some kind of interesting things that we can do to kind of hack around and play with what's available to us within it.
Let's go ahead and look at the docs while waiting for things to boot up here. So K-Rail has a validating webhook that gives us the ability to deny or allow deployments, or, you know, all of these things that make use of these sorts of capabilities directly, but because it's a validating webhook, we don't actually have to modify the API server.
"...StatefulSet versus using PVs in the Deployment?" Well, I could, I mean, yeah, I guess, while we're waiting, I think it's probably up here, so:
to give you a quick answer on the StatefulSet versus Deployment with PVs, I'm going to give you a little insight into this. Deployments are meant to be a thing that you can just scale horizontally, and if they have a volume associated with them, it's assumed that that volume is either going to be like an empty volume that you could use for caching or something like that,
which would be specific to the instance. But if you make use of a persistent volume in a Deployment, then only one of the instances within that Deployment is going to get access to that persistent volume, unless you're using a technology like ReadWriteMany, right. If you're using ReadWriteMany, something like NFS, or EFS with AWS, those sorts of things, then you can actually mount that. Or even ReadOnlyMany, right, I think it's just read-only, but volumes that can be expressed
that way can be mounted across the entire set of instances, or pods, associated with the Deployment. But there are some real caveats here, right, some sharp edges for sure, because each of the pods in the Deployment is expected to be effectively just a horizontally scalable unit, so they don't really have their own identity there. No, you're not going to be able to mount those things uniquely to each pod, and so the volume that's expressed to the pod is going to be exactly the same on all of the pods, right.
It's going to be the same volume, expressed in the same way, to each pod within a Deployment, and sometimes that sets up kind of a rough model if what you're trying to do is something like create a database, right, because you wouldn't perhaps want to share the path to the backing store for that volume, and that's where things like StatefulSets and DaemonSets come in, right.
StatefulSets give you a completely new feature, really a completely new capability, because with StatefulSets you have what's called a persistent volume claim template, and that template gives you the ability to define what storage you want to associate with each pod uniquely. So now each pod has its own persistent volume, its very own
Persistent
volume
not
shared
with
all
the
other
pods,
and
each
pod
has
its
own
unique
host
name
right,
it'll,
be
usually
it's
like
pod
name,
zero,
pod
name,
one
pod
name,
and
you
also
get
ordinality
out
of
it
right.
So
you
also
get
the
ability
to
define
the
or
to
make
use
of
the
fact
that
in
a
stateful
set
they
will
come
up
one
at
a
time
at
a
time
after
one
after
the
other,
whereas
like
with
a
deployment,
it's
gonna
be
where
ever
we
could
fit
it.
We're
gonna
make
it
bring
it
we're
gonna.
have it come up in whatever order works out. So, interesting stuff, but yeah, this is a very good question, and a very good question for the Kubernetes Slack, so definitely go check that out.
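The per-pod storage described above comes from the `volumeClaimTemplates` section of a StatefulSet. Here is a minimal sketch; the names, image, and sizes are hypothetical:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:11
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  # Each pod (db-0, db-1, db-2) gets its OWN PVC stamped from this
  # template, unlike a Deployment, where one PVC is shared by all pods
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```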
So it looks like we're probably all the way up and running here. This cluster is up, and I haven't done anything to configure it. I want to just look at the config real quick, so we can understand what's happening here: I've not overridden any settings, I've just basically set the pod and the service.
In their document, they actually show leveraging helm template, which is really great, because helm template means that you can go ahead and generate the manifests that will be used, and then apply them to the cluster using kubectl apply. That's actually what's happening here, right: we're generating the manifests with this command, and then we're actually going to go ahead and create those, or deploy them to our cluster, with our kubectl apply command right after that. That means:
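The two-step pattern looks like this; the chart path used on stream isn't shown, so the path here is hypothetical:

```shell
# Render the chart locally (no cluster access needed for this step) ...
helm template ./deploy/helm > k-rail.yaml

# ... then hand the rendered manifests to the cluster
kubectl apply -f k-rail.yaml

# Or as a single pipeline:
helm template ./deploy/helm | kubectl apply -f -
```

A nice property of this approach is that the rendered YAML can be inspected, or run through tools like conftest, before anything touches the cluster.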
let's see what we get out of that, let's take a look at what's happening here. So this is what I meant, actually, this is a great segue into what we were talking about before with that conftest command, right. So this is a bunch of manifests that are separated, right, and so I can do things like:
so we have a PodDisruptionBudget, we've got a ConfigMap that has been templated, we have an exemption class, we've got another ConfigMap that describes the K-Rail exemptions, which looks like it's just allowing everything inside of the kube-system namespace, and we have a Service that's been defined, and that Service is actually in front of the validating webhook.
And we see our error. This is actually what I was talking about with conftest, right. So what we've just found, pardon me, what we've just found is two outputs from that conftest command that I showed you earlier: one of them telling us that the ValidatingWebhookConfiguration is against a deprecated API, that it should be moved to v1 rather than v1beta1, and the other one is a fail in 1.16.
So that's what I'm talking about with the value of the conftest piece, and here in a minute, because of this, I'll also be able to show you some of the stuff that is also interesting. "Make sure you don't template into the default namespace, or the non-compliant deployment example won't work." Yeah, I'll get that worked out, thank you, Justin. All right!
That's correct, yeah, after 1.16 many Helm charts are not working, but this is a way to actually see that that's going to happen beforehand, right, and then you could actually contribute the change that would actually help. "It would really be good with your CI pipelines," agreed. All right, so let's move into here, and let's go ahead and do our deployment.
And we can see those pods doing work: they're loading things, they're loading all these policies up in enforce mode, and it gives us a period of time in which it's doing stuff, right. So let's go ahead and look at some of these policies that are being defined here. So, I mean, first I want to say, what we've got right now is a fully set-up thing, right, it's actually working, we've got it running inside of our kind cluster. So, yeah, the install docs, that worked right out of the box.
I love that. And we have a couple of different options here. It looks like, by default, it will deploy in an enforcing mode, and you also have a report-only capability where, if you don't want to necessarily enforce those things, but you do want to be warned that those things are being done, then we can do that. So let's talk about what those things are.
So, supported policies: these are some of the policies that are available within K-Rail, and these are policies that the Cruise Automation folks, and I think, you know, in agreement certainly with a lot of the community, consider to be kind of high-risk capabilities that are exposed to any authenticated user within the cluster.
So the next one they highlight, I think this may be just a little too specific, but it's definitely worth doing if you are in a Docker-only environment. So what this one does is, it says, you know, this is a little bit redundant also, because up at the top you have "no bind mounts," but down here we say "no docker socket mount." So even if we did say you could do bind mounts, then K-Rail, I guess, is also going to be able to test whether the Docker socket is being mounted in.
This one is actually a really interesting one, and I've spent a little bit of time talking about this. The Docker socket is effectively an unauthenticated API, and so if you have a cluster with Docker underneath, and you're going to make use of something like Docker-in-Docker, so you can do things like your image builds and push those things,
if you have that set up so that you're expressing the Docker socket as a file mount into the container, then what that means is that you have the ability to easily escalate your privilege on the underlying host. I'm going to show an example of that in this session, and we're also going to try and make sure that gets shut down by the policy here. So we'll play with that.
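For reference, the pattern that policy guards against looks like this sketch; any pod that can talk to `/var/run/docker.sock` can ask the Docker daemon to run a fully privileged container on the host, which is the escalation path being described:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-socket-example
spec:
  containers:
  - name: dind-client
    image: docker:19.03
    command: ["sleep", "3600"]
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock   # the host's Docker API, unauthenticated
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
```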
I can push images all day long, with different contents, to the tag alpine:3. Well, I couldn't push it to alpine:3.8, but if it were my own registry, right, I could just reuse that tag over and over again, and all it's going to do is basically point that tag at a new SHA. Very similar in some ways to git, if you think about it, right, the same thing: I can move a tag anywhere inside of my git history, and that means that it's effectively mutable.
That tag is only so valuable when we think about being able to understand that the thing that I pulled down is the thing that I'm running, and so in this case, their goal here is, they're trying to enforce that if you're going to use an image, you should use the immutable image reference, so that at least you know that, when you've got that image from Docker or from whatever your registry is, that image is the same image that you deployed.
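The difference between the two reference styles looks like this; the digest value is a placeholder, since a real digest is content-specific:

```yaml
# Tag reference: mutable, the registry owner can re-point "3.10"
# at entirely different content at any time
image: alpine:3.10

# Digest reference: content-addressed and immutable; this is the form
# the policy wants (digest shown is a placeholder, not a real value)
image: alpine@sha256:<digest-of-the-image-you-verified>
```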
The next one up: no host network. No host network means that you don't want pods to be able to make use of the underlying host's network, right. So those pods couldn't do things like, you know, capture traffic going to another pod, or down interfaces, or generally wreak havoc, or, you know, bind an external-facing port directly into a pod, those sorts of things, right. So hostNetwork gives you that capability, and, generally speaking, when you grant hostNetwork, you also give the ability to do things like,
A
You may extend privileges further than you think by enabling that container — running as root, with access to the host network — to do things like dump iptables rules, or modify the direction of traffic, or manipulate packets. There are things there that are definitely sharp edges, so it's reasonable to cut down hostNetwork. hostPID is another one that's interesting, and I'm gonna show this one off a little bit as well.
A
hostPID gives you the ability to actually see the host's PID namespace, and so when you're inside of a container with hostPID set and you type ps -ef, you actually see all of the processes on the underlying host, not just those processes that make up your container. If you think about it, containers really are, at the end of the day, just process isolation — they don't really do anything else — and so it's an interesting point.
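Both flags under discussion are plain fields on the pod spec; here's a sketch (name is illustrative) of a pod that no-host-network and no-host-PID style policies exist to reject:

```yaml
# Illustrative pod using host namespaces - exactly what these policies deny.
apiVersion: v1
kind: Pod
metadata:
  name: host-ns-example   # hypothetical name
spec:
  hostNetwork: true   # shares the node's network namespace
  hostPID: true       # shares the node's PID namespace (ps -ef sees everything)
  containers:
    - name: shell
      image: alpine:3
      command: ["sleep", "infinity"]
```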
A
You want to limit the pod to only those capabilities that the application itself needs. For example, most of your applications probably don't need the ability to create new network namespaces, or to do things like flush the iptables rules inside of the associated network namespace, right? And so these are capabilities that you can enable or disable by default, and what they're looking for here is to make sure that, once the pod has started up, no new capabilities are granted to that pod.
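Expressed on a container's securityContext, the tightened-down posture looks roughly like this (a sketch, not k-rail's own output; the added capability is just an example):

```yaml
# Illustrative container securityContext with capabilities cut down.
securityContext:
  allowPrivilegeEscalation: false   # no gaining privileges after start
  capabilities:
    drop:
      - ALL                  # start from nothing...
    add:
      - NET_BIND_SERVICE     # ...and add back only what the app needs (example)
```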
A
Next one: no privileged containers. You've heard me talk about this before — privileged containers are a really bad idea for lots of reasons. Privileged containers have the ability to do pretty much anything, including access any of the devices on the underlying host. From a privileged container, you can nsenter into any other container, or into any other cgroup or namespace associated with any other container.
A
I think we're pretty close to covering all of them, but I want to just talk through them real quick, 'cause I think there's some value in that. The next one here is trusted image registry, and this is actually, I think, pretty valuable, because it gives us the ability to define those registries that we will allow images to be pulled from. Now, this is a validating webhook, which means that in your pod spec, in your deployment, you're gonna have to specify that correct registry ahead of time.
A
Right — it can't modify that registry for you. And so in this case they're providing a list of repository regexes that are allowed: trusted registries that are going to be used to pull images from. So definitely check this out, it's very cool. I would say this is probably more than just the official Docker Hub — it looks like it's everything — but you get the idea.
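A hedged sketch of what such a regex list might look like in configuration — the key names here are illustrative, so consult the project's README for the real schema:

```yaml
# Illustrative k-rail-style configuration for a trusted-registry policy.
# Key names and values are a sketch, not copied from the project.
policy_config:
  policy_trusted_repository_regexes:
    - '^gcr\.io/my-project/.*'   # only images from our own registry project
    - '^k8s\.gcr\.io/.*'         # plus upstream Kubernetes system images
```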
A
There are multiple levels of logging that they've enabled in the code — debug, warn, and info; by default it's info, which is actually pretty useful, it looks like from the output. There are a couple of different modes of operation. There's a report-only mode where it doesn't actually deny anything; it just sits in a global report mode and tells you when it sees things that it would have affected, which is a great way to adopt a tool.
A
So by default we're running in enforcement mode, and then, lastly, they have this policy exemptions piece. They don't explicitly call it out here, but this is governed by a ConfigMap inside your configuration. You can specify it inside of your Helm values — you can actually modify that via the Helm values and then re-template — or you can also just interact with that ConfigMap directly, and for our purposes that's probably what we're going to do.
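An exemption, as demonstrated later in the session, is roughly a tuple of resource name, namespace, username, and the policies to skip. A hedged sketch (field names are illustrative, not the project's exact schema):

```yaml
# Illustrative exemption entry in the exemptions ConfigMap - shape follows
# what the session demonstrates, not a verbatim copy of k-rail's schema.
- resource_name: "*"          # any resource name
  namespace: "kube-system"    # only in this namespace
  username: "*"               # any requesting user
  exempt_policies:            # which policies to skip
    - "pod_no_host_network"
```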
A
"Some policies are configurable. Policy configuration is contained in the k-rail configuration above; documentation for the policies' configuration can be found in the supported policies heading above." So they're saying that all new policies must satisfy this interface.
A
Which means: Name must return a string that matches a policy name that is provided in configuration. Validate accepts an admission request, and the resource of interest must be extracted from it — see resource/pod.go for an example of extracting pod specs from an admission request. Makes sense. Policies can be registered in internal/policies.go, and any policies that are registered but do not have configuration provided will be enabled in report-only mode.
A
So you can run some policies in report-only mode kind of by default, unless they're actually configured and have the ability to be enforced — so that's actually pretty cool. If you're developing new policies, you have the ability to iterate on them before actually having to commit to them. They've also got some debugging-log capability here, giving you some view into it, and they have metrics that they've exported.
A
Nice, okay! Well, we can actually look at the configuration of that validating webhook here in a second. Before we do that, let's move on: we've got policies enabled, we have the deployment, and blocking exemptions as needed — we're gonna play with that live. And then they also have a TLS certificate check, so you can understand when that certificate will expire.
A
Let's go ahead and play with making stuff that might break this thing — or, you know, stuff that it would enforce. I think they actually have an example of a non-compliant deployment, so let's take a look at that one, and then we'll play with it and see what we see. First, I wanna go back to the chat.
A
Mani agrees that he deserves sleep — it's important. Sam says "we found soft-fail audit mode was critical to roll out" — yeah, I completely see that. Dan says "Mr. Pop, how you doing?" Dan Pop — that's Dan Papandrea; he works at Sysdig and is a good friend of mine. Looking forward to seeing many of you at KubeCon, it's gonna be so great — yeah, and Joe said the same thing. Oh, maybe I missed a little more in background: "where can I still tag images?"
A
KD, my bad — thank you, Rory, awesome. All right, let's get into it here. So, our example of the non-compliant deployment — whoo, yeah, that's very non-compliant, all right. So here's what we got, directly out of the box: that was really great feedback — almost immediately we got a ton of information back, right away, which was very helpful in understanding it. But let's look at some of the output that we get here.
A
We get "Error from server (k-rail admission review)" — so that's really helpful, gives us a breadcrumb to tell us what is actually producing this output. "Error when creating non-compliant-deployment.yaml: admission webhook k-rail.cruise-automation.github.com" — which is what we know it to be — "denied the request". It's been denied because it has a bunch of volume mounts: looks like host bind mounts are forbidden, and the Docker socket is forbidden.
A
Wait — oh, this is the volume, and then where it's mounted in a different place. So yeah, it's mounting the host path at /host; we're giving it capabilities like privileged, plus NET_ADMIN and SYS_ADMIN — which is a little bit redundant, because if you have privileged, you have all of the privileges, all of them — but they're putting it in here so that you can actually see the error, and I totally get that.
A
And then we define an Ingress using the ingress class "public", and we're trying to deny that as well — so, cool stuff. Let's go back to our log output — yep, and it says "require ingress". So we actually got that as a result, and this is why we see it happening again, right when we see more output, specifically from the k-rail admission controller. So this is two different outputs.
A
One thing I'll point out — and this is actually one of the challenges of admission control webhooks in general — is that, if you think about it, there's a bit of a timing attack here, because it's a validating webhook, right? Anything that I had defined in the cluster before running it doesn't get re-checked.
A
Yeah, not anymore. I mean, with things like img and some of the other tools that are out there for actually handling the build of Docker images, and managing those things in kind of a more secure way, I have a hard time believing that there's a good reason to do it.
A
I can't think of a single instance of why that would be a reasonable thing. I have seen implementations where somebody has built like a build farm and they're just really happy with Docker being the solution for this, and so what they do is they basically just rotate those nodes — like every day or every week — where the underlying nodes associated with that Docker-in-Docker build farm are just wiped out daily or weekly.
A
And I think, if you were gonna actually try and build something like a DinD build farm on top of Kubernetes, where you could actually just use a particular cluster to handle image creation and image publishing, then that's not the worst of the options — because at that point you've just made sure that those nodes are only ever used for building and pushing Docker images, and they are recycled on a regular basis. That would be a reasonable implementation of this.
A
img — a project by Jess, still actively maintained — gives you the ability to actually go ahead and build images without the requirement of that mounting, and then Kaniko is another one. There are a few other tools out there where you don't actually have to expose the underlying Docker socket to do those things, and I think these are definitely improvements in the way that we actually manage those things. So there are other tools out there that will let you accomplish that goal without giving up the underlying node.
A
...an open repository — there's skopeo, like, there's a number of tools in here that are actually pretty interesting for things that I'm actually seeing. There's a number of tools inside of here that I think are actually pretty interesting, so it might be worth checking out. All right.
A
Yeah, Ingress — isn't that one of those ones? It's a warn: it says extensions/v1beta1 is deprecated now, and you should use networking.k8s.io/v1beta1. So it's not removed, but it's another one of those ones that's been deprecated. So you kind of get your brain in there, thinking about noticing when they're deprecated — I saw extensions/v1beta1 and was like, wait, that's another one of those. That's pretty cool. So now let's play with some stuff, which I think will be fun.
A
Here's an example of a deployment spec that should not be allowed by this admission controller — let's take a look at it. So this is a deployment using apps/v1. It is going to run, and it's actually doing a lot of things right: in this particular manifest I'm running as non-root, I have an fsGroup, I have a runAsUser — you know, all of this.
A
All of these things are set up. I'm pulling an image from my own registry, but I'm using a tag, so I'm expecting it to complain to me — and maybe we'll play with the idea of actually going and getting the right SHA for that, so we can see what that looks like. But we can also see that I am mounting in the Docker socket, even though I've got all of the other things set correctly: I have allowPrivilegeEscalation: false, and I've got capabilities.
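Putting that description together, here's a sketch (names, registry, and image are illustrative, not the session's actual manifest) of a spec that gets most things right but still uses a mutable tag and mounts the Docker socket:

```yaml
# Illustrative deployment fragment matching the description above: a good
# securityContext, but a tag-based image and a docker.sock mount - both of
# which these policies should flag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mostly-compliant          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels: {app: mostly-compliant}
  template:
    metadata:
      labels: {app: mostly-compliant}
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 2000
      containers:
        - name: app
          image: registry.example.com/app:v1   # a tag, not an immutable digest
          securityContext:
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: docker-sock
              mountPath: /var/run/docker.sock  # the one dangerous bit
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
```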
A
Apparently it automatically exempts the namespace that k-rail is in. So if you have access to the k-rail namespace, then you would actually have the ability to deploy and defeat this thing — which is an interesting point; I didn't really think about that before. But now we actually see the thing that we were looking to see, right? So it's complaining about host bind mounts.
A
It's complaining about the Docker socket, it's complaining about the immutable image reference and the trusted image repository, and it's saying that the safe-to-evict annotation — which is not a policy that I saw in our output — is required for pods that use emptyDir or hostPath mounts, to enable cluster autoscaling. Which is also valuable output. So let's do kubectl get deployment — and that one was totally shut down; it didn't actually even land as a deployment, which is cool.
A
Let's take a look at what's happening in chat: "Kaniko is quite okay — yes, true, Kaniko +1. We are using Kaniko on our CI/CD cluster to let people build their images from within Jenkins." A +1, same — nice. Yeah, exactly: it was in the same namespace that k-rail was in, and so I was able to deploy it — the validating webhook configuration ignores the k-rail namespace. That's what I missed.
A
That's true, yeah — I mean, if you shut yourself down, how would that work out? How does it compare to OPA? That's typically how they prevent completely breaking the thing — yeah, exactly. All right, let's go ahead and take a look at the validating webhook, let's see what that looks like. So we've talked about the different policies, we've played with them, we've looked at them. The next thing I'm going to do is actually create an exemption, and we're gonna play with the exemption flow as well. But first, let's look at it with kubectl get.
A
That gives us a pretty readable output. So here's the configuration for this config and the exemptions. Usually, if you see an annotation like that — a checksum for the config — it means it's actually watching the configuration of that ConfigMap, and if it gets updated, it'll take the update automatically. I haven't actually validated that that's the case.
A
We're gonna play with exemptions here in a second to see if that is the case. But if I see that in the annotations, that usually means it's a hook as part of the Helm piece: if I modify an exemption as part of that, then when Helm goes to deploy manifests, it checks, and if it was a config-only change, it will see that config change and automatically restart the deployment — or redeploy it, or mark it for a rolling deployment — because you will have changed the annotation on the deployment. And so that's one way that we can actually manage that stuff.
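The Helm trick being described is the common checksum-annotation pattern: hash the config into a pod-template annotation so that any config change alters the pod spec and forces a rollout. A sketch (template path is illustrative):

```yaml
# Illustrative Helm chart fragment: the ConfigMap's rendered contents are
# hashed into a pod-template annotation, so a config-only change still
# modifies the pod spec and triggers a rolling update.
spec:
  template:
    metadata:
      annotations:
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```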
A
So let's look in here, let's see what else we see. So this is a ValidatingWebhookConfiguration — oh hey, that's the thing I was going to talk to you about, I almost forgot. You see this kubectl.kubernetes.io/last-applied-configuration annotation? We're gonna talk about that in a minute; I just wanted to point out that it's there — this is gonna come in handy in just a moment. So: the API version is admissionregistration.k8s.io/v1beta1, and it is a validating type, which is awesome.
A
When you create a validating webhook, you need to actually provide the certificate that is able to validate the serving certificate of the webhook directly. Now, understand that this is config that I'm applying to the API server — I'm actually registering this webhook with the API server. This is not the code that's running; this is just me informing the API server: hey, API server...
A
...the pod is actually who it says it's supposed to be — especially if there are multiple of them, right? So it's actually able to validate that the serving certificate is signed by a known authority, and that's where the known-authority part comes in. Now, this is a self-signed CA — this was actually generated as part of the helm template call.
A
Moving down the list here, what else do we see? We see that it's going to be a service, that the name is k-rail and it's in the k-rail namespace, and it's not at any fancy path — it's just serving at root there — and then the port, 443. Failure policy is interesting. Rather than try and define failurePolicy for you here myself, I'm gonna do kubectl explain on it.
A
And so this failure policy: "FailurePolicy defines how unrecognized errors from the admission endpoint are handled — allowed values are Ignore or Fail", and it defaults to Ignore. So if we get back trash, we're definitely gonna ignore that trash. There's also an interesting one in side effects: "SideEffects states whether this webhook" — oh, typos, we've got a fix to send for that — "has side effects. Acceptable values are Unknown, None, Some, NoneOnDryRun."
A
"Webhooks with side effects must implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with side effects Unknown or Some." There is another one...
A
Object selector: we're looking for all groups, we're looking for all versions, we're looking for create and update. So anything that is just a get or a patch is not going to show up here — well, I guess update would, right, but delete isn't gonna show up here; we don't really care about the lifecycle pieces. And then those resources that we're watching out for are pods, deployments, replicationcontrollers, replicasets, daemonsets, statefulsets, jobs, cronjobs, and ingresses — which is interesting.
A
Well, that's actually kind of interesting. What they're trying to do is basically match on anything that could result in a pod — including pods themselves. So, good catch putting pods in here: it's not just these higher-level things that create them, you can create them directly as well. So that's a good point. And then the scope is everything within that manifest, sideEffects is set to None, and the timeout is set to 30 seconds.
A
So it looks like we actually had overlap there, because there was still always a pod running. This is actually a shout-out to the fact that they're running more than one of these pods, right — so there was never a gap in which we didn't have something running, it looks like, in this case. But let's do this: kubectl scale deployment k-rail --replicas 0.
A
And what that shows is that same fail-open behavior, right? Because it fails open, that means that, even though we cranked it down, we still allowed that in. So now, if I do kubectl get pods, that pod is still running — because it happened in that race, it happened in a window when there was no validating admission controller to block it.
A
Run against the Docker socket — that fails because there is no Docker socket there. So that's not what I wanted to do.
A
That DinD deployment — I'm going to move into the dind directory and cat the no-privileges manifest. Oh, you can see I set nodeName here. So if I do kubectl get nodes, I can see I have that node now running, and because I have populated nodeName, I'm actually pinning that pod to be created on my node, so things should work out.
A
kubectl get pods -n k-rail — so now I see it here and it's running, and this is actually what I wanted to show you about with the Docker-in-Docker stuff. So let's jump into that DinD pod: kubectl exec -it — this is why DinD is kind of a sketchy idea, okay — into the dind-noprivs pod.
A
...bash. So here I am inside of my container — on my laptop here, connected to my host, right — and if I do docker ps, I can see all of the containers running on my host. So, for example, I'm gonna pop up another window here, just so we have enough context to really put together what's happening visually on the screen.
A
So I do docker ps, and I can see all of those containers running on the underlying host. And let's go ahead and start a new container from inside of the pod, because this is actually the thing that I really want you to understand about why Docker-in-Docker is maybe a terrible idea. So, from within my pod, I do docker run -it ... bash.
A
Okay, so now I've actually got a file that I created inside of a container running inside of Docker-in-Docker, and if I exit back out of that Docker-in-Docker session, then I won't have access to that file anymore. But if I go back to the node where my host is running — that's over here — and I cat /etc/flag there...
A
...it is there, right? So I couldn't have done that as a regular user, and that's actually why the Docker socket is so powerful: it gives us the ability to do all kinds of crazy things, because inside of the Docker-in-Docker host, contextually, what I'm exposing when I do this command is not the /etc directory of the Docker-in-Docker pod — this is the /etc directory of the underlying node.
A
Just don't do it, it's really bad. And this is why — definitely a shout-out to the Cruise Automation folks, and to pretty much any tooling, even including PSPs, that limits that capability, because it's a big deal. Now, the other one I wanted to show you — let's see, where are we? So we've talked about some of those capabilities. Oh, that was the next thing we're gonna do.
A
We're gonna look at grabbing the checksum for an image and using that to get past our problem. So, in my cluster here, if I do kubectl run nginx --image nginx --replicas 3, then it will be denied because of what it was complaining about: there's no sha256 digest. So let's go back to our docs here, because I remember them talking about how you go about finding that SHA.
A
Okay, so I can see all the Docker containers that are running on my local machine here, contextually within Docker on my host, right. And if I docker exec into the kind control plane and I do crictl ps, I can see only those containers that are running inside of this particular node. And if I jump into a worker node...
A
...again, I can only see those containers that are running on this node. And on the underlying host, I cannot see any of those k-rail containers — right: docker ps | grep k-rail... oh, I do, because one of the k-rail images is on my laptop, but that's fine. My point is that you're not seeing them there. Now, what's interesting...
A
Now this is the fun part: this is a Docker container that's running, and inside of that there's a containerd service that's running, and that containerd service then creates other containers. So this is another implementation of Docker-in-Docker, right, wherein I've actually provided enough privilege to the running container that it can then create new containers — which is kind of mind-boggling, right?
A
Yeah, I mean, there's definitely a couple of clear definitions of how it works. I like that definition better, Bogdan, because of the isolation that it represents. But fundamentally that's not what people are doing, which is kind of terrible. All right, let's play with these policy exemptions. I want to build an exemption that will allow me to remove the...
A
And this is the full configuration, and you can do things like — and I like this — enable or disable or report-only, true or false, on each of these policies, right? So if I wanted to turn off that immutable reference policy for the entire cluster, I could just disable that here. In fact, let's try that first.
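Per-policy switches like the one being flipped here look roughly like this in the config. The key and policy names are a sketch of what the session describes, not the project's exact schema:

```yaml
# Illustrative k-rail-style per-policy configuration: each policy can be
# enabled, disabled, or left in report-only mode. Names are a sketch.
policies:
  - name: pod_immutable_image_reference
    enabled: false          # turn this policy off cluster-wide
  - name: pod_no_host_network
    enabled: true
    report_only: false      # actively enforce, not just report
```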
A
And no error. And then, if I do kubectl get events — that's another capability that we have here; we could see everything running, but let's grep for k-rail in events — we can actually also see that they've wired up events with the tooling itself, right? So if I were to try and deploy that tool again: kubectl apply -f the DinD deployment, kubectl get events.
A
I think it's probably because it's being shut down so quickly — I think that's relatively new functionality. But yeah, they were using the Helm checksum, so if we were to use helm template again, it would work. What's interesting about that is it means that to add or change policy, you do have to reload those pods, and you can do that using the helm template mechanism, where you just reapply the change.
A
So then we see what has changed, right? Our ConfigMap was changed, our exemptions are unchanged, everything else is unchanged, including — oh, the secret was updated. Well, that is interesting: so that means the validating webhook and the secret were updated — it was rotating the certificate when I did this. The deployment itself was also configured, right, so in this change, because I modified the policy...
A
Wait — there it is, okay, good. All right: enforced false, kind Pod, level info, not enforced, namespace default, policy immutable image reference, resource nginx. So there we're seeing the error, and that's what I was looking for: if we're reporting on it, I wanted to see what the report looks like. And we're seeing the report actually refer to the deployment, to the replica set, to the pod. For some reason that did not actually get expressed by the events API, but it is there. So that's cool, all right.
A
Again, we can see what's changed, right? So we see the exemptions ConfigMap was configured — the config stayed the same — we see that the deployment was configured, and again the secret and the validating webhook were also both configured. We do kubectl get pods -n k-rail, and that's almost done rolling, update-style.
A
I mean, you grab the metadata there — well, actually, I shouldn't have to. Okay, so let's look at this one more time: this pod is actually going to run as privileged, and it's gonna run with hostPID, and we're running as root, and that should be enough to get us there. So let's give that a try: if I do kubectl apply -f ... oh.
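The "take over the node" pod being applied here is, as described, just privileged plus hostPID plus root. A sketch (pod name is illustrative):

```yaml
# Illustrative privileged hostPID pod: with these settings (and root),
# nsenter into PID 1's namespaces yields a root shell on the node.
apiVersion: v1
kind: Pod
metadata:
  name: k8s-root-example   # hypothetical name
spec:
  hostPID: true
  containers:
    - name: shell
      image: alpine:3
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true
```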
A
Nope — well, I'm not sure why that's not hitting... "name does not match" — what was that? My bad, I'm screwing that up: k8s-root, and then...
A
Our pods are up and running — exemptions work. So it was typos on my part and a misunderstanding of what star means for the exemption policy, so thank you for bailing me out on that one. All right, I think we've been at this long enough; I'm going to go ahead and put my notes up in the repository.
A
So if you want to play with this, you can actually follow along and do this entirely in your own session. And thank you, Cruise Automation folks, for letting me dig into this — I think it was a really fun exercise in seeing what's available, and getting the opportunity to talk about some of the things that are possible here. So I want to show you what k8s-root is — this is actually the last thing I'm going to show you. So if I look at the k8s-root pod itself...
A
But what if I wanted to take over that node? This is pretty interesting, right. We can see the first process is PID 1, and it's running the init system. So let's talk about nsenter here: nsenter is a tool that we've talked about before that gives us the ability to enter into any namespaces — or some set of namespaces — of a particular process. With this one, I can actually do nsenter targeting PID 1, for all namespaces.
A
Sure — but yeah, it was part of a bundle that I found that actually had everything that I wanted, so I didn't play with it too much; it already has everything I'm gonna need. All right, well, that is our session for the day. Thank you all for tuning in. Yeah, and the comparison to OPA — I think I'm going to do that next week, or coming up here real soon, when I do another OPA session.
A
A lot of the things that OPA can do, and that this one can do, are pretty interesting. One of the key differences here, I think, is that the Cruise Automation folks have actually developed a lot of the logic for determining the resources that they're matching on in Go directly, right? And so if we go back — TGIK 095...
A
So this is a policy that they defined, and this represents Go code that they've written and licensed under Apache 2 — awesome to see that stuff — and this is actually how they're going about validating, or providing a response for, a resource...
A
...a response to this particular piece, like what the violation text is. So much of this could actually use Rego — you could greatly simplify the work that you're doing here by vendoring in things like OPA instead of doing that work yourself, and you would still be able to define the response.