From YouTube: This Week in Cloud Native - 9/13/21
Hello, everyone, and welcome to this episode of This Week in Cloud Native. This is episode number nine. In the previous episode we spent quite a bit of time exploring the pod security admission controller, and in this episode we're going to get a little more hands-on with it.
So, after saying hello to everybody: here are our notes for this week. We've got some good stuff to cover. I'm doing some pre-work and getting ready for KubeCon. I will be down there in person, so I hope that some of you will be as well, and hopefully we'll get to see each other; that would be tremendous.

It looks like I somehow got a little off-center on camera.
Yeah, sure enough, it is not correct, but I don't know how to change it, because the place where I normally change it is already set correctly, so I'm not sure where to make the change. I guess we're going to have to fix it in post. All right, well, thank you for pointing it out. I don't know exactly how to change that; I think it's already set correctly on my side.
If you like the stuff that I'm putting out here, make sure that you subscribe to this channel and you'll be notified when things are coming. We're still trying to figure out the calendar so that you can actually schedule ahead of time, but that's taking a little bit of time.
If you want to see any recorded sessions, like if you missed an episode or you want to check something out, you can go look at the YouTube streams; that's where a lot of the recordings that we're doing will be. Here's a bunch of the stuff from the last week and the week before, and each of these curated playlists will have the content from the previous shows. Actually, it looks like some of mine are missing; I'll have to figure out what's up.
For the Kubernetes side of things, there were a couple of good articles, or a couple of good things that happened, including this one from Celeste Horgan. Celeste is a good friend from the community and points out that there's a really great list of good first issues inside the Kubernetes docs. I wanted to share with you both how to find them and also how to get involved.
If you go to go.k8s.io/good-first-issue, which is this URL right here, you're going to see any issues in the Kubernetes repository that are marked "help wanted" and/or "good first issue", and these are great for getting started. And if you wanted to look for a particular category, you could jump in.
You could look right in here, and there's lots of good stuff in here for good first issues, so this is a great place to start if you want to get involved inside of the Kubernetes environment. There's definitely stuff that is easy for some and hard for others. For example, there's one pointing out that some of the Korean translations of the documentation are incorrect; that's a great first issue for someone who speaks Korean.
It would likely be very simple for them, but it would not be very easy for somebody like myself who does not speak that language. There's some stuff in the perf tests, there's stuff in kubebuilder, there's stuff in the hierarchical namespaces; there's tons of stuff in here, including stuff for docs. So it's a great place to get involved.
There was also just recently a thread on the Kubernetes Twitter bot, K8sContributors. If you follow K8sContributors, there's always good information coming from it regarding things that are happening inside the community. They'll announce that the contributor summit is coming up and those sorts of things, but they also call out, right here, information about other parts of the Kubernetes community that are looking for contribution. So here's an example: building an intuitive dashboard.
If you'd like to play with AngularJS, Golang and client-go, SIG UI is looking for such contributors to help them build an intuitive dashboard; more information here. There are a few other callouts from just this last week around folks that are looking for feedback as well, so go ahead and retweet those, and if you're following me on Twitter you'll see those retweets too. SIG Cluster Lifecycle is looking for contributors to help with etcdadm, which is a neat project.
Next, Kubernetes CVEs. Usually I check the security announce group, and there's nothing in here since July 14th, so that's good: no big new security announcements. I did find another dashboard that I wanted to share with you all, because I thought it would probably be a good source of information as well.
I'm not sure that everybody is aware, but Kubernetes has actually formed a relationship with HackerOne, and they have a bug bounty program. It was launched in January of 2020, which feels like it was a million years ago and only yesterday at the same time. This is the activity list, and I think you can sort by when a report was published or when it was disclosed.
Actually, I guess you can't really sort this in a way that I think makes sense, but here are some things that have been publicly disclosed about issues happening within the cluster. If you follow the security announce list, you're going to see the same events, but I think this is probably a pretty good way of understanding what's coming when there's a coordinated disclosure.
So if you're following the security list that is in the notes, you would still see this, but this is also where the folks who are actually doing the work of trying to find vulnerabilities in Kubernetes are going to make the announcement. Let's take this top one here, for example: "Node validation admission does not observe all old object fields." The validating admission webhook for Node objects is passing old object fields incorrectly on AdmissionReview requests, it's identified.
This is an interesting one, because the premise is that you could potentially allow users to bypass validating admission by updating node labels, taints and other fields, which is an interesting attack surface. They created a validating webhook, they created a dummy workload, and they demonstrated a potential issue. So this is a great example of vulnerability documentation that actually provides experiments, or provides actions, and really explains the impact of a vulnerability that has been disclosed.
I thought this was really well written and really well done, and then, as I was saying before, this was resolved as a CVE.
Here is the CVE, and again, if you go to the information within the CVE document, there'll be a link to kubernetes-security-announce, which is the group that I usually check every time we log in here. This is where you can actually see what this issue was rated as, what the affected versions were, and what the fixed versions were. This particular issue was disclosed and fixed, so: good stuff.
Anyway, that's more interesting information to go look at. On to the CNCF things I wanted to talk about.
I always check the KubeWeekly for the previous week to take a look and see what's coming, what will be announced, and what is happening this week. For CNCF online programs this week we have "Kubernetes Clusters Need Persistent Data" by James Byrne of StorageOS, and then that tweet that we were just talking about from Celeste Horgan. Then there are some good technical articles: a definitive guide to Prometheus (interesting), digging into the Prometheus Operator by Nadad Desai from InfraCloud Technologies, Kubernetes CI/CD by Alex Chalkias from Ubuntu, and "Sqlcommenter merges with OpenTelemetry". Interesting.
Oh, that should be a really interesting talk. I think that one will be pre-recorded, so likely you'll be able to see it both virtually and in person; we'll see how that works out. For upcoming events, there will be "Kata and Arm: a secure alternative to the SGX space", and "Building an HA Control Plane for Tinkerbell" by my good friend Jason DeTiberus. That should be a fun one.
That's coming up on the 15th; it hasn't happened yet, two more days. Then there are on-demand seminars: "Moving from CLIs to Control Planes with Crossplane" with Viktor Farcic and somebody else from Crossplane, and one on using CSI snapshots. So that's what's coming in the CNCF community; definitely check those things out if you're interested in them.
One of the ones I also really liked was this blog post by my very good friend Scott Lowe, who gets into Envoy configuration. He talks about how Envoy works, what you're configuring, and how those bits of it work. So, if you're interested in Envoy and the way that Envoy integrates with a service mesh, this is probably a pretty good read on the Envoy piece of it. I thought that was a pretty good article. All right, now it is playtime. Let us get started.
So, if you're aware of this particular issue: this is the Kubernetes enhancement for pod security admission, which is the thing we're going to be exploring hands-on today. These are the assignees to it, and this is the work that's ongoing.
Oh, and actually, I just saw an issue go by in the Kubernetes development list; I usually look at lwkd.info for this. What they pointed out was that this recently merged, and what it does is change the admission mechanism inside of the API server to enforce Pod Security before Pod Security Policy.
So if you're in a state where you're running both, Pod Security will run before Pod Security Policy, and that enables functionality like audit or warn. We can talk a little bit more about that when we get hands-on.
But when you have pod security admission running, and you have it set in an audit mode or a warn mode, you probably want to be able to see those things before Pod Security Policy takes the object and implements it. So in this case they basically just changed the ordering in which the admission controllers are evaluated, so that Pod Security runs before Pod Security Policy.
Pretty cool; I thought that was pretty neat, it's a good fix. And this is actually where we're going to jump in. My good friend Lachlan Evenson wrote this article. I was helping him understand the way the seccomp stuff worked, but this is also just an article he put together about the very thing we're going to explore today. Since some of this work is already done, we're going to dig in, and we're going to start here.
In warn mode, policy violations will trigger a user-facing warning but otherwise be allowed. So, as we define policy and enforce it in a given namespace, we'll be able to see the three different modes; that's what I'm planning on covering today. But in here, specifically, it calls out the need, possibly, to be able to see what's happening in the audit log, and so I wanted to extend our kind configuration to support an audit log so that we could actually see that.
This is a gist I put up, gosh, thirteen months ago — a year and a month ago — when I was playing with auditing inside of kind. You can even see the API version is an older version; I think we're on a different version of the API now.
We probably don't need to worry about that piece of it. Then the rules are: don't log node communications, don't log read-only URLs, log ConfigMaps and Secrets only at the Metadata level, and catch everything else at a bit more detail. Okay, there we go. So then, here's our kind configuration.
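An audit policy along those lines — a minimal sketch based on the standard Kubernetes example, not the exact contents of the gist — might look like this:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Don't log requests coming from the nodes themselves
  - level: None
    userGroups: ["system:nodes"]
  # Don't log read-only, non-resource URLs
  - level: None
    nonResourceURLs: ["/healthz*", "/version", "/readyz*"]
  # Log Secrets and ConfigMaps only at the Metadata level
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Catch everything else with a bit more detail
  - level: Request
```

Rules are evaluated in order, so the broad catch-all at the end only applies to requests that no earlier rule matched.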
This is basically what the auditing file will look like when we're configuring auditing inside of Kubernetes. The other file here is our kind config that leverages it: it makes the advanced-audit.yaml file available to the API server so that the API server can actually use that audit policy file, it specifies an audit log path, and it makes it so that we actually have an audit log that we're able to view.
So what I'm going to do is go ahead and copy this: we're going to build a kind configuration that supports both pod security admission and the audit logging, so that we can see those events. Bear with me as we work through this, and then we'll be able to see exactly what that looks like.
I want to make a copy of this file available as an extra mount, with readOnly set to true. What this does is make the file available on the underlying node. So if I were to docker exec into the API server — sorry, into the control plane node — I would be able to see that file sitting on the file system inside the container.
You know what that path is relative to? The host path is going to be relative to where you're actually running the `kind create cluster` command.
In the kubeadm configuration, this metadata name is just giving it something to parse against, and then here's where things get interesting: for the API server component, we're going to pass some extra arguments to that binary. We're going to set the audit-policy-file flag to use our audit.yaml policy, we're going to set the audit-log-path to /var/log/kubernetes/kube-apiserver-audit.log — and that will be made available on the underlying container — and then the format for that will be JSON.
And then here we actually make the file available to the API server binary: where it is on my laptop, and where it will be inside the Docker node, at that container path that we specified. We're going to look at that here in just a second once we get this wired up. So hostPath will be on my laptop, and containerPath will be inside of the container.
The other piece is where we're going to put those logs. We called out above that it was /var/log/kubernetes, so we're just going to mount that path on the underlying container into the container that's running kube-apiserver. This one is readOnly: false, and it's a directory; you're correct.
So in theory we should have everything we need to be able to turn on pod security. That's all we had to do here: featureGates, PodSecurity equals true. And then, in the nodes section, on the control plane, we add this advanced-audit.yaml — which we still have to create on our local file system — and we also pass in some configuration options to the API server so that it can enable audit logging.
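Putting those pieces together, a kind config along these lines would do it. This is a sketch following kind's auditing documentation; the file names and mount paths are my assumptions, not necessarily the exact ones used on stream:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  PodSecurity: true                # alpha feature gate at the time of this episode
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        apiServer:
          extraArgs:
            audit-policy-file: /etc/kubernetes/policies/audit.yaml
            audit-log-path: /var/log/kubernetes/kube-apiserver-audit.log
            audit-log-format: json
          extraVolumes:
            # Mount the policy from the node into the kube-apiserver pod
            - name: audit-policies
              hostPath: /etc/kubernetes/policies
              mountPath: /etc/kubernetes/policies
              readOnly: true
              pathType: DirectoryOrCreate
            # Writable directory on the node for the audit log
            - name: audit-logs
              hostPath: /var/log/kubernetes
              mountPath: /var/log/kubernetes
              readOnly: false
              pathType: DirectoryOrCreate
    extraMounts:
      # Copy the policy from the laptop onto the node; path is relative
      # to where `kind create cluster` is run
      - hostPath: ./audit.yaml
        containerPath: /etc/kubernetes/policies/audit.yaml
        readOnly: true
```

Note the two layers of mounting: `extraMounts` gets the file from the laptop onto the node container, and `extraVolumes` gets it from the node into the kube-apiserver static pod.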
Let's spin up a 1.21 cluster real quick and just take a look at that. I'll show you how to actually validate that stuff, because sometimes I also don't know; pretty sure it's v1 now, but let's take a look.
Yeah, this is probably just going to be known about internally. I thought that audit was actually something that was surfaced in the API, so you could check it — oh no, that never happened. That's right: I was thinking maybe it would be surfaced in the API because we were talking about dynamic audit for a while, but I don't think that ever materialized, and so it's not going to be known here; it's going to be known in the code base.
We are making some pretty significant, interesting changes to the control plane configuration, so you might get to see some troubleshooting on the kubeadm side. Oh hey, the control plane started!
It's probably one of those things that happens with YAML, where it will silently ignore stuff that doesn't match or is missing.
You can see that the configuration that was passed to kubeadm was of this particular version, kubeadm.k8s.io/v1beta2, but in our configuration we're passing a patch for v1beta3, and v1beta3 is not a configuration version that is being passed to kubeadm, so it is failing silently because it doesn't know how to match.
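The fix is to make the patch's apiVersion agree with the config version kind actually generates for your node image. A sketch of what that patch could look like (the v1beta2 version shown here matches kind's 1.21-era node images; the flag value is carried over from the earlier config and is an assumption):

```yaml
kubeadmConfigPatches:
  - |
    # Must match the kubeadm config version kind generates for this node image;
    # a mismatched apiVersion is silently ignored.
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        audit-policy-file: /etc/kubernetes/policies/audit.yaml
```

If in doubt, you can read the generated config off the running node (kind stores it at /kind/kubeadm.conf) and check its apiVersion before writing the patch.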
Alternatively, we could do a thing where we generate the kubeadm config ourselves and overwrite the destination on that API server node with the complete kubeadm file that it would use. So if we wanted to take an example kubeadm configuration, like you see here, the only tricky part is this local advertise address, because we need to know for sure that it would be the IP address that it would resolve to.
But if we did know that — if we knew it would be the second Docker container running, or what have you — then we could go ahead and make this change. Also, there's a default that would probably be okay here, because this is actually the only IP address on those nodes, so we probably don't need to specify the node IP; it's just being passed because of the way we're generating the configuration.
I hope that answers your question. I know that's not a super easy answer, but that is one way that we could be doing it.
And what I'm going to do is make this configuration — you know what, before I say I'm going to do it, why don't we just do it.
When we go back to the documentation, remember we have a couple of different modes: we have warn, we have a mode for baseline, a mode for restricted, and the baseline is pretty permissive.
These are the labels that I have specified on this namespace. I've set the kubernetes.io metadata name — oh, that's actually already done for me — but then pod-security.kubernetes.io/audit=restricted, pod-security.kubernetes.io/enforce=baseline, and pod-security.kubernetes.io/warn=restricted. So now I should be able to get warnings, I should be able to see that in my audit log, and I should get user-facing warnings about manifests that don't line up, but they should still allow the creation of a pod.
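As a sketch, a namespace carrying that label set would look like this (the namespace name here is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: psa-demo   # hypothetical name
  labels:
    # enforce: violations of baseline are rejected outright
    pod-security.kubernetes.io/enforce: baseline
    # audit: violations of restricted are recorded in the audit log but allowed
    pod-security.kubernetes.io/audit: restricted
    # warn: violations of restricted trigger a user-facing warning but are allowed
    pod-security.kubernetes.io/warn: restricted
```

Because the three modes are independent labels, you can enforce a permissive level while auditing and warning against a stricter one, which is exactly the combination being demonstrated here.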
That's pretty cool. I think that's actually pretty useful for the developer side of things: if you had a mechanism by which you were deploying automatically, using CI or something like that, then these warnings would actually surface in your logs and you'd be able to see them. Neat.
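For illustration, creating a deployment in a namespace with warn set to restricted produces a client-side warning along these lines. The exact message text varies by Kubernetes version, so this is paraphrased rather than taken verbatim from the episode:

```shell
kubectl -n psa-demo create deployment nginx --image=nginx
# Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false,
#   unrestricted capabilities, runAsNonRoot != true, seccompProfile   (abridged)
# deployment.apps/nginx created
```

The key point is the last line: the object is still created, because warn mode only surfaces the violation, while the enforce level (baseline) is what actually gates admission.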
So here's the response object for this deployment. This is coming from the API server, and it is a response to whatever command line client created this object: it's kind Deployment, apps/v1. This is in line with what we saw previously in the documentation, where now we can actually warn you about the object as it is created as part of a deployment. We don't have to wait for the pod object to get created, and this is different from the way pod security policies worked.
If the decision that pod security policy would enforce is to deny the admittance of that pod, then what happens is that the pod itself just will never be scheduled; it will never show up as a pod object inside of etcd, because the object was denied. But you would still get the ReplicaSet, you would still get a Deployment object; those things would be created and allowed in, because they're not part of the way that pod security policy would enforce admission.
Pod security policy would only enforce on the pod object itself. In this case, though, with pod security admission, the new stuff can actually evaluate the deployment object itself; it can look at that object and give you feedback based on that information. So this is the manager for this — it was a kubectl create, we just saw that — and here's some information about what's known about it, the information it's evaluating against: there are going to be three replicas.
A
The
kettle
set
basically
just
gives
me
the
ability
to
set
particular
things
about
a
deployment
or
a
daemon
set,
or
what
have
you
and
the
interesting
thing
was,
even
though
that
was
being
set,
we
still
saw
the
same
would
violate
latest
version
of
restricted
and
that
is
being
configured
by
by
what's
configured
at
the
name
space
level.
So
we
have
things
working
we're
able
to
see
the
audit
events,
we're
able
to
see
the
user
space
feedback,
we're
not
getting
any
feedback
on
the
object
itself,
which
I
think
is
kind
of
interesting.
B
B
A
A
A
I think that would be fun to play with. I'm not sure if that's going to be in scope this time, but I imagine that, because pod security admission will now be evaluated before pod security policies, if things break, the breakage is probably going to be in pod security policies, not in pod security admission. The reason I say that is that pod security admission is only validating: it's only looking at the spec and making a decision on whether to allow or deny, based on its evaluation against the policies that you've defined, whereas pod security policies do quite a lot of other stuff.
They might also do things like enforce the configuration of things, or mutate the pod spec itself. So I would say you're more likely to run into trouble with pod security policies than with pod security admission, because pod security admission is either going to allow or deny, while pod security policies are the thing that mutates and changes, or enforces the change in, configuration.
That's the way I understand it, but one of my explorations of this effort was to see: are we doing any sort of mutating? And I don't think that is the case. Question: "Do duplicates of a mode cause an error, or does the last one declared win — or maybe the first?" In our example here, we could go and check.
"So I assume pod security is cluster-wide, so you couldn't have different rules per node?" This is admission, and it happens at admission on objects like deployments or daemon sets or pods, and because it's admission, the rules would not apply differently on different nodes.
Next were some of the exemptions: you can make it so that a given user has the ability to deploy a pod outside a particular enforced set in a particular namespace — you can override it. But what the node's CRI would enforce would be the same regardless of the node, I think, because it's all happening at the admission layer.
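Those exemptions live in the admission plugin's own configuration, which the API server loads via an admission control config file. A sketch — the embedded config apiVersion was alpha at the time of this episode and the names are placeholders:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1alpha1
      kind: PodSecurityConfiguration
      # Cluster-wide defaults applied where a namespace sets no labels
      defaults:
        enforce: baseline
        enforce-version: latest
      # Requests matching any exemption skip evaluation entirely
      exemptions:
        usernames: ["ci-deployer"]     # placeholder user
        runtimeClassNames: []
        namespaces: ["kube-system"]
```

Namespace labels still win for anything not exempted; the file only supplies defaults and carve-outs, which is consistent with the point above that enforcement is uniform across nodes.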
"If I understand it right, there are only three built-in policies?" That's right. "And they can't be changed?" Also true: privileged, baseline and restricted. "Is there any way to add your own, or is it too soon, and that's why there are links to other projects?" Yeah — so, the way this has shaken out over time is this.
Initially, the reason there's even a pod security admission piece is that the API for pod security policies had grown somewhat out of bounds: there are just so many things you can configure, and it's kind of difficult to configure, kind of difficult to test, and so it became very difficult to actually manage the API's growth over time.
That led to the deprecation of pod security policies. However, Jordan Liggitt stepped up, and he said, you know what...
And I honestly think that's the right move. I feel like we have to have something in place for the project itself, and that something should follow the best practices that we're describing, as a project, for how to secure pods and containers inside of your Kubernetes cluster. But to provide the entire configurable surface that pod security policies provided was arguably too much; it was arguably too difficult to maintain over time, and so I think, honestly, this is probably the right decision.
So that's why there are only the three, and why they're not mutable, why you can't change them, and I don't suspect they'll be changeable in the future — but I guess we'll see how that evolves over time. "Other than getting the feature flag enabled, should this pod security admission mechanism work on cloud providers like GKE, EKS, AKS once they upgrade their services to offer a compatible Kubernetes version?" I believe that is the case, yeah.
It might be kind of interesting to understand how the graduation plan would work. So, if we go back to our notes...
Let me go back to that KEP, 2579, or what have you.
Actually, it's broken. Well, okay, let's just go look at that real quick. We're going to take a quick sideline: because we already have some requests that have been allowed, we're just going to look at some of the metrics that are exposed here. So let's look at that real quick: kubectl get --raw.
This is one of my big takeaway commands: the raw metrics. I only have the one API server, and what I'm doing here is querying the metrics endpoint on that API server using kubectl. I'm doing `kubectl get --raw`, which means I'm just going to make a raw GET against the Kubernetes API at a path, and I know that the /metrics path is hanging right off of the root inside of the Kubernetes RESTful API.
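The commands in question, including a grep for the object-count metric discussed below. Metric names have been renamed in newer releases (`etcd_object_counts` later became `apiserver_storage_objects`), so treat this as a sketch for clusters of that era:

```shell
# Raw GET against the API server's /metrics endpoint
kubectl get --raw /metrics | head

# Filter down to the per-resource object counts stored in etcd
kubectl get --raw /metrics | grep etcd_object_counts
```

Because `--raw` just issues a GET against the connected API server, the numbers you see belong to that one server only.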
So I can do `kubectl get --raw /metrics` and I'll be able to see the metrics that are relevant for the API server that I'm connected to. If you have multiple API servers, you're going to see multiple different outcomes, because each API server is its own entity, and they might have different values for some of these. Likely they'll all agree on this one, but it's still an interesting caveat.
If you're talking to a load balancer that talks to multiple API servers, you're likely to see multiple API servers, for sure. However, this is, in my opinion, one of the most interesting metrics that we expose at the API server: the etcd object counts. You can see what the actual object counts are inside of the Kubernetes cluster.
So you can see there are 43 secrets defined, there are 42 service accounts, there are two services, zero stateful sets, one storage class; you can see all of these different objects. And one of the really interesting ones is this one right here: there are 107 events currently stored in etcd.
But yeah, what this can tell you is where things are likely to go sideways soon. You can do things like monitor the number of namespaces, the number of nodes, the number of pods, the number of events currently stored — all of those things — and you can kind of see where your problem is if you're experiencing something like "my Kubernetes cluster is really slow and I want to understand where it's spending its time."
And for that histogram, the value of how long a particular call is taking is broken up by buckets: 0.01 seconds, 2 seconds, 4, 8, 16, 32, 64, 128, 256, 512. When you're evaluating this, what you're looking for is where the event shows up in the set of buckets that you have.
So, looking at the histogram, it might say that in this particular query all of the queries are being resolved in about the 0.064 bucket: they're not taking 0.128, and they're not happening any faster than that. If we look for something that actually has a few more requests, we might see a more interesting outcome. Here we go.
Three of them were responded to within 0.002, three more were responded to within the 0.004 time frame, and one of them was responded to in the 0.008 time frame. So if somebody were to say that `kubectl get namespaces` is taking a long time, we would start seeing things graph up to maybe a half a second or longer.
It doesn't store the metrics in etcd. I believe what's actually happening is that the metrics code inside of the API server evaluates the number of objects inside the cluster as part of a metric that it exposes, and I suspect it doesn't constantly poll to get that value; it just updates that metric on a scheduled period of time, because otherwise it would probably greatly increase the load on the API server.
If every time somebody hit the metrics endpoint it had to evaluate all the metrics, that would take too long. So likely what happens instead is that it evaluates that value and then serves it as the response for anybody hitting the metrics endpoint for a period of time, and once that period expires it goes back, re-evaluates and updates the values; next time you scrape, you're going to get a different value. I suspect that's actually how that's working.
All right, back to our questions; let's make sure we got everything. We talked about that, yep; we talked about that; we talked about that, great. So I guess the last thing I wanted to cover was this piece here, which I thought was actually pretty interesting: the pod security tests.
All test pods have the following labels defined: the policy (baseline or restricted) and the control — the value that identifies the security control being tested. You can view the pods for a policy level. Well, let's try this out; I think it looks pretty interesting.
One of the big things I can share with you is that I will be co-hosting, with Mr. Dan Pop from Sysdig, eBPF Day at KubeCon, which is a pre-show event. So if you're going to be at KubeCon, or if you want to attend virtually, definitely check that out; I'll be co-hosting that with Dan Pop, and it'll be amazing. I hope you all have a great week, thank you for tuning in, I hope this was useful, and I'll see you all next time.