From YouTube: Kubernetes Configuration - Auditing for Enterprise Best Practices Through Open Source Tooling
A: About: why you should listen to us. So again, as I said, I work for IGNW. We're a services-only, full-stack systems integrator really focused on helping customers across the landscape, with things from application modernization, hybrid and multi-cloud, data analytics, DevOps, and automation. As such, we've done this a couple of times, and by "a couple" I mean probably hundreds, for customers. So we've helped deploy and manage Kubernetes at scale for some of the world's largest companies, and we've learned a couple of lessons along the way and gotten some battle scars.
B: Yeah, well, and I would just add, that's what we're hoping to help you with today. Both John's company and my company have learned a lot of things the hard way. You can go learn things the hard way, but you'll save yourself a lot of headache by listening to folks who already have those scars.
B: Fairwinds is an enablement company, so we offer services, open source, and software in the space. Our services are primarily focused on building and maintaining Kubernetes infrastructure; a lot of it is "here's my infrastructure problem, make it go away," and we can build that and maintain it for you. Open source: because we're managing lots of infrastructure at scale across lots and lots of clients, we regularly see "hey, this is a macro problem that everyone has," so we'll go write a tool that solves it, which helps us with our service and is also beneficial to the community.
B: We're going to be talking about one of our open source projects here today. And then finally, software: we do have SaaS products that are often leveraging open source underneath, but if you want to be sure that you're using Kubernetes correctly, we have a software solution there for you as well. But go ahead. Okay, so let's dive in and, just as a level set, talk about...
B: What's new about Kubernetes? Part of the reason we're having this conversation: getting configuration right is very easy if you're using something that you're very familiar with. Part of the reason that it's difficult, and part of the reason we're talking about this today, is not that Kubernetes is inherently, infinitely more complex than other technology out there (although there's some argument that it is sufficiently complex), but a big part of it is just that it's a new paradigm. Everything about Kubernetes is new and sort of different.
B: It's a different way of thinking about the world. It is sort of cloud reaching maturity in some ways, but it's a very different world from when you had a data center and you could walk around the room and flip servers off and flip them on. I think John has been in this industry long enough that we can both look back on times where, you know...
B: As a sysadmin, you had one server where you could go in and look at the uptime, and it'd be, like, 783 days and you'd be super proud, or 15 years and you'd be super proud. Kubernetes doesn't even attempt to do that. Everything is ephemeral from the get-go; ephemeral meaning built to come and go. Anything you want to say about that, John, before I get into uniform APIs?
B: One of the things that makes Kubernetes really different is that it does enable uniform APIs across clouds and across your data center. Everything below Kubernetes still has to be configured to run on that hardware, be that in AWS, Google Cloud, Azure, your data center, or one of the other clouds, but once you've got Kubernetes there, there's a uniform API for just about everything. People do run their compute there; there's a way to run serverless in a uniform way.
B: There's a way to put your databases there. There are questions about all of those different things, but it gives you a uniform API, and people really like having that uniform API across different locations. So if you're running Kubernetes in lots of different places, it's easy to just go leverage Kubernetes and go from there. So it's all new, it's all different, and that's part of why we're talking about this today, but once you do understand the basics, it's relatively easy from there. Go ahead, John.
A: So yeah, and as Kendall said, because this is new, a lot of organizations really start off with "I have a Kubernetes," right? I have a single cluster, I have a sandbox, now what? With that, it's definitely convenient, and it's fairly easy to manage that configuration manually, just like with the servers of old where I could log in and make changes to a config file. I can do a lot through the Kubernetes CLI.
A: I can do a lot through the API directly, but that's kind of where it ends from an experimentation perspective. Kendall, can you tell us some of the areas where you've seen customers start to run into problems with that?
B: Yeah, so I think the first time you're kicking the tires on Kubernetes, probably the most likely way you're going to do that is to log into one of the clouds, spin up one of the managed clusters, hop into the GUI, click some buttons, and try different things, right? But it's one thing if you're kicking the tires with it that way; if you're at your company and you have your first Kubernetes cluster and it's entirely configured through a GUI (again, nothing necessarily inherently wrong with that)...
B: What happens is, when you want to go spin up the next one, you can't remember all the buttons you clicked, you can't remember all the things you had. So what we see is companies who spun up one Kubernetes cluster and have one guy who remembers all the configuration they had, or maybe wrote it all down to go spin up the second cluster, and then pretty soon they're three years on and there's one person in the organization who remembers all of the configuration, because none of it was documented as code.
B: You have organizations that start that way, and maybe they even wrote all their configuration down as code, but as they start to spin up their third cluster, or their 15th cluster, or they've enabled the company to spin up clusters wherever they want, suddenly there are 150 clusters across the entire organization with no uniform anything, and it's all insecure. And oh my gosh, you're a financial institution, and what are we even thinking?
A: They are, yeah. Unfortunately, those are problems that real customers have come to us with in the past, and the good news is that this isn't an unknown problem to the community. There's a CNCF project out there called Flux; it's a great project, and for those who aren't familiar with it, I highly encourage you to take a look at it and install it.
A: Ultimately, Git becomes the source of truth, and Flux, at its very core, is a tool for the automation of Kubernetes objects from Git. So it does some things really, really well, and it solves some of the challenges that Kendall just talked about. Specifically, it allows us to deploy Helm charts and Kustomizations (for those who aren't familiar with Kustomize...)
A: It's a language for applying patches on top of Kubernetes objects, or even just bare Kubernetes manifests, directly to our cluster by committing those as code to Git. And so we have kind of a walkthrough here, a use case, on how we can use Flux, and then we'll talk through how that becomes a really good solution for managing configuration.
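As a rough sketch of what deploying from Git looks like, here is a hypothetical Flux (v2-style) Kustomization resource; the repo name, path, and interval are illustrative assumptions, not taken from the demo:

```yaml
# Hypothetical Flux v2 Kustomization: apply everything under ./staging
# from a GitRepository source named "infra-repo", re-checking regularly.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: staging-cluster
  namespace: flux-system
spec:
  interval: 10m        # how often Flux re-applies the desired state
  path: ./staging      # directory in the repo to reconcile
  prune: true          # remove objects that are deleted from Git
  sourceRef:
    kind: GitRepository
    name: infra-repo
```

Pointing several clusters at the same repo path is what keeps them uniform: the manifest in Git is applied everywhere, not re-typed per cluster.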
A: But while it helps us keep those clusters synchronized, it doesn't really solve for best practices, specifically for highlighting or preventing folks from doing things that violate, call it, community best practices around the creation of my deployments and my Kubernetes objects. So real quick, I'm going to go ahead and pull up a demo and kind of walk through how Flux is useful. In this case, I actually have several Kubernetes clusters already created.
A: You know, test-thing-one, and wait a few seconds, and then I would do kubens, and I would see my testing-one namespace right here, which is super handy. But as you start to scale clusters, whether they're multi-region and you need to have multiple clusters in multiple regions, whether they're different security enclaves, or whether they're just multiple clusters for different workloads, there are a lot of reasons why you might need more than one cluster. Keeping things like namespaces and RBAC and different deployments...
A: ...everything in sync across all of those clusters starts to become daunting, and at a point there's a huge point of diminishing returns where it's almost unmanageable. So using something like Flux, we can of course keep all of the configuration of the cluster itself, the deployment of things into the cluster, as code and manage it through Git.
A: So in this case, I have an infra repo here, which you'll see on my screen, and if I just jump into my staging environment (this is the root that my Flux install in my staging clusters uses), I've got a namespaces.yml file here. So if I wanted to create a namespace on my staging clusters, I can come in here, and this is just a copy-paste so I don't typo this again.
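A minimal namespaces.yml of the kind being described might look like the following (the namespace names are made up for illustration); adding a namespace to every staging cluster is then just one more copy-pasted block and a commit:

```yaml
# Illustrative namespaces.yml: one Namespace object per YAML document.
apiVersion: v1
kind: Namespace
metadata:
  name: testing-one
---
apiVersion: v1
kind: Namespace
metadata:
  name: testing-two
```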
A: Correct. So all I've done so far is this: I have a repo that defines my infrastructure as code (in this case, I'm using YAML files), and all I've done is make a change to a file, commit that file to Git, and push up to Git. I'm using GitHub for this.
B: Sorry, I should have jumped in there and helped, yeah. "Reconcile" is the word (this is live-demo awesomeness here, John), but what's running in the cluster with Flux is watching the Git repos regularly and saying: okay, any change that's made there, I should apply to the cluster, right?
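Assuming the Flux v2 CLI, forcing that reconciliation by hand, rather than waiting for the polling interval, looks roughly like this; the Kustomization name here is hypothetical:

```shell
# Ask Flux to fetch the latest commit and apply it immediately,
# then confirm the new namespace exists on the cluster.
flux reconcile kustomization staging-cluster --with-source
kubectl get namespaces
```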
A
Right
so
you'll
see
here
already,
because
I
kicked
off
that
reconciliation.
I
already
have
my
name
space
on
my
cluster.
So,
okay,
that's
kind
of
that's
great!
That's
wonderful!
It's
managed
as
code,
but
yeah.
Okay.
I
could
have
done
that
manually
where
that
really
starts
to
pay
dividends
is
when
I
start
to
look
at
how
I
scale
that
out.
So
if
I
do
look
at
my
other
staging.
A: You'll see I should already have this namespace here as well, right there, so I didn't have to take any action for these two clusters. All I did was commit my change to code. Now, I'm doing this by obviously committing directly to my main branch. You shouldn't do that. You should obviously do pull requests; you should have a solid Git flow, or GitOps flow, or whatever your flow is for source control, and you should be following that.
A
Do
peer
review
and
all
those
things,
but
for
demo
purposes,
I'm
just
committing
directly
to
to
maine,
but
the
takeaway
here
is
that
it's
easy
to
make
changes
through
code
and
have
those
changes
automatically
pushed
out
to
my
clusters.
Now
I
have
two
separate
environments.
I
have
both
my
production
and
my
staging
clusters.
My
staging
clusters
use
a
separate
branch
or
directory
structure
in
that
same
repo,
but
that's
great
like,
as
I
said,
this
is
a
really
good
method
for
managing
configuration
across
multiple
clusters.
A: I have all of those. I actually have Polaris deployed through Flux, so I'm deploying all of these solutions, all of these software packages, through Flux. But what Flux doesn't do in this case is make sure that, for example, I have requests and limits set, or that containers don't run as root, or that I have read-only file systems.
B: Yeah, so as I mentioned, Fairwinds is building and maintaining lots of infrastructure across lots of clients, so we see what the macro problems are, and, to John's point, what's being deployed into Kubernetes and whether it's healthy. And we realized as a company that while we're responsible for building and maintaining that infrastructure...
B: ...it also matters whether the things that are deployed into those clusters are configured correctly, and so that's where Polaris comes in. We said: hey, let's build out a bunch of checks so that we can look across things (industry best practices, things that we've learned the hard way, et cetera), and let's put that all together as code. It makes a single place where you can check the health of the things that are being deployed into your cluster, and then actually stop unhealthy things from being deployed into your cluster.
B
It
also
will
give
you
a
score
so
that
you
have
some
feeling
for
how
you're
doing
and
then
give
you
some
specific
actions
to
go
and
improve
things
in
certain
examples.
But
that's
that's
polaris
exists
because,
no
matter
how
great
your
infrastructure
is,
if
what
you're
deploying
into
it
is
terrible,
it's
insecure!
It's
it's
gonna,
cave
in!
It's
all
of
those
different
things
and.
A
Yep,
I
appreciate
that
so
so,
as
kendall
said,
you
know,
polaris
was
built
from
industry
experience
from
teams
that
have
been
running
kubernetes
at
lots
of
different
customers.
So
not
just
one
organization,
and
some
of
those
organizations
are
not
quite
as
mature
and
and
some
are
you
know,
maybe
on
the
bleeding
edge
and
a
little
bit
more
mature
in
their
process,
but
we
still
need
to
provide
the
same
quality
deployment.
A
The
same
quality
of
assurance
assurance
quality
assurance
there
we
go
to
to
the
the
developers
that
are
interacting
with
the
cluster,
so
polaris
actually
has
three
deployment
way
methods
that
we
can
use
to
both
audit
and
enforce
a
best
practices
in
our
clusters.
So
the
first
is
just
as
a
dashboard.
A
I
can
fire
up
polaris
inside
my
cluster.
I
don't
even
have
to
put
it
in
my
cluster.
I
can
run
it
on
my
desktop
if
I
want
and
connect
it
to
the
api
and
it's
just
going
to
go
out
and
scan
the
kube
api.
Look
at
the
objects
that
are
in
the
cluster
and
kind
of
report
back
on
the
health
of
the
cluster,
and
so
I'll
show
that
off
here
in
just
a
minute,
it's
a
really
lightweight
deployment.
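Running it from a workstation, as described, can be as simple as the following; this is a sketch based on the Polaris CLI's dashboard mode, scanning whatever cluster the current kubeconfig context points at:

```shell
# Serve the Polaris dashboard locally; it reads cluster state over the API,
# so nothing has to be installed into the cluster itself.
polaris dashboard --port 8080
# Then open http://localhost:8080 in a browser to see the report and score.
```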
A
It
runs
in
the
cluster,
it
takes
very
little
resources
and
it
just
kind
of
sits
in
the
background
and
monitors
the
api
to
see.
What's
what's
happening,
really
good
for
giving
you
that
score,
that
health
score,
that
kendall
just
talked
about,
in
fact
that's
actually
something
they
put
in
the
development
team
puts
in
the
in
the
top
of
the
report
there.
In
addition
to
that,
though,
a
lot
of
a
lot
of
folks
are
saying.
Well,
that's
great,
but
you
know
I
don't.
A
I
don't
want
to
just
know
about
problems
I
want
to
prevent
them.
So
polaris
has
the
ability
to
also
in
act
in
your
cluster,
and
this
has
to
be
installed
in
the
cluster
act
in
the
cluster
as
a
as
an
admission
controller
and
as
specifically
as
a
dynamic
admission
controller
web
hook,
which
will
allow
us
to
when
a
user
tries
to
create,
say
a
deployment
or
a
stateful
set
or
a
daemon
set.
It
will
allow
us
to
compare
that
against.
A
You
know
our
set
of
of
checks
and
if
the
user
is
trying
to
do
something
that
violates
our
call
it
best,
you
know
our
best
practices
guidance.
We
can
prevent
that
from
being
run
in
the
cluster
and
so
we'll
demonstrate
that
here
as
well
and
then,
lastly
and
again
great
right,
but
we
want
to
know
before
it
even
gets
to
that.
We
want
to
know
before
he
even
gets
applied
to
the
cluster.
A
Then
we
can
apply
this
through
a
command
line
utility
during
our
ci
process
to
catch
best
practices
violations
during
during
rci
during
our
build.
So
we
can
give
developers
early
feedback
that
hey
this,
this
isn't
even
going
to
be
deployable,
because
these
you
know
xyz
reasons
and
of
course,
all
that's.
Tunable,
there's
a
score.
That's
tunable
and
you
can
turn
out
different
checks
on
and
off.
You
can
even
create
custom
checks
so
with
that
I'm
going
to
jump
right
into
kind
of
what
the
dashboard
itself
looks
like.
A
So
this
is
the
the
dashboard
for
our
one
of
our
staging
clusters
that
we
showed
you
earlier
and
really
it's
you
know.
It's
going
to
tell
us
in
general,
hey
grades
b,
that's
not
too
bad,
and
I
want
to
emphasize-
and
this
is
in
the
docs,
but
I
also
want
to
emphasize
this.
A
If
you
install
this
and
if
you
install
polaris
and
you
look
at
the
dashboard
and
the
the
score
is
a
little
bit
lower
than
you
were
thinking
or
that
you
were
hoping
for
that
happens,
this
is
it's
designed
to
be
relatively
strict
from
a
standards
perspective
right
when
the
fairwinds
team
built
this,
they
really
designed
it
to
be
kind
of
rigorous
right,
essentially
set
the
bar
high.
A
You
can
always
turn
those
some
of
those
checks
off
if
there
are
things
that
you
just
have
to
live
within
your
if
your
environment
or
or
customize
the
checks
as
necessary,
but
if
you
just
install
it
out
of
the
box
and
you
get
like
a
c-
don't
feel
too
bad.
That's
that's
not
an
uncommon
thing.
So,
if
we
look
at
you
know,
here's.
B: Let me just add one thing there, John: for some companies, a C is what you're shooting for, and you're fine with that. It depends a little bit on a number of things, but you can also go down and dig through the checks that are raising flags and lowering that grade, and decide how important those things are to you; a different score matters to everyone.
A: So yeah, and I'm going to show that here in just a second. So it's going to tell us a little bit about our cluster, it's going to give us the summary, and if we scroll down, it's going to tell us we have three different categories of checks: our efficiency checks, our reliability checks, and our security checks, with a little bit of description here. But essentially, efficiency checks are exactly that, right?
A
Are
you
leveraging
the
kubernetes
resources
you
have
to
their
to
their
full
potential?
Are
you
are
you
making
good
use
of
the
cluster
and
are
you
making
good
use
of
the
the
tools
that
are
available
to
you?
A
The
reliability
checks
are
going
to
be
things
like
looking
for
health
checks
and
looking
for
you
know
those
kind
of
standard
foot
guns
where,
if
you
don't
set
these
you're
setting
yourself
up
for
a
failure
in
the
future
and
then
security
checks
are
going
to
be
things
like
you're
violating
security,
best
practices,
and
I
think
I
saw
that
in
the
questions
I
apologize.
We
will.
We
will
get
to
the
questions,
but
I
saw
one
of
those
scroll
by
so
it
does
support
security,
best
practices.
A
There's
some
out
of
the
out
of
the
box.
Polaris
has
a
bunch
of
security
checks
in
it.
We'll
talk
through
those.
It
also
allows
you
to
filter
by
namespace.
So
in
this
case
you
know,
if
I
don't
care
about,
say,
kub's
system,
because
I'm
using
in
this
case
I'm
using
a
managed
cube
cluster,
then
I
can't
control
what
goes
into
coop
system.
I
have
to
rely
on
my
provider,
so
maybe
I
don't
want
to
filter
that.
A
Maybe
I
only
want
to
look
at
my
micro
services
demo,
so
I
can
filter
down
by
namespaces
just
to
look
at
the
at
the
dashboard
and
then,
if
I
look
here,
it's
going
to
tell
me
a
list
of
all
the
checks
right.
So
I
have
my
pod
spec
right.
Am
I
using
host
ipc
and
I'm
not
going
to
run
through
all
the
checks,
but
you
can
see
this
in
this
particular
case.
This
deployment
has,
I
don't
know
it
says,
about
15,
different
checks
that
it's
running
and
I'm
actually
violating
a
few
best
practices
here.
A
So
my
image
pull
policy
is
not
always
which
is
kind
of
the
recommended
best
practice.
My
file
system
is
not
mounted
read
only
and
it
is
allowed
to
run
as
root.
So
if
I
were
to
fix
all
of
those,
this
would
go
to
all
teal
and
this
would
be
essentially
contributing
to
a
higher
score
which,
if
we
go
back
and
look.
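Fixing those three findings happens in the pod spec. A hypothetical container block that would clear them (the image name is illustrative) looks like:

```yaml
# Container spec addressing the three flagged checks:
# always re-pull the image, mount the root filesystem read-only,
# and refuse to run as root.
containers:
  - name: app
    image: example.com/app:1.0.0
    imagePullPolicy: Always
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
```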
A: I think the only thing in the cluster that has a perfect score is actually the Polaris dashboard itself. Yep, nice.
A
If
it
didn't,
we
would
probably
catch
endless
amounts
of
flack
for
that,
so
so
it
does
have
a
the
obviously
it's
following
all
the
best
practices.
Now
that's
great
again-
and
this
is
this-
is
a
really
good
starting
point,
because
it's
it's
easy
to
install.
You
can
install
this.
If
you
just
want
to
do
you
know,
cube
ctl,
apply
or
helm
install
you
do.
You
can
install
this
in
like
five
minutes,
the
containers
are
public
they're
hosted
out
on
clay
real,
easy
to
use.
A
You
can
read
the
docs
and
and
again
it's
about
five
minutes
to
get
started,
but
that
really
just
kind
of
gives
you
a
magnifying
glass
and
helps
highlight
the
problems.
If
we
want
to
start
presenting
place
to
start
it's
a
great
place
to
start,
that's
actually
where
we
would
recommend
customer
start
from
our
perspective
right
audit
understand
where
you're
at
get
an
assessment,
but
the
next,
the
next
step
there
is
really
okay,
let's
start
with
a
little
bit
of
enforcement.
A: You'll see I actually have the admission controller deployed, and again, this deploys in managed providers just fine. As long as they're running, I think it's 1.15 or 1.16, they can support the dynamic admission controller webhook and give you some flexibility there. But you'll see here, we have the webhook controller running; again, a small, lightweight footprint. The configuration is all passed in either via ConfigMap or via the Helm chart, but the admission controller is running.
A
Apps
directory,
I
have
what
I
would
call
a
bad
app
here.
You
know
just
a
somebody
who
didn't
didn't
follow
best
practices.
We're
going
to
just
manually
apply
that
for
a
second
to
the
cluster.
A
So
I
just
so
now,
if
I
go
in
here-
and
I
say,
tube
ttl
apply,
dash
f,
deploy,
add
app
deployment,
and
I
try
to
apply
that
the
kubernetes
api
is
going
to
reject
that.
It's
going
to
reject
that,
because
in
this
case,
oh
because
I
don't
have
the
testing
name,
space
finder
detail,
minor
detail
in
this
case.
It's
going
to
reject
it,
because
it's
going
to
tell
me
that
polaris
rejected
this,
because
privilege
escalation
should
not
be
allowed.
A
So
it's
not
going
to
allow
me
to
install
this
deployment
because
I'm
using
a
feature-
or
in
this
case
a
a
security
spec
that
is
not
permissible
in
my
in
my
environment.
So
if
I
want
to
fix
that
and
go
back
in
here
and
of
course
edit,
my
my
deployment
yeah
I'll
go
back
down
here
and
say:
yep,
okay,
you're
right,
I
don't
need
privilege
escalation.
I'm
going
to
turn
that
off
thanks
for
catching
that
and
if
I
apply
that
now
now
my
application
actually
gets
deployed.
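The fix in question is a one-line change in the container's security context; as a sketch of the relevant fragment:

```yaml
# The admission webhook rejected the deployment until privilege
# escalation was explicitly disabled on the container.
securityContext:
  allowPrivilegeEscalation: false
```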
A: You'll see here I've got a bit pending; my cluster needs to scale up, but I've got a container that's pending and will get scheduled. So again, Polaris acting as an admission controller gives you another layer of defense against folks applying things to the kube API that maybe don't meet your requirements. And again, there's a bunch of security checks that are baked into Polaris out of the box, but there's also a ton of flexibility in creating your own checks.
B: Most things go through the CI pipeline, but there might be something where somebody's trying to do something manually, to apply it to the cluster, where there's some kind of admin reason to need to go into the cluster directly. The admission controller can live outside of the CI pipeline and stop something that's misconfigured before it gets deployed, even if they're going around the normal CI process. But do you want to go show the CI part, too? Is that part of this as well, John?
A: Yep, yeah. So I also have a demo of how we would actually integrate this into a CI pipeline, and how we could use Polaris as part of a pull-request-style workflow to validate that my application (or my deployment manifest, sorry) is compliant before I even start to apply it to the cluster. And real quick, I did see another question: do you define an admission webhook in Polaris?
A
Yes,
so
polaris
actually
uses
the
admission
web
hook
in
and
if
you
go
out
and
look
at
in
the
github
repo,
you
will
see.
There's
there's
an
admission
web
hook,
definition
that
then
references
that
admission
controller
that
we
have
deployed
in
the
cluster.
So
so
it
is
using
the
standard
sort
of
call
it
def.
You
know
kubernetes
native
way
of
creating
the
submission
controller,
it's
nothing,
nothing,
fancy
or
or
bespoke.
A
So
with
that
real
quick,
I'm
going
to
pull
up
my.
A
So
here
we're
going
to
go
back
and
we're
going
to
set
this
back
to
to
trip
our
our
polaris
polaris
score.
A: So let's say I wanted to branch, you know, a Git branch. I wanted to create a feature branch; we're going to call it jlw-cncf-test, and then I'm going to check that out.
A
So
I'm
going
to
go
ahead
and
I'm
going
to
push
that
up
to
up
to
a
feature
branch
in
github.
A
And
I
have,
I
obviously
have
github
actions
already
configured
to
run
polaris.
You
know
via
this
via
the
ci
pipeline.
So
if
we
real
quick,
we
want
to
look
what
that
actually
looks
like
in
github.
Good
news
is
because
polaris
is
containerized
running
this
in
any
ci
system.
That
supports
containers
is
relatively
easy.
A
I
have
my
polaris
action
and
all
it
does
in
this
case-
and
this
is
all
obviously
hard-coded-
you
would
want
to
pass
in
environment
variables
and
be
a
little
bit
more
intelligent
about
this.
But
here
I'm
pulling
down
the
polaris
image
from
quay
the
one
that
fairwinds
publishes
and
maintains
for
us
and
then
I'm
just
going
to
run
polaris
audit
and
I'm
going
to
pass
in
some
commands.
So
in
this
case,
I'm
going
to
pass
in
the
location
of
the
ymo
manifests
that
I
want
polaris
to
audit.
A
For
me,
I'm
going
to
pass
an
exit
code
on
danger,
meaning
if
I
get
any
danger
or
any
essentially
critical
warnings.
I
want
to
set
the
exit
code.
I
want
to
fail
and
then
also
if
my
score
is
below
90-
and
this
again
is
configurable,
you
don't
have
to
use
90,
you
could
use
60
or
whatever
number
works,
but
so
that
helps
avoid
those
cumulative.
You
know
tons
of
best
practices
violations
but
still
allows
my
developers
a
little
bit
of
wiggle
room
in
this.
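A GitHub Actions job along the lines described might look like the following sketch; the image tag, manifest path, and threshold are illustrative assumptions, while --set-exit-code-on-danger and --set-exit-code-below-score are the Polaris audit flags being referred to:

```yaml
# Illustrative workflow: run Polaris against local manifests and fail
# the build on any "danger" finding or an overall score below 90.
jobs:
  polaris-audit:
    runs-on: ubuntu-latest
    container: quay.io/fairwinds/polaris:4.0
    steps:
      - uses: actions/checkout@v2
      - run: |
          polaris audit \
            --audit-path ./deploy \
            --set-exit-code-on-danger \
            --set-exit-code-below-score 90
```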
A: In my CI test, I'm going to go down to my... oh, it's running again, because I created a PR, so it ran on my commit as well. We're going to go into my CI test, we're going to go to my build step, and here you're going to see that this step, my Polaris step, failed, and it's going to tell us right here that danger items were found in the audit. And this is all JSON, so you can parse it and return it back to the CI system accordingly.
A
But
essentially,
this
is
telling
me
that
polaris
found
a
critical
vulnerability
or
a
danger
in
my
yaml
manifest
and
thus
polaris
failed.
So
my
ci
system
fails
to
build.
So
what
I
want
to
do,
then,
is
go
back
in
now
to
to
fix
that.
I
go
oh
right
and,
if
and
sorry,
just
to
just
kind
of
show
where
we
would
find
that
in
here.
If
we
we
can
kind
of
scroll
through
and
see
all
the
checks
that
are
happening.
So,
in
this
case,
this
this
check
was
successful.
This
check
was
successful.
A
All
these
checks
were
successful,
so
we
really
want
to
look
for
where
here
we
go.
We
want
to
look
for
these
right,
where
success
is
false.
So
in
this
case
I
don't
have
a
liveness
probe.
It's
because
it's
a
busy
box
container.
It
doesn't
need
a
liveness
probe
for
testing,
but
there's
another
one
in.
If
we
keep
scrolling
here.
Oh
there
we
go.
A
Yes,
go
ahead.
Thank
you.
I
appreciate
that
yeah.
So
for
anybody
who's
who's
used,
you
know
any
kind
of
sort
of
auditing
or
security
scanning
tools.
This
this
sort
of
methodology
should
look
very
similar
in
that
we
we
rate
the
way
that
the
vulnerability
that
the
the
checks
we
rate
their
severity
and
then
we
can
fail
on
certain
types
of
checks
or
aggregate
numbers
of
checks.
Things
like
that,
but
in
this
case
this
one
was
danger
and-
and
we
failed
right
so
in
this
case.
A
In
this
case,
you
know
we'll
have
to
come
back
in
and
fix
that.
So
if
we
look
at
our.
A
App
here
I'm
gonna
go
into
apps
go
into
my
boy,
I'm
gonna
edit.
This
I'm
gonna
go
back
in
here
and
say:
oh
yeah
silly.
I
forgot
to
forgot
to
disable
privilege
escalation.
I
don't
really
need
to
privilege
escalate.
Why
would
I
ever
do
that
and
then
we're
gonna
just
check
that
in
so
we're
gonna.
A
Get
push
that
up
to
what
do
we
call
our
branch
called
a
feature.
A
And
so
now,
if
we
go
back,
polaris
is
still
running
here,
so
you'll
see
now
polaris
the
check
has
succeeded.
I
still
get
the
same
output
telling
me
all
the
checks
that
were
run,
but
my
code
is
now
clean
and
if
I
go
back
to
my
pull
request-
and
I
look
at
my
ci
test-
you're
gonna
see
all
checks
have
passed
so
now.
A: It makes it very easy to run these kinds of checks early on, and it's really designed to be a really low barrier to entry: deploying this in your workflow, getting started quickly, and essentially not having to understand a huge, complex toolchain for managing, you know, a bunch of simple best-practices checks. And so, with that, let's kind of talk about next steps. So, how do I get...
A
Yeah,
so
if
this
is
interesting,
obviously
you
know
the
fairwinds
team
and,
and
myself
and
other
polaris
users
in
the
community
would
love
for
y'all
to
install
it.
Try
it
out
and,
as
I
said,
you
don't
have
to
install
the
admission
controller,
you
don't
have
to
use
it
all
in
ci,
just
install
the
dashboard
install
it
locally
point
it
at
your
api
on
your
cluster.
Let
it
run
and
audit
just
try
it
out.
Kick
the
tires
fix
issues
if
sorry
file
issues
fix
them.
A
If
you'd
like
to
but
file
issues,
if
you
find
them
give
feedback,
you
know
let
the
developers
know
the
maintainers
know
how
you
feel
about
it,
where
you
think
it
could,
it
could
evolve,
how
you
find
it
useful
and
then,
like
legitimately,
if,
if
you
find
an
issue-
and
you
will
feel
like
you
know,
you're
feeling
frisky
and
you
want
to
fix
it-
please
right.
Pls
prs
are
actually
welcome
right
kendall.
I.
B: I think it always sounds sarcastic when people say "PRs welcome," like, well, you know, up your nose with a rubber hose; if you do want to do better, go for it. But seriously, we would love for the tool to get better and better. This is a very widely adopted tool; there are a lot of people running it in a lot of places, and because of its community adoption, it is getting better pretty regularly, and we'd love your feedback, your input, and pull requests. So, well, thank you, John.
B: I think it's time to dive into questions. Are you ready for these? I can pose them to you, and we can knock them out.
A: That's a good question. So, Argo... and I'm not as familiar with Argo, but I really feel like Argo is meant to be a more generic workflow engine. CI is obviously one of the workflows that it can support, as far as handling the actual task executions and orchestrating them via Kube. Flux is really designed to be a set of GitOps tools, and again, I'm not the world's foremost expert on Flux, either.
A
There's
a
whole
there's
a
whole
community
around
that
tool
as
well
highly
encourage
if
that
was
interesting,
same
thing,
right,
install
it
use
it
give
feedback
to
the
maintainers.
It's
an
awesome
tool
as
well
lots
of
use
cases
for
it,
but
I
think
flux
is
really
more
around
enabling
a
get
ops,
workflow
and
argo
is
more
of
a
generic
workflow
engine,
so
I
think
they're.
A
I
think
they
can
even
arguably
even
be
used
in
comp
in
conjunction
with
each
other,
because
argo
really
is
really
good
at
kind
of
the
github
actions
portion
of
what
we
just
showed
right,
orchestrating
a
ci
pipeline,
running
checks,
etc
and
flux
is
really
good
at
taking
that
artifact
that's
been
run
through
ci
and
deploying
it
yep.
A
So
this
is
probably,
I
think
this
is
probably
a
more
valid
comparison
than
flux
versus
argo,
at
least
in
my
my
mind,
and
you
know
they're
both
describing
a
desired
state.
So
terraform
is,
you
know,
describing
what
you
want
the
and
you
can
manage
all
of
the
things
we
just
showed
in
flux.
You
can
also
manage
with
terraform.
The
main
difference
is:
is
that
flux
is
a
pole
based
operation,
so
there's
an
operator
running
in
the
cluster.
A
That's continually looking at a Git repo and pulling those changes down. Or — I should clarify, it doesn't have to be a Git repo. It can be Git, it can be an object storage bucket, it can be a Helm repo; there are lots of different sources. We used Git for this demo, but there are other sources that Flux supports. So Flux is more of a pull workflow, meaning that once a change is pushed into your code repo, Flux will handle deploying it to the cluster.
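As a rough illustration of the pull model described here, a Flux v2 setup pairs a source object with a reconciler that continually applies it. This is a minimal sketch, not from the demo — the names, repo URL, path, and intervals are all placeholders:

```yaml
# Hypothetical Flux v2 source: the in-cluster operator polls this repo.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo            # illustrative name
  namespace: flux-system
spec:
  interval: 1m              # how often Flux checks for new commits
  url: https://github.com/example-org/app-manifests   # placeholder URL
  ref:
    branch: main
---
# Reconcile the manifests under ./deploy against the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy
  prune: true               # remove cluster objects that disappear from Git
```

The push-versus-pull contrast shows up in the `interval` fields: nothing outside the cluster ever runs an apply; the operator pulls and reconciles on its own schedule.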
A
Whereas with Terraform, as I'm sure everybody's aware, you have to actually execute it. Now, you can do that through CI — there are maybe some reasons why you don't want to, but you can execute Terraform through CI. Sorry, I had to kill my timer there. But again, I think push versus pull is probably the easiest way to contrast them.
B
Yeah — and for clarity, there are commercial products for Terraform offered by HashiCorp, as I understand it, that do address some of the same things, so there is a way to automatically watch and pull those changes in the way that Flux does. Flux is one of the open source tools — it's one that John recommends — and we're talking about it because this is a CNCF webinar as well.
B
Okay, two questions. First, a really simple one: is the kubectx CLI a shortcut for kubectl context commands? Or— okay, yeah.
A
So kubectx — real quick — is just a set of packages: kubectx, kubens, and a couple of other really nifty command line tools. You can brew install kubectx on your Mac, and there's some way to install it on Windows — I don't know what it is. But yeah, super handy, just shortcuts. That's all it is. Yep.
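For a sense of what those shortcuts look like in practice, here is a small command sketch — the context and namespace names are placeholders, and this assumes kubectl is already configured against a cluster:

```shell
kubectx                   # list available contexts
kubectx my-prod-cluster   # roughly: kubectl config use-context my-prod-cluster
kubectx -                 # jump back to the previous context
kubens kube-system        # set the default namespace for the current context
```

Each command is sugar over the longer kubectl config equivalents, which is the whole point.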
B
A
Yes — the answer is a resounding yes. What we just walked through is a very abbreviated, probably butchered, GitOps workflow. Flux is designed to be a GitOps toolkit for Kubernetes, and GitOps is the workflow that we were discussing here. So, yes.
B
There's — I mean, the very simple bit of GitOps is: your entire infrastructure, your operational everything, is enshrined in Git as code. That's good. Now, are there way more mature versions of GitOps, where certain things pull some ways and other things pull others? Yes — there are a million ways to go really far with this. But at the very base, that's what GitOps is, and yes, that's what we're talking about. Next question.
B
This one is about Polaris: do the CI tool, admission controller, and dashboard automatically share a single database of custom checks, or do customizations have to be synced somehow? The short answer is yes, they automatically share the same database of custom checks. Anything to add there, John?
A
So the answer is: it depends. The admission controller and the dashboard can share the same configuration, because they can pull from the same ConfigMap running in the cluster. For the CI portion — again, in my install I'm just using the default rules that are provided by the Polaris team. If you wanted to take the rules that you're using in your cluster and also use them as part of CI, you would have to—
C
The short of that—
B
—is, I mean, there's no such thing as a tool that's going to 100% cover pod security policy and everything else. There's not a tool that's going to do that perfectly, because your policy and security needs are going to be unique to you. That said — and Polaris does not have OPA support — Fairwinds' commercial product, Fairwinds Insights, which includes Polaris and other open source tools (Trivy, kube-bench, etc.), is much more heavily focused on the security pieces and does have support for OPA. So—
B
—controller, CI, and a dashboard is a high priority for you, as well as those additional security checks, there is a commercial product for that. And for clarity, there's a free version of that commercial product, but it's not open source the way that Polaris is. So—
A
What Kendall said is: does it have all the features needed to 100% replace pod security and OPA? No. There are features in pod security policies and OPA that are not in Polaris, and there are features in Polaris that are not in OPA or pod security policies.
A
Depending on your needs and your implementation, it may be enough coverage for your use case, or it may mean Polaris alone isn't right, so you may have to add pod security policies. And honestly, our approach has always been that a layered defense is the best defense. So: all of the above. As many layers as you can add, from a defense-in-depth perspective, is going to help build out a more resilient infrastructure.
B
Thanks. Are the checks and audits customizable by the customer? The short answer is yes, and there's a way to write custom checks. It's done with JSON and a couple of different things in Polaris — it's not leveraging the OPA standard, Rego; it's something else for writing those checks. But it is customizable, and which checks you're running are also customizable. Right, John?
A
Yes. Which checks you run — again, I just ran it with the default out-of-the-box configuration, which is a pretty high bar, but it's also designed to be really useful with minimal configuration. The team took the approach of: let's make it as useful and as verbose as possible, and let people turn it down if they don't want it. So they give you the most information out of the box, but you can absolutely turn it down and use fewer checks, or add custom checks.
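For a sense of what that tuning looks like, here is a sketch of a Polaris configuration that adjusts built-in checks and adds a custom one — the check names and exact schema shape are from memory and should be verified against the Polaris docs for your version:

```yaml
checks:
  # Tune built-in checks: raise, lower, or silence their severity.
  cpuRequestsMissing: warning
  hostPIDSet: danger
  tagNotSpecified: ignore

customChecks:
  # Custom checks are expressed as a JSON Schema applied to a target object.
  imageRegistry:
    successMessage: Image comes from an approved registry
    failureMessage: Image should not be pulled from this registry
    category: Images
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          not:
            pattern: ^quay.io
```

The same file can feed the dashboard, the admission controller, and the CI run, which is how the shared-configuration point above works out in practice.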
A
If you need to — and I think the example I gave, which is in the documentation, is how to have the admission controller prevent quay.io containers from running. So, it looks like the next one is comments on Kubernetes dropping Docker. Ooh — that's an hour-long talk all by itself.
A
So yeah, I'm happy to walk right into that one. The takeaway is that the Kubernetes maintainers are doing what they think is best for the community, and that's all that matters.
A
It is. So, because we're using the scaffolding — and I say "scaffolding" loosely, not Skaffold the tool, just the scaffolding that Kubernetes has for admission controller webhooks — we have the ability to filter by namespace, or filter by tags, which objects we are going to apply the admission controller to. Now, that being said, that comes with a pretty significant caveat, which is that you've now opened up a loophole for people to circumvent your admission controller. So your mileage may vary on how you actually want to use that.
A
If I were in that position, I would probably just deactivate the admission controller by removing the webhook temporarily. I'd export that object into a YAML file somewhere, delete the admission controller webhook — leave all the tooling running, but just delete the entry that tells Kube to use that webhook — do what I need to do, and then put it back.
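That sequence might look roughly like the following command sketch — the webhook configuration name here is hypothetical (check `kubectl get validatingwebhookconfigurations` for the real one in your cluster), and this assumes access to a live cluster:

```shell
# Save a copy of the webhook definition first.
kubectl get validatingwebhookconfiguration polaris-webhook -o yaml > polaris-webhook.yaml

# Delete only the entry that tells the API server to call the webhook;
# the Polaris pods themselves keep running.
kubectl delete validatingwebhookconfiguration polaris-webhook

# ...apply the change the webhook was blocking...

# Put the webhook back (you may need to strip metadata.resourceVersion
# and status from the saved file before re-applying).
kubectl apply -f polaris-webhook.yaml
```

Only the webhook registration is touched; no workloads are restarted, which is what makes this a low-risk temporary escape hatch.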
B
And to be clear, you don't need to run the admission controller. You can run the checks in CI, so that if somebody needs to do something quick and dirty, they can — they just don't have the admission controller running. Essentially, the admission controller exists so that even in a quick-and-dirty situation, where you're going around CI and you need to make a change, you're not also introducing some other problem. That's why it exists.
A
And you could actually have the admission controller be a little less strict than your CI system. If you wanted CI, for example, to be a super high bar — all of the checks, all of the things — and really just wanted to use the admission controller to stop people from doing things that you know are going to hurt your cluster, you could do that as well. You could run them with separate configurations.
B
Right. All right, next one: "Thanks for answering my first question. One more, please: can Polaris and Fairwinds' solutions and Insights provide audit, policy enforcement, and monitoring for vendor-specific Kubernetes clusters such as VMware Tanzu and Red Hat OpenShift, as long as they are compatible with upstream Kubernetes APIs?" So that's the question: how compatible are they with upstream Kubernetes APIs? In theory, yes; in practice, maybe not. John, I don't know — have you used Polaris on either of those?
A
So I haven't used it on Tanzu or OpenShift specifically. We have used it on different flavors of K8s, and to your point, it depends on how far the vendor has deviated from the upstream distro. The examples we did today, just to be clear, are all running in GKE.
A
So yes, it runs on a vendor distro. You may have to do some additional configuration of Polaris if you're going to use the admission controller, to stop it from blocking vendor-specific things that may not be up to its standard, or that may need to run as root, for example. And you may see it flagging things that you have no control over.
A
What I would recommend in that case, though, is using the admission controller as more of a last step; I would really focus on using Polaris as part of CI. Because if your deployment process is using 100% upstream-compatible deployment artifacts, then yes, it's 100% compatible with your deployment. It may just not be compatible with the deployments the vendor is giving you in Kube.
B
Let's get through a couple of these relatively quickly, because we're running out of time. Real quick: how is Polaris's resource usage on the cluster? Very minimal — that's the short answer. It's not doing a whole lot, and it's not something that's running and checking constantly all the time, because it doesn't need to. So: very minimal resource usage. "What are the recommended tools for OPA, and can you talk more about OPA?" OPA is the Open Policy Agent — go look for that, Open Policy—
B
—Agent. It applies to all kinds of things, not just Kubernetes. It is a standard — one of many standards, but seemingly winning some of the standards battles — around writing custom policy for different things.
B
There are other tools, besides Fairwinds Insights, for doing OPA enforcement — some of them are messier than others — and there's a lot of community development here; it's still continuing to grow and be adopted. Anything to add to that, John, before I move on? Oh—
B
"Does the GitOps model replace traditional CI/CD?" No. When you think of traditional CI/CD, you're thinking of deploying applications into an infrastructure. What GitOps is about is deploying your infrastructure through CI/CD as well — making sure that not just your applications but also all of your infrastructure is enshrined as code and deployed that way. That's where that goes. And finally, John, real quick: Kubernetes dropping Docker. Wow.
A
It is. So there's plenty of chatter about this — just go search for the Kubernetes Docker deprecation. I believe it's 1.20 or 1.21 where Docker will no longer be used as a container runtime engine, via the CRI.
A
It's actually been that way for a while — there's been a shim, the dockershim or some such thing, that handles making Docker compatible with the CRI interface. I think containerd is probably the most widely accepted replacement, but anything that's compatible with the CRI can act as the container runtime. Now, that doesn't mean you can't use Docker on your workstation to build and publish containers, just to be clear.
A
You can still absolutely use docker build, docker push, and docker pull on your workstation. And honestly, a lot of the managed Kube distros haven't used Docker for a while anyway, so it's not a huge deal. "Can we deploy Flux—"
C
A
Without using — can we—
B
I want to wrap up, John, so I can hand back to Libby to close this out. Thanks for the question — feel free to email John, or email me, if you have other questions. Also, we had our Twitter profiles in there; you can get in touch with us that way too. John, you did great — thank you for the live demos — and everyone, thank you for the questions. We appreciate your attendance and would love for you to get involved in either of these projects. Thank you for coming. And — Libby?
C
John and Kendall, y'all were awesome. Thank you, everyone, for attending. We will see you at a webinar next year in 2020 — the calendar is open, communications have been sent out, so be sure to check your marketing folks' inboxes. We've got some exciting stuff coming up. So thank you both again; everyone have a great day, and we'll see you soon.