Description
Learn how Red Hat weaves together DevOps and Security to master the force called DevSecOps. This show brings you Red Hat products and our security ecosystem partners to aid in your battle. We help you integrate DevSecOps into your OpenShift lifecycle and create a strategic advantage to win against the dark side (aka security problems). Bring your lightsaber and be prepared to train hard... may the DevSecOps be with you!
B: Good morning, good afternoon, good evening, and welcome to another special edition of DevSecOps Is the Way, here on Red Hat Live Streaming, which is what we've changed our name to, Dave. Sorry, the memo has not gone out about it yet. So yes, the thing formerly known as OpenShift.tv is now called Red Hat Live Streaming. I am Chris Short, host and showrunner of that thing called Red Hat Live Streaming, and often producer, but today we're using intern Bobby as producer. So thank you, Bobby, for doing that. Oh, look at this, look at this.
C: Doing great. We were just talking about, you know, losing power in these summer storms. Obviously you're up there in Michigan; I'm down here in Florida, and we're hoping, knocking on wood, that storms don't come by and knock us out, but we should be good. Yeah.
C: Doing great. So I'm excited to talk to everybody today. In this month's episode of DevSecOps Is the Way we're talking about data controls, and I'm really excited about the guest. We have Matt Rogers, who's a senior software engineer and has a lot of good and interesting history at Red Hat. We're going to talk about data controls topics like encryption, and we might even get to some other items like the compliance operator. Maybe the file... file system operator... file integrity operator, sorry. So yeah, it should be a good show, and I'm excited to get going. Before I do, though, I just want to remind everybody that this is a monthly series we're doing, so if you're really interested in security, tune in every month. We actually have two OpenShift.tv shows that we have under this umbrella.
C: Security is very, very important to Red Hat, yes, and I focus on our security ISVs. So this month's... I think we had it last week; I'm trying to think. Yeah, it was last week, and it was with Zettaset, one of our good partners in data encryption. And you can see on the right-hand side that we've done certain security categories in different months. We started in March with vulnerabilities, and we're going to end this year in December with platform security, which will be very interesting.
C: So I just wrote a blog about data controls. It's a very short blog that you can find on redhat.com, and then we do webinars and other things like that. You can find more information at red.ht/DevSecOps, and the D and the S and the O actually do need to be capitalized, by the way. This whole thing isn't just, you know, a show with pretty faces and, I would have to say, elegant beards.
C: I know this is a little bit smaller font, but we use this and these categories with our joint customers as we go to market with solutions that solve DevSecOps problems. So this is just a very high-level view of it; we go down into deeper levels to say, hey, you know, as you're making your journey into DevSecOps.
B: Yes, and that's... I mean, Dave, you didn't know, or you might not have realized, we just had a nice DevOps.com DevSecOps Conversation in the Clouds with Andrew Clay Shafer, Kirsten Newcomer, and Jamie Scott. So that was a nice and eye-opening episode. As always, you can check our archive for all the videos that we've put out so far.

C: Yes, those three are very knowledgeable in DevSecOps; I work with all three of them all the time. So yeah, we're going to be doing a lot around DevSecOps. Actually, you'll see even more stuff in Q3 from Red Hat around our point of view and things like that, so we're excited to do that.
C: Cool. So, for this episode, let me just stop sharing. I will let our distinguished guest introduce himself. Matt Rogers, if you please, let everybody know who you are and what you do at Red Hat.
E: So that's an engineering... a small engineering team where we are tasked with getting products like OpenShift to be able to meet compliance standards, so we're doing some cool stuff there. And I've been with Red Hat since 2010. I started off in the support department, worked there for a while, and made my way over to engineering.

C: Nice.
C: Yeah, we were talking earlier: you got a pretty interesting start. Why don't you tell us a little bit about how you sort of navigated through Red Hat, and where you got to where you are now?
E: Yes, so when I started, it was RHEL support. The latest version of RHEL was, I think, 5.3 or something like that... 5.2 when I started. And so I came in kind of as a generalist, just to work on base OS components and support those. There were support cases for Sendmail, Postfix, and things like that, but also some of the security components like Samba and Red Hat IdM.

Towards the end of my time in support, I was pretty much just supporting the IdM suite of products, which is Samba, MIT Kerberos, those kinds of things. I also spent a lot of time supporting the VPN software for RHEL, Libreswan; it came out of the Openswan project. I spent a lot of time supporting those cases and also kind of swinging into development through that project.
E: As part of the support engineer group at the time, submitting patches and fixing bugs for customers was highly encouraged. Of the projects that I worked on, I probably spent the most time debugging and just fixing stuff on Libreswan, so that was a great opportunity.

Yeah, and it was cool, because coming into Red Hat support... consider that if you were to do software support at Microsoft or something, you'd probably have a script that you would go through for cases; you know you're not going to be in the code. But for Red Hat support, that was kind of like the boost.
E: If you could go in and look at the code, then you were really able to help customers out, as opposed to just expecting documentation or scripts or anything like that. In the support organization at the time, across the teams working there, there was a lot of good competition, in a way of saying: hey, if you could go into your case and just, you know, blow your customer out of the water, fix this issue, and get them out of the way, especially if you wrote a patch and fixed a bug for them, then that was highly encouraged. And we, the support techs, played off of each other and supported each other, saying, like, hey, you can go check out this part of the code. So for doing software support, it was a great environment to learn as well. I mean, I pretty much got my start in software development just working on support-related things.
B: Yeah, there's not many places like that, right? I think I've worked at one place where the support team was literally in the cube farm right next to ours, and the support lead knew, like, hey, can you guys help me with XYZ thing, for my team of, quote, DevOps engineers and everything. And oftentimes we could, which was kind of nice. So yeah, the escalation path was quite simple then: yell over the wall, "Hey, Short, can you fix blah blah blah?" "Nope, that's not my thing."
E: Yeah, the interaction between support and development, at least in my experience, has always been one of those things that Red Hat has tried to foster really well, to be able to get, you know, even maintenance engineers and folks who are just doing packaging of upstream things. There was a lot of collaboration with engineering, even just as, you know, a level-one support tech, and that definitely improved over time.
E: You know, there was a lot more of it back in the day. Occasionally... I remember having to ask Ulrich Drepper about a problem, and, you know, he didn't get back to me. And Ulrich is, you know, the glibc guy, pretty notorious for closing bugs and just saying, you know, go away. So there were definitely some cases where you're like: I just can't get in contact with this guy or get a deep answer from him. But over time, you know, that worked out, and as an engineer we always keep an eye out for customer stuff.
E: Yeah, before that I had done some basic security stuff on Linux, just, you know, securing a system or whatever. But in terms of Libreswan, I remember one thing that surprised me about that project. On, I think, the first case I got for it, I had the customer turn on the debug logs, and they were very verbose, but what they did was give a lot of prompts through the code. Libreswan is an implementation of a protocol called IKE, and IKE version 2, which is a stateful protocol for IPsec VPN establishment, and it ran through the, you know...
E: You had two endpoints, and they each ran through the same debug log, but at different states, and you could go and reference the RFC drafts, and the RFCs that are out there, and follow along with the state machine. So even though it had these very big, verbose logs that just tried to describe everything, I found it really useful for going through the code, and that was kind of the first time it clicked, in terms of, like: hey, I just found this log line in the code; well, it's right there. You know, the issue is right there in front of me. So that was kind of, you know, an epiphany of getting started. And this was, you know, jumping into a case for a product I had never heard of before. I had heard of VPNs, but I had no idea about IPsec or anything like that.
E: Right, yeah, and that was kind of the way it went with support: you'd get obscure stuff, especially since we were supporting all of the components, like all the packages in RHEL, or most of them; some were unsupported, but the large majority weren't. So you'd get cases on things that had never had a case opened before. Right, like you were the first one to work on something.

Yeah, there was one for, like, ipmitool or something; it's like, who's even used that? So you get cases coming in, and you go look for the guy, like, hey, who knows this? And it's like, sorry, you've got to take a shot at it. And so, you know, being thrown into the fire is daunting at first, but it was a good crash course in just about everything open source development related.
C: How did the work that you did sort of manifest itself into OpenShift, then?
E: So I worked on Libreswan mostly when I was in the support organization. When I joined engineering, I joined on to work on MIT Kerberos; I worked on adding features to Kerberos in support of FreeIPA and Red Hat IdM. And one of the themes that I kind of took along with me while working on different projects has been X.509 certificates, because I worked on certificate-related stuff for Libreswan. You can actually authenticate a VPN tunnel using a cert, and I also worked on PKINIT for MIT Kerberos. And for OpenShift: when I joined OpenShift on the auth team, working on certificate stuff was... I was like, okay, I guess I'll be working on this. You know, I was the guy on the auth team that had the most experience with it, so that's what...
E: I guess it was just some of that experience, built up over time on multiple products, working on certificate-related things. And, you know, that was one of the main things. I've also liked the part of being a generalist and being able to kind of move around on different projects.

C: Nice, very cool. So... on OpenShift certificates.
E: So my hope with this presentation is just to share some of the decisions and some of the history behind the certificate management and handling that OpenShift does, and sort of Kubernetes as well, to some extent. And yeah, I'll get into it.

C: Awesome. So yeah, I've already done the introduction of Matt Rogers.

E: And yep, there are some of the projects that I've worked on. Now...
E: I'm on OpenShift Container Platform compliance, where we do the compliance assessments of products, and lately I've been working on the compliance operator and the file integrity operator. This has kind of been a general area of operator design for OpenShift, for secure operators, ones that have some kind of security function, in our case. I have a development blog at mrogers950.gitlab.io, and my personal site is cryptotheater.info. That's a painting of mine; I've got my art up on cryptotheater.info, if you're interested in that.

So this talk is kind of inspired by Peter Gutmann, by his research in PKI. It's actually about 20 years old now, but a lot of the stuff is still pretty relevant.
E: He's got a great paper called "PKI: It's Not Dead, Just Resting," and he's got some other good stuff; "Everything You Never Wanted to Know About PKI but Were Forced to Find Out" is a really good one as well.

C: Sounds like a good one.

E: Yeah, those are really good. I cover some of the themes that he goes through in broader detail, but his work has kind of shaped a lot of the way I think about PKI at a high level.
E: So for PKI, like for X.509, what are we talking about? Well, it describes a system of public key infrastructure. You have certif... I have a note here, make sure... yeah. So it defines a system and data formats that are supposed to answer certain authenticity questions about public/private key pairs, namely: who does a public/private key pair belong to? And specifically, it ties the pair to a name.
E: So yeah, this is just one of the secrets, actually, a secret from OpenShift: it's the etcd client secret, or the etcd client certificate, and it's signed by one of the etcd CAs. So there are a bunch of links, a bunch of TLS links there, and this is one of those certs. This is just to give an idea of the general outline of the format. You have an issuer name, you have a subject name, information about the public key, or the public key itself, and you have some other informative data here with the extensions. This one is marked as, like, a web client, so this is a cert that's supposed to be used for client authentication, or for a server to authenticate a client, and it has the signature that was calculated, or that was added to the cert, by the CA.
E: So yeah, a little bit of history behind X.509. It comes out of a bunch of historical standards, going back to before the '80s and throughout the '80s, after, you know, RSA began to catch on. Its primary goal was to avoid the man-in-the-middle with RSA, because the core drawback with just plain RSA is, you know, you set up a session with Bob and you ask for Bob's public key. How do you make sure that that actually is Bob's public key, and that Bob is who he's claiming to be? That's what it essentially tried to address.
E: There was a directory standard that was supposed to come along with X.509; it's called X.500. The closest thing today to what X.500 was is OpenLDAP. But, you know, over time, of course, that didn't happen. One of the things that was supposed to help with the distribution problem was the assumption that you had an X.500 directory, and you'd just get your keys and CAs and stuff from there. I'll get to more of those ramifications.
E: Yeah, and it has this name-hierarchy concept of distinguished names made out of relative distinguished names. It's one of those things that, when people see it in a cert, it just makes sense: oh, you have this hierarchy of things, like you have my server farm and then you have my servers, so it would make sense that you would have it broken up that way. But that is not how it's used today, really, for most cases. Over time, the implementations varied: there were some that were implemented loosely, some that were implemented very strictly. So kind of what ended up happening was that, at the IETF, there were RFCs published that just kind of ended up being broad profile recommendations, like: what do you do with this?
E: So there was this one thing that kind of ended up happening: you could get players in the PKI space, like Microsoft, where they would make decisions about what attributes and things would go in your certs, and just by virtue of them being there and being the people doing it the most, their stuff would make it in, in a way where everyone else has to deal with certain attributes and work around bugs with this kind of stuff. So when it came to PKI interoperability, that was really not in the cards. It's one of those things that, over time, people want to do; they want to be able to use certs from one PKI in another and other stuff, and, as we'll see, that's kind of tough to do.
E: It came along with a data format called ASN.1, which I won't go too far into. But this is DevSecOps Cat, and he's wondering if ASN.1 and Microsoft certificate profiles were done on purpose. So he thinks that was done specifically for him. Okay. He's got his tinfoil hat on; he's, you know, a little bit mad at Bill Gates, but oh well. So yeah, the interoperability stuff ends up being such that you have these certificates with all this stuff in them that your parsers don't know about, and your implementations may not handle well.
E: Actually, see if anyone recognizes that name... but anyway, so, some old code: there were instances where customers would plug in their certs from Active Directory or whatever else, and as soon as one would hit the parser, boom, it would crash, crash the Pluto daemon. So there are tricky things around there. So in this case, what we have here says: "Feed me a stray cert."
E: So where we're at today is, of course, certs are used everywhere. SSL/TLS, of course, which is, you know, the common use for web PKI, is what people end up interfacing with on a day-to-day basis. That's, you know, the little padlock icon in your browser; it's supposed to warn you whether or not, for the server you talk to, you were able to verify its server certificate with a CA in your browser's bundle. It's not telling you if it's using military-grade encryption or anything like that, but that's what most people are going to see. And of course, if you don't have a trusted CA in there, then you get the warning; you have to add the exception.
E: Everyone does it. I say this because if you search Stack Overflow for PKI- or TLS-related questions, you'll see answers with almost a million views, and things like that. So at some point, if you're in any DevSecOps, or any kind of DevOps or security work, you're minting certs; you're going to be using whatever tool is out there to create certs and move them around, and you're going to get familiar with it.
E: There's some progress in terms of new protocols and stuff that's been coming along to, you know, make things easier, but now we're in a RESTful, microservices world, so it's a cert explosion. Now, if you have multiple TLS endpoints, you have to secure them all; you're not going to get away with having unencrypted traffic, especially from a standards-compliance standpoint.
E: That is where it's checking against the CA on its side. So now, enter Kubernetes, where the problem just gets deeper. Here you have tons of self-signed CAs and certificates to manage. You have a whole cluster control plane: it's got API servers, kubelets, etcd. Not to mention there's client authentication, like the cluster-admin cert that you can use to just talk directly to the API, and that's used by some of the other components that have, like, direct cert authentication.
E: So that's, you know, even outside of just the TLS itself. You also have, like, your internal apps, things on your service network, and mTLS adds another headache to it there.

The system: prefix... one of the authenticators, the API server authenticator, will do some stuff with this, like look and make sure it's a system user. So there are some special certificate profiles; you know, there are a few of them in there. There's some stuff that is a necessity, I believe, but it's sort of questionably secure. Like, there are some certs that have both the client and server extended key usage on them, so it's dual-use: you could use it on the client side or the server side, at least as far as this attribute is concerned. I think it's a technical limitation with etcd for some reason, but I don't remember; I just remember it being one of those things that we looked at early on. So yeah.
E: So yeah, without a strategy around that, your clusters are just going to die. There are some little convenience mechanisms in there from Kube, like automatic worker kubelet bootstrapping and rotation, but it doesn't cover the full range of certs you need. You have various things like certificate and key files just sitting around on nodes.
E: And if the PKI that you plug into Kube is totally up to you, you can do ridiculous things with it, like sharing CA keys and stuff. So "just use OpenShift" is what I would tell them.
E: It's set up by Ansible playbooks, and those playbooks do the rotation and regeneration of the certs and stuff. Three does have the automatic service CA, which is: you can get automatic TLS certs for services that are in your service network. So it's very common; you know, if you have, like, a single workload, boom, you just need a server cert for that workload. Automatically: you can annotate a service and OpenShift will give you a serving cert for that automatically, and it can distribute the CA, the service CA, too. It's got a default app route, IdP certificate support, etc.
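The service annotation being described looks roughly like this. The annotation key `service.beta.openshift.io/serving-cert-secret-name` is the OpenShift service-CA hook; the service and secret names here are made up for illustration.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-workload                # hypothetical service name
  annotations:
    # Ask the service CA to mint a serving cert/key pair and store it in
    # the named secret; pods can then mount "my-workload-tls" and serve TLS.
    service.beta.openshift.io/serving-cert-secret-name: my-workload-tls
spec:
  selector:
    app: my-workload
  ports:
    - port: 8443
      targetPort: 8443
```

Clients inside the cluster verify that serving cert against the distributed service CA bundle rather than any public CA.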
E: OpenShift 4: when we went to 4, there were even more improvements over 3. Self-rotating control plane certificates; a lot of them are short-lived, and a lot of them are managed as Kube secrets by the control plane operators. So rather than having all the certs sitting on the master, you know, they'll be in etcd, so if you use etcd encryption, then you get some benefit there too. And there's some other convenient stuff, like auto-scale-up of the service network and the service CA.
E: There is a CA key rollover trick that we do that I'll get into. So yeah, PKI was just resting, and now it's awake. So, some of the approaches that we took... I don't know if I'm going a little bit over time, but that's okay. Some of the approaches that we took: one of them, this one, is really important, and it's kind of not totally obvious at first, at least I don't think it is.
E: So for the known infrastructure links, you load the client's CA store with only the CAs for the server workload that you know it needs to connect to. A little bit of a comparison that maybe helps illustrate it: for your browser, you would have the whole bundle. But if you have a headless TLS client, then whatever you put in the CA store is going to restrict it. So when it tries to connect to a server that it can't authenticate, it can't verify the cert, because it has no CA for it, and the connection is going to stop there. There's nobody there to hit "add exception"; I mean, you would have to go in and program the exception into your code or whatever.
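A sketch of that "load only the CAs you need" idea for a headless Go TLS client. The helper name, file path, and endpoint are all hypothetical; the point is that setting `RootCAs` replaces the system bundle, so any server whose chain doesn't lead to the pinned CA fails the handshake outright, with no interactive way to add an exception.

```go
// Build a TLS client config that trusts exactly one CA bundle and nothing else.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
	"os"
)

// newPinnedClientConfig trusts only the CAs in caPEM. A connection to any
// server whose certificate doesn't chain to one of them stops at handshake.
func newPinnedClientConfig(caPEM []byte) (*tls.Config, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, errors.New("no CA certificates found in PEM input")
	}
	return &tls.Config{RootCAs: pool, MinVersion: tls.VersionTLS12}, nil
}

func main() {
	caPEM, err := os.ReadFile("service-ca.crt") // hypothetical CA bundle path
	if err != nil {
		fmt.Println("read CA:", err)
		return
	}
	cfg, err := newPinnedClientConfig(caPEM)
	if err != nil {
		fmt.Println("build config:", err)
		return
	}
	// Hypothetical in-cluster endpoint; verification uses only the pinned CA.
	conn, err := tls.Dial("tcp", "my-workload.example.svc:8443", cfg)
	if err != nil {
		fmt.Println("handshake refused:", err) // stops here if the CA isn't trusted
		return
	}
	defer conn.Close()
}
```

The failure mode is exactly the one described: with no CA loaded for a peer, the dial errors out and the program has to decide what to do, rather than a human clicking through a warning.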
E: The same goes for the server-side CAs that you load for mutual TLS, because you want to verify the client cert, and you want to avoid that big wad of trusted CAs; once anything signed by any of them can authenticate to that server, you've broken up the trust model there. So it becomes a little tougher to manage, but it's better for security when you're explicit about your CA trust. An example I give is, like, if you need a temporary mutual TLS connection. We use one of these in the compliance operator, where we just explicitly load one cert: we create ephemeral, really short-lived certs, we set up the connection, it sends over its workload, and then that's it, and we only load the CA that we need on each side.
E: There's a thing you can do with a PKI, which is use intermediate certs, or intermediate CAs, which is like delegating: you delegate signing of certs to another CA. Really, like in our case, or for Microsoft, you really shouldn't do this. It doesn't add anything. It doesn't add anything to security now, because you've broken everything up into this complicated hierarchy; it just adds complexity. There's, like, a theoretical discipline with X.509 called path construction. It's like one of the great rivals of continental philosophy, path construction.
E: So, for example, here you can do single inter-domain cross-certification, but I don't suggest you go look at this. But anyway, so yeah, if we did it like that, some of the effects would be that, you know, compromising a Kube CA means you have to redo everything. And we ran into issues with some clients that don't require the full chain; the clients don't agree on this. Like a Go TLS client: if you load the intermediate cert, like here, for cert II, if you load the kubelet CA, the intermediate, then it doesn't need to go and check its signature against the Kube CA. So the distribution of CAs here becomes a real issue.
E: So yeah, we did a fancy self-cross-signing trick, actually, on the service CA, which, if you want to check it out: RFC 4210. I'm not going to try to explain this here, but it's a cross-signing trick that lets you gracefully roll over a root, like a CA's key, a self-signed CA key. We actually used this successfully in the service CA.
E: Bring-your-own-CA is this thing that customers have wanted to do, but it's ridiculous. Like, if you have your good corp PKI and you use it for other systems, it just makes sense: you branch your OpenShift PKI off of it. But then you run into these situations. There's really not a feasible way to do this in OpenShift 3. You could give it an intermediate CA: if you plugged in the private key, the installer would issue these certs based on this intermediate that you just retrofitted in. But what you're saying there is: hey, someone, go mint certs with this key. You know, like, no one would... I don't think people would do this, or want to. So yeah, Pokey the Penguin says, "Put your face into the glue" if you do that.
E: So, revocation. This is mostly the last part; I'm getting towards the end. Revocation is kind of the elephant in the room: what if a certificate key is stolen or replaced? There have been methods to do this. The CRL is the oldest one: the CA issues a dated and signed list of certificate serial numbers and distributes it to clients.

Now, first of all, revocation by itself is pretty flawed. You could just google "certificate revocation is broken"; plenty of people have written about its flaws.
E: Asking "is this revoked?" is not the same as asking "is this valid?", right? Because you get into this position where the server doesn't have a reply to "is this revoked?" So there are these issues with the failure modes with OCSP, where you can't fetch... you can't contact the OCSP server. Well, then, you know, you could break automated services; or else, if you just let it go by, then it's like, you know, "is this cert revoked? Not unless I've heard otherwise," and it gets into a weird situation. So yeah, it's DDoSable: just blow up the OCSP responder and no one can check. There are some extensions to help with that.
E: There's, like, the OCSP must-staple that Google uses, I think, for Chrome, for web PKI stuff. But revocation is only half of the solution: you have to replace this cert that you just revoked, especially in the microservice infrastructure situation. And how do you know when to revoke at all? Like, how do you know your key got stolen?
E: And, you know, that was the reality of something like Heartbleed: people just got key material for free, without you knowing. So that's a tough thing to determine: how do you know when to revoke it? So yeah, revocation has big snags. With this in OpenShift and Kube, a CRL can be huge. There's a one-meg limit on a ConfigMap, and CRLs can get to 500 megs.
E: So clients having to fetch a 500-meg CRL to verify a single cert? That's, like, crazy. There's some stuff with delta CRLs that can help with that, but it's this balance of which option is less bad when it comes to picking between CRLs and OCSP. So, overall, we opted for this self-rotating approach.
E: You know, one thing about revocation is that, especially from the compliance standpoint, depending on the compliance standard that you're trying to adhere to, you'll get some of this demanded by your compliance program, where you have to have some kind of revocation: oh, if you use certs, then you've got to be able to revoke them in some way. So in some ways it becomes this thing of, well, if we could just limit the scope... because revocation is so bad.
E
So for things like external endpoints and stuff, I think the way OpenShift would approach it is: we'll have revocation, or CRL support, or something, for one of these domains in OpenShift, like the external routes and externally exposed services. I think that's the best case for supporting some CRLs. But now, this future stuff is not me saying this is going to be in OpenShift.
E
This is the direction I would like to see it go in; don't take this as future product enhancements.
E
If we can leverage ACME, which is the protocol used to request certificates automatically, used by the Let's Encrypt project... Actually, for my personal site, I chose the host because, through Let's Encrypt, they have a little box in your control panel where you just click "give me a Let's Encrypt SSL cert."
E
You can use Let's Encrypt for this, and it's great. It actually takes a lot of the hurdles away from obtaining SSL certs from CAs, and it's a good option. So: leveraging ACME where we can for internal applications, kind of like how we have our service CA stuff for internal applications.
E
There are some options: projects like cert-manager, and there's a Let's Encrypt plug-in, I think it's called openshift-acme, where you can get Let's Encrypt certificates for external stuff. That is something I think would be good to leverage now in infrastructure.
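As a sketch of what leveraging ACME in a cluster can look like today, a cert-manager ClusterIssuer pointed at Let's Encrypt is configured roughly like this. The name, email, and HTTP-01 ingress class below are illustrative assumptions, not anything shown on the stream:

```yaml
# Minimal cert-manager ClusterIssuer sketch using the ACME protocol
# against Let's Encrypt. Values marked "example" are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-example          # example name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com         # example contact address
    privateKeySecretRef:
      name: letsencrypt-account-key  # where the ACME account key is stored
    solvers:
      - http01:
          ingress:
            class: nginx             # assumed ingress class
```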
E
I don't think a lot of closed infrastructure is even prepared to do this for non-Let's-Encrypt usage. You should be able to use ACME to get certificates from any ACME server, but I think Let's Encrypt is really the primary use case, so it hasn't caught on in infrastructure. But I think the ACME protocol is a good point to move forward from, for future certificate generation methods, and for kind of offloading the trust where possible.
E
There's an example: one of the components of OpenShift is the cluster machine approver, where it's basically asking the machine API if the name for a node cert is good. When you've done that, you've offloaded the trust. You've said: okay, I already trust this machine API to tell me I even have nodes available. So offloading some of those trust decisions to official APIs and things like that would be cool.
E
Kubernetes itself has a CSR API, but it's limited to only one case. So if there were generalized CSR APIs, that would be useful, along with being able to support CRLs.
E
You know, for compliance reasons, that would be great. Not using the CRL as it is, but kind of abstracting away some of this stuff: maybe we take in a CRL, but we just turn it into a flat list that we refer to. So there's some innovation that we can do in that space to try to get rid of the need for people to think too hard about these certs. That's my job!
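The "flat list" idea can be sketched in a few lines: instead of handing every client a full CRL to parse on each verification, an operator could distill it once into a plain set of revoked serial numbers and answer lookups from that. This is a hypothetical illustration of the idea, not compliance-operator or OpenShift code:

```python
# Hypothetical sketch: distill CRL-like entries (serial, reason) into a
# flat set so per-cert revocation checks are O(1) lookups instead of
# re-parsing a possibly enormous CRL every time.

def build_revoked_set(crl_entries):
    """crl_entries: iterable of (serial_number, revocation_reason) pairs."""
    return {serial for serial, _reason in crl_entries}

def is_revoked(serial, revoked_set):
    return serial in revoked_set

# Tiny stand-in for a parsed CRL.
crl = [(0x1A2B, "keyCompromise"), (0x3C4D, "superseded")]
revoked = build_revoked_set(crl)
print(is_revoked(0x1A2B, revoked))  # True
print(is_revoked(0x9999, revoked))  # False
```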
E
So, but yeah, that's about it. I hope it was informative.
B
Yeah, and this is all great info to have as someone that works with certificates on a regular basis. I know I've built tools to help you check the chain, to make sure the certs were in the right order, because a lot of times you get: here's your intermediate, here's your private key, here's your public key, and sometimes that all needs to be in one file. Not the private key, obviously, but yeah.
E
Yeah, in fact, in the privacy-enhanced mail format, the PEM format I believe it is, you have to have the CA root, then the intermediates, and then the end cert. If you have them in the wrong order, then it doesn't parse correctly.
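The chain-checking tools Chris describes all start the same way: split the bundle into its individual certificate blocks, in file order, before checking who signed whom. A rough stdlib-only sketch of that first step (note the expected ordering itself varies between tools, so this only extracts the blocks; it does not judge the order):

```python
import re

# Match one PEM certificate block, including its BEGIN/END markers.
CERT_RE = re.compile(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
    re.DOTALL,
)

def split_pem_bundle(text):
    """Return the individual certificate blocks of a PEM bundle,
    preserving file order, since order matters to many consumers."""
    return CERT_RE.findall(text)

bundle = """-----BEGIN CERTIFICATE-----
rootCA...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
leaf...
-----END CERTIFICATE-----
"""
blocks = split_pem_bundle(bundle)
print(len(blocks))            # 3
print("rootCA" in blocks[0])  # True
```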
B
E
I just installed this; I built the images and installed it from our upstream source. Let me see if just deleting the pods... let's see... okay, yeah. So the compliance operator, to try to summarize it shortly: it is an OpenShift operator, so a bunch of OpenShift controllers, and it's got a compliance scanning API that you interact with.
E
Okay, yeah, so there are pods there that are actually parsing the profile content, so yeah.
E
So you apply a profile to the cluster, basically, and what it does is it goes through and launches OpenSCAP under the hood: it launches OpenSCAP pods that mount the host file systems read-only, with, you know, just the right privileges via SCC, and it uses the OpenSCAP scanner to scan the files based on the profile and give you the results: are you compliant?
E
Are you non-compliant, et cetera. And then what it can do from that is generate remediations that get applied. So if there's something where we can auto-tune a flag, or, you know, something to bring the cluster into compliance, it can do that automatically. Nice. Let's see if this... okay, so we should have profiles now.
E
Okay, so we have a bunch here. So I will pick the ocp4 one, or I might do the rhcos4 moderate one. Okay, so what I'm doing here is: we have a kubectl plug-in tool called oc compliance, and it's basically a little helper that will create some of the objects that you need to just kick off the scan.
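For reference, the object that kicks off the scan (whether you create it by hand or let the oc compliance helper create it for you) is a ScanSettingBinding that ties the chosen profile to a ScanSetting. A minimal sketch for the moderate profile mentioned here; the metadata name is an arbitrary example:

```yaml
# Minimal ScanSettingBinding sketch for the Compliance Operator.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-scan                # arbitrary example name
  namespace: openshift-compliance
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-moderate              # profile shipped with the operator
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default                      # the operator's default ScanSetting
```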
E
I always have to look it up. It's going to apply to worker and master.
E
You can set some options, some result storage, if you need to.
E
And this is one of the things about this one... oh, I don't know, it doesn't say. Okay, yeah, there's a scan setting that's for auto-apply, which will auto-apply the remediations that get
E
generated. So, our compliance suite is still running. All the scans from the profiles that you select get aggregated together as a scan suite. So you might have a bunch of different checks; some of them fail, some of them pass, some of them are info or warning or whatever, and it'll aggregate a total status.
E
So if all your rules pass, then you'll get "compliant" here when it finishes.
E
This is broken out into rules. The profiles, when they're parsed... we parsed them from a big image stream file that we ship along with the operator. That's our default content set: the profiles that we've worked on and the content that we've written. So here's a bunch of stuff from FedRAMP moderate. You can see it's things like, you know...
B
Kind of simple things in some places, but kind of not obvious in all places.
E
Yes, yeah, right, and they are applicable based on... they are written to adhere to, you know, the particular standard. So a lot of our work is taking the standards and the requirements and seeing what checks we need to have.
E
So sometimes there are some that are just manual: a check may have other dependencies and things like that, so you'll have to go and fix it yourself. We give recommendations on how to address the controls, but then ultimately you have to go and address it manually. You can check out the individual results, but you can also look at what would be remediated. So if I...
E
So here are the remediations that get generated, and their application state, whether they were applied or not. So if I look at this one, you see...
E
You can see this is the remediation. It takes the form of a MachineConfig, and we just post this as a generic Kube API unstructured object, so this could be a MachineConfig, this could be a ConfigMap, it could be something else. So that's how the remediation is applied.
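A hedged sketch of what such a remediation object looks like: a ComplianceRemediation wrapping the payload (here a MachineConfig) as the unstructured object Matt describes. The field values below are illustrative, not output from this demo:

```yaml
# Illustrative ComplianceRemediation sketch; the payload under
# spec.current.object is posted as a generic unstructured object.
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceRemediation
metadata:
  name: ocp4-moderate-worker-example-rule   # illustrative name
  namespace: openshift-compliance
spec:
  apply: false                  # set true (or use auto-apply) to enact it
  current:
    object:
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig      # could also be a ConfigMap, etc.
      spec:
        config:
          ignition:
            version: 3.1.0
```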
E
So if I applied these remediations, they would all get kicked over to the machine config, and then the nodes would start to roll, just as if you were to go and configure a machine config pool as you normally would. So this takes away the need to have a privileged OpenSCAP pod, because, I believe, OpenSCAP doing it on RHEL can...
E
We wanted to have the right privilege separation. So we got it to the point where we're just using OpenSCAP read-only to do the scans, and then we're going through the native mode of applying changes to nodes on your clusters, which is through the MachineConfig API. And then there's also the ability to do the same with generic Kube objects, which is how some of the other checks worked.
E
It's not that they were running OpenSCAP on the host file system; we fetch a Kube object. In the content we say: hey, we want to look at, you know, this ConfigMap. So we designate that in the content, and the rule ends up causing the compliance operator to go and fetch and stage that object, and then we do an offline scan just on that object with OpenSCAP.
E
So we still use OpenSCAP even to look at native OpenShift or Kube objects. So that's cool. And there is a...
E
There is also the ability to remediate some of these. So you can post a ConfigMap, but also, if you write a rule and you need to do something that requires extra permissions, you can run the check, then address the permissions, like go and add the role binding to be able to change whatever object the check needs, and then run the check again and it'll remediate it. So we approach it on both fronts.
E
You can assess the compliance of, you know, the RHCOS nodes... you can also... are we still on? Still there?
C
E
We can, you know, do the same for the Kube objects. So you can assess the compliance state of the cluster, but also of the nodes themselves. And...
E
The compliance state of the nodes is about getting RHCOS to be compliant with various standards.
B
Awesome. So if folks want to learn more about the compliance operator, where would you send them?
E
So I would send them to GitHub, under the openshift org: there is the compliance-operator repo, and that's there for the code. We also have a section in the official OpenShift documentation for using the compliance operator; it's under Security and Compliance.
B
E
Yeah, and this compliance operator is cool because, I mean, it's innovative for our space. I think the feedback that we have gotten on it has been fantastic.
E
I think a lot of the time we'll present it, and, you know, people have immediate questions, like: oh, well, what happens if OpenSCAP goes and modifies something on the node? These are things that we've, you know, fleshed out. But we also try to keep an eye on the fact that if you're going to run a security operator, an operator that's supposed to do something security-wise, you're going to want that thing to secure itself. So yeah, we do try to apply...
E
We apply a lot of careful thought to, you know, the privilege level and everything that we do.
E
I think that would be the tough part: sometimes, especially with operator design, you feel like you're reinventing the wheel and that kind of stuff. But yeah, I think we're leading in the space of cloud compliance, if I may be so bold.
B
Good, awesome stuff. Thanks for going through the demo and going along with us here; I appreciate that, for sure. So, Dave, anything you want to wrap up with, or...?
C
Yeah, I'd absolutely like to thank Matt for a great session, appreciate it, and thank you, Chris. I'd like to tell everyone, you know, be on the lookout for a couple podcasts that we're going to drop here in the next week or so around some more data controls, encryption, and protection topics. Next month is network controls month, so I'm really excited about that and all things network security.
B
Now, thank you, Dave. Thank you, Matt. Thank you, Bobby, for producing, and we'll see y'all soon here on the GitOps Guide to the Galaxy, which is at 3 p.m. Eastern, 1900 UTC. So yeah, stick...