From YouTube: TGI Kubernetes 152: Cluster Testing with Sonobuoy
Description
Evan Anderson and Gustavo Franco discuss using Sonobuoy to run your own cluster-level acceptance tests.
A: Hello, everyone. We're probably going to give it a couple of minutes for people to trickle in, but I wanted to get faces on screen rather than just leave you sitting at the title card. I'm Evan Anderson, a software engineer who's been working on the Knative project for the last couple of years, but today I'm going to be exploring Sonobuoy, a project I know almost nothing about. And with me is Gustavo Franco — go for it.
B: Hey folks — yeah, Gustavo Franco, senior manager for CRE at VMware. Happy to be here to explore Sonobuoy and talk a little bit about some of the plugins — in particular the reliability scanner we've been working on. I'm interested in use cases and things that we've been doing, and also ready to explore.
A: Well, let's see — we've got a lot of people staying up late to see us, so that's really awesome. Hey Morteza. Tehran, Helsinki, Morocco — sounds like it's probably very early in the morning for some of you.
A: Yeah, it's sunny and super warm outside in Seattle, so I hope everyone else is having weather they like. I'm good for about one week of warm, and then I'm back to wanting cold.
B: I won't say it's warm enough in Mountain View, California, but it's definitely getting warmer.
A: And for those of you who aren't familiar with this, we have shared notes — there's a little banner. I keep these in HackMD with mirror mode, so you can add to the notes as we're going along; I'll be going through and cleaning them up at the end. If you all want to put comments in there in the meantime, it'll be a help, along with any links and so forth that you want to add.
A: Yeah, we are light on Kubernetes and cloud native ecosystem news for the week right now. To be honest, I was on vacation for the first half of the week, so I just ignored almost all the news going on, but I'm sure people out there have been doing interesting and exciting things. The only reason I know there was a Knative release last week was that I was working on it before I took some time off.
A: I'll be using those later, but it looks like we've got a decent number of people online watching, so...
A: We're going to get started with the week in review. This might be last week's news, but the Kubernetes 1.21 release is out. For 1.22, there's a KEP that's linked here: Kubernetes is going to move to three releases a year rather than four.
A: So you'll only have to upgrade every four months rather than every three, and I know that's going to make a lot of system administrators happy — it's a little easier to keep up with the pace of releases. There's sort of a tension here: as a developer, it's always nice to be able to get new things into people's hands; you're working on stuff and you want people to get it out there and give feedback on it.
A: On the other hand, it can be really tiring to hear "hey, I just rolled out this release" and then "you're already out of date, time to upgrade again." It's a little harder to update a cluster than it is to update your web browser. There's also a containerd vulnerability out there. If you haven't seen this CVE, the summary is that you can deadlock containerd.
A: If you ask it to fetch a container image that has a bad layer — one that's not a tar archive. And I believe this affects both Podman and CRI-O as well, so if you are using one of those and letting people choose what images to load, you may want to be careful.
A: Yeah, there's a little question in the chat — I'll just pop it up here. It looks like it's blurry.
A: And yeah, there are the upgrades you get because Kubernetes is shipping new features, and there are the upgrades you get because there's a vulnerability. For those of you who aren't already registered, there's a virtual KubeCon in a couple of weeks — there's still time to register. And if you are on a budget and you're like, "I don't know..."
A
If
I
want
to
attend
the
keynes
keynote
showcase
showcase
tickets
are
absolutely
free
and
then
the
regular
tickets
are
75
dollars.
Because
these
virtual
events,
they
don't
have
to
pay
for,
like
a
big
area
to
rent
and
so
forth.
It's
mostly
just
internet
infrastructure.
A: And then, as I mentioned in the Kubernetes and cloud native ecosystem news, Knative has a new release out — we do a release every six weeks, like clockwork, a little faster than the Kubernetes frequency. I don't have any other news, and I didn't see any in chat, but if anyone wants to throw news in there, go ahead.
A: And I see a lot of maintainers here, so let's close this up and...
B: Yeah, just for context: we have Vladimir, who leads Sonobuoy, in the chat, so we're embarrassing ourselves here — or we're about to. But we can count on a maintainer here to help us out.
A: Yeah, it's always nice to have someone here who's actually familiar with the software as we go bumbling through it. For those of you who aren't familiar, Sonobuoy is the tool that Kubernetes uses to run conformance tests. It's not just a tool for running Kubernetes conformance tests, but that's where most people probably know it from. The basic model of Sonobuoy is that it will...
A
You
do
some
testing
and,
as
I
understand
it,
you
do
some
testing
and
those
all
come
as
containers
and
they
get
run
on
the
cluster.
And
then
it
will
pick
up
the
results
from
from
plugins
and
display
them.
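In other words, the usual loop with the Sonobuoy CLI looks roughly like this (standard sonobuoy subcommands, run against whatever cluster your current kubeconfig context points at):

```shell
# Launch the default plugins (e2e conformance + systemd-logs) on the cluster.
sonobuoy run --wait

# Check progress if you didn't pass --wait.
sonobuoy status

# Pull the results tarball down from the aggregator pod.
results=$(sonobuoy retrieve)

# Summarize pass/fail counts from the tarball.
sonobuoy results "$results"

# Clean up the sonobuoy namespace and cluster-level objects.
sonobuoy delete --wait
```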
A: Nice to see that we are continuing the tradition of zero-point-high-number releases, just like we're doing in Knative. Linux amd64 — and I've got an existing Microsoft Azure Kubernetes cluster, so I'm going to be using that; it's got three nodes.
B: Yeah, and while you do that, let me point out for folks that Sonobuoy is interesting in that there's both a plug-in framework, so it's extensible, and also different modes. You can run Sonobuoy even without leveraging the plug-in architecture, so the end-to-end mode has — if I'm not mistaken, and folks can correct me if I'm wrong —
B
There
is
a
known,
disruptive
mode
that
folks
use
to
make
sure
that
the
kubernetes
installation
conforms
with
the
quadratic
specification,
so
you
kind
of
have
both
a
non-disruptive
one,
a
one
that
could
be
disruptive
and
one
mode
that
checks
for
conformance
with
this
back.
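Those modes map onto the `--mode` flag of `sonobuoy run`; the mode names below are the ones the CLI ships with:

```shell
# Default: conformance tests that are safe to run alongside other workloads.
sonobuoy run --mode non-disruptive-conformance

# Full conformance run, including potentially disruptive tests; this is the
# mode used when submitting CNCF certification results.
sonobuoy run --mode certified-conformance

# A single quick test, to sanity-check that Sonobuoy works on the cluster.
sonobuoy run --mode quick
```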
A: Yeah, do let me know — I have been told that I have good eyes, so I tend to use itty-bitty fonts, but I know that doesn't show up as well for streaming. So, let's see: to get started, they suggest... I'm just doing this.
B: Yeah, and if I'm not mistaken, the default — so you're just running it and waiting for it to install the objects — is going to run the end-to-end checks, but I think in non-disruptive mode.
A: That should be the case; this is quick mode.
A: Namespace — oh, this is a namespace that it created, and then it looks like everything is there.
A: Looks like we have a cluster role and a cluster role binding as well. I'm inclined not to...
A: I think Willie was suggesting using a kind cluster, in order to just be able to blow away the cluster and create a new one, which is great for testing — it's really easy. I definitely get into this habit of "oh, just blow away the cluster and create a new one," which is great until you're using it in production, and then it's: I can't do that anymore.
A
Let's
mode
what
name
spaces
I've
got,
I'm
gonna!
Guess
this
pods
one
and
the
schedule
preemption
one
are
ones
that
sauna
boy
has
created
or
my
last
run
created.
A: I did not enable audit logging here. Okay, let's see — it looks like it's run, and now I want to see what this does.
B: You're going to get the tarball file name, but yeah, go for it. Oh—
A: Nifty. I'm seeing a suggestion in chat that you should treat your production cluster as immutable. Unfortunately, it doesn't get to be immutable for all time unless you're not shipping any new software. Aha — and here's how to clean it up.
A
Instead
of
needing
to
just
wait
for
it
and
look
at
it
from
the
side,
docker
hub
rate
limits
is
good.
No,
hopefully
we
won't.
A: Oh, was that Willie? You said it was creating a namespace — was that just from looking at the code to see what quick mode does?
A: "Avoid header prefixes" — yeah, none of this looks like it particularly helps. So, Gustavo here has actually been using Sonobuoy in anger. Do you want to talk a little bit about how that works and what you've been using it for?
B: Yeah, sure. So the VMware CRE team has been building a reliability scanner plug-in for Sonobuoy. When we talk about a reliability scanner, it's not like, if it passes, you know your cluster is reliable.
B: What we're trying to do is identify reliability risks, basically through software, so that people can run Sonobuoy across multiple clusters and get a report of reliability risks. So let me share a little bit of how this works for us.
B: Clever — okay, cool. So, pretty simple, but just to show the URL: it's on GitHub, under vmware-tanzu/sonobuoy-plugins, and the reliability scanner is one of the plugins, so you can just clone them from there. Let me show you and walk you through the config. Within the reliability scanner we have the concept of reliability checks — basically the risks, the things we're checking for — and that's an ever-growing list. One is backup freshness.
B: Is this cluster being backed up using Velero? That's something the reliability scanner will check, once you run it, across this cluster — but if you run it across multiple clusters, it's a nice report to get: which backups are fresh? And by "fresh," it's both looking for expired backups, if you have the TTL set on Velero, and we're also about to implement a max age that you can override here.
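The freshness test being described boils down to a timestamp comparison. Here's a minimal sketch in Python for illustration — the function name and arguments are made up, not the plugin's actual code:

```python
from datetime import datetime, timedelta, timezone

def backup_is_fresh(completed_at, max_age, now=None):
    """True if the backup finished no longer than max_age ago."""
    now = now or datetime.now(timezone.utc)
    return now - completed_at <= max_age

# Illustrative data: a 3-hour-old and a 2-day-old backup, against a 1-day max age.
now = datetime(2021, 5, 14, 12, 0, tzinfo=timezone.utc)
print(backup_is_fresh(now - timedelta(hours=3), timedelta(days=1), now))  # True
print(backup_is_fresh(now - timedelta(days=2), timedelta(days=1), now))   # False
```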
B
So
we're
gonna
have
a
so
right
now:
spack,
backup,
name,
space,
we're
gonna,
add
a
max
age.
So
if
you're
like
yeah,
I
really
want
to
check
all
the
backups
are
older
than
I
don't
know
a
day
a
week,
no
matter
the
ttl
and
valero.
So
that's
something
that
sono
boy
can
do
for
you
with
the
reliability
scanner
plugin,
so
backup
freshness
is
one
thing
that
we
are
implementing
pod
disruption.
So,
basically,
do
you
have
all
of
your
pods
cover
under
a
disruption
budget
right,
so
people
generally
ask
include
detail.
B: What does that mean? "include detail: true" is for the report: you don't just get a pass or fail, you get a list of the pods that are or aren't included in a pod disruption budget. We haven't implemented a filter yet, but it's probably a good idea — you could include or exclude things from scope there. And the QoS check is basically a minimal desired...
B: ...QoS class, which you can check against one or multiple clusters — like, "I want a minimal QoS of Guaranteed" — and it's the same thing with "include detail": it's not just pass and fail; in the log output we'll show you the pods that are, say, failing the check. Another thing that could be a considerable reliability risk is that you may have namespaces without an owner, so we check for an owner label. That's more a matter of policy.
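For reference, a pod's QoS class is derived from its containers' requests and limits: Guaranteed when every container sets requests equal to limits for both CPU and memory, BestEffort when nothing is set, Burstable otherwise. A simplified sketch:

```python
def qos_class(containers):
    """Simplified Kubernetes QoS classification from container resources."""
    if not any(c.get("requests") or c.get("limits") for c in containers):
        return "BestEffort"
    if all(c.get("requests") and c.get("requests") == c.get("limits")
           and set(c["requests"]) == {"cpu", "memory"} for c in containers):
        return "Guaranteed"
    return "Burstable"

guaranteed = [{"requests": {"cpu": "100m", "memory": "64Mi"},
               "limits":   {"cpu": "100m", "memory": "64Mi"}}]
print(qos_class(guaranteed))                       # Guaranteed
print(qos_class([{"requests": {"cpu": "100m"}}]))  # Burstable
print(qos_class([{}]))                             # BestEffort
```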
B
So
if
you
want
to
have
a
policy
and
kind
of
scan
for
who's,
conforming
to
that
policy
and
probes,
so
do
we
have
probes,
liveness
readiness,
probes
so
back
to
all
those
things
are
not
super
difficult
to
do
manually
to
check
manually,
but
I
think
it's
cool
that
in
aggregate
you
got
this
like
yellow
config
and
then
you
can
kind
of
you
know
you
get
a
reasonable
default
and
you
can
rust
on
a
boy
with
a
reliability
scanner
and
then
boom.
It's
gonna
do
kind
of
off
this
at
once.
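Put together, a reliability-scanner configuration along the lines just described might look something like the sketch below. The check names and field spellings here are paraphrased from the discussion, not the plugin's exact schema — consult the vmware-tanzu/sonobuoy-plugins repository for the real format:

```yaml
checks:
  - name: backup-freshness
    spec:
      namespace: velero        # where the Velero backups live
      max_age: 24h             # the max-age override discussed above
  - name: pod-disruption
    spec:
      include_detail: true     # list pods not covered by a PDB, not just pass/fail
  - name: qos
    spec:
      minimum_desired: Guaranteed
      include_detail: true
  - name: owner
    spec:
      label: owner             # namespaces must carry this label
  - name: probes
```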
B
For
you,
you
don't
have
to
kind
of
the
sre
approach
of
you
know
eliminating
toy
right,
so
this
eliminate
a
lot
of
toy
of
checking
for
configuration
mistakes.
Of
course.
Ideally,
it
will
be
impossible
to
make
such
mistakes
because
you
will
be
creating
the
cluster
or
creating
your
clusters
in
a
way
that
prevents
this,
but
we
know
in
reality
right
you
kind
of
need,
checks
and
balances
right.
So,
ideally,
you
create
the
clusters
to
avoid
this,
and
then
you
have
the
tool
to
kind
of
scan
for
all
of
your
production
clusters.
A: My experience has also been that over time you add checks here based on hard-won experience — "oh gosh, maybe that thing was important; we should check for that." And once you've discovered that, say, QoS is important, and everything in a serving namespace should have QoS set, you need to go back over and actually check that. You can set a policy to protect things going forward, but that doesn't help with all the existing stuff.
B: Exactly — yeah, 100%. So how do we run this thing? We have basically a basic script, and we have three clusters. We're going to do a backup example — imagine you just had the backup freshness check in your config, at the top. And so we have kind clusters here. The plugin uses ytt, which is part of Carvel, for templating.
B
But
it's
running
so
my
boy
behind
the
scenes,
and
so
was
that
cool.
Would
we
also
be
running?
I
love
ideas.
I
like
I
got
distracted
because
I
saw
an
idea
for
a
check
and
I'm
like
wait
like
oh
no.
I.
A: Once you've shown this, I was going to pull down the Sonobuoy repo and develop a plugin to go with it.
B: Yeah, that'll be cool. I love ideas for checks — we have a list, I think, on the repository of things that we want to implement. The backup one is super recent; Peter Grant, who's also in chat, implemented it yesterday, I think. So we do run it, and it's basically going to refresh — it's deleting Sonobuoy and switching the context to the next cluster. We have three kind clusters, so let that run.
B
I
won't
pause
here
because
I
have
the
output
for
you,
so
I'm
diffing
a
cluster
with
all
backup,
so
no
backups
log
in
a
cluster
with
backup.
So
I
have
two
log
files
here
to
diff
to
show
you
there's
a
lot
there.
You
know.
B
No
because,
like
we
of
course,
be
right
before
it
to
jike
we're
running
this,
I
think
I
I
you
know
it's
spooking
me
because
it's
not
the
output
I
was
expecting.
B
I
was
expecting
a
much
shorter
one
without
all
the
timestamps,
but
let
me
see
if
that's
what
I
want
to
show
with
all
the
diff.
I
can
actually
show
like
no
backups.
B
So
you
can
clearly
see
if
I
have
no
backup
so
I'd
show
no
backups
to
find
and
that's
what
I
want
to
see
right.
We
don't
have
backups
in
that
cluster.
In
the
no
backups
cluster
in
the
backups
log
cluster
looks
like
you
ran.
B
At
the
bottom,
I
have
it
running
right
now:
oh,
it's
not
gonna
yeah!
So
it's
it's
not
overwriting!
This
file,
it's
overriding
one
directory
up,
but
we're
gonna
have
more
so
we're
gonna
have
fresh
results
but
yeah.
I
think
this.
This
shows
that
if
you
don't
have
backups
running
this
log,
then
it's
going
to
show
no
backups
defined.
It
shows
you
the
name
it's
valero,
and
so
that's
what
we
want
to
see
right,
no
backups
backups
are
failing.
B: The expired one is probably not expired yet, because — let's see.
A: Yeah, and it looks like it expired, yeah.
A: Maybe two backups, Peter Grant says.
B: Okay, we had two backups — I see, yeah. So, no backups: we don't have any backups defined. Thanks, Peter; Peter put the demo together for us. So yeah: "no backups," as you can see in the logs, fails because there are no backups defined, and this is exactly what we want. And then we've got the name of the backup, which is velero, and when we're looking at the expired backup, we can see that it also fails, because it is expired.
B: The backups one — that's the one we wanted to pass, but it's failing because it's saying it's expired, and that's probably because we're pointing it at an expired backup and not a fresh one.
A: Do you want to work on that while we maybe take a pass at Philip's idea of a running-time check?
A: So, for those of you who aren't familiar with the gh command, that's GitHub's official CLI, and it makes it really easy to clone and fork repos. So if I wanted to, let's say...
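A couple of the gh subcommands being referred to (these are standard GitHub CLI commands):

```shell
# Clone a repository with the GitHub CLI...
gh repo clone vmware-tanzu/sonobuoy-plugins

# ...or fork it under your own account and clone the fork in one step.
gh repo fork vmware-tanzu/sonobuoy-plugins --clone
```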
B: And so, yeah, we're cleaning up the demo. Evan, just a heads-up: I posted in the internal chat a default YAML config that might be useful for you to figure out how to pass config down, if you need to pass config down.
A: I don't know if anyone can read these, but we have an api directory that defines...
A: ...some — I'm guessing these look like they might be sort of Kubernetes objects, but it looks like it's a dynamic interface with only a spec.
B: And let's see — let me cover a question from the chat real quick. Someone was asking about why not set this as a policy. We have both: we're looking at Gatekeeper as well, as kind of a sibling project. So this is the scanner, and we have the reliability policies project, which is a sibling of this one — to cover it from both sides, kind of like Evan was explaining.
A: So we've got something here — it looks like we have something for disruption budgets already, so I am just going to clone that and get started there.
B: There's a question here about visualization and logs for the reliability scanner. Right now it's purely logs, so we don't have anything that makes it easier to visualize it elsewhere, in a UI or anything like that. Happy to look into suggestions here on how folks in the community would like to consume that kind of report.
A
Know
so
I'm
realizing
that
emacs
is
probably
not
going
to
be
a
great
tool
here,
because
I
don't
have
it
set
up
appropriately.
A: I wanted to delete that backup file, but instead it just didn't work.
A: So we are listing all of the core v1 pods; if there's an error anywhere in here, we set the check to failed. Let's see — this is a pod...
B: Yeah, Peter's saying you need to build and publish the code changes as a Docker image. Oh—
B: Yeah, it's in the Makefile, so check the Makefile.
B
So
it's
still
the
leading
summer
boy
in
this
one,
but
so
I
checked
for
before
we
didn't
have
a
backup
passing
right
so
like
do
I
have
something
complete
and
passed.
Yes,
they
stay
on.
This
is
a
different
track
by
the
way,
so
the
stillness
track
you
can
ignore.
B
So
the
backup
one
is
completed
in
path.
So
I
have
backups
and
then
compared
to
no
backups,
and
it
tells
me
the
name
of
the
backup
plural
status
fail,
no
backups
define.
B
So
this
is
the
diff
of
no
backups
and
backups
so
fail
to
pass.
Let's
see
if
the
expire
is
still
there,
so
it's
not
expired
now,
because
we
just
ran
right,
so
I
think
the
default
is
set
to
five
hours.
So
then
I
don't
have
an
expire
backup
now
so
then
it's
still
passing.
B
So
that's
for
this
first
check,
backup,
freshness,
pod,
sorry,
backup
freshness.
As
you
can
see
in
the
logs,
we
have
a
little
bit
more
so
like
the
stableness
is
there,
but
it's
passing
so
it's
not
really
interesting.
B
To
talk
about,
but
we
have
bot
probes.
If
pros
are
you
know
available
or
not,
and
the
name
spaces,
if
you
name
space,
has
an
understat
and
kiosk.
If
there's
a
minimal,
desirable,
kos
disruption
and
backup
freshness,
we
can
run
all
the
checks
at
once
against
you
know
the
cluster,
but
just
you
know,
for
the
sake
of
this
exercise,
we're
just
focusing
on
backup
freshness
in
this
one.
A: And yeah, it looks like the combination of make and Docker is not something this machine has readily available. Everyone could watch me get Docker working with Windows Subsystem for Linux, but that doesn't really seem like the best use of people's time.
B: Do you have ko? Because Ben is saying "ko publish."
A: I tried fetching a ko version; let me see if I can get that working, although it doesn't end up with a command.
A: I was looking at the Dockerfile, and at the end you define a command for the reliability scanner. I don't know how critical it is that it run as a Docker image without needing the command specified.
A: It looks like it takes an argument, "scan." Yeah — I think if we build with ko, that's not going to be there.
A: But in the tarball it's not — yeah, it's not actually a thing I can execute.
A: Okay, now I have things logged in; let's try that again.
A: Entry point and command — ko doesn't support that. And to answer this question: ko doesn't use its own login; it uses the standard Docker credential keychain, so I needed to do an "az acr login" and feed that into the "docker login" command.
A
And
now
I
have
a
docker
config
that
has
my
azure
credentials
in
there
and
I'm
not
going
to
show
that
to
you,
because
I
don't
really
want
to
have
to
bring
those
credentials
and
come
up
with
new
ones.
A
So
we've
built
a
new
image
here.
You
can
see
that
co
will
print
out
the
name
of
the
image
with
a
shell
one
digest,
and
then
it
looks
like
that.
This
is
going
to
do
basically
the
same
thing
that
this
that
this
docker
file
does,
except
it's
going
to
use
my
environment's
golang,
and
so
it's
going
to
build
this
reliability
scanner
binary
and
then
from
distro
list
base.
It's
going
to
do
the
same
copy
instead
of
copying
it
from
this
build.
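A typical ko invocation along those lines — the registry and import path below are placeholders for this repo, but `KO_DOCKER_REPO` and the `publish` subcommand are ko's real interface:

```shell
# Tell ko where to push images, then build and publish the Go binary at the
# given import path; ko prints the pushed image reference with its digest.
export KO_DOCKER_REPO=myregistry.azurecr.io/reliability-scanner
ko publish ./cmd/reliability-scanner
```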
A: There we go. It looks like "sonobuoy delete," if I don't pass --wait, just puts in delete requests for everything and walks away, so things were still deleting the first time I tried "make run." But now it looks like the run happened successfully — it's complete — so I should be able to say...
A: I'm realizing that maybe I shouldn't have done this here, but it looks like it's not part of — it's not one of the directories I already had.
B: Anyway, a tip for folks in the audience: to simplify the reliability scanner plug-in, we have shortcuts in the Makefile. So if you do "make results," we kind of run that for you — what Evan's running now to get to the detailed results for the reliability scanner run.
B: Yeah, and then you also need to fix the package name, because you have the package named "disruption" in your pod check.
A: Yeah, the backup file has that in several places; there's "disruption."
A: No, these are actual Kubernetes resources — it's named PodDisruptionBudget. So I don't need this config... this config.
A: I'm going to guess that "resources" here are the types of resources that get created, and therefore that we need permissions for. That's a lot of different things, but the first part of that looked a lot like a cluster role and a cluster role binding.
A: It does seem like you might need to be a little bit careful with Sonobuoy to run "sonobuoy delete" after each run. It would be nice if there were sort of a continuous mode that you could leave on, like a probe.
A: If not — I'm fairly sure it's not going to.
A: Yeah — oh, but I should be able to see that somewhere.
A: Oh, there's this: the plugin's reliability-scanner custom-values lib.
B: That's the config — but do we have the page on config?
A: Yeah, we just added it to config.go — okay, all right. I did a copy of "disruption" here, and it's right down here, but I never actually asked it to run it. So yeah — we forgot.
A
Is
the
one
that
we
did
need
to
change?
Oh
look,
that's
beautiful,
plugin,
xero.yaml
and
then
here's
the
command
to
use
and
oh
that
image
doesn't
look
like
the
image
that
I'd
want.
B
That
you
know,
what
did
you
do
make
a
run?
Can
you
open
the
make
file?
I
wonder:
do
we
have
this
hardcore
number
file
yeah?
Maybe
we
need
to
change
the
make
file.
B
No
yeah
take
a
look
at
the
make
file.
This
is
yeah
yeah.
A: I see these — but if I echo REGISTRY and IMAGE... it may be that I need to set them in make in some other way, because I try not to use make if I can avoid it.
C: Waiting for the Makefile — let's see.
A: While that's running — actually, I had one question about this. The YAML configuration that you put in the config map somewhere here — the reliability-scanner custom-values lib — is this something that's needed by Sonobuoy, or just the way that you happened to structure your checks?
B: It's not needed by Sonobuoy at all — it's just the way we did it, because you can think of the reliability scanner as a plug-in of plug-ins: each check is sort of a plug-in on its own, so we needed a way to configure each check within the reliability scanner plug-in. But no, Sonobuoy does not require this at all.
A: Yeah, it looks like our "make run" here didn't actually update this field, so — plug-in.
B: And to clarify for the audience, as we're saying: this is just needed if you're trying to develop a new check for the reliability scanner plug-in. If you're just trying to run it, you don't need to do any of this — it's just like that demo I did: basically three commands and you're done.
B: Yeah, it's still painful to add checks; we haven't really had anyone else try it. Now I feel like I have 50 to-dos going on a list here.
A: Yeah, I'm not sure what the best way to parameterize this is, but it's super surprising that the image is hard-coded here while you also have parameters for it in the Makefile — which, I'm guessing, get used for "docker build" and "docker push" — but you could probably pass them in as arguments to ytt here as well.
A: We do have a Dockerfile; yeah, I was having trouble with make plus Docker plus push.
A: For those of you who haven't seen dive, it's a pretty cool tool for taking apart container images.
A: ...the image — oh, but it wants Docker.
A: Does anyone remember — yeah, I feel like we're this close, and I just need to figure out where that is. And unfortunately I can't just exec into it and poke around in there, because, remember, these are built from the distroless images that don't have a shell.
A: It looked like 90 minutes. If we were actually going to do this, we'd want to go in here and add a duration that you could parse out — a duration for max age — and then we'd use that down here with the spec: instead of saying negative one times one hour, you'd put something reasonable in there, and then you'd be done.
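That change — parse a configured age instead of hard-coding -1 hour — could be sketched like this (Python for illustration; in the plugin's Go, time.ParseDuration covers the h/m/s cases):

```python
import re
from datetime import timedelta

_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def parse_max_age(text):
    """Parse strings like '90m', '5h' or '2d' into a timedelta."""
    match = re.fullmatch(r"(\d+)([smhd])", text.strip())
    if not match:
        raise ValueError(f"unparseable duration: {text!r}")
    value, unit = match.groups()
    return timedelta(**{_UNITS[unit]: int(value)})

print(parse_max_age("5h"))   # 5:00:00
print(parse_max_age("90m"))  # 1:30:00
```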
B: I think the lesson learned for us here is: it takes 30 minutes to add a check, but, like, an hour of pain to actually build and test it, so we need to simplify that development.
A: It would be really nice if you could auto-generate a stub from here, or have some way to automatically figure out: "oh hey, here's a check that was added; maybe you want to hook it into this other config file that's in a different place." If you look, there are actually three places: I needed to put my check here, I needed to put my check in config.go, and I needed to put my check over here — and then think about the code in config.go.
B: Yeah, this is awesome, and it's great that we have this recorded now — the team's going to rewatch this 30 times.
A: And if you had that, then you could have a second command which took that map and output this YAML automatically — you would just fill in the map here, and then you'd have a command that you ran that did that.
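That generator idea — one map of checks in code, YAML emitted from it — might look something like this sketch (the emitted shape is illustrative, not the plugin's real schema):

```python
def checks_to_yaml(checks):
    """Render a {check-name: {key: value}} map as a minimal YAML document."""
    lines = ["checks:"]
    for name, spec in checks.items():
        lines.append(f"  - name: {name}")
        if spec:
            lines.append("    spec:")
            for key, value in spec.items():
                lines.append(f"      {key}: {value}")
    return "\n".join(lines) + "\n"

checks = {"backup-freshness": {"namespace": "velero", "max_age": "24h"},
          "probes": {}}
print(checks_to_yaml(checks))
```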
A: Yeah — and we found all of the old pods, and we can actually double-check that, because you can run just a "kubectl get pods -A" for all namespaces. Other than discovering that a few of these pod names are really long, you can see that almost all of them are three hours old at this point. If we really wanted to check, we could put the barrier at, like, four hours and then see which ones fall out.
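That barrier idea can be sketched as a filter over pod start times (pure Python with made-up pod data; in practice the timestamps would come from kubectl's output):

```python
from datetime import datetime, timedelta, timezone

def pods_older_than(pods, barrier, now=None):
    """Return names of pods whose start time is further back than `barrier`."""
    now = now or datetime.now(timezone.utc)
    return [name for name, started in pods if now - started > barrier]

now = datetime(2021, 5, 14, 12, 0, tzinfo=timezone.utc)
pods = [("coredns-abc", now - timedelta(hours=3)),
        ("old-debug-pod", now - timedelta(hours=7))]
print(pods_older_than(pods, timedelta(hours=4), now))  # ['old-debug-pod']
```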
A: Awesome. Well, I learned a lot, and hopefully this has been useful to people. I would definitely say: if you're looking at making tests and configuration checks repeatable, look at least at this framework. You don't necessarily need to break things out in the modular way that they've done.
A: I did a bunch of stuff that you don't want me to push directly — like hardwiring in the container URL in two different places — so I will not do that, but I may send a pull request that someone else can take over.
A: And it was fun seeing all of you. I need to go get ready, because I'm going to go do a run.