A: Okay, in the hope that it works now: we are live on YouTube for our 29th episode. Everyone can contribute via the chat, and today David joins us from Puppet. We'll be talking about operational verification. Not sure what that is, but we will totally learn it now. So David, it's your stage, go ahead.
B: Thank you, thank you. Let's start by sharing my screen; I've got some slides prepared. Good afternoon, folks! Operational verification is something that I'm currently working on at Puppet, starting out with a new module to provide more confidence in your infrastructure's health. Well, I hope I don't need to convince you that having confidence in your deployments is necessary.

B: I hope to show today that it's possible to improve on the situation that we currently have. First, a few words about words, like: what am I even talking about here? This is how I learned the terms at university, an eternity ago. Verification is process-oriented: are we doing the things in the right way? Does each step match the requirements? So, a very detail-oriented view of whether we are doing the things correctly. On the other side we have validation, which is outcome-oriented.
B: Are we solving the actual problem? Are we helping customers to move forward with their services? Are we doing the things that the business actually needs? In refreshing my memory on these distinctions, I found a very good blog post by a computer science professor, who summarizes it as: verification will help to determine whether the software is of high quality, but it will not ensure that the system is useful. But he also goes on to say that this is a very strict distinction.

B: That's not necessarily useful for practitioners, and he shows this graphic here on the slide. It's from the same blog post, and it shows various techniques that we can apply throughout the software life cycle to ensure that the solution is within the specification, but also that the specification is actually something that solves the customer's problem. It's not a binary, really; it's a whole set of actions that we take along the process to be sure that we're on the right track, and that the track is leading in the right direction.
B: On the validation side, you can see things like goal analysis, prototyping, and customer testing, to get feedback from the customer or from the business that something is produced for, and to understand that everything is going in the right direction. On the verification side, on the other hand, we have automated tests, code analysis, static analysis: things that are, again, more process-oriented, and that help us confirm that we are meeting our coding standards, that we are still solving the problem, and that the problem we have verified is the problem that the customer needs, et cetera, et cetera.

In the Puppet ecosystem, on the verification side, we have unit and acceptance tests that are responsible for checking either low-level expectations on the Puppet code itself, or, as higher-level system tests, verifying that a specific piece of Puppet code applies and does the right thing. These are, again, verification steps that check the code against an often hand-coded specification: the test cases that we put into the code.
B: There is a clear progression in the testing scope: as the tests become more complex and more fully featured, they ensure that the system we're building is providing value. For example, if the requirement is having a web server, a test that shows that Apache is running correctly on the system shows that the system is more likely to be a web server than when there is no Apache running. Although we expected that, right?
B: So, the next thing I want to talk about. Some of you, if you have worked with Puppet already, know this, but I think it's also a good refresher to frame the rest of the conversation: idempotency. Idempotency means that actions can be applied any number of times onto a system, but won't change the state of the system on subsequent applications.

B: For Puppet, that's very convenient, because specifically it means we can take the catalog, the desired state that we have encoded for Puppet to run, and apply it over and over again, and other than the first time, when we enforce that desired state, there will be no additional changes happening to the system, until you change the desired state or the system has drifted from that configuration. But in the regular operation of a system, there will be no ongoing churn on the system.
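As a minimal illustration of that property (my example, not from the slides): a Puppet resource describes desired state, so applying it a second time changes nothing.

```puppet
# Desired state: this file exists with exactly this content and mode.
# First apply: the file is created and a change is reported.
# Every subsequent apply: no changes, because the state already matches.
file { '/etc/motd':
  ensure  => file,
  content => "managed by puppet\n",
  mode    => '0644',
}
```

Running `puppet apply` on this twice reports a change only on the first run, unless something drifts in between.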
B: So that's a very foundational property of Puppet-managed system management, and there are only very few cases where specific details violate the principle, and they usually come with big red flags that somebody is doing something outside the safety of the usual system. Much of Puppet's ecosystem today relies on a catalog's idempotency for verification. Like when we're running impact analysis in CD4PE:

B: that only makes sense if the thing that is in the catalog is really what the state will look like afterwards. Otherwise, those differences don't make a lot of sense. When we catch errors in testing, when something doesn't apply cleanly, again, that's an idempotency violation, because it doesn't reach the desired state.
B: We recognize that there is a problem that needs to be addressed before we move on into production. If there are unexpected changes in production, that is a high-value signal that something in production has changed outside of our control. Maybe somebody uninstalled a dependency or a package that we wanted to have installed, or maybe an attacker has changed something, or just a rogue process changed something that is under Puppet's control. Either way, it is something that needs to be checked, as a deviation from that important desired state.
B: Conversely, the assumption often is: if the Puppet run doesn't change anything, the system is healthy. That's quite a basic assumption, but is that really true? Here I have a quick example for a thought experiment on that: this configures Apache to serve static content from a directory.
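The slide itself isn't reproduced in the transcript, so here is a minimal sketch of what such code typically looks like with the puppetlabs-apache module; the hostname and paths are my own placeholders, not from the talk.

```puppet
# Serve static content from a directory: the whole "web server" in
# a few lines, with Apache itself managed by the apache module.
include apache

apache::vhost { 'static.example.com':
  port    => 80,
  docroot => '/var/www/static',
}

file { '/var/www/static':
  ensure => directory,
}
```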
B: Consider what common issues would make a Puppet run fail on this code. When typing it up, I actually didn't find a lot. I mean, it's well-tested code, it's been battle-hardened; Puppet usually just configures Apache for that static directory, Apache starts, and it's fine. One of the ugly ones, where this can go wrong, is when the Apache service fails to start because of a fatal configuration error.

B: For example, if port 80 is already in use, and then Apache goes like: "hey, somebody else is using my port, I can't start." And with recent improvements around systemd, and how Puppet is managing systemd services,

B: Puppet can actually detect that at the time of the initial application and report back. So this is actually quite an easy-to-detect situation today, and it will just come back with: Puppet just tried to deploy Apache, and it errored, and somebody needs to intervene there.
B: This nicely lines up with what I explained just before on the idempotency slide: an error on the service resource triggers a fix in the code or in the deployment, or somebody needs to go in and take that VM out and replace it with one that is not broken, or whatever. There is a problem in our system, also a problem outside of the expectations of the code, and somebody needs to go in and fix that.

B: As a side note, this kind of error is also really helpful because it's actionable: if the service doesn't start because of a misconfiguration, it is really something that somebody needs to fix.
B
Compare
this
to
heaps
of
system,
load-based
alerts
that
trigger
any
time
the
service
actually
gets
used
where
there
is
nothing
to
actually
do
other
than
check
yeah.
Well.
Actually,
the
system
is
currently
in
use.
So
it's
it's!
It's!
Okay,
right!
Sorry,
let's
make
the
system.
Let's
make
this
situation
a
bit
more
complicated
and
by
complicated,
of
course,
I
mean
more
realistic
right.
B: In this example, I've replaced the static directory configuration with a back-end service: for simplicity's sake, just a Docker container. I'm sure all of you will have your own examples of how that looks in your environment, be it a database in the background, or an application server, or a hundred other things that could possibly be connected to actually form the service that is relevant to the business. The actual configuration on the slide is not really the point.
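Again as a hedged sketch of what the slide likely showed: Apache proxying to a back-end in a Docker container, here using the puppetlabs-apache and puppetlabs-docker modules. The names, the port variable, and the image are illustrative assumptions, not from the talk.

```puppet
# Apache as front end, a containerized back-end behind it.
include apache

$backend_port = 8080

apache::vhost { 'app.example.com':
  port       => 80,
  docroot    => '/var/www/empty',
  proxy_pass => [
    { 'path' => '/', 'url' => "http://localhost:${backend_port}/" },
  ],
}

docker::run { 'backend':
  image => 'example/backend:latest',
  ports => ["${backend_port}:8080"],
  env   => ["PORT=${backend_port}"],
}
```

Note that nothing in this catalog encodes "Apache must be able to reach the backend", which is exactly the gap described next.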
B: Puppet will happily configure these two services, Apache and the Docker container, and they will never work; they will never ever talk to each other, and Puppet will not report any errors from applying this catalog, because Puppet doesn't have the concept of "oh, Apache needs to talk to this back-end service here." That's not encoded here, and it's really hard to do so at this level. Or maybe the SSL certificate of Apache has expired, and Apache starts fine, but

B: it isn't really serving any requests, because no browser accepts the certificate. Or the Docker image's configuration doesn't look at the port variable and defaults to something else. Or there is a firewall configuration that blocks the access between those two processes. Or, again, I'm sure any one of you will have their own example of that one time when something went wrong and you noticed too late. It's okay: accidents happen, and we fix them, we learn something from them, and we move on.
B
But
can
we
do
something
better
here?
Can
we
notice,
earlier
and
and
for
years
and
years,
even
before
puppet,
we
have
developed
monitoring
tools
to
live
with
that
right,
eating
or
negatives.
Before
that
I
looked
it
up.
First
release
2002
we've
been
working
on
these
problems
for
for
a
while
now
right,
and
the
issue
is
that
I
see
here
is
those
monitoring
tools
are
not
integrated
with
system
management
tools,
like
sure
some
use
the
singer
modules
to
push
out
configuration
for
services.
B
Some
might
have
a
different
con,
a
different
integration
into
their
monitoring
system,
to
link
up
the
things
that
we
configure
with
the
things
that
we
monitor,
but
all
of
that
requires
extra
effort
and
is
not
aware
of
what's
going
on
within
puppet
and
takes
extra
delays
and
and
multiple
systems
to
observe
to
understand
the
actual
state
of
the
system.
B
Excuse
me
a
few
years
back
minor
garden,
I
had
a
talk
and
blog
post
at
the
configuration
management
camp
about
how
testing
puppet
item
potency
and
monitoring
our
facets
of
a
bigger
effort
to
verify
our
systems.
I
don't
want
to
go
too
much
into
the
details,
but
if
you
screen
just
right,
you
can
see
that
monitoring
is
system
testing
and
that
system
testing
is
always
a
kind
of
monitoring,
because
it
needs
to
observe
the
state
of
a
system
to
determine
if
the
test
was
successful
or
not
right
and
with
operational
verification.
B: And with operational verification, there is this check resource here. It's currently mostly vaporware, but we have good people working on actually implementing it at the moment. This check resource will make an HTTP call to the specified URL and report a failure if that request doesn't return a 200, or if the body of the response is not the specified JSON.
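Since the module is, as said, still mostly vaporware, the following is only a design-sketch-style guess at what such a resource could look like; the resource name and parameters are assumptions, not a final API.

```puppet
# Proposed check resource: poll the back-end's status endpoint and
# fail the Puppet run's report if the answer isn't a 200 with the
# expected JSON body.
check_http { 'backend is answering':
  url  => 'http://localhost:8080/api/status',
  body => '{"status":"ok"}',
}
```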
B
This
will
run
directly
on
the
manage
node
at
the
time.
Every
time
puppet
runs
and
and
thus
provides
an
ongoing
audit
trail
of
of
that
deserve
that
the
configuration
that
public
deployed
is
actually
working.
This
doesn't
make
puppet
into
a
monitoring
solution
right.
I
I
realize
that,
but
it
will
provide
another
system.
Health
data
point
closely
integrated
into
the
management
referrals
right
it.
It
happens
directly
after
the
service
started.
We
can
say
well
immediately
afterwards
it
worked
it.
B
It
broke
afterwards
because
of
something
else
right
or
puppet
didn't
manage
to
get
a
healthy
service
together
it.
It
is
maybe
something
that
that
we
deployed
through
pop,
but
at
this
at
this
point
in
time
it
shows
up
in
your
reports
and
again
you
have
that
integration
of
what
did
you,
what
did
puppet
change
in
the
system
and
how
was
the
health
of
the
system
afterwards,
for
the
sake
of
gravity,
the
example
glosses
over
a
couple
of
details
like
resource
dependencies.
B
Sure
if
you
can
rely
on
on
recent
advances
in
puppet
to
execute
in
in
manifest
order,
but
for
more
complex
configurations.
Maybe
the
checks
are
in
a
different
clouds
and
need
to
be
properly
ordered
that
they
happen.
Actually,
after
the
service
has
been
configured,
the
service
might
take
a
few
moments
to
start
up,
so
the
check
should
be
configured
with
a
retry
loop
and
the
timeout,
but
those
are
implementation.
B
Details
right
that
for
for
the
conversation
today,
it's
just
that
idea
of
puppet
can
have
a
higher
level
understanding
of
the
service
health,
even
if
a
lot
of
the
gap
that
we're
covering
here
is
then
not
really
enforceable
at
the
time
of
noticing
that
it's
wrong,
but
just
surfacing
that
information
back
to
to
the
operations
team.
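Those two implementation details might look something like this in practice; the `retries` and `timeout` parameters are assumptions about the eventual API, and the service name is illustrative.

```puppet
# Explicit ordering so the check runs after the service is up, plus
# a retry loop with a timeout for slow-starting services.
check_http { 'vhost serves content':
  url     => 'http://localhost:80/',
  retries => 5,     # re-poll while the service is still starting
  timeout => 30,    # give up (and fail the check) after 30 seconds
  require => Service['httpd'],  # run only after the service resource
}
```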
B: And I'm only scratching the surface here. For example, what happens when this gets included in Bolt plans for deployment steering? Maybe it only fails on a couple of systems. If it fails on only one percent of the systems, is that good enough to roll it out to the others, or is that a reason to stop the deployment and isolate the failing nodes from the load balancer?
B
Is
this
useful
in
your
cd4be
blue
green
deployment
pipeline,
so
that
you
don't
rush
out
management
change
to
thousands
of
nodes
that
actually
produces
unworkable
systems
right
to
figure
this?
These
things
out
and
and
have
some
some
practical
experiments
we're
currently
working
on
implementing
that
in
the
puppet
labs
opv
repository
that
that
module
has
already
some
design
sketches
on
on
the
details
of
what
we
think
is
necessary
to
put
into
these
resources
and
how
they
are
looking
and
we're.
B
Gonna
start
implementation
within
the
next
couple
of
weeks.
We're
definitely
looking
for
early
feedback
on
how
this
fits
into
your
workflow
and
what
other
checks
you'd
like
to
see.
Of
course,
pr
is
especially
welcome,
but
any,
but
really
any
feedback
that
you
can
post
in
the
repositories.
Discussion
forums
is
highly
welcome.
Right,
we've
already
identified
more
work,
that's
necessary
for
this
for
general
consumption.
B
One
big
one
is
if
a
naive
implementation
of
the
resources
that
that
I
had
as
a
prototype
would
present
a
notification
that
the
system
stage
is
fine,
which
then
implies
a
lot
of
churn
in
the
reports
and
and
we're
currently
working
on
also
making
sure
that
that
doesn't
happen
so
that
you're
only
getting
a
notification
when
there
is
something
wrong
and
not
also
when
there's
something
okay.
B
So,
but
where
do
we
go
from
here
right,
like
I,
I'm
not
done
with
that
idea
yet
clearly,
there
is
firstly
there's
some
work
to
make
this
fully
usable
and
I've
listed
a
couple
of
points
here,
like
more
checks.
Http
is
nice
https
with
ssl
cert
verifications
is
an
easy
goal.
Powershell
and
running
commands
for
arbitrary
shell
checks.
Somebody
somebody
please
make
a
check
and
pay
plugin.
B
I
I
would
love
to
see
that
from
somebody
who
actually
knows
what
they're
doing
in
that
regard,
app
update
to
check
for
the
mirror
being
up
to
date
and
update
actually
running
or
no
security
updates
outstanding
certificate
checks
for
certificates
on
disk,
as
opposed
to
certificates
that
are
exposed
on
a
service.
B: And again, whatever other ideas folks are coming up with: I would be very interested to hear where you see that being useful. And also, I have to admit, while I'm very excited about this, we don't have infinite budget, and I'd much rather see us implement checks that folks are actually asking for than things that I pull out of my hat.

B: Yeah, fix reporting: I mentioned that already; the fix is about to land. As with anything touching the low-level Puppet APIs, that was a little bit more challenging than I expected, but the folks working on that did very well in knocking it out, and so that will be available in the next Puppet 6 and 7 releases, likely.
B
So
so
that's
a
little
bit
of
of
a
bummer
within
that
regards,
but
it's
also
just
the
reality
that
we
have
to
work
around
and
that's
that
next
on
the
list
exponential
retry
for
arms,
I
I
think
there's
pretty
well
established
algorithms
for
exponential
back
off
with
timeouts
and
and
jittery
in
there,
so
we
just
need
to
get
find
the
time
and
actually
implement
it.
B: We at Puppet support a lot of core modules that deploy specific services, like Apache, MySQL, PostgreSQL, just to name a few, and if you configure something, we should also prove that it works afterwards. So, like in the example I showed: configure a vhost, make a connection to that vhost to show that it's available. I would be interested in what your opinions are about that: is that something you would like to see?

B: Or maybe it would be more ergonomic to expose these as tasks and functions, so that they look more native to Bolt and the like. The verification code itself is not very complex, so wrapping it in different calling conventions is also not a big problem.
B
The
the
discussion
forums
on
the
github
repository
are
the
best
way
if
you,
if
you
have
a
puppet
production
environment,
I
would
also
appreciate
any
volunteers
for
a
45-minute,
in-depth
ux
interview
where
we
can
cross-check
some
of
our
basic
assumptions
and
learn
more
about
how
people
in
the
field
are,
are
using
cicd
systems
and
and
how
that
would
fit
into
their
workflow.
B
When
I
showed
this
around
internally,
I
I
usually
got
asked
one
or
both
of
the
foreign
questions.
First,
when
can
I
get
this
and
yeah
we're
working
on
that,
but,
and
also
and
the
other
one
was.
How
does
this
interact
with
acceptance,
testing
and
I'm
glad
you
asked
so
this
is
again
the
example
from
before
a
little
bit
restructured
and.
B: This configures a vhost, adds a file, and adds a check that that file is available on the web server. And this is now also an acceptance test case: if the file can be downloaded from the web server, the configuration is acceptable. It did what it's supposed to do; it has verified that it's within the specification of, or within the expectations of, what we expect from the code.

B: And let's be honest here: this is a more valid and more in-depth test implementation than most I've ever written, at Puppet and before that, because it really checks the availability of the service, and not just the idempotency of applying that basic configuration. Plugging it into Litmus or Beaker is quite straightforward: use an idempotent apply method, just run that example on a provisioned node, and see if it passes.
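A hedged sketch of that restructured example, combining the earlier vhost with a canary file and a check; `check_http` and its parameters are, as above, assumptions about the in-design module, and the names are placeholders.

```puppet
# This manifest is both the configuration and its own acceptance
# test: if it applies cleanly and idempotently, the file really was
# downloadable from the web server.
include apache

apache::vhost { 'static.example.com':
  port    => 80,
  docroot => '/var/www/static',
}

file { '/var/www/static/canary.txt':
  ensure  => file,
  content => "it works\n",
}

check_http { 'canary is served':
  url  => 'http://localhost/canary.txt',
  body => "it works\n",
}
```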
B
The
check
resource
already
does
all
the
checking
inside
that
application
and
if
it
has
configured
that
apache
and
it
has
downloaded
the
test
file
from
from
the
vhost
yeah
done,
it
works.
And
we
know
it
works
because
we
actually
made
a
connection
to
the
web
server
and
have
proven
that
it
works.
B
So,
and
notably
one
of
the
big
advances
that
that
I,
that
that
is
one
of
the
main
motivations
of
starting
all
of
that
there
is
no
ruby
involved.
In
doing
that,
I
mean
yes
today,
you
need
to
have
some
boilerplate
to
to
have
the
test
case
with
this
string
inside
it
and
apply
it,
but
really
it's
not
a
big
step
to
thinking
of
having
a
set
of
these
files
that
are
test
cases
in
a
special
repository
and
having
a
tool
that
takes
each
of
those
gets.
B
A
vm
applies
that
checks
the
result
and
reports
back
to
results
without
any
involvement
of
ruby,
archback,
our
spec
puppet
source
pack
or
any
of
the
other
things
that
for
people
that
are
not
developers
are
usually
not
great
experiences
right,
but
wait.
There
is
more
right.
This
is
also
a
piece
of
the
documentation
for
the
apache
module.
This
is
how
you
deploy
apache
and
configure
it
with
a
v-host
that
hosts
static
files.
B
Taking
a
leaf
out
of
behavioral
driven
development
tools
like
cucumber,
this
is
a
a
behavior
specification
right
like
add
a
couple
of
sentence
at
the
start
about
how
this
is
the
way
to
configure
a
a
static
v
host
and
and
render
it
into
a
readme
or
a
reference,
and
you
get
a
narrative,
a
narrative
documentation
on
all
the
different
ways
in
which
your
v-host
can
be
used
for
a
static
thing
for
a
proxy,
using
authentication,
etc,
etc.
B
I
I
I
know
I
don't
know
about
you,
but
for
me
that
that
sounds.
That
makes
me
really
excited
and
then
also
like.
This
is
a
unit
test
case.
It
might
not
look
like
it,
but
if
this
compiles
it,
it
means
that
the
internal
structure
of
the
apache
v
host
and
everything
fits
together
and
and
and
works
on
on
various
platforms
and
is
using
the
right
facts,
etc,
etc.
B: In this case there is no data transformation, and the actual file deployment is better inspected at code review time than in a unit test; for this basic example, that it compiles is really sufficient. In the bigger picture, though, there is one thing that we should really be unit testing.
B: The thing that we need to unit test is: if we configure the vhost for a specific port, does the check actually check the right URL on the right port? This is the one unit test that's absolutely necessary, to make sure that an acceptance test that tests that the check resource is working is actually testing the thing that was just configured, and not going out to Google and saying: "oh yeah, I just configured the vhost, and Google is accessible."
B
That
would
be
a
nasty
oversight
and
and
would
make
the
acceptance
sets
invalid,
but
with
a
small
unit
test
to
make
sure
that
the
configuration
information
is
passed
correctly
to
the
check.
We
can
then
use
that
as
a
foundation
for
the
acceptance
test
to
be
really
really
accurate
and
valid,
because
we
have
already
proven
that
the
check
resource
will
test
the
correct
thing.
B: Even further in the future, here's just a loose and fast example to spark your imagination, while still fitting everything on a single page: a Bolt deployment plan for a fictional application that has a database and an application server. In the first step, there is an apply block to deploy and configure the database on the DB server, and then it does a check that that application worked fine and that there were no errors there.

B: If you can't reach the database, there's something wrong, and this check, running on the application server, can test the entire end-to-end chain of all the firewalls and networks in between, access rules, et cetera, et cetera. I skipped credentials here, but that's also something that could potentially be passed through. And then, if the database is accessible, it reconfigures the application server to use that database and be accessible on a specific URL. Again, the example is entirely hypothetical:

B: I didn't even try to see if it is valid syntax, but I believe it mostly is. And then, at the bottom, we again check that the application deployment went correctly, with check resources inside that app class.
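In the same "entirely hypothetical" spirit as the slide, a loose reconstruction of such a Bolt plan; all class, resource, and host names here are invented for illustration, including the `check_postgres` resource.

```puppet
# Hypothetical Bolt plan: configure the database, check reachability
# from the app server, then configure and check the application.
plan myapp::deploy (
  TargetSpec $db_server,
  TargetSpec $app_server,
) {
  apply($db_server) {
    class { 'myapp::db': }
  }

  # Run the reachability check *from the app server*, so the whole
  # network path (firewalls, routing, access rules) is exercised.
  apply($app_server) {
    check_postgres { 'database reachable':
      host => 'db.example.com',
    }
  }

  apply($app_server) {
    class { 'myapp::app':
      db_host => 'db.example.com',
    }
  }

  # Finally, check availability from the node running the plan.
  run_command('curl -fsS https://app.example.com/health', 'localhost')
}
```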
B
That
already
gives
a
high
degree
of
confidence
that
the
application
is
correctly
configured,
but
at
the
end,
another
check
http
called
this
time
from
the
node
running
the
plan
and
not
one
of
the
configured
nodes
to
see
if
the
application
is
available
externally
right,
truly
production
systems
will
have
additional
complexities
like
running
database,
migrations,
pre-configuration,
post
configuration
quiesce
the
database
and
take
a
backup
while
running
in
a
special
mode,
switch
the
app
into
and
out
of
a
maintenance
mode.
B
I
think
next
cloud,
for
example,
is
very
keen
on
on
steps
like
that
draining
disconnecting
re-establishing
load,
balancer
configurations,
configuring
cluster
members
across
multiple
nodes
right.
B
Again,
I'm
sure
you
all
will
come
up
with
much
more
interesting
examples
than
this
one,
but
by
virtue
of
having
these
critical
checks
directly,
where
the
configuration
happens,
they
will
not
get
lost
in
a
shuffle
right.
You
use
that
deployment
plan
to
deploy
into
your
staging
system.
B
You
use
that
to
deploy
into
your
development
system,
you
use
the
same
checks
over
and
over
again,
and
you
can
use
the
experience
that
you
gain
in
your
development
systems
and
in
your
staging
systems
that,
when
the
time
comes
to
take
that
plan
to
your
production
systems
and
do
a
system
upgrade
or
deploy
a
new
database
customer
member
or
whatever,
is
actually
encoded
in
that
plan,
it
will
run
the
same
checks
at
the
same
point
and
will
be
able
to
catch
that
one
percent
of
difference
between
your
production
system
and
your
staging
system
at
the
time
when
it's
running
the
deployment-
and
hopefully
the
hope
is
catch
it
before
it
starts
destructive
operations
or
or
at
least
notifies
the
person
running
the
plan
at
the
time.
B
Yeah
and
that's
all
I
have
for
you
today
right
I
I
hope
I
have
inspired
you
to
have
a
new
perspective
on
testing
and
monitoring
and
how
else
that
fits
together,
and
I
hope
you
go
and
check
out
that
opv
repository
and
drop
also
note
in
the
discussion
forums
on
on
what
you
think
of
that
and
and
how
that
fits
into
into
your
workflows.
B
Michael,
I
I
think
we
still
have
a
few
minutes
for
q
a
before
we
maybe
go
to
other
topics,
so
I'll
leave
up
the
links
here
for
reference
in
the
background
for
now
and
I'll
also
post
the
this,
the
slides
as
a
pdf,
with
a
mostly
accurate
transcript
of
what
I
talked
about
afterwards
and
and
hopefully
that
that
can
be
difficult
around.
A: Yeah, thanks. I'll be linking that in the blog post I've been trying to transcribe right now. That was really interesting.

A: I'm just trying to think of the question I had. Do you think the current method, and this is like stealing the question from Nicholas, I think: does this apply to Puppet only? Or, when I say I have my infrastructure-as-code tool and I have my monitoring in place, is that something I could potentially use if I'm outside of the Puppet ecosystem?
B
I
I
think
the
concepts
do
apply
right
and,
and
I've
been
when
mina-
and
I
were
talking
about
our
our
talk
a
couple
of
years
ago
and
yeah,
the
hitchhiker's
guide
to
testing
and
infrastructure
as
code.
There
is
the
link
on
on
the
slide
and
again
in
the
materials.
Afterwards,
we
did
very
much
talk
about
how
having
your
single
tests
or
your
singer
monitoring
report
back.
The
status
of
your
staging
systems
is
part
of
understanding
whether
your
deployment
has
worked
right
like
it's.
B
It's
not
it's
not
qualitatively
different
from
what
we've
been
doing
since
2002,
when
naggers
was
founded,
or
probably
before,
that
as
people
had
their
shell
scripts
and
pearl
hacks
and
whatnot
to
understand
to
get
an
overview
of
how
their
system
was
living.
The
the
difference
is
in
in
the
in
the
integration
and
moving
it.
B
Just
that's
one
step
closer
to
the
point
of
operation.
Right
like
you
could
equally
have
a
shell
script
that
deploys
your
thing
and
at
the
end,
I
have
a
loop
that
pulls
for
nagios
and
or
triggers
a
check
in
daggers
and
then
pulls
for
the
result.
Right,
like
the
concept,
is
exactly
the
same
figure
out
at
the
point
of
deployment.
What
is
the
health
of
my
system
and
load
balancers?
B
Do
that
already
just
in
their
default
configuration
like,
if
you
configure
it
correctly,
you
can't
hook
up
a
back
end
to
aj
proxy
if
it's
status,
api
endpoint
is
not
healthy.
B
A: Yeah, that's a really good point, because I was also thinking: why would I need to add an additional layer of a monitoring tool when the deployment process can already take care of it? So, as you pointed out, you've been adding the check_http, which is a... is it a function, or a defined resource, or something?

A: I think I've needed something like that in the past, because I was using Puppet to provision Elasticsearch, and I needed to wait for the REST API to come up, which takes a while, and then I wanted to provision some Kibana dashboards or something like that. At first glance, I wrote my own bash script, and I'm not a fan of bash scripting, so I made mistakes, and it took me longer than I anticipated.
A
So
I
was
really
happy
to
see
that
this
is.
This
was
made
a
resource.
The
thing
is
like
having
a
health
check.
This
is
similar
to
promisius
and,
having
like
the
the
black
box
exporter
or
the
pink
probes,
or
something
else
is
there
is
a
concrete
plan
to
add
more
like
more
of
these
health
checks.
So
you
had
a
slide
with
next
steps
and
what's
needed,
and
this
was
quite
a
lot
to
to
unpack.
B: As I said, I'm currently at the very start of this project, and we've been discussing the idea internally, and I think it has legs. It's something where we can, I don't want to say rip off, but package existing practices into a better-together solution. Puppet, at its core, is an integration solution, not a "do" solution. The example that I always use is: Puppet installs packages; Puppet does not install packages; Puppet lets

B: you install packages; Puppet uses the package managers to install your packages. And implementing the checks here is a little bit in the gray area, beyond what I think Puppet is really comfortable with. But on the other hand, it provides a starting point for the conversation. It gives people the mental framework of: "oh hey, I can get my feedback earlier." I can get my feedback during deployment, not 20

B: minutes later, when monitoring alerts me. I can get the feedback during my tests, without having to write Ruby, instead of during deployment, et cetera, et cetera. I think a lot of people have suffered a long time from test tooling being very developer-focused, and specifically software-engineering-focused: requiring Ruby, requiring...

B: If you can read and write Puppet, you can read and write a test case here. So, yeah: making it more accessible for people to have this kind of feedback earlier in the process, and therefore making the end less surprising, that is really the goal here.
A
Yeah
I
keep
thinking
I
was.
I
was
reading
the
promises
up
and
running
book
lately
and
I
think
there
is
an
exporter
for
literally
everything
you
had
on
the
slide
and
beyond
so
like
having
having
a
node
exporter
or
like
black
progress,
builder
exporter
or
whatever
is
needed
and
exposing
an
http
endpoint
and
you
deploy,
for
example,
the
docker
image
with
puppet
as
part
of
the
the
infrastructure's
code
process,
you
could
spin
up
like
a
metrics
endpoint
on
the
host.
A
In
addition,
probably
means
you
need
to
create
the
check
http,
but
ask
the
promises,
resource
or
the
remote
resource.
You
don't
need
to
have
a
promisius
instance
somewhere.
This
is
like
away
from
the
monitoring.
The
idea
would
be
like
you
have
an
ec2
instance.
You
just
use
puppet
to
deploy
it
and
install
nginx
or
whatever
and
as
a
side
car
container.
You
install
promises
exporter
and
you
use
that
as
a
health
probe,
endpoint.
Basically.
B
Yeah,
I
I
think
there
are
getting
the
basics
right
means.
It
will
open
a
lot
of
possibilities
down
the
line
right.
I
I
don't
think.
As
I
said,
I
don't
think
this
will
make
popular
monitoring
system
but,
like
you
said,
it
provides
the
stepping
stone
to
integrate
with
the
monitoring
system.
A
This
the
thing
is
I'm
because
if
you,
if
you
start
adding
more
health
checks
on
your
own
you're,
probably
reinventing
the
wheel
with
the
database
connection
with
an
http
connection,
certificate
handling
whatever
and
like
using
what's
already
there
in
this
open
source
could
be
a
way
of
saying:
hey.
We
are
bundling
something
or
where
we're
using
existing
resources,
and
you
get
it
out
of
the
box.
A
If
you
want
to
integrate
with
like
subix
or
senzu
or
whatever
here
is
here
is
a
different
way,
but
out
of
the
box,
you
will
get
that
the
thing
is
you
probably
could
do
it
on
your
own
and
write
your
own
simple
interfaces?
A
We wanted to add everything: embedded database checks, embedded whatever, in C++, which made our code not really approachable, and contributions were impossible, because there were just three people understanding it and nobody else. So having something open where everyone can contribute, okay, sounds odd, but it would still be a good idea. I'm just not sure if…
B
I think there is a certain temptation there. Like, of course I'm going to… oh, sorry. Of course, the OpV module will be open source, and I would love to see more people contributing checks and improving what can be integrated there, right? Like, integrating…
A
A check that goes out to a node, triggers a specific check, and then waits for the result: I think that would be entirely possible to implement, and in Ruby it's not impossible, right?
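That trigger-and-wait shape could look roughly like the Ruby sketch below; `run_check` and its keyword arguments are invented names for illustration, not the module's actual API.

```ruby
# Run a check block until it passes or the attempts are used up.
# Returns a result hash instead of raising, so a caller (for example
# a Puppet resource or a Bolt task) can report pass/fail uniformly.
# A raised exception counts as an immediate failure.
def run_check(name, attempts: 3, delay: 0)
  attempts.times do |i|
    return { name: name, ok: true, attempts: i + 1 } if yield
    sleep(delay) if delay.positive? && i < attempts - 1
  end
  { name: name, ok: false, attempts: attempts }
rescue StandardError => e
  { name: name, ok: false, error: e.message }
end
```

A caller would pass the actual probe as the block, for example an HTTP GET against the service, with the retries papering over a service that is still starting up.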
A
Yeah, you need to find abstractions, and you basically need to hide it. For example, if you're taking the route of adding a Prometheus exporter somehow, you would totally hide it away and say: hey, this is my health probe, which gets automatically deployed; in the background it's a Docker image which somehow lands on the host, but as a consumer, or as a customer, I don't care about it.
A
Yeah, that's also a story which can get quite long. I'm also thinking about when you're deploying in your Kubernetes cluster, for example: you probably want to have something similar. Yeah, okay, you're probably using a different method then, but it's a similar thought, in a way, of saying: hey, I want to have a health endpoint.
B
If you bring up Kubernetes, where would Puppet run? Puppet would not be running in one of the containers in a pod as part of your application; Puppet would be running outside of Kubernetes itself, and configure specific things in the Kubernetes API, or take care of managing the base Kubernetes installation. And there the question is, for example: if you configure an ingress server or make a network configuration change, suddenly you have again something that you can verify. Is my network configuration change now actually working? Is that ingress server now providing the right connectivity, et cetera, et cetera?
A
I think, yeah, I forgot that Puppet uses an agent for deployment; I'm living too much in the Ansible world. I think the point you're making is: Puppet manages the outside of the Kubernetes cluster, and the deployment happens with CI/CD or something else.
A
You have a very good REST API to consume, and you can basically build a Kubernetes integration based on that API, because everything is already there and you get the status information, whether port 80 or whatever is bound and the application works, which potentially makes it easier for you to integrate, because you don't need to add any health checks or write too much Ruby code. But for bare metal or virtual machines…
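As a sketch of consuming that status information: the JSON below follows the real Pod status schema from the Kubernetes REST API, but the `pod_ready?` helper is an illustrative wrapper of our own, not part of Puppet or any client library.

```ruby
require "json"

# Given the JSON body of GET /api/v1/namespaces/{ns}/pods/{name},
# report whether the Pod's "Ready" condition is True.
def pod_ready?(pod_json)
  pod = JSON.parse(pod_json)
  conditions = pod.dig("status", "conditions") || []
  ready = conditions.find { |c| c["type"] == "Ready" }
  !ready.nil? && ready["status"] == "True"
end
```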
A
Yeah, but I'm talking too much: what are the others thinking around this topic?
A
I have no problem; I can talk all day.
A
Then another one, yeah, one by one. Hi, Nicholas, are you here? It's a little awkward.
A
But maybe you can stop screen sharing, so we can open up the discussion round. Sure, David.
C
I have to be honest: I'm from the Windows world, so I know that Puppet is restricted in the Windows world. So it's not…
B
Completely unrelated, also: we are currently in the final stages of wrapping all the DSC resources in Puppet code, so that you can get all the fine-grained, property-by-property change reporting from Puppet, but based on DSC resources. So if you're more familiar with PowerShell or something like that, but not happy about their deployment model or their reporting, Puppet can do better.
C
I'm the command-prompt guy, but that sounds pretty interesting, so I will have a deeper look into the system.
B
The name of the handle is Rob Reynolds; he's the…
A
Founder. I met him at PuppetConf some years ago and was excited to learn. And I think Puppet automation on Windows works pretty reliably, in contrast to, like, finding a way to SSH into a Windows machine. It's better now, but if you need to maintain something older which you need to manage, it gets complicated without an agent. And NuGet, or whatever package manager is the hot…
B
…native package manager. And from what I heard from the Windows folks, the rollout of the last try, winget or whatever it was called, was not very well received, as they still don't fully get community. It's…
C
Winget? I don't think so.
B
Yeah, but we're digressing from the actual topic. Like, integrating that in your workflow: do you think that could help, or do you think the concepts make sense, or am I completely off base?
C
So, for my case, I think this is a really viable option for me, because I sometimes have to reinvent the wheel, and I don't like that I have to have my own system to run some checks. So this could really be a viable option for me. I can't speak for everybody, but I think for the Windows enterprise business…
C
Right now it's a custom one, because we have so many special requirements that we couldn't fit an existing one. But maybe, when this is done, or at least once I can test it…
A
One use case I was thinking of: when I'm managing everything in Git and using CI/CD, triggering a Puppet run, and then the agent pulls the catalog and everything, I could use Bolt to immediately execute something and potentially get immediate feedback.
A
So I can integrate it in my GitLab dashboard, for example, but also use Puppet, the Enterprise console or whatever the front end is, to see the current state. And a specific use case of mine is always deploying websites.
A
So when deploying something, I want to ensure not only that the text or the color is right, but I want a certain end-to-end test: that when I click on a button, something moves or something happens. And having a more approachable interface: I'm not a fan of diving into XPath and accessibility testing and everything else and understanding it. I would love to say: hey, please test this, and this is how it looks now, and this is, like, the…
A
…roll it back automatically (this is an enterprise feature), or, like, warn me about it. This would be awesome.
B
I think there are web-testing solutions that already give you that, right? The question is how you set them up and how you integrate them into your workflows, and having a check resource that kicks off one of these tests…
B
Yeah, maybe, maybe not. If you're really in-depth testing your web application after you've deployed it, you probably want to go for a real solution like Selenium Grid or something like that.
B
People who actually do web testing will hate me for saying that, because I heard some people be very unhappy about Selenium Grid, but the options are out there.
B
Yeah. So, to the actually Puppet-related part of that question: I just talked to somebody who has a lot of clients using Puppet in quite regulated environments, and an interesting thing is happening there. They're mostly focused on managing the baseline of the system in an iterative, ongoing state-enforcement way, and that is managed by one team, and then it sounds like the actual application…
B
…that's deployed on there is not as relevant to that system baseline. So certainly I see a future where you have a core infrastructure team running the Puppet agent on all the systems, taking care of the system baseline, so that it's still CIS compliant, so that it's still PCI DSS compliant, so that all the security stuff runs on it. But then the application gets deployed through a Bolt deployment plan that doesn't actually have regular state enforcement; it just deploys the files, starts…
B
…the service, configures a couple of things, makes sure that everything fits together, and hands it off to a different part. And that Bolt deployment plan then orchestrates across the entire cluster, or parts of the cluster, and does rolling upgrades or whatever else is needed for the cluster, or goes back and forth between nodes to check the health of things. And then again, going back to your pipeline example:
B
Yes, your complete pipeline then starts with some unit tests and syntax validation, because they're cheap and easy and fast, and then goes on to: oh, I get five VMs from my vSphere cluster, I chuck that deployment plan out against those five machines, and it creates a service that goes green. And it goes on, and then it starts a Selenium test cell that connects to that application server, or the web front end in that cluster…
B
…that was just stood up, and checks that everything there is okay. And then all of that gets recycled into the digital void, and it's ready for an approval, or for waiting for a maintenance window, to run the same deployment against the production version of the cluster.
A
Yeah, thanks. I think it's deeply tied into what we've been discussing with quality gates and Keptn and cloud native and SLOs and application performance monitoring: not only defining the health as a state, but also the performance, like the CPU usage and the memory usage increased by this commit, by this change. It's technically not related to infrastructure as code, but it could be; you don't want to maintain different workflows for just one thing.
B
One of the things that I always wanted to implement but never got around to: every database management system should come with a Bolt backup task and a Bolt restore task that you can just point at your configuration data for the database and say: well, I've configured that database, now run a backup for it and pull the backup data down to this location.
B
And, conversely, since the code knows all the details of where the database is configured and where the backups are configured, I can go back and say: execute this task to take that backup and restore it back into that database, or into a new instance that is adequately configured. And then, for our enterprise customers, all of that would run through the console, adequately audited, et cetera, et cetera, so that you can see what happened.
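Bolt's Ruby tasks receive their parameters as JSON on stdin, so the core of such a backup task could look roughly like this. The task-on-stdin shape is real Bolt convention, but the parameter names and the choice of mysqldump are assumptions for illustration.

```ruby
require "json"

# Build the dump command from the same configuration data the
# manifest already knows. Returned as an argv array so it can be
# passed to system(*cmd) without shell-quoting issues.
def backup_command(params)
  [
    "mysqldump",
    "--single-transaction",
    "--host=#{params.fetch('host', 'localhost')}",
    "--result-file=#{params.fetch('destination')}",
    params.fetch("database")
  ]
end

# A real task entry point would be roughly:
#   params = JSON.parse($stdin.read)
#   system(*backup_command(params)) or exit 1
```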
A
So, I have the GitHub repository open. When you say you want feedback, is there something where you want to encourage someone to contribute or create code?
B
So, if you look at the issues on the repository, there are a couple of them tagged as enhancements. I've fleshed them out in pretty good detail, ready to be implemented as soon as the team gets ready for it. But certainly, if anybody wants to get involved: again, drop into the discussion forums on the repository and have a chat about what you want to do and how you want to use it.
B
I think at this point it's a little bit early to start slinging code, but on the other hand, hey, I'm not gonna keep you from it.
B
I just can't promise that any of this goes anywhere, as our understanding progresses of how things are useful in the Puppet ecosystem. But then, okay: somebody from the community who says, hey, they're using this in production and it's awesome, even when it's a pre-release prototype, gives me much more legs to stand on and argue the case for this work than anything else that you could post in a discussion forum, right?
A
It is true. I'm just trying to make people curious about how to get started more easily, and, like, finding a use case. And probably we have many use cases where we deploy something and it doesn't work afterwards, and then we spend 10 hours of debugging and pair programming and finding out that it's just an off-by-one offset or whatever. But yeah.
A
The point is: finding a use case, trying to solve a problem, and trying it out is, I think, a great idea. And maybe we check back in half a year, or in some weeks, and say: hey, David, did it work, or what's the new system? Oh, just kidding. Yeah, no.
B
Yeah, that is one of the limitations. The lead for the DSC work is already talking about what comes next: as soon as he's done with the DSC work, now that he understands the infrastructure he's been working on better, he wants to do native PowerShell providers, so that Windows people like Michael can then implement their provider in PowerShell, without having to touch Ruby, and it will just natively integrate into Puppet.
A
Yeah. And this is not on the recording, but I will add it to the blog post: I just checked the code of the check_http check command thing. It looks rather straightforward.
B
Yeah, yeah, and again, this is currently just a prototype that I scratched together to start building my understanding of how it could work and how it goes together. For example, one of the things that I noticed really quickly is that reporting a change notification when everything is okay is really awful, so we're fixing that as a first step. Because, again, as I talked about with idempotency, people expect that nothing means no news is good news, and then saying: oh, hey, here something changed…
A
No, no, no. And I think one of the challenges which will also come up is: I need a history to verify whether the system is healthy. So I need the last 10 checks, or the last 10 deployments, to be able to say: hey, this works somehow. And then you need the back end to provide this data, and talk to a TSDB or a database or exported resources or whatever is needed for that, yeah.
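Wherever that history ends up being stored (a TSDB, PuppetDB, plain files), the health decision over the last N runs can stay simple. A sketch, where the window size and the pass threshold are arbitrary example numbers:

```ruby
# Rolling window of boolean check results with an m-of-n health rule.
class CheckHistory
  def initialize(window: 10, required: 8)
    @window = window
    @required = required
    @results = []
  end

  # Append a result, keeping only the most recent `window` entries.
  def record(ok)
    @results << ok
    @results.shift while @results.size > @window
    self
  end

  # Healthy only once the window is full and enough runs passed,
  # so a single fresh deployment cannot declare itself healthy.
  def healthy?
    @results.size == @window && @results.count(true) >= @required
  end
end
```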
A
Yeah, I was just thinking about that. So you have, like, the ideal world, or the better world, of saying: hey, I have that already, I don't need to invent it right now. Because, yeah, I think it's a great initiative and a great project.
A
I would need to clone myself ten times to contribute, so I'm hoping that everyone else steps up. No, let's see about it. I think keeping the pipeline idea in mind, and deploying things and making sure they work, sounds like a really nifty idea, so I really like it. Yeah, and I would say: thanks for the presentation and the discussion today. If there are no further questions, I would close with the typical bye on YouTube.
C
One thing, one thing: because it's the first anniversary of the Everyone Can Contribute coffee chat, I just want to say, for me and for all the other people: thank you, Michael, for bringing us all together in this really, really awesome way. So, that's right: thank you, Michael. And hopefully we have another year, and multiple ones, to bring us all together to learn such cool things as this Puppet thing from David today.
A
Thank you. Yes, the pleasure is all mine, and you've seen all the Twitter notifications you got today. I think we're learning a lot, and meeting each other at some point in person, hopefully, is the ultimate goal: travel the world and enjoy what we're doing. And I hear you're using a custom monitoring system; I think I need to travel to Lower Austria and fix that problem.
A
Yeah, yeah. No, for next week I thought we could be looking into kspan, which is mapping object relationships in Kubernetes to tracing. I don't know if we'll find the time to do it, but it was one of the hottest topics last week at KubeCon, and the tweet was going a little viral. It's just an idea, so we can decide.
A
After that, I was in the AWS user group meetup on Monday for Neuromac, and I met Chris from AWS, and we thought about machine learning; this is, like, the spontaneous idea for the week after next. Just check our events page. I'm a little exhausted and tired this week, so I'm thinking in German and translating live to English. Yeah, and then on June 2nd, Frederic joins from Polar Signals, talking about continuous profiling; this will be super interesting.
A
I think the week after, we dive into Snyk with Matt, and I've also asked Anaïs, who recently joined Civo Cloud (I hope I pronounced that correctly), which is a new Kubernetes cloud hosting offering, based on K3s, I think. Yeah, as I said, we're curious to learn. So yeah, this is the plan for the coming weeks: doing a little more than Kubernetes and learning new things.
A
Everything else afterwards is still free, so pitch today and benefit tomorrow; marketing works. No, just kidding. If we find any topics, or do it spontaneously, everything is good. Yep, and yeah, thanks again, and have a great rest of your week. And now I'm saying bye on YouTube. Bye.