A: Hello, hello everyone, welcome to Cloud Native Live. We are diving into the code behind the cloud native projects. I am a CNCF ambassador. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things (I hope so), and they will answer your questions.
A: I think that's the name, please correct me. And also, join us at KubeCon + CloudNativeCon Europe during May 4th to 7th to hear the latest from the cloud native community. This is an official live stream of the CNCF and as such is subject to the CNCF code of conduct.
C: Yeah, so first of all, thanks for having us. It's really great to be here and to share our story in this live stream.
C: So it's actually "Keptn", and for us (we are from Austria, which is German-speaking) it sounds like the captain of a ship. Two years ago, when we came up with this, we really thought that's really cool, because our initial goal was to ship applications.
A: Great, Keptn. So, oh my god, why can't I remember the name... Gregory? Oh, could you please introduce yourself, tell us a little bit about you, whether you are working on this project, whether you are committers or members, and talk a little bit more about this project, please.
C: Sure. So I got involved in the project from its very beginning; we started about two years ago.
C: I was one of the core contributors, and I was really writing, let's say, the first lines of code. But then I transitioned a little bit more into the role of leading the ecosystem around Keptn. We have a lot of tool integrations; we are heavily involved in all the CNCF tooling. We have the integration with Helm, with Litmus, we are heavily using CloudEvents, so we are really picking up all these cloud native technologies. That is my background.
C: So I still know a lot of the source code of Keptn and am still maintaining the project, but now I'm really taking care of the ecosystem: the CNCF and open source ecosystem around it.
D: And then, I think, Jürgen and I started at the same time, because we were both there at the inception of the project. My focus, however, is on adoption of Keptn, really making sure that our users are successful with adopting Keptn. There's still a way to go. I mean, we have great adoption already, but we also learn something new every day: what is missing, what is complicated. As you said, sometimes things still break, and I think that's also expected with a 0.x release, but yeah.
D: My goal is to really make sure we get Keptn out there in all of its different use cases. My background is performance engineering; I've been working in performance engineering for the past 22 years now, and I've tried to get a lot of performance engineering use cases prominently featured in Keptn.
A: Great, great. We should schedule another day to talk about performance engineering. It's amazing, it's smart, it's very interesting in distributed computing, and cloud native is a multidisciplinary field; I like observability as a component of it. Great, great, so thank you. Let's go on, let's see Keptn.
D: Let's see Keptn, and I think I'll just kick it off with a little overview. So, yeah, I will try to share my screen, and I believe I was told I need to give Libby a heads up. Is this possible, or is this the right one? I chose the wrong one, let me try this again.
D: Now, one thing we've learned the hard way is that a .sh domain is pretty cool and nerdy, but not every organization allows you to browse to .sh domains. Still, you know, give it a try. If you cannot get there, you can obviously also find us on GitHub. We have four... oh, it's not four, three different organizations.
D: On the one side there is keptn itself, kind of our core project, with Keptn core, the specs, the enhancement proposals, the website, examples and so on. Then we have keptn-contrib with our, you know, core integrations and contributions.
D: These are, as you can see here, very heavily focused on monitoring integrations, because Keptn heavily relies on pulling data from the underlying observability platform, whether that is Prometheus or Dynatrace.
D: We also have some others; Argo is a core contributed service, Argo Rollouts in particular. So we have the contributions in keptn-contrib, and then we also have the sandbox organization. This is where every extension that you build (we call them Keptn services) starts to live.
D: We also have a great template. That means, if you want to get started, if you want to build an integration with Keptn, you would start here by just following the really excellent tutorial and using the template that was mainly curated by Christian, one of our core contributors. So I think this is something that you want to know. The other thing is: if you go to the website, you'll find more.
D: But what's also very good, besides obviously explaining what Keptn does from a high-level perspective, is that the tutorials link brings you to the tutorials that we want you to walk through. We're using Codelabs, and we have a couple of different tours now. You can see here that they're all sorted by version. Currently the latest Keptn version is 0.8.1, so this is why it defaults to it. With the previous versions we have a few more tutorials; we still have to level up, or let's say convert, some of these tutorials to 0.8.
D: 0.8 is rather new and we haven't had the time yet, but I think most important is that we have full tours through Keptn using Prometheus as a data source, and a full tour using Dynatrace. The reason why we feature Dynatrace heavily is that we are both working for Dynatrace, and therefore we always wanted to make sure that Dynatrace has a great integration.
D: Keptn in a Box is a pretty cool tutorial from our colleague Sergio. He built this, and you can just stand up Keptn on any Linux box. And then, Jürgen, this is something that I think you will probably show later on, around resiliency engineering, where Keptn is orchestrating performance tests and chaos tests. So Keptn is battle-testing your environment and then telling you: how is your system behaving under chaotic situations?
D: So I think this is what I would love everyone to know. The other thing to know is that we have a Slack channel as well: on the CNCF Slack you'll find a channel called #keptn.
D: We also have our own workspace, because traditionally, before we donated the project to the CNCF, we had our own Slack, and you can also get there through slack.keptn.sh.
A: And is there any meeting each month, etc., where the community can chat and participate with you?
C: Yeah, so I think it was briefly cutting off, but if I understand correctly, it's now about the community. So we've built the community page into the Keptn website, and we do have a couple of different channels.
C: We do have our own Slack, we do have a mailing list, and then, if you scroll down a little bit, we also have our kind of ask-an-expert session. That's basically a session you can book (that will be me), whenever you feel like you want to talk one-on-one about a couple of questions.
C: And we also have our Keptn user groups and our developer meetings. We initiated these because we saw that more people want to contribute to Keptn.
C: We will show later on how Keptn orchestrates different tools that you might already use in your organization, such as Argo Rollouts, or JMeter for testing, or Litmus Chaos, or other tools like Helm for deployments. And we saw the urge from the community to see more of how new services can be contributed, and this is why we are also coming up with developer meetings. They are each Thursday, 5 pm Central European Time, and everyone is very welcome to join, and also to join our user groups.
C: In the user groups we are more focused on sharing adoption stories from Keptn users with the broader Keptn community. Sometimes they are more focused on performance engineering, like Andi explained earlier; sometimes they're more focused on the quality-gating aspect: how to prevent bad builds from reaching production.
C: There is also our Keptn community rockstar program, and actually we are about to announce our next community rockstar tomorrow in one of our meetings, because it's the end of the quarter. We have already awarded three of our, let's say, Keptn friends and community rockstars: one is actually an organization and the other ones are individuals who heavily contributed back to Keptn, and maybe you've already seen them somewhere speaking about Keptn. So we really have a very...
D: Okay, perfect. So what we have seen is that a lot of us software engineers are trying to figure out how we can automate delivery, and how we can automate operations as well, because we are getting more and more responsible for pushing out features, and also, if you're responsible for operations...
D: ...we also want to automate, in case something is wrong in production, how we react to it. I think a lot of us are using tools that you'll be familiar with, whether that is automation tools like Jenkins, where you can do magic things (I always call it a Swiss army knife). But what we thought: a thousand people have already built Jenkins pipelines that basically do something that everyone wants to do, which is taking an artifact, deploying it into an environment, running tests, figuring out...
D: ...if the tests are good, then maybe reaching out to your monitoring tools, your security tool, your log analytics tools, and figuring out: is there anything else going on that prevents us from promoting? And then maybe, if everything is good, pushing it to the next stage. So there's a lot of boilerplate code we've been building, I think all of us, in automation pipelines, and we thought we want to provide an opinionated approach that makes it much easier to define delivery processes and also processes around operations.
D: So if you want to get started with Keptn, because you want to use Keptn to automate performance testing, delivery, quality gates, the easiest way to get started is really either following one of our tutorials or going to the Keptn docs. There is a great, easy way to install Keptn. The only thing you need is a Kubernetes cluster, and then you just download the CLI and do a "keptn install". And there are two flavors of installation.
D: Keptn itself, the core, is a control plane that controls processes, and then you have execution components that can, for instance, trigger a deployment, trigger a test, trigger an evaluation, promote one thing to the other stage, and so on. So you can actually decide, when you install Keptn on a Kubernetes cluster, whether you just want to install the so-called control plane, which contains the features for quality gates and automated operations, or whether you also want to install the execution plane, which we then call the continuous delivery use case.
D: This includes everything, and you can install everything all-in-one on the cluster. This is actually what I have done on my machine. Let me just walk over here. So if I clear here: I'm actually running on an EC2 instance, I have k3s installed, so a lightweight Kubernetes cluster, and just to show you that I'm not lying...
D: So when you install Keptn, it comes with a couple of components. As I mentioned, Keptn at its core is a control plane that is later on, for me, managing and orchestrating my processes. It's an event-driven system. So, for instance, some of the things we have here: we're using NATS for the eventing.
D: We are using our so-called shipyard controller, which manages what we call automation sequences. We have a data store where we store all of our events. We have a configuration service where Keptn internally keeps all configuration files; we have a git-first, configuration-file-first approach, so all of our configuration files are version controlled in a git that we are hosting here. We obviously have an API endpoint.
D: We have a bridge, which is our UI. We also manage secrets, so we have a secret service. What else is interesting? The lighthouse service is a component that takes care of SLI and SLO validations (service level indicators and service level objectives) for our quality gates. And then there are a lot of other services that you can install that can then participate in Keptn's event stream. Because we will see them later: I have JMeter installed.
D: I have Argo installed for Argo Rollouts, I'm using Dynatrace as a monitoring tool, I also have the ability to deploy Helm charts, and I also have a so-called generic executor, where I can have Keptn execute any type of webhook, any type of Python script, or even a shell script. So this is basically what you install.
D: No, not the configuration database; that is git. It stores the individual events.
A: Good, one question: did you choose MongoDB for some specific requirement? And one more question: is it possible to change from, for example, MongoDB to another CNCF project, namely etcd?
D: So we didn't pick it, I think, for any particular reason. When we started with the project we were looking for a document database that met our needs, and that's why we picked it. You know, you can create your own kind of service; basically, there's a data store service that is really storing the data. Because in Keptn everything is event based, it is very core to Keptn itself, but in general, yes, we should be able to also replace the store.
D: No, it's good. I see the question: do all the components get installed by default when installing Keptn? So, when I go back to my docs page, let's see: when you run "keptn install", depending on whether you say control plane only, it will only install the core components, which means everything that is needed to manage the processes.
D: That is the data store, the API endpoints, the UI, and also the lighthouse service, which is responsible for quality gates, and what we call the remediation service, because those are part of the control plane. Then, on top of that, you can install whatever tools you want Keptn to integrate with. In our terminology we call this the Keptn uniform: if you think about a captain who steers a ship, a captain wears a uniform. In this case the uniform defines what type of additional tools are installed.
D: All right, now let's actually get into the product, because I think I've been talking a lot. So Keptn is organized in projects. What you see here is my current Keptn installation; this is what we call the Keptn Bridge. I know I can make it bigger, because I know that the screen size is a little limited.
D: We are using Keptn for different use cases, and I want to show you the use case that we initially had in mind when we designed Keptn, because we really wanted to solve the problem that people have to write very long pipeline files for multi-stage delivery: where we deploy, we test, we evaluate, and then we promote into the next stage.
D: So here, within a project, Keptn has stages, and within the project you also have services. So we have a concept where you can have one service, five services, ten services; these basically represent your microservices, or they can be any type of component within your application that you can deploy.
D: In my case I have one service in here. So we have a project, the project has two stages, and then I have a service that I can now have Keptn run through that process. And, as you can see here, I have played around a little bit today.
D: I was deploying a particular version of my simple node service, and then Keptn, on the right side, actually shows me what happened. It shows me that it was initially deployed into staging. I immediately get an overview from Keptn of the things that are important. I automatically get the evaluation results, because Keptn is very opinionated: it takes my component, my image, it deploys it, it runs some tests, and then it evaluates it against metrics.
D: We call those SLIs, and then, in the end, it calculates a score, and based on that score it decides whether this is good enough to go into the next stage, which is production. All right, so I can see how staging was, how production was. Now the big question is: this all looks very nice from the outside, but what happens behind the scenes?
D: If you remember, Keptn internally holds a git repository. Actually, when you create a new project in Keptn, the first thing you need to do is say: Keptn, here are the sequences that I want you to automate for me in each individual stage. And when you create a project, Keptn automatically also creates a git repository, an internal one. But then, what have I done?
D: I have given it an upstream, so I could upstream it to Bitbucket, to GitHub, to GitLab, whatever you want. I have my own; I'm using Gitea here, that's my kind of git web UI. And what you see here is that when I created this project... initially, the only thing you need in order to create a project is a so-called shipyard file (you may remember now that there was a shipyard controller earlier). What I have here is where I specify which stages I want to have.
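For Keptn 0.8, a multi-stage shipyard file of the kind being shown might look roughly like this sketch. The project, stage, and task names here are illustrative, not the ones from the demo, and the field names follow the 0.2 shipyard spec; check the Keptn docs for your version:

```yaml
apiVersion: "spec.keptn.sh/0.2.0"
kind: "Shipyard"
metadata:
  name: "shipyard-demo"
spec:
  stages:
    - name: "staging"
      sequences:
        - name: "delivery"
          tasks:
            - name: "deployment"      # picked up by a deployment tool, e.g. the helm-service
              properties:
                deploymentstrategy: "blue_green_service"
            - name: "test"            # picked up by a testing tool, e.g. the jmeter-service
              properties:
                teststrategy: "performance"
            - name: "evaluation"      # handled by the lighthouse-service (quality gate)
            - name: "release"
    - name: "production"
      sequences:
        - name: "delivery"
          triggeredOn:
            - event: "staging.delivery.finished"   # promote only after staging succeeds
          tasks:
            - name: "deployment"
            - name: "release"
```

Note that no tool names appear in the file: tasks are abstract, and whichever installed integration subscribes to the corresponding triggered event executes them.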
D: I guess I'm just spoiled, because I see the full screen in front of me and not the smaller version. So the point is: you specify a stage, and then: what do you want to automate in that stage? I say delivery, and then you specify tasks, and every task can have additional metadata, because otherwise Keptn has a very opinionated approach about what should happen in the deployment. But here I say: hey Keptn, later on, when you trigger a deployment, pass this along to whatever tool is consuming that event.
D: Okay, so the nice thing here is that we have a complete separation between how we define the process and what the tools are doing. You see here, there's no tool definition; there's no hardcoded weaving between what should happen in a stage and which tool should pick it up. So what we've implemented in Keptn is a separation between the process definition and the tools, which listen to it through events, and then Keptn has an opinion.
D: What should happen between the tasks? So, for instance, it knows that when an evaluation happens, the evaluation will evaluate the results and, based on that, say this is good or not good, and then it continues with the next step. So there are certain things we've built into the automation here. And if I go back to my project now, my demo rollout, what I showed you earlier: I've shown you, on my version number one, that I only see the thing that is important.
D: So this view really just shows me: hey, was everything good, yes or no? What I can also see, behind the scenes, is all the individual details of what happened. For instance, if I click on "view sequence", I now really see what happened in staging and what happened in production. This should look familiar, because these are exactly the individual tasks that I specified should be executed. Remember, the first one should be a deployment.
D: We call them Keptn services: testing tools, deployment tools, delivery tools, notification tools. They simply need to say: I'm interested in a particular type of event, and I know that if an event comes in, I can expect a certain data structure that tells me some additional information about what I should do. In this case, Keptn was sending out the deployment-triggered event, and in my case I have my helm service.
D: This is important because that way Keptn knows whether one tool, many tools, or no tool is handling a task, and then it also waits until that tool is finished. When the tool is finished, it basically sends back: hey, I'm the helm service, I have just worked on that task, I am now finished, and here are all of my results, and maybe some additional information, like: this is the URL that I deployed the new application to.
D: Because none of that information is here; this is just the process definition. But let me show you: when you create a new project in Keptn, not only do you give it the process definition, but for every stage Keptn automatically creates a branch. I have a prod branch and I have a staging branch.
D: The idea with the staging branch is that every single tool that participates, or the end user of Keptn, can simply upload the necessary additional configuration files to that git repo. So, for instance, here we have our Helm chart. Our terminology, or our definition, is: if you are using a Keptn project and you have multiple services that you want to deploy, every service has a unique name.
D: Therefore, every service has a subfolder with its name, and underneath that folder, for every specific tool that you onboard, you add the tool-specific configuration files; Helm, for example, has its Helm charts. And so, when I go back to my workflow, Helm says: hey, that's interesting, there's a deployment request.
D: Now, this SLO is completely tool agnostic; it doesn't say where the metric comes from. But I have enabled the Dynatrace integration, and therefore the dynatrace subfolder includes a monitoring-tool-specific SLI YAML, where I have specified how Dynatrace, when it is triggered, knows how to query this data. These are just query languages; you have the same for Prometheus with PromQL, and for other data sources as well. But this is the way this works. And now one more thing you may notice: there's no JMeter folder, but I have JMeter tests.
A: Amazing, amazing. So, in case I lost some part: when you create this project, you have this kind of template there that you can fill in, yeah?
D: Exactly: when you create a new project, the only thing you need is a so-called shipyard file. It basically specifies how many stages you have and what type of automation sequences you want Keptn to orchestrate for you later, and you're completely free, right? What I showed you was just one little piece of it, but I also have some other things here. In staging I have my delivery, which I showed you; then I also have a rollback sequence, and you can also see here...
D: A very simple shipyard file could look like this: a shipyard file that specifies a stage called "qualitygate", which is, I think, our number one use case right now. Quality gate basically means I want to run an evaluation. Maybe you have your Jenkins, your GitHub, your GitLab, and you have already done some deployments and some testing, and the only thing you want to automate is the evaluation of certain metrics over a certain time frame.
D: So in this case this sequence here is doing an evaluation, and I have two tasks. I could even omit the first one, but the first one is actually very interesting. It's called monaco (monitoring as code). This task is picked up by my monitoring integration, to make sure my monitoring tool is correctly configured. So if you, for instance, assume you have tool X, you want to make sure that tool X is properly configured, that all the metrics are there that you really need later for the evaluation, yeah.
A: Okay, and one question: we can see that it's YAML you use to define the project. Does Keptn have a plan to offer a version of this kind of management through its, oh my god, portal, its dashboard?
D: Yeah, yeah. We have this here, right: this is all here, the visuals. So, for instance, here is my staging and production. I also have other projects, for instance with a three-stage pipeline: I have dev, staging and production. If I go back to the previous project, the demo rollout, and go to sequences here, you can see exactly what the sequences are. I mean, if your question is: are we planning a visual editor?
D: I'm pretty sure this is something we will do in the future, but I think we currently have other priorities, I would say, than a visual editor for the shipyard file. I'm pretty sure it's somewhere on the roadmap already, but right now there are so many other things that we believe bring more immediate value. Because, to be honest with you, you define that shipyard file once, when you create the project; you can obviously edit it.
D: You can edit it and you can add more things to it as you go, but from a visual perspective, I think we want to invest more in better visualization here and more interaction with the UI first, before an edit capability. There's one thing that we have, that you can do... you remember, what's it called... shipyard? Uniform! The uniform, thank you. All right.
D: So, for instance, we already have in the current version a mock-up of one thing that will come. Remember, I talked about the uniform. Uniform means: which services do I currently have installed, either on that same Kubernetes cluster or on remote Kubernetes clusters, because the latest version of Keptn now allows remote execution planes. So then you want an overview of: hey, what is currently installed, which services are out there, where do they run, what are they listening to, which events are they subscribed to?
D: Keptn says: I need somebody that can run a performance test. Maybe today I'm using JMeter, which works for me, but maybe tomorrow I want to switch to Locust, a tool whose team, Jürgen, you have just recently worked with. And the nice thing is: I can switch tools without having to think about where I have hard-coded calls to that tool in my pipeline, because we have taken care of this.
D: There's no need anymore in your pipelines to say: trigger the tool, parse the results, and then do an if-then statement if it fails. We have taken care of this, because we've taken care of the tool integration, but also of handling the results. And the reason why it is so much easier for us to do is that we have standardized on events. We have standardized on an open standard, which is called CloudEvents, which allows every tool to easily participate and then also easily send back its results.
C: Andi, can I just forward a question here from the community? It goes: are there any performance test tools for Kubernetes clusters? The question is from Deepak. I think it fits this conversation.
D: Yeah, so for Kubernetes clusters... I mean, the thing is: if I go back quickly to my overview, in my case it's JMeter here. And, Jürgen, again, right, you've worked with the Locust team and I think you're also working with the Artillery team. If you write a Keptn service, meaning a service that consumes the Keptn event, that integration can decide where the test is actually executed. In our case, for JMeter...
D: ...this is a container that receives the event and will then also execute the test within that container. So if the question is targeted towards testing tools that run their tests on Kubernetes, meaning generating the load on Kubernetes, then the answer is clearly yes, because we already have the JMeter service, and I think, Jürgen, the Locust service is similar: you're executing Locust in that container, yeah.
D: We have another integration with the Neotys load testing tools, and they provide two options: you can either run the load in the Kubernetes cluster or you can use their cloud load testing service, so they have both options.
C: I would really like to add one part here, because you already mentioned it. For us, when we started the Locust integration, one benefit of Keptn, so to say, was that we did not have to change the shipyard file. So we kept the shipyard file; it was saying: I want to do a deployment, I want to execute a test...
C: ...I want to do an evaluation, and then I want to promote it, or release it, to the next stage based on the evaluation. But we did not have to change anything in this file. We just removed the JMeter integration (actually, we scaled it down to zero) and we added the Locust integration, and the Locust integration then did the job of the performance testing tool. So exchanging tools is very easy.
C: You can just think of it this way: until we get this uniform screen that Andi showed earlier, you can basically just scale your deployment down to zero, then add another tool, and you can always go back by just bringing the old one up to one or more replicas.
D: And I think this is also very interesting here, right: this is actually the definition of your Locust service that you built, basically a Deployment of the Keptn service. On the one side it deploys the container itself, which implements the actual action, and then we also have a so-called distributor that you run as a second container in the pod, and this distributor is the one that actually subscribes to the Keptn events. So we have kind of made it easy, so that not every service has to write its own subscriber.
D: So you basically just have a second container in your pod, and there you can define which type of events you want to subscribe to. You can either be specific (also comma-separated, it can be multiple), or you can use wildcards, so you can say: I want to handle every triggered event. An example would be our Slack integration, which listens to all triggered and finished events and then just forwards these events to your Slack channel.
D: We've been calling our lighthouse service "lighthouse" for a long time, and I'm not sure how long Google has used that name publicly, but we've been using it for a while. The lighthouse service is basically the service that reaches out, or sends an event, to say: hey, monitoring tools that are listening, give me your values. And then the monitoring tools, Prometheus, Dynatrace...
D: ...they then reach out to the config directory, figure out which values they need, and return them, and the lighthouse service then really does the magic here. It compares the individual values against your SLOs, where you can specify pass criteria and warning criteria. That can be with a fixed threshold, but you can also use relative values, where you can do regression detection. We can also calculate baselines across multiple builds, and then, in the end, we score every line here. So this is the same visual as before.
D: Here's a heat map visualization, and on the bottom is a table for every result. Here you can see that the lighthouse service looked at a metric, compared it, and then calculated a score, normalizing it between 0 and 100, so that in the end we get a result between 0 and 100. You can also specify the objective that you have, and this is also all specified, if you remember what I showed earlier.
D
If I go back to my project here: for instance, in staging, for my simple node service, I have an SLO YAML file where I specify these are the metrics. I don't care how the metrics are retrieved, that is for somebody else to care about, but I want to specify what's important for me. And then, in the end, you can also specify the total score.
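An SLO file along these lines could look like the following sketch. The metric names, thresholds, and spec version are assumptions for illustration, not the exact file shown in the demo.

```yaml
---
spec_version: "1.0"
comparison:
  compare_with: "several_results"   # baseline calculated across multiple builds
  number_of_comparison_results: 3
  aggregate_function: "avg"
objectives:
  - sli: "response_time_p95"
    pass:                           # pass criteria
      - criteria:
          - "<=+10%"                # relative value: regression detection
          - "<600"                  # fixed threshold (milliseconds)
    warning:                        # warning criteria
      - criteria:
          - "<=800"
    weight: 1
total_score:                        # objective for the overall 0-100 result
  pass: "90%"
  warning: "75%"
```

Each objective is scored individually and the normalized scores roll up into the total score, which is then checked against the pass and warning objectives.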
C
I think if we can take a look again at the SLO file, what's really interesting is that we also see a little bit here how Keptn is orchestrating the different tools. We do not see directly where the data is coming from; Andy mentioned this already earlier, so this is abstracted here. That means you can easily reuse this file for different data providers. You can even reuse it for different services.
C
It does not have the service name in it; the service name is a placeholder in the API calls or in the PromQL behind the scenes. And what's also really cool, since Keptn is orchestrating the tools, and we were talking about performance testing earlier: it actually knows the time frame for which it needs to do the evaluation, because with all the triggered and finished events, Keptn knows how long the tests were running and for which test the evaluation was executed.
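As a hedged illustration of that placeholder abstraction, an SLI file for a Prometheus-style data provider might look like the sketch below. The indicator names and queries are assumptions; the point is that placeholders such as `$SERVICE`, `$PROJECT`, `$STAGE`, and `$DURATION_SECONDS` stand in for the concrete service and time frame.

```yaml
---
spec_version: "1.0"
indicators:
  # $SERVICE, $PROJECT, $STAGE, and $DURATION_SECONDS are substituted by the
  # data-provider service at evaluation time, so the same file can be reused
  # across services, stages, and test time frames.
  response_time_p95: histogram_quantile(0.95, sum(rate(http_response_time_milliseconds_bucket{job="$SERVICE-$PROJECT-$STAGE"}[$DURATION_SECONDS])) by (le))
  error_rate: sum(rate(http_requests_total{job="$SERVICE-$PROJECT-$STAGE",status!~"2.."}[$DURATION_SECONDS])) / sum(rate(http_requests_total{job="$SERVICE-$PROJECT-$STAGE"}[$DURATION_SECONDS]))
```

Because the orchestrator supplies the time frame from the triggered and finished events, the queries only cover the window in which the tests actually ran.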
C
All the information is stored in the Git repository, and all the events are stored in Keptn's data store. The combination of both really gives you the confidence that the evaluation is for the correct time frame and for the correct service; you don't get any noise in the evaluation. I think this was also one part that we really worked hard on, so that the evaluation phase does a great job here and you don't have to parse results yourself.
A
Yeah, really amazing. We have a few minutes left to finish our great live stream. Let's think about people who are starting out in the community. I saw that Keptn is a sandbox project.
C
A
It's a big opportunity for everyone who is starting out in the community and wants to contribute, and it's amazing to contribute to a sandbox project, because it's the beginning of the journey. So what can you say to me, and to the people who want to start now? How can we contribute with our expertise?
A
What expertise do I need to have? How would you advise me to participate in this amazing project?
C
A
C
Good question. So, first of all, we really appreciate it if someone wants to join our project and contribute to it. For this, on our Git repository we have a lot of issues tagged as good first issues, because we really want to have a low entry barrier and be welcoming to new contributors. We have a list of good first issues; here, for example, is one.
C
I think it's a rather new one. Yeah, there is one. We are maintaining them and creating them regularly, and they should not require a lot of in-depth knowledge of the project, so all the different core services should not really matter for you. But to really get hands-on with the project, we have all the documentation on how to set it up, and then you can just get started with coding.
C
Most of the coding for the core parts is done in Go, and the UI is Angular with TypeScript, so I think that covers most of it. And what we also see from the community: it's not only about contributing code. We have the documentation and we have our tutorials, and these are actually two areas where we see a lot of contributions.
C
Just recently we had Josh (I don't know him in person); he found the Keptn project and contributed a spell checker. So now our CLI is spell-checked.
C
We have our documentation spell-checked too. So it's not only about contributing code: if you are very good at writing documentation, or if you want to add tutorials because you just added your tool to Keptn and want to make it more visible to the broader community, you can start by writing a tutorial. For example, the Litmus tutorial that we can see here was created together with our friends from LitmusChaos.
C
It shows everything that will be done and achieved by the end of the tutorial. We even have an estimate: we say you will finish the tutorial in less than 45 minutes, and you will get a full setup of Keptn plus chaos engineering experiments plus JMeter load tests. And I can already spoil a little bit here: this is also a talk at KubeCon, where we're basically building upon what we've already done and what we've already seen being adopted.
B
C
A
So, maintainers, what can we expect from this amazing project over the next year? Because I suppose a year is so long, let's say the next few months. And where can we reach you again? Are you presenting at any events? Are you planning to present at KubeCon, or at any Kubernetes Community Days? What are you planning to do in the next months, so we can stay in touch with this project?
D
What I'm doing here is also showing a way for, I would say, users of Jenkins, how they can modernize their Jenkins pipelines with Keptn so that they don't have to throw Jenkins away, because Jenkins, you know, does certain things really well. But instead of trying to build all the logic that we have here in your Jenkins pipeline and maintaining it, let Keptn do the logic, and let Keptn then call your Jenkins pipelines, for instance, for the individual stages like a test or a deployment.
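As a sketch of that split, the orchestration logic could live in a Keptn shipyard file, while each task is handled by whatever tool subscribes to the corresponding triggered event (for example, a Jenkins job). The stage, sequence, and spec-version details here are assumptions for illustration.

```yaml
---
apiVersion: "spec.keptn.sh/0.2.0"
kind: "Shipyard"
metadata:
  name: "shipyard-example"
spec:
  stages:
    - name: "staging"
      sequences:
        - name: "delivery"
          tasks:
            - name: "deployment"   # a Jenkins pipeline can handle deployment.triggered
            - name: "test"         # another pipeline can handle test.triggered
            - name: "evaluation"   # handled by the Lighthouse service
```

The pipeline logic stays declarative in Keptn, and Jenkins is reduced to executing the individual tasks it is good at.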
D
A
B
A
I want to see more of this. It was a really amazing moment here today. I learned a lot about Keptn, and it is a really amazing, very interesting tool that helps a lot.
A
So we are finishing now, but I will give you the last moments to close out our amazing live stream.
C
Thanks, Andy, for doing all the hard work here in presenting. I think we covered a lot, but there is even more; Andy mentioned it at the end. We are also working on auto-remediation, and we are orchestrating auto-remediation sequences: rolling back, toggling feature flags, executing any kind of remediation action in response to a production alert from Prometheus, for example. So this is one part.
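A remediation sequence of that kind is typically declared in a file along the lines of the sketch below. The problem type, action names, and values are hypothetical; the actions themselves would be executed by matching action-provider services subscribed to the remediation events.

```yaml
---
apiVersion: spec.keptn.sh/0.1.4
kind: Remediation
metadata:
  name: example-remediation
spec:
  remediations:
    - problemType: "Response time degradation"   # matched against the incoming alert
      actionsOnOpen:
        - action: "toggle-feature"               # first remediation attempt
          name: "Toggle promotion feature flag"
          description: "Disable the promotion feature to reduce load"
          value:
            EnablePromotion: "off"
        - action: "scaling"                      # escalation if the first action fails
          name: "Scale up"
          description: "Add one replica"
          value: "1"
```

After each action, an evaluation can check whether the problem is resolved before the next action in the sequence is triggered.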
C
Maybe we can come back and also talk about this part a little bit. If you want to give Keptn a try, please go to keptn.sh; I think we can also see the URL here. Andy mentioned our tutorials as well, and if you have any questions, please feel free to reach out. And thanks again for having us.
A
I look forward to meeting you again, guys, because it was really amazing. I will invite you back, please accept, because we should learn more about this project and the other things that we can do with it. So this is the end. Thank you, everyone, for joining us for the last episode of this week of Cloud Native Live. It was great to have you here again, and Andy was amazing. Thank you so much for talking about Keptn.
A
I learned that it's pronounced "captain", and we also really loved the big interaction and the questions from everyone in the audience who came to us today. We bring you Cloud Native Live every Wednesday at 3 p.m. Eastern Time, and next week we have another amazing presentation, an amazing meeting with great guests who will show us the best of the cloud native world. Thank you so much, everyone. See you, take care, stay healthy and safe. Bye.