From YouTube: Keptn Community Meeting - October 14th, 2019
Description
Discussion of Keptn Quality Gates
https://docs.google.com/document/d/1Vebjqs2JRtcH_GHBXTqddyowKGTUxeMCxCUIUvFd23U
B
Alright, thanks Andy. So hello, everyone from outside, to the next Keptn community meeting. Today we want to have one of the first actual working meetings, where we want to discuss some future use cases that we want to cover with Keptn — so, one of the three major use cases of Keptn, which are continuous delivery, continuous operations, and also automated quality gate validation.
B
So maybe, just as a little frame for our thought process around this: this will actually be part of version 0.6, which we are working on right now. We might not implement all of the details that are in there right now, but at least this is the direction that we're headed, and we want to know right away if we need to consider things that we haven't thought of yet. That is actually why we want to have this working session right now.
B
So we have three prerequisites that need to be met. We of course need a Keptn installation, and for each service that you want to evaluate we need the service level objectives for that service. And of course we need tools that provide service level indicators — that could be metrics like response time; we will get there in just a second. So you need a service level indicator provider, and in our initial version we will support both Prometheus and Dynatrace for the objectives evaluation using SLIs.
B
But if we want to actually go further and support more complex scenarios, we need to adapt a few things. While in theory the definition that Hendrik has already done — and, for that matter, everyone else that provided a Prometheus source — will be basically the same: you need to come up with an implementation that pulls predefined metrics out of a third-party system.
B
The next section is a brief introduction of service level objectives and indicators, and if you're familiar with Google's SRE handbook, then you already know this. It is — and it says so here — inspired by the book. There is the definition of a service level objective and a service level indicator, saying that an SLI is a quantitative measure of some aspect of the level of service that is provided.
B
So there is a metric that can be measured, and also queried and compared to some other values of that metric, and the objective is the target value or range for that specific metric that you need to meet in order to meet the objective. For the first version, we set out to support five service level indicators. These are also — on the one hand, we've talked to a lot of people working at Dynatrace that worked with a lot of customers.
B
They actually also told us — and Andi is one of those people — that these five metrics are the most useful when evaluating the quality of a service, and the SRE handbook of Google tells the same story. So the metrics are: request latency, which is described as the time that it takes for a service to complete a task, and is also often referred to as response time.
B
Then the next ones — these are basically service level indicators that you can define objectives for later on, as we will see. The next service level indicator is throughput, so the number of requests per second that have been processed, and the fifth one is the error rate: how many of all the requests that have been processed produced an error — so it's a fraction. Now let's take a look at how, based on those service level indicators, you can define service level objectives.
B
So an SLO consists of a service level indicator; a service filter that is used to uniquely identify your service in the selected SLI provider — so, for example, Prometheus; and an evaluation success criteria that depends on the comparison strategy that you've defined. Now we will go through each section of this configuration and explain in detail what those things mean. This is an example.
B
But this can also be overridden in the configuration if, for whatever reason, the metrics or the SLI values need to be queried with a different value than is inferred from the project name or the service name, for example. In addition, you can also have an ID field, which could then be used, for example, to identify the Prometheus scrape job or, for a Dynatrace example, the service ID. So this part is responsible for identifying a service in the SLI provider, to only query the metrics — the SLIs — for the affected service.
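The SLO structure just described — an SLI, a service filter, and evaluation criteria tied to a comparison strategy — could be sketched roughly as below. All key names and values here are illustrative assumptions based on this discussion, not the final Keptn 0.6 schema:

```yaml
# Hypothetical SLO file sketch -- field names are assumptions.
spec_version: '1.0'
filter:                        # identifies the service in the SLI provider
  project: sockshop
  stage: staging
  service: carts
  id: SERVICE-ABC123           # optional override, e.g. a Dynatrace
                               # service ID or a Prometheus scrape job
comparison:
  compare_with: previous_values
  number_of_comparison_results: 3
objectives:
  - sli: request_latency_p90
    pass: "+5%"                # relative to the comparison baseline
    needs_approval: "+10%"
```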
B
So the way this works is that we actually wanted to have the service filters as SLI-provider-agnostic as possible. We give you the values of project, stage and service, maybe an additional ID, and maybe additional parameters in the future if we find the need for that. Then we have an evaluation service that has kind of a plugin or module mechanism for each SLI provider, which is then responsible for translating the filter properties into a proper query for that specific tool. And if you now say, for Dynatrace —
C
Okay, so is there some other file then — let me just use that Dynatrace provider example — is there some other file that gets defined to say this stage and service maps to this tag structure? I guess I'm just trying to visualize where you actually do that, you know, or how, or if you —
A
It's an opinionated approach, where the Dynatrace source implementation assumes that certain tags are a given, and therefore it queries the Dynatrace API assuming that there's a project tag on it, and a stage tag of course. I think we assume certain tags to be present, and we just take these project, stage and service names and then kind of convert them into a specific query for those specific tags.
C
Yeah, you can continue, that's fine. And just one other commentary — I know we weren't going through the different indicators yet — this "request latency" I saw: that is the word that the Google handbook uses, versus "response time". It just seems like response time is a more common description than latency; latency usually implies a certain sort of flavor of response time. Are you recommending we really start using the word latency and pushing that versus response time?
C
I can look into it a little bit, but I mean, I know our API uses response time, most load testing tools use response time, and latency I just associate with network latency — like the response time is slow because of network latency, for example. But I don't usually think of latency as being the end-to-end response time. That's just what I thought when I read that word — it's just more commentary than a requested change.
B
So I don't have a personal preference for one of those terms. If we come to the conclusion that one is more common or more often used, then I don't have any problem switching to that terminology. We should just, at least until the next community meeting, finalize which terminology we want to go with. So if you can put in some time to do the research on that — sure, I have no problem.
B
What we hear in the industry is that this is not a common use case. The most common use case we heard out there is that you usually compare either to the previous value or to the average of a certain number of previous values, just to identify regressions, for example. Otherwise you also need to adapt the threshold multiple times to actually get it right, and this is the reason why we have the first comparison strategy.
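The compare-to-the-average-of-previous-runs strategy, together with the minimum-history rule discussed just below, can be illustrated with a small sketch. The function name and the exact rules are assumptions for illustration, not the evaluation service's actual code:

```python
from statistics import mean

def baseline(previous_results, number_of_previous=3):
    """Return the comparison baseline for an SLI, or None.

    previous_results: SLI values (e.g. p90 response times in ms)
    from earlier runs that passed, newest last. If there are fewer
    than number_of_previous results, no conclusion can be drawn yet.
    """
    if len(previous_results) < number_of_previous:
        return None  # not enough history to build a baseline
    # average over the most recent N runs
    return mean(previous_results[-number_of_previous:])
```

A new run's SLI value would then be compared against this baseline instead of a hand-tuned absolute threshold.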
A
You also don't want to — so you kind of exclude failed test results. And the second thing that I've seen — and it's also the way we implemented it back in the days with AppMon — is that you have to have at least a minimum number of previous results, so you're kind of building up a quote-unquote baseline. Unless you really have, let's say, at least three or five runs, you basically cannot evaluate or come to a conclusion. So that's also why.
C
And I thought — maybe this was baked into your number one — but you want to compare it to — I guess it's maybe the inverse of what Andy was saying. I was thinking like: I want to compare it to the last good test run. Or I guess what you were saying, Andy, was: I'll flag all of these runs that are bad and don't count those, or something like that.
C
Exactly. Did you have a number of test runs in mind to kind of have enough history? In other words, is there a default number of test runs you include — because you were trying to get like an average sampling or something like that? Would five or seven be sufficient?
B
So we will get to this with every section — this is really in there. This is the plan for the MVP, so this is not the final version. This is the version where we found, in discussions with customers and other parties interested in Keptn, that they would get the most value out of it if we build exactly that, and we can actually fulfill their use cases.
B
So while the features may be a little bit narrow in functionality, once we have the end-to-end use case it's an easy task to actually add additional comparison strategies, add additional service level indicators, and so on. So this is not the final version that we will have; this is the first version that we want to build. Probably.
B
Then let's talk about the objectives. Objectives usually consist of an SLI, of course, and we have two values that you can specify: a pass value and a needs-approval value. They can, of course, both be positive or negative — it depends on the SLI that you actually chose. If you take response time, lower is better.
B
If you take throughput, higher is better — so it really depends on the SLI that you choose, and those two values can be interpreted like the following. There is actually an example that should really highlight how this works, and this is also something that was asked earlier: what I mean with "beyond", whether it's less than or greater than those values, really depends on whether positive or negative is better for your specific SLI. Let's just make a walkthrough with this example.
B
So it's "compare with previous": we should only compare to the previous results. We have an SLI of request latency — or response time — p90. The pass criteria is 5%, so plus 5% in that case, because there is no minus in front, and needs-approval is 10%. And just for the sake of simplicity, let's assume that the previous evaluation of request latency p90 was 400 milliseconds.
B
So now here we are in the next run. We run the tests, we pull the SLIs from the SLI provider, and now, if we get a value that is lower than 420 milliseconds — that is the 400 milliseconds from before plus the 5% that is defined as the pass criteria — then the result would be pass. And the threshold for manual approval is 10%, so 440 milliseconds; then —
B
— all of the SLI values between 420 and 440 would result in a needs-approval result, which can then in turn be interpreted as: I need to send a message to Slack to get a manual approval, so that someone needs to click okay, this build still makes it to the next stage, or it gets promoted, or whatever the next action is — or not. And all of the values that are higher than 10%, so greater than or equal to 440 milliseconds, fail.
C
So you want to move away from defining lower and upper bounds, and just assume that that number is bidirectional — meaning plus 5% is really plus or minus 5% when you say 5% — as opposed to before, where we used to say lower threshold and upper threshold, which kind of gives you the flexibility to make them different. But this sounds like you have one number and it defines, you know —
E
I mean, we talked about this also — why not just define this with a lower-than-or-equals or greater-than-or-equals sign and a boolean comparison, or something like that? So it would be "pass: < 5%", meaning everything that's lower than 5%, and you can clearly read what's written there. Because right now — and that was one of my critiques the last time I reviewed this document — you only see "pass: 5%" and "needs approval: 10%". This is completely out of context.
C
Right, and I also think, like, needs-approval — just a thought — I like the idea of having something that indicates that, but I guess I kind of liked how we had it before, where it was like pass, or warning — there's a warning — and then maybe the warning is just a different flag. Like, you have pass, warning and fail definitions, and then there's like a fourth option, you know: warning needs approval, yes or no, or something like that.
F
Yep, this is Jürgen, hi — following the discussion here. I also think it would be great to have like a lower and an upper bound, because if, for example, the response time is suspiciously better than before, then Keptn could warn, because maybe there's some functionality missing. Like, if the response time was 100 milliseconds before and now it's only 20, it might be that something in the refactoring went wrong. So Keptn could detect this, and with the current proposal I think that would not be possible.
B
Let's, for the sake of simplicity and for the rest of the document, just go with this version; otherwise it's going to get too complicated. Now we're through with objectives, indicators and how the configuration of objectives works, and now I would like to walk you through — we have twenty-four minutes left — what the user would need to do to actually make use of Keptn quality gates. So, as we said before, a prerequisite is a running Keptn installation.
B
So you would need to create a project, of course, with a specific name, and, as usual, provide a shipyard file. Since you might use your existing tools for deployment and testing, it might not be necessary to define deployment or testing strategies, so a simple shipyard file that just has one stage with a name might be sufficient for this use case. And after keptn create project — and then again, this is how it's going to work pretty soon — you would need to apply a uniform file.
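A minimal shipyard for this quality-gates-only use case — one named stage, no deployment or test strategy, since those are driven by your existing tooling — might look like the following sketch (the keys are assumptions, since the exact 0.6 format was still in flux at the time):

```yaml
# Hypothetical minimal shipyard: a single stage, nothing else.
stages:
  - name: quality-gates
```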
B
Then again, if you would want to change the services that are configured for specific events, you could adapt the YAML file and use different services instead, by just applying that YAML file again. And if I say technically wrong things, please correct me, all of the core maintainers that are in the call. In fact, once we have that, we have the flexibility of exchanging the services that are automatically installed.
B
So if you send a Keptn start-deployment event, you send all of the information you would usually send to Keptn, and we have it and can use it later on as a reference for other tasks, and we can visualize it in the bridge. The same goes for the evaluation service, which of course listens to the corresponding started event, and in the uniform file there is also the SLI provider that is configured.
B
Then you would need to create a Keptn service for each service that you want to evaluate using Keptn quality gates — so keptn create service, with the service name, for the specific project that you like. Usually, as in the onboarding use case, you would provide a Helm chart, but then again there is no deployment done by Keptn, so there is no need for a Helm chart. What we need at the service level is the SLO file: the service level objectives file.
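Put together, the setup steps could look roughly like this. The subcommands (`create project`, `create service`, `add-resource`) exist in the Keptn CLI, but the exact flags and file names here are assumptions and should be checked against your installed version:

```shell
# Illustrative setup sequence -- flag spellings are assumptions.
keptn create project sockshop --shipyard=shipyard.yaml
keptn create service carts --project=sockshop
keptn add-resource --project=sockshop --service=carts \
  --resource=slo.yaml
```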
B
We will come to exactly that in the next section — perfect. So maybe I got ahead of myself a little bit while talking about it, but these are the setup steps that are necessary, and then this is what you need to do during the execution of a deployment, tests and evaluation. So the first part is done once, before you want to run anything, and this is how evaluating a service with Keptn quality gates works — and there is also already the answer to your question.
B
So let's say the example could be that you have a Jenkins pipeline or a Bamboo pipeline that actually takes care of deploying your artifacts and running your tests. In order to have the entire application delivery workflow visible and also traceable in Keptn, you would need to extend the pipeline files with Keptn-specific commands that inform Keptn of the deployment and the test events — and here's why this is actually useful and helps you later on.
B
So, for example, in your Jenkins pipeline, make sure you have the Keptn CLI installed and properly configured, and then send a start-deployment event before you actually deploy the artifact. In here this could be your Jenkins pipeline task that then says kubectl apply -f something — and you have just informed Keptn that a deployment is about to happen, so we have the information and also the Keptn context — and after that the deployment is done, so the kubectl command returns successfully.
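A Jenkins stage following this pattern might be sketched as below. The event names mirror the proposal discussed in this meeting and are assumptions — they are not guaranteed to match the CLI that eventually shipped:

```shell
# Sketch: informing Keptn about an externally driven deployment.
keptn send event start-deployment --project=sockshop --service=carts
kubectl apply -f deployment.yaml   # the actual deployment, done by you
keptn send event deployment-done --project=sockshop --service=carts
```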
F
Okay, sorry for interrupting, I would have a question here. How is this different from keptn send event new-artifact? Because with send event new-artifact you don't get the Keptn context back from the CLI, and in this case you would — you would get the Keptn context back and you can use it further on in all subsequent commands. Yeah.
F
Got it. Then — since I was not part of all the discussions before, and this is more or less my first look at this document — my thought here would be that "Keptn, start deployment" sounds very much to me like Keptn should do something, because you tell it: Keptn, start the deployment. And keptn send event new-artifact is a send-event.
C
I think — I don't know — when some tool is going to integrate with Keptn, like say Jenkins, it just seems overly complicated. I mean, usually I just run my test right in Jenkins, I say "evaluate the result", and basically synchronously within that pipeline I'm going to expect an answer back. That was kind of where — just think of a simple call, like I make a REST call, I poke it.
B
But while it is true that there are more commands necessary, you also get more out of it later on. You get the entire visualization, you get the results stored in Keptn — you have all of the benefits as if you would use Keptn to actually run your deployments and your tests. And here we come to the important next steps, where you actually say — translating on the fly — Keptn, notify: I'm starting tests for a specific project, stage and service. And once the tests are through —
B
— you can send the done event again. We automatically know the start time and the end time of the tests, so we know the timeframe when the test has happened and can use that timeframe in the evaluation implicitly later on. So you don't need to specify the start time, or the duration, or a stop time later on, because we already know. So while I agree with you that having four commands less might be easier —
B
— I still think that the four commands won't make the difference for anyone that implements Keptn in their pipelines. If they can implement this one command, I think they can also implement the other four and, by that, get the full visibility in the Keptn's bridge and all of the other advantages I just mentioned.
A
Drop,
maybe
I
want
to
give
you
some
additional
thoughts
here.
I
think
this
is
a
great
opportunity
for
us
to
build
a
little
Jenkins
library
or
Jenkins
plug-in
that
basically
provides
a
single
convenient
function.
If
you
really
just
want
to
say,
captain
evaluate
the
last
10
minutes
and
then
what
it
does,
it
does
internally
exactly
this
here.
C
No, no — it's just a matter of how people may expect to call it. I think something like: I may want to provide my own start/stop times as part of one call, you know. Like, I know when I talked to Neotys, their start/stop times are embedded within their testing tool, so all they're going to know is: I ran.
B
This is a perfect example with NeoLoad again. If you need additional context information for a third-party service, you can embed that, for example, in the event that you send during start-tests, where you just add this additional context information in the data block of the event. Then you have that information when you start the evaluation — you already have the test run ID — and you don't need to come up with snowflake implementations of the start-evaluation command for every individual third-party service that is out there.
C
But
it
sounds
like
I
have
to
know.
I
have
to
be
aware
of
the
captain
context,
though
it
sounds,
like
you
know,
like
I'm,
just
looking
at
all
the
commands
on
your
screen,
I
mean
as
I'm
having
to
keep
track
of
that
content
context
within
my
pipeline
tool,
so
that
I
can
then
later
pass.
That
back
in
is
that
right.
B
Sure
so,
if
you
say
cup,
so
this
is
the
first
time
you
interact
with
captain
following
this
example,
so
this
command
would
most
likely
return
a
captain
context
that
you
have
a
reference
to
that
unique
flow
implication:
delivery
flow,
you
know
and
then
of
course,
the
the
most
pipeline
tools,
I
know
can
save
variables
and
you
can
use
them
later
on
pretty
easily
so
I.
Don't
think
that
this
would
be
much
of
a
problem,
but
if
you
think
otherwise,.
B
It depends on your needs. As I said before, you get something out of it: you get the visualization and you get the entire application delivery flow in the Keptn's bridge. And if you really want to pass in additional context information, then maybe the answer is yes, you need that — because otherwise you would have no possibility to pass additional context information to the evaluation service, and the specific plugin that pulls the SLIs maybe needs that context information.
B
— you get a control plane that can talk to various services out of the box, without you needing to write any additional code within your pipelines. And this is actually the discussion that we want to have with people using Keptn, and that's kind of the direction we want them to go as this matures. Okay, thank you.
B
We have six minutes left. So we sent the meta information — or metadata — about the deployment and the tests, and now the actual evaluation of the Keptn quality gates can start. Now you can just say start-evaluation, and this time I think "start" is actually true, because we want Keptn to really start the evaluation.
B
While
in
these
two
cases
it's
more
than
a
notification,
then
really
a
command
to
start
something
and
we
provide
the
the
project
and
the
service
name
and
we
can
buy
the
information
we
got
before
automatically
retrieve
the
correct
service
level
objectives
file.
We
have
the
service
level
indicator
provider
configured
in
the
uniform,
yummy-looking
query
the
SL
eyes
and
compare
them
to
the
SL
O's
and
store
the
results
within
captain,
and
this
command
also
returns.
B
The
captain
context,
which
can
then
later
on,
be
used
for
retrieving
the
evaluation
results,
and
this
is
actually
the
way
that
most
of
the
people
we
talked
to
were
comfortable
with
so
sending
an
asynchronous
command
of
a
start
evaluation
and
then
Trust
polling
for
the
results,
while
a
synchronous
call
would
be
maybe
more
comfortable.
For
my
users
point
of
view.
This
is
actually
what
we
got
as
a
feedback
that
also
works
well,
for
people
want
to
work
with
captain
in
that
use
case
and.
A
This is a great start, and I think the action items are: Rob and I will come up with a sample SLO file to cover the things we said earlier. I think that's great. And then, if you can really change the CLI commands to make it easier to understand what each one does — if you do it with "notify" — I think that's a great start already. And then the other question that I have: when can we expect the first MVP?
E
To stop you — I want to add something about running it without Kubernetes or not. I'm pretty sure there is going to be a Docker image of the evaluation service, and there might be a way that you can just run this Docker image out of the box without Kubernetes as the runtime. It's not the preferred and supported way, but I'm pretty sure it would be possible to figure out, yeah.
B
But
then
again,
this
leaves
out
the
entire
metric
that
captain
brings
to
the
table,
and
this
is
yes,
so
we
really
want
two
people
to
have
the
entire
captain
experience
with
the
bridge,
with
all
the
events
that
actually
are
the
foundation
of
the
way
that
captain
works.
So
what
we
actually
will
come
up
with
eventually
is-
and
this
is
not
the
last
minute
of
the
meeting-
is
we're
researching
ways.