From YouTube: CNCF SIG App Delivery Meeting 2019-10-23
Host: This is the meeting for October 23rd. Today what we are going to do is have two presentations, then some discussion around specs that we should be thinking about for SIG App Delivery, and then we'll do updates.
So that this meeting doesn't go all day: when we're giving the presentations, please try to keep them between 15 and 20 minutes. We will definitely chime in. So, first up today is Uma. Are you here?
Uma: Yes, this is me, right. Can you see my screen? Yes? All right.
Thank you, first of all, Brian and Lee and others for allowing us to present here. We are all happy to be here — and when I say "we," I mean a bunch of people from the Litmus project. Our hope today is to introduce Litmus, which is a chaos engineering project for Kubernetes.
We hope to get a lot of coaching, feedback, mentoring, and also encouragement — that's our purpose today. A quick intro: I am Uma Mukkara, co-founder and COO of a company called MayaData. We also have another project in cloud native data management, OpenEBS, and Litmus is the chaos engineering project that we sponsor. We're about eight full-time members working on this project right now, and the community is starting to really grow — I'll talk about that in a while.
So, as we move from development through the CI pipelines and actually to staging and production, chaos engineering becomes more and more relevant — and that touches app delivery. I could be totally wrong about this, and this is my first meeting here, so to be totally honest, I want to learn more about what we do as a group here. What I think is:
when the application is done with development and is moving towards production, this group becomes more relevant — in terms of delivery, how to use it, all that stuff.
So we think chaos engineering is an area that fits well here — that's probably why it was suggested — and we are very much willing to learn what we can. Before we go into chaos engineering, I just wanted to give the typical pitch. When I meet users, we talk about reliability. Reliability is actually very important: outages of services cost dollars — small dollars, big dollars.
For example, we have seen very established platforms — GitHub, Slack, and even AWS — still facing outages. That does not mean they have not followed reliable testing processes; in fact, they're some of the best ones. So the key really is finding weaknesses in these deployment systems — failure testing in CI pipelines is generally not good enough.
I have a quote here from Ali Basiri, who is a well-known chaos engineering expert: you can test your apps as much as you want, but in production no one can predict what the environment will be, so there is always a chance for the system to fail.
So what we do to find the weaknesses is break things on purpose, in production. The loop really is: find the weakness, fix it, and build the process around it. This brings us to the question: what is the difference between failure testing and chaos testing?
The main difference is that failure testing stops at the pipelines. Once you have got CI, for example, that's great, and CD is just a matter of pushing the applications onward from CI — most likely the SaaS-based systems will have CD. But chaos testing never ends: it extends to pre-prod and then production environments. Put another way, I also like how Mark McBride, co-founder of Turbine Labs, explains chaos engineering versus testing. What it really is, typically:
normally you wait for the system to be destabilized and an incident to occur; then you firefight, resolve, and try to bring it back. In chaos engineering, you don't wait for it: you inject the fault, you analyze, you tune, and then you go back to observation. So you do the same stuff in a little more planned way — you can choose when to inject, and you can actually cause less disruption.
You can do it in staging, in pre-prod, and then in prod, but it is a more planned way to get resiliency. So to summarize: in pipelines, resilience is achieved by functional tests as well as failure tests, but in production it is really achieved by, first of all, having a good CI, and then having random chaos — a way to schedule your chaos, analyze it, and then keep adding more resilience scenarios. That is really the important point about chaos.
Think of the code that you actually write yourself: a big application is about 40,000 lines of code, and compared to the rest of the stack that's only about 1%. The rest is not even controlled by you — it includes Kubernetes and a lot of the applications being developed across the cloud native ecosystem. So how do we really get resilience? How do we make sure that things are going to work well?
The answer is really: extend your testing to chaos engineering, and then do it in production. Now let's actually start talking about the next question: okay, now I understand the need for chaos engineering in a cloud native environment, but how do I do it? Thankfully, Kubernetes gives a lot of resources, and — not quite recently, but for some time now — everybody has started moving towards CRDs and operators.
But these are also being used for development, so is there a way to standardize an API for doing cloud native chaos engineering? That's where we chimed in and defined some interfaces for doing chaos through CRDs, so that it becomes a little bit more generic than staying within one team, and so that we can offer chaos engineering as a general feature in the Kubernetes community.
Just to explain that a little bit more: we all know how to create a pod — the developer gets it done. If I want more resources, again I go into the declarative configuration, and now I'm done spinning up my app and I've got all the resources. Now I want to do chaos testing — it's as simple as spinning up one more kind, a chaos engine, and then pulling in the chaos experiments. So we envisioned that chaos engineering is just an expansion of the natural development flow, and it should fit into the way we do things as developers and SREs on Kubernetes. That brings me to introduce LitmusChaos. Whatever I just described is the requirement, and that's basically what Litmus performs. To put it in a sentence: it's cloud native chaos engineering for Kubernetes developers and SREs. Here is how Litmus is used.
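As an illustration of the CRD-based flow just described, a ChaosEngine resource that binds an application to an experiment might look roughly like this. This is a minimal sketch: the field names follow the general shape of the Litmus v1alpha1 API, but the exact schema may differ from the version discussed here.

```yaml
# Hypothetical example: a ChaosEngine that targets a labeled
# deployment and runs a single experiment pulled from the hub.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  appinfo:
    appns: default          # namespace of the target application
    applabel: "app=nginx"   # label selector for the annotated deployment
    appkind: deployment
  experiments:
    - name: pod-delete      # experiment chart pulled from the Chaos Hub
```

The operator watches for ChaosEngine objects, runs the referenced experiment in a chaos runner pod, and records the outcome in a result custom resource that you can observe.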
Typically — I will talk about the Litmus Chaos Hub in a second — but let's assume that there are enough chaos experiments out there in the hub. You have an app running, or a pipeline running, and all you need to do is install Litmus, which is quite simple to do. That installs the chaos libraries and the chaos operator, and then you pull in the required charts from the hub. This is the community we expect to grow: everybody starts submitting their stuff into this hub.
Whatever app you are using, there must be some charts related to it, so you pull them in and install those charts — which are nothing but CRs — and then you start injecting the chaos. Injecting the chaos is really nothing but annotating your app: hey, start this engine, and this engine includes the following experiments. And there you go: the chaos container starts up, it runs the chaos, and then you can observe the results.
That brings us to the question of the scale of the hub. You've got a good operator framework, but you really need to have the experiments, and the big challenge we faced as a team is this: we can do all that is needed, and we can involve the community to develop the infrastructure, but once the infrastructure is done, the major portion that remains is the actual chaos experiments for the applications. That's why we created the Chaos Hub. Let me explain how the process works — but before that, this is how the charts work.
It is available at hub.litmuschaos.io. For now, we are still in the process of moving some of the experiments over; I think there are about 20 to 22 experiments planned, and we have about eight so far. The generic ones are for developers to start with — pod delete, container kill, network loss, network latency, disk fill — many of them are there.
So the idea is: we encourage developers or CI pipeline admins to convert their regular failure paths into the Litmus infrastructure, so that each one can be called as a chaos experiment and used in the pipelines. Additionally, you push that chart or experiment to the Chaos Hub, so that your application's users can use the experiment in staging or production. The whole idea is this:
just imagine I am converting my legacy application into a cloud native environment — which is nothing but containerizing it and running it on Kubernetes. Now I want to prepare a pipeline. Well, I'm using so many databases, and I don't know what failure testing I need to do for Kubernetes itself — you can actually pull in some chaos experiments for exactly that. So that's the whole idea. Given that — how am I doing on time? About seven to ten minutes left?
Okay. I think we have a decent community to begin with, and we are trying to follow the standard practices that every other Kubernetes community follows. We are trying to keep a release cadence of every month, there's a community meeting that happens twice a month, and contributing new charts is easy.
We are trying to create tools where developers can really come and install some programs that will create templates; then you just put your failure knowledge into that and, boom, you are ready — you test it, you submit it for your application. And who's using it? In fact, Litmus was born out of the e2e testing that we prepared for OpenEBS, which is a CNCF sandbox project.
So what we did is: look, we have done a lot of this, and we think our users can use it beyond OpenEBS — why don't we open up this infrastructure to the entire Kubernetes ecosystem? That's how it came about. It's in production — I mean, Litmus is in production in the OpenEBS community. A lot of users are now beginning to use these Litmus test cases for OpenEBS in their production environments, or at least in staging environments. And as you can see, there's a chaos pipeline with the functional details for every commit of OpenEBS.
We now run about 10 different chaos tests that are specific to OpenEBS for every commit, so anybody can construct similar negative testing in their pipelines and get the benefit. And we are really happy that some users are starting to pitch in: as part of the recent Hacktoberfest, we were able to get in touch with some community members who are in the process of writing these charts.
[On a question about Litmus as a tool for testing networks:] Yeah, I think we are trying to be part of that SIG as well — maybe I'll do this presentation again at that group — and we are already trying to get a blog scheduled on what I just covered. We are all on Slack — it's part of the Kubernetes workspace itself — but we want your feedback and coaching, and we will be lurking around in every meeting here. So, given that, I don't know whether we have time for a demo — probably not.
Dirk: Thanks — and thanks for the opportunity to share our project with the CNCF SIG App Delivery. My name is Dirk and I will present together with my colleague Andi. Today we want to discuss what Keptn is, which problem we actually solve with Keptn and how we solve it, then quickly run through the roadmap, the community and ecosystem, and then the relations to other projects that are already part of the CNCF.
So what is Keptn? We tried to narrow it down to one sentence: it's a message-driven control plane for application delivery and automated operations. Let me lead with two examples — one for application delivery, one for automated operations — to give a quick picture of what we're doing here. So, application delivery: I think most of the people in this SIG will be familiar with that topic.
Keptn usually starts its delivery workflow after a new container has been built, so it starts after the continuous integration part. A new container — for example, a new version of a service — is pushed to a container registry or artifact registry, and that sends Keptn an event that a new artifact is available. Keptn stores all of its configuration in a git repository, updates the specific configuration files, and then triggers a deployment into an environment — for example a dark deployment — executes some tests, gathers monitoring feedback, and gets it back to Keptn to act on.
So it has service level indicators and service level objectives that are assigned to the specific artifact and service, and it decides whether the quality criteria are met for promotion to the next stage. There it would again update the configuration in the git repository and continue — for example with a blue/green deployment — again gather the monitoring feedback and, in this example, decide that the quality criteria are not met, trigger a rollback, and also notify the user that the deployment was not successful and a rollback was executed.
The second example is how you can use Keptn as a control plane for automated operations. As before, Keptn sits in the middle as the orchestrating instance in this use case. Initially we add operations instructions and again store them in the git repository; those consist of service level indicators, service level objectives, and also information on how a violation of a specific service level objective can be remediated. After that, Keptn can set up and configure monitoring rules — for example in Prometheus.
That is, alerting rules based on the information from the service level objectives it was given. Then the service runs, and if an SLO is violated, the monitoring provider detects it and alerts Keptn. Keptn checks in its configuration whether it has an applicable remediation action and executes it — in this example, a scale-up.
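To make the shape of those operations instructions concrete, here is a minimal sketch of an SLO-plus-remediation definition. This is illustrative only — the metric name, thresholds, and file layout are assumptions for this example, not the exact Keptn schema of the version presented:

```yaml
# Hypothetical operations instructions: an SLO on response time,
# plus the remediation action to run when it is violated.
objectives:
  - sli: response_time_p95       # indicator, e.g. retrieved from Prometheus
    slo:
      warning: "<= 800ms"
      critical: "<= 1200ms"
remediations:
  - name: response_time_p95_violated
    actions:
      - action: scaling          # scale the service up by one replica
        value: "+1"
```

The point is the separation of concerns described in the talk: the SLI comes from the monitoring provider, the SLO states the acceptable range, and the remediation names the action to take when the alert fires.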
Of course, you could argue you can also do that with autoscaling and an HPA with custom metrics. Yes, you can — but you could also think of more application-centric use cases like toggling feature flags, which is also a valuable remediation action, and then again receive feedback on whether the remediation action actually helped to remediate the problem or not. In the delivery and operations workflows we just saw, there are usually at least three personas involved.
That includes the developers, who usually provide the container, know best what's inside of it and what the application or service consists of, and would usually define the remediation actions. They know which feature flags are available and which can be used to remediate certain situations — for example, switching from delivering dynamic content to delivering cached static content because the number of requests increases.
The second persona group is the DevOps engineers. They are usually responsible for configuring tools — for example, making sure JMeter is running in the right version and is able to execute performance tests — and they also provide the service level indicators, meaning they configure Prometheus in such a way that the SLIs can be retrieved for the specific service. Last but not least, site reliability engineers are often involved; they usually define service level objectives on top of the service level indicators, and define stages and processes.
Historically, most of the work these three personas do together has been done in pipeline files, and this leads us to the problem Keptn tries to solve — or actually solves. I think all of us have at some point in our lives written a CI/CD pipeline. Pipelines are a good thing for continuous integration — that's where they started out. Then additional tasks were added, like executing tests, and recently also taking care of the delivery parts, and the example over here is about 350 lines.
For one project you most likely have several services, which means you would have several pipelines. Usually, if someone writes a good pipeline, as the team grows the pipeline is copied and maybe adapted a little bit, because some other project uses a different load testing tool or a different deployment tool. So you already end up with copies that are slightly changed per project, and if you then scale that up to several teams, you will end up with many instances that might still be mostly the same pipelines.
But you will most likely end up with a large number of snowflake pipelines, and this is actually hard to maintain. We already learned in software engineering 101 that copying code is a bad idea, and I think the same holds for pipelines, since we are already at the stage where we can write pipelines as code. The challenges we heard are: with this example in mind, where you have n times x pipelines, if the security team comes up with a new requirement, you would need to touch all of those pipelines.
The same goes if you want to use a different tool for specific tasks — you most likely would need to touch a lot of those pipelines. The same again if you want to add notifications to all steps — to have, for example, a Slack message for all the activity going on during application delivery or operations. A very good example we got: if you run an e-commerce shop, there is this magic time between Black Friday and Cyber Monday where you don't want your site to go down.
We solve that by defining application delivery and operations processes in a declarative way — more details to come. We rely on predefined cloud events to separate the process — what is happening — from the tools that actually execute a deployment, a test, or an evaluation, and we want to provide an easy way to integrate and switch between different tools. That gives us the declarative delivery flow.
We call these declarative delivery flows shipyard files. Instead of implicitly integrating that information into your pipeline files, you have a shipyard file where you define the stages you have and the different steps taken per stage — usually built out of a deployment, a test, an evaluation, and then a promotion. So this is really the recipe of what to do in which stages.
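A shipyard file of the kind just described might look roughly like this. This is a sketch based on the general shape of the early Keptn shipyard format; the stage names and strategy values are illustrative and may not match the exact schema of the version shown:

```yaml
# Hypothetical shipyard: three stages, each with its own
# deployment and test strategy; promotion follows this order.
stages:
  - name: dev
    deployment_strategy: direct           # deploy straight into the stage
    test_strategy: functional
  - name: staging
    deployment_strategy: blue_green_service
    test_strategy: performance
  - name: production
    deployment_strategy: blue_green_service
    approval_strategy: automatic          # could be flipped to manual, e.g. before Black Friday
```

Adding a stage or changing an approval strategy is then a one-line change in this file rather than an edit across every pipeline.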
The standardized way of communication is based on the CloudEvents project, and for each of the steps in each stage, Keptn dispatches cloud events to a pub/sub instance — currently our pub/sub is provided by NATS. The events decouple what the current task wants to accomplish from who should accomplish it, and since it's a pub/sub implementation, you can also have several listeners on one topic.
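For illustration, a Keptn-style cloud event announcing a new artifact could look roughly like this. The envelope fields (specversion, type, source, id) come from the CloudEvents specification; the event type and the data payload are assumptions sketched for this example, not the exact Keptn event contract:

```json
{
  "specversion": "0.2",
  "type": "sh.keptn.events.new-artifact",
  "source": "container-registry",
  "id": "hypothetical-event-id-0001",
  "contenttype": "application/json",
  "data": {
    "project": "sockshop",
    "service": "carts",
    "image": "docker.io/example/carts",
    "tag": "0.9.1"
  }
}
```

Any service subscribed to this event type on the pub/sub topic — a Helm deployer, a test runner, a notification service — can react to it without the sender knowing who the receivers are.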
This is just an excerpt, of course. Coming back to the challenges we faced before: Keptn accepts those challenges and actually has a simple solution to all of them. Instead of manipulating all of the pipeline files, you add your stage in your shipyard file and you're basically done, because the delivery flow and the operations flow are,
let's say, decided at run time. You just need to update your shipyard file: if you add a stage, then in the next delivery run that stage is already taken into account. The same goes for switching tools — you update your uniform files — or adding an additional tool for an additional event in the uniform file, or you can also change the approval strategy for the production stage in your shipyard file.
So, coming back to the key features of Keptn: as already said, it's a message-driven control plane for delivery and operations; it uses the standardized cloud events for communication, as we saw in the example; and we have GitOps built in, so all of the configuration is stored in a git repository — and that also works when it's offline. We once had a GitOps approach that relied on GitHub being online, and then GitHub was down and nothing worked.
That is not a situation we can live with, so we decided to have a local git repository and just work with upstreams to the various public git instances. Keptn is, I think, one of the first — at least open source — projects that enables automated operations: really self-healing for applications, based on SLIs, SLOs, and remediation actions. It works very well in multi-stage and multi-cluster scenarios: usually you have a dev, a staging, and a prod environment, and usually these stages are individual Kubernetes clusters.
As opposed to operators: if you use operators, you would need an orchestrating entity all over again, because operators across clusters don't work that well out of the box. We also have support for non-Kubernetes applications, so you can implement and integrate basically any tool that has an API.
As you saw with the uniform services before: they get the cloud event and can translate it into whatever needs to be accomplished, so it does not have to be a Kubernetes application or a Kubernetes service. We of course also have observability built in. Each application delivery flow and operations flow has a UUID — a Keptn context, as we call it — that lets us visualize a trace of what has happened in that specific delivery or operations flow, and this is the first version of that visualization.
It's not that fancy yet, but we already have plans for the next version to level it up again. So, the roadmap — what we actually want to achieve by participating in the CNCF and in SIG App Delivery — is extending and collaborating on the cloud event specification. There are two ways Keptn can integrate and orchestrate tools: either you write a small service to translate the Keptn cloud events into something the tooling understands, or the tool understands Keptn cloud events out of the box.
That would require us to have a lot of conversations — at first, as I wrote there — with all of the CD tools in the CNCF landscape, and I would be glad to have those conversations to take that a step further. What Keptn also brings to the table is that it enables easy interoperability between CNCF tools.
If you want to use several CNCF tools in one application delivery flow, the orchestration part is already taken care of, and you can just hook in your tools at the specific steps where you want to use them. We of course want to continue adding cloud native practices like canary releases, and also feature-flag-based self-healing. Especially in the self-healing space there is a collaboration possibility with the guys from Litmus, who just did a very good presentation — thanks for that, by the way.
Self-healing works very well when there is chaos testing around, because you can actually test whether your self-healing strategies work. We of course want to continue to improve the interface, and we also want to implement the W3C Trace Context, so you have the ability to visualize your application delivery flows and your operations flows in any Trace-Context-conformant tool.
We want to build out the support for the uniform, and something like a Keptn service registry, where you can look up which integrations for other tools and services have already been written for Keptn, and we want to improve self-healing. So, how does Keptn map to the SIG's model of application delivery?
I think topic 1.5 can be seen as an input to Keptn — it's about the app configuration and parameters; we also use Helm and Kustomize a little bit in that space — and then Keptn basically acts as an orchestration tool for the topic 2 and topic 3 agendas. How am I doing on time — 20 minutes? Okay, about five more? Five more, okay. So, the community is growing.
We have a growing community and a lot of engagement on Slack — a lot of people are becoming aware of us now and starting to engage with us. We have a growing ecosystem, so there are already integrations with other tools: some of those we wrote ourselves, and some were contributed by others.
This is a list of companies we are currently working with on Keptn. There's banking in there, ISVs and SIs, performance and load testing with Neotys, for example, and also other workflow tools like xMatters that are striving for an integration with Keptn and have actually already built one. On the relation and distinction to other CNCF projects — a really interesting topic, I think: Keptn uses Helm out of the box for deployment, and we have a batteries-included service that does simple continuous delivery tasks.
Why do we have that? Because we want people who want to try out Keptn to have an easy experience, without needing to configure several different projects just to get the Keptn experience. It is used in conjunction with Istio for traffic routing with blue/green deployments; we use Prometheus as a monitoring provider; and, as already mentioned, the CNCF CloudEvents are our standard way of communication.
The relation to the CD and observability space is a pretty clear one for me: we want to build out integrations and collaborate on the cloud event specification. Of course there are already existing workflow tools like Brigade, Argo Workflows, Tekton, and Jenkins X, and there are certain differences to each of those tools that we can clearly point out,
if you want more information on that. So if you have any questions and don't find the time to ask them now, please feel free to reach out on the Keptn Slack channel and join our community — we also have bi-weekly community meetings — or write us an email, or just create an issue in the GitHub project, as Brian pointed out earlier. With that, I want to thank you for your attention, and I hope you consider Keptn as a CNCF sandbox project.
Host: All right, thank you, Dirk. So, we're coming up short on time and we still have a couple of items that I want to go over today. Next up on the list is: CNAB, OAM, and any other specs to consider for simplifying app delivery — and what this brings up is actually a much larger topic. One thing that we need to do in SIG App Delivery is start cataloging these specs,
whether it be the three that are mentioned or others that might show up. We don't actually have that yet — we're still new, a month or so into this whole process — but what we need to do next is solicit some volunteers to help catalog some of these specs.
I won't ask for volunteers here, because not everyone is online, but I've actually listed it out for the mailing list. Before we think about adopting these specs, let's create a catalog of them so we know what we're talking about, and I'll make sure we bring it up in our next meeting to show the status of that. Next up: are there any other updates that anyone wants everyone here to know about? I can start off.
We did move our meeting calendar — you can follow the CNCF calendar — and our next meeting is on November 6th, so just keep that in mind. Then, two weeks from that, since we're moving to the first and the third, would be the 20th of November — but that one will not happen; we're going to cancel that meeting, because it falls during KubeCon / CloudNativeCon. Any other updates?
So, one meeting for the month of November, yes. But I wanted to also highlight something else that was brought up in Slack: SIG App Delivery is tasked with lots of different functions right now, and we want to make sure that we are showing the projects from the community, but also weighing that against the other tasks we're doing, like the documentation we did around the delivery definition. So we want to make sure everyone knows that we won't just be doing demos.
[Another attendee]: Let me add one more thing to that. I think you guys may have noticed that there are a lot of projects presenting in SIG App Delivery right now. I think it still needs some time to get formal feedback, because we are still talking with the CNCF to formalize the process — what the next step will be, what the recommendation will be later. So I think it's still not finalized, and the projects presenting in the SIG may still need to wait for some time before all of this becomes finalized.