From YouTube: Aug 9, 2022 - Ortelius Keptn Events Overview
A
So that's how come it looks like there's a competing calendar. I think I have access to that calendar now, so I'm going to try going in there and cleaning those up to fix the confusion.
A
Yeah, sorry about that. So this meeting that we're doing now is about the Ortelius integration with Keptn. Keptn is an event-driven orchestration tool that we need to write... I don't know if it's called a plugin.
A
Yeah, so what we're going to be doing is writing a service that's going to interact with the Keptn events.
A
The Keptn events are basically cloud events: we can push and receive cloud events, so it's more of a generic cloud event structure that we get to attach our own payload to. Now, when we talk about the overall process of how the events are going to work, let's look at a simple CI process. We'll just take a simple, single microservice that somebody may be writing, so for the example we'll have an application.
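The "generic cloud event structure with our own payload attached" described above can be sketched with nothing but the standard library. The event type and payload fields here are made-up illustrations, not the actual Ortelius or Keptn schema:

```python
import json
import uuid
from datetime import datetime, timezone

def make_cloud_event(event_type: str, source: str, payload: dict) -> str:
    """Build a CloudEvents-style JSON envelope with an attached payload."""
    envelope = {
        # Required CloudEvents context attributes.
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": source,
        "type": event_type,
        "time": datetime.now(timezone.utc).isoformat(),
        # The tool-specific payload rides along in "data".
        "datacontenttype": "application/json",
        "data": payload,
    }
    return json.dumps(envelope)

event = make_cloud_event(
    event_type="com.example.build.finished",   # hypothetical type
    source="/ci/builder",
    payload={"image": "myapp:1.2.3", "status": "succeeded"},
)
```

Any listener can parse the envelope, look at `type`, and pull what it needs out of `data` while ignoring the rest.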
A
So, to welcome the new folks: we're just getting started, you haven't missed much. When we have a microservice, a developer updates the code to that microservice. They check their code in, and then Keptn is going to have a listener.
A
So Keptn's going to listen for the git commit or the PR merge, whichever part it is. We'll just say that the developer does the merge and we have a new commit. Keptn will listen to that event and it will start doing its processing. I want to say "workflow," but there's a file inside of Keptn called the shipyard file that allows you to set up certain events to listen for and how to respond to them.
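A minimal shipyard file looks roughly like the following. This is an illustrative sketch based on the Keptn 0.2 shipyard spec; the stage and sequence names are placeholders:

```yaml
apiVersion: "spec.keptn.sh/0.2.2"
kind: "Shipyard"
metadata:
  name: "shipyard-example"
spec:
  stages:
    - name: "dev"
      sequences:
        - name: "delivery"
          tasks:                      # each task emits .started/.finished events
            - name: "deployment"
            - name: "test"
            - name: "evaluation"
    - name: "staging"
      sequences:
        - name: "delivery"
          triggeredOn:                # chain off the previous stage's event
            - event: "dev.delivery.finished"
          tasks:
            - name: "deployment"
            - name: "evaluation"
```

Adding another listener is just another sequence or task entry here, which is what makes the event wiring easy to extend.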
A
So one of the things that the Keptn folks told me was that they don't necessarily have plugins like you do for Jenkins, where you'd actually have, say, a Docker plugin. But they do have a more generic way to basically run a shell script based on that event. So what ends up happening, in turn, is that we run the docker build at that level. Once the build has been completed, we can send out another event back to Keptn.
A
That said, we completed the docker build. From there, we can break it down even further, where the next step is we want to do our docker tag and our push at that level. So it just depends on how fine-grained you want to get on where you want to start sending the events back and forth. Now, I believe they even go as far as, and correct me if I'm wrong on this, Utkarsh, sending out events saying "I'm starting this process." So "I'm starting the build" will be one event, "I'm doing the build" is another event, and then "I've completed the build" is another event. Is that correct?
A
We want multiple events to occur at the same time. So one of them is that we want to do a security scan of the docker container, then we also want to start doing a test run of the container, and then we also want to tell Ortelius that we did the build.
A
So that's where it becomes really flexible: if we want to add in another listener for somebody to do something else, let's say we want to start the deployment to the dev environment, we can start that as well, just by adding in another listener to do that deployment to dev, for example.
B
Yeah, so is Jenkins more kind of static in its tasks? It's more like: do one, move on to the next one, whereas Keptn has a more... I don't know, is it AI?
A
It can do multiple things at the same time, yeah. Well, what it is, is that the relationships between the steps are more dynamic. So if I wanna add a new event listener in there to do something off of, I can add that in very simply, whereas on the Jenkins side it's more fixed. The events are going to kind of drive your process, so it's just like in any event-driven programming language.
A
If you get into, you know, Windows programming, or even Rust, Rust uses a lot of events, and there are certain programming languages, like Node.js and JavaScript, where you can have an async call to go get a web page, and while that's off doing something, you can go do something else. That's the idea behind the Keptn event processing. Now, another thing that provides the same capability on the Jenkins side is something called a JWT... no, JTE, the Jenkins Templating Engine. The Jenkins Templating Engine allows you to create these templates once and put in placeholders for where you want something to occur in the future. It may not be there yet, but you can put in, like, a no-op process, so when you do implement it, you just change the template and not all the pipelines themselves.
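The async pattern mentioned above, kicking off a fetch and doing other work while it runs, can be sketched in Python with asyncio; the fetch here is simulated with a sleep rather than a real HTTP call:

```python
import asyncio

async def fetch_page(url: str) -> str:
    # Stand-in for a real HTTP fetch; simulates network latency.
    await asyncio.sleep(0.05)
    return f"<html>{url}</html>"

async def other_work() -> str:
    # Runs while fetch_page is waiting on the (simulated) network.
    return "other work done"

async def main():
    # gather() starts both coroutines concurrently; other_work
    # finishes while fetch_page is still sleeping.
    return await asyncio.gather(fetch_page("https://example.com"), other_work())

page, work = asyncio.run(main())
```

The same non-blocking idea is what lets Keptn fan events out to several listeners instead of walking one fixed step list.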
A
So there is a solution for the Jenkins side, but in Keptn it's all done through the shipyard file. Also, you can have multiple projects listening to the same events. So in our case, if it's a shared microservice, we can have two different projects listening to that shared microservice being built, and they can go off and do their own event processing, which could be totally different. So project A and project B can have totally different pipelines, but they're both going to be triggered when the docker push has been completed.
A
It's based off of Knative, so it's going to pick up the Knative events, which is like 40 or 50 of them out of the box, and then you can add your own, though Keptn is a little bit different in their implementation at that level. Now, we also have the CDEvents project at the CDF, and what they're trying to do is standardize the event language.
A
So when I say "do a build," what does that mean? What is the payload for starting a build? What is the payload for when I'm done with the build, to pass on to other folks out there that are listening? That's one of the things you'll see: CDEvents is trying to bring a comprehensive language around the events, so everybody's talking consistently. So in the Ortelius-Keptn world, we're gonna do that docker build of that microservice, and Ortelius could get notified.
A
Ortelius is going to listen for that event to happen, and our event listener for Ortelius would need to take the payload and create a new component version inside of Ortelius. So what ends up happening is that every time we do a docker build, we're going to have a new component version that we're going to create inside of Ortelius.
A
Now, in turn, on the Ortelius side, when we create a new component version, we also go ahead and create a new application version. So we look at who is consuming the current version of the service, of the docker container, and then we have a new version coming in. What that tells us to do is: for all the applications that are consuming the current version of the service...
A
...let's go ahead and create a new version of the application and just replace that version of the service with the new one that we just built. So what we end up doing is creating new versions of the application on the consuming side. If you think about the microservices world, the microservice is kind of the producing side, and the applications are the consuming side, for example the front end. So when we do this, Ortelius does two steps.
A
We create the new component version, and then it goes ahead and creates the new application version consuming the new component version at that level. Now, just because we have a new logical application version doesn't mean that we've done anything with it. So what we need to do is go and tell Keptn that we've done that, so we actually will broadcast a new event saying that Ortelius is done: we have a new application version that we're ready to deploy.
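The flow just described, receive a build-finished event, create the component version, bump the consuming application versions, then broadcast an "Ortelius is done" event, can be sketched as follows. The event types and the create_* helpers are hypothetical stand-ins, not the real Ortelius API:

```python
# Hypothetical stand-ins for Ortelius calls; the real API is not shown here.
def create_component_version(payload: dict) -> str:
    # Record the new component version, e.g. "checkout:42".
    return f"{payload['component']}:{payload['build_id']}"

def create_application_versions(component_version: str, consumers: list) -> list:
    # Create a new application version for every consumer of the component.
    return [{"application": app, "component": component_version}
            for app in consumers]

def on_build_finished(event: dict) -> dict:
    """Handle a build-finished event and return the 'Ortelius done' event."""
    payload = event["data"]
    comp = create_component_version(payload)
    apps = create_application_versions(comp, payload["consumers"])
    # Broadcast that new application versions are ready to deploy.
    return {
        "type": "org.ortelius.applicationversion.created",  # hypothetical type
        "data": {"component": comp, "applications": apps},
    }

outgoing = on_build_finished({
    "type": "com.example.build.finished",
    "data": {"component": "checkout", "build_id": "42",
             "consumers": ["storefront", "admin-ui"]},
})
```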
A
So, for the next step, I'm going to shorten the diagram that Sasha and I put together, and we're gonna kind of skip some of the pieces just to make the conversation a little bit easier, and then we may circle back around to fill in the gaps if there are any. So when we get the new application version out there for this microservice, we're gonna send that message back to Keptn: we're done on the Ortelius side.
A
Keptn may turn around and send us a new message that says: go ahead and deploy this, or go ahead and have Argo deploy it. So what we may be doing is looking at the Argo-Ortelius-Keptn link, and the way that one is built is based on the git repo. Argo doesn't really listen to events from Keptn, but it does listen to commits on the git repo.
A
So when Keptn tells us to go ahead and do a deployment, what we would do is have Ortelius listen for a deployment event, and in turn it would go ahead and actually write the appropriate versions of the services, and all the SHAs and all that stuff, to the Helm charts. You usually use either Kustomize or Helm; it could go either way.
A
It doesn't really matter, but basically, at the end of the day, we're gonna update the git repo with the Kubernetes pieces, either the Helm templates or the actual manifest files themselves. Once we write that, we're going to commit it back to the repository, and then Argo will take off from there and do the deployment.
A
So Argo will do the deployment to the cluster. This is one of the things I'm not 100% sure on: whether we're going to get a notification back from the Argo side, or a notification back from Kubernetes, saying that the deployment was completed. Once we get that "deployment completed," Ortelius is listening to that again, and we log that this version of the application was deployed to this cluster at this time, and these were all the services that went with it.
A
So that's where the process kind of comes together. What I found was that, when we look at the relationship between Keptn, Argo and Ortelius, there are some different places where we need to do updates, and I'll get into that, if there are no questions and this kind of makes sense up to this point.
C
The diagram you mentioned, is that somewhere it can be looked at?
A
It was in... I can't remember where. It's in our documentation as well, yeah.
A
Yeah. So what ends up happening, when we look at the process of the pipeline, is that some of the steps, like the build step, don't exist for QA and production. So what we're doing is taking that same image that we built in dev and redeploying it to QA, then redeploying it into production.
A
So some of the things that happen at that level aren't repeated, and you'll see in Sasha's diagram that there are slight differences when you look from dev to QA to prod. So what we end up doing, what I've found is that, to make this all work, when we talk about microservice applications...
A
...we actually need an application repository. That application repository doesn't really contain any code, but it does contain the Helm charts or the Kubernetes manifest files. The reason for this is that you can do it in the same repository as one of the microservices or something like that, but it gets a little bit confusing when you start getting into the Keptn and Argo CD filters of what's being updated at what point of the process.
A
Even if you're working on different branches, you have a dev branch, a QA branch and so on. Actually, when I was playing around with it, it made it a lot easier to have kind of an application-level repository, so that when microservices are ready to go and you say "deploy," Ortelius is going to say: oh, I know you're deploying to QA; QA is missing these five services out of the 10. So what Ortelius does is go and update the Helm charts in the shared application repository and say...
A
...okay, I need you to deploy to this environment, and I want you to use these versions of these services at this level. So in the application repository you do have the different stages of the pipeline: you have dev, QA and prod, and it depends on whether you're using Kustomize or Helm, or how you have Argo configured with its filters, but basically it's all going to be looking at updates against the single repository. So that makes it a lot easier, and what I've seen is that this repository is literally a parent chart that includes the child charts.
A
Now, on the event side, like I said, I'm not a hundred percent sure on the Argo part: when Argo completes, how is it notifying us? Utkarsh, do you know on that front? I can't remember if you got into those details.
A
Okay, yeah, this would be something where Brad McCoy could answer that question, so let me circle back around with him on that level.
A
So let's just review kind of where the process is. The developer checks in the code for the microservice; Keptn listens for that event, does the docker build, and sends out an event that the docker build has been completed. Then it starts the tag and push, and once that's finished, Ortelius listens for the push event.
A
It then does its component update in the database and an application version update, and then sends an event back to Keptn. Keptn may turn around and say: oh, I want you to go ahead and do the deployment; Ortelius listens for the deployment event.
It goes and takes all the information about the application version that we're deploying and updates the manifest files in the application repository. From there, Argo sees that there's a change to the repository and goes off and does its work. Once Argo's done...
A
...it notifies both Keptn and Ortelius; Ortelius logs what's happened, and then Keptn takes off. Now, one of the nice things about Keptn is that it has this concept of a quality gate. So after we do the deployment, the quality gate feature will kick in on the Keptn side, and the quality gate will say: let me throw some data at this new deployment and look at the metrics about what's happening for it.
A
It'll measure the metrics, and if the quality gate comes back thumbs up, we're good to go: it'll actually send another event out, which will allow us to move to the next stage of the pipeline. So in our dev world, we've done our quality gate, everything's looking good, now we start a new event and we go on to QA.
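A Keptn quality gate is driven by an SLO file along these lines. This is a hedged sketch based on the Keptn SLO spec; the SLI name and thresholds are placeholders:

```yaml
# slo.yaml: the pass/fail criteria the quality gate evaluates
spec_version: "1.0"
comparison:
  compare_with: "single_result"   # compare against the previous evaluation
objectives:
  - sli: "response_time_p95"
    pass:
      - criteria:
          - "<=+10%"              # at most 10% slower than last time
    warning:
      - criteria:
          - "<=800"               # absolute ceiling (milliseconds)
total_score:
  pass: "90%"
  warning: "75%"
```

The evaluation result is itself emitted as an event, which is what lets the next stage (or a rollback) trigger off of it.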
A
Now, on the QA side, one of the things that... hey, Brad.
A
No worries, I'm just going through some of the processing, so I'll catch you up here in a second. So after we do the quality gate at development and we start moving into QA, we don't have to do the docker build anymore, because we already have the image. The only thing that we need to do at that level is the deployment, so Keptn would send out another deployment event.
A
Ortelius would capture that event, and it would recognize: okay, I'm doing a deployment to QA. Now let me take that version of the application and see how much drift there is between what is there and what the desired state is. So let's say in QA we're eight microservices behind: eight microservices out of the ten are behind versus the version of the application that we wanna deploy.
A
So what Ortelius will do is go in and write to the application repository the new deployment manifest files, with the right versions of the services that need to go to QA, and then commit that to the git repo. Argo takes off, does the deployment of those eight services to QA, and then, when it finishes, it's going to notify Ortelius back to log that that was completed. Brad...
A
I had one question I couldn't remember the answer to: when Argo completes a deployment, who's sending out the event message? Is it Argo, or is it Keptn?
B
Yeah, it's Argo. They have the concept of the Argo notifications controller, and that was originally separate, but in the last two versions they've just put that into Argo, so Argo will send it.
A
Okay, so Argo's going to let us know that it's been completed; Ortelius is going to listen to that, and also Keptn. Now we start the process again, where Keptn's going to start doing the quality gate check at QA. And let's say there's a problem, things aren't passing like they should: you can have Keptn make a decision on what you want to do, whether you want to back out or just stop at that level.
A
Let's say it's good: we're going to go ahead and go to production, and the same thing is going to happen. When you go to production, you may want to run some stress tests or some test cases in production, or you may not, and that's where the events that are happening in production are going to be slimmed down a little bit, to where it's mostly the deployment and monitoring, is what I've usually seen. So that's kind of the process that happens.
A
It could go either way; I picked whichever one's easier. To be the most generic, I would probably say that we have Argo send the event to Keptn, and then Keptn and Ortelius talk together. Because, let's say we replace Argo with Flux or something like that: then we don't have to change anything, we're always just listening to the same event from Keptn.
C
So I have one question, maybe for Brad. We were saying we will leverage the webhooks that are there in Keptn, right? So in that case there is no need to send... the event will be listened for at the Ortelius part, right? We can straight away go and use the APIs that are written for Ortelius, and also eliminate the one application that we were going to write.
B
Exactly, yeah, exactly. So there's actually no point in us supporting that Keptn app that we were going to write for the service; we can just use the Keptn webhook service. It's easier, it's cleaner, and it's less work to support.
A
Yeah, and one of the things that we'll need to kind of define, and I don't think the CDEvents team has gotten this far, is what the Ortelius vocabulary is. So, you know, we have a component: update a component, deploy a component. They may have the deployment piece, but they won't have anything about us updating components in Ortelius. We'll be able to listen to, like, a docker push event.
A
So when a docker push happens, Ortelius should be able to listen for that event to occur, but when Ortelius has completed its work, we'll have to define what our vocabulary looks like to let other people know what we've done.
A
And I think typically, and we'll have to see, and we may not need the middle one, but there's a starting, running, and completed kind of state of the event. We may just do starting and completed, because our events run so quick.
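Pairing each piece of work with a started and a completed event, as described above, might look like the following sketch; the `org.ortelius.*` event types are hypothetical placeholders, not a published vocabulary:

```python
import uuid

def emit(event_type: str, payload: dict) -> dict:
    """Wrap a payload in a minimal CloudEvents-style envelope."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": "/ortelius/listener",
        "type": event_type,
        "data": payload,
    }

def update_component(name: str, version: str) -> list:
    """Emit a started event, do the work, then emit a finished event."""
    events = [emit("org.ortelius.component.update.started", {"component": name})]
    # ... the actual component-version update would happen here ...
    events.append(emit("org.ortelius.component.update.finished",
                       {"component": name, "version": version}))
    return events

events = update_component("checkout", "1.2.3")
```

Skipping the "running" event, as suggested, just means never emitting a middle entry between these two.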
C
Also, there will be a requirement for some APIs that would be consumed by the UI itself, right? Because we said we'll also create some of the dashboards where we are showing the health of different components.
A
Right, and that would come in when we get the event coming back that something's been deployed to Kubernetes. So Argo completed the deployment; after that, that's where we would want to start gathering... well, there would be two pieces of metrics that we'd want to get back. One is: was the deployment successful, or was it backed out?
A
You know, that kind of metric. The other metric is how well it is running, and that's going to be kind of a post-deployment thing, where something like Dynatrace or Datadog, or even Prometheus or Kiali, would be giving us those types of metrics back. And I don't know whether those would be events coming into Ortelius, or whether we would just be querying a particular endpoint for those metrics.
B
Keptn has built-in capability for, like, DevOps metrics or DORA metrics, I think, I'm not sure if it's DORA, and there's a great Grafana dashboard that we could leverage to start off with. If someone wanted to pick up that project, it wouldn't be too hard.
B
Obviously we'd need something more advanced, but that would be a sort of good start. Possibly we could use Grafana also.
A
And you could use the Keptn quality gate in production to gather the metrics as well. More than likely, it would be: quality gate, "did this get deployed correctly?" would be one metric, and then another metric would be, you know, "what's my latency of the services?" That would be another quality-gate-style metric that would determine whether we're going to do a rollback in production or not.
A
So we may be able to hook into the quality gate aspect of Keptn as well, to gather the health and bring that data back into Ortelius's dashboard.
C
Yes. So one of the features I really like about Keptn is auto-remediation: it's not just telling you that something is happening; you can actually tell it, let's say, if I'm running a cluster and I am out of pods: "now I have a fix for you, let's create two more pods." So you define those in a YAML file, and when something happens at the cluster, Keptn looks at this file and creates more pods.
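That YAML file is a Keptn remediation config, roughly along these lines. This is a hedged sketch based on the Keptn remediation spec; the problem type and scaling value are placeholders:

```yaml
apiVersion: spec.keptn.sh/0.1.4
kind: Remediation
metadata:
  name: remediation-example
spec:
  remediations:
    - problemType: "Response time degradation"
      actionsOnOpen:
        - action: scaling            # handled by a scaling action provider
          name: scale-up
          description: "Add one replica when response time degrades"
          value: "1"
```

When a monitoring tool raises a matching problem event, Keptn walks the actions listed for it and fires the corresponding remediation events.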
C
See, in the monitoring world there are some tools in the Kubernetes ecosystem, like Robusta, that catch the trigger event and send you the notification in Slack, and in Slack they give you the fix as well: if you add these two lines to the YAML file, your problem is fixed. So if we're using some of the eventing data, we want to make sure that, in the future, we can use those monitoring events that solve the remediation part of it, that they can handle and understand those events as well.
A
Yeah. One of the things is that we would want to record it: we wouldn't want to be doing the auto-remediation work ourselves; we would leave that to the other tools. But we would want to know that it was done and what was done, so that would be some of the metrics data that we'd want to capture coming back. So maybe this happens every single time they deploy.
A
They have to go spin up more pods; who knows, maybe they have some jobs that run, so when they deploy, they go off and run 100 jobs, and that always causes the pods to spin up. So that may be some of the metrics that we'd want to be able to bring back to the Ortelius dashboard, saying that we keep on doing this automation every deployment. You just have to think about Ortelius as a big sponge, gathering all that data together.
A
So, you know, the more data we can gather and correlate together, the better off we're gonna be at providing value to the customers.
C
Yes, and on the topic of the CDEvents side: like last time, when you talked to them, I looked at the CDEvents in that light. Keptn is all about events: everywhere you look in Keptn, it is talking to some tool in an event way, and it gets notified in an event way, and those events are also asynchronous by nature. You're not going to wait for the stuff to happen; it's going to be parallel as well.
A
Yeah, and that's why they have the started event, the running event and the finished event at minimum, just because everything is asynchronous at that level. So, in general, when we do our events, we want to make sure that we're doing generic cloud events, with payloads that have the specific data that we're gonna pass around, just so we can fit into the CDEvents without having to rework stuff.
C
Somebody gives us YAML responses, somebody gives us JSON responses, so we're sending back the data to, let's say, Keptn, and Keptn is sending it back to us in a different format. Let's say, as of today, we think of CDEvents as being, by default, acceptable for any open source tooling. What ends up happening then is that there's just one format for sending and receiving the data, which is what the actual purpose of the CDEvents was.
A
Exactly, exactly, and that's why, when we define our vocabulary of events around Ortelius, we should base them on cloud events and that type of structure.
A
So if anybody doesn't know cloud events, go ahead and hunt it down and read up on it. It came out of Google, from how they pass things around. One of the concepts, when I was talking to the product manager of Google Docs: she was saying that when they send cloud events around, the cloud events end up being pretty big, but they actually carry all the data that you need to do your part of the actions, and you just ignore everything else.
A
So that was one of the things I found out from them: these things are going to grow, but you only pick out the pieces that you're interested in. And the reason they did it that way was so the flow through all the different layers went smoothly, without having to do data transformation on the fly.
B
What's the next steps in terms of delegating work?
A
I think we should do a little working group on this as well, so the working group would be able to kind of work through the mermaid... I'll start the mermaid diagram to get us started, and then, probably later this week or early next week, we do a working group to go over that diagram and keep on building it out.
B
So we have Ortelius, Argo CD, Prometheus, Grafana, Loki...
B
Yeah, she's on the call and can do that; she's working with me on a lot of this stuff as well. I also created a demo app for everyone to practice with.
B
Check out the chat, I just posted it: it's got Podtato Head there. This is from the CNCF Technical Advisory Group for App Delivery, and they came up with this app where you can change out the potato head's arms and legs and try different versions, and it's perfect for cataloging components and things. So it's quite a good demo. I can teach people how to change its arms and legs, and that's good practice for a lot of things.
A
And the frame and things like that, so it's the same concept. I was doing that around an Istio demo.
B
Yeah, so maybe I can create some issues, like "change its leg," "change its arms," and then people get accustomed to the git repo that it's in, and yeah, I think it's good practice for people, and it's quite fun too.
A
Yeah, and how many services make it up?
B
Services? Around 10, I think. Pretty much, if you look at the potato, the legs and a few more, it's around 10.
B
Even if it was 100, it's not resource heavy; it's very, very light. You could run it on a Raspberry Pi.
A
Nice. So let's see, let me figure out the next...
A
So, does this time slot work for folks, or do we need to go earlier or later?
B
I fly back to Australia next week, so this is 2:20 a.m. for me.
B
So we might have to either have two groups, or go async, or we could have two calls: like a US-friendly one and then a UK-friendly or Europe-friendly one. And then India sort of sits in the middle, which is good; in Pakistan they can sort of almost attend both.
A
Yeah, I think we're gonna need two groups, two times.
A
Yeah, okay. Let me just schedule another working group for this Friday, same time, and then next week, when you're back in Australia, Brad, we'll figure out a time slot for your side.
B
Yeah, that sounds good. I'll make some issues around Podtato Head as well; feel free to pick them up, or I'll give instructions on how to do it. It'll even help people do proper commits to our tds project, etc., for signing and all that, so it'll be good practice.
A
Yeah, and when you create the issues for the potato, just label them as, like, "demo," or yeah...
A
Yeah, excellent. Anybody else have any questions, comments, concerns? Okay, so I will get the invite out for this Friday, same time slot, and then we'll work on next week's time slots as well.
A
Yep. And just so everybody knows, the agenda for Friday is that we'll work through the state diagram in mermaid, so folks can see how the events are moving back and forth and stuff like that.
A
All right, well, thank you, everybody, for jumping in here, and like I said when we started, sorry about the late notice and the confusion on the calendar around this; I'm going to try to get into the CDF calendar and get things sorted out. So with that, thank you, everybody, have a good night or good day, and we'll talk to you on Friday.