A
So we've made a project called alternate communities, and within that repo we now have the setup: we have Argo in the cluster, and we now have the ability to push as we please. So we're now ready to start making ApplicationSets for the Ortelius deployments.
B
And let's say you go down, you'll see, like, ortelius-ms... that ortelius-ms-textfile, yeah, any one of those. So this is one of the microservices, and if you change the branch, the deploy will work.
B
You'll see there's the chart; that is what we're using to deploy. So these charts are working, currently deploying everything out to the cluster via DeployHub and direct Helm calls, basically. So these are the working charts: in each one of the ortelius-ms repos you'll see the chart directories.
A
Okay, and then I might make this a ticket, just to remind myself: are there any charts that, if we deploy them, will do something we don't want? Docker files that will do some computations and update databases, etc.?
B
So if you go into one of those, what ends up just giving you kind of the flow of the process: go into Cloud Build. Yeah, pick that deploy... go to the Cloud Build, pick that one.
B
So what we use is a little weird, but we're trying to give people exposure to a bunch of different tools. So when we check into the git repo, there are triggers on the Cloud Build side that will look for the updates to the microservice. So what it's basically doing: the first step is going to get the SSH keys to connect up to GitHub.
B
So then we can do some work against GitHub, and then the next step is logging into Quay. And then we set up some environment variables, like on line 31 and so on. That's where we're actually setting up all the information that we're going to pass over to Ortelius and DeployHub about this thing that we're building. And then the next step, around line 47, is where we start actually building and pushing over to Quay, at that level.
B
Okay, so yeah, all of our microservices are Docker builds. So one of the things that is a weird kind of, I would say, workaround you have to do: local Docker images don't have a digest associated with them until they're pushed over to a registry. So, like at line 55, after it's been pushed we can go figure out what the digest is. And then finally, at line 65 is where we start interfacing with... we're sending this information over to DeployHub, to push all that content over and do the deployment.
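For orientation, a minimal sketch of a Cloud Build config following that flow. This is an assumption-heavy reconstruction, not the actual file: the line numbers mentioned above won't match, and the secret names, image names, and tag format are invented for illustration.

```yaml
steps:
  # 1. fetch the SSH key used to talk to GitHub (assumed secret name)
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: bash
    args:
      - -c
      - gcloud secrets versions access latest --secret=github-ssh-key > /root/.ssh/id_rsa
    volumes:
      - name: ssh
        path: /root/.ssh

  # 2. log in to Quay (credentials assumed to live in Secret Manager)
  - name: gcr.io/cloud-builders/docker
    entrypoint: bash
    secretEnv: [QUAY_USER, QUAY_PASS]
    args:
      - -c
      - echo "$$QUAY_PASS" | docker login quay.io -u "$$QUAY_USER" --password-stdin

  # 3. build and push the microservice image
  - name: gcr.io/cloud-builders/docker
    args: [build, -t, 'quay.io/ortelius/ms-textfile:$BRANCH_NAME-$SHORT_SHA', .]
  - name: gcr.io/cloud-builders/docker
    args: [push, 'quay.io/ortelius/ms-textfile:$BRANCH_NAME-$SHORT_SHA']

  # 4. the digest only exists after the push, so look it up afterwards
  - name: gcr.io/cloud-builders/docker
    entrypoint: bash
    args:
      - -c
      - docker inspect --format='{{index .RepoDigests 0}}' quay.io/ortelius/ms-textfile:$BRANCH_NAME-$SHORT_SHA > /workspace/digest.txt

  # 5. final steps would pass the component details to Ortelius/DeployHub
  #    for the CD hand-off; omitted here, since the exact CLI call is
  #    specific to the team's setup.

availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/quay-user/versions/latest
      env: QUAY_USER
    - versionName: projects/$PROJECT_ID/secrets/quay-pass/versions/latest
      env: QUAY_PASS
```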
B
So that's where, once we get Argo in place, when we figure out what we need to update in the ApplicationSets, we would go ahead and do that connection at that level.
B
That's kind of what's happening in the build and deployment space right now.
B
DeployHub is doing the deployment. So the section from 64 to 70, those lines down near the bottom, is what's actually doing the deployment steps. That's the CD part, those few lines.
B
So we would then, as part of the Ortelius process, like we talked about, update the application... I think it would be the application repo, with the new information. Either do it directly and just trigger off of a commit, or we can get fancy and do like a PR and merge the PR, yeah. I think the first step would just be whatever's simplest.
B
If
we
just
commit,
then
have
argo
go
ahead
and
and
sync
off
of
that
that
commit
you
know
just
commit
directly
to
master
in
in
for
for
now,
and
then
we
can
get
into
branch
type
of
things
down
the
road.
D
If you want to completely have everything within Kubernetes, you have to have Argo Workflows, because otherwise there's no CI/CD. And the Docker build part will have to be replaced, because I'm not sure what version the cluster currently is. If it is 1.20, Dockershim is already deprecated, and as of 1.22 in Kubernetes it's going to be completely taken away. So Dockershim is not going to exist anymore within Kubernetes.
D
Currently it doesn't make a difference, because you all are building it locally within GitHub, but I'm not sure what type of runner GitHub is running in the first place. So if GitHub internally runs on Kubernetes, then it will make a difference. But ideally I'm going to assume that they would be separate from that, like they wouldn't really create a situation which would be impacting that.
D
All my current client stuff is mechanical, so yeah, okay. But as I said, I don't think GitHub will create a problem. I'm assuming GitHub, hopefully, created their pipelines being aware of all these different situations. So if they still use Kubernetes, they would have taken care of that, I'm assuming.
B
Right
right,
yeah:
this
is
a
google
cloud,
build
yeah,
I'm
also
slightly
different
than
the
github
actions,
but
they're
all
they
all
achieve
the
same
thing.
B
So the step where Argo is going to come into play: instead of Ortelius or DeployHub interacting directly with Helm to do the deployment, we would interact with the git repo.
D
I see, yeah, I understand; that sounds perfect. You would have one location. So you showed us a microservice: is that the only microservice?
B
No. If you want me to share... either way, go over to the Azure portal.
B
And there it is. If you look at workloads and just filter on the namespace for ortelius.
B
Yeah, bottom one. So these are the microservices that are running. The first two top ones are the ortelius-docs and the ortelius-www.
B
Those two are being routed through an Istio ingress. So with Istio we've exposed... if you go to the services and ingresses, you'll see where we have things kind of coming in; and filter the namespaces.
B
That is, if you go to, like, docs.ortelius.io, or, you know, www.ortelius.org, that's getting routed over to this service, and this service then has the back-end endpoints of those workflows. So Istio is handling these two guys through that ingress.
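A minimal sketch of that kind of Istio routing, assuming a VirtualService bound to an ingress Gateway; the gateway name, backing service name, and port are assumptions rather than the actual manifests:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ortelius-docs
  namespace: ortelius
spec:
  hosts:
    - docs.ortelius.io
  gateways:
    - ortelius-gateway          # assumed Gateway handling the external IP
  http:
    - route:
        - destination:
            host: ortelius-docs   # assumed Service backing the docs site
            port:
              number: 80
```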
B
If you go back to the ingresses, you'll see that there's another one called ms-nginx, yep. So that's the ingress for all the microservices: the UI, the validate-user, and stuff like that. The reason why I chose nginx is it makes it easier for people to bring it down locally and run it locally, without having to install Istio as part of that process. So we will eventually do an Istio-routed version of the microservices, but for now I just did the nginx reverse proxy.
A
Yeah, I was just going to say that'll be down the line. I'd like to do a little bit of that, because I want to learn more about Istio; I haven't used it yet.
B
Yeah, the nginx reverse proxy doesn't have as many of the weird quirks that the nginx ingress has, so that's kind of what's happening. So if we go back over to workloads, it'll be just a little easier to see; just go back to that namespace, ortelius.
B
So everything except for the top two is what we need to deploy. The first three there, the ms-ui, the nginx, and the general, are all coming from the same repo.
B
So this is the monolith part. So if you go to change your branch there to the service catalog, the very bottom one, svc-cat.
B
You'll see there are the three charts that we're using. That bottom one is deprecated; I've just got to delete it.
A
Okay, and then in terms of the applications: anything pointed at the chart will just deploy all the charts in this, correct?
B
Yeah, this is the monorepo. On the monorepo side we split it out into, basically: the general is the Tomcat server running all of the back-end endpoint code, and then the ms-ui is still a Tomcat server, but it's been stripped out so it's just the JS code and the jQuery code for the front end. So we did kind of a break, just to help with being able to do some level of incremental deployments on the monolith. But it's nothing...
B
There are five microservices, two monolith pieces, and then the reverse proxy.
B
Yeah, so these are pretty much the five. There are some extra ones, but you'll see that every microservice has its own repo; it's going to be the top five in this list that were updated.
B
I can give you the list, so you guys know.
B
If you deploy... yeah, if you deploy the top five plus the monolith charts, you're good. So it'll be a total of eight charts, yeah.
D
Yeah, the next question now: since they are monorepo as well as polyrepo, do you want to have one ApplicationSet for each monorepo, or do you want to have one common ApplicationSet for all your repositories which are trying to get deployed? Because if that's the case, we'll have to change... I'm not sure what your file format structure is, but we'll have to make it all consistent in order for it to work with multi- or polyrepo.
B
Yeah, I think we're pretty consistent. Everything is going to have a chart directory underneath the root of the repo, and then the chart that corresponds to that underneath it.
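Given that layout, a minimal ApplicationSet sketch using a Git directory generator; the repo URL, revision, and namespaces here are assumptions, not the actual config:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: ortelius-microservices
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/ortelius/ms-textfile.git   # assumed repo
        revision: master
        directories:
          - path: chart/*        # one Application per chart directory
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/ortelius/ms-textfile.git
        targetRevision: master
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: ortelius
      syncPolicy:
        automated: {}
```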
B
Yeah, so if you brought up... if you brought up the...
B
Yeah, that's the svc-cat. If you brought up, like, that plus... well, what will happen is we may have to change the nginx slightly: nginx will fail until all of the back-end services are available.
B
There is a flag in nginx to tell it to ignore that, so it can actually come up and ignore the errors if a back-end service doesn't exist. We may need to do that in the nginx config.
A
Okay, and that's on the... we're not talking about the ingress, we're not talking about the ingress controller; we're talking about the container-level nginx, right?
B
It would be... so if you go back to the cluster, yeah, and go to ingresses.
B
Basically, we're just exposing an external IP address to that nginx container, yep, yep. And then in the nginx is where we do all the routing.
B
So if you go back over to the Ortelius GitHub repo.
B
So when that... when the external endpoint comes in, it hits this configuration file, basically, and then this is where the upstreams are for the reverse proxy.
B
And there's a little bit of stuff to handle POSTs versus GETs, splitting out some of the traffic, and then in here everything gets rerouted from HTTP traffic to HTTPS.
B
And when we build the... in the Kubernetes cluster, we've uploaded the certificate key and the certificate itself to a Kubernetes secret, as opaque files, basically. And that's how this guy picks up the keys from Kubernetes, on a volume mount, basically.
D
So the certificate private keys: where is that stored? Are you all deploying that with your deployment, and you all would have a secret variable within your Cloud Build, or...?
B
Go up a little bit. So there's the volume mount of the SSL, and then in the... so it's coming from there. It's on, like, line 146.
B
We create a secret YAML, and then we just apply that to the cluster, right. We do that.
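A rough sketch of that arrangement, with the secret name, key names, and mount path all assumptions:

```yaml
# Hypothetical opaque Secret holding the cert and key as files.
apiVersion: v1
kind: Secret
metadata:
  name: ortelius-ssl
  namespace: ortelius
type: Opaque
stringData:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    (certificate body)
  tls.key: |
    -----BEGIN PRIVATE KEY-----
    (key body)
---
# Pod spec fragment (inside the nginx Deployment) mounting it read-only
# where the nginx config expects the files:
#   containers:
#     - name: nginx
#       volumeMounts:
#         - name: ssl
#           mountPath: /etc/nginx/ssl
#           readOnly: true
#   volumes:
#     - name: ssl
#       secret:
#         secretName: ortelius-ssl
```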
B
Right, yeah, but that's what's happening. I mean, we could theoretically, like you said, move it over there, but I'm not going to worry about it.
B
Yeah, I have to find out... one of the things I wanted to do was use Let's Encrypt, but I could not get Let's Encrypt working with Istio. So that was one of the things where I just had to do a workaround, until we can figure out how to get Let's Encrypt working with Istio, because there's...
B
Yeah, with the cert-manager, and I can't remember... I tried it a while ago now. It may be fixed, because more people are starting to adopt Istio, but cert-manager would not cooperate, put it that way.
A
We can maybe reach out to the Keptn people, Adam, because they're, like, experts in Istio as well, and they've probably hit these challenges before as well.
B
Yeah. So, you know, the long term for the clusters: right now we have two clusters, kind of the DevOps one and this kind of production one, but, you know, merge that all into one bigger cluster, and have everything go through... what is it, Azure Front Door? And then Istio for the routing.
D
Sure, yep, that should be fine. Also, regarding Argo Events: I reached the last part of it. You remember we spoke about the source, the event bus, and the last part? The last part requires either Argo Workflows or something else for it to work. So you can create the source, you can create an event bus, but you definitely need something for the trigger.
D
Okay, so I started working on the Argo Workflows solution, because it just makes sense to have everything in the Argo family, yeah. So I am currently just working on that. It's pretty straightforward, so I should be done by the next sprint.
B
Nice, yeah. I'm part of the SIG Events working group, and I watched... they had somebody from Argo do a presentation and kind of run through the events piece and the workflows. Let me see if I can find the link to that recording, and I'll forward it on to you.
D
Yeah. So the triggers that are available to us are: the Argo Workflow trigger; the AWS Lambda trigger; the HTTP trigger; the NATS trigger, which is, again, you're hitting another source... sorry, destination; a Kafka trigger; a Kubernetes object trigger, which means creating a new Kubernetes object as the trigger; a log trigger; a Slack trigger; the Azure Event Hubs trigger; and custom triggers. So a custom trigger would be if you want to build your own. I just found that it makes more sense to piggyback on the Argo Workflow one, because you can pretty much build your own, plus, in future...
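For reference, a minimal sketch of an Argo Events Sensor using that Workflow trigger; the event source, names, and the workflow body are assumptions for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: image-push-sensor          # assumed name
spec:
  dependencies:
    - name: registry-push
      eventSourceName: webhook     # assumed EventSource
      eventName: push
  triggers:
    - template:
        name: update-values
        argoWorkflow:
          operation: submit
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: update-values-
              spec:
                entrypoint: main
                templates:
                  - name: main
                    container:
                      image: alpine:3.15     # assumed image
                      command: [sh, -c]
                      args: ['echo "update the values file with the new tag here"']
```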
B
Right, right. It'll be interesting to see which it's going to be.
D
...a trigger to perform a workflow. So that's why, I'm guessing, Steve mentioned Keptn, right? So does Keptn do work...?
E
Yeah, yeah. You have a sequence, and then inside a sequence you have tasks. But I guess we can listen for Argo... I'm getting the terminology all mixed up, but we can listen for Argo's stuff, or we can trigger Argo, so either way around, and that's what...
D
Maybe you should have a call separately, Adam. I think I'm across Argo and you're across Keptn, and I don't think I have much... I'm not across Keptn at all. So yeah, maybe.
B
And kind of my guess on where the cross... not necessarily crossover, but the integration... would be: if you look at just Argo as the GitOps model, you just let Argo do the heavy lifting of everything once it hits the git repo. And where the git repo is triggering something other than a build (this is for deployment pieces, for example), we would actually go ahead and trigger Argo CD to go ahead and bring the cluster into the correct state, to match the git repo.
B
Now, on the Keptn side, I was seeing Keptn looking at the git repos of the source code: doing the CI piece, doing the build, doing the push, doing testing, doing all those things. And then, once you're happy with that, Keptn would then either talk to Ortelius to do the deployment, or talk to Argo directly.
B
You know, Ortelius would talk to Argo, or Keptn talks to Argo directly, at that level. But the way I'm kind of envisioning Argo's role in this is basically like Ansible... not Ansible, Puppet, where it literally kept the state of a machine in sync. And that's kind of the role I see Argo playing in an event-driven world: its job is to keep the state of this cluster where I want it to be, and all the other workflow pieces are more event-driven around that, through Keptn, and so...
E
...what we could do there is put a Keptn quality gate in, to say: right, here are your tests. We've got a new artifact; go away and run some basic tests and just give a green tick, and then Argo knows it's healthy enough to deploy. Because otherwise you're in a situation where you're just blindly deploying potentially bad code.
B
Yeah, and that's where... go ahead. No? Okay. So this is where it gets a little confusing, and we still need to flesh it out. But the Helm chart... let's say we take one of our microservices, the textfile microservice. The Helm chart that defines the template of the Kubernetes manifest is kept in the textfile repo.
B
Now
just
because
we
have
the
text
the
the
manifest
looking
there.
What
we
need
to
do
is
when
we
do
a
build,
we're
going
to
have
a
new
image
tag
that
we
just
want
to
apply
as
part
of
the
deployment.
B
So that's where we separate the data from the definition, and that's going to be in a values file. What ends up happening on the Helm side is: you take a values file and you apply it to the template, to the chart, and that's what, at the end of the day, generates a substituted, you know, completed Kubernetes manifest file that then gets sent out to the cluster. So the way I'm kind of looking at it is: the Helm chart would exist in the microservice repo, like the textfile repo, and the values file will probably end up in a different repo, which would be... what do they call that? It's not that necessary...
B
The application... the app-of-apps repo would be where the values file would probably live, and then we just kind of merge the two together at that level.
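A sketch of that split between chart and values; the file name, keys, registry path, and tag format below are assumptions for illustration:

```yaml
# Hypothetical production-ms-textfile-values.yaml: the chart in the
# microservice repo holds the template, while this small values file
# carries the data that changes on every build. CI rewrites the tag,
# and rendering is roughly:
#   helm template ms-textfile ./chart -f production-ms-textfile-values.yaml
image:
  repository: quay.io/ortelius/ms-textfile    # assumed registry path
  tag: master-10.0.103-g1a2b3c4               # assumed branch-version-commit format
```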
So to kind of answer your question, Adam: when the developers are interacting with their source code repo, their microservice, they're going to be changing source code, and they're going to check in. Keptn's going to catch that they just did a source code change, we need to rebuild, and we do the build.
B
We can do our tests, we can get our new image out there, our new tag, and then at the tail end of that we have to go and update the values file with the new tag. That would be hitting a separate repo, the app-of-apps repo, with the new tag information. And when you do that update into that app-of-apps repo, that's when Argo's going to actually take off, actually apply, and bring the state of the cluster in sync with the state of the git repos.
D
So app-of-apps was the concept before ApplicationSets; ApplicationSets are the new way of doing things. If you used to use app-of-apps, you can migrate off to ApplicationSets. Obviously you will have to create a new ApplicationSet, but henceforth you won't need any of those changes. Your end concept is still the same: until you update the values file, nothing gets affected within your respective deployment.
B
Right, right. And just to make sure I understand the ApplicationSets: we can make the ApplicationSets look for changes in a particular directory in the git repo?
D
The other part is, along with that, you can also specify a specific branch. As you can see, targetRevision, on line 12: that will specify what branch to look at, and then you have the paths where your charts are located. Okay, this is where it picks up the charts from, and...
D
So in that, it's going to be production-hyphen-whatever-chart-name. Like, say I'm deploying Argo Events: production-argo-events-values.yaml. Got it. But if I don't want to do that on all my clusters, I can have a separate cluster which only does dev deployments.
D
So if there is a failure on a dev cluster, we can contain it there, and we can probably create a trigger, or... obviously, Keptn is going to be there to protect it as well, and we'll see that if Keptn fails, we do not progress to another environment, right.
B
So, Adam, on your side... on this side, if we look at kind of the use-case scenario: the developer updates the source code and they do their commit; we'll just keep it simple, they do a commit. They didn't touch the values file at that point, they're only touching their source code, so Argo is going to ignore that commit at that point.
B
Keptn will capture that commit because of the source code change: do the build, do the push, get the image, do your tests. And then from there, once you're happy with it, we have to take the new image tag and apply it to the values file.
B
Now, when we do that, Argo's going to take off and start doing its work on deploying to the cluster. But when we do that commit, we have to make sure that Argo doesn't go into a loop and start all over again, saying: oh, I just got a new update in the repo, now I have to go through and build again.
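One common guard against that kind of loop (a sketch of an assumption, not necessarily what the team does) is to have the build trigger ignore commits that only touch the values file, for example in a Cloud Build trigger definition:

```yaml
# Hypothetical Cloud Build trigger (import/export YAML form): commits
# that only change the CI-written values file won't re-trigger a build.
name: ms-textfile-build            # assumed trigger name
filename: cloudbuild.yaml
github:
  owner: ortelius                  # assumed
  name: ms-textfile                # assumed
  push:
    branch: ^master$
ignoredFiles:
  - chart/values.yaml              # assumed path of the CI-written file
```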
E
So, just to clarify: it sounds right, but Keptn itself isn't really going to do anything; Keptn's going to be the orchestrator to do things. What we'll need are services hanging off, like microservices hanging off Keptn, to actually do the work. So that's where that check, or that safety, would live. Let me pull up a shipyard file and I'll show you.
B
So you're in the middle of an end-to-end workflow pipeline, and due to the deployment we have to go and update that same pipeline, the same repo that that pipeline's being hung off of. So as soon as you do that second commit...
D
In my case, for example, when I'm trying to change just an image tag, what happens is: first, obviously, I just committed the code, and that in turn may or may not build a new image. Now, in my case, if I commit the new file to the code, there has to be a Docker build, a build of an image, that runs; that in turn will then get deployed by Argo CD, but only with the old tag, since it doesn't realize that that has changed.
D
Yet that's just an automatic sync, which you don't really touch. In turn, once I push the new tag, it goes and triggers an Argo Event; that in turn goes and triggers an Argo Workflow to go and push this new commit SHA into the production cluster... sorry, into the container registry... and only then will it go and then update.
D
No, the image tag update happens automatically, from Argo Events; Argo Events is doing the image tag update. I'm only updating the code itself. The Docker build part is the one which actually creates your image tag, right, or the build SHA, or whatever you want to call it. The first step is doing nothing; it just means that I've committed a new file.
D
So when I'm trying to push a new change, I only push the change itself, in the code; I do not update the values file automatically, right. What happens is, when you push the change, that triggers a build. Yeah, and when I say that triggers a build, that can be from Argo Events or wherever, right. So assume in this case I'm just pushing this image using GCR or whatever you're doing, Argo Events, etc.
D
Yes. So what happens is: whatever test cases you have, you can run them on a completely separate quality gate, or whatever, Keptn, etc., and only after those quality gates are confirmed can Keptn then trigger another Argo Workflow, which will then go and basically update the respective values.yaml.
E
So you're absolutely right: Keptn works on shipyards, and basically you have stages. These can be called whatever you want, it doesn't really matter, but say 'dev'. And then you have sequences; a sequence is by default standalone.
E
So if I run the build sequence... basically, as a human or as another tool, you would say dev.build.triggered, and then Keptn would go away and start running the tasks for you. So in my case: first, second, evaluation. This is where we would build the image, you know: get the git stuff, build the image, do whatever we need to do, and then, when we're done, we do evaluation. Our evaluations... these, again, can be named anything.
E
Well, 'evaluation' is a special one, because we already have a microservice called the lighthouse-service that listens for evaluation.triggered. Now, in the background, what happens is: Keptn will go ahead and generate and distribute first.triggered, in this case, so it'll fire out an event, and it's down to some sort of tooling, some sort of microservice, to listen for that first.triggered event and do its thing; then second.triggered, and so on.
E
So Keptn will basically wait until first... the microservice that listens for this... signals that first finished, with a result, and then Keptn will go away and do second, and so on. So now, this sequence here will only run, triggered on, when the dev.build.finished event occurs, because once you get to the end of this task, Keptn knows: well, I'm finished with that sequence, yeah. And it will only run when that is finished and the evaluation result for that was a pass, and then it will go and run this update-values.
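A condensed shipyard sketch along those lines; the stage, sequence, and task names follow the description above, while the apiVersion, the update-values sequence name, and the selector details are assumptions:

```yaml
apiVersion: spec.keptn.sh/0.2.2
kind: Shipyard
metadata:
  name: shipyard-ortelius          # assumed name
spec:
  stages:
    - name: dev
      sequences:
        - name: build              # started by a dev.build.triggered event
          tasks:
            - name: first
            - name: second
            - name: evaluation     # handled by the lighthouse-service
        - name: update-values      # assumed sequence name
          triggeredOn:
            - event: dev.build.finished
              selector:
                match:
                  result: pass     # only continue on a passed evaluation
          tasks:
            - name: update-values  # needs a microservice listening for it
```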
A
Yeah, so I think you don't need both. I think Keptn, in a way, is better, because it's so generic, you know; if you change it on the other end... and you can bring your own tools to it. Using Keptn as a control plane makes sense, and that's without knowing too much about the sensors and stuff. But to have those... like, what sort of tests and performance testing do we do, Steve? Do we have much for Ortelius yet?
D
Yeah. So maybe we don't require Argo Events anymore, so the trigger part of it is not there. And what can happen is: you just have Argo Workflows, and Keptn triggers those workflows, and then Keptn is managing the quality gates for it, and it's only talking to Argo Workflows. In that sense, you would not require something called Argo Events anymore.
D
Argo Events is an events manager, probably similar to exactly what Keptn does, but Keptn has more features. So Keptn would most probably step in as replacing Argo Events, to manage the quality gates, and then you have Argo Workflows at the end, which take care of any deployments or updates, etc., that Keptn wants to make to the files, like line number 22, update-values-file. I'm not sure whether Keptn does it on its own, or it requires a third-party microservice to perform that activity.
E
You basically strip out that... let's just not call that 'build' anymore... and then some tool would say: right, I've got a new image, dev.blah.triggered. And as part of the payload for that triggered event you can send in the image and repo ID; some service here, the lighthouse, would pick that up and say, cool, I've got a new artifact to evaluate, and then it would pass the result back.
E
You wouldn't even have this; you would just have whatever service here sending the evaluation result back out.
B
Yeah, then Ortelius would do the update-values-file.
B
Yeah, especially if you get into, like, Weaveworks or Codefresh, you know, those other... Codefresh isn't really a good example, but I think Spinnaker is starting to have some pieces around GitOps.
B
And then you could do two things on that path. You can have Ortelius just go ahead and update the values file and then just come back and say "I did that"; or you could have Ortelius go ahead and update the values file and then sit there and wait for the deployment to be complete, and then come back and tell you the results. So you can kind of do a background process and continue on, or you can have Ortelius wait...
B
...wait for the whole process to finish, and then it'll come back to Keptn to go on to the next step. So either way, we can make it work.
E
Yeah, because then you could have Ortelius... just like here, when we manually started this dev.blah.triggered, you could have Ortelius firing a new event: dev.second-blah.triggered. So it doesn't really matter how long that gap is, because, as I say, these sequences are standalone. As soon as that's finished, to Keptn it's... well, no: as soon as Ortelius signals that it is finished, let's say.
B
And so when you go back to telling Ortelius that the build has been done, and you know that part of it is already completed, and you want to go ahead and do a deployment to Kubernetes plus a database update (because databases don't necessarily live inside of Kubernetes), we'll be able to interact with both worlds. So we're not constrained to Kubernetes when you start talking to Ortelius.
B
Yeah, definitely. I have some customers that use Salesforce or Lambda, and, you know, the build process is slightly different for, like, a Lambda function.
B
So for Lambda, you end up creating a zip file, and you check that zip file into, like, Artifactory or something like that. And then from there you have to upload that zip file to an S3 bucket and then tell AWS about it. So you can mix and match, just through the Keptn and Ortelius interface: dealing with Lambda files as well as Kubernetes updates in the same pass.
B
Okay, because, you know, let's say in the whole process we want to go ahead and add in a security scan. We want to make it as simple as possible to add that step into the whole pipeline process, whether it's going to be pre-deployment, like you run Trivy against your image, or post-deployment, where you're going to do some sort of, you know, inbound-traffic, denial-of-service type of attack.
B
You
know
we
want
to
add
in
a
security,
scan
a
security
quality
gate
being
able
to
add
that
in
minimum
with
minimal
changes
would
be
great.
So
that
would.
E
...project, service, and stage filters. So when you're listening to these events, you can say: only listen on this service, if you wanted to go down to that level, but...
B
Right, perfect, yeah. I think that's going to give us a lot of control, and a really good story: being able to focus on that type of scale, and being able to manage the deployment to the cluster through, like I said, more of a state management of the cluster, through Argo.
B
Only Ansible... yeah, Ansible was a push methodology. Puppet had an agent on every single VM, and it would listen to the master, and any time the VM got out of sync with the master, it would go ahead and re-download all the packages and bring itself into the correct state with the master.
B
Yeah, they keep on building on to the proof of concept. So I think, when we hook together... when we start doing the Ortelius events, we'll probably be focused on a generic CloudEvent, instead of one that's specific to Keptn, yeah. And same thing on the Argo side: you know, when we start listening to Argo, we'd want to get in there and kick out CloudEvents of some sort.
A
I've been playing around with them as well, so I've made a little bit of code to, like, receive and push them as well.
B
And then, you know... I mean, some of the big use-case scenarios that I keep on thinking about, that the SIG Events is going to be challenged by: let's say you have a Node.js package that you just fixed a security vulnerability in, and now that's out in the Node.js repo world, and now I need to go and rebuild all my Docker images that consume that. So I...
B
...automatically kick off a process that's going to, basically, just like Dependabot, you know, go through, automatically merge in, rebuild, and get everything out into the dev environment for people to start testing. And think about that happening worldwide: you know, a single update needs to go tell millions of consumers that they need to go rebuild, and how are we going to handle that?
B
I actually did look at blockchain (it's called supply-chain management now), but the problem with blockchain is you can't put a large payload into the blockchain. You can't put, you know, a Docker image, a four-gig Docker image, in the blockchain; it'll just crash, you see.
A
Okay, so we'll go away; we've got Keptn running now as well, and then we have Argo CD. I think these two weeks we'll really focus on getting Ortelius up and running in the cluster, and then we can start playing around with it.
B
Yeah, so I would say we take those eight microservices and start deploying them into the cluster with Argo, and then we just keep on adding on: add on Keptn, add on the other pieces.
A
Around those... do you have five minutes more, just so we can make a quick ticket? Yeah? Okay, because I was thinking it was five microservices and...
B
It's a total of six.
E
Again... and we'll catch up, Brad, yeah.
A
Yeah, sorry. And then workloads, whatever namespace, and then we're saying not those two, but the rest? Is that what... what was the...?
A
Yeah, and one more question: these Docker images, let's choose just...
B
Ortelius deploys them, so they end up in a temporary overrides file.
A
And just values... let me just go to the values, so, yeah.
B
So if you look at... if you go back to Kubernetes, the easiest is to go into one of the YAMLs for those.
B
Yeah, yeah, so if you go into one of them... yep, that one's fine. You need to look at... you won't see any builds here, but if you go to Tags, right under there on the left... yeah, those are all the tags.
B
And you'll see tags out there. The way we tag is based on the branch, plus a semantic number, plus the git commit.
A
Yep, awesome, okay. And then I've got to catch up with the folks in India to give them an update on this, because they're interested in Keptn as well, and in the CloudEvents. So... oh, I'll update the notes today, and then I'll update what we talked about in the India time as well. Okay, and then I'm also thinking to increase the cadence of this, but instead... like, you don't have to be there every second week, but every second Friday would be at a later time, so India can join.
A
We
don't
expect
you
yeah,
we
don't
expect
you
to
because
it'll
be
quite
late,
your
time,
but
it
will
help
us
to
get
it
going
a
little
bit
faster
as
well.
B
Yep, and just send an email to Tracy if you want her to publish it out on the calendar, the time.
B
Yeah, yep. I think we're definitely getting our heads around this.