From YouTube: January 21, 2021 Ortelius Architecture Working Group
A: Okay, so there's the link. So, you know, last week we had the session on app sets, and I thought it was interesting; my big takeaway on it was on the Spinnaker side.
A
A
If
you
ever
wanted
to
go
and
install
like
wordpress
in
your
kubernetes
cluster,
the
the
helm
chart
for
wordpress
contains
not
only
like
the
wordpress
install,
but
what
is
needed
for
the
database.
So
I
have
my
like
a
my
sequel
and
it'll.
Do
some
other?
You
know
like
three
or
four
steps
to
install
wordpress
on
a
on
a
kubernetes
cluster,
so
that
was
interesting.
A: I thought, from the Spinnaker side, that they're approaching it like that, where components are actually part of the Helm chart, and then on the Argo side... Argo just doesn't really even have that.
A: The one thing that they're doing with their application sets is a way to do more of a templating mechanism that will allow you to basically reuse the same deployment manifest over and over and substitute different values in, for, say, a different cluster.

A: So their use cases are more along the lines of: I have a hundred clusters that I want to send this deployment file to, and it needs to be tweaked for each cluster; how do we do that? That's what they were considering an application set.
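The per-cluster substitution that application sets automate can be sketched in plain Python. This is only an illustration of the idea, not Argo's actual generator syntax; the placeholder names and the cluster list are made up.

```python
# One manifest template, many clusters: substitute per-cluster values
# into the same deployment manifest. Placeholders are illustrative.
TEMPLATE = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-{cluster}
spec:
  replicas: {replicas}
"""

clusters = [
    {"cluster": "us-east", "replicas": 3},
    {"cluster": "eu-west", "replicas": 2},
]

def render_all(template, clusters):
    """Render the same template once per cluster, substituting its values."""
    return {c["cluster"]: template.format(**c) for c in clusters}

manifests = render_all(TEMPLATE, clusters)
```

In the real feature the cluster list comes from a generator (cluster, list, or Git generator) rather than a hard-coded list.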
A: Now, the one thing I did notice was that neither one of them has what I was calling a product view. And I threw the word "product" out there just because I wanted to distinguish it from using the word "application" or "component". I've actually used the word "product" in the past to represent the logical view of the application. So that may be one piece of terminology we may want to work with: the concept of a product instead of a logical application view, just to keep from mixing terminology.
A: So those are the takeaways that I got out of that meeting. Anybody else have anything? What were your thoughts on that?
C: We did record it, Sasha, if that's something you want; it's out there, it should be in this doc. The link.
D: Oh, Steve, one more point regarding the sharing of the components that was mentioned.
D
So,
for
basically,
argo
like
each
component
can
only
belong
to
one
application,
but
with
the
spinning
other
thing
was
that
a
component
can
be
what
you
say.
How
can
we
say
a
component
can
be
a
central
micro
service
altogether,
so
that
was
a
like
and
they
can
be
shared
among
different
action
sites,
so
that
was
the
difference
between
them.
A
No
not
yet.
I
just
wanted
to
make
sure
I
wasn't
missing.
Anything
like
crum
was
to
say
that
there
I
I
forgot
about
that.
There's
no
really
sharing
of
a
component
between
applications.
C
You
can
does
it
track
who's
who's
sharing.
It,
though,.
A: Right, yeah. In Spinnaker I think they did; I think it was called a job, or a workflow.
C: You know, well, I say what we need to do is pull together, get that requirements doc together, and I can send it to Isaac of Armory; we can send it to Gopi from OpsMx and to Jesse to add into it, and then to Adam Jordan from Netflix, and see if they can make comments on it and indicate if there are any glaring issues.
A: Yeah, so, yeah, if we wanted to use... Let me... so the interesting thing with Argo is: we want Argo to do the deployment, so we would not be interfacing with Argo; we'd be interfacing with the Git repo that contains the deployment files that we want Argo to send out. So we go ahead and modify the deployment files and check them in, and once we check them in, then Argo would go ahead and deploy them.
C: But then we can't pull back any of the location, right? If we don't get a feedback loop from Argo that says it was deployed to a certain environment.
A: They do have what they call a post-sync hook, so we can be notified that the synchronization has been completed. But kind of what I'm thinking, for us to work with this GitOps, and even with Spinnaker, where, you know, Spinnaker groups components through a Helm chart, is that they don't have a way inside of their tool to...
A: And, yeah, I think that's one of the initial things I've been thinking about, and that's why I haven't really...
A: You're going to set up their pipeline, because they are a CD tool, and you're going to say: go deploy this Helm chart, and this is the ChartMuseum for you to go grab the chart from. So the Spinnaker side is more, I would say, normal, and Argo CD is going to be a little less so; it's a very specific solution for the GitOps world. Yeah, but the one thing I'm thinking is: is there... Kubernetes, do we create our own agent in there?
A: ...namespaces, and that'll bring back this really messy YAML file. But it is YAML, so we can actually parse it, using Python or Go, into a dictionary that we can go look through for the values that we want to look at in the description.
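The parsing step described here can be sketched with nothing but the standard library. The sample payload below is a trimmed-down fake of what a cluster query returns; to stay standard-library-only the sketch uses JSON output (as from `kubectl get deployments -A -o json`) rather than YAML, which would otherwise need a YAML parser such as PyYAML.

```python
import json

# Fake, trimmed-down cluster query output for illustration.
sample = json.loads("""
{"items": [
  {"metadata": {"name": "web", "namespace": "qa"},
   "spec": {"template": {"spec": {"containers": [
      {"name": "web", "image": "myrepo/web:1.4.2"}]}}}}
]}
""")

def extract_images(deployment_list):
    """Map (namespace, deployment name) to the list of container images."""
    out = {}
    for item in deployment_list.get("items", []):
        meta = item["metadata"]
        containers = item["spec"]["template"]["spec"]["containers"]
        out[(meta["namespace"], meta["name"])] = [c["image"] for c in containers]
    return out

images = extract_images(sample)
```

The dictionary keyed by namespace and deployment name is exactly the "values that we want to look at": the image names and tags.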
A
So
I'm
just
going
to
bring
up
what
we
have
for
deploy
hubs
microservices.
So
if
we
look
at
the
general.
E
So,
just
quickly
swinging
a
lot
of
comments.
I've
been
juggling
a
couple
of
things
here.
Argo
cd
does
have
a.
I
think,
it's
a
notification
system
which
can
push
notifications
to
various
sources
about
the
changes
which
are
being
carried
out
on
the
diffing
of.
E
Kubernetes
there's
a
couple
of
edge
cases
which
we
might
run
into.
So
if
organizations
are
using
a
control
and
operator
pattern,
they
might
have
services
which
are
dynamically
updating
the
api
within
kubernetes
with
additional
information.
That's
something
we
use
at
igs
for
spinning
up
dynamic
hardware
controllers,
for
example.
So
there
will
be
changes
in
the
api
outside
of
what's
been
put
through
code,
essentially
as
the
application,
dynamically
scales,
various
components.
Just
something
to
be
aware
of
on
that.
One.
A
Yeah
and
what
we're
looking
at
is
I
just
pulled
up
the
the
deployment
file
for
one
of
our
containers
and
the
part
that
we're
interested
in
out
of
all
this
stuff
is
basically
the
image
name,
because
once
we
have
this
image
in
the
tag,
whether
it's
going
to
be
the
the
tag
itself
or
if
they're,
using
the
sha,
either
way
at
the
component
time
on
the
build
side,
when
we
hook
into
the
build
process,
we
capture
what
component
a
nice
pretty
name
for
what
this
container.
This
image
is
for.
A
So
when,
like
you
said
on
through
the
notification
process
of
like
argo,
we
can
get
notified
that
there
was
an
update,
a
sync
just
that
got
done,
and
then
we
can
go
ahead
and
query
the
cluster
and
try
to
figure
out.
You
know
which
image
was
just
changed
as
part
of
that
deployment
process
and
from
there.
We
can
then
mark
in
ortelius
saying
that
this
deployment
just
occurred
to
this
component,
and
maybe
some
of
the
other
information
we
can
gather
would
be.
A
You
know
the
cluster
name,
which
could
be
used
as
the
environment
that
it
went
to.
So
we
can
get
some
of
the
basic
information
saying
that
this
this
container
was
just
deployed
to
this
environment,
which
is
going
to
then
translate
back
to
on
the
arterial
side,
saying
that
this
component
version
was
deployed
to
this
environment
and
subsequently
we
can
work
our
way
back
to
the
application
version,
the
logical
application
that
we
was
deployed.
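Turning a sync notification into that record can be sketched as below. The payload shape and the cluster-to-environment mapping are assumptions made for illustration; they are not Argo CD's notification schema or Ortelius's actual API.

```python
# Hypothetical mapping from cluster name to environment name.
CLUSTER_TO_ENV = {"qa-cluster": "QA", "prod-cluster": "Production"}

def record_deployment(event):
    """Map a post-sync event to a (component, version, environment) record."""
    image, _, tag = event["image"].partition(":")
    return {
        # The "pretty name" captured at build time; here just the repo basename.
        "component": image.rsplit("/", 1)[-1],
        "version": tag or "latest",
        "environment": CLUSTER_TO_ENV.get(event["cluster"], event["cluster"]),
    }

rec = record_deployment({"image": "myrepo/web:1.4.2", "cluster": "qa-cluster"})
```

From a stream of such records, working back to the logical application version is a lookup against the baseline discussed next.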
A
Now,
if
there's
not
a
logical
application,
we
may
need
to
go
ahead
and
create
an
application
version
on
the
fly
to
represent
what
is
the
mix
of
all
the
different
running
pieces
for
an
application,
so,
instead
of
applications
coming
out
of
developers,
world
saying
these
are
my
my
parts
that
we
start
looking
at
it
from
the
other
side
and
putting
together
based
on
name
spaces
and
some
base
information,
like
a
baseline,
that
we
could
say
that
this
is
a
new
version
of
the
application
that
are
that
are
being
pushed
into
this.
A: Well, for the... so, this is solving the problem of what is being deployed.
A: ...predefined, so I don't think they're going to go into... So, the way Spinnaker works, they do app component sets based on a chart. A Helm chart can contain, you know, 20 or 30 components in a single chart; a chart is not restricted to a single component. So in the Spinnaker world, the way they do a component set is by writing a very big, long chart that says: install all these 20 different microservices, right?
C
Which
is
we're
trying
to
get
away
from
monolith
kind
of
deployments,
we're
trying
to
provide
a
way
to
give
them
incremental
updates,
based
on
collections,
a
collection
at
an
application
level
or
a
collection
at
a
component
set
level.
A
Right
so
under
the
covers,
kubernetes
will
figure
out
if
something's
changed
or
not
so
kubernetes,
even
though
you
pass
it
a
full.
A
Deploy
at
that
level
so.
B
I
I
was
listening
to
you
and
I
cannot
like
avoid
to
thinking,
but
usually
when
I
see
this
problem
like
integration,
decoupled
integration
like
for
any
solution
using
promedios
and
an
exporter
for
providers
that
already
use
like
a
dictionary
set
and
then
putting
this
data
up
to
can
be
used
with
alert
manager
and
we
already
implement
like
some
kind
of
this
solution
with
you.
B
You
have
promoted
with
export
that
you
can
implement
like
I
don't
know
for
flavors
about
taking
metrics,
then
you
just
put
it
you
put
available
this
information
for
other
platforms.
You
need
this
reactive
notification
system.
You
can
use
alert
manager.
That
is
very
light
and
works
very
well.
So
I
don't
know
that's
for
at
least
when
I
try
to
solve
something
like
that.
We
use
this
and
it's
really
it's
it's
well.
The
only
thing
is
little.
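The exporter idea suggested here boils down to exposing deployment state as metrics Prometheus can scrape and Alertmanager can react to. The sketch below only formats Prometheus exposition-format text with the standard library; the metric and label names are invented for illustration, and a real exporter would serve this over HTTP (for example with the prometheus_client library).

```python
def deployment_metrics(deployments):
    """Render one gauge sample per deployed component, exposition format."""
    lines = ["# TYPE ortelius_component_deployed gauge"]
    for d in deployments:
        lines.append(
            'ortelius_component_deployed{component="%s",environment="%s"} 1'
            % (d["component"], d["environment"])
        )
    return "\n".join(lines)

text = deployment_metrics([{"component": "web", "environment": "QA"}])
```

Prometheus scraping this endpoint would then carry the who-is-deployed-where state, and Alertmanager rules could notify downstream systems when it changes.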
C: So I think that's what we have to... we have to keep our eye on the ball: that we're, you know, not just trying to put something together, throw it out, and say we have something that can report on what occurred. I think we have to keep thinking about how our catalog should be that, for lack of a better word, single source of truth before you deploy.
B: Well, for the reactive side of the solution, I think in most cases it's looking at the same image repository and source repository. So if Argo is just going to consume, and that's all it does, then if we can have, like, a synchronization about what Argo is looking at, at repo level and at image level, we can act proactively.
B
I
think
so.
But
the
thing
is
how
you
I
don't
know
how
the
project
in
in
argo
cd
we
already
we
can
know
from
the
beginning.
What
is
it
going
to
look
at
of
is
of
or
maybe
if
they
do,
some
change
or
change
the
rebel?
I
don't
know
the
branches
how
we
can
get
notice
about
that.
C: So, and this is just, I mean, Spinnaker and any other CD tool we interact with is going to act differently, correct, Steve? I mean, that's what I'm assuming. Well, at least around...
A
At
least
around
component
sets,
and
so
what
they.
What
we
end
up
with
is
these
deployable
units
that
these
tools
are
working
with,
so
the
deployable
unit
for
spinnaker
is
a
helm
chart
and
a
home
chart
can
contain
many
microservices
on
the
argo
cd.
The
deployable
unit
is
the
deployment
manifest,
and
I
mean
yeah
the
deployment
manifest
the
deployment
yaml
and
you
could
put
multiple
deployment
mmos
into
a
single
repository
because
they're
looking
at
it
as
a
they
look
at
a
single
repository
as
a
deploy
for
an
application.
A
No,
the
developers
generate
the
helm
chart.
I
believe
argo
can
work
off
of
helm,
charts
as
well.
E: This is a very quick thing that's worth taking into account: Spinnaker does support more than just Helm for this stuff. There's something which you were saying, Tracy, which I think has kind of triggered something in my head. I think there's a really interesting point here around...
E
There
is
what
the
deployment
tool
thinks
it's
going
to
do.
There
is
what
the
kubernetes
api
thinks
is
going
to
happen
and
that's
how
the
helm
dry
run
and
the
like.
If
you're
using
tanker,
that's
got
a
dry
run
mode
as
well,
where
it
queries
the
api,
generates
a
diff
and
that's
before
deployment.
You
can
run
that
step
to
say,
okay,
what
do
I
think
is
going
to
happen
and
that's
kind
of
a
proactive
bit?
I
think
what
do
we
think.
E
To
happen,
but
then
there's
also
as
sergio
was
saying
the
reactive
bit
about
what
actually
happened,
because
there
are
situations
where
the
proactive
and
reactive
won't
match
up,
sometimes
edge
cases
with
kubernetes
have
seen
happen
a
couple
of
times
and
what
we
thought
was
going
to
happen,
particularly
with
helm,
isn't
necessarily
what
happens?
I
think
there
is
a.
E
I
think
there
is
value
and
at
some
point
being
able
to
be
able
to
see
from
their
kind
of
view
of
what
did
we
think
was
going
to
happen
versus
what
is
your
actual
current
state?
I
think
there
might
be
slightly
there's
separate
kind
of
problems
in
some
ways,
but
having
both
at
some
point
would
be
really
useful
to
be
able
to
say
this
is
what
we
told
you
was
going
to
happen.
E
This
is
what
actually
happened,
and
we
can
start
using
that
for
actually
digging
into
why
more
than
anything
else
and
feeding
that
back
on
the
proactive
side,
I
think
the
diffs,
which
are
provided
by
all
of
the
tools
used
so
helm,
tanker,
customized,
they're,
all
pretty
good.
Most
part
they're
fully
mapped
for
kubernetes
api
functionality,
with
the
exception
of
helm,
which
does
it
a
little
bit
differently.
C
How
about
ansible
is
it
still
being
used?
Was
that
sort
of
fading
into
the.
A
Well,
sasha
and
I
ran
into
that
at
looking
at
his
clusters
the
other
day
and
what
we
found
is
in
their
running.
The
one
we
were
looking
at
was
on
aws
and
they
decided
to
stand
up
kubernetes
themselves
on
their
own
vms
ec2
instances,
so
they
actually
built
kubernetes
and
they're
using
ansible
to
build
up
the
linux
machines
that
are
going
to
run
kubernetes.
A
So,
instead
of
using
the
managed
kubernetes,
they
were
managing
it
themselves
and
they're
using
ansible
that
level
to
build
up
the
the
instance
and-
and
like
owen,
like
you're
saying
I
know,
spinnaker
does,
I
know,
do
things
like
being
able
to
deploy
vms
and
things
like
that
in
instances
I
just
was
kind
of
trying
not
to
complicate
the
the
conversation
totally
totally.
E
Fair
yeah,
the
answer
will
bits
an
interesting
one,
because
ansible
can
be
used
for
deploying
stuff
directly
into
kubernetes,
as
well
as
building
the
cluster
as
so-called
terraform.
But
I
don't
even
really
widely
used
for
that.
E
To
be
honest
with
you,
if
a
shop
is
already
heavily
invested
in
ansible
and
they're
beginning
to
explore
kubernetes,
I
have
seen
it
done,
but
not
not
much,
very,
very
rarely
because
it's
pretty
painful
to
be
honest
with
you
same
with
the
terraform
plugin,
is
pretty
painful
at
this
point
in
time
to
do
it
either
way
and
they
don't
really
manage
state
particularly
well,
which
is,
I
think,
an
area
where
we're
the
state
of
it
is
the
interesting
part,
and
but
neither
of
those
do
it
particularly
well
for
kubernetes
at
this
moment
in
time,.
B
Yeah
is
taking
more
more
more
power
in
the
in
the
multi-cloud
side
because,
usually,
if
you
are
working
like
things
in
in
inside
a
cluster
of
kironetics,
usually
you
are
going
to
have
more
native
tools
right,
but
in
at
least
at
red
hat
ansel
is,
is
positioning
on
in
the
multi-cluster
site,
right
the
the
hybrid
cloud
and
stuff
like
that,
because
it's
more
broader
you
can
do
more
stuff
inside
the
same
ansible
and
you
can
you
can
integrate
it
with
the
rest
of
the
tool.
B
A: Yeah. So, if everybody has a little bit of time, let's just run down the Spinnaker path on a proactive solution.
A: So the way that they're saying to do it: if you want to deploy 50 microservices, you're going to have a Helm chart that references the 50 microservices. Now, that's the developer coding together a big YAML file to do that.
A: Now, one of the things that we could do (we'd have to look at the hooks) is: because they are a CI/CD tool, they have plug-ins that we can insert in certain places. We could pull together something to be inserted into that job, in such a place that, prior to them doing the Helm chart or whatever they're going to deploy, we can actually gather everything together, consolidate it, and then give them the single file to work with at that point.
A: And this gathering would apply to a GitOps world as well.
A
So,
at
the
end
of
the
day,
when
we
do
the
gathering
of
everything
that
needs
to
needs
to
be
deployed,
we
spit
out
a
and
we
produce
a
home
chart.
A: In the GitOps world it gets a little trickier, because of the whole idea... Well, if you look at Argo CD: because they don't have a CI tool, or a CD tool, they're literally just doing the deployment, doing a sync of what the manifest is in the Git repo with what needs to be applied to the Kubernetes cluster. That's all they're doing: just the sync process, to see what the mismatch is.
A: So it gets picked up; the QA branch would get picked up and then deployed out to the QA cluster. So kind of what I'm thinking is that we can simplify that process of producing these deployment manifests by collecting everything together. Because if you look at... I was talking to somebody the other day; it was a gaming company.
A: They have over 100 microservices that span 15 products, with shared components, shared microservices, between the 15 different products, and that's not even including all the different environments that they're going to. So, you know, they're looking at dealing with managing probably 500 versions of the components across their world, and doing that in a single Helm chart is not practical, or in a single deployment manifest on the Argo side.
B
Okay,
well
real
quick.
I
I
think
we
are
talking
about
events
right
like
on
on
today,
even
as
architecture,
there
is
a
there
is
usually
the
couple
way
to
the
two
things.
B
Is
you
just
read
in
a
in
a
sync
way,
logs
about
the
platform
where
everything
happens,
so
I
don't
know
if,
if
it's
git
of
there
is
a
a
standard
lock
to
we
can
use,
but
we
can
practically
be
like
reading
logs
about
platforms,
and
today
there
is
a
lot
of
a
good
architecture,
oriented
of
events
that
make
you
sure
that
you
are
proactively
taking
events
and
and
acting
based
on
that,
even
maybe
the
platform
already
do
something
about
that
event.
Right.
B
So
that's
my
thought
about
like
we
can
do
a
really
standard
way,
and
maybe,
if
we
can
work
over,
I
don't
know
that
the
guitar
git
repo
some
images,
repo
logs
and
do
in
a
single
way.
We
can
do
a
really
nice
solution
and
can
manage,
and
don't
you
don't
take
much
about
who
is
going
to
consume
this
git
reboot.
A
Yeah,
I
think
the
events
that
you're
talking
about
are
going
to
be
more
on
the
reactive
side.
So
after
somebody
checks
something
into
they,
they
update
a
deployment
yaml
and
they
check
it
into
a
git
repo
that
git
repo
is
going
to
throw
a
web
hook.
Event
that
will
know
that
there's
going
to
be
a
update
coming
to
the
cluster.
A
So
but
we
don't
know
what
that
update
was
for,
and
what
tracy's
talking
about
is
a
proactive
view
of,
instead
of
us
waiting
for
a
developer,
to
make
some
messy
changes
to
the
deployment
manifest
that,
if
we're
able
to
take
the
information
artelias
and
generate
the
deployment
manifest
for
them,
that
is,
for
that
logical
view
of
the
application
and
apply
that
then
to
get
so
we'd
be
pre
check
into
the
get
repo
for
the
the
update.
That's
gonna
occur
to
the
the
cluster.
Does
that
make
sense
now.
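The proactive flow described here, rendering the manifest from the logical application rather than waiting for a developer to hand-edit it, can be sketched as follows. The component records and the manifest template are illustrative assumptions, and the final commit to the Git repo that Argo watches is only indicated in a comment.

```python
# Hypothetical per-component manifest template.
APP_TEMPLATE = """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {name}
spec:
  template:
    spec:
      containers:
      - name: {name}
        image: {image}
"""

def render_application(components):
    """One manifest document per component of the logical application."""
    return "\n---\n".join(APP_TEMPLATE.format(**c) for c in components)

manifest = render_application([
    {"name": "web", "image": "myrepo/web:1.5"},
    {"name": "api", "image": "myrepo/api:2.0"},
])
# A real implementation would now write this file into the Git repo and
# push it, so the CD tool's sync picks up the pre-checked-in update.
```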
E: A tricky one; it's been quite a long week, for a few reasons. So this is an issue which is very dear to my heart right now, because it's an issue I'm facing actively, essentially, and I'm still trying to figure out how much of that architecture I can share, because it might actually work as a kind of base point to talk around some of this.
E: It's not quite that, but I think that's actually a really good way to simplify it. So forget about tooling, forget about, you know, Argo or the rest of it, and model it as a pre-commit hook: what would that look like? We're effectively just dealing with files at that point, which are always the simplest thing to get down to. And the way I've seen it modeled, without the multiple-branches approach, is being almost like...
E
Multiple
data
files
in
a
single
repo
which
point
to
different
environments,
and
that
might
be
a
simple
way
just
to
drill
down
into
what
does
the
simplest
use
case
of
this
look
like
so
in
which
we
deployed
multiple
environments,
which
has
variables
in
the
repo
for
each
environment.
How
would
that
look
like
from
a
pre-commit
hook?
Where
does
autelius
fit
into
that?
What
can
alternates
give
add
on
top
of
that,
and
how
does?
How
can
we
maybe
get
rid
of
some
of
the
manual
effort
in
managing
that
to
be
more
proactive?
C: A good idea. And this is kind of complicated; it is, as Owen puts it, tricky. It is our first kind of big project for the open-source community, so I think we, you know, chew on it for as long as we need to before...
C
We
start
coding
anything
obviously,
but
I
think
steve
if
you
could
take
a
stab
at
clarifying
what
we
just
discussed
and
incorporate
that
into
the
discussion
from
friday
that
and
get
those
docs
out
to
everybody
that
two
weeks
from
today,
maybe
we'll
all
have
had
a
little
more
time
to
let
it
bake.
A
The
so
if
we,
if
we
take
this
one,
last
comment
before
on
this,
so
if
we
don't
look
at
the
get
ops
world
and
we
look
at
more
of
a
traditional
jenkins
pipeline
when
you
have
a
step
that
does
the
docker
build
we're
able
to
do
right
after
that
step
in
the
pipeline
after
the
docker
builds,
and
they
push
the
the
image
to
the
registry,
we're
able
to
insert
a
little
step
in
there
for
deploy
for
ortelius
to
go
ahead
and
grab
what
was
done.
A
So
we
know
that
this
container
this
image
was
was
created.
It
was
based
off
of
this
repository
and
we
grabbed
that
information.
Now
we
have
a
new
component
version
in
ortelius
and
if
we
have
a
baseline
to
start
with,
we
can
then
rev
an
application
version.
The
logical
application
version
automatically
now
when
the
deployment
happens,
whether
the
the
deployment's
going
to
happen
through
jenkins,
calling
you
know,
cube
cuddle
to
do
an
apply
or,
if
they're,
going
to
call
helm
as
part
of
a
jenkins
pipeline
or
if
they
call
ortulius
to
do
the
deployment.
A
We
can
record
what
just
happened
at
the
deployment
time.
So
there's
kind
of
like
two
worlds:
two
events
that
that
happen,
one
is
on
the
build
side
of
when
something
when
a
component
is
created,
and
then
the
next
is
when
this
component
is
deployed
and
what
what
was
it
associated
to
when
it
was
deployed
and
where
it
was
deployed
to
so
those
are
the
two
major
events
that
kind
of
happen
in
in
the
pipeline
process.
Like
I
said
with
the
with
the
jenkins
world,
it's
very
simple
spinnaker.
A
It
could
come
about
the
same
way
where
they
have
multiple.
E
I
think
from
democracy
spinnaker
can
integrate
with
jenkins
to
trigger
a
build,
but
it
doesn't
do
the
bill
directly
itself.
I
might
be
wrong
on
that,
but
from
what
I
remember
evaluating
a
couple
of
years
ago,
that
was
essentially
the
workflow
integration
into
other
ci
solutions.
A
Okay,
okay
yeah,
I
couldn't
remember
either
so,
but
that's
the
the
kind
of
like
the
two
worlds
that
we
have
is
one
on
the
build
side
capturing
what
was
built
and
pushed
through
the
the
registry,
and
then
the
second
part
is
the
deployment
part
now
in
between.
That
is
the
organization
part
of
how
do
I
associate
versions
of
components
to
logical
versions
of
applications
or
if
we
want
to
call
them
products?
E
So,
just
as
kind
of
last
comment
from
me
and
an
idea
of
something
I
might
try
and
well
I'll
try
and
get
done
over
the
next
couple
of
weeks,
it
might
be
useful,
I
feel
like
having
the
equivalent
of
a
persona
which
we've
done
for
users
would
be
useful
for
this
kind
of
persona
of
what
is
a
environment
or
a
release
process
which
we're
talking
about
here.
E
Okay
and,
as
I
kind
of
mentioned,
the
it's
very
similar,
it's
a
similar
problem
to
what
I'm
facing
at
the
moment,
I
think
I
can
probably
create
a
sanitized
version
of
that
which
I'll
bounce
off
a
couple
of
folk
I'll,
get
you
to
have
a
quick
look
at
steve
as
well
just
to
kind
of
see
if
it's
the
same
kind
of
problem
we're
talking
about
and
then
maybe
in
a
couple
weeks
time.
If
that's
all
good,
we
can
actually
use
that
as
a
reference
to
look
at
and
talk
about
in
a
bit
more
detail
about.
C: That would be very helpful. Having that kind of core user problem set really does help define what we want to create a persona around.
E
And
what
we're
kind
of
talking
about
touches
on
all
of
them
from
my
view,
because
you've
got
the
well
actually
to
be
honest,
what
we're
talking
about
initially
touches
on
just
the
environment
and
a
pipeline
interface
environment.
There's
going
to
be
additional
complexity
of
how
you
manage
that
across
releasing
to
multiple
environments
in
a
way
and
we've
kind
of
talked
about
all
of
those
different
parts
during
it.
So
yeah
I'll
give
an
update
on
next
week.
E: I think I'm going to be able to get something pulled together on that early next week and, yeah, try and share it.
A
All
right
anything
else
on
this,
it's
a
tricky
subject,
isn't
the
reason.
The
reactive
is
much
easier,
like
you're
saying
sergio
it's,
you
know
looking
at
logs
and
apis
and
things
like
that
and
reacting,
but
I
I
think
it's
for
our
benefit.
The
proactive
is
definitely
going
to
be
where
we
can
help
simplify
the
process.
B: Well, I want to mention, on the persona side, like real people having some issues: today is the what's-new for OpenShift 4.7, so from today Argo CD is GA on our Kubernetes platform. So there are going to be a lot of people using Argo CD officially in Red Hat, and probably in the next weeks we are going to have a lot of people asking for help with using Argo.
B
So
it's
I
think,
I'm
going
to
take
a
lot
of
feedback
from
that
to
to
where
to
have
like
real
issues
where
we
can
maybe
improve
and
do
a
complementary
solution.
I
don't
know,
but
that's
that's
the
thing
before
this.
It
was
argo
like
just
I
don't
know
box
and
some
events,
but
not
much
people
using
it
in
the
real
world,
but
today
is
going
to
to
be
amazing
massive
in
the
objective
work.
So
I'm
going
to
put
attention
on
that.
A
Yeah
is
there
a
and
what's
going
to
be
the
pipeline
process
that
you're
going
to
work
use
with
argo.
B
It's
super
new.
I
know
not
that
update
on
on
how
he's
going
to
use
it
at
the
beginning.
Like
a
year
ago,
maybe
two
years
ago,
argo
was
like
a
solution
for
more
multi-clustered
deployment.
Right,
I
don't
know
if
we
are
putting
inside
of
openshift
like
in
that
way.
I
need
to
check
it,
but
but
that
was
like
the
original
idea
about
argo
cd,
but
two
we
are
releasing
today
the
the
new
version
of
tecton2.
B
So
I
need
to
check
how
tekton
and
rlcd
are
going
to
work
in
this
in
open
chief
curinetix
enterprise
queue
analytics
that
we
are
using.
So
that's
going
to
be
interesting,
like
pipelines
in
the
new,
because
usually
we
we
been
using
jenkins
right
now,
it's
going
to
start
with
tecton
and
tecton
fitting
with
with
argo
cd
and
a
lot
of
other
things
service
mention
almost
I
don't
know
there
is
a
lot
of
middleware
working
inside
the
the
the
urinary
solution.
A
All
right
cool
yeah,
because
I
know
simon
from
techton
mentioned
that
ibm,
has
something
similar
to
argo,
which
is
called
razer.
A
I
think
it's
spelled
r-a-a-z-e-r
and
it
was
a
templating
engine
for
pushing
out
to
hundreds
of
clusters,
so
that
was
one
of
the
other
and
it's
open
source
from
ibm.
F: Yeah, I have one thing. When I was on the cloud platform team at my previous company, right, we did the same thing, the proactive approach, like what Tracy had mentioned before.
F
So
what
we
did
was
independent
of
the
cloud
providers
or
the
kubernetes
or
the
vmworld
is
that
means
we
took
the
services
which
you
were,
which
would
be
the
potential
like
services
like
dependencies
or
upstreams
right
for
docker
image
build
itself.
F
So
that
means
we
take
the
changes
at
the
build
level
and
then
we
can
take
that
and
propagate
at
the
deployment
steps
and
see
the
proactive
like
with
applications
which
could
be
the
potential
hits
there.
So
that's
what
was
the
idea
with
at
my
like
previous
company,
but
at
walmart
we
have
their
own
tool
called
concord,
a
deployment
tool.
So
I
don't
know
say
how
these
guys
are
doing
here,
so
so,
but
I
can
check
that
out
there
as
well.
A
Yeah,
that's
exactly
what
we're
talking
about
on
the
the
proactive
solution,
so
whatever
whatever
you
did
at
your
old
company,
please
chime
in
on
this
discussion.
C
Well,
gentlemen,
we
have,
and
ladies
we've
taken
an
hour
which
is
30
minutes
longer.
I
think
it's
a
really
good
discussion
and
I'm
excited
that
the
that
we've
kind
of
grown
to
a
point
where
we're
having
these
kind
of
technical
discussions
and
we'll
be
starting
to
get
some
coding
and
some
new
releases
coming
out
on
a
cadence
that
is
required
really
by
the
cdf
and
the
linux
foundation
to
have
updates
and
new
features
coming
out
so
that
they
can
talk
about
them.
A
Well,
I
don't
think
that
we're
that
far
off,
I
think
it's
just
going
to
be
getting
like
owen-
was
saying:
get
some
of
the
s
really
simplify
the
the
first
use
case,
or
you
know
one
or
two
use
cases
and
or
scenario
I
wouldn't
say,
use
cases,
but
scenarios,
and
I
think
that
will
give
us
a
lot
of
a
good
starting
point
to
work
from.
C
Okay
and
on
that,
we
will
more
to
come,
we'll
get
steve
to
get
that
document
out
to
everybody
as
soon
as
possible,
so
that
you
can
think
about
it
in
the
next
two
weeks.