Description
GitOps is a set of practices to automate application development and deployment, as noted by the CNCF GitOps WG Charter and its OpenGitOps initiative. We want to explore the intersection of GitOps automation with Service Mesh, how the two trends can help and sustain each other and how they serve the modern quest for platform engineering. Particularly, we want to help understand why both trends are coming of age at this particular time, and what's behind the need for automation and control of cloud native applications and their networking needs, securely and at scale.
A
Good evening, and good morning to everyone connected over LinkedIn and YouTube. It's my first time leading this. This is episode number 48, so I was watching even before I joined Solo; it's a place where we can discuss technical things, topics, and whatever is tangential to open source, service mesh, and the cloud-native tools and tech that we all love and cherish.
A
And of course, this is the run-up to KubeCon, so we're all excited; we're all working on our presentations and panels. We're all really eager to see KubeCon in Amsterdam, of all places. Let me introduce myself quickly: I'm Alessandro, I live in Amsterdam, and I was really waiting for KubeCon to come back to Amsterdam. If you come to KubeCon, please stop by our booth.
A
I think it's booth number G9; come talk to us about service mesh and about GitOps, and come meet the team. Akuity also has a booth, I suppose, and Nicholas will tell you which number it is. We also have an Application Networking Day the day before KubeCon, so it's a great time to be alive and doing cloud native, and a great time to visit.
A
Today is a sunny day in Amsterdam, and I hope it's going to stay that way for a little while. So maybe you shouldn't bring your umbrella; just bring your parasol and your good attitude to Amsterdam. Without further ado: again, I'm Alessandro, platform advocate at Solo.io. I joined a few months ago and I really like this community of practice, this community of engineers at Solo, including Alex, whose work on GitOps I've been admiring. And of course there's also Nicholas; I've been looking at his livestreams and blogs and all the dissemination of the GitOps culture that he is promoting.

So we chose this topic today, the intersection between GitOps and service mesh, because we haven't covered it yet. We usually do quite technical, hands-on work, but this time let's see what comes out of it. We have this topic, we have these two great tools, service mesh and GitOps, and we'll see what we can do by combining them. So, I've talked too much; if you guys want to introduce yourselves, Alex first, in alphabetical order.
B
Yeah, hi everyone, my name is Alex Lee. I'm a field engineer here at Solo; I've been here for almost two years now, and prior to that I was an OpenShift container solutions architect at Red Hat. Nice to meet everyone. I got really into Argo CD when Red Hat adopted it into the platform, and it really hasn't stopped from there; everything I do is with Argo CD, and I love the technology it opens up.
D
I used to be, you know, your average platform engineer that was just using Argo, and then I got kind of convinced to go into a full-time role just talking about and playing with Argo and making content around it, because I love the technical side of it a lot, but I also like teaching people about it, getting them introduced to the technology, and learning how to use it.
D
Because I remember the first time I had to figure out what GitOps was, how it benefits me, and why I would go through the effort, and now I can't stop talking about it. So I'm really glad to be here on this livestream to get a chance to talk about GitOps and maybe convince some more people to come over to the GitOps way of life.
A
That gives me an idea for the very first question. Can you talk about the first time you had to fight with, or well, use GitOps, whether because somebody told you to or just out of curiosity?
D
Yeah, I was introduced to GitOps and Argo CD when I joined a company that already had it implemented. In the role before that, I was implementing Kubernetes in a really corporate environment, think insurance. So there was a lot of regulation, and you had to care about every little detail. Then I moved into a role where I got to use Argo CD and GitOps to make my changes to Kubernetes, and it was so fascinating.
D
The first time I went and created a pull request, I said I want to change this about the state of the cluster, and I had to justify why and describe exactly what was changing based on the desired state in Git. I think that's when it clicked for me: oh, this makes it really clear what changed and why it changed, and there's a process around changing that desired state, instead of it being, you know, somebody with kubectl apply access to the cluster.
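A change like that, in a hypothetical desired-state repo, can be nothing more than editing one field of a manifest; all names, paths, and values below are invented for illustration:

```yaml
# deploy/payments/deployment.yaml in the desired-state repo (hypothetical path).
# The pull request changes only `replicas`; the PR description explains why,
# and the GitOps controller applies it once the change is merged.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
  namespace: demo
spec:
  replicas: 4   # was 2 before this pull request
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: example/payments:1.4.2  # placeholder image
```

The review of that one-line diff is exactly the "process around changing the desired state" described above.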
A
I connect to that. It's like this eureka moment. I remember a few of those moments: when I finally figured out what an Ingress was, when I figured out persistent volumes in Kubernetes, when I discovered Argo CD and GitOps, and when I discovered service mesh, when I really realized how powerful this thing is and that I really needed to learn it. So those were a few of those moments.
D
For sure. I think what's fun for me to talk about with GitOps is: what did you do before GitOps? What was your process for changing the state of the cluster?
B
Yeah, I can definitely answer that one; that was going right through my mind. My use case is a little bit different. I've always been in field engineering and sales engineering roles, helping customers implement Kubernetes and this technology, and in that process what I found myself doing was basically creating a whole ton of glue scripts in bash. Every day you would do one of two things. You would either keep a long-standing cluster up and running 24/7, which would cost you money just for dev purposes, showcasing features, and doing demos, or, secondly, you would write a whole bunch of glue scripts, treat that cluster like cattle, not pets, destroy it every day, spin it back up, and hope that the glue script worked and that somebody upstream didn't change something, that some update didn't break something, or that some issue with your local machine running that script didn't make it miss some minor piece.

What I found was that I was actually spending a lot more time building up my demo cluster, my test cluster, than actually validating the features I was trying to showcase to my customers. So that's where I discovered Argo CD as a tool, and I was like, wow: if I have every single one of my configurations stored in Git and I just pull that down, it makes my day so much easier, to the point where it was like: you want a demo? Okay, I can spin up the cluster in less than five minutes, it's going to be bootstrapped with GitOps and Argo CD, and it will be the same exact thing every single time, with high reliability, instead of sitting there watching my little glue script and hoping that everything comes out.
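That kind of bootstrap can be sketched as a single Argo CD Application pointing at a Git repo that holds the whole cluster's configuration; the repo URL and paths here are invented placeholders:

```yaml
# Hypothetical bootstrap Application: Argo CD pulls the cluster's
# entire desired state from Git and keeps it in sync.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-cluster-bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/demo-cluster-config.git  # placeholder
    targetRevision: main
    path: clusters/demo
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert out-of-band changes
```

With something like this applied, recreating the demo cluster is reduced to installing Argo CD and this one manifest.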
A
I was really lucky to be at Microsoft in the golden age when they were really starting to do containers, and we had these immense Azure DevOps pipelines. That was even before GitHub Actions, mind you. So it was a very, very long sequence of jobs and steps, applying everything very carefully, from ingress to cert-manager. I had a beautiful one; for me it was beautiful, I mean, because I wrote it, but it was a very prescriptive, kind of imperative way to create a cluster and bring it to a state where you could use it. I think some of those scripts are still around somewhere.
A
Maybe I should convert them into Applications, of course. But yeah, this thing is already a few years old. GitOps is not just something very new: there is a working group, there are conferences around it. It's not just a fad; it's not something that goes away. It's actually probably reaching the peak of adoption, or becoming more mainstream.
A
So can you tell me a bit about Akuity? I know it because I've been following it for a while. I know that you have a very cool SaaS control plane, and you just announced something really cool, which is a hosted control plane for Argo CD for on-premise, yeah?
D
Yeah, Akuity's mission is really to continue supporting the open source project while creating an offering that allows you to take advantage of everything you already know from Argo CD, but to focus on using Argo CD instead of managing it. So right now there's the Akuity Platform, which is the SaaS offering for Argo CD, except it's kind of a re-architecture. It's more of an agent-based model. Think of the hub-and-spoke model: you've got a management cluster, Argo CD is running in it, and it's connecting directly out to each of the clusters, so you have to have each of the connected clusters exposed to the management one. The Akuity Platform changes that architecture: you deploy an agent into each of the connected clusters, and that agent connects back to the platform.
D
That brings a number of reliability improvements, and you can scale a single Argo CD instance much further than you can with the open source model. The agent model also allows us to introduce some cool features like state replication. That's something we just released on the platform: if the Akuity Platform were to disappear, or go down, or whatever, the agents in each cluster can still manage the state of that cluster, even though they were deployed from a central Argo CD instance that has now gone away.
D
So it's improvements like that that we're trying to make with the Akuity Platform, while still fully supporting the open source project and contributing back to upstream. In the last release, I think six of our team members had contributed in one way or another, so it feels really good to still be closely connected to the open source project.
A
I find the similarities between Solo and Akuity really striking, because we're also open core; I mean, we actually contribute to the Istio project, those are our roots. I've been doing open source since I was a kid and I still do it, so there must be something about doing open source, and that's why we joined these companies.
A
That's why we're here, doing what we do: because we believe in the value of open development, an open model for developing software in the open. So, exactly. Well, Alex, let's bring the service mesh into the mix too. We could keep talking about Argo because we love it, but of course, okay.
A
The whole point of this meeting was to talk about how service mesh can also interact with GitOps ideas and mentality. Because I think you adopt GitOps for a reason, which is, you know, automation and a single source of truth, and the service mesh brings you into the same space, where you can automate, for example, traffic shaping between services.
B
Yeah, I mean, GitOps is a mechanism by which you deploy applications onto your Kubernetes cluster, and I kind of see service mesh the same way: it's just an application that runs on top of Kubernetes, and therefore I can pick it up and put it inside my Kubernetes cluster through GitOps procedures.
B
So at least for the platform, Istio or Gloo Platform, deploying it using GitOps is our key model here at Solo. No ClickOps, none of that. We want everything declarative, because otherwise what's the point of running on Kubernetes? If we don't have everything declarative, anybody can change anything, and that doesn't work in a multi-tenant world. So when I got here to Solo, that was the first thing that was part of my charter.
B
It was: we need to make sure that GitOps is the model to deploy a service mesh and to deploy the platform, Gloo Platform, and the controller that manages the service mesh.
B
Aside from that, I think GitOps principles really follow along with the way you would configure an application running in your service mesh through GitOps: breaking your application into your app folder, your app directory, and then separating the config out from it again allows you to really start to treat your applications, and even the entire cluster, declaratively.
B
When we start moving into multi-cluster, we have the same management-plane model: you have a centralized place where the Gloo Mesh management controller is running, and it manages workloads that run inside Istio in what we call workload clusters, a hub-and-spoke model. And I can break apart my applications and my config, so that my config, such as the virtual services and the route tables…
B
…all of my security policies, actually lives and is managed in a centralized management cluster, and those configs are propagated to my applications as they go live or are deployed. What I'm able to do now is truly treat my workload clusters as cattle, like I mentioned. All that is deployed on a workload cluster is a Gloo Mesh agent, Istio, and the application itself, and if that entire cluster goes away and is brought back up, it automatically bootstraps itself into the mesh.
B
It automatically gets those configs synced, from the marriage of GitOps as the source of truth with the Solo controller watching that source of truth and making sure the state is what's defined in Git. So really, that's where the intersection of GitOps and service mesh is for me: it's just another component that you can bootstrap, one that provides really advanced capabilities for traffic shaping, routing, and policy handling, with GitOps as a layer over that to keep it in sync.
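As a concrete, hypothetical example of the kind of config that would live in Git and be propagated to workload clusters, here is an Istio VirtualService doing weighted traffic shaping between two versions of a service; the namespace, hostnames, and subset names are invented:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: demo
spec:
  hosts:
    - reviews.demo.svc.cluster.local
  http:
    - route:
        # Shift 10% of traffic to v2 while v1 still serves the rest;
        # changing these weights is just another pull request.
        - destination:
            host: reviews.demo.svc.cluster.local
            subset: v1
          weight: 90
        - destination:
            host: reviews.demo.svc.cluster.local
            subset: v2
          weight: 10
```

A canary rollout then becomes a sequence of reviewed commits adjusting the weights, rather than imperative commands against the cluster.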
A
Definitely. Actually, my talk at KubeCon is about ephemeral clusters, so that's an interesting place where we're all going.
A
Of course, not everybody's going there at the same speed; I've been working with customers that still keep the same cluster for months and years. But it's clear, and I see this as an evolution of DevOps: first VMs were treated as cattle, and now entire clusters are just things that come and go. In my process I'm actually using the Gru and Minions nomenclature.
A
It's a very similar approach, because, as you say, you're now developing this Argo CD agent, and we have this Gloo agent running on the workload clusters. It's interesting that that model works better than just, you know, the central management cluster that contains all the kubeconfigs of everything and then imposes things on, or manages, clusters from there. Can you comment on that? I don't know if you were involved, yeah.
D
Yeah. Typically we found that when you have that central Argo CD instance, the more clusters you add, the more you have to scale the components in that management cluster, and scaling those individual components, the application controller, sharding it out, scaling the repo server and the API server, frankly, it sucks, to put it plainly. It's not the best experience; it's very technical tweaking of environment variables on the workloads, and it's just not pleasant.
D
Honestly, with the agent-based model we found that the processing relevant to a connected cluster stays in that cluster. So as you add more and more clusters, you're just using a little bit of the resources from each one to manage it, and you don't have to worry about scaling out the centralized management cluster, which really simplifies the process. And just to comment on something Alex said earlier: I find that basically everyone I talk to, if the tool doesn't support…
D
…GitOps, they're not interested in using it. It's kind of a prerequisite now: if you want to sell a Kubernetes service or build a tool on top of it, you want to support that declarative management, and you need to make sure all your configurations are represented as resources in the cluster, which should then automatically integrate with whatever GitOps tooling you want to use.
A
CRDs are king. If you've got to do something, you've got to have a CRD for it nowadays; if you don't have a CRD, what are you doing? And of course, we're also moving toward more standardization: we're embracing the Gateway API to define these things in clusters, like destinations or ingresses. So that's very interesting. There's also one advantage of the agent mode that I find quite important for enterprises.
A
You don't need to punch any hole in a firewall so that a central cluster can manage another cluster; there is only upstream communication. It's the agent opening a connection to the central control plane, uploading data and receiving commands. It seems like a minor feature, but I think it's pretty crucial for many enterprise customers.
B
It's probably an age-old question to you, Nick: when customers ask, should I run a centralized control plane or an Argo CD in each cluster? I'd actually say that I personally, to reduce blast radius and, you know, not poke security holes, generally deploy Argo CD per cluster, just to keep it simple for me. I know there's a whole ton of blogs about which one and which way, so it's really cool to hear that Akuity is, you know, doing some…
D
Yeah, one really interesting nuance of switching to the agent-based model that we found is that it can actually save a lot on network traffic, depending on your architecture, because you're no longer streaming every single Kubernetes event back to the management cluster. If it's in a different region, you could be spending thousands of dollars a month on bandwidth just to get all of that information streamed back to your management cluster.
D
So there's a multitude of reasons to go that way, and you get that Argo-CD-per-cluster feeling, except with a single Argo CD UI, which, if you do your AppProjects correctly, means you can get the same level of separation as individual instances, but in, you know, one UI.
A
That's definitely the best of both worlds: control, but also uniformity of deployment. And think of the cognitive load: I know that one Argo CD per cluster is a pretty valid way to do it, but then I really have to check every cluster to see if Argo CD is running, and upgrades are very complicated. That's why we solved it the same way for our product, Gloo: we have this agent that just does the job of programming the local configuration for policies.
A
So it's a pretty interesting model. Great, so how do you see multi-cluster going? What we also love at Solo.io, I guess, is observability. We really think that service mesh is a big propellant, a big help, when you want to embrace really good observability of your clusters, and I see Argo CD also moving in that direction. So what do you do for observability? Do you do OpenTelemetry, or something else?
D
What's done on top of that is that there are built-in dashboards and audit logs for every change that happens within Argo CD. So if an application is changed in some way, that gets reported back to the audit log and is made available at the organization level, so you can see what's happening across all of your Argo CD instances, and you can filter it down into a nice dashboard. And we've built extensions, so that not only can you see that in the Akuity dashboard for your Argo CD instance, you can see it in the Argo CD UI on each application: you can go and view the audit log of changes to that application right in the Argo CD UI. So it's providing a level of visibility beyond what you can get by default in open source. Because the interesting part with Argo CD is that it supports both GitOps and GitOps anti-patterns.
D
So if you want to make your changes outside of Git, you can. It depends on your organization's policies, and the audit log kind of helps if that's how your organization is running: if you can't get all of your audit trail from changes in Git, you can then see it in the audit log for your Argo CD instance, and you get that same level of reporting.
A
Interesting, yeah. Of course GitOps is a single source of truth provided that everybody sticks to the rules, and you have good enforcement of the rules. Otherwise it's like what I do with my kids: I mean, you can set some rules, but if you don't enforce them, they just move right past them.
A
They just don't do it. So actually, that's why, I think, it's almost like a reflex: I put a sync policy of automated with self-heal and all this stuff into every Application definition, because otherwise it's not even GitOps. If it doesn't self-heal, you're not really doing GitOps. So it's kind of automatic for me, almost like a reflex.
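That reflex boils down to the syncPolicy stanza of an Argo CD Application; a minimal sketch (the surrounding Application fields are omitted):

```yaml
# Inside an Argo CD Application spec: without selfHeal and prune,
# drift introduced outside Git would go uncorrected.
syncPolicy:
  automated:
    prune: true     # delete resources that were removed from Git
    selfHeal: true  # revert manual changes made outside Git
  syncOptions:
    - CreateNamespace=true
```

With selfHeal enabled, a stray kubectl edit is reverted on the next reconciliation, which is what makes Git the enforced, not just nominal, source of truth.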
D
Yeah, I think that's why the fourth principle set out by the OpenGitOps project handles that, where it says the only mechanism through which the system is intentionally operated on is through these principles. It's saying: don't let your engineers have write access to your clusters anymore, only Git; only the Argo CD service account should really be the one with write access.
D
Unless, you know, you're the SRE on call; but even then you want everything to just go through Git, and you have to make it the easiest option, because if it's easier for somebody to run kubectl from their developer workstation against production, they're more likely to do that than if they have to go through a whole process. So you want to make sure that your GitOps processes are really streamlined and reflect the needs of the users, and aren't just a big bureaucracy.
A
This reminds me: there was a time when I was doing a lot of DevOps, of course, and there was this idea of tainting every virtual machine that you SSH into. You SSH into a machine for debugging, great, but now that machine is marked for deletion, because you can't really tell what somebody did; you may delete your own logs or bash history. So that machine would be wiped out after you've done your troubleshooting, of course.
A
So, Alex, let's close with you, because I feel like I didn't give you much space, I'm sorry. What are you working on now? What's your focus?
B
Yeah, with respect to GitOps, I've been working on methodologies for our organization and even some of our customers. We've been building what we call the app-of-apps catalog; I use the app-of-apps pattern inside Argo CD pretty heavily.
B
I think it's a pretty useful pattern, and my goal here was to enable really anybody, even if they had no idea what a service mesh was, or no idea what Istio or even Argo CD was, to be able to pull this repo down and deploy full-blown examples right out of the gate, using Argo CD and the app-of-apps pattern.
B
You get your full cluster bootstrapped: for example, using k3d, Argo CD deployed on it, all of the Gloo Mesh platform components deployed and registered, and then your applications and configs exposed, all through one single-line command. It's been working very well. I've seen this model really help avoid what I was mentioning at the beginning of this call, which is spending so much time building your own bespoke environment rather than just consuming it and testing and validating features on top.
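The app-of-apps pattern he describes boils down to a parent Application whose source path contains further Application manifests; a minimal sketch, with the repo URL, paths, and child file names invented:

```yaml
# Parent "app of apps": Argo CD syncs this path and finds more Application
# resources there (e.g. istio.yaml, gloo-mesh.yaml, demo-apps.yaml),
# so one manifest bootstraps the whole stack.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-of-apps-catalog.git  # placeholder
    targetRevision: main
    path: bootstrap
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

The "one single-line command" is then essentially `kubectl apply -f` on this parent manifest after Argo CD is installed.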
B
A couple of the talks I was trying to submit to some places were basically: DRY your service mesh. Don't do things multiple times if you don't need to, unless you're specifically trying to, say, A/B test a feature or a parameter inside your core platform components.
B
The idea of what I've been working on is: here, I'm going to provide that to you out of the box, so that you can start focusing on the value-add, which is putting configurations and policies in place, establishing policies, or observing your actual applications running on top of the platform, rather than spending all of those cycles building the platform itself.
A
Yeah, it's the same concept of batteries included but swappable, because you can always change things later on. It's the same idea as Autopilot in GKE; in AKS they're called guardrails. It's interesting, and then you wonder why GKE or AKS don't do this with GitOps, with Argo CD: just give me a repo that I can point my cluster to, and I know it's going to be brought to a point where I can use it. But the big clouds, of course, are a little behind in innovation; you can't blame them, they have to run massive organizations and massive departments, while small companies like us can innovate much faster.
A
Yeah, they do massive innovation; we, instead, are leaner and faster, and, I don't know if you follow, we're also innovating a lot in the service mesh area with Ambient Mesh, of course, and with eBPF. There's really a lot going on, and sometimes it's almost difficult to keep up, to stay up to date, but yeah, we do our best.
A
I would like to thank you so much. It's a short one, because it was also very last-minute, and I thank you folks for being here, listening to me, and supporting me at the last minute.
A
The recording will be out, and we will put some comments with the interesting links: the blog posts we mentioned, also Alex's, on the service mesh with all the Argo CD tooling attached. So we will probably leave some links in the comments; feel free to add some more. I thank everybody for being here, and I think Chris will stop the livestream as soon as he reads this message. Thank you again, have a nice Tuesday evening or afternoon, wherever you are. I mean, where are you guys from?
A
All right, okay. It's sold out, so it's a bit late for that, but I'm definitely going to be at ArgoCon, and of course there's also IstioCon; I'll try to split myself in two. Thank you again.