Description
We are joined today by Gerald Nunn to learn how to use RHACM (Red Hat Advanced Cluster Management, whose upstream is Open Cluster Management) and OpenShift GitOps together for Day 2 cluster configuration and management. This session shows how these two products can interoperate, each bringing its own unique strengths to the table, making for a powerful combination. With Argo CD and ACM, it's a "yes, and" answer.
A
All right, good afternoon — sorry that we are starting just a hair late. We were getting ourselves organized here. Not a lot in the way of announcements for me today, so I'm just going to say that Christian and I got the notification email yesterday — our talk for GitOpsCon got in.
B
Yeah, yeah — as Hillary said, we are on top of our talk getting accepted, so really, really cool, we'll be there at KubeCon, because I also have a KubeCon talk that got accepted. And the schedule is out for ArgoCon, so I will put that in the chat. We have the ArgoCon schedule — go check it out!
B
I know — especially if we're jet lagged, right? We're going to be in a different time zone and we're going to be all full of coffee. But maybe that's good — maybe we'll just do a caffeinated, you know, word vomit.
B
Word vomit, yes, exactly. So that's cool — check that out. A lot of great talks; it was difficult. I was part of the program committee there, and it was difficult to choose the talks, so keep them coming, please — we love those talks. And then also today, hot off the press — literally like an hour ago — the schedule for GitOpsCon was released.
B
If you missed the announcement, there's been a little bit of reshuffling in the Linux Foundation: GitOpsCon, which was usually co-located with KubeCon, is actually now co-located with Open Source Summit, which is really cool, because we kind of partnered with the cdCon folks — GitOps and CD are so closely related, it only made sense. So shout out to the cdCon programming committee for working with us to put on a great kind of CD conference.
B
GitOpsCon is going to be a two-day, multi-track event with talks and lightning talks — you can bounce back and forth. It's going to be really cool, and it's going to be in Vancouver, so I'm excited to get back to Canada. And I think that leads me into today's guest, because today's guest is Gerald Nunn — or "Noon"? I guess you'll correct me here.
B
He's a Principal Technical Marketing Manager here at Red Hat, focused on OpenShift GitOps, which is Argo CD based, so he's definitely one of the folks I always reach out to when I have a question or when I don't know what I'm doing with GitOps or Argo CD. So, Gerald Nunn is here to talk about ACM, to talk about Argo, to talk about having a control plane of many Argo instances — kind of a hybrid mode — and all that stuff. But I'll...
C
No worries — I get that all the time. It's actually "Nunn," as in the Catholic nun but with an extra N at the end. No problem, though; different people pronounce it different ways. It's never an issue — I figure as long as they're talking to me, that's a good thing, so it doesn't much matter what they call me at the end of the day. And I was going to say: isn't it the other way around, Christian? I'm always bugging you for help.
C
Great to be here today, and excited to talk about managing OpenShift GitOps at scale and some of the things that ACM brings to the table. Just to be up front: as Christian mentioned, I'm the GitOps technical marketing manager, which means I tend to be more on the OpenShift GitOps side of the house, not so much the ACM side of the house — but I really like ACM as a product.
C
From my sales background, I just felt like it really fits well with managing GitOps at scale, and that it'd be a good opportunity to talk about it in more detail. Hopefully today can be a little conversational with you and Hillary, because I've got a few things that might be somewhat controversial, or some interesting points to discuss, and I'd love to get your guys's — or your folks', sorry, Hillary — feedback.
C
That's a ferry ride, yeah — or there's a float plane you can take. If you've never done the float plane from Vancouver to Victoria, I highly recommend it. It flies at a relatively low altitude; it's a single-propeller plane that you get on, very tiny. It's an interesting experience.
B
So before we actually get started, Gerald, I do want to mention that we're going to be showing ACM — we're going to be showing ACM and OpenShift GitOps. But if you are an open source fan, like I guess all of us here, you know, OpenShift GitOps...
B
The upstream is Argo CD, as you all know — but for ACM, I don't think a lot of people know there is an upstream to it, called Open Cluster Management. So a lot of what Gerald is going to show, and a lot of what we talk about, can be translated.
B
So if you do want to mess around with some of these things that we're going to be talking about, I put that in the chat, and you can check that out as well.
B
If you're wondering what ACM can get you and want to test it out, there's the GitHub page, and I believe it's in the OperatorHub for OKD — not a thousand percent sure, so someone correct me, or if someone knows, please let us know. So, cool.
C
So I've got one slide to show, and we'll start with maybe the controversy — or the discussion — on that one, because I'd be interested in your feedback on terminology. Let me share my screen here; I'm going to share the entire screen. And maybe, if you don't mind — because, you know, I'm just a poor TMM, I only have one monitor — if there's things going on in the background in chat, you can always just ping me to talk about something.
C
ACM is a product that brings a lot of different features for managing Kubernetes at scale, but for today we're just really focusing on: how do we manage OpenShift GitOps and Argo CD across a massive number of clusters? And the first thing I want to start off with — this is my only slide I'm showing today — is some of the different topologies that come into play when you're looking at the physical management of these Argo CD and OpenShift GitOps instances.
C
Now, the controversy part, for Christian and Hillary and folks that are following along, is the naming convention that I'm using here. I don't think there's been a naming convention established yet by the GitOps working group — Christian, keep me honest on that one. So this is the naming convention I kind of like and am going with right now, but I'm happy to change it. Essentially, you've got centralized, which is where a lot of people start, where you've got one...
C
...Argo CD or OpenShift GitOps sitting in one cluster, and it is managing all of the other clusters itself — it's pushing things out to those different clusters and managing them directly. The other model you get is distributed, where every cluster is essentially running its own Argo CD or its own OpenShift GitOps, and there's essentially a one-to-one mapping between the cluster and that Argo — or one-to-many, if you have multiple Argos running on that cluster.
C
The idea, though, is that in the distributed model Argo does not have any external clusters tied to it — whereas in the centralized model we obviously have four different external clusters tied to that Argo; here, there are no external clusters whatsoever. Now, centralized is great from a management point of view, because I've got one thing that I'm touching and managing, so it's easy to deal with — but the problem is that it doesn't really scale. The other issue that tends to come up with centralized...
C
...is that it's a single point of failure, in the sense that if I lose this Argo CD, I've lost my management capability across my fleet. Distributed, on the other hand, addresses the single point of failure, but from a management point of view it's difficult to manage, because now I have to go from Argo to Argo to Argo to see what's going on, and to manage my application if I've got it deployed across multiple clusters.
C
So
really
this
is
where
this
hybrid
model
comes
in
and
it's
kind
of
what
I
like
to
do,
and
it's
my
preference,
but
you
know
again
as
remote
things
and
get
Ops
Christiano
had
this
discussion
many
times.
There's
no
one
true
way:
it's
really
going
to
be
dependent
on
your
organizations,
needs
and
requirements,
and
so
from
a
hybrid
perspective.
What
we
have
is
we
have
something
in
the
middle.
C
This
can
be
openshift
get
Ops,
it
can
be
an
Argo,
it
can
be
ACM,
which
is
what
we're
going
to
be
looking
at
today,
which
is
managing
these
other
Argos
CDs,
and
in
this
model
this
thing
in
the
middle
is
not
pushing
out
any
applications.
C
So
you
get
all
the
goodness
of
our
group,
but
you
still
get
that
single
pane
of
glass
experience
and
you
also
void
the
single
point
of
failure
right
like
if
I
use,
ACM
I,
don't
lose
my
management
capability
on
these
different
clusters.
I
just
lose
that
single
view
into
what's
going
on
with
my
fleet,
so
Hillary
and
Christian
from
a
naming
convention.
What
do
you
think
about
this
centralized
distributed?
Hybrid
naming
convention?
Do
you
like
it
not
like
it
other
naming
conventions
that
you've
seen.
B
The only other naming convention I've seen with respect to this — and it's something I chatted to you about, Gerald, and why I called this episode "control plane" — is that I've seen hybrid being called "control plane." I'm not saying that's better terminology; I'm not even saying I like it specifically; I'm just saying I've heard it. So if folks are researching this, you may see hybrid also referred to as control plane — meaning you have your control plane, your hybrid model.
B
Like, you have separate Argo CD instances managed by a central location; the Argo CD instances themselves may be managing other things, but the Argo CD instances themselves are being, I guess, centrally managed — or control-planed. I'm not sure what terminology I actually like, to be honest with you. I know Hillary's probably — I mean, both of us are full of opinions.
A
Typically, when we're looking at "hybrid," it's a concept of two — almost two sides of the coin meeting, or taking things that are separate and in some way joining them. A hybrid cloud architecture is partially cloud, partially bare metal, right? It's taking two things that are similar but different enough, and then saying, okay, this is the joining point. In this situation, you're actually adding something else to become that centralized management thing, and all the Argos are assets.
A
I don't feel like "hybrid" really accurately defines it. If you told me, "oh yeah, we're going to hybridize Argo," I would think more like the app-of-apps approach, or something else. But I also don't love "control plane," partially because it's so overused. Everything's a control plane — everything and its brother is a control plane. I'm unhappy with all of it; that's the takeaway there. I am not easily pleased.
A
"Mixed topology" is actually a pretty good one, but I still think that's a little bit of a misnomer, because of the fact that you're actually not altering Argo fundamentally here, right?
C
Yeah,
no,
it's
a
good
good
point
there,
because
when
this
discussion
came
up
on
one
of
the
forums,
I
originally
proposed
a
centralized
and
distributed
and
I
said
really
it's
less
about
how
you
got
laid
out.
Things
laid
out
more
about
intent
right,
so
intent
on
centralizes
I've
got
one
thing
that
does
everything
and
type
what
distributed
as
I'm
all
spread
out
hybrid,
the
intent
is
still
distributed
right.
It's
like
a
flavor
of
distributed
and
I
used
to
frankly
call
this
Hub
and
spoke
until
I
realized.
A
The traffic flow, right — the traffic flow is different. I'm not sure; you know, naming things is hard, and I think Red Hat is notoriously bad at it, so maybe we're the wrong people to solve this.
A
You
know
I
I
hate
to
say
that
about
us,
but
it's
true,
but
I
think
that
we
see
I,
don't
know
if
have
you
guys
seen.
Hypershift
hypershift
just
recently
went
into
external
preview.
It
hasn't
been
super
talked
about,
like
maybe
I
shouldn't
even
be
saying
it,
but
I
did
see
people
posted
on
LinkedIn.
So
it's
not
like
Hush
Hush.
Well,.
A
The control plane — thank you, yes — and that was exactly what my point was going to be: the hosted control plane kind of thing. It's this exact same model, more or less. You're going to have a bunch of things that are all distributed, and you're going to have one central location that does the management and the visibility and everything. There are a lot of parallelisms there — and I also don't like "hosted control plane" for this either, for the record — but there's a pattern here, right?
B
The only reason I say "control plane" for what Gerald has here as hybrid is because that's what Codefresh and Akuity are calling it. But it's almost like you said, Hillary — it's almost a meaningless phrase at this point. I guess it makes sense, though; maybe the conversation piece should be the strengths and weaknesses of each model, even though we don't have a name for it. For instance...
B
If
we're
switching
a
conversation
to
like
the
strengths
of
weaknesses
right,
you
have
like
the
centralized,
Hub
and
spoke,
but
then
the
onus
is
on
the
Argo
City
rbac
model
right
like
if
you
have
a
multi-tenant
the
situation
where
you're
you
have
developers
like
actively
using
the
Argo
CD
UI
to
deploy
and
manage
their
applications.
I
think
you
have
to
use
some
sort
of.
B
There
are
back
model
because
then,
then
you
have
like
you
know
different
tenants
on
there
right.
You
have
the
team,
a
team,
B,
Team,
C
kind
of
doing
that
sort
of
thing
which
then
like.
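For concreteness, multi-tenant access in Argo CD is typically configured through the `argocd-rbac-cm` ConfigMap. A minimal sketch — the team, project, and group names are made up for illustration, and in OpenShift GitOps the namespace would typically be `openshift-gitops` rather than `argocd`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  # default role for users with no explicit grants
  policy.default: role:readonly
  policy.csv: |
    # team-a may manage only applications in the team-a project
    p, role:team-a, applications, *, team-a/*, allow
    # map an SSO/OIDC group onto that role
    g, team-a-admins, role:team-a
```

Each tenant team then only sees and syncs its own project's applications, which is the kind of RBAC burden being described for the centralized model.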
B
If
you
start,
and
also
if
you
start
getting
too
big,
you
get
what
Hillary
Hillary
always
verb,
oomed
right,
you
start
getting
oomed
because,
like
you're,
you
know
your
repos,
like
you're,
just
like
trying
to
hold
all
that
stuff
into
memory
from
the
repo
server
on
Argo
can
cause,
can
get
you
umed
right
like
so.
A
Yeah,
you
have
to
clarify
so
owned
being
out
of
memory
right
and
I,
think
that
was
potentially
clear
from
Context
Clues,
but
I'm
going
to
find
it
anyway
on
like
when
you're
looking
at,
like.
What's
going
on
with
my
pod
right,
like
it'll,
show
oom
for
out
of
memory
on
the
Pod
and
I
was
like,
oh
it
oomed
or
it's
uming.
If
that
like
is,
is
in
a
cycle
of
then
restarting
and
then
going
through
the
same
exact
process
of
Crash
Loop
crashing
due
to
the
do.
A
The
memory
issue
and
I
turned
that
into
a
verb,
because
it's
delightful
I
I
feel
like
I,
didn't
originate
it,
but
I
can't
remember
who
I
would
have
heard
it
from
and
even
the
person
who
I
said
thought
I
heard
it.
From
said
to
me,
no
I
heard
it
from
you.
You
did
that
first,
so,
somewhere
somewhere
deserves
the
real
credit
for
uming
being
a
verb.
It's
certainly
not
unique
to
me,
but
that's
that's!
B
Yeah — well, it's fun to say, but not fun when it happens. You don't want to be constantly crashing. And that's not to say that the centralized hub-and-spoke doesn't work, or that you can't scale it; it's just something you need to keep in mind, along with things like multi-tenancy. And then in the distributed model — I guess, or maybe correct me if I'm wrong here, Gerald — what do you mean by distributed?
C
Yeah, I've got a slide that goes into more of the logical aspect, and I'm just trying to keep it simple today in terms of not going too deep into things. But the idea really is that in the distributed model there are no external clusters from an Argo perspective — it's only talking to the local Kubernetes service. Gotcha.
C
Yeah, that's it. Really, the whole difference to me — scaling is one aspect, but really it's the single point of failure with centralized versus the distributed model; that's where things come into play for me, in terms of the big thing that a lot of my customers are looking to avoid. And good talk on the hybrid model — I'll put my thinking cap on and take some of the suggestions from the chat as well, for some better terminology there.
C
So with this slide I just really wanted to set the stage for what I'm going to show in terms of how things work. What we're going to be showing is — for lack of a better word right now — this hybrid model, as I termed it here, a control plane, and we'll have a look at ACM. So, all right: without further ado, I'll leave this slide and we'll go into more demo. Yeah.
B
Exactly, yes — so remember: choose that technical debt wisely, right?
C
Okay — and again, Hillary and Christian, you guys can keep me — yeah, sorry, "you folks." You see, it's a really bad habit to try to break; it becomes such a verbal tic for you, right, that it's like...
A
It actually becomes like saying "um" or "like," right? I don't intentionally ever say those things, but they certainly fly out of my mouth at alarming rates.
C
Yeah, absolutely. So if you — you folks, I just did it again — keep me honest here, in the sense that as we go through the demo, feel free to pause me. I'm hoping, like I said, to keep this conversational between the three of us and the folks that are chatting on the channel, so that we can have more of a conversation as we go through it.
C
So today I'm going to be showing ACM managing Argo CD — OpenShift GitOps, like we talked about — and what we're going to be doing is bootstrapping a new cluster that I've already created and brought into ACM. So I've got a hub cluster, which is my local cluster; this is where the ACM hub is running, i.e. that central part, the control plane that's going to manage everything. And my AWS cluster here is the one that I've imported, and it is a fresh cluster.
C
Yeah, they're fairly similar, but in GNOME the panel tends to follow more of the Windows model, where you get the bar along the bottom and it's always there — you can intellihide it as well if you want to, but I always just leave it there, because, again, I'm an old man: I like to see my applications that are running in the bar and not guess that I've got something running. And I like just clicking on it and switching, because that's muscle memory from my Windows days. But anyways.
C
This
is
the
Amazon
cluster
that
we're
going
to
bootstrap.
So
this
is
what
we're
going
to
see
from
a
demo
perspective
today
is
how
ACM
makes
it
easy
to
bootstrap
these
clusters
using
openshift
get
Ops
what
I
just
wanted
the
purpose
of
logging
in
was
first,
so
you
could
see
that
I'm
just
using
Cube
admin,
that's
the
only
authentication!
That's
set
up.
You
can
see,
there's
no
certificates
that
are
being
configured
for
this.
C
You go, "oh, I've got my clusters here," and essentially what I'm going to do is configure this — and I'm going to just re-lay out my screen here a little bit. So I'm logged into that Amazon cluster; if I do an `oc status` here, we can see, just to make sure — because I don't know about you, Christian or Hillary, but I'm always logging into the wrong cluster at the wrong time.
B
I used to just use environment variables for different clusters, and now I force myself to use the context commands — `kubectl config use-context`. They could make that command a little easier, but yes, I have tons of clusters and I'm always messing one up, because I think I'm on one and I'm actually on the other.
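The context commands being described are roughly the following — the context name `my-aws-cluster` is a placeholder:

```shell
# list every context in your kubeconfig and mark the active one
kubectl config get-contexts

# switch to the context for a given cluster
kubectl config use-context my-aws-cluster

# confirm which context you're on before running anything destructive
kubectl config current-context
```

On OpenShift, `oc login` and `oc status` (as used in the demo) serve a similar "am I where I think I am?" purpose.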
C
Yeah
I
do
that
all
the
time
so
in
this
command
here
all
I
did
was
I
put
on
a
watch
for
Argo
application
objects
right
and
you
can
see
right
now.
It
doesn't
have
any
resource
type
of
applications,
because
the
git
Ops
operator
hasn't
been
installed.
We
haven't
done
anything
yet
in
terms
of
configuring,
it
so
the
way
the
configuration
works
I'm
going
to
get
into
the
meat
of
this.
While
this
demo
is
running,
but
just
to
kind
of
show
a
little
bit
here.
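The watch he describes is presumably something along these lines; until the operator registers the Application CRD, the API server rejects the query:

```shell
# watch Argo CD Application objects across all namespaces;
# before the GitOps operator is installed this fails with
# "the server doesn't have a resource type applications"
watch oc get applications.argoproj.io --all-namespaces
```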
C
If
I
go
into
my
local
cluster
here,
you
can
see
an
ACM
that
there
are
different
ways
to
categorize
your
clusters
and
decide
how
things
are
going
to
get
deployed
across
the
different
clusters
and
one
of
the
more
common
ways
like
everything
else
in
kubernetes
is
through
labels.
You
can
label
your
cluster
right.
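Labeling can be done in the ACM console, or directly against the ManagedCluster resource on the hub. A sketch — the cluster name `aws-cluster` and the `gitops` label value are illustrative:

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: aws-cluster
  labels:
    # the label the GitOps policy set's placement will select on
    gitops: "true"
spec:
  hubAcceptsClient: true
```

Equivalently, from the CLI against the hub: `oc label managedcluster aws-cluster gitops=true`.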
C
So the way that that label is going to work is: I have a policy set set up for GitOps, and this policy set in ACM is what's actually going to be rolling out this cluster. You can see my local cluster has already got three, but here on the AWS cluster I've got a mix of policies that are compliant and not compliant as it goes through the process of setting things up. If I look at the actual policies that I have in play, I've got three policies out of this policy set.
C
All
a
policy
set
really
is
is
just
a
grouping
of
policy
right.
It's
nothing.
Fancy
or
complicated.
Just
allows
you
to
manage
it
in
a
more
efficient
and
easy
way.
So
I've
got
three
policies
that
are
being
delivered
here.
The
first
one
is
the
manage
git
Ops
operator
policy.
Oh
see,
if
you
look
up
here
now,
you
can
see
the
operator
got
installed,
because
now
it's
finding
the
application
crud,
but
it
is
there's
no
applications.
C
Actually,
existing
that'll
take
a
little
while
for
it
to
get
going
so
the
manage
get
Ops
operator
deploys
the
actual
openshift
get
Ops
operator
or,
if
you're,
using
the
Argo
Community
operator,
it
could
deploy
that
operator
as
well.
C
Then I have another policy down here, which is copying my Sealed Secrets key. So I'm using Sealed Secrets for encryption — I don't have the bandwidth in my home lab to run, you know, HashiCorp Vault or something fancy, nor do I want to pay a cloud provider to store my secrets for demo-type purposes, so I'm using Sealed Secrets. If you're not familiar with it, it essentially uses a private key that you need to provide to the Sealed Secrets operator in order for it to decrypt your secrets that are stored in Git.
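The workflow being described is roughly: encrypt a Secret with `kubeseal` against the controller's public key, commit the resulting SealedSecret to Git, and let the in-cluster controller — which holds the private key — decrypt it back into a Secret. A sketch with placeholder file names:

```shell
# encrypt a regular Secret manifest into a SealedSecret that is
# safe to store in Git; only the controller's private key can decrypt it
kubeseal --format yaml < my-secret.yaml > my-sealed-secret.yaml

# commit the sealed version, never the plain Secret
git add my-sealed-secret.yaml
```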
C
So
in
order
to
get
that
key
into
my
the
cluster
that
I'm
deploying
here
I
have
a
policy
that
copies
that
key
from
The
Hub
cluster
into
this
manage
cluster,
and
that's
what
that
policy
is
doing.
We'll
look
at
this
more
in
more
detail
in
a
second
and
the
last
one
is
just
setting
up
my
notifications.
I've
got
that
out
as
a
separate
policy,
Argo
CD
notification.
C
So
those
are
the
three
things
that
are
happening
here
and
if
we
go
and
look
into
the
this
here,
oops,
let
me
go
and
edit
it
one
thing:
I
forget
about:
oh,
you
can
see
it's
starting
to
bootstrap
here
on
the
right,
so
the
applications
are
coming
in
and
the
it's
using
a
a
sync
waves
to
do
this.
C
So
from
a
placement
point
of
view,
in
terms
of
the
policy,
you
can
see
that
I've
got
the
placement
tied
to
a
label
called
get
Ops.
If
that,
if
it
sees
this
label
on
that
cluster,
which
I
just
added
to
my
AWS
cluster
and
that
label
exists,
this
policy
will
kick
in
and
off
it
goes
to
the
races
or
this
policy
set
will
kick
in
and
off.
It
goes
to
the
races.
So
if
we
look
at
the
policies
that
are
doing
the
work,
let
me
do
my
good
Ops
here.
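A placement tied to a `gitops` label looks roughly like this — the resource names and namespace are illustrative:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-gitops
  namespace: policies
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          # select any managed cluster carrying the gitops label
          matchExpressions:
            - key: gitops
              operator: Exists
```

A PlacementBinding then ties this Placement to the GitOps policy set, so that labeling a cluster is the only action needed to trigger the rollout.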
B
What you're trying to show there — the managedFields — is something for machines and not for us as people. I mean, they're useful, but not useful with respect to looking at a config. They're useful if you're coding something, writing an operator, or if you need to reference something — but yeah, just minimize that, please. I'm glad — I think kubectl hides them now by default, and I remember that being a fight in the GitHub issue, about hiding or not hiding them. But anyways — I could spend an hour complaining about that. Yeah.
A
It makes it too easy for me to be making changes, and it's one of the reasons I don't like the GUI — even with the color coding and everything, it still just makes it too easy to start trying to change things in the wrong place. A good old `oc patch` command saves me all of that trouble. But, you know, whatever. Yes.
C
So there's a policy here, and this policy is really just an aggregation of what we call configuration policies, which is what I'll show next. What you're doing is essentially templating and providing the YAML that you want to be deployed. So in ACM, the way the configuration policy — AKA this thing here — works is that you can check whether the clusters being managed by that policy, or tied to that policy, have specific YAML, and then you can optionally say you want to enforce that YAML to be there.
C
So if that YAML is not there, ACM will actually push that YAML out to that cluster from the policy. In enforce mode, it's almost like a GitOps-by-policy type capability, and it's pretty neat in terms of the way it works and some of the capabilities it brings — it has a couple of extra features that Argo doesn't have, which we're going to touch on as we go through this, because it's quite interesting. But at the simplest level, you can see here...
C
...I've got an objectDefinition, which is my Namespace object; I'm saying it must have it, and for the remediationAction I'm enforcing it. So that is telling ACM: if this is not on the target cluster, you need to create this YAML — drop that YAML in there. And you'll see this pattern just repeated throughout. So as I go through this, I'm creating some cluster roles — my cat's making a guest appearance on my desk — I'm going through some cluster roles to set things up.
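The musthave/enforce pattern he's walking through looks roughly like this — the policy name is made up, and the Namespace is the one the GitOps operator typically uses:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: gitops-namespace
spec:
  # enforce = ACM creates/repairs the object; inform = only report drift
  remediationAction: enforce
  severity: low
  object-templates:
    - complianceType: musthave
      objectDefinition:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: openshift-gitops
```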
C
So we set up the cluster roles and everything, configure the Subscription object, which is the next thing that we're doing, and then, if I keep going down — I won't go through every little thing here — the one thing that I want to point out is this one here.
C
So what we're pushing out here is an application object — an Argo CD Application object — and you'll notice I've got this here as the path. Now, if you're familiar with ApplicationSets, this probably reminds you a bit of an ApplicationSet, where you're having to template the Application object that's being generated. ACM has the capability to do lookups, which is a feature that Argo does not have — you can look things up on either the hub cluster or the target cluster, and in this case here, the `fromClusterClaim`...
C
...is pasting it in here, and this matches up to a path in my GitOps repo, where I'm doing all my cluster configuration from, and that is where I generate the bootstrap application that you're seeing deployed here on the right. So that lookup capability is really neat and allows you to do a lot of cool things. But you can see that, from a setup perspective, this isn't rocket science.
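The templated Application is presumably shaped something like this — the repo URL and directory layout are invented for illustration, but `fromClusterClaim` is the ACM template function he names, resolved by ACM before the YAML lands on the managed cluster:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config.git
    # ACM resolves this template, substituting the managed cluster's name
    # so each cluster pulls its own directory from the config repo
    path: 'clusters/{{ fromClusterClaim "name" }}'
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated: {}
```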
C
This is not something that's overly complicated, but it makes it really easy to spin up both the GitOps operator and the bootstrap application on the target clusters. Similarly, if I get out of this — Christian and Hillary, does that make sense? You're following all this, no problems?
B
Yeah — so just to recap, and to step back a little bit: when you install a cluster with ACM — and just, you know, for everyone's edification, you can install an OpenShift cluster with ACM; if you set it up right, it's like the click of a button — you can then assign policies to it as well, right, once the cluster is up.
B
It's
almost
like,
like
you
know,
like
you
know,
like
those
Russian
dolls
right
like
those
like
vegetables,
yeah,
yeah,
yeah,
yeah
or
or
like
a
dominoes
like
I,
don't
know
what
analogy
to
use
but
you're
like
you're
clicking
a
button,
and
this
thing
tells
this
thing
to
tell
us
this
thing
to
configure.
C
That's exactly it, yeah. You can create the cluster directly out of ACM, and you can label the cluster with the gitops label right there, and everything will just happen from start to finish for you — you don't need to do anything. I did it this way for demo purposes.
C
The other nice thing, from a cluster-creation point of view, is you can tie it into Ansible as well — you can actually run plays and playbooks out of Ansible Automation Platform and have it configure outside infrastructure as part of that cluster creation. So if you need to interact with a load balancer in AWS, for example, you can have a playbook that does that and manages all of it. So it really goes a long way toward managing a fleet of OpenShift instances, over and above just the GitOps capability.
C
Okay
and
then,
if
we
go
back
to
our
my
policies
here,
we
can
see
now
I'm
fully
compliant.
My
AWS
cluster
has
got
all
policies
deployed
ready
to
go
no
problem
at
all.
If
I
go
back
to
bring
this
screen
up
here,
we
can
see
it
looks
like
everything's
synced.
A
Okay — we also have an ultrawide monitor; it's mounted on the wall in front of me because it is so big. Have you experienced the thing where you cannot get other people to agree on whether or not your text is legible, due to the aspect ratios of their own monitors in relation to your monitor while it's being shared?
C
Yeah, so that's what I always do, because otherwise you're right, it becomes a tiny little screen. I mean, we're kind of off-topicing, but if you're running Linux, in GNOME you can enable fractional scaling. So usually when I'm running my ultrawide I'm at 125%, but for screen sharing you can just bump that up a bit, to something that makes it more legible as well.
A
Okay, we actually have a real question that is not off topic in the chat, and since we're already stopped, let's just go ahead and pull it up: do you manage MachineSets via ACM policy or with Argo?
C
So, I manage them with Argo, but there's a bit of a philosophical discussion about how far to go with ACM's configuration policy capability. I think I kind of stated my biases up front: I'm the OpenShift GitOps TMM — that's really where the bulk of my job is — and I've used Argo for a long time now, so I tend to look at Argo as my tool. And you know the old adage about the carpenter...
C
...when you have a hammer, everything looks like a nail. So Argo tends to be my go-to to deploy anything. The only thing I'm using ACM to deploy is, essentially, bootstrapping Argo — bootstrapping OpenShift GitOps. Once that's done, everything else I manage with OpenShift GitOps. But having said that, that's the way I like to work, and that's my personal philosophy; I don't think at Red Hat...
C
...we have a strong direction saying "do this" or "do that" in terms of that particular question — it's really going to be up to your organization how you want to manage things. From my perspective, Argo has a few features for managing deployments that I quite like and that are a little more behind the scenes than the policy approach. But the policy, on the other hand, can be really convenient too.
C
It's
really
easy
to
create
them,
get
them
deployed
and
ACM
does
a
great
job
of
managing
that
as
well.
So
you
know
I'm
not
saying
one
way
or
the
other
is
better
or
worse.
It
just
depends
on
what
you
prefer
to
do
and
how
you
manage
things.
If
you
talk
to
people
that
are
more
on
the
operations
team
that
are
doing
everything
in
ACM,
they
will
gravitate
to
the
configuration
policy
to
do
this
stuff,
whereas
developers
who
tend
to
be
more
in
pure
Argo
World,
we'll
just
do
things
in
our
go
right.
A
Let's see if there are any follow-ups or additional questions on that particular one, but I feel like that was a good, nice political answer.
B
That's probably the right answer. But I feel like, if you are using ACM as the control plane, or the hybrid model, for managing things, if that's your focus, to me it makes sense to have Argo do it, because you're kind of delegating: if you have Argo instances, you're just saying, okay, Argo, you take care of everything on this machine. Versus doing it in ACM: if you're using ACM as your machine management, if you're doing a lot with ACM, it does make sense to put it in ACM. So for me it depends on the model you're using. I am more inclined to use Argo, but then I'm biased, because I have an Argo picture up here. So yeah, that's just me. And I see that Luis, one of the ACM guys, is also on, so sorry, Luis, yeah.
C
Yeah, well, they will correct me or back me up as necessary. But anyway, just to show: this is the cluster that we just configured. So now you can see, hey, I've got certificates; great, that's done. I've got authenticators, right; I'm not just tied to kubeadmin anymore, I can log in as my admin user.
C
Yeah, we have the custom Matrix login theme that somebody did for the console UI challenge many moons ago, which is quite nice. And if I go into Operators and look at Installed Operators, you can see I've got a bunch of different operators installed now, whereas before I had none. So that's all done for me. The other thing I want to touch on with ACM that's kind of neat is, if you notice, I've got this banner up here.
C
It shows aws-cluster in that banner, so I'm using an ACM feature to actually do this, but tying it into Argo to manage that. So if we go look at my Argo installation, in terms of workloads and pods... and oops, sorry, let me get out of this, all projects, turn off the filter. You can tell it's a fresh cluster; nothing's configured the way I like it. If we look for the repo server down here and look at the environment variables...
C
You can see there are some environment variables that got spit out here: what the cluster ID is for this cluster, what the cluster name is, what the infrastructure ID is, and what the subdomain for this cluster is. And this is another area where I'm interested in Christian's and Hillary's opinion on how GitOpsy or not it is. If I go back to ACM, to show you where this is happening: Policies, the managed GitOps operator; I'm bouncing around a little bit here.
C
Next one... okay, here we go. So what I'm doing in ACM is that when I deployed the operator, in the ArgoCD CR that actually deploys the operator you can specify environment variables for the different components. So I'm specifying some environment variables for the repo server, but then I'm using ACM's capability to look things up to populate those variables. So when I deploy into my target cluster, the infrastructure ID gets populated from that target cluster, same with the cluster ID, the cluster name, the subdomain. So, there's a question about machine sets.
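As a rough sketch of what that combination might look like (the exact field names and the `fromClusterClaim` lookups and claim names here are illustrative assumptions, not copied from the demo), the operator's ArgoCD custom resource lets you set repo-server environment variables, and ACM's policy templating can fill in per-cluster values when the wrapped manifest is distributed to the fleet:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  repo:
    env:
      # Resolved per managed cluster by ACM's policy templating
      # before the manifest lands on the target cluster.
      - name: CLUSTER_ID
        value: '{{ fromClusterClaim "id.openshift.io" }}'
      - name: CLUSTER_NAME
        value: '{{ fromClusterClaim "name" }}'
```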
C
For example, one of the problems with machine sets is that the default name is tied to this infrastructure ID, I think it is, or the cluster ID; one of the two, I can't remember. So you can't just have it sitting as YAML somewhere; you have to know the name in advance, and that ID is generated; it's not something you ever know in advance until the cluster is created. So using ACM allows me to basically populate these variables, and then what I've got is a plug-in.
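For context, a worker MachineSet's name and labels embed that generated infrastructure ID, which is why it can't sit fully static in Git. A trimmed, illustrative example (the zone and names are made up) showing the token a substitution plug-in would fill in:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  # The generated infrastructure ID is baked into the name and labels.
  name: ${INFRASTRUCTURE_ID}-worker-us-east-1a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: ${INFRASTRUCTURE_ID}
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: ${INFRASTRUCTURE_ID}-worker-us-east-1a
```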
C
Essentially, it takes the output of Kustomize and says: if I've got a dollar sign and squiggly brackets around a name, change it to the environment variable, if there's a matching environment variable. So the interesting thing about this, and I'm kind of interested in getting your opinion on it, Christian and Hillary, is whether this is GitOpsy or not. Because, as you know, one of the principles of GitOps is that the source of truth is in Git, and as soon as you start doing some stuff like this...
C
Well, that's not really true anymore, right? Because some of the source of truth now is that environment variable. For me, I'm okay with it, because I'm pretty pragmatic; there's a balance between "do I really want 100,000 Kustomize overlays to patch these individually" versus just templating in this one simple case. But I'm just kind of interested in what you folks' thoughts are.
B
Yeah, I actually had this conversation with Christian Posta many, many moons ago: how far do you go in your source of truth? Because having a deployment in Git is like, well, that deployment spins up ReplicaSets, which then spin up pods. So is that less GitOps than defining nothing but pods? At some point it gets kind of crazy.
B
I think it's just where your point of demarcation is. I like to quote Alexis Richardson, who coined the phrase GitOps, so I kind of have that in my back pocket, right? I'm quoting someone who actually coined the phrase. He said that the point of GitOps, in his mind, is the state of your cluster.
B
Basically, you want your cluster back in the state, or as close back to the state, as it originally was. And things like Helm do it too; Helm has the lookup function as well.
B
Then that's the point of it, right? It's less about the religious aspect of GitOps and more the philosophical aspect of it: what is it for? It's for your end state; that's what it's for. You want the end state. So I think this is completely fine.
B
I have absolutely no trouble with looking up things that are dynamic, things like cluster names and node names, things you actually shouldn't have to care about to begin with. At least, that's my opinion; I tend to lean towards: do as much as you can in Git. However, there are going to be situations where you have to ask, what's your end goal? Your end goal is to get to the same state, so do whatever you need to do.
A
Yeah, to me this is one of those "how far do you go" things, right? This policy could exist in Git with this exact lookup, and if you need to change the nature of the lookup, then that would be what changes in Git, and then you get tons of the same thing. But like you said, you don't want a thousand Kustomize overlays; that's not what I would call sustainable technical debt. I need a rainbow effect: sustainable technical debt.
A
Okay, you've got to balance that. You cannot take something so far that you're giving yourself too many things to manage and keep aligned, or too many instances of something, or too much compute spent, to do something that could be done with a Perl one-liner. Not even a one-liner in Perl; look how small that is. It's basically all regex. I love Perl!
B
C
There's actually a command called envsubst, which is why I named it this way, but unfortunately the Argo CD image doesn't have it, and I try to avoid image management like the plague; I don't want to be keeping this thing up to date. So that's why I just defaulted to Perl. As Hillary said, Perl's like a Swiss army knife: I don't have anything else, but I have Perl? Okay, I can do it. Yeah.
A
B
I'm pretty good with sed, but awk, that's kind of where my head starts hurting.
A
Perl is perfectly human-readable, okay? It's just like learning any language that you have to speak. It's fine! And Perl doesn't change, for better or worse; it hasn't.
A
C
Okay, so that's great; thanks for the feedback on that. Another thing I just wanted to show: you can go to the applications list and see all your different applications here, deployed across the different clusters; there's one remote cluster, for example.
C
You can drill into things and see the layout of what they look like and how things are deployed; click on the application object and see that it's healthy. And you can see this across the fleet. Over and above that, there's also a feature, if we go into Clusters here (because I could never remember where the link is in ACM), called Observability.
C
What Observability does is essentially collect all the metrics from the managed clusters and let you query those metrics on the hub cluster. This allows you to provide dashboarding across your fleet of clusters. So for Grafana, what I've done here: as you can see, there's a default dashboard that comes out of the box for clusters in general, but I created a GitOps one, and I can go in and see my GitOps instances.
C
So here is the one that's running on that AWS cluster where we just deployed the operator. I can see how many applications I have, repositories, etc., and the statuses here in terms of health and sync, and then I can just flip over and see my local one as well. So I can see how my Argos are looking across my fleet very quickly and easily in ACM. This feature is super easy to deploy; I'm not much of an infrastructure guy and I'm just using my network-attached storage.
C
My consumer-level network-attached storage device, for object storage. It's just very easy to set up and get going, and it gives you this benefit of getting all the metrics. Because, I don't know about you folks, but when I'm running multiple clusters I tend to bounce from cluster to cluster to cluster to see what's going on, and being able to get this single pane of glass view across my fleet is a huge benefit to me.
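For reference, enabling this feature boils down to creating a MultiClusterObservability resource pointed at S3-compatible object storage. This is a minimal sketch from memory (the secret name and key are illustrative), not the exact manifest from the demo:

```yaml
apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {}
  storageConfig:
    metricObjectStorage:
      # Secret containing the Thanos object-storage configuration,
      # e.g. an S3-compatible endpoint on a NAS.
      name: thanos-object-storage
      key: thanos.yaml
```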
A
That is a huge success, yeah. So, can I confess? I have actually never seen ACM at work before today. Can you believe it or not?
B
And what's really cool is that it's the tip of the iceberg, because you can have integrations with Ansible and Ansible Tower and literally use it as a control plane for your infrastructure, not just cloud-native stuff. So it's really a tip-of-the-iceberg sort of thing: install and manage OpenShift clusters, manage your traditional infrastructure, do all this GitOpsy goodness, and have a control plane for it all. Yes.
C
Yeah, the other cool thing with ACM... like I said, today I'm really just focused on managing your GitOps, but what really got me into this as a solution architect for my customers is that many of them are struggling with managing the tenants on the cluster; herding the cats and making sure they're doing the right thing.
C
So I've been really big on policies as a way to manage my tenants, in the sense that I want to make sure, when somebody deploys a pod disruption budget, they don't set it in a way that prevents me from taking a node down for maintenance or doing an upgrade. How do I do that? I can have a policy so that every time somebody creates a pod disruption budget, it says: okay, you're not allowed to set it this way, because it's going to block things. Or I can let them set whatever they want, but report back to the central pane of glass on the policies to say: you know what, there are this many clusters with violations, because people are deploying bad pod disruption budgets, as an example. So it's about being able to make sure that the tenants on the platform are doing the right thing, because many of them are not going to be...
C
You
know
overly
familiar
with
the
ins
and
outs
of
kubernetes
right,
like
the
number
of
times,
I've
had
to
tell
somebody
the
importance
about
liveness
and
Readiness
checks,
which
is
like
basic
kubernetes.
101
is
still
astounding
to
me
right
so
having
these
policies
in
place
and
then,
most
importantly,
a
central
place.
We
can
actually
understand
what
your
policy
status
is
across
a
large
Fleet
of
clusters.
It's
huge,
and
this
is
really
what
started
me
with
ACM,
but
from
there
I
really
kind
of
morphed
into
the
get
Ops
side
as
well.
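A hedged sketch of what such a check could look like as an ACM ConfigurationPolicy in inform mode; the policy name, namespace selector, and the specific "blocking PDB" condition here are illustrative, not taken from the demo:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: no-blocking-pdb
spec:
  remediationAction: inform   # report violations to the hub instead of enforcing
  severity: medium
  namespaceSelector:
    include: ["*"]
    exclude: ["kube-*", "openshift-*"]
  object-templates:
    - complianceType: mustnothave
      objectDefinition:
        apiVersion: policy/v1
        kind: PodDisruptionBudget
        spec:
          maxUnavailable: 0   # a PDB like this blocks node drains
```

With `inform`, a matching object just marks the cluster non-compliant on the hub's policy dashboard; switching to `enforce` would have ACM remediate instead.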
C
The other thing that ACM has over Argo, as this control plane, is that it is a pull model, not a push model. Argo CD, when it wants to talk to remote clusters, has to push: it connects from its point to that point. And typically what I find with a lot of customers is that those endpoints it's trying to reach are blocked by firewalls; those firewalls allow outbound traffic, but they don't allow inbound traffic. So what ACM does differently is that it deploys an agent on the target clusters, and that agent does a pull.
C
The
target
clusters
always
talk
back
to
the
hub
cluster
right
and
from
a
network
flow
perspective
that
is
much
more
likely
to
be
allowed
by
customers.
Networking
infrastructure
from
a
farall
perspective
than
a
push
model
is
so
that's
the
other
kind
of
benefit
that
ACM
brings,
but
it's
a
very
subtle
one
that
isn't
really
clear
when
you
see
it
kind
of
working
in
action,
yeah.
B
Yeah, and there's also the push-or-pull question, and it's like: why not both? You have both; you can have your cake and eat it too with ACM and Argo, using both of them together.
B
It allows for some of those things. When I first started working with ACM, I was so used to Argo that my mind just exploded when I connected my cloud-instance ACM to my local OpenShift cluster that I'm running right here, my little space heater that I call my OpenShift cluster, and I was able to manage it. I was like: oh, because it has an agent.
B
Oh
my
God,
but,
like
you
know,
working
in
Argo
I
was
used
to
like
the
other
way
around.
I'm,
like
this
is
never
gonna
work,
I'm
like
oh,
that's,
actually,
pretty
cool
and
I
can
see
how,
like
some
that's
more
tolerable
to
like
organizations
that
you
know
have
firewall
rules
or
have
some
sort
of
compliance
things
that
they
need
to
do
it
really,
you
know,
allows
them
to
start
using
like
some
some
of
these
things
that
are
really
really
cool,
yeah.
A
I've experienced that firsthand when I was working in the smart-building IoT space, actually having IoT gateways that had to be in physical locations and be conformant to corporate IT best practices and standards, and going through IT audits on that, which we could talk about forever. But there's one other thing I want to roll back to, and I know we're over time, so I hope you guys can stick with us just like five minutes.
A
I want to roll back to the topic of using policies to enforce Kubernetes best practices, and agents, because there's something else I want to show. If you don't mind, since you're driving: can you go to the OperatorHub on a cluster, Gerald?
C
Absolutely, let me switch over to my local one.
A
This is a project that I worked on for a really long time, and that Red Hatters past also contributed to, in addition to Red Hatters present. The actual basis for this is something called KubeLinter, which came out of StackRox. KubeLinter is a static analysis tool that goes into your GitHub workflows and validates that your manifests are compliant with the Kubernetes best practices that are written into its checks.
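To make that concrete, here is an illustrative (made-up) manifest of the kind a best-practice linter like KubeLinter would flag, for example for having no liveness/readiness probes and no resource requests or limits:

```yaml
# manifests/demo.yaml: deliberately bare-bones.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
        - name: demo
          image: registry.example.com/demo:latest
          # No livenessProbe/readinessProbe and no resources block,
          # so best-practice checks would report violations here.
```

Running the standalone CLI over such a directory would look something like `kube-linter lint manifests/`.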
A
Specifically, it doesn't do all of the checks that KubeLinter exposes in the static analysis tool, but it's a really great tool for people who are not dealing with a full fleet and might just be looking at things from the single-cluster perspective, and it should also have a Grafana board and such for you to look at. And one of the things we did is that when it runs and a validation fails, and it finds a violation of a best practice, it doesn't block anything.
A
It doesn't have that kind of functionality. But on our end, our automation will pick that up and raise a Jira with the owner of the workload and tell them: hey, you need to fix this; this is not working, and here's why it's not going to work. I'm very, very proud of this project and its existence in the OperatorHub and its usage across Red Hat. Not that it's my idea or anything, but for a very, very long period of time...
A
...like six months, I was the tech lead on this project, and I think I'm still a maintainer, because they haven't removed me, which is amazing. I particularly love and advocate for this project, and Red Hatters past, specifically a friend of mine named Rob, worked really, really hard on it to bring basically SRE knowledge about how to operate workloads on Kubernetes to the masses. So in the absence of the ACM policies, here's another thing that does really similar stuff.
A
Do
we
do
call
it
dvo
Devo
certainly
did
not
did
not
come
to
mind
for
any
of
us.
That
was
an
opportunity
missed
yeah.
B
That's pretty cool; that's awesome. Awesome. All right, so let's see if there are any questions.
A
C
I manage everything with Argo, so yeah, like I said, I'm biased that way; that's how I work. The only thing I don't manage in Argo is the stuff that gets bootstrapped; once the bootstrap happens, everything's Argo from then on. Again, that's just the way I like to work. I'm not saying that's the one true way, but for me, that's the way I work.
B
We're biased, because we're Argo folks here. But yeah: if I can't shove it into Argo, then it's probably a CI task, right?
C
No, I'll be up front: if I wasn't doing that little trick I just showed with populating the environment variables, there are probably more things I would be doing in ACM, like machine sets, which you need that information to do effectively; otherwise, like we talked about earlier, you're doing a lot of overlays to manage the patching of that stuff on a discovery basis.
A
I
think
that's
really
actually
expensive,
but
you're
using
you're
using
ACM
to
extend
the
base
functionality
of
Argo.
So
that's
that's
cool,
I,
think
at
the
end
and
we're
coming
up
we're
like
so
far
past
time.
We're
like
we
never
go
this
this
late,
but
this
was
a
great
stream
I
think.
In
the
end,
it
comes
down
to
what
I
always
say,
which
is
choose
your
technical
debt
wisely.
However,
you
decide
to
do
that.
It's
not
that
you're
choosing
a
solution.
A
It's
that
you're
choosing
what
this
our
solution
for
the
best
results,
necessarily
as
much
as
you're
choosing
solution
that
will
give
you
what
is
sustainable
technical
debt
for
your
team
and
as
somebody
who
spent
a
lot
of
time
in
the
quality
engineering
space
and
in
the
operations
space
like
that
piece
it
that
matters
more
than
any
religious
argument
anywhere.
What's
sustainable
for
your
team,
what
workflow
makes
it
sustainable
for
your
team?
So
that's
that's
just
what
I'm
going
to
advise
you
on
that
I
have
no
opinions.
B
Awesome, awesome. See, here you can find Gerald on social, so I'll throw that up; he usually posts a bunch of blogs. I really enjoyed reading your blogs even before they started showing up on the Red Hat sites, so Gerald puts that up there. And of course you can find my social there, on Twitter, as long as it stays up, right there. And then Hillary, who is the Chief Mermaid.
B
Yeah, and it's just "Chief Mermaid," so I'm pretty sure they don't check that; I'm pretty sure you can just put whatever you want on it.
A
Yes, it is a business card title. My official title is still Principal Reliability Engineer... Principal Software Engineer? I don't even know, actually, whatever that means. But yeah: functional titles, business card titles, Workday classification titles. Remember when I said Red Hat's bad at naming things?
A
So
I
named
myself
and
that's
that's
that
so
absolutely
I,
don't
post
that
much
on
Twitter,
but
when
I
do
I
hope
it's
interesting.
We
are
definitely
over
time.
Christian.
We
didn't
discuss
who's,
hitting
the
end
stream
button
who's
doing
what
we
don't
know.
I.
A
He's
gonna
do
the
outro
all
right
folks
in
two
weeks,
we'll
be
back
with
ansible
on
Z
systems,
which
is
mainframes
from
the
the
Z
systems
team?
Actually,
some
of
those
folks
out
of
IBM,
so
we're
looking
forward
to
that
session.
B
Awesome,
yes,
excited
all
right
all
right
folks,
we're
out
of
here
and
in
what
like,
as
Hillary,
always
says
right.
What's
what's
what's
your
what's
the.