Description
Kubernetes is now the de facto infrastructure building block for new applications. And with GitOps, it should really be invisible to you, deployed the same way you release code! But beware: what is deployed on top of K8s is what really matters. In this session, we will go through why GitOps-ifying your K8s management with Argo CD and Codefresh makes so much sense, but also talk about how this is really only one side of the K8s equation.
Thank you for having me. Hi everyone, and thank you for joining the session. Like Sharon mentioned, my name is Saad Malik and I'm the CTO and co-founder of Spectro Cloud. Today we'll be talking about using GitOps to manage your Kubernetes clusters, specifically looking at Argo CD and Cluster API.
So, a little bit more about myself. Like Sharon mentioned, I'm very passionate when it comes to technologies like containers, Kubernetes, and distributed systems. I was part of an early cloud startup called CliQr Technologies, where we focused on multi-cloud application management, and that is actually where we first started working with Docker containers and orchestration platforms like Apache Mesos and Kubernetes. CliQr was acquired by Cisco in 2016, and at Cisco we worked very closely with our customers on their digital transformation and their adoption of cloud-native technologies like containers and Kubernetes.
So in 2019, along with other key executives from CliQr Technologies, we left to start Spectro Cloud, to really make Kubernetes accessible and approachable for everyone. Now, with that, let's talk about how we align the Kubernetes lifecycle with the lifecycle for our applications or application workloads.
A developer makes a change and goes ahead and merges it into their code git repository. Now, through the power of a continuous integration pipeline, two artifacts are generated. One is an actual Docker image, which is pushed to a Docker registry, and the other, of course, is a Kubernetes manifest specification.
It could be either a direct manifest or a Helm chart that is pushed into a git repository. This special git repository, of course, is being watched by Argo CD. Any changes that happen in the git repository are automatically pushed to any designated cluster, whether they're Helm charts, Kustomize, or even raw manifests.
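As a concrete sketch (not shown in the session itself), an Argo CD `Application` pointing at such a repository could look like the following; the repository URL, path, and names are hypothetical placeholders:

```yaml
# Hypothetical Argo CD Application watching a git path of manifests/Helm charts.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                    # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git  # placeholder repo
    targetRevision: main
    path: apps/my-app             # placeholder path within the repo
  destination:
    server: https://kubernetes.default.svc   # the designated cluster
    namespace: default
  syncPolicy:
    automated:                    # auto-apply changes detected in git
      prune: true
      selfHeal: true
```

With `syncPolicy.automated`, any commit to that path is applied to the destination cluster without a manual sync.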
But what about the clusters themselves? Provisioning one would include provisioning the underlying infrastructure. As an example, on a public cloud like Amazon, this would be creating the networking constructs like the VPCs, the subnets, and the security groups, and then, of course, after all the core infrastructure is provisioned, provisioning the actual EKS cluster and any node groups that sit on top of it. So once the cluster is fully up and running, like I mentioned before, the previous workflow applies, right?
So why is this a problem? Well, if you start thinking about Kubernetes, if you're still in the experimentation phase, maybe it's okay. However, as you start moving into early productization, there are a few aspects that need to be considered. You know, while Kubernetes has become that common control plane, a common operating system for running containerized applications, the clusters themselves still need to be managed.
So, instead of having a separate workflow for managing your Kubernetes lifecycle and your applications, wouldn't it be nice if you could somehow manage even your Kubernetes infrastructure with GitOps, right? How would you be able to connect Argo CD with Kubernetes, specifically for it to manage the Kubernetes infrastructure lifecycle? This is where Cluster API comes in. For those of us who are not really familiar with it: Cluster API is a declarative way for you to manage your Kubernetes clusters.
It is governed by the CNCF Cluster Lifecycle Special Interest Group, has a massive community, and is being adopted by virtually all modern Kubernetes management platforms, including Google and VMware Tanzu. Even our company, Spectro Cloud, relies very heavily on Cluster API and its capabilities.
So what's unique about Cluster API is its declarative management. Similar to how you manage other resources in Kubernetes clusters, you describe the entire state of the cluster and the configuration that is to be provisioned. This specification becomes a blueprint, or a template, used to orchestrate and manage the cluster, and the APIs are really Kubernetes-style APIs.
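A minimal sketch of what such a declarative specification looks like (all names here are illustrative, and the referenced control-plane and infrastructure kinds depend on which provider you use):

```yaml
# Illustrative Cluster API cluster object; the cluster's desired state lives in git.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:                     # what manages the control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:                   # which provider provisions the infra
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: demo-cluster
```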
So Cluster API does provision end-to-end, multi-master, conformant clusters across any environment. It lets you perform all the day-one and day-two lifecycle tasks, such as your scaling operations and upgrading your Kubernetes clusters. And because it does manage the end-to-end lifecycle, from the infra all the way up to the Kubernetes layer, it also has some very powerful resiliency capabilities: potentially, if one of the nodes that Cluster API is managing becomes faulty, Cluster API can automatically replace it by provisioning a brand-new node to take over.
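As a hedged sketch of what those day-two operations look like declaratively (names and versions below are placeholders, not from the talk), worker nodes are typically described by a `MachineDeployment`, so a scale-out or an upgrade becomes just an edit committed to git:

```yaml
# Illustrative worker pool: change `replicas` to scale, bump `version` to upgrade.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: demo-workers
spec:
  clusterName: demo-cluster
  replicas: 3                      # scaling operation: edit and commit
  template:
    spec:
      clusterName: demo-cluster
      version: v1.22.9             # upgrade: bump and commit
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: demo-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: demo-workers
```

If a node backing one of these machines becomes unhealthy, the controllers reconcile back to the declared replica count by provisioning a replacement.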
So how does Cluster API work behind the scenes? Cluster API is a Kubernetes project built using operators and controllers. There are high-level abstractions, or CRDs, that define the cluster aspects: the machine deployments and the related configurations. Now, for each of the different environments or clouds that it supports, it uses a plug-in called a provider.
This provider, of course, manifests into different behaviors. In the case of Amazon, the implementation would be to provision EKS clusters with node groups, and the volumes as EBS volumes, right? Similarly, other clouds will, of course, have other types of implementations. So with Cluster API, this would really unify the workflow in terms of both the Kubernetes infra as well as the application workflows themselves. So now the dev and IT operations teams, using a declarative specification in a git repository, can specify which clusters to provision, which upgrades should happen, and so on.
So we're going to jump into a short demo where we will provision a brand-new Amazon EKS cluster using Cluster API, and we'll show the actual end-to-end lifecycle for that. Then we'll also show a quick demo of bare-metal provisioning of an actual cluster using Cluster API. And, of course, this whole thing will be driven directly through Argo CD.
If I drill down into this CAPI cluster and look at the details for this application, notice that it's looking at a specific directory in a specific repository, gitops-argocd, at the folder capi-clusters. So let's jump into that. I'm going to go into the actual repository and go into the capi-clusters directory. Right from here, there is at this point no YAML; there are no configurations for any cluster provisioning.
What we're going to do is go into the directory and copy the configuration for provisioning an EKS cluster, and I'll describe in a moment all the different properties that are specified here. We're going to go back into the capi-clusters directory, add a new file, call it cluster-aws3.yaml, and paste the content here. So this is the specification for Cluster API. It's describing a single new cluster to be provisioned, and, by the way, for the infrastructure in this case, it uses the AWS managed control plane.
This is the specification that says: use the provider for EKS. If we take a look at the EKS-specific configurations, you can either dynamically provision resources or statically place them on a specific network. So in this case we're specifying to use the network here, and then additional properties, like machine pool configurations, can also be specified. Now let's commit this into the actual git repository and go back into my Argo CD; obviously, Argo CD refreshes the actual manifests every minute.
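The exact manifest from the demo isn't reproduced here, but a hedged reconstruction of that style of EKS specification, using the AWS provider's managed control plane, might look like this (the region, version, and all IDs are placeholders):

```yaml
# Illustrative EKS control plane statically placed on an existing network.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
metadata:
  name: cluster-aws3-control-plane
spec:
  region: us-west-2                # placeholder region
  version: v1.22                   # placeholder EKS Kubernetes version
  network:
    vpc:
      id: vpc-0123456789abcdef0    # static placement on a specific VPC
    subnets:
      - id: subnet-0aaaaaaaaaaaaaaaa
      - id: subnet-0bbbbbbbbbbbbbbbb
---
# Illustrative machine pool configuration for the worker nodes.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: cluster-aws3-pool-0
spec:
  clusterName: cluster-aws3
  replicas: 2
  template:
    spec:
      clusterName: cluster-aws3
      bootstrap:
        dataSecretName: ""         # EKS managed node groups handle bootstrap
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSManagedMachinePool
        name: cluster-aws3-pool-0
```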
But you can force a refresh by clicking on Refresh. Notice that it detected a change; it's now loading in the latest specifications, and hopefully, within a few seconds, we should see all the various aspects of the actual Cluster API resources come in. I'm going to refresh the screen right here. Notice that it's now provisioning the Cluster CRD object, the machine pools, and all the other configurations.
If I go into the AWSManagedControlPlane and click on the Events tab here (give it a second), it's going to show all the behind-the-scenes provisioning that's happening for the cluster. We can even log directly into the EKS console here, and if I refresh...
Notice that there is now an eks3 control plane that is in a creating state, and Cluster API will actually wait for the entire cluster to be fully provisioned before it proceeds with the next operations. With that, I'm just going to real quickly also show how to provision a MAAS cluster. So I'll go back into the gitops-argocd repository, into the capi-clusters directory, and into the bak folder.
There are some unique fields here in terms of specifying, you know, which machine image template to use and what the specifications for the bare-metal system are. But if I copy this configuration, go back into the actual cluster repository, and go into capi-clusters here, we can add a new file, which I'll call cluster-maas-2.yaml, paste the content, and commit it.
And then, within a few seconds, inside of my Argo CD, it's going to detect that there's a new modification here and then, of course, start provisioning the MAAS cluster behind the scenes as well. So Cluster API really makes it very easy to not only provision but manage the end-to-end lifecycle of your clusters.
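For the bare-metal case, the same pattern applies: the cluster object simply references a bare-metal infrastructure provider instead of a cloud one. A heavily hedged sketch follows; the MAAS provider's exact API group, version, and kinds vary by release, so treat every name below as an approximation rather than the demo's actual manifest:

```yaml
# Illustrative bare-metal cluster: only the infrastructureRef changes.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cluster-maas-2
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: cluster-maas-2-cp
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # approximate
    kind: MaasCluster                                     # provider-specific CRD
    name: cluster-maas-2
```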
But that's not the end of the story, right? As customers continue their journey and adoption of Kubernetes, there are problems that Cluster API does not address. For example, Cluster API only manages the core layers, you know, the operating system, Kubernetes, networking, and so on. But who manages the additional add-on services: your logging, your monitoring, security, and other aspects of that, right? There are other day-two operations beyond what Cluster API supports: everything from providing role-based access control within the cluster to cost control, backup and restore, and logging and monitoring solutions.
All of these different capabilities need to also be addressed. Now, either you build them manually, or (this is where the selling part comes in) our product, Spectro Cloud Palette, does it for you. As I mentioned before, CAPI is foundational for Palette. In fact, we are one of the key startup contributors to Cluster API, and what we do with Palette is extend Cluster API to cover the entire stack, all the layers, with 15 to 16 different integrations when it comes to logging, monitoring, service mesh, and so on.
We make it easy to integrate it all as a declarative model. It caters to all the day-two functionality, from your backup and restore to your quota control and any additional enterprise-grade controls you may need. And a third selling point for us is that Palette is not opinionated. In fact, you can bring it into any existing environment. Even if you have existing OpenShift or legacy technologies, we can work with those cluster technologies, and we work with any distribution, in any environment, in any cloud.
So let's take a look at a complete workflow now with Palette in the picture. Palette, of course, integrates with Cluster API, but now, holistically, from a singular declarative model that your DevOps and IT engineers are maintaining, you not only get the provisioning of clusters but also the management of all the day-two operations. Especially as you move to production across multiple clouds, it makes it really easy, end to end.
And so with that, that's the final slide that I have. I want to say, you know, come and talk to us wherever you are in your cloud-native and Kubernetes journey, whether you are starting now or moving your containers to production.
We can really help you simplify the way you manage your Kubernetes environments. By the way, if you are interested in Palette, we are running a promotion for ArgoCon users, where we're providing them with free access for a few months for any Kubernetes project they may have.
And beyond Palette, if you have any questions on Kubernetes or Cluster API, myself and my team will be more than happy to help out, so feel free to email me at saad@spectrocloud.com or tweet at me, @saadmalik.
So thank you so much. That's my presentation.