Don't miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands, from 18-21 April 2023. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
So, my name is Nick, I'm the DevRel lead at Spectro Cloud, and what I want to focus on today is a deep dive on a hands-on use case. Typically you will find a lot of talks and presentations on Cluster API, GitOps, and both together, but most of the time they will give you a simple example or a very simple use case. Today I want you to have a transparent experience: I'm not going to hide anything around Cluster API, GitOps, and Argo CD, which is the GitOps tool we're going to be using. We're going to do everything live, and we're going to see what considerations you have to make to actually make GitOps with Cluster API useful in your organization.
So let's take a quick look at the agenda for today. First, I'm going to quickly go over Cluster API and the GitOps principles. Then we're going to take a closer look at the Cluster API components and what the provisioning workflow looks like. Then we're going to talk about what sort of additional tools you may want on top of Cluster API and address some caveats, especially when you want to combine it with Argo CD or any other GitOps tooling. And then we'll have the use case deep dive with live demos: as we move forward, we're going to start with a simple demo and then build up to our full use case.
So let's get started. Before talking about Cluster API or GitOps or any other tooling that works with the same pattern in Kubernetes — the operator pattern — let's talk about that pattern first. Operators are not something really new; they've been around for a couple of years now. But what makes them very different compared to what people used to do in Kubernetes before is that you can bring automation inside of Kubernetes, as opposed to infrastructure as code and other tools like that, which live outside of Kubernetes. With the operator pattern, you are bringing automation capability within the Kubernetes cluster. So how does it work? It has multiple components. First, you have what we call custom resource definitions.
In the case of Cluster API, it implements a lot of custom resources — the Cluster itself, Machine, MachineDeployment; we're going to see them later — but they are essentially resources that extend the native Kubernetes API by making those objects first-class citizens inside Kubernetes. Then the second component of the operator pattern is a custom controller. The role of the custom controller is to monitor changes made to those custom resources, like the Cluster API Cluster and others, and then react and perform certain actions based on events.
So, for example, if you create a new Cluster API Cluster, the Cluster API custom controller will make sure that this cluster gets deployed in the right environment with the right machines, the right instance sizes, the right network, and so on. It's automating things outside of the Kubernetes cluster while at the same time monitoring a representation of these resources within the Kubernetes cluster, and it does this permanently. In other words, the custom controller — and it doesn't have to be only one; you will see that in Cluster API there is more than one custom controller — creates a reconciliation loop between the desired state, which is the resources created within the cluster, and the real current state of the infrastructure, which is what exists outside of the Kubernetes cluster. In the case of Cluster API, that is the state of the cluster you want to deploy.
So now let's talk about the GitOps pattern. GitOps means that you are storing not only your code in your repositories — that would be the dev who stores their application code in a Git repo, which is at the top of this representation here. Traditionally, that push triggers a pipeline, maybe living within GitHub Actions in that particular example, that builds the image; that's the traditional developer pipeline. Now, a GitOps pipeline adds another component to it.
We need an extra tool to create a reconciliation loop between this desired state — the manifests — and what is deployed in the cluster, the current state. In our example today we're going to be using Argo CD, which is going to be responsible for that particular automation, so any change pushed to the manifests will be implemented in the cluster. For example, if you change a container image, Argo CD will reconcile this within the cluster and replace the image where appropriate.
Everything happens declaratively; there's no imperative command, so it avoids possible mistakes made by humans, because we bring in this extra automation. And then there's a second benefit, which is around security: now all those operations are performed by the service account associated with Argo CD, and only that particular service account requires permissions to perform actions within a particular namespace or within the cluster, which, in the end, reduces the attack surface of that cluster.
So now let's take a quick overview of Cluster API and some of its main components. At the very top, the Cluster API Cluster type is merely an interface for more specific, lower-level implementation details that are implemented by the infrastructure provider. Then we also have other providers that Cluster API relies upon: the bootstrap provider and the control plane provider. As I was saying, the infrastructure provider's role is to encapsulate all the tasks that are specific to a particular cloud or infrastructure.
The role of the bootstrap provider is basically to turn any machine into a Kubernetes node using cloud-init scripts. For the bootstrap provider, if you don't specify anything, kubeadm is going to be used by default, so that's going to be the cloud-init script that makes sure kubeadm turns that instance into a Kubernetes node. Then there are also bootstrap providers for MicroK8s and for Talos.
If you are deploying a Talos cluster — which is, let's say, a very opinionated Kubernetes cluster — you'd use that one. The control plane provider is responsible for creating the control plane nodes: in the same way the bootstrap provider is responsible for turning a machine into a worker, the control plane provider is responsible for turning a machine into a control plane node. It holds all the config specification, the type of instance, the machine template
you want to use, all those kinds of things. All three of those components are combined into options for the clusterctl init command that you run when you want to deploy the Cluster API components into your existing management Kubernetes cluster — the one you are going to be using to host the Cluster API resources. So clusterctl init takes those three options: -b for the bootstrap provider, -i for the infrastructure provider, and -c for the control plane provider.
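As a concrete illustration, initializing a GCP-backed management cluster might look like the sketch below. The credentials variable and provider names follow the documented clusterctl/CAPG conventions, but the exact values are assumptions, not taken from the demo.

```shell
# Sketch: point clusterctl at GCP credentials (CAPG reads this variable),
# assuming a service-account JSON key exists at the path shown.
export GCP_B64ENCODED_CREDENTIALS="$(base64 -w0 ~/gcp-credentials.json)"

# -i/--infrastructure, -b/--bootstrap, -c/--control-plane select the providers.
# kubeadm is the default for bootstrap and control plane, so the last two
# flags are shown only to make the three options explicit.
clusterctl init \
  --infrastructure gcp \
  --bootstrap kubeadm \
  --control-plane kubeadm
```

Running this against the current kubeconfig context turns that cluster into the Cluster API management cluster.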
So now let's take a look at the different controllers involved with Cluster API and the different resources they are responsible for, so we can map the components here to what we've just seen. In the middle we have the controller manager for the infrastructure provider — the CAPG controller manager. On the top right is the bootstrap provider's corresponding controller manager, and on the bottom right the control plane provider's corresponding controller. And then on the far left are the resources themselves.
So the Cluster is, as I said earlier, the interface for a more specific implementation of the cluster within the cloud provider you want to target. This is why it makes a reference to the infrastructure provider — to the cluster inside the infrastructure provider, which is the GCPCluster. If the infrastructure provider is GCP, there's a one-to-one mapping between the generic Cluster and the more specific GCPCluster.
But of course the cluster is also composed of the control plane, so this is why you also have a reference to the control plane provider from within the Cluster. So we have a reference to a GCPCluster, and we have a reference to the controller manager responsible for deploying the control plane. Now, for the worker nodes, this is where you have the MachineDeployment, defined here on the left, and it's similar to a Deployment in Kubernetes
in the way it behaves. In Kubernetes, a Deployment resource is responsible for controlling how pods are deployed within the cluster: it manages the number of replicas and how they are started, and makes sure you always have the desired number of pods running. Same principle here, but for your worker nodes: the MachineDeployment manages MachineSets, and a MachineSet is composed of one or several Machines.
And again, the Machine is quite generic, and it has a one-to-one mapping with a GCPMachine; the GCPMachine encapsulates the specific information related to your infrastructure provider — to GCP, in our case. Then the MachineDeployment itself, which works by making reference to a template, will also have a reference to a GCPMachineTemplate, the same way a Deployment makes reference to a pod template.
Also, I didn't represent all the relations here in the picture. For example, the control plane also has a relation to a GCPMachineTemplate, of course, because it needs to get the machine information from somewhere. So I didn't draw all the relations, or all the objects, because it would just be too much; these are just the main ones.
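The object graph described above can be sketched as a trimmed Cluster manifest. The kinds and reference fields below follow the Cluster API v1beta1 API, while the names are hypothetical:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: example            # hypothetical cluster name
spec:
  # one-to-one mapping to the infrastructure provider's cluster object
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: GCPCluster
    name: example
  # reference to the control plane provider's object
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: example-control-plane
```

The MachineDeployment, MachineSet, Machine, and GCPMachineTemplate objects hang off this graph in the same reference style.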
I know it may be a bit abstract at the moment, but what I can propose is to go through our first demo, where we'll deploy a workload cluster with Cluster API. The only thing is that I've already installed Cluster API in the management cluster, meaning that I've already run the clusterctl init command. But what we can do is check that I have nothing in my GCP environment, and then we're going to start from there.
So let's take a look at the management cluster right now. I'm using K9s, which is very useful to get quick access to whatever information you want on the cluster. Here I can see that clusterctl init has deployed the main components: the CRDs, which we're going to see in a minute, and also all the controller managers — the CAPG controller manager, which is the infrastructure provider controller, then the bootstrap controller manager, which is the bootstrap provider controller,
then the control plane provider controller, and the main Cluster API controller. In terms of CRDs, anything ending with cluster.x-k8s.io has been installed by Cluster API. So we can look here: clusters — I don't have any cluster deployed, all right — and the same with the one-to-one mapping to GCPCluster: nothing there yet. So let's create the cluster.
The command you want to run is clusterctl generate cluster, and then you specify the name of the cluster — that will be capi-nick here — the Kubernetes version, the number of control plane nodes, and the number of worker nodes as well. This is going to generate the manifests, which you can redirect into a file, and then we're just going to take a look at that file once it's been generated.
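A minimal sketch of that generate-and-apply flow, assuming the GCP environment variables clusterctl expects are already exported; the Kubernetes version shown is illustrative, not the one used in the demo.

```shell
# Generate the workload cluster manifests and redirect them into a file.
clusterctl generate cluster capi-nick \
  --kubernetes-version v1.25.0 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > capi-nick.yaml

# Review the generated manifests, then apply them to the management cluster.
kubectl apply -f capi-nick.yaml
```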
Okay, let's take a look. We have a bunch of information there. It starts with the Cluster, so generated information like the cluster network CIDR block here, a reference to the control plane, and a reference to the infrastructure provider — as I was mentioning before, the one-to-one mapping with the GCPCluster — plus a bit more specific information like the GCP network here, as well as the project ID and the region where we want to deploy the workload cluster. Now, in terms of the control plane, there's a bunch of information like the kubeadm configuration spec,
and the machine templates which, as I was mentioning, are also used by the control plane to deploy the nodes: here, the GCPMachineTemplates referenced by the control plane and the MachineDeployment, using that particular image — we're going to see what the restrictions on images are and what considerations you need to have for them — and the instance type. Then we have the MachineDeployment for the worker nodes, and the bootstrap config saying how you're going to join the cluster.
And then, yeah, that's basically all the information that we need to deploy our cluster. So now the only thing we need to do is apply this manifest to our cluster and see what happens. We have one, two, three, four, five, six, seven custom resources created, so we can already take a look here at the cluster.
Yeah, this is pending at the moment because — remember the machines — this is for the worker, and the worker is going to be provisioned once the control plane has been provisioned. We can take a look at some of the logs while it's deploying. The main Cluster API controller is basically waiting for the infrastructure provider — you can see it here — to provision all the different components in GCP, and this is the infrastructure provider's corresponding controller, so it's currently reconciling.
This is expected — reconciling the instance, so it must already be there in GCP; we're going to see in a moment. The bootstrap one is still waiting for the machine, the worker, and the control plane should be doing something here — still reconciling, so it's not finished yet. Let's take a quick look at what's happening in GCP, and then we'll continue.
We'll move forward and take another look later when we start the second demo. Here you can already see that the control plane has been provisioned, and in a moment we'll have the worker as well. But for the moment, let's go back to our slides.
And move forward. So the question you have to ask yourself now is: is Cluster API enough to deploy your cluster? Not really, as such. Because, okay, the cluster — although not fully deployed yet — once it's deployed, won't necessarily be working, because first of all we didn't install any CNI. It's not part of the Cluster API process to install the CNI; this has to be managed after the cluster has been provisioned. So that means it's provisioning nodes which are not ready.
Also, how do you plan for the underlying operating system used by Cluster API? Well, there's a tool called Image Builder — that's also part of the Cluster API documentation — which is a mix of HashiCorp Packer and Ansible to generate Kubernetes-ready images. What does Kubernetes-ready mean? It means that for the particular version of Kubernetes you want to install:
for example, if you want traditional instances — so non-managed Kubernetes, let's say in GCP like we did — the underlying operating system image needs to embed kubelet with the right version, and all the Kubernetes binaries need to be present there with the right version. So you need to build the image beforehand, and it's the same thing when you are upgrading.
Of course, you can declaratively upgrade your cluster by changing the version numbers, but first you have to make sure that the image you are using has the right versions as well. So this is something you have to manage yourself, which leads to other questions: how do you add additional software or infrastructure components like the CSI? Maybe you want some ingress, or any other components, any other software layer?
This is where GitOps principles may kick in, but we'll see what the different options are in a moment. And how do you provide autoscaling for the number of nodes? It's not part of the base components we've just seen; this is something extra you have to plan for.
So our more detailed use case, let's say, comprises all those components. First, to solve the additional software layer issue, the Cluster API add-on provider for Helm needs to be installed; that's an efficient way to deploy software inside your workload cluster directly from your management cluster as an additional step. Then there's also the Cluster Autoscaler project — the same sort of principle, where you deploy the autoscaler components; you can actually deploy them in the workload cluster or in the management cluster.
In our use case, we're going to deploy them into the management cluster. I say "them" because for every workload cluster you want to create, you need a dedicated instance of the Cluster Autoscaler: the pod needs to run for every workload cluster. Then, of course, we're going to be using Argo CD to deploy the workload clusters, and Argo CD will be responsible not only for managing the creation of the workload cluster, but also for the provisioning of the add-ons using the Cluster API add-on provider for Helm, and for feeding information to the autoscaler pod. Because in the architecture we are deploying today, the autoscaler is installed within the management cluster, the autoscaler pod needs to get access to the workload cluster to be able to monitor its resources.
If that happens, then it's going to provision new nodes. And after 10 minutes, for example, if you remove all those pods — if those nodes are not required anymore, because you have enough resources in your cluster — then the worker nodes are going to be destroyed. To do that, the Cluster Autoscaler pod also needs access to the kubeconfig of your workload cluster, so it needs to be implemented in a sequential fashion.
If you want to manage this with Argo CD, that leads to another sort of issue, because GitOps means that you're going to check in — push — information into Git, and you don't want to push a clear-text kubeconfig file into Git. So we're going to be using SOPS (Secrets OPerationS, from Mozilla) to encrypt the kubeconfig file, and Kustomize, which, like Helm, is a way to manage — not packages, but how you deploy applications in Kubernetes. Helm does packaging; Kustomize is more like a configuration management tool. Kustomize supports SOPS decryption via KSOPS, and that will allow us to decrypt the kubeconfig in Argo CD. But for this we need to implement a couple of things.
So first, we're going to be using the app-of-apps pattern in Argo CD. That just means that we're going to create an Argo CD Application whose only job — its only function — is to host other Application definitions. That particular pattern is useful to automatically add child applications. You have a representation here on the left: we have the parent application, which is capi-clusters, and you can then manage all your clusters as a pack of applications.
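A hypothetical parent Application for this app-of-apps setup might look like the sketch below; the repository URL and paths are placeholders, not the demo's actual repo.

```yaml
# Parent Application: its "source" points at a directory that itself contains
# Argo CD Application manifests (the children), so syncing the parent
# registers all the children.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: capi-clusters
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/capi-gitops.git   # placeholder repo
    targetRevision: main
    path: apps        # directory holding the child Application definitions
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
```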
So here, for the development environment that we are going to provision, we will need to do two things: deploy the cluster and then, of course, get the kubeconfig file; and then, in the second part, inject the kubeconfig into the autoscaler and deploy the autoscaler pods into the management cluster. So, to combine them into a single application pack, let's say, you can use the app-of-apps pattern.
As I said, the benefit is that those Argo CD applications now contain the definitions of the child applications, which means that as soon as you synchronize the parent — the capi-clusters application — then automatically, here on the right, you can see the child applications are going to be created within your Argo CD environment. So it makes the management a lot easier.
Argo CD supports both Helm and Kustomize to deploy manifests into the destination Kubernetes cluster. So we're going to be using Helm to customize the Cluster API resources and install our workload cluster resources into our Cluster API management cluster, and then we're going to be using Kustomize, mainly because of the SOPS capabilities, and because Kustomize can easily configure a specific portion of a resource manifest and customize it. That means Argo CD will need to be patched to support KSOPS, and also modified to import the decryption key.
We can see it's been provisioned. If we take a look at some of the custom resources — let's check the GCPCluster — okay, you can see that it is ready. And one last thing: let's take a look at the GCPMachines. Here, they're running, and now let's just double-check in the Google console that the cluster is effectively available. There you go — I need to refresh that page, and we should see the two machines appearing.
So we have the two machines, one in us-central1-c and the other one in us-central1-a. Okay, so I think we can say that it worked perfectly. Now let's move on to the next demo. The first thing I'm going to do is show you the structure of the repository we're going to be using for Argo CD.
That's for the cluster autoscaler and the dev cluster. Here, this is the configuration for Cluster API itself. So again we have the repo URL here, and we're going to be using the path helm/capi-gcp, which is this one; it has a standard Helm structure, with the manifest templates and the values.yaml that we're going to see in a minute. Now, if we look at Argo CD, let's just log in.
You can see that I have my application capi-clusters; this is the parent application. If I look at the configuration, you can see the repo URL — this is our main repo source — and the path is the apps section I just showed you. It's currently out of sync, and you can see that this application, the capi-clusters parent app, has two children: the dev cluster and the dev cluster autoscaler. So now let's go back to the repository.
Let's take a look first at the Helm section. We have the Cluster API templates; we're going to start with cluster.yaml which is, remember, the top object. We are going to use the same traditional configuration that can be generated by clusterctl, and we're just going to add a bit more templatization around it. So the name: we're going to put this into the cluster name in the Helm values file, and then we're also going to add the cluster —
the top-level cluster name — then the project and, more specifically, the GCP region. Then here, in the machine template for the control plane, we're going to be using a particular instance type for our control plane, and the GCP image we want to use is going to be specified there; same thing for the worker. Then, for the kubeadm control plane, we're going to specify the number of replicas for our control plane, and the Kubernetes version as well; for the worker it's basically just going to be the name and the MachineDeployment.
We're going to have again the cluster name and the different references, with the Kubernetes version and the number of worker replicas that we can also specify. And on top of that, we're also going to configure some annotations that are used by the Cluster Autoscaler: namely, we want to set the maximum size of the cluster and the minimum size of the cluster.
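Those two annotations are the ones the Cluster Autoscaler's cluster-api provider recognizes; a trimmed sketch on a MachineDeployment, with a hypothetical name and the demo's min/max values, could look like:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: capi-dev-md-0        # hypothetical MachineDeployment name
  annotations:
    # opt this node group into autoscaling and bound its size
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
```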
So let's take a look at the values.yaml, which basically contains all the values that we want to define. The cluster name: this is capi-dev. We specify the GCP project and region, the instance type for the control plane — an n1-standard-2 — and the same thing for the worker instance type, plus the GCP image. We're going to start with the number of control plane replicas at one, same thing for the worker; the max we're going to set to 10, and we've set the minimum to one, actually. And in terms of the add-ons, we're going to install nginx, and then we're also going to install Calico.
Now let's take a look at the Kustomize section. We have the base directory with all the manifests required to deploy the Cluster Autoscaler. So we have, traditionally, the ClusterRoleBindings for the management cluster and for the workload cluster, the ClusterRole, and the Deployment required for the Cluster Autoscaler to be installed — and this is where you have the command.
This is the part that is of interest to us: this is where we want to specify the kubeconfig file for the autoscaler to monitor the workload cluster, and this is also defining the name of the cluster we want to monitor. So those two things are the most important ones: the auto-discovery here, as well as the kubeconfig file.
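A sketch of that container spec fragment, using the documented flags of the autoscaler's clusterapi cloud provider; the image tag, mount path, and cluster name here are assumptions, not the demo's exact values:

```yaml
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.1  # example tag
    command:
      - /cluster-autoscaler
    args:
      - --cloud-provider=clusterapi
      # kubeconfig of the workload cluster to monitor (mounted from the
      # decrypted ConfigMap in this setup)
      - --kubeconfig=/mnt/kubeconfig/value
      # discover only the node groups belonging to one named cluster
      - --node-group-auto-discovery=clusterapi:clusterName=capi-dev
```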
So for our kustomization here, we basically need to enable — activate, sort of — all these resources for Kustomize to use them in our overlay. In our overlay, this is where we are going to override the configuration. So first, the deployment: what we're going to change here is basically the name we want to give to the cluster. If we compare it to the one above — this is the original one, the base one, which is capi-nick — in the deployment here this will be overridden. We also have, if we look in the kustomization, a prefix "dev-" that we're going to add to all the resources that we are going to create, and we also have a name reference file: all the names that we are going to modify potentially have references in other fields, and this is where we tell Kustomize to modify the name accordingly in all those references. And then we have a secret generator.
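Putting those overlay pieces together, a hypothetical kustomization.yaml might look like this (the file names are illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # the autoscaler manifests enabled in the base
namePrefix: dev-            # prefix every resource for this environment
configurations:
  - name-reference.yaml     # tells Kustomize which fields reference renamed objects
generators:
  - secret-generator.yaml   # KSOPS generator pointing at the encrypted kubeconfig
```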
This is because we want to decrypt the kubeconfig file, and the kubeconfig file we want to decrypt has this path here. So there is one from one of my prior tests; we're going to rewrite it, because we're going to deploy a new cluster. This one is remaining from my previous test, and you can see that it's effectively encrypted there: I cannot make any use of that file if I want to connect to the cluster.
So let's go back to our terminal there. You can see that I have an Argo CD namespace with many different pods running. So let's take a look first at the Argo CD server located there: you can see that, if I look for the image, it is running an image from my personal registry, where I've added KSOPS to Argo CD — the KSOPS capability. So that's one thing that we need.
Then, if we take a look at the repo server and look just for "pgp", you can see that I have created an init container whose role is going to be to import the PGP key that is mounted as a volume into the container. And then, once it does its job, the payload container, let's say, will take over and perform its normal role.
There's one last thing I want to show as well in terms of Argo CD configuration. We have a ConfigMap there where I have added an extra option so that KSOPS can be used: because it's a plugin, you need to add this extra Kustomize option when using Kustomize to do anything. So when Argo CD calls Kustomize, it's going to add this specific option that will allow for the usage of KSOPS.
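For reference, that option lives in the argocd-cm ConfigMap under kustomize.buildOptions; a minimal sketch is below (the exact flags vary by KSOPS version — newer, exec-based KSOPS also needs --enable-exec):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # passed to every "kustomize build" Argo CD runs, so plugins like KSOPS load
  kustomize.buildOptions: "--enable-alpha-plugins"
```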
Now let's go back to Argo CD, and what we are going to do is reconcile — resync — the parent application, so that our child applications can be installed. If I look at the application status there, I've got my single parent application, and that's basically it. So let's sync it and see what happens.
Now, if you look in the capi-clusters application, you can see that the dev cluster and dev cluster autoscaler child applications have been synchronized, and of course I've now got both my dev cluster and dev cluster autoscaler out of sync. So, in order: we want first to deploy the cluster, then we'll have to do a couple of manual actions to encrypt our kubeconfig file and commit and push it into our repository, and then we'll proceed with the Cluster Autoscaler installation.
So first, let's sync the dev cluster. You can see all the components: those are all the Cluster API manifests and the objects that are required for the workload cluster deployment, as we've seen before. So let's sync this first and check what happens. It's going to basically deploy the cluster, so you're going to see it start by creating the Machine and GCPMachine objects and reconciling them in GCP, and we can take a look.
So now let's go back to our top application view. Our dev cluster autoscaler, of course, is still out of sync, but before proceeding with the dev cluster autoscaler install, we need to encrypt the kubeconfig file into a ConfigMap. So let's do this now. For this, let's go into our Visual Studio Code environment and open a terminal there. I'm connected to the right cluster — here on the top right — so we can quickly kubectl get cluster.
We can see that our capi-dev cluster is, of course, provisioned. So what we will do now is get the kubeconfig file from clusterctl. Here we go. Now we need to create a ConfigMap out of that kubeconfig file — capi-dev-kubeconfig.yaml — and then we can encrypt this with sops -e on that file, into the expected file, which is called kubeconfig.enc.yaml.
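Those manual steps can be sketched as the following commands, assuming clusterctl, kubectl, and a configured sops are available; file and ConfigMap names only loosely mirror the demo:

```shell
# Pull the workload cluster's kubeconfig from the management cluster.
clusterctl get kubeconfig capi-dev > capi-dev.kubeconfig

# Wrap it in a ConfigMap manifest without touching the cluster.
kubectl create configmap capi-dev-kubeconfig \
  --from-file=value=capi-dev.kubeconfig \
  --dry-run=client -o yaml > capi-dev-kubeconfig.yaml

# Encrypt it with SOPS before committing; the KSOPS generator is
# configured to look for the *.enc.yaml file.
sops -e capi-dev-kubeconfig.yaml > kubeconfig.enc.yaml
```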
So we can already see that the kubeconfig ConfigMap has been decrypted with the right information, and now let's just sync to install the Cluster Autoscaler.
Okay, it's being deployed — the pod is being deployed. So now we can check that a new pod has been deployed into our management cluster, and then we can proceed with the autoscaling test and just check that all the add-on applications have been installed. Let's connect to our terminal now to check that they've been effectively installed.
There's the dev autoscaler there, and if we display the configuration, we can see the arguments for the command are effectively the ones that we have defined in the kustomization file, so everything looks good there. We can check the logs quickly — it's waiting for logs. Now, on the second screen here, let's connect to our dev cluster. So first, let's get the kubeconfig file; we can just set it to the capi-dev kubeconfig and launch K9s.
Now, let's take a quick look at the add-ons and where they are coming from. They are coming from other CRDs that we have installed with the add-on software layer. We can see all of these: we have the HelmChartProxies and the HelmReleaseProxies that will install Helm charts in the destination workload cluster. So here we've added two of them: first the nginx one, and then the Calico one.
So the difference is that, as you can see here, the Calico one doesn't have any specific label selector for matching clusters — it's basically all of them. The current matching clusters are all of them, because we haven't specified any label selector in the corresponding manifest for that custom resource. For nginx, it's a bit different: you can see here that you have matchLabels for the cluster selector, so only the clusters — the Cluster API clusters — that have that particular label will get nginx installed, and you can see it here.
This deployment is reserving — requesting — quite heavy, large resources, and I have five replicas, so that should trigger the creation of new nodes. Let's apply that deployment, and again, let's check the cluster: you can see that now I've got multiple pods pending, and on the right here you can see that the dev cluster autoscaler process is asking to scale up that particular cluster, meaning that it's going to trigger the creation of new machines in GCP.
So you can see here: waiting for the Kubernetes node on the machine to report ready state. Now, if we go back to the infrastructure provider logs — yeah, it's reconciling the new machines, the new dev instances — and we can check here that Calico is currently being deployed on the new nodes. So we effectively have new nodes being deployed.
That concludes our demos for today — I hope they've been useful — and now let's conclude the presentation. So we've seen that GitOps plus Cluster API is not necessarily an easy task. If you want a comprehensive set of features, this is mostly a do-it-yourself process, and you have to dig into very different concepts and different software.
So this is where Palette from Spectro Cloud can really help, because it's based on the same principles as Cluster API. Actually, we have committed a lot of code and contributed to a lot of the existing Cluster API providers, and the additional features that Palette adds are related to more enterprise features. We are really decoupling things and making Cluster API more usable by separating the actual cluster from reusable cluster API profiles, and on top of that we add extra security and enterprise features.
I mentioned things like security scans, backups, SBOM tracking, role-based access control, etc. Whether it's managed or unmanaged Kubernetes, you can use Palette for any cloud or at the edge. You can try it yourself — it's a freemium model, so you can have up to, I would say, two or three clusters for free, depending on the resources and the number of nodes — but you can really give it a try. It's very declarative, the same sort of reconciliation principle for deploying and managing all your clusters.
Now, what I want to add — the key takeaways for today. Of course, Cluster API is a proven tool to manage Kubernetes clusters. If you combine Cluster API with GitOps, then you can really provide a lot of automation at large scale, but it's just the beginning: as we've seen today, Kubernetes needs many more software layers to be production-ready. We've seen the Cluster Autoscaler and the additional add-on software — things that Palette will give you basically out of the box. So Cluster API gives you the foundation, and you have to build on it to add all your enterprise requirements, and of course that may require a lot of sweat and bitten nails. So yeah, you can try our Palette to see the difference and how easy it is compared to what we've done today: what we've achieved in one hour, you can probably do in less than five minutes.