From YouTube: Webinar: Cluster API (CAPI) - A Kubernetes subproject to simplify cluster lifecycle management
Description
During this talk we’ll do a walkthrough of Cluster API (cluster-api.sigs.k8s.io), a project of SIG Cluster Lifecycle to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. After introducing the project, we’ll do a live demo, showing how to quickly create a cluster using AWS, scaling it up, upgrading it, then spinning up an Azure cluster. Finally, we’ll leave some time for Q&A.
Presenters:
Katie Gamanji, Cloud Platform Engineer @American Express
Naadir Jeewa, Senior Member of Technical Staff @VMware
A
All right, so we are going to go ahead and get started. I'd like to welcome everyone who's joined us today to today's CNCF webinar: Cluster API (CAPI), a Kubernetes subproject to simplify cluster lifecycle management. I will be moderating today's webinar about one of the coolest CNCF projects, and I hope you enjoy it.
A
Just a few important items before we get started. During this webinar you are not able to talk as an attendee, but there is a question-and-answer box at the bottom of your screen. Please feel free to drop your questions in there, and we will get to as many as we can at the end, as long as we finish on time.
B
Hello, everyone. My name is Katie Gamanji, and I am a Cloud Platform Engineer at American Express. Four months ago I joined American Express, and I am part of the team that aims to transform the current platform by embracing cloud native principles and making the best use of open source tools as well. Quite recently I've been elected as one of the Technical Oversight Committee (TOC) members for the CNCF.
C
Hi, I'm Naadir Jeewa. I'm a Senior Member of Technical Staff at VMware, and I'm also a co-maintainer of the Kubernetes Cluster API Provider AWS; that's the AWS-specific implementation, which we'll get to a bit later. I've worked with AWS for many, many years and moved into the Kubernetes space around two years ago. Yeah, that's me.
B
And today we're going to start the talk by giving an overview of Cluster API, but, more importantly, giving an overview of the ecosystem that enabled Cluster API to be created as a tool in the landscape. We're going to follow this with how Cluster API achieves cloud agnosticism, as well as some of the key features of Cluster API, and provide a demo of how Cluster API can be installed. If my demo works, we're also going to give an overview of how Cluster API can be used in association with a GitOps model, and we're going to conclude by giving an overview of the roadmap and how you can contribute and get involved with Cluster API. So, without any further ado, I'd like to give an introduction to the ecosystem that enabled Cluster API to be created.
B
If we go back six years, there were plenty of tools that provisioned container orchestration frameworks and their capabilities. Some of these tools were Docker Swarm, Apache Mesos, CoreOS Fleet and Kubernetes, and most of them provided viable solutions to run containers at scale. However, in the last years Kubernetes took the lead in defining the principles of how to run these containerized workloads. Nowadays Kubernetes is known for its portability and adaptability, but more importantly for its approach towards declarative configuration and automation, and this prompted a lot of users to get involved around Kubernetes, based on its open source nature. More than 58% of companies are using Kubernetes in production systems; this is based on the CNCF survey from late 2019. It's worth mentioning that a further 42% of the surveyed companies are prototyping Kubernetes as a viable solution going forward, and a milestone I'd like to mention here is that more than 2,000 of the surveyed companies are using Kubernetes in production in enterprise systems.
B
Now, this showcases the maturity and the high adoption rate for Kubernetes. There has also been a transition towards the development community: more than 2,000 engineers are actively collaborating, from feature build-out to bug fixing, and when we look at the end-user community, more than 23,000 attendees were registered at the KubeCons around the world.
B
If we look into the ecosystem at the moment, there is a plethora of tools providing bootstrap capabilities for a cluster. Some of these tools are well known, such as kubeadm, kops, Kubespray, Tectonic and many more; there are more than 100 tools providing a solution to create a cluster at the moment. However, if we look at every single tool, it is difficult to find a common denominator.
B
When it comes to supported cloud providers, every single tool integrates with only a subset of the existing cloud providers, and this imposes quite a few challenges. Suppose, as an end user, you'd like to migrate your infrastructure to a different cloud provider: even if you use the same bootstrap tool, you're going to have very few reusable components across different cloud providers.
B
Usually the end result is rewriting your infrastructure as code from scratch. Think about what happens if you'd like to migrate your infrastructure altogether, or move to a new tool. In this particular case, Tectonic, which I introduced earlier, is no longer under active development and is to be merged with the OpenShift Container Platform. That means all the clusters which were provisioned using Tectonic cannot move forward unless the project is forked and maintained in-house.
B
Usually, if you move to a new tool, the result is again rewriting your infrastructure as code from scratch. Another challenge is imposed by regions such as China and Russia, because deploying infrastructure in these regions is quite challenging: these regions have their own tooling to provision infrastructure. Most of the time, as an end user, you lose the capability of lifting and shifting a cluster across different regions, especially into China.
B
When we are talking about Cluster API, we refer to SIG Cluster Lifecycle, which made the first initial release in April 2018. Since then there have been two more releases, and the project is now at the v1alpha3 API version. As I was mentioning, Cluster API integrates with different cloud providers, and currently there are 14 cloud providers that actively collaborate and integrate with Cluster API. We have support for some of the major cloud providers, such as AWS, GCP and Azure.
B
On the other side, we have support for bare-metal providers such as Packet and Metal3, but, more importantly, we have support for Chinese cloud providers such as Baidu Cloud and Alibaba Cloud. This is extremely empowering for the end-user community, because Cluster API enables us to create clusters in China with the same ease we do so, for example, in Europe or in a US region.
B
Let's look in a bit more detail at how Cluster API works. Suppose you have a task or project to create different clusters on different cloud providers and in different regions. The first step we need to achieve is the creation of a management cluster. For testing purposes, it is recommended to use kind to provision the management cluster; kind is pretty much a Dockerized version of Kubernetes. If you'd like to use Cluster API in production, a long-lived management cluster is recommended instead.
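The first step described above can be sketched as two commands, following the Cluster API quick start; the tools are real, but the cluster name is illustrative.

```shell
# Create a local "Dockerized" Kubernetes cluster to act as the
# temporary (testing-only) management cluster.
kind create cluster --name capi-mgmt
kubectl cluster-info   # confirm the management cluster is reachable
```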
B
The second controller we're going to require is the bootstrap provider, and this is the component that translates the YAML configuration into cloud-init scripts; it is what makes it possible to attach an instance from a cloud provider to the cluster as a node. Currently this capability is supported by kubeadm and Talos, and it's worth mentioning here that the bootstrap provider will manage the control plane, which is a new resource introduced in v1alpha3 of Cluster API.
B
The last controller we're going to require is the infrastructure provider, and this is the component which actually interacts with the cloud provider APIs and provisions our infrastructure: EC2 instances, subnets, security groups, or IAM roles you would like to attach to your machines. All of these are going to be created by this controller.
B
To illustrate the contrast, I'm going to mention Kubespray; Kubespray is a tool that uses Ansible underneath to provision the infrastructure as code. In this particular view is a trimmed output of all the Ansible roles a developer needs to be aware of when creating, configuring and managing a cluster. Cluster API completely rethinks the management flow and the lifecycle of a cluster and reduces it to a couple of manifests. As such, if you'd like to create a cluster in AWS, these are the manifests we're going to require.
B
In this particular instance we're going to have a Cluster CRD, and this usually creates all the networking components necessary for a cluster. As you can see, in the spec section we choose a /16 CIDR block for our pods. In the first line we can also see that this particular CRD was created with v1alpha3, which is the latest version of Cluster API.
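A minimal sketch of such a Cluster manifest, assuming the v1alpha3 API; the names and CIDR here are illustrative, since the talk does not show the exact file.

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: capi-demo
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # the /16 chosen for pods
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: capi-demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: capi-demo
```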
B
With the new version of Cluster API we have the control plane reference, which is going to manage our masters, and we just link a template to that configuration. However, from all of this manifest I would like to draw your attention to the infrastructure reference, because this is the component which actively integrates with the cloud provider APIs. With the infrastructure reference in this particular case, we're going to create it at the v1alpha3 endpoint, and we're going to have the kind AWSCluster.
B
What's going to happen underneath is that this is actually going to invoke the parameters which are very specific to this cloud provider. As such, we say that we would like to create our cluster in the region eu-central-1, and we say that we'd like to attach an SSH key with the name "default". Again, this is just an example of the variables we can have; we can have a deeper customization of our cluster as well. Now, if you'd like to create this cluster in GCP, these are going to be the only changes required.
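The AWS-specific object just described can be sketched like this, assuming the v1alpha3 AWS provider; the cluster name is illustrative.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
metadata:
  name: capi-demo
spec:
  region: eu-central-1    # AWS-specific parameter
  sshKeyName: default     # SSH key attached to the instances
```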
B
As you can see, we can reuse our Cluster CRD; we just change our infrastructure reference from AWSCluster to GCPCluster, and this in itself will invoke the configuration which is specific to GCP. As such, the region naming is quite different in Google Cloud, and here we're going to have the region name europe-west3. We also have the concept of a project, which again is GCP-specific, so we associate our cluster with the project capi, and we attach a network by giving its name, "default".
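For comparison, the GCP equivalent can be sketched as below, again assuming the v1alpha3 GCP provider with illustrative names; note how only the provider-specific fields change.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: GCPCluster
metadata:
  name: capi-demo
spec:
  region: europe-west3    # GCP uses different region naming
  project: capi           # project is a GCP-specific concept
  network:
    name: default
```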
B
If we want to take this a further step, to actually show the simplicity of reusing our manifests: this is how you're going to create a cluster in Azure. Again, we just change our infrastructure reference; the kind is going to become AzureCluster, and this in turn invokes the parameters for that particular cloud provider. As such, the location is going to be West Europe.
B
We can specify our network name, and we can even specify a resource group if we need to. You pretty much get the gist of it: how simple it is to use a manifest and really tailor your cluster beyond the demo configuration. Over time this has been a very efficient and very effective project, and this is because Cluster API uses building-block principles. It does not concern itself with actually integrating with the Kubernetes source code and consuming its capabilities; instead, it builds on top of the existing primitives.
B
Now, the important feature of Cluster API is the fact that it's cloud agnostic. It creates one interface that allows you to connect to different cloud providers, but, most importantly, you can switch between cloud providers with minimal changes to your manifests. The last thing I'd like to mention about Cluster API is that it's currently under assiduous, active development, and if you have a use case for Cluster API, please give it a try and feed back to the community.
C
Okay, hopefully that's working for everyone. So I'm going to cover how we built Cluster API to achieve this cloud agnosticism, and why we think building clusters in this way is the right way to be deploying Kubernetes clusters. If we think about a Kubernetes cluster, there's a bunch of components that we need to have running: we need a container runtime, whether containerd, CRI-O or something else, and we need etcd as the backing store for Kubernetes.
C
We need to deploy the API server; we need to manage the bunch of microservices that make up a Kubernetes control plane; and we need to manage the lifecycle of these, to figure out how we're going to ensure that upgrades happen. So Cluster API is really meant to solve this problem once and for all. We've spent the last couple of years reinventing Kubernetes installers, and we should stop doing it and move on to the deeper challenges, like how you build cloud native applications within an organization.
C
We want to take away the frustration of just managing Kubernetes clusters and help organizations move on to developing applications. So the first thing you need to do is bootstrap these components. Instead of reinventing the wheel, Cluster API is built in layers, and the initial layer is kubeadm. kubeadm is a project that's been running within SIG Cluster Lifecycle for a number of years, and it started life as a command-line installation tool that you could run on a machine to bring up a control plane.
C
We actually use kubeadm inside Cluster API to create Kubernetes clusters. The great thing about this is that kubeadm is almost the canonical Kubernetes installer: it produces conformant clusters. Therefore, any cluster that's built with Cluster API, which is using kubeadm underneath, is automatically conformant, within reason; you need to get the networking infrastructure correct, but apart from that we are not doing anything new in this regard. So the next thing we have to think about, the other aspect of getting a working, conformant Kubernetes cluster, is the underlying infrastructure.
C
Kubernetes is plugging into load balancers; it's plugging into different cloud providers' infrastructure; so we need to get that running in a certain way. This is the bit where we use kubeadm to bootstrap Kubernetes and take care of setting up etcd as its datastore, while Cluster API is creating machines and creating firewalls in the infrastructure layer. That's how we see Cluster API being used; it's just another layer, as I'll go into in a minute, but Cluster API is essentially a Kubernetes API for provisioning infrastructure.
C
This enables you to build your own layers on top using your standard Kubernetes mechanisms. So if you're using a CI/CD system that knows how to manipulate Kubernetes objects, then, using Cluster API, that CI/CD system knows how to deploy clusters on different cloud providers, or bare metal, or virtual machines. It's about giving you a consistent interface, so that you don't have to learn anything new. It's not a new bespoke, proprietary interface; it's your standard one, and you can use all of the tooling that you've built for your application development.
C
You can apply that again to managing your clusters, and that's why there's interest in it. In terms of the people involved, we've got loads of organizations building layers on top of Cluster API: VMware, Microsoft, New Relic. So we have a lot of community participation, and this really helps us build something that can work for everyone. We take away that strain: if you're an organization tasked with building out new Kubernetes infrastructure, you don't have to build all that infrastructure
C
as code yourself; you don't have to write your own Terraform manifests again. Instead, there's a commodity: download it and use it. It's got lots of people working on it together, it reflects best practice in the open source environment, and all of that work has been done for you.
C
So, as Katie was talking about, we have these abstractions in Cluster API, and what we've done is mirror the way that Kubernetes works. A Machine, if you think about it, is analogous to a Pod in Cluster API. We have a machine controller (I'll get into what that means specifically in a moment), but the machine controller is managing individual EC2 instances in AWS; in vSphere it's going to be managing individual VMs. That's all it does; it doesn't do anything else.
C
But then we have a MachineSet controller, which is similar to the ReplicaSet in Kubernetes. With a ReplicaSet you say how many copies of a Pod should exist; in much the same way, a MachineSet says how many machines should exist with a particular configuration. Same idea; we're reusing those Kubernetes models in Cluster API.
C
Then we have a MachineDeployment, which is able to scale down one MachineSet and scale up a new one, so we can do things like upgrades. And finally, there's something new that we did for v1alpha3, which is the introduction of the control plane resource. In a sense, I guess the most analogous thing is a StatefulSet, but Cluster API is specifically interested in how we manage the Kubernetes control plane itself: for instance, is etcd healthy or not when we do an upgrade?
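The Machine/MachineSet/MachineDeployment analogy can be sketched as below, assuming the v1alpha3 API with illustrative names; note the `replicas` field, which plays the same role as in a ReplicaSet.

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: capi-demo-md-0
spec:
  clusterName: capi-demo
  replicas: 3                 # how many machines should exist
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: capi-demo
      version: v1.17.3        # Kubernetes version for these nodes
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: capi-demo-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSMachineTemplate
        name: capi-demo-md-0
```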
C
Is the etcd version changing? What other things need to happen in this cluster to migrate from one version of CoreDNS or kube-proxy to another? The control plane controller is going to take care of those day-2 operations. Say you started off with a small cluster, but now you've got lots of teams developing applications and you need to make the control plane bigger and more stable; well, the control plane controller can help you do that. So, on to the nuts and bolts of how these controllers work.
C
The Kubernetes model, I guess, is this sort of control-loop model. You declare a resource on Kubernetes, and it states the desired end state. Then the controller is viewing some aspect of the world: it's checking what state it's in, and it makes changes to that world to bring it more in line with the desired state. This is quite a different way of working from traditional infrastructure provisioning.
C
If you're used to using traditional infrastructure management tools, what you're having to do there is declare a gigantic graph of interrelationships: my EC2 instance depends on this subnet, so I need to ensure that this subnet gets created before this EC2 instance. In Cluster API we're not doing that.
C
We have individual controllers that are just interested in their small little area of the world, and they'll try to do what they want to do; if it fails, that's fine, they'll come back and try again later. This turns out to be quite a powerful model for building distributed systems, and it's used throughout Kubernetes. Because it's been successful in Kubernetes, we've taken this approach to infrastructure itself and are doing the same thing.
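The reconcile behaviour described above can be sketched in a few lines of Python; the function names and resource dictionaries are illustrative, not the real controller-runtime API.

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """Compute the actions needed to move `actual` toward `desired`."""
    actions = {}
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions[name] = ("create_or_update", spec)
    for name in actual:
        if name not in desired:
            actions[name] = ("delete", None)
    return actions

def control_loop(desired: dict, actual: dict, apply) -> dict:
    """Run reconcile passes until converged; failed applies are retried next pass."""
    while True:
        actions = reconcile(desired, actual)
        if not actions:
            return actual  # the world now matches the declared state
        for name, (op, spec) in actions.items():
            try:
                if op == "delete":
                    actual.pop(name, None)
                else:
                    apply(name, spec)   # e.g. call a cloud provider API
                    actual[name] = spec
            except Exception:
                pass  # that's fine: come back and try again next pass
```

Each controller in Cluster API watches only its own small slice of the world (machines, firewalls, load balancers) and runs this kind of loop independently, instead of walking a precomputed dependency graph.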
C
So what I'm going to do now is a little demo. Because Cluster API is built on Kubernetes, we need Kubernetes to run Cluster API in order to create Kubernetes clusters. We love Kubernetes so much, we put Kubernetes in your Kubernetes, which is a bit weird, right? So I'm just going to leave this presentation.
C
What I did earlier is create a kind cluster. As Katie mentioned, kind is a way of running Kubernetes on your desktop: it uses Docker, and it builds a sort of nested Kubernetes control plane. What I did next is just run the initialization. You can go to our quick start guide, which is at cluster-api.sigs.k8s.io, and I'm basically running through it; from there you can download everything I'm talking about today.
C
What I'm doing here is just initializing the vSphere provider. The clusterctl tool takes care of all of the different components of Cluster API, so you're not having to do `kubectl apply -f` on various manifests from different GitHub repos; clusterctl has all of that knowledge built in. It knows where to go on GitHub to get the latest releases and install them.
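The initialization step just described can be sketched as below; the provider names are real, but the flags reflect clusterctl v0.3-era usage and are shown as an illustration rather than an exact replay of the demo.

```shell
# Install the Cluster API core components plus an infrastructure
# provider into the current (kind) management cluster.
clusterctl init --infrastructure vsphere
clusterctl init --infrastructure aws   # a second provider can be added later
```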
So
I'm
just
saying,
gonna
I
want
to
create
infrastructure
morning,
create
least
a
cluster
I'm
gonna
I'm
gonna,
give
it
a
name
with
lease
fear
test,
I'm
also
going
to
just
create
an
AWS
cluster,
so
I'm
just
gonna
put
these
into
llamo
files,
so
we
can
look
at
that.
It
just
will
print
out
to
screen
a
template.
So
the
way
to
get
this
template
working
is
you
can
either
set
environment
variables
or
you
can
edit
a
file
in
in
your
home
directory,
so
doctor
cluster
API
and
this
cluster
katayama.
C
So you can present these either as environment variables or as a YAML file, and this is just the basic information that you need to get started. There's nothing to stop you creating your own templates, and there's nothing to stop you using any of your standard Kubernetes management tools to create clusters; the clusterctl config command, using the templates that come from GitHub, is just the way to get started. And that has just created YAML files which define an AWS cluster.
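The template-generation step can be sketched like this; the cluster name matches the demo, but the version and counts are illustrative (v0.3-era clusterctl usage).

```shell
# Variables may come from the environment or ~/.cluster-api/clusterctl.yaml.
clusterctl config cluster vsphere-test \
  --infrastructure vsphere \
  --kubernetes-version v1.17.3 \
  --control-plane-machine-count 3 \
  --worker-machine-count 3 > vsphere-test.yaml
```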
C
It used my variables, saying I want to use us-west. It's done some templating (let me make this bigger, sorry); it's done some settings around what kubeadm needs to provision to AWS, with the Kubernetes version which I had in my YAML file. I'm also just going to change some of these: I'm going to have three nodes in this kubeadm control plane, to make it a highly available control plane, and I'm also going to have three worker nodes as my agents.
C
Let me show you what's happening on my infrastructure; I'll show you on vSphere as well. So it's created a load balancer: this is going to be my entry point into my Kubernetes API server. And if I go to VPCs, it's started creating the resources that are going to make up this Kubernetes cluster in AWS. By default we create a VPC for every cluster, but in many environments you're going to have your own AWS infrastructure already.
C
In that scenario you can provide those identifiers within the manifest, and then it will use your existing infrastructure to provision that cluster. I'm going to continue working with the vSphere one, because the AWS one takes a little longer; some of the VPC provisioning takes a while. As you can see in this vSphere view, I've got my first control-plane node that's come up, and as those instances come up I'll have my Kubernetes cluster running.
C
Rather than waiting for my cluster, I'm going to hand back to Katie. What I've just demonstrated is how to get going on your own individual computer, but we need to think about what the day-2 operations are here, and I'll return to that in a minute. In your production environments you're going to be using a long-lived cluster that's going to hold these Cluster API resources, and then you can start to do your configuration management across all your teams.
B
Please let me know if it shows on the screen. This is a nice continuation of the previous demo: okay, we have a cluster; what do we actually see around it? In this particular case I'm going to showcase AWS, and it's nothing different from what was showcased previously: I'm just having a cluster in AWS.
B
It has six nodes, of which three... actually five nodes, because I'm going to have three of them being my control plane, for my masters, and another two being my worker nodes. Now, when I'm connecting to that AWS cluster, I'll be able to actually see this in a way more native to Kubernetes.
B
So if I get all the nodes in my new cluster, again I'm seeing three masters and two nodes. What you see in the bottom output are the Machines, which is a CRD provisioned by Cluster API. Now, previously we've seen that the entire cluster can be described in manifests, and if we have manifests, if we have YAML configuration, we can make the best use of models such as GitOps, because we can really have this concept of cluster as a resource.
B
So that's exactly what I've actually done here: I'm going to use Argo CD, which is something we are prototyping at the moment for our CI at American Express. Argo CD is known for delivering this GitOps model, which sees the Git repositories as the source of truth that describes the desired state of our Kubernetes resources; but now we can have a description of our infrastructure as well. So what have I done here?
B
I've wrapped all of the manifests provisioned by Cluster API into a Helm chart, and I'm going to showcase that. Instead of having one generic file, I separated the manifests into different files. So, for example, our Cluster CRD is going to be something like that; we're going to have the CIDR for our pods, which we can template because it's a Helm chart; we can template it as much as we like. But what I chose to template out of all of this is the number of nodes for our control plane and the version of Kubernetes.
B
This is going to be a very specific Helm way to consume these values. If you look at the replicas, I am pulling this from my Helm values as well, and if I look into my values file, currently I have two workers, as described; we have three masters, which is our control plane; and we have a particular Kubernetes version. So in this GitOps model, what I would actually like to showcase is how a change within our Git repository is going to kick off a process, or an action, on our cluster.
B
What we're going to do is actually add an additional node; that's what we're going to achieve today. So I'm going to go to my values file, and I'm just going to change my number of replicas to three; I'm going to commit that, leave a test message, and push it. Argo CD is going to pick up that push; nothing interesting to see here for now. Then I'm going to go back to the Argo CD view in this particular case.
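The whole "add a node" action described above amounts to a one-line change in the chart's values file; the key names here are illustrative, since the talk does not show the exact layout.

```yaml
# values.yaml (illustrative): bump workers.replicas from 2 to 3,
# commit and push; Argo CD syncs and Cluster API adds a machine.
workers:
  replicas: 3          # was 2
controlPlane:
  replicas: 3
kubernetesVersion: v1.17.3
```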
B
If I click on this, I can see all the resources that are currently created for the cluster. For example, we're going to have our MachineDeployment, which is for our worker nodes, and my kubeadm control plane, which is for our master nodes. But what we're really interested in is our MachineDeployment, where we can currently see the existing machines.
B
What's actually going to happen underneath, we can check in our view: we can see a new machine being provisioned, and if we take it a further step and look into our AWS console and refresh, we'll be able to see the new worker node from our MachineDeployment being provisioned in AWS as well. This is a very powerful concept; again, we can really think about our clusters as a resource.
B
And we can even visualize our clusters using a GitOps model or a dashboard; again, quite a powerful concept when you're thinking about your clusters as cattle rather than pets. And with this I would like to pass it back to you, Naadir, if there is anything else to show from the previous panels.
C
If you're going through the quick start, just be aware that clusterctl is not meant to be the be-all and end-all of Cluster API; it's your entry point. You should think about how to open up this power to the rest of your organization, rather than having one gigantic cluster which has all of your applications. If it's so easy to create clusters, you can start to delegate the creation of these clusters to individual teams as well.
C
Let them take control of those clusters, but you can also provide the policy and governance around them as well. So I've now got two clusters, and I can take a look at my control planes. There's one thing I won't be able to show you, because that AWS cluster is taking a little while to come up, and that's this idea of a move, so I will show this in the slides. Move is something we did in v1alpha3.
C
The concept of clusterctl move is to enable this idea: if you need a long-lived management cluster to be running Cluster API and managing all of these other workload clusters, then how do you create that management cluster? What clusterctl move does is move the resources from the kind bootstrap cluster that you created on your computer to the cluster you just created, and then that cluster can not only create new workload clusters from itself,
C
it can also manage itself. There are quite a few organizations who use Cluster API in this way, where all of their clusters are basically self-managed, and they can also upgrade themselves as well. So this is really one of the new features that we brought in with v1alpha3, through the stateful management of the control plane.
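The pivot just described can be sketched as a single command; the kubeconfig filename is illustrative (v0.3-era clusterctl usage).

```shell
# Move the Cluster API resources from the temporary kind bootstrap
# cluster to the newly created, long-lived management cluster.
clusterctl move --to-kubeconfig=target-mgmt.kubeconfig
```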
C
If there's time and my AWS cooperates, then we'll do it, but that's basically how clusterctl move works if you want a long-lived management cluster. So, talking about the future: right now we're really focused on managing the control plane better. We did an initial release; it already does upgrades, and it already knows how to introspect etcd to make sure it's healthy.
C
We don't start trying to do an upgrade if your etcd is not working, for example. But we're working on bringing flexibility into how we do upgrades, for those cases where you need to reconfigure bits of the control plane, or you need to change a setting on the kubelet; how do you manage that change? A lot of what we're working on is improving observability, so we're starting to introduce the idea of conditions.
C
These are API fields that you will see on those resources, and then you can build management tooling from that. Whether you want to ingest them into something like Prometheus, or you want to build some automatic remediation from them as higher-level tooling above Cluster API, you can go ahead and do that. And finally, we're going to work towards stabilizing the API. One thing to note is that the clusters created by Cluster API today are using kubeadm underneath; therefore they're fully conformant Kubernetes clusters.
C
In terms of what the release cadence is: we try to do a minor release about once every six months, and minor releases in this case follow that sort of Kubernetes model; there can be API changes within a minor release. We have public planning meetings for those, which I'll mention, and we communicate through CAEPs (Cluster API Enhancement Proposals).
C
You can see all this in the GitHub repos; every proposal should have its own markdown document in there. We're going to keep iterating on the API and get it to completion, but the underlying controllers and infrastructure providers are robust today; they're ready for you to go and use. As I was saying, the really big change we shipped in 0.3 was the control plane controller. Everything I showed you around the clusterctl CLI tool was new, and we built documentation in a book.
C
So if you want to use things like auto scaling groups and their equivalents in Azure, machine pools is a new API for doing that. Machine health checks are a way of deciding what to do when something happens to a machine, and it's a pluggable model: you can define the checks that are required, and if they fail, those machines are marked as unhealthy, and then a decision can be made about what to do with them. And finally, failure domains: how do we make sure that control plane instances are distributed adequately across a data center?
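As a sketch of the machine health check model just described (cluster and label names here are hypothetical; the fields follow the v1alpha3 MachineHealthCheck type):

```yaml
# Illustrative MachineHealthCheck: nodes in the selected pool whose
# Ready condition is False or Unknown for 5 minutes are marked
# unhealthy, and remediation can then act on them. maxUnhealthy caps
# how many machines may be remediated at once.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: my-cluster-worker-mhc
spec:
  clusterName: my-cluster
  maxUnhealthy: 40%
  selector:
    matchLabels:
      nodepool: workers
  unhealthyConditions:
  - type: Ready
    status: Unknown
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s
```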
C
So whether we want them spread across racks on-premises, or, in a cloud provider, across different availability zones within a region. And then finally, if you want to get involved: we have a discussion forum on Google Groups; the links will be active in the uploaded PDF. In terms of mailing lists we use the Kubernetes SIG Cluster Lifecycle Google Group, and we have meetings on Zoom at 10:00 a.m. Pacific, which is 6 p.m. UK time for me. That's all.
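The failure-domain spreading described a moment ago can be sketched as follows (zone names are illustrative; the fields follow the v1alpha3 types):

```yaml
# The infrastructure provider (e.g. AWSCluster) reports which failure
# domains are available for control plane placement:
status:
  failureDomains:
    us-east-1a:
      controlPlane: true
    us-east-1b:
      controlPlane: true
    us-east-1c:
      controlPlane: true
---
# The control plane controller then balances Machines across those
# domains by setting spec.failureDomain on each one:
spec:
  failureDomain: us-east-1a
```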
A
B
I can take part of this question: from the end-user perspective, why would you use Cluster API rather than go with a managed service from a cloud provider? This goes back to some of the challenges. If you provision your infrastructure in AWS, for example, and you're going to use EKS, it's going to be very difficult for you to transition to a different provider, or even to have different flavors of it; you're going to be stuck with one cloud provider. If that's something which is all right for the business, that's a reasonable resolution.
B
That's absolutely fine. However, if you are in an ecosystem which is quite diverse — for example, if you need to deploy infrastructure in different regions, not just Europe, and you have to have replicas of your clusters all around the world — how do you manage that? This is why I mention China, for example: even though AWS is in China at the moment, and it has become more mature, some of the products and services are still not available there.
B
A year ago — actually a year and a half ago — Route 53 was not available there, so you'd have to introduce your own DNS solution. So you don't have the same capabilities with the same cloud provider across all regions; that's what I'm trying to say. Cluster API really enables you, even if you cannot use the same cloud provider, to just make some changes to your manifests, because you can reuse them. You just port
B
your own configuration to whatever cloud you want to move towards, and that's how you're going to provision your infrastructure in a repeatable manner. So it's all about the portability of your infrastructure and how you can manage it with minimal effort; it's all about that in the long run. And I think there was another question which was more... yeah, I don't know, but that's my answer.
A
C
So today it depends on your implementation. For AWS, for instance, it definitely does create an Elastic Load Balancer. We don't as yet allow you to plug in your own; we have been having some discussion around a pluggable load balancer model, and there are the beginnings of that in the vSphere provider, just because in an on-premise environment there isn't a single load balancer solution. So we're still figuring that out.
A
C
Probably not, is the answer. Specifically, if you look at something like the control plane controller, it expects the cluster to be configured in a certain way: it expects kubeadm to have been used underneath, and then we use the way that kubeadm works to be able to manage things like etcd alongside the control plane components. It becomes difficult because we would have to try and introspect how the machine was created to be able to figure out how to upgrade it.
B
C
And yes, so that's where you would use clusterctl move to move those resources from that kind cluster into, maybe, your first created cluster. So your first step in your journey, when you're using kind and clusterctl, is to create that initial management cluster and then move the resources. We have some logic to pause what the controllers are doing, copy those resources into the new cluster, delete them from the old one without any of your infrastructure changing, and then unpause all the controllers.
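The pivot workflow just described can be sketched roughly like this (a sketch assuming the AWS provider and default flags; the kubeconfig filename is hypothetical):

```
# Create a temporary bootstrap cluster with kind
kind create cluster --name bootstrap

# Install the Cluster API controllers on it (AWS as an example provider)
clusterctl init --infrastructure aws

# ... create the workload cluster, then run clusterctl init against it
# too, so it can receive the management resources ...

# Pivot: pause the controllers, copy the Cluster API resources to the
# workload cluster, delete them from the bootstrap cluster, unpause
clusterctl move --to-kubeconfig=target-cluster.kubeconfig

# The bootstrap cluster can now be thrown away
kind delete cluster --name bootstrap
```

After the move, the workload cluster manages itself, which is the self-managed pattern mentioned earlier.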
A
Great. So thank you, Katie, and thank you, Naadir, for a great presentation; it was awesome. Really, that's all the questions we have time for today. Thanks for joining us for today's webinar; the recording and slides will be online later today, and we are looking forward to seeing you at a future CNCF webinar. Have a great day, everybody.