From YouTube: Declarative Kubernetes Lifecycle Management Across Multi-Clusters/Clouds with Cluster API
Description
Operating and managing Kubernetes clusters at scale is hard. In this talk, Jun Zhou from Spectro Cloud joins Lin to review how the Kubernetes Cluster API (CAPI) project provides declarative, Kubernetes-style APIs for cluster provisioning and management across multiple clusters and clouds, with live demos targeting both public-cloud and bare-metal environments.
#kubernetes #multiclusters #lifecycle #operate
00:03 welcome + speaker intros
05:03 service mesh and Istio news
07:15 general discussion around Cluster API for k8s
13:25 Cluster API architecture
27:17 live demo + live questions
44:40 wrap-up + live questions
A
Hello, hello! Welcome to another Hoot livestream. Interested in learning the best way to manage clusters seamlessly across multiple clusters and even multiple clouds? What is the Cluster API project? Learn more in this episode: I'm going to catch up with Jun from Spectro Cloud so that I can learn everything about the Kubernetes Cluster API.

I am so excited to learn more on this topic. My name is Lin Sun; in case this is your first time here, I'm the Director of Open Source at Solo.io, and I've also been a long-time contributor and founding maintainer of the Istio project. Jun, I am so excited you are here today. Can you introduce yourself to our audience?
B
Yeah, sure. Thanks, Lin, for having me here. My name is Jun Zhou; I'm the Director of Engineering and also a founding architect at Spectro Cloud. Before Spectro Cloud, I was at a previous startup called CliQr Technologies, which was also co-founded by our current CEO, Tenry Fu, and there we were also doing multi-cloud management for application workloads. At Spectro Cloud we are doing a similar thing, except that right now everything we do is around Kubernetes.

So initially we were doing virtual-machine-based workload management, and now we are managing the Kubernetes clusters themselves, as well as all the applications running within those clusters. Over the last several years I've had the chance to work with multiple new container technology projects.
B
We chose a different container orchestrator at the time; then, a couple of years later, when we started a new project, we chose Kubernetes, because Kubernetes had already won the battle. So along the way I have done a couple of different projects using Kubernetes, and we know the ins and outs of Kubernetes cluster management: the issues, the challenges, and the opportunities there. And then, in 2019...
B
We started Spectro Cloud, and we help our customers manage their Kubernetes clusters, whether in public cloud or private cloud, in a very consistent way: the full stack, from the bottom layer to the top layer, across many different clouds and many different clusters.
B
Yes, yes. So when we started Spectro Cloud in mid-2019, we joined the Cluster API project from day zero, actually, and at that time Cluster API was not that mature. We had an internal, month-long debate about whether we wanted to join the community or build something of our own to manage Kubernetes clusters. Eventually, we made the decision to join the community and help it grow better.
B
At the same time, we could also take all the great work the community had done, and after two or three years, when we look back, we think that was the correct decision. We joined the community and made it better; we contributed a lot of new things, and of course we took all the great things the community has done. Right now, Cluster API is kind of the de facto tooling to manage Kubernetes clusters.
B
If you are not going to directly use the cloud-managed services like EKS or GKE, then Cluster API is kind of the de facto tooling. Many of the solutions out there are actually built on top of Cluster API. To name a few of the big names: Google Anthos, which many of you have probably already heard of, and also Red Hat OpenShift, and also VMware Tanzu.

Tanzu is built on top of Kubernetes and on top of Cluster API, and we are glad that we are also one of the bigger contributors to the open source project.
A
That's awesome. Okay, let's get to the news before we start the interview, Jun. So the first thing I want to talk about is Buoyant announcing fully managed Linkerd with Buoyant Cloud. This is pretty interesting: Buoyant is really attempting to take the management away from the users and automate everything, or most everything, for the users.
A
Probably one of the most confusing blog posts I've read in the past few months: what was really interesting from an Istio perspective was the data coming out of the blog. It has performance numbers for how Istio performs regarding latency, and how Cilium performs, and our chief architect actually tried to reproduce the data, and we couldn't come anywhere close to the published numbers.
A
We've
been
also
wondering
you
know.
Are
we
trying
to
compare
layer
4
to
layer
7,
so
the
data
is
really
misleading.
Our
buddy,
a
friend
at
google
louie
ryan,
who
is
a
principal
engineer
in
google,
also
spoke
up
with
his
voice
right.
Uncontrolled
l7
config
in
a
multi-tenant
proxy
is
the
outage
and
noise
enable
factory
so
interesting
perspective,
definite
encouraging
folks
to
get
all
different
perspective
on
this.
A
Let's
see
that
is
the
news
for
this
week
regarding
the
hood
next
week.
We
are
so
excited
we're
going
to
have
kubecon
europe
in
vanessa,
so
I
will
be
there
in
person,
I'm
hopefully
to
meet
many
of
you
there
in
person.
A
So we will not have a Hoot next week, but we will have something exciting for you the week after, so stay tuned for the agenda. With that, I'm going to start picking your brain, Jun: what is the persona targeted by Cluster API?
B
Yeah, so that's a good question. I think that's the first question you ask when you look into any open source project. So Cluster API is for managing the lifecycle of the underlying Kubernetes cluster itself. I know that on this broadcast, most of the time the topic is around the upper layers, like the service mesh layer or the application layer. Those already run within a Kubernetes cluster, which is itself managed by some other product. Cluster API is for managing that underlying Kubernetes cluster. So technically, anyone who wants to set up a Kubernetes cluster and try it out, whether a developer or a DevOps team, can use Cluster API.
A
Okay, so my next question is: why do users need to pay attention to Cluster API? It sounds like you tried to answer that question already, which is: if you have Kubernetes clusters that you need to spin up across maybe different clouds and different environments, Cluster API provides this simple, consistent experience that's declarative for you. Is that right?
B
Yeah, yeah. I think, for a specific open source project, first of all, Cluster API itself is one of the Kubernetes sub-projects, so it's under the umbrella of Kubernetes, which kind of guarantees it will have support from the community. It's not some one-person hobby project; it's supported by the whole community. And the community is actually very active right now: I took a look a couple of days back when I was preparing for this, and there are more than 400 contributors.
B
Yeah, it's a lot. Two years ago, well, actually one and a half years ago, when I was preparing another article, I checked the contributor count; at that time it was around 200!
B
Like I mentioned, Cluster API is already used by many of the bigger players out there, like Google, VMware, and Red Hat, and there are a couple of other startups also using Cluster API, so it is already the de facto tooling to manage Kubernetes clusters. And like I mentioned, it uses a very declarative way of doing that, which saves users many manual steps; you don't have to do a lot of manual work to provision the cluster.
A
Yeah, can I ask: if I say my Kubernetes is compliant with the Kubernetes conformance tests, does that mean it already implements the Cluster API?
B
So if I understand your question correctly, you are saying the Kubernetes cluster itself is CNCF-conformant, right, in terms of its API? Yeah, so that is just about the API the Kubernetes cluster exposes to the end user, like the application developer. Cluster API is about how you deploy a cluster with that conformant API in the first place.
A
Okay, got it, that makes sense. Can you explain how Cluster API works at a high level?
B
Yeah, sure. So this is, at a high level, how Cluster API works. Like I mentioned, Cluster API uses a declarative API model, because it's trying to utilize Kubernetes to manage Kubernetes itself. Before Cluster API, many other tools already existed, and all of those tools use either a command line or some other format.
B
They do not use Kubernetes to manage Kubernetes. Cluster API was the first one to utilize that, because Kubernetes itself already has most of the control logic built in: it has the HA control plane and great extensibility. So how does Cluster API work? It defines a couple of different CRDs, where CRD stands for Custom Resource Definition, and you just define what you want for the cluster.
B
There are a bunch of CRDs, like the Cluster CRD and the Machine CRD, and some higher-level CRDs like the control plane and the MachineDeployment. The control plane CRD manages the control plane nodes, and the MachineDeployment CRD manages the worker nodes. Within the CRDs, you just specify, for example, what the cluster version is, how many control plane nodes, and how many worker nodes. Then you'll have a management cluster, and this management cluster will have these CRDs already defined.
B
And then there are a bunch of controllers running within the management cluster, which go ahead and create the virtual machines, create the control plane and the machine deployments, and bring up this workload cluster inside all the different cloud environments.
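The CRD relationship Jun describes can be sketched as a minimal manifest. This is a hedged illustration based on the upstream Cluster API v1beta1 types; the names (demo-aws and friends) and the choice of the AWS provider kinds are assumptions for the example, not objects from the demo itself.

```yaml
# Top-level Cluster object: an umbrella that ties the pieces together.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-aws                # hypothetical name
spec:
  controlPlaneRef:              # which object manages the control plane nodes
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-aws-control-plane
  infrastructureRef:            # per-cloud cluster object (AWS in this sketch)
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: demo-aws
```

Applying this, together with the referenced objects, to the management cluster is essentially all the "provisioning" the user does; the controllers reconcile the rest.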
B
Another slide I want to quickly show is how this works internally. From this perspective, the infra provider, the bootstrap provider, all the different providers will, in the end, take the definitions from the CRDs, like the cluster definition and the control plane and worker definitions, and generate a kubeadm config. kubeadm is one of the default cluster management tools from the Kubernetes community as well.
B
kubeadm does all the cluster bootstrapping. Once the kubeadm config is done, the infra provider takes over. The infra provider is per cloud: whether it's Amazon or VMware or Google, each has its own provider that talks to its own cloud APIs. It will take the machine spec, like how many CPUs and how much memory, together with the kubeadm config, and create the machines.
A
Yeah, that makes sense. So with Cluster API, what if I deploy my cluster out there: do you do any lifecycle management of my cluster? I mean, certainly, Kubernetes has so many patches, and there are security fixes. Does Cluster API help the user or the admin take care of that?
B
Yes, yes, that's a great question. For managing the Kubernetes cluster itself, I think one of the biggest challenges is that there are so many new releases coming out, whether it's a new feature release or a security patch. So the challenge is: how do you upgrade your Kubernetes cluster? And that challenge is not on day one.
B
It's actually on day two: how do you upgrade? Cluster API will actually help you do that, because of this declarative API model. When you do a cluster upgrade, the only thing you need to do is tell the controllers: okay, I want to change the version from, for example, 1.21 to 1.22.

That's the only thing you change. You tell the controller your desired cluster spec, where the version is changed from 1.21 to 1.22, and that's it. Then the controller will, in the end, rolling-upgrade all your worker nodes and the control plane nodes within the cluster and eventually bring the cluster to your version. So with this declarative model, there is not much difference between the initial provisioning and the Day 2 management.
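That upgrade flow amounts to a one-line change to the desired state. A minimal sketch, assuming the kubeadm-based control plane CRD from upstream Cluster API; the object name and versions are hypothetical.

```yaml
# Day-2 upgrade: edit only the desired version; the KubeadmControlPlane
# controller then rolling-upgrades the control plane nodes. Workers are
# bumped the same way, via the version field on their MachineDeployment.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: demo-control-plane   # hypothetical name
spec:
  replicas: 1
  version: v1.22.0           # was v1.21.x; this single edit triggers the upgrade
```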
A
Okay, yes, I guess it works similarly to the Istio operator API, where the user just tells the system, the control plane, what version of Istio they want to install. The only difference, I would say, is that Kubernetes upgrades have become really boring, right? It just works. Istio upgrades are more exciting; sometimes they need a bit of hand-holding. That's exciting. So the next question I want to ask you is: how are you leveraging Cluster API at Spectro Cloud?
B
Yeah, so like I mentioned, we joined the Cluster API community when we started Spectro Cloud, and we leveraged all the great work from the community while continuing to contribute back. So the underlying platform of Spectro Cloud uses Cluster API to do all the cluster management. But Cluster API itself, as of today, is still lacking a couple of capabilities, for example application management; the Kubernetes cluster, in the end, needs to serve applications.
B
So in the end, I think Spectro Cloud helps customers manage a fleet of clusters across all the environments, and the underlying building block is Cluster API; we built on top of that.
A
Yeah, I've seen your UI. Alex Ly from Solo did a demo recently using Spectro Cloud and Gloo Mesh to stand up a cluster and install Gloo Mesh; I've seen that, and the UI is pretty sleek. Perfect. So what's your perspective: how do you see service mesh fitting into the puzzle here, from a Spectro Cloud perspective, building solutions on top of Kubernetes and also providing solutions on top of Cluster API at the app layer? Where do you see service mesh fitting?
B
This is a great question. I think Spectro Cloud and Solo have been working closely together over the last couple of months with one of our bigger customers.
B
One of our shared customers, actually. And I think one thing we do see from the experience of working with our customer is that, although there are many different vendors out there for the different challenges within Kubernetes clusters, when the customer tries to provide a coordinated service to their end users, for example an IT department within a big enterprise providing Kubernetes to their end users, they treat the Kubernetes cluster as a whole platform.
B
So when you take this whole thing, the whole platform, as one thing to manage, there is really not much out there to help you do that, and Spectro Cloud provides a very unique way to let users or customers define everything within a single blueprint, or what we call a cluster profile. The cluster profile contains: what is the operating system? What is the Kubernetes version? What is the networking and storage? And also, what is the service mesh layer? What is the logging and monitoring?
B
Basically, everything that they will need to run within the cluster can be defined within this cluster profile, and service mesh is obviously kind of bridging the gap between the underlying Kubernetes and the application, because it's a very critical component for the application. So using this unique, coherent way to define what's needed within a cluster, it's very easy for the customer to manage multiple different clusters very consistently, as well as a single cluster. So that is the one perspective.
B
So Spectro Cloud helps Solo and other components, like the service mesh layer, to be managed very easily within a single Kubernetes cluster. The second aspect is: if we think about application development, at least from my perspective, right now most applications run within a Kubernetes boundary. That means the application runs within one single Kubernetes cluster, and we do see the requirement to span across multiple different Kubernetes clusters, and service mesh is helping applications do that.
B
So when you have multiple different Kubernetes clusters, and your application needs to span across them, together with your service mesh layer needing to span across different clouds, you really need one single platform to manage all the different clusters together, and we do think Spectro Cloud can fit in there, working with the service mesh to really provide a very consistent way for the application to consume the underlying infrastructure.
A
Yeah, at Solo we are very excited to work with Spectro Cloud to solve these application networking challenges, not just for a single cluster. Certainly we want to solve it for a single cluster too, but also for multiple clusters that span multiple clouds, and to be able to leverage service mesh to provide that zero-trust network for the application, to be able to provide defense in depth at every single hop, so that we can establish trust and we don't allow any unnecessary access. By the way, hi Omar, I see a really nice comment from you.
A
Welcome, Omar: "Seems amazing and filling a great gap." Thank you for that comment. With that, I think I'm looking forward to the demo. Can you show us a demo of Cluster API?
B
Let me know if you can see my terminal. Cool. So Cluster API, like I mentioned, works like this: you have a management cluster, and then from the management cluster you can create and manage all the workload clusters. So here I have a simple kind cluster.
B
kind, if you do not know it yet, is one of the tools that helps you create a Kubernetes cluster within Docker itself; it's called Kubernetes in Docker. I just have Docker installed on my local laptop, and I created a kind cluster with that tooling. I'm using this kind cluster as the management cluster, and from there I'm creating two workload clusters: one cluster I created is on AWS, and the other one is EKS. So internally, Cluster API actually supports two types of clusters.
B
One is the virtual-machine-based cluster. That means Cluster API will go ahead and create the virtual machines, either in a public cloud or a private cloud; it will create the virtual machines, then go into each virtual machine to do the bootstrapping, and then join all the worker virtual machines, or bare-metal machines, into one big Kubernetes cluster. The second type of cluster it supports is the managed service: that means, for AWS, EKS (not GKE yet), the managed Kubernetes services.
B
Those can also be supported directly from Cluster API as well. In this demo, I have two clusters running within Amazon: this AWS one is the virtual-machine-based one, and the EKS one is the managed-service-based one. When creating within AWS, you need to provision the VPC first, and provisioning the VPC takes a long time, so I launched the EKS cluster beforehand. And I can quickly show what the CRDs are.
B
And what the controllers we talked about look like. If we take a look at the pods running within my kind cluster, we have a CAPA controller; this is the per-cloud provider controller, so this one talks to AWS for all the EC2 APIs or EKS APIs. We have a bunch of other controllers, like the bootstrap controller.
B
The bootstrap controller generates the kubeadm config, and this control plane controller manages the control plane, like the etcd rolling upgrade and etcd membership management, and then this CAPI controller manages the actual Machines, MachineDeployments, and all of those. So you will have a per-cloud controller inside its own namespace, and then the CAPI ones are common for all the different clouds. So this is kind of a plug-in architecture.
B
Let's see one of the constructs: one of the CRDs is called a Machine. Each Machine corresponds to a virtual machine in the cloud and will eventually correspond to a Kubernetes node within the cluster. So for my demo AWS cluster, you can see I have three Machines: one control plane machine, and two workers. The two workers are defined with one MachineDeployment.
B
Alongside this Machine definition, I have another layer: the AWSMachine. The AWSMachine, again, is per cloud; this is how multiple different clouds are supported from the CRD perspective. The Machine object is common across all the different clouds, and eventually the Machine object has a relationship to the per-cloud machine object.
B
In this case, it's called the AWSMachine object. And then we have this KCP object; KCP means KubeadmControlPlane. In this one you can see the version; this is mainly where you define your cluster version. All the control plane components, like the API server and the controller manager running within the control plane node, will have this version, and you can define one replica, meaning one control plane node, or multiple control plane nodes. In this case, in the demo, we have one replica.
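A KubeadmControlPlane like the one on screen might look roughly like this. A sketch only: the names are assumed, and the field layout follows the upstream v1beta1 API rather than the exact object in the demo.

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: demo-aws-control-plane   # hypothetical name
spec:
  replicas: 1                    # one control plane node, as in the demo
  version: v1.22.0               # the cluster (control plane) version
  machineTemplate:
    infrastructureRef:           # per-cloud template for control plane machines
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: AWSMachineTemplate
      name: demo-aws-control-plane
  kubeadmConfigSpec: {}          # kubeadm init/join settings go here
```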
B
Another CRD is the MachineDeployment CRD. The MachineDeployment is to Machines what a Deployment is to Pods: if you think about Pods and Deployments for application containers, the MachineDeployment and the Machine follow the same pattern.
In the MachineDeployment, sorry, this is a little bit small: this is the metadata, and this is the spec. In the MachineDeployment you can see the replica count and the selector, very similar to a native Deployment API definition. The selector will match all the machines, and then it contains a template. The template contains, first of all, the KubeadmConfigTemplate, which provides the internal kubeadm configuration, and then a reference to the infrastructure, like an AWSMachineTemplate. The AWSMachineTemplate contains things like the CPU and memory, the instance type, the region and zone, all that information, and of course the version for the machine deployment itself.
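Putting that together, a MachineDeployment for the two workers could look roughly like this. A hedged sketch with assumed names, following the upstream v1beta1 shapes rather than the demo's exact manifest.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: demo-aws-md-0            # hypothetical name
spec:
  clusterName: demo-aws
  replicas: 2                    # two worker Machines, as in the demo
  template:
    spec:
      clusterName: demo-aws
      version: v1.22.0           # kubelet version for the workers
      bootstrap:
        configRef:               # generates the kubeadm join configuration
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: demo-aws-md-0
      infrastructureRef:         # per-cloud machine settings (instance type etc.)
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AWSMachineTemplate
        name: demo-aws-md-0
```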
B
So that is the MachineDeployment. Eventually, we just need to create all these CRDs within this management Kubernetes cluster, and then the controllers will go ahead and create the cluster itself.
One more thing to note here: we can see the actual nodes of the target workload cluster, already provisioned by the controllers. We can see they have one control plane node and two worker nodes, so all that information is available, already calculated, within the management cluster. From the target workload cluster's perspective, it doesn't know anything about the management cluster, but from the management cluster you can do all the scale-up and scale-down operations.
B
We can quickly try it. For example, if we want to do a scale-up: right now it's two replicas. Let's say we want to create a new node; let's see how it works. We just need to change the replicas from two to three, and once we save, we can see the phase has turned to ScalingUp.
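The scale-up Jun performs is just an edit of the desired state, something like the fragment below (the object name is hypothetical):

```yaml
# e.g. via: kubectl edit machinedeployment demo-aws-md-0   (assumed name)
spec:
  replicas: 3   # was 2; the controller creates one more Machine, one more
                # AWSMachine, an EC2 instance, and finally a new cluster node
```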
As for the cluster itself: if you want to create a new cluster, I can show you what I provided to create this one. This is just the YAML file we generated to create the cluster. The Cluster object contains the control plane reference and the infrastructure cluster reference; it is the very top level, kind of an umbrella, and doesn't contain much logic. But the AWSCluster it points to will contain things like: what is the region?
B
What are the access keys, and the other cluster-level definitions. The KubeadmControlPlane manages the control plane, like the etcd membership, how many replicas for the control plane, and it captures the kubeadm configuration used internally when you bootstrap the cluster: how many replicas you need and what the version of the control plane is. In the end, it contains a template reference, the AWSMachineTemplate, and the AWSMachineTemplate specifies things like the instance type.
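The infrastructure side of that YAML could be sketched as follows. Names, region, and instance type are assumptions for illustration; the fields follow the AWS provider's v1beta1 API.

```yaml
# Cluster-level AWS settings, referenced by the Cluster object.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: demo-aws
spec:
  region: us-west-2              # hypothetical region
  sshKeyName: default            # hypothetical key pair name
---
# Per-machine AWS settings, referenced by control plane and workers.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: demo-aws-md-0
spec:
  template:
    spec:
      instanceType: t3.large     # hypothetical instance type (CPU/memory)
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
```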
A
Is that version the Kubernetes version of the whole cluster? Does the same version apply to the workers too?
B
That's a great question. So usually the Kubernetes version refers to the control plane version. If you're talking about the version of a Kubernetes cluster, it refers to the version of the control plane running within the cluster; in that sense, the control plane version should match the cluster version. But the worker nodes don't have to be exactly the same, especially during a rolling upgrade.
B
You might want to rolling-upgrade one worker pool first and keep the other one as-is, to stay safe in case of any issues, things like that. So there is version-skew management between the different versions, but the workers and the control plane don't have to be exactly the same.
A
So there is a way you can specify the workers' version as well, and it doesn't have to match the control plane. Okay, makes sense.
B
So this MachineDeployment is for all the workers. In the MachineDeployment, similarly, you specify how many replicas, and then for each replica there are, similarly, two references: one, same as the control plane, to the kubeadm config, and one to the machine template. And there's also the version here: this version specifies the kubelet version running within the worker node, and this kubelet will be talking to the control plane.
A
Okay, very cool. Do you want to check if your other node has come up now? Yeah, let's.
B
We can see there are three replicas now; it still shows one unavailable, which means it is still trying to bring it up in the cluster. Let's see the Machines. Yeah, this is the Machine object, already created, and I believe there should be another AWSMachine internally, also trying to spin up. The cloud is still trying to create this new machine.
B
Exactly. I can quickly show another cloud, like the EKS-based cluster. That one is a little bit different: for that one, we do not manage the control plane, because EKS manages the control plane itself, so we use a kind of AWS-managed control plane.
B
For this managed control plane, we do not specify how many replicas, or the CPU, memory, and all that information; this managed control plane corresponds to the EKS configuration itself. And then for the worker nodes, again, we do not specify a MachineDeployment; we use the AWS-managed machine tooling, the AWSManagedMachinePool. The machine pool is kind of similar to the MachineDeployment: you specify the replica numbers, you specify the AMI, the subnet, the node group information, and all of that. The difference is that we do not manage each machine individually, like we do for a virtual-machine-based cluster where we manage each individual machine specifically; in this case, we only manage at the pool level, and within the pool it is up to EKS to do the autoscaling, the node management, and all of that.
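For the EKS case, the corresponding objects could be sketched roughly as below. Hedged: the managed kinds come from the Cluster API AWS provider, and the names, region, version, and pool sizes are assumptions.

```yaml
# EKS owns the control plane; we only describe it, not its machines.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: AWSManagedControlPlane
metadata:
  name: demo-eks-control-plane
spec:
  region: us-west-2            # hypothetical region
  version: v1.22.0             # EKS control plane version
---
# Workers are managed at the pool level; EKS handles individual nodes.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSManagedMachinePool
metadata:
  name: demo-eks-pool-0
spec:
  scaling:
    minSize: 1
    maxSize: 3                 # EKS autoscales within this range
```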
A
Interesting, and I assume with similar resources, maybe with small configuration tweaks, you could take them and deploy to AWS, and not only AWS but also Azure and Google Cloud.
B
Yes, yes. So Cluster API has a tool called clusterctl, and with clusterctl it's very easy to generate the cluster definition; I can show you my script. This is how I generated the two definitions, using clusterctl. For each different cloud, you just use clusterctl generate cluster, give it a name, and provide the flavor, kind of what infrastructure you want to use: for EKS it's the managed machine pool flavor, and for the virtual-machine-based one you just provide the infrastructure as aws. Then you give it the version you want and how many replicas you need. These are the two very simple lines I used to generate the two YAML files, and then you directly apply the YAML files and the controllers just spin everything up.
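The two clusterctl invocations Jun describes would look something like the sketch below, shown as comments for reference; the cluster names, version, and the exact EKS flavor name are assumptions.

```yaml
# Not a manifest: the generation commands, for reference.
#
#   clusterctl generate cluster demo-aws \
#     --infrastructure aws \
#     --kubernetes-version v1.22.0 \
#     --control-plane-machine-count 1 \
#     --worker-machine-count 2 > aws.yaml
#
#   clusterctl generate cluster demo-eks \
#     --infrastructure aws \
#     --flavor eks-managedmachinepool \
#     --kubernetes-version v1.22.0 \
#     --worker-machine-count 2 > eks.yaml
#
# Then apply with the ordinary Kubernetes CLI:
#   kubectl apply -f aws.yaml
```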
A
Yeah, so that's really cool, because you don't have to learn each individual cloud provider's commands, or even install them. So now, I guess, my next question is: assuming I have this demo eks.yaml or aws.yaml generated, do I just use kubectl to apply it? Oh, exactly. Okay.
B
There is also clusterctl init, which will help you install all the CRDs and all the controllers within your management cluster. Again, you don't need to worry about how to install them, where to pull the CRDs from, or the definitions for the controllers; clusterctl init will help you do all of that.
A
That's awesome. I feel like this is the new way I want to use to provision Kubernetes clusters in different clouds, because the API is just the same; in the commands, I can just specify a different infrastructure. So thank you for teaching me that here. On Cluster API, is there anything else you'd like to share before we wrap up?
A
Awesome, this has been really interesting. I want to take a moment to thank you, Jun. We really appreciate you sharing your knowledge, and we also got to learn a little bit about Spectro Cloud and what Solo and Spectro Cloud are doing together.
A
And I apologize, Faye, I just saw your question. Thank you so much for being at the Hoot for almost every single session we have; we really appreciate audience members like you. So Faye is asking: how many providers do you have? I assume all the public clouds, including the big three in the US, along with Alibaba?
B
Yeah, great question. Any public or private cloud you can think of: I would say I'm pretty confident there will already be a provider available. All the cloud providers within the US, either public or private, and also within China, like Alibaba, Tencent, even Baidu, and also some of the other providers; they all have a provider there.
A
That's awesome. I feel like Cluster API is very mature and really widely adopted. Thank you, Faye, for that question. All right, thank you so much, Jun, and folks, I hope you find this interesting; if you do, give us a thumbs up and also subscribe to our channel so you don't miss any of our future Hoot episodes, and let us know if there is a specific topic or special guest you want to see.