From YouTube: Webinar: Cluster API – Yesterday, Today, Tomorrow
Description
In this webinar, learn about Cluster API and common Kubernetes lifecycle management alternatives. After a survey of alternative projects, we'll provide a brief refresher on Cluster API itself and how it fits into a broader infrastructure provisioning story. Finally, we'll touch on some thoughts around higher-level orchestration needs that could build on Cluster API.
Speakers:
Saad Malik CTO & Co-Founder @Spectro Cloud
Jun Zhou Chief Architect @Spectro Cloud
Jerry Fallon (moderator): Okay, let's get started. I'd like to thank everyone for joining us today. Welcome to today's CNCF webinar, "Cluster API: Yesterday, Today, and Tomorrow." I'm Jerry Fallon and I'll be moderating today's webinar. We'd like to thank our presenters today, Saad Malik, CTO and co-founder of Spectro Cloud, and Jun Zhou, chief architect at Spectro Cloud.

Just a few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of the code of conduct, and please be respectful of all of your fellow participants and presenters.
Saad Malik: So these changes did not happen overnight, and as you can imagine it still continues to be a complex undertaking. Many enterprises choose to run in a hybrid cloud environment, where legacy applications or data-sensitive applications may run on premises, while opting to run other workloads on a public cloud. We witnessed these complexities and the challenges organizations were having in adopting Kubernetes, not just with provisioning but with day-two management and, beyond that, with operationalizing it in terms of monitoring, logging, and security.

So myself, along with two other VPs, one the co-founder of CliQr and the other the VP of engineering at CliQr, left 18 months ago to start Spectro Cloud. We focus on providing a SaaS platform that simplifies the experience of running Kubernetes clusters in production, giving enterprises the flexibility, control, and consistency they need without giving up ease of use and manageability at scale.

So with that, handing over to Jun.
Jun Zhou: So kube-up was the first available tool to bring up a Kubernetes cluster. It uses command-line tools like the AWS CLI or gcloud to prepare the cloud infrastructure, then launches the virtual machines that form the cluster and run the actual Kubernetes components; after that, critical add-ons like the CNI and DNS are installed.

For GCE, the kube-up script builds the user data, which contains the complete Salt configuration for both the master and the worker. So no matter what the VM is, whether it's a control plane node or a worker node, the user data is the same and contains the information for both roles; then, based on the machine's identity, the local Salt configures the node itself without contacting a Salt master. Unfortunately, the user data for GCE eventually exceeded the size limit, and we were trying to strip out all the comments in the user-data script just to make it small enough. By the 1.3 release of Kubernetes, in June 2016, the list of supported providers and operating systems had grown longer, but from that release the kube-up script had already entered maintenance mode: it would not accept any new providers.
And now, if we check the latest code, most of the providers have already been deleted and only GCE is left. So that was the first available tool. I think a script is always easy to start with, but very soon the community realized that a proper installer solution was needed, because Kubernetes had become so complicated. Around three months after kube-up entered maintenance mode, in the next release, 1.4, kubeadm was launched, and kubeadm simplified cluster bootstrapping to literally just two commands.

Internally, as a command-line tool, kubeadm divides the whole setup process into different modular phases, and most of the phases can be reused in different lifecycle operations.

kubeadm init does most of the job: it brings up a single-node Kubernetes cluster. Internally it first validates and prepares the node, generates the certificates, generates the manifests for the static pods, and then starts the control plane components, the API server, controller manager, scheduler, and etcd, to bootstrap the first control plane node.
C
Then,
in
the
end,
it
will
install
the
core
add-ons
like
dns
or
queue
proxy.
The
join
can
happen
both
for
control,
plane
and
workers
join
the
control
plane,
node
will
prepare
the
certificate
and
then
start
the
control,
plane
component
and
join
the
etcd
cluster
and
join
a
worker.
Node
is
relatively
simple:
it
will
just
a
static
kubelet
and
a
register
to
the
ips
server
cube
admin
support
like
multiple
he
topology
and
the
main
difference
between
the
two
is
the
hcv
cluster.
C
It
can
start
a
static
cd
port
on
each
of
the
control
plane.
Node
then
join
them
together
to
form
a
stacked
atct
cluster,
or
it
can
also
support
to
use
an
external
etcd
cluster.
If
we
already
have
one
and
all
the
worker
nodes,
the
cubelet
they
target
to
the
api
server
through
this
external
load,
balancer,
so
cube,
admin
has
very
focused.
The
scope
and
part
of
the
philosophy
is
to
do
the
minimum
and
do
the
best.
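To make the "two commands" concrete, here is a minimal sketch (not from the talk) of how the external-load-balancer HA topology just described might be driven through a kubeadm config file; the endpoint, version, and subnet values are illustrative, and the exact apiVersion depends on the kubeadm release.

```yaml
# kubeadm-config.yaml : illustrative values only
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.6
# All kubelets and external clients reach the API servers through this LB
controlPlaneEndpoint: "api-lb.example.internal:6443"
networking:
  podSubnet: "192.168.0.0/16"
```

The first control plane node would run `kubeadm init --config kubeadm-config.yaml --upload-certs`; the additional control plane nodes and the workers then run the `kubeadm join` command it prints, which is essentially the two-command flow mentioned above.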
After kubeadm was announced it became the de facto tool for cluster bootstrapping and was adopted by many other Kubernetes management tools. At the same time, creating a Kubernetes cluster involves a lot of other tasks that kubeadm does not cover. For example, infrastructure management is not there: you need to provision the hosts along with the networks and storage yourself, and it does not manage add-ons either.

Kubespray is great if you look at its support matrix: it covers all the different combinations of clouds and operating systems, but that also comes with a big price, which is not managing infrastructure. For Kubespray the philosophy is bare metal first: every feature is made to work on bare metal first, and support on other clouds is considered after that. Internally it builds on top of Ansible, using an OS-agnostic way to do the configuration.
Because of the bare-metal-first philosophy, Kubespray tries to avoid any external dependency. For example, kubeadm's HA setup needs an external load balancer in front of the control plane nodes for the API server endpoint, but Kubespray can support a partial HA setup without an external LB, through the concept of a localhost nginx proxy.

So basically what it does is this: on every worker node, the kubelet talks to a local nginx proxy, and this nginx proxy distributes the traffic across all the control plane API servers. The components on a control plane node, the kubelet, the controller manager, and the scheduler, only target the single API server on that same host, and the API server endpoint for clients outside of the cluster points only to the first control plane node.
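The localhost-proxy idea can be pictured as a static pod on every worker that forwards kubelet traffic to the real API servers. The following is a rough sketch of that pattern, not Kubespray's actual manifest; the image, paths, and file names are illustrative.

```yaml
# /etc/kubernetes/manifests/nginx-proxy.yaml (sketch of the pattern)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
spec:
  hostNetwork: true            # listens on the worker's own loopback address
  containers:
  - name: nginx-proxy
    image: nginx:1.19
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx    # stream{} config that round-robins to all API servers
      readOnly: true
  volumes:
  - name: conf
    hostPath:
      path: /etc/nginx-proxy   # holds nginx.conf with an upstream of control plane IPs
```

The kubelet on the worker is then pointed at https://127.0.0.1:6443, so losing any single control plane node does not break that worker, even without an external load balancer.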
For example, if you want to run kubectl from your laptop against this cluster, it's not going to work if that first control plane node is down. Kubespray was released before kubeadm existed, so initially it had its own way of configuring all the Kubernetes components and joining them to the cluster; later on, when kubeadm became mature, Kubespray started to use kubeadm underneath to do the cluster bootstrapping. In addition to the kubeadm functionality, Kubespray adds other functions to make the cluster ready to use.

Once it finishes, it installs critical add-ons like the CNI; it can help you integrate with the cloud providers to use cloud features like an ELB for your LoadBalancer Services or EBS for your persistent volumes; and it has good support for environments that are either behind a proxy or in pure air-gapped setups.
Some of the features I feel are quite unique to kOps. The first one is DNS: for kOps, DNS is a must-have, and all the control plane nodes have static DNS names instead of communicating directly over IPs. The benefit is that all the control plane nodes have a static identity, similar to a Kubernetes StatefulSet, so you can replace a control plane node without losing its identity. Initially kOps required an external DNS service with a valid domain name, but later on that requirement was relaxed.

So if you are using Terraform to manage your infrastructure, you can easily plug kOps into it, and it also helps integrate other features like the horizontal pod autoscaler and the AWS IAM authenticator, in order to provide an end-to-end solution. And remember, kOps was also developed before kubeadm existed, so it built a lot of internal tooling to make all this happen.
For example, all the cluster configurations are stored inside a state store, which can use S3, a Google Cloud Storage bucket, or even the local file system as its backend. The image-builder project is used to prepare the base image for kOps, and nodeup is the first component running inside the VM, launched by cloud-init.

nodeup installs Docker, the kubelet, and other dependencies directly onto the host, and then it installs protokube, which starts the next bootstrapping stage. protokube generates the manifests for the static pods and attaches the etcd volumes to the control plane nodes. Then the dns-controller manages the cluster's DNS functionality, and etcd-manager manages the etcd cluster, as well as data backup and encryption.
So this is the topology of a kOps cluster on AWS using a private subnet. As we can see, kOps can provision a new VPC and set up all the subnets, NAT gateways, ELBs, and other infrastructure components. For the control plane nodes it uses a separate auto scaling group within each zone, because of the etcd volumes: you cannot attach an EBS volume to a VM in a different zone. The workers are managed in one single auto scaling group spanning multiple zones.
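In kOps terms, each per-zone control plane group and the worker group is described by an InstanceGroup object. A trimmed-down sketch of that layout (names, machine types, and zones are illustrative, not from the talk):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: demo.k8s.local
  name: master-us-east-1a       # one control plane group per zone
spec:
  role: Master
  machineType: t3.medium
  minSize: 1
  maxSize: 1
  subnets:
  - us-east-1a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: demo.k8s.local
  name: nodes                   # a single worker group spanning the zones
spec:
  role: Node
  machineType: t3.large
  minSize: 3
  maxSize: 6
  subnets:
  - us-east-1a
  - us-east-1b
  - us-east-1c
```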
kOps can achieve some level of auto-healing by using the auto scaling groups, but there are other kinds of issues it may not be able to cover, like bootstrapping failures where the node comes up fine but is never able to join the cluster; those are the kinds of failures the cluster cannot automatically recover from. Secondly, kOps bundles fixed versions of the critical add-ons, so if you want to use a different version of your CNI, that's not directly supported. And there is no bare metal or vSphere support as of now; there was some experimental support previously, but it has already been deleted from the latest code. Lastly, kOps releases lag behind the upstream Kubernetes releases. For example, the latest kOps release, 1.17, came out at the end of May, while upstream 1.17 was released last December, so it's around half a year behind.
If you build all of this yourself, there are a lot of decisions to make: whether etcd is stacked with the control plane nodes or lives outside of the Kubernetes cluster, and how to plan your failure domains to maximize the system's availability. Also, each of the Kubernetes components, like the API server and the controller manager, has hundreds of configuration options, and you also want to know what the best practices for security are.

Now imagine you need to manage tens or hundreds of clusters altogether. With all of these challenges in mind, let's take a look at Cluster API and how it plans to address them.
The main goal of Cluster API is to use Kubernetes to manage Kubernetes, using the Kubernetes declarative API model and leveraging CRDs and custom controllers. The project started in early 2018 and is currently at the v1alpha3 stage, with more than 200 contributors from the big companies, and it has a long list of supported cloud providers covering most of the public and private cloud environments as well as bare metal. Here we'll briefly go over the Cluster API CRD model.

Cluster API is built using Custom Resource Definitions, known as CRDs. A CRD is the way of extending the Kubernetes declarative API model to manage pretty much anything: the CRD defines the desired state, and then the custom controllers keep reconciling, trying to bring the resources to that desired end state.
First of all, of course, we need a Cluster CRD, and since a cluster is composed of a group of nodes we need a CRD for the node as well. Those are the two fundamental CRDs Cluster API has: the Cluster and the Machine. To make the system work across all the different cloud environments, Cluster API has additional cloud-specific CRDs for both the cluster and the machine. For example, the AWSCluster contains information like the region or the VPC, and the AWSMachine contains information like the instance type and the availability zone for each VM.
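As a sketch of that split (field values are illustrative, API versions are from the v1alpha3 era being discussed), the generic Cluster simply points at its cloud-specific counterpart:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: demo
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:                     # link to the cloud-specific cluster object
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: demo
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
metadata:
  name: demo
spec:
  region: us-east-1                      # cloud-specific details live here
  sshKeyName: default
```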
Apart from the underlying infrastructure VMs, we still need something to represent how to configure those VMs, and that's where the KubeadmConfig comes into the picture. It generates the user data containing the kubeadm bootstrapping commands, either kubeadm init or kubeadm join, and then cloud-init is used to do the bootstrap when the VM is launched. So technically, with the Cluster and the Machine plus the KubeadmConfig, we are able to create a Kubernetes cluster.
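Putting the pieces together for a single node, a Machine references both its infrastructure object and its bootstrap config. Again, a rough v1alpha3-style sketch with illustrative names and versions:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: demo-control-plane-0
  labels:
    cluster.x-k8s.io/cluster-name: demo
spec:
  clusterName: demo
  version: v1.18.6
  bootstrap:
    configRef:                           # how to configure the VM (rendered to user data)
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
      kind: KubeadmConfig
      name: demo-control-plane-0
  infrastructureRef:                     # which VM to create
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachine
    name: demo-control-plane-0
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfig
metadata:
  name: demo-control-plane-0
spec:
  initConfiguration:                     # becomes a kubeadm init in the cloud-init user data
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: aws
```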
We need to create one Cluster resource and one Machine resource for each node we want to launch, and creating Machine resources one by one is going to be difficult if you want hundreds of nodes in the cluster. To solve this, on top of the Machine CRD, Cluster API also provides two higher-level CRDs to manage the control plane and the workers separately.

The KubeadmControlPlane is used to manage the control plane nodes. It handles functions like triggering kubeadm init, certificate generation, and etcd membership management, and also the upgrading and scaling of the control plane nodes. The MachineDeployment is modeled after the Deployment used for application pods, and provides the ability to bring up a pool of worker nodes with the exact same configuration.
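Roughly, and again as an illustrative v1alpha3-style sketch rather than a copy of any real template, the two higher-level objects look like this; each one stamps out Machines from a template instead of you creating them one by one:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: demo-control-plane
spec:
  replicas: 3                            # scaling/upgrading the control plane means editing this object
  version: v1.18.6
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: demo-control-plane
  kubeadmConfigSpec:
    initConfiguration: {}
    joinConfiguration: {}
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: demo-workers
spec:
  clusterName: demo
  replicas: 5                            # a pool of identical worker nodes
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: demo
      version: v1.18.6
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: demo-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSMachineTemplate
        name: demo-workers
```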
Instead of directly managing individual VMs, Cluster API can also use a cloud auto scaling group to launch a group of nodes, but that feature is not supported on all the clouds yet, and it is only supported for worker nodes, not for the control plane; control plane nodes are still created directly as VMs in the cloud. Here are some of the main features on the Cluster API roadmap.

Some other big features coming up are the autoscaler integration and spot instance support, pre-delete hooks for machines, and the ClusterResourceSet, which is used to help install critical add-ons like the CNI. Actually, this was the previous roadmap; the community is now doing v1alpha4 planning, so the remaining 0.3.x releases will mainly be bug fixes, and the new features will all move to 0.4.
So we have talked about kOps and Cluster API; here is a comparison between the two in terms of the challenges that need to be addressed. For support across different clouds, Cluster API, I think, is the winner, as kOps does not have on-prem or bare metal support yet, while Cluster API covers most of the cloud environments as well as bare metal through its provider plug-in architecture. For the operating system, Cluster API mainly supports Ubuntu 18.04 and CentOS 7, while kOps has more OS support.

kOps' bootstrapping tool dynamically installs the dependencies like Docker and the kubelet into the VM on the fly when the VM comes up, while Cluster API pre-builds all the dependencies into the image itself. I think there are pros and cons to both approaches: when you pre-build all the dependencies into the image, it means that for every upgrade you need to make a new image with the newer versions of the kubelet, kubeadm, and so on, and today Cluster API publishes images for the supported Kubernetes versions on all the main clouds.
Things like that don't happen often, but they can still happen in the future. For bootstrapping, Cluster API uses kubeadm internally, while kOps uses its own equivalent called protokube; both are doing similar things. For integration, kOps bundles fixed versions of critical add-ons like the CNI, while the latest Cluster API release just added the ClusterResourceSet, where you can include the YAML files for critical add-ons like the CNI and it will automatically install them for you; but you still need to prepare those YAML files yourself.
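A ClusterResourceSet is essentially a label selector plus a pointer to the YAML you prepared. A sketch of the idea, with illustrative names and the experimental API group used at the time:

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-cni
spec:
  clusterSelector:
    matchLabels:
      cni: calico              # applied to every workload cluster carrying this label
  resources:
  - kind: ConfigMap
    name: calico-manifests     # ConfigMap holding the CNI YAML you prepared yourself
```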
For day-two management, I think both support upgrades, and kOps has additional features like etcd backup and restore. For multi-cluster management, Cluster API can manage multiple clusters from a single management cluster, which makes it a little bit easier, while with kOps each cluster is managed separately. As we can see, they are doing similar things, and each of them has something the other one does not have.
So what if we combined them? This diagram is actually from a KubeCon talk by the kOps founder, so it's not surprising that the kOps team is already thinking about integration with Cluster API. In fact, kOps and Cluster API already use the same image-builder project to manage the base OS images for both projects. With Cluster API integration, kOps could cover more clouds and could move to an operator pattern, doing reconciliation to recover from more kinds of failures.
Saad Malik: So there are at least four large key areas that infrastructure and operations teams need to be aware of and responsible for. The first bucket is security: the aspect of keeping not only your base operating system but also your base container images up to date and secure. In the cloud you might have a managed provider that gives you an out-of-the-box security-hardened image, but if you're dealing with hybrid cloud or multi-cloud, or if, as Jun mentioned, you need to bring your own libraries or support inside your operating system, then you have to be responsible for it yourself.

I would recommend, on a regular basis, running a CVE scanning tool such as Vuls, and for container images you might be using something like Trivy. Also, in the practice of keeping the security posture up, Google publishes distroless images; these distroless images are very small, only a couple of megabytes, so they don't really have a big security exposure.
Other aspects of security, of course, deal with how you secure ingress access into your clusters. You can implement network security policies in Kubernetes; ideally you would do it for your inter-service communications as well, but at the very least do it for your north-south traffic. For public cloud environments there is the Service of type LoadBalancer, where you can specify a load balancer source range; essentially, if you specify that, you limit the actual inbound security rules for how access happens into your clusters.
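As a small illustration of those two controls (all values are placeholders, not from the talk): a LoadBalancer Service restricted to a corporate CIDR, and a minimal default-deny ingress NetworkPolicy.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 443
    targetPort: 8443
  loadBalancerSourceRanges:        # the cloud LB / security group only admits these CIDRs
  - "203.0.113.0/24"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes:
  - Ingress                        # no ingress rules listed, so all inbound traffic is denied
```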
Then, how do you provide logging, monitoring, and tracing support, not only for your infrastructure platform but also for your application workloads? Most organizations run a central logging and monitoring platform and use tools like Fluent Bit and others to pipe all their logs and metrics into those central clusters. And then there is the business continuity side.

How do you provision and maintain high availability for your control plane, for all the components that Jun mentioned, and on top of that do things like backup, restore, and snapshotting of your control planes? And as updates happen, how do you ensure there is no impact to the availability of your applications?
Next, looking at some of the requirements of what it would mean to have higher-level orchestration, there are at least three key areas: one is ease of use, and the others are manageability and flexibility. Cluster API itself is an extremely powerful tool that makes it really easy to provision and maintain your Kubernetes clusters.

However, there is a steep learning curve to understand how to use it, and, as Jun mentioned, you're not going to use Cluster API just by itself; you will have other pieces of technology working with it. As an example, if you are building your own operating system images and maintaining your own versions, you will probably automate that with something like Packer; on bare metal systems you might need to integrate it with iPXE booting and Redfish APIs.
What if there were a higher-level orchestrator that combined the best of these technologies and provided a simpler API for users to consume? Cluster API and many of the technologies I mentioned are moving very fast: v1alpha3 had some breaking changes, the same with v1alpha2, and that's expected; as more and more functionality comes in, there are going to be more changes until it becomes stable.

A higher-level orchestrator, just by definition, could keep the actual APIs exposed to end users simple and consistent, even while the underlying technologies are changing. And at scale, as you start having more clusters, how do you monitor the health and status of your clusters while also ensuring that the latest security patches and configurations are applied across all of them?
Potentially, a higher-level orchestrator could build these functionalities in, with reporting and alerting built into the platform, and do all of this flexibly across any cloud of your choice, whether you want to bring your own OS images or use the ones the cloud provides, with your own integrations.

All of these are very differentiating features that no single platform today has all of, but one of the key points we're making today is that these higher-level orchestrators should be built on top of Cluster API, and potentially kOps, because by doing so you leverage all of the work the community is doing today. I believe there are already 14 different public and private clouds supported in Cluster API, with more being added day in, day out.
There are new capabilities being exposed by these cloud providers, new ways of doing load balancing, new ways of doing security groups, and all of these are being added into Cluster API. If these higher-level orchestrators sat at a layer on top, they would essentially be able to leverage all the work the community is doing while still providing the value their own customers need.

Kubernetes itself does simplify the lifecycle management of applications, from deployment to scaling, service discovery, and resiliency, many different aspects; but as we've discussed, there are many other areas and capabilities needed to run Kubernetes in production, from monitoring and logging to security and ingress, and all of these different areas in some way need to be integrated.
I think one of the hallmarks of Kubernetes is that it does not boil the ocean by building all of these capabilities into itself; instead it allows a healthy and thriving ecosystem of vendors and partners to provide solutions for logging, monitoring, and security. Each of these different buckets has dozens of options and different choices; some of those choices may work for the majority of enterprises, and there are other choices which may only work for very specific use cases. The CNCF today has over 1400 components and technologies in its landscape, and that list will only grow over time.
Keep an eye on what's going on and stay connected through KubeCon and events like those. So that pretty much wraps up the portion that we have; are there any questions? One more thing: if you want to continue talking with Jun or me, please continue the conversation at spectrocloud.com, and feel free to join our community Slack at slack.spectrocloud.com.
Jerry Fallon (moderator): Awesome, thanks guys for a great presentation. It looks like we have one question here from Jindong Lee: "Newbie question: what other APIs are there, as an example, in relation to Cluster API?"
Jun Zhou: If I understand the question correctly, it is asking whether there is any other, similar tooling trying to use CRDs to manage Kubernetes clusters. If that's the question, I think the answer is no: right now the community puts, I think, most of its effort into the Cluster API project, and it is a pretty active project.
A
Okay,
well,
thank
you
guys
for
a
great
presentation,
and
I
want
to
thank
everyone
for
your
time
today.
The
webinar
and
recording
will
be
available
later
today
on
the
cncf
website,
along
with
the
slides.
Thank
you
again
for
joining
us
and
have
a
great
day.