From YouTube: Sponsor Demo: VMware- Cluster API in TKG Service Architecture and Data Protection on TMC and Velero
Description
"Before using the Tanzu Kubernetes Grid (TKG) Service for vSphere, it helps to have an
understanding of the Kubernetes architecture and the underlying technology, the Cluster API,
that makes TKG possible. We’ll start at the lowest layer and then zoom out, we will paint a
picture of how all these technologies are interconnected.
Afterward, find out how to use VMware Tanzu Mission Control (TMC) to centrally manage data
protection on your Kubernetes clusters across multiple environments. Easily back-up and
restore Kubernetes clusters and namespaces.
Learn more at: https://tanzu.vmware.com"
Hello and welcome to our demo theater, as Joe Beda, our principal engineer at VMware, co-creator of Google Compute Engine and first-ever Kubernetes project committer, sums up why we all love open source:
"I don't care who you are. There are more smart people outside of your company than there are smart people inside of your company."
Our two presenters today show Cluster API and Velero. Our first presenter, Kendrick Coleman, is an open source technical product manager. He figures out new and interesting ways to run open source, cloud-native infrastructure tools with VMware products.
This is an introduction, not a deep dive into the individual components such as networking or storage; it is meant to be a high-level overview for anyone getting started. There are multiple layers in this architecture.
First is taking a look at the Kubernetes architecture. This will not go deep into the services or how storage or networking work, but will instead just focus on the core components that make up a Kubernetes cluster at the infrastructure level. Next, we will look at the Cluster API architecture and what this technology is doing to automate Kubernetes deployments.
Finally, we will take a look at the vSphere with Kubernetes environment to see how these components all relate to what we've learned thus far. Alright, let's get started. At the end of the day, what is it that we're trying to accomplish? Well, it's to get this containerized application running.
If you're watching this, then it's likely you're already familiar with what a container is, as well as a container runtime. Kubernetes adds a topology and combines features to make it one of the best-suited container schedulers available. In the context of Kubernetes, a container is not the lowest-level object; instead, it's the Pod. A Pod can have one or more containers within it.
If you have an application that has multiple services or layers that each have their own container, the Pod doesn't have to make up the entire application. The application can be spread out among multiple Pods, or even across traditional virtual machines as well.
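As a sketch of the one-or-more-containers idea, a minimal Pod manifest might look like the following; the names and images are illustrative, not taken from the demo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # hypothetical name
spec:
  containers:
  - name: web                 # main application container
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: log-shipper         # sidecar sharing the Pod's network and volumes
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```

Both containers are scheduled together onto the same worker and share the Pod's network namespace, which is what makes the sidecar pattern possible.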
Kubernetes gives you the flexibility to architect the application in the way that best fits your environment and needs. A Pod has to run somewhere, and this is the first part of where the Kubernetes infrastructure layer comes into play, with the Kubernetes worker, or node.
The Kubernetes infrastructure is made up of only a few pieces. Just as the vSphere ESXi hosts are responsible for running your virtualized applications, Kubernetes workers are responsible for running your containerized applications.
A Kubernetes worker can run multiple Pods, and the size and number of Pods that can run on a worker depend on the size of the worker itself. This is analogous to what we experience with vSphere today, where capacity depends on how big our ESXi hosts are.
Next is the Kubernetes master, or controller. Like vCenter, this is the brain of the Kubernetes deployment. It is packed with services that are required for keeping the cluster functioning, and carries all the components for deploying applications to the workers themselves.
The API server provides the front end for the cluster by exposing the Kubernetes API. Internal components, such as the scheduler or the nodes, and external components, such as kubectl or API-driven systems, can all make calls to the API server.
The Kubernetes controller manager is a service that watches the shared state of the cluster through the API server and makes changes attempting to move the current state toward the desired state.
The scheduler is what watches for new Pods as they are requested and created. etcd pretty much becomes our database: it saves the current state of the cluster.
The control plane of Kubernetes can scale as well. There are more complex configurations that need to take place that aren't shown in this diagram, such as fronting all these additional master nodes with a load balancer, but etcd will replicate changes across the master nodes in a highly available situation.
B
The
amount
of
worker
nodes
needed
is
based
on
the
resources
needed
to
run
the
applications.
These
can
scale
as
needed
as
well.
All
of
this
combined
represents
a
single
kubernetes
cluster
at
the
infrastructure
level.
These are all virtual machines running on top of vSphere. Now that we understand what a cluster is comprised of, we need to know how it's built.
There are a lot of blogs, articles, GitHub repos, and tools available that cover everything from starting at the lowest level, installing individual services on Linux and creating TLS certificates, up to the highest point where everything is deployed automatically. There have been multiple attempts at delivering a single Kubernetes installer experience, but they're either tailored to a specific infrastructure provider, they were developed very early before Kubernetes began maturing, or they're proprietary solutions.
So how does Kubernetes know how to use its principles on objects that it doesn't understand? This is where the custom resource definition, or CRD, comes into play. It is an extension of the Kubernetes API, so Kubernetes knows how to interact with new types of objects.
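To illustrate, a CustomResourceDefinition teaches the API server about a new object type. This minimal, hypothetical example (the `Widget` kind and `example.com` group are made up for illustration) registers a new resource:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
```

Once applied, `kubectl get widgets` behaves like any built-in resource, and controllers can reconcile Widget objects the same way Kubernetes reconciles Pods.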
Cluster API is a set of CRDs that allows Kubernetes to interface with an infrastructure provider, so it knows how an object of type Machine or Cluster is represented at a high level. This is how Cluster API works.
As a user, I define a cluster specification. Within the specification, I define the types of machines that will make up my cluster, and using the standard kubectl command line, I apply this to a Kubernetes cluster that has the Cluster API components installed. The Kubernetes cluster with Cluster API is now considered my management cluster.
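As a rough sketch of what such a specification looks like (field values are illustrative, and exact schemas vary by Cluster API version and infrastructure provider), a Cluster object references a provider-specific infrastructure object:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster            # hypothetical name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:            # points at the provider-specific cluster object
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: demo-cluster
```

Applying this manifest to the management cluster with kubectl triggers the Cluster API controllers to reconcile the declared state into real machines.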
vSphere with Kubernetes takes this to another level by introducing an architecture that provides even more features and unique capabilities within vSphere. There is a concept of the supervisor cluster: this is our management cluster for Cluster API. It's represented as a few virtual machines that are automatically created when enabling the workload platform service on a cluster that satisfies all the requirements.
vSphere with Kubernetes has a concept of vSphere Namespaces. Like Kubernetes or Linux namespaces, a vSphere Namespace defines a boundary, or security context. A vSphere Namespace is like a resource pool, but it can run multiple types of Kubernetes objects, such as virtual machines, vSphere Pods, and even multiple Tanzu Kubernetes Grid Service clusters.
The namespaces also have role-based access control, which is inherited through vSphere Single Sign-On, and there are also resource and object limits that can be imposed. These limits give administrators control over the namespaces and make sure applications do not take up more resources than allowed within vCenter.
We can see that the compute cluster has passed all the tests required to enable the workload service, and the supervisor cluster virtual machines have been created. These virtual machines represent the Kubernetes and Cluster API management cluster.
Each one has its own unique IP address, but through a leader election process, only one will be the interface needed to interact with any cluster or vSphere Pod. This example shows a Tanzu Kubernetes Grid cluster. Without going through everything in depth, it's easy to understand the cluster specification and the types and amounts of virtual machines we want to represent a new Kubernetes cluster.
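A Tanzu Kubernetes Grid Service cluster specification of the kind shown in the demo follows this general shape; the name, namespace, version string, and VM class values below are illustrative placeholders:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01          # hypothetical name
  namespace: demo-namespace     # the vSphere Namespace to deploy into
spec:
  distribution:
    version: v1.18              # desired Kubernetes version
  topology:
    controlPlane:
      count: 3                  # number of master VMs
      class: best-effort-small  # VM sizing class
      storageClass: vsan-default
    workers:
      count: 3                  # number of worker VMs
      class: best-effort-small
      storageClass: vsan-default
```

The topology section maps directly to the master and worker virtual machines discussed earlier: the counts and classes determine how many VMs are created and how large each one is.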
After this specification is applied to the supervisor cluster, it will invoke the Cluster API components to achieve a desired end state. The Kubernetes master and worker nodes are represented as virtual machines within the namespace that was defined in the specification. Thanks for watching this video on the Tanzu Kubernetes Grid Service architecture.
Hi, I'm Keith Lee, a technical marketing manager at VMware focused on Tanzu Mission Control.
In this short video, I'm going to demonstrate the new data protection feature in VMware Tanzu Mission Control. This new feature allows operators to centrally manage data protection across their entire fleet of Kubernetes clusters.
Tanzu Mission Control data protection is built on a solid open source foundation using the popular Velero project. Tanzu Mission Control installs and manages the lifecycle of Velero, so you don't have to operate Velero directly in every cluster.
Tanzu Mission Control's UI, CLI, and API allow you to centrally create backups and restores of all your clusters, regardless of where they are located. You can back up and restore clusters, namespaces, and even groups of resources using Kubernetes label selectors. Tanzu Mission Control automatically passes these commands to its cluster agent technology, and Velero executes the backups, passing back status, errors, and full backup details.
Before I can enable data protection and perform a backup, I need to create an account credential for data protection. An account credential is a cloud provider account which Tanzu Mission Control uses either for lifecycle management of clusters, as seen here, or for data protection, as we're going to create now. Creating this allows TMC to create and use
S3 buckets to store the backups. You can create one or many data protection credentials. We give it a name, and it will create an AWS CloudFormation stack template. Next, I will log into my AWS console off screen and create a CloudFormation stack using this template, which will return an ARN, an Amazon Resource Name. And there we have a credential created. Now let's enable data protection on a cluster.
We now see data protection is enabled on the cluster, and we now have a Data Protection tab. To demo data protection, I'm going to use an app called ACME Fitness, which comprises many microservices such as front end, catalog, cart, payment, etc. We will back up the namespace this app was deployed to, delete the catalog service, which includes a MongoDB database and a persistent volume, and then restore it. On the Data Protection tab, we can create backups and restores. Let's go ahead and create a backup.
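Under the hood, a backup request like this is represented in the cluster as a Velero Backup custom resource. A hand-written equivalent, scoped to the app's namespace, might look like the following; the names and label values are illustrative, not taken from the demo:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: acme-fitness-backup     # hypothetical name
  namespace: velero             # Backup objects live in Velero's namespace
spec:
  includedNamespaces:
  - acme-fitness                # back up just this app's namespace
  labelSelector:                # optional: narrow the backup to labeled resources
    matchLabels:
      app: catalog
  snapshotVolumes: true         # snapshot persistent volumes (e.g. MongoDB's PV)
  ttl: 720h                     # retain the backup for 30 days
```

TMC generates and applies an equivalent spec on your behalf; the `includedNamespaces` and `labelSelector` fields correspond to the namespace and label-selector options shown in the UI.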
We give the backup a name, and now we can see it's processing; in a few moments it will be ready. Clicking on the backup, we get some further details, such as the backup type, whether label selectors were used, the number of namespaces backed up, and how many volumes were snapshotted.
So now, off screen, I'm going to create a disaster by deleting the catalog service in my ACME Fitness app. On refreshing the page, we can now see we don't have any catalog items in our app. To perform a restore, we select a recent backup and click Restore.