A
Hello everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie Talvasto, I'm a CNCF Ambassador and I lead marketing at VSHN, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday or Tuesday to watch live. This week we have two amazing speakers, Kevin and Saad, here with us to talk about making VMs a first-class citizen in Kubernetes with KubeVirt. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF Code of Conduct, so please do not add anything to the chat or questions that would be in violation of that code of conduct.
B
Thank you, Annie, and thank you everyone for joining us today on our webinar: KubeVirt, making VMs a first-class citizen in Kubernetes. My name is Saad Malik and I'm the CTO here at Spectro Cloud, where we focus on Kubernetes management. Joining me is Kevin Reeuwijk, principal architect at Spectro Cloud. Kevin.
B
So before we talk about KubeVirt, why are we doing VM management on Kubernetes? Even though we're seeing massive adoption of container-based workloads, the reality is that VMs are here to stay. For most organizations, refactoring all of their existing applications to containers is a significant amount of effort. Even today, if you look at the number of VMs running on VMware, it's over 85 million, and that number is only going to grow over the next few years.
B
At the same time, many organizations are expressing concerns about the uncertainty around the Broadcom acquisition of VMware. Most of these organizations are also looking at alternative hypervisors, hedging their bets and de-risking their operations, and there are additional benefits of potentially saving on some of the more expensive hypervisor costs. For me, though, one of the biggest advantages of running the hypervisor inside of Kubernetes is that your Kubernetes footprint is already expanding.
B
Your
platform,
engineering
teams,
your
Ops
teams
are
already
learning
how
to
use
kubernetes
and
managing
infrastructure,
and
if
that's
the
case,
why
not
have
the
same
platform
manage
both
your
container-based
workloads
and
your
virtualized
workloads?
One
all
of
the
training
and
support
and
operations
are
as
unified
and
two
as
your
workloads
over
time
do
migrate
towards
containers.
As
you
start
refactoring
your
applications,
it's
the
same
shared
infrastructure
that
you're
using
there's.
No
additional
Hardware
cost
that
to
add
into
the
picture
now
looking
at
the
technology
behind
the
scenes.
So
cubert
is
what
provides
VM
management.
B
The underlying hypervisor is KVM, but KubeVirt leverages a technology called libvirt, which acts as an abstraction layer between different hypervisors. By the way, the OpenStack platform itself is also built using KVM and libvirt, so the technology has been mature for many, many years. KubeVirt provides some really powerful hypervisor capabilities: not only does it help provision the state of the machines, it also automatically restarts VMs when they crash.
B
It enforces the VM configurations, everything from your CPUs to your RAM, networking and storage, and then of course there are many different networking options. For a quick POC or a quick environment, you are able to use pod networks using popular CNIs like Calico and Cilium, and we'll get a little bit more into the details of that. But you can also leverage Multus for more advanced configurations, whether you're exposing direct VLANs or bondings to your VMs.
B
Looking at a little bit of the architecture of KubeVirt: users interact with the Kubernetes API server using kubectl or K9s or their favorite tool. From there they essentially provision what's called a virtual machine, or a virtual machine instance (VMI). There is a controller running called the virt-controller, which watches the VMI and will launch an actual pod called the virt-launcher. That pod gets scheduled to a node, and then the virt-launcher will spin up the VM as specified by the VMI.
B
Now all the operations are, of course, managed directly by libvirt and the virt-handler on the node, until the user deletes the actual VMI object, in which case the reverse operation happens: the domain is deprovisioned and then the pod itself is fully killed.
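For reference, here is a minimal sketch of that object chain as a manifest; the name and sizing are hypothetical, not from the talk. Applying a VirtualMachine makes the virt-controller create a VirtualMachineInstance, which a virt-launcher pod then realizes on a node.

```yaml
# Minimal sketch (hypothetical name and sizing): one VirtualMachine object;
# the virt-controller derives a VMI from it, and a virt-launcher pod runs it.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true            # desired state; deleting the VM/VMI reverses the flow
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 1Gi
# Observe the whole chain with: kubectl get vm,vmi,pods
```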
B
Now, just one last section before we jump over to Kevin: what does it take to run KubeVirt, what does the entire stack look like? Well, you obviously have to start with some physical bare metal boxes. These servers become part of the underlying host clusters. Now, the first question to ask is: who manages the lifecycle of these servers? There are many ways of orchestrating bare metal servers.
B
One
of
the
more
popular
projects
is
canonical
Mass,
which
is
a
Bare
Metal,
Management
interface
and
canonical
Mass
allows
you
to
provision
and
manage
the
lifecycle,
experimental
machines
like
a
product
cloud.
If
you
have
a
small
keyword
cluster,
you
don't
need
a
big
data
center
use
case
or
self-provision
capabilities.
There
is
an
open
source
project
called
Kairos
that
provides
tamper-proof
immutable
operating
systems
that
you
can
install
onto
these
boxes
for
Ubuntu
Fedora
Souza,
any
operating
system
that
you
may
have
once
you
have
the
actual
bare
metal
boxes.
B
Once you have the actual bare metal boxes, you obviously need storage, and storage comes in two different flavors. You may have a software-defined storage solution, a commercial product like Portworx or Rook Ceph, and there are also direct-attach options with storage area networks, whether you're using flash arrays or NetApps or Dell EMCs.
B
On the networking side, there are many different CNIs; by the way, this is just a representative selection, there are many different choices you can make, whether you go with Calico or Cilium, like I mentioned. If you do have more advanced networking requirements, such as exposing different VLANs, you can use a project called Multus.
B
You go ahead and provision Kubernetes on top of that, and you obviously have to install the KubeVirt piece. For monitoring you can use something like Prometheus and Grafana; for backup you have a couple of different solutions like Velero or Trilio. Then, once the cluster is up, you can provision your different namespaces and provision your containerized applications along with your virtualized apps. The last slide is just some specific requirements relating to KubeVirt: you do need a Kubernetes cluster that is 1.17 or later.
B
The host nodes ideally are running bare metal, with virtualization support from the BIOS or firmware passed directly down. If you are testing KubeVirt on a private data center like VMware, or in a public cloud, you can also enable emulation support.
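As a minimal sketch, that emulation toggle lives on the KubeVirt custom resource; the field names follow upstream KubeVirt, and the namespace assumes a default install.

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt        # default install namespace (assumption)
spec:
  configuration:
    developerConfiguration:
      useEmulation: true     # QEMU software emulation instead of hardware KVM
```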
B
For storage: if you're just looking to provision virtual machines on individual nodes, without any live migration support, you can use regular ReadWriteOnce kinds of volumes. But if you do need the ability to live migrate VMs across multiple nodes, then you do need a CSI driver with ReadWriteMany capabilities. And again, for networking, keeping things simple you can use the pod network, but if you have more advanced use cases with VLANs, you can use Multus. So Kevin, what are we going to be seeing in the demo today, and what does the environment look like?
C
Yeah, thank you, Saad. So the environment that we have to work with today actually looks like this. Let's see, we have to switch the slide there; let's get it back.
C
It's a number of bare metal nodes: we have one control plane node and five worker nodes, all deployed by Canonical MAAS, on top of which we are running Kubernetes 1.26. In this particular environment we're using Portworx Enterprise to provide a distributed storage solution, and Cilium for pod overlay networking, so that runs on top of that.
C
We have an additional NIC in these machines that gives us access to particular VLANs in the network, so I can show putting a VM directly on one of the VLANs, for which we use Multus. And then we have a couple of regular components like MetalLB and NGINX to provide load balancing and ingress services on a bare metal cluster, which KubeVirt and Prometheus and Grafana can then use.
C
And then, using KubeVirt, we can run virtual machines, and of course we have this side by side of containers and virtual machines running next to each other. So what I'll show is these steps: first creating an ephemeral virtual machine, which is what you often see in other demos; then what we can do to put such a virtual machine on an existing VLAN; then how you can create a persistent virtual machine using something called data volumes, so that the machine actually lives on a persistent volume and you can make changes to it.
C
All of this is available in a GitHub repo, so if you want to check out any of the manifests here, you can get them from the cncf-kubevirt repo under my GitHub handle; the link will be posted in the channel as well. So we have these manifests here, and let's first start by looking at what one of these things looks like.
C
So
a
virtual
machine
manifest
is
just
a
spec
similar
to
like
an
ovf
definition
of
what
the
VM
should
look
like,
and
it
has
very
similar
items
like
how
many
CPU
cores
should
be
in
there,
which
disks
should
be
part
of
it.
What
network
interfaces
should
we
use,
then
there's
a
translation
to
what
that
means
inside
of
kubernetes,
so
that
same
number
of
CPU
cores
also
means
that
we
want
to
reserve
that
amount
of
storage
capacity,
and
you
can
play
with
over
commitment
here
the
interface
inside
the
VM.
C
The interface inside the VM, we can translate to what that is on the Kubernetes side. So in this case we'll start by putting this VM on the regular pod network, as if it were any other container, and then we'll look a bit further at what it requires to put it on a VLAN. And in this case we're just saying that the volume for this is a container disk, which means that it's not persistent.
C
If we shut this down and start it back up, we just get the same image all over again, but it's a great way of starting a VM from a known image that is ephemeral, so it can be really useful for demos, for example. And then we can do stuff like cloud-init, for example, to provide additional steps that automatically need to be run as the VM starts up. You can also do this via a CLI.
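As a rough sketch, an ephemeral VM manifest of the kind described here could look like this; the image, names and cloud-init content are hypothetical, while containerDisk, cloudInitNoCloud and the masquerade pod network are the KubeVirt mechanisms being discussed.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ephemeral-vm              # hypothetical name
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinit
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}      # regular pod network, like any other container
        resources:
          requests:
            memory: 2Gi
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          containerDisk:          # ephemeral: rebuilt from the image on every start
            image: quay.io/containerdisks/ubuntu:22.04   # hypothetical image
        - name: cloudinit
          cloudInitNoCloud:       # first-boot automation, as mentioned above
            userData: |
              #cloud-config
              password: demo      # hypothetical first-boot configuration
              chpasswd: { expire: False }
```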
C
So if you install the virtctl plugin, which you can do with kubectl krew install virt, for example, you get a CLI that gives you access to a lot of the common operations. But when it comes to creating a VM, you'll see that that is actually quite basic: you can provide some options and it will spit out a basic manifest, but it's usually better to just use the KubeVirt guide and write the manifest directly, as we have here.
C
So let's apply this, there we go, and then we'll see inside of the cluster that we now have a virtual machine. It got the configuration that we wanted and it is starting, and that will create a virtual machine instance, which is another definition with essentially the same configuration. But now this is the desired-state configuration of an instance that should actually exist in the cluster.
C
And so if we look at pods, we now have a virt-launcher, and this is the process that actually runs QEMU to run the virtual machine with the resources allotted to it. So this pod will have these, let's see, resource limits applied to it here, to specify how much it can actually use from the cluster, and you can see that network information automatically gets added to this pod depending on how we configure it.
C
It's running on this host; if we wanted to connect to it, we could just use virtctl vnc and say we want to connect to the VMI. Actually, what was the name that we had for this machine? Right here, and we want to connect to that, which is running in namespace cncf.
A
It is, yeah. And there's a quick audience question as well: hey, I have a question, why do we need VMs in Kubernetes at all?
C
Right
why
we
need
VMS
kubernetes,
because
not
all
workloads
are
easily
convertible,
so,
for
example,
if
you
have
a
a
postgres
server
trying
to
containerize
postgres
is
actually
quite
a
bit
of
work.
C
If
you
want
to
do
that
reliably
since
you'd
have
to
do
like
a
multi-container
distributed
database
kind
of
setup
and
for
a
lot
of
workloads,
it
might
actually
be
not
really
worth
it
doing
all
that
refactoring
of
the
application
for
all
parts
of
the
application,
and
so
it
can
be
really
useful
if
you
can
take
some
parts
of
the
applications
that
are
hard
to
containerize,
bring
them
into
the
kubernetes
cluster
and
then
run
them
as
a
regular.
Vm
still
have
the
motion
capabilities
and
just
hold
off
on
the
conversion
of
that
particular
piece.
B
Now, just to add: KubeVirt is the cloud native approach to being able to manage virtualization, with a complete hypervisor built into a single package, and it can run in any environment, whether bare metal or cloud or data center, even though ideally it runs on bare metal so that you get maximum performance.
A
Great, and we had an extra question as well: what's the difference between the VMs and the worker nodes?
B
So the worker nodes are the bare metal nodes that are actually running the cluster itself. The virtualized VM workloads run as containerized pods inside of the cluster, right? So you might have, I'm just throwing out an example, 10 worker nodes that are comprised of the bare metal machines, and maybe, Kevin, you can show the nodes here.
C
Running here, different Ubuntu versions. Yeah, so let's see, I just stopped this particular VM.
C
So if we take a look inside the cluster and look for network attachment definitions in the cncf-webinar namespace: these are per namespace, so it can be really useful to actually lock this down at a per-namespace level. We can see that we have something called vlan0, which will give access to, in this case, just the native VLAN. But of course you can put different VLAN IDs here; I think we have a different one here, which will give access to VLAN 128, for example.
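A rough sketch of such a NetworkAttachmentDefinition follows; the bridge name is an assumption that depends on the host's NIC layout, while the namespaced kind and the VLAN ID follow the Multus setup being described.

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan128
  namespace: cncf-webinar    # NADs are namespaced, enabling per-namespace lockdown
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br0",
      "vlan": 128,
      "ipam": {}
    }
```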
C
When you associate a VM with such a network name, it will make that VM accessible on that particular VLAN. So let's apply this manifest, which will update our VM. There we go; so now it is linked to the Multus network, and we will start it back up again. Oh, is it running already? True.
C
So
v9
zero
gets
translated
to
130
in
this
particular
Port
that
the
machine
is
on
and
it
gets
an
IP
address
directly
on
that
Network,
and
so
this
can
be
used
to
make
sure
that
VMS
can
be
migrated
from
an
existing
solution
like
hyper-v
VMware
onto
a
kubernetes-based
virtualization
cluster,
without
changing
anything
in
the
VM,
you
just
convert
the
VM,
you
bring
it
here.
It
can
keep
the
same
name.
Ip
address
everything
and
run
as
usual,.
C
We want to grab the same container image that we had for the other one, but push that onto a ReadWriteMany persistent volume of a certain size, a size that we can control, and then run the VM from that volume. So everything else here is the same, and instead of a container disk it now says a data volume here, with the name of that template. And so what happens inside the cluster?
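A sketch of that data volume approach follows; the storage size, class and image are assumptions, while dataVolumeTemplates and the registry import are the mechanisms being described.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: persistent-vm              # hypothetical name
spec:
  running: true
  dataVolumeTemplates:
    - metadata:
        name: persistent-vm-rootdisk
      spec:
        storage:
          accessModes:
            - ReadWriteMany        # needed later for live migration
          resources:
            requests:
              storage: 20Gi        # a size that we control
        source:
          registry:                # one-time import of the container disk image
            url: docker://quay.io/containerdisks/ubuntu:22.04   # hypothetical
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          dataVolume:              # persistent: backed by a PVC, changes survive restarts
            name: persistent-vm-rootdisk
```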
C
There's a scratch DV here to initially attach the container volume to, and then there was an import process, which already ran, that copied over all the data into this persistent volume. And now the pod that gets created with this, let's see, actually the VMI that will get created in a moment, will then be linked to that persistent volume. So the container one that we have is not linked to any persistent volume here, but, here we go.
C
Now we have another VM here, which is again on the pod network, but this one is fully persistent and you can save state to it. So once we have something that we actually care about, we want to make sure that we don't lose it when we upgrade Kubernetes or a node goes down for maintenance, for example; we want to be able to move those workloads around.
C
There
we
go
so
what
we
can
do
is
we
can
do
live
migration
and
we'll
do
that
for
the
second
persistent
VM,
and
so
what
this
manifest
is,
is
it's
just
a
virtual
instance,
migration
resource
that
tells
kubernetes
or
cubford
to
migrate
a
VM
of
this
name
in
this
namespace.
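For reference, that migration resource is small; a sketch with the hypothetical names used above:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-persistent-vm
  namespace: cncf-webinar
spec:
  vmiName: persistent-vm     # the running VMI to live-migrate to another node
```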
A
An audience question: Debug asks, what would be the reason some of these companies still use VMs rather than containers?
C
Because
this
it's
the
same,
reusing
reason
that
we
still
have
bare
metal
servers
that
we
still
have
mainframes
here
and
there
in
reality,
what
happens
is
that
new
technology
just
gets
added
and
it
becomes
like
10
times
more
ubiquitous,
but
the
previous
version
iteration
of
that
technology
typically
never
fully
goes
away,
and
so
there
will
always
be
or
for
a
long
time
there
will
still
be
VMS
around
that
are
running
some
really
important
workloads
and
it
will
be
very
difficult
to
get
rid
of
all
of
them.
B
Yeah, and just another data point to add: most new organizations will obviously start new projects, greenfield applications, all containerized or serverless, but there are many legacy organizations that have had applications for many, many years. As a data point, VMware alone has 85 million workloads running on VMs, and they're estimating that by 2025 that number will be close to 120 million. So we're still going to see an increase of VM-based workloads, whether it's different types of load balancers or firewalls; somebody put a comment about pfSense, you know, as one such solution.
B
There
are
appliances
that
customers
or
vendors
are
providing
to
customers
as
virtual
appliances.
Many
of
them
are
still
virtualized
based.
So
we're
still
going
to
see
adoption
of
VM
Technologies,
even
though
over
time
it
is
going
to
start
slowing
down
when
containers
and
serverless
become
ubiquitous
everywhere.
C
Exactly, and that's why it's important to make sure that all of the core capabilities of running VMs at scale can be done in Kubernetes, like live migration and snapshot capabilities, to show that you can confidently move those workloads over, because there are definitely benefits to doing all of the same inside of a Kubernetes cluster. For these workloads here, I now have control over how I publish them: this could be a VM that's running, like a container, inside of the pod network, so it's fully shielded from the network.
C
It
runs
in
an
overlay.
There's
no
attack
surface
until
I
create
a
kubernetes
service
to
expose
one
or
more
of
the
ports
on
that
VM,
for
example,
and
then
I
can
control
how
that
happens,
whether
that's
a
via
cluster
IPS
or
other
workloads
in
my
clusterf
access
or
via
load
balancer,
so
that
they
can
be
externally
accessed
or
even
as
an
entire
vlancer,
that
the
whole
VM
can
be
accessed
depending
on
how
far
you
are
in
your
migration
towards
a
kubernetes
operations
process.
B
Just think about the 1800-plus integrations available in the CNCF landscape, everything from your logging to your monitoring to your ingress solutions. All of these technologies, for the most part, will work not only with container-based technologies but also with virtualization, with KubeVirt itself. I don't know if, Kevin, you're going to show later maybe a backup solution or a Prometheus example, but all these different capabilities work the exact same way even for virtualized workloads.
C
Indeed. And so... oh.
A
There was an extra audience question; I think after that we can get back to the demo, the regular program. Okay, here I have another question: do VMs in Kubernetes mean nested virtualization, or not?
C
No,
this
VM
vmd
running
on
bare
metal
so
directly
on
the
hypervisor.
What
happens
is
that
essentially,
all
of
the
containers
are
running
natively
as
containerized
processes
on
the
the
Linux
host,
it's
Ubuntu
running
on
the
bare
mineral
machine
and
then
there's
KVM
as
a
kernel
module
that
provides
hypervisor
capabilities
and
all
the
VMS
run
as
that.
So
it's
no
nested
virtualization.
All
that
this
launcher
does.
Is
it
just
kicks
off
the
key
mu
process
of
triggering
the
hypervisor
to
spin
up
a
VM,
but
there's
no
necessary
virtualization
Happening
Here.
C
KubeVirt does provide an emulation mode to run without hardware virtualization, but that would be even slower. All right, let's look at the two last steps: snapshots, and snapshots with restores. So if we wanted to make a snapshot of a VM before we do some maintenance on it, for example, we can ask Kubernetes to take a snapshot of the machine.
C
This
natively
integrates
with
the
CSI,
so
it
requires
that
the
CSI
snapshotter
is
enabled,
which
is
what
we
provide
this
for
customers,
something
that
we
automatically
enable,
and
then
the
CSI
will
do
the
heavy
listing
of
creating
the
snapshots.
But
it
will
give
you
an
easy
to
use
resource
called
a
VM
snapshot
to
maintain
the
state
that
you
want
So.
If
we
do
this
and
we
apply,
and
then
we
can
see
that
now,
a
VM
snapshot
exists
of
this
VM.
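A sketch of that snapshot resource follows; the API version is from KubeVirt's snapshot subproject and may differ by release, and the names reuse the hypothetical ones above.

```yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: snap-vm2
  namespace: cncf-webinar
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: persistent-vm    # the CSI snapshotter handles the underlying PVCs
```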
C
This is the persistent VM that we have, and it will also show, if we look at volume snapshots, as a snapshot of the underlying PVC. So all of those dependencies automatically get resolved, and it finds that it needs to use the Portworx snapshot class to make a snapshot of the PVC that belongs to this particular VM.
C
We can already ask it to now start restoring the snapshot, and this looks very similar: it's a VM restore, which will again look for the VM, look for all of the persistent volumes that are associated with that VM, and then return it to the state of the snapshot that we choose. So that will be this snapshot: in the snapshot that we made here, we gave it snap-vm2 as a name, and that is also the snapshot that we call out here to restore. So we apply this one.
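And the matching restore resource, again as a sketch with the same hypothetical names:

```yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
  name: restore-vm2
  namespace: cncf-webinar
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: persistent-vm
  virtualMachineSnapshotName: snap-vm2   # the snapshot state to return to
```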
C
What we'll see is that the data volume has now changed to a restore, so a new PVC has been created. If we look for PVCs here, we see that our original data volume is still here, which was the state before we restored the snapshot, so you still have the ability to go back to that as well. And now a snapshot has been restored, creating a new PVC with the content of that particular volume from when the snapshot was created, and the virtual machine is now connected to this particular PVC.
A
Great, and there are a few audience questions as well. So Sean asks: does KubeVirt support horizontal and vertical scaling?
C
Let's see; it depends a little bit on the technology used. So, for example, if you use Portworx, then it can do automatic scaling of the underlying persistent volumes if they start to run out of space; many of the CSIs will provide that kind of capability.
B
So that's what the CSI and storage side can do. For the VMs themselves, there is a manifest resource type called a VirtualMachineInstanceReplicaSet. Essentially, you can think of it like a Deployment or ReplicaSet that we're all used to with containers, and you are able to attach a horizontal pod autoscaler to that VirtualMachineInstanceReplicaSet, so you can drive it based on CPU and memory and it will automatically do horizontal scaling. Now, that's really interesting.
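As a sketch of that pairing, under assumed labels and thresholds (not from the talk):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: web-vmirs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-vm
  template:
    metadata:
      labels:
        app: web-vm
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 1Gi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-vmirs-hpa
spec:
  scaleTargetRef:              # the HPA drives the replica set's scale subresource
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceReplicaSet
    name: web-vmirs
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```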
B
Ansible does have that; I mean, KubeVirt is a very mature project, obviously, with seven years in the making. There is an Ansible module called kubevirt_vm that allows you to manage all the lifecycle aspects, from provisioning VMs to day-two operations to deleting VMs. So absolutely, you can use Ansible for that.
C
Yeah, essentially that just makes these steps more straightforward to do. When you're talking about managing the software inside of the VM itself, you're still free to use whatever traditional OS and software management tool you want inside of it: Ansible, Puppet, Chef, those kinds of options.
A
Great, and then there was another question from Deepak: what would happen if some alternative to Kubernetes comes along? What would these companies do, continue with Kubernetes or get onto the new technology? Is it always scalable to change tech every time?
C
Well, once you've converted this, essentially you've converted it to KVM, and KVM is used in essentially all of the major cloud platforms that are not Azure or, what's the other one, Hyper-V.
B
And Xen is the...
C
...other one, yeah. So everything else, Amazon, Google, Nutanix, they're all running versions of KVM. So if some other container orchestrator were to come along, then yes, you might have to move your VMs to that other platform, but it's very, very likely that that hypervisor will be KVM as well, because the container orchestration doesn't really have anything to do with the hypervisor, and so that technology will probably just stay the same.
A
Cool, and then there was one last question so far: would it be possible for a device plugin to be effectively utilized within a KubeVirt VM, to enable integration and management of GPU resources?
A
There's been a lot for sure; so far I don't see anything new there, but I have questions as well. Do you have anything else to say for the thank-you portion, or should I get to my questions?
B
For my closing comments, I would just want to say that I think there are lots of advantages to running virtualization in Kubernetes, right? Obviously being able to use a cloud native approach and framework: many of the capabilities that Kubernetes provides, from service discovery to auto-healing to secret passing, are natively supported, whether for containers or for virtualized workloads. And then, like we mentioned, many of the integrations that exist in the CNCF landscape will continue to add value for both virtualized workloads and container workloads.
A
Good. And if the audience has any questions, please ask them now; now is the perfect time, so go ahead and type away. But while we see if the audience types any questions in, I would like to ask you: are there any prerequisites for running virtual machines on Kubernetes?
C
Yes, the biggest is that if you want live migration, which you probably do, you don't want VMs to be stuck on a single node.
C
You must use a storage solution that supports ReadWriteMany persistent volumes. They can be block or file system, but in most cases it has to be a file system type to have ReadWriteMany support, depending on the storage solution that you use. That is, I think, the big one. And then on the networking side, it is just recommended to make sure that you have a good network design that separates out data from management and gives you enough NICs.
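A minimal sketch of what that storage requirement looks like in practice; the storage class name is an assumption.

```yaml
# A live-migratable VM disk is backed by a PVC along these lines:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-rootdisk
spec:
  accessModes:
    - ReadWriteMany          # source and target nodes mount it during migration
  volumeMode: Filesystem     # most solutions offer RWX as a file system type
  resources:
    requests:
      storage: 20Gi
  storageClassName: rwx-capable-class   # hypothetical RWX-capable storage class
```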
A
Great, and then there's a question from the audience: can we create an Android OS VM via KubeVirt?
A
Okay, so coming at some point. Also from my side: can I run KubeVirt on any Kubernetes cluster, in any cloud, or only in certain environments?
C
Well, the challenge is that if you run KubeVirt in the clouds, the cloud providers typically don't give you access to the low-level networking, and so it might be difficult to publish the VMs directly on some of the virtual subnets that you have, as those technologies tend to conflict. So if you run it in the cloud, it's recommended to keep the VMs running in pod networks, in overlays, and then publish them as services.
A
Great, and then there's a question from Jesus: can snapshots be exported, and which formats; is S3 supported?
C
The
backup
Technologies
typically
do
that.
So,
if
you
use
it
kind
of
depends,
so
if
you
use
Valero,
then
then
what
happens
is
that
the
backups
get
converted
into
a
CSI
snapshot
and
then
it
depends
on
the
snapshots
provider
setup
of
where
these
go.
So,
if
you
use
like
a
typical
Valero
layout
with
for,
for
example,
portworx
Enterprise,
the
quick
setup
is
to
have
a
snapshot
and
it
gets
stored
inside
of
the
existing
Port
work
storage.
C
But you can also set up Portworx CloudSnaps, which will then actually move those snapshots off to either S3 or something like a Pure FlashBlade; any NFS- or S3-compatible storage solution allows you to move that off. If you use something like Trilio, then you can do all of that in one shot, where it will take a snapshot of the VM to get a readable copy of the data while the VM keeps running.
A
Good, and then there's an audience question from Debug, which is a pretty big one, I think: what would the role of AI be in Kubernetes in the future, or maybe in DevOps or cloud? Any thoughts there?
B
That's
a
great
question:
obviously
we're
seeing
generative
AI
is
taking
over
the
world.
Every
single
person
is
using,
it
I
feel
there's
going
to
be
two
different
aspects.
One
is
going
to
be
in
terms
of
the
workloads
being
able
to
make
it
easy
to
develop
workloads,
optimize
and
run
and
place
them,
but
the
other
aspect
is
at
the
infrastructure
level
being
able
to
provide
intelligent
placements
across
different
nodes
being
able
to
schedule
different
workloads
across
different
clusters
without
a
human
intervening
and
specifying
run
my
workloads,
it's
going
to
be
more
AI,
driven,
automatically
I.
A
Good, and then from my side a question came to mind: is there something like vMotion for KubeVirt?
A
Good. A few more questions in mind, but if the audience has any questions, please feel free to drop them in and we can get to them. So from my side: is there something like DRS for KubeVirt?
C
There
is
so
there
is
something
called
the
D
scheduler,
which
is
existing
project
for
kubernetes.
It
helps
you
evict
workloads
from
nodes
and
let
the
kubernetes
reschedule
them
on
other
nodes
that
are
not
as
busy,
and
it
has
built-in
Logic
on
like
how
busy
in
those
has
to
be
before
that
eviction
process
happens.
C
This
can
be
leveraged
for
cube
vert,
because
it
essentially
does
the
same
thing.
You
just
configure
it
the
way
that
you
want
how
how
what
the
thresholds
are
for
the
utilization
and
under
utilization,
and
then
it
will
automatically
decide
to
evict
certain
VMS
from
nodes
which
will
cause
live
migration,
two
nodes
that
are
not
as
busy
and
that
way
you
have
your
essentially
TRS
at
home.
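A sketch of such a descheduler policy; the schema follows the upstream descheduler project and the thresholds are assumptions. Evicted VMs live-migrate rather than restart when their eviction strategy allows it.

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: rebalance
    pluginConfig:
      - name: "LowNodeUtilization"
        args:
          thresholds:          # nodes below this count as underutilized
            cpu: 20
            memory: 20
          targetThresholds:    # nodes above this get workloads evicted
            cpu: 70
            memory: 70
    plugins:
      balance:
        enabled:
          - "LowNodeUtilization"
```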
C
So the quick way is, there are two options. One is running a VM as if it were a container workload, and so it just shows up as another pod next to the containers, on the regular CNI overlay network that you typically use, like Calico or Cilium provides. And then, like with any container on Kubernetes, you don't directly have access to that particular pod, but you can publish access to parts of it using Kubernetes services, and so you can create a service, and then via MetalLB...
C
...it can get either an IP address there, or you can statically configure it, and then it will be running just as if it were plugged in directly into the network, similar to a VMware cluster where you give it a network port group.
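As a sketch of that service route, with hypothetical names and ports; the kubevirt.io/domain label is how KubeVirt labels a VM's virt-launcher pod.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-vm-ssh
spec:
  type: LoadBalancer           # MetalLB hands out the external IP on bare metal
  selector:
    kubevirt.io/domain: my-vm  # matches the virt-launcher pod of the VM
  ports:
    - name: ssh
      port: 22
      targetPort: 22
```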
A
Good. Final call for questions for now; if you have anything, please send them in. And then Jesus asked: can a VM endpoint be used as the target of a Kubernetes LoadBalancer service?
A
Well, very great. So if no one has any more questions: do you have any final words, Saad or Kevin?
B
Yeah, absolutely. So thank you, everyone, for attending the webinar. If you want to see and play around with all of Kevin's workloads, the link is available directly on this page here, I believe, and you also published that link in the chat. Kevin does have a webinar next week, or in two weeks, on bare metal Kubernetes with MAAS; that's also going to be done with Kevin and an expert from Canonical.
B
So
please
do
a
scan
this
QR
code,
if
you're
interested
in
attending
there,
if
you're
more
interested
about
bare
metal
clusters
how
to
provision
them
how
to
maintain
them.
Please
do
take
a
look
at
our
blogs
and
also
spectral
cloud
is
invested
in
this
area
for
managing
a
complete
stack
everything
from
bare
metals
to
virtualized
workloads.
A
Perfect, those were really good ending words for today. As always, from my side as well: thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about making VMs a first-class citizen in Kubernetes with KubeVirt. We also really loved the interaction and questions from the audience; it's really lovely to always see that. We bring you the latest cloud native code every Wednesday or Tuesday, and in the coming weeks we have more great sessions coming up, so stay tuned for those.