From YouTube: Webinar: Hybrid Cloud Kubernetes with Nodeless

Description
Nodeless Kubernetes enables you to stretch a single Kubernetes control plane to schedule pods across heterogeneous compute types (on-demand, preemptible, Fargate, ACI, etc.) on multiple cloud providers (on-prem, AWS, GCP, Azure, etc.). The webinar will enumerate the motivation for wanting a hybrid cloud control plane, provide an overview of the state of the art in open source Nodeless projects like virtual-kubelet and Kip, and call out current limitations. We will walk through a live demo of stretching an existing Kubernetes control plane to dispatch pods to a different cloud provider using virtual-kubelet and Kip.

Presenter:
Madhuri Yechuri, Founder @Elotl
A: I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, Hybrid Cloud Kubernetes with Nodeless. I'm Paolo, CNCF ambassador and cloud specialist at Oracle, and I'll be moderating today's webinar. I'd like to welcome our presenter today, Madhuri Yechuri, founder of Elotl. Before getting started, just a few key items: during this hour, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions there and we will get to them as we can. This is an official webinar of CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that could be held as a violation of that code of conduct. Basically, please be respectful of all your fellow attendees and presenters.
B: Thanks, Paolo. Thank you all for joining this session on hybrid and multi-cloud Kubernetes with Nodeless. We will be talking about the definition of multi-cloud and hybrid cloud Kubernetes, go over a couple of use cases where you could find multi-cloud Kubernetes to be a compelling fit for your workloads, define Nodeless Kubernetes, and walk through some of the implementation details of Nodeless and how it satisfies some of the requirements of the multi-cloud and hybrid cloud Kubernetes use cases. We will go through a live demo of using Nodeless Kubernetes to achieve multi-cloud Kubernetes, walk through some of the caveats and work-in-progress items for multi-cloud Kubernetes, cover takeaways and references, and we'll have Q&A at the end. We'll also have a short Q&A after the demo to answer any questions, and if we have any questions unanswered, we will include them in an FAQ section before uploading the slide deck and recording to the CNCF website.
B: Take, for example, an on-premise Kubernetes cluster that is running on vSphere in an on-premise data center, where your on-premise pods are deployed to VMs running on ESX servers. You might want to stretch this Kubernetes cluster across multiple cloud providers beyond the on-premise data center: ship some pods to AWS, or ship some pods to Azure. At the same time, you might not want to go ahead and deploy a control plane on AWS or Azure and then maintain, monitor, and update the characteristics of that control plane.

B: You might not have the resources, time, or interest to maintain this plethora of hundreds of control planes across various public cloud providers when you already have a working control plane that you're happy managing in your on-premise data center. This allows you to focus on your core business instead of curating pet compute on each of the cloud providers, and you also avoid the proliferation of control planes across various cloud providers. One of the use cases is wanting to burst during peak workload.
B: Let's say you have a Kubernetes master that is deploying a thousand pods to ten worker nodes in an on-premise data center. You've done your capacity calculation and figured out that, in order to run your steady-state thousand-pod workload, you need ten worker nodes in your on-premise data center. You've raised your purchase orders and configured your cluster, and everything is working fine on the on-premise Kubernetes cluster. Then, let's say, Black Friday comes around and your peak workload spikes from a thousand pods to 2,000 pods. Now you don't have the time to go and raise purchase orders to procure the additional worker nodes to stick into this Kubernetes master. In order to take on this burst workload, it would be nice if you could burst the thousand extra pods to a cloud provider like AWS or Azure or any other cloud provider for the duration of Black Friday.

B: Once Black Friday passes and you're back to regular operations, your ten worker nodes in your on-premise data center are sufficient to run your thousand pods, so you would want your footprint to move back entirely to the on-premise data center. This is one of the use cases where it would be nice to achieve a hybrid cloud or multi-cloud scenario driven through a single Kubernetes master.
B: Another use case is running machine learning workloads where your data resides on a public cloud provider, but you don't want to set up a Kubernetes control plane across multiple public cloud providers. It would be good to be able to drive the machine learning workload's learning phase through the master residing on your existing cloud provider, which could be an on-premise data center.
B: So what are the ways in which we can achieve such multi-cloud Kubernetes without initiating a proliferation of control planes spread across multiple cloud providers? Nodeless Kubernetes is one such way of achieving multi-cloud Kubernetes. Nodeless Kubernetes essentially presents a virtual worker node to an existing Kubernetes control plane. The virtual worker node advertises a very large capacity available for placement of pods to the control plane, but in actuality it consumes very little capacity.
B: If a pod comes in to the control plane and is scheduled on the virtual worker node, the virtual worker node under the covers looks at the pod's resource requirements (vCPU, memory, storage, network, etc.) and provisions just-in-time, cost-effective compute for your pod on the cloud provider it has been configured to run pods on. If you've configured it to run on AWS, for example, the just-in-time provisioned compute can be an on-demand instance, a spot instance, or a Fargate launch type.
B: The launch types can either be set manually, or you can ask the virtual worker node to pick the right launch type based on your resource requirements and your SLAs. The virtual worker node provisions the just-in-time compute node for your pod, and your pod is dispatched over to that just-in-time provisioned compute node.
B: As long as the pod is running, to the control plane it appears as though the pod is running on the virtual worker node, and the pod functions like a regular pod. Once the pod terminates, the underlying compute node is automatically terminated by the virtual worker node. So think of compute nodes that come up and disappear on various cloud providers based on your pod's lifecycle, rather than an always-on, statically provisioned compute node that's hand-curated by you. The virtual worker node, in actuality, under the covers, is not a real worker node.
B: The Kip provider is an open source project from Elotl, and the virtual-kubelet with the Kip cloud instance provider is a container speaking the kubelet API. It advertises itself to the API server: "Hey, API server, I'm a worker node, I speak the kubelet API, and I have this very large capacity available for scheduling." The very large capacity that is advertised is configured through a config map, so you could say that you have thousands of vCPUs, terabytes of memory, hundreds of GPU devices, and so on.
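For illustration, the advertised capacity could be captured in a ConfigMap along these lines (a minimal sketch; the ConfigMap name and key names are assumptions rather than the exact schema from the Kip repo):

```yaml
# Hypothetical sketch of the capacity advertised by the virtual node.
# Name and keys are illustrative assumptions; consult the Kip deployment
# manifests for the real schema.
apiVersion: v1
kind: ConfigMap
metadata:
  name: virtual-kubelet-config
  namespace: kube-system
data:
  capacity.cpu: "1000"     # vCPUs advertised to the API server
  capacity.memory: "4Ti"   # terabytes of memory
  capacity.pods: "200"     # maximum pods schedulable on the virtual node
```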
B
So
if
you
think
about
what
are
the
main
differences
between
a
vennila
kubernetes
control
plane
and
the
node
les
supercharged,
the
vanilla,
kubernetes
control
plane
is
depicted
in
the
top
half
of
this
slide,
where
you
think
of
your
worker
nodes
as
large
bins
that
have
static
capacity
and
your
pods
are
balls
of
different
sizes
that
are
being
stuffed
into
these
large
bins.
Let's
say
a
small
pod
has
one
BCP
one
gig
of
ram
a
medium
pod
has
two
recipients,
two
big
ceramic
cetera.
B: So you have quite a bit of wasted resources, because essentially you're doing bin packing, and optimal bin packing is only possible if all of your pods are homogeneous types and run for a predictable, constant amount of time.
B: With Nodeless, when you supercharge your control plane, what you see is one large virtual worker node presenting very large capacity to the control plane, and that large worker node is doing bin selection instead of bin packing: you get a right-sized bin provisioned just in time for each ball. A small ball gets a small bin, a medium-sized ball gets a medium-sized bin, etc., and once the pods terminate, the underlying bin is tossed automatically.
B: So here I have minikube running on my MacBook, and I have the one master node and the virtual worker node, which is called virtual-kubelet. If I look at the virtual-kubelet worker node and its available capacity, it says that it has 20 vCPUs, 512 gigs of RAM, and a 200-pod limit available to be scheduled on this virtual worker node.
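For reference, inspecting that node uses the standard commands below; the node name virtual-kubelet matches the demo, though your node name may differ:

```sh
# List nodes: the virtual worker node shows up alongside the minikube master.
kubectl get nodes

# Inspect the advertised capacity (cpu, memory, pods) on the virtual node.
kubectl describe node virtual-kubelet
```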
B: If I look at the virtual-kubelet cloud instance provider container, it is telling the API server for minikube: "Hey, I have 20 vCPUs and 512 gigs of RAM available for scheduling pods, and I can run up to 200 pods." The configuration for the amount of resources available to be scheduled on the virtual-kubelet node comes through a config map, so you can configure GPU devices and other devices as well if you want to expand the capacity of the virtual worker node.
B: So let's look at a sample nginx deployment as a workload. I have an nginx deployment that sets the app label to nginx, and I want to schedule one replica. The node selector for this replica is set to virtual-kubelet, and I want one vCPU and two gigs of RAM to be provisioned for the nginx pod. So let's go ahead and create the nginx deployment.
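The manifest being described would look roughly like the sketch below; the nodeSelector label and toleration follow common virtual-kubelet conventions and are assumptions, not details copied from the demo:

```yaml
# Sketch of the demo's nginx deployment: one replica pinned to the virtual
# node, requesting 1 vCPU and 2 GiB of RAM. The nodeSelector label and the
# toleration are common virtual-kubelet conventions and may differ per setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        type: virtual-kubelet            # assumption: label targeting the virtual node
      tolerations:
      - key: virtual-kubelet.io/provider # virtual-kubelet nodes are typically tainted
        operator: Exists
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: "1"
            memory: 2Gi
```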
B: While the nginx deployment is being created, if we look at the virtual-kubelet config map, we should see the cloud provider that it is configured to ship pods to, and we also see the CPU, memory, and pod limit configurations that are set as the advertised capacity for the virtual-kubelet node.
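Those steps come down to ordinary kubectl commands; the config map name and namespace below are assumptions for illustration:

```sh
# Create the deployment from the manifest sketched above.
kubectl create -f nginx-deployment.yaml

# Inspect the virtual-kubelet config map: it carries the target cloud provider
# plus the advertised cpu/memory/pod limits (name/namespace are assumptions).
kubectl get configmap virtual-kubelet-config -n kube-system -o yaml
```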
B
The
engine
X,
the
one
replica
for
the
nginx
deployment
would
be
scheduled
on
to
the
virtual
working
on,
because
this
is
the
only
worker
node
that's
available
for
the
control
plane,
and
it
has
enough
capacity
to
take
on
one
V,
CPU
and
two
weeks
of
RAM.
Our
resource
requirements
for
the
nginx
replicas.
The
virtual
worker
node
is
scheduled
to
is
configured
via
the
config
map
to
ship
parts
to
AWS.
B
So
the
virtual
worker
node
can
look
at
the
Internet's
replicas
resource
requirements
and
it
will
provision
just-in-time,
compute
for
your
pod,
for
your
nginx
replicas
and
once
the
computers
provisioned,
the
nginx
pod,
is
dispatched
over
to
the
just-in-time
provisioning
node.
The
just-in-time
provisioning
note
can
be
of
various
flavors.
It
can
be
on-demand,
Spotify
Gate.
You
can
specify
by
a
annotation
to
your
deployment,
which
compute
launch
type.
You
have
a
preference
for
or
you
could
let
the
virtual
work
a
node
pick
the
right,
compute
launch
type
for
your
pod
automatically.
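Expressed on the deployment, that preference might look like the fragment below; the annotation key is written from memory of the Kip docs and should be treated as an assumption to verify against the Kip repo:

```yaml
# Fragment of a deployment spec: pinning the launch type via a pod-template
# annotation. The annotation key is an assumption; verify it in the Kip repo.
spec:
  template:
    metadata:
      annotations:
        pod.elotl.co/launch-type: "spot"   # e.g. on-demand, spot, or fargate
```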
B
Once
the
part
once
the
compute
node
is
provisioned.
The
pod
is
dispatched
over
to
the
compute
node.
If
your
application
has
strict
startup
time
requirements
like,
for
example,
if
you
have
strict
SLA
of
5
seconds
between
the
time
you
say
create
nginx
deployment
and
the
pod
needs
to
respond
to
inch
and
external
requests.
You
can
also
specify
a
number
of
set
count
of
3
warmed
instances
that
are
available
to
dispatch
your
parts
to
the
white
capri.
B
One
instances
enable
you
to
do
is
they'll
cut
down
the
instant
startup
time,
which
varies
from
cloud
provider
to
cloud
provider,
and
also
it
varies
based
on
the
computer
launch
types
labor.
You
pick
on
a
given
cloud
provider.
So
if
you
would
rather
not
wait
for,
let's
say
a
minute
and
a
half
for
an
easy
to
on-demand
instance
to
come
up
in
order
to
dispatch
your
point,
you
can
specify
the
count
of
let's
say
five
number
of
on-demand
instances
of
type
C
for
dot
large.
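In the provider configuration, such a warm pool might be declared roughly as below; this is a sketch from memory, and the field names are assumptions to check against the Kip repo:

```yaml
# Hypothetical fragment of the Kip provider configuration declaring a warm
# standby pool. Field names are assumptions; see the Kip repo for the schema.
cells:
  standbyCells:
  - instanceType: c4.large   # keep five pre-warmed on-demand c4.large instances
    count: 5
    spot: false
```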
B
That
would
enable
you
to
cut
down
on
instance,
startup
time
at
the
cost
of
having
to
spend
a
little
extra
for
the
preborn
instance
launch
types.
So
once
the
instances
are
provisioned,
the
engineers
replica
is
dispatched
over
to
the
instance
and
to
the
control
plane.
It
will
look
like
the
nginx
replica
is
running
on
the
virtual
worker
node.
So
let's
look
at
the
nginx
replicas
in
my
AWS
console.
B: It's running on a t3.small instance type, which is the best-fit instance type that honors the one vCPU and two gigs of RAM resource requests for the nginx replica. The tags of the instances that are provisioned just in time reflect the pod's labels: the nginx deployment had app set to nginx as a label, so that label appears as a tag. This makes it very easy to do accounting and monitoring of the resources belonging to a certain Kubernetes cluster that are being dispatched to various cloud providers.
B: There is also the name tag of the virtual worker node, the namespace in which it is running, and the controller ID for the virtual worker node, so you can easily perform accounting tasks as well. In this case, because I did not specify any affinity for a certain launch type, what we get is an on-demand instance. Let's scroll down and see: it's a regular on-demand instance that is running the nginx replica.
B: If we scale the deployment up to three replicas, what's happening under the covers is that we get two extra compute launch types for the two new nginx pods. They are provisioned in parallel, so your scale-up time, the time it takes to respond to an unexpected burst in your workload, is the time it takes to provision a single cloud compute instance, because the scale-ups happen in parallel.
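That step in the demo corresponds to a plain scale operation:

```sh
# Scale from one replica to three; the two extra cloud instances are
# provisioned in parallel rather than one after another.
kubectl scale deployment nginx --replicas=3
```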
B: Essentially, what we are trying to achieve here is for our on-premise Kubernetes control plane to automatically ship pods to AWS or Azure or any other cloud provider. Let's go back and see whether all three pods are up and running. Our three pods are up and running, and if we get the deployment, three out of three are ready. So let's go ahead and delete the deployment.
B: When the three pods are terminated, the underlying compute launch types will automatically be terminated too. If I go back to my EC2 console, look at the instance out here, and refresh the page, the instance is being shut down. So once your pods are terminated, because your burst capacity is no longer required, the underlying compute launch type is automatically terminated, and you get back to being 100% on your cloud provider A, with zero compute footprint on the other cloud provider. Let's verify that.
B
We
are
back
to
running
center
pods,
so
this
is
a
very
simple
demo
of
shipping
workloads
from
the
kubernetes
control
name.
That
is
running
on
a
mini
cube
on
my
mark
book
out
to
AWS,
and
the
behavior
would
look
exactly
the
same
if
you
would
want
to
ship
from
mini
cube
planning
on
line
to
a
shirt
or
any
other
cloud
provider
as
well.
Some
of
the
caveats
for
the
Nautilus
ecosystem.
As
of
now.
B
Persistent
warning
support
for
the
parts
that
are
running
on
the
cloud
provider
B,
which
is
the
Florence
la
provider,
is
not
yet
supported.
It's
working,
progress
and
demon
suits.
Support
also
would
need
to
be
added
to
no
less
eco
system,
because
currently
the
implementation
and
the
interpretation
of
teaming
set
assumes
that
the
demon
said
parts
are
running
on.
Every
single
worker
node
will
have
to
be
in
green,
and
this
is
the
result
of
demon,
sect's
kind
of
evolving
from
an
era.
B: In that era, on-premise data centers were the default and you wanted the DaemonSet pod running on each of these very large worker nodes. DaemonSets will have to be reinterpreted in the context of Nodeless: does the DaemonSet pod really need to run on the virtual worker node, or does it need to run on each of the cells that the pods are being dispatched to? That is something we are working on.
B: As far as supported cloud providers are concerned, multi-cloud via Nodeless with virtual-kubelet and Kip is functional on AWS and GCP; Kip is beta on Azure. If there is interest in consuming any other cloud provider, please let us know, and we'd love to add support for any cloud provider that you are interested in. I'm going to pause here for questions, to see if the demo makes sense and if there are any open questions, and hand it over to Paolo. Hi.
A: [question inaudible]

B: Let me walk through the deployment for the virtual worker node. The virtual worker node is simply a deployment with one replica, and the replica is running virtual-kubelet built with the Kip provider under the covers. This container essentially speaks the kubelet API: it connects back to the API server and advertises itself as a worker node. So it's simply a pod that can be scheduled on any of your existing Kubernetes control planes.
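A minimal sketch of that shape is below; the image name and arguments are assumptions, and the real manifests live in the Kip repo's deploy directory:

```yaml
# Hypothetical sketch: the virtual worker node is just a one-replica
# Deployment running virtual-kubelet with the Kip provider. The image and
# args are assumptions; use the manifests from the Kip repo instead.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virtual-kubelet
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: virtual-kubelet
  template:
    metadata:
      labels:
        app: virtual-kubelet
    spec:
      containers:
      - name: virtual-kubelet
        image: elotl/kip:latest        # assumption: Kip provider image
        args: ["--provider", "kip"]    # assumption: provider selection flag
```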
B
So
it's
not
any
separate
component
that
needs
to
be
explicitly
configured
in
your
kubernetes
cluster,
so
you
can
deploy
it.
For
example,
let's
say
can
deploy
it
on
and
on
an
on
premise:
kubernetes
control,
plane
you
can
deploy.
You
can
create
the
deployment
on
an
EPS
cluster
or
a
gke
cluster,
etc.
Does
that
make
sense?
Monisha.
A: [reply inaudible]

B: Yeah, it is simply a deployment. If you would like to try it out, there are simple deployment scripts available in the Kip repo; the repo is linked in the slide deck that will be uploaded to the CNCF site by the end of the day. The deploy directory inside Kip gives you a couple of options: deploying a simple Kubernetes cluster on AWS or GCP, or a VPN setup for bursting from on-premise to AWS.
B: For example, if you want a simple setup to try it out, you can use any of the Terraform scripts, or, if you would like to deploy it on an existing Kubernetes cluster, you can use the manifests inside the manifest directory in the Kip repo. I will link all of these in the slide deck that will be uploaded to the CNCF site.
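For orientation, trying it out would look something like this; the paths inside the repo are assumptions, so follow its README:

```sh
# Illustrative only; directory layout is an assumption, see the repo README.
git clone https://github.com/elotl/kip.git
cd kip/deploy

# Option 1: stand up a small demo cluster with the provided Terraform scripts.
# Option 2: apply the manifests to an existing cluster, for example:
kubectl apply -f manifests/
```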
A: [question inaudible]

B: Yeah, so we default to VPN, or Direct Connect, or whatever is your preferred method for securing the communication between the virtual worker node that's running on cloud provider A and the pods that are running on cloud provider B. You can also try out a simple setup with a VPN; a VPN is one of the basic ways of securing the communication between cloud provider A and cloud provider B.
B: So here, for example, it's creating a VPC with a VPN gateway for AWS. You can default to VPN or Direct Connect, or it depends on your cloud provider A and cloud provider B. If you have strict requirements for a certain kind of security link between cloud provider A and cloud provider B, please file an issue on the Kip repo and we'd be happy to see how that would work.
A: [question inaudible]

B: One of the fundamental concepts of Nodeless is that we look at cloud compute as something that is needed for pods, rather than a disjoint entity that is provisioned six or nine months ahead of time and then sits waiting to run pods. So compute instances come up based on the pod lifecycle. Again, one of the fundamental premises of Nodeless is that there is one compute launch type per pod.
B: What we believe is that, as consumers of pods, you shouldn't have to worry about hand-curating compute launch types on each of the cloud providers and trying to keep lists of the hundred-something different types of on-demand instances versus spot pricing versus the latest and greatest on Fargate, etc. So we make it very easy for you to consume the right compute launch type for your pod, based on the pod's SLAs and resource requirements. Having one compute launch type per pod falls along those lines: being able to manage the compute needs of a pod on a pod-by-pod basis. Does that answer your question?
B: So let's go ahead and look at some of the takeaways. Increasing adoption of Kubernetes on various cloud platforms is leading to a proliferation of control planes and a lot of heterogeneous clusters spread across various cloud providers. That leads to a huge amount of operational overhead and also wasted spend.
B: There is no need for you to create a thousand different control planes, each running a hundred pods, just because you want to try out a new cloud provider. You would rather expand your existing control plane across various cloud providers than hand-curate control planes on each of them. Multi-cloud Kubernetes is an idea where we are looking to simplify operations of Kubernetes clusters across various cloud providers by reducing the proliferation of control planes.
B: It hugely simplifies multi-cloud capacity planning, because you don't have to worry about hand-curating, maintaining, and monitoring your cluster autoscaling knobs for your worker nodes, or about how to scale down workers, or about questions like: there is a newer, better, cheaper cloud provider in the market, how do I adopt it in a very quick and agile fashion without having to provision control planes across various cloud providers?
B: If you are interested, or if you have questions, please feel free to write to me or file an issue on the Kip repo. We hang out on the virtual-kubelet channel on Kubernetes Slack, so feel free to pop in and ask any questions that you might have as well. Thanks very much to Elotl engineering, Brandon Cox, Vilmos Nebehaj, and John Roman, for helping put together most of the code base and the scripts that were demonstrated in this session, to Ria Bhatia and Brian Goff from the virtual-kubelet team, and to the CNCF marketing team.
B: Thank you so much for setting up this webinar and making it such a smooth process, and thanks to the audience members Manish, Sergey, and others for your questions. If you have any further questions, please feel free to ask them now. We will include the responses to any unanswered questions in an FAQ section in the slide deck that will be uploaded, along with the recording, to the CNCF website by the end of today.
A: Madhuri, we have more questions, if you think we have time. Yes? Yeah, please, yeah, sure. From Manish Kumar again, thank you, Manish, for the questions: in case I use Direct Connect communication to the cloud provider, where will I be storing the cloud provider's secrets? Is this similar to the way it is done with [inaudible] deployments?
B: The cloud provider secrets, as in the cloud provider credentials for the accounts, and the VPC and security groups that you would want to consume on the cloud provider, all come in through the config map and secrets configured on your original Kubernetes control plane. So all of that will be stored on the main control plane that is driving the scheduling of the pods. If my understanding of the secrets question is incorrect, Manish, please let me know if there are any other secrets that you are talking about.
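For illustration, those credentials would live in an ordinary Kubernetes Secret on the main control plane, along these lines (the Secret name and key names are assumptions; see the Kip deploy manifests for the real layout):

```yaml
# Hypothetical sketch: cloud credentials stored as a regular Secret on the
# main control plane. The name and key names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: kip-aws-credentials
  namespace: kube-system
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<access-key-id>"
  AWS_SECRET_ACCESS_KEY: "<secret-access-key>"
```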
A: [question inaudible]

B: For sure, yeah. That is one of the main motivations for developing Nodeless: we would want you to be able to achieve something like this, where you have a single cloud control plane and you have some pods running on, say, AWS, some pods on-prem, some pods on Azure, some pods on GCP, or maybe other clouds in the future. So you can have multiple virtual-kubelets that are dispatching pods to various cloud providers.
B
So
you
have
one
one
cloud
provider,
that's
configured
for
virtual
work
and
let
me
show
you
a
picture,
so
you
have
one
a
cloud
provider,
that's
configured
per
virtual
work,
a
node
and
you
can
run
multiple.
What
should
work
on
ORS
because
at
the
end
of
the
day,
they're
simply
pods,
so
you
can
run
multiple
virtual
worker
nodes
that
are
that
are
dispatching
parts
to
various
cloud
providers.
A
subset
of
that
question
is
you:
can
Ranma
we'll
work,
a
nodes
that
are
dispatching
parts
to
multiple
availability
zones
within
a
within
a
given
region?
B
If
you
want
to
do
multi
reach,
my
TAC
fell
over
H
a
scenario
as
well,
so
the
the
base
unit
for
a
virtual
worker
node
is
a
single
region.
Is
a
single
zone
in
a
single
region
for
a
single
cloud
provider.
The
virtual
worker
node
is
a
very
lightweight
pod,
so
you
can
run
as
many
parts
as
you
want
to
ship
to
various
cloud
providers,
various
regions
etc.
Does
that
make
sense.
A: [questions inaudible]

B: Let me address the first question; I couldn't hear the second one clearly, so I might request you to repeat it. For the first question: yes, you can set affinity for a certain cloud provider, which is something that can be configured from your Kubernetes control plane itself. If you have, let's say, one virtual-kubelet for AWS and one virtual-kubelet for Azure, they simply appear as worker nodes to the Kubernetes control plane.
B: So we can enable you to define policies, such as giving a pod affinity to a certain worker node based on SLA metrics like cost or provisioning times, or any other constraints you might have, for example: what is the cheapest cloud provider at the moment for running my GPU workloads? You can express these SLAs as node constraints via the Kubernetes control plane, and that will drive the dispatching of the pods to the worker nodes associated with each cloud provider.
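Since each per-provider virtual-kubelet is just a node, standard Kubernetes node affinity can express that preference; a sketch, assuming a node named virtual-kubelet-aws:

```yaml
# Fragment of a pod spec: steer the pod to the AWS-facing virtual node using
# ordinary node affinity. The node name is an assumption for illustration.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - virtual-kubelet-aws
```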
B: That is a very compelling use case for Nodeless, because being able to switch between various cloud providers via a single control plane, in a programmatic, policy-driven way, is something that's super useful, again enabling you to focus on your business instead of curating pet compute on each cloud provider. Paolo, could you please repeat the question about Rancher? I didn't catch it completely. Yes.
A: [question about Rancher and K3s inaudible]

B: K3s? I'm familiar with Rancher, but I'm not familiar with the product K3s, so let me look it up and I will respond to it as part of the FAQ in the slide deck. But as for Rancher itself: since it is managing Kubernetes control planes, any Kubernetes control plane can be supercharged to Nodeless, because the virtual worker node is simply a pod that is dispatched to the control plane. So you should be able to run a virtual worker node on any Rancher-managed control plane. Let me look at K3s and update the answer.
A: Okay, excellent. No, we don't have more questions. I want to thank you so much, Madhuri; that was a great presentation, we learned a lot, and we have much more to learn and use with this, very, very cool. Thank you so much. All right, and that is all the questions, everyone. The webinar recording and slides will be uploaded today. We are looking forward to seeing you at a future CNCF webinar. Have a great day. Bye-bye. Thank you.