From YouTube: Virtual Kubernetes Cluster by Chidambaranathan R
Description
In this talk, we will discuss the challenges with respect to multi-tenancy in Kubernetes and see how the open source project "vcluster" looks promising to address them. We will also deep dive into vcluster's architecture, which explains how it is able to run multiple (virtual) Kubernetes clusters within a single K8s cluster, with a demo. Finally, we will also look into various use cases where "vcluster" could be used.
Welcome, everyone, to this session. In this session, we will see what a virtual Kubernetes cluster is and what it offers. I am Chidambaranathan, and I work as a principal software engineer at Ericsson, predominantly in verification of machine learning operations and test automation.
The agenda of this session goes like this: it starts with the background, then an overview of virtual Kubernetes clusters, with a recorded demo to see how it works, and finally the various use cases where it could be used.
Let's start with the origin story. The origin story is based on a challenge that we faced in our organization, and it may be applicable to others too. As we all know, in any cloud-native product development team, the test environment, that is, the Kubernetes cluster, plays a major role in our day-to-day tasks: during the study phase, to do proofs of concept; during the development phase, for individual developers to test code changes; and during the testing phase, to certify the release of the product, especially in the CI/CD pipeline. So now, let's see these challenges one by one.
First, let's see the challenge that a team faces with the development environment. Multi-tenancy is really tough with Kubernetes, and there are pretty much two models people use to share a cluster across individual teams. One model is a shared cluster for everybody with namespace isolation: an individual or a team gets their own namespace, and that is all they have access to. But the challenge here would be that teams might be developing cluster-wide objects like custom resource definitions (CRDs), and we all know those cannot be managed in a namespace-isolated cluster. So the only practical solution here is to use dedicated Kubernetes clusters to guarantee true separation, which is known as cluster-based isolation, where every individual or team gets a dedicated Kubernetes cluster. And, as you know, even this is problematic, for a lot of reasons.
Finally, it brings in an environmental impact: an individual user may not use the full capacity of the resources in the Kubernetes cluster, and most of the time it will be underutilized. As we can see in the comparison, isolation is not that great with namespaces, whereas it is true with separate clusters; but at the same time a separate cluster increases the cost and brings in the other complexities that we discussed. So what if, instead of namespace or cluster isolation, an individual team member were given a dedicated cluster without all these challenges?
You know, telling the team "no, no, we are running this version of the service mesh, that's all we run in the Kubernetes cluster, this is how we set it up, and everybody must abide by that": cluster operators don't want to run a thousand different versions of a Kubernetes cluster design. We can't blame them for it, but practically their approach gets in the way of progress and flexibility for developers. Now what?
The next challenge that we face is less flexibility in CI/CD. In a conventional CI/CD pipeline, for the system under test we may need to have a Kubernetes cluster installed with the respective version every time, manually or via automation, where the setup time of the Kubernetes cluster can be 10 to 20 minutes; only then does the system test start, which gradually increases the overall feedback loop time of your CI/CD pipeline. Also, let's say we want to test a product on multiple Kubernetes versions. It is going to take a lot of time to test all these permutations and combinations, because the setup time of each Kubernetes cluster adds up. So if there were the flexibility to have all of this covered in the CI/CD pipeline with a quick feedback loop, it would be a big advantage for any product development team. And, more importantly, it brings in more confidence before they ship the software package to the end users.
Just like any organization, we also faced all these challenges, and this triggered a study to explore technologies that could help us use the development and test environments efficiently using cluster virtualization techniques, reduce the cost, and provide more freedom of choice to the developers. While exploring, we came across an interesting open source project called vcluster that looked promising to address the challenges that we have discussed so far. Now the next question would be: what is vcluster? On a funny note, going forward,
this session will be like the movie Inception. In very simple terms, a virtual cluster, or in short vcluster, is a lightweight Kubernetes cluster that runs on top of another Kubernetes cluster. vcluster is an open source project; initially, vcluster was built into a commercial product by Loft Labs. Going forward, I will be using the term vcluster in place of virtual Kubernetes cluster. More specifically, vclusters are just a control plane that runs inside another Kubernetes cluster. What is a control plane?
As we know, a regular Kubernetes cluster will have an API server. Then we have a data store like etcd, the standard backend for Kubernetes, where the cluster configuration and state data, such as the number of pods and their state, are stored. Then we have a controller manager that controls the deployments, replica sets, etc. Also, we have a scheduler that takes the pods and assigns them to the different nodes in the cluster, where the containers get started. Then we have the workloads on top of this control plane. Typically, we will create a namespace to run pods in, and typically we will have multiple namespaces. That's what a regular Kubernetes cluster looks like. Then, for anything, we always talk to this API server, which is the gateway to the Kubernetes cluster.
In a regular Kubernetes cluster we have the capability to spin up workloads inside namespaces, so why not a control plane inside a namespace? And there comes vcluster, which is designed in such a way that a control plane runs as a pod inside a namespace of the underlying Kubernetes cluster. Going forward, we will call this underlying Kubernetes cluster the host cluster.
Now, I have already connected to the host cluster, and I have listed all the namespaces in the host cluster. Let's create a namespace called host-namespace in the underlying host cluster, which is going to host our virtual cluster. Now, what about the permissions in the host cluster needed to run vcluster? Actually, we don't need any cluster-wide permission; only the permissions to create a StatefulSet and a Service in this namespace are required.
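A minimal sketch of a namespaced Role along those lines. Treat the exact rule set as an assumption: real vcluster installs typically also need access to resources such as pods, ConfigMaps, and Secrets in the namespace, so the vcluster documentation is the authoritative list.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vcluster-role        # hypothetical name
  namespace: host-namespace
rules:
# StatefulSet for the vcluster control-plane pod
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["create", "get", "list", "watch", "update", "delete"]
# Service fronting the vcluster's API server
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create", "get", "list", "watch", "update", "delete"]
```

The point is that everything is namespaced: no ClusterRole is needed just to run a vcluster.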
Now let's create a vcluster called vc1 in the host-namespace. There are multiple ways we can create a vcluster: either through the vcluster CLI, or Helm, or just a kubectl apply. I have used the vcluster CLI, which is a very lightweight binary from the vcluster project; under the hood, as we see here, it executes a Helm command that pulls the chart and deploys it to the cluster.
Next, we connect to it. What the connect command does is a kubectl port-forward to the vcluster's API server. Now I'll split the terminal here. The vcluster CLI command creates a kubeconfig in the current working directory, as you can see here. Now let's export it to the environment variable KUBECONFIG, so that all my kubectl commands now point to the vcluster's API server and not to the host cluster's API server. So any kubectl command that I am going to execute now will run inside the vcluster.
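Put together, the demo steps so far can be sketched like the following. The namespace and cluster names are the ones from the demo; the kubeconfig file name is an assumption (older vcluster CLI releases write `./kubeconfig.yaml`), so adapt to what your CLI version prints:

```shell
# Create the hosting namespace on the host cluster
kubectl create namespace host-namespace

# Create the virtual cluster (under the hood this runs a helm install)
vcluster create vc1 -n host-namespace

# Port-forward to the vcluster's API server and write a local kubeconfig
vcluster connect vc1 -n host-namespace

# Point kubectl at the vcluster instead of the host cluster
export KUBECONFIG=./kubeconfig.yaml
kubectl get namespaces   # lists the vcluster's namespaces, not the host's
```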
Now, let's list all the namespaces inside the vcluster. As we can see here, it is different from what is there in the host cluster. Now, let's get the pods in the kube-system namespace of the vcluster. As we can see here, there is one pod running, called CoreDNS. We will talk about this CoreDNS in a later part of the session. Now, we are admin in this vcluster, so let's create a namespace inside the vcluster called ns1.
Now, let's go through a few slides to understand what happened in the demo from vcluster's architectural perspective. I am going to walk through, command by command, what we did in the demo to show the inside view of vcluster. First, we took a regular Kubernetes cluster, that is, the host cluster. Then what did we do? We created a namespace called host-namespace on top of the host cluster. Then, using the vcluster CLI, we created a vcluster called vc1 inside the namespace host-namespace.
A
It
has
a
stateful
set
that
we
call
as
vc1
is
the
v
cluster
that
we
created
and
it
has
an
associated
service
right
and
the
stateful
set
essentially
creates
a
pod
with
a
single
replica
in
this
case
and
inside
this
part
is
wherever
v
cluster
really
leaves
now,
and
we
have
two
containers
in
v
cluster
pod.
The
first
one
is
a
control
plane.
This
is
where
we
can
pick
our
own
supported,
kubernetes
distributions
like
k3s,
which
is
the
default
one
and
other
distributions,
as
listed
here.
The control plane has an API server for any kind of kubectl interaction with the vcluster, and it has a separate controller manager. The controller manager is hooked up to a data store: by default it is SQLite, and other backends like etcd are supported. That means when we write something to this API server inside the vcluster, it writes into a separate data store, and this data store is mounted as a volume to this vcluster pod.
Then we have the second container, which is the syncer, and we will talk about the syncer in a while; that is where the answer to the question "where are the pods actually running?" resides. Then we connected to the vcluster using the vcluster connect command. So the point of vcluster is that we don't talk to the API server of the underlying host cluster and run kubectl commands against it; instead, we spun up a control plane of our own that runs as a pod inside a namespace.
What did we do next? We created a deployment called sample with image x and replicas 2. It's a plain deployment, and now we have another entry in the data store about this deployment. Now the controller manager inside the control plane watches the new entry in the data store, and it creates two pods, which means it writes two more objects into the data store. Now we have four objects in the vcluster's data store: a namespace, a deployment, and two pods owned by this deployment.
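The deployment from this step could look like the following manifest, applied with kubectl while KUBECONFIG points at the vcluster. The image is a placeholder, as it was in the talk, and ns1 is the namespace created in the demo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample
  namespace: ns1
spec:
  replicas: 2               # the controller manager will create two pods
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: x            # placeholder image name, as in the talk
```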
Now these two pods are waiting to be scheduled. Whose job is it to schedule them? It is the job of the scheduler to assign these pods to nodes. But, as we can see here, there is no scheduler in the virtual cluster. So the big difference between a regular Kubernetes cluster's control plane and the vcluster's control plane is that there is no scheduler; instead, there is something called the syncer.
vcluster does not use the scheduler from its distribution; it uses only the API server and the controller manager, and the syncer is what really makes the vcluster virtual. What the syncer does is watch the vcluster's data store and copy the pods down to the underlying host namespace, talking to the host cluster's API server. So now it is the host cluster's scheduler that schedules these pods, and the host cluster's network policies, admission controllers, etc., also apply to these pods.
But these restrictions apply only at the pod level, because all the higher-level objects, the things that create pods, like Deployments and StatefulSets, stay entirely virtual inside the vcluster's data store, and they will not be synchronized down to the host cluster. Only the pods are synchronized. And obviously we can create multiple namespaces in the vcluster and create the same deployments in each.
A
But
now
how
do
we
handle
the
naming
conflicts
since
all
these
parts
are
scheduled
in
the
underlying
host
cluster?
What
sinker
does
is,
when
it
sinks
from
multiple
namespaces
down
to
single
namespace
in
the
host
cluster,
it
renames
the
pod
and
adds
two
suffix
that
is
virtual
namespace
and
virtual
cluster
name
in
which
this
workload
is
running.
Without
the
namespace
there
would
be
conflict
right.
Let's say you have two pods of the same name in two virtual namespaces, as here. We are mapping these pods into a single namespace in the underlying host cluster, and that wouldn't work, because the names would conflict in the underlying namespace. So the namespace in which the workload is running is suffixed. Also, we can have multiple vclusters inside the same namespace, so to avoid conflicts the vcluster name is also added.
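The renaming scheme described above can be illustrated with a small sketch: the synced pod name combines the original pod name, its virtual namespace, and the vcluster name. The "-x-" separator matches what recent vcluster versions use, but treat the exact format as an implementation detail of the syncer.

```shell
# Sketch of the syncer's pod-name translation (separator format assumed)
translate_pod_name() {
  local pod="$1" vnamespace="$2" vcluster="$3"
  echo "${pod}-x-${vnamespace}-x-${vcluster}"
}

translate_pod_name "sample-abc123" "ns1" "vc1"
# → sample-abc123-x-ns1-x-vc1
```

Two pods named sample-abc123 in different virtual namespaces, or in different vclusters sharing one host namespace, thus land in the host namespace under distinct names.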
So that's what we saw in the demo. Now, let's see how different Kubernetes resources are handled inside vcluster. All the higher-level objects, the things that create pods, like Deployments and StatefulSets, stay inside the control plane of the vcluster; even CRDs stay inside the vcluster. This is what makes the vcluster really virtual and gives the feel of running our own separate Kubernetes cluster, and these higher-level Kubernetes objects never reach the API server of the host cluster.
So the basic design goal here is to reduce the requests on the host cluster. Now, coming to the lower-level objects, the syncer plays a key role in syncing them to the host cluster, and the syncer syncs only a few things, mainly the pods and the other resources pods need to start. The basic design goal here is to avoid performance degradation: workloads running inside a vcluster will run, or should run, with the same performance as workloads running directly on the underlying host cluster.
Now, let's shift our focus to the backbone of any system, networking, and understand how it is handled in vcluster. As we saw, the syncer component in the vcluster syncs the pods to the underlying host cluster, so regular pod-to-pod communication works out of the box, just like regular networking, and pods can communicate with each other via IP-based networking. The syncer also synchronizes services and rewires them to the right pods.
What it rewires is this: as we saw initially, there was only one pod running inside the kube-system namespace of the vcluster, CoreDNS. By default, vcluster has a CoreDNS deployment in its own kube-system namespace. This means that when the syncer synchronizes a pod down to the host cluster, it updates the pod spec and points it to the vcluster's DNS service instead of the host cluster's cluster-wide DNS service, and then name resolution works as normal.
So cluster-internal DNS just works as expected inside the vcluster. That also means that you cannot directly access the host cluster's services from inside the vcluster, which is a kind of security that comes built into vcluster. Okay, now, what is the meaning of nodes from the vcluster perspective? In the case of nodes, we need to understand that the syncer syncs in both directions.
As you can see here, the host cluster has three worker nodes, but the vcluster shows only two worker nodes, because our workloads are running only on those two nodes. Also, we can see it doesn't show the real node details; it actually shows fake node details, which means that when vcluster copies the nodes from the host cluster to the vcluster, it renames them and obfuscates certain things. If a user doesn't want the fake nodes and wants to see the real nodes, they still have the option to install vcluster with certain configurations so that they can see the real node information in the vcluster itself. Now, let's see the comparison between namespace, cluster, and vcluster: giving each team a cluster is going to be expensive, and giving them a namespace might be too restrictive. How about something in between? Maybe creating a vcluster is the way to go. I hope you all agree, based on what we have seen so far with respect to vcluster.
Now, let's see the use cases where vcluster could be used. Lots of people use vcluster for more ephemeral activities. One is development environments, where developers have full freedom of choice. For example, if a developer wants to run a particular version of a service mesh, it is completely up to them, because it is going to be scoped only to their namespace; it is not going to impact other people, and it gives them a sort of little sandbox of their own.
Another is CI/CD: we spin up vclusters, and this gives a lot of flexibility to test the product on multiple Kubernetes versions. It also helps us to test variants of the product installation, with or without optional components, and all these permutations and combinations can be run in parallel, using vcluster in the CI/CD pipeline itself. That saves a lot of time and effort and brings more confidence to the product development team when they ship the product to the end users.
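As a sketch of that CI/CD use case, virtual clusters for several Kubernetes versions can be created in parallel. The --kubernetes-version flag, the version numbers, and the naming below are assumptions (older vcluster CLI releases select the Kubernetes version through Helm chart values instead), so check your CLI version's options:

```shell
# Spin up one vcluster per Kubernetes version under test, in parallel
for version in 1.27 1.28 1.29; do
  vcluster create "ci-${version//./-}" -n ci --kubernetes-version "v${version}" &
done
wait   # virtual clusters come up in parallel, in seconds rather than minutes

# ...run the system tests against each vcluster's kubeconfig, then clean up:
for version in 1.27 1.28 1.29; do
  vcluster delete "ci-${version//./-}" -n ci
done
```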
Now, as a user, if you are interested in exploring vcluster further, you can refer to the vcluster website and its documentation, which have more details on the architecture and the syncer part. If you want to raise any issues, or if you would like to contribute, you can always do that on the GitHub page, and if you need any clarification, you can always post in their Slack channel.