From YouTube: Knative August Meetup / Demo: Optimizing vcluster for Multi-tenant Knative by Ishan Khare
Description
A demo and discussion of how vcluster is being optimized for multi-tenant Knative scenarios.
A: Okay, everybody. I'm Ishan, and I'm basically a software engineer with Loft Labs.
A: We are the maintainers and developers of the vcluster project, and today we are going to talk a little bit about the vcluster project itself: what exactly vcluster is, how it is implemented under the hood, how things work, and also, in terms of Knative, how we have optimized the vcluster project with something that we are calling vcluster plugins, built using the vcluster SDK.
A: It's going to help the Knative community, I would say, in scenarios where we have multiple vclusters for multiple tenants, each tenant having their own Knative installation.
A: I'll start with the slides now. So basically, if we talk about vcluster: every virtual cluster is going to have its own API server. And in terms of which API server we are talking about, we currently have four officially supported API servers.
A: One is k3s, which is the API server applied by default; the other flavors we have are vanilla k8s, k0s, and EKS.
A: When it comes to the kind of advantages a vcluster has: it is better isolated than your regular namespaces, and it's still cheaper than, you know, creating real Kubernetes clusters for everything. So it sits somewhere in between namespaces and fully dedicated Kubernetes clusters.
A: Here's a comparison chart, where vcluster strikes the right balance between a separate namespace for every tenant versus a separate cluster for every tenant. As we can see, when it comes to isolation, we have pretty strong isolation between the vcluster and the host.
A: So yeah, I think what you're trying to ask is about isolation: if the attack is on the host cluster's API server, then is any protection provided by vcluster? Not in that case, but yeah, every vcluster... okay.
A: Okay, so: attacks on the node kernel and the node kubelet, right. I think not really, because when it comes to vcluster, we are not directly dealing with managing the nodes themselves. We basically fake the nodes, the actual host nodes, and sync them up into the vcluster. So I'm not sure that vcluster would directly provide any safeguarding features for the nodes.
B: It's cheap and easy, with very low overhead. So, anything else?
A: Typically, inside the vcluster you would add some more namespaces, depending on the different software you deploy.
A: That is followed by selecting a particular namespace for actually deploying our workloads. And once you create a deployment, what basically happens is that the actual pods created by the deployment never run inside the vcluster. Instead, what you see here (pod 1, pod 2 and pod 3) are basically fake pods; the real pods run in the host namespace, on the host cluster, and they are constantly kept in sync with the fake pods that you see inside the vcluster.
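What keeps a fake pod and its real host pod paired up is a deterministic, collision-free host-side name for every synced object. A minimal sketch of that kind of name translation in Go; the "-x-" separator and exact layout are illustrative assumptions, not necessarily the precise scheme vcluster ships:

    package main

    import "fmt"

    // physicalName mangles a virtual object's name so that objects coming from
    // different vclusters (and different virtual namespaces) cannot collide
    // inside the single host namespace they are all synced into.
    func physicalName(vName, vNamespace, vclusterName string) string {
        return fmt.Sprintf("%s-x-%s-x-%s", vName, vNamespace, vclusterName)
    }

    func main() {
        // Pod "web-0" in virtual namespace "default" of vcluster "vcluster-1"
        // shows up in the host namespace under a mangled name:
        fmt.Println(physicalName("web-0", "default", "vcluster-1"))
        // prints: web-0-x-default-x-vcluster-1
    }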
B: So that's the flow of any vcluster workload deployment.
A: These are basically not the exact resources, but these are the kinds of resources that are created for every vcluster instance whenever you create a vcluster using the vcluster CLI or the Helm chart: the pod in the host cluster inside the namespace, so you'd have isolation with RBAC, service accounts, and the network policy of the vcluster.
A: We can take a look at it in a bit more detail in the following slides. And recently, we've also added some config maps that allow you to initialize a vcluster with particular charts deployed, or manifests applied.
B: Moving on. So this shows the syncing of the core resources of Kubernetes, between the host resources and the virtual resources inside the virtual cluster. So, yeah.
A: In terms of what the typical deployment of a multi-tenant vcluster would look like, with each tenant having a Knative installation inside their vcluster: typically, I've just named them vcluster 1, 2 and 3, and for simplicity's sake even the namespaces have the same names as the vclusters. So basically, these three vclusters exist inside three namespaces of the same name, and this dotted boundary serves as, I would say, the separation line between the host cluster and what's in the virtual world. So in this case we have Knative deployed inside every vcluster, right.
A: So, as you can see, we have a Knative component showing up here, and what's going to happen is, at the end of the day, everything is going to run on the host cluster itself. Hence you have these Knative components, which basically run in the knative-serving namespace: once you install Knative, you have the knative-serving namespace, and all these components run as pods inside that namespace. So I'm just assuming the same thing, but inside the vcluster itself: inside the vcluster you would have a namespace called knative-serving, and then these are synced down to the host cluster. But because you are installing Knative in every vcluster, you are going to end up with a full Knative installation for every single vcluster.
B: So with the general way of doing it, every vcluster's workload will have some duplication?
A: Sure.
B: Okay, so...
A: Yeah, so the same thing happens with the services. I've not shown the services here, but that would be a very similar case.

A: The idea is that we just have one cluster-wide installation of Knative in the host cluster, as you can see here, which would be the knative-serving namespace; all the components of Knative are installed here, and they reside here. For every vcluster, whenever you create a Knative service, the cluster-wide Knative controller should be able to pick this Knative service up and act on it just like it was acting in the previous slide, but now there is one central component that does the heavy lifting for us, and the pods and other core resources are automatically synced by the core syncer of vcluster. So this is typically the goal that we have.
A: Here are the details, basically a rough architecture of how this is implemented, to some extent, and we'll also go through a basic demo.
A: Typically, we have the plugin plus the syncer here. In the previous slides we just had the syncer, which was already syncing the core resources like pods and services for us; over here, we also have another component called the plugin. So the StatefulSet that I showed you in the previous slide had two containers: one was the API server itself, and the second was a sidecar container, which was the syncer. Now, in this case, the plugin carries the domain-specific syncing information: for a particular plugin, let's say Knative or Tekton or whatever, how and which resources need to be synced up and down.
B: Is my screen able to show the question? ...Okay, I'm not sure, but...
A: So, in the previous slide, if I just go back: in this case, what I'm trying to showcase is that for every API server you have an installation of Knative. So yes, the Knative serving controller is running against each of the API servers inside these vclusters.
A: It runs against the virtual API server of each of the virtual clusters. But in this diagram, what we want to implement is the reverse: the Knative controller runs against the host cluster's API server, and the Knative services which are created inside the vclusters are synced down, but synced down in a way that they are still isolated in their own namespaces, so they don't interfere.
B: ...you would start seeing the workloads. Another question that someone has raised: it looks like in this configuration you can see Knative Service, Config, Revision...
A: Talking about this diagram: yes, as I said, this is not the complete thing over here, and this whole plugin is still a work in progress. The problem here is, what happens if I sync these pods back up as well?
A: Okay, yeah, you're right that in this case you would not be able to see the pods in the vcluster right now. Because if I synced the pods back up, they would clash with the core syncer that vcluster has, which would try to sync them back down again to the host cluster, so there would be kind of a race condition right now. So the first goal is to sync the Knative-specific resources, and then we might, you know, find a way around marking things like deployments and pods so that they are bypassed by the core syncer and only synced up by the plugin, so that there's no race condition between the plugin and the core syncer for the core resources.
B: Okay, so...
A: Yeah, so I'll just explain a little bit about this design, as I was explaining the plugin-and-syncer thing. As you've already understood, the syncer is a core component of vcluster, along with the API server, and the plugin forms the third sidecar of the StatefulSet that I showed. The reason I have put this whole box at the boundary of the vcluster and the host cluster is that the plugin and the syncer have the Kubernetes context of both the vcluster API server and the host cluster's API server, so they can talk to both API servers. That's why it's sitting on the boundary of this whole division between the virtual world and the host world. And the typical flow is: wherever you see the red arrows is where the plugin steps in, and the other arrows are where the core syncer acts.
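In code, sitting on the boundary simply means the process is constructed with two kubeconfigs, one per API server. A minimal sketch, assuming kubeconfig file paths and a hypothetical Plugin struct rather than the real syncer's wiring:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // Plugin sits on the boundary: one client for the virtual API server and
    // one for the host API server, so it can watch objects on either side and
    // reconcile them into the other.
    type Plugin struct {
        Virtual client.Client // the vcluster's (e.g. k3s) API server
        Host    client.Client // the host cluster's API server
    }

    func newPlugin(vKubeconfig, hKubeconfig string) (*Plugin, error) {
        vCfg, err := clientcmd.BuildConfigFromFlags("", vKubeconfig)
        if err != nil {
            return nil, fmt.Errorf("virtual config: %w", err)
        }
        hCfg, err := clientcmd.BuildConfigFromFlags("", hKubeconfig)
        if err != nil {
            return nil, fmt.Errorf("host config: %w", err)
        }
        vc, err := client.New(vCfg, client.Options{})
        if err != nil {
            return nil, err
        }
        hc, err := client.New(hCfg, client.Options{})
        if err != nil {
            return nil, err
        }
        return &Plugin{Virtual: vc, Host: hc}, nil
    }

    func main() {
        // The paths are assumptions for illustration.
        p, err := newPlugin("./vcluster.kubeconfig", "./host.kubeconfig")
        if err != nil {
            panic(err)
        }
        fmt.Println("plugin can talk to both API servers:", p.Virtual != nil, p.Host != nil)
    }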
A: So let's walk through this flow: say a Knative service is created by a vcluster user. First of all, the plugin will pick up CRDs from the Knative installation on the host cluster and make certain CRDs available inside the vcluster. You can always specify which specific CRDs you want; right now we only have support for the Knative Service, Configuration and Revision.
A: We may add the Route spec as the next step, but right now we do the syncing of these three. This allows you to run commands like kubectl explain ksvc in the vcluster without any installation of Knative inside the vcluster. So the vcluster installation on your host cluster is enough for you to get or create a Knative Service, Config or Revision directly on the vcluster.
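One way such CRD syncing could be implemented is to read each CRD from the host and recreate it in the virtual API server. The three names below are the actual Knative serving CRDs being discussed; the copying code itself is a sketch, not the plugin's real implementation:

    package main

    import (
        "context"
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/rest"
    )

    // The three Knative CRDs the talk says are synced today.
    var knativeCRDs = []string{
        "services.serving.knative.dev",
        "configurations.serving.knative.dev",
        "revisions.serving.knative.dev",
    }

    // copyCRD reads one CustomResourceDefinition from the host API server and
    // recreates a clean copy inside the virtual API server, so commands like
    // "kubectl explain ksvc" work in the vcluster without Knative installed.
    func copyCRD(ctx context.Context, host, virtual *rest.Config, name string) error {
        hostCS, err := apiextclient.NewForConfig(host)
        if err != nil {
            return err
        }
        virtCS, err := apiextclient.NewForConfig(virtual)
        if err != nil {
            return err
        }
        crd, err := hostCS.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        // Strip server-populated metadata before creating the copy.
        clone := &apiextv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: crd.Name},
            Spec:       crd.Spec,
        }
        _, err = virtCS.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, clone, metav1.CreateOptions{})
        return err
    }

    func main() {
        for _, name := range knativeCRDs {
            fmt.Println("would sync CRD:", name)
        }
    }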
A: So you start with a simple Knative service, and as soon as you create it, we have the plugin. In terms of code, coming out of the architecture, we have something called the vcluster SDK, which allows you to implement these things in a relatively easier way and plug in directly with the core syncer.
A: So basically, you have functions like sync-down, which get called: every time a new KService, for example, is created, you get a call to your sync-down function, and you basically compare the virtual object and the physical object and see which fields need to be synced, which fields don't, or which fields don't exist at all. So in the first instance, you will have the virtual object exactly as you defined your Knative service, but the physical object passed to your sync-down function as a parameter will be an empty object.
A: Basically, it's a typical reconcile kind of step: you translate the virtual Knative service into the physical service and create it on the host.
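A sketch of what that first reconcile could look like for a Knative service, reusing the hypothetical Plugin shape from the earlier sketch; the label key and name scheme are illustrative, not the exact ones vcluster uses:

    package syncer

    import (
        "context"
        "fmt"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // Plugin as in the earlier sketch: one client per API server, plus the
    // vcluster's name, which is also the host namespace it lives in.
    type Plugin struct {
        Virtual client.Client
        Host    client.Client
        Name    string
    }

    // SyncDown covers the first reconcile: the virtual KService exists, the
    // physical one does not, so translate metadata and create the host copy.
    func (p *Plugin) SyncDown(ctx context.Context, vObj *unstructured.Unstructured) error {
        pObj := vObj.DeepCopy()
        // Mangle name and namespace so copies from different vclusters and
        // virtual namespaces never collide on the host.
        pObj.SetName(fmt.Sprintf("%s-x-%s-x-%s", vObj.GetName(), vObj.GetNamespace(), p.Name))
        pObj.SetNamespace(p.Name)
        // Ownership label (key chosen for illustration) so the plugin can
        // later recognize host objects that this vcluster manages.
        pObj.SetLabels(map[string]string{"vcluster.loft.sh/managed-by": p.Name})
        // Never copy server-populated fields onto a fresh object.
        pObj.SetResourceVersion("")
        pObj.SetUID("")
        unstructured.RemoveNestedField(pObj.Object, "status")
        return p.Host.Create(ctx, pObj)
    }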
A: The plugin applies some helper labels to this Knative service so that we are able to identify a Knative service that is being managed by a vcluster, versus something that was created directly on the host (which might have the same name), or maybe even created by another vcluster that runs on the same host cluster. That kind of isolation and identification is done through a combination of some name mangling and some labels that are applied.
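And the reverse direction: deciding, for an object seen on the host, whether it belongs to this vcluster at all, and recovering its virtual name. This reuses the same illustrative label key and "-x-" scheme as above:

    package syncer

    import (
        "strings"

        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // isManagedBy reports whether a host object was created by this vcluster's
    // plugin, using the ownership label applied at sync-down time.
    func isManagedBy(pObj client.Object, vclusterName string) bool {
        return pObj.GetLabels()["vcluster.loft.sh/managed-by"] == vclusterName
    }

    // virtualName undoes the "<name>-x-<namespace>-x-<vcluster>" mangling.
    // (Assumes "-x-" never appears inside the original names; a real
    // implementation needs a more robust encoding.)
    func virtualName(pName string) (name, namespace string, ok bool) {
        parts := strings.Split(pName, "-x-")
        if len(parts) != 3 {
            return "", "", false
        }
        return parts[0], parts[1], true
    }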
A: As soon as this lands in the host cluster world, where we already have Knative installed, in the knative-serving namespace...
A: ...the Knative serving controller will act on it and start creating the Configurations and Revisions. And since the plugin has multiple controllers registered, for all three CRDs, we will start getting similar events in our syncers for Config and Revisions. When I say syncers: they are very much similar to the core syncers.
A: It's just that we have a core syncer as part of the vcluster core itself, for example for pods and services, whereas the plugin has a dedicated syncer that we implement outside of the vcluster core, as a plugin. So the syncer for the Config CRD would get called with the virtual object being nil and the physical object being the Configuration object; just the opposite is going to happen here, and based on which kind of pair you get, the corresponding sync method is invoked.
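So each registered syncer is effectively handed a (virtual, physical) pair, and which side is nil tells it the direction. A small self-contained sketch of that dispatch; the hook names follow the talk's wording, not necessarily the vcluster SDK's exact interface:

    package syncer

    import (
        "context"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    )

    // Hooks groups the callbacks a resource syncer implements.
    type Hooks struct {
        SyncDown func(ctx context.Context, vObj *unstructured.Unstructured) error
        SyncUp   func(ctx context.Context, pObj *unstructured.Unstructured) error
        Sync     func(ctx context.Context, vObj, pObj *unstructured.Unstructured) error
    }

    // Dispatch: a KService created in the vcluster arrives as (vObj, nil) and
    // is synced down; a Configuration created by Knative on the host arrives
    // as (nil, pObj) and is synced up; when both exist, it is an update.
    func (h Hooks) Dispatch(ctx context.Context, vObj, pObj *unstructured.Unstructured) error {
        switch {
        case vObj != nil && pObj == nil:
            return h.SyncDown(ctx, vObj)
        case vObj == nil && pObj != nil:
            return h.SyncUp(ctx, pObj)
        default:
            return h.Sync(ctx, vObj, pObj)
        }
    }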
A: For the KService we saw the sync-down method: as soon as you create the KService, the Knative controllers will start acting on it, and a lot of fields in this KService will be changed or modified. In those cases we have both the objects: you have the virtual object, and we have the physical object, since both exist.
A: I have not shown the sync method in the diagram, but the sync method is where you have both the objects and you implement the logic to calculate which field got updated and which should be replicated up or down: which values should take precedence, and in which direction the change should flow. So it's more like an update kind of function, where you decide and you write the logic for copying fields.
A: For example, the status in this case will always come from the physical object, and you never try to replicate the status from the virtual object. So whenever there is an update to the status field, you always copy it from the physical to the virtual. Whereas when there's, let's say, a difference in the spec field, precedence is always given to the virtual object, because you may have changed the image tag in the virtual object and you now want that to be reflected in the KService.
A: So in both cases the sync function will be called, but the logic of which fields take precedence, and in which direction, is written by the plugin implementer.
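A sketch of that update hook with exactly the precedence just described: status flows from physical to virtual, spec flows from virtual to physical. A real implementation would diff the fields first to avoid update loops; this only shows the direction of each copy:

    package syncer

    import (
        "context"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // SyncUpdate is called when both objects exist. Per the talk: status is
    // always copied from the physical object up to the virtual one, and spec
    // is always copied from the virtual object down to the physical one.
    func SyncUpdate(ctx context.Context, virtual, host client.Client, vObj, pObj *unstructured.Unstructured) error {
        // status: physical -> virtual (never the other way around).
        if status, found, err := unstructured.NestedMap(pObj.Object, "status"); err == nil && found {
            if err := unstructured.SetNestedMap(vObj.Object, status, "status"); err != nil {
                return err
            }
            if err := virtual.Status().Update(ctx, vObj); err != nil {
                return err
            }
        }
        // spec: virtual -> physical (e.g. a changed image tag in the vcluster
        // must reach the host KService).
        if spec, found, err := unstructured.NestedMap(vObj.Object, "spec"); err == nil && found {
            if err := unstructured.SetNestedMap(pObj.Object, spec, "spec"); err != nil {
                return err
            }
            if err := host.Update(ctx, pObj); err != nil {
                return err
            }
        }
        return nil
    }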
A: In the case of the Config, it's going to be just the reverse: we don't have the virtual object initially, and we have the physical object.
A: We have a separate indexer, which is needed because the kind of ownership labels that we apply to the Knative service when we initially sync it with the plugin are not copied over to the Revisions. The keys, for example the managed-by label, are there, and they are copied over by the Knative controllers in the case of the Config; but whenever they create the Revision object, those labels are never copied, and hence we have to maintain a separate indexer.
A: This indexer allows the plugin to keep track of which Revisions are owned by which particular Config. When you start imagining that you have multiple configurations, multiple vclusters, each having their own plugin and indexers, this indexer acts as a helper so that whenever a Revision event comes in, you can tell whether it is being managed by a Configuration which is being watched by this plugin, or whether it is a Revision that is not the concern of this particular vcluster.
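One way to realize that indexer is with controller-runtime's field indexer, keyed on the owning Configuration taken from ownerReferences. The Knative kinds are real; the index key name is made up for the sketch:

    package syncer

    import (
        "context"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "sigs.k8s.io/controller-runtime/pkg/client"
        "sigs.k8s.io/controller-runtime/pkg/manager"
    )

    const ownerConfigIndex = "ownerConfiguration" // index key, made up for the sketch

    // RegisterRevisionIndex indexes host-side Revisions by the name of the
    // Configuration that owns them (read from ownerReferences), because the
    // ownership labels applied at sync-down time never reach the Revisions.
    func RegisterRevisionIndex(ctx context.Context, mgr manager.Manager) error {
        rev := &unstructured.Unstructured{}
        rev.SetGroupVersionKind(schema.GroupVersionKind{
            Group: "serving.knative.dev", Version: "v1", Kind: "Revision",
        })
        return mgr.GetFieldIndexer().IndexField(ctx, rev, ownerConfigIndex, func(obj client.Object) []string {
            for _, ref := range obj.GetOwnerReferences() {
                if ref.Kind == "Configuration" {
                    return []string{ref.Name}
                }
            }
            return nil
        })
    }

    // revisionsOwnedBy returns only the Revisions owned by one (mangled)
    // Configuration, i.e. the ones belonging to one particular vcluster.
    func revisionsOwnedBy(ctx context.Context, c client.Client, configName string) (*unstructured.UnstructuredList, error) {
        list := &unstructured.UnstructuredList{}
        list.SetGroupVersionKind(schema.GroupVersionKind{
            Group: "serving.knative.dev", Version: "v1", Kind: "RevisionList",
        })
        err := c.List(ctx, list, client.MatchingFields{ownerConfigIndex: configName})
        return list, err
    }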
A: You know which vcluster a particular Revision belongs to, and whether it should be synced with this particular one or some other one. So this is basically where the indexer comes in and helps us resolve only the Revisions that belong to this particular vcluster, and it also helps us translate back the name when it comes to ownership: which Revision is being controlled by which Configuration.
A: That's where the indexer comes in. So we do copy the owner references, and we try to update the owner references once the object is synced up.
A: The object name gets translated, and the physical name and the virtual name are translated accordingly; the owner references for the virtual Revision versus the physical Revision will also be different, because the Configuration name here is a little bit mangled versus the actual Configuration name. So in the same way, we have to apply that mangling and unmangling of names for the virtual objects as well.
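A sketch of that owner-reference fix-up for a Revision being synced up into the vcluster; the mapping from mangled host names back to virtual names is assumed to be supplied by the indexer and name translator described above:

    package syncer

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // translateOwnerRefs rewrites a synced-up Revision's ownerReferences:
    // on the host they point at the mangled Configuration name, but inside
    // the vcluster they must point at the original virtual name. (A real
    // implementation also has to fix up the owner's UID, which differs
    // between the two API servers.)
    func translateOwnerRefs(refs []metav1.OwnerReference, mangledToVirtual map[string]string) []metav1.OwnerReference {
        out := make([]metav1.OwnerReference, 0, len(refs))
        for _, ref := range refs {
            if vName, ok := mangledToVirtual[ref.Name]; ok && ref.Kind == "Configuration" {
                ref.Name = vName
            }
            out = append(out, ref)
        }
        return out
    }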
A: But yes, all of that is part of this metadata translation, to and fro between the virtual and the physical object, and that's where this whole indexer thing comes into play in this whole flow.
B: I think this should probably be it, and we can...
A: Jump to the demo, I think.
B: If we have more questions, or would we...
A: It's taking a little bit longer, because I think the screen sharing is on.
A: We have the pods running here, as we talked about. So basically the Knative plugin itself is running as a sidecar; we have the syncer, which is the core syncer of the vcluster; and we have Rancher's k3s as the API server running.
B: Okay, we have one more question that can be answered. Oh, okay.
A: So basically, we have the StatefulSet here, and you can see these are the logs from the plugin.
A: These are the CRDs, which are already being synced into the virtual cluster. So, as you can see, I just created the virtual cluster right now and I did not install Knative into it, but we still have the Knative Service CRD available.
A: As you can see, and I can show you: kubectl explain ksvc. See, so we have the Knative Service type inside the vcluster.
A: So I'm just going to create this very basic Knative service here. It will be created in the context of the virtual cluster, and we should see it getting synced to the host and all the updates applied.
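The kind of minimal Knative service being created in the demo can also be expressed programmatically. A sketch using controller-runtime's unstructured client; the kubeconfig path, names and sample image are placeholders:

    package main

    import (
        "context"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/client-go/tools/clientcmd"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    func main() {
        // Point the client at the *virtual* cluster's kubeconfig (path assumed),
        // so the Service is created inside the vcluster and synced down from there.
        cfg, err := clientcmd.BuildConfigFromFlags("", "./vcluster.kubeconfig")
        if err != nil {
            panic(err)
        }
        c, err := client.New(cfg, client.Options{})
        if err != nil {
            panic(err)
        }

        // A minimal Knative Service, equivalent to the YAML applied in the demo.
        ksvc := &unstructured.Unstructured{Object: map[string]interface{}{
            "apiVersion": "serving.knative.dev/v1",
            "kind":       "Service",
            "metadata": map[string]interface{}{
                "name":      "hello",
                "namespace": "default",
            },
            "spec": map[string]interface{}{
                "template": map[string]interface{}{
                    "spec": map[string]interface{}{
                        "containers": []interface{}{
                            map[string]interface{}{
                                "image": "gcr.io/knative-samples/helloworld-go",
                            },
                        },
                    },
                },
            },
        }}
        if err := c.Create(context.Background(), ksvc); err != nil {
            panic(err)
        }
    }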
A: We have it on the left, on the Docker Desktop cluster, and there were also some updates. As you can see, sync was called for this particular service: we had an update, and multiple calls to sync as and when the actual controllers acted on the physical service and updated various fields and statuses. We got the sync method called, and they were reflected into the virtual object.
A: We have some labels applied, the object name, and... sorry, the labels are here: managed-by, along with the namespace.
A: That's why we have to have those indexers, the helper indexes which help us identify which Revisions are owned by this Knative vcluster versus other vclusters, and things like that. But basically, as you can see, we have the same name, and the status fields are also reflected between the virtual and the physical objects.
A: I am not sure if Magic DNS is working on my system right now; let's try and see. No, it's not working very well, so probably some issue over there, yeah.
A: It's not resolving properly with sslip.io; instead it's...
A: We have elaborate tests for Knative services over here, and we are also implementing tests for the remaining resources.
A: I think that was it for the demo part.