From YouTube: 20200421 Kubernetes Working Group for Multi-tenancy
Description
In-depth demo and Q&A about the Virtual Cluster Project, and quick status updates from the Hierarchical Namespace Controller team, and the Secure Benchmarking team.
B
Okay, hi everybody, and welcome to our regularly scheduled Multi-Tenancy Working Group meeting for Kubernetes. Today, Fei is going to be presenting on the VirtualCluster project, which is an open source project that you can check out on the multi-tenancy GitHub page, part of the kubernetes-sigs organization on GitHub. He'll be giving us an update and a demo with his coworker.
C
We are going to give you guys a detailed demo of what we have already done. I'm assuming most people know the concept of VirtualCluster, so I won't spend too much time on the idea itself; I'll just give a few sentences about the architecture to refresh your memory. Today's presentation is more about the concrete things we have implemented in this project.
The reason we came up with VirtualCluster is that some tenants need hard multi-tenancy in Kubernetes. We solve this problem by providing a dedicated control plane for each tenant, unlike other solutions such as Virtual Kubelet, where a virtual kubelet connects you to another backend cluster.
So essentially, you can treat a virtual cluster as an extension of the current namespace-based isolation mechanism: we basically provide each tenant a cluster view. Each tenant has a dedicated control plane, which we call the tenant master, and we developed a syncer, a controller that synchronizes the objects needed for pod provisioning from the tenant master to the super master, which is the actual Kubernetes cluster that manages the physical resources like nodes.
The high-level idea is that, from the tenant's perspective, they probably don't even know that they are running at this kind of hierarchical level; they just assume they have their own dedicated Kubernetes. They don't know about the existence of the super master at all. That is our ideal case. Now, in this project there are three major components that we have been building.
Also, for the tenants, we don't install a scheduler: in this model the actual scheduling is done in the super master, so the tenant master doesn't need one. The first component is a controller that manages the lifecycle of all the tenant master components. The second one is the syncer, the centralized controller that populates the API objects needed for pod provisioning from the tenant master to the super master.
It is also in charge of updating the objects that come back from the super master to the tenant master, so the tenants can see the overall progress of pod creation. They can only view the status of the objects in the tenant master, so they get an idea of what is happening in the super master indirectly. The syncer also needs to keep the synced objects up to date and keep the state between the tenant master and the super master consistent, because we need to take care of all the corner cases: the syncer restarting after a crash, the tenant master restarting after a crash, and so on. In all these corner cases the syncer has to make sure the data is still consistent.
The third component, which we call the vn-agent, is a node daemon that simply proxies all the tenant kubelet API calls to the actual kubelet process running on the node. Because in our model a tenant is only supposed to query the pods that belong to itself, the vn-agent needs to ensure that if there is a kubelet request, say a pod log request, the target pod actually belongs to that tenant. So this provides another level of isolation. It also works around the problem that the kubelet can only register itself to one master.
C
They
cannot
serve
itself
magical
current
master
at
all,
so
so
the
so.
This
is
the
page,
nothing
you
know,
github
page,
so
at
least
stop
everything.
Today,
I'm
going
to
go
through
most
of
kana
in
this
major
with
me
on
this.
There
is
a
short
demo
here,
but
this
is
a
one.
We
are
not
only
do
much
more
than
this
short
demo,
but
this
is
a
good
story
for
very
simple
in
no
idea
what
what
do
we
have
in
human
didn't?
C
We
I
also
end
in
an
instruction
which
is
a
more
details,
kind
of
very
detailed
steps.
I'm
not
going
to
go
over
all
this
because
trows
demo,
we
repeat
most
of
the
steps
that
I
have
mentioned
in
this:
no
instruction,
okay,
so
next
I,
don't
you
briefly
talk
about
from
you
know,
uses
or
the
actual
user
of
perspective?
What
are
the
things
that
are?
We
support
it
and
I
would
have
not
support
it,
because
so
what
we
do
that
is
we
trying
to
you
know
making
a
kubernetes
api
commemorate.
C
He
retain
the
kubernetes
api
compatibility
as
much
as
possible
on.
Indeed,
we
have
tried
there,
which
we
have
tried
around
the
motor
caster
against
the
kubernetes
conformance
test.
Most
of
the
tests.
Actually
pass,
and
there
was
one
failure,
because
it
is
it's
kind
of
hard
to
support
the
suffered,
or
maybe
I
if
somebody
knows
what
it
is,
because
you
cannot
even
force
that
the
super
master
antenna
must
have
had
the
same
system
of
the
main.
So that is the one that fails. Beyond the Kubernetes conformance tests, which to me still leave some things out of scope, there are some other parts we want to tell users about, things that the virtual cluster cannot support. For example, the virtual cluster follows a serverless design pattern, which means we don't want to expose the entire super master node topology to the tenant master.
Let's say the super master has a thousand nodes; in our model the tenant master won't show all those nodes. It only shows a node once a pod is actually running on, that is, bound to, that particular node in the super master. That being said, if there are no pods running in the tenant master, no node will be shown at all.
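To make the on-demand node view concrete, here is a minimal sketch of how you could observe it from both sides; the kubeconfig file name is illustrative, borrowed from the demo later in this session.

```sh
# Super cluster: all physical nodes are visible.
kubectl get nodes

# Tenant view: before any tenant pod is scheduled, no nodes are listed; once
# a pod lands on a node, a virtual node representing it appears.
kubectl --kubeconfig vc-kubeconfig get nodes
```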
That is exactly the main motivation of this project: we want the super master nodes to be shared by all the tenants, while from a tenant's perspective they only see a virtual node where their own pods actually run. There will be two nodes in the demo, and I will give you more details about what this means there. On the other hand, this also means the virtual cluster doesn't support the daemonsets
that a tenant would like to run in the tenant master, because a daemonset assumes it can run a daemon on all the nodes. That would introduce a lot of harm, so we won't support it. In other words, the syncer controller will reject a newly created pod if the node name has already been set in its spec; we simply reject it at this point.
Another thing: the syncer does not update the node lease objects in the tenant master, and nowadays the kubelet mostly relies on the node lease for its heartbeat updates. Instead, the syncer updates the virtual node object in the tenant master with status updates, but at a much slower pace compared to the node lease updates, which happen maybe every 10 seconds.
So if we don't increase the grace period, that is, the node monitor grace period, sometimes the tenant's node controller may report the node status as unhealthy, and in the worst case pod eviction can really kick in and introduce some chaos. Our recommendation is to increase this node monitor grace period value, say to 60 seconds, so the tenant's node controller won't do anything drastic.
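As a rough sketch of that recommendation: the flag below is the standard kube-controller-manager option, but the 60s value is only the ballpark mentioned here, not a tested default.

```yaml
# Fragment of the tenant master's controller manager manifest (illustrative):
# give the slowly-synced virtual nodes a longer window before they are
# considered unhealthy (the upstream default is 40s).
command:
- kube-controller-manager
- --node-monitor-grace-period=60s
```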
Okay. The next thing is that kube-dns is not tenant-aware, so if tenants want to use DNS in this model, they have to run their own DNS inside the tenant master. There is a kind of hard-coded contract here: the DNS service should be created in the kube-system namespace using the name kube-dns. The reason is that the syncer finds the tenant's kube-dns service by that name and updates the synced pods' DNS config with its cluster IP.
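A minimal way to check that contract from the tenant side (the kubeconfig name is illustrative):

```sh
# The syncer only recognizes the tenant DNS service if it is named kube-dns
# and lives in kube-system; it rewrites the synced pods' DNS settings to
# point at this service's cluster IP.
kubectl --kubeconfig vc-kubeconfig -n kube-system get svc kube-dns
```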
In a certain sense, you can treat the syncer as a very powerful and complicated webhook, because the spec of every pod actually provisioned in the super master is manipulated by the syncer. One caveat to make clear: if we want to support services, that is, make services work with DNS and make the service behavior look consistent from the tenant and super perspectives, then the service CIDR of the tenant master and the super master should be the same. I will spend more time later explaining why; this is just a heads up. The virtual cluster fully supports tenant service accounts: every service account created in the tenant master will actually take effect in the pod.
Another thing is that the virtual cluster doesn't support tenant persistent volumes. Which means, if you set up your own persistent volumes in the tenant master, or install your own customized PV controller in the tenant master, it wouldn't work at this point. Basically, our model is a serverless-style model: all the tenant does is create the PVC; you just describe what you want.
The actual provisioning of the persistent volumes should be done in the super cluster. That being said, all the PVs and storage classes are provided by the super master, and our syncer populates those objects back from the super master to the tenant master. So if a tenant creates a PV themselves, it will either be ignored or simply rejected by the syncer.
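Under this model, a tenant's whole interaction with storage is the claim. A minimal sketch (name and size illustrative):

```yaml
# Created in the tenant master; the matching PV is provisioned in the super
# cluster and only synced back to the tenant once the claim is being bound.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```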
The last point: for the tenant master and the super master, we recommend using the same Kubernetes version, to avoid any incompatible API behaviors. The current syncer controller is built on the Kubernetes 1.16 APIs; I think the latest major release is now 1.18, so some new APIs may not be supported, but that won't be a big problem. From time to time we are going to rebase our components onto the latest major release to catch up. As for the entire project, I currently think it is pretty mature, in the sense that we have done very intensive testing internally. I think it is ready for everybody to give it a try.
F
The demo will be done on minikube. The version of minikube I'm using is 1.5.2, and the corresponding Kubernetes version is 1.16. There is no specific reason for using this version; as Fei just mentioned, we have tried out virtual cluster on many major releases, from 1.12 to 1.16, and in the future we will try to catch up with the latest released version of Kubernetes.
Before I get started, I'd like to explain some terms and settings that can help you better follow this demo. There are three terms I'm going to use frequently. The first one is the super cluster, which is this minikube cluster, the base cluster that hosts all the virtual clusters. The second is the super master.
So, let's get started. In order to use the virtual cluster, we first need to deploy all the management components for it. The virtual cluster relies on two CRDs: the first one is the VirtualCluster CRD, and the other one is the ClusterVersion CRD. I will explain the contents of these two CRDs later; for now, let's just apply them. This project also relies on the Tenant and TenantNamespace CRDs.
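The setup steps so far, as a sketch; the file names below are placeholders standing in for the manifests in the repo, not its exact paths.

```sh
kubectl apply -f virtualcluster-crd.yaml   # VirtualCluster CRD
kubectl apply -f clusterversion-crd.yaml   # ClusterVersion CRD
kubectl apply -f tenant-crds.yaml          # Tenant / TenantNamespace CRDs
```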
Okay, so we have these four CRDs installed. Next we can start deploying the management components for the virtual cluster. For this demo we created an all-in-one yaml that deploys all the resources required by the management components; for the first release version we will provide a Helm chart, which is a work in progress.
So let's apply the yaml. As you can see, we create a bunch of resources. You can ignore the first several, because those are for the tenant controller. For the virtual cluster components, we first create a namespace, vc-manager, and then we create the roles and service accounts for each component. Then we set up the vc-manager and the vc-syncer with workload deployments, and we set up the vn-agent using a workload daemonset.
Let's check the status of these three components. Okay, as we can see, all of them are up and running. Among these three components, the vc-manager is responsible for provisioning the tenant master for a virtual cluster, and later, when we create a virtual cluster, we will want to see what happens inside the vc-manager. So let me just dump the log of the vc-manager.
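Roughly, the checks being run here look like this; the deployment name is assumed from the component names above.

```sh
kubectl get pods -n vc-manager                        # vc-manager, vc-syncer, vn-agent
kubectl logs -n vc-manager deployment/vc-manager -f   # watch provisioning activity
```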
As we can see on the right-hand side, we have a tenant admin namespace created, and later we will create a virtual cluster object under this namespace. Next we can start creating the ClusterVersion. To ease the creation process, we developed a command line tool which helps users create the ClusterVersion and the VirtualCluster and generates the kubeconfig file required to access the virtual cluster.
Let's create the ClusterVersion first. Okay, on the left-hand side we can see the vc-manager reports that a new ClusterVersion, cv-sample-np, is installed. So what's inside a ClusterVersion? Let's take a look. ClusterVersion is one of the two CRDs; it defines the version and the configuration of the tenant master components, which include etcd, the API server, and the controller manager. For now we still hard-code all these configurations and versions in this yaml.
But in the future we plan to cooperate with other working groups, like the Cluster Addons working group, to come up with some standard operator that can help us fetch the manifests from a remote git repository. Okay, so next we can start creating a virtual cluster. Same as before, we need to specify a yaml file, but this time we also need to set the name of the kubeconfig file we want to use for this virtual cluster.
Okay, the virtual cluster is being created; it takes some time, so meanwhile let's see what's inside the virtual cluster yaml. In it we define some basic information about the virtual cluster, like its name and namespace. Under the spec field, we define the cluster domain of the virtual cluster and the ClusterVersion required by this virtual cluster. Notice that if this ClusterVersion is not available on the super cluster, then the virtual cluster cannot be created. We also specify a PKI expiration date.
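A rough shape of the VirtualCluster object being described; the API group and field names here are assumptions reconstructed from the talk, a sketch rather than the authoritative schema.

```yaml
apiVersion: tenancy.x-k8s.io/v1alpha1   # assumed group/version
kind: VirtualCluster
metadata:
  name: vc-sample-1                     # illustrative name
  namespace: tenant1admin               # the tenant admin namespace created earlier
spec:
  clusterDomain: cluster.local          # DNS domain of the tenant master
  clusterVersionName: cv-sample-np      # must already exist on the super cluster
  pkiExpireDays: 365                    # illustrative PKI expiration
```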
Okay, as we can see on the left-hand side, the vc-manager is still deploying the components of the tenant master. It deploys these components in order: etcd, API server, and controller manager. Okay, the virtual cluster is now up and running, and on the right-hand side we can see the vc-manager telling us that etcd, the API server and the controller manager are all ready. We also generated a kubeconfig file.
Naming it this way is not necessary; I just don't want to keep typing the kubeconfig command line option. Okay, so next we want to check the status of this virtual cluster and make sure it works as expected. First we check the cluster information. As you can see, we have a Kubernetes master running at this address for the virtual cluster.
Let's check the NodePort service on the super cluster. As you can see on the left-hand side, we have a NodePort service named apiserver-svc, with node port number 31542. Okay, let's also check the namespaces on the virtual cluster: we have the four initial namespaces, default, kube-node-lease, kube-public and kube-system. Now let's check the namespaces on the super cluster: we have five more namespaces, and the first one is the root namespace of the virtual cluster.
So what's inside the root namespace? In the root namespace we have the three components of the tenant master running: the API server, the controller manager and etcd. And for each namespace on the virtual cluster, we create a corresponding namespace on the super cluster; the name of the corresponding namespace starts with the root namespace, followed by the target namespace. Okay.
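The mapping just described, sketched from both sides; the exact prefix format, separators and any random suffix may differ.

```sh
kubectl --kubeconfig vc-kubeconfig get ns   # default, kube-node-lease, kube-public, kube-system
kubectl get ns                              # plus the VC root namespace, and one
                                            # "<root-ns>-<namespace>" copy per tenant namespace
```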
So next I will set up CoreDNS on the virtual cluster. This is not strictly necessary, but in a later part of the demo I want to show how to access a pod in the cluster using DNS, so I set up this DNS server. Let's check the cluster information again. Okay, the DNS server is up and running inside the tenant master. So by now I have shown how to set up the management components for the virtual cluster, and I have created a real virtual cluster.
We can see that the two nginx pods are up and running on the super cluster. If everything works as expected, these two pods will show up as running on the virtual cluster as well. Let's check again. Okay, the two nginx pods are currently up and running, and we also have an nginx service up and running on the virtual cluster. So now we have an nginx service and two nginx pods: can we visit those two pods and this service from inside the cluster?
Let's give it a shot. I will first apply a very basic pod and use it as an interactive curl client: I will log into this pod and use the curl command to visit the pod and the service from inside the cluster. I just want this pod up and running while doing nothing important, so it simply runs the top command.
Let's log into the pod. Okay, now we are inside the pod. As I just mentioned, service discovery on a virtual cluster currently relies on environment variables. Here are all the environment variables inside the pod, and we can see the my-nginx service cluster IP among them. By using these environment variables, we should be able to visit the my-nginx service. Let's try it.
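Inside the curl pod, that lookup is just the standard Kubernetes service environment variables; for the my-nginx service from the demo it looks roughly like this.

```sh
env | grep MY_NGINX    # MY_NGINX_SERVICE_HOST / MY_NGINX_SERVICE_PORT, injected per service
curl "http://${MY_NGINX_SERVICE_HOST}:${MY_NGINX_SERVICE_PORT}/"
```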
And also my-nginx-1. Okay, so in this example I showed that the virtual cluster allows users to deploy basic resources like statefulsets, services and pods, and also allows users to visit all these resources in-cluster through DNS. Next I want to demonstrate how to use a controller or an operator on a virtual cluster. Before I get started, let me briefly explain how a controller or an operator works inside Kubernetes.
The core idea of a controller is a Kubernetes client running in a pod in the cluster, where this client needs the ability to control or access certain resources on the cluster through the API server. To accomplish this, the client needs to complete two important steps: first, it needs to get the full address of the API server; second, it needs to prove to the API server that it has the permission to control certain resources. To get the full address of the API server, the client can use environment variables as well.
Here are all the environment variables inside a pod running on the cluster, and among them there are two very important ones: KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT. The value of KUBERNETES_SERVICE_HOST is the cluster IP of the kubernetes service, and this kubernetes service is responsible for all in-cluster API server access.
Inside a pod, the information for a service account is stored under the service account directory, which includes a root certificate file, the namespace information, and the bearer token. Each time a client wants to talk to the API server, it uses the information under this directory to identify itself.
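Putting both steps together, an in-cluster client's raw API call looks roughly like this; the paths and variables are the standard Kubernetes ones, shown with curl purely for illustration.

```sh
SA=/var/run/secrets/kubernetes.io/serviceaccount
ls "$SA"                                   # ca.crt  namespace  token
curl --cacert "$SA/ca.crt" \
     -H "Authorization: Bearer $(cat "$SA/token")" \
     "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api"
```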
When the syncer sees a pod created in the tenant master, it tries to create a corresponding real pod on the super cluster. But when creating that real pod on the super cluster, it uses the tenant master's information to set the environment variables and the contents of the service account directory. That is to say, if a pod finally comes up and runs, then from the pod's perspective it feels like it is running on the virtual cluster instead of on the super cluster, because the only information it can get is the information of the virtual cluster.
Okay, enough explanation; let's see a real working case. Let me install a controller on the virtual cluster. The controller I am going to install is a very basic one, a pod-lister controller. The container is still being created, so meanwhile let's see what's inside the pod-lister controller: it has a very basic clientset, built from the in-cluster configuration, and this clientset keeps listing the pods under the default namespace every 10 seconds.
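The real controller uses a client-go clientset built from the in-cluster config; here is a shell analogue of the same loop, using only the in-cluster credentials described above (it assumes the pod's service account is allowed to list pods).

```sh
SA=/var/run/secrets/kubernetes.io/serviceaccount
while true; do
  # List pods in the default namespace, as the pod-lister controller does.
  curl -s --cacert "$SA/ca.crt" \
       -H "Authorization: Bearer $(cat "$SA/token")" \
       "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/namespaces/default/pods" \
       | grep '"name"'
  sleep 10
done
```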
Okay, the controller works as expected. It keeps listing the pods under the default namespace: the curl pod, the my-nginx pods we just created, and itself. So in this example I demonstrated that the virtual cluster supports the controller and operator pattern. Next I want to show how to use some advanced features, like persistent volumes, on a virtual cluster.
When designing the feature to support persistent volumes, we studied several use cases of persistent volumes, and we observed that, from a user's perspective, the most important things are whether the persistent volume claim can be successfully bound and whether the persistent volume can then be used on the cluster. Users normally don't care about how many persistent volumes are available or how to create a persistent volume; it is the cluster administrator's responsibility to create workable persistent volumes.
In our case, we treat the virtual cluster as a client of the super cluster. So we believe that when a persistent volume is used, it should be created on the super cluster, and a user of the virtual cluster should only be responsible for creating the persistent volume claim. That is to say, if there is no pending persistent volume claim on the virtual cluster, then we will not synchronize the persistent volume back to the virtual cluster.
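The claim-driven flow can be watched from both sides; a sketch, with the kubeconfig name again illustrative.

```sh
kubectl --kubeconfig vc-kubeconfig get pvc,pv   # the claim appears first; the PV shows up only after binding
kubectl get pvc,pv                              # the super cluster does the actual provisioning
```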
Okay, the PVC is successfully created and its status is Bound. Let's check the PV again: this time we can see that the PV has been synchronized back to the virtual cluster, and its status is Bound. So in this case I showed that the virtual cluster supports advanced features like persistent volumes. The virtual cluster primarily aims at providing hard multi-tenancy in a serverless environment, so if there is no active resource connected to a node, the node should be deregistered from the virtual cluster.
Notice here: okay, so there are no pods running on the virtual cluster now, but the node is still there, because currently the default grace period of the node garbage collector is around two minutes, so we need to wait for two minutes. Since this is the last case, I think I can take questions and comments, if you have any.
E
Several questions, but I'll just pick one or two, because we don't have time. Firstly, congratulations: this is a very complex thing and you guys have done an awesome job, so really, congratulations, really good job. Okay, yeah, super cool stuff. But because this is complex, there's still potential for holes, right? So I want to go back to a couple of things. One is admission control. Can each tenant cluster, can each virtual cluster, have fully independent admission controllers, or is there some dependency on the super master?
C
Yeah, admission control can be defined by each tenant individually. In reality, though, in production I do see cases where a global admission control is enforced only in the super master. We don't have a very strict requirement that you cannot do that by design, but I think most of the admission controllers should be run in the tenant.
As for the super master part: if you mean the mutating APIs or admission control, that has nothing to do with the scheduler; everything that scheduling needs is a separate matter. Currently, if there is a CRD, we only sync the API core resources that pod provisioning requires; we don't currently sync any CRDs. That being said, we already sync all the resources that the scheduler needs at this point, but a scheduler that needs special CRDs would be different.
As for webhooks: if tenants install their own webhooks in the tenant master, that is no problem. The demo already shows that controllers actually work, which means that if you start a webhook server, the API server should be able to reach it. Pod webhooks definitely work.
We don't fully expose the underlying topology. So, about daemonsets: you probably have some expectation, say I see ten nodes, so I expect ten daemon pods to be created, but internally we don't expose the full topology. The nodes only show up on demand: only if one of your pods runs there is the node going to show up. So if you install a daemonset, there is a further problem.
C
Is
that
you,
probably
the
team
tzedakah
not
to
be
getting
down,
cannot
be
automatically
scaled
down,
so
my
GC
I
really
just
want
to
work
all
the
knows.
We
are
persisting.
A
tenant
make
sense.
Also, everything I'm talking about at this moment focuses on the control plane; the data plane is a completely different topic. But if you are talking about network policy: I believe we currently don't sync network policy objects, but that's not a big problem. I would say that if you create a network policy within the tenant, the network policy should be synced; I don't think there is a big blocker to making network policy work in the super master.
I
I've got a couple of comments, just along the lines of making it work more like other things. One thought is: you've got the custom resources, VirtualCluster and ClusterVersion, and there seems to be a lot of overlap in config there with the Cluster API. It would be nice if, as a user, you could choose to use a virtual cluster or an external cluster via the Cluster API, with the same API. Sure. The other one was about the command line.

C
I think the CLI tool is pretty simple, so you could simply reproduce what it does by using kubectl against the CRDs directly, no problem; it's just a very simple wrapper that makes a few things easier. Sure, yeah, in general it looks really good. Thank you.
C
So, about the way we name things: you can see all the synced resources have a pretty long prefix, and I also add a six-character random string. This was introduced to avoid any possible name conflicts in the super master. That being said, there is a kind of hidden restriction here: this prefix can be pretty long.
About isolated environments and host network access: in reality, if you want to come up with any meaningful solution in a public cloud, every pod will most likely have its own ENI and run in a VPC environment. The bigger challenge in that regard is actually the service.
In many cases in the public cloud, the pods cannot even access the nodes. For that part we internally have a solution, because we changed the kube-proxy, but I don't think that is a good open source candidate because it's a pretty in-house fork. The bigger challenge in that regard is: if pods are isolated in their own network environments, how do we make sure that all the basic Kubernetes service discovery mechanisms, like cluster IP services, still work?
That's pretty challenging. On the isolation part I don't worry too much, because at least in our use case in a public environment you would use a sandboxed container runtime like Kata Containers, so you have a VM wrapped around all the container processes. The more challenging part is actually the networking setup for service discovery, I'm afraid.
D
I'm not quite sure I followed what you're saying about cluster IPs for the services. I thought you said up front that all the tenants have to have the same CIDR block for the service cluster IPs. So...
C
What I'm saying is: if you assume every pod runs on its own ENI in a VPC, that is, if you use an ENI-based network, then in some cases the traffic does not even go through the host network stack at all. So the traditional cluster IP, which works by rewriting the host's iptables, isn't going to work at all. That's the challenge, yeah.
It has nothing to do with the demo I showed today, though. I'm saying that if you want to run a sandboxed environment in the public cloud, then you have trouble making the cluster-IP-type service work. The load-balancer-type service definitely still works, but cluster IP is a challenge. The cluster IP subtlety in this demo is that, although we sync the service from the tenant to the super cluster, the cluster IP allocation is done by the tenant master and the super master separately, so the cluster IPs in the tenant and in the super cluster are different.
So although the cluster IP in the pod still works, from the tenant's point of view it may look a little confusing, because in my API server I see my service with this cluster IP, but in the pod the environment variable says the cluster IP is that other IP. So there is a little bit of confusion in that regard; that's what I meant. We could somehow work around this, but it needs more work, because we cannot just simply copy the cluster IP from the tenant to the super cluster: the IPs can conflict.
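A sketch of the mismatch being described, using the demo's names; the curl pod name is illustrative.

```sh
# The tenant API server's view of the service IP...
kubectl --kubeconfig vc-kubeconfig get svc my-nginx
# ...versus the IP injected into the pod's environment, which comes from the
# super cluster's separate allocation.
kubectl --kubeconfig vc-kubeconfig exec curl-pod -- env | grep MY_NGINX_SERVICE_HOST
```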
G
Hi, this is Dan. I just want to say, yeah, it's really cool, great job, guys. I had a question: how does this model support resources that want cluster-wide scope, such as OLM or the cluster autoscaler, for example? Can there be a certain kind of namespace that is privileged and can see beyond its tenant master? Or, if not, how do you support something like a cluster-wide API service?
C
As I said before, if you want to run a controller, you should run it in the tenant master. But in some cases, depending on the use case, someone may want to enforce a global admission control; that is more about the super master administrator wanting to do something that everybody needs to follow, which means the tenants don't install that webhook or admission control themselves. The behavior in that model is slightly different.
C
Is
that
in
the
old
behavior
we
thought
about
you
cluster?
You
probably
get
an
error
message.
You
say
that
this
part
cannot
be
omitted
because
there
is
a
mission
failure,
but
in
your
model
there
is
a
slightly
different
see.
That
really
happens
because
the
part
is
actually
created
in
this
canon
master,
but
they
the
object,
is
created,
but
the
part
cannot
be
start
because
it
cannot
occur
in
the
soo
master.
Instead
of
a
sinker
will
report
an
error
message.
You
say
why
they
cannot
be.
Okay, that is a different issue. For the syncer, first, we have an HA setup, and we want to keep it running; but we cannot guarantee that it is always running. What we can do is make sure that once it comes back up again, everything is reconciled.
D
Yeah, I'm saying that if, every time the syncer tries to create the pod copy on the master, the master's API server rejects it because the master has some admission control, then there should be a consistent error message that gets back to the tenant user somehow.
C
Yes, yes. And I think on this page we also say how to contribute. You can connect with us; everything is here, so you can just file a GitHub issue, or just go to the repo and file a PR, and if you want to talk more, you can send me a message. Wonderful.