From YouTube: WG-Multitenancy Bi-Weekly Meeting for 20220531
Description
WG-Multitenancy Bi-Weekly Meeting for 20220531
A: Hey everybody, welcome to our regularly scheduled working group meeting for multi-tenancy. Today our agenda has the Kamaji project for hard multi-tenancy — we're going to do some introduction and feedback — and then Jim and team will be giving us a docs update on our documentation of how to achieve multi-tenancy within the Kubernetes project.
C: What about now, is that much better? That's good, thanks. God, 2022, the year of Linux on the desktop anyway. Let's listen to how it works. So thank you so much for the introduction, and I don't know if I can share my screen. No, it seems not, so... oh, can I... hold on.
C: Go? Okay, okay! Thank you so much. So I'm really happy to be here, because at least maybe two years ago Adriano was here in the same meetup, or community call, or whatever it is, presenting Capsule, and I'm part of Clastix.
C
What
does
it
mean
we
went
for?
We
received
some
questions
by
people
asking.
How
can
I
use
my
notes?
How
can
I
get
the
list
of
validating
books,
mutating
the
books
or
just
some
other
scoped
resources,
and
obviously
with
capsule,
with
the
concept
of
just
namespaces
a
collection
of
spaces
as
a
tenant
wasn't
feasible?
Wasn't
we
were
unable
to
do
that,
so
it
pops
in
our
mind,
this
idea
of
kamaji
and
kamaji
essentially
is
kubernetes
that
is
managing
kubernetes.
C
Just
joking,
but
keep
in
mind
that
kamaji
is
an
operator
that
is
going
to
be
installed
in
a
kubernetes
cluster
with
a
custom
resource
definition
that
we
names
config
samples,
then
control,
plane,
and
in
this
standard
control
plane
we
are
going
to
specify
some
options.
C: So it's pretty similar to the Nested Cluster API project, because we are going to deploy a Deployment with the API server, controller manager and scheduler inside of a Kubernetes cluster, and it's going to be exposed using an Ingress, or eventually using a LoadBalancer, a NodePort service, and so on and so forth. And actually we are supporting some options.
C: I'd say the default ones, at least for having a working Kubernetes cluster: options such as the version, the kubelet cgroup driver, and also the settings regarding the network — so the address, the exposed address, the port, eventually the domain, the service CIDR block, the pod CIDR block, and blah blah blah.
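(For readers following along: a TenantControlPlane manifest covering the options listed above might look roughly like the sketch below. The API group, field names and values are illustrative assumptions reconstructed from the description, and may not match the actual Kamaji CRD schema.)

```bash
# Illustrative sketch only — field names, API group and values are assumptions,
# not copied from the real Kamaji config samples.
cat <<EOF | kubectl apply -f -
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
  namespace: default
spec:
  kubernetes:
    version: v1.23.6            # tenant Kubernetes version
    kubelet:
      cgroupDriver: systemd     # kubelet cgroup driver (field name assumed)
  networkProfile:
    address: 172.18.0.2         # exposed control-plane address
    port: 31443                 # exposed control-plane port
    serviceCidr: 10.96.0.0/16   # service CIDR block
    podCidr: 10.244.0.0/16      # pod CIDR block
EOF
```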
C: If you don't mind, I would like to show you also a brief demo. I hope that the screen is good. So right now I'm using a kind instance I deployed — I installed Kamaji, so kind create cluster and blah blah blah — and I installed it using Helm. So, if I remember: helm list --all-namespaces... yeah, we got Kamaji, in the kamaji-system namespace.
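(The setup described above corresponds roughly to the following commands; the Helm repository URL, chart name and release name are assumptions rather than commands verified from the recording.)

```bash
# Illustrative sketch of the demo environment: a kind cluster with the Kamaji
# operator installed via Helm. Repo URL and chart name are assumptions.
kind create cluster --name kamaji-demo

helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji clastix/kamaji --namespace kamaji-system --create-namespace

# Verify what is installed.
helm list --all-namespaces
kubectl get pods -n kamaji-system
```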
C: We got our operator, and then we deployed the etcd multi-tenant cluster. It can be deployed inside the Kubernetes cluster, it could be external — we are not opinionated about that. In the end, we just need the certificates to connect to the etcd, and then the API endpoint, or rather the address of the etcd instance. So right now I would like to move to the default namespace, and I'm going to watch for the tenant control planes — kubectl get tenantcontrolplane, or maybe get tenantcontrolplane with a watch.
C: Okay, and with that said, we can apply our example from the config samples.
C: The status is in provisioning, because we are generating all the required certificates, all the kubeconfigs required for the admin, the scheduler and the controller manager. In fact, if you kubectl get pods, you can see that we got our deployment — the two pods of the deployment — and if I switch to the containers, you can see that we got the kube-apiserver, then the scheduler and the controller manager. And now, finally, we got our API server — well, our Kubernetes cluster — up and running in less than 30 seconds.
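(Reconstructed as commands, this part of the demo looks roughly like the following; the sample manifest path and the resource plural name are assumptions.)

```bash
# Illustrative reconstruction of the demo steps above.
kubectl config set-context --current --namespace default

# Apply the example tenant control plane from the config samples (path assumed).
kubectl apply -f config/samples/tenantcontrolplane.yaml

# Watch the tenant control plane go from provisioning to ready.
kubectl get tenantcontrolplanes --watch

# The tenant control plane runs as ordinary pods in the admin cluster; each pod
# carries kube-apiserver, kube-controller-manager and kube-scheduler containers.
kubectl get pods
```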
C: kubectl get secrets — it should be test admin... kubeconfig, yeah, I was lucky. In fact, it is. And with this kubeconfig — although there are some issues, because, my bad, I used the wrong example.
C: And the port was 41-something, if I recall correctly. Kubeconfig... /tmp/kubeconfig... kubectl version. It's a demo, I'm sorry for that, but I should be able to access it. Let me check, I can try to solve it in a glimpse. I'd say it's the kind node's name, because I wasn't able to use the Ingress — okay, my bad — and it's not the Service, it should be the NodePort, and the address should be this one.
C: Okay, that should be good enough, maybe, but let me check. So... jq... secret... kubeconfig... /tmp/kubeconfig... oops... kubectl version. Yeah, it's working — thank God, it's working. So, as you can see here, we got the correct version, and in the meanwhile I would like to do this test. So.
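(What is happening on screen here, roughly, is pulling the tenant's admin kubeconfig out of the secret that Kamaji generated and pointing kubectl at the tenant API server. A sketch of that step; the secret name and data key are assumptions.)

```bash
# Illustrative sketch: extract the tenant admin kubeconfig from the generated
# secret and talk to the tenant API server. Secret name and data key are assumptions.
kubectl get secret test-admin-kubeconfig -o json \
  | jq -r '.data["admin.conf"]' \
  | base64 -d > /tmp/kubeconfig

# The reported server version should now come from the tenant control plane.
kubectl --kubeconfig /tmp/kubeconfig version
```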
C: We've got no joined nodes, so I'm going to export the kubeconfig for tenant control plane one... kubectl version — is it the correct one? Yeah, it's the correct one. And let's see: join-node bash... so we are using the correct one, and in a glimpse here we should see a new container — the Clastix kind worker. So essentially, this is just a kind node instance, and it's joining the tenant control plane using kubeadm.
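(The node-join step can be reproduced, in outline, with standard kubeadm: generate a join command against the tenant control plane's kubeconfig, then run it on the machine — here, a kind node container — that should become a worker. The script used in the demo presumably wraps something equivalent; the container name below is hypothetical.)

```bash
# Illustrative sketch of joining a worker to the tenant control plane.
# Assumes /tmp/kubeconfig points at the tenant control plane (see above).

# Create a bootstrap token and print the matching kubeadm join command.
JOIN_CMD=$(kubeadm token create --print-join-command --kubeconfig /tmp/kubeconfig)

# Run the join command inside the would-be worker (hypothetical container name).
docker exec kamaji-demo-worker sh -c "$JOIN_CMD"

# After a short while the node should show up in the tenant cluster.
kubectl --kubeconfig /tmp/kubeconfig get nodes
```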
C: No, we shouldn't need that, and it was deploy... kind join node should be the correct one... Kamaji. No, absolutely, we have to export the kubeconfig — /tmp/kubeconfig, yeah. Now it is the correct one, and let's go.
B: Yeah, I think, Dario, you said this is similar to, I guess, CAPI Nested, and I was curious to know, from your perspective, what would be the differences, or how would you position one versus the other?
C: I'd say that, honestly, firstly I have to try to understand the Nested Cluster API better, and if I understood correctly, there is a sort of syncer. So the idea is to use a different API server deployed inside the cluster that is going to sync some resources into the upper cluster.
C: Instead — if I understood correctly, so if I'm wrong, please correct me, no worries — with Kamaji we would like to provide a way to spawn real Kubernetes clusters, ensuring that network, storage and node segregation are available at the infrastructure level, because we had a nice chat at KubeCon with a Google engineer who was asking how we are solving the multi-tenancy issues from the network and the storage perspective.
F: And I guess I'm not familiar with Nested Cluster API — I mean, we use standard Cluster API with a management node. I think the big difference here is whether the API server, or the control plane, is running in the management cluster versus externally. Is that the big difference, or, I guess, what other pieces are in place?
C: Yeah, yeah — keep in mind konnectivity. We discovered that, from the infrastructure perspective, obviously, since the control plane is running inside the admin cluster and the worker nodes could be in a different network, we noticed some issues with the webhooks — the validating and mutating webhooks — using the cluster API. So a solution was using the NodePort, but you know, it's not so cool having the webhook services running on NodePorts, so we are working with konnectivity, and we've also got Gonzalo.
C: ...who, unfortunately, is not here due to the time zone, and if I remember correctly, there is a pull request on that — yeah, refactoring konnectivity. So our idea is to provide konnectivity configured automatically, so you don't have to create all the certificates, you don't have to deploy the DaemonSet on the worker nodes, and blah blah blah. It's going to be exposed as well, also because I saw from the documentation that there are so many moving parts with konnectivity.
D: You're asking about the CAPI — the Nested CAPI — syncer. I think that the only thing that gets synced... so it also does deploy entirely different API servers per tenant. I think it also deploys...
D: ...an etcd per tenant, so it would get around the eight-gigabyte limitation, but, of course, at the cost of increased resource utilization. My understanding is that the syncer is just for the pods, and it's a pity Fei is not on the call — or at least I don't think he's on the call.
D: If you were — if you could take over from me now — but I think that what the syncer does is just for the pods, so that, basically, you create a pod in a child, in, like, one of the tenant clusters, but the tenant clusters...
D: They all look as though they have uniform access to all of the nodes, but of course they can't. There has to be one common scheduler that schedules the pods among all of the tenants. So that's what the syncer performs: of everything in Kubernetes, it syncs just the pods. So I think what I was going to ask is: how do you solve that problem? So, if you've got, you know, one real cluster with 100 nodes, and then you've got, you know, 10...
D: ...tenant clusters running on that, how does it allocate the nodes between them, and how does it handle resource sharing within the nodes — ignoring quality of service, just basic CPU and RAM?
C: To... to sell — sorry for the bad word, because we don't want to sell Kamaji — but from the, let's say, marketing perspective, we are saying that Kamaji could be a nice solution for control plane as a service, because we saw a lot of potential customers saying, "I would like to get a lot of API servers," but running them at scale is a pain, because in the end each production cluster must have three control plane nodes, and you have to manage them, and it's not really convenient.
C: I've been an SRE, and I was taking care of five production clusters for a gross total of two thousand five hundred nodes, and it was a pain. Although everything was automated, there is a huge toil behind that. So my idea, also with Adriano, who is in listening mode right now, was to say: since operators are really cool, because they are eliminating a lot of toil, a lot of operational cost...
C: ...let's let Kubernetes be managed by Kubernetes itself. So that's the really basic design. But now, also thinking about how to place workloads in the admin cluster: the original design of Kamaji is that you bring your own devices. It will be used maybe by cloud providers — small cloud providers — that would like to provide Kubernetes to their customers, but — sorry, what? — but because otherwise...
D: That's the big difference between this solution and either Nested CAPI or even Loft's vcluster, which I don't know as well — I just know what I've read about it — is that both of those basically subdivide one large cluster up, in terms of the control plane. A lot of it is similar; I think the details are different, as I said, like multi-tenant etcd versus many deployments of etcd, but I think that Nested CAPI is also implemented as an operator in the — in what's called the parent cluster.
D: Neither of them is — they're, like, equally good implementation choices, it might be. Have you spoken to Fei and the rest of the Nested CAPI folks? Because it might be interesting to see if there's a way to work together. I know that Nested CAPI and vcluster, even though they have very similar goals, decided not to join forces, because there were enough differences in implementation that they didn't want to, and that might turn out to be the case with you as well, but it might be worthwhile talking.
D: I don't know if vcluster has an open-source core — it might or it might not, I confess I'm not certain about that, but I'm sure you can look that up. With the Nested CAPI folks, though, it's got a sort of standard API — the API in CAPI for creating new clusters.
D: Perhaps there would be some way to basically work with them and see if you can extend that model. Even if you end up just sharing information and sharing ideas, that might be useful as well. But yeah, I'd encourage you to reach out to Fei — his contact information is on any of our pages. Ryan?
F: Yeah, so — and I guess one of my questions, right, is: this is one of the bigger problems I have with CAPI in general — you now kind of have a single point of failure for every single one of your clusters, because your control planes are all on a single Kubernetes cluster's nodes. Have you...?
C: I'd say that it's out of the scope of Kamaji — okay, yeah — because in the end everything is here in the multi-tenant etcd. Because what you can do is to say, oh, let's say I'm announcing the tenant control plane endpoint using BGP: obviously I can try to move the API to a new pod in a different cluster, but that will also require a replication of the multi-tenant etcd. I'm not saying that it's not possible.
C
Obviously
you
can
do
that,
but
I
think
that
could
be
something
implemented
out
of
the
kamachi
project.
F: Is the tenant control plane pod, I guess, aware of that, or will it accept a move, or is it locked into that Kubernetes cluster for whatever reason? Ah, interesting, okay. And I guess the other question, too, is: you know, since it's bring-your-own worker nodes, are you planning on any hooks, kind of like how CAPI provides, like, post-kubeadm hooks or whatever, for, like, upgrades?
F
You
know
if
I
have
you
know
a
thousand
nodes,
it's
super
easy
to
update
the
control
plane
as
you
leaving
that
to
the
the
I
guess,
the
cluster
owners
to
update
their
worker
nodes
themselves
or
how
does
that
kind
of
look.
C: Well, our idea is to integrate deeply with Cluster API. We've got that in our roadmap, because in the end we saw that there is a resource in the Cluster API — that is, the control plane reference in the Cluster CRD, or something like that. So we would like to provide a provider for Kamaji, so the complete lifecycle of the tenant cluster is going to be managed using Cluster API.
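(The control plane reference mentioned here is Cluster API's controlPlaneRef on the Cluster resource. Since a Kamaji provider is described only as a roadmap item, the Kamaji-specific kind and API group in the sketch below are hypothetical; only the general controlPlaneRef mechanism is standard CAPI.)

```bash
# Sketch of how a Cluster API Cluster delegates its control plane via controlPlaneRef.
# The Kamaji kind/apiVersion are hypothetical (roadmap item); the rest is standard CAPI.
cat <<EOF | kubectl apply -f -
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tenant-00
  namespace: default
spec:
  controlPlaneRef:
    apiVersion: kamaji.clastix.io/v1alpha1   # hypothetical provider API group
    kind: TenantControlPlane                 # hypothetical Kamaji-managed control plane
    name: tenant-00
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster                      # any infrastructure provider would do here
    name: tenant-00
EOF
```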
C: Okay, including the worker nodes then? Yeah, absolutely, absolutely. Cool — also because Cluster API is doing that: firstly upgrading the control plane, then upgrading the worker nodes, in a rolling-update way, or with blue-green, or whatever it is.
C: The idea is to say: I would like to provide Kubernetes clusters to my customers — which could also be internal customers in a big organization, because we saw that it's a hugely adopted pattern — and I'm going to use Cluster API, because in the end it seems that it's the standard way to manage Kubernetes clusters at scale.
C: The current status: I saw that there are some providers in Cluster API, and they are mostly using kubeadm, besides some implementations with Azure and so on and so forth. Keep in mind that in the Kamaji code base — the go.mod — we are using kubeadm. Yeah, because we've got Kubernetes — yeah, here it is — and it was a pain, by the way, because kubeadm has not been designed to be used as a library.
B: Okay, yeah. One other question: you know, if you go back to your diagram, are there any specific networking requirements between the worker nodes and what you're calling the admin cluster here?
C: No, no, no specific requirements. The idea is that if I would like to use a different CNI for each tenant — bring your own devices, well, bringing my own machines or virtual machines — I can do that using a...
F: I did have another question too, about the control planes. So when I'm on a worker node, do I have full access to everything that you've deployed in the control plane through, you know, kube-system or etc.? So I can see those pods — okay, yeah.
F: So AKS, EKS, right, they hide stuff — their stuff is hidden behind konnectivity or whatever proxy they're using, that you don't see. So you don't see the API server, right, usually. And so that's where I guess I'm curious: do I see the API server as a tenant, or is that kind of hidden away, like AKS, GKE, etc.?
C: No, they are not considered pods — they are not considered pods — because if I export...
C: Now I should see when the kubeadm starts — yeah, now it's working, it's working, so yeah, the demo is working. I'm able to see the node.
D: Thanks, Dario. I think, Jim, you're up next for the docs.
B: Yeah, so we had the content ported over from the Google Docs to this pull request, and I think most of you might have seen it. I think a few people commented and had some suggestions here, so most of the suggestions are accepted. I think there was this one question on whether we should say something more about pod security — anything about pod security, or which models pod security is suited for. So I think this is the one pending comment, where I feel it applies to all forms of tenancy.
B
So
I
don't
know
if
there's
anything
specific
we
can
call
out
but
would
be
interested
in
getting
another
thoughts
and
yeah.
I
don't
know
who
else
is
in
terms
of
the
reviews.
Here
we
have
a
couple
of
looks
good
to
me.
I
guess
comments
here,
but
maybe,
as
a
next
step,
I
was
thinking
we
could
even
broaden
it
to
some
of
the
sigs.
B
If
we
wanted
to
elicit
specific
feedback
from
a
few
folks
in
the
community
we
can,
but
the
idea
would
be
to
then
you
know,
collect
that
feedback
and
close
out
the
pr.
So.
D
Did
you
need
one
thing
for
me?
I
remember
seeing
something
about.
I
just
glanced
over
the
original
pull
request.
There
was
something
about
a
drawing
needing
to
be
redone.
Is
that
urgent.
B
So
I
had
redone
it
in
powerpoint
because
I
didn't
have
the
then
when
I
copied
it
from
the
google
doc
it
didn't.
The
resolution
was
a
little
bit.
B: Right, yes, I think that's the current approach: security is just called out up front, saying, you know, it applies to everything. And in fact security is more relevant, of course — or more critical — if you're looking at multiple tenants sharing the same cluster. But yeah, I don't know what else we would want to say here, other than, of course, security is important in any form of tenancy.
B: Okay, yeah, I haven't in a long time either, so yeah. All right, so then I'll just add it to the Slack channels and request reviews there, and then feel free — if anyone else has any other suggestions, or just if you want to chime in on these comments — most of the suggestions over there I've accepted, yeah. But if there's anything else, just feel free to add it to the PR.
B
By
the
way
I
did
put
this
in
and
we
were
debating
at
one
point
where
to
put
this
right,
so
I
put
this
into
the
security
section
itself,
which
I
felt
that's
where
it
best
fit,
but
again
open
to
suggestions.
Of
course,
if
you
go
in
here,
where
did
security
go
there?
It
is.
If
you
expand
security,
then
there's
multi-tenancy
shows
up
in
here.
F: Yeah, and we lost Tasha, but we have some action items for Jordan — the PR that needs to get picked up that Tasha owns. So he may mention that when we reach out, so we'll need to get that done for him as well.