From YouTube: Kubernetes Working Group for Multi-tenancy 20200323
Description
Agenda:
Berat Senel: EdgeNet Kubernetes distributed Cloud presentation http://edgenet.planet-lab.eu:8081/ https://github.com/EdgeNet-Project.
Daniel Sover: Operator Lifecycle Manager and some of the multitenancy related problems the project has and how it solves them
See agenda doc for helpful github and presentation links here: https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY/edit#
A
Hi everybody, and welcome to the Kubernetes Multi-tenancy Working Group. Today we have two presentations. The first is going to be by Berat Senel on the EdgeNet Kubernetes distributed cloud, and the second is going to be Daniel Sover talking about the Operator Lifecycle Manager and multi-tenancy related problems. So I'm going to hand it over to Berat to kick us off.
B
Infrastructures for research on networking, open to the public Internet. I know that "edge" can mean a lot of different things to different people, so let me just situate for you where we are. We are in the wired edge, and we are behind the distribution network. Our machines are typically rack servers at research institutions, or also VMs [inaudible].
B
Here we really have a viral potential, sorry to use that word at this time, because of the ability to install it using software alone. If you could just go back one slide, Berat. So, "my installation of EdgeNet": it's super simple. Anybody can just bring up an Ubuntu or CentOS VM and, in minutes, have an EdgeNet node running.
B
Next, what we are leveraging is the fact that there's such a wide base of people who know Docker and Kubernetes. In the past, with PlanetLab Europe, we were using Linux-VServer; then we moved to LXC Linux containers, and kind of rolled our own control frameworks. Here, we're using things everybody knows.
B
So, we're here to talk about multi-tenancy. EdgeNet is multi-provider and multi-tenant. People provide the VMs on which the nodes are deployed. They can be individuals; they can be, and this is our typical model, research institutions, universities, laboratories: anybody who has a machine that's connected to the Internet can offer nodes. The multi-tenant aspect is that we have researchers around the world who use the system as well.
B
All of this is mediated by EdgeNet, and what we're here for today is to enter into dialogue with you. We want to know who's facing issues similar to the ones that we're facing; we're eager to combine our efforts with others. For instance, we have a monthly meeting with people at Measurement Lab, at Google in New York City, who are developing, and have deployed, a similar system.
C
Hello everybody, this is Berat speaking. I'm going to talk about what we have done so far and what we want to have in the next steps. So, firstly, we want to contribute back to the Kubernetes community; for that reason, we switched the programming language we used in the first version of EdgeNet to the Go language, and we now use Custom Resource Definitions (CRDs) to extend the Kubernetes API.
C
In general, we focused on two principal aims: one of them is bringing Kubernetes to the edge cloud, and the other is enabling multi-tenancy. Let's have a look at our architecture. As you know, this is the standard custom-controller architecture: Custom Resource Definitions, which use the machinery of the API server, and the custom resources, which it handles in the same way it handles core resources like pods and daemon sets, as you see in the figure. So we have plenty of CRDs in use, such as authorities, users, teams, slices, and so on.
C
In the first place, a clarification: when we say edge, we don't mean the wide edge, that is, devices such as smartphones, laptops, and sensors that have limited spare compute and data storage and can even have connectivity problems. By saying edge cloud, we mean [inaudible]. We do not establish a campus edge cluster; rather, the edge nodes are located in distributed locations, and we take advantage of a feature that makes it easy for third parties to add nodes into the cluster.
D
I'll just ask a quick question on that one: is the intent to offer a single global cluster, of which all of the different regions are represented by attributes on the nodes? Or are you going to have, for example, regional clusters that can span, say, a continent or a part of a continent, or something like that?
C
[inaudible] At this point, I want to mention that in the next versions we will create a permission CRD for authority admins to delegate responsibilities independently to the users, rather than giving admin privilege wholesale. Authority admins can delegate user approval and the creation of teams and slices to managers; techs are responsible for the nodes that are contributed by that authority to the EdgeNet cluster; and a user is anyone who develops and deploys applications on an EdgeNet slice.
C
Let's look at the multi-tenancy aspect of EdgeNet. Here is a typical case in vanilla Kubernetes: we have a cluster admin who creates namespaces to be used for different tasks by different user groups. As you see, namespaces A, B, and C have been created. Some service accounts need to be created in the cluster; therefore, our admin creates them for the users, along with role bindings, and delivers the token files to the users separately.
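The per-user setup described here, done by hand, amounts to manifests roughly like these (all names are illustrative, not taken from the talk):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: alice
  namespace: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-edit
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: alice
  namespace: team-a
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

The admin would then extract the service account's token and send it to the user, which is exactly the manual step that does not scale.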
C
Now the users can make use of the namespaces according to the authorizations of their service accounts; before I forget to say, a token file carries the service account information. So what is the problem? The problem is that this is not scalable: we cannot do that for thousands of users. Here is where EdgeNet offers a solution. Think about the use case of EdgeNet, with the edge nodes scattered across locations; like Kubernetes, we have a cluster admin. Here we go in this case.
C
When a registration request has been made, EdgeNet sends an email including a one-time code for email verification. The reason is that we don't want to struggle with fake registration requests. At this point, no notification has yet arrived at the email address of the EdgeNet cluster admin; the would-be authority admins first verify their email address by using the one-time code they receive by email. They need to do that within 24 hours; otherwise, the system destroys that one-time code.
C
This is the time our cluster admin needs to take action, because the admin gets a notification about the authority registration once the authority admins have verified their email addresses. The only thing the cluster admin does is approve the request within 72 hours. It is that easy; there is nothing to do manually. So what happens after the approval?
C
The authority request controller creates an authority object according to the registration. Once an authority is created in EdgeNet, the controller generates a namespace by combining the authority name with the string "authority" as a prefix, then sets a resource quota in it to prevent deploying applications in this namespace, and creates an authority admin user.
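A rough sketch of that naming scheme; the exact prefix format ("authority-") and the lowercasing are assumptions for illustration, not necessarily what EdgeNet's controller emits:

```go
package main

import (
	"fmt"
	"strings"
)

// authorityNamespace derives the namespace that the controller would create
// for an authority, prefixing the authority name as described in the talk.
// Lowercasing is applied because Kubernetes namespace names must be
// lowercase DNS labels.
func authorityNamespace(authority string) string {
	return "authority-" + strings.ToLower(authority)
}

func main() {
	fmt.Println(authorityNamespace("Sorbonne")) // authority-sorbonne
}
```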
C
Meanwhile, the authority controller sends a notification that says "your authority registration is successful." This email includes a user-specific kubeconfig file, so the authority admins can now start using EdgeNet with that file, and they are ready to welcome user registration requests on their authorities. It works just as the authority admin registration did, but with a difference: users make registration requests on authorities that are already registered in EdgeNet. And again, the same procedure: email verification, and once users verify their email addresses, authority admins get notified about the registration request.
C
So, notified about the users who made registration requests on the authority that the authority admin registered, the authority admins approve the users, and the users get notified back that their registration is successful. Users can now start using EdgeNet as well, by using the kubeconfig files tied to their verified email addresses.
C
But it is not allowed to deploy applications in the authority namespace, thanks to the resource quota. There are two solutions to that. Let's start with teams: authority admins or managers can create teams by choosing users, even from different authorities. When a team is created, the team controller generates a namespace and sets its name by combining the authority namespace and the team name. But still, it is not allowed to deploy applications in the team namespace.
C
So that takes us here: users operating in a team namespace can create slices by choosing users from different authorities. When a slice is created, the slice controller generates a namespace the same way the team controller does, but it sets a resource quota depending on the slice profile, and in this slice namespace, users who participate in that slice can deploy their applications. There's also a shortcut: authority admins and managers can directly create slices in the authority namespace, as you see, and the participants can deploy their applications. Now I want to show how our node contribution CRD works.
C
As I mentioned, in our provider node model, providers need to install an Ubuntu or CentOS operating system on a VM, and then they just need to put our SSH public key into that virtual machine. When the VM is ready, they prepare a YAML configuration file including the node name, IP address, port, SSH user, and node availability information. That's it: the controller creates an SSH client to install Kubernetes and the necessary packages; it also creates a kubeadm join token and runs the join command remotely.
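A minimal sketch of the node-contribution data and the remote join step just described; the struct fields, the head-node endpoint, the token value, and the exact kubeadm flags are illustrative assumptions, not taken from EdgeNet's code:

```go
package main

import "fmt"

// NodeContribution mirrors the fields of the provider's YAML configuration
// file described in the talk: node name, address, SSH details, availability.
type NodeContribution struct {
	Name    string
	Host    string
	Port    int
	User    string
	Enabled bool
}

// joinCommand builds the kubeadm join command that the controller would run
// over SSH on the contributed VM once the packages are installed.
func joinCommand(apiServer, token string) string {
	return fmt.Sprintf("kubeadm join %s --token %s", apiServer, token)
}

func main() {
	n := NodeContribution{Name: "paris-1", Host: "203.0.113.10", Port: 22, User: "edgenet", Enabled: true}
	fmt.Printf("%s@%s:%d -> %s\n",
		n.User, n.Host, n.Port,
		joinCommand("headnode.example.org:6443", "abcdef.0123456789abcdef"))
}
```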
C
Authorities can also enable or disable scheduling on the nodes that they contribute to the EdgeNet cluster by updating the node contribution object; when they do that, the controller automatically disables or enables scheduling on that node. In the future, we want to create a consumable resource model to provide fairness among the users and, as I mentioned, we want to provide node allocation in slices, despite the constraints of CRDs.
C
A slice is a CRD which creates a namespace, setting a resource quota, in this version. By getting the benefit of an admission controller, we also want to provide node allocation in slices, which means, for example, that when an admin creates a slice including certain users, they will be able to select the nodes on which they can deploy their applications. But basically, it is a CRD which creates the namespace; with the constraints, we can say that, okay.
D
So, this is Adrian speaking; I've been leading the hierarchical namespaces project, and it sounds like there's a little bit of overlap there, maybe not a ton; we're more for the sort of case of teams than individual users. It's kind of funny: I feel like I haven't really heard many people in the multi-tenancy group talking about things like end-user identity and stuff like that, even though technically it certainly is a kind of multi-tenancy.
D
Usually we end up talking about teams or applications rather than individual users. But for the teams, and to a certain extent for the slices as well, I'd be interested: if you want to, check out the hierarchical namespace controller, HNC, and let me know if you think that it could fit anywhere into your project.
C
Hi Adrian, thank you so much for informing us about that. Of course; this is why we wanted to join this working group: to contribute back to the community and to get help from the community. So I am curious: you mentioned that you didn't look at it from the aspect of individual users in the multi-tenancy working group. Is that just a matter of... sorry?
D
Just that it's well covered in other SIGs, I believe. Anyone should correct me if they know the right answer to this, but, for example, there's a whole system of identity providers, and integration into identity providers; I'd imagine that goes through SIG Security for end users. I don't feel like we discuss it much in this group.
D
Okay, I was just interested by that, because we don't often discuss that here. So I guess what I would say to Berat and the other contributors is that, if you're interested in more there, maybe we can take this up on the Slack or on the mailing list to talk about it; it's not something I have at hand off the top of my head. Check out hierarchical namespaces.
D
It's got a certain amount of self-service options as well; we're busy rewriting the UX, but hopefully we'll be able to land that within the next week or two. But yeah, you can create a namespace that can seed a child namespace, and it copies a bunch of objects into it. So yeah, feel free to give me a shout: from our main git repo page you can find the HNC information, and my contact information is there as well.
D
The only other thing I would sort of ask is this: I asked about whether this was truly a planet-wide cluster before, and the answer was yes, but it also sounds like you can, in many cases, limit nodes to only being used by one team. So I guess I was wondering about that in both directions. How frequently are people limiting nodes so that only they can run on their own nodes, in which case they're basically using the API server and the infrastructure but not really contributing anything back? And, on the flip side:
D
How often are people truly running workloads in multiple regions of the world? The reason I ask is that if you only have one cluster spanning the world, you have one failure domain: anything that goes wrong with any of your components instantly affects everybody. So it's often considered a best practice to have several clusters, as long as it's not needed to deploy across them, because for that you need something like Federation.
C
Because now we are migrating PlanetLab users onto EdgeNet nodes, which means we will have [inaudible] users in the next month, which will allow us to gather some data about your first question. But we also want individual users to provide, for example, their own machines into the system. I didn't mention it in this presentation, but we also want to provide a system in which individual users who contribute nodes can enable and disable a node according to the authorities in EdgeNet.
C
So, for example: I just trust PlanetLab Europe, so, you understand, I want my node to be used only by their team. They will be able to do that by creating limitations in their node contribution objects. I think this is an important case, because this will be a public infrastructure, and some users wouldn't want to open their node to everyone, because they want to feel safe. So, okay.
B
Cool, okay, thanks. If I could just add one thing in response to your question, Adrian: the value of our system, like these previous systems, has to do with the availability of vantage points worldwide, and so the typical use is to deploy to multiple vantage points around the world. It might be that you're conducting internet measurements, and there are certain things that you can only see from certain locations. Or you might be investigating, say, different
B
Ways in which intellectual property rights are applied on websites around the world, so you want to connect from different locations, things like that. Or you might be running an experimental content distribution network. In all of these cases the worldwide distribution is very important; we don't typically have people deploy to just a few nodes located close together. Cool.
H
Okay, so hi everyone. I'm here to talk about operator multi-tenancy and the Operator Lifecycle Manager. There are a couple of people from the team on the call, so feel free to reach out and ask any questions you're going to have as the presentation goes along, and I'm sure we'll be able to answer them. So, to begin.
H
So, to begin with, as the slide said, I just want to talk about what an operator is, so we're on the same page. I think this is a handy graphic that describes, basically, how the user posts custom resources to the Kubernetes API server in a YAML specification, and then an operator watches for those events, for those custom resources coming in, and then reconciles them; ultimately, operators are creating native Kubernetes resources in the core API. So this is sort of the flow.
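The watch-and-reconcile flow just described can be sketched, independent of any Kubernetes client library, as a loop that drains events about custom resources and converges each one; every name here is an illustrative stand-in, not OLM or controller-runtime API:

```go
package main

import "fmt"

// event is a stand-in for a watch notification about a custom resource.
type event struct {
	Name string // name of the custom resource that changed
}

// reconcile is a stand-in for the operator's convergence logic: a real
// operator would read the custom resource's desired state and create or
// update native Kubernetes resources (Deployments, Services, ...) to match.
func reconcile(e event) string {
	return "reconciled " + e.Name
}

// runOperator drains the watch channel and reconciles each event,
// mirroring the control loop shown in the talk's graphic.
func runOperator(events <-chan event) []string {
	var results []string
	for e := range events {
		results = append(results, reconcile(e))
	}
	return results
}

func main() {
	events := make(chan event, 2)
	events <- event{Name: "my-postgres"}
	events <- event{Name: "my-kafka"}
	close(events)
	for _, line := range runOperator(events) {
		fmt.Println(line)
	}
}
```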
H
I included this just to show that a lot of the operators that are now going into people's clusters are, you know, production-critical. If you're running a web application and you want to back it with a Postgres database, and you decide to provide that via an operator, that database has to be running all the time and it has to be stable. So these operators that people are installing into their clusters are really fulfilling critical roles; people are really running critical infrastructure. So yeah, at a high level.
H
If you want to install a one-off operator into your development cluster and, you know, just write some YAML and apply it, that's fine. But when you're talking about large production clusters, you really want to have reproducibility and durability, and, you know, a better understanding of what's being put into your cluster. OLM also provides curation of what operators are available, sort of like the repositories that you would expect in Debian or in other operating systems: OLM has this idea of catalogs, which basically are catalogs of operators.
H
You can say, "show me all the operators available in my cluster to install," and then it'll respond and you can pick one. So it's very similar to dpkg or yum or those types of systems. ["Where's the catalog? Is it hosted on the cluster?"] Yes, yes; there are actually catalog pods running.
H
They have a gRPC connection open to an API service called packagemanifests, and you can say, like, "get packagemanifests," and that will reach out to those catalog sources and then return a list of all the different operators in there. So there are different catalogs: there's the community operators catalog, there's the Red Hat operators catalog, and you can make your own catalog. But essentially there's a curated set, at least inside the cluster, of available operators, which is very handy.
J
Okay, so this is Nick, helping out from OLM as well. Typically it's a resource called CatalogSource, in which we put a reference to a container image in a container registry. It gets pulled down and run as a pod, and that pod also surfaces that gRPC API. So all the data is baked into those images that you reference in these Kube resources.
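A CatalogSource along the lines Nick describes looks roughly like this; the image reference and names are illustrative:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog
  namespace: olm
spec:
  sourceType: grpc                       # served over the gRPC API mentioned above
  image: quay.io/example/catalog:latest  # catalog data baked into this image
  displayName: Example Operators
```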
H
Thanks; that's right. Okay, so OLM makes available these kinds of operators. It also provides a packaging format with metadata. Think about how you package an operator: it's something we're working on along with the SIG Apps delivery team, to find a common format. I think we're settling on the OCI format, packaging operators in images, but then how do we attach all the metadata and all the information that we want around the operator? So OLM provides a kind of packaging format, and it also provides install and upgrade safety.
H
As I said, operators in your cluster are providing critical infrastructure. You want to be sure that if they change versions, things remain running smoothly, and I think this is actually a key, and complex, area that OLM is trying to tackle: upgrade safety and dependency resolution. You can have operators that depend on other operators, and you can install a suite of operators into your cluster at once and upgrade them in line.
H
This is really the core functionality that OLM is providing. This is just a diagram of some of the CRDs that OLM works with; it's a pair of two separate operators that are managing these CRDs, which ultimately boil down to installing user-specified operators into the cluster. So yeah, at a high level, that's what OLM does. So, talking specifically about the multi-tenancy solutions that OLM provides currently.
H
Basically, this person is frustrated because there are inconsistencies in how CRDs are scoped and how they're applied, and I think this is one of the problems that we face often, and I know that other people in this working group face it as well: you can have conflicts between the different CRDs in the cluster, and also, how do you lifecycle these CRDs when there are new versions, making sure that the upgrade process is seamless?
H
So this is just a random person ranting on Reddit, but it's sort of a motivation for some of the multi-tenancy stuff that we have. And this is an email that Evan sent out (Evan's on the call) to this working group about a year ago, talking about our work with multi-tenancy. So we came up with this concept called operator groups, which I want to talk about on this call. But essentially we are working on,
H
You know, allowing users to independently install operators that are scoped to manage resources in their set of namespaces only, and trying to encapsulate and isolate those operators from other operators. When those operators manage the same APIs, this can cause resource contention and, as we saw, CRD contention. But essentially, this is an example of some of the work that we were looking at last year.
H
So we've been working on these multi-tenancy problems for a while now, in this group specifically. These are some of the considerations that we think about in OLM when we think about multi-tenancy and operators. Many operators can be installed in a single namespace; maybe that's the model that you're looking for. The operators can be in one namespace, and their operands, those sorts of things that the operators are watching for, like the custom resources, can be in other namespaces, in multiple other namespaces; they can be in any namespace.
H
But essentially, you have this sort of many-to-many relationship between where the operators are sitting and where their operands are sitting. As I said, operators can watch their operands across one namespace, several namespaces, or all namespaces, and I'll have examples of this coming up that make it more clear. Operators can depend on other operators.
H
OLM supports the idea of installing multiple operators together. Operators also have elevated privileges in the cluster, and not all users can install them. Operators change the fundamental behavior of the cluster, and the multi-tenancy Kubernetes currently provides isn't strong enough to totally encapsulate them, so you have to be careful about who's allowed to install operators and what actions they can take.
H
Another one is a single operator watching a set of namespaces: here you have one operator, and it's watching Kafka topics in only a series of namespaces, not all namespaces. And lastly, we have a one-to-one, where we have a singleton installed in the cluster. All of these configurations that I just went over are powered by something called an operator group, which is a custom resource that we came up with within OLM to
H
sort of help guide our multi-tenancy solution originally, and I'd like to talk a little bit about the operator group specifically to this group. As an object, it's fairly straightforward, in that it's a small object; it just has some labels on it that basically explain what namespaces the operator group is targeting and what operators want to be part of it.
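For reference, an OperatorGroup along the lines described here (the resource lives in OLM's `operators.coreos.com/v1` API group) looks roughly like this; the names and namespaces are illustrative:

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-operators
spec:
  # Member operators are configured to watch exactly these namespaces.
  targetNamespaces:
  - my-operators
  - team-a
```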
H
So this is the spec; I just want to show it to you to make the point that it's a simple-looking object, but the actual explanation of it is something we're going to go into based on the docs. Basically, what the operator group does is try to answer: when you install an operator, how does that operator know which namespaces to watch? If my operator is supposed to watch a couple of namespaces, how do I configure that at runtime? This is based on the operator group.
H
The operator group can inject into the deployment of the operator itself the set of namespaces that the operator is supposed to watch. Also: is an operator okay to install in this namespace? As a cluster admin, I may be okay with installing operators in one namespace but not in another namespace; this is again based on the operator group. And how does a user know whether an operator is watching a certain namespace? This is based on a custom resource in the namespace that the operator group is, again, propagating.
H
So the operator group is sort of behind the scenes, trying to enable this multi-tenant need that's inherent in OLM, based on what OLM is trying to accomplish. These are just the motivations of what the operator group is trying to accomplish. The best way to explain it is that we have a very nice doc that sort of goes over it.
H
If you want to pull in operator dependencies, for instance, the operator group sort of drives some of that behavior. And so, if you have, for example, an operator that supports all namespaces, but you install that operator into a namespace where the operator group is single-namespace only, then the operator installation will fail. So operator groups are a way to try to provide basic, soft multi-tenancy within the cluster. And so I think I'll continue.
H
Absolutely, yep, that was a public doc. Actually, sorry, the slides have the link right here: the operator groups design doc. It's a really comprehensive document, so I just wanted to link to it and say, check that out; I couldn't condense it down into a series of bullet points. ("That's cool, thanks." Absolutely.) So, okay: that's operator groups, our one take on sort of the multi-tenancy aspect of it. So what are some of the problems that OLM is dealing with?
H
Basically, one issue is that users don't expect to have to do up-front configuration of operators when installing them. If I see, you know, a Postgres operator and I want to install it in my cluster, I don't necessarily think about, okay, which namespaces is it watching and which namespaces do I want to install it in; at first, I kind of just want to install it. It's sort of like, you know, when you pip install a Python package: I'm not thinking that much about it. So that's one design issue with operators: they front-load some of the configuration.
H
That complexity is something users, and operator authors for that matter, don't necessarily want to think about. Another issue is that operators are not segregated enough from one another inside the cluster. Even if you install them in separate namespaces, their CRDs are global: they show up in discovery, and you can tell, as a user, whether they've been installed or not. And I think this goes back to that post
H
that person made on Reddit. The isolation between operators, and between things that install CRDs into clusters, in Kubernetes now is not all the way to the level where we can make the sort of promises around operator stability and behavior that we want. There's a lack of a first-class tenant model, which basically restricts access for developers, since, for all operators, generally our view is that cluster admins should be installing them, because they have a lot of power over what's going on in the cluster. And then again, the developer is the end user who wants to,
H
you know, make the custom resources and use the operator to fuel their application, which may be another operator that depends on those custom resources. So, in a nutshell, installing this CRD affects the entire cluster, and this makes operator dependency and lifecycle a lot harder. So these are some of the problems that we're encountering.
H
Yes, I think we've talked about that; I think Evan and I have talked about this recently, in the past week or two, and I think this goes into what some OLM solutions are. I think we're very interested in the virtual cluster and HNC projects, to think about doing this type of solution. I mean, Evan, you said you were looking at the virtual cluster solution, right? Yeah.
H
Anything that can sort of better provide a multi-tenancy solution is something that we're interested in, and in the long term we're very interested in the multi-tenancy working group and some of the solutions that can come out of it, because it'll make our work that much easier. So yeah. Sorry, go ahead.
D
Can you hear me? Yeah. If it weren't for the CRD issue, because CRDs are always global, it seems like the rest of it might not be so bad, because an operator is just a regular workload. And so, as long as the operator has some kind of native idea as to which namespaces it is supposed to be operating in, then, as long as you set up all of your RBAC permissions correctly, which could be part of OLM, OLM could basically restrict operators to different portions of the cluster.
L
Yeah, yeah, I think that's right; as you say, you face the same problem. So, since we're out of time, I'm just curious: does OLM check for conflicts between operators? Let's say two operators are managing the same pods; typically, that may cause harm.
L
What matters is whether the two operators conflict. An operator typically watches for a pod label, right, and if, for some reason, two operators are watching for the same label, then one pod may be managed by two operators. Externally, probably, there is no way to detect or tackle this conflict, because sometimes, if you install too many operators, people may not know there is a conflict, with operators working on the same thing; then the result is really just ad hoc.
J
If an operator provides the same APIs and watches the same set of namespaces, OLM will prevent you from doing that, if you're installing with OLM. Beyond that, if you're just creating, like, deployments that manage, or kind of conflict with, your resources and do some naughty things that you don't want, we don't detect that. I'm sure there are ways that there can be designs with some webhooks that prevent those sorts of problems, but what I just explained was the extent of what OLM does.
L
Yeah, I agree. I mean, sometimes it's just a developer making some mistakes, but I hope that, you know, at the high-level framework level, there is something that can help with, you know, this kind of conflict. Maybe it's just a wild idea, something that I haven't read about.
K
I think it's a really good point. You can kind of take this a little bit farther, though, right: it might be reasonable, like you suggested, to have an operator sort of own the management of a pod. It also might be reasonable for multiple operators to interact with the same pod, maybe managing different parts of it: one of them injects a sidecar, one of them adds a label, one of them does something else.
K
So there might also be cases where it's fine for two operators to manage the same piece of a pod spec, just at different times, and they hand off control. So I think it's a really interesting problem, and I don't think there's anything really trying to solve it right now, but yeah, it's definitely something we should start thinking about.
G
You mentioned having to manage the operator groups. For our clusters, we've been keeping a Helm chart going for creating all of the tenant-specific objects, so we recently just threw the operator group into that; every tenant that we create just gets one now. So that can be a partial solution to it. The working group here is making an operator for tenant namespaces as well, and it could end up going in there.