From YouTube: WG-Multi-Tenancy Bi-Weekly Meeting for 20220405
B
Yeah, sure, I can do it. Hey everybody, welcome to our regularly scheduled Multi-Tenancy Working Group for Kubernetes meeting. Today we have a pretty packed agenda, with some co-chair proposal changes, which everyone should have seen go out on the mailing list, and also Jim and team have some updates on the Kubernetes multi-tenancy docs proposal.

B
We also have an announcement, so I'll just start with the announcements and then I'll hand it over to Jim and team to talk about the docs. So, first announcement: the Hierarchical Namespace Controller, HNC 1.0, is now out. Big milestone for the project, super excited for the team, and if anyone's checking it out, using it, and has feedback...

B
Please let the team know. We're all watching GitHub issues and chat, so take it, kick the tires, and let folks know what you think. But the 1.0 milestone has been achieved, and a really big congratulations to Adrian Ludwin and everyone who's worked on it so far. And then the second update is that we are going to be nominating some new co-chairs for the multi-tenancy working group.
B
All four folks have been really engaged in the project for probably three years at this point, leading various teams, and then Ryan has been our most consistent cross-project contributor, which we really appreciate him for. So I sent out the announcement on the mailing list for this proposal to add new co-chairs, and we will be moving forward with that using lazy consensus. So if anyone wants to reply to the chain with a plus-one or feedback, please feel free to do so.

B
If anyone wants to give me any feedback offline, I'm definitely here. But if we don't hear any objections, then we'll be moving everyone into co-chair status at the end of next week on Friday, which is the 15th of April. So those are the two bookkeeping announcements. I do see that someone's in our Slack channel asking for the link for the Zoom meeting — it looks like someone's replied. Okay, cool, thanks.

B
What did I say? Always there to reply to the Slack message or pick up the PR, so thanks, Ryan. And yeah, cool — I think that's all of our bookkeeping for today, and now I'll hand it over to Jim to talk about the docs.
A
All right, thanks, Tasha. Okay, let me share my screen, and I'll also add the doc link into chat for those who might not have it. So, a few weeks ago — or I guess it's maybe been longer than that, a couple of months — we started this effort to propose an update to the community docs for multi-tenancy, because one of the key questions we get is... It seems like we have reached, like Tasha mentioned, with HNC and also with other projects...
A
We have reached some level of maturity and knowledge on multi-tenancy, but there's not much — actually, no — guidance in the community docs around multi-tenancy, so the idea was to propose an update there. So this is a working document which we will convert into a PR, hopefully fairly soon. I think we're almost there; we've completed most of the tasks.

A
I think we have a few pending items, and even while those are in discussion, we can go ahead and at least create a draft PR and start getting some wider reviews on it. So the idea for today was to just go through some of the main questions which I had listed in the agenda, and once we're good with those, I feel we can convert to a PR. There'll always be minor changes.
A
I'm sure — and hopefully — we'll get plenty of other good feedback, and we can continue to iterate. Before we dive into details, does anybody else have any general thoughts on how this has progressed so far, or where we are? Does everyone share the same feeling that we're in a good spot to start converting to a PR, after a few items have been resolved?

A
Okay, all right. So maybe just going down through the list — I had listed a few things, but feel free to bring up others which you feel are worthy of discussion right now. And of course, those few minor things we can continue changing offline and asynchronously.
F
Yeah, can you hear me? Yes, okay, cool. So I did make some updates based on your feedback, Jim. The way I was thinking about this is that Kubernetes has this whole concept of storage classes versus persistent volumes and PVCs — so the claims that you could attach to a namespace.

F
I thought, how about we use some of those constructs? Because this whole section is talking about control plane isolation and how we can use a namespace-per-tenant strategy, I thought the PVC-per-tenant claim that you could attach to a namespace could be a good story to tell. Because you're right — I was reading through your comment.

F
You could have an app that is inherently multi-tenant, and you could talk about things like sharding and other isolation strategies when it comes to storage, but that's not a Kubernetes story, right? What we're doing here is trying to tell the Kubernetes story, from that angle, so I thought maybe this would be a good topic to talk about. I'd love feedback, obviously, on what others think, but that's essentially what I was thinking.

F
So you could take a claim — a PVC — and attach that to a namespace, and if you're following a namespace-per-tenant strategy, you can assign some dedicated storage for that tenant, or for the workloads or pods running within that namespace dedicated to a tenant. So that's the whole idea, again.
A
Yeah, that makes a lot of sense, and I think it's good to outline some best practices. I recall there were some prior discussions on this, so it seems like one option could be that if you're running in a multi-tenant environment, you have a storage class per tenant, which prevents any chance of somebody else reclaiming a PV, right? The other...

E
Generally, we've recommended dynamic provisioning with a Delete reclaim policy as the preferred approach, and then a storage class per namespace is kind of the fallback if you can't do dynamic PVCs or PVs.
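The recommendation above — a dedicated per-tenant StorageClass with a Delete reclaim policy, consumed by PVCs in the tenant's namespace — can be sketched roughly like this. The provisioner, tenant name, and sizes are placeholders, not from the meeting:

```yaml
# Hypothetical per-tenant StorageClass; the CSI provisioner is an example.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tenant-a-storage
provisioner: ebs.csi.aws.com        # example CSI driver
reclaimPolicy: Delete               # PV (and its data) removed when the claim is deleted
volumeBindingMode: WaitForFirstConsumer
---
# A claim in the tenant's namespace that uses the dedicated class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: tenant-a
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: tenant-a-storage
  resources:
    requests:
      storage: 10Gi
```

With `reclaimPolicy: Delete`, a released volume is never left around for another tenant to rebind, which is the safety property discussed next.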
F
I was looking at the whole storage-class-per-namespace idea, but I couldn't find good documentation. So are we able to associate a storage class with a namespace?

A
But that seems like — yeah, I guess the question is, if someone's using Delete — and that would have to be enforced somehow — but let's say they use the reclaim policy of Delete, that seems like it's safe enough, right? Or does anyone else see any other risks with running with multi-tenancy, and any chance of misuse of storage?

E
I mean, obviously, yeah — under the covers it's a volume mounted to a node at some point, so there's risk there.

E
So it's really going to depend on the other policies you have in place to prevent privilege escalations, hostPath or host mount paths, and all that other stuff, right?

A
So the guidance we're giving is: use dynamic storage, do not use any host mounts, and even if you have /tmp and things like that, you want to make sure of what you're mapping it to.

E
So you want to try to limit that, right? And that's where it gets hard, because most security auditing tools that people use leverage a lot of this — Twistlock does all of that, right? And so you have to be mindful that you do have some risk exposure regardless, from that perspective.
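The speakers don't name a specific enforcement mechanism for blocking host mounts and privilege escalation, but one built-in option is Pod Security admission, where labeling a tenant namespace with the `restricted` profile rejects hostPath volumes, privileged containers, and privilege escalation. A minimal sketch, assuming a hypothetical tenant namespace:

```yaml
# Hypothetical tenant namespace; the "restricted" Pod Security Standard
# rejects hostPath volumes, privileged containers, and privilege escalation.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```

Policy engines like the ones discussed later in the meeting (OPA Gatekeeper, Kyverno) can enforce the same restrictions with finer-grained rules.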
G
The thing is, for storage isolation, whether we're using a sandbox or not using a sandbox makes a difference.

E
Yeah, and realistically, if you're using dynamic storage, it's going to provision it and attach a PVC immediately, and when the PVC goes away, the reclaim policy will kick off and delete it, so the risk of anyone stealing the data between pod restarts is minimized. But if you're not using dynamic storage — dynamic provisioning — you're exposed, and that's where the storage classes come into play. But it's still files on a disk somewhere, right?
G
I would actually suggest moving the storage and networking isolation parts into the sandboxing part, basically, because they are so different with and without sandboxes. And I don't think in this document we are strongly — are we encouraging sandboxes? Or do we say it is okay if you don't have sandboxes?

A
Yeah, I don't think we've taken a strong stance to say that sandboxes are recommended or anything like that. I think the reality is — and maybe others have seen use cases also — a lot of folks, especially SaaS providers, are running without sandboxed containers and they are using namespace-based multi-tenancy.

H
Yeah, hey Jim, this is Jeremy. Sandboxes should really be used if you're going to run untrusted code — code that you haven't had an opportunity to scan or review.
A
Yes, so one view is: okay, is storage and networking really data plane or control plane? I think what we're talking about here is, if you want a namespace per tenant, what kind of Kubernetes control plane — Kubernetes API — constructs can you use for configuring some storage isolation? So yeah, I guess it gets a little bit fuzzy, and I think...

G
Yeah, because if you are talking about, say, dynamic storage or some host mounts, those are all data plane things. But if you just talk about the PVC object and the PV object, okay, those are control plane things. So either we make it clear that we just talk about control plane objects in the API server — how do you deal with those — and forget about exactly how that API is implemented by whatever PV provider, and that's also fine. So maybe we should make that clear.
D
Hey Jim — yeah, I agree with you. Basically, if we limit the discussion in that particular section to PVCs and storage classes, and a reclaim policy of Delete, then it will still fall into what you were saying, Jim.

D
If you want a namespace per tenant, then what are all the things we need to ensure from a storage, network, and RBAC point of view? But if we start going down the path of data plane isolation, then that starts blurring the boundary between what we have in sandboxing and here. So we might want to just restrict this part to PVCs and storage classes.
A
I don't know how the Kubernetes docs name it — like PVCs, PVs — but we can follow the same naming, whatever that section is named, like "configuring storage" or something like that. Okay.

D
Yeah, although, Jim — wouldn't those apply whether it's namespace-based or not? I mean, these are...
D
To that point — maybe that point you had written in the comments, I thought it was good — where all those four, or potentially three or four, whatever it will end up being, we move into "advanced isolation" or "advanced concepts", something like a completely separate section. I felt those cut across the board, whether it's multi-customer or multi-team.

D
And like the last section, "additional considerations" — I had put that in for SaaS providers, but that is sort of the bucket I was thinking is the place where we can have all these additional things. Quotas are probably still namespace-specific, but for API priority and fairness there is that one mention of...
D
If you have logging and monitoring as services which cut across the board, then how do you give priority to those pods so that they get handled with a higher priority, right? So those actually are not specific to namespace-per-tenant; they can apply to multi-team, multi-customer, even probably virtual control planes. But also those pods which are for the super-cluster, or API requests which belong to the super-cluster — they need to be handled with higher priority, right?
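The "higher priority for cluster-wide service requests" idea maps to Kubernetes API Priority and Fairness, where a FlowSchema routes requests from a chosen subject to a dedicated PriorityLevelConfiguration. A rough sketch — the names, numbers, and the monitoring service account are illustrative, not from the meeting:

```yaml
# Hypothetical: give API requests from a cluster-wide monitoring agent
# their own, better-provisioned priority level.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: PriorityLevelConfiguration
metadata:
  name: monitoring-priority
spec:
  type: Limited
  limited:
    assuredConcurrencyShares: 50
    limitResponse:
      type: Queue
      queuing:
        queues: 64
        handSize: 6
        queueLengthLimit: 50
---
# Route requests from the monitoring service account to that level.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: FlowSchema
metadata:
  name: monitoring-agents
spec:
  priorityLevelConfiguration:
    name: monitoring-priority
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: metrics-agent        # illustrative name
        namespace: monitoring
    resourceRules:
    - verbs: ["*"]
      apiGroups: ["*"]
      resources: ["*"]
      clusterScope: true
      namespaces: ["*"]
```

The same mechanism can be inverted — capping tenant traffic at a lower priority level — which is how it protects super-cluster requests.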
D
If that makes sense — like, now we are going to change "network isolation" to just "network policies", but if we were to still look at DNS from that particular lens, it is still part of the networking, right? It probably doesn't have to be a separate subsection on its own.
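The "network policies per namespace" framing discussed here usually starts from a baseline rule that only admits traffic from inside the tenant's own namespace. A minimal sketch, with the namespace name as a placeholder:

```yaml
# Hypothetical baseline for a tenant namespace: deny all ingress except
# from pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: tenant-a
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector: {}        # any pod in tenant-a; cross-namespace traffic is dropped
```

Whether DNS gets its own subsection or folds in here, cluster DNS traffic typically needs an explicit exception on top of a default-deny egress rule.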
F
Yeah — so before we jump into DNS, to your earlier point: I would like to keep the resource quotas and the limit ranges, the whole quota section, under the namespace part.

D
With quotas — yes, limit ranges and quotas, those make sense there, okay, because...
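Per-namespace quotas and limit ranges, as kept under the namespace section here, look roughly like this — the numbers and names are placeholders:

```yaml
# Hypothetical tenant quota: caps aggregate CPU/memory and object counts.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    persistentvolumeclaims: "10"
---
# Defaults applied to containers that don't set their own requests/limits,
# so the quota above can actually be accounted against every pod.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-defaults
  namespace: tenant-a
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
```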
D
That was my — sorry to interrupt, Jim — I just wanted to make one additional point regarding that last section: rather than calling it "additional considerations" — and I agree with you and Ranjit that people should not think that those are optional — they're...

D
...it makes perfect sense, so naming the title of that section will still be very important. "Advanced isolation" is a good title; I'm fine with that.

D
If we all agree with that — then we can also talk about that last point, and separately, Jim, I'd like to talk about the operators and how they actually are a big concern when we talk with teams — like, what operators to use and all. But that can also just fold into that section, so it will be a list of like five things, and maybe something else that doesn't neatly fit into these other sections.
D
We can still use that additional "advanced isolation" section to discuss and outline these things, because we want to be comprehensive without missing anything which all of us have seen in the field, and it will be good for us to just capture it in some place. Sure, okay, so let me stop, and we can continue with the earlier flow. I didn't mean to interrupt you.
A
Like we're saying, there are security settings we want folks to consider, so we could say "common considerations" or something like that — we'll find some proper name for it. The other option is we move some of that into "key concepts". We could move security into key concepts so that it gets prominence there, even before talking about control plane and data plane isolation.
D
Actually, that's a good point, Jim. The main problem with that, I think, is that it's just too much detail before building up the whole story. Right now, the way it is flowing, the use case is very high level, then key concepts — okay, everyone...
A
You know — I guess the question is, what else do we want to say about storage, then? So again, we could revisit sandboxing, but we have covered sandboxing and node isolation here, yeah.

D
I think, Ryan, it was you, right? You were mentioning this — talking about...
D
I think, Ryan, you should just take a look at the part which is currently there in sandboxing, and then what additional things you think should go as part of our data plane storage solution.

D
So what do you all think — should DNS be its own subsection, or can we fold it as part of network isolation? Or whatever we want to call it — network segregation — no, we are not using that word, but...
D
Yeah, exactly — that's what I was thinking. Because we decided to rename that, it is probably a different issue now; okay, it can stand on its own.

F
So there's just one thing I sort of purposely stayed away from when writing the network isolation part, which is bringing in service meshes, because that can open up a can of worms, right? So I don't know — I just wanted to get the thoughts from the group here: is that worth bringing in, or should we just leave that out?
D
No — actually I'd like to hear, Ranjit, what sort of specific problems you have encountered. In the sense that what Ryan is saying is true from the technology point of view — service meshes will have their own network policies and whatnot — but when it comes to configuring a cluster for multi-tenancy, there is something that you were thinking is a problem, right? That's why — so I would be happy to actually hear what you have in mind there.
F
Well, yeah — some customers I've spoken to, especially the ones that are building a SaaS multi-tenant environment, bring in a service mesh to control and restrict traffic more at the infrastructure layer, right? And it's something that always comes up, but I wasn't sure if that was the focus of this document, or whether you even bring it in.
D
Actually — because you mentioned that, I'm reminded of something which I wanted to bring up to the group, and also specifically ask Jim. See, OPA and Kyverno — you know they are good building blocks, or tools, that can also help with some of these restrictions, and there is a passing reference to policy engines in the doc, separately from service meshes.

D
So the first question is: should we not actually bring at least passing references to those tools in here, and how they help with the multi-tenancy configuration — if there is some guidance? Because ultimately, for cluster admins, if they know that, okay, I can use OPA or Kyverno, that's a good thing for them, right? It could be comprehensive.
A
All right, yeah — so let's answer the service mesh question first and then we can discuss policy engines as well. I think, again, if you're creating an "other considerations" or some other sort of section, it's fair...

A
...to mention service mesh there and say that it can be used for network policies. I don't know whether we then have to go into details like, okay, some service meshes operate at layer seven versus layer four.

A
And if we're putting this in the Kubernetes docs, the closer we stay to the Kubernetes API, the better, right? Because there could perhaps be other higher-level docs which explain other things. And obviously, even with service mesh, there are things like Network Service Mesh, and then Cilium operates with eBPF — so there are so many different implementations.
E
I guess the only suggestion I'd make is: maybe we could even make it more generic and say, you know, admission and mutating webhooks.
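Framed generically, as suggested, the policy-engine hook point is a validating (or mutating) admission webhook registration — OPA Gatekeeper and Kyverno both register themselves this way. A rough sketch; the service name, namespace label, and rules are illustrative:

```yaml
# Hypothetical registration: send every Pod CREATE in labeled tenant
# namespaces to a policy service for validation before admission.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: tenant-policy
webhooks:
- name: pods.policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail            # reject requests if the policy service is down
  namespaceSelector:
    matchLabels:
      tenant: "true"             # only namespaces labeled as tenant namespaces
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE"]
  clientConfig:
    service:
      name: policy-webhook       # illustrative service
      namespace: policy-system
      path: /validate
```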
D
Yeah, yeah — that's fair. In fact, it looks like Adrian is not here today, right? Because I had asked him to also write a blurb on HNC, which can also go in that section. So basically, that's our section where we have all these — for lack of a better word, I'm just calling them "advanced concepts", but they are in some way advanced concepts, right? Not everyone will need those; at the same time, they are important.
A
Right — so for tools, like projects and tools, we can have a separate section, and we can move it towards the end, and we'll have to check. I have not seen this in the core Kubernetes docs, but other docs do include a section where any project can add themselves, and there's a disclaimer about third-party projects.

A
Things like that — so we can follow those guidelines and just have a section on projects with links to, you know, CAPI Nested, HNC, and then any third-party tools — like vcluster, Capsule, Cloud Arc — that want to be listed. But we'll have to check on what the allowed guidelines are and how to do this — where things can be listed, but there's no sort of...
D
I think I've seen that for networking — the CNI docs.

D
Okay — I know we are sort of towards the end, but Jim, I wanted to ask, or discuss with the group, because we had the specific question: the last point there.
D
Yeah, so let me share with the group what that point is about and where it comes from. See, all of us understand what operators and controllers are, and what we have seen when we talk with customers is: invariably there is some operator — one or two, in fact sometimes more than two — in their clusters, and they are trying to offer a SaaS. Their operator is just part of the story.

D
That's a very typical example, and there are such containerized applications that we are seeing more and more every day now, where the storage is being managed by the operator, and the application has its own — it may or may not have an operator. So from a cluster admin's point of view, the trouble that they seem to face is: okay, I have all these MySQL operators out there.

D
Some are open source, some are proprietary, but how do I know which one is best for my needs of creating a SaaS? And from that, when we started working with such teams, we realized that there are some basic things that an operator needs to support, and those are the five things that I have listed out there. I won't go into the details of those, and as I have mentioned in the comment, Jim, I will shorten that, but I wanted to wait for this discussion to happen before I take a stab at it.
D
But the main point was: if we bring that out as well, then it just helps the cluster admin when they are choosing operators. And I agree with your point that ultimately the operators are for providing some service, and the question comes up — I mean, the SaaS model is where this becomes more relevant.

D
How should an operator behave? Now, we are not going to guide them on operator development — that's not what this document is meant for — but as a cluster admin, when they have to choose existing operators from, let's say, a marketplace, or even some operator repository...

D
They can use this guidance to check which operator will satisfy their needs. So that's the background, and the team — you all can read up on that. But those were my thoughts: that if we are having a separate section for, say, advanced concepts or advanced isolation, then we can put this discussion as part of it.
A
Right — thank you, that helps explain and frame what's described here. I guess on my first reading — and maybe this was based on the prior versions which you're waiting to update — this seemed very much focused on operator authors. So if we can flip that and say...
D
That's the sentence which can mislead, right? I have written that, but it was a while back. So yeah, I will shorten this, if you are okay with it, and if the team feels that this fits the kind of document that we are creating and will add value, then definitely I will work on it.
A
So my related question, though, was: is this guidance for operators specifically, or should we generalize it for any shared service? So is there something sort of unique or specific that we — and of course, in that section we can talk about operators also, but...
E
I mean, I think it does, because CRDs are cluster-wide. A shared service, if it's not using custom CRDs, kind of by default has to be namespaced in some regard, right? And so then you have namespace isolation, if you're following everything else. But from an operator perspective, it largely has to be cluster-wide. So I guess you could just say "cluster-scoped services".
D
Let me ask you, Jim — I roughly understand what you are implying, but I had very specific guidance in mind as part of this section. To answer your question: ultimately, we want to provide concrete guidance, and an operator is the vehicle, or the mechanism, through which a service gets created or is enabled in the cluster, right?

D
So if you are using a MySQL operator as part of, let's say, a WordPress service, then that MySQL operator needs to do one, two, three — or needs to have these things: one, two, three, four, whatever. So I think when I rewrite this, I will give that context to the discussion: where the operators are, what roles operators play as part of a shared service, and then the specific guidance that we recommend — that kind of structure we are liking, right?
D
So I will rewrite this in that flavor, where the guidance is very specific: like, if you are using a MySQL operator, for example, make sure that the controller or the CRD spec has certain things defined. For example, for guaranteed scheduling, you need to make sure that there are resource requests and limits defined; otherwise Kubernetes may evict your pod, and that can just crash your WordPress service — that particular instance, obviously.
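The "guaranteed scheduling" criterion here corresponds to the Guaranteed QoS class: every container sets resource requests equal to limits, which makes the pod the last candidate for eviction under node pressure. A minimal sketch of the kind of pod template an operator's CRD spec might render — the image and numbers are placeholders:

```yaml
# Hypothetical pod rendered by a database operator.
# requests == limits on every container => QoS class "Guaranteed".
apiVersion: v1
kind: Pod
metadata:
  name: mysql-0
  namespace: tenant-a
spec:
  containers:
  - name: mysql
    image: mysql:8.0              # example image
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "1"                  # equal to requests -> Guaranteed QoS
        memory: 2Gi
```

A cluster admin evaluating competing operators can check whether the operator's generated pods set these fields, per the selection criteria being discussed.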
D
So that's true — that's actually a very valid point and a valid question. But because we have already sort of talked about those things as part of, let's say, namespace isolation — where, as a cluster admin, if you have control over the traffic, say through a policy engine or something, to apply those policies no matter what — then yes, that point has been covered. But I'm specifically now referring to the situation where the control plane is getting extended, right?
D
An operator will add a new API, and that API is not going to expose ports or anything. It will be a very high-level abstraction, and behind the scenes the controller is going to create whatever resources it wants as part of handling the CRUD operations on that new API. So now the question is: what should we look at, as a cluster admin?

D
What should I look at when I see two competing MySQL operators which I want to install in my cluster? And the guidance will be specifically around that point, where we will tell them: okay, look at some of these things that your controller is doing.

D
It might require them to analyze the operator's code — just eyeball it if possible, just look at the spec and all of that. So this section will definitely shrink, but that's where I felt that we might be able to hand-hold some of those decisions for them.
D
See, Ryan — I understand that, but still the question is: they are not the same thing, right? Like, when I add a new — yes, broadly it is all control plane isolation, but now we are talking about control planes that are written by a third party, whereas control planes and...
E
Yeah, but I'm saying any shared service, right? That's not necessarily an operator — because, I guess, technically, I don't know if I would consider Kyverno an operator, but it's essentially implementing CRDs that have reconcilers.
D
Point taken — so yes, these days, as we have heard the team and everyone talk about, everything will be a custom resource eventually; everything is moving in that direction, and everyone has their own custom resources, right?

D
What I'm specifically concerned about — and this is again coming from the discussions in the field — is that there are certain operators which are more, just for lack of a better term, what we can call a stateful service. Let's say, like Cassandra...
D
...Elastic and, of course, MySQL — the ones we have studied. They have a very specific role, right? They define a custom resource whose only intention is to provide a high-level abstraction to create instances of the underlying service, and it's not a service which is going to be used across the cluster; it will be an instance of MySQL which will be used as part of a particular application stack. It could be whatever application you are using.
D
I mean, what is it like — Kyverno, or OPA? OPA's main role is not that, right? As against that, the main role of a MySQL operator is to actually create, behind the scenes, everything that is needed to run a MySQL instance in Kubernetes. So that's the difference I feel — even a service mesh has its own CRDs, right?
D
So if we take a broad brush and don't call this out, I feel that we will miss out on providing good guidance there, because they are not the same things. Just saying that all the cluster-wide services need to have all these things anyway — we won't be able to finally tell our readers what specifically needs to be looked at when they are choosing something like an operator for a shared storage service.

D
I mean, not shared, but a storage kind of service.
G
Yeah, I think the recommendation here actually highlights the things that we have discussed in previous documents. So is it okay to say: if we write an operator for a multi-tenancy scenario, how about we try to automate the things that we have discussed? For example, for guaranteed scheduling — is it possible for this operator to have something like a webhook, so as to automatically patch the requests for the pods of that customer?

G
Basically, make everything that you list an automation for the operator. So, for example, the operator can automatically set up the network policy for the tenants. I would say that kind of thing makes a little bit more sense — just as a recommendation, not enforcing everything that you have mentioned, because the things you mentioned are kind of specific to a particular use case.
D
Yeah, and that's what I had originally called out: these are the recommendations coming from the field for the SaaS kind of use case, not the multi-team one — we haven't seen it so much there, though they might arise in that scenario as well. So, see, I'm not suggesting — first of all, I'll be rewriting this. The main point, to answer your question: yeah, some of these things can be automated, yes. And to answer Ryan's question: those can apply across the board for every service out there.

D
The main point is, today in the field, what's happening is there are operators which are doing certain things, right?
D
For cluster admins — we are not recommending or telling them how to automate things. We are helping them choose one operator or another based on certain criteria that they can easily check, to maintain their multi-tenancy posture. Of course, all of these things can be done through OPA; probably Kyverno can do it. We have an admission controller as part of our platform which does many of these things, and yeah.

D
We don't actually mention how to automate them here, because it is mostly about: look, these are third-party controllers that you are adding to the cluster, and they need to follow certain things. That's the whole point.
G
Okay, yeah — my worry is for existing operators. If we write it this way, are we saying that all the existing operators are not multi-tenancy ready? Maybe they are, but I just don't want to deliver that message.
D
Then they have to choose an operator for something. And for this particular discussion, I am not going to call out Kyverno as an operator from that perspective, because it's more an admission controller doing the policy-engine work — but something like, yeah, MySQL, and a lot of other services, right — Mongo, etc.

D
So yeah — none of the operators are multi-tenancy ready or not ready; as a team, we cannot make that decision. I'm just saying that we can help cluster admins with some good criteria, or guidance, on what to look for when they have to choose certain operators as part of their control plane and add new things from third-party repositories into their cluster. That's all — that's the main point.
A
So that way, it may flow a little bit better to say that, as a cluster admin, you're going to be configuring some required services like DNS; there are optional shared services, perhaps like OPA Gatekeeper or Kyverno; and there are other operators you might configure for per-tenant services, right? And we can cover all three and provide some high-level guidance on how to go about selecting or evaluating these services and what they want to put into their cluster.
A
I'm not suggesting we do — I'm saying, if we're talking about operators, it seems like we should also mention something about other shared services which are not driven through operators, right? Because there are several examples, like Velero. So if I want to install Velero, is it multi-tenant? Is it not?
D
No, that's a great question, and that's the reason we are having a separate section — shared services can probably have its own subsection there. The reason I'm resisting that particular point is because the guidance that we have learned is more specific to the operators that we have seen and worked with in the field, and questions like the one you raised are valid questions — like Velero: is it multi-tenancy ready or not? Honestly, I...

D
...how the guidance that I am going to give later on relates to Kyverno — I won't be able to be confident about that. So rather than diluting the whole message, maybe what we can do is — anyway, we are going to have a separate section on advanced isolation, right? So we can have shared services like Kyverno and Velero there, and then maybe you can take a stab at writing about those, and it can be a very specific section. But let me first take a stab at that.
A
Again, I feel we're pretty close to getting this PR ready, so it's exciting to see it come along.

D
Jim, just one last point: the Zoom link that I used from our main document — is this not that Zoom link? I think so, because nobody was there on that particular...