From YouTube: Kubernetes Multi-Tenancy Working Group 20210601
Description
Regular bi-weekly meeting
Agenda:
Lukas Gentele: Loft has open-sourced their virtual cluster technology and would like to talk about eventual convergence with the virtual cluster project
https://github.com/loft-sh/vcluster
Harsh:
* Issue: https://github.com/kubernetes-sigs/hierarchical-namespaces/issues/41
A
Hi everybody, and welcome to our regularly scheduled Multi-Tenancy Working Group for Kubernetes. Today we have Lukas on; he's going to talk about Loft and the virtual cluster technology that they're open-sourcing, and then Harsh is going to go over an open issue in the Hierarchical Namespaces project. So Lukas, would you like to kick off?
B
Sure, thank you so much. I think last time we spoke was in December, so it's actually been a while. Last time we talked about the virtual cluster implementation that's part of our commercial product called Loft, and when we last spoke we had plans to open-source it, but it wasn't open source yet, so I guess we couldn't dive as deep into what we actually built under the hood as we wanted to share with the SIG. So yeah, a couple of weeks ago...
B
We did it and actually open-sourced our implementation. You can find it on vcluster.com; there's a GitHub repository as well, and we're happy to share any insights and see where we can potentially join forces - work on different things together that both projects can build on - because I think there's a lot of interest in virtual clusters, and it's probably going to keep growing over the next...
B
...maybe 12 months or so, because there are a lot of use cases for virtual clusters, and I assume there will be other projects appearing as well, with certain opinionated views on what a virtual cluster should look like and how to actually implement it. I think my CTO Fabian is on the call as well. As Loft Labs, we just want to be part of that journey and essentially contribute wherever we can.
A
Awesome, yeah. I know that in the original conversation we were having, there was a lot of interest in seeing vcluster and how it might align with the upstream virtual cluster work that we're doing. So Faye, did you have any thoughts about how the two projects might work together?
C
Lukas and I and Fabian had some offline discussion in the Slack channel. I believe that, architecture-wise, from the user perspective we provide the same user experience, in the sense that the user gets a kind of intact Kubernetes cluster for their workloads. So both of us focus on reducing the integration cost and making it a black box supporting any user application - operators, CRDs, et cetera.
C
So the whole difference is in the implementation. The biggest challenge for combining everything, I think, is the namespace. When we were designing virtual cluster, I was thinking of keeping the API compatibility as much as possible. That's the reason I pretty much chose to sync every namespace...
C
...every namespace in the virtual cluster to the super master. Loft does it a slightly different way, in the sense that they combine the objects from one tenant into one namespace. So this is the biggest architectural difference. If we want to combine efforts, I would say this is the biggest blocker.
C
So Lukas, do you have any thoughts - is this completely necessary? I don't think making the virtual clusters support both modes is the right path, because it would complicate a lot of things and I don't see that it's really necessary.
B
Yeah, I completely agree with what you said - that's probably the biggest hurdle to actually sharing things. I see advantages to both approaches, essentially. To kind of summarize my thoughts: like you said, essentially creating a namespace in the underlying host cluster for each namespace in the virtual cluster is closer to the Kubernetes API way of doing things.
B
The reason why we decided to bundle all resources created in a virtual cluster into one underlying namespace - essentially mapping them to a single namespace - is because we wanted to have the virtual cluster be more self-contained.
B
That's also one of the reasons why we don't require any kind of special privileges to deploy the virtual cluster. So our approach - and I guess that's historically where we're coming from - is that we're building a product that essentially lets organizations create a self-service experience on Kubernetes for their engineers, right?
B
So essentially, if you're an engineer, with Loft you can spin up namespaces in shared Kubernetes clusters, and then once you hit the limits of a namespace and you need to do things across namespaces - but your network policy doesn't allow you to do that, because it isolates each namespace - then you're running into issues, and then it's kind of nice to be able to spin up a virtual cluster, and you can do that within that restricted namespace.
B
But I do see your point about certain advantages - especially regarding network policies, right? It's going to be pretty hard for us to reflect network policies when we're mapping to a single namespace, whereas for you it's basically just having namespaces with different names, and you can still have network policies that regulate the traffic between those namespaces in the underlying cluster.
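The one-to-one versus one-to-N mapping the two of them are comparing can be sketched in a few lines. This is a minimal illustration, not code from either project; in particular, the `-x-` name-encoding scheme is an assumption made up for the example.

```python
# Minimal sketch of the two syncer strategies discussed above.
# Not code from either project; the "-x-" encoding is illustrative.

def one_to_one(virtual_ns: str, name: str) -> tuple[str, str]:
    # Upstream-virtualcluster style: each virtual namespace is mirrored
    # as its own namespace in the host (super) cluster.
    return virtual_ns, name

def one_to_n(virtual_ns: str, name: str,
             host_ns: str, vcluster: str) -> tuple[str, str]:
    # Loft-vcluster style: every virtual object lands in the single host
    # namespace the vcluster runs in; the virtual namespace and vcluster
    # name are folded into the host object's name so the syncer can map back.
    return host_ns, f"{name}-x-{virtual_ns}-x-{vcluster}"

print(one_to_one("team-a", "web"))                  # ('team-a', 'web')
print(one_to_n("team-a", "web", "vc-host", "vc1"))  # ('vc-host', 'web-x-team-a-x-vc1')
```

The one-to-one form keeps host-side features such as per-namespace network policies usable, while the one-to-N form keeps the virtual cluster self-contained inside a single restricted namespace - which is exactly the trade-off being discussed.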
C
Yeah, I see your point. We're probably trying to solve things from different points of view. My first thought about virtual cluster was that I was trying to hide the supercluster from the tenant cluster as much as possible - that's the reason. But you probably went a different route: people were accessing the underlying cluster, they saw its limitations, and they're trying to expand that capability.
B
Yeah, I mean for us it's more the use case. Typically folks have access to their namespaces - that's kind of the self-service experience that our product provides - and then when they need more than just the namespace, they spin up a virtual cluster inside of it, but they...
D
Would hierarchical namespaces be able to help here? Because they do support a self-serve model where you could spin up some more namespaces. The naming wouldn't be correct - you'd have to have some kind of auto-generated name - but if there's a mapping layer, which I assume there must be, you could do that without cluster permissions.
B
Similar in terms of the mapping to what hierarchical namespaces does?
D
And it supports a creation and deletion model that doesn't require cluster-level privileges. You have something similar in Capsule, right? Do you not?
B
I think Capsule is a different open-source project, solving the same thing that kiosk does. I think what Capsule does is, mainly, they're an operator and they have CRDs, whereas we solve the self-service problem with an API server extension. So we basically...
B
Exactly - we add a resource called Space, and although you don't have the permission to create namespaces, you do have the permission to list, create, and delete the Spaces that you own, essentially.
D
I see, okay. It's a thought: if you wanted to reconcile the two virtual cluster models, hierarchical namespaces might be able to fit there - if you need to map to a real namespace on the underlying cluster but you don't want to have cluster-level privileges. I think that might be something worth looking into. I'm obviously there to assist if you want to do that, but I don't know enough about virtual clusters of either variety to really lead that.
B
That's probably something I would need to look into myself, because I actually have not played with the hierarchical namespace controller yet. I'd probably have to play around with it first to know if it could help in that direction.
D
Yeah, okay. There's a quick start - look up the concept called subnamespaces. Subnamespaces are how we handle the self-service model. It's pretty straightforward; there are some weird edge cases, but the basic idea is not too complicated. So yeah, just give me a shout if you have any questions. - I'll do that for sure, thank you. - Cool.
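The subnamespace idea can be sketched roughly as follows - a simulation with made-up structures, not HNC code: the user only needs a namespaced write (the "anchor"), and a controller that does hold cluster privileges turns anchors into real namespaces.

```python
# Rough simulation (made-up structures, not HNC code) of the
# self-service subnamespace model described above: users create
# namespaced "anchor" objects; a controller with cluster privileges
# reconciles them into real namespaces.

cluster_namespaces = {"team-a"}           # cluster-scoped state
anchors: list[tuple[str, str]] = []       # (parent_ns, child_name)

def create_anchor(parent: str, child: str) -> None:
    # A namespaced write: no cluster-level permission needed by the user.
    anchors.append((parent, child))

def reconcile() -> None:
    # Only the controller touches the cluster-scoped namespace set.
    for parent, child in anchors:
        if parent in cluster_namespaces:
            cluster_namespaces.add(child)

create_anchor("team-a", "team-a-dev")
reconcile()
print(sorted(cluster_namespaces))  # ['team-a', 'team-a-dev']
```

This is why D suggests it as a fit for virtual clusters: the mapping to a real host namespace happens without handing the tenant any cluster-level permissions.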
C
So, essentially, for the admin of a virtual cluster - do they want to have the capability of installing CRDs?

B
Oh yeah, definitely. I mean, there are a couple of use cases for virtual clusters as we see them, and one of them is obviously doing things across namespaces.
B
Even though you're running in an isolated environment, where each namespace is kind of on its own. And another one is everything related to CRDs, whether that be... yeah.
C
Okay, another question - sorry, I probably should have looked this up - but how do you guys handle the nodes? Do you expose everything, or just some of the nodes?
B
That's actually a super interesting question. By default, we'll just show you one node from the underlying cluster, and there is the option to also provide a node selector to reflect multiple of the underlying cluster's nodes. Regarding nodes, you can actually add a lot of complexity, but we're just at the start of this. There are a lot of feature requests about extending that capability.
C
Yeah. So in the current virtual cluster, what we're currently doing is expose any node that has a virtual pod running on it. Basically, if you have a pod running on a node of the underlying cluster, we expose that node into the virtual cluster. The reason I want to do that is to keep the node semantics as compatible as possible - so if you set pod affinity or anti-affinity, you will see the corresponding vNodes in the VC.
C
That's my own approach, because I was trying to solve the problem from the compatibility perspective as much as possible. That's also the reason I want to ask - because I do see a lot of people ask a lot of things, but it's all about the usage model: do your customers, or do you, have a use case where they want to run their own scheduler instead of the default one?
B
Oh no, we haven't heard any requests in that direction. We have heard requests about, you know, tainting nodes or faking capabilities of nodes and stuff like that.
C
And we GC everything. So if your pod is gone, your vNode will be gone in a few minutes. I want to make it kind of like an elastic experience in terms of capacity - more of a serverless kind of thing. This follows my philosophy of modeling things: I want to model the supercluster as a pod resource provider as much as possible.
B
Interesting. And is there a way in the syncer to kind of enforce...
C
Yeah, we can have certain nodes. In the open-source version we don't do that, but honestly, internally we did something like this. It's not the standard way - you can extend it by yourself - but I didn't want to do it upstream, because it actually made the entire logic very complicated.
C
Yes, you can say, "I want to expose certain nodes to the VC," but it depends. In my current design, if a node in the supercluster isn't owned by the particular VC owner anyway, there's no point in exposing those nodes. But if you're doing it the other way around - especially in your case, in the sense that the VC user can probably access the supercluster and see all the nodes - they probably do want to expose those nodes.
F
Hey, I'm Fabian - I'm the CTO Lukas mentioned before. Maybe a quick addition to what Lukas said: currently the vcluster does not require cluster access at all. We don't create a cluster role, which also means the vcluster cannot list nodes in the host cluster - obviously, because a node is a cluster-wide resource, and if you don't have any cluster-wide permission you cannot list any nodes either - which is why, by default, vcluster fakes nodes.
F
So essentially it's a little bit similar to what you are doing, except that the node status and node spec and things like that are all fake information - except for the node name itself, which we can obtain from the pod spec, because the node name is in there. So we create a node basically on demand.
F
As soon as there is a pod within the virtual cluster that has a node name that is not yet in the virtual cluster, we create a fake node for it. We also have a different option, which is what Lukas mentioned - there are actually different ways you can create nodes within vcluster. These fake nodes are the first way of doing it, but we also allow you to turn off the fake nodes and actually sync the real data of the nodes.
F
But then you have to create a cluster role for the vcluster, and then it's essentially working the same way you described, Faye: we just sync the information about the real node instead of faking it. Besides that, we also allow you to set a node selector to enforce syncing only certain nodes, and there's also the option to sync all nodes, because one thing we figured out - or people were asking us about...
F
I'm not sure if you're handling this, or if you don't want to handle it, but currently we had a hard time with daemon sets, because if you schedule a daemon set within the virtual cluster, you kind of need to see which nodes are there.
F
Otherwise, it's pretty hard for the controller to actually schedule those pods on the nodes if they are not there in the virtual cluster. That's why we also allow you to sync all nodes if you want to.
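The on-demand fake-node behaviour described here can be sketched roughly like this (assumed shapes, not vcluster source): whenever a synced pod carries a node name the virtual cluster has not seen yet, a stub node with fabricated status is registered.

```python
# Rough sketch (assumed shapes, not vcluster source) of creating fake
# nodes on demand: the only real piece of data is the node name taken
# from the pod spec; the rest of the node object is fabricated.

virtual_nodes: dict[str, dict] = {}

def observe_pod(pod: dict) -> None:
    node = pod.get("nodeName")
    if node and node not in virtual_nodes:
        # Everything except the name is made-up information.
        virtual_nodes[node] = {"name": node, "status": "Ready (faked)"}

observe_pod({"name": "web-1", "nodeName": "host-node-a"})
observe_pod({"name": "web-2", "nodeName": "host-node-a"})  # node already known
print(sorted(virtual_nodes))  # ['host-node-a']
```

Syncing real nodes instead - the alternative mentioned above - would replace the fabricated fields with data read from the host cluster, at the cost of granting the vcluster a cluster role.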
F
Is there a reasoning behind why you don't allow daemon sets - I mean, from a security standpoint? We also don't recommend doing that.
F
You know, log shippers or drivers or something across the entire cluster - but maybe, even outside of virtual clusters, is there a use case for a daemon set for the admin of a namespace, where they might want to launch a log shipper on any node that they have a pod running on, or something where it's a subset of the cluster?
C
In terms of daemon sets, as far as I know, some very common ones - besides log daemons, like log management, that's one type of daemon set - another kind of daemon set does something around images. For example, they want to speed up image pulling, so they pre-pull some images. And I also know some daemon sets...
F
There's another project under the CNCF - I can't remember the name of it, it's open-something... OpenKruise. Yes, I think they have an actual API for pulling an image to all the nodes.
C
Yeah, I was involved in OpenKruise - okay, yes, that's exactly what I was talking about. In the OpenKruise use cases, everything is beside the image pulling; I don't see a strong use case where a purely user-run application requires a daemon set. Maybe, yeah, if anybody knows of anything where they want to run something...
B
Maybe - I mean, I agree with everything you said - but maybe the daemon set... I'm not sure if that's something you won't support, but what about a virtual cluster within a virtual cluster? I know you have this via an agent, which is essentially a daemon set, right, if I understood that correctly?
G
Realistically, as Faye said, you try to hide the fact that you're in a virtual cluster, so if someone thinks they have a Kubernetes cluster of their own, they may try to deploy some of those tools - I mean, vcluster isn't the only thing they might want to deploy if they're treating this as their own bespoke Kubernetes cluster instead of an underlying shared one, right?
C
Sorry guys, yeah - I do see one use case. For example: although the supercluster is a pod provider, one virtual cluster can negotiate with the cluster and say, "Hey, I want two or three dedicated nodes, please - my product needs it." And if you go that route, then what we can do is expose that subset of the nodes for that VC.
C
Regardless, if you do it that way, then you can run daemon sets on those vNodes and you don't have any problem.
B
I mean, it probably comes down to something similar with the hierarchical namespace controller, right? Who would have thought, three or four years back, that we needed that kind of nested, self-service namespace approach? Probably three years down the road people will say, "Hey, I'm getting virtual clusters right now - I want virtual clusters whenever I need them," and that could be a solution to it.
D
Yeah, I view them a little bit differently. First of all, I think there's a limit to how many levels of hierarchical namespaces you'd want - that limit's probably somewhere between three and five - and the reason for that is that it's supposed to mirror, let's call it, the relevant organizational structure that you're dealing with. So you could have teams and sub-teams and microservices - that's three; maybe you add one more on top of that, and that gets you to four. More than that probably isn't useful.
D
Yeah, and the only case where I could imagine a vcluster in a vcluster is where the base layer is completely opaque - like if you're at an organization and they have the base cluster, which I think was the Apple model, if I'm not mistaken: your platform team is running the base cluster and then you get a vcluster inside of that.
D
There's probably no need to subdivide that further, because you'd just go back to the platform team and ask for another cluster, whereas the model for hierarchical namespaces is totally different - you're literally inheriting things, which probably isn't as relevant for vcluster. But yeah, if the base layer is at a completely different company, then I can see wanting that one.
G
One thing to consider, though, right: if we don't support something like a daemon set, you can't pass conformance testing, and so it really comes down to a question of: is your virtual cluster providing conformant Kubernetes clusters, or subsets? Sorry, my dog wants to go outside - but it's worth pointing out that if you don't support those, then you can't pass conformance, so you're not really providing a quote-unquote conformant cluster, but a subset.
C
You probably can still pass the conformance test, but the whole problem is: I don't think the conformance tests are written in a way that creates three nodes, runs a daemon set, and confirms that you have three pods. They probably just use one node. If that's the case, you can still pass. The whole problem is, for daemon sets...
C
I'm just saying that, from the conformance test perspective, even if we claim we don't support daemon sets, that doesn't mean we cannot pass the conformance tests. The whole problem is that the use of daemon sets is pretty powerful - or, as I always say, very privileged: you need to have permissions on nodes in the first place.
F
So I can't see a case where I'd ever configure the pod security policy, or something like that, in a way that would allow somebody to do that - but I don't think it's necessarily blocked. It's not a blocker if somebody were to give enough permissions to somebody in another cluster - you know, a virtual cluster - to be able to do that.
C
Yeah - I think we've already spent a while; I want to give some time for the other topic. But Lukas, you guys have a good point. I think we're just trying to solve the problem from different angles. I don't see a strict requirement saying we have to merge - I don't think so, because the usage models are kind of different.
C
We probably should, in terms of writing or whatever, just make sure people understand the differences.
B
Yeah, that makes a lot of sense. I guess there are two variations of a syncer, right - a one-to-one versus a one-to-N syncer - and that's probably what we should communicate to people so they understand what's happening behind the scenes, because the syncer really is the most complicated part.
C
Maybe just the abstraction model - I probably have to emphasize that more. As I said, in our model I want to abstract the entire underlying cluster as a pod provider only - there's no other functionality at all, it's just a pod provider. But in your case you want it to be more powerful, so people have more control over that cluster as well. That's probably the primary difference.
E
Yeah, that was interesting. One thing - I'm not exactly sure about the use cases, because, like with the previous topic, it can always be debated: should it really belong at the namespace level, or should the platform team deploy it at the cluster level? For example, one project that came to my mind was node-problem-detector, which basically reads the system logs and alerts based on them.
E
So - but again, it can be debated - is it possible in either of the v clusters to just spin up or sync nodes based on the number of instances that are required in the daemon set?
E
Like, if the daemon set has a count of three, is there a way either of the v clusters can import three of the nodes and just start deploying to them?
C
It is doable - you can extend the virtual cluster, say, by just changing the syncer or adding a new API, so that a set of node selectors is exposed to a certain VC. That's certainly completely doable. We don't do that at present, because we think it's more of an opinionated option that people can build themselves.
C
I don't want to fully support it upstream, because to me it seems like a variation, and I don't want to put every variation into the single upstream repo. We want to make our story consistent and make sure the project grows and fits many people's needs.
E
Yeah, makes sense. So, the issue I mentioned in the agenda has to do with hierarchical namespaces. Basically, the motivation was this: currently there's only one global HNC configuration for the entire cluster, and I wanted to propagate different things with each of the configs - like, one of the configs would have Secrets and custom resources, whereas others would have just the Secrets, or maybe Secrets and ConfigMaps; it could be something entirely different also.
E
So basically I wanted two HNC configurations, but after I had a chat with Adrian, he explained why it could be too complicated and not easy to implement. So then what I realized is: maybe it would be better if the global configuration could have exception support - not just for a particular resource, but for an entire GVK.
D
So, with that - already in the global configuration you specify which GVKs... actually, these days it's a group-resource - which group-resources, like, you know, v1 Secrets. So the global one already says which resource types get propagated and which ones don't.
D
We've talked about allowing that per namespace, but then you get problems like: what if there's a conflict, and what if a namespace moves from one hierarchy to another? That's what I mentioned in the issue, which I just pasted into the chat there. So I think all of these problems are solvable, but what's missing right now is a couple of things.
D
It's one thing to say that it would be a little bit more useful; it would help if I could understand exactly why. Is it because there are two different teams, and one team is working in this way and the other team is working in that way? Or is it because they're not different teams, but they're being used for different purposes? The more we know, the more useful it might be. But even if we decide, okay, yeah, we really do just want per-tree configuration, then we have to...
D
Then we just need to answer these questions: what happens if a namespace moves from one tree to another? What happens if a parent's configuration conflicts with a child's configuration? Or do you only allow this configuration at the root - and then what happens if something that was a root moves and is no longer a root? So there are a lot of edge cases, and this is kind of what I was saying to Lukas.
D
Yeah, so I don't know where you want to take this. Do you want to tell us a little bit more, maybe, about why you want this kind of flexibility in the different trees?
E
So basically the idea was to have, like I mentioned, the different teams, so it...
E
It makes sense like this: if I had two separate projects installed, and each of these projects has nothing to do with the other, I just want to propagate the objects of the first project with the first config and the objects of the second project with the second config.
E
It could be any projects, and those projects would have their own custom resources - so config A could propagate the custom resources of project A, and config B could propagate the custom resources of project B.
E
So this would enable the projects themselves to have this structure where they can just have a config per namespace, and users of a project just have to create one namespace for that project, and everything else will be handled by the project based on the subnamespaces of that project. But...
D
Right - so, two things very quickly. If they're CRDs, the CRDs still need to be created at the cluster level, because that's something that hierarchical namespaces doesn't give you - it doesn't give you per-namespace CRDs. We looked and tried to figure out if there was any feasible way we could get that done, and the answer was basically no.
D
You have to use virtual clusters for that. So you're already doing cluster-level configuration, and so I guess the question I would ask is: given that you already need a cluster-level config to add a CRD, why couldn't you just turn on propagation globally?
E
Right - what could happen is that project A would have its own credentials and configurations which should be passed down, and it might not be a good idea to expose that information to the other namespace, which has another project running.
D
But you get that today, right? If you have two root namespaces and you put the credentials in one root namespace, they only get propagated to the descendants of that one namespace. They don't get propagated to the other one, right?
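The scoping described here - an object placed in a root reaching only that root's descendants - can be sketched with a tiny tree walk. This is an illustration of the idea, not HNC code, and the namespace names are made up.

```python
# Tiny illustration (not HNC code): an object placed in a namespace is
# propagated only to that namespace's descendants in the hierarchy.

parents = {
    "team-a-dev": "team-a",
    "team-a-prod": "team-a",
    "team-b-dev": "team-b",
}

def descendants(root: str) -> set[str]:
    # Walk each namespace's ancestor chain; keep it if the chain hits root.
    result = set()
    for ns, parent in parents.items():
        while parent is not None:
            if parent == root:
                result.add(ns)
                break
            parent = parents.get(parent)
    return result

# Credentials placed in root "team-a" reach only team-a's subtree.
print(sorted(descendants("team-a")))  # ['team-a-dev', 'team-a-prod']
```

So two unrelated projects already get isolation from each other simply by living under different roots, which is the point being made.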
G
I could share one of our use cases, which is a slightly different ask, but it's one of our common ones: we really only want one resource propagated out. Our big one is that we have a private CA that we use, so we like to put the public chain in every namespace so that everyone can add it to their containers' trust store.
G
So we don't have issues - we really just want that one ConfigMap propagated down, and in order to achieve that we basically have to configure all ConfigMaps, and then that basically achieves it. So that's one use case; it's slightly different, but it's a common one we have. We just want a way to get one thing everywhere, and there's not a good way to do an exception.
D
That is relatively easy - if you want to add that, it should be pretty easy to add. Right now I think we have three modes, which are Propagate, Remove, and Ignore, and - Ryan, you know, I've discussed this - it would be fairly easy to add an AllowPropagate mode, where basically propagation is by exception only: it's default off, but you can turn it on.
D
That would be pretty easy to do, and you can still do it globally, across the entire cluster; I think that would be pretty straightforward. It would still be nice to have a design doc, or maybe extend the existing design doc, but that's pretty straightforward to do. Harsh, to get back to what you were saying...
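The mode resolution being proposed can be sketched as follows - an illustration, not HNC code: the existing Propagate/Remove/Ignore behaviour plus an AllowPropagate mode where individual objects must opt in. The annotation key here is hypothetical.

```python
# Illustration (not HNC code) of the proposed AllowPropagate mode next
# to the existing Propagate/Remove/Ignore modes.

OPT_IN = "propagate.example.com/allow"  # hypothetical annotation key

def should_propagate(mode: str, annotations: dict[str, str]) -> bool:
    if mode == "Propagate":
        return True                      # always copied to descendants
    if mode in ("Remove", "Ignore"):
        return False                     # never copied
    if mode == "AllowPropagate":
        # Propagation by exception only: default off, per-object opt-in.
        return annotations.get(OPT_IN) == "true"
    raise ValueError(f"unknown mode: {mode}")

print(should_propagate("AllowPropagate", {}))                # False
print(should_propagate("AllowPropagate", {OPT_IN: "true"}))  # True
```

Note the mode itself stays a cluster-wide, per-resource-type setting; only the opt-in is per object, which is what keeps this variant simple compared to per-namespace configuration.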
D
I appreciate that in your use case you basically only care about static stuff, which is to say: "I don't care what happens if a namespace changes its ancestors, because I'm not going to do that." That's fine, but we still need to define what's going to happen. So, for example, let's say you move a namespace with one set of propagations under another with a different set: is that an error? Is it going to be prevented?
D
How will the user allow that? It doesn't have to be a great user experience, because we don't know what a good user experience is yet, but we do need to know what the experience is going to be, and we need to know that we're not painting ourselves into a corner we can't get out of. Does that make sense?
E
Yeah, it does. So I thought about a good example - maybe this can help. Let's say project A requires its Secrets to be propagated across its namespaces - the ones the project creates - and project B doesn't require any Secrets to be propagated, and yeah...
E
It wouldn't be a good idea to propagate Secrets that were created in project B.
D
What about what Ryan said - would it help if we basically had an AllowPropagate mode, which would be opt-in? You could still say that, by default, Secrets are never propagated, but you can add an annotation to any individual Secret, and that would force it to be propagated.
D
Well, it's already at the GVK level - oh, you mean within a namespace?
E
Yeah - like we just mentioned, this namespace would not allow this GVK to be propagated. But I'm not sure what would happen if we change the ancestor path and all that.
D
Yeah, and that's the complication. What we're saying - to solve Ryan's case, where we're allowed to turn propagation on anywhere for, say, Secrets, but then you have to use an additional annotation on the Secret if you actually want it to be propagated - that we understand very well. We already know how object overriding happens when a namespace moves from one place to another; that's already handled very well, and it's quite robust.
D
If you want to do it at the GVK level per namespace, then you need to worry about what happens if the parent has the mode Propagate but the child has the allow-propagate mode. Which one wins, which one loses? What happens if one of them is removed?
D
That's where it becomes, not necessarily really hard, but non-trivial, and we need answers for all of these questions. All I'm saying is that I don't have the answers to them. So, the way we've done development before: I'm going to paste something into the chat. This was the doc that we used to actually add exceptions to HNC in the first place, and you can see that the first link in that doc is the original design doc for HNC, and that one's much longer.
D
I try to stick to much shorter design docs these days; we have single-purpose design docs now. But we're going to need a design doc that says how these things are going to happen, and it doesn't have to be long. You can see the exceptions design doc is only five pages, and it's got one picture and a couple of... So it doesn't have to be massive, but it does need to exist. That's really the next step if you want to add this feature: have a design doc where we can actually go through these details, where you say, this is what I'm trying to achieve, and this is how I expect these corner cases to work. Then we can start writing PRs once we agree on that. It's not a heavyweight process: basically, if I like it and Ryan likes it and we say okay, that's good, then you can go ahead and start implementing it, and Ryan and I will review all of the PRs. That's fine, but we do need to understand what we're driving towards.
G
If I could add quickly on top of that: it might not hurt to look at another tool to, I guess, extend HNC. If we did have the allow exception (correct me if I'm wrong; I don't see Jim on, but I think, Adrian, you were looking at this), Kyverno does support mutating policies. So you could do an allow and then use a Kyverno policy at the namespace level to enforce the propagation, or vice versa.
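A rough sketch of that suggestion: a Kyverno mutating policy that stamps an opt-in annotation onto every Secret created in a given namespace, so propagation can be enforced per namespace without changing HNC itself. The annotation key `propagate.hnc.x-k8s.io/allow` and the namespace `project-a` are illustrative only; the actual key would have to come out of the design doc.

```yaml
# Sketch only: assumes the hypothetical opt-in annotation discussed above.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: opt-in-secret-propagation
spec:
  rules:
    - name: annotate-secrets-in-project-a
      match:
        resources:
          kinds:
            - Secret
          namespaces:
            - project-a
      mutate:
        # Add the opt-in annotation to every Secret in project-a,
        # so HNC (under the proposed allow-propagate mode) would
        # propagate them to descendant namespaces.
        patchStrategicMerge:
          metadata:
            annotations:
              propagate.hnc.x-k8s.io/allow: "true"
```

The inverse ("or vice versa") would be a validating policy that blocks the annotation in namespaces where propagation should never be enabled.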
D
Yeah, and I think that's definitely an option in the short term. I'm certainly not opposed to having per-namespace configs; as I said, we've discussed it many times. We just need a concrete proposal. So, Harsh, if you want to, I think the next step is just going ahead and writing that doc and sharing it, and we can iterate on it. It doesn't have to be perfect.
E
Oh right, I'm just thinking: would it be a bad experience for, let's say, the project maintainers to enable whatever secrets they are creating? Let's take the simple example of a secret. If the project maintainer is creating a secret on behalf of the user, or the user is creating a secret for that project, then just adding an annotation that lets HNC know this can be propagated, I think that's a good solution too.
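Concretely, that workflow might look like the following (a sketch: the annotation key, secret name, and namespace are all hypothetical, pending the design doc):

```yaml
# Sketch: a project maintainer opts a single Secret in to propagation
# by annotating it at creation time. The annotation key below is
# illustrative, not an existing HNC annotation.
apiVersion: v1
kind: Secret
metadata:
  name: registry-creds
  namespace: project-a
  annotations:
    propagate.hnc.x-k8s.io/allow: "true"
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6e319   # base64 of {"auths":{}}
```

Secrets created without the annotation would simply stay put, which is the opt-in default being discussed.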
D
That's going to be fairly straightforward, yeah; I think that would be fairly straightforward to implement. You could probably just go back to that very design doc that I just shared, the exceptions one, and basically propose some changes to it: we've got three modes in here, we're adding a fourth one called Allow, and here's how it's going to work. Then you can go ahead and start, and Ryan and I can probably approve that fairly quickly.
E
Yes, so I think I'll go with that. I'll go through this doc briefly and then get back to you on Slack.
D
Okay, great. And if it doesn't work and you decide you need to go back and do the per-namespace configs, then that's fine too. It's just that those are the problems we're going to have to deal with.