From YouTube: Logical cluster deep dive
Description
A discussion around how logical clusters are implemented in kcp and how we might approach enabling priority & fairness per logical cluster (i.e. per workspace)
A
So just to recap what we talked about briefly off-recording: kcp embeds a fork of Kubernetes that has been modified to be logical-cluster aware. The fork is never intended to be externally visible to anyone trying to write clients and controllers against kcp.
A
It is only for the internals of kcp to function, so existing clients or existing controllers that are able to talk to Kubernetes or OpenShift, or anything that looks like Kubernetes or OpenShift from an API perspective, will continue to function. Discovery exists, so everything you can do in client-go, you can do against a kcp workspace.
A
Some resources may not exist; for example, kcp doesn't have pods out of the box. So if you have a controller that's trying to manipulate pods and you point it at kcp, it's probably not going to work. But if you're just working with namespaces and custom resource instances, all of those mechanics will function, even if you're trying to write a controller or a client that talks to, or across, multiple logical clusters.
A
A way to look at this: if you look at the normal, current storage mechanism or storage pattern, everything is under /registry, subdivided by group and resource. In Kubernetes or OpenShift there is no cluster segment; it's just group, resource, an optional namespace, and then a name. For kcp, we add a cluster segment in between the resource and either the namespace or the name for all of our built-in types, so namespaces, secrets, service accounts, that sort of thing. Anything that is a custom resource instance will have /registry, group, resource, and then, for the purpose of this discussion, either an identity or the fixed string "customresources".
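The key layout just described can be sketched as a small key builder. This is an illustration of the pattern, not kcp's actual code; the function name and exact segment order are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// etcdKey sketches the kcp storage-key layout described above: the usual
// Kubernetes prefix /registry/<group-resource> gains an extra logical-cluster
// segment before the (optional) namespace and the name.
func etcdKey(groupResource, cluster, namespace, name string) string {
	parts := []string{"", "registry", groupResource, cluster}
	if namespace != "" {
		parts = append(parts, namespace)
	}
	parts = append(parts, name)
	return strings.Join(parts, "/")
}

func main() {
	// Upstream Kubernetes would store this secret at /registry/secrets/default/s1;
	// with the cluster segment it becomes:
	fmt.Println(etcdKey("secrets", "root:org:team", "default", "s1"))
	// Cluster-scoped objects simply omit the namespace segment:
	fmt.Println(etcdKey("clusterroles", "root:org:team", "", "admin"))
}
```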
A
What those mean is not relevant, so I'm going to skip over it. And just like we saw down here, we do have a cluster segment that precedes an optional namespace and the name, and this is how we introduce multiple logical clusters into an etcd store. That's back in kcp, and then there's all of the code in the kcp fork of Kubernetes, which is in kcp-dev/kubernetes.
A
So everything in this fork has been modified to support the multiple logical clusters that are visible in the etcd storage patterns here. Let me pause there; any questions on any of this?
A
No, the hierarchy is contained wholly within this segment, so it could be something like root:redhat:engineering.
A
We use a colon to separate the levels in that hierarchy, so we are guaranteed that there's only ever one level of nesting in the path when it comes to specifying the cluster.
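Since the whole hierarchy lives in one colon-separated segment, splitting it apart is simple string work. A minimal sketch, assuming a helper name of my own; the real helpers live in the kcp-dev/logicalcluster package.

```go
package main

import (
	"fmt"
	"strings"
)

// parentAndName splits a colon-separated logical cluster path such as
// "root:redhat:engineering" into its parent ("root:redhat") and its own
// name ("engineering").
func parentAndName(cluster string) (parent, name string) {
	i := strings.LastIndex(cluster, ":")
	if i < 0 {
		return "", cluster // a root-level cluster has no parent
	}
	return cluster[:i], cluster[i+1:]
}

func main() {
	parent, name := parentAndName("root:redhat:engineering")
	fmt.Println(parent, name)
}
```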
A
All right, so how do we do all of this? Basically, if you go into our fork, which is what I have open on the screen here, anywhere that you see a kcp-dev logicalcluster import is somewhere that we're doing something with logical clusters.
A
So let me find it; I'm just going to be showing some examples here.
A
So if you look at Get, for example, Get will determine the logical cluster name from the context. This is something that kcp is responsible for setting in a couple of different places. If we trace it, you'll see that there's a whole bunch of places where this WithCluster function is invoked, and you can go look at those places here; they're also in kcp.
A
Okay, never mind, I'll come back to that later. Suffice it to say there is a logical cluster name that has been added to the context. I'd have to go back and double-check where it got added, but in this particular code flow we are retrieving a key from etcd.
A
So what we do is we know the cluster name from the context, mainly because when you make a request to Kubernetes, the request pattern that we have looks like one of these. This is an actual URL path that kcp would receive from a client: it would be /clusters/, then a cluster name, and then the remainder of the request would be a standard request that you would make to Kubernetes or OpenShift, so /apis/, some group, some version, some resource. It could be /api/v1/secrets. So anything you can do, in terms of metrics, health and liveness probes, and API requests, all of these get a prefix with the cluster there.
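The URL pattern just described can be sketched as a small path splitter. This illustrates the /clusters/<name> prefix, not kcp's actual handler code.

```go
package main

import (
	"fmt"
	"strings"
)

// splitClusterPath peels the /clusters/<name> prefix off an incoming path,
// leaving a standard Kubernetes request path behind.
func splitClusterPath(path string) (cluster, rest string) {
	const prefix = "/clusters/"
	if !strings.HasPrefix(path, prefix) {
		return "", path // no cluster prefix: pass through unchanged
	}
	trimmed := strings.TrimPrefix(path, prefix)
	i := strings.Index(trimmed, "/")
	if i < 0 {
		return trimmed, "/"
	}
	return trimmed[:i], trimmed[i:]
}

func main() {
	cluster, rest := splitClusterPath("/clusters/root:org:team/api/v1/secrets")
	fmt.Println(cluster) // the logical cluster name
	fmt.Println(rest)    // the standard Kubernetes request path
}
```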
A
So once we are decoding data that's coming out of etcd, we set that, and then it is available to anywhere in the code base that needs to work with it. And the logicalcluster package, a repository that's in kcp, allows you to take basically anything that has a GetAnnotations method on it, which is all Kubernetes objects, and call logicalcluster.From, and that will return a logical cluster name that's been decoded from those annotations. So what you'll see is logicalcluster.From all over.
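The annotation-based lookup can be sketched as follows. The From helper below is a simplified stand-in for logicalcluster.From, and the annotation key is an assumption (the exact key has varied across kcp versions), so treat both as illustrative.

```go
package main

import "fmt"

// clusterAnnotation is the annotation key under which the owning logical
// cluster is assumed to be stored; illustrative, not guaranteed to match
// any particular kcp release.
const clusterAnnotation = "kcp.io/cluster"

// annotated is satisfied by every Kubernetes object via metav1.ObjectMeta.
type annotated interface {
	GetAnnotations() map[string]string
}

// object is a minimal stand-in for a Kubernetes object.
type object struct{ annotations map[string]string }

func (o object) GetAnnotations() map[string]string { return o.annotations }

// From decodes the logical cluster name from an object's annotations,
// mirroring what logicalcluster.From does.
func From(obj annotated) string {
	return obj.GetAnnotations()[clusterAnnotation]
}

func main() {
	crd := object{annotations: map[string]string{clusterAnnotation: "root:org:team"}}
	fmt.Println(From(crd))
}
```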
A
There's a whole lot in here. Custom resources, for example: we'll take a look at the CRD, get its logical cluster, and use that as part of establishing CRDs. So there's a whole bunch of code in Kubernetes, in our fork, where we are, you know, injecting or doing some logic that involves the logical cluster.
D
Yeah, sorry about that. My question is about the annotation. So in that case, the annotation is part of the object data, right? It's stored in etcd.
D
So a logical cluster is not really a cluster; it's a way to distinguish, or to have some isolation between, the objects being stored in the storage.
A
A logical cluster is meant to be like a real cluster, where you have cluster-scoped resources like namespaces, or cluster roles and cluster role bindings.
A
Each isolated cluster is what we're calling a logical cluster. So one advantage to this: if you and I are both sharing kcp, and I have a logical cluster and you have a logical cluster, we can each create the exact same CRD in terms of group, version, and resource, but my CRD could look 100% different from your CRD, and they will not collide with each other.
D
So one question I have: I know that a workspace is mapped to a logical cluster. So each time you create a workspace, you have a new logical cluster in kcp?
A
Right, yeah. Every workspace has a logical cluster; not every logical cluster has a workspace. The logical clusters that don't have workspaces are reserved for internal kcp system use. So there are privileged, kcp-admin-level clients that can go do things with these logical clusters that don't have workspaces, but you generally don't need to know or worry much about that.
D
So, from a user's point of view, a user doesn't really see a logical cluster, right? When the user is interacting with the API server, it's through whichever workspace they are in.
A
Yeah. Now, you can switch workspaces, right.
A
Yes, and controllers do have the ability to see data across many logical clusters, in that they have the ability to do what we call wildcard lists and watches.
A
There are some special requirements for what you must do to make this work, but they're straightforward, and so you can use them if you're trying to write a controller.
A
Let's say you're doing cert-manager, because that's our canonical example. cert-manager today only operates against things in a single cluster, which makes sense; that's what clusters are, they're standalone. cert-manager could be updated so that you could install one copy of it, and it could handle certificate requests in any logical cluster, across all of them. If you don't rewrite or modify cert-manager, or any other controller, then you have to install one copy per workspace.
D
And how do you configure the actual workspaces that can be visible to that controller?
A
It just references my name, and then once I have that APIExport defined, I am able to look at its status, and there's a special URL in there that I would consume and use. If I access that URL, I have the ability to do a wildcard list and watch, and I can do individual CRUD operations per logical cluster, or per workspace. So if I know that you have a certificate signing request in your workspace abc123, I would have a URL that looks like /clusters/...
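The URL shapes being described might be sketched like this. The base URL, group, and resource names are made up for illustration; only the /clusters/<name-or-star> pattern comes from the discussion above.

```go
package main

import "fmt"

// resourceURL builds a request URL under an APIExport's virtual-workspace
// base URL: pass "*" for a wildcard list/watch across all bound logical
// clusters, or a concrete cluster path for per-workspace CRUD.
func resourceURL(base, cluster, resourcePath string) string {
	return fmt.Sprintf("%s/clusters/%s%s", base, cluster, resourcePath)
}

func main() {
	// Hypothetical URL taken from the APIExport's status.
	base := "https://kcp.example.com/services/apiexport/root/certificates"
	// Wildcard: see certificate signing requests in every bound workspace.
	fmt.Println(resourceURL(base, "*", "/apis/certs.example.com/v1/certificatesigningrequests"))
	// Single workspace: operate on objects in workspace root:abc123 only.
	fmt.Println(resourceURL(base, "root:abc123", "/apis/certs.example.com/v1/certificatesigningrequests"))
}
```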
A
A second... okay, here we go. I have my local kcp running, and I'm in the root workspace, which is our top-level workspace. And so if I do a get, we'll look at tenancy; this is what you'll see down here at the bottom.
A
I must not have any bindings on this. All right, I'm not going to try to make this work on the fly, but basically this is what you would do: you would go through the virtual workspace URL for the APIExport, and then you'd be able to see these things and operate on them. So if I wanted to do something on a single one, it could be, you know, root:abc123, and if this particular thing existed, I could get it, I could create it.
A
So, I know, in terms of priority and fairness.
A
So this function, in staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store.go, starts the observer that periodically tracks object counts per resource name, and this is created when...
A
But the cluster that it's putting in is a wildcard, and there's some hackery that we did there: if you're doing a star, which is the wildcard request, then the prefix just ends up being the etcd /registry/<group-resource> prefix.
A
That's the prefix, and so this ends up counting all instances of whatever your resource is across all logical clusters, because if you count where this is the prefix, it's going to include cluster one, cluster two, cluster three. And so the idea that I had in my head for how to attack this problem was: while we can start observing for that global prefix, you kind of need something that is watching for workspaces, and when a workspace is created, you start one of these observers for each workspace.
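The per-workspace idea sketched above might look roughly like this. Every name here is hypothetical; it only illustrates "watch workspaces, start one count observer per workspace", not any actual kcp code.

```go
package main

import (
	"fmt"
	"sync"
)

// observerRegistry tracks one object-count observer per workspace, instead
// of a single observer on the wildcard prefix (which would count objects
// across every logical cluster at once).
type observerRegistry struct {
	mu        sync.Mutex
	observers map[string]func() // workspace -> the observer's stop function
}

func newObserverRegistry() *observerRegistry {
	return &observerRegistry{observers: map[string]func(){}}
}

// onWorkspaceAdded starts a count observer scoped to the workspace's prefix.
func (r *observerRegistry) onWorkspaceAdded(ws, resource string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.observers[ws]; ok {
		return // already observing this workspace
	}
	prefix := fmt.Sprintf("/registry/%s/%s/", resource, ws)
	fmt.Println("starting count observer for prefix", prefix)
	r.observers[ws] = func() {} // stand-in for a real stop handle
}

// onWorkspaceDeleted stops and forgets the workspace's observer.
func (r *observerRegistry) onWorkspaceDeleted(ws string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if stop, ok := r.observers[ws]; ok {
		stop()
		delete(r.observers, ws)
	}
}

func main() {
	reg := newObserverRegistry()
	reg.onWorkspaceAdded("root:org:team", "secrets")
	reg.onWorkspaceDeleted("root:org:team")
}
```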
A
But so, if I create a thousand workspaces, that has no bearing on this function being invoked, because it's invoked when the API type is added to the API server. That's when the built-in types, like secrets, are registered; it's whenever a CRD is created, and it's going to do that per CRD, and especially with the APIExport bit that I was showing a few minutes ago, when I create an APIExport for certificate signing requests.
A
I apologize, I need to end this a little bit early; I'm starting to get a floater in my vision, so I kind of want to go lie down. But please continue to ask questions in Slack, and when I'm not having something flashing in my vision, I'm happy to chat more.
D
Sure, sure, yeah. I need some time to digest whatever you just said and look more into the code, and I might have some questions about, you know, how an incoming request is dispatched, or does it really make any difference if the request is to a CRD within a workspace, or does it not really make a difference?
D
When I say "make a difference", I mean: is it the same handler chain that's servicing that?

A
Yeah, yeah. There's no individual handler chain per workspace; it's all served by the same handler chain, but there might be different actions depending on the workspace or logical cluster there.
A
Resource quota is running a variety of goroutines per workspace; that's just an example of where we have multiple controller instances and multiple admission instances, one per workspace. We'll do the same thing for garbage collection when we implement it. Other things, like the tokens controller: rather than running one tokens controller per logical cluster, we run one tokens controller, and it handles things across logical clusters.
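A single shared controller serving all logical clusters, like the tokens controller just described, needs its work-queue keys to say which cluster each item came from. A minimal sketch; the key format "<cluster>|<namespace>/<name>" is an illustrative assumption, not kcp's exact encoding.

```go
package main

import (
	"fmt"
	"strings"
)

// splitKey decodes a cluster-aware queue key of the assumed form
// "<cluster>|<namespace>/<name>" back into its parts, so one worker loop
// can act on objects from any logical cluster.
func splitKey(key string) (cluster, namespace, name string) {
	cluster, rest, _ := strings.Cut(key, "|")
	namespace, name, _ = strings.Cut(rest, "/")
	return cluster, namespace, name
}

func main() {
	c, ns, n := splitKey("root:org:team|kube-system/default-token")
	fmt.Println(c, ns, n)
}
```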
D
Okay, yeah, I'll try to understand the code a bit more, and then I'll come back with some more questions. Thank you for this session.
A
Okay, you're welcome, and please reach out in the future; happy to provide more details.