From YouTube: Community Meeting, December 14, 2021
A
Thank you, hello, and welcome to the kcp community meeting, Tuesday, December 14th, 2021. We have a pretty exciting, pretty packed agenda today. I wanted to call out that we are not going to have one of these next week, December 21st. If you join, you will be alone, or, you know, maybe other people will join and you can talk to them about whatever you want. Great. With that out of the way, we have some stuff on the agenda, and I will hand it over to Josh to talk about initial work on the syncer and OCM.
C
I am sorry, my last meeting ran over. All right, yes, so I put this on at the last minute because, to be honest, we worked out all the kinks at the very last minute, but we wanted to get it in before we would have to wait until the new year to at least show our initial take at it. Let me quickly post in the chat the repo the demo is coming from. I also need to give credit where credit is due.
C
I did not write any of this code; I am just the demonstrator. Chojin wrote this code, and he has actually put a couple of pull requests back against kcp from some of the things he came across in this work, and I'll point some of that out as we're going through it. He is in China, though, and it's 1 a.m. or 12 a.m. there or something like that, which is why I'm doing the demonstration. I think I saw Dario joined as well; he's been working with Chojin, so I'm sure he'll pipe up as we're going through it. So without further ado, let me share my screen and then set a little context here. What we were looking for here is this: ACM is all about creating and managing clusters.
C
And so it made complete sense: we have this fleet of clusters, and what better place to start than to help connect those up to kcp as they are provisioned, or as the underlying footprint needs to expand. So this is the administrator behind the scenes, not the developer using kcp, or someone who has a large number of clusters that they want to start to use with kcp.
C
Again, we've got fingers into that fleet, and so it made complete sense to work on hooking kcp up, which means specifically hooking the syncers up on these managed clusters so they point back to a kcp instance. So here I ran the first part of the demo ahead of time, which just sets up three kind clusters. One of them represents the hub. And I keep saying ACM, but actually we're using Open Cluster Management (OCM), which is the community version of ACM.
C
It's just the core technology for managing or working with multiple clusters. So it's got the concept of a cluster that is under management, which we call a managed cluster, and it's got this concept of add-ons, which are pieces that go along with our agent that we might want to run. So I've created three clusters: one is the hub, and two are managed clusters, which are going to be my targets in this case for what I want to do with kcp.
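(For reference, a rough sketch of the kind plus OCM setup being described; the cluster names are illustrative and the demo script may differ in detail.)

    # one hub cluster and two managed clusters, all in kind
    kind create cluster --name hub
    kind create cluster --name cluster1
    kind create cluster --name cluster2

    # initialize the OCM hub, then register the two managed clusters
    clusteradm init --context kind-hub
    # clusteradm init prints a join command containing the hub token and API server URL
    clusteradm join --hub-token <token> --hub-apiserver <hub-url> --cluster-name cluster1 --context kind-cluster1
    clusteradm join --hub-token <token> --hub-apiserver <hub-url> --cluster-name cluster2 --context kind-cluster2
    clusteradm accept --clusters cluster1,cluster2 --context kind-hub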
C
The last part here is that, after I created them in kind, I joined those clusters to my hub. Joining is about a 60-second activity, but the reason I bring this up is that any existing customer out there with ACM is going to have provisioned clusters, or have a number of clusters under management. We're also looking at SD, the service delivery team; they're looking at ACM, or at least this portion of ACM, becoming a core part of how they do work under the covers.
C
So that's another opportunity to introduce kcp into these spaces as SD spins up clusters as well. So pretty much anywhere that ACM has a connection. And ACM is not limited specifically to OCP: you can import any of the *KS offerings, so AKS, EKS, IKS, etc. It works with kind, which is what this demo uses, as well as a few of our competitors like k3s and some of the others. Those we don't officially support, but because they are based off of vanilla Kubernetes, which we do support, it is able to function there. So again, the starting plan is three clusters, a hub and two managed clusters; we'll just call those my managed clusters, or my workers, that kcp is going to use, and I've imported them into my hub so that my hub can see them. Next I'm going to bring kcp online, so we're just taking a copy of that kubeconfig here for the setup to use, and then we're going to activate kcp. So it's pulling it in; it's actually cloning the repo directly right now.
C
It's going to do the build, we're going to start kcp up, and we're going to start up an extra controller: an ACM add-on controller that we built specifically for kcp. For each add-on (these are the pieces we add to our agent) we include an additional controller that is responsible for making sure that add-on lands on the correct managed clusters. So we've got it built, and we're going to turn it on. We've got the certificates, and the kcp server itself is booting up.
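(A rough sketch of the bootstrap steps being narrated, under the assumption that the demo script works roughly like this; the add-on controller name and flags below are illustrative, not the exact ones from the demo repo.)

    # grab a kubeconfig for the OCM hub for the setup to use
    kind get kubeconfig --name hub > hub.kubeconfig

    # clone and build kcp, then start it (kcp writes its admin kubeconfig under .kcp/)
    git clone https://github.com/kcp-dev/kcp.git && cd kcp
    make build
    ./bin/kcp start &

    # start the OCM add-on controller for the kcp syncer (binary name is illustrative)
    ./kcp-ocm-addon-controller --hub-kubeconfig ../hub.kubeconfig --kcp-kubeconfig .kcp/admin.kubeconfig &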
C
Actually, let me get a little fuller screen here. Then we're bringing up the controller I just mentioned. This is the one that's going to look on the ACM hub at the managed clusters and then make a determination of whether the kcp add-on should be enabled; the kcp add-on is going to deploy the syncer and connect that syncer back to a logical cluster in my kcp instance. So everything here is now up and running, and both of my controllers are going. Let me grab the demo screen again; hopefully this is readable. The very first thing we're doing is on the hub. These were the managed clusters that I was talking about: we have cluster1 and cluster2, the two client clusters that were created and then imported, and we're going to create the add-on for the syncer.
C
So again, this is something Chojin just put together, but we have a CR type, and you can have multiple add-ons depending on what you want to use from the ACM or MCE toolbox, so things like policies, applications, etc. In this case we have one specific to kcp; we're also developing one for HyperShift managed clusters. Another example of what goes in there: ACS has an add-on. So you can pick and choose, but the one for today is the syncer. We're actually going to apply that now, so we've created the resource on the hub, and next we're going to enable it via the controller we have running, the kcp-ocm one that was launched earlier.
C
I'm going to bring in both of my clusters, so I'm going to add the annotation there. This is just one way of doing it that we chose for this initial demo; we might use something like our placement rules instead, so that you could put a specific syncer on clusters that have NVIDIA GPUs, or clusters that are in a specific region, etc. As this keeps going, we're going to see that the add-on was created. This is the certificate signing request that got automatically approved by our controller.
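(A minimal sketch of what this step might look like on the hub, assuming an annotation-based trigger as described; the annotation key is illustrative, not the exact one from the demo repo.)

    # mark the managed clusters that should get the kcp syncer add-on
    kubectl annotate managedcluster cluster1 kcp-syncer.example.com/enabled=true
    kubectl annotate managedcluster cluster2 kcp-syncer.example.com/enabled=true

    # the add-on controller then creates a ManagedClusterAddOn in each cluster namespace
    kubectl get managedclusteraddons -n cluster1
    kubectl get managedclusteraddons -n cluster2

    # and the certificate signing request raised by the agent is approved automatically
    kubectl get csr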
C
And so this is just showing you that managed cluster add-on again that was created. Now we're going into kcp; this is running against the kind clusters on my laptop. We're using the same deployment demo that we did before, when we used our managed back end instead of the syncer, but this time we're doing strictly the syncer piece of it. So the syncer is just as you usually have it.
C
In his words, it's a bit of a hack to make that binding work, but he was expecting that there would be an eventual solution that may be different from what he proposed in the pull request, and I can link that pull request after I'm done as well. So then we're going to create the namespace, and this is inside kcp, and we're going to create the deployment. This is the standard demo deployment that usually gets used, and then we're going to watch the namespaces.
C
So we actually see the two: there's the initial deployment here, and then there's what the splitter does across the logical clusters, and we should see all of those come online in a few seconds. We see they're coming up now: cluster1 had one and cluster2 got two. So that's the end of the scripted portion of the demo. Let me just do a quick use-context.
C
We'll do cluster2, because it had the two deployments: kind-cluster2, and we'll do a kubectl get deployment. So here we have the deployment, and we see the 2x2 going into the default namespace that was put there, but we also see we have the kcp syncer running here. This is what our add-on pushed into the environment and then linked back up to the specific kcp resource. So I can add one more cluster, or six more clusters, and in this design, at least as long as they have that annotation, the syncer gets deployed and connected, and I get a larger spread of devices.
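(A small illustrative check against one of the managed kind clusters, along the lines of what was just shown; the namespace and name filters are assumptions.)

    # switch to the second managed cluster
    kubectl config use-context kind-cluster2

    # the demo deployment that was synced down from kcp
    kubectl get deployments -n default

    # the syncer itself, pushed there by the OCM add-on
    kubectl get deployments -A | grep -i syncer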
C
So that's the initial demo. As for things we're talking about, we're sort of waiting to see the direction kcp goes in. One is that this is not integrated with workspaces yet; we're waiting for that to settle, and I think it's still not totally complete. The idea is that you could create a workspace and then we could have our controller automatically connect the syncers to it, depending on labeling, etc.
C
The other one we've been thinking about is, if you have multiple instances of kcp, being able to deploy multiple instances of the syncer pointing to those different kcps, to be able to push down to the individual clusters. And again, we're open to and interested in other ideas, other ways to slice and dice and bring clusters into the kcp fold.
A
Yeah, so this looks really cool. Thank you for sharing this; this looks really awesome. It seems like this is effectively a replacement, or another alternative, for the cluster controller.
A
You tell it about clusters, it reaches out, it's responsible for installing the syncer, it's responsible for setting all that up, with, I think, two really great benefits. One is that, as I understand it, it uses OCM's cluster registration.
A
What we have today is basically "give us a kubeconfig and we will take over your cluster," which is not a great value proposition for anybody wanting to use this seriously. OCM's registration process is a bit more like: the spoke cluster says to the hub cluster, "I would like to join, here are credentials," and then they have a lease, and there's...
A
It's a bit more, you know, what's the word I'm looking for: secure, good, solid. So that's definitely good. And it's also using OCM's add-on management to install and maintain the syncer. Is that correct?
A
Gotcha, okay. Oh, I see. So we haven't really gone through how syncers would be upgraded in our world, so having any answer to that is infinitely better than what we have now. So yeah, that's really exciting.
A
I think I need to understand more about what I'm seeing, to know if there are any problems with it, but it seems like a great replacement for the cluster controller we have today, and being built on something that has already walked through the minefield seems like a real benefit. So thank you for this work and for sharing it with us.
D
Yeah, that's great. And in fact my question is a bit of a continuation of Jason's remark, which is: I didn't see how you start kcp itself, but I assume, then, that you don't run the cluster controller, in fact.
C
That's right. Well, OCM installed the syncer, but yeah, we did that part manually. Okay, sorry, so let me, yeah, give me a second here and see how well I can type: against this cluster, kubectl get appliedmanifestwork.
C
Let's see if that works. Yes, so this is ACM under the covers. It has ManifestWork, ManagedClusterView, a couple of different CRs. This is what's delivering the different pieces: it creates the namespace, it creates the syncer, it brings in the syncer config and the service, etc., and then sets that up with the config for the connectivity back to kcp.
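(For illustration, roughly how you might inspect what OCM delivered, assuming the standard OCM resources named above; the cluster namespaces are the ones from this demo.)

    # on the hub: the ManifestWork objects that carry the syncer payload, one per cluster namespace
    kubectl get manifestwork -n cluster1
    kubectl get manifestwork -n cluster2

    # on a managed cluster: AppliedManifestWork records what was applied locally
    kubectl --context kind-cluster2 get appliedmanifestwork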
D
Yeah, and I was asking this question because currently, in the cluster controller, several features are tied together quite tightly, and one of the features that is highly tied into the cluster controller is the API management: the fact that when you join a cluster, you import the APIs, among a list of APIs which we are interested in, mainly the three that we can rebuild from.
D
That means that we would have to think about decoupling the API management, which will need to be done anyway, because it's a hundred percent simplistic for now and of course it will be changed in the future: changes to, you know, API imports and things like that.
A
I think you're totally right; please don't rewrite this, but also, yeah.
D
And just so I'm sure I understand: for now, to demo what you demoed with the deployments, you just added the deployments CRD manually in the logical clusters, right?
C
Sorry, okay. I don't think I added it by hand; we included it in our payload, so this was the CRD data getting delivered so that it would be present for us. Sure, yeah. And so, I guess, Jason, back to your point, this is where we can act: there's a representation of this on the hub side, where this is all being orchestrated from. So if there's a new version, you know, we change the image or something.
A
Yeah, definitely. The syncer upgrade story is, like I said, a complete to-do: write a syncer upgrade story. So having something, especially something that would let us slowly roll it out, would help. I think the first thing we would be able to do is just say, oh, a new syncer image is there, let's apply it to everything immediately, which could cause problems.
G
Okay, is the font size okay on this? I think so, yeah. All right. So I have a couple of panes here. Up top I've got kcp running, so I'm just going to minimize that, because we don't really need to see it. This is an empty, clean .kcp directory.
G
The only thing that I did was install the Workspace CRD into the admin logical cluster. You're going to see a whole bunch of command lines that have --insecure-skip-tls-verify and --server, because I'm going to be jumping around between logical clusters and I don't have them in a kubeconfig right now. So if we go into the admin logical cluster and we ask for workspaces, there are no workspaces.
G
So the first thing I'm going to do is apply a couple of things. I have what's called a source workspace, and if we look at that, it literally is a workspace named source; there's nothing else in there. And then the next thing I'm going to do is a target workspace, and this one inherits from source.
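(A rough sketch of what those two objects might look like; the API group, version, and the name of the inheritance field in this prototype are assumptions here, so treat them as illustrative.)

    kubectl apply -f - <<EOF
    apiVersion: tenancy.kcp.dev/v1alpha1
    kind: Workspace
    metadata:
      name: source
    ---
    apiVersion: tenancy.kcp.dev/v1alpha1
    kind: Workspace
    metadata:
      name: target
    spec:
      inheritFrom: source    # field name is an assumption
    EOF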
G
So let me go ahead and apply that one as well. So now I have my two workspaces, and what I'm going to do is... (Did you apply the source workspace?)
G
Yes. Oh yeah, sorry, sorry; no, questions are good. So I have my two workspaces. If we do a get workspaces, you'll see source and target. They're new, so they're empty, and what I'm going to do is add some other CRD. So we're going to go to the CRDs, we'll apply deployments, and oops, I didn't mean to put that in admin; I meant to put that in source. All right, so now we have deployments defined in both the admin cluster and the source cluster, but not in the target cluster.
G
So if we ask source what CRDs it has, you'll see deployments. Because I goofed, you should also see that in admin, and you'll also see the workspaces. Now, the cool thing is, when we look in target, you don't see any CRDs, and this is on purpose: this is workspace API inheritance.
G
What's important is that the APIs are available; a CRD happens to be an implementation detail. We don't need a user to know that a CRD exists for the API to be available, and in fact I would say it's an anti-pattern for an end user to check whether the CRD exists to determine if they should proceed with using an API. That's what discovery is for.
G
So if we go to the source cluster here... actually, I'm going to switch to make this a little shorter. We're going to go raw, to /clusters/source/apis, and pull out the group names. So we're looking at all of the API groups that are available in the source logical cluster, and you'll see apps is down here at the bottom (I don't have these sorted at the moment). That's expected, because in the source workspace we added the deployments CRD.
G
So we would expect to see the apps API group in the source logical cluster. If you go to target, though, you'll also see apps, and that's what the inheritance is doing. If I go to some other random workspace, or I should say logical cluster, you don't see apps, and that's because it's just a random logical cluster that I typed in right now; there's no workspace for it, so there's no inheritance there.
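(A minimal sketch of those discovery checks, assuming kcp's per-logical-cluster URL prefix; the jq filter is only there for readability.)

    # API groups visible in the source logical cluster (apps should be present)
    kubectl get --raw /clusters/source/apis | jq '.groups[].name'

    # target inherits the API even though it has no deployments CRD of its own
    kubectl get --raw /clusters/target/apis | jq '.groups[].name'

    # an unrelated logical cluster does not see apps
    kubectl get --raw /clusters/some-other-cluster/apis | jq '.groups[].name'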
G
So I am able to do things like go into the target and say show me deployments, and there aren't any. I will go ahead and create a default namespace, and then I can say create deployment, image nginx, and we'll call it foo. So I've created a deployment, I should say in the target logical cluster, in the default namespace, and if I get deployments you'll see that it's there. If I go back to the source, I don't see any deployments, and so this is another distinction.
G
When we're looking at a workspace that owns the CRD and a workspace that inherits the API, the data, in this case the deployment instances, are separate. So this really is about API availability and inheritance, not about instance inheritance. So I can go into the source cluster, create a namespace, we'll call it default, and create a deployment in that source logical cluster.
G
I'll call it... well, let's call it bar, so we'll put bar inside of the source logical cluster. Now, if I get deployments in source, you see bar; if I go back to target, you only see foo. So they're completely separate, just like you would expect from the way the logical clusters work. I have one more thing to show you that's related as well, and that has to do with the core API, v1 discovery, CRDs, and inheritance.
G
So if I go to /clusters/target/api... Now, if we go back and take a look at discovery for target, you'll see that pods and pod status have shown up, and this is aggregating and combining discovery from CRDs, through inheritance, along with the resources that are available out of the box. This obviously works in the source logical cluster as well, and if we go to some other logical cluster, you don't see pods in this section.
G
This works for everything that you've seen except for OpenAPI schemas; because of the way that code is written, it's difficult to get those aggregated. I can't remember whether that was just with core v1; it's been a while since I looked at it, but there is some aggregation that goes on within OpenAPI, just not all of it. I think it's just the core v1 aggregation that's not working, so that's an area we can explore a little bit further.
G
The other restriction, or hard-coding, that we have here is that this only applies to workspaces that are defined in the admin logical cluster. So if I put a workspace in some other logical cluster and had another workspace inherit from it, that would have no effect. It was a limitation just for the prototyping for right now, and I know that Stefan and David and others have been talking about how to expand into org workspaces.
G
That would, you know, end up changing this prototype work to support more than just one org workspace, so to speak. And I think that's about all I had to show, so, any questions?
F
I had one, just to check my parsing, and I'm going to just regurgitate what you said. Basically, if I have two source workspaces, one for, maybe, people that do database APIs and one for another group of people doing some other thing, then those would both be able to flow down into one target. Is that right?
G
Yeah. This was definitely an area of exploration, to figure out what hackery we'd need to do inside of Kubernetes to make this possible, and Stefan is totally correct that we have an entire data model, separate from what I just showed, where we plan to allow API producers to export their APIs and then end users to consume them through imports, and you don't have to go through what I just demoed to make that happen.
A
All right, thanks. Thank you, this is very exciting. Actually, I do have a quick question: roughly how invasive is this to the Kubernetes code base? Is this tons and tons of changes to Kubernetes, or is it mainly using extension points we've put into Kubernetes?
G
It took somewhere between four and six iterations to figure out where to attack and insert this. There's nothing that I added that is logical-cluster specific inside of Kubernetes; I actually undid and moved some of the hacks that we had done in Kubernetes, and I was able to move them into kcp.
G
There are some pieces that I left in place because I either forgot about them or decided to defer until later. For example, there are portions of the CRD controller code that are logical-cluster aware and need to remain logical-cluster aware until we can potentially find a way to hack on top of that, instead of directly inside of Kubernetes.
G
Yes, so the inheritance part is all written in kcp and injected down into Kubernetes through some customizations for how to instantiate a client and a shared informer factory, as well as a kcp-specific lister and shared informer factory for CRDs; the lister is logical-cluster aware, because that code is in kcp. The OpenAPI aggregation currently is logical-cluster aware, and that code is in Kubernetes, mainly because right now there isn't a way around it.
G
But we are actively working to try to minimize any kcp-isms in the Kubernetes code base so that we can get all this stuff going upstream.
H
I think even if we keep it like that, on a scale from zero to ten, maybe it's a five in pain, so it's acceptable. The more we can upstream, the better, of course, but it's still the real API extension API server, and this is important, I think, so we can use CRDs even for more advanced concepts like API imports; they're invisible, but they are used.
G
One other thing that I'll mention, which wasn't a user-facing, visible change in the demo, is that I removed our reliance on kube-aggregator and wrote what I called a mini aggregator. It's a generic API server that can aggregate the generic control plane, which is the core v1 serving, along with the apiextensions API server, which serves up the apiextensions group itself, and it also aggregates anything contributed by CRDs to discovery and OpenAPI. None of that requires kube-aggregator or APIServices.
D
Let me switch there; do you see my VS Code? Yep, yeah, sorry, okay. So mainly this is the continuation of the last demo about virtual workspaces, and the first implementation of virtual workspaces, which is mainly to get the list of the workspaces a user has access to: either their personal workspaces, or the workspaces of their organization that they have the rights to list.
D
Now, I'll show that, in addition to last time, I'm using this new option here, added when Jason added the authentication and OIDC enablement, and then I have tokens too. Yup, sorry.
D
These let me manage three users, mainly user1, user2, and user3. So let me start that, and then I will start the command line of the virtual workspace API server, which will connect to this kcp instance, of course, to find the real objects, the real workspace resources that are stored there.
D
So now I can do roughly the same thing as last time, against this virtual workspace endpoint. Here I will create a workspace, workspace1. I have to pass --validate=false here, because OpenAPI is not supported for now. So let me create a workspace in my personal environment, as a personal workspace, and, corresponding to the last demo, this effectively delegates to kcp and creates a workspace object in the organization logical cluster in kcp.
D
So if I do a kubectl get workspaces, for example here, pointing directly at kcp, I can see that workspace1 has been created by delegation. Now I will create another one, this time with user2, the second user here. And now, if I get the list of workspaces for user1, I will see only workspace1, and the same for user2, of course.
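(A hedged sketch of that flow; the virtual workspace server address, URL path, and token handling here are illustrative assumptions, not the exact endpoint from the prototype.)

    # create a personal workspace as user1 against the virtual workspace endpoint
    kubectl --token "$USER1_TOKEN" --server "https://<virtual-workspace-server>/services/workspaces/personal" \
        create -f workspace1.yaml --validate=false

    # each user only sees their own personal workspaces
    kubectl --token "$USER1_TOKEN" --server "https://<virtual-workspace-server>/services/workspaces/personal" get workspaces
    kubectl --token "$USER2_TOKEN" --server "https://<virtual-workspace-server>/services/workspaces/personal" get workspaces

    # the delegated objects live in the organization logical cluster inside kcp itself
    kubectl --kubeconfig .kcp/admin.kubeconfig get workspaces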
D
Now, what happens if two distinct users want to create a workspace named workspace1 or, for example, workspace2? All those workspaces are stored in the same logical cluster, which is mainly one shard in the kcp instance; all the workspaces for the same organization are stored in the same logical cluster. So I will do that now.
D
If I create workspace2 here, but with user1, it will still create a workspace2 object. That means that, in the context of the personal environment of the user, he would see this workspace created as workspace2. If I do a kubectl get workspaces here...
D
...on this endpoint, we can see now that user1 sees workspace1 and workspace2. But in fact, as we can see in the URL here, if you directly get the raw workspace resources that are stored in kcp under the cover, you can see that this workspace named workspace2 in the context of user1...
D
...has in fact been renamed, or, you know, disambiguated, in the organization, so that the internal name is workspace2 with a suffix here, so that it's unique. And then, if I do a kubectl get workspaces...
D
...oh, sorry, not this one. If I come back and list the workspaces for user2, you can see that there is also a workspace2, but it's the one in the context of user2, and it's pointing to a distinct URL, because in fact it's a distinct workspace. And the same again: if I delete workspace2, the workspace named workspace2 for user1 here, with user1 pointing to this API server URL, which is his personal one...
D
...it would just look like workspace2 has been deleted, and of course in the list he would now see only workspace1. But the workspace2 from user2 is still there, as we can see. And if we get workspaces directly in kcp, we can see that we just deleted the workspace whose pretty name was workspace2 in the context of user1.
D
But the workspace2 in the context of user2, which is a distinct workspace, has been kept. So things are quite isolated, but still every user will see his own personal workspaces, with the pretty names he gave, through this virtual workspace, accessible from this URL. And now just the last point: of course, there would be the opportunity to share a workspace, so that, if I'm part of an organization, maybe I never created a personal workspace...
D
...but if someone shared with me a workspace of the organization, then I would have access to it. In particular, user3 never created a personal workspace, so of course, if we get workspaces for user3, we have nothing. But then, just by creating a cluster role binding that binds the cluster role associated to this workspace to the group, which is org3 here, we allow user3 to see the corresponding workspace in its list of workspaces and have access to it. But of course here it's only a cluster role binding with list or get access, so if user3 tries to delete that workspace in the organization, it wouldn't have the permission to do that.
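(A hedged sketch of the kind of RBAC behind that sharing step; the cluster role name, API group, workspace name, and group name are illustrative, since the exact objects were not shown.)

    kubectl apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: workspace1-read
    rules:
    - apiGroups: ["tenancy.kcp.dev"]
      resources: ["workspaces"]
      resourceNames: ["workspace1"]
      verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: workspace1-read-org3
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: workspace1-read
    subjects:
    - kind: Group
      apiGroup: rbac.authorization.k8s.io
      name: org3
    EOF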
H
I just have a quick comment on the use case for that. It looks like we virtualized workspace names per user, right? I think there are basically two user stories behind that. One is three tiers: we could have workspaces which all live in one org workspace, which is much cheaper than having another org for every user.
H
So this is one use case, and the other one is basically GitOps or something like it, where you have workspace manifests, say for a big application which consists of 15 different workspaces. You can have those manifests in one place and get a copy of this whole stack of 17 or 15 workspaces all at once with an apply, and the names wouldn't change; it's really self-contained. You don't have to virtualize your names manually.
D
So yeah, the next step, of course: here we have only one organization with all the workspaces inside, and the next step would be to support several organizations, and then to be able to, you know, associate, in quite the same way as has been done here, a user with an org, and then get the workspaces in the logical cluster associated with this organization.
B
So I had two questions. The OIDC token file had separate orgs for each of the users?
D
Yeah, that was at the very beginning, to, you know, make a distinction between rights associated to the subject itself, to the user, and permissions given to a group of users. But of course the orgs that are here in the token file are not, I mean currently at least, directly linked to what we call organizations.
D
Exactly. So there is a cluster role for admin rights on the workspace, and then a cluster role binding directly from the user to the cluster role associated to this workspace. So these are cluster roles with named resources.
H
See alternative five of the document for the details. We need something like unique names, and if we have a naming schema for some object, which is the whole naming question at the moment, we get that for free, basically, so we can detect collisions in names, because, yeah, when you...
D
...create a workspace on the personal virtual workspace, the first thing we do is try to create the cluster role binding, with the name being the pretty name of the workspace that the end user wants in its environment, concatenated with the user. This ensures uniqueness, so that a given end user cannot have two workspaces with the same pretty name.
D
Yeah, he would be, yeah. The thing is, of course, what I did is I created a role binding here...
D
...bound to the, you know, list role for this workspace. Of course I did that manually; I assume that in a real implementation, which we would go towards...
D
...we would probably use some sort of subresource on the workspace, a subresource to share it, for example, at the organization level or at the personal level. I assume there would also be some way for the user to change the pretty name, maybe as a separate subresource of the personal workspace resource. Since we are in this virtual workspace, you know, slash personal or slash organization, we also have the freedom to add resources for any additional features that we would want, especially sharing and things like that, where we would like to plug in additional logic.
D
Yeah, but the thing is, before coming to the, you know, kcp layer where workspaces are stored, we go through this virtual workspace layer where we have all the freedom to code, so we can do all the checks that are necessary for this type of additional action. And so, Steve...
B
I guess I don't know that I understand well enough when we might want a new subresource, and I certainly understand the object fan-out comment, but it seems like, if we want to bind names to something unique to a user, and a user should always be able to change names, implementing that via code in a subresource...
D
Yeah, I didn't have time to do anything other than, you know, creating the role binding manually, to show the possibility of sharing workspaces, but obviously it would not be done by creating the role binding manually in the final implementation.