From YouTube: Community Meeting, February 1, 2022
A: All right, hello, and welcome to the kcp community meeting, February 1st, 2022. We have an agenda that I am not presenting for some reason, so let's start with that. I think Andy beat me to the first item, which is the scoping proposal.
A: Do we want to just go through this and sort of double-check that this is what we want, and make sure that everybody is on board with both what's in it and what's not in it? Does that sound good? Yeah.
A: We'll get to it. First, deleting the query command... there we go. And for some reason, Rob is banned.
A: Yeah, so this is the script. I guess we'll just go through and see if we're all on board with it. We want to demonstrate the minimal kube API server.
A: You can fork the code that we have and run it exactly like we're about to show, for the rest of the demo. There are basically no cards up my sleeve; this is something you could also do just using these commands.
C: Oh, it's mine! I added those as a suggestion on top of Andy's suggestion. So basically I need to mark that note as ready for the ingress controller, for the ingress demo, later.
A: I'm not gonna go through that right now; I think we probably want this. Thank you, you're beating me to it, you're accepting all of these. This is also going to demonstrate the plugin. Is that looking good? I haven't played with the plugin at all, but is it looking good?
E: Well, it's merged. So, I mean, it depends whether we want to make use of it here, because it's, you know, easier, and that's not the main point; or if we want to use a raw kubectl create workspace.
A: Nice porcelain on top of the plumbing.
E: I mean, it depends; it depends just on which demo we want to, you know, focus on the manual side, on what it means to run that manually. I mean, in a number of demos we would probably switch, or at least create one cluster; so, I mean, yeah, I can use the manual variant in any of the demos, in the one we choose.
A: I think that's a good idea. Like, the first time, show 'here is what's actually happening, here is the raw stuff that's happening', and then, you know, 'in order to shorten this part of the demo in the future, we made a plug-in that does it for you; that is easier.' Yeah.
E: The thing is that here, in fact, we have two things: we have a kubectl create, to create the workspace, and then a kubectl workspace use, which mainly, under the cover, calls the kubeconfig subresource on the workspace endpoint and then, you know, merges parts of this kubeconfig content into the current config context, of course, to be able to use kubectl transparently afterwards. So, I mean, obviously it's not just one line; it corresponds to pure kubectl, but at least three commands.
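A sketch of the two flavors being discussed; the plugin command names, file names, and subresource path below are assumptions for illustration, since the exact syntax was still settling:

    # Porcelain: the kubectl plugin, one command per step.
    kubectl kcp workspace create demo
    kubectl kcp workspace use demo

    # Plumbing: roughly the three raw steps the plugin wraps.
    kubectl create -f demo-workspace.yaml                  # 1. create the Workspace object
    kubectl get --raw "/.../workspaces/demo/kubeconfig" \
      > demo.kubeconfig                                    # 2. call the kubeconfig subresource
    KUBECONFIG="demo.kubeconfig:$HOME/.kube/config" \
      kubectl config view --flatten > merged.kubeconfig    # 3. merge into the current config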
E: That's what I'm... that's why I'm saying that it's more about what focus we want to give to each demo. I mean, if we precisely have to run three commands and merge part of a config into another one, then the focus of the demo becomes much more on this. But I agree that maybe we can just run this, and then have a comment that explains that it's not magic, just a shortcut for pure kubectl commands.
A: I think... I don't know if there is also, like, a text or voice script that goes along with this, but if we use the plugin, we should describe what it's doing, because otherwise it's one of those 'look, I need a web server in one command' things. It's, you know, 'python, run a web server', and there's a lot of stuff that happens back there; it's not just one command, it's a lot of stuff. But it's just, like, a little bit of, you know, 'you can imagine what this does'. Anyway, I don't want to hang on it. Yeah, Andy.
B: Go ahead. Clayton did add a comment that we should comment as we're going through this, so I think narrating is good, whether it's only vocal or also included in comments. So, in section four, I started adding comments that describe what's going on, and I want to do the same elsewhere.
A: I will leave this open and come back to it later, when I'm not presenting, but that will be my bookmark to myself to add text. 'Create a deployment' seems good, except...
A: That made that go away, Mr. Hand. Yeah.
C: Yeah, go ahead. So, about this: for the ingress to work, I basically need to deploy a couple of controllers, well, one Envoy and the controller, and this should be done next to the kcp, you know. So it's... where do we put that? At the top? Yeah, I think.
A: Okay, yeah. I mean, again, as we're going through it, it doesn't have to be very... like, stop while we describe what we're doing now; it's just 'nothing up our sleeves': we're not running, you know, eight other services in the background that do all the magic for us, and it's all something you could go run yourself if you were interested. Okay: policy, organizations and sharding; show an org workspace. Is this all done again?
A: I am not up on the latest of what kcp is even doing these days. Org workspace, and creation of a workspace; gaining the ability to access a workspace; create a controller that list-watches resources across all workspaces and does useful work.
B: I would suggest we don't necessarily need to do 303, creating a controller, given... I mean, we have controllers that can do cross-workspace stuff, and I think just the fact that kcp is running is enough for right now. And for 302, we're still debugging some RBAC issues with the way we have things set up, so that is valid for the demo, but not working yet.
D: 301, is this, David, your workspace create?
E: Yes, I assume that's workspace create. By the way, it's already used in the first demo, but there it could be more interesting to show that, when you create a workspace, it has been assigned to a shard, and then you have a valid URL that points to that shard.
E: Yeah, but now the admin is the org. I mean, at least we could showcase the fact that we can assign a workspace. I mean: create a workspace, and it is assigned to a shard; then, if you have another shard, you can create a second workspace and it might be assigned to the second shard, if you remove the first shard. And you can see the result of this in the list of the workspaces, and also...
B: Three, yeah. So the original demos only had logical clusters and namespaces as concepts, so I think it's compelling and important to show that we've added workspaces; that we have RBAC, once we get it working on the workspaces; and that we've got the personal virtual workspaces that we can demonstrate as well. We have API inheritance, which is section four. So I think all of that is new and interesting for us to demo. Okay, so...
E: By the way, yes, as you said, Andy, it's still based on the workspace controller and the workspace shard controller.
E: Yeah, exactly. I'm just wondering; maybe it might be interesting, just commenting in the demo, to showcase that we have a first-class concept for a kcp instance. Even if we don't show moving across shards, at least we have a concept, and a resource, that clearly identifies the kcp instance, and that can be sharded in the future.
A: Yeah, compared to what we showed last year. So I'm wondering where... I don't want that setup to happen before the demo; I want it to happen at the beginning of the demo.
E: Because that's really straightforward currently: once you've started kcp, just create the shard, and create a secret with the current kubeconfig of kcp, and then everything works, you know, immediately; there's nothing to wait for. So we could just have that be part of the demo, if we want to be complete and provide the complete picture; it's instant.
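A minimal sketch of that setup step, assuming a WorkspaceShard-style resource; the group, version, kind, and field names below are assumptions, not confirmed API:

    # Secret holding kcp's current admin kubeconfig, so the shard is reachable.
    kubectl create secret generic kcp-kubeconfig \
      --from-file=kubeconfig=.kcp/admin.kubeconfig

    # Register the shard, pointing it at that secret.
    cat <<EOF | kubectl apply -f -
    apiVersion: tenancy.kcp.dev/v1alpha1   # assumed group/version
    kind: WorkspaceShard                   # assumed kind
    metadata:
      name: root
    spec:
      credentials:                         # assumed field: where the kubeconfig lives
        name: kcp-kubeconfig
        namespace: default
    EOF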
A: The other twenty percent is: as an operator, or as a user of a service that we operate, you care that it shards, that there is, you know, zero downtime or low downtime, and that it scales. That's less exciting than, like, features. I mean, scaling and reliability and speed are features, but it's less exciting than 'I can write a controller that runs across a bunch of seemingly separate clusters'.
H: Partially, yeah. And I think, as well, I'd like for there to be a clear understanding of the 'so what?'. 'Oh, we showed uploading a kubeconfig; so what?' I struggle to even think of what we would say; that's not a compelling part of the demo. Like, 'oh, we're going to build something in the future'. Great.
H: I think that's fine. I just think we need to have a clear strategy for what this means: why are we showing this to someone? Showing someone the fact that we have a shard-scheduling controller isn't useful to the user; they don't care. So translating that into some sort of, 'okay, cool, what's the thing I can do with this, and here's how I would approach it'. If that thing is cross-workspace lists, then great.
B: Here's how I would approach it. We start off and we're setting up the environment: we show that you spin up kcp, we show that you spin up the virtual workspaces server, and then, right after we do that, we have to create a shard. So we can basically say we're doing some additional setup work that is part of some stuff...
B: ...that's in progress, that relates to scaling kcp beyond a single kcp instance, and that's why we're doing this. Because it's work in progress, we're not going to talk about it any more unless we really want to. But it's not really saying 'we have this workspace shard scheduling controller, and blah blah blah'; it's just: we have to do this step because it's work in progress, and we'll have a cool demo for it later.
A: I agree; let's move on. Anything else on policy, organizations and sharding? Or we can move on to API inheritance, which is also not done yet, so we'll see; this will be fun. 'Create a new source workspace, and inherit': should this inherit from source?
B: No, and that's why it says 'eventually'. What I want to demonstrate is that you have two workspaces that are independent, not inheriting from anything. You go to the source workspace and install a CRD, and then you create an instance of that custom resource in the source workspace, and you show that it was created.
B: Then you switch to the workspace that eventually will be inheriting, and show that there are no CRDs, there are no cowboys. And then the part that, you know, I haven't finished yet is: I need to patch the workspace so that it does inherit, and I think that's gonna have to be a raw kubectl command and not a kcp workspace command, because I can't edit from there.
B: Yeah, I think it'll probably go away. But, you know, the way we describe this is that we did some exploratory work to prove out that we could inject APIs into a workspace without them actually living there as CRDs in that workspace, and that this lays the groundwork for the API importing and exporting that we plan to do in the future, which will be much more powerful than just inheriting some APIs from one single parent workspace.
A: Okay, so this section is also a little weird, because we're showing something that is both incomplete and deprecated, basically, right? Like, this will work, and we think we're going to do something better.
A: Yeah, absolutely; sorry, I didn't mean it to sound like we shouldn't demo this because it's going to go away. Like you said, the point of it is to show we learned, we did it, we showed it was possible, we're going to do it, and it doesn't go away.
B: Okay, I will go in, hopefully today, and get the rest of this fleshed out.
A: When we patch that workspace to inherit, does it need to do anything else before? Like, I imagine the next step is, 'and now, look, it has cowboys', and now, when I change the cowboy type up here, it changes down there. You know, the magic trick is: I change it in my left hand, and it changes in my right hand.
B: So what it'll show is that when you patch to inherit, if you run the api-resources command, you'll see cowboys, so I'll add that stuff in there too; and you'll be able to do CRUD operations on cowboys in the inheriting workspace. And it's not going to pull in any instances from the source workspace; it just gives you access to the API.
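A rough sketch of that sequence; 'inheritFrom' is an assumed field name for the patch being described, not confirmed syntax:

    # Patch the target workspace so it inherits APIs from the source.
    kubectl patch workspace target --type=merge \
      -p '{"spec":{"inheritFrom":"source"}}'

    # The inherited API now shows up in the target workspace...
    kubectl api-resources | grep cowboys

    # ...but no instances come with it: inheritance shares APIs, not objects.
    kubectl get cowboys   # expect an empty list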
A: Yeah, and I think that is also even a good segue to, like, 'but there's something even more powerful coming down the conveyor belt', which is, you know, complex import/export relationships.
A: Okay, is that everything we have? Is there anybody sitting on anything else that they wanted to add?
A: Andy, did you want to talk about your item, now that I'm done preempting you, or do you want to talk about the namespace scheduler versus the other two? I can wait. Okay, all right. We had some discussion in Slack about... I guess I would say that the current state of the namespace scheduler and the current state of the deployment splitter and the ingress controller sort of fight, and we should, I think... well, the deployment splitter can just go away over time; I don't think it's even part of our current active path toward where we want to be. I think the namespace scheduler is more on that path. But the question I have is: should the ingress controller continue to exist the way it is, and the namespace scheduler get off of its lawn? Or do we want to somehow (ideas welcome) figure out how the ingress controller's work should cooperate with the namespace scheduler? Andy, go ahead.
B: So I had a thought yesterday which I think would be a simple fix. I don't know if it's a band-aid or an acceptable long-term solution, but we could add an annotation to a resource that basically tells the namespace scheduler, 'don't touch me'. So you could create an ingress and put on the annotation that says 'I'm a root, don't touch me', and then the namespace scheduler wouldn't assign it to a pcluster.
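A sketch of what that could look like on an ingress; the annotation key is invented for illustration:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo
      annotations:
        # Assumed key: tells the namespace scheduler 'I'm a root, don't touch me'.
        experimental.kcp.dev/unschedulable: "true"
    spec:
      defaultBackend:
        service:
          name: demo
          port:
            number: 80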
A: Yes, per instance. Is there a race there? Because a user is just going to say, you know, 'create ingress', and they might not annotate it with 'I'm the root'. The controller for the ingress would say, 'oh, you look like a root; I'll apply that and then create your children', but in the meantime the namespace scheduler might try to do something with it too.
B: Oh yeah. So, when Kyle and Chris were working on the e2e tests in the pull request that's doing the namespace mapping: if you have two workspaces that both have the default namespace, and they're sharing a single physical cluster as a target, then we have this PR that says, okay, 'default' in workspace one, when it lands on your cluster, is not actually 'default'; it's something like 'workspace-one-default', which eventually will switch to a hash. But what ends up happening is the ingress controller...
B: ...expects that the root ingress, the one that's supposed to live only in kcp, never gets a label on it assigning it to a cluster, because the ingress controller creates leaf ingresses that do get the label, so that there's one per physical cluster. The namespace scheduler, on the other hand, says: I'm going to assign each namespace to a random physical cluster, and then anything that gets created in that namespace is also going to go to that same physical cluster.
H: Is there one root ingress per logical cluster? Who creates it?
A: Yeah, this goes to, I think, a long debate we had about how all of this should work in general, right? The deployment splitter was a demo for the general concept of this, but I think we had a lot of conversations about whether it should be creating new leaf resources, or children resources.
A: The syncer should see the root, see the actual thing being scheduled to the pcluster, and then say: I know that I'm not supposed to just create an ingress here; I'm supposed to do some subset of that and coordinate. You know, both figure out what that is for this type, and generalize it across all types.
E: It's quite related to the two-step syncing proposal that I'd done some weeks ago. Of course, it was really, you know, a sort of dummy implementation, but the ideas behind it were really to separate scheduling from transforming objects (with possibly adding new objects, for example ingress leaves), and the last part would be, you know, completely transparent syncing, completely systematic syncing. So this was one approach. It seems to me that maybe the more current approach to that is something that Clayton also mentioned:
E: We would possibly have (at least, it's one option) a virtual workspace that would gather all the resources that have to be visible to a given syncer; and then, when doing that, it could be possible, in the virtual workspace, whose goal is typically to, you know, add some logic on resource gathering and retrieval, for such a virtual workspace to be enhanced to provide on-the-fly transformation of the resources it exposes to the syncer. So, I mean, it seems to me that it's related to those...
E: ...you know, exploration points. But the main thing is: do we decide to somehow segment, or separate, pure syncing from transformation from scheduling?
A: Yeah, I think, in the fullness of time, and by the time this is more mature, we should have a better general solution, which might be the two-phase syncing, or the transformer that makes something available in a virtual workspace...
A: ...that gets seen differently by certain syncers, or seen in the normal way by a syncer that knows what that means for itself. But I don't think we could come anywhere close to solving that problem in a general, or even specific-to-ingress, way. So I think that's an open area for exploration in the future.
A: Yeah, I mean, in the worst case it trades one problem for another: instead of having 101 objects being stored and updated, we have one object that's updated a hundred times per update, right? Which is just a different dimension of problem.
E: Yeah, but then it becomes mainly... I mean, if we have something like, in the future, a virtual workspace that just presents these as distinct resources to the syncer, I mean as typical kube resources, the whole question then becomes: where do we store the, you know, internal statuses? Where do we store the additional transformed information, and the synced-back information, that should not be seen by the external kcp clients? It becomes a sort of storage question, as soon as we have a virtual workspace layer that would present things and...
A: Yeah, I agree. I don't think any of us has even a third of the answer, and there are just more problems the longer we talk about it. So, yeah, we can table it for now. I think, short term...
A: ...we can have the namespace scheduler just ignore an ingress, or, in general, take a list of things we ignore, and that list defaults to ingress; but I will feel bad about it until we figure out what the right answer is. Is that something that needs to go in ahead of the demo? Yeah, I think so, because currently it sounds like the namespace scheduler and the ingress controller, which we plan to have both running for the demo, will, yeah, fight.
A: Luckily, it's an easy fix on the namespace scheduler side. So, yeah, and I will create a doc, and welcome your contributions to poke holes in the many ways we're doing this wrong, because, yeah, I don't think we have a solution to this yet. Anything else on that topic?
B: Okay, so my topic from the past several meetings is around scoping. If you are new and missed out on some of the previous calls: the idea here is trying to come up with some proposals to go to upstream Kubernetes, to make changes to some of the core code to support...
B: ...what, for kcp, is logical clusters and workspaces. But from an upstream perspective, if we're not necessarily trying to get logical clusters/workspaces into Kubernetes, then we need some more generic term for it. So this document is pretty long, and it is a big scratch pad.
B: So what I'm interested in is some specific aspects of it, and I would love to get some feedback (offline is totally acceptable) if you look at, mainly, the scoped listers and scoped clients portions of this. So the idea here is that a scope represents a subdivision of a cluster.
B: The idea behind a scope is that it represents, effectively, a full cluster, but a single API server can have multiple scopes, and they're isolated from each other. So if I have a scope A and a scope B, within each of those there are cluster-scoped resources (so the naming's a little confusing), and then there are namespace-scoped resources. So I can create a CRD in scope A, I can create a CRD in scope B, and if I say, 'let me go see, let me get a list of all the CRDs'...
B: ...you generally would only see them for either scope A or scope B, but not all of them, unless you do something where you specifically tell your lister, 'I want to see everything'. So what I'm looking for feedback on are the proposed API changes in here; you can see some of the descriptions around what I would do for scoped listers, and then scoped clients.
B: This is probably something folks are maybe a little bit more familiar with, if you're used to generated clients. So, when you look at something like a cluster-scoped client set (this is existing code; there are no changes in what I have highlighted here), you'll have a getter interface, where you can call the name of your kind and get back an interface for it, and this interface has all the CRUD methods: create, get, list, update, patch, and so on.
B: So I have one proposal, which is: we could add a Scope method inside the interface. You'd have create, get, list, and Scope; and Scope would return basically another instance of the same custom resource definition interface, scoped to a particular scope, which basically maps out to a logical cluster, in kcp's world, or a workspace.
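A sketch of that shape in Go, using CustomResourceDefinition as the example kind; the Scope type and method placement are assumptions drawn from the discussion, not the actual generated code:

    package clientset

    import (
        "context"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // Scope names a subdivision of the cluster; in kcp's world it would map
    // to a logical cluster or workspace.
    type Scope string

    // CustomResourceDefinitionInterface keeps today's CRUD methods and adds
    // a Scope method returning the same interface, restricted to one scope.
    type CustomResourceDefinitionInterface interface {
        Create(ctx context.Context, crd *apiextensionsv1.CustomResourceDefinition, opts metav1.CreateOptions) (*apiextensionsv1.CustomResourceDefinition, error)
        Get(ctx context.Context, name string, opts metav1.GetOptions) (*apiextensionsv1.CustomResourceDefinition, error)
        List(ctx context.Context, opts metav1.ListOptions) (*apiextensionsv1.CustomResourceDefinitionList, error)
        // ...update, patch, delete, watch, as generated today...

        Scope(s Scope) CustomResourceDefinitionInterface
    }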
B: And I have a discussion, and a comment here, with some issues that I have with the current proposed approach, so please feel free to read that as well. And I am going to add some more on scoping of storage, for the API server side, but I haven't written that out yet. I also need some help on, basically, the biggest to-do here:
B: how do we justify the need for scopes upstream? Because within a Kubernetes API server, as things exist today, there are just things at the cluster scope and things that are namespace-scoped, and there's no reason, or no need, in pure upstream Kubernetes today to subdivide those any further. But if we can come up with some justifications, I would love it; I'd love some help there. I do see, Steve, you're asking...
H: Like, can you... because I was thinking, like... I'm not sure if you've been running with this, but the previous conversation that we had, about what scopes would be doing in vanilla kube, was all based on the...
B: Yeah, I see; there's nothing stopping you. You're saying the scoping would not necessarily map to a logical cluster like we're doing it; it's really just, given a full key space, can we subdivide it? And, yeah, I don't know. If you basically say cluster scope is cluster scope, and CRDs are cluster-scoped, I don't know how you would subdivide that into multiple, like, 'my scope', 'your scope', and not end up with logical clusters like we have them right now. So it's tricky.
H: I don't think you'd need to. I mean, I think it would be fine even for a cluster-scoped object, like if you were doing some sort of hashing-modulus thing to just break it up into different things that the controller looks at. I just think that the way we present the scoping concept may need to be less strict, and provide fewer guarantees, than the kcp logical cluster concept.
B: Yeah, I understand what you're saying; that makes sense. So, a couple of other things. I do have some branches of Kubernetes and kcp that have an initial prototype implementing all of this stuff. I am currently in progress, or in a fight, with redoing the prototypes as proper commits, because most of the commits in these branches are just WIP, because I was in a hurry. So I do have another branch, which I'll link in here as well...
B: ...that's got this in a more logical series of commits. If you're interested in controllers, listers, caches, the key mechanism, the way that indexes work, how the queueing works: there's a lot of good information in here about different patterns, and some things that I would like to make singly, globally configurable but modifiable, so that we can change what key functions are used, for example. There's a lot of good information in here.
B: It eventually needs to be edited and organized; like, when you get down to the stuff in the later part of the doc, there's a lot in here that is not well organized yet. But if you're looking for lots of good tidbits on what sort of changes we need to make, to hopefully get Kubernetes to accept them upstream so that kcp can eventually stop working on a fork...
B: ...there's lots of info in here. Please take a look if you're interested; I really could use the help, and would appreciate it, at least for the API changes, because to do what's in here I have to go modify the generators, and I would prefer to do that once or twice, as opposed to multiple iterations.
A: So, as far as proposing it upstream, and why they should want it: all the examples we have are basically for logical clusters, for scoping beyond the scope of a cluster. Is there any possible benefit, either performance or UX or features, to scoping further down? Like, could I use a scope that more efficiently does label selectors, or more efficiently does 'only things within this namespace', or something like that? Is there any possible benefit to using the scope, injecting a scope, to do other stuff that people might want to do?
B: You can modify a shared informer factory to customize the label selector factory-wide, but that doesn't help with clients. The way that I've approached the code in the prototype (and it could change) is that the scope ends up getting set on a request for a client, for example, and then there's an opportunity for the scope to mutate the request before it's sent over the wire.
B: So, you know, it would have to manipulate the raw request details, so the URL and whatever else; but maybe I could find some ways to broaden that, so that you could make a more easily constructible, like, label-selector scope, for example.
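A minimal sketch of that request-mutation idea; every name here is an assumption for illustration:

    package scoping

    import "net/http"

    // RequestScope gets a chance to rewrite a client request (URL, query,
    // headers) before it is sent over the wire.
    type RequestScope interface {
        WrapRequest(req *http.Request) *http.Request
    }

    // labelSelectorScope shows the broader use mentioned above: forcing a
    // label selector onto every list/watch request, independent of any
    // logical-cluster semantics.
    type labelSelectorScope struct {
        selector string
    }

    func (s labelSelectorScope) WrapRequest(req *http.Request) *http.Request {
        q := req.URL.Query()
        q.Set("labelSelector", s.selector) // narrow what the server returns
        req.URL.RawQuery = q.Encode()
        return req
    }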
B: Yeah, I think it would be an interesting exercise to explore what it would look like to do kcp logical clusters as scopes, combined with a controller where you want to shard, where you want to bucket things; because if we either support logical clusters or we support HA sharded controllers, but we can't blend them together somehow, then I think we have a weaker argument, or it'll be harder for us.
B: Anyway, we're almost out of time, so I'm going to stop presenting; I've made my pitch. So please be thinking about this, and if you have any questions or want to help out, just reach out.
A: Have we also... I mean, have you, Andy, or Clayton, or anyone else, started discussing this with anyone upstream at all? I haven't.
B: Yeah, I should probably go chat with Jordan and David Eads, at least start with those folks, and see if this just falls flat, or they could envision some uses.
B: And if you don't have permission to assign the milestone, just let somebody know; I can do it, and other folks on the call can do it as well.