From YouTube: Community Meeting, March 15, 2022
A: Welcome, everybody, to the kcp community call of March 15th. We have an agenda with a couple of items. I guess we will skip the first one for the moment and move it to the end. I'm not sure, Paul, should we do that first, or should we first talk about the technical topics? Yeah? Let's start with the technical topics. All right, so that gives the floor to Jason. He has two topics.
B: Yeah, I guess not a lot of discussion, I think, but there was some chatter in Slack this morning that I wanted to advertise here, which was: as part of prototype 3 we're going to have a static deployment YAML for syncer installation. "Static", maybe I should have put that in air quotes, is going to be a template YAML document in the repo somewhere. As part of installation of the syncer, you have to replace some strings with some other strings to point at the right kcp address, put in the certificate data, and specify your cluster name. But it's not going to be generated; it's not going to be generated by any kcp code. It's going to be some YAML.
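Mechanically, the installation step described here amounts to a hand edit, or a trivial string replacement like the sketch below. The placeholder names and the file path are hypothetical; they illustrate the kind of strings a user would swap, not the actual template in the repo.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Fill in the placeholders of the static syncer manifest; kcp itself
// generates nothing here, the template is just a file in the repo.
func main() {
	tmpl, err := os.ReadFile("syncer.yaml") // hypothetical path to the template
	if err != nil {
		panic(err)
	}
	rendered := strings.NewReplacer(
		"KCP_ADDRESS", "https://kcp.example.com:6443", // where the syncer dials home
		"CA_DATA", "<base64 certificate data>",        // trust for that endpoint
		"CLUSTER_NAME", "my-workload-cluster",         // the workload cluster to sync for
	).Replace(string(tmpl))
	fmt.Print(rendered)
}
```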
A: Yeah, just one comment: the syncer will know about deployments, for example, so we are already dependent on certain types and knowledge about them. Because of service accounts, and that's a topic coming later, to mount them you have to modify deployment specs, pod specs basically. So there's knowledge about that, and we could use that knowledge as well to sync only what's referenced, with secrets especially. Secrets are important, those which are referenced.
A: So there was a discussion a few days back, last week I think, at the end of the week. The medium-term vision is that there is something in the API, in the workspace where the workload cluster objects are created, something which tells the syncer and other actors which resources should be synced. I think we used the name "external sync", something like that. So there will be an object which tells you that for those other objects. For now, it's just a sensible list: deployments, secrets, config maps, service accounts, I don't know, something like that.
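To make that concrete, a hypothetical sketch of what such an object could look like as a Go type. The name, group, and fields are illustrative only; the meeting itself notes the "external sync" naming was tentative.

```go
package workload

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// SyncedResources lives in the workspace next to the workload cluster
// objects and tells the syncer (and other actors) which resource types to
// sync down to the physical cluster. Name and shape are hypothetical.
type SyncedResources struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Resources is, for now, just the sensible default list mentioned in
	// the meeting: deployments, secrets, configmaps, serviceaccounts.
	Resources []string `json:"resources"`
}
```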
A: So I think it's mostly provided by the user. There's a kubeconfig which you create in the secret. The logical cluster name is the workspace where the workload cluster object should be created. So it's one workspace, which is selected by the user deploying the syncer. Okay.
E: I have a quick question related to this: we're registering a cluster running Paul's syncer. How does the, what's it called... we're not just synchronizing things, we're also doing API import there.
A: I guess there's again a short-term solution, which I'm also not completely sure of. Medium term, there will be an API to provide discovery information: there's another object where the syncer will publish the schemas, and then the negotiation machinery will restart and do all of the things it does. Short term, I also have a question mark there: who will do discovery, and doesn't this need access to a physical cluster?
F: The API resource bit that the cluster controller and syncer manager spin up sets up importing CRDs. I don't know if that's only when you have auto-publish APIs turned on, but it definitely pulls stuff in.
G: Yes, I mean, we need to revisit that with the external sync stuff, which is mainly something built on top of the basic API bindings.
A: Otherwise, I will quickly talk about service accounts. With the help of a number of people we have service accounts merged, which means the controllers are running: you can create service accounts, you get secrets as tokens, all of that is done, and authentication works. The semantics are basically like in kube, but there's nothing like cross-workspace authorization; service accounts always apply to the workspace they are defined in. This doesn't have to be like that forever, we can talk about extensions, but it's like that for now. And about membership or access to an org: those service accounts automatically, implicitly get org access, so there's nothing to do, no roles in the root workspace, nothing like that. Service accounts just work automatically in their workspace, nowhere else. As for ongoing work, your team is working on syncing them into the physical cluster. This is not done yet, so you cannot deploy pods, deployments, whatever, which use them, but hopefully we will see that very soon.
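A minimal sketch of what "semantics basically like in kube" means in practice: standard client-go calls against a kubeconfig scoped to a single workspace. The kubeconfig path and namespace are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Point the kubeconfig at a kcp workspace (hypothetical path); the
	// service account and its token are only valid inside that workspace.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/workspace.kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Create a service account exactly as in plain Kubernetes.
	sa, err := client.CoreV1().ServiceAccounts("default").Create(context.TODO(),
		&corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Name: "demo"}},
		metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", sa.Name)

	// At the time of the meeting, token secrets were produced by a
	// controller, so a token for "demo" should eventually show up here.
	secrets, _ := client.CoreV1().Secrets("default").List(context.TODO(), metav1.ListOptions{})
	for _, s := range secrets.Items {
		if s.Type == corev1.SecretTypeServiceAccountToken {
			fmt.Println("token secret:", s.Name)
		}
	}
}
```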
H: On service accounts, for myself and Stefan: I just want to double-check, because this is one of those fundamental things where it's always better to ask rather than let an assumption go untested. We still don't really think that the key goal of kcp's workload mindset is to run controllers on underlying clusters, but we very much want to ensure that all workloads running on physical clusters have a connection to a high-level service account that could be part of larger systems of identity and authorization, which in practice is probably how they are going to behave.
H: The goal of kube was... we were like, oh, we made service accounts magic, and then people used them to deeply couple workloads to the cluster they're on. kcp's workload service, transparent multi-cluster, does not couple workloads to the cluster they're on. The next question, then, is: are we deeply coupling controller work? Is the goal of kcp's transparent multi-cluster to run workloads?
H: Yes. Is it to run controller workloads? One percent of the time. How much mental time are we spending on it? It's nice and convenient to do magic service account authorization and authentication, but is it the same problem that it is on kube with operators and controllers and those patterns? I would probably say no. So maybe we can...
H: A service account is for a service, and identity for workloads is super important. But the magic that makes controllers loop back to their own cluster and work automatically was accidental in kube, and it has a whole bunch of side effects that are undesirable from a security, reliability, and tenancy-isolation perspective. Before we reproduce that magic automatically in kcp, we should think about it. So, well, we can do a follow-up, right?
B: I raised my hand and lowered it because I saw a rat hole opening underneath me, but I think we've mostly said everything that I was going to say.
H: I would say the current state is already recorded in the use cases in the transparent multi-cluster design doc, so adding net-new above that is the caution. And if we have a disconnect between what we're implementing and what those docs say today, that's just a good chance for us to ask: are we actually following the doc? Or are we saying there's a doc but we don't actually agree on the key points because the doc's not clear, and we need to go debate it more?
A: All right, I don't see hands. We'll postpone to another meeting, maybe, to discuss this deeper. All right, next one: Steve, CRDB magic.
I: Cool, super quick updates. We're basically passing all tests up through e2e. There are patches in core kube to enable the new storage, there are patches in the integration stuff, there are patches in kubeadm, so you can run a cluster doing this. I've found quite a large number of things in kube itself where people were bending the rules of the API while writing tests and stuff, and those tests obviously broke when I redid the API. So there's a lot of stuff that's merged upstream. I'm going to try to rebase my changes on vanilla upstream kube just to get the minimal set, and then I will also try to get a minimal PR against our fork.
A: I agree, I think it would be a good time to do it. I think it is mostly completely orthogonal, I imagine, just based on mucking around with the code base.
I: Yeah, and at least on my fork of our fork there's been a bunch of stabilizing I had to do to make things actually run and tests go green and whatnot. I'm not sure in what order; I don't want to pile extra stuff on top of the rebase, it shouldn't be "whoever's rebasing, okay, also now fix all these problems". So I'm open to thoughts on how to intelligently get this stuff together.
F: Yeah, I agree with you, there's a lot of stuff that needs to be done to improve the hygiene. Totally happy to come up with a plan for reorganizing commits, grouping things together, and what rebases should look like going forward. I'm off next week, but would certainly entertain talking about it when I'm back.
D: Cool. So March 18th is the end of P3, which is awesome; we're coming up on that pretty quick at the end of this week. It's time to start talking about our themes for P4 and deciding what we want to focus on there. I wanted to propose to the group, just based on some of the feedback that I've seen and how P2 and P3 have gone, that rather than scatter our focus and make a little bit of progress on a bunch of different areas, we concentrate on one theme.
D: But I also want to suggest that we take advantage of our scheduling, how we moved P3 to be on the 18th, giving us a bigger iteration for P4, and provide ourselves a little bit of breathing room on our scoping and design work for P4 by taking maybe next week, say to the 28th, to actually scope out and design whatever we do decide on working on.
F: I like the idea of trying to focus on things. I think there are a lot of areas we could spread ourselves across in the code base, and rather than doing that, we could try to clean up some tech debt and improve the user experience for a small subset of things, like transparent multi-cluster and some evolution with APIs.
D: All right! Well, if people don't feel too strongly about it, my suggestion was going to be that we pick as our topic transparent multi-cluster, and really feel good about the story we can tell around transparent multi-cluster, and also our repo hygiene and engagement: what do people first see? How do they get involved in kcp? And what's that three-sentence message that we need them to understand at the end of P4, so that they're like, yeah, I get it?
H: It's the killer app, but you don't want it to be a dead killer app, and I think that's the point: it doesn't have to be perfect, but it has to work.
A: Yeah, I mean, every day, every morning I see questions from people trying it, and where they fail is, I mean, the list of resources. It's always there; people hit that one. This should be hidden from users. For example, we should say deployments and replica sets and XYZ work, and they should work out of the box.
B: Along the same lines, and slightly bigger in scope than that: I've heard from a lot of people who want webhooks and who want pod exec and pod logs, and I realize those are massively bigger issues than hard-coding the list of resources that work, but I think we need to at least have an answer to those questions that is better than the answer we have now. The answer we have now is, like, yeah, we know we need it eventually, and we have a vague idea of how to do it.
D: Okay, it sounds like folks are pretty comfortable focusing on whatever parts of transparent multi-cluster we want to prove out, and that could be superficial if we need it to be for some areas, with a deeper implementation for others. And just to be clear, I don't think that means we don't make progress in other areas. We've got a variety of contributors on the team who have different focuses they want to pursue, which is perfectly fine.
D: I think we need to actually pick off the pieces of TMC that we agree on, and I'd like to suggest that by the 29th, which is in two weeks, we're able to review designs of what we've picked off. That means we have this week to finish up prototype 3, and we have about a week and a day or so to actually sketch out those issues, scope them out with the actual tasks we will complete, and probably have some sort of script sketch for what we want to show.
B: Would it be helpful or anti-helpful to open an issue for designing pod exec, or, you know, designing the pod logs stuff, and just get people's thoughts and vague high-level ideas, and then from that derive a design? I could probably come up with a design, but I want to make sure that everybody's interests are involved in that, and the same with webhooks.
A: So let's go through them. There aren't so many, which is good; it seems we're making progress. The first one is the API inheritance test flake. Andy?
F: Yeah, it just showed up in one of my PRs, in one of the e2e multiple runs, so I haven't looked at it yet, but I will be changing that end-to-end test as part of my API binding controller work.
A: I mean, we had a couple of issues last week and at the beginning of this week. Everybody who writes tests, please be aware: we have controllers doing things, we have informers which have to be in sync to do things, so we need require.Eventually in many places. That's what we found and fixed, and it's what plagues flaky PRs.
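A minimal, self-contained sketch of the pattern being asked for, using testify's require.Eventually; the simulated controller below stands in for whatever asynchronous reconciliation a real kcp test would wait on.

```go
package e2e

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// reconciled stands in for "the controller has caught up"; in a real test
// this would be a Get against the workspace checking observed state.
var reconciled atomic.Bool

func TestEventuallyPattern(t *testing.T) {
	// Simulate a controller that finishes its work asynchronously.
	go func() {
		time.Sleep(200 * time.Millisecond)
		reconciled.Store(true)
	}()

	// Instead of asserting immediately (which races with controllers and
	// informer cache sync), poll until the condition holds or time out.
	require.Eventually(t, func() bool {
		return reconciled.Load()
	}, 30*time.Second, 50*time.Millisecond, "controller never caught up")
}
```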
A: Maybe it's a different or a similar one, I don't know, we will see; could be. All right, next one. It was a follow-up from our auth work in the virtual workspace API server. There was one check missing which we have in cluster workspace admission, namely authorization; for that I added a PR. It's nothing big, just another subject access review to check that the user can use the cluster workspace type. The PR is up, ready for review.
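For context, a rough sketch of what such a check looks like against the Kubernetes authorization API. The group, resource, and verb below are assumptions for illustration, not necessarily the exact values in the PR being discussed.

```go
package admission

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canUseWorkspaceType asks the API server whether the given user may "use"
// a cluster workspace type. Group, resource, and verb are illustrative.
func canUseWorkspaceType(ctx context.Context, client kubernetes.Interface, user, typeName string) (bool, error) {
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: user,
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Group:    "tenancy.kcp.dev",
				Resource: "clusterworkspacetypes",
				Name:     typeName,
				Verb:     "use",
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, fmt.Errorf("subject access review failed: %w", err)
	}
	return resp.Status.Allowed, nil
}
```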
F: Yes, if you've got a PR for it, I think, feel free to just set the milestone, you know, if it's something that needs to go into whatever milestone it is.
F: That probably is a P4 thing, and I know there was a question from Kyle, I think it was Kyle, that came in after I created it. So, Stefan, you and I should take a look at that and respond; I'll put prototype 4 on that.
F: Yeah, so that's the meta namespace index func that doesn't record the cluster name in the key. That's going to be a big chunk of work to fix. It probably should get rolled into the scoping work that we eventually need to plan out and get merged. So I think that's something we should try and tackle for prototype 4, don't you?
A: What you have now is basically index buckets which are keyed by namespace correctly, but the namespace keys are not prefixed by workspace. So the buckets are big, they span workspaces, but inside each of the buckets of the index everything is unique. So it's not bad for get, but it's bad for list.
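A minimal sketch of the kind of fix being described: a key function that prefixes the standard namespace/name key with the logical cluster, so a list over a namespace bucket stays within one workspace. How kcp actually records the cluster name is an implementation detail; the annotation key below is a placeholder.

```go
package indexers

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterName extracts the logical cluster (workspace) the object lives in.
// The annotation key here is a placeholder, not the real kcp mechanism.
func clusterName(m metav1.Object) string {
	return m.GetAnnotations()["kcp.dev/cluster"] // placeholder
}

// clusterAwareKeyFunc is like cache.MetaNamespaceKeyFunc, but prefixes the
// key with the workspace name so an index bucket never mixes workspaces:
// "<cluster>|<namespace>/<name>" instead of "<namespace>/<name>".
func clusterAwareKeyFunc(obj interface{}) (string, error) {
	m, err := meta.Accessor(obj)
	if err != nil {
		return "", err
	}
	if ns := m.GetNamespace(); ns != "" {
		return fmt.Sprintf("%s|%s/%s", clusterName(m), ns, m.GetName()), nil
	}
	return fmt.Sprintf("%s|%s", clusterName(m), m.GetName()), nil
}
```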
H: That's actually an interesting one, but I haven't heard a concrete use case for it yet. I think this falls in the bucket of: we're not really good at getting the patterns of how people should actually be using this stuff documented. Like that first one I tried for the observability guys; I want to go back and do that again for a couple of other examples.
H: If someone has a cross-workspace, namespace-scoped use case, and I can think of a few off the top of my head, like system namespaces that just by convention contain a bunch of policy that a controller could read, I just don't know; we would probably need some examples of it. But that's a good one, though, Steve, to also tie into our access patterns doc: say, hey, we thought about this, but we don't have a use case for it yet, so we're not worried about it.
H: Right, it's expensive for databases too, but even in etcd it is actually not that expensive if you can set up the chunked scanning effectively.
H: You're right, that's actually one, so I'll put that in access patterns. It's tough because we're struggling to find... we have use cases scattered across a couple of places, and I was trying to come up with bigger use cases, and then we have things like controller access patterns which exist in the design docs, for the database stuff.
H: So I do think the canonical place kind of is at the storage layer. What access patterns you can support, or choose to support, defines the shape of the data model you need, and data is truth. But I think we're a little weak today about coming up with them and then getting them all in the same spot so someone can go find them. So maybe we start by putting them in the access patterns section of storage right now, which is in the sharding design doc.
F: Yes, a reminder for the PRs too: if you open up a PR and you expect it to get merged, please set the milestone on that also.
E: I mean, the goal for me is enabling a way to deploy kcp with some physical clusters with kind, because we actually need to validate controller interaction and have it be consistent; you just run a command. So we can use that in e2e, and we use that for development.
E: We use that for demoing, but that's kind of riffing on previous conversations we've had, like, yeah, the demo script is in bash and that's hard to maintain. So explicitly in that issue it says: you would like something in Go, please. I mean, yeah, it's a little bit more expensive initially, but we could actually maintain it over time, it's easier to do cross-platform, all that jazz.
E: Yeah, I mean, we have shared fixture for most of our tests today, but what that implies is nothing unless you run a persistent server, because we don't have a good way to compose a test suite and be able to do test-managed fixture. It's really easy to do persistent fixture, except that you have to be able to start whatever you're going to be targeting. So currently, I know the command-line invocation.
F: Do you want to spec out a quick sketch of what you'd expect the Go structs and interfaces to look like, and what you might want from a command line, and then we can put, you know, help-wanted on it?
E: Sure. I mean, to me, I don't really care about the internals. I just want: given kcp and kind binaries available on the command line, give me something.
F: Yeah, so, and expose the configuration in a way that's consumable. You have what I would say is a minimal desired UX in mind; you have an experience and an outcome that you're looking for. So at the very least, if you can sketch out what you want the command line to look like, or how people would do this, then somebody can go implement it. It's not quite so tangible with what's in the description here, if that makes sense. Sure.
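A rough sketch of what such Go structs and interfaces might look like, assuming kcp and kind binaries are available on the PATH. All names here are hypothetical illustrations, not an agreed design.

```go
package framework

import "context"

// Fixture is a running environment a test or demo can target. It may be
// started fresh per suite (test-managed) or attached to an already-running
// deployment (persistent).
type Fixture interface {
	// KubeconfigPath returns the kubeconfig pointing at the kcp server.
	KubeconfigPath() string
	// Stop tears down anything the fixture started (no-op when persistent).
	Stop(ctx context.Context) error
}

// Options describes the environment to build: one kcp server plus a number
// of kind clusters registered as workload clusters.
type Options struct {
	KcpBinary   string // e.g. "kcp", resolved from PATH
	KindBinary  string // e.g. "kind"
	NumWorkload int    // how many kind clusters to create and register
	Persistent  bool   // reuse an existing deployment instead of starting one
}

// Start builds the environment described by opts. The same entry point
// would back `go test` fixtures, a demo command, and local development.
func Start(ctx context.Context, opts Options) (Fixture, error) {
	// Implementation elided: shell out to the kcp and kind binaries,
	// wait for readiness, run the syncer registration steps.
	panic("not implemented")
}
```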
J: Yeah, oh, go ahead... is it my turn, Paul? Yeah, okay. So I was thinking in the context of how something like OLM, or, if people have been following RukPak, things that manage control plane updates or upgrades over time.
J: So that we can upgrade things that change the control plane, let's say CRDs, API services, et cetera, before committing to that change for the actual workloads on the cluster.
J: As a service, as some meta-operator that manages upgrades of a set of, let's just say, CRDs on the control plane, right, for a service.
J: When I change that configuration, what I expect to happen is an upgrade, right? I'm basically describing OLM or the CVO, but operating over kcp clusters and pivoting between kcp clusters, instead of just changing the cluster that they exist on. And the reason is, I want to kind of avoid... well, I'll stop.
F: Yeah, I think, Nick, we should talk later, okay? Because Stefan and I and others have been working on API exports, API imports so to speak, which we're calling API bindings, and then being able to evolve a schema over time.
F: So if you've got a CRD and it looks like some shape, and then you add some fields and move some stuff around, we want to be able to make those changes identifiable, assuming we can automate figuring out what's compatible and what's not. And we want to allow controllers, service providers, operators, whatever you want to call them, that own reconciling those custom resources, to be able to deal with evolutions as well.
F: So we should probably talk separately through some of your use cases, what your vision looks like, and how that meshes or not with what we're trying to do with the API export and resource work.
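For readers following along, a rough Go sketch of the export/binding split being described. These types are illustrative, written from the discussion, not the actual kcp API as merged.

```go
package apis

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// APIExport is published by a service provider in its workspace and names
// the resource schemas (versioned CRD-like shapes) it offers to consumers.
type APIExport struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec struct {
		// LatestResourceSchemas references immutable schema objects, so a
		// provider evolves an API by publishing a new schema and
		// re-pointing the export, keeping changes identifiable.
		LatestResourceSchemas []string `json:"latestResourceSchemas,omitempty"`
	} `json:"spec,omitempty"`
}

// APIBinding is created in a consumer workspace and makes the exported
// resources usable there, analogous to installing a CRD.
type APIBinding struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec struct {
		// Reference points at an APIExport in the provider's workspace.
		Reference struct {
			Workspace string `json:"workspace,omitempty"`
			Export    string `json:"export,omitempty"`
		} `json:"reference,omitempty"`
	} `json:"spec,omitempty"`
}
```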
H: I think we could say something even today, which is: if you say the word operator, it doesn't mean anything in the kcp context. Operator is a pattern that lets people bring their own assumptions, so we're trying to go back to saying services instead. Yeah, sure.
H: APIs, controllers that run them, infrastructure that makes rolling out changes to service APIs easier. Because with an API that's exposed in a workspace, you're consuming it, you just expect that API contract to be honored; the mechanics are hidden. And then there's an underdeveloped set of concepts past that, because we were, as Andy's been saying a bunch of times, focused on those concepts.
H: If there are additional use cases for evolving APIs on a cluster, where one person assumes they know how to evolve all those APIs, there's a fundamentally different assumption I think we're making, which is: actually, no, most admins of a cluster have no idea how to evolve those APIs, and they're just hoping, if upgrades are delivered, that Red Hat or the operator team knew how to do that. I think we're trying to switch that responsibility, which is that the person rolling out...
H: ...the changes is accountable for making sure those APIs roll out. That doesn't mean we might not have the use case where someone wants to go trigger a bunch of rollouts of new APIs in bulk because a vendor or a provider has done it. But we're trying to start from the assumption that the team rolling out that API is responsible and accountable, in the RACI sense, for making that successful, not the vendor. And the vendor might be that team, and the vendor might want those tools.
J: Yeah, so, given that, let me just reframe this, because I'm going to be on vacation for the next two days or so, so I won't be able to follow up with you guys until Friday. To rephrase in the terminology and the viewpoint that you just gave:
J: Let's say I view a logical cluster as just a set of configuration, like an immutable deployment of everything, including APIs, controllers, deployments, all the resources that make up that cluster. What I want to do is upgrade from one set of configuration that defines my entire cluster to another set of configuration that defines my other cluster. And it seems to me that kcp provides a really unique opportunity, because spinning up a new cluster to pivot to is now something that's possible much more cheaply.
J: So think of the analogy of a deployment to a pod; this would be the same, but between logical clusters, if that makes sense.
H: Maybe, and this is similar to some things we've talked about. In the long run, the machinery of this should enable, potentially within a given kube cluster, lots of little logical clusters for different layers of security. You could imagine a logical cluster for the infrastructure that the control planes physically run on, where it is impossible for an end user who's not authorized to access it to even see the deployments and services that comprise a control plane on a self-hosted cluster, for example.
H: End users might never even see that. In that mental model, some of what you're describing, the use case you've described, is: I want to be able to look at the two different views beforehand and then figure out how to test between them in an all-resources sense, not in a per-resource sense. Because what we described...
H: ...is a set of config, and they want to do those at the same time because they understand how all those resources are used together. So maybe the way I'd use this in the kcp sense would be: a team is about to take an outage period, or they're at an outage window, and they want to upgrade all of the service implementations at the same time because they want to minimize their outage window.
H: So they're looking at a logical cluster, they've got some services, some deployments, some magic foos and magic bars; magic foo and magic bar have a v2, and they're like, yep, I want to work through and accelerate doing all these upgrades at once, and then I want to cut back. So it's a blue-green upgrade. And I think that's kind of what we're saying: I want to change all my dependencies, somebody's provided them, I need to test them.
J: Yeah, it's like, if I view each one of these things as a graph, and there's a hypergraph connecting them, I want to do it all in a single step, abstract it so it seems atomic for all the configurations. Since I can't know what those subgraphs, all those edges, contain, because we don't have all that information, I don't know if adding a deployment here will affect a deployment in another namespace, for whatever reason.
D: Yeah, we can postpone that one till next time. I've just seen a couple of requests this past week about whether or not we need to continue updating prototype 2 pieces, and how we can make sure that we use what we've already done so we're not breaking things we're currently doing.