From YouTube: Community Meeting, August 17, 2021
A: All right, welcome to the kcp community meeting, August 17th, 2021. We have a few small items on the agenda. No discussion topics yet, but we'll see where we end up.

I wanted to surface that David is on PTO right now, but right before he left he made a lot of progress on rebasing the Kubernetes fork on top of 1.22. It's a lot cleaner now and a lot less far behind; it's only, like, this many commits, which isn't too bad. It was in much worse shape before. So I think when he gets back, he's going to prioritize getting that merged and updated, and have everything in the kcp repo depend on it. That is great progress. I've had a couple of people ask what it is we're proposing to change in Kubernetes, and this is a much better answer than the one we had before, which was a lot of stuff based on 1.18. So that's very exciting.

This is my public, recorded, documented promise to write a design doc for the multi-cluster scheduling stuff that we have been discussing, some in this meeting venue and some in other venues. I think we are circling closer and closer towards something we think will work. But in order to validate that, I need to write it down and have people poke holes in it, or identify missing cases or things that might not scale like you would want. That includes how we will schedule things to multiple clusters, how the syncer will see those things being scheduled to a cluster and make them manifest on the physical cluster, and, in some cases, how it will report state back up to kcp to inform other scheduling and syncing decisions. Things like: we'll create a Service on one physical cluster, and it needs to report the Service's IP back to kcp so it can be propagated to other clusters and they can talk to it. That's the sort of foundational groundwork that's necessary for a lot of multi-cluster stuff, when things are scheduled across clusters and not just to a single one of the available clusters.
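(A minimal sketch of that status-reporting idea, assuming hypothetical names; this is not kcp's actual syncer code. It copies a Service's assigned ClusterIP from the physical cluster back onto the corresponding object in kcp, using an invented annotation key.)

    package syncer

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // upsyncServiceIP records where a Service landed and what IP it was
    // given, so kcp can propagate that to other clusters.
    func upsyncServiceIP(ctx context.Context, physical, kcp kubernetes.Interface, ns, name string) error {
        down, err := physical.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        up, err := kcp.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if up.Annotations == nil {
            up.Annotations = map[string]string{}
        }
        // "example.kcp.dev/cluster-ip" is an invented annotation key.
        up.Annotations["example.kcp.dev/cluster-ip"] = down.Spec.ClusterIP
        _, err = kcp.CoreV1().Services(ns).Update(ctx, up, metav1.UpdateOptions{})
        return err
    }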
B: And so what's your mindset there? Do you just want to get the basics, the doc, written up and do a quick prototype? How do you plan on showing or trying the deeper stuff once you get there? How far do you think the doc has to go before you think it's ready?
A: Yeah, my goal for the doc is to have something that people can conceptually understand, for two reasons. One is pure bus factor: I don't want this information to only exist in my head, and, I guess, in Clayton's head. If I disappear off the face of the earth, we wouldn't want to start from scratch, so a design doc is a good place for that. And I'm not gonna get hit by a bus; I'm gonna get paid hundreds of millions of dollars to leave. What was I going to say? Oh: the other reason is to get people's feedback early, before we invest too much in prototyping it, if we can get more eyes on the ideas.

It shouldn't block prototyping; we can prototype while we think that's the way to go. But I want to make sure that we have it publicly documented as: this is where we're going to go.
B: I was gonna say, one of the things I've been doing, which it will be good to have the doc for so I can add some of this, is to highlight the conceptual differences between previous approaches and what's different here. So, as an example, I'm looking at and contrasting Karmada's approach and KubeFed v2; I want to get some of that in, and there are a couple of other efforts that I'm aware of. It'd be good to have that "trying ideas in different directions" point, to do pros and cons. So I'll certainly contribute and help with that as we go.

Another element, Jason, that I wanted to raise: there's the transparent multi-cluster sitting on top of something that does a little bit of placement and scheduling, probably two separable pieces, and then there's the syncer, which is effectively more of a generic controller trying to apply a set of policy. So that picture is useful. It actually got me thinking: one of the things we had discussed is what happens when you want to take an existing Kube object and then do something with it that's multi-cluster aware in a more meaningful way. There were a couple of examples, like: what happens when you have coordination points that are more sophisticated than the simple coordination points we'll have in the design? It'll actually be great to have the simple one written up, so you can set the more sophisticated ones against it and ask: does this fit in?

It makes total sense to me to focus on that right now. I'm trying to do the same thing for policy, although I think policy is a little bit further behind.
A: It sounds like one of the things the design doc needs to focus on is how controllers will work without being modified: how you could install Argo on a logical cluster and have it just work. But then the next step is: if you wanted to provide a multi-cluster Argo, or a multi-cluster Knative, or a multi-cluster-aware controller for these things, here's how you would do it, and here's where you would hook in.

B: With transparent deployments, we decided to say: let's cut DaemonSet, because a DaemonSet doesn't make sense to propagate in the same way. Then ask: okay, what would a location-aware DaemonSet look like? Well, you actually want it to have strategies for how to roll out, potentially across multiple locations.
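(A purely hypothetical sketch of what such a location-aware DaemonSet analog might look like: "run one copy per location" rather than "one per node", with a rollout strategy across locations. All type and field names here are invented.)

    package v1alpha1

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // LocationDaemonSet runs one copy of a workload in every matching
    // location, instead of on every node.
    type LocationDaemonSet struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`
        Spec LocationDaemonSetSpec `json:"spec"`
    }

    type LocationDaemonSetSpec struct {
        // Which locations to target.
        LocationSelector *metav1.LabelSelector `json:"locationSelector"`
        // What to run in each location.
        Template corev1.PodTemplateSpec `json:"template"`
        // How many locations may be updating at once, plus an optional
        // ordering (e.g. canary locations first).
        MaxUnavailableLocations int32    `json:"maxUnavailableLocations"`
        RolloutOrder            []string `json:"rolloutOrder,omitempty"`
    }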
B: We have some examples, so it teases it up to ask: would that be something someone could contribute to the syncer? Would you have to fork the syncer to get it, or would you be able to create another controller? The kubelet has a lot of these problems, because of the way the kubelet works: it's an API server and a controller, and then there's a whole bunch of sidecar controllers that run on the node, for the SDN, or CNI plug-ins, or CSI plug-ins, that lack a conceptual framework allowing any kind of efficiency. So nodes aren't resilient to disconnection from the API server: even if we made the kubelet resilient, all those other controllers would break, and then CNI wouldn't work, or something like that. So, right, it was actually really useful to me, because once we have described this, we can say what it doesn't do and then tee that up as topics for investigation.
A: Yeah, and your example was the location-aware DaemonSet, which I think is even a different issue than what I was talking about. I mean, we need to do both. One is: how do I write a multi-cluster-aware controller for existing non-multi-cluster-aware types, like a...
B: ...like a piece of the scheduler, because that gets into stuff like bin packing. What if you wanted to write a controller that lets you bin-pack instances of databases onto location clusters? What would overlap with that? Someone else came up with another example, which was really exciting. Knative is the kind of canonical one for "I don't really want to care which cluster my function runs on; I just need it to have access to the right things." But then there was another one: well, if the location concept is described, I can start comparing it to other things that might have location and might need resource-aware scheduling. Do they have to build the whole stack over again? So that's another kind of controller that gets easier, because part of the problem the controller is dealing with is a common one. Currently, any problem that is scheduling pods, Kube helps you with; any problem that is not scheduling pods, Kube does not help you with. Most Kube clusters just end up scheduling pods, but it means everything then becomes a pod. What happens when you actually don't care about the pods, you care about the scheduling? So, right, and again, having the doc would be great, because then we can put the teasers in.
A: You know, building some of this out... I think some of the later-on stuff will become, not controversial, but something where we'll need more feedback from people who would be interested in writing a multi-cluster-aware controller or a multi-cluster-aware type. I think we can conceptualize what those are, but we want someone who's actually going to have to write one to give feedback on it. So that is my solemn promise to you.

I am off from next Wednesday until September 7th. My hope is to get this design doc into something we can share and start talking about, then give you all two weeks without me to think it over, and when I come back you can deluge me with all the comments and problems with it. So I think we'll have a meeting next week and not the two weeks after that, unless you all want to have a meeting. You don't need me to have these things, so if you want to have one, you absolutely can. But I was...
B: ...I was gonna add, sorry, two summary items. I spent some time with Jessica just brainstorming, and a few people have seen this. I took the basic policy PR, generated something like 700 questions in a list with Jessica, and I'm trying to figure out which of those questions come back in. So this is the premise: a logical cluster is a net-new concept, one that either lives in kcp, or goes into core Kube, or whatever. Today I can't do really hard tenancy between workloads on a single cluster, so I have to give everybody multiple clusters. There are no constructs built into Kube that really make tenancy of organizational concepts work well; you can build them on top of namespaces, but then you still hit the limitations of namespaces. Logical clusters would say: okay, take both of those and flip it around. A logical cluster is the unit of all tenancy, and there are hard boundaries between them. How would you accomplish that? I've got a long, long list.

Most of the questions come down to some form of: what are the problems that we're aware of that large deployers of Kube in tenant environments have? What are the things that have generated basic multi-cluster and, say, multi-tenancy needs over the years? What are some emerging patterns at cloud providers, what are the analogs to cloud, and can we articulate a better model? So we've got a bunch of questions, and I think we're starting to circle around three models.

There's a simple model, which is kind of Kube out of the box: YOLO, have fun, build it yourself. Logical clusters are a tool; here's how you could compose them, as simply as possible, to get some measure of tenancy for your org, and that's it.

There's a next level up, which would be more of the organizational sub-tenancy: you want to come in and be able to say that a group of people have this much quota by default, they can consume it, and when they hit a limit an organizational admin can come in and add more. Things that are pretty familiar in most existing systems, like vSphere or even the cloud providers.

Then there's the third one, which is probably what I would call true hard tenancy policy: you expect to have multiple competing sub-tenants that are themselves hard-isolated, and this is something Kube has never really been able to provide in a single cluster. The mindset here would be: could you offer a primitive and a related pattern that's good enough that you could actually get that super-hard tenancy boundary between two fully hard tenants? So you could have organization A and organization B, relatively guaranteed that they are hard-separated on policy, resource constraints, etc., and you can manage this. You could run a service that says "I have competing initiatives"; I see this within large organizations. Certainly there are people out there who would want to use this for their own services, or who are building a service that they offer to internal teams or to customers. So think about people offering any kind of hardware resources for sale: what could be catalyzed there? Could we get some ideas in place that seem useful enough that people could tweak them, or take them and actually build full production services that have hard tenancy isolation, the right sharding characteristics, et cetera? You could run a customer-facing service and sell to individual tenants.

So that means three levels. And then the final rule is: we want to have a demonstration model, because I think, from a Red Hat perspective, that would be our goal, but then you'd be able to take pieces of that and bring them into the other two: the medium-complexity one, so you can do organizational policy, and the simple one, so you can build something out of the box. I'm probably going to try to come up with a couple of example API objects and a couple of example patterns. It kind of comes down to: you want to make everything that you do kubectl-apply-able.
D: So, Clayton, I might be playing catch-up here, and if there are better resources I need to go dig into, just point me in the right direction. When we talk about policy here, are we talking about inventing a new policy language? Are we talking about supporting the existing constructs, like NetworkPolicy?
B: Yeah, neither, really. We're talking about a meta system for how you would extend a tenanted Kube system to add your own policy constructs, so that you could do organizational sub-policy at a generic level. This would be something like: how would you reproduce the AWS account system, in an open-source fashion, built around a Kube-like control plane, and still get hard tenancy, like you do when you have a backplane that sits underneath it all?

Again, these are the three levels of it. The first level is YOLO, like Kube today. The second level is: I want to split up my org and have reasonable soft boundaries. And then there's: I want to be able to do super-hard boundaries, and I don't want to have to come back later on and realize, oh, we missed a really fundamental property of hard tenancy; for example, you should be able to create things that the creator does not have access to, because of organizational policy. This is the thing that APIs for organizational policy or resource policy or whatever would fit into. So it's not those systems; it's the system that enables those.
D: So we have the discussion around placement behavior and work distribution. For Open Cluster Management, there's also a policy layer, also focused on multi-cluster policy, both distribution and compliance enforcement; those all belong there, yeah. I think that makes sense, and what I'm kind of poking at is: would it make sense to review those in this forum, to look at some machinery that might help support what's needed underneath that layer?
B: What would that look like? I think what you're describing is relevant, Michael; I think it's the step up from that. I'll show you the work-in-progress doc that Jessica and I were drafting last week. This is definitely trying to hit the meta question first, and then come back to those and ask: could you, for example, within an organization, create a resource that a controller materializes as a lot of resources across a lot of logical clusters, to do policy? How would you write a controller that deals with organizational concepts? And how would you have controllers that are orthogonal to organizational concepts but still have a tenancy component? An example would be: how do you expose an API? Think about GCP or AWS: there are APIs that are not exposed to you as a tenant. Your org admin goes and enables those, by virtue of, potentially, the billing relationship, and once you've enabled that API, someone can use it. That system is adjacent to what you're describing, and I think having some examples of those to contrast will be very useful.
D: Okay. And when we use "policy" in this forum, certainly anyone from the broader community who looks at this is probably going to pop the question: how does this compare to Open Policy Agent? Defining OPA policies with Rego as admission controllers, the way Gatekeeper realizes OPA policies in Kubernetes?
B: Like accounts, I think it is about this: there's a fundamental unit of containment, and today the only one you can boot is a Kube cluster; that is, effectively, the only container larger than a namespace. So the question is: what's the analog to namespaces for clusters, where you don't have to instantiate an implementation to get a cluster, but you can use many of the same constructs? What are the patterns that would make sense there? They look a lot like the analogs to cloud, right? Because at some level, the reason AWS has accounts and GCP has hierarchical projects with hierarchical quota is that both of them made trade-offs based on their internal systems. Sure, so what's a meta organizational-container concept that would work really well for the largest number of people and draws from those experiences?

Things like OPA, I do think, have a spot. The way I think about this is: what is a controller, what is a webhook, and what is a policy engine? Those are all blurry lines, right? You can have a webhook driven by a controller, where the controller reads some inputs and outputs; a control loop could calculate an open-policy doc or something for a webhook to use. So we probably want to stay on the question: what are the things that a general policy engine could plug into, and how would it plug in? That's a key part of this. Your point before about examples from OCM or wherever is relevant: you might have a policy engine that you want to consult for some of these decisions, and that is just as reasonable an integration point as a mechanism that's in-process. It's kind of like how all self-service systems have some similarities and then vast differences, whether you're using ServiceNow as a process-management system or a policy-management system with ticketing and all that; there are elements of that which could feed into other systems.

We'd want to be relatively flexible as to the type of policy that can be applied, and maybe that's the three layers I was describing: you could hook it in, you could plug it in, or you could build APIs around it. What I'm most interested in is: are there APIs that feel sufficiently general, even looking at them today, that you get some of that "someone's already done this, I'll just plug my stuff into it", where OPA and other webhooks and controllers that work today are familiar enough that people say: oh, I can see how I'd integrate my existing policy story. Take quota as the example. A couple of people have done things like ClusterResourceQuota, which is quota for a set of namespaces; quota is kind of a generalizable pattern. Would it be reasonable to say: well, then why wouldn't I have an organizational-level quota or something like that? Could I extend those existing patterns? Familiar? Sure. Could I plug OPA in and have OPA make decisions for me? Yes. And could I gut the implementation and replace it with my own, backed by whatever? Yes. So, the same three approaches that we've taken for Kube and for minimal API servers: how could someone build their own, what can we provide, and what is a batteries-included, modern, cloud-aware, Kube-informed, enterprise-use-case-aligned organizational containment unit for any unit of work? Because that's the aspiration for kcp: a generic control plane for anything.
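(To make the "generalize quota up a level" idea concrete, here is a hypothetical sketch of an organizational-level quota type, extending the ClusterResourceQuota pattern of quota over a set of namespaces to quota over a set of logical clusters. All names are invented; this is not an actual kcp API.)

    package v1alpha1

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // OrganizationQuota caps aggregate usage across all logical clusters
    // belonging to an organization.
    type OrganizationQuota struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`
        Spec   OrganizationQuotaSpec   `json:"spec"`
        Status OrganizationQuotaStatus `json:"status,omitempty"`
    }

    type OrganizationQuotaSpec struct {
        // Which logical clusters this quota spans.
        ClusterSelector *metav1.LabelSelector `json:"clusterSelector"`
        // Aggregate limits, reusing the core ResourceList type
        // (e.g. cpu: "500", memory: "2Ti").
        Hard corev1.ResourceList `json:"hard"`
    }

    type OrganizationQuotaStatus struct {
        // Current usage summed across the selected logical clusters.
        Used corev1.ResourceList `json:"used,omitempty"`
    }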
D: So then, to come with you there: when you start talking about the scope-control mechanisms, or isolation, or the meta containment model, you need a way to formally and uniquely identify some set of objects that you intend to contain. Has it already been worked out how you would drive that naming?

I still want to be able to describe the access-control model. An example there might be: I've got a cloud service offering me Kafka instances, topics, queues, and other related objects. I'm going to provision those: I create a kcp API object, and its fulfillment ultimately results in creating instances in a cloud service that are now part of my scope for access control. But I need a way to identify those objects, so that Kafka service needs a way to onboard: here is how I identify instances in my service. Or, if I've got a cluster: here's how kcp uniquely identifies not only clusters but namespaces or other objects within the cluster. So the concept of something like a cloud resource name seems to come up, because ultimately, as you talk about these types of policies, you're also talking about RBAC, right? You're doing role-based access control. At some level, you're going to define a concept of tenancy that evolves from who has the power of the purse to pay for it: there's a billing construct at the very root, down to how I subdivide who has authority to create, destroy, or generate billable resources, down to maybe individual consumers who are simply getting the benefit of that set of resources. In that type of hierarchical model, where you extend from billing to organizational structure to consumer, the objects that you want to associate have to be identifiable. So, as we think about this sort of meta-policy containment model, I think you also end up having to create some equivalent form of, or standard for, how kcp defines cloud resource names, and how orthogonal services that are provisioned, that aren't Kubernetes things but are related to these kcp APIs, would also declare into this general-purpose scheme.
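(Purely for illustration, an invented example of what such hierarchical resource names might look like, loosely modeled on AWS ARNs; no such format was agreed in the meeting:)

    crn:kcp:org/acme:workspace/payments-team:namespace/prod:service/checkout
    crn:confluent:account/acme-billing:cluster/eu-1:topic/orders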
B: Elements of that are definitely reasonable. On the relationship between the fundamental concepts: almost everyone does chargeback in some form, or they have a system where they wish they had chargeback. What are the overlaps? Is there a pattern that's reasonably broad enough? Tying a containment unit to cost is a fundamental thing, and who has authority to change that gets into each of the three levels, right?

So those are kind of the three levels. Kube started from the assumption that we believed we could offer level one, which is: provide labels on namespaces, which is either sufficient to solve all problems or lets you build a solution for some of those problems on top. Then we left two and three as an exercise for readers. One of my inputs coming back into this is that it actually wasn't sufficient to do that in Kube, and, given the benefit of history, we need to think about layers one, two, and three, probably with a couple of examples, tie those to the existing systems that are out there, like the cloud systems, and then make a decision at that point about how far we go. Like your idea of names: it's possible that one or more aspects of authorization or authentication actually disappear behind the system, and that's okay, right? We guessed in Kube that we could make names opaque, and we mostly have gotten away with it. RBAC is a fundamental part of Kube; if someone tried to use the kcp-like things, would they expect RBAC to work? My gut is yes. So we might treat RBAC as an axiom and say: we want labels on everything, that's a Kube-like thing, and RBAC is part of clusters. Maybe we could leave that under the covers, but we should have a pattern, or discuss the topic and say either "this is an exercise left for the reader, because we have evidence that it works" or, conversely, "we recommend this scheme because it's actually the scheme we're already using in these areas and it's practical."
B: You do not just get all APIs, which means you can have APIs that modify the system that are not inside all logical clusters, which means you can actually define a model where, looking at a logical cluster: every logical cluster is a beautiful and unique snowflake from the inside, while from the outside you can enforce homogeneity. The assumption that inside a logical cluster there is a logical-cluster resource that you can change is, very handily, completely absent, which means there is no way, from inside a workspace, to change that. The APIs that allow you to change it from outside are where you can spend the time. This actually gets into things we talked about early on and haven't talked about in a while, like virtualization of Kube core resources through things like aggregated APIs: you can do read-only layers that are very effective. Once you start getting into deletes, things get a little bit more hairy. But we could actually talk about how you could virtualize and expose things, because a great example here is: if you have enough people, eventually you're going to want "here's the list of the things I have access to." That is not something Kube ever did by default; it requires certain trade-offs. Would you want that list to be queryable? You need to make it queryable for the purposes of controllers working across multiple logical clusters, and list-watchable, so that you can build controllers on it. And that's another axiom: the assumption is that anything we do here is controllable, or somebody could go build a completely opaque system instead. I'd probably say, since we want to solve the multi-logical-cluster controller problem: how you get the list of the workspaces, the logical clusters, you have access to, so that you can perform a list, is an API that will be really important, and it should emerge from the policy stuff. You could implement an OpenAPI-based, file-based, database-backed, whatever story for how you get logical clusters; I think that's within the realm of possibility.

But the best part is that we can probably combine the idea that each logical cluster can have its own APIs with the fact that APIs don't have to be implemented in Kube and can be virtualized. Say you've got a system of record like AWS: would it be possible to say "here are the accounts you have access to," delegate that call, and expose it in a Kube-like fashion? It might be; we'd want to think it through as we go, but it should be possible. And I'm using kcp as a stand-in here: you should be able to set up a server that can run instances that get you logical clusters, just because you want something useful out of the box. We're trying to get that out of the box so that it feels Kube-like, offers those same controls you're talking about, Michael, and has the nice separation. I think that's our secret weapon, because we can bring different APIs: one of the objects is "new logical cluster," one of the objects is access-control lists, one of them is an RBAC system, one of them is OPA webhooks. And when someone goes and tries to create a logical cluster, we separate that from the actual system that provides it.
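(A sketch of what "list the logical clusters I have access to" might look like from a client, assuming a hypothetical workspaces resource; the real API shape was still being designed at the time of this meeting, and the group, version, and kubeconfig path below are invented.)

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig pointing at a kcp server.
        cfg, err := clientcmd.BuildConfigFromFlags("", "kcp.kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := dynamic.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Invented group/version/resource for "workspaces".
        gvr := schema.GroupVersionResource{
            Group:    "tenancy.kcp.dev",
            Version:  "v1alpha1",
            Resource: "workspaces",
        }
        // The server, not the client, would filter this down to what the
        // caller is allowed to see.
        list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, ws := range list.Items {
            fmt.Println(ws.GetName())
        }
    }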
B: We're trying to summarize the key strategic opportunities; axioms were part of what I had just started on, and I'll definitely share some stuff with you, and with anybody else who's interested: ping me in the Slack channel. I'm mostly just trying to iterate enough to get some ideas down. It's more of a "based on what we know already, what could guide us?" So it's a very long-running thing, and it's very early, Michael.
D: No, no, and that's fair; I appreciate we're still trying to get some of the concepts down. I do have kind of a practical, concrete question about the desired outcome. Do we believe, and I guess: is there a relatively formed consensus around the intersection of tenancy and cluster? We talked about the, not necessarily multi-cluster, but multi-context, ability to have a kcp server distribute work to 1-to-N clusters, but also 1-to-N clusters accepting work from 1-to-N kcp servers, and then you get into this N-by-N configuration. Perhaps in that model: it makes sense to me that if, say, in the tenancy model I've got a Coke kcp server and a Pepsi kcp server, and the Coke kcp server is bound to two dedicated clusters and the Pepsi kcp server is bound to two other dedicated clusters, then I can see a model that says: look, I have really strict tenancy boundaries.
B: We're talking about two subtly different things there. What you describe, with dedicated clusters bound to particular servers, is not transparent multi-cluster; that's config management of Kube clusters. Or maybe, to bring it up a level: config management of Kube clusters is inherently a lower-level version of the transparent multi-cluster use case. If you have to know the details of a cluster to build against it, it's not transparent.
D: So then what about CRDs, though? The idea that I can bring my own CRDs to a kcp server, and then I have a set of unique controllers that can render those: the output of those CRs, the instances from those CRDs. Maybe I'm missing something there, but if I bring my own custom resource, maybe I can pick on Argo ApplicationSet: it would probably target the control planes.
B: The kcp instances, like: "I want to go deploy some stuff; sure, here's the CRD for it." Okay, Argo says, "hey, I want this to go here." At that point, location is a higher-level concept, but if you want Argo to manage location, that's not transparent multi-cluster; that's Argo doing the best it can today. Sure, you might want location abstraction through Argo because you just want the GitOps part of Argo; the idea of CD isn't built in, nor are things like canaries. If you have a CD pipeline today, many people have to build it with multiple clusters; you should be able to build a CD pipeline inside a single logical cluster through transparent multi-cluster. That would be a key net-new capability. But you would still be able to use GitOps or Argo in a CD pipeline completely ignoring all of this; that's just not a transparent multi-cluster use case, it's an "I probably just want to install the Argo agent on the underlying clusters" case.
D: One of the things I'm trying to get my head around: kcp being somewhat focused on things like providing tenancy for CRDs makes perfect sense to me. But what about the moment when a cluster is the final resting place for input from kcp logical cluster 1 and kcp logical cluster 2, and kcp logical cluster 1 has a different set of CRDs than 2?
B: When you get to the point of "I have a workload that I want to spread across these two locations," the mental model shifts from "I'm making these two locations consistent" to "they are expected to be consistent, and when they are not consistent, something catches that and flags it: this isn't a suitable target, I'm going to drain everything off of it." By splitting out the ability to change, you move to: it's like nodes, right? Like when I deploy a pod onto a node.
D: Right, or in practice. So I would summarize, or paraphrase, that as: if I had two unique tenants that each had their own kcp logical cluster, they cannot share a set of clusters behind them unless there's an explicit assumption that all of those clusters have exactly the same set of types and configurations, right?
B: There probably should be back pressure against that sort of thing, but I don't think that's within the scope of the transparent multi-cluster use case. That would probably be more of an OCM-style use case: my job is to enforce consistency of clusters, and when they drift, I will warn. What is my back pressure to prevent upgrades from blowing up workloads? Think about a node, right? We have PDBs, back pressure from workloads to administrators that prevents the application from being disrupted. There's a missing piece here, and I think about it very similarly: there should be concepts in OCM-like systems for cluster disruption that exhibit back pressure on rolling out updates to clusters when those updates would impact the workloads on them. As part of this, we should propose what the back-pressure mechanisms are that defend logical clusters and transparent multi-cluster, which are really orthogonal to that. I mean, a cluster is a lot like a node.
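(A purely hypothetical sketch of that "PDB, but for clusters" idea: workloads spread across clusters declare how much cluster-level disruption, such as upgrades or drains, they can tolerate, giving an OCM-style system something to push back against. All names are invented.)

    package v1alpha1

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // ClusterDisruptionBudget is a cluster-level analog of a
    // PodDisruptionBudget: it limits how many of the clusters a placed
    // workload spans may be disrupted (upgraded, drained) at once.
    type ClusterDisruptionBudget struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`
        Spec ClusterDisruptionBudgetSpec `json:"spec"`
    }

    type ClusterDisruptionBudgetSpec struct {
        // Which placed workloads this budget protects.
        WorkloadSelector *metav1.LabelSelector `json:"workloadSelector"`
        // Minimum number of clusters that must remain available to the
        // selected workloads while updates roll out.
        MinAvailableClusters int32 `json:"minAvailableClusters"`
    }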
D: Yeah, that definitely makes sense. And I would again kind of paraphrase it as: if you have multiple tenants sharing clusters, there is an assumption of consistency among them. In addition, if there is some reason why a workload can't be distributed or upgraded, we need a way to provide that feedback into kcp, so that it knows which clusters are and are not available. And we really have...
B: ...to split CRDs into two types: the CRDs that are really part of the application, and the CRDs that are part of the infrastructure. Persistent volumes are part of the infrastructure: you expect persistent volumes to work across clusters, but you might need something at the higher level that asks, "does cluster A actually have permission to snapshot this persistent volume from another cluster?", and the answer may be yes or no. That's a higher-level concept. Some things will be vague: in the short run, a service mesh is probably both a high-level and a low-level thing, and Services and Deployments are definitely both high-level and low-level entities. An operator, if you intend it to spread across clusters, might actually be better to have at the higher level. Anywhere you can make the problem an application-level problem, you get a benefit, because now clusters are simpler: instead of polluting a bunch of clusters with lots of concepts, you have individual higher-level systems that are designed for keeping concepts isolated, and a system whereby you can say, "I want to roll out changes to this API." We will build, at the kcp layer, things like sets of APIs and which implementation of an API is bound to them, and connect that to the policy mechanisms that let us do controller list-watches across tens of thousands of logical clusters; as in, "only for the workloads that use this API, have specified you as the implementation, and have acknowledged that you have these permissions, will the role you have connected be allowed to list-watch." That fixes, or at least helps us address, the secrets problem. We really want those integrations to feel first-class; in Kube we're kind of just like, YOLO, here's some roles. That's a good start, but the next step really is to crisp up the boundary between "is your workload orthogonal to infrastructure or not?", and, if it's not, "are you using an abstraction we provide to touch the node?" In most of the cases where people touch the node today, it's an escape hatch around gaps in Kube features; as we close those gaps, you need fewer and fewer escape hatches. You use hostPath less now that you have local persistent volumes, for instance.
D: That makes sense. And then, if you did have a type that's specific to the application, something that hadn't been clear to me until this conversation: if I'm going to define a type like a Kafka topic, something that's not directly running in a supporting cluster behind a logical cluster, then the controller that operates on that type in that kcp server is a controller running outside of the set of clusters that manage the parts of the application that consume it.
B: It could be running as a pod, but it's talking to the control plane, and that workload is isolated from the other workloads. That leads to operational teams needing things like hub clusters or protected infrastructure clusters where they can run controllers. Great: now you have security zones for workloads. You'd say, "hey, all controllers for service integrations have to run in a level-two security zone. Oh, this won't schedule onto this cluster, because it's not a level-two security zone."
D: So the Kafka topic has controllers that are themselves just another workload for kcp, running in some set of clusters behind the scenes, but actively reconciling the state of CRs that live in a specific kcp logical cluster. And if I wanted to scale it out, a single infrastructure-oriented cluster in the background could be running controllers for multiple accounts of Kafka topics, tied into completely different cloud accounting mechanisms, servicing the type and instance information from multiple disparate kcp servers. Yeah.
B: And the next level is: imagine a controller that creates Kafka instances but doesn't care too much about what they are, and then add a proxy layer or something that divvies up access to topics, by telling Kafka to program its access control, or by adding a proxy in front of the Kafka instances that exposes a topic API. Both of those could be in two different sets of logical clusters, side by side, and no one would ever know. One of them has the Kafka topic API, or a lot of them do. And then, if someone down the road, say Confluent, offered a per-topic creation API, you could just cut out the middleman and switch the implementation of my Kafka topic API from this to another provider, and the provider would be Confluent topic allocation. The API didn't change; the implementation did. So we're starting to think about ways that we can layer systems, which, in a single cluster, you can kind of do.
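(A hypothetical sketch of that "the API stays, the implementation swaps" idea: a KafkaTopic type that application teams create, plus an invented binding object an admin flips between implementations, a shared proxy today and a provider's native per-topic API tomorrow, without touching consumers' objects. None of these names are real kcp or Confluent APIs.)

    package v1alpha1

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // KafkaTopic is what application teams create in their logical cluster.
    type KafkaTopic struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`
        Spec KafkaTopicSpec `json:"spec"`
    }

    type KafkaTopicSpec struct {
        Partitions        int32 `json:"partitions"`
        ReplicationFactor int16 `json:"replicationFactor"`
    }

    // TopicAPIBinding is the admin-controlled half: it names the
    // controller that currently serves the KafkaTopic API. Switching
    // Implementation from "shared-kafka-proxy" to "confluent-topic-api"
    // changes who fulfills the objects, not the objects themselves.
    type TopicAPIBinding struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`
        Implementation string `json:"implementation"`
    }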
B: It also lets you say these chunks of APIs can be consumed by other controllers, so you might actually have a multi-layered system. Or at some point you say: yeah, I'll just move this to a service off the cluster; someone wrote a great integration, and I don't even have to run it. You could imagine an operational team just saying: I'm going to boot a controller on a VM in my data center that reads from a Kafka API exposed by Confluent's web UI and programs a set of logical clusters, and the only people who have access to those credentials are the people who can get to that VM. So thinking about the ways you start to divvy up trust meaningfully is, I think, where we've got to be able to go, because that lets us do really hard boundaries between, you know...
B: ...especially if you get some benefits you didn't have before: most things just work, and then later on you can add a second cluster and divvy up the workload. That's where it starts, I think. You want optionality, but the optionality has to be a net win for you, so we're trying to figure out what the net wins are that get you to say: oh, I like the idea of this layer, because I got better security for my workloads.
A: Yeah, and we thought we wouldn't have an agenda to fill the time today. Look at us.
B: So I want to ask the folks who've joined: we've definitely been working through broad, meta topics here, and then doing a little bit more pragmatic collaboration in Slack or in small groups. Are there folks here who wanted to bring up topics that we can put on for next time, or are folks mostly just interested in listening? And, Joaquin, I know you've been working on some of the prototype stuff for load balancing across multiple clusters, to fit into the prototype.
C: Well, about the prototype on the kcp ingress: I was on PTO, but I was checking some of the current implementations around global load balancers and open-source stuff, and those multi-cluster solutions usually have a rigid architecture that makes it more complicated to integrate with what we are doing in kcp; they, for example, try to reach the physical cluster directly, and things like that. So, as we discussed in other forums, I will start working on a prototype, implementing those parts inside the kcp ingress prototype so they're actually usable. I will try to create some interface, but I was thinking of using, as we discussed, Cloudflare or other cloud providers.
A: I'm gonna call time, and we will see you all next week. I will hopefully be bringing a doc with more of a summary of our thinking. Hopefully. Maybe. Hopefully.