From YouTube: Community Meeting, August 24, 2021
A
All right, welcome to the kcp community meeting, August 24th, 2021. Clayton added items, but I actually want to preempt that, because last week I promised a doc on how we're all thinking about moving apps transparently to become multi-cluster.

A
A lot of this, the first part of it at least, is kind of a review for folks who are coming to this new. I think the prototype we've talked about for a while works somewhat like this, but I want to make sure that we update it to match what we're actually thinking. This is very similar to the prototype. Instead of the deployment splitter, we have this general-purpose scheduler, which is just the deployment splitter that does more, plus some notes about what the API server does.

A
The cluster controller is roughly the same as we've been talking about this whole time. I think we'll probably have it also be responsible for validating and reconciling? No, you don't want it to... I'm seeing vigorous head shakes from Clayton. No.

B
Well, okay, maybe I misinterpreted, actually. I don't know that the cluster controller as it is today will map, but we won't know until we come back in. I really do expect a decoupled mapping, though, between location and cluster, which could have multiple implementations. So a controller makes sense.

B
I don't want to see coupling of the controller for the logical cluster.
A
Yeah, so this cluster controller is roughly... this is for physical locations, physical or what we...

A
This is the "register some footprint of compute to a logical cluster pool" piece, which does the registration logic, which I think we also need to go into, and then basically being able to map work down to it from a logical cluster. It doesn't have to be in the same controller; there's nothing it does that needs to care about logical cluster policy creation, validation, or reconciliation. It could be a separate box relatively easily, right? It could just be some other third box that runs up here against kcp that says: some new logical cluster has been created, or the policy for some logical cluster has been updated or deleted, and what do I need to do in response to that? So you're totally...

B
Yeah, I'm trying to think about it from the perspective of the OCM case. The OCM example was very useful from the perspective that the kubelet's registration process was honestly poorly designed, and we went through like five different iterations, and I don't think we still have a clear theory of it. I'm not perfectly comfortable with how kubelet registration and capacity reservation is maximally useful and minimally overlapping with other things. OCM definitely went further in one direction than I'm comfortable with, in the sense of the long run.
B
The act of registering capacity: physical clusters are actually just one type of capacity, and so there's a whole set of things there. Location might be a good first stab at something that the scheduler cares about. I'm not quite convinced that physical clusters are the only mapping, for instance. We may very well want to map the active placement of shards, even, or the act of bin-packing instances of things that are not represented in transparent multi-cluster, in which case the scheduler might actually not be a component wholly owned by transparent multi-cluster.

B
It's probably likely that some aspects of placement should be meaningfully decoupled from physical clusters and the mechanism of kube workload transparency, but that may still be within transparent multi-cluster. We probably want to come up with two different phrases: one for the very concrete inner loop, which today I would say... I was thinking of transparent multi-cluster as the minimum necessary to run existing kube workloads.

B
Maybe the broader one is the subsystems that are reusable for other concepts, so scheduler is one. Cluster controller is actually too specific, probably, because of its tie to physical clusters, because there could very well be additional things. But that doesn't mean that this is necessarily wrong.

B
We've got to get really crisp on definitions here, because it will be very easy to accidentally couple physical clusters, cluster controller, scheduler, and the syncer a little aggressively, or overly aggressively, and miss an opportunity, one which OCM passed on for good reasons, and where kube didn't really have the domain expertise at the time to identify the ideal use case. So we're left with a mishmash of systems in the kubelet and the node controller that aren't really consistent. They solve the problem, but they're not orthogonal enough.
B
Build
a
vm
controller
that
lets
me
represent
vms
generically
across
four
different
cloud
types,
and
I
want
the
lowest
common
denominator
subset
and
I
want
a
vm
image
type
that
is
opaque
to
the
actual
implementation
and
that
has
to
schedule
vm's
higher
level.
Vm
constructs
which
have
no
analog
on
existing
q
clusters
or
maybe
like
similar
to
what
cube
vert
does
in
which
case
location
is
not
a
physical
cluster
location
might
actually
just
be
an
aws
region.
The
active
controller.
B
B
One
way
to
do
that
is
to
have
a
controller
that
has
permissions
on
the
specific
region,
and
only
that
controller
has
those
permissions,
in
which
case
the
placement
decision
might
be
generic.
So
you
might
say,
like
hey,
I
need
a
vm,
I
don't
care
which
it
is,
but
you
don't
have
a
controller
that
has
access
to
every
aws
account,
but
you
might
offer
two
physical
aws
accounts
and
actually
the
concept
of
I
put
an
ec2
instance
into
a
location
which
is
aws
account
one,
and
then
I
also
want
to
spread
that
to
a.
B
B
I
don't
want
to
belabor
it,
but
I
want
to
put
like
we'll
be
like
scheduler
cluster
controller
and
syncer
in
all
those
cases,
sinker
would
be
replaced
by
something
that
looks
a
lot
like
the
ack
operator
today
might
actually
be
generic,
and
the
cluster
controller
would
actually
go
away
completely
because
the
ack
operator,
for
instance,
could
be
configured
as
like.
I
want
to
run
an
instance
of
the
ac
operator
that
represents
this
phys,
this
location.
B
I
have
an
account
credential
that
allows
me
to
act
in
that
location,
I'm
running
on
protected
infrastructure
on
a
vm.
That's
set
up
outside
the
system
that
can't
be
physically
compromised
by
taking
over
the
control
plane,
but
I
can
offer
the
ability
to
run
in
a
bunch
of
accounts,
so
you
get
like
a
nice
separation
of
security
domains,
which
is
something
that
people
don't
have
today
without
building
complex
systems
for
themselves.
A
Yeah, okay. So in that case, the thing that we would need to change about this, or the thing that we would need to decouple further, is that the scheduler...

B
All a kubelet is, is a controller, so you could draw an analogy: an individual instance of a kubelet is itself an instance of a controller. That would be closer to what the ACK model is. Yeah, a model where you then have a higher level which is divvying out... like, a syncer is technically the location controller for a physical cluster location, because it is a controller running in a spot that offers resources.

B
That's tied to those resources as well, and the syncer does not have to run on the physical cluster. So that's another thing we have to be clear about. It's not an explicit requirement; it's useful, but we should say this is maybe the way most people will run it, because it aligns the security domains.

B
But it's also very important to note: you may want to run it adjacent or similar, in which case it doesn't have to in any way be... it can't be compromised by a compromise of the physical cluster, because it has write permissions on the kcp API server. So I think, yeah, we're almost developing three diagrams here. There's this diagram, which is very pragmatic.
B
What
are
recommended
alternatives
for
configuration,
don't
have
to
do
those
now,
just
we
could
leave
the
note
like
thinking
about
where
the
syncer
runs
and
what
it
has
permissions
on
is
an
important
characteristic
of
a
control
plane
story
which
is
today
that's
all
bundled
together
in
one
cluster,
and
so
any
compromise
tends
to
escape
unpredictably
is
putting
the
sinker
on
the
physical
cluster,
the
most
appropriate
way
to
do
it.
We
don't
know
yet
a.
A
Okay,
what
I'm
hearing
is
a
section
on
different
ways
to
run
the
sinkers
and
the
advantages
and
disadvantages
of
each.
Neither
no
way
is
perfect,
but
they
each
provide
some
trade-offs.
Stinker
in
the
cluster
is
better
when
you
think
the
cluster
might
be
independent
failure.
B
Which
is
a
fundamental
tenant?
I
think
we
have
that
in
our
thing
is
like
a
fundamental
tenant.
Is
we
are
leveraging
the
cluster
to
perform
an
act
of
distributed?
Resiliency
placing
the
sinker
someplace
else
might
compromise
that
so,
but
then
the
security
trade-off
is
the
sinker
has
access
to
the
physical
cluster.
A
Right,
the
the
sinker
running
inside
of
a
physical
cluster
means
that
it
is
running
instructions
that
can
be
confused
to
do
other
things
if
it's
running
outside
by
stuff,
that's
running
inside
the
physical
cluster.
If
it's
running
outside
it
still
has
the
same
permissions
to
do
the
same
things
against
that
cluster,
but
it's
harder
to
reach
it
to
compromise
it
because
it's
running
elsewhere,
maybe.
B
This
is
just
a
fancy
way
of
saying:
is
we
didn't
actually
design
kubernetes
with
a
threat
model
for
out
of
cluster
problems,
because
at
the
beginning,
kubernetes
did
not
have
a
unified
application
model
concept?
It
also
lacked
the
we.
We
always
treated
a
cube
unit
as
a
unit
of
independent
resilience,
and
there
were
some
assumptions
that
went
in
about
separated
versus
non-separated
control
plane,
for
instance,
that
come
with
a
bunch
of
trade-offs.
Probably
something
we
can
do
better
here
is
clearly
articulate
the
trade-off
between
those
and
show
both
options.
A
A
B
A
A
Decisions
the
scheduler
is
already
going
to
be
complex.
A
I
wonder
if
it
would
be
a
useful
scoping
of
a
useful
change
of
scope
to
say
this
is
the
like:
kubernetes
scheduler
only
for
kubernetes
objects,
not
for
dm's
and
like
until
in
the
future
we
decide
we
want
to
extend
to
vms
until
in
the
future.
We
want
to
extend
to
arbitrary
arbitrary
types
I
feel
like
like.
I
would
like
to
know
more
about
that
use
case
and
the
user
who
wants
it
before
we
start
building
that
and
building
that
amount
of
minerality
into
it
and.
B
I
think
that's
a
great
example.
What
I
would
probably
say
is
don't
write
scheduler
in
fact
skip
like
going
back
to
that
previous
comment.
It's
actually
much
more
useful
to
describe
the
api
and
api
fields
that
comprise
the
input
and
so
a
location
for
scheduler
to
make
capacity
decisions.
There
has
to
be
a
place
to
read
capacity.
B
I
would
expect
that
as
a
more
of
a
flow
diagram,
maybe
like
a
better
way
to
do
this
is
this:
it
looks
like
a
placement
or
a
like
logical,
topology
diagram,
maybe
showing
the
flow
of
information
from
an
object
comes
in
that's
a
request
on
a
resource
type
and
a
set
of
extracted
information
from
that
resource
type,
some
of
which
is
relationships,
some
of
which
is
constraints
and
some
of
which
is
resources,
and
then
the
scheduler
is
solving
for
those
that
implies
that
look,
either
location
or
something
associated
with
location
carries
resource
and
resource
and
constraints,
or
let's
call
it
topology
info,
which
we,
at
least
in
the
basic
example
right.
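
A minimal sketch of the scheduler input being described here, in Go; the package and type names are illustrative assumptions rather than an actual kcp API:

```go
// Hypothetical sketch of the scheduler's input, per the flow just
// described: a request on a resource type is reduced to three kinds
// of extracted information. All names are assumptions for illustration.
package scheduler

// Input is what the scheduler solves over for one incoming object.
type Input struct {
	// Relationships to other objects (e.g. "co-locate with service X").
	Relationships []string
	// Constraints that candidate locations must satisfy
	// (e.g. "region=eu", "security-zone=high").
	Constraints map[string]string
	// Resources requested, matched against location headroom.
	Resources map[string]string // e.g. cpu: "2"
}
```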
A
Yeah, so the location would say: an admin has set me up to say "you can use no more than 500 CPUs of me", let's say, and 300 of them are currently being used, so I have 200 left. That's also an important signal for it to emit. And I have the constraints: aws, us-east-1, high-security zone, whatever. Those bits of information live in the location object.
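
A minimal sketch of a location object along these lines, assuming hypothetical type and field names (the real modeling is explicitly still open, per the next turn):

```go
// Hypothetical sketch only: field and type names are assumptions,
// not a settled kcp API.
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Location advertises a schedulable footprint of compute.
type Location struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   LocationSpec   `json:"spec,omitempty"`
	Status LocationStatus `json:"status,omitempty"`
}

// LocationSpec is set by an admin: the ceiling on what this location
// may offer, plus constraint/topology labels such as
// "region=us-east-1" or "security-zone=high".
type LocationSpec struct {
	// Capacity is the admin-set ceiling, e.g. cpu: "500".
	Capacity map[string]resource.Quantity `json:"capacity,omitempty"`
	// Constraints carries the topology info the scheduler matches on.
	Constraints map[string]string `json:"constraints,omitempty"`
}

// LocationStatus is emitted by whatever reports for the location
// (possibly the syncer): current usage, so the scheduler can compute
// headroom (500 - 300 = 200 CPUs in the example above).
type LocationStatus struct {
	Used map[string]resource.Quantity `json:"used,omitempty"`
}
```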
B
For the actual modeling of that, we would describe location as the first-order approximation of it, and then we would probably come back and ask: is that the actual, correct modeling? So we're kind of doing two things: we're trying to get to a prototype that shows the basic concept, and then we're going to throw away whatever pieces of that don't actually fit the broader use case.

B
So it's showing the three bits of info being modeled. Right now there's a dotted line around all three of them that says "location", which implies a location controller of some form, but that could very well be the syncer. We don't have to assign that responsibility to the location controller yet.

A
The syncer can't do the... at least as far as I have been envisioning it, the cluster controller is responsible for registering. Right now it's a very dumb registration, but in the future it's registration of a physical cluster, or some portion of a physical cluster, and then it installs the syncer, right? So something, whether or not the syncer is the thing emitting this information back up, something needs to run first to install the syncer there, or start it.

B
I feel like that's probably... I would say right now, just based on what we know, I feel like that's going a little bit too far, if the syncer has permissions. The syncer has a larger set of permissions than any individual logical cluster scheduled onto it. Ideally, and we've talked about this elsewhere, which we should make sure is recorded, we assume that the set of capabilities it has is not dramatically larger than the syncer needs. The act of granting permission to a syncer is itself a much larger permission.

B
I would prefer that the cluster controller be heavily decoupled from the system for that reason, which is that the cluster controller could be implemented by a GitOps flow, it could be implemented by an operator, it can be implemented by an OCM-style system, but it is absolutely not something that, by default...

B
You can really quickly, from a kcp, spin up a cluster controller that bundles all that, but we should make it clear that the cluster controller's responsibility is decoupled into these pieces that we expected, right? Because, just for practical reasons, I don't expect kcp servers to have root on the clusters they're operating, except in a dev and demo setup. We want to preserve that property, and with the cluster controller providing that property in a demo or test environment, we don't want people to accidentally confuse that with production.
A
Right
so
then,
in
that
case,
that
makes
sense
in
that
case,
should
the
syncer
subsume
all
the
responsibilities
of
the
of
the
cluster
controller
and
the
way
that
you
register
a
cluster
is
to
install
the
syncer
on
it.
You,
an
admin,
install
the
syncer,
give
it
the
scope
down
bit
of
permissions
and
tell
it
what
kcp
to
reach
out
to
and
register
itself.
I.
A
A
B
Again
like
this
is
hard
to
represent,
but,
like
imagine,
instead
of
kcp
api
server
here,
you
saw
like
a
bunch
of
horizontal
blocks
with
hard
cut
lines
between
them
and
then
a
bunch
of
sub
box
blocks
within
them.
Then
the
mental
model
of
like
the
sinker
registering,
which
one
does
it
register
to
what
permissions
does
it
have
becomes
much
more
nuanced.
I
don't
again
like
what
I
think,
we're
kind
of
trying
to
say
is.
We
should
set
up
for
the
prototype
case
and
embed
the
break
in
the
assumption
that
locks
us
into
something.
B
B
Exactly
the
pattern
that
ocm's
using
right,
like
the
accept
or
the
register,
accept
paradigm
will
continue
for
all
this
it'll
be
there
for
control
right.
We
probably
will
not
allow
a
controller
to
just
get
root
access
to
all
the
secrets
of
7000
logical
clusters,
like
that.
That
is
a
model
in
cube,
that's
horrifically
broken
in
the
tenancy
model
and
so
like
those
are
things
that
are
hard
to
build
in
from
the
outside.
So
we're
taking
a
lesson
from
acm
address
or
ocm
addresses
the
gaps
in
the
registration
in
the
node
registration
model.
B
B
B
This
diagram
is
fine,
but
then
certain
descriptions
have
to
be
nuanced
like
this
may
have
a
set
of
credentials
that
allow
it
to
register
itself
and
there
might
be
an
act
process
very
likely,
there's
an
act
process,
but
we
don't
yet
understand
the
scope
of
how
the
how
organizational
policy
will
work
across
multiple
sets
of
logical
clusters,
for
instance,
to
say
that
enough
and
that
hopefully,
like
that,
gets
done
soon.
A
Not only does registration have to have an accept, but updating your labels has to have an accept too, right? If...

B
In principle, the original reason that kube did it this way was just convenience, because the kubelet and the control plane are treated as a single security domain, which is why it's very hard to build tenancy into a single kube cluster today: the fundamental assumption is that it's a single security domain for the control plane. From a mental model, the problem we're trying to solve is the exact opposite of that, because we don't need something to solve the single security domain. We have that: it's a single cluster.

B
You could just implicitly impersonate all users across all platforms, unless your authorization system supported that, which we would again probably counsel people not to do in this model. So, sorry, Jason, that's a bunch of problems. I don't think it changes too much, but it changes the nuance of what permissions the syncer has implicitly. I think we'd say it probably can register itself, and probably cannot assign labels, like the OCM model and unlike the kubelet, based on these criteria.

B
I think it's very likely, however, and this will probably depend, as there could be multiple different ways to do this, that, at least in my head right now, everybody everywhere is going to be talking some variation of a kube API with an accept cycle. Persistent volume claims are the same problem, right? Here's a claim: bind it. Here's a pod: bind it to a node. Here's a... and we don't necessarily have this in kube today, but: here's a service, and I want this service to be able to talk to this service.
B
Cube,
doesn't
actually
model
that,
as
a
result,
people
go
do
network
policy
and
come
up
with
really
complicated
rules
to
work
around
the
actual
problem
they're
trying
to
solve,
which
is.
I
want
these
services
to
be
accessible
to
these
groups,
so
the
ack
model
probably
is
going
to
come
down
to
a
sing
somewhere.
A
sinker
is
going
to
create
a
cube
resource
that
the
cluster,
the
logical
cluster
or
cube
api
that
it
is
registering
to
is
orthogonal
to
the
things
that
expose
that
location.
B
So
it
is
probably
not
going
to
be
acceptable
for
it
to
create
a
location
object
in
all
logical
clusters
that
would
expose
it
right.
You
want
a
layer
of
indirection
there,
but
then
the
choice
to
expose
that
location
is
mediated
through
a
different
layer,
which
is
probably
the
is
probably
some
variation
of
a
location,
controller
or
a
policy
which
says
like.
I
don't
even
know
what
resource
the
sinker
should
create.
B
It
may
not
be
a
location,
it
might
actually
be
something
like
a
a
unit
of
compute
with
resources
and
all
that
and
that
flows
up
through
location
and
potentially
another
object
that
actually
the
scheduler
looks
at
both.
But
then
a
sinker
could
write
one
and
another
thing
could
write
another.
That's
like
how
the
vm
thing
probably
ends
up
being
modeled
is
a
controller
can
write
both
of
those
controller
could
write
one
of
them.
B
We
just
haven't
gotten
far
enough
down
the
use
case
and
as
you're
doing
this,
we
are
concretely
turning
over
places
where
we
would
want
to
decouple
that
model.
B
So your problem is absolutely: get to the minimum viable thing, and then the folks working on this in parallel will be giving feedback, like, "hey, as we're iterating, we might identify a reason to split this particular thing". And it's very likely that, I mean, getting to the prototype... honestly, my statement of what we're trying to do with the design would probably be: show the bones of a model that demonstrates the usability benefit.

B
And that's why I'd want to see the data, the actual physical, material bits of data flowing into an object and out of an object, that then sets us up to say: oh, we actually need to hard-separate the responsibility for setting this data, because it violates something, or it provides a trusted version, or something like that. Right? Like, it allows for a confused deputy, or it allows you to acquire capabilities. As much as possible, I think that's what we were trying... like, the model.

B
I'm approaching this as: can we actually build an accurate, capabilities-modeled system around this? Kube tried that, and there are a couple of places where we just punted on the confused deputy problem, which is acceptable in the kube context, totally reasonable. I think we have to differentiate... we are differentiating this from "you can always go run a bunch of kubes if you want". We're trying to do something that lets you reason about the security of multiple kubes, and that means you have to have some principle at the base of it.
A
Yeah,
the
the
synchro
scoping
question
makes
me
want
to
skip
ahead.
To
is
the
is
this
how
we
are
envisioning
synchro
working,
where,
if
you
have
two
logical
clusters,
I
made
a
mistake
calling
these
a
and
b
but
two
logical
clusters
and
two
physical
clusters.
We
end
up
running
four
copies
of
the
sinker
each.
B
A
B
I
ran
an
agent
we're
trying
to
go
after
that.
Next
check,
which
is
clusters
today,
are
dramatically
underutilized
in
their
ability
to
represent
multiple
different
dimensions
of
capability
so
like
I
could
create
15
different
node
types
and
offer
those
to
different
teams.
It's
super
painful
and
you
can't
actually
strongly
tie
resource
consumption
with
resource
permission
right.
It's
very.
It's
basically
impossible
in
cube
today
to
restrict
a
team
to
a
specific
set
of
available
resources.
B
Like
say
you
have
gpus,
there
is
no
really
effective
way.
To
quote
how
much
gpu
you
can
use
in
a
permission
sense,
you
can
use
the
quota
system
and
you
can
use
labels
and
all
that
I
think
we're
trying
to
get
to
the
point
of.
Can
you
can
someone
model
on
a
cluster?
What
the
chunks
of
of
capacity
would
be,
which
means
more
than
one
and
then
they
don't
have
to.
They
can
absolutely
do
the
one-to-one,
but
then
we
don't
bake
in
the
assumption
that
it's
one-to-one.
B
So
then,
all
clusters
are
treated
as
homogeneous,
because
there's
there's
like
15
use
cases,
I'm
aware
of
where
you
have
two
things
in
the
cluster
and
they
sit
uneasily
together
and
you
want
to
make
an
explicit
choice
to
place
them
together
and
there's
actually
like.
The
beauty
of
this
is
like
those
two
sinkers
can
all
run
on
the
same
nodes
if
they
wanted
to,
and
we
want
to
think
about
the
parameters
for
our
back
pressure
such
that
you
could
actually
get
a
reasonable
sharing
of
resources.
A
Yeah
yeah,
the
the
the
main
reason
I
wanted
to
confirm.
This
was
the
model
we
were
going
for
was,
for
the
sinker's
permission,
scoping
right
that
sinker
a
a
watches,
logical
cluster,
a
and
only
has
permissions
to
do
stuff,
like
I
don't
know
how
exactly
we'll
model
this
to
physical
cluster
a's
are
back,
but
sinker
aaa
only
has
the
narrowest
set
of
permissions.
It
needs
to
serve
logical
cluster
a's,
request,
yeah
and.
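
A minimal sketch of what that narrowest grant could look like as ordinary Kubernetes RBAC; the role name, namespace convention and resource list are assumptions for illustration:

```go
// Hypothetical sketch: the narrowest grant syncer A-A might receive on
// physical cluster A. Names ("kcp-logical-cluster-a", "syncer-a-a") and
// the resource list are illustrative assumptions, not kcp conventions.
package main

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// roleForSyncer builds a namespace-scoped Role covering only the
// resource types this syncer syncs for one logical cluster, in the one
// namespace that logical cluster maps to on the physical cluster.
func roleForSyncer(namespace string, syncedResources []string) rbacv1.Role {
	return rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "syncer-a-a",
			Namespace: namespace, // e.g. "kcp-logical-cluster-a"
		},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"", "apps"},
			Resources: syncedResources, // e.g. "pods", "deployments"
			Verbs:     []string{"get", "list", "watch", "create", "update", "patch", "delete"},
		}},
	}
}
```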
B
Yeah, there is almost certainly, and this is called out in one of the docs in some form, an implicit assumption that there will exist a construct that makes syncer A able to see all logical clusters that expose it as a location, which is something that kube doesn't really offer today, and basic CRDs wouldn't offer, because again it's an out-of-cluster-scope problem. What would be the most concrete way to represent that?

B
I don't know yet, but it would need to be fast, materializable, watchable, and listable for the syncer to be a controller, and I think we know that we have to tackle that problem. But it can be approximated today by brute force, so it doesn't block the prototyping at first. If we found out that we couldn't make it work, that would be an argument to rethink whether the controller pattern is actually useful in this context. I'm pretty sure we can make it work that way; it's just...

A
Yeah. The argument against it sounded like you were saying you still want syncers to be able to watch across multiple logical clusters, and I think that...
B
The
assumption
is,
is
that
a
sinker
will
be
able
to
say
I
want
to
watch
all
of
the
logical
clusters
for
the
set
of
resources
exposed
via
the
location
that
I
am
granted
yeah.
There
may
be
a
couple
different
ways
to
model
that,
but
implicitly,
that
would
be
what
allows
a
sinker
to
a
scale
horizontally
be
it
has.
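
A minimal sketch of that assumption from the syncer's side; the wildcard "/clusters/*" endpoint is a guess at how such a construct could surface, not a committed kcp API:

```go
// Hypothetical sketch: a syncer watching one resource type across every
// logical cluster that exposes its location. The wildcard endpoint and
// the kubeconfig path are assumptions for illustration.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Credentials scoped to "the set of resources exposed via the
	// location that I am granted", per the discussion above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/syncer.kubeconfig")
	if err != nil {
		panic(err)
	}
	// Assumed wildcard: one watch stream spanning logical clusters.
	cfg.Host += "/clusters/*"

	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deployments := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	w, err := client.Resource(deployments).Namespace(metav1.NamespaceAll).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		fmt.Printf("event %s across logical clusters: %v\n", ev.Type, ev.Object)
	}
}
```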
B
It will have some internal structure, which is "I've got to deal with things for multiple logical clusters anyway", and then it allows us to tackle head-on a lot of the challenges that existing controllers don't have. Very few existing controllers watch everything; the garbage collector is the thing, and it's like a horrible hack, it's just disgusting, and it works. It barely works, and only because of the amount of engineering time spent on it.

B
Most user experiences actually have the same problem as the garbage collector controller, which is that they need to watch a lot of resources and show a reasonably consistent view of those. There's probably an argument that we will have a construct which is "watch the set of things that are relevant to me", with that handled dynamically, giving me a consistent list-watch view.

A
So, actually, let me update my diagram. It sounds like you're actually arguing we want something like this, where there's only one syncer and it's watching... I'm going to undo all this later.

B
You don't have location in those, so you should put location in there, which is, yeah: each of the physical clusters has two locations, there's a syncer tied to every location, and then there's a mapping between exposing a location into a logical cluster for the purpose of performing this action.
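
A minimal sketch of that mapping as its own object, with all names assumed for illustration; the point is the indirection, not the exact shape:

```go
// Hypothetical sketch of the indirection layer: a binding that exposes
// one location into one logical cluster. All names are assumptions.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// LocationExposure grants a logical cluster the right to place work on
// a location. The syncer for the location sees only the resulting
// placements, never this mediating object itself.
type LocationExposure struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// LogicalCluster that may schedule onto the location.
	LogicalCluster string `json:"logicalCluster"`
	// Location being exposed (e.g. one of two on a physical cluster).
	Location string `json:"location"`
	// Accepted records the register/accept handshake discussed above.
	Accepted bool `json:"accepted,omitempty"`
}
```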
A
Okay,
I
think
I
think
I
am
coming
around
to
this,
and
I
think
this
actually
solves
a
problem
that
I
was
talking
about
earlier
with
some
folks
about
confusing
these
sinkers
into
being
able
to
do
other
things
right.
So
right,
if
that's
not
the
one.
A
B
B
Relevant
to
location,
alpha
and
location
beta-
and
this
is
a
thing
that
we
need
to
think
through
and
it
is
not
in
any
way
like
modeled
yet,
but
it's
that
was
actually
my
list
on
how
to
do
orchestrate
logical
clusters
across
in
in
instances
and
be
able
to
do
consistent
list
watch
across
n
instances
and
amological
clusters
in
the
doc
so
starting
to
sketch
out
what
the
solution
space
looks
like
so
that
sinker
alpha
says
hey.
B
I
just
need
somebody
to
tell
me,
like
all
of
the
things
that
I
should
care
about,
because
if
it
has
to
watch
everything
it
has
access
to
everything
and
in
cube
originally,
you
know
we
said.
Oh
pods
can
watch
everything,
so
they
can
see
all
secrets
which
means
that
every
like,
if
you
put
a
secret
on
the
cluster,
every
every
node,
had
access
to
that
secret.
B
We
added
retroactively
the
node
authorizer
and
the
node
authorizer
allows
you
to
put
a
patch
on
the
whole,
but
it
was
the
whole
was
there
I
think
going
in
is
what's
the
construct
that
we
could
use
that
allows
any
controller
to
solve
this
problem
of.
Instead
of
granting
access
blanket
to
everything
in
a
scope,
the
set
of
scopes
are
more
concrete.
So
it's
like
almost
the
inversion
of
the
namespace
model
right
in
cube
cluster
is
the
hard
boundary
and
namespace
is
a
soft
boundary,
but
we
have
our
back
on
namespaces.
B
That
controllers
basically
ignore
most
of
the
time
flip
that
model
around
and
imagine
a
system
where
the
first
level
container
like
a
namespace
is
the
hard
boundary
and
then
something
in
the
background
is
synthesizing
the
cluster
model.
So
that's
like
the
we're
basically
presenting
something
that'll
work
to
compose
clusters,
and
so
then
most
people
work
inside
that
context.
They
get
the
benefit
of
namespaces,
so
we
actually
are
adding
something
to
the
cube
stack
in
a
unique
and
novel
way.
B
That
gives
you
more
power,
which
then
on
a
single
cluster
would
then
allow
you
to
come
back
in
and
retroactively
build
in
hard
tendency
to
a
single
cluster,
because
you
could
say,
oh
yeah,
sure,
like
there's
the
base
api
server,
that's
part
of
a
cluster
that
has
no
internal
hard
isolation
and
then
there's
something
with
layers
on
top
that
provides
that
hard
isolation.
So
you
you
know
in
theory
that
kcp
model
then
could
conceivably
allow
any
single
cube
cluster
to
acquire
hard
tendency
without
having
to
change
too
much.
B
With varying degrees of success. I mean, honestly, you know, the vulnerability rate of a container is probably twice or 3x that of VMs, but both of those are very low. So what we're really trying to model is: you should be able to create isolated clusters with isolated nodes for isolated workloads, but within the same system, without a physical change.

B
Right, like cluster-api-nested: you're just running an inefficient version of a logical cluster when you do cluster-api-nested, which is why it's interesting to ask how we could support cluster-api-nested by building in a harder tenancy layer. And again, you're absolutely right, Jason: the mindset of "you can't protect an individual kube API server" is why we would want the tenancy model at the kcp layer, or whatever, you know, kcp is the prototype, but you can have multiple chunks of those. The app model doesn't change.

A
Yeah, I guess we can definitely make harder tenancy boundaries up in kcp, where we have logical clusters and can separate them better. But if that's all they're doing... and we can make sure, well, users who really want to can specify that this workload must be the only thing to exist on whatever node it ends up on, right?

B
Only if you expose that location to both of them, though, and I think that's the important thing: the middle thing that's in the location controller, which doesn't really show up here, is that there's another abstraction point between location and logical cluster, but the syncer doesn't get to see that, right? That's the key! That's the key unit of indirection. And again, we're not just solving the kube problem; we're solving the general problem today of people having apps spread across different, disparate physical, security, and organizational domains.
B
The
way
everything
is
built
is
you
just
run
more
of
them,
so
then
the
mental
model
here
would
be.
What's
the
minimum
layer
of
abstraction,
we
have
to
add
to
decouple
location
from
centralization.
So,
like
the
example,
I
would
probably
say
is
location.
Alpha
is
actually
probably
two
constructs,
there's
a
background
construct,
which
is
granting
someone
access
to
that
location
or
a
policy
system
that
is
saying.
Oh
yeah,
like
usually
this
user,
can't
get
access
to
any
of
these
physic.
B
You
know
any
location
that
in
any
way
doesn't
meet
the
criteria
of
a
high
security
isolation
and
then
the
easiest
one
is,
you
could
always
say.
Well
then,
there's
just
two
locations
in
two
different
places
that
have
the
same
name
like
you
can
physically
shard
your
whole
control
plane.
Like
imagine,
you
have
a
eu
like
just
for
the
sake
of
argument.
You
have
an
eu
regulatory
regime
that
requires
you
to
have
control
planes
physically
separated
in
that
geographic
unit.
B
The
only
thing
you
have
to
change
is
you
just
split
the
whole
infrastructure
down
the
middle
and
you
can
split
your
control
plane
in
the
middle,
but
the
patterns
don't
change
like
an
application
targeted
for
high
security
still
should
be
targeted
for
high
security
targeted
for
the
particular
cloud
provider
targeted
for
the
particular
organizational
project.
So
the
app
didn't
change.
Just
the
infrastructure
change,
that's
kind
of
the
abstraction
we're
trying
to
get
to.
A
So
so
a
user
who
really
cares
about
security
would
would
put
themselves
in
a
logical
cluster,
separated
from
other
logical
clusters,
put
themselves
request
or
get
and
get
approved
for
a
location
for
which
they're
the
sole
tenant
which
maps
to
a
physical
cluster
for
which
they're
the
sole
tenant
and
if
they
want
to.
They
can
also
say
this.
This
workload
must
run
as
the
sole
tenant
on
that
node
right
that
we
we
that's
like
maximum
security,
node
cluster
up.
B
Infrastructure,
node
cluster
control,
plane
right.
The
stack
could
be
the
same
you,
but
the
difference
being
that
you
can
actually
define
something
that
spans
two
of
those
which
you
can't
today
right
no
cube
can
span
two
physical
location
or
two
physical
clusters
and
thus
deal
with
the
implications
of
single
cluster
failure
or
api
transition,
so
yeah,
so
that
that
additional
layer
gives
you
something,
but
you
can
still
reduce
it
to
a
single
vertical
stack,
but
you
know
with
the
appropriate
certifications
and
controls.
B
You
can
also
share
that
control
plane
across
physical
sites,
because
I
mean
technically,
I
am
infrastructures
often
run
into
this
right.
Like
organization
run
central
ldaps
that
offer
impersonation
functions.
That
means
you
can
become
anyone
in
that
company.
If
you're,
an
administrator
of
that
that
I
am
solution
like
ldap,
if
you
have
right
ldap,
you
are
root
on
everything
in
the
infrastructure.
B
We
want
to
build
in
the
constructs
that
allow
someone
by
default,
to
kind
of
mitigate
that,
for
instance,
by
building
in
the
constructs
that
would
say
well,
just
because
im
says
you
have.
Access
to
this
doesn't
actually
mean
that
we've
acted,
for
instance,
like
being
able
to
offer
those
controls
will
eventually
show
up
as
another
property
of
infrastructure,
and
that
gets
back
to
the
point
about,
like
the
ack
operator
as
an
example,
the
ack
operator
may
be
the
only
thing
that
has
access
to
orchestrate
that
account.
B
You'd
want
to
be
able
to
run
that
with
the
appropriate,
best
best
practices
or
constructs,
but
then
offer
an
api
where,
as
long
as
you're
really
clear
about
who's
allowed
to
call
you
and
then
you're
also
clear
about
what
you're
doing
on
their
behalf,
you
might
have
some
confused
deputy,
but,
let's
be
honest
like
if
we
didn't
want
confused
deputy
we'd,
be
running
single
core
machines
on
physical
infrastructure
separated
in
a
faraday
cage,
and
nobody
does
that
because
nobody
needs
that
or
really.
B
But again, a value here is that we're looking to standardize the kinds of constructs that people use to build infrastructure and applications, so that in most cases everybody's reusing the same concepts, and then somebody is allowed to dial at each of the levels. The dial you can't turn today is, if you actually have multiple cloud infrastructures, or multiple regions, or multiple accounts, there actually is no way to do that without building your own. We're trying to tackle at least the build-your-own dial and offer a primitive.

B
One that's useful, and that inherits from primitives that people find useful in other spots, the same way kube standardized Deployment. Can we standardize organizational tenancy policy and allow multiple systems to coexist, right? Can you combine an AWS tenancy model for infrastructure with an on-premise tenancy model based on organizational control? Sure. Could you also integrate other approaches, like when you need to have a multi-environment or multi-security-domain policy, can you tie that in? The same way apps didn't have to change, the abstraction layer that sits in there lets you float across it. Ambitious, but dream big.

A
Ambitious indeed. Okay, I will need to go through this and inject the concept of location as a proxy and intermediary between logical clusters and physical clusters.
A
Well-
and
I
like
I'd
like
to
go
into
more
and
think
more
about
the
idea
of
registration,
so
the
the
current
prototype
the
registration
protocol,
is
you
create
a
new
object
of
a
cluster
type
and
say
here's
the
config
to
talk
to
that?
That's
terrible
if
the
registration
was
just
came
up
from
the
sinker
like
to
in
order
to
register
a
sinker,
shows
up
on
the
cluster
and
says
you
can
have
this
much
of
my
resources.
A
I
have
permission
to
do
the
stuff
you
like
you
want
to
do
in
this
cluster.
I
have
the
permissions
to
do
that
and
then
obviously,
then
something
would
have
to
like
acknowledge
that
on
the
api
server
side
and
say
like
yes,
you
are
allowed.
Thank
you
for
registering.
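
A minimal sketch of that pull-based flow; the ClusterRegistration resource, its API group, and its fields are assumptions for illustration, not the prototype's actual API:

```go
// Hypothetical sketch of the pull-based registration described above:
// the syncer creates a registration object offering resources, then
// waits for an accept on the kcp side. All names are assumptions.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

var registrationGVR = schema.GroupVersionResource{
	Group: "workload.example.dev", Version: "v1alpha1", Resource: "clusterregistrations",
}

// register offers a slice of this cluster's capacity to kcp and blocks
// until something on the kcp side (an operator, a policy) accepts it.
func register(ctx context.Context, kcp dynamic.Interface, name string) error {
	reg := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "workload.example.dev/v1alpha1",
		"kind":       "ClusterRegistration",
		"metadata":   map[string]interface{}{"name": name},
		"spec": map[string]interface{}{
			// "You can have this much of my resources."
			"offeredCapacity": map[string]interface{}{"cpu": "500"},
		},
	}}
	if _, err := kcp.Resource(registrationGVR).Create(ctx, reg, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Poll for the accept; a real syncer would watch instead.
	for {
		got, err := kcp.Resource(registrationGVR).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if accepted, _, _ := unstructured.NestedBool(got.Object, "status", "accepted"); accepted {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
}
```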
B
The reality is that, ideally, the best angle would be that it should be possible to just start a new syncer with a set of credentials that allow you to acquire that permission, when that's orchestrated by a system that's handing out those credentials because it is the infrastructure. An example of that is that you don't want to give something...

B
...a kubeconfig, necessarily. You'd prefer that the syncer be able to acquire an identity that authorizes it to set a certain set of things. That's kind of where kube's self-contained RBAC model works fine; most people building layers on top of that are tying it into higher-order systems, but it's kind of a weak mapping.

B
There are a lot of infrastructure patterns that actually work best if the thing that orchestrated the instance is also orchestrating the tie, because of process security, like you were pointing out before: there's no perfect tenancy. The thing that runs the VMs is responsible for assigning identity to the VMs; the things in the VMs that create containers are responsible for assigning identity.

B
The things within the containers that connect to other systems are responsible for, you know, having their own layers of identity. So an example would be a service that's exposed publicly and goes through a delegating proxy: it's layering it up. You definitely want to have that chain of trust up, and then you need to compose it with something. So the example I'd probably use would be: someone has a service which gives you physical clusters of some form in an account.

B
The best outcome would be that that's fully decoupled from the create process, such that someone can have that physical chain of trust up by saying: oh, I'm the one who owns the account; I also expose the API, and I can attest, via another system, whether this person is allowed to create construct A. You know, that's the capability-system model.

B
If I can create a cluster in there and give it the permissions that I have, and I own the API, I can basically go all the way up the stack with a chain of trust. And if I'm consulting an identity system, saying, hey, I'm machine A, I just need a source of truth for the modeling, like who's allowed to do what, then you could basically say the syncer can come up and be like: yeah, I'm authorized to do this.

B
I'm authorized to register myself. But we'll have a bunch of lower-level compositional things, like someone with just a development environment who just wants to get their kubeconfig. Think about the kubeconfig as cutting through the layers, versus a kubeconfig and then trying to add the layers later.
A
Yeah, I like that. I will look into OCM's registration model and write more about that in this doc. With the one minute remaining: I am off the next two weeks after today, so I will add this to the doc and share this doc with folks. Tear it apart, add to-dos, add comments.

B
It's fine; they're basically just... again, every discussion triggers new things, so I want to try to get it more concrete, like you're doing as well on the policy side, but it's kind of the active iteration through the design space. It's kind of like summarizing what we're trying to do. Even this discussion, I think, is useful for saying: what are some of the meta-principles that we're still not calling out that would inform, you know, each of the layers?

A
Okay, great. Thank you for this very thorough conversation, as always, and I will see you all in two weeks, and I'll share this talk immediately after this. Thanks, everyone. Thank you.