From YouTube: Community Meeting August 3, 2021
A: All right, hello and welcome to the kcp community meeting, Tuesday, August 3rd, 2021. We have some topics on the agenda, mainly to go over discussions happening in other forums and make sure those are known and we can talk about them. And David, you mentioned, I think in the Slack, that you had started embarking on the quest, the journey, the tour of rebasing our Kubernetes fork on latest Kubernetes. Do you have updates or a plan for that, or anything about it?
B: I hope to be able to start testing the final rebase, probably tonight or tomorrow morning. In fact, I had to resolve conflicts, as you can imagine, at each step, at each commit of the history of our feature branch, but most of them were quite simple. Some others were a bit more tricky, but it seems there was nothing blocking, nothing where you would not be able to do the exact same thing, or at least to make changes that have the same meaning as what had been done initially.

B: So I'm quite positive for now on the result, but of course I might have some bad surprises when trying to run it. We'll see; then I'll switch to the debugging step. But for now it seems quite positive.

B: For now I tried to rebase on master. In fact, the very last question, which we could still decide to change, is whether we should instead rebase onto a quite recent release of Kubernetes. It would mainly be the same, because, you know, some changes were quite impactful.

B: Changes were from the end of 2020, so... I mean, I don't have a strong opinion there, but I think the best will possibly be to depend on Kubernetes master, to rebase on Kubernetes master. And by the way, it seems that the very last Kubernetes release is very near to master, because there was a very recent release, 1.23 or something, and it seems the commits are nearly the same. So we could switch to that one as well, if we want.
A: Yeah, I mean, definitely the lift from 1.18 to anything recent is going to be pretty large, and not terribly different whether it's the latest release or master. I think mainly my question is how often we want to update our work, how often we want to rebase, and whether that should be a sort of continuous process, or every release, or something in between. I don't have a strong opinion. I'm just trying to... yeah, probably.
B: It's my opinion, and I might be wrong of course, but if we stick to Kubernetes master as much as possible, then we would more easily be able to, you know, re-propose some easy parts to Kube already. I'm thinking, for example, of adding strategic merge patch on CRDs. That is a change that is not that impactful, that's not related to tenancy or stuff like that. So this is the type of change that might be proposed as a first step.
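For context, a rough sketch of the gap being referred to, using stock client-go against a hypothetical widgets.example.org custom resource (none of this is from the kcp tree): an unmodified kube-apiserver rejects strategic merge patch for custom resources with 415 Unsupported Media Type, so clients have to fall back to JSON merge patch.

```go
// Sketch: custom resources do not accept strategic merge patch today, so a
// client has to fall back to JSON merge patch. The widgets.example.org
// resource and object name are hypothetical.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)
	gvr := schema.GroupVersionResource{Group: "example.org", Version: "v1", Resource: "widgets"}
	patch := []byte(`{"spec":{"replicas":3}}`)

	// Strategic merge patch is rejected for CRs by an unmodified kube-apiserver
	// (415 Unsupported Media Type); the change discussed above would allow it.
	_, err = client.Resource(gvr).Namespace("default").
		Patch(context.TODO(), "my-widget", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	fmt.Println("strategic merge patch:", err)

	// JSON merge patch is what works against CRs today.
	_, err = client.Resource(gvr).Namespace("default").
		Patch(context.TODO(), "my-widget", types.MergePatchType, patch, metav1.PatchOptions{})
	fmt.Println("merge patch:", err)
}
```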
A: Absolutely. I think the order of operations is: get our rebase relatively recent, and then start figuring out how we can collect things into changes. I think you're right about the strategic merge patch being a good first candidate, because it's sort of small, less controversial, less difficult; it's more just "this is a mechanical change we'd like to support, that we think would be useful." I think things like CRD tenancy and making everything a CRD are going to be... not, you know, hair-on-fire controversial, but yeah, something we'd have to motivate people through.
B: Yeah, the level is not the same, and some of those more impactful changes also depend on choices we still have to make. For example, tenancy is based on, you know, getting the logical cluster name in every case, on the various layers, client side and even etcd. And obviously at some point we'll have to add (and I think Clayton already thought about that) an abstraction layer there, to abstract the way we get and set the cluster.
C: The rebasing's goal is to reduce the friction so people can continue to prototype. Then, when we get to the prototype state that can show the three things working together, we can say: here are the three things you want to do, here's what we learned, we've written it all down, here's the output of the prototype, and we think we should have roughly this set of interfaces here and roughly that set of interfaces there, and there are different audiences for each. So I think we're just going to agree on that.
A: Yeah, the KEPs aren't going to flow soon. We still need to do the rest of the prototyping and building it up. But right, also, updating the fork will unblock things like what Keem was talking about a couple of meetings ago. It's painful to have to be pinned to this old fork, and it means we don't have to split apart our repos, which is nice.
C: And so, for minimal api server, one of the things we're trying to get to is a point where we figure out what the interface is that we would propose for minimal api server as a framework upstream. When we get to that point, we might actually just have branches: we get to a stable point and then we rebase, but we have a few branches per Kube release, so that other parts, say logical clusters, might just be on one branch. But there's a reason to say maybe we would actually support or do the rebases on each, so that you could keep up a lifecycle on minimal and try things out, until at some point those fold into Kube branches, at which point you wouldn't need those feature branches anymore. Whereas logical clusters might always live outside of Kube but rely on interfaces in Kube, or something like that. We don't know enough to know yet, but yeah, the exercise of doing the rebase starts getting us into that mindset of what it would take to have multiple feature branches per Kube release.
B: Yeah, and also, being as near as possible to master would force us to do the exercise of thinking about the current changes in Kubernetes: what is going towards our direction and what is somehow diverging from our direction. For example, I just stumbled upon the fact that in OpenAPI, pruning defaults is not done anymore when you build the OpenAPI, but when you aggregate all the OpenAPIs in the aggregation server. And typically that's a change that's not really in our direction, because now we don't use the aggregation server anymore to build OpenAPIs; on the contrary, we bypass that to dynamically take the OpenAPI that corresponds to your logical cluster.

B: So that's also why it's quite useful, I think, to be as near as possible to master: then we can track the changes, have an idea of where the overall repo and the Kubernetes code are going, and possibly try to be a bit proactive about the ongoing directions that could be taken by other stakeholders. Does that make sense?
A: Yeah, that's definitely another good side effect of this: by being up to date we will be able to track more closely what is happening there and be aware of it. And I don't think (well, I don't know, so it's possible) that there are any changes happening upstream that will actively hurt us very, very badly, but...
C: I was actually going to hazard that. The experience we've had with this in the past, when OpenShift went through it for the first three years of Kube, was that we would land things that would help us, which then created a ton of pain for us on the rebases. Like the original versions of client-go, restructuring most of the initial api server to support aggregated APIs, the five different refactors that David or Jordan or myself or Stefan did, various things. So I actually feel that, yeah, I agree with your point, Jason: this is an area that is probably unlikely to be broken. We'll have to keep fresh in our heads how to keep those things rebased; it's very likely that the pain will be when we get an upstream thing in, then rebase and realize we missed something, or, you know... so that'll be where our
C: ...comes in. So I probably would say doing the rebases more often, like twice a release, is actually a good habit to start getting into, and then having those branches. Maybe we just want to do it now: we come up with a feature branch named for the release, not "master" or anything, 1.23, and then do it every... whatever we pick, every two months, maybe, and say "hey, let's plan ahead to do the chunk" halfway through main-branch development for the next release. It's a great habit to be in, because it forces us to keep that fresh. One of the problems that happened in the early days with rebasing OpenShift on top of Kube was that even at three-month windows...
A: Would it make sense, when you say something like "we want to do this every couple of months" or "every six weeks", to automate this? Even just have a job that tries it, and emails you when it fails and how it fails? No? Probably not?
C: I probably should do one where I'm bringing it back in, and then David should review it, and we should probably have a couple of folks who keep that patch set; we want to keep the patch set small. Probably what I would say is, as we explore some of the next steps on prototyping logical clusters or whatever, we actually drop the old implementation commits, put in new ones, and then bring the feature branch history into line with it. What we really need, I'd probably say, is the automation around catching regressions, like a couple of test scenarios. That's probably the best investment, versus automation to go do the rebase, because the rebase is more mechanical than anything else.
A: Yeah. With a set of test scenarios that we can automate, we can automate the rebase itself: it's mechanical, it should always work. If it doesn't work, if there's some merge conflict we have to go resolve, then that's good to be notified of. And if there isn't a merge conflict, we run the tests, and if the tests fail, we've found out that something broke. I'm just trying to... I hate having to schedule things on my calendar to remind myself to do something. I prefer a computer to do it and tell me when it failed. But definitely, in reality, it's whatever David is most comfortable with.
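A minimal sketch of the kind of job being floated here (an assumption about how it might look, not an agreed plan): attempt the rebase non-interactively, and treat either a conflict or a failing smoke test as something a human needs to look at. The `upstream` remote name and `go test ./...` as the smoke test are placeholders, and the notification is just a print.

```go
// Sketch of the periodic "try the rebase, tell a human when it hurts" job.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Fetch upstream and attempt a mechanical rebase of our patch branch.
	if err := run("git", "fetch", "upstream"); err != nil {
		fmt.Println("NOTIFY: fetch failed:", err)
		return
	}
	if err := run("git", "rebase", "upstream/master"); err != nil {
		// A merge conflict (or anything else) stops here; a human resolves it.
		_ = run("git", "rebase", "--abort")
		fmt.Println("NOTIFY: rebase needs manual conflict resolution:", err)
		return
	}
	// No conflicts: run the smoke tests to catch semantic breakage.
	if err := run("go", "test", "./..."); err != nil {
		fmt.Println("NOTIFY: rebase applied cleanly but tests failed:", err)
		return
	}
	fmt.Println("rebase and smoke tests passed")
}
```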
B: Yeah, I mean, for this first time, since there is, you know, maybe even more than a year of changes, doing it by hand is... yeah. But then, if we do this very regularly, surely as soon as we have at least the check of the demos as preliminary integration tests (because for now we don't have real integration tests), at least we have something to check that our basic scenarios, the founding scenarios, still work, and we can do that automatically.
C: It is, and we probably don't need more than a couple of tests right now, because the goal will be to throw the prototype away completely and replace it with a more structured approach. A more effective approach is probably going to be what the scenario test might actually do: simple Go tests that just depend on a local kind server or whatever, or stub out big chunks of it, something fairly self-contained that you're confident would work. I'd be okay if the core tests passed; but then, yeah, subtle regressions are just going to happen. That's going to be the hard stuff to find, and I don't think those will repeat.

C: As the Kube stuff changes, it's either our refactors to add things into Kube, the mechanical stuff (it's easy to break things during those refactors), where the core integration tests, excluding clusters, will work really well; when other stuff breaks, it's just going to be really hard to predict. I think that's where we would say: just enough of a basic test that you can get over it, and then, if we still have a bug by the end of prototyping, when we start thinking about normalization, normalization should have a pretty rigorous test suite around it that's able to be used independently. That's worth spinning off, so that we just don't ever break when someone comes up with a new way to use CRDs or adds a new field or whatever, happening in an upstream context, wherever that normalization library, framework, or tool lives, and have it flow downstream to us.
A: Yeah. Even, David, you mentioned running our demo scenarios as end-to-end tests; I think we could benefit from something even smaller, which is just to test that creating object foo in namespace bar in logical cluster A, and being able to do it in logical cluster B, and then...
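A minimal sketch of that check, assuming logical clusters are reachable by appending a per-cluster path such as /clusters/<name> to the server URL (an assumption about how the prototype addresses them; the cluster names a and b are made up):

```go
// Minimal isolation check along the lines described above: create the same
// object in logical cluster "a", and verify it is not visible from "b".
package isolation_test

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func clientFor(t *testing.T, clusterPath string) *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		t.Fatal(err)
	}
	cfg.Host = cfg.Host + clusterPath // point the client at one logical cluster
	return kubernetes.NewForConfigOrDie(cfg)
}

func TestLogicalClusterIsolation(t *testing.T) {
	ctx := context.TODO()
	a := clientFor(t, "/clusters/a")
	b := clientFor(t, "/clusters/b")

	// Ensure namespace "bar" exists in cluster a, then create object "foo" in it.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "bar"}}
	if _, err := a.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil && !errors.IsAlreadyExists(err) {
		t.Fatalf("create namespace in cluster a: %v", err)
	}
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: "bar"}}
	if _, err := a.CoreV1().ConfigMaps("bar").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		t.Fatalf("create configmap in cluster a: %v", err)
	}

	// The same name/namespace must not leak into logical cluster b.
	if _, err := b.CoreV1().ConfigMaps("bar").Get(ctx, "foo", metav1.GetOptions{}); !errors.IsNotFound(err) {
		t.Fatalf("expected NotFound in cluster b, got: %v", err)
	}
}
```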
B: Yeah, sure, we could. Obviously there are, at least in my memory, some cases where stuff could be found by the whole demo that I did not find initially, especially because typically in the whole demo you have both: logical cluster isolation, but also controllers that either are multi-cluster (do multi-cluster watches) or do single-cluster watches, which is what the splitter typically does. The splitter only points to one cluster, and since the whole point is, you know, cluster name management, logical clusters and stuff like that, it's quite important to be able to test both controllers that watch across all the logical clusters and typical, standard controllers that only watch on a given cluster name. That's quite important to have both, which is also why I thought the whole scenario, the founding one, is quite interesting to test. But I might be wrong as well, and surely, you know, having simple things...
C: ...like, there were a couple of things where we had loose integration tests and then we coupled them, and there was a lot of strength in that, but it wasn't always hitting the things we wanted, and then we threw those out or ignored them. And then we wrote a whole bunch of duplicate tests that were less covered, but the e2e tests ended up being somewhat pragmatic.

C: Part of that was the design of it, and the fact that we split responsibilities across components, which had various tradeoffs. Kube is a coupled system. kcp, as a logical prototype, is not a coupled system. I think there's an anti-pressure to apply there: what I would like to inject is a little bit of anti-pressure on "let's have a whole bunch of scalable components that don't actually matter independently", versus the simplicity of "the framework will handle some of that."

C: But it's about the moving parts, right? Let's say, just hypothetically, we end up with the control plane, maybe an optional etcd connection for when you're running in HA mode, and then a set of components, like the syncer, that exist in some other library and share a global controller. I don't know that we want more components than that unless we have a really concrete reason, because every additional thing we run reduces our ability to run in the simple contexts. So thinking through that: okay, we're not a coupled system; we should be fighting coupling. Minimal api server is kind of about being able to do things inside the kube-apiserver that Kube, as an evolved thing, can't do.
A: So, if you want engineers to talk for hours and hours, just bring up the concept of testing strategies. But I do think that having tests that cover specific pieces, alongside the end-to-end test, is right. Absolutely: if something fails and the end-to-end test, the demo, doesn't run anymore, that's serious, but it's also fragile, because it implicates a lot of components. Yeah.
A: ...a longer time. Just something like: can I create the same object in two logical clusters and they don't interact? That's great. And can I watch across logical clusters? That's a useful thing to maintain that we can do. And then, on top of that, after those tests pass, we run the end-to-end tests: can we run the demo, and detect whether the whole system is working? Great, but first we have tested each individual thing.
A: Great. I'm very excited that this is ongoing. Thank you for all of it. I can't imagine how painful it must have been to update from 1.18 to nowish, and hopefully everything keeps working. Well, I shouldn't say that; I should knock on wood so that it keeps working. But yeah.
B: Sorry, Jason, just to be clear: I started rebasing on master. Should we maybe, as a conclusion (maybe not now, but just think about it; I should probably have opened an issue, but anyway) decide whether we finally want to rebase the feature branch, you know, the logical clusters branch, on master, or whether we want, as we said, to rebase on the latest Kubernetes release for now? It would be quite the same for me, but I just have to know.
C: The latest Kubernetes release, because in the event someone regresses something stupid in main, I don't want to have to debug it. Like, Kube spends four months on a release and there's always something subtly stupid; the odds of it being in the kube-apiserver are low, but if somebody breaks, say, priority and fairness, I do not want to be debugging any disabled parts of it. So...
B: Yeah. So it might be... or do you think it would make sense to have the official feature branch, the one kcp depends on, be on the latest Kubernetes release, and then have a distinct pending branch which we would regularly update to master?

A: I'm not sure that's right... of Kubernetes, so...
A: We should, we shouldn't, yeah. I think we're in agreement, and we will document this as we go, as always. Anyway, great. Thank you for that update, and I look forward to seeing more of it. I know you will be happy to now be able to use Ingress v1. Other things we have been talking about in various forums, and I wanted to bring here for visibility and also for discussion:
A: If there's discussion: Clayton and I have talked a bit about how logical clusters map to their physical clusters. Specifically, logical clusters are supposed to be self-service: you're supposed to be able to just show up and say "here's a name I'd like to give you", and for that logical cluster to spring into existence the first time it's asked for, and take that and do something with it.
A
The
question
is
mainly
around
if
this
kcp
instance
is
connected
to
thousands
of
physical
clusters,
how
it
probably
shouldn't
just
randomly
assign
your
logical
cluster
to
one
of
these
thousands
of
physical
clusters.
It
should
probably
have
some
idea
which
ones
are
open
to
be
given
to
and
which
ones
have
these
constraints
and
limitations.
So
I
think
we
are
circling
around
some
concept
of
when
you
create
a
logical
cluster.
A
I
think
at
first
it
might
not
do
anything.
It
might
not
schedule
things
to
physical
clusters.
You
might
have
to
say
by
the
way,
physical
or
sorry,
by
the
way,
logical
cluster
that
I
just
created.
You
are
now.
You
now
have
access
to
physical
cluster,
a
and
b
and
c
in
these
locations
with
these
traits,
and
then
at
that
point
the
sinkers
will
pick
up
and
do
the
sinking
down
to
those
clusters
that
makes
it
slightly
less
self-service.
A
You
can
still
create
resources
there,
but
they
don't
get
scheduled
to
real
clusters
until
you
sort
of
enable
them,
you
know
enable
real
clusters
for
them.
So
it's
a
bit
of
a
middle
ground
between
being
self-service
and
being
sane.
A: We don't want self-service to be: "great, I've spread your resources across a thousand clusters around the world; if you want to fix it, you have to clean them all up and put them back where you want." That's "I got you this deck of cards and I threw them all on the floor; you can put them wherever you want them." So I think that's a good balance, and it is something we are defining. That config, defining how a logical cluster maps to a number of physical clusters, is still TBD, but that's the direction we're going.
C: And honestly, we might even be willing to say it's out of scope for the prototype, but we should pose the questions that frame the spectrum of choices you could make, all the way from, you know, a direct mapping to a very complex policy engine with a set of different APIs, and what the different...
C: I would say that's a great question. I think what we're prototyping around is that logical clusters are a hard abstraction: you are not coupled to physical infrastructure with a logical cluster. Someone could choose to be, which is reasonable, and so the default statement would be that the mapping, what happens when you do things in a logical cluster, is up to the type of integration. So let's say you have a really simple controller that looks at that logical cluster and goes and creates cloud resources based on it: it's probably the responsibility of that controller to fit into a larger system, and I think it would be reasonable to describe how you could achieve that. But then the other one would be: "hey, I want transparent multi-cluster; I want to copy deployment A down to two different physical clusters." Again, it's under the scope of each of those different ways you're using logical clusters to define it. And so it would be reasonable to say the logical cluster is hard-decoupled from cloud accounts and all those things; it's only what you bring to it that makes it that. But for transparent multi-cluster...
D: Or, yeah, yeah, the... I was trying to remember the right name, but yeah.
C: And I think that's reasonable. Okay, so here's what the self-service piece was supposed to be. I do believe that Kube has completely failed at self-service, and that's okay: Kube had a much bigger goal, which was to standardize deployment APIs for most containerized software, and it succeeded incredibly. But everything on top (and this has been, you know, talking to folks in multi-cluster working on this for years) the self-service angle of Kube is "have fun, good luck, build your own thing, build it with primitives." They're not designed perfectly to map, but each of the primitives makes sense: a namespace makes sense, a cluster makes sense. Clouds chose to do clusters because it works really well for them and they can sell it. Some people went and did multi-tenancy, or built crazy, complex multi-tenancy into a single cluster, because that worked. I do want to at least explore, and this is what the self-service doc would be trying to capture: could you articulate something that captures the best aspects of Kube and the best aspects of everybody's self-service systems, so that you could create a center of gravity around "if you have Kube, this is a really natural way to do self-service"? And self-service implicitly underscores that someone set that up for you. A regular user doesn't need self-service; they just go set up a Kube cluster, or set up GitOps, or run a bash loop, or run a kcp instance, like a hypothetical prototype instance, and that isn't self-service.
C
I
do
think
you
should
be
able
to
bypass
the
self-service
and
get
something
useful
out
of
whatever
kind
of
comes
out
of
this
effort
like
I
can
just
start
a
binary
and
I
don't
got
to
go,
do
like
I'm
wearing
my
admin
hat,
but
I
might
need
to
say
like
yeah.
This
is
my
cluster.
This
is
my
cluster
with
my
cluster.
It
just
works,
so
I
think
yeah
you're
right,
michael
the
the
assumption
is,
there
is
a
separation
between
the
consumer
of
a
logical
cluster.
C
D
So
does
that
also
imply,
then
there's
a
design
principle
that
the
parameters
of
how
the
logical
cluster
is
mapped
down
to
us
at
a
physical
cluster
should
be
explicit.
Api
constructs
so
from
a
consumer
of
kcp
either
closed
box,
but
to
an
administrator
of
a
kcp
environment,
they're
very
wide
box,
they're,
very
programmable,
they're,
very
accessible.
C: I think absolutely, and for the white box, maybe there are two or three implementations at different levels of the spectrum of complexity. The example that's in the prototype today is that we create a cluster resource. That's what most people's multi-cluster stuff basically boils down to: here's a kubeconfig, here's a cluster. That works for a lot of the straightforward cases, and you can use it as a primitive in a larger system.

C: But when you talk about it, I would love for the idea of self-service to remove the need to build a primitive that's reused in a higher system, and instead let a lot of people reasonably plug in their systems. Like in Kube: if you want stuff to run on nodes with apps, you've got an approach for it; Kube kind of takes on that responsibility, and you can do it through custom resources or daemon sets, or through on-node interfaces like Unix domain sockets or local ports.

C: I would really love for the self-service part of this prototype and investigation to really hit that. You know, what's the thing that made Docker awesome? docker pull and docker run had a UX chunk that was reproducible. Kube's was kubectl apply.
C: So in Kube we had admission control, and then we added finalizers, and then we looked at initializers; we did webhooks instead, and webhooks are an unholy disaster of an operational failure in practice for most people, while initializers had some limitations. But today in Kube you can't initialize a namespace with a set of resources in a transactional way, because we missed the opportunity to do that. It's still tractable to some degree, and some people have worked around it, but you have to do something that basically says "I've set it up, now you can go", and you don't have a window where you can expose that. I want to make sure that window is built in, in the way you were describing, Michael, so that someone has reasonably flexible, composable, controllable admission, like quota, rate limiting, approval, all that stuff, and they can fit it in in a way that feels natural.
C
I
feel,
like
we
kind
of
have
enough
experience
in
the
ecosystem.
Part
of
this
could
be.
Maybe
we
just
get
enough
of
it
teed
up,
so
that
we
have
a
concrete
that
someone
can
play
with,
but
then
other
people
can
say.
Oh
I
don't
like
that
concrete.
We
should
try
these
different
apis
and,
I
think,
like
that's.
A
level
of
flexibility
would
be
while
we
might
have
an
opinion.
C
C
Iterate
on
would
then
be
the
is
a
great
trade-off,
because
then
we
could
say
like
let's
go,
let's
go
partner
with
ocm
and
with
sig
multi-cluster
and
the
folks
at
huawei
doing
and
alibaba
doing
like
the
super
crazy.
I
know
there's
some
food
nested
like
see.
If
we
could
like
find
like
a
couple
points
in
that
spectrum
and
be
like
hey.
Does
this
solve
your
use
case?
No,
let's
tweak
it.
While
we
can
kind
of
have
a
couple
running
groups:
okay,.
C
And
status
on
that
prior,
that
doc
is
like,
I
think,
a
couple
of
folks:
I've
shared
a
google
doc
roundup
and
I
share
to
the
community.
C
I
gotta
I
wanna,
do
another
pass
and,
like
kind
of
like
add
some
ideas
and
then
flip
it
out
into
a
public
doc,
I'll
share
it
with
the
google
groups
and
I'll
link
it
and
we'll
kind
of
play
around
with
it.
I
wanted
to
get
some
stuff
on
paper
with
folks
sure
it's
far
from
complete,
so
it's
that's
partial.
That
sharing
aspect
is
just
about
to
happen.
D
A: Yeah. So, in addition to that policy mapping (what physical clusters can a logical cluster get to), that also would be where we would hang things like "no logical cluster in this organization can have more than this many resources in any of these physical clusters." That's where we hang quota; that's where it gets very complicated. That sort of affordance is where we open it up to do all of that other stuff, and then hopefully eventually make it extensible so that people can bring their own logic to it.
A
The
quota
issue
goes
to
or
makes
me
want
to
skip
down
to
the
terminology
updates.
These
aren't
like
written
in
stone,
and
you
will.
You
are
welcome
to
come
up
with
better
terms
for
any
of
these
things.
I
think
there
is
a
general
sense
towards
moving
from
the
term
physical
cluster
to
the
term
logic.
Sorry
cef
is
great.
Words
are
fun
from
physical
cluster
to
location
where
a
location
might
actually
map
to
the
same
underlying
physical
cluster,
so
a
logical
cluster
can
point
to
three
locations.
A
Two
of
those
locations
might
actually
be
different
resource
chunks
of
the
same
physical
cluster.
A
This
further
abstracts
away
actual
underlying
resources
in
a
way
that
I
think
we
will
find
useful
later
yeah
location.
Is
I
don't,
like
the
term
location
either?
I
don't
like
any
of
these
these
terms,
but
we
needed
something
that
was
more
abstract
than
a
physical
cluster.
Physical
cluster
itself
is
an
abstract
thing
when
these
things
are
on
clouds,
yeah.
C
I
actually
wanted
to
spend
some
time,
as
that
kind
of
plays
out,
which
is
say,
is
that
an
actual
abstraction
that
can
be
reused
by
other
components
that
are
not
tied
to
physical
clusters
but
are
instead
tied
to
abstract
concepts
like
geographies
or
other
types
of
schedulable
units,
so
like
there's,
the
transparent
multi-cluster
is
trying
to
place
like
bin
pack
loosely
coupled
workloads
lightly,
coupled
workloads
onto
large
things.
C
So
it's
more
development,
dense
environment,
there's
another
use
case,
which
is,
I
want
to
run
a
bunch
of
things
for
someone
as
a
service,
and
I
want
to
place
them
into
chunks,
and
that
is
a
scheduling
thing
which
I
actually
wanted
to
explore,
whether
the
location
concept,
whether
there's
a
commonality
between
the
idea
of
like
I,
want
to
have
a
construct
that
you
could
schedule
place
and
resource
bin
pack,
chunks
into
chunks
and
transparent,
multi-clusters,
a
really
concrete
one
which
is
like
I
want
to
put
a
workload
someplace.
C
Is
there
a
commonality
between
that
and
the
general
problem
of?
I
want
to
make
a
decision,
so,
let's
say
I'm
running
database
instances
as
a
service
for
someone
across
a
large
chunk
of
locations.
C
Is
there
a
way
to?
We
obviously
want
to
use
the
cube
scheduler,
and
we
want
to
have
some
concepts
but
like
cubelet,
there's
no
equivalent
to
cubelet
right,
like
the
cube.
Scheduler
is
very
specific.
C
C
C: Is location, or something like location, a concept that could actually be reused to do two things, constraints and capacity, such that you could actually reuse elements of the pod scheduler? For transparent multi-cluster we're already talking about trying to do some basic scheduling that's at a higher level, constraints and placement versus pods themselves, where pods have a somewhat tighter fit. Can we do the equivalent for "I want to place a database instance of a certain size into a pool of resources"?

C: If you want to build any kind of service across multiple chunks of capacity, you have to have some mechanism like this. Is there enough there that we can say: well, this is a concept that's reusable, and the scheduler is also reusable, and maybe you don't run one scheduler but a couple? The idea being that somebody who wants to create a service on top of a logical control plane, and then wants to write their controllers, can delegate scheduling and bin packing to something that's reasonably general. No idea, but that consideration was in my head. So it's about placing things onto physical clusters that aren't just transparent multi-cluster workloads, and then placing things in the logical cluster onto completely opaque resource types, so that people don't have to implement their own sharding and bin packing but can reuse the constraint system. That was a side note to what Jason just described, which kind of comes out of it.
B: Yeah, sorry, go ahead. Just to be sure I understand the summary: what you're explaining is that location is not just a rename of physical clusters; it's the name, or the concept, for an indirection. In fact, we just name some abstract concept which will lead to the physical place where workloads have to execute, through some rules that could be customized or, you know, implemented in several ways.
A: The location type, whatever name you want to give it, would say: "I have 100 CPUs and a terabyte of RAM, and I have the labels cloud=aws, location=us-east, security=high", or something.
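As a purely hypothetical illustration of that shape (none of these types exist in kcp), a location record carrying opaque labels plus capacity, and the kind of fit check a placement or bin-packing step might run against it, could look like this:

```go
// Hypothetical sketch of a "location": labels the scheduler treats as opaque
// strings, plus capacity, and a fit check a placement step might run.
package main

import "fmt"

type Location struct {
	Name     string
	Labels   map[string]string // e.g. cloud=aws, location=us-east, security=high
	CPUs     int               // schedulable capacity, however it is measured
	MemoryGB int
}

// Fits reports whether a workload of the given size could be placed here and
// whether the location carries all of the requested label values.
func (l Location) Fits(cpus, memGB int, required map[string]string) bool {
	if cpus > l.CPUs || memGB > l.MemoryGB {
		return false
	}
	for k, v := range required {
		if l.Labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	loc := Location{
		Name:     "us-east-pool-1",
		Labels:   map[string]string{"cloud": "aws", "location": "us-east", "security": "high"},
		CPUs:     100,
		MemoryGB: 1024,
	}
	// e.g. place a database instance of a certain size into this pool.
	fmt.Println(loc.Fits(8, 64, map[string]string{"location": "us-east"}))
}
```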
D: Today, when we import a managed cluster, the agent will auto-discover data points about the cluster, including things like cloud provider, region, versions, a handful of things. Certainly that information about cloud provider, region, vendor, and Kubernetes version can be useful to help drive placement behavior with a placement rule: the abstract definition of my constraints, my conditions, might express "I need region in" and then a list of values (value 1, value 2, value 3), so that we pick clusters that are from those regions.

D: So I think the concept of location as a facade, an abstraction and indirection for something that's not one-to-one with a cluster, that may be a subset of a cluster, is good. I also tend to think that labels for placement are good. We could easily adapt the placement rule behavior we have today to recognize labels on another kind that is not the ManagedCluster object but rather a Location object, so that would be completely feasible.
A: I think we should avoid enforcing a hierarchy on these things, because we don't want the scheduler to know about clouds and regions and zones and racks, or any hierarchy like that. To the scheduler it's just a label, which means nothing to it, and a value, which means nothing to it. The region string could be any string; it doesn't know what those things are, it just...
D
So
things
that
you
can
still
do
with
that,
so
even
though
you
know
we
talk
about
the
scheduler,
I'm
going
to
kind
of
map
it
into
the
mental
model
that
I
have
around
the
placement
controller,
even
though
we're
primarily
just
using
labels
on
a
set
of
objects,
we're
still
able
to
use,
match
expressions
and
label
selectors
to
drive
some
pretty
pretty
advanced
set
of
conditioning
on
it
right
I
can.
I
can
drive
an
anti-affinity
placement
by
knowing
that
I
need
a
non-intersecting
set
of
values
for
a
particular
label
key.
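A sketch of that, using the stock apimachinery label-selector machinery against made-up location labels: a "region in (...)" match expression for filtering, and a non-intersecting-values check standing in for anti-affinity.

```go
// Sketch: evaluating a "region in (...)" requirement against location labels
// with standard label selectors. The locations are invented for the example.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	locations := map[string]labels.Set{
		"aws-us-east": {"cloud": "aws", "region": "us-east"},
		"aws-eu-west": {"cloud": "aws", "region": "eu-west"},
		"gcp-us-east": {"cloud": "gcp", "region": "us-east"},
	}

	// "I need region in (us-east, us-west)" expressed as a match expression.
	sel, err := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
		MatchExpressions: []metav1.LabelSelectorRequirement{{
			Key:      "region",
			Operator: metav1.LabelSelectorOpIn,
			Values:   []string{"us-east", "us-west"},
		}},
	})
	if err != nil {
		panic(err)
	}

	seen := map[string]bool{} // anti-affinity: require non-intersecting values
	for name, ls := range locations {
		if !sel.Matches(ls) {
			continue
		}
		if seen[ls.Get("cloud")] { // at most one placement per cloud, say
			continue
		}
		seen[ls.Get("cloud")] = true
		fmt.Println("placed on", name)
	}
}
```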
C: ...Part of the thing I'd like to see is that transparent multi-cluster is concrete enough to say "I want a Kube-like application that runs across clusters", properly abstracted, with the trade-offs and the limits: that gets one angle of it. The second angle would be the other use cases: I want to run a bin-packing service on top of logical clusters, through a kcp-like thing, that is not creating workloads the same way. Because I think with database-as-a-service there are a couple of examples where you actually don't want transparent multi-cluster, you want bin packing; you might want transparency for another set. So it should be somewhat orthogonal.

C: But how do those two interact? That'll give us enough data to then combine with the OCM use case, and maybe we should, Michael: it would be helpful to talk about the different use cases and what the highest-level conceptual differences are that we can frame, like "I'm using administrative placement policy to make a set of decisions about these concrete things." Do they overlap with the same use cases, or should we make sure... I'd rather have six use cases for location and an understanding for each, like "this abstraction runs counter to that abstraction", or "if we abstract both of these behind it, neither one works as effectively", to guide that. Maybe there are two different location types. And honestly, a lot of kcp is a little bit of "we can do typing a lot more effectively."
C
K
native
tried
to
do
some
duct
typing
and
they
ran
into
some
challenges
again,
a
goal
for
the
kcp
prototype
and
the
exploration
would
be
when
duct
typing
is
hard.
What
can
we
do
to
improve
duct
typing
and
so
having
a
scheduler
we'd
be
like
yeah?
I
want
a
scheduler.
I
want
to
ask
for
a
scheduler
to
run
across
these
location
concepts
or
to
like
a
a
physical
cluster
or
a
logical
cluster
capacity
pool
or
a
cloud
account
and
then
have
that
intersect.
D
B
D
D: The application set model allows us to use the placement rule API to express a desired set of places we want application configuration to be delivered to, and then the application set interacts with that to generate a set of decisions, cluster decisions, placement decisions, that then drive how, ultimately, the klusterlet and the ManifestWork and subscription controllers distribute the work. But at that point they're just the assembly line, picking it up from the source and putting it into the target; they are not thinking about...
B: Does it make sense, according to what we just said, to say that our location would be somehow at the cluster level what, you know, you can find in node affinity, in node selector terms: something that is mainly an AND/OR of, you know, node selectors and...
A: Yeah, this goes to one of the bullet points here, which is whether we want to reuse the node selector concept exactly, verbatim: when you give us a deployment with a node selector like "amazon", it assumes a cluster that is Amazon and then either filters it out or doesn't, but basically assumes all nodes in that cluster have the same labels as the cluster, as the location. I think that is an open topic of exploration, and...
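A sketch of that "reuse nodeSelector verbatim" idea under the stated assumption: every node behind a location is treated as carrying the location's labels, which are invented here for the example.

```go
// Sketch: treat a location's labels as if they were the labels on every node
// it offers, and check a workload's nodeSelector against them.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// nodeSelector copied straight off a Deployment's pod template.
	nodeSelector := map[string]string{"kubernetes.io/arch": "amd64", "cloud": "aws"}

	locationLabels := labels.Set{"cloud": "aws", "region": "us-east", "kubernetes.io/arch": "amd64"}

	// The assumption under discussion: all nodes behind the location share its
	// labels, so a plain selector over the location is enough to filter candidates.
	ok := labels.SelectorFromSet(nodeSelector).Matches(locationLabels)
	fmt.Println("location is a candidate:", ok)
}
```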
C: ...the hypothesis that we'd love to disprove is that it's useful, because, again, when I say a node selector, I don't know what nodes you have, and there's nothing that requires me to know what nodes you have, right? The intent is to abstract the workload from the nodes. Is that abstraction sufficient? And things like anti-affinity rules and, honestly, topology spread: the fact that topology spread is in Kube kind of invalidates, I think, or reduces the need for some of those constructs that Federation v1 and v2 had, which were about balancing, because they're very, very close to describing workload-specific rules about placement that should cleanly carry over. If I'm working over a set of homogeneous nodes, I can't actually tell the difference between a single cluster and multiple.

C: I do love all these discussions, because this is the stuff that I feel the prototype was intended to generate: examining the axioms and asking those questions about where we can couple to existing solutions and reuse them, like OCM placement rules, node selectors, learnings from locations and labels, etc. So this is going really well.