From YouTube: Community Meeting September 14, 2021
A: All right, hello everyone, welcome to the kcp community meeting for September 14th, not the September 13th meeting that this was originally called as. Thank you, David, for noting that that was not true. It's been a couple of weeks since the last one of these, and I think there's a lot to talk about this time; let's see if we can get through it. Michael was just talking, before I hit record, about this demo. I want to talk about the demo, I want to talk about some other stuff that's listed below, and any other thing that comes up. So I might, Michael, timebox you to some amount of time, but who knows how much. So let's get started with that one.
B: Okay, sounds good. So just for context and history: we had talked about, in this forum, using some of the Open Cluster Management APIs, which are really focused on providing the mechanics to do multi-cluster orchestration. So there's an agent framework that allows you to register to a hub. There is a framework to understand how to distribute work, or desired configuration, to a set of clusters. And there's an API to help you describe your desired placement rules around where you want certain configuration to be placed.
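(For a rough idea of that placement API, here is a minimal sketch of an OCM Placement that selects up to two clusters by label. This is illustrative only: the API group is the upstream open-cluster-management.io one, but the exact version, v1alpha1 versus v1beta1, depends on the OCM release, and the demo namespace and label are made-up values.)

    kubectl apply -f - <<EOF
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: demo-placement
      namespace: demo
    spec:
      # Pick at most two clusters out of the cluster sets bound to this namespace.
      numberOfClusters: 2
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchLabels:
              vendor: OpenShift
    EOF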
B: We've talked about those concepts in this forum on prior calls. In order to make that more real, we put together a running example that has a kcp API server, plus an additional controller that runs side by side, and that controller uses the APIs from Open Cluster Management to then distribute configuration to a set of clusters, the physical clusters behind the scenes. Most of the code has been written by Shujen, who is in a different time zone (he's 12 hours off of our time zone, so he's not able to make this time slot) and by Hau Lu, who just happens to be on vacation in this time zone but is out of pocket. So Josh Packer, who's on the call, is going to take us through it in the end, and with that, Josh, I'll hand the baton over.
C: There's the similar demo that we already have in the kcp org with the deployment splitter, and what's been engineered is a version of the deployment splitter controller that is able to consume the OCM APIs, specifically the placement rules and the manifest work pieces. And so, Michael, I thought you were gonna show a diagram, but I can just kick right into the demo first.
B: If so, let's show the picture; I will pull it up here.
C: Just to give a little context on the framework and what it looks like, so, OCM... maybe let me share mine, or, you've got the screen. We'll look at the demo, then I'll show the OCM pieces, yep. So hopefully you can see it.
B: Yes, we do, all right, awesome. So in this flow, this is just showing the moving parts, right: in the kcp logical server, the user will create a deployment, and that deployment will then get a reaction from the controller. The controller is going to do a couple of things. It's going to generate a placement object. The placement object is, in effect (you can almost think of it like a select clause for a set of clusters) a way to attach various conditions, by labels, by resource priority, by other means, that basically allow you to say:
B: I want clusters that match this set of rules. And then an object called the placement decision gets generated by Open Cluster Management, and the placement decision says: hey, here are the specific set of clusters, up to whatever number of desired replicas of clusters you want, or, you know, unbounded, so all clusters that are available. The placement decision now gives you the actual clusters that match this condition, and then a controller, like our integration controller here, can actually use that information to generate additional things. And what it's going to generate are these manifest work envelopes, which will package any number of objects that are coming in and place them into each of the managed clusters that are desired. So each managed cluster has a managed cluster namespace; ManifestWork is a namespace-scoped resource, and you deliver to a cluster by placing a manifest work object into its cluster namespace.
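(To make that concrete, a ManifestWork wrapping a Deployment, created on the hub in the managed cluster's namespace, looks roughly like the sketch below. The cluster1 namespace and the nginx payload are invented for the example; the work.open-cluster-management.io/v1 shape is the upstream one.)

    kubectl apply -f - <<EOF
    apiVersion: work.open-cluster-management.io/v1
    kind: ManifestWork
    metadata:
      name: demo-deployment
      namespace: cluster1   # the managed cluster's namespace on the hub
    spec:
      workload:
        manifests:
        - apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: demo
            namespace: default
          spec:
            replicas: 2
            selector:
              matchLabels: {app: demo}
            template:
              metadata:
                labels: {app: demo}
              spec:
                containers:
                - name: demo
                  image: nginx
    EOF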
B: And I'm intentionally not getting into the set and binding construct here; that's more about RBAC. We'll table that, and we can cover it in more depth later on. The big thing I want you to see first is watching the kcp logical server using placement to understand where to deliver configuration. Notice that placement and manifest work are not objects that the user is interacting with.
C: Okie doke, without further ado then: this is the demo repo that we're going to use for this show. There are a few pieces I have to push back that I patched just to make sure it worked end to end (that is maybe the key here) but it's everything under the contrib demo section. So let's get rolling with that. First things first, we set up the demo environment that's going to run it.
C: It does some basic validation to make sure that the system is good, and it's gonna bring the controllers up. It takes a little longer the first time you run through, obviously, because it's building it out. But once this is done, what I've got running in Linux here is the kcp controller and the kcp-ocm controllers, and they should have opened up port 6443. So we'll hop over to another window and do a quick check just to make sure we're all good here, and yep.
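(The on-screen check isn't captured in the transcript; something like the following would do it, assuming kcp wrote its admin kubeconfig to the .kcp/admin.kubeconfig path used by checkouts of that era, and is listening on 6443. Adjust the path to your checkout.)

    # Confirm the kcp API server is answering before running the demo.
    export KUBECONFIG=.kcp/admin.kubeconfig
    kubectl api-resources | head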
C: We see that it's up, so now we're going to run the actual demo piece. First things first, it's connecting to the hub cluster; this is where OCM or ACM can be running. It did a quick check, and we see here that there are no managed clusters currently found inside of the hub, so we're going to import two managed clusters. These are SNO clusters that I've got out there running OpenShift in this case, but again it can import any *KS; pretty much anything works. So clusteradm is the command line we have for bootstrapping and onboarding of both the hub (so, setting up OCM) and the managed clusters. And so, since it takes a few seconds for the import to run anyway, if I quickly flip over here: we've got the Open Cluster Management community page, etc. This is where the clusteradm command line comes from. There are a couple of quick commands you can run to get started, and that gets you both the hub setup as well as the imports, and all the steps are right here, which is important.
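(For reference, the clusteradm flow being described boils down to roughly three commands. Treat this as a sketch: the token, API server URL, and cluster name are placeholders, and exact flags can vary between clusteradm releases.)

    # On the hub: install the OCM hub components.
    clusteradm init

    # On each managed cluster: register to the hub
    # (token and apiserver values come from init's output).
    clusteradm join --hub-token <token> \
      --hub-apiserver https://hub.example.com:6443 \
      --cluster-name cluster1

    # Back on the hub: accept the pending registration.
    clusteradm accept --clusters cluster1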
C: And so it's pretty much, you know, these are kind create for your clusters if you want to build them, or I'm using my OCP, but you've got a single command to initiate the hub, and then a single command, that you use a few times, which is what I was demonstrating, that imports the individual clusters. So we'll click here and keep going; we're going to do a get managedcluster. So remember, up here we saw that there were none; now we've got two.
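(That check is a plain kubectl query against the hub. Output along these lines is what the ManagedCluster resource prints; names and ages here are placeholders.)

    kubectl get managedclusters
    # NAME       HUB ACCEPTED   JOINED   AVAILABLE   AGE
    # cluster1   true           True     True        3m
    # cluster2   true           True     True        2m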
C: We can see they're available and they're joined, which is what we're after. So this is where we get into the OCM-specific APIs: we're creating a managed cluster set. You can think of this as a grouping, or the glue that's going to tie together the clusters that I want the kcp to apply to; it's going to tie together the namespace that I want to use to do the monitoring; and it ties user access together as well.

C: So it's kind of like a resource grouping, used to collect a bunch of pieces and provide RBAC against that. And so the first thing is, I created the resource itself for the managed cluster set. Now I'm adding the clusters themselves to it. That's done via a label, which goes through a webhook that makes sure you have the rights to that cluster set before you can add the label to the managed cluster. That brings it in, and then we can see.

C: So we have a group that represents the clusters. I'm now going to attach that to the namespace where my kcp is going to be doing the demo. So I do that, which again creates the demo namespace, but also creates a binding resource that connects the group to the namespace. And then now we're going to move into the actual connected kcp. And so you can see here, we've done the connection, so we're using the demo config; this is the role, or the loopback, to my own cluster.
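(Putting those three steps together, the cluster set, the membership labels, and the binding look roughly like this. The names are the demo's; the clusterset label key is the one OCM's webhook validates, and the API version again depends on the OCM release.)

    # 1. The grouping resource itself.
    kubectl apply -f - <<EOF
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: ManagedClusterSet
    metadata:
      name: demo
    EOF

    # 2. Membership: label each ManagedCluster into the set
    #    (a webhook checks you have rights to the set).
    kubectl label managedcluster cluster1 \
      cluster.open-cluster-management.io/clusterset=demo
    kubectl label managedcluster cluster2 \
      cluster.open-cluster-management.io/clusterset=demo

    # 3. Bind the set into the namespace the demo runs in.
    kubectl apply -f - <<EOF
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: ManagedClusterSetBinding
    metadata:
      name: demo
      namespace: demo
    spec:
      clusterSet: demo
    EOF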
C: And so at this point we've now created a deployment resource, a deployments.apps resource actually, just like any deployment. The deployment controller has picked that up, and so, as Michael mentioned, the controller that Shujen wrote is going to go out, it's going to create the placement rule, and then, using the placement rule, which is going to tell it which clusters to go to, it's going to create the manifest work object, which delivers and deploys the workload remotely.
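(On the kcp side there is nothing special about this step: it is an ordinary Deployment applied against the kcp logical cluster's kubeconfig. The file name below is a stand-in for whatever the demo script applies, and the kubeconfig path matches the earlier assumption.)

    # Applied against kcp, not against any physical cluster.
    kubectl --kubeconfig .kcp/admin.kubeconfig apply -f deployment.yaml
    kubectl --kubeconfig .kcp/admin.kubeconfig get deployments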
C: So now we're going to switch back over to the hub itself to see the placement rule that got created by the controller, and we see a placement rule has been created. Now, this is just a prototype implementation, so for each deployment it's creating a specific placement rule, to make sure it goes into a certain place. There are a bunch of different ways we can slice this up, but you could have that for a kcp that's managing, let's say, two or three clusters.
C: If we look at some of the enhancements that are coming in: again, this is where using the placement presents a lot of opportunities. There's a change we're making to be able to look at taints and tolerations against the target clusters, to decide which ones have available nodes where we're going to go. We also have (and for me this is one of the important ones, not just in kcp but outside, in the way we do application management) resource scheduling as well: being able to look at a list of 10 clusters and say, I need to deploy onto two of these, and I want the two that are the least utilized, as an example, or I want the two that are the most utilized, so that I can improve my packing layer. This is continuing to expand in the community, and one of the key pieces that OCM brings with the delivery is being able to filter on a bunch of different capabilities, as well as now dynamic resources, etc. Questions? Because I probably started talking fast. All right, we're going to flip back here then. So we had it; now we're going to take a look at the actual placement rule output. This is actually the placement rule decision kind that you find, and we can see it matched the two. And, as I said, depending on the type of spec you create, it can be just a label match to find all OpenShift clusters.
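(A PlacementDecision is generated by OCM rather than written by hand; read back from the hub, one looks roughly like this. Cluster and placement names are the placeholders used above, and the version hedging from earlier applies here too.)

    kubectl get placementdecisions -n demo -o yaml
    # apiVersion: cluster.open-cluster-management.io/v1beta1
    # kind: PlacementDecision
    # metadata:
    #   name: demo-placement-decision-1
    #   namespace: demo
    #   labels:
    #     cluster.open-cluster-management.io/placement: demo-placement
    # status:
    #   decisions:
    #   - clusterName: cluster1
    #     reason: ""
    #   - clusterName: cluster2
    #     reason: ""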
C: There are additional ones around taints; there are ones for whether the system on the managed cluster is online or offline, and therefore shouldn't be targeted, etc., etc. So, as Michael mentioned as well, we had the placement rule created by the controller; we also create the manifest work, and this is the encapsulation of the deployment object that is going to then be deployed to the remote systems.
C: So there are different opportunities and different ways to expand it, and then we can talk a little bit about some of the scale pieces we've done as well with manifest work, about how many clusters we've targeted, et cetera. I have some data I can share on that if people are interested. But anyway: so we have the manifest work, and again, this is what is then pushed down to the managed cluster as an applied manifest work, and that payload is then instantiated in the managed cluster, or the target.
C: So now we're going off and we're using kubeconfig 0 to connect to the remote cluster, and it did a get deployment; and then we see kubeconfig 02, which is the other one. So you can see, as we actually showed, Shujen put in some code to discern the replica sets. So we look at the deployment: the deployment had a replica count of three, and we knew we were going to two clusters, so it spread, or started, the deployment out to those different clusters in this configuration.
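(The verification amounts to querying each physical cluster with its own kubeconfig and checking that the replicas add up to the original three. The file names are approximations of the ones shown on screen.)

    # One deployment per target cluster, with the replica count split across them.
    kubectl --kubeconfig kubeconfig0 get deployments
    kubectl --kubeconfig kubeconfig02 get deployments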
C: If we had had three clusters, then we would have seen one deployment on each of them. And, you know, from a speed perspective, how long does it take to get down there? We're talking in the 20-to-30-seconds-tops bracket. And I guess I said I would share some data: in a different thread we've been doing some applications of our subscription (the subscription kind, which is also in the OCM community, but I'm not using that here), and we deployed that using manifest work.
C: We actually deployed the subscription object and three other resources, some roles and service accounts, to 2,000 target clusters, and once they were written out, it took less than 20 seconds for it all to be written to the managed clusters, for the 2,000 remote client clusters that we were targeting. So we know we have some scale capabilities, and we're actually doing some runs now to ramp that up, to try, you know, 300 to 400 manifest works across the same 2,000 clusters, just to make sure we have the scale properties that we think we need, you know, to be able to take this down the road to edge, telco, et cetera. So anyway, that is the deal. So again, just to sort of recap: we've got the deployment controller (or sorry, the deployment splitter controller) running there that, instead of using the syncer, is going out and looking for a placement rule.
B: So then a controller that's watching that can react. And so we use the same capability as Josh was highlighting with subscriptions to cause an application to appear to move from one cluster to another, and that's true for stateless applications, right: we basically scale down and remove resources for the app in cluster one, scale up or add resources for the app in cluster two, and the emergent behavior, at a distance, is that the application moved from A to B.
B: We've got some additional work in place with volume replication, where we can now actually migrate a stateful workload by pre-mirroring a PV, or set of PVs, that the application needs, scaling it down in cluster one, scaling it up in cluster two with the relevant PVs, and you're off to the races. And that's using other projects in the community.
D: And that's actually great; that was a great demo. Thank you, thank you all for doing this. So I'd probably say, you know, something I'd really like to see, and this is maybe for Jason:
D: We need to get the app experience doc cleaned up and then shared with the community side, and then I would probably say it'd be useful to show kind of the equivalent experience through the placement, right. Because there are some subtle things in here, Michael, as you're talking about it: we'd want to make sure that the experience the end user sees from the kcp side, from the transparent multi-cluster side, is transparent.

D: So there are some of the nuances in there. So that'd be a great, you know, next step for this from a demo perspective; would love to see it.
B: Sure, and the way that we structured this, right, we're definitely going behind the curtains, because we wanted the community to see what's possible there. But from a user's perspective, if they go back to the kcp server, to the kcp logical cluster, and they interact with it, they'll continue to only see their deployment.
B: How do you reflect back into the original deployment object, you know, why the deployment's replicas are available or not available? So there's an open piece here in this demo as well: there's limited feedback. We have feedback on things like the manifest work objects, but we haven't done a lot to aggregate that, or try to format it in a way that would fit back into the kcp logical cluster for the user.
D: Yeah, and I think that's really important. Like, the mental model would be: there's a set of expectations and experience, and you have to be consistent with the existing Kube, or it's not transparent. Those expectations will be, you know, some of the "a deployment behaves like a deployment" kind; there'll be other expectations called out in that doc (which will get cleaned up) and they would be things like: there is a way to go see pod logs or exec on pods.
D: That's a different solution; doesn't mean they can't compose. And it may very well be that actually the best outcome here is we end up with a couple of different avenues and angles. Like, we know that we'll hit, you know, some limits, and some types of transformations; that may be a perfectly reasonable low-scale approach, and then we say: okay, well, what are the gaps that would cause us to fail when we hit, you know, maxes of 10,000 or 100,000 applications?
D: We don't have to worry about that today, because we're focused on making sure that we have the right experience in place. So I would absolutely like to see maybe, like, steps one and two; and Jason, when we get that doc cleaned up, steps one and two probably would be, you know, the stateless and the stateful example, what it would look like, you know, in this. And if the demo could show that, that'd be great.
A: Yeah, this is a really cool demo. Thank you for sharing it, and to the people that actually built it that aren't here, hopefully watching this later: thank you for building it. It seems like an interesting avenue to basically replace, or change how the deployment splitter is phrased, and to replace the syncer code. Like, the kcp syncer is not involved in this demo, right? Instead, kcp is the API server, and this new controller that schedules offloads to OCM stuff, and OCM does the rest, which I think is an interesting avenue to explore too. And concretely:
B: A key part of the proposal here is: we are proposing that the klusterlet agent be leveraged versus the current syncer implementation. There's a very reasonable registration protocol for the klusterlet agent running on a cluster to join into a hub today.
B: There is a request here that, instead of kind of the labeling method of driving the placement decision, which occurs today when the virtual deployment, or, you know, copy of the deployment, gets created, here we're using placement, so that we can amend and extend the logic that's used to form a decision, and then also document what the result of the decision was in a first-class way: you can look it up by looking at the placement decision.
B: The reason there is a placement and a placement decision is because of scalability. We used to have a type called placement rule, which still exists (it's still supported in the product and it's still available in the project) but we moved it into a different API group and we've made some changes. As a result: the original placement rule captured decisions in the status condition of the rule, but then, when we start pushing the limits of two and three thousand clusters under management, we risk exceeding the size of the etcd object limit.
B: So a placement decision can have one decision list of up to, I forget what the hard limit is; it's got a hard limit of how many decisions it'll record, and then the controller will create additional decisions if there are more clusters than will fit neatly into etcd, right. So there are already some aspects here of scalability that we've gone through. So I think there's a lot of power and capability here that can accelerate the objective of kcp, right. I think there was a conversation about: can you have multiple kcp servers that schedule work into multiple physical servers, right, splicing up the workspace, where you've got maybe one or a set of projects or namespaces that are spliced across many clusters?
B: You could absolutely do that with this flow here. You could have those clusters assigned to one cluster manager, and then each kcp logical server might even share placement rules, but then only be putting work in specific namespaces or projects, you know, when they are sharing a physical cluster behind them. But so, I think, you know, I'll take the action, Clayton, that we want to look at the doc, look at the use cases, what the experience is we want to create for the kcp developer, and then we can amend this prototype, you know, continue to iterate on it. We welcome PRs for any updates that folks want to try; I know I see John on the list.
B: If we wanted to experiment with bringing in some of the stuff with PV replication at this level, that's something we can play with here as well. So there's, you know.
E: I just, maybe, had a question, and possibly an interesting direction for this work: how it's related to API negotiation. Because currently, one thing that exists in the syncer, and is integrated with the cluster manager, sorry, the cluster controller, is the fact that, when a cluster joins kcp on a given logical cluster, then a number (it's configurable, of course) but a number of APIs from the physical cluster are pulled from the external, you know, OpenAPI models that are in the discovery API, and then are reconciled, or, let's say, negotiated, with possibly existing APIs in the logical cluster for the same API. And then, when everything is consistent, the API is published inside the logical cluster. And this allows, typically, if you have two clusters, and you have, let's say, the deployment API of one of the clusters that is not compatible with the other one, or with the internal deployment model that already lies in the logical cluster, then it becomes an additional placement rule, in fact: that means that deployments will not be scheduled on one of the clusters, because the API of this cluster has been seen as non-compatible with the API that is internally used in the logical cluster. So that could be very interesting to explore: how the prototype that you did could be, you know, merged with this API stuff. Typically, in the existing syncer and cluster manager, it's just when you bring a new cluster that you pull the requested APIs, and then, of course, all the negotiation starts. And so that could possibly be also something to do when you import a managed cluster inside, you know, your kcp logical cluster: then we're going to get the APIs.
B: Negotiation as kcp is, and so those parts, I think, are 100% complementary. I think you could 100% take the API negotiation behavior; we could do a couple of things, right: continuing to create the negotiated kinds in kcp, so that you can understand sort of the minimum surface area, yeah, could continue completely as is, and I think Hau has reached out to you, and kind of begun providing some feedback as we're trying to play with that, Hau Lu. And then the other side of it is:
B: We could extend the concept of placement. So the placement controller is incorporating different conditions: label matching, where the managed cluster API object has a set of labels and the placement matches those labels in order to come up with a decision; you can match on the cluster conditions, right, whether the cluster is considered available or joined or not; and, most recently, we've added resource utilization, or resource capacity, right: available memory capacity, available CPU capacity, in order to prioritize which clusters are possibly selected. And we expect to, I think Josh had the PRs up, you saw, taints and tolerations is something, you know; there's a lot of parallels to Kubernetes.
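(As a rough sketch of what that capacity-based selection looks like in the Placement API: the prioritizerPolicy block below follows the upstream built-in prioritizers of roughly this era, but the names and structure have shifted between OCM releases, so treat it as illustrative rather than authoritative.)

    kubectl apply -f - <<EOF
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: least-utilized
      namespace: demo
    spec:
      numberOfClusters: 2
      prioritizerPolicy:
        mode: Exact
        configurations:
        # Prefer the clusters with the most allocatable memory.
        - scoreCoordinate:
            builtIn: ResourceAllocatableMemory
          weight: 1
    EOF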
D: So it sounds like there are three action items I've heard. So there's the getting, making sure that the app experience doc has a set of clear expectations; then this could be, like, a step one and two in that, so we could see: can we emulate those, if we satisfy those, and what would be the challenges and tradeoffs? The second point, I think, is, you know, the transparent multi-cluster design doc.
D: The experiential parts of negotiation of API objects: the point of that is to create an experience for the end user, when they see a consistent object; they're immediately aware when their objects move out of sync, or an administrator is immediately aware. Both of those are things that are relevant from the OCM side. Is there anything else I missed, Jason? Or, that's it, Michael, anything I missed in that?
F: Go ahead, Jason, you want to say something? Yeah, so I want to kind of, like, stress on that transparent multi-cluster. I know, like, Michael, you've said, from the end user this is like: nothing will change, and it's going to be transparent. But, when I look at it: is the klusterlet having a star permission, to kind of be able to apply all the things it needs to apply, like deployments, secrets, services, whatever it needs? Or do we have a more granular way of saying, you know, the klusterlet is limited to that set of resources, because I negotiated it, or I limit, through the concept of placement or whatever, that klusterlet to only have access to these sets of resources, and only these sets of resources?
D: I think this is a good topic. Let's take this to one of the channels, actually, and maybe that's something, Michael and Jason, we can discuss there, just because I want to get to the other topics; we're at 40 minutes.
A: Yeah, yeah, yeah. Thank you very much for this demo; obviously a lot to discuss there. Let me re-present.
A: You seeing this? Yeah. I've added, or I will add, action items from that discussion above. One thing I wanted to talk about was: KubeCon is coming up, it's about a month from now, and we'd love to have something we can show, to say, like, this is the new demo based on the previous demo, this is what we've produced, and where we're sort of thinking about going in the future.
A: The first thing that comes to mind is CRD negotiation, David's work on CRD negotiation, and the demo for that is good, done already, in the can. Beyond that, I'd love to be able to demonstrate some transparent multi-cluster progress. In terms of: so, the previous demo is, here's a deployment split into two deployments, right, and that's great, that's magic. I think the next thing is to say: this deployment depends on this secret or config map. Or: this deployment depends on, probably not a volume, because those are tricky. But going the other way also: this service depends on this deployment. And how would we transparently multi-clusterize not just the deployment, and split it, but also transparently multi-clusterize the dependencies of those deployments, and the things that depend on those deployments?
A: So that's certainly a large hammer, right; like, that's the easiest possible way to do this: to just schedule, you know, a logical cluster namespace to basically a random cluster. Or, you know, we could do better than random in the future, but yeah: a first pass would be, this namespace within this logical cluster gets assigned to this physical cluster.
A: That is definitely easy to do, which has a benefit, because we want to show this off in a month. I think, if we do that, if we simplify that, then there's no reason not to take on the stretch goal of being able to show that they move. Because that scheduling decision is so dead simple, it should be dead simple to detect that the cluster we put you on went away.
D: An example would be: we need the use cases globally representable. We should target, you know, showing, we would say, we'd show TMC progress with this use case, yeah.
A: Sure, yeah, yeah. No, so I think, I think making that (I want to use a different word besides milestones, because that means a specific thing) making that progression of progress public would be good, for OCM folks to base their thinking off of, just to let people know what our thinking is, and then also as a soft, non-committal milestone roadmap of where this demo, the next demo, the next step with the next demo, is going.
D: Yeah, and I say that mostly because, so, like, we've got two bullets here; so, like, the third and fourth bullet, I'd probably say, is some level of ingress movement, which would be Joaquim's efforts, and then some level, as a stretch potentially, to support maybe use case one or two, the inter-service connectivity, or at least somebody who's looking towards that direction. So Ben, or Ben Bennett, might be able to be the one tagged for that.
D: There's the organizational, so the policy aspects, organizational workspace, and otherwise: I'd like to get those use cases down. I've got half of them in a draft update to the investigation PR, based on, you know, the idea of clarifying terminology: like, you know, instead of logical cluster, logical clusters being the mechanism, workspace might be the actual API object; an organization is a thing that is itself a workspace, under which workspace objects can be created, that carries ownership.
D: One of the other API policy object things: stuff like, in the ACM world, the policy stuff; like other types of organizational structure, like personal workspaces or organizational workspaces, and how you might model those. So that would be, that would probably be, like, organizations and workspaces would be the policy aspects, and then the sharding stuff.
D: I don't think it has to be for this, but I do think getting to the point where we can articulate what working across workspaces looks like, from a controller point of view, in some form: so whether that's through the syncer and TMC, or whether some of the stuff Steve and I are kind of iterating on, right; if we demonstrated some variation of a workspace flow, or a location-specific flow, that would be enough. So, experience-wise, I want to get use cases for the organization and have those documented. So that would probably be three.
D: Is there something else that we're forgetting in terms of end-user experience? So we've got: a Kube user coming to this has a developer or lifecycle experience that is agnostic to clusters; that's a continuation of our previous promise. And then we have the: could we make control planes scale for larger and larger sets of application teams, by decoupling the idea of a cluster from the APIs, and an etcd instance from one cluster?
A: My instinct is: almost certainly, but minimal.
D: A minimal API server might actually be one. So maybe that's one that we go back to and say: look, here's a... Maybe that's, like, something, David, this could be a continuation of some of the other threads, would be: we go back and we look at, okay.
A: Yeah. So, in terms of things we would like to deliver in a month's timeline to concretely show people: a doc about policy, like where we are thinking about policy and workspaces going, seems useful; and a doc about how we plan to make this shardable and scalable, including moving workspaces across shards transparently to the controllers that might be watching them. Like, we're talking about a doc for these things, and not a, like, demonstrable...
D: I don't think the code's that hard. I think it's agreeing on a concept and being able to articulate a set of use cases that other people could agree on; the use cases are the hard bit, the code's the easy bit. Because, for instance, you can demonstrate kcp starting up, and then, instead of today, where you can just ad hoc create logical clusters...
E: I assume that there is some pending work behind that, especially the fact that, for now, CRDs, or APIs that you have in the admin logical cluster, are not by default inherited by other logical clusters. I mean, there are possibly (we have to check) underlying challenges that were not tackled for now, and that may be related to those, yeah.
D: Varying implementations and lifecycles coherently: so, like, how do you run canary versions of an API? How do you run canary versions of a controller? That, I think, should be separated, and I would say what we have today is good enough to show TMC, so it's a little bit less on that focus. We obviously want to allow you to install CRDs and have them work in your current namespace, but the virtualization infrastructure for APIs probably needs its own demo chunk. We would just...
D: I think we can kick it out of this one, because we're at the... we need examples, through the syncer and some of the other things we're talking about, to demonstrate it. Like: how would I concretely go and write a multi-cluster controller? I need to have a mental model that works for multi-cluster; what is the mental model?
A: When you say a multi-cluster controller, is that a transparent multi-cluster controller, where the controller, just, like Tekton, for instance, consumes task runs and produces pods, and isn't aware of how those pods are scheduled? That's very easy to make transparent, because it can just give back to kcp: hey, schedule this pod somewhere.
D: The idea of packing resources into buckets, as a type of controller: I think we'd start with a more general type, which is, can I expose an API that has an effect that's completely orthogonal to the system? It's a little bit like the demo we just saw. How would you write a controller across, you know, a thousand shards and a million individual logical clusters or workspaces? How would you evolve that API over time? How would you shard that effectively? So that is a prereq for the question of: okay, well, now that I have that, how would I go design, how would I reuse, scheduling across different problem domains, without having to keep writing my own scheduler and reimplementing taints and tolerations, or resource management, each time?
D: So those are, I think, things that can be pushed further out down the tree, and we should actually turn this into a tree in the docs, of: we're explicitly eliminating things from the short run, to be able to productively show something that is relevant to immediate users. It's nice to be able to make efficient controllers and reuse existing stuff in the world; that's not the point of the current phase.
A: Right. I think it's even more complex. So you mentioned the use case of having a transparent multi-cluster controller: you would want to be able to canary those versions, and that adds some complexity. There's even more complexity if you want the user to be able to pin to some version, or not even like that; like, the default case is: I install, or I configure, I request, a controller, you know, Tekton version 0.21, and I want to be responsible for upgrading it to 0.22 and 0.23, and not that I'm opting into some service that automatically upgrades me over time. Or maybe I do, but, well, no, that I...
D: Almost certainly; I almost certainly think that that is a portion of the story. The complete story for APIs is: sometimes you care about very strict API evolution, sometimes you don't. The level of service that someone offers for an API that has very strong contracts is different than when you're yolo-ing your team's quick CRD with a quick hack controller. How do we isolate and separate those two, while giving each one the flexibility to do what it needs?
A: Yeah, okay. But that is sort of the overlap, and crossroads, of the policy and workspace design and controllers. Because you want to say: I can install a controller, or configure, or opt into a controller, as part of my workspace's policy, and that workspace policy is: now I want to upgrade the version of that, or I want to opt in to...
D: You're probably not dealing with controllers; you're probably dealing with APIs. But yeah, you say: yeah, how do I, how do I expose a set of consistent APIs to a user? So imagine the set of resources that OCM exposes, in a consistent chunk: how would someone get a chunk of API resources that allow them to do ACM- or OCM-like things, and have an API for it? How would you allow someone to have a chunk of Kube-like resources?
D: How would you allow someone to have only Knative functions and service binding objects? And then how would those evolve, and how would you shard, scale, and do that? So it is, it is tied in with organizational policy, as you see, Jason.
A: Yeah, I don't... I think I mostly agree with you, but somewhat disagree with you on the fact that it's just APIs. Like, I think that is an issue of: I am a workspace and I would like to enable the Knative APIs, or whatever. I think there is a separate, further issue of: I want to install, or configure, or whatever, Knative version X APIs, with version Y controllers for them, because, even though...
D: Being able for us to write that down involves sorting the use cases: like, is that a one percent use case or a five percent use case? The hundred percent use case, or the 99% use cases, is: yeah, I expect to be able to create a pod, a function, a deployment, a service, an RBAC rule, a bucket, a quota, maybe a couple of the other resources. And then there's the spectrum between "I want to set up a canary test and choose an implementation" and the: how do you delegate control over implementation to a third party?
D: That's where the organizational policy comes in. So yeah, absolutely, it's a spectrum from one end to the other; we need to kind of have a... we need to be able to articulate the spectrum. I think we're getting close to that with some of the stuff we've been doing around the syncer and around...
A: Yeah, as a bookmark for a future topic, because I don't want to get into it with seven minutes left: I think we should settle on whether we think users will... you keep mentioning canary as the use case, which assumes people generally are on an up-to-date thing and sometimes want to canary before they do an update. That's part of it that's important to get right.
D: The difference is, it's a little bit like libraries: a good library doesn't break its API, and offers concrete changes. If you're building APIs that are intended to be transient, those would not be the class of things that would be exposed to tens of thousands of users, because there's a fundamental mismatch between the design of an API that you plan to change all the time and the actual point of an API you might have, right.
A: Right. I think, again, we're talking about APIs as if the implementation is not part of the effective API, right? The API between Knative version 0.20 and 0.21 remains exactly the same, but the implementation of it is slightly different, in some way that matters to me, whether or not it actually matters to me or I'm just afraid of it mattering to me, so I'm not going to update. I think, in reality, people... I don't want to limit the conversation to APIs, because the controller, like, the logic, the implementation of it, is...
D: So important, yeah. We should be cautious that when we say implementation, we're saying: an API exposes those by virtue of existing. If you change it, you have changed the API; but if you didn't communicate that, that leads to the fear you're discussing, and that's where the failures happen. So what we're effectively talking about is: can you detect the difference in an API, when an API has a different definition, regardless of whether it's on the implementation or the API side?
D: Can you make the transition between those? That's part of evolution: like, how do you find the people who are willing to opt into being broken? So, we definitely got in the weeds on this. David, do you want to jump over? So: sharding.
D: This is mostly just to give a framework for what we would do with sharding, in terms of: to be able to effectively do a shard, you also need to be able to understand how to implement watch across a set of shards. So I'm just trying to make sure that we can articulate all of the challenges in one spot. We'll then go back in and say: okay, well, is this worth the cost? What would a prototype look like? What...
D: How do we stage up the set of prototypes to shard underneath, you know, more than one kcp instance, or across more than one kcp instance, for a set of workspaces? So, all right, and that's basically it. Stephen... Stephen and I are iterating on that actively now.
E: Yes, I started last week looking into trying to run the DevWorkspace, so, you know, the engine, the new engine behind the CodeReady Workspaces IDEs, the cloud IDE. So that workspace, mainly, is one, two, or sometimes three custom resources that have a controller that finally creates a deployment, with a number of things like PVs, PVCs, services, config maps, and secrets, and all those steps, to have a full-blown IDE. And so, yeah, I tried, I started trying to run that on kcp.
E: The whole point is precisely to show that we can have application-based CRDs on the kcp layer only, and have the running deployments that underpin the IDEs on physical clusters; and especially being able, if you create a workspace in one namespace, to have all the objects of, you know, that namespace being put on one given physical cluster, and maybe the IDE for another workspace, in another namespace, on a distinct cluster.
E: So obviously, I had to unplug webhooks from the DevWorkspace controller, because they are not supported at all for now. And, yeah, things are moving forward, and, if possible, I might present a short demo of this, if it sort of works, tomorrow in the devtools org, in a presentation about App Studio.
E: Exactly, and I'll probably post things... interesting things, yes: that, finally, we found a number of, you know, bugs, or things to fix, also in the existing prototype. So that's quite interesting also, to bring up the various limitations that we were not aware of, and that are shown by real use cases, like this one, which is quite a complete use case.
A: To be fair... cool. I look forward to seeing it, and thank you for working on that. All right, with that, I'm gonna call time and post the recording shortly after this. I added some notes; if you have other notes you think I missed, feel free to add them to the issue, or mention it in Slack, or any of the other ways. All right, thanks, everyone.