From YouTube: 2021-05-18 Community Meeting
A: All right, we're going — hi. So this is the community meeting for kcp for May 18, 2021. This is being recorded and will be put up on the YouTube channel. First, a progress update from last week. There were a few big changes this week and some small changes following from them. One was supporting push mode: basically, the cluster controller reaching into the cluster directly, instead of installing a deployment that does the syncing from inside the cluster.

I don't think we've yet sorted out which model is best in all cases, or whether there even is a model that's best in all cases. We might end up supporting both models indefinitely, or supporting yet a third model where the syncer runs externally, completely outside of the cluster controller.

That is a topic — David, did you see it?
B: Yeah, sorry — I was about to say that this third model is already mostly available, especially for being able to debug stuff. For example, you can run the syncer locally and debug it in VS Code.
A: Yeah, it's certainly a nice development mode to be able to run the syncer completely separately. I think there are some questions about how, if that's a model we expect to run in a real-world production scenario, that process would be started, stopped, and signaled.
C: I'd probably say, too, that this gets into the longer term. It's a topic where we're doing just enough to get moving and to keep ourselves from being too baked into one mindset. We should assign it to one of the investigation tracks — roughly, maybe a combination of logical clusters, which talks a little bit about sharding, and transparent multi-cluster, which doesn't really care — or maybe it's a separate arc.

Maybe it's a sub-investigation of logical clusters, or maybe, sorry, it's separate. I was going to put a diagram together for this, but: a logical cluster can be used by transparent multi-cluster, which doesn't require it. One of the dependencies for transparent multi-cluster is something like CRD syncing or CRD normalization. Another one is syncing itself. Syncing is a huge topic — maybe we should spawn a separate discussion thread for the specific behaviors of syncers we're trying to look at, a catalog of what's in the field.

We kind of talked about this already. Jason or David, which of you would prefer to chase the syncer thread? We should just go ahead and talk about the use cases that would inform it, what people are doing, and how it might evolve, and we can add some of these comments there, I guess.
A: Yeah, I can take that on and at least start a thread or an issue — maybe a discussion, but probably an issue — of: these are the things we have been investigating and need to figure out. I'm generally curious how people expect to run this in a real-world scenario. It's nice that we've proven we can move the code anywhere — the syncer just takes two configs, and it works as long as you can get it those two configs and connectivity to the clusters.
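To make the "two configs" point concrete, here is a minimal sketch of that shape in Go — not the actual kcp syncer, just an illustration (the flag names and the stripServerSetFields helper are invented) of a one-shot downstream copy from one kubeconfig to another:

```go
// Hypothetical sketch: a "syncer" needs little more than two kubeconfigs
// (one for kcp, one for a physical cluster) and connectivity to both.
// This is NOT the real kcp syncer; names and flow are illustrative only.
package main

import (
	"context"
	"flag"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	fromKubeconfig := flag.String("from-kubeconfig", "", "kubeconfig for the kcp logical cluster")
	toKubeconfig := flag.String("to-kubeconfig", "", "kubeconfig for the physical cluster")
	flag.Parse()

	fromCfg, err := clientcmd.BuildConfigFromFlags("", *fromKubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	toCfg, err := clientcmd.BuildConfigFromFlags("", *toKubeconfig)
	if err != nil {
		log.Fatal(err)
	}

	fromClient := dynamic.NewForConfigOrDie(fromCfg)
	toClient := dynamic.NewForConfigOrDie(toCfg)

	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	ctx := context.Background()

	// One naive downstream pass: copy every deployment in the source into the target.
	// A real syncer would use informers and handle status upsync, deletion, conflicts, etc.
	list, err := fromClient.Resource(gvr).Namespace(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for i := range list.Items {
		obj := list.Items[i].DeepCopy()
		stripServerSetFields(obj)
		if _, err := toClient.Resource(gvr).Namespace(obj.GetNamespace()).Create(ctx, obj, metav1.CreateOptions{}); err != nil {
			log.Printf("sync %s/%s: %v", obj.GetNamespace(), obj.GetName(), err)
		}
	}
}

// stripServerSetFields removes fields the target API server must own.
func stripServerSetFields(obj *unstructured.Unstructured) {
	obj.SetResourceVersion("")
	obj.SetUID("")
	obj.SetManagedFields(nil)
	unstructured.RemoveNestedField(obj.Object, "status")
}
```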
C: Certainly one of the three topics — minimal API server — is about a well-usable library that helps you accomplish a particular problem domain. And I think on the syncer side you were kind of dancing around the same question.

What are the common patterns that you shouldn't have to solve yourself? Controller-runtime, Operator SDK, and even the Kube controllers are fairly low-level building blocks; a syncer is a higher-level building block. It's more like a framework than a library — you're not trying to solve one particular problem perfectly, you're trying to solve a large class of problems that are fairly generic.

The garbage collector controller in Kube, the namespace controller, the work that Open Cluster Management was doing, and almost every Helm-like or config-like operator roughly has this problem. They all have a bunch of special cases and broken glass you can cut yourself on. The canonical example I think of is Helm, where they don't sync CRDs — they don't update CRDs.

So it's a great example of a class of problem where, if we're trying to bring people together, the question is whether there's a reusable framework for syncing. And I know we've got feedback from the CAPN folks that the virtual cluster syncer is, they think, the best thing in the universe, and Michael's got a bunch of depth there. I think that arc is about bringing the right people together, in this community or in a related one, and then we can use the investigation doc to frame up where we're spending our time trying to build that collaboration.
A: Yeah — I think there's definitely a need for a catalog: go out in the field, find all these syncer-type things, and compare and contrast them all.

I'm curious whether we'll find that one of the syncers that already exists in the world is 90% of the way there and just needs a 10% boost to become the whole thing — that is, general in the ways we need it to be general — or whether we need to write it from scratch. I don't know.
C: Ideally, to me — I'm pretty convinced, at least at a high level, that transparent multi-cluster basically requires a general syncer with specific edge cases. You have to be able to catch enough of the edge cases, but you don't have to catch all of them. All the syncers I've seen have basically baked in one or two of the hard assumptions, and since we know what people's hard assumptions are, we can at least check them.

I think the big one where we're changing the narrative is the assumption that CRDs have to be the same everywhere. Between CRD normalization, CRD virtualization, and whatever else we do, there is a mechanism that fundamentally allows syncing to be more general, which not everyone has had.
D: Yeah, on that note — there's a way you can set up the syncing so that we don't have to take a specific CRD version, which is how Hive does it. For them, the objects are embedded: they're just YAML indented inside another object, so they don't have to match. The only thing that matters is what version of that CRD you have in the target cluster. You don't have to do this thing where you import the CRDs into the kcp layer.

You don't even have to have them there at all, let alone at the right version. So that'd be a thing to be concerned about if you've got logical clusters that assume different versions of CRDs or whatever. Theoretically, I suppose the CRD itself has a version in it, and as long as people are versioning those carefully — you know, v1, v2 — then you can have multiple of those at the same time, right?
C: And that's what we were kind of referring to. What I think transparent multi-cluster needs in order to work is an object at the logical cluster level and an object on the underlying cluster — but the trick is that by looking at all the clusters in a specific scheduling domain, the candidate targets, you find whether those CRDs overlap and what the lowest common denominator is. So you actually make it an operational problem if they don't overlap.

That's what I'd say is our contribution to moving this discussion forward: the idea that you can have a hard schema, which is the lowest common denominator. And if that changes at any point — if the world is no longer true, like somebody goes and removes one of those CRs or CRDs on the underlying cluster, or changes the API version or whatever — then we would detect that as part of the core logical cluster CRD reconciliation loop, and you'd say something like: hi operator, alert — user A on cluster B made an incompatible API change that led to this.
C: It's one of those classes of problems where people are YOLOing it, and we're saying: no, the line we're going with is that the logical cluster CR has to be able to survive upgrade. And then we would reorient the universe — and I mean the universe: OLM, operators, whatever — around the mindset that if your APIs aren't stable, you don't belong on the underlying cluster.

If your APIs have to change, if you need that flexibility, you go up to the higher level. That's kind of the key point of the exploration. I agree with all the things you're saying — you're right, Eric — and I think what we're trying to do is articulate how our trade-offs here would enable transparent multi-cluster as the core use case, but then we'd go find the people who may have actually already done 90% of this.
The thing you bring up about Hive is: if we believe we can't do the CRD part — if you can't define an API at the higher level — then you're not presenting an API to the end user, so you're effectively dropping down to managing it manually. Either way you define some API up above, whether it's a specific pinned version of pods or a special CRD that does magic.

You still have to maintain API stability on that. I think what we'd try to do is say: Kube's core has to be forward compatible. That's something we certainly care very deeply about on the OpenShift side — we don't break API compatibility. But if someone does, how would you mitigate it? I think the syncer would pick up use cases for that; an additional part of the syncer investigation kind of comes down to: what do you actually do if the API versions become incompatible?
Whose problem is that to solve? Is it the problem of the person who provided that cluster for use? Is it the problem of the person who did the extensions? Say something like Rook changes their APIs in a non-compatible way — if you change your APIs in a non-compatible way, you lose a bunch of other guarantees. Maybe what we're really talking about is: what are the guarantees?

Interesting. I only captured about a quarter of this — I'll add it to... Were we going to write this in the previous month's agenda Google Doc, or are we going to do it as comments on the issues?
A: Let's do comments on issues. I think the Google Doc is annoying to get access to, so I'm probably just going to delete it and put things in issues. Thank you, Eric, for mentioning that.

So, to summarize for my own understanding: rather than sort of freezing CRD versions in time and saying you can't change your CRD version, kcp and the syncer would allow you to change your API only in compatible ways, and if you break compatibility, the syncer, kcp, and the cluster controller should at least signal to an operator: hey, this cluster got an incompatible API change.
C: Right — so you have two clusters, they're both at deployment version one, and they both have the same fields. If one of them upgrades and gets a new field, we shouldn't allow you to set that field on the deployment up above, since it hasn't been added everywhere yet. When the second cluster adds it, then you should be able to transparently start using that field. And then what if one of the clusters gets rolled back so that the field is lost?
The best outcome — in my head this sounds like the holy grail — is that kcp spins down your workload on the cluster that doesn't have the field, does all the other stuff it can, and then either succeeds (for deployments, a lot of those operations are fairly straightforward) or moves the workload to the cluster that's not impacted, because you changed the API and rolled back in an incompatible way. And then it should fire the alert which says: we can't satisfy your spread conditions — we prioritized availability over your spread conditions, but your availability is now reduced. And then some other, imagined ecosystem component that we're going to add — like we were talking about before — asks: are your applications meeting their SLA? You would see your SLA number tick down somewhere.
A number in a metric on a graph goes from "you're resilient to these classes of failures" down one notch to "you're less resilient", and that alert triggers an admin to say: oh, we didn't realize we were doing this — let's go find all the teams using that new field. kcp, in theory, makes that easy. And again, this is very API-centric.

We should probably try to map these out — we have a lot of examples of this over the years in OpenShift, for sure; most people basically just...
D: Go ahead — on the least-common-denominator thing, are we talking about having the kcp layer know about a CRD foo, then actually go out to the logical clusters, query what version they have, do a semver compare, and import the lowest one of those? Maybe even a structural compare — I mean, effectively what Kube does.
C: It has to be optional, and so by definition, in theory — and this is not perfect — one of the investigation topics is going to be: can we show two clusters, one upgrades and gets the optional field, and the structural comparison shows the common denominator and highlights that one of them has a new field, maybe in an administratively, operationally focused way? We're not there yet.
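A hedged sketch of what such a structural comparison could look like — not code from kcp, only an illustration of "keep the fields both clusters serve, and report the ones only one side has":

```go
// Hypothetical sketch of a "lowest common denominator" schema negotiation.
// Not kcp's implementation — just the idea: keep the fields two clusters
// agree on, and report the fields only one of them has so an operator can
// be alerted.
package negotiate

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// CommonSchema returns the intersection of two object schemas plus the list
// of property paths that exist on only one side (the "incompatibility report").
func CommonSchema(a, b apiextensionsv1.JSONSchemaProps, path string) (apiextensionsv1.JSONSchemaProps, []string) {
	if a.Type != b.Type {
		// Same field name, different type: treat it as not common at all.
		return apiextensionsv1.JSONSchemaProps{}, []string{path + " (type mismatch)"}
	}

	out := apiextensionsv1.JSONSchemaProps{Type: a.Type}
	var onlyOnOneSide []string

	if len(a.Properties) > 0 || len(b.Properties) > 0 {
		out.Properties = map[string]apiextensionsv1.JSONSchemaProps{}
		for name, aProp := range a.Properties {
			bProp, ok := b.Properties[name]
			if !ok {
				onlyOnOneSide = append(onlyOnOneSide, path+"."+name)
				continue
			}
			common, diffs := CommonSchema(aProp, bProp, path+"."+name)
			out.Properties[name] = common
			onlyOnOneSide = append(onlyOnOneSide, diffs...)
		}
		for name := range b.Properties {
			if _, ok := a.Properties[name]; !ok {
				onlyOnOneSide = append(onlyOnOneSide, path+"."+name)
			}
		}
	}
	return out, onlyOnOneSide
}
```

In the two-cluster example above, the new optional field would show up in the report and drop out of the negotiated schema until both clusters have it.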
C: It might actually be that that's too complicated. But people with CRDs are already kind of YOLOing it on updates, and in most cases when you add something that's not optional, you break right away. We're trying to set up the incentives for people to build extensions where they can actually test these sorts of things. If you think of kcp as the minimal API server, a specific set of tests that we could add to help people is API evolution, which is a super hard problem that nobody's really doing — everybody's doing it individually within their own domains.
Thinking about Kube's power as a control plane for APIs: could we make API evolution more predictable and have tools that encode best-practice knowledge? Like: if you declare a v1, all new fields are optional — oh, you didn't follow that rule, you're blocked. Other examples would be behavior changes — we'll never catch all of those, but how would you set up a test framework? Well, you'd set up two clusters, move the app between them, and have a test that the behavior stays the same.
That would help us catch some of the failures we've had in our services — and some of the IKS folks mentioned this as well. This is stuff that comes up over and over: subtle API behavioral changes. You can't get them all, but for the app-focused ones, the more we can hone in on making those testable in isolation — even something like the MachineSet API — there are things we can do that make the act of testing against multiple environments super easy through automation.
A: Yeah, that also makes me think of when we have transparent multi-cluster working, with moving things across — you know, moving...
C: I love this example too, because the original goal that Chaos Monkey at Netflix was building towards was whole-region failures. How many people test whole-region failures? If you go through the literature and what they wrote over the years as they evolved this story, whole-region failure is just really hard to test, because you don't have enough control over all your dependencies. One of the things I took from that is that they were successively fleshing out the operational expertise they had at dealing with their dependencies.
It's just like the pod smasher in Kube — killing a pod and making sure the app comes back up. You've built that into the normal fabric of operations, and that opens the door. So I'd say a secondary goal of transparent multi-cluster — it should be captured, and if it isn't we should really flesh it out — is that it should be trivial to test regional disruption and dependency disruption in a meaningful way. That's part of what we're contributing: helping get to that point.
C: Maybe we tie it into an automated system, which is the health check — maybe we don't. But just getting to that knob would be a good milestone goal: you could simulate evacuation of a thousand applications and nobody notices. We kind of say: you move the first app, your app moves, you didn't notice. This would be: how do you do that en masse? We should design that as a second- or third-step goal.
D: Yeah — can we plumb in some kind of failure-simulation framework, so I can actually issue a command that says "this logical cluster just died" and it makes the other end respond in the same way it would if it had actually died? We'd sort of put a hook in there. I think that's a good idea.
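One hypothetical way to expose that knob — every name below is invented for illustration, nothing like it exists in kcp today — is a small command that marks a cluster object as failed and relies on downstream controllers treating that exactly like a real loss of heartbeat:

```go
// Hypothetical illustration of the "this cluster just died" knob: mark a
// cluster object with a simulated-failure annotation that controllers agree
// to treat like a real outage. The GVR and annotation key are invented.
package main

import (
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	// Invented resource and annotation, for the sake of the example.
	gvr := schema.GroupVersionResource{Group: "cluster.example.dev", Version: "v1alpha1", Resource: "clusters"}
	patch := []byte(`{"metadata":{"annotations":{"simulate.example.dev/unhealthy":"true"}}}`)

	if _, err := client.Resource(gvr).Patch(context.Background(), "us-east-1",
		types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("cluster us-east-1 marked as simulated-failed")
}
```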
C: That's a good point too. When we design out the APIs we're talking about, they're similar to, but not exactly, existing ones — that was kind of Michael's point in the threads. The APIs we're looking for are domain-specific. What's the transparent multi-cluster domain? There's capacity, there's permission, there's limits — a whole bunch of factors. Can we tease them apart sufficiently so that these things are easy to do, and can we make that the pitch line? I think Eric gets it implicitly.
The goal for transparent multi-cluster isn't only making applications kind of just work at the next higher level; we're also trying to give you the operational and application-development tools.
You need to do the right things on both sides and ping-pong between them. So yeah — whether it's data center splits: simulating a data center partition would be another one, because again we're assuming some network interconnectivity construct. Sometimes that's going to be managed outside the cluster, sometimes it'll be managed inside. But, for instance, imagine that a network partition exists for other reasons, like security or audit.

What happens if you actually get a cluster compromise and you decide to put the cluster in lockdown mode, or at least that portion of the workload? What would it look like to say: stop all traffic at the boundaries, black-hole all incoming connections? And how can we overlap use cases?

We could probably steer where transparent multi-cluster goes toward either the HA side or the recovery-and-resilience side. I might lean towards the recovery and resilience side, because practically speaking the HA part of multi-cluster might be a little bit harder — you need a bit more if you're trying to gracefully move — but you definitely need to be able to test and simulate failure. And maybe, if it's not simulation but just part of normal operations, that's even more of a win.
A: Yeah. Did you have something? I feel like you kept starting to talk and then I interrupted you — I just want to make sure I didn't cut you off.
B: Well, I just wanted to mention that someone in the issues already talked about what happens when you register two physical clusters into kcp today: when the resources are imported the second time, they simply override the CRDs that were created from the resources detected in the first cluster.

Of course I answered that this is really just the basic implementation we did for now, and it doesn't tackle any negotiation, as Clayton just mentioned. But on the other hand, I think that on the Kubernetes fork side, with the hacks that were done, we mainly have the CRD tenancy working for kcp, so it should be possible to start working, in really small steps, on this CRD negotiation — or at least on the compatibility checks.

Maybe, as a first step, I could look into something like: when you add a second physical cluster, we check whether the schema of the CRD is compatible or not, what the differences are, possibly calculate the least common denominator, and finally just use that one. At least that would give us a feeling for how it would behave, because there is already a use case for that with, I think, deployments in Kubernetes 1.19 and 1.20. Then we would maybe be in a bit better shape to take on the other part, which is: how do we really describe and define, declaratively, what importing resources from a physical cluster or a logical cluster into another logical cluster really means?

At some point we will need some way to declare that we want to explicitly import some resources from one cluster into another. For now we don't have a way to describe that, but we could at least implement compatibility checks with what we have today. So if anyone thinks that's a good way to start, we could already start looking into it with what we have, and from there get more insight into how to define this wider resource-sharing landscape — virtualization, inheritance, and normalization — as it has been described.
A: Yeah — and then whether or not the CRD type details are compatible with each other feeds into a constraint that the cluster either satisfies or not. Yeah.
B: Sorry — as soon as we have this comparison settled and implemented — comparing schemas and calculating the least common denominator, or concluding that it's not compatible — then we could apply that in future steps, such as taking it as the basis for a constraint, a scheduling constraint I mean.
A: Yeah — I think, in order to demonstrate a more transparent multi-cluster than we have today... The demo we have now basically just says: give it a workload and it splits it in half. It doesn't even reconcile it more than that — it doesn't even check on the status later. To get to something that actually demonstrates and shows stuff moving, we have to have something that tells it to move and why it needs to move. Those are constraints — things like:
"I never want to have more than two replicas in the same zone," or something — and then CRD compatibility just becomes another constraint. And then we get Clayton's demo of: I upgrade, things spill over into the new thing; I downgrade... yeah, exactly.
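Purely as an illustration of what "constraints" could mean here — nothing like this exists in kcp today — a spread rule and CRD compatibility can be modeled as just two requirements on a candidate cluster:

```go
// Hypothetical shapes for the constraints being discussed; invented names,
// only meant to make the idea concrete.
package placement

// Spread limits how many replicas may land in one topology domain,
// e.g. {TopologyKey: "topology.kubernetes.io/zone", MaxReplicas: 2}.
type Spread struct {
	TopologyKey string
	MaxReplicas int
}

// Requirement is what a candidate cluster must satisfy before a workload
// may be scheduled (or kept) there.
type Requirement struct {
	// RequiredAPIs lists group/version/kinds the cluster must serve with a
	// schema compatible with the negotiated one (see the CRD discussion above).
	RequiredAPIs []string
	// Spread constraints that must still hold after placement.
	Spread []Spread
}

// CompatibleAPIs reports whether a cluster serving the given APIs can satisfy
// the requirement's API portion; spread is evaluated separately at placement time.
func (r Requirement) CompatibleAPIs(served map[string]bool) bool {
	for _, api := range r.RequiredAPIs {
		if !served[api] {
			return false
		}
	}
	return true
}
```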
C: That's the discussion about what the cluster API is and what the placement API looks like. I'd probably say right now my gut is that we need to be doing two things first: synthesizing what people are doing and looking for opportunities to collaborate, and being able to show those simple examples. For instance, in my head, in transparent multi-cluster you create a service, you create a deployment, and they go to a cluster.
With that demo in place, we can then say: okay, the second iteration of this might go in a different direction, based on what we've learned and what's in the community, but the behavior of the experience needs to be at least as good as this. The thing is, to be transparent you have to minimize the number of external objects you create. I think that's another fundamental difference.
It's not completely different — Federation tried this and Karmada's got some variants — but that's kind of what we're going for: the object with minimal changes, Kube keeps working, everything keeps working. And then we can say: oh, well, maybe it shouldn't be an annotation, maybe it should be another object; what transactionality do we need; what are the policies; are those exposed; et cetera.
A: Yeah, definitely — I don't want to create a new type unless we have to, or at least until we have to. But I do think that describing various complex constraints will eventually require something more than annotations.
C: Yeah — so this is a thing where, by the time we get to phase two, hopefully by the time we have this demo working, there are a couple of other dimensions that Kube hasn't really explored. We talked briefly about aggregated API servers and virtual resources — it's an incredibly powerful pattern that's hard to do unless you can fork the repos. Aggregated API servers come with an overhead that's operationally high, and webhooks come with a different operational cost.
The problem we're actually trying to solve is to make it easy, for a particular customer or end user or platform domain, to tweak any of those three. So if it's possible to do virtual resources more easily, it's potentially possible to have a sub-resource on all objects which is its policy object. You couldn't do that before — Federation tried this, and it was just too early.
C: Jordan was actually saying he was looking at CEL, the Common Expression Language. It's much simpler, and we were thinking: once we've got some of these base things in place — minimal API server, opening up some experimentation — that would let you say: I've got a CRD, I want to drop in a new sub-resource with this common expression that takes...
C: ...done. So that angle is: we won't really be able to improve those, but I don't want us to get stuck in the trap of "everything has to be a CRD and a controller", especially when there are fundamental capabilities that would enhance the kcp control plane as a control plane use case — like, I want to add admission.
B: Yeah, this also relates to something that I fixed — I mean, started fixing — in our Kubernetes fork, about table converters: the tables that you get from the server. For now, for all the legacy scheme objects, the table definition is mainly just the generic code that you have for any object. So, for example, for the demo that we initially showed and have now implemented, in order to have the list of deployments shown correctly...
...you have to do something. For now, of course, we add deployments and pods as CRDs in kcp, but when I detect that the objects were theoretically part of the legacy scheme, I just plug back the table converter from the corresponding built-in object instead of the CRD one. But typically...
B: David, yeah — but just to show you the type of limitations that we have with the current state of the CRDs. To me this is just an example of being able to define part of it — most of it — from CRDs, declaratively, but then having the ability to plug in, just as Clayton said, using kcp or the minimal API server as a library, as something into which you can also plug behaviors.
Implementations like that would typically let us tackle those types of cases: you have your object mainly defined in a CRD, but then the ability to plug in a number of details — like subresources, or maybe table converters, or stuff like that — directly inside the kcp server. I completely agree that, later on, doing it the way I did would seem like quite a step backward; it was mainly a way to explore what would be necessary to have the same table power for objects brought in as CRDs that we currently have for the legacy objects. But it seems to me that's the type of problem where we have to make CRD-based, declaratively defined objects more... you know.
C: We'll run into some trade-offs there as we go. Okay — so we talked about phase two, multi-cluster, and phase one, minimal API server. Jason was back this week; he and I chatted briefly. He was finishing up some stuff and won't be able to talk until later in the week; we'll try to collaborate and get a group of folks going. He had a couple of people he'd talked to.
I want to go talk to some others and say: hey, here's a group of people who will talk about minimal API server use cases. That's kind of phase one — what would minimal API server look like as a demo? And then we talked just now about syncing as a sub-element. Were there any of the other core ones? I guess logical clusters — I haven't made more progress on what policy for a logical cluster would look like, but it might be...
E: Certainly, in terms of the syncing, I'd love to see whether we could get any synergy or alignment with the placement technology, or API, that we've got in Open Cluster Management — and certainly with policy as well. Think about either configuring existing policy objects, or understanding whether policy objects have been applied correctly from an audit or introspection point of view: I set up policies, I want to distribute them across my logical clusters, and I want those...
E: There are two layers to it. Open Cluster Management, as we use it today in ACM, uses Hive to provision a cluster — to provision an OpenShift cluster. However, the layer that then manages that cluster is intentionally decoupled from the provisioning lifecycle. The idea is that there's a pod running an agent behavior — it's a pull model; the pod calls back to the cluster manager — and there's an operator for the cluster manager that's on OperatorHub.io, and there's an operator for the klusterlet agent.
It's on OperatorHub.io — the community operator is there today. Those establish an understanding of the cluster: there's an API called ManagedCluster, there's an API called Klusterlet, which is the agent, there's a way for those to establish the identity of the cluster, and then there's a way to distribute work down to the cluster. Those pieces are intentionally decoupled from the provisioning lifecycle.
A: Yeah, that makes sense — that's a reasonable separation of concerns, I think, and I would want kcp not to conflate the two either. Michael, before you joined we talked a bit about doing a survey of other syncer and syncer-esque technologies and sort of figuring out how close they are to what we would need, how close we are to what they need, and whether we should team up.

In what ways should we team up to avoid overlap and increase synergy? I hate that word, but: increase synergy.
C: And Michael, I think there was another point, too. At the phase we're at, I think it would be better to have real, flawed, not-completely-thought-through-and-perfected APIs to study from than it would be to delay them for perfection. And I think this kind of gets to cluster placement versus a higher-level placement.
So what I would probably say is: the more the themes, the mechanisms, and the use cases are shared and documented between this group and the other folks we'll end up having to go talk to, the better. And then, as an example, some of the placement stuff that you had was drawing concepts and similarities from Kube. I think that's a key point — co-opting
what already works, for familiarity reasons, is likely the right move. In my own head, thinking about some of the transparent multi-cluster stuff, I do think there should be tolerations and node selectors that are picked up by the syncer, or by the placement story at the higher level, such that you can't tell the difference between a label a cluster would have and a label a pod would have — but that's managed for you. Which I think is the dual of what you're doing, which is very concrete:
cluster assignment. The merging of those at a higher level — having something really concrete like "this works great for clusters, this works great for pods" — and then we can say: okay, how would we actually give control to the operational side while the applications team just gets a value that they supply, or maybe it's defaulted? I think that angle is useful as well, so I'm not as concerned.
I would say the first couple of iterations in kcp are about exploration and throwing stuff away, and then the community discussions would be: can we lay out the use cases? Could we quickly orient ourselves — API A was designed for this use case; oh, it misses these use cases; well, we didn't have those use cases — great, let's build a big chart or table that shows the trade-offs, and then do the second loop around with you.
In that example use case for clusters, we might say: oh, we've had a great idea that we could steal and bring back down to the cluster level, or back down to the Kube level. Certainly taints and tolerations are not the be-all and end-all of scheduling avoidance — we've gone through like eight iterations of spread and anti-affinity.
I suspect we'll go through five more. So there's an element of — I'm a little bit of a fan of throwing stuff at the wall to see what sticks for specific use cases.
E: And that's fair. I think, at a minimum, what I'd suggest is: as we are collecting the use cases, I can supply the ones that we've been thinking about in Open Cluster Management, because I think there's certainly overlap in the use cases, and we can evaluate the most appropriate way to solve those with a concrete API.
C: I did want to ask — so that was the one thread, the syncer and syncer policy. Are there other threads that folks have thought about since last week? I know there was one email discussion, or an issue discussion, about global load balancing; that probably falls under transparent multi-cluster.
There's been the discussion about whether Hive could use this, and that was the sharding one — how do you do high-scale services? That's kind of trickling along; Devin, Eric, and others have been iterating, and Derek did a follow-up with him. So that thread is more about kcp as a generic control plane for a high-scale service.
A: I saw the global load balancer one — Joaquin, you posted that. I don't know if we ended up with anything more concrete than "yes, it should work", but we don't have any more specific guidance about how it should work. But, you know — what's next on there?
E: Is there a concept of a health check definition, or a health check API, that's similar to the health checks — the liveness probes — on pods, but that can be expressed in a way that validates the application, maybe with other synthetic tests, or basically probing for application availability from outside the clusters in which the application is running?
I think that would be useful. It's similar to, or aligned with, the idea of the global load balancer in some sense, because it's something that may or may not be provided by the clusters themselves — it may be a service that is aware of those clusters and consuming workloads running within them.
C: That's a really interesting one, too. I think it touches on Eric's point earlier as well: if we're going to do failure simulation — if the normal mechanics of what we're building lend themselves really well to simulating very real-world types of failures — then the dual of that is that if we simulate it and we don't also detect it, we've missed an opportunity. There are lots of ways to attack it, but we kind of want to.
One nice thing health checks do for pods is that they're a very simple concept, and while they have limitations, the simple concept works well enough most of the time. Maybe we should think through: what is the simple signal of health? Is it readiness? Is it something more appropriate? Is it a synthesis of signals? Is it something like an SLA, or an SLO, or the SLI that drives both of those?
C: One question I would have is — it probably should look like something we're already doing in a couple of different ways. Who would be the right people with that expertise who can bring it forward? I think, Michael, you've got some of it, and SRE folks who are looking at summarizing status across existing multiple clusters today — their observability teams would have some of it.
I'm wondering if we need to put ourselves in a spot where we force ourselves to go attack at least part of the problem. Maybe, with transparent multi-cluster, to simulate the rebalance we're going to have to simulate one type of failure — what failure would that be?
A: Yeah — and I think not just delete a cluster or something and see everything rebalance, but somehow watch aggregated metrics while it rebalances and see what the effect was. If people are using transparent multi-cluster to put workloads closest to their end users and one of those clusters goes away, it's presumably going to get slower for some of their users, and we should be able to see that in some aggregated metric — not just as a demonstration of how this went wrong, but of how it went right: everything worked, it got a little slower for people, but at least it's still there.

Aggregating metrics — collecting them and putting them all in one place — had not been something I had thought of so far, and global load balancer health checks are also something we'll have to think about.
C: Yeah — whatever the observability story is, maybe the analogue would be pod logs: accessing an individual pod's logs in its namespace.
Maybe there's an opportunity here to say: what's the one-remove interface that works really well? That's kind of saying: whatever your aggregation solution is, it has to give you enough input that you can make meaningful decisions. What are the meaningful decisions for someone running, implicitly, an HA service?
We should bake in SLI concepts — things that measure, monitor, and maintain; that loop is really important. So maybe we frame it up as: what is the observability dual to transparent multi-cluster? What does transparent observability look like, where you don't actually know the mechanism or the implementation, even though there might be a couple of obvious ones?
Because Kube kind of punted on this, and then we came back around and most people just used Prometheus or one variation of that ecosystem. Early on, the project had some ideas that just didn't pan out — there wasn't really a technology in the space, and Prometheus kind of sucked all the oxygen out of the room on that.
Could we set up a similar scenario where either a de facto standard emerges, or the use case demands that someone actually go solve the problem — people say this is such an important problem that we want to solve it by doing this integration, because it gives such a big win to an operations team? So the API that you create creates the need for someone to integrate with it. Same thing for the load balancer, too — I think it's the same kind of argument.
A: Yeah, I completely agree — with the caveat that it seems like another huge scope increase for what we're doing. We need it: in order to have transparent multi-cluster you have to be able to see what's happening in there. But at least give us the hooks to write it.
E: When I tried to poke at the concept of a health check — I don't have an answer for that one; it's not something I've been able to go deeply enough into in other forums. However, I have kind of an idea around it: there should be an API that I can define that simply gives me an up or down, right? Just that simple up or down — it doesn't have to be aggregated metrics; whatever that check is, it's something that can be answered.
A: Right — at first blush it should be easy, relatively easy, once transparent multi-cluster exists (gigantic asterisk) to annotate a service to say "health-check this somehow" — annotate it in the general sense, tell it what to health check — and then have a controller that watches for annotated services, runs a probe against them, and reports back, updating the status with "failed last health check" or whatever. Gigantic asterisk aside, I'm sure there's way more complexity to even that than what I described.
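A rough sketch of the controller just described, with invented annotation keys — a periodic loop rather than a real informer-based controller, and the asterisks still very much attached:

```go
// Hypothetical sketch: probe any Service annotated with a health-check URL
// and record the result in another annotation. Annotation keys are invented;
// nothing like this exists in kcp today.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const (
	checkAnnotation  = "healthcheck.example.dev/url"    // invented
	resultAnnotation = "healthcheck.example.dev/status" // invented
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, svc := range svcs.Items {
			url, ok := svc.Annotations[checkAnnotation]
			if !ok {
				continue
			}
			status := "up"
			resp, err := http.Get(url)
			if err != nil || resp.StatusCode >= 500 {
				status = "down"
			}
			if resp != nil {
				resp.Body.Close()
			}
			patch := []byte(fmt.Sprintf(`{"metadata":{"annotations":{%q:%q}}}`, resultAnnotation, status))
			if _, err := client.CoreV1().Services(svc.Namespace).Patch(context.Background(),
				svc.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
				log.Printf("patch %s/%s: %v", svc.Namespace, svc.Name, err)
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```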
C: I would say the investigations in kcp are intended to show working things that catalyze ideas, which then either fold into existing projects, spawn additions to existing projects, or — if necessary, and we're not quite certain yet — lead to kcp itself. Take transparent multi-cluster as a use case: Jason and I were kind of spitballing, and I was thinking about this afterwards.
I could see kcp-dev eventually being a number of repos in a number of different domains that correspond to the investigations. For some of them, like minimal API server, we might have some examples; for transparent multi-cluster there might actually be a legitimate project; for logical clusters it might be documentation and design explorations — though that might belong to SIG Multicluster or to the Kube API server folks. And for observability,
there might be a working group that's incentivized to go solve it. Maybe that doesn't have to happen close to this group, but we should think about it. We're trying to be as much of a conduit as we can, to say: somebody should think about this; can we catalyze the discussion to happen somewhere that's natural to integrate with — and by "we" I mean the general control planes for Kube. When someone goes and does that, it really changes the narrative for a particular problem domain. So: not being a throttle or a gate.
A: Yeah, we are now out of time, so thank you everyone for showing up. If something stuck out to you in this conversation, please add it to the agenda notes. I will post this recording and update the notes on the issue.

Otherwise, I'll see you all on the internet. All right, have a good one, everybody.