From YouTube: 2021-06-15 kcp Community Meeting
A: Oh, there we go. All right, hi, welcome to the kcp community meeting, Tuesday, June 15th. We have a couple of updates. Actually, first, speaking of recording: I wanted to get people's opinions. For the last maybe three or four of these meetings we did the recording and posted it to YouTube. When I post them to YouTube, I also watch them again at 2x and take quick notes, because I'm really bad at taking notes while the meeting is going on. Is that useful to people? I'll probably still watch them anyway, just to refresh what we talked about. But yeah, the note-taking, or the videos in general: we're definitely going to do the videos no matter what, because that's easy and takes no time. As for the note-taking, I will probably rewatch the recordings at 2x anyway, but if I don't need to take notes while doing that, that would be fine. If people find those notes at all useful, though, I will keep doing them.
E: Six to nine weeks of something like this, there's a lot of context sharing that doesn't always happen in doc form. There isn't a commonly accepted context for what the heck something is. This happened a lot during the early days of Kube: continuous reinforcement of "here are the things we're talking about" and "here are the same things we talked about again." That always has reinforcing value for getting people into a shared worldview.
F: To echo and emphasize a little bit what Clayton said: I tried reading the notes, and first off, I totally agree that notes are much better than video. But as Clayton said, the early notes didn't capture all the early context. I suppose that's just gone, but maybe we could try a little harder to get that kind of information into future notes. Great.
A: Yeah, I definitely would like to do a better job of filing issues to track things. I know a lot of times we say, "oh yeah, we'll follow up with this later," and then don't. So I will try to do an even better job of tracking issues, I'll keep doing notes, and I'll try to do more notes as we have them. That's very good feedback; I'm glad to hear that they are useful.
A: I had been doing them for my own use, and if anybody else benefits from them, I will keep doing them. Yeah, great. In the last week, actually yesterday, I sent a PR to get us toward the next phase of the demo, which is being able to watch resources rebalance across clusters as clusters join and leave and become unready, and all sorts of stuff.
A: As new clusters join or become unready, or whatever happens, the deployment controller will see those changes, rebalance, update existing things, and delete deployment leaves as clusters get deleted, and hopefully give us a more flashy, fluid demo of watching replicas rebalance across existing clusters. This ties into David's work: David is also improving the API negotiation logic so that a cluster becomes unready if its types are not compatible in the way that we need them to be.
A: This will be one way we can demonstrate it: why have five clusters? Suppose one of the clusters suddenly changed its definition of a deployment in an incompatible way. As a result, the deployment replicas that were there get subsumed by the other clusters, but you don't get a global outage. What you would have gotten before, without this multi-cluster work and without this API negotiation work, is that your single cluster no longer knows what a deployment is in the way that you need it to, and everything just sort of bursts into flames and dies. Now, after this and David's change, workloads will move gracefully over to okay clusters.
E: So, to practice restating the objectives: there are kind of three things that are useful here. First, the idea of a control plane for Kube that is reactive the same way that Kube is to machine lifecycle, with some reasonable first stabs. There's been prior art to this, and we're drawing on some of it, but we're also getting a set of new people's brains around all of the problems.
E: So we don't have to be experts in all of the historical problems; instead, Jason and David, after this, will be in a mental spot where they say: oh, I've thought about these problems enough that now I'm dangerous, and I can expand my danger by going and learning about other projects and where they've failed. And the difference from earlier projects is the API variability: with flexible control over APIs,
E: we gain a new tool that lets us move up the food chain in terms of the amount of damage and havoc we can cause. Second, this would be useful for an end user: I would like to just use deployments like I do today and get a net new feature.
E: This will set us up so that we can then test the hypothesis: how many of the other resources do you need? Do you need ReplicaSets? Do you need pods? The demo-three, or prototype-three, phase is: okay, can we mitigate the lack of ReplicaSets or pods? And then the third thing, I think, is opening the door to understanding all the open questions for a world where APIs can change.
E: APIs are what really matter in a declarative system: having good declarative APIs, some of which are going to be the Kube resources, some of which are these future ones. What are the mechanisms and tools that we would need for that?
E: What would it mean to have a type, a controller, that comes along and says: I'm going to go from Knative to something else, or I'm going to go from Cloud Foundry or Heroku to Knative?
E: How do I make that API change reasonable at this control plane layer? Maybe the controllers get swapped out, but the API didn't change; what would I have to do? So that's setting up the third phase: what if APIs are fungible? Those are the goals of this whole prototype. Is that useful for resetting context?
E: And this is a good thing too: in the prototyping phase, we should practice restating what our value is at all times, so that we're testing the hypothesis that what we're building is actually valuable.
F: I'll say I'm still a little bit confused, not sure that I understand what you mean about API fungibility. I think I heard you talk about something that was centrally engineered and kept coherent across the clusters, and earlier I saw discussions of: well, what if the different target clusters had different versions of some APIs?
G: Yeah, that might be interesting. The demo I plan to do (we have a second part of this meeting) is a prototype I started working on of exactly this. When connected to, for example, two physical clusters, and pulling the API definition of deployments from each physical cluster, it happens that you pull the first definition,
G: and it's been accepted as the API of deployments in your kcp logical cluster. Then, when a second physical cluster comes in and its API is not the same, we would check whether the second API for deployments is compatible. If it's not, either we agree to take the LCD, the least common denominator, of both models and work with that in our logical cluster, or, if we don't agree to take the LCD, one of the two physical clusters would be marked as not supporting this API.
G: So there is some negotiation here that has to occur, so that we are sure that everything we accepted as APIs inside the logical cluster is consistent. And if there is any change that breaks this consistency, then we, typically an admin, would have the ability to know about it before possibly forcing the change or changing the API.
E: Yeah, and, Mike, why are APIs important? If you're building a declarative API system that's unmoored from physical infrastructure, you might have multiple agents acting on it; their lifecycle and their API evolution have to be managed. A Kube cluster: Kube guarantees that we never regress or break APIs, which means all APIs are forward compatible, although occasionally we deprecate or remove beta APIs. CRDs have a number of API-evolution gaps. Thinking about this, even though this isn't the short-term goal:
E: how do you manage the software lifecycle of declarative APIs over a long period of time as a control plane? Because the whole point of a control plane is that you don't accidentally screw up and delete everything in the node universe because you made a one-off typo that blew up your entire cloud infrastructure. Depending on what the problem space is, we'll go in different directions, but reinforcing the APIs and their evolution is a key part of the long-term value proposition of something that's not a physical cluster.
A: A thing that I am excited for David to build, even though he doesn't know he's building it yet, is a tool that you can plug into CI, show it a CRD definition on the left and one on the right, and have it say whether this is a compatible change, that you are not introducing a breaking change. That's the outside use. We should also operationalize it and run it in the reconciler, making it something we detect when it hits the cluster.
G: I'd like that very much, but in fact it doesn't exist in Go, at least: semantic diffing, in fact subtyping detection, which is mainly what we want to have. So I started something as a library that is then used in the corresponding controller I have been working on. For this library I just did the minimal part, because it's in fact quite complicated: JSON Schema can represent the same semantic meaning in many, many different ways.
G: So for now I left a number of things unimplemented, but you get an error if there are changes for which we cannot detect whether they are compatible or not. Hopefully, the main types in Kubernetes do not use those advanced JSON Schema constructs, like anyOf, or cumbersome combinations of those.
G: Then it would be very easy for other people to contribute and complete it. And of course, it could also be used for use cases with plain CRDs, currently used CRDs, where you want to check through CI that you didn't include a change that would make the schema backward incompatible without incrementing the Kubernetes API version, for example. It could be very useful. I also took the approach of limiting this to structural schemas, which makes it much simpler than the general case.
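[Editor's note: a minimal sketch of the kind of structural-schema compatibility check being described, invented for illustration and not the actual kcp library. It walks two apiextensions schemas, rejects changes it can prove breaking, and errors out on constructs like anyOf that it cannot decide, mirroring the "unimplemented means error" behavior David mentions.]

```go
package compat

import (
	"fmt"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// Compatible reports whether newS can serve every object valid under oldS.
// It is deliberately conservative: anything it cannot decide is an error.
func Compatible(oldS, newS *apiext.JSONSchemaProps) error {
	if oldS == nil || newS == nil {
		return nil
	}
	if len(oldS.AnyOf) > 0 || len(newS.AnyOf) > 0 {
		return fmt.Errorf("anyOf present: compatibility cannot be decided")
	}
	if oldS.Type != newS.Type {
		return fmt.Errorf("type changed from %q to %q", oldS.Type, newS.Type)
	}
	for name := range oldS.Properties {
		oldProp := oldS.Properties[name]
		newProp, ok := newS.Properties[name]
		if !ok {
			return fmt.Errorf("property %q was removed", name)
		}
		if err := Compatible(&oldProp, &newProp); err != nil {
			return fmt.Errorf("property %q: %w", name, err)
		}
	}
	return nil // newly added optional properties are considered compatible
}
```

A CI job could run a check like this over the CRD YAML on either side of a pull request and fail the build on error, which is the "definition on the left and on the right" use described above.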
F: David, you talked earlier about negotiation. Are you imagining this is something humans are involved in, or is it some automated process?
G: Let me explain. If you add a cluster, you have an automatic mechanism that pulls the APIs you're interested in from the physical cluster into the logical cluster as APIResourceImport objects. You have one APIResourceImport per API version per physical cluster, and it contains mainly the schema and the definition, everything you typically have in a CRD version. Then the negotiation occurs.
G: If you have only one APIResourceImport, it will create one NegotiatedAPIResource, which is a distinct object with the same schema. But if you add a second APIResourceImport from another physical cluster, it will compare the schema of this second import of the same API with the schema on your NegotiatedAPIResource object, of which there is only one for a given logical cluster. And then there is the semi-automatic part.
G: So all of this is automatic, but the semi-automatic part is that, on your NegotiatedAPIResource object, I added a field for whether it is published or not. Because you may want to have, let's say, ten physical clusters join which do not have the same Kubernetes version, not the same version of your API, not exactly the same schema.
G: You may want those ten physical clusters to join your logical cluster before accepting or publishing the API that results from the least common denominator of all these versions of the API coming from the physical clusters. So by default it creates a NegotiatedAPIResource object which is not published, although of course that's just a choice;
G: we could decide to publish it automatically. When you publish it, it creates the CRD in your logical cluster that corresponds to the least common denominator of all the corresponding APIs imported from your physical clusters. But then, once it's published as a CRD in your logical cluster, if you add an eleventh physical cluster whose API for the same API version is not compatible with the existing LCD,
G: you would not be able to change it. You would just get a status with a conflict error message on your APIResourceImport. And that means you could have a UI where you can see all of your imports for these logical clusters, which ones are conflicting, and which ones are compatible and published
G: in my logical cluster. So with just these two additional objects, you can automate as much as possible but still provide freedom for checking the impacts and acting accordingly. Does that make sense? Does it answer your question?
A: Maybe, yeah. The idea is that as much as we can do automatically, we will try to do automatically; but when it breaks down and needs a human to kick it into working again, a human can kick it into working again, and hopefully, once a human has ruled, a new joiner doesn't unfix it, right? Yeah, exactly.
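[Editor's note: illustrative Go types for the two objects in this flow. The field names are guesses made for this sketch; the actual kcp prototype's API may differ.]

```go
package negotiation

// APIResourceImport records the schema of one API (e.g. deployments) as seen
// at one location; there is one import per API version per location.
type APIResourceImport struct {
	APIName  string // e.g. "deployments.apps"
	Version  string // e.g. "v1"
	Location string // opaque location string, typically a physical cluster
	Schema   []byte // the imported OpenAPI/structural schema
}

// NegotiatedAPIResource is the single per-logical-cluster result: the least
// common denominator (LCD) of all compatible imports of one API version.
type NegotiatedAPIResource struct {
	APIName   string
	Version   string
	Schema    []byte // the negotiated LCD schema
	Published bool   // when true, a matching CRD is created in the logical cluster
	Enforced  bool   // set when an admin created the CRD by hand; imports must then conform
}
```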
G: Yeah, probably. And there is something that I didn't mention: you can enforce the schema, in fact the CRD. If you add a CRD yourself in your logical cluster, let's say I add the deployments CRD that I extracted or built myself, then the NegotiatedAPIResource for this API is enforced to the schema of the CRD that I added manually.
G: That means that, in such a case, whatever API comes from physical clusters would never impact your API in your logical cluster, because it has been enforced by you or an admin explicitly adding a CRD for this type. In such a case it would still continue doing the work automatically, but only the checking work.
G: So you still have the ability to enforce, or to bring things in iteratively, until you decide to publish the corresponding API externally in your logical cluster.
B: It'd be useful, Eric: why do you think it's impossible? What parts of it are most impossible? The least-common-denominator thing, and maybe I'm just misunderstanding the concept, but you're not talking about taking v1 of the CRD and v2 of the CRD, no?
E: V1 and v2. So there are syntactic changes, like changing the name of a column, and there are compatible schema changes. Let's talk databases: you have a schema and a table, and you can add a column. But what do you have to do if you add a column? You have to provide a default.
E: Why do you have to provide a default? Because things that were reading the column before still have to get whatever the row had. And you can't remove a column without breaking the schema: to effectively remove a column, all writers have to change. So, thinking about this: v1, as we define it in Kube, is the set of additions that are compatible with v1. There's a small set of cases where occasionally people screw up, like they'll add a validation rule that violates that.
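[Editor's note: a toy Go version of the add-a-column-with-a-default rule just described; the types and defaulting function are invented for the example.]

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WidgetV1 is what existing readers and writers know about.
type WidgetV1 struct {
	Name string `json:"name"`
}

// WidgetV2 adds an optional field, the new "column".
type WidgetV2 struct {
	Name     string `json:"name"`
	Replicas *int   `json:"replicas,omitempty"`
}

// defaultV2 supplies the default that keeps objects written by v1 writers
// valid for v2 readers; without it, the addition would be a breaking change.
func defaultV2(w *WidgetV2) {
	if w.Replicas == nil {
		one := 1
		w.Replicas = &one
	}
}

func main() {
	oldObj := []byte(`{"name":"a"}`) // written by a v1 writer
	var w WidgetV2
	if err := json.Unmarshal(oldObj, &w); err != nil {
		panic(err)
	}
	defaultV2(&w)
	fmt.Println(w.Name, *w.Replicas) // prints: a 1
}
```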
B: So the point is that we are assuming well-behaved schema evolution within a version. And to be fair, we can test that, because the syncer is effectively a distributed test mechanism; I don't doubt that we can test it. What I doubt is that existing, evolved schemata out there are necessarily well behaved in this regard, and those would simply not be supported.
E: There's a self-selection factor: people stop using your API because they can't trust it. You're not a good candidate for declarative config if you change what the API means under the covers and everybody breaks. So there's one filter there. What's left after that filter is meaningful, actionable changes, and unintentional subtle changes.
E: If we can get to a point where we can test that hypothesis: there is the ten-percent "you screwed up" set, where we say, oh, the person delivering the code that implemented that operator on that cluster screwed up with an unintentional API change. We have two other clusters that are still working perfectly fine; pull it out and flag it as an error, the same way you would for an incompatible schema change.
E: The ninety percent gets caught up front; the ten percent gets caught after the fact, and that ten percent looks not much different from "the operations team screwed up" and so forth. So I think we can, let's call it, add another nine to what you can handle for API evolution. What do you do with that last one? Nobody's perfect: a subtle behavior change. That's where you need the next loop, which is: hey, how do I know that this instance on this cluster is running correctly?
E: Well, I have a health check. What happens when that health check fails? I evacuate. How do I get someone to define that health check? I have to incentivize them. In pods, we have a health check. What if, for instance, we had a location or app health check that looked a little bit like an SLO, and when that SLO fails on one of the three clusters we say: oh, one of our sensors has told us about that cluster, that just happened.
E: You know what, we've got extra capacity, let's go. And again, these are three successive systems, but at the meta level they're systems we do not have, and they're systems that could potentially be generally useful in theory. That's how you go from one nine to two nines to three nines in this particular domain. Will we get there? I don't know. But can you see that progression? That's the thought process.
E: It's a new technological loop. Now, the best outcome would be: we get to the prototype stage, we test this out, and we find, hey, actually, if we could convince people to write deployment-level health checks and we could fit that into the system, that's generally useful even on a single Kube cluster, no doubt. How would we work through that? We don't know yet, but it opens the door for it, and it ties to the idea that a cluster is a lifecycle and failure domain.
E: In theory, we're working around "a cluster is a failure domain," but a set of APIs and a set of software is also a failure domain. Having two failure domains, if we can layer them correctly, could get us to that next level up. But then you can bring it back to a single cluster and still get the benefit, which is: oh well, the reason why your cluster failed... say you have one cluster, and you've got this running on top of that one cluster.
A: So I think you're absolutely right that detecting this will be hard; remediating it automatically will be an order of magnitude harder; and scaling this for all possible types and all possible diffs of types is a ton of work. But the idea is not to make it perfect. The idea is to make it possible. It doesn't exist today, so anything we do is an improvement on the state of the art. And yeah.
E: Reducing the investment improves the three areas. That's why that extra... I think, Jason, you captured it perfectly: we've got to go do a bunch of work to do APIs, and then you've got to figure out API evolution. Everybody has the API evolution problem today; it is a percentage, it is a reason why some SLAs fail on Kube. Then the next level up is single-cluster failure, correlated failure domains, which is another mine in Kube reliability, or just the reliability of teams building on top of Kube. And then the movement piece:
E: if you can get the movement in, you actually give a solution to both of those problems, but you've also added a new layer that you can then use even in the apps. So we're going to put a bunch of investment in and get three benefits, and hopefully the Venn diagram overlaps; that is why we're prototyping.
A: The code to detect it at all is new, exciting, experimental code that, if it works, we can give to people when they are making PRs on their own code. If we can detect it, we can detect it.
B: That's the use case I was coming at this from. I've been thinking of this in terms of using this setup to do rapid API development, where I'm going to make mistakes, API mistakes, and I want that not to implode the whole thing on me. But I guess, if I'm using it correctly, I only roll out my latest PR to one logical cluster, watch that one blow up, and decide.
E: So in theory we have a tool that can help us do controller evolution, and the controller is a stand-in here. You could ask: what are most people doing iterative development on APIs built with? They have schema migration tools and all that. How do they test in that world? They deploy it; it runs isolated; it has no dependencies. But how would you test, as an operations team or infrastructure or app-infra team, the meta stuff that runs the other stuff?
E: You want to be able to use those tools and bring them in, but also have the environment be amenable to it. You could look at logical clusters, and the APIs being exposed into them, as a form of feature gating: how would you roll out a change to tens of thousands of applications by evolving your API? You'd want some form of feature gating; you'd want some first stage as we start playing around with this.
E: Thinking about how we get there: this is basically a problem space that tools like OLM and other controller lifecycle-management things live in, and most people don't have tools that let them work with this. I'm not sure our goal in the short run is to deliver all these tools, but we should start thinking about: if you want to be the declarative API for everything, and you want to have these declarative APIs, what are the tools that help you roll out your changes to declarative APIs?
F: So, Clayton, let me just test my understanding of what you're talking about. When you talk about logical clusters, are we talking about another kcp hub? When you talk about a thousand logical clusters all running over the same set of physical clusters, are you talking about a thousand kcp hubs all talking to the same target, or physical, clusters?
E: I mean, yeah, an instance of kcp could run against one cluster, or it could run against a thousand clusters.
E: I think we are probably pretty early on. You could come up with different topologies, and we haven't really formed any opinions there, but yeah, you could imagine hierarchies of APIs; you could imagine logic like this being bolted on alongside kcp, because it's a library; and you can imagine someone saying: this sounds complicated,
E: all I want to do is run transparent multi-cluster applications. I would say the best outcome would be that we focus on transparent multi-cluster first, because it is concrete and pragmatic and helps us open the door to these other problems. Someone who wanted to could definitely go hack in that other direction.
F: So again, I'm sorry, I'm just new and slow here. You talked about a multiplicity of logical clusters. I outlined one scenario where that might arise; you seem to think there are others. Can you outline other scenarios where there might be a thousand logical clusters? What else is in that scenario?
E: So the set of APIs exposed into a logical cluster is not just the installed CRDs; it could be anything. The point of kcp, or of kube-apiserver as a library, is that kube-apiserver happens to be very CRD-focused, and minimal API servers like kcp tend to use CRDs as a stand-in. There's nothing that says the APIs have to be backed that way; that's one-dimensional thinking. It helps when we're talking about what we have today:
E: we've got some basic stuff, and we're going to go look at these clusters, calculate CRDs, and put them into namespaces. But there's no reason that's the only way we could operate; it's just, for the purposes of this demo, we're exploring the prototype and those setups. For instance, another example would be: you've got an API server, or a server somewhere, that exposes a list of OpenAPI objects, and dynamically...
E: You know, the CRDs don't have to be real CRDs. There's no reason they couldn't be files on disk; they could be in etcd, they could be in a logical cluster, they could be on a different server. So right now we're just trying to work through patterns that feel familiar to people.
G: Yeah, and clearly we are trying to use what we have in Kube because, as many of you know, I assume, CRDs are quite tied to how Kubernetes is defined: the APIService and CRD controllers are quite deep inside the Kube code. So using the CRD machinery to import APIs and make them available in a kcp instance is obviously the most straightforward way for now.
G: But then, even in the prototype I started, and that we spoke of just now, I didn't add any schema notions beyond APIResourceImport and NegotiatedAPIResource, because what we mainly want is just to define the schemas of the APIs that should be exposed out of the logical cluster.
F: I kind of understand the idea of a kcp as a hub that talks to multiple physical clusters, but if "logical cluster" doesn't mean the hub or a physical cluster, I'm not clear on what it means.
E: Sorry, okay. So kcp is like a kube-apiserver; that's all it is, with a bunch of the other Kube cruft stripped out. So start with a kube-apiserver. Now add the idea that, if you set a certain request header, then where the kube-apiserver used to say, "hey, I looked at etcd, and here are all the objects in etcd that are under this key,"
E: it instead adds a prefix to that key: the name of a logical cluster. Say the name of the logical cluster is foo. Ordinarily, getting /pods returns all pods, everything under the /pods key, and the kube-apiserver interprets that into pods in different interfaces, where the namespace is another key segment: /pods/foo or /pods/bar is all pods in namespace bar.
E: With a logical cluster, instead of the key being looked up in etcd being /pods, it's /foo/pods. And then, if I wanted to get all pods across all logical clusters, someone could again set a request header, or something like that, it's completely arbitrary, that says "star," and it would say: don't put that prefix on; go get everything. Then, for each pod that gets returned, you'd get pod baz from namespace bar, and so on.
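[Editor's note: a minimal sketch of the key scheme just described, not kcp's actual storage code. The logical cluster is simply one more key segment in front of the resource key, the way the namespace is a segment inside it.]

```go
package main

import (
	"fmt"
	"path"
)

// podListKey builds the etcd key prefix used to list pods in one namespace
// of one logical cluster, following the scheme described above.
func podListKey(logicalCluster, namespace string) string {
	return path.Join("/", logicalCluster, "pods", namespace)
}

func main() {
	fmt.Println(podListKey("foo", "bar")) // "/foo/pods/bar"
	// A wildcard ("*") request would skip the logical-cluster segment and
	// range over the whole keyspace, returning pods from every logical cluster.
}
```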
F: Okay, so a logical cluster is... okay, so it's just another level of namespacing mechanism within the hub.
E: If you come in with logical cluster foo, you talk to this etcd with this prefix. How logical clusters are created, the policies and all that, treat those as completely undefined for now; it's just a mechanism. From Kube's perspective, Kube is going to serve, for a given cluster, an API endpoint that says: here are all the OpenAPI objects, here are all the resources. Backing that is effectively what the logical cluster enables: each logical cluster can expose a different set of Kube APIs and CRDs.
E: The CRDs that get inherited from a different logical cluster and the CRDs that are in this logical cluster are combined and composited together. What David's working on is effectively something that helps create some of those CRD objects. So the end result, to a consumer, looks like the union, or intersection, of, say, two different real clusters: go get their objects, turn them into CRDs, and surface them out through a logical cluster. To an end user looking at that logical cluster, it's: well, I've got a pod.
G: Yeah, and to be fair, what I've been working on is mainly a prototype whose main goal was to let us play with that. Until now, the rules for how we detect an incompatibility, the rules according to which we decide that we can still enforce a change in the negotiated API, and so on, were not really clear, which is why I started with these two
G: objects: APIResourceImport and NegotiatedAPIResource. It's decoupled from the CRD on purpose. In the end you can just decide to create the CRD inside your logical cluster, I mean, publish your API, but you could also just run the negotiation controller with those two objects, play with it, and see.
G: How does it react when you manually add ten APIResourceImports associated with ten distinct locations? And here again, I do not directly refer to clusters, but just to an opaque location string. So you have several APIResourceImports for a given API name, associated with the various locations they were imported from, and then you have one resulting NegotiatedAPIResource. And of course, for the sake of what we already have and what kcp already provides,
G: it's plugged into the cluster on one hand and into the CRDs in the logical cluster on the other, so that we have the whole flow and the existing demos work the same. But the whole point is that we would be able to create APIResourceImports even manually, if you'd like to manually import a given API that doesn't come from a physical cluster, for example.
G: The whole point is not to have something final, but something sufficiently built that we can start playing with it, and that other people and kcp users can get a consistent behavior when importing distinct APIs from different physical clusters. To me it's just the start of a discussion; then we can discuss something that is more concrete.
A: Yeah, Mike, does that answer any part of your question, or lead you to more questions?

F: That answers a lot.
F: I'll ask one more thing. We're talking about multiple logical clusters, again in a hub that sits in some sense above, or aside from, many physical clusters. So when David talks about importing from locations, is he talking about importing either from a physical cluster or from a different logical cluster? Is that right?
G: Yes, yes. For now, the way it's plugged into the existing system imports only from physical clusters, because we've not gone as far as importing APIs from one logical cluster to another. But it could work exactly the same way, because in the prototype an import is just an object; you could create an APIResourceImport from APIs wherever they live. And even the API import mechanism itself:
G: it reads, on an existing cluster that you access from your kubeconfig, both the discovery endpoint and the OpenAPI schema that are exposed publicly, and from that it rebuilds the OpenAPI v3 schema and, in fact, a sort of CRD. So you could even... it's quite straightforward.
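[Editor's note: a rough sketch of that import step using standard client-go discovery; the actual kcp importer may differ in the details.]

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig, as the importer would.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// 1. Discovery endpoint: which group/version/resources does the cluster serve?
	lists, err := dc.ServerPreferredResources()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range lists {
		for _, r := range l.APIResources {
			fmt.Printf("%s %s\n", l.GroupVersion, r.Name)
		}
	}

	// 2. Public OpenAPI document: the schemas an importer would turn into a
	// CRD-like definition for the logical cluster.
	doc, err := dc.OpenAPISchema()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fetched OpenAPI document with %d definitions\n",
		len(doc.GetDefinitions().GetAdditionalProperties()))
}
```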
E: There is a set of standard Kube APIs for discovery, for understanding what types are there. You're not using them if you're just assuming that pods exist; that's a legacy element of our system. But kcp as an idea, logical clusters as a mechanism, minimal API servers: everything is just a cluster you can talk to, and a cluster is a useful bucket of types, of APIs, and of instances of those things, with one tenant concept.
A: Yeah, I think a lot of these fall into the bucket of graduating existing concepts to the next level. CRDs are currently cluster-scoped, and they're still going to be cluster-scoped; we just need a concept of something larger. We've been grasping for a word for what is not a cluster but, honestly,
E: kind of behaves the same. Or you use a built-in that gives you all the power and flexibility, though not all the Kube types can work well as built-ins, because of validation and other rules. Ideally, as a project, we converge those. What are the mechanisms that make converging easier? It might actually turn out, at the end of the day, that CRDs get dramatically improved in Kube and it gets easier to sprinkle code among them or tie it in. We're not there
E: yet, and probably won't be for a while. But a lot of this work opens the door to: well, if we only had to use CRDs for internal types, where are the gaps? We see those almost immediately, David fixed a bunch of them, and the tradeoffs get pushed back, which is: okay, what do we really need to be able to run an awesome API server? Well, I need code at a meta level, like webhooks, and webhooks suck operationally.
G: When you add a new APIResourceImport, it recalculates the LCD and the compatibility, because everything CRD-wise in Kubernetes is based on controllers, and for now it is done up front.
A: Cool. And Mike, if you have more questions, please ask them. I don't want anybody to get the mistaken impression that we all know what we're talking about. If something we're talking about is unclear, it's probably because we haven't fully thought it out, or we're incapable of communicating it clearly; in either case, we should do better. So thank you. Yeah, great. With that, we have five minutes left.
A: It sounds like David is working on this negotiation stuff and will have something to show soon. I'll have rebalancing in response to cluster changes soon. After deployments, I was going to tackle DaemonSets and...
E: I am trying to commit myself to going through some of the motions on what the real obstacles are if we only had ReplicaSets but didn't sync pods back, if pods didn't come back. I think that's the fundamental scale challenge: if we're delegating, can we still give you an operational experience? I was exploring that yesterday with a few people as we were talking about security models and what we would do. So I'll hopefully put that together as just more exploration doc work.
E: Right now we have a tool where we can summarize the status; what's the experiential gap? So I'm going to spend some time exploring that, mentally, and try to get something either into a doc or into some discussion. That will lead to, while you're going after pragmatic sync, Jason, the question of what the things are that aren't sync, that paper over the experience loss for someone who comes in and says: I created a deployment; what the heck are my ReplicaSets?
E: At a really, really, insanely high level, there'd be pass-through: hey, we've got two locations in this namespace, so when you make the pods-in-namespace call, we make a pods-in-namespace call to the two clusters, summarize the results, and return them in a single list. Why would that fail? Well, first off, what if the names collide? Then there are other aspects, like: what if you wanted to create a pod? Because at the top layer you're saying, I want to create a pod to go run something on a cluster,
E: I don't care where it is. What if you choose the same name? So, thinking through some of these cases and coming up with examples, to me, this is what's surfacing. We're in the pragmatic phase: we think we can do CRD negotiation; we think we can do logical cluster tenancy; we think we can do policy under logical clusters; and I think we can do an amazing number of things with APIs.
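[Editor's note: a toy sketch of the pass-through list call and its name-collision failure mode discussed above; the function and data here are invented for illustration.]

```go
package main

import "fmt"

// mergePodLists fans a "list pods in namespace" call out to several physical
// clusters and returns one combined list, failing on the collision case:
// two clusters reporting the same namespace/name.
func mergePodLists(perCluster map[string][]string) ([]string, error) {
	seen := map[string]string{} // namespace/name -> cluster that reported it
	var out []string
	for cluster, pods := range perCluster {
		for _, p := range pods {
			if other, dup := seen[p]; dup {
				return nil, fmt.Errorf("pod %q exists in both %q and %q", p, other, cluster)
			}
			seen[p] = cluster
			out = append(out, p)
		}
	}
	return out, nil
}

func main() {
	pods, err := mergePodLists(map[string][]string{
		"east": {"bar/baz"},
		"west": {"bar/qux"},
	})
	fmt.Println(pods, err) // two pods, no error; duplicate names would fail
}
```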
E: We think we can do syncing and modification. What we don't know is whether we can paper over all the rest of the gaps so that someone who creates a deployment gets the experience they want. So I'm going to do some pathfinding prep there, and then hopefully have something that someone can go prototype, or that we can have some discussions around. It's probably a little more of a discussion, because I think we have to explore the idea space.
A: Yeah, even related to that: right now, when you give kcp a deployment, it will go create other deployments for you, and we might not want to show you those. We might only want to show you the deployment you created, with the aggregated status of those behind-the-scenes deployments we created for you, but not the deployments themselves.
G: Or even concluding whether we really need to create them at all, because, as we said, we could also mix the syncer and the splitter; I mean, just sync while doing some changes on the fly. Yeah.
E: There are other angles too: if we could surface back info about the individual shards, that's a tool that KubeFed never had. KubeFed was full-object, doing syncing, doing complex things for each sync. We're trying to open up the possibilities of going in a different direction and addressing the challenges KubeFed had, while circling back to the possibility that maybe this is intractable: there's still the percentage chance that we end up saying, well, we can do all these other things, but we can't actually make transparent multi-cluster work.
E: Right now the question is: could you co-opt the entire Kube ecosystem, so that everybody who's just using Kube today would just as well prefer to do this? That's a growth hack, or a collaboration hack, or a community hack: how do you make it matter to the biggest community possible? But you're right, Mike, about all of those long-tail use cases, whether they're long tail in scale, long tail in specialization, long tail in library.
E: We definitely want to enable those, but we need the big multiplier, where it's: this applies to everybody, oh, and we're all working on the same tools. A little bit like how Kube did it: everybody looks at Kube and sees their own thing. We want everybody to be able to look at the kcp idea and be able to use those same tools.
A: Yeah. With that, I think we have hit time. I will post the recording and notes as soon as I can. Thanks, everyone, good discussion. We'll see you next week.