From YouTube: Kubernetes SIG Multicluster 2020 Dec 15
A: All right, Laura, last time I looked, I think the agenda was empty, except you were gonna put something on there.
C: Okay, so hopefully you all can see my screen right now. I wanted to touch base on the cluster ID KEP that there's a PR for, and in particular take a second to briefly reintroduce it for anybody, but most importantly bring up the outstanding questions that are on the PR. I consolidated all the comments into, I think, four main questions, and maybe I could just check if there's any other discussion with the people who are here today about those. Then I would love to determine if we can merge this PR with provisional status, and figure out what the next round of moving it to implementable is.
C: So that's kind of the purpose. Just to remind everyone, with my cute little clusters down here: the goal of the cluster ID KEP is to define a standard for how clusters in a multi-cluster environment should refer to each other, and my understanding is that the goal here is to make it useful or strict enough to unblock some known use cases. Some of those, which are detailed in the KEP, are: identifying clusters that are in a multi-cluster setup in logs, for debugging purposes, and being able to disambiguate pods in a multi-cluster headless service.
C: Maybe put some cluster-aware DNS names on them; track when new clusters have joined the environment, which is happening to teal and orange right now. So this is my understanding of why we're doing this and what a sort of minimal, viable KEP here should achieve.
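As a concrete illustration of the "cluster-aware DNS names" use case mentioned above (a hypothetical shape; the actual naming scheme belongs to the multi-cluster services DNS work, not this KEP), a per-cluster pod name in a multi-cluster headless service might look like:

```
<pod-hostname>.<cluster-id>.<service>.<namespace>.svc.clusterset.local
```

Without a standard cluster ID, there is no stable `<cluster-id>` label to put in a name like that.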
C: So, on the pull request, this is what I thought the outstanding questions are. There seemed to be some concern, particularly about how to accommodate people's existing implementations. If people are already using multi-cluster setups that they home-rolled, they have some idea of what a cluster ID might be in the wild, and if they don't follow the standard that we're laying out now, how do we make room for that?
C: There's some discussion about how long a cluster ID really needs to be immutable and unique, and within what boundary. Is that within the cluster set? Is that across, you know, the whole CRD? And who or what is in charge of enforcing that? I think part of this might have been about some loose language in the prose, but we can talk about it directly.
C: There was one discussion point about whether clusters can be members of multiple cluster sets, and if so, how does this immutability and uniqueness of the cluster ID that we're trying to set up in this standard hold in that case? And: how are we going to pick a real name for ClusterClaim, because people don't like it? So those are the things; I have one slide for each. I'm going to pause a little bit to see if anybody has any questions.
A: Are you sharing maybe the wrong window?
C: I cleaned.
C: Great, okay. Hopefully you got this part, that this is what I want us to do here. Here are my cute clusters that you missed before, here are the overall questions, and then, moving right along to where we left off, this is kind of the first question, which I want to open for anybody to discuss: how do we accommodate people's existing implementations of cluster ID in the wild, which may not follow this standard? This is what I gathered from the conversation.
C: Possibly we could explicitly state in this KEP that implementations not following the standard are not ClusterClaims, and therefore can't benefit from the implications of whatever strictness we can apply as the standard develops over time, and/or provide some migration information or recommendations, maybe in response even.
C: I guess I kind of want to hear if this is something we think we need to spend time on: how much we should put in at this point, how much room we should really make for other existing implementations, versus focusing on trying to get people to migrate, or maybe just explicitly stating that if you're not doing this standard, then you're not this thing that the standard defines. You know what I'm saying?
E: Anybody else? No, I think it's mostly one-offs. I mean, every multi-cluster anything has some way to identify clusters, but I don't know that we've really come up with any consistent model yet, as a community, that's in use. Honestly, if we had, we'd probably just jump on that to some degree. I know there's been some questions about how Cluster API could fit into this, and, that aside, I don't know that that's really solved it either. My high-level take...
E: ...is that we've kind of set out to define the bare minimum API that could be useful for building multi-cluster things on top of. So, with that...
E: If it really is the bare minimum useful API, then I think migration should be relatively easy, and, by the same token, I kind of like the idea of saying: anyone who's building on this API, you have to meet this API to be ClusterClaim-compatible, or whatever name we end up going with, and if you don't, then multi-cluster services may not work correctly.
E: That kind of thing. But to make that viable, I think we have to really focus on coming up with that bare minimum useful API.
A: I definitely think we should explicitly state that. The verbiage that is there is good; I think we're effectively saying that we're also deciding it's not a goal that ClusterClaims be backward compatible with any existing implementation of anything, anywhere.
D: Yeah, yeah. Just to pick up on what you just said: multi-cluster services are a semantic and an API, not an implementation, so any one implementation of multi-cluster services might or might not work without ClusterClaims. If you have a completely bespoke multi-cluster solution, say, for example, you built your one-off corporate version of this, and you want to start conforming to multi-cluster services, you can do that. Go nuts, right? If you want to take a community-derived implementation, you might need the ClusterClaims. But if you're using your own implementation, there's nothing stopping you.
C: Yeah, I get that that's the general sense, and I think what could potentially address the comments about it is making basically what you just said more explicit in the KEP, though.
E: I do think, though, we should also consider that part of it. A corollary, I guess, is that as new things start building on ClusterClaims, you kind of do need to be compatible. You could opt out of ClusterClaims, but you're going to be opting out of more of the multi-cluster ecosystem.
D: Yeah, that's reasonable, but that's forward-looking, which is fair. To say retroactively "you have to conform to this API that didn't exist at the time you were designed" obviously doesn't fly, but that's fine. It's just that we're going to build more stuff on top of this, and we suggest that you come along for the ride.
C: Yeah, just to close this one out: I think I can put a little bit more prose on this point, that this is the basis for more improvements in the multi-cluster API ecosystem. If you have an existing implementation, you may not follow this standard, and then you wouldn't necessarily be considered a ClusterClaim under this specific standard, and that's fine, but we'll still be building on this, basically, and then it will just be explicit.
C: That's what I'm getting from this conversation, so that's the plan there. Moving along: how long does a cluster ID need to be immutable and unique, within what boundary, and what is in charge of enforcing that? I picked up from the comments that we seem to be generally coalescing around: as long as the cluster is in the cluster set.
C: So its single cluster ID needs to be immutable and unique within that cluster set for as long as the cluster is in that given cluster set. One of the cases that falls out of this would be: if you unregister and then re-register that same cluster, unregistering would involve deleting the ClusterClaim, but when you re-register you would add a new one, and it potentially could have the same ID as it did before. Possibly.
C: It has to be deleted, so I guess that's an open point, if that is a concern.
E: Well, I think... so the ClusterClaim would be local to the cluster, right? This is within a cluster, so maybe an implementation could build... That's a good question; we should discuss what happens if you lose connection to a registry, or whatever's managing your membership. But one of the things is deleting the... we had a couple of ClusterClaims enumerated, right? We had the cluster set ID and the cluster ID.
E: I think deleting the cluster set ID when removing a cluster from a cluster set makes sense. I don't know that we necessarily want to make any assumptions around what happens to the cluster ID; there's no reason that couldn't live on. I just don't...
E: I think the only thing we can mandate is that the cluster ID exists at least as long as the cluster is a member of a cluster set, and then we want to leave room for globally unique, in time and space, IDs.
E: ...if some implementation does that. But that's for the cluster ID. For the cluster set ID, I think that is tightly coupled with membership in a cluster set, and it would be strange if it lived beyond that, because then someone who thinks the cluster's in a cluster set might take some action. So that could be a useful tool for detecting whether or not a cluster belongs to a cluster set, within some margin of time for reconciliation.
D: So would it be useful within the KEP to just denote which well-known names we think are intrinsic to the cluster and which names are intrinsic to the set? Like, to touch on the Cluster API topic, right?
D: We could make a recommendation that when you use Cluster API to install a cluster, it creates a cluster ID claim, puts it into the cluster, and never ever touches it again, right? And if a user decides to change their cluster ID midstream, who knows what will break downstream, because we've told them not to do that.
B: Yep. We see cluster IDs being generated unique, but how are they going to be repeatable, right? So if I add an instance to a cluster set and it comes up with a cluster ID that's unique, and then I unregister it and re-register it again, it gets a new cluster ID. But does it get a new, unique cluster ID? I mean, are we going to get repeats?
E: Yeah, I think we have to allow that. I mean, in a perfect world I don't think we would; I think unique and non-repeatable is better. But because there are so many different implementations, I don't know that that's a constraint we can mandate. We can certainly say unique during the lifetime within a cluster set, but beyond that I think it's a should, not a must.
E: So it's not like you're going to be generating UUIDs every time, exactly, because basically that mandates a UUID, which means that if you ever want to use this in a human-readable context, it gets really gross.
B: Unless you have UUIDs with, essentially, aliases, right? Where you map... or you're using labels, right? You could use a UUID for the ID, which makes it unique, and then you associate labels with cluster IDs, and therefore you're now associating actions with the cluster member based on labels rather than anything else. That's very Kubernetes-esque, I would say.
D: Right, which was exactly this: how prescriptive do we need to be? What about people who are importing old clusters? What about people who don't want to use UUIDs? What about human-readable cases? I personally think what we should do is not dictate, but strongly recommend, that your cluster ID resource is effectively intrinsic to your cluster. Every cluster should have one, whether they're in a set or not, and if you don't have a good reason not to, do this.
D: You can retrofit it onto any existing cluster. How we use that for a cluster set then becomes another layer of conversation.
E: Right, and if I could really simplify, I think the goal here is that we want a cluster ID claim, intrinsic to the cluster, that is guaranteed at least within a cluster set, but preferably beyond, and that can be used as basically a key in a map to look up clusters.
E: That's kind of the real goal: an implementation has a nice way to reference clusters in a cluster set. And then the cluster set ID within a cluster would be related to membership in a cluster set, and that would be a nice, easy way to identify which cluster set the cluster belongs to. Those are the two high-level things in my mind that we really want to accomplish here.
A: So I think the semantics that you've just described, Jeremy, make sense to me. The other bullet on this slide is: where do you enforce those things? So, as an example, would it be productive to think through a couple of failure modes? If you have a value, say your cluster ID, that changes while you're inside a cluster set, what happens?
A: What should enforce the uniqueness constraint there? And maybe, since multi-cluster services are a good example, because they're relatively real: what would we expect to happen if you have a multi-cluster service deployed on three clusters and cluster A's cluster ID claim changes?
E: Yeah, so I think those could use some spelling out in the KEP, but in general I would say that if that was allowed to change, things would break. I don't know how, but basically, if we say things shouldn't break when you change those, then you need to... yeah, what Tim said: it's a MAC address; things will go really funky.
E: ...if that changes, because we're using it to reference each cluster. And, I guess, new services that get created might use the new ID-slash-address in their headless service names; old services may not, maybe forever, or maybe until some future reconciliation.
E: So maybe there needs to be an admission controller that prevents mutating it. Of course, you could always delete and recreate, yeah.
D: I used to be a big proponent of trying to enforce immutability and really draconian rules, and every time I do that, I end up regretting it, because at some point it becomes the thing between me and my goal, or users find a way around it and then I have no way of dealing with it. So I think we're better off here saying: this is what we strongly urge people to do.
D: Use a UUID; in fact, use the kube-system UID. That way you're guaranteed to be compatible with everybody else. And don't ever change it. If you change it, it might be cached, it might be used in other systems; it is the primary key for your cluster. If you change your own primary key, it's broken and you get to keep both halves.
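A minimal sketch of that recommendation, assuming the ClusterClaim shape under discussion in the PR (the API group/version and field names here are illustrative placeholders, not final):

```yaml
# Hypothetical ClusterClaim carrying the cluster ID. The value reuses
# the UID of the kube-system namespace, which is already unique per
# cluster and stable for the cluster's lifetime.
apiVersion: multicluster.x-k8s.io/v1alpha1   # placeholder group/version
kind: ClusterClaim
metadata:
  name: id.k8s.io        # well-known claim name for the cluster ID
spec:
  value: 721ab723-13bc-11e5-aec2-42010af0021e   # kube-system namespace UID
```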
A: Yeah, and I think, remembering that Laura's goal for today is to get the KEP to provisional status, "don't do it wrong" is a very reasonable principle at that level of maturity.
C: Anyway, okay. There's a question about cross-certs to authenticate, and Jeremy said maybe in the future. In particular, I took out the verification-related stuff from this KEP so that we could save it for another day, so I'm gonna skip over that one for now.
E: Yeah, just a little TL;DR. I think it's captured in the doc, but I had a chat with some folks from SIG Auth, and basically we don't have a good thing to verify against that's universal, so we'd be signing... like, anyone who could write a ClusterClaim would probably be able to forge a signature. So the verification probably needs to be something out of band.
E: Right, right. So basically we should revisit in the future, when we have specific use cases, and that was kind of the feedback.
C: You know, specify what happens when a cluster joins maliciously as another one, or what happens when a cluster changes its ID in the middle of its life. It sounded like delete-and-recreate as the pressure valve for being able to mutate is better than an enforced admission controller idea, and that we would recommend to admins basically what they should do, which includes not changing the ID mid-life, and leave it up to them to break their own clusters if they want.
E: Totally, yeah. I think the big thing about immutability, and the statement that we make there, is really telling anyone who's dependent on this that they don't need a watch, that they don't need to react to changes, and that if they see a new cluster ID, they can treat it as a new cluster, which has a bunch of implications for implementations and what they can do, right?
C: Can clusters be members of multiple cluster sets, and if so, what happens to our special words here in that case? There seemed to be basically an "oh no, sounds very complicated" kind of feeling, if I understood properly. So, let's discuss: do you think that's a discussion for right now, or tabled for later?
A: My gut feeling, and just my own subjective personal feeling, is that I don't think we should make it impossible to do that, because I can easily imagine a multi-hub setup in a sufficiently large organization where maybe there's one hub that is about cluster-level configuration management and cluster-level services, and a completely different hub for user applications.
E: Well, so I can see the use cases, but I'm just practically trying to think: we've created concepts like namespace sameness, and if a cluster belongs to two cluster sets, that seems like it actually links the namespace sameness spaces, and we're kind of picking and choosing which cluster set behaviors spread across. Namespace sameness would then seem to have to hold across both cluster sets if there are any overlapping clusters. So I wonder if instead we shouldn't try to, like...
E: You're creating hierarchical namespaces, right? Because the cluster set really is just another namespace. Exactly, exactly. And that might be safer, because at least then we can still make statements about the cluster set, like namespace sameness: in all the namespaces within your hierarchical cluster sets, namespace sameness would still hold, but maybe other things don't have to; maybe multi-cluster services could be applied to sub-cluster-sets or something. I'm just riffing right now, without having put a lot of thought into this, but...
A: Right, and within a cluster set, namespace sameness applies, but there's no principle that any particular individual has permission to write into every namespace. So there will naturally be varying levels of privilege in what you can do on any particular cluster in the set that still respect namespace sameness.
E: Yeah, yeah. The things I've heard are like: well, what if I want to expose some services from one cluster set to another cluster set? And I think Tim just mentioned this too: explicit collaboration between cluster sets is probably something we want to talk about, but separately. That feels more like ingress to a cluster set.
E: We could call it federation, but I think that's a separate topic that we should address. I kind of agree that as soon as a cluster belongs to multiple cluster sets, you've really just made your cluster set bigger, and if you don't look at it that way and you actually try to keep them separate, you're probably in for a world of pain trying to figure out the invisible transitive impacts across that boundary. Yeah.
D: I think it fits really well with the cluster set concept. Within the larger mesh world there is this concept of mesh federation, which is an unfortunate choice of words, but that's what it is, and I think that's the place where you join different cluster sets. And within the set, I agree with everything everybody else was saying: you're just extending your mesh, you're extending your set, right?
E: My model is like peered networks: a cluster belonging to two cluster sets is like peering between two networks. What happens? Look at IP ranges as namespaces.
C: Okay, right, cool, I can run with that. And now for the layup: how are we going to pick a real name for ClusterClaim? It sounds like we do polls here, so I can make a poll, and these are the ideas that I have seen or eavesdropped on. If anybody wants to throw something else in the hat, I assume I can also put a little "other" text box, so we can accumulate more from people who aren't on the PR or aren't here on this call.
C: Yes, this is supposed to reference the structure of these two claims. Let's see, I'll put it in here. It's a little crowded in here now, but they're basically combinations of name and value, for either the cluster ID or the cluster set membership. Sorry, I'm trying to pull up the definition of them, but now I'm lost.
E: Right, and with the claim, I think we originally had validation in there, which you decided to scrap. I think it made more sense when it was a verifiable claim, but now it's just a key-value.
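For reference, the two claims being discussed are just name/value pairs, roughly of this shape (names, group/version, and values here are illustrative of the draft, not final):

```yaml
# Hypothetical shape of the two well-known claims: one identifying this
# cluster, one recording which cluster set it currently belongs to.
apiVersion: multicluster.x-k8s.io/v1alpha1   # placeholder group/version
kind: ClusterClaim
metadata:
  name: id.k8s.io
spec:
  value: cluster-teal
---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ClusterClaim
metadata:
  name: clusterset.k8s.io
spec:
  value: environ-1
```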
E: A little poll, yeah.
C: ...and peel off any more brainstorming that might be at hand. But I will definitely send the poll and put a free-form text box too, if there's any other...
A: ...thoughts, I guess. Yeah, so one thing that Josh Berkus has done in the past, and in the last six months or so I think we've run a couple of these things, is we have a publicly writable spreadsheet for public comment, where we gathered names, and then, okay...
E: And then we'll run a poll. Maybe in the new year we'll go through and pull out all the non-starters, and then we can run a poll. But I don't know that that needs to block, you know, calling it provisional in any way.
C: Speaking of which: can we merge this as provisional? Maybe you all already know, but this was the definition of provisional. I'm new to Kubernetes governance, so I needed to read it.
E: Okay, yeah. Including things like naming, I think "unresolved" is a good marker to use. So for any remaining questions, let's just make sure they're included.
C: Okay, sounds good. I will make some updates, I will make a spreadsheet, and hopefully I caught everything; I'll let people comment again if not. But hopefully we can get it into the next phase. That's my goal.
D: I would be happy to, if you think it's interesting. I think it's interesting. Yes, yeah, all right. So the discussion around, what was it, Karmada got me thinking. All I could see was the patterns, right? We're using different words and we're arguing about different details, but I felt like I'd seen the pattern enough times before that it might be worthwhile to try to extract the framework from the details.
D: So after the meeting, I guess two weeks ago, I sat down to just sketch this out, and I've mutated the words a little bit over the last couple of weeks, but pretty much this is what I saw.
D: I see this pattern in... if you look at the various GitOps products, they follow the same basic pattern; if you look at Karmada and federation, they follow the same basic flow. So maybe it's useful to talk through the stages of this. I started with this idea of the user, who talks to a source of truth. In the GitOps world, that would be a git repository, and they would make a pull request or something to their git repository. In the Karmada point of view...
D: ...that would be a k8s API server; or, in the federation point of view, that would be a Kubernetes API server that you load some YAML or some resource definition into. And if you squint, this is the Work API; everything else just sort of fits into the same pattern. So we load some payload into the source of truth.
D: Then there's some target selection heuristic, whether that's label selectors on a set of clusters in some look-aside registry, or some other database query; it doesn't really matter. None of these things are independently mutable; they're part of a stack, right? And so there's some way of saying...
D: ...this is the grouping of clusters, and it's going to be stored somewhere; I expect over the next year we'll see some interesting implementations of these ideas. Now that you've selected which clusters this payload is going to go to, there are going to be zero or more specializations that you apply per cluster. I know some people use templates; I've seen people doing Jinja processing, we've seen people doing Helm, we've seen people doing Kustomize, and the Karmada stuff had some specifics.
D: But at the end of the day, this payload runs through this pipeline and it gets delivered out to the individual member clusters. Now, we can quibble over the words, but I was hoping that if we agree that this is the basic framework, or something like it, then whenever we look at a new example of an API, we can say: well, this is the specialization phase, this is the delivery phase, and these are the details. Maybe that just simplifies some of the conversations.
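The stages described above, sketched as a rough text rendering of the diagram (stage names are D's terms; the examples in parentheses are the ones mentioned in the discussion):

```
user
  └─> source of truth        (git repo, Kubernetes API server, ...)
       └─> target selection   (label selectors, registry/database query)
            └─> specialization (zero or more per-cluster transforms:
                                Jinja, Helm, Kustomize, ...)
                 └─> payload delivery (pushed via webhooks, or pulled by
                                       per-cluster agents)
                      └─> member clusters
```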
E: I think one of the other things that comes out of this diagram, too, is that it looks like we actually have maybe some separate phases that can be built independently. So maybe there's an all-up implementation that handles everything from the source of truth all the way down to payload delivery, but maybe target selection can be a plug-in, payload specialization can be a plug-in, or something like that. I think framing it this way gives us other ways to look at the problem.
D: Yes, possibly. But also giving words to the different phases of this... I'm trying to find the right word: it's not a lifecycle; it's a... it's a stack! It's a pipeline.
D: It is kind of a pipeline. But to get really concrete with it: the payload could in fact be the raw YAML plus a bunch of specializations, like a Kustomize directory, right? And so the pink and the blue levels are kind of merged there, but abstractly they are different things.
D: Likewise payload delivery: just because the source of truth is git doesn't mean that it's always going to be pulled, or pushed. I could have an active agent that gets webhooks from GitHub and pushes down to member clusters, or I could have an agent in each member cluster polling GitHub directly. And so I liked that the stages are kept independent. Sorry, I'm talking a lot.
A: So one other variable that I think is present in this chart, and I don't know that there's a good way to show it, but the payload itself has many different forms, and the best line I can draw through that space currently is that there's a pattern of "I'm distributing individual resources" versus "I'm distributing a package", like a subscription-type thing or a Helm chart, or name your package format.
A: A package format here is a different kind of payload from individual resources you're distributing, and I don't know that it necessarily needs to be represented in this diagram, but it's in the neighborhood as a related concept that seems to divide the space that we've seen in the SIG recently.
A: Well, I just think what you said is true, but more that it seems to be a dimension that characterizes part of the problem space. Like, KubeFed v1 and v2 and the Karmada thing are really not about distributing packages.
A: They're about distributing individual resources, versus if I'm distributing a Helm chart. Michael Elder, who was here last week, was talking with Kevin, who was presenting on Karmada, about their subscription thing; that is essentially around distributing packages, so you can make a subscription to a Helm chart. I'm just making the observation that it seems to be one of the axes along which people view the problem space.
D: That's an interesting distinction; I'm not sure what I think about that. In some sense they're both raw payloads, and you can hand-wave and say, well, the specialization is the part where it takes a package and expands it. Or I could imagine maybe inserting a layer somewhere that was sort of the denormalization step.
A: I think, depending on what scheme your payload delivery step has, it might vary. I could see payload delivery distributing a resource that says "install this chart", right? Yeah, so...
D: That's the part that I'm actually struggling a little bit with, right? Let's assume my source of truth is git. I could just check in a chart, or even just a reference to a chart, and say: well, I expect the fubar chart to be on all of my clusters. And the expansion of the fubar chart could happen all the way down in the member cluster, or it could happen at the specialization step, or it could happen at the, like...
D: It could happen at any one of these stages. But I could just as easily have the user say: well, I want to apply this chart, so go ahead and expand it to its fully denormalized form, then check that in, and then have the individual things pushed out, and the net result will be the same. But the concept of at what level you want to manage it is different.
E: You could have those variables either before or after expansion, or you could even have multiple phases of expansion. It seems less likely, but you could have some source get expanded at one phase, like at delivery, and then member clusters do some additional expansion, I guess. I think getting hung up on "payload" as what we're used to, which is YAML, makes this a little more difficult.
A: It might be your thing here, right? And probably what matters the most about it is the part that you expose to the user; it matters a lot less how it's transformed after that. But I guess the only thing, since, like you noted on here, the source of truth might be a git repository or might be a Kubernetes API: maybe another little call-out that the payload can have many forms too.
D: Okay, that's great feedback, and that one's easy to integrate. I'm still going to think a little bit about whether it's worth denoting processing steps: perhaps there's a processor between the user and the source of truth, perhaps there's a processor between the source of truth and the payload delivery somewhere, and perhaps there's a processor at the member cluster, and any one or all of those could be no-ops. Well...
B
Here so much so that it's not going to provide value. Is it better to be prescriptive and say, yeah, we're going to cut off some limit, you know, we're going to limit this to this, this is how it's going to happen? But then everybody's going to have a very consistent model of what's going to happen. The more "you know, it could be processed here, it could expand here", the ambiguity is going to kill us.
D
Yeah, I'm trying to be descriptive, not prescriptive, here. I was trying to attach general terms to things that people are already doing, right. But I do, I mean, I have my own feelings about how things should be done, but I know that's not how people are always doing it.
E
That, maybe just a top-level note like "there may be a processor somewhere in here that changes the form of the payload" is enough. There may or may not be one, because I think, like, it's good to specify that there is a processor, but in terms of the actual flow, I don't really think it matters. If my payload gets transformed slightly but still represents the same data at each phase, that's fine, right? I don't think that changes this model at all.
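The "optional processors" idea being discussed can be sketched roughly like this (a hypothetical illustration; the processor names and payload shape are invented, not from any spec): a processor may sit at several points in the flow, any of them may be the identity (a no-op), and as long as each one preserves the underlying data the model is unchanged.

```python
# Hypothetical sketch: processors may sit between the user and the source of
# truth, between the source of truth and payload delivery, or on the member
# cluster. Any of them may be a no-op; the names below are invented.

def no_op(payload):
    # A stage with no processor configured just passes the payload through.
    return payload

def specialize(payload):
    # Example processor: changes the payload's form, not the data it represents.
    return {**payload, "replicas": payload.get("replicas", 1)}

def run_pipeline(payload, processors):
    for process in processors:
        payload = process(payload)
    return payload

# Any one, or all, of the stages may be the identity.
result = run_pipeline({"app": "fubar"}, [no_op, specialize, no_op])
```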
A
Yeah, here's a question that I was thinking about last night: where should this thing, the diagram, live so that it's accessible?
B
D
Oh, I mean, I just sort of wrote it for the purpose of discussion. I hadn't really thought about publishing it more broadly. I, I don't know, something to think about.

Unfortunately, there's no really good, like, collaborative image format that's easily diffable. I mean, I could turn it into ascii art and we could check it in as a markdown file.
E
D
I would be, I would be happy to render this and check it in somewhere. I just feel like that makes it not very amenable to changes over time. Let me think about how, how to do that. I, I was sort of joking about ascii art, but maybe I'm not, actually.
D
B
I added this comment in the chat too: is there a parallel reverse diagram, if you will, to show synchronization of data back up from the clusters, and how that gets back to the source of truth or is reflected in the source of truth? Whether that's resources that were created directly against a member cluster, or resources that existed on a cluster when it becomes a member.
D
So paul had a comment that was specifically about garbage collection, but I, I think it's really just about reconciliation in general, yeah. Which is: the payload delivery mechanism really owns the responsibility for deciding whether there's actually anything to do here or not, right? So if the resource already exists and it's in the form that it wants to be in, then the payload delivery is effectively a no-op, right?
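That reconciliation rule, delivery is a no-op when the live resource already matches the desired form, can be sketched like this (hypothetical; `deliver` and the resource shape are invented for illustration):

```python
# Hypothetical sketch: the payload delivery mechanism decides whether there
# is anything to do. If the resource already exists in the form it wants to
# be in, delivery is effectively a no-op.

def deliver(desired, cluster_state):
    """Apply `desired` to `cluster_state`, skipping work when nothing changed."""
    name = desired["name"]
    if cluster_state.get(name) == desired:
        return "no-op"            # already in the desired form
    cluster_state[name] = dict(desired)
    return "applied"

state = {}
first = deliver({"name": "fubar-svc", "port": 80}, state)   # creates it
second = deliver({"name": "fubar-svc", "port": 80}, state)  # nothing to do
```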
D
D
A
Yeah, I think I, I, I don't think that question has to have a neat answer right now. Like, I think the primary value of this picture is that it puts names on things that we've all talked about without naming them, and, like, I do think this captures a lot of commonalities in fairly different approaches. So I feel like it's just a solid diagram, and it's good for being a solid diagram.
A
D
Okay, I got some feedback. I will work on this diagram and I will think about how to commit it. I know bowie says save it as an svg, and svgs are difficult, but, but kind of only barely.
E
B
D
Know what good rendering tools there are for graphics? I don't, don't use it much. I'll look into what options there could be here.
A
Waiting for someone to say postscript. Okay, so, final order of business: should there be any more business in this...
E
Year? I, I've kind of assumed, you know, based on holiday schedules and everything, it might make sense to, you know, reconvene in january, early january. I don't know what everybody else thinks.