From YouTube: Kubernetes SIG Multicluster Feb 21 2023
A
I see the website is coming along. Thank you so much for that work.
B
Oh, no worries. Yeah, we're trying to basically automate the creation of the pages now, because right now it's just the markdown stuff, but we need to automate that with GitHub Actions, the same way they do it for the Gateway API.
D
Yeah, another Tuesday. Happy Tuesday, and a merry Tuesday to everyone.
D
Yeah, I heard for the PNW some more snow is coming this week, though it felt pretty spring-like. The birds were all chirping and such.
A
Can you guys see my screen? Yes.
A
Okay, I'm gonna get started then. There has been some interest from the community about topics such as cluster inventory, multi-cluster control plane, multi-cluster controller, and the Work API, and we basically started this discussion mostly to gather some interest and get some feedback. So please feel free to interrupt during this presentation and provide your comments, feedback, or suggestions, because nothing here is meant as "this is the way we should do it"; this is mostly just a brainstorming document. So, before I get started:
A
One point that Mike wants me to highlight: the APIs here obviously could reside in a Kubernetes cluster, but they don't have to; it could be an RPC server or a cloud API provider. But for simplicity of the discussion and the demo, I'm gonna assume that we're working with a Kubernetes cluster. And I just want to quickly mention that the topics below are vendor and provider neutral, so nothing is specific to a certain vendor or provider.
A
Okay. So the first topic I want to discuss is cluster inventory.
A
So in the About API, you can define a ClusterProperty with the cluster ID, with a certain value. This value can be the name, or it can be a unique identifier as well, and then we also have the concept of a cluster set. So I feel like, with this really great work from Laura and the community, having this sort of backbone for uniquely identifying a cluster, I'm hoping that we can come up with some ideas to build on top of it.
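As a concrete sketch of what's being described here, the well-known properties from the About API KEP look roughly like this (the `about.k8s.io/v1alpha1` group/version follows the KEP's reference implementation, and both values are placeholders):

```yaml
# Cluster-local ClusterProperty objects (About API); values are illustrative.
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: cluster.clusterset.k8s.io   # well-known property: this cluster's ID
spec:
  value: cluster-1                  # a name, or any unique identifier
---
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: clusterset.k8s.io           # well-known property: cluster set membership
spec:
  value: example-clusterset         # placeholder cluster set name
```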
A
Building on top of that, we can expand the usage to having a cluster inventory. So, having that cluster inventory, what is the use case? Again, it gives you kind of a centralized location to view, or maybe manage, multiple clusters. Then it allows you to add or remove clusters that you want to track, through some kind of registration or approval process. And then, potentially, having a set of clusters allows you to do some work on it, such as with the [inaudible] project or the Work API.
A
It also provides sort of a foundation for building a multi-cluster control plane, because without the inventory you don't know what you're working with in a multi-cluster setup. And I got feedback from the community to also mention GitOps specifically. So GitOps, without going too deep into it, is using Git as a single source of truth for Kubernetes resource manifest deployment.
D
So one nice thing about the About API is that it's this cluster-local representation of these two pieces of information, without itself specifying that it's a cluster registry. You know, it's the piece in between.
D
It's the cluster-local piece. I don't have the previous context here, but I know that cluster registry was a big project that SIG Multicluster tried to solve a couple of years ago. It kind of predates me, but it was dropped, in favor of... I think at the time there weren't enough known use cases about what would actually be using that cluster registry's information.
D
Besides "generally, we know that something should know about all of them". So I think it ended up being hard for the community at that time to specify what that was. And now, to date, especially in terms of MCS, some sort of MCS controller that knows something like what a cluster registry would need to know is required, but not specified. So, speaking for GKE:
D
There's an out-of-cluster process to know who is a member of the cluster set. I think for Submariner it's in-cluster, in a central, more like a hub, cluster, though Stephen can totally correct me if I said that wrong. So...
D
Those have now evolved a little bit past where this conversation was in the first place, because they have actual usage. So I do think that's a good opportunity to, you know, re-talk about it, as to whether it's an extension, I guess, of the About API. The way I imagined it is that something has to populate what's in ClusterProperty, and, like, be the admission controller for that information, and right now that's very vague.
D
What would that be? A cluster registry could be that, but we haven't defined that concept, almost on purpose. So yeah, I feel like that's kind of the state. I think a potential good step, or something that falls naturally out of the
D
centralness, I guess, of the About API as a SIG Multicluster project, is that:
D
Right now, all that we really say is that users of the MCS API must populate the About API, so that MCS can make DNS. And then, on top of everything, DNS in the MCS API spec is optional, but I don't think it is in practice in any of our cases. So yeah, I think, just in general, I want to bring up the point that there's not a strong standards obligation between the three of these things as it stands right now. And I do think cluster...
D
If we try to standardize what a cluster registry is, and how tooling should interact with the About API, or through a cluster registry concept that maybe self-populates the cluster-local representation (which happens to be the About API), that would tie them together a bit more strongly. And maybe the time is right for that, since we have some real examples.
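For context on the MCS dependency just described: per KEP-1645, a Service is made visible to the cluster set by creating a ServiceExport alongside it, roughly as below (the Service name and namespace are placeholders):

```yaml
# Exporting an existing Service to the cluster set (MCS API, KEP-1645).
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-svc      # must match an existing Service's name (placeholder)
  namespace: demo
```

Consumers in other member clusters then resolve `my-svc.demo.svc.clusterset.local`, and headless-service records are scoped by cluster ID, which is why an MCS implementation has to populate the About API's `cluster.clusterset.k8s.io` property.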
A
No, I think that makes a lot of sense. I was hoping we could reach, maybe not a conclusion, but at least a starting discussion point. Maybe we can start standardizing the cluster registry again, or at least make the attempt, like you say, maybe with some linkage to the About API. So I quite like that discussion point. Does anyone else have any comments on this topic?
F
We can get into it more later, but yeah, I tried to touch on exactly what Laura was describing in the first sentence or two of the multi-cluster controller section at the bottom: basically, that it may implement, or be dependent upon some other implementation providing, the cluster registry, to fill out the cluster ID, that kind of thing.
A
Right, yeah. So it seems like everyone is in agreement. This is definitely a topic that we should continue to explore.
A
Yeah, maybe I'll draft another document on having a cluster inventory and sort of start that discussion. Maybe we can try to standardize the cluster registry. What do you think about that?
D
Yeah, I have one other thought. I want to know what the overlap is here, in use case or problem definition, between the list of the clusters and then the list of the things in the clusters, because I know there have been conversations on Slack. I think there's a project called Clusterpedia that was about having a centralized tool set or something to, you know, get pods across everything, basically. Is that... do we feel like that's in the same category, I guess?
A
I don't think so, because I think that is more of a search capability. You have a list of inventory, and then you want to sort of drill down on some of the details of that inventory. But, you know, I think from a scaling perspective, you can already imagine, if we hold all that data on wherever this cluster inventory is:
A
It might be a little bit too much. But that's just my idea. At the beginning of this discussion, I was really hoping we'd start off really small with the cluster registry, and mainly just tie the cluster ID and maybe the cluster set together. And I even mentioned that, you know, even the status of the cluster, I know it's useful, but maybe it's not...
A
Maybe it's not a requirement yet. I really want to focus on the requirement being having the cluster ID and the cluster set, so we can sort of continue to expand, you know, that API, or whatever it is we're trying to create. Does that make sense? Yeah.
D
I would still like to add, under use cases, with whatever asterisk or something, the idea of then having, like, a toolchain on top of this that can search other resources, I guess, in other clusters, just because I know that's been talked about several times. And then we can kind of decide if that's a motivating use case, for what would actually be the client of this thing, more in that sense than, like,
D
you know, actually solving that problem right now. But yeah, I think there's some prior art, and I also think that's something we could get individual, you know, testimonies for, like "I needed this in this case", which is also usually where SIG Multicluster preferentially wants to start from for new projects, generally.
G
So one thought is: it might be useful to document requirements in terms of the admin use cases, like "as an enterprise admin, I would like to have a single place of accounting for all clusters in my enterprise", and then describe how that relates to a cluster set. You know: is a registry a registry of all clusters across all cluster sets, or is it per cluster set?
G
So if you cast it in those terms... I mean, I would expect this is primarily a tool for the enterprise admin. So: "as an enterprise admin, I'd like to do this, and I'd like to do this." I think that would help sell the case better, and that should cover things like, you know,
G
is it tied to cluster sets in some way, or is it independent of cluster sets? And what are the other kinds of information that the admin could use with this centralized cluster registry? And then there could also be multiple roles, with different privileges, wanting to share access to this registry, so that, you know, admin A only gets to see clusters in department A, and admin B gets to see all clusters everywhere.
G
One quick thought is that the cluster set could be a somewhat limited trust boundary, because it mandates namespace sameness and so on, right? Whereas in an enterprise, you would have lots of clusters, not all of which have namespace sameness. So presumably a cluster registry would need to have a larger scope than a cluster set.
F
I'm very much in agreement on that. That's definitely a use case that we're interested in addressing: having some sort of cluster registry implementation able to group and manage multiple separate cluster sets, right.
A
Okay, I don't want to take up all our time, so I'm gonna move on to our next topic: the multi-cluster control plane. So, the multi-cluster control plane: obviously, as the name implies, it means having a management cluster,
A
now that you have, or assuming you have, the cluster inventory. So with the management cluster, you can use it to sort of deliver workloads, or perform the query that Laura was talking about. And then it's also, I guess, sort of important that managing a cluster is similar to managing other resources in Kubernetes. And I know, in terms of design,
A
maybe we can take a look at the existing resources and sort of scale them out in a multi-cluster way. So a multi-cluster control plane may also live, within itself, in like a dedicated pod or container, with no API support for resources such as a Pod or Deployment in the control plane.
A
So this might help keep the multi-cluster control plane lightweight. And by having that control plane lightweight, and then each control plane managing a certain set of clusters, that sort of helps with the access control that we talked about earlier as well, because each multi-cluster control plane only has a view of certain clusters. So it's almost impossible for it to see the other clusters that you don't want a certain user to see.
A
So, some of the use cases: I know that in Cluster API it's mostly dealing with provisioning, and even in terms of provisioning it does sort of have the concept of "oh, this is my management cluster". So in that case, it provides the use case of providing a Kubernetes API interface that manages multiple clusters.
A
You know, it contains, as we talked about, the cluster registries, so that we can select clusters, search clusters, deliver workloads, and maybe enforce some policy. And then it also provides sort of the foundation, or the environment, where all the other multi-cluster capabilities can be developed, such as the Work API and, as Laura mentioned, the search as well, from Clusterpedia. So that is the main goal of the multi-cluster control plane: just building on top of the cluster inventory.
A
This is maybe a bit more high-level, but does anyone have any comments on this concept?
A
Yep. So the Work API reference implementation is written sort of... so, like, you have a centralized location, and that's where the workload that you want to deliver goes. That centralized location can basically be the multi-cluster control plane. So once you apply the workload on that multi-cluster control plane, the work agent is supposed to be able to watch that workload and then pull the workload down to the spoke clusters.
D
Yeah, maybe... the Work API right now, like, the architecture refers to the hub-and-spoke model, where the hub is where the work you want to deliver is published. And I think you're saying that the hub should be standardized into this concept of a multi-cluster control plane, that maybe also serves other tooling that needs a centralized control plane. Yep.
A
Yep. But do you think the cluster registry is sort of, like, the prerequisite to having this multi-cluster control plane?
A
Thank you. Oh, do you think it's possible to implement the control plane without the cluster inventory?
D
Probably not as described here. I forget if the doc also has multi-cluster controllers in here... that's also a topic, right? Yeah. So I think it's possible to do multi-cluster controllers without a cluster registry.
F
An API implementation might be able to do something that could be a multi-cluster control plane, where you apply a resource to one... I'm not sure, but it might, without necessarily looking it up in a separate implementation.
A
Yeah, feel free to drop any comments, or fix my spelling and grammar as well as I go. Thank you, whoever is waiting up. Yeah.
F
Yeah, I'm not sure if, or how much of, that might be implemented inside the black box of Terraform Cloud, right.
A
Thank you. Any other feedback before I move to the next topic, in the interest of time?
H
At this moment, I mean, AKS has a product called AKS Fleet Manager, which is pretty much their version of a multi-cluster control plane, and Karmada also... like, both products are using some kind of stripped-down API server version, with no built-in controllers etc., to host the core hub cluster functionalities. So I'm thinking about the specific work item.
A
Totally agree that we should have a reference implementation of this. The project that I work on, Open Cluster Management, also provides sort of this multi-cluster control plane, so I think... I'll get to it on the next topic, the Work API topic. And then we added a doc entry to the work that Nicholas and Laura have been doing, and in that entry I actually referenced all three: Open Cluster Management, Karmada, and the Fleet Manager.
A
So we will... but yeah, as I go through the Work API project, and maybe after, I'll do a quick demo of sort of how to tie the Work API, the control plane, and the inventory together with one of the implementations, just to maybe gather some feedback and notes.
A
But yeah, I definitely agree that we should have a reference to these implementations.
G
I would have the same comment as before, which is to define the requirements in terms of use cases, like "as an enterprise admin, I would like to use a multi-cluster control plane to do this", right? Or, like, role-based workflows.
A
That's an excellent point, yep. Okay, any other feedback before I jump to the Work API? So, for the Work API, I'm going to go really quickly here, because it's already a project that is established in the community and the SIG, and I did at least two presentations on it. So it's basically about having, not necessarily a hub-spoke architecture, but the current implementation, the reference implementation...
A
So the use case is: you obviously want to see the status of your workload delivery, and you might want to see the status of the resources after the delivery as well.
A
So if you're interested in this, go ahead and please visit the design doc in the repo; we have two YouTube videos on the discussion about the Work API as well. And I want to do a quick demo, a really quick demo, using Open Cluster Management; inside it there's an implementation of the Work API. So this sort of shows all three topics that we discussed. The cluster inventory you get with one of these APIs:
A
you get the ManagedCluster, as we can see on the left side. This is sort of the multi-cluster control plane; on the right side here, there are two clusters that registered to the multi-cluster control plane. We sort of touched on this topic as well: how each namespace is dedicated to a cluster. So now we can look at the work that we want to deliver to one of the clusters.
A
So the namespace here is "cluster one"; we want to put the work into the cluster-one cluster. The workload is simply a deployment, and then this part brings back some of the resource status of that deployment. And then I apply that work.
A
So we can see that the work is applied on cluster one, and the work is not in cluster two, because it was only applied in the cluster-one namespace. So if we get...
A
If we look at the status of the work, you can see that the resources are available and have been applied successfully, and it actually brings back some of the resource status, such as the ready replicas, the available replicas, etc. So that was just a quick demonstration of the reference implementation of the Work API, tied to the topics of the multi-cluster control plane and the cluster inventory as well.
D
Really quick before you go on: can you talk a little bit about what the open items are for the Work API project? Is it that there are more vendors having more implementations? Is it, like, effectively this conversation here, where aspects of the Work API project, like the multi-cluster control plane, are implemented, you know, ad hoc, I guess, and the reference implementation for the Work API is something you want to pull up more as a standard? Just quickly, maybe: what's...
A
We want to set an open standard for delivering work to multiple clusters. There's an enhancement proposal in a PR; we've been going back and forth on some of the items, so we would love for the community to chime in with ideas on how to substantiate this Work API.
A
And our goal is to make it vendor neutral and provider neutral as well, and just to have that standard API in place, so that maybe in the future, if we have a centralized cluster inventory, we may or may not have a centralized multi-cluster controller, or control plane, using the Work API to deliver across multiple clusters.
A
So other projects that maybe don't have a multi-cluster capability can sort of adopt either the Work API or some other API to gain multi-cluster capabilities.
A
I'm gonna hand it to Mike Morris. Sorry that I took so long on the multi-cluster controller topic. Mike, do you want me to stop sharing, or do you want me to just scroll down as you go?
F
Yeah, just keep sharing and scroll down. Okay, yeah. So, yeah, the multi-cluster controller section was intended to kind of cover a range of projects, some of which already exist,
F
that may have some overlap with, or dependency on, a cluster inventory or multi-cluster control plane. But essentially, the focus of the scope is more on application functionality or networking than it is on cluster management itself. So the cluster registry or inventory is useful in terms of understanding where,
F
or where, an application may be located, but it's not really doing much more than that. It's not typically going to be concerned with any kind of workload placement or scheduling concerns, and it's not necessarily hub-and-spoke. One kind of example of this would be a multi-cluster, let's say, cloud ingress of some sort, that could be offered as a hosted cloud service. I think GCP has an example implementation of this.
F
They can route traffic to multiple different Kubernetes clusters and services within them. So that would be one example of a multi-cluster controller that is able to route traffic to different clusters. And, I haven't looked closely enough, but I think right now that's within a single cluster set; I don't know that for certain. If it doesn't cover that yet, then it's certainly a point of potential extensibility, to be able to route across different cluster sets or something like that.
F
Another one I had in mind was any kind of distributed database: having database replication for high availability, or different failover redundancy models, whether that's writing to, like, a multi-write log, or a replica, or whatever.
F
So one of the examples here was CockroachDB, which has an example of being able to configure a database that spans multiple clusters. So I think that's another kind of application-availability use case: being able to share something, or replicate availability or data, between clusters, but focused on a kind of application-centric use case rather than being a generic implementation.
F
Yeah, and I guess just kind of the last point is, as addressed in the beginning: it's not necessarily constrained to being self-hosted inside a Kubernetes cluster. It could be a controller that lives externally somehow, as a cloud managed service or other such implementation, that is able to direct traffic into, or configure resources in, different clusters.
D
That came to mind, yeah. I really want to connect with the Operator SDK. I know at KubeCon NA there was a talk about multi-cluster operators which, at least as far as I understood it from that talk and, like, skimming later, is kind of similar to where the About API is today, in that, like, the concept of an operator knowing that there are more clusters, like there is a cluster...
D
You know, like a cluster type, that parts of the Operator SDK are now becoming more intelligent about. But the part of knowing what those clusters are, I think that is still, like, the fundamental question: are we connecting... does this have a dependency on something as centralized as a cluster registry, or is this about us making,
D
you know, this piece or this subset of it, aware? But yeah, overall, I do think that, for people who are interested in this topic, if we keep pursuing it, connecting with what the Operator SDK is doing in this space would be really fruitful.
A
I totally agree, because I took a look at, I think, the controller-runtime project. I mean, there was an item about abstracting a certain part of the API, I think the cluster part of the API, and that was already done. And in controller-runtime, you sort of can set up a project where, you know, you watch a resource on one cluster and then have it kind of, well, replicated to the other clusters. So it'll be...
A
But the usability is a little bit difficult, in my opinion. It would be really nice if it could be linked with the cluster ID and the About API to simplify that. I'm not sure, Laura, if you had a look at that as well. Yeah.
A
See, yeah... maybe I'll reach out and see what the status of the ongoing multi-cluster support on this project is as well. Yeah, I'll take a look.
A
Okay, I think that wraps up the topics that I wanted to discuss. I know there are still some topics left, so I'm gonna stop my screen sharing.
D
Okay, cool. Thank you, that was awesome. We could keep talking about these for a while, and also in Slack, but I think there are some more, like, notes about prior art people can add in. And definitely that follow-on we were just talking about for multi-cluster controllers: talk to controller-runtime, or get, like, the updates on that.
D
Maybe, like, do a demo or talk about it over here, or get somebody who has some insight to talk about it. And then there's that Work API proposal PR to take a look at as well. I think those are the direct action items for right now.
G
Okay, let me share that, hang on.
G
Yes. So this is actually kind of a broad topic, but really it's just systematically looking at all aspects of multi-cluster service networking again, and looking at what more needs to be done, both on foundational pieces as well as individual subprojects. Okay. And my thought was to discuss this; I had some brief discussion with the Gateway API and GAMMA teams, and they suggested to kind of build on it.
G
So probably I'll be discussing this within the GAMMA team, and possibly Gateway API as well, and seeing how we can kind of work across these different groups. Okay. So the first thing is that this is really, again, as I think the last point that Michael mentioned: this is about service networking across clusters, as opposed to managing multiple clusters, and there's been work going on,
G
you know, in this space, things like the MCS API, Gateway API, and all the GAMMA stuff, and then all these different service mesh projects, and they're all in various states of completion and semi-overlap and so on. So, if you look at it, I think we need to have a proper framework for multi-cluster service networking, and then, based on that, re-examine these projects to see whether they need changes or additional building out. So there's the foundational question of, like, what is,
G
what is, the kind of multi-cluster service networking model that really, eventually, makes sense across different scenarios? And then I feel like what's been done so far is also still somewhat piecemeal. And, you know, when projects like GAMMA are coming along, which are sort of trying to combine across service mesh and gateways and multi-clusters and all that, one has to rethink: okay, let's properly think about what it is. So, just to wrap up for now:
G
This is the initiative I'm just starting to discuss within GAMMA and other projects; obviously I wanted to discuss it here as well. We are out of time, so we can discuss it on a future call, and I'll just jump to the takeaway slide. The next steps were essentially: defining capabilities and gaps a little bit more comprehensively; defining a multi-cluster service networking model, with reference topology models; and getting into some topics like single-network versus multi-network.
G
These are things that I was covering in the slides, but we can go over it next time. And even multi-meshes: setting that framework and then re-examining all these subcategories, like service discovery, policy, service load balancing, non-Kubernetes endpoints, within that, and sort of driving this in collaboration with SIG Multicluster.
G
That's a very, very quick summary. There's some detail here which we could go over another time, and I'm planning to share this within the GAMMA team, as well as potentially the Gateway API community. That's the quick summary. I know I mostly just teased without really giving details, but hopefully you've got a sense of where this is going. Yeah.
D
I mean, I think having that sort of unified "how to do multi-cluster service networking" with all the things is something that several SIGs and working groups each have a little bit of a piece of the pie in. So, like, it's long-standing that it, you know, could use more...
G
Yeah, more resolution, and precise definition, and so on. I mean, as we discussed, things like GAMMA, which are again trying... so I'll just give you one out of many examples, right? When we were trying to do things with multi-cluster network policy, there was this question about, well, what's the real multi-cluster network model to begin with, right? I mean, can we have single-network and multi-network topologies?
G
And there was a certain view, but then projects like GAMMA are forcing us to rethink that, because, you know, when you look at a service mesh, they have different deployment models, like multi-network, for example, right? So if now you're trying to build Kubernetes-native APIs for a lot of these multi-cluster service networking models, let's precisely redefine the model again, and then re-examine each of these subcategories, like policy, like service discovery, like load balancing, within that context, so that we don't have these ambiguous kinds of areas.
D
All right, yeah, let's table it for now, because we have 15 seconds. There are a couple other notes that Mike added in under there too, to review, but we can also take those up next time. I also posted some quick updates and links about the MCS conformance test and the SIG Multicluster website, but you can read those; we don't necessarily have to do a readout for that right now, and more updates will come in the future as well, when we have time.
D
Thank you, everybody, for stopping by, and see you in two weeks, or on the Slack. Thanks!