From YouTube: Kubernetes SIG Multicluster Dec 1 2020
C
It's going well. Hope everybody that had some time off last week had a good, safe break, and we'll get started in maybe a couple of minutes.
C
Laura and Kevin, I see that you both have agenda items today. I suspect that Kevin's is probably the longer one. Kevin, would you mind if Laura went first, so we could knock hers out and avoid starving her for time? Cool, thanks.
C
Okay, why don't we go ahead. You know, I realized posting recordings the other day that I have stopped doing an introduction, so we're going to do an introduction, and here it is: welcome, everybody, to the December 1st meeting of Kubernetes SIG Multicluster. First up is Laura with the KEP about cluster ID.
B
Thank you, Paul. Let me turn on my face a little bit to say hi. So my name is Laura. I'm actually kind of new to this group; I'm also new to Google, just a couple of months ago, and hoping to get a little bit more involved in the SIG. So I'm trying to help shepherd the cluster ID proposal that Jeremy has been working on, more through the formal process here.
B
So I opened a pull request for the KEP, and I'd really appreciate it if people could take a look at it, as it is now formally over there, as opposed to in the doc.
B
I think the particular thing I want to highlight, from the conversations that I've been a fly on the wall for over the past couple of weeks: I've basically stripped out everything relating to signing or verification in the version that I put in the pull request, to keep it minimal, down to the stuff that I think we can mostly agree on at the beginning, and then try and add that stuff in later.
B
So I pulled out the verification user story, the field in the ClusterClaim for signature, and anything referring to whether claims may or may not be signed by an authority.
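For readers following along, a minimal ClusterClaim of the shape being discussed (with the signature-related field stripped out) might look roughly like the sketch below. The API group/version and the exact field layout here are illustrative assumptions, not something settled in the KEP:

    apiVersion: multicluster.x-k8s.io/v1alpha1    # assumed group/version, not final
    kind: ClusterClaim
    metadata:
      name: id.k8s.io                             # a well-known claim name along the lines discussed in the proposal
    value: "0b8d4e28-5cbd-4f91-a2c1-2f0f23a1ab4c" # opaque identifier for this cluster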
B
So if you disagree with that, or do agree with it, I guess let me know. And then I think the only other thing, besides just generally looking for any other comments to be moved over to that pull request: if there's anybody else who thinks they need to be, or you know somebody who needs to be, an approver on this KEP (I'm new to Kubernetes governance myself, but approver being the people who actually get to choose whether the KEP is done or not), then please.
B
Let me know. Right now, Paul, you're winning that, but yeah, just let me know if there should be somebody else there. And it's nice to actually say hi to everybody here in the SIG. So nice meeting you, and hopefully working with you all.
C
Well, thank you very much for opening the PR, Laura, and very glad to have you, welcome. Cool, thanks. There's a comma in there that didn't sound.
C
C
D
All right, so it looks like my desktop is not quite stable. I have a standby laptop, so maybe I will switch if disconnected. Can I share my screen?
C
Well, this is odd. I am signed in as the host, and yet I cannot allow you to share your screen.
D
C
C
E
F
D
C
I'm going to sign in from another machine and see if I can enable it. So you just, you don't have the control lit up to see it or to share?
C
D
All right, so it looks like it's sharing. So, a little bit about myself before going through this slide: I'm currently leading the cloud native open source team at Huawei, and I actually started working on Kubernetes back in 2015, and I had a long period of working together with Quinton, and have some knowledge about the federation project stuff.
D
D
Federation background: so actually, from federation v1, I think that was kind of a prototype of the federation idea. So we know there are some things to be improved; like, we actually had all the information in annotations, so that made it not possible to make it stable. I think that's the biggest barrier. But actually, I think in v1 they had the dedicated federated API server.
D
So that's one thing maybe we can still think about reusing. And for the v2 implementation, actually there are a lot of improvements. To me, I think the best improvements are the modular design, and also that we have the dedicated placement and override API structures. From a user perspective, though, it's actually still a coupled, or embedded, API: it means that we have the template, placement and override inside one
D
API object, right, the federated types. So that kind of style results in something not compatible with the Kubernetes native APIs and requires extra learning effort, and also the tooling integration is not able to directly adopt federation, because the API is different. And it's actually a one-to-one mapping between the federated API and the actual workload or Kubernetes API, so every time a user wants to create a federated resource, for example a federated deployment,
D
they still need to fill out a lot of fields, so it's kind of not quite easy to use. And also, v2 is actually trying to provide building blocks, but users still need a lot of customization. So that's also one barrier that slows down the adoption. And we actually tried to offer KubeFed v2 to our customers during the past years and got some feedback from them.
D
One piece of feedback is that the users are kind of heavily concerned about vendor lock-in, because federation v2 seems not as popular as we expected, and also it still needs some in-house customization to make it a complete solution. So they are kind of worried about that effort being wasted in the future.
D
So the idea is that maybe we can try to simplify that, and also provide some built-in best practices, to make it easier for users to start trying it out, instead of doing a lot of customization before really trying out the functionality. And actually, in the early days of moving from v1 to v2, we had a discussion about the API design: whether to provide a coupled API or a decoupled API. Coupled API means just what we are using today in v2.
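As a reference point for the coupled style being described, a KubeFed v2 FederatedDeployment embeds the template, placement and overrides in a single object, roughly like this (abbreviated sketch; see the KubeFed docs for the exact fields):

    apiVersion: types.kubefed.io/v1beta1
    kind: FederatedDeployment
    metadata:
      name: nginx
      namespace: demo
    spec:
      template:                  # the whole Deployment spec is nested here
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
      placement:
        clusters:
        - name: cluster-a
        - name: cluster-b
      overrides:                 # per-cluster specialization lives in the same object
      - clusterName: cluster-b
        clusterOverrides:
        - path: "/spec/replicas"
          value: 5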
D
It's embedding the template, the placement and the override inside one API type, right. So the decoupled proposal is that we can actually still attach to the vanilla Kubernetes API objects, but have the federation-specific information stored in other API objects.
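In other words, in the decoupled style the workload stays a plain Deployment and the multi-cluster intent lives next to it in a separate object. A purely illustrative sketch of that split; the policy group, kind and fields here are hypothetical, just to show the idea:

    # the workload stays an ordinary Deployment, untouched
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
    ---
    # a separate, hypothetical placement object that points at the workload
    apiVersion: example.multicluster.dev/v1alpha1   # placeholder group/version
    kind: PlacementPolicy                           # illustrative name only
    metadata:
      name: nginx-placement
    spec:
      resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment
        name: nginx
      placement:
        clusterNames: [cluster-a, cluster-b]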
D
So, regarding the feedback from the field and our experiences with federation v1 and v2, now we're actually thinking about trying to implement the decoupled API approach. With this, we can use the Kubernetes API to define the federated application template, which would be very helpful for integrating the existing tools that are already built around Kubernetes, and we can also have a standalone policy, or placement, API to define the spreading requirements.
D
It means how to spread, or propagate, the federated applications. And maybe we can also support a one-to-many mapping between a policy and the workloads, or resources, so users don't need to fill out the fields every time they are trying to create federated applications. We can also provide some default policies, so users can just interact with the Kubernetes API to create federated applications.
D
So that may be very helpful to bring an experience that is very similar to the day-to-day use of a single Kubernetes cluster. And also for the override requirements: we think those are actually very relevant to the actual cluster the resources go into. So, for example, we may have a different image registry in different environments, so we may need to change the image prefix when a resource, for example a pod, goes into a different cluster or region, and also maybe the storage class may be provider specific.
D
And also, actually, thinking about the federation functionality today, we think that we may still need to provide some extra functionalities to make it a full-featured landscape for multi-cluster application automation.
D
But I think that for a lot of functionalities, we can just integrate with some of the other open source projects in the ecosystem today.
D
So, like multi-cluster networking: there are already some projects. And for monitoring and observability, Prometheus and also Thanos have some offerings already. For the API management, maybe I'll just highlight with the cursor: so for the API management, we think that the workload management, the cluster registry, and also the propagation execution are kind of what we already have in v2 today.
D
D
And also, for the propagation execution, we can introduce an abstraction layer to support both a push-based mode and a pull-based mode. And the cluster lifecycle management is kind of an abstraction of cluster automation, like scale out, scale in, or provisioning, and that would make it easier to adopt both Cluster API and other abstractions. And also the unified API endpoint: it's kind of providing equivalent functionality to one thing we discussed earlier, the federated read-only APIs.
D
So actually it's kind of a gateway, routing the request to the specific cluster that the user wants to access. So then there's no need for the user to get a direct network connection between the client and the cluster API endpoints, and also it would be very helpful to unify the authentication and the authorization. And then the upper layer is kind of an advanced layer.
D
So I think in this layer, one thing is that, in the cloud bursting cases, we need to balance between when to schedule an application to a new cluster, or schedule the application to an existing cluster but at the same time scale out that cluster. So we need to think about how to make the scheduling and the cluster autoscaling coordinate. And also, actually, the policy management and the config management.
D
These are just kind of very useful functionalities that every end user would ask for. And the underlying layer is actually what we have today in the Kubernetes core; so Cluster API is not in the core, but the others already
D
have the interfaces. And maybe, in the longer-term development, we may also add some features to Kubernetes, like one thing we have discussed before: summarizing the usage at the cluster level, so it would be easier to make scheduling decisions between clusters, that sort of thing.
C
F
F
One thing I'd like to at least make you aware of: there is some work around a GitHub project called Open Cluster Management, where I, along with several other folks, are kind of looking at similar problems, particularly around things like the placement behavior you've highlighted, scheduling and propagation, the binding of policy and workload. And I wonder if it would make sense, maybe either in this forum or outside of this forum, to go through and look at where there are any opportunities for alignment around that upstream capability.
F
D
Yeah, so actually I think, yes, Open Cluster Management is also trying to solve the multi-cluster problem. One thing I'm not quite sure about is: what's the plan for Open Cluster Management for the future? Because to me, I think some of the end users, and also some vendors
D
we are collaborating with, are expecting some standard to be incubated in the upstream, so that, we think, is the best way to benefit the whole ecosystem.
D
And another thing is that we're actually trying to solve the problems from, how to say, the way the end users start deploying the applications. So I'm not sure if that piece is already provided in Open Cluster Management or not.
F
It's at least a problem area that's being looked at. I think the thing that I find very helpful and reassuring is that a lot of the same missing pieces that you've highlighted are things that we've observed as well, right: the tight coupling of the federated types to the existing Kubernetes API, and wanting to have a multi-cluster way of delivering application parts that doesn't require rewriting them into the federated type system.
F
F
So I think, at least in terms of recognizing, we're recognizing the same problems that you're highlighting, you know. I think, looking at what could be contributed even further upstream; but definitely Open Cluster Management is at least an open playground for working through some of these concepts that we are trying to build out and engage others in as well.
F
But I guess the net is: I think this is really good. I think we've also seen some similar challenges with some of the parts that are currently available, and I think it'd be a good opportunity, if you're interested, to kind of dive into where there's alignment here and also maybe figure out where those differences would be useful as well.
D
All right, so I'll just go on to introduce more details about what we are currently trying to do. So here is actually one of the prototypes we are working on. On the left is kind of the architecture, and on the right is the concept perspective.
D
So we are actually taking both v1 and v2 into consideration. Currently we think we need a dedicated API server for the federation layer, because we want to actually interact with Kubernetes native APIs, vanilla Kubernetes APIs, and also in that case we don't want the vanilla deployment controller to be working, because it's actually a different action after we receive the manifest, here in the federated control plane.
D
So Karmada here is just the name of our prototype. We have a cluster controller; it's a very similar concept to what we have today. And the policy controller is to actually interact with the policy. The binding is actually an API just to persist
D
some of the intermediate results in etcd, and the execution controller is one implementation of the propagation execution layer. This one is a push-based one; for the pull-based one, we can just have an agent in the member cluster to do that.
D
All right. So actually there's not much new from the architecture perspective. From the concept perspective, on the right are the vanilla Kubernetes APIs that users can just use kubectl to create. Our assumption is that maybe, in one organization, the admin can set up a resource pool that has many member clusters, and they have one federated control plane, and they can assign different namespaces to different business teams.
D
So they can create applications, and also either the admin or the team members can create policies. The policies can be kind of general, able to affect a set of Kubernetes resources, to propagate them in a similar way.
D
And we are also actually using the Work API to mirror the actual resources in the member cluster into the federated control plane. So, the execution space here:
D
It's actually a namespace, but it's just for us to store the member-cluster-specific things, so we give it another name. And in the real-world use case, the admin can actually give read access to the member cluster API to all the business teams, so the business teams can check how many member clusters they can use, and also maybe see the resource capacities, and customize their application allocation from the federation level.
D
So this one is actually the API workflow. Either the user can create the propagation policy first, or they can create the Kubernetes resources first. The propagation policy controller will create the propagation binding; it's kind of an object for internal processing. The scheduler will fill in the clusters, so actually assigning the resources to clusters, and also the binding controller will create the Work accordingly.
D
So it's actually expanding the policy according to the Kubernetes resources that it matches, and then it creates the binding; and in the propagation binding there is a set of assigned member clusters, so the binding controller will expand according to that.
D
So the propagation Work is actually stored per member cluster, so either the execution controller or the agent in the member cluster can take that and create the resources in the member clusters, and the status collection path is kind of symmetric, from right to left.
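To make the workflow concrete, the intermediate objects might look roughly like the sketch below; the kinds (PropagationBinding, Work) are the names used in the talk, while the exact fields and API groups are illustrative assumptions:

    # produced by the policy controller, then filled in by the scheduler
    kind: PropagationBinding               # group/version omitted; fields illustrative
    metadata:
      name: nginx-binding
    spec:
      resource:
        apiVersion: apps/v1
        kind: Deployment
        name: nginx
      clusters:                            # filled in by the scheduler
      - name: cluster-a
      - name: cluster-b
    ---
    # produced by the binding controller, one per target cluster,
    # stored in that cluster's "execution space" namespace
    kind: Work
    metadata:
      name: nginx-work
      namespace: execution-space-cluster-a
    spec:
      workload:
        manifests:
        - apiVersion: apps/v1
          kind: Deployment
          # ... full Deployment manifest to be applied in cluster-a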
D
And for the application creation, we're actually expecting people to use the vanilla Kubernetes API. So here I'm just showing an example of a propagation policy. In this pink box, it's a resource selector, able to match a set of API objects.
D
So the user can either just fill in the API version and the kind, to match all resources of that kind, or they can enumerate some of the names, or use a label selector or field selector to match a subset of the resources they want to spread across the clusters.
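A sketch of the resource-selector part of such a propagation policy, under the assumption that the field names follow what is described here (they are illustrative, not taken from the slide):

    kind: PropagationPolicy                # as presented; group/version omitted
    metadata:
      name: nginx-propagation
    spec:
      resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment                   # match every Deployment of this kind, or...
        names: [nginx]                     # ...enumerate specific names
        labelSelector:
          matchLabels:
            app: nginx                     # ...or narrow by labels / fields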
D
You also see the association here; it's actually an advanced feature, like whether to take all the matching resources as one conceptual application. Because in the real-world use case, people actually may build an application concept, or a service concept, in their system.
D
But in our prototype we can't assume what kind of higher-level abstraction they define, so we are still providing the Kubernetes primitives. The association here is to kind of reflect the higher-level abstraction: take all the matching resources together as one kind of bundle, and schedule them to the same set of member clusters. And the yellow box here is actually more about the scheduling stuff.
D
So cluster affinity is used, it's kind of very similar to node affinity on pods, so users can just indicate what kind of cluster they want to schedule to. And the cluster toleration is also kind of similar to the pod toleration.
D
We can have taints on the clusters, to pick some of the clusters for special use. And the spread constraints are for the case where people want to spread their application across different domains; so region, availability zone, and also maybe provider are kind of obvious failure domains, and also some people may think that Kubernetes clusters are also a failure domain. They can define them either by label or by field selector, and they can restrict the numbers.
D
The numbers can either exactly match a number, or they can have a kind of minimum and maximum.
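Again purely as an illustration of the placement fields just described (cluster affinity, cluster tolerations, and spread constraints with a minimum and maximum), with assumed field names:

    kind: PropagationPolicy                # continuation of the earlier sketch
    metadata:
      name: nginx-propagation
    spec:
      placement:
        clusterAffinity:
          labelSelector:
            matchLabels:
              environment: production      # only clusters carrying this label
        clusterTolerations:
        - key: dedicated                   # tolerate a taint set on special-purpose clusters
          operator: Exists
        spreadConstraints:
        - spreadByField: region            # spread across regions...
          minGroups: 2                     # ...into at least 2 and at most 3 of them
          maxGroups: 3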
D
Oh, there's a question: do you have to create a deployment in the management cluster before it can be propagated?
D
D
So users still need to, like, submit the deployment YAML. But the idea here is actually that maybe the user can create a general policy first, so they don't need to set up the propagation policy every time, and they can just directly create deployments.
G
Hi, so I'm
H
the one who asked that question. So my follow-up question to that is: if I have a very large number of clusters I'm managing, and lots of teams, would the management cluster not get overwhelmed with lots of deployments? Like, the federated types don't actually create pods and deployments on the management cluster, so they're very lightweight, because they're essentially just an entry in etcd.
H
So I guess that's my question there. It's like, at scale, how would that work?
H
So if I'm creating the actual deployment in the management cluster, let's say I have
H
40 clusters and 100 teams, right, all using that management cluster. That means I'd have to create essentially a clone of the deployment for all 40. So you know, I could have many thousands of deployments in that one management cluster, so the overhead is greater than with the federated types. Does that make sense?
D
Great, you mean, like, the objects stored in etcd, right?
D
So the pods will not be created; we are actually just using the Kubernetes API to receive a vanilla Kubernetes manifest, but we are using the propagation policy API to specify how to spread the deployment. So the behavior is still kind of similar to what we have in v2 today: we just spread the deployment object to the member clusters.
H
D
So creating a deployment in the management cluster, or in the federated layer, actually just puts an entry there; the actual instances, the pods of the deployment, are started in the member clusters.
C
So I understood the architecture to be that the management cluster is, well, maybe cluster isn't quite the right word for it. I imagined it as, like, an API server, the normal kube-apiserver, running an extension API server because it supports CRDs, right, running in a pod, and the normal controller manager not running, and instead only the K-armada, or Karmada,
C
I didn't catch how you like to pronounce the name, but only the controllers for the scheduling and filling out defaults and computing Work would be running in the thing called the management cluster, if that makes sense. So maybe cluster is not the right word, because it's not a normal cluster; it's an API endpoint, and it has some different controllers running. But that's how I understood it. Is that correct, Kevin?
D
Right, so yeah, actually "the management cluster" here is not accurate. Here it is actually just, it's kind of
D
an API endpoint is more accurate. So yeah, here it is just the API endpoint, and we have the Karmada controllers to interact with the APIs.
D
So it's like this: the user, like you, can set the kubeconfig to point to this API endpoint and create a vanilla deployment object here, and also create the, oh sorry.
D
Create the vanilla Kubernetes deployment here, and also create the propagation policy here, and then the policy controller, the binding controller and the execution controller will spread it, according to the policy definitions and the resource definitions, to have the resources in the member clusters. Gotcha.
G
H
Understood. My original question is: let's say I have 40 member clusters and a single cluster that has Karmada installed, right, and I have spread evenly over those member clusters a thousand deployments per cluster, and they're separate, independent, right, for some reason. That would mean I'd have forty thousand deployments in the management cluster in your diagram. Is that a correct understanding?
D
Yes, so I think actually it will need more storage to store the objects, that's true. Because even if, for example, we have one federated deployment:
D
So though we just use the vanilla deployment defined here, it's logically a federated deployment; but if, for example, I want to create a federated deployment going to all the member clusters, we will also create the same number of Work objects, one for each deployment in each member cluster, so we do store more objects.
H
D
H
And somebody just messaged me in chat and said that the implementation for the deployment isn't, like, it's not going to get scheduled in the management cluster. So that's correct, right, which is why it's lightweight on the management cluster.
I
Just to be clear what I was saying in the chat, if I understand correctly, please correct me if I'm wrong: you have this management cluster, which is just a Kubernetes API server that happens to have all of the same types defined as Kubernetes, the same group, version, kind, the same schema, but they're not implemented the same as Kubernetes, right; they're placeholders for the member clusters. Is that right?
D
Right, right. Actually the idea is kind of the federation v1 idea: the federation control plane will have its own API server. But yeah.
I
E
C
Just responding to a comment about potential benefits of this approach: the thesis that I am hearing is that the benefit of using an API server with a different implementation behind the types is that it's more compatible with existing tooling and existing artifacts, like Helm charts or, you know, your favorite package format here, and does not have the translation or transformation need that was present in KubeFed v2.
D
H
H
D
It's easy to support CRDs here, because we can call the APIs, the discovery API, to check how many, and what kind of, APIs are enabled in this API server.
I
I think you could achieve that either by just running the Kubernetes API server with all the built-in types but not the controller manager, which would give you the defaulting and validation behavior, or you could literally clone the schema out into CRDs. From a type-system point of view, everything that you want to express about a Deployment is expressible through a CRD, and then you just have to be constantly syncing against whatever upstream represents in each version.
H
C
So I understood that the current implementation is that, Kevin, you are literally running the kube-apiserver binary, right?
C
C
Types and all: it is, it is the same API server that you would find in a normal Kubernetes deployment, so the built-in types are going to look the same; and I understood also that it has the extension API server built into it, so that you can register CRDs.
C
Well, in the sense, in the sense that it sort of depends on what your definition of sameness might be. Like, definitely it seems that a goal is that the API endpoint that a user is programming against with a client looks exactly like a normal Kubernetes cluster; that's a goal.
C
I guess the difference is that, instead of, you know, instead of creating a deployment, or instead of creating replica sets and then pods: when you create a deployment, since the normal controller manager isn't running, it goes through, like, the built-in admission stuff, like validation and defaults, and then whatever admission controllers are registered, but then it just sits there, right? So that's my understanding. David, that's what we've been chatting about; does that make sense?
E
Yeah, I'm still trying to get a hold of this; again, I'm new to the SIG. So, you know, it comes down to: if I do a kubectl against this propagation server and I say, apply a deployment, and then I say get the deployment, you know, is it going to show me pods listed? Is it going to have a different status output? Because now it's not, necessarily, pods aren't being created; they're not even being thought of. It's almost a promise to make a deployment at some later time.
E
So it seems like this weird mix of trying to maintain the same API but at the same point having very different semantics behind it, and as an operator that kind of looks confusing to me. Versus, if I'm very explicit about the types, like, this is a federated deployment versus a deployment, then I'm very conscious of and explicitly creating something. And so when I think of this, naturally, from an operator point of view, it seems to me that, oh, I really want to say.
E
E
Oh yeah, it's just, something feels unnatural to me, and I'm still trying to put my finger on it. And again, you might have had feedback from the previous version, if you did kind of that CRD, the very explicit federated deployment type, that people said, no, no, I don't like that. I may have missed that.
C
Yeah, I think that's something that Kevin addressed earlier, and I will just say, you know, from my own experience, having been involved with KubeFed v2: we heard a lot at Red Hat from users that said that the extra transformation into, like, a federated resource was a big barrier, because it required transformation of, you know, any Helm chart, any collection of resources that you wanted to use, and I think having you address that was something that you
I
They want it a little bit different in cluster A than they do in cluster B, and as we've seen with Helm, in the limit every field of every sub-resource, or every substructure, becomes parameterizable, and then the entire thing becomes a parameter. And the Kubernetes API server doesn't lend itself well to in-place substitutions, because it's not stored as text, right; an int field is stored as an int. This is where you see, like, why Helm is so wildly popular: because people can template it. And other solutions, like Kustomize, take the templates and say: templates, you know, specialization is cool and all, but separate the specialization from the base resource, and that's easy to do because it's files, which are just text. So I'm curious to see how those problems will be dealt with in this model. I don't think it's per se wrong, I just think that the problems
D
D
So, actually, for the specialization to the different member clusters: the idea is that we have the separate override API to indicate that. The reason why we didn't put the override fields here is that we think the override requirements may not quite align with the placement; the overrides are more likely relevant to the target cluster
D
that the resources go into. So maybe they can have one special override per cluster, but the placement can be just a more general rule to restrict the resources to a set of clusters.
I
D
So, actually, people will need to use the override API to do that. The deployment is kind of special, because it kind of needs scheduling: a different number of replicas takes different resources, right, so we need a scheduling mechanism. But actually, if, for example, people want a different container image in cluster A and cluster B, people can just set an override policy, or an override rule, to say that, for any deployment going into cluster A,
D
if the image field matches a specific pattern, I will just override the image field; and also we can have a similar rule for cluster B.
I
D
D
Maybe we need a kind of plugin mechanism for the matching and also the overriding, because, like, if people want to match a certain prefix of the image, it's kind of very different from matching just the labels, and also the overriding may be a little bit different. But the overall idea is that we want to make it able to match and override all the fields.
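As a rough illustration of the kind of per-cluster override rule being described (swapping an image registry prefix for one target cluster), with assumed, illustrative field names:

    kind: OverridePolicy                    # illustrative; shape not taken from the slide
    metadata:
      name: nginx-overrides
    spec:
      resourceSelectors:
      - apiVersion: apps/v1
        kind: Deployment
        name: nginx
      targetCluster:
        clusterNames: [cluster-a]           # this rule only applies to cluster-a
      overriders:
        imagePrefix:                        # hypothetical matcher/overrider plugin
          match: "registry.example.com/"               # if the image starts with this prefix...
          replaceWith: "registry-a.example.com/"       # ...rewrite it for cluster-a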
I
Thanks, this is really interesting, but it's time for me to drop. The one thing I'd like to do as a SIG, I think something we should perhaps do, is to take the general model of this, and KubeFed v1 and v2, and GitOps, and up-level the design into maybe, like, a picture that shows the abstract concepts, and then we could map these sorts of proposals onto the abstract concepts, right, as I mentioned in chat.
C
Yeah, I've had similar thoughts, Tim, in terms of, like, also just kind of highlighting what the key sliders that people tend to want are, in the sense of, like, what are the trade-offs present and what you present to users. Because there's clearly, like, we can search this SIG meeting for evidence that there are groups of people that want significantly different things, for reasons that make sense to each of them.
C
C
So look for an updated invite. Kevin, one thought that I had while you were presenting this is that I think, if you could show us a demo, it would help some folks get oriented in terms of exactly, like, what does the management cluster, if you can hear my air quotes, what does that actually look like?
C
What's its relation to the cluster hosting Karmada, and just more fully understand exactly how something, exactly how maybe something they're working on would make sense if they were using Karmada. So is maybe sometime in the next week or two good to do a demo?
C
Okay, great. Well, thanks a lot, everybody, for, you know, attending today, and thanks to Kevin and Laura for presenting, and we'll see you next week.