From YouTube: Kubernetes SIG Multicluster 2020 Dec 8
B: All right, why don't we get started. This will be the Tuesday, December 8th, 2020 Kubernetes SIG Multicluster meeting. Hector, you're first on the agenda, with the KubeFed beta roadmap.
A: Yeah, so some of you, or some people in the community, have asked what the state of KubeFed is, when it will become beta, and what needs to be done. The idea behind adding this item to the agenda was more or less to share our thoughts about the roadmap to beta. We prepared a presentation, but we will be super brief. Yes, just one second, I will share the screen. Well, maybe you have to enable something so that I can share the screen, I don't know.
A: No, I mean, no, yeah. Otherwise I can simply share the link instead.
B: You know, something changed about this recently. I'm going to try to log back in, and I will be right back. I'm unsure of what is going on, but the same kind of thing happened last week.
A: Okay, let me try again. Yeah, it says the host disabled participant screen sharing.
B: Yeah, I guess I need to take an action item to understand why this sharing appears to have stopped working.
A: The doc works for now, I hope. I hope you can access it; it just has the same content as the slides, so I will mainly go through it. The idea is that we have marked two features as deprecated. We think what they provide can be achieved with current third-party tools, such as multi-cluster Istio, for all of the features that have been marked as deprecated.
A: The features are alpha and are kind of increasing the maintenance burden, and we consider that there are other, better possibilities than using KubeFed for this, since KubeFed's purpose was basically to federate resources across clusters.
A: So, the second point about becoming beta: after these two features are completely removed, we will mark KubeFed as beta. Just to remind you, the push reconciler feature has already been marked as beta, so we will mark what is in there, which will basically be this feature plus the scheduling preference feature.
A: After that, the intention is to disable scheduling preference in order to then proceed with the next steps, which mainly means splitting the current repository into two modules, two different Go modules. KubeFed will remain as one repository but will be composed of multiple Go modules: one Go module will be what we call KubeFed core, which will focus only on the federation of resources, and the other module will be KubeFed scheduling, where the scheduling preference features can live and evolve independently of the core.

Once these steps are achieved, we will consider marking KubeFed as beta. Probably new features might land in between, but these three points in the slides are the main ones.
A: Awesome, so yeah. I'll repeat myself again: these three steps are the ones we consider necessary in order to be stable, in order to have the core of the KubeFed functionality, or at least the functionality that is enabled by default, the functionality that pursues the main objective, which is to federate resources across clusters.
B: So let me make sure that I heard this, and see if I can summarize. It sounds like generating new APIs, joining clusters to KubeFed, and distributing resources will advance to beta; basically the rest of the features will stay alpha; and the federated ingress and service discovery ones will be removed after deprecation. Right?
C: Okay, all right. So last time I think we actually had some concepts that were maybe not quite clear, so I would like to go through that first, maybe just to give the very high-level view of what we are trying to do, and then I can quickly...
B: Sorry, Kevin, we're seeing a Slack window. I'm not sure if you have multiple screens, but I'm seeing a Slack window right now.

C: Sorry, maybe I chose the wrong window.
C: Okay, so I first want to clarify what we are trying to do, and I have also refined the architecture and the concept graph. So maybe we can first go through that, and then, after a quick demo, we can take a look at the API we are designing now, based on Federation v1 and v2.
C: We want to provide a kind of turnkey solution for the end user, so they can easily use what they are already familiar with: the Kubernetes-native API definition for the federated application. That concept is called the template. We also need a standalone policy API, basically the placement API, and in the meanwhile the mapping to resources is done with resource selectors.
C: Then it's much easier for the user, who doesn't have to repeatedly fill in the spreading preference fields every time they create a federated application. And the third thing is the override API. For the override, we think we also need to provide a standalone API, because it's actually more of a cluster-relevant thing, and we can set it up from a cluster-relevant perspective. The other idea behind making it standalone is that we want to share the override rules.
C: That means that when people have multiple federated resources going to the same cluster, they may share some of the override rules. All right, I'm going to skip the landscape slide because we already covered it last week, but for the architecture I do want to repeat it. So: Karmada is the name of our prototype.
C: [The control plane makes] the scheduling decision for propagating the resources to the different member clusters; the other components are not particularly special. On the right are the concepts: the user inputs a resource template. I think this graph makes it fairly clear how these concepts fit together, and the resources, once scheduled and expanded with the override fields, are stored in a special namespace called the execution space.
C: The next slide has more detail about the API workflow, just to give an idea of how the components and the APIs interact with each other. All right, so I think maybe we can first go through the demo.
C: Okay, so I already have a setup, and we have Karmada running already.
C: It is fetching the Kubernetes version, and later we will also fetch all the APIs enabled in this cluster. On this cluster we don't have many labels or other marks, but that's enough for doing the demo. Also, we currently put the token for accessing the cluster into a separate secret.
C: With this, it's much easier to give just read access to the application teams in an organization, so they can query how many clusters are available in the system, and they can also customize the propagation preference. All right, so here I have added two YAMLs. One is the deployment.
C: The other is also very simple: we have the resource selector that will match that deployment, and the placement rule says that I want to propagate this deployment to member cluster one.
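
[A minimal sketch of such a policy. The API group and field names follow the later open-sourced Karmada project (policy.karmada.io/v1alpha1); the prototype demoed here may have differed, and the resource names are illustrative.]

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation   # hypothetical name
spec:
  resourceSelectors:
    # Match the deployment submitted alongside this policy.
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      # Propagate only to member cluster one.
      clusterNames:
        - member1
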
C: All right. And at the top I have a watch running, so we can watch the result.
C: All right. So what actually happens behind this, just as we showed in the graph, is that the propagation policy controller gets the notification that there's a policy, and it matches the deployment to the propagation policy.
B: So, Kevin, is it okay if I ask a question here?
B: Yes, okay. So the question is, and I don't think that we explicitly covered this, but I was wondering: is there any way that you provide status back through the member, or sorry, the management cluster's control plane?
C: Yes, yes; we just have not implemented that yet. That's why we're switching over to the member cluster here.
C: So actually, the first step is that we will fetch the status and store it in the work API, and then we will simply aggregate all the status structs into the binding API. We are also thinking about more advanced kinds of aggregation and abstraction of the status, but no progress on that yet. Okay, yeah, I can share the struct definition if you'd like to see it.
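
[A sketch of the two-step status flow just described, again borrowing field names from the later open-sourced Karmada APIs; the binding API in this prototype was the propagation binding, which Karmada later calls ResourceBinding. Illustrative only.]

# Step 1: raw status fetched from the member cluster is stored on the
# per-cluster Work object in that cluster's execution namespace.
apiVersion: work.karmada.io/v1alpha1
kind: Work
metadata:
  namespace: karmada-es-member1
  name: nginx
status:
  manifestStatuses:
    - identifier:
        group: apps
        version: v1
        kind: Deployment
        namespace: default
        name: nginx
      status:
        readyReplicas: 2
---
# Step 2: the statuses from all target clusters are aggregated onto
# the binding object.
apiVersion: work.karmada.io/v1alpha2
kind: ResourceBinding
status:
  aggregatedStatus:
    - clusterName: member1
      applied: true
      status:
        readyReplicas: 2
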
C: Yeah, so maybe this one is much clearer and helps understanding. So, Karmada has its own API server to store the objects; even though we use the Kubernetes API versions and kinds, it actually has nothing to do with a Kubernetes cluster.
C: We have the API server, the Karmada controller manager, and also the scheduler, running separately. They are not aware of whether they are running inside a Kubernetes cluster or just running on a machine.
A: Okay, let's see. So then my question is: where is the validation of these objects done? Whether the deployment is a valid template, for instance; or, if you have a custom resource, how will you validate that custom resource?
C: You mean the validation? Yeah. So for validation, take for example the Kubernetes core API definitions: they already have the validation implementation, right? And this API server is actually running the Kubernetes API server binary, so that part, I think, is easy. It's actually a full-featured Kubernetes API server, so we can also install CRDs there, and people can customize their validation against the custom resources.
A: I see, I see, all right. And the last question is based on the arrows in the graph: I assume it's a push model, right? There's no pull model; you don't deploy anything on the clusters, right?
C: Well, actually, we have a kind of abstraction layer here in the landscape, but it's fine. We want to provide a layer that lets users choose either a push-based model or a pull-based model. The difference is whether the work is done by the control plane or by the agent in the member clusters.
C: Okay, so I think we can move on to a little more detail about the APIs. For the federated resource, the template, I'm taking the deployment as an example, but actually all the API resource types are similar.

The major idea of what we're doing now is that we want to provide the best compatibility for people who are using a single Kubernetes cluster today, so the federated resource definition is actually exactly the same as the Kubernetes API. And for the propagation policy, maybe let me pull it up.
C: All right, so the propagation policy API is the standalone placement API. We basically have the selector field; this one is for matching the resources you want to propagate, or that the policy should apply to. And the propagate dependency option is kind of a very advanced feature for automatically propagating, for example, a ConfigMap that is used by a deployment.
C: The placement is actually the scheduling preference: which clusters the user wants these resources to be scheduled to. We basically have the cluster affinity, which is fairly self-explanatory.
C: Users are able to enumerate the cluster names, or just match the clusters with a label selector. We also borrow the concept of taints and tolerations into the federation layer, so a user can mark some clusters with taints for special usage, and only the resources spread by a propagation policy that has the corresponding toleration can go there.

The spread constraints define how to spread the resources: the system can spread the resources by label or by field. The idea is to dynamically group the clusters into different groups; for example, we can group the clusters by zone, by region, by cloud provider, or by other sorts of properties, and then we can also restrict the number of groups.
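
[A sketch of the placement fields just described, with cluster affinity by name or label, cluster tolerations, and spread constraints, using the later open-sourced Karmada field names; labels and values are hypothetical.]

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: spread-example
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      # Either enumerate clusters by name...
      clusterNames:
        - member1
        - member2
      # ...or match them with a label selector.
      labelSelector:
        matchLabels:
          environment: production
    clusterTolerations:
      # Tolerate a taint that marks clusters reserved for special usage.
      - key: dedicated
        operator: Equal
        value: team-a
        effect: NoSchedule
    spreadConstraints:
      # Dynamically group clusters by region and spread the resources
      # across at least one and at most two of those groups.
      - spreadByField: region
        minGroups: 1
        maxGroups: 2
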
C: The override API is new; we are still working on designing it. The basic idea is that we think overrides are more cluster-relevant, because we have, for example, clusters in different regions or different environments: some clusters are on-prem and some clusters are in the public cloud.
C: We may also have a different container image registry in each environment, so we can use the override policy to change the image prefix, to save download bandwidth and latency. The same goes for, for example, the storage class, because in different environments we may have different storage backends set up. So the idea is that resources going into a certain set of clusters may share a group of common override rules.
C: That's why we make it a standalone API. It also looks a bit similar to the policy API, because we make all of these APIs standalone: we have the resource selector here to restrict what kinds of resource types this override policy will apply to, and also the target cluster selectors here, because here we are not actually propagating the resources.
C: These are just rules to find the clusters: a user can either enumerate them by cluster name or use a label selector to select a set of clusters. As for the overriders, for the initial release I think we will just keep it simple, so we named it the plain-text overrider; it's actually the same approach as what we have in KubeFed v2 today.
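
[A sketch of such an override policy, rewriting the image registry prefix for on-prem clusters. Field names follow the later open-sourced Karmada OverridePolicy; the registry and labels are hypothetical.]

apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: onprem-registry
spec:
  resourceSelectors:
    # Restrict which resource types this override applies to.
    - apiVersion: apps/v1
      kind: Deployment
  targetCluster:
    # Select the clusters to override, by name or by label selector.
    labelSelector:
      matchLabels:
        environment: on-prem
  overriders:
    plaintext:
      # Point the first container at the local registry.
      - path: /spec/template/spec/containers/0/image
        operator: replace
        value: registry.onprem.example.com/library/nginx:1.19
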
D: I'll just make it easier: if I deploy resources to a given cluster directly, specifically not through the KubeFed API server, so I go right to a cluster and deploy certain resources, are those resources then reflected or not? To get all the resources in the multiple clusters that I'm managing, do I actually have to do two queries, one through the Karmada API server, and then one to each of the other individual clusters, because the resources aren't collated all together?
C: [The resources in the member clusters] are created by the federation control plane, right? Yes. So actually, in the proposed landscape, we have another component here to solve this situation: basically, the unified API endpoint here.
C: Actually, you can just use an API gateway to do that. It basically routes whatever API request you make to a certain cluster, so you can use a kind of request context to switch between clusters: you can switch to the federated control plane, or you can also switch directly to a member cluster.
C: The benefit is that you don't need to collect everything into the federation layer, so there is less resource consumption in the federation layer, and in the meanwhile you can access all the clusters in your system with the same token, and you can also enforce the same set of role bindings and such, because you have the one endpoint.
D: With the unified API endpoint, if I associate my context to a cluster and say "get pods", for example, will it query all the pods on all the cluster members, essentially a distributed query? Or does it go off and launch gets to each specific member and then collate the responses? What is it actually talking to at that point?
C: At that point, as I said, it's actually just routing your request to a single cluster.
D: Okay. I mean, it also would be interesting, when I do that (we're using deployments as the example this time), if I say "get deployments", to understand which deployments are deployed through this multi-cluster control plane and are essentially federated deployments, versus ones that were deployed through non-federated means, right? Understanding that distinction will, I suspect, be a useful operator operation at some point.
D: And if I also go to a specific cluster member and say "get deployments", I suspect at that point, because I'm not going through the KubeFed API, I'm going to get a list of deployments and not be able to tell whether those were placed by the KubeFed distribution mechanism or created directly against that given cluster member...
D: The Karmada API, sorry. Okay, so the problem is... yeah, yeah, no worries, no worries, that's my fault. If I go through the Karmada API and, you know, push a deployment, apply a deployment, and then...
D: ...do a "get deployments", that deployment looks like every other deployment on that member, unless Karmada is also marking it somehow. And then, as an operator, if I just happen to be going against that member directly and I delete that deployment, which was deployed by the Karmada server, there's a reconcile issue. I mean, how does all of that get addressed?
C: Yeah, yeah, that's actually very basic functionality we need to provide. In any case: if, for example, a deployment is deleted directly from the member cluster but it was actually created by the Karmada control plane, either the execution controller or the agent in the member cluster will recreate the deployment.
C: If, for example, the deployment here was created by the Karmada control plane and you directly delete the deployment here, then in the push-based mode the execution controller of Karmada will recreate the deployment according to the Work object.
C: All right, so for the APIs, we just covered the overrides, and next is the member cluster API. For the member cluster API: the API endpoint field, for example, is actually used only in the push-based mode; if it's a pull-based mode, it means we have agents in the different member clusters.
C: The agents will register the member cluster with the control plane. So even a member cluster that is inside a private network or behind a firewall, with no public IP accessible from the control plane, can still be managed while serving its end users. And in the status, besides a general, very high-level summary of the cluster, we are also fetching the Kubernetes version and all the enabled API versions.
C: So later on, at the scheduling stage, we can check whether a candidate cluster has the target API version and kind enabled, and filter out the candidate before the actual resource goes to the member cluster. There are also the resource summary and node summary, a kind of summary of the cluster: whether it's healthy, how many nodes there are, and the usage of the whole cluster, so that later on we can schedule according to cluster usage.
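
[A sketch of the member cluster object just described: the API endpoint and access secret used in push mode, plus the reported version, API enablements, and node and resource summaries used at scheduling time. Field names follow the later open-sourced Karmada Cluster API; addresses and numbers are hypothetical.]

apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1
spec:
  # Push mode: the control plane dials the member's endpoint.
  # In pull mode an in-cluster agent registers the cluster instead,
  # so no reachable endpoint is required.
  syncMode: Push
  apiEndpoint: https://10.0.0.10:6443
  secretRef:
    namespace: karmada-cluster
    name: member1
status:
  kubernetesVersion: v1.19.4
  apiEnablements:
    # Used to filter out candidate clusters that lack the target
    # API version and kind before propagation.
    - groupVersion: apps/v1
      resources:
        - kind: Deployment
          name: deployments
  nodeSummary:
    totalNum: 3
    readyNum: 3
  resourceSummary:
    allocatable:
      cpu: "12"
      memory: 24Gi
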
C: So that's basically the API idea we have so far. In the graph you can also find another two: the propagation binding, which we actually think of as an API for the controllers to interact with each other, so I'm not introducing it here, and the propagation work. If you attend this meeting often, you should be familiar with that; we don't have anything special there. So that's the status we have so far.
B: When, and I know that you're just prototyping now, but when you install a CRD on the management cluster... I'm thinking about this since CRD-based APIs typically use at least one or two webhooks.
B: I guess this isn't quite a question, I'm thinking out loud, and I'm wondering if you can confirm: if I register a CRD in the management cluster, I probably need to run the webhook for it in the hosting cluster that hosts the management cluster. Is that right?
C: Yeah, so actually, in this case, the CRD is registered with the Karmada API server, and we actually need to run the webhook here, talking to the Karmada API server, to do the validation.
B: Right. So I wonder if we can just go up a level of detail, or down, as the case may be. Say that I've got my CRD, I install it on the Karmada API server, and it has a validating webhook.
B: So I create a validating admission webhook resource on the Karmada API server. That admission webhook resource has to point to... normally, I would expect it to point to a service. Like, if we forget about Karmada and we're just talking plain Kube, I'd expect that admission webhook resource to point to a service that's on the same cluster. So I guess my question is: when people register CRDs with the Karmada API server, since that API server, or rather the management cluster, isn't really a fully functional cluster, it seems to imply that you will typically host the webhook pods that serve the webhook requests in the cluster that is hosting the management cluster.
C: Well, actually, yes. And for example, not just the webhooks but also the operators, because the Karmada control plane is really just propagating the resources, so you need to have the operators in the member clusters.
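
[A sketch of the pattern under discussion. Because the management control plane has no nodes of its own, a webhook configuration registered with its API server points at an endpoint served by pods running in the hosting cluster, typically via a URL rather than a local service reference. All names here are hypothetical.]

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: widgets-validator
webhooks:
  - name: validate.widgets.example.com
    clientConfig:
      # Served by pods running in the cluster that hosts the
      # management control plane, not "in" the management cluster.
      url: https://widgets-webhook.karmada-system.svc:8443/validate
      caBundle: <base64-encoded CA bundle>
    rules:
      - apiGroups: ["example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["widgets"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
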
D: Or I deploy a CRD: I deploy it against the Karmada API server with policies such that it'll be deployed to each member where it'll be used, or where I expect it to be used, and hopefully those policies will also match when I create the instances of those CRDs, so that I don't try to create an instance on a cluster [that doesn't have the CRD].
C: For example, you have the YAML for running your operator: you can submit that YAML to the Karmada API server, and Karmada will help you set up the actual operator in the member clusters. Then, when you submit the custom resources to the Karmada API server, it can propagate them to the member clusters, and you have the full function of the CRD and the operator in the member clusters.
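
[A sketch of that workflow under the same naming assumptions: the CRD is a cluster-scoped resource, so it is propagated with a cluster-scoped policy, and the operator deployment and custom resources then follow with ordinary propagation policies. The CRD name is hypothetical.]

apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: widget-crd
spec:
  resourceSelectors:
    # Install the CRD on every member that will run the operator,
    # so instances created later do not land on a cluster without it.
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: widgets.example.com
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
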
E: All right, thank you so much. Thank you. Let me just see if I can share quickly here. This is just a really brief one: hey everybody, we're going to be having some conversations about things related to multi-cluster, maybe a bit broader than just where KubeFed has traditionally been.
E: A lot of the concepts... Kevin, I'd love to actually have you join us in this conversation as well. Basically, myself and other team members have been working in a public GitHub org around Open Cluster Management. We had built some technology that originated in IBM and IBM Research, transferred some of it into Red Hat, and have been working to open source it, and now we're looking to, you know, drive more awareness and understand more of the use cases.
E: There are some things that might be appropriate for things like SIG Policy, maybe some things for the continuous delivery or GitOps community as well, but we'll just invite everyone who's interested to come and participate in the calls. Let us know your thoughts on the use cases that we're discussing, and let us know what you think aligns to other parts of the community.
E: This is very much a capability that we're focused on: everything from how do we provision clusters, how do we attach the agent that does work distribution to clusters, how do we continuously deliver policies, how do we deliver applications, how do we drive observability. There are components that integrate with things like Prometheus and Thanos as well.
E: The reason it's a separate project is because we think, you know, it's hard to just digest all of that everywhere upstream at the same time. But there are definitely parts, like ManifestWork, which is our work API, that might be appropriate to consider bringing into SIG Multicluster. We're actively looking at integrating things like ClusterClaim into the way that we think about managing clusters, and integrating things like the multi-cluster Service definition as well.
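
[For reference, a minimal ManifestWork as in the Open Cluster Management work API mentioned above: the hub keeps one namespace per managed cluster, and the agent applies whatever manifests appear there. Names are illustrative.]

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  # Namespace on the hub dedicated to the target managed cluster.
  namespace: cluster1
  name: example-work
spec:
  workload:
    manifests:
      # Arbitrary Kubernetes resources to apply on the managed cluster.
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example-config
          namespace: default
        data:
          greeting: hello
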
E: So we just invite everyone. The links are in the agenda, and in particular the meeting details are public here.
E: The agenda doc is also public and publicly editable, so feel free to begin adding topics. We plan to do these every two weeks; we'll probably end up missing one at the end of December because of the US holiday, but after that, about every two weeks. And that's it: if you'd like to come join in and participate in the conversation, we'd love to have you. Any comments or questions?
E: We use a concept called a placement rule; let me pull up this repo here just for a sec. The placement rule is actually used across both the way that we deliver applications and the way that we deliver policies. Placement rules basically work around the concept of label matching or match expressions, very similar to, you know, the way services include endpoints and pods and things like that. So clusters (ManagedCluster is an API kind) have a set of labels.
E: A placement rule considers the set of managed clusters that a user has access to and comes up with a cluster decision, a concept consistent with that notion of a propagation policy, and then the placement rule can get used by different kinds of controllers. It's used by a policy controller that delivers and syncs policies and does active enforcement or audit, and it's consumed by applications.
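
[A minimal PlacementRule along the lines just described: label matching over managed clusters, producing a cluster decision that the application and policy controllers consume. Field names follow the Open Cluster Management placement rule API; the labels are illustrative.]

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: dev-clusters
  namespace: default
spec:
  # Select managed clusters by label, much like a service selects pods.
  clusterSelector:
    matchLabels:
      environment: dev
status:
  decisions:
    # The computed decision, consumed by app and policy controllers.
    - clusterName: cluster1
      clusterNamespace: cluster1
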
E: You asked the question about whether you can view all of the pods across all clusters. Another component that we're working on getting into the open source is our search componentry, which actually does index all of the Kubernetes things from all clusters and puts them in a database built with graph edges, so you can find relationships.
E: So there are things like that that correspond pretty closely: member cluster and managed cluster are very close; propagation policy and placement rule have a little divergence in some of the ways that we think about them. Everything in Open Cluster Management is Kube CRDs, with the exception of the way that we collect search index information.
E: So everything ties naturally to a Kube API server on a standard Kube cluster. We primarily focus, obviously, on running it on OpenShift (right, I work for Red Hat), but the goal is to make it more general: we can still manage any managed Kube service, so EKS, AKS, GKE, et cetera are all manage-to targets today. So, kind of a whirlwind answer.

We'll kind of start the first call on Thursday off with an overview, just to provide a lay of the land, and then the intent is to do a deep dive on the registration API that is in Open Cluster Management: how a cluster can attach to a hub, and how work distribution, which is decoupled from registration, can be put together with it to both register a cluster and address work. But you can take out the registration protocol that's in Open Cluster Management and still use the work controller as-is, just with a different registration protocol.
D: Thank you. The question for the SIG here is this: are we looking towards a reconciliation between these two projects, or a thousand-flowers type of approach?
E: Yeah, why we have a separate project rather than just going through SIG Multicluster is a great question, and in particular there's a community repo where we, you know, try to talk about that. Basically, we see this space as also needing (I think Kevin used the term "turnkey solution"; it's a great phrase) a way to pull together different projects and make sure they work effectively together.
E: It is 100%, 100%, our intent that either part or all of this content ultimately joins a foundation somewhere, right, CNCF or Kubernetes. But we've kind of found that there are certain things that align really well to slices of the community: for example, cluster registration and work distribution might be appropriate to consider in SIG Multicluster, and policy and policy distribution might be appropriate for SIG Policy.