From YouTube: Kubernetes SIG Multicluster 28 June 2022
A
I think we have a relatively short agenda today, and Laura, who added it, is not able to be here. But the discussion was relevant, so I can probably kick it off, continuing a discussion from Slack with a couple of questions around the MCS API.
A
And it's probably a good time to revisit these decisions, because way back when, we had a lot of discussion around these things.
A
Oh yeah, thanks for linking to the discussion in the chat. Also, I'm on a new computer using the web client, so forgive me if anything is slow or broken. I think Laura answered in chat: the reason for not including the target port is that service imports are really about clients, and the thinking was that the client really doesn't need to be aware of the target port, because it's probably not relevant. If it's in another cluster, it may not even be reachable.
A
That was the thinking. If anyone has thoughts, concerns, or suggestions around that, I don't think it's too late to have that discussion. It's certainly easier to add fields than take them away.
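For context, the distinction being discussed can be sketched roughly in Go. This is a simplified illustration, not the canonical types from the mcs-api repository; the type and field names here are assumptions for the sake of the example:

```go
package main

import "fmt"

// CorePort mirrors the rough shape of a core v1 ServicePort, which pairs
// the client-facing port with the backend targetPort in the same cluster.
type CorePort struct {
	Name       string
	Protocol   string
	Port       int32
	TargetPort int32 // resolvable only inside the owning cluster
}

// ImportPort mirrors the rough shape of a port on a ServiceImport
// (KEP-1645): clients consuming an import only see the port they dial.
type ImportPort struct {
	Name        string
	Protocol    string
	AppProtocol string
	Port        int32
	// no TargetPort: the backend port in the exporting cluster may not
	// even be reachable from the importing cluster
}

func main() {
	p := ImportPort{Name: "http", Protocol: "TCP", Port: 80}
	// A cross-cluster client needs nothing beyond this to connect.
	fmt.Printf("dial %s/%d\n", p.Protocol, p.Port)
}
```

The argument above is visible in the shapes: a cross-cluster client only ever needs the port it dials, while a target port is a detail of the exporting cluster's backends.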
C
Yeah, I think Chen probably has a question. Yeah, go ahead. Hi.
D
So this is Chen speaking. The question was actually proposed by my teammate Gene, who couldn't be here today. We were using the Multi-Cluster Services API, KEP-1645, to build our own multi-cluster implementation, and in our specific case we were leveraging the native Kubernetes Service API to actually implement the service imports. When we were developing our solution, we found that we needed it.
D
So we do understand that if people were, say, using an Envoy proxy rather than the native Kubernetes Service API, then the target port field may not be needed. But we do feel that field is useful, because there are so many different possibilities, and for those who do use the native Kubernetes Service API, this target port field, added as an option, can really help the solution.
A
So that makes sense, what you're saying. The concern with putting it in the ServiceImport API is that, in my experience at least, anything that gets put in an API, someone's going to rely on it. And so, if we put it in the API, then every implementation will basically need to set it, or we'll get into a situation where API behavior isn't consistent.
A
Or it over-generalizes. As you say, if you're using an Envoy proxy, or even, you know, it would be a valid implementation of the MCS API to have some load balancer actually be what the cluster set IP directs to, and there may be no target port concept at all, or there might even be multiple target ports.
A
And so that's where the API is kind of the minimum set of things that we were hoping would be set on every cluster, not meant to be constraints on what you can do if you need additional information.
C
I think this is related to the second question. Your concern is valid when it's in the spec. Would it still be the same if it's not in the spec, but in the status?
A
So I'm not sure that the distinction between whether the information is in status or spec is a huge difference. If somebody's consuming it, and we say that target port should be set, then I think what we're going to see is that clients start assuming that the target port is set. And if clients are assuming the target port is set, then that means every implementation is going to have to set it to something, and it may not always make sense.
A
If they start seeing it set, then it has to be... I guess, no, our API spec is very clear. You're right, it's optional, it doesn't have to be set, but I don't think that stops people from assuming that it is set, like a service port, like we need a port. I guess I'm open to disagreements here. If we think we should support it, you know, I'm not against it. I just wanted to explain the original reasoning here. Yeah, thank you very much.
C
Yeah, I think again, the bigger decision is the spec versus status. What do you think is the general rule between them?
A
Spec versus status. So when I originally proposed the MCS API, like two years ago, or however long it's been now, I was originally thinking status, because I agree with the comments; it feels a lot like a status. But where I think the discussion took us was a couple of things. So, one: status is usually meant for a different actor.
A
It's generally used to allow some contributor to a resource, other than the owner, to set information on that resource, right? So the resource owner sets the spec and some controller updates the status. Service imports are a little weird.
A
So that was kind of one thing: maybe it's not hugely necessary there. Normally, the controller sets the status of an object that somebody else created.
A
With a ServiceImport, the implementation's controller creates the ServiceImport and then sets the relevant information on it, based on the ServiceExports, which are the resources that the user creates in other clusters. But if I'm honest, I think what it really drilled down to was the fact that prior to 1.24 (and I think it's actually still alpha) there was no way to set status with kubectl, and so it made working with service imports, and prototyping implementations, just awful.
A
Part of it is that, even now, on most platforms, you would have to write some custom code to build a ServiceImport. There's no way to just create a ServiceImport and verify that things are working, because you would need to have something else update the status. Now that kubectl has, I think, still-alpha support for subresources, that's starting to improve, but it's going to take a long time before it's fully rolled out.
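The clunkiness being described comes from status being a subresource: a plain create drops whatever status the caller supplied, and a second write against the status subresource is required. A toy model of that two-step dance (a made-up in-memory server, not real client-go or a real API server):

```go
package main

import "fmt"

// ServiceImport is a stand-in, not the real mcs-api type.
type ServiceImport struct {
	Name   string
	Spec   []int32 // stand-in for spec.ports
	Status []int32 // stand-in for hypothetical status-only ports
}

type fakeServer struct{ stored map[string]*ServiceImport }

// Create mimics the main resource endpoint: status is silently cleared,
// which is why "kubectl create" alone can't populate status-only fields.
func (s *fakeServer) Create(obj ServiceImport) {
	obj.Status = nil
	s.stored[obj.Name] = &obj
}

// UpdateStatus mimics the /status subresource: only status is written,
// and a separate caller (normally a controller) has to invoke it.
func (s *fakeServer) UpdateStatus(name string, status []int32) {
	s.stored[name].Status = status
}

func main() {
	srv := &fakeServer{stored: map[string]*ServiceImport{}}
	srv.Create(ServiceImport{Name: "foo", Spec: []int32{80}, Status: []int32{8080}})
	fmt.Println("after create:", srv.stored["foo"].Status) // status was dropped
	srv.UpdateStatus("foo", []int32{8080})
	fmt.Println("after update:", srv.stored["foo"].Status)
}
```

The second call is exactly the extra machinery the discussion above calls clunky: you can't prototype by just creating the object.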
C
I see. I think in our implementation, the ServiceImport is created by another controller. The ServiceImport controller, just like a normal controller, reconciles the ServiceImport and then updates its status. So in our case it's just a normal flow, because we have a second controller that is looking at the ServiceExport and then creates a ServiceImport. In your mind, were you thinking it's the same controller that is both looking at the ServiceExport and creating the ServiceImport?
A
Whether it's the same actual controller or not, I'm not sure that is a distinction that matters so much to the end user. To the end user, your implementation is fully owning the ServiceImport; whether it's one controller or two controllers, it's still magic behind the scenes. Having multiple controllers makes sense; on GKE I can tell you we have something similar, where there are multiple logical processes involved, but from the user's standpoint it looks the same.
C
Yes. I mentioned this because I thought you meant having one controller that both creates the resource and writes the status, which is not common. That is not what our pattern is. Our pattern is a normal controller: read the spec, produce the status. And we have another controller creating the resource, which is different, but the chain is the normal chain.
A
Yeah, that makes sense, although I think even then, if I look at other use cases for spec and status (and maybe I'm wrong, maybe this is too much of a generalization), generally it seems that spec is set by one group, or a logical owner, and status is set by some other logical owner.
A
If you had a controller create a resource, and another controller update the status, normally the API is set up so that those could come from completely different teams or different companies, or one could be a human and one could be a controller. So that's kind of where we were coming from.
C
Yeah, that's exactly what our pattern is: two different controllers. Basically, one controller goes from the spec, does what the spec needs it to do, and then sets the status to say, what's the result of what I've done? This is the pattern we've followed everywhere; we also have many controllers, right?
E
Yeah, I see what you mean. That means there's...
C
Yeah, I think that's a difference in implementation detail. That means there's one controller watching the ServiceExport and the Service, and then producing the ServiceImport, including both the spec and the status, right? In our case, that controller only produces the ServiceImport spec, and then there's a separate controller reading the spec and creating the status. The reason we have a status is that we have an extra step: a ServiceImport controller that reads the ServiceImport and produces the result.
C
The ServiceExport controller didn't produce the whole ServiceImport with everything; it just produced the spec, which basically is nothing, and then there's another controller, too. So if we combined those controllers, that would make sense; since we separate them, they kind of have this spec in there.
D
Yeah, in our case, currently the ServiceImport is created more like a request to import a service from the whole fleet, the whole cluster set, to a specific member cluster, and then there's another controller that fulfills that ServiceImport object with the service backends, after all the conflict-resolution processes. That's the general workflow.
A
Okay, that is something we talked about as well. Maybe this needs a doc, if you want to write up a really short-form proposal, so we can discuss there. But I think one of the discussions we had around that was the idea that the existence of the namespace is, for better or worse, maybe an easier-to-reason-about way to declare whether or not a service should be imported, because otherwise, what happens is...
A
We've been kind of building around that concept of namespace sameness, where a namespace with a given name should mean the same thing in all of your clusters. And if the ServiceImport's existence is what is used to dictate whether a service is imported or not, then that no longer holds, and in some clusters namespace foo might have a service of a given name, and in other clusters it wouldn't.
D
In our implementation, currently we're still following the namespace sameness principle word by word. Specifically, when a user wants to import a service by creating a ServiceImport, it has to be created in that specific namespace. So if I have a ServiceImport named store under namespace foo, we will always try to import that service into the same namespace. So it's still following the principle.
D
I guess our concern is more that, in the proposal, in KEP-1645, it's kind of intentionally ambiguous on the concept of the cluster set. My understanding is that it's actually a common use case for users to only want to import services on specific clusters, not the whole connected cluster set, and the proposal kind of bypasses this concern by saying that the cluster set is a user construct that can be manipulated on the user side.
A
Yeah, so you're right, that is something we're missing, and I think that's because it's actually bigger than multi-cluster Services; it's a more general multi-cluster concept. I think it's probably time that we figure that out too, as a SIG. Current thinking, and some past discussions, have kind of been that the way you would say that certain clusters shouldn't import services from a certain namespace would be to disallow creation of that namespace in those clusters.
A
And then, you know, that's a heavier-handed approach to preventing service import, but it's also more general, because what we kind of wanted to avoid is the case where we have other multi-cluster concepts that build on that namespace sameness as well. Let's say you don't create service imports in namespace foo, but you still, you know, create some other multi-cluster-level resources.
C
Oh, so if I understand correctly, when you say namespace sameness, that also implies something I hadn't initially thought. I always thought namespace sameness means that if two services have the same name in the same namespace, they are the same service. It sounds like you've gone one step further, saying: if you have this namespace in a cluster in this multi-cluster set, then they should all look the same, no?
A
Not necessarily; they shouldn't necessarily have all the same resources, but the ServiceImport should be created, because you would expect that, if namespace foo exists in your cluster, you can talk to services in namespace foo in every cluster. Now, if you have a network policy that prevents it, that's a different story, but when we've had the network policy conversations in this SIG as well...
A
We've also said that network policy shouldn't really be cluster-dependent; it should be namespace-dependent. So namespace bar can't talk to namespace foo in any cluster, but some backend namespace can always talk to namespace foo, regardless of what cluster it's in. And if you want to wall off a cluster, foo just shouldn't exist there.
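The position described here, that import visibility simply follows namespace existence, can be sketched as a small function. All names below are hypothetical, for illustration only, not part of any API:

```go
package main

import "fmt"

// importTargets returns the clusters in which a service exported in
// namespace ns would be imported: exactly those clusters where ns exists.
// Walling a cluster off means not creating the namespace there at all,
// rather than selectively skipping ServiceImport creation.
func importTargets(clusters map[string][]string, ns string) []string {
	var out []string
	for name, namespaces := range clusters {
		for _, n := range namespaces {
			if n == ns {
				out = append(out, name)
				break
			}
		}
	}
	return out
}

func main() {
	clusters := map[string][]string{
		"us-east": {"foo", "bar"},
		"us-west": {"foo"},
		"locked":  {"bar"}, // namespace foo deliberately not created here
	}
	// "locked" never appears: it opted out by omitting the namespace.
	fmt.Println(importTargets(clusters, "foo"))
}
```

This is the heavier-handed rule in miniature: there is no per-cluster import toggle, only the presence or absence of the namespace itself.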
C
Again, let me rephrase to check I understand. It means that at least the ServiceImport and the network policy, if they exist, should be the same for all the same-named namespaces in that multi-cluster set. So for the same namespace foo in your multi-cluster settings, with regard to the Service, the ServiceImport, and the network policy, they should be the same for every namespace, right? Other things, maybe not, but at least these three, for now, should be the same, right? Otherwise the namespace shouldn't exist.
C
So that goes to Chen's question about the case where only some of them exist. Okay, I guess it's kind of just a rule to make it easier to reason about: you either have it or not. If you have a partial one, then...
A
So yeah. And then, for the two kinds of sub-questions, spec versus status and target port: I think adding target port as optional, and just saying some implementations have this if we need it, is reasonable. I think it would be good to show whether we really do need it, or, if it's really just an implementation detail of your MCS implementation, whether it's necessary in the API.
A
So that's the question: if we need it, we can add it. If we don't really need it, though, and we can work around it, then, you know, smaller APIs are better in my mind, but we can always add it later. In terms of spec and status...
A
I think the big thing is that status is just really clunky to work with, until 1.24, and until subresource support in kubectl becomes standard.
A
Is that because you want to be able to try it without a controller?
A
The thing is that, yeah, you can't just create a resource with a status, and we wanted it to be so simple that you can write a bash script to, you know, kubectl create your ServiceImports, and if it's in status, we can't do that.
C
So, if I understand correctly, it just makes that human "controller" a lot easier to write.
A
And the other thing is that the entire ServiceImport is always managed by the implementation. That can be one controller, multiple controllers, or a human-assisted system, but it is generally always owned by the same organization, whereas normally, with spec and status, they're owned by different parties: some logical entity creates the resource and another logical entity updates the status. The specifics of the number of controllers don't matter; it may make sense to have multiple controllers.
C
I don't think we have any other questions. So how do we move forward with this? The target port sounds pretty straightforward: we need to see if we can justify it, with the option of building it in. For spec versus status, it sounds like the strongest reason for putting this in spec is just to make it easier for humans, and for simple kinds of systems, to create it.
A
Yeah, I would say, for both: if you look into it and you think either change is important, I would just write a doc and share it with the forum, and we can have a conversation there. A short doc; it doesn't need to be huge, but it might be an easier place, and, you know, more inclusive for the folks who can't make this meeting as well. Okay.
F
If you do not have any time I would leave, but if you have some time, I wanted to ask you some questions, Jeremy. These questions are related to the SIG, and to provisioning. I was hearing your conversation, and the point I got from the whole conversation is that different people have different requirements, so the MCS API that we implemented has some different implementations, and who knows? I was reading one multi-tenancy article this week, which has been posted on Kubernetes.
F
It includes the concept of a virtual cluster, and maybe one day we would be having multiple Kubernetes clusters, like two Kubernetes clusters that each have three virtual clusters, and that might even fall in our domain. So I wanted to understand: there are a few routes that I have in mind. There are folks from Open Cluster Management and Karmada who are working on projects that have a similar domain, and they are doing things in their own way, and we have our own way of doing things.
F
Basically, I wanted to understand: we are at the center of SIG Multicluster operations, so I wanted to ask, do we have any sort of governance model for how they do things, and how we can include them in our own proposals or in our own implementation? Or, if we are not including them in our own implementation, are we at least letting people know that there are folks from Karmada, or folks from Open Cluster Management, who have APIs for this domain?
F
Multi-clusters can be operated in many ways, because there are, you know, lots of ways to orchestrate multi-clusters; service meshes can be used, and various different things. And maybe in the future we would be having declarative APIs, like the Cluster API that we have currently, and we would be able to provision multiple clusters. Let's say we would have a cluster on AWS, a cluster on Azure, and a cluster on GKE, and we would be provisioning them at once using a single API. So it seems...
F
I mean, I'm a new guy and I haven't introduced myself to you properly. I'm a student; currently I'm a Google Summer of Code participant, and this week I've been selected for Alibaba Summer of Code, with qpl, and I was searching for some SIG where I can, you know, make some contributions. So I'm definitely looking forward to code-based contribution, but in this SIG I wasn't able to find lots of documentation related to these ideas, and I thought that this should fall in the domain of this SIG.
A
Yeah, I think you bring up a really good point that we are missing right now. Unfortunately, I am gonna have to drop early today; we have a big meeting down here that I've traveled in for, so I'm gonna have to drop off in a minute. But I think maybe this is a good topic for the next meeting's agenda, if you want to throw it on there. I think we're missing a proper statement on what a cluster set is.
A
We've talked about namespace sameness, but we haven't really talked about what makes a cluster set, and that would help dictate how other tools, Karmada, the Cluster API, should generally think about the clusters they create, to make sure that they work together so that this SIG can build on top.
F
Okay, that makes sense, and maybe we can have more interaction on this topic in the next meeting.
A
Thank you, though; this has been a really good discussion. I am quite happy to revisit topics if we have more information now, and it sounds like we do, and I also love seeing that people are digging into the APIs. So thanks, everyone; I'm sorry for cutting this week short, but great discussion. We'll see you in a couple of weeks, and on Slack.