From YouTube: Kubernetes SIG Multicluster 13 Oct 2020
A: I just put a ping to trojan, since he has an item on the agenda for today.
A: All right, maybe we'll just give it a moment, and if we don't have trojan, we can move on to the second agenda item and come back to it. Come back to the Work API one.
B: And so, I guess, the second item: we talked about continuing the conversation from last week on potential cluster registry use cases and what the core characteristics of an API could look like. That seems more open-ended, and also, to be honest, I haven't been able to dig in as much as I wanted over the past week. So I wonder if maybe we can swap with Hector.
C: All right, maybe while we wait, since Paul's back, and I see you answered my comment on the doc, perhaps you can discuss whether we actually want to merge the cluster registration and ClusterSet docs.
C: Yes, that's my thought as well, because, as Tim pointed out when we were discussing the cluster registration doc, we ended up getting into quite a lot of rather implementation-oriented detail there, whereas the ClusterSet document, which we hadn't gone over at that point, has stayed fairly high-level and implementation-agnostic.
A: Yeah, sorry, go ahead. I was just going to say we do have trojan now, so we can finish talking this through, maybe time-box it to a couple of minutes, and move on to the Work API. I'm happy either way; I'm usually happiest when I have to do the least amount of work, so I'm very happy to leave them separate, because that is the least amount of work.
A: Okay, all right, Jin, you're up next.
D: Okay, let me share my screen. Yeah, so last time I introduced the Work API that we are working on. As a recap: the Work API defines a list of manifests on the hub cluster that will be applied on the spoke, the managed cluster.
D: So, a simple example here: we define a Work, and in the Work API spec we can put a list of manifests, the resources, and then the Work reconciler will try to apply that list of manifests onto the managed cluster. Also, a set of conditions for each manifest will be updated in the Work's status field, to indicate whether each manifest in the Work has been applied successfully on the managed cluster.
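The Work shape described here can be sketched roughly as follows. This is an illustrative sketch only: the group/version, field names, and condition types are assumptions based on this discussion and may differ from the actual proposal doc.

```yaml
# Illustrative sketch of a Work resource on the hub cluster.
# Group/version and field names are assumptions, not a copy of the proposal.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: Work
metadata:
  name: example-work
  namespace: cluster-g            # hub namespace watched by cluster G's agent
spec:
  workload:
    manifests:                    # resources to apply on the managed cluster
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: hello
          namespace: foo          # target namespace on the managed cluster,
        data:                     # unrelated to the hub namespace "cluster-g"
          greeting: hello
status:
  conditions:                     # overall Work conditions
    - type: Applied
      status: "True"
  manifestConditions:             # per-manifest conditions reported back
    - conditions:
        - type: Applied
          status: "True"
```

Note how the hub namespace (`cluster-g`) only selects which cluster the Work targets, while the namespace inside each manifest (`foo`) is where the resource lands on the managed cluster.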
D: So this is one possible example: on the hub cluster we will have a namespace for cluster G and a namespace for cluster R, plus work controllers on the different clusters. For example, cluster G's work controller will only watch the Work APIs in cluster G's namespace on the hub cluster.
D: Also, cluster R's work controller will only watch the Work APIs in cluster R's namespace on the hub cluster. But the manifests defined in the Work could be in a different namespace than cluster G's namespace; it could be another namespace, such as namespace foo, on cluster G. So the point here is that the namespace on the hub cluster can be unrelated to the namespaces of the resources that we deploy on the managed cluster. The namespace on the hub cluster is more like a container that holds all the Work to be deployed on a certain cluster, while the namespace of the resources in the manifests is unrelated to the namespace on the hub cluster. Is that clear? Or are there any comments on this top diagram?
B: Around the ClusterSet: if this API works on a ClusterSet, then it seems like we're diverging from that a bit if we have this other namespace that maps to a cluster, because now, in the Work API, my namespace rights dictate which clusters I can access, whereas normally I just care about which namespaces I can access.
D: So does that mean that, for the ClusterSet (I haven't been involved in the ClusterSet discussion), if we create a ClusterSet in a certain namespace, all the clusters in that ClusterSet will also have that namespace?
B: I don't think we really dictated that the namespace has to be present in each cluster, just that it shouldn't mean different things or be owned by a different team. With that, it would make sense to me if the same kind of rules extended to the Work API. But the complication that comes to mind is, of course: if I did have some multi-namespace construct that I wanted to deploy via the Work API, it's less clear how that would work. At the same time, I wouldn't have to think about cluster access, because, at least with multi-cluster services, that's been something we've been trying to steer away from.
A: I think it might pay to point a couple of things out to start with. I do think that we will need some cluster-level grouping in the hub for things that go to a particular cluster, and a namespace seems like a good fit. I don't think it's incompatible with namespace sameness, and the reason is that I think you could implement namespace sameness in the hub if you treat the per-cluster namespaces as an implementation detail, and think about scheduling work through some higher-level construct than the Work API that could respect namespace sameness explicitly. I just don't see how this could break namespace sameness.
B: I think you're right, it doesn't necessarily break it, but it's kind of a new concept, and if we think about namespace-level access, I guess it looks at things in a different dimension.
B: It's a different use of namespace, right? Normally we're looking at applications at the namespace level, regardless of cluster, and in this mode we're looking at workloads targeting a cluster, regardless of namespace. Those can be compatible; I definitely agree that there's room for something that targets specific clusters. But I just wonder: if I'm applying Work in cluster G's namespace, what's stopping me, the owner of namespace foo, from creating Work for bar?
A: Yeah, I guess one thing we could do to implement namespace sameness, forgetting about a higher-level API and making this layer, the Work, respect namespace sameness, is to have an admission controller that does a SubjectAccessReview and makes sure that you have ClusterSet-level permission on namespace X before you can create a Work that puts something into namespace X, right?
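The admission check sketched here could, for example, issue a SubjectAccessReview against the hub API server. This is a hypothetical illustration; exactly which verb and resource should stand in for "ClusterSet-level permission on namespace X" would be a design decision of its own.

```yaml
# Hypothetical SubjectAccessReview an admission webhook on the hub could
# create before admitting a Work whose manifests target namespace "x".
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: alice                  # the user creating the Work
  resourceAttributes:
    namespace: x               # namespace the manifest would land in
    verb: create
    group: apps                # group/resource taken from the manifest itself
    resource: deployments
```

If `status.allowed` comes back false for any manifest in the Work, the webhook would reject the Work's creation.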
D: So, in that case, does that mean we still need some field on the Work API to identify which cluster this Work belongs to?
A: I don't think so, because the namespace permission should be independent of the cluster, right? Within a ClusterSet, if you have permission to manage namespace X, you have it on every cluster in the set, right?
D: I see, okay, yeah, I think that makes sense. But another part of the question is: if we put the namespacing here, then what about cluster-scoped resources, for example?
A: Depending on the kind of resource you put into a Work, I could easily imagine people wanting to put storage classes into a Work, for example.
B: Yeah, I think conceptually there's not much difference. Part of the ClusterSet idea is that we've said the cluster needs to be controlled by some common authority, and it seems like cluster-scoped resources follow that authority. So, just as you might have access to a namespace, you might have access to the...
A: Okay, while Tim puts his thoughts into writing: any other thoughts or impressions from folks?
E: Yeah, I would personally consider that, basically, you deploy a workload, and this workload has a specific storage class, a user-specific kind of back end for storage, so it's probably also creating storage classes. I guess that's one simple example of a use case.
F: So, I will admit I have not read the whole doc. I am confused as to why we're trying to map a whole cluster into a namespace, and how you map multiple namespaces into that namespace. I just don't understand the point of it.
D
It
yeah,
I
think
the.
F: Sorry, let me add: I apologize if that's in the doc; I'll try to catch up and read it, and maybe the better approach is just to ask questions there. But yeah, please go ahead.
D: Yeah, I think the thing here is that we want to have an agent on the managed cluster that fetches the Work APIs from the hub cluster.
D: So, the way we define Work is: we define a set of manifests in the Work, and the Work is something that will be deployed on a single cluster.
F: So I'm confused then, and I apologize again, I have not been following along very closely. I am confused as to what value a one-to-one mapping here gives: if I'm a user, and I know which cluster I want my work to run in, why wouldn't I just submit it to that cluster?
D: You mean, why is a Work only deployed to a single cluster? I think the reason is that we also want to maintain the status of the applied resources on the Work, so from the Work's conditions we know how each manifest was applied, or whether it is available, on a certain cluster. Defining a Work that can be deployed to multiple clusters might make that status too large, since we would need to collect the conditions from all the clusters that apply the workload.
F: I'm still... I don't want to waste the whole time on this, and actually I have to drop off in a few minutes; I'll read the doc. Here's my main question, though: if I'm a user working within namespace foo, and I want to deploy something to cluster G, and I know I want it on cluster G, why would I go through this API instead of just talking to cluster G?
B: Elaborating on that, my question is the same. The value here seems more like "I want something in my namespace in all the clusters," and being able to deploy to the Work API and then select clusters is where I really get a multiplier. But yeah, if I just wanted a namespace in cluster G, I could just talk to G.
A: ...that says, "schedule this work onto clusters A, B, C, D, E, F," or the entire ClusterSet. That's probably where the actual value is, and this is an implementation detail of that. I expect that if you just want resources on one cluster, you probably don't want to use Work.
A
If,
instead,
you
want
to
schedule
things
to
like
clusters
that
you
don't
know
in
advance
that
haven't
been
created
yet
like
that's,
where
the
value
proposition
or
you
want
to
just
assign
them
or
schedule,
work
to
a
number
of
different
clusters
in
a
set.
There's,
that's
probably
where
the
value
proposition
is
where
it's
not
directly
from
work,
but
it's
from
a
higher
level,
primitive
and
in
order
to
make
those
higher
level
primitives
work
correctly.
We
need
to
make
work
work
correctly.
B
That
that
makes
a
lot
of
sense,
but
the
the
thing
that's
confusing
to
me,
then,
is
this
namespace
breakdown,
because
because
if
we
have
a
namespace
for
cluster,
then
then
it
seems
like.
I
still
need
to
know
the
clusters
in
advance
right
so
that
I
can
deploy
to
those
namespaces.
A
So
the
namespace
alignment
with
cluster
and-
and
I
think
trijin
may
have
just
said
this,
but
I
got
a
I'm
doing
this
on
my
phone
and
I
had
a
phone
call
interrupt
the
audio.
So
forgive
me
if
I'm
repeating
something
chojin
said
is
the
the
idea
of
constraining
the
work
in
a
particular
cluster
to
a
certain
name.
Space
is
to
limit
the
degree
of
permission
that
an
agent
applying
the
work
on
the
spoke
needs
in
the
hub.
A
Okay,
so
that
that
is
the
rationale
to
allow
that
agent
to
run
with
a
low
level
of
privilege
in
the
hub.
Personally,
I'm
open
to
like
other
arrangements,
but
that
was
the
original
motivation.
Is
that
on
the
money
children.
F
A
F
F: Sorry, I feel like I'm asking questions that are probably answered in the doc, and I don't want to waste everybody's time; unfortunately, I have to drop off. I really wanted to talk about the next topic, but oh well, I'll catch up on the notes. Sorry, see ya.
B
See
ya,
so
I
I
guess
going
back
to
the
your
question
paul,
I
think
about
the
premise
like
that
makes
sense.
I
think
one
of
the
things
that
we've
bought
ourselves
with
the
concept
of
cluster
set,
and
this
like
shared
authority
and
and
reasonable
trust
between
clusters-
is
that
maybe
that's
not
as
that
isolation
is
maybe
not
as
important.
It
might
not
be
so
bad.
If
we,
if
the
agent
I
mean
the
agent
that
installs
things
across
namespaces
by
pulling
from
the
from
the
work
api
will
be
privileged
right.
B
So
maybe
maybe
you
know
if
we
look
at
a
high
trust
model
where
it's
up
to
the
agent
to
you
know,
grab
pieces
of
work
that
that
have
a
selector
matching
and
I'm
you
know
I'm
starting
to
assume
all
kinds
of
selectors
exist
here
and
labels,
but
let's
assume
that
you've
got
a
selector
to
find
work.
Matching
the
current
cluster
then
maybe
like.
Maybe
that's
enough.
B
A
A: Yeah, I don't think we should abandon least privilege as a principle. I hear what you're saying, though, that maybe the trust model in a ClusterSet could be different; I still think it's desirable to have least privilege.
A
Yeah,
I
would
imagine
that
I
would
imagine
potentially
many
different
higher
level
controllers
creating
work.
B
I
can
see
that
working,
but
maybe
like
maybe
the
doc,
could
just
use
a
diagram
of
like
a
theoretical,
higher
level
controller
and
what
that
could
look
like,
because
if,
if
that
was
pulling
work
from
various
namespaces
and
applying
them
to
cluster
namespaces,
then
we
I'm
just
trying
to
think
about
what
that
actually
looks
like
if,
if
the
namespace
ever
has
a
traditional
namespace
meeting,
meaning
in
the
hub
cluster,
then
we'd
have
some
name
spaces
that
are
our
namespaces
and
some
namespaces.
B
A
D
B
D: Okay, just one question: would the higher-level primitive controllers relate to something like Cluster or ClusterSet, or do we just assume that we already have that API for ClusterSet, for example?
B: Yeah, Hector, why don't you go next? And then, since neither Stephen nor I was able to look into it as much as we'd want, we can talk about the cluster ID registry stuff next week.
E: Yeah, thank you. I'm going to be really brief. Basically, we were proposing to delete or disable some features that have been added to KubeFed in the past. Personally, I believe they are out of scope, or in some cases probably experimental as well, and in the future maintenance would focus on a single role for KubeFed, which is basically federating resources.
E: I opened two issues proposing to delete the cross-cluster delivery feature and another feature, the ingress DNS.
E
Maybe
the
title
was
a
bit
really
direct,
but
the
intention
was
mainly
to
try
to
call
the
attention
of
any
person
using
the
the
tool.
So
my
question
to
to
the
meeting
or
the
question
in
the
meeting
is:
what
do
you
suggest
how
generally
we
should
approach
these
in
terms
of
so
we
basically
delete
the
feature,
maybe
disable
it
and
and
then
next
release
delete
it
or
those.
Some
of
these
features
are
in
alpha
state,
so
yeah,
it's
a
bit
complex
to
be
honest,
to
keep
montana.
A: I think that, since they are alpha, you should be able to delete basically anything that you want, and I think it makes sense to have KubeFed be focused on doing its core use case correctly.
A: So I'm in favor of deleting features that aren't used. I think DNS is already behind a gate, as one example; you might want to make sure the gate is off by default, if it isn't already, and think about adding a warning when the gate is enabled that it's a deprecated feature, and message to people that it will be deleted on such-and-such a date.
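As a sketch of that suggestion: KubeFed's feature gates are configured in the KubeFedConfig resource, so a deprecated feature could ship off by default along these lines (the exact gate name here is an assumption; check the KubeFed docs for the current list):

```yaml
apiVersion: core.kubefed.io/v1beta1
kind: KubeFedConfig
metadata:
  name: kubefed
  namespace: kube-federation-system
spec:
  featureGates:
    # Deprecated feature left disabled by default; enabling it could emit a
    # deprecation warning stating the planned removal date.
    - name: CrossClusterServiceDiscovery
      configuration: "Disabled"
```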
B: Yeah, thank you all. Great conversation, and great work on the Work API. Good discussion. So, thanks.