From YouTube: Kubernetes SIG Multicluster 2020 July 21
A: All right, I think we have enough people here to get started, with a few more trickling in. A couple of things on the agenda today. Let me share my screen.
A: First, I wanted to catch everyone up on the outcome of the kube-proxy decision for multi-cluster services as of last week; that shouldn't take very long. So basically, after a lot of discussion back and forth, I think there are...
A: There are some real concerns with the approach of using Service as a middle layer between kube-proxy and ServiceImport, but none of them are blockers for alpha, and we don't really have good enough justification, at least for an alpha-level API, to make any staging repo changes or modifications to kube-proxy.
A: So it seems like the right way to go is to get a sigs repo for multi-cluster services and build it for alpha with some controller that converts ServiceImport to Service, using that as a middle layer. It won't be the cleanest thing, but hey, it's alpha. Then, as we go to beta, we can revisit this, and at that point I think we'd have stronger justifications for doing something like getting a staging repo and making kube-proxy changes, mostly on the grounds of user experience, which is really more important for beta than alpha.
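The "middle layer" idea above can be sketched as a small, purely illustrative mapping function. This is not the actual controller; the field names (the `derived-` name prefix, the `Headless` type value) are assumptions loosely based on the MCS API discussion, not a definitive schema:

```python
def derive_service(service_import):
    """Sketch: build a cluster-local Service from a ServiceImport.

    The derived Service acts as the "middle layer" so kube-proxy can
    program a VIP without any kube-proxy changes. Field names are
    assumptions for illustration, not the final MCS schema.
    """
    meta = service_import["metadata"]
    spec = service_import["spec"]
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
            # A derived name avoids colliding with a same-named local Service.
            "name": f"derived-{meta['name']}",
            "namespace": meta["namespace"],
        },
        "spec": {
            "ports": spec.get("ports", []),
            # Headless imports map to clusterIP: None; otherwise leave the
            # field unset and let the apiserver allocate a VIP.
            "clusterIP": "None" if spec.get("type") == "Headless" else None,
        },
    }
```

A real controller would also copy endpoints across and keep the derived object in sync, but the shape of the mapping is the point here.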
C: Yeah, I just want to briefly discuss the KubeFed API. I currently work for Huawei Cloud, and we have actually run KubeFed in production for quite a while, serving a bunch of customers in multi-cluster use cases as well as hybrid cloud use cases, and we got a lot of feedback from users, especially some who started with Federation v1.
C: The major feedback was that they don't find the KubeFed API very convenient to use, because the experience is totally different from v1, and we understand the background of upgrading from v1 to v2.
C: But the major difference is that today, users actually just want to use the core APIs, while in KubeFed, especially for federated Deployments, the workloads are different APIs. So a lot of functionality users have already built needs to be rebuilt to work with the federated workload APIs. That's the major barrier slowing users down from adopting KubeFed, so we are thinking about another way to improve this experience in KubeFed.
C: We actually have a command called federate; basically it converts a workload YAML into a federated workload YAML. But that's an extra step for users to do at the end of the day.
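As a rough illustration of what a federate-style conversion does, here is a minimal sketch that wraps a plain workload manifest in a federated shape (a template/placement layout in the KubeFed `types.kubefed.io/v1beta1` style; treat the exact fields as illustrative rather than the tool's precise output):

```python
def federate(workload, clusters):
    """Sketch of a `federate`-style conversion: wrap a plain workload
    manifest into an equivalent Federated* resource. The shape follows
    the KubeFed template/placement convention; details are illustrative.
    """
    return {
        "apiVersion": "types.kubefed.io/v1beta1",
        "kind": f"Federated{workload['kind']}",
        "metadata": dict(workload["metadata"]),
        "spec": {
            # The original spec is embedded unchanged under .spec.template,
            # which is why tooling built against the core API can't consume
            # the federated API directly: the group, version, and outer
            # shape all differ even though the inner spec is identical.
            "template": {"spec": workload["spec"]},
            "placement": {"clusters": [{"name": c} for c in clusters]},
        },
    }
```

This embedding is exactly the friction being described: the content is the same, but every consumer has to learn the wrapper.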
C: The running system still needs to be aware of the federated API, so we're thinking maybe we can start with something like a server-side federate, or simplify that pipeline to reduce some of the steps: deal directly with the workload definition defined by the core API, and replicate the workloads to the member clusters according to placement and scheduling.
A: Yeah, so I'll be honest, I don't have a huge amount of context with KubeFed. And I see Hector, I know you do.
D: Basically, what KubeFed is doing is exposing the core components and custom components, or in this case resources or objects, as you want to call them, to be federated. So it is using the core API, or extending the core API.
D: I don't see that as a problem with the current KubeFed API. In fact, I think it's one of the, let's say, stronger and simpler points KubeFed has: you don't really need to know new resources or how the resources have been integrated. You just run a command and they can be used inside your cluster.
C: Yeah, I will definitely write a document to describe more details. Today, most of the federated workload API definition is very similar to the core APIs, but the major difference is that they have a different API version and a different API group. So for users who have applications developed on top of the core APIs, they cannot directly adopt the federated APIs.
D: I mean, I kind of understand the two issues that you raise. One is about the group: I think the code currently doesn't support multiple groups or versions, or something like that. I'm not sure which of the two, but I know that has been raised as an issue, and probably that needs to be standardized, or maybe you can help us allow multiple groups for the same resource.
D: If I'm not mistaken, for the second issue I don't really see a problem. So you have a Deployment, right, and you want this Deployment to be federated. You can simply define this Deployment object as federated, and the object structure will be the same; nothing will change. So I still can't identify the issue that you are describing.
E: Hi, this is Jim here. If I understand correctly, Kevin, your concern is that the migration from a single-cluster application to KubeFed might be a little bit challenging? For example, if we already have an application defined using a Helm chart, you can directly deploy the Helm chart into a single cluster, but maybe it's not possible to deploy it directly against the KubeFed API.
D: Oh yeah, so in this case we have similar issues. Well, not issues, but we have similar objects, right, like workloads or custom resources. In that case we simply federate that kind of CRD. I think the Helm operator has a CRD, and what you can do is simply federate that CRD, and then with the placement rules you can define which clusters these Helm charts will land on.
C: Yeah, I will create the issue, and I will also definitely check whether the solution Hector mentions can solve our issues.
C: Another thing we're also thinking about is that some beginner users managing multiple clusters may not yet meet the requirements to manage applications across multiple clusters. They just have some applications in some clusters, but the clusters are still running separately.
C: So we are thinking one user requirement in this case is that they want a kind of unified API endpoint, so they can simplify access, authentication, and authorization. What we did in our product is provide a kind of API gateway equivalent, and I want to know if it's worth upstreaming.
D: So I think for specific cases you can use overrides. The overrides property on a federated resource allows you to define specific constraints for a specific cluster. If you have the same application running across multiple clusters, you can use that to define cluster-specific constraints or property overrides. There's an option about adding new resources; I'm not sure if I understood, but yeah, I would say use that if you have multiple apps and different clusters have different conditions or requirements.
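A minimal sketch of how per-cluster overrides behave: each entry names a cluster and a list of patches applied to the template before it is pushed there. This uses simplified slash-separated paths rather than KubeFed's actual JSON-Patch-style mechanism, so treat the structure as an assumption for illustration:

```python
import copy

def apply_overrides(obj, overrides, cluster):
    """Apply KubeFed-style per-cluster overrides to a template object.

    Simplified sketch: paths are plain slash-separated keys (e.g.
    "/spec/replicas"), not full JSON Patch as KubeFed supports.
    """
    result = copy.deepcopy(obj)  # never mutate the shared template
    for entry in overrides:
        if entry["clusterName"] != cluster:
            continue
        for patch in entry["clusterOverrides"]:
            target = result
            *parents, leaf = patch["path"].strip("/").split("/")
            for key in parents:
                target = target[key]
            target[leaf] = patch["value"]
    return result
```

So the same federated resource can, for example, run 2 replicas in one cluster and 5 in another without duplicating the whole manifest.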
C: Yeah, I know how KubeFed works. I mean, just that in some cases users have multiple clusters and don't yet meet the requirements to use KubeFed. So I'm talking about multi-cluster in general: they have a requirement for a unified API endpoint, so they don't need to change the API
C: endpoint or change the context every time. They just log in once and can switch, just like today we modify Pods or Deployments in different namespaces. Users are expecting something similar to that, so maybe with just one parameter they can change to a cluster and modify the things in that cluster.
A: Right, yeah, that's true. I would use contexts today, but it sounds like the missing piece might be: how does a user actually get credentials for all of the clusters? Because context would probably be the right way to switch clusters once you have your credentials. But this is starting to sound more like cluster registry, which is excellent, because we are kind of brainstorming requirements for that right now. So Kevin, actually, Paul shared on the forum...
A: Okay, because I think today it would be: get credentials for each cluster, and then, once that's done, context is probably the best way to switch the cluster you want to talk to.
C: Okay, I will check.
E: Awesome. Hey Jeremy, I have a dumb question here. Apart from the multi-cluster service API, are there any other plans to manage the actual deployments across clusters that back this multi-cluster service?
A: Probably the most promising is the early discussion around the Work API, which was shared a couple of weeks ago in this agenda; I think there are some links to GitHub, but it's early stages, so nothing super concrete yet. I think today it's: deploy your services, and then multi-cluster services work, but we kind of leave that deployment up to the user. I think that would be a good next step, though.
A: Sure, thank you. Also, really quickly, I just want to make sure that I didn't cut anyone off after presenting the outcome earlier. I realized my audio was actually not working for the first bit of this call, so if anybody had anything to add there...
B: I was just gonna say thanks for putting that stuff together; I think it's super helpful to compare all the outcomes. Actually, my follow-up question was: if folks want to get involved in contributing to the actual implementation, what would be the best way to do that?
A: So I'm gonna change the request for a staging repo to a request for a sigs repo, and I think once that's done we can just start working on it, create issues, and coordinate. But yes, please, would love the help.
F: I think I'm one of the folks that wants to help. Hey, I'm Steve. I was here a while ago, but I haven't been here in a while, and I want to get involved in things. I work on Contour primarily during the day, but this stuff is super interesting. I helped build some of the work around Gimbal, if you've heard of that, which is sort of a multi-cluster kind of routing thing, so I'm super interested in a lot of this. Just wanted to say hi, and I'm happy to help.
A: Awesome, that's great. Well, welcome, and thank you. So I'll work on getting that set up this week, and then we can maybe start working on the implementation.
C: The survey is out on k-dev.
A: Yes, I sent that out early this morning, so hopefully we'll start getting some votes in on the names for what we want to call multi-cluster, well, groups, or, I guess we'll find out.
A: So I think the plan was to let that bake for about two weeks, and then we'll look at the results.
A: For the naming to replace the supercluster term, I think we probably also need to decide on the API group. That's been getting less thought. It seems intuitive, and we could absolutely disagree, but it seems intuitive that it would be multicluster dot and then either x-k8s.io or k8s.io, depending on where it ends up living.
A: Cool, I think that's it. Actually, now that the API is ready to go and we're going to start this process, there was one more thing that came in before we sign off: the suggestion that we switch the ServiceImport to taking a list of IPs, limited to one, just to be a little more future-compatible with some of the dual-stack work for multi-cluster services. So not a huge change.
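The proposed change can be illustrated with a toy validator: a singleton `ips` list today keeps the schema from needing a breaking change when dual-stack lands and a second address appears. Field names and the length limit here are assumptions from the discussion, not the final API:

```python
def validate_service_import_ips(spec, max_ips=1):
    """Illustrative validation of the proposed ServiceImport `ips` field.

    The field is a list so dual-stack (one IPv4 + one IPv6 entry) can be
    allowed later by raising max_ips, without changing the field's type.
    """
    ips = spec.get("ips", [])
    if not isinstance(ips, list):
        raise TypeError("ips must be a list")
    if len(ips) > max_ips:
        raise ValueError(f"at most {max_ips} ip(s) allowed for now")
    return True
```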
A: Yeah, so that hasn't really been defined in terms of how specifically we want to implement it, but my thinking at this point has been that it'll be a CoreDNS plugin, and we'll probably want that to use ServiceImport directly, because there are some things we'll probably want to do, especially to handle things like StatefulSets that might have the same name for a given endpoint in multiple clusters.
A: So we'll want to inject the cluster ID into that name, and that's spelled out very loosely in the KEP right now. I think one of the beta graduation requirements was defining that API, so once we get the alpha implementation underway, that's the next thing I want to bite off, and would love help there too.
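A sketch of the name shape being discussed: injecting the cluster ID keeps same-named StatefulSet endpoints from different clusters distinct. The exact label order and zone are assumptions pending the KEP's DNS specification:

```python
def clusterset_dns_name(hostname, cluster_id, service, namespace,
                        zone="clusterset.local"):
    """Build a multi-cluster DNS name with the cluster ID injected.

    Without the cluster_id label, "web-0" exported from two clusters
    would collide; with it, each endpoint stays uniquely addressable.
    Label order and zone are illustrative assumptions, not the spec.
    """
    return f"{hostname}.{cluster_id}.{service}.{namespace}.svc.{zone}"
```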
A: Well, thanks everyone, appreciate it, and see you next time.