From YouTube: Kubernetes SIG Multicluster Jul 27
B
Yeah, okay, I'm having some kind of issue with my headphones.
B
Why don't we wait a moment and see how many other folks we get, and we'll continue trying to get my headset to work. All right.
A
All right, so this is the SIG Multicluster meeting for Tuesday, July 27th, 2021.
C
Cool, hey everyone. I'm John. I work on Istio, mostly, so we've been looking at MCS quite a bit.
C
I actually personally haven't been super involved with it, but the person who is, I think, is unavailable today, so I'm here in their place. So I may not be an expert on everything, but one thing we wanted to discuss was around leader election. One thing we've noticed is: we're using the standard client-go leader election package right now, and we're kind of hitting the limits of it, especially once we go into having these globally replicated controllers that are syncing these ServiceExports and ServiceImports, you know, across different regions.
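For reference, a minimal sketch of the standard client-go leader election pattern being described here; the Lease name, namespace, timing values, and callbacks are illustrative, not taken from any actual deployment. Every replica races for a single Lease, and whichever acquires it first becomes leader, with no notion of priority.

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func run(ctx context.Context, id string) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A coordination.k8s.io Lease backs the lock; all replicas contend for it.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "mcs-controller", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a lease stays valid without renewal
		RenewDeadline: 10 * time.Second, // the leader must renew within this window
		RetryPeriod:   2 * time.Second,  // non-leaders re-check at this interval
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start syncing ServiceExports/ServiceImports here
			},
			OnStoppedLeading: func() {
				// stop work; some other replica takes over
			},
		},
	})
}
```

RunOrDie blocks: each replica runs this same loop, and only the winner's OnStartedLeading callback ever executes.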
C
So, for example, one concern we have is, say we have some clusters running in Asia and in America: if the one in America gets the leader election lock, then it's now in charge of Asia, and we have all these egress costs and latency, et cetera, for this cross-globe replication, when there are perfectly fine controllers running in Asia at the same time that could easily just be the leader for their own clusters and, you know, reduce latency, costs, et cetera. But the problem is we don't want to just restrict it to only that, because in case the ones in Asia go down, we of course want to fail over to the ones in America. So, just general problems like that. There are also other use cases for, you know, kind of, improvements to the leader election.
C
For example, one thing we have is: we'll run multiple versions of Istio in the same cluster, and this is just kind of a general problem, not even multi-cluster service discovery specific. We want a specific version, the one dubbed the default (which is generally the latest one that's not just a canary), to be a higher priority to become the leader, because it's hopefully better than the other ones and can act, you know, in better ways.
C
Today it's just kind of first one wins, so you may end up with some super old version becoming the leader and kind of behaving less optimally. So I kind of put together this KEP, that's in the meeting notes, for adding this concept of kind of prioritized leader election, and the idea is actually pretty simple; I was surprised how easy it was to implement.
C
Basically, each participant in the election would just be able to assign themselves an arbitrary key. So this could be anything: if you were doing something with versions, for example, you could put your version in the key.
C
If this was about, like, the region, you could put what region you're in, or obviously you could put a JSON blob if you're doing something, you know, complex. And then each participant also has some sort of function to determine how their key compares to the existing leader's, and so if they decide that they have a higher priority, then they'll just take the lock, even if it's held.
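Since the KEP itself isn't quoted here, a hedged sketch of the mechanism as described: every type and function name below is hypothetical, not an existing client-go API. Each participant carries an arbitrary key plus a user-supplied comparison, and a held lock can be preempted when the comparison says the candidate outranks the current leader.

```go
package main

// Candidate is one hypothetical participant in a prioritized election.
type Candidate struct {
	Identity string
	Key      string // arbitrary: a version, a region name, even a JSON blob
}

// OutranksFunc is the user-supplied comparison: given the current
// leader's key, should this candidate's key win?
type OutranksFunc func(myKey, leaderKey string) bool

// shouldAcquire sketches the core loop change. Today's behavior is the
// first branch only (take a free lock); the proposal adds the second
// (steal a held lock from a lower-priority leader).
func shouldAcquire(c Candidate, leaderKey string, outranks OutranksFunc, leaseExpired bool) bool {
	if leaseExpired {
		return true // unchanged path: free lock, first one wins
	}
	return outranks(c.Key, leaderKey) // new path: preempt a held lock
}
```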
A
I mean, I think all those use cases definitely sound reasonable. I think, you know, here in this SIG, obviously we're most concerned with the multi-cluster case, and so I'm curious, because when I was reading through it, there were a few questions that kind of popped out, like: is the assumption that there's one controller instance in Asia, responsible for all of your clusters in Asia, or is it more like a controller per cluster, and you just kind of want to prefer local?
C
That's a good question. I think, probably, in the simplest case you'd have many clusters, which could be in many different regions, and you could have multiple clusters in the same region. They'd probably all have a controller, perhaps, or maybe some would and some wouldn't. But we also pretty regularly support multiple versions as well, so you may need to kind of do both, not just by version but by version and by region, if that makes sense, which may get kind of complicated.
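To make "both by version and by region" concrete, a hypothetical composite key under the same sketched (non-existent) API as above; the field names, JSON encoding, and home-region rule are all assumptions for illustration.

```go
package main

import (
	"golang.org/x/mod/semver" // Compare expects a leading "v"
)

// priorityKey is a hypothetical composite key covering both dimensions;
// it could travel in the lock record as a JSON blob.
type priorityKey struct {
	Region  string `json:"region"`
	Version string `json:"version"`
}

// outranks prefers a controller in the lock's home region first, and
// only then compares versions, newest winning.
func outranks(my, leader priorityKey, homeRegion string) bool {
	myLocal := my.Region == homeRegion
	leaderLocal := leader.Region == homeRegion
	if myLocal != leaderLocal {
		return myLocal
	}
	return semver.Compare("v"+my.Version, "v"+leader.Version) > 0
}
```

A remote replica still acquires a free lock through the unchanged expiry path, so failover to America when all the Asian controllers are down falls out naturally.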
C
Like, I don't know, all the controllers obviously need to have similar logic for taking ownership; otherwise they'll just steal ownership from each other, which is problematic, so I'm not sure exactly what would happen there. But for us, you know, we do support multiple versions at once, and obviously multiple clusters in multiple regions, so kind of any topology is possible.
A
Okay, yeah. Kind of what I'm getting at there is... so, this is in this vein that has kind of come up recently: now that we've really started to figure out how to deploy things across clusters and connect things across clusters, coordinating things across clusters is kind of a natural next step, for sure, yeah.
A
But in this case, I'm trying to understand: if you have a replica in each cluster, and that would be the preferred controller for that cluster, in what scenarios might that controller not function while the cluster API is still up, for example?
C
Yeah, so I think, I mean, downtime is something that's always possible. It could also be that they choose not to deploy a controller in that cluster, at which point that is something that could be solved by coordination between all the clusters, to say, "hey, all these other clusters: this one doesn't have a controller, so you actually are responsible for that one." But then any new clusters...
A
Yeah, that to me sounds pretty reasonable.
A
I'm also curious about a different use case. I think this makes a lot of sense for when you're trying to control things in clusters in a region, but everything is generally homed within a cluster, which sounds like the core case. Another case that's come up is when you have controllers spread across clusters responsible for global state, so there's no natural place for the leader election to live; like, obviously, if the resources...
C
Yeah, that's a good question. So generally we have kind of two concepts. Typically we support all these different, crazy topologies, but the most common one is we'll have a single config cluster, which stores the Istio configuration: so, like, routing rules and TLS policies and whatnot. And then we'll read from a bunch of clusters for, like, endpoints and services, so kind of the service discovery aspect. And so there is kind of this centralized location.
C
We call it a config cluster; it's kind of like the core cluster. I'm not super familiar with it, but I think Multi Cluster Ingress on the GKE side does a similar thing.
C
So it seems like there's some prior art there. I don't know; there are other multi-cluster things as well, like Cilium. I don't know if they have a similar concept, but it seems like it is definitely something that needs to be solved by almost everything doing multi-cluster.
C
But it gets a bit more confusing in our case, because you can have multiple primary ones and then the user keeps them in sync, which is either good, if you like to slowly roll things out and you have a great CI pipeline, or bad, if you don't, and now you have to keep things in sync and that doesn't work well. So there are those kinds of pros and cons.
A
Right, okay, cool. Yeah, I mean, for that first problem, from my reading of this KEP (I think I still need to go through it), it seems pretty reasonable to me. For that second problem, it would probably be good to have some Istio folks come weigh in. I think we've just started kind of exploring some ideas around that in the last couple weeks.
A
Awesome. Does anyone else here have anything else to add? Thoughts?
D
...you know, kind of influence the election mechanisms to kind of choose the master, rather than them kind of choosing it automatically based on attributes like priorities, or what type of cluster they're handling, and all that kind of thing. So yeah, it was not automatic but kind of user-forced; but the frequency of that operation was need-based, so it was not very frequent. So that way we were, kind of, you know, able to do that.
D
So, kind of, you know, from a user's point of view: if the user wants to make any specific controller the master, they can go and choose the master. At that moment it actually gives the lock to it and it becomes master; but obviously it could work and scale better if the frequency of that choosing of the master is lower. Otherwise you need to kind of rely on attributes and, you know, use those attributes to kind of influence the elections.
B
So I did have a question, from a use case that I've kind of heard of. When you said that, in the use case where you have multiple versions, you want to choose, generally, the latest version that isn't a canary: is that part of, like, a deployment flow where you maybe have versions a, b, and c present in a cluster, where, like, c is the newest, and you maybe want to roll out d and, like, evaluate things scoped to d in some kind of, like, health check or [unclear], before you make d the one that should be the leader? Is that sort of what the encompassing flow is like?
C
Yeah, I think, if you just think of the general case, it would be perfectly reasonable to just say "newest version," and if you didn't do canaries, then that's fine: you would just make your comparison function "if my version's newer, then you're good to go," and that would work for a lot of users. For us in particular, we do have that sort of canary; like, we allow people to roll out a new version and do some tests, but it's just kind of like the control plane for pods, right?
C
So basically, what users would typically do is: they deploy the new version, roll over a few kind of test workloads, make sure things are working, and then move everything over and kind of mark it the default. And so we would want that default one to be kind of the leader. But I think, I mean, I could certainly see a different scenario where they don't have kind of this default or canary concept, and it's just latest version wins.
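A hedged sketch of that comparison function, still under the hypothetical API from earlier: the key format (a semver string with an assumed "-default" suffix once a version is marked default) is an invention for illustration, not how Istio encodes anything.

```go
package main

import (
	"strings"

	"golang.org/x/mod/semver" // Compare expects a leading "v"
)

// versionOutranks implements "the default (non-canary) version should
// lead": a key like "1.12.1-default" beats any non-default key, and
// among equals the newer semver wins, so plain "latest version wins"
// falls out for users without a canary/default concept.
func versionOutranks(myKey, leaderKey string) bool {
	isDefault := func(k string) bool { return strings.HasSuffix(k, "-default") }
	if isDefault(myKey) != isDefault(leaderKey) {
		return isDefault(myKey)
	}
	v := func(k string) string { return "v" + strings.TrimSuffix(k, "-default") }
	return semver.Compare(v(myKey), v(leaderKey)) > 0
}
```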
B
Yeah, I think... so I was chatting a little with Jeremy on the side. I think a picture would probably help me understand, like, the topology in the multi-cluster case. And I mean, I don't mean that to be any kind of value judgment on what you proposed; it sounds like a valid, real use case to me. I just, like... every time it's happened that we've talked about this, I've been terminally distracted. So I think perhaps Jeremy and I will try to draw something up offline, and we can make sure that we're thinking about the same thing. Sounds like a very interesting use case to me, though.
A
Yeah, I'll take a pass at that diagram and share it around; it may be included in the KEP as well, if, in fact, it is the correct diagram. I think, yeah, it would help kind of just show which topology scenarios, which multi-cluster scenarios, this actually addresses, and what the failure modes are.
B
All right, well, thanks for coming and chatting with us. Do we have anything else on the agenda today? I'm sorry, but I haven't had a chance to look yet; it's been a day over here.
B
All right, well, sounds like we can get our time back. Thanks, everybody, for joining; we'll see you in two weeks, since we're doing bi-weekly meetings now. Everybody have a great day. Awesome, thank you. Thanks, John. Thank you, John.