From YouTube: Kubernetes SIG Multicluster 2019 Dec 17
D: There we go. Yes, so this is the operator on OperatorHub and, as you can see, this is all the information that needs to be provided. It's a bit much to ask of new users, or even experienced users, and so with subctl it's much simpler. The general idea is that we have these two clusters that we want to connect, west and east, and then for Submariner we use another cluster, or you can do this in one of the connected clusters.
D: That's it, that was nice and easy. So that just sets up a number of CRDs on the broker, and it stores some information in a file called broker-info.subm. A comment from the last call made me laugh a little compared to this: basically, this is all the information that the user would have to provide manually. It's an opaque file, and if you decode it, it looks like that, so there's a bunch of stuff that we probably shouldn't really be shipping around.
D: But that's basically it. So behind the scenes, subctl connects to the various clusters that we're joining, finds out what their network information is (those are the CIDRs here), and then constructs the CRs that we need to get the operator up and running. The operator then goes off and does its thing and starts the bridge components of Submariner, and that's it.
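For reference, here is a rough sketch of that two-step workflow, assuming subctl's current command syntax ("deploy-broker" on the broker cluster, then "join" with the generated broker-info.subm on each connected cluster), which may differ from what was shown in the demo; the kubeconfig paths and cluster IDs are illustrative only:

```go
// Hedged sketch only: wraps hypothetical subctl invocations; file names and
// cluster IDs are assumptions, not taken from the demo.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("subctl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Set up the broker; this creates the CRDs and writes broker-info.subm.
	run("deploy-broker", "--kubeconfig", "broker.kubeconfig")
	// Join each cluster to the broker using the generated file.
	run("join", "--kubeconfig", "west.kubeconfig", "broker-info.subm", "--clusterid", "west")
	run("join", "--kubeconfig", "east.kubeconfig", "broker-info.subm", "--clusterid", "east")
}
```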
F: That will do the same thing, and there you only have the cluster nodes. Some of the details were impossible to discover, and I wonder if at some point we can make some sort of standard interface where those details will be easy to discover. In the case of OpenShift, for example, we have the cluster networks, so we have those details there, but I guess there's nothing standard yet.
A: So we had options for organization type, which include end user, provider, host, research, consultant, and organizations that do everything, and we've got a fairly decent distribution there. In particular, almost half of our respondents are end users, which is nice to have. Looking at this from the perspective of utility, most people are using Kubernetes in production, which again is what we would like to see in terms of who we want to reach. Hold on just a second.
A: Sorry, I'm just really congested this morning. We allowed respondents to respond on their own behalf, which is what most of them did, or to respond on behalf of their customers, and in that case they could either sort of say, "hey, in general across our customer base, this is what we're seeing," versus "hey, I'm a sales rep and I'm going to fill this out on behalf of my one big customer."
A: We did not ask how many customers they were responding on behalf of. If I go back and look at what organizations those are, and they didn't ask to be anonymous, then I can probably guess how many customers we're talking about, or what industry sector they're in. But yeah.
A: Okay, I have some brief breakdowns on cluster size, trying to get an idea of, well, not cluster size, actually. This should be, whoops, sorry, we should label that; it should be number of clusters, I just realized, not cluster size. Now I'm going to need a new set of slides, because I saw another mistake earlier on too: there's actually another row below this that says vendor, as in software vendor, and it's getting cut off. There weren't very many of those, only three, anyway.
A: End users actually have the largest sort of, well, we have a definite hockey-stick distribution in terms of how many clusters people are using, where most of the people responding are running somewhere in the low double digits of clusters, except that there's a few who are running hundreds or thousands, and, like I said, we have two.
A: We have one end user running 5,000 clusters and one "everything" organization, and I can tell you who that is in the everything category: that's IBM, who is running 19,000 clusters, and those kind of distort averages and that sort of thing. But you can see that by the huge difference between the median and the average. I also actually just ran a 90th percentile across our data set, and particularly if we stop paying attention to IBM, then we're looking at...
A: We're really looking at a hundred or fewer clusters for ninety percent of our users, so somewhere between two and 100 clusters for ninety percent of our users. So if we were looking at this from the perspective of how many clusters a solution we're looking at should be designed to support, at least at this point in time, for our sort of general audience we're looking at 100 clusters or less, and then we get into increasingly special cases.
A: The number one answer, and it continues to be the number one answer, is that two thirds of all respondents were geo-distributing their clusters, so I think anything the SIG works on needs to address geo-distribution as a use case, because it's the one use case that everyone has. Then we have another group of use cases that is common to more than half of respondents, but only slightly more than half, and that's application high availability, and hybrid cloud, which we defined as running in multiple hostings.
A: So it could be either several different cloud providers, or a public cloud provider and on-premise, but running in more than one cloud; and per-team distribution of clusters. One of the things about per-team is we had a lot of write-ins that, from my reading, amounted to per-team distribution, so I think I didn't word that option very well.
A: People didn't quite understand that it applied to them, because people were saying, you know, that they had a cluster per security context, which, from my perspective, is kind of per team, or, no, per production context, sorry. So, like, several clusters for dev, a couple for staging, one for prod, et cetera.
A: Upgrades, so some people want multiple clusters for upgrades. Then we get into a group of answers that address slightly less than half of our respondents, so that is per-customer clusters; scaling, in other words launching multiple clusters in order to overcome the limitations of how many nodes you can reasonably have in one cluster; and failover between clusters, which I personally expected to be a much more popular use case than it actually was.
G: Yeah, sure. Noting that all of this stuff, I think certainly all of the top half of that slide, is definitely what we anticipated, and I don't think there's any surprises there, and most of the thinking that's been going on up to now has aimed to address the vast majority, if not all, of the red, blue and...
A: The US Department of Energy is very fond of requiring that different energy providers be on separate hardware, and so if anybody was serving the DOE they might run into the issue of needing to run separate clusters because of DOE compliance, even though those clusters might only be six nodes. And security: I think to a certain extent security is a use case that really overlaps with per-team or per-customer.
A: And then, largely because of Bob Killen forwarding around our survey, we got a number of respondents from research organizations, and those are mostly interesting because their responses are very different from everyone else's. For example, I found the research organizations actually were fond of Federation v1, because somehow it worked better for their research use cases.
G: Yeah, I had a quick question. The one thing that superficially looks slightly surprising is the regulation item being relatively small. I was just curious whether, given the wording of the questions, the use case of "I need my processing and data housed in the EU versus the U.S. versus wherever" is wrapped up in the geo-distribution item.
A: And then geo-distributing, and then regulatory compliance is everything, yes. So yeah, if we run this next year, which I think we probably should, then I think we should take all the write-ins, and we're about to see actually a whole question of write-ins, that I think we could have as a more categorized answer in the future.
A: So now, we asked this question of what are your primary blockers to operating in a multi-cluster environment, and this was a free-text thing. So I went through the text and tried to pick out thematic reasons that people were supplying in their own free text. Do understand that this is me looking at everybody's hand-typed responses and trying to come up with what are the main blockers they're running into, as opposed to them checking checkboxes. I don't know if that makes it more or less valid. Anyway.
A: Which was expressed just as "management" or "day-two operations" or a bunch of other things, so the ability to manage multiple clusters; and also configuration, where people are citing the lack of tools to either configure all of their clusters according to a template, or make sure that services on those clusters are all configured according to the same templates, or the exact opposite.
A: Wanting to have sort of a unified service or utility application substrate: people cited things like, hey, if they're using HashiCorp Vault for, you know, secure credentials, etc., they want it to work across all of their multiple clusters. The other popular problems: observability, top-level observability of all of their stuff across all of the clusters rather than each individual cluster's information; and upgrades were mentioned as a frequent problem, saying, you know, hey, doing a rolling upgrade across many clusters takes pretty much forever.
A: Current multi-cluster tools couldn't really erase the differences between different clouds in terms of...
C: How much of that is, like, differences between cluster capabilities and cluster configurations, and, you know, different auth providers? I know there's a different option for auth, but I could see that being one class of things, and then another being availability of cloud provider services, like "my thing only works with managed SQL in X cloud, but not Y cloud's managed SQL thing," yeah.
A: Generally. I'm actually currently reviewing talk proposals for the Southern California Linux Expo, and we're going to accept a proposal that is on exactly this topic from a large end user who is trying to deal with managing Kubernetes across multiple clouds and having a bad time of it. It is a common thing.
A: We can talk about this later on, or maybe even on Slack. Okay, now, one of the things that we wanted to get out of the survey was to get more of a sense of who's using our primary subprojects and what their state of use is, and we get a different picture for each. So this is sort of the picture from Cluster Registry, in terms of where people are with using Cluster Registry: seven respondents...
A: It kind of reinforces my expectation, which is that people have heard of Cluster Registry, but they're not really using it. And then we had a bunch of responses, and I would say, in terms of the comments, the comments even more reinforced the belief that even some of the people who said they were planning on using it, for example, hadn't actually evaluated it, if you follow me, based on their text responses.
A: This is a kind of a selection of some of the responses on the right. The general feeling I get is people liked the idea, but not very many people are familiar with the reality at this point. But on the upside, if we wanted to really change the API, people are not wedded to the existing project.
A: We have more using it and more planning to. The text comments that I got, both from people who are using it and people who are specifically not using it, gave me the impression that a lot more people had actually directly evaluated Federation v2. So one of the big pieces of feedback that we got there is one that you've heard before, which is that people had issues with the push/pull decision; we got that comment at least four times.
A: And then the last thing that we sort of asked about here, with an optional field for comments, is what tools, what non-SIG-MC tools, people were using and why. For the why, we supplied both a blank space for them to write in their own reasons and something like five potential reasons that people could check off. So for the other ones...
A: First of all, lots of people mentioned internal tools, and I would be willing to bet even more people were actually using internal tools than mentioned them. Other ones that came up: Kapitan, Commodore, Rancher's stuff, something called FogAtlas, which I've never heard of before, same with Entropy Controller, I don't know what that is. A couple of people mentioned SLATE CI, a few people mentioned something called Razee, and, surprisingly enough, only one respondent...
C: I think we should probably end this part of the meeting here, just for time purposes. Thanks a lot for the update there, Josh, it was really interesting. Before we move on to the final demo, are the Submariner folks ready to demonstrate connectivity? Yeah?
D: So, no cheating, I promise all the containers are running as they were at the start of the call. Here I'm going to do a quick connectivity test, running curl from cluster two and connecting to cluster three, which is running nginx, with a ten-second timeout, and it answered straight away, so that shows connectivity between the two clusters.
B: Hey, so yeah, last time we met we talked about how, with some of the new work coming out in Kubernetes 1.16 and 1.17 with EndpointSlice and then service topology, we actually have a lot of the building blocks, theoretically, to do multi-cluster service deployments, and I mentioned that I would throw together a demo of that working so that we can make it not theoretical and actually a real piece to work with. So I'm going to share my screen right now.
B: So I've got two clusters, each with a zone A and a zone B, and I've got two services that I'm going to be deploying: the demo service, which is basically just a simple HTTP server that returns the name of the pod, the cluster it's running in, and the zone with each request; and a pinger workload that just pings that service every second and logs the response.
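A minimal sketch of what such a demo service could look like; the environment variable names (POD_NAME, CLUSTER, ZONE) and the port are assumptions for illustration, not taken from the actual demo:

```go
// Hypothetical demo service: reports which pod, cluster, and zone served
// each request, so the pinger's logs show how traffic is being spread.
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	pod := os.Getenv("POD_NAME")
	cluster := os.Getenv("CLUSTER")
	zone := os.Getenv("ZONE")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "pod=%s cluster=%s zone=%s\n", pod, cluster, zone)
	})

	if err := http.ListenAndServe(":8080", nil); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```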
B: So I started by printing the logs from the pinger in cluster 1, zone A, and you can see that it's spreading traffic between all of the backends in cluster 1, zone A. I will show the same thing from cluster 1, zone B, and that pinger is also just talking to its local zone, but spreading traffic around. Now let's look at cluster 2, zone A.
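For context, a service that prefers same-zone endpoints like this could be expressed with the alpha service topology feature mentioned earlier (spec.topologyKeys, as it existed around Kubernetes 1.17 with the ServiceTopology feature gate; it has since been removed from newer releases). This is a hedged sketch, not the actual demo configuration, and all names are illustrative:

```go
// Sketch against client-go/api of the Kubernetes 1.17 era, when
// ServiceSpec still carried the alpha TopologyKeys field.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func demoService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
			// Prefer endpoints in the caller's own zone, then fall back to
			// anything; without topologyKeys, traffic is spread across all
			// endpoints regardless of zone, as seen for cluster 2 below.
			TopologyKeys: []string{"topology.kubernetes.io/zone", "*"},
		},
	}
}

func main() {
	fmt.Printf("topologyKeys: %v\n", demoService().Spec.TopologyKeys)
}
```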
B: We didn't set topology here, so we expect that it'll be talking across both zones, and it is, but still within cluster 2, and then the same thing here in zone B: it looks the same as zone A for cluster 2. So this is just basically topology basics here, I haven't done anything special yet, but I've got EndpointSlices in each cluster, one in each, and now I want to copy between the two.
B: So you can see here, one EndpointSlice in each cluster, and now I want to copy them; I want to pull them down and copy them to the other cluster. Basically, what I'm doing here is cleaning up the metadata. The most important thing is that I'm leaving out the managed-by label, so the EndpointSlice controller doesn't try to manage these endpoints.
B: So these are just manually managed endpoints. I've got one EndpointSlice pulled down from each cluster, now I'm going to apply the cluster two EndpointSlice to cluster one, and vice versa, and now we'll see the endpoints there. I've now got two EndpointSlices in each of cluster one and cluster two, and the older one is still managed by the EndpointSlice controller.
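A rough sketch of that copy step, assuming client-go against the discovery.k8s.io/v1beta1 EndpointSlice API of that era (roughly client-go v0.18 through v0.21); the kubeconfig paths, namespace, service name, and name suffix are illustrative assumptions, not taken from the demo:

```go
// Pull the EndpointSlices for a service from one cluster, strip the
// metadata (in particular the endpointslice.kubernetes.io/managed-by
// label), and create them in the other cluster as manually managed slices.
package main

import (
	"context"
	"log"

	discoveryv1beta1 "k8s.io/api/discovery/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func clientFor(kubeconfig string) *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	return kubernetes.NewForConfigOrDie(cfg)
}

func copySlices(ctx context.Context, from, to *kubernetes.Clientset, namespace, service, suffix string) {
	slices, err := from.DiscoveryV1beta1().EndpointSlices(namespace).List(ctx, metav1.ListOptions{
		LabelSelector: "kubernetes.io/service-name=" + service,
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range slices.Items {
		out := discoveryv1beta1.EndpointSlice{
			ObjectMeta: metav1.ObjectMeta{
				// New name in the destination cluster; keep the association
				// with the service, but leave out the managed-by label so the
				// local EndpointSlice controller ignores these endpoints.
				Name:      s.Name + "-" + suffix,
				Namespace: namespace,
				Labels:    map[string]string{"kubernetes.io/service-name": service},
			},
			AddressType: s.AddressType,
			Ports:       s.Ports,
			Endpoints:   s.Endpoints,
		}
		if _, err := to.DiscoveryV1beta1().EndpointSlices(namespace).Create(ctx, &out, metav1.CreateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
}

func main() {
	ctx := context.Background()
	c1 := clientFor("cluster1.kubeconfig")
	c2 := clientFor("cluster2.kubeconfig")
	// Copy in both directions, as in the demo.
	copySlices(ctx, c1, c2, "default", "demo", "from-cluster1")
	copySlices(ctx, c2, c1, "default", "demo", "from-cluster2")
}
```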
B: The new one is a manually managed one, and so now, if I look at the logs from the cluster one, zone A pinger service, I can see that it's actually talking across both clusters, but still in zone A, so it's respecting the topology. And now, if we look at zone B, we expect to see the same thing, yeah, and you can see that it's now talking just in zone B, but across both clusters, and now for the cluster two pingers.
B: For those services that don't have topology, I expect it to be spread across both zones and both clusters, and it is, and we see the same thing, hopefully, for cluster 2, zone B. So that's the demo, basically just showing that we have the building blocks for actually spreading work across clusters right now. This just uses a regular service today, but with additional endpoints, and it really highlights the flexibility of EndpointSlice and the things we can do with it, with the mix of managed and unmanaged slices.
B: It seems like it's a little confusing, when it now doesn't mean cluster.local, or the local cluster, right. And then there's also this idea that, you know, I've got the demo in both clusters, so this also is kind of making the assumption that a service with a given name in a given namespace is the same service across clusters, and that seems like something that we probably want to put some thought into, and maybe look at standardizing as a best practice, before something like this would make a lot of sense.
C: It's sort of the KubeFed v2 behavior. I mean, I think that what you said, Tim, describes it, but it's not an assumption based on the name. I think what you're referring to is the DNS names, and those come from the fact that, in KubeFed, you're spreading one resource basically across multiple clusters, so it has the same name, like, a priori.
H: So if we're all comfortable with that assumption, it might be worthwhile to produce a position statement, right, like a semi-formal doc that we check in somewhere, that says we're going to assume this as sort of the underpinning of a bunch of multi-cluster capabilities, and that way at least, if people ask about it, we can say this is one of our fundamental, foundational assumptions.
G: Makes sense, yes. Just to comment on that: there were two kinds of schools of thought. So the first question is, does that apply only to namespaces, or does it also apply to things inside those namespaces? So if I have two services with the same name in a namespace in two clusters, are they implicitly the same, or can they actually be different?
G: So the two approaches taken initially were: there was just a pragmatic approach of using consistent names across clusters to mean the same thing within the context of federated stuff, and then there was an alternative approach, which I don't think was ever implemented, and which we thought was strictly superior, which was to use labels, and have labels as a way of defining sameness across clusters. So you could pick sort of arbitrary sets of labels.
H: Pods and services, and pods and deployments, have an implicit n-ary relationship, whereas I think here we're trying to set up a single one-to-one relationship. It's funny that I'm arguing this, because when I first learned Go I was a big opponent of convention over configuration, but I've really come around. I think this is a case where, conventionally, the name being the same means it's the same thing. I think the interesting question is: is it automatic, or is it opt-in or opt-out at all, and at what granularity, right?
C: I'm trying to ask myself what the best format to record these types of assumptions is. I personally think that the convention over configuration is going to be very important, especially at high-scale numbers, but I wonder, Tim, let me pick on you since you suggested it: what do you think the output that would describe these decisions, or assumptions, would be called? Is it first principles, I think?
H: Well, so first of all, I don't know where it would live; we don't have a website dedicated to SIGs, right? So maybe we want to actually cross that bridge, I'm not sure, but somewhere, I would say, there's a doc that says sort of the foundational principles for multi-cluster components or features, or whatever the right word is.
H: Something that we share broadly: everybody writes blog posts about why we think this is okay, and if we all agree on it, then it becomes just a thing that every multi-cluster component or project or subproject assumes, and then it becomes the vernacular, whereas today we're having a conversation about it because we've never written it down, right? So I think we would probably do well with a small number of clearly enumerated base principles. "Principles" is...
G: So, first of all, thanks, super great demo, very useful. I think there is just one major, principles kind of thing which is worth bearing in mind: implied in this demo is that there is some mechanism for keeping essentially all of the clusters aware of all of the EndpointSlices in all of the other clusters, yes?
G: I wanted to comment on that, yeah. If you sort of extrapolate: currently you've got two zones and two clusters, or approximately that, but if you look at the survey results, a more realistic kind of scenario is on the order of a hundred clusters, and probably more than, you know, the two endpoints or whatever it is in each cluster.
G: You quickly get into a situation where, first of all, that propagation across all the clusters is sort of an N-squared problem, or N-cubed possibly, and secondly, if something goes wrong with that: you can imagine you've got this agent that's now reconciling everything, deleting the EndpointSlices from clusters and replacing them with new ones, etc. If that thing goes wrong, it probably goes wrong across all the clusters and globally brings this whole thing down.
H: I agree, we need to think about this. I called out the same scalability issues, but I think there's real value in having the sort of endpoint control plane be local to each cluster, from a failure-modes point of view. It means that every component, Istio and whatever else we're building, only has to look at its own cluster, right? Like today, you have to configure each Istio Pilot to talk to each other cluster, which is a disaster. It's just a mess.
C: Also, I think there's an assumption that the harvesting of the endpoint data would work a particular way. There are probably other ways, where each cluster can beam up to a hub type thing, "these are my endpoints from my cluster," and pull down the endpoints from the other clusters, and, you know, if that hub is down, fail over to a second hub or a third, yeah.
H: I think that's a great point. I think Jeremy was intentionally vague with how the endpoints end up in other clusters. You could imagine a fully peer-to-peer thing, you could imagine a centralized one, maybe a master elected across clusters. I could imagine different models for how to get those endpoints.
H: That's like a parallel to Service, that's like a multi-cluster service or an exported service or something. And on the scalability side, we could leave the door open to things like lazy loading: the first time you do a DNS lookup or a VIP lookup or something, then it would load the endpoints. I think there are lots of opportunities for optimization, but fundamentally kube-proxy has a scalability problem which exists regardless of multi-cluster; this will just make it worse, so we need to address that, yeah.
C: Well, thank you, everybody, for coming. I believe our next meeting, if we did the two-week thing, would be the last week of December, but I don't think that's a good time to meet, so maybe we can meet the second week of January. All right, okay, thanks everybody, have a happy new year and holiday season, take care. Thanks.