From YouTube: Kubernetes SIG Multicluster 2019 Dec 03
A
One is a continuation of the conversation that we started in the SIG Multicluster session at KubeCon, and another is a discussion of multi-cluster services. So, vis-a-vis the SIG Multicluster session at KubeCon: my biggest takeaway is that there's interest and demand in the community for various use cases to be tackled by SIG Multicluster, and that we should put some energy into figuring out which of those are the best ones to tackle. So I wonder if anybody else has a key takeaway they want to share.
D
I'm just gonna say, I do really like the idea of going use-case driven rather than, hey, here's a solution to the problem. Like, what is the space of problems that we'd like to see solved, rather than trying to figure out, hey, do we fix these in kube, do we create a bunch of CRDs, whatever the heck. I'd much rather see us sort of start from the bottom and say, hey, these are the use cases that I want to go solve, right.
D
So I'll throw a couple of simple ones out. It's like, hey, I just need to have an inventory of all the clusters I care about in the universe. That's an easy one, right; that's the very basic definition of multi-cluster. But then, you know, another use case is, okay, well, I want to be able to see an inventory of all the stuff that's running on all of those clusters.
D
You know, I could go on and on and on, but those are the kinds of, I don't know, use cases that crop up most often, for me at least. I don't know, how do we want to start aggregating what these things are? Do we do virtual sticky notes? Do we want to just go around the room and record stuff?
A
There's an agenda document; I thought we could just take notes on the use cases that people threw out, so I started writing in the middle of you talking, Jake. Could you just repeat the first two, and then we can kind of go around the room and just hear people's thoughts about this, I think, to start. Yeah.
D
So my first two are: I just need to get a list of, you know, all the clusters that I care about and what their API endpoints are. That's like the simplest one. And then I guess the next one that came up for us, in our journey, was: I need a way to go figure out what's running on all of those clusters without having to connect to every single one of those clusters.
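The two use cases above, a cluster inventory plus an aggregate view of what's running on it, are often prototyped as a simple loop over per-cluster API endpoints, the "bash script" approach that comes up later in the discussion. A minimal Python sketch of that baseline, where `list_workloads` is a hypothetical stand-in for a real, authenticated call to each cluster's API server:

```python
def aggregate_workloads(clusters, list_workloads):
    """Build one aggregate view from many clusters.

    clusters:       dict of cluster name -> API endpoint URL
    list_workloads: callable(endpoint) -> list of workload names;
                    a stand-in for a real per-cluster API call
    """
    inventory = {}
    for name, endpoint in clusters.items():
        # One query per cluster; a real tool would also need auth,
        # retries, and a way to tolerate unreachable clusters.
        inventory[name] = sorted(list_workloads(endpoint))
    return inventory

# Example with stubbed per-cluster data (endpoints are made up):
clusters = {"east-1a": "https://east-1a.example",
            "west-1a": "https://west-1a.example"}
stub = {"https://east-1a.example": ["web", "db"],
        "https://west-1a.example": ["web"]}
view = aggregate_workloads(clusters, lambda ep: stub[ep])
```

The pain point in the discussion is exactly this loop: the client has to hold credentials for, and connect to, every cluster, which is what an aggregating service would remove.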
D
I will say, what ended up happening for us is that different people started deploying different things, and everybody had a different view of what they cared about. And so our strategy was: look, you can add an annotation, anything on the planet that you basically want, to get like an aggregator. That's what we added, yeah, because we ended up in a scenario where, like, oh, you'd think, like, well...

You know, there's kind of an infinite number of those things that you can do with a batch script, and I personally think that they're a slightly less interesting category of problem to solve than the ones that you can't do with a batch script, of which there are also many, right. It's worth drawing the distinction between those two. Maybe it's an artificial distinction, but that's sort of been my intuition.

So, I think that's definitely one of the more interesting categories: things that cannot easily be solved with a bash script. I think the other one is active management, sort of operator-flavored things which do stuff between clusters that a Kubernetes cluster cannot reasonably do, or that even a group of independent Kubernetes clusters cannot achieve, things that need some kind of coordination between the clusters. That's a category of, I think, interesting stuff that's difficult to do with a bash script.
E
Yes, I think, I mean, if we want to focus on use cases, I don't think the relationship is the interesting part. I think the relationship is how, like you said, we model the problem. If I'm putting on my end-user hat, I don't want you to talk to me about relationships and sameness. I want you to talk to me about what works and what doesn't work. I don't think you can escape talking about the relationship at some point, but it's not the front-of-mind thing when you say "works."
B
We also want to perform some actions. So when they see all the resources, they don't expect to, you know, SSH to a specific cluster to actually do something, because they already have the aggregated view. And that's where it escalates really fast, because they also want to do actions on them directly. So I think bash scripts are a nice thing to do, but they're very often not good enough for actual use cases.
I
Could I just ask a quick question in that context? So to me it feels like there are two things there. The one is, you know, first of all you read stuff out of a bunch of clusters, and then you want to take action on it, which is what I understood your use case was. And, you know, potentially that action is another batch script that goes over all the clusters and does something, which I think is slightly different from an active, more like reconciliation, kind of operator concept.
B
That's a great question. I think the answer is ultimately both: probably some automation, and probably some manual intervention when it's needed. But more importantly, there's a requirement for role-based access control, sort of, so you don't want all of your users to be able to see all of your resources in all of the clusters. So that's where you also need a more sophisticated RBAC mechanism. So this, I think, this is...
B
...a huge topic. I'm not sure if you want to drill down into all the use cases it can involve, and I'm not sure how many people are actually interested in this specific use case. From our perspective, we care specifically about what Tim mentioned earlier, which is cluster-to-cluster communication: services that live on different clusters and need to communicate with each other. So from our perspective, this is one of the top use cases. So...
F
And when you say "deploying apps" specifically, what are you talking about there? Because, I mean, obviously you could deploy resources into these clusters, and again, going back to the bash script, you can loop through and do that. But in your scenario, are you talking about actually declaring, defining applications, and letting a quote-unquote scheduler make the choice of which cluster, where in the clusters, to deploy the components of the application?
H
So this is where it gets into pony territory, but this is the actual application use case: I want to deploy the application front end, an entire copy of that stack, in whatever number of replicas, to each cluster, and the front ends should connect to their local data shards and not to remote ones in a different cluster, unless the data shards in the local cluster are not responding for some reason. Something like that. That is the end goal, right.
J
Yeah, I'll kind of +1 that. Like, I was talking to a lot of customers at KubeCon, and a lot of the questions are actually pretty simplistic, right. Like, right now I just run CI/CD and it just loops over, like you said, a bunch of cluster endpoints and deploys it, but I want to aggregate the status of that. Okay, you could do that pretty easily with, like, Prometheus alerts or whatever, if anything wasn't ready. But then they want, say, they want the concept...
J
I guess, like Josh is saying, like a DaemonSet-style deployment, but instead of across nodes in the cluster, across clusters in a group. Like, I have a group of clusters designated east-1 and a group of clusters designated west-1: make sure there's exactly one deployment in east-1, I don't care which cluster; or make sure every cluster has an instance. It's just a way of taking, like, DaemonSet, Deployment, replicas, etc., and leveling it out to clusters rather than nodes, in my opinion.
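The "DaemonSet across clusters" idea can be sketched as a placement function over named cluster groups, with the two policies mentioned: exactly one instance per group, or one instance on every cluster. This is only an illustration of the semantics; the policy names and group layout are made up, not any real API:

```python
def place(groups, policy):
    """Choose target clusters for a workload.

    groups: dict of group name (e.g. "east-1") -> list of cluster names
    policy: "one-per-cluster" puts an instance on every cluster;
            "one-per-group" puts exactly one instance somewhere in
            each group ("I don't care which cluster").
    """
    if policy == "one-per-cluster":
        return [c for members in groups.values() for c in members]
    if policy == "one-per-group":
        # Pick the first member as a stand-in for whatever
        # tie-breaking a real scheduler would apply.
        return [members[0] for members in groups.values() if members]
    raise ValueError(f"unknown policy: {policy}")

groups = {"east-1": ["east-1-c1", "east-1-c2"], "west-1": ["west-1-c1"]}
```

A real controller would additionally have to reconcile continuously, re-placing instances as clusters join, leave, or fail, which is what distinguishes this from a one-shot deploy loop.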
C
...that the person who's in charge of deploying the apps shouldn't care about clusters. They should care about attributes, and somebody else worries about what actual clusters there are. And we see that multi-cluster is very closely tied to notions of identity and tenancy: in order to fulfill this vision, the clusters have to support multiple tenants.
F
Yeah, I think the DaemonSet idea is very interesting. I also like the idea that, when you get into multi-cluster, especially if you start scaling it out into a large number of clusters, you lose the ability to deal with any one cluster and you have to deal with groupings of clusters as an entity. And having the ability to identify clusters, or having clusters identify themselves, and then determining what gets deployed based on those attributes, really opens up the ability to scale out your management of deployments across those clusters.
F
I agree with that. The automatic, magical scaling and distribution of services or resources across the clusters, maybe one day, but I think that's too magical. I think we need to have the underpinnings that people understand, that are simple, that allow you to scale with speed, and then from there you can build out more intelligent capabilities on top of that. So what I heard...
J
I want to play devil's advocate a little bit to your point there. I think, you know, we're at the place now with nodes where we're kind of preaching that nodes don't matter, right. Nodes can come up and down, we can autoscale them, they're kind of fungible; we don't care. Like, I'd say, you know, why wouldn't clusters be the same? A group of three clusters is just attributes, and then, you know, customers just say: hey, I want my workloads to be somewhere with these attributes. I want...
I
Even, you know, with sophisticated tools. But I think that was sort of ahead of where most people were four years ago, whatever. I think these days, I heard lots of people at KubeCon talking about running hundreds of clusters, and I can guarantee you they don't remember the names of all their hundreds of clusters, and they want to deal with them the same way that they deal with nodes in a cluster.
E
I've heard the use case of people wanting to migrate storage between clusters. As you treat clusters more like cattle, some of the workloads will want to keep their cattle's name as it goes; I don't know, maybe the analogy doesn't quite work. They want to keep their storage as they move between clusters, for whatever reasons, right. And, like, literally I've heard this as an upgrade strategy: I don't trust Kubernetes upgrades, I want to bring up a new cluster, qualify it, and then move my app. Right.
J
That was a common customer ask at KubeCon: hey, I'm running this DaemonSet-style deployment between three clusters and they're all, you know, active-passive or whatever, because if one fails I'm just using the GSLB. Oh, that's another one: and I may need my data to be in sync, right. I mean, there are architecture changes you can make for that, right, like CQRS, run an event system to make sure, you know, clusters pick things up or not. But definitely some kind of discoverability, or data parity, between clusters, for sure.
E
I'll add to that, I agree. I'll add to that the obvious extension, which is things like network policy, but also the ability to do cluster-to-cluster API server auth: so a client in cluster A being able to talk to the API server in cluster B. For example, if you have multi-cluster-style controllers that are going to master-elect or something like that, manual sharing of cross-cluster secrets is a giant pain in the ass.
D
I mean, I think the one that comes up that's, I guess, simpler than all of these is just, like: hey, you know, I have 10 clusters, and they don't all have the same auth provider created for them, so how do I give Bob an identity, or, like, my CI tool's identity, across those clusters, and manage RBAC in a uniform way? Those sorts of things are another problem that frequently comes up, especially as, like, look...
D
...impersonation: I'm gonna do, like, all my own identity management sort of independent of these individual clusters, and I'll sort of create my own identity provider that I'm pushing down across those with impersonation. Which, like, can work, but nobody seems particularly excited about that solution.
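For context, Kubernetes impersonation works by attaching extra HTTP headers to each API request; the authenticated caller needs RBAC permission for the `impersonate` verb, and the API server then evaluates the request as the impersonated subject. A small sketch of the header-building step such an external identity layer would perform (the user and group values here are illustrative):

```python
def impersonation_headers(user, groups=()):
    """Build the headers a proxy attaches so the downstream API server
    treats `user` as the request's subject.  Impersonate-Group may be
    repeated, so headers are returned as (name, value) pairs.  The
    proxy's own credentials still authenticate the request itself."""
    pairs = [("Impersonate-User", user)]
    pairs += [("Impersonate-Group", g) for g in groups]
    return pairs

headers = impersonation_headers("bob@example.com", ["dev-team"])
```

The per-cluster requirement is only that each cluster trusts the proxy's identity and grants it the `impersonate` verb, which is why this pattern shows up in the multi-cluster identity discussions below.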
E
I think the question is a good one. As a user, I certainly would want to be able to have the same expression of RBAC across two different providers. Right now, within Google I can use a Google Group, within Amazon I use something different, within Azure I use something different, and that's sort of awful.
L
Rancher itself has various authentication providers sort of built into it, where you get sort of Rancher-specific users and mappings therefrom, and we actually proxy requests into the Kubernetes clusters and then use impersonation to basically map the subject. Within the authz side, the actual cluster role bindings that we create are against the user objects that are Rancher-specific, and then we do impersonation to get the user to show up as that user within the actual Kubernetes cluster.
D
Okay. I mean, Chris, like, I guess it would be interesting to hear some feedback on the Rancher approach. Like, are people happy with that? Have you run into any gotchas? Like, we've done a lot of brainstorming about how we would do this, and the impersonation solution, sort of like your model of having another identity management solution that sits on top of all of your clusters, is kind of the only good solution we've ever seen. I don't know, like, what's the feedback been on that, since you have some?
L
So, just from, you know... I mean, I deal with most of our customers, so we get a lot of good feedback from it. But one of the things that we actually built into Rancher, into our sort of solution, was... because originally, the only option you had to use the Rancher authentication to get into your cluster was to proxy through Rancher, which then became a bottleneck from a scale perspective, or even just from a distribution perspective.
L
Basically, we synchronize, essentially, your tokens into that cluster, so that the local auth webhook server can do the authentication. You're dialing the API server directly using the Rancher-specific bearer token, but it's still getting the same sort of RBAC sync, because it comes from Rancher at the end of the day. You just don't have Rancher sitting in the middle.
D
Gotcha. So you got rid of having to have, like, a single external impersonation proxy for login, right?
L
Right. And actually, the other side of this as well is: if you use Rancher in the sort of conventional model, you actually don't need to expose your Kubernetes API servers to the internet, because we do what is essentially reverse proxying for your API servers, so that you only have to expose Rancher, and that's the authentication proxy. But, sort of, there's two options.
G
...if that service exists in the local cluster, then it's going to make sure that the connectivity happens to the service on that cluster, if that's what the administrator wanted; or it's going to connect to the same service that lives on another cluster, because maybe that's what the administrator wanted, or because the local service is down, in that case. So...
K
Also, I'm adding to that: what about services that span, like, clusters A and B, or clusters B and C? Yeah, like, it seems obvious that in this multi-cluster world there's still definitely gonna be a case where I want to talk to the service in my local cluster, but then also the case where I want to talk to an instance of the service and I don't care where it is.
E
So, you know, as a starting point: one of the things I think has been successful about Kubernetes is how easy it makes services in the simple case, right. You just access a special DNS name and you get a VIP, and you don't have to worry about any of the vagaries of DNS refreshes and time-to-live and those sorts of things. Could we extend a similar model to multiple clusters? And what we started to talk about...
E
...unfortunately we ran out of time at the contributor summit, but it was: is the new EndpointSlice primitive a good starting point for that? I feel like it's got most of what we need, and if it needs more, then maybe that's the hook that we want, right. I keep asking this question of: what can I push down into Kubernetes to make multi-cluster easier? I feel like that might be one of them.
L
The discussion we started off with didn't necessarily assume sort of uniformity of the back end of your service. We were discussing around using, potentially, something like topology keys and so forth to sort of weight service endpoints, so that you don't end up accidentally, unnecessarily, sending traffic to a different cluster.
L
Tied to that, something Josh actually brought up today was the concept of a front-end service hitting a back-end service, and if the back-end service is down, trying another back-end service. But that logic, with what we were sort of talking about here, would actually end up needing to be done at the application level rather than at this sort of controller level, because all this controller is going to do is look for healthy endpoints and then populate them into the EndpointSlice...
L
...in the other clusters, and it doesn't necessarily have that information. I mean, there's a little bit of lag; I mean, I guess it would think that the endpoints are healthy. But that was sort of, I think, what we were discussing. So yeah, I think...
E
The important point there is, like, the way services are implemented in Kubernetes today is pretty dumb, right. There isn't a concept of "the endpoint I'm trying to reach is down, let me try another one." It's very transparent, and so you would rely on, like you said, the readiness checks to remove it from the set. Though I think, if we do this right, we leave room in the implementation for things like smarter service meshes to do better than that; but the baseline could be really simple. Yeah.
L
And then the other big thing that I wanted to... I mean, I ran into this issue, and it's a pretty rudimentary issue. But let's say you have kube-dns running and doing your service discovery, but you have worker nodes in different, you know, a geo-distribution, geo-distributed worker nodes, and you get very all-over-the-map responses from kube-dns, because you've got replicas running in different... you know, with just different distances to where the source of the DNS query is coming from. I mean, that's...
E
This is exactly what the service topology feature is for, which went into 1.17, or is going into 1.17 or 1.18. So what service topology lets you do is say, for a given service: I prefer that you always choose back ends in the same zone, or the same rack, or the same...
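The selection rule can be sketched as a walk down the ordered topology keys: use the first key on which some endpoint matches the client, with `"*"` meaning "anything". The label keys below follow the standard topology labels, but the function itself is an illustration of the semantics, not the kube-proxy implementation:

```python
def select_endpoints(client_labels, endpoints, topology_keys):
    """Filter endpoints per service-topology semantics.

    client_labels: the client node's topology labels
    endpoints:     list of (name, labels) pairs
    topology_keys: ordered preference list; "*" matches any endpoint
    """
    for key in topology_keys:
        if key == "*":
            return [name for name, _ in endpoints]
        matches = [name for name, labels in endpoints
                   if key in client_labels
                   and labels.get(key) == client_labels[key]]
        if matches:          # first key with any match wins
            return matches
    return []

eps = [("ep-a", {"topology.kubernetes.io/zone": "us-east-1a"}),
       ("ep-b", {"topology.kubernetes.io/zone": "us-east-1b"})]
client = {"topology.kubernetes.io/zone": "us-east-1b"}
same_zone = select_endpoints(client, eps, ["topology.kubernetes.io/zone", "*"])
```

Note that without the trailing `"*"`, a client with no same-zone back end would get nothing at all, which is the "don't leave the zone" behavior rather than a fallback.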
E
So we could, in theory, take that same idea and extend it to multiple clusters. So if you imagine a controller that runs across all of your different clusters, so you've got clusters A, B, and C: it runs across A, B, and C, gathers all the endpoints for a given service across all three clusters, merges those lists, and then publishes the merged, union list. Is topology sufficient for making that a multi-cluster service at that point?
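The controller described here, gather per-cluster endpoints and publish the union, can be sketched as a merge that tags each endpoint with its source cluster, so that a topology-aware consumer can still prefer local ones. The cluster names and the tagging scheme are illustrative:

```python
def merge_endpoints(per_cluster):
    """Union the endpoint lists each cluster publishes for one service.

    per_cluster: dict of cluster name -> list of "ip:port" strings
    Returns a deduplicated list of {"endpoint", "cluster"} records.
    """
    merged, seen = [], set()
    # Sort for a deterministic output order across runs.
    for cluster, endpoints in sorted(per_cluster.items()):
        for ep in endpoints:
            if ep not in seen:       # drop duplicates across clusters
                seen.add(ep)
                merged.append({"endpoint": ep, "cluster": cluster})
    return merged

union = merge_endpoints({"b": ["10.1.0.5:80"],
                         "a": ["10.0.0.4:80", "10.1.0.5:80"]})
```

The hard parts a real controller faces are exactly what the transcript raises next: staleness of health information from remote clusters, and whether zone/region topology on each endpoint is expressive enough.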
E
Most of the cloud providers today already have a controller that annotates nodes with zone and region; those are sort of the two standard topologies. And both of those are automatically captured by EndpointSlice on every endpoint. That needs to become more extensible, and there's already an issue open against EndpointSlice for that. But let's just assume for now that zone and region are sufficient.
B
We were playing with a little bit different concept; it was before we were aware of the topology thing. So what we were doing was, we just used the traditional DNS method. Let's say you have the same service deployed on A, B, and C: you gather the VIPs for this service from A, B, and C, and then, basically, in DNS there's a way to give a weight for each VIP, so, per host.
B
For every DNS query you have a list of VIPs that can, you know, serve this DNS name, and there's something that is being added to the DNS protocol where you can give a weight to each of the VIPs. So if we do something that calculates the weight of the VIP by the zone, or the... I mean, by the topology that is defined for the service, I think that could be just an alternative solution.
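The weighting idea can be sketched as a function from topology to per-VIP weights: VIPs in the client's zone get a high weight so resolvers prefer them, and remote ones remain a low-weight fallback. The weight values and zone names are made up; a real deployment would map these onto whatever weighted-answer mechanism its DNS server supports:

```python
def vip_weights(client_zone, vips, same_zone=100, other_zone=1):
    """Assign a DNS-style weight to each service VIP by proximity.

    vips: list of (address, zone) pairs for the same service,
          typically one VIP per cluster.
    """
    return {addr: (same_zone if zone == client_zone else other_zone)
            for addr, zone in vips}

weights = vip_weights("us-east-1a",
                      [("10.0.0.10", "us-east-1a"),
                       ("10.1.0.10", "us-west-1a")])
```

As the next speaker points out, the weakness of the DNS approach is caching: once a client has resolved and pinned one VIP, the weights can no longer steer it away from a failed cluster.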
B
So it doesn't matter how you get to the solution, but I think the point is that a service in a specific cluster has an IP that is unique to the service, and when you do a local DNS query in your cluster, even if it's for a remote service... I mean, despite the fact that it's a remote service, you have a list of VIPs that you can prioritize. Yeah.
G
Yeah. Before the conversation that we had at KubeCon, I mean, that's what we were working on, I mean, to explore this problem. But I think that there is one problem with that solution, and it's that once you have resolved that IP address, and maybe you are getting, like, an IP address that is going to let you connect to that service in one of the clusters, then if that cluster goes down, or if the link goes down or whatever, you will keep going to that IP address.
A
If anybody wants to experiment with using EndpointSlice in this way and see how far they can get, I think that would be really useful input to the next conversation. And if anybody has a demo that they would like to show, please feel free to reach out to me, or just go ahead and stick it on the agenda. But we're at time for today, so thanks a lot to everybody for coming today. I think this is probably the best-attended SIG Multicluster meeting yet.
E
I think it's... I like the idea, but I want to see if it actually works, and if it doesn't, now is the time to be changing APIs like EndpointSlice, before... yeah, yeah, awesome. But, like, honestly, if we think that it may need changes but we're not sure, putting a hold on it, I'm okay with that too. But let's figure that out sooner rather than later, and maybe it needs nothing. That would be ideal, right. That...