From YouTube: Kubernetes SIG Multicluster 2020 August 25
B: All right, maybe we can just get started. I think, looking at the agenda today, Giles, you're going to give us a demo of your implementation.
D: Hold on, I think I'm going to have to enable sharing here. Or maybe... actually, Paul just joined. Paul, are you currently logged in as host, to enable sharing?
B: I am, and I'm just about to do my best.
D: Sorry about this, I'm just trying to remember how to use Zoom.
D: Oh, this is great: it wants preferences set to share my screen. I thought it already had that. I'm going to have to quit Zoom and come back in, guys. I'm really sorry, give me one second.
G: No problem at all. We have all been there.
D: I hope this time it'll work. Sorry guys, I could have sworn I enabled this the other day. This is the downside of working for Cisco and tending to use Webex for most things. So, basically, to give you a very brief overview of the history of this: we were already looking at multi-cluster networking before we read the KEP, and so what we did was take the approach we were already adopting and map it onto the KEP APIs.
D: That is, KEP 1645. What I'd call this is multi-cluster services leveraging an existing toolset, and I'll come to how we do that. So: what is the problem statement, here's our approach, and then I'm going to run through very quickly how we do regular Kubernetes services, how we do headless services, and how we handle non-Kubernetes stuff. This slide is just the KEP 1645 problem statement.
D: It's about being able to produce and consume services across multiple clusters, and to have some kind of policy for those. Ultimately, of course, we want to also integrate with bare metal, VMs, serverless, etc.
D: So that's just a picture of what we're trying to achieve. But then, okay, if I come to how it works: the key concept we came up with was this idea of a proxy Kubernetes service. That's to say, if your source cluster wants to access a service in a remote cluster, what you do is create a service instance in your source cluster, and that maps onto a set of edge pods. In our initial implementation those edge pods are running FD.io VPP, but of course that's not required.
D: That's just how we got it up and running; they could be anything. The key thing is that those edge pods really act as a deployment, and effectively they use port numbers to identify services. So we create a proxy service. It looks like a regular service in that cluster, but what it does is map onto the set of edge pods on a port number that tells the edge pods which remote service it is. Then, for a headless service, it's a little bit different.
D: We create a proxy service for each endpoint, so for each endpoint we add, we need a new proxy service in the source cluster. Equally, if you want to access stuff that's outside Kubernetes, again we just create a proxy service, and that works. And of course we can reach into clusters from outside using the same kind of approach.
D: So we looked at it, and, correct me if I'm wrong, from my reading of KEP 1645 it seemed to be saying that inasmuch as you have an EndpointSlice for each remote cluster, you are relying on the fact that the CNI address space doesn't overlap between those clusters, because you effectively have a sort of flat network to reach those endpoints.
D: What we wanted was an approach where we could just assume that, out of the box, the clusters might have overlapping addresses. They might all have exactly the same CNI addresses and the same cluster IPs, but we still wanted it to work. We also wanted it to work regardless of which CNI you were using, and without any changes to kube-proxy. I mean, there are probably improvements you could make if you allow changes to kube-proxy, but we wanted to avoid that.
D: What we really wanted to avoid, and this really just comes down to scalability, was any requirement to track the remote cluster's endpoints. We wanted an approach where each cluster is responsible for looking after its own resources, and where a remote cluster that's accessing a service in another cluster just needs to know the cluster IP for that service, as shared between the clusters. That's all it needs to know; it doesn't have to actually track what the endpoints are for that cluster.
D: In the case of a headless service, it just needs to know how many replicas there are; it doesn't need to know the current IP address of each replica. So it's much more of an intent-based approach: there are five replicas, so we'll create five proxy services, but we don't care what IP addresses those replicas end up on. We don't need to know; that's a concern of the remote cluster.
D: It has a nice little security property as well, I guess, which is that you can only reach services that are being exported. All of this stuff I'll show with the proxy services is basically created based on having ServiceExports and ServiceImports. So there is no IP reachability between clusters; you can only reach services, which is why I've kind of called it service routing.
B: That's pretty awesome. Quick question, if you don't mind.
D: Yeah, sure.
B: So for headless, does it actually create a headless service? If I do a lookup, do I get a service name that returns all of the IPs?
D: There are a couple of things you have to get your head around with what we're doing that are a bit wacky. One is the fact that we're effectively using port numbers to identify services, so you kind of flip-flop from the client relying on the IP of the proxy service to the edge pod using a port to identify the service.
D: The other thing is that the DNS view of the world is going to be different in different clusters: for the same domain name, we might have to return different IP addresses in different clusters.
D: Yes, yes, okay. That was very much our goal: to do it all based on services, and those are just the things that fall out of it. They're perfectly logical; it's just that it perhaps takes people a little while sometimes to wrap their heads around them, particularly the thing of using ports to identify services. So let me do a quick walkthrough of what would happen with a regular service, starting with your client pod.
D: The client is going to get an A record, and that A record is going to direct it at a proxy service; of course, the client fills in a port when it opens a socket. Having hit that proxy service, you're going to say: well, 10.96.1.1 port 80 maps to my deployment, my set of edge pods, but my target port is going to be 12345 rather than 80.
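To make the mapping concrete, here is a minimal sketch of what such a proxy service could look like as a manifest. The names, labels, and namespace are hypothetical; only the port-to-target-port idea comes from the talk.

```yaml
# Hypothetical "proxy service" in the source cluster. It looks like a
# regular Service, but it selects the edge pods, and the target port
# (12345) tells the edge pods which remote service the traffic is for.
apiVersion: v1
kind: Service
metadata:
  name: remote-nginx-proxy
  namespace: default
spec:
  selector:
    app: edge-pod        # the edge pod deployment
  ports:
  - port: 80             # what the client connects to via the cluster IP
    targetPort: 12345    # identifies the remote service to the edge pods
    protocol: TCP
```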
D: And of course that means, for a multi-port service, you would effectively have to translate each port separately, because these proxy services are effectively created one per port, rather than one proxy service carrying multiple ports. So now it's that port, 12345, when it arrives at an edge pod. And again, you can see it's just arriving over the CNI: these IP addresses are all just regular CNI-assigned addresses, so there's probably some kind of VXLAN or IP-in-IP tunnel or whatever between the two pods.
D: But we're ignoring that for now. That port number is what's now going to identify the VIP, and the VIP really is just something that's shared between our edge pods to identify services.
D: It turns out you don't need VIPs. Talking to the people who work on the NAT in VPP, I've got a way of avoiding VIPs entirely: basically, if you know what your next hop is, and you can do a NAT that points at the next hop, then you don't actually need a VIP.
D: But again, maybe VIPs help people in terms of understanding. The VIP is identified with a service, or in the headless case with an endpoint, and based on that VIP we're going to route out across the IPsec tunnel. So now think in terms of a service that's actually in cluster 2 and cluster 3, with perhaps 5 replicas in cluster 2 and 10 replicas in cluster 3.
D: What you're going to do is send a third of the traffic to cluster 2 and two thirds to cluster 3. So you'll make an unequal-cost multi-path routing decision, and then it'll go through the IPsec tunnel and land in cluster 2, in this case, as a third of all flows will. Now cluster 2 is going to say: okay, I've got that VIP, and I know that VIP corresponds to this real service that's got a cluster IP in my cluster.
D: What we'll also have to do is source-NAT that, and the reason for the source NAT, again, is that the remote application pods need to know how to route back to this edge pod. In fact, I now can't see the rest of my builds, but assuming the rest of the build works: from there we go through a regular service.
B: Yeah, and that's cool; you answered my question about how you do weighting. One of the reasons why we started with the flat approach, though by no means wanting to require it, was that weighting is an important part, if you have an exact number of backends in each cluster.
D: So, again, in the VPP implementation we have unequal-cost multipath support now. Obviously, to put it up front, our goal with this would be to open source it and get it into the community. But equally we understand and expect that not everyone's going to want to use FD.io VPP.
D: So I would hope that somebody might implement the edge using eBPF, or OVS, or whatever it might be, and then that's a constraint: do they also support unequal-cost multipath? But again, it's very intent-based. If you've got, say, five endpoints in one cluster and ten in another cluster, it'll just send a third to one and two thirds to the other. It's not actually tracking whether the requested 10 replicas are all up, because that's the concern of the remote cluster, right?
D: Okay, bear in mind the background of this team: certainly Yan and I are both people from very much a service provider background, and we think in terms of things like BGP. So we tend to think: make each cluster responsible for its own problems, and make the clusters not have to know each other's problems, as much as possible. Then, for the headless service approach:
D: Yeah, it looks a bit more complex, but really it's just the same. Your client's going to do a lookup and get multiple A records, and as you can see here, those A records are each actually cluster IPs of different proxy services in my local cluster. So it's going to pick one of them, and having picked it, it really just follows through the same way. You can see now that the proxy service for the first endpoint is on port 1001.
D: For the second endpoint it's on 1002. Having picked 1001, that's going to flow through to the edge pod, which is going to use a VIP, in this case 10.17.1.1, rather than 10.17.1.2 for the second endpoint. And now, when that hits the edge pod in the remote cluster, it's not going to use kube-proxy; it now knows directly.
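For the headless case, a rough sketch of the per-endpoint proxy services, using the port numbers from this walkthrough (the names and labels are again hypothetical):

```yaml
# One proxy service per remote endpoint of the headless service.
# Port 1001 identifies endpoint 1 to the edge pods; a second proxy
# service would use targetPort 1002 for endpoint 2, and so on.
apiVersion: v1
kind: Service
metadata:
  name: nginx-endpoint-1
spec:
  selector:
    app: edge-pod
  ports:
  - port: 80
    targetPort: 1001
    protocol: TCP
```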
D: Okay, I need to go to that endpoint. So that's headless. But we can also take the same approach and extend it to reaching out to resources and reaching in from other resources. Effectively, we just create a proxy service, which again just maps onto our edge pods, and that can then point us out to some kind of external service. And for reaching in, we would just use the same kind of machinery in our edge pod, of course.
D: In this case it's a real service, not a proxy service, that we're going to. But what we wanted was to have this one approach that we would use for regular services, multi-cluster...
D: Sorry, headless services, reaching in, reaching out. I think people then sort of start thinking about service mesh, and that's one of the contrasts here, I suppose, because if you have a sort of flat IP space, probably what you want to do with multi-cluster Istio is run a single control plane.
D: However, in our case, because we don't have that flat network, we'd probably take the approach where we go through gateway pods in the remote clusters. That's something we've not set up yet, but it should work fine. And that's one of the interesting things with all of this:
D: I'm not really sure, in reality, what the ratio is there. So that's sort of the data plane. In terms of how we built the control plane, Nikos can show the demo, but fundamentally we've taken the model of having a centralized control plane, with agents in each cluster as well as edge pods. So, Nikos, do you want to grab whatever Zoom's equivalent of the ball is? Because I think that was my last slide.
A: Nikos, maybe just show the architecture up front with the one slide, and then go into the demo.
C: Yeah, one second.
C: You guys can see my screen, right? All right. So essentially this is the architecture that we have; it's very simple. We have something called speakers, and this allows us to talk, essentially, to our controller. This is basically the MCS agent that runs in every cluster, and the southbound speaker is essentially the component we have to talk to our data plane, and this has different implementations, like Charles mentioned.
E: A quick question: could you just explain exactly what a speaker is? I'm unfamiliar with the term used in that manner, so I'm just a little curious if you can add some color.
C: Yeah. The speaker is essentially the component that provides the communication to the northbound: it creates a gRPC connection up to the controller, and that's the way to communicate, through gRPC, to the controller northbound. And then southbound we have kind of the same.
A: Yeah, so basically the speakers talk to the controller: they update their cluster information, tunnel information, what services are available, and so on. And the controller is basically a reflector: it gets all this information from all the clusters and then reflects it to all the other clusters. So the gRPC is a two-way stream.
D: Yeah, okay, quickly, I'll share this, sorry guys. You were thinking of this one, Yan, right?
D: Yeah, so this is our sort of high-level view, where we have the MCS controller, and then each cluster has an MCS agent, which is basically the exploded thing that Nikos was showing, and that will then control multiple edge pods. So there will effectively be two speakers: one talking northbound to the controller, and one talking southbound to the individual edge pods, and that will be sort of replicated in each cluster.
D: I think that's what Yan was after, yeah. This all starts to blow up more, but that was the highest-level view. I think that was it, really. Nikos, do you want to try again and see how your audio is?
C: All right, so yeah. Basically, we have an implementation of KEP 1645.
C: Essentially, in this demo we will try to showcase a supercluster that has three clusters, cluster A, B, and C, where two of them are actually running an nginx service.
C: Someone would go and apply a ServiceExport CRD into Kubernetes, and this will be pushed to our centralized controller, which will in turn create the ServiceImport and advertise it to the other clusters, so that cluster B can access the nginx endpoints through a proxy service that we create in cluster B. Yes, I can show the implementation now. So we use kind to create a setup of three clusters, and, let's say, cluster 2 and cluster 3 run the nginx service.
C: I can show you the services as well. Cluster one doesn't have any services, so it doesn't want to expose any service right now; basically, we want to use cluster one to consume the imported services, while cluster 2 and cluster 3 run the nginx service. And as you can see, we actually have the same cluster IP for nginx, so we can have overlapping IP addresses. Okay.
C: So, right now, looking at the pods, we have the controller in cluster one. The centralized controller is running on cluster one, but you can think of this controller as something that could run on a separate cluster, without any other edge pods running there. And now we can start the agents.
C: So we create the MCS agent, which is basically responsible for listening for changes to ServiceExports and ServiceImports, and then we can see here:
C: So we have the IP-in-IP tunnel interface, and right now we don't have any security associations; basically, there aren't any other clusters that are part of the supercluster yet, just cluster one. So let's add some clusters to the setup. I'll go and add cluster two, and you can see that now I have an IPsec tunnel between cluster one and cluster two, and a route to get out to that cluster. I'll add a third cluster as well.
C: What we can show is, not the application service, sorry, the ServiceExport. So, the definition of a ServiceExport: according to the spec, we have to define a ServiceExport that has the same name and the same namespace as the application service that we want to export, and essentially this has to be consistent across all the clusters. So if I have an nginx service, it has to be similar on all the clusters, and it has to run in the same namespace. So let's try to export that now.
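Per the KEP 1645 spec, the ServiceExport is an empty marker object whose name and namespace match the Service being exported, so for this demo it would look roughly like the following (assuming nginx runs in the default namespace, which the demo doesn't actually state):

```yaml
# ServiceExport marking the nginx Service for export. It has no spec;
# the name and namespace must match the exported Service.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx
  namespace: default   # assumption: the demo's namespace isn't stated
```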
C: Export... and now you can see that we have a new route to the new service proxy. So let's take a look at how this looks from a services perspective in cluster one. In cluster one, we basically created a new proxy service that an application pod that needs to access that nginx service on cluster two will have to go through. So let's create the curl pod.
C
Which
one
is
it
yeah?
Okay?
Okay,
so
you
could
cuddle
up
my
css
fairly
context,
guys
you
want
me
so
basically,
I'm
creating
a
client
that
basically
wants
to
access
that
nclex
service
and
then
let's
go
in
that
girl
and
then
we
can
curl,
basically
with
a
service
name,
and
we
see
that
we
can
hit
basically
the
pods
that
are
the
back
ends
for
that
service.
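A minimal sketch of what such a client pod could look like; the image and name here are illustrative rather than what the demo actually used:

```yaml
# Throwaway client pod. Exec into it and curl the imported service by
# name, e.g. `curl http://nginx`.
apiVersion: v1
kind: Pod
metadata:
  name: curl-client
spec:
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "infinity"]
```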
C: Yeah, and then let's have a case where a cluster goes down, so I'm basically now deleting the agent.
C: Yeah, that's how far we've got so far. And that's in 14 days, almost.
B: Okay, wow. This is wicked. A couple questions: do you have any feedback, or did you hit any snags when implementing the KEP?
C: Well, we hit a lot of snags, but those were basically difficulties in implementing the whole data plane and control plane. The spec, I think, was pretty straightforward. I mean, the controllers were very easy to write, both the ServiceExport and ServiceImport controllers, so that wasn't a big problem, and I think the spec is very clear. The only difference we have with the spec is that we basically don't advertise endpoints, right? We just advertise the ServiceExport.
D: And Nikos and Yan, weren't you saying that you felt the ServiceImport model maybe would be cleaner if you had a ServiceImport from each cluster that was exporting, rather than consolidating into one ServiceImport? That was probably the only thing we felt would be different, yeah.
B: Okay, yeah, we should discuss that; I don't know if today's the right spot, or on the doc, but that's an interesting conversation. Part of where we went with the original ServiceImport was the idea that we really do want all of the services with the same name to be blended, and that if you wanted individual addressing, it really feels like maybe they should be different services.
B: But you know, that's always the type of thing that we should discuss, and I'd love to hear a different perspective. And then also, very good feedback on the endpoints. I think you've just shown that it's possible to implement the behavior, which is really all we wanted to document with that API, without necessarily propagating the endpoints.
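For reference, a consolidated ServiceImport in the mcs-api alpha looks roughly like the sketch below. The field names are from the published v1alpha1 API and may differ from what existed at the time of this meeting; the IP shown is a hypothetical, locally assigned value.

```yaml
# A single ServiceImport per service name/namespace, blended across
# all exporting clusters, rather than one import per cluster.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: nginx
  namespace: default
spec:
  type: ClusterSetIP
  ips:
  - 10.96.20.5         # hypothetical locally assigned VIP
  ports:
  - port: 80
    protocol: TCP
status:
  clusters:            # which clusters are exporting the service
  - cluster: cluster-2
  - cluster: cluster-3
```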
F: Yeah, I would say that we would still be interested in the count of the endpoints in the individual clusters, so that we can load balance; so that we can do the unequal-cost load balancing with proper weights, right?
B: Right, yeah, and that was part of why we originally did actually include that. I think there was some back and forth about whether we should include the endpoints or not, and we started going towards yes, because people might want to build on top of this with some understanding of at least the counts. But I tried to spell out, and maybe I did a bad job, that the endpoints don't necessarily have to be the original exporting endpoints.
B: Go ahead.
D: Yeah, I was going to say, to Rasto's point: if we had separate imports from each cluster, then maybe the import could just have a count of the number of endpoints, and that might be all we'd need, if we were happy to just track the required number of replicas rather than tracking in real time how many were up. I guess the thing is, in the case of a regular service, so non-headless, we only need to know the count.
D: We don't actually distinguish them from our point of view, so there is nothing really per-endpoint in our cluster. In the headless case, then yes, we're going to have to have a separate proxy service for each one. So it's kind of funny, because the KEP has a VIP for the regular case and not for the headless case; for us it's almost the other way around. But as I say, the VIP was one thing that we probably wouldn't bother with.
D: As I said when I was presenting the slides, I've actually figured out a way that we could not use VIPs at all: as long as the source cluster, or at least its control plane, knows the cluster IP that's been assigned in the remote cluster, and as long as you have a way in your NAT to set the next hop, then you don't even need VIPs. But again, some of this is about not only what's the most elegant way to make it work, but also what people are going to understand.
B: Well, don't you still have a VIP, though, with your created service on the importing cluster?
D: Well, on the importing cluster we just use one of those proxy cluster IPs. It's a proxy cluster IP, so that's just a regular one, and it's locally assigned rather than being something global.
B: So that's totally a valid implementation. Yeah, okay, actually: if you look at the SIG's alpha repo we've got up, we've actually followed a similar approach for the first pass, which is to actually use a service in the cluster for kube-proxy programming, and then the ServiceImport VIP just becomes that proxy cluster IP. Okay, excellent, excellent.
D: I think maybe the thing would be to make it sort of optional, and have some wording around the, or at least, two different ways of implementing it: one where you create a sort of flatter network space and track endpoints, and one where you use proxy services instead. We could maybe suggest some wording there, or whatever.
B: That would be great, if you wanted to make a PR.
D: Yeah, sure. And as I said, we started this work before we became aware of KEP 1645, so we actually ended up modifying our implementation to align with it. But having done that, obviously we would love to see our stuff sort of folded into the KEP as a "this is one good approach for doing it."
D: Yes, we came up with this inside Cisco, but we don't want this to be a Cisco thing. We want to make sure that other people get on board and it gets traction.
B: Yeah, this is awesome. Thank you for showing this. I do have to ask: could you add the slides and docs and stuff you talked through today, or any of it, to the notes?
A: So, a good question here: would you guys, or would somebody, be interested in actually setting this up as an open source project, and maybe taking it to CNCF or any kind of a body? Would you guys be interested in working on that?
E: I'll just ask you to repeat that ask on the email list, just so it has the widest possible reach. I'm going to do my best to get the recording up today, but I think you'll get wider reach on the list.
B: Yeah, it would be great to include the slides and everything on that as well, so that people will see.
G: I guess I have one question. It occurs to me that that means there's a very different architecture between a flat network and, I guess, an isolated network with IPsec tunneling and stuff like that.
B: I think that's a very good question, and something we should certainly talk about. My initial assumption, which needs to be discussed more, is that we don't really want to take a stance, necessarily. So long as it doesn't impact how things are used, it seems like multiple implementations are okay.
B: There are certainly flat network deployments out there that I don't think we want to dictate you can't have, but at the same time, obviously, there are non-flat networks as well, and we need to be able to handle those, and I don't know that we want to dictate right or wrong. I don't know if anyone...
A: ...knows anything more there yet. The non-flat networks are actually kind of the more generic case, especially if the clusters are in different admin domains. I mean, you cannot say it's going to be one flat network and what have you, because it can be service exporters in company A and service importers in company B, and they want to consume some services that somebody provides. These non-flat networks also allow you to go into places like, okay:
A
So
let's
do
a
service
broker
actually
based
on
something
like
this
right.
So
it's
it's.
It's
quite
a
generic
approach
and
if
you
happen
to
have
a
flat
network
underneath
I
mean
it's
still
going
to
work.
A
Decisions
that
can't
be
made
yeah,
so
I'm
not
saying
we
should,
but
I'm
just
kind
of
giving
the
background
a
little
bit.
You
know
on
why
we
went
the
way
that
we
went
and
yeah.
E: That was one of our values, explicit or not, in working from an API standpoint at the level that we did. There's nothing wrong with having multiple implementations, and that doesn't necessarily mean that you might only have one implementation of each type, right? There's no need for everything to be factored down to only one thing per broad type.
B: Yeah, I think what we really want to do, and this is where feedback would be great, is make sure that we really capture enough of the behavior on the user-facing side: how you actually interact with these services, and what you can expect when they're exported and imported. So that, regardless of what those implementations end up being (hopefully we eventually have many, many out there), you at least can use them in a consistent way and build some consistent tooling on top of them.
E: Okay, so Jeremy, I think you've got another item on the agenda after this.
B: Yeah, I can share this; this will be super quick. Andrew, who unfortunately had to sign off, put up the PR to expand the alpha repo to support... actually, maybe I won't share, I'll just leave it linked. But basically, it optionally uses the Service's status load balancer IP to let you specify external addresses on the ServiceImport, and have that programmed through the alpha implementation that uses the dummy services.
B: I think this is along the lines of what we talked about. I just thought it was awesome that we got a PR so quickly on the sigs repo, so to me it seems like we should definitely merge this. The existing behavior was that you create a ServiceImport and it gets backed by a derived service that's used to program kube-proxy. With Andrew's change,
B: you can also supply an IP on the ServiceImport when you create it, and, basically, if necessary it will set it as the load balancer IP on the service, which makes everything work. It has the temporary downside of also using a cluster IP, but this is the alpha approach we wanted to take to avoid having to program kube-proxy yet. Anyway, I was planning on merging that unless anybody has any major objections.
B: Cool. Well, it's linked in the notes, so take a look, and if nobody has any objections I'll probably merge it this evening.
E: Okay, so I had one more thing on there, just a couple of reminders. First is that we have the cluster registration and ClusterSet use case documents that are world-writable and open, and we're really interested to know if folks have interesting use cases they want to share. I would say there's no rush, but I just want to literally remind people that they are open. The other one is that Chojin posted a kind of proto-KEP document for the Work API that I just wanted everybody to be aware of.