From YouTube: Kubernetes SIG Multicluster 2020 Feb 25
D
You know, sometimes we'll have fifteen or twenty; it's sort of hit or miss. Yeah, Jimmy, I did leave the KubeFed stuff up in the agenda, because I suspect that the multicluster services stuff might predominate the discussion, and I kind of viewed the KubeFed item more as just a notice for people. I didn't want you to get cut off.

B
Yeah, sure.
B
So I'm Jimmi Dyson and I'm from D2iQ, and yeah, we're very interested in KubeFed; we're interested in supporting it and moving it forward again. I know it seems to have stalled. We've recently done some scale testing of it, and we want to do some work to optimize that, because large scale fits a lot of our use cases.
B
So one thing I wanted, the first one, is to investigate the scale test that's in there right now. I just wanted to see what people think of the state of that, maybe discussing its current state and how we can expand the scale testing that's already there. And then, if there is access to any cloud resources that we can use to do that, that would be really great for the project.
D
So we never got... we never sort of pierced the veil on testing with cloud providers. Absent any access to cloud resources, one thing you could potentially look at is using Kubernetes in Docker, kind is the project name, to simulate those receiving clusters, where each one of those is running in a container.
B
You know, across the Internet, not just all local. With some of the controllers that we looked at, the latency affects, for example, cluster health status: one single cluster having a three-second latency will affect cluster health status for every cluster. So it's things like that that we need to do some work on. That's part of my point here: if we see some refactoring of the controllers, let's take that into consideration.
B
Okay, so there's the other part. We need to, obviously, and we've already started putting PRs in for this. Hector, thank you for what you've done on exposing metrics in KubeFed, which is a good first step to start to quantify any improvements that we make. Then we can be looking through the controllers and looking at how we can, say, redesign those; we want to migrate over to controller-runtime because of some of the niceties that we can take advantage of there.
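For a sense of what such metrics might look like, here is a minimal sketch using Prometheus's Go client. The metric name and label are illustrative assumptions, not KubeFed's actual metrics:

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// reconcileDuration is a hypothetical histogram in the spirit of the
// metrics work mentioned above; KubeFed's real metric names may differ.
var reconcileDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "kubefed_reconcile_duration_seconds",
		Help: "Time taken to reconcile a federated resource.",
	},
	[]string{"controller"},
)

func init() {
	prometheus.MustRegister(reconcileDuration)
}

// ObserveReconcile records one reconcile loop's duration, e.g.
// `defer ObserveReconcile("sync", time.Now())` at the top of a reconcile.
func ObserveReconcile(controller string, start time.Time) {
	reconcileDuration.WithLabelValues(controller).Observe(time.Since(start).Seconds())
}
```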
B
That
kind
of
especially
around
metric,
specially,
on
other
things,
to
simplify
the
code
base
as
well
and
provide
the
same
capabilities
and
trying
to
think
about
how
we
cash
a
bit
more
effectively
and
a
bit
more
cleverly
really
around
what
resources
were
watching
in
target
clusters
to
try
and
reduce
the
resource
usage
they're.
The
kind
of
things
that
we're
thinking
about
right
now.
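As one illustration of watching fewer resources in a target cluster, client-go informers can be narrowed with a list-options tweak. This is a generic sketch, not KubeFed's actual code, and the label selector is invented:

```go
package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Only watch resources carrying a (hypothetical) managed-by label,
	// instead of caching everything in the target cluster.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 10*time.Minute,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.LabelSelector = "kubefed.io/managed=true"
		}),
	)
	_ = factory.Core().V1().Services().Informer() // register the informer

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
}
```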
D
Well, I'm very glad to have you and the other D2iQ folks involved. Let me take this opportunity to say that KubeFed needs more maintainers. So if you're interested in, you know, participating... being a maintainer is a lot of responsibility, but if you're just interested in contributing, don't hesitate to reach out, I guess, here or to Jimmi.
B
Absolutely. You know, we want more contributors; we actually think KubeFed is really great, it fills so many use cases that we want and simplifies a massive architecture. And I think, yeah, if we can get more contributors, that'd be great. I think we'll get more take-up through this project as it moves, maybe, to beta or something like that, but we want to stabilize it before we go to beta.
E
I'm sorry to be contrarian, but there seems to be this general malaise around KubeFed; even at KubeCon, people were sort of not super excited about it. I would love to hear, maybe not today if we don't have time, what specific use cases you're trying to address with it that don't have other solutions. I know full well that other solutions are few and far between right now, but I'd be curious to hear very specifically what sorts of problems you're tackling with it.
B
Sure, we can do that in the future, and there are absolutely other ways it could be tackled. I'd say one of the reasons for us using it was that it was something that worked right now. That's how we're approaching it: there's something that works right now for a number of our use cases. It doesn't fulfill all of them, but let's see if we can work with that and then build on top of it, rather than building something from scratch.
B
Yeah, so I think part of it is, again, keeping the project going and getting more emphasis behind it: starting to put out releases more regularly, maybe even every couple of weeks, something like that. There will be features going in, there will be bug fixes going in, and that may help to build a bit of momentum behind it again.
B
That's kind of the main reason for that. I mean, there are some bug fixes already in there for things users are reporting that are actually fixed; for example, AKS has an uppercase host name in its kubeconfig, so you can't join an AKS cluster at the moment, and this is already fixed in master. So just releasing it fixes someone's use case, right?
D
At the point at which the momentum kind of dissipated behind KubeFed, there had been a lot of interest from the people that were involved at the time in taking the project to beta by KubeCon. At one point, when we resumed releases, I think the last release we did was labeled in a way to communicate it as a beta release candidate. I think we probably want to go back to version numbers that are clearly alpha, and I am all about stepping back from the brink of beta until there's a real feeling, from all the people that are currently involved, that it's something that would meet people's expectations as a thing called beta; that it has enough of an energetic force and momentum behind it that we can responsibly call it beta. How do you feel about that, Jimmy?
B
Just to come back to what Tim said a second ago about other solutions to the same problem: there are, and I think this is another part of why KubeFed, in my opinion, is so great. It's actually centralized a lot of use cases for maybe less technologically advanced users that don't necessarily have other capabilities. We've given talks about this with users who didn't have those other things, and the sweet spot we see for KubeFed right now is managing large sets of clusters.
B
Users with more capabilities in other areas might implement a more complex solution. I guess that's the kind of sweet spot we see for it at the moment, and that might be why we don't get so much community feedback, because the community tends to like complicated, techy things, in my experience, rather than something that is too simple, sometimes.
D
What kind of time frame were you thinking for a next alpha release, Jimmy? I think there have been a couple of things merged this week that should go out. One of the things, just my own opinion here, that I think sort of complicated people's ability to use KubeFed in the past is that there was, at least in my own subjective view, maybe a desire to have releases be more meaningful than "this is what's done."
B
Completely honestly, we haven't... I'm happy to cut alphas whenever. I'd say yeah, weekly would be great, maybe fortnightly, but we haven't actually considered any real schedule yet. It's just, as I say, I really feel that getting this going, getting behind it and getting our releases out, will also allow us to quantify, between successive releases, the improvements that are made.
D
So maybe something to think about, you know, as a takeaway: maybe you and Hector and myself can talk about when the next time to do a release would be. I think some of the metrics stuff that Hector's working on is still in flight, and maybe after that settles might be a good time to start doing releases again.
F
Yeah, so I'll share my screen here in a sec. A few of us at KubeCon San Diego got to talking, basically, about multicluster services, and just seeing that more and more people are obviously using multicluster: what can we do to make it easier to consume the services deployed across clusters, that kind of thing? So we've continued to think about that and work on it, and we put together a doc with a proposed API and what that could look like.
F
You know, whatever the implementation is, basically there is some behavior that needs to happen somewhere, and the MCS controller will be responsible for that. So we've been looking at two primary use cases here, and I'm just going to kind of focus on headings until we get to code. The first one is kind of the simplest case: you just have different services in separate clusters and you want to consume them across clusters.
F
So I have cluster A with one service, and cluster B with another workload, and the workload in B wants to consume the service in cluster A; that's kind of the simplest multicluster case. You know, maybe they're managed by different teams or orgs, and you just want a way to expose them. And then the second one is a single service deployed across multiple clusters.
E
Ignoring the workloads, though: when you set up a StatefulSet, often you will have a host name assigned to the endpoints within the service that goes with it, and if I set that up in two different clusters, I might have the same endpoint name in two different clusters, and I don't know that those things are virtual.
F
...what we have today. So I'll jump down to the service export to kind of show how simple we're hoping this can be. Basically, a ServiceExport is a simple CRD: you'd create this resource in a namespace, with the same name as the service you want to export, and then it would just be exported. You know, maybe there's some status that needs to be on the ServiceExport to communicate whether or not it's been picked up, but basically that's it.
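To make the ServiceExport idea concrete, here is a minimal sketch of creating such a resource with client-go's dynamic client. The group/version `multicluster.k8s.io/v1alpha1` and the service and namespace names are assumptions for illustration; the talk doesn't pin down the final API group:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	// Hypothetical group/version/resource for the proposed CRD.
	gvr := schema.GroupVersionResource{
		Group:    "multicluster.k8s.io",
		Version:  "v1alpha1",
		Resource: "serviceexports",
	}

	// The export carries no spec: its name and namespace must match the
	// Service being exported, which is the whole point of the API.
	export := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "multicluster.k8s.io/v1alpha1",
		"kind":       "ServiceExport",
		"metadata":   map[string]interface{}{"name": "my-svc", "namespace": "my-ns"},
	}}

	if _, err := dyn.Resource(gvr).Namespace("my-ns").Create(
		context.TODO(), export, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```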
F
This is a name-mapped resource, and once you create it, the service is exported. Now, what happens when you export this, and I think this is an important part of the API, is that we want to make it just like consuming cluster IP services today. This is your in-cluster experience, extended to multiple clusters, and so we've been looking at something like a supercluster IP, a multicluster IP.
F
That would be a VIP that gets created, because of that imported service resource in each cluster connected to your controller, that you could access just like a cluster IP today, and then kube-proxy, or whatever the implementation is, could route traffic to the endpoints accordingly. And those endpoints might be in a different cluster, they might be in multiple clusters, or, you know, maybe they're in the same cluster but you're potentially using this...
F
...you know, for future proofing, and you just want to know that you can add endpoints from other clusters for failover. In terms of DNS, we're thinking, again, it looks just like what you see today with in-cluster services, except that instead of a service.cluster.local name, it might be a service.supercluster.local name. If you connect to that name, and that's probably how we'd recommend people use this, it would resolve to that supercluster IP within your cluster.
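A tiny sketch of what consuming that name might look like from a workload's point of view, with the service and namespace names invented for illustration:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Today, within one cluster (illustrative names):
	//   my-svc.my-ns.svc.cluster.local -> cluster IP
	// Under the proposal, across the supercluster:
	//   my-svc.my-ns.svc.supercluster.local -> supercluster IP
	ips, err := net.LookupIP("my-svc.my-ns.svc.supercluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, ip := range ips {
		fmt.Println("resolved supercluster IP:", ip)
	}
}
```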
D
Can I see if I can summarize the mechanics so far? It seems that the knowledge of membership in the supercluster is sort of reserved for the controller, where that might, you know, come from any number of sources; but in the cluster that is exporting the service, you make an exported service that, in turn, results in the controller somehow creating imported services on the other members of the supercluster, and those are used.
F
Exactly, that's right, but with a different name that lets you kind of declare, you know, "I don't care where the service is located." So we're not trying to repurpose cluster.local; this would be a different way to access it that lets you just specify that you don't care. Also, this is purely within clusters, so this is like a pure Kubernetes implementation that we've been talking about.
F
So if a service in any cluster has a given name and is in the same namespace, probably with some exceptions like kube-system, default, and maybe a few others that are common depending on which projects you're using, but for the most part, if you have a namespace and you have a service with the same name in that namespace in multiple clusters, that should be an equivalent service. And so if we create an imported service in a cluster...
F
Because
of
this,
because
the
controller
sees
the
export
in
one
cluster
and
and
creates
it
in
another,
it
would
be
with
the
same
namespace
or
the
same
name
in
the
same
namespace.
And
so
you
know
if
we're
assuming
that
you
have
access
to.
If
you
want
to
like
read
the
service
or
anything
like
that,
you'd
need
access
to
resources
in
that
namespace.
So.
F
We
haven't
we
kind
of
specifically
not
described
how
this
needs
to
be
implemented
so
that
that
would
be
the
the
controller.
So
you
know
it
could
be
that
your
controller
is,
you
know
some
controller
running
in
one
of
the
clusters
that
that
watches
all
the
other
clusters
and
synchronizes
endpoints,
or
it
could
be
that
you
have
some
kind
of
gossip
thing
going
on
and
you've
got
and
each
clusters
not
a
controller,
that's
responsible
for
watching
each
other
cluster.
H
Currently we have a central controller that actually depends on KubeFed, but that's like a prototype just to test, and we are moving towards a distributed implementation, where we have some central place where the clusters exchange the information about the exported services. And then we are looking even at other possibilities, so we get still more resilient and distributed. But yes, you see, there are many, many ways of implementing that.
H
You know, I have the feeling that maybe something that will let you export several services at once, like having a selector instead of just having this one name for the service, would be useful; maybe you want to export several services based on labels or something like that. It would be more similar to a service selector, or a pod selector, or a policy.
F
We've kind of been thinking that if you wanted to, you know, export every service in this namespace, or export selected services that match a label, it wouldn't necessarily be hard to write another controller that just watches services and creates the exports automatically, matching those rules. But I think that's a good point; it would be good to get feedback on that too.
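A rough sketch of the kind of helper controller described here, using controller-runtime (which also came up earlier in the KubeFed discussion). The export label and the ServiceExport group/version are invented for illustration:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// exportReconciler creates a ServiceExport for every Service carrying a
// hypothetical "multicluster.example.com/export=true" label.
type exportReconciler struct{ client.Client }

func (r *exportReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var svc corev1.Service
	if err := r.Get(ctx, req.NamespacedName, &svc); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	if svc.Labels["multicluster.example.com/export"] != "true" {
		return ctrl.Result{}, nil // not opted in; nothing to do
	}
	// Same hypothetical group/version as the ServiceExport sketch above.
	export := &unstructured.Unstructured{}
	export.SetAPIVersion("multicluster.k8s.io/v1alpha1")
	export.SetKind("ServiceExport")
	export.SetName(svc.Name) // must match the Service name
	export.SetNamespace(svc.Namespace)
	if err := r.Create(ctx, export); err != nil && !apierrors.IsAlreadyExists(err) {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Service{}).
		Complete(&exportReconciler{mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```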
F
One
thing
that
that
may
or
may
not
matter
is
that
I
may
be
more
difficult
to
expose
status
on
on
the
service
export
about
a
specific
service
if
it
actually
Maps
the
multiples,
maybe
not,
maybe
maybe
the
status
that
you
know.
We've
kind
put
together
a
few
examples
here,
but
maybe
they're
not
hugely
valuable,
so
I
think
these
are.
This
is
definitely
something
that's
worth
talking
about.
F
F
So
if
your
implementation
had
a
specific
cluster
that
you
know
was
responsible
for
performing
the
synchronization,
that
cluster
would
probably
need
a
cluster
registry,
but
there's
no
real
reason
why
any
of
the
other
clusters
would
that
in
that
kind
of
like
fan-in
fan-out
model,
if
you
had
a
fully
distributed
model,
then
yeah,
you
probably
need
some
the
cluster
registry
in
each
cluster,
so
that
each
cluster
would
know
who
to
talk
to
I.
Think
that's.
F
We've
been
kind
of
leaving
that
up
the
implementation,
I
mean
realistically
there's
also
a
case
where
it'd
be
everything
just
lives
outside
of
your
clusters
or
there's
some
other
cluster.
That's
it's
only
job
is
to
perform
this
task.
You
know-
and
it's
not
actually
part
of
the
part
of
that
export/import
system
right,
so
we've
kind
of
been
leaving
that
out,
but
this
implementation
doesn't
actually
require
that
any
of
the
consuming
clusters
know
about
the
other
clusters
just
about
the
services.
So
then
is.
D
Maybe you could define as a characteristic that anything you make an exported service for should be consumable within the group that constitutes any given supercluster, right? There isn't any notion, as far as I can tell, of like "go here, but not there," right?
F
Right
yeah,
we're
kind
of
assuming
there's
a
high
degree
of
trust,
and
then
one
of
the
things
that
is
kind
of
spelled
out
here
in
another
section
on
governance,
is
that
basically,
that
that
high
degree
of
trust
also
needs
to?
You
know
be
in
some
kind
of
space
where
you
can
make
the
statement
that,
like
this
namespace
is
consistent,
you
know
like
we
again.
We
make
the
assumption
about
that
name.
Namespace
sameness.
This
kind
of
trust
area
needs
to
needs
to
be
able
to
enforce
that.
E
I think the cluster problem is simply: if I am relying on the name, which is not cryptographically secure, obviously, then anybody who controls a cluster can join my service. So I think I have to trust the admins of all my clusters, or I have to have a controller that's smarter than simply merging by name, right?
E
In fact, you know, maybe it's worthwhile taking the position that says there are actually two parts: there's sort of the gozintas and the gozoutas. On the gozouta side, we've got this thing that represents a global service, or a supercluster service, and if we all agree that that's how we're going to consume supercluster services, then cool, we can lock that down. And then the gozinta is: how do you convert a cluster service into a supercluster service? And that's the export.
E
I think the exported service here represents the inside-the-shell routing, the east-west, I guess, in the sort of common vernacular; that's not exactly right, but: service in cluster A, service in cluster B, anybody who's within the crunchy outer shell can talk to that service and have them be merged. I don't think this addresses in any way the publishing of a service beyond the edge of the clusters, right? The same way that kube-proxy doesn't handle that, right?
F
But one of the things that we're also kind of talking about here, a little further down, is that by targeting that east-west, which, as Tim says, isn't maybe a hundred percent accurate in terms of terminology, but when you're going from Kubernetes to Kubernetes, you can take advantage of more knowledge about the source and destination. And so, you know, we've mentioned things like topology keys and session affinity, without digging too much into this right now.
F
Cuz,
that's
that's
a
whole
other
thing
that
I
would
maybe
encourage
people
to
read
up
on
offline,
but
you
know
there's
some
new
work
around
topology
and
what
that
could
look
like
whatever
that
ends
up
being
in
kubernetes
and
serviced
apology
we
probably
want
to
have
a
similar
thing
extend
to
extend
across
cluster.
So
if
you
have
clusters
that
are
maybe
somewhat
separated,
you
probably
don't
want.
You
know
it
with
some
kind
of
high
cost
between
them.
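For reference, this is roughly what the then-alpha Service topology keys feature looked like (the `TopologyKeys` field shipped as alpha around Kubernetes 1.17 and was later removed, so this sketch assumes a k8s.io/api version from that era); the multicluster analogue mentioned here is speculative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleService prefers endpoints on the same node, then the same zone,
// then anywhere ("*"). A multicluster extension might add a cluster-level
// key, but nothing like that is defined in the proposal.
func exampleService() corev1.Service {
	return corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "my-svc", Namespace: "my-ns"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "my-app"},
			Ports:    []corev1.ServicePort{{Port: 80}},
			TopologyKeys: []string{
				"kubernetes.io/hostname",
				"topology.kubernetes.io/zone",
				"*",
			},
		},
	}
}

func main() { _ = exampleService() }
```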
F
Of the proposal, yes. Well, there are kind of two parts to that question. The way we've been thinking about it, I guess I would say exactly how that would work would depend on where those clusters are. I think with this iteration, if we're talking about the current service topology API, if those clusters are co-located in the same topology boundary, you know, if we just did the simple kube-proxy iptables implementation, the service would kind of always be backed by both clusters.
E
Also
add,
there's
within
Signet,
which
this
will
have
to
go
to
within
Sigma.
There's
a
separate
discussion
about
whether
service
should
have
a
concept
of
a
failover
selector
or
a
failover
mode,
which
is
only
used
if
there
are
no
endpoints
in
the
primary
mode.
How
that
intersects
with
topology
isn't
defined
right
now,
it's
pre
kept,
but
I
understand
what
the
idea
is
about.
I
think
to
put
to
go
back
to
what
you
asked
Paul
like
what
are
the
actual
requirements
here.
D
That makes sense. Do you think you'll present this at SIG Network anytime soon?
D
I certainly think this is an easy-to-digest, good starting point conceptually. I'm imagining that there's a lot of stuff that you, explicitly, in your minds, ignored or called out of scope, so I think it's a good starting point. I guess my question might be... well, I won't even ask it, because it's too soon, so I'll just keep everybody on the edge of their seats.
E
Thanks
jerk,
no
really,
you
know,
like
I've,
seen
a
prototype
of
this
thrown
together.
The
ideas
here
aren't
particularly
earth-shattering
I
think.
The
main
goal
here
is
to
get
consensus
on
like
this
is
how
it
should
kind
of
work.
If
everybody
assumes
that
it
works
the
same
way,
then
we
can
all
explore
different
topics,
different
ways
of
implementing
and
different
second
stage
problems
here.
Right,
I've
talked
before
about
this.
This
idea
of
the
adjacency
of
the
possible
right.
F
You know, what kind of policy can we build on top of that? And yeah, it really kind of opens up the discussion. But, reiterating what Tim said, I think more eyes on this document would be great: more input, raising the questions that we've kind of raised here, figuring out what we haven't thought about.
F
One thing we didn't really get to in detail, but we've been thinking, is that we'd basically associate the imported endpoints with a lease, and it would be the controller's responsibility to keep that up to date. So if the lease expired, like we stopped hearing from one of the clusters, we'd remove those endpoints. So if anything happened to the controller and it wasn't able to keep things in sync, there would be something at each cluster level, regardless of how the controller's implemented, centralized or distributed.
F
However,
there
would
be
something
at
each
cluster
level
that
would
clean
up
imported
services
if
it
stopped
hearing
from
some
things
so
that
we
wouldn't
get.
You
know
still
end
points
living
around
forever.
Mm-Hmm
I
think
that's
definitely
something
that
needs
to
happen,
and
then
we
need
to
kind
of
agree
on
at
least
high-level.
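A minimal sketch of the expiry check implied here, assuming coordination/v1 Leases were used to track liveness of source clusters (the talk doesn't specify the mechanism):

```go
package main

import (
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
)

// leaseExpired reports whether a source cluster's lease has lapsed: if it
// hasn't been renewed within its stated duration, that cluster's imported
// endpoints should be cleaned up, per the discussion above.
func leaseExpired(lease *coordinationv1.Lease, now time.Time) bool {
	if lease.Spec.RenewTime == nil || lease.Spec.LeaseDurationSeconds == nil {
		return true // nothing to go on; treat as expired
	}
	expiry := lease.Spec.RenewTime.Add(
		time.Duration(*lease.Spec.LeaseDurationSeconds) * time.Second)
	return now.After(expiry)
}

func main() {
	// An empty lease has never been renewed, so it counts as expired.
	fmt.Println(leaseExpired(&coordinationv1.Lease{}, time.Now())) // true
}
```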
D
Okay, yeah, I think that would be interesting to talk about in a future discussion. Certainly, one of the pieces of feedback that we heard most in the community around the KubeFed design, which is a push model, is that that mode is inherently vulnerable to a single point of failure.