From YouTube: Kubernetes SIG Federation 20170530
A: Yeah, I can maybe tell. I think the issue he was probably running into was that the batch APIs need to be enabled first, and what he was trying to do was probably enable them outright. There is a separation between the rest of the APIs and the APIs which we consider alpha in federation: the batch APIs and the autoscaler APIs are considered alpha, and they are not enabled by default. So I think he was running the e2e tests without that and trying to enable them directly.
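(For reference, a hedged sketch of what enabling one of those alpha API groups might look like. This assumes the federation API server accepts the same generic --runtime-config flag as kube-apiserver and is deployed as a Deployment in the host cluster, as kubefed does; the image, tag and surrounding fields are illustrative only.)

```yaml
# Illustrative container args for a federation-apiserver Deployment,
# turning on the batch and autoscaling groups that federation leaves off by default.
spec:
  containers:
  - name: apiserver
    image: gcr.io/google_containers/hyperkube-amd64:v1.7.0   # illustrative image/tag
    command:
    - /hyperkube
    - federation-apiserver
    - --etcd-servers=http://localhost:2379
    - --runtime-config=batch/v1=true,autoscaling/v1=true     # explicit opt-in to the alpha-in-federation groups
```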
E: In fact, I think I talked to him on Friday. He mentioned that he had fixed that, but he was running into some other resource-not-available issue — load balancer or something, some other resource not available in the GCE environment. That's what he was saying. Hey Jana, are you there?
A: So one more thing I saw updated in his PRs is that he updated the deployment — the kubefed deployment, or the e2e deployment — to use etcd 3. Currently kubefed uses etcd 2, and in fact we recently explicitly set the flag for the API server back-end to be etcd 2, not etcd 3. So I don't know why he...
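(Again purely as a hedged illustration: the storage format is selected with the API server's --storage-backend flag, so pinning the federation control plane to etcd2 would look roughly like the following, in the same illustrative Deployment-args style; whether the e2e scripts should move this to etcd3 is exactly the question being discussed.)

```yaml
command:
- /hyperkube
- federation-apiserver
- --etcd-servers=http://localhost:2379
- --storage-backend=etcd2   # explicitly etcd2 today; switching to etcd3 would be a deliberate change
```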
B: The last time I saw it, it was assigned to Quinton and he had a few comments, and I saw similar comments from others as well, saying that he should only enable it in e2e. As mentioned, we shouldn't change the default situation, which I agree with, and I guess he was just waiting for the tests to be passing, since they were failing the last time I checked. I could have split it, though.
B: For those APIs — jobs in batch, and autoscaling — those are good questions, but we didn't have the controllers at that point, so we had to disable those APIs. That's how we ended up disabling APIs which are not fully working: they're disabled by default, and nobody has come back with complaints about them. So it's not...
B: APIs without controllers we do want to disable, and likewise APIs where we're defining our own types: the relatively new ones haven't been tested, but others such as services or ingresses have been around longer and we have more confidence in them. We can enable those by default and keep all the alpha APIs disabled by default, but...
F: Sorry — so the indication that something is alpha because it's using annotations is primarily about a resource managing its own data, and so it's a bit of a gray area for federation, because we are kind of managing our own data, but it's not first-class. You know, if I'm a replica set and there's an annotation that I, as a replica set, am using...
F: ...that's a problem, and a reason for me to be known as something other than alpha. But if I'm federation and I'm storing some data on a resource I'm not actually managing — I'm not responsible for the replica set, I'm just using it — then it's okay. It's still not great, because of the backwards-compatibility and validation issues, but it doesn't have the same implication. Does that make sense?
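(To make the distinction concrete: the federated ReplicaSet controller of this period keeps its placement preferences in an annotation on the ReplicaSet — federation-owned data riding on a resource federation does not own. A sketch, with made-up cluster names and counts:)

```yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: my-app
  annotations:
    # Federation's placement data, stored as an annotation on the user's ReplicaSet.
    federation.kubernetes.io/replica-set-preferences: |
      {"rebalance": true,
       "clusters": {"cluster-a": {"minReplicas": 2, "weight": 1},
                    "cluster-b": {"weight": 2}}}
spec:
  replicas: 6
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.13   # placeholder workload
```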
B: There's the API, yeah, but there's also the implementation. Users expect that if they bring up federation and this API is enabled and they can create those resources, then it's all working fine. And, like I said, it doesn't matter to them whether those APIs have a controller behind them at all. So that's why we want to disable those APIs until they are fully implemented. Yes.
A: I understand that the API path itself depicts what phase an API is in, what it currently is. For example, in Kubernetes the replica set API has been given v1beta1, and it may even move further. In federation, ever since the beginning, when the replica set was enabled in federation, it has always been v1beta1, and it still exists as v1beta1. What I understand is that if the API is alpha, that should be depicted in the path — that is the correct way of doing it, right?
G: I mean, well, there is no separation in the reference docs that we expose, but every time we make a release we explicitly say what maturity level each API is in — don't we? So yes, there is an API group version that people specify in their YAML files, but we do say that this API is alpha, use it at your own risk, right?
G: That was true many, many releases ago; things have become much better now, and people do put alpha things in alpha. What you are seeing is, yeah, deployments and replica sets both went in the way you are describing, but that was in 1.2 — that was ages ago. Since then we have gotten a lot better with group versioning, and alpha APIs do go into alpha. But now in Kubernetes...
H: I think our problem is categorically different, because the Kubernetes problem was that they were sloppy about putting things in alpha. Our problem is that in Kubernetes something is beta, but in federation we're not at beta, so we have a real mismatch. It's not as if better bookkeeping on our side would fix it; we have a legitimately different level of maturity, yet we're including the resources at beta paths because Kubernetes has brought certain APIs there, which is a different problem.
C: Yeah, and it wasn't actually a problem in the past, because there was no such thing as an alpha path. You just declared something either alpha or beta in the release notes, and the path was always neither alpha nor beta, even for things that were technically alpha or beta. In that regime there was no mismatch, but now it seems that, with Kubernetes cleaning up their act and actually pushing things through alpha and beta paths before release, we need to do the same.
C: The reason I'm belaboring this point a little bit is that I think it's a bit of a problem — actually a big problem — so we may need to schedule a separate discussion to get it sorted out. Is anyone well plugged in with the API group at the moment, like SIG API Machinery? I haven't been to one of their meetings for quite some time.
C: I'm curious how they handle it. If people build applications based on, for example, a beta API, which technically has a finalized API, then all of those applications are going to break when the API moves from the beta URL path to the released URL path, the v1 of it. So I'm curious. And we sort of have a similar-ish problem, which is that the same application that works against Kubernetes will not work against federation, because we deem something to be alpha, for example, even though, yeah...
G: It absolutely is handled; this is happening right now. Deployments and replica sets have been moved from extensions/v1beta1 in Kubernetes to apps/v1beta1 as a first step towards taking them to GA, and if you are using Kubernetes today, in 1.6, you can send your deployments to both extensions and apps, and that will be true for the foreseeable future, for at least a couple of releases, I think.
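(Concretely, in a 1.6 cluster the same Deployment is served under both group versions; only the apiVersion line differs. A minimal sketch with a generic workload:)

```yaml
# Served under the older path...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13
---
# ...and under the newer path introduced on the way to GA; it is the same resource.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13
```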
C: Sounds like we might need to do something similar, but even more confusing, which is: you support the Kubernetes version in federation, but we sort of go through multiple beta phases. The API in Kubernetes is already finalized and therefore beta, and we will support that; but then we also support additional extensions, like these placement things and annotations and whatever else, and those are not finalized yet, and then later those extensions become beta.
C: So I guess we need to think this through a bit more carefully than we have, given the changes in Kubernetes API management, and fix it, because I don't think we... Yeah, I think we need to make sure that we do it correctly, document it properly, and follow our own procedures. But maybe let's leave that for the next meeting rather than spend too much time on it now.
B: I did want to come back to the question of what should be disabled by default in federation. Should we, now that the jobs controller is mostly in, have it enabled by default, or for one release do we keep it disabled, make sure we have enough confidence, and then enable it in the next release?
C: I'm not quite sure I understand the argument. When we implemented federated replica sets, for example, they were v1 in Kubernetes; we implemented the Kubernetes API, so it would be v1 in the federation API on the URL path. But in the release notes we said that we were still finalizing the API with respect to placement and so on, which is currently in annotations, so that's how we qualified them in the releases.
H: So we say replica sets are more established, and secrets and config maps are more established — even somewhat more established than jobs, because those already exist — and we now have all these different levels. Disabling new controllers by default seems like a more formal way, beyond the release notes, to essentially tell the user that this is alpha-ish. Not everything necessarily has the same level of confidence. I mean, yeah.
C: I can understand that argument now. I mean, I don't necessarily agree with it, and again, rather than do something on the spur of the moment here that is inconsistent with what we've done in the past, what I would prefer we do is actually sit down, think the whole thing through properly, and then implement whatever that thing is. It just doesn't seem like we've given it enough due diligence to make a decision to change what we've been doing up to now.
F: Sure, I'll say it: I agree with Quinton. I'm not really participating in the API SIG so much, but when it comes to the issues around annotations and trying to move things to stability, I've been talking with Clayton Coleman, who is one of the core people on the API side, and he's definitely looking to see federation move away from the annotations-based approach, which is one of the motivations for some of this discussion.
F: If I gave you the impression that I was assuming it was sloppiness — I think my concern was that there wasn't a path forward. So yes, you're experimenting, you're trying things out, but there's not actually a path defined to stability, and that was kind of the reason for initiating that discussion. Cool, excellent.
G: Moving on then — the CI is all green, pretty much the same as last week, except that we still haven't managed to make these jobs submit-queue-blocking again; hopefully we're going to do that today. They're super green, though — they were green, wow. If this is the CI... I don't know how to share it; I can forward the link. If you turn on the chat on this side — yeah, okay, I will send it in a minute. Okay.
H: Once the replica set controller stuff goes in, I will be looking more at making sure the replica set tests work properly again. It's the same controller work in progress — actually updating the current replica set controller and debugging it in an attempt to fix the tests — but we're getting there, yeah.
B: I added this one to the list of PRs. I just wanted to ask whether anyone else can review it. I reviewed it, but a design proposal isn't well served by a single reviewer. It's a small proposal for policy-based placement, so maybe Quinton, if you can take a look — or anyone else — it would be great to have another person take a look at it. Yeah. Sorry.
A: Yeah, I can give a very brief overview. There are two alternatives suggested over here, but the proposed alternative — the one we are saying is good to go, or would be taken — is the case where, when a user creates a stateful set in federation, it is partitioned into the underlying clusters, similar to what a federated replica set does. So say a user submits a stateful set with five replicas and you have two clusters: one cluster gets three replicas and the other cluster gets the other two replicas.
A: In addition to that, the user also creates the governing service — the domain — using a headless service, which needs to be created in federation. The same service will be created in the underlying clusters, and with this much alone there will be two stateful sets in two clusters, and each replica is reachable within its own cluster, because that much is taken care of by the headless service and the stateful pods created within a cluster. Additional to that...
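(A minimal sketch of the pair of objects being described — the governing headless Service plus the StatefulSet — as a user would submit them to the federation API, which would then create the same objects in the member clusters. Names, ports and sizes are illustrative.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None      # headless: per-pod DNS records instead of a virtual IP
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db      # the governing headless service above
  replicas: 5          # e.g. partitioned 3 + 2 across two member clusters
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:9.6   # placeholder workload
```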
A: What I'm suggesting is that each pod can have multiple DNS identities. Currently, using the headless service, there is one identity a pod gets. If you scroll down a little — there are some DNS names there, where you can... yeah, something like this. So these additional DNS identities are what the federation controller will create, and that too only if it is chosen by the user, either in the specification or as an annotation saying that we need public discovery.
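(To make the "multiple DNS identities" idea concrete for pod db-0 of the sketch above: the first name is standard StatefulSet/headless-service behaviour inside a member cluster, while the second name and the opt-in annotation are hypothetical, meant only to illustrate roughly what the proposal describes.)

```yaml
# In-cluster identity, created by the member cluster's DNS as today:
#   db-0.db.default.svc.cluster.local
#
# Additional federation-wide identity, created by the federation controller
# only when the user opts in to public discovery (hypothetical form):
#   db-0.cluster-a.db.default.myfed.example.com
#
# Hypothetical opt-in on the federated StatefulSet:
metadata:
  annotations:
    federation.kubernetes.io/statefulset-public-discovery: "true"   # made-up key
```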
C: Can I just ask a clarifying question, Irfan — and this might help other people as well. My understanding is that the intention is that there are essentially two primary use cases. One is that a user wants to deploy an application in local clusters, and in each cluster there needs to be something like a quorum — you know, a typical use case might be an etcd of three nodes in every cluster — and that is catered for by this, and you can...
C: ...make sure that you get exactly three in each cluster by using min and max in the placement preferences. And then the other use case is that you want to create something like a quorum, or a stateful set, across multiple clusters — for example, you have five clusters and, say, one instance in each cluster, and they should all be able to find each other and form the quorum network — and that one works with the external DNS and the cross-cluster headless services you described before. Correct?
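(For the first use case — exactly three replicas in each cluster — the min/max placement preferences could look roughly like this. The annotation key here is hypothetical for stateful sets; it simply mirrors the existing replica-set-preferences format.)

```yaml
metadata:
  annotations:
    # Hypothetical key, modelled on federation.kubernetes.io/replica-set-preferences.
    federation.kubernetes.io/statefulset-preferences: |
      {"clusters": {"cluster-a": {"minReplicas": 3, "maxReplicas": 3},
                    "cluster-b": {"minReplicas": 3, "maxReplicas": 3},
                    "cluster-c": {"minReplicas": 3, "maxReplicas": 3}}}
spec:
  replicas: 9   # 3 + 3 + 3 across the three member clusters
```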
A: So the headless service — what Quinton was mentioning is that I did not state the two use cases which I have in mind here. So the two use cases are: one is that you have multiple federated clusters and you have some application which probably wants to be deployed as smaller local quorums.
A: For example, across three clusters you deploy an application which wants to form a quorum in cluster number one, and also in cluster number two and cluster number three, all separately, and maybe periodically they want to sync through only one of them, or they have some other mechanism of reaching across. This is one use case, which can be solved directly by creating a headless service in each local cluster and giving the desired replicas of the stateful set in that local cluster.
A: The second use case is that, either along with the local quorums or instead of them, the same stateful pods want to be able to communicate across the clusters as well. For example, in the example we gave it is three-three-three, so there is a bigger quorum which needs to be formed from all nine stateful pods. Or it could be that the local quorum is not required at all.
A: The application wants only a global quorum, where maybe one replica per cluster is desired, and those replicas should be able to reach each other and discover each other across the clusters in a federated fashion. So both of these — these are the two use cases that we are trying to present, which can be solved with this design. The local quorum is solvable just by creating the headless service and the stateful set and relying on the way stateful sets work in the local clusters.
A: On the annotation, again — I don't have a detailed API, or something like it, which can be specified while creating the stateful set; the preferences need to be specified somehow. I haven't given the exact details of the preferences over here, but I think that probably needs to go into the document once the feature is worked out.
A: This is the design that is proposed here. It more or less uses the features which are already there, so we don't have to make the federated controllers very feature-rich and all that. The best way of doing this would probably be having a CNI network — an overlay network — across clusters. That is what I think is the best way of doing it, and it probably can take care of the security concern that you mentioned, as of now.
A: We have overlay networks in Kubernetes clusters themselves, which maintain the network across the nodes within a cluster; if some of the CNIs can support extending that across clusters, among the federated clusters, that would be the best solution according to me. But as far as the security concern about exposing the services goes, when the services are created the user can choose whether they are exposed over HTTPS or HTTP, so they get whatever security TLS or HTTPS transport provides.
C: I was going to add a comment: there is a general requirement for secure exposure of services between Kubernetes clusters, and this has been on our roadmap for quite a few quarters now. Sorry — whose roadmap? — on the federation roadmap, and maybe it actually applies outside of federation too, but certainly there's been a requirement expressed by customers and brainstormed by ourselves...
C: ...for how we expose things between multiple federated clusters in a secure way, so that they're not just out on the internet but are somehow visible to the other clusters in the federation. I think that problem, and the solution to it, extends beyond stateful sets, and I would propose that we solve the general problem and then apply the solution to stateful sets as well.
C: So one way of doing that would, for example, be to have an explicit option for the type of stateful set — in this particular case one which is explicitly, you know, publicly visible, or whatever — and that's the only type supported now. Then in the future, when we have private services between clusters, we can add another type which says: no, I don't want my stuff public, I want it private. I'm waving my hands a bit, but approximately like that.
F: But I mean, in the case of Kubernetes, in an individual cluster I have the option of having essentially private services — I just don't use a LoadBalancer type, I use something that has a cluster IP. If you're doing it across clusters, would you say that's just not an option? Well, you have that option now as well. You can... you cannot...
A: Yeah, actually, there is one comment which Quinton gave earlier, which is about listing — as you mentioned, I have given two abstract use cases as of now, but I have not listed, say, a real-world application use case which can depict how this can be used and why it is needed. I would like to add that as well.
C: Let me check that I understand what it is that you want fleshed out. Take use case number one, for example — I just want to get it crisp, and I don't want to go and make changes that don't address the question that you're asking. So it's a stateful app; for reasons of high availability one wants the stateful pods distributed across different clusters so that it can withstand cluster failures; there is, say, an application with one single global quorum. Is it not clear what the requirement is there, I guess?
F: Do these use cases necessarily have to be conjoined? Could we consider the simple case — if I'm just going to replicate, the way deployments and replica sets are replicated — and then separately consider how you would have the option of creating a global quorum, or cross-cluster quorum? They kind of seem — I don't know, they've seemed orthogonal to me. Is that not the case?
F: I didn't mean to confuse things. I just meant that the approach of simply scheduling stateful sets, federated into individual clusters, without trying to do the cross-cluster quorum part — maybe that's something that could be done first, and then separately you could consider the possibility of doing the global quorum. But okay, okay, I get it. It may be simpler to design the feature if those things are in fact orthogonal, and I suspect they are.
A: For example, the use cases that I did receive from internal teams were that they have some internal databases which can form a large cluster-wide quorum — for example, 30 or 40 replicas — and which are also capable of working in a smaller replica mode: for example, three or four replicas can be local, and then one of them can communicate outward, out of this quorum of three or four replicas.
A: One of them can communicate across to another such quorum of three or four replicas, and all of them can work as a distributed-DB kind of thing. That is one concrete use case which I did get from internal teams, so it would be used. And I think there are some other DBs which can work in such large stateful formations.
C: I think there are many similar applications, and if what's missing here is concrete examples, then I think it's perfectly reasonable that we go and add some. I'm not sure that that's the gap, but I think it would make the document more readable, and I personally can think of a bunch of them — two or three, at least. I'm just not sure whether the concern is about the content itself — whether somebody thinks it is unlikely that there are real requirements for these use cases and that we therefore shouldn't build it.
C: Is that the concern, or is it the entirely separate question of whether we can deal with these two things independently? I think we can, as in the case of services, which is a very similar situation: we could have just simply replicated services across clusters, but in addition to that we built cross-cluster service discovery, which is actually the super interesting thing from the customers' point of view, because it gives you HA and all these other good things — which, incidentally, was identified as sort of the biggest use case.
D: All right, so to expand on your point, Quinton — I'd like the document to do something similar to what you just described. You know: here's the value-add of cross-cluster service discovery, which everybody, I think, appreciates. What I'd like to understand, at least, is: for stateful sets, what is the value-add of implementing this?
C: Sorry, it went quiet in the room there — could you ask that again? Yeah, I mean, Google has a thing called global Chubby, which is used extensively across Google for locks between data centers, etc. So there are plenty of these use cases, but writing them down and being explicit about them would be a good idea.
C: You were cutting out — I just made a comment that Google has a thing called global Chubby, which is used extensively across Google by many, many applications — yes, for cross-cluster leader election, locking and all sorts of things — so that's essentially the use case, along with stable, predictable identities. And there are a bunch more; you can have multi-cluster database clusters and so on. But yeah, I can work with Irfan; we can flesh out the list of concrete examples.
H: My fear — and the reason I would like specific motivating examples — is that we build a generic thing to do something that nobody quite wants to do: everybody needs something specific enough for their use case that the generic thing we build doesn't prove useful for anyone. That's a good way to put it, then.
A: Actually, one simple example of what might go in as concrete use cases: Quinton has put one comment at the end of the PR where he has listed a good couple of them off the top of his head, which I have not added so far, thinking we would first have this discussion and then I would do that. Can you just go to the PR? All I hear is...
C: I just had one last question to clarify before we wrap up, which is: did everyone actually get a chance to read the document? I mean, it's totally understandable if you didn't actually read the document before this meeting — that's, you know, a different problem to solve than if you read the document, tried hard, and couldn't understand it. So I would encourage people just to be very clear as to whether they read the document before the meeting or whether this is the first time they've really thought about it, either way.