From YouTube: Kubernetes Federation WG sync 20180214
A
One way of specifying placement was having a one-to-one relationship where the definitive spec is used. It could be a different object, stored with a reference to the given object, or it could be part of the spec itself; and all the other directives, which include placement, or affinity, or any additional stuff which might be categorized under placement, can be more objects — the same resource or different resources.
B
It wasn't necessarily going to solve all problems, but if I wanted to propagate namespaces selectively without an annotation-based mechanism — we're going to be using, like, kube namespaces; there's no such thing as a federated namespace, essentially — so you kind of need a placement resource that can be associated with the namespace, for instance.
B
All this to say that I don't think there's a one-size-fits-all solution. In most cases, especially around policy, it seems to make sense to have a one-to-one mapping. I think that was kind of the relevant outcome of last week's discussion — talking about one-to-one versus one-to-many — and placement, at least in the base case, I think needs to be one-to-one to support policy.
B
Then maybe it's nice to have one-to-many to make it easier for certain use cases, but I don't think you could do only one-to-many and actually just leave it at that. And it's kind of similar around namespaces: if you want to be able to provide placement directives for namespaces, you kind of need a 1:1 relationship, and you can't put it on the namespace because it's not our object — so, you know, a separate resource.
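As a rough sketch of the separate placement resource being described here — all type and field names below are hypothetical illustrations for discussion, not an agreed API — a 1:1 placement object for a namespace might look something like:

```yaml
# Hypothetical sketch only: type and field names are illustrative, not an agreed API.
# A standalone placement resource associated 1:1 (by name) with a namespace,
# since the core Namespace object itself cannot carry placement fields.
apiVersion: federation.example.io/v1alpha1
kind: FederatedNamespacePlacement
metadata:
  name: my-namespace        # matches the target namespace by name (1:1)
spec:
  clusterNames:             # explicit list of member clusters to propagate into
  - cluster-us-east
  - cluster-eu-west
```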
C
I'm getting a bit lost here, so first of all I just want to clarify what we're talking about — I mean namespaces. To take a concrete example, we have at least three notions of federated namespaces. We have the namespaces in the Federation API, where you put things that you want federated; then you have the concept of creating consistent namespaces in multiple clusters — maybe all of your clusters, or maybe a subset of your clusters; and then we have this concept of actually...
C
So that's just literally propagating the namespaces themselves to all the clusters, not the contents of the namespaces. And then there's also been talk of propagating everything that is contained within a namespace into multiple clusters, or making the contents of namespaces in multiple clusters consistent in some way. So which of the three are you referring to here, I guess?
B
We'd have to put in a lot of work, and the way that I've been trying to simplify that problem is just saying: a namespace is a namespace is a namespace. We're aggregated with a kube API that supports namespaces, and we're supporting namespaces in the federated API, but as far as a first-class federated resource with a template for namespaces — that just doesn't exist. Does that make sense? So it's a special case, and we have to handle it accordingly.
C
Sure, and then I can imagine that the template inside there — saying "this is the thing I want to propagate" — could be different than the federated thing. I'm kind of, you know, stretching the bounds of what's useful; I'm just speaking conceptually. That would seem possible in the same way that I could create a federated replica set called foo, and its template inside might be called something different, but...
B
A replica set is just an object: it exists, right, and it's in a namespace. The federated replica set is an object that exists in a namespace. And so, to me, introducing another concept that can't actually contain things — because we don't really want another special case... well, that's a preconception, I guess.
B
So my thinking is that having a federated namespace means there's kind of a disjoint. Like, if I have a federated namespace, maybe it would make sense if it was a one-to-one mapping with a kube namespace — kind of like how a cluster from the cluster registry has a one-to-one mapping with a federated cluster, and they just store different things. Maybe that would make sense. If so, I guess it's possible and I guess I shouldn't discount it. But does that make sense, like what I'm envisioning? Yeah.
C
Yeah, I guess I'm trying to avoid creating too many special cases, and I also think we need to be very clear on, you know, things like RBAC. You can just say "RBAC — federate RBAC", but actually there are quite a few fairly different concepts there. There is what your permissions are in the Federation control plane to create things, and that's...
A
I want to interrupt you guys in between. How about thinking of namespaces as two cases: one is a federated namespace, and the other is actually a federation namespace. So there's the concept of a namespace that applies to, say, okay, this is used to segregate objects, or give us segregation for a user, or that kind of stuff — and the same thing can apply to the control plane, so a federation namespace...
A
That is only a federation-specific object, and it is not necessarily the same type which will have a template within it that corresponds to a kube object — that's a federated namespace in that sense. So you can treat the namespace exactly like every other object which we are federating, but we have an additional federation namespace which can provide some more functionality for separation of resources in the control plane — for example, you can have multiple federated resources of the same name specified within two different federation namespaces. I think, ooh, like I just...
B
I'd like to maybe clarify the language. For me, I think I've just been thinking about it as the Kubernetes namespace and the federated namespace, and the difference would be: a Kubernetes namespace is the universal concept for namespacing, and that exists whether in a member cluster or in the Federation control plane; and then a federated namespace would be a special case that sort of, as you say, defines maybe placement, or cluster-by-cluster differences — whatever could actually control propagation. Does that make sense? Yeah.
B
So that I can, you know, start the Federation — which consists of a kube API, the Federation API, and the cluster registry API — and then start a controller for, say, federated secrets, create a federated secret, and then actually watch it go to member clusters. I should say the test fixture actually has member clusters as well. So that's kind of step one for me: just being able to take a federated resource and create, you know, the associated kube resources in member clusters.
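The "step one" flow described here — a federated resource whose contents get stamped into member clusters by a controller — could be sketched roughly as follows; the type name, API group, and `template` field are assumptions for illustration, not the actual API of the prototype:

```yaml
# Hypothetical sketch: a federated secret wrapping a template of the
# plain kube Secret that a controller would create in each member cluster.
apiVersion: federation.example.io/v1alpha1
kind: FederatedSecret
metadata:
  name: db-credentials
  namespace: demo
spec:
  template:                    # the kube resource to propagate as-is
    type: Opaque
    data:
      password: cGFzc3dvcmQ=   # base64("password")
```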
B
The next step is going to be implementing some mechanism that does overrides, and then the third step will be implementing something that supports placement. I'm not worrying about scheduling at this point, but I'd like to have something concrete to work on once I get to placement, which hopefully won't be too long. So it's not that we have to decide it today, but I don't really want to have this dragged on any more than you do. Sure.
C
You know, I think that makes a lot of sense to do, and then these notifications — we can execute them pretty quickly inside that infrastructure. Cool, okay. Actually, without wanting to derail this conversation, I was wondering: we've got quite a few new members here, and I certainly don't know the faces or the profiles of all these people.
C
Should we have a quick sort of roundtable introduction for the people who may be relatively newer here? There's James and Dario. I think some of us are fairly well known to everyone, but some of us maybe not. Does anyone want to introduce themselves and tell us what they're here for, what they're interested in, and where they come from?
E
Sure, I can start. This is James. I'm a software architect and one of the contributors on the Linux Foundation ONAP project, the Open Network Automation Platform. In the most recent release they're migrating a lot of their deployments over to use Kubernetes, and in the subsequent release following that there are plans for a lot of the components to have support for geo-redundant or multi-cluster resiliency. So I'm just here to mostly learn whatever I can that'll help move that forward. Cool.
D
Darío. I work for Amadeus, and we are a strong partner with Red Hat; we were one of the first to adopt OpenShift as an enterprise solution. We are following the federation project because we have a lot of clusters all over the world and we would like to federate them. And that's all — we have some meetings, and this is one of my personal activities, to follow up on federation.
A
We can iterate, in effect, until we probably reach a conclusion or a statement. As Maru is already doing, we can certainly follow what was part of the original proposal in the document — like placement being a separate object, with a very minimalistic mapping between the two, I mean the directives.
F
One quick thing I was curious about: I remember one of the goals of this part of the Federation project was that people would be able to explore different solutions and approaches. Is anyone besides Maru working on prototyping, or sort of playing around with this on their own, or is Maru really the only one doing, I suppose, prototyping?
C
I'm not aware of anyone other than Maru, but the assumption is that the architecture will be such that, if people don't like what Maru has done, or what this group does in terms of implementations of controllers or whatever, it'll be fairly easy to swap in different ones — as opposed to the current monolithic kind of approach in version one, which essentially, you know, bundled everything into one big controller, so you couldn't easily tease things out.
G
This is Jonathan — to dovetail on what Quinton said: I think part of the challenge here is finding the right primitives we can compose, so that, to the greatest extent possible — and composition is very challenging to do in a general way — but to the greatest extent possible, if you want a different API for placement, for example, you should be able to do that without changing the rest of the stack around it. Which may be just a rehash of what Quinton just said.
F
That all makes sense to me. I guess, looking at it, I have a concern — I'm wondering: if Maru is the only one doing this prototype work, and Maru comes out of this work with some product that works, and nobody else has done any exploration of it themselves, would most people just end up going with what Maru has done, because it's done, rather than spending time looking at other approaches? Yeah.
B
The reality — sorry, carry on... Sorry, I have just one thing to interject here. I think there's kind of a barrier to entry around this, which is that you need a lot of infrastructure to be able to run a kube API server, run a cluster registry, run a federation API, run the controllers — to validate that something works. So most of the work in the fnord repo is about creating reusable stuff to enable experimentation. Up until today, I would say people would have had a really high bar to clear to go...
B
...you know, explore things. As of today, I think you actually have some basis for doing experimentation if you wanted to, and I would encourage anybody who's interested in doing that to contact me, because we are absolutely — I mean, not just me, the working group in general — I think we'd all like to have as many people experimenting and prototyping different ideas as possible, and proving or disproving the conceptions that we've been talking about.
C
Just one comment to add there: I think there's a balancing act between, on the one hand, using all the resources we have available to experiment and not actually producing something that there is any amount of agreement on being the correct way forward, and, on the other end of the spectrum, just looking at one thing and only spending energy on that. So my personal preference and advice would be that we try to have a kind of central one of these things, and I'm...
C
You know, the way it's going, it's looking like that's what this group is going to produce, because there seems to be a reasonable amount of consensus here, and maybe Maru's prototype will morph into the sort of default alpha implementation, which then obviously everyone can play around with. And we may decide that it's not right, and then we change it — because that's what alpha APIs are for — but it would eventually become...
C
...you know, one of the commonly used implementations of cluster federation. And if somebody else didn't like the way that panned out — ideally they should contribute at this point in the project, so that we can ideally come to some consensus. But if that's not possible, and there's a minority group that thinks a different way of doing it is a good idea, at least they'll have the infrastructure to... yeah.
G
If somebody has a strong and different idea for certain parts of this, that would actually be really useful for this group, because it would allow us to more easily see whether the boundaries of what we're discussing break down in the way that we expect them to, so that we can accommodate, say, two different approaches for parts of the problem space like we're planning to. It's very tough to do that otherwise.
B
That would be where I would expect experimentation. I wouldn't so much expect, like, "oh, I'm just going to re-envision the problem in its entirety"; it's more like, "well, I have some ideas about placement that are a little bit different from what we've been discussing", or "there's a tangent that we said we didn't want to do".
C
Yes, yes, absolutely — I think that's the right way to go about it. I mean, let's face it: the problem we've had in the last year or so is not actually too many people experimenting; the problem has been not enough people actually writing code. So that's what we need to address, and yeah, I wouldn't want to fragment the few people we have writing code now across too many divergent efforts.
A
So I was thinking — the last three meetings, I guess; yeah, this is the third meeting where we have been talking about placement — we've not necessarily come to a solution which can solve everything. So let's keep it to what it is right now. I will say this: let's just leave placement the way it is and continue thinking about it, and let's move on to the next object which might need some discussion — for example, status or scheduling.
F
One thing I might ask: has somebody written up — or is there enough detail in the notes — that somebody coming back trying to understand the placement discussion could reproduce it? If it's been discussed over the past three sessions, it feels like there's enough information that if somebody could summarize it into a form that other people could look at later, that might be useful.
B
I mean, the placement doc really hasn't been updated to take into account some of the discussion we've been having, especially around the canonical place to put placement in the case of applying policy, and the whole discussion around one-to-one versus one-to-many. One-to-many seemed like something that was convenient for applying placement across all resources in an application, say, and maybe would be nice for namespaces, as you were talking about earlier.
B
You can't really modify a Kubernetes Namespace object to have placement, but the discussion we've been having around policy suggests to me that having a one-to-one mapping maybe makes more sense, at least as a primitive. I think it's nice to have a one-to-many placement mechanism for convenience, but ultimately the primitive is: where does this resource go? And whatever mechanism determines that — whether it's automated, or policy, or manual specification — they all boil down to "where do I put this thing?", which is what I'm looking at.
B
I mean, it's kind of an obvious question, I hope, but I'm just trying to establish a baseline: for any given federated resource, we need some indication of which clusters it goes into. That's kind of the base requirement, and where we put that was the question, and we were kind of toying with the idea of having some...
B
The way I was conceiving of it was: you basically have either a selector or references to federated clusters, and you could have either one, and that would allow either a weak association or a strong association — you just have to pick one. Does that make sense as the simplest possible thing? I'm not saying we want to do it, but I'm saying that's the simplest, to my mind, useful implementation of placement — very close to what we're talking about.
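The "selector or references, pick one" idea might be sketched like this — hypothetical type and field names, with the two forms understood as mutually exclusive:

```yaml
# Hypothetical sketch: placement by explicit reference (strong association)...
apiVersion: federation.example.io/v1alpha1
kind: FederatedPlacement
metadata:
  name: webapp
spec:
  clusterNames:            # strong: exact member clusters, by name
  - cluster-us-east
---
# ...or by label selector (weak association); a given object sets only one form.
apiVersion: federation.example.io/v1alpha1
kind: FederatedPlacement
metadata:
  name: webapp-selector
spec:
  clusterSelector:         # weak: any registered cluster matching these labels
    matchLabels:
      region: us-west
```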
C
I'm not totally comfortable yet with that idea. I mean, it seems pretty clear that almost all use cases will involve putting different resource types into the same cluster — if that's a good way of describing it. To have a useful deployment into one or more Kubernetes clusters, you typically need a replica set or deployment, and a secret, and a service, and whatever — more than one thing — and you can't have a useful system if those things are in different sets of clusters.
C
So if your secrets end up in a different cluster than the replica set or deployment that needs them, you don't have a useful system. It seems to me the common case would involve making sure that these things are consistent — that the placement of multiple resources would be consistently specified. And I did mention that, you know...
C
...proximity and those kinds of things are more nuanced than just having the same placement directive, but the very first step is to make sure that all of these things have consistent placement directives, and I'm not sure that having them basically cut and pasted into all of those resources, and trying to update them all in concert, is a good starting point.
B
I guess what I'm trying to separate in my head is the difference between directives to a propagator versus what a user would define, and maybe I should just make that explicit. So if I am, say, a federated secret controller — because that's what I've been working on — I have a federated resource, I have a collection of clusters, and I'll need to determine which clusters these secrets are going to go into, or which...
B
...which maps to a placement primitive. And then, at a higher level, maybe I have, you know, a propagation resource that groups a bunch of resources together — but that's a placement directive. So I thought that was what we were talking about. To me they're kind of logically distinct, because of how I define the place... yeah, right? Okay.
B
My way of thinking about it is kind of mixed up with scheduling a little bit — scheduling is more than just that — but to me, I guess I've been confusing the two: a declaration is what I want, and a decision is what I'm going to get. I've maybe been worrying too much about the decision, and I'm not particularly concerned with how the declaration works, so long as we have an effective way of getting to the decision. That makes sense. Yeah, okay, so please continue.
A
Yes, so concluding, I was falling back to this: either the scheduler or a controller would have this information calculated or figured out some way, and we remain with the scheduler or the controller having a one-to-one mapping of the placement directive onto an object; whereas I think whatever is specified by the user in the API is actually a one-to-many placement directive. That is what we are computing, you know.
B
Just a thought — I wonder if this makes sense. We've been talking about placement resources, and that's kind of a way of declaring intent. What do people think about actually having, you know, overrides — not just overrides for the values of a resource in a given cluster, but also indicating placement decisions? I'm kind of trying to separate the scheduling from the propagator, and this would be a way for the scheduler to write details, like placement decisions, that a propagator would then respect. Does that make sense at all?
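One way to picture the idea of a scheduler writing placement decisions alongside per-cluster value overrides — purely illustrative type and field names, not an agreed API:

```yaml
# Hypothetical sketch: per-cluster overrides that also record a placement
# decision, written by a scheduler and respected by the propagator.
apiVersion: federation.example.io/v1alpha1
kind: FederatedReplicaSetOverride
metadata:
  name: webapp
  namespace: demo
spec:
  overrides:
  - clusterName: cluster-us-east
    replicas: 4              # value override for this cluster
  - clusterName: cluster-eu-west
    replicas: 2
  # the set of clusters listed here doubles as the placement decision
```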
B
I mean, if I have a placement resource, maybe it's selector-based, or maybe it's taking into account anti-affinity, proximity, that kind of thing — so there's going to be some mechanism that computes, based on current cluster state, currently available clusters, all these details, whether this resource should go into this cluster or not. In terms of separation of concerns, I just want to have a place where that decision can be written. Yes, so...
C
The original proposal had individual replica sets, as an example, for each cluster it had decided to put things into, and then some piece of software's job was just to stamp out whatever it was told to do: put this replica set with this number of replicas in this cluster. And the scheduler part would decide which clusters, and how big in each cluster, and all those kinds of things — templates and things like that — and that was shut down specifically by Brian Grant, who didn't want... and I'm...
C
You know, we can certainly revisit the conversation — I put that proposal forward in the first place, and it was deemed not a good idea, and at the time I didn't really fight too hard about it because I didn't think it was super important. But the main objection was that we'd then have kind of desired state in two places.
C
Namely, there is a replica set in the Federation control plane, stored in etcd, that says: we, as a Federation, decided we want X number of replicas in cluster Y; and then we propagate that, and then cluster Y has exactly the same thing — it says there's a desire to have this number of replicas in this cluster. And then the same applies to status: you've got the status of that replica set inside the cluster, and then do you kind of mirror that, but...
C
Sorry, go ahead. But so, in practice, the scheduler, for example — let's call it that — probably needs things like status in order to be able to make its decisions; certainly at the moment it does. So if it creates a replica set in a cluster and the status of the replica set is that all the pods have crashed, or that it's not making progress and the number of actual running replicas never climbs to the right number...
C
...right now it actually makes choices on the fly to create those replicas in other clusters to reconcile the problem. So we do need that status somewhere. Right now it is cached in RAM in the controller: the controller actually keeps an updated record in RAM of the status of that object in each cluster and uses that to decide where to put new replicas, etc.
B
I mean — to be clear, the reason I'm really big on the decoupling is that for some use cases, having something that works as federation v1 does today — doing scheduling and propagation inline — might make sense, and for simpler use cases, maybe that isn't what happens. It's a far...
G
I've had my hand up for a little while — I did not actually hold it up in real life — and you covered most of what I was going to say. I do have a different point to make now, which is that placement, and the placement affinity of resources that travel together, I expect to be an area where it's very likely that different folks will have different ideas. Again, I think the game there is to find the right primitives that we can compose, or that you can compose in different ways to suit your fancy.
C
Replica sets being the canonical example: there's currently, in the v1 implementation, a way of saying, put half there and half there, or two-thirds here and one-third there — and that turns out to be really useful — and there's a default, which is just to spread it equally across things. So that's a pretty heavily used feature, and it's not obvious to me how we take what's in the document at the moment, with the cluster selector, and expand that to be able to cater for that fairly basic use case.
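The weighted spread being described here is modeled on federation v1's replica-set preferences (e.g. a two-thirds/one-third split with an equal-spread default); the sketch below follows that shape but uses hypothetical type names and should be checked against the actual v1 types rather than treated as the real API:

```yaml
# Sketch only: v1-style weighted scheduling preference (2/3 vs 1/3 split),
# shown as a standalone object for illustration -- names are not an agreed API.
apiVersion: federation.example.io/v1alpha1
kind: ReplicaSetSchedulingPreference
metadata:
  name: webapp
spec:
  totalReplicas: 9
  clusters:
    cluster-a:
      weight: 2              # receives ~2/3 of the replicas
    cluster-b:
      weight: 1              # receives ~1/3
```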
C
Where is that covered — scheduling? Okay, yeah. Oh, so those are different — and the placement preferences? Okay. So placement tells you which clusters, and scheduling tells you how much goes in each of those selected clusters. Okay, that sounds reasonable. I'll go and scrutinize the document a bit better ahead of... okay.
C
I'm trying to work out in my head how consistent — how decoupled — these scheduling preferences are from the placement preferences. It seems like they would have to be kept consistent, in a sense. So right now, in the example document, the cluster selector says region equals us-west, and the scheduling preferences say us-east is five and us-west is five — let's say, as relative weights. So that isn't going to work at the moment. I see.
B
You know, to my mind I'm not entirely sure how they are reconciled, in the same way that I don't think they're really reconciled in federation v1, because they can both be in operation at the same time. But the challenge for me is: you could use placement without scheduling, and I guess I'm coming to think that maybe you're right — you can't really use scheduling without placement, because placement is kind of implicit in scheduling, right?
B
It implicitly defines placement: I want you to go into this cluster, and I want you to have these values. Yeah — but to me, scheduling would drive a placement decision, not a placement directive: "I'm going to say you're going to go into this cluster, and these are the overrides you're going to have." Does that make sense? From a logical perspective it's kind of hard to separate from the mutation, I know, but I'm...
B
I think that's correct, and I think that's okay. Yeah, all right. It's still a little bit muddy, but I guess in my way of thinking there are some use cases that don't really require scheduling — like, I just want to stamp out identical copies, or just hard-code what the overrides are.
C
There are preferences as to which clusters I want to go into, right? Correct — those are currently captured in the federated placement, with the cluster selector saying: any cluster that falls within this cluster selector, I want. And then there is the requirement to be able to determine the relative sizes of those placements, and that is currently in what is called the federated — call it replica set scheduling preferences — which also specifies a set of clusters, the numbers of replicas that can go into those, and weights.
B
The difference is that replica set scheduling preferences is configuration for a scheduler, to determine what the actual count will be for a given cluster, whereas overrides is a way for the user to define any field that they want to change — not just replica count. They could say: I want to use this image in this cluster, because it's a different region; I want to use these labels, because I label things a little bit differently in this...
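So an override, unlike a scheduling preference, could touch arbitrary fields. A hypothetical illustration (type, fields, and registry URL are all invented for the example):

```yaml
# Hypothetical sketch: user-defined per-cluster overrides on arbitrary
# fields (image, labels), not just replica counts.
apiVersion: federation.example.io/v1alpha1
kind: FederatedDeploymentOverride
metadata:
  name: webapp
spec:
  overrides:
  - clusterName: cluster-eu-west
    image: registry.eu.example.com/webapp:1.4   # regional image mirror
    labels:
      tier: frontend-eu                         # cluster-specific labeling
```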
C
Let me be concrete: I want to put my application in two clusters — I don't care which ones, as long as they're in America — and I want two-thirds in one cluster and one-third in the other cluster. It's not clear to me how I specify that. It sounds like I need a replica set scheduling preferences to specify the two-thirds/one-third requirement, and it sounds like I need a...
B
Sorry — if I'm defining scheduling preferences and saying I want this many in this cluster and this many in that cluster, my assumption is that the scheduler would determine placement from that. The user wouldn't be responsible for determining it. You would be saying: okay, I want X replicas in this cluster and Y in that cluster — and the scheduler would set the placement so that it goes into the clusters that it's setting the overrides on. So scheduling preferences configure the scheduler.
G
I also think it's not super clear from this document at this point what the relationship of these pieces is to one another, and I wondered whether that might be something we can clarify. I feel like we are very much still trying to figure out exactly what the right API constructions are, so that should also be noted.
C
I don't have too many philosophical objections to the direction it's taking; I just don't know how I would do a pretty basic thing with what's in the document right now. But let me take it offline, see if I can figure it out from the document, and put concrete questions in the document to clarify what I don't understand.
G
Yeah, I've had much better throughput on things in group discussions when they happen more than once a week. It does make them more expensive, so we should pay attention to whether we think they're useful, but right now I would say it's quite useful to have twice-a-week discussions, I think.
F
Regarding anything anybody said they were going to do: Maru, I'd be kind of interested in getting a bit of information out there about whether your stuff is ready for people to start prototyping on. I don't know if it's in a state yet that people can really start diving into.
B
I think you might have missed me saying it, but I'm more than happy to bootstrap anybody who wants to get started working with what I have. It should be possible to add new API types and play with propagation, because getting a federation up and running is pretty easy in an integration test environment. It's...
B
So it's way easier. As far as putting it in a different repo: I think for now I'd prefer to just continue soldiering on, and once we've reached a point where we're happy with what's going on, then figure out how to transition to that other thing. But in terms of collaborating with other people, I'm totally happy — I mean, I...
C
I think it'll be important to have a kind of "this is the thing that this group is using as the place to get stuff working", and hopefully — let's say Maru gets run over by a bus tomorrow — I would like to think that this group would continue using that and contributing to what will ultimately become v2. Yeah.