From YouTube: Kubernetes Federation WG sync 20180129
D: Are you guys able to see? I have already shared that. Oh good. Yeah, so last time, just to recap: we had a discussion around this diagram, which is just a loose relationship among the areas of functionality, nothing set in stone. We had got to the next section in the document, basically brainstorming around the specifics of what the API might be.
B: I'm happy to go through anything. I should say I've started prototyping this, and that's not to preclude other people from doing work; I mean, you seemed pretty interested in having things move ahead, and so I'm trying to build knowledge around how we're going to compose this thing and test it, that sort of stuff. I think at some point I'll point people to that. My goal is to push PRs, not just a pile of code, and obviously this isn't an official thing; it just gives people an opportunity to have visibility.
B: The only PR I have right now is trivial; beyond the base commits in the repo, it's just initializing the API server builder. Subsequent commits will be things like adding a Secret, adding testing for this, getting integration testing working. Anybody who's interested in following along, participating, or contributing, feel free. It's not really official.
F: There were sort of two themes that we wanted to cover today. One is to actually dig through the details of this brainstorming on how to do templates and so on, and the other area was to figure out whether there are more people who want to start implementing stuff: is there stuff we can usefully implement, like the stuff you've just mentioned?
F: For example, getting some of the less contentious APIs actually implemented in an API server and getting the tests to run against them. These kinds of things seem like work that can happen in parallel with these discussions, and we could obviously tweak those things if we change our minds on anything, but it seems like the bulk of the work could be done, and I think that's one of the goals here.
B: My mandate is to get to sort of a minimum: the minimum thing, which would be Secrets, ConfigMaps, and ReplicaSets, with placement, with overrides, with propagation. That's not to say that what I'm coming up with is going to be by any means comprehensive. It's just: what's the midway? Let's go as far as we can in the direction of having a solution.
F: Excellent, yeah. That was sort of the direction I was thinking of recommending: picking things out. You know, we have a bunch of existing objects that we support in the existing system, and we have implementations of the API. Well, the API is basically nothing more than the Kubernetes API plus some annotations, and we have the controllers, which are kind of bundled-up versions of all of these areas of functionality that we've been discussing.
F: So one option would be to basically go through that list in some reasonable order and convert them to what we think a reasonable shot at the new API is, given the degree of consensus we have at the moment, and then cut those concrete implementations as some sort of review point and say: yes, that's broken, or it's good, or whatever. And then, in parallel, once we've identified, say, two or three API objects that we can agree on, that's a good basis.
F: All right, so maybe then you and Shashi and whoever else can get together after this and figure out concretely specific things to do in the next few weeks, and anybody else who wants to get involved in actually writing code in the next week or two can climb in there. And then we can use the rest of this discussion to just go through the current brainstorming in the doc and see if we can get some more agreement there.
B: There is an outstanding question that Jonathan raised; I'm not sure where in the document, maybe it is just below that diagram. The first question that I think is actually really important is how we handle versioning.
A: Yeah, has anybody, or has everybody, seen Jonathan's question?
B: This is covered in that section, and I think this is fine. The only sort of weird thing about it that I was thinking about was: if I'm starting off creating a new federated ReplicaSet, the initial version should probably be alpha, because I haven't finalized a propagation mechanism or how it's going to behave scheduling-wise, that sort of stuff. But it's already apps/v1 in Kubernetes, and so I would suggest that our v1alpha1 would target apps/v1beta1.
B: Our v1beta1 would also target apps/v1beta1, and our v1 would target apps/v1, because that's the stable version. Does that make sense? So the Federation version will always be less than or equal to the Kubernetes version, and hopefully they converge on equal. Does that make sense? This is kind of a funny topic; I'm hoping it's coming across, yeah.
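The version mapping being proposed could be sketched like this hypothetical manifest; the group name, kind, and field layout are illustrative assumptions, not a settled API:

```yaml
# Hypothetical sketch only (names and layout are assumptions, not a settled API).
# The federated wrapper starts at alpha even though the workload type it wraps
# is already stable in Kubernetes; the wrapper's version trails or equals the
# version it targets, converging on equal over time:
#   federation v1alpha1 -> targets apps/v1beta1
#   federation v1beta1  -> targets apps/v1beta1
#   federation v1       -> targets apps/v1 (the stable version)
apiVersion: federation.k8s.io/v1alpha1
kind: FederatedReplicaSet
metadata:
  name: my-app
spec:
  template:      # the schema available here tracks the targeted apps version
    spec:
      replicas: 3
```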
F
I
had
a
similar
discussion
in
my
own
head
recently
about
similar
stuff,
em
and
I
came
to
roughly
the
same
conclusion.
I
guess
any
caveat
is
that
in
theory,
all
later,
kubernetes
versions
are
backwards
compatible
in
the
sense
that
you
can
forgiving,
maybe
I'm,
maybe
I'm
about
to
talk
garbage
and
I
was
gonna
say.
If
you
don't
know,
if
you're
a
and
Eddy's
client-
and
you
don't
know
about
subsequent
versions
an
object,
you
can
just
end
your
old
version
to
the
same
API
server
and
everything
works.
Fine.
B: Sure, at some point in the future it does. But it's not so much a compatibility issue that we're addressing here; it's more that if I am creating a federated ReplicaSet, the fields that are available in the template will be dependent on the API version of the Kubernetes object that I'm targeting. So if it's an older version, it may not include fields that the user wants to use. Yeah.
F
Yeah
no
I
totally
agree
and
I.
Guess
that
begs
the
question
as
to
whether
the
Federation
API
needs
to
provide
the
same
backwards
compatibility
so
so,
given
installation
of
Federation,
would
it
support?
You
know
v1
maker
said
templates,
mb2
replica
said
tablets
at
the
same
time
the
appropriate
methods
I
think.
D
So
yeah
I
think
the
same
analogy
which
the
replica
said.
Two
parts
phase
can
be
used
like
both
of
them,
for
example
in
competitors.
If
you
try
to
use
an
older
version
of
replica
sets,
which
probably
has
been
upgraded
to
be
one
right
now,
you
would
still
be
targeting
to
the
latest
version
of
parts
and
you
wouldn't
specify
the
APA
version
at
all
in
the
template.
So
we
can
probably
do
the
same
thing
and
what
Maru
you're
suggesting
seems.
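The analogy being drawn is the built-in ReplicaSet, whose pod template carries no apiVersion or kind of its own; the embedded PodTemplateSpec is implicitly versioned along with the enclosing object:

```yaml
# A plain Kubernetes ReplicaSet: note that spec.template (a PodTemplateSpec)
# has no apiVersion/kind field. The pod schema is implied by the ReplicaSet's
# own apiVersion, so clients never pin a pod version inside the template.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:          # implicitly versioned; no apiVersion here
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx
```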
B: So we have a point. We should just do some research and see how ReplicaSet handles its templating of the Pod, because it occurs to me that, okay, as long as we have backwards compatibility... My concern was more that I was thinking of how we represent this to the user: if they want to use a specific version of, say, ReplicaSet, what version do I... Maybe it's not a good question.
B
I
guess:
I,
yeah,
I,
so
I'll
just
say
like
I
mean
I'll.
Take
it
as
an
action
item
to
go.
Look
at
how
pods
that
does
this,
because,
if
I'm
like
well,
can
you
actually
change
the
type
of
the
field
between
versions?
But
maybe
that's
okay,
because
there's
backwards
compatibility
implied
in
the
pod
like
spec
anyway,
oh
I'll
figure
that
out
but
Jonathan
sorry
for
education.
This
conversation,
do
you
have
any
thoughts.
G: I didn't have any thoughts when I asked the question. I just sort of saw that, and given our previous experience with the Federation versioning, I figured it was worth at least mentioning when I saw it encoded here in the template. I guess one thing that does come to mind is that if you have this progression of a federated resource from, say, alpha to beta to v1, I wonder if... so now the Federation resource is at v1.
F: I think, somewhat irrespective of what Kubernetes does right now, the template version and the federated object version are independent of each other; they already are different version numbers. So, you know, version 27 of a federated ReplicaSet may implicitly name version 2 of the ReplicaSet template, and then we rev the Federation version as many times as we like, to change the placement options and whatever else is in that wrapper, without the ReplicaSet template changing during that period. Does that make sense? Yeah.
B: That makes sense. That was my sort of off-the-cuff thinking, and I think what Quinton is saying makes more sense: there doesn't actually have to be a relationship. I was kind of concerned that if the user wanted a specific API version of a Kubernetes resource to be propagated, they would have to have some sort of matrix to figure out which API version to use.
B: And whatever API version is current in that minimum supported release, that's at least what you're going to get, in terms of one template version per release. Okay, okay. I will make a point to have a section that summarizes that in the doc. Thank you, Jonathan, for raising that and for sorting it out, because my initial impression was clearly not well thought through.
D: Yep. So what I could sum up was that placement, which we are specifying as a functional area, seems fine to me; whereas when I tried thinking about deriving an API for the placement, I was already confused. So, for example, if you read the comment that I put out: implementation-wise, we have a couple of options. We could have a more simple approach.
D: Just like Services versus Endpoints, where labels are matched; that way we can just match labels on resources and clusters, and resources go to the clusters which have the same labels. Or you could have some midway approach, which is what the cluster selector currently implemented in Federation does, or you could have a full-fledged policy-based implementation, which is actually sort of integrated into this decision using OPA and admission controllers. And there is some overlap of placement with scheduling as well. So I have two questions on this. So let's look here: one, Open Policy Agent...
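The two simpler options mentioned can be sketched concretely. The Cluster object and the cluster-selector annotation are roughly what Federation v1 shipped; the exact annotation payload shown here is a best-effort illustration:

```yaml
# A labelled cluster, as registered in the Federation v1 control plane:
apiVersion: federation/v1beta1
kind: Cluster
metadata:
  name: eu-west
  labels:
    location: europe
---
# The "midway" approach: the Federation v1 cluster-selector annotation,
# whose value is a JSON list of requirements matched against cluster labels.
# This ConfigMap would only be propagated to clusters labelled location=europe.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  annotations:
    federation.alpha.kubernetes.io/cluster-selector: >-
      [{"key": "location", "operator": "in", "values": ["europe"]}]
```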
B: ...to have him participate in the discussion. I'm hoping to; I really should have asked him to try to show up for this if he could, but we'll try to get him into a future meeting, either the Federation working group or multi-cluster, just to have his direct participation. In my mind, having placement as a separate resource has the advantage of allowing policy to be enforced. With the current annotation-based mechanism, you could enforce it if you had an admission controller.
B
That
said,
you
can't
change
these
fields
under
these
circumstances,
but
if
you
had
I
mean
it's
difficult
to
do
that
in
a
configurable
way
like
in
an
annotation
based
mechanism,
if
you
have
a
resource
like
you
know,
Federation
placement
or
something
like
that,
you
could
potentially
have
Harbach
rules.
That
said,
you
can't
change
rules.
B
You
can't
change
those,
so
policy
could
actually
be
enforced
by
some
sort
of
policy
engine.
It
could
be
set
via
placement
resource,
and
then
you
know
you
only
have
privileged
users
being
able
to
modify
that.
Does
it
make
sense.
So
to
me,
it's
like
having
a
separate
resource.
Just
to
summarize
having
a
separate
resource
allows
us
to
like
reuse
kubernetes
like
a
CL
mechanism,
which
is
our
back.
If
it's
all
in
the
one
same
object,
then
you
have
to
jump
through
hoops
to
be
able
to
protect
like
on
a
field
basis
and
I.
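The RBAC point can be sketched with an ordinary Role: if placement lives in its own resource, protecting it is a one-rule policy, whereas field-level protection inside a combined object has no native RBAC expression. The group and resource names here are hypothetical placeholders:

```yaml
# Sketch: RBAC can guard a whole (hypothetical) placement resource, but it
# cannot guard individual fields of a combined object. The apiGroup and
# resource name below are placeholder assumptions, not an existing API.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: placement-admin
rules:
- apiGroups: ["federation.k8s.io"]        # hypothetical group
  resources: ["federatedplacements"]      # hypothetical resource
  verbs: ["get", "list", "watch", "create", "update", "delete"]
```

Binding this Role only to privileged users would make placement effectively read-only for everyone else, which is the enforcement property being described.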
F: I don't object to the idea of separating these things out, but my interest is actually from a different point of view. I think in practice, if you look at the current mechanism, it does actually achieve what you're suggesting, in that you have these policy objects, which you specify and which have their own set of RBAC rules, and then you have admission controllers, and those enforce them, you know.
F
What
the
current
mechanism
doesn't
provide
is
the
ability
to
share
placement
across
multiple
objects,
which
I
think
is
quite
a
common
user
requirement
so
and
not
only
the
the
actual
requirement
for
placement,
but
at
the
actual
placement
after
scheduling.
So
let's
say
you
know
you
had
a
replica
set
and
a
secret
and
a
config
map,
or
something
that
logically
had
to
end
up
next
to
each
other
for
the
thing
to
work.
F
Currently,
you
can
specify
the
same
selection
criteria
for
all
three
of
them
and
you
can
say,
put
me
Europe,
but
they
may
not
actually
end
up
in
the
same
zone
as
each
other
and
therefore
not
actually
fulfill
your
requirements.
So
then
you
forced
to
kind
of
give
them
an
explicit
cluster
to
put
them
all
in,
and
things
get
messy
and
one
way
of
getting
around.
F
That
would
be
to
have
some
kind
of
affinity
rules
where
you
say
you
know,
I
want
my
secret
next
to
this
replica
said
wherever
that
ends
up
and
all
that
kind
of
stuff.
But
another
way
of
doing
it
is
just
to
say
that
you've
got
a
single
selector,
and
that
thing
you
know
selects
to
some.
You
know
set
of
clusters,
for
example,
and
then
all
of
the
things
that
that
are
tied
to
that
selector
end
up
in
in
those
same
places
and
so
that
that
part
of
it
I
find
quite
interesting.
I.
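The single-shared-selector idea might look something like this: one placement object selects clusters, and the ReplicaSet, Secret, and ConfigMap all bind to it by label, so they are guaranteed to land in the same clusters. Every kind and field below is an illustrative assumption, not an existing API:

```yaml
# Hypothetical sketch: one placement shared by several resources so they
# co-locate. Kinds and fields are assumptions for illustration only.
apiVersion: federation.k8s.io/v1alpha1
kind: FederatedPlacement
metadata:
  name: shop-placement
spec:
  clusterSelector:           # which clusters qualify
    matchLabels:
      region: europe
  resourceSelector:          # which resources are tied to this placement
    matchLabels:
      app: shop
---
apiVersion: v1
kind: Secret
metadata:
  name: shop-credentials
  labels:
    app: shop                # bound to shop-placement, so it travels with
                             # the ReplicaSet and ConfigMap labelled app=shop
```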
B: I'm not sure if that was kind of the end goal, or if it was just a stepping stone to some future iteration, but it seemed like there was admin-defined policy, and then it was applied to resources. So I'm not sure, when you're talking about actually having policy directives on the resources themselves: is there any precedent for that, or can you describe what you mean? No?
B: You see, that's not a generic scenario. I mean, the precedent for this is node selection, and I'm pretty sure that's sort of the mechanism that was put in place for cluster selection, at least in the existing implementation. You mean node selection? Yes, node selection in Kubernetes; when I was looking at the code that implemented cluster selection, it closely mirrored what node selection does, yes.
F: Well, so maybe before we consider that to be a final decision: I mean, I do have some reservations about separating these things out. You know, you end up with synchronization problems, and maybe there are good answers to them. If you specify the placement directive before there are any things that match it, then I guess nothing happens; and if you specify something afterwards, if you specify something that doesn't have a matching placement, it presumably doesn't get scheduled.
B
Mean
the
goal
is
isn't
saying:
this
is
what
we're
doing
it's
more
like
thinking
about
what
potential
like
just
kind
of
here's,
a
solution,
space,
here's
potential
solutions,
I'm,
not
saying
this
is
what
we
do.
I'm
just
gonna.
Try
this
path
and
we'll
see
how
I,
like
it,
people
like
it.
How
well
it
works
and
it
doesn't
work.
We
can
iterate
yeah.
F
And
on
the
flip
side,
I
mean
if
everyone
in
this
room
agrees
that
this
is
the
best
way
to
do
it.
We
should
just
make
that
as
a
tentative
decision
and
say
that's
what
we're
gonna
do
unless
somebody
comes
up
with
good
reasons,
I'm
quite
keen
for
us
to
get
to
some
sort
of
decisions
sooner
rather
than
later
so
I
just
said,
I
can't
think
of
any
good
reasons,
not
to
consider
that
to
be
the
right
way
to
do
it.
Does
anybody
else,
disagree,
I,
guess,
I.
A
Still
I'm
not
entirely
sure
why
we
always
want
them
separate
like
this
seems
like
for
simpler
cases,
or
at
least
some
users.
It
would
be
less
config
and
less
overhead
to
just
have
the
placement
and
the
override
just
next
to
each
other
like
here.
In
this
case,
value,
1
I,
think
that's
a
valid
argument.
G
One
thing
to
note
Greg
is
that
in
practice,
users
are
doing
these
as
llamó
files.
They
can
still
put
the
Federation
placement
and
the
Federation
overrides
and
the
object
next
to
each
other,
separated
in
that
one
llamó
file,
it's
not
exactly
quite
as
maybe
not
quite
as
integrated
of
an
experience,
but
a
user
could
make
one
llamó
file
that
contains
all
of
their
the
Federation
relevant
objects,
for
one
object
that
there's
one
object
are
trying
to
feder.
8,
ok,
I.
G: I think that's a good point, Quinton. I would say, though, that I think people who are using the Kubernetes API or writing a custom controller are unlikely to be the sort of people who are going to want the very simple user experience. I expect that the people who want the simplest experience will tend to be using sort of the basic Kubernetes tools or a GUI, and in the context of a GUI, the representation of this can be much more integrated than the actual objects being created.
B
Mean
this
has
kind
of
gone
through
a
couple
iterations
and
previously,
like
I
thought.
Well,
maybe
there
should
be
a
field
on
the
resource
you
could
either
have
like
a
local
selector
or
you
can
have
a
reference
to
a
placement
resource
and
it
would
be
exclusive.
You
pick
one
of
the
other,
but
I
I
mean
I
think
we
can
always
add
that
capability,
like
as
long
as
something
as
additive.
It's
relatively
easy
to
add
it.
It's
more
like
eating.
B: As long as we don't have to do, like, ReplicaSet placement, Deployment placement, or ReplicaSet override, Deployment override. By having things be agnostic and use selectors, or some other type-agnostic mechanism for relating them, to me it's just easier to experiment with. I'm not saying this is what we end up with; I mean that if we want to vary the implementation of one of these mechanisms, we only have to change the implementation. We don't have to go back and change it for each type.
F: I can foresee some challenges with the generic placement approach, in that there are already enough cases where we have type-specific choices. I mean, one off the top of my head: weighting, for example, is something that only applies to things that have weights. So, you know, weighting Secrets or ConfigMaps doesn't really make any sense, and it gets much more complex than that, where, if you have a Deployment across multiple clusters, there's a whole bunch of configuration around how you want the rollouts to happen.
B: Time to say "oh, I got it wrong" on overrides; I think I was only talking about placement, because overrides do imply typing, as you say. And as far as the scheduling goes, I mean, the example at the end of the document is typed, because, as you say, it doesn't make sense to have a type-agnostic scheduling preference; it's very specific to what you're scheduling. Okay.
G: So simply saying "this has a weight of 0" doesn't imply that it shouldn't be propagated, and if you choose to make that implication, that is sort of above and beyond the already existing specification. The way I think of it: the overrides affect what the objects are going to look like if you were to put them into each cluster, and the propagator, the placement, affects which clusters you are going to put the objects into.
G: I agree that the distinction is not very useful when it comes to ReplicaSets, but I think there still are potentially more valuable examples, and at the very least that one is straightforward enough to understand, and to see why there is a difference between a zero weight and not propagating, or not placing, the object in a cluster. Perhaps we can find a better example, but I still think it's conceptually sound.
G
Can
think
of
a
better
example?
Sorry
go
ahead.
I
was
just
thinking
you
could
potentially
implement
something
like
that
is
a
federation
placement
selector,
that's
some
sort
of
special
I
mean
you
could
add
more
details
to
the
selector
like.
If
the
object
has.
If
a
replica
set
has
no
replicas
in
it,
then
don't
propagate
it.
That
could
be
part
of
the
placement
spec.
No.
B: ...but the reason there's overlap is that I think we want to at least test out having these things work in isolation. Whether or not it solves people's problems is another thing, but for somebody who just wants to worry about placement and isn't going to vary things across clusters, maybe they don't need overrides; and for someone who just wants things going everywhere but wants to vary things by cluster, maybe they don't need placement. The same thing with scheduling.
G: Something Quinton brought up before, the idea of a GUI: I think one thing that is useful, and I think somebody else has brought this up before, is the idea that the Kubernetes APIs are quite low-level, and in a lot of cases people might be thinking of things at a higher level. And if the expectation is that at some point people will be interacting with a federation not so much via creating Federation placement resources, but through some user interface...
G
I
think
there's
potential
that
it
makes
it
easier
to
design
a
user
interface.
If
the
conceptual
building
blocks
are
straightforward
and
don't
have
odd
interactions,
it's
easier
to
to
mix
that
together
in
a
user
interface
than
it
is
in
a
user
interface
to
try
and
tease
things
apart
that
are
mixed
up
that
are
combined
in
the
representations
underlying
it.
F
Yeah
I
guess
the
caveat
is
what
Greg
mentioned
a
bit
earlier,
which
you
know
if
the
majority
of
your
use
cases
are
the
simple
one
forcing
your
users
to
have
to
go
through
something
or
even
your
API
or
your
GUI
designers,
to
go
through
something
more
complicated
just
to
cater
for
the
potentially
more
complicated
use.
Cases
might
not
always
be
the
right
idea,
but
I
think
we
have
personally
I
think
we're
heading
in
the
right
direction
here,
with
with
the
decomposition
and
I.
Don't
have
any
major
reservations
so
far,
I
think.
E
Maybe
one
of
the
examples
I
have
in
mind
was
I
found.
My
introduction
to
Cooper
needs
to
be
fairly
came
at
in
the
sense
where
you
have
a
deployment
and
where,
if
you
only
wanted
to
pass
a
specific
configuration,
you
also
had
to
deal
with
volumes
and
config
maps,
and
you
already
talked
about
at
least
two
resources
and
a
stanza
in
a
pod
spec
that
is
more
complicated
than
it
potentially
should
be.
So
I
don't
think
the
barrier
is
that
high
to
having
two
separate
resources.
B
The
goal
of
that
is
in
making
sure
there's
a
distinction
between
a
federated
like
I'm,
just
calling
it
a
template
at
this
point
like
a
federated
template,
is
defining
something
you
know
the
basis
for
which
you're
gonna
propagate
to
multiple
clusters
and
anyway,
sorry
I'm
confused.
Are
you
suggesting
that
maybe
like?
Why
are
we
doing
this
versus
just
having
the
kubernetes
object
or
because.
D
The
implementation,
the
the
point
that
stood
out
over
there,
was
that
the
implementation
of
whatever
the
controller
saw
the
static
controllers.
Whichever
way
you
are
implementing,
it
would
be
more
complex
to
have
interactions
between
multiple
objects
and
then,
if
they
are
active
controllers,
they
have
to
watch
all
different
objects
and
make
decisions.
F
I,
remember
that
was
talking
about
a
farm
and
I,
and
we
did
come
to
the
conclusion
that
that
putting
them
together
was
better
than
pulling
them
apart
at
that
time
and
I
can't
for
the
the
top
of
my
head.
Remember
that
it
seemed
at
the
time
to
be
fairly
conclusive
reasons
why
that
was
better,
and
we
had
consensus
on
that
now
that
I'm,
looking
at
it
I'm
struggling
to
find
very
confusing
example,
I
mean
so
a
complex
controller
that
that
did
all
of
the
things
we
anticipate
would
have
to
kind
of
trace.
F: There is a good example, actually. Let's say we had a placement, what we're calling the Federation placement object, and so we created, like, a ReplicaSet and we created a Federation placement that said "put it in all these clusters", and then somebody deleted the placement, not the actual ReplicaSet, but just the placement. With the controller, I mean, what would the sensible and expected behavior be? Would the controller notice that it's been deleted and then delete the stuff from all the clusters?
D: Yeah, the point I wanted to voice was that segregating these as different pieces of the API, and giving the facility to maybe aggregate them all together, sounds pretty nice in terms of the API, and is probably simple for a static implementation; for example, an object which has a definition, which is placed somewhere, gets propagated once and that's it. But implementing a dynamic controller, the way it is implemented in Federation, would become much more complex than it probably is right now. That is what I wanted to voice, yeah.
A
Your
example
you're
giving
Quinton
I
think
you
this
means
you
need
to
do
like
a
sort
of
a
two-phase
upgrade
right
like
you
prepare
the
placement
for
your
changes
in
the
in
the
object,
and
then
you
have
to
like
do
a
few
steps
that
like
do
it.
So
you
don't
like
this
delete
your
your
a
deployment
in
the
process.
Yeah.
F
That
that's
a
good
point.
If
you
wanted
to
make
changes
to
the
placement
and
the
spec
or
whatever
of
the
thing,
you
either
end
up
needing
like
transactions
at
the
API
level,
which
I
don't
think
we
have
at
the
moment,
or
you
end
up
with
a
sort
of
pretty
inefficient
propagation
result,
which
is
that
you
propagate
the
interim
state,
which
you
didn't
really
want
and
then
re
propagate
the
end
state.
Is
that
sort
of
approximately
what
you
were
referring
to
you.
E
I
know
I
know
the
deployment
has.
My
deployment
strategy
is
where
they
covered
at
least
two
different
ways
to
handle.
When
a
deployment
is
updated,
it
seems
like
a
placement
strategy
that
could
fit
in
the
spec
here
could
cover
one
of
these
cases
as
to
when
the
placement
is
updated.
What
is
what
is
yet?
The
expected
outcome
is:
there's
a
recreation
or
a
deletion,
or
something
like
that.
G: Also, maybe along those lines, maybe somewhat similarly: I wonder if it would be interesting to create a way, like, for kubectl, for Kubernetes authorization, where you can ask the API server if you're authorized to do something. I wonder what sort of similar provisions are possible in the Federation control plane. Is it possible to build it in such a way that you can have a way to see, like, "hey..."
G
If
I
make
this
change,
it
will
spit
out
a
propagation
object
which
is
like
this
is
how
your
change
would
affect
your
objects
in
clusters.
I,
don't
know
if
that
would
be
useful.
It
seems
like
if
we're
considering,
building
something
that
has
potentially
different
models
like
a
push
model
versus
a
full
model
that
could
fall
out
of
this
I,
say
and
I
create
the
artifacts
necessary
for
a
pull
model,
but
instead
of
pushing
putting
them
in
the
cluster
they're
in
the
place
where
something
will
pull
from
them.
G
That
I'll
say
that
again
Quinton,
so
you
can
imagine
so
kind
of
in
the
same
vein
of
how
in
kubernetes
are
back
in
authorization,
you
can
do
an
API
call
that
asks.
Am
I
authorized
to
do
this
action
on
this
object
and
you
can
might
list
config
Maps
sort
of
a
so
the
kind
of
introspection
into
what
you're
able
to
take
a
dry
run,
a
dry
run
in
essence,
so
I
would
say
if
I
make
this
change
to
this
placement
object.
G
What
will
the
propagation
look
like
and
since
we're
can,
if
we're
building
things
in
this
way
and
considering
both
push-pull
models,
where
you
could
have
a
model
that,
instead
of
a
controller,
pushing
changes
into
the
cluster,
the
controller
creates
configurations
that
the
cluster
then
pulls
that
could
potentially
be
diverted
into
a
response
to
the
user,
not
to
the
location
not
to
the
sort
of
pull
location,
but
just
to
the
user
to
see
here's.
What
the
new
configurations
would
look
like
would.
G: On Wednesday, yeah. It also sounds like Maru is not necessarily blocked and is going to be able to go and build things out in the repository and do some of the experimentation, and I think this could all happen in parallel, like you've mentioned, Quinton. One point, one sort of logistical or administrative point: these meetings, should they be included in the Kubernetes SIG Multicluster YouTube playlist, or should there be an alternative playlist for these particular meetings?
F: This one today was actually a one-off, or the first of a periodic series, and I would say they should have their own stream, or whatever you call that thing, playlist. Okay; just link it out of the Federation... sorry, out of the multi-cluster group, so that if you want to follow this working group, you go to this playlist, and I'm sure over time there will be other working groups and they'll have their own playlists. Okay.
G: So, in that case, I will take the ones that have been recorded, upload them, and try to see what is necessary in order to get a Kubernetes playlist, or any playlist, on YouTube, and also make sure that it's clear in the community repo that there is a separate list of meetings that are probably of more interest to people who are interested in Federation specifically. Yeah.