From YouTube: Kubernetes Federation WG sync 20180122
B
Yeah, I'll put a link to those back in the chat. The point of having this meeting was to resume brainstorming about the API surface and the federated API space. I sent a message to the mailing list, kind of late on Friday, with a gist and a HackMD with the same content in it, that just represents a little bit of brainstorming that we've done at Red Hat, which I think builds on everything that's been discussed previously in the Federation working group.
B
I dropped that link into the chat. What I would say to folks, and I'm not sure if this made it into the copy that's in that gist or in the email that I sent out, is that these are really just brainstorming. I'm not necessarily sure yet whether the ideas in that gist are partially, totally, or 0% useful, but I think it's probably best that we can at least get this discussion started again.
B
So I don't know where folks would like to start off, but I do think it's probably most important not to debate the particulars of API construction, but to talk about the breakdown of the different functional areas, and to agree, or see if we have any kind of consensus, that the decomposition into different functional areas is something that makes sense to people and that we think rings true about the problem space. So I'm open to suggestions on where to start.
C
Yeah, that sounds like a sound approach, Paul, but I think there is one more aspect, which I thought of and sort of tried to point out in the previous workgroup sync as well: presenting the evolution of the Federation API itself to the users. We have a couple of options. One is that currently, whatever the Federation API is (I mean, it's not really a Federation API, but the way Federation is presented to the user), it's the same Kubernetes API, and users have some knowledge that if they put some annotation on something and make that request against a Federation control plane, then they achieve something. So, for the evolution from there to whatever we are calling v2, what might be the best approach? That's the first option that might come to mind.
C
The second one, which sounds more genuine (I have been trying to think about it a lot, and I might not be completely sure if it's doable), is that we keep that API as is and use the aggregation concepts which Kubernetes now has, and present it as an aggregated API, so that earlier users can also use the same Federation control plane, whatever we are thinking of building now can gradually be presented to those users, and the older one can be phased out in some way.
B
I think both of the approaches that you just described are valid. One thing that definitely dominates my thinking about evolution versus replacement in this entire problem space in general is that it's so meta; not Federation, not any particular code that exists now or doesn't exist, but that there are just a lot of different pieces of the problem space. I think one thing that we should be mindful of as we talk about this is that it's probably valid for different folks with different motivations to eventually agree to disagree, and the sweet spot in general for solving problems at multi-cluster scale is probably one where we can pick and choose as users, if we like a particular API construction or a particular flavor of distribution of resources into different clusters. And if we find the right way to hit the tuning fork and make it resonate, maybe things will fall into different buckets, so that we can swap out what's in each bucket, right? I don't know if that makes any sense, and I feel like I am starting to ramble, so I'm gonna shut my mouth now. Yeah.
C
I completely agree with what you are saying. I'm also thinking along similar lines, where developers should be able to carry on work in parallel, and the user should be able to consume whatever pieces, in aggregation or altogether; I mean, as a user I should have an option of what I want to choose to use. I don't have that on paper as yet, but it's what I was exploring.
C
For example, Federation could be one extension API server, and the rest of the APIs, in whatever group versions we classify them, could be another one, or multiple other extension API servers, or the same API server configured with some APIs or without some APIs, those kinds of options; but the one aggregated endpoint that the user talks to could be the Kubernetes API server. One possible way, which I could not actually really try yet, is that you run the Kubernetes API server with all the actual Kubernetes APIs disabled, barring some, for example authorization or admission control; all the apps and workload APIs are disabled, and all the workload API requests are delegated to an extension API server, which is actually today's Federation API server. In parallel, another extension API server is there, which is the work in progress as we are talking about the API. How does this sound as a composite?
B
Yeah, I think that's definitely a possibility. To give an example of applying the concept I was just throwing around to a particular area: one functional area of this problem space is how we represent API resources, and how we represent them if you want to differentiate a part of them in a particular cluster. We could have a Kubernetes API server with the controllers disabled, basically, and maybe special admission control, and that could be where the main Kubernetes resources live, and then there could be some other API groups that handle different elements of the problem space, right? Basically, that's what you were describing, yeah. So I think... sorry, go ahead. Yeah.
B
So that's one flavor, right? I think there could be another flavor where, you know, some of us build API groups that have first-class variants of the important controller types, like FederatedDeployment or whatever, and those might be aggregated together with the cluster registry, and maybe also with the main Kubernetes API, like a normal Kubernetes. And then there are probably a couple of other variants too, where, for example, maybe you can use something like Helm and have Helm charts, and maybe a future version of Helm might have some feature that lets you plunk API resources into particular clusters that came from the cluster registry, or maybe you have your own template setup, right? And I think it's probably natural and good for those things to coexist, instead of trying to force everybody into one particular API construction or pattern, and that also, you know, lets us potentially compete in the marketplace of ideas. But I'll stop here, because I'm getting that feeling like I'm just rambling, and I wonder if that example makes sense to people: that there are maybe fundamentally different approaches we can take for that particular part of the problem space, and I think it's OK and natural for different approaches to coexist.
C
To take five minutes: I actually did not intend to talk about how the pieces could be constructed and sort of deployed or run. I actually wanted to point out that the API itself, as it is presented to the user, when the user consumes it today, is actually the Kubernetes API, a subset of the Kubernetes API. For example, replica sets: the way replica sets exist today, the same API would be invoked against today's Federation control plane, and it just works with the same semantics. If there are no annotations, there are some default annotations. So this remains, plus whatever we call v2 is presented as, for example (in the gist you published), an API group like federation.apps.k8s.io/v1alpha1 or something like that, with both of them co-presented to the user. I mean, there's an option to do that as well. I was trying to think of this as the API transition, as option one and option two like I talked about. I was trying to think hard about this transition and how it could be presented to the user, if it makes sense. If we decide that this option is not really a good option, then I don't want to keep on talking about the same thing, like, as you said.
E
What I'm hearing is: should what we call Federation v1 coexist with at least what Paul has been putting forth, which is different ideas that would form the basis of v2? And I think what I'm hearing you, Fon, say is: should the upgrade path from v1 to v2 be a necessary condition in order to even start talking about v2?
B
So maybe this is a good time to discuss the different functional areas that are all kind of bound up in what we've talked about as being Federation. Let's see, I think there's a section on this in the gist, so we don't have to rely on my feeble brain. I will drop a link into the chat, but...
D
Yeah, I had a quick comment, which is: I don't think those overrides necessarily need to be per cluster, and in many cases we, or users, explicitly do not want to specify clusters by name; rather, the overrides are based on some more general selection mechanism. To give a concrete example: deploy this template into North America and Europe, but in North America give it the North American config, and in Europe give it the European config, with no mention of a specific cluster name in there. I gotcha.
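To make that selection mechanism concrete, here is a minimal sketch in Python. The resource shapes (`selector`, `patch`, the cluster records) are invented for illustration; they are not an actual Federation API, just one way the "override by label, not by cluster name" idea could behave:

```python
# Hypothetical sketch: overrides selected by cluster labels rather than
# cluster names. All shapes here are invented for illustration.

clusters = [
    {"name": "us-east-1", "labels": {"region": "north-america"}},
    {"name": "eu-west-1", "labels": {"region": "europe"}},
]

overrides = [
    # "In North America, give it the North American config."
    {"selector": {"region": "north-america"}, "patch": {"config": "na-config"}},
    {"selector": {"region": "europe"}, "patch": {"config": "eu-config"}},
]

def matches(selector, labels):
    """True if every key/value in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def resolve(template, cluster):
    """Produce the per-cluster object: the template plus matching patches."""
    result = dict(template)
    for o in overrides:
        if matches(o["selector"], cluster["labels"]):
            result.update(o["patch"])
    return result

template = {"image": "myapp:1.0", "config": "default"}
for c in clusters:
    print(c["name"], resolve(template, c))
```

Note that no cluster name appears in the overrides themselves; a newly registered cluster labeled `region: europe` would pick up the European config with no change to the override objects.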
B
Yeah, so plus-one to that, Quinton. I guess when I wrote that particular piece of English I had been thinking that eventually they go into a cluster, and eventually some of those are differentiated from the template in some way. I think you're exactly right that, probably in the long term, policy and selection especially are where more people's hearts and minds lie than explicitly naming clusters, which is gonna get really annoying fast, right?
E
Labels, specifically, are what we try to apply to the cluster registry, in that you can bind labels to the clusters there, so that allows more coarse selection. And with regards to the Europe versus US example, I think there's a point in that. I mean, we call it policy configuration, but if the configurations are sufficiently different such that the deployment spec itself would be completely different, at that point maybe we're not trying to solve this, and we shouldn't be solving this, at the meta federated level; we should just punt and say these are two different deployments.
B
But here's where I think there are probably enough different approaches, and folks that will really like a particular approach for solving this functional area, that it's probably inevitable that there will be different approaches that get implemented for this one in particular, especially.
D
Yeah, I was gonna make a similar comment to what I think Christian was alluding to, which is that I think the necessity to override things and customize them per cluster is (and this is just speculation) relatively limited, and while one could infinitely customize each template as it goes into a different cluster, at some point it becomes a different thing if you over-customize it. And besides that, I think we could tweak weights and sizes of things; so in the case of a replica set, the count of replicas is one obvious thing which might be different in different clusters. Then perhaps use that as a starting point and say, what concrete use cases don't work with that, and then incrementally allow further customization where absolutely necessary, rather than starting from the other extreme and saying, what can we customize and how do we do that, which seems like it might be an endless rabbit hole.
G
We talked about layering things on top, and maybe a different abstraction is that we allow customization, but it is done through an API object which can be layered on top based on a cluster-level selector. So if the cluster has a label of US, then we apply another layer. It's basically a wrapper around kubectl apply, and then you almost don't have to declare what is and is not allowed, because anything that is allowed by the API can be layered.
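A rough sketch of that layering idea, assuming a hypothetical layer object carrying a selector and a patch (invented shapes, not a real API): each layer whose selector matches the cluster's labels is deep-merged over the base object in order, so anything the API allows can be layered without an explicit allow-list:

```python
# Hypothetical sketch of "layered apply": matching layers are deep-merged
# over the base object in order. Shapes are invented for illustration.

def deep_merge(base, layer):
    """Recursively merge `layer` over `base`, returning a new dict."""
    out = dict(base)
    for k, v in layer.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out

base = {"spec": {"replicas": 3, "template": {"image": "app:1.0"}}}
layers = [
    # "If the cluster has a label of US, apply another layer."
    {"selector": {"country": "us"}, "patch": {"spec": {"replicas": 10}}},
]

def render(base, layers, cluster_labels):
    obj = base
    for l in layers:
        if all(cluster_labels.get(k) == v for k, v in l["selector"].items()):
            obj = deep_merge(obj, l["patch"])
    return obj

print(render(base, layers, {"country": "us"}))  # replicas becomes 10
print(render(base, layers, {"country": "de"}))  # base is unchanged
```

The deep merge means a layer can touch one nested field (replicas) without having to restate the rest of the spec, which is roughly the property a wrapper around kubectl apply would give you.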
B
Yeah, I agree. So let's move on in this list, just to keep paying attention to time. There's the template and overrides, and then there's also: how do things get placed into clusters? We can probably all easily imagine a few different approaches, right? Similar to expressing overrides, maybe there's room for explicit placement, like you might have a resource that says: I want you to put these particular resources into this cluster. You might have something more like selection, like: using these label selectors, apply all the resources that you know about into these clusters. You might also select the clusters with a selector, or you might have an even more abstract policy component. But this seems like another axis in the problem space, and if my math analogies ever get annoying, or you know way more math than the very little math I know and I'm just wrong, feel free to tell me to shut up, but it seems like an independent axis, right, from how you declare these things versus...
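Two of the placement flavors mentioned here, explicit naming versus label selection, can be sketched side by side; the cluster names and labels are made up for illustration:

```python
# Hypothetical sketch of the placement axis: two ways to pick target
# clusters for a resource. A policy component would be a third, more
# abstract variant layered on top of these.

clusters = {
    "us-east-1": {"region": "north-america"},
    "eu-west-1": {"region": "europe"},
    "ap-south-1": {"region": "asia"},
}

def place_explicit(names):
    """Explicit placement: the user names the target clusters directly."""
    return [n for n in names if n in clusters]

def place_by_selector(selector):
    """Selector placement: target clusters whose labels match."""
    return [n for n, labels in clusters.items()
            if all(labels.get(k) == v for k, v in selector.items())]

print(place_explicit(["eu-west-1"]))
print(place_by_selector({"region": "europe"}))
```

Either function yields the same kind of output, a list of target clusters, which is what makes placement an independent axis from how the resource itself is declared.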
B
No, that's a valid point. Just to try to express what's in my mind, or explain better what this language refers to: you can definitely think of the differentiation of a resource, as far as where it goes, as potentially being placement too, and I agree that there is an overlap between scheduling and placement.
H
But I mean, you could have it manually defined, "I want this many replicas in this cluster", or you could have some other mechanism specifying it; but the user could also be specifying how they want the scheduler to treat that resource, and those are, I think, fundamentally different kinds of declarations.
B
Yes. So, for example, just to level-set when we're talking about scheduling: I think we're all referring to something like, take these replica sets that are derived from the same thing and placed into different clusters, and we want to manipulate those replica counts, using some information about the actual state of the world, to achieve some goal, right? In a way where something is looking at what's happening in clusters X, Y, and Z, and maybe the traffic going into them, or CPU utilization, whatever, and...
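As a toy illustration of that kind of federation-level scheduling, here is a sketch that splits a total replica count across clusters in proportion to an observed load signal. The function and its inputs are invented; a real scheduler would consume richer state (traffic, CPU, capacity) and run continuously:

```python
# Hypothetical sketch: distribute a total replica count across clusters
# proportionally to an observed per-cluster load signal.

def schedule_replicas(total, load_by_cluster):
    """Split `total` replicas according to each cluster's share of load."""
    total_load = sum(load_by_cluster.values()) or 1
    out, assigned = {}, 0
    names = sorted(load_by_cluster)
    for name in names[:-1]:
        n = round(total * load_by_cluster[name] / total_load)
        out[name] = n
        assigned += n
    out[names[-1]] = total - assigned  # remainder keeps the sum exact
    return out

# e.g. 60% of traffic observed in x, 30% in y, 10% in z:
print(schedule_replicas(10, {"x": 60, "y": 30, "z": 10}))
```

A controller would re-run this on each observation cycle and rewrite the per-cluster replica counts, which is exactly the "manipulate those replica counts using the actual state of the world" behavior described above.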
A
For the user groups out there: how are we differentiating, vocabulary-wise, between cluster-level scheduling and Federation-level scheduling? I kind of feel like these are two different things, because the node-level scheduling is information we want to transmit down, whereas the Federation-level scheduling is something else entirely. It sounds like we're conflating the two a little, but I'm new here, so maybe not.
A
Yes, because when I'm talking about replication across a single cluster, I want to differentiate. To give a concrete example: differentiate the number of replicas in Europe versus the number of replicas in Australia. I've got this information I want to communicate to the cluster, and yes, I have to denote that somewhere at the Federation level, so it needs to be communicated, but it's not so much about where these things get balanced across regions.
B
Okay, point taken, and I think that's a good point. So circling back to Christian, your comment: TL;DR, I think it's very arguable that there's some collision between placement and overrides, and higher-level behaviors as well, like rebalancing across clusters, probably feed into those two. We've just been trying to pick them apart so we can think about them in kind of an ordered way. So I'd be appreciative if anybody can find a better decomposition.
C
The first three actually contain the template and the placement, as Christian was talking about, whereas the status is different from these, because it's some feedback from the cluster, the current state or whatever it is. And scheduling is something dynamic: the first, second, and third points would start at the beginning, like when a user request comes in, whereas the scheduling is sort of everlasting, or you can say an ongoing kind of thing. So how about the four functional areas? Yes, I think.
B
Mute button operation... yeah, honestly, I don't think we all have to have unanimous consent. It's totally acceptable for all of us to see these things differently, and when it comes to actual APIs that we implement, there's no need to have unanimous consent. So yeah, if someone has a flavor of a formulation for an API that describes overrides and placement that they think really is valuable, there's no reason that we should all have to agree on and implement only one thing. I should...
H
...say that I'm not trying to suggest that the implementation has to have these five distinct areas, or even five distinct resources. But just going back to Christian's example of, well, if I'm doing overrides, that kind of is me indicating maybe the clusters I want these overrides applied to: how is that different from placement?
E
Yeah, the v4, v3, or something, that's where I'm not completely... I mean, we're starting to sufficiently change the base template. I don't know if it's "sufficiently", but we're changing the template, and then that kind of begs the question whether these should be two separate objects instead of just one.
H
The use cases may just not be clear. I mean, we were just trying to pick something that would demonstrate: I want to override something for a cluster. We're not suggesting this is the best way to do this particular thing, like varying images across clusters; if you can think of a better reason why I'd want to do overrides, we'd happily incorporate that example instead of this one. Yeah.
H
Right, but I think the goal was just having a generic way of allowing overrides for a given cluster or set of clusters, and it wasn't like we need to be prescriptive about how that would be used. It just seems like people would benefit from this capability and from separating it out, at least API-wise, and in a given implementation maybe even the first pass could ignore that requirement entirely, and if somebody needs to vary a replica count, they can do some sort of scheduling magic, I guess.
I
One interesting thing to think about in all of this is how much the Federation project prescribes how you'll use Federation, versus building something where users are choosing: all right, there's a lot of capability here, how do users figure out what to do? I think it's kind of one of the interesting problems with Kubernetes a lot of the time: the people who are building and designing these things in Kubernetes are not the people who are actually administering clusters that use them, which in this case, for Federation and multi-cluster things, I think is especially true. I mean, most of us, I know I have maybe run and managed a few development clusters on my own, and I federated them only because I was testing Federation, versus users who actually have multiple clusters in the real world and address these problems. It sounds like this design approach is leaning more towards: we're going to make a more generic set of capabilities, things that maybe let users do a lot of things we hadn't thought about, and then they can decide, okay, this seems like it would be useful to me. Which I think is probably a good thing, but it is also a dangerous thing, because it potentially leads to a world where, for a while, there's a lot of confusion, and it takes some time to find the best practices. But...
H
I think the goal is to allow evolution and iteration. Maybe upstream we might focus on a particular approach or strategy that seems to solve as many people's problems as possible, but if a user needed or wanted to vary that behavior, they could implement a new resource type and implement a new controller, and it would integrate well; that has traditionally been kind of a problem. I think that is more in keeping with Kubernetes: if you install Kubernetes, you get a set of resources and a set of controllers. If you want to vary that behavior, you technically can go in and change the resources and change the controllers. It may be tightly integrated in some areas, but that's the goal, and I think if we start from that as the baseline, we're more likely to be successful than if we're preventing people from experimenting or making it difficult. Yes.
B
One thing I want to call out for everybody is that the content that's in this gist, these are strawmen. These are all strawman constructions to explore the problem space; they're not prescriptions in any way, and we're still trying to figure out, when we talk about this at Red Hat, which of these ideas are valuable and which aren't.
J
Real quick, I think it's a useful offering, and maybe overrides, or the absence of overrides, is perfectly fine just to start with (by the way, it's Ilya). But I also would like to note that the absence of overrides may preclude users from using Federation altogether. In our case, we desperately want Federation, but we have specific examples where we need to override minimum replica counts, HPA settings, and annotations, because those are tied to specific clusters, whether it's LB annotations or storage annotations; those are only specific per cluster.
B
I think that's a really good point, and one thing to keep in mind as we talk about this problem space is: what is this like for you as a user? I'd love to have some real live users that can take a look at these ideas as we talk about them and say, hey, that one looks really good, or, I'm not sure what that means, and maybe give us some kind of feedback as we go.
B
So the next bullet on there is propagation of placed resources into clusters, and what that means, as distinct from articulating where things go, is: how do they get there? There are a few different approaches that are probably all valid depending on the use case. There's probably room for an approach that's like: take all of these resources that have to go into a particular cluster, put them into a git repo, and use something like kube-applier as a back-end to actually get them there; versus having a controller that's actively maintaining a connection to each target cluster and doing the reconciliation in an online type of fashion. But that's what propagation of placed resources into clusters means.
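The "online controller" flavor of propagation can be sketched as a small reconcile loop. The cluster client here is a plain dict standing in for a real API connection; the function names are invented for illustration:

```python
# Hypothetical sketch of push-based propagation: compare the desired
# per-cluster resources with what the cluster holds and push the diff.

def reconcile(desired, cluster_state):
    """Apply anything missing or stale; delete anything no longer wanted."""
    actions = []
    for name, obj in desired.items():
        if cluster_state.get(name) != obj:
            cluster_state[name] = obj      # would be an API create/update
            actions.append(("apply", name))
    for name in list(cluster_state):
        if name not in desired:
            del cluster_state[name]        # would be an API delete
            actions.append(("delete", name))
    return actions

state = {"old-rs": {"replicas": 2}}            # what the cluster has now
acts = reconcile({"web-rs": {"replicas": 5}}, state)
print(acts, state)
```

The git-repo flavor would replace the in-loop API calls with writing files into a repo and letting an agent such as kube-applier pull and apply them; the diffing logic stays the same.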
C
When I read this the first time, what I understood was that it actually points to the fashion in which the resource needs to be propagated to the cluster. So, for example, a resource has to be partitioned and propagated; I remember this came up as a use case earlier in one of the discussions, I don't remember exactly which one. For example, the replica set can be copied into each cluster, or it could be partitioned, and then sub replica sets of the replica set are what get produced, I think.
H
It's the process that actually puts the resources into the other cluster; it calls the API, etc. So there's kind of a separation here between defining what you want to happen (defining your template, your overrides, your placement directive) and then the propagation mechanism, what actually takes all that and interfaces with the clusters, or at least with the configuration the clusters are going to be reading from, if that makes sense.
B
Yes, so I would make the analogy that propagation is differentiating: do you want ground shipping, or do you want air? Either way the thing's gonna get there; the question is what mechanism gets it there. Is it a controller that's watching some API surface and saying, now I need to reconcile this resource in clusters X, Y, and Z? Or do you have something...
I
I think this is a really valuable thing to pull out. The most complicated problem that this brings up is going to be dealing with who, what user or what service account, is doing these things, and making sure that it's properly auditable and that logs and other events are generated properly. I know that this sort of propagation into clusters was always a bit... the authentication and authorization piece of that was always something that was difficult for Federation, that required quite a bit of thinking, and I don't know if we ever came to a clear resolution on how that would work. So the idea of splitting this out and potentially allowing different models, I think, is really valuable. I can even imagine a model, maybe a very restrictive one, where the template declaration, overrides, and placement end up producing, at some endpoint, YAML files, and the user knows that they can go with their own kubectl and upload them by hand. I could imagine that being some sort of policy where these things have to be done by users, or by some administrative user, not by a machine.
D
One brief comment, and this actually applies across several of these areas: in a more sophisticated implementation of these ideas, there is some inherent bleed between the areas. To give some specific examples: the placement decision might be based on where there is quota for the user that is placing the resources, or on which clusters are available, and perhaps even on the status of related objects. So in principle it's possible to decouple these phases entirely, but in practice you often end up with loop-backs: we made a placement decision, we attached a propagation strategy to that, but then the propagation strategy failed on one of the clusters, because there was no capacity, or no quota, or the credentials for the propagator were invalid, or whatever the case may be. Therefore we may need to go back and make a different placement decision, which may then result in a different override or...
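That loop-back between placement and propagation can be sketched as a retry loop: pick clusters, try to propagate, and when a cluster rejects the resource (no quota, bad credentials, and so on), fall back to a new placement decision. All names and the quota check are invented for illustration:

```python
# Hypothetical sketch of the placement/propagation feedback loop
# described above: failed propagation feeds back into placement.

def propagate(cluster, resource):
    """Stand-in for the real push; fails when the cluster lacks quota."""
    return cluster["quota"] >= resource["replicas"]

def place_with_fallback(clusters, resource, wanted=2):
    """Walk candidate clusters until `wanted` placements succeed."""
    placed, failed = [], []
    for c in clusters:
        if len(placed) == wanted:
            break
        (placed if propagate(c, resource) else failed).append(c["name"])
    return placed, failed

clusters = [
    {"name": "a", "quota": 1},   # too little quota: propagation fails
    {"name": "b", "quota": 10},
    {"name": "c", "quota": 10},  # chosen as the fallback after "a"
]
print(place_with_fallback(clusters, {"replicas": 5}))
```

In a real system the fallback would also re-derive overrides for the new target, which is exactly the bleed between the functional areas being pointed out here.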
H
At the bottom, where we talk about a possible implementation for defining scheduling preferences: there's a sort of separation in my mind between placement directives, which are kind of user-defined, via a selector or something, and then scheduling directives and the whole process of scheduling. To me, that's something distinct from the placement side. I'm not explaining this well, but just to differentiate scheduling versus placement: placement is user-defined, scheduling can be completely dynamic and really complicated, but they're separate.
D
I don't see them as separate, personally. I see what the user specifies as input into the scheduling policy. So, on one extreme, the user says, I want this thing in cluster A, and the scheduler does basically very little other than say, yes, you can put it in cluster A. Or the user specifies, I want it in any cluster in Europe, in which case the scheduler makes a
D
specific decision. The equivalent in Kubernetes would be: I want a pod, I don't care where it goes, and the scheduler picks a node; or the user says, I want a pod on a node with label blue, and then the scheduler chooses one of the blue-labeled nodes; or the user specifies, I want this on node X, in which case the scheduler puts it
H
on node X. I was just trying to differentiate: the scheduler doesn't change the placement directive. It tries to respect it, and if it can't, it can't, but it's not defining the placement directive as we're defining it in this doc. That's all I'm trying to say. I agree that it's input to a scheduling algorithm, the same way that the scheduling preferences at the bottom are. The point here isn't to say you have to break things up; it's very likely that an advanced implementation...
B
The HackMD thing is basically like an Etherpad that also has a pane where it renders the Markdown. Probably the best way to comment is to comment on the gist that's linked in there. In retrospect, it was probably mostly just confusing to share that.
I
Yeah, thank you, Paul. I would appreciate a Google Doc, or potentially even a PR in the community repo or something; that would work just as well if we decided we didn't want to use Google Docs, but I think at this phase Google Docs is probably a good tool. There's gonna be a lot of commenting.