From YouTube: Kubernetes Federation WG Sync 20181121
A: Maru, any info on Paul and Ivan? Would they be joining?
A: Yeah, yes, hi. Can you please introduce the person speaking?
A: Okay, so there is some update on today's agenda, probably with respect to the use cases that you guys have. Why not go ahead and give us a brief overview of them?
E: Well, one of the more unique perspectives that we have is that the people using the federated clusters won't be the same people managing them. So for the end users, which would be the scientists of the different experiments, the process of adding more clusters, removing clusters, and changing things should be kind of seamless: they shouldn't have to know anything about it.
E: One of the things I've got here is that we're watching the status synchronization, so that the users only have to authenticate with the host cluster. We tried a job workflow that would send out jobs to different parts of the federation, but it could only work if the jobs were added on the host cluster itself, which made it meaningless, because it was only using a single cluster; otherwise it couldn't access the results of the job to check if it was still running or if it had crashed.
A: I did not get the first point, when you said that when distributing jobs you had to create the job directly in the cluster. Currently, I believe, if you create a job, or a federated job per se, and create a placement for it, the same job, with the same completions and parallelism and without any update, would be created in each of the federated clusters. I mean, that's the sync mechanism; it's the same for any resource.
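To illustrate the sync behavior being described, here is a rough Python sketch: the template, including completions and parallelism, is copied verbatim to every placed cluster. The field names are assumptions for illustration, not the exact alpha Federation v2 API.

```python
# Hypothetical sketch of the Federation v2 sync mechanism described above.
# Field names are illustrative assumptions, not the exact alpha API shapes.
federated_job_template = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "analysis"},
    "spec": {
        "parallelism": 3,  # copied unchanged to every selected cluster
        "completions": 9,  # likewise copied unchanged
        "template": {
            "spec": {
                "containers": [{"name": "worker", "image": "example/worker:latest"}],
                "restartPolicy": "Never",
            }
        },
    },
}

# A placement resource then selects which member clusters receive that template.
placement = {"clusterNames": ["cluster-a", "cluster-b", "cluster-c"]}
```

The duplication problem raised next follows directly from this: each selected cluster receives the full completions count, so the same work is repeated once per cluster.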
E: Yes, that's what we get currently, but the problem was that each job had a unique configuration. They were submitted by a script, so each one was customized, and so running three of the same thing was just doing the same work over and over again on three different clusters.
B: Like something like a rolling job, or something like that? Would a correct description be something like a rolling job, where you create the federated job and it runs in cluster one; it comes to completion, then runs in cluster two, and so on and so forth? Is that what you're shooting for?
D: Not necessarily. We might just want it to run once; it doesn't matter in which cluster.
C: It's more a matter of defining your job and then having it be scheduled wherever there's capacity. — Exactly. — I mean, is that something that could be accomplished by... I mean, there's parallelism and completions. Do you essentially want a parallelism of one, and you don't want it running everywhere? Or is it just that you want parallelism, but you only want it running on a single cluster?
D: Yeah, this is something we saw that is different from what we had with Federation v1, where the behavior was that there was only one instance of the job running across all clusters. Yeah.
A: Yeah, so something similar to v1 is what I was trying to achieve in that PR that you have referenced, 259, job scheduling preferences. The intention of that is that you specify a global parallelism and a global number of completions, and the jobs would be created by partitioning that across the clusters.
A: Yeah, and I think one of you did put up a comment over there as well. I did read it last week, but I was not really up to speed and was not able to properly reply to it. So what we can do, for these particular job requirements that you have, is sync offline as well, and then, either in a phased manner or whichever way works, achieve the functionality that is needed. I'll put it out in that scheduling preferences PR.
A: So the intention of those high-level schedulers, or the scheduling preferences, is actually to sort of mimic a global behavior, so that scheduling is achieved in one place. You can also look at it as partitioning the same job, because you would be specifying a single value for parallelism and a single value for completions.
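A rough sketch of the partitioning idea behind that PR; the field names and weighting scheme here are illustrative assumptions, not the actual PR contents.

```python
# Illustrative only: a global spec plus per-cluster weights, from which a
# controller could derive per-cluster parallelism and completions.
scheduling_preference = {
    "totalParallelism": 6,
    "totalCompletions": 12,
    "clusters": {
        "cluster-a": {"weight": 2},  # gets 2/3 of the work
        "cluster-b": {"weight": 1},  # gets 1/3 of the work
    },
}

def partition(total: int, weights: dict) -> dict:
    """Split a global count across clusters proportionally to their weights."""
    total_weight = sum(weights.values())
    shares = {name: total * w // total_weight for name, w in weights.items()}
    # Hand out any remainder from integer division to the heaviest clusters first.
    remainder = total - sum(shares.values())
    for name, _ in sorted(weights.items(), key=lambda kv: -kv[1]):
        if remainder == 0:
            break
        shares[name] += 1
        remainder -= 1
    return shares

weights = {n: c["weight"] for n, c in scheduling_preference["clusters"].items()}
print(partition(scheduling_preference["totalParallelism"], weights))
# {'cluster-a': 4, 'cluster-b': 2}
```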
F: So, a question for the CERN guys. I mean, it seems fairly straightforward to have at least two levels of parallelism: one being how many instances of the process can be running at the same time, and then how many clusters it can be run in, which are two sort of orthogonal concepts.
F: Okay, that makes some sense. I guess what I'm wondering is: if they're running in parallel and independently of each other in one cluster, why would they not also be able to run in parallel, independently of each other, in different clusters, as long as they didn't run too many times or process the data too many times?
F: So, usually... I don't know how you actually partition the job queue out.
F: I mean, one way is to actually bake the actual set of jobs into the containers, and then have each job make sure that it only processes ones labeled for it, or in some way indicated for that particular instance of the process. The other one is to have a sort of queue, where they all pull off the same queue and that way make sure they don't process the same thing. Which of the approaches do you use? Or how do you...?
F: How does a given instance of the process — and I'm using that word just because it's less ambiguous than "job" — decide which work items to process?
D: Well, the way we use it is very much like a batch system. The partitioning of the jobs is done by an upper level; we basically use Kubernetes as a batch queue, basically. The partitioning of which job should process which data is already done when the job is submitted.
D: What — okay, so each one...?
D: Yes, yes, okay, exactly. And then — I don't know if I understood correctly what you were saying, but what I understood is that you were suggesting labeling the job immediately at the scheduling phase and deciding which cluster it should run in. Is that it?
F: No, no. What I was actually suggesting is that there be some central set of tasks that need to be performed, and that all instances of the job — all of the processes — pull off that queue. — So, like long-running jobs that just pull work? — Exactly. And maybe each one only pulls one item and then terminates, or something; I mean, that's possible.
F: Okay. As far as I'm aware, they're designed to be identical instances — identical processes, multiple of them — and they each process some subset of the data, and which subset of the data they process is determined at runtime, typically based on a work queue.
D: Okay, okay — so that's not how we are using jobs right now. We are pushing all the metadata with the job itself.
D: So basically, what we've been doing is running like this in a single large cluster, and we've been looking at moving this workload to the federation. But it might be, as you were saying before, that this is not how it's designed to work, and we have to look into launching workers that will pull the tasks as they land there. But we would still need to have a mapping of who should pick which tasks, I guess.
F: Yes — or any of them can pick any task, as long as they don't pick the same one. But maybe I'm misunderstanding you; it sounds like your jobs actually only have a single task in them, and each one is independently configured. — Exactly, yeah. — Okay. That's, I think, a bit of a weird use case. As far as I'm aware — I'm not a jobs expert, but my understanding is that it's intended to be a single replicated process.
F: Multiple identical instances are spawned, based on the parallelism and the configuration of the job, and then all of them process data which they figure out somehow. I think the only distinction between the instances is their index, from what I recall, and so they can use the index to hash and get a data set or something; but other than that, they don't each get independent configuration.
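A minimal sketch of the index-based sharding F recalls, assuming the job submitter injects a per-instance index and instance count via environment variables; these were not standard Job-provided variables at the time, just an illustration of the idea.

```python
import os
import zlib

def list_data_items():
    # Stand-in for the experiment-specific data catalogue (assumption).
    return [f"event-file-{i}" for i in range(1000)]

def process(item: str):
    print("processing", item)  # stand-in for the real work

# Assumed to be injected per instance by whoever submits the job.
index = int(os.environ.get("WORKER_INDEX", "0"))
total = int(os.environ.get("WORKER_COUNT", "1"))

for item in list_data_items():
    # crc32 is stable across processes, unlike Python's salted built-in hash(),
    # so every instance agrees on the shard assignment without coordination.
    if zlib.crc32(item.encode()) % total == index:
        process(item)
```

This also previews the drawback F raises later in the call: if one instance lags behind, no other instance can pick up its shard.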
C: I think what he's describing, though, is the base case for a job, which is: I want something to run to completion, and I want it to be guaranteed to run. You can mimic the same behavior with a pod, but you don't really get the same guarantee that it completes. — That's exactly it, yeah.
D: Fair enough. And what we were looking at for the future — that's why the job scheduling preferences were really interesting — would be to change, dynamically, the ratio of which clusters run which jobs, or how many, so that we could scale out to external clusters when we have peaks and scale back down when things stabilize; so that we could tune this live. And jobs give us the ability to do this quite easily, potentially.
A: Okay — in that case, you would actually be targeting multiple jobs using one preference. Am I right?
A: Using one... sorry. So currently, the scheduling preference is designed in such a way that it targets a single resource — a single federated resource. So, for example, you can have a single federated job resource which has, say, m completions and n as parallelism, which is distributed and which can be rebalanced based on weights: one cluster can get a higher share of the parallelism, another cluster a lower share, each being its partition of the total.
A: Whereas what I heard just now is that you have a predefined set of jobs — say 20 jobs — which need to be distributed, and each of the jobs has parallelism and completions of one.
A: Correct. And then, based on some weighting schedule across the clusters, you would want to give, say, one cluster a larger number of jobs out of this pool and another cluster maybe a smaller number of jobs out of this pool.
A: So we are sort of talking about a set of jobs, or a pool of jobs, which can then be distributed by the federated control plane.
D: I don't know if this fits, because it kind of makes more sense now that you start talking about one job, with parallelism being, like, partitioning the job into multiple pieces — that kind of makes sense.
F: Yeah, that's the intention of the job: if you have a thousand data items to process, you create one job, right? Then you can see what percentage of the total work has been completed. Whereas as soon as you split it up, like you have, you don't have an easy way of finding out whether the total set of work has been done — you have to do that aggregation yourself.
F: I had another question, if we're finished with that topic. I was just curious what the data requirements are for your jobs. Where does the data live, and do you need some affinity between the job and the data that it processes, in terms of which cluster it lands in, or anything like that?
D: That's great. We either do what we do today, which is kind of like running a cache that will fetch the data remotely if required, or we do a sync distribution of the data, so the jobs won't have to care; basically, they will have a local endpoint where the data is accessed. — Okay, yeah, that makes it much easier.
F: Cool. I mean, I think this is a great use case for federated jobs; we just need to figure out how to... As far as I can tell, the current approach you take should work with federation, barring any bugs. The only thing you will lack is an aggregated completion.
F: You'll just have a whole bunch of independent jobs, each of which gets pushed into potentially different clusters and finishes independently, completes independently; but you won't have a simple way of getting an aggregated status. Whereas if you had all those jobs — those instances, those processes — grouped under a single federated job, then you would easily be able to see how many completions there were across all the clusters.
F: No — well, yes, in a way. I mean, they can either all access the same data and just use some hashing function to figure out which pieces that particular instance needs to process. The problem with that is that if one instance of the job lags behind all the rest, there's no one else to pick up the slack — which is why it's better to have a central queue of some sort where they can all pull items, or at least one per cluster, where all the instances can pull work items off the queue and process them; that way you're likely to complete the job quicker.
D: Okay. So, because the person that has been working with us on this started looking at having a custom resource to define their workflows, we could use that as the source of the tasks to be done, I guess.
F: Yeah, you'd have to be a little bit careful with concurrency issues — you do actually need something with proper queue semantics — but I'm pretty sure there are off-the-shelf things for that. You could run essentially something like a Kafka queue, for example, and just feed it.
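A minimal sketch of that suggestion using the kafka-python client; the topic, group, and broker names are illustrative assumptions.

```python
from kafka import KafkaConsumer  # kafka-python client

def process(payload: bytes):
    print("processing", payload)  # stand-in for the real work

# Workers sharing a group_id split the topic's partitions between them, so each
# task is delivered to only one worker -- the queue semantics mentioned above.
consumer = KafkaConsumer(
    "tasks",                                   # hypothetical topic name
    group_id="federated-job-workers",          # hypothetical consumer group
    bootstrap_servers=["kafka.example.internal:9092"],
)

for message in consumer:
    process(message.value)
```

One caveat: delivery is effectively at-least-once, so a task may be redelivered if a worker dies mid-item, and the processing should ideally be idempotent.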
D: Yeah, yeah, we can try both options. But just to finalize on this: the other approach, which we are using now and which would require that each job runs to completion on one single cluster — this would come with the job scheduling preferences, or...?
A: Yeah, so about that: job scheduling preferences have a flaw as of now, because there is one field which is immutable once the job is created — which is, I guess, the completions.
A: There is a requirement that the template, which is the spec of the job, has to be created prior to the job scheduling preferences, and the way the propagation controller works as of now is that if we turn propagation on, it will already be propagated — we do not have control over when we want to propagate it. Actually, that is one of the comments Maru made earlier while reviewing this PR.
A: So one additional item we'll have to add is control over the propagation: there has to be a switch whereby, after creating the template, the job scheduling preferences resource is created, and only after that is created is the switch saying the job should be propagated turned on. It can be done, but it cannot be done with the PR in its current shape, so some additions would be required.
F: On that topic, did anyone have a chance to speak to the Kubernetes Job people and figure out whether it was possible to have that completions field changed to mutable? Is that an option, or is there a good reason why it's immutable?
A: I searched around, yeah. I think there is some reason behind it, and there is a history associated with it, because it was not always immutable — it was turned into a field which cannot be changed sometime after the initial versions in which the Job was released.
F: Okay. It'd be good — I mean, it seems like this is a really, really valuable feature of federation, to be able to essentially dynamically move completions around between clusters. And we can either do it by editing the existing jobs — which is currently impossible because of that immutable field, as far as I understand — or we can do it by creating very small jobs in each cluster and then creating more very small jobs when those complete, which has the same net effect but is just more work and a bit messier. Does that make sense?
A: So I thought we would first try with that, since I don't see any possible flaws or failures in it; if there are possible flaws or failures in the future, then we can simply change course.
A: And Thomas, we can probably also set aside some time to sync, and then I can update this particular PR accordingly. It needs a rebase anyway, and it was basically pending because, when this PR was done, we did not have a mechanism for overriding an arbitrary field — functionality which, I think, we would need here.
A: Ping me on Slack and we can set a time for the meeting ourselves. All right.
C: On the agenda: that's sort of something that has become apparent from the generation of federation primitives — that it really doesn't make sense. So I would expect to see that in the coming weeks, but it's not a near-term deliverable. Okay.
E: And we would possibly like some way to build on top of that and have more than just labels — to say, these clusters should only be scheduled on if this label is included, and these clusters are fine to schedule on with any labels, even if you don't put a label.
G: Okay, can I say something about this? — Sorry, go ahead, Dario. — If the number of clusters in the federation starts to get big, we should probably start to think about something like taints, more than labels, because with labels there are some expressions that you cannot create — for example, something like: I want every cluster but not this one.
F: Yeah, I was involved in the early taints and tolerations design. I don't actually think it's terribly good, to be honest, but I think there's value in being consistent with Kubernetes, and right now, to the best of my knowledge, unless things have changed fairly dramatically, it's complementary to labels.
F: So the intention is that, even if you think you want to go into all the clusters with these labels, there are certain clusters that you just won't land in unless you explicitly include them into that set, over and above the label — because that cluster is tainted. Maybe it's an old cluster, or something that you don't even know about; it's tainted, so by default...
F: ...with taints, this problem is... yeah. Another good example is if some of the nodes are being retired and you don't want workloads to land on them — you want them to drain — so you go and taint all of those nodes, and by default everybody, even the people who don't know anything about these nodes being drained, will not land on them. Unless there's some special sauce that says: even while that thing is busy draining, it still needs a vital daemon set running on it.
D: I think that matches the use case we have. We're looking into having a quite flexible set of clusters behind the federation, so that we can spawn them in different public clouds, or as we get new hardware deliveries internally; and I think the concept of retiring a node — we can look at it as retiring a cluster and just adding new ones, things like that, yeah.
F: So, off the top of my head, I would say we should just emulate taints and tolerations as they are applied to nodes — use the same concepts for clusters. Obviously there might be some things that don't quite work, but I think in principle there's nothing obvious that I can see that wouldn't work; so you can taint a cluster, tolerate a cluster's taint, etc.
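A hypothetical sketch of what transplanting node taints and tolerations onto clusters could look like; none of these fields existed in Federation v2 at the time, the shapes simply mirror the node taint/toleration API as F suggests.

```python
# Hypothetical only: a cluster-level taint mirroring the node taint shape.
cluster = {
    "metadata": {"name": "old-cluster"},
    "spec": {
        "taints": [
            # By default no federated workload lands here unless it tolerates this.
            {"key": "retiring", "value": "true", "effect": "NoSchedule"},
        ]
    },
}

# A placement that explicitly tolerates the taint, over and above label selection.
placement = {
    "clusterSelector": {"matchLabels": {"tier": "batch"}},
    "tolerations": [
        {"key": "retiring", "operator": "Equal", "value": "true", "effect": "NoSchedule"},
    ],
}
```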
F: Yeah, just to be clear: we're super, super enthusiastic about supporting CERN's use cases, because they're real, and they're big, and you're doing real stuff, so we would really like to accommodate them — happy to do lots of work to get your stuff working properly.
D: And from our side: we slowed down a bit in the last couple of months, but Thomas just joined and he has picked things up pretty quickly, so we'll also be able to contribute code upstream, and he will be working closely with everyone.
A: For the two use cases that you guys talked about, we should be able to find some solutions in the near term. There was nothing else on today's agenda apart from these items — does somebody want to talk about something apart from these things?
C: I have one item. Quinn, you recently posted to Slack the update that you were going to provide for 1.13 for SIG Multicluster.
A: Yeah, I think I read your update. I think the wording did reflect this exact point: that it would be either by the end of this year or early next year.
F: Okay, good, that's what I had in my head, so I'm glad that's what I wrote down. I think it would be super valuable — I mean, these kinds of questions that are coming out of the CERN use cases are exactly the kinds of things that inform an intelligent API design, and I think we should work through some of those before declaring anything beta in terms of the API. Obviously we don't have to do all of the API in one go; I assume the intention is to migrate each API object somewhat independently from alpha to beta to GA. Obviously there'll be some dependencies there, and we'll probably want to group them together and say this bunch all goes beta on this date. But maybe a good starting point is jobs, because we have a real, proper customer here with fairly detailed use cases, and we can use that to firm up the jobs API and some of the dependent APIs.
C: So I think, the way things are working out, there's the sync stuff, where we're just propagating resources.
C: I think we're converging on something that's going to be workable there. To your point about individual types — like jobs, or replica sets and stateful sets — where you can add higher-level behavior and have it be far more useful: I guess I don't really think about it so much in terms of the API but in terms of the feature, because the API will be underpinning all of this, and then we'll be adding controllers and additional API resources.
C: I guess, to me, with the idea of an API being beta or whatever — I think we have more than an API to describe, and I think "feature" is maybe a better word, though I'm open to other ones. I just think focusing on the API side seems limiting to me; that's all I'm trying to say.
F: Yeah, no, I agree. I mean, to some extent an API has semantics associated with it — it's not just a bunch of fields; there's actually: if you specify this field, then this behavior happens, and that's implemented, often, in a controller. So when I say API, I mean the API, and the semantics of the API, and therefore the behavior of controllers. Maybe... does that...?
C: I guess when I think of something like multi-cluster DNS, it's more of a feature than an API, and when I think of job scheduling, it's more of a feature than an API. I mean, we kind of like how kube has done it, but I don't know — I think, yeah, I think we're...
C: It is, but I mean the federate-enable stuff, where you're able to enable federation of any type, isn't strictly speaking API-centric — it's more of a CLI — and similarly, being able to federate resources, or even a whole namespace, is also something that is not necessarily API-based. So that's where I'm at: the API is very much central to it; it's just not the whole story, that's all.
F: Yeah, that makes sense. I had one other thing that I was about to mention that had slipped my mind... oh yeah, just about the lower-level versus the higher-level APIs.
F: I absolutely agree. I mean, in an ideal world the lower-level APIs are much simpler, and presumably therefore much easier to take to beta and ultimately GA. I was just thinking more of high-level use cases — for example, this jobs one — which may have been prevented by limitations in the lower-level APIs; for example, the label-based cluster selection that you mentioned. We can obviously just push all that up into the higher levels and have all the higher-level APIs do their own semantics around it, or we can support label-based cluster selection in the lower-level APIs and then have it shared by everything that sits on top of them, which seems preferable. So that's an example of something where the low-level stuff may not be baked yet.
C: The steps are to go generic override, then selector-based placement, and then, I think... I mean, the fact that we're generating all the primitives means that evolving them is actually going to be a lot easier, because we don't have to step through every single type that we're supporting — we can just deploy again. So my thinking is that within the next few weeks — I would expect by mid-December — we can start thinking about bringing everything together under a single resource, rather than having them separate.
C: I mean, my thought is that that's something that, on the Huawei side, you guys have wanted, and at Red Hat we've been kind of holding a line, because — I don't know — one, we wanted to sort of start from first principles again and make sure that we're making the right decisions. But clearly the user experience...
C: Having separate template, placement, and override resources is problematic, and it also makes the controllers a lot harder. There were, I think, two remaining reasons for keeping these things separate. One was auth-based: maybe we wanted a separation of concerns for the people maintaining placement and overrides, so that you could enforce policy more easily. I'm not sure that's a compelling reason, and we could work around it with authorization webhooks — basically, when you try to change something, it'll... you know, we could...
C: ...even CRDs currently rely on resourceVersion and generation, and putting anything but the template into the template resource was problematic, because then its resourceVersion and generation would be changing all the time for reasons that have nothing to do with the form that was intended to go into the member clusters.
C: This is kind of a rough sketch — obviously there needs to be some formalization and documentation of this — but to my mind this would be a win. I'm open to hearing contradictory opinions as to why this wouldn't be a good idea, or congratulations that we've actually come full circle and we're back to where we were.
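A hypothetical illustration of the combined shape being floated here — one resource carrying template, placement, and overrides; the field names, group, and version are illustrative, not a settled design.

```python
# Hypothetical combined federated resource, collapsing the separate
# template/placement/override objects into one; names are illustrative.
federated_deployment = {
    "apiVersion": "types.federation.k8s.io/v1alpha1",  # assumed group/version
    "kind": "FederatedDeployment",
    "metadata": {"name": "app"},
    "spec": {
        # The template is the plain Kubernetes object to propagate; keeping
        # everything else outside it avoids the resourceVersion/generation
        # churn problem described above.
        "template": {"spec": {"replicas": 3}},
        "placement": {"clusterNames": ["cluster-a", "cluster-b"]},
        "overrides": [
            {
                "clusterName": "cluster-b",
                "clusterOverrides": [{"path": "spec.replicas", "value": 5}],
            },
        ],
    },
}
```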
F: Yeah, my preference would be to reduce churn. I mean, we had the aggregated-together stuff, then we broke it apart, and now we're talking about moving back again. I do like the principle of smaller, reusable components, with independent controllers and independent API objects.
F: Many of those may be not intended, or not suitable, for exposing directly to end users, and another approach, rather than bundling them all together to make them convenient for end users, is to just build higher-level abstractions on top of them, which provide a big aggregated API object to an end user — easier for them to use — and under the hood just generate the smaller pieces: the overrides and the cluster selector and the template and whatever else. I think I'm leaning more towards that approach now, rather than going backwards and bashing back together all the stuff that we split apart — but I'm prepared to be convinced otherwise.
C: Yeah, yeah. I will say that it's not just about user experience — I might have sort of glossed over this, but there's a certain amount of complexity in all the controllers from having things separate and having to coordinate. An example of a problem with having placement separate from the template is that when you're doing scheduling — or is it overrides? I think it's overrides —
F: You need all or none of the data; you don't want to be in intermediate states. — I agree, and maybe we can come up with solutions to that, where groups of API objects all have to be there before certain actions are taken. I think it's worth talking about; it's certainly an interesting area.
A: Yeah — because this topic actually came up, I had to do some homework and was going to bring it up, probably at the next meeting, but I'll state it anyway. I was able to gather some feedback from Huawei's users as well, and the biggest problem we look at is the UX, or the usability, of the same.
A: What the users really want — and when I say users, I'm talking about Huawei users — is this: they have existing cluster applications and they want to federate them. I mean, they have multi-cluster use cases where the application either is already deployed in a given cluster or they want to deploy it in a given cluster, but they want to keep treating the resources as the Kubernetes resources themselves.
A: They are okay to additionally have some extra information, which they can call federated resources, but they don't want to touch the already existing Kubernetes resources, and they want direct access to them as well. So it's more or less to do with UX, or how we present it. Like, how I would present it is: when we federate, or when we federate a cluster...
A: ...I should be able to see, for example, a resource which is a deployment as a deployment, and then some additional information — which might be the federated resource — saying where you place this deployment, where you have overrides, or what changes you want to make for that particular cluster. Whereas what we have right now is that the deployment is wrapped inside a federated resource, so it doesn't remain compatible with, say, whatever installation mechanism they have, or the view they have — they might have UIs on top, and all that stuff.
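A rough illustration of the contrast being drawn, with purely illustrative shapes: today the plain object is wrapped, whereas the users described would prefer the plain object untouched, plus an ancillary federation object alongside it.

```python
# What exists today (illustrative): the Deployment is buried inside a wrapper,
# so existing tooling that expects a plain Deployment no longer recognizes it.
wrapped = {
    "kind": "FederatedDeployment",
    "spec": {"template": {"kind": "Deployment", "metadata": {"name": "app"}}},
}

# What the users described would prefer: the Deployment stays a plain,
# directly accessible object, and federation info lives alongside it.
plain = {"kind": "Deployment", "metadata": {"name": "app"}}
ancillary = {
    "kind": "FederationInfo",  # hypothetical ancillary object
    "spec": {"target": "app", "placement": ["cluster-a"], "overrides": []},
}
```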
C: I think that having distinct federated types — I don't see a reason for not doing that; I think it's been a huge win to have the types differentiated in the API.
C: My hope would be that the capability that I'm anticipating — being able to target, in the API or in YAML on disk, and convert between a federated resource and its non-federated equivalent, in either direction (once we have the capability, it really goes both ways), potentially doing a whole application, or via Helm hooks or whatever — the idea would be that we provide a seamless transition between the different types, rather than the idea that you create kube types and they're somehow federated via some ancillary objects.
F: Yeah, that was a very strong requirement in Federation v1 — to support Kubernetes types — and I think we agreed that building higher-level API objects that were precisely Kubernetes types was a good transition option for people. So we could essentially build a Federation v1 API on top of Federation v2.
F: We would just say: look, that is a fundamental limitation; if you don't like those limitations, then you go to the other part of the API, which gives you first-class types and all those kinds of things. I think that's the right approach, and I don't think it needs to compromise the rest of Federation v2 in any way.
A: Yeah — and what I was saying is that I was not necessarily indicating that we want to build this; what I meant was that that kind of user experience is what the Huawei users are looking for. So one more crude, not direct, way of doing that is what we were talking about...
A: ...which is some translation or transition tooling that can provide a mechanism — either via a Helm hook or via a tool that can convert the YAMLs from the Kubernetes types to the corresponding federated types — but probably a more seamless mechanism. I'm just putting that thought out; if anybody comes up with a good idea, then it's something for us.
F: One very last parting comment: another part of what you just described is the ability to support users accessing Kubernetes clusters directly — creating and deleting and doing things there, or maybe just reading the status of the clusters — concurrently with creating and reading things via the federation API. That was definitely... and associated with that is the idea that we have a bunch of clusters, we have a bunch of resources in them, and now we want to add a federation layer on top of that without killing all of those things — we just want them to be inherited. If you remember, in Federation v1 we actually had a very explicit concept of...
F: ...I think we called it... it wasn't "inheritance"; we had a name for it. — Adoption. — That's right. The federation API would basically adopt the existing resources in the underlying clusters, and it actually worked fairly well in v1. I don't think it's terribly challenging, but it's again a very common requirement that I've heard from customers, yeah.
A: Okay, so we'll see you — see all of you — next week.