From YouTube: Kubernetes Federation WG sync 20181010
A: Sorry about that, Zoom continually crashes on my laptop now, which is super fun. I was wondering if we had someone else, besides myself, that was interested in taking a final pass through the Helm chart refactor. I think it's ready to go, but I didn't want to approve and LGTM at the same time, since some communities consider that kind of aggressive, so I wanted to solicit a review from someone else.
D: Okay, the next one on the agenda is my doc PR, which was documentation about the replica scheduling preference. I see that only Vanya put some comments. And one of your comments, Vanya: if what you were experiencing seems to be an issue or a bug, I think it can be opened as an issue separately and can be tackled as part of that. Okay, okay, and if there are any problems with the documentation, you can point them out. I saw one which I will fix and update up here, yeah.
D: So earlier I think she had a go at trying to have a generic way of overriding the whole thing. The generic way would be, like, whatever we are specifying in the template, the same thing, or the whole structure, is considered, or the whole API is considered, for override. That somehow did not go through earlier.
D: So that is one open issue, we can say, as of now. So this can be done in two ways: either we try to go ahead and find a generic way of overriding anything out of the template spec, or we just add the possibility of multiple field overrides. As of now I see that the possibility of multiple field overrides is much simpler than the latter, so, yeah.
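Since per-field overrides come up several times in this discussion, here is a minimal sketch of the idea, assuming overrides are expressed as (path, value) pairs applied to a copy of the template spec. The field paths and dict shapes are illustrative, not the actual Federation v2 API:

```python
import copy

def apply_overrides(template_spec, overrides):
    """Apply a list of {path, value} overrides to a copy of the template spec.

    Each path is a dot-separated route into the nested dict, e.g.
    "template.spec.completions". Returns a new dict; the template is untouched.
    """
    spec = copy.deepcopy(template_spec)
    for override in overrides:
        parts = override["path"].split(".")
        node = spec
        for key in parts[:-1]:
            node = node.setdefault(key, {})
        node[parts[-1]] = override["value"]
    return spec

# Illustrative per-cluster overrides (hypothetical field names).
template = {"template": {"spec": {"completions": 10, "parallelism": 2}}}
cluster_overrides = [
    {"path": "template.spec.completions", "value": 4},
    {"path": "template.spec.parallelism", "value": 1},
]
result = apply_overrides(template, cluster_overrides)
```

With a path-based scheme like this, supporting multiple field overrides is just a list of such entries, which is the simpler of the two options mentioned above.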
A: That's like the final remaining large development item. One thing that I'll share with the group, that I've personally been wondering about, is that in other projects in the ecosystem we've found it to be really effective to have a weekly release for a patch version, and in general I think it would be useful for folks that are trying to adopt the project.
G: I'm not sure this is necessarily what's been discussed before, but there's not just one open issue on the job scheduling, there's two, and they're kind of linked, because overriding completions is not really possible unless you do things in a certain order. Because if you create a template and then create overrides, potentially the jobs will be created in the target clusters, and because completions is not mutable...
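A toy simulation of the ordering problem just described, under the assumption that the controller propagates a job as soon as its template exists, applying whatever overrides are present at that moment, and that completions, like the Kubernetes Job field, cannot be changed after creation. All names here are illustrative:

```python
class Cluster:
    """Toy target cluster where a job's completions is write-once."""
    def __init__(self):
        self.completions = None

    def create_job(self, completions):
        # Mimics the immutable Job.spec.completions field: first write sticks.
        if self.completions is None:
            self.completions = completions

def propagate(cluster, template, override=None):
    """Controller sync: an override is applied only if it exists at sync time."""
    completions = override if override is not None else template["completions"]
    cluster.create_job(completions)

template = {"completions": 10}

# Template first: the job lands with 10 and the later override has no effect.
late = Cluster()
propagate(late, template)
propagate(late, template, override=4)

# Override first: the job is created with the intended value.
early = Cluster()
propagate(early, template, override=4)
```

This is why the creation order of template and override matters: once the job exists in the target cluster, the override arrives too late.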
A: Just a representative example. I feel like the doc in your pull request is really good, in the sense that we have zero documentation for that feature actually merged into the repo at this point, and in general I feel like it's better to get imperfect doc merged, especially since we have a rather low amount of it at this point, and then make corrective changes, you know, in a follow-up that, for example, the original author doesn't have to do.
A: What I was kind of wondering about is, for example, Vanya, you put a few comments on the pull request that are some kind of formatting changes and narrative changes. I wondered if you might be open to, like, let's merge this pull request, because it adds a lot of value, and in a follow-up someone else, and you can even do this yourself if you wanted to, could make changes that correct some of the details that are maybe mainly formatting.
D: So my notion is that the documentation, and the examples in the documentation, will point out some basic cases, and some cases which apprise the user about the feature, about the more detailed stuff and the possibility of what all can be done with this. They can be pointed out, or, I mean, users can have a look at the code also, and many use cases might come out from their usage which are not already part of the feature or in the code. So that's how it really should work. Okay.
A: All right, yeah, I think we can work to get doc, you know, merged, though it need not be quite perfect in and of itself. That would be great, you know, help us build out our docs for people as they try to use it, and that's all I wanted to talk through on that one, as an example.
D: Yeah, and I know that Shashi also has been, you know, trying to put together a set of documentation for the linked set of features that he has been working on in the last couple of months. He has just been blocked by different issues, either in external-dns or CoreDNS or somewhere else, which are not necessarily inside Federation v2, which has been stopping him from putting in documentation which might explain the whole feature end to end.
A: I see there's some comments from Shashi and Vanya that I'll try to address today, but, fitting in with that, I actually have some diagrams that I've produced for how the replica scheduler works. That has a follow-on to this, which is number 311, that adds a doc. I was hoping to basically put about the same information in a markdown file, along with the diagram. I don't have as much knowledge as the author himself, but maybe some basic information.
D: Yeah, and apologies, it was there in my task list, like, since a month, and I am facing some issues, like, if I put it out, everyone will start getting the same issues. I think I need some working code to put out. That said, I'm blocked right now, so, hopefully it should go away soon, so I'm working towards that.
D: It's not with Federation v2, it's with some things related to CoreDNS, and then the external DNS. So initially I tried to find working examples with CoreDNS, but it seems there are some bugs in CoreDNS and also in external-dns. In the code base I even raised a pull request, but it seems, because of etcd vendoring and a lot of other stuff, we cannot migrate to etcd v3 natively, and there are a couple of complications there, so, I mean, with Google Cloud DNS...
D: So again, I am stuck with, like, we cannot write CNAMEs there, because we are writing TXT records for every record there, so a CNAME cannot coexist with those TXT records. So I think we might need to introduce a few more features, like ALIAS records or something like that. So I think I'm blocked not in Federation v2 but in these other components.
D: So the scenario which I am trying to write: I was directly trying to write a CNAME in Google Cloud DNS, and that is not supported by the DNS provider. So we might need to use an alternative, like writing an ALIAS record also, so I think those are maybe needed features. Okay, I think there is a way already identified by the external-dns project to solve those kinds of problems; I think we are encountering that right now.
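For background on the conflict described here: per RFC 1034, a CNAME must be the only record set at its name, so it cannot coexist with the TXT ownership records that external-dns writes alongside the records it manages, which is what an ALIAS-style record works around. A toy validity check, with illustrative record tuples rather than the external-dns data model:

```python
def can_add_record(existing, new):
    """Return True if `new` (name, rtype) can be added alongside `existing`.

    Per RFC 1034, a CNAME must be the only record set at a name, so a CNAME
    conflicts with any other record at the same name, and vice versa.
    """
    new_name, new_type = new
    for name, rtype in existing:
        if name == new_name and "CNAME" in (rtype, new_type):
            return False
    return True

# external-dns keeps a TXT ownership record next to each managed record.
zone = [("svc.example.org", "TXT")]
cname_ok = can_add_record(zone, ("svc.example.org", "CNAME"))  # conflicts
a_ok = can_add_record(zone, ("svc.example.org", "A"))          # allowed
```

This is only the RFC rule, not the provider behavior; individual providers such as Google Cloud DNS enforce it at write time.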
D: Yeah, no, I was saying apologies: I did intend to reply to the comments that you had put on my PR, but, sort of, the last two days I was off for personal reasons, and the comments came in just before the weekend. So, about the propagation: I actually had, sort of, thought about it, and what you're suggesting is a useful feature, but what I thought is, for now we can have a similar approach to the one that we took for the jobs feature, when we did enable the job templates and the propagation for jobs.
D: So what we did decide is that it is supposed to be there in the user documentation: if they need to use this feature, then at that point of time the overrides should be created before the template. So a similar approach. Yeah, yeah, let me put it this way: the code that you saw in the comment that you put, actually that is not needed. The code currently mandates that if the template does not exist, it will not propagate.
D: So I'll just remove that piece of code, and that should work, and it should be able to go ahead and do that. So you can create the overrides and create the placements before the template is created, and for now it seems it might solve this particular issue. But about the need for enabling propagation based on some other conditions: that also seems to be useful, so what I'm suggesting is we can track it separately.
D: Yeah, so the more useful feature that is part of my plan is to have automated rebalancing of jobs, and that is a bit complicated because of this prerequisite that the completions cannot be updated in the clusters where the jobs are already created. So that is the useful feature that the users are actually interested in, and what that simplified job scheduling currently is, as you mentioned, is more like distribution of the jobs among the clusters, and whatever we are doing seems more like working around the prerequisites.
G: I guess my concern is more that I'm not sure that scheduling jobs really makes a huge amount of sense, like, what you're describing seems better suited to an entirely separate mechanism where you just schedule pods, and you don't have this restriction on, you know, completions. Like, you basically have another mechanism that handles federated jobs separate from cluster-local jobs. I just want to put it out there, because it seems like, having to work around the limitations that Jobs imposes, like, why bother? Why not just...?
B: I'm not sure I fully understood the issue here. I remember that when I implemented jobs, the problem was the order that is used to override the jobs, right? Fundamentally, the only thing you could do with the order at that time was... the fact is that you can only templatize what you can modify, and since there were some immutable fields you cannot modify, you cannot templatize those multiple fields.
B: So fundamentally, this is somehow something that we may define as: the system should modify this spec and recreate the job, which is strange, I understand, because fundamentally, at the end, you end up having a job that is different from the previous job, just because you had to modify the immutable fields.
D: What he is pointing out is that currently, if you see the high-level schedulers, the architecture, really what they are trying to leverage is the propagation mechanism using the low-level APIs. And what the job scheduler is also doing is trying to create local jobs, so a job per cluster for that given spec, and we see problems with that, and what I am trying is to work around those problems as of now.
G: Yeah, I'm not questioning whether that's useful. It's more that having to work around, like, the immutable completions field of the existing Job type seems kind of strange to me, given that we really want more flexibility when we're talking about multi-cluster. Like, sure, Jobs is a thing, but it wasn't really designed to be rescheduled to a different cluster.
G: I guess I'm just putting it out there. I'm not sure that this is necessarily something we decide in discussion, and it just might be worth investigating whether we have some sort of alternate controller that, you know, just deals with things differently. I mean, if I think about how you handle things like completions...
G: Does it really make sense...? There's a cost to that, so I'm not suggesting, oh yeah, we must, you know, do something separate. I'm just raising the possibility, versus using Jobs in its current form.
G: Thank you. There's a question for me of what exactly a multi-cluster job is. Why? Because, within a given cluster, a job is either complete or not complete, and if it's not complete, it needs to be restarted in its entirety. If I have a job running across multiple clusters, what exactly is it doing, and how are those results composed? I mean, I know this is a general question, but if you could provide, like, a specific answer, a use case where a multi-cluster job is valuable, that would really help.
B: For example, we have some kind of data that we want to move to another cluster. We have some data pump jobs that, for example, collect data from the web or whatever, and that push data into the DB, and this is run daily. We may want to run this kind of job, and we want to define just the one job at the Federation level and to propagate, for each cluster, the same kind of job, right? That's it.
G: That's helpful, because, I mean, to me the case you're describing is really the non-scheduled version of the job. It doesn't really require any advanced scheduler; you just want to run it everywhere, because you want results collated from each cluster. So, anyone, do you have any examples that use the advanced scheduling capability, so I could understand that sort of scenario?
D: Yeah, so some use cases that have come across are based on analytics capabilities, or analytics jobs, which basically need to be running in different clusters, using some cluster-local data, and they are sort of repeatedly run, so the completions might be very high. Like, I can't really put a number to that, but there would be some defined completions, a very high number of completions, and so, if, say, a cluster is unavailable, then the same kind of completions needs to be done somewhere else; parallelism is involved.
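A small sketch of the rebalancing being described, assuming a federated job tracks the completions still owed per cluster and hands an unavailable cluster's share to the remaining ones. This illustrates the idea only, not the Federation v2 implementation:

```python
def rebalance(assignments, unavailable):
    """Spread an unavailable cluster's completions across the healthy ones.

    assignments: dict cluster -> completions still owed. Returns a new dict
    covering only the healthy clusters; any remainder goes to the first
    clusters in sorted order.
    """
    orphaned = assignments[unavailable]
    healthy = sorted(c for c in assignments if c != unavailable)
    share, extra = divmod(orphaned, len(healthy))
    return {
        c: assignments[c] + share + (1 if i < extra else 0)
        for i, c in enumerate(healthy)
    }

plan = {"us": 4, "eu": 4, "asia": 4}
new_plan = rebalance(plan, "asia")  # asia's 4 completions move elsewhere
```

The hard part in practice, as discussed above, is that the jobs already created in the target clusters cannot simply have their completions bumped in place, since the field is immutable.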
D: So when I say analytics, it's something like the exact case we were exploring earlier, the utility of this proposition in kind of edge clusters. So you have small clusters, small edge clusters, based at the edges, which are local to, say, edge kinds of scenarios, and I do not have the exact description of those use cases, but that's the kind of rough use case.
B: Sorry, but we also have an example of some jobs that can run in any cluster, and we may want to run them in multiple clusters to be sure to have the data at the right time, right? For example, I don't know, statistics. Sorry, I need to speak about our specific use case: for example, when we wanted to collect the data, we have a specific kind of feed on airplanes, or on flights, and we ran these kinds of jobs daily and collect this data.
B: We may want, for example, if you have a cluster in the U.S. and a cluster in Europe, to run this job once in both clusters, to be sure that the data is going to land at the right time. So in that case, for example, the DB is the same, and the data produced is going to be the same, if we are lucky. And that probably is not an example of advanced scheduling; that is an example of a task that we may want to run in both clusters at the same time.
G: Collating, so, I mean, I'm not suggesting that there aren't use cases that involve, you know, a workload that can just run anywhere. It just sounds like the examples we have today are more about: we want to run in specific clusters, so scheduling is more about making sure that, for a given job, all of the targeted clusters run that job, and to me, I mean, that's...
G: I think that's helpful, because then, when we're trying to solve the problems of, like, cluster availability, it's really not a matter of, like, I need to take this job and move it somewhere else. It's more like: if this job doesn't run for whatever reason, I just need to restart it, and the scheduler's job isn't really about reapportioning work so much as just making sure that every cluster that is in the scheduling preferences is getting that job and running it, and it sort of...
G: I don't know, I mean, the thing that I'm a little bit confused about is what the scheduler provides over, like, manually saying I want this job to run in these clusters, by setting those clusters, because the scheduler is going to be determining the size of the job, like the number of pods. I mean, I guess that makes sense: if you have a really big cluster, then you can afford to run, you know, more pods, and it'll actually complete faster.
D: So, to define the task of the scheduler: the objective that we put down at the start was to be able to, as you mentioned, apportion the job onto all the clusters, which might be many, and ensure that the completions are achieved in the minimum possible time, given the resources of the federation. So...
D: I don't necessarily mean that. First, in the context of v1, partitioning was in the context of how we define the job right now, and that definition is done using the job scheduling preferences or whatever. So the target is this: a definition is given, and the scheduler should be able to run the job among the given resources, around the given resources of all the clusters that you have, and achieve, or complete, the completions, or the task at hand, in the minimum possible time, which is arbitrary, which might differ based on the availability of the clusters and on how jobs respond under pressure, which is totally not in a human's control. So...
G: I think what I'm still struggling with a little bit is how you would determine, say, the completion count or the parallelism. You can define how many run in parallel and how many total. What I'm a little bit confused about, the way that I see jobs so far, and maybe it's influenced by v1, is that you have, like, a total number of completions, and you have a total number... like, what is that?
D: The weights were only the weights for the federation, nothing else, so proportionally it's weights: basically the proportionality of the clusters. So you can say that cluster one probably is a small cluster, so you give it a ratio of one, and cluster number two is twice the size of that one, so you can give it a ratio, so the distribution would happen in that ratio.
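A minimal sketch of the weighted split being described, assuming total completions are apportioned in proportion to per-cluster weights, with the remainder handed out by largest fractional part so the shares sum exactly. Names and numbers are illustrative:

```python
def apportion(total, weights):
    """Split `total` completions across clusters in proportion to `weights`."""
    total_weight = sum(weights.values())
    exact = {c: total * w / total_weight for c, w in weights.items()}
    shares = {c: int(v) for c, v in exact.items()}
    leftover = total - sum(shares.values())
    # Largest-remainder rounding: biggest fractional parts get the leftovers.
    by_fraction = sorted(weights, key=lambda c: exact[c] - shares[c], reverse=True)
    for c in by_fraction[:leftover]:
        shares[c] += 1
    return shares

# Cluster two is twice the size of cluster one, as in the ratio example above.
split = apportion(10, {"cluster1": 1, "cluster2": 2})
```

Parallelism could be split the same way, which matches the intuition that a bigger cluster can run more pods and finish its share faster.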
B: That makes sense. I'm not sure if we have that use case, but I would say that what you mention, the different sizes of the clusters, could be an example of where we may want to use this. I don't know, I'm not sure, but it could make sense, the fact that having a big cluster may bring, too, high parallelism, high concurrency, right? But I don't know, I'd need time to map this onto our reality and our needs.
G: It might be. I'm just kind of wondering whether we should just consider that model, because I think it makes complete sense when we're talking about arbitrary workloads that can just be distributed anywhere, but I'm not sure. It doesn't make as much sense to me, but I'm not a user, to apportion a pool based on a weight when really I need to run everything in all the clusters, and the only variability is, you know, how big a job can I run in any given cluster, given its size.
G: I just think it seems, to me, those are two sort of separate modes of scheduling. One is, like, truly batch, and one is more... I don't know the word for it, but it's more specific to the clusters. It's not "I can run anywhere"; it's "I have to run in these clusters", and the only variability is how fast I can run, with how many.
D: The whole sole aim is to ensure the fastest completion of the overall workload across these clusters, and the variable for the clusters might be only size, or probably how fast the nodes are over there: probably computationally faster, they have CPUs more specific to the workload, that kind of stuff, so...
G: I take your point that just because, you know, we don't have the use case, or users with the use case, in this meeting, it doesn't mean the use case doesn't exist. It just suggests to me that maybe there are two different schedulers, or at least, like, different flavors of scheduling: one that is about, you know, making sure that things... it's almost like a daemon-set version of a job, like you have to run in all these clusters, sort of thing, versus "I can just run anywhere and I just want to get it done as fast as possible". I think, at least to me, it would be helpful to just sort of identify the use case the scheduler is targeting, because I think those are separable problems. You're going to do one or the other; to do both at the same time for a given job... like, you can have maybe multiple jobs that use different forms of scheduling, but I would just want to avoid the potential of, you know...
D: It does, it does, and actually, what you are saying... I'm thinking about the possibility of various use cases and trying to fit that into the solution that we are providing. The approach that we took, and the solution that I actually was working on, is to try to implement, or try to work on, something which is doable as of now, which is doable and useful for the known use cases that we have. So this is the current state of whatever I have; I have pushed the PR.
G: My suggestion wasn't that it wasn't useful; it's more that I wanted to clarify what problems we're trying to solve. And, yeah, I mean, the example of a batch workload: I think it's well suited to that, where it's about getting as much work done as possible across, you know, as many clusters as possible.
G: Though, I mean, I'm not suggesting that, you know, the current limitation... I just wanted to be clear on, like, why it was necessary to, say, modify completions, or not modify, but to deal with completions in the way that you're proposing. It makes sense to me now that we're talking about a batch sort of case.

G: If the use cases that we have immediately at hand are more focused on just distributing workloads to specific clusters, we have a better chance of solving that problem well, because we're the consumers of that solution. If, instead, we're not really the consumers of a batch-centric solution, then ideally we'd go engage the people that are, so we make sure what we're delivering is actually solving their problem. So maybe, as part of this implementation, could we engage the folks at CERN and see if they're interested in providing some insight?
A: I found this personally to be really helpful, in terms of, one, understanding, actually, you know, various things that were covered in the discussion, but then also the idea that you can have separate higher-level schedulers that accomplish different things. I think that is a really powerful concept, and I think that's good, because it avoids, you know, the need to make one thing that is universally flexible, which tends not to work out very well. So I'm very supportive of, like, schedulers and higher-order constructs that have narrow things that they accomplish, and, you know, a multiplicity of those different things for different purposes; I think that's a great idea. I am thinking that we really will probably hit some issues around mutability elsewhere that we'll probably need to talk about again, but this has been a good discussion. Thanks.
D: One more thing that might come out of that exercise is that we might have a lot of issues which either are getting stale or are too big to be addressed as a single issue, which can be partitioned into multiple smaller ones which people can take stabs at. And documentation is the other thing: like, Paul, you mentioned that there is a lot of documentation that can be done, which certainly stands out as a good first issue for any other person, of course giving deference to any existing work.