From YouTube: Kubernetes SIG Scheduling meeting 2018-03-15
A: Okay, so the first item that I wanted to talk about is a quick basic update on the work items for 1.11 priority and preemption. For 1.11 we think that we are on track. I'm arranging things with some customers, and the work on the DaemonSet controller, which is done by Klaus, is in progress. We are still thinking about whether we should keep it in 1.10 or 1.11.
A: Yeah, so we saw an issue in the way that the scheduler's equivalence cache is updated. There was a bug in handling events from PVCs. That bug is fixed. Well, it was not particularly PVCs: a PVC was affecting one of our functions, and, if I got it right, it was not releasing the lock, and updating the cache was causing a timeout. The bug is fixed, so we think that it's on track. We also need a design doc for the policy library of the scheduler; Klaus is going to work on that, and I don't have any update on it. On the design doc for gang scheduling: Klaus has written an early version of the document. I saw it; it's still pretty rough. We need to work on that. Then, graduating taint-nodes-by-condition and DaemonSet scheduling to beta: these are, I believe, on track as well. We are going to use part of this taint-nodes-by-condition work in the DaemonSet controller; well, it's kind of related to this, but not exactly the same, but anyways.
A: Okay, so there was another issue that I created recently. Red Hat folks have been working on a document as well to create, basically, default priority, or to specify default priority, in certain namespaces. I also created this issue that I've linked in the agenda, because I believe we need to have a scheduling policy that, well, this is kind of different. So, basically, this is a policy constraining the scheduling properties of pods.
A: If you look at the issue, this includes things like allowed priority classes, or maybe default priority classes as well, allowed tolerations, allowed affinity, particularly node affinity and anti-affinity, and also maybe specifying the default scheduler that you want to have for your pods, and stuff like that. We don't have any policy today for any of these matters, and we would like to have those at some point, hopefully in 1.11, but I'm not so sure we can get it in on time for 1.11.
A: But we would like to work on them, and I would also like the Red Hat folks, if you're here, to tell us what they think about having a more holistic policy here, as opposed to having individual components, like what we have today in the scheduler, that address each one of these separately. What do you guys think? Yeah.
D: Yeah, no, basically I was saying, I think I agree: it would be better to have some holistic policy that covers more areas of the scheduler. But the only concern I have is that the scheduler is about the global aspects of the cluster, while right now we have been talking about several policies, several policy constraints, that are per namespace. I'm not sure how per-namespace constraints and policies would fit inside the scheduler.
A: So, to be clear, I'm not necessarily talking about a scheduler policy; sorry if this was confusing. This is probably going to be a global policy that is going to affect scheduling decisions, so this is not necessarily going to be something that only the scheduler cares about. It's actually a policy that affects maybe even pod creation, for example: when you create a pod, this policy may affect the decision of what kind of tolerations you can add to that pod.
D: So it seems, if we want to have something like a more centralized policy, then different parts of the system, let's say, for example, admission plugins, the scheduler, and maybe other components, can access it. So do we want something like that, some sort of centralized policy where different components of the system access whatever is appropriate to them and take action based on it? Yes.
A: That's something that I had in mind. As I said in that issue, this is something that, for example, tells us whether a particular pod affinity, or anti-affinity, is allowed. If we don't have such a policy, someone could create a pod with an anti-affinity that prevents any other pod in the cluster from being scheduled in the same zone, for example, stuff like that. Or tolerations: someone could possibly add a toleration to their pods, letting them be scheduled on nodes that have special hardware.
A: You know, our current approach to protecting nodes with special hardware is to taint them with a particular taint and then let pods tolerate those taints. The pods that really require those resources tolerate the taints so that they can be scheduled on those nodes, while other pods cannot be scheduled there, stuff like that. So if a user can arbitrarily add a toleration to his pods to tolerate anything in the cluster, then none of these techniques will be useful. That's why we thought maybe we should have these policies. Yeah.
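As a concrete sketch of the taint-and-tolerate pattern just described, using the core/v1 types; the taint key here is illustrative, not an established convention.

```go
// Sketch of the taint-and-tolerate pattern described above, using core/v1
// types; the taint key "example.com/gpu" is illustrative only.
package example

import corev1 "k8s.io/api/core/v1"

// Taint applied to nodes with special hardware, so that pods without a
// matching toleration are not scheduled onto them.
var gpuTaint = corev1.Taint{
	Key:    "example.com/gpu",
	Value:  "true",
	Effect: corev1.TaintEffectNoSchedule,
}

// Toleration carried only by pods that actually need the hardware. The
// policy being proposed would control who is allowed to add this.
var gpuToleration = corev1.Toleration{
	Key:      "example.com/gpu",
	Operator: corev1.TolerationOpEqual,
	Value:    "true",
	Effect:   corev1.TaintEffectNoSchedule,
}
```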
D: I agree. And Bobby, sorry to interrupt you; I think we agree we should have some sort of policy like that which covers more aspects of the scheduling features. But it seems like there are some people on this call from the release team, and they are looking for some more info. Maybe we can discuss their items, and after that we can continue our discussion. Sure.
G: So the first thing is, since I guess I can't talk to Klaus until tonight, for the rest of SIG Scheduling: if we're looking at bumping the rest of DaemonSet scheduling to 1.11, a bunch of things have already been committed, including test changes. Does anybody else have an idea of how that would affect the stability of 1.10, and whether or not we need to back additional things out?
A: The alpha cluster tests, yeah, I mean, in the alpha cluster tests this feature was on, right? So yes, you're right, these are special tests; they are not regular operation, but they run in alpha clusters. We need to have this new change, the PR that is basically currently waiting on Klaus, and we need that one, or we have to take those out if we really want to do it. I want to have those tests passing on those systems.
H: And I originally had questions about the doc for the DaemonSet stuff, but it sounds like that's an all-or-nothing thing, so if we don't go into 1.10, they could just pull the doc; we don't need pieces of it. I'll take that up with Josh and Klaus separately. But the other thing that we are trying to do is get folks' eyes on the release notes, because we need to get those cleaned up for the release, and there's a number of questions addressed to members of this SIG still outstanding in the doc.
C: Can you hear me now? Okay. I've been trying to keep the scheduling mailing list CC'd as much as possible, but I just wanted to bring up the fact that two weeks from today there's a meeting in the NVIDIA office in Santa Clara, and you may be interested in some of the topics we're going to discuss there. What I've put in the chat now is the agenda document, and each line in the agenda, as of yesterday or today, has a design Google Doc that is ready for review and comment.
C: The three that I highlighted that you may be interested in are: use cases around the ExtendedResourceToleration; the device plugin hooks that I think Rohit put together a release or two ago, where there may be some improvements, changes, or extensions we need, that's in the device plugin doc section in the agenda, the second link; and the next one, which is probably the lion's share of it, the resource class and resource API.
C: This is almost like a college class, because there are dozens of pages in each one of these documents. It's pretty intense, a lot of work, obviously; I'm glad to see everyone did all kinds of background work to hopefully make our meeting successful. And the third piece is that Renaud from NVIDIA put together a GPU sharing document, which allows containers within a pod to share a GPU and an InfiniBand device.
C: I don't know how much that one really touches on scheduling, because it's not sharing a single GPU amongst multiple pods yet, but that's where they're headed. So those are the three things I think need some review. And I know, Bobby, you can make it at least to some portion of the meetings, which are, by the way, on March 29th and 30th.
A: So I've personally spent quite a few hours with some of the folks here on designing resource class and compute resources and other concepts brought up in some of these documents, particularly with Vish and Jiaying. So definitely that's something worth discussing and thinking hard about. It requires a lot of careful design and consideration before we move forward with it; it's going to affect a lot of scheduling- and resource-related designs, and maybe some of the devices that we want to support, or don't even know about yet, in the future.
C: The last thing I'll mention is that supposedly we will have some remote connectivity in terms of WebEx for folks who are able to join, maybe for an hour here or an hour there in their busy schedules. I can't speak to how good the quality will be, I have no idea what the facilities are like, but NVIDIA is trying to set that up. That would be great.
A: I would like to see what happens in the next 24 hours, and then, if it didn't make it, hopefully we are going to push it to 1.11. And, as Josh said, in order to make some of these alpha cluster tests green, we would have to pull out some of the already checked-in, already merged PRs. Okay. So, do you want to continue the discussion on the policy? Yeah.
D: Yeah, definitely. I mean, I agree we should have some sort of more detailed policy, but I think what we actually need is to discuss the right design we want: if we have some sort of centralized policy, where it should live, what it should contain, and which components, for example admission plugins, the scheduler, and whatever other components, would want to access that policy. So yeah.
A: Absolutely. I mean, a holistic solution or holistic policy definitely requires touching more than one component. This is not going to be something that only affects the scheduler; as we already said, it is probably going to affect a bunch of admission controllers as well, and we need to write a proper design doc for that, one that covers all of these areas. Yeah.
A: Yes, he was very interested in working on this, because he had already faced some of these issues, particularly with tolerations, in some of his customers' clusters. So he was also thinking about writing a document, and on Slack he told me that he has written a sort of first draft. So yes, definitely we should collaborate; I would also like to take a look and share my thoughts as well. Yes.
A: All right, and I guess the last item that I have here is the scheduler cache comparer. I don't know if you guys are aware of it, but a problem that we have been seeing recently, not necessarily very recently, but at least over the past several months, is various issues related to the scheduler cache having stale information. We have seen the scheduler try to put pods on nodes that do not have CPU, because the cache told the scheduler that there were available CPU resources on those nodes, and so the scheduler tried to put more pods on them.
A
While
those
notes
actually
had
some
other
pods
running
and
didn't
have
enough.
Cpu
we
have
seen
scheduled,
are
not
putting
parts
on
notes,
because
scheduler
thinks
that
those
notes
have
a
lots
of
parts
and
while
some
of
those
parts
already
deleted
and
so
on
so
forth,
we've
seen
bunch
of
other
similar
issues
related
to
schedule.
Cache
by
mistake.
So
I
proposed
adding
a
new
mechanism
worse
or
like
a
new
feature
in
scheduler,
very
scheduler
rebuilds.
It's
for
like
a
copy
of
a
cache
of
his
cache.
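The meeting doesn't show the comparer's actual design; a minimal sketch of the idea, relisting pods from the API server and diffing against the cached set, with all names hypothetical, might look like this:

```go
// Hypothetical sketch of the cache-comparer idea; names and structure are
// illustrative, not the scheduler's actual implementation.
package comparer

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CompareCache relists all pods from the API server and reports cache
// entries for pods that no longer exist, one of the staleness symptoms
// described above. cachedPods holds "namespace/name" keys taken from a
// snapshot of the scheduler's cache.
func CompareCache(ctx context.Context, client kubernetes.Interface, cachedPods map[string]bool) error {
	actual, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	live := make(map[string]bool, len(actual.Items))
	for _, p := range actual.Items {
		live[p.Namespace+"/"+p.Name] = true
	}
	for key := range cachedPods {
		if !live[key] {
			fmt.Printf("stale cache entry: pod %s was deleted\n", key)
		}
	}
	return nil
}
```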
B: There are two fixes that went in in 1.10 that applied to this; basically, both were for the watch cache issues and updates, which could affect it. Plus, if you wanted, the original design of some of the informers had a period that you could specify for when you want to relist, and the relist alone, because you'd be relisting anyway, would just refresh your entire cache, and any stale entries would just be corrected. So that alone would solve the problem, just by changing it, because right now the list-watch is set to watch forever, right: list once at startup and watch forever. And that was the problem, because it was done for efficiency reasons. What we had originally, many, many moons ago, was a constant refresh of that list, and we never really had a problem, because it was always doing a relist. For scalability concerns, we got rid of the listing and switched to a pure watch-based system, which then had issues with the informers and the other caching mechanisms that were put in place. But if you were to just put back in an interval, you know, for the listing itself, which you would have to do anyway for your design, you would get back the behavior that you're looking for, right?
A: You are absolutely right. I am aware of that, and actually I think this relisting is, in informer terminology, called resync, if I'm not mistaken. Resync is deliberately disabled in the scheduler for the same reasons that you mentioned, Tim: we wanted to have more scalability, and we felt that resyncing in larger clusters is probably going to take some time. When I spoke with the API server folks, those guys expected this mechanism that is currently in the scheduler to work, without needing resyncing, and they actually said that this is the recommended way compared to resyncing, because it also helps remove some of the load from the API server: resyncing requires the API server to basically send all the objects again. This is apparently the more recommended way, and that's why we are thinking about what to do about this issue.
A: This is, of course, not 1.10 but 1.9, and they are seeing this issue, and there are a few issues in GitHub as well which are related to it; some folks have recently mentioned that they are seeing similar problems. That's why we built this relisting, or resyncing: building this copy of the cache and comparing it. This is currently invoked by sending a signal to the scheduler, and I think it'll be great.
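The meeting doesn't say which signal triggers the compare; as a sketch of the pattern, assuming SIGUSR2:

```go
// Hypothetical sketch of triggering a debug action via a signal; the
// meeting does not specify which signal the scheduler actually uses.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGUSR2) // SIGUSR2 is an assumption

	for range sigCh {
		// A real scheduler would snapshot its cache here and diff it
		// against a fresh relist from the API server.
		fmt.Println("comparing scheduler cache against a fresh snapshot")
	}
}
```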
A: That's one. The second mechanism is sort of like built-in plugins. These are still plugins, somewhat similar to what we have today for predicates, but what I have in mind is to make them a little more explicit, or more flexible, in a way, so that they can be easily added or removed, and probably we are going to have a little more flexibility in the interface where we call them. But anyways.
A: The webhook version is simply going to be similar to the extenders that we have today, and this is going to be the sort of slower mechanism, which does not need recompiling the scheduler. The second mechanism is similar to predicates, in a way: basically, we are going to have a bunch of plugins inside the scheduler codebase. Those are going to be faster plugins which run in the same process and have access to the scheduler caches, etc.
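As a sketch of the in-process flavor, a hypothetical interface along these lines; this predates and is not the actual scheduler framework API:

```go
// Hypothetical sketch of an in-process plugin interface along these lines;
// this is not an actual Kubernetes scheduler API.
package plugins

import corev1 "k8s.io/api/core/v1"

// FilterPlugin decides whether a pod can run on a node, analogous to
// today's predicates, but registered and removed explicitly.
type FilterPlugin interface {
	Name() string
	// Filter runs in the scheduler process, so it can read the
	// scheduler's caches directly instead of going over HTTP.
	Filter(pod *corev1.Pod, node *corev1.Node) (bool, error)
}

// registry of enabled plugins; entries can be added or removed without
// changing the scheduler's core code.
var registry = map[string]FilterPlugin{}

// Register makes a plugin available to the scheduling loop.
func Register(p FilterPlugin) { registry[p.Name()] = p }
```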
A: An external process, so the gRPC mechanism that you mentioned, requires everything to go over the RPC interface, which usually means an external process. It's going to be faster than the webhook that we have today, but it's not going to be super fast like an in-process plugin. But I don't know, I don't have a strong opinion on your idea of having gRPC or regular webhooks. gRPC is kind of nice because it's probably a little faster than marshalling and unmarshalling to JSON and so on.
A: We don't have a whole lot of extension mechanisms built on gRPC, although we do have some, I believe. I'm not so sure if we are going to go down that path, but that's certainly an option as well, or we can maybe have both of them: a webhook, like a regular REST API, and maybe gRPC as well.
A: Not necessarily, but yes, if we build a mechanism inside the scheduler to basically convert to protobuf and convert back from protobuf to scheduler state, then yes, we can do that. But it's not anything we get out of the box; we need to add something to the scheduler, of course, right.
I: Yeah, Bobby, this is Deepak again. Just one thing, maybe I can also do this on our mailing list, but one other thing which we are doing is: we are in the process of implementing the pod affinity feature as part of the Firmament scheduler. One thing that came up, you know, we were looking at the documentation as well, is that hard anti-affinity is symmetric. That means if you have two pods, S1 and S2, and S1 has an anti-affinity rule that says,
I: "I cannot be deployed with S2," then S1 comes along and gets deployed on a node, and now the S2 pod comes along, but it does not have any rule in it. Because anti-affinity is symmetric, the scheduler makes sure to check all the pods to ensure that there is no conflict with this particular pod.
I
So,
essentially,
what
that
means
is,
if
any
new
part
that
comes
along,
which
doesn't
have
any
rules,
anti
affinity
rules
and
it
we
still
gonna
check
all
the
parts
to
make
sure
if
the,
if
any
existing
running
part
is
in
conflict
with
this
new
part.
Oh
that's
a
lot
of
processing,
maybe
I,
don't
know.
If
there
is
this
correct
or
doesn't.
I: I mean, this is a very simple thing to solve, essentially, the fact that anti-affinity is symmetric. It's a process issue, actually: you make sure that whoever creates the pod puts the rule in the pod which has a conflict with the other pods, and that solves the problem. I think it's more of a problem for us, for Firmament, because we do batch scheduling.
I: I think it is still an issue, but for us it became a bigger issue, because we have to treat the affinity pods differently, and we are not able to do that: the pod that comes in as S2 in this case, even though it doesn't have any anti-affinity rule, we still need to be aware of the fact that S1, which is already there, has a conflict with it. So that kind of throws us off. Yes.
A: What I'm saying is: let's say I have two pods, S1 and S2, and in S1 you clearly specify anti-affinity to S2, basically "I cannot coexist with S2." Okay. But currently the pod S2 has no rule; I mean, it's a process issue, a running system can enforce that, but the user, the developer, or whoever deploys pod S2, needs to make sure to add the opposite anti-affinity rule.
I: No, no, so let me rephrase that again. We have pods S1 and S2, and S1 has an anti-affinity rule against S2, meaning "I cannot coexist with S2." Currently in the system we allow pod S2 to come along without any rule in it, no anti-affinity rule at all. And because of that, we end up scanning everything: before we deploy S2, we make sure that there's no other pod in the system which is in conflict with this incoming pod, you see. So what I'm saying is: if pod S2 had to very explicitly carry an anti-affinity policy which is the opposite of S1's, saying S2 has anti-affinity with S1, then that would solve the problem, basically. Yeah.
I: It's more of a process question: do we want to let the system worry about this, or do we let it be a process issue, so that if the developer doesn't do it right, they suffer? Basically, the reason I'm saying this is that we are unnecessarily complicating our scheduling logic because of this 0.001 percent case. I understand that; the reason I keep saying "process issue" is that, you know, it's not going to be bulletproof if somebody makes a mistake, no doubt about that.
D: Actually, I would like to add something here. First of all, about the way that symmetric anti-affinity is working: let's say, as you say, S1 has anti-affinity to S2, and S1 gets scheduled first, and then, when we want to schedule S2, we are checking all the pods on that node. I'm not sure that's really what is happening; in fact, what may be happening is that S2 gets scheduled on a node, and S2 could be scheduled on a node where S1 is already scheduled, so I...
B: I don't think that the policy, I mean, you're saying "process," but in the nomenclature that we use, at least in this version, we call it "policy," and I don't think there's necessarily anything bad with having the strict policy, right; we just never explicitly specified it before. It would say: if you put anti-affinity against S2 on S1, then that basically implicitly implies anti-affinity against S1 on S2, right? This...
I: Yeah, and it gets even more complex for us, because we're doing batch scheduling, so we are processing pods which are not even scheduled yet, and we want to treat them differently. If we start doing that, we can zero in on the affinity pods, the ones which have an anti-affinity rule, and treat them differently; but the pod S2 does not have an affinity, so how do you...?
I: We are at the very initial stages of implementing pod affinity; we have already implemented node affinity in Firmament, and we are about to do that. I mean, we were pretty sure that this is how we're going to treat these pods which have affinity or anti-affinity rules differently, because we're going to queue them up. In order for us to do that, we then saw this documentation, and that kind of threw us off.
A
Yeah
that
this
is
definitely
something
that
we
need
to
also
work
on,
for
you
know
for
the
default
of
schedule,
and
this
is
not
something
that
is
causing
problems
for
you,
but
it's
also
causing
performance
issues
in
the
default
schedule
as
well.
We
made
some
little
improvements
in
terms
of
performance
and
all
by
doing
some
optimization.
For
example,
if
T
topology
is
only
the
node
just
just
other
than
that
particular
node.
Instead
of
checking
all
the
parts
in
the
cluster
and
stuff
like
that,
but
these
are
not
enough
to
really
address
the
problem.
A: I think we need to change the API a little bit, or the expected behavior of anti-affinity a little bit. I would like to work with you guys on this. There is, of course, one possible option, one of the possible options, which is to just remove the symmetry, but I would also like to see a couple of other options. If you have any ideas, please share them with me, and then we can make a decision.
I: No, definitely, we're not there yet, I think. What I just wanted to say is that we are just trying to build it, starting kind of small, and then we can extend it, and obviously we're going to do that as part of the evolution. But I think, for the time being, what we are doing is just keeping it simple, basically: we're going to assume explicit behavior in Firmament, and then we can keep chatting, and whatever we end up deciding, we can do.