From YouTube: Kubernetes SIG Scheduling Meetings 20170817
A
B
Also, there are no knobs yet to avoid movement on transient changes, like short spikes, so that the descheduler is not triggered as soon as something happens. If something persists for a longer time, then maybe it is better to trigger the descheduler, because descheduling is by design disruptive, and we want to reduce disruptions as much as possible. So currently I have implemented two strategies. The first strategy is that the descheduler checks whether there are duplicates, that is, more than one pod associated with the same service on the same node.
B
It evicts those pods. The assumption here is that, for high availability, it is not needed to have more than one pod associated with the same service on the same node. So this is the first strategy.
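The duplicate check just described can be sketched in a few lines of Python. This is a simplified stand-in for the real implementation, with pods reduced to hypothetical (name, service, node) tuples:

```python
from collections import defaultdict

def find_duplicate_pods(pods):
    """Flag pods beyond the first for each (service, node) pair.

    `pods` is a list of (pod_name, service_name, node_name) tuples,
    a simplified stand-in for the real pod/endpoint objects.
    """
    seen = defaultdict(list)
    for name, service, node in pods:
        seen[(service, node)].append(name)
    # Every pod after the first one on the same node for the same
    # service is a candidate for eviction.
    return [names[1:] for names in seen.values() if len(names) > 1]
```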
The other strategy I have implemented is based on node utilization.
B
Actually, node under-utilization can arise in different scenarios. Say a new node is added: a newly added node would mostly have low utilization. Then there might be existing nodes which go under-utilized because, for whatever reason, pods were terminated on them. And there might be existing nodes that are coming back after maintenance. For this strategy it really doesn't make any difference whether it is a newly added node or not, as long as the node has low utilization.
B
This strategy is to find those nodes. There are two configurable kinds of thresholds, the low-utilization thresholds and the target thresholds, and these thresholds are based on CPU, memory, and pods, as percentages. Then there is a number-of-nodes threshold as well. So, for example, say you have a very big cluster, one hundred, two hundred, three hundred nodes, and there is only one node with low utilization; for some users it might not make sense to run the descheduler then.
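As a rough illustration of the configuration being described, a policy file might look like the sketch below. The file format and field names are my own illustration, not necessarily what the speaker's implementation uses:

```yaml
# Hypothetical descheduler policy sketch; field names are illustrative.
strategies:
  RemoveDuplicates:
    enabled: true
  LowNodeUtilization:
    enabled: true
    params:
      # A node is under-utilized only if ALL of these metrics are
      # below the given percentages.
      thresholds:
        cpu: 20      # percent of allocatable CPU requested
        memory: 20   # percent of allocatable memory requested
        pods: 20     # percent of the node's max-pods setting
      # Pods are moved only onto under-utilized nodes, and never so
      # many that a node would exceed these targets.
      targetThresholds:
        cpu: 50
        memory: 50
        pods: 50
      # Only act when at least this many nodes are under-utilized.
      numberOfNodes: 3
```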
B
So you could set the number-of-nodes threshold so that the descheduler is only triggered whenever the number of low-utilized nodes is above it. I have also prepared a graph to explain this, because this is a more complicated strategy than the first one, just to show how this strategy works. I'm sorry, this is a hand-drawn diagram I just quickly put together. So basically, if you look at the x-axis, those are node IDs, you could just imagine them, and on the y-axis are the usage metrics, like CPU, memory, pods.
B
So basically the idea here is: for this low-utilized node, there is this much capacity we could move onto it from the nodes that are above the target utilization. So basically this much capacity we could move. The capacity available on the over-utilized nodes could be larger than this, but we don't move anything more than this, so that no node goes above the target utilization. It also depends on several other factors that I talked about, but this is basically how this strategy works.
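In a single-metric simplification, the "how much we could move" idea from the diagram might look like the hypothetical helper below (my own sketch, not the actual code):

```python
def movable_capacity(utilization, target):
    """How much capacity can move onto under-utilized nodes.

    `utilization` maps node name -> utilization as a fraction of the
    node's capacity; `target` is the target utilization fraction.
    We move at most the smaller of (room below target on low nodes,
    excess above target on high nodes), so that no node ends up above
    the target utilization.
    """
    room = sum(target - u for u in utilization.values() if u < target)
    excess = sum(u - target for u in utilization.values() if u > target)
    return min(room, excess)
```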
B
Yeah, yeah, I talked about this; I'm going in that direction. Yeah, just a second, I'm sorry. So this is how eviction works in these strategies: first of all, I am using the eviction subresource, and the eviction subresource by default takes care of pod disruption budgets. So whenever any strategy tries to evict a pod, it gets an error if the pod disruption budget would be violated, and then we ignore that pod and continue with the next pod.
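The eviction flow just described, trying each pod and skipping it when the API rejects the eviction because of a PodDisruptionBudget, can be sketched as follows. `PDBViolation` and the `evict` callable are hypothetical stand-ins for the real eviction-subresource call:

```python
class PDBViolation(Exception):
    """Raised by the (hypothetical) eviction call when evicting a pod
    would violate its PodDisruptionBudget."""

def evict_pods(pods, evict):
    """Try to evict each pod via the eviction subresource.

    `evict` stands in for a POST to the pod's `eviction` subresource;
    the real API server rejects the request when a PodDisruptionBudget
    would be violated.  Rejected pods are skipped and we continue with
    the next pod, as described above.
    """
    evicted, skipped = [], []
    for pod in pods:
        try:
            evict(pod)
            evicted.append(pod)
        except PDBViolation:
            skipped.append(pod)
    return evicted, skipped
```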
B
So now I'll start my demo; I'll show both strategies and how they work. I have prepared this demo only on a two-node cluster. Initially I wanted to use a simulator, like they have in the autoscaler, but I didn't get enough time to do that, so I'm showing it only on a two-node cluster, and I'm hopeful that it should give some idea of how it works.
B
So here, if you see, I have a two-node cluster: one node is ready and the other node is not ready. I just want to show you: first I will schedule pods on the ready node, and then I'll run the descheduler and show what it does when the other node comes back. So, as I said, first I'm going to create pods in a way that creates duplicate pods associated with some services; I'm now creating the services as well.
B
Also, the descheduler runs both strategies one by one, but right now, because no node is under-utilized (I'll show you more about that later), basically only strategy one will take effect here. So you see, it did not process, it did not do anything with node 120, because that node was not ready, and then it processed node 61, and if you see here in this window...
B
Yeah, in this window you see all those duplicate pods were evicted, and now those pods have moved to 120. So now you see four pods are on the new node, 120, and four pods are on the already existing node, node 61. So this is just the simple case. Now I'll show you the second strategy, the low-node-utilization-based strategy. First I'll just delete all the existing pods, just to show that. So they are terminating right now, but once they are terminated...
A
B
Yeah, I'm going to show you that strategy next, because, as I said, this strategy only takes care of removing duplicate pods associated with some services. So that's why it did not take care of that node. I'll show you another strategy right now that will take care of that node. Here, I think you are talking about... just a second again.
B
So if you see this: the first strategy did not have any parameters itself, but for this strategy, the low-node-utilization one, I have, as I told you, two thresholds: threshold one, the low threshold, and threshold two, the target threshold. All these thresholds are in percentages. So basically the first threshold: when the CPU usage is less than 20%, memory is less than 20%, and pods are less than 20%, that means that node is under-utilized.
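So the under-utilization test is a conjunction over all metrics. A minimal sketch of that check (my own illustration, with metric values as percentages):

```python
def is_underutilized(usage, thresholds):
    """True only if EVERY metric (cpu, memory, pods) is below its low
    threshold; a single busy metric disqualifies the node.  Values are
    percentages, e.g. {"cpu": 15, "memory": 10, "pods": 12}."""
    return all(usage[metric] < thresholds[metric] for metric in thresholds)
```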
B
If all these metrics, CPU, memory, and pods, are under 20 (I mean, as I said, you could tune it the way you want to, but for example I'm showing this), then the node is under-utilized. And then there is the target utilization. The target utilization is where we want to have our nodes, at this threshold, like 50 percent.
B
Because the reason is, as was said: right now I think we would have to integrate it with something like Prometheus to get the correct, real load. Right now the only values we have are the pod request values; I think the scheduler also uses the same thing for its calculations. I think that's part of future work, but right now you're right; that's why we have these three parameters: CPU, memory, and pods.
B
But it is computed for every node, over all the pods on that node. The only things we don't know here: first of all, best-effort pods don't specify requests, so that is one thing; and then this is not the real load, the load averages. But, as I said, because we are taking the number of pods into account, in some way it might help.
B
So, I mean, right now we have that, but as future work we want to consider the real load average; I think for an initial version this might be good enough. And I'll show you what I just described: what I'll do now is create some pods again. I'm going to create 12 pods that are burstable and guaranteed pods, and I'm going to create four pods that are only best-effort pods.
B
So I think there are only four pods, the ones starting with "b", and those are the best-effort pods; all the other pods are guaranteed and burstable pods, and those are created by replication controllers with one replica each. But I think that is not that important for this strategy.
B
What I do: I again run the descheduler. Right now I'm not starting another node; there is just one node, and I'll show you that even if I run the descheduler, it won't do anything, because there is just one node. What it's effectively doing is simulating all the nodes being above the target utilization. Let's say you have 100 nodes and all are above 50%: that means there is nothing the descheduler should do, unless you again tune your thresholds.
B
So all are running now, and now I'll run the descheduler. See, it says no node is under-utilized, because there is only one node, and here it says how much the utilization on that node is: it says 80% pods, because the maximum number of pods I have set is 20, and because there are 16 pods it says 80; and CPU is 37.5, and memory is only 10. But as long as even one metric is above the target utilization, that means that node is above the target utilization.
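The asymmetry is worth noting: being under-utilized requires all metrics to be low, but going over target takes only one metric. A sketch of that check, using the numbers from the demo (an illustrative helper, not the actual code):

```python
def is_above_target(usage, targets):
    """A node counts as above target utilization as soon as ANY single
    metric exceeds its target, even if the others are low.
    Values are percentages."""
    return any(usage[metric] > targets[metric] for metric in targets)
```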
B
So now, if you see, first it evicted the best-effort pods, and then it evicted two other pods, because, as I said, only 50% of the pods could stay there, so it stopped at 50%, because 50% means ten pods; and the other pods it moved to the other node, that is, .120. So if you see here, they are still terminating: now six pods are on node 120 and ten pods are on the existing node.
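The arithmetic behind "it stopped at ten pods" can be checked with a small helper (again my own illustration, not the speaker's code):

```python
def pods_to_evict(pod_count, max_pods, target_percent):
    """Number of pods to shed so pod utilization drops to the target.

    With a max-pods setting of 20 and a 50% target, at most 10 pods
    may stay, so a node running 16 pods sheds 6, as in the demo (the
    four best-effort pods first, then two more).
    """
    allowed = max_pods * target_percent // 100
    return max(0, pod_count - allowed)
```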
B
Right now, without tuning your thresholds, if you just run the descheduler again, it won't do anything, because again it says there is no node that is under-utilized: now both nodes are above the low-utilization thresholds, I mean, there is no node that is under-utilized, so it's not going to do anything. Now what I do: I just tweak my policy; I make all these thresholds 40, 40.
B
Yeah, it's under my account on GitHub, the descheduler, and, as I said, although there are not many folks on it yet, I wanted to go for the kubernetes-incubator repos, so that more people could contribute. As I said, this is a preliminary initial version, and we really want to move it forward, because there are so many other things we could implement; but definitely I can send a link to the SIG Scheduling group as well.
D
Hey, hey, this is Bobby. Sorry, I am a bit late; I had back-to-back meetings, so I arrived a little bit late. Maybe my question is already answered, but I was wondering: is what you described sort of like different policies in the current autoscaler, or is it a completely different concept?
B
I would say, I think the autoscaler has different goals and the descheduler has different goals. I mean, definitely, from an implementation point of view you could implement anything anywhere, but conceptually those are very different concepts, as I understand it, so I am not sure we would really want to combine those two. But one thing, as I said: the descheduler could definitely be integrated with the autoscaler, because whenever the autoscaler adds a new node, it could help to run the descheduler then.
B
For example, say whenever the autoscaler adds a new node, it could send a signal to the descheduler that a new node has been added, and then the descheduler could trigger a strategy so that the new node gets some pods and is not under-utilized. So definitely an integration could be done, but I'm not sure that the policies or strategies I have implemented in the descheduler are something we would want to implement as part of the autoscaler; I mean, theoretically you could, from an implementation point of view.
B
Yes, but that is just one scenario, I mean, adding a new node, that is just one scenario. Even if you don't add any new node, in your existing cluster you might still have nodes under-utilized, because some pods might have been terminated on those nodes for some reason, and in that case you could run the descheduler without the autoscaler.
B
I agree with you completely, but what I'm saying is that there are other scenarios where, even if you don't use the autoscaler, it could be used. In fact, I have seen some people asking about exactly that. For example, in one of the clusters that one user was maintaining, what happened was that, for some reason, some nodes went down, and all the pods moved from those nodes which were down onto the other nodes; and then, when the nodes came back up, that user was really looking for how to move those pods back.
B
I think what we discussed previously, or at least what I have seen, the idea was to integrate it with the autoscaler, but not to make it part of the autoscaler. I think we had already discussed that: yeah, make it integrate with the autoscaler. But it was always considered good to have it as a separate component, because there are so many use cases that I think the autoscaler is not worried about, and where the descheduler could help, sure.
A
I did have a question, now that you're here, Bobby, about what the status is of priority and preemption, because that affects a whole bunch of things. So I was wondering if you could give a quick TL;DR on where it's at; I've been backlogged on a lot of my SIG Scheduling responsibilities, so I wanted to, yeah.
D
I ran into an issue with respect to, well, I don't know how many people are familiar with this, the predicate metadata precomputation. What we do in that area is that sometimes, for some of the predicates, we precompute some information before running the predicate for all the pods in the system, so that we can run those predicates faster. That part was not really ready for preemption, or sort of, like, for the dry run that the preemption work needs, so I had to spend quite a bit of time to prepare all of that; I have already sent those PRs as well.
F
I guess it's kind of a combination between API validation and the scheduler. The short story is that namespaced resources, if they're included as part of a container resource request, would pass validation and get ignored by the scheduler, and so your pod could be scheduled even though your container contains a request for a non-existent resource, which, when I brought it up to David, he was kind of surprised by the behavior.
F
So the fix is kind of in two pieces, and both of those PRs are linked there. The first one has been merged already, which is to handle namespaced resources outside of the kubernetes.io domain as opaque integer resources that do not allow overcommit, and it also special-cases the legacy OIR prefix inside the kubernetes.io domain. The second one is still being discussed.
A
Going once, twice, three times: okay, well, thanks everybody, and just a heads-up for the other SIGs that there are proposed release notes that people are requesting. So if you have a feature like that, you might want to PSA it as part of the release notes process. For the core, that is; I think the rescheduling work would live outside the core, but, like, priority and preemption is one of those things.