From YouTube: Kubernetes SIG Scheduling Meeting - 2019-12-05
A: So I've got a bunch of items added, and others have added items as well. The first main item, I think, that we have for 1.18 is the framework migration, phase 2: we will try to deprecate all the old code paths in the core scheduler that execute predicates and priorities.
Those right now are not exercised at all, and so we can remove them. Once they are removed, we can change the default configuration of the scheduler from being defined in terms of predicates and priorities to being defined in terms of plugins. And last, but not least, is moving the actual code for the predicates and priorities into the plugin files.
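As a rough illustration of the configuration switch being described (a minimal sketch with simplified types, not the actual kube-scheduler API; the predicate, priority, and plugin names mirror upstream ones but are used here only as examples):

```go
package main

import "fmt"

// Old style: the default algorithm provider is a list of named predicates
// and priorities that the core scheduler iterates over.
var legacyDefault = struct {
	Predicates []string
	Priorities []string
}{
	Predicates: []string{"PodFitsResources", "MatchNodeSelector"},
	Priorities: []string{"LeastRequestedPriority", "SelectorSpreadPriority"},
}

// New style: equivalent behavior expressed as framework plugins wired
// into extension points.
type Plugins struct {
	PreFilter []string
	Filter    []string
	Score     []string
}

var pluginDefault = Plugins{
	PreFilter: []string{"NodeResourcesFit"},
	Filter:    []string{"NodeResourcesFit", "NodeAffinity"},
	Score:     []string{"NodeResourcesLeastAllocated", "DefaultPodTopologySpread"},
}

func main() {
	fmt.Printf("legacy: %+v\nplugins: %+v\n", legacyDefault, pluginDefault)
}
```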
While we do all of that, we need to make sure that the cluster autoscaler can integrate properly with us. Right now they vendor our code, and then they basically get a list of all the predicates that the scheduler configures, and they execute them. Since we are removing that, we need to make sure that they are able to integrate with the framework directly: creating a framework and then executing it.
For example, run the filter plugins, or run pre-filter and then run filter, etcetera. So those are maybe the two main items: basically switching the default configuration to be in terms of plugins, and breaking down the metadata calculation into pre-filters. We need to do those two pretty quickly, so that the cluster autoscaler team has enough buffer to experiment and try to migrate to the framework; without those, they can't really do it.
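A sketch of what that direct integration might look like (hypothetical: the types below are simplified stand-ins for the real framework interfaces in k8s.io/kubernetes/pkg/scheduler, not the actual signatures):

```go
package main

import "fmt"

type Pod struct{ Name string }
type NodeInfo struct{ Node string }
type Status struct{ Err error }

func (s *Status) IsSuccess() bool { return s == nil || s.Err == nil }

// Framework exposes the extension points a consumer such as the cluster
// autoscaler would call, instead of fetching the scheduler's predicate list.
type Framework interface {
	RunPreFilterPlugins(pod *Pod) *Status
	RunFilterPlugins(pod *Pod, node *NodeInfo) *Status
}

// fitsOnAnyNode mimics an autoscaler simulation: run pre-filter once per
// pod, then run filter against each candidate node.
func fitsOnAnyNode(fw Framework, pod *Pod, nodes []*NodeInfo) bool {
	if st := fw.RunPreFilterPlugins(pod); !st.IsSuccess() {
		return false
	}
	for _, n := range nodes {
		if st := fw.RunFilterPlugins(pod, n); st.IsSuccess() {
			fmt.Printf("pod %s fits on node %s\n", pod.Name, n.Node)
			return true
		}
	}
	return false
}

func main() {}
```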
Okay, sounds good. So the second item, which we had discussed during KubeCon (I discussed it with Ravi and Wei), is the multi-config scheduler. We've had multiple requests about how customers can do bin packing, and the idea of running a second scheduler just to do that, for a segment of their workloads, didn't seem right, especially on managed platforms, whether it's Microsoft, Google, or anywhere else. So we had this idea of having the default scheduler itself behave differently depending on the workload type, and the interface that we felt would be appropriate is the existing scheduler name. Basically, the default scheduler will be watching for two or more scheduler names, and we can tie them internally to two different framework profiles: two different profiles that are based on different framework configurations.
A
We
have
right
now,
which
is
mostly
about
like
spreading,
and
then
we
can
create
a
second
profile
that
you
know
alters
the
scoring
functions
to
optimize
for
bin
packing
and
those
can
coexist,
and
they
would
they
would.
We
would
execute
either
of
them,
depending
on
which
scheduler
name
the
part
specified
in
a
cluster
where
you
have
the
ability
to
you
can
always
have
the
ability
to
divide
your
cluster
into
node
goals,
using
node
affinity
or
even
using
tins
and
tal
additions.
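A minimal sketch of that dispatch idea (assuming a hypothetical profiles API; the profile and plugin names here are illustrative only, not a committed design):

```go
package main

// Profile is a stand-in for a per-scheduler-name framework configuration.
type Profile struct {
	SchedulerName string
	ScorePlugins  []string
}

// One scheduler process watches several schedulerName values, each mapped
// to its own framework profile.
var profiles = map[string]Profile{
	"default-scheduler": {
		SchedulerName: "default-scheduler",
		// Spreading-oriented scoring, like today's default provider.
		ScorePlugins: []string{"DefaultPodTopologySpread", "NodeResourcesLeastAllocated"},
	},
	"bin-packing": {
		SchedulerName: "bin-packing",
		// Scoring altered to favor packing pods onto fewer nodes.
		ScorePlugins: []string{"NodeResourcesMostAllocated"},
	},
}

// profileFor picks the profile based on a pod's spec.schedulerName.
func profileFor(schedulerName string) (Profile, bool) {
	p, ok := profiles[schedulerName]
	return p, ok
}

func main() {}
```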
A
That
feels
like
that
makes
mix
a
lot
of
sense
so
that
you
wouldn't
get
too
much
overlap
in
terms
of
the
two.
The
two
profiles
behaving
you
know
opposite
to
each
other,
but
at
the
same
time
you
have
like
one
managed
service
that
can
support
different
types
of
workloads
on
different
segments
of
the
cluster.
So
I
haven't
the
issue
there
to
start
a
discussion
thanks
for
everyone
who
commented
on
it,
you
know
I
think
Aldo
will
will
take
over
on
that
one.
My intuition is that we will have a single profile by default, so that we maintain the current behavior. Then maybe in 1.19 we can introduce a second one, like the bin-packing one, so it could be packaged by default with the default scheduler, because best fit and worst fit (like I heard from Brian Grant) are, you know, things users expect.
C: I just want to add one comment to clarify that. Yeah, this is a super good feature, and users are asking for it, not only users who are running multiple schedulers and facing the racing issue. Also, because right now the minimum unit is a plugin, we can flexibly combine them: no matter if the plugin is from upstream or downstream, we can combine them together in whatever format you want into a profile, right? So this gives users the most flexibility.
A: Great. So the third feature is a KEP carried over from the last release: adding default topology spreading. This is Aldo's KEP; hopefully this is going to get done in 1.18.
The next feature is improving the scheduler benchmarks. [inaudible] has been doing a tremendous job trying to investigate that area in general, and basically trying to converge with what we have right now with the scalability SIG. So he will try to import most of the principles that they have there around how the workloads are created, how the metrics are collected, and how they are reported. So he, Jan, and two other contributors will be working on that.
So we need to be as efficient as we can, and we need to monitor that all the time, watch out for regressions, etc., and also try to craft new workloads that better mimic the workloads we see in the wild, so that we don't only have those synthetic workloads that don't really give us any signal about how the scheduler is doing in practice.
Sounds good. So the other feature is promoting component config to beta. I think, if we are going with the second item, the multi-config scheduler, that's going to require component config changes, so we're either going to do v1alpha2 first, or we do v1beta1 directly. I'm fine with either. I would love to have component config graduated as soon as possible, because we want to deprecate Policy, but we'll see what Aldo will come up with in his KEP and what the trade-offs are there.
And the other one is having a new repo for plugins. That's Wei's suggestion, in order to consolidate efforts around creating out-of-tree plugins in one place, so that we are able to share code and better exercise the framework APIs. I don't know if Wei has any more comments or thoughts on this.
A: So there is this idea that I have been thinking about, which is creating a PodInfo type in the scheduler, making it a first-class type similar to NodeInfo. Basically, the idea is that any precomputed state we need for the pod is a static computation, not something that changes over time. When the cache sees the pod for the first time, that PodInfo annotated state gets calculated, and that's it.
Examples of this include things like creating, you know, a Selector from a LabelSelector, which is extremely expensive to do while evaluating affinities and pod topology spreading. Other things, like, for example, calculating pod-level resources, so that we don't really need to sum everything up every time we do it, and so on. This type of static state, I think it's useful to compute it once and have it as part of the cache, so it doesn't get recomputed.
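A hedged sketch of that PodInfo idea (all types are simplified stand-ins for the real API objects; the field names are illustrative, not the eventual scheduler API):

```go
package main

// Selector stands in for labels.Selector; parsing real selectors is the
// expensive part this design amortizes.
type Selector map[string]string

type Container struct{ CPUMilli, MemBytes int64 }

type Pod struct {
	Labels     map[string]string
	Affinity   []map[string]string // raw label selectors from affinity terms
	Containers []Container
}

// PodInfo wraps a Pod with derived state computed once, when the cache
// first sees the pod, instead of on every scheduling cycle.
type PodInfo struct {
	Pod *Pod
	// Parsed affinity selectors: rebuilding these per cycle is what makes
	// affinity and topology-spread evaluation expensive today.
	AffinitySelectors []Selector
	// Summed resource requests, so plugins don't re-add containers.
	TotalCPUMilli, TotalMemBytes int64
}

func NewPodInfo(p *Pod) *PodInfo {
	pi := &PodInfo{Pod: p}
	for _, raw := range p.Affinity {
		pi.AffinitySelectors = append(pi.AffinitySelectors, Selector(raw))
	}
	for _, c := range p.Containers {
		pi.TotalCPUMilli += c.CPUMilli
		pi.TotalMemBytes += c.MemBytes
	}
	return pi
}

func main() {}
```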
Today we need to look at the existing pods, so we have to create, for example, those selectors again and again, and it's hard to cache them right now. We could come up with our own caching pattern, like least-recently-used, that we advocate plugins to maintain, but I'm not really settled on which one is better. One option is a plain caching pattern where entries expire once per run.
A
So
if
you
don't
find
a
part
in
for
there
you're
just
going
to
create
it.
So
there's
no
problem,
but
it's
something
to
maintain.
There's
a
thread
tail
that
I
wrote
in
there.
That
needs
to
garbage,
collect,
etc,
make
sure
that
it's
well
maintained.
On the other hand, if we create that PodInfo type and we make it part of the scheduler cache, it's going to add more memory overhead, because it's going to be there whether the pod is being scheduled or not; it will always be there. So at scale, since pods are typically two orders of magnitude larger in number than nodes, that might be a concern. I'd love to hear your thoughts on this; I'm going to create an issue this week to start a discussion around this.
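A sketch of the bounded-cache alternative mentioned above (a least-recently-used map, so memory stays proportional to the working set of pods rather than to the total pod count; a minimal illustration using only the standard library, not a proposed implementation):

```go
package main

import "container/list"

// PodInfo stands in for the precomputed pod state sketched earlier.
type PodInfo struct{}

type entry struct {
	key string
	val *PodInfo
}

type lruCache struct {
	cap   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // pod UID -> element in order
}

func newLRU(capacity int) *lruCache {
	return &lruCache{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

// Get returns the cached PodInfo, computing and storing it on a miss;
// the least recently used entry is evicted when the cache is full.
func (c *lruCache) Get(uid string, compute func() *PodInfo) *PodInfo {
	if el, ok := c.items[uid]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry).val
	}
	if c.order.Len() >= c.cap {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	v := compute()
	c.items[uid] = c.order.PushFront(&entry{key: uid, val: v})
	return v
}

func main() {
	c := newLRU(2)
	_ = c.Get("pod-a", func() *PodInfo { return &PodInfo{} })
}
```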
D: I guess what we need to answer is: is it very unlikely that something we compute is not used? Something like this: the resources that a pod is using, is that something we need once we see the pod?
A: Yeah. So all of this is in the context of: we focused a lot on the happy-path scenario, the vanilla workloads that don't use any of our features, trying to optimize that, and we got to the point where we're really well optimized there. But now we need to focus more on optimizing the other features, so that we encourage using them at scale. That's where I'm basically coming from.
D: That makes sense, thanks. The good thing about the cache, though: if we have this PodInfo (like we already have NodeInfo), every plugin needs to know about it. I mean, if a plugin needs something new from the pod, someone needs to change that part of the code, whereas if we have a cache that is an agnostic cache, then any plugin can use it.
A: My concern here is that least-recently-used might not be the best criterion to decide which PodInfos should be evicted from the cache, because when you evaluate a pod to be scheduled on the cluster, you are moving across nodes, right? So, within a window: if in the previous scheduling cycle you looked at this group of nodes, and therefore at those specific pods, you're not going to look at those ones again until much later.