From YouTube: Kubernetes SIG Scheduling Weekly Meeting for 20210617
A
All right, yeah. Hi everyone, so this meeting is recorded and will be uploaded to YouTube. We've got a couple of agenda items. The first one is removing the Policy API in 1.23. Mike, do you want to talk about that?
B
Yeah, so I sent out an email to sig-scheduling and kubernetes-dev earlier this week. We've been planning to remove the Policy API since, I think, 1.19, replacing it with the plugin component config.
B
I think we've made a lot of concessions and work to keep it supported as we've transitioned over to component config, and we've done a lot of good work around making sure that everyone has ample time to update to that. So, if there aren't any major objections, just letting everyone know that this Policy API (which is what you're using if you configure predicates and priorities) will be removed in the next...
B
I think that's two releases from now, so just as a heads up. For any feedback, we linked to the issue and we can discuss it there, or take any questions. So consider this the official warning that it will be removed soon. That's all I have to...
A
Say. Thank you. And so, I think you already sent an email to both kubernetes-dev and this sig's group, and we'll be updating, not the document, but the logging on the flag, to indicate that this is deprecated and will be removed in 1.23. So we'll update that log in 1.22, and we will also be updating the website to indicate that this will be removed in 1.23.
A
So hopefully that's enough of an announcement to get people off of it. Hopefully everybody has already moved to component config because, quite frankly, it's way more powerful, way better than the older one. So there's a lot of incentive to actually do that migration.
A
That's it on this item. All right, so the next section, the next agenda item, is...
A
The extended resources in heterogeneous environments. Aldo, do you want to talk about this one?
C
Yes. So we had Dave contributing, trying to add configuration to the balanced resources plugin, and when doing the review we discovered an issue with how we do the scoring for this kind of resources, when you have a custom resource, for example a GPU, that only applies to certain nodes.

C
So initially we were discussing whether or not it was worth changing this, but I think pretty much everyone agrees that it is necessary. So the next question is which algorithm we should go with. If you have any thoughts, please put them down in the issue, or raise your questions there.
A
So, to clarify: this is currently, this is related to balanced allocation only, right?
A
Okay. And the default right now, we actually don't take extended resources into account at all, unless somebody explicitly configures the plugin to take one of them into account.
A
Yes, and if they do... Just to reiterate what he said and make sure that I'm summarizing it correctly.

A
And I just want to also add some caveats here. The best practice is actually not to, like, not to include...
A
Like, basically, you would add taints on nodes that have GPUs, and so the other nodes, the nodes that have the GPU, shouldn't actually be considered at all in the scoring phase. But are you saying that those nodes that don't have the GPU, will all of them be scored zero, or will they just be scored lower, but with variance, right?
A
Yes. Any questions from others? If you have any, like, if you want to get a better understanding of the issue.
D
I have a question on this feature. So is this the issue that, you know, for a pod which doesn't require GPU, but, you know, because of the GPU plugin, in the scoring phase the node without GPU support will not get selected?
D
There's less chance, okay. So I'm thinking, you know, for any workload, right, either as part of a scale set or whatever: if it doesn't request a resource, then scoring should not consider that resource, right?
C
Well, if it's requesting the resource, the node that doesn't have the resource wouldn't pass the filter.
C
So if we assume that the best practice is to, you know, have taints and tolerations, then I think the best solution is to simply ignore the resource completely, right, the resource that this node doesn't have, so that we have the same behavior as if we were only looking at CPU and memory, right?
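The fix being discussed (ignore any resource the node doesn't advertise) can be sketched roughly like this. This is a simplified Python model of balanced-allocation-style scoring, not the actual plugin code; the function name, resource names, and score scale are illustrative only:

```python
import math

MAX_NODE_SCORE = 100

def balanced_allocation_score(requested, allocatable):
    """Score a node by how evenly a pod's requests use its resources.

    `requested` and `allocatable` map resource names to quantities.
    Resources the node does not advertise (e.g. a GPU extended resource
    on a CPU-only node) are skipped entirely, so such a node scores the
    same as if only cpu/memory were configured for the plugin.
    """
    fractions = []
    for name, capacity in allocatable.items():
        if capacity <= 0:
            continue  # node doesn't actually have this resource: ignore it
        fractions.append(min(requested.get(name, 0) / capacity, 1.0))
    if not fractions:
        return 0
    mean = sum(fractions) / len(fractions)
    # Lower spread across resource usage fractions = more balanced node
    # = higher score.
    std = math.sqrt(sum((f - mean) ** 2 for f in fractions) / len(fractions))
    return round((1 - std) * MAX_NODE_SCORE)

# A CPU-only node scores identically whether or not a zero-capacity GPU
# entry appears in its allocatable map, because the missing resource is
# ignored rather than counted as "0% used":
with_gpu_key = balanced_allocation_score(
    {"cpu": 2, "memory": 4},
    {"cpu": 4, "memory": 8, "example.com/gpu": 0})
without_gpu_key = balanced_allocation_score(
    {"cpu": 2, "memory": 4},
    {"cpu": 4, "memory": 8})
# with_gpu_key == without_gpu_key == 100
```

Without the skip, the GPU-less node's score would be dragged down by a phantom 0.0 usage fraction, which is exactly the skew under discussion.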
D
But this only applies to the workloads that don't require that resource, right? For the workload that requires the resource, whether it's GPU or some other special resource, we still need to consider it.
D
Okay, so it will be filtered out. But then does the scoring consider, like, you know, for example, if a pod needs GPU resource, does the scoring consider what GPU is available on that node? You know, for example, for a node that has more GPU, we're going to give it a higher score?
D
Yeah, that's, yeah, that's an easier case. But I'm just thinking, if they have very different, you know, percentages, one is higher, the other is lower, and then the other node is the opposite, so which one... I forgot the algorithm; which one does the current algorithm choose?
D
I see, yeah, that makes sense. I think keeping a balance among all the different resource types, that will be good, right? We don't want to starve, like, you know, one node still has a lot of memory, you know, left over, but no CPU; that's not good for the utilization rate. Yeah, thank you.
A
Okay, thank you so much, Aldo, for bringing this up and explaining the nuances of this issue. So yeah, if anybody has any questions about it or suggestions on how to improve the situation, or which direction we should go, please comment on the issue. I think we need to fix this in this cycle; I think this is important.
A
Okay, so those are the only two agenda items we have. Does anybody have any questions or topics they want to discuss? We have 10 more minutes for this meeting.
D
I have a question. So I'm new to this group, although I have done quite some scheduling work. So if, like, you know, if I have an idea for, I mean, for making the scheduler better, or, like, maybe to propose a scheduler extension, what is the process?
A
So I can... I think we need to update our community doc, probably, but I can summarize here quickly. So we start with opening an issue to discuss the high-level idea, so that we try to evaluate whether what you need exists already or not, like, whether it can be solved by existing features or not. And then we get to an understanding: well, this is a missing feature.
A
Let's move to a proposal phase. Maybe we brainstorm some ideas initially on the issue, but the next step is going to be writing up a KEP. Once we agree that, okay, this is a missing feature and this feature needs to be in core Kubernetes, you write a KEP, which is a Kubernetes Enhancement Proposal.
A
We discuss on the KEP the exact semantics of the feature, the options, how it's graduated, you know, through alpha, beta, GA. We pull in, you know, other sigs if it is related to them, for example, sig-node or sig-storage, if it requires some attention from them.
A
Once this is merged, we move into the implementation phase. If it is a new extension, there are two options. If we believe that this extension needs to be in core Kubernetes right away, then, again, it's open for discussion. If we believe that, well, it's not clear that this is something useful for everybody...
A
...there are some doubts about hosting it in core Kubernetes, then the other option is to have it in the kubernetes-sigs repo. We have a plugin repo for scheduling; our co-chair maintains that repo. We can add it there as well. So that's the general process, but I guess at this point the first step is to open an issue and propose what you would like to see in the scheduler.
D
Yes, yes, thank you. Yeah, so you just mentioned two options. One is to be an extension of the Kubernetes scheduler, right? The other is, you mentioned, the plugin. Does that mean, that plugin, are you referring to that as a Kubernetes extension? Is that what you mean?
A
Most scheduler features are implemented as plugins. So we have in-tree plugins: we either host it directly in-tree, like, it gets released with the Kubernetes scheduler, with the Kubernetes release, or, if we feel that this plugin doesn't belong in core, it's too niche or too specific to a particular use case, so we don't feel it should be supported in core Kubernetes...
A
If the feature is not really like a plugin, and basically you want to change, say, the default preemption behavior in a specific way, or change the way that we're queuing pods, or whatnot, like, something internal to the scheduler that is not configurable, or cannot be implemented as a plugin, then, again, it has to be in core, but we need to discuss it over the issue, et cetera. Same thing, for example, if you want to add a metric or whatnot.
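The plugin model being described works roughly like this: a plugin implements one or more framework extension points (Filter, Score, and so on), and the scheduler calls every enabled plugin for every candidate node. The following is a toy Python sketch of the Score extension point only; the real framework is Go, and the interface and the FewestPods plugin here are purely hypothetical illustrations:

```python
MAX_NODE_SCORE = 100

class ScorePlugin:
    """Toy model of the scheduling framework's Score extension point."""
    def name(self) -> str:
        raise NotImplementedError
    def score(self, pod: str, node_name: str) -> int:
        raise NotImplementedError

class FewestPodsPlugin(ScorePlugin):
    """Hypothetical out-of-tree plugin: prefer nodes running fewer pods."""
    def __init__(self, pods_per_node):
        self.pods_per_node = pods_per_node
    def name(self):
        return "FewestPods"
    def score(self, pod, node_name):
        # More pods already on the node = lower score, floored at 0.
        return max(MAX_NODE_SCORE - self.pods_per_node.get(node_name, 0), 0)

def run_score_phase(plugins, pod, node_names):
    """Sum every enabled Score plugin over every node that survived
    filtering, then pick the highest-scoring node (ties arbitrary)."""
    totals = {n: sum(p.score(pod, n) for p in plugins) for n in node_names}
    return max(totals, key=totals.get)

plugin = FewestPodsPlugin({"node-a": 30, "node-b": 5})
best = run_score_phase([plugin], "my-pod", ["node-a", "node-b"])  # "node-b"
```

The point of the design is that a feature like FewestPods can live out-of-tree and still compose with the in-tree plugins at the same extension points, whereas a change to queuing or preemption internals cannot be expressed this way and must go into core.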
D
I see, okay. So there's a specific repository for the scheduler extensions, or scheduler plugins, right?
A
Not for the internal ones. The internal ones are available inside the scheduler package: if you go to the kubernetes package, scheduler, you're going to find a directory called framework, and under that there is a plugins package that lists all the in-tree plugins that come with the default scheduler.
A
So we give the community another option, which is more flexible: there is, you know, less of a barrier to entry. You host it there, maybe you prove that it is useful enough and there is traction, and then you come back to the community: look, this plugin is being used frequently, we've agreed on its semantics, let's try to merge it back into core.
D
I see. Because I don't know that repository; I know the scheduler framework, the Kubernetes scheduler code, but that one, the one with the non-core plugins, what is the link to that? Do you mind attaching that link to this...
A
To the meeting agenda? And ping us on the Slack channel; ask the same question on the sig-scheduling Slack channel. Mike, Aldo, and others who are leading the sig, and myself, we can answer all these questions there as well. And there are contributors who have, you know, contributed plugins to that repo who can answer questions there as well. Okay.
D
I just saw that Aldo posted that plugin; is that the external, the non-core plugin, right? Is that the one? Sorry, what was the question? I mean, in the chat window I see that, you know, someone, I think Aldo, I hope I pronounced the name correctly, yeah, posted the link. Is that the, yes, the external plugin, right?
E
I have a question about the Trimaran load-aware placement plugins, the target load packing and the other one. Is anybody here one of the contributors to that?
B
The main contributors for that, I think, were Wei and some people from our team at Red Hat, Asser and Chen, who I don't think are on the call right now, but I work kind of with them. So if you have, like, a general question, I might be able to help.
E
Oh yeah, I just wanted to ask about, like, what is the current status: like, where is it deployed in production today, and what is the scale that they have reached? We are trying to do something similar at Uber, and I just wanted to get in touch with them and just bounce off some ideas.
B
Yeah, so you could ask around in the Slack, but I know, as far as production, like Abdullah was mentioning, that's obviously in the external repo, so it's not shipped with Kubernetes. But we are doing some work with OpenShift to try to support those, and we might end up getting some, you know, usage information about seeing those deployed at scale in, like, the next couple of releases. It's still kind of early, but that's a good question to ask.
B
If you ping Wei, he leads the scheduler-plugins repo, pretty much, so he'd have a lot more detail about those than me.
E
And, and I should do this on Slack? Okay, I can, yeah.
B
Yeah,
the
slack
the
sig
scheduling,
slack
channel
is
a
great
way
to
remember.
Okay,
thank.
E
A
Great, thank you, everybody, for a very good discussion today. Any final thoughts? We have a couple minutes.