From YouTube: Kubernetes SIG Scheduling Meeting - 2020-01-30
A: We also moved both the queue sort and the bind logic into their own plugins. We are discussing modifying the semantics of post-filter, actually removing post-filter and adding pre-score. All the instances of using that extension point in the default scheduler were basically as a pre-score, so it might make more sense to just call it that. Other than that, I think that's it for now.
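For context on the rename being discussed: a pre-score extension point runs once per pod over the whole candidate node list, so a plugin can precompute shared state that the per-node score calls then reuse. A rough, self-contained sketch of that shape (the types below are simplified stand-ins, not the actual scheduling framework API):

```go
// Illustrative sketch of the pre-score / score split; these types are
// simplified stand-ins, not the real scheduling framework interfaces.
package main

import "fmt"

type node struct {
	name string
	pods int
}

// spreadScorer precomputes the total pod count once in preScore, then reuses
// it in every per-node score call instead of recomputing it per node.
type spreadScorer struct {
	totalPods int
}

func (s *spreadScorer) preScore(nodes []node) {
	s.totalPods = 0
	for _, n := range nodes {
		s.totalPods += n.pods
	}
}

func (s *spreadScorer) score(n node) int {
	if s.totalPods == 0 {
		return 100
	}
	// Fewer pods relative to the cluster total => higher score.
	return 100 - (100*n.pods)/s.totalPods
}

func main() {
	nodes := []node{{"a", 3}, {"b", 1}, {"c", 0}}
	sc := &spreadScorer{}
	sc.preScore(nodes)
	for _, n := range nodes {
		fmt.Printf("%s: %d\n", n.name, sc.score(n))
	}
}
```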
A: Improving scheduler benchmarks: Chung has been working with the community, and I think they got a new benchmark framework. Let's call it a tool, rather than developing another framework, within the integration scheduler_perf directory, that allows creating benchmarks by simply specifying the pod spec and the load spec, I guess, so it's easier now. You basically specify the characteristics of your workload the same way you specify it when you normally, you know, interact with Kubernetes, and from that, it's easier to create new benchmarks.
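As a rough picture of what "specify the pod spec and the load spec" could look like in practice, here is a purely hypothetical benchmark entry; the field names are invented for illustration and are not the actual scheduler_perf configuration schema:

```yaml
# Hypothetical illustration only: these field names are invented and are not
# the actual scheduler_perf configuration.
- name: SchedulingBasic
  podTemplatePath: config/pod-default.yaml   # the "pod spec" of the workload
  load:                                      # the "load spec" of the workload
    existingPods: 500     # pods already scheduled before measurement starts
    podsToSchedule: 1000  # pods scheduled while metrics are collected
    nodes: 500
```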
A: I think one sticking point there was how to collect the metrics. We needed some logic from Prometheus that was not exported. We tried to fork it into Kubernetes, and there is some pushback from the maintainers of that project, so we might change our approach. That logic basically computes the percentiles from a histogram metric, but I guess if there is resistance to copying or forking the code from Prometheus that computes the percentiles from histograms, we can just predefine some, you know, buckets, and report distributions across these buckets.
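If the approach shifts to predefined buckets, the percentile estimation itself is small enough to sketch directly. A minimal, self-contained version (not the Prometheus code in question) that interpolates a quantile from cumulative bucket counts:

```go
// Minimal sketch: estimate a percentile from predefined histogram buckets
// using linear interpolation, similar in spirit to PromQL's histogram_quantile.
package main

import "fmt"

// bucket holds an upper bound and the cumulative count of observations <= upperBound.
type bucket struct {
	upperBound float64
	cumCount   float64
}

// percentileFromBuckets estimates the q-th quantile (0 < q < 1) from
// cumulative buckets sorted by upper bound.
func percentileFromBuckets(q float64, buckets []bucket) float64 {
	if len(buckets) == 0 {
		return 0
	}
	total := buckets[len(buckets)-1].cumCount
	rank := q * total
	lowerBound, lowerCount := 0.0, 0.0
	for _, b := range buckets {
		if b.cumCount >= rank {
			if b.cumCount == lowerCount {
				return b.upperBound
			}
			// Interpolate linearly within this bucket.
			return lowerBound + (b.upperBound-lowerBound)*(rank-lowerCount)/(b.cumCount-lowerCount)
		}
		lowerBound, lowerCount = b.upperBound, b.cumCount
	}
	return buckets[len(buckets)-1].upperBound
}

func main() {
	// Predefined scheduling-latency buckets (seconds), purely illustrative.
	buckets := []bucket{
		{0.001, 120}, {0.01, 480}, {0.1, 950}, {1, 990}, {10, 1000},
	}
	fmt.Printf("p99 ~ %.3fs\n", percentileFromBuckets(0.99, buckets))
}
```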
A: To follow up on this one: actually, graduating the kube-scheduler component config to beta is postponed, because we are moving to v1alpha2 first. We are changing the configuration to allow multiple scheduler configurations in the same component config file, so we're going to v1alpha2 first and then moving to v1beta1 in the next release, 1.19, hopefully.
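For context, the multi-profile change means one component config file can carry several scheduler configurations, each selected via a pod's spec.schedulerName. Roughly, a v1alpha2 file would look like the sketch below; the second profile's name and the disabled-plugin wildcard are just illustrative:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: no-scoring-scheduler   # illustrative second profile
    plugins:
      score:
        disabled:
          - name: '*'                     # turn off all score plugins for this profile
```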
C: A KEP has been created about coscheduling, and that was an interesting idea: to just use queue sort to achieve the goal of coscheduling. Abdullah and I did a first round of review; we will review more of the technical details, but generally the idea looks good to me. It also proposed some general conditions, like keep recording the ready pods and the terminating pods or something, I can't remember the details, so that could work out of tree as well. So I suggested they open the KEP to us and we review it. One other thing, as was also pointed out: because it's a sub-repo, it seems we should not keep the KEPs in the kubernetes/enhancements repo; instead, we should put them into our sub-repo and review the details and any contention there.
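To make the queue-sort idea above concrete: coscheduling can lean on the queue-sort extension point by ordering pods so that members of the same pod group are dequeued together. A minimal sketch, with hypothetical types and a hypothetical pod-group label rather than the actual plugin API:

```go
// Illustrative sketch only: a queue-sort comparator that keeps pods of the
// same pod group adjacent in the scheduling queue, which is the trick the
// coscheduling KEP builds on. Types and the label are hypothetical.
package main

import "fmt"

type queuedPod struct {
	name     string
	priority int32
	podGroup string // value of a hypothetical "pod-group" label; "" if none
	initTime int64  // when the pod (or its group) first entered the queue
}

// less returns true if a should be scheduled before b: sort by priority first,
// then by the initial attempt time of the pod group, so members of one group
// are dequeued back to back instead of being interleaved with other pods.
func less(a, b queuedPod) bool {
	if a.priority != b.priority {
		return a.priority > b.priority
	}
	if a.initTime != b.initTime {
		return a.initTime < b.initTime
	}
	// Same priority and time: fall back to the group name for a stable order.
	return a.podGroup < b.podGroup
}

func main() {
	p1 := queuedPod{name: "job-a-0", priority: 0, podGroup: "job-a", initTime: 100}
	p2 := queuedPod{name: "other", priority: 0, podGroup: "", initTime: 150}
	fmt.Println(less(p1, p2)) // true: job-a members come out before "other"
}
```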
C: The expected goal of this item is that, at the end of this release cycle, we have a repo with plugins contributed by different vendors, and we have documents guiding people on how to write plugins, and real coscheduling implemented and exercised by the large companies. That is our goal for this item.
A: So there are a couple of things that we discussed over the past couple of weeks that might be worth discussing in this SIG meeting. One is head-of-line blocking; the second one is what to do with terminating pods in the context of pod spreading, pod affinity, anti-affinity and whatnot. Which one do you guys believe is more interesting to discuss first? Because we only have 12 minutes, unless you have a third item that you also would like to discuss and believe is more urgent.
D: We kind of had that confusion about which SIG should own this, so yeah, I'm going to leave that and try to find out where we can definitively pin down who needs to own this. After that, it should really just be a bookkeeping thing because, like I said, we do have a lot of the work done to update the feature gates and the tests. So besides that, it's just docs changes.
D: That's just a quick comment about that. I opened this about a week ago, trying to switch the descheduler to use Go modules. The problem with it is that there were a lot of places where it depended on upstream k8s.io/kubernetes, really just for a lot of helper functions, and in some places it was referencing internal APIs instead of, like, using k8s.io/api where it could have.
D: Unfortunately, the best solution that I could get for some of those helper functions was just to copy them into our own utils package. They're pretty simple; they're all, like, parsing fields in a pod and trying to figure out, you know, the namespace and things like that, but it hasn't really gotten much attention. So if there's anyone that can give it some review, or maybe offer up some insight as to other places that these helpers could be coming from that we could actually depend on as a module, that would be appreciated.
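As an illustration of the kind of helper involved (a sketch, not the descheduler's actual code), one of these copied utilities might look like the following, depending only on k8s.io/api and k8s.io/apimachinery, which can be imported as modules:

```go
// Illustrative sketch of the kind of small helper that ends up copied into a
// local utils package instead of importing k8s.io/kubernetes: summing the
// resource requests of a pod. Not the descheduler's actual code.
package utils

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// PodRequestsFor returns the total amount of the given resource requested by
// all containers of the pod (init containers are ignored in this sketch).
func PodRequestsFor(pod *v1.Pod, name v1.ResourceName) resource.Quantity {
	total := resource.Quantity{}
	for i := range pod.Spec.Containers {
		if q, ok := pod.Spec.Containers[i].Resources.Requests[name]; ok {
			total.Add(q)
		}
	}
	return total
}
```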
A: So I guess we can quickly talk about terminating pods also. I guess we kind of reached an agreement that a terminating pod is not a terminated pod from the scheduler's perspective, so we shouldn't really treat a terminating pod as a pod that doesn't exist, because it can potentially still hold some resources.
A: So basically we're taking the more conservative approach here. But for other cases, like pod spreading, some people in the community feel that it might not be so bad to exclude terminating pods from the computation of pod spreading. Basically, if a terminating pod exists in one topology, then we shouldn't really count it against the skew, because eventually that pod is going to go away. From the pod spreading perspective, it's not a resource that it's holding; it's just a computation for availability, so it makes complete sense to treat that pod as if it were already gone.
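The change being debated boils down to whether the per-topology pod counts used for spreading include pods that are already being deleted. A minimal sketch (hypothetical, not the actual topology-spread plugin code) of counting with that exclusion:

```go
// Minimal sketch (not the actual topology-spread plugin code): count pods per
// topology domain for spreading, optionally skipping terminating pods, i.e.
// pods whose DeletionTimestamp is already set.
package spreading

import v1 "k8s.io/api/core/v1"

// countPerDomain returns how many of the given pods fall into each value of
// topologyKey (for example, a zone label), looked up from the labels of the
// node each pod is assigned to.
func countPerDomain(pods []*v1.Pod, nodeLabels map[string]map[string]string, topologyKey string, skipTerminating bool) map[string]int {
	counts := map[string]int{}
	for _, p := range pods {
		if skipTerminating && p.DeletionTimestamp != nil {
			// The pod is on its way out; do not count it against the skew.
			continue
		}
		if labels, ok := nodeLabels[p.Spec.NodeName]; ok {
			counts[labels[topologyKey]]++
		}
	}
	return counts
}
```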
B: My general posture is just to try to avoid special cases where possible. For the particular example that they provided, I feel like spreading with something like a DaemonSet would probably be more appropriate, something like that. On the other hand, I think just special-casing spreading is not great.
A: Yeah, I mean, the idea there is that, again, it's not a resource that the pod is holding, and so we're not concerned that the pod will get, you know, rejected by the kubelet because we made a decision based on, you know, considering a terminating pod as something that will be deleted in the future. So there's a lot less risk there, and actually, if we think about pod spreading as a feature to enhance the availability of a service, then it makes complete sense to say a terminating pod should not count against it.
A: For some reason we had some logic that was garbage collecting that history, and then a pod that reached its max backoff, or should have had its backoff near the max, had its backoff reset back to one even if it was, like, on its tenth try and whatnot. So that bug has been fixed by Aldo. It will be backported to previous releases, but it might not be enough to solve this problem completely, so it's something to think about; it's something that maybe can be, like, a new effort in general.
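For reference, the behavior being fixed is roughly the difference between keeping and losing the per-pod attempt count. A small sketch (hypothetical, not the scheduler's actual backoff implementation) of backoff bookkeeping that preserves the attempt count across requeues:

```go
// Illustrative sketch only: per-pod exponential backoff where the attempt
// count survives requeueing, so a pod on its tenth try keeps a long backoff
// instead of being reset to the initial one.
package main

import (
	"fmt"
	"time"
)

type backoffEntry struct {
	attempts int
}

type podBackoff struct {
	initial time.Duration
	max     time.Duration
	entries map[string]*backoffEntry // keyed by pod UID or namespace/name
}

// next records another failed attempt for the pod and returns how long it
// should wait before the next scheduling attempt, capped at max.
func (b *podBackoff) next(pod string) time.Duration {
	e, ok := b.entries[pod]
	if !ok {
		e = &backoffEntry{}
		b.entries[pod] = e
	}
	e.attempts++
	d := b.initial
	for i := 1; i < e.attempts; i++ {
		d *= 2
		if d >= b.max {
			return b.max
		}
	}
	return d
}

func main() {
	b := &podBackoff{initial: time.Second, max: 10 * time.Second, entries: map[string]*backoffEntry{}}
	for i := 0; i < 5; i++ {
		fmt.Println(b.next("default/pod-a")) // 1s, 2s, 4s, 8s, 10s
	}
}
```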
A: We probably need to brainstorm, like we did; we did some brainstorming on that issue, but I don't know what the community feels about this. Is this something that we should care about a lot? Or is this something only triggered by some extreme cases that typically should not occur unless someone is shooting themselves in the foot by, you know, creating way too many high-priority pods that can't be scheduled? Like, in this scenario, I don't imagine why you would do that.
C: So one idea I can think of is that right now the pod spec priority kind of means two things: one is that the queue sort implementation by default just sorts by the priority; the other thing is that it determines the preemption logic. So basically, what I can think of is that, instead, the spec priority should only determine the preemption logic, so only higher priority pods can preempt lower priority pods, but on the other hand, regarding the queue sort, we should be open to customization.
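A tiny sketch of the proposed split, with hypothetical types: priority keeps gating preemption, while the queue order becomes an independent, replaceable comparator:

```go
// Sketch of the decoupling being proposed (illustrative only): priority keeps
// deciding preemption eligibility, while queue ordering is a separate,
// replaceable comparator that does not have to look at priority at all.
package main

import "fmt"

type pod struct {
	name     string
	priority int32
	queuedAt int64
}

// canPreempt: the preemption side of priority stays as-is.
func canPreempt(preemptor, victim pod) bool {
	return preemptor.priority > victim.priority
}

// fifoLess: one possible custom queue order that ignores priority entirely.
func fifoLess(a, b pod) bool {
	return a.queuedAt < b.queuedAt
}

func main() {
	high := pod{"high", 100, 20}
	low := pod{"low", 0, 10}
	fmt.Println(canPreempt(high, low)) // true: preemption still honors priority
	fmt.Println(fifoLess(low, high))   // true: but the queue can be plain FIFO
}
```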
A: Yeah, I mean, the other thing is: how do we evaluate this? Like, I understand you have this unit test, but how can we create benchmarks representative of such workloads that allow us to evaluate all these different heuristics? Because those are going to be heuristics: they will solve some cases, but they will potentially not address others.
A: You know, there were a couple of comments on GitHub regarding this thing, but are we convinced that this is actually a problem that needs solving, or is it something that can be solved with better, you know, cluster configuration, meaning using resource quotas? Or, you know, again, is someone who is using pod priority like this basically abusing it?