From YouTube: Kubernetes SIG Scheduling Meeting - 2019-4-11
A
Folks, again, if you didn't attend last week's meeting: I said that we are going to have a contributor summit session at KubeCon, so we're going to meet there, and it's an opportunity for you folks to meet new contributors, or, if you are a new contributor, it's an opportunity for you to go and see some of the veteran contributors and learn from them. We're also going to have a couple of demos. I know that Ravi is going to have a demo on the descheduler, and Klaus may have a demo of kube-batch or some other project he is working on. If you have other projects that you want to demo at the contributor summit, please let me know and we can arrange that. The contributor summit is going to be like a half day, and we are still waiting for the final decisions on that. I don't know how many other SIGs are going to have contributor summit sessions.
B
So one thing that some folks are working on is usage-based scheduling: how can we use custom metrics while making scheduling decisions? They want to use an extender; they have a working demo, but they want to present it either at the contributor summit or in one of our SIG meetings, and they also have an integration with the descheduler. So that's something that I'm interested in, so just to make you all aware.
C
So, as Ravi just mentioned, we have an implementation which kind of started with usage-based scheduling. We are trying to use metrics for influencing scheduling decisions, and we have an integration with the descheduler as well, and we were kind of interested to showcase it either in one of the SIG meetings here or even at KubeCon.
A
So we have at least three demos as far as I know so far, or if you and Ravi want to have like a joint demo. Anyway, I think we have enough time for all these demos, and even more if other people are interested. So, yeah, sure, absolutely, I would encourage you to showcase your projects at the contributor summit. That's a great venue for attracting new contributors, and I think you should definitely do that.
B
Is the extender still pretty much going to be the same, especially using HTTP, or do we have any plans of... I remember you talking about, and I think this was some time ago, but we had thought of using plugins that could be compiled into the scheduler binary, and I think we had moved away from it. Is that plan coming back, or how is it?
A
The
framework,
basically
their
current
framework,
is
exactly
like
that,
so
all
these
plugins
are
going
to
be
compiled
into
the
binary.
The
way
that
it
works
is,
it
is
different
than
the
currently
scheduler.
So
what
we
have
done
is
that
in
the
code
we
are
going
to
have
better
isolation
of
code,
so
your
new
plugins
do
not
need
to
be
no
marriage
into
existing
files
if
they
can
be
in
a
separate
directory
that
you
copied
to
the
code
base
and
compiled.
This
removes
some
of
the
most
of.
A
Of
rebasing,
your
code
with
them
with
the
head
of
the
repo
and
that's
one
one
option:
Jonathan
also
had
a
pretty
good
proposal,
basically
updated
our
framework
proposal
and
and
added
a
section
on
rendering
him
the
whole
scheduler
code
and
adding
your
own
plugins.
We
see
you,
you
create
a
repo
for
yourself
for
your
plug-in
and
you
been
during
the
whole
scheduler
code
and
you
can
just
invoke
the
scheduler
and
pass
a
registry
to
it.
The
registry
includes
your.
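The "vendor the scheduler and pass it a registry" idea can be sketched roughly as follows. This is a hedged, self-contained illustration: the type and function names (`Plugin`, `PluginFactory`, `Registry`, `runScheduler`) are invented stand-ins, not the actual scheduler framework API.

```go
package main

import "fmt"

// Plugin is a minimal stand-in for a framework plugin interface.
type Plugin interface {
	Name() string
}

// PluginFactory constructs a plugin (configuration is omitted here).
type PluginFactory func() Plugin

// Registry maps plugin names to their factories. In the proposal described
// above, the out-of-tree author builds one of these and hands it to the
// vendored scheduler's entry point.
type Registry map[string]PluginFactory

type myPlugin struct{}

func (p *myPlugin) Name() string { return "my-plugin" }

// runScheduler is a stand-in for invoking the vendored scheduler with the
// caller's registry merged in; here it just instantiates every plugin.
func runScheduler(r Registry) []string {
	var names []string
	for _, factory := range r {
		names = append(names, factory().Name())
	}
	return names
}

func main() {
	registry := Registry{
		"my-plugin": func() Plugin { return &myPlugin{} },
	}
	fmt.Println(runScheduler(registry)) // [my-plugin]
}
```

The point of the design is that the out-of-tree repo only contains the plugin and this small `main`; the scheduler itself is pulled in as a dependency rather than forked.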
A
So another update: the even pod spreading proposal got a couple of reviews from API folks, particularly, I think, Clayton, and also Daniel Smith has commented on the PR. They have great comments, and I'm glad that this is getting closer to being finalized. Thank you very much for your efforts; this is very valuable, and I think some other reviewers also mentioned that this is going to be a hugely better feature for many folks who wanted to use anti-affinity. It is a complex feature; we definitely need to put a lot of thinking behind the design, otherwise we may run into issues in the future and need to add another feature that is very similar but slightly different. So it's better to think about it carefully right now instead of moving forward quickly.

Alright, and also I would like to thank Wei again for one important improvement to inter-pod affinity. Wei was able to identify an area in the code where, mostly due to lock contention, I believe, a lot of time was being taken. He managed to remove the lock contention with atomic variables, and he also managed to simplify the code a little bit and changed some of the types from floating-point operations to integer operations. I don't know how much that has improved performance by itself, but we definitely know that floating-point operations are much slower than integer operations, so probably that had some effect as well. But since these two changes were in a single PR, we only know that both of them together improve performance by a factor of two. So inter-pod affinity is now 2x faster, and this is on top of a lot of other optimizations that we had done in the past. This is great news, because I believe we're now in the ballpark of 4 to 5x slower than other predicates. Remember that inter-pod affinity and anti-affinity used to be a thousand x slower than other predicates, and now we're talking about four or five x slower.
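The lock-contention half of that optimization can be illustrated with a minimal, self-contained sketch. This is not the scheduler's actual code; it only shows the pattern of replacing a mutex-guarded shared counter with an atomic integer so that concurrent workers never block on a lock.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countMatchesAtomic fans work out across goroutines and accumulates a
// shared total with atomic.AddInt64 instead of taking a mutex around each
// update, which is the kind of change described above.
func countMatchesAtomic(workers int, perWorker int64) int64 {
	var total int64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := int64(0); j < perWorker; j++ {
				// No mutex: the hardware atomic instruction keeps the
				// counter consistent without lock contention.
				atomic.AddInt64(&total, 1)
			}
		}()
	}
	wg.Wait()
	return total
}

func main() {
	fmt.Println(countMatchesAtomic(8, 1000)) // 8000
}
```

The related integer-versus-float change is orthogonal: keeping the accumulated value as an `int64` also avoids floating-point arithmetic on the hot path.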
E
Just to give some sense of how I spotted this: actually, at the beginning I thought the improvement wouldn't help the hard pod affinity part, because I didn't think it helped the predicate. But after looking into the code, actually, for the soft pod affinity it uses sort of the symmetric hard pod affinity weight, so that if there is a pod with hard pod affinity, in the priority phase it will be calculated every time.
A
To me too. So, basically, for other folks who don't have the full context: Wei's optimization is applied to the inter-pod affinity priority function, not the predicate. But what we've observed is that even the predicate function calls this priority functionality and calculates these weights, basically soft pod affinity, for existing pods of the cluster when it runs the predicate.

That's why the predicate has also improved as a result of this optimization, although I don't know why we went down this path in particular. I kind of understand some of the reasons, but I feel like if we hadn't done this part in the predicate, the feature would still be more or less as useful. I would say this is something that could have been done differently. Anyhow.
E
No, no, exactly. Because in the priority phase, for the inter-pod affinity priority phase, it will look at every existing pod which has hard pod affinity and calculate what to add for each match, and, you know, there is a field called symmetric hard pod affinity weight, so instead of plus 1 or minus 1, it just adds that value for every match. So that is involved every time in the inter-pod affinity priority phase. This is why the performance, yeah.
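A toy paraphrase of that scoring behavior (the function and parameter names below are invented for illustration; only the idea of a configurable symmetric weight comes from the discussion above):

```go
package main

import "fmt"

// scoreMatches shows the difference described above: instead of a plain
// +1/-1 vote per matching existing pod, the priority phase adds the
// configured symmetric hard pod affinity weight for every match, which is
// why hard affinity terms end up in the scoring path at all.
func scoreMatches(matches, symmetricHardPodAffinityWeight int) int {
	score := 0
	for i := 0; i < matches; i++ {
		score += symmetricHardPodAffinityWeight
	}
	return score
}

func main() {
	// Three matching pods with a weight of 10 contribute 30 to the score.
	fmt.Println(scoreMatches(3, 10)) // 30
}
```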
A
He himself is not greatly in favor of inter-pod affinity and anti-affinity, and he is the person who will probably deprecate it anyway, so yeah. So, you know, there, also, I would like to mention one other thing. There has been a lot of confusion about the future of anti-affinity, in particular now that we have this even pod spreading. A lot of people ask us: once we introduce even pod spreading, are we still going to have anti-affinity? Isn't it going to be more or less the same feature?
A
So
first
of
all,
there
are
a
major
difference
between
even
part
is
spreading
and
anti-authority
in
particular
hard
part,
anti
affinity
and
forces.
Only
one
part,
Curt
apology,
domain,
even
part
of
spreading,
allows
us
to
have
much
more
than
one
part.
Basically,
as
long
as
the
paws
are
spread
evenly
among
failure
domains,
we
can
have
as.
A
As
we
want
so
if,
for
example,
you
have
ten
parts
and
two
zones-
and
you
want
to
spread
among
the
two
zones,
you
can
have
five
zones,
a
five
pass
per
zone
in
an
toy.
Definitely
few
spreads
specified
hard
and
hard
fought,
anti
affinity
third
zone,
then
you
would
end
up
having
only
one
part
per
zone
and
eight
of
those
part
would
not
be
scheduled
at
all.
So
this
is
first
of
all,
this
is
one
made
mean
difference
between
the
two
going
forward.
A
We think that we're basically going to support only the node, or hostname, as a topology key for anti-affinity in the future. We may not necessarily drop support for other failure domains from anti-affinity right away; this would go through some deprecation process, and I am still not so sure if we are going to deprecate it. That was my initial thinking, but now that I think about it: if we find a better mechanism, let's say having a slow-path and fast-path implementation in the code.
A
If
we
can
find
a
good
implementation
for
a
slow
path
and
a
fast
path,
we
may
actually
go
with
this
approach.
Basically
make
hostname
a
fast
path
until
our
users
use
host
name
only
for
anti
affinity.
If
you
want
to
use
our
feature-
and
if
you
use
anything
other
than
hostname,
then
you
are
gonna
end
up
having
a
much
slower
scheduler,
but
you
still
can
do
it.
A
If
you,
for
example,
one
is
small
clustering,
don't
care
as
much
about
performance
and
for
some
reason
you
strongly
feel
you
need
anti
affinity
in
a
failure
domain
other
than
just
a
node
name.
You
can
still
do
it
but
be
aware
that
this
is
gonna
be
slow,
so
that
passed
one
other
approach
that
we
can
take,
but
this
is
still
very
early
stage
at
this
point.
A
I
would
prefer
to
first
finalize
and
build
even
further
spreading
and
then
go
and
do
a
more
in-depth
analysis
of
the
code
and
see
whether
we
can
have
the
whether
we
can
have
a
slow
path,
fast
path
for
the
anti
affinity
and
then
based
on
the
information
we
obtain
there.
We
can
make
an
informed
decision
about
the
future
of
that
feature.
B
Before I get started with the questions, I have some updates to give. The first thing is: in 1.15 we want to have quota scope selectors, like the resource quota scope selectors, moved to beta, because we kind of wanted to allow some critical pods to be used in other namespaces too; as of now, the critical pods are restricted to the kube-system namespace.
B
I'll do that; that's one update. And there is another plugin, the PodNodeSelector admission plugin; that's something I've discussed with Bobby. It's something that I wanted to be enabled by default, because there are some functionalities in other components, sorry, other distributions, that are pretty much similar to what we are doing in this plugin. But one of the concerns that Bobby has raised is that it uses an annotation, and we want to move away from the annotation-based model. So I haven't raised this with sig-architecture yet, actually.
A
Yeah, the annotation part is something you're discouraged from doing. I remember that I had a talk with Brian Grant, and Brian was against using annotations as a way to convey configuration. So, yeah, we should definitely talk with sig-architecture or the API folks to see what the recommended approach is there. But otherwise the whole idea is fine; it's just the implementation which is a little problematic, yeah.
B
Yeah, and looking more into it: the plugin supports both the annotation as well as a custom config file, through which we can tell it the default node selector at both the cluster level as well as the namespace level. So that's another option: we can remove the annotation-based path and then just have this particular config file supported. That's...
A
Yeah, honestly, I don't have a whole lot of preference or strong technical inclination towards any of these, because it's a little further away from my field of expertise. You know, the API folks have a lot of other criteria to accept or reject a particular idea that they know better; I prefer to ask those folks, and they may have some other guidance.
A
B
F
B
A
Actually, that was something that we were thinking about as well, you know: to probably move the scheduler to its own repo, so that you don't need to check in all the things from Kubernetes. We might actually do it at some point. At the same time, given that the scheduler reads a ton of stuff from the API server, the scheduler has a pretty long chain of dependencies; there are lots of Kubernetes APIs that the scheduler depends on.
A
So,
even
though
we
may
move
it
to
our
own
repo
in
terms
of
pulling
in
code,
you
will
still
end
up
pulling
in
a
lot
of
code
if
basically
size
of
pointer
or
things
of
that
sort.
Yes
has
the
question:
it's
not
gonna
be
a
whole
lot
better
than
what
it
is
today,
but
in
terms
of
like
code,
cleanliness
and
the
the
code
that
developer
needs
to
deal
with.
In
that
case,
it
will
be
better
I
think
if
it
has,
if
it
has
its
own
repo
and.
F
If
I
might
just
add
some
thoughts
to
that,
I
mean
you
know,
one
of
the
nice
things
about
the
scheduler
extender
is,
is
that
you
literally
just
you
know,
hold
a
standard
container
image
for
the
scheduler
and
then
combine
that
with
your
own
code.
But
one
of
the
goals
of
the
scheduling
framework
is
really
to
you
know
give
these
plugins
access
to
say
the
scheduler
cache
and
we
want
to
avoid.
You
know.
Network
calls
and
marshalling
so
compiling
in
really
becomes
the
only
viable
option.
A
So another thing that I was thinking about yesterday was the use of the new Go modules feature of Go. I don't have any in-depth thoughts about that, but two possible obvious options come to my head: one, to have these plugins become Go modules that are then consumed by the scheduler; and two, the other option, to make the whole scheduler a Go module that some of these plugins import. So these are the two obvious options.

Maybe there are other, better options, but the use of Go modules for building the framework is also an option. And I think these are all, in my opinion, still orthogonal to the actual idea and the implementation that we are working on. I believe and hope that if we decide to go with one of these other approaches, we can relatively easily convert the existing implementation that we are working on into this new Go module scheme.
B
Yeah,
that
makes
sense,
but
at
high
level
I
would
personally
prefer
to
have
single
scheduler
core
and
then
in
the
vendor,
in
the
when,
during
directly
we
can
have,
or
in
the
vendor
director,
we
can
have
multiple
plugins
coming
in.
That's
the
approach
that
I
would
prefer.
That's
one
thing.
The
second
thing
is
the
extender.
A
Extenders will still be supported. So, if you don't want to compile, you can still use extenders, but, of course, extenders are not going to be as versatile, and the extension points for extenders are rather limited, right? So we have, I believe, right now three or maybe four extension points for extenders, versus, for our framework, at least in the proposal, many more extension points. So the possibility of building a more flexible plugin, a plugin that can make changes to various parts of scheduling, is a lot higher with the framework.
A
You know, other than, of course, something like webhooks and the extenders that we have, I'm not so sure there are any better options. At least when it comes to the API server admission controllers, those are pretty much the same, right? They are compiled in, and then we can enable or disable them in a config file, which is really much the same as the approach that we are taking with the plugins for the scheduler.