From YouTube: Kubernetes SIG Scheduling Meeting - 2019-08-22
A: It's basically a link to our meeting notes. If you scroll down on the second page, you will see that we have a few items for August 22. The first item is that code freeze is coming at the end of this month, so the end of next week is going to be very close to our code freeze. So this week I would like to go over the items that we were tracking for 1.16 and see where we are with respect to what we planned to do. So let me actually send a link to our spreadsheet as well.
A: There's the spreadsheet, with a link to our items. The first one is: implement all extension points of the scheduling framework. I believe that this is all done; correct me if I'm wrong, Abdullah. Do you have any updates on this? I know that you have been closely involved with all these things. As far as I know, everything is done, right? Yeah.
A: The next item is that the scheduler should spread pods among physical hosts by default. We actually tried to do this; Abdullah and Ahmadiyya were all very closely involved with this effort, but we got pushback from the API reviewers. Basically, they didn't want to add any kind of semantics to particular labels in a cluster; they didn't want us, or the scheduler, to distinguish between a node versus a zone versus a region based on just the labels on those nodes.
A: So with this, basically, a user can specify the labels that they want pods to be spread by, by default, among those failure domains. For example, if the label is the node, the scheduler by default is going to spread pods among nodes; or if this label is a physical host, it can spread among physical hosts. We can actually have more than just one label: we can have a hierarchy, like three levels or so, of spreading among nodes, physical hosts, zones, whatever the user specifies in the config.
A: The next item is supporting less-than and greater-than operators in inter-pod affinity. I don't think we have seen much progress on this front. I don't know if Klaus is here today; it looks like Klaus is here. He was the shepherd for that feature. Klaus, do you have any update for us? Is there any progress?
C: About the last comments on that: what we discussed is that we could keep it only in the HTTP RESTful API; but anyone using the generated clients, you know, the Go client code, the Python client code, all the client code, would need to change their code for this. We need to take it back to the API reviewers and see whether they're okay about this part.
A: I see, but given this, I don't think that it can make it to 1.16, right? Yeah, so this is going to be, I'll just write, postponed; but let's see if we can make more progress in the next cycle. Next, even pod spreading among failure domains: this is something that Wei has been working on, and as far as I can tell it's done. Is that correct, Wei?
A: And there are two items which we didn't really track; they are all grayed out. There was another one, removal of cloud-specific code for volume counting, basically the max-volumes support. This was something that Robbie was kind of involved in, and my guess is he was also working on it; I don't know of any other updates.
D: So previously we used to get it from the node, or like, we had a hard-coded value for the number of volumes that can be attached to a node; and now we have the CSINode informer within the scheduler, so it should be good. But I have to check with him, and once I confirm, I will get back to you.
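The volume-counting check being discussed can be sketched as follows. This is a minimal illustration, assuming a per-node attachable-volume limit of the kind a CSINode object would report; the function name and signature are hypothetical, not the scheduler's real predicate:

```go
package main

import "fmt"

// fitsVolumeLimit reports whether a pod requesting newVolumes unique volumes
// can be placed on a node that already has the given volumes attached and
// reports an attachable-volume limit. Volumes the node already has attached
// are not counted twice.
func fitsVolumeLimit(attached map[string]bool, newVolumes []string, limit int) bool {
	extra := 0
	for _, v := range newVolumes {
		if !attached[v] {
			extra++
		}
	}
	return len(attached)+extra <= limit
}

func main() {
	attached := map[string]bool{"vol-a": true, "vol-b": true}
	// One new volume on a node limited to 3 attachments: fits.
	fmt.Println(fitsVolumeLimit(attached, []string{"vol-c"}, 3)) // true
	// Two new volumes would exceed the limit of 3.
	fmt.Println(fitsVolumeLimit(attached, []string{"vol-c", "vol-d"}, 3)) // false
}
```

The point of sourcing the limit from the CSINode informer rather than a hard-coded constant is exactly that `limit` becomes per-node, per-driver data.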
A: As far as I can tell, the deprecation has started, basically, so I think we can call it on track. One other thing that you were involved with, Robbie, was moving the resource quota scope selector to GA. Actually, before the meeting I checked the status: it's pending on one other PR, which I reviewed in the past and left a comment on, but you didn't actually continue working on it. This is beta actually; I put it in the spreadsheet, 76310.
D: ...like we are kind of treating kube-system as a special namespace, meaning we are limiting the critical pods only to that namespace, and I can think of at least two different customers, not only at Red Hat; I have someone from the community asking for the critical pods to be allowed in other namespaces, not only limited to kube-system. So that's one of the reasons I have made that a criterion for GA. As of now it's beta, but I have made it one of the criteria for graduation. Yeah.
A: Yeah, sure. I mean, at the time we felt like it should be okay if we restrict those priorities to kube-system, but I agree that it may not work for everybody. Yes, so the next item is graduating the scheduler component config to v1beta1. This one we could actually do, but I believe Justin Santa Barbara proposed that all the fields of the config should be optional, so that when they're not specified, we don't just go with, you know...
A: Basically, we can detect that they are not specified in the config, as opposed to just getting their default Go value. This is particularly important because some values, such as a bool, can be false by default in Go, and if it is false, then you cannot tell whether a user has explicitly set it to false or the user hasn't specified it. In the case that the user hasn't specified it, you want to go with a default value which is specified in your code, not necessarily what Go imposes. This is important; we've run into issues with going with Go default values. So we are not going to promote this component config to beta in 1.16. We are going to wait for the other PRs to merge and make all these fields optional, basically converting all of these fields to pointers, so that we can distinguish between a field that is specified versus one that is not specified and, as a result, is nil.
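The bool-defaulting problem can be shown concretely. This is a minimal Go sketch with invented type names, not the real component config types:

```go
package main

import "fmt"

// With a plain bool, an omitted field and an explicit "false" are
// indistinguishable after decoding: both are Go's zero value.
type PlainConfig struct {
	DisableFeature bool
}

// With a *bool, nil means "not specified", so defaulting code can apply
// its own default instead of Go's zero value.
type PointerConfig struct {
	DisableFeature *bool
}

// applyDefault fills in unspecified fields; here the intended default is
// true, which a plain bool could never express for omitted input.
func applyDefault(c *PointerConfig) {
	if c.DisableFeature == nil {
		def := true
		c.DisableFeature = &def
	}
}

func main() {
	// User omitted the field entirely: defaulting kicks in.
	var omitted PointerConfig
	applyDefault(&omitted)
	fmt.Println(*omitted.DisableFeature) // true

	// User explicitly set false: the value survives defaulting.
	explicit := false
	set := PointerConfig{DisableFeature: &explicit}
	applyDefault(&set)
	fmt.Println(*set.DisableFeature) // false
}
```

This is why the PRs mentioned above convert the fields to pointers before the config graduates.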
A: Nothing else? Oh, we have a few items on the agenda. Okay, yeah, our agenda. Let's, for now, just talk about the updates for the projects and not discuss technical details; towards the end of the meeting, if we have the time, we can go into more technical details. So another thing that I wanted to quickly remind you of is that KubeCon San Diego in November is coming up.
A: Great, thank you very much. There are also going to be some other sessions where contributors can talk with developers. I don't know the exact details about those; I know that there is not going to be any face-to-face contributor summit this time. Regarding the summit: we're not going to have any face-to-face one for 1.16, but there is a replacement for it, with some details I can actually look up. The more important part was to make sure that we have intro and deep-dive sessions for the SIG.
A: Yeah, we can actually... I think, just quickly: I think we can make it configurable, because one size does not fit all clusters. So that's why I propose making it configurable, with the default being the current value that we use in the scheduler, which is 10 seconds max; but for some clusters that may not work, so I think we can make it configurable. All right, Klaus, you can take it over.
I: In cluster mode, when we submit a Spark job, it first creates a driver pod, and then the driver will talk with the Kubernetes API server to create the executors; each executor will run individual tasks and send the results back to the driver. So a Spark job will have a driver and a list of executors, and when all of it is done, the driver will kill all of its executors.
A: Yeah, yeah. So my question was: based on the names of those jobs, can you tell... I mean, have you set the names so that you can tell which pod groups they belong to, or is it not possible to tell that from just the names? When we were looking at the output above, yeah, here: when you're looking at the names, can you tell which groups they belong to?
A: That's basically the plan. So yeah, I guess you can work together and see how we can actually bring some of those changes to the framework. I mean, that's, of course, one of the options; there might be some other options available. I guess right now we are officially out of time, but if there is any other quick comment or question...
G: So a little background on this: Monica wants to make the descheduler work without the dependency on the current Kubernetes version and API, because there are comments from the community that some clusters or customers are not going to have the feature. So then, just suppose we don't have the even pod spreading feature, yeah.
H: So we want to be able to run the descheduler with whatever version of Kubernetes is currently running, and it could still continue to provide runtime even pod spreading, based on constraints that are not available in the pod API, because, for whatever reason, either they are not running the right version of Kubernetes or other reasons. So I want the descheduler to still be able to do the even pod spreading, and that is the main reason I was basically suggesting that we have...
H: ...that we have the descheduler support an API that works with or without the pod API topology spread constraints being available. If the pod API constraints are available, then we would use those; otherwise we would use the constraints as provided by the descheduler strategy policy itself.
H: I am open to suggestions there, but yeah, I mean: if they specify that, it means the user wants those topology spread constraints rather than having them read from the pod API itself. If they don't, if they skip that, it means the user wants to use whatever is specified in the pod API that got merged already, so we would use that instead.
H: Right, so basically, let me paste it: it has the exact same semantics as the API that is in the pod. So it would have a name, a label selector, a max skew, and a topology key. The label selector, within the namespace, would determine which pods are the target for the even spreading.
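The constraint shape just described (name, label selector, max skew, topology key) can be sketched along with the skew calculation it implies. The struct and field names here are illustrative, not the real descheduler policy types:

```go
package main

import "fmt"

// TopologySpreadConstraint mirrors the fields described above, with the
// same semantics as the pod API's constraint: a label selector picks the
// target pods, TopologyKey names the domain label, MaxSkew bounds the
// allowed imbalance between domains.
type TopologySpreadConstraint struct {
	Name          string
	LabelSelector map[string]string
	MaxSkew       int
	TopologyKey   string
}

// matches reports whether a pod's labels satisfy the selector.
func matches(podLabels, selector map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

// skew computes max(domain count) - min(domain count) over the pods that
// match the selector, grouped by the topology domain each pod landed in.
func skew(podLabels []map[string]string, podDomains []string, c TopologySpreadConstraint) int {
	counts := map[string]int{}
	for i, labels := range podLabels {
		if matches(labels, c.LabelSelector) {
			counts[podDomains[i]]++
		}
	}
	min, max := -1, 0
	for _, n := range counts {
		if min == -1 || n < min {
			min = n
		}
		if n > max {
			max = n
		}
	}
	if min == -1 {
		return 0
	}
	return max - min
}

func main() {
	c := TopologySpreadConstraint{
		Name:          "spread-web",
		LabelSelector: map[string]string{"app": "web"},
		MaxSkew:       1,
		TopologyKey:   "zone",
	}
	labels := []map[string]string{
		{"app": "web"}, {"app": "web"}, {"app": "web"}, {"app": "db"},
	}
	domains := []string{"zone-a", "zone-a", "zone-b", "zone-b"}
	s := skew(labels, domains, c)
	fmt.Printf("skew=%d, violated=%v\n", s, s > c.MaxSkew) // skew=1, violated=false
}
```

A descheduler strategy would evict pods from the most crowded domain whenever the computed skew exceeds `MaxSkew`.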
A: One obvious limitation is that such a way of configuring the descheduler probably limits the number of namespaces or types of pods that you want to spread. Am I right, or do you allow an unlimited number of namespaces and label selectors to be specified? Yeah.
H: I think the current way is that you can specify multiple namespaces, and for each namespace you can specify a list of constraints. So, I mean, yeah; I started that when the pod API was not there, so that we could basically fulfill the goal of having pods always spread evenly at runtime for a given set of pods, right? But now we have the pod API, so we would want both things, right?
H: So if somebody doesn't specify the per-namespace spread constraints, then we will basically use the pod API spread constraints to do the spreading; but if they specify them, then we would use these. That's the idea, and that would allow us not to have to wait for 1.17 or 1.18; the feature is actually available right away. And it might actually be a little less resource-intensive, if I think about it off the top of my head, because it doesn't have to go through each and every pod's spread constraints.
H: Sounds good. Yeah, I mean, I'm open to other suggestions on what the API should look like, but I think the key idea is that we should not tie this to just one of the two: it shouldn't only work with the alpha API, or only work with the pod API. I mean, that was the whole point of the descheduler, it not being inside the scheduler, right? So that will help a lot of other community users do the runtime spreading.
D: Yeah, so when we started out, we were thinking we would go with the pod API that was going to be available in 1.15; but since it has slipped to 1.16, we are thinking of going this route. But from a maintenance perspective, I think it's kind of too early to say. Unless we get feedback saying that, okay, this is absolutely needed in older releases of Kubernetes, and there is strong community feedback saying that yes, this is needed, then we can think about it.
D: Yeah, this can be confusing; at least, even while writing the document, it would be difficult for people to understand when they're reading it. So at first, let's go with the alpha and then see: if we get some good feedback, we can improve on it; or else we can say that, okay, since it's alpha, we are going to remove it and just support the pod API. Yeah.
A: Thank you, okay. Thank you very much. This is the end of our meeting. I know we dropped the other items and people had other questions, but hopefully we will get to all of them next week. And please do not forget that we are getting closer to the code freeze; if you have pending PRs, please ping one of us for approval and review. Thank you very much; see you folks next week.