From YouTube: Kubernetes SIG Scheduling Meeting 2018-06-21
A
Hello everyone, and welcome to the scheduling meeting. Today we are going to talk mostly about a quick update on 1.11, and then mostly our plans for 1.12. For 1.11, as far as I can tell, everything had been going well as of last night. I haven't checked email much this morning, so I don't know; hopefully everything is still going well. We are on track for releasing priority and preemption enabled by default in 1.11, as a beta feature, and basically everything so far has been good.
Alright, so with respect to 1.12, I have several things in mind, but I would also like to collect your ideas and feedback and incorporate them into our plan for 1.12. There are several major items. One is improving the performance of the scheduler, and there are a couple of plans for doing that. One is to improve performance by enabling the equivalence cache. Jonathan and Harry are working on that, and I'm hoping that we can finalize the work this quarter and move that feature to beta. More recently, just a couple of days ago, when I ran benchmarks with the equivalence cache enabled, I hit some issues: the benchmarks were timing out in larger clusters, like 5,000 nodes and 10,000 pods. They were not working well with the equivalence cache. These issues should be investigated, and further tests should probably be developed, before we can move it to beta.
All of our parallel tasks that run predicates have to go and read from it. We have changed the mechanism to use a read lock versus a write lock, but I suspect the write lock is obtained frequently, so our read locks also get stuck behind that lock. We need to investigate some of these issues and try to improve the performance of the equivalence cache. That's one of the bigger plans.
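A minimal Go sketch of the contention pattern being described, assuming a map-backed equivalence cache guarded by sync.RWMutex (type and field names are hypothetical, not the actual scheduler code):

```go
package equivcache

import "sync"

// equivCache sketches the pattern described above: parallel predicate
// workers take the read lock, while cache invalidations take the write
// lock. If the write lock is acquired frequently, readers pile up
// behind each writer and throughput drops.
type equivCache struct {
	mu      sync.RWMutex
	results map[string]bool // hypothetical: cache key -> predicate result
}

func (c *equivCache) Lookup(key string) (bool, bool) {
	c.mu.RLock() // predicate goroutines block here whenever a writer holds mu
	defer c.mu.RUnlock()
	v, ok := c.results[key]
	return v, ok
}

func (c *equivCache) Invalidate(key string) {
	c.mu.Lock() // taken on cluster changes; frequent writes starve the readers
	defer c.mu.Unlock()
	delete(c.results, key)
}
```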
We also have things to move to beta. Basically, taint nodes by condition should move to beta. We didn't do that in 1.11, but I'm hoping that we can do it in 1.12. We are also hoping to move DaemonSet scheduling to beta. Today, in 1.11, we have that feature in alpha in Kubernetes; we are hoping to be able to move it to beta in 1.12. After it's moved to beta, we can completely remove the rescheduler from our infrastructure. And now that the rescheduler is going away, I guess we are ready to bring in the descheduler as a standard component of Kubernetes, as an add-on, available in the standard version of Kubernetes. I would like to know what Avesh thinks about that, as the owner of the descheduler.
B
Sorry, one thing I was wondering regarding DaemonSet scheduling by the default scheduler: is there a job, something like a non-critical job, that we could run whenever tests are running, so that side by side, whenever PRs are created, we can see whether we are having any issues or not? Something like that, so that we can get some feedback in parallel.
A
That's actually a good point. If we can do that, we should probably do it. I'm not so sure about whether we can enable it in CI. I guess in CI we can have optional tests: basically, if a feature is not so stable, we don't want to block everybody, or if the test is flaky because the feature is not always reliable, we don't want to block everybody.
C
You've also pointed out before, and I think Bobby has mentioned, that not every alpha feature can be enabled by default for tests when it replaces existing behavior. Priority and preemption, for example, replaces the scheduling queue with a new queue, and if we turned that on by default for tests, then we would not be covering the production code in tests. So that's something to discuss for new features.
A
Okay, those are some of the things. We would also like to finalize the design of the scheduling framework in the next month or so, hopefully, or ideally in the next couple of weeks. I have already sent a PR to add that proposal, a design proposal for the scheduling framework. I have seen some very good comments and questions about the framework, and I would like to address those soon.
If you have any other ideas, comments, or suggestions about... oh, one more thing, actually: gang scheduling. Klaus is leading that effort and he has some ideas. So far our decision has been to create a separate scheduler for gang scheduling, basically to schedule batch jobs in a separate scheduler. The reason we thought this might be useful or helpful is that batch scheduling requires a set of different metrics and predicates, and we didn't want to combine all of those in a single scheduler.
B
[inaudible]
A
I think preemption was an example: it was a relatively complex feature that was hard to support in a second scheduler. I agree with you that we will probably run into the same problem in the future if we want to keep the two completely separate.
D
[inaudible]
A
One thing that we wanted to do was to possibly use the scheduling framework for both of these two components, so basically use the same structure for both of the two, but maybe with a different set of plugins. If our plugins are completely standard, which is what we are aiming for, or if the plugins are completely portable across the framework, we should be able to develop a plugin for the framework and use it in both of these components.
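As a rough illustration of that portability idea (all names here are hypothetical, not the actual framework proposal API), a plugin written once against a shared interface could be registered in both the default scheduler and a separate batch scheduler:

```go
package framework

// Pod and Node are stand-ins for the real API objects.
type Pod struct{ Name string }
type Node struct{ Name string }

// Plugin is the common contract every extension implements.
type Plugin interface {
	Name() string
}

// FilterPlugin mirrors a predicate-style extension point.
type FilterPlugin interface {
	Plugin
	Filter(pod *Pod, node *Node) bool
}

// Registry holds portable plugins; two different schedulers could
// construct themselves from the same registry.
type Registry map[string]Plugin

func (r Registry) Register(p Plugin) { r[p.Name()] = p }
```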
A
Regarding gang scheduling, we don't have concrete goals yet, but I would like Klaus to define the achievable goals for 1.12. I know that he already has a prototype. We were thinking about renaming kube-arbitrator to something more relevant, maybe kube-batch or something like that, and focusing on the scheduling aspect of gang scheduling at this point. For 1.12 we can hopefully have an alpha version of the batch scheduler, but since Klaus cannot attend this 10:00 a.m. meeting, I would like to wait for him until the next meeting to see what he thinks.
B
I mean, so far we haven't discussed concrete plans for 1.12 internally, so right now we don't have other items. The only thing I think is most important for us, as I mentioned, is DaemonSet scheduling by the default scheduler. But maybe in the next three or four weeks, when we have more discussion about the 1.12 work, we will have more ideas, if we want anything else.
A
Yeah, that one is actually, that's a good point. We should definitely see some progress there too. We are still somewhat involved in the process, but we thought that the policy team is probably a better place for leading that effort. We are going to be involved pretty heavily, because it's going to affect scheduling as well, but it's not something that we really need to implement on the scheduler side; it will all go to the admission side.
B
[inaudible]
A
[inaudible]
D
...after putting all this code in there and seeing. So we just started that: we're setting up, I mean, we're setting up a local cluster. We've got pretty good resources locally as well, but at some point we are going to go bigger, and then we are going to go to GCE as well, so in the next couple of weeks, two or three weeks. The way we are doing it is driven by the fact that we know pod affinity is the culprit.
D
Essentially, we're planning to have a mixed workload, and we actually have a lot of interest in China, so there are a bunch of guys involved and they are helping out with the max-pods part. For example, in Kubernetes we can define max pods at the node level, so somebody took on that work. And then the other thing we need to do is, after we...
D
Hopefully we won't see any issue with the performance impact, because of the way we are doing pod affinity; we put a lot of performance thought into the design. As the pods come in, we store them in a hash table in memory and all that, so I think it should be pretty efficient. And then, remember, I mentioned that any pod which has a pod affinity definition, we queue them up, basically. There is a better way of doing it, with a graph, a network graph, but we had a lot of issues there, and we have been working with them on that; we didn't want to really get bogged down and boil the ocean at this time. So right now, essentially, as the pods come in, we queue them up if they have any pod affinity definition, because we want to run them one by one.
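A small Go sketch of the approach as described: an in-memory hash index over pod labels, plus routing any pod that carries an affinity definition into a serial queue so those are scheduled one by one. All names are hypothetical, under the assumptions stated above:

```go
package sched

import "sync"

// Pod is a stand-in for the real object; HasAffinity marks a pod that
// carries a pod (anti-)affinity definition.
type Pod struct {
	Name        string
	Labels      map[string]string
	HasAffinity bool
}

// index keeps incoming pods in an in-memory hash table keyed by
// "label=value", so affinity lookups avoid scanning every pod.
type index struct {
	mu     sync.Mutex
	byTerm map[string][]*Pod
}

func (ix *index) Add(p *Pod) {
	ix.mu.Lock()
	defer ix.mu.Unlock()
	for k, v := range p.Labels {
		term := k + "=" + v
		ix.byTerm[term] = append(ix.byTerm[term], p)
	}
}

// route sends pods with an affinity definition to a serial queue
// (scheduled one by one); everything else can be handled in parallel.
func route(p *Pod, serial, parallel chan<- *Pod) {
	if p.HasAffinity {
		serial <- p
		return
	}
	parallel <- p
}
```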
A
[inaudible]
D
This is the assumption; I remember we discussed it last time. The assumption we have made is, essentially, as I said, that we queue up pods if they have a pod anti-affinity definition. So we are making the assumption that the pod anti-affinity is explicit rather than symmetric: basically, only if the incoming pod has an anti-affinity definition in it will we queue it up. Otherwise we could end up queuing everything, which doesn't really make sense. Yeah.
D
[inaudible]
A
I mean, it's very hard for users to make that kind of change to their pod spec, because sometimes they actually cannot do it, simply because there are multiple tenants in a cluster. You cannot go and ask all your colleagues to create the same anti-affinity rules excluding all of their pods. It becomes a little harder to basically keep that promise. But I think...
D
As I said, the reason we have to do that is because you have to queue them up. There is a better approach as well, which we were exploring; I think there's a research paper that we started experimenting with, but it has a lot of holes in it, and then those guys got pretty busy and we were not able to get feedback from them. So...
B
[inaudible]
D
Just for the time being, this is what we are doing, actually. So yeah, I mean, as I said, my argument for that would be: how often would you have an affinity or anti-affinity rule? And when you do, you have to manually define it anyway, as opposed to it applying to any incoming pod.
A
I can give you one compelling use case. Google, for example, recently launched sole tenancy in GCE. The idea of sole tenancy is that you, as a customer, go and ask for a node that is going to be running only your tasks. This is something that a lot of, you know, security-sensitive applications require, because they don't want to share the underlying resources with other untrusted workloads. Alright.
A
So when you want to do that in Kubernetes, you can easily put a pod anti-affinity rule on your pod, and when you schedule it, you can say: only run tasks that have, for example, a label saying this pod is from my user, and on this node I don't want to have any other pods. So it's very easy to do with pod anti-affinity. But of course, other users of the cluster may not want to add labels just to accommodate your wish.
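For reference, the rule being described can be expressed with the stock core/v1 API types; a sketch, where the "user" label key and value are hypothetical:

```go
package tenancy

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// soleTenantAffinity builds a required pod anti-affinity rule at node
// topology: the pod refuses to land on any node already running a pod
// whose "user" label is not ours.
func soleTenantAffinity(user string) *v1.Affinity {
	return &v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
				TopologyKey: "kubernetes.io/hostname", // topology domain = a single node
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "user",
						Operator: metav1.LabelSelectorOpNotIn,
						Values:   []string{user},
					}},
				},
			}},
		},
	}
}
```

Since the scheduler also checks required anti-affinity symmetrically, once such a pod lands on a node, other pods matching the selector are kept off that node as well.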
D
Yeah, yeah, I agree, I think so; you're right. I think the concern would be that we're not really doing a true comparison, because we are expecting the behavior to be explicit. But right now, the way we are doing it, as I said, I think there is a better way of doing it, so essentially we'll see if we can make progress down the road. For the time being, though... yeah.
D
[inaudible]
A
In Kubernetes we used to have it, actually. If you wanted to achieve sole tenancy, you could have an anti-affinity rule and label your pods properly, so that only pods from your user, basically owned by your user, can be scheduled on that node. You could set the topology to node and have an anti-affinity rule with the proper selector. And actually, I built this feature in Borg when I was on the Borg team, and it's now released in GCE as well. So GCE supports it, at the VM level.
D
[inaudible]
A
Right, right, that's right. These are all good ideas. You know, sometimes distributed schedulers help; sometimes they lead to more complexity and suboptimal decision making. With scheduling it's a tough balance, and it depends on the goals, really, but we would definitely like to explore ideas there.
B
[inaudible]
D
Good. So I will definitely keep you guys posted, but I think that's definitely... yeah. For the time being we're going with the explicit approach, and then we'll start thinking about it. I have meetings with, you know, these guys who are pretty busy. They essentially came up with a new design proposal, and it's out there, but it's not fully fleshed out, and when we started implementing it we found a lot of holes.
A
[inaudible]
D
One final thing I want to add, one other thing we're going to start looking at, is this multi-scheduler situation. We know we have a lot of problems because, for the moment, our scheduler, even if we get good throughput, is still going to coexist with the default scheduler, and we know there are a lot of issues there. So we will work on that; we may have to make some changes to the Kubernetes core as well in order for that to work. That's another task; we're going to do it after that.