From YouTube: Kubernetes SIG Scheduling meeting - 2018-07-12
A: Hi everyone, welcome to the SIG Scheduling meeting. It looks like today we don't have that many participants; let's see if more people are going to join. All right, so let's have a quick update, or maybe a not-so-quick update, on the items that we have been working on. I guess, Harry, you have some updates on the equivalence cache. As some of you may know, we have been working on improving equivalence cache performance and trying to move it to beta in 1.12, and I believe Harry has done a very good job.
B: Thank you. So, we started work to improve the equivalence cache last week and changed the current design to a two-level cache, so we can avoid the global lock during the critical path of scheduling. According to the tests on the pull request, the performance is pretty good: we have much higher throughput during scheduling, and the latency has been improved a lot. I did a test in a one-thousand-node cluster, and it looks like the scheduling time has been hugely shortened after enabling the equivalence cache. I also noticed that the throughput has been improved by at least two or three times compared to before enabling the equivalence cache, because there is no global lock and we can reuse the cache; pretty good results, so that is as expected. The only problem I have is that I don't have a very powerful machine to do a test at something like five thousand nodes. Every time I run the five-thousand-node tests, they time out, because my machine is too small. So if anybody has a powerful machine, especially a physical machine, to run the tests on the cache, please let me know, and I will let them know what configuration to use to reproduce the test results. Yeah, so that's it.
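As an editor's aside, the two-level cache Harry describes can be sketched in Go. This is a simplified illustration, not the actual scheduler code: the type and method names (Cache, nodeCache, Lookup, Store) and the equivalence-hash key are invented for the sketch. The point is that the top-level lock is held only long enough to find the per-node entry, so lookups on different nodes no longer contend on one global lock.

```go
package main

import (
	"fmt"
	"sync"
)

// result is a cached predicate outcome for one equivalence class of pods.
type result struct {
	fits bool
}

// nodeCache holds cached results for a single node, guarded by its own lock.
type nodeCache struct {
	mu      sync.RWMutex
	results map[uint64]result // keyed by a pod equivalence hash
}

// Cache is a two-level cache: a top-level map from node name to a per-node
// cache. The top-level lock is taken only briefly to look up or create the
// per-node entry, so concurrent lookups against different nodes do not
// serialize on one global lock.
type Cache struct {
	mu    sync.Mutex
	nodes map[string]*nodeCache
}

func NewCache() *Cache { return &Cache{nodes: map[string]*nodeCache{}} }

// forNode returns the per-node cache, creating it on first use.
func (c *Cache) forNode(name string) *nodeCache {
	c.mu.Lock()
	defer c.mu.Unlock()
	nc, ok := c.nodes[name]
	if !ok {
		nc = &nodeCache{results: map[uint64]result{}}
		c.nodes[name] = nc
	}
	return nc
}

// Lookup returns a cached result for (node, equivalence hash), if present.
func (c *Cache) Lookup(node string, hash uint64) (result, bool) {
	nc := c.forNode(node)
	nc.mu.RLock()
	defer nc.mu.RUnlock()
	r, ok := nc.results[hash]
	return r, ok
}

// Store records a predicate result for (node, equivalence hash).
func (c *Cache) Store(node string, hash uint64, r result) {
	nc := c.forNode(node)
	nc.mu.Lock()
	defer nc.mu.Unlock()
	nc.results[hash] = r
}

func main() {
	c := NewCache()
	c.Store("node-1", 42, result{fits: true})
	r, ok := c.Lookup("node-1", 42)
	fmt.Println(ok, r.fits) // true true
	_, ok = c.Lookup("node-2", 42)
	fmt.Println(ok) // false
}
```

The upstream design is more elaborate (it also keys entries per predicate and handles invalidation), but the locking structure above is the part the speaker credits for the throughput gain.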
A: So, I have a powerful machine; I can do that for you. By the way, one other thing that you can do, since those benchmarks are essentially integration tests, is change the timeout for integration tests. There is an environment variable, or maybe there is a flag, but I think there is an environment variable, that you can change. I don't have the name in mind right now, but if you look at those scripts or main files you will find it; there is one environment variable that allows you to change the timer. Anyhow, I will be happy to help you out with that. It's very impressive that you've been able to achieve about a 3x performance improvement. Of course, the benchmarks are not necessarily the same as real-world scenarios: the benchmarks create a lot of pods which have similar specs, so essentially, in terms of equivalence, all of them are equivalent to one another. As a result, you will probably see a much higher performance improvement than in a case where lots of pods with different specs, from different kinds of ReplicaSets, are created in the cluster. But still, it's impressive to see a 2x performance improvement. Maybe in real-world scenarios it's not as large, but even 2x or 1.5x is pretty impressive.
A: One other item, which is kind of related to you, Harry, is the image locality priority function. I reviewed it today and LGTM'd it, so it looks like we are close to having it merged. If you have any further updates, please share them with us; and other people, if you have questions or comments about that, you can say so as well. So, do you have any further updates on the image locality? Yeah.
B: I went all the way through the pull request, and I think the current proposal and patch look generally good; maybe there are just some nits to fix. I will leave a final review comment on the pull request. And I very much appreciate Robin's work iterating on the algorithm; it's awesome. Yeah.
A: He's also working on moving DaemonSet scheduling to the default scheduler for 1.12, and that is in progress as well, but I don't have many updates to share with you; hopefully next time Klaus is here, he can share more information about those. With respect to other items that we have been working on: Ravi, you may have some updates about the descheduler. Basically, do you have any further updates on the descheduler and how it's going? Yeah.
C: So, I'm trying to add a strategy to the descheduler similar to the one the rescheduler has, so that once we retire the rescheduler, the descheduler could be used in its place, at least for some time before DaemonSet scheduling by the default scheduler graduates to beta. Avesh is reviewing it. So that's the state of the descheduler as of now.
A: Oh, I was muted; sorry, thanks for letting me know. Anyway, I was saying that one other item that we have been working on is pod scheduling policies. A pod scheduling policy is a set of policies that allows an admin to basically restrict certain scheduling policies of pods. So today in Kubernetes, someone, let's say a malicious user who has access to a cluster, can go and set some anti-affinity, say, on his or her own pods, to prevent any other person or any other pod from being scheduled in the same zone.
A: It's not that hard to do. Similarly, someone can set anti-affinity to prevent other pods from getting scheduled on the same node, and so on. So this could basically cause disruption in the cluster.
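As an editor's aside, the zone-level disruption described above can be illustrated with a small Go sketch. The types here (pod, node, antiAffinityTerm, placement) are invented stand-ins, not the real k8s.io/api types, and only one direction of the real anti-affinity check is shown: whether an incoming pod would violate the anti-affinity of pods already placed.

```go
package main

import "fmt"

// antiAffinityTerm repels pods matching matchLabels from the topology
// domain named by topologyKey (e.g. a hostname label or a zone label).
type antiAffinityTerm struct {
	matchLabels map[string]string
	topologyKey string
}

type pod struct {
	labels       map[string]string
	antiAffinity []antiAffinityTerm
}

type node struct {
	labels map[string]string
}

// placement records where an existing pod is running.
type placement struct {
	p pod
	n node
}

// selectorMatches reports whether every key/value in sel appears in labels.
func selectorMatches(sel, labels map[string]string) bool {
	for k, v := range sel {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// violatesAntiAffinity reports whether placing `incoming` on `n` would
// violate the anti-affinity of a pod already running in the same topology
// domain. With a zone-scoped topologyKey, a single existing pod can exclude
// matching pods from the entire zone, which is the disruption scenario above.
func violatesAntiAffinity(incoming pod, n node, existing []placement) bool {
	for _, e := range existing {
		for _, term := range e.p.antiAffinity {
			sameDomain := e.n.labels[term.topologyKey] == n.labels[term.topologyKey]
			if sameDomain && selectorMatches(term.matchLabels, incoming.labels) {
				return true
			}
		}
	}
	return false
}

func main() {
	// One pod that repels every pod labeled app=web from its whole zone.
	blocker := pod{
		labels: map[string]string{"app": "blocker"},
		antiAffinity: []antiAffinityTerm{
			{matchLabels: map[string]string{"app": "web"}, topologyKey: "zone"},
		},
	}
	existing := []placement{{blocker, node{labels: map[string]string{"zone": "a"}}}}
	web := pod{labels: map[string]string{"app": "web"}}

	fmt.Println(violatesAntiAffinity(web, node{labels: map[string]string{"zone": "a"}}, existing)) // true
	fmt.Println(violatesAntiAffinity(web, node{labels: map[string]string{"zone": "b"}}, existing)) // false
}
```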
Similarly, no policy today prevents users from setting any toleration that they like on their pods. So, for example, we have created taints and tolerations to allow some scenarios.
A: For example, an admin can add a particular taint to GPU nodes and then add the toleration only to the pods that really require GPUs, but there is no way to prevent other users from adding the same toleration to other pods.
So we are working on this part of scheduling to basically let an admin set some of these policies: to let some users have these kinds of tolerations, or anti-affinity, and so on, for their pods, while not letting everybody have them. That work is ongoing.
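As an editor's aside, the taint and toleration matching being discussed can be sketched as follows. The types are simplified stand-ins for the real API objects; the matching rules (key match, Exists versus Equal, effect match) follow the documented Kubernetes semantics. The sketch shows why, without the proposed policies, nothing stops an arbitrary pod from carrying the same toleration as the GPU workloads.

```go
package main

import "fmt"

// taint marks a node; effect is e.g. "NoSchedule" or "NoExecute".
type taint struct {
	key, value, effect string
}

// toleration lets a pod ignore a matching taint. An empty effect matches
// any effect; operator is "Exists" or "Equal".
type toleration struct {
	key      string
	operator string
	value    string
	effect   string
}

// tolerates reports whether a single toleration matches a taint: the keys
// must match, "Exists" ignores the value while "Equal" compares it, and an
// empty toleration effect matches any taint effect.
func tolerates(tol toleration, t taint) bool {
	if tol.key != t.key {
		return false
	}
	if tol.effect != "" && tol.effect != t.effect {
		return false
	}
	switch tol.operator {
	case "Exists":
		return true
	case "Equal":
		return tol.value == t.value
	}
	return false
}

// podFits: a pod fits a node only if every NoSchedule taint is tolerated.
// Any user who copies the right toleration onto their pod passes this check,
// which is the loophole the scheduling policies aim to close.
func podFits(tols []toleration, taints []taint) bool {
	for _, t := range taints {
		if t.effect != "NoSchedule" {
			continue
		}
		ok := false
		for _, tol := range tols {
			if tolerates(tol, t) {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	gpu := taint{key: "gpu", value: "true", effect: "NoSchedule"}
	fmt.Println(podFits(nil, []taint{gpu}))                                           // false
	fmt.Println(podFits([]toleration{{key: "gpu", operator: "Exists"}}, []taint{gpu})) // true
}
```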
A: So mostly it's now our security and policy folks working on that, but we are also involved in the design, and if you have any comments, you can go and share your comments or concerns in the design document. He is mostly working on that; I heard from him today that he has been in touch with sig-policy, I believe, and he is going to update the PR, and we can hopefully finalize it. That's, I guess, all the items that I wanted to tell you about.
C: Related to the equivalence cache: for the 5k-node tests that Harry requested, I can help, and for the beta work I can help as well. The other thing is, I usually run it on an eight-core machine and it works fine most of the time; like, the test runs fine most of the time. So yeah, I can help in that area, in case.
A: One thing that we can do, probably, and I mean we don't need Harry to give us much direction: we can check out the head of master, run the 5000-node benchmark there, then apply the patch that Harry has been working on, try it again, and see the amount of performance improvement. So, any...
A: Actually, sure, please go ahead and take a look as soon as possible, because I guess I LGTM'd it and it could be merged at any time. So if you have any concerns, please go ahead and review it as soon as you can. I can cancel the LGTM if you think you must read it before we merge it. Yeah.
A: So the new formula that we've been using makes it a little bit more distributed. Of course it still prefers that machine, but given the new formula, and given that we are combining this priority with other priorities, for example the distribution priority or the least-used priority, I believe that with this new priority function that concern goes away. Okay.
C: Yeah, the third point that I wanted to talk about is the scope selectors, the thing that we have used for the priority class restriction for quota, which is in alpha and which we would like to graduate to beta in 1.12. He is working on it and is going to create PRs for it next week, so that it can graduate to beta.