From YouTube: Kubernetes SIG Scheduling Meeting - 2019-02-28
B: But everything was fine until recently, when it just failed on the GKE test suite, and it only fails on that one. I checked the context, and the error surfaced right after our PR that promotes priority class from v1beta1 to v1. But it is really weird that it only fails on that GKE test suite; it doesn't fail on any other suites, or even pre-commit CI or anything. So I checked the code.

B: The code is just trying to use a Kubernetes clientset to create a priority class, and the logic just checks whether the error is nil or the error is "already exists". For any other case it fails, and it just fell into that other condition. It didn't print out the error message, so it's really hard to find out what the root cause is.
A: I see.
A: Yeah, thank you for bringing it up. Anyway, I wanted to actually tell folks that we are getting closer to the code freeze for 1.14. We need to manage all the changes that we want to have in 1.14. Of course, we cannot introduce any new features, since the feature freeze has already passed, but any other remaining changes we need to get in. I know that I need to review one of your PRs, for supporting affinity to more than one pod; I will definitely do that.
A: I apologize again for not doing it already, because I have been, and still am, kind of busy with all this evaluations stuff and things like that inside Google. So hopefully that will end this week, and then I can focus more on the many PRs. The good news for us is that in 1.14 we managed to deliver a bunch of the important features that we wanted to deliver. One was the backoff mechanism for unschedulable pods; that is already out. We have been able to optimize node status updates; that is already done.
A: We have increased the scheduler throughput by quite a bit. Actually, if you go to our dashboard now, you will see that the scheduler throughput is actually a hundred pods per second, and in fact in those tests it's capped at 100 pods per second; it cannot exceed that. But since we see just a flat line at 100, it's reasonable to think that the scheduler actually exceeds 100 pods per second.
A: So we have been able to make a couple of other optimizations to the scheduling queue, and yeah, there are some other relevant optimizations that we have already managed, thanks to Wong. I don't know if he is present; yeah, he just joined, actually. Thank you, Wong, for your PRs. This other PR is actually one more optimization, to save API server QPS bandwidth.
A: Essentially, recently we made a change to update timestamps for the last schedule time of pods, and those timestamps were updated every time in the API server. We determined that this is not required, so Wong helped us bring that timestamp into the scheduler's memory, basically the scheduler's context, without updating the API server. So this change is already merged as well.
A: So this will help us improve API server bandwidth. And I know that there are a couple of other things that we would like to do. One is supporting some operators, like less-than and greater-than operators, for tolerations, and that is not out there yet. We wanted to, I guess, support the same less-than and greater-than operations for pod selectors as well, I believe, so those are not managed either. Both of these are features we cannot really have in 1.14; these two will definitely go to 1.15. The feature freeze already passed, like a few weeks ago.
A: We cannot merge any of these, even if they are ready. And Wong has also been working on the equivalence cache, so I don't know if we have managed to achieve any better performance.
A: Recently, we ran into one issue with the equivalence cache: even when we simplified it quite a bit and made it just a single structure with a single-level map lookup, we still saw some performance degradation in the most common scheduling scenarios. Essentially, when there are a number of pods, and assuming that there are no unschedulable pods in the queue, and you want to schedule all of them, the equivalence cache, or equivalence class, should not slow us down.
A: Having a lot of schedulable pods as a result of a burst of pod creation is a common scenario that many of the larger users have, and, at least today, having a very large number of unschedulable pods in your cluster is not a very common scenario in most Kubernetes clusters. So we should optimize for the much more common scenarios, and we should make sure that the equivalence class does not hurt our most common scenarios.
A: So far we haven't been able, at least that's the last update that I have, we haven't been able to design or implement this equivalence class so that the most common scenarios are not hurt. We will keep working on this; hopefully we can get it right.
A: Going forward, you know, now assuming that 1.14 is almost done, we should focus on a couple of larger areas. One common sort of area, from many of the folks that I have talked to, including our own customers and users, at KubeCon and various other venues: one of the very common complaints or pain points about Kubernetes is that it's not optimized for scheduling batch workloads. Batch is a big, big area we have to focus on.
A: Scalability has also been a common pain point. We have been able to address it to an extent, and going forward it does not make a whole lot of sense to put a lot of priority on scalability. It should always be important, but right now we feel we have reached a point where our bandwidth to the API server has become the bottleneck, not our own algorithm or our own logic anymore. So that bandwidth should be addressed first, before it becomes a priority for us to increase the scheduler throughput again. So going forward, batch is one important area that we would like to focus on for 1.15.
A: We will probably start thinking about some of these improvements. I know that Klaus has been working on kube-batch in the incubator as well.
A: Hopefully we will be able to bring some of those in too, and we should also start working more aggressively on the scheduling framework, because that can be an enabler for building more complex features, like, for example, some of this batch scheduling that we would like to do. It can also possibly become like a barebones scheduler that we can use to build another scheduler, maybe like a batch scheduler.
A: We haven't settled on any of these; of course, these are all very early ideas. But one possibility is to build another scheduler, specifically for scheduling batch workloads, that works in parallel to our default scheduler, which is more geared towards, or more optimized for, running services. For both of these we also need the framework, so we should also put more emphasis on the framework, yeah. These are the updates and some of the general directions that I had in mind that I wanted to share with you.
A: I'm sure, yes. If you go to our meeting notes... the link to the meeting notes is in our invites, and also we have a spreadsheet for the items that we were working on. That spreadsheet link is also in our meeting invite, or maybe it's on the first page of our meeting notes. I can actually check, yeah.
A: Yes, definitely. So yes, we had an early version of the document, and Jonathan improved that document. It's currently mostly at a design level; we have a very early implementation, which is very likely going to change. These are very early developments, but the idea behind the framework is that the scheduling framework is just a very barebones version of the scheduler, with almost no features; almost all the features of the scheduler become plug-ins for that framework.
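The "everything becomes a plug-in" idea can be sketched as below. This is purely illustrative: the actual scheduling framework API was still at the design stage at the time of this meeting, so the interface and type names here are assumptions, not the real extension points.

```go
package main

import "fmt"

// Plugin is the minimal contract: every feature identifies itself.
type Plugin interface {
	Name() string
}

// FilterPlugin decides whether a pod can run on a node, the kind of
// logic that was previously hard-coded as a scheduler predicate.
type FilterPlugin interface {
	Plugin
	Filter(pod, node string) bool
}

// framework is the "barebones scheduler": it owns no policy itself
// and just runs whatever plugins are registered with it.
type framework struct {
	filters []FilterPlugin
}

// RunFilters runs every registered filter; a node fits only if all
// filter plugins accept it.
func (f *framework) RunFilters(pod, node string) bool {
	for _, p := range f.filters {
		if !p.Filter(pod, node) {
			return false
		}
	}
	return true
}

// nodeNameFilter is a toy plugin: the pod may only land on one node.
type nodeNameFilter struct{ want string }

func (n nodeNameFilter) Name() string                 { return "NodeName" }
func (n nodeNameFilter) Filter(pod, node string) bool { return node == n.want }

func main() {
	fw := &framework{filters: []FilterPlugin{nodeNameFilter{want: "node-1"}}}
	fmt.Println(fw.RunFilters("pod-a", "node-1")) // true
	fmt.Println(fw.RunFilters("pod-a", "node-2")) // false
}
```

The point of the design is visible even in this toy: a custom feature, like the batch-specific logic discussed in this meeting, becomes one more registered plug-in instead of a fork of the scheduler.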
D: The other question I had is: how do I find a comparison of the performance numbers? You mentioned the testing you were doing now, like you can get up to a hundred pods.

A: Yes.
A: So I just sent the perf dashboard link to the chat window. You know, the chat, yeah, exactly. So you cannot change it; you cannot send a link directly to a particular page of the perf dashboard. But if you go to this perf dashboard, choose, for example, GCE 5,000 nodes, then in the second drop-down menu, scheduler, and then in the third drop-down menu, scheduling throughput.
A: Is this per version? Actually, no, this is not per version. If you click on any of those dots in the graph, it will show you when the test was run. So this is basically taking the head of the kubernetes repo and running all these scalability tests against it. So it's not necessarily per version; it's just a snapshot of the repo on that very date. Okay.
A: Oh, oh yeah, absolutely. So the ones that I was talking about were slightly higher-level, general directions that we should be heading in, but it doesn't necessarily mean that these are going to be the only items that we are going to work on. So evenly distributing pods among failure domains is definitely one of the important projects that we should be working on. I know that you have sent out a KEP for that; I will definitely review it very soon.
A
Even
if
we
add
like
max
per
failure,
domain
to
anti
affinity
still
doesn't
work
well
in
a
cluster
that
is
highly
dynamic
in
the
number
of
nodes
in
different
failure
domains
to
change
over
time.
So
we
definitely
want
to
have
this
even
distribution
of
pausing
among
failure,
domains
and
that's
that's
an
important
project
for
115
yeah.
B: Thank you. And by the way, I can give one data point on the use case of the scheduling framework. In our internal meetings, one team is actually building some additional code on top of the scheduler code base, and it is just several thousand lines of code; what it does is just implement one specific item, like an additional feature. So I told them that right now the scheduler extender is the only option to do that, or you can cross your fingers on the scheduling framework. I don't have...
A: I know this is actually a big pain point for many users. The person or the team that you know is not the only one; I keep hearing in various venues, including KubeCon, that many users are really waiting for this, because many have done some customization to the scheduler, and maintaining and merging the changes from upstream has become a huge pain for a lot of folks. So yes, absolutely, we are going to focus on this, and hopefully we are going to deliver it, or at least a good portion of it, in 1.15.
B: If there's a static pod, even if it has system-node-critical priority, it still has a chance to be affected by preemption. Going through the code, I found that when they load the static pod from a file, normally in the API machinery they actually maintain a mirror pod, and some of the functions, when they get all the pods to select from to choose a victim to evict, actually pick up a snapshot of the static pod.
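The issue described above is that static pods, which are represented in the API server by kubelet-created mirror pods, end up in the candidate-victim list. A sketch of the kind of guard that would exclude them is below; the `kubernetes.io/config.mirror` annotation key is the real one the kubelet sets, but the surrounding victim-selection loop and its names are illustrative.

```go
package main

import "fmt"

// mirrorPodAnnotation is the annotation the kubelet sets on the mirror
// pod it creates for each static pod.
const mirrorPodAnnotation = "kubernetes.io/config.mirror"

type pod struct {
	Name        string
	Annotations map[string]string
}

// isMirrorPod reports whether a pod is a mirror of a static pod.
func isMirrorPod(p pod) bool {
	_, ok := p.Annotations[mirrorPodAnnotation]
	return ok
}

// selectVictims is an illustrative stand-in for preemption's victim
// selection: kubelet-managed static pods cannot actually be evicted by
// deleting their mirror pod, so they should never be picked as victims.
func selectVictims(candidates []pod) []pod {
	var victims []pod
	for _, p := range candidates {
		if isMirrorPod(p) {
			continue // skip static (mirror) pods entirely
		}
		victims = append(victims, p)
	}
	return victims
}

func main() {
	pods := []pod{
		{Name: "web-1"},
		{Name: "kube-apiserver-node1", Annotations: map[string]string{mirrorPodAnnotation: "abc"}},
	}
	for _, v := range selectVictims(pods) {
		fmt.Println(v.Name) // only web-1 is printed
	}
}
```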
A: Thank you so much, yeah, that's very important. Actually, these are some of our most critical daemons, so it's important that we protect them and make sure that they are never affected or preempted, yeah. Thank you. Please send the link to that issue to me, if possible.

B: Sure, I can put it on the...