From YouTube: Kubernetes SIG-Scheduling Weekly Meeting for 20200521
A: So, first of all, there was a KEP freeze for the 1.19 KEPs, so several KEPs got updated either to reflect their real status or to target beta or GA, and we are also adopting the latest PRR.
In our KEP template, PRR is short for production readiness review. It asks you to answer some questions on scalability, metrics, rollback, and concerns with running the feature in a production environment.
So we are adopting that. The first item is to promote the scheduler ComponentConfig API object to v1beta1. There are some items still in flight, for example removing the bind timeout seconds setting, as well as a few others, so we are going to wait for those PRs to land against the v1alpha2 API, and then we will start the v1beta1 PR.
A: Yes, yes. We maybe need to prioritize this a little bit, because we have less than, maybe just, one month before code freeze, I guess, and API reviews sometimes take time. Also, the PostFilter extension point definition PR used to depend on this, so it needs to be delayed a bit; it will come in as alpha, and it's just there as an additional extension point.
A: So that was promoting ComponentConfig to v1beta1. Second is the PostFilter extension point; the tracking number is 91038. There are two parts here. One is what I call pre-refactoring: two of those PRs have been merged, and I think the third has merged as well — this one also got merged — so there's only one left, which is the last one. That part wants to combine all the necessary stuff into one interface called PreemptHandle, and that handle wraps the necessary methods to run the Filter and the PreFilter plugins.
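To illustrate the shape of that idea, here is a minimal Go sketch of an interface in the spirit of PreemptHandle; the type and method names are assumptions for illustration, not necessarily the exact API that was merged:

```go
// Package framework: a sketch of an interface that bundles what a
// preemption-style PostFilter plugin needs. All names here are
// illustrative assumptions, not the merged upstream API.
package framework

import "context"

type Pod struct{ Name string }       // stand-in for *v1.Pod
type NodeInfo struct{ Node string }  // stand-in for the scheduler's NodeInfo
type Status struct{ Code int }       // stand-in for the framework Status

// PreemptHandle wraps the methods a PostFilter plugin needs in order to
// re-run the filtering logic, e.g. while simulating victim removal.
type PreemptHandle interface {
	// RunPreFilterPlugins re-runs all PreFilter plugins for the pod.
	RunPreFilterPlugins(ctx context.Context, pod *Pod) *Status
	// RunFilterPlugins re-runs all Filter plugins for the pod on one node.
	RunFilterPlugins(ctx context.Context, pod *Pod, node *NodeInfo) *Status
}
```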
A: A PostFilter plugin is not limited to preemption; it can do whatever it wants. That is the current status of the PostFilter extension point. Next, Abdullah did some tremendous work on optimizing the performance of pod affinity and pod anti-affinity. After that — do you want to mention some of it, or list all the items you did?
C: It's like pre-processing affinity selectors and whatnot. This is useful because when we do affinity calculations, especially for existing pods, we don't want to do it again and again, and processing these selectors actually shows up in our performance profiles as something expensive. So what we did was basically create this pod info wrapper, and every time we add a pod to the cache we compute those selectors.
C: We create the selectors and basically store them with the pod, so when an incoming pod comes in to be processed, those selectors are already parsed and we just do the matching. That is one optimization we did.
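A minimal Go sketch of that caching pattern, with hypothetical names (the real scheduler type differs in detail): parse the affinity selectors once when a pod is added to the cache and reuse them for every incoming pod.

```go
// Package cache: sketch of precomputing affinity selectors at cache-add
// time so matching an incoming pod is just Selector.Matches, with no
// repeated parsing. Names are illustrative assumptions.
package cache

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// PodInfo wraps a pod together with its pre-processed affinity selectors.
type PodInfo struct {
	PodName           string            // stand-in for *v1.Pod
	AffinitySelectors []labels.Selector // parsed once, at cache-add time
}

// NewPodInfo parses each affinity term's label selector exactly once.
func NewPodInfo(name string, terms []*metav1.LabelSelector) (*PodInfo, error) {
	info := &PodInfo{PodName: name}
	for _, t := range terms {
		sel, err := metav1.LabelSelectorAsSelector(t)
		if err != nil {
			return nil, err
		}
		info.AffinitySelectors = append(info.AffinitySelectors, sel)
	}
	return info, nil
}
```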
The other one is completely eliminating all the mutexes that we had in pod affinity calculations. Both of these optimizations together gave us about a 2x performance improvement. At the same time, there was, I think, a minor issue in inter-pod affinity — there are some details there — minor because typically, in the cloud or even on-prem, we properly add topology labels to nodes and don't play with that too much, but I think it was something worth fixing.
So at the end, I think in 1.19 in general, we got pod affinity to where I don't believe it is as problematic as it was before.
C: I do believe that people can use it at scale. It still has an overhead compared to a workload that doesn't use it, but I don't think it's a major bottleneck right now. I think that right now, even at scale, we will most likely be able to hit 100 pods per second, and our bottleneck would be the API server, not the calculations for affinity, if it were being used.
A: Thanks, Abdullah. I recall there was one tip we leveraged in the performance optimization, which is to pre-allocate the whole slice and use an index into it, making only the index atomic. That reduces the cost of locking: each time, we use an atomic operation to claim a slot and inject the real object into the empty slice, so we don't need to lock the whole slice. I think that trick was very useful; we can use it in future optimizations.
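A small self-contained Go sketch of that trick: pre-allocate the result slice and claim slots with an atomic index, so parallel workers never need a mutex around the whole slice. Here passesFilters is a hypothetical stand-in for running the Filter plugins.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// passesFilters is a hypothetical stand-in for running the Filter plugins.
func passesFilters(node string) bool { return node != "node-c" }

func main() {
	nodes := []string{"node-a", "node-b", "node-c", "node-d"}

	// Pre-allocate to full capacity; only the write index is shared.
	feasible := make([]string, len(nodes))
	var idx int32 // next free slot, advanced atomically

	var wg sync.WaitGroup
	for _, n := range nodes {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			if passesFilters(n) {
				// Reserve a slot without locking the whole slice.
				slot := atomic.AddInt32(&idx, 1) - 1
				feasible[slot] = n
			}
		}(n)
	}
	wg.Wait()
	fmt.Println(feasible[:idx]) // trim to the slots actually used
}
```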
A: Yeah, and this also brings up a discussion I had maybe one or one and a half years ago, which is the real semantics of pod affinity when there is more than one affinity term. Right now these semantics are not clearly documented, but in the current implementation we enforce that multiple pod affinity terms all have to be satisfied by a single pod. When we discussed it, we sort of thought that maybe it's not a good idea to limit pod affinity by requiring all of the affinity terms to be satisfied by one pod.
A: One proposal is that if multiple pods can jointly satisfy all of the pod affinity terms, we could consider the affinity satisfied. I think we opened an issue on that, so maybe in the future we can reopen it and start working on the implementation. That is a side discussion on pod affinity.
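To make the two semantics concrete, here is a small Go sketch, ignoring topology keys and using plain label maps (all names are illustrative): under the current rule, every term must match one single existing pod, while the relaxed rule lets different pods satisfy different terms.

```go
package main

import "fmt"

type pod map[string]string // a pod's labels

func termMatches(p pod, key, val string) bool { return p[key] == val }

// currentSemantics: some single pod satisfies every affinity term.
func currentSemantics(pods []pod, terms [][2]string) bool {
	for _, p := range pods {
		all := true
		for _, t := range terms {
			if !termMatches(p, t[0], t[1]) {
				all = false
				break
			}
		}
		if all {
			return true
		}
	}
	return false
}

// relaxedSemantics: every term is satisfied by at least one pod,
// not necessarily the same one (terms matched jointly across pods).
func relaxedSemantics(pods []pod, terms [][2]string) bool {
	for _, t := range terms {
		found := false
		for _, p := range pods {
			if termMatches(p, t[0], t[1]) {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	pods := []pod{{"app": "web"}, {"app": "cache"}}
	terms := [][2]string{{"app", "web"}, {"app", "cache"}}
	fmt.Println(currentSemantics(pods, terms)) // false: no single pod has both
	fmt.Println(relaxedSemantics(pods, terms)) // true: jointly satisfied
}
```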
Next is the pod binding logic: carving out the binding logic so that it follows the scheduling framework standards, instead of exposing a very wide interface on the framework handle. I think that is much cleaner in the binding code path. And the last item: I noticed there is a requirement on the descheduler side to implement the pod topology spread strategies, because the descheduler doesn't run any of the Filter plugins or Score plugins at runtime; those only run at the beginning of the scheduling cycle, the moment the pod gets scheduled.
A: At runtime we had the descheduler, which implements a lot of strategies for you to rebalance all the pods across the cluster. There is a comment on that discussion about implementing pod topology spread, and I think someone mentioned being interested in taking that on in their own system, so they may want to take that one.
I know someone raised the original issue, but if you have interest, I can ping them to see if they are still working on it; if not, you can go start working on it.
E: Okay, yeah, I was going to start working on that in the near-ish future. It looks like there's a work-in-progress pull request someone submitted a while ago, so I'm going to attempt to revive it. It looks like there will probably be some changes required.
C: The other thing, on general plans, is to look into the scoring functions — as Aldo was asking — to try to improve placement when, for example, anti-affinity is used as a scoring function rather than as a predicate.
D: The descheduler — I think that kind of led into the work we were talking about before, about breaking the framework dependencies so that they could be more easily imported by other projects. So, I mean, we could import the plugins as they are, and I think even the registry now, but we don't want to do that if it brings more k8s.io/kubernetes dependencies.
A: Yeah, another thing is that KubeCon China is going to be virtual. There's an email asking whether we are willing to present a virtual SIG-Scheduling deep dive, but that would have to be in China's time zone. So if you're interested, just let me know afterwards and we'll put your name on it.
D: Okay, so I have the descheduler issue here. I added it to the agenda; it links to some of the other conversations that were going on about tying into the default scheduler strategies, and it talks about moving some of the scheduler code to staging to make it easier to import.
D: And then there was also the suggestion that Robbie had about having the descheduler modify pods back — I think that was the title of the issue — which kind of tied into that too, about informing the descheduler's decisions to make sure that we're not descheduling a pod in a case where it's just going to end up back on the same node.
D: Yeah, along with that, I would really like it if we could get more active on approvals for the PRs related to removing k8s.io/kubernetes dependencies where we don't need them, because a lot of these PRs have been open for a while and they're starting to need rebases. Just out of respect for those authors, I don't want to keep them hanging for a long time.