From YouTube: Kubernetes SIG Scheduling Meeting - 2019-03-07
A
Alright guys, let's just start. As you know, this meeting is recorded and will be uploaded to the public internet. We've got to start the meeting. I have a few items that I wanted to talk about, and I saw that Sudha should also join. I would like to actually hear some updates about the resource bin packing issue that Sudha has been working on.
B
Sure. The last time I checked, I had it working in my PR, but there were some changes made to the scheduler after that, so I just had to resolve some code conflicts. I think someone got back to me to create a KEP, but before that I had created a Google Drive document which I shared on the issue, which has the discussions about the design details with Klaus, like what he was thinking. I know, but I think—
A
Sounds good. So the thing is that the development workflow has recently changed slightly. Basically, the idea is that we need to have all these KEPs ready and merged before the feature freeze, which usually happens about a month or so earlier than the code freeze. The feature freeze for 1.14 has already passed.
A
It was around the end of January or so, I believe, and the code freeze is basically now, so I would say it's too late for this feature to go into 1.14, but we should definitely work on it for 1.15. I don't know if Klaus can provide reviews; I think he can, but in the case that he cannot provide reviews, I will be happy to help, and we also have a few other folks: Wei is here, we have Jonathan here, and Leon and others are all here.
A
Any of us can help review your PR, but please make sure that you follow the workflow. This is something that you cannot do anything about; it's sort of the workflow for all Kubernetes repositories. So we need to have a KEP and get it merged before the feature freeze.

Yeah, sure, I'll do that.
A
Yeah, so as you know, code freeze is here: we can no longer have any new features, but if there are bugs to be fixed we can still get those in during the code freeze. Please make sure that all your PRs are hopefully already merged. I know that there are some that you are still working on, but I believe most of the other stuff that you wanted to have in 1.14 is already there.
A
He has a KEP for a new feature for spreading pods evenly across a topology domain. I actually wanted to discuss that for a few minutes, because I felt like there was a little bit of disagreement about how to approach that problem. I didn't read your further comments after I left some comments on your PR, but one of the issues is that in your PR for the KEP, you say that we should add this even-distribution feature to the existing affinity and anti-affinity, including node affinity and anti-affinity.
C
That was actually my thinking at the very beginning as well, but later I realized that it is actually almost identical to the current pod affinity spec, which, as you mentioned, has a label selector, namespaces, and a topology key, right? That means we would have to introduce almost the same API spec on top of the pod spec, rather than reuse the current pod affinity spec.
C
So from the users' perspective, they might question why almost the same API spec was introduced. And also, internally, we would have to maintain two identical data structures, which means when a pod gets updated, we have to additionally update the extra data structure to keep it up to date, so that we can efficiently make predicate decisions about whether pods can be evenly spread or not.
C
Why do we need another data structure? Because right now in predicates, for example, we precompute things in the metadata, right? When we produce that cache, we use a sort of topology-pair map to store some kind of information, as an efficient way to tell us whether this pod can match. Actually, for even spreading, we should also check whether it's a match or no match, and it's almost the same logic as what pod affinity does: it's a kind of count of pod matches per topology pair, right?
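The topology-pair map C describes can be sketched roughly like this. This is a hypothetical, simplified illustration of the idea (names like `build_topology_pair_counts` are not the actual scheduler identifiers):

```python
# Simplified, hypothetical sketch of the kind of topology-pair map the
# scheduler's predicate metadata keeps: for each (topologyKey, value)
# pair, count how many existing pods match the incoming pod's selector.
from collections import defaultdict

def build_topology_pair_counts(existing_pods, selector, topology_key, nodes):
    """existing_pods: list of {"labels": {...}, "node": name};
    nodes: {name: {label_key: label_value}} node labels."""
    counts = defaultdict(int)
    for pod in existing_pods:
        if not selector(pod["labels"]):
            continue  # only pods matching the selector are counted
        node_labels = nodes[pod["node"]]
        pair = (topology_key, node_labels[topology_key])
        counts[pair] += 1
    return counts

# With the counts precomputed once in metadata, a predicate can answer
# "would placing the pod here keep the spread even?" via a cheap lookup.
nodes = {"n1": {"zone": "a"}, "n2": {"zone": "b"}}
pods = [{"labels": {"app": "web"}, "node": "n1"},
        {"labels": {"app": "web"}, "node": "n1"},
        {"labels": {"app": "db"}, "node": "n2"}]
web = lambda labels: labels.get("app") == "web"
counts = build_topology_pair_counts(pods, web, "zone", nodes)
```

The point of the structure is exactly what C says: pod updates must also update this map, which is the maintenance cost of keeping a second copy of it for a near-identical API.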
A
Right, some amount of duplication in the logic could happen, but the implementation is sort of separate from the API, I would say. If we need an extra data structure, we may still need that extra structure regardless of how the API is designed, isn't that right? I mean, even if this feature goes under the existing affinity, we would still have to have another structure, for example for counting the number of pods.
A
Also,
our
metadata
for
all
these
predicates
and
priority
functions
also
like
global
by
it.
By
that
I
mean
there
is
one
metadata
which
we
pass
along
to
all
the
products.
It's
not
like.
Every
predicate
builds
its
own
metadata
or
every
priority
function
builds
its
own
metadata.
So
in
that
sense,
addition
of
another
predicate
or
priority
function
does
not
require
addition
of
a
new
set
of
metadata.
So
that's
my
thinking,
that's
one
thing.
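A's point, that one metadata object is built once per scheduling cycle and handed to every predicate rather than each predicate building its own, can be sketched like this (hypothetical names, not the scheduler's real API):

```python
# Hypothetical sketch: one shared metadata object computed once, then
# passed to every predicate. Adding a predicate adds a field to the
# shared metadata rather than a whole new metadata pipeline.
def compute_metadata(pod, existing_pods):
    return {
        "pod_count": len(existing_pods),            # used by one predicate
        "matching_pods": [p for p in existing_pods  # used by another
                          if p.get("app") == pod.get("app")],
    }

def predicate_capacity(pod, node, meta):
    return meta["pod_count"] < node["capacity"]

def predicate_spread(pod, node, meta):
    return all(p["node"] != node["name"] for p in meta["matching_pods"])

pod = {"app": "web"}
existing = [{"app": "web", "node": "n1"}, {"app": "db", "node": "n2"}]
meta = compute_metadata(pod, existing)   # built once...
node = {"name": "n2", "capacity": 10}
ok = all(pred(pod, node, meta)           # ...shared by every predicate
         for pred in (predicate_capacity, predicate_spread))
```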
A
The other thing is that our plan, particularly for pod anti-affinity, is that we want to make it limited to node only, right? So in the near future it's going to be different: for pod anti-affinity, our plan is basically to remove the topology key; by default it's going to be node only. In that case, it may not fit well with this idea of having distribution in an arbitrary topology domain under anti-affinity.
C
Yeah, and that is actually the second concern for me. My original plan was to provide this feature on all three functions, including pod anti-affinity, so that we can resolve a scenario like a rolling upgrade of a deployment which has pod anti-affinity: because it's a rolling update, an additional replacement pod should be launched at that time, but maybe the system doesn't have additional capacity for landing the additional replacement, so we are afraid that it will be a kind of deadlock in that kind of situation.
A
Even distribution of pods in a topology domain is some sort of anti-affinity anyway. It's not exactly an affinity, in the sense that if there is already one pod in a topology domain, you would prefer not to put another pod there; but in a way it's an anti-affinity which basically achieves even distribution of pods, in fact.
C
This is my first concern; if I can get more API machinery experts to comment on that, that would be good. The second concern is that if we go that way and put it in a standalone feature, we can no longer support, I would say, defining spreading together with pod affinity for the same pods. So we lose that kind of capability on pod affinity if the users want it.
A
Yes, good point. Another thing to keep in mind is that putting something like even distribution under node affinity, or under inter-pod affinity (not anti-affinity, just inter-pod affinity), may be confusing, right? You have node affinity to certain nodes, and then you also provide even distribution, which is basically something like anti-affinity, not really affinity; so combining these two together becomes a little bit confusing. So anyway, you can actually talk about the API design with the API folks and get their opinion about this.

Sure.
A
So we would have two similar things, one for node affinity and this one for inter-pod affinity, right? And then I commented on your document, basically saying that this could be a little bit confusing for anti-affinity, because anti-affinity is match-any, and if it is match-any, then when you provide both operators to specify a range, you don't get a range: you actually end up matching against almost all the pods, or at least almost all the pods which have that label. For example, let me whiteboard here.
A
So you go back to anti-affinity, and in the first term, let's say you say foo less than, I don't know, 20, and in the second term foo larger than 10, all right? So you think you are creating a rule that matches pods whose foo label is between 10 and 20. But this is actually not what's going to happen: it's going to match pretty much all the pods which have a foo label, because in anti-affinity this is match-any.
A
So if a pod matches the first rule, it's considered a match. So any pod whose foo label is less than 20, including foo equals 5, or 0, or minus 10, is going to match this, and therefore we're going to have anti-affinity to it; and the same thing goes for greater than 10: anything larger than 10, including foo equals 200, is going to match against that, and your pod is going to end up having anti-affinity to those pods which have a label equal to 100, for example. All right.
A
So
this
is
this
is
a
problem
and
this
can
be
even
further
confusing
for
users,
because
this
does
not
apply
to
affinity.
It
is
affinity.
Is
match
all
and
10
affinity
is
match
any
alright.
So
if
you
have
the
same
rules
in
an
affinity,
it's
gonna
work,
fine
affinity,
then
you're
gonna
get
a
range.
So
for
that
reason,
I
was
thinking
that
maybe,
instead
of
introducing
greater
than
and
less
than
we
can
actually
introduce
a
range
operator
that
works
similarly
for
both
cases.
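The whiteboard argument can be sketched as follows: anti-affinity ORs its terms (match-any) while affinity ANDs them (match-all), so `Lt 20` and `Gt 10` placed in two separate terms behave very differently under each (a simplified illustration, not the scheduler's actual selector code):

```python
# Simplified sketch of the match-any vs. match-all semantics discussed
# above. Each "term" is a predicate on a pod's labels; anti-affinity
# treats a pod as matched if ANY term matches, affinity requires ALL.
lt_20 = lambda labels: labels["foo"] < 20   # first term:  foo Lt 20
gt_10 = lambda labels: labels["foo"] > 10   # second term: foo Gt 10
terms = [lt_20, gt_10]

def anti_affinity_matches(labels):          # match-any: OR over terms
    return any(t(labels) for t in terms)

def affinity_matches(labels):               # match-all: AND over terms
    return all(t(labels) for t in terms)

# foo=5 is outside the intended (10, 20) range, yet anti-affinity still
# matches it via the Lt-20 term; only affinity yields the true range.
```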
C
Well, actually, for this, if a user wants to go with greater-than and less-than and put foo into a range, they should put the two conditions inside the same term, in the same matchExpressions, rather than into two different terms. Putting them into different terms would be confusing and would cause unexpected behavior, but if they put them into the same term, then you can have multiple operators.
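C's suggestion, putting both expressions in the same term's matchExpressions so that they are ANDed within the term, can be sketched like this (again a simplified illustration, not the actual selector code):

```python
# Simplified sketch: within a single term, matchExpressions are ANDed,
# so putting Gt 10 and Lt 20 in the SAME term yields a true range even
# though anti-affinity is match-any across terms.
single_term = lambda labels: labels["foo"] > 10 and labels["foo"] < 20
terms = [single_term]                       # one term, two expressions

def anti_affinity_matches(labels):          # match-any over terms
    return any(t(labels) for t in terms)
```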
C
So basically, we can put it like this: for any term, no matter what the logic inside it is, it will produce a result, right? We actually calculate the results of the individual terms in an "all" way or an "any" way, so we don't care too much about what's inside the term. So if the semantics of the term are reasonable, we can proceed with that. In the example you raised, the matching result across the different terms actually doesn't make much sense, but it's still valid.
A
Absolutely, we should definitely document this very well, otherwise many users will get confused. Yeah, it's fine, I mean, as long as we can provide them the mechanism. It's not ideal, because we are introducing some API that is easy to get wrong, and that's generally bad practice, but I mean we should definitely document this very well and hopefully users follow it. This particular problem does not apply to affinity; affinity will be fine. Basically, in both cases affinity will be the same, but for anti-affinity this is going to cause some confusion for users.
A
Sorry about the delay. So anyhow, basically the main question here about this PR is: if we build this sort of structure that you have created as a part of metadata creation, is it going to slow us down much? If that's the case, we can think about alternatives; if it's not too bad, then maybe we should keep it as a part of metadata creation, or metadata production, instead of just keeping it all the time in the scheduler. Yeah.