From YouTube: Kubernetes SIG Scheduling Meeting - 2019-4-04
A: This meeting is recorded and will be uploaded to the public Internet, so we can start. You were asking about this permit plug-in that folks from Intel have built. I believe that what they have done should probably be portable, but we will be changing some of the interfaces. As you know, there is a PR out there for changing the interface, so I assume what they have done does not follow exactly the same interface that we have.
A: So maybe they need to make some changes, but that's probably not a whole lot of work for them. That's one thing. The other thing is that, given that we have this scheduling framework, we don't necessarily have to have only one implementation of a plug-in. So, for example, we can have three different plugins that implement gang scheduling; we don't have to enable all of them, and only one of them is enabled in a cluster, so users can choose depending on the features and maybe some other criteria that they have.
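As a rough sketch of that idea, a scheduler configuration could enable exactly one of several available gang-scheduling plugins at the permit extension point. This is only illustrative: the component-config schema was still in flux at the time, and the plugin name here is made up.

```yaml
# Hypothetical sketch: enable one of several possible gang-scheduling
# plugins at the permit extension point. The plugin name is illustrative;
# the component config schema was still in flux at the time.
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
plugins:
  permit:
    enabled:
      - name: ExampleGangScheduling   # only this implementation is active
        # other gang-scheduling plugins can ship alongside and stay disabled
```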
A: Right, OK. I have a few items that I would like to talk about. One is that, thanks to Robbie, who proposed it, we are going to have a session at the contributor summit at KubeCon. As you know, KubeCon is in about one and a half months from now, so we are going to have a contributor summit session for SIG Scheduling. Please attend if you're interested. We also encourage new contributors to attend and become familiar with the SIG; hopefully we are going to have some.
A: OK, so the non-preempting priority KEP is merged — thanks to Ellery for her help here and on the PR for finishing it. Basically, the non-preempting priority work that was started earlier by other contributors is now being worked on by Alex Wang. It is almost ready; it still needs some changes to fix some of the build issues, but hopefully those will happen soon.
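For context, the non-preempting priority work adds a preemption policy to PriorityClass, so a pod can be favored in the scheduling queue without evicting running pods. A minimal sketch of the shape proposed in the KEP, hedged since the field was still under review at the time:

```yaml
# Sketch from the non-preempting priority KEP: pods in this class are
# ordered ahead of lower-priority pods in the queue, but they never
# trigger preemption of already-running pods.
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 100000
preemptionPolicy: Never   # proposed field; the default preempts lower priority
description: "High priority without preempting other pods."
```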
A: All right. We're also working on physical host spreading, which is a new feature in the Kubernetes scheduler. You probably have some context if you have attended previous meetings: we are trying to read the physical host information — for example from VMware vSphere, or maybe from a cloud provider API if they provide such information — and then spread pods among nodes that are located on different physical hosts. This is useful for improving the reliability of workloads in Kubernetes. Jonathan, also known as mr. Cadiz, is going to work on this, and another contributor has also volunteered to work on some part of it. Hopefully they're going to have a KEP soon for this feature, and we will go from there to implementation. Any questions about physical host spreading?
A: Other than that, we also have another, somewhat similar feature: even pod spreading. This is different. Basically, even pod spreading is a feature that allows us to spread pods evenly among a set of failure domains. These failure domains could be nodes, zones, regions, or any other labels. What we're trying to do here, and what is different from anti-affinity, is that we support spreading more than one pod per failure domain. With required pod anti-affinity, only one pod can exist in a single failure domain — for example, one pod per zone — but with this feature you can spread pods evenly: for example, if you have, let's say, five zones and ten pods, each zone will get two pods. And then, after we add this, we are going to simplify the inter-pod anti-affinity feature. Basically, we are going to deprecate the topology key from anti-affinity, and we will only support anti-affinity to the same node.
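As a sketch, using the field names from the even pod spreading proposal (the API was still under review at the time, so the exact shape is illustrative), the ten-pods-across-five-zones example above could be expressed like this:

```yaml
# Illustrative even pod spreading constraint: keep the count of app=web
# pods in any two zones within one of each other, so ten pods across
# five zones settle at two pods per zone.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                   # allowed imbalance between domains
      topologyKey: failure-domain.beta.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule             # treat as a hard requirement
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx
```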
A: So our current inter-pod anti-affinity will not work on arbitrary topologies anymore. This is, of course, going to go through a regular deprecation process, so it is not going to be something that immediately gets dropped. If you're using it, you're going to have some time to change it to even pod spreading.
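For reference, the per-node form that would remain after such a deprecation is what you express today with the hostname topology key, roughly:

```yaml
# Required inter-pod anti-affinity pinned to the node level: no two
# pods labeled app=web may land on the same node. Under the proposed
# simplification, this per-node form is the only one that would remain.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            app: web
```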
A: The scheduling framework change is pretty much ready for review. I know that a lot of folks have already reviewed it and made a lot of great comments; I addressed those comments and made more changes, and I actually pushed some changes today, so feel free to take another look. I know that some folks are waiting for the framework, and I have received a lot of requests — many folks are willing to contribute to the framework and build various features on top of it.
B: [inaudible question about how physical host spreading relates to the existing spreading features]
A: Physical host spreading — so, you know, these are actually kind of orthogonal. Basically, what we're going to do is this: today the scheduler already has a priority function that, by default, spreads pods among failure domains — well, actually among nodes; this is not arbitrary. By default, the Kubernetes scheduler tries to spread the pods of a single service or collection among different nodes. This particular feature, physical host spreading, is going to replace that default behavior. Essentially, the default behavior will become: first try to spread pods among actual physical hosts; if that's not an option — for example, because many nodes are on the same host and you cannot really spread among them in a meaningful way — then try to spread among nodes.
A: As I said, this is one part of it, but another important part is that, as part of this physical host spreading work, we are going to have a physical host label on nodes. We're going to create a standard label — and by standard I mean it is going to be part of our API, similar to the other labels that we have for nodes and zones.
A: So we are going to have a label for physical hosts. Once we have this label, users can of course then go and use, for example, even pod spreading — the feature that we are working on; this is a separate feature — and as the topology key they can put this physical host label. By doing that, they basically enforce spreading of pods among physical hosts, so this is not going to be just a default priority feature.
A: They can now also use it as part of a predicate, for example, if they want — the same thing they can do with anti-affinity or affinity. Inter-pod anti-affinity we're going to change in the future, but for affinity at least they can now provide the physical host label. So, as you can see, one part of this can be used with other features, and the other part is going to replace some of the existing scheduler defaults. Does that make sense?
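To make that concrete, here is a sketch of how such a label could be consumed once it exists. The label key `node.kubernetes.io/physical-host` is purely hypothetical, since the standard label had not been decided at the time:

```yaml
# Hypothetical: a node labeled with the physical host it runs on.
apiVersion: v1
kind: Node
metadata:
  name: node-1
  labels:
    node.kubernetes.io/physical-host: host-a   # hypothetical label key
---
# A pod opting in to spreading over that label via even pod spreading,
# turning physical-host spreading into an explicit, user-requested rule
# rather than just a default priority.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: node.kubernetes.io/physical-host   # hypothetical
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx
```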
D: For kube-batch, I think after we supported gang scheduling we also got some other requests, such as leveraging the predicates from the default scheduler. Yes, I also updated some comments; there was a suggestion to make the predicates and the priorities a common library, so others can leverage those algorithms to build their own scheduler, and in kube-batch we have also done some work toward that.
D: Yes, because we found several issues there. Recently I have been working on the framework part; I'd like to make sure that the scheduling framework in the default scheduler can also support batch capability, so at least we can share the framework part, and for some batch workloads — or maybe for some other workloads — we can exchange the framework plugins for the different scenarios.
D: So I think this is the follow-up of the offline discussion about the ongoing work [unclear]. Another thing on my side is that we are trying to bring kube-batch a bit closer to upstream; I think we will open issues for it there in upstream. Yes, there are several items here, because, you know, [unclear].
D: OK, we have prepared something there; let me make sure — maybe I will open it up and add the link later to the document, our meeting minutes. So, because there is a work item here and we already have some ideas, we have currently started a separate project to gather the ideas of what we are going to do for this part.
D: For the queue part — this is a sub-item of this issue; kube-batch also supports queues. Regarding the queue, I didn't propose a KEP in the community yet, because a single queue may also have some interaction with multi-tenancy. We already have the multi-tenancy working group over there, and, you know, it touches the semantics of sharing resources.
D: They will not do that, but, I mean, we need to align with them. I don't want to build another feature here for multi-tenancy while some people build it another way, because those would maybe be competing. Yes, we need to get the multi-tenancy working group involved in this part.
A: OK, anything else?

C: Yeah, some updates on even pod spreading. I think almost all the comments have been addressed except the API design. In the latest discussion I just put up the spec of how you describe a group of nodes, reusing the spec which is in our current affinity and moving that into the spreading API. Yeah, I saw Bobby's comment that that might be somewhat confusing for people, right? You would use the same spec in even pod spreading and also in affinity.
C: Based on the user experience, I also proposed another solution, which is to kind of flatten the inside of the affinity spec — like flattening the terms — and at the same level have the even spreading spec, so that it gives up some flexibility but is still expressive. So that is the latest on that.