From YouTube: Kubernetes SIG Scheduling - biweekly - Feb. 1, 2018
B: So there is a temporary landing location. It's called kubernetes-sigs. It's not an org per SIG: there's not a single org, or an org per SIG, so it's going to be a flat space. There are some minor constraints around having a well-defined OWNERS file, and that OWNERS file points back to the SIG. That's really the only constraint. You can prefix the name of the SIG onto the repo name (you don't need to, but you can), and I think that would probably help discoverability.
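As a sketch of that constraint: a Kubernetes OWNERS file is a small YAML file at the repo root, and one for a repo under kubernetes-sigs might look roughly like this (the handles are placeholders; the label is what ties the repo back to the sponsoring SIG):

```yaml
# OWNERS: hypothetical example for a repo sponsored by SIG Scheduling
approvers:
  - alice            # placeholder GitHub handles
reviewers:
  - bob
labels:
  - sig/scheduling   # points the repo back to the sponsoring SIG
```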
B: Otherwise, if you just have a flat namespace, you're going to run into the same problem that incubator has, where it's just dozens of repos and it's very difficult to make sense of them. But if we could define our own policy for how we name repositories, say "scheduling-something", we could apply that to any subcomponent in a separate repo.
B: All of the other SIG projects that are currently in incubator are going to need to find a new home eventually. It doesn't need to happen now, or even in six months, but in a year's time frame the plan is that everything in incubator should be moved out. Probably by that time. And this is the temporary landing home; we haven't really determined whether or not it'll be the final landing home.
B: Repos can be shifted from one org to the other, so these will have to be shifted into that org, kubernetes-sigs. And if we wanted to, we could consolidate repos, or we could split them apart; we could do whatever we wanted to do. We, as a SIG, have the autonomy to decide the granularity and which repositories we will sponsor underneath there, and that's totally up to us, which is nice.
C: I would not say that it's ready for production, because we definitely haven't tested it at very large scale. But one thing we have done (you may know we have OpenShift Online) is run it there in dry-run mode. In dry-run mode it does not evict any pods; it just lists which pods would be evicted by the descheduler. So that's what we have done. We have also done some testing on small clusters, like a few-node cluster.
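For context, a run like that is driven by a policy file plus a dry-run flag. A hypothetical policy, assuming the descheduler's v1alpha1 policy format of that era (the strategy names and thresholds here are illustrative):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:        # nodes below these are considered underutilized
          "cpu": 20
          "memory": 20
          "pods": 20
```

Passing something like `--dry-run` alongside the policy file would then report evictions without performing them; treat the exact flag names as an assumption rather than a reference.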
C: We definitely haven't tested on large clusters, so I cannot say very confidently that it will run well on a large cluster; we definitely need more experience with it, and we need more feedback. So it's over to all of you.
B: One of the things I was thinking about: we've had it as a separate component forever, but I've always personally thought that it could be a separate routine in the scheduler itself, feature-gated and totally isolated, a buy-in type of thing, because you're populating the same caches and doing a lot of the same algorithmic stuff.
C: I definitely agree; I would say I don't disagree with that. But I think the biggest advantage I see to it being outside is, first of all, that we can manage it and it can have its own lifecycle. We can make changes in the code as much as we want to, and we have done so many releases. Once it's inside the scheduler, maybe we won't be able to do that as freely.
C: And another thing: because it can be run as a pod, I think it also makes a very nice utility that you can run from outside. And on the way you were talking about caches: I think cluster-capacity does more caching, like the scheduler does, but the descheduler really doesn't have that much. The descheduler does not have a cache of its own; it just gets information about nodes and pods in the beginning, and then it runs its policies and strategies on them.
A: Another benefit, both for this and maybe for cluster-capacity at some point, but especially for the descheduler: if people want to run both the descheduler and the main scheduler, they'll probably need more memory to keep two of these processes, because they would use the same information, something similar to the scheduler cache, some information about nodes, some information about the pods on those nodes.
B: Also, if you look at other algorithmic models, like the Firmament model, which is based on the Quincy model, they continually do rescheduling as part of the process. The original algorithm was at least written that way. It's a flow-based algorithm, which is much faster than what we do.
A: If you know anybody who wants to try the feature, that would be great, because we really want it to get some mileage, and to find unknown problems, before we move it to beta. Actually, we've already received a couple of comments and feedback from some of the users in the community who have tried it, and we have identified one issue.
A: It was not an issue in preemption per se; it was the fact that some of our critical add-ons didn't have priority, and when priority was enabled in a cluster, some of those critical add-ons were evicted or preempted by the scheduler. So we are working on adding those priorities to critical add-ons. But we'd like to get more feedback from bigger customers who may want to try this feature before we really move it to beta; it could be a blocker if we cannot find anyone.
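Giving an add-on priority boils down to a PriorityClass object plus a reference from the pod spec. A minimal sketch, assuming the alpha API group of that era (the class name and value are made up for illustration):

```yaml
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: critical-addon      # hypothetical name
value: 1000000              # higher value means higher priority
globalDefault: false
description: "Priority for cluster-critical add-on pods."
---
# Pod spec fragment referencing it:
spec:
  priorityClassName: critical-addon
```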
C: But I think in 1.10, once 1.10 is released, we would be able to use those features, because, as you said, there are two things we are looking for. First of all, for our add-ons: if they have higher priority, they can be scheduled before other pods. And the second thing we are looking for: in the kubelet we have an eviction manager, and the eviction manager currently uses the critical-pod annotation. So it would be possible.
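The critical-pod annotation mentioned there was the stopgap the kubelet keyed off before priorities existed; a pod metadata fragment would look roughly like this (illustrative, and the mechanism was expected to be superseded by PriorityClass):

```yaml
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
```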
B: My involvement with this has been limited, because Lukas has stepped out for military service; he did a lot on SIG Cluster Lifecycle, and my time has been inundated with SIG Cluster Lifecycle PR reviews. So my bandwidth to review sig-scheduling PRs has been greatly diminished over the last year or so. I would like to, but I think this cycle I will have limited capacity to do so. Okay.
B: A second way, if you want to build up (if you're not part of the main org but you want to become a member), is to do a query on the pull requests and see if the scheduling label is applied. If the sig/scheduling label is applied, feel free to comment. You don't necessarily need to be an expert in the space to review someone else's code.
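That query can be done straight from GitHub's search box; assuming the standard sig/scheduling label, it would look something like:

```
repo:kubernetes/kubernetes is:pr is:open label:sig/scheduling
```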
B: Or you can review their logic, start poking at it, and execute it that way. Those are the two best ways. We helped guide Klaus through this, and Klaus went from 0 to 60 because he'd already been in the scheduling space for some time, which was nice; he'd worked on other open-source projects before. That's the easiest way to get notifications and get up to speed if you want to become a maintainer or reviewer.
A: So there are some requirements, but basically the idea is that you start contributing by reviewing code and by submitting PRs, and as you build up this experience, review more code, merge more PRs, and add new PRs, you gradually go up the ladder. That's the basic idea: if you start contributing more, it's pretty natural to see yourself going up the ladder and becoming a reviewer and an approver and so on in the near future.
E: Yeah, I think you've probably already answered my second question. You're saying that this is being actively worked on, but there is no current fix as of now. So do you know what the issue is? I mean, I know the scheduler.
E: Basically, we are big on StatefulSets, and we are seeing the assume-pod failures very often, and we have to actively go in. In repeat cases, where the schedulers are in an HA configuration, we have to literally log in to all three of our hosts and restart the schedulers. So what I'm thinking of is basically writing something that will detect the event and literally restart the scheduler.
A: Can I give a little bit of background and a quick update on this issue, for others, if they don't know what you're talking about? You've seen that the scheduler cache sometimes gets stale. It happens in various scenarios. We've seen the cache reference a pod that doesn't exist. We have seen the scheduler try to schedule pods on a node that is in the cache but has actually been deleted from the cluster; so it looks like the scheduler somehow didn't receive the delete event for that node.
A: Those are some of the scenarios that we've seen. All of them indicate that the scheduler somehow misses some events from the API server, and there are, unfortunately, multiple moving pieces in this path. One is that maybe etcd didn't send these events out. Maybe the API server didn't send them. Maybe the scheduler received them and dropped them on the floor. So there are a bunch of moving pieces.
A: There are actually folks, on the API machinery side as they call it here, looking at some of these issues and trying to debug this problem. So far we are not confident about what the actual problem is: whether it's etcd or the API server that is not sending the events, or whether it's the scheduler that is not really processing those events properly. We don't know yet. We have had some workarounds in the scheduler recently.
A: One was something I added to handle deleted-pod events; that logic didn't exist, and I've added it recently. We've added another sort of workaround to the scheduler to take care of deleted-node events: if the scheduler missed those deleted-node events, we now have logic in the binding phase, so when the scheduler tries to bind a pod to a node and the node does not exist, the scheduler tries to get that node from the API server.
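The shape of that binding-phase workaround can be sketched in a few lines; everything below is a toy model with made-up names, not the real scheduler code:

```python
class FakeScheduler:
    """Toy model of a scheduler whose node cache can go stale."""

    def __init__(self, cached_nodes, api_nodes):
        self.node_cache = set(cached_nodes)  # scheduler's (possibly stale) view
        self.api_nodes = set(api_nodes)      # source of truth: the API server

    def bind(self, pod, node):
        if node not in self.node_cache:
            raise KeyError(f"{node} not in scheduler cache")
        # Binding phase: confirm the node still exists in the API server.
        if node not in self.api_nodes:
            # A delete event was missed: reconcile by evicting the stale
            # entry so the pod can be retried against a real node.
            self.node_cache.discard(node)
            return f"{node} is gone; evicted from cache, retrying {pod}"
        return f"bound {pod} to {node}"

sched = FakeScheduler(cached_nodes={"node-a", "node-b"},
                      api_nodes={"node-a"})   # node-b was deleted under us
print(sched.bind("pod-1", "node-a"))  # binds normally
print(sched.bind("pod-2", "node-b"))  # stale entry detected and dropped
```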
B: There's a problem, though: last time I checked, we had over 270 issues that were open, and my guess is that this has probably already been recorded somewhere in that issue set; either they already have the label applied, or you do a keyword search. So you'd have to suss through that sample space, because right now there are 27,000 issues in Kubernetes, you know, 4,000 open ones, and 250 or so of ours.
E: So I think people are reporting a lot of these as separate issues, but it sounds like Bobby has a lot of those thoughts that he just outlined. It sounds like it might be beneficial for Bobby to create one issue with all these findings, and then we close all the other ones, saying those are just versions of this. Basically, I want a place where I can track this.
E: I mean, if there is a better suggestion for tracking this, I'm open; I just want to make sure I have one. We are upgrading our production DCs to Kubernetes, and how do I get notified if this gets fixed? Those kinds of things are my concern. Sure, I can create it.
D: Comments? Yes, actually I had something. So let me start by introducing myself: I'm a maintainer of the cluster autoscaler, and basically we figured out that we depend heavily on the scheduler. I don't know how much you know about the cluster autoscaler, but it actually works by literally importing scheduler code and simulating how the scheduler would behave in different scenarios. So, first of all, coming out of that, we figured out we want to have someone attending the meetings, just in case there is some change that can affect us, so we're not blindsided by it.
D: I'm not sure yet if it's going to be me permanently or someone else, but that's the general thing we figured might make sense, so we can prepare for any changes in advance. And specifically from this meeting today, it seems that the descheduler is something that...
D: Let's say it can have an interaction with the cluster autoscaler, and it would probably be good to make sure those two things work together, because they are a natural fit: both, I assume, are aimed at improving cluster utilization and basically making sure that the cluster as a whole makes sense. So is there somewhere, like a design doc, or just some good starting points, that we can look at? And maybe we can have, I don't know, a meeting call.
A: Sure, absolutely. So I know that the cluster autoscaler heavily depends on the scheduler, and scheduler predicates in particular, to basically simulate the state of a cluster and figure out whether another pod could possibly fit on another node, and stuff like that. So we are really reliant on that, and it actually makes a lot of sense for the autoscaler and the scheduler to work closely together. As far as docs are concerned, we have a folder in the community repository for sig-scheduling design docs.
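The predicate-style "would this pod fit?" check that the autoscaler borrows from the scheduler can be illustrated roughly like this; all names and data structures are invented for illustration and are not the real predicates API:

```python
def pod_fits_resources(pod, node, scheduled_pods):
    """True if the node's remaining CPU/memory can hold the pod."""
    used_cpu = sum(p["cpu"] for p in scheduled_pods)
    used_mem = sum(p["mem"] for p in scheduled_pods)
    return (used_cpu + pod["cpu"] <= node["cpu"] and
            used_mem + pod["mem"] <= node["mem"])

def pod_matches_node_selector(pod, node):
    """True if every label in the pod's nodeSelector matches the node."""
    return all(node["labels"].get(k) == v
               for k, v in pod.get("nodeSelector", {}).items())

def would_fit(pod, node, scheduled_pods):
    # A pod "fits" only if every predicate passes.
    return (pod_matches_node_selector(pod, node) and
            pod_fits_resources(pod, node, scheduled_pods))

node = {"cpu": 4000, "mem": 8192, "labels": {"zone": "us-east-1a"}}
running = [{"cpu": 1000, "mem": 2048}]
pod = {"cpu": 500, "mem": 1024, "nodeSelector": {"zone": "us-east-1a"}}
print(would_fit(pod, node, running))  # True
```

The autoscaler runs checks like these against hypothetical cluster states to decide whether adding or removing a node would leave every pod schedulable.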
D: To be honest, I haven't looked around to see what's there. I mean, I have a pretty good knowledge of how most of scheduling works; I spent quite a lot of time reading the dedicated code. It's just that during this meeting in particular you were talking about the descheduler. Yes, I am aware of it; I understood it's a separate sort of project that runs as a separate pod.
D: I would love to get into some more detailed discussion, but I don't know anything about the descheduler other than what I can guess from the name. So do you have some good starting point for me to learn about it? And then maybe we can think and discuss it in, I don't know, a week or two. Yeah.
C: I mean, there are two documents: we have a README on the descheduler repo on GitHub, and there is a design document also. If you want, I can share that; if you are part of sig-scheduling, I think there might be a link already, but if you want, I can forward it to you again. So those are the two things you can start with.
C: As I said, definitely from the beginning we have had a long-term plan to have it integrated with the cluster autoscaler. And as I said, the first use case I think we wanted to address there is the same thing: whenever a new node is added, at that time the descheduler knows that its strategies can get activated, because now the node is empty, and in turn it can evict some pods from the other nodes.
A: I generally see kind of an overlap between certain things in the scheduler, the descheduler, and the autoscaler. Particularly, preemption in the scheduler, and also some of the work in the autoscaler and the descheduler, kind of overlap with one another. The autoscaler also has a component that is doing almost exactly the same thing: basically the autoscaler can pack nodes with more pods and remove some of the nodes from the cluster.
A: That's the path for shrinking the cluster in the autoscaler, and the descheduler seems to be doing somewhat similar work. So it'd be great for all of us to think about the future of all three of these components, and maybe how we can consolidate some of them into fewer components, if that's possible. Yeah.
E: So there is one more thing; I don't know if this is already fixed. We keep seeing this MatchNodeSelector issue (I don't know if it belongs to the scheduling group or not), where a large number of pods show up with a MatchNodeSelector failure. We upgraded from 1.5 to 1.7, and I think node affinity moved from annotations to a field, so I think that was related, but then I think we still keep seeing it for newer pods also.
B: Exactly, so I think I know what you're talking about, because this was a long time ago: originally everything was an annotation, for affinity and taints and tolerations, and then we moved them to a field. There was the 1.6 time frame, and that was when we did that and a bunch of things broke. If you had the legacy annotations, we had this shim in 1.6; the shim allowed both the annotations and the fields. And then, when you went to 1.7, we got rid of the annotation support.
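To make that migration concrete, here is roughly what the two forms looked like; the label key and values are invented, and the annotation key shown is the alpha one that was dropped:

```yaml
# Pre-1.6 style: affinity as JSON inside an annotation (removed in 1.7)
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/affinity: >
      {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution":
        {"nodeSelectorTerms": [{"matchExpressions":
          [{"key": "zone", "operator": "In", "values": ["east"]}]}]}}}
---
# Field-based form that replaced it
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values: ["east"]
```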
B: It's 1.9 now, so it's 1.9, 1.8, and 1.7. If you upgraded from previous versions, and you skipped a version without upgrading some of the fields of your original submission, yeah, you're going to run into whole bucketloads of issues over time, because of how APIs got promoted, and the shuffling of APIs over time, with every single iteration of Kubernetes.
B: A minor version, and there is no difference: a Kubernetes minor version is nearly a major version. Kubernetes supports one dot-release across upgrades; if you skip an upgrade, it's not supported and there's no testing. You mean with the minor version? Yes: if you went from 1.5 to 1.7, that is a not-supported, not-tested path. Oh, I thought skipping minor versions was supported. Okay, so single-dot revisions of minor versions are. Right.
E: Okay, but what I was trying to say is that I think I'm still seeing that issue even after the upgrade. So now those pods were deleted and recreated, and we still come across that issue sometimes. Is that something that belongs to scheduling, and what more can I do to get this debugged? Yeah.
C: Okay, just one thing I would like to point out; I mean, it's a very simple thing to say. Several times, in the clusters we are running, whenever we see this MatchNodeSelector issue, most of the time it's a configuration issue.
C: The selectors are not set correctly: node labels, and selectors on pods, and the predicates and policies based on them. There are some admission plugins, and maybe they are not correctly configured. Several times we have seen those kinds of misconfiguration issues in our cluster. I just wanted to point out that it might not be the case here, but just as an example.
B: I can't tell you the number of times, before affinities were fields, that I had to debug poor souls, because the annotations were not type-checked even on admission; we had structured JSON in an annotation that wasn't field-checked all the way through. So that legacy was not kosher. At least now it's checked on admission, so it's hard type-checked, but even then you can come across malformed strings, where it's really easy to make a mistake. Okay.