From YouTube: Kubernetes SIG Scheduling meeting - 2018-11-08
B
Let's start the meeting. So, as you know, this meeting is recorded and will be uploaded to the public internet. Since we are already late, let's jump to the updates. As some of you know, some of our features are postponed in this release, because 1.13 is going to be a stability release. One of the more recent ones that we decided to postpone is what Bobby was working on. Bobby, you are aware that this limit priority function is postponed to 1.14, right?
C
Yeah.
So basically, the whole story is that we need to kind of intentionally unplug the network connection from the kubelets to the master. I tested it well in our regular e2e testing in testgrid, but it failed in the master-upgrading e2e test suite. I narrowed the issue down: it's a known issue that in the GCE upgrade CI, the IP of the master, which is used for communication between the kubelets and the master, changed from a public IP to an internal IP.
So in that case, the logic that I used to make the connection to the master doesn't work anymore. So I sent a PR fixing that issue by blocking the traffic to both the public IP and the internal IP. The latest test run has passed, so I think it's good. So we have all things covered, yeah.
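The fix described here can be sketched roughly as follows. This is an illustrative guess at the approach, not the actual e2e test helper: the function name, the `IPTABLES` override, and the exact iptables rules are all assumptions.

```shell
#!/bin/sh
# Illustrative sketch (NOT the real e2e helper): sever node->master
# connectivity by dropping outbound traffic to BOTH the master's public
# IP and its internal IP, since GCE upgrade CI switched kubelet->master
# communication from the public IP to the internal one.
block_master_traffic() {
  # $1 = master public IP, $2 = master internal IP
  for ip in "$1" "$2"; do
    # IPTABLES is overridable so the sketch can be dry-run in tests.
    ${IPTABLES:-iptables} -A OUTPUT -d "$ip" -j DROP
  done
}
```

Blocking only the public IP would leave the internal path open after the upgrade, which is why the test was passing the connectivity check it was supposed to fail.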
B
I think it's fine to disable the test in the upgrade test suite for now, but we should actually try to enable it, maybe change the test in any way that you think is appropriate, and maybe even make the operation harder. The reason is, I'm a little bit worried that when an older version of the cluster is upgraded, one which was not expecting this blocking behavior to work, it could possibly cause problems. So that would be great.
D
Bobby, the point that you made is valid too, because we faced the same issue, I think, for an older release, not from 1.11 to 1.12 but from 1.10 to 1.11. I think it was reported like two weeks ago, or some time ago, where we saw an issue which fails during the upgrade.
B
I have an update for you guys regarding a feature that we were seeking. You know, the scheduler actually moves all the unschedulable pods to be retried every time that it receives an update for a node. This makes sense, but the problem is that node updates are very frequent. Basically, the scheduler receives an update for a node every 10 seconds; this is what the kubelet sends. As a result, we keep rescheduling a lot of pods which were determined to be unschedulable earlier.
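The behavior being described can be sketched minimally as below. This is an illustration, not the real implementation: the actual logic lives in the scheduler's internal scheduling queue, and the type and method names here are assumptions.

```go
package main

// Minimal sketch of the behavior described above. The real logic lives in
// the scheduler's internal scheduling queue; the names here are illustrative.
type SchedulingQueue struct {
	active        []string // pods ready to be tried by the scheduler
	unschedulable []string // pods that already failed scheduling
}

// MoveAllToActiveQueue models the current behavior: ANY node add/update
// event, including a plain 10-second kubelet heartbeat, flushes the whole
// unschedulable backlog back into the active queue to be retried.
func (q *SchedulingQueue) MoveAllToActiveQueue() {
	q.active = append(q.active, q.unschedulable...)
	q.unschedulable = nil
}
```

With hundreds of nodes each heartbeating every 10 seconds, this flush runs constantly, which is the rescheduling churn described next.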
Alright, so in a large cluster, you can imagine that the scheduler receives hundreds of these node updates every second, so it keeps retrying all these unschedulable pods. I had an idea that we should look at the status of the nodes and see if there is anything changed in the node update that makes the node schedulable, or more schedulable, or changes the schedulability of the node. For example, the amount of resources on it has changed, or labels on the node have changed.
B
Things
on
a
node
has
changed
and
stuff
like
that.
So
we
implemented
this.
Actually,
one
of
our
contributors
have
a
simple
mentis:
it
turned
out
that
it
increases
the
scheduler
CPU
usage.
A
lot
in
larger
clusters
in
scalability
I
had
to
send
a
PR
to
revert
that
back,
but
the
idea
is
still
valid
and
we
would
like
to
continue
working
on
that
idea.
I
will
try
to
reach
out
to
the
node
team
to
see
if
the
node
itself
can
report
such
such
things
to
us.
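The schedulability check that was implemented and then reverted can be sketched like this. This is an assumed shape, not the actual reverted PR: the real scheduler uses `k8s.io/api/core/v1` node types, and the struct and function names here are simplified stand-ins.

```go
package main

import "reflect"

// Minimal stand-ins for the node fields the check inspects. (Assumption:
// the real scheduler uses k8s.io/api/core/v1 types; these are simplified.)
type Taint struct {
	Key, Value, Effect string
}

type Node struct {
	Labels        map[string]string
	Allocatable   map[string]int64 // resource name -> quantity
	Taints        []Taint
	Unschedulable bool
}

// nodeSchedulingPropertiesChanged reports whether an update changed any
// property that could let previously-unschedulable pods fit: allocatable
// resources, labels, taints, or the unschedulable flag. Heartbeat-only
// updates (conditions, timestamps) return false, so the scheduler can skip
// retrying the unschedulable backlog for them.
func nodeSchedulingPropertiesChanged(old, new *Node) bool {
	return !reflect.DeepEqual(old.Allocatable, new.Allocatable) ||
		!reflect.DeepEqual(old.Labels, new.Labels) ||
		!reflect.DeepEqual(old.Taints, new.Taints) ||
		old.Unschedulable != new.Unschedulable
}
```

The CPU-usage problem mentioned above follows from running deep comparisons like these against every node update, hundreds of times per second in a large cluster.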
B
So,
basically,
if
the
node
itself
are
the
one
of
the
idea
is
that
I
note
itself
tells
us
whether
it
has
changed
anything
that
could
potentially
make
the
node
more
scheduled
about,
for
example,
not
resources,
names
tables
any
of
those
change.
Maybe
the
node
can
set
a
flag
and
its
status
so
that
the
scheduler
knows
that
it
can
retry
on
schedule
of
all
pot.
We
will
see
if
that
works,
but
anyway,
for
now
we
had
to
pull
that
out
of
the
rules.
That's
another
update
that
I've
had
for
you.
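The node-reported flag being proposed here could look something like the sketch below. This is entirely hypothetical: no such field exists in the Kubernetes API, and the names are made up for illustration.

```go
package main

// Hypothetical sketch of the proposal; no such field exists in the real
// Kubernetes API. The node would set the flag itself when it reports status.
type NodeStatus struct {
	// Set by the node whenever resources, taints, or labels changed since
	// its last status report. Hypothetical field.
	SchedulingPropertiesChanged bool
}

// shouldRetryUnschedulablePods is the scheduler-side gate: only a status
// update carrying the flag triggers a retry of the unschedulable backlog;
// plain heartbeats do not, and the scheduler never has to diff node objects.
func shouldRetryUnschedulablePods(s NodeStatus) bool {
	return s.SchedulingPropertiesChanged
}
```

The appeal of this design is that the expensive comparison moves from one central scheduler to each node, which already knows what it changed.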
B
I
I
still
want
to
push
these.
You
know
extension
points
for
the
scheduling
framework
out.
Hopefully,
in
this
release,
if
I
can
I
guess
it's
pretty
much
done
really.
The
code
itself,
but
we
would
like
to
add
tests
for
it.
I'm
gonna
try
to
do
that
today
or
tomorrow.
Hopefully
we
can
get
those,
let's
see
other
than
that
I.
Don't
have
other
updates
for
you
really
Bobby.
B
... us comparing, in a central place like the scheduler, all the properties of nodes with the previous version that we have. So hopefully the other one works better. Yeah, I mean, the distributed version, where the node reports it, is better. Okay, that's all for me. Unfortunately, we're running out of time, but if you have any quick questions or comments, please go ahead and share them with us.