From YouTube: Kubernetes SIG Scheduling Meeting - 2019-10-24
A
Right, yeah, hi everyone. As you know, this meeting is recorded and will be uploaded to YouTube. I have a couple of agenda items, and then please feel free to open it up for any other issues. The first one is the framework migration: I think we are on track to achieve our goal for phase one by the code freeze. All predicates are now wrapped as plugins, and priorities are almost there; we have just a couple still in review. The custom predicates and priorities have been started as well; Truong started on those.
The second item is the scheduler's default topology spreading. I don't think Aldo is on the call, but it's also on track. The KEP for how we're going to do this has been merged, so he'll start implementing it very soon.
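For context, a rough sketch of the per-pod API that default topology spreading builds on, the `topologySpreadConstraints` field (EvenPodsSpread, alpha around this time); the default-spreading work would apply a cluster-level default when a pod specifies no constraints of its own. All names and values here are illustrative, not from the meeting:

```yaml
# Illustrative only: spread replicas of app=web evenly across zones,
# tolerating a skew of at most one pod between any two zones.
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx
```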
The third item is improving our scheduler benchmarks, so I'm going to hand over the chance to speak on that.
B
Yeah, there are a few PRs. One is using a watch mechanism to get the latest state with minimal overhead; before, we were just using polling, querying the objects at regular intervals. So that's one key area, improving the accuracy. Another one that was raised is to stop the Go benchmark timer, to eliminate the overhead of the deferred functions.
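The polling-versus-watch point can be sketched with plain channels; this is a hypothetical stand-in for the client-go watch machinery, not the actual benchmark code. The difference it shows: a poll only observes the state present at each query, while a watch delivers every event as it happens.

```go
// Hypothetical sketch: polling sees only snapshots; a watch sees every event.
package main

import "fmt"

// pollSnapshot models periodic polling: it returns only the state present at
// the moment of the query, so transitions between polls are invisible.
func pollSnapshot(state []string) []string {
	out := make([]string, len(state))
	copy(out, state)
	return out
}

// watchEvents models a watch: every event is delivered on a channel, in order.
func watchEvents(events []string) <-chan string {
	ch := make(chan string, len(events))
	for _, e := range events {
		ch <- e
	}
	close(ch)
	return ch
}

func main() {
	events := []string{"pod-a scheduled", "pod-b scheduled", "pod-b evicted"}

	// A poll taken after the fact never sees the eviction happen; it only
	// sees the resulting state.
	fmt.Println("poll:", pollSnapshot([]string{"pod-a"}))

	// The watch observes all three events with no gaps.
	for e := range watchEvents(events) {
		fmt.Println("watch:", e)
	}
}
```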
B
You know, deferred functions were executed at the end of each test cycle, so fixing that is also good. But we haven't started on the trailing part of this yet, for example re-evaluating the accuracy of the benchmark. For instance, there is b.N: right now we are modifying that number, but to my knowledge we shouldn't be, because it's a number the Go testing framework determines dynamically according to the running time.
A
Another one is actually getting this into the dashboard, the Kubernetes perf dashboard. I think we already have a page there, but it's not being updated, and I think it's been broken for a while. Do you want to update the issue and keep track of this one?
A
Then changing the Permit API: this is done, it's already merged. Scheduler configuration cleanup: I think this is generally on track. A wonderful job was done there as well, with Aldo shepherding the work; a lot of the factory-related code was cleaned up, and the whole factory package is now deleted and removed. The next one is improving scheduler metrics; please stop me if you have any comments or any questions about any of this. Improving scheduler metrics is also done.
A
Team
nodes,
by
condition
2g
air-driven,
did
a
wonderful
job
here
as
well.
It's
already
done
so.
We
got
a
chance
to
also
delete
like
for
prep
for
predicates
with
it,
so
that
was
a
really
great
job
and
also
cleaning
up
Deadwood
resource
code,
a
scope
selector
to
GA.
So
Ralph,
do
you
want
to
talk
about
this?
This
is
also
related
to
a
relaxed
namespace
restriction
for
critical
cause.
I
guess
isn't.
D
Yeah, it is. So at this point in time there is an integration test that is continuously failing, as was mentioned. I looked a bit into it, debugging it for some time, but I could not find a solution. At this point a couple of people have offered to help, so one of them is going to work on this, as I do not have any cycles to work on it.
D
76310: we should be able to create a PR to graduate the resource quota scope selectors to GA. Right now it's in beta. I tried moving it to GA, and then I had to revert that, because the other PR did not go in, the one relaxing the namespace restrictions for critical pods. So we have to get the relaxed namespace restrictions in first, yeah.
A
Okay, okay, yeah. To me that one is the priority, because the other one, being in beta, is enabled by default, so it can wait until the next release. But for the last one, relaxing the namespace restrictions, we've been getting a lot of requests for this upstream, and the workaround is people creating pods in the kube-system namespace, which is not ideal.