From YouTube: Kubernetes SIG Scheduling Meeting - 2020-02-13
C: So, you know, basically [the plugin configuration] is getting a little bit obscure. To mitigate that, I think in general we can't solve this problem for any plugin, for out-of-tree plugins, but we should at least make an effort to support validation and defaults for the default plugins. So the proposal there is to move the API definition for each plugin all the way up to the versioned folders. That way we can, first of all, make it visible, make the arguments visible to users, and possibly add validation logic, and maybe conversion.
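The proposal above, surfacing per-plugin args as versioned API types so they can get defaulting and validation, might look roughly like the sketch below. This is illustrative only, not the actual kube-scheduler code: the type name, field, default value, and range are invented for the example.

```go
package main

import (
	"errors"
	"fmt"
)

// InterPodAffinityArgs is a stand-in for a versioned plugin-args type.
// In the real scheduler these would live in the component config API
// group; the field and bounds here are invented for illustration.
type InterPodAffinityArgs struct {
	// A pointer lets defaulting distinguish "unset" (nil) from an
	// explicit zero, as versioned Kubernetes API types commonly do.
	HardPodAffinityWeight *int32
}

// SetDefaults fills in unset fields, mirroring a defaulting function
// registered for a versioned API type.
func SetDefaults(args *InterPodAffinityArgs) {
	if args.HardPodAffinityWeight == nil {
		w := int32(1)
		args.HardPodAffinityWeight = &w
	}
}

// Validate rejects out-of-range values, mirroring an API validation
// function for the same type.
func Validate(args *InterPodAffinityArgs) error {
	if args.HardPodAffinityWeight == nil {
		return errors.New("hardPodAffinityWeight must be set")
	}
	if w := *args.HardPodAffinityWeight; w < 0 || w > 100 {
		return fmt.Errorf("hardPodAffinityWeight %d out of range [0, 100]", w)
	}
	return nil
}

func main() {
	args := &InterPodAffinityArgs{}
	SetDefaults(args)
	fmt.Println(*args.HardPodAffinityWeight, Validate(args))
}
```

The point of moving the types "up to the versioned folders" is exactly that these defaulting and validation hooks have somewhere natural to live.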
B: Right, so just to rephrase: as you all know, for each plugin we have a plugin config, and the plugin config right now in the component config API is basically untyped, so that when you parse it, you have to know which exact struct it corresponds to. And what we are doing right now is that those structs are basically defined with the plugin itself, under the framework plugins directory.
B: We should respect the deprecation policy for those configurations, and so it makes a lot of sense to make them part of the API directory, although they are not going to be used anywhere else; we are not going to embed them in any configuration. They just define the type that, you know, a plugin's args will be parsed into, or decoded into. So yeah, that's basically what I wanted to add to what was already mentioned.
B: Right, yeah, I mean, this is certain: we definitely need to follow the Kubernetes deprecation policy for these types, because they are part of the API. Having them deep inside the tree doesn't make things obvious, and so placing them in staging as well, and all of that, will force us to do the right thing when we try to update arguments and whatnot. Okay, so we agree on this one, that's great.
D: So this actually came up; I was talking to Ravi about it. It came up when we were working on the descheduler recently, trying to move it to Go modules and get all of its Kubernetes dependencies updated. We found that there were a lot of dependencies, mostly transitive through scheduler code that was imported there, that ended up importing pretty much all of k8s.io/kubernetes.
D: I linked to their enhancement because a lot of their motivation was similar, though not the same as ours, which I also outlined in the doc. But the main idea is really just to make the scheduler more importable for external projects, whether that's people that are using the scheduler framework, writing custom schedulers, or just out-of-tree plugins that they want to compile at some point without depending on all of Kubernetes. It's really helpful for that, and it also just becomes difficult with Go modules to import k8s.io/kubernetes.
B: Let me finish, and then you can. So the second thing is, kube-scheduler is actually just a client, right? Well, it's a client of the API server; it's just a controller running. So I support that move, in the sense that it shouldn't really depend on the core Kubernetes code. But that also depends on the first question you are asking: what is the canonical piece of code that should reside under the kubernetes package? What does Kubernetes provide there?
D: Yeah, I think that's a really good question. I can't say definitively; I think it kind of comes down to a debate over, like you said, what canonically should belong in the core Kubernetes repo. I think you can make a lot of arguments for breaking the scheduler out, because it does run as a compiled command and has flags that get passed to it when you run it. So in that sense, you know, we are a controller that's part of Kubernetes, but the default scheduler is also its own thing, sort of.
E: ...a library provider for other consumers. I spoke with Mike a little bit about it earlier this week, and as an intermediate option we could also provide people with something like a scheduler framework. We don't necessarily need to expose, or move to the staging repository, the entire scheduler, but at least the main part, the so-called framework, which would allow other authors of plugins, and other authors of full schedulers, to use the same components that we are using inside of kube-scheduler.

As an example, in kubectl, what we did is something called cli-runtime, which is a set of helpers that are useful for people building CLI plugins that just want to have a similar look and feel to normal kubectl commands. kubectl is the first consumer of that library, and it resides right next to it inside of the staging repository. So maybe we don't need to expose and extract the entire scheduler, but we should think about extracting it into some kind of, I don't know, scheduler framework library, whatever the name will be.
E: It doesn't matter at this point in time. We should think about exposing this, because that part of the functionality will be helpful for others to write schedulers. If we know that we are using similar functionality in the descheduler, I'm pretty sure there will be other people that will benefit from that kind of library code, right.
B: I mean, the cluster autoscaler is another example. Yeah, I completely agree. One of the things that I was also thinking about is how we version the framework, and we can't version it as long as it's, you know, under the internal package, because we provide zero guarantees there. I don't know, this is a thinking-out-loud question: what guarantees and commitments are we going to adhere to moving forward if we provide this as a library? I honestly do not know, and the deprecation policy and whatnot for library APIs is not like, you know, config and object APIs. How is that different? Does Kubernetes have a policy for that, from your experience with the API server and kubectl? So how does that... yeah.
E: Releasing faster would actually mean that we need to increase the API guarantees, so it won't be lower; it has to be at least at the current level. So if we start releasing every month, that means our version guarantees will have to go up that much. From what I've noticed, the majority of the API machinery libraries, like client-go, should work without any problems between versions, but they have a one-to-one matching, because client-go has a different versioning than the main Kubernetes, so they have a table.
E: I think it's living at the root of the published client-go repo, where they state which particular Kubernetes version it works with. But since it is building on top of the Kubernetes API, it's the API that gives the guarantees for the library. I mean, the scheduler library would also be built on top of the APIs that you just talked about before.
E: We entered this topic already, so building on top of that: I'm guessing that it would be totally up to us to define the API guarantees, and I'm guessing that just stating that we would be following the same guarantees as the main Kubernetes project, including the deprecation policies and whatnot, would be a sufficient step forward.
A: I can't give a concrete technical opinion right now, but from the user's perspective I would think of it as necessary, especially since the kubectl example is pretty similar to how we in scheduling consume the upstream code. Encouraging users to consume our framework would be very beneficial for all those external users. So right now, when I try to compose the scheduler plugins sub-repo, I was also thinking about the question of the version compatibility matrix, how to define the rules there, and yeah.
B: They need some logic, for example: run the pre-filters and the filters, run the score and the normalize-score, and all that jazz, right? This is still in multiple places in the scheduler; it's not properly factored out into what we would call, like, an engine, for example, that we could move right now.
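The "engine" logic being described, running every pre-filter and filter, then scoring the survivors and picking a node, is roughly the shape below. This is a toy sketch with invented types, not the real framework interfaces, just to show the ordering logic that the discussion proposes factoring out into a library:

```go
package main

import "fmt"

// Node is a toy stand-in for a schedulable node.
type Node struct {
	Name string
	Free int // free capacity, in arbitrary units
}

// FilterPlugin and ScorePlugin are toy versions of the framework's
// filter and score extension points.
type FilterPlugin interface {
	Filter(n Node) bool
}
type ScorePlugin interface {
	Score(n Node) int
}

// fits filters out nodes with less free capacity than a pod needs.
type fits struct{ need int }

func (f fits) Filter(n Node) bool { return n.Free >= f.need }

// leastUsed prefers nodes with the most free capacity.
type leastUsed struct{}

func (leastUsed) Score(n Node) int { return n.Free }

// schedule is the "engine": run every filter over every node, then
// sum the scores of the survivors and pick the best one.
func schedule(nodes []Node, filters []FilterPlugin, scorers []ScorePlugin) (Node, error) {
	best, bestScore := Node{}, -1
	for _, n := range nodes {
		feasible := true
		for _, f := range filters {
			if !f.Filter(n) {
				feasible = false
				break
			}
		}
		if !feasible {
			continue
		}
		score := 0
		for _, s := range scorers {
			score += s.Score(n)
		}
		if score > bestScore {
			best, bestScore = n, score
		}
	}
	if bestScore < 0 {
		return Node{}, fmt.Errorf("no node passed all filters")
	}
	return best, nil
}

func main() {
	nodes := []Node{{"a", 2}, {"b", 8}, {"c", 5}}
	n, err := schedule(nodes, []FilterPlugin{fits{need: 4}}, []ScorePlugin{leastUsed{}})
	fmt.Println(n.Name, err)
}
```

Anyone building a custom scheduler has to reimplement this loop today, which is the overhead being discussed.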
A: Right, yeah, I agree with that. For those kinds of template algorithm code, should that part stay in core along with the plugins, or is it just an interface? For example, in 1.18, suppose we have renamed, or we haven't done it yet but we are renaming, right, PostFilter to PreScore. Given a version compatibility matrix, how do we let the downstream users know?
B: But I guess my point is, I think we might want to extract that out and make it part of the library, because that is also a lot of overhead for anyone who wants to build a scheduler. The framework itself is useless if you don't have the logic around it to actually run the framework, and it is not simple to do that, I think.
E: I totally agree with this approach. I mean, it's not something that we need to do right away; I was rather, and I think Mike's approach was rather, that this is a long-term goal. And there's definitely a lot of the scheduler internals that I would not want to expose in any way; I would prefer they live inside the main repo, and that's perfectly fine. But there's nothing stopping us from doing this incrementally, yeah.
E: And to answer the other statement: you might be surprised to learn how much coupling you will have once you actually start doing this split. We learned that the hard way, and you can talk to the API machinery folks, they learned it the hard way as well. Every single time there is this kind of split going on, you'll learn that you have a lot of code that is tightly coupled to the kube internals, and that doesn't necessarily have to be that way. So the extraction is also a good way to refactor.
B: There are a ton of advantages we can think of when we do this. One thing, just in terms of trying to brainstorm and playing devil's advocate here: the one concern that I have with this is the increased commitment on our side. And it's not because I don't want to do extra work; it's because it might slow us down in terms of development. So this is something we need to balance: we don't want to take on responsibility for something that is not widely used.
B: That would, you know, slow us down in terms of how we develop our default scheduler and how fast we can move with features. If, for example, moving it to a staging repo requires us to adhere to a specific, you know, deprecation policy that is too strict, then instead of replacing a feature in one or two releases, it's gonna take us much longer. So I'm not saying that we shouldn't do it; it's just that we want to be careful with what exactly we move to staging, and how we craft our deprecation policy, to allow a good balance between our commitment to the community and, at the same time, the velocity of, you know, feature development. So this is something for the doc, for your document, Mike, as well: I guess, start a discussion there as well on how we can balance that, and also explicitly start discussing the deprecation policy.
B: And just give examples of exactly what the API server folks are doing, or what kubectl is doing, and maybe we can reach out to people who actually went through that process and get their perspective and experience: how did that work out for them, what was the right balance? Maybe whatever they are using right now was something that didn't work out very well, and they would have liked to change it if they had to start over.
B: All right, thank you so much. We discussed before that the 5:00 p.m. meeting is optional: unless we have an agenda item, every week I'm gonna send an email to cancel it. But just so everyone understands, this is going to be optional moving forward, as long as we don't have an agenda item. Thank you, see you.