From YouTube: Kubernetes SIG Scheduling Meeting - 2019-04-25
A: You know, there was a long discussion about how we should proceed with that project, particularly because the project was renamed to Volcano and expanded to support other things like queues, jobs, quota, hierarchical quota, and things like that. We felt it's no longer within the charter of SIG Scheduling only; it spans multiple SIGs. As a result, we were contemplating whether we should keep it as a sub-project or whether it should go and become a larger project of its own.
A: So we discussed the possibilities, and I believe the two possible options that everybody agreed on were: one, make this project a working group so that multiple SIGs can contribute to it; or two, make it a completely independent project and donate it to the CNCF, so that it becomes a separate CNCF project. So it's now up to Huawei to decide.
A: You're right. I believe the code would be owned by multiple SIGs or something, but you're right, I don't know exactly how this is going to work out. I don't have a whole lot of expertise with respect to the responsibilities of SIGs versus working groups. I would leave it up to Huawei and the SIG chairs, maybe, to decide. You're right, maybe they shouldn't do this if they want to own the code, yeah.
B: One of the things that we were discussing internally, I mean at least within Red Hat, is how we can have this particular component be part of the core scheduler, so that the customer does not have to use this as an add-on component or something. So I'm fine with it if this is owned by the SIG. But yes, you are right that it has actually gone beyond the scope of SIG Scheduling, so.
A: You know, initially that was our plan. Basically, what I mean by that is that the plan was that we would bring kube-batch and batch scheduling capabilities in as part of the core scheduler. After it got converted, or, like, changed its scope to Volcano, it's no longer just a batch scheduler, so my understanding is it goes well beyond the charter of only one SIG. It covers some lifecycle management aspects of batch jobs.
A: It covers quota management and even hierarchical quotas. It requires some significant changes to the main API. So anyway, it's just not something that we can host as just one SIG. It feels like it goes beyond the scope of just one SIG, and that's why we decided to go this way. But if it were only about scheduling — batch scheduling, gang scheduling and all — you're right, we could have had it as part of the core scheduler.
A: Any questions about that? All right, one more reminder about code freeze — sorry, this is enhancement freeze, not code freeze yet. So enhancement freeze is on April 30th, as far as I can remember, so we are only five days away. If you have any KEP that needs to be merged by then, please let me know. As far as I know, most of the stuff that we wanted to do for 1.15 already has KEPs, and Jonathan sent out a KEP yesterday.
A: We should try to spread pods among physical hosts, not just nodes, because in a lot of scenarios multiple nodes land on the same physical host, and it would be great if we had some information about the actual underlying physical host, so that we can spread pods among those physical hosts for reliability reasons. Then, if one of those physical hosts goes down, it doesn't bring down almost all of the services, or a large number of instances of a service. So that's the feature that we are targeting for 1.15.
A: Node name and, I believe, host name, if I'm not mistaken. But anyway, we have a few of these standard labels, and we were trying to also add another one for the physical host. But whether we should go down this path or not is, like, a bigger discussion. There are some debates going on, but I still asked Jonathan to write this KEP, so that the discussions can be redirected to the KEP and we can decide whether we should go with having one more standard label with particular semantics.
A: Basically, this label is going to indicate that a node is on a particular physical host — so basically it's a label for the physical host — and the scheduler is going to treat that label specially and kind of spread pods among those physical hosts. So the discussion about whether this is the path that we should pursue or not is ongoing. We will know, hopefully in the next few days, whether this is doable or not, or whether it is approved or not. So that's it on this one from me.
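To make the idea concrete, here is a minimal sketch in Go, using client-go API types, of how a workload could spread replicas across physical hosts once such a label exists. The label key `example.kubernetes.io/physical-host` is hypothetical — the real key would be settled in the KEP — and this sketch relies on plain pod anti-affinity, which works with any node label as a topology key, rather than on the special scheduler treatment discussed above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// physicalHostLabel is a hypothetical label key for illustration only.
const physicalHostLabel = "example.kubernetes.io/physical-host"

// replicaPod builds a pod that prefers to land on a different physical
// host than other replicas of the same app, by using the hypothetical
// label as the anti-affinity topology key instead of
// kubernetes.io/hostname (which only spreads across nodes).
func replicaPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: map[string]string{"app": "my-service"},
		},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{{
						Weight: 100,
						PodAffinityTerm: corev1.PodAffinityTerm{
							LabelSelector: &metav1.LabelSelector{
								MatchLabels: map[string]string{"app": "my-service"},
							},
							// Nodes sharing the same value of this label
							// form one topology domain, so replicas spread
							// across physical hosts, not just nodes.
							TopologyKey: physicalHostLabel,
						},
					}},
				},
			},
			Containers: []corev1.Container{{Name: "app", Image: "my-service:latest"}},
		},
	}
}

func main() {
	p := replicaPod("my-service-0")
	fmt.Println("spreading across topology key:",
		p.Spec.Affinity.PodAntiAffinity.PreferredDuringSchedulingIgnoredDuringExecution[0].PodAffinityTerm.TopologyKey)
}
```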
A: The API reviewers had some comments. Daniel in particular — also known as lavalamp; that is his GitHub ID — has some comments about that. That is an ongoing discussion; we will know whether we can have that or not. Then there are a couple of concerns about that particular approach. One is performance: we must make sure that it performs.
A: The second concern is backward compatibility. We must make sure that it's going to be backward compatible, and in the case of rollback — if you, for example, upgrade to a new version that supports less-than and greater-than and suddenly have to roll back to an older version — we need to be able to somehow tolerate the feature and sort of support it.
A: But that PR caused some other performance degradation in some of our benchmarks — I mean, it improved one thing and reduced performance in other areas — so it got reverted. Now he has worked on it and found the reason for the slowdown and, as I said, there is another PR that removes that particular problem. So that PR is merged and we were waiting for the results; hopefully by the end of today we will know how this new version works.
C: If you've seen the original PR — I basically threw the rubbish from my home to my neighbors, yeah. This time, I think the cause is that we additionally put in initialization of an int64 pointer for whether it is the infinity case or not, but that really hurt the performance of the non-infinity case. So, great.
A: Good point, and I'm glad that you found this. For the information of other folks: there was just initialization of new int64 values. Okay, we had like one large array of int64s, one per node. You may think that this shouldn't be a problem, right — you're just allocating one int64 for each node in the cluster — but this actually caused a performance reduction. The performance degradation was significant, and if you look at all of these graphs, you see that suddenly the latency of priority evaluation in the whole scheduling cycle increased; priority evaluation increased quite a bit with this seemingly benign change. Anyway, this is a good lesson for a lot of us to measure performance, because a lot of times we don't think that small things or small changes make a lot of difference, but they actually do. We've seen this in other areas as well. But thank you very much for pointing that out. I'm sorry for interrupting.
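To illustrate the lesson, here is a toy Go benchmark — not the scheduler's actual code; `scoreNodes` and the node count are made up for the sketch — contrasting the pattern that regressed (allocating a per-node int64 slice on every scheduling cycle) with reusing one allocation. Because priority evaluation runs once per pod, a "benign" per-cycle allocation is paid thousands of times in a large cluster.

```go
// Save as alloc_bench_test.go and run with: go test -bench=.
package scheduler_test

import "testing"

const numNodes = 5000 // large-cluster scale, as in Kubemark 5000

// scoreNodes stands in for per-node priority scoring; only the
// allocation pattern around it matters here.
func scoreNodes(scores []int64) {
	for i := range scores {
		scores[i] = int64(i % 10)
	}
}

// BenchmarkAllocPerCycle allocates (and zeroes) the slice on every
// "scheduling cycle" — the seemingly benign change that regressed.
func BenchmarkAllocPerCycle(b *testing.B) {
	for i := 0; i < b.N; i++ {
		scores := make([]int64, numNodes)
		scoreNodes(scores)
	}
}

// BenchmarkReuse reuses one slice across cycles, avoiding the
// repeated allocation and the GC pressure it creates.
func BenchmarkReuse(b *testing.B) {
	scores := make([]int64, numNodes)
	for i := 0; i < b.N; i++ {
		scoreNodes(scores)
	}
}
```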
C
Us
you
Ravi
and
I.
Also
looking
to
code,
we
can
figure
out.
What's
the
real
father
right
by
the
data,
the
testing
data
doesn't
lie
so
I
spend
one
whole
week
to.
Firstly,
I
want
to
simulate
of
end-to-end
testing
simulate
that
testing,
which
you,
the
GCE
sorry
the
Kip
mark
5000
bit
later
on
I
think
is
impossible,
because
if
you
use
83
real
nails,
I
don't
have
so
much
nails,
so
I
turned
to
other
other
ways
to
kind
of
try
to
be
reproduced.
C: I fixed two issues in Kubemark, so now I can set up a Kubemark cluster — I can do that — but I haven't run the real Kubemark test, because the GCE Kubemark 5000 setup means, well, the whole master is spun up and you can directly connect to or SSH to it. For example, on the IBM cluster you cannot do that. So I haven't, right.
C: By the way, I have found a way to simulate a local development environment with multiple hollow nodes. So if you're interested, maybe next time I can do a very short demo, because right now I think the easiest way to debug your scheduler is to run a local-up cluster — that is also my favorite way — but it only spins up one node, right? So for predicates it's fine in most cases, but for priorities, of course, you cannot do that, because the scheduler directly returns if only one node is available, right? So I can locally simulate a hollow node. It's a fake node: it actually registers itself as a node and, of course, it's acting as a kubelet, but it doesn't do anything behind the CRI — in other words, it doesn't spin up a real container. Also, I had it register the right information so it reports as ready. So maybe next time I can do a short demo. Yeah, yeah, I think it helps local debugging, yeah.
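A minimal sketch of that fake-node trick with client-go, assuming the 2019-era API (newer client-go versions add a context and options argument to Create): the Node object is created with a Ready condition and some capacity so the scheduler considers it schedulable, even though no kubelet is running behind it. The names, capacities, and kubeconfig handling here are illustrative, not taken from the speaker's actual tool.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// registerFakeNode creates a Node that reports Ready with some capacity,
// so the scheduler will place pods on it. The Node API allows status to
// be set on create (this is how kubelet self-registration works).
func registerFakeNode(client kubernetes.Interface, name string) error {
	node := &corev1.Node{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Status: corev1.NodeStatus{
			Capacity: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("4"),
				corev1.ResourceMemory: resource.MustParse("8Gi"),
				corev1.ResourcePods:   resource.MustParse("110"),
			},
			Conditions: []corev1.NodeCondition{{
				Type:   corev1.NodeReady,
				Status: corev1.ConditionTrue,
				Reason: "FakeNodeReady",
			}},
		},
	}
	node.Status.Allocatable = node.Status.Capacity
	// 2019-era client-go; newer versions: Create(ctx, node, metav1.CreateOptions{}).
	_, err := client.CoreV1().Nodes().Create(node)
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Register several fake nodes so priority evaluation actually runs
	// instead of short-circuiting on a single feasible node.
	for i := 0; i < 5; i++ {
		if err := registerFakeNode(client, fmt.Sprintf("fake-node-%d", i)); err != nil {
			panic(err)
		}
	}
}
```

Pods bound to such a node never actually start a container, which is fine when all you care about is observing scheduler behavior.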
A: That sounds great. Actually, we would be happy to see all of the information that you have found; it is useful for debugging and also for performance tuning. I myself sometimes try to reproduce problems in integration tests but, as you know, for performance testing integration tests are not always great: sometimes the results are not exactly the same as real benchmarks, real cluster benchmarks. So something like a local cluster will be a little closer to a real cluster, and hopefully those results are closer, or more accurate, than just an integration test. Yeah.
A
Okay,
if
there
is
no
other
updates,
we
can
end
the
meeting
today.
Thank
you
very
much
for
attending
I
will
see
some
of
you
guys
next
week
and
by
the
way,
one
quick
thing
we
are
gonna
have,
if
you
were
not
here
last
week,
we
are
going
to
have
a
contributor
summit
meeting
face
to
face
meeting
for
six
scheduling
if
you
are
going
to
kill
Khan
try
to
attend
our
meeting.