From YouTube: Kubernetes SIG Scheduling Meeting - 2019-07-11
A: Okay, hi everyone. I'll be hosting this meeting today because Bobby is running a little late. Bobby and I listed a couple of items on the agenda to discuss, and of course we can then open the floor for any questions, discussions, or open issues.
So the first item on the agenda is, of course, the framework. We have, I think, five extension points left that need to be implemented. I think the first four are already implemented and waiting for reviews; the last one is dependent on the score plugin extension points. I think we're almost done there; there are some small discussions around it and the integration tests, which should probably be done soon. The hope is that after we do that, we will start with the predicates, to move them into the filter plugin. I guess we were going to start from the bottom up so that we can keep the order as it is, because if we changed the order, that would probably change our performance profile. We can also probably start with the priorities and move them into the score plugin (score slash normalize score), so that should be fun as well. Hopefully we'll be able to find opportunities to improve some of these priorities, now that we get the chance to look at them again and move them into plugins.
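For readers following along, here is a rough sketch of what one of the predicates looks like once it moves into the framework as a Filter plugin. It mirrors the old HostName predicate; the interface shapes follow the upstream scheduler framework, but the package lived at framework/v1alpha1 around this release (later pkg/scheduler/framework), so treat the import path as illustrative.

```go
package nodename

import (
	"context"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework"
)

// NodeName rejects nodes whose name does not match pod.Spec.NodeName,
// the same check the old HostName predicate performed.
type NodeName struct{}

// Compile-time check that we satisfy the Filter extension point.
var _ framework.FilterPlugin = &NodeName{}

// Name is the plugin name used in the scheduler configuration.
func (pl *NodeName) Name() string { return "NodeName" }

// Filter runs at the Filter extension point for each candidate node.
func (pl *NodeName) Filter(ctx context.Context, _ *framework.CycleState,
	pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	if pod.Spec.NodeName != "" && pod.Spec.NodeName != nodeInfo.Node().Name {
		return framework.NewStatus(framework.Unschedulable,
			"node didn't match pod.Spec.NodeName")
	}
	return nil
}
```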
B: Right now the progress is, basically, I have all the code checked in, so you can see the six PRs in total. I think the most challenging part was the API part, which we have completed. Daniel Smith from SIG API Machinery has given a green light on the API, and Abdullah, Miyoung, Bobby, and I all agree on the API part, so that was the most challenging part. We don't have any hard blockers, so the rough parts remaining are just the scheduler's internal logic.
As for reviewing, I think Abdullah spent a lot of time reviewing the second and third PRs; there are some minor places that are about to be updated, but I think I have updated most of them. So we have three more PRs to review. Then after that, or I would say at the same time, I will open the website documentation PR and do some user-facing documentation. Then, yeah, this feature is safe to be shipped in 1.16.
A: Yeah, I was just waiting because I thought it would be too much work to keep reviewing PRs that have a long chain of dependencies, but I'll get to it, probably tomorrow or next week. Sure, the end of this month should be fine.
Okay, sounds good. This looks really, really promising; I mean, lots of people ask for it. I don't think we have any other way to do even pod spreading right now. I don't think there is even a hacky way to do it.
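As a hedged illustration of what the feature gives users, this is roughly how the new topologySpreadConstraints field on the pod spec is used; the field names follow k8s.io/api/core/v1 as the feature landed in alpha, but check the shipped release for the final shape.

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// spreadAcrossHosts asks the scheduler to keep the per-hostname count of
// matching pods within a skew of 1, i.e. spread them evenly across nodes.
func spreadAcrossHosts() v1.TopologySpreadConstraint {
	return v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/hostname",
		WhenUnsatisfiable: v1.DoNotSchedule, // hard constraint; ScheduleAnyway makes it soft
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "web"}, // illustrative selector
		},
	}
}

func main() {
	var pod v1.Pod
	pod.Spec.TopologySpreadConstraints = []v1.TopologySpreadConstraint{spreadAcrossHosts()}
	_ = pod
}
```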
Yes, Rikku. So, physical host spreading: do you have any questions or comments about it? I have some updates from Aldo's side, but I guess I could start with you first.
C: The kubelet, like, I'm not sure if we could discuss it over here. The thing I can't tell is whether the registration happens after the label extraction, or whether it's happening at the same time. Basically, you told me that the label extraction would happen before the node is registered with the API, so that's good, yeah.
A: I guess my point is, what happens is that the kubelet creates a struct, the node struct, and then it posts it to the API server, which means it basically gets written to etcd, and only at that point will the scheduler find that there's a new node and try to schedule pods on that node. So the code path that I pointed you to is basically where the kubelet creates that struct in memory, on the node.
At that point the system doesn't yet know that this node exists. Then the kubelet registers itself with the cluster, and that's basically the registration process, that's what they call it: it says, I am a node in this cluster. So I guess what I was saying is, if you are setting the zone at that point, then there will be no race condition, because the system still doesn't know that the node exists.
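A rough client-go illustration of that registration argument: if the topology label is already on the Node object the first time it is posted to the API server, no watcher, including the scheduler, can ever observe the node without it, so there is no race. The label key here is hypothetical, and in reality the kubelet itself builds this object before registering.

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node := &v1.Node{
		ObjectMeta: metav1.ObjectMeta{
			Name: "node-1",
			// Set before the first Create: the node never exists in etcd
			// without this label. "example.com/physical-host" is made up.
			Labels: map[string]string{"example.com/physical-host": "host-42"},
		},
	}
	if _, err := client.CoreV1().Nodes().Create(
		context.TODO(), node, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```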
For the case of migration, there will always be a period of time between whatever it is, whether it's a controller, a DaemonSet, or the kubelet itself, probing the cloud provider API for the physical host ID of that node, and the time that ID gets propagated, written to etcd, and the scheduler becomes aware of it. So you always have that race condition; there's no way you can solve it. Okay.
The other approach I was pointing you to is a controller that is implemented probably as a DaemonSet, or maybe not; it depends on how efficient the cloud provider API is. In some cases the physical host could be exposed on the metadata server of the VM that hosts the node itself, so you're not calling a central API where you give it the node name and it returns the physical host to you.
A
You
probably
would
want
a
diamond
set
because
you,
of
course
it
depends
of
the
cloud,
provide
an
API
how
it
is
implemented.
But
if
it
is
implemented
as
a
local
server
on
the
road,
then
a
diamond
set
makes
complete
sense.
If
it
is
just
a
generic
API
that
you
know,
you
call
over
like
an
edge
like
something
like
you
know,
you
know
like
API
is,
did
google.com,
then
yeah
a
controller
with
a
single
instance
somewhere
should
be
good
enough.
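A sketch of the DaemonSet flavor of that controller, under the assumption that the cloud provider exposes the physical host ID on a VM-local metadata endpoint; the metadata URL and the label key are both made up for illustration.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// NODE_NAME would be injected via the Downward API in the DaemonSet spec.
	nodeName := os.Getenv("NODE_NAME")

	// Hypothetical VM-local metadata endpoint exposing the physical host ID.
	resp, err := http.Get("http://169.254.169.254/metadata/physical-host")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	hostID, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Patch only the label, so we don't clobber anything the kubelet or
	// other controllers set on the Node.
	patch := []byte(fmt.Sprintf(
		`{"metadata":{"labels":{"example.com/physical-host":%q}}}`, hostID))
	if _, err := client.CoreV1().Nodes().Patch(context.TODO(), nodeName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```

Note that, as discussed above, this flavor still has the race window between the node registering and this patch landing.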
Okay, yeah, it's clear now. Then I would like to give an update on Aldo's side; he didn't manage to get to this meeting. He was also pursuing getting a definition, like a consensus, of what that label should look like. He and I attended the SIG Cloud Provider meeting yesterday; we didn't get a chance to talk much, but it doesn't seem that there's a lot of enthusiasm behind it.
They currently already have a way to set the physical host ID, but they use the zone label for it, and it's a configuration parameter. There's a long discussion in the thread there, and I think Tim called it out as a hack at some point. I think it's a hack, and I don't think it's portable, so we're trying to push back on that. I don't know how much we want to push back on it, because I guess our goal at this point is to get either a yes or a no.
D: This has been the subject of a long conversation and discussion in the community. Defining a standard label, in my opinion, shouldn't be that hard really, but I guess there were some concerns about the API and durability of the API. That's why people are considering other options and thinking that maybe we cannot introduce it.
But at this point I don't think we have heard a definite no yet, so I feel we should try: hopefully either we convince them that this is useful, or they come up with a compelling reason why we shouldn't do it, and I haven't seen one. I think the main reason we are advocating adding this is that it makes a lot of sense for on-prem especially, where you don't have the guarantees about VM durability that you get in cloud providers. In cloud providers there are some guarantees, but on-prem, essentially, users have to take care of the durability of their VMs themselves to a large extent. So for those cases it makes sense to have a feature like this, where you spread pods among physical hosts. This is a compelling reason, in my opinion, to have this label.
D: I feel, if you really have large data centers, you can have both zones as well, of course. I mean, I can imagine that for some users, especially as on-premise deployments become larger, and a lot of our users are converting their data centers to one of these deployments, zone can become another label of its own. Basically, users want to use zones as well, so I don't think reusing zone is a great idea, yeah.
A: So the last one is pod overhead. I think the person who's working on it is not on the call, so I looked at it again. Bobby, I think it's fine, so when you get a chance, if you want to approve it. I think the change on the scheduler side is minimal. I would have liked it to be more contained, since we do compute the resources that a pod consumes in multiple places.
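A minimal sketch of the resource math in question, assuming the pod overhead field as proposed (pod.Spec.Overhead, a ResourceList added on top of the container requests); the helper is illustrative, not the actual scheduler code, which repeats this computation in several places.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// podRequest sums the container requests for one resource and, when the pod
// declares an overhead, adds it on top so the sandbox itself is accounted for.
func podRequest(pod *v1.Pod, name v1.ResourceName) resource.Quantity {
	total := *resource.NewMilliQuantity(0, resource.DecimalSI)
	for _, c := range pod.Spec.Containers {
		if q, ok := c.Resources.Requests[name]; ok {
			total.Add(q)
		}
	}
	if q, ok := pod.Spec.Overhead[name]; ok {
		total.Add(q)
	}
	return total
}

func main() {
	pod := &v1.Pod{}
	pod.Spec.Containers = []v1.Container{{
		Resources: v1.ResourceRequirements{
			Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("500m")},
		},
	}}
	pod.Spec.Overhead = v1.ResourceList{v1.ResourceCPU: resource.MustParse("250m")}

	q := podRequest(pod, v1.ResourceCPU)
	fmt.Println("effective CPU request:", q.String()) // "750m"
}
```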
E: Nothing much; I think there's only one comment which I have left. Most of the comments which Abdullah had, I addressed, except for one: because the old PR, that is, the old priority function, did not have the ability to have weights, for backward compatibility I have added a default weight over there, so if somebody is not using weights, they get the default. I think that's the only thing which is outstanding.
E: It's because in previous versions they were just using extended resources, or CPUs and memory, so those would get a weight of one by default. And since most people do not use weights, they just want all their resources to be bin packed: they just list the resources which are going to be bin packed, so they do not set weights.
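A hedged sketch of the backward-compatibility defaulting being described: a resource listed for bin packing without an explicit weight falls back to 1, so configs written before weights existed keep their old behavior. The resourceSpec shape is a simplification of the requested-to-capacity-ratio priority's config, not the exact upstream type.

```go
package main

import "fmt"

// resourceSpec names a resource to bin-pack and its weight in the score.
type resourceSpec struct {
	Name   string
	Weight int64 // 0 means "not set by the user" in this sketch
}

const defaultWeight = 1

// applyDefaultWeights fills in the default weight for any resource the user
// listed without one.
func applyDefaultWeights(specs []resourceSpec) []resourceSpec {
	for i := range specs {
		if specs[i].Weight == 0 {
			specs[i].Weight = defaultWeight
		}
	}
	return specs
}

func main() {
	specs := applyDefaultWeights([]resourceSpec{
		{Name: "cpu"},                        // no weight given: defaults to 1
		{Name: "example.com/gpu", Weight: 5}, // explicit weight kept
	})
	fmt.Println(specs)
}
```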
D: One more thing I would like to add, if there are no other questions... let's see first if there are any questions. So yeah, one more thing that I would like to add is regarding the PR that changed some of our locks to a channel. Actually, just before coming to the meeting, I left a comment there with a link: using channels instead of locks is not necessarily a good idea.
This is actually an anti-pattern that is not recommended in Go. A lot of folks who are newer to Go tend to use channels more, to kind of overuse this feature of Go, and use it in place of locks as well. This is not necessarily good: channels actually use mutexes inside, so they are not necessarily more efficient, and in fact they could have more overhead.
D
So
we
should
not
use
it,
and
sometimes
they
can
make
our
code
more
complicated
as
well.
So
channels
are
meant
to
to
pass
values
between
threads
or
go
routines.
They
are
not
meant
for
synchronization.
So,
basically,
writing.
Like
one
thing,
I
will
value
or,
like
sometimes
people
use
it
to
just
send
an
empty
value
to
another
thread,
to
notify
that
thread
and
stuff
like
that.
Some
are
sometimes
these
use
cases
are
okay,
but
generally
using
them
for
synchronization
and
in
a
place
of
lock
or
miss
commute
X
is
not
a
good
pattern.
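A minimal sketch of the distinction being drawn: a mutex guards shared state, while a channel carries values between goroutines; using a channel merely to emulate the lock adds overhead and complexity for nothing.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Synchronization: protect shared state with a mutex.
	var (
		mu    sync.Mutex
		count int
		wg    sync.WaitGroup
	)
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("count:", count)

	// Communication: pass values between goroutines with a channel.
	results := make(chan int, 10)
	for i := 0; i < 10; i++ {
		go func(v int) { results <- v * v }(i)
	}
	sum := 0
	for i := 0; i < 10; i++ {
		sum += <-results
	}
	fmt.Println("sum of squares:", sum)
}
```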
A: I think in the last round there was at least a discussion there to encapsulate it behind an interface, and then whether to use a lock or a channel becomes an implementation detail. I do like the idea of encapsulating that, like a send/receive interface; it really cleans up a lot of the redundancies that we have around capturing errors, so I am supportive of that interface.
D: Sometimes it depends on what kind of parallelism you use. Sometimes we use this ParallelizeUntil, Kubernetes' internal library, and that one will run to completion; it does not stop, basically. Unless you pass it a context with cancel, it will just continue to completion. I think in that particular PR we don't cancel or anything; we just want to return the first error we encounter.
A: So you're able to pass a context, and you can always create a context with cancel; the cancel is a return value, so if you don't need it, then you just don't use it. Basically, the interface that you would have to create for capturing the error can be created without passing in the cancel callback. I don't know; take another look and give us feedback on that in a new proposal.
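For concreteness, a sketch of that pattern, assuming the parallelism helper is client-go's workqueue.ParallelizeUntil mentioned above: create a context with cancel internally, cancel on the first error, and return that error. The runParallel wrapper is illustrative, not the interface under review.

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"k8s.io/client-go/util/workqueue"
)

// runParallel executes work(i) for i in [0, pieces) across 16 workers and
// returns the first error encountered, cancelling the remaining pieces.
func runParallel(parent context.Context, pieces int, work func(int) error) error {
	ctx, cancel := context.WithCancel(parent)
	defer cancel()

	var once sync.Once
	var firstErr error
	workqueue.ParallelizeUntil(ctx, 16, pieces, func(i int) {
		if err := work(i); err != nil {
			once.Do(func() {
				firstErr = err
				cancel() // stop handing out further pieces
			})
		}
	})
	return firstErr
}

func main() {
	err := runParallel(context.Background(), 100, func(i int) error {
		if i == 42 {
			return fmt.Errorf("piece %d failed", i)
		}
		return nil
	})
	fmt.Println("first error:", err)
}
```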