From YouTube: Kubernetes SIG Scheduling Meetings 20170731
A: All right, so I started the video. As I said, we have a special guest at today's meeting: Brendan Burns is going to demonstrate ACI, which is a cool feature that's part of Azure now, as of about a week ago. After the presentation we can see if there are any other agenda items folks want to talk about. But why don't we let Brendan get started?
A: Hey, I have a quick question: is Brendan's audio level low for anyone else on the call, or is it just me? Oh, that's better, yeah, that's definitely better.
B: So what I have here is a very lonely Kubernetes cluster that just has a master; scheduling is obviously disabled on the master. Before showing more, I should take a step back: Azure Container Instances (ACI) lets you run individual containers without VM infrastructure, and what I'm going to demonstrate is how we link this up to Kubernetes.
B: So I have an empty Kubernetes cluster here without any nodes, just a master, and then I'm going to run the connector. What the connector does is basically bridge between the Azure Container Instances API and the Kubernetes API. So now, if we go over here and list nodes again, we have a new node: a new virtual node.
B: Well, all right, I'm going to stop the demo, but the basic notion is that it actually creates the container over in ACI (and I swear this worked five minutes ago): it creates the container over in ACI and then starts populating the information back into Kubernetes, so that the pod goes to Running and you can then see its information inside the Kubernetes API.
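The flow Brendan describes, a connector that registers a virtual node and then pumps container state back into the API server, can be sketched roughly as follows. This is a toy in-memory model of the idea, not the actual connector code; every name in it is illustrative.

```python
# Toy model of the ACI connector loop described above (illustrative only).
# It registers a "virtual node", creates containers in a stand-in for the
# ACI API, and reflects their status back into a stand-in API server.

class FakeACI:
    """Stand-in for the Azure Container Instances API."""
    def __init__(self):
        self.containers = {}

    def create(self, name, image):
        self.containers[name] = {"image": image, "state": "Running"}

    def status(self, name):
        return self.containers[name]["state"]

class Connector:
    def __init__(self, api_server, aci):
        self.api = api_server   # dict standing in for the Kubernetes API server
        self.aci = aci

    def register_virtual_node(self):
        # Advertise a node with effectively unbounded capacity; whether that
        # is even the right model is one of the open questions discussed.
        self.api["nodes"]["virtual-aci"] = {"cpu": "1000", "memory": "4Ti"}

    def sync(self):
        # Create anything scheduled to the virtual node, then pump status back.
        for name, pod in self.api["pods"].items():
            if pod.get("nodeName") != "virtual-aci":
                continue
            if name not in self.aci.containers:
                self.aci.create(name, pod["image"])
            pod["phase"] = self.aci.status(name)

api = {"nodes": {}, "pods": {"web": {"image": "nginx", "nodeName": "virtual-aci"}}}
conn = Connector(api, FakeACI())
conn.register_virtual_node()
conn.sync()
print(api["pods"]["web"]["phase"])  # the pod shows as Running in the "API server"
```

The key point the sketch captures is that there is no kubelet in the loop: the connector alone makes the API server believe a pod is running somewhere.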
B: I was assuming it reports like a hundred cores or something like that. I forget exactly what I set, but there is quota in ACI, a quota on how many containers you can run, so we could populate that information in. But actually that's one of the interesting questions, basically: if this thing effectively has infinite capacity, is modeling it as a virtual node even the right thing to do?
B: It's certainly the easiest... sorry, I should hold my microphone up. It certainly is the easiest thing to do, but is it the right thing to do? And of course I don't model any kind of affinity right now, so the scheduler by default is going to view that node as a failure zone; it's going to try to avoid placing multiple containers onto that theoretical failure zone. But of course ACI isn't really a node, so putting multiple containers on it doesn't actually imply anything about failure.
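The default spreading behavior being described can be approximated as a priority function that favors nodes holding fewer replicas of the same workload. This is a rough sketch of the intent, not the real SelectorSpreadPriority implementation, and it shows why a single huge virtual node keeps losing points as pods pile onto it:

```python
# Rough sketch of pod-spreading scoring (illustrative only): nodes with
# fewer matching replicas score higher, so the scheduler avoids stacking
# replicas on one node, even when that "node" is really ACI and
# co-location implies nothing about failure domains.

def spreading_scores(pods_per_node):
    """Map node -> 0..10 score; fewer existing replicas means a higher score."""
    max_count = max(pods_per_node.values()) or 1
    return {
        node: round(10 * (max_count - count) / max_count)
        for node, count in pods_per_node.items()
    }

# Three replicas already sit on the virtual ACI node, none on node-a:
scores = spreading_scores({"node-a": 0, "virtual-aci": 3})
print(scores)  # {'node-a': 10, 'virtual-aci': 0}
```

Under this scoring, the virtual node is penalized exactly as if co-locating pods on it reduced fault tolerance, which for ACI it does not.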
B: Right, exactly. And so there's a degree to which I think the interesting question effectively is: how much affinity do we push into the ACI API, versus how much do we push into the Kubernetes API? And is it even the case that we should try to use the same scheduler? Maybe the truth is that we shouldn't use the same scheduler if you want to run stuff from Kubernetes onto this thing.
B
B
Toleration
effectively
well
write
a
complete
scheduler,
that's
responsible
for
scheduling
things
and
understands
a
CI.
Iii.
I
think
that
this
is
the
goal
of
having
the
goal
of
bringing
this
up
of
the
meeting
effectively
is
because
I
don't
think,
we've
explored
this
before
this
is
the
brand-new
piece
of
cloud
infrastructure
that
no
one
has
really
ever
built
before
and
as
such,
but
I,
but
I,
it's
very
intentionally
intended
to
be
a
low-level
building
block
that
orchestrators
build.
On
top
of.
A: Yeah, although you could argue that most people are running Kubernetes on virtual machines today, so if you ask for spreading your pods across nodes, those nodes could all be on the same physical machine and you actually don't get any failure tolerance. The existing infrastructure that most people are using is already virtualized.
B: Yeah, I will also say that part of the goal, certainly when we do this in Azure, and I think the same thing is true in GKE, is that the cloud provider explicitly tries to spread the machines. We explicitly tell the infrastructure, hey, please try not to co-locate these things, using grouping primitives that try to avoid landing multiple machines on the same host.
A
Right,
yeah,
exactly
exactly
and
I
mean
I
assume
that
the
management
since
groups
do
the
same
thing
and
I'm,
not
a
hundred
percent
sure.
But
but
yes,
but
that's
exactly
what
the
issue
is,
is
like
how
much
you
have
like
two
level:
two,
the
whole
scheduling
going
on
here
and
and
and
how
much
information
you
expose
from
the
lower
level.
Or
do
you
try
to
combine
this
scheduling,
which
that
seems
hard
and
may
be
undesirable?
But
yes,.
B: And here's an even deeper thing, which is that the node is vanishing at some level, and yet we still model nodes. We believe that pods land on nodes at some level, and if that is kind of going away, what does that mean?
A: So when you do kubectl logs, what do you see? You get the logs, just like you normally would? OK, so you can get the logs, but you obviously don't get things like the system-level logs.
B: So I think the first question, effectively, is: does the Kubernetes core scheduler even want to know about this and understand this? If the answer there is no, then the rest of it is obvious; the rest of it is just, okay, great, we'll implement our own scheduler.
B: And there are some details, like kubectl exec and kubectl logs kind of use the kubelet to go find and wire things up, but you could have a virtual kubelet that knew what it was doing and did the right things to wire logs and exec up. And so the scheduler would basically be doing the same thing as this connector, which is, it would say: oh hey, look, there's a pod over there; it's not scheduled; it doesn't have a node name associated with it.
B: Therefore, I need to go take that pod and its annotations (and of course, if we went down this road, you'd potentially have special annotations for ACI), go create the ACI containers that map to it, and then pump the data back. In this world the kubelet doesn't exist; there is basically no kubelet for ACI, and so the connector is the thing that's pumping all the data back into the API server, to make the API server understand that there's a pod there.
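A separate scheduler of the kind being discussed would, at its core, do what is described here: watch for pods with no node name and bind them. A minimal sketch, using in-memory stand-ins and a made-up annotation key:

```python
# Toy model of the "separate scheduler" idea: find pods that have no
# nodeName, check an illustrative, hypothetical annotation opting them
# into ACI, and bind them to the virtual node. A real implementation
# would watch the API server and use the pods/binding subresource.

VIRTUAL_NODE = "virtual-aci"
OPT_IN = "example.com/burst-to-aci"   # hypothetical annotation key

def schedule_pending(pods):
    bound = []
    for name, pod in pods.items():
        if pod.get("nodeName"):          # already scheduled
            continue
        if pod.get("annotations", {}).get(OPT_IN) != "true":
            continue                     # leave it for the default scheduler
        pod["nodeName"] = VIRTUAL_NODE   # the "binding"
        bound.append(name)
    return bound

pods = {
    "ci-job":  {"annotations": {"example.com/burst-to-aci": "true"}},
    "web-app": {"annotations": {}},
}
print(schedule_pending(pods))           # ['ci-job']
print(pods["web-app"].get("nodeName"))  # None, left untouched
```

The annotation check is what lets this scheduler coexist with the default one: each only claims pods addressed to it.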
B: The question effectively is: do we think that there's enough of an interface layer between the pod and whatnot that we can refactor the existing scheduler? Because that's what's happening right now: the existing scheduler is just saying, I've got space over there, I'm going to go use it. Or do we believe that there are enough ACI-isms that we should just implement a separate scheduler?
B: And that's probably what's going on; honestly, in the connector that's more or less what's happening. There are a bunch of different threads, and one of the threads is just looking for containers; it's basically just being the kubelet. It's looking for containers that have been scheduled to this virtual ACI node, and then it goes and creates them in ACI, right.
A: But I still don't understand exactly what this other scheduler would do. I mean, we talked about how it would set the node name to ACI or something; that's what we were using in our hypothetical example here. But in that case it's really just the same as our existing scheduler, operating the way your prototype here is doing.
B: Ideally, you'd say it per pod, right? Ideally I'd be able to mark this as a flexible pod versus a not-flexible pod, because ideally this is for bursting. I would love to be able to set up Jenkins to target this thing with minimal effort on the part of the user.
A: ...that it's the right approach. That's not bad! That might be the right thing to do. But it's interesting to think about whether the only change you would need initially is to take the existing scheduler and just choose different policies, like disabling the spreading or something like that, because that would obviously be the simplest.
A: Yeah, and I guess the question is what other functionality you would want in the scheduler. Or maybe the idea is that you then push some of the scheduling policies into the controller that is connecting Kubernetes to ACI; so the spreading policy, maybe, would be implemented there instead of in Kubernetes.
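The "just choose different policies" option has a concrete shape: the kube-scheduler of that era could be given a policy file (via `--policy-config-file`) listing which predicates and priorities to run, so dropping the spreading priority is a configuration change rather than new code. A sketch of the idea, modeled as plain Python data; treat the exact priority names as approximate for the version in question:

```python
# Sketch of selecting scheduler policies and dropping the spreading
# priority. The names mirror the 1.x-era policy-file priorities but are
# used here only as illustrative data.

default_priorities = [
    {"name": "SelectorSpreadPriority", "weight": 1},   # the spreading behavior
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1},
]

def without_spreading(priorities):
    """Drop the spreading priority, leaving everything else untouched."""
    return [p for p in priorities if p["name"] != "SelectorSpreadPriority"]

policy = {
    "kind": "Policy",
    "apiVersion": "v1",
    "priorities": without_spreading(default_priorities),
}
print([p["name"] for p in policy["priorities"]])
# ['LeastRequestedPriority', 'BalancedResourceAllocation']
```

The same mechanism could instead reweight priorities, which is why it is attractive as the "simplest change" mentioned above.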
E: Yeah, whenever we have preferred scheduling with pod anti-affinity, the topology key can be empty, and I think out of the four cases that is the only one of the four where we allow an empty topology key. That was true in 1.6; in 1.7 we made one change. I made that change, and I think, David, you had created that issue. When I looked at that change, part of what we did applies whenever the affinity-in-annotations feature is disabled.
E: We added a check that the topology key is never empty for any of the four cases. Because of that, when people upgrade from 1.6 to 1.7 with an empty topology key, it fails in 1.7, because 1.7 no longer allows an empty topology key. That is kind of a regression, and so that is the issue, basically.
E: Yeah, actually, in 1.8 we have completely removed that code. You remember we had a feature gate for affinity in annotations, right? What we did was add a check in validation: whenever that affinity-in-annotations feature is disabled, we always check that the topology key is not empty. And by default that feature is disabled in 1.7, and that's why that issue is happening.
E: No, but the problem happens when you set that in the pod fields too, I mean the spec fields. Basically, for preferred scheduling, soft pod anti-affinity, we do allow an empty topology key, but somehow in 1.7 we now have a check that does not allow that.
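The rule under discussion, that of the four affinity cases only preferred (soft) pod anti-affinity permits an empty `topologyKey`, can be written down as a small check. This is a sketch of the intent, not the actual Kubernetes validation code:

```python
# Sketch of the topologyKey rule discussed above: required/preferred pod
# affinity and required pod anti-affinity must name a topology key, while
# preferred (soft) pod anti-affinity is the one case where empty is
# allowed. Illustrative only.

def topology_key_ok(kind, required, topology_key):
    """kind: 'affinity' or 'anti-affinity'; required: hard vs. soft term."""
    if topology_key:
        return True
    # An empty key is only tolerated for soft anti-affinity.
    return kind == "anti-affinity" and not required

print(topology_key_ok("anti-affinity", required=False, topology_key=""))  # True
print(topology_key_ok("affinity", required=False, topology_key=""))       # False
print(topology_key_ok("anti-affinity", required=True, topology_key=""))   # False
```

The regression being described amounts to the 1.7 validation rejecting even the first case, which 1.6 accepted.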
E: If we take that check out for the affinity-annotations-disabled case, then it will work fine, because then the code is exactly the same as what we already had in 1.6 and what we currently have in 1.8, so it will be at parity. It's just because of that check that we are having that issue.
A: Otherwise, yeah, I can't remember; Wojtek was the main person who was working on that, so I don't remember. But yeah, if the check is not protecting us from something that's unimplemented, then it seems fine to get rid of the check, right.
A: Does anyone else have anything else they want to talk about? The one thing I wanted to bring up: Tim St. Clair has a conflict at this time, and at least one person from the East Coast has a conflict one hour later than now. Tim and I found that we are both available at noon.
A: What about the other folks? Is there anyone who would not be able to attend at noon on Thursday? That would also make it a little bit earlier. We don't seem to ever get anyone from Europe attending this, but that way it would at least be a more reasonable hour in Europe, 9:00 p.m. It's not going to help the folks in Asia.
A: There actually seems to be more interest in this SIG in Asia than in Europe, but it would help the hypothetical people from Europe who don't attend, maybe, I don't know. All right, so nobody objected to that. I'll send out an email so we can see if anybody has an objection, and if not, I guess we can change it to noon on Thursday, starting two weeks from now. Anything else anybody wants to talk about?
A: The assignees for the projects? Oh yeah, that's a good point. I signed up for the ones that I have the bandwidth to be the approver for, but there are still a bunch that don't have approvers signed up, so we have to figure out what to do about that. Maybe I should send email to the mailing list, or you can do it. Like Bobby was talking about, on the spreadsheet for 1.8 we introduced this idea of having approvers, so that the person working on the PR would know when they're done.
A: Basically, when it could merge. I made myself the approver for a few of the things on the list, but I can't do all of them because I don't have time, and so there are still a bunch that don't have approvers. That is potentially a problem, because it means we might get to the code freeze and nobody has the time to actually review them.
F: Since the person who raised it is also here, I have a question with respect to kubectl drain. If you remember, we want to use the NoSchedule taint instead of the unschedulable field, and there is an issue, a point that he has raised with respect to using the NoSchedule taint. Do you think it's appropriate to talk about it now?
E: Basically his idea is that he checks whether there is any NoSchedule taint on the node, and if so he treats the node as unschedulable. But what I suggested is that we should not rely on that, because a NoSchedule taint is not exactly the same as unschedulable: as long as there are pods that tolerate that NoSchedule taint, those pods can still be scheduled, right? So what my suggestion was...
E: But generally, in the case of drain, whenever we set unschedulable we don't allow any other pod to schedule, right, except maybe DaemonSets? So that's why I was suggesting that we have one standard unschedulable taint, plus some policy that says no pod should have a toleration for that taint, maybe except DaemonSets, because DaemonSets could be scheduled anyway, since they have some sort of tolerate-everything toleration or something like that.
E: So my point was that we should not use some general taint, a NoSchedule taint that is already on the node, to signal that the node is unschedulable, because you don't want... right, yeah, exactly: what if, by mistake, there are new pods that tolerate that taint, and then those pods are able to be scheduled while the node is in the process of being drained?
A: Yeah, this is a problem. I don't know; it's kind of not great to have a taint that has special meaning. I'd rather we could do this using some standard mechanism, some kind of authorization mechanism, an admission controller, or RBAC. We have a related issue with dedicated nodes, where we don't want to allow people to just set a toleration for a dedicated-node taint, because then they could use other people's dedicated nodes, and this sounds a lot like that.
A: But one of the issues, and I think it comes up in both cases, is that I think we have a star toleration; I think there's some way to express that you tolerate all taints, I can't remember. So that's going to be a problem. I guess whatever authorization scheme you use would have to prohibit that, in addition to prohibiting tolerating the dedicated-node taint or the drain taint.
A
So
but,
but
maybe
like
my
hope,
would
be
that
these
are
these
cases
are
similar
enough,
like
the
the
dedicated
node
scenario
and
the
draining
scenario,
if
like
in
one
case,
you
want
to
limit
who
can
set
the
Toleration
and
in
the
other
case
you
want
to
forbid
the
Toleration
completely,
but
that
sounds
similar
enough
that
maybe
we
could
use
the
same
same
mechanism
for
it.
On
the
other
hand,
I
mean
I,
guess
you
could
argue
like
if
we
aren't
going
to
allow
people
that
tolerate
it,
then
maybe
this
is
just
a
bad
idea.
A: I think the question is whether we really want to prohibit tolerating that taint. Certainly for backward compatibility you want the existing stuff to keep working, but the existing stuff isn't going to be tolerating that taint anyway, except for the one case I would mention, which is the wildcard.
A: It would be nice. I think we do eventually want to get rid of unschedulable; there's no reason we should need it if we have taints and tolerations. But there are some access-control, permissions kinds of issues with tolerations that, in this case and in the dedicated-nodes case, we have to figure out. All right, yeah, thanks for bringing that up; I haven't had time to look into that one either. Anything else people wanted to talk about?