From YouTube: Kubernetes SIG Node 20181009
B: Looks like a logical home for that. We discussed this briefly in June in the SIG Node meeting, and I guess the opinion seemed to be inclined towards this being okay — moving under the community SIG organization in some time frame. And now I brought this topic up again on the table. I already created...
A: ...that we want to continue to sponsor, and move them under current SIGs where that's appropriate, and I think it's better just from a, you know, same-project discovery standpoint. So I have no objections to doing this. I'm not aware of any other immediate actions from folks, and so if it's just a matter of the process — Dawn's not here to ack as well — but basically the process is: we need to get the issue opened in the appropriate repo to execute the transfer, and then Dawn and myself should go and ack.
D: So this is talking about the case where you have a cluster where your nodes are not all the same, and you might have nodes that support different subsets of the runtime classes that are available in the cluster. Ideally, the user experience would look something like: I create a pod or workload or ReplicaSet...
D: ...we can pick the best one, and I can move forward with that and write up a more formal KEP. So yeah, in terms of the goals, I already mentioned the primary goal is to be able to have pods run on nodes that support the runtime class, but there are some other features that we've talked about adding to RuntimeClass, and I want to make sure that the solution we land on scales to the future state when those features are supported as well.
D: This could also be useful for rollouts. So say I'm rolling out a new version of Kata Containers: I don't care, as the developer, which version my pod ends up running with, but maybe in the new version they've made some improvements to the memory footprint, and so it has a different pod overhead. And so that's where it gets pretty complicated, thinking about the scheduling. I'm not trying to solve this problem right now, but I would like to kind of think about whether a solution is going to be possible with the scheduling approach that we take.
D: So going into the requirements: there needs to be a mechanism in the scheduler to ensure that pods are steered to appropriate nodes — so this is basically, what does the scheduler do to actually steer the pods? There needs to be some sort of either discovery or registration mechanism, so that either you say "these are the runtime classes that this node supports", or the node itself says "these are the runtime classes that I know I support". And then the last piece is basically the API of how the node actually declares which runtime classes it supports.
D: So going into that first requirement, around what the scheduling mechanism could look like: there are really two approaches that I came up with, and I'm not an expert in Kubernetes scheduling, so I definitely may have missed some other possibilities, and this is kind of where I'm looking to get some feedback.
D: The first approach would be to basically not make any changes to the scheduler, and instead rely on the existing scheduling primitives — so node affinity would probably be the main one here. What this could look like is: the RuntimeClass API object specifies the node affinity for the nodes that support that runtime class, and then we would have a RuntimeClass admission controller that basically takes that node affinity and injects it into the pod.
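A minimal sketch of that admission step, modeled in plain Python — the object shapes and the `sandbox.example.com/kata` label are illustrative assumptions, not the real Kubernetes API types:

```python
# Illustrative model of the "no scheduler changes" approach: a RuntimeClass
# carries node-affinity information, and an admission step copies it into
# each pod that requests that runtime class. All names here are made up.

runtime_classes = {
    "kata": {
        "handler": "kata",
        # Selector the cluster admin attached to this RuntimeClass.
        "node_affinity": {"sandbox.example.com/kata": "true"},
    },
}

def admit(pod):
    """Merge the RuntimeClass's node affinity into the pod's node selector."""
    rc_name = pod.get("runtimeClassName")
    if rc_name is None:
        return pod  # no runtime class requested; nothing to inject
    rc = runtime_classes.get(rc_name)
    if rc is None:
        raise ValueError(f"unknown RuntimeClass {rc_name!r}")
    merged = dict(pod.get("nodeSelector", {}))
    # Combining the user's own selector with the injected one is exactly
    # the merging that the discussion calls "fairly hairy" to debug.
    merged.update(rc["node_affinity"])
    pod["nodeSelector"] = merged
    return pod

pod = {"name": "web", "runtimeClassName": "kata", "nodeSelector": {"zone": "b"}}
admit(pod)  # pod now selects both "zone: b" and the kata label
```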
D: The advantages of this approach are that there aren't any scheduler changes required; it's a fairly, you know, non-invasive change to the system. But it would be very hard to mix this in with choosing runtime classes based on what's available in the cluster, in the case when we support multiple. Also, if the user has additionally specified some node affinities to select some subset of the nodes that support that runtime class, then mixing those node affinities together gets fairly hairy. And then there's also debugging.
D: Instead, it looks something like, you know, "no nodes matched the selector with this label set that you specified". And so the other approach is just to modify the scheduler itself. The scheduler has a pretty modular architecture, and so these new scheduling predicates are — sorry — kind of a standalone thing. In this case we would basically write a scheduler predicate so that the scheduler actually understands the RuntimeClass primitive, and it would probably have a RuntimeClass informer.
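As a rough model of that second approach, a runtime-class predicate filters out nodes that don't support the pod's requested class. A plain-Python sketch, assuming nodes somehow report a list of supported runtime classes (one of the registration options discussed later); the real predicate would hang off the scheduler's plugin machinery and a RuntimeClass informer:

```python
# Illustrative scheduler predicate: a node is feasible only if it supports
# the pod's runtime class. Data shapes are assumptions, not the real
# scheduler framework types.

def runtime_class_predicate(pod, node):
    """Return True if `node` can run `pod`'s requested runtime class."""
    rc_name = pod.get("runtimeClassName")
    if rc_name is None:
        return True  # pods without a runtime class can go anywhere
    return rc_name in node.get("supportedRuntimeClasses", [])

def feasible_nodes(pod, nodes):
    """Names of nodes that pass the predicate for this pod."""
    return [n["name"] for n in nodes if runtime_class_predicate(pod, n)]

nodes = [
    {"name": "node-a", "supportedRuntimeClasses": ["runc"]},
    {"name": "node-b", "supportedRuntimeClasses": ["runc", "kata"]},
]
feasible_nodes({"runtimeClassName": "kata"}, nodes)  # -> ["node-b"]
```

A nice side effect of a dedicated predicate is the failure message: the scheduler can say "no node supports RuntimeClass kata" instead of a generic label-mismatch error.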
A: It might be me confusing topics, but I'm thinking of the list of resources that the scheduler must watch. With 1b, we're saying the scheduler must watch the RuntimeClass resource as well; and then later on you have an option — 2c, I think, or something — where a node is not necessarily reporting the runtime class on its status, but instead it's driven by some label selector. We're basically saying: scheduler, we need to be aware of how runtime classes are mapped to nodes, and we'd have to start watching runtime classes.
D: Okay, so I'll talk about discovery and registration now. It's really discovery or registration — and/or, I guess — but this is basically: how is it determined what runtime classes a node supports? So the first option I listed is based on runtime handler discovery. Right now, the only piece of the RuntimeClass that is tied to the actual node is the runtime handler, which is interpreted by the CRI implementation to actually run the pod with the correct runtime. And so the downside is that CRI implementations need to be updated with the new API, and it implies that the runtime handler is the only meaningful piece of a RuntimeClass that a node needs to support. So this means that if, say, we're doing a rollout of a new runtime version, you would need to make sure that the new versions had a different runtime handler name; or, say we started including devices in the RuntimeClass definition, then that would need to be taken into account as well.
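The discovery option can be modeled as: the node learns which runtime handlers its CRI implementation exposes, and a RuntimeClass counts as supported when its handler is in that set. A plain-Python sketch — the `discovered_handlers` input stands in for a hypothetical CRI status call, which is exactly the API addition the discussion says CRI implementations would need:

```python
# Illustrative handler-discovery model: each RuntimeClass names a runtime
# handler, and a node supports the class iff its CRI runtime reports that
# handler. This also shows the stated limitation: the handler is the only
# part of the class that discovery can check.

runtime_classes = {
    "native": {"handler": "runc"},
    "sandboxed": {"handler": "kata"},
}

def supported_classes(discovered_handlers):
    """RuntimeClasses whose handler the node's CRI implementation reports."""
    return sorted(
        name
        for name, rc in runtime_classes.items()
        if rc["handler"] in discovered_handlers
    )

supported_classes({"runc"})          # -> ["native"]
supported_classes({"runc", "kata"})  # -> ["native", "sandboxed"]
```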
D: There's not really any opportunity for the user — sorry, the cluster admin — to say, "well, yes, this node supports this runtime class, but actually I don't want pods of this runtime class running on here, so I want to update the set". But potentially we could mix in one of the manual options. So the first manual option is basically to add a list of runtime classes to either the node's spec or status — I'm not sure where it would make more sense — but say, you know, these...
D: So the third option is, rather than declaring the set of runtime classes that each node supports, to instead declare the set of nodes that support each runtime class. So rather than updating the node spec, update the RuntimeClass spec to include the nodes that support that runtime class. And yeah, I think this approach maybe scales a little better, especially if we go with kind of the label selector approach that I'll talk about later — probably we don't want to actually have a literal list of nodes in the RuntimeClass spec.
D: I wasn't envisioning that we would define a well-defined label. I was kind of thinking more that the RuntimeClass has a label selector over nodes — and that could even look a lot like node affinity, or just be a node affinity — and then it's up to the cluster provisioner to, you know, declare the labels on the right sets of nodes. Okay.
D: It could be that, you know, you have a runtime-class node label that's like something .io/runc, /kata, whatever, but I'd rather, I think, leave it open and just provide some guidelines for that. Okay, and so the last section is just about the API, which I think really falls out from the decisions made in the previous two sections. So there's not a whole lot to analyze here; it's just the different ways of reporting the runtime classes supported by the nodes, or the nodes supported by the runtime class.
D: So yeah, that's all I came up with after chatting a bit with Bobby — who, if you don't know him, is one of the SIG Scheduling leads. I'm sort of leaning towards implementing the native scheduler support, and then specifying a label selector on the RuntimeClass that matches that runtime class to nodes, and leveraging that information in the scheduler plugin. So yeah, any thoughts on that?
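Putting those two leanings together — a node label selector on the RuntimeClass plus native scheduler support — could be modeled roughly like this (plain Python; the field names and the `runtime/kata` label are illustrative assumptions):

```python
# Illustrative model of the direction described above: each RuntimeClass
# carries a node label selector, and the scheduler filters nodes through
# it. Only equality-based (match-labels) selection is modeled here.

runtime_classes = {
    "kata": {"node_selector": {"runtime/kata": "true"}},
}

def selector_matches(selector, labels):
    """True if every key/value pair in `selector` appears in `labels`."""
    return all(labels.get(k) == v for k, v in selector.items())

def schedulable_nodes(pod, nodes):
    """Nodes whose labels satisfy the pod's RuntimeClass selector."""
    rc_name = pod.get("runtimeClassName")
    if rc_name is None:
        return [n["name"] for n in nodes]
    selector = runtime_classes[rc_name]["node_selector"]
    return [n["name"] for n in nodes if selector_matches(selector, n["labels"])]

nodes = [
    {"name": "node-a", "labels": {"zone": "a"}},
    {"name": "node-b", "labels": {"zone": "b", "runtime/kata": "true"}},
]
schedulable_nodes({"runtimeClassName": "kata"}, nodes)  # -> ["node-b"]
```

Note the division of labor this implies: the cluster provisioner labels the nodes, the admin writes the selector on the RuntimeClass, and the pod author only names the class.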
D: Yes, except that I'm not really sure what "unrelated to the runtime class" means if you're selecting on them, right? I mean, for example, suppose you labeled your nodes into sets A, B, and C, just based on location — yeah, yeah, based on location — and it just so happened that all of your container runtimes were in zone B. I guess there is a concern that you could say, "well, actually, I'm going to have three different runtime classes".
D: So the advantage I see to this is: the user who's creating the pod spec is probably, like, the application developer. They might care about which runtime they're using, but they don't necessarily care about which set of nodes they're scheduling to — versus the RuntimeClass is configured by the cluster administrator, and so they're the ones who say, "okay, this set of nodes is going to support this runtime class".
A: Let me ask this: if I wanted to understand why a pod was scheduled where it was scheduled, do you see that the label selector clause that's on the RuntimeClass associated with that pod gets merged with the pod's node selector, or do I need to do a different level of analysis to understand why the pod landed where it landed?
D: So that's a different problem. That's what I mentioned as, like, one of the future directions that we might want to go in — selecting the runtime class from the pod — and so for now I'm just explicitly selecting a runtime class, yeah. But that is one of the reasons that I prefer the native scheduler support. Suppose I say that I want to run my pod with Kata Containers, and we support Kata 1.1.1 and 1.1.2, and 1.1.1 has an overhead of 50 megabytes and 1.1.2 has an overhead of 40 megabytes.
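That version/overhead example could look like this as a toy model — among candidate runtime-class versions for one runtime, pick the lowest-overhead one whose pod-plus-overhead total still fits the node (the version names and megabyte figures come from the hypothetical above):

```python
# Toy model of choosing among runtime-class versions by pod overhead,
# using the hypothetical Kata 1.1.1 / 1.1.2 numbers from the discussion.
# This is a sketch of the future direction, not anything implemented.

candidates = {
    "kata-1.1.1": {"overhead_mb": 50},
    "kata-1.1.2": {"overhead_mb": 40},
}

def pick_runtime_class(pod_request_mb, node_free_mb):
    """Prefer the lowest-overhead class whose pod + overhead fits the node."""
    fitting = [
        (rc["overhead_mb"], name)
        for name, rc in candidates.items()
        if pod_request_mb + rc["overhead_mb"] <= node_free_mb
    ]
    if not fitting:
        return None  # nothing fits; the pod stays pending
    return min(fitting)[1]

# With 145 MB free, only the 40 MB-overhead version fits a 100 MB request.
pick_runtime_class(pod_request_mb=100, node_free_mb=145)  # -> "kata-1.1.2"
```

This is also where the complication lives: the choice of class changes the overhead, which changes feasibility, so class selection and node selection become entangled.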
E: ...we might want to think about reporting of node features, because if I would say I support Kata 1.1.1 and, like, some other runtime 1.1.2, and then I ask for Kata 1.1.2 — is it going to be able to figure out that there's a certain grouping of labels for each runtime class? I suppose maybe that's a little bit further than that, essentially.
G: How is the pod overhead going to be used? I mean, once we have that attached, the regular scheduler would also be aware of that, yeah? So why would you need another scheduler to understand it? I mean, won't the scheduler consider how much the overhead is, and what the request and limit are, and then make a decision, right? Then, if that's the case...
D: You could have a pod overhead field on the pod, and then the admission controller basically says, "okay, you're using this runtime class, then your overhead is this", and the scheduler understands that overhead field on the pod. Or, alternatively, the scheduler knows how to look up the RuntimeClass from the pod, and then looks up the overhead that's specified in the class from there.
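The first alternative — admission injecting the overhead onto the pod, so the scheduler only has to read pod fields — might be sketched like so (illustrative shapes; `overhead_mb` and the other field names are made up for this model):

```python
# Illustrative model: an admission step copies the RuntimeClass's pod
# overhead onto the pod, and the scheduler's fit check then accounts for
# it without knowing about RuntimeClass at all. Field names are made up.

runtime_classes = {"kata": {"overhead_mb": 40}}

def admit_overhead(pod):
    """Stamp the class's overhead onto the pod at admission time."""
    rc = runtime_classes.get(pod.get("runtimeClassName"))
    if rc is not None:
        pod["overhead_mb"] = rc["overhead_mb"]
    return pod

def fits(pod, node_free_mb):
    """Scheduler-side fit check: request plus any injected overhead."""
    return pod["request_mb"] + pod.get("overhead_mb", 0) <= node_free_mb

pod = admit_overhead({"runtimeClassName": "kata", "request_mb": 100})
fits(pod, node_free_mb=150)  # 100 + 40 <= 150 -> True
```

The second alternative would drop `admit_overhead` and instead have the scheduler resolve the RuntimeClass itself, at the cost of the scheduler watching RuntimeClass objects.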
A: I'm not convinced on the 1b choice over 1a, and I like the idea, in theory, that I can look at a pod and understand why it was bound to a node without having to look at additional resources, I guess. And especially because, in the case of RuntimeClass — if I'm not mistaken, Tim — mostly everything on there was immutable post-creation.
A: What benefit do I get with the indirection, I guess? Versus — the major pro I have is: what you're basically positioning runtime classes for here is similar to how we position other things, say in OpenShift, as a way of guiding pods to the right pool of nodes. So whether that's a namespace-scoped node selector or a runtime-class-based node selector, it's just yet another thing that merges into your node selection set, and to me that seems easy to understand.
A: I mean, I just — I know we have other admission controllers, in open-source Kubernetes as well as downstream, you know, distributions, that do that, and so at least I have a sense of what that looks like. But maybe there's something unique I'm missing here; maybe your example covered it, although, yeah.
D: I wasn't really thinking about the use case of, kind of, the positive feedback — the pod is actually scheduled and you want to know why it was scheduled there — versus I was thinking more about the case where the pod is not scheduled, and you want to know why it's not scheduled. And in that case, I think, if you're just merging in the node selectors, then you could end up with a very complicated error message that says, "well, nothing in this label set matched".