From YouTube: Kubernetes SIG Node 20190326
Description
Meeting Agenda:
https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/
A
So maybe some nodes support, say, some set of runtime classes, and another set of nodes supports another set of runtime classes, and we want to make it so that which set of nodes supports which set of runtime classes is sort of defined at the cluster level, or through the runtime class, or the nodes, or something like that. The users don't really need to think about that topology too much.
A
So if I say I want to run with, you know, a Windows Hyper-V runtime class, then I know that I'll get scheduled onto a Windows node that supports that.
A
So at a high level, the proposal is to add a new topology struct to the RuntimeClass definition, and within the topology there would be essentially some sort of node selector and a set of tolerations. Both of these are required, because the node selector says these are the nodes that I want to match. So that's sort of an attractive scheduling thing, kind of pulling the pod to the set of nodes.
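As a sketch, the topology struct described here could look like the following simplified Go types. The field and type names are assumptions based on the discussion, not the final API (for reference, this idea later landed as the RuntimeClass `scheduling` field); the real core types live in k8s.io/api/core/v1.

```go
package main

import "fmt"

// Toleration is a simplified stand-in for v1.Toleration.
type Toleration struct {
	Key, Operator, Value, Effect string
}

// Scheduling sketches the proposed "topology" struct: a node selector that
// pulls the pod toward supporting nodes, plus tolerations for any taints on
// those nodes. Field names are illustrative.
type Scheduling struct {
	NodeSelector map[string]string
	Tolerations  []Toleration
}

// RuntimeClass with the proposed scheduling/topology section attached.
type RuntimeClass struct {
	Name       string
	Handler    string
	Scheduling *Scheduling
}

func main() {
	// Hypothetical Windows Hyper-V runtime class from the example above.
	rc := RuntimeClass{
		Name:    "windows-hyperv",
		Handler: "hyperv",
		Scheduling: &Scheduling{
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
			Tolerations: []Toleration{
				{Key: "windows", Operator: "Exists", Effect: "NoSchedule"},
			},
		},
	}
	fmt.Println(rc.Name, rc.Scheduling.NodeSelector["kubernetes.io/os"])
}
```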
A
So this would happen at admission time, as a built-in mutating admission controller. Tolerations are really easy: we just append the tolerations from the runtime class to the pod's tolerations, and eliminate any duplicates there. The node selector bit is a little trickier, and depends on the node selector model that we choose.
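The toleration merge described above can be sketched as follows. This is a simplified stand-in, not the actual admission-controller helper in k8s.io/kubernetes, and it only drops exact duplicates.

```go
package main

import "fmt"

// Toleration is a simplified stand-in for v1.Toleration.
type Toleration struct {
	Key, Operator, Value, Effect string
}

// mergeTolerations appends the RuntimeClass tolerations to the pod's
// tolerations, skipping exact duplicates, as described for the
// admission-time merge.
func mergeTolerations(pod, runtimeClass []Toleration) []Toleration {
	merged := append([]Toleration{}, pod...)
	for _, t := range runtimeClass {
		dup := false
		for _, p := range merged {
			if p == t {
				dup = true
				break
			}
		}
		if !dup {
			merged = append(merged, t)
		}
	}
	return merged
}

func main() {
	pod := []Toleration{{Key: "a", Operator: "Exists"}}
	rc := []Toleration{
		{Key: "a", Operator: "Exists"}, // duplicate: dropped
		{Key: "windows", Operator: "Exists", Effect: "NoSchedule"},
	}
	fmt.Println(len(mergeTolerations(pod, rc))) // 2
}
```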
A
The easiest would be to just use a map[string]string and say, you know, these are the labels and the values that must match exactly on the nodes to be scheduled there. That's the easiest, but also the least powerful, and it makes it hard if we have kind of differing nodes that both support the same runtime class.
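The exact-match model is small enough to sketch in a few lines. A minimal version, assuming plain label maps:

```go
package main

import "fmt"

// matchesNodeSelector implements the simplest model discussed above: a
// plain map[string]string selector where every key/value must match the
// node's labels exactly.
func matchesNodeSelector(selector, nodeLabels map[string]string) bool {
	for k, v := range selector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	node := map[string]string{"kubernetes.io/os": "windows", "zone": "us-west"}
	fmt.Println(matchesNodeSelector(map[string]string{"kubernetes.io/os": "windows"}, node)) // true
	fmt.Println(matchesNodeSelector(map[string]string{"kubernetes.io/os": "linux"}, node))   // false
}
```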
A
So a better option might be node selector requirements, which have more expressive operations. So I can say: I want to match nodes that have this label with values in this set, and maybe a version greater than 6 or whatever, but it's still a limited set of operations. And this needs to be mixed in by appending these to every node affinity term, because those affinity terms are ORed across them.
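The cross-product concern can be illustrated with simplified stand-ins for v1.NodeSelectorRequirement and v1.NodeSelectorTerm. Requirements within a term are ANDed, but the terms themselves are ORed, so the only way to make the runtime class's requirements always apply is to append them to each term:

```go
package main

import "fmt"

// Requirement is a simplified stand-in for v1.NodeSelectorRequirement.
type Requirement struct {
	Key, Operator string
	Values        []string
}

// Term is a simplified stand-in for v1.NodeSelectorTerm.
type Term struct {
	MatchExpressions []Requirement
}

// mergeRequirements appends reqs to every existing node affinity term,
// since the terms are ORed and reqs must hold regardless of which term
// matches.
func mergeRequirements(terms []Term, reqs []Requirement) []Term {
	if len(terms) == 0 {
		// No existing affinity: one new term carrying just the requirements.
		return []Term{{MatchExpressions: reqs}}
	}
	out := make([]Term, len(terms))
	for i, t := range terms {
		merged := append([]Requirement{}, t.MatchExpressions...)
		out[i] = Term{MatchExpressions: append(merged, reqs...)}
	}
	return out
}

func main() {
	pod := []Term{
		{MatchExpressions: []Requirement{{Key: "zone", Operator: "In", Values: []string{"us-west"}}}},
		{MatchExpressions: []Requirement{{Key: "zone", Operator: "In", Values: []string{"us-east"}}}},
	}
	// Hypothetical label a runtime-class-supporting node might carry.
	rc := []Requirement{{Key: "runtime.example/kata", Operator: "Exists"}}
	for _, t := range mergeRequirements(pod, rc) {
		fmt.Println(len(t.MatchExpressions)) // each term now carries both requirements
	}
}
```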
A
So this says, basically, instead of merging them in at admission time, the scheduler would actually understand runtime classes, and would find the associated runtime class of the pod at scheduling time and take these into account. One advantage that Bobby, who's one of the SIG Scheduling TLs, pointed out, is that then we can get a nice error message, in the event that the pod can't be scheduled, that can explain that this runtime class wasn't supported on the set of nodes. And another advantage is, if we want to use the more expressive node affinity selector, then we don't have to worry about doing this cross-product merge logic, and don't have to kind of explode the pod affinity terms for that. So yeah, I think that's the overview: are there any kind of questions or comments?
B
A
Yeah, that's a great point. I should go and add that to the KEP. So there are kind of a few different things here. One, there are restrictions on what labels a node can self-apply, but there aren't restrictions on what labels an administrator or provisioner can apply to the nodes. And this also kind of comes in as a security issue, if I want to be able to say that, you know, this privileged runtime class shouldn't be run on unprivileged nodes, or something like that.
B
Okay, so maybe what we should put in here is just sort of clarifying, you know, which ones can be used as part of these use cases that are auto-applied. That might include things like the OS and kernel version that we have there today, and then we could just say, you know, if we want additional ones, like particular runtimes configured, then we can just say: make sure that you add those when you add the node to the cluster. That's kind of what you're saying, yeah.
C
A
So this is a good question, and kind of something we've run into a few different times with runtime classes: sort of, what is the overlap between runtime class and kind of general cluster policy? I don't want runtime class to become a generic policy mechanism.
A
You know, any pod in this namespace with a set of labels or something like that gets all of these scheduling constraints applied to it, and so, in that case, runtime class would just be applied on top of that. I guess there's some question about how tolerations overlap with that. If I want to have a scheduling policy that validates that certain tolerations are absent on a pod, then it would be hard to do that if the tolerations are mixed in at scheduling time rather than at admission time.
A
B
A
B
C
G
A
Said another disadvantage of the scheduling approach is that it does mean modifying the scheduler. It's a pluggable architecture, so I don't see that as a big drawback, but it does mean that the scheduler would need to start watching the RuntimeClass API, to be able to look up and kind of cross-reference those runtime classes with the pods.
G
A
And there's always the possibility that the runtime class could be modified. Even if we make these immutable fields, you can still delete the runtime class and recreate it. And so, in the case of scheduling, once the pod is already scheduled, we don't really need that scheduling information, except for kind of introspection and debugging. But with something like pod overhead, you would probably want to have that be consistent as long as the pod exists. Okay, you're making this seem simple.
E
A
E
H
I got too many comments. Then, finally, this discussion came up: is it necessary to sort the pods? Okay, there was a comment that we need to sort them before the admission process. Then I found three test cases that were failing without sorting before admitting the pods. So I raised another PR, a work-in-progress PR, where I just removed the sorting code before admitting the pods to the kubelet. Then I removed the test cases which were failing, like the test case that handles pod conflicts.
H
C
H
This one here, so I commented it out. Then another one is handling memory exceeded; they're doing the same thing here with the creation timestamp, and then sending it to handle pod admission, so I commented this one out too. So this PR is working fine in the tests, all the tests are working fine, and this one is also working fine. But here we had too many discussions about whether we really need this sorting mechanism, because...
H
I just updated the pod creation timestamp here, like this, okay, but it's not getting reflected, because when the kubelet is creating the pod, it's updating, it's overwriting, whatever timestamp I'm sending to it. Okay, so I think it is not necessary to sort. Sorry if I'm wrong, just correct me or explain to me: is it necessary to sort here, or is there no need to sort? Can we remove this code, or keep this code?
E
H
E
I'm not sure I clearly understand the sort. You have simplified the code, which is really good. But I saw that the reviewer was only asking for a unit test for the coverage, because you do need a unit test for your change; based on your change, you do need a unit test for coverage.
H
I
I think it is still necessary, and I don't think this is a reason why we need to remove this behavior. I'm not sure about the testing point, I haven't had a look at that closely, but it sounds like the problem is just that you cannot add a test to properly test this behavior, and that should not be a reason to remove it. And then we can discuss.
H
E
H
I was trying to create a unit test by updating the pod timestamp, but it is not getting reflected. Whatever I update the timestamp to, whenever the pod gets added, it's overriding it; the kubelet is updating its own timestamp whenever the pod gets created. It's like first-come, first-served: whatever is coming first, it's admitting those pods, and whatever is coming second, it's rejecting, if there is any conflict, like a port conflict or memory exceeded, something like that.
E
I still need to review them, but I just want to say that actually, the order based on the creation time is pretty important; our logic is better than admitting them in any order. In the admission handling, a pod could also fail due to some resources, so the creation time is actually the key for us to decide which one should be admitted and which ones get that resource. So that's actually something we should at least agree on.
H
H
E
H
D
J
Hi, this is Vinay. Following up from last week's discussion, we discussed a bunch of issues with Derek and David, and I have updated the KEP. I went through the flow control, as we discussed on the email thread and on the review thread, split it up into two pod conditions, and then went through the logic and updated that. I just pushed the update a few minutes before this meeting, so I'm hoping that during this coming week or so we can take a review.
J
Another review of the current flow control: the container-level policies and the pod-level retry policies that are mentioned in there. I've also captured the notes, the concerns that David had regarding lowering memory: sitting in a control loop and trying to lower it, and then, when the memory lowering is completely successful, updating the allocated resources.
J
Today the scheduler sums up the container resource requests, and then, based on that, it sorts and filters the nodes and picks a node for scheduling. For update, we're going with the same flow. And in the case of pod overhead, I believe that component gets added to the containers' requests and it becomes part of the scheduling decision, so it's just a constant that gets added for update as well. I don't see any impact to it for the KEP that I have, at least the flow control that I have.
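The accounting described here reduces to a simple sum: container requests plus the runtime class's overhead as a constant on top. A minimal sketch (units are arbitrary, e.g. millicores or bytes; this is not the actual scheduler code):

```go
package main

import "fmt"

// podRequest sums the containers' resource requests for one resource and
// adds the pod overhead as a constant, mirroring the scheduler-side
// accounting described above.
func podRequest(containerRequests []int64, overhead int64) int64 {
	var total int64
	for _, r := range containerRequests {
		total += r
	}
	return total + overhead
}

func main() {
	// Two containers requesting 100 and 200, plus a 50-unit pod overhead.
	fmt.Println(podRequest([]int64{100, 200}, 50)) // 350
}
```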
J
VPA currently is, and it's not just VPA, maybe there are other plugins that are working towards, like, AI-based, machine-learning-based algorithms that want to make better recommendations for pod resource usage, and this could potentially be one of those things, if it can vary. If this is a constant, then it's probably not so important. So that was the only point.
A
J
Okay, yeah, if this is something that won't be constant, and it might benefit from an external entity like VPA or some other kind of ML prediction algorithm taking a look at it, making predictions on it, and then updating it via a patch, then maybe it's not a priority, but it's probably something to consider exposing, maybe tracking it in metrics or over API reporting.
A
G
J
K
E
E
So there are many, many missing steps, yeah, we're not there yet. But for the pod-level pod overhead, the concept introduced to the node: even if it is constant, or maybe it's dynamic, today we can only do a constant, which is probably better for now. And we will have to see whether the scheduler, when doing the scheduling, and at admission time, can take that into consideration. Right now we just blindly reserve some amount for the Kubernetes system and some for the kernel.
E
So I don't want the overhead to complicate this stuff and make it harder for the vertical autoscaling. Vertical scaling is already complicated, and there's a lot of concern. When we talked about the first draft and the second draft, actually, the current proposal is already a third draft that we talked about internally, and I think it has simplified away a lot of complexity. So let's not over-complicate it again.
J
E
J
E
J
That was it for me. The only other outstanding item is to see if we can get some additional reviewers to come. I'm the owner for this KEP now, and I want to see if we can get reviews. I'm trying to figure out and plan out milestones, what coding milestones I want to have; I'll do that over the next couple of weeks. Once the KEP is merged, I can go to my boss and say this is kind of in, and we need to drive, we need to push this now.
J
E
B
Repo, and I had commented on that, asking some questions about how it relates to CRI tools. But my broader question is: if we need to offer changes to the CRI API, as we do things like, you know, add Hyper-V support for runtime classes and implement pod overhead, which of course is going to need to be reflected in CRI if it's not already there, what is the process, and who are the reviewers for that?
E
B
So I guess, getting to nuts and bolts here: if we want to change the CRI API, do we still need to open KEPs? Is that the process we're doing, or are we saying that we're going to basically incubate a change to that API and have one KEP to say, here is the new, you know, version of the CRI API, as a separate ship cycle from Kubernetes?
I
B
I
B
You know, the Windows nodes that we're adding support for. But I want to make sure that, because we're going to use Hyper-V to support running multiple Windows versions side-by-side on the same node, that's the case where we need to be able to say, when we create a pod, it's going to be this OS version, or this, you know, processor architecture.
J
So, okay, hey Patrick, this is Vinay. While you're looking at the CRI, could you also please look at my KEP update, where I saw that there is a need: in UpdateContainerResources we have the Linux container config, and what I added is a section for the Windows container config. Maybe flesh out the details on what that should look like, so that an update would work for Windows as well as Kata, etc.
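As a rough sketch of what a Windows section alongside the existing Linux one in UpdateContainerResources could look like, here are simplified Go structs in place of the actual CRI protobuf (defined in k8s.io/cri-api); the field names are illustrative assumptions, not the real message definitions.

```go
package main

import "fmt"

// LinuxContainerResources stands in for the existing Linux resource message.
type LinuxContainerResources struct {
	CpuShares          int64
	MemoryLimitInBytes int64
}

// WindowsContainerResources is a hypothetical Windows counterpart, with the
// kinds of knobs Windows exposes (CPU count/maximum rather than shares).
type WindowsContainerResources struct {
	CpuCount           int64
	CpuMaximum         int64
	MemoryLimitInBytes int64
}

// UpdateContainerResourcesRequest with a Windows section added alongside
// the Linux one, so an in-place update can apply to either OS.
type UpdateContainerResourcesRequest struct {
	ContainerId string
	Linux       *LinuxContainerResources
	Windows     *WindowsContainerResources
}

func main() {
	req := UpdateContainerResourcesRequest{
		ContainerId: "abc123",
		Windows:     &WindowsContainerResources{CpuCount: 2, MemoryLimitInBytes: 1 << 30},
	}
	fmt.Println(req.ContainerId, req.Windows.CpuCount)
}
```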
J
And the vertical scaling: it was based on your comment, and yeah, I got a chance to run a managed Windows node in my cluster and try it out. I think the outstanding issue, at least with the Server 2019 that I used, is that it doesn't natively support update; the docker update path wasn't quite working. So there is a piece on the shim or the CRI, yeah.
C
J
B
J
F
D
F
Can you hear me now? Yes, yes, well, you should have heard what I said, it was great stuff. So, Patrick, I think we probably need to talk a little bit about what additional changes we need, because, you know, there are other patterns we already support, right: platform selection through manifest indexes. I'm not sure exactly what it would entail, you know, in infrastructure, you mean CRI, over and above just, you know, the pattern of selecting the container you want based on it.
F
B
But, like, an example case here would be that, you know, right now we've got Windows Server 2019 supported. If somebody takes a dependency on, you know, Windows Server, whatever's going to ship, whenever the next version is, they may not be able to build that for the previous version.
F
B
But the way that we're dealing with that today is we could use, you know, node selectors, or taints and tolerations, to sort of steer onto the right node. But one of the ways that we're solving that problem is we're making it where the newer Windows nodes can be backwards compatible, and so in that case we need to be able to disambiguate and say, you know, when you do this pull, or when you do this run, we want to prefer this specific OS version.