From YouTube: Kubernetes SIG Node 20210316
Meeting Agenda: https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
All right, so welcome everyone to the March 16th SIG Node meeting. Like all our meetings this one is recorded, and if you're not able to attend you can check the recordings out later on YouTube. The agenda seems to be growing as I'm looking at it here, so let's go through our normal flow. Over to Sergey: if you're here, do you want to talk through where we are with merge rates?
B
Hi,
yes,
so
yeah,
we,
oh
again,
even
after
quote,
freeze
we
are
still
in
in
the
green,
so
we
burn
in
prs
rather
than
accumulating
them.
Last
week
we
burned
17
cherry
peaks.
I
mean,
which
is
great.
We
accumulated
a
lot
during
this
fist
time
and
also
like
more
than
nine
sig
instrumentation.
Pr,
like
it's
a
structured
logic
related.
So
I
know
we
have
an
exception
for
that
and
you
keep
merging
those
ivana.
Do
you
want
to
say
more.
C
So, and I think I have this somewhere on the agenda too. There were two things that I submitted exceptions for for SIG Node: one was structured logging, because we don't want to leave the kubelet half migrated, and the other was that we missed merging by a day for updating the probe termination grace periods. Both of those received exceptions and everything is mostly landing on time; the probe stuff got merged. As far as structured logging goes, this is the board.

This is everything that's in scope for this release, and almost everything is done. Basically, all of the PRs that are left have been reviewed and LGTM'd and are just waiting on approvers.

I guess this one might not have an LGTM, but I can go back and add that. So that's the state of structured logging: we've got about 19 more PRs there, and then probably one follow-up to update the file in hack/ to ensure those don't regress to non-structured logging, and to clean up anything that we potentially missed. But that's kind of the state of that.
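For readers unfamiliar with the structured logging migration being tracked on that board, here is a minimal illustrative sketch (not taken from any of the PRs discussed) of what converting a single log call typically looks like; the message and pod names are invented for the example.

```go
package main

import (
	"k8s.io/klog/v2"
)

func main() {
	podNamespace, podName := "kube-system", "node-exporter-abc12"

	// Unstructured style that the migration moves away from:
	klog.Infof("Pod %s/%s is being evicted", podNamespace, podName)

	// Structured style that the migration moves toward: a constant message
	// plus key/value pairs that log tooling can parse.
	klog.InfoS("Pod is being evicted", "pod", klog.KRef(podNamespace, podName))

	klog.Flush()
}
```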
C
So that's great, but unfortunately we're finding that reviewer time is a limited resource. In terms of SIG Node PR triage, that's moving along as well, but we're definitely seeing things blocking on needing approvers. So I guess this is...

This is not a call-out or anything like that, but that's where I've seen the most limited bandwidth in the past week, and I think in part that's because we're actually seeing a ton of people stepping up and helping with review, which has been really awesome.

So for everybody who has been helping, following the instructions in these docs here, helping with triage, helping with reviews: for the most part I have been adding cards to the board but not necessarily having the time to triage them, so I'm very grateful for everybody who's been stepping up. It's been awesome.
A
Cool, all right. Well, thanks, Sergey and Elana. So, I don't know, Jack, if you're here, if you want to do an update on what you've found with respect to the probe issues we've been discussing.
D
Well, hey, thanks everybody. Sorry that I'm the annoying probe guy again; I'm looking forward to playing a different role at some point in the near future. I'm going to paste a couple of links here. So here's an issue; this actually came out of the investigation into dockershim versus non-dockershim, so for my test it was containerd, and it's a really long sort of tail.

But the conclusion that's interesting is that disabling the exec probe timeout feature gate, which is what landed with the fix to preserve the pre-1.20 behavior, actually doesn't protect the previous behavior. There's an edge case where, in the ExecProbeTimeout=false configuration, it's not that the timeout is ignored, but that probes which take longer than the timeout are actually just thrown away. So the practical repro case is a negative probe: a probe that always returns failure and takes longer than the timeout to complete. I can see folks being confused; I've been explaining this to so many people, and it's super hard to explain in English, but hopefully the unit tests and tests in the PR express what's going on.

I don't think this is a 1.21 emergency type of thing; this was essentially released this way with 1.20. So the discovery is that ExecProbeTimeout=false has been sort of buggy since 1.20, and we don't have to rush it in for the 1.21 release, in my view, but that's obviously your call. Any questions?
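The edge case is easier to see in code than in prose. The following standalone Go sketch (not the kubelet's actual probe code) contrasts the intended pre-1.20 behavior, where exec probe timeouts are effectively ignored, with the buggy ExecProbeTimeout=false behavior described above, where a result arriving after the timeout is simply discarded.

```go
package main

import (
	"fmt"
	"time"
)

// runExecProbe simulates an exec probe command that takes `takes` to finish
// and then reports success (true) or failure (false).
func runExecProbe(takes time.Duration, succeeds bool) <-chan bool {
	out := make(chan bool, 1)
	go func() {
		time.Sleep(takes)
		out <- succeeds
	}()
	return out
}

func main() {
	timeout := 1 * time.Second

	// The repro case from the meeting: a "negative" probe that always fails
	// and takes longer than the configured timeout to complete.

	// Intended pre-1.20 behavior: the exec timeout is ignored, so the caller
	// waits for the result and the failure is reported.
	result := runExecProbe(2*time.Second, false)
	fmt.Println("pre-1.20 style: probe result =", <-result)

	// Buggy ExecProbeTimeout=false behavior described here: the timeout is
	// not ignored; instead a late result is thrown away, so the probe never
	// reports the failure at all.
	result = runExecProbe(2*time.Second, false)
	select {
	case r := <-result:
		fmt.Println("gate-disabled style: probe result =", r)
	case <-time.After(timeout):
		fmt.Println("gate-disabled style: late result discarded, no failure reported")
	}
}
```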
A
No, just thank you for helping us figure out bugs on top of bugs and the best way of ordering these bug fixes. I'm sure we can all take a look afterwards. I...
D
I wouldn't call it a refactor, but there was a follow-up commit. There was an original commit that delivered the bulk of the fix, and then a follow-up commit a week and a half later just rejiggering the way the contexts are layered on top of one another, and in one of those fixes the ExecProbeTimeout feature gate was not a factor, and so it creates...

Yeah, no, I would love to know that too. I see he's not here today, so maybe he's not around. If I can get his attention, that's a simple question to ask him, because maybe there is a good reason and it needs to be fixed in a slightly different way.
A
Very cool, all right. Well, it looks like we have some exciting reading on probes. Thank you for putting the doc together; I'll take a look, at least for myself, this afternoon. I'm probably...

I easily get confused as we keep talking about these things, so thanks, Jack, for putting this together. Next up was around the device plugin API. I don't know how to pronounce the...
F
Hi, yeah. It's... anyway, that's my initial and then my last name. Okay, so hi everyone. I have a proposal pertaining to the device plugin API. As you may know, it already has an Allocate call and then an optional PreStartContainer call.

I have a couple of use cases that would benefit from having Deallocate and PostStopContainer calls. As far as I know this might have popped up more than once before, and for some reason it never actually went through; I'm not sure why.

We are trying to support usage of FPGAs in data centers, which means we tend to use Kubernetes. The idea is that once we decide to kill the container, we have to reset the FPGAs. If we don't do so, the FPGA maintains its configuration, maintains its programming basically, which means it can keep working on the network: taking up MAC addresses and IP addresses from the pools, possibly polluting the network, sending traffic, and if that happens at high speeds, that's really not something we would like to have. Plus there's also the aspect of power consumption: if we have FPGAs that are supposedly done working but are actually still working in the background because we couldn't reset them, we're just losing a lot of power.

That's a nice use case, for my work, for using a PostStopContainer hook in the device plugin API. As for Deallocate, I'm not sure if we should have both or not, but we have Allocate and PreStartContainer, so it might make sense to have both Deallocate and PostStopContainer. I'm not sure what the logic of separating them was, but whatever it was, we might have to maintain it.

We have another use for Deallocate, which is that we are able to partition our FPGAs. That means the device plugin we have for this would offer the entire FPGA as one device, but would also offer the same FPGA as, say, four smaller partitions. Once one of them is actually used, we can stop advertising the others, but we need to know when it gets deallocated so we can go back to advertising the partitioned versions.

I have had other users with different use cases. Someone wanted to be able to dynamically bind and unbind drivers to virtual GPUs; right now they can't do that with Allocate, obviously, but they have a workaround, and apparently this workaround doesn't work every time, so they end up having leaks, per se.
E
Hi everyone, hi. I'm Rora, one of the maintainers of Akri, an open source project. Just adding to what Muhammad mentioned: Akri does use the device plugin Allocate to, you know, discover some resources, advertise them, and use them, and there's currently a need for another call, a deallocate, so that we can mark those resources as free to be used by other pods or workflows as well. Right now we have a workaround for that, but it's not perfect.

So having that consistency from the kubelet side would make the scenario easier to maintain, and better performing of course, freeing up resources right after we're aware of it. I won't go into details, but we do have the concept of service workers and other things where it would also make sense to have this deallocate functionality.
F
Yeah, okay. Also, I think the change doesn't have to be breaking for existing users. It can be an optional API call, just like the PreStartContainer hook, so people who don't need it don't really have to use it; it's just an optional extra call.
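For illustration only, here is a rough Go sketch of how such optional hooks could sit alongside the existing device plugin service. The Deallocate and PostStopContainer methods and all of the message types shown are hypothetical (they are the proposal under discussion, not part of the current API), and the existing-method signatures are simplified stand-ins rather than the real v1beta1 definitions.

```go
package deviceplugin

import "context"

// Simplified stand-ins for the real device plugin request/response messages.
type AllocateRequest struct{ DeviceIDs []string }
type AllocateResponse struct{ Envs map[string]string }
type PreStartContainerRequest struct{ DeviceIDs []string }
type PreStartContainerResponse struct{}

// Hypothetical messages for the proposed hooks.
type DeallocateRequest struct{ DeviceIDs []string }
type DeallocateResponse struct{}
type PostStopContainerRequest struct{ DeviceIDs []string }
type PostStopContainerResponse struct{}

// DevicePluginServer sketches the service a plugin implements today
// (Allocate plus the optional PreStartContainer) together with the two
// optional hooks proposed in the meeting, which the kubelet would call
// only if the plugin advertises them.
type DevicePluginServer interface {
	Allocate(ctx context.Context, req *AllocateRequest) (*AllocateResponse, error)
	PreStartContainer(ctx context.Context, req *PreStartContainerRequest) (*PreStartContainerResponse, error)

	// Proposed: called after the container using the devices has stopped,
	// for example to reset an FPGA so it stops sending traffic and drawing power.
	PostStopContainer(ctx context.Context, req *PostStopContainerRequest) (*PostStopContainerResponse, error)

	// Proposed: called when the devices are released back to the pool,
	// for example to re-advertise partitioned views of the same FPGA.
	Deallocate(ctx context.Context, req *DeallocateRequest) (*DeallocateResponse, error)
}
```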
F
I guess that's it; that's what I have. I actually had a pull request implementing this, but it was a while ago, so it might need a lot of rebases, but...
A
Yeah. So, one, thanks for coming and sharing your use cases. I think there are some other folks who aren't on today's meeting that I'd want to have also review this, folks like Renaud or Kevin who've done a lot of work on the device plugin API from the NVIDIA side. I think we'd want to get their perspective as well, so I'll try to draw their attention to the PR and follow up.

On some of the history: I was trying to recall why we did not do a Deallocate, and I thought the assumption was that you could clean up in your Allocate from a past use, and so having a Deallocate from the start was maybe deferred.

Maybe the question I have is: for any of the devices you all are exploring, are any of them over-the-fabric or otherwise network-connected devices?
F
Yes, at least in my case. Our FPGAs are usually hooked to the network, so they are network-connected already. Basically, you can think of it as them having two network ports. One is for control, which is just attached to the CPU that would actually use them with Kubernetes, and the other you can call the data-plane network interface, which means that if we don't reset it post-use, right after we use it, it stands to actually pollute the data network. So I understand that we can clean it up when we allocate it again, but for the intervening period between that allocation and the next allocation, it's not good.
A
Yeah, okay. So I don't know if we'll reach consensus on this today without a broader set of participants, but I appreciate you drawing out your use case. There are some other folks... I don't know, Renaud, have these topics come up in the CDI working group? I know that Alex was talking about it; I don't know how much you've engaged in those.
G
A
Maybe the other thing, Muhammad (and apologies for the other speaker's name): have you engaged at all with the container device interface working group? There's a kind of working-group community that is also trying to explore particular problems in this area.
F
I think you mentioned it when I was talking to you; I didn't have the time to actually look deeply into it. The last time I actually talked to Renaud was about five months ago, so it was still early in the development. I'm not really sure what the state of things is now.
A
Yeah. So Alex from Intel, I don't know if he's here, is often interested in this space. And the kind of thing I'm wondering is, in some of these call flows there might be other desires; like we've heard desires that on the Allocate flow people want to be able to pass parameters. So I'm wondering, on a Deallocate flow, would you need particular parameters passed? For shared-use devices, is there something unique about how that device is prepared that, even on the Allocate step, isn't working out? So maybe the next step is that we get a smaller group of us together, maybe on the SIG Node Slack, to say: hey, can you review this, and draw some attention, and yeah, see if we can get a group together.
H
We actually created an issue, like three years ago or something like that, about exactly this topic. So, okay. I would also like to mention that leaving a device uncleared until the next use is not good from a security point of view, especially in a multi-tenant environment, so, for example, FPGAs...
A
F
Yes, because it's not technically a program. It's not like a GPU where, when the program is done, it's out. It's a circuit, so it's there unless you actually clear it out, so you can't really say that you're going to clean it up after it's done working in the container; it's just not possible. Plus, they're really made to keep functioning until you kill the container; they don't have an exit condition from the inside. That's what it means.
A
Okay. So maybe we can try to get some momentum behind this in the upcoming release, but for today it just seems like we have an action item to try to get a group of folks together to see what we want to do, potentially, with device plugins in 1.22 and beyond, and find a plan.

I don't know if it'll be next week, but at least we should get something together, either on the SIG Node mailing list or by seeing if we can get a group of just interested participants; that's probably the best path forward. Maybe we can follow up on that, and maybe we can find some dedicated time to talk about what we want to do with device plugins. Okay.
C
Yeah, I mentioned that earlier, so maybe I won't go over it again, but I'll make the announcement that I missed, which is that there's a contributor survey and you should participate in it. It's not going out to all of the channels, because I think they only want to hear from contributors, not just from Kubernetes users. So please fill out the survey.
A
Cool. And Vinaya, I think you were next, for VPA. I'll just preface by saying I still haven't had time to look at post-1.21 items.

Hmm, I don't see the name. Okay, back to you, Elana.
C
Oh, no, so swap: it's making progress. I have synced with a few people, taken various comments in the doc, and tried to sort of squish them into at least what an MVP proposal looks like for alpha, so that, in terms of targeting the next release, in the next couple of weeks we can get a KEP together and at least hash that out for alpha.

I don't know that there's necessarily alignment in terms of where we want to go for beta and GA, but I think people are relatively aligned on what we want to do for alpha, which would be next release. So if you have not had a chance to look at that, please look at it and comment away, and let me know if you're interested, if you want to get more involved, that kind of thing. And yeah, a bunch of people have been reaching out to me, like from...

I guess there was something on Hacker News recently about systemd-oomd, and so a bunch of people have been reaching out to me saying they saw that I'm doing things with swap in Kubernetes. So there's definitely very wide interest in this, and I'd just encourage people: I hear the interest, but make sure that you're actually guiding us in the right direction, to ensure that it's not just interest, it's also that we're getting other people's voices involved as well. So yeah, I'll try to get a KEP together within the next couple of weeks, especially if there isn't much more comment there, but we may need to schedule more time for a wider discussion once we get into that. Undoubtedly so.
A
C
Just, I think, in terms of an alpha, because there's so much that could possibly change: what I am currently proposing, and which I think people mostly have alignment on for alpha, is that we allow the kubelet to run with swap on the node, but we ensure that the CRIs are not telling workloads that they can run with swap.

So, in theory, we're not really exposing the swap to the workloads, but we turn swap on on the node and see what happens, as an alpha feature, because if there are serious stability or accounting issues, just getting in and doing all of that initial grunt work is probably going to take a release. And then making sure that we're actually testing it and that it looks semi-stable, I think that'll be a full release cycle, just to get the foundation in place. Once we get there, we can talk about what we want workload exposure to swap to look like, beyond just having some swap on the node as a shared resource that the system processes are maybe able to use. Maybe we have the possibility of bringing in things like oomd, but it's all very open still, I would say.
G
I wouldn't be opposed to actually adding changes to the CRI, because, I mean, we'll be hiding this behind a flag anyway; we wouldn't be enabling it by default, right?
E
G
C
Yeah, well, so I'm thinking that if we enable this, say we enable swap on a node, I think we're going to have to go in and do all of that plumbing in the CRI, but it'll effectively all be off by default. That would enable us, going forward, to be able to start twiddling those values.
G
Yeah, I mean, as an initial one, right, we could have a simple percentage setting for an alpha, so someone can play with it, test it, and then come back with more solid recommendations on how it should be allowed to be tuned. So, say you have some memory limit or request, and your swap should be a percentage of that, like 100 percent, 80 percent, 75 percent. That could be a potential knob; I'll comment as such, yeah.
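Purely as an illustration of the knob being floated here (nothing below is agreed API, and the names are invented), the arithmetic would look something like this:

```go
package main

import "fmt"

// swapLimitBytes sketches the hypothetical alpha knob discussed above: a
// container's swap allowance expressed as a percentage of its memory limit
// (or request). The function and the knob itself are illustrative only.
func swapLimitBytes(memoryLimitBytes, swapPercent int64) int64 {
	return memoryLimitBytes * swapPercent / 100
}

func main() {
	memLimit := int64(2 * 1024 * 1024 * 1024) // a 2 GiB memory limit
	for _, pct := range []int64{100, 80, 75} {
		fmt.Printf("swap percent %3d%% -> swap limit %d bytes\n",
			pct, swapLimitBytes(memLimit, pct))
	}
}
```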
C
Yeah, and I think, make sure that you're taking a look and commenting on the doc. I suspect this has been low priority, given all of the other priorities within the release cycle, but given that we're coming up on the end of 1.21, I just wanted to make sure I flagged it, that it's in the back of people's minds, and that the work gets enough attention to actually go in and put together a KEP that everybody is happy with, or mostly happy with.
A
Yeah, I just think maybe we attract a different audience depending on prioritization. So if we say we want to support running, or tolerate running, on hosts with swap to protect the node agents themselves, but not empower a workload, that gets a different audience than the audience who says they want to be able to have their workload consume swap directly, and so on.

Or the other way around; maybe that's a bad choice, but I think we should just be clear on which persona we're tackling first.
C
Yeah, I agree. I think the problem is the initial plumbing work, in terms of just doing a phased rollout, because swap would be potentially so disruptive to a cluster.

I think that any initial work you'd be doing in the scope of one release probably would not be enough to start running workloads with swap, but we could potentially see stability improvements right off the bat. And that's kind of how I've been prioritizing use cases, just trying to account for the fact that we probably can't ship something fully featured, something that will help workloads, in one go.
B
So I don't think we... I mean, Ike from Google is working on that as well, and I can call on him, but I think in a single release we can do that. We are already experimenting with enabling swap for customers running a single pod per node, and these customers typically run either machine learning workloads or something that requires loading enormous files into memory; they would just benefit from having extra swap space to load those files without worrying about the actual memory limits on a node.

So we can do something in a single release, and we actually have some use cases that we want to tackle, so it would be great to start.
C
Working on that... From what I could see, at least in the doc, that doesn't seem like one of the major use cases. The idea of a single pod on a node, where nothing else gets scheduled there and that thing sort of has free rein over swap allocation on the node, doesn't seem like a super common use case, and I think there has been at least a lot of community push to not treat swap as, you know, emergency memory or an extra expansion of memory.

There are other use cases, and people have been directing prioritization away from that one. So, I mean, it's certainly a possibility, but thus far consensus does not appear to be in that direction. If that's what we want swap to be, then we can have a discussion about it, but it doesn't seem to be the majority view currently.
A
I'm sorry, I'm just context switching. I'm trying to think what it is that we would actually need to do in the kubelet, because you can turn fail-swap-on off today, I guess, and I'm trying to think what the change is. But give me a second to change my mindset. So, Jeremy, do you want to talk next, on Node Problem Detector?
I
Yeah, sure. So basically I've been in discussions with the Node Problem Detector team over the last couple of months on a design for how to add Windows support. A few people have read and reviewed the document, but I'm still definitely open to feedback on it; it's definitely not a fully closed thing, it's more of a living document. It's basically a discussion, and this is also happening in SIG Windows as well, about how we improve Node Problem Detector to, one, run on Windows, and then also how we make it do the equivalent of what it does on Linux, basically detecting things like kernel failures and so on, and what kinds of things to look for. For example, one of the unique things we're probably looking for is activation status, for licensing reasons. So please feel free to look at the design document.

Feedback is welcome, and I've linked the bug to it. This is more of a heads-up on what's happening, and reviews or anything like that would also be helpful.

And that's mainly it. Oh, and a quick status: it builds now, so you can actually get a binary for it, and soon we'll be adding Windows service support for it.
A
Very cool. And then the last item I've got today, so, Jenna... was it Jim?
J
Hi there, yeah. I don't have too much of a big update. After some great discussion last week, I did post an issue to Kubernetes about this, sort of phrased in the language of: let's look at the conformance test as something that's maybe a little bit too strict, and at some of the reasons why we might want to allow administrators or solutions to not have the mount namespace be the host mount namespace.

It goes into a little more detail as well about the systemd issue and the specifics around why it is inefficient, and I can talk a little bit more about that if people are curious. But I just thought I'd let you know the issue is there, open for comments and more discussion, unless there are any specific questions right now.
A
So, if the test has a bracketed [Conformance] next to it, it should be in conformance, but when we looked it up afterwards I didn't think it actually did. So maybe we can just make sure the issue points to the test in question and make sure there's no confusion about whether it's actually conformance or not. That's probably... (Absolutely, I can do that.) Cool.

All right. I don't know if there are other questions on this. I know Dawn's not here, and she was particularly interested in this, but she's not here to give her feedback, so hopefully we can get her feedback asynchronously. I'm...
J
Sure. I mean, maybe I can just say a couple of words real quick about the systemd issue, now that I know a little bit more about the specifics. Basically, the inefficiency we were seeing in systemd is kind of two parts. One is that systemd itself had some inefficiencies in how it was rescanning the mount info out of /proc, and that has been fixed to some degree; I think the patch went into mainline systemd in December, so, you know, OpenShift doesn't have it yet.

So there's going to be a lag time before everyone gets that fix, but even with that fix it only decreases the problem by, I think the estimation was, 30 to 50 percent. That means there is still an overhead involved in having a large number of mount points that even the latest systemd isn't going to address.

So there is still some value in doing this just for efficiency's sake, as well as for some of the other reasons I mentioned in the issue. The second piece of it is really about a more granular mechanism for the kernel to signal what parts of the mount info have changed as mounts come and go. I think fsinfo was the name of the call that people were pushing for, but the development of that seems to be very slow.
A
So I know this is a Red Hat-focused issue, and I know we have particular segments. One of the things I'm curious about, and I don't know if anyone from Intel is here, is that there are particular classes of workloads where this type of CPU usage is potentially excessive, and so I'm just wondering if anybody else has discovered this issue outside of maybe Red Hat's experience.

I'm thinking of folks who might work with telco verticals or that type of thing, where minimizing latency is important. So maybe this is a call to action for those who are interested in those communities to share their experience.

Cool, all right. Well, I think we've gotten through today's agenda. Are there any items that we wanted to talk through that we did not have time for? We can take topics now; otherwise we can get back to hopefully closing out the release and starting to plan for a successful next release.