From YouTube: Kubernetes SIG Node 20210413
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Welcome everyone to the April 13th Kubernetes SIG Node meeting. As a reminder, this meeting is recorded and will hopefully be uploaded to YouTube for later viewing. Today I think we wanted to focus primarily on looking ahead towards 1.22 and beyond, and try to get feedback from the broader community on what we want to do with respect to KEP planning. But before we do that, I didn't know if either Sergey or Elana wanted to give an update first.
C: Yeah, I spent a bit of time cleaning up the board, because I had not had much of a chance given, you know, the state of the 1.21 freeze, so we're getting there. There was a lot of stuff in the to-triage column. We made a lot of progress last week in the SIG Node weekly triage session, so I imagine we'll make even more progress this week. And one thing that I should mention: Sergey mentioned the cherry picks happening last week.
C: I also wrote a doc to ensure that documentation is available going forward, in case other people want to pick up that role. That's in the community repo, and I think Sergey LGTM'd it, but it still needs approval.
A: All right, well, thank you, Elana and Sergey. I think the last thing I'll comment on here is that we have to close out our annual report for the SIG. We had a PR opened earlier in the month, and thank you to those who helped give feedback. I'd like to close on that in the next day or two, so if folks want to provide feedback, it's up on the agenda.
A: Please help us do the best we can to accurately reflect our state of the world. So with that in mind, like I said: Mrunal, do you want to present? Let me get you sharing, and you can walk through the rough list for 1.22.
D: All right, can you see my screen? Yep? Okay, so we can get started. The first one on the list here is CRI graduation, and we identified a couple of items here over the past few weeks. One is potentially a call to return the list of images that the kubelet should ignore for GC, and the second one, Derek, you proposed: adding a way to propagate the integer resources to CRI runtimes.
A: Yeah, maybe a couple of comments on those two notes. We've put special prominence on the pause container, depending on your runtime, to have the kubelet not garbage collect it, but we've gotten a variety of feedback from others who say that, as part of their image for an operating system host, there are some other pre-installed images that they would also like to tell the kubelet to ignore. So probably a good call to action as we evolve this is to get more understanding of those use cases, I guess.

A: And then the integer resources to CRI: we've had a feature in Kubernetes to support opaque counted resources, but those resources weren't always propagated down to the runtime to act upon. They may have been available in some cases to device plugins, but not universally. So this is just trying to ensure that we have a uniform view of resource requirements, rather than a partial view, the whole way down. I think that's the spirit of that one, yeah.
D: And we also want to... I know there were some CI issues around some containerd jobs and CRI-O jobs, but I think now we should be in a position to switch the kubelet to use the new v1 CRI API instead of v1alpha2 in 1.22, so we get one step closer to beta.
D: Okay, so the next one is node graceful shutdown; that was just declared beta in 1.21. We received feedback from Clayton on this one: we want to see if we can introduce a way for it to be more flexible and not tied to priority class names, but rather to values.

D: David Porter and I had a few ideas on this one, so we'll propose the changes as additions to the KEP, see what folks think, and try to evolve it further, and it will continue to be in beta.
A: And then maybe, Mrunal, as you go through these items, you know, since we have 30 folks on the call, could you maybe give a one-line summary of what each one is?
D: Sure. So node graceful shutdown is a feature we added to the kubelet. When you reboot the node, the kubelet detects, using D-Bus from systemd, that the node is rebooting, and then it uses a systemd inhibitor lock to actually go and stop all the pods gracefully. Today, without that feature, systemd doesn't understand pre-stop hooks or any of the Kubernetes concepts, right? So with this feature in place, after we do a drain, anything that remains will be gracefully shut down on reboot.
D: So our reboot and shutdown story is getting better. For the first iteration we just added two classes, regular pods and critical pods, but now we want to evaluate adding more classes or values there, so we can have more of a shutdown ordering rather than the two big buckets we have today.
D: So the next one on the list is cgroups v2. The current status is that the kubelet and the container runtimes convert the v1 values into v2 values, so there are no cgroups v2-specific knobs exposed. As part of that first KEP we just want to get feature parity, and I think we should target alpha. For that there are a couple of items, because as we worked through this...
D: ...we discovered that some metrics and related things weren't working, and there are some fixes going into runc and libcontainer for the next release candidate, rc94, which will help fix those metrics and, in turn, help fix the remaining breaking tests. So we want to get a cgroups v2-specific job running on all PRs, and the final thing to call out here is making sure that the CPU manager, and anything else that also depends on cgroups, keeps working.
A: I'm happy to continue to review the KEP, given that past history, but I don't know if Giuseppe has the primary role on this or not.
D: Okay, so the next one is memory QoS for cgroups v2, by Tim Xu, I think. It's a new KEP that goes one step further: the first step is cgroups v2 feature parity, and the next step is how we can take advantage of the new features added in cgroups v2. One area is memory: there are new knobs like memory.low, memory.high, and memory.max, whereas in v1...
D: ...we just had the limit. So the KEP is discussing how we can take advantage of those knobs. I think there's still some back and forth going on on that KEP. If we agree on something there, is there any objection to targeting alpha? Because the risk is that we may pick something now but later on, depending on testing, we may change it, and that should be okay for alpha, right?
A: Yeah, so I haven't kept up with the latest conversation since I commented on the KEP, but I know personally Mrunal and I are interested in this, because we think there's a lot of win in the long run. I guess what isn't called out in the KEP, and maybe the set of folks on this call could help shepherd this a little bit, is a good set of representative test workloads that maybe today don't operate as well as would be desired in cgroups v1 with traditional memory settings, that we could validate against.
D: ...the community. So I guess we can chime in with the workloads there, and then agree on a first design and...
D: ...take it to the next step. Okay, so the next one is user namespaces. I see Michael Taufen on the call. Originally the Kinvolk folks had a proposal, and then Michael and Tim chimed in with their suggestions on how we can partition the UID and GID space. I admit I have to catch up on it again, but I'm happy to do that. So Michael, do you feel we have a solid enough proposal from you on the KEP?
E: So, yeah, I feel like we've made a lot of suggestions in the comments. I don't think the KEP has been updated to reflect that; I'm not 100% sure of the current status, but I don't think it has been updated. I know Matt Brinkley from Google was also looking at it, but I think the last time I talked to him, he mentioned that the original author had said, yeah, I don't really have time right now to work on it.
E: So I'm not sure if it's something that's going to be worked on in the next couple of months.
E: Yeah, at least not this quarter. I'm still interested in it, and I think some other folks at Google are still interested in it, so we're seeing if we can find an owner, but I don't know if it'll be prioritized.
A: ...on this one, but we should probably reach out to SIG Storage to see if there's anyone in that community who is interested in pursuing this, because where past efforts on this fell down was largely around the intersection of user namespace mapping and the storage code.
D: Yeah, there's probably some interesting intersection with the PSP replacement as well.
A: Yeah, if you look at the old PRs, I think Jordan and I almost had a version of this that went to alpha, but really the managing of PVs was the part that we always got hung up on. So yeah, I just think let's call this one out and see if maybe anyone in SIG Storage wants to tackle this.
D: ...so we want to move CRI to beta, and ideally, if we need any CRI changes for user namespaces, if we can get those nailed down, then we can make them part of the CRI beta.
D: Yeah, so there were a few options where we said the runtime can pick, but then it would have to inform the kubelet, so that's one additional hop back and forth. But right, I guess people haven't really nailed that down.
D: And another thing of note here: kernel 5.11 landed some patches that will help with the chowning issues, which is a good sign. That was a big blocker, which made us think, oh, we have to do a single mapping for the cluster or the node, but the kernel is making progress.
E: Was that on decoupling user namespaces from, like, storage mappings?
D: Okay, so the next one is the liveness probe timeout. Elana, do you want to talk about it?
C: Yeah, there are just a couple of updates from alpha that I need to make for beta, and then it's just a matter of flipping the feature flag there.
C: Actually, something this sort of brought up: in general, the probe-level terminationGracePeriodSeconds (I don't know exactly what it's called right now) is allowed to be a negative value; it's not currently rejected in validation. So there is a larger problem that's sort of related to this. Derek had suggested that we fix this for beta and also take care of the pod-level one as well. I think that is a good idea; I have been working on the KEP update.
D: Yeah, I think Dawn... I will have to check offline. Okay, yeah. So the next one is swap: Elana has a KEP, please review!
C: It's up. It doesn't have specific implementation details yet, because I want to make sure that everybody likes the high level before I actually go in and write how we'll do the implementation, but mostly it should be there. So please take a look; I also have it on the agenda to remind people, and there's a link.
D: Thanks. So the next one is seccomp by default. Sascha has a draft up, and I need to review it; once it's ready, we'll propose it for review on SIG Node and see if we can take it forward.
H: Hi folks. In general, from my perspective, to graduate to beta we have to fix some issues. The first issue is init containers: I already prepared a pull request for how to handle init containers and reuse the memory that was allocated for them.
H: I will also add to the document the additional issue that Derek requested, which was to provide some memory manager metrics. I probably want to provide them under the resources API; I have some additional questions regarding that, and I added it to today's agenda so we can discuss it later. I just don't want to take all the...
A: ...time. All right, this might be a good time to give awareness of the work. Do you want to just summarize what the memory manager is doing, in case there are folks on the call who want to engage?
H: It also works for huge pages, currently only for sizes of 1 gigabyte and 2 megabytes. In general it can guarantee you that your memory will stay on the expected NUMA node. Otherwise, the kernel tries to provide best effort, but it still will not guarantee that your workload will always allocate memory from the same NUMA node; the memory manager can.
A: Yeah, and I don't know if Kevin is on the call today, but I assume, Artyom, you're working with Kevin Klues from NVIDIA on getting the PRs you need reviewed, so that we're not getting delayed. So thanks again, thanks.
D: All right, awesome. So the next one is pod overhead. Sergey, I think for some reason it slipped in 1.21, right? Do you have an update for us?
B: Yeah. Pod overhead, just to give context, is a way to tell how much overhead a runtime class will introduce, that is, how much more memory Kubernetes needs to allocate for pods. Just this release we fixed a big issue with pod overhead: it wasn't accounted for in some allocations. As for graduation to GA, it has been in beta for a very long time. I've been asking around about who is using it, and I didn't find many people using it in production.
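For context, accounting for pod overhead means the effective pod request is the sum of the container requests plus the RuntimeClass overhead; the bug mentioned was that some allocation paths ignored the overhead term. A simplified sketch (function name and numbers are illustrative):

```python
def effective_memory_request(container_requests_bytes, pod_overhead_bytes):
    """Effective pod memory request = sum of container requests + RuntimeClass overhead.

    Simplified illustration of pod overhead accounting: schedulers and
    kubelet admission should use this total, not just the container sum.
    """
    return sum(container_requests_bytes) + pod_overhead_bytes

# Two containers plus a sandbox-style (Kata/gVisor) overhead of 120 MiB.
req = effective_memory_request([256 << 20, 64 << 20], 120 << 20)
print(req // (1 << 20))  # 440 (MiB)
```

A path that drops the `pod_overhead_bytes` term underestimates the pod by exactly the sandbox cost, which is the class of bug that was fixed this release.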
B: So I think one of the graduation criteria for GA would be to find real use and make sure that it's satisfying all the needs, and the fact that we found this critical bug just now may indicate that nobody is actually using it very extensively. So yeah, if you're using it, please reach out to me, and I will also start a document and begin pinging people.
A: But I guess, just to relate what this feature did: it was for folks exploring other sandboxing solutions in their runtime. I can appreciate that maybe folks are; I know Google had gVisor, and then there's Kata and others in the community.
B: Yeah, for gVisor it was considered, and I think it was tested, but it was never enabled at mass scale in production, so I will keep pushing with the team and check with them.
D: All right, so Derek, do you want to be the approver on that one? Or, I'm not sure, I can approve it as well.
A: Yeah, I had worked on the original KEP with Tim Allclair, so I don't know if he wants to be involved here as well, but I'm happy to.
D: Okay, so the next one is Windows privileged containers. Is Mark on the call?
I: Yeah, we're hoping to get the implementation PR merged pretty early in 1.22. This got delayed just because of some late reviews coming in and not being able to get consensus in time, so hopefully we're still targeting alpha for 1.22.
D: Okay, awesome. So the next one is kubelet credential providers. I think this also slipped in 1.21. Andrew, are you on the call?
E: I do have a quick question on that, which is (I know it wasn't in the original KEP) whether anybody is looking at adding support for passing bound tokens into the credential providers.
E: So today you can mount a projected token on a pod, right? You get an auto-rotated token for the service account associated with that pod, and because those are all OIDC-compatible JWTs, you can federate those identities into...
E: ...third parties outside the cluster. So there could be some advantage there. I mean, I eventually want to be able to do image pulls with the identity that's bound to the pod. Usually you have to exchange the token that Kubernetes generates for a token from whatever provider is hosting the image in order to authenticate for the image pull, so this is about making it possible to pass the bound token for the pod into the request to the credential provider for the image pull credential.
A: I think it's a really interesting use case for this particular one. My memory on this is that we had grown a feature in kube that we'd kind of forgotten about, which then got coupled to the externalizing of cloud providers. The KEP here, which Andrew had and I tried to help with, was delegating to, I guess, a secondary source to allow you to figure out credentials when pulling from particular registries, whether it's GCR or that type of thing.
A: If the scenario you're describing is that today, when you pull an image, it's completely disconnected from the pod it's associated with in the kubelet's view of the cache, then maybe we should start thinking about a pod-overlay type view on the image cache, and...
A: ...how runtimes pull inside their sandbox, maybe slightly differently, or how to tie things together. That's interesting.
E: Yeah, I think for us it's more that we've had customers ask if they could scope it to the service account on the pod. Today they're exporting a long-lived key from GCP IAM and sticking that in an image pull secret, and they don't want to do that anymore. That's kind of where we're coming from.
D: I think Intel's confidential computing presentations recently captured those use cases too, Michael, if you know. So maybe we could explore a common document for requirements tying image pulls to pods and capture these use cases there.
D: All right, so the next one is the node service log viewer. I can talk to that one as well. This is a feature that will allow users to use kubectl to get service logs from both Linux and Windows nodes, and there was a KEP opened in 1.21.
B: No, I think one of the problems is monitoring agents and security agents that are sometimes lacking support for runtimes other than dockershim, but we need to keep talking to them. Okay.
D: All right, so the next one is cAdvisor-less, CRI-full stats. Peter and David Porter have been working on a KEP. David, do you want to summarize that?
F: Sure, yeah. For this one, we basically have some issues right now because we have the CRI to abstract away the runtime, but cAdvisor is still vendored in, and it basically has logic for each runtime hard-coded into it. So we want to move off of that and have the CRI be responsible for providing metrics, but there's a migration story around current metrics, and that type of thing, that we need to figure out.
A: It's one where I have to openly admit it gives me great fear that we're going to miss something, but I unfortunately don't have bandwidth.
D: Happy to help review and make sure. So the two guiding principles are: don't break any existing metrics, and don't regress on performance.
J: Yeah, and identifying the CI jobs, if at all; I see a lot of references to this in some CI jobs, I think. Yeah, I think it's mostly the...
A: Yeah, so if I had a way to stack-rank priority on this... maybe we have to do that as a separate exercise. I want to thank Vinay for his patience and for helping us get to a good design on that. The good news is that the KEP was merged.
A: I'd really love to see progress on this, and I think at this point it's probably ready to just get to the implementation. I haven't looked at his latest implementation PR, but I'm really happy with how the design turned out.
A: Yeah, I just wanted to let Vinay know that I appreciate his enduring persistence in this important area.
C: That's great. And what I will do: we have this empty to-dos column here as well, and once we're finished reviewing everything, I'll go and check everything and fill in a to-do for each of the KEPs, so that we know what's left to get them moving, for the next meeting.
D: Thanks, Elana. Okay, so the next one is new CPU manager policies, or enabling external policies for the CPU manager. Francesco has...
K: Yeah, this needs a bit of context. My original intent was to add new CPU manager policies, so I started writing a KEP and following all due process. Then I posted in the SIG Node mailing list asking for feedback, and there was strong interest in "hey, let's just enable external policies and stop adding built-in policies."
K: So here we have, let's say, a decision to make as a community: which is the correct way to extend the CPU manager in the first place, adding new built-in policies or enabling external policies? I was not involved in Kubernetes at the time, but during the conversation it emerged that originally it was proposed to enable external policies, if I'm not mistaken by Levante, who should be on the call. I believe so.
K: Just to follow up: if we decide, as is my impression and my understanding, that we want to move forward towards external policies, I can talk with Kevin Klues, who himself proposed that maybe it's time to reorganize the CPU manager and device plugins, and maybe we can decide about the way forward. So really, what I'm looking for is... yeah, sorry, go ahead.
A: So I guess, just to set context for those who don't have the history: originally there were two CPU manager policies identified, and we only ever got one written, or I guess really three. There was the "none", or default, policy as you have now, and then there was the static one, the idea being: how do we gain trust in our user community for performance-oriented workloads? And then we had...
A: We had talked about a third one called dynamic, which was going to be about what the kubelet could try to do to make this more autopilot in nature: instead of giving a pod a bound CPU set for its life, potentially adjusting it dynamically based on observed usage, that type of thing.
A: I think the one thing I would be curious about is what the set of potential policies is that people are interested in enumerating, and then understanding how they intersect with things like the memory manager or the topology manager, and looking at it from that perspective. But that's just the history of how we ended up where we are now; it doesn't mean we can't change. There are also arguments for just allowing this to be handled completely outside of the kubelet for all these resources. So I guess that's all I had on that.
K: What is needed is to understand how to move forward: either moving forward with the KEP with the built-in policies, or pivoting towards enabling the external ones. So really, unfortunately, I don't have a clearer statement with respect to the state. We don't know how to move forward past that, I guess we'll just go... is there a reason why...
J: Let me just give you one data point before you go there. Akihiro is working with kind, trying to get some changes in so kind can be rootless, or something like that, and he has one PR, a short PR, that has been stuck in k/k forever.
A
You
direct
yeah,
so
I
apologize.
I
I
I
admire
akiro's
desire
to
to
do
this
work.
I
think
the
tension
I've
had
with
it
was.
A: ...just understanding the usage context, right? It's one thing for kind to run and maybe make some sacrifices, and it's another thing to say that in different production scenarios you'd have to do something different.
A: So I think my recollection was that previous attempts here had disabled all resource management, which is basically like turning off a third of the SIG, and I had thought that with cgroups v2 you could actually keep resource management on and still be rootless. So I think we just need to figure out what intermediate points we want to hit, but yeah, if I'm wrong, please let me know.
J: Well, the way I think about it, Derek, is that there's nothing wrong in getting something into alpha where you are making assumptions about what will work, what will not work, and what needs to be switched off, and then going from alpha to beta will be a bigger bar, especially for production scenarios, before it can be switched on by default, right?
J: I think Akihiro gave a demo to the CNCF runtime SIG. I can find and paste the link, okay, and also invite him. He's in Japan, I think, so it's going to be hard for him to make this meeting time, but you could ask him some questions.
D: Okay, asynchronously.
A: Yeah, I mean, I think, as a macro community challenge though, Dims, I don't know if we're good at this, but...
A: Is it like the CAP theorem, where you can be consistent, available, or whatever? To some degree, we get some of these enhancement requests that are very forward-looking, and they require you to sacrifice some leg of your three-legged stool. I'm not sure if our responsibility in the SIG is to find a way to ensure we get all the prereqs done, so we don't have to do that, and I'm completely sensitive to the mental overhead.
A: I think kind is a good intermediate space that's questioning some of our assumptions, but my fear is that someone sees "Kubernetes supports rootless" and then asks for every 5G deployment in the world to do this, and then it actually becomes a barrier to Kubernetes adoption rather than an accelerant, because then you'll be asking: what's the next three months of things that people need to figure out?
J: So, Derek, I'll give you an example of why this rootless one might be of interest. For example, kubeadm is looking at a rootless kubeadm, but then they'll end up calling things in the kubelet, which is going to need root.
J: So between kubeadm going rootless and kind being available in a rootless mode, you could have the initial cluster for Cluster API, the master cluster, which will then spin off worker clusters. That one could be rootless, right? Because it's not really doing anything other than, you know, bringing up the workloads; it's the bootstrap master cluster, basically.
J: Exactly, so we're going to look at that there and then come back here, for sure. But I'm just saying that that is one use case where you're running something on a laptop: you don't really need to run as root, and you start an initial cluster that's not your production cluster; it's just for bootstrapping workloads onto an actual worker cluster.
A: That could be useful down the line, yeah. It's just that a rootless kubelet still needs to launch privileged containers: it has to go deploy a CNI, right, and then it probably has to be able to apply all the sysctls in the world, that type of thing.
A: I'm completely open, obviously, to exploring the right outcome, so thanks, James.
A: ...ranked on two dimensions, like effort and complexity, and I don't know, something else. But I'm kind of wondering, because some of these items are not as simple as the ones we did in the past.
A: Cool, well, thanks a lot, Mrunal, and I guess we look forward to everyone's feedback. We've got three minutes left today, and I'm not sure we're going to get to the other items, but with the call-outs to the KEPs that we're enumerating here, hopefully everyone has some context and we can get together a good plan.