From YouTube: Kubernetes SIG Node 20211019
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Okay, good morning everybody, or good evening, good part of the day. It's October 19th, 2021, SIG Node meeting, welcome. Today we have a reasonably short agenda; let's go through that. As is typical, we start with what happened in the past two weeks. We skipped last week because of KubeCon. I hope everybody enjoyed KubeCon and had the chance to listen to some presentations, or at least, like, watch. With us being so active, we're a little bit down in PRs, so we're, like, burning through the backlog, but a very high number of PRs is still here. I looked through the PRs, and nothing was closed that is not ordinary, so everything is as expected; nothing was lost.
B
All right, so yeah, I wanted to start a KEP, and the template said that I should make sure that the corresponding SIG agrees before I do a lot of work. So I linked here a draft, a very drafty draft; I didn't spend a lot of time on it, but I wanted to see if this was something that people would support. The point of this KEP is to expose the OCI runtime hooks.
B
So the OCI already defines a bunch of runtime hooks that are useful to know when the container has started, when the different stages of the container have started, and the same thing when it stops, and right now those are not exposed in any way. The container runtimes have these hooks: containerd and CRI-O have the hooks.
B
Other runtimes, like Kata Containers or gVisor, have only a subset of the hooks, not all of them, but some. But they are not exposed in any way to Kubernetes, so for people running workloads on Kubernetes, they cannot make use of these hooks.
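For readers unfamiliar with the hooks being discussed: in the OCI runtime spec they live in the container's config.json. A minimal illustrative sketch follows; the struct definitions here are local stand-ins for the runtime-spec types, and the hook path and arguments are made-up examples, not anything from the KEP.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local stand-ins for the OCI runtime-spec hook types (the real ones
// live in github.com/opencontainers/runtime-spec/specs-go).
type Hook struct {
	Path string   `json:"path"`
	Args []string `json:"args,omitempty"`
}

type Hooks struct {
	CreateRuntime  []Hook `json:"createRuntime,omitempty"`
	StartContainer []Hook `json:"startContainer,omitempty"`
	Poststop       []Hook `json:"poststop,omitempty"`
}

// exampleHooks builds a hypothetical lifecycle-tracing setup: the same
// binary is invoked when the container is created and after it stops.
func exampleHooks() Hooks {
	return Hooks{
		CreateRuntime: []Hook{{Path: "/usr/local/bin/trace-hook", Args: []string{"trace-hook", "created"}}},
		Poststop:      []Hook{{Path: "/usr/local/bin/trace-hook", Args: []string{"trace-hook", "stopped"}}},
	}
}

func main() {
	// Serialize the way it would appear under "hooks" in config.json.
	out, _ := json.MarshalIndent(exampleHooks(), "", "  ")
	fmt.Println(string(out))
}
```

Today a cluster administrator can only wire such hooks through runtime-specific configuration; the KEP discussion is about surfacing them through Kubernetes itself.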
B
So the proposal would be to add a way to do the whole wiring of these hooks up the stack: make them available through the CRI API, then make the kubelet use the CRI API, and then also specify where in the pod spec all of this goes. So it's a lot of wiring to make this happen.
B
I don't think it's too hard, but there are quite a bunch of moving parts, so we need to be in agreement that we want to make this happen.
C
So, do we have concerns? Because on this topic, Kinvolk talked to us one and a half years ago, or maybe even earlier than that. I think that some community members had some concerns. I was looking forward.
B
Just a second, before we... you asked about Kinvolk. Yes, I'm currently at Microsoft, because Microsoft acquired Kinvolk, but I'm on the same team that brought this up from Kinvolk two years ago, and yes, this is something that we would like to have in order to, like, intercept when containers get created. But it's not just us: we've seen a lot of different use cases out there of different things that people want to run when containers get created.
F
How would you make that switch? But yeah, I think there's definitely a lot of interest in this at the container runtime level as well, and, you know, we would like to have some conversations on it. Maybe the NRI model is going to be a good API for us to, you know, implement. I'm not really sure; we're going to need more work here. So.
E
So I think, also, to add one more: I have a long history with the hooks; I proposed them originally in runc, and we've added them in CRI-O. One thing that came out over the years is, like, the example of the NVIDIA hook, right? That was too complicated to write. So there was an effort started some time ago, called CDI, to look into how we can simplify the hooks.
E
So with CDI we have, like, a declarative way to modify the runtime specification, so the hook doesn't have to go and make those changes, because the runtime is already capable of making a lot of those changes, and then the hook becomes much simpler, like running ldconfig or something like that. So that's the path for GPU-enablement types of hooks. But I do agree that there is no easy way to run hooks that are, like, capturing syscalls or tracing and so on. So.
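To make the CDI point above concrete: a CDI spec is a declarative JSON file the runtime reads, describing device nodes to inject plus a now-trivial hook. Here is a rough sketch of what such a spec contains; the field names approximate the CDI spec format, and the vendor, device, and paths are invented for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Approximate shapes of a CDI spec file; the real definitions live in
// the container-device-interface project.
type DeviceNode struct {
	Path string `json:"path"`
}

type CDIHook struct {
	HookName string   `json:"hookName"`
	Path     string   `json:"path"`
	Args     []string `json:"args,omitempty"`
}

type ContainerEdits struct {
	DeviceNodes []DeviceNode `json:"deviceNodes,omitempty"`
	Hooks       []CDIHook    `json:"hooks,omitempty"`
}

type Device struct {
	Name           string         `json:"name"`
	ContainerEdits ContainerEdits `json:"containerEdits"`
}

type CDISpec struct {
	CDIVersion string   `json:"cdiVersion"`
	Kind       string   `json:"kind"`
	Devices    []Device `json:"devices"`
}

// exampleSpec declares a GPU device: the runtime injects the device
// node itself, and the only remaining hook is a simple ldconfig run.
func exampleSpec() CDISpec {
	return CDISpec{
		CDIVersion: "0.3.0",
		Kind:       "vendor.example.com/gpu",
		Devices: []Device{{
			Name: "gpu0",
			ContainerEdits: ContainerEdits{
				DeviceNodes: []DeviceNode{{Path: "/dev/gpu0"}},
				Hooks:       []CDIHook{{HookName: "createContainer", Path: "/sbin/ldconfig"}},
			},
		}},
	}
}

func main() {
	out, _ := json.MarshalIndent(exampleSpec(), "", "  ")
	fmt.Println(string(out))
}
```

The design choice is the one described above: the spec edits, not an arbitrary hook binary, do the heavy lifting.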
C
Yeah, if I remember correctly, and please correct me if I'm wrong, I think that we talked about this. We all generally feel there are a lot of use cases, like the NVIDIA GPU support, a lot of use cases where we think those hooks are useful. But the concern is, it is hard, because we also realize the hooks are very, very powerful. Some of the hook implementations could, like, harm the entire host.
C
So then there's the exposure, the security concern and safety concern, all those kinds of things. This is why we raised it in the past. We want to continue discussing, but I think this is where we were two years ago, and we want to continue discussing. I think there are the other efforts to make this more declarative, what kind of hook you want to install, and to regulate that; that kind of thing we try to push forward. So anyway.
C
I just want to share some background context here on the hooks we want.
F
There's a lot of good context, good text being added to this topic, and Brian makes a good point, right?
H
If we expose that in the Kubernetes API or in CRI, it may make the pod... I mean, it depends on how we define the API, but it may make the pod non-portable, right? Because if your hook depends on something on the host, then it means that your pod has to run on a host with that software, or all those dependencies, and those dependencies are not built into the image.
H
I mean, if I understand correctly, that's possible, so that's something we may want to avoid, because you are basically writing a pod that directly depends on something on the host that is not, right, not packaged in the image.
F
Yeah, you know, I think that's what Mrunal was talking about with CDI. This makes a lot more sense when these are pre-determined dependencies applied, you know, for certain use cases, but if we were to expose it in a very general way, yeah, that could be very risky. So we want to be careful here, right?
C
Yeah, that's the reason in the past that we raised this concern, and actually this was even two years ago; it's not the first time we've talked about this area. When we first started with those hooks, I believe many people wanted them, because we want more power, more flexibility, but, on the other hand, in the SIG we have to consider the general use cases, maybe for security and safety, and for the host's safety, right? So that's why we have been discussing this so many times.
C
If I remember, and I want to ask Mike if I remember right: for the Kinvolk use cases two years ago, most of it was for tracing, and maybe right now it's different. But back then the most important thing was that they were missing the Kubernetes labels and the information being passed down to those hooks; the rest of the stuff we actually could do. This is why, with well-defined use cases, we can help. But do we need to go through the, like, powerful hooks?
C
Do you need to enable that at the same level? We can discuss; that's the way we've always been positive. We could understand your use cases and support your use cases in a proper way, and make sure it's not... correct me if I'm wrong, because many cases came to us wanting to use a powerful hook, but at the same time, when we asked why they wanted it at the API level, it was mostly just for labels.
B
Right, yeah. So for our main use case, yes, you're right, it is for tracing, and yes, applying the labels is the main thing. What we want to do is be able to detect a container as soon as it starts, like right from the beginning, and right now we are using some interception with fanotify, and it works, but it's kind of hacky. That's why we would want to have something that is not.
B
Right, but there are other use cases that we are also interested in for security, like, for example, being able to intercept a container before it starts if the image is not, like, approved to run, for whatever, like, whitelist of images that are approved.
D
That's the thing I would add: if there's a concern about wanting to be able to detect the containers, like, you know, faster, as opposed to, like, after a full sync loop or something like that, there was that proposal about optimizing the PLEG in order to take that time down significantly.
B
The other thing we were discussing was that some metadata, or not metadata, some underlying implementation details, are exposed, but others are not. In particular, the one that we don't have access to directly is the process ID of the container, which is the data that we would like to have, but other things are exposed. And yeah, this would be, like, a different KEP, exposing the container process ID. I don't know, maybe it's something that we could explore, if you think the hook proposal makes no sense.
E
I think in the verbose mode we were already returning that, if I remember correctly; I have to check. But the problem with that is, like, what do you return for a VM, right? Like, a VM-based runtime will have different characteristics, and in the case of CRI-O, for example, we are not starting the pause container by default anymore, so for the pod there is no PID. So a PID is kind of not always guaranteed to be present.
B
So, okay, for our project what we want is to be able to trace containers as soon as they start. But, as I was saying, there are other projects that we are also involved with that want to be able to, like, deny executing a container based on the image, and all of this is about being able to, like, intervene right when a container is starting.
A
Yeah, I think what may be useful, looking at this KEP, is to expand the motivation section into actual use cases, rather than generically saying that the hooks are useful. Maybe that will help to understand which scenarios you want to address, and maybe address some of the scenarios differently, or, like, declaratively in some way. Okay.
G
So it seems like admission control, like, maybe what you're looking at, is not at the kube API perspective but maybe at the node level. I think that there was a KEP in the past looking at, like, a node-level admission controller.
C
So yeah, I agree with you; this is why I keep asking. I want to summarize what you... I think you chose to put all your use cases in. There are several things you're asking for, right? One is, though, you want to understand the labels, right? I think that's partly what we've talked about in the past: within a single node, I think passing that through the CRI is reasonable, but the host-level enablement maybe we need to argue about, because there may be too much information there and it's out of control.
C
So that's the separation, and there are also potential security concerns. We should discuss it if you need it. But, like, interjecting into the execution to look at the image and whether it is signed, I personally think that's kind of the wrong way to do it, and there's a security concern there; we need to discuss with the security folks whether that's the right model. I think, at least, that's how I think about it as well.
C
Maybe I'm wrong, and that SIG also has more power to answer this question. And the last one is that you want to get the PID. We can discuss that, because right now, it is true, it's undefined; like, given use cases like the VM, right, for certain use cases we just don't know which one should be exposed. If we could agree upon that one, I think that's reasonable; we could add that one at the CRI level as a status. So it just splits into different cases.
C
If you just simply say, "oh, I want the OCI hook," there's a huge concern for us, for safety, for the safety of the host, not even just for security, right? So we've discussed this many times. And also because of new cases, like, just recently, after we found some bug with the PLEG, and from the comments, to some extent the community has to move towards a different way; so actually we've raised the same concern just recently.
D
Yeah, I agree with all of that, and I would add, like, one of my concerns is that, at least as this KEP is currently written, this is a very large-scope change. It may seem small, but I don't see it as that small, and, having been one of the folks who's been very, like, arms-deep in the kubelet recently, I really think that we would need significantly more resources. Like, I don't think this can just be sort of a drive-by thing where, you know, you do the one or two PRs and it's done; we would have to have people, like, actively involved in kubelet maintenance in order to be successful, yeah.
F
I think, Elena, it's important to recognize that the integration with Kubernetes is going to be the more difficult part of this, for, you know, generic circumstances. But the container runtimes, you know, all of them, can currently support these hooks; it's just they don't have a common way to configure them.
F
The CRI-O team has a good set of code for a way to configure them, and the containerd team has put together this NRI proposal, and that's being refactored as we speak. So I would say it's an alpha, kind of work-in-progress, effort, and that would be another way that we can configure these...
F
...these hooks, in a generic way, at the container runtime level. But how we expose that to Kubernetes is still also a work in progress, and I agree with you: there are a couple of groups. There's the CoD group that's working on this, and that has produced CDI already, but again, that's at a different level; it has not been exposed directly, except through certain annotation kinds of models, you know, to the Kubernetes API.
I
That's also the big thing right now: every time we touch that part of the kubelet, we end up breaking things in unexpected ways, because we don't have the test coverage to actually make big changes. Like, we probably have six months of testability work, with the, you know, small handful of people who are full-time on kubelet, before making any change like this is even considerable.
C
I agree with you, but I just want to say that the OCI hook front has been discussed for more than five years. Initially, I think, people maybe disagreed with us, but we pushed back with reason; we didn't really solve that problem, because we do think about the security concern. I believe right now the Kubernetes community, everyone, agrees with us.
C
We always want to see a different way to solve customer use cases like that, like the NVIDIA GPU one, which we can also talk about. But I understand that, so far, actually, they moved forward and they don't really need the OCI hook.
C
We fully support that, so we can help you break your current problem into pieces and help you solve the problem to some extent. But the OCI hook is a real concern for us: how to expose it at the CRI level, and also, the initial proposal maybe even goes to the pod level. So that's been...
B
Yeah, okay, on the NVIDIA point: like, they solved it by having a runtime wrapper, and this is the kind of thing where I'd say that it's not a nice workaround, and I would like us to find a better solution, right, and not have, like, each one implement their own ugly workaround. But I mean, I understand everything you're saying, and I will take it back to my team and see if we can think of something better, better scoped, or how we can move forward with that.
F
Yeah, yeah. Mark, just give myself and Mrunal a call or chat; we can hook you up. And then also, of course, Alexander and everybody else who's commented here would be a great, you know, link for you to get in touch with the status of where these APIs are. I...
J
I would actually suggest to welcome you to join the CoD working group meetings. We have it on Tuesdays, once every two weeks, and that's exactly the place where we're discussing different use cases for NRI and CDI, and your input will be very, very welcome.
A
Okay, the next item on the agenda is Markus's class-based resources in CRI KEP.
L
Hello folks. Yes, this is another API-change draft KEP that I've been working on for the past few weeks, a few months, and it's now in KEP form. This would be about bringing class-based resources into the CRI protocol.
L
A little bit of background on this: the initial motivator was the Intel RDT technology, which is basically a QoS technology for memory bandwidth and cache, and it's, like, inherently class-based. So you have a finite, limited number of classes that you can configure through kernel interfaces, and then you can assign PIDs to these classes. And this is also already supported in the OCI runtime spec.
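As background on the kernel interface referred to above: on Linux, RDT classes appear as directories under the resctrl filesystem, and a process joins a class by having its PID written to that class's tasks file. A small sketch, assuming the standard mount point; the class name and PID are made up, and actually performing the write needs root and a mounted resctrl filesystem, so the command is only printed here.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// The kernel exposes RDT through the resctrl filesystem: each class
// is a directory under /sys/fs/resctrl, and a task joins a class by
// having its PID written to that class's "tasks" file.
const resctrlRoot = "/sys/fs/resctrl"

// resctrlTasksPath returns the "tasks" file for a named RDT class.
func resctrlTasksPath(class string) string {
	return filepath.Join(resctrlRoot, class, "tasks")
}

// assignPID renders the write a runtime would perform to place a
// process into a class (shown as a shell command, not executed).
func assignPID(class string, pid int) string {
	return fmt.Sprintf("echo %d > %s", pid, resctrlTasksPath(class))
}

func main() {
	fmt.Println(assignPID("gold", 1234))
}
```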
L
So we thought about ways to support this, also in container runtimes and then in Kubernetes. And as I said, it is, like, inherently a class-based sort of resource. Then block IO was a logical kind of next step in this direction as well. The block IO controls are also supported in the OCI runtime spec, but they are, like, very hardware-specific and hard for a user to use as-is, so we came up with this kind of class-based approach for that as well.
L
So the motivation was initially to get Intel RDT supported in Kubernetes, and with that also came the block IO controls.
L
The goals here would be, of course, to support RDT and block IO, but also to make this, like, more generic: support for possible future extensions of similar kinds of class-based resources. I don't know if "class-based" or "class resources" is the best name; maybe it could be, like, "QoS resources" or something, but "class-based" is what I used in this KEP draft for now.
L
So the proposal itself in this KEP would be to enable these class-based resources in the CRI protocol. Basically, we have, like, a two-step approach here: first, enable this in the CRI protocol, following the pattern that, for example, seccomp was using, so having it in the CRI protocol and at first enabled through pod annotations; and then, in the next step, the goal would be to make it really a first-class citizen in Kubernetes with pod spec extensions. But this KEP is basically just about the CRI protocol.
L
So the proposal would be to add a notion of these class resources in the CRI protocol, and then enable pod annotation parsing in the kubelet to assign them to containers.
L
So the CRI protocol changes are pretty small and backwards-compatible in this proposal: we add a new message in LinuxContainerConfig, LinuxContainerClassResources, and it would be basically just a map.
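A sketch of what that addition might look like in generated Go form; the field shapes below are guesses based only on the description in the meeting ("basically just a map"), not the actual proposed API.

```go
package main

import "fmt"

// Hypothetical shape of the proposed CRI addition: a new message hung
// off LinuxContainerConfig mapping a resource type to a class name.
// Names and types here are illustrative guesses.
type LinuxContainerClassResources struct {
	// e.g. "rdt" -> "gold", "blockio" -> "bronze"
	Classes map[string]string
}

type LinuxContainerConfig struct {
	// ...existing fields elided...
	ClassResources *LinuxContainerClassResources
}

func main() {
	cfg := LinuxContainerConfig{
		ClassResources: &LinuxContainerClassResources{
			Classes: map[string]string{"rdt": "gold", "blockio": "bronze"},
		},
	}
	// The runtime would translate each entry into the matching
	// resctrl or cgroup configuration for the container.
	fmt.Println(cfg.ClassResources.Classes["rdt"])
}
```

Keeping the message a plain map is what makes the classes opaque to the kubelet, as discussed later in the meeting.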
D
Just a quick question, because it looks like this is going the route of pod annotations. Pod annotations can't really be versioned like an API, and that can cause a lot of problems for, like, version skew in Kubernetes. Do you have a plan to deal with that?
D
From my understanding from doing API reviews recently: annotations are great for proof of concept, but trying to go from annotations in a KEP, as, like, an officially supported thing in the Kubernetes code base, and then trying to make those, like, officially supported API constructs, that is very difficult.
M
Yeah, I was going to say: for the HostProcess containers we put in both the CRI API changes and the annotations, because there was a delay for when the CRI API updates would get vendored into the different container runtimes. So we did have a plan to, you know, support the CRI fields right away.
C
I want to support that, because in the past I had the experience of graduating init containers from annotations to fields, and during the alpha and beta it was a really tough job for us. Basically, it's just tedious, but it's really easy to introduce issues, yeah.
D
So this is just based on a conversation I had with Jordan Liggitt. I'm currently doing API review shadowing for node, so this is something that I would bring up as an API reviewer, so I wanted to flag that early.
L
I actually have that listed as a kind of alternative approach at the end as well, so it is possible; I have thought about that. Okay, we can do it, like, in lockstep, the CRI protocol and the pod spec as well. The reason, basically, why I put this as pod annotations was that it would, like, be easier and less friction to first add it in the CRI protocol and then think about the pod spec enhancements. But I'm totally open to adding it in the pod spec as well.
L
But this is still an early draft version, so I really would like to get feedback on this. Of course, the naming is one thing, but also, in this draft KEP it's only about LinuxContainerConfig at the moment. Recently we started to think that probably this could, or should, be something that is, like, OS-agnostic, so it would be at the container config level, so automatically, kind of, both Linux and Windows would be supported.
L
I don't have much experience with Windows containers or Windows nodes myself, so that's basically why I went with LinuxContainerConfig, but...
C
Several use cases could be served. So, sorry, I have yet to look at your KEP, so... So, obviously, you have the two use cases; I think they are really important use cases, and we do see the problem. The reason we don't support it is that it is not an accountable resource, right? So we do see that; we've known that for years. There are certain resources in use, especially memory, where even the container runtime ends up being killed and the resource reclaimed.
C
But there are certain resources that are not chargeable, that have to be charged to the host. So it looks like your proposal will try to address that problem, so then we can do better accounting, better monitoring, and then, as a result, maybe even better scheduling. I'm not sure at this moment about scheduling, and the reason I say I'm not sure is that today the scheduler doesn't have the usage, so I'm not sure how this helps on that one. But that means, on the node side...
C
...the admission will do a better job, because we do know our situation. Then you mentioned, like, the block IO. Of course we don't support that right now, and not even network at all, and I didn't really map out how this can solve that problem. But I just want to ask: do you have, like, clear use cases for how you're going to use this proposal to configure, like, the other...
C
You also mentioned Windows; I'm not sure about Windows at all, because it's mentioned here even just in passing, if I remember; I just quickly scanned it. But I hope we can have clearer cases, like how you are going to solve this for block IO, and maybe network IO, because it's kind of a similar policy: how you plan to address this problem through this proposal.
C
So can you update the KEP for those kinds of things, so we can have a clearer understanding? Because at least I don't know how those two cases, both cases, and I do know what they are, I mean, like, the non-accountable resources and the block IO, how the same proposal can handle both cases here.
L
So the kind of main idea here would be that those classes would be kind of opaque to Kubernetes. All the configuration of the classes, and of what is available, would be handled and taken care of by, well, the system administrator or the container runtimes. So basically, the kubelet or Kubernetes would not need to know anything about the details of how, for example, these block IO classes would be configured.
L
Yes, and also because these are, like, very hardware- or system-specific settings: the number of RDT classes available, and all the devices that exist, can vary.
F
Even in situations where you would want a node to be optimizing itself, you know, for the resources that are currently on it, I think it's also going to be useful to tell our admission controllers and schedulers, right, that that's happening on that node, so they could, you know, take advantage of the changes that are happening in the resources being reported in the stats going up, to understand: well, why did that pod all of a sudden have more resources, right?
F
We're probably going to need a little bit better integration between what's happening on the node and what's happening at the cluster management side of the fence.
N
I had one other question I kind of wanted to bring up, specifically around the block IO proposal: what's kind of the difference between, or maybe a question, what's the difference between using Intel RDT for block IO, you know, control, versus, like, cgroup v2, which has a lot of IO enhancements, right, like IO latency and different types of IO controls? So what's kind of the...
L
Yeah, I mean, yeah. RDT and block IO would be, like, two different resource types, or different class resources, here. RDT is just about memory bandwidth and cache allocation; block IO is, like, a totally orthogonal class, or QoS control, here as a separate kind of resource, and it uses the cgroups block IO controller directly.
L
Oh, I see, okay. So yeah, the admin would configure, for example, these classes: say, okay, I don't know, guaranteed/burstable/best-effort, or gold/silver/bronze, whatever, and say that, okay, "bronze" throttles all block devices to, like, 10 megabytes per second, I don't know, or whatever specific controls, even very detailed controls. Or, yeah, of course, like...
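Since these classes would ultimately resolve to cgroup settings, here is a sketch of the kind of cgroup v2 io.max line a "bronze" class as described above could translate to; the device numbers and limits are invented for illustration.

```go
package main

import "fmt"

// cgroup v2 throttles block IO through the io.max file, one line per
// device: "<major>:<minor> rbps=<n> wbps=<n>". A class name like
// "bronze" could simply map to such a line per device.
func ioMaxLine(major, minor int, rbps, wbps uint64) string {
	return fmt.Sprintf("%d:%d rbps=%d wbps=%d", major, minor, rbps, wbps)
}

func main() {
	// "bronze": throttle device 259:0 to 10 MB/s in both directions.
	// The runtime would write this line to the cgroup's io.max file.
	fmt.Println(ioMaxLine(259, 0, 10*1024*1024, 10*1024*1024))
}
```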
L
Yeah, and about the runtime support: the OCI runtime spec has support for RDT and, I believe, block IO controls, and runc supports those, and CRI-O...
L
We submitted PRs that were merged, like, a month or two ago, so basically RDT and block IO are supported in CRI-O at the moment. Well, it's made available for users via pod annotations, but they are parsed in CRI-O at the moment, so on the container runtime side, which is not optimal, but it's a great kind of way to make it available for early adopters, so to say. And we have a PR open for containerd as well at the moment.
A
I posted this link as well. So if you attach this link to the KEP as well, as informational, it may be helpful for reviewers.
A
Okay, yeah, I think it was also mentioned on Slack. Okay, so another KEP to review. And then I want to share this link with everybody: we want to remove dockershim in 1.24, and time is running very fast. So please, if you can, distribute this form among end users, like whoever is using Kubernetes, so maybe they can give us feedback on how prepared they are.
A
I think it's very important, from a communication point of view, to make sure that Kubernetes is still seen as a good platform, and, like, it's very unfortunate to break customers unexpectedly. So I think the double use of this form is, first, to bring awareness...
A
...that this is happening, and second is to collect information, so both are very important. So yeah, please distribute. And I also wanted to bring up this one: I saw this merged, like, on probe termination on graceful shutdown, we terminate the probes, and I see another PR that's doing the same on other things. And I want to discuss: like we discussed before in one of the PRs, we don't want to terminate readiness probes.
A
We want to keep running them; even if they fail, they will just make the pod unready, and maybe, like, some traffic will stop flowing to it. So I was wondering if there are any considerations here. I think you've been reviewing this PR, so...
O
So yeah, we were testing graceful shutdowns recently, and we ran into an issue with readiness probes and liveness probes, specifically on the graceful shutdown aspect of the kubelet.
O
We saw that the liveness probes were still being re-run on the shutdown, which could lead to the pod being terminated ungracefully in that situation, and so this PR terminates those probes on shutdown, so we shouldn't see that issue anymore. I didn't actually see this comment yet, did I?
A
Yeah, I just posted it, yeah. Thank you, yeah. I think we've been solving the same problem in this PR that was mentioned here, and there is a follow-up for that. We actually saw the problem on pod deletion: it also didn't stop liveness probes, and it led to some unexpected behaviors when graceful termination wasn't executed as well.
A
I think we can discuss it then, like, offline. I just wanted to make sure that this comment is not lost.
A
So, any other topics? I think one big topic for next week will be reviewing the KEPs, because we had a soft deadline and we need to understand which KEPs scheduled for 1.23 have PRs out; and if they don't, we need to understand whether we want to continue with these KEPs or postpone them to the next release. So if you have any KEPs that you're targeting for 1.23, please come to the next meeting to discuss them.
F
On the survey on the deprecation of the dockershim: was there an expectation about who's going to handle, you know, the reconfiguration of running, you know, Kubernetes, you know, making sure it gets back up, to reconfigure it? Or are we just going to pop an error message, or, you know, "you've got to define your new external..."?
A
Yeah, as of now, the recommendation is to drain and, like, restart it with the new runtime, whatever your hosting environment is. So there is no automatic...
A
And I think the biggest issue is that all the dependencies on Docker are somehow hidden. I mean, typically, dependencies are not coming from pods or applications themselves, but from the monitoring aspect of it, or some security aspect of it, like maybe even some registry configurations that suddenly will not be applied. So we don't automatically transition or transfer any of that, because it may not be desirable for customers. So yeah.
F
If you were going to an external CRI dockershim, you still wouldn't be using some of those internals, so okay, yeah, that makes sense: quiesce, reconfigure, and hopefully the clouds will handle it. But I expect, you know, like, minikube is probably going to have a way to, you know, just rerun it. It seems like it would be nice if we had, you know, some kind of documentation somewhere that explained this, you know, this process, for at least, you know, a dozen or half a dozen...
A
Yeah, we have an action item on that; we've been discussing what to document, and I think the only documentation we wanted was for self-hosted Kubernetes, but you bring up a good point: maybe on the kubernetes.io website we can link to other clouds and some ways to configure stuff. We're actually doing it for security and monitoring agents.
A
Yeah, and the situation is even worse for Windows, but it's not that... I mean, yeah, we still need to... we still have a desire to deprecate Docker and remove it; we just need to make sure that we don't break customers, too, in a very big way.