From YouTube: Kubernetes SIG Node 20230221
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230221-180547_Recording_1410x1120.mp4
A
Hello, it's February 21st, 2023, and this is the SIG Node weekly meeting. Welcome, everybody. I want to start this meeting today with an update on our PR status. I didn't include this table for the last four meetings or so; I think we've been deep into enhancements freeze and enhancement work, and I felt it may not have been top of mind for everybody. Before that I mentioned that the number of PRs was growing because everybody was busy with enhancements, mostly our KEPs, and I think now we're at the point where it's still growing because we don't have many approvers around. So today we have 230 PRs, which is close to the maximum we have ever had, and I know Mrunal has limited availability these days, so we'll need to start being creative about approving things.
A
If you want an overview of what's going on, you can always click on those links; they summarize the PRs created, closed, and merged in the last four weeks. There are not many surprises there, so just click through and you will catch up on what's going on with SIG Node. With that, I want to get into the beginning of the meeting. I think the first item is from Lucy; Lucy, are you on the call?
A
So this is a link that Lucy created. I think there is a lot of discussion happening, so maybe somebody can kick off this conversation, we can discuss it by voice, and then, if we need more notes, we can add them later. Is Ian around? Yes.
B
I'm here, hello. Can you hear me? Yeah, okay, perfect. So the situation we're looking at is: we have a 32-physical-core system, 64 vCPUs. We're specifying --reserved-cpus with a subset of those cores, eight vCPUs in fact, and the CPU manager policy is static.
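The setup being described can be sketched concretely. The CPU IDs below are an assumption (the meeting only says eight vCPUs are reserved), and `expand_cpulist` is a helper invented here to show how the cpuset-style list accepted by `--reserved-cpus` maps to individual CPUs:

```shell
# Assumed kubelet flags for the setup described above (CPU ids 0-7 are
# illustrative, not taken from the meeting):
#   kubelet --cpu-manager-policy=static --reserved-cpus=0-7 ...
# expand_cpulist is a made-up helper showing how a cpuset-style list such
# as "0-7" or "0-3,8" expands into individual CPU ids.
expand_cpulist() {
  echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    seq "$lo" "${hi:-$lo}"
  done
}
expand_cpulist "0-3,8" | tr '\n' ' '   # prints: 0 1 2 3 8
```

The intent of the flag is that these reserved CPUs are kept for system and Kubernetes daemons, which is the expectation the speaker describes.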
B
We have some pods that are using a static allocation; they have a full entitlement of an entire vCPU, or several vCPUs, and those work perfectly. There's nothing else running on those CPUs and everybody's happy there. But for the pods that are scheduled without such a static allocation of resources specified in their configuration, we see that those pods are getting allocated to the cores that are specified on the --reserved-cpus line.
B
This
was
kind
of
not
what
I
was
expecting
to
behave.
I
was
expecting
that
the
dash
dash
Reserve
CPUs
would
not
get
any
kubernetes
tasks
scheduled
on
them,
but
that's
not
what
I'm
seeing
I
was
just
wondering
if
this
is
appropriate
to
create
an
issue
for
or
if
I've
misconfigured,
something
that's
I,
probably
misconfigure,
something
but.
C
What you are describing is true. [inaudible]
B
Okay, then I will create an issue for this. Thank you.
B
So in this case we do have isolated CPUs configured. Actually, every CPU but those eight reserved CPUs is configured as an isolated CPU.
F
But you're still noticing that the pods being allocated from the default pool end up getting CPUs from the reserved ones.
A
Cool. And Swati, if you can suggest: do we need the kubelet log in this bug, or do we need more verbose logs? If you can advise, yeah.
F
Maybe you can describe the cgroups as well, you know, how those default pods are getting CPUs allocated; I think that information would be useful too. Probably the kubelet log has that information, but I'm not entirely sure.
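One low-level way to gather that cpuset information (a sketch: on the node you would use a container process's PID rather than the shell's own `$$`):

```shell
# Show the CPUs this process is allowed to run on; the CPU manager enforces
# its assignments through this same cpuset mechanism. For a pod, run this on
# the node with the container process's PID in place of $$.
grep Cpus_allowed_list /proc/$$/status
# The pod-level cpuset can also be read from the cgroup tree, e.g. the
# cpuset.cpus files under /sys/fs/cgroup/ (the exact path depends on the
# cgroup version and driver in use).
```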
B
Okay. And one kind of side question: how do the isolated CPUs actually interact with the kubelet? Does it detect which CPUs are isolated and make any decisions with them, or does it expect the isolated CPUs to be configured in alignment with the --reserved-cpus spec?
A
Yeah, and if there is a lack of documentation as well, maybe we need a separate bug clarifying that. But yeah, let's start with fixing the issue first. Okay, next up: Mikhail.
G
Yes, so this is something that I started a long time ago. When working on the Job failure policy, we wanted to annotate failed pods with information about the reason for the failure, and one of the possible reasons was that the pod was OOM killed. So we wanted to recognize that fact, but it turned out that there are complications, and it's not clear how to do it; it's not standardized. That was one thing. Another thing was that there was a bug in CRI-O where, on an OOM kill, it wouldn't actually produce this OOMKilled reason.
G
This was recently fixed by the developers in CRI-O, and I bumped the test-infra images to point to the latest version of CRI-O, so that now the test passes. So the first PR introduces an e2e test that checks that the reason is OOMKilled, for the context.
G
The feature about the Job failure policy doesn't use this signal; we restricted the feature to only the situations where we know what's going on. That was the third thing, and maybe we will return to this, but probably under a different KEP. For now we want to progress with standardization, so that it opens the route in the future to improve handling of failed pods, in Jobs in particular. What was missing was approval, and it was already granted by Dawn, so I'm happy about that.
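For context, the Job feature mentioned here is configured through the `podFailurePolicy` field. The manifest below is an illustrative sketch only (the name, image, and exit code are made up, and it is not the exact policy from this discussion); it shows the exit-code-based rules the feature was restricted to:

```shell
# Print an illustrative Job manifest using podFailurePolicy; all names and
# values here are examples, not from the meeting.
manifest=$(cat <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job          # illustrative name
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob        # stop retrying on this non-retriable exit code
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
  template:
    spec:
      restartPolicy: Never   # required when using podFailurePolicy
      containers:
      - name: main
        image: busybox       # illustrative image
        command: ["sh", "-c", "exit 0"]
EOF
)
echo "$manifest"
```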
G
If there are no issues and no one has objections, I will just unhold, so that the two PRs will merge: one with the e2e test, and the other is the change to the documentation of the field, so that we say that if a container is OOM killed by the cgroup OOM killer, then this specific reason is set. This is what currently happens in both CRI-O and containerd.
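What the standardized reason looks like from the Kubernetes side can be sketched as follows. The JSON is a hand-written, abridged sample rather than real cluster output; the `kubectl` path in the trailing comment is where the reason surfaces on a live cluster:

```shell
# Hand-written, abridged container status for an OOM-killed container:
status='{"state":{"terminated":{"exitCode":137,"reason":"OOMKilled"}}}'
echo "$status" | grep -o '"reason":"[^"]*"'   # prints: "reason":"OOMKilled"
# On a live cluster the same field is read with, e.g.:
#   kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
```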
A
That's great. I don't have any questions about that, but I have one comment, and maybe Dawn can update on it. I found this documentation about CRI tests, which is in some folder in the cri-tools repository, and I'm not sure what it is; I don't know the history. Maybe you can shed some light on the history. The reason I remembered it is because this end-to-end test adds validation of a specific condition that the CRI needs to report.
A
But no, no, yeah, I think this test is great; I don't have questions about this test. I'm wondering about the CRI tests that we had.
E
That's not about CRI-O. Many people asked how we were going to test this, because with Docker, one of the reasons people wanted to keep using Docker was not just Docker as the container runtime, but also its tools, and also testing how easily we can create the tests. And back then, like right now, we only had the two container runtimes.
E
Actually, a team approached me to say they wanted to create a container runtime to replace Docker, and some people maybe still remember rkt, right? So how do we test those kinds of things? Do they really implement our interface? How are we going to build the tools that provide behavior similar to the Docker engine, similar to the Docker CLI behavior? That's why we started the CRI tests, and also put both of those kinds of tests together.
E
So that's kind of what we are doing. But I haven't followed the recent status. You may have noticed even a Docker conformance effort, not today's Docker, but some people put effort into simulating Docker behavior, and then how are we going to test that against Docker? That's a separate effort that I didn't follow, so I forgot its status. So that's kind of the background, the context.
A
So the test validates the reasons that are returned in case of an OOM kill, and I wonder if we need to do the same or something similar in the CRI tests to validate CRIs. Maybe it was something that you mentioned before, but it totally slipped my mind. I only looked at it because I was reviewing the contributions folder for SIG Node for the annual report, and I stumbled across this file; I pasted it here in the comments.
E
What Mikhail is doing is quite different from this. That one tries to test the container runtime behavior: if there's an OOM, how it handles the memory, how it detects the CPUs, all those kinds of things. So that's a somewhat separate thing. What Mikhail is doing is: after the container runtime has reported the error, how Kubernetes picks up those signals and surfaces those issues to the cluster-level components.
D
Yeah, you're both right. We don't currently have an integration test that does a kill based on OOM. You could create a container, give it a shot, and then check the response, so that wouldn't be too hard a test case to add, and I think it would be a good one to add directly to critest. We run the CRI tests a lot more often and more frequently than we do the node tests, so it's probably a good idea to add it.
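A hedged sketch of what such a critest case might do (this is not actual critest code; `pod.json` and `ctr.json` are placeholder CRI config files, and the container is assumed to allocate more memory than its limit):

```shell
# Rough flow of the suggested test, shown as manual crictl steps:
POD_ID=$(crictl runp pod.json)                     # create the pod sandbox
CTR_ID=$(crictl create "$POD_ID" ctr.json pod.json)
crictl start "$CTR_ID"                             # workload exceeds its memory limit
# after the kernel OOM-kills the container, the runtime should report:
crictl inspect "$CTR_ID" | grep -i reason          # expect OOMKilled
```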
A
Okay, yeah. And Dawn, this is an error code on the CRI specifically; that's why I mentioned the CRI tests. I know that Mikhail's whole KEP is much wider than that; this is just one step.
G
Okay, thanks, thanks for this.
A
Thank you, yeah. And I can create an issue to maybe have a test there and see if anybody wants to pick it up. Thank you.
D
Good first-timer kind of issue, if you want to open it up; a good introduction to critest. Thank you.
A
You're up next. Yeah, I think it's about class-based resources, right? No?
C
So basically the objective there is, yeah, the 3675 resource management refactor. We are currently going through the latest state of the KEP definition, as we went through an iteration with the reviewers just shortly before the 1.27 KEP freeze, and we shared that with the community working group just to get first feedback.
C
We identified several areas where we think we can refactor what's written in the KEP, around the user models and some wording around the non-goals and so on. So we identified some things which we can remove from our objectives.
C
So,
the
plan
is
for
the
next
meeting
in
two
weeks
from
now.
We
do
also
a
demo
based
on
the
latest
kind
of
prototype.
What
we
have
for
attribute
based
configuration
and
discuss
some
of
the
flaws
which
were
identified
as
currently
needing
some
more
documentation
in
the
cap,
so
we
will
be
going
through
them
with
practical
examples
and
see
how
we
can
improve
the
documentation
and
see
if
the
product
Fortress
fitting
the
needs
so
yeah.
C
The next meeting is March 7th, same time.
C
Yes, we had a list of open areas in the KEP, so we know what the open areas are. One is kubelet restart: as we are introducing a new manager which will be handling resource management through drivers, what happens on a kubelet restart? We will cover that in the demo as well.
C
Next, the other topic is the API: how resource requests will be passed by the users, whether we are going to use CRDs for that, and how this will interact with the scheduling component. So we will cover that topic too; we are looking into that.
C
And in terms of the flows, there were opens on bootstrapping, so we kept that, and on what happens in case of error: if drivers are failing, how will the pod deployments react to that? So we will try to cover all of that through the demos and give some view, some ideas of how this can be solved.
C
Yeah, we can check with Mrunal if we can find a slot a little bit later, perhaps. I don't know if other attendees will be available, but we can try to see if we can shift further down.
A
Yeah, and I mentioned the open issues and tracking those open issues to make sure that we get approval; I mean, try to get early approvals on resolving those open issues from the people who will be approving the final KEP.
C
That's the objective, so we basically have Kevin and Swati helping us on that, basically giving feedback on exactly those opens, and we will try to cover them.
A
So we just discussed this morning in the sidecar working group that we will be doing a lot of changes in spots similar to this in-place vertical scaling feature, because it's also related to lifecycle. It's not hugely intersectional, but it will likely touch the same files. So I wonder what the status is here, and if nobody who knows is here, we don't have anybody to ask.
A
Okay, thank you for the update. Yeah, I would hate it if we do something in the sidecar working group and then need to rebase all over again. Yeah, it will be interesting; 1.27 promises to be a big release. I hope it will be successful.
A
Okay, let's make it successful, all those changes. Okay, thank you, everybody; if there are no other topics, let's conclude this meeting. Bye.