From YouTube: Kubernetes SIG Node 20230613
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
A
Hi folks, welcome to the June 13, 2023 SIG Node weekly meeting. Let's get started. Sergey, did you want to go over the enhancements tracking board first?
B
Yeah, I pasted the link. The goal, if you want to get into 1.28, is to get into the tracked status. I think we already have three in this status, and I believe we have a few more that were already merged but whose status wasn't updated yet. It should be good there, but if you're working on something and you want to get into 1.28, please watch the board and watch the issue notifications.
A
Yeah, one thing I noted is that a few KEPs are missing PRR approval, and that's not something that we do, so we need to work with the PRR approvers on that one and make sure their comments are addressed.
A
Great, thanks again. Okay, so the next thing on the agenda, from Rob Scott, is a new kubelet API to expose pod readiness. Rob, do you want to take it?
C
Just to be clear, this is something that we're not intending to get into this cycle. I know we're awfully close to the enhancements freeze, but I want to be clear: this is not intended for that. We just wanted to start the discussion sooner rather than later. I'm on the GKE networking team, and Katarzyna, on the networking team with me, has been doing most of our work around this, trying to improve, basically, health checks. Maybe, Katarzyna, you can provide more information on what we're hoping to achieve here.
D
Yeah, sure, thanks Rob. Hi all. So, like Rob mentioned, we are working on a health-check controller, and we faced the issue that to fetch detailed information about pods we always need to ask the kube API. I was wondering why there is no API in the kubelet from which we can fetch information about local pods, and actually I would like to start working on a KEP to propose such an API, so we can fetch the information about the local pods.
D
We are mostly interested in status, for example pod readiness. This is the sum of all the conditions that need to pass for a pod, and for our workload it's important to understand each of these conditions, not only the result, like the pod's Ready status. It would be great if we could expose this as a gRPC API where we can fetch the list of all pods, or fetch pods by UID or something.
E
I guess one of my concerns would be that the API server, the kube API server, is the authoritative source on statuses, so why would we not look there for statuses or readiness of pods?
C
Yeah, I think the key thing that we're trying to do here is, you know, the thing that's actually running the health checks is the kubelet on the node, and having a dependency on the API server and that connection to understand health status feels like not a very efficient or reliable thing when you have all the information locally. We would just love a way to access that locally, without having to have that additional connection working and also be healthy.
D
So we were looking, because for pods there is only the Pod Resources API. We thought about extending it with some functions, like a Get for statuses, but we could also create a new one, like pod info, and in the future it would be neat that we can add some other stuff there. But to start at the beginning: first, about extending the Pod Resources API.
B
Now, another question would be: since we're doing evented PLEG, which improved performance, and this API may also need to be very performant, maybe you need to think about a streaming API rather than a poll-based version of it.
D
So for us, we would like to get pods' health statuses, and for our component it is crucial to know each condition, not only the whole Ready status computed by the kubelet but all conditions. The reason is that a user can also add some conditions, for example in a NEG (I'm sorry, in readiness gates), and they are added as a condition. So our controller would like to know exactly if a condition is from a probe, or if it is ContainersReady, or some other condition.
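As a rough illustration of the per-condition view described above, here is a minimal Go sketch. The `PodCondition` struct, the helper names, and the `example.com/lb-neg-ready` gate type are illustrative stand-ins, not the real client-go types or the API being proposed:

```go
package main

import (
	"fmt"
	"strings"
)

// PodCondition loosely mirrors the shape of a Kubernetes pod condition.
type PodCondition struct {
	Type   string
	Status string // "True" or "False"
}

// Built-in condition types computed by the kubelet itself.
var builtin = map[string]bool{
	"PodScheduled":    true,
	"Initialized":     true,
	"ContainersReady": true,
	"Ready":           true,
}

// readyStatus returns the aggregate readiness: every condition must be "True".
func readyStatus(conds []PodCondition) bool {
	for _, c := range conds {
		if c.Status != "True" {
			return false
		}
	}
	return true
}

// failingGates lists user-added (readiness-gate style) conditions that are
// not "True": the per-condition detail a controller would want to see.
func failingGates(conds []PodCondition) []string {
	var out []string
	for _, c := range conds {
		if !builtin[c.Type] && c.Status != "True" {
			out = append(out, c.Type)
		}
	}
	return out
}

func main() {
	conds := []PodCondition{
		{Type: "PodScheduled", Status: "True"},
		{Type: "Initialized", Status: "True"},
		{Type: "ContainersReady", Status: "True"},
		// A custom condition added via a pod readiness gate, e.g. by a
		// load-balancer controller (the exact type name is hypothetical).
		{Type: "example.com/lb-neg-ready", Status: "False"},
	}
	fmt.Println("ready:", readyStatus(conds))
	fmt.Println("failing gates:", strings.Join(failingGates(conds), ","))
}
```

The point of the sketch is the distinction being discussed: the aggregate Ready answer versus knowing which individual condition (built-in or gate) is holding readiness back.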
H
To me it's kind of like you're trying to reverse-engineer things for certain daemons running on the node, or certain pods running, I don't know, that you're trying to work around things. At least from your last response, that's kind of dangerous.
H
This is why, in Kubernetes, one of the things we try to do, at least that I try to do based on the experience with the internal work: people kind of pollute the node agent all the time and bypass the real master, I mean, the control plane. This is what we were doing in Borg, and I have tried to prevent that for the last many years and tried to achieve a balance of sorts. But do think about that.
H
We could expose it, because we did before, but it will never get official support. Getting an additional condition that the Kubernetes API doesn't support, but that some random workload now tries to impose, and then we have to decode those kinds of things, that's really dangerous. More so for the open source community, I mean, for everybody. If, for their own offering, they want to do some hacky things, that's their job, but I feel like, for open source Kubernetes, I want to make sure it stays healthy.
D
Well, I understand, but we are proposing read-only, of course, for the API, and the readiness probes are already standard, right? They just show up as a condition. The reason behind this project is container-native load balancing, which is bypassing, like you say, the central authority, because previously the health check was going to the nodes and taking the node health, and with container-native...
D
If you want to be, let's say, faster or more independent. With this project we would like to show the real state of the pod, which is that the pod is actually running, even if the kube API, or even if the control plane, has some issues, a connectivity issue or something. It might happen only for a very short time, but it still might happen. And these readiness probes, of course, for us there are some things there that are not desired.
C
I think what I'm hearing here is that we should revisit this and come back to it a little bit later. I appreciate the feedback. You know, I think what we're hoping to achieve here is something that could be broadly beneficial, but I'm hearing some very good and well-thought-out concerns. So we'll come back with some of these addressed, and maybe we'll share a doc or something with the community.
H
I think, if you can put up a doc with, well, the use cases and how you propose it. I think we can hide those kinds of things and expose a pod status API, because actually people depend on it. We do know people are using our unofficial pod status, which was never really officially supported. That's why, because we want people to be cautious, I'm open to a read-only one, but really, don't hard-code against the system.
A
All right, thanks. We can move on to the next topic. Vinay, you have a couple of requests for reviews, I guess?
J
Yeah, hi, good morning everyone. These are a couple of small housekeeping PRs. One of them came from the KEP enhancements lead, to change the current milestone from 1.27 to 1.28 and keep it in alpha. The other one is that we made some late-stage API changes (changed the name of the API just before merge, or after code freeze, I believe), but that hasn't caught up in the KEP, and people looking at it might be confused.
J
So I was just wondering: Tim has already looked at it and LGTM'd it, so another pair of eyes and then a merge.
A
So the next one is Kevin. Kevin is looking for an approval. Is he out today? I haven't seen this one yet. So this is for promoting improved multi-NUMA alignment in the Topology Manager to beta. Yes.
A
Right, thanks. Swati, you have a follow-on on the discussion regarding the node resources topology API, yeah?
K
So we had a discussion, I think two or three weeks ago, about this, and we kind of got an informal approval. I was just hoping that we can get a formal approval as well and get the repo creation process, you know, over the line, essentially. So if the tech leads, I believe Derek, Dawn, Ronaldo, you can take a look, I think that will help me. Thanks.
A
Yep, sounds good.
A
All right, that takes us to the last one. You're looking for a PRR review?
L
This is the recursive read-only mounts one, and we already have an approval from Tim, but we also need approval from other people, especially the PRR approver, I think, first. Okay, yeah.
A
Yeah, so I'm reviewing this one, and I think there's one interesting follow-on. I know Sergey raised it last week, right. So one thing that's interesting here, that's being proposed, is how do we discover if a new feature, like something that's...
L
There
is
the
proposal
or
the
tri
is
extended
to
reproach
new
structure
called
runtime
on
runtime
class
info
to
to
direct,
and
so
each
of
the
right
type
class
has
a
feature
available.
Availability
just
regards
the
real.
Only
so
the
pubert
can
inspect
that
and
raise
our
error.
If
the
feature
is
not
approached.
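The inspection-and-reject step described here might look roughly like the following Go sketch. The struct and field names (`RuntimeHandlerFeatures`, `RecursiveReadOnlyMounts`) are assumptions for illustration, not the actual CRI definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// RuntimeHandlerFeatures sketches the per-handler feature report a CRI
// runtime could return; the field names here are hypothetical.
type RuntimeHandlerFeatures struct {
	RecursiveReadOnlyMounts bool
}

type RuntimeHandler struct {
	Name     string
	Features RuntimeHandlerFeatures
}

// admitRecursiveReadOnly mimics the admission-time check discussed in the
// meeting: reject the pod early if its runtime handler lacks the feature.
func admitRecursiveReadOnly(handlers []RuntimeHandler, handler string) error {
	for _, h := range handlers {
		if h.Name == handler {
			if h.Features.RecursiveReadOnlyMounts {
				return nil
			}
			return fmt.Errorf("runtime handler %q does not support recursive read-only mounts", handler)
		}
	}
	return errors.New("unknown runtime handler: " + handler)
}

func main() {
	handlers := []RuntimeHandler{
		{Name: "modern", Features: RuntimeHandlerFeatures{RecursiveReadOnlyMounts: true}},
		{Name: "legacy", Features: RuntimeHandlerFeatures{RecursiveReadOnlyMounts: false}},
	}
	fmt.Println(admitRecursiveReadOnly(handlers, "modern")) // admitted
	fmt.Println(admitRecursiveReadOnly(handlers, "legacy")) // rejected with an error
}
```

Failing at admission time this way, rather than discovering the missing feature when the runtime errors later, is exactly the advantage raised in the discussion below.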
A
So if there are thoughts, happy to hear them, but otherwise I'm feeling okay, I'm feeling all right. I mean, this may be the right way to go. The only other alternative may be to return in status exactly what the container runtime is doing, but that still doesn't give the kubelet the ability to fail at admission time.
H
So we reviewed this in the past, but I'm not fresh on it, so I need to refresh my memory: what was the previous concept? I remember I reviewed the original proposal a couple of years ago, so I did take a look, but I have to honestly say I don't have time this cycle for this one. So if you guys feel comfortable, move forward; if you don't feel comfortable and you want me to take another look, just this week I don't have time.
A
We can try, we still have, like, a couple more days. So if...
A
I can bring one topic, particularly with Mike, speaking about the pull progress, or pulling an image with progress. It was suggested that we add the timeout also to the old call. So I was thinking not to touch the old call at all, but if we're feeling like wizards, we might as well just combine them and extend it fully, without having separate API calls for the image pull with progress: just extend the actual image pull request with optional parameters, whether to have the progress reported and whether to have the timeout.
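A minimal sketch of the "optional parameters" idea, with hypothetical field names rather than the real CRI message:

```go
package main

import (
	"fmt"
	"time"
)

// PullImageRequest sketches the idea floated here: instead of a separate
// "pull with progress" RPC, the existing request grows optional fields.
// These field names are invented for illustration only.
type PullImageRequest struct {
	Image             string
	ReportProgress    bool          // opt in to progress reporting
	NoProgressTimeout time.Duration // 0 means "use the default behavior"
}

// describe shows how a runtime might interpret the optional fields: an old
// caller that sets neither field gets exactly the legacy behavior.
func describe(req PullImageRequest) string {
	if !req.ReportProgress && req.NoProgressTimeout == 0 {
		return "legacy pull: no progress reporting, default timeout"
	}
	return fmt.Sprintf("pull %s: progress=%v, no-progress timeout=%s",
		req.Image, req.ReportProgress, req.NoProgressTimeout)
}

func main() {
	old := PullImageRequest{Image: "registry.example.com/app:v1"}
	fmt.Println(describe(old))

	extended := PullImageRequest{
		Image:             "registry.example.com/app:v1",
		ReportProgress:    true,
		NoProgressTimeout: 30 * time.Second,
	}
	fmt.Println(describe(extended))
}
```

The design point is backward compatibility: because both fields default to "off", existing callers of the old request shape are unaffected.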
G
I think going that route would probably be cheaper and give us the ability to backport, you know, to cherry-pick some changes into, like, containerd 1.6 if we need to. It's certainly worth thinking about. The other comment I had, the real reason I wanted to just move it there, is that just having a pull-progress timeout sort of fixes most of the issues that people are, you know, running into today.
G
They
don't
they
don't,
have
they're,
not
always
running
on
client
tools
that
are
going
to
be
looking
at.
How
much
is
coming
down
it's
more
of
a
oh
I.
I
know
I've
got
this
pod
and
it
needs
a
20
gigabyte
image,
but
I
don't
know
if
it
stopped
or
not.
I,
don't
know
if
it's
making
any
progress
and
if
you
have
that
time
out,
it'll
that'll
cover
most
of
those
scenarios.
G
And the complexity of that is: what does it really mean to say this much has been pulled for this one particular image request? If there's a common shared blob across two image requests, which one do you attribute that pull to? The first? The second one's waiting on the first one, right, so it's not going to time out or anything like that, because progress is being made on the first request, right? You see, there are some corner issues there, and we don't always know what...
G
But
it
depends
on
the
registry
and
the
contents
they've
stored
there
on
whether
or
not
we
can
identify
in
the
header
how
much
total
is
going
to
be
pulled.
We
don't
always
know
it.
Just
it's
just
going
to
take
a
while
to
come
down
we'll
know
after
at
the
end,
but
we're
not
going
to
always
know
you
know
the
size
of
the
of
the
blob
we're
pulling
down
when
it's
large
together.
A
One more thing I want to ask. Okay, did we explicitly address that we won't respect the image pull timeout, and will allow, with this approach, the pull to progress as long as, you know, it's not timing out per check period?
I
Isn't that a different thing? We're talking about the no-progress timeout. That's when transferring an image has stalled completely and there are zero bytes for, like, 10 or 30...
A
Seconds,
yes,
yes
exactly
so,
if,
if
that
is
enabled,
then
we
shouldn't
explicitly
fail
when
we
reach
the
two
minute
or
whatever
the
CRA
timeout
is
it
can
go
beyond
the
two
minutes
as.
G
Long
as
we
are
making
things
about
the
other
time
out,
yes,
so
until
like
see,
we
can
talk
about
that.
Basically,
if
you
make
an
API
call
and
there's
been
no
response,
we
need
to
make
sure
that
that
pull
request
is
is
sync
and
waiting
for
a
response.
No
time
out
there
I
think
it
were
I,
think
that
does
work
today.
For
now
it
depends
on
which
version,
but.
I
And actually, now that I've spent ten seconds thinking about it, I'm not just suggesting something: we won't be able to merge the two calls together, because the first one, the original call, is not streaming; it just returns the result back. If we want to get the reports back, we need a streaming API, so that has to be a separate call. But yeah, maybe we can move the timeout inside the original call and reuse the same structure. Okay.
A
All right, thanks. Anything else, folks?