From YouTube: Kubernetes SIG Node 20230103
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
A
Hello, hello. It's January 3rd, 2023 (I was corrected once already today that it's 2023). This is the SIG Node weekly meeting; welcome, everybody. Happy New Year, and happy whatever holidays you celebrated. I want to kick off the meeting by stating the number of pull requests currently active: it's 205 on SIG Node. Over the weekend, over the holidays, we had not too many created, like 45, and a few closed and merged. Among the closed PRs I noticed a few aging test-improvement PRs that were stuck on needs-rebase and whose author never came back. So if you're interested in some initial ideas on what to improve in tests, look at those closed PRs; maybe you can fish out something interesting that you can pick up and continue with.

The next item is the CRI API. Okay, so there is... I hope... yes, yep. Okay, thank you. So yeah, this is a PR that I submitted sometime last year. I created a document trying to explain how versioning of the gRPC protocols (proto files, APIs) will be implemented. But this led me down some strange paths, because you never know what compatibility is supposed to exist between different versions. That's why I wanted to concentrate specifically on the CRI API and what compatibility guarantees we had. The discussion was also prompted by the contributor summit, where we discussed the future of the CRI API and what's coming next. Plus, I don't remember if this happened before, but we had incompatibilities: one of the versions of containerd that's currently active is 1.5, and, like I said, 1.26, which was just released, cannot work with it because we switched the CRI API version. This all led me to creating this explanation of how the CRI API may be versioned and what kind of compatibility we expect. I would encourage everybody to read through it and give feedback on whether this is what we want to implement, and also whether it will be okay for the velocity of feature development.

Today we have a very interesting situation with the CRI API in Kubernetes: runtimes may not want to take a dependency on an intermediate version of the CRI API, because they want a stable version, and we cannot ship a stable version before Kubernetes reaches a stable version. That's why we have this interesting release schedule where we need to release Kubernetes first, then release the runtime version supporting the new CRI API changes, and only then can we connect them both to work together. We have examples of that with In-Place Pod Vertical Scaling, where we need to update containerd to use the beta version of the CRI API. This is what I documented here, but it also implies that a beta version of the CRI API needs to be stable, and we need to be very careful with that. So yeah, if you have interest in this topic, please read through and leave comments, and we can discuss it in the PR. If you have comments right now, please speak up.
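The incompatibility described here (a kubelet that dropped an old CRI version facing a runtime that only speaks that old version) comes down to whether the two sides share at least one API version. A minimal sketch of that idea; the version lists are illustrative and this is not the actual kubelet negotiation code:

```go
package main

import "fmt"

// negotiate picks the newest CRI API version both sides support.
// kubeletVersions is assumed to be ordered newest-first.
func negotiate(kubeletVersions, runtimeVersions []string) (string, bool) {
	supported := make(map[string]bool)
	for _, v := range runtimeVersions {
		supported[v] = true
	}
	for _, v := range kubeletVersions {
		if supported[v] {
			return v, true
		}
	}
	return "", false
}

func main() {
	// A kubelet that still speaks v1alpha2 can fall back for an old runtime.
	v, ok := negotiate([]string{"v1", "v1alpha2"}, []string{"v1alpha2"})
	fmt.Println(v, ok)

	// A kubelet that dropped v1alpha2 (as 1.26 did) has no common version
	// with a runtime that only speaks it, which is the breakage above.
	v, ok = negotiate([]string{"v1"}, []string{"v1alpha2"})
	fmt.Println(v, ok)
}
```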
B
I'm trying to think why something seems off about this to me, but my brain isn't yet catching up to my emotion. So is this something you want to close on now?
A
I know that there may be some features in the pipeline, and again, I'm also coming back from long holidays, so I may not remember all the KEPs currently in the pipeline that will need changes. But what I want to lock down is that our velocity with updates to the CRI API is what's documented here. You cannot run faster than what's documented here.
A
Yes, I think... yeah, yeah. This is the recommended feature development flow, and once we agree that this is the flow, it will be easier to comment on. I remember SIG Network really didn't like the situation where they needed three releases to ship a feature. They just needed one extra field to be populated, but then they had to go through a progression of many releases: first have the field added, then have the container runtime take a dependency on the field, and then finally combine the two together. What I outlined here is the fastest feature development flow; it looks like that. So this is what I outlined here. Yeah.
B
I was trying to think through whether there's any intersection there that we could call out. I know that in-place resource resizing, for example, went through the "well, we need CRI changes before we can have the alpha feature gate" situation. And I was trying to think through what we would do going forward as we progress that to beta: must the CRI change be GA, or can it be beta, or can the feature gate not go GA until the change reaches a certain maturity in the CRI?
B
I've been out for a few weeks and my brain is tired, but that's the one thing I think would be useful for us to have, if it's not there.
A
Yeah, and another goal I had here is declaring the version skew policy. Once we have a version skew policy (what CRI versions Kubernetes needs to support), we may need to improve skew testing. On the testing side there are some ideas about what can be improved. One idea: we just need to make sure that Kubernetes can function with every version of CRI that is supported. So if we support three versions back, then we support three versions back and test it somehow. From a testing perspective, you could run some tests through a CRI proxy that strips out unknown fields and does other crazy things, like I said.
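The proxy idea mentioned here could be sketched roughly like this: sit between the kubelet and the runtime under test and drop any field the older CRI version doesn't define. This only illustrates the concept; real CRI messages are protobufs, and a generic map stands in for them here:

```go
package main

import "fmt"

// stripUnknown simulates an older CRI endpoint by dropping fields the older
// version does not define. Field names below are stand-ins, not real CRI fields.
func stripUnknown(msg map[string]any, knownFields map[string]bool) map[string]any {
	out := make(map[string]any)
	for k, v := range msg {
		if knownFields[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	// "pinned" stands in for a field added in a newer CRI version.
	newer := map[string]any{"image": "pause:3.9", "pinned": true}
	older := stripUnknown(newer, map[string]bool{"image": true})
	fmt.Println(older) // the newer-only field is gone
}
```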
D
I have some thoughts. I will try to formalize them in a comment, but I would like to share them right now. Practically, we can split CRI changes into two big buckets. One is information sharing from the kubelet down to the runtime, or to the user; examples are the summary API or the pod overhead API that was added for Kata Containers. Those usually don't really require any kind of skew handling. But there are some changes which require a request and reply from the runtime, and if we start to bump the version for each of those, we might very soon end up with very big version numbers, and I'm not sure that's really where we want to go. We could try an approach like the one used in the device plugin APIs, where capabilities were exposed: some things are reported as supported or not supported. While a feature is in alpha or beta, it probably can be reported via additional capabilities.
A
I generally agree with that; my only point is that it shouldn't break the kubelet. So if the container runtime is a little bit older, but not that much older (something we are supposed to work with), the kubelet still needs to not crash and to function reasonably well. Maybe capabilities are one way to implement that. I just wanted to make sure that we agree on the skew that we need to support.
D
Yeah, my point was that if the kubelet, in the initial negotiation phase between the runtime and the kubelet, detects that some capability doesn't exist (for example, in-place vertical scaling), it will just say "sorry, I don't support this feature" and do nothing.
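The negotiation being described might look something like the sketch below: the kubelet checks the runtime's reported capabilities once, and a missing feature degrades to a "not supported" error instead of a crash. The capability struct and field name are hypothetical; CRI has no such message today:

```go
package main

import (
	"errors"
	"fmt"
)

// RuntimeCapabilities is a hypothetical set of flags a runtime could report
// during an initial handshake; it is not part of the current CRI API.
type RuntimeCapabilities struct {
	InPlacePodResize bool
}

var errUnsupported = errors.New("runtime does not support in-place pod resize")

// resizePod does nothing (and does not crash the kubelet) when the
// capability is missing, per the behavior discussed above.
func resizePod(caps RuntimeCapabilities) error {
	if !caps.InPlacePodResize {
		return errUnsupported
	}
	// ...here the kubelet would issue the actual CRI resize call...
	return nil
}

func main() {
	fmt.Println(resizePod(RuntimeCapabilities{InPlacePodResize: false}))
	fmt.Println(resizePod(RuntimeCapabilities{InPlacePodResize: true}))
}
```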
A
Yeah, one thing that I discovered: we added this pinned flag to images that we downloaded, and CRI-O, I think, implemented this pinned flag for the pause image (it's mostly for sandbox images), but containerd has that PR stuck. So today it's effectively another capability: when you query the list of images, containerd will return the list, but it will not set this boolean flag on the pause image. So the kubelet will attempt to clean up, that is, garbage collect, the pause image when it uses containerd. This could also be implemented as a capability, but I think the better way is to just not crash the kubelet: okay, we cleaned up the pause image, but then you just download it again. So I'm not sure we need a capability for every single change that we make in containerd; that's what I'm trying to say.
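The garbage-collection problem above can be sketched like this: the kubelet skips any image the runtime marked as pinned, so a runtime that never sets the flag (as described for containerd here) leaves even its pause image eligible for collection. A simplified model, not the real kubelet image manager:

```go
package main

import "fmt"

// Image models the relevant part of the CRI image listing: a name plus the
// pinned boolean discussed above.
type Image struct {
	Name   string
	Pinned bool
}

// collectable returns the images the kubelet would consider for garbage
// collection: everything the runtime did not mark as pinned.
func collectable(images []Image) []string {
	var out []string
	for _, img := range images {
		if img.Pinned {
			continue // e.g. the sandbox (pause) image
		}
		out = append(out, img.Name)
	}
	return out
}

func main() {
	images := []Image{
		{Name: "pause:3.9", Pinned: true}, // a runtime that sets the flag
		{Name: "nginx:1.23"},
	}
	fmt.Println(collectable(images)) // only nginx is eligible

	// A runtime that never sets Pinned exposes even the pause image to GC.
	images[0].Pinned = false
	fmt.Println(collectable(images))
}
```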
A
Yeah, if you have more feedback, please comment. And Sasha, thank you for the slides and the presentation you made at the contributor summit; it was very, very interesting.
A
The next item I have is striking out the "update SIG Node teams" item. I didn't realize we have GitHub teams that also need to be updated to correspond to the OWNERS files. We've now cleaned them up and the PR was merged, so all the teams are up to date. Nicole?
E
Happy New Year, guys. Okay, so yeah, it's an interesting discussion on the CRI API. I was listening in, and I think there's some thought to be given there. I'm trying to think about what in-place pod resize would do in case we have a containerd that doesn't support it. Previously, I think, at least for the client side, we had this notion of supporting n minus 2.
E
So if you're at version 1.26, you need to support clients that are two versions below. We got around that requirement for our own GA by pushing the GA target out by n minus two plus one. So essentially, if we go alpha in 1.27, GA will be 1.30 onwards. I don't know if that's helpful, but that strategy gives you enough runway, and it avoids a lot of complexity in the code from having to support down-level versions. I don't know if that can be an employable strategy for CRI.
E
Yes, if containerd has more features than the kubelet, then the kubelet is never going to invoke them; yeah, that point stands. And from containerd's perspective, it can support a kubelet two levels down. So it's one more condition to add to our pieces of the puzzle, yeah, working backwards.
D
We need to take into account the release schedule of containerd. Right now we have two branches: 1.6, which is stable and what production uses, but that means not many new features will be added there; and 1.7, which has a lot of new features. Yes, and a lot less sleeping.
E
Okay, so switching to 1.7 will essentially involve major testing for two versions below. I can kind of see that happening, but yeah, this has got me thinking a little bit. I'll give it more thought, at least from the in-place pod resize perspective: what would happen previously? I vaguely remember discussing that if the containerd runtime doesn't support it, then we fail the kubelet. I believe that was one of the resolutions we reached.
E
Yeah, in this case we'll bubble the error back through the status field, right? I hope it will sit as "in progress," and then, if it fails, we need to see. This is something I need to think about for in-place resize anyway. Getting to the topic of getting the PR merged for alpha: over the past month I spent a little bit of time rebasing on a periodic basis, and, thank you, David Porter, for helping get the test up and running. We now have a full e2e test with the containerd change that we merged in the test-infra repo, and PR 102884 is passing the test. I have kicked off the test on rebases multiple times, and it has a history of successful runs that you can see in the link there. There was also a problem where, while we were developing that test, some other unrelated PRs picked it up, because our filters were pretty broad: we were picking up changes in the kubelet and container runtime folders, and that's affecting other PRs a little bit. So I'm hoping that we will be able to merge the first PR soon, which is the API changes only, 111946, and I'm wondering if we can do it this week. I don't know where we stand from the release perspective.
B
Yeah, I think that makes total sense. You'll just be gated by someone who can approve in that directory, which, you know, Tim, if he's...
B
...people do. Yeah, I don't think there's a timeline problem. The release team sent out a note this morning on timelines for the upcoming release, and yeah, we have plenty of runway.
E
Yeah, I'm pretty confident, like 80% confident, that we won't create much churn here, you know; not by merging the main PR, but merging the API PR and just watching it for a week or so to see that nothing bad happens is a fairly safe strategy from my standpoint. And it makes it easier to merge the second PR as well, because a lot of the generated-files churn is taken out of the picture.
B
Yeah, so let me comment on the PR that this looks good to me, and then Tim can hopefully approve.
E
Yeah, yeah, perfect. Thank you very much. So I'll thank Tim separately after you comment on it, and then we'll see if we can get it merged this week, even though the enhancement itself is not yet tracked.
C
I think that deadline is... if so, I think, but it's...
E
It'll get tracked, so I hope that doesn't stop us from merging before the enhancement gets tracked. If we do it now, it's better, because this PR has been there for a while. It's big, at least standing alone, but it has been baked quite well over many rebases, and so far I didn't have to make major code changes on rebases, especially on the API side. There are some comments...
E
The comments that are there are still sitting from November 2021, from the initial review, and the latest commit was from June or July 2022, after which Tim looked at it and LGTM'd it. The changes haven't moved at all due to rebases, so it's pretty safe; it's a low-risk change at this point. Merging it will help reduce the churn and let others pick up on the new changes that are coming.
E
Okay, thanks, Jack. That's pretty much all I had for this point, but we do need to add the periodic CI jobs; that can come in after the feature gate is merged, which is the API PR.
A
When a pod is being deleted twice, first with a big grace period and second with a very short grace period, the kubelet changed its behavior in 1.21: from killing the pod immediately the second time, to killing it only after waiting for the first grace period to time out. I think the request here is to look at this PR. I'm not sure anybody has looked at it, because, well, it was opened November 14th. Yeah, I think it needs some feedback.
A
Okay, so yeah, please take a look if you have any time for that. I think that's all the requests here. So again, the bug is that the second time we delete a pod, it will still wait for the first grace period to time out. I think this behavior is not ideal.
C
So I think the change was that if you start with a higher timeout, and then you come back later and reduce the timeout, the new timeout should be respected, because I remember Clayton introduced that and we had to make changes in the runtime. So maybe there's some bug there which is breaking that behavior. The behavior described makes sense, but yeah, we can take a look and see what the requests are.
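The intended behavior as described (a second deletion with a shorter grace period should win) reduces to honoring the minimum of the two periods; the reported bug is that the kubelet keeps waiting on the first, longer one. A tiny sketch of the intended rule, not the kubelet's actual code path:

```go
package main

import "fmt"

// effectiveGracePeriod returns the grace period the kubelet should honor
// after a pod is deleted a second time: a shorter period overrides a longer
// one already in progress, and a longer one does not extend the original.
func effectiveGracePeriod(firstSeconds, secondSeconds int64) int64 {
	if secondSeconds < firstSeconds {
		return secondSeconds
	}
	return firstSeconds
}

func main() {
	// First deletion with 300s, second with 5s: the pod should die after ~5s.
	fmt.Println(effectiveGracePeriod(300, 5)) // 5
	// A second deletion with a longer period does not extend the first.
	fmt.Println(effectiveGracePeriod(5, 300)) // 5
}
```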
A
Okay, are there any more topics on the agenda today? Going once, going twice. Thank you, everybody. Happy New Year again; welcome to the new year, and I hope it will be better than the previous one. Let's get half an hour back. Bye-bye. You too, bye.