From YouTube: Kubernetes SIG Node CI 20230426
Description
SIG Node CI weekly meeting. Agenda and notes: https://docs.google.com/document/d/1fb-ugvgdSVIkkuJ388_nhp2pBTy_4HEVg5848Xy7n5U/edit#heading=h.2v8vzknys4nk
GMT20230426-170630_Recording_2236x1120.mp4
B
It is April 26, 2023, and this is the SIG Node CI weekly meeting. Welcome, everybody. We have two items on the agenda, and then we will go to triage. We haven't done triage for a long time, so it should be interesting.
C
Yeah, so I noticed this presubmit was failing a couple of weeks ago and did some investigation on my own. I believe I found the root cause. It seems to be an issue with how the SSH keys are propagated, and if you go almost to the end, I explained the full solution in a couple of comments. This might be a little bit out of scope for this meeting, but the potential solution is to upgrade all the dependencies in our program.
C
I was wondering if we have any policies for these. Do we want to upgrade as soon as possible and use the latest versions?
C
What are the best next steps here? Just upgrade everything?
B
Typically, we upgrade as needed; if it's needed, we do upgrade. Okay, so you're saying that there is some algorithm mismatch?
C
Right, so this is an issue I saw in another, totally different scenario, where a particular version of the x/crypto SSH library, when using an RSA key, defaults to SHA-1 as the signing algorithm. That algorithm is deprecated because it has collisions, which have already been proven. So the recommendation is, if you're using RSA, you need to use a newer algorithm. I checked the version, and I think it's mentioned there: v0.6.0 uses the other algorithm by default instead of SHA-1.
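As an aside, here is a minimal sketch of what forcing the newer algorithm looks like on older golang.org/x/crypto/ssh versions; the key path, user, and host are hypothetical. It wraps the RSA signer so plain Sign calls use rsa-sha2-256 instead of the legacy ssh-rsa (SHA-1) default:

```go
package main

import (
	"io"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// sha2Signer wraps an ssh.AlgorithmSigner so that plain Sign calls use
// rsa-sha2-256 instead of the legacy ssh-rsa (SHA-1) default.
type sha2Signer struct{ ssh.AlgorithmSigner }

func (s sha2Signer) Sign(rand io.Reader, data []byte) (*ssh.Signature, error) {
	return s.SignWithAlgorithm(rand, data, "rsa-sha2-256")
}

func main() {
	pemBytes, err := os.ReadFile("/path/to/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		log.Fatal(err)
	}
	algSigner, ok := signer.(ssh.AlgorithmSigner)
	if !ok {
		log.Fatal("key type does not support algorithm selection")
	}
	cfg := &ssh.ClientConfig{
		User:            "tester", // hypothetical
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(sha2Signer{algSigner})},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; never in production
	}
	client, err := ssh.Dial("tcp", "node.example:22", cfg) // hypothetical host
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("connected with rsa-sha2-256 signatures")
}
```

Per the discussion above, on v0.6.0 and later this wrapper should be unnecessary, since the library prefers the SHA-2 variants by default.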
C
I'm not 100% sure; the way this works, I think, is that this is only for the test logic, because of how we SSH to the nodes. For contrast, in other jobs we execute the commands directly via os/exec.
C
So we don't really use a Go SSH library there, and in that way we rely on the OS where we're deploying the tests to use the correct SSH keys and algorithms. When we're using a library, in this case a Go library, we're using whatever the library implements, and in this case it's using another algorithm.
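For contrast, a sketch of the os/exec path described here, with the host, user, and key path hypothetical: shelling out to the system ssh binary leaves key and algorithm negotiation to the host's OpenSSH client rather than to the Go library.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Shelling out defers SSH algorithm negotiation to the host's OpenSSH
	// client; the in-process x/crypto/ssh path decides it in the library.
	out, err := exec.Command("ssh",
		"-i", "/path/to/id_rsa", // hypothetical key path
		"tester@node.example",   // hypothetical user and host
		"uptime",
	).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	fmt.Printf("%s", out)
}
```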
C
So in the end, I don't think this is going to be an issue, particularly because, one, it only affects tests, and two, it's only targeting creating new nodes and running stuff on those nodes via SSH. So I think the upgrade is safe. I would be worried about breaking other things by upgrading every other component.
C
That's why my main suggestion was just to upgrade the crypto SSH library instead of everything else as well.
B
Okay, yeah, the next agenda item is Dixie's. Dixie, do you want to present? To give some context: we worked with Dixie to understand how we ended up with some gaps in test coverage; for example, we weren't testing previous releases with critical tests. So we decided to put some more structure into that, and Dixie prepared a presentation on how we can structure this. I will stop and listen. Yeah.
A
Okay, so the idea is to have guidance around what tests we want to cover in the SIG Node testgrid. If you look at the containerd tab, there are a lot of tests that don't make sense to be there, for example the in-place pod resize tests; we could probably have those in sig-node-kubelet so that there is clarity around what variable you are testing. I'll go ahead and present the doc.
A
So I would like to propose that we have a naming convention where we specify, say, the CRI first, then the cgroup version and OS, then the test type, and then the release branch. There could be exceptions around not specifying these values if they are set as the default for a particular tab, and if the release branch is master, we could just skip adding the release branch name.
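A small runnable sketch of how such tab names could be rendered; the helper and all example values are hypothetical illustrations, not part of the proposal itself.

```go
package main

import "fmt"

// tabName renders the proposed order: CRI, then cgroup version and OS,
// then test type, then release branch. Per the exceptions above, defaulted
// values and the master branch would simply be omitted from the name.
func tabName(cri, cgroup, osName, testType, branch string) string {
	return fmt.Sprintf("%s-%s-%s-%s-%s", cri, cgroup, osName, testType, branch)
}

func main() {
	// Hypothetical example output: containerd-cgroupv2-cos-node-conformance-1-27
	fmt.Println(tabName("containerd", "cgroupv2", "cos", "node-conformance", "1-27"))
}
```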
A
The different test tabs that SIG Node CI should care about: the first one here is sig-node-release-blocking. This should have all the tests with no unusual variables; the default value for the CRI could be containerd, the default value for the cgroup version could be cgroup v2, and we could test against both OSes, that is, have the configuration against both OSes in one single job so that it creates two instances, one per OS. Then we test against all the release branches from n-3 to master. The different test types that this tab could have are the node conformance tests, the serial tests (skipping the flaky, alpha, and beta tests), and the node features tests.
A
Apart from these three, I see that there are two other tabs that were approved on GitHub, so we can continue to have node conformance tests with the CRI-O runtime on cgroup v1 and cgroup v2. These are the only two exceptions that do not comply with the default values in the sig-node-release-blocking tab, and the test names as per the naming convention would look like this.
A
If the CRI is the default, like here, we should have this node conformance test running only against containerd and cgroup v2, on both OSes, and against all the release branches. So the name could be something like node-conformance-release plus the branch name. Likewise for node features: since this will also run against all default values, the name could be node-feature-release plus the release branch name. The same goes for the serial tests, with the exceptions being these.
A
These tests, the exceptional ones, are already there, but there is missing coverage around the n-3 release branch, and some missing coverage around serial tests on release branches. And whenever a new version of Kubernetes is released, we could probably have someone who owns the release responsibility and ensures that there is a new image against which all the tests run for every new Kubernetes version, and that the tests run against the new n branch and the new n-3 branch.
A
So that could be the person responsible for updating the test tabs; that person can take care of this. And this, sorry, this is sig-node-kubelet, the kubelet tab. This could have variability around the kubelet, and it could cover the tests for features that require specific configuration: features like swap, eviction, device plugin, and huge pages, which require special node setup, and then features like the kubelet credential provider.
A
That one requires special Kubernetes configuration. So whatever features require special configuration would go into the kubelet tab, and all the alpha and beta features could also be here. Then you could also have tests like node conformance with alpha features enabled, with beta features enabled, with alpha disabled, and with beta disabled, all the different combinations, so that there is enough coverage. The defaults here could be: containerd only, cgroup v2 only, COS only, and only the master branch.
A
The release responsibility would be to update the versions we test whenever there is a new branch, since the versions Kubernetes supports change with every new release. Then, in the containerd tab, we could have tests like node features that run against containerd main and all combinations of Kubernetes and containerd versions supported by GKE.
A
So this is specific; I'm suggesting that we have this specific to GKE to begin with, like whatever containerd versions are supported with whatever GKE versions. We could have those tests to begin with, and then once other providers want to add new tests, they can go ahead; we are very flexible around the guidance here.
B
Yeah, so I think the key point about GKE, about whether GKE is somehow special, is that we want to create some matrix of compatibility between Kubernetes and containerd. That's why we want to make sure that we have some matrix that we recommend people use, and it's easier to go with whatever matrix we have in cloud providers.
A
That says these are the combinations that are mostly tested, and that's the only matrix that we want to provide. We can also have node conformance tests running against the different combinations of Kubernetes and containerd versions supported by GKE, and likewise the serial tests. Apart from these three, any tests that are specific to containerd versions, and containerd build-related tests, could also go in here, and that would help us clean up this tab, which has a lot of different tests.
B
Yeah, I think we also discussed that maybe our first tab can be called something other than release-blocking, because release-blocking and release-informing are the release team's terms; we want to help them with that, but at the same time, I think what this doc is creating is: first, test with all defaults, basically testing conformance and features; then test everything that needs special configuration, with its special configuration; and then test compatibility with container runtimes. We have two tabs for that, containerd and CRI-O, and you have different test matrices for how to test compatibility, but not necessarily every feature needs to be tested against every version. So we need to make sure that every tab is responsible for the variability of a specific component.
D
Question about the cgroup driver: is that another dimension? It seems like there are a few tests that exercise specific drivers, cgroupfs versus systemd, but it doesn't seem like there's any overarching plan to test everything with cgroupfs and then also test everything with systemd.
B
I want to make sure that the cgroup v2 coverage is overwhelming, because pretty soon all operating systems will switch to cgroup v2 as the default, and systemd is already talking about deprecating v1 very soon; I think they're talking about December this year. So we will get to the situation where we'll ask people to forcefully migrate to v2. That's why we want to make sure that the coverage is enough. As for the defaults, I think it's time to start defaulting to v2 and keeping v1 as additional variability.
A
That's mostly about the systemd driver, but I'm not sure. If we want to add specific tests for a particular driver, if it's required at all, I think again we could continue adding those to kubelet, maybe in the kubelet tab, since it's a special configuration. But yeah, I can think more about it and add it to the doc if it's needed.
F
Right, yeah, a quick question, sorry. At the top of this document, or a little bit down, it was mentioned that the CRI-O conformance tests for cgroup v1 and v2 were special cases.
F
Is there a recommendation on how we can get them to not be special in this scenario? I guess I'm a little confused about what's special about them.
A
So, I'm sorry, you would know; I'm also not aware. To me it makes sense to have these tests in the CRI-O tab itself, but I saw a PR that requested adding these tests as release blocking, so maybe you know why these were added.
B
Yeah, I think we can flesh this out a little bit here. As I mentioned, the release-blocking tab is called release-blocking, and I think this document is trying to switch it from being release blocking to being "let's test default configurations", let's say kubelet features rather than runtimes; and then runtime variability and compatibility with different runtimes goes into the specific runtime tabs.
B
So maybe we can rename it to sig-node-default-features or something like that, and release blocking can be handled not through a special tab in our dashboard, but through adding the tests to the release team's release-blocking and release-informing dashboards. Okay, I think the dimension there is different, so... yeah. Maybe if we rename this tab a little bit and don't make it release blocking, then it may be easier to understand. Also, we wanted to highlight that even though it's release blocking, we don't test previous releases, so I think it may be better. Say, if you can scroll to the containerd tab, okay: in the containerd tab we want to test specific versions, like previous versions of Kubernetes and previous versions of containerd, based on some test matrix that we can all agree on.
B
I think the same will be true for CRI-O, and those also need stability as a blocker. So we either create our own release blocking as part of the defaults, or we can just use the release team's release-blocking and release-informing. Okay.
B
Yeah, I understand, and that's partially the reason why we want to refactor all these dashboards, because it's super unclear where to add your feature. Let's say you have the swap feature, NodeSwap; once it's GA, is it release blocking? It's not release blocking; I mean, it may be release blocking, but it's still a non-default configuration. So we probably want it in a separate tab where we configure it specially, and then maybe we want to test its compatibility with different runtimes, but maybe compatibility with different runtimes will be a subset of tests. So we want to make sure that there is guidance on where to put your tests.
G
So, as a clarification: what is formally the sig-node-release-blocking tab is now going to be something like a default-configuration thing. Will that only have, say, one containerd test with COS and cgroup v2? Is that kind of the plan, or will there be a more expansive matrix underneath that tab?
B
That was the plan. The plan was: you want to test kubelet features rather than specific configurations of runtimes, and this will be the first step. This is kind of the recommended way to run Kubernetes, and we test that all the features run successfully on the default configuration, let's say. Then, if you want to specially configure your environment, you go to the second tab to validate, and if you want to use different runtimes and test compatibility with runtimes, you go to the next tab.
G
So I understand the stated purpose of having it be agnostic of runtime, but it kind of implies to me that having multiple runtimes within that tab, even just the two different tests of both CRI-O and containerd with cgroup v2 and their default OS, would better test the kubelet features, because then we're testing the kubelet independent of the actual runtime itself, where we have the normalizing function of having both runtimes to kind of keep each other passing.
G
Do you think that would be appropriate in this scheme? Then we'd have those two, one for CRI-O and one for containerd; we can have the same cgroup version, and then in the other tabs we can go into more depth about cgroup version and operating system, you know, as in the case of containerd having both Ubuntu and COS.
G
Well, not even that; I guess a very slim definition of the compatibility. So instead of having the full matrix of all the different ways each individual runtime wants to be tested, just have one of each: define that this is the default test for each of these that we can run.
G
It looks like you have COS and Ubuntu for containerd on cgroup v2, and then we'd have, say, Fedora CoreOS on cgroup v2 with CRI-O, and those three would kind of be the default release-blocking tests. Then, yeah, I think it's okay, cool. I would prefer to have at least one CRI-O job in this default scheme, so that we have the wider visibility of the naming of the default, and also just kind of...
G
We can declare our sort of default configuration to the community as well, and then from there we can go into more depth.
B
Variability is great, but if you start adding more and more dimensions to it, things will get quite complicated.
G
Yeah, just one is perfectly fine for me. Just having CRI-O somewhere on that board would be my ideal.
A
Maybe we could change the OS here from Ubuntu to Fedora and have CRI-O.
B
And on this topic, the plan was to use the built-in containerd, whatever is built into Ubuntu. Of course, we can use that.
B
I don't know if it's built in, but we need to define what the default will be.
G
Yeah, for Fedora we would need to install based on the version of the release branch. We don't have, I mean, we do have CRI-O available, but we install it. But we can make that declaration of what the default would be for each release branch.
G
You know, based on that. So I think that this scheme works for me.
B
Okay, so if you can prepare this as a public document for review, we can then share it more broadly, sure, and start implementing.
G
As a quick note, this is a great initiative. As someone who has worked on these tabs, it was always very confusing, so I'm looking forward to having a clearer mapping between tabs, tests, and the matrix of configurations. So this is great.
B
Okay, I suggest we go to triage.
B
Okay, I pre-opened all these bugs and did some cleanup of this dashboard today. So, one initiative that is being run is switching tests to arm64.
C
Should we consider adding this to the document that Dixie just mentioned?
B
So arm64, I think, will be a variability on the kubelet side.
C
Besides, from what I understand, we only need to change the base OS image, right?
B
Yeah, I suggest we run all the standard tests on ARM, and this is what we need to file.
C
I'll self-assign them; you can join me on this one, I think. Even if we, I mean, we could leave the priorities as they are and iterate if we need to. Yeah.
B
Yeah, it's still the case that, yes, it is; I hope that this will be fixed, so I will add a note to the review.
B
Next: another set of pod exit tests.
B
Okay, stress tests for gRPC and HTTP probes. I think we've had these as unit tests before, but we can't run them normally.
B
We know we need to support this setup, and this is the Kubernetes side of the ARM enablement.
B
Okay, a test for node log query.
B
Was it merged for 1.27 or not?
B
Yeah, this one I moved to triage because it was stuck in "waiting on author", mostly to double-check with the team what's happening.
E
Yeah, I'm currently working on this one. There are a few changes that we have to make, but it should be ready for review soonish. In general, the core changes were approved previously, and we are currently working on the end-to-end test changes.
E
So if people want to take a look, you can start now. I can push the latest changes that I've made so far, and that should be a good enough reflection of what I've been working on. But yeah, this is important, and we definitely want to get this merged this release.
H
These are the end-to-end test improvements, which are basically making room for the real fix, because the real fix is quite simple, but exercising it is, as was mentioned, something else. So we are also fixing the end-to-end testing process and making it, we believe, better. So, yes, this is the dependency, which came out of the first comments on Swati's PR; taken in isolation, they are self-contained. Thanks, Ryan, for the review earlier; this should go in first, I believe.
B
Okay, yeah, oh yeah, I looked at it, and it looks good; it's just failing some tests. Okay, yeah, let me take a look again.
B
And this one is the same; I want to double-check the status. Ryan, you have this PR, and I think what happened last is that I asked why it's needed, since I wasn't sure what it fixes.
B
The last one was, oh yeah, we discussed it; a test was failing and I needed to take a look at it.
G
Yeah, this one has just been at the bottom of my list, and I don't love that, because we should still be running these on systemd. For a while there was a problem with the test setup itself, and now there's one test that's failing, and I haven't had the cycles to allocate to figure out why. So where is it in triage right now? It should be "waiting on author" for sure, yeah.
B
I just moved it into triage for now to double-check what's happening.
G
Yeah, yeah, that would be great. Thank you, and sorry about this one; thanks for taking the time to look at it. Yeah, I would love to one day allocate the time to get to this. Okay.
B
I hope you will find it; thanks for working on it. Okay, we have 13 minutes before the end of the meeting. We can try to look at bugs; as I said, I cleaned up all the other PRs on this dashboard, so it should be fine. On the bugs side, we have quite a few; I'm not sure how many we can go through in thirteen minutes.
G
This is an interesting feature request. It sounds to me like the entity that's doing this cleanup, I think the kubelet's GC, is not being particularly smart about it and is just going in chronological order. I'm not sure about special handling for init containers.
H
Okay, I may be off target, but isn't the semantics of an init container something like: you start it, it runs, it ends, and at that point nobody cares about it anymore? The behavior these users complain about looks more like: okay, this all makes sense if you make your sidecar a regular container, not an init container. But maybe this is something we can evaluate when discussing this issue.
F
No, I think the semantics are: if a pod gets stopped, or deleted and recreated, the init containers always run again.
G
Yeah, I agree with you, Ryan; it kind of sounds like the pod is being restarted and all of the containers are being rerun, but Istio is not handling that well, right? I would be surprised if the kubelet was restarting a dead init container; that's not expected to me. So if that's happening and the pod itself wasn't restarted, that's definitely a bug.