From YouTube: Kubernetes SIG Node 20211020
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Hello, hello. Okay, it's October 20, 2021, and this is the weekly SIG Node CI subgroup meeting. Hello, everybody. Let's go into the agenda here.
A
I'll present. So yeah, the first item on the agenda is Francesca's. Is Francesca here today?
B
Yes, I am. Okay, so, this topic. It's simple and hard at the same time. Some tests don't want to have a dependency on the node state, of course, but they still do. An easy example is a test which wants to stress the node: if it fails, who is in charge of cleaning up all the pods which needed to be created because you want to saturate the node memory manager?
C
Well, all of those tests... I don't know about the memory manager test, but the container runtime restarts are serial and they're disruptive, so they should only ever run as the only thing running on that node, and the node should be back in a good state before it runs the next test.
B
Clean up after itself. Okay, that's a good answer, but that means we need a careful review that each test which is supposed to leave the node good actually does leave the node in a good state. That is, of course, the right thing to do, but it's non-trivial, especially considering the state we are at at this point in time. I mean, I'm just trying to face the unfortunate reality; I completely agree with you about how it should be.
E
We're, sadly, in the position that unless we move every one of those tests to its own distinct node, we have no choice but to fix them. Every one of the serial disruptive tests kind of relies on the fact that every other serial disruptive test cleans up after itself. They're not particularly great at doing it right now, which is part of why some of them are so flaky, but piecemealing out a couple of tests into some other suite isn't going to help us actually fix them.
F
It depends on the test, but in general, I already have a PR that gets rid of the dynamic configuration, and I don't have such problems under that PR. I have some additional problems with serial tests, like eviction pressure problems: the disk eviction test puts the environment under disk pressure and, for some reason, on the next test...
E
Sorry, go ahead. There's no scheduler in the e2e node tests; the test infrastructure just assigns the pod itself.
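
(For context, "assigns the pod itself" means the harness sets the pod's spec.nodeName directly, so kube-scheduler is never involved. A minimal sketch of that idea with the standard corev1 types; the helper name is illustrative, not the harness's actual code:)

```go
package main

import corev1 "k8s.io/api/core/v1"

// bindToNode pins a pod to a specific node before creation, bypassing
// kube-scheduler entirely: the kubelet watching that node name picks the
// pod up directly, which is how the node e2e suite places its test pods.
func bindToNode(pod *corev1.Pod, nodeName string) {
	pod.Spec.NodeName = nodeName // normally filled in by the scheduler's binding
}
```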
F
Okay, I understand, but at least we check the condition on the nodes after the eviction test. We have some check that will verify whether it still has this pressure condition or not, and only after that will it continue. Under the eviction tests I also saw some workarounds, like trying to start a pod in the AfterEach to verify that it really can start a pod, but that's also some kind of hack. I don't know.
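
(A minimal sketch of that kind of post-eviction check, assuming client-go; the polling helper here is illustrative rather than the actual e2e_node code:)

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNoPressure polls the node until MemoryPressure and DiskPressure
// report False, so the next test doesn't start on a node that is still
// recovering from an eviction test.
func waitForNoPressure(c kubernetes.Interface, nodeName string) error {
	return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure:
				if cond.Status != corev1.ConditionFalse {
					return false, nil // still under pressure; keep waiting
				}
			}
		}
		return true, nil
	})
}
```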
E
I get that the eviction tests have a bunch of weirdness. A lot of the time, when pressure comes back, it's because of pre-pulling images that are expected to be there, and so you kind of have to do this awkward dance of cleaning up after yourself, waiting for the runtime to clean up, and then re-pulling images and then checking for stuff.
F
Okay, yeah, that may well be the problem, because most failures that I had in my local environment were after a system-critical test.
C
Jumping back to something you said earlier about the dynamic kubelet config tests and getting rid of them: the dynamic kubelet config restarts are really flaky. Like half the time the kubelet just doesn't restart, and as a result the whole test job fails. So I would imagine, when we move away from this, we really don't want to be restarting the kubelet unless we absolutely need to for the test.
C
I think that we should try, wherever we can, to separate that. There's some frustration right now: I think almost all of the memory manager tests are, excuse me, serial tests right now, but we really don't want that, because then we can't run them as part of a PR pre-submit, since you can't run them in parallel. So I think we really need to go and, for anything that needs a super special config, split that out into its own job.
F
But again, if you get rid of the dynamic kubelet configuration, I did not see any problems with restarting the kubelet. For me, it has always worked after I got rid of it. The problem that we have with the dynamic configuration is because, under e2e node, we have some restart loop that monitors the state of the kubelet, and once the kubelet is stopped...
E
Part of why dynamic kubelet config is so flaky is partially just the way we have the systemd units and stuff set up right now. The actual test harness for how we run the kubelet is kind of an accidental mess in a lot of ways, in that we have some stuff that handles running things without systemd (presumably because at some point in time we had test infrastructure that didn't have systemd), but at this point we just have various parts that assume we do, various parts that assume we don't, and nothing is particularly consistent or solid when it comes to that. And so I think we have some issue between the way we set up systemd units, the way dynamic config tries to restart stuff, and also our own explicit restarts of the kubelet, which just don't play together nicely.
F
Yeah, exactly. And by the way, when we still had the dynamic kubelet configuration: it's not that the dynamic configuration includes a restart of the kubelet. I checked the dynamic configuration code, and it just configures the stuff and stops the kubelet, and it is expecting that someone else will start it again.
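
(That division of labor, where dynamic config stops the kubelet and relies on an external supervisor to bring it back, is essentially a watchdog loop. A minimal sketch of the idea, assuming a systemd-managed kubelet and its default healthz port 10248; this is an approximation for illustration, not the actual e2e_node restart loop:)

```go
package main

import (
	"log"
	"net/http"
	"os/exec"
	"time"
)

// watchKubelet polls the kubelet's local healthz endpoint and asks systemd
// to restart the unit whenever it stops answering. Dynamic kubelet config
// depended on a supervisor like this: it stopped the kubelet and expected
// something else to start it again.
func watchKubelet() {
	for {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err != nil || resp.StatusCode != http.StatusOK {
			log.Println("kubelet unhealthy, restarting:", err)
			if out, rerr := exec.Command("systemctl", "restart", "kubelet").CombinedOutput(); rerr != nil {
				log.Printf("restart failed: %v: %s", rerr, out)
			}
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(10 * time.Second)
	}
}
```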
A
And going back to Francesca's question, the reason I mentioned restarting the kubelet after every test is: is there any way we can cleanly clean everything up after every single test? Maybe we can have runs with clean restarts that clean everything, and runs without it. This way we can understand the flakiness, or rather the failures: we need to detect the failure to clean up first, and it's not easy to detect a failure to clean up.
A
If the next test fails, does it fail because of a failure to clean up? So maybe there is an easy way to force it: I want to check that a test is actually passing, so I want to run it in a configuration that cleans up after every single execution, and then, if it passes, I'm confident the test is fine and I need to find the dependency.
F
But we still have some issues with the memory manager. I know that Alexa provided a pull request, but I think he closed it because we had some discussion under it with a lot of people. The problem is that the memory manager requires some amount of huge pages for additional verifications, and sometimes, not every time, under the serial lane it failed to allocate the specific amount of huge pages.
F
If I remember correctly, it's something like 256, and the test never failed under the separate lane, probably just because there it has more memory and it runs fewer tests, so the memory is not so fragmented and it always succeeds in allocating the memory. So my question: do we want to just remove the memory manager from the serial lane for now? Because we have the separate lane for it anyway, so it's not a big problem.
E
If we move them into their own lane, as long as we don't also give them a GPU instance and stuff, it shouldn't be that expensive. I'd mostly just be worried about people never running them: it's already kind of hard to figure out what tests you need to run if you don't actually look at the kubelet regularly, and a lot of people who would be reviewing code wouldn't know to either.
A
And is the ask for a clean run still to make sure that we're investigating? Like, that it's easier to investigate failures, or that there are no flakes? What's the main idea here?
F
If you are talking about the separate lane: the memory manager doesn't have any flakes under it. I copy-pasted the link to the TestGrid, and you can see it's always passing; and under the serial lane it is sometimes failing, probably because of the memory fragmentation.
F
Exactly, just one out of several. And I think the same is true for the CPU manager and for the topology manager, because again we don't have special tags under the test description that would say: do not run this under the serial lane.
F
In general, under the memory manager test, we are trying to run something like compaction; under the kernel we have a file to compact the memory. But again, it never guarantees that it will really release all the fragmented memory and join it, because it cannot touch kernel memory, memory used by the kernel and stuff like this. And there's also the additional question of how much time it takes to run memory defragmentation under Linux; you don't have any progress bar or stuff like this.
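
(The kernel knob F is most likely referring to is /proc/sys/vm/compact_memory, which triggers best-effort compaction. A minimal sketch of the allocate-then-verify pattern, assuming 2 MiB pages on a typical x86_64 node; the paths are standard Linux procfs/sysfs interfaces, but the surrounding helper is illustrative:)

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

const nrHugepages = "/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"

// reserveHugePages nudges the kernel to compact memory, requests n huge
// pages, and reads back how many the kernel actually managed to reserve.
// Compaction is best-effort: it cannot move kernel-owned memory, so on a
// fragmented node the result may still fall short of the request.
func reserveHugePages(n int) (int, error) {
	// Trigger best-effort compaction of free memory (requires root).
	if err := os.WriteFile("/proc/sys/vm/compact_memory", []byte("1"), 0o644); err != nil {
		return 0, err
	}
	if err := os.WriteFile(nrHugepages, []byte(strconv.Itoa(n)), 0o644); err != nil {
		return 0, err
	}
	got, err := os.ReadFile(nrHugepages)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(got)))
}

func main() {
	got, err := reserveHugePages(256) // the count the tests ask for today
	fmt.Println(got, err)
}
```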
A
I think we can put this knowledge into the code, so next time we try to bring it back to serial, we will stumble across this note and re-evaluate whether we did enough to clean up.
F
But in general, just reducing the number of huge pages that the test should allocate, or increasing the amount of memory in the environment, should help. Because currently, like I said, we are asking for 256 huge pages, which is like 512 megabytes of memory; we could ask for 10 huge pages, and it would probably solve the problem.
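
(For reference, with the default 2 MiB huge page size on x86_64: 256 pages × 2 MiB = 512 MiB, while 10 pages would reserve only 20 MiB. Each 2 MiB page needs 512 physically contiguous 4 KiB frames, which is exactly what fragmentation makes scarce, so the smaller request is far easier to satisfy.)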
G
Yes, so I found this issue a couple of weeks ago about removing, from the alpha job, some of the features that are no longer in alpha, and as a follow-up item I was suggesting adding the new features that actually are in alpha to this job. Do we have a list of the features that we should add, or is this not really that important? Then we can skip it.
G
Somebody already removed the previous jobs which are no longer in alpha. Cool.
A
It's Jonathan. Okay, hey Jonathan.
D
Hey
al
nice
to
meet
you
all,
I
guess
I
can
introduce
myself,
I'm
jonathan
lebon.
I
work
on
federal,
core
os
and
recently,
like
the
last
few
weeks,
it
came
to
our
attention
that
federal
core
is
actually
used
by
kubernetes
ci
for
running
the
node
e2e
tests.
So
we
thought
we'd.
Finally,
do
one
the
thing
that
we've
been
wanting
to
do
for
a
while,
which
is
hook
up
the
kubernetes
e2e
test
to
our
federal
core
sti.
D
Just so it's sort of reciprocal in coverage. We're not planning to get on it soon, but eventually that's the goal. At least for now, it would inform our Fedora CoreOS releases, so we know we're not breaking anything in Kubernetes.
D
So
this
pr
here
is
because
essentially
in
in
our
ci
system,
for
photo
cores,
we're
very
heavily
invested
in
chemu
testing
so
like
like.
Basically
chemi
is
like
the
backbone
of
our
ci.
Just
everything
wasn't
chem.
D
We
do
have
also
tests
in
like
aws
and
openstack
and
azure
and
stuff
like
that,
but
a
lot
of
it
is
in
in
camu,
and
so
the
current
way
that
run
remote
dot
go
works,
it's
very
geared
towards
and
just
the
whole
harness
in
general,
it's
geared
towards
running
things
in
gce,
but
when
I
actually
looked
at
it,
there's
nothing
really
gc
specific
about
running
these
tests
like
as
long
as
you
have
an
ssh
connection,
you
can
run
the
tests
on
via
ssh,
so
I
just
sort
of
generalize
things
a
bit
so
that
we
can
point
the
run
remote
to
any
ssh
machine
and
then
just
go
from
there
and
that
meshes
well
with
our
model
of
camu,
because
because
we
can
just
tell
it
okay
run
on
this
cameo
machine.
E
Yeah,
I
can
take
a
look
at
that
tomorrow.
I
think
the
idea,
generally
speaking,
pretty
good,
like
fourth
thing
that
has
to
always
run
on
gcp,
is
kinda
when
it
comes
to
wanting
to
generalize
them
across
different
providers
and
stuff
too.
F
But also take into account that you have some cloud-init that's configured during the GCE start. At least for the huge pages tests, it configures some additional one-gigabyte huge pages via kernel arguments, for real huge page allocation; I don't remember exactly, yeah.
D
Can
yeah
I
can
chime
in
here
so
basically
photocores
the
equivalent
of
cloud
init
is
called
ignition,
and
so
the
existing
fro
core
os
test,
that's
running
in
in
the
kubernetes
cr
right
now,
is
using
ignition
to
do
so.
In
this
case
for
the
test,
it's
about
setting
up
the
cryo
binary
for
testing,
and
so
yeah
we'd
be
doing
the
same
thing
there.
But
but
you're
right
like
there
is.
E
For what it's worth, that's also true when running in GCP mode if you're reusing an existing host: the test suite itself and the provisioning of a host are things that should be pretty separate anyway.
D
Cool, okay. There is one more thing I wanted to talk about related to this, which is: I'd like to see if it's possible to take the next step after this, which is basically to have a way to prepare all the binaries up front, and then we just sort of upload all the binaries onto the host and run the tests there, without necessarily having a test harness. Right now, for example, to run the tests you literally do "go run run_remote.go", right? But that's a Go binary. So I was thinking: if we could have all that stuff prepared up front, then it's just, you know, execute this thing on the host and you're done.
E
There's
a
flag
to
run
remote
that
just
creates
the
archive
without
uploading
it
it's
not
very
well
documented,
but
yes,
that
is
technically
possible
already.
D
So
does
that
okay,
that's
yeah.
I
think
that
that
helps
a
lot,
but
for
running
the
archive
like
how
does
that
work
like
like,
so
you
upload
the
stuff,
the
the
archive
to
the
remote
via
ssh,
but
is
there
a
way
to
does
it
contain
like
a
copy
of
of
run
local
or
something
on
it?.
E
Not
run
local,
so
if
you're
trying
to
manually
orchestrate
how
how
you
run
the
test,
there's
a
few
different
things
you
have
to
do
and
exactly
how
you'd
want
to
do
that
kind
of
gets
tricky,
but
the
way
it
runs
today
the
run
remote
copies
everything
to
the
remote
host,
unzips
it
and
then
just
runs
a
binary
on
the
host
and
so
like.
You
can
take
out
the
like
middle
step
of
that
and
just
use
it
to
prepare
the
stuff
copy.
It
yourself
and
then
run
it.
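
(A minimal sketch of that manual flow over plain SSH. The archive name, remote paths, and binary invocation here are assumptions for illustration, not run_remote's actual defaults:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// runArchiveOn copies a pre-built node e2e archive to an arbitrary
// SSH-reachable host, unpacks it, and runs the tests there; in other
// words, run_remote's middle step done by hand.
func runArchiveOn(host, archive string) error {
	if out, err := exec.Command("scp", archive, host+":/tmp/e2e.tar.gz").CombinedOutput(); err != nil {
		return fmt.Errorf("scp: %v: %s", err, out)
	}
	remote := "mkdir -p /tmp/e2e && tar xzf /tmp/e2e.tar.gz -C /tmp/e2e && " +
		"cd /tmp/e2e && ./ginkgo ./e2e_node.test"
	if out, err := exec.Command("ssh", host, remote).CombinedOutput(); err != nil {
		return fmt.Errorf("ssh: %v: %s", err, out)
	}
	return nil
}
```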
D
Right, cool, yeah, that's where I was going with this. I didn't realize this was already supported. Nice.
E
Feel free to ping me and I can help you figure that out; I read through an annoying amount of this recently.
A
Yeah, there is also SIG Testing; maybe you can announce the PR there as well, so it gets some attention.
G
Sure, I added this one as well; this is a PR too. Basically, we want to run the node memory eviction tests in a swap-on scenario under the existing eviction tests job. There's a PR submitted; it's also in the link, yeah, that one.
G
So
if
you
go
to
the
pr
or
you'll
notice
that
I
propose
using
a
different
image
with
swap
on
a
couple
of
notes,
I'm
not
entirely
sure
if
we're
able
to
run
swap
on
cos,
I'm
gonna
figure
this
out,
but
right
now
I
didn't
configure
it
on
their
costs
only
under
ubuntu
yeah.
I
think
that's
mostly
it
and
any
suggestion
would
be
greatly
appreciated,
yeah,
it's
something
I
have
for
you
right
now.
E
Yeah, let's see how it needs to be done. Mostly, for this, we either only run on Ubuntu and take that trade-off, or get swap working with CoreOS.
C
We
have
ignition
configs
already
for
swap
for
for,
like
fedora
krs
nodes
so
like
if
you
go
and
hunt
around
in
the
same
folder
that
we
have
the
like
the
swap
configs
for,
like
ubuntu,
there's,
also
ignition
configs
for
fedora
core
os.
Although.
C
If
there,
I
think,
there's
various
ways
you
can
do
it
and
I
think
it
sets
it
up
after
the
fact,
as
opposed
to
like
at
provisioning
time,
which
I
think
is
just
a
it's
a
sort
of
limitation
of
our
test
environment.
But.
G
I reused the one that we already have; I'm importing the file onto the image. But yeah, I'm going to research how to do this on COS, because I'm not entirely sure this will work as it is right now.
C
Then I guess it sounds like... are we at the last agenda item?
A
I just wanted to understand what the action item here on this PR is. Is it good enough, or... I see this comment that it needs to have another test.
C
Even though we had said multiple times we were not canceling because of KubeCon, we didn't cancel the calendar invite, so I'm not really sure. We continue to have really poor attendance at the alternate time, and a bunch of people keep telling us they can't make any other time, but then they don't show up to the meetings. So...
C
I
think
we
have
one
more
scheduled
for
november
and
then
maybe
we
can
see,
engage
on
that
and
I
mean
given
that
we,
you
know
advertise
the
time
as
part
of
our
kubecon
talk.
We
should
probably
hold
that
one,
but
I
think
after
that
meeting
if
we
continue
to
have
poor
attendance,
I
think
we
should
just
revert
back
to
the
wednesday
time,
because
if
we're
not
getting
a
different
crowd,
then
I'm
not
sure
it's
valuable
to
have
the
other
time,
especially
because
I
have
a
conflict
at
that
slot.
C
My only request is, I mean, if folks are watching this on the recording: if there's a major holiday that's a conflict, please tell us, because otherwise we're just going to be like, okay, no one showed up, why? We haven't been getting any feedback like that; it's just been silence, and no one has showed up. Yep.
A
Okay, so we have some time left; let's go to the board.
A
And there are three for review and one work in progress, so...
C
Is this, is this still work in progress?
A
Yeah, I think we kept it because a lot of tests... I see. Oh, not a lot of tests.
A
Okay
and
pre-pull
images.
C
I
don't
know
oh,
this
is
it's
because
we
pre
pull,
I
think,
and
then
the
e2e
underscore
node,
but
this
is
e
to
e
slash
stuff.
So
it's
possible
that
in
the
other
e
test
we
don't.
A
Okay, a few are waiting on author, but I reviewed those; they are all either on hold or work in progress, so it should be fine.
A
Yes,
we
have
10
or
15
minutes
left.
Let's
go
to
box
wrong
dashboard.
C
Okay, here it is, here we go: "set capabilities for a container." It's in the docs. I can link him to this, and then I will close the issue.
A
Okay, I'll take it later.
C
Oh,
let's
just
put
that
needs
more
information.
C
Yeah, that defaults to true; the comment is six years old, so I think it's just outdated.
A
Just about the beta feature, graceful termination.
C
Are
we
sure
that
you
want
that,
though,
because
we've
had
issues
with
so
to
give
you
some
context
when
pods
start
getting
marked
for
deletion,
if,
if
you
keep
running
probes
on
them,
so
liveness
probes
are
really
the
big
problem.
If
you
keep
running
probes
on
them
after
they've
been
marked
for
deletion
like
they're
marked
for
deletion
right,
so
you
shouldn't
be
routing
traffic
to
them.
C
So
readiness
should
be
false
if
you
keep
running
probes
on
them,
especially
if
you
keep
running
liveness
growth
on
them,
but
the
pods
are
shutting
down.
Those
probes
are
going
to
start
returning
false
and
it
causes
all
sorts
of
weird
behavior,
including,
like
some
things,
trying
to
restart
pods
due
to
races.
So
it
seems
to.
C
Like, we absolutely should be setting readiness to false when things are gracefully terminating, so I'm not sure that this is correct.
A
My biggest concern is to make sure it's not marked as ready, at least something like that, if it will be marked as such.
C
Could
potentially
be
marking
it
as
as
ready
if
it
continues
to
run
and
the
thing
isn't
terminated
yet,
but
I
think,
as
soon
as
we
mark
the
node
for
graceful
shutdown,
we
want
to
turn
all
the
readiness
probes
to
false
it's
possible
that
it's
not
doing
that
for
some
reason.
But
then
I
think
the
bug
is
different.
It's
not
that
the
probes
aren't
running,
but
rather
the
last
run
of
the
probes
didn't
set
it
to
false.
A
What,
if
like
termination
period,
is
like
one
hour
and
then
during
this
one
hour,
something
like
there's
like
three
stop
hook
that
doing
some
cleanup
work.
But
what
can
steals
your
traffic.
C
Yeah
well
so
the
idea
of
those
things
would
be
that
pods
could
still
continue
to
serve
requests
that
were
like
already
in
flight,
but
they're
not
serving
any
new
requests.
The
load
balancer
isn't
like
readiness
pro
really
means
is
a
kubernetes
service
going
to
load
balance
traffic
towards
the
thing,
and
you
don't
want
that.
If
your
node
is
shutting
down,
you
want
the
node
to
shut
down
so.
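
(In other words, service endpoints should drop a pod once it is either unready or marked for deletion. A minimal sketch of that rule with standard corev1 types; the helper itself is illustrative, not the endpoints controller's actual code:)

```go
package main

import corev1 "k8s.io/api/core/v1"

// shouldRouteTraffic mirrors the behavior being discussed: a pod marked
// for deletion must stop receiving new service traffic regardless of what
// its probes last reported, and an unready pod must not receive any.
func shouldRouteTraffic(pod *corev1.Pod) bool {
	if pod.DeletionTimestamp != nil {
		return false // terminating: finish in-flight requests, accept no new ones
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false // no Ready condition reported yet
}
```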
J
Yeah, I guess the confusion I had here was just a little bit around the PR that we had: basically, what the behavior of that PR is with respect to readiness probes, because the PR basically just removed all probes, if I understand correctly. So I wasn't sure if that was the correct behavior or not.