From YouTube: Kubernetes SIG Node CI 20230308
Description
SIG Node CI weekly meeting. Agenda and notes: https://docs.google.com/document/d/1fb-ugvgdSVIkkuJ388_nhp2pBTy_4HEVg5848Xy7n5U/edit#heading=h.2v8vzknys4nk
GMT20230308-180458_Recording_1524x1120.mp4
A
Hello, hello. It's March 8th, 2023, and this is the SIG Node CI subgroup meeting. Welcome, everybody. We have a few items on the agenda today. First — I think it's this one. Yeah, I wanted to let everybody know that there is an effort going on: we want to run arm64 tests — I mean, tests on arm64 machines. Right now we have these machines available in GCP, where we run all the tests, so it shouldn't be too complicated to add them, and Ike, I think, will be working on that. So if anybody is interested in helping Ike, please join the party.
B
You mean— yeah, well — is the performance of that significantly better in GKE? I've never tried it, so—
A
To be fair, I don't know the numbers. I know that there are workloads that run faster; I know some workloads don't run faster, because they're just compiled without optimization for arm64. But I know that you can find workloads that run significantly faster, yeah.
B
Kubernetes builds significantly faster on my M1 — I'm running Ubuntu in a UTM/QEMU VM — so I'm expecting really good things here. This is great work.
A
Yeah, and right now some vendors are already running it, including Google — we run Kubernetes on arm64. It's already supported; this effort is just to make sure that the tests run early, so we will catch regressions for arm64 — a shift-left effort.
A
So now, in our log-dump script, we will have code that collects some metrics from the nodes when a test is completed — the same way we collect the set of all pods running, all events from those pods, and all the logs from the node. So you can add some information at the end of a test, especially on failed tests, and this one will collect some metrics. It's a very good improvement, but it reminds me of this thing that happened at some point.
A
We switched from the log-dump script in the k/k repository to the log-dump script in the test-infra repository. It's a good switch, because we want to minimize the amount of test code in the k/k repository, but at the same time it had some incompatibilities. So there is a flag that you need to apply to your test job to use the new script, and I was wondering if anybody here is interested in this task — to just apply this flag to whatever SIG Node tests we have. I know there is a set of jobs that are still using the old script.
A
If you're interested in some getting-started tasks, this is a good opportunity to contribute.
A
This started, like, maybe before I joined — before my time, two years ago at least — so it's not urgent, but more and more tests will rely on this. I mean, if we want these metrics, we want the new log-dump script, so yeah.
A
I don't think we have.
E
So this is really about getting a feeling from this forum: whether there are any concerns or comments about promoting some of the existing podresources end-to-end tests to node conformance, or about the process. I had an item, but it's on me — I will do the research and talk with SIG Arch or any other SIG as needed. But are there concerns from this forum? So — why do that at all?
E
Because we need to enable the endpoint on Windows, and the easiest way to have tests running on Linux and also on Windows is to have them as node conformance — which is arguably a good thing in general, and a good thing to explore in general. So I think, provided the tests are stable, it should be no problem, but I'm here to get the feeling — the pulse — from this forum on the definition.
A
Node conformance is not owned by SIG Arch — it's owned by us, so we can decide, right. There was a KEP that was accepted in one of the previous releases, and unfortunately we didn't follow through — I mean, we followed through partially. It changed the definition of node conformance a little bit, clarifying what it means: node conformance means that the functionality is supposed to be working everywhere, so it doesn't need any special setup or special components installed in addition to Kubernetes.
E
Yeah, and then this thing just popped up because of the GA — the GA conversation — so yeah. There are the resource manager tests in general, which are related to podresources, and in some cases there are dependencies on extra components. This is why I'm talking about a subset of them; but such a subset exists, and we can totally move forward. So yeah, yeah — thank you for the updates.
A
Thank you, yeah. This is podresources for you: you're uncovering so many things — this promotion and support on Windows, plus scalability and DDoS protection. Yes.
A
Yeah — no new endpoints, yeah.
A
Yeah, I don't know if Mike is on the call here.
A
Yeah, I just wanted to ask — okay, so, as discussed last week: somehow we also lost the test coverage on all the releases, 1.24 and up, and Mike signed up to support it. So we have the first job already here, right? Can you talk more? Yeah.
G
Right now it's just running the conformance and node conformance tests. Eventually we could increase the coverage of this, but I wanted to make it something that's consistently green. Ideally I would like to maybe onboard the serial and some other tests, but while they're unstable I would rather have them outside of this.
G
Once 1.27.0 is released and the new version is cut, then we can have a 1.27 branch — but yeah, that's about it. I'm using COS for now; we can expand this to Ubuntu as well. I'm using a COS version that officially supports that Kubernetes version.
A
We have a separate image file because, in the past, it was always— I remember what we did: we removed the remote runtime flag from Kubernetes and it broke, I think, because we shared the image file between versions, right? I think this is a good improvement here. Do you think we can move it out of this file into separate files, so you would have fully separate 1.24 files and fully separate 1.25 files?
A
Maybe that would improve the situation further, so we have a clear way to distinguish: this is the 1.24 thing. And then we would remove these files and create new files — these will be 1.25 — for testing, right?
G
All these release-branch jobs are in one single file, so it's a little bit easier to find them — but both options work, yeah. Ideally— I added it to the container because I didn't want to add a file at the moment, but I think it makes sense to separate it.
A
Renee, do you want to talk about your in-place pod resize jobs? Yeah.
B
So, the CI enablement job — the PR that I had yesterday — got merged; thank you, Sergey, for merging that. What I was observing, to see how it does: the first runs that it did are looking red. When I dug into it, I found that about 11 or 12 of the 34 tests that ran have failed, and the reason that shows up is that the pods were pending.
B
We have a timeout of 300 seconds for scheduling the pods, and the timeout was exceeded — the pods were pending, unschedulable, because of not enough CPU. Now, looking at the node: the pull jobs we run with multiple — like three or four — worker nodes, but this one is just a single node, and it has two milliCPUs — two thousand milliCPUs, sorry. And when the tests failed, they logged the pods that were running on that node, and there were several.
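A rough sketch of the 300-second scheduling wait being described — an illustrative helper, not the e2e framework's actual code; the function name, poll interval, and client wiring are assumptions:

```go
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls until the pod leaves the Pending phase or the
// 300-second budget runs out.
func waitForPodScheduled(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 300*time.Second, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// On the failing runs described above, pods sit in Pending with an
		// "Unschedulable: not enough cpu" condition until this times out.
		return pod.Status.Phase != corev1.PodPending, nil
	})
}
```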
B
So my guess is that this is happening because all the pods, when they run, take about 200-300 milliCPUs, and six or seven of them would exhaust what's there — and if they run in parallel, they'll fail. So the immediate — the short, the quick — fix for now is to run them in serial, and this potential-fix PR does that. I'm just not sure if it is sufficient; I don't have much experience with these on the test side of things.
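To make the arithmetic concrete, here is a minimal sketch — illustrative names and values, not the actual test code — of a pod requesting 300m CPU and how many such pods fit on a node with 2000m allocatable:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	request := resource.MustParse("300m") // per-pod CPU request
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "registry.k8s.io/pause:3.9",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: request},
				},
			}},
		},
	}
	_ = pod

	// 2000m allocatable / 300m per pod = 6 pods; a seventh stays Pending.
	allocatable := resource.MustParse("2000m")
	fmt.Printf("pods that fit: %d\n", allocatable.MilliValue()/request.MilliValue())
}
```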
A
Yeah, we had similar problems with probes: whenever you test probes, you heavily depend on timing, and if your pod scheduling takes a while, and then probe execution takes a while, and there is CPU saturation, we always had these flakes, which we fixed with timeout updates. I really don't like the idea of doing something serially just because we don't have resources on the nodes — it doesn't feel healthy, and it will lead to more expensive infrastructure.
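For context, these are the probe-timing knobs such timeout updates adjust — a hedged sketch with illustrative values, not the numbers from the actual fixes:

```go
package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessProbe returns an HTTP liveness probe with deliberately generous
// timing, the kind of relaxation used to de-flake probe tests on slow,
// CPU-saturated nodes.
func livenessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15, // allow for slow scheduling and container start
		TimeoutSeconds:      5,  // how long a single probe attempt may take
		PeriodSeconds:       10, // how often the kubelet probes
		FailureThreshold:    3,  // consecutive failures before a restart
	}
}
```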
B
So one of the tests in there — the scheduler test — has to be serial, because in the end there was a potential flake-causing issue that Wangchen identified, and she submitted a PR that was merged yesterday as well. I am not sure if—
B
Increasing the number of nodes could— I mean, I could really bring down the numbers. Possibly, you know, instead of allocating 300-400 — which was a number I just pulled out of the air — I can go with like 40 to 60 milliCPUs, and that will help, but it still leaves this question mark of whether the flakiness is there.
A
Go ahead. Sorry.
E
You probably already checked that, but I want to mention it just in case, because we had this issue when doing the CPU manager tests, which want to allocate exclusive CPUs — so then the CPU amount becomes a bottleneck. The CI machines are actually quite weak, meaning they have two — two entities that Kubernetes calls CPUs — available. So I—
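A sketch of what makes the CPU manager hand out exclusive CPUs under the static policy — a Guaranteed-QoS container with an integer CPU request equal to its limit (names and values are illustrative). On a two-CPU CI machine, one such container already takes half of everything the node has:

```go
package cpumanager

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// exclusiveCPUContainer is pinned to exclusive CPUs by the static CPU manager
// policy: Guaranteed QoS (requests == limits) with a whole-number CPU request.
func exclusiveCPUContainer() corev1.Container {
	oneCPU := resource.MustParse("1") // must be an integer number of CPUs
	mem := resource.MustParse("64Mi")
	return corev1.Container{
		Name:  "pinned",
		Image: "registry.k8s.io/pause:3.9",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{corev1.ResourceCPU: oneCPU, corev1.ResourceMemory: mem},
			Limits:   corev1.ResourceList{corev1.ResourceCPU: oneCPU, corev1.ResourceMemory: mem},
		},
	}
}
```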
B
It's not timing — I checked. So, in the bug — let me just post the link to the bug, the issue I created — I did some initial analysis, and I think the— where did it go? Okay, let me get it from this one.
E
No, what I meant is: it could be a hyperthread or a physical core — it doesn't really matter what that entity is. I'm just thinking aloud about it. That the tests go well while in serial could still be related, because when they run serially, all the worker resources are available. So if you have two cores, and two cores are enough, then you are sure you have the full two cores for your tests; if they run alongside anything else, resources could be taken.
B
No, it is. So — it's not in the issue; the analysis is in the PR description itself. If you scroll down to the PR description, you'll see that I have a comment there, and this is exactly what it is. When I looked at why these were sitting there, not scheduling, I looked at the node: when it dumps out, it tells you how much node allocatable there is, and that is 2000 milliCPU.
B
So, as you said, this confirms that we have weak machines running over there. The choice is, you know, add more nodes to the job — which I really don't want to do, because that's going to cost more — or, in this case, run it longer. So either way you're going to pay a little bit more. When I looked at the pull jobs after creating this PR — both the alpha-features job and my own in-place resize job—
B
I expected those pull-job timings to increase, because I'm running them in serial — I was under the impression that they were running in parallel. But it looks like those timings have not changed, which means the CI jobs are somehow running in parallel but the pull jobs are not. Is that the case?
B
So, when I created this PR, it runs through a whole bunch of jobs, right — it runs all the jobs I have added to it, including the alpha-features job, which runs the in-place pod resize tests — and if you scroll down further, at the bottom, you'll see the test results.
B
So there is that alpha-features COS job — that one runs all the in-place pod resize tests as well — and when I compared the timings, it took an hour and 29 minutes to run. I was expecting this to increase because, you know, I'm running in serial now, which was not the case before, but the time has not changed. So that suggests that this is already running in serial in the pull job, but in the CI job they run in parallel.
B
When you look at the CI job, it finishes in about 30 minutes with those failures, so the cost is going to be that it runs three times longer. I guess, with this PR, I want to validate — there's no way to validate the CI job unless this goes in, right? Is there? Maybe I missed something.
A
There may be a serial CI job as well. So I think — serial, as I understand it — the Serial tag is not a serial setting, so it doesn't mean the test will run serially; you need to configure it. That's why we have separate jobs for that.
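To illustrate the point: in the Kubernetes e2e suites, "[Serial]" is just a tag embedded in the test's name — nothing about it forces serial execution by itself. A sketch, with a made-up test name and body:

```go
package e2enode

import "github.com/onsi/ginkgo/v2"

// "[Serial]" only has an effect because the CI jobs are configured around it:
// a parallel job typically skips these tests (e.g. --ginkgo.skip=\[Serial\]),
// while a dedicated serial job focuses on them and runs ginkgo with
// parallelism disabled.
var _ = ginkgo.Describe("[sig-node] In-place resize [Serial]", func() {
	ginkgo.It("resizes a running pod", func() {
		// test body elided
	})
})
```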
B
Oh, okay, okay — then please take a look at my PR and see; maybe I need to do more than just the Serial tag.
A
So I think here you wouldn't find anything serial, right? Yeah — oh, thank you for pointing that out.
B
So this was not there before, and it still took an hour and a half, so the timing on the pull job has not changed. The CI job, which is linked in the issue — you'll see that it runs in 30 minutes, which is great, but if it fails, then it's no good. I'm expecting that if we do this, it will take one and a half hours, just as the pull job does. But then we are also running the cgroup v1 job — we're running it in—
B
So yeah, if you look at that, it's running 26 minutes with those failures, and they all seem to be running in parallel, which is causing the out-of-CPU, unschedulable issue. If we run them in serial— this is my theory, my hypothesis; I just don't know if I'm on the right track with it. So I wanted someone to take a look at the PR, and if this is not it, then fine — what is needed to get these to run in serial?
B
About the two jobs that we have: the cgroup v2 in-place CI job runs every 24 hours, and the v1 one runs every 48 hours, so at least for now I'm okay with that. We can at least try this out and see if it works, but if we know from code review that it doesn't work, then I'll do whatever we need to do to fix it. As you can see, most of them ran fine — about 70 percent of them — and the 30 percent that failed are failing because of not enough CPU.
A
Yeah — I'm afraid we increased the periodic interval for everything just to save on infrastructure costs; right now it's running very high.
A
Okay, thank you. Let's go to the next item — and we're out of agenda items, so let's go to triage.
H
I did have one question: I've not been able to find my test log. When I run locally, it's fine — using focus and stuff I can, you know, see my test run and see that it passed — but I haven't been able to find them in the PR logs, where the test actually ran.
H
Which jobs did you check? I checked the latest one that I was told to check.
A
So the focus here is— yeah, so I think we can just add NodeConformance to that, and it will pick up the podresources tests.
A
Okay. We discussed it with David, and it may be a good idea to add some timestamps to the logs as well, because this way we can test when things started. There is a field called startedAt, and we can check that this field's value is at least somehow close to the timestamp it's supposed to be. That may be another addition, but I don't want to complicate this PR; maybe it can come later.
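A sketch of the kind of check being suggested — compare the container status's startedAt against the time the test expects, within some slack. The helper name and tolerance are made up, not an existing helper:

```go
package timestamps

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// startedAtClose reports whether the first running container's startedAt is
// within slack of the expected time.
func startedAtClose(pod *corev1.Pod, expected time.Time, slack time.Duration) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Running == nil {
			continue
		}
		diff := cs.State.Running.StartedAt.Time.Sub(expected)
		if diff < 0 {
			diff = -diff
		}
		return diff <= slack
	}
	return false
}
```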
D
Yeah, this one was the fourth point that you see in the PR description — the one that was backported — so I think this is, in a way, blocked as well. I'm currently working on fixing that, and then we can think about backporting.
A
This one — that's on AWS.
H
Yeah, so I'm not sure of the status — like, what the end goal really is. So I know—
H
So, there was a GCE runner that ran remote tests on GCE, and it was in use by some of the node tests. And then there was someone from SIG Infra, I think, working on setting up a Prow cluster on EKS, to use some of the AWS credits and try to move some of that cost from GCE over to AWS — and this was a PR I created sort of in support of that.
H
But then there was some discussion in SIG Node yesterday about whether that was a good idea or not — or, I guess, maybe whether the registry changes are just going to be enough. So I've got no strong feelings about this; I'm just curious what everyone else thinks.
H
There's a bit of tension between the desire to remove cloud-provider code from kubernetes/kubernetes and the desire to also use the AWS credits, which might require keeping some of that code there — and whether the right approach is to move the test runner entirely out of kubernetes/kubernetes into a different repo, so that you can also remove the vendored dependencies. That's kind of where it's at at the moment.
H
Yes, yeah — there's a guy named Mohammed over there who's working on setting up the Prow cluster. I don't think there's been— I guess I'm a little bit concerned that they're sort of approaching it as "hey, let's build this EKS Prow cluster and do this," and I don't think there's been a lot of cross-communication regarding the actual use of it.
A
Yeah, I don't think there is any concern from— I think people are afraid of having something running on AWS and not being supported at all — some infrastructure that starts running, but then nobody looking at it. That's really concerning, and we had situations like that before — similar situations.
A
Okay, okay — this issue.
A
Okay, I can try to ping SIG Storage on that.
A
And the probe tests — so yeah, I think we looked at it last time, last week; I'm not sure why it's not triaged. What is happening is that some liveness and readiness probe tests started flaking with a strange condition: sometimes liveness fails — and we detect that it fails — but then the container is not being restarted. In the same way, sometimes it's "expected number of restarts: zero" — you see, sometimes it expects not to restart but it actually restarts, and sometimes it's the opposite.
A
For some reason, I only see this kind — "number of restarts: one" — but they are the same.
A
Okay — I promise you, I saw the opposite as well. Anyway, it's failing now, and I think it may be critical now, so let me mark it as such.
A
Okay, so we're done with this triage, other than two items.
A
Oh, I need to move— I'll move them later; we looked at all of these, so I'll just move them and you can call them out. And we have 20 minutes for bugs, so we're done with the test agenda. I will now switch to bugs. If you're interested in bugs, please stay and help out.
A
As you remember, last time we looked at the important-soon bugs with Mike. This time I want to look at the triage items — 26 of them. Maybe we can go through a few of them.
A
We try to read it, but then we typically go to needs-information.
A
It needs investigating, and it's a very old version — so what we'll do is say it needs information.
A
There is a special project for those, which we periodically pick items from.
H
There was some discussion down below regarding a possible feature enhancement to the API server — sort of like an admission hook — to allow you to disable setting the node name on a pod unless you wrote a scheduler, and I don't think there's anything for the kubelet to do.
A
So yeah — and it's all related to this KEP, the KEP around the command probes.
C
There was a ticket about this before, related to static pods, but there was a lot of talk on the ticket, and the conclusion was that the source of truth for static pods is the file on disk, not the mirror pod you can see in the API server — so it should not respond to the delete of the mirror pod. If you want, I can try to find that ticket for you.
F
There's a recent comment by Clayton on the issue that I opened where he's proposing some solution to it, but yeah, I don't have the number yet, I know.
A
So the problem is that if customers specified an incorrect command in liveness or readiness probes, we wouldn't really recognize a failure to run this probe as a failure of the probe — we'll just say it's a warning. So I already opened it mostly— okay, I'll look at the chat — I opened it mostly because I thought there was no information that this happens, but apparently there is. I thought that we'd hit this.
A
There is a very interesting catch here. In this code, in the exec prober, we check whether an exit code is present: if it is, we go through this case; if it's not present, then if we can tell it's a timeout, we show it's a timeout. But in the opposite case it's "unknown", and "unknown" will not result in any logs or events, so it will be swallowed silently.
A
Apparently there is an exit code when the binary cannot be found — there is an exit code — so this branch of the code will be used, and we'll see a failure. I think this is what happened. So yeah, I tried to reproduce it and couldn't get it to go into this branch of the code, but I wonder in which cases this branch of the code would be executed.
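A simplified model of the branching being described — not the actual kubelet source; names and the timeout check are assumptions for illustration:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
)

// probeResult models the described logic: a command that ran and exited
// non-zero carries an exit code and is reported as a failure; a timeout is
// detected and reported as a timeout; anything else maps to "unknown", which
// produces no logs or events and is swallowed silently.
func probeResult(err error) string {
	if err == nil {
		return "success"
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return fmt.Sprintf("failure: exit code %d", exitErr.ExitCode())
	}
	if errors.Is(err, context.DeadlineExceeded) {
		return "failure: timeout"
	}
	return "unknown" // nothing surfaced to the user in this branch
}

func main() {
	// Running a nonexistent binary directly in Go yields an *exec.Error with
	// no exit code at all — the "unknown" branch. Inside a container, the
	// runtime typically does report an exit code (e.g. 127), which would hit
	// the failure branch instead.
	err := exec.Command("/no/such/binary").Run()
	fmt.Println(probeResult(err))
}
```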
H
Add me too — rphillips.
A
Do you see the same behavior, possibly?
A
Yeah, we have a couple of customers complaining about it. It's not— it's not natural that a wrong executable will not result in a probe failure.
H
Yeah, I've run into some odd things with the tests — I wonder if I should just document them or actually try to make changes. Like, the node conformance test will fail if your disk is too large, and fail if it's too small. So it took me a minute.