From YouTube: Kubernetes SIG Node CI 20230208
Description
SIG Node CI weekly meeting. Agenda and notes: https://docs.google.com/document/d/1fb-ugvgdSVIkkuJ388_nhp2pBTy_4HEVg5848Xy7n5U/edit#heading=h.2v8vzknys4nk
GMT20230301-180259_Recording_1920x1120.mp4
A: Hello, everybody, this is the SIG Node CI meeting. It's the first day of March 2023. Welcome, everybody. I think we'll start with Mike. You have an agenda item.
B: Yes, so I created this issue a couple of weeks ago, and I plan on going back to it. Let's see, am I sharing my screen?
B: I will work on this. I think I mentioned we might want to start with the node conformance test. For context, for everyone that doesn't have it: I realized we're not testing the previous Kubernetes versions. For whatever reason we only test master, and we don't test our release branches, for example 1.24 or 1.25. I think it would be a good place to start here. We can start with something small, like a first test.
B: I want to hear your input on this, if anyone has any opinions or suggestions.
B: Great, I will post some updates at the next meeting. I think we canceled our next meeting, but if not, I will just send some PRs. If anyone wants to take reviews for this, I can start sending them that way.
A: Any updates about it? I think we discussed it last time, so maybe we just get to it when we discuss the triaged items.
A: Okay, any more agenda items?
A: I will move on to the board. We had skipped two weeks, so this board accumulated a lot of items. Let me try to make it wider. Is it still working for everybody? Am I still sharing my screen? Yeah, yeah.
A: So I would suggest we start with LGTM items. It may be easier to just look at what's already approved.
A: Okay: the pull job for eviction.
A: Okay, I think I looked at it and I asked David why we need it, but I think it's fine in general. It's good practice to have a CI job for every regular job, so we can run it on demand on PRs. The only thing I wanted, maybe as a process: maybe we need to start putting the corresponding periodic jobs in the comment here, so we have a mapping somehow.
A: Okay, this is done.
A: So, ideally, you want to be able to trigger these periodic jobs, because you want to test how you affect them. So you see, this is a CI job and it has always_run: false and optional: true, so you can trigger it from your PR, but it wouldn't be triggered automatically. So, ideally, you want to have one such CI job for every periodic job, and if you have that, ideally you also want to have a mapping between them.
A: So maybe above the name we have a comment saying which periodic job it corresponds to. Yeah, I don't want to block this PR, because I think we have plenty of those.
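The always_run / optional combination discussed here is a standard Prow presubmit setting. A minimal sketch of what such a job entry could look like, with the mapping comment proposed above; the job names and image are hypothetical, not the actual PR under discussion:

```yaml
presubmits:
  kubernetes/kubernetes:
    # Hypothetical job name for illustration only.
    - name: pull-kubernetes-node-e2e-eviction
      # Proposed convention: note the matching periodic job here
      # so the two stay mapped to each other.
      # Mirrors: ci-kubernetes-node-e2e-eviction
      always_run: false  # not started automatically on every PR
      optional: true     # result does not block merge; run on demand
      decorate: true
      spec:
        containers:
          - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest
            command: ["runner.sh"]
```

With these two fields, the job only runs when someone asks for it on the PR (for Prow, a `/test` comment), which is the "trigger the periodic job from your PR" behavior described above.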
B: I have a small question, probably unrelated to this, but more on the nature of CI, of periodic and PR jobs. Is there any way to have only one job and reuse some of the parts? Because it seems a little bit awkward to maintain jobs that do exactly the same thing, with just the difference that one runs periodically and the other one runs on demand.
A: Yeah, I think that's a question for SIG Testing. I was thinking more of jobs that run tests with a specific tag: maybe have a CI job that runs tests with a specific tag that doesn't exist yet, and then you can mark your test in your PR with this tag. So it will only run your test; it may be a very fast CI job specifically for your end-to-end test. That may be an interesting idea, because it would enable very fast iterations through a CI job.
A: Okay, all right, so this one is smooth. All right, this one: PSI? I don't know, PSI.
A: It seems to be LGTM'd. So you discussed it last time; you're just moving it to release-blocking, okay. And I just checked that the alert email is the proper one. So, yeah.
A: I'm on that; I'll just approve it. I don't know. Thanks.
A: That's true, it could merge while we've been looking at that. Okay, this one was legitimate; I think we discussed it. I don't think it's CI-group related, yeah. It's just a promotion.
A: Okay, and this one, yeah, it's for gRPC. I commented on that just this morning. Yeah, it's okay, so this is correct.
A: It needs deeper reviews; I'll move it into needs-approver.
A: The Windows one, sorry, it's taking some time; I will be done soon.
A: Okay, I will keep it in needs-triage. Let me put a priority on it.
A: Okay, this makes sense. So 1.5 just went end of life, and I think we found the issue that we had assigned: we used very old releases to test containerd with, so this totally makes sense, except...
A: So containerd 1.5 just went end of life, so this PR may not be completely up to date, because maybe 1.5 needs to be removed as well. But yeah, I don't need to review it.
A: The containerd lifecycle one: it's work in progress. Whose work is this? Yeah, yeah, just this job.
A: Yeah, I think it's just archived, because SIG Testing is doing something.
A: Okay, so this in-place update is outside the scope of this group, but it definitely needs a review here.
A: Oh, this is a new way to test, so it may be an interesting topic, by the way. Let me take this PR; I'll put it in this document and we'll come back to that.
A: I should have thought about it. It's very nice. Just put it on hold; we'll come back to it.
A: It's serial because of other reasons, like, do you restart the kubelet or something? Yes.
A: Yay, fewer legacy providers, but it's not related to us.
A: Yeah, this one I also filed. So when we looked at promoting gRPC probes into GA, we found that liveness probes had been failing for days.
A: So if you look at different pods, different tests, like HTTP, they are failing exactly at probing. It feels like the logic of handling probes is not working properly, at least in some tests, so I filed this. If anybody is interested in looking into probe failures, please take a look at this one.
A: At first I thought it was Windows-related, because I clicked on a few examples and they all were Windows. I was like, oh yeah, I found the reason, it's a good thing to blame. But then we looked, I think with James...
A: Okay, so yeah, I looked at a couple here, and some claim that they started failing because they said the pod sandbox has changed. So I think maybe some other tests are affecting this one, and some failures are unexpected: the liveness probe says it failed, but then the pod just proceeds being normal and continues, never being restarted.
A: So yeah, yeah, I see. Take a look as well if you want.
A: And yeah, let me get into this one; I promised to get back to it. So in the sidecar working group we've been looking at how to implement good pod lifecycle tests, and we've been struggling to express them nicely. How do you express the fact that you need one container to start, and then it needs to finish before the next container starts?
A: So the idea was that if we construct the pod in such a way that you mount a host file and dump all logs from this pod's containers into this log file, then you can read this log file and implement interesting commands, like "starts before" and "exits before", based on parsing this log file. So basically you look in the log file for specific substrings and assert things like: this substring never happens after that substring. Stuff like that.
A: So this way you can test complex logic and sequencing of pod startup, or container startup and container termination. I think it's a very nice idea and it's very easy to express, like, to write tests. The only limitation is that it's a single-node test: the test needs to run on the same node as the kubelet, which is a limitation, but it's typically how we run many CI jobs for node end-to-end, so it shouldn't be a problem. If you're interested to learn more, please review this PR; I think it's pretty neat. I hope you will look at it, and maybe we'll write more tests like that and we'll finally cover pod lifecycle better. I think David Porter is also working on covering pod lifecycle, so we need to combine forces and do it together.
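The "starts before / exits before" assertions described above reduce to comparing substring positions in the shared log file. A minimal stand-alone sketch under that assumption; the log line format and helper names are invented for illustration, not taken from the PR:

```go
package main

import (
	"fmt"
	"strings"
)

// Every container in the pod appends "<name>: started" / "<name>: exited"
// lines to one shared host-mounted file; ordering is then asserted by
// comparing the positions of those substrings in the file.

// startsBefore reports whether container a started before container b.
func startsBefore(log, a, b string) bool {
	ia := strings.Index(log, a+": started")
	ib := strings.Index(log, b+": started")
	return ia >= 0 && ib >= 0 && ia < ib
}

// exitsBefore reports whether container a exited before container b started.
func exitsBefore(log, a, b string) bool {
	ia := strings.Index(log, a+": exited")
	ib := strings.Index(log, b+": started")
	return ia >= 0 && ib >= 0 && ia < ib
}

func main() {
	// What the shared log file might contain after the pod ran.
	log := strings.Join([]string{
		"init: started",
		"init: exited",
		"sidecar: started",
		"main: started",
	}, "\n")

	fmt.Println(startsBefore(log, "init", "sidecar")) // init started first
	fmt.Println(exitsBefore(log, "init", "main"))     // init finished before main began
}
```

Because the assertions only read one file on the node, the test has to run on the same node as the kubelet, which is the single-node limitation mentioned above.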
A: Okay, we triaged everything here. I think we have 24 minutes left. I wanted to go into bug triage this week and not go into the to-do issues, because I want to understand how we're doing in terms of bugs which are marked as critical-urgent or important-soon. Searching here.
A: Okay, and it's already marked, so let's try to use this one, since it's marked here already.
A: I mentioned Tim Hockin on this issue, because we've been looking at type sharing across multiple types. Type sharing is when the same type is used in multiple places, and a side effect of this type reuse is that when you add a field for one place, it has to be supported in another place where it may not make sense. That was the case when we added gRPC probes: we added gRPC for probes, but we didn't want to add gRPC support for lifecycle hooks.
A: So we need to separate the types, like fork them. Forking types is backward compatible from an API perspective, but it's not backward compatible from a client-go perspective, from the client-go library, and that causes a lot of grief: many people have trouble updating client-go. Since we don't promise backward compatibility there, we sometimes accept these changes, but sometimes it's just too big of a change, so we don't do it.
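The probe/hook fork described above can be illustrated with simplified types. These are not the real Kubernetes API definitions, just a sketch of the shape of the problem and the fix:

```go
package main

import "fmt"

// Before the fork, probes and lifecycle hooks shared one handler type,
// so a field added for probes would leak into hooks too.

type ExecAction struct{ Command []string }
type GRPCAction struct{ Port int32 }

// After the fork: probes get the new field...
type ProbeHandler struct {
	Exec *ExecAction
	GRPC *GRPCAction // added only here, for gRPC probes
}

// ...while lifecycle hooks keep the old shape, with no gRPC field.
type LifecycleHandler struct {
	Exec *ExecAction
}

func main() {
	p := ProbeHandler{GRPC: &GRPCAction{Port: 9090}}
	h := LifecycleHandler{Exec: &ExecAction{Command: []string{"sh", "-c", "true"}}}
	fmt.Println(p.GRPC.Port, len(h.Exec.Command))
}
```

The serialized objects are unchanged for anyone not using the new field, which is why the fork is compatible on the wire; but client-go callers that named the old shared type have to update their code, which is the source-level breakage discussed above.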
A: Okay: StatefulSet deployment pod going into Unknown state.
A: It was also triaged, but I don't see... only one of them is assigned to somebody.
A: I think so. I think we said that it was a regression.
A: I think, yeah, this is important-soon, because it's a regression, and I thought that we were promoting CRI stats, but we may not be in this release.
A: Because it's about metrics. I think it's about CPU being too high when...
A: Yeah, something marked as important-soon, but it survives five releases.
A: And enabling the feature gate: it's a beta feature gate, I believe.
A: Yeah, I think all we will be able to do in this meeting is to go through these issues and check if they deserve to be in this release. If you think one does, speak up; otherwise I will just go through these issues and check whether they need to be in this release.
A: I will close it, yeah. We had a lot of test failures. We had a CNI problem before, I think: when CNI was promoted to 1.0 it was incompatible with something, so it failed on a few tests. But we've been blaming all the other tests for being a CNI problem, when, in fact, CNI is just a message when containerd has just started: there is a bunch of "CNI not initialized" messages, but then it gets into the initialized state. So I would assume it's...
A: Yeah, it still feels important; we need to discuss it, some main signal is missing. Maybe somebody will pick it up: the file system and pod logs.
A: A zero-byte file, which restarts.
F: CC me, but it's unlikely it will be fixed soon; it waited for five years. But anyway, yes.
A: Yeah, I don't quite understand what the issue is; I need to look deeper, but it has 32 likes. Typically that's an indication that something is going wrong.
A: Someone commented that somebody prioritized it at triage, and with this description...
A: Are we out of time? I will finish the three remaining tasks in the inbox asynchronously, and I will try to triage all the bugs today.