From YouTube: Kubernetes SIG Node 20210812
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Hello, it's August 12th, 2021, the SIG Node CI subgroup meeting, and today we have a few items on the agenda. Let me share my screen.

B: Sounds good. The first item is from me: let's punt that to the end, because I would like to actually triage things. I think it was just the first thing that made it onto the agenda.

A: Okay, Mike wants to discuss something, but then you said it: maybe it's merged.

C: No, it's not merged, but we already talked about it. No, not really. No, no, no need to talk about it anymore.
A: Yeah, we need a review for that. Anyway, we will get the reviewers soon. Next: the new contention test failing.

B: So I guess I can talk a little bit about that one, and that is why Imran is here. Basically, I was looking at Testgrid and I discovered that the node-kubelet-features-master tests were failing.

B: The reason they were failing is that these new NodeFeature lock contention tests were failing on that job for some reason, and that was a little puzzling to me, because I wasn't sure why slow, disruptive tests were running as part of that suite; I felt like they should have been excluded to begin with. The other thing is that the job that is running those tests only runs that one test and nothing else.
B: So I thought all of this was a little bit weird and I figured we should discuss it today. The issues are closed because I just reverted the addition of the tests; it was breaking a release-informing job, so let's figure it out first before we have this merged. I was also not sure why it wasn't showing up in the presubmit CI.

B: So anyway, the easiest thing for me to do was to click the button to revert it and not do any further investigation. This is not me saying we don't want these tests; this was just me making the dashboard go green in the laziest way possible. So I think we should talk about this: what should we do about this test?

B: Given that it's a slow and disruptive test, I don't think it makes any sense for it to be running in the standard kubelet features suite, so I'm not sure why it got picked up there.
E: So, just before that: I mentioned one of the dashboards which was showing green runs. What was that about? That dashboard was showing green, while the kubelet master tests were showing red.

B: The comment about that one, yeah, there we go, the one that says it's got green runs. If you click on the one that says it has green runs and you look at it, it is a job where literally the only thing it's doing is running that one test and nothing else. So, yeah.
A: That's what we discussed with rtm before: we wanted to have this test, and we decided, similar to some other features, that it is so disruptive that it warranted its own tab.
B: Yeah, so I'm wondering... clearly it is serial and disruptive. The thing that we're seeing is that when we run it against the regular feature suite, where I don't think it should be, it times out, which suggests to me that the reason it's passing here is probably that there's nothing else competing with it on that cluster. But I don't actually know. And because it's so related to this, I added another agenda item: we are constantly getting alerts that we are exhausting our available project resources in node.

B: We have a lot of single-feature test jobs right now, and I'm wondering if maybe we should consolidate some of them, because do we really need to spin up, you know, a node to run just this one test?

A: The pendulum is swinging back, right? When we had all the tests together, we suffered from flakes all the time, so we decided we needed to start splitting them apart.
A: Yeah, Serial may be a good alternative here, because, I think, specifically for this test, it doesn't kill the node, for instance. With the graceful termination thing it initiates a node shutdown; I mean, it emulates it, but the kubelet thinks that it goes into node shutdown. So I'm not sure how the kubelet will behave after that, whether it will be stable enough. So maybe it deserves its own tab.
B: And so you can see this one's running every four hours or something like that, and every time that happens we spin up some nodes, run the tests on them, and then spin it all down. When we're trying to decide what tests to run on them, we use what's called a test selector, and there are various ways that we can select these tests. Typically, SIG Node jobs will be running tests tagged sig-node in the square brackets, but that's not all we can select on. Sergey and I were talking about the serial tests, and serial tests must be run one after the other.
B: They can't be parallelized. So if they're disruptive, then they won't compete with other tests on the cluster, and that's why we have this whole tab. Yeah, that's the containerd one, where we run a bunch of tests serially and we retry them. We can see that sometimes some of these are flaking: they pass most of the time, but not necessarily all the time.

B: A slow test, I think, takes more than five minutes to run, or something like that, for the e2es, and there are various other tabs. You can see, if you look at the test names in that sort of grid on the left, there are some slow tests. All of these in this particular tab will be labeled Serial, because this is the serial suite: they will all run one after another, and most of them are tagged with a particular feature.
B: That feature is not GA, so we may want to be able to separate it out to test it in different configurations, and that gives us the opportunity to easily do so. That is your Testgrid update, in case anybody is wondering what it means to run a thing as a serial test versus not. You can read more about this in the end-to-end test documentation, and I'll link that in the agenda.
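(For context on the tags mentioned above: the selection labels live directly in the test descriptions, and jobs pick tests by matching patterns against those names. Below is a minimal sketch assuming Ginkgo-style Kubernetes e2e conventions; the spec name and the LockContention feature tag are hypothetical illustrations, not the actual test.)

```go
package e2enode

import "github.com/onsi/ginkgo"

// Hypothetical node e2e spec whose name carries the selection tags discussed
// above: "[Serial]" and "[Disruptive]" keep it out of the parallel feature
// jobs, while "[NodeFeature:LockContention]" ties it to a specific, non-GA
// feature so it can be focused or skipped independently.
var _ = ginkgo.Describe("[sig-node] Lock contention [Serial] [Disruptive] [NodeFeature:LockContention]", func() {
	ginkgo.It("should hold the kubelet lock exclusively", func() {
		// ... test body would go here ...
	})
})
```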
E: So, what's the... sorry, I was saying, what should be the course of action for this particular test? Because, I would say, it has been quite some time that this task has been lying around. We want to get this flag into the proper configuration rather than just duplicate it.
B: I think, basically, the issue was that we missed adding a Serial tag in the end-to-end tests, so they got added to the wrong suite, and since they are disruptive they failed every time. I think we want these in the serial suite and not in the regular feature suite.
B: It's not like we're physically moving a thing to a tab; the tabs are just views, right. I understand, yeah. So if you tag the test with Serial, which I think it should be because it's a disruptive test, then it will automatically get picked up by the serial jobs, and then we don't need this separate job.
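(To make the "automatically picked up" part concrete: jobs select tests by matching focus and skip expressions against the spec names, so once a name contains "[Serial]" the serial jobs match it and the parallel feature jobs exclude it, with no dedicated job needed. The patterns below are illustrative assumptions, not the real job configuration.)

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative selector patterns, not the actual job configuration.
	serialFocus := regexp.MustCompile(`\[Serial\]`)                         // what a serial job focuses on
	featureSkip := regexp.MustCompile(`\[Serial\]|\[Slow\]|\[Disruptive\]`) // what a parallel feature job skips

	// Hypothetical spec name carrying the tags discussed above.
	name := "[sig-node] Lock contention [Serial] [Disruptive] [NodeFeature:LockContention]"

	fmt.Println("matched by the serial job's focus:", serialFocus.MatchString(name))  // true
	fmt.Println("excluded by the feature job's skip:", featureSkip.MatchString(name)) // true
}
```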
F: An additional reason for a separate tab can be special requirements. For example, the huge pages test requires additional configuration of the operating system.

F: So we separate it, and I totally agree that, for example, at some stage we can just remove, say, the memory manager tab, or the CPU manager one; it should not require special treatment. I think, again, I'm not sure regarding the costs, for example, because the CPU manager and memory manager tabs run on top of instances with four CPUs, and if it's four CPUs it automatically comes with something like 16 gigabytes of memory.
F: The question is: if we now change the serial lane to run with four CPUs, and we remove the additional tabs for memory manager and CPU manager, will the situation be improved or not? Because, for example, we have the PR node-kubelet serial job that currently runs on each pull request, right? Because I don't remember that we ran it before, like the kubelet...

F: ...lane on each pull request, and now we are running it, like the complete serial one, and the same for containerd: we are now running it for each pull request.
B: I mean, we could also run it less frequently too. I don't know, I'm not the price-calculator person. The issue is less about that, I think; right now, I mean, quota and costs are related, right?

B: Right now we're constantly bumping up against our quota, so I think that suggests we probably want to remove some jobs or consolidate some jobs, because I think a bunch of these tests are running every four hours and some of them are running with eight nodes, and I don't think we need to do that. Probably, I mean, I can't say for sure, but it might be cheaper to slightly upsize some of the nodes in certain tests and remove a bunch of other ones, particularly if they're running with eight nodes. So, yeah.
F: And some additional questions, like for serial: we require GPUs to run the GPU tests. Again, I don't know whether it ends up being a more expensive instance or a less expensive one. So, for example, the GPU tests we can move to a separate lane because they request a GPU, and we can make that job manual. For example, if you just change something related to the GPU tests, you can manually request to run it, the same thing we have for the non-periodic memory manager job.
B: That falls under "we need different infrastructure for it, so we should have a separate job for it". So basically the question is: could it run on nodes as part of a standardized suite, or does it need special infrastructure?

B: So, for example, swap needs special infrastructure, because none of our tests run with swap enabled except for the swap tests. Whereas if we look at, say, the lock contention tests, for example, they don't need a special node or anything like that. They don't have to be running on their own; they could be running as part of a suite running on other nodes. It just might take a little bit longer, and that's fine.

B: There's no point in having its own tab and spinning up its own thing for just one test. So there's a bunch of tests like that, where we should see: does it need special infrastructure or does it not? If it doesn't, let's try to consolidate. So, for example, GPUs, that's special infrastructure; we don't normally run with those. Or if something needs cgroups v2, that's special; we don't normally run with cgroups v2.
A: I hope not. No? Okay, at least it is good that we separated it cleanly. Okay, so action items here. Imran, can you follow up on that? Add the Serial tag, make sure that it runs in serial, and then we can remove this extra job.

E: Sure. So, just to get it right: for the current test I need to add a Serial tag, that is, on the PR that was reverted I need to add a Serial tag, and the PR that got added in the test-infra repo for the job needs to be reverted, right? So that job shouldn't be there anymore.
A: Thank you, and sorry for the extra work. It's just, oh, test infrastructure.

B: This should have been caught earlier. We missed it, it's fine, we're fixing it, and luckily Sergey and I both have approver now, so we should be pretty responsive in terms of fixing it.
E: That is true. Okay, so one question: could this be of any significance? There were two separate PRs, one in the test-infra repo and one in the kubernetes/kubernetes repo, for this end-to-end test, and the e2e test PR got merged first, before the job, the test-infra one. Oh.

B: ...so the test-infra job didn't exist yet, so Prow wouldn't be running it, but because it wasn't marked Serial it got picked up by the existing features job, which is like: I see a NodeFeature, I'm going to run this. And it was failing. So.
E: I'm not sure how much time I will have after this, but if I do, I'll definitely reach out to you, Lana, regarding this.

D: Yeah, yeah, I can help with this. So, you said, again, one thing: are we talking about the same cleanup, like increasing the interval and consolidating the tabs?
B: That's the next item on the agenda. This one is even easier than that. If we go to the Testgrid tab, you can see we have, like, a billion jobs here, and a bunch of them are labeled "pr". We want to just move everything that's labeled "pr" into a sig-node-presubmits tab, because it's a little bit weird that we're mixing, in Testgrid, a bunch of presubmit tabs, which are...
B: Yeah, with periodics they're all in the same place. So we were talking about, just because there are like 30 jobs or something showing up in sig-node-kubelet, that we should move those into a sig-node-presubmits tab, and then maybe also look at some of the other existing tabs that we have, because we have containerd and containerd.io and a bunch of other stuff, and it's just like: do we really need that separately or do we not? Maybe we can get away without it. So.

B: ...the number of jobs there. Creating a tab is pretty straightforward, I believe; it's basically just renaming, and I think the CI will catch it if you make a mistake. So.
B: Yes, we want to do some cleanup so that we don't have all of the... Let me just pull up the SIG Node notes to make sure that this is all written down. So we want to move, and I'm sorry, my typing is super slow; I don't know what happened, but since about a week ago Google Docs has just gotten ridiculously slow. So we want to move the PR jobs out of the sig-node-kubelet tab to a new sig-node-presubmits tab.
B: Cool, great. And...
B: It's literally going letter by letter; there's a delay. I think there's some sort of JavaScript regression. Anyway, it keeps going, it's just super slow. Okay, I think it should all be there now. Oh, it's quite amazing and I don't know how to fix it.

B: There we go, it's there, it's there. Oh, I spelled something wrong. Great, that's what I get.

A: Is this blocking, as a separate tab?

B: All right, so.
B: Yeah, so you can see the spikes happen, and I think the granularity of this chart is such that you can't necessarily see the spikes easily. I put the last 90 days on there because I think that was all of the history you could see, but basically, since the freeze unfroze, we've had pretty high utilization, and I think there's some threshold where, whenever we get over it, we get an alert. I know that definitely happened during code thaw, when we had a big pileup of PRs, and that happened.

B: I think it happened yesterday again. So, yeah, there's one of them where we basically ran out of resources. It seems like our average utilization has definitely gone up over the last 90 days, particularly since the test freeze thawed. So it's probably reasonable for someone to... this is a small project.
A: Yeah, I think it coincides with the time when we started looking at serial, actually. So I think serial may have...

A: No, we haven't put it into PR validation, so now we can run it as part of PR validation.
B: This graph goes all the way back to, like, May, I think, but I'm certain that we got that fixed in June or July, and the spike has been basically just during the last two weeks of code freeze, but we were spending the whole month working on the serial tests. So you can see, I mean, the big thundering-herd thing on the fourth of August, that sort of second big spike there, where, you know, we had to...

B: We had all the PRs come free at once when code freeze unfroze. But yeah, basically it's just been the last week that we've really spiked. So.
B: So I think that, in addition to just cleaning up the dashboard, and this is a very complementary item to the last one, we've got a bunch of dashboards to clean up, and then once you're cleaning up the dashboards you'll probably notice when there's overlap between test suites and whatnot, and we might want to also try to consolidate some of the actual test-infra jobs.

B: I don't know, but I bet SIG Testing could tell you, or SIG Infra, since I think they're splitting.
B: Yeah, I mean, surely, like spinning up... if it's the number of cores, for example: do we need, you know, five jobs where you've got eight machines that all have four cores, or could we replace that with one job that has maybe slightly larger machines? If it's just a core count, then I think there are probably really easy savings to pick up, just because we have so many jobs, and every job does come with CPUs; it's not like they're just using one. So.

F: So sometimes we don't really need to increase the number of CPUs, but we need to increase the amount of memory; yet we still have to increase the number of CPUs, because we don't have any choice.
A: It would be logical to have it separated, but yeah, I'll find out.

F: I just excluded the memory manager test from it. I can take a look at it as a future investigation; I don't have a problem with that. So.
A: So, and since we noticed that the graceful termination test is failing, we need to create an issue for that; it is not supposed to fail. I think after Mike added email to all the kubelet jobs we will start receiving emails, right? So maybe we did receive an email for this test because it was failing for a while, but going forward it will be discovered more easily.
B: Okay, so does that bring us to the new bug triage board? I'm excited.

B: Well, no, no, it's... it was filed against SIG Node because the test is named sig-node, but then we reassigned it to SIG Windows. So we should at least... Tim set a comment there; we might be able to close this one. Do you want to go and look at Tim's comment?
B: I am now looking at that one. For some reason that one particular test is failing on the swap job, and I suspect it's because, I feel like, there was something wrong with the Fedora images on some of the CRI-O tests previously, and when we fixed that, that test stopped failing. Peter, is Peter on the call?

B: I don't know, maybe it should be, if it needs a GPU. If you click on the Testgrid tab... I was like, well, that's failing all the time, so maybe we should not do that. I haven't been able to find any successful runs of that test.
A: Now, bug triage. Sorry, there are only about 10 minutes left. I will stop sharing. I need to stop recording, right?

B: Not yet, because the bug triage board is new and I want it to be on the recording. So let me... did I pull it up? Maybe.
B: Great, so this is a new project board that I made and that, I forget his full name, but n4j was helping me with, because I mentioned this was a thing that I wanted to try out, and it was needed. It's great.

B: Look, new things. Oh.
B: So, basically, I tried to sort these based on their status, and the hope is that this board will maybe help us ensure that we're looking at all of the new incoming bugs, even if, I guess, they kind of sit forever again, which is not the greatest thing. It sort of sucks when people file new bugs and they don't get a response, and this could help us as a group go through all of those and ensure that at least we're taking a quick look at them.

B: You know, the pull requests are mostly steady state, and I think that people are kind of picking them up as needed when they need to be looked at, but I think that's not happening with bugs. So I think it might be good to actually triage bugs, and so I sorted this board accordingly; bugs don't quite have the same statuses as pull requests might. So.
B: Specifically, I wanted to call out the high-priority bugs, so anything that's critical-urgent or important-soon and potentially might need backports made. Then there's everything that's been triaged, so all of these things in theory should have triage/accepted on them, yep. And then there's anything where we're waiting on the person who filed it.

B: If it needs information or is not reproducible, that kind of thing, I've thrown it in this column; that's sort of a waiting-on-author, bug edition. But then everything that hasn't been triaged yet has kind of been chilling.
B: Anything that doesn't have the triage/accepted label is chilling in this column, and so it might be good to start going through these during the SIG Node meetings, the CI meetings in particular, because that way we'll ensure that we can dole out the bugs and everybody can take a quick look at them, and we can make sure that they're all accurately filed. So, are folks interested in taking a look at a few of these right now to get them going?
B: Let's look at a few of these, and then I'm hoping that we can do this on an ongoing basis within the CI subgroup. That way, in a similar way to triaging incoming issues on the test board, we'll also be able to do that for bugs in kubernetes/kubernetes and other repos that people happen to file bugs against.
B: Oh, is this a... is this a support request? This is kind of...

B: This doesn't sound like a bug to me; this sounds like a feature.

B: I feel like this is kind of code organization too, but I'm not sure what the label is for that, and I think we can probably triage-accept this one.
B: "Kubelet fails to start if the dynamic config feature gate is enabled via the kubelet configuration file." Oh, yes, I think I took a look at this one the other day, and somebody linked it on the channel; there's a PR up for this, which basically moves the validation a little bit later in the startup, so that it can actually read things out of the config file.
B: Okay, then let me triage-accept this one, and then I guess somebody needs to look at the PR, but that's tracked elsewhere.
A: It's mostly cAdvisor things.
B: So I guess they upgraded kube and then could not access cgroups from inside their container on Flatcar. It sounds like a capabilities thing or something like that, maybe; I don't know if the defaults changed between 1.19 and 1.20. This was so long ago that it's like...
E: I'm not capable of looking at that one, but I'll talk to the concerned person to identify what the issue could be.
B: Great, well, I'm glad. So I guess we've got a bunch of bugs and we don't have a lot of time left. Do we want to do this as kind of a regular thing going forward, on a weekly basis? Do we just want to try to get this backlog down a little bit each time, maybe synchronously or asynchronously? Because I'm sure the bugs are...

A: Better and better; like, quality increases every release.