From YouTube: Kubernetes SIG Node 20210125
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Yeah, it's a work in progress, but this affects... I think this somehow affects how we test.
A: Yeah, I guess it's important: what does happen when ephemeral containers are disabled? I think it will be less important once we GA the feature, but for now I think we can review it at least, so I'll move it to...
A: Okay, "update pause container image". I don't think it's related to us at all, so I'll archive it. "Use recently promoted images": I think it's related to this one; that's what we looked at before. So it's about image promotion, but I'm not sure what the context is.
A: If anybody knows the context, please speak up.
B: The pause images one is a little bit related to SIG Node, because it's basically to do some Windows enablement, which is node related.
A: Yeah, I just don't want to pollute this board, because this is for... yeah.
A: This is also our group. This is one of the tests we're failing, and there is a fix enabling alpha APIs for these tests, so this is needed.
A: Okay. And I'm not sure about this one: "explicitly disable admission plugin", is that related to us? I don't think it's related to our group at all, but they changed one of our jobs by including this admission-plugin register flag.
A: And the CPU manager tests, I think that one is for us, yeah.
C: Yeah, I opened two pull requests, because we still have a lot of errors under the serial lane. One of the reasons it failed is that the CPU manager test fell over in the middle and did not clear the state file. Then, once the kubelet restarted, the CPU manager state file was still present, but its state was not the same as the actual state of the system after the kubelet restart, so all the tests after it just failed on stuff like this.
C: The fix is: in Ginkgo you have AfterEach and things like that, so you can clean everything up after the test runs; even if something fails in the middle of a test, the AfterEach will run anyway. So I just moved all the cleanup under the AfterEach, and it should solve the problem. Again, I'm not sure it will solve all the serial-lane problems, but at least let's start with it, because it's definitely a problem.
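
(For illustration only: a minimal sketch of the AfterEach pattern being described, assuming Ginkgo v1 as used by the Kubernetes node e2e suite at the time. The state-file path and test names here are illustrative, not the actual test code.)

```go
package e2enode

import (
	"os"

	"github.com/onsi/ginkgo"
)

// Illustrative path of the kubelet's CPU manager checkpoint file.
const cpuManagerStateFile = "/var/lib/kubelet/cpu_manager_state"

var _ = ginkgo.Describe("CPU Manager [Serial]", func() {
	// AfterEach runs after every It block, whether it passed or failed,
	// so a test that dies midway can no longer leave a stale state file
	// behind to confuse the kubelet on its next restart.
	ginkgo.AfterEach(func() {
		os.Remove(cpuManagerStateFile)
		// ...restart the kubelet with the default CPU manager policy here...
	})

	ginkgo.It("assigns exclusive CPUs to a guaranteed pod", func() {
		// test body; no inline cleanup at the end anymore
	})
})
```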
C: Yeah, it's working, yeah. And the second one is to fix a wrong assumption in the CPU manager tests regarding the CPU topology: they assume that you have more than one CPU on the machine, and that's not true for the serial lane, because we run it on a machine with a single CPU. Sorry.
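
(Again only a sketch of the shape of that second fix. The helper name is made up, and the real tests would look at the node's advertised CPU capacity rather than runtime.NumCPU.)

```go
package e2enode

import (
	"runtime"

	"github.com/onsi/ginkgo"
)

// skipIfSingleCPU guards assertions that implicitly assume a multi-CPU
// topology; the serial lane runs on single-CPU machines, where such
// assertions can never pass.
func skipIfSingleCPU() {
	if runtime.NumCPU() < 2 {
		ginkgo.Skip("test requires a machine with more than one CPU")
	}
}
```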
A: I think so. Next I wanted to just take a look at the issues that are currently in progress and double-check that we're actually working on them. So.
A: Are you supposed to be working on that? No, exactly.
D: So the first two are somewhat related. I'm behind on life, so I'm behind on those, but basically I merged a change that would have hopefully fixed the node conformance tests, basically making them skip the branch that was running the wrong set of tests. But now there's a different issue: one of the other tags, which I don't fully understand in the Prow test yet. I have to dig into it.
D: Something is not right: it's not able to talk to the master cluster. So I'm just assuming that there's a tag doing something that was maybe intended for when we were running in Docker, and so there might be some mixed mode that is broken there. I'm currently looking into that.
D: Yeah, so basically we made a change in an attempt to fix that part, but now something else isn't working. So.
C: It's me, yeah. Oh.
C: Yeah, it's something different: it's written in English letters, but it has a Russian meaning.
C: Yeah, exactly, it's regarding the serial lane, and like I said before, currently the problem is...
A: I think this one has a PR submitted for it, right?
A: Okay, so work on that. And "orphans" also has a PR for it; you're working on this.
A: Your issue here, the establish-test-plan one, this one, right?
A: Okay, anybody else want to move an issue? Anybody looking for an issue? We have a bunch of issues in to-do, like 13 of them.
A: Okay, I guess it's fine, and yeah. Now we have 16 PRs that are ready for review. Let's all take a look. Oh, this one is already LGTM'd; I will move all the LGTMs here. But yeah, please take a look at these PRs.
A: If you LGTM by tomorrow, we can ask for approvals before the meeting. Typically before the SIG Node meeting people are quite active approving things.
A: Okay, let's go back to the agenda. And... you're up.
F: Yeah, so this agenda item is related to the issue about establishing a test plan for containerd. I've seen on Testgrid that about half of the jobs are failing, and we are testing them on older versions of containerd and older versions of Kubernetes.
F: It's not a document, it's a comment on the issue.
F: Yeah, just the one above; it is this summary of the containerd jobs that we are running, yeah. So basically, if you look at these jobs, we are running on Kubernetes versions 1.15, 1.16 and then directly on master; similarly containerd 1.2, 1.3, and now containerd 1.4 is released and 1.5 is in beta.
F: So I mean, who will tell me which versions, especially for the conformance and the Kubernetes e2e jobs, are important to us?
F: So yeah, we are running them this way, but they have minor versions as well, like 1.5.3 or something, yeah.
E: Not 1.2.0; that would be my intuition here.
E: I still have the question of: do we actually even own these jobs? I'm still confused by the sort of mishmash of, you know, containerd people and Kubernetes/SIG Node people who seem to sort of come and go to create these jobs, and they're... I'm not sure that we have any code that we sort of own that runs these.
B: So I quote-unquote own them. But given that we're deprecating dockershim, these are going to be our only conformance tests on node, so we probably at least have some vested interest in them testing the correct things. And, interestingly, we're running into some CRI race-condition-y things, and Dims said: well, are they failing in containerd?
B: And I'm like, I don't know. And if it turns out that the containerd tests are just so old that they're not even testing where we would see the affected versions, that's a problem. So yeah.
A: More questions: do we need to test... like, some tests are continuously testing that things keep working fine with different versions of Kubernetes, yeah.
E: ...out the ones that are master versus containerd, or X version versus containerd. Then maybe we need to split these out into our own specific test jobs and say: look, these are the test jobs that we own that are node, and they test this stuff; versus these are the jobs that containerd happens to own, and we ignore them for the most part, and they just happen to show up. I mean, they should probably even have their own dashboard at that point.
F: There is a different dashboard that containerd owns, I think, and they have two jobs: one for building containerd and one for running the Kubernetes e2es. So this whole board, I think, is ours. And do we own, especially, the top three jobs, like e2e node conformance and node features? And on that point, one thing as well:
F: If you go to the containerd repo or the Kubernetes repo, there is no version-compatibility matrix; like, which versions of Kubernetes each containerd version supports, or how many old containerd versions the current master supports. If you go to the CRI-O repo, you will see a compatibility matrix between Kubernetes and CRI-O versions, so that is missing for containerd.
A: I don't remember exactly, but there is a policy; a Kubernetes policy, no containerd policy. Okay, maybe I can dig it up.
A: I think Derek is the most active, but all of them are maintainers.
F: Okay, one more issue on this: if you scroll down, we have build-containerd jobs that are just building containerd; we are building containerd for four releases. I mean, is building containerd and testing it our job? I don't think so. So should we have them in our... I mean, here? Yeah.
F: But they look like our dashboard; they make our dashboard red sometimes, and it has been red for a long time now.
A: I think we can keep all the tests that test Kubernetes master. I think our tests need to live against Kubernetes master, not in the containerd GitHub, and this way we can tell that master works with all the runtimes we support; that will be our support statement. And all the building of containerd things will stay in containerd, and maybe we even get those out of SIG Node completely.
A: Yeah. And oh, there are two containerd ones, "containerd.io" and "containerd".
A: I think we need to start moving some of these tests into sig-node critical and, ideally, run all the master tests on containerd and move them into critical.
F: And about the version policy: should I file a separate issue for that?
A: We can discuss it in that issue. I mean, right now 1.2 is already out of support, even out of extended support, so I think we can make a statement that Kubernetes master needs to support two versions back, like all supported containerd versions.
A: Does anybody know the official policy? Because I assume the policy is all supported versions of containerd, but I might be wrong.
A: Jorge was asking whether we need special accounts for people actually working on fixing flakes, and I wanted to check with everybody here. If you feel the need for those accounts, we can start asking for them; if you don't, we just need to close this issue. So what was the status? I mean, I work for Google, so I have some credits for GCP and don't need special accounts, but I'm happy to hear more from you.
C: For the community it could be wonderful. Again, I'm on my own here; I found ways, like inside Red Hat, to get some environments to test flaky tests. But if someone doesn't have such an option, it could be great to have some community accounts that can be used. Again, there's the question of how it would be limited and who would control it.
A: And can you give me some sense of how much we need? Is it that from time to time you just want to grab an account from a pool of accounts and run an e2e test, or do you want to create it and troubleshoot? What's the typical scenario, how...
C: ...much? Probably the typical scenario is: you have a flaky test and you want to run it locally, where you can access the machines, the cluster, and the runtime, to check what the real situation in the environment is. And it should probably not take more than one day; you know, at least for one day or something it would work, I believe.
C: An additional issue, for example, that I've had: sometimes your pull request will fail for some reason under CI, and you don't have any idea why it happens.
C: For example, I had some pull requests regarding the CPU manager that passed for me locally, when I tested under my environment, but did not pass under CI, because CI is still using Docker, dockershim, as the runtime, while I used containerd plus CRI-O. So yeah, that's also additional value for such an environment.
E: I think he dropped. The message in the chat just says, you know: please look at the PRs, or do some investigation, and let's get the PRs deflaked.
A: Okay, so yeah, I think we need to look at all the 16 PRs that are currently active, and that will, I think, resolve quite a few of the issues currently in the in-progress list. So maybe next time we'll have more issues in to-do, and we keep making everything greener.
E: I would like to offer commentary on this "orphans" job in particular. Besides being an awful name, what we need to do with the tests in that particular group is move them to some other group, basically, because they're not supposed to be there. You know, in the history of the test organization, those are just basically miscellaneous tests that weren't put into a group; whenever somebody adds a new test and it doesn't fall into one of the other groups...
E: ...that's where it falls. And, you know, we need to try to move them around; I've moved a couple of them at times based on their, you know, non-flakiness, but it's difficult. So.
A: With that, we will start the triage portion of our meeting. And Elana, I wanted to suggest we triage all the issues for SIG Node. So if you...
B: Yeah, I guess we can get into, you know, what we want to do with things. So maybe just for the group: how many people were on last week's full SIG Node meeting? Did people get to see us chatting about that topic there?
B: Okay, it sounds like most people were there. I just don't want to rehash everything if people were there and they're like, well, you don't need to tell me all this again. So let's see... and I am now co-host, so I can share my screen. Cool. What I'm going to do is share the board, and then I think what we should try to do is figure out sort of an ongoing strategy.
B: So let me show you what I've got; hopefully you can now see the board. We have a bit of a triage backlog in SIG Node, for various reasons. You know, we have a lot of activity in the SIG, we're one of the largest SIGs, and we're a little bit backed up in terms of PRs and issues. I think there are 500 outstanding issues, give or take, in k/k with a sig/node label on them.
B: There are something like 150 open PRs. So that's a lot, and we've been making, I think, good headway in terms of trying to drive that backlog down, but we need to actually have a process, I think, and we don't want it to just be predicated on one person.
B: And I think the board's actually been making pretty good progress since I put this together, so I'll just very quickly go over it again; I went over this in the regular SIG Node meeting, but I will go over it quickly here as well. I put together this triage board to help myself get sort of a lay of the land of all of the non-testing PRs in node.
B: So if I look at the sort of default query that I have to add cards to the board here: we have this query, which is basically, it's open, it's a PR, it has the sig/node label, but it doesn't have a testing label, because those go to the CI board. And so basically, a couple times a week or whatever, I will go through all of the new cards and put them on the board as needed.
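
(If it helps to picture it, a GitHub search of roughly this shape would express that default query; sig/node is the real label, while area/test stands in as a guess for whatever testing label routes things to the CI board.)

```
is:open is:pr label:sig/node -label:area/test
```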
B: So this one, for example, is work in progress, so I will throw that in waiting-on-author, because it's definitely not ready for review; somebody's still working on it. This one doesn't have any labels that would stop us from reviewing it, as far as I can tell right now, so I will throw it into the needs-reviewer column. Same with this one. This one's also work in progress, so you can kind of see how this goes; this is how I would typically go through the process. That one looks fine. And then this one keeps getting sig/node stuck on it because it's a big dependency update, so I've just been ignoring it, because I keep taking the sig/node label off of it and it keeps getting put back on. So.
B: Sure, I'll do that. And, I mean, there's been a bunch of discussion on this one that has been not particularly... it doesn't need us, so I'll just archive that. Great, okay, now it's gone forever. So this is kind of the state of the board; I have these columns set up, and there aren't really any rules. So I guess one of the outcomes from the meeting on Tuesday last week was, you know: who's going to own this board?
B: Can we write down everything that we need to get done? Can we write down guidelines for people to review, so that we can make more headway and it's not just me doing all of this? Unfortunately, GitHub projects don't have a lot of automation available, so a lot of this kind of does have to be manual, just because it's not smart enough to put things in columns based on labels, which I'm a little bit surprised by, but that's kind of the state of things.
B: So there's one column on here that's automated, which is the done column, and since we've started this there are now 45 things in done, which is pretty exciting. And presumably, you know, once a release is...
B: ...we can, you know, archive all the things in that column, so we don't have an infinite backlog of done things. But for now this is kind of everything that's done for 1.21, and there's some stuff that's closed, some stuff that's merged. So that one's all automated; everything else, like adding cards to the board, is manual, and moving them between columns is manual.
B: So, basically: anything in the needs-approver column has an LGTM but doesn't have an approval. Everything in needs-reviewer doesn't have any blocking labels on it; it's, in theory, ready for review. Waiting-on-author has some blocking labels, like work-in-progress, or it has a hold on it, or somebody gave feedback and said "please go fix this thing", and that's why it would be in that column. And triage has, like, some stuff: you know, does it have ok-to-test on it?
B: Does it have a release note? We have to go and do a first pass on it before we can even kind of pull it into the board.
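
(As a rough sketch, the column criteria just described map onto standard prow labels that can be queried directly; the label names below are the usual kubernetes/kubernetes ones, though the board itself is curated by hand.)

```
# needs-approver: has an LGTM but no approval yet
is:open is:pr label:sig/node label:lgtm -label:approved

# waiting-on-author: carries a blocking label
is:open is:pr label:sig/node label:do-not-merge/work-in-progress
is:open is:pr label:sig/node label:do-not-merge/hold

# still in triage: CI has not even been allowed to run
is:open is:pr label:sig/node label:needs-ok-to-test
```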
B: One of the things I've been wondering is: SIG Node has traditionally not really used the needs-triage label on PRs, so should we start triage-accepting things once we look at them, and use that before we put anything into needs-reviewer and so on? Like, should we be applying that label?
B: That's the back-of-my-mind question; there are many questions. So I guess that's kind of the intro, and I would welcome folks' feedback. You've seen the board: do you have any questions? Do you have any opinions as to how it should work?
B: Oh great, let me jump into that. So triage/accepted is a new label that was implemented project-wide, basically to allow SIGs to indicate that, having gone through triage, they've accepted this issue or PR. A lot of groups use it for issues: yes, this is a valid issue, and somebody can now look at it. So, you know, don't waste your time...
B: ...looking at issues that haven't been triaged. Some SIGs use it for both PRs and issues. So, for example, in SIG Instrumentation, anytime we have an incoming thing and it's fine, we'll stick a needs-triage, or we'll stick the triage/accepted label on there, to indicate that we've looked at it and it's legit.
B: A frustrating thing is when random people apply the triage/accepted label when they're not even a member of the SIG; that kind of throws it off a little bit, because I think anybody can do it. But in general, you can see we're not really using them here: all of these kind of have needs-triage on them, and a few of them have triage/accepted.
B: Yeah, because it would be good, for example, if at some point we could move to a point where... I know there is some additional tooling that we can run for automation on these project boards, and if we could, for example, manage this thing on a totally label-based basis, so we don't have to actually do any work moving these things around, that would be awesome.
A: More of a comment: I think sometimes what I'm struggling with is that I have a different approach for triaging, or reviewing, PRs. I sometimes go into all the PRs and comment on some of them and cc myself on some of them, and then I have a special filter in my inbox, so all the PRs that I've ever commented on go into a special folder, and that's the only folder I look at. So I typically don't look at PRs...
A: ...that I never explicitly opened myself or where nobody mentioned me. So this definitely will help feed my incoming stream: I can go to needs-reviewer and just fish things from there, and it's only 27 of them. So.
B: Yeah, that's sort of what I've been doing as well, because initially, you know, if you've got just a pile of 150 PRs and you have limited reviewer time, how do I know which ones to look at? And since we've moved to this, I know Renaud has given me the feedback that having the needs-approver column is super helpful, because he doesn't have to go and look at every single sig/node-labeled PR or do his own queries.
B: You know, he can just go through this column and kind of mash approve for the things that are ready. You can also filter on these boards; so, for example, if you just want to look at cleanup things, right, those tend to be a little bit easier to review than a bug fix or something like that, and that makes it relatively quick. So I'm like: oh okay, there are like nine things here that probably don't need a lot of mental energy to approve, that we can just get out of the way. So, the filtering capabilities of this board: I don't know if I went over this in the other meeting, but I find them super handy. For example, if I'm looking for low-hanging fruit in the needs-reviewer column, I can put in a size/XS query, and okay:
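
(That filter is just the size label that prow applies automatically based on the diff, so on the board it is a one-token query, something like:)

```
label:size/XS
```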
B: These ones are all maybe pretty easy to review, because they're all very small changes. So the filtering capabilities are, I think, super great and make it easier to go through these sorts of things. Basically, my only complaint with this thing is that I can't just say: needs-reviewer must have these labels, and waiting-on-author must have these labels, and GitHub will just move things for me. That is what I wish it would do, but it does not currently do.
A: So I guess, I mean, apart from doing what automation should do, like moving LGTM'd things to needs-approver, the only reason to have a meeting for triage is to actually look at issues that have the needs-triage label and apply the triage label. Everything else... I think all the columns are quite self-explanatory, and automation supposedly needs to handle that; but in the absence of automation we need to either have a rotation of people updating the board or, I don't know how to approach it.
B: A rotation, or people can just kind of do it ad hoc. I don't know if we even have a big enough pool of people to do a rotation; I'm totally fine with continuing to do it ad hoc. Basically any time I sit down to go and review things, I'll just go and make sure the board is relatively up to date. My biggest worry right now is this waiting-on-author column.
B: Sometimes, you know, things will become ready for review again and it's not obvious that they have. One thing that's really nice is if somebody puts a changes-requested type of review on a thing: then you can see the changes were requested, and when they dismiss that, it should in theory go away.
B: But yeah, I mean, for the most part a lot of these are just, you know, there are changes that are needed and people don't have... I also...
B: We could sort these columns automatically by last-modified time on the PR, but these are things that GitHub just can't do. Actually, I think Bob Killen was saying that GitHub has reached out to Kubernetes about feature improvements; I think they are inspired by how we, as a project, manage some things on GitHub. So I can try to give Bob feedback as well, and maybe we can get GitHub to do those things for us, because that would be nice.
B: Cool, okay. So it sounds like we're relatively comfortable with this board. Do people feel comfortable enough that they could go and jump into it and would know...? I think what I can do, because I know, Sergey, you put some cards at the top of the columns as a guide for what each column meant, is do that, and then also write some documentation in the community repo under sig-node.
B: Something that just basically documents, you know: here's our CI board, here's our sig-node board (I can maybe put in a stub, so you can fill in the CI stuff later), here's how to review things, and here's how the process works. I can send that to this group, and people can get a chance to review and provide feedback, see if it makes sense, lazy consensus, merge it maybe in a week or two.
B: Yeah, totally: GitHub filters for myself. I just try to do those filters and pick one or two PRs and kind of go through them. The typo-fix ones are really easy to review; the non-typo-fix ones take a lot more time. So I just kind of allocate an hour and get through whatever I can get through.
B: Cool, so I guess that's kind of all I had for that one. I can write something down in the meeting minutes that I have an action to go do this and sort of formalize it more, and then I'm hoping that other people will have the docs and will feel comfortable jumping into this board as well.
B: I am mentoring a group of people who want to become SIG Node reviewers right now, so one of the things I've been having them do is just sort of have at this triage column, because you don't need any particular knowledge of the code base to be able to move things out of here.
B: You know, it's just a matter of looking at the PR, seeing there's nothing too scary in the diff, maybe putting ok-to-test on it, maybe evaluating priority, and then throwing it... and it sounds like we should also be throwing the triage/accepted on things, and then, yeah, that's good. And now, you know, I can just kind of work through them as well. If we're in agreement on that one, that it needs triage, we can just kind of go through.
B: All of these that have already been in the needs-approver and needs-reviewer columns, we can just triage-accept. Anything that has an LGTM on it surely should have triage/accepted; anything in needs-reviewer we might want to double-check, but those are probably also fine.
A: Yeah, if you want to use the second half of this meeting to go through new issues that need triage, we can definitely do that. Yeah.
B: That would be great. So one thing that I've sort of been brushing under the rug has been the incoming issues; I'm just looking at PRs, and the reason is there are so many of them, and for the most part right now I'm just trying to focus on actual breaking issues, which I then assign to myself so I can keep track of them.
B: I would like to improve the overall health of issues, but I don't want to be too ambitious, because I don't have all the time in the world to work on this as much as I wish I did. So I'm thinking that maybe later in the release cycle, or going into the 1.22 release cycle, I can start to kind of whack away at the issue backlog. One of the things I'm hoping to maybe do is run a bug-squashing party.
B: That way we can get a big group of people all taking a day or two to just drive down the size of our bug backlog. And then, once we've gotten through all of that and we've said, okay, these issues are legitimate, these ones we've closed, these ones are support requests that don't go to us, and so on and so forth, then we might be able to add issues to this board as well for triage.
B: I don't know if this group is ready for that yet, because we've got a big backlog of node reviewers, but I don't know that all of them are active. Maybe once we've been doing this and we've got kind of a good active pool, we can start doing that: like, oh, this looks like something so-and-so would want to look at, and not just rely on the bot, because the people who are not active, you know, might get assigned and never look at it. So.
B: Yeah, the one thing that I've heard from a number of people is that this meeting time is really bad for them, because a lot of people tend to have conflicting work meetings on Mondays. So I don't know if we should consider moving this meeting.
A: Yeah, I sent a doodle before, when Victor was stepping out and I started hosting this meeting, and I think that was the most accepted time. But we can redo it.
B: Yeah, do you want to send that, Sergey?
A: Great, okay; we are thinking about the same agenda.
A: We are not extending the agenda to the product backlog yet.
A: Okay, I can send it to you again.