From YouTube: Kubernetes SIG Testing Meeting for 2022-02-23
D: I guess it's a bit late, you know, but welcome to the SIG Testing meeting. It's recorded and uploaded to YouTube. Please follow the CNCF code of conduct, which, in short: be excellent to each other. It looks like I'll be hosting today. If anyone wants to take notes, that would be helpful. But it looks like, for the most part, we don't have an agenda this time, so we can—
E: Well, let's figure — yeah, let's figure it out offline. But I'll see where they went, and then we can get YouTube admins to help us out with, like, mass uploads and things like that. So no worries — we'll get you, we'll get everybody squared away.
D: I think we still have some SIG CLI things in here.
D: Okay, yeah. So actually, after this meeting is over — today is the last day of writing your perf details at Google, and then it's writing feedback for other people, and I've been writing for a couple of promo candidates. So I need to finish my perf, which I have not really done, and then get cracking on writing feedback. So I'm going to be, like, fairly unavailable for the next week or so, because I need to do justice to those folks' promos.
D
So
I'm
probably
going
to
be
coming
back
around
to
this
sometime
next
week
for
the
most
part
and
tony,
and
I
are
also
trying
to
make
sure
that
we
finally
get
another
kind
of
police
out
the
door
now
that
we
hopefully
don't
have
any
known
outstanding
regressions.
D
Yeah,
it's
it's
been
quite
a
while
we've
had
like
a
steady
string
of
like
people
aren't
available
for
a
little
bit
or
like.
We
discovered
that
there's
another
regression
or
like
a
fix
for
a
regression
cause
a
different
regression
or
something
because
of
the
all
the
different,
more
esoteric
platforms,
they're
supporting
now
and
don't
have
ci4
so
we're
now
in
a
state
that
appears
to
be
releasable
since
maybe
about
a
week
or
two.
But
antonio
and
I
have
both
had
poor
availability.
We're
hoping
to
get
one
out
sometime
this
week.
E: I can also see if we can get somebody to write us a blog about it — okay, not you or Antonio, while you all do the work.
D: I think so. I think I can definitely come up with, like, you know, what projects have been going on. I'm really aware that some of the things that I felt less comfortable answering are later in the doc, with things like: how do you count membership? I'm a little hesitant to, like, unilaterally answer these, but I'm also not sure who all to go to. I think the answer is more like: we don't really pay heavy attention to that. I was about—
D: Yeah — is that acceptable? Okay. Because I think that's actually the case in our SIG: we don't really have anything where we say, like, oh, you can only discuss this if you're a SIG member. We just — if there's something where a decision ultimately needs to be reached, it's mostly going to come down to subproject owners.
D: So we pay attention to that. But I don't think we — we haven't made a big distinction about being a member or not.
A: Okay. So what big work did we highlight this year?
D: Oh, I will need to come up with a summary for it, but we definitely had some work on kind — quite a lot, cool — so I don't have, like, a short TL;DR at the moment.
D: We probably will want something on that. We'll want to reach out to the folks working on the Kubernetes e2e framework, because I know there's been a fair bit there, but similarly, I'm not sure what the short version is — but there's been some more work and some releases on that. kubetest2 we've had some work on, and there's definitely a lot going on in Prow — Cole might be able to talk more about that.
F: Yeah, I'll have to go back and see which of the Prow improvements are kind of specifically beneficial for the Kubernetes community. Yeah, I'll need to go through that separately. Then, I'm also kind of totally unavailable this week, though, unfortunately.
D: That made some progress in the past year, and everything else has been — almost everything else has been — unkept.
D: I'd say we mostly don't. In general — kind has a contributor guide now. I imagine we'll probably get to one with Prow, as we're starting to spin up docs there, but we won't — we don't have one yet. And all the other ones have the template file that points back to the, like, overall Kubernetes one.
D: There's some things that I'm — I'm not sure if they, like, strictly fall under our SIG. Like, there's been some work on the existing e2e framework that we're — that we use for most of our tests.
D: I think it mostly falls under the projects that I've listed already, but I know — I know there's more.
D: I don't believe so. No — we put out a call, but—
D: Yeah, and I had one short talk at a Google event, but it was not — it was not a Kubernetes event. We talked about some things in the project, but I don't — I don't think that counts.
D: Like, again, in kind we've maintained — we have a resources page and we maintain an index of the relevant talks. And, like, we haven't had more because there's not a whole lot to add; I think the existing talks are pretty good.
A: Okay, okay. So I see two KEPs from 2021.
A: One — Bazel, and kubetest — "continuously deploy k8s Prow". This is—
D: By Chao — that's a good one. We should call that out under the Prow work done.
F: I was just going to say, I'm not sure if we've officially changed the status of that KEP in any sense, but that work has certainly landed, and has been for a long time now. Yeah, that—
D: I don't recall any status updates going by, but I also wasn't as hands-on with that particular KEP.
D: Okay. We also did — we got the Bazel removal in Kubernetes to, like, GA this past year, and we made some progress on the kubetest-to-kubetest2 migration, but that's stalled at the moment, just because there's no one available to work on it. So that'll probably be one of the things we put under the, like, call to action.
E: Yeah — apologies — you said kubetest to kubetest2? Yes, thank you.
D: We — we have a KEP for that. That's amazing.
D: The owner moved on to other areas of work, so we're going to need to get someone to pick that back up.
D: Okay, that one's basically at beta, but it's not landed as stable. I will take — that's a bit of a long tail.
B: Chao, about that — okay. So I'll make this one the beta, and then this one went GA.
D: And then which one were you talking about, then? So we have two more. We have the kubetest-to-kubetest2 migration, which is at beta, I think, also in 1.21, and then we have the Kubernetes Bazel removal, which is GA as of 1.23.
D: It's a lot. There's a lot of CI jobs using kubetest, yeah, for sure.
D: 1.23 — the Bazel one went stable in 1.23. Oh, which one's Bazel — kubetest2 went GA, or the one at beta in 1.21, I believe.
D: Yeah, I think that one's a pretty big win. We have some similar effort going on, without a KEP, in test-infra and Prow, that's almost completed.
A: All right — OWNERS files. Do you have a list of all the OWNERS files for the project — for the SIG? We're doing pretty good about having them in sigs.yaml.
D: It's not any better there — they're going to, like, bare minimum, make sure that it keeps functioning — which, thank you for that — but it's just the bare minimum of attempting to review PRs. There's — there's no, like, active work or super-involved owners.
D: And then the test framework is good — the new test framework is good, and it's extra good because it's not, like, a critical dependency anywhere in the project. It's more of an offering for folks that, like, wanted to use what is our critical dependency — the in-tree framework — but we steered them away from that, because it's a bit of a mess, and we decided it wasn't worth the effort to try to extract it. So this is a clean — clean — alternative.
D: I'm not sure if we — if we're using this. This was an intern project that was supposed to do secret sync. I'm not sure — Cole, is that — do we wind up using this?
D: I don't know what that is. — It syncs secrets from Cloud Secret Manager to Kubernetes.
D: We use a separate tool for that, called kubernetes-external-secrets. Okay, so I think — I think that this didn't make it through to production. This was primarily Aaron and Aaron's intern.
D: It is, but it isn't, I'd say, since — again — it turns out it's just one approver, really, and the reviewers list is also pretty much inactive. I think — we'll see if Grant — Mushri — will be around. I haven't seen — I'm not going to try to—
D: That — and that I'm maybe the only active reviewer, question mark. And this — I mean, we do use this in CI, and the intention is to stop using kubetest. kubetest we formally deprecated, like, maybe two years ago, so we're only accepting bug fixes.
D: In theory, kubetest2 is much easier to maintain — it's much better organized and has cleaner separation of concerns. And it has better support for — if people want to add, like, integration with something else that we're not hosting upstream, it's very easy to do without, like, having to fork or having to do it upstream. So, for example, kOps has their own implementation that is maintained in the kOps tree. But even though it should be easier to maintain, we don't have maintainers.
D
So
okay
kind
is,
would
you
say,
kindness,
healthy?
It's
healthy,
we'd
like
to
be
better,
I'm
probably
going
to
need
to
emeritus
omwat
there
and
james
hasn't
been
very
active
either.
We
are
in
discussions
looking
for
folks
prepared
to
grow,
to
like
maintain
her
someday
well,
but
we're
okay.
In
the
meantime,
we
do
have
two
active
reviewers
and
approvers.
F: Yeah, there are some things that might be worth noting here, though. In particular, I think we still really need SIG K8s Infra to continue taking over some things — in particular, the monitoring stack.
F: We at Google can't really continue running the old monitoring stack that we had, because of some license changes with Grafana. So we've had to switch over to an alternative monitoring stack that's Google-internal — because it's on our projects that are at google.com. So we need SIG K8s Infra to continue taking over that kind of monitoring stack, in order to continue having things like the Boskos metrics dashboard.
D
But
I
can
tell
you
that,
like
with
both
of
those
hats
on
at
the
moment,
we
kind
of
stopped
trying
to
move
proud
things
because
of
the
cost
issue.
We
we're
we're
exceeding
our
budget
already
so
migrating
things
in
the
short
term
is
a
problem,
and
I
need
to
respond
to
some
threads
after
this
about
the
cost
mitigation
we're
working
on.
I'm
not
sure
when
we'll
have
that,
I'm
not
sure
when
that
will
have
taken
sufficient
effect,
that
we'll
be
comfortable
doing
large
migrations
again.
A
Sorry,
I
don't
know
if
it
was
cole
or
mushu
can
one
of
you
just
add
some
notes
here.
Please.
D: I would like to continue splitting out the most useful things. Like, it has been kind of useful as an incubator, and it was useful, when we weren't necessarily trying to make things as reusable outside the project, to just have everything together and interoperating — you can have configs cross-checked and things like that. We have moved towards a place where, like, instead of Prow's configuration being right alongside source code, we have a config directory, and everything that is Kubernetes-specific, configuration-wise, is scoped to there as much as possible. But it's kind of an expensive project to actually split out some of these things that have become so big and useful. So I'm — well — hesitant to ask someone to actually do that, because even if someone showed up and had all the time to do it, it's going to be disruptive to the other people working on the project, right? But over time, I think, particularly the, like, smaller and medium-sized projects—
D: I think the — I think the monorepo sprawl has been challenging.
D: —in a super great place, if that happens. I know there are other people that do, but, like, for example — like, Alvaro knew things quite well, with Red Hat, but I was reaching out to him recently, and it turns out his day job doesn't involve testing anymore. So we do have some other folks — like Peter, on this call, knows about Prow — but for the repo at large...
D: That said, there's also plenty of things where that's already kind of the case, and we've had to do kind of, like, archaeology on them. Things like the triage dashboard — a Googler wrote it and didn't leave a lot of docs or anything, and it was running fine; no one paid attention to it. And then, when we've had issues with those things — again, mostly Googlers have gone and, like, dived in on it. But we've come in from not knowing anything, and it's been okay, because it is all pretty open, as much as possible.
D: For the most part. There are a few things that use fixed projects — like, 5k-node scale testing just has a project that has the quota for that and is carefully scheduled — but almost everything else rents something from Boskos, and Boskos is also in charge of making sure that we don't leak resources: it goes through the project and deletes everything.
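The rent-and-reclaim cycle described here — jobs lease a clean project, return it dirty, and a janitor deletes everything and puts it back in the pool — can be sketched in miniature. This is a toy illustration of the pattern only, not Boskos's actual API or code; the `ResourcePool` class and its method names are hypothetical.

```python
import time


class ResourcePool:
    """Illustrative project-leasing pool in the spirit of Boskos:
    jobs acquire a clean project, use it, and release it back as
    'dirty'; a janitor sweep cleans dirty projects for reuse."""

    def __init__(self, projects, lease_seconds=3600):
        self.lease_seconds = lease_seconds
        self.state = {p: "free" for p in projects}  # free | leased | dirty
        self.leases = {}                            # project -> expiry time

    def acquire(self, now=None):
        """Hand out the first clean project, or None if the pool is dry."""
        now = time.time() if now is None else now
        for project, state in self.state.items():
            if state == "free":
                self.state[project] = "leased"
                self.leases[project] = now + self.lease_seconds
                return project
        return None

    def release(self, project):
        # Returned projects are dirty until a janitor cleans them.
        self.state[project] = "dirty"
        self.leases.pop(project, None)

    def janitor_sweep(self, now=None):
        """Reclaim expired leases (leaked resources) and clean dirty
        projects; a real janitor would delete all cloud resources here."""
        now = time.time() if now is None else now
        for project, expiry in list(self.leases.items()):
            if expiry <= now:  # the job leaked or outlived its lease
                self.release(project)
        for project, state in self.state.items():
            if state == "dirty":
                self.state[project] = "free"
```

The key point of the design, as described in the meeting, is that the janitor — not the job — is responsible for guaranteeing cleanup, so a crashed job cannot permanently leak a project.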
D: So it's one of those things that, like, it needs to have someone available to, like, review fixes — and potentially make fixes, ideally — but that's tricky to staff, because we don't really have feature requests, and I think that's mostly what companies — I know at least at Google, that's, you know, gonna take priority.
D: I don't think so — there's not a ton of code there. We have some talks that reference how it works, and there's some documentation about, like, the high-level thinking and design.
F: And who is — I was going to say, our oncall team is kind of, like, in emergency-reviewer mode right now for that, in the sense that nobody else — like, we don't really have much need for reviewers, but when we do, there's nobody that really knows anything about it. So we've kind of designated ourselves as the people that can do the reviews for now. But, yeah — like Ben said, we don't — we don't know anything about it, really. It's not a very difficult tool to understand.
F: So we can handle that in the meantime, but it would be potentially nice to have somebody that knows it a little bit better, if we plan to do any more work on it in the future, yeah.
D
For
sure
we're
asking
a
bit
much
of
the
like
testing
for
on-call
team
to
take
care
of
all
these
things.
That
said,
like
I,
wouldn't
necessarily
want
to
just
pick
some
completely
random
person.
We
don't
know-
and
just
say:
okay,
you
own
this
now,
but
on
the
other
hand,
I'm
not
really
super
concerned
with
someone
being
like
extremely
familiar
with
it.
It's
not
very
complicated.
D
If
I'm
not
around,
and
the
on-call
team
doesn't
is,
is
no
longer
taking
care
of
this,
then
we
don't
have
anyone
that
can
actually
do
the
approve,
and
then
we
can't
merge
code.
If,
if
someone
wants
to
send
a
fix
that
so
right
now,
we
are
depending
on
the
testing
for
on-call
team,
to
extend
to
covering
that
so
that
we
at
least
have
a
few
people
that
someone
can
do
a
basic
review
and
approve.
A
Yeah,
I'm
thinking
kind
of
what
grant
just
dropped
in
the
chat
if,
if
we
only
have,
if
googlers
are
the
only
ones
that
know
how
these
work
and
have
access
to
them,
we
should
definitely
get
a
list
together
and
start
seeking
new
contributors
and
maintainers
or
reviewers.
Well,
so.
D
There's
a
little
bit
of
prints
here
so
most
of
the
proud
stuff
is
deployed
from
git
by
automation,
and
even
some
of
that
is
deployed
to
community
infra.
But
something
like
triage
or
kettle
is
deployed
by.
I
run
a
make
file
that
is
in
the
repo
and
it
pushes
to
a
specific
google
project
that
only
googlers
have
access
to.
Someone
could
stand
up
another
instance,
but
it
would
be
high
effort
and
it
will
require
new
projects
and
we
might
have
to
restart
the
data
and
whatnot.
D: Triage and kettle are used, probably, by a fairly small subset of our power-user test debuggers, and I would really like to keep them around for them, but I don't think it's as critical as, like, Prow. Whereas Boskos is somewhere in between. Anyone could handle that — you should not need much in the way of, like, direct cluster access.
D
It
is
auto
deploy
from
git,
but
someone
needs
to
take
care
of
it.
Cattle
is
what
powers
the
the
triage
storage
page
pedal
is
the
like.
First
step
of
data
processing,
it
collects
up
all
the
job
results
and
puts
them
into
bigquery
and
a
more
useful
format,
and
then
triage
takes
from
the
bigquery
queries.
D
It
does
some
further
clustering
and
produces
failure
clusters
and
has
a
web
front
end
for
that
plus
kind
of
like
two
two
components
I
think
there's
in
theory,
you
could
use
the
kettle
data
for
other
things
off
the
top
of
my
head.
D
I'm
not
aware
of
anything
currently
running,
I
think
there's,
maybe
one
small
metrics
thing
that
tells
you
like:
these
are
the
flakiest
tests
and
that
used
to
be
in
the
monitoring
stack
that
has
moved
around
a
bit
and
instead
now
there's
just
like
a
json
file
in
gcs
that
you
can
go
load
to
say
these
are
like
the
most
faily
tests.
D
So
I'm
also
not
sure
that
anybody
is
inspecting
those
and
there's
some.
You
can
get
a
decent
view
of
that
just
by
the
heat
map
in
prows
front
end.
D
The
failure
clustering,
though,
is
really
useful
for
that
subset
of
folks
that
are
digging
into
why
our
tests
like.
Why
is
this
test
flaky,
finding
where
it's
been
repeatedly
failing,
but
we,
I
think
you
know
we
could
live
without
it.
It
would
just
be
a
lot
harder
to
track
down
failures.
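The clustering step described here — grouping job failures pulled out of BigQuery so one underlying breakage shows up as one cluster — can be approximated by normalizing run-specific details out of failure messages and grouping on the result. This is only a rough sketch of the idea, not the actual triage code; the function names and normalization rules are hypothetical, and the real tool uses more sophisticated similarity clustering.

```python
import re
from collections import defaultdict


def normalize(message):
    """Collapse run-specific details (hex IDs, durations, numbers) so
    that similar failure messages share a cluster key."""
    message = re.sub(r"0x[0-9a-f]+", "<HEX>", message)
    message = re.sub(r"\d+(\.\d+)?s", "<DURATION>", message)
    message = re.sub(r"\d+", "<NUM>", message)
    return message


def cluster_failures(failures):
    """failures: iterable of (test_name, failure_message) pairs, e.g.
    extracted from per-job result rows. Returns clusters sorted so the
    most widespread failure signatures come first."""
    clusters = defaultdict(list)
    for test, message in failures:
        clusters[normalize(message)].append(test)
    return sorted(clusters.items(), key=lambda kv: -len(kv[1]))
```

The payoff is exactly what the speaker describes: instead of eyeballing hundreds of raw failures, a debugger sees a short ranked list of failure signatures and which tests hit each one.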
D
Oh
yeah,
I
use
that
the
triage
site
all
the
time
absolutely
yeah
like
there
are
a
few
people
for
which
it
is
pretty
critical.
I
think
there
are
a
number
of
other
people
that
are
completely
oblivious
to
its
existence
and
would
never
use
it
and
we
would
get
by
okay.
It's
just
anybody
that
wants
to
dig
into
wire
test.
Failing
it's
really
powerful.
D: That — reviewers and approvers — is one.
D: I personally don't love open issues as a metric, just because, like, that punishes continuing to track things that aren't fixed. But if there was some way to measure issues that aren't getting a response...
D
But
not
all
most
of
our
projects
don't
have
the
haven't
used
the
like
untriaged
label
or
if
they
have
it,
it's
not
getting
marked
triaged.
I
think
we've
got
people
stretched
thin
enough,
just
responding
at
all
yeah.
Have
you
have
you
seen
the
triage
party
stuff
at
all?
Yeah
we've
discussed
anyone
in
the
past,
but
again
part
of
the
problem.
Is
we
just
don't
have
that
many
people
even
trying
to
triage
everything
so
it
it
would
cost
more
time
to
set
this
up
than
I
think
it
would
save.
A: So we don't have a CONTRIBUTING markdown in our community folder. We did have one — I saw it somewhere else; I think it was in test-infra. No — it's in our sig-testing repo.
D: I think that's another boilerplate one. Okay — like, our repos have the boilerplate CONTRIBUTING markdown that points you to the, like, general Kubernetes stuff. kind is the only one that I know of that has an edited one, that points you to the, like, kind contributor guide. But there's, like, a boilerplate one in kubernetes community repos that points you to the overall, just, like, how to contribute to Kubernetes.
D
So
we
should
have
one
of
those
in
each
of
our
sub
projects.
But
it's
it's
just
the
standard.
Okay,
nothing
sig
specific,
except
for
kind.
D: And it's been challenging to say what people should work on to get to the top, because I would say most of our projects are in more of a, like, keep-the-thing-running mode, and less of a, like, let's-make-it-more-complicated-with-more-features mode.
D: I think Prow is certainly still developing features. But almost every other subproject — Prow and the e2e framework, and then almost every other subproject is just, like, maintenance. Keeping up with things — like, kind is keeping up with Kubernetes changes, but we're not really adding much in the way of feature set. Or, like, Boskos — we're just trying to make sure someone can approve fixes; we're not really looking for—
E: How much work do you think it would — it would be, for an approver to put in for Boskos? Like, five hours a month? Ten hours a month?
D: Yeah, like five or less, I would say. There are many months where we really don't need to change anything. Boskos is probably the easiest one to have someone help approve, because we just need to make sure you understand, like, our priority for that is stability. There's really no interesting features there, but it is something that is very critical to our CI, that we use constantly, all day, every day — handling hundreds and hundreds of projects for us.
E: I wonder if this kind of stuff would be a great dev@kubernetes email, where we say: we're looking for current contributors who have an extra five hours a month to step up to the plate to be Boskos maintainers — and, you know, unfortunately, we can't take new contributors at this time, blah blah blah — but we get really specific on dev. And I know for a fact that there are other current contributors right now that have been looking for ways to get more — to get more valuable to the project.
E: For you all, too — I can do the — I'll do the email, if that's what you need me to do, and, I mean, obviously, I'll show it to you all before we send it. What do you think about something like that? Just, like, literally getting the horn out that says: here's some projects — some subprojects of ours — that need approvers.
D
E
Then
that's
why
I'm
saying
we
can
like.
We
can
establish
that
by
saying
you
know,
we
need
a
current
contributor
to
step
up.
You
know
and
hear
like
you
know
you,
you
need
to
be
at
least
a
reviewer
in
another
area
or
something
like
that.
I
mean
we
can
definitely
and
maybe
maybe
we
can
even
do
like
trial
periods.
D: —team, just to rotate someone else into doing the reviewing. Yep — and they can still approve, initially.
D: I think that's going to work the easiest in Boskos of all the projects, because it's also the — it's the lightest carry. Most of the time, we don't really need people to do anything other than potentially be available if there's a bug fix, so that someone's ready to approve. Like, we have some folks that could just do that, but I think we have an anti-pattern going there — like, I don't actually want to see our oncall trying to approve literally everything in the project. It's exhausting.
A: Cool — we're out of time. What needs to get done in here? We need to — we need to count our unique reviewers and approvers in the packages we own.
D: We need a lot more detail on a few of these sections — like the work that could be highlighted; we have, like, stub bullet points.
D: We should be looking at what's running in CI. There's so many things to do.
A
Yes
I'll
do
that
and
the
other
thing
real
quick.
So
when
it
comes
to
growing
reviewers,
we've
been
super
liberal
in
ccli.
We're,
like
someone,
showed
up
made
a
couple
pr's
like
hey.
Do
you
want
to
start
reviewing
pull
requests
like
there's
very
you
know
low
impact
negatively
they
can
have
by
saying.
Oh,
this
looks
good
when
there's
still
things
missing
right,
because
three
viewers.
D: And thanks a lot, Teddy, for helping with everything.
D: —was talking to me last night. I definitely need to take some vacations. Yes.
D: Doing that, we were looking at that first internally, amongst, like, existing contributors that do have domain knowledge, to see if anybody, you know, wanted to — reaching out to, like, some project owners and things.