From YouTube: CNCF TOC Meeting 2023-9-19
B
There we go, yeah. I was saying that shirt is one of my favorite shirts.
C
I'm just quickly running through; we don't have quite enough TOC members, but we also don't need quorum today. So.
D
Go ahead and rock and roll. All right, welcome everyone. It's September 19th, and this is the TOC public meeting. Thanks for joining us. Today we're going to talk about sandbox projects, sandbox annual reviews specifically, and how some of the changes to the process that were proposed over the summer have gone. As of yesterday, the TAGs should have completed as many of the annual reviews as they could, based off of the conducting sandbox annual reviews document that was worked on by a small cohort of individuals in the ecosystem.
D
The intent was to ensure that these didn't take more than maybe an hour to two hours per project for a contributor from a TAG doing these reviews. We do know that we had a lot that were submitted; I think we ended up with over 30 at one point, which were, as you can see on the slide, not equally distributed amongst all the TAGs, which is often reflective of our ecosystem. So today we wanted to understand a little bit more about how everybody felt the process went.
D
And whether or not this is going to be sustainable moving forward. There were a lot of great ideas from the cohort that put together the process; however, as with most things in engineering, we don't really know what works until we try it out. So I'm interested to hear from the TAG chairs and technical leads and other participants in this process how they felt this went, what feedback they had, any challenges, and whether they were able to complete them all. I suspect the answer to that last one is no, and I will turn it over to you all.
H
Oh, I can probably go for TAG Runtime. Okay, thanks. Hi, I'm Rajesh. So from a TAG Runtime perspective, we had over 14 reviews to go over, and we did have a couple of challenges, mostly around contributor bandwidth and a lack of volunteers with the time to go through 14 reviews.
H
As of today, out of the 14 reviews, we have around two which we could complete end-to-end, and when I say complete end-to-end, I mean a review comment posted on the PR. We still have four pending projects where we've not started the reviews as of now, mostly because we didn't have contributors; for some of them we reached out to TAG Environmental Sustainability, and some of the contributors from there were kind enough to help us out as well.
H
But these are still in progress and yet to be reviewed; we are following up on that. We have another five projects where we're still waiting for follow-ups from the contributors who reviewed them, then going through a review from, say, the TAG chairs or leads, then a review from the TOC liaison, and then getting back to the PR.
H
And then we have three other projects where the contributors reviewed the project, but we're still waiting for a final ack, a remediation, or a review from the TOC liaison. So that's kind of where we are from a TAG Runtime perspective. All of what I've said is collated in this sheet; yeah, thanks for putting that up, Ricardo.
H
The other thing that I also wanted to call out was that we faced a couple of challenges reviewing projects which had reviews written for 2022 but were being reviewed now, so the data was not consistent. For example, Karmada:
H
They have applied for incubation, and some of the data that was put out in the 2022 annual review is not consistent with the incubation proposal as of now, which makes sense, but it made it difficult to go back and forth between the current state and the state from 2022. So this is where we are. I think we could have done better in terms of reaching out to other TAGs and distributing this load from TAG Runtime. That's what I feel, but yeah, these are some of the challenges.
D
That's fantastic. I want to dive a little bit more into the meat of that. In addition to the contributor bandwidth and availability that drove a lot of the completion challenges and coordination issues, do you think that the TAG, either the chairs or the contributors, got a better understanding of the projects themselves and kind of where they were at? Was it a good exposure and learning opportunity between TAG members and projects?
H
Absolutely. One thing that I really liked was how meticulously the process document was laid out in order to conduct the review. That was really good; that helped a lot, so thanks for doing that. Going through the reviews definitely helped get more contributors into TAG Runtime, in terms of getting them acquainted with all the projects, and that really helps. So yes.
F
Well, I can talk briefly about TAG App Delivery. We only had four, and I know three of them have progress but haven't been completed. I really like the TAG Runtime sheet, I just wanted to say that really fast; Ricardo, thanks for sharing that. Just looking quickly at the review documents, it's awesome being able to see that, and it'd be nice to have a template going forward for each of those. Anyway.
F
So for me, I was looking at CDK8s, and there were a couple of process things as I was going through it. For example, like what you were just saying about Karmada, there were some things that stood out that do need remediation, or for the TOC to look at, so I put that in the issue in TAG App Delivery, but I wasn't sure if I needed to put that on the issue in the TOC. So there are just little process things that can be ironed out.
F
They'll
go
a
lot
faster,
but
I
can
see
that
the
next
time
we
meet
to
discuss
you
know
kind
of
process
and
what
worked
and
what
didn't?
We
will
have
a
lot
of
feedback
just
to
streamline
and
then
make
a
checklist.
F
And that's where I also wasn't sure, and we weren't sure, how engaged we should be with the project, whether we should walk them through the thought process, I mean, if it's supposed to only be an hour or two. The engineers and the maintainers were really responsive, so that was really good to see, but how much engagement should there be before the review is complete, and then inviting them to come participate in various ways?
D
Okay. Other TAGs, or even maintainers on the call that participated in this, I'd be curious to hear from them as well.
G
I can give you a very quick summary from the storage side. It's not a large number, as you can see, but I think it has probably been positive in that it's helped us engage with projects that we had, to a certain extent, lost touch with, and it sort of forced the contact points, which was good.
G
I think we're going to need to think about how we're going to scale this if the number grows. I'm kind of wondering if we need, you know, as much process and templating as possible, and whatever else, to kind of make it low touch.
G
But it also got me thinking about some of the things that we can take out of this. I feel like, as we go through the first batch, it would be a good idea to collect some stats on some of the common outcomes, like: what do we want to do?
G
What was the recommendation? For example, were the projects actually on a good track or not on a good track? And kind of use that as input into the selection process for sandboxes up front, for example, to kind of say: look, 80 or 90 percent of sandbox projects are maybe not going to plan, or maybe they are, right?
G
We should use this as a guidance point, because the sandbox process is kind of a bit of an experiment in itself, right? And we've grown the portfolio hugely in the last year, way beyond probably our initial expectations, and we should kind of use that as a level set, I think, or use this as an opportunity to level set.
G
Also
using
you
know,
by
level
setting,
we
can
probably
set
out
some
guidance
to
projects
to
kind
of
say
look.
These
are
the
things
that
80
of
the
projects
get
called
up
on
at
their
annual
reviews.
So
these
are
the
things
you
should
be
working
towards
in
your
first
year
as
a
Sandbox
projects,
you
know
to
kind
of
improve
the
pass
rate.
If
you
wish.
D
We still have bandwidth and availability issues within the TAGs, and with the volume of sandbox, and even incubating and graduated projects (because we've had discussions within the community around annual reviews for those too, to make sure that we have touch points, that they're healthy, and that they're achieving the things that they want), it doesn't sound like this is sustainable at scale for what we have.
D
However,
the
engagement
and
the
enrichment
between
the
tag,
members
and
the
projects
is
potentially
a
positive
here.
That's
what
it
sounds
like
being
able
to
engage
with
them
understand
more
about
the
projects
where
they're
at
and
being
able
to
proactively
help
them
so
identifying
issues
earlier
on.
This
is
like
the
whole
concept
of
shift
left
in
security,
so
we're
talking
about
shifting
left
for
project
maturity.
D
That
makes
a
lot
of
sense
to
me,
but
we
need
to
do
this
in
an
automated
fashion.
I
think
leveraging
some
of
the
the
information
and
the
learnings
that
we
have
here
around
where
projects
are
getting
stuck
is
beneficial.
How
does
that
align
with
guideposts
I
can
see
this
being
as
a
valuable
input
to
furthering
some
of
the
discussions
on
on
the
moving
levels
process
and
some
of
the
criteria
changes
some
of
the
template
changes
that
can
go
on
there.
D
What
else
are
from
going
through
this
exercise,
like?
Does
it
sound
like
having
a
function
to
check
in
with
projects,
maybe
not
necessarily
as
an
annual
review
mechanism,
but
something
that's
more?
D
...automated and low friction, that puts more of the interactivity on the projects? The ones that are already doing good things kind of self-manage and self-sustain, the way that we allow self-governance to occur; and for those projects that don't have the same amount of activity, or may not be as responsive, having that be the first function to come back into the TAGs, to have these discussions and get them back on the right path.
G
Sorry to speak up again, but a few of these things could be automated. So, for example, where we are giving guidance to say, you know, make your community meetings public, and have a Slack channel or group somewhere, and things like that.
G
Those
are
the
sorts
of
things
that
they
can
be
templated
and
the
projects
can
actually
register
those
things
in
a
simple
place
in
GitHub
somewhere
and
then
they
can
be
checked
automatically
and
it
kind
of
is
sort
of
like
almost
self-checking,
so
like,
for
example,
if
they
do
have
a
slack
group,
you
know
they
can
register
that
somewhere,
but
both
can
check
the
number
of
members,
and
you
know
the
number
of
messages,
for
example,
and
can
engage,
engage
engagement.
G
That
way
and
also
like
you
know,
make
sure
that
public
meetings
are
registered
somewhere
in
a
Google,
doc
or
whatever,
and
you
know
track
updates,
but
it
it
will
be
POS.
It
would
be
nice
to
kind
of
template
these
things
so
that
the
the
which
and,
by
the
way,
the
a
lot
of
these
things,
which
we're
measuring
probably
also
the
same
sort
of
things
that
the
projects
needs
to
build
the
community
in
the
first
place
as
well.
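[As a concrete illustration of the register-once, check-automatically idea discussed above, here is a minimal sketch in Python. The metadata file format, field names, and the checks are hypothetical examples, not an existing CNCF convention; it only shows how a project's self-registered file could be validated by a bot.]

```python
# Minimal sketch: validate a project's self-registered metadata file.
# All field names and the file format are hypothetical examples.
import yaml  # pip install pyyaml

EXAMPLE_METADATA = """
project: example-sandbox-project
slack_channel: "#example-project"
meeting_doc: https://docs.google.com/document/d/EXAMPLE
meeting_public: true
"""

REQUIRED_FIELDS = ["slack_channel", "meeting_doc", "meeting_public"]

def check_registration(raw: str) -> list[str]:
    """Return a list of problems found in the registered metadata."""
    data = yaml.safe_load(raw)
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in data]
    if data.get("meeting_public") is False:
        problems.append("community meetings are not public")
    return problems

if __name__ == "__main__":
    issues = check_registration(EXAMPLE_METADATA)
    print("OK" if not issues else "\n".join(issues))
```

[A real checker could go further and query the Slack and calendar APIs to verify that the registered channel and meetings are live, which is the self-checking property described above.]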
D
Yep, that all makes sense. TOC members or liaisons that participated in this, or had the opportunity to, around annual review completion: do you have any feedback?
I
And then, just to echo the point on "let's automate this": this is not sustainable at all, and it also took a lot of time, I think more than what I expected, because it depends on the community member who is reviewing the PR. If someone is completely new to the community and they just wanted to help out, that was nice, and I love to see their interest, but I think feedback from such community members...
I
It
required
a
lot
of
back
and
forth,
which
is
fine,
but
it
also
takes
time
so
I
think
it
also
depends
on
the
kind
of
volunteers
we
get
and
so
on.
So
yes,
let's
just
please
automate
this.
J
And if I could just add something, Emily: from the CNCF staff side, we actually did have a great meeting on Friday to discuss potentially how the larger LFX platform can help with automation and data gathering. So we've given them details on current pain points, the list of things that are in the GitHub issue from Krishna, and they're looking at potential ways to proactively get some of this information, kind of call things out and make them available.
D
That's fantastic to hear. All right, so we've gone through this activity and we've gotten some feedback. I think the automation need is definitely there. We do still have projects submitting annual reviews; the CLOMonitor bot is actually going out and pinging them. Leo had a question around how much less work this is for the TOC now, and it sounds like there's still a lot of work for the liaisons and the TAG chairs, so I definitely agree that there's still a lot of work that needs to happen here.
D
There's
still
a
lot
of
coordination,
so
we
may
not
have
reduced
the
the
amount
of
effort
for
all
all
individuals
involved.
We
might
have
just
distributed
it
a
little
bit
more
and
given
that
we're
already
strapped
for
contributor
resources
and
time
there
might
be
something
that
we
need
to
do
here
and
Ricardo
I'll.
Let
you
speak.
K
Oh, I was going to say something about automation, but yeah, there are several things, you know, that could be automated, though obviously it takes a lot of work. I mean, there's the aspect of pinging the reviewers; maybe the suggestion of having something on Slack, or a Slack channel. We in TAG Runtime have this channel that we created for the reviews.
K
People
could
have
a
bot
that
actually
pings
the
assignees
or
reminder
you
know
to
to
complete
the
review
or
to
show
some
progress.
So
so
there's
that
aspect.
Additionally,
there
could
be
some
GitHub
way
to
assign
their
review
to
a
person
and
have
a
broader
to
remind
as
well.
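[Something like the reminder bot described here can be a few lines with the Slack SDK. This is a minimal sketch, where the channel name, token variable, and the assignments mapping are hypothetical placeholders, not an existing TAG Runtime bot.]

```python
# Minimal sketch: ping review assignees in a TAG Slack channel.
# Channel name, token variable, and ASSIGNMENTS are hypothetical.
import os
from slack_sdk import WebClient  # pip install slack_sdk

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

# Hypothetical mapping: project under review -> Slack member ID of assignee.
ASSIGNMENTS = {
    "example-project": "U0123456789",
}

def send_reminders(channel: str = "#tag-runtime-reviews") -> None:
    """Post one reminder per pending review assignment."""
    for project, member_id in ASSIGNMENTS.items():
        client.chat_postMessage(
            channel=channel,
            text=f"<@{member_id}> reminder: the sandbox annual review for "
                 f"{project} is still pending. Please post progress on the PR.",
        )

if __name__ == "__main__":
    send_reminders()
```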
K
The
other
aspect
is
the
request
for
information
from
the
projects,
so
I'm
not
really
sure
what
the
what
can
be
done
there
but
I
mean
some.
Some
of
them
folks
might
have
more
suggestions,
but
it
will
be
nice
to
have
an
automated
way
to
request
information
from
the
projects
and
when
someone
is
reviewing
the
project
yeah.
So
those
are
some
of
the
aspects
that
I
can
think
of
of
automation,
but
I
think
we
can
distill
all
the
different
ones
in
a
follow-up.
D
Okay. What about the content in the instructions? We got some feedback that the instructions that were provided were pretty clear for the most part; there are still some areas of confusion, but there's the overall content of reviewing the project scope and goals, community development, project governance, long-term planning, and collaboration: how is the project integrating with other projects in the ecosystem? Do we feel that that content from the conducting sandbox annual reviews document was worthwhile in engaging with some of these projects and ascertaining where they're at, or where they may be getting stuck?
H
It did feel like the content in the doc was pretty helpful in conducting the review. My question is whether the template for the annual review will reflect the instruction set in the document. If that's something that we were doing, that would be great, so that it'll be easier for us to automate.
F
Sorry, I was just looking for which section it is. I thought the instructions were good. Where I was having a hang-up is when it wasn't a quick checkbox of: yes, they're doing great, everything's wonderful, the TAG recommends, you know, the review is complete. I think it's section five; I'm looking, maybe I'll find it later. But if you have problems, or if it looks like the project is having issues that need to be remediated, the process there...
F
Is
it
more
go
ahead
and
submit
some
issues
to
the
project?
Hey,
look
at
these
things,
or
is
it
purely
hey,
Toc
Liaisons?
This
is
what
we
saw.
Can
you
help
do
something
about
it?
So
how?
What
are
the
boundaries,
how
much
interaction
with
the
project
when
it
comes
to
that
those
are
my
questions.
D
Yeah, I think, based off of how we had it written (and I believe section five is the right one from the document), the TOC liaisons would be primarily responsible for engaging with a project on some of those issues, but I don't think we actually fully explored what they are.
D
So
it
sounds
like
we
need
some
resolution
for
when
problems
are
uncovered
associated
with
projects
or
even
just
challenges.
They
don't
all
have
to
be
problems.
Indicators
of
problems
to
come,
but
definitely
agree
with
like
simplifying
the
easier
path.
If
a
project
looks
like
they're
they're
on
the
right
track,
the
the
feedback's
been
positive,
we've
not
seen
anything,
that's
untoward
or
questionable
in
their
practices.
They
seem
to
be
doing
fairly
well.
D
That
should
be
much
simpler
path,
but,
as
we
start
uncovering
some
of
this
stuff,
whether
that's
lack
of
Engagement,
not
clear
indicators,
missing
documentation
or
governance,
lack
of
activity
associated
with
the
project
that
should
warrant
a
more
in-depth
engagement,
probably
with
them,
either
by
the
tag
buy
a
Toc
member
or
even
just
a
general
Community
member
looking
to
help
out.
So
how
do
others
feel
about
that?
Moving
moving
it
more
into
like
a
pull
mechanism.
Alex
came
off,
mute.
G
I was just wondering what sort of grace periods there are. So, for example, can sandbox projects still be embryonic after a year, and is that bad? And maybe after two years or three years, is it really bad and in need of remediation? You know what I mean; where is the line?
D
Yeah, that's actually not something that we have, and TOC members, I'd be curious to hear your thoughts on this, but we don't actually define time frames for how long you can or can't stay in a particular maturity level of the foundation. Each project matures at a different rate than others, and their concept of maturity is going to vary. You're not going to have every project look like Kubernetes; you're not going to have every project...
D
...look like SPIFFE; you're not going to have every project look like Falco, or any of the other projects within the ecosystem. They're all kind of unique. There are some characteristics that are similar.
D
We can probably pattern them out, but what works for one project that comes in... I mean, we've had projects apply to sandbox where we then say: you should really be looking at incubation in three months; you're not quite there yet, but you're going to be real soon. And then we have other ones that are just so new: they don't have a lot of community yet, and they're still exploring their use cases, but they have a good concept for experimentation. Those ones might be there for a couple of years, and that's okay too.
G
Right, but I guess what I'm trying to say in an indirect way is: do we want to have a line where we say, you know, community growth not going in the right direction, and indicators not going in the right direction, for two annual reviews means you're automatically going into the archive, or something like that? You know, to kind of have an automatic filter.
K
I think a clear line there is maybe: after two years you see no activity on the project, right? You see, like, no commits, no activity on GitHub, so that's a clear line for archival. I mean, I'm just saying two years, but it could be a year and a half, or a year, depending on consensus, right? But in terms of a project being in sandbox, I think we already talked about it.
K
The
the
projects
can
remain
in
sandbox
indefinitely
and
they
can
remain
there
to
experiment
as
long
as
they
have
some
sort
of
activity,
they're.
Looking
for
new
things,
trying
new
ideas
but
yeah,
so
that
that's
hard
to
tell
like
you
know,
you
know
what
the
line
is
there,
but
you
know
I
think
if
there's
some
sort
of
activity
the
project
should
still
be
in
sandbox.
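[The "no commits for two years" line suggested here is easy to check mechanically. Below is a minimal sketch against the public GitHub commits API; the two-year window and the example repo name are illustrative, not CNCF policy.]

```python
# Minimal sketch: flag a repo whose default branch has had no commits
# within a configurable window. The threshold is an example, not policy.
from datetime import datetime, timedelta, timezone
import requests  # pip install requests

INACTIVITY_WINDOW = timedelta(days=365 * 2)

def last_commit_date(owner: str, repo: str) -> datetime:
    """Return the timestamp of the most recent commit on the default branch."""
    # Unauthenticated requests are rate-limited; pass an auth header for real use.
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"per_page": 1},
        timeout=10,
    )
    resp.raise_for_status()
    iso = resp.json()[0]["commit"]["committer"]["date"]  # e.g. "2023-09-19T00:00:00Z"
    return datetime.fromisoformat(iso.replace("Z", "+00:00"))

def is_inactive(owner: str, repo: str) -> bool:
    return datetime.now(timezone.utc) - last_commit_date(owner, repo) > INACTIVITY_WINDOW

if __name__ == "__main__":
    print(is_inactive("cncf", "sandbox"))  # repo used purely as an example
```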
D
Yep. The automation of inactivity detection, I think, is definitely an area, and I believe there's work being done on that, Daniel or Amy. I think this was one of the topics that the TOC recently talked about: we do have metrics and indicators; it's a matter of how we automate discovery of an inactive project, so that we can engage with them and understand whether or not they truly are inactive or there's something else going on.
D
Yep, I agree. I think I've seen the exact same one; I just can't place it right now. So automation of the discovery of inactive projects definitely needs to happen. That's something that we can work on with the foundation, to ensure that that is one less thing on our plates, so we're not spending cycles going and hunting for those projects. If we've already got the data collected, we should be able to automate that discovery and engagement. As far as annual reviews go...
D
I want to reduce the amount of work that we've created as a result of the annual review process, without compromising the level of engagement and exposure that both the TAGs and the projects get to one another, because I think that is incredibly valuable, as is the opportunity for getting new contributors into these TAGs. It's a matter of: how do we keep that level of engagement, and how do we ensure that there is as much technical support within the process, through automation mechanisms, to reduce that level of effort?
D
How about we disable the CLOMonitor bot from requesting annual reviews for projects for right now? It's generating an excessive amount of work, and we currently don't have the contributors to go through everything. I think I'd like us to finish up the annual reviews that we currently have, maybe except for projects that are already applying to move levels, because the TOC is going to look at them regardless; we're going to know what's happening, and they've already applied, so they're probably in a decent state, maybe a few things off. And then from there we can figure it out.
L
I feel like we can be objective about that, and if there's something where we can establish health through automation, awesome. But otherwise, I think it's not providing direct value today, necessarily, because we're not really doing anything with that information: we're not archiving, we're not moving levels, we're not engaging; it's just a report. So I fully support pausing that and figuring out a new path forward, and I would love to hear from all the TAG leadership as to possible different options we should consider.
F
Thank you. I will say that I did highlight areas; we haven't completely finished it, because there are areas of concern, but also areas of integration, where they can have touch points with the TAG and other projects that they could integrate with. So, for the next steps that I would see after this one... I mean, they have a lot of homework.
F
You
know
talk
with
the
home
community
and
maybe
that's
within
Tag,
app
delivery
and
I
know
you
know,
could
use
more
maintainers
or
even
contributions
between
both
of
them
and
then
a
couple
other
areas,
so
I
saw
it
as
a
good
thing
going
through
the
review.
If
a
lot
of
it's
automated
I
mean
even
if
it
doesn't
happen,
but
at
least
some
touch
points,
but
I
do
see
where
we
can
have
some
value
with
that
project.
Talking
to
other
projects
within
the
ecosystem.
G
Maybe this is a bit dramatic, but it kind of depends on what the TOC envisages sandbox is for.
G
If
we,
if
we
want
to
put
the
time
frame
on
things
and
the
idea
for
the
sandbox,
is
that
eventually
they
do
go
to
incubation,
because
that's
what
the
foundation
is
about,
do
we
want
it
to
be
self-selecting
by
the
maintainers
I.E?
G
A
review
is
done
at
the
point
where
the
maintainers
say
that
they're
on
an
incubation
track
and
if
they
don't
say
they're
on
an
incubation
track
within
some
given
period
of
time,
two
years,
three
years.
Whatever
that
number
is,
then
it
automatically
goes
into
an
archival
process
and
if
it
and
if
they
do
say
that
goes
into
an
incubation
track,
then
it
goes
for
an
incubation
review
and
then
that
kind
of
resets
either
resets
the
clock,
because
something
is
missing
or
it
actually
goes
to
incubation.
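[The self-selecting flow described here can be written down as a small decision function. This is a minimal sketch of that proposal; the window length and function names are illustrative only, not an agreed process.]

```python
# Minimal sketch of the proposed self-selecting flow: maintainers either
# declare an incubation track within the window or the project is routed
# to archival; a failed incubation review resets the clock.
from datetime import date, timedelta

WINDOW = timedelta(days=365 * 2)  # "two years, three years, whatever that number is"

def next_step(accepted: date, declared_incubation: bool, today: date) -> str:
    if declared_incubation:
        # An incubation review follows; if something is missing, the clock
        # resets, otherwise the project moves to incubation.
        return "incubation review (pass -> incubation, fail -> reset clock)"
    if today - accepted > WINDOW:
        return "archival process"
    return "remain in sandbox"

if __name__ == "__main__":
    print(next_step(date(2021, 9, 1), False, date(2023, 9, 19)))  # -> archival process
```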
D
I think there are a lot of good ideas in that. Just talking about the concept of an incubation review: one of the areas that the TOC has encountered with projects that do apply, once they file the PR on our repo, is doing that initial cursory check with the project of whether they're actually ready. Having the TAGs step in to be able to do that would be beneficial. And just to Ricardo's point, the annual review doesn't need to actually still be an annual review.
D
It
can
just
be
a
review
and
that
moving
levels
function
is
a
great
checkpoint
to
re-engage
with
the
project,
to
make
sure
that
they're
on
the
right
track
and
there's
still
the
opportunity
for
them
to
request
assistance
from
tags.
Even
if
they're
not
there,
how
do
I
get
on
the
incubation
track.
D
What are your thoughts? I mean, this is excellent feedback, and I think that we have enough information that we can start to take action and make this a little bit more meaningful. There's still the project moving levels task force, which I think has very similar concepts to some of what's being discussed here, which is great, and I want to be able to capitalize on that back within the task force, with the recommendations that come out of this.
D
Okay, so, Leo.
M
Yeah, one thought about the handover process. So we, in TAG Environmental Sustainability, don't have any reviews assigned to us, but we helped TAG Runtime with two reviews, and I think we can also make this process a little bit slimmer. I'm not sure if it's the best idea to hand over the entire review.
M
I
think.
Maybe
it's
easier
to
just
like
bring
up
this
topic
in
the
tag
and
say
you
can
like
contribute
to
Tech
runtime,
so
they
own
still.
The
review
and
the
tag
chairs
do
not
lose
the
entire
review,
because
I
think
it's
kind
of
like
just
observing
kind
of
the
two
reviews
that
we
help
with
it's
putting
us
a
little
bit
in
like
a
middleman
position.
It's
a
little
bit
strange
because,
like
Tech
runtime
was
kind
of
requesting
help.
D
I think, for projects that are currently in the middle of an annual review by a TAG member, if you're getting responses from those projects, let's close those out and wrap them up, because I still think the projects are getting value from that. But anything that hasn't been started, I don't think we should take on right now.
D
What if we just quickly do a cursory overview of them and then accept them, based off of the discussions from this call? Then the expectation is that no projects moving forward would be submitting annual reviews until we figure out how we want to do the review process. It sounds like we've got some great suggestions here as input to the moving levels task force, as well as for engagements back with the TAGs for those projects, and for integrations amongst projects themselves; I don't want to forget about that one.
J
Yeah, I think so; those are the key ones. So, the ones that haven't started: how many of those are open? Do we know, chairs?
J
Sorry
so
there's
closing
out
and
wrapping
up
the
in-flight
annual
reviews
and
then
there's
ones
that
haven't
started
already.
How
many
are
those
are
they
do
you
just
want
to
close
those
out?
What
do
you
want
off.
C
Off the top of my head, there are about six or so that have come in through July and August that haven't been touched at all. I think we should probably add those back in, and at least do a cursory review on the TOC side.
D
All right, awesome. I really appreciate the TAG leadership's and the TAG contributors' focus on doing this, because this is something that I feel, personally, has been long overdue: reevaluating these annual reviews. We had indicators of the value that they provide, but maybe that was not necessarily aligned with the outcome and how we were actually using them.
C
Kind of blocked, if I'm honest, into the year there.
D
Awesome. All right, we've got 14 minutes left; any other questions? Community Award nominations are open; if you are subscribed to the TOC mailing list, great, you've already got this message. We have a new award, though, that I want to call out, particularly of interest to this group of folks: the Taggy.
D
This
award
is
designed
to
identify
a
tag
contributor
who
has
gone
above
and
beyond
and
has
broad
reach
and
significant
impact
in
growing
the
tag
ecosystem,
so
think
about
who,
in
your
in
your
circle,
within
your
tag
or
even
in
other
tags,
has
been
super
beneficial
and
you've.
Seen
that
impact
nomination
links
are
in
the
slide
deck
as
well
as
on
the
mailing
list
and
if
you're
not
on
the
mailing
list,
I
do
recommend,
subscribing.
C
Order
this
year
we're
doing
one
because
I'm
trying
not
to
be
able
to
overload
things
if
we
get
phenomenal
amount
of,
like
you
know,
nominations
and
all
of
that
we
can
consider
expanding
it,
but
for
this
year
we're
gonna
do
one.