From YouTube: Kubernetes SIG Node 20201020
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
B: Hello. Okay, thank you. Just for the recording: this is the SIG Node meeting, October 20th. Hello, everybody. On pull requests: we have had many weeks where more pull requests were created than merged. Luckily for us, some of them were closed. This week we have very few closed ones and only two merged. Most of the closed ones were closed by the author, so they were probably created by mistake or duplicated, and we have a lot of newly created pull requests.
B: So I think we need to double down on reviewing pull requests. Unfortunately, I didn't have much time; last week I was catching up after being sick for a week, so hopefully this week I will spend more time reviewing. But I encourage everybody else to review more as well: the more pull requests we have LGTM'd, the easier it will be to approve them and move them forward. So, yeah, this is an unfortunate update in that the count jumped very high this week. Hopefully we can clean it up in the next couple of weeks.
D: Hello. So, yes, I don't have that much to say about this topic today. I only want to point out that we opened a new PR with the full proposal for this work, as I presented a while back. So, yes, I'm looking for reviewers for it; that would be great. If there are any comments you can give us, I will be happy to answer any questions you have about it.
E: Just to add more context: folks from Red Hat have been reviewing it, and we think it's in a good place now. There are a couple of open questions there, so it would help if folks want to take a look. Simon?
E: Yeah, this did not make the KEP deadline. So meanwhile, I guess, we can have branches to validate.
C: Your mic has a squeaking noise, Dawn, that's making it really hard to hear you. It sounds like a bird chirping; that's the closest comparison I can make.
C: What I heard was that your mic isn't working great, and that I should work through the agenda for you. Okay, so I think, as the next step on this particular item, we just need to assign primary approvers.
C: I tried to help get the user namespaces effort through its first phase last time, and I'm happy to do it again. So, if there's anybody else on the metadata for this that wants to volunteer to help review, it'd be appreciated. I see, Renaud, you're listed, yeah.
C: We can use this as an opportunity to champion some new folks as well. So for now I'm happy to mark myself as an approver, but maybe we can get some additional reviewers outside of Red Hat and Kinvolk; that'd be great.
G: I can take a look at it as well; Kevin from NVIDIA. We have an internal implementation of user namespaces that we use, so if nothing else, I can add some flavor from what we do.
C: Awesome, Kevin. So let's go with that as a plan: if we can add Kevin as an additional reviewer, then I'll handle approval.
C: And is there anything else you want to touch on for this, other than a request to review in the next few weeks?
C: Awesome. Moving on, then: is Jeremy here to talk through the node-problem-detector? Yeah.
H: Hello, everybody. My name is Jeremy Edwards, I'm from SIG Windows, and I created a proposal for node-problem-detector to run on Windows. This is one of the feature gaps on Windows: basically, on Linux you have this tool that runs, analyzes the host OS, and finds problems that can come from that host OS.
H: These various issues and so on. So I looked at the code and what it basically does on Linux, and created a proposal to adapt it to Windows with a bunch of Windows-isms. For example, what happens if Windows Defender finds something, like the host being compromised? Maybe we can bubble that up to the Kubernetes control plane so that it can respond.
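[Editor's note] For context on the mechanism being described: on Linux, node-problem-detector surfaces permanent problems as NodeConditions on the node object (and temporary ones as Events). Below is a minimal sketch, not NPD's actual code, of how a node agent could report a hypothetical Windows Defender finding as a condition; the condition type, reason, and message are illustrative assumptions.

```go
// Minimal sketch, not node-problem-detector's actual code: reporting a
// host problem to the control plane as a NodeCondition, the mechanism NPD
// uses on Linux for permanent problems. The condition type, reason, and
// message are hypothetical examples for a Windows Defender finding.
package main

import (
	"context"
	"encoding/json"
	"os"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodeName := os.Getenv("NODE_NAME") // typically injected via the downward API

	now := metav1.Now()
	cond := v1.NodeCondition{
		Type:               "WindowsDefenderThreat", // hypothetical condition type
		Status:             v1.ConditionTrue,
		Reason:             "ThreatDetected",
		Message:            "Windows Defender reported the host may be compromised",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
	}

	// Strategic merge patch on the status subresource; node conditions
	// merge by their "type" key, so this adds or updates just this one
	// condition. Controllers (or humans) can then react to it.
	patch, err := json.Marshal(map[string]interface{}{
		"status": map[string]interface{}{
			"conditions": []v1.NodeCondition{cond},
		},
	})
	if err != nil {
		panic(err)
	}
	if _, err := client.CoreV1().Nodes().PatchStatus(context.TODO(), nodeName, patch); err != nil {
		panic(err)
	}
}
```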
H: There is a question about which potential problem-detector plug-ins we can plan for the first version; those are listed in the document itself. So there's a bunch of problems listed there; it's somewhat exhaustive in the problems that we could look for. But I wanted to put it up to the community to see which ones are actually worthwhile to do and which ones are not.
H: So there is a list of options there that we can look at, but I would expect probably around a third of this list would get implemented, and maybe one or two would get released when it actually goes out initially. We could build upon that, of course. The document also goes into the details of how we would actually deploy this, and it discusses things like the limitations of Windows.
H: There's a lack of privileged container support, so things like DaemonSets won't quite work for this yet. That's coming in Kubernetes 1.20, potentially, though it may get delayed on Windows. But NPD actually has something for this.
H: It does have a mode where it can run under systemd, so we would have a similar mode that runs as a Windows service, and so on. The document goes into the details of how to do all that. So, yeah, I'm basically just looking for community feedback before we actually go ahead and start implementing it. The doc links have been added to the Slack channels for node-problem-detector and SIG Node.
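[Editor's note] As a rough illustration of the Windows-service analogue of NPD's systemd mode being described, here is a minimal sketch using the golang.org/x/sys/windows/svc package (Windows-only build). The service name is illustrative, and the actual monitor wiring is elided; this is a sketch, not the proposal's implementation.

```go
// Minimal sketch of running a monitoring daemon under the Windows Service
// Control Manager, analogous to NPD's systemd mode on Linux.
package main

import (
	"golang.org/x/sys/windows/svc"
)

type handler struct{}

// Execute is called by the SCM; it reports state transitions and reacts
// to stop/shutdown requests while the monitors run in the background.
func (h *handler) Execute(args []string, req <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) {
	const accepted = svc.AcceptStop | svc.AcceptShutdown
	status <- svc.Status{State: svc.StartPending}
	// ... start problem-monitor loops here (elided) ...
	status <- svc.Status{State: svc.Running, Accepts: accepted}
	for c := range req {
		switch c.Cmd {
		case svc.Interrogate:
			status <- c.CurrentStatus
		case svc.Stop, svc.Shutdown:
			status <- svc.Status{State: svc.StopPending}
			// ... stop monitor loops here (elided) ...
			return false, 0
		}
	}
	return false, 0
}

func main() {
	// svc.Run blocks until the SCM stops the service. The name is illustrative.
	if err := svc.Run("node-problem-detector", &handler{}); err != nil {
		panic(err)
	}
}
```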
C: Thanks, Jeremy. I think my first thought on this, and Dawn, I wish your mic were working as well, is that maybe we could use this as a good opportunity to assess the health of the NPD subproject itself, to make sure that the current
owners and contributors are still active in providing feedback, and whether we need to supplement them. And then I'd love to figure out, Jeremy, if this is an area that you could help out with, given the new interest. I'm thinking of names that we typically leaned on in the past for this component area, like Lantao; I don't think he's here today.
C: So that's the first question that came to mind: with new interest, can we expand or reinvigorate the health of the subproject itself? My first thought is that maybe, Dawn, you and I can work together to get a readout on the current state from the primary subproject owners, and then align this use case as an action item out of that readout. Is that fair, Jeremy?
H: I mean, that sounds fine to me. I looked at it; it looks like the project hasn't been updated in a while. There's been, I think, a steady stream of PRs and such, but it's not exactly active, obviously.
C: In general, as a call to action here: I will happily review this, I'm sure Dawn will review this, and then maybe we can queue up a discussion on the overall health of NPD to see if we need to supplement it. From a timeline perspective, did you have a release timeline in mind?
H: This proposal was published early in anticipation that we would need some review time and such, so we're trying to get ahead of the ball, and the proposal has been circulated as well. So I'm hoping that around the end of this quarter we would have some clarity on what needs to be done here, and then, hopefully, some runway after that.
C: Okay. And then my only other thought on this: we have some folks at Red Hat that are looking at Windows as well, so I'll rally them to also help look at this. But was this presented within SIG Windows itself?
H: Yes, this proposal was presented to SIG Windows a couple of weeks back.
C: Awesome, yeah. And as a PSA, I have three daughters, so at any moment one of them might come in and enjoy a SIG Node call. I'm happy to hear we're recruiting at all ages.
C: So we'll review the doc and we'll follow up on the doc. Thanks, Jeremy, for bringing this forward. With that, if there are no other comments on this, we'll move on to the next topic.
J: Yeah, hi, Chris here. This is just an update regarding the topology manager scope and the memory manager, and maybe there will be one question at the end. The topology manager scope is currently under review by Kevin; many thanks for the great review. Also thanks to Sergey and Francesco for their reviews. For the memory manager, whose KEP was merged recently, there's also an open PR and a documentation PR update, and any reviewer is welcome here, because the change is a little big.
J: So if you are willing to review, feel free to go ahead. And my question: we are wondering how best to implement the handling of reusing memory from init containers by app containers, the same as it is in the CPU manager. We have some initial ideas, and we thought that maybe it would be good to reflect that in the KEP. So my question is: should we open a PR for the KEP, and can it be merged anytime, or may it be stopped by some enhancement freeze or by the release? This is just an update to the KEP.
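[Editor's note] For readers unfamiliar with the CPU manager pattern being referenced: init containers run to completion before app containers start, so resources pinned for them can be handed to app containers of the same pod. Below is a minimal sketch of that reuse bookkeeping with hypothetical names; the memory manager's actual implementation may differ.

```go
// Sketch of init-container resource reuse, mirroring the CPU manager's
// static-policy pattern. All names here are hypothetical.
package memorymanager

// Block is a hypothetical stand-in for a NUMA-pinned memory assignment.
type Block struct {
	NUMANode  int
	SizeBytes uint64
}

type policy struct {
	// memoryToReuse tracks, per pod UID, blocks released by init
	// containers that app containers of the same pod may reuse.
	memoryToReuse map[string][]Block
}

// allocateInit records an init container's blocks as reusable, since the
// init container will have exited before any app container starts.
func (p *policy) allocateInit(podUID string, blocks []Block) {
	p.memoryToReuse[podUID] = append(p.memoryToReuse[podUID], blocks...)
}

// allocateApp satisfies an app container's request from reusable blocks
// first, falling back to fresh allocation for any remainder.
func (p *policy) allocateApp(podUID string, need uint64, fresh func(uint64) []Block) []Block {
	var out []Block
	reuse := p.memoryToReuse[podUID]
	for len(reuse) > 0 && need > 0 {
		b := reuse[0]
		reuse = reuse[1:]
		if b.SizeBytes > need {
			// Take only what is needed; return the remainder to the pool.
			out = append(out, Block{NUMANode: b.NUMANode, SizeBytes: need})
			reuse = append(reuse, Block{NUMANode: b.NUMANode, SizeBytes: b.SizeBytes - need})
			need = 0
			break
		}
		out = append(out, b)
		need -= b.SizeBytes
	}
	p.memoryToReuse[podUID] = reuse
	if need > 0 {
		out = append(out, fresh(need)...)
	}
	return out
}
```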
C: Yeah, so from my perspective there's no issue with discussing, on the existing KEP, a complicating edge case that was encountered during implementation, and I don't think that should be a problem with respect to release planning. It's just a good forum to have the discussion, so I have no issue with that.
C: The particular situation you're describing is just one of those situations we may have had some oversight on when thinking it through. I'm also just thinking about where particular containers could be writing, to get this accounted right. For example, an init container that writes to an emptyDir will have its memory charge transferred to the pod, never picked up by the app container, yet it still needs to be considered consumed. So we'll just have to work through some of those scenarios, but yeah.
C: Please supplement the existing doc with that. You had a lot of great diagrams when I was reviewing it earlier.
C: Maybe my ask: I know when we did the KEP, there were existing branches that had this work in progress. Was there anything about the two PRs you enumerated that deviates from the original KEP that you want to give special attention to?
C: And Kevin, you helped a lot in reviewing the KEP. Have you had a chance to look at either of these two PRs?
G: I looked at them early on when I was doing the review, because at the time the KEP was harder to follow than just glancing through the code to see how things were implemented. Then we worked together to get the KEP into shape so that it matched what was in the code. But I haven't done a proper, formal review of the code at all yet; it was more just glancing through it to see what was being done.
C: Okay. I see you're an assignee on at least one of them, Kevin. I'm wondering, given how intimately related they are, if you wouldn't mind at least taking a pass at 95479 as well.
G: Sure, I think that makes sense. I'm also in California right now instead of Europe, so that might help if we need to do any live discussions in the afternoon or something for the next two weeks.
C: So I just made sure they're both labeled with the milestone appropriately, so that any of us filtering our Kubernetes review queues can make sure they're not lost. Thanks, Kevin, for the help on this; we'll try to unblock the review. Are there any other items you want to raise on this? Otherwise we'll just move on to the next topic.
C: Awesome. Okay, so, Francesco, are you here to... yes? Okay.
L: Yeah, it's actually a very simple question about how to move forward. We have two PRs at the moment implementing the pod resources API, whose KEP was recently merged, and I'm just wondering how you folks prefer to have them. The reason I'm asking is that one of them, the one adding new fields to the existing API, is self-contained, nice, and ready for review, while the other is a bit more complex because, for example, I'm adding more end-to-end tests, and the watch implementation is still getting into shape.
C: I haven't looked at these; the two PRs are relatively new. Well, one is a little older, but the other one is. My general thought is that coupling a PR with the actual testing that goes with it is usually easiest, and so I don't know if either of the PRs you have presented here shows the e2e tests
L: that might come with it. At the moment the PRs are grouped by author: one is from me and the other is from Alex from Huawei, who I'm not sure is on the call. So you see, the reason I'm asking is that making just one PR is harder, because there are two authors involved. But again, if you prefer it that way, we can totally sort it out somehow.
C: Now, the only reason I alluded to Renaud was that he was doing work in the same release, in the same area. I also don't have it fresh in my cache whether that's all merged yet or not; I don't know if David Ashpole knows, if he's here today. So, just to make sure you don't conflict the graduation of this feature with that enhancement, it'd be good if you could just pair up with Renaud on figuring out the right path. Okay?
C: Well, we're making good progress here. So, Sergey, I think you're next on the agenda, yeah.
B: So, one of the low-priority items. I was looking at deprecated APIs in Kubernetes, and this is something I fished out of almost-rotten PRs: somebody suggested removing the PodUnknown state, and the initial PR was just marking this state as deprecated, which totally makes sense. Nobody sets the state, so I extended the PR and removed all the mentions of PodUnknown.
B: So now, one question that was mentioned in the original PR: how do we prevent people from setting PodUnknown going forward? I was thinking of including some panic in the code, but then I thought maybe that's too harsh. So I'm just curious what you think about it, and whether there is some guidance on how to proceed with that.
C: So Seth and Ryan and Renaud and myself and David Eads have been chasing down interesting edge cases in pod phase reporting that we've been seeing in our own e2e suites and such. My memory is bad, but the area where we were hitting PodUnknown,
C: potentially, I thought, was related to the node having been drained and the container no longer existing; or rather, the pod had never been drained, since it was a DaemonSet, and the host was restarted but the container record never remained. I thought we had an issue there around continuing to report the PodUnknown phase, but the changes I think we merged this past week had them all reporting the container as terminated, but...
M: Am I mixing topics? So, we never set the pod phase to Unknown. The kubelet can get into a situation where, if the container doesn't exist in the runtime anymore, it's pretty hard for the kubelet to determine whether the container in the pod has never run, or whether it ran, terminated, and got cleaned up by the runtime.
M: And so we just set the last-terminated field so that the phase detection logic in the kubelet works properly. It basically sets a tombstone in the status so that the kubelet knows the container has run in the past and is in a terminal state. But we never set the pod phase to Unknown.
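[Editor's note] A simplified sketch of why that tombstone matters; this is not the kubelet's actual phase code. Without LastTerminationState, a container that ran, exited, and was garbage-collected by the runtime looks identical to one that never started.

```go
// Sketch: distinguishing "never ran" from "ran and was cleaned up" using
// the last-termination tombstone in the container status.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// hasEverRun reports whether a container has run at some point, using the
// current state or the last-termination tombstone.
func hasEverRun(cs v1.ContainerStatus) bool {
	if cs.State.Running != nil || cs.State.Terminated != nil {
		return true
	}
	// Tombstone: the runtime already cleaned the container up, but the
	// status still records that it terminated in the past.
	return cs.LastTerminationState.Terminated != nil
}

func main() {
	cleanedUp := v1.ContainerStatus{
		Name: "app",
		// Current state is Waiting because the runtime no longer has any
		// record of the container.
		State: v1.ContainerState{
			Waiting: &v1.ContainerStateWaiting{Reason: "ContainerStatusUnknown"},
		},
		LastTerminationState: v1.ContainerState{
			Terminated: &v1.ContainerStateTerminated{ExitCode: 0},
		},
	}
	fmt.Println(hasEverRun(cleanedUp)) // true, thanks to the tombstone
}
```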
A: Okay. Originally we really did have that phase, Unknown, and the majority of cases I experienced were with production nodes. The majority is...
A: ...when we haven't heard from the node, like the status not updated for something like five minutes, which is configurable. At that time we don't know whether the container is still running or not; it could be either. Then we would say it's Unknown. We fixed that problem for many people, but yeah, I think you still see it in today's production sometimes.
A: Yes, that's the major thing, basically. That's also by design, initially, why we have that state: because we don't want to guess and take another action. Instead we give that information to the controller, like the Job controller or whatever controller, to make the decision. So unless we resolve all of this kind of ambiguity, I don't think we can remove it; maybe we could use other states or other names for the phase, but...
C: Yeah, so my own history says the kubelet should never report Unknown status, but an external agent, when the kubelet is unable to heartbeat about that pod, may report Unknown status. And so, I guess, Sergey, what was motivating you to explore this? And then we should probably...
B: So the motivation for the original PR was that there is no code that sets this status, I mean in the entire k/k repository. Since nobody sets this phase, why would we keep it?
C: So, for my own re-education, I'd want to go back and see if anything had lingered there, and whether the code there changed when we went to taint-based evictions versus the prior behavior. It could be that there used to be something being set that got removed. So probably the first thing that would be helpful for us is just getting a document that enumerates the history of this phase, because in my head it's the node status that goes Unknown, not the pod phase, and so getting that written down is probably the best thing we could do right now.
C: So, yeah, we have to investigate the right thing, and then I'm also curious whether other satellite projects, like Virtual Kubelet, ever set this phase. But that's maybe a separate exercise we could explore.
B: Okay, okay. So the next item, since we have time: I am writing this PR on promoting RuntimeClass to GA. Disabling the feature flag is easy; one thing that I also want to do is promote the v1beta1 API into a v1 API, and as part of this,
B
Fieldpod
overhead
also
goes
to
the
same
folder,
because
it's
really
hard
to
detach
this
field
out
of
runtime
class.
So
now
there
is
a
v1
api
like
node
api,
and
there
is
a
port
overhead
field
inside
it,
and
I
was
wondering
how
bad
it
is,
and
is
there
any
better
practice
to
do
that
or
it's
just
fine
to
have
beta
field
in
v1.
K: Okay, is there... was there discussion on making pod overhead GA as well, or is that further down the road?
C: I guess we can measure risk, but I don't see a reason why we would treat this any differently than when we add a new field to any existing v1 API. So my first reaction is that there's not a problem here, as long as the validation code allows writing to that field if and only if the PodOverhead feature gate is enabled. I don't think that the presence of that field should block the promotion of RuntimeClass generally. So I don't think you need to couple the two, Sergey, and it should be fine.
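[Editor's note] The usual pattern for guarding a gated field in a v1 API is to drop it in the registry strategy when the feature gate is off, unless the existing object already uses it (so updates don't wipe data). A minimal sketch with simplified signatures follows; this is not the exact RuntimeClass strategy code.

```go
// Sketch of feature-gate-guarded field dropping for a beta field that
// lives inside a v1 object.
package runtimeclass

import (
	node "k8s.io/api/node/v1"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
)

// dropDisabledFields clears gated fields from newRC before persisting it.
// oldRC is nil on create.
func dropDisabledFields(newRC, oldRC *node.RuntimeClass) {
	if !utilfeature.DefaultFeatureGate.Enabled(features.PodOverhead) && !overheadInUse(oldRC) {
		newRC.Overhead = nil
	}
}

// overheadInUse reports whether the existing object already set Overhead,
// in which case updates must not silently drop it.
func overheadInUse(rc *node.RuntimeClass) bool {
	return rc != nil && rc.Overhead != nil
}
```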
C: On this, we never followed up on who wants to review this work with you, Sergey. Was it Tim who was going to help you, or do we want any other volunteers to help shepherd the review? I don't know, Eric, if you would like to assist, or...
C: All right, thank you. I think that's everything on today's agenda, so we're relatively prompt. Before we adjourn, are there...
C
I
I'm
open
to
everyone's
perspective
on
if
that
helps,
or
not,
I'm
not
adverse
to
to
trying
that
out.
So
I
guess
seth
or
kevin
or
dawn
or
any
of
the
other
approvers
or
active
reviewers.
We
have
today
on
the
call,
if
we
think
that'd
be
helpful,
I
think
that's
fine.
I
think
probably
many
of
us
just
use
gubernator
to
figure
out
what
is
the
the
area
that
we're
going
to
go
hit
on
any
given
moment,
but
we
could
probably
be
more
effective.
C: We should probably think about, Sergey, if you wanted to help shepherd this, whether we want a more dedicated issue triage process, where we actually say, if we use it, that there will be a recurring set of individuals who are actually doing it. That probably means figuring out the right cadence to run that triage and drawing attention to it.
B: I think the triage label may help us, and I want your feedback on whether it's an actual problem. We want to make sure that issues are being looked at and that we have some decision on every issue, like whether we're pursuing it or whether we are okay with somebody else pursuing it. So I was thinking we can start using the triage label there. I don't think it's very useful for PRs, but it may be. What is your perspective? What problem do you want to address?
C: So if we want to push on that, let's try to see if we can get volunteers to do it. I don't know, Dims, I see you on the call. I know I've talked to Lori a little bit about this in past interactions, but I view this as a question of how we scale folks, rather than asking how to further time-slice an existing small set of folks.
I
That
let
me
think
about
it,
I'll,
let's
follow
up
on
slack
I'll
I'll
see
if
I
can
block
like
30
minutes
every
week
to
do
that.
That
might
be
a
good
start.
I
think
yeah.
We
might
not
need
all
the
approvers
there,
but
we
need
sufficient
people
sufficient
eyeballs.
You
know
it
will
only
work
if
this
there's
a
good
crew,
that's
consistently
participating
over.
C: All right, excellent. Thanks to everyone who engaged today and drew attention to items that we can try to unblock, and we will meet again next week. Bye.