From YouTube: 20191030 SIG Arch Prod Readiness Reviews
A
This is the research and pilot phase, so we're just trying to figure out what the hell we're doing and to start setting the team back up. Thank you — we're all here, or a bunch of us are here — and the idea is that, for now, these are not blocking reviews.
We want to use this experience to develop which questions are most relevant, but not hold anything up in this cycle. For the questionnaire, we have a start on it, but it needs a lot more work — this section, for example, is completely empty. We can talk in a minute about who may want to start adding to that.
A
Another thing we discussed in the kickoff meeting previously was the idea of talking to individual operators and coming up with some sort of interview to help us find out the kinds of production issues people are actually having in the real world, rather than the theoretical ones or just the things some of us have seen. We'll try to broaden that view of people's issues and then use it to help develop the questionnaire, as well as to make sure that the things we ask are meaningful.
A
Another item that was brought up was the idea of evaluating current KEPs that have stalled. CronJob was mentioned in particular as one example of this, where the feature is stuck in beta, in part because of issues relating to production readiness and how it behaves as far as reliability in a production environment.
A
So
take
a
look
at
those
and
evaluate
those
and
try
to
understand
and
postmortem
summer
meals,
the
kinds
of
problems
that
was
seen
that
it
caused
things
to
get
stuck
and
how
we
might
do
a
better
job
of
preventing
those
things
earlier,
rather
than
you
know,
at
the
end
of
the
design.
Rather
in
the
development
cycle,
we
were
left
with
some
open
questions
around
the
scope
of
this
effort.
Right
now,
we're
talking
about
kind
of
the
production
readiness
review
that
you'd
want
to
do
at
alpha
beta
GA,
to
have
phases
and.
A
Actually, I can't recall exactly — I should have taken better notes, apparently — but specifically on feature lifecycle: whether this meant going all the way down to deprecation of the feature, and whether we would extend the process to that. When you deprecate a feature there are issues around transition and that sort of thing. Finally, how do we measure the effectiveness of this effort?
A
Awesome. Wow, this could be a short meeting. So, one of the things to make that more concrete: we need to review some KEPs. We need to take our questionnaire and actually evaluate it against some KEPs, so we need to select those KEPs for review. I think, Wojtek, you have at least one in mind already that you want to do from a scalability perspective.
B
I think that for now I have two. For one, I asked the author to actually take a look at those questions and see if they make sense and whether he can potentially answer them — and actually he has already answered all of the questions from the questionnaire; I just haven't yet had a chance to look into the answers. I will try to do that either tomorrow or Monday, because Friday is a holiday in Poland. And I have one more in mind where I haven't yet talked to the author.
A
Okay — can you put those in the... so, mechanically, how do we want to do this? If people are reviewing and answering these questions, are they just answering them on an issue, or where are they answering? Ideally, some of these are not just questions we want people to think about: operators will need the answers, so they have to be captured in some kind of playbook, some kind of place where we do this. It could be in the KEP; it could be a separate document.
C
I guess I have one question about trying to predict ahead of time which KEPs are going to cause problems. I can look at the CronJob bits and say that a KEP-level discussion probably would not have set off red flags for me for CronJob if I went back in time to — what was it, three, nine, or whatever it was — when it came in and then exploded clusters. I think Google had some trouble; I know Red Hat did — we had a very...
E
Hey folks, it's Mishnah here. I just wanted to say something that might be useful in this context, which is that at Google we have this concept of playbooks. Every feature that gets launched must have a production playbook. If we required something similar for community features, the details you are referring to, David — around metrics and logs — all those details, and how to use them, could be captured there.
A
So part of the thing I would want to do, then, is figure out — part of what came up in the previous discussion of this work was: okay, who should we talk to, which operators? There are cloud providers — Google and, you know, other vendors — but there are also enterprise people running these. So which people does this group have access to that we might be able to interview in a more structured way?
A
Okay, so this is one aspect: this research is one of the things. What I would ask is: is there somebody willing to take point on putting together a draft of an interview — the kinds of questions we want to ask people — and putting together some data sources? That's one data source, and interviews are another, but really it's the research aspect of what kinds of failures we see, and then gathering the data around that so that we can hopefully address them in the future. Any takers on that?
A
Well, let's set some deadlines. I want a draft of the questionnaire, or we want to identify people we can talk to — something like that, right? You don't have to have all the knowledge yourself, and I think none of us would expect that. So this aspect I'll call the research aspect — this, and the field research.
A
Any other comments on that area, the field research, trying to determine that? This is separate — I think the post-mortem on KEPs is a separate, different thing, unless you folks want to take that too; but to me these are different aspects. It looks like there's something in the chat: could we not use the Kubernetes failure stories? So yeah, Valerie, you may just want to take note of that and think of it as yet another data source.
A
Do we want to assign them to individuals and have them reach out to the authors and go through the list of questions with the authors, or at least poke the authors to go through that list? It's in some sense voluntary at this point, so it might be a little more difficult to get answers out of people. Any thoughts?
F
Just to maximize use of time, it might be good to match people up one to one and have them go look at something and bring back a few points of interest. I don't think we're trying to be exhaustive at this point, just trying to catch low-hanging fruit. I think it would be good to think through KEPs from various aspects — so there are a couple there for scalability.
A
So here it sounds like what you're talking about, Jordan, is almost more post-mortem types of things. So there are two things we need to do. One is to look at previous KEPs — things that are already implemented and maybe caused problems; that's really important. And then there's also sort of testing out, or evaluating, this process and these kinds of questions against in-flight KEPs, to see if it elicits anything useful. Wojtek, when you're looking at these scalability ones, I think that's what you're doing.
F
There's monitoring, or other aspects of actually administering this feature: it's not really a gap in the feature itself, but it doesn't do a good job of describing how a cluster admin would use this, right? Okay, so — not that we have to limit ourselves to these, but if these are the categories of questions, we want to figure out whether they would be helpful by kind of measuring them against existing KEPs, right?
A
Okay, so what I would ask then — what that means is, whoever is going to volunteer (I'll put myself down as one), whoever is going to volunteer to do this part of it: you'd look through the KEPs, find one that you think needs this treatment or could benefit from it, and coordinate with the other people on this team. So I guess I'll need to set up a Slack channel for this team — that's another action item.
A
Then what we would do is: if you find one you want to do, put it on the Slack channel so that other people don't do it too, and then it'll be up to you to go actually talk to the author of that KEP, run through the questionnaire, and also think about or discuss with them — review the KEP and see what other questions are missing from the questionnaire that might help catch these kinds of issues.
A
Of these questions that are on here — that's where they come from. I didn't know we had a draft already; my apologies, I missed that part. So, well, that's another task, Quinten, that we need to do: look at this draft. Like, I have a bullet here with nothing underneath it. I probably could go dig around and find the things that people are asking about here and add them in, but I just didn't at the time.
A
Somebody suggested this section and I just put it in, but this is just a very rough draft, which had a bunch of scalability questions filled in. I mean, one of the goals we have to think about: sometimes the reviews here can be a little more painful than we want them to be, right? So there's always this balance. Wojtek, I know, was careful in putting in the SIG Scalability questions to try to make them things that people could answer without too much effort.
A
So, Wojtek, you're coming at this from a scalability perspective. Should we be reaching out — could we — to other SIGs that might have some skin in this game, that might have things they want to surface from their point of view, as opposed to this being a central thing? I think I'd like to make sure we get the viewpoint of other folks.
A
In some sense this is all of our responsibility, but I don't think anything gets done when it's all of our responsibility. So I would like it if somebody — anybody — can volunteer to take point on some section of this, or just volunteer now (not necessarily to take point) and say: I'm going to go through this, think through each of these questions, and try to add to or comment on them. We did get people's comments when we submitted this initially, but obviously it's not done.
A
This was, in particular, just the ones around production issues. I mean, okay — well, you bring up a point. One other question I had from our kickoff: should efforts like David's beta KEP, or issues around alpha, beta, and GA criteria, be part of this process? But I guess maybe we need to just do one thing at a time; we have enough here today.
A
So how about this: who is willing to identify, I don't know, three or four different KEPs that were stalled, to do this? I mean, we know CronJob is one. Is there actually a KEP for that? This is before my time — is there actually a KEP for it, or is that pre-KEP process, isn't it?
A
Post-mortem — Jordan, we'll just start with: let's select some to investigate, and then next time maybe we can put the screws to somebody to actually deliver something. Resolution of open questions: does anybody have thoughts on this, in particular the two questions that were open? Well, I think we just said no, we don't want to deal with that right now — maybe eventually, but not now. That's my opinion. But this one, I know, Quinten...
H
It's still the same problem that we had a while back: there was a phase when we would release each version of Kubernetes and then there would be like 10 patches that came out very soon after, which presumably were the result of some kind of flaws. I don't know if that number is a reasonable metric — if that number got smaller, would that be a good thing? Not sure.
A
And actually, that is one of the questions, right: what was the reason for each of those? Where do the answers lie? And so, like you're right — if nobody's documenting those now, then there's certainly a benefit to getting them documented.
H
Yeah, I think we have a very real risk. You can do a million things to put into these readiness reviews, and some of them are subjective and some of them seem very nice to have. But unless you actually measure the thing that you're trying to make better — which presumably is reliability, in the broad sense of the term — then it's very, very difficult to work out whether you're just wasting effort.
F
Well, we have to measure something, and if we're looking at the artifacts that we as a Kubernetes project produce, it's components and config and documentation for how to run these things. So if we measure, say, "do we document exactly how you should run these things, yes or no" — even just measuring that and seeing whether it's trending up over time, right.
E
I think you can just gather it over time, right? If you start, say, two months from now, saying that all new KEPs have to do this, then maybe six months from then we can see how many features went in and ask for data on whether they're actually manageable in production. Again, this is crowd-sourced — it's data from the community, so it's subjective — but we can go back in time and evaluate how well we did and whether this process actually helped.
F
Maybe there's also the possibility of synthetic tests, or upgrade tests, or scale tests, or synthetic workloads that are under the control of the project, that we can ask questions of — like, for this bug report that we got, do we have a synthetic test that would exercise this scenario, and if not, why not? And if we do, then measuring the health of those tests over time can be useful.
H
That sounds like a very useful metric actually, and not subject to some of the other problems. I mean, we get bug reports retroactively; they get triaged, they're either real bugs or they're not, and for the real bugs — what percentage of them... I guess the answer is that zero percent of them had tests, otherwise they wouldn't have gotten into production. But being able to measure that thing, yeah.
F
I mean, we generally do well adding test coverage for bugs that get fixed. But asking questions like: is this a category of bug — it's good that we've added a unit test around this particular thing, but is this a category of bug that a higher-level test could have caught? I think that's a good question for this group to ask, like, there are a lot of people who are involved in that. Yeah.
H
The other way is: presumably we'll have some anecdotal evidence to suggest that this group sitting here is useful, because there's some problem we're trying to solve, yes? I assume that is some production outages or problems. Is that data measurable in some way, or is it just that people have a gut feel that we have a problem here worth solving?
A
I think individual organizations, depending on how — so, as I said, I think you can look at the number of failures or such problems you see. There are two different things you can do. You can say: if all new features are going through this PRR process, then we don't have to attribute failures back to an individual feature in order to see a measurement, or a signal, about whether this is succeeding. We would see it if, over time, we get fewer issues out of newer versions of Kubernetes that include features that have gone through this process.
F
Then that seems like a one-question survey: on a scale of one to ten, how difficult was it to start supporting the new features in this release? That's a really easy thing to measure, and then it's a really easy thing to say: well, we tried to make improvements in this release, we tried to document stuff better and be really clear about how you address these things — on a scale of one to ten, how did we do in this release?
E
I think, John, a sort of value related to this is mean time to response, so that should also be something that we measure and should be part of the questionnaire. You could ask, for example, was this really stable, or how many issues did you hit — but we should also ask, when you hit those issues, how quickly were you able to resolve them, and were you able to find the documentation you needed to figure out what was going wrong?
A
Okay — okay, that's great. Anything else? We have about eight minutes left and I've gotten through sort of my agenda. My item here is assignment of work to individuals, and I do think we've done that in general — there's somebody assigned to every one of these, at least one person. Each group that's assigned to one of these items really should take responsibility for doing it for next time: it should either get done...
A
If some of them are ones you can just get done, get them done; for other ones, put together a brief couple of lines that says here's what I expect to get done, or what we should get done, over the next few weeks. One thing I did want to bring up: there is a 25-minute session at the summit to talk about this. So maybe next time — obviously we're not going to be done; this is a process that's underway — we want to discuss whether we want to use that time.
A
It's
only
a
couple
weeks
away
to
to
to
recruit
people
for
these
kind
of
survey.
You
know,
surveys
or
interviews.
We
certainly
want
to
go
over
the
process
and
what
we're
planning
on
doing
and
and
internally
give
people
a
heads
up,
but
also
a
week.
We
should
discuss
and
think
about
what
we
want
to
solicit
from
people.
So.
E
On that note, a lot of this work could be done efficiently if the reviewers and approvers that we have for the various subprojects are familiar with the questions that you're trying to put up here, because a lot of these issues in theory can be caught as part of the code review process and the design review process. So we should make sure that we try to engage and empower as many reviewers and approvers as possible and make sure they're familiar with this process.
A
And ideally — this is something Jordan and I talked about a long time ago; he suggested that we see if we can come up with a way to do this that can essentially be handled by existing tooling. One thing we did is: you have to add a line to a file, and then somebody has to approve it, and the OWNERS file basically flags the people who are able to approve that. So if you go back to this questionnaire — maybe for feature enablement, there's a set of people that can review that — then when you go to enable a feature, there's tooling that says: hey, you're enabling this, you're getting rid of this feature gate and making it GA, or you're adding a feature gate; in order to do that, you need review or approval by this specific set of individuals. The idea being that we don't want there to be a centralized process. I don't know — we'll do what we have to do to make this effective, but I'd prefer not to have one team that's bottlenecking things, and rather have that responsibility distributed: like, the scalability section goes to SIG Scalability, right — obviously that one's easy, and nobody else necessarily has to be able to understand it. So anyway, those are things to think about for next time.
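To make the "add a line to a file, gated by OWNERS approval" idea concrete, here is a minimal sketch assuming the feature-gate registration pattern Kubernetes components use (as in pkg/features); the feature name and the reviewer group implied by the comments are hypothetical, not something decided in this meeting.

```go
// Sketch only: registering a hypothetical feature gate the way Kubernetes
// components do. Promoting it (Alpha -> Beta -> GA) means editing this one
// line; an OWNERS file in the same directory can then require approval from
// a designated set of production-readiness reviewers before the change merges.
package features

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

// MyHypotheticalFeature is an illustrative gate name, not a real feature.
const MyHypotheticalFeature featuregate.Feature = "MyHypotheticalFeature"

func init() {
	utilruntime.Must(utilfeature.DefaultMutableFeatureGate.Add(
		map[featuregate.Feature]featuregate.FeatureSpec{
			// A reviewer looks at exactly this line: is the default right,
			// and is the declared maturity (Alpha/Beta/GA) justified?
			MyHypotheticalFeature: {Default: false, PreRelease: featuregate.Alpha},
		}))
}
```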
F
Yeah, I think distributing the work — like having the SIGs that are cross-cutting come up with a set of questions that are kind of leading questions: if you don't have any idea what the answers to these questions are, maybe you need to think about this dimension of your design. And so I like what SIG Scalability was doing, trying to boil down some of the things they know are issues into really straightforward questions.
F
Some of them are straightforward, and then, like you said, having something in the tooling so that when a new feature shows up — say we want to gather documentation about metrics — let's go ahead and make it structured: you can say, if you have this feature, here's where you need to write out what the metrics for it are.
F
So
something
like
that,
so
the
tools
aren't
like
like
they're
stars
of
yet
human
involved,
but
at
least
prompts
you
to
say.
Oh,
this
is
something
you
need
to
gather
like
people
are.
Gonna
have
to
run
this
so
gather
this
information
put
in
here
and
then
a
human
can
review
it
and
make
sure
it
makes
sense
right
and.
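As a rough illustration of the structured prompt being described here — tooling that asks the KEP author for operational details and leaves a human to judge the answers — this is a small sketch; the type and field names are hypothetical and are not an existing Kubernetes or KEP-tooling API.

```go
// Sketch only: a machine-readable block that a hypothetical KEP checker could
// require the author to fill in before a feature gate is promoted.
package prr

// OperationalInfo captures what a cluster admin would need to run the feature.
type OperationalInfo struct {
	KeyMetrics  []string `yaml:"keyMetrics"`  // metrics an operator should watch
	PlaybookURL string   `yaml:"playbookURL"` // where the operator-facing docs live
}

// Complete reports whether the author provided enough for a human reviewer to
// sanity-check; the tool only prompts, it does not replace the review.
func (o OperationalInfo) Complete() bool {
	return len(o.KeyMetrics) > 0 && o.PlaybookURL != ""
}
```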
A
We
want
the
review
burden,
I
guess
when
I
was
trying
to
express
is
we
want
the
review
burden
to
be
narrow
enough
that
it
doesn't
create
like
if
you
create
an
enormous
process
that
you
know
one
individual
has
to
go
through
and
review
every
single
question
on
this
list,
making
everything's
answered
in
the
best
possible
way.
It's
gonna
be
a
disaster
right,
so
if
we
can
show
that
out
to
20
different
people
that
are
reviewing
each
individual
little
bits
of
it,
I
think
we'll
get.
B
My personal feeling — at least what I would ideally like to shoot for, for scalability — is that we will also come up with some criteria for the answers to those questions, where, if the answers are within some thresholds, or all the answers are "no" or something like that, then we will rely on the approvers of the KEP, and only the others — or whatever we call them — get escalated to SIG Scalability.
B
If
it's
not
obvious
what
the
impact
of
the
feature
is
is
and
I
think
we
should
ideally
try
to
do
something
similar
with
all
of
all
the
questions
here,
just
to
like
reduce
the
the
load
and
the
pain
on
on
the
outers
of
the
cap.
Because,
like
the
more
reviews,
you
need
to
pass
the
more
time
of
the
table.
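One possible reading of the threshold idea, sketched out: the KEP approvers answer the scalability questions themselves, and only answers outside agreed limits trigger a SIG Scalability review. The field names and limits below are hypothetical, not criteria the group agreed on.

```go
// Sketch only: an escalation rule of the kind being proposed.
package prr

// ScalabilityAnswers holds a couple of illustrative questionnaire answers.
type ScalabilityAnswers struct {
	ExtraAPICallsPerSecond float64 // estimated additional API-server traffic
	AddsPerNodeWork        bool    // does the feature add work on every node?
}

// NeedsSIGScalabilityReview returns true when an answer falls outside the
// (hypothetical) thresholds, meaning the KEP approvers would escalate.
func NeedsSIGScalabilityReview(a ScalabilityAnswers) bool {
	return a.AddsPerNodeWork || a.ExtraAPICallsPerSecond > 1.0
}
```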