From YouTube: 20191008 SIG Arch Prod Readiness Reviews Kickoff
B: All right, so we are recording, and this is the kickoff meeting for the production readiness group for SIG Architecture in Kubernetes. It's October 8, 2019, and thank you everybody for joining. Please put your name in the attendee list so that we know who's involved here, and, if possible, if there's anybody who could take notes; otherwise, I can try to take notes. I'll just give a quick intro, and then we can maybe talk about who the people are.
B: Well, we're still maybe growing quickly, but at some point you have to hit a level of maturity where people are really depending on this, obviously, in many production environments, and we have to be really careful about what gets rolled out. So the idea is to try and introduce some of the process and, you know, gates that would help us make sure that that's true. I'm John Belamaric. I have a KEP out there, which I wrote sort of along with Vish, who some of you may know, who's...
B: ...not here; he's on leave right now, but coming back soon. He and I were talking here with another person at Google about how we do this, and that's where the KEP came from, so I'm hoping to get people's feedback and see where we can take this. I guess maybe I'll ask for interest from people based on the order they're listed. So, Gerred, you're up next, I guess. Yeah.
A: Hi, I'm Gerred Dillon. I'm an engineer at D2iQ, and I'm a developer on the KUDO project, so mostly I've been interested in controller-runtime and CAPI machinery. Jason and I just talked at the last KubeCon, and we've been talking ever since about doing some more work over in SIG Architecture, and with everything going on now, it seemed like a great time to make that happen. So, excellent.
H: Hey, Jordan Liggitt. I work at Google, have been with the project for a while, and have been involved in a lot of different aspects of bringing features to GA status. I would like to see that made more consistent: take the learnings from some areas that we think were done really well and help make that more consistent.
B: ...project, or, if it's going to disband after a while and it's cross-cutting, then maybe a working group, but I want to get people's thoughts on that. Two of the things that I thought are somewhat related or adjacent in some way are the alpha, beta, and GA criteria, and the KEP that was recently submitted around the beta transition policy. So this is...
B: So that's the other goal there. If possible, I want to avoid brand-new tooling; I want to use the existing tooling that we have. Those are my goals around it, and I guess I'm looking for feedback and thoughts. I did have an initial proposal here: document what the criteria are, come up with a questionnaire, and the tooling aspect of this would be that each of the questions in the questionnaire would be able to be...
B: ...would essentially, one, we look at whether there's tooling that can evaluate whether that question is true or not, or answered properly or not, without any additional human input. Otherwise, it could be somebody entering a line in a file somewhere that describes something about the answer to this question, and the relevant people would approve that, that sort of thing.
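The mechanism John describes, where each questionnaire item is either machine-checked or answered by a human and approved, could look something like this minimal sketch. All names, fields, and the check function here are hypothetical illustrations, not any existing KEP tooling:

```python
# Hypothetical sketch of a production-readiness questionnaire in which each
# question is answered either by an automated check or by a recorded,
# approved human answer. Names are illustrative, not real tooling.

def check_feature_gate_exists(feature):
    # Stand-in for real tooling that would inspect the source tree.
    return feature.get("has_feature_gate", False)

QUESTIONS = [
    {"id": "can-be-disabled", "text": "Can the feature be disabled?",
     "auto_check": check_feature_gate_exists},
    {"id": "rollback-tested", "text": "Has rollback been tested?",
     "auto_check": None},  # needs a human-entered answer
]

def evaluate(feature, human_answers):
    """Return {question id: True/False/None}; None means still unanswered."""
    results = {}
    for q in QUESTIONS:
        if q["auto_check"] is not None:
            results[q["id"]] = q["auto_check"](feature)
        else:
            ans = human_answers.get(q["id"])
            # A human answer counts only once a relevant approver signed off.
            results[q["id"]] = ans["value"] if ans and ans.get("approved") else None
    return results

feature = {"name": "ExampleFeature", "has_feature_gate": True}
answers = {"rollback-tested": {"value": True, "approved": True}}
print(evaluate(feature, answers))
```

The point of the split is exactly what the transcript describes: automate where tooling can decide, and fall back to a reviewed line in a file where it cannot.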
B: You're asking how this relates to the process? I mean, I think the criteria involved are going to be written by people, hopefully on this call, who have a lot of experience running these clusters. But some of the questions are things like: can this feature be disabled in production? Certainly in a beta phase or an alpha phase we need that ability, right. I mean, I think that we already have that policy in place, captured somewhere, but whether we have any sort of enforcement of that or not, I'm not sure. Yeah.
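The "can this feature be disabled" policy could, in principle, be enforced mechanically. A toy sketch, assuming a per-feature record of stage and gate behavior; the data and function names are invented for illustration, not real Kubernetes tooling:

```python
# Illustrative policy check: every alpha or beta feature must be behind a
# feature gate that can actually be turned off. GA features may lock on.
# The feature records below are made-up example input.

FEATURES = [
    {"name": "FeatureA", "stage": "alpha", "gate_can_disable": True},
    {"name": "FeatureB", "stage": "beta",  "gate_can_disable": False},
    {"name": "FeatureC", "stage": "ga",    "gate_can_disable": False},
]

def policy_violations(features):
    """Pre-GA features that cannot be disabled violate the policy."""
    return [f["name"] for f in features
            if f["stage"] in ("alpha", "beta") and not f["gate_can_disable"]]

print(policy_violations(FEATURES))  # -> ['FeatureB']
```

A check like this is the kind of "enforcement" John is asking about: the policy already exists on paper, and the gap is a mechanical gate that fails when it is broken.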
H: The beta thing especially. Like: is this feature required to start up a cluster? If it's required at the beta level, that's concerning. That probably means there's a mismatch between how stable it actually is and how stable the things building on top of it expect it to be. So that's a really key one, and it's not actually something we enforce today. I would like to see our conformance tests run with no beta APIs enabled.
A: There's a bunch of post-GA work as well that's ongoing, and I guess the specific thought I had around that was the establishment of SLAs and SLOs around the behavior of v1beta1. Performance is mentioned a little bit in this KEP, but I guess my question would be, within this production readiness, and maybe something we can either automate or that may require a human: at what point is something expected behavior, versus having to open an issue for it and it being treated as either a regression or a bug in the system? I look at that specifically because of CRD v1, because the GA work for that included SLAs and SLOs. So I'm not sure if that fits under the existing monitoring requirements or is something that we want to capture here, but I think it is important in stabilizing something from beta to GA.
B: This is an example of some of the things we would consider. But if we feel that this KEP is something we want to move forward with, and I'm getting the feeling at least the people on this call think so, then part of that would be defining that checklist, which would be something everybody contributes to, I think, and that would become a separate document within the repository. But then, additionally, along with that, we would implement some sort of process to ensure that those questions are answered, that the playbooks are available, et cetera. So yeah, I think definitely SLIs and SLOs are important. I guess what you're trying to say is: if we define these SLOs in this process, does that mean that if we're not meeting those SLOs in some production environment, we raise an issue? Or in some test environment, which may be more achievable?
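To make the SLI/SLO discussion concrete, here is a toy example of turning a latency SLI into the pass/fail signal being described, whether evaluated in CI or against production telemetry. The sample values and the 100 ms target are invented for illustration:

```python
# Toy SLI/SLO check: compute a p99 latency SLI from samples and compare it
# to a target SLO. Numbers are made up; a real pipeline would pull samples
# from monitoring.

def percentile(samples, p):
    """Nearest-rank percentile; good enough for a sketch."""
    ordered = sorted(samples)
    rank = max(1, int(round(p / 100.0 * len(ordered))))
    return ordered[rank - 1]

def meets_slo(latencies_ms, slo_p99_ms):
    """True when the observed p99 latency is within the SLO target."""
    return percentile(latencies_ms, 99) <= slo_p99_ms

latencies = [12, 15, 11, 14, 200, 13, 12, 16, 14, 12]
print(meets_slo(latencies, slo_p99_ms=100))  # -> False (the 200 ms outlier)
```

A failing result is exactly the trigger point the group is debating: in a test environment it fails the run; in production it would justify opening an issue as a regression rather than "expected behavior."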
H: So I think there are two dimensions there. One is changes to existing features, and needing to understand: I've got this GA feature in use in my cluster, I don't have the option to disable it or cut off access to it, and now, in a new release, a thing is happening to it, like some new feature, some new fields, some new whatever. Is this going to regress performance? That's a really hard question to answer.
H: And then, when you go to make those changes, if you're changing an API that already exists, that people are already using, that's already GA, you can have a bar to measure things against and say: okay, we added these new features and we are still within the previously specified SLO or SLA. For brand-new things, like "hey, here's a new API that no one has used before, it's newly beta, newly GA": to me, it's making sure you define those things, so you're setting expectations correctly for someone who's going to go use it.
B: Absolutely. I mean, I know when we did CoreDNS we had no information about what our GA criteria were. I had no idea, because I didn't know how the current system behaved, so I couldn't say very easily whether we were better or worse. And even our testing, when we did finally come up with some things: the testing there is thin, and the existing scalability suite really only tested a very narrow slice of what we needed to test in order to have a sense of where the big problems lie.

D: I don't personally operate a lot of these clusters and do the upgrades and things. Do we have a sort of community-accepted set of, you know, known problems today? Like, do we get a lot of performance regressions? Where are the big problems to solve? Because I...
B: I think that's a great question; I don't have an answer offhand. One of the other folks involved here is on our SRE teams, and I would be pulling in those folks to try to bring the experience of running tens of thousands of clusters into this and informing it. But I know we have other people who are operating clusters here, and I'd love to hear from them right now.
C: So I can speak somewhat to that, probably with more conjecture than I'd like; I can probably come back, maybe next meeting, with some scrubbed data. But, you know, the primary challenge that we see really is just users keeping up with the upgrade cadence and keeping clusters up to date, and kind of languishing with beta features that have graduated and become stable but, due to lack of staffing or operational resources, not being able to actually utilize them.
C: I know that's orthogonal to this discussion; I'll try to come up with something next time that is a little bit more meaty from the tech side. But overall, rollback is something that we consider to be somewhat problematic. Like, for in-place upgrades of a Kubernetes cluster, if those fail, rollback is particularly challenging.
C: We've kind of adopted, or sometimes recommend, an immutable type of upgrade, where you just basically throw machines at it and have them join the cluster via kubeadm or something like that, to avoid some of those pitfalls. But I'll come back next time with some better data on that front. Okay.
B: I mean, yeah, like I said, this is more of a kickoff to try and organize the effort, as opposed to making substantial progress on the effort itself. Along those lines, we've kind of drilled into the discussion a little bit. Jordan, it looked like you were going to say something, or was that my imagination? Okay, so maybe we should back up a little bit and sort of have that discussion, right. To me, this is actually the main thing I wanted to get out of...
B: ...this call: how we're going to organize this going forward, so that we can just sort of make progress on it. Do others think that these adjacent items are within the scope of some group? Like, do we need to define a subgroup of SIG Architecture in this case to manage and sort of shepherd through these kinds of efforts? Or do we think that's just unnecessary extra overhead and we should just treat them as individual efforts going forward? Any thoughts on that?
A: There may even be some information discovery. I put some thoughts in the chat as well, and, you know, about what Valerie said: maybe the first step is to gather information as we advance this KEP. One of the things I listed there was figuring out what features are failing to thrive, or failing to advance through this process and could use that help, and what help they need. I think that might help determine whether a group like this is necessary. Yeah.
B: I think where these come together, the beta transition policy and this KEP that's the subject of this call, is that, ultimately, we don't want the transition policy to push things to go to GA before they're ready just because they're going to time out. So, Dims, I think you're saying CronJob is another one of these that's lingered; is that correct? That's right, and...
H: Yeah, that one was a little different. It wasn't that it wasn't workable; it was broadly adopted, so it clearly was workable to some extent, but there were more usability issues with that one. And yeah: all happy APIs are the same; every unhappy API is unhappy in its own way. Yeah.
I: So I'm somewhat confused. Obviously, people have been running production-ready iptables-oriented clusters for fifteen or sixteen releases of Kubernetes. Is the scope of this discussion about how we transition to future production states, or how we transition new and maturing features into the production-ready state, or...
H: Correct, yeah. Some features aren't necessarily intended as replacements for other features, right; it's just an alternative. Like, think of our authenticators: we have ones that tie into OpenID Connect, and we will have ones that tie into authenticating proxies, and one isn't deprecated by the other. But some features are intending to deprecate others, and thought needs to be given to how, once this is stable and mature and performant and whatever, how would someone transition to this feature?
H: And that's something that I think doesn't get a lot of thought necessarily. People just kind of assume, "oh well, when you spin up new clusters, they'll use this new thing," without really thinking about "but I have thousands of old clusters; what's going to happen to them?" A lot of these questions, I think, are really useful to make sure the developers and designers of features are thinking about. What I want to avoid is just a checklist of questions where the people who are supposed to answer them don't understand...
H
The
reason
for-
and
so
you
end
up
with
a
bunch
of
like
very
surface
level
or
like
na
doesn't
apply,
doesn't
fly,
doesn't
apply
types
of
answers,
mm-hmm,
so
maybe
something
we
can
maybe
giving
examples
or
like
the
reason
this
is
important
is
because
people
running
clusters
are
going
to
need
to
do
this.
So
explain
to
this
persona
like
how
they
would
make
sure
your
feature
is
on
and
working
and
so
I
don't
know
if
it's
examples
or
kind
of
describing
scenarios
or
for
the
people
who
have
to
answer
these
questions.
I: Do we target any specific installation or deployment model? Like, you know, you have people in the world that are running hyper-scale hosted solutions where control planes are running on an underlying management cluster; you have people in the world running control planes on hosts; you have people in the world doing some variation in between. When you get down to the nuts and bolts of it, what level of guidance are we targeting here in this group? Is it, like, how to observe a kubelet log or a kube-controller-manager log, or [unclear]?
B: You know, observability. So that there's actual logging, and that it's documented somewhere in a playbook that says "here's how you'd actually get that information." That they have SLIs, things like that, so you can know the actual metrics you can look at, or something you can look at, to see that it's working in production. It's not just, "well, nobody's complained yet," right? So more generic kinds of things that pretty much any deployment mode is going to need.
H: That reminds me a lot of the supported version skew between components and the upgrade-order document. We worked really hard there not to pin down a particular topology, because there are several in existence, but the people writing the Kubernetes components have to know what they're expected to support, as far as n-minus-one, n-plus-one, order-of-operations types of things. And so the component skew and upgrade-order document says, like, before you upgrade your kubelet...
H: ...think of things like the move from service account tokens to bound service account tokens. That's a feature that SIG Auth has been working on for a long time, and it's got a tricky rollout, and, like, well, I think we probably should have thought about rollout from the very beginning, instead of saying "we can make this better, and people who want to turn it on can," and then being like, "well, world, come on, turn it on, it's better."
I: And I only ask this because I think we're on the cusp of some complicated things, probably. So, like, Dims, from a signal perspective: I'm thinking, if I see a future feature enhancement that says "kubelet tolerates cgroups v2," I'm wondering what type of production readiness guide I would offer for that, and what type of migration guide, if any, one would be expecting for that. And that's probably similar to the iptables case; it could be that type of thing.
B: I mean, definitely. Well, so, when you're talking about something that's replacing an existing feature, I think it's actually easier, right, because then you've got some bar that you can draw, and you know that if you can beat that bar, what you're doing is better. I think the thing is that, at least at that time, there was no...
B
There
was
no
list
of
what
things
of
what
that
bar
should
be.
It
was
this
more
based
on
the
judgment
of
the
people
who
were
who
were
working
on
it
on
the
process
with
us
right,
and
what
I
would
hope
to
get
out
of
this
is
that
we
would
at
least
have
a
list
of
things
that
people
need
to
consider
deeply,
and
the
particular
cases
of
any
given
feature
are
going
to
have
their
own
peculiarities
as
well,
but
I,
don't
know,
I'm,
not
sure
that
answers
the
question.
I
found
that
that.
B
Individuals
being
too
much
part
of
the
decision-making
process,
as
opposed
to
having
some
kind
of
criteria
that,
if
that
individual,
isn't
there
or
can't
be
there
anymore,
there's
somebody
else
who
can
can
make
that
evaluation.
I!
Guess
from
that
point
of
view,
one
having
the
upfront
list
of
okay,
here's
what
we
need
to
work
on.
If
we
want
to
achieve
this
and
to
you
know,
I
know
that
that
you
know
I
still
have
to
convince
certain
individuals,
because
there's
people
who
who
are
in
that
position
in
the
project,
that's
what
their
job
is.
H
Cap
has
questions
that
hint
at
some
of
these
areas
like
what
are
the
upgrade
and
downgrade
implications
of
your
feature
and
there's
like
an
ocean
of
potential
issues
there
and,
like
you
said
there
are
people
with
the
state
of
the
system
in
their
heads
and
like
if
we
look
at
rollout
of
some
of
these
features
like
there
are
really
good
lessons
to
be
learned
and
like
for
the
you
know.
Oh
yeah,
your
controller
has
to
work
with
cubelets
current
version
back
to
two
levels:
old
and
the
implications
of
that,
and
and
so
acquaintance
you.
H
You
say
that
this
scope,
weather
seems
very
large,
I.
Think
it's
like
limitless
if
we
took
it
to
the
nth
degree,
but
if
we
did
like
a
breadth-first
rather
than
a
depth-first
version
and
just
took
some
of
the
really
broad
questions
in
the
kept
template
and
asked
specific
questions
like.
Are
you
changing
the
controller?
Can
you
work
with
cubelets
up
to
two
versions
older
than
you,
like?
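That kubelet skew question can be expressed as a tiny check. This sketch encodes the n-2 window the speakers mention (a kubelet may be up to two minor versions behind the control plane, and never ahead of it); it is an illustration of the idea, not the authoritative policy text:

```python
# Sketch of the "kubelets up to two versions older" skew rule mentioned
# above. Versions are (major, minor) tuples; the n-2 window matches what
# the speakers describe for kubelet vs. control plane.

def kubelet_supported(control_plane, kubelet):
    """True when the kubelet is at the control-plane minor version or up to
    two minors older, and never newer (same major version)."""
    if control_plane[0] != kubelet[0]:
        return False
    skew = control_plane[1] - kubelet[1]
    return 0 <= skew <= 2

print(kubelet_supported((1, 16), (1, 14)))  # -> True: two minors back is allowed
print(kubelet_supported((1, 16), (1, 13)))  # -> False: three minors back
print(kubelet_supported((1, 16), (1, 17)))  # -> False: kubelet newer than control plane
```

Turning the skew policy into a function like this is what would let a KEP question such as "can you work with kubelets two versions older than you?" be verified rather than just asserted.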
G
So
that
seems
to
indicate
jardim
that
we
need
to
serve
as
a
watering
hole
to
get
the
people
who
are
bringing
in
caps
and
going
back
to
what
John
was
saying
that
I
didn't
know
who
we
need
to
talk
to.
It
can
be
serve
as
the
point
flash
point
where
you
know
these
people
we
line
up
some
caps.
We
line
up
some
people
who
know
the
area
really
well
like.
If
it
is
a
networking
stuff,
then
make
sure
that
Timbers
around
or
you
know
they
bring
people
in
and
do
that
for
some
time.
D: The watering hole idea is a very valuable thing, and if that's all that comes out of this, that is valuable: having a common place for people to come and learn about these things. But I can't help thinking that we need some kind of metric for saying, is this having the desired effect? Are we having, you know, fewer features stuck in beta? Are we having fewer upgrade problems? Are we having, you know, whatever; how effective is this?
B
Well,
then,
actually
I
think
is
interesting,
because
the
the
the
beta
timeout
deadline,
I
think
force
is
that
right
either.
There's
somebody
who's
interested
in
that
functionality
and
has
customers
or
interested
in
that
functionality
and
they're
willing
to
support
it
and
push
it
forward
or
there
aren't,
and
if
there
aren't,
then
let's
get
rid
of
it
right
and
if
there
are,
if
they're
important
enough
to
that
those
people,
they're
gonna
they're
gonna,
have
to
pick
it
up.
Just.
H
I
think
my
concern
is
more
that
we
will
not
learn
lessons
from
past
mistakes
and
that
that
we're
not
asking
the
questions
that
need
to
be
asked
to
even
understand
if
a
design
is
ready
to
go
implement
like
there's.
There's
this
rush
to
a
rush
to
go,
get
things
in
implementable
at
the
feature:
freeze
deadline
for
every
release
and
there's
not
a
crisp
there's,
not
always
some
some
features
are
better
than
others.
There's
not
always
a
crisp
vision
of.
B
Well,
you're
talking
about
was
actually
at
design
time
that
that,
like
right
so
there's
there's
the
set
of
questions
that
need
to
be
asked
to
design
time
and
then
there's
no
sort
of,
hopefully
rubber,
stamping.
That
needs
to
be
done
at
a
promotion
time
to
say
yeah.
We
actually
fulfilled
those,
those
and
and
the
other
criteria.
H: I don't know; I think it's easier to ask these questions in the design phase than it is to say, "we're ready to promote to beta," or "we're ready to GA, can we have a +1 on our PR?" and then we're like: wait, wait, does this scale? Can you upgrade it? Maybe you should fundamentally rethink your design. That's a really awkward point to be asking these questions.
G: Just now, when we were talking about CoreDNS, I went and looked at what John had filed for the GA of the CoreDNS stuff, and I said: okay, fine, there's a set of questions that people asked, and this is the set of questions they were asking for this kind of an issue, right. So when we are doing something with a specific KEP, should we write it down, as in: this was the set of questions we asked, and these were the responses that we got? Would that be useful?
B: So what are the questions we've asked in the past, during design reviews, during reviews of promotions? I think those are valuable things. Do we have... we have the API conventions, right, but I guess we don't have anything, other than what's in the KEP template, around what questions we want answered by...
B: This is what I want to add to my monitoring, or alerting on: is this thing working or not? Speaking for me, at least, here at Red Hat, we would find that immensely useful, and I know we depend on the work of the overall cluster monitoring efforts happening across the community to help make that better. But I think that's a key part of production readiness to me: can I monitor it to know if it's healthy or not? Mm-hm.
I: Just enumerating the set of metrics that your feature added, or did not add, and why, would be a really useful first step. Sometimes these things come in, and then there's a second pass that gets them either exposed in kube-state-metrics or actually scraped afterwards by the broader monitoring community; so sometimes features find their way into production before you actually knew how to monitor them.
H
And
I
think
people
like
people
give
some
thought
to
failure
modes
when
they're
designing
them,
because
you'll
see
them
in
design
Docs
like
yours,
the
flow
of
my
controller
and,
if
I
encounter.
This
then
I'll
do
this,
even
if
I
encounter
that
then
I'll
retry,
then
I'll
back
off
and
like
there
is
thought
given
to
that.
But
take
at
that
final
step
and
saying
and
here's
how
you
know
if
this
is
happening
like
this,
here's,
the
name
of
the
metric
and
it's
a
gauge
or
histogram
or
whatever
like.
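That "final step" might look like the following sketch: a retry-and-back-off flow instrumented with a named counter, so an operator can see the failure mode happening instead of inferring it. The metric name, the flow, and the helper names are illustrative, not a real controller API:

```python
# Sketch of a controller retry/backoff flow whose failure mode is visible
# through a named counter metric. Everything here is illustrative.

METRICS = {"example_controller_retries_total": 0}

def reconcile_with_retries(do_work, max_attempts=3):
    """Call do_work(); on transient failure, count a retry and back off
    (backoff is simulated; a real controller would sleep)."""
    backoff_seconds = 1
    for attempt in range(1, max_attempts + 1):
        try:
            return do_work()
        except RuntimeError:
            METRICS["example_controller_retries_total"] += 1
            backoff_seconds *= 2
    return None

# A work function that fails twice, then succeeds, to exercise the flow.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(reconcile_with_retries(flaky))                # -> ok
print(METRICS["example_controller_retries_total"])  # -> 2
```

The design choice being argued for is simply that the counter's name and type are written down in the design doc alongside the flow, so "here's how you know if this is happening" ships with the feature.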
B
So
I
think
that
what
what
this
brings
up
is
sort
of
like
what,
where
is
the
appropriate
place
to
document
all
this,
like
is
everything
going
to
be
in
the
cap
or
do
we
have
a
separate
play
book
that
contains
operational
aspects
like
to
kept
right
now
it's
got
the
design
in
there
supposedly
right.
It's
got
a
lot
of
stuff
in
there
and
is
that
and
then
I
don't
know
it's
not
the
right
place
to
keep
all
of
that.
It's
not
I
can
see
pros
and
cons
both
ways,
but
does
it
stay?
B
What
my
observation
is
that
caps
are
not
maintained,
and
so
I
mean
they
don't
even
flip
the
state
from
like
implementable
to
implemented.
You
know
like
right,
much
less,
maintaining
them
over
time.
As
the
feature
involves
yeah
I
guess
people
are.
We
only
have
about
two
minutes
left
so
I
guess
what
are?
What
are
our
next
steps
here?
We
didn't
really
answer
the
question.
B
I
think
they're
drained
a
few
minutes
late
with
one
of
the
questions
I
had
was
what
how
we
want
to
organize
this
effort
and
what
kind
of
scope
we
want
to
give
it
I
feel
like
we've.
If
anything
added
scope-
and
you
know,
do
we
want
to
just
for
now
continue
as
a
any
morphus
group
that
doesn't
necessarily
have
a
sub
project
or
a
working
group
title
or
do
we
think
there's
a
need
for
that
kind
of
thing
justified
later
at
this
point,
I
would
say
the
next
item.
H: I think we can probably coordinate in a sig-architecture mailing-list thread and the document that's open, the KEP. I like starting with a bunch of questions, questions that operators would have, and then figuring out whether we have the information to answer those questions and, if not, how we want to go about gathering it. But starting with the people who run clusters: what questions do they have? A big, long list of those is something we can go out and solicit from people who run clusters. That would be really helpful, right.