From YouTube: 20201007 SIG Arch Prod Readiness
A
Hello, everybody. This is the Kubernetes SIG Architecture production readiness review subproject meeting for October 7th, 2020. All right, let's begin. We have a new attendee today, Kendall. Would you please introduce yourself?
B
Yeah, my name is Kendall Nelson. I work for the OpenStack Foundation.
B
Most of my involvement in Kubernetes so far has been over in SIG Cloud Provider, specifically provider-openstack, and SIG Contributor Experience. So I signed up a week or so ago to all of the SIG Architecture lists, to kind of get a better lay of the land. I wasn't sure exactly what this meeting was about, but I figured I would go to all of them and see how it's going.
A
Sure,
well,
this
this
tends
to
be
a
pretty
small
meeting
and
pretty
low-key.
Basically
I'm
john
bellameric
and
we
have
voytech
who's
labeled
as
sixth
scalability
right
now
and
david
eats,
and
so
we
started.
A
This subproject started, a couple of releases ago, a process of production readiness reviews for new features and for features graduating to different stages. What we talk about here is whether there are any questions that came up during those reviews that we want to consult with one another on, or changes in process, or that sort of thing, as far as making those production readiness reviews happen.
A
So it tends to be pretty short so far. Maybe next release we want to make PRRs mandatory; we wanted to do that in 1.20, but we just sort of dropped the ball, and I can take responsibility for that. So anyway, it might get a little bit busier after that. I'll bring up the agenda.
A
Are you seeing the agenda right now? Okay, good. Okay, so, pretty short agenda. I wanted to ask, for Wojtek and David:
A
We had some discussions on Slack. Were there any new things that came up from the reviews over the last few days that you wanted to discuss?
E
The other thing I noticed with the tool, the kepctl tool... well, I should have read the help on the tool. I assumed it didn't do this; I didn't actually look. Does it tell you... So I queried for PRR review approvers where it's got my name on it, or your name, or Wojtek's name. Is there a way for me to find the ones that are trying to target a particular release and don't have a PRR approval?
E
But I was surprised at how light it was, although I guess I do know that SIG Apps, SIG CLI, and API Machinery all have fairly light releases planned for 1.20. So it could be that.
A
Yeah,
maybe
it's
just
a
light,
a
light
cycle,
but
I
I
had
the
same
concern.
We
can
investigate
that
because,
like
I
said
there
may
be
ones
the
caps
not
being
touched
but
they're
just
targeting
it
through
the
enhancements
process,
and
I
didn't
go
through
the
enhancements
spreadsheet
and
like
look
to
see
that
everybody
had
it
so.
E
Let me take a note. Okay, the other thing that stood out to me when I was doing these reviews is that there was a feature that I wish that I had. In a lot of these, even the ones in alpha...
A
So, so there is a, there...
E
...answer for someone to do it. I'd like for that to be really easy, so that when I say, "Hey, you should do this," they can go, "Okay, how do I do it?" and I can link them to "here's how you do it." Do we think that's something valid for something like kube-state-metrics?
A
I mean, like I said, right now there's no sort of common, systematic way to do it. We just ask the question of each individual KEP, and in some cases that might be... there was one we talked about before, where it was the FQDN name for the pod hostname, or something like that. And you know, you could do a query to count those, right? But that's not a metric.
A
I
think
my
I
think
it's
in
the
it
wasn't
through
slack,
but
I
think
my
thinking
there
was
I
didn't
necessarily
want.
Maybe
it
was
a
super
easy
way
to
do
it,
but
I
think
somebody
wouldn't
want
somebody
to
go
through
a
bunch
of
effort
create
this
metric.
That's
actually
not
super
useful
for
an
operator
like
an
operator
like
if
oh
no,
this
was
for
a
different
one.
This
was
if
they
set
this
hosting
too
long.
It
would
cause
errors.
It's
like.
As
an
operator.
F
That would be great, but I didn't come up with anything like that, so I was ending up with saying: okay, the kubectl "list all pods and filter by this field" is good enough for now. We should think about something better in the future, but I don't want to block this KEP because we don't have a good idea of how to do that. Yeah, pretty much.
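For illustration, a minimal sketch of the kubectl workaround being described, assuming the field in question is the pod spec's setHostnameAsFQDN from the FQDN example above (requires jq):

```sh
# Count pods across the cluster that opt in to the feature by setting
# spec.setHostnameAsFQDN (field name assumed from the FQDN discussion above).
kubectl get pods --all-namespaces -o json \
  | jq '[.items[] | select(.spec.setHostnameAsFQDN == true)] | length'
```

Good enough for a one-off check, but, as noted, it is not a metric an operator can watch or alert on.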
A
Right. I mean, I guess, David, I would kind of throw this one back on API Machinery, in the sense that right now there's no metadata about the fields to know which fields are alpha, beta, whatever. So there's no sort of programmatic way that we can automatically have this kind of count. If we tagged fields as to their stage in the API Machinery metadata, then we could write a piece of code that just does this for you.
E
There was one other thing that came up while I was thinking about it and reviewing what these things were doing. At the same time I was reviewing things, I happened to be dealing with an internal problem in some of our CI runs, where it appears that there's a request storm, and we're trying to figure out why.
E
If metrics only track the things that they know they need to track, then when there is a request storm because some new feature got turned on, we don't have an easy way to be able to say it was because of this. And if you guys recall the CVE from... I don't know, we fixed it like six months ago, seven months ago... the one where you could send a user agent that was, I don't know, 100k long or something, and it would blow out the Prometheus metrics.
E
...user agents on requests: I kind of want that back. I'm thinking about my life in production. Right now I'm chasing it in CI, so I can actually do a lot of stuff; I have an audit log, I have a bunch of options. But being able to see it in the metrics would be really useful. Have we considered whether we would be willing to come back from this group with a recommendation that says, like, hey...
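For context, the API server's request counters can be inspected directly; a quick sketch (metric and label names as of recent releases, and notably none of the labels identify the calling client):

```sh
# Dump the apiserver's request counters; the labels cover verb/resource/code
# and similar dimensions, but nothing identifying which client sent the requests.
kubectl get --raw /metrics | grep '^apiserver_request_total' | head -n 5
```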
F
I think... yeah, I'm also missing that in some debugging that I'm doing sometimes. I'm not sure how this would work, though, because I think one thing was the size of the individual label, and the other was sending each request from a different agent and blowing up the cardinality. So I'm not sure if this is possible, but I would really like to have it back in metrics. Yes, I...
E
But I'm looking at it and wondering: if we had some kind of cluster salt, could we hash them down to get...
E
I
don't
know
some
reasonable
number
of
hashes
so
that,
even
if
somebody
changes
it
and
we
have
ten
thousand
hashes,
if
we
have
we'll
be
able
to
catch
the
accidental
people,
not
the
malicious
ones,
who
are
throwing
different
user
agents
out
and
fill
all
the
buckets
but
we'd
be
able
to
catch
the
the
case
where,
like
oh
csi,
driver
foo,
you
you
screwed
up
somehow
and
you're,
making
2000
requests
a
second
right.
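A back-of-the-envelope sketch of that salted-hash idea; the salt source, bucket count, and output name here are all hypothetical:

```sh
# Fold an arbitrary user agent into one of 10,000 buckets using a
# per-cluster salt, so accidental request storms stay attributable
# without putting raw user-agent strings into metric labels.
SALT="some-per-cluster-secret"   # hypothetical; would come from cluster config
UA="csi-driver-foo/v1.2.3"       # example client user agent
H=$(printf '%s:%s' "$SALT" "$UA" | sha256sum | cut -c1-8)
echo "user_agent_bucket=$(( 0x$H % 10000 ))"
```

An operator who suspects a particular component could recompute its bucket and compare, which lines up with the "de-anonymize it eventually" point below.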
E
Right. Like, I'd be able to de-anonymize it eventually, whether I did it like that or whether I did it with some sort of protected resource that most users can't see. The mechanism is less important to me than being able to figure out, via metrics, who is doing this.
A
I wonder if there's a way we could do it only for system components. Maybe that's not sufficient for what you want, but thinking from the PII perspective: I don't think people will be concerned if it's internal components, you know, this controller, that controller. But for their own workloads, on their own service accounts and things, they might not want you logging. Like, there could be a set of specific service accounts for which we log their access patterns, or something like that.
F
Yeah, that is kind of like what we did with other things. I can't remember exactly, but yes, we basically limited the potential values to some well-known set plus "other," or something like that, somewhere in the code. I can't remember exactly what that metric was, but there was some precedent for that. I don't think that really helps for this particular issue, though.
E
That was another thing that I was noting when I was going through the PRR reviews last week: that is information that I miss, and I'm trying to make up for it in some ways by asking for metrics for these new features, when what would really satisfy it would just be...
A
Okay, cool. So I think the big thing that you brought up here previously on Slack is the...
A
How do we separate the review of the production readiness from the review of the KEP, that is, the design? I wrote up a couple of tooling options. I didn't quite finish writing the details of each one, but...
A
We always do our approvals based upon a PR, but what we're really trying to do is approve sort of the life cycle of the KEP, as opposed to the life cycle of a PR. I'm not sure that matters that much, but it kind of plays into some of the discussions below in the options. But essentially: ensuring that KEPs receive their review approval as they go to each stage, alpha, beta, and stable.
A
So
that's
a
requirement
of
targeting
the
release,
ensuring
that
approving
the
prr
doesn't
approve
the
entire
cap.
We
want
discoverability,
so
we
have
we
sort
of
have
that
now
and
that
you
can
list
all
the
ones
with
your
pr
approver,
but
it
doesn't
actually
things
that
are
already
done.
A
It does some things right now, but it's a little hacky. Defer: you want to defer engaging this team until the KEP design is otherwise approved by the SIG leads or other KEP approvers. And then we want to ensure that the PRR approval is done by a member of this team.
F
Yeah, I think they're nice-to-haves; it's probably not strictly required. But a nice-to-have would be that I'd be able to actually fully approve the PRR without asking SIG leads for approval, which I need to do now, because I'm not the owner of, say, the sig-instrumentation directory or anywhere else. So yeah: on the one hand, I'm not able to fully approve it; on the other hand, someone else can approve it without my knowledge, even, potentially.
A
All right. David, did those resonate?
E
No, in this case I can't think of anything that people get confused over. Overall, how do you guys think we did? I feel like this release went okay, I think, based on the query of the tool after the fact. That might be colored by the fact that Wojtek did like three times as many as you and I did, John. But I thought it went okay. You wanna keep...
A
Fairly well. But, like I said... yeah, I think the tooling...
A
...goals. So then, I probably... I didn't really get time to put anything very thoughtful down here, but I believe these were the basic options we discussed on Slack. One was a "prr-approved" command and associated labels. I'm trying to remember how we do this with API review. I think both of you are API reviewers; I'm not. I know that...
A
Says
api
review
is
required,
do
you
have.
E
When the label goes on there, you can actually filter pretty easily in GitHub on the label. And then, if I'm being honest, it's usually that somebody bugs me on Slack and says, "hey, this label showed up, we need to go look at this." And then for the actual approval process there's a separate... there's a way that the folder ends up separated. It's still directory-based, but it's like that approver directory doesn't honor the directory above it, if that makes sense.
E
Now, that was different, wasn't it? There might be a way that the vendor stuff gets skipped. I'm not a vendor approver, and Wojtek wouldn't be either, even though he has the whole thing; but there might be a way to punch out specific files inside of a directory.
A
There is not, last I checked. I checked on this a while back, and I think our internal tooling maybe does it, but I didn't see anything in the Kubernetes OWNERS/Prow tooling that allows you to specify a glob or something and say this glob is punched out for a different set of approvers. I think it has to be a directory.
A
So
that
would
mean
putting
the
pr
like
in
its
own
directory
and
and
that's
what
some
of
these
other
options
are.
We
talked
a
little
bit
about
it
being
like
a
subdirectory
of
the
cap
that
doesn't
make
a
whole
lot
of
sense,
because
then
you
have
to
put
the
right
owners
file
in
to
that
directory,
which
is
kind
of
hokey.
A
So
yeah,
so
this
first
option
seems
to
require
fairly.
It
would
be
a
fairly
familiar
mechanism,
let
me
just
say
like
a
command
prr
approved,
and
then
you
can
use
labels
to
find
ones
that
are
waiting.
This
would
be
about
the
prs.
We'd
have
to
have
a
way
to
identify
liquid
api
reviews
that
that
a
that
apr
requires.
A
...review; in this case, that a PR requires a PRR approval. That's probably trickier than it sounds: you'd probably have to inspect what's actually being touched, the README or, you know, the kep.yaml.
A
So
that's
you
know
a
little
bit
of
a
hassle
and
I
think
pr
approved
you
need
some
new
pro
functionality
for
that
and,
as
I
sort
of
mentioned,
it's
sort
of
like
managing
the
pr
life
cycle
about
the
capital
life
cycle,
that's
not
necessarily
huge
when
we
always
end
up
doing
things
yeah
where
this
comes
in.
Is
that
we're
not
looking
we're,
not
managing
that
and
being
like?
Oh
we're,
targeting
this
cap
to
a
release
like
that's
not
done
anywhere
in
github
like
we
target
individual
prs?
A
I
guess
it
is
it's
done.
When
you
target
the
issue
associated
with
the
cap,
you
set
the
milestone
on
it,
so
we
potentially
could
block
there,
rather
than
blocking
in
prs
in
the
enhancement
structure.
So
that's
it.
Those
are
probably
our
two
sort
of
point
points
of
leverage
or
where
we
can
either
in
the
milestone
assignment.
A
The nice thing about doing it on the milestone is that, in theory, they could write their whole KEP and not touch the KEP again when they graduate, because they already put in it that they're planning to graduate to beta in 1.21. Then, when 1.21 comes around, they just retarget the issue and they start writing PRs.
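For reference, a sketch of the kep.yaml metadata that workflow leans on (field names as used in the kubernetes/enhancements KEP template around this time; the path and values are illustrative):

```sh
# The author records the planned graduation schedule once, up front;
# later releases then only require retargeting the tracking issue.
mkdir -p keps/sig-example/0000-example
cat <<'EOF' > keps/sig-example/0000-example/kep.yaml
title: Example Feature
stage: alpha
latest-milestone: "v1.20"
milestone:
  alpha: "v1.20"
  beta: "v1.21"
  stable: "v1.23"
EOF
```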
A
The question would be the convolution of these two. These are probably the two options that try to make the best use of existing tooling. One is: create a separate directory that the PRR team can be owners of, either with their own OWNERS file or with a parent owner, and...
A
This
was
the
one
I
think
we
were
most
interested
in
in
the
last
discussion.
We
would
leave
the
pr
in
the
readme.
We
create
a
separate,
prr
metadata
directory
that
basically
contains
whether
this
cap
has
been
approved
for
these
different
stages,
and
that
would
let
us
use
the
existing
tooling
for
owners,
but
it
has
this.
It
splits
the
cap
metadata
away
from
the
approvals
away
from
the
camp
itself,
and
I
think
it's
still
going
to
require
additional
tooling
because
of
what
I
just
said
like.
A
So I guess what I'm saying, given that nobody else is saying anything and I may as well just keep talking, is that this seems clunky and still requires some new tooling.
A
Right, yeah. I think you have to have write access to the repo, which most people don't have, in k/k at least. But this is the debate: what is it we really want, right? I'm thinking aloud. What we really want is to do a review and approve it when they target it for alpha, where they're going to fill out the alpha stuff; when they target it for beta, we want to do another review; and when they target it for stable, another review.
A
I
don't
really
care
about
how
many
iterations
happen
between
those
three
gates,
and
so
when
they
target
the
issue,
I
would
think
the
milestone
one
option
would
be
they
try
to
assign
the
milestone
and
says
you
can't
design
that
milestone,
because
you
haven't
completed
this
yet
now.
I
think
that's
a
totally
new
process.
I
don't
think
we
do
that
right
now
at
all,
you
can
target
the
milestone,
the
issue
to
any
milestone
you
want
and
nobody,
nobody
can
stop
you.
The
thing
we
do
with
milestones
is
on
pr's.
F
I think that doesn't address the problem that, in theory, someone else can approve the PRR for you, right? Yeah, I mean, it can be approved by someone not from the PRR team; that's mostly what I mean. So unless we add some tooling to enforce it, either someone will manually check that, someone from the enhancements team, or we will add some tooling that verifies someone from the PRR team approved the corresponding PRR.
A
All
that
does
is
move
is
prevent
them
from
entering
into
their
release,
but
we
still
it
doesn't
change
the
mechanism
of
how
we
do
the
approval.
That's
where
we
would
probably
ideally-
and
I
don't
know
the
cons
of
this-
ideally,
what
david
mentioned,
which
is
we
put
it
in
a
plr.d
in
the
directory
and
then
that
owner's
files
set
up
such
that
this
file
in
a
directory
can
only
be
approved
by
this
team.
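A sketch of how that could look with existing OWNERS semantics, assuming a dedicated prod-readiness directory and a PRR approvers alias (both names hypothetical); no_parent_owners is the existing OWNERS option that keeps parent-directory approvers from also approving these files:

```sh
# OWNERS file for a hypothetical keps/prod-readiness/ directory: only the
# PRR team can approve files here; approvers inherited from parent
# directories are explicitly excluded.
mkdir -p keps/prod-readiness
cat <<'EOF' > keps/prod-readiness/OWNERS
options:
  no_parent_owners: true
approvers:
  - prod-readiness-approvers   # alias name is hypothetical
reviewers:
  - prod-readiness-approvers
EOF
```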
A
Then
that
would
be
that
would
get
us.
I
think
that
income
that
would
give
us
actually
this
document
we'll
see
that
we
just
mostly
think
we
want
in
this
document,
except
for
the
first
bullet
point,
which
was
ensure
that
they
receive
this
at
each
stage.
The
milestone,
that's
the
only
thing
that
the
milestone
thing
achieves.
E
Yeah, yeah. And I don't know how complicated that code is; I've never been in there.
A
They'd have to set the OWNERS file up in there correctly. Like, we could do the PRR metadata thing along with the milestone thing: the milestone check would then just verify that that file exists and that it's been approved for that same stage, and the feature issue has a tag with the stage and everything on it, or you can look at the kep.yaml. We can probably work it out between the kep.yaml, the labels, and a separate directory with just PRR metadata.
A
So the minimum bar: we could just start with the check and the separate directory. If that turns out to be too much of a hassle, we can talk about updating OWNERS files. I mean, I can discuss this verbally with the enhancements team, since at least some of the folks in there are familiar with the Prow stuff, I think, or discuss it with somebody else as needed. So I'll write something up.
A
We would need that with the milestone thing. This would now be a separate file or directory, so that we can approve it without worrying about approving the whole KEP; the KEP itself would be done in a separate PR.
A
"Enable discovery of KEPs awaiting approval by PRR approver": no, that doesn't do anything for that, although the tool could now double-check that extra metadata if we had it; the metadata option does something like that, with some work on the kepctl tool. "Defer engaging the PRR team until the KEP is otherwise approved": I think it does that, because we do this much already.
A
There's also a discussion, and this is where it might get interesting. So there's a discussion, and I don't know where it stands right now, but today the enhancements team goes through this sort of laborious process of going through the spreadsheet, and there's a discussion of creating a receipt system where basically, in order to get into a release, rather than this sort of back-and-forth with humans, you create a PR that says...
E
Maybe. I think that might be at least more powerful, because I'm just trying to think about someone who owns the content of what's in there. If they want to go through and adjust a metric and stuff, having to tell them "you've got to go to this separate directory"... when I'm writing KEPs, I want to... thank you.
A
...until you hit it in production. Wow, okay. Well, that feels a little bit inconclusive, but let's go back around before... by the end of the week, if you could, because, like I said, I would want until Monday to write anything down before I bring it to you all. Not that it's super urgent, but it'd be nice to move along. Okay!
A
Thank you. Is there anything else?
A
Okay, all right! Well, thank you, everybody, and we'll talk next time.