From YouTube: 20220223 SIG Arch Prod Readiness
A
Looks like... okay, everybody, welcome. This is the Kubernetes production readiness sub-project meeting for the 23rd of February 2022, and I think we'll take... maybe you've frozen, because you're very still. I see him; he's good.
A
No, it's still frozen for me, but I could hear you. Okay, let me put up the agenda, which I think is pretty limited right now.
A
Sure, okay, all right, go for it, David.
C
I can try and take notes, sure. So, you know, this time there were a lot of reviews.
C
I think the quality of the PRR answers that people included had increased significantly, at least in the ones I reviewed. The questions were answered fully and completely; they didn't skip over any of the questions. Usually the troubleshooting section, which is what I spent a lot of time reviewing, the directions there were actually quite good, listing out potential failure modes. I only had a few that were like, "well, this just won't fail." So yeah, from my perspective it was a very positive release.
B
Yeah, I think I have very similar feelings, so it's definitely much better. And in many cases, or at least some cases, I was looking into the history, and it wasn't just that it was filled in well from the start; rather, the first-line approvers, the SIG approvers or however we call them, were doing a much better job than in previous releases of forcing KEP authors to improve it. So yeah.
B
I definitely agree with David here. I think it especially improved in the metrics and monitoring part of the PRR; that's where I've seen the biggest improvement, at least in my portion of PRRs. There's one thing: one small PR that I opened against the KEP template. Not sure if you had a chance to see it; I know Elana has seen it, because she was commenting on it.
B
But I just pasted the link in the chat; maybe you could take a look. It's basically about one of the questions regarding the tests, because many people were saying that they don't know how to write tests, they don't know what to test, and so on. So I just added a small comment there, with a link to an example PR that was adding some tests for enablement and disablement of the feature gates. So yeah, I think that's mostly my recollection.
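[Note: the PRR template asks KEP authors for tests covering feature-gate enablement and disablement. As a rough sketch of the pattern those tests follow (the FeatureGate type, SetDuringTest helper, and the gated function below are simplified illustrative stand-ins, not the real k8s.io/component-base featuregate API):]

```go
package main

import "fmt"

// FeatureGate is a minimal stand-in for a feature-gate registry:
// a named set of on/off switches guarding new code paths.
type FeatureGate map[string]bool

// Enabled reports whether the named gate is on.
func (fg FeatureGate) Enabled(name string) bool { return fg[name] }

// SetDuringTest flips a gate and returns a cleanup func that restores
// the previous value, mirroring the shape of the upstream test helper
// so a test can exercise both paths without leaking state.
func SetDuringTest(fg FeatureGate, name string, value bool) (cleanup func()) {
	prev, existed := fg[name]
	fg[name] = value
	return func() {
		if existed {
			fg[name] = prev
		} else {
			delete(fg, name)
		}
	}
}

// podOverheadTotal is a toy feature-gated code path: the new behavior
// (adding the overhead) only runs when the gate is on.
func podOverheadTotal(fg FeatureGate, base, overhead int) int {
	if fg.Enabled("PodOverhead") {
		return base + overhead
	}
	return base
}

func main() {
	fg := FeatureGate{"PodOverhead": false}

	// Disabled path: the gated behavior must not run.
	fmt.Println(podOverheadTotal(fg, 100, 20))

	// Enabled path: flip the gate for the duration of the "test".
	cleanup := SetDuringTest(fg, "PodOverhead", true)
	fmt.Println(podOverheadTotal(fg, 100, 20))
	cleanup()

	// Cleanup must restore the original default.
	fmt.Println(fg.Enabled("PodOverhead"))
}
```

[The real tests in kubernetes/kubernetes use featuregatetesting.SetFeatureGateDuringTest from k8s.io/component-base/featuregate/testing; the point is the same either way: exercise the code with the gate on, with it off, and verify the default is restored afterwards.]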
A
Okay, yeah, that sounds great. My experience was similar. Probably my biggest issue is that I was personally busy and didn't get to it soon enough, but thankfully my wonderful team here picked up the slack for me; I'm sorry about that. So I didn't do as much as I would have, I didn't do my share, but yeah, the quality seemed to be pretty good. There's still kind of an issue.
A
A lot of times the ones I'm looking at are like: this isn't even ready... this is barely ready for SIG approval review, much less PRR-type review, right? Like they haven't...
A
...done it yet. And so, you know, that's normal, I guess, but I don't know if there's anything we would want to do around it. On the one hand, it's good to get an early PRR review in and make sure people are on track with it. On the other hand, going back and checking three times a day for a week to see if... well, you know, because that's how I do it, since notifications don't always work that well for me.
A
So
I
just
gotta
cycle
up
where
I
go
and
go
through
my
list
and
check
them
all
and
see
if
anybody's
actually
added
anything.
I
don't
seems
a
little
inefficient,
but
I
guess
it's
not
that
big
of
a
deal
yeah
not
much
else
there.
A
Okay, well, let me finish this note here and then we can jump into that. Anybody else have any other comments on this, from a retro perspective, from anyone not on the PRR team? We have a couple other folks here who maybe were subject to the reviews; is there any feedback there?
D
Yeah, it was so smooth this release; I was really surprised. I was mostly doing PRR reviews for deprecations and promotions to GA, so those were all quite straightforward PRs, but the ones that I just observed...
D
...also went very smoothly. I don't know, I was really impressed with how everything worked, and it felt like there is a big team of PRR reviewers, which I know is not the case.
D
Yeah, I think this aspect of what you just mentioned, that you need to check periodically whether it's ready for PRR review: I had a similar, mirror experience. I needed to ping Elana directly to tell her it's ready, please take a look at your part of it. So maybe some process around that may be helpful.
D
I mean, process is always twofold, right? On one hand you improve things, and on the other hand you introduce more burden. But maybe there's something that can help without being too heavyweight.
C
Yeah, I also do a polling approach: often once a day leading up, and then maybe twice a day when it's really close. And then in that last week I'll often ping the developers directly on Slack just to make sure they've seen it, because I know notifications don't work. So yeah, if you're looking for more than "I'll get to it tomorrow," pinging me is the best way to do that. Yeah.
A
Yeah, I'm mostly just complaining, I think. I don't know if we want to change anything for that, because, like Sergey said, any additional process is probably more of a headache than it's worth. That's just the nature of open source development, or any development, maybe. Okay, awesome. Any other comments before we move on to David's things he wants to talk about, whatever those may be?
C
So, you know, I think most of the people on the call have seen the KEP related to new beta APIs being turned off by default. One of the recommended criteria for that was to update the PRR template to ensure that, when transitioning from beta to stable, the KEP author asserts: yes, I have gotten enough real feedback that I believe the feature is stable, or I believe the API is stable.
C
I'm looking for help on phrasing that. The way it was described in the KEP review was "validate that the feature was reasonably validated in production use cases." That's a little vague if I were to write just that down. Can we crisp up what we really want from a KEP author?
B
Yeah, so I think I might be the one who suggested this wording, which I agree is a little bit vague, but...
B
I think it's effectively what we want. I don't know how to phrase it better, but I think what we really want is to have some kind of proof.
C
...with the docs that were provided, and not with direct handholding or direct guidance from the feature author. Do we want to enumerate things like that? Do we even think we require them, logically, today?
B
I think we may not require many of them currently. I mean, wearing my scalability hat, I would love to have a real production use case in large clusters, or with a large number of objects, or whatever, but realistically speaking I think that's too much, at least for now; maybe in the future we'll be able to do that. What I was thinking about was maybe some kind of brief description of the setup where it was validated.
B
For example: it was a GKE cluster at version X, or an OpenShift cluster or whatever, running at version X with N number of those instances, and...
A
Sorry, sorry to interrupt; I'm getting a little confused, because there are two... So we said, I'm trying to remember here, we said: okay, we're going to turn beta off by default, and then the complaint was, hey, how are we going to get feedback? So we're talking now about when we go from beta to stable. Is that what we're talking about, specifically?
A
Right, so there are different... I think what I'm hearing, or what I'm trying to gather, is that there are different types of feedback, right? There's feedback around the API usability and design, there's feedback around the stability of the feature, and there's feedback around the scalability of the feature.
A
When do we expect each of those types of feedback to come into play, and who's responsible for making sure it's been addressed? I think we had a little discussion about this. If I take it on from the PRR side: I don't want us to get into the business of early reviews and really broad problems of making sure that, functionally, the API is usable. I don't think we're the right group to do that, and we'd just piss people off if we try.
A
If
we,
if
we
try
to,
I
think
that's
the
api
review
team
and
the
fig
approvers
scalability,
we
definitely
have
some.
We
want
to
make
sure
something's,
not
really
production
ready
if
it
scales
like
nowhere
near
what
it
needs
to.
So
we
we've
incorporated
scalability
for
sure
in.
A
I want to focus... I'm just trying to think it through, right. I would want to focus the feedback pieces on the stability and possibly the scalability. So maybe I'm just catching up and that's already what we were talking about, but...
A
You have a whole suite of criteria as well, and this is part of our internal production readiness. So how far do we take it in the open source world, where we don't have operators? I mean, we do; there's a bunch of us who are operators. But it would be lovely if, open-source-wise, we could get some more of that validation, so that when it does fall to the OpenShift or EKS or GKE teams to evaluate it...
A
...all the easy problems, or all the obvious problems, have been solved. So what does that mean for these questions, I guess? Like soak time, number of clusters, number of...
A
...we get where we are today before we get to production readiness review. So I would probably want to think more on that soak time.
B
Yeah, I agree. I also had soak time on my mind when speaking last time; I was just trying not to phrase it as a soak-time criterion. So I suggested this: let's describe how it was used in production, in what kind of cluster, for how long (that's a good criterion), how many objects, and stuff like that, basically so we know that it was really used there.
B
Obviously, also the findings, concerns, and problems that were faced; that's what I was implicitly assuming. But instead of focusing on specific questions, focus on knowing that it really was used by someone.
C
So I could use some help in drafting a PR. I'm willing to take a stab at it, but I'm not feeling confident that I can express a question that is easy for someone to understand what we're asking and to answer, and easy for us to interpret that answer. Yeah.
A
I think I also have a concern, one concern; I don't know if it's valid, so I'd love your thoughts. As a broad open source project, we don't want to create a criterion that only three or four or five companies in the world could actually give a valid response to, right? Like, only one of the cloud providers or one of the major distribution vendors could possibly sponsor a feature, because they're the only ones who would take the time to pass this gate.
C
I'll try to be careful of that when I craft the draft. So between now and the next meeting, I will try to take what you've said, craft a question based on it, and see if I can solicit a review there.
B
Yeah, I'm afraid that we will not be perfect this time, and I think after the first release when this question is actually asked, we will know better. So I think we just need to prepare for that.
B
I agree we should think as hard as possible to make it as easy to understand and answer, and as easy for us to interpret. But I think we will be iterating on this question; I don't think it will be a one-time thing.
D
And this question is not only for beta disabled-by-default; it's typical all the time, right?
D
I mean, I clearly see how... like, we have a feature called PodOverhead going to GA...
D
...in this release. And was everybody running with PodOverhead enabled? I mean, these fields, you can set them, but we really waited for anybody to at least start using it. We'd been hoping for gVisor and Kata, and finally Kata actually set these fields and ran it in production, and only after that did we change this feature, because before that nobody had even tried to use the feature's fields. So the first aspect, soak time with the feature present but not being used, has passed, and then the second aspect, somebody actually using the feature, has also passed.
D
Now we can GA this feature. And the opposite example is dynamic kubelet config, where everybody was running with dynamic kubelet config disabled; since we had very little feedback on enabling that feature, we're deprecating it and removing it.
A
You know, for major features, for big things like that sponsored by one of the big contributors, this is going to be an easy question, because it's going to be part of their process. But I worry a little more about small features that have been added by, you know, a smaller company or a smaller set of individuals...
A
...and how they're going to do that. I don't want to exclude all those people from being able to contribute.
D
Yeah, one feature I remember: Two Sigma added a feature to change how hostnames are represented on certain distros, and yeah, it was interesting. They were probably the only company that used it when we were promoting it to beta, and to this day I only know of them using this feature, nobody else to my knowledge.
B
That's less problematic, because they were supporting this feature and they were using it in production, so they kind of gathered that feedback from their own production clusters. So I think...
B
Someone
is
adding
a
feature
because
they
without
like
having
a
clear-
although
maybe
it's
not
the
case,
for
like
the
actual
full
apis
right
so.
A
Well,
I
think
staircase
point
one
isn't
just
for
full
aps.
If
you're
going
to
stable
and
we're
not
gonna
say
if
an
api
is
going
to
stabilize
we're
gonna
say
if
a
feature
is
going
to
stable,
it
has
to
have
some
validation.
I
mean
yeah.
I
guess
the
other
feature
gates
are
on
by
default.
So
in
some
sense
this
is
a
new
criteria
for
them,
but
it's
kind
of
a
a
false
sense
of
security
that
it
was
on
by
default.
A
...if nobody used it, I mean. So anyway, let's see what David comes up with, and we'll iterate from there and give it a try. Are there any other topics? I had one thing I wanted to bring up, if nobody else has anything else.
A
A minor thing; I just wanted to raise to everybody's attention that it's that time again. It's coming up on... what is it, Q1, I think, still; it's February. But we probably need to look at our survey again and revise it for the latest releases, and we had some discussion about...
A
I think that might just change the analysis. So I'm happy to take that on: putting together the survey again and sending it out for everybody on this team to review, and then once it looks good, we can blast it out to the world again. David did the analysis this last year (thank you, David) and I did it the year before. We'd love to see if there are any other volunteers to jump in, or...