From YouTube: Kubernetes SIG Release Meeting 20181218
Description
https://github.com/kubernetes/community/tree/master/sig-release
part #2 of the 1.13 release retrospective
https://docs.google.com/document/d/1mKrJm3N4dC4rfOwa5WcvOptmjpteMcNOg1ZSoACKU1g/edit#
C: Yeah, I think if every role can at least select a representative to be present for the meetings, that's enough. I think initially it's important to have as many bodies in the room to sync up, but once the teams break out into their respective roles, I think it's fine to just have one or two. So I mean the staggering should be fine, but I don't think we have to do it all the time.
A: Why is attending the meeting so important that we need people there for synchrony? Like, what are the critical portions of the release that still seem to require that, that aren't yet asynchronous? I guess I would view required attendance at a daily meeting as a sign that we don't quite have our communication game where it needs to be in tracking ongoing work.
C: Same. So, the request came out, and I think y'all might have seen it, for a Doodle for alternating times for the actual SIG Release meetings as well. And I think again, you're totally right, there's a balance, and there's also a communication aspect. So if we are not communicating properly... I think all the dots that we collect and the communications that we have around the release process are fairly solid, but definitely shout out if there are things that we need to fix.
E: I think just keep in mind tracking down when there's a need for conversation on something, and being mindful that sometimes it's people in a different time zone, and being accommodating of that. So it's not just always one set of people demanding another set of people show up at their 2:00 a.m., or things like that. I mean, that's an obvious professional courtesy, right? That shouldn't be unreasonable. But again, we should be able to do a better job of asynchronous communication, like Aaron says.
E: All right then, moving on to "what will we do differently", if there's nothing else there to discuss. Okay: test flakes, a major area of discussion last week. Before we dive into the stuff here, are any of these links from the notepad from last week's discussion? Okay, so first comment: Josh, around reviewing test jobs and, I don't know, maybe reorganizing them. Do these kind of get covered by the conversations last week, and should we just link them?
F: I don't think anybody's done that. I think creating an actual PR is still on my plate. So the idea is just to reorganize all the test jobs into blocking and informing. We should have almost exactly the same jobs in master as we do in the release branch, with a couple of exceptions, and a bunch of things are going to drop out of the tests, you know.
F: Basically, some things that are currently blocking will end up informing, and some things that are currently informing are going to drop out of sight entirely. And if SIG Release is the only entity looking at those jobs, they'll stop helping anyone, because jobs that basically have a 70% flake rate are really not doing anyone any good unless somebody is actively working on fixing them.
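The blocking/informing split described above can be sketched as a simple bucketing by observed flake rate. This is a hypothetical illustration: the job names, thresholds, and data shape are made up for the example, not taken from the real Prow or Testgrid configuration.

```python
# Hypothetical sketch: bucket CI jobs by flake rate into blocking, informing,
# or dropped. Thresholds and job names are illustrative only.

def flake_rate(runs):
    """Fraction of runs that failed, given a list of booleans (True = passed)."""
    if not runs:
        return 0.0
    failures = sum(1 for passed in runs if not passed)
    return failures / len(runs)

def classify(jobs, blocking_max=0.02, informing_max=0.30):
    """Bucket jobs: stable enough to block, flaky but informative, or dropped."""
    buckets = {"blocking": [], "informing": [], "dropped": []}
    for name, runs in jobs.items():
        rate = flake_rate(runs)
        if rate <= blocking_max:
            buckets["blocking"].append(name)
        elif rate <= informing_max:
            buckets["informing"].append(name)
        else:
            # e.g. a job at a 70% flake rate helps no one, as noted above
            buckets["dropped"].append(name)
    return buckets

jobs = {
    "ci-example-e2e": [True] * 99 + [False],              # 1% flake
    "ci-example-upgrade": [True, False] * 50,             # 50% flake
    "ci-example-conformance": [True] * 90 + [False] * 10, # 10% flake
}
print(classify(jobs))
```

The exact cutoffs would of course be argued over per job, as F notes about the eventual PR.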
F: Yeah, but they're all medium-term projects, because we have a lot of technical debt. Like Aaron pointed out, we need a lot better visibility into how flaky the tests actually are, so that people can see if they're improving things, which requires creating a new UI. Some of the other things, I mean, obviously tackling the upgrade/downgrade problem is a major task in and of itself.
F: So that particular item, anyway, is a lot simpler: taking the status of the tests as they are now, both in terms of how flaky they are and how useful what they show us is, and reorganizing them. And I'm sure there will be argument over individual tests when I file the PR.
A: Okay, so this is something I care deeply and passionately about. However, if I'm being fair to myself and to you, I don't think it's going to be realistic that I can simultaneously coordinate the 1.14 release and also coordinate driving down flakes everywhere.
A: First, we need to make sure there is a clear and well-understood path of escalation. Like, how do we identify who is responsible for fixing a given test or flake, and then how do we actually escalate that appropriately? This is why I started with the CI signal playbook, and I feel like the CI signal playbook has gotten very, very wordy and verbose and contains a lot more information.
A: Just that specific nugget, who do we escalate to and how do we escalate, is really useful information for the entire community, not just the CI signal person. I feel like the CI signal guide could then kind of link to that and be a lot more about what your daily responsibilities are as far as keeping track of CI signal. So that's the who and how, right? Then the next thing is, like...
A: So we need to find safe environments to reproduce the flakes, right? This is just kind of the slightly trickier version of test-driven development: before you go and fix the issue, make sure that you can reproduce the issue and have the test that goes red, so that you can see the test go green once you fix the issue. So in theory we have some place to reproduce these flakes, and we can see that they are then not flaking anymore once we've fixed them. So that's the, like...
A: How do we actually address the meat and potatoes of the problem? And then it comes to, okay, so now we know who's going to do the work, and we'll know when the work is done. How do we go identify all of the work to be done? And so this is where we have the human identification, which is the CI signal person feeling a lot of pain and filing a bunch of issues.
A: We have Velodrome, the dashboard that shows the flakiest jobs and the top tests that flake for those jobs on a weekly basis. We could expand that to now do it for all of the postsubmit and periodic jobs, maybe focusing specifically on the release-blocking ones. And then we could also federate out known anti-patterns, such as tests that depend on events, which are not guaranteed, or tests that use the "expect error not to have occurred: timed out waiting for condition" anti-pattern.
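The event-dependence anti-pattern mentioned above can be illustrated with a generic sketch: rather than asserting that a non-guaranteed event was observed, a test can poll for the end state the event would have signalled, up to a deadline. All names here are hypothetical; this is not real Kubernetes e2e framework code.

```python
# Hypothetical sketch of the fix for the "depends on events" anti-pattern:
# poll for the desired state with a timeout instead of asserting an event.
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    # Caller reports the classic "timed out waiting for condition" failure.
    return False

# Anti-pattern (flaky): assert "Scheduled" in observed_events
# Events may be dropped or deduplicated, so the test flakes even when the
# system behaved correctly. Better: check the state itself.
state = {"phase": "Pending"}

def becomes_running():
    return state["phase"] == "Running"

state["phase"] = "Running"  # in a real test, the system under test does this
assert wait_for(becomes_running, timeout=1.0)
```

The polling deadline makes the failure mode explicit and independent of event delivery guarantees.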
A: So that's how we can get huge swaths of work that then, in theory, we could federate out to other people. I would imagine that the folks from SIG Testing will be interested in helping raise this signal and this visibility.
A: And maybe this is all CI signal actually needs to do in order to maintain green signal: to focus on that federation of work, focusing only on the release-blocking jobs. But those are my things, and I really think this is something that I want to see us make substantial progress on. But I probably shouldn't volunteer to be the guy making that substantial progress, because I've got enough on my plate. I would definitely concur with that.
A: All those things, though, they do seem outside the purview of SIG Release. I think specifically the "how do you write tests which do or do not depend on events", given that events are so core to the production operating system, right? There's not an operator out there that doesn't depend on events, so if we can't rely on them in tests, I think we've got some challenges. But I do agree that there's a lot of work to be federated among a group of people.
A: I do think that, certainly, a SIG like API Machinery should be responsible for ensuring that you can test the API machinery of Kubernetes, right? That's a discussion to have here. I forget if you were at the contributor summit during the steering committee Q&A, but I sort of laid out, and didn't hear anybody violently disagree with, the idea that SIG Release gets the final say on whether or not we're going to cut a release, and whether or not test flakiness should be part of the criteria for cutting it.
A: As a follow-up or action item, I would like to see somebody go through the notes that Maria took as part of the deep linking session and try to put together some concrete issues out of that. And we could maybe even put together a project board to track just those issues, to sort of figure out what we want to work on and who's going to work on it.
A: I will help, I can help create issues. And for all I know, this is something that Maria cares enough about that she would rather do it; I don't want to speak for her since she's not here. But we could definitely use people to steward all of that forward, would be my guess. And since this is recorded and will be watched by others: I don't know that I necessarily need a volunteer right now, but I'm going to get cranky if this is something that's not staffed next quarter again.
H: Myself, and hopefully I don't regret it, but I want to get into understanding more about the CI signal process, so I'd be happy to help dedicate a few cycles to this. I just need some help and some guidance on sort of where to get started, since I'm still new to that side of the house; I've only been doing the enhancements side for the past two releases now.
F: Right, I am planning to work on flakiness and some other long-term issues in the 1.14 cycle, since I'm not on the release team for that. However, for some of this I actually want to consult with Maria, just because at the beginning of the release cycle she is going to want some stuff that she can assign to her shadows, and so I want to divide up the work.
A: ...to go chase down SIGs to own each of the jobs, and make sure that they have email addresses that Testgrid can send alerts to if their test fails more than ten times in a row. Part of that is actually doing the shuffling of the jobs, the reorganization, and part of that is finding some way to collect, instrument, and generate the metrics necessary to consider jobs in or out of release-blocking. One of those metrics could be flakiness, also things like job duration, and so on and so forth. And Josh,
A: you might be a great person to take the lead on some of that, just because I suspect a lot of it could be scraped out of BigQuery. The metrics weren't written in such a way that a human being could actually read all of the summary lines that come out of Testgrid and parse them as a human being, but I would love to have a computer automatically generate those and check them against known thresholds.
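The "generate metrics and check them against known thresholds" idea might look something like the sketch below. The row format, job names, and threshold values are assumptions for illustration; real data would come from BigQuery result tables, not these literals.

```python
# Hypothetical sketch: summarize per-job metrics (flake rate, duration) from
# rows resembling scraped CI results, then check them against thresholds
# automatically instead of having a human parse Testgrid summary lines.
from collections import defaultdict
from statistics import mean

THRESHOLDS = {"max_flake_rate": 0.05, "max_avg_duration_s": 3600}  # illustrative

def summarize(rows):
    by_job = defaultdict(list)
    for row in rows:
        by_job[row["job"]].append(row)
    report = {}
    for job, runs in by_job.items():
        flake = sum(1 for r in runs if not r["passed"]) / len(runs)
        avg_dur = mean(r["duration_s"] for r in runs)
        report[job] = {
            "flake_rate": flake,
            "avg_duration_s": avg_dur,
            # A job is "ok" for release-blocking only if it clears every threshold.
            "ok": (flake <= THRESHOLDS["max_flake_rate"]
                   and avg_dur <= THRESHOLDS["max_avg_duration_s"]),
        }
    return report

rows = [
    {"job": "e2e-example", "passed": True,  "duration_s": 1800},
    {"job": "e2e-example", "passed": True,  "duration_s": 2000},
    {"job": "upgrade-example", "passed": False, "duration_s": 7000},
    {"job": "upgrade-example", "passed": True,  "duration_s": 6800},
]
print(summarize(rows))
```

A report like this could gate whether a job stays in the release-blocking dashboard, which is the decision the speakers want automated.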
E: I think that sort of thing is one of the key takeaways here: we need to do some project management on that drive towards execution. We've done a lot of talking and note-taking, and we need to bubble that up into some action. Next, in the interest of time, I do want to move forward to the next couple sections, unless there's anything else to cover there.
C: One note on my side with regards to the handbooks, the role handbooks: if you're writing a handbook, if you're reading a handbook, at every opportunity you should try to link out to more visible, more discoverable locations for people. So, to Aaron's point a little earlier, I think it was his first point before we went down this track: anything that is a document that would be viewed by someone who is not on the release team should be linked out.
E: So the next section is SIG Architecture oversight, sign-off for enhancements graduating to beta and GA. I know in a number of the prior releases we've kind of talked around this a bit, trying to do more vetting of what's coming through for enhancements more formally. The first one on the list, those are actually yours, well, you and Patrick, I guess, together are the combination of the set of things. Do you want to summarize there?
D: There are situations when it's not very clear if it's ready for GA, or beta for that matter, in terms of what the enhancement is supposed to offer and test readiness. And this in fact came to the fore with the SIG Windows support in 1.13. So as a release lead I found it too laborious, or even unrealistic, to just go very deep into each of the enhancements to even understand what is being planned and what is being worked on, and the main traction comes only towards code
D: ...freeze. Escalation at that time becomes too late in the process, and this, rightfully so, frustrates the contributors. So the ask is: we should have some way where we have a checklist, a formal list of items, that the release team, mainly the SIG owners, kind of commit to, and that gets signed off by, say, SIG Architecture or a bigger body that has the right to give a stamp, and then the release team can go off of that.
D: ...to make sure that, you know, all of these are met so that the enhancement can go to beta or GA. So if this is what KEPs are supposed to be, then probably we should kind of nudge the SIGs to do more of that in 1.14. I know Aaron was also trying to make this compulsory for all the enhancements in 1.14 and see how it flies.
C: It is KEPs; it should always be KEPs. Everyone should be filing a KEP for as much of the work that you're doing as possible, whether it's release-related or not. The current bucket of issues that are in the enhancements repo right now are essentially vague KEP tracking issues, or what should be KEP tracking issues, and they're supposed to have information on the docs, the tests required, and so on and so forth.
C: If that stuff is not complete, it should be completed by the people who own the enhancement, as well as having the KEPs actually filled out. There's a larger issue that I had mentioned somewhere, maybe in the architecture birds-of-a-feather: we don't have a consistent idea of what graduation criteria are for the entire project. If someone can find that document for me, I almost challenge them to. So, because we don't have consistent criteria,
C: we constantly go into this mode of "hey, I'm trying to get my thing in". Like, is this a political thing, that I'm trying to get it to GA? Is it because it has, you know, the required amount of tests? And what is the required amount of tests, or documents, or documentation, for these, right? We don't have any of that well-defined. So that's a full-product, a cross-project concern that needs to be solved first, and then, yes, everyone should be filing KEPs. I refuse
A: ...to ship 1.14 until we have that defined. Well, okay, that's certainly within your prerogative as the release team lead, in lieu of KEPs. However, when Dan and I were working on 1.6, one of the things we did was go through and look at each of the proposed enhancements shipping, and for the risky ones we did make each of those enhancement owners write us a one-to-two-page document describing their rollout strategy.
A: Unfortunately, they were all in Google Docs, so they were lost to the sands of time. But certainly there are other options, and, you know, there's prior art for, as the release team lead, demanding additional documentation from individual enhancement owners about their seaworthiness.
A: So, to be clear, and I will stop talking soon, I swear: this was also a part of my rant at the contributor summit during the steering committee Q&A, so it's recorded there for all to see. I don't think it is fair to expect the release team, slash release lead, to have sufficient technical depth to dive deeply into a given issue, for all the issues in the release, to figure out whether or not it is okay.
A: I don't necessarily see those right now in the standard KEP template; I would prefer to see some more tactically focused things in the template. And then, back to the graduation criteria thing: I'd like to have some input into what that list should be, but I would much rather have SIG Architecture get their act together and write down at least some kind of checklist that we can review.
A: So until there is a scalable project-wide process, which I totally agree is what KEPs are supposed to be for, and this is going to be a key part of how KEPs relate to the architectural API review process, you can, for an individual release, which should have 15 to 20 enhancements, force-distribute that task onto the enhancement owners. We have done that before, and I wish I could show you those examples. But, you know, as CoreOS employees, we got a bunch of Googlers, in the main, to write these one-to-two-page documents for us, so we could understand the enhancement going in, so we could feel confident that we knew enough to know whether it was good or bad, what to do if it was bad, and who we would contact in that case. So I'm definitely willing to help try and recover these from the sands of time, or we can talk with Dan about that experience as well, because we were in the exact same place with the exact same problem.
C: So, I don't disagree. What I don't think it should be is an optional process. I think everyone should be held to the same standard; that way there's no question about what needs to happen every release, what has to happen for every enhancement. So if we decide on a process, and we gather that feedback from the sands of time, I think we should make sure that it's formalized and it's across the board, sure.
A: I mean, I'm always trying to be pragmatic here. It's volunteers, volunteer effort, always, and, you know, try and focus on the things we can get out the door. There is prior art for doing this, and we should apply that prior art until, you know, we don't need to do it anymore, because the process can take this concern off of our radar.
C: So there's already an issue open to update the documentation around filing enhancements: one for the documentation that's in the enhancements repo, and then also the KEP templates. Those are two separate issues. I'll link to this document right now and I'll add notes about the specific criteria.
A: In 1.6 we had the enhancement owner and then the sponsoring SIG. If we didn't get enough feedback from either the enhancement owner or the sponsoring SIG, their lead, or I guess their TL if they have one, we would just say it's not going in. We will use our authority as SIG Release to ensure that their enhancement does not ship.
A: We can in fact do it, and yes, you could decide to not ship the entire release, but I do think we have been effective with more narrowly focused threats of destruction, yeah.
E: On the plus side, feature freeze is two months before we ship, right? So I mean, we have time to work with the KEPs, or the enhancements, rather, sorry, the things that are identified in that first month. We do have time to review them and say: hey, we're missing these aspects and we need to bulk them up.
E: So I guess the final thing in this section that we haven't touched on is "review by milestone", a comment from Patrick. But I think this also relates to the earlier discussion of how we make sure we have the right visibility on these things as they go by. I thought that was kind of a normal part of the KEP process, but does anybody have a sense of what's missing?
C: Well, you know, "let's do this, or let's do this", or, like, "what's left that we need to do to get this shipped", right? And we were like, I was like, "you should definitely have a KEP for this", right? And it was so late in the game at that point. So there's not an understanding that, like, you should be filing them all the time.
C: Although my internal calendar is off at this point, yeah. But yeah, the discussion, I can forward you the thread once I find it, but essentially the criteria that was required was not well defined or explicit, right? So that's not fair to them, because they've been driving towards getting something out, not knowing that there were additional things. You know, if they had done a KEP long in the past, then it would have been less of a problem.
A: The number of people is a bottleneck, such that, and I like Patrick's way of describing this, if there is a capacity problem, it would be good for us to make sure priorities are set up front. So maybe one way we could surface that capacity issue is to ensure that a feature (a) has a KEP, and (b) that KEP has been reviewed and signed off by somebody from SIG Architecture, you know, prior to feature freeze, or after some date.
A: Beyond feature freeze, something like that. But this is where, like, I'm totally the guy that's willing to be the jerk and throw down whatever hammers need to be thrown down, but my organizational skills are maybe not the greatest, and there are a number of people here who are way better at messaging this sort of stuff and spreading the work around. So this is why I want to talk the least right now, to hear how you best think this could be done.
C: I don't think we need to throw a hammer. I think it's a problem that everyone understands; it's just kind of driving to the solution instead. So, you know, I'm hearing that we need to improve the documentation around KEPs, how to do the process in general. I'm hearing that we need succinct graduation criteria across the board.
C: So there's also an issue open for that. I linked the KEP tracking board, if you all want to take a look at it. There are some things that are essentially required to make KEPs real, because we have explicitly stated in the KEP documentation that KEPs are in a beta state, and I don't think that's true anymore.
A: Right. I feel like I have experienced some hesitancy from you and Caleb and Jason and other folks when I talk about trying to shove everything through KEPs, and I believe we are long overdue for just imposing some amount of structure on this. So we're going to just try it and see how it goes, I guess. So the message would be like: okay...
A: So, as a person working on this effort, and with some estimate of how much downstream work is going to be generated by trying to do that, I would strongly suggest, well, not strongly, just suggest, not doing that for 1.14, this next release. The tooling, in my best estimation as a person who has spent, you know, sunup until sundown, not really, but imagine virtual sunup to sundown, working on this problem of how to scale the process to the project's current size...
A: ...it's not quite ready for that, and the gaps would have to be filled by humans. And as one of the humans who would be forced to fill that gap, I would request that you not voluntell me for that. I totally understand where we were coming from, and there's nothing I wish I could say more than "yes, I agree, let's throw down the hammer", but...
G: As far as I can tell, I believe a document that outlines the feature needs to talk about how upgrade/downgrade is going to work, and it needs to talk about testing, and we need some way of indicating that it's finished. And those things all sound doable, I mean, with a markdown document and just our review process. That's what we...
A: There are challenges with notification that are well known, that need to be addressed by automation, in terms of splitting out the architectural review process decision into someplace that is both durable and very easy for incredibly oversubscribed folks, like your Tim Hockins and your Brian Grants, to subscribe to, in order to facilitate these reviews that they need to be a part of.
C: I think so, yeah. Just to interject: I think there needs to be a process, and I think that we're moving towards having a process. I think that having the requirement for SIG Architecture to re-review essentially everything that has been submitted as a KEP or as a design proposal at this point is not feasible, so there needs to be some sort of transition mode. I don't know what that looks like exactly yet, but I don't think we can ask them to essentially review them all.
A: Totally agree, and one of the challenges I was explaining is about the signal-to-noise ratio, the fact that someone would, today, have to do that work to flag the issues that need to be reviewed by SIG Architecture. We've had, you know, people volunteer to do that in SIG Architecture; it's already more or less a full day's worth of work to just do a single pass.
C: Yeah, so, I mean, it's anywhere from, at the start, it could be close to fifty-something, right, and it usually drops down to about twenty-something by the end of the release. The problem is, up until the point that you're not tracking them, you are tracking them, and that's a huge cognitive burden. There's also the issue that once you start tracking them, the things that slip are essentially the things that you couldn't get information on.
G: Where would we propose tracking this sort of thing? Because I do think that if we're going to say "okay, we're GA-ing this feature", there needs to be some kind of paper trail, like "we have an actual plan for this". Case in point, we were talking about the Windows stuff earlier: I just took a look on Testgrid, and we still do not have a single working test for that. Not even one, right?
I: So the GA question doesn't come up in all the KEPs. Only if the KEP is going from beta to GA do we need to put in the extra effort; most of the other KEPs are alpha and beta. So I think we have a smaller number of KEPs than, you know, the 50 KEPs, right. So that is one thing there. And the other one was the Windows one. I definitely feel that they jumped the gun, in the sense that, yeah, there was no KEP, and I...
C: I mean, because we explicitly say in the documentation that KEPs are a beta process, right, you can't necessarily, and SIGs don't necessarily need to, follow them. So it starts with the documentation; it starts with the people and the process. And because they weren't aware of that, I can't fault them for that. I actually feel bad.
I: I feel bad about them too, but there is also this, you know, we have the push and the pull, right. And working in the community implies that you don't know beforehand what you're going to end up with or what you will have to do. And we've made people jump through hoops, including the CSI folks, with the dynamic volume stuff. This is not the first time; they are facing it for the first time, so we are feeling bad for them, but, you know, this is something that we face on a daily basis.
C: That's fair. So, I mean, Aaron, you, myself, Caleb, JC, whoever else: let's get together and mash on a doc or something and communicate this properly. Let's table the rest of that, because I think this could easily bleed into an hour-long conversation. From the perspective of graduation criteria, there is a mention that we should explicitly say what needs to happen for beta and GA.
C: I think it's even more important to say what needs to happen from alpha to beta, because I think that graduating something from alpha to beta is arbitrary at this point, right. And as far as I know, from my past experience, the CoreOSes and the Red Hats, like, unless you're mashing on it in your day-to-day work, as a product strategy, we refuse to introduce features that are not beta, right.
A: I agree with that; they're both almost P0. And I agree that there are a number of features that get rushed from alpha to beta, because they never actually get turned on on hosted services until they get to beta. So we never actually see large user traffic or experimentation with the feature until it hits beta, which is wrong.
A: Alpha is really the time to experiment and play around, where you can break things. But yes, that's why I would like to understand what the rigid criteria to get to beta are, and make sure that we are incentivizing people to improve the stability of their stuff rather than just ship new feature after new feature. If it's harder to get something through, that leads to better quality work, yeah.
C: So I think we're all seeing the same stuff, and we have it well documented. Putting Tim's hat on for a second: we have the graduation criteria that we need to do. We need to facilitate the documentation and the communication around the KEP process and the graduation criteria, and I will help facilitate that. I'll work with you and Aaron and Caleb, and we can...
D: ...the inclusion that we want for these enhancements as part of our release itself: should they follow the stages of graduation again, should they follow KEPs, and how far can they be included in our release notes, yes or no? We'll still need some guidance there, and if the answer is "none of these", then we need to get on the same page and then communicate it when these enhancements come to us.
A: My feeling on this particular issue is that it is up to SIG Architecture and the steering committee to be careful about what SIGs are chartered. Beyond that decision, I think what a SIG wants to talk about in the release should be broadly up to them. Because SIG Architecture and the steering committee have created this special interest group, it should be able to talk about, should have, you know, access to community resources like the release notes. So if we have... oh yeah, no.
C: If it needs to be a separate meeting, that's fine. I know this came up for SIG AWS in 1.13. So again, if we enforce, and I know "enforce" is a heavy word for a community project, but we need to define graduation criteria across the board, right. This is not just for k/k; this is everything. Everything that touches a Kubernetes repo should have some sort of vetting, right. If we can do that, then we can easily just point people back to the docs. This is not like...
C: ...this was not put down, right. If you're writing release notes for kubernetes/kubernetes, I don't think that out-of-tree stuff should be included in that. I think it's fine to link back to it and say, like, "hey, this new feature is out, and it's related to blah blah blah; check out this repo for more information", right. I think it's fine to do that, but including release notes for something that's not included in the repo that you're releasing for, I...
A: I begged and pleaded, I think, with the appropriate parties to make sure we're not trying to land packaging of out-of-tree bits into this release. I hope that there's an ongoing discussion about that for perhaps the next release. I don't want to see release notes for other repos land in k/k. The entire reason we want to split this project up into as many repos as possible is to allow those to launch independently of the Kubernetes release lifecycle.
A: So, while I can appreciate that, and I'm not trying to call them out specifically, but I know, like, AWS has done a great job of having a bunch of out-of-tree projects to enable things like ALB ingress, I think, and some IAM stuff, right. Cool, great. I don't necessarily think that means they get to ride on the big old hype train that is a Kubernetes release. I think we shouldn't empower them to, like, create their own hype there.
C: I think that what we can do is spin up a greater effort around communication for this stuff, not just communication of a Kubernetes release, but, you know, similar to LWKD: "here's what you missed in the last cycle", or something, "here's what happened". Yeah.
A: Let's maybe dial it back a little bit. I don't want to try and redefine the answer to the question "what are we releasing and why are we releasing it" this way. I'm rabid that we continue to release just k/k, and that our release notes and such just talk about the stuff that's in kubernetes/kubernetes. I wasn't specifically calling out a commercial thing; I would like to empower the whole community, but then that leads us down a much longer rat hole of distributions, and what are we packaging again, and why, and...
G: A lot from Ben. So, I mean, one is, I don't think we want things like, say, Prow or kind showing up in the current Kubernetes release notes, but they are still SIG projects worth covering. I think the idea of some kind of "here's what the project has been up to", as a separate note, sounds like a great idea, and we can let things do whatever they want there. There is another point about the whole commercially-focused SIGs thing, where that's kind of where SIG Cloud Provider exists, and that's totally outside the scope of...
G: ...this meeting. There's a discussion around "should other cloud providers" and so on, but yeah, I wouldn't want, like, all of the testing infra showing up in the k/k notes. That is noise. If it's something that we're actually packaging with the release, it should show up there; if we're not packaging it with the release, it's not part of the release, and it shouldn't be noise.