From YouTube: Kubernetes Release Retrospective for 1.14
Description
We hold a public release retrospective for every Kubernetes release. Feedback is encouraged.
See here for more information: https://github.com/kubernetes/community/blob/master/sig-release/README.md
A
This community meeting is mostly for the release team in SIG Release, and for the SIGs, to talk about how the release went and things like that, so I'm about to hand it off. But before we start, please be advised that this meeting is being live-streamed to YouTube and recorded, and this is a public meeting as well, so everything that you say will be in the public record. So with that, I'm gonna hand it over to our release lead for 1.14, Mr. Aaron Crickenberger. Take it away.
B
Hi everybody, I'm Aaron. So, technically, I am not supposed to be the moderator for this retrospective, but the moderator who said they were going to show up hasn't yet, so I thought I would ramble for a few minutes to try and give them time. And if not... are you talking about me here? Yeah, that's exactly who I'm talking about. I've been...
B
Okay, I didn't, I didn't see you on. So, first off, before I get to all of that: today's cat t-shirt is a cat in an ice cream cone that's slowly melting. I really wanted to wear this shirt for code thaw, but I couldn't quite get it in time, so I shared it with the release team on the Tuesday of code thaw. But I realized this hasn't made an appearance at the community meeting, so I have now successfully completed my goal of wearing a different cat t-shirt every week during the Kubernetes 1.14 release lifecycle.
B
Thank you all. This has been a wonderful accomplishment, and that's one of the first things that I think went well for this release. So I'm gonna hand off to Jace to help us talk about what we did during the release cycle, what we liked about it, what we didn't like about it, and what we might do differently. So, Jace, you want to take it away? Yeah.
C
Thanks. So, first of all, I am thrilled and honored to be conducting another retrospective for the community. The first one I did for the community was back in 1.3, so it's been a long, long track record of doing these for the community. And, you know, simply put, at release boundaries it's a great time to capture some of the things that go right and wrong in how we deliver our software to the world, and it's a pretty complex process
C
these days. I feel like the retrospective is an opportunity for us to constructively give feedback and learn. So if you're not in the document: I'm not going to share my screen, because I have a convoluted setup here, but if somebody else wants to share, they can. Otherwise, the retro doc is at bit.ly/k8s114retro, that's k8s-one-one-four-retro, and you can actually see what we're doing there. Before we get started, just to say how this works: I'm gonna go through the items.
C
We're gonna skip the previous retro follow-up; it's more for introspection on the release team, and also for the community to see how we did at meeting our commitments from the last retro. Generally we do really well, which is exciting, so these seem to be making a difference. I wanna remind everybody: if you are speaking, that's great, unmute your microphone. If you're not speaking, please make sure you're muted, so there's not ambient noise making it hard for people to speak. Also, be super respectful of this space.
C
We want this to be constructive, and if there's something that comes up that doesn't feel right, or that you are having issues with, or that doesn't feel like it's a safe space, you can definitely contact me privately in chat and I will be happy to address that. Also, just, you know, if there's something that you wanted to change, make sure that we go after the problem and not the person, and just make this a positive experience for everybody. So, without further ado, I'm gonna hop down into the "what went well".
B
We spend way too much time focusing on what went wrong, and then think about all sorts of ways that we can prevent failure from happening, and we maybe don't spend enough time focusing on the things that went right and the things that allowed us to remain adaptable and flexible. Because, after all, we believe that recovery is a better and more effective way of operating than prevention, because sometimes removing friction allows us to go faster and do things better. So, with that in mind, I tried to focus on a number of things
B
I thought went well. First, of course, obviously: all the cat t-shirts. Thank you all so much for the inspiration; that was super fun. I also think that a lot of the deadlines were extremely well respected this release cycle. I don't have in mind exactly how well we've done in past release cycles. We did have sort of a bonus slippage in the enhancements freeze deadline, because of reasons we can talk about later, that I felt was expected and built in, but every other deadline was thoroughly respected. We actually released on time.
B
We didn't cherry-pick anything after the cherry-pick deadline, and, excuse me, code thaw happened when it was supposed to happen, and we only had one cherry-pick land in between code thaw and the cherry-pick deadline. That one was super cool. I feel like setting up the release schedule to take advantage of weekends as useful soak time for CI signal was part of what lent me confidence in some of our larger decisions. So I had code thaw in mind specifically, as well as go/no-go for release. You know, we sort of set the schedule up for the release; for example, we set our cherry-pick deadline on a Thursday, thus giving us a Friday into Saturday into Sunday of continuous CI signal, to be more effectively prepared to make a decision about the release on Monday. That was a scheduling boundary I sort of picked up from 1.13 and wanted to try again, and I also utilized it, to a lesser extent, for code thaw as well.
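To make that soak-time math concrete, here is a minimal Go sketch; the dates are assumptions for illustration, not the actual 1.14 calendar. A Thursday cherry-pick deadline plus a Monday go/no-go yields a weekend-long window of uninterrupted CI signal:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The dates here are assumptions, not the real 1.14 schedule. A Thursday
	// cherry-pick deadline followed by a Monday go/no-go leaves Friday through
	// Sunday as uninterrupted CI soak time.
	cherryPickDeadline := time.Date(2019, time.March, 21, 17, 0, 0, 0, time.UTC) // a Thursday
	goNoGo := cherryPickDeadline.AddDate(0, 0, 4)                                // the following Monday
	fmt.Printf("CI soak window before go/no-go: %.0f hours\n",
		goNoGo.Sub(cherryPickDeadline).Hours())
}
```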
B
So I have this suspicion that, because we did that, more PRs were allowed to flow in prior to entering code freeze, so we had less of this crazy backlog. And, as a result, when it came time to enter code thaw, we burned through a slightly smaller PR backlog in under 24 hours, which I think is the fastest we've ever gone through a PR backlog.
B
So I felt like, overall, we took this super spiky behavior and managed to smooth it over for a better experience in the 1.14 release, as well as providing a better experience for those people who wanted to start ramping into the 1.15 release. I felt as though the call for release team shadows was heard loud and clear. It was really super encouraging to me to hear that they're here, and to see that there was that much interest in joining and helping out with the release team. So it's good to know
B
we have that channel, that broadcast channel, available, and that people are listening to it. One thing I wanted to give a super special heartfelt thanks for was the release team operating effectively while I was dead to the world and out sick. I was best friends with some combination of flu and strep and could not leave my bed for a significant period of time, and the release team was able to make the appropriate decisions they needed to make and accomplish what they needed to accomplish.
B
I believe part of that is due to the fact that we had been operating as a team for like nine or ten weeks by that point, so we had certainly had opportunity to gel, but I think that was just a great sign of a smoothly operating team. Like, I personally had concerns that maybe I talked too much and have too many opinions, and that I might not leave room for folks to gain confidence that they were able to make effective decisions, and I was really glad to be proven wrong about that.
B
Finally, you know, this was the release where we said everybody must have a KEP, everybody must use a KEP, and I think it's really great that everybody agreed we should do that. So I think just the use of KEPs in and of itself, entering a KEP-tocracy, or living on planet KEP-tune, or whatever other crazy KEP puns you want to make: I'm glad that we agreed to do that.
B
I'm also glad that we managed to sort of take this ambiguous KEP concept, which was spread between SIG Release and SIG Architecture, and agree that SIG PM could own it and KEP it classy (thank you, George), and that SIG Architecture sort of pulled their responsibilities back to what they really care about, technical review, and created an API review process to keep that going forward. And then I think SIG Release handled it sort of most effectively; there's a good degree of overlap between the enhancements folks in SIG Release and attendance at SIG PM.
B
In reading the contents of the KEPs, I'm sure some of us have opinions on the clarity around all of this, but I will just say, personally speaking: when it came time for me to begin crafting the messaging around Kubernetes, and to really be able to articulate, particularly describe, why people should care about this and what they can do with it now that they couldn't do before, KEPs that had clearly written user stories were 100% invaluable. It was...
D
So, the next one. There's a revision there, because I didn't quite understand what Hannes had done; I was thinking he'd done something on his Sunday, but actually it was just the time difference to Europe. So the positive, useful thing here was that, as we've spread the team out, we have more people involved across more time zones on the actual release day, as we were starting to do the build and staging, and that gave us more time.
D
It wasn't just time in one workday's time zone; he was able to start his workday and get some things going, so that as the others of us, in further-west time zones, came online, things were already ready for us. That was actually really awesome teamwork. And then the last line here is CI signal.
D
I found it really, really useful that the master-blocking boards were cleaned up during the cycle, and this made it much easier for me to notice some things, both on master but also on the patch release branches that follow. So that was a really huge improvement this cycle.
C
Anything else? Those are mine. Cool, so: Jeff? Awesome.
E
This release we went with a more editorialized version of the release notes. Rather than going to all the SIGs and polling them to write out a bunch of themes for the release cycle, we actually just mirrored what the CNCF and the release lead deemed the major themes of the release, and then kind of highlighted them, including linking to the KEPs for those enhancements. That made the release notes cycle a lot easier to manage, and I also think it improved the readability of the release notes this cycle significantly.
C
Well, the big one in here: the bug triage removals, of the priority/critical-urgent requirement and of code slush as a whole, were both a 100% success and removed a lot of toil on both sides of triaging: contributors, bug triage, release lead. So, just as a little bit of editorializing, as a former release lead myself: I was concerned when the changes were being proposed, because I felt like the governance that was there was necessary, and I am so thrilled that the strategy here worked.
C
So congratulations to the release team on making this happen, and on work that seemed to remove a lot of the agony that we went through in the past. So, congrats. I'm gonna open up the floor for anybody who has not added anything to the retrospective doc but has some positive things to say about their experience with the release process, or anybody specific they want to call out as somebody who they thought did a great job. Just before I hand the mic over to anybody else, I'm gonna say: Aaron, you did an amazing job.
C
You truly raised the bar on what it means to be a release lead, and you've made me completely, thoroughly reluctant to ever volunteer again, because I'll just look bad next to that. But anyway, you did a fantastic job. Thank you. Anybody else want to add some comments before we move on to what could have gone better?
B
That would have been kind of a crazy Sunday for a US-based member, so that was phenomenal. And, to give some thanks back to you, Jace: a piece of advice I got from you about being a release lead is that sometimes you just have to do songs and dances to keep everybody entertained and paying attention, because sometimes this stuff can be super boring. So, are you entertained?
B
Just, okay, sorry, just one last thing, because I'm not sure I've ever done it before: I just want to extend that thanks to each member of the release team. I'm not gonna have time to name all the shadows, but I think for the leads: thank you, Claire Laurence; thank you, Maria Ntalla; thank you, Ahmed Wafi; thank you, Nico; thank you, Hannes; thank you, Jim Angel!
B
Thank you, Dave Stipple; thank you, Natasha Woods; and thank you to Tim Pepper and Aleksandra, who are gonna be the patch release managers for this release. The thanks really goes to all of them, because they did it: I was just here doing the songs and dances and cat t-shirts; everybody else did all the real hard work. So thank you.
B
I will try to be quick about some of this. Shadow selection took too long for me to feel comfortable. I talked about the team gelling; I wish I had gotten the opportunity for the team to gel sooner, and I felt like the amount of time I had to wait for shadows to roll in, and for leads to figure out how they were going to interact with their shadows, was a little bit too long. I attribute some of this to entering this weird holiday slump.
B
I attribute some of it to kind of figuring out the shadow survey as the release was starting to go on. So, excuse me, I feel like there could be more dedicated attention paid to that. It's really difficult to expect the release lead to simultaneously focus on getting the release out the door and then also focus on getting a new release team built, and on how to most effectively do that.
B
We've had discussion in SIG Release about making the release team a formally owned sub-project, the owners of which would be all of the emeritus release leads, who could more effectively steward what's going on going forward while the current release lead focuses on what's happening now. I know Josh Berkus is doing a number of great things there. I feel like that's all I want to say on that topic.
B
Figuring out all of the requirements to be a release team member: I felt like that dragged out over time. I showed up to something where there were a bunch of documents scattered all over the place. There were three different Google Groups: sig-release, release-team, and kubernetes-milestone-burndown. There was no clear guidance on which of those to use, or which docs; it took me a while to figure out.
B
Like, I couldn't give people a checklist. I believe somebody has since gone through and more or less created that checklist, and I still have some cleanup work to do on killing off the kubernetes-milestone-burndown Google Group and just leaving it to kubernetes-release-team and kubernetes-sig-release, assuming other people found that reduction in things helpful.
B
It also could be that I was scrambling to build consensus around the use of KEPs while at the same time being in the middle of the release cycle, and it would have been cool to have all of that consensus built, and the template ready to go, and the checklist ready to go, all prior to the release happening. I think, like, I started to try and build consensus and get agreement that we wanted to move in this direction with a lot of people in person at KubeCon, at the contributor summit in Seattle.
B
So, for example, I felt like I had a pretty concise list, like: KEPs need to have a test plan and graduation criteria. And then, when I tried to get this into the KEP template, so that people would have a really easy thing to copy-paste from, I ended up in sig-bikeshed, with lots and lots of pontification and strong opinions from folks in SIG Architecture and SIG PM about trying to get things
B
super perfect forever. And I really just wanted to get to a state of merge-and-iterate, and to figure out how to most effectively do that going forward. So, as a result, despite a lot of people, like Claire and myself and other folks, going to a bunch of SIG meetings personally, and answering questions on Slack, and trying to get people to show up in the SIG PM channel and at the SIG PM meetings to talk about all things KEP-related, there were still a variety of people who were surprised. That's fine!
B
We're humans; there's always going to be surprise. Some people seemed to be surprised that KEPs were mandatory; I feel like I was very explicit about that in all the channels, except for email. There's no clear definition around what "implementable" means; I think that's still an open question today, and I'm okay with that. And, let's see... so, like, I acknowledge all of this, and I still feel like it was better to push us all forward.
B
I think one of the ways I described this on the podcast is that we are all humans, and humans are messy, and my goal was to get everybody to kind of put their mess into the same pile, so that we could all agree that's where the mess lives, and then start to maybe, like, clean it up and organize it. So I'm looking forward to how we can clean things up and add a little more structure.
F
50% of the KEPs did not have any testing plans that could be found, and of the 50% that did, the quality was very spread across the board. So I think: do people have better ideas around what requirements we want? And, even within those requirements, when we say testing plans, what does a good testing plan look like? Graduation criteria: what are we looking for there? I think those were the big pieces I would add, but I agree with everything that Aaron said as well.
B
So, to follow up on that: I genuinely do want to hear the community's feedback on KEPs, because this was honestly about making everybody's lives easier, not just the release team's; it actually involved a little bit more toil on the release team's behalf. So, test plans: if you are releasing an enhancement and it's going to stable and you don't have end-to-end tests, I don't think you have a sufficient test plan. If your test plan consists of telling me "we wrote tests", that is not a test plan.
B
The sort of test plan that I really like, if I put on my hat as a former CI signal lead, is a test plan that says: here's the Testgrid dashboard you can go look at, so that you can see the tests are green right now, and you can see how long they've been running, and you can see how flaky or not flaky they are. I understand that some of these were written with very developer-centric mindsets, where it's like, "look, we wrote tests."
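As a rough illustration of what such a test plan can point at, here is a minimal Ginkgo-style skeleton in Go. It uses the plain Ginkgo and Gomega libraries rather than the real Kubernetes e2e framework, and the feature tag and assertion are made up; the point is that a CI job running a suite like this is exactly what surfaces on a Testgrid dashboard as green, flaky, or failing:

```go
package e2e

import (
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

// TestE2E wires Ginkgo into `go test` so a CI job can run the suite and
// publish its results to a dashboard.
func TestE2E(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "Hypothetical widget e2e suite")
}

// The [Feature:...] tag is hypothetical; in kubernetes/kubernetes, tags like
// this let CI jobs select which specs to run.
var _ = ginkgo.Describe("[Feature:HypotheticalWidget]", func() {
	ginkgo.It("should keep serving widgets end to end", func() {
		// A real spec would exercise the feature through the API server;
		// this placeholder just keeps the sketch runnable.
		gomega.Expect(1 + 1).To(gomega.Equal(2))
	})
})
```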
B
The other comment would be around graduation criteria. Some people's graduation criteria, I felt, were "because it's been in beta forever and we want it to be GA now". Okay, cool, but what other metrics are you using to decide that it should be stable? This is kind of a rehash of a conversation we've had before, where, turns out, there is actually no written-down definition of what stable means, what beta means, and what alpha means.
B
We have some of that written down in some places when it comes to API versioning, and maybe potentially a deprecation policy, and maybe potentially upgrade and version skew, but we don't have a consistent bar saying everything that goes to stable must, at a minimum, have, you know, e2e tests, or maybe have conformance tests, or have such-and-such user documentation. I'm not really sure what the list of criteria should be. I think it's unfair to expect SIG Release to be the one to enforce those criteria consistently.
B
I'm not sure what the right clearinghouse is for that, either. My perspective is: I would imagine SIG Docs probably has opinions on the quality of the docs; SIG Cluster Lifecycle might have opinions on the quality of the upgrade/downgrade experience; SIG Architecture probably has opinions on the API design principles that are in place; SIG Testing probably has opinions on the test coverage.
B
SIG Cloud Provider might have opinions on how many clouds this thing works on. So I think there needs to be some central place where we kind of get some consensus on this, and document it, and evaluate all the KEPs against that bar. Maybe that's SIG PM, since SIG PM owns the KEP process; maybe it's SIG Architecture, since they're trying to sort of centralize all of the consistent design principles used across Kubernetes. Okay, I think that is all the talking I'm gonna do. I'm curious to hear other people's opinions on KEPs.
B
I believe that is probably a reasonable place to put that sort of stuff. I think you probably want some more documentation around it; I kind of feel like the documentation around the specific fields and values of KEPs is a little lacking, and there's a lot that spells out the "what" but maybe not as much the "why". And, as always, I tend to have an opinion that things that get too wordy will make my eyes glaze over and I won't actually read them, and I have concerns about that.
C
So I guess my concern is, when we talk about adding things to the KEP: I feel like sometimes we're doing KEPs but we don't really know why, or what the specific purpose is. For me, the original purpose was intended to sort of gate the work that was going into releases, so that we were having fewer false-start features, and so that there was some planning around how things are gonna be staffed out, and to make sure that, if there are dependencies, you untangle those before you start working. And that was sort of the "implementable" idea, I think: you're signalling that this is a funded thing that's got people lined up behind it.
C
Maybe the idea of doing a retro specifically around KEPs, with each of the SIGs who submitted KEPs for this last release cycle; maybe we could get more feedback, and just make sure that everybody's aligned around: what are we trying to accomplish with this? Does it do what we need? Is it too complex?
C
Does it not have enough elements, and whatnot. But, essentially, I think we're at a point in the evolution of this where we need to do a reboot on KEPs, just to verify that we're doing what we need to do, because now we're getting traction on it, and I think it's a good time to do that. So that's just my personal take, being, you know, in SIG Architecture, SIG PM, and also involved in the release. I fully...
B
Agree with that. I felt like I didn't have the bandwidth to greatly simplify or improve the KEP experience; I made the decision to be a bull in a china shop and move us towards KEPs, and I do think they need simplification. I'm hesitant to use the term "reboot", but I think certainly setting clear expectations sounds like a good idea to me.
B
Some amount of... so, I was also really warned that doing this without any kind of automation might be a super painful experience, and, yep, totally get that, I agree. Now that we're all in this one place, I feel like now is the time for us to identify the areas where automation could make our lives simpler.
B
I think a super great initial start would be to make sure that the enhancement tracking issues don't link to PRs, but instead link directly to the KEP, which is what the template for those new issues does today. But there needs to be some cleanup done on all of the previous ones. Once you have that, then you can make it easier for tooling to crawl all of them and parse out what we need to parse out, and maybe remove some of the human toil in maintaining a spreadsheet, stuff like that.
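A sketch of what that kind of tooling might look like; this is hypothetical, not an existing release-team tool. It lists open issues on kubernetes/enhancements via the public GitHub API and reports which tracking issues link directly to a KEP versus which still need cleanup:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"regexp"
)

// issue holds just the fields we need from the GitHub issues API.
type issue struct {
	Number int    `json:"number"`
	Title  string `json:"title"`
	Body   string `json:"body"`
}

// kepLink matches a direct link into the keps/ tree of kubernetes/enhancements.
var kepLink = regexp.MustCompile(`https://github\.com/kubernetes/enhancements/(?:blob|tree)/master/keps/\S+`)

func main() {
	resp, err := http.Get("https://api.github.com/repos/kubernetes/enhancements/issues?state=open&per_page=20")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var issues []issue
	if err := json.NewDecoder(resp.Body).Decode(&issues); err != nil {
		panic(err)
	}
	for _, is := range issues {
		if link := kepLink.FindString(is.Body); link != "" {
			fmt.Printf("#%d %q links to %s\n", is.Number, is.Title, link)
		} else {
			fmt.Printf("#%d %q has no direct KEP link; needs cleanup\n", is.Number, is.Title)
		}
	}
}
```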
H
Yeah, and I know that, at least in CI Signal, that's something that we experimented with this week. I think we reached a solution that worked for us in the end, or a combination of solutions, but it is very much an open question. The other thing, specific to CI Signal, for example, that I'm thinking about is that there are almost always two threads.
H
There is the long-term stability work, the category of important-but-not-urgent things that sometimes end up getting punted again and again into the future, and there's the more urgent stuff that's related to the current release, so that's maybe consistently failing tests that also need quite a bit of attention. So there is an argument in my head for even splitting the CI Signal team into two sub-teams that could each focus a bit more on different priorities.
C
That's definitely something we should capture in the "what we'll try differently" section. So, if somebody's feeling inclined to add that to that section, that would be super helpful. Maria, if you want to do that.
B
One comment on that, if that's okay: part of what I worked on with Maria and some other folks during KubeCon North America in December was to just do a brain dump of all of the things we could do to help the health of the project. Then we created a GitHub label for this, so you could consistently query for it across the project, and we created a project board. And then nothing happened, because it's an unstaffed, unfunded effort. I believe it is, so... and this is something that SIG Testing explicitly calls out of scope.
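Querying such a label consistently across the org is a one-liner against the GitHub search API. The label name below is a placeholder, since the transcript doesn't say what the brain-dump label was actually called:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// "area/ci-health" is a placeholder label name, not necessarily the real one.
	q := url.QueryEscape(`org:kubernetes label:"area/ci-health" state:open`)
	resp, err := http.Get("https://api.github.com/search/issues?q=" + q)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // raw JSON: total_count plus the matching issues
}
```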
B
We don't write your tests for you, so we're not going to troubleshoot your tests for you. You, as developers who write features and then write tests to exercise those features, should make sure that your tests pass. But we should make sure we have tooling in place to help you more effectively identify failures.
B
There's a part of me that feels like this is one of those cases where everybody's responsible, but unfortunately that leads to the tragedy of the commons, where nobody feels like they're responsible for this stuff. I also feel like it's unfair to put this burden on the release team; they are focused solely on making sure we are good to go. I don't think they should be focused on doing a lot of troubleshooting and triaging and fixing. That said, if organizationally there are people who want to step up and help out with all of that stuff, I'm happy to have them live wherever, just to have that effort happen. To me it might be, like, a working group or something. We do have at least some of that work called out, some of that backlog available; it just needs people to do it.
C
Great. Tim, do you want to go ahead and continue with the next few points, or did we capture what you needed?
D
So, just trying to capture in the notes what we're saying there as well. The next unique one, I believe, is around dependency management. We've been increasingly having discussions around how actively we manage dependencies, and in this cycle it came up with respect to golang.
D
We had a realization, a tad late, that for the Kubernetes 1.14 support lifecycle, which basically runs through year-end 2019, the golang that we'd used and had under test ongoing, 1.11, was going to go out of support in August-ish of this year. So we made a kind of late call to switch over to golang 1.12, because its support lifetime would carry through the Kubernetes 1.14 support lifetime.
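The reasoning is simple date arithmetic, sketched below in Go with assumed dates. Go's actual policy is that a release is supported until two newer majors exist, so go1.11's window was set to close around the go1.13 release in August 2019:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// All dates here are assumptions for illustration. Kubernetes 1.14 is
	// supported through roughly the end of 2019; go1.11's support window
	// closes around the go1.13 release (roughly August 2019).
	k8s114End := time.Date(2019, time.December, 31, 0, 0, 0, 0, time.UTC)
	goEnds := map[string]time.Time{
		"go1.11": time.Date(2019, time.August, 31, 0, 0, 0, 0, time.UTC),   // assumed
		"go1.12": time.Date(2020, time.February, 29, 0, 0, 0, 0, time.UTC), // assumed
	}
	for name, end := range goEnds {
		if end.Before(k8s114End) {
			fmt.Printf("%s: support ends before Kubernetes 1.14's does; a bump is needed\n", name)
		} else {
			fmt.Printf("%s: covers the Kubernetes 1.14 support window\n", name)
		}
	}
}
```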
E
It kind of is part of the whole dependency discussion. As part of release notes, you have to update all of the external dependencies that a release may or may not be changing, and I am assuming that, over the many releases we've had, it's gotten as granular as which Ruby library something in cluster add-ons is using to talk to Elasticsearch. I feel like we should really only be talking about the dependencies that the binaries we are building and shipping actually rely on, rather than all of these other separate dependencies.
C
All right, looks like you've got a few more, yeah.
E
Absolutely. I just wanted to make sure there wasn't any other dependency talk, because we're about to switch gears. So right now we have two different processes to generate release notes. For those of you that aren't aware: anago, which is the big suite of how-to-release things, has a means of generating release notes; it generates the markdown and then actually commits it. And then we have an awesome release-notes markdown generator that Mike wrote.
E
Those are two completely separate codebases; they do completely different things, and I feel like that was a little bit confusing when we were trying to figure out where we needed to write our markdown for the release tool to actually pick it up, and, when we were trying to fix certain bugs that we found in the release notes tool, where to go. So I think it might be worthwhile combining those two efforts, so we're using the same release notes tooling everywhere.
C
Jim's here? Paging Jim. So the item is: do the test document generation earlier in the cycle, to identify issues. I can...
B
One of the issues that rolled up: so, reference docs. There were apparently issues generating API reference documentation, for capital-R Reasons that I can't articulate here, but Jim made it sound like it was death by a thousand paper cuts rather than one clear, unambiguous fix. And so I think that's why we're talking here about how it would have been useful to go through that process sort of continuously, to catch failures as they cropped up, rather than waiting until the very end and, oh whoops, our reference docs aren't generating, I guess.
E
I probably should have moved that up to right after the anago thing. One thing that wound up happening this release was: right as Hannes was pressing the big red button, we realized that a PR that we were hoping to merge, for just spelling fixes, did not actually get merged before the big red button was pressed. So we had to then go and quickly PR the spelling fixes and phrasing changes for the release notes into both master and release-1.14. So having, like, almost a go/no-go, or validating that there are no PRs going into the release...
B
The meta-problem here is that I didn't actually go verify that everything was merged before I said yes. But I'm trying to spell this particular thing out about draft PRs, just because other people watch this video and I'm trying to warn you: if you use draft PRs right now, you might have a bad time. This is something that the test-infra folks are aware of and have been trying to work with GitHub on, but this is also because it's not exactly a super stable GitHub feature. Like, anytime
B
you use a GitHub thing where, if you go to their API documentation, they say you have to send this "application/foo, octopus-monkey, whatever" media type to get it, it's probably gonna be a little wonky for a while. Anyway, thank you for raising that; my bad. And thank you for opening a follow-up here; let's close it out.
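For reference, here is roughly what depending on that preview looked like in Go. The draft-PR preview really did use a codename media type, shadow-cat, and omitting it from the Accept header simply hides the draft field; the repo and PR number below are arbitrary:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Without the preview media type in Accept, the "draft" field simply
	// doesn't appear in the response, which is the kind of wonkiness that
	// bites tooling built on preview APIs.
	req, err := http.NewRequest("GET", "https://api.github.com/repos/kubernetes/kubernetes/pulls/1", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.github.shadow-cat-preview+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```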
C
The dreaded big red button. Let's see: Maria, do you want to run down the next few items here and then hand it off to Aaron? Sure.
H
So I've got a couple that I think are more questions than they are observations of what went wrong. The first one was around golang 1.12, and my question is: we've done, from what I've seen, a couple of golang bumps over recent releases. Do we feel that we have enough material and enough experience and enough war stories to put together some sort of a dry-run or test-drive CI that we could just run before actually bumping golang in the core repo, and would that be useful?
G
So this is something that we need to do urgently, somehow. I think when we were trying to do the 1.12.1 bump there were some suggestions on how we could be testing 1.12.1 before we switched over from 1.12.0 to 1.12.1, but I think it's going to be more urgent now to be able to simultaneously test multiple versions of Go.
B
Like, apparently that wasn't the last time I was going to talk. So part of my concern here is that it is not SIG Release's job to go watch these latency graphs or whatever; part of the switch to a major dependency change is to get the appropriate SIG to say, "yes, this looks good to us", or "no".
B
Or "yes", because part of why we involve them is to see these sorts of performance issues involved in spinning up 5,000 nodes. And we're not exactly... I mean, I can't fully speak on behalf of my employer, but I feel like we're probably not going to spin up multiple 5,000-node clusters in parallel just for testing purposes, to satisfy some kind of test matrix. So, while I agree it is possible to change the version to get some sort of signal there...
B
If there are issues in test-infra, I just don't have them offhand, but to, like, check against, you know, golang head, for example, would be really cool, and we could do a sanity check. But that sanity check is only ever going to use a subset of the full set of tests that we run. For example, one of the problems we ran into with golang 1.12 was only uncovered by an upgrade test that takes 17 hours to run, and that's maybe a whole other story, but... so, forking off.
G
To be fair, on the scalability stuff: this is not the first time it's come up; it's come up multiple times before. So, this specific job in scalability: yes, I agree. The 1.12.0-to-1.12.1 bug fix that we had, that is something we won't often see, and we're probably never going to see something like that again. Yeah.
B
We have historically found that there are what appear to be harmless changes that end up introducing regressions at the very last minute. So, no matter how diligent we are about preventing change, to make sure these huge long tests won't change on us, it does always tend to happen at the end. So the pattern I often repeat with scalability is: how can we find out about this stuff more quickly?
B
More frequently? Are there ways to see these performance impacts without having to wait for a 5,000-node cluster to stand up? I personally feel like I have not seen an answer to that question, about whether these regressions are caught with smaller clusters, or whether the Kubemark simulations effectively simulate all of this stuff, and so it bugs me a little bit. I feel like some of the long... I'm going off on a tangent now, and I will stop after this, but: some of the longest-running jobs run against pull requests.
C
So, what I'm hearing very clearly here is that SIG Scalability needs to be staffed up. So, if you're on this call right now and you're interested, go to the SIG Scalability README and sign up, because this is one of those efforts that is chronically understaffed, comes up at the last minute, and has been, in my experience as release lead, the uniform thing that always slows a release down. So let's fix that with community effort; that's how we do it, right?
G
I...
B
However, our release branch manager is in a European time zone, and so expecting them to wait all the way to end-of-day Pacific to cut a release, to line up with that sync point, is probably not the greatest expectation of work-life balance. So we ended up cutting the release, the RC1, at a time that made sense for Europe, but left code thaw as previously announced and scheduled at Pacific time. Mea culpa. I totally agree they should be synced up and lined up. So, maybe that. Thanks, thanks.
B
The question is a fallacy... no, that's not what I'm trying to say. It prompts the question: is end-of-day Pacific always a reasonable time? Maybe we shuffle it back to noon, or morning, or something like that. Because I do agree: having them be identical is perfect. That way, you know the release candidate corresponds to exactly what is in the release branch, whereas, because of that window, what was in the release branch differed ever so slightly from the release.
B
One piece of feedback I would like to solicit: you don't have to tell it to my face, but I'm curious what you as a community think, and maybe you can all tell Jace or something. I mentioned sort of up top how I hadn't been sending out weekly emails, and nobody's told me they didn't like that, and nobody's told me they really liked that. So this is... I forget the name of the paradox right now.
B
But... no, that's true. Like, so, I tried to communicate status and things that were important in a couple different forums. I would do it weekly at the release team meeting, with those meeting notes that are publicly available, and then I would do it again at the community meeting, with those meeting notes that are publicly available, and so I could have just copy-pasted that stuff into an email and sent it out. Actually, fun fact: if you read the community meeting hosting template, it says that the host is supposed to email
B
the meeting notes of the community meeting to kubernetes-dev when the community meeting is over, but that hasn't happened in years, I think. And so I just want to try and reduce the amount of time I have to spend distilling and summarizing and sending stuff out to people. But I agree; I suspect most people would have preferred email communication, in addition to all the other forms of communication I was doing. Alright.
C
Right, y'all: it is 10:59 a.m. Pacific; we are at the top of the hour and the end of our meeting. I want to thank you all for another great retrospective, and hopefully this was useful to you all. If there's feedback on my moderation, or things we could do better for the retro itself, definitely reach out to me and let me know. And SIG Release will probably pick up the "what we'll do differently" within the release team as well, so hopefully that will be a continued discussion. George, anything before we call it?