From YouTube: Kubernetes 1.15 Release Retrospective Part I
Description
Retrospective on things we did and can improve in the Kubernetes release cycle.
A
Hi everyone, and welcome to the Kubernetes community meeting. Hope you're all having a fantastic week. Before we start the meeting, we do have a code of conduct. This is recorded and streamed, so make sure you don't say or do anything that you don't want permanently recorded. This is kind of a special community meeting: we're going to do a Kubernetes 1.15 release retrospective. So this community meeting is all about the new release and the retro for that release. We do have a moderator today, Christine, so I'm going to hand it over to Christine. Yeah.
B
Thanks for the intro. Just in case, since I'm new to the community: my name is Christine, I work at Linda, and I'm a technical project manager there. I think let's get started. I'm not going to share my screen, but everyone has the link.
B
I dropped it in the chat. We're going to do a little bit of follow-up on the previous retro, then go over what went well and what could have gone better (I know some of the things I saw in the chat, especially around the delay), and then what things we could do differently for 1.16 and beyond. So without further ado, let's jump right into it.
C
Nobody assigned themselves action items last time, so, mea culpa, I've done them for the past couple of retros. One thing I personally followed through to completion was ensuring that we actually do have owners for all of the release-blocking jobs. This was done by making sure that everybody has an email address associated with their jobs, and Testgrid sends out alerts if those jobs fail. If you're on the release team, you've definitely been seeing those alerts; but if you're on, say, SIG Networking or SIG Scalability, you've also been seeing those alerts.
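For anyone unfamiliar with how that alerting is wired up: it is configured through Testgrid annotations on the Prow job definitions. A minimal sketch follows; the job name, dashboard, and email address here are illustrative placeholders, not the actual 1.15 configuration.

```yaml
# Hypothetical Prow periodic job showing the Testgrid alerting annotations.
periodics:
- name: ci-kubernetes-e2e-example            # illustrative job name
  interval: 2h
  annotations:
    testgrid-dashboards: sig-release-1.15-blocking        # dashboard(s) the job appears on
    testgrid-alert-email: sig-example-alerts@example.com  # who is emailed when it fails
    testgrid-num-failures-to-alert: "3"                   # alert after N consecutive failures
  spec:
    containers:
    - image: gcr.io/k8s-testimages/kubekins-e2e:latest
      command: ["runner.sh"]
```

With an alert email set on every release-blocking job, a failing job notifies the owning SIG rather than only the release team.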
C
Some of the suggested changes that came up didn't have super clear owners, but I just wanted to give some shout-outs to people who are putting in the work and following up. Maria Natalia has been working diligently to help consolidate all of the different jobs across the different SIG Release testgrid dashboards, and thanks to some of Maria's work and some of Katherine's work, we now automatically generate all of those dashboards and are working to gradually reduce the set of things that people have to look at to understand.
C
Rico Pandarus talked about a number of ideas to improve our workflow. I've come up with a kind of organization-wide triaging workflow that could be used by the release team or individual SIGs; the discussion for that is still ongoing. Most recently I saw some discussion about this in Contributor Experience, and I think it's also been discussed in SIG Release.
D
With regards to packaging: Tim and I have been discussing what to do with packaging quite a bit over the last few days. Moving into the 1.16 release cycle, what we're trying to do is make sure that you can build debs and RPMs for individual versions of all of the individual packages (so kubeadm, kubelet, kubectl, kubernetes-cni, and cri-tools) on any version, on any architecture, for any distro or any channel. So that work is active.
D
We have a few PRs up that I've linked around in the channels, but I can link them here. With regards to the KEP stuff: I think one of the next things that we need to do is metadata validation for the KEPs, so that is on the board; I can link the board as well. With regards to some of the things that need to be done around KEPs, one of the biggest things that I see is that there are some KEPs that have been open since November or December. That needs to change.

I think there's still a problem around merging early and iterating often on KEPs. So one quick suggestion would be to add an open-questions section to a KEP, so we can just capture the feedback from whatever review process and move forward. At least we don't lose that across GitHub comments.
C
I've been involved in at least some of the PRs that Tim has put forth, and I hadn't yet found my way to an umbrella issue or some central source of truth for what the plan was for that work. If you can tie that back to those KEPs, that'd be cool. Yep. And Christine, I will hand it off to you, and I will be quiet now.
E
I hope I am. So, I just want to give a shout-out to Josh, if he's here, for organizing and managing shadow-related things; I feel like that went well. I hope he stays on as emeritus advisor, or someone maybe takes over his position, or maybe he should have shadows and then someone takes over his role.
E
Yeah, that's pretty much what went well. Oh yes, and also we had a special channel just for release engineering and release management related discussions, and I think that's much better than just having it in, say, #sig-release, because there are also other conversations happening there, and conversations get cut off between different contexts or different topics. So that was nice, having our own channel.
D
Yeah, plus one, I mean, +1 to that. Cheryl, thanks for the idea, and thanks for actually working to get it implemented. Katherine, thank you for actually having Slack infra automation, so that was super easy to do. If anyone is interested in seeing what our release process was for 1.15, you can actually go to #release-managers and watch all of our chatter.
D
Previously there was no channel, so if you needed to ping someone about creating packages or something, you would ping them individually, and that message would be wherever it was. Going back really quickly to the emeritus advisor (and thank you, Josh, for doing it for the last few cycles): one of the goals is that we spun up the release team subproject officially. The release team has always existed, but did not have a set of consistent subproject owners.
D
What we did this cycle was move the set of N-3 (I think, or N-4) release team leads in as subproject owners, so there's a consistent body of expertise across the release team. Since we change ownership essentially every release cycle, we thought it was important to have a consistent body of people who knew about the release cycle. So the subproject owners for the release team subproject are former release team leads.
D
So Josh is one of them, and Josh became emeritus advisor for the last few cycles. What we'd like to see is a rotation of those previous release team leads coming back to say, hey, we're going to check it out one more time, see how things are going, and help people along in the process. So that's kind of the goal for that.
F
That's me. Two of the things we were able to accomplish: having somebody separate who was in charge of the shadows. Prior to adding the position, the release lead was also in charge of all of the shadow stuff, except that, de facto, for a couple of releases Stephen Augustus actually did it. So having an official position that was not the release lead, in charge of shadows and succession and that sort of thing, allowed us to do two things that we had wanted to do for a while but had not been able to accomplish.
F
One was to select the shadows earlier; that is, to have all the shadows selected by the time we were about ten days into the release cycle. That meant we didn't have the sort of three-week period where we were theoretically in a new release cycle but the team wasn't staffed up. We're going to continue that for 1.16: the shadow application is already open, will be open through tomorrow, and then we'll select, I think, next week.
C
Actually, timely selection of shadows was something else that came up as a result of the 1.14 retro, so I wanted to thank Josh for following up on that. To clarify some things: part of the awkwardness in shadow selection for the previous release cycle was that it started right after the holidays and KubeCon and a whole bunch of other stuff. So it's something we should be mindful of as we consider the rotation from Q4 to Q1.
C
Well, I was coming in as the release lead for 1.14, and it's really difficult: shadow selection sort of starts about three weeks prior to the release going out the door, which, as anybody involved in the release will tell you, is a very attention-grabbing time, and the release lead can't necessarily focus on a whole bunch of things. So having somebody to explicitly delegate the selection process to has been hugely helpful.
D
Just mentioning that, while it's good to have that one person to build up the shadow team: any 1.16 release team members coming into this, remember that, as a lead, part of your job is to help mentor your shadows. So you should be doing that actively throughout the cycle.
G
I would just like to point out that we made a plan to spin up a separate release-notes website for the changelog, we implemented it, and it went live without a hitch, which was kind of awesome. Much props to Sasha and Lindsay for kicking it off, but especially Sasha; he did so much work. Yeah, that's it. All right, awesome.
H
Right, yeah, I just wanted to note that this cycle, building on top of prior cycles, documentation update leads were generally able to know what they were doing at any point in time, and as occasional questions came up we could say "I'm pretty sure that's in the handbook", and generally it was. Where there was some bit of clarification needed, the lead proactively, on the spot, made a PR and made the doc clearer.
D
So again: leads, shadows, everyone coming into the 1.16 cycle, remember that the role handbooks are living documents. Everything that you're reading in a role handbook, question it; feel free to question it, feel free to make changes to it. If the process doesn't work, if it feels awkward, if it's something that can be automated, write it down: make an issue, make an action plan to fix it.
I
A couple of items that came out of that in terms of feedback. One was that the enhancements freeze deadline was really hard on KEP reviewers, who felt they were being overwhelmed with the amount of KEP review needed. A second item that came out of that thread was that there's no easy location to find the release calendar. I think the second one might be actioned on already for 1.16, since I think there's an outstanding issue about putting it on the main SIG Release calendar versus its own separate one.
D
So I sent an email to k-dev, as well as to everyone who was on the "deadlines are horrible" email, that collates all of the action items for that; I think the email subject is "Some thoughts from SIG PM", on code freeze. I can find the thread, but essentially, yes, the deadline, the enhancement freeze thing, feeds into the release calendar. I recently created a global Kubernetes release calendar that has been noted in the release lead handbook. So now, instead of creating their own calendar, release leads should be reaching out to the SIG Release chairs to get edit access to that calendar and add the items from there. And feel free to put a note of what release an item is related to, like "1.16 code freeze". There's also a note to make sure that the events added to the calendar have email reminders set for 7 days in advance of the event.

That was the suggestion from the "deadlines are horrible" thread, so that's been done. I'll more formally announce that, I guess, outside of the release meeting. As a follow-up to that "some thoughts from SIG PM" email, there are a few more action items within that email that I want to wrap up before sending out an update.
I
So the next item was also added by me, on behalf of lavalamp, which was that the code freeze date was one week after KubeCon EU. This put pressure on SIG leads and PR reviewers; there was a lot of commentary in Slack around the code freeze date coming right after KubeCon EU being hard. Yeah, I don't know.
D
So, again within that "thoughts from SIG PM" email is the fact that the KubeCons were stacked together. Last year we had Shanghai and Seattle stacked, and this year we've had Barcelona and Shanghai stacked. I don't know what the right venue is to propose not having KubeCons stacked like that, because I think this will happen at least one release cycle a year.
C
I will also say, and I'm not going to get into this discussion now, but I think this adds onto the pile of: is quarterly too fast a release cadence? Would we be more capable of absorbing the impact of conferences, which are about more things than just Kubernetes, if we had a larger time period to plan?
E
So, as you may have known, since version 1.14 shadows weren't given permissions to actually do the jobs that the branch manager would do. To compensate for that, the branch manager would have a Zoom meeting, share their screen, and kind of walk through the process of doing a release cut. As a shadow back then in 1.14, it feels really disconnected; you can't really...
E
...just go "oh, that's how you do it", and then you kind of don't know what questions to ask, because it's like a walkthrough. So we're hoping that in 1.16 the shadows will get permissions so that they could maybe do a release cut, for maybe an alpha or a beta cut, and then they feel more involved. Because right now it's just more like conveying information to shadows, and they don't feel as involved as they would like to.
D
Yeah, so an update to the group: the branch management role will be moving out of the release team for 1.16. Branch management will be part of Release Managers. Release Managers will be composed of the current patch release managers, the branch managers, and associates (associates will be what we consider the shadows today), as well as the Kubernetes build admins. The build admins are the people who have access to actually push the button and package Kubernetes, right.
C
Just to tack on to that: the Product Security Committee has something approximating "we value people who show up and do work for three to six months before we consider giving them the full keys to the car", which is not unlike the SIG Release shadow process, where you shadow for a release cycle before we give you the keys to the car. I completely understand Cheryl's position, and I'm also completely sympathetic with making very sure we trust who we give the keys to the car. So I think that's a step in the right direction.
H
So we've ended up automating away the test infrastructure role on the release team, but one of the gaps that we still have, and that came up a couple of times during this release, was when the test infrastructure fell over. On the release team side, we were looking at the release-blocking testgrid, for example, or the release-1.15 testgrid data, and we were trying to understand what was going on: was there a problem in Kubernetes, or was it the underlying test infrastructure? And there were a couple.
H
There was at least one instance where something started seeming wobbly on a Tuesday, and by Friday we understood that there was a problem (there may have been multiple things conflated there), but in the meantime the release team wasn't sure what was going on. I could see a couple of paths forward on this one. One is the release lead understanding that it's part of their job to seek that out proactively on behalf of the team.
H
This could also be the CI Signal lead doing that much more proactively, or SIG Testing, the test-infra folks, knowing that the release team is going to wonder about these types of things. So basically we just need communication. There used to be somebody who could have done that, even though it wasn't necessarily formally part of their role; now that that potentially most logical place is not there, we need to make sure that we continue to communicate these things.
C
As a chair of SIG Testing, and as an organizer of the K8s Infra working group, I am very interested in moving all of the test infrastructure to a place where the community can more proactively take on a supporting role for this stuff, to collectively raise awareness of what's going on. So I think that's one way that we could get some help here. Another way (it's a rabbit hole that myself and Jase have gone down in the past) is former release...
C
Good question. Like, maybe; but it's unclear to me whether part of the confusion over whether it's the tests or it's something else comes from the fact that we don't have clear enough test dashboards, or whether we're not getting a strong enough signal, or whether we don't have clear dashboards for the health of the upstream infrastructure. Okay.
D
So one thing to note was that the test-infra on-call person was kind of hard to find; not the person themselves, but the information regarding the person. Like, I know that #testing-ops is the place to go, and that there's also go.k8s.io/oncall or something like that, which will lead you to the one member. Do we have, or is there a plan to have, a mailing list for that?
C
A great idea as we look to broaden support, as we look to staff an on-call team from community members, but I think in the interim it's probably most effective if we reduce the number of communication channels that a human being has to be responsive to, which is kind of why using the test-infra on-call alias is the recommended path. And if that's not documented in the appropriate places, we should work on that. The page mentioned is a thing that has existed for a long time.
B
This is just a suggestion, and I know Slack isn't always the best medium for it, but one thing that we do is, for on-calls, we put who is primary in the channel topic. We pull it from whatever the source of truth is, whether it be a link or something, and that way you can go to, what is it, the #testing-ops Slack channel and see it from there.
C
That's in the #testing-ops topic as well, so, yeah. Another option could be opening issues if we know there are known problems, so that people can link their PRs against issues to say "I'm having a problem because of this". Again, I feel like this all comes back to: if somebody wants to own and help implement the policy for this, I welcome that, I really do. All suggestions are welcome, but what needs to happen is the work to make them happen.
B
Okay, let's move on, just because we have about 20 minutes left. So next up was...
I
They found out that these actually didn't need to block the 1.15 release, and we were able to ship without waiting for a fix for this, but it did cause us to delay the release by about two days. So I think it's mostly just calling out that had we had those tests on some blocking dashboard, on master-blocking or one of the main blocking testgrids that we look at for SIG Release regularly, we would have seen those failures earlier, since they'd been starting in late May, I believe.
D
So one of the things that was mentioned was that the tests cost a lot, and I totally understand that. I think we also have the ability to run periodic tests and to figure out what the schedule for periodic tests should be. If we're saying that tests cost a lot, I'd also like to think about the time that it takes to wrangle the people on the release team who are responsible for cutting the release, time that's shifted out of their day. What does that cost?
C
So I have a couple of comments. First off, the reason the tests weren't on the release-blocking dashboard is that they didn't meet criteria that were put together back in, I believe, the 1.13 or 1.12 time frame about what qualities a job should have to be considered release-blocking. Some of these include the amount of time it takes for the job to run and the frequency with which it is run.
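As a rough illustration of the kind of criteria check being described (the threshold values below are hypothetical placeholders, not the actual SIG Release numbers, which live in the sig-release documentation):

```python
from dataclasses import dataclass

@dataclass
class JobStats:
    """Summary stats for a CI job, as might be scraped from Testgrid."""
    runtime_minutes: float      # how long one run of the job takes
    runs_per_day: float         # how frequently the job runs
    hours_red_to_green: float   # typical time to recover after a failure
    has_alert_email: bool       # someone is actually notified on failure

# Hypothetical thresholds; the real criteria are defined by SIG Release.
MAX_RUNTIME_MIN = 120
MIN_RUNS_PER_DAY = 6
MAX_RED_TO_GREEN_HOURS = 24

def qualifies_as_release_blocking(job: JobStats) -> bool:
    """Return True if the job meets all (illustrative) blocking criteria."""
    return bool(
        job.runtime_minutes <= MAX_RUNTIME_MIN
        and job.runs_per_day >= MIN_RUNS_PER_DAY
        and job.hours_red_to_green <= MAX_RED_TO_GREEN_HOURS
        and job.has_alert_email
    )
```

A job that runs quickly and often, recovers promptly, and alerts an owner would qualify; a slow or ownerless job would not.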
C
I will take some mea culpa for not actively engaging to continue shuffling test jobs around, but like I said, Maria has certainly taken the lead on that and would welcome help from other people to help groom tests that do or do not meet these criteria. I think that's my main comment here.
B
From the previous meeting, that might be something to put in the parking lot for now, because, yeah, that might be a bit of a rabbit hole.
C
How long does it take the job to go from red to green? And this is all kind of done anecdotally, by humans who, when they experience enough pain, get cranky and go look at these things and decide whether they should live on. Although, well, there has been continued implementation of a robot that does these things. Yep.
F
I've been actively involved in the various KubeCons, and I had even less time than expected to check on shadow status in the middle four weeks of the release, which meant that I missed some things in terms of people kind of falling off of active participation. The answer is probably going to be a combination of things: I expect to be more available next time, but also maybe we should look at, if this is going to be a permanent role, having an emeritus advisor shadow as well, so that we can alternate coverage.
D
So the person who submitted this, I think, is Andre Martine, and the people involved are Stefan Schimanski and Nikhita. Got it.
B
Ok, ok, I'll just read it off then. There is a single RC in the kubernetes/kubernetes repo, but zero RCs in all other repos. I know that they were fixing the publishing so that all other repos would have this RC tag. RCs on all repos are important for us, since once they are created is when we bump our Go libraries and perform some tests in our CI. The fact that all of the repos were not published with the RC tag should have been a release blocker, IMO.
C
It's a concern that we were unable to cut release-candidate tags on all of the repos that live in staging, because those are actually starting to be consumed by pieces of this project that are also trying to go out the door. So this is similar to how kubeadm gets really cranky when release-candidate Debian packages aren't cut, right. I do feel like we should have a discussion about this being a release blocker, in terms of the technical implementation.
C
I will again welcome anybody who wants to show up to the K8s Infra working group, where Nikhita and Stefan are working to get the publishing bot up and running over there, and we'd welcome contributors who want to help build and maintain the bot, to make sure that it does all these great, wonderful things. I can make a tracking issue and action items.
B
Great, moving along.
G
So we ran into something interesting literally this morning. We noticed that some of the themes on the blog had no release notes related to them, and it turns out that there were not actually any release notes in the PRs, but the functional work was done, so it got a theme. It might have been nice, or we might have been able to catch this, if we had gotten the themes earlier or worked with comms a little bit more, but that's ultimately the problem.
M
So, kind of off that same topic, and also related to the KEPs: for 1.15 we had a meeting about release notes with the SIG leads and with Docs, and then we were really able to understand the themes better. The unfortunate thing is that we met a little bit later on in the cycle.
M
So, really just calling out that it would be a lot more helpful to have that earlier on in the cycle: talk with the SIG leads and understand what features and themes, kind of what are the important things that we are going to focus on within this release, and then that will lead the way and help our progress throughout the rest of the release cycle. So definitely plus one on that. Okay.
C
A quick question in that regard. I feel like what I perceived happened in, certainly, the 1.13 and 1.14 release cycles was that the leads and the enhancements person sort of went out and talked to a bunch of different SIGs in person at their meetings, to remind them that KEPs were coming and to get their features put together, but also to get an impression of what's really important to them, what matters. Did that happen this time around?
I
So we didn't go to every single SIG meeting, but we did reach out to all the SIGs who had enhancements that were slated to land in 1.15 in some way, whether that was over Slack or another medium, so we did reach out to all of them to get their feedback. I think, in part because of KubeCon, there was some latency getting all of that assembled, since when we started doing it they were all very much in KubeCon mode.
D
I mean, this goes into SIGs. So that email thread that I linked a little higher up suggests SIGs doing planning sessions right around code freeze for every cycle, getting an idea of what their themes are going to be before we start the cycle. I mean, the fact of the matter is, the enhancements that exist right now are what's there after the enhancements team goes in and kind of checks this stuff out.
C
I agree with what Stephen was saying. I think that runs into the sort of split-brain problem that the release lead runs into, where you have the release lead focused on getting things out the door, just like the SIG is focused on getting things out the door, but you also need somebody else focused on the next cycle. Something that we tried in the past, and the entire reason the comms role exists, was to try and, by the time feature...
J
So I actually tried to cheat. I told Taylor, I was like, "hey man, I think I've written the blog posts already, we've got plenty of time", and I ended up not using 90% of it. So I was like, don't waste your time, because we could have just prepared better to sprint at the end. I don't know, other than chasing people earlier and trying to be more aggressive chasing people down, which was tough.
H
I mentioned basically that the calendar, I think, hit slightly weird on this release cycle, because we are trying to treat this as a continuum. With the shadows, we tried to call out, with Claire, to collect themes around enhancements and double-check on themes ahead of and after code freeze as well. So there were check-ins with the SIGs on the key things roughly every three or four weeks, and more often than not a lot of the responses were "KubeCon" or "holiday".
I
This one will be real short and sweet, because we've basically already started doing it, but the test-infra role no longer feels like it's super needed now that we've added automation, and all the remaining tasks are going to be absorbed by the branch manager role and the test-infra on-call. So we no longer need a test-infra lead on the release team.
L
This next one's pretty easy, but basically there were just seemingly unnecessary binary argument changes. So, as a distribution owner, not as a release team member: basically a whole bunch of crap broke. You know, hyperkube removed the word "kube" from all their images, and --allow-privileged got set to true and the flag got removed in the kubelet.
L
And there's no mention of that in the release notes; if somebody could find that, I'd be happy to see it. But just as a distribution owner, with these kinds of things, I don't know why those decisions were made. Hopefully somebody had a good reason, but when you're maintaining a distribution, it's kind of like: why the hell? Why is that important?
C
There's a question of whether sufficient notice was given at the time that we decided it was deprecated, which gives you lead time to deal with it, and then we should have called this out very specifically within a deprecation section of the release notes. So I would be interested to understand where the ball was dropped along the way, but that's a very valid concern, yeah.
N
Yeah, I think in general it'd be good to just have a good flag section right up there at the top, because there are some other cases where features were moved from alpha to beta and turned on by default. One of them was around, I believe, the kubelet device manager's resource accounting, which broke Windows; we were able to catch that in testing and respond, so it wasn't a problem.