From YouTube: Kubernetes 1.13 Release Team Meeting 2018-10-15
Description
1.13 notes: http://bit.ly/k8s113-minutes
1.12 retro: https://docs.google.com/document/d/1OgylAYqU0YoJz-PTd8uzyHtMcxYSewSq06AGeh1F-A8
A
I guess, I don't know, we're going to follow a code of conduct since this is being publicly recorded, so let's all try to stay on good behavior, myself included. I'm also going to try and time-box our 1.13 update today, since we'd like to run the rest of the 1.12 retrospective in the second half of this meeting.
A
So, a couple of announcements I just wanted to put out there. We're working on alpha.1; this morning we seem to be having some problems actually cutting the build, possibly due to the build taking longer or working differently because of the addition of an image for running conformance tests. So we're working on that. We also added two additional builds to the schedule: alpha.2 is going to be built on October 23rd and RC2 on November 30th.
A
This is with the intent of exercising the release tooling roughly once every two weeks throughout the release cycle, to make sure that everything is working smoothly. We have also heard your feedback that having the release team meeting at 11:00 a.m. Pacific does not work for some folks, so we will be moving the meeting back to 10:00 a.m. Pacific starting next week. And then one announcement from myself: I will probably sync up with Kendrick offline tomorrow or Wednesday.
A
So with that, my plan is to just run through the agenda as written and ask each of the different roles for their updates, and I'm going to try and cap us off if we run past the half-hour mark, but I think this is going to be relatively short. So with that I will hand it over to you, Kendrick Coleman, our enhancements lead.
C
Thank you very much. I like the idea of syncing up and making sure that people have an understanding of what's going to make it and what's not going to make it, because I do have a feeling that there are a few in there that are probably stretches, and that's the best way to put it, only because a lot of people take the default way of doing this: if it doesn't make it for this milestone, they just automatically punt it to the next one, without really thinking about whether it's going to make it or not.
A
That's fair. The goal here is just a quick sanity check prior to entering feature freeze, and then probably another one prior to entering code slush. If they tried for the stretch and it's clear they're not going to make it, maybe we should back off down the line anyway.
C
Absolutely, keep going. So a few things have changed; there were some ups and some downs since last week. Three more issues have been added to the spreadsheet, so now we're at a total of 41 being tracked. I think a few of the extra ones were in regard to storage; they seem to have quite a bit in there that they're trying to push for 1.13 as of this morning.
D
Yeah, so Josh sent out a CI signal report last week. Since then I think one issue has been closed, one new thing showed up with failing tests, and there's some flakiness in some tests that we're working through; most of them have people looking into them. In the CI signal report some of the links are broken because some of the test suites changed names; that will be updated in a new report that will be sent out tomorrow.
A
Okay, thanks for the update, Morten. I guess I have a couple of additional things I want to share in the context of release-blocking jobs. Real quick, we'll just take two or three minutes here so I can show you what I'm working on. Oh, it's the wrong screen. It's always the wrong screen. I'll get it right on the first share one of these times. This is the perils of a multi-monitor setup. Okay.
A
So we're looking at the meeting notes on the left. If I click through here to the sig-release repo, issue number 24 is to define criteria for a job to be release-blocking. There's a Google Doc in here, and I would like to turn this into refining the criteria that we listed in the sig-release repo, which I pulled out of this Google Doc. The TL;DR is that the ideal candidate finishes a run in less than an hour and runs at least every two hours.
A
This is largely because, towards the end of the release cycle, we like to wait for jobs to pass three times in a row at the same commit. So if a job takes about an hour to run, that means best case we wait three hours; if it's scheduled every two hours, in the worst case we wait six hours. That's generally enough time for us to make a decision within a working day. If jobs take longer than that, it starts to become a real grind-to-a-halt sort of situation.
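
A minimal sketch in Python of that "three passes in a row at the same commit" rule, assuming a simplified newest-first list of run records; the field names are illustrative, not the real test-infra schema:

    # Hypothetical run records, newest first; real data would come from the
    # test results store that backs the dashboards.
    def is_release_green(runs, needed=3):
        """True if the newest `needed` runs all passed at a single commit."""
        streak, commit = 0, None
        for run in runs:
            if not run["passed"]:
                return False              # hit a failure before enough passes
            if commit is None:
                commit = run["commit"]
            elif run["commit"] != commit:
                return False              # passes span commits; keep waiting
            streak += 1
            if streak >= needed:
                return True
        return False

    runs = [{"passed": True, "commit": "abc123"}] * 3
    print(is_release_green(runs))         # True: three green runs at abc123

With hourly runs this check can first flip to True after about three hours, which is where the best-case and worst-case wait times above come from.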
A
So then, next, I wanted to try and take these criteria and turn them into some kind of query that I could run against our BigQuery data set, which is automatically populated by all the job runs. That's the spreadsheet on the left here. I've been poking around with different thresholds for the maximum run time over the past week, the same thing for how frequently the job runs, and the same thing for how frequently the job passes, regardless of commit. So I'm missing a couple of criteria, like whether or not the job passes three times in a row at the same commit and things like that, but you can see here that roughly 15 of the jobs currently in the release-master-blocking dashboard fit the criteria: they finish a run within three hours, they are scheduled at least every three hours, and they have a pass percentage of about 90 percent.
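
A rough sketch in Python of what such a query could look like, assuming a BigQuery table of job runs; the table name and column names here are assumptions for illustration and may not match the real schema:

    # Hypothetical screen of jobs against the release-blocking criteria:
    # run time, scheduling frequency, and pass rate over the past week.
    from google.cloud import bigquery

    QUERY = """
    SELECT
      job,
      AVG(TIMESTAMP_DIFF(finished, started, MINUTE)) AS avg_minutes,
      COUNT(*) AS runs_last_week,
      AVG(IF(result = 'SUCCESS', 1, 0)) AS pass_rate
    FROM `k8s-gubernator.build.all`    -- assumed table populated by job runs
    WHERE started > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY job
    HAVING avg_minutes <= 180          -- finishes a run within three hours
       AND runs_last_week >= 7 * 8     -- scheduled at least every three hours
       AND pass_rate >= 0.9            -- pass percentage of about 90 percent
    ORDER BY pass_rate DESC
    """

    for row in bigquery.Client().query(QUERY).result():
        print(row.job, round(row.avg_minutes), f"{row.pass_rate:.0%}")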
A
We have five jobs there. So I'm just messing with this all locally until I can get a job that actually generates something and publishes the recorded criteria. I expect to have that happen later this week, and I'll update the issue as we proceed. Oh, and the reason all the dashboards got renamed is a similar effort to try and make sure that all the dashboards have the same jobs and the same naming scheme, so it's really easy to identify them.
E
Some tickets are updated and on track, but most of them don't have any update since they were put on the milestone for 1.13, and I wonder whether a new tag would help there, one that signifies whether an issue is being actively worked on, to avoid pinging everyone all the time to keep track.
F
It's complicated. People seem to respond best to the human touch, so yeah, I don't know, we've gone back and forth. I don't want to say this is impossible, but in past experience the tag itself hasn't been sufficient, partly because there's so much noise in GitHub that people just don't see those, regardless of whether it's a tag or not. The best action we can take is when one of us reaches out to an individual and says: hey, we need a little bit of forward progress here.
A
Gwen popped her video on, but just briefly before she does, the experience of a prior issue/bug triage person: you know how the bot goes through and adds lifecycle/stale, lifecycle/rotten, whatever; there's also a lifecycle/active, and lifecycle is one of the label sets that anybody can use. So if you know you're working on an issue, you can tag it as lifecycle/active. We got rid of all the status/in-progress and status/approved-for-milestone type labels, because nobody seemed to know how those worked, and they were also restricted to a smaller group of people. I would also say that, historically, right now it kind of doesn't matter too much, because we only really start paying super close attention as we get closer to code freeze. We don't historically have the best track record of putting everything into the milestone right now and burning the PRs and issues down from now; we kind of only start to focus our attention on that later.
A
Comments?

B
Hey, yeah, I'm so sorry, I think I missed a little bit of what Nico was saying, but it sounded like it was relevant in terms of getting people to respond. My experience is what you say, Aaron: when I go and actually just stop someone in channel, mostly on Slack, I usually get a pretty quick response, including from people who live and work in a different time zone from myself.
B
The one thing that I want to happen with labels is that I want issues and PRs to basically be treated the same, in terms of labeling and in terms of requirements; I don't know if that's the case right now. The other thing that I think Nico is getting at is that it's super helpful to get on this kind of thing early, and I know that I have been confused by not everything being in the milestone.
B
Like, things keep getting merged into kubernetes/kubernetes all the time that are seemingly completely unrelated to any kind of project. So I don't know that we have a really good way to differentiate between: is this specifically a next-release-milestone thing that we're talking about, or is this just a random "oh, let's merge something" that seems weird, and I'm not sure what's going on there.
F
Two comments I would throw in there. On the stuff just merging feeling weird: that is how this community operates for the majority of the cycle; it's kind of a free-for-all, rapid delivery of stuff, and the only way I see that really changing is if we go to a shorter release cycle where rigor is demanded on an ongoing basis, which just isn't the case with a quarterly release. But also, maybe related to that, like you were saying, it's important to try to get the mindset early on that it's not just "yeah, well, we'll be rigorous later", because that doesn't work; you really have to build that culture. So I guess for Nico: if you're going out and reaching out to folks one-on-one on Slack to say, hey, looking for some status here, you'll kind of start to sound like a broken record asking that over and over again; Gwen and I went through the same thing. But one little micro-messaging thing that I try to do...
A
Just on that point, I dropped a link in the chat about saved replies in GitHub, which can help speed up, or at least automate, that process a bit on your end, because you're going to be saying the same thing a bunch of times to the same people; that exists in GitHub to help you make that process a little bit easier. So yeah, there has been an issue to enforce milestones all the time. I think this came up as a result of the conversation from the cherry-pick PRs for k/community.
A
If we're just simply adding an extra label or comment and enforcing that mindset all the time, I think the mindset will follow, and that could be useful. I think the issue is that right now we kind of want the most friction-free process; we want the least amount of friction possible right now.
A
The /milestone command is restricted to a specific group of people; it's not something that anybody can do. So it's either that we need to change the restrictions on that command temporarily, and then start to restrict it again as we get further into the process and would like the release team to exercise more control, or we think about building automation that would automatically add the milestone at certain times, and that's something we'll have to scope out.
A
So I think there is a fuzzy union between the people who have access to apply a milestone and the ones that would be approving the PRs anyway. So if it's a matter of them adding an extra line to their approval that adds the milestone, I'm not sure that I see much friction in that. But...
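
A rough sketch in Python of the kind of automation being floated here, using PyGithub with an assumed bot token; this only illustrates the idea, it is not an existing Prow plugin, and the repo and milestone names are just examples:

    # Hypothetical: apply the current release milestone to recently merged
    # PRs that don't have one yet.
    from github import Github

    gh = Github("<bot-token>")                  # assumed credentials
    repo = gh.get_repo("kubernetes/kubernetes")
    milestone = next(m for m in repo.get_milestones(state="open")
                     if m.title == "v1.13")

    recent = repo.get_pulls(state="closed", sort="updated", direction="desc")
    for pr in recent[:50]:                      # only the most recent PRs
        if pr.merged and pr.milestone is None:
            # PRs are issues under the hood; the milestone is set there.
            pr.as_issue().edit(milestone=milestone)
            print(f"added {milestone.title} to #{pr.number}")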
K
The first thing I just want to surface for everybody is that k/website has a slightly different branch name for the upcoming 1.13 release this cycle. It used to be that you would make a PR against the release branch for a certain release, so last release cycle you made a PR against release-1.12. Now, for upcoming releases, the branch name is going to change, so for this release docs PRs should be made against the dev-1.13 branch in k/website.
K
All the information is on the PR template for k/website, so hopefully there shouldn't be that much friction; we just wanted to surface that for all of you in case something like that comes up. Apart from that, we have cleaned up the docs information on the enhancements spreadsheet, and it's now all up to date, and we'll be reaching out to enhancement owners this week to get the work-in-progress PRs opened against k/website.
A
I'm totally fine if somebody else just says "stop talking, you've been rambling for too long". All right, you do that to me, I'll do it to you, and we will all get by as a team. Okay, that's the 1.13 notes, so I'm pasting the link; we're switching off of the 1.13 release now. Thanks, everybody.
A
If you want to stick around for the rest of the retrospective on 1.12, we've reached the part where we talk about what we're going to do differently this release cycle and come up with some action items, so please stay around. I have no idea where we left off in the retrospective last week, but I see there is actually a "resume here" marker at the CI signal meeting line, so I pasted the link in the notes. Do people want me to share my screen with the doc on it, so we all know what we're talking about?
F
To give the background: the general issue is that right now, when we're going through test results, a large portion of the test cases are running in a way that's coupled somehow or another with GCE or GKE, and wobbliness or flakiness in the underlying platform leads to test flakiness. We've had trouble, or "trouble" almost overstates it, but we haven't had the best or most transparent or fluent communication with the Googlers in charge of the platform. We don't always know who we should ping.
F
We don't even know what type of information they would prefer to get from us. So maybe it's something for a template that would help them understand what problems we're seeing, to more quickly get to whether or not we have a GCE issue generically, because we kind of end up asking around on Slack, and then sometimes we just sort of hear back like, oh yeah, so-and-so knows about that, they're working on it. And that leads to the final part, about getting status back.
F
In those cases we just sort of hear, yeah, it should be fixed this week, and we didn't know who to talk to. We didn't know deterministically when it was fixed, or how to check that the fix was in, and it was just sort of waiting and hoping that the things that had been red would go green. That's not issue triage; hope isn't engineering.
B
Okay, well, I want one anyway. Yeah, basically, I think what Aish and I were talking about a little bit was the ability of having a contact person on each side.
A
For the GCE and the GKE jobs: I feel that historically we have not had the best record for the GKE jobs, and I'm going to be pushing really hard that we remove some from release-blocking. I don't think that a hosted offering that is not open source should actually be blocking the release of Kubernetes. A hosted offering like AKS could ask for that today, and I wouldn't want to allow that going forward, and I see no reason that GKE should be doing the same thing.
A
Failing that, maybe we just go ahead and move on to the next issue, about kind/bug. Also, just as an administrative note: I want to leave us at least 10 minutes at the end of this to talk about what we're going to do better, so I will cap off discussion with 10 minutes left for us to start talking about action items. Tim, you're up on kind/bug? Yep.
B
This is literally what I've just been saying several times already, and I apologize for saying it earlier, because I was confused about what we're doing. Yeah, I want pull requests to be labeled as kind/bug if they are a bug fix, and I want that to be part of the culture and part of the expectation, always keeping in mind that cultural change is hard to impose from the top. But basically I want pull requests to be treated the same as issues, and not regarded...
A
That's a good point, because I kind of feel the same way: it's a bug fix and not a bug. So before we do something, can we figure out how we feel about that? Me, I want fewer labels; I want to keep us to the magic number of 7 plus or minus 2 kind labels.
A
So, moving on, I had something in here about Tide not giving us the ability to merge PRs as part of a priority or critical batch. Is this something that we think we could somehow allow, with a different Tide query specifically for critical fixes or something along those lines? Cole has his thinking face on.
G
The main time when this came up was when we opened up the queue. There were a lot of things getting into the queue right after release, and then we had some really critical fixes that we wanted to merge, but we couldn't merge them into the 1.x release branch before they merged in the main branch.
A
So the thinking here is that we were really uncomfortable with deviating from our normal process, because our normal process is that you always merge things into master first and then you cherry-pick them into the release branch afterwards. And yes, this is a special case: merging into master suddenly became super backlogged, but it was really, really critical that we got things into the release branch. I'm just wondering if the workaround that we did could have expressed our intent a lot better.
A
We have a separate set of Tide queries specifically for the release branches. We could put the fix into the release branch first, because that's the branch off of which we're going to be cutting the bits, and that's the branch whose signal we are watching anyway, and then we could have that fix merge into master afterwards as a downstream follow-up.
A
The idea is, we really don't want to impede development of ongoing features; we'd like to keep code freeze as short as we can, and it seems as though there's a long tail of bug fixes at the end of the release cycle, where it trickles down to just one or two really critical things that need to be fixed. But it also takes us a while to figure out if they actually fix the problem at hand. I'm thinking specifically of scalability-related issues here, but there could be other examples, so I think it's just impossible to...
A
Code thaw is the one special case where that kind of hurts us, and the reason it takes so long to come out of code freeze is because our tests are so flaky. So if we made our tests less flaky, we would merge faster and we wouldn't experience this pain so much. But I agree, it's perhaps something I'd like us to look at; maybe it's something we can put an action item on to investigate.
F
So, in things that worked well: having the delay was really useful, because it meant we had this strong separation between master and the release branch, and master had basically nothing coming into it, which meant we had twice the test coverage. That helped make up for the large number of flakes and variability; we could get a little bit better signal. But then that led to a very short period at the end relative to the thaw; the thaw was followed very quickly by when we were intending to release.
F
These tests are really flaky; I think they're not even as good as a coin toss. And then, if they're taking eight hours to run, that goes back to the potential problems with the hosting platform: if there's the slightest variability there, some platform wobble causes a weird timeout or something, and that's just much more likely to happen on a long test. But, Aaron...
F
So I think this goes back maybe to the question from earlier, before we started the retro, of how we understand that something is in progress. On other projects that I've worked on, where we had something like Testgrid, if somebody had triaged a failure they could click the cell and stick a little text annotation on it, and...
A
Mm-hmm. I can speak to that, sorry for cutting you off. I can speak to the fact that we're still working on trying to open-source Testgrid, and internally Testgrid lets people tag individual test failures and tie them back to issues. The reason we can't yet do that in open-source land is because there's no concept of authentication or authorization baked in just yet.
A
How did we do this last time? Roughly speaking, how good or healthy were things looking at this point in the release cycle in the past, versus right now? One of the reasons I have been part of so many release cycles is because, I feel like, I'm a human, I have a brain, and I have a memory. It's not super reliable, though, but I at least kind of remember.
A
How long does it usually take to cut a release, and when do we usually start? When have we started cutting releases in the past? Does anybody know? I feel like having some of this information noted down and compiled would be incredibly useful, and it really just takes a human being going back, reading a bunch of documents, and trying to collate these things together.
A
It could go in the sub-role docs, possibly. I think it's a great thing for somebody; not that I am volunteering myself, because I'm maxed out here, but I think a release lead shadow could do this really well, or I think it would be a really great preparatory thing to get some context on how things have historically been done, to prepare you to act as a release lead. But I think more or less anybody could try and go through and summarize this, even just to sort of assess: what's the state of the release and, generally speaking, what is the release process? Yeah, I think you probably could use a dedicated, like, secretary role for that. I tried to be the change I want to see in 1.6, and I brought it up every release afterwards. So if you go into the release-1.6 directory in sig-release, you can see where we tried to do that, for both test flakes and for what the go/no-go decision was for that particular day, and I would love to see that carried forward.
A
But I haven't personally had the time to do it for every release where I haven't been the lead or taken an explicit role doing that. So there is a template; we have prior art of trying to do literally that. So if we could put some wood behind those arrows, that would be great. Yep.
A
So we have about five minutes left here. I think I addressed Aish's point about whether we can start the release before the afternoon Pacific time. Yes; in my opinion the release really should be started immediately after a meeting at a time kind of like this, anywhere from 9 a.m. to 11 a.m. Pacific, so that ideally it is done and ready to go out the door by 4 to 5 p.m. Pacific. Maybe doing it on a Friday is also not the best idea. Yes.
A
Friday, and that has been the historical target: to have the release finished by end of day on the release day. Aish also has an item here about whether we could potentially stage and mock the release docs for a few of the interim releases, instead of only waiting for the final release push. I have no idea if the docs team's new release branch naming strategy could help support us here or not. Yeah?
K
So there is a Netlify-like preview of the release branch, which is constantly updated whenever anything lands in that branch. I think the main issue here is probably the generated docs. I know there were some issues during the 1.12 release around generating the API and CLI docs, so I think it would be a good idea to generate those a couple of times throughout the cycle and just test it in general.
A
I don't either. Okay, thank you; I think we're good there. Maybe somebody has some context; we can fill that in in follow-up meetings. Thank you for saying we can skip the next item, Hannes: we are being a little more prudent and mindful about how packages are cut, and sharing that knowledge. So, you have another item about how it would be great to gather newcomers and show them the different tools we use. Do you want to speak to that? Yeah.
I
Sure. I was new in 1.12, and I wondered what was going on in the other teams and which tools and which processes they were using, and I had the idea that maybe it's worth gathering all the newbies and having every team show off how they do the work and which tools they use. Of course, this is some sort of commitment by the teams, but I think it might be worth it if every team gets allotted five or ten minutes to show off how they do their work.
A
Yes, that's one of the roles or purposes of the shadowing. I can speak for branch management: one of the tasks I did there was to meet with the current branch manager and their shadows, to go over how releasing Kubernetes works and what tools are used for that particular role.
I
I was more wondering about, like, I was a shadow for release branch management, and I would have loved to get a bit more insight into, let's say, CI signal: which tools are they using? Or enhancements tracking, and that stuff. So I think it would be worth showing all the tools which all the teams are using to all the newbies, if that makes sense.
A
So I think it's worth calling out again that everyone should be documenting, documenting, documenting. I think it should be everyone's responsibility to not only document in the sig-release repo, but to also read the documentation across the different roles. I think, overall, the process of, one, mentoring a shadow, and two, shadowing yourself, is intensive. So I don't know how to fix that specifically, but I think having everyone involved and looking at the tools, or where we point people for the tools that we use, is important.
A
I agree. I guess I've noticed we've started to have individual roles meet with their shadows, which is great. I do have a concern that it gets a lot more tribal if we're passing everything down via video conference, and I strongly agree with having more of this documented. So we should have a page that shows a rough summary of what each role is and what their responsibilities are. You asked for the tools directly; something I'm trying to do this quarter is to put together some video walkthroughs of the tools that CI signal uses, at least. Like, I don't know if it's super necessary for you to see all of the lovely lines of text that scroll by in Doug's console window when he cuts a release, but I hear you that you're looking for what the interfaces and touch points are, so you understand what the different roles and responsibilities are and how you might make life easier for the different roles.
A
So getting the team to gel a little bit, I think, is the purpose of starting these burndown meetings a lot earlier in the release cycle: it's for us to get used to working with each other, keeping track of things, and how we interface, because, again, none of this is really strictly necessary until we start getting much closer to code slush. It used to be, historically, that these weekly meetings didn't happen until we were right at code freeze.
A
I'm seeking to try and improve the demonstration of automation in the Kubernetes community meetings in general, and then seeing if we can take those and turn them into videos that could be clipped out. Or I will be doing dedicated videos for tools like Testgrid and triage, and just bouncing around from a PR to Gubernator to seeing the test history for it, stuff like that. But again, that's all very issue-triage and CI-signal focused.
A
But to me that's the more important step: just figuring out what's going on and how everything is tied together. I don't know so much about the other individual roles. So, with that, we have about eight minutes scheduled left for what we want to do differently this time. I don't even know if we actually followed up on: did we do what we said we'd do last time?
H
I'm sorry for joining in late; thanks, Aaron, for running it. So one thing that we are doing differently is, to continue on that point, involving shadows more actively. I'm in the branch manager role, and a little bit for alpha.2: we have multiple alphas and RCs this time, and at least for alpha.2 either Hannes or Yang volunteered to cut the release and publish it.
H
That should get them more hands-on experience. Also, for CI, I know Josh is working really closely with his shadows, to even delegate some of the failures to them and also involve them in putting the weekly reports together. So at least that's one thing this cycle we plan to do: take the help from the shadows a bit more. And also, as Aaron said, we increased the frequency of the builds and releases themselves, so that we can exercise the release tools more frequently.
A
A note to the shadows: you should feel free to get involved, and you should be getting involved as much as your lead is, if you have the tools and ability to. If you feel like you don't have those tools or ability, let your lead or someone know. The whole point of having the shadow program is to be able to leverage everyone more effectively and kind of spread the load, so make sure that you're being enabled just as much, and not just watching the release cycle go by; make sure you're involved.
A
I mean, one thing I'll throw out there, since we do have such a short time and we didn't really capture action items along the way, is for those of you who are involved in the current release cycle to consider what process changes you would be willing to introduce into the release earlier in the release cycle. Generally speaking, one of the gripes I've had with release teams in the past is that they decided to do everything completely differently right during code freeze, and I think that earlier is a lot better for introducing process changes.
A
It gives you the time to message it out. So if we believe something like constantly requiring a milestone for all pull requests and issues is something we care passionately about, we should draft together a proposal for what we want, why we want it, what the benefits are, and how it's going to be implemented, and then we should do that.
A
What version do we upgrade to? Did this fix get into the release branch before the upgrade tests ran? Because the upgrade tests take forever to run, it'd be really useful to see that at a glance: making sure that we have the exact same set of blocking jobs on all of the release branches, as well as understanding why there's a slight difference between the blocking jobs that are on master versus the release branches.
A
I'm thinking specifically of the scalability jobs and the package publishing job. I think we should remove jobs that have been continuously failing for more than n days; right now I'm saying n is 120. There are some jobs where n is 380. I'm trying to triage those down to specific classes of failures, and I had an action item to allow the triage dashboard to look back further than a week.
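
A minimal sketch in Python of that screening rule, assuming a simple mapping from job name to the time of its last passing run; the job names and data shape here are hypothetical, not the real triage or Testgrid schema:

    # Hypothetical: flag jobs whose most recent success is more than n days
    # old as candidates for removal from the blocking dashboards.
    from datetime import datetime, timedelta

    def continuously_failing(last_success, n_days=120, now=None):
        """last_success: job name -> datetime of last passing run, or None."""
        now = now or datetime.utcnow()
        cutoff = now - timedelta(days=n_days)
        return [job for job, ts in last_success.items()
                if ts is None or ts < cutoff]

    jobs = {
        "ci-example-e2e": datetime(2018, 9, 1),    # passed six weeks ago
        "ci-example-scale": None,                  # no recorded pass
    }
    print(continuously_failing(jobs, now=datetime(2018, 10, 15)))
    # ['ci-example-scale']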