From YouTube: Kubernetes SIG Release Meeting 20181204
A: All right, I think I will go ahead and get the ball rolling. This is the December 4th, 2018 SIG Release meeting. This is a Kubernetes meeting like all the others: it's being recorded, we'll post it to YouTube for posterity shortly afterwards, and we expect everybody to adhere to our code of conduct during the meeting. So that's it. Welcome to our bi-monthly meeting.
A: From the outside, and they were mostly outside it, it looked quite smooth as well, so kudos, great job. So I guess that brings us to the next thing on the agenda. I just wanted to put a shout out: the 1.13 retrospective is coming up on Thursday in the community meeting. Aaron had actually very wisely mentioned in Slack, I think earlier today, that we should also be scrubbing the 1.12 retro doc, to compare, to see if there are things that we definitely did or didn't do, and to be thinking about rolling things forward, or to understand where we are, because we often write these things down and then move on to the next thing and life happens. So I've got links to both of those there in the doc. Anything else anybody wants to say on the retro or 1.13?
D: I had one other 1.13-related thing; maybe we can discuss this in the retro, so feel free to put me there. But I'm looking through the release notes, and the top portion of them seems... or sorry, not quite the top portion, but the major themes section is broken out by SIG, and some of them seem really, really fluffy.
D: For example, SIG Big Data says that it's been focused on community engagements relating to third-party projects, and there have been no impacts on the 1.13 release. I have no idea why this is in a file called CHANGELOG inside of kubernetes/kubernetes. Another example might be SIG IBM Cloud, where they talk about how they rolled out version 1.12 of Kubernetes in their IKS service. I kind of feel strongly that we shouldn't be talking about commercial offerings in the release notes for a community or an open-source project.
D: I say this, of course, not having checked to see if there's any specific GKE stuff mentioned here, but if there were, I would kindly ask that we remove that as well. So this is kind of a symptom of the way release notes have evolved to be collected over time, where they're just sort of fanned out to SIGs, and SIGs are kind of reporting in. Really, maybe I'm wrong, but I was under the impression that this file called CHANGELOG, for most repositories, historically talks about the code changes in that repository. So even though Kubernetes is a project that spans multiple repos, we're not really packaging up all of those multiple repos into this single project, unless we get too far afield and start talking about distros and packaging stuff up in there. Just really to kind of highlight that: I'm not sure these release notes are an example I would want the community to follow going forward. Yeah.
E: The content was also not useful for individual contributors, and the idea was that SIG leads would be able to produce user-relevant content that would go into the changelog. But certainly, you've definitely highlighted one thing at the steering committee level: we should be much more stringent about removing kind of vendor-specific SIGs. At least that's my opinion, that we shouldn't have them; we should collapse them into kind of a cloud provider kind of thing. I would love to see SIG IBM Cloud and all the SIG <insert provider> groups collapsed into SIG Cloud Provider. That's one part of this problem. The other is, yes, having final review over the major themes section, so they're not just fluffy commercial announcements. We probably need to add an actual task to the enhancements lead's role to do that, or to say that they are responsible for doing that and cutting communications that shouldn't go out, or maybe make it an added task within a communications role on the release.
D: The major themes thing is kind of like this horrible mutant that grew from my attempts to take a massive list of changes and then distill them down to what seemed like anywhere from three to five human-manageable themes that I could then hand off to a comms team to go market and blog about. And then the next person decided they couldn't maintain that editorial voice as a human, and it was easier to shard all of that work out to SIGs.
B: There was also a bit of discussion with SIG AWS; they also wanted their alpha provider features to be included in the release notes, and there was some back and forth. Then we decided to point to their release notes within ours, mainly because there was no process at the moment for out-of-tree providers, cloud providers, in releases, yeah.
A: One of the things that I see within that is this changelog: even though it's in the k/k repo, this one in particular starts to read like a distro, like there's something bigger to it. I can imagine that that is a natural outcome as we split the monolith, but somehow it's still got to be done right, to where it's the open source stuff, and there has to be something there.
A: On steering: I've been kind of bumping into this a bit in the discussions around support and the LTS working group. There are a couple of things between SIG Architecture's charter and the steering committee's. The steering committee's says: decide how and when official releases of Kubernetes artifacts are made and what they include. So the changelog, I would think, falls squarely within that. And then another weird one.
A: Then, since we're kind of on the topic of getting input from SIG Release: the steering committee charter says, declare a release so that the committee can ensure quality, feature, and other requirements are met. That one maybe feels like, at this point, it has actually been delegated to SIG Release; maybe the doc is behind.
D: Reading off the cuff there... no, wow, we also apparently control access to the different repos and orgs. So I think it's trying to say we have the final say. I would imagine the rest of the steering committee would agree with me that we will delegate and defer as much of that judgment to this SIG as is possible, and we would only consider ourselves the escalation point of last resort.
E: Between the CNCF and the rest of the project writ large, nominally the CNCF is the owner of all of this stuff, but the idea is that the steering committee would delegate all of those enumerated responsibilities to one or more special interest groups. But, as Aaron was saying, there needs to be a body of escalation of last resort, and it would be the steering committee.
D: So that would probably tremendously slow down the release process. I think when it comes to clarifying what a release includes, I'd like to understand whether or not the steering committee thinks they need final sign-off on the concept of a distribution, or whatever word you want to use, or if it's something that's more of a SIG Architecture decision in concert with SIG Release.
A: That makes sense, yeah. I think that the "what is Kubernetes" discussion is potentially one that could take a lot of time face to face. I think last year in Austin it was one thing to be up on the stage and sort of float the kernel-versus-distribution idea, let's talk about it, but it's kind of slowed since then, and if it becomes a topic of face-to-face discussion, I could see it being a very robust conversation.
A: Maybe "slowed down" is probably the wrong way to say it, but, like, you had the big huge keynote, big important people, packing the room with however many thousand, and it almost felt like it was about to start something. Maybe, like you say, it's just waiting for us to make it happen, each of us.
A: Well, I think one of the things, like you kind of hinted at: that top-level editorial voice is hard to find authoritatively in an individual, like our release team volunteers. Yeah, they own it, but do they feel comfortable owning it, versus feeling like it could be a steering-level thing?
D: I would love to see some more invested effort from somebody, or a group of people, at SIG Architecture, who probably have a little more purview over how all of this hooks together and what is more or less meaningful from an architectural perspective, to help us. Like I said, I feel like we've maybe done the mapping part great; we haven't done the reducing part terribly well, and they might be a good pool of people to pull from.
A: Makes sense to me. So, the next thing that was on the agenda: the test flakes discussion, and whether in December we do something as a bit of a coordinated push to do some cleaning up in the potentially quiet period. Josh and Aaron, you talked about this; I don't know if there's a specific tracking ticket, or I guess it's gonna be a map-reduce-y sort of...
C: I'd almost say, if we were gonna have an effort called "deflake the tests", this would have been a KEP-sized amount of work. But I mean, frankly, if over the next quarter we got the event messaging anti-pattern erased, and SIG Performance completed deflaking the 5,000-node test, then that would be a huge amount of progress from the perspective of how flaky CI is, and then the remaining sort of big area of flakiness would be the upgrade-y tests.
D: We have the BigQuery metrics dashboard in Velodrome, for example, which highlights the top flakiest jobs, and then the top flakiest issues for those. We also have the triage dashboard, which takes the failure text from each test that fails, across every single job, tries to cluster all of those together, and then sorts it by which failure texts have happened the most across the past two weeks. But I'm not sure we have yet identified the perfect leaderboard or whatever.
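(As a rough illustration of the clustering the triage dashboard is described as doing, here is a minimal sketch in Go: normalize each failure message by collapsing run-specific noise such as numbers and hex IDs, bucket identical normalized texts, and rank the buckets by frequency. The data shapes and normalization rules are illustrative assumptions, not the actual triage implementation.)

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
)

// Failure is one failing test occurrence pulled from a CI job run.
type Failure struct {
	Job, Test, Text string
}

// noise matches run-specific details (hex IDs, numbers) so that
// otherwise-identical failure texts land in the same cluster.
var noise = regexp.MustCompile(`0x[0-9a-f]+|\d+`)

// clusterFailures groups failures by normalized text and returns the
// normalized texts sorted by how often each occurred, most common first.
func clusterFailures(failures []Failure) []string {
	counts := map[string]int{}
	for _, f := range failures {
		counts[noise.ReplaceAllString(f.Text, "#")]++
	}
	clusters := make([]string, 0, len(counts))
	for text := range counts {
		clusters = append(clusters, text)
	}
	sort.Slice(clusters, func(i, j int) bool {
		return counts[clusters[i]] > counts[clusters[j]]
	})
	return clusters
}

func main() {
	failures := []Failure{
		{"ci-kubernetes-e2e", "TestFoo", "timed out after 300s waiting for pod 12345"},
		{"pull-kubernetes-e2e", "TestFoo", "timed out after 600s waiting for pod 99871"},
		{"ci-kubernetes-e2e", "TestBar", "connection refused to 10.0.0.3"},
	}
	// The two timeout messages normalize to the same text and rank first.
	for _, c := range clusterFailures(failures) {
		fmt.Println(c)
	}
}
```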
D: We also have a graph that shows the daily flakiness of each of the presubmit jobs. We don't yet have a graph that does the same thing for the release-blocking jobs, and I personally have envisioned something that looks kind of TestGrid-ish but more like a heat map, so you can see variations in flakiness over time rather than a bunch of lines. I think that would be really good for us as humans, who can do pattern matching visually. So I just think there's a lot to do.
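(A sketch of the per-job, per-day grid such a heat map could render. Treating a day's flakiness as the fraction of failed runs is an assumption here, as is the Run shape; real data would come from the CI results store.)

```go
package main

import "fmt"

// Run is a single CI job execution; Day is a date like "2018-12-04".
type Run struct {
	Job    string
	Day    string
	Passed bool
}

// flakinessGrid returns, per job and per day, the fraction of runs
// that failed: the cell values a flakiness heat map would color.
func flakinessGrid(runs []Run) map[string]map[string]float64 {
	total := map[string]map[string]int{}
	failed := map[string]map[string]int{}
	for _, r := range runs {
		if total[r.Job] == nil {
			total[r.Job] = map[string]int{}
			failed[r.Job] = map[string]int{}
		}
		total[r.Job][r.Day]++
		if !r.Passed {
			failed[r.Job][r.Day]++
		}
	}
	grid := map[string]map[string]float64{}
	for job, days := range total {
		grid[job] = map[string]float64{}
		for day, n := range days {
			grid[job][day] = float64(failed[job][day]) / float64(n)
		}
	}
	return grid
}

func main() {
	runs := []Run{
		{"ci-kubernetes-e2e-gce", "2018-12-03", true},
		{"ci-kubernetes-e2e-gce", "2018-12-03", false},
		{"ci-kubernetes-e2e-gce", "2018-12-04", true},
	}
	fmt.Println(flakinessGrid(runs)) // 0.5 on 12-03, 0 on 12-04
}
```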
D: Sorry, sorry to cut you off; I'm interested in the flakes part specifically for this release, in trying to continue the stake we put in the ground with release-blocking criteria. Flakiness was one of those criteria, and I want us to be able to measure that criteria. Like where I talked about that leaderboard or whatever: we still kind of lack that for the release-blocking jobs, to see that, yes, they run this frequently, cool, that's green; they take this long, cool, that's green; they're this flaky, oh no, that's red.
D: We actually have a job that does that, but the last time I looked at it, the job is timing out. But we do have one, which is shocking... oh yeah, I know. That is something I will try and adjust the timeout on if I can find it. One thing we do want to get data on is whether or not the number of retry attempts is actually hiding flakes from us, and if so, you know, maybe we can have a number of jobs that are intended to fail, just to uncover flaky things. Like the events thing: it doesn't actually really uncover itself until it's run with a whole bunch of other super flaky tests. We also kind of lack the process to prove, if we do tag a test as flaky and then kick it out of the main thing, how do we actually know that we have deflaked that test? Can you show me a run where it runs with the same concurrent tests executing all around it? You can't, because once we mark something as flaky, we only run it with other flaky tests.
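(One way to get the data mentioned above, whether retry attempts are hiding flakes: scan per-attempt results for tests that ultimately passed a run but failed at least one attempt. The Attempt shape is an assumption; real results would come from junit artifacts or a results database.)

```go
package main

import "fmt"

// Attempt is one execution of a test within a job run; CI retries
// a failing test and reports the run green if a later attempt passes.
type Attempt struct {
	Test   string
	Passed bool
}

// hiddenFlakes returns tests that ultimately passed in this run but
// failed at least one attempt: flakes a pass/fail summary would hide.
func hiddenFlakes(attempts []Attempt) []string {
	everFailed := map[string]bool{}
	lastResult := map[string]bool{}
	for _, a := range attempts {
		if !a.Passed {
			everFailed[a.Test] = true
		}
		lastResult[a.Test] = a.Passed // attempts are in execution order
	}
	var flakes []string
	for test, passed := range lastResult {
		if passed && everFailed[test] {
			flakes = append(flakes, test)
		}
	}
	return flakes
}

func main() {
	run := []Attempt{
		{"TestEvents", false},
		{"TestEvents", true}, // passed on retry: a hidden flake
		{"TestStable", true},
	}
	fmt.Println(hiddenFlakes(run)) // [TestEvents]
}
```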
C: I was gonna say, I don't have data to back this up, but my feeling from watching the boards is that we have a confluence of individual tests that are flaky and then jobs that are flaky. If you follow me: that is, jobs that produce conditions that tend to make tests flake in those jobs, tests that don't flake otherwise.
A: I was gonna say earlier, this has kind of come up, and the reason I wanted to put it on the agenda was: okay, it's December, and the last conversation had been, well, maybe in December there's actually a bit of a lull where a few targeted things could be done. Not to solve the whole problem; we're not gonna solve the upgrade-downgrade tests or probably even event passing, but, like, for me...
C: I would agree. My priority list would be (a) doing the dashboard and (b) trying to get enough people sort of marshaled together to make a decision about what the way forward on upgrade-downgrades is. And the alternatives here are trying to fix the existing jobs versus replacing them with new jobs that are sort of created from scratch rather than inherited, because the existing jobs do not have good ownership.
C: But yeah, I mean, that would honestly be the other big thing, because that is like step 1, or step 0, in looking at "hey, let's deflake the upgrade-downgrade tests": first saying, okay, what kind of upgrade tests are we going to have? Because it's not at all clear to me that the tests that we do have are the tests that we want to have.
D: Anyway, I only ever got as far as moving the GKE jobs off, but I think there were a number of other jobs that we kind of agree don't belong on the master-blocking dashboard. The SIG Performance 5000-node scale job: even if it catches legitimate failures, it just doesn't run frequently enough for us to pay as much attention to it. And then maybe some of the serial jobs could also get moved over there.
A: This reminds me of a question I've had on the 5000-node job, since you mentioned the scale testing, Aaron. Is there, like, an architectural- or philosophical-level desire to have those run serially, or is it just resources? If we threw money at the problem, could we get more runs, or is there a desire to not have three three-day tests running in parallel, with one starting and stopping each day?
D: So, on resources: folks should be showing up to the k8s-infra working group meeting. We're trying to move slowly and methodically, migrating over DNS first and then looking at things that will assist the openness of the release process. So I think that's GCR and Google Cloud buckets, or storage buckets, and there are a couple of other utility clusters that we're looking at.
D: What I'm talking about is, in the eventual future where we migrate jobs over, the question is how much money is going to be spent on jobs that are just obvious, common-sense, reference-implementation project things, versus jobs that are testing out cloud-provider-specific things. So, in the world where we're verifying that a five-thousand-node cluster meets its SLIs and SLOs, in the world of scalability, that can often depend more upon how the cluster is set up than the specifics of the environment.
D: So I would imagine Google might pay Google's money to make sure that that is valid in Google's cloud, and Amazon would probably pay Amazon's money to make sure that's valid in Amazon's cloud. And where we're at today, either in terms of money or physical resources, we just don't have the room to do anything more than serial runs of these five-thousand-node cluster environments, yeah. So I had to put together this crazy, like, Monday-Wednesday-Friday, Tuesday-Thursday scheduling for all of them.
D: That will be as soon as I finish drafting the working group charter. Okay, thanks for letting me tease that; we do have regular meetings. The channel in Slack right now is called #k8s-infra, and you can show up there and ask questions and people will point you in the right direction. I plan on getting all the things renamed to wg-k8s-infra once we get the charter ratified, which I hope to have accomplished by the end of the year.
C: So if we can go ahead with Steven's question here, which is generic, yeah, and refers to the requirements and the role handbooks, which may or may not exist: how do we want to handle it for 1.14? Obviously, for 1.15 and going forward, we can make sure that the role handbooks are all caught up with the requirements section; it's just that that's not necessarily gonna get done in the next five or six days.
A: Making sure that leads understand it's a part of their job to reach out to prospective shadows and have the conversation to kind of level set: here's what would be expected; does that sound scary or exciting? Can you commit to it, or is it better to wait till another time, perhaps plan for when you can? That kind of one-on-one between the lead and the prospective shadow, I think, would be really beneficial to be doing now.
D: And I am breaking tradition here. I already have Ben Elder as a release lead shadow; I'm looking to have another, non-Googler, release lead shadow in the interest of diversity, and I've been approached by a couple of people. I've held off on having that conversation with them, because I kind of need to figure out what those responsibilities are that I'm looking for from them. Oh god, I can't use words anymore; anyways, yeah.
D: True. Now, I think, personally, I'm approaching it from the perspective that my main concern with being a release team lead at all is that I tend to get yanked into a bunch of different things. So I want to make sure I've got some easily delegatable stuff that the shadows can pick up in the event that I disappear.
E: What I've envisioned, or at least been playing around with in the back of my head, would mirror how we would want to have it. So I would imagine we have a section of a KEP that is where you have the room, like you have today, for kind of developer-focused documentation that goes into the API design for an enhancement. I would like to add to the KEP tool a command that would create a pull request against, like, an API review or architectural tracking repository.
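(A minimal sketch of the kind of subcommand being described. The arch-tracking repository, its layout, and the stub format are hypothetical placeholders; only the standard git porcelain commands are assumed. It writes a stub pointing at the KEP and pushes a branch from which the pull request can be opened.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// openTrackingPR drops a stub file pointing at a KEP into a local
// clone of a (hypothetical) architecture-tracking repository and
// pushes it on a branch, ready for a pull request to be opened.
func openTrackingPR(repoDir, kepNumber, kepURL string) error {
	branch := "track-kep-" + kepNumber
	stub := filepath.Join(repoDir, "keps", kepNumber+".md")

	if err := os.MkdirAll(filepath.Dir(stub), 0o755); err != nil {
		return err
	}
	body := fmt.Sprintf("# KEP-%s\n\nReview requested for %s\n", kepNumber, kepURL)
	if err := os.WriteFile(stub, []byte(body), 0o644); err != nil {
		return err
	}
	// Standard git porcelain: branch, stage, commit, push.
	for _, args := range [][]string{
		{"checkout", "-b", branch},
		{"add", stub},
		{"commit", "-m", "Track KEP-" + kepNumber + " for API review"},
		{"push", "-u", "origin", branch},
	} {
		cmd := exec.Command("git", args...)
		cmd.Dir = repoDir
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return err
		}
	}
	fmt.Println("open a pull request from branch", branch)
	return nil
}

func main() {
	if err := openTrackingPR("arch-tracking", "1234", "https://git.k8s.io/enhancements/keps/sig-example/1234.md"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```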
E: That would give a pointer to the KEP for people in SIG Architecture to review, and it would also move the enhancements process; I mean, over time we'd move enhancements tracking similarly to a pull-request-based system rather than an issue-based system, so we don't have to, after the end of the release, try and scrape the issues and see what actually went into the release or not.
E: So I'm imagining that the architectural review and the enhancements tracking for a single release would be more or less the same process, just in two different repositories. Okay, that's just kind of a thought, the ideas I've had kicking around in my head: that you would get a sign-off before a KEP goes implementable. Ideally you'd have someone from SIG Architecture give the stamp of approval before you move forward.
B: So, from my experience at least: what should be the set of questions, or the checklist, that should be there? Maybe, what's the conformance coverage; or not coverage, but should the conformance tests pass, really, have separate profiles? And also, if there are any known issues, are those known issues kind of acceptable, and who makes the call on that? There would be some kind of oversight in directing to say: okay, this enhancement, with this set of known issues, is okay to go to beta or GA.
E: Approve it, or something, and we had to... that process, the API review process, kind of died on the vine before we could roll out a new iteration of it, because it was like Phil's project, and then Phil left; then it became Brian's project, and then Brian went on vacation and had a bunch of other stuff to do. Yeah.
D: Yeah, okay, but my personal feeling is we're lacking an assigned person to do oversight on these things, and I think that it is unfair to expect the release lead to have the time, capacity, and technical depth to make a judgment call on each and every single one of these KEPs. I also think it's probably unfair to expect that of the enhancements lead. I mean, Kendrick, how were you kind of getting a feel for whether or not these things were headed in the right direction?
G: The KEPs? Honestly, I wasn't paying too much attention to the KEPs in this whole thing, because what I was doing is I was looking at the tracking issue inside of the enhancements repo, and then I would see the k/k PRs that are associated with it, and that's where I was sitting there making sure that it's actually having some sort of progress during this whole time. If there were, you know, no comments or no tests being run or anything like that, that's when sort of the flags were being raised.
B: Okay, I think, adding to that: mostly the release team, at least with 1.13, we were tracking the progress, but not, until the last minute, seeing if what they were doing was designed already to be ready to graduate. So that checklist was something that the release team was missing, or somebody to give that stamp of approval.
A: Okay, I think maybe we are seeing the signs of that, though. So, going back to, what was it, um, storage snapshots, that was going on, I think, in my cycle, and from the KEP perspective, or from the SIG's perspective, a whole bunch had happened: everybody finally aligned and was cool and happy with stuff. And then, when it hit the architecture level, there was a big "whoa, wait a second", and at that point, at the SIG level, they had decided what they were doing and were rapidly going towards implementation.
A: Now, back over in SIG Release, or on the release team specifically, not SIG Release: on the release team, we were looking, like Kenny said, at the PRs and issues, and from that perspective, as a window into the SIG and what they decided, things looked coherent and on track. But then, project-oversight-wise, there was a disconnect there.
A: So I do think that that needs to happen somehow earlier, and that there need to be some criteria there that then map to us in the release team being able to just kind of say: okay, you've committed all this stuff, but here you're showing, as a reminder, where your tests are, where your docs are, the basics that we asked for, so that it's not a surprise. But I feel like we are sort of the surprising gatekeeper sometimes.
D: Ultimately, what I'm trying to drive at here is, you know: is the next release the one to say you have to have a KEP, that if there's no KEP associated with this thing you're trying to push in, I'm not sure your code lands? I know that's gonna run into the friction of, like, bug fixes and random features that are too small for a KEP; I get that, and it might encourage some people to even slime their way in rather than go through this much larger process.
E: I would say no to your question, Aaron, not for this release, okay, at least until we are in a better place with KEPs themselves. The big idea of KEPs is that they should span one or more releases, and we have not yet provided guidance on how that mapping occurs. I think we should get that guidance done through the 1.14 cycle.
E: I don't want to lose that ability by mandating that enhancements that are tracked must have a KEP. I think there's a lot we can do in terms of making that work, like, you know, realizing the suggestions you were making with respect to finding enhancement owners and making it easier to manage KEPs themselves, and to move more work towards KEPs. I think that, you know, as a larger organization, certainly we can help push contributions from our end in that direction.
D: I think that's fair. It could be that I'm trying to use the word KEP as a proxy for: I need a document. I need to see, like, a design document. I need to see a test plan. I need to see this and that. And I guess tracking issues are kind of the place for that information right now, but I don't feel it is sufficient, and the way I would deal with the fact that a lot of enhancement issues don't have KEPs associated with them is, as I...