From YouTube: CDS Reef: Governance
Description
The Ceph Developer Summit for Reef is a series of planning meetings around the next release and some community planning.
Schedule: https://ceph.io/en/news/blog/2022/ceph-developer-summit-reef/
A
All right, so I think we've got enough people, we can get started. So welcome to the CDS session for governance. I want to talk a bit about, and hear your opinions on, governance now that we've got the Quincy release out. And by the way, thanks everybody for finally getting Quincy out the door. I know it was a bit of an extended process this time, but we think we made a much higher quality release because of the extra time and all the extra testing we did. So thanks, and I think it's looking great.
A
But today we're going to talk a little bit more about the governance model. We said after we released Quincy we'd discuss this more and try to formalize it and get it merged, and the essential idea hasn't really changed much: it's that we turn the existing Ceph leadership team into what we're calling the Ceph Steering Committee, making explicit that it's not just development leads, but that we welcome contributions and involvement from other members of the community as well.
A
Also, the other main responsibility of the steering committee is to elect the executive council, whose role is to kind of make sure everything gets done and to try to drive the different aspects of the project.
A
All right, so we already had some excellent feedback from Ernesto, Anthony, and Neha, and I've also already reviewed this. I've still got a few more things to fix up based on those comments, but I think the key principles haven't really changed much. We're just going to change the wording from "shows up" to "participates" to make it clear that we're not looking for presence but involvement, so folks from other time zones who can't make things like the synchronous meetings are welcome to participate as well.
A
In terms of the steering committee, as mentioned, this would be kind of grandfathered in from the existing CLT. Mainly, we'd use votes to amend the governance model if necessary and, like this year, elect the executive council; we'd continue the existing meetings as we do today to discuss tactical things, and try to explicitly set up maybe a monthly or bi-monthly or some completely slower cadence to focus on the longer-term strategy for the project and its technical direction.
A
One change here was that I removed the language around keeping the committee to a certain size, or fixing terms, or anything like that, because I felt like the existing model, where we haven't really done that, has been working out pretty well; it lets folks become dormant or active again as they like and as they have time.
A
That means that, even if folks don't necessarily have time to attend all the meetings, we can at least benefit from their perspective and expertise.
A
Then again, the idea would be that the steering committee would be formalized for voting, and the meetings would be open for anybody to attend, as the CLT meetings are today.
A
Things like the Python and Mozilla governance documents were more explicit about what happens in various edge cases, like when members of the council need to leave, or how the nomination process works. Another thing I wanted to add here, for the council, was a requirement that there's at least more than one organization represented, so there's no appearance of it being dominated by one particular organization.
A
I wonder what the Python governance says again. We trust everybody to act in the interest of the project when they're working in the council or the steering committee, but to ensure that, and to avoid the appearance of one vendor or one organization dominating, at least one seat in the council has to be from a different vendor than the others.
A
If folks have anything else to add, please feel free to comment on GitHub. I want to make some updates, as described earlier, and then we can merge a final version and also update the other part of the docs that describes the current governance. We'd then probably merge this with the existing governance page in some fashion, which currently has a description of the foundation and its role as well.
F
Well, if no one has anything more substantial, I would like to bring up the case of not upgrading the long-running cluster until the very last moment. This is something that happened not just with Quincy but also with Pacific, and in both cases we had late-breaking bugs: in the case of Pacific it was actually a serious bug, and in the case of Quincy it was something related to just existing old podman versions not handling things in the same way.
F
You know, we literally upgraded the LRC on, like, the day before the release, and I think we should make it a rule, at least for major releases, to upgrade it at least a week before the release actually goes out, because even if we just merge, you know, in that time frame, even if only a couple of PRs, you know, get merged...
F
We've gotten bitten by this twice already, so I think we should make it a formal rule.
B
Yeah, plus one to that. There were a couple of things that I tripped over, that I remember tripping over from the Pacific release as well, so I started a document on the Sepia wiki. I don't know if it's the best place for it, but it covers all of the spots that the new release name needs to be added to, and I think...
A
This time in particular we seem to have had a number of kind of late-breaking bugs. I guess another area that was troublesome was performance, so perhaps we could do more performance testing earlier in the cycle as well, with more varied hardware, to try to detect any kind of regressions earlier, when we have a little more time to fix them.
H
It's the same theme, and we've seen it through Pacific all the way to what is now Quincy. So it's kind of a general theme, but, of course, testing is incredibly difficult to get right, so it's a tough nut to crack, I guess.
I
I agree with you, David. I mean, I guess when it comes to testing, the more you do, the better it is, like for the upgrade case. I guess, like, you know, for example, the Gibba cluster was getting upgraded every time there was something new, right, but the LRC wasn't. So that's the moral of the story: the more you do, the more you find.
H
Yeah, it's unfortunate. It's kind of like when you're cleaning the house and you start to get into the nooks and crannies: like, oh my goodness, what did I miss here? But, you know, clearly, periodically you have to do it, and, you know, it's why we're trying to help out on our end, doing the testing on our large clusters. But at the same time we probably need something more formal, because, for example, we've been stuck dealing with this bug that we've been fighting for a month now, and we haven't been able to...
H
We haven't even done a release candidate of Quincy, much less a stable release, so we haven't been able to contribute in that regard. So I think, just as a community, we have to find a way to make sure that we have a very, I guess, well-defined process that ensures at least some level of standard testing always occurs, and there's no deviation where things kind of slip through the cracks, because it's always those little things that, at least we find, trip us up on different Ceph releases.
A
Yeah, I think that's a good area to look into a little more, David, especially the parts where we may have some testing that we do during the release process that we don't do during regular development. If we ran some of those tests on a more regular basis, earlier, we'd find these bugs earlier and not have to kind of rush to fix them at the end of the cycle; and that applies to minor releases too.
H
Yeah, and, you know, sometimes it also seems like a chain, because something may have not gotten tested on one patch or something, but that has, like, downstream effects that impact other things, and you end up with kind of this knock-on effect. So, for example, the bug that kind of broke sharding on versioned buckets: that led to all kinds of mess afterwards, whereas if it had just been caught when it was first introduced, a lot of that stuff wouldn't have happened.
H
We wouldn't be in the situation we're currently in. So I agree: it's just more continuous testing, and I know it's expensive, I know it's hard, you know, but maybe that's an area that we focus on, like, well, how do we get the resources that are necessary to make that more successful, or speed through iterations, or whatever...
H
...the case may be. But I think solving those problems is worthwhile, because, at least from my experience, we've kind of proven the need for it. So I think we're beyond the burden of proof to say we need to do it, and now it's more a question of, like, how do we actually accomplish it given the resource constraints we have.
A
I entirely agree. I think this lines up exactly with the survey results as well, from the Ceph user survey, about concerns about upgrading due to the bugs that come with it.
A
We need to focus more on that, on better ways to test and prevent regressions from entering in the first place, because, like you were saying, the earlier we can prevent bugs, the cheaper it is to fix them, and anything that gets released, or gets backported, or sits there for a longer time, can cause more and more trouble.
J
I don't think that the problem, or that the important testing, is limited to finding regressions. I would tend to see it in terms of scaled correctness testing, because there have been, and there are, ancient bugs that had devastating effects on consistency in various workload scenarios, that have been hidden for years, or that reflect incomplete implementations of things.
H
Yeah, I think coverage is a big part too, because I know, you know, just from our experience in trying to work on the orchestrator side of the house, the ability to test that, especially with containerized releases and stuff...
A
And on the positive side, having the Gibba cluster this time helped us find a number of issues that we wouldn't have found otherwise, so having that larger scale was definitely beneficial during this cycle, and upgrading it multiple times as well.
A
This might be one thing: something Sam brought up to me, as an idea for trying to track regressions better, was to use Redmine, the tracker, in particular. It already has a regression field where, for example, as we're proposing backports or creating backport issues and figuring out whether things need to be backported...
A
...if we did the extra triage of checking whether something is a regression or not and filled in that field, that would help us gather a little bit more data about which areas we're seeing regressions in, or whether the bugs are being introduced in new code.
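For anyone who wants to experiment with that triage, here is a rough sketch of pulling regression-flagged issues out of a Redmine tracker such as tracker.ceph.com via its REST API. The custom-field id, the "Yes" value, the project identifier, and the API key are assumptions to adapt to the tracker's actual configuration, not the real field ids.

```python
# Minimal sketch (not an official Ceph tool): list open issues whose
# "Regression" custom field is set, so they can be reviewed periodically.
import requests

TRACKER = "https://tracker.ceph.com"
CF_ID = 23            # hypothetical id of the "Regression" custom field
API_KEY = "..."       # personal API key from the tracker account page

def regression_issues(project="ceph", limit=100):
    """Return open issues whose regression field is set to 'Yes'."""
    params = {
        "project_id": project,
        f"cf_{CF_ID}": "Yes",   # Redmine filters custom fields as cf_<id>
        "status_id": "open",
        "limit": limit,
    }
    resp = requests.get(f"{TRACKER}/issues.json", params=params,
                        headers={"X-Redmine-API-Key": API_KEY})
    resp.raise_for_status()
    return resp.json()["issues"]

if __name__ == "__main__":
    for issue in regression_issues():
        print(issue["id"], issue["subject"])
```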
F
Yeah, I definitely agree there. This field is pretty much not used; no one uses it. But even with no additional automation, such as, you know, extended tracking of regressions and whatnot, if we could just, you know, perhaps in the CLT calls, maybe once every couple of months, do a review of those tickets, of all tickets marked with regression... Because it's often the case that, like, for example, the pull request...
F
...template says that, like, there's a checkbox for whether this fixes a recently introduced issue, and in that case the assumption is that you would mention the commit that introduced it in the commit message, and the tracker ticket is optional; at least that's what the language suggests.
F
These things basically mean that some regressions get introduced, and, you know, if it's, like, fixed up within a day or two, it's one thing, but if it's been there for, you know, a month, and then somebody wanted to backport something and took that PR with the regression as a prerequisite...
F
...you know, pay a lot more attention to anything that has a "Fixes" tag in it or says that it is a regression, and do a review. And then, as far as what is causing this: RBD really had a bunch of regressions introduced early on in the Quincy cycle that took a long time to track down, and pretty much all of them came from large-scale refactors, of which the RBD piece was only just one component...
F
...that was being affected, and it is those bulk changes that were responsible. Like, I think we had...
F
...five major regressions, and, like, four of them were bulk changes related to refactoring in, you know, code that isn't even related to RBD, and it's the fix-ups to make things compile, or to squash warnings on the Python type-checking side, that caused a bunch of failures, including a bunch of regressions, including an actual user-visible API regression, because with the...
F
Like, one of the refactors basically changed how the manager commands are generated; this is now based on Python method decorators, and to someone who is not familiar with this, or just not experienced with this code, it would seem like just, you know, changing the argument names or the order of the arguments; well, the order, I guess, doesn't matter all that much, but changing the argument names in this case was something that broke ceph-csi.
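To illustrate the failure mode being described, here is a minimal, generic sketch (not Ceph's actual mgr_module code) of decorator-based command registration, where the Python parameter names become the externally visible argument names. The command string, parameter names, and dispatcher are made up for the example.

```python
# Sketch: a decorator turns a method into a CLI command whose argument names
# are derived from the method signature.  Renaming a parameter silently
# renames the externally visible argument, which is how a refactor can break
# external callers that pass arguments by name.
import inspect

COMMANDS = {}  # command prefix -> (handler, argument names)

def cli_command(prefix):
    def register(func):
        # Everything after 'self' becomes an externally visible argument name.
        args = [p for p in inspect.signature(func).parameters if p != "self"]
        COMMANDS[prefix] = (func, args)
        return func
    return register

class Module:
    @cli_command("fs volume create")
    def create_volume(self, vol_name: str, placement: str = ""):
        # Renaming 'vol_name' here would change the command schema and break
        # any caller that sends {"vol_name": ...}.
        return 0, f"created {vol_name}", ""

def dispatch(prefix, **kwargs):
    func, args = COMMANDS[prefix]
    unknown = set(kwargs) - set(args)
    if unknown:
        raise TypeError(f"unknown argument(s): {sorted(unknown)}")
    return func(Module(), **kwargs)

print(dispatch("fs volume create", vol_name="a"))   # works
# dispatch("fs volume create", name="a")            # raises TypeError
```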
F
So I think another aspect would be to have perhaps a list of areas, like a list of files, which need particular attention; and for stuff that is kind of, like, hidden in, you know, just in the middle of some multi-thousand-line file, we could maybe break those out into separate files and ensure that those get additional eyes and additional review. I'm talking in particular about things where we promise API stability.
A
Interesting. So you're looking at breaking out pieces of code that are part of, like, stable APIs in some way into smaller files, so that they can be more easily reviewed and get more attention.
F
Yeah, so that it's, you know, perhaps there with a scary comment at the top, so that it's not even a matter of review. I'm looking at this more because reviews are ultimately, you know, up to how each individual component does the review; it's more about making it harder for a developer making a change to actually make that kind of change.
F
That's going to vary from component to component, of course, but it's just something that we've been bitten by in the Quincy cycle.
I
Kind of in line with what Ilya just suggested, there was one idea that was thrown around in the RADOS team around, similarly, encoding changes: any files that are touching those encoding changes should have, like, some kind of GitHub hook or something which could indicate the need for, you know, cautious testing, careful review, et cetera, et cetera. We can figure out what those areas are, the areas where, you know, backwards compatibility and things like that are more prone to be broken.
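As a concrete sketch of that idea, the following hypothetical check could run in CI and flag pull requests that touch compatibility-sensitive files. The watchlist paths, the base branch, and the fail-versus-label behavior are assumptions for illustration, not an existing Ceph job.

```python
# Sketch of a CI gate that warns when a PR touches files known to contain
# on-wire encode/decode logic, so reviewers know extra care and upgrade
# testing are needed.
import subprocess
import sys

# Hypothetical watchlist of paths where compatibility is easy to break.
SENSITIVE = ("src/include/encoding.h", "src/osd/osd_types", "src/mds/mdstypes")

def changed_files(base="origin/master"):
    out = subprocess.run(["git", "diff", "--name-only", f"{base}...HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def main():
    hits = [f for f in changed_files() if f.startswith(SENSITIVE)]
    if hits:
        print("Touches encoding/compatibility-sensitive files:")
        for f in hits:
            print("  ", f)
        print("Please request extra review and run the upgrade suites.")
        sys.exit(1)   # or: add a 'needs-compat-review' label instead of failing
    print("No compatibility-sensitive files touched.")

if __name__ == "__main__":
    main()
```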
A
Yeah, that's another good idea: making things maybe more visible for reviewers when there are potentially dangerous changes or more risky changes, so that these areas get more careful review or more careful attention.
I
So there's that, and there's the other piece of, like, do we have enough coverage in our upgrade tests to test such a change? Because a lot of the time things are not getting caught because we are probably lacking the testing; it's not that, you know, the test wasn't breaking, there just wasn't one in the first place, so later, when that piece of code gets exercised, that's when we find it.
I
So I guess that's a tougher thing, but I would say, starting little by little: even when we are adding new features, making sure that that particular feature has enough upgrade test coverage, if it needs it, would be a good start.
F
The upgrade suites: do we even run them on a regular basis, or does it happen only...
F
...you know, when we get closer to a particular release? The reason I'm asking is because pretty much all suites run against master were, you know, more or less cancelled in favor of the, you know, integration branches that are put together by component leads. The idea behind that was...
F
...I guess, that no one paid much attention to master runs anyway, because things were seen to be tested repeatedly in the various integration branches. But I do wonder whether the upgrade suites are something that only the RADOS team would possibly run on a semi-regular basis, with everybody else more or less tending to implicitly rely on that coverage, on those suites just, you know, being run by someone and that coverage appearing out of nowhere.
F
Do we need to, first of all, publicize the upgrade suites? Because they are...
A
Yeah, I think that's a great idea. That's exactly one of those things that we do during our release that we should be doing more regularly, because there are some upgrade suites in different components; like, I think the fs suite has some upgrades and the rados suite has some upgrades, but it's not the same coverage as the full upgrade suite, and even those suites lack a lot of upgrade coverage that we could have there.
I
One thing we started to do, probably, like, you know, mid-Quincy development cycle, was doing baseline runs every week, and I do remember having asked Yuri to do baseline runs for the upgrade suites as well. But I think the problem with the upgrade suites is that, you know, they don't get as much love as the other suites, like you just mentioned; and I guess the knowledge about what the upgrade tests are doing and, you know, how we can increase coverage and run them more often is missing.
C
Well, we do. We do have the upgrade suites on schedules, so we intend to run them on a regular basis. Unfortunately...
C
Yeah, but sometimes, you know, we just, like, intend to run too many tests, and that's why, you know, upgrades cannot actually make it in. Another point: when we actually test PRs, sometimes we do get requests to actually test against upgrades, but it's not on, like, a regular basis. I think we need to, like, when we raise PRs, especially backports...
C
How do you know, how do you know for which PRs to actually run them?
A
Like, even though we sometimes have these tests running regularly, we haven't always had people's attention on the results, so there might be some breakage introduced that we don't look at until we're doing a release. So if we had this as part of the regular PR testing for backports, we'd notice that sooner, and it keeps the suites in good shape.
I
I don't mind running them; the rados suite has hundreds of things that are symlinked within it, and that's why it's become a larger suite than usual. But, you know, the problem starts when tests are failing and they don't get attention, which is when, you know, it just becomes harder to review every failure every time if it's the same thing a lot of the time. One example was some upgrade-sequence test that got added, an mds upgrade-sequence test, that ended up in, like, ten guaranteed dead jobs for a while.
I
At that point we had to make a decision about what do we do about these tests, and they were running within rados just because of the same symlinking. So if we can make those tests pass and they're doing something valuable within the rados suite, absolutely; but in general, like, you know, running upgrades on every backport is not a bad idea, I would say.
I
We've cleaned that up, but if they are meaningful upgrade tests, why wouldn't that make sense? I mean, I can see that Yuri is probably running into some logistical problems, but if there are, you know, meaningful upgrade tests that make sense within the rados suite, that's just simpler.
F
Yeah, maybe; like, if lab capacity is the issue, maybe we could kind of break the backport testing, like break those integration branches, break that process, into two parts. The first part could be pretty much the same as it is today, where, you know, smaller batches of PRs would be...
F
...you know, grouped mostly on a per-component basis, and then, you know, tested with just those suites, because that's the time when most failures pop up and need to be either taken care of or dismissed; and after, you know, after there is some number of these PRs that are ready to go, we can create just a single, larger integration branch and put it through.
F
I'm not sure if we have that distinction. We talked about introducing that distinction, in particular into the rados suite, and I think the rbd suite also has some examples of jobs that run for hours; we could maybe break those out and reserve these long-running jobs for this kind of second batch of integration branches.
H
Every time, we could start with, you know, the specific suites for those components, and then, at some regular interval, be it every three days or four days or five days or whatever, just always run the full suite, because then you've got reasonable confidence that your change isn't, like, totally broken immediately, or very close to immediately, and then you have relatively quick feedback before lots of other work has happened on the shoulders of the previous work that might or might not be broken.
H
So you don't end up with a cycle where you potentially have a month after a breaking change goes through, and everybody has built off of it, and then: oh my goodness, it's broken. Because I think that's the big philosophy.
I
That's exactly what my intention was behind those baseline runs; so it's like every Monday. I mean, the difference would be a week's worth of changes that you have to go back through to find an offending change for a regression, but the jobs in teuthology did that; it was the lack of, like, you know, looking at it or publishing those results.
I
So having Yuri send those results out to the list, at least for the RADOS team, meant everybody was aware of what the existing failures were, and which ones needed attention and which ones did not. Having to do that for, you know, the other suites: I don't see why that would not help the other suites as well.
C
We started doing that lately, I think for master, for Quincy preparation, for the core, rgw, and I think fs partly, and we included, like, I included the upgrade tests in the baselines almost every week. You know, there was no problem there; I think that problem has actually been around since long before I was hired. So the issue is, we do run them, right, on a regular basis.
C
Now, we probably don't pay enough attention, and, you know, in order to pay attention, that's why we do baseline runs, because, you know, you actually send emails into people's faces, right, and people see issues and then they look at them. So that's the issue with running upgrades for every integration branch...
C
I mean, we can try it and see how it works. I don't know.
E
When you say love, do you mean that we should poke team leads to write the release notes?
I
I think we shouldn't have to poke anybody; if we write release notes when a feature is merging or a backport is happening, that is when the problem gets solved. It's almost like, you know, something we need to bring into our day-to-day. But if that does not happen, then maybe, yeah; I mean, poking is the last resort.
A
Yeah, I agree. I think what I'm getting at is that we could be more consistent about using the PendingReleaseNotes file to gather the release notes for any new things, major new user-visible changes, that are happening during the development process, because for some of these we're kind of adding them to the release notes at the last minute. But if we gather them up during development, it's much easier to keep track of them.
E
Is there a way to put a flag, like in Jenkins, to require release notes for certain things, like...
G
...release notes, and I'm not sure how much it would be helpful, but at least it would open a PR and then it could be edited; if, at the end, we want to remove non-substantial changes, we can just go through it then, you know.
F
I don't think it's an issue of automation; it's an issue of, well, it's more an issue of culture. With regard to having the process automated, by, you know, having something that would open the PR and create those release notes: we don't have that many of them to justify that.
F
I think the piece that is missing is, like what Nick said, that there is a pending, like, "needs release note" label, and then, if someone sets it, then perhaps we need a check, which would be required, so that if the label is set and there is no diff in the PendingReleaseNotes file, then that PR should not be allowed to merge.
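A rough sketch of such a gate follows, assuming a hypothetical "needs-release-note" label and a GitHub Actions style event payload; the PendingReleaseNotes file name is the real one from ceph.git, everything else is illustrative rather than an existing Ceph check.

```python
# Sketch: fail the check when a PR carries the "needs-release-note" label but
# does not touch PendingReleaseNotes, so it cannot merge until a note is
# added or the label is removed.
import json
import os
import subprocess
import sys

LABEL = "needs-release-note"          # hypothetical label name
NOTES_FILE = "PendingReleaseNotes"    # file that exists in ceph.git

def pr_labels():
    # GitHub Actions exposes the PR event payload at GITHUB_EVENT_PATH.
    with open(os.environ["GITHUB_EVENT_PATH"]) as f:
        event = json.load(f)
    return {label["name"] for label in event["pull_request"]["labels"]}

def touches_release_notes(base="origin/master"):
    out = subprocess.run(["git", "diff", "--name-only", f"{base}...HEAD"],
                         capture_output=True, text=True, check=True)
    return NOTES_FILE in out.stdout.splitlines()

if __name__ == "__main__":
    if LABEL in pr_labels() and not touches_release_notes():
        print(f"Label '{LABEL}' is set but {NOTES_FILE} was not updated.")
        sys.exit(1)
    print("Release-note check passed.")
```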
F
And that way anyone, like, even someone who is not particularly familiar with the code that is being put up in the PR, but, you know, they took a look and they see that a release note is very likely needed: if they set the label, then...
F
...that addition to the list of checks that are executed on PRs would basically ensure that the label gets attention, and it's either removed by the component lead or actually fulfilled.
E
Yeah, you need the cultural will to do it; that's the problem. Like, not brushing your teeth and then going to the dentist and hoping for the best is sort of what we did last time. Yeah.
F
To add a label, and then to, you know, actually come up with a release note: this is basically an equivalent of a blocker flag on a tracker ticket, right?
F
This is the bit that is needed not only just for tracking, but also for enforcement, because, ultimately, a comment on the PR that says "oh, this probably needs a release note, could you please add one" can very easily be ignored or just fall through the cracks.
F
Yeah, the problem with the checklist, actually, and this is something that we should probably take on in the Reef cycle, is that it is currently not enforced, so the PR submitter gets an email message from GitHub, but, like, that's where it ends.
F
So we should probably hold a discussion in the CLT meeting as to whether the current template... because there are a couple of inconsistencies there that, you know, I could share and others could probably chime in on as well, and once we do...
F
...we should make that be required, and I guess at that point the only way to get rid of that requirement would be to just manually remove the entire checklist from the PR description; but we should probably leave that loophole there for a while, just until, you know, the processes get flushed out, and after that have, you know, a check that would insist on it being there.
E
Josh, can you also add, under that, on the matter of beating up on the checklist (no offense to anybody who wrote the checklist), that the docs PRs do not disable the, like, API tests and everything when you check docs? That would be nice, because it sometimes takes six or seven hours to wait for an API test to run, and I don't need that on docs. But could you add that, please? Cool.
F
Yeah, that's not a checklist thing, though; whether the checks, like whether those tests, are run or not is determined by the Jenkins job definitions. That's not part of the checklist.
B
Since we do require the ceph API job as a required check, it still has to run even for a docs PR, so the job still has to check out the ceph repo and then check what files changed, and if it's only docs it exits 0. But there is a plugin that is supposed to archive artifacts from that job, and for some reason it hangs sometimes, seemingly indefinitely. So that's something that I need to figure out.
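For reference, here is a minimal sketch of the docs-only short-circuit being described, written as a standalone script rather than the actual Jenkins job definition; the doc path prefixes and base branch are assumptions.

```python
# Sketch: a required check that still starts for docs-only PRs, but inspects
# the changed files and exits 0 immediately when nothing outside the docs
# paths was touched, instead of running the full API test suite.
import subprocess
import sys

DOC_PREFIXES = ("doc/", "README", "PendingReleaseNotes")  # illustrative list

def changed_files(base="origin/master"):
    out = subprocess.run(["git", "diff", "--name-only", f"{base}...HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

if __name__ == "__main__":
    files = changed_files()
    if files and all(f.startswith(DOC_PREFIXES) for f in files):
        print("Docs-only change; skipping API tests.")
        sys.exit(0)
    print("Non-docs changes detected; running API tests...")
    # ... invoke the real test suite here ...
```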
B
It's just that this particular check is a bit tricky, since it's required in order to merge a PR.
F
Yeah, well, there are some nuances to that, since the Python tasks, for example: it would be nice if they were checked, like, for Python types; you know, that would be a welcome addition one day. It doesn't do that today, but it's certainly code, so, unlike documentation, this is code that can still break things. Running unit tests is obviously not necessary, and building itself is not necessary, but just outright skipping everything is probably not a good idea.
F
Yeah, some bash scripts, some bash scripts get used as part of make check; for example, for some of the unit tests, in order to run them you need to set certain environment variables, and some bash scripts in that directory are responsible for that. But obviously these are things that could be moved; so, for example, there is a somewhat unclear distinction today between the qa directory and the src/test directory.
D
We should probably discuss that somewhere else.
A
Yeah, we're about out of time for today. Anything else that folks want to add in terms of general improvements, or things that could go better, or things that went well with Quincy?