From YouTube: Ceph Developer Summit Quincy: CI
A: All right, let's get started. So, for this session, we'll talk a bit about improvements we can make to our testing infrastructure. We can start off with the infrastructure that's used for unit tests: make check, and the API tests run through GitHub and Jenkins.
A: Looks like we've got David Galloway, at least for a few minutes here. So, David: we had a few ideas I was talking about with Kefu, about how we could potentially improve some of the reliability of the Jenkins jobs and builders, and in the Etherpad here there are a few things I'd be interested in your thoughts on. One is that a lot of times we have race conditions in Jenkins, and in the make check tests in particular, that can cause some instability. It can be difficult to debug these, since we don't have very verbose logging enabled and the logs tend to go away pretty fast.
B: Yeah, probably. I mean, I'm sure there's a number of different ways we could do that. We could store them on the long-running cluster if space is a concern. Also, every job has a maximum number of builds to keep, and we could increase that as well.
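A minimal sketch of that last option, assuming the python-jenkins client and a job whose config.xml carries the stock logRotator block; the job name, credentials, and counts here are illustrative:

```python
# Raise a job's build-retention limit so its logs stick around longer.
import xml.etree.ElementTree as ET

import jenkins  # pip install python-jenkins

server = jenkins.Jenkins("https://jenkins.ceph.com",
                         username="bot", password="api-token")

cfg = ET.fromstring(server.get_job_config("ceph-pull-requests"))
num_to_keep = cfg.find(".//logRotator/numToKeep")
if num_to_keep is not None:
    num_to_keep.text = "200"  # keep many more builds (and their logs)
    server.reconfig_job("ceph-pull-requests",
                        ET.tostring(cfg, encoding="unicode"))
```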
B: Yeah, I mean, that's sort of what the ceph-build pull-request job is supposed to do, but that obviously doesn't test every single job on every single distro. And I just don't think that's very realistic to do for every CI pull request. But yeah, we could probably look into a couple of different ways. We could spin up a quick ephemeral Jenkins instance, join a Jenkins builder to it, and maybe run make check.
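A rough sketch of that ephemeral-Jenkins idea, assuming Docker and the official jenkins/jenkins image; the container name, port, and the elided middle steps are illustrative:

```python
# Spin up a throwaway Jenkins controller, exercise the job, then discard it.
import subprocess

subprocess.run(
    ["docker", "run", "-d", "--rm", "--name", "jenkins-scratch",
     "-p", "8080:8080", "jenkins/jenkins:lts"],
    check=True,
)
# ...join a builder as an agent, import the job config under test,
# trigger a make check run, and inspect the result...
subprocess.run(["docker", "stop", "jenkins-scratch"], check=True)
```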
A: Yeah, I think we can probably iron out more details there. But sometimes we've seen things get merged that work on master but not on stable branches, or things that have changed in the distro itself that cause some breakage. So we can talk more later and figure out what the different aspects we want to test are, and what we're missing, basically.
A: Let's see. Kefu, I had a couple of other ideas that I think would help out here as well, about labeling PRs that are important for fixing builder or test setups, so that we could easily tell which ones those are and prioritize reviewing them. Things do break.
A: Sometimes we end up talking about these things on our Ceph channel or IRC, but it can be hard to tell which PR is the most important one to review or fix once it's already out.
D: Yeah, I think one thing that would be useful... can you guys hear me? I think my microphone... yeah, okay. So, oftentimes what I've personally felt is that I suddenly wake up one day and see that all of make check is failing, and it's very hard to figure out what changed, or where things changed.
D: So if there are changes that are going to affect things like make check on several branches, or, say, make check on nautilus, then for those kinds of PRs maybe we could send an email to the sepia list or something, saying such-and-such is merging, please look out for failures in the next 24 or 12 hours. That at least gives us... you know, say we have to do a release.
D: We'd at least know what to revert and what the next steps are, instead of hunting all over the place.
D: And there are other related aspects, like: when we see a make check failure, how do we start debugging things? At the moment we do have the Jenkins log, but in my opinion it's not super obvious where things are failing, or there are multiple errors and it's hard to figure out which one is really causing the issue. Making that a little easier to debug for any normal person would be good.
A: Yeah, Kefu had an idea there: at least for the make check tests, ctest has a mode where it can output machine-readable information, so we could tell exactly which individual test failed and report that back to GitHub. Then we could see those kinds of failures very easily.
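A minimal sketch of that mode: with CMake 3.21 or newer, ctest can emit JUnit XML, which a CI script could parse to report individual failures back to the PR; the file names and the reporting step are illustrative:

```python
# Run the make check tests with JUnit output, then pull out the names of
# the individual tests that failed.
import subprocess
import xml.etree.ElementTree as ET

subprocess.run(
    ["ctest", "--test-dir", "build", "--output-junit", "results.xml"],
    check=False,  # a non-zero exit here just means some tests failed
)

tree = ET.parse("results.xml")
failed = [case.get("name")
          for case in tree.iter("testcase")
          if case.find("failure") is not None]

# These names could then be posted back to the pull request as a commit
# status or comment via the GitHub API.
print("failed tests:", failed)
```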
B: Yeah, that sounds interesting. I don't fully know what ctest does, but it sounds like that would be pretty easy to do, based on my very brief understanding.
A: Yeah, I think it would be relatively simple. Another thing that might be nice would be including more metadata; at least to me, it's not always obvious what the machine that the job is running on looks like, and sometimes that can be pretty important.
A: Perhaps we could have some kind of list that Jenkins could know about, as a file in the repository, with regexes to grep for in the failure output, so that it could automatically rerun the failed jobs if they hit that same exact bug, without having to kick off a whole new build-and-test cycle for everything.
B: Yeah, I'm sure there probably... I mean, I know we have a plug-in installed where you can put in regular expressions and have it grep for those in the build log, and then it will say on the main job page that that was why it failed. Which is nice if you're okay with going through the Jenkins web UI to look at that, but that's just for human consumption too.
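The machine-consumption half of the idea might look something like this sketch; the regex file in the repo and the requeue hook are hypothetical:

```python
# Scan a build log for known flaky signatures kept in a file in the repo,
# and rerun only the failed job when one matches.
import re
from pathlib import Path

KNOWN_FLAKY = [
    re.compile(line.strip())
    for line in Path("qa/known-flaky-regexes.txt").read_text().splitlines()
    if line.strip() and not line.startswith("#")
]

def should_rerun(build_log: str) -> bool:
    """Return True if the failure matches a known transient signature."""
    return any(rx.search(build_log) for rx in KNOWN_FLAKY)

log = Path("build.log").read_text()
if should_rerun(log):
    # e.g. trigger only the failed Jenkins job again via its REST API,
    # instead of a whole new build-and-test cycle.
    print("known flaky failure: requeueing this job only")
```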
G: Okay, sorry, I don't know if there is such a thing for... well, actually, in the dashboard we are running Cypress, and, for example... this is mostly for end-to-end tests, I don't think it works for other kinds of testing, but it also flags the tests that are flaky, based on the runs and the branches. It takes the branches into account, because right now, I think, in Jenkins everything appears in a linear way.
G: I'm not sure if we can make a distinction in Jenkins so that it can cluster the failures based on the branches or something, rather than the current approach, which is everything in a single timeline, where it's hard to identify a flapping test or just general flakiness.
A: What's required for the Jenkins output to be usable by Cypress?
G: I don't think it's directly usable, because the whole Cypress thing is just for end-to-end testing. You install the JavaScript library and it runs a web driver, so it's very focused on front-end testing. But I mean, I would want such a thing for Jenkins, or another kind of dashboard for tests.
G: The only thing is, if you have a look at the main page of the, let me share that, for example, the pull-request job: there is a graph there, and that graph basically mixes together a different number of tests based on the different branches. So it's not very useful; it's hard to see if there's a specific test that is failing more often. But perhaps if we could split that out by branch, it might be more useful.
H: Hey, one question. I think it was in the RADOS meeting that I mentioned it, but I don't know if this is tracked somehow: the ability to allocate some machine resources for stress testing, or scale or performance testing, whatever you want to call it, in order to identify issues at scale. I think all components can benefit from it, not just the dashboard.
H: I mean, the idea was just, for example, to set up a cluster with 1000 OSDs, 1000 RBDs, 20,000 buckets, and try to do the usual operations, like retrieving buckets from the dashboard or from the API, to identify issues with stress testing. This, I think, could be great, so I don't know if David should be aware of it.
A: Yeah, I think we talked about this a little bit in the last session, but for the higher-level testing of things like the manager modules and dashboard, things that aren't directly consuming the storage, we do want to stress things more around scale where we can.
A: It probably makes sense to simulate a lot of those things, rather than trying to build a thousand-node cluster in the lab itself: having a way to inject fake data for the manager, so it thinks there are a thousand hosts and tens of thousands of OSDs and millions of RBD images. I think that would be really useful.
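As a sketch of that fake-data idea; the injection point and schema are entirely hypothetical, made up only to illustrate the scale being simulated:

```python
# Generate a synthetic inventory that a manager-module test harness could
# be fed in place of a real cluster map.
import json
import random

def fake_inventory(num_hosts=1000, osds_per_host=24):
    return {
        f"host-{h:04d}": [
            {"osd_id": h * osds_per_host + i,
             "status": random.choice(["up", "up", "up", "down"])}
            for i in range(osds_per_host)
        ]
        for h in range(num_hosts)
    }

inv = fake_inventory()
print(f"{len(inv)} hosts, {sum(len(v) for v in inv.values())} osds")
with open("fake_inventory.json", "w") as f:
    json.dump(inv, f)  # a test harness would load this instead of an osdmap
```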
A: Yeah, we've discussed a little bit in the past about a performance CI, where we'd have a dedicated set of machines that we'd use to run more performance-based tests in Jenkins.
A: We actually have a Jenkins perf job already that runs some Crimson tests today. I think the difficulty there is that we haven't found it to be very low-variance.
A: If you run the same test many times, the variance in the results is fairly high, so you can detect performance issues at a very coarse-grained level, say if you got a 50% or 100% regression in performance. But for finer-grained testing, I think it would need more investigation into how the hardware is set up and how the systems are configured, to make it more resistant to noise.
A: So I think the fake data simulation seems quite useful regardless of the machine setup; that sounds quite helpful to me.
A: We also have the performance suite in teuthology, which runs on a few nodes, so more than a single node at least. It's been running for a few years now; I think it was added maybe back in 2017 or 2018, to be able to at least collect data about how we're doing over time.
A: It's also not the most consistent, since it's running on different nodes each time, but it's another area where we could potentially improve: perhaps try to run it on the same set of nodes to make things more consistent, and expand it to include more kinds of performance tests as well.
A: Yeah, profiling is a little bit of a separate thing, but it uses the Ceph Benchmarking Tool, the CBT framework, to collect a lot of information while the tests are running, about resource utilization in terms of memory, CPU, and network.
A: Like running perf; or Mark and Adam have different versions of a GDB-based sampling profiler that can be used to get more detailed information.
A: Are there other ideas that folks want to discuss around Jenkins, or make check, or the API tests?
F: I'm not sure if this falls into this category, but we've talked before about streamlining the backport process. It feels like there's a bunch of small GitHub Actions integrations we could do that would automate a lot of the backports. Maybe that's different; it might be unrelated, I guess.
A: That's a little bit different, but we could talk about it; I think we've got a little bit of time. So what kinds of things do you have in mind?
F: Well, I guess, going in the reverse direction: if you merge a PR that references a backport tracker, it should mark that tracker resolved, and update the parent tracker; if all the backports are resolved, it could just bubble up, so you don't have to go fiddling with the tracker when you merge things.
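A hedged sketch of that bubble-up flow against the Redmine REST API (the Ceph tracker runs Redmine); the status IDs, the bot key, and the parent/child relation used here are illustrative, since the real backport trackers may be related differently:

```python
# Mark a backport issue resolved when its PR merges; if all of its parent's
# children are now resolved, bubble up and resolve the parent too.
import requests

TRACKER = "https://tracker.ceph.com"
HEADERS = {"X-Redmine-API-Key": "..."}  # a bot account's key (placeholder)
RESOLVED = 3  # hypothetical status id for "Resolved"

def resolve_backport(issue_id: int) -> None:
    requests.put(f"{TRACKER}/issues/{issue_id}.json",
                 json={"issue": {"status_id": RESOLVED}},
                 headers=HEADERS).raise_for_status()

    issue = requests.get(f"{TRACKER}/issues/{issue_id}.json",
                         headers=HEADERS).json()["issue"]
    parent = issue.get("parent", {}).get("id")
    if parent is None:
        return
    # Any open siblings left? If not, the parent can be resolved as well.
    kids = requests.get(f"{TRACKER}/issues.json",
                        params={"parent_id": parent, "status_id": "open"},
                        headers=HEADERS).json()["issues"]
    if not kids:
        resolve_backport(parent)
```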
A: I thought the scripts already did that; not immediately in response to the merge, but when you run them after the fact.
D: I've seen them create backport tracker tickets, but not resolve them.
I: Even I have seen that recently; it doesn't mark it resolved automatically.
D: Yeah, we've been doing that part by hand, if you've got any, yeah.
A: This topic would be better with Nathan Cutler or other folks who are more familiar with the backport scripts, yeah.
A: So the first major one is in how we're handling the queuing of jobs through teuthology. Aishwarya is already looking at this: moving the queue out of Beanstalk and into the paddles database. If you're on here, do you want to talk a little bit about that?
K: Oh yeah, sure. So I think the main thing we need to take care of in moving from Beanstalk to paddles is the priority logic, because currently, when we add a job to the Beanstalk queue, it's a priority queue: it just takes in the priority field and takes care of everything internally.
K: So now we will be implementing the priority logic ourselves, so we can talk about what we might want that to be.
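A minimal sketch of what that priority logic could look like, assuming the queue lands in paddles' SQL database via SQLAlchemy (which paddles already uses); the model and column names are illustrative:

```python
from sqlalchemy import Column, DateTime, Integer, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Job(Base):
    __tablename__ = "queued_jobs"
    id = Column(Integer, primary_key=True)
    machine_type = Column(String)
    status = Column(String, default="queued")
    priority = Column(Integer)  # lower = more urgent, beanstalk's convention
    queued_at = Column(DateTime)

def next_job(session, machine_type):
    """Pop the most urgent queued job, oldest first within a priority."""
    stmt = (
        select(Job)
        .where(Job.status == "queued", Job.machine_type == machine_type)
        .order_by(Job.priority.asc(), Job.queued_at.asc())
        .with_for_update(skip_locked=True)  # lets dispatchers race safely
        .limit(1)
    )
    job = session.execute(stmt).scalar_one_or_none()
    if job is not None:
        job.status = "running"  # claim it in the same transaction
    return job
```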
A: Yeah, I don't have a good sense of what the scheduling should be in the future. I think we might need to take a look at how the lab load varies over time. Now that we're out of the Pacific release crunch, it seems like it's a bit less busy.
F: My sense is that having automatic priority changes based on age or something like that will be less important if you can change the priority of an existing run, like if you see something that needs it.
A: Changing priority, yeah; it's certainly a lot easier than canceling and re-queuing things.
A: So, going along with that, I think we need to make more changes to paddles.
A: I wanted to talk about making it easier to deploy changes to paddles, because right now we end up having to drain the queue and basically pause running jobs as much as we can, so that they don't hit errors while paddles is momentarily down or restarting. I think perhaps we could just set up a proxy in front of it, run two paddles services active-passive, and restart one of them at a time, so that we always have access to the service in general.
F: I guess I wonder if this can be fixed on the client side too, though. I don't know how many places in teuthology call paddles, but if they could just retry...
A: We already added retries to all the write paths; I think we need to add them to the read paths too, because of an interesting issue that we've been seeing with them.
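For the read paths, a retry with backoff might look like this sketch, using requests with urllib3's built-in Retry (the endpoint and counts are illustrative):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry  # allowed_methods needs urllib3 >= 1.26

session = requests.Session()
retry = Retry(
    total=5,
    backoff_factor=0.5,                # 0.5s, 1s, 2s, ...
    status_forcelist=[502, 503, 504],  # proxy says paddles is restarting
    allowed_methods=["GET"],           # cover the read paths too
)
session.mount("http://", HTTPAdapter(max_retries=retry))

resp = session.get("http://paddles.example.com/runs/?count=10")
resp.raise_for_status()
```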
A: We've talked about this before; we've been seeing it intermittently now. It's caused by the paddles worker process timing out and then getting restarted, or getting hung somewhere.
A: Within Pulpito, the UI for paddles, I think there's a bunch of things we can do once the PR adding authentication is merged. That's when we can add things like re-prioritizing and managing the queue, and being able to see what's in the queue, once it's actually in the database instead of only in Beanstalk; right now Pulpito has no idea about the priority of different jobs.
A: We could also do a lot more in the UI, like adding more filtering when you're looking at test results, so you can drill down to just the specific areas you're interested in.
A: And being able to see what's in the queue more easily, like which jobs are scheduled at different priorities and by different users, and doing general job management there too: being able to cancel runs, or kill ones that are already going that have some problem you don't care about looking into anymore, or kill scheduled jobs that no longer matter.
A: In terms of teuthology itself, one of the ongoing things is propagating more of the cephadm-based testing across the suites. There's a PR out for the RBD suite that covers almost everything there and brings it onto cephadm.
A: I think there's already been work with the RBD, RGW, and CephFS suites to get those working too; I'm not sure exactly where they stand at this point.
F: We've had a lot of discussions on the RGW side, but I think there's not a lot there yet; there are a couple of gaps on the cephadm side still, but we're working on closing those. I think there's been some pretty good progress on CephFS, though; they had a big pull request merged the other day.
F: I think the biggest issue right now is that most of the client workloads require packages to be installed, because you can't run the client workloads inside a container. And even if you could, we'd want a container that has all of the debug packages installed.
F: Yeah, what I usually do is shell into the container, dnf install the debuginfo package, and then look at the core dump. But yeah, I think making debug containers would be...
F: Definitely. But again, I think we need to be a little bit careful; we don't want to eliminate all trace of... yeah.
F: So maybe this is related, but we talked about this a bit in the Rook session yesterday: I think it's time to write a kubeadm task and a Rook task for teuthology that install Kubernetes and install a Rook cluster.
A: I think a while ago someone from the CephFS team was looking at trying to add a minikube-based setup to teuthology; I'm not sure how far she got.
F: I guess my sense is that if we want to use virtual machines, then instead of allocating a bare-metal node and running minikube on it, we should have an OpenStack pool of machines, allocate virtual machines, and then run the kubeadm task to install Kubernetes on those: push the virtual machine stuff down one layer.
F: Yeah, I don't think we should... okay. If we did want to go down that route for some reason, then I think it makes more sense to have a generic virtualization layer, not something like minikube.
A: Okay. We've talked about a lot of other kinds of improvements we could make to teuthology, but I think one of the most impactful ones would be the ability to run teuthology against an existing cluster, like a development environment such as vstart, so you wouldn't have to kick off a build, wait for packages, and schedule runs; you'd just run the tests directly, the same way they're run by the suites.
F: Yeah, in my experience, writing teuthology tasks is really painful, because even once you've locked machines, while you're developing your code and running the task over and over again, it's really hard to get the nodes to clean up, because we mostly rely on re-imaging nodes.
F: There isn't a... nuke doesn't seem to be very thorough, I guess, or it doesn't work; I can't remember, it's been a while.
F: I did just add a cephadm.apply task that you can call; you basically just feed it YAML. It has a specs element, and then you have a list of specs you want to apply. That's how it works, and for the most part you don't really need much else; probably even some of the stuff that's already in the cephadm task could be removed, because any time you want to tell cephadm to do something, you can just write the spec that gets fed right into the orchestrator.
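The shape of that, as a sketch: the service specs below use standard orchestrator spec fields, but the exact task schema shown is illustrative rather than the committed one:

```python
# Build service specs as data and hand them to an apply-style task as YAML.
import yaml

specs = [
    {"service_type": "mds",
     "service_id": "cephfs",
     "placement": {"count": 2}},
    {"service_type": "rgw",
     "service_id": "realm.zone",
     "placement": {"hosts": ["host1", "host2"]}},
]

# Roughly the shape a teuthology yaml fragment could take:
print(yaml.safe_dump({"tasks": [{"cephadm.apply": {"specs": specs}}]}))
```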
F: So drive groups aren't covered, and none of the placement stuff is really covered well, like the scheduling things; we have a bunch of unit tests that we rely on for that, but...
A: Okay, well, I think that's plenty of areas that we can iterate on. All right, Sebastian, I see you're around as well; are there more gaps for cephadm that you think we should list here?
J: Give me 10 seconds to read through.
F: You know, maybe actually, on the Pulpito side... one of the sort of headaches that we keep bumping up against is the way that containers are built. It's just very fragile right now, and the way things are served isn't simple either: you have to know that it's the x86 CentOS package build that also builds a container at the very end, but there's no... whatever, it's just hard to see what's going on, and it keeps breaking.
F: If we're going to add a debug container, that might be something we want to do there. One other thought I had, though: I'm not sure if this is a good idea or not, but I wonder if we want a build process that builds directly into a container, instead of building packages and then installing them in the container.
F: It would be a lot faster; we could probably save half an hour by skipping the intermediate package step.
J: ...Because moving ceph-container into the Ceph tree itself... ceph-container is overly complicated, needlessly complicated, and it replicates the different Ceph versions that we already have branches for in the Ceph tree. So just by moving things over to the Ceph tree, I think 90% of the intricacies of ceph-container are just going to vanish into a void.
F: Yeah, maybe that's a deeper discussion that probably needs some of the ceph-container folks.
A: Yeah, and when you're hitting errors in a container, it can be difficult to figure out what's wrong.
F: I was going to mention that, but it's about to be redone a little bit, so.
F: Yeah, like NFS: I want to make sure that... well, yeah. So I think once that stuff is fixed up, it'd be a good time to add a bunch of testing there, like active-active NFS, and putting NFS in front of RGW and so on, to make sure all that stuff works.
J: Yeah, right now we are kind of deploying things, but we have no idea if the daemons we're deploying actually work properly. They can fail and still be running, or they can't access things, or I don't know; and we never know.
F: My sense is that the way to address that is to have, in the cephadm portion of the suite, a step where we deploy the daemon and just do a read and a write of a file, something really simple, like a smoke test. And then also go and update the actual NFS-on-RGW portion of the RGW test suite and make it use cephadm to deploy, instead of deploying things however it does now.
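That smoke test could be as small as this sketch; the export path and host are illustrative, and it would need root for the mount:

```python
# After deploying an NFS daemon, mount it, write a file, and read it back.
import os
import subprocess
import tempfile

mnt = tempfile.mkdtemp()
subprocess.run(
    ["mount", "-t", "nfs", "nfs-gw.example:/export", mnt], check=True
)
try:
    path = os.path.join(mnt, "smoke-test")
    with open(path, "w") as f:
        f.write("hello\n")            # the write path works
    with open(path) as f:
        assert f.read() == "hello\n"  # the read path works
finally:
    subprocess.run(["umount", mnt], check=True)
```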
J: I think we cannot properly test placements in teuthology, because the testing matrix that you'd need in order to test placements is just too big.
F: It means that if we have any sort of instance where, say, the manager's cephadm module is rewriting its stored state based on the upgrade, it needs to do that in a backward-compatible way, whatever it is.
J: Yeah, either fail gracefully or work.
A: Yeah, on the bright side, cephadm does simplify testing upgrades quite a bit, because you no longer have to worry about crazy container or package dependency issues.
F: Yeah, it's basically already implemented. I added a bunch of checks just to prevent you from downgrading major versions. I can't remember if I put something in to prevent downgrading minor versions initially, with the expectation that we'd remove it when we were ready; I can't remember. I'm guessing not; I'm guessing that minor version downgrades work right now.
D: One thing related to the downgrades: we have been introducing feature flags in point releases in past releases. So if we ever have to do that for a specific release, will there be some kind of matrix, like "you cannot downgrade from this version to that version", that kind of stuff?
F: Yeah, I was just thinking about this. It might be that we decide, say, that we made a critical change in .5 or something, and so you can't convert back from .5. Yeah, but then...
F: So the manager could enforce that: if it's a .5 manager, then it doesn't let you downgrade any lower than .5. But that wouldn't work if, as of .5, we decide you can't downgrade below .2, because you could always downgrade to .4 first, and .4 didn't know that you can't downgrade below .2. But whatever, yeah.
J: There are upgrade chains: if you're trying to upgrade from 15.2.0, you cannot upgrade to 16, because there is no podman version that is compatible with both 15.2.1 and 16. It's just impossible to upgrade in one go, because you can't properly run the upgrade from one to the other.
J: So there is a need to first upgrade from 15.2.0 to the latest Octopus release, upgrade podman, and then upgrade to Pacific. Is that something that's feasible?
A: Should that be something that we're encoding in the upgrade itself, or do we always upgrade to the latest stable release version before upgrading to the next major?
F: I don't know; well, it's a little bit hard, because you don't know what the... if you're running 15.2.0, at the time it was released we didn't know what the future upgrade constraints were going to be, right?
F: But that said, we were just talking in the earlier session about adding an upgrade list or upgrade check command, something like that.
F: It would just query upstream to see what versions are available, and probably have an option that just upgrades to the latest, whatever it might be. And I think the way to do that is to publish a JSON or YAML file somewhere that enumerates the versions, so that we don't have to query a registry or something like that, which doesn't always work. And if we do that, then a logical thing to do...
F: ...there would be to be able to mark certain versions as toxic: if there's a "do not install this version, it's bad", we want to mark that so people can see that sort of flag. And if we're doing all that, then that might also be a place where we somehow encode any upgrade constraints.
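A hedged sketch of what such a published file and check could look like; the URL, schema, and field names are all made up for illustration:

```python
# Fetch a published manifest of releases and answer "can I upgrade from X
# to Y?", honoring toxic flags and any encoded constraints.
import requests
import yaml

MANIFEST_URL = "https://download.ceph.com/releases.yaml"  # hypothetical

manifest = yaml.safe_load(requests.get(MANIFEST_URL).text)
# Example shape:
#   versions:
#     - name: 16.2.7
#       toxic: false
#       min_upgrade_from: 15.2.14   # must pass through latest octopus first

def version_tuple(v: str):
    return tuple(int(x) for x in v.split("."))

def can_upgrade(current: str, target: str) -> bool:
    info = next(v for v in manifest["versions"] if v["name"] == target)
    if info.get("toxic"):
        return False  # flagged "do not install this version"
    floor = info.get("min_upgrade_from")
    return floor is None or version_tuple(current) >= version_tuple(floor)
```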
A: Yeah, we're about out of time, so I think we've got to switch over to RBD now. Thanks, folks.