From YouTube: 2018-Jun-20 :: Ceph Testing Weekly
Description
Weekly collaboration call of all community members working on Ceph Testing.
http://ceph.com/testing
B
Okay, so let's see what we've got on the pad. We were all here last week talking about trying out the Beast stuff, and he did produce an Etherpad guide to running with that. But it's not in markdown doc form, so we were talking about that; I think maybe he wants someone else to run through it and confirm that it worked. Probably.
D
We are doing it the same way downstream, but it's currently failing, I think. The teuthology OpenStack support is totally broken upstream right now, and that makes complications for me; creating the patches will just need some time. It's not yet clear exactly where the patches are needed.
A
Unless they don't have patches; would that have a chance of working? You know, like, yeah, I'm just picturing an engineer in a situation where they're like: well, screw it, I'm just gonna hard-code this thing to make it work in my environment. We wouldn't want that sort of thing, right? Right. But anyway.
A
I haven't completely finished it yet, and so that won't really help you submit patches or anything, but I think it's something that could be useful in general to do. All right. Yes, the idea is: I'm not sure if it's going to be quite as one-stop-shop as the other one, but the idea is to create an instance in OpenStack and use that instance to deploy paddles, pulpito, and teuthology.
A
And we have ansible roles that deploy those things, and it turns out that they're in better shape than I thought they were. They don't get run much, because we just have two deployments and we kind of just care for them by hand, as it were, but they're working pretty well. I haven't gotten to the point where I can schedule tests yet, but I don't think I'm super far from that.
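A minimal sketch of the flow described here, creating an OpenStack instance with libcloud and then running ansible against it; the image filter, instance name, and playbook names (paddles.yml, pulpito.yml, teuthology.yml) are invented placeholders, not the actual roles in use.

    # Sketch only: OpenStack credentials are assumed to be in the usual
    # OS_* environment variables, and the playbook names are invented.
    import os
    import subprocess

    from libcloud.compute.providers import get_driver
    from libcloud.compute.types import Provider

    Driver = get_driver(Provider.OPENSTACK)
    conn = Driver(
        os.environ["OS_USERNAME"],
        os.environ["OS_PASSWORD"],
        ex_force_auth_url=os.environ["OS_AUTH_URL"],
        ex_tenant_name=os.environ["OS_TENANT_NAME"],
        ex_force_auth_version="2.0_password",
    )

    # Pick an image/flavor and boot one all-in-one node (error handling
    # and waiting for the node to come up are omitted).
    image = next(i for i in conn.list_images() if "xenial" in i.name.lower())
    size = conn.list_sizes()[0]
    node = conn.create_node(name="teuthology-aio", image=image, size=size)

    # Deploy paddles, pulpito, and teuthology onto the new instance.
    for playbook in ("paddles.yml", "pulpito.yml", "teuthology.yml"):
        subprocess.run(
            ["ansible-playbook", "-i", node.public_ips[0] + ",", playbook],
            check=True,
        )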
A
Yeah, and that's fair; I'm not saying that you shouldn't do that, but what I'm working on should allow for that as well, so it'll just become an alternative way. And even if it takes a while, you know, to transition over to it once it starts working, I think that's fine. But if you stay on the OpenStack CLI environment, you know, no one else is using that, so you just won't have as much support. Yeah.
D
Of course, I realize that. We can just use our Jenkins, and we can try to set up some Jenkins jobs to make it possible to run some deployment. So as soon as you have your libcloud-based automatic deployment, we can try to run it as well, as a test for teuthology OpenStack. Is there a suite to be executed, like dummy, just for this purpose, or can we use some other?
C
Sorry, what was that?
D
I'm just asking which suite can be used to test the teuthology setup. Is it the dummy one?
B
So, like, I think the problem we've had in the past is that some teuthology change would go up, and it would work if you ran the rados suite, but it would break the filesystem suite or something. So you might just want to run, like, the ceph-qa smoke suite, because that actually exercises the teuthology testing harness and the interfaces it uses.
D
Yeah.
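For concreteness, scheduling the smoke suite against a teuthology branch under review might look like this; the flags are standard teuthology-suite options, while the branch and machine type here are placeholders.

    # Hedged example: exercise the harness itself by running the small
    # smoke suite with the teuthology branch being vetted.
    import subprocess

    subprocess.run(
        [
            "teuthology-suite",
            "--suite", "smoke",                   # small suite, touches most interfaces
            "--ceph", "master",                   # Ceph branch to test against
            "--machine-type", "smithi",           # placeholder machine type
            "--teuthology-branch", "my-feature",  # placeholder branch under review
        ],
        check=True,
    )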
A
Looking at the teuthology suite itself: oh yeah, because I'm sure that there are things that it does wrong now, because it hasn't been updated for, well, you know, whatever; whatever settings we're using to install Ceph are probably different now. But it does run whatever integration tests we have, so there's that. It doesn't use ceph-ansible currently; you know, like I said, there are things that need to be updated. But I think that at some point, having a test suite just for teuthology will be valuable, and we do.
A
I'm, you know, I'm not sure what the best thing to do today is going to be. The one difficulty, of course, with this sort of idea is infrastructure failures, you know. I feel like there's a certain level of infrastructure failures where, if there's enough of them, it makes it kind of not worth it to run the tests.
B
I guess, if we have... I mean, I'm not saying this is your bad; I would love to professionalize the teuthology development cycle. I'm just saying, if we don't run master, there aren't gonna be a very large number of these things, and we're not gonna build up... I mean, we'll have some testing, but it's not like...
C
So, I mean, the idea that I had, right: we have this concept of nodes and labs, and I thought that's what we were talking about. If you, you know, conceptualize that, and you say you have a lab with particular node types, right, then you can move it between different organizations, and you can kind of define the lab and the nodes, and that's what we need, right?
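A toy illustration of the labs-and-node-types idea being described; all names, fields, and counts here are invented for illustration only.

    # Invented names and counts: a lab is defined by its node types, so
    # the same definition could move between organizations.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class NodeType:
        name: str   # a hardware class, e.g. "smithi" or "ovh-small"
        count: int  # how many such nodes the lab has

    @dataclass
    class Lab:
        name: str
        node_types: List[NodeType] = field(default_factory=list)

    example = Lab("sepia", [NodeType("smithi", 100), NodeType("mira", 50)])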
C
I mean, I'm oversimplifying, but when we started talking about it... So we run on master, and, I mean, Zack, I'm trying to actually say, not to put words in your mouth, what I understood from you a while ago; but the idea was that the complexity of running master was that we have to have resources.
D
It's not just a matter of that. We are, okay, so we use the OVH lab to run basic tests, nothing complex. This is why the idea of tags for different labs is good, because some changes can be really, really harmful; no one wants master updated automatically and to end up with a known non-functional lab, without any possibility to roll back.
C
A practical use of having multiple tags for teuthology would be, say, point releases, right? Right now we have two active point releases, jewel and luminous, and sometimes there are differences between them in teuthology itself (although it's not too often lately) where they actually generate infrastructure noise failures. So right now we run all those releases against the teuthology master branch, but instead we would run, say, the jewel point release against a jewel-branch teuthology and the luminous point release against a luminous-branch teuthology.
A
You know, in any reasonable timeframe, before you, you know, merge them into the branch that actually gets deployed in the labs. So the first problem we have to solve is: how do we become more confident in our changes? And then we can decide: how do we deploy them more safely? Does that sound about right?
B
From where we are right now, we could work on building out the unit tests, or work on building out a teuthology smoke suite like we already discussed today. And we could decide to do something like: once not everyone's running on a fork, it could be that, like, sepia is just always running the latest thing, or, you know, updates once a day, and then we do releases every month or something to say: okay, no, this actually didn't break, so that partners don't have to live on forks.
D
I'm agreed, just, yeah... but I like the idea that master is the main accumulation of all new features, and of course no one should just blindly switch to master and upgrade automatically. Everyone is responsible for their own lab in the end, right? That said, I actually don't know how to automate the testing of a particular lab for new changes; only the one who is maintaining it knows that, but...
A
Yeah, I think that's what he means. Well, I guess I'm not sure if we technically have a single person that owns it right now; it definitely used to be me. We have a sysadmin, his name's David, and David's awesome. He knows enough to look after almost everything now. He's not like a Python developer, though.
D
So I'm just... I would recommend that you bind the lab infrastructure to some tag, because you have to test anyway. You have to make sure that it's still working, because if you update to something, you will anyway; and it's much easier to roll back the teuthology setup by just moving a tag in your repo, instead of cherry-picking early changes from master.
A
Oh yeah.
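A minimal sketch of the pin-the-lab-to-a-tag idea, assuming an invented tag name and checkout path; the install step is whatever mechanism a given lab already uses.

    # Placeholder tag and path: pin the lab's teuthology checkout to a
    # tag, so rolling back is just moving the tag and redeploying.
    import subprocess

    REPO = "/opt/teuthology"  # wherever the lab keeps its checkout (assumed)
    TAG = "lab-stable"        # invented tag name

    def deploy(ref: str = TAG) -> None:
        subprocess.run(["git", "-C", REPO, "fetch", "--tags", "origin"], check=True)
        subprocess.run(["git", "-C", REPO, "checkout", ref], check=True)
        # Re-install from the pinned checkout.
        subprocess.run(["pip", "install", "-e", REPO], check=True)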
D
For example, and it's not only regarding teuthology, because, I mean, it's not just teuthology itself; there is also paddles and pulpito, yeah. And in general they can all be installed on different machines, but for the testing purposes, for the functional testing purposes that we are discussing right now, it's enough to install everything on a single node, yeah.
C
Just, yeah, I mean, you know, I think the impression was that we just pull from master and, you know, never test it. This is not true. Essentially what happens is: we have lots of suites running on a daily basis, and they use master by default. Then, for example, usually when Zack or somebody else comes up with some dangerous feature, they do some development, and they test locally, whatever, and then at some point in time they feel that they want to actually push it to master.
A
I mean, for example, the libcloud backend: it took me probably a month or two to write that code, and then it probably took me another month to feel confident in merging it; and when it did get merged, excuse me, it didn't break a single thing. So, you know, we're not just merging stuff and crossing our fingers. So, and...
B
It's his software for running stuff in the sepia lab, and, I mean, I know other people run it, and we want to get away from that, obviously, because it's not really very sustainable. But that's sort of what happened in the development, in terms of how it was being developed while he was participating, and so it's all sort of tuned to be the lowest overhead for the, you know, three people at a time who worked on that. Well...
A
Just, I want to have a list of, like, our different options for keeping it up to date, once we have a good way to test it, because I think we've had a bunch of good ideas here and I don't want to lose track of that. So I want to build on that list. But I do think that the first problem to tackle is: how do we, actually, how do we...?
B
Right, I'm not sure I like that. I mean, there are some projects where they sort of have a branch called unstable that they use the way we use master, but most of them are actually much more willy-nilly about it. Like, master is what happens: someone reviews the PR, says it looks good, and that goes into master, or goes into their unstable branch, and that might or might not pass testing. Whereas we try not to merge anything like that, certainly, and I...
B
The other thing is, I just don't want to run away too far on this, because I think until we have a way of actually testing teuthology changes independent of a very expensive lab... I think making that happen is sort of the first thing: making it easier to work on teuthology. So I don't...
B
I think what you're saying is that it's important to be able to deploy from tags or branches to deal with these cases, and that I'd certainly agree with: it needs to be possible to just sort of deploy whatever version of the code you want. And, I mean, I think it basically... I mean, I think it already is, right? Right, Zack?
D
When you schedule a suite, you can point at which version of teuthology to use, but that's not changing the code of teuthology itself that is running; you are not able to change the code of the workers, or if you are changing that, then you are changing the whole infrastructure at once. So it's not very flexible; it's flexible only for a local instance of a development environment. Well, that's...
A
So they tend to all run the same branch. Now, when you want to run tests... I mean, the worker just executes the job, basically. Like, it's not a huge amount of code; it mostly just kicks off the separate teuthology process, and that uses the branch that you tell it to, which defaults to master.
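A simplified sketch of that dispatch: the worker stays on its own branch and spawns a separate teuthology process from whatever branch the job requested. The paths and job fields here are assumptions, not the exact worker code.

    # Assumed paths/fields: kick off a per-job teuthology process from
    # the branch named in the job, defaulting to master.
    import subprocess

    def run_job(job: dict) -> None:
        branch = job.get("teuthology_branch", "master")
        src = "/home/teuthworker/src/teuthology_" + branch  # per-branch checkout
        subprocess.run(
            [src + "/virtualenv/bin/teuthology", job["config_path"]],
            check=True,
        )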
B
I mean, from where I'm sitting, I think, with the group of people we have, getting the SUSE stuff upstream, or alternatively getting the libcloud backend to be able to deploy independently: we need one of those, and then we can use that system to test teuthology changes going forward. That's what I'd like to see, because then we can do that deployment and be like: okay, yes, it passed, we can merge it, and hopefully it doesn't break sepia. And...
B
More broadly, one thing that's come up, in my head at least: for instance, you were talking about the teuthology workers, and, like, we may eventually want to just reframe teuthology in a way that it doesn't even have workers. Like, I hate that we have workers. It's not because workers are bad, but because the way they work right now, they just race against each other, which makes real scheduling impossible, and makes it impossible to run jobs of more than three nodes in the sepia lab.
B
If you had a different deployment, where all the slots were sized for those jobs, I guess you could do that. But, you know, we need more intelligent scheduling.
A
We do, yeah, we do. I totally agree.
B
Yeah, and, I mean, there's a whole bunch of other stuff: testing systems that are Jenkins-based or whatever, and it may be that we port some functionality in various places, yeah.
A
So, along those lines: there's this whole other category that we haven't talked about a whole lot, which is, what do we want teuthology to do? Like, what new features do we want? And one of them, yeah, of course, a huge one, is intelligent scheduling. And I think something that big requires us to have a more robust way to test and be confident in teuthology, right?
A
Yeah, so, like, the orchestration piece could be split out, potentially, right? We have entire modules that contain just messes of functions; like, the misc module is just a place where people put things they don't know where else to put. That could be broken up and divided into logical sections. There's the part that knows about packages, right; there's the provisioning section, which is a little bit separate from orchestration but intimately tied to it.
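Purely as an illustration of the split being gestured at here (these package names are invented, not an actual plan), the layout could look something like:

    teuthology/
        orchestra/    # remote execution across nodes (exists today)
        provision/    # node provisioning: separate from, but tied to, orchestration
        packaging/    # the part that knows about packages
        scheduling/   # queue and scheduler logic
        misc/         # shrinks as functions move to logical homes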
A
Those are some of the things; that's not an exhaustive list, right, but again, that's just one of the steps that I feel could be part of moving towards having really nice features like intelligent scheduling. I also just want to throw it out there: one of the other things that I've kind of always wanted from teuthology is, I want it to be a little more hands-off.
A
And I think what I'd rather see, even, than the system we have of running X number of workers for a given machine type: what I think the more correct approach would be, if we had a smart scheduler, is that there's no set number of workers that we run. The number of jobs running is only determined by the number of nodes that are available.
A
So we have a thing that sits around and looks at the queue and says: okay, the next job on the queue needs three nodes, and we have three that are free; all right, let's run it. Or, another scenario: the next job on the queue needs 20 nodes, but we only have 10. So let's look at the one after that, the one, you know, just below it in the queue; that one needs five, let's go.
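What's being described is essentially a small backfilling scheduler; here is a toy sketch, with the queue and its fields invented for illustration:

    # Toy backfill scheduler: walk the queue in order and start any job
    # whose node count currently fits; skip the ones that don't.
    from typing import List

    def schedule(queue: List[dict], free_nodes: int) -> List[dict]:
        started = []
        for job in queue:
            if job["nodes_needed"] <= free_nodes:  # e.g. needs 3, have 3: run it
                free_nodes -= job["nodes_needed"]
                started.append(job)
            # e.g. needs 20, have 10: leave it queued, look further down
        return started

    # schedule([{"nodes_needed": 20}, {"nodes_needed": 5}], free_nodes=10)
    # starts the 5-node job while the 20-node job stays queued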
B
So, in reality, I agree with all these things that we've discussed, except one, which is that you can pry SSHing into a machine and using less out of my cold, dead hands. And I think, I think I speak for all the other leads. And, like, some of those logs are huge. I do not want to download them with my browser; I don't want to wget them. I just want to, like, be on the machine and use a screen session and look at seven different ones of them.
A
I don't actually want to take that away as an option at all, but maybe if we can find a way to make that necessary less often... yeah, that's kind of what I'm thinking, yeah.
B
That's fair, yeah.
A
I'm not, I'm not trying to remove things that you think people need every day. You know, okay.
B
Yeah, but, yeah, I mean, so, like, historical background time: way, way back when... I mean, yeah, teuthology was written to be a couple of different things, largely because, you know, like, we needed a testing framework and none of the testing frameworks available worked; like, we were using Autotest for a while.
B
...that is sort of familiar with those ideas, but definitely nothing like that existed then. And so, I mean, like, internally, the module layout is: there's the orchestra thing, which is for, like, dealing with multiple nodes and stuff; and then there was the testing framework; and then there was the Ceph side. And teuthology is definitely 100 percent a Ceph testing environment right now, but that was definitely not the goal when it started. We just never got, you know, the other projects to make it actually stay...
B
...agnostic, yeah. And so, like, like I've alluded to before, we've got... sort of, Red Hat is working on re-envisioning sort of their downstream tests and CI, and as part of that I think we may get some more help soonish. But also I've been, like, thinking a lot about how the different testing frameworks that exist, both in public and in private, interact; and if teuthology were more modular...
B
That would be great, because I think a big thing that's gonna come is that we want to start being able to run tests in different frameworks, or at least in different environments, and move them around. And whether that means we start trying to write the actual tests against a sort of abstract runner interface, or where exactly those divisions come, I'm not sure. But if we can break things apart more and share them in more places, that would definitely, definitely be good. Yeah.
D
One feature that would make it easier to use is actually multi-user support for pulpito itself. I think it's one of the good ideas to support, because today, to schedule suites, you need to ssh to some scheduler machine. But why do we need that? It's just enough to have a REST API that you can use for anything we want, without having actual ssh access.
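As a sketch of the kind of thing being asked for (the endpoint and fields are invented; this is not the actual paddles or pulpito API), scheduling over HTTP instead of ssh could look like:

    # Invented endpoint and fields: accept a suite-scheduling request
    # over HTTP so users do not need ssh access to the scheduler machine.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/suites", methods=["POST"])
    def schedule_suite():
        req = request.get_json()
        # A real service would authenticate the caller and enqueue a
        # teuthology-suite run with the requested parameters.
        return jsonify({"queued": True, "suite": req.get("suite")}), 202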
B
Well, so, I'm definitely not gonna spec out some full pulpito API for you on the fly, but one thing that did come up at Cephalocon was the possibility that, if we wanted to collaborate more with other communities, then pulpito and paddles and shaman are probably the components easiest to replace with something else and share.
B
So I'm not... I mean, I don't know what all is out there, but, like, I think before we spend a lot of time building features into those, we should investigate; because I know that especially, like, some of the upstream distros have started collaborating on things that may be suitable, or maybe totally aren't, but at least at first blush it sounded like they could be. Like Chihuahua mentioned, I think, the openQA maybe, or something, which is with openSUSE and Fedora, which they're using for testing. And then...