From YouTube: 2019-05-29 :: Ceph Testing meeting
C
I don't have anything to add to this, except that I'm writing some tests and running into the problem we have where — remember — we have all 50 processes trying to lock the right number of nodes. I'm doing a test that really needs three nodes, because I'm trying to use them to simulate data centers and disconnect them, and this I need to figure out — I think I can, but I haven't. I can squash them down into two nodes, which will be awkward even if it does work.
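A squashed layout like the one described could be sketched as a teuthology roles stanza — purely a hypothetical illustration; the role names and counts are assumptions, not anything stated in the meeting:

```yaml
# Hypothetical sketch: a test that logically wants three "data centers"
# squashed onto two nodes. Each top-level list entry is one machine.
roles:
- [mon.a, osd.0, osd.1]          # node 1: simulated data center A
- [mon.b, mon.c, osd.2, osd.3]   # node 2: data centers B and C share it
```

With three nodes, each simulated data center would get its own entry, which makes the disconnect-a-data-center step much less awkward.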
C
We have to [unclear], and I think we have to start running the encoding corpus on every point release, like before — every point release — and do the comparison in both directions. That's really the only way I can think of. Besides just manually testing and hoping we catch stuff, that's the only programmatic tool we have that I can think of.
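The "comparison in both directions" could be sketched roughly like this — a minimal, self-contained illustration with stub data, not the actual ceph-dencoder/corpus tooling; the `decode_with` helper and the version-byte convention are invented for the example:

```python
# Hypothetical sketch of a bidirectional encoding-compatibility check.
# corpus[release] maps object-type name -> an encoded blob (stub bytes here;
# the real corpus is a directory tree of archived encodings per release).
corpus = {
    "mimic":    {"OSDMap": b"\x01v1", "MonMap": b"\x01v1"},
    "nautilus": {"OSDMap": b"\x02v2", "MonMap": b"\x01v1"},
}

def decode_with(release, type_name, blob):
    """Stub decoder: pretend a release can decode any blob whose leading
    version byte is <= the newest version it produces itself."""
    own = corpus[release].get(type_name)
    return own is not None and blob[0] <= own[0]

def compare_both_directions(old, new):
    """Check old->new and new->old decodability for every shared type."""
    failures = []
    shared = corpus[old].keys() & corpus[new].keys()
    for t in sorted(shared):
        if not decode_with(new, t, corpus[old][t]):   # forward: new decodes old
            failures.append((old, new, t))
        if not decode_with(old, t, corpus[new][t]):   # backward: old decodes new
            failures.append((new, old, t))
    return failures

print(compare_both_directions("mimic", "nautilus"))
```

The point of running it both ways is exactly the failure this stub produces: the newer release decodes everything the old one wrote, but not vice versa.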
C
Yeah, so I haven't heard about this since the last time — I guess [unclear] did — but whenever it's come up in the past, it's usually been product people or something. Where they're not necessarily worried about going across major versions — like, not once the upgrade is complete — but about when we do point releases, in case something goes wrong there.
C
They want to be able to do a rollback, or maybe even a downgrade, but they don't expect it to work across, like, the feature flags that we make admins run — saying, like, enable... what was it — setting that to the newest version or something. They don't expect it to work across that.
C
But if we were testing it, I think it would have to be like running a bunch of upgrades and then downgrading them in the lab, or running workloads — plus as much as people doing it manually. The second thing is the big corpus that we already have, and that we'd sort of lost track of for a while, but somebody just redid it for Nautilus, I guess — we ran it again, and we have instructions. We actually have instructions in the docs now.
C
But that still leaves holes for things like — I don't think we've covered the manager config stuff, and then there's the problem we've had historically, which maybe hasn't been around so long, which is like RGW: all the individual objects are encoded the same, but the omap — the omaps themselves — look different. We likely have new keys on disk or something, or they're arranged in a different way, and we don't have good tests covering that, except for actually running the stuff and just hoping that whatever changes show up.
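A test covering that key-layout concern might look something like this sketch — `list_omap_keys`, the per-release layouts, and the `reshard_status` key are all stand-ins for the example, not real on-disk details:

```python
# Hypothetical sketch: compare the omap key layout two releases produce
# for the same logical object. list_omap_keys is a stand-in for whatever
# would actually dump keys from a store written by each release.
def list_omap_keys(release, obj):
    # Stub data: in reality this would read the store written by `release`.
    layouts = {
        ("mimic", "bucket.instance"):    {"idx", "ver"},
        ("nautilus", "bucket.instance"): {"idx", "ver", "reshard_status"},
    }
    return layouts[(release, obj)]

def omap_layout_diff(old, new, obj):
    """Return (keys only the old release writes, keys only the new one writes)."""
    a, b = list_omap_keys(old, obj), list_omap_keys(new, obj)
    return sorted(a - b), sorted(b - a)

print(omap_layout_diff("mimic", "nautilus", "bucket.instance"))
# → ([], ['reshard_status'])
```

A non-empty diff in either direction is exactly the "new keys on disk, or arranged differently" case that currently only shows up by running the stuff and hoping.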
D
Basically, the setup requires some things — the name server, and it's run separately, it can run separately — but the target machines... this is the downburst thing, right? Yeah, I've seen this downburst stuff and I'm not sure — I haven't tested it. Is it outdated, or did you just get rid of it?
C
It depends what services are required. It could be that it just needed a couple of things that are really easy to turn on, in which case, yes, it's probably worth it, 'cause then people can run tests locally. But if it requires like five different services that are required in our configuration, probably not. So we'd have to look — if you're interested, I would kind of like that. But as first designed, I'm pretty sure it's bit-rotted — I think it got removed, because there were pieces that weren't working anymore.
C
Basically, they run their own teuthology instance, but they have a script — [unclear], not quite a script, but they have a little thing. I might be able to find it, let's see... yes, that's where they manage to actually just start running teuthology, and they've got it set up to run in DigitalOcean.
D
Yeah, we did not use Terraform itself much, since — I mean, everything in Terraform is just basic, and using Terraform it was easier to allocate the resources. But if we better integrate the deploy tool with teuthology, it will not be using Terraform anyway, because teuthology can use OpenStack directly, and our Leap cloud as well.