From YouTube: 2020-03-03 :: Ceph Testing Meeting
A
I don't think you can take my advice on that; I think you need to ping somebody like Sage. But I can tell you that Nautilus is definitely alive, and I mean alive in the sense that we just released 14.2.8 on Nautilus. Luminous is technically end-of-life, but we still merge some stuff into it, and Mimic, I guess, is also not end-of-life. So I would say all of them are applicable, but you need to double-check with Sage.
A
I did comment on that PR. So essentially, ping me when you guys are ready and I will make runs on other suites. I can probably do limited runs, because I don't think that Python 3 must pass on all jobs, right? You know, if, for example, I run like 10 jobs of rbd and 10 jobs of rgw, it could be enough.
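For reference, teuthology-suite can cap a run this way (it has a `--limit` option for queueing at most N jobs). Conceptually, a limited run is just a sample of the suite's job matrix; a minimal sketch under that assumption (the job names below are made up):

```python
import random

def limited_run(jobs, limit, seed=0):
    """Return at most `limit` jobs sampled from a suite's job matrix."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    if len(jobs) <= limit:
        return list(jobs)
    return sorted(rng.sample(jobs, limit))

# Hypothetical job matrices for two suites
rbd_jobs = [f"rbd/{i:03d}" for i in range(120)]
rgw_jobs = [f"rgw/{i:03d}" for i in range(80)]

subset = limited_run(rbd_jobs, 10) + limited_run(rgw_jobs, 10)
print(len(subset))  # 20 jobs instead of 200
```

The trade-off discussed here is exactly this: a sampled subset gives a quick signal on whether the branch is broadly broken, without paying for the full matrix.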
A
I know, but you know, we at least can kick it off and get some... you know, make some assessment of the stability, right? And then what we can do: we can tweak and fix everything in the nightlies, and then you will do full coverage, as opposed to, for example, merging it and breaking all of them.
We
can
teach
and
do
everything
like
in
in
night
lists
and
then
you
will
do
like
full
coverage
way
as
a
poor
as
opposed
if,
for
example,
if
you
merge
it
and
you
will
break
like
all
of
them.
A
Yes, this is really annoying, but I was hoping that maybe somebody, you or Zach, knows off the top of your head how to do it real quickly. But it sounds like it's more involved, so they want to use it as a project for the summertime and have somebody, you know, work on that full-time; but meanwhile it's not resolved.
B
So the thing is that, from my understanding, you can try and tweak it with the priority, though you should keep in mind that jobs that have already started, and have started to pull machines, are not stopped, because the locking mechanism is separate, and even a lock in the queue does not help if these machines are just waiting, trying to lock nodes. And what we need, I mean, to fix this completely, is to redesign the queue mechanism. So yeah, and I can't.
A
I was... I'm not familiar with, like, you know, exactly how the logic works; I understand it's complex. But I was thinking that, you know, we don't have to redesign the entire queue. I thought that we could add some option, a command-line option, where we say, for example, "collect nodes", right, and when you use this option, you...
A
With that option, instead of being in the regular queue, that particular job can actually lock machines when they become available. So, for example, one node becomes available, you lock it, and you wait till a second one becomes available and you lock it. Whereas right now we wait for a condition to be met: you have to have, like, five nodes available, then you lock them, and they never all become available at once. So if you collect them, then it may take, you know, it may take a while, but at least you are sure that at some point it'll run.
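The "collect nodes" idea can be sketched as a loop that grabs free nodes one at a time and holds them until the target count is reached, instead of waiting for all of them to be free simultaneously. This is a toy model, not teuthology's actual code; `free_nodes` stands in for whatever the lock server would report over time:

```python
def collect_nodes(free_nodes, wanted):
    """Lock free nodes as they appear, holding them until `wanted` are held.

    `free_nodes` is an iterable of batches, each batch being the nodes
    that became free at that point in time (a stand-in for polling the
    lock server).  Returns the held set once it reaches `wanted`, or
    None if the supply runs out.
    """
    held = []
    for batch in free_nodes:
        for node in batch:
            held.append(node)          # lock immediately, do not release
            if len(held) == wanted:
                return held            # enough nodes collected: run the job
    return None                        # still waiting

# Nodes free up one or two at a time; five are never free at once.
batches = [["smithi01"], ["smithi02", "smithi03"], ["smithi04"], ["smithi05"]]
print(collect_nodes(batches, 5))
```

The cost, as noted above, is that held nodes sit idle while the job waits for the rest; the benefit is a guarantee of eventual progress.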
B
The thing is that the locking algorithm right now is not complicated, it's very simple, but it's stupid, because each job is trying to lock machines on its own, without any orchestration from a central authority. So they just look whether there are some free nodes; if it's not enough, they just wait and try again, over and over. And since there is actually no priority in locking, the priority only determines the order in which jobs get started.
B
So if some other jobs have already started and are running, and they are in the queue, they will always be more prioritized if they have fewer machines to lock, because they will always get these machines and they will always go ahead; but the job which wants more nodes, when there are not enough resources, will always wait.
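The starvation described here follows directly from all-or-nothing locking: a job proceeds only when its whole node count is free at one instant, so small jobs keep winning. A toy illustration (numbers invented for the example):

```python
def can_lock(free, wanted):
    """All-or-nothing locking: a job proceeds only if every node it
    needs is free at the same instant."""
    return free >= wanted

# Toy pool of 6 nodes where a steady stream of 2-node jobs keeps 4 busy,
# so at most 2 nodes are ever free at any instant.
free_at_any_instant = 2

small_job = can_lock(free_at_any_instant, 2)  # True: small jobs keep running
big_job = can_lock(free_at_any_instant, 5)    # False: the 5-node job starves
print(small_job, big_job)  # True False
```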
D
You're right, so I didn't know if this was even the right forum for this. I'm looking to leverage teuthology to execute tests as part of a larger product stack. From what I'm hearing, this is more of a development framework for upstream purposes. This question is about teuthology specifically: are there cases of people just leveraging it for use with an existing dev cluster?
A
The ceph-ci, for a second, you know: it exists mostly for testing purposes. Essentially, we separated ceph-ci from ceph, because ceph, you know, was getting really messy with the amount of branches to be checked in and built, and it was not really cool. So we decided to keep ceph for, kind of, release and named branches; and when I say named branches...
D
Right, yeah, and that's fine. So basically my team... they came to us, and there's an upshift stack, which is, let's keep it simple, basically Ceph, OpenStack, and OpenShift; it's the product stack using those. And they came to my team and said: can you test that? We're multi-product; can you test those all together? And my project manager said:
D
Ok
well,
Steph
was
a
part
of
this
cluster
use
two
theology
to
do
all
of
your
sanity
and
smoke
tests
on
this
development
that,
on
this
deployed
infrastructure
that
they're
gonna,
give
to
me
basically
someone's
gonna
hand
me
a
safe
cluster,
that's
maintained
by
our
development
team
and
then
I
was
told
my
project
management
take
thorgy
and
run
all
your
tests
against
it.
That's
what
brought
me
here
today
and
know
that,
based
on
what
you've
told
me,
that
model
doesn't
seem
to
be
too
efficient
or
I'm.
A
No, I'm not trying to make the claim that whatever model you describe is not workable, but in my world, this is not how I think in terms of testing. In terms of testing, I'm not thinking in terms of "oh, you know, I have a running cluster, why don't I take a test and run it against that?" OK, we do have another approach, and that approach is that, for, like...
A
Basically, our lab is an internal lab, right, which is the community lab, and it's used by many people. And, you know, when tests run and all that, the log files are stored somewhere, right, on some /a mount. So all of that actually runs on top of a Ceph cluster, and when we do major releases, what we do is we actually upgrade that cluster. So essentially we use, like, you know, an eat-your-own-dog-food approach, and so we actually run on this cluster.
A
You know, listen: teuthology is complex and very capable; I can't even call it a product, it's like infrastructure, so you can do a lot of things with teuthology. What we just touched on with you is really, like, you know, the tip of the iceberg. So, for example, when teuthology runs a typical scenario, you say: I want, like, you know, to have a job, and that job defines that it runs on, say, three nodes. So you tell it, you know: this is your job.
A
You know, this is your test, and when you run it, teuthology can go ahead and lock nodes for you, whatever type you define, and, like, automatically it will install all the stuff, whatever you want, run the jobs, then exit and collect logs. This is, like, the scenario that we usually use. However, you don't have to use that scenario: you can instruct teuthology not to lock nodes. You can say...
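The typical scenario described here is driven by a job description in YAML: entries under `roles` declare how many nodes the job needs and what runs on each, and `tasks` declare the install/run steps. An illustrative fragment; the task names and workunit path follow upstream suite conventions and may differ in your setup:

```yaml
# Three entries under `roles` means three nodes to lock.
roles:
- [mon.a, mgr.x, osd.0, osd.1, client.0]
- [mon.b, osd.2, osd.3]
- [mon.c, osd.4, osd.5]
tasks:
- install:          # install Ceph packages on the locked nodes
- ceph:             # bring up the cluster
- workunit:         # run a test script on a client
    clients:
      client.0: [rbd/test_librbd.sh]
```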
A
You know, to teuthology... because teuthology needs to be able to log in remotely, like with sudo, to those machines to perform some operations, and when you do that, it'll run tests. So I don't think that there are any, like... you know, any limitations why teuthology wouldn't run on an existing cluster.
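Running against pre-existing machines is typically done by handing teuthology a `targets` stanza instead of letting it lock nodes: each entry maps a user@host to that host's SSH key, and the user needs passwordless sudo. A hedged sketch; the hostnames and keys below are placeholders:

```yaml
# Job fragment: hand teuthology pre-existing machines instead of locking.
targets:
  ubuntu@node1.example.com: ssh-ed25519 AAAA...  # placeholder host key
  ubuntu@node2.example.com: ssh-ed25519 AAAA...  # placeholder host key
# roles and tasks follow as in any other job
```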
B
But theoretically, right, we can add the nodes to some pool and add the keys on these nodes, and we can probably try to ask teuthology not to provision these nodes while locking them. And anyway, they have to be locked, I think, just because those nodes should be registered in the paddles database. It will try to run some tests on them, but I'm not sure that the code of the tests themselves is ready to do this, because, for example, until recently we had cleanup code that was trying to remove the, you know, the cluster afterwards. But anyway.
B
This is kind of black magic, and you need to know exactly which suite will not do something ugly on your cluster. And of course, it's very, I think, frustrating to run something unattended under teuthology control on some production cluster. But theoretically it's possible; I just never heard that people are using it, yeah.
D
Because, basically, you know, if we decide not to go with that, you know, then we're gonna be developing a library suite of tests ourselves, and that is an immense amount of work. So if the amount of work to get teuthology to fit our needs is less than, you know, maintaining tests elsewhere, then that's a great thing for us, I suggest.