From YouTube: Ceph Testing Meeting 2018-09-26
C: I don't know if people have other things, but I thought maybe we'd actually just go down the list of PRs. I tried to look at it a little bit in the Etherpad, but then I lost track, so... oh well. I guess I didn't get all the way through it, and I'm not sure which of these have already been looked at and which ones haven't. So, did you guys?
C: You sound surprised, which is confusing me, but I think this is the thing: the cleanup is actually part of the set of tasks, or something, and what you're hitting after the 12 hours is just that your worker, or something, says: oh, it's actually dead. And so the cleanup doesn't happen.
E: So the cleanup, which is what puts that data together, is part of various tasks, right? I'm sure some of it is in the tasks themselves, and some might be part of the internal set of tasks. But when a job times out, what's killing the task is the worker, and it's just, I believe, sending SIGTERM to the process.
E: Let's send it, I don't know, SIGUSR2 or something like that, and then give it a certain amount of time to do its business, and then, if it's still alive, we send it a SIGTERM. Because I think there are definitely going to be cases where the reason a job times out is that it's deadlocked in some capacity, so the cleanup is just not going to happen regardless.
E: No, the only other thing I could think of being helpful here, and I think this would be much, much more work, is to find a way to either split the cleanup stuff out into something that could be run in a different process, or make it somehow... you know, reconstruct all the data it needs to figure out what to clean up and collect, and be able to run that out of process. But I think that would be a lot more difficult.
C: Yeah, I mean, it'll be like... it's waiting for a write to finish, and the write's not happening because of an OSD; like, it took down the only OSD that's alive, or that has the data it needs, or something. So it might be that one of the subtasks is broken. But if we just say, hey... I guess I don't know exactly what we...
E: So, let's see: in a world where all tasks were subclasses of the Task class, this would possibly be easier, sort of. Because what you're proposing is basically that when we notice the job is taking too long, the worker does a thing that causes just the current task to exit. Is that what you mean?
C: Well, I guess there are a couple of different directories now, but I don't know, something like that.
E: So there is a directory on each remote that the remotes (you know, usually via teuthology execution) can put things in, and then teuthology later gathers that directory and names it for the actual host it came from. So I think multiple tasks can put things in there; I don't know how often that happens, so I'm trying to look into this while we're talking.
E: It seems as though, if we decided to do something like use a signal that indicated to teuthology: all right, whatever you think you're doing right now, this job is failing, so begin the unwind process, you know, as if it had hit an unhandled exception at that same point... would that maybe be good enough here? It seems like it might be. Yeah.
C: Anyway, you're right: that's a thing that is annoying and probably should get fixed, and probably there's even a ticket for it somewhere in the tracker. I don't know if you've seen any version of the talk I've got about teuthology, but this is definitely just one of the sharp edges we've gotten used to and forgotten about. That is bad.
E: That's obviously terrible. I wonder... I wonder how many different bugs you see here, right? Because, all right, the fact that we're not collecting data when a timeout happens is something that, you know, it seems like we want to fix. But we shouldn't forget about the reason why we care, right? Whatever is happening in these tests is causing the job to never finish, and I would consider each and every time that happens to be a bug. Yeah.
C: But, well, yeah, I mean, like he's saying, the hard case was like refactoring the messenger, which means that the simplest thing that talks to that cluster could just hang forever, because, you know, the message-passing layer was broken. So, right, grabbing a `ceph -s` might just block, and that's something that can happen when you change the way the Ceph code works.
C: And what I'm saying is that, I mean, if you want to guarantee the tests finish, then they need to have timeouts running on them, and you can have them running on every task individually, and that could just be part of our pattern. But we don't do that yet, and there's not an easy way to do it, like a wrapped way for people to add to a task. So we can't do that right now, and...
B: From a philosophical point of view, you know, the purpose of teuthology is to expose bugs, right? So if the whole thing times out, obviously there's a bug somewhere, as Zack said, but then we go and look: we want to debug it by examining the logs, and we can't, because they're not there, right?
E: And just so you guys know, the reason I'm harping on about it being a bug that the tests don't finish is that there are other tests that want to run, always, 100% of the time, right? So if we're spending 12 hours, and like 11 of those hours might be just sitting around doing nothing, then we've lost testing capacity, right? That's a big reason why I have this opinion. But yeah, the first priority should be: get the logs regardless. Sure, I agree with that.
E: Let's see... certainly it's obvious why you would need sudo to kill a job running as someone else. Oh wait, yeah: if it's a scheduled job, then the processes are not running as you, even if you scheduled it, right? They're running under a separate user account that is used for all of the scheduled jobs, and that's why you need sudo. I can't tell you why, as he's saying, when you try to SSH to one of the machines it asks you for a password; that doesn't make sense to me. No, no, but the...
E: I would think that that's how you'd want to do it, yeah. But, okay, a really nice side effect of doing work like that would be that, for example, if we had auth in pulpito, then all of a sudden we could use pulpito to kill jobs, and maybe even move closer to the world where we can schedule with a web UI. Sorry, you were saying something; what was that?
E: I think there would probably have to be two interfaces here, because paddles doesn't run with any sort of privilege; it's not even necessarily on the same machine. But there could be a thin interface that a teuthology supervisor-type process ran, where, you know, a human did all of their interfacing with paddles or pulpito or something, and then, if paddles had instructions to pass over to the supervisor, it could just send a message saying: do this.
A: We can refactor the workers, actually. How about this: the worker shouldn't be executed as we do now. The worker should be a single process which will communicate with paddles and get its configuration from paddles, like the number of jobs it can run simultaneously, and the worker can actually manipulate the task that it's executing and perform the kills. So it doesn't go through different, you know, scopes of access, and the responsibility of paddles in that interaction is saying: hey, worker, I have a...
A: So I'm just saying that it probably makes sense to re-implement the worker, or just add a new variant of the worker, that will be controlling the processes and running subprocesses. And it could be executed in two modes: server mode, or just the client mode that is currently running right now. Then, if it's in server mode, it will listen for commands, or poll for commands, from paddles. Yeah, right.
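A rough sketch of what that server-mode polling loop might look like; `fetch` stands in for a hypothetical paddles commands endpoint (no such endpoint exists today), and `handle` stands in for the worker acting on one command:

```python
import time


def poll_commands(fetch, handle, interval: float = 5.0, max_polls=None):
    """Hypothetical server-mode loop for a refactored worker.

    `fetch()` returns a list of command dicts from paddles (e.g.
    {"op": "kill", "job": 123}); `handle(command)` executes one of them.
    `max_polls` bounds the loop for testing; None means run forever.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        for command in fetch():
            handle(command)  # e.g. kill a job, adjust concurrency
        polls += 1
        time.sleep(interval)
```

The design point from the discussion is that only this one process needs the privileged scope of access: paddles just records intent, and the worker polls and acts on it.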
C: Crap, you're right. Okay, yeah, actually I might have to change that.
C: All right, let's go through these, just because it's been a while. So I'm just looking at the pull-request page. There's a brand-new one about HAProxy keepalive, and I had no idea what that is, and it doesn't pass the tests, so I'm sure Vasu, or whoever opened it, will ping one of us later. Then this one: we've got Nathan's before-teardown change, and you just said it has passed downstream.
B: I have... I made a dummy test, and all it does is it has tasks and then dump_ctx, and then I can... it's an extremely easy way to see, you know, what the contents of the ctx is. But it's just an...
B: ...as a config, then it's empty, but if you put something in, then you can see it. It's for people who need to learn by doing instead of learning by reading code. Okay, well, what is the ctx, you know? Oh well, just do dump_ctx and boom, you have it dumped in your log. And then: what's the config? All the config is in there, and you can add things in your work-in-progress branch, right, and make your own test, and you can see how it works.
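A sketch of what a dump_ctx-style task could look like: teuthology tasks take a `(ctx, config)` pair, but this exact body is an illustration of the idea, not the real task:

```python
import logging
import pprint

log = logging.getLogger(__name__)


def task(ctx, config):
    """Hypothetical dump_ctx-style task: log the run context and this
    task's config so newcomers can inspect exactly what they contain."""
    log.info("ctx:\n%s", pprint.pformat(vars(ctx)))
    log.info("config:\n%s", pprint.pformat(config))
```

Dropped into a job's task list, this would print the whole context into the job log, which is the "learn by doing" shortcut being described.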
B: Also, I wanted to make a dummy... actually, there is a dummy teuthology suite called dummy, and it just has... anyway. First of all, if you're working on teuthology, and if you have an OpenStack environment and you're running teuthology repeatedly, it's nice to have a suite that doesn't actually do anything, and then this could...
A: There are several issues that could be the reason for this. One of them is that I am definitely not an administrator of the group, so it's not scheduled now... And on the other side, there can be some more issues in the configuration files; we probably have an outdated version of the plugin, because when we request it, someone tells Jenkins the trigger phrase, and the default regular expression allows "retest" or "jenkins test" to retrigger.