From YouTube: Troubleshooting and Debugging in the rook-ceph cluster
Description
Presented by: Deepika Upadhyay, Gaurav Sitlani, and Subham K Rai
Troubleshooting and Debugging in the rook-ceph cluster
This session gives an overview of troubleshooting a Rook-Ceph cluster and discusses some troubleshooting scenarios. This is followed by an introduction to and overview of the kubectl-rook-ceph krew plugin, and how it makes managing containers easier from a troubleshooting perspective. We'll also discuss the issues we're planning to solve with it, followed by a short roadmap of the Rook project. Finally, we look forward to discussing and gathering feedback from users about common and challenging problems they face while troubleshooting their clusters.
Deepika: Then we have my co-presenter Shubham, who works as a software engineer at Red Hat and is one of the core contributors to the Rook project. First of all, we'd like to highlight the kinds of scenarios we get to experience in a Rook-Ceph cluster, and then we can highlight how we can go about debugging them.
Deepika: One of the essential kinds of failure we experience in a Rook-Ceph cluster is the monitors losing quorum, which becomes a critical condition, because you can't perform any Ceph operations and you don't know what to do.
Deepika: You can't access the cluster, and the scenario becomes a "what do we do now" kind of state. So in a Rook-Ceph cluster we try to identify the steps and commands we can use to shed light on what is happening with the monitors. First of all, we check whether the monitor is running, whether there is a monitor port failure, whether it is crashing, and then we look into the Rook operator logs.
Deepika: Are there any timeout errors or failures to reconcile? If we're able to get some hint from there, we go down that path.
Deepika: But if we are not able to resolve the failures just by looking into the logs and doing that kind of debugging at the Kubernetes layer, we go inside the toolbox pod and check whether we are able to run any Ceph commands. Then we check if there is one monitor still up. The situation I am trying to describe is when we have only one monitor available and two monitors are down.
Deepika: In that case we identify which mon is the healthy one, and then we check the mon status: is it still healthy? If even one single healthy monitor is available, we now have a feature in the Rook project for this. In Kubernetes we have the krew plugin system, and we now provide a rook-ceph kubectl plugin to debug some of these basic scenarios in a Rook cluster, so we have automations available there.
Deepika: So in that case, where we have lost two monitors out of three and can't run any Ceph commands, we have the restore-quorum command available, and we can use it to bring the cluster, or the monitors, back to a healthy state. I'll be covering that demo after a brief introduction to the krew plugin, which I think Shubham will be taking over. Shubham, do you want to...
Shubham: ...start by explaining what krew is and how Rook is using krew to solve some of the common and critical issues that we face in a Rook-Ceph cluster. Yeah, when we have a single mon available and all the other mons are down, in these situations we cannot run any Ceph operations, and that is also a place where krew helps us.
Shubham: Krew is basically a kubectl-based plugin tool; a kubectl plugin helps our users troubleshoot and manage their cluster. I have attached the link, so if someone is interested they can visit it and read more about krew. To install krew we need to run a couple of commands, which I could not fit into the slide, so I have added a reference...
Shubham: ...a link to where we can install krew from, along with the steps; the docs are pretty good, and those are simple commands, so it will be easy to install. So we need krew: krew in total has around 170+ plugins, and rook-ceph is one of them. To install our rook-ceph krew plugin, we first need to install krew itself, and once krew is installed, we need to verify whether it is installed or not.
Shubham: We can verify that with kubectl krew list, and once krew is installed, we can install our rook-ceph plugin with kubectl krew install and then the name of the plugin. I think I have the link of the project there, where you can see what the rook-ceph krew plugin supports and all the other details. Yep, over to you, Deepika, to pick up for the demo.
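For reference, the install-and-verify flow being described looks roughly like this (a sketch assuming the plugin is published as rook-ceph on the krew index; check the kubectl-rook-ceph docs for the exact, current commands):

    # install krew itself first: https://krew.sigs.k8s.io/docs/user-guide/setup/install/
    # verify krew works
    $ kubectl krew list
    # install the Rook plugin from the krew index
    $ kubectl krew install rook-ceph
    # confirm the plugin responds
    $ kubectl rook-ceph --help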
Deepika: Okay, so I prepared a short demo to show the condition I explained earlier: two out of three monitors down. Let's see what we have. Right now we have three monitors available, and if we go to the toolbox pod we can see the status. So now we have only one monitor available; the failure could be anything, but this is just a simulation. We try to run the ceph status command to see if we have any access, whether we can run commands: nope. So in that situation, what we try to do is...
Deepika: ...use krew. How we can use krew in this situation is: basically it will create a one-monitor quorum, after which it will delete all the unhealthy monitors and redeploy the operator. That is, after creating a mon debug pod and a single-mon quorum, it gets a healthy cluster back from there and then scales back up to three. So we know that mon b is healthy, and now we'll use the krew plugin here to debug and quickly get back to a healthy state.
Deepika: Right now we could also check the Ceph status from here as well, but to debug we just need to... let me check the command there. Yeah: mons restore-quorum with the healthy mon, b. So it's asking us: do we really want to go ahead with restoring quorum? Here you can see that first it tries to start a debug pod, and...
Deepika: ...after scaling the mon b deployment, we now check whether the single-mon quorum is achieved. Now it's asking: do you want to expand to the full mon quorum again? So we hit enter here, and after some time we can see the monitors being restored to a three-mon quorum.
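The flow just demonstrated corresponds to a single plugin command, roughly as follows (a sketch; mon b and the rook-ceph namespace are the values from this demo, and the interactive prompts may differ by plugin version):

    # from the toolbox: ceph status hangs because quorum is lost
    $ ceph status
    # restore quorum from the one remaining healthy mon (here: b)
    $ kubectl rook-ceph -n rook-ceph mons restore-quorum b
    # the plugin prompts before removing the unhealthy mons, and again
    # before scaling back out to three mons; confirm each step
    $ ceph quorum_status    # verify quorum afterwards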
Deepika: So this is one of the ways in which the krew plugin makes our life easier in such situations. I think Shubham will be highlighting more scenarios that the krew plugin can help with debugging, and we'll also be highlighting some of the future plans; any feedback on those, we'd be eager to hear at the end of the talk. So now we have a three-mon quorum again, and...
Gaurav: Just one thing that I would like to add as well: as part of this, we only address the situation where two of the monitors are down. The case where all three are down is part of the future work for the krew plugin, which Shubham will also share: in case all three mons are down, restoring them from the OSDs would be possible in the future as part of the krew plugin.
Shubham: Let's move to the next slide, and once I have presenter rights I will take over. Okay. These are all the commands that the krew plugin supports. One of them, as Deepika said, is recovering mon quorum. Then we have the debug mode for OSDs and mons, which basically lets us use the ceph-bluestore-tool or the ceph-objectstore-tool that Ceph itself ships. Also, without going inside the toolbox pod or the operator pod...
Shubham: ...we can run the ceph commands, and rbd too, and also update the rook-ceph-operator-config ConfigMap, which has a few settings that are critical to the cluster. And we can now also remove bad OSDs by using a single command, without going through all the long manual steps. So yeah, next slide.
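The command families being listed map roughly to the following invocations (a sketch based on the kubectl-rook-ceph README; subcommand names and flags may vary between plugin versions):

    $ kubectl rook-ceph mons restore-quorum <good-mon-id>   # recover mon quorum
    $ kubectl rook-ceph ceph status                         # run any ceph command
    $ kubectl rook-ceph rbd ls replicapool                  # run any rbd command
    $ kubectl rook-ceph operator set ROOK_LOG_LEVEL DEBUG   # update the operator ConfigMap
    $ kubectl rook-ceph debug start rook-ceph-osd-0         # debug mode for an OSD deployment
    $ kubectl rook-ceph rook purge-osd 0 --force            # remove a bad OSD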
Shubham: As mentioned, and as shown in the demo, this is how we restore the quorum. The syntax of the command is: first we use kubectl, then the name of the plugin, which is rook-ceph, and then we need to pass the namespace. By default the namespace is rook-ceph; otherwise we can always pass our custom namespace. We also support the scenario where the Rook operator is in one namespace and the Ceph cluster is in a different namespace; in that situation we need to pass the two different namespaces, and that case is supported for the toolbox as well. Our next slide.
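A sketch of the namespace handling just described (flag spellings are as I recall from the plugin's help output; verify with kubectl rook-ceph --help):

    # default: operator and cluster both in the rook-ceph namespace
    $ kubectl rook-ceph ceph status
    # custom cluster namespace
    $ kubectl rook-ceph -n my-cluster-ns ceph status
    # operator and cluster in different namespaces
    $ kubectl rook-ceph --operator-namespace my-operator-ns -n my-cluster-ns ceph status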
Shubham: And this is one of the places where krew can help us: the debug mode. We want to use the debug mode in scenarios like mon quorum recovery, and also when mons or OSDs are crashing and we want to run things like compaction; we need the debug pods to run all those commands which we cannot run inside the mon pods themselves. So in those places krew really comes in handy and helps us with that. Next slide.
Shubham: ...to pick up. So next I will quickly show you the krew commands and how to get started. First, I'm running this command to verify that, okay, we have rook-ceph, and the version is 0.4.0. We can pass both namespaces, the operator namespace and the Ceph cluster namespace; we have to use -n for the cluster namespace, or --operator-namespace for the operator namespace. And these are the commands that krew currently supports: running basic ceph commands, rbd, and just running the help. Sometimes you want to see the health of the cluster.
Shubham: So it says that, okay, this is a warning situation, because at least three mons should be running on different nodes. It also prints the mons which are running, and it shows the Ceph status, whether it is HEALTH_OK, HEALTH_WARN, or HEALTH_ERR. It also checks that at least three OSDs should be running on three different nodes, but currently I only have two. So, just to verify my output...
Shubham: ...I will show you on the right side that, okay, I just have two OSDs running. Let me show you: okay, yeah, you can see, right, I have only two OSDs running and a single mon, so we can say that this output is right. We also print all the pods, whether they are running or not, and in the health command we also print the placement group status.
Shubham: It also helps with that, and the last of the checks is that at least one manager should be running; so yeah, we have one manager running. So this is one command that will give you the overall health status of your Rook-Ceph cluster. We can also get the Rook version, if you want, without going inside the logs and checking what the version is, and we can also get the CephCluster status: we just need to run the status command.
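A sketch of the inspection commands just walked through (subcommand names as documented in the kubectl-rook-ceph README; output details will differ by version):

    $ kubectl rook-ceph health        # mon/OSD spread across nodes, pod states, PG status, mgr check
    $ kubectl rook-ceph rook version  # Rook version without digging through logs
    $ kubectl rook-ceph rook status   # status of the CephCluster CR (capacity, usage, conditions)
    $ kubectl rook-ceph ceph status   # plain ceph status via the plugin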
Shubham: It will print the status of the CephCluster CR that we have in Rook: the total bytes used, the capacity, the status, the heartbeats, and everything. We can also use the debug mode, as I mentioned. For that we have to run this command. As you can see, we first need to pass the main command, which is debug, and since we first want to start, then we need to pass the name of the deployment.
Shubham: I will just go and use osd-0. What it will do is first bring the main OSD deployment down, and then it will bring up the debug pod. As you can see, the main OSD is gone and we have the debug pod up and running. We can go inside this pod and run the commands that we want to run in debug mode. Let's verify the deployments.
Shubham: We should be having one... yeah, as you can see, the main OSD pod, osd-0, is down and we have the debug pod. Once we are done with the use of the debug pod, we just need to run stop, and we are also required to pass the name of the pod (sorry, the name of the deployment), so we have this name. So yes, this is stop, and we can see that, okay, our main pod is coming up and the debug pod is in the terminating state. So krew can help us in a lot of scenarios where the steps are lengthy and error-prone, so yeah, these tools can really help us. That's all for the demo.
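The debug-mode lifecycle from the demo, as commands (a sketch; rook-ceph-osd-0 is the deployment name used in this demo):

    # scale down the real OSD and start a debug pod in its place
    $ kubectl rook-ceph debug start rook-ceph-osd-0
    # exec into the debug pod and run e.g. ceph-objectstore-tool or ceph-bluestore-tool
    # ... when finished:
    $ kubectl rook-ceph debug stop rook-ceph-osd-0
    # the original OSD pod comes back up; the debug pod terminates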
Deepika: Okay, cool, thanks Shubham. As he was saying, krew is one of the things that makes our lives easier in a Kubernetes-based environment. There are also some failure situations that we have experienced so far that we can share, which might be helpful for debugging if somebody encounters them. One such scenario, which we get very often when there are network failures or disruptions in the cluster for whatever reason, is how to go about debugging a volume reporting a "still in use" error.
Deepika: Firstly, we try to see what is going on in the pod. When we describe the pod and check the events, we find that, okay, the volume is still in use. So the easy way is to try to identify where the RBD image is being used and whether it really is still in use. But if we find that, okay, the volume is not actually being used...
Deepika: ...we run ceph osd blocklist on the image's watcher, and once the watcher is blocklisted, the pod recovers in a couple of minutes. After that... Gaurav, I think you can highlight a bit more on this one and the next one, so I'll hand it over to you there.
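A sketch of that investigation (the pool and image names here are illustrative; the toolbox pod or the plugin's ceph/rbd passthrough can run these):

    # find the watcher that is holding the RBD image
    $ rbd status replicapool/csi-vol-0001
    Watchers:
            watcher=10.130.2.1:0/2076971174 client.12345 cookie=...
    # if no legitimate user remains, blocklist the stale watcher
    $ ceph osd blocklist add 10.130.2.1:0/2076971174
    # the volume can then be mounted again within a few minutes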
Gaurav: There is one more common scenario, which is a "volume already exists" or "volume with ID already exists" error. It is a similar situation, and that issue is also handled similarly, by restarting one of the CSI pods, that is, the Ceph CSI plugin pods for CephFS or RBD, and then you can just retry mounting it. That way, if the volume is already being used, it will be released and can be reused by the client again.
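A sketch of that restart, assuming the default labels Rook puts on the CSI plugin pods (the label values are an assumption; check your deployment):

    # restart the RBD CSI plugin pods (use app=csi-cephfsplugin for CephFS)
    $ kubectl -n rook-ceph delete pod -l app=csi-rbdplugin
    # then retry the mount on the affected pod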
Gaurav: Yeah, so some of the situations we encounter are where we notice that there is high CPU utilization by a Ceph component. Sometimes we realize it's the ceph-mon process, or any of the other processes, RGW, anything that is showing high CPU utilization, and from the top command...
Gaurav: ...we see that it's really consuming a lot of CPU. So how could we troubleshoot such situations? One of the go-to things that we could do is first describe the pods and check the events and the logs for the respective component's pods. This is good for knowing what is causing trouble, but to investigate further we can gather some performance profiling information and GDB traces at runtime.
Gaurav: We can collect some command outputs, and in a Rook-Ceph cluster you can also use GDB to live-capture backtraces or take a core dump of a process; that way we will understand, at the process and thread level, what could be causing the disruption. Another way: one of the Ceph performance engineers, Mark Nelson, has written a wallclock profiler, which beautifully dumps the profile of the Ceph process that we want to investigate further.
Gaurav: And we can use this; we have recently documented all of these steps in the Rook documentation as well. If someone wants to investigate further, this is the information that could be collected and shared in the Rook community channels, or anyone can open an issue. So those could be ways by which you can do the profiling.
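A minimal sketch of capturing that data inside the component's pod (assuming gdb and perf are available in the container and ptrace is permitted; the PID is illustrative):

    # find the PID of the busy ceph process inside the pod
    $ ps aux | grep ceph-mon
    # dump all thread backtraces without killing the process
    $ gdb -p <pid> -batch -ex 'thread apply all bt'
    # or record perf data for 30 seconds and inspect it
    $ perf record -g -p <pid> -- sleep 30 && perf report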
Deepika: And here is the doc; I think it was recently added. Earlier we didn't find any relevant resource for debugging with GDB or other low-level tools in a container environment, so this might come in handy in such scenarios, for going beyond Kubernetes-level debugging. Thanks, Gaurav. Anything else you want to highlight?
Deepika: Yeah, we linked one of the cases in which we used this performance profiler and the GDB commands.
Deepika: Oh, and for the future work discussion, I'd like to hand it over to Shubham. Yep.
Shubham: So for the future work on krew: as Gaurav said earlier, we are planning to add a scenario where we can restore the mons with the help of the OSDs when all the mons are down. We are planning to add this scenario to the krew plugin too. We are also planning to add a kind of backup and recovery support, in case, let's say, the CephCluster CR is deleted accidentally, or something happened by mistake and it got deleted.
Shubham: We are thinking of adding support in krew so that we can somehow restore the CRDs back, so that we get the Ceph cluster back healthy and working again. The next one is that Gaurav is planning to automate the collection of the core dumps, since, as we saw earlier in the talk, in the Rook upstream doc the collection of a core dump takes quite a few steps, so we are planning to automate those as well.
Shubham: And lastly, we are also planning to add some CSI troubleshooting commands, for situations like PVCs not being found; here is the link, and we have already started working on this. In some situations you just see that a PVC is not getting bound or attached to the pod, and sometimes it is networking issues, and sometimes there can be different scenarios, right? So we are also planning to add support in krew so that we can indicate...
Shubham: ...to the user, okay, what could be the possible reason, and we can also print some logs that can help the user find the root cause of the issue. So these are the future works, and yeah, there are many more to add. We would also like to hear back from you, to get some feedback and to learn what the most common situations are where you get stuck and where krew could come in handy and help.
Deepika: Yeah, and feel free: if you have any questions, now would be the right time to discuss them. Anybody have any questions or things to discuss?
Gaurav: Yeah, so actually, as a part of this: sometimes we need to capture and collect a live core dump. As we shared, whenever a process segfaults or hits an assert, a core dump is definitely captured; but at times, in certain problems and scenarios, we want to collect the core dump of a live process without trying to kill it, because of the disruption that killing it could cause. For example, with a high amount of CPU utilization, we want to know what's happening at a low level.
Gaurav: We can even attach GDB to the process and just capture a bunch of backtraces. I remember a discussion with Radek as well, when we saw a Ceph process where ms_dispatch was causing issues; he also suggested that you can capture it continuously at multiple instants, which could be very useful for troubleshooting at that level.
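A sketch of that repeated live capture (the interval and count are arbitrary choices; gcore ships with gdb):

    # take a core of the live process without killing it
    $ gcore -o /tmp/ceph-osd.core <pid>
    # or sample backtraces repeatedly to see where threads spend time
    $ for i in $(seq 1 10); do
          gdb -p <pid> -batch -ex 'thread apply all bt' > /tmp/bt.$i
          sleep 5
      done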
Gaurav: So if these things could be done automatically, it would make the usability much easier: perf data, core dumps, and wallclock profiler information could be collected simultaneously, and a table could be dumped, just with the help of a krew command that simply automates that effort instead of doing it manually.
Deepika: Yeah, these commands are documented right now in the Rook docs, so as the first step we are planning on automating just these commands with the krew plugin, and then, proceeding forward, we can expand to things like live core dumps and anything that the developers from the Ceph side also find useful.
Deepika: One of the things that helped while thinking about this approach was having worked on the Ceph side. So yeah, any feedback there would also be helpful.
Audience: Yeah, well, in telemetry we collect the crashes that happen in the cluster, and we sync them with Redmine as well. So I was wondering, basically, what infrastructure you're going to use in order to collect the core dumps, and, I don't know, whether it can somehow be integrated with the work in telemetry, or whether it's going to be on a totally different infrastructure.
Deepika: I think that should be possible, but we'll probably have to collaborate with you on that.
Audience: Yes, this sounds great. So we can read more about it in the Rook docs, basically, yeah.
Deepika: It's linked here; you can see the performance profiling section, you can just click there and collect the perf data of a Ceph process at runtime, and all the GDB commands, you can just go ahead and play with them if you like. And one of the cases where we used this, the issue isn't closed yet for some reason, but this was one of the interesting things that we were able to do inside the container environment.
Audience: Yeah, so we can definitely continue the discussion offline, and I'll take a look at the docs and see what makes sense to integrate. Thanks.
Gaurav: As she said, if you've got any feedback, please do let us know as well. That would be good, because even improving the current collection steps helps: the more troubleshooting information we can collect and automate, the easier it makes the lives of the admins and users.
Deepika: This is the plugin project's website. So if you have any suggestions for where we can make use of this plugin to make debugging easier, you can open an issue here, or contribute here as well.
Audience: Sounds good. I think the one piece that I'm missing from the talk is: when you're talking about collection, what back end are you referring to?
Audience: Okay, and this is basically to serve the administrator of the cluster, yeah.
Deepika: Yes; the data is just, you know, for the developer or the admin to triage an issue, yeah.
Audience: Sounds good.
Deepika: Oh great, yep, thanks Stephen, and yeah, any feedback, we are more than welcome to have that. So let's close it here; if anybody wants to get involved, here are the right places to do that. Thank you, guys, and yep, see you, maybe at the next event. Yeah.