From YouTube: CDS Reef: Dashboard
Description
The Ceph Developer Summit for Reef is a series of planning meetings around the next release and some community planning.
Schedule: https://ceph.io/en/news/blog/2022/ceph-developer-summit-reef/
A: Welcome, everyone, to the Ceph Dashboard CDS meeting for the Reef release. We will cover, more or less, what the next milestones for the dashboard are going to be for this release, and also how they connect to present and past experience: the issues we found and what we learned while developing new features. So I will try to connect all these things together.
So, first of all, I pasted the link here, I think, in BlueJeans; I will test it. Okay, thanks. So, first of all, there are these links to the previous sessions: the Quincy one and the main CDS Reef one.
A: So basically we were granted access to an environment where we could test at scale; I think we were talking about 150 nodes or something, so it was quite a large-scale cluster. There were some findings that specifically applied to the dashboard, and they mostly had to do with some specific endpoints.
A: Some of them were obvious, like the OSD pages of the dashboard showing scale issues, and that was kind of expected; but others, like the number of alerts, were not. We wouldn't expect to have scale issues there, but given that some alerts are triggered on a per-instance basis, an alert raised by an OSD basically gets multiplied by the number of OSDs, which means you end up with the same scale issues.
A: Additionally, we found issues with the monitoring: how the metrics are reported and delivered to Prometheus. That's been one of the key takeaways from this scale testing. It's basically summarized here, and we will see later how this is going to be tackled.
A: So all these links basically describe that, and they are very interesting if you want more details about the process of scale testing and the different issues encountered; those are really interesting reads if you are interested in the topic of scale testing. Regarding the present: I'm not sure if Mike has officially published the results of the user survey; just let me know if that has already happened.
A: So, well, basically I went through the dashboard-specific feedback, and these are the comments we got from the users that participated in that survey. We got positive feedback saying that the dashboard is progressively improving and covering more and more of the Ceph workflows. We still see that some users are using the dashboard for monitoring-only purposes, so they are not actually using the active management capabilities. There were some comments around the UX/UI experience, basically asking to improve the UI design.
A: Also, improving the reporting for the pools. And, yeah, this is an interesting suggestion: providing an object browser. I mean, it wouldn't have to be strictly limited to RGW; it could be a general object browser. That could probably become another scalability issue, but it might be an interesting thing to discuss whether it makes sense to be able to browse all the objects in a cluster from the dashboard.
A: Regarding monitoring and alerting, there were specific requests to display some low-level metrics, like temperature. I'm not sure, actually, whether node exporter provides metrics for those; some of them seem quite low-level, but basically we stick to what node exporter provides for node monitoring, so we can't do much more about that. And this one I'm not really sure what it means, but I assume it's troubleshooting based on Grafana.
A: The improvement of the alerts is something we've been working on. I remember we had a couple of issues reporting alerts that were extremely noisy; one is the MTU one. We're improving that, and it's now fixed, I think, in Quincy: it now also takes into account whether an interface is enabled or disabled before raising the MTU mismatch alert, so we now skip the alert on interfaces that are disabled.
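To make that fix concrete, here is a minimal sketch of the idea (the real Ceph alert is a PromQL rule; the field names below are illustrative, not Ceph's schema): interfaces that are administratively down are excluded before comparing MTUs.

```python
# Minimal sketch of the improved MTU-mismatch logic: ignore interfaces that
# are down, then flag only active NICs whose MTU deviates from the majority.
from collections import Counter

def mtu_mismatch_candidates(interfaces):
    active = [i for i in interfaces if i["up"]]  # skip disabled interfaces
    if not active:
        return []
    majority_mtu, _ = Counter(i["mtu"] for i in active).most_common(1)[0]
    return [i["name"] for i in active if i["mtu"] != majority_mtu]

print(mtu_mismatch_candidates([
    {"name": "eth0", "mtu": 1500, "up": True},
    {"name": "eth1", "mtu": 9000, "up": True},   # flagged: deviates
    {"name": "eth2", "mtu": 9000, "up": False},  # ignored: disabled
]))  # -> ['eth1']
```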
A: We were discussing how to improve the MTU alert further, and it doesn't seem to be an easy one, I mean, if we want to do it properly. So, regarding troubleshooting: this is one of the biggest additions for the Quincy release, centralized logging. Basically, a user was reporting wanting to have live logs from all the different components and daemons, and this will now be possible starting with Quincy. We'll see; we are also planning to backport it to previous releases, but it will be available soon.
A: I will mention more about that later, but it will be based on Loki and Promtail.
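As a sketch of what that gives us: once Promtail ships the daemon logs into Loki, anything (the dashboard included) can pull live logs over Loki's HTTP API. `/loki/api/v1/query_range` is Loki's documented endpoint; the host and the `job`/`unit` labels are assumptions about how this particular deployment would label Ceph logs.

```python
# Hedged sketch: pull recent OSD log lines from a Loki instance.
import requests

LOKI = "http://loki.example:3100"  # hypothetical Loki endpoint

resp = requests.get(
    f"{LOKI}/loki/api/v1/query_range",
    params={"query": '{job="ceph", unit=~"ceph-osd@.*"}', "limit": 100},
    timeout=10,
)
resp.raise_for_status()
for stream in resp.json()["data"]["result"]:
    for _ts, line in stream["values"]:   # (timestamp, log line) pairs
        print(line)
```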
A: A colleague, who is here, already started exploring a similar monitoring stack based on Elasticsearch and Kibana. So, technically, I mean, cephadm might stay neutral about vendor stacks; we simply implemented this one, yeah.
D: I wanted to understand what would be the best option here to go with, and whether users and the community are already using alternative options as well. I mean, what would be the suggestion? And also, I know we discussed some other solutions as well, right? So, I mean, are we still going with Loki and Promtail, or are we still open to options for the best one?
A: Yeah, I mean, so far we have provided one stack, which is Loki plus Promtail. One reason is that it worked really well with Grafana, and Grafana is already deployed; but, well, Grafana also supports Elasticsearch.
A: But there was another reason, and it's the recent license change in Elasticsearch, to a custom open-source license. I'm not sure what the status of that is, but it happened, I think, a year ago, more or less: they switched to a custom open-source license, and actually I think Amazon forked the Elasticsearch repo.
A: So some major contributors are diverging from it due to this licensing issue. But, I mean, I'm not sure — and that's a question also for Adam — cephadm now supports custom container definitions, so there might be a chance for users to customize their deployments that way. I'm not sure if that's in the backlog for cephadm.
E: Custom containers in general became available a pretty long time ago; you've been able to do that for some time. I know, even on Pawsey, I think it was cAdvisor or something that was being deployed that way. I think there might be a bug with it currently; I need to go check, I saw a tracker about it, but it should be implemented. So there might be a bug right now, but in general it's been there, at least in earlier versions, and it will be working again soon.
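For reference, the custom-container support being discussed works roughly like this: you describe the extra daemon in a service spec and hand it to the orchestrator. A sketch, generated from Python for consistency with the other examples; treat the exact schema as an assumption to check against the cephadm docs for your release.

```python
# Sketch of a cephadm custom-container service spec (schema assumed).
# Applied with: ceph orch apply -i spec.yml
import yaml  # PyYAML

spec = {
    "service_type": "container",
    "service_id": "cadvisor",                      # hypothetical extra daemon
    "placement": {"host_pattern": "*"},
    "spec": {
        "image": "gcr.io/cadvisor/cadvisor:v0.45.0",
        "ports": [8080],
        "args": ["--net=host"],
    },
}
print(yaml.safe_dump(spec, sort_keys=False))
```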
D: Are we still looking at the possibility of having multiple clusters managed here? Because, if we want to manage centralized logging for multiple Ceph clusters, it would require some level of multi-cluster support in the dashboard as well, right?
A: I thought that was kind of a different concern, right? That would be the multi-cluster awareness, and I'm going to talk about that; but I'm not sure if, in this very case, we want more than centralized logging per cluster, which was the initial scope for this. Yeah, having multi-cluster centralized logging would maybe be a further iteration of this approach. The thing is — and I will mention this later — that for the multi-cluster awareness we are trying to find the minimal set of metrics and indicators that make sense. In the case of the logs, those would probably be extremely verbose, and they are not very actionable at that multi-cluster level, so maybe only the alerts actually make sense. I mean, if we can infer from the logs any alerts that cannot be extracted from any other metric, then it might make sense; but if we can get them from the existing Prometheus Alertmanager alerts, I would prefer going for that rather than having a centralized, multi-cluster log storage.
A: Then we have this other request, which is pretty specific, because, yeah, for some of these things I think we are using the orchestrator interface — for the device LED management — and I'm not sure if there's a way to deal with it outside the orchestrator.
A: Regarding the simplification of the dashboard by adding workflows: yeah, that's something we are definitely trying to do.
A: It is not easy because, well, the Ceph API is very fine-grained, so we basically need to abstract many minor steps, and it's complex to do that, especially if you want to have transactional behavior, which is what the user usually expects when consuming a resource or a workflow: that the whole workflow will either succeed or roll back in case there is a failure. And that's really complex to do in the case of Ceph, because there is no transactionality in the way many of these operations are done.
A: So that's still a to-do, whether as a workflow or a wizard. Regarding missing features: this CephFS management, which I assume means the whole set of CephFS management from the dashboard, is one of the things we are definitely committed to doing in the next release, the Reef one. As for the CRUSH rules management, we will need to evaluate that one, because it might be quite complex to do; and, as you can see, some of these things are actually conflicting, right? It's hard to abstract and simplify things and, at the same time, also provide advanced workflows or fine-grained operations, so we'll need to find a common ground for both approaches; but in general we're favoring simpler and, well, more abstract operations rather than fine-grained ones.
A: I don't know exactly what this one means regarding the expert mode and other advanced operations. Many of these things, I think, are already available in the dashboard right now, but we try to cover maybe 80 or 90 percent of the functionality, which basically allows you to do most of the daily stuff. In this very case, yeah, maybe the min_size setting is not exposed, but, well, it's all based on requests.
A: That one is pretty specific, and probably something like it might be covered by the cephadm deployment, I would assume, but I'm not very familiar with this topic. And namespace integration: that's available for RBD, but I think that's it; we are not exposing namespaces in other parts of the Ceph dashboard.
A: And then we have the bigger topics, like multi-cluster support. I will mention this later, but Paul Cuzner has been running surveys to gather feedback from the users on the topic of multi-cluster. It's been mostly focused on monitoring, so we are not thinking about multi-cluster-wide management: it would just be monitoring of the different clusters, and then a user would be expected to jump into the specific dashboard for managing each cluster. So there wouldn't be cross-cluster management; only for the cases of RBD and CephFS mirroring could that be the case, but it would be limited to that.
A: Editing the service specification YAML files: well, we've been discussing whether it would make sense to allow users to directly edit those YAML files.
A: Yeah, we also got this one, which, as I understand it, is basically about providing the user with the equivalent CLI commands for what they do in the dashboard. That's an interesting one. In fact, I remember some of our colleagues from SUSE mentioning that in openATTIC they had an API recorder or something like that; it's kind of a macro recorder, so whatever the user did in the UI, it could record all the actions and would generate, I think, the curl commands for their API.
A: So that might be an interesting thing to do. I think we've talked about that in the past, and it would probably help with automation, because the API is there but it's probably not easy to automate: right now you need to go to the Swagger/OpenAPI docs and basically write your scripts based on that. Maybe this way it would be easier to provide users with a list of curl or CLI commands for replaying the same actions from the command line.
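A toy version of that recorder idea, with a hypothetical endpoint and payload: given a REST call the UI just made, emit the curl command that reproduces it.

```python
# Hypothetical "macro recorder" output: turn a recorded dashboard REST call
# into an equivalent curl command the user could replay or script.
import json
import shlex

def to_curl(method, url, token=None, body=None):
    parts = ["curl", "-X", method, shlex.quote(url)]
    if token:
        parts += ["-H", shlex.quote(f"Authorization: Bearer {token}")]
    if body is not None:
        parts += ["-H", shlex.quote("Content-Type: application/json"),
                  "-d", shlex.quote(json.dumps(body))]
    return " ".join(parts)

# e.g. what the UI might send when a user creates a pool (names illustrative):
print(to_curl("POST", "https://ceph-mgr:8443/api/pool",
              token="<jwt>", body={"pool": "rbd", "pg_num": 32}))
```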
A: This one, I'm not exactly sure how we could do it from the dashboard; it's more of a Ceph feature, right? I don't think a "reimport OSDs from a failed host" feature is strictly connected to the dashboard; it's more of a general request. And this one is also quite specific.
A: So that would be it from the Ceph user survey. Regarding other sessions, we have only had the orchestrator and the performance ones so far, right? Last year, I think, the dashboard was one of the last sessions, so we already had all the feedback from the different components, like CephFS and RGW; this year we don't have much feedback yet. Regarding the themes we are going to focus on for the Reef release, the top priority is day-two operations.
A: So, basically: once you have the cluster running, what are the regular operations that an operator or an admin performs on it? We have started with what we identified as weekly or monthly operations, and then we will start implementing the yearly, rare, or infrequent ones. The idea with this — and it connects to the backport policy I'll mention later — is that we are trying to backport all these improvements to previous releases. Given that most day-two operations have to do with cephadm management, we want to ensure that the feature gap is more or less the same across the active releases, so basically we're trying to reduce it in both Pacific and Quincy.
A: It's still to be discussed whether we want to fully replace it, but at least there will be an alternative way of gathering cluster metrics, from a new Ceph exporter. The reason is that the current one, the mgr-based Prometheus exporter, doesn't scale beyond a thousand OSDs or so. Basically, with the current code base in Pacific, if you try deploying more than a thousand OSDs, you will start facing performance issues, due to the way the Python/C++ API works, with locking and other kinds of behavior that affect the manager API interface. So the alternative is going to be to deploy a per-host exporter: this new daemon will also be deployed via cephadm, and it will be a kind of sidecar container.
A: It will interact with the daemon admin sockets and retrieve the perf counters from those services, so Prometheus will just have to discover all these new exporters and will scrape the metrics from them. There is ongoing work on this and, yeah, I guess the idea will probably also be to backport this to previous releases, because this is a known bottleneck.
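The core of what such a per-host exporter has to do is small. A rough sketch below uses the existing `ceph daemon <name> perf dump` admin-socket command (run on the daemon's host); a real exporter would talk to the `.asok` sockets directly and sanitize metric names before exposing them.

```python
# Rough sketch of the per-host exporter's job: read perf counters from a
# daemon's admin socket and print them in Prometheus-like text format.
import json
import subprocess

def perf_counters(daemon: str) -> dict:
    out = subprocess.run(["ceph", "daemon", daemon, "perf", "dump"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

for section, counters in perf_counters("osd.0").items():
    for name, value in counters.items():
        if isinstance(value, (int, float)):  # skip nested histograms etc.
            # real code would sanitize names to valid Prometheus identifiers
            print(f'ceph_{section}_{name}{{ceph_daemon="osd.0"}} {value}')
```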
A: The plan is to provide a richer set of metrics, and that would also make the scalability issue we've been talking about worse. So the idea is to deal with that by providing per-host metrics, rather than all the metrics from a single source, and that will allow some kind of horizontal scalability.
A: And the last theme is basically improving the developer experience. This is something we are constantly doing: trying to improve our code quality, the automated testing we run, and so on; we invest a considerable amount of time in this effort. I already talked about the backport policy; I think this is quite a well-debated topic.
A: Whether features should or should not be backported to previous releases: the general agreement is that they shouldn't be, only bug fixes. But in the case of the dashboard, we are always a couple of steps behind the rest of the Ceph components, because when the other teams deliver features it usually takes us a few months, or more than that, to implement the same feature in the dashboard. So, yeah, we are always between one and two releases behind the original feature.
A: We are trying to ensure that the active releases, Pacific and Quincy, have a comparable amount of support regarding the cephadm feature set and what the user can do with the cluster from the UI. Also, with the addition of the REST API versioning, which was introduced in Pacific, we can quickly detect if there is a breakage, or is going to be a breakage, in the API.
A: And for the specific topic of dashboard features, there have been lots of things. I put here the issues that we identified or tracked for the last release, the Quincy one. I think we achieved maybe a third of those.
A: This is the issue backlog for things that were initially targeted at Quincy: 81 trackers, covering features, bug fixes, and also code cleanups. So, well, the backlog is here; we'll have to update it with the new features, and I guess that will happen after all the CDS sessions, and probably some of these things will be reprioritized based on all the new input.
A: There is this new OSD creation wizard, and the plan for it is to automatically guess the optimal OSD deployment strategy based on the existing drives. So, based on the number of HDDs, SSDs, or NVMe devices in a cluster, we try to infer the recommended setting, and the idea is that the wizard will recommend to the user, based on their setup, the optimal deployment: all NVMe means an IOPS-optimized deployment, and if you mix SSDs and HDDs, maybe you go for a, what was the name, throughput-optimized one, I think, right?
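For context, the kind of output such a wizard could produce already has a home: cephadm's OSD service specs (DriveGroups) can describe a mixed HDD+SSD layout declaratively. A sketch, with illustrative values:

```python
# Sketch of an OSD service spec a "throughput optimized" wizard choice could
# emit: data on rotational drives, WAL/DB on flash. DriveGroup specs are an
# existing cephadm concept; the exact values here are illustrative.
import yaml  # PyYAML

spec = {
    "service_type": "osd",
    "service_id": "throughput_optimized",
    "placement": {"host_pattern": "*"},
    "spec": {
        "data_devices": {"rotational": 1},  # HDDs hold the object data
        "db_devices": {"rotational": 0},    # SSDs/NVMe hold WAL+DB
    },
}
print(yaml.safe_dump(spec, sort_keys=False))
```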
A: Then comes the multi-site management. We have CephFS mirroring: currently there is no support in the dashboard for that mirroring — apart from deploying the mirror daemons from the orchestrator UI, you can't actually do anything else regarding CephFS mirroring. But the cool thing about this is that, given that its API is quite similar to the RBD mirroring one, it's going to be very easy to extrapolate what we have for RBD mirroring to CephFS mirroring.
A: Regarding RBD mirroring, right now the only missing thing is the snapshot-based mirroring: we currently support the pool-based and the image-based mirroring configuration, and the only missing piece is the snapshot-based mode. So that will be the goal for Reef: to achieve full coverage of the RBD mirroring features. Then the Ceph auth management: this is a pretty basic thing that we didn't have in the dashboard so far, so it makes sense to bring it to the dashboard, actually, for external client configuration.
A: It might be interesting because, for the rest, I may assume it can be automated; but if you want to configure an external client to connect to the cluster, it might be interesting not only to manage this, but also to provide copy-paste credentials, so you can quickly retrieve or download the credentials from the UI — and maybe also the ceph.conf file — and export them to an external client, so you can run it remotely.
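Both halves of that export already exist as CLI commands, so the dashboard work would mostly be plumbing. A sketch (the client name and caps are made up):

```python
# What a "download client config" button could automate; both commands exist.
import subprocess

def run(*args) -> str:
    return subprocess.run(args, capture_output=True, text=True,
                          check=True).stdout

# Minimal ceph.conf an external client needs to find the monitors:
conf = run("ceph", "config", "generate-minimal-conf")
# Keyring for a hypothetical read-only external client:
keyring = run("ceph", "auth", "get-or-create", "client.external",
              "mon", "allow r", "osd", "allow r")
print(conf)
print(keyring)
```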
Then the topic of centralized logging: as said, this is the Loki plus Promtail solution.
A: Then the new Ceph exporter, and we also have some advanced RGW features, like the server-side encryption. Yeah, I was wondering if we have delivered any new RGW feature during Quincy; I don't think so, right? Probably Pacific was the last time we delivered one.
C: Yeah, for Pacific it was the selection of the daemon, yeah; but for Quincy I can't remember any major feature. I mean, bug fixes are there, but I think the major request was the key management service.
A: And we don't have anyone here from RGW, right? Okay. So that's all about the new features. Regarding the non-functional improvements, our biggest concern is scalability.
A: That's something that has already affected us, and it's a long-standing issue: the RBD page didn't scale very well. And also — I hadn't mentioned this — after the scale testing we found that the OSD and host pages also presented issues when scaling.
A: So the idea is to provide server-side pagination. That's a huge change in the dashboard, because actually all the information presented by the dashboard was paginated in the UI itself: it came complete from the back end, and all the filtering and pagination happened in the UI. The idea is to move all the pagination to the back end.
A: That has some pros and cons as well, because the kind of filtering you can do in the UI is more complete, richer, than in the back end; but, on the other hand, if a user wants to scale beyond a thousand OSDs and a hundred or two hundred hosts, they will surely need this kind of back-end pagination and filtering.
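A framework-neutral sketch of the contract server-side pagination implies: the browser sends offset/limit/search/sort and gets back one page plus the total row count (parameter names illustrative, not the dashboard's exact API).

```python
# Sketch of a server-side paginated listing: filtering, sorting and slicing
# happen in the back end; the UI only ever receives one page.
def list_osds(all_osds, offset=0, limit=50, search="", sort="+id"):
    rows = [o for o in all_osds if search.lower() in o["host"].lower()]
    key, reverse = sort.lstrip("+-"), sort.startswith("-")
    rows.sort(key=lambda o: o[key], reverse=reverse)
    # the total count still has to be reported so the UI can draw the pager
    return {"total": len(rows), "rows": rows[offset:offset + limit]}

osds = [{"id": i, "host": f"node{i % 200}"} for i in range(2000)]
page = list_osds(osds, offset=0, limit=5, search="node42", sort="-id")
print(page["total"], [o["id"] for o in page["rows"]])
```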
A: Well, regarding engineering-specific improvements, one thing we are paying a lot of attention to for Reef is improving our testing.
A: So our latest addition is the Grafana unit testing: in fact, we worked on a framework for testing Grafana dashboards at the unit-test level with promtool, and we are also exploring end-to-end testing with a framework provided by Grafana Labs. That will also help us quickly identify issues, because the Grafana integration is one of our major sources of issues and we definitely need to stabilize it; that will probably save us a lot of effort.
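As a flavor of that promtool-based unit testing (the rule and expectations below are made up for the example, though `ceph_osd_up` itself is a real mgr/prometheus metric):

```python
# Hedged sketch: unit-test a Prometheus alert rule with `promtool test rules`.
# Assumes promtool is on PATH; the rule and alert name are illustrative.
import pathlib
import subprocess
import textwrap

pathlib.Path("rules.yml").write_text(textwrap.dedent("""\
    groups:
      - name: demo
        rules:
          - alert: OsdDown
            expr: ceph_osd_up == 0
            for: 1m
"""))
pathlib.Path("rules_test.yml").write_text(textwrap.dedent("""\
    rule_files:
      - rules.yml
    tests:
      - interval: 1m
        input_series:
          - series: 'ceph_osd_up{ceph_daemon="osd.0"}'
            values: '0 0 0'
        alert_rule_test:
          - eval_time: 2m
            alertname: OsdDown
            exp_alerts:
              - exp_labels:
                  ceph_daemon: osd.0
"""))
subprocess.run(["promtool", "test", "rules", "rules_test.yml"], check=True)
```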
And another generic improvement is the backend-driven UI.
A: This has been a topic for a while. Basically, in order to simplify, or lower the bar for, developing the dashboard, we probably need to ensure that developers don't really need to do that much in the UI and can stay on the Python/back-end side, because, well, compare the knowledge that developers have of Python versus UI technologies like Angular and TypeScript: usually Python is the common denominator for many developers. So if we want to gain external contributors, we should definitely try to move all the complexity from the UI to the Python side and try to simplify things, to basically allow the developers to bring new features to the dashboard from Python-only code. That's kind of a goal, and we will explore things in this regard.
A: For example, we started exploring JSON-based forms: we tried replacing our existing forms with JSON-generated forms. That might be an interesting move, if we want to get rid of hand-written UI forms and replace them with JSON-generated ones.
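Purely as an illustration of the direction (the schema keys are hypothetical, loosely JSON-Schema-shaped): a backend could describe a form declaratively and let a generic frontend component render it.

```python
# Illustrative backend-defined form description: a generic UI component could
# render this without any hand-written Angular form code. Keys are made up.
import json

pool_form = {
    "title": "Create Pool",
    "fields": [
        {"name": "name", "type": "string", "required": True},
        {"name": "pg_autoscale_mode", "type": "enum",
         "options": ["on", "off", "warn"], "default": "on"},
        {"name": "size", "type": "int", "min": 1, "max": 10, "default": 3},
    ],
    "submit": {"method": "POST", "endpoint": "/api/pool"},
}
print(json.dumps(pool_form, indent=2))
```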
A: That would be great. Also, we are generating other assets from JSON, or rather Jsonnet, like the Grafana dashboards: all that part was moved to a new directory, called ceph-mixin, and we're basically generating the Grafana dashboards directly from this Jsonnet language.
A: I don't think so, at least not from the user survey, and I don't think we have any. We do need to improve the cross-version support we have there, but right now I think we only have minor version upgrades in the API, so that was easier to deal with. But, yeah, there are a couple of minor fixes in that area that we need to implement.
C: No, I was thinking that, using the versioning, maybe eventually the UI — the front end, I mean — could be decoupled from the back end, just consuming certain versions of the API; so it could then be detached from the monolith, I mean the core stuff. That would be more manageable in terms of backporting, or needing less backporting, in the front end. I don't know, it's an idea: just consuming the API by version, so you can upgrade minor and major versions.
A: Yes, as long as we have that versioning in place, technically that would allow us to have a single dashboard for multiple versions; that was one of the reasons for introducing the API versioning.
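Concretely, the dashboard's REST API versioning works through a vendor media type in the Accept header, so a frontend (or any client) can pin the API version it was written against. A sketch with a hypothetical host and credentials:

```python
# Pinning a dashboard REST API version via the Accept header. The media type
# is the dashboard's versioning scheme; host and credentials are made up.
import requests

BASE = "https://ceph-mgr:8443"
V1 = {"Accept": "application/vnd.ceph.api.v1.0+json"}

s = requests.Session()
s.verify = False  # assumption: lab cluster with a self-signed certificate
token = s.post(f"{BASE}/api/auth", headers=V1,
               json={"username": "admin", "password": "secret"}).json()["token"]
r = s.get(f"{BASE}/api/health/minimal",
          headers={**V1, "Authorization": f"Bearer {token}"})
print(r.status_code)  # a 415 here would mean this API version was dropped
```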
A: And the idea is for the REST API to be able to detect this and fail gracefully, or, well, implement some kind of measures to deal with it, so yeah.
A: Thanks. Also, any other comments? I would like to hear from the component leads we have here, so I will start with you: is there anything here that you miss from the Rook-dashboard connection? I remember we've been talking about the Rook orchestrator and how that could be presented in the dashboard.
E: The bigger question would be what we're going to end up doing with Rook, because, if they're not going to use mgr/rook, then that limits what the dashboard is capable of in Rook environments; but it seemed like they were just looking for monitoring stuff from the dashboard in their deployments anyway. So that would be the bigger question. I don't know if anyone from the Rook team is here, though, to talk about that stuff, but that would be a bigger question than what's going on.
A: Yeah, I'm not sure. I mean, I will have a meeting with them again to clarify this. But, definitely, I mean, part of the interesting thing about Kubernetes, right, is that you have a kind of — not an immutable, but at least a reproducible — environment, right?
A: But if you start modifying things from the dashboard, you're kind of overriding what Rook is managing in the cluster. In any case, via the toolbox container you can usually modify the cluster anyway, so you can perform actions that are outside the control of Rook and Kubernetes.
A: So, yeah, we need to clarify with them what the use cases for this would be. I mean, based on our last conversation, I don't think the orchestrator API was really needed for that, because most of those operations happen at the Ceph level — I think they related to PG management and things that directly affect Ceph — so all the orchestrator-related stuff was directly managed by Rook.
E: Yeah, the issue was that the orchestrator interface didn't really fit well with what they actually wanted to do with Rook. They ended up simply doing stuff on their own, and they didn't want to make use of a lot of the orchestrator commands that were there, so they decided it would be easier just to not have an orchestrator module for their stuff. And obviously the downside of that is that it makes it harder for anyone else to integrate with what they're doing; there's no common API there.
A: Thanks. And, I see here — anything, I mean, from RADOS, from the core side?
B: What you were talking about with respect to OSD deployments, and trying to recommend the optimized way to deploy things — that sounds very interesting to me.
A: Yeah, is there anything that you would miss here? I mean, I'm not sure when the last time you played with the dashboard was, but if you have seen it recently, anything that you missed there?
B: Not in particular. I think what you gathered from the user survey is quite valuable in terms of more specific feedback there, and the last time I played with it was actually on the Gibba cluster, where you hit exactly the scalability issues that you're already addressing; so I think that's all in hand too, yeah. I guess I just wanted to comment with respect to the backporting ideas.
B: I agree that for the dashboard it makes a lot of sense to backport features, especially since it's much lower risk being at the UI layer: the UI is not going to break the underlying storage, so that's not a concern in terms of risk.
A: Yeah. I mean, regarding core: I think we were talking last year about mClock, right, the profiles for the OSDs. That's not yet implemented — I haven't mentioned it here, but it's in the backlog. I'm not sure how high-priority it is: are you seeing that users rely a lot on that, or do they basically go with the default profile?
B: I think we'll find out more with Quincy, since mClock is going to be the default scheduler in Quincy. There have been a few folks trying it so far with Pacific, but since it's going to be the default in Quincy, we'll see a lot more of how folks want to tune it for their use case.
A: And the default profile is, sorry, the balanced one?
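For reference, switching profiles is a one-liner; `osd_mclock_profile` and these profile names exist in recent releases, though it's worth checking the docs for the exact set:

```python
# Selecting an mClock profile cluster-wide (wrapped in Python to match the
# other sketches). "balanced" is the default mentioned above; the others
# shift QoS toward client I/O or recovery.
import subprocess

subprocess.run(["ceph", "config", "set", "osd",
                "osd_mclock_profile", "high_client_ops"], check=True)
# alternatives: "balanced", "high_recovery_ops", or "custom" for hand tuning
```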
A: Don't we have any telemetry feedback regarding whether users are favoring one of these?
B: You'll be able to see more in terms of performance stats from the new telemetry performance channel as well.
B: Yeah, I think that's a good idea there. There are some more things for RADOS coming up; we'll see in the RADOS session for CDS — I think it's Thursday. Mm-hmm, okay.
B: One thing that comes to mind, sorry: there's a new way to configure the PG autoscaler to treat certain pools as holding bulk data and have them get high parallelism right out of the gate.
A: Okay, okay. Do you have a link to the docs about that, so I can create the tracker?
G: Oh yeah; basically, I think Josh said most of it. The bulk flag is basically useful for data pools, pools where we know there's going to be a lot of data: we give them the most PGs up front, and, based on the usage of all the pools across the cluster, PGs are allocated according to usage. That's the high level of it, but we'll provide you with the documentation in the chat. Other than that, there's also one more thing.
G: One feature, if I can mention it, is that you can now turn off the autoscaler globally, across all the pools. Before, we had to, you know, go manually and turn off the autoscaler on the pools one by one, but now we can do it globally. So I don't know if that's a feature that we can expose, right? Yeah, just a thought.
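Both knobs mentioned here are existing CLI commands in Quincy and later, as far as I recall, so exposing them would mostly be UI work:

```python
# The autoscaler knobs under discussion, wrapped in Python for consistency.
# Pool name is hypothetical; commands exist in Quincy+ as far as I recall.
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "pool", "set", "data_pool", "bulk", "true")  # many PGs up front
ceph("osd", "pool", "set", "noautoscale")    # pause the autoscaler globally
ceph("osd", "pool", "unset", "noautoscale")  # ...and resume it
```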
A: Okay, thank you. Yeah, right now in the pools view I think you can enable and disable the autoscaler; I think there were like three modes, so maybe it was on, off, and warn, or something like that.
A: Yeah, I'm not sure how many new settings this requires at these two levels. I'm also not sure if there's a chance to tag each pool based on high usage, low usage, or something like that, instead of having to manually enter numbers.
G: Oh, well, if the pool is labeled with the bulk flag, that pool will receive basically the maximum possible number of PGs, and the rest, the pools that don't have the bulk flag — it will prioritize the bulk pools, basically, and the others get what is left.

A: Okay, I see.
F: Right now in the dashboard we can do a lot of the installation operations, like deploying all the daemons. But one thing I'm missing in the dashboard is that we can't do upgrades from the dashboard; we have to use the CLI side. So, whenever the users — say, customers — are on the 5.1 release and there is a 5.2 release, the dashboard should at least show some kind of notification, like in Google Chrome.
A: Okay, okay, yeah, the update feature. Well, the good thing is that that's technically supported right from cephadm, so it could be just a matter of exposing it; you know, it makes sense.
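For the record, this is the cephadm flow such a dashboard notification would sit in front of; the `ceph orch upgrade` commands exist today, and the version string is illustrative:

```python
# The cephadm upgrade flow a dashboard "update available" banner could drive.
import subprocess

subprocess.run(["ceph", "orch", "upgrade", "start",
                "--ceph-version", "17.2.3"], check=True)   # start the upgrade
subprocess.run(["ceph", "orch", "upgrade", "status"], check=True)  # progress
```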
F: Yeah, one more thing I wanted to raise: right now we support deploying RGW from the dashboard, completely configuring everything, with the keys and everything. So, in the same way, could we also support the RGW multi-site configuration?
A: Yeah, that's a good one. It's been a pending issue; I mean, we've been planning to do that for ages. The thing is that it's quite complex, and the feedback we got from the users that use RGW multi-site is that the deployments are sometimes so specific and demanding that you have to be, I mean, very careful with that.
A: The only feedback that we got here was from Red Hat support, and they basically preferred to do that manually. I think cephadm also supports some multi-site deployment, but even so, they prefer to do it manually — I'm not sure exactly what kind of issues they were facing, maybe load issues or that kind of thing — so they prefer to do it step by step. But, yeah, it might make sense; we'll probably need to look into it.
A: After we complete the RBD and the CephFS mirroring, yeah, we will have to go back to RGW multi-site, as you say, just to see if we can improve that.
A: Actually, thanks, Junior, for the links; I just pasted them there.
A: Yeah, so we are approaching the end of our time, and I think we are sharing this slot with the orchestrator folks, right? So thank you, everyone, for your time. Please add anything — any feedback, any suggestions, whatever — to the Etherpad, and you know where to find the Ceph dashboard developers: the ceph-dashboard IRC channel and the dev mailing list. So feel free to reach out to us in case you have any suggestions or ideas or whatever. Thank you very much.