From YouTube: Ceph Developer Summit Quincy: Dashboard
Description
00:00 - Focusing on ease of use
03:20 - Pacific Issues (specifically NFS v3)
11:48 - End-to-end workflows
22:46 - Multisite management
29:50 - RADOS Gateway (RGW)
34:25 - Monitoring
51:55 - Observability
56:59 - Telemetry
01:01:12 - Performance & Scalability
01:07:23 - REST API Improvements
01:25:56 - Google Summer of Code
Full agenda: https://pad.ceph.com/p/cds-quincy
A: Let's go back to the CDS. Before starting with the topics, I'll just paste the link in case it's not yet pasted in the chat. I just wanted to quickly summarize the topics that we're basically going to vote on for the Quincy release. Well, as you know, Lance already mentioned that there have been some recent changes, and those have also impacted the dashboard team. We've lost a lot of colleagues, mostly from SUSE. I see Tatiana here. Hey Tatiana, nice to see you back here. That's also impacted our ability to deliver new stuff, and made us reconsider our current focus. So one of the things that we are going to try to do is focus more on ease of use and usability, rather than perfectly mimicking all the features that Ceph provides.
A: So we are trying to take this lean approach: rather than providing everything and keeping pace with all the features that Ceph provides, we're also trying to bring some workflow-driven usability. Right now the dashboard is basically mimicking most of what Ceph provides, so we want to change that approach a bit and also provide a workflow-guided way of using the dashboard. That would basically mean, for example, that we want to bring in a cluster install wizard and an upgrade wizard.

A: If you want to start using RGW, then you would have this workflow, or wizard, that guides you through all the steps that you need to perform. So that would be, more or less, the change in mindset. Let's see what happens with that. I think it's ambitious, but on the other hand we need to, I mean, streamline the development a bit.
A: I think that may help us focus on the quick wins that are there in the dashboard. And the last topic, I think, is what we call the lean dashboard. It's basically removing everything that, I mean, is in excess, that we can hardly cope with, and trying to define the clear areas where we can grow the dashboard. This also has a lot of impact on the back end, and it's more of a cleanup and refactoring effort.

A: So perhaps it doesn't impact the user as much, but we will need this back-office work in order to improve the code and how we develop the dashboard right now. So those would be the three main topics that will guide our approach for the next year.
A: Let me know if you can see my screen, please.
A: Okay, thanks. So, first topic. There is a long list here and now it's less than an hour to cover it. First of all, we are going through the Pacific issues. There are a couple of things that have recently been changed in Pacific, or not so recently, but they are there and they might impact the status of some features. The first one is NFS, especially the v3 version, and specifically Alfonso has been working on this recently. So I'm not sure, Alfonso?
B: When writing configuration from the dashboard, we currently have the configuration that the orchestrator made and the user-defined configuration. So currently I'm working on polishing some issues with the user-defined configuration. But the question here is whether v3 is provided by cephadm, the official deployment tool.

D: I mean, we can add support for it, but that adds some complexity to cephadm. So the idea was to just drop it for now, and then we have to decide what we want to do with the dashboard.
A: Yeah, there were some discussions, and I'm happy to see Federico here, because there were basically some downstream customers and scenarios where, I guess... I mean legacy systems, HP-UX I think I remember, that didn't have support for v4. So some specific kinds of users and customers may need it.

F: Well, not just ancient; mostly, mostly it's ancient platforms. I'm not even sure HP-UX ever got to v3; I guess it may be dead. But more interesting might be Windows: there's a lack of a v4 client, and there are in fact quite a few uses of NFS on Windows, believe it or not.
G: As we look at the migration of the Gluster customers to Ceph, we're going to see the Windows users and talk about v3.

G: Well, I don't want to define what you guys are doing upstream because of downstream requirements. I was here just to celebrate Lance, and we can talk about what downstream needs separately; but right now, just plan what seems right for upstream.
E: I think we should have a conversation with Jeff Layton, because, well, aside from the complexity in cephadm of dealing with v3, what I understood from him was that we really shouldn't be supporting v3, because the failover stuff just isn't reliable, isn't safe.

G: Yeah, turning on v3 means that you accept all sorts of compromises that you wouldn't if you just stuck to v4. You have an inferior HA model, and I can't remember what the other constraints were.
A: Yeah, right now, as Alfonso mentioned, there are two ways of configuring NFS in the dashboard. One comes from the legacy way of doing it, which was without any orchestrator. That's where the dashboard was accessing the Ganesha config, from the export files stored in RADOS, and that's still working; so technically, if someone wants to deal with that, it's there. The other thing is that this conflicts with cephadm: as soon as you install or configure the dashboard to use cephadm, that's no longer available, because cephadm completely changes the way those files are configured. So, yeah.
D: On the other hand, I don't know how upgrades are then going to work, so we have to find some kind of middle ground, I guess, between hiding it from users and still supporting it.
A: Yeah, as far as I can remember, Sebastian, you could configure v3 by means of these config files in the mon store, right?

D: Yeah, you can override the Jinja2 template to make it work, but that's going to be problematic if you're upgrading the cluster. But that would be a zero-coding-effort v3.
A: Okay, okay, yeah, makes sense. Let's leave it for a later meeting.

A: Okay, and this is the other big topic, which I think was mostly discussed yesterday in the orchestrator meeting, so we can probably also sync later on applying this for HAProxy. Okay. So, regarding this one, the end-to-end workflows: this is basically what you would follow in the tracker.
A: Basically, you can find there a list of usability improvements. This has been mostly driven by the file Paul Cuzner shared; it's basically a series of blueprints, mockups and designs for bringing in a couple of workflows. You have also been involved here, so feel free to give your input.

H: No, no, it's just... I think that in the past summer we were working on this kind of thing, in order to propose some high-level workflows to improve the user experience, and basically we detected, well, three or four big areas where we can improve.
H: One of them is the installation wizard, in order to make it easy to install a cluster, okay, and to replace what we have downstream; for example, in the previous version there was a graphical installer, in Cockpit, and, well, we'd do something similar. This is one of the areas. Another thing is about, for example, OSDs and storage device management.

H: I think that, in addition to that, for the host management we have documents in the documentation, and the third area is in the future plan. Okay, and apart from that, well, several usability improvements around, for example, cluster maintenance: making it easy to replace OSDs, this kind of thing.
A: The wizard... I was trying to look in the documentation for where this is.

A: Yeah, so the main issue with the installation wizard is, I mean, we've been having second thoughts on this one, and it's trying just to bring parity with the old Cockpit installer, which basically provided an install from scratch based on ceph-ansible.
A: But the thing is that now, with the cephadm bootstrap, it's not clear where to start, because some of the decisions about the cluster are already made. So it would be a lightweight wizard; there is no need for an end-to-end wizard such as the Cockpit installer. So, yeah, we need to check exactly where this wizard makes sense, and one possibility is that it basically starts after the bootstrap phase.

A: So as soon as this first node has been deployed with the mon and the manager, the user enters the dashboard and there is a wizard popping up there, so the remaining steps can be driven by the UI. But then the question is: what kind of steps are we talking about? I mean, expanding the cluster to other hosts, adding hosts. That's basically the thing that we pasted there, I think.
A: I see. And wasn't there a similar one for the old installer?

A: Well, it was basically, I think, a series of mock-ups for selecting the nodes; we were talking about how to filter nodes by labels and different ways of selecting the nodes. I'm not sure if that was finally merged.
A: Okay, so, well, let's move to the next topic, and we can go back to this one if you find it. In any case, I think that, well, basically the workflow expected here would be just to add more hosts and also define the kind of services that you want to deploy on those hosts, by using labels or whatever. So that would be, more or less, the set of steps that we would propose in this installation wizard. So, regarding the cluster upgrade.
A: As far as I know, there is support for that right now in cephadm, and it's basically container-driven, right? So what's going on here: are we checking for upgrades in the images published in the registry, or how is this handled?
D: You can issue an upgrade, and the upgrade itself is implemented. Okay.

A: Yeah, I'm curious, because I assume that some of the hosts would need to enter maintenance mode, or how is this handled?
D: The cluster stays online. I mean, Sage implemented it so that it's just one daemon after another that gets upgraded or updated, and the cluster keeps working and everything works.

E: So you can just pick an image, probably one that's ':latest', and that will just pick the newest and start.

D: We might be able to just fetch the latest tag and compare whether it still matches what's currently deployed in the cluster, and if it does not match, then we could issue some kind of informational alert that an upgrade is available, even without querying the registry, by just downloading the latest image.
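The check D describes (compare what is running against the newest available image, and only alert on a mismatch) boils down to a digest comparison. A minimal sketch, not cephadm's actual code; the function names and the digest-string format are assumptions:

```python
from typing import Mapping


def upgrade_available(deployed_digest: str, latest_digest: str) -> bool:
    """True when the newest image digest differs from the one running."""
    return deployed_digest != latest_digest


def daemons_needing_upgrade(deployed: Mapping[str, str],
                            latest_digest: str) -> list:
    """List the daemons whose running image digest is not the latest one.

    `deployed` maps daemon name -> image digest it currently runs.
    """
    return sorted(name for name, digest in deployed.items()
                  if digest != latest_digest)
```

The dashboard could then surface `daemons_needing_upgrade(...)` as the informational alert D mentions, without contacting the registry on every page load.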
A: And does this only involve the Ceph images, or also the monitoring ones, the monitoring-stack ones?

E: Currently the monitoring ones too; or, as soon as that pull request merges, we're just going to be sort of tied to a particular Ceph release. So as soon as you're on version 12 or whatever, whatever version of Ceph, there's a specific version of the monitoring images that it will install and also get automatically upgraded to.
A: Okay, the next topic is the usability improvements that I mentioned. Most of them have come from Paul Cuzner's analysis of the dashboard and its usability, so there is a lot of interesting stuff there.

A: They are very small tasks and we are progressively fixing them. But if you find anything, for example, I remember yesterday you mentioned the inventory versus the devices; this would probably fit here, if it's not already here. So this kind of fix goes there.
A: So that's mostly it, then, regarding the usability and end-to-end workflows. Now, on the multi-site management: this actually covers purely multi-site and also multi-cluster. Currently, regarding RBD mirroring, I collected these issues; they were opened by Jason. So currently this is the gap with the RBD mirroring support in Ceph master, I think. Right now the dashboard only provides pool-based and image-based journal mirroring, so we also need to bring in the snapshot mode. I think I remember Jason mentioned that the journal mode was going to be deprecated... oops, not here.
A: No, I mean basically also the enabling and disabling of the RBD images in order to do the mirroring. So probably RBD mirroring is the most complete mirroring or multi-site feature right now in the dashboard.

A: Then there is the RGW multi-site. This has been on our plan since, what, Nautilus? So right now the dashboard supports the Grafana dashboards for it; at least there are some monitoring dashboards, and the status of the replication can be kind of followed from them.

A: But the idea here is to bring in also a wizard just to configure the multi-site, especially since there is no multi-site configuration in cephadm any longer. So yeah, this, well.

A: This also has some dependencies, and it's also related to what we were talking about yesterday: the RGW admin ops and how to trigger the configuration of the realms, zones and so on from the dashboard.

A: So let's see, yeah.
A: And the last one to come was CephFS mirroring. I'm not familiar with this one, and I don't see Patrick here. So, is this following more or less the same approach as the RBD mirroring? There are CephFS mirroring daemons and you just launch them, and...?

E: Yes, yeah, it works the same way. The daemon works the same way as the RBD mirroring one: you just basically launch it, or N of them, and they just do their thing, and then it's just a matter of basically enabling and disabling it, configuring it on various volumes or subvolumes, and that's it.

E: The orchestrator part is really simple; Rook supports it now too. You basically just turn the daemons on, and that's about it. Okay.
A: Yeah, so hopefully we can reuse most of the code from the RBD mirroring for this part. Yep. Okay, great, and the last topic here is the multi-cluster. This is something that has popped up in many of our face-to-face meetings, and I don't see Jeff here, because he's a great supporter of this. But the idea here is that from a single dashboard you can manage multiple Ceph clusters.

A: This is something that is potentially feasible, because the front end technically is just an application running in the browser, so it could technically be directed at a different back end and switch from one back end to another. But it needs to be tested, and that also would be a very basic implementation. So, but...
E: It would have to be the same version of Ceph on each cluster, though, right? (That's true, yeah.) I wonder if it's simpler just to have something where the clusters are linked, and then if you select a cluster it just opens a new tab with the other cluster's dashboard, or something.
A: Yeah, the good thing, starting with Pacific, is that we now have versioning of the schema in the API, so at least, if there is some mismatch between the front end and the back end, an error will be triggered, and it will be more intuitive than before. So we could catch this kind of version mismatch.
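The versioned-API safeguard A mentions can be illustrated with the media-type scheme the dashboard REST API adopted in Pacific, such as `application/vnd.ceph.api.v1.0+json`. The compatibility rule below (same major version, served minor at least the requested one) is a plausible sketch of such a check, not the dashboard's actual negotiation code:

```python
import re
from typing import Optional, Tuple


def parse_api_version(media_type: str) -> Optional[Tuple[int, int]]:
    """Extract (major, minor) from e.g. 'application/vnd.ceph.api.v1.0+json'."""
    m = re.match(r"application/vnd\.ceph\.api\.v(\d+)\.(\d+)\+json$", media_type)
    return (int(m.group(1)), int(m.group(2))) if m else None


def compatible(requested: str, served: str) -> bool:
    """Front end and back end agree when majors match and the server's
    minor version is at least the requested one."""
    req, srv = parse_api_version(requested), parse_api_version(served)
    if req is None or srv is None:
        return False
    return req[0] == srv[0] and srv[1] >= req[1]
```

A multi-cluster front end could run this check before switching back ends and show a clear error instead of failing on a random request.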
B: Yeah, my question here is about this multi-cluster feature: comparing its priority with the others, RGW, RBD or CephFS, does it take lower priority, or not?

A: Yeah, so it would be helpful... I'm not sure if Michael is still here; I don't think so, right. So, just to weigh the next steps, we should check, I mean, the number of users already using RBD mirroring, the RGW multi-site and CephFS mirroring, just to know that better. So, do you have any idea of when the results of the 2020 survey are going to be out?
G: Multi-site is by far the most common; RBD mirroring is not used that much. I think multi-site is by far the most common geo-replication that we are seeing in use.

E: I suspect the RBD mirroring users are tied to Kubernetes clusters. Oh yeah, but I'm not certain about that; that'd be my guess.
A: Okay, thank you. So the next topic is RGW. I think yesterday we already discussed this a bit, so we can probably skip it. This is something that's been in the background for a while too, and we definitely need to bring it to the dashboard soon, and I think Alfonso is also going to commit to this, right?
A: Yeah, yeah, okay. We might explore this, another connection with users: at the bottom of the agenda there is the cephx management, and that would also bring some other kinds of users to the dashboard. So, well, we might try to review the different kinds of users that live around the dashboard and perhaps try to align them with actual dashboard users, or map them.
D: Cephx is a very developer-oriented approach, but at least when caring about the cephx keyrings of the different daemons, it's something that only developers are really interested in, or should be interested in. I don't know how much sense it makes to integrate cephx into the dashboard.

D: For clients, I would really make it depend on the use case, and not have a general cephx management page: really focus on which use case you really want to support, and leave everything else out.
A: Yeah, yeah, makes sense. We'll take note of that. Yeah, it's a workflow, right? So after you create or enable some specific component, you export it.

A: Okay, then, it's what, 15 minutes to the hour? It might take a bit extra, but let's try to keep it short. So the next topic is monitoring. There are a few requests coming from the tracker.
A: So, we've seen users requesting persistence for the Grafana dashboards, or at least being able to configure that. Right now, as soon as you restart the container (at least this is happening in master), you lose any kind of configuration change that you made to Grafana, and we've had some users asking whether they could keep their changes there.

A: And what are your thoughts here? Some discussion led us to the conclusion that we could store the Grafana database within Ceph, but, yeah, I'm not sure.
A: Let's see, I'm not very supportive of this, because if a user screws up their dashboard, the Grafana dashboard, I mean, with the current approach, just by restarting the container you get a fresh Grafana, so there is no issue. But if we persist that, then we are complicating things. I think we have to draw a line somewhere: we want to bring customizations, and no further.

A: So, no big supporters of this, I mean?
B: One middle option would be, maybe, when you restart the container, to have some way to tell whether you want to purge, so that you get the original dashboards back or not. If you are able to pass something like purge, then on restart you get the original queries that are safely checked into upstream, into our repos, and then you can make the user aware that he was tweaking some query and breaking the thing.
A: Yeah, I don't have a strong view here. On the other hand, we could simply store it; for example, in previous versions of ceph-ansible, I think the Grafana database was volume-mounted, so you kept the state of Grafana. But that would only work with a single instance; if you want to deploy multiple Grafanas, there is no shared state. But probably that's not a big thing right now.
H: I think the first thing that we can do... at this moment the Grafana database is inside the container, okay? So if we restart the container, we lose the customization. So the first thing that we can do, which is the same thing that ceph-ansible does, for example, is to use a folder on the host in order to persist the database. But this has the drawback that we cannot move the daemon between hosts, okay, so, well.

H: This opens the door to new things, let's say, for the monitoring stack: how to store the database and the configuration of the monitoring stack in a common place that we can always access, okay. So I think that, as a first step, we can just follow the same strategy that we have in the ceph-ansible world (it's my opinion, okay) and use a folder in order to store the Grafana database.
A: Yeah, and also, in fact, the shared-storage point would be tied to the high availability of the monitoring, which is the last topic, and yeah, probably we should have talked about that first. Okay, I guess for the customization of the alerts it's more of the same thing.

A: I mean, we've got some requests from users that they don't want, for example, the nearfull ratio alert (I'm not sure if it's that one), or perhaps they want to lower it to 70 percent or something; they want to tune that. It's more of the same: I'm not sure we want that level of customization, which is going to complicate support, debugging, et cetera.
A: Yeah, okay, the next topic here is native widgets, for, well... basically this means removing Grafana. So Grafana is complicating things: the deployment, the monitoring stack, etc. Also, the networking is not ideal. We are embedding Grafana with iframes in the dashboard, and when you mix that with HTTPS and different domains, well, it's a hell of a configuration. It usually ends up with users not getting Grafana because of self-signed certificates not being approved, and it's a real mess.
A: So there have been different discussions here. One was having a proxy: basically, the dashboard was acting as a proxy, and Grafana was actually proxied by the dashboard, so the end user only fetches the dashboard from a single URL, a single hostname. The alternative would be to have native widgets. That would also allow for better integration with the dashboard, so we could have widgets everywhere, not only in iframes, and also consult the Prometheus stats in every place.
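For the native-widgets idea, the Prometheus HTTP API can be queried directly over `GET /api/v1/query`, and a widget would render the returned instant vector. A minimal sketch, with hypothetical helper names, of building such a query URL and unpacking its JSON response:

```python
from typing import Dict, List, Tuple
from urllib.parse import urlencode


def instant_query_url(base: str, promql: str) -> str:
    """Build a Prometheus instant-query URL (GET /api/v1/query)."""
    return f"{base.rstrip('/')}/api/v1/query?{urlencode({'query': promql})}"


def vector_values(response: Dict) -> List[Tuple[Dict, float]]:
    """Pull (labels, value) pairs out of a Prometheus instant-vector response.

    Each result entry looks like {"metric": {...}, "value": [ts, "1"]}.
    """
    if response.get("status") != "success":
        raise ValueError("Prometheus query failed")
    return [(r["metric"], float(r["value"][1]))
            for r in response["data"]["result"]]
```

A native widget would fetch `instant_query_url(...)`, feed the decoded JSON to `vector_values`, and hand the numbers to the charting library, with no Grafana or iframe involved.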
D: So we ended up implementing a proxy for Grafana, and that wasn't great, because Grafana didn't really want to be proxied, and the third iteration was then to have it just embedded as an iframe.

D: I think custom widgets are going to be an option, right, but that's a ton of work, yeah, and we shouldn't do it by ourselves.

D: That's what we went through, and, if I'm not mistaken (Tatiana can correct me), in openATTIC we started with custom widgets and then replaced them with a proxy for Grafana.
A: A comment in that tracker... no, not in that one; there is another one for the proxy, I will look for that later. It mentioned that with the proxy mode of Grafana you cannot access Grafana in standalone mode, so you lose the ability to check Grafana separately, right? It can only be accessed in the dashboard.
A: Okay, well, I mean, for this kind of thing we might try to have PoCs, just to see how feasible they are, pros and cons, and yeah. Let's see: I was recently checking, and it seems like the library that we are using for charting supports, or there is a plugin for, directly hitting Prometheus.

A: So technically it shouldn't be that hard; at least we could access the data sources directly. But the other thing is that we would have to build all the rest of the fancy UI that Grafana brings. But yeah, perhaps for simple widgets it might work.
D: Okay, I mean, you can have some kind of... if one instance fails, you can reconfigure the dashboard to point to a different instance.

D: I think Alertmanager itself runs as a proper cluster of hosts, of daemons. But regarding Grafana, I think that's probably still not working.
E: I haven't considered that one, but, I mean, if it's an HTTP or HTTPS endpoint, then it should just work. It's a question of just figuring out how it should be deployed.
A: Yeah, yeah. My main approach here would be just to, I guess, if the count of daemons is more than one, deploy the HA proxy, because otherwise it's not usable, at least for Grafana; and the same, I assume, for Prometheus.
D: So if you do custom changes in one Grafana instance, and we are persisting the Grafana database, and we fail over to a different one, then the custom settings are gone, yeah. Grafana, with that pull request from Paul, is actually a stateful service.
B: And yeah, okay, okay. I was wondering about this workaround of storing several URLs to instances. Maybe with the refactor that allowed a dictionary-like type of settings, I don't know if you can store there, well, the desired URLs; I mean, as a temporary workaround, while we find a more consistent solution.
B: Well, here it is. Say that this preliminary solution, which allows storing an unlimited number of URLs by serializing the data... well, as a temporary workaround, we could work to adapt this. So, for example, if you set up several instances, you can find the first one that is reachable, and if it goes down, you can reach out to the other ones.
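B's fallback idea (store several instance URLs and use the first reachable one) reduces to a small selection helper. A sketch with an injected health probe so it stays testable; this is an illustration, not the dashboard's real code:

```python
from typing import Callable, Iterable, Optional


def first_reachable(urls: Iterable[str],
                    probe: Callable[[str], bool]) -> Optional[str]:
    """Return the first URL the health probe accepts, or None if all are down.

    `probe` would typically issue a short HTTP health check against each URL.
    """
    for url in urls:
        if probe(url):
            return url
    return None
```

The dashboard back end would call this with the stored URL list on each failover, so the active Grafana (or Prometheus) endpoint is always the first healthy one.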
A: Yeah, the other thing is that, for the case of Grafana, which basically ends up as a URL, it's the user, the browser, that has to decide which front end it is consuming. So, I mean, I don't know how to do that.
A: Okay, let's switch to the next topic, which is observability. We have the topic of log aggregation, and I'm not sure... yesterday there was a discussion on cluster logs. I'm not sure if there is a desire, if there's been some traction around this, because, I mean, basically this would mean having centralized logs. More or less in the same way as we have Grafana, we might have Kibana or another stack for centralizing and processing logs.

A: Is this something that we have had requests for in the past? There's been...
D: About the user experience when using Ceph logs, when deploying cephadm daemons: I really got a lot of feedback that just searching the logs for errors when deploying services was really not usable. So for cephadm I ended up with a different way of providing errors that are specific to particular instances, like daemons or services, in a way that users don't need to look into the logs.

D: I mean, right now there are plenty of important error messages that we should, at the least, show in the dashboard, right? Well...
A: Yeah, I'm curious; I'm not too much into it, but currently the logs for, for example, the OSDs in cephadm, they are going through journald, right?

A: Okay, well, we'll keep an eye on this. I think it might be interesting. So if there is interest in this, please... no one is on it yet, and we can prioritize.
A: Okay, the next topic is telemetry. I'm not sure if Jared added this one, or who it was. I think it's interesting. So I don't know if currently... maybe you know this: telemetry is recording how many modules are enabled, right? So we can track whether the dashboard is enabled, and how many users are using the dashboard. Is there such a thing in telemetry?
E: We know that the module is enabled, yes, but I think that's all. My suggestion here is that maybe we should just count which URLs, or which pages in the dashboard, are being used, maybe like a count or something, just so we can tell what parts of the dashboard users use, I mean.

E: For you to know... but okay, okay. It seems like something as simple as what pages are being used and how often they're viewed, or whatever, would do.
A: Yeah, we've been talking about the possibility, I mean, for a long while now, the possibility of embedding an analytics tool, an open-source one, not Google Analytics, in the dashboard, so we could have at least information on how the users are using the dashboard, like, from a usability perspective: heatmaps of the dashboard, that kind of feedback.

A: Okay, yeah, and from the data, at least from the dashboard config settings, we can kind of infer which specific components a user is using, for example from the RBD image counts.
A: I mean, I'm not sure, but yeah, there are specific settings in the dashboard, and from them we can infer whether a user is using a given feature in the dashboard. But for a clearer report, yeah, we will clearly have to add something. The thing here is not to be, you know, intrusive, or, you know, the user may feel that they're being spied on by us adding cookies or something; so yeah, we will have to try to be very open about this.
A: Even with this work on enabling telemetry, this banner, there's always this discussion on how to present it. You know, on one hand we want to, I mean, proactively encourage users to enable it, but yeah, we don't want to really piss them off by insisting. So we were recently talking with Jared about how often we should display this notification again, after a minor upgrade, or maybe every six months or so, in order to remind users that there is this telemetry program and they may join it.
E: Yeah, yep. I guess, maybe this is separate, or maybe it isn't: I wonder about a similar report on the API, the dashboard API, since that might have external users too. Just which endpoints are being used, what their hit counts are, something like that, might be interesting for you. Sorry, how's that? Just counts of how many times the back-end API actually gets called.
A: Good to know, yeah. I was thinking... I think in the Prometheus exporter, Patrick recently added some metrics for that, so now Prometheus has metrics on the use of the Prometheus exporter, or at least the latencies seen there. So we may try to think of something like that for the usage of the API. Yeah, I will, yeah, okay.
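The per-endpoint hit counts E asks about could start as something as small as this in-memory sketch (a hypothetical class, not an existing dashboard or telemetry facility):

```python
from collections import Counter
from typing import List, Tuple


class EndpointStats:
    """Count how many times each REST endpoint is hit (illustrative only)."""

    def __init__(self) -> None:
        self.hits = Counter()

    def record(self, method: str, path: str) -> None:
        """Bump the counter for one incoming request."""
        self.hits[(method, path)] += 1

    def report(self) -> List[Tuple[Tuple[str, str], int]]:
        """Endpoints ordered by how often they were hit, busiest first."""
        return self.hits.most_common()
```

A request hook in the back end would call `record(...)` per request, and an aggregated `report()` is the kind of anonymous usage summary that could feed a telemetry report without tracking individual users.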
A: Thanks. The next topic is performance and scalability. This is also linked to a discussion that we just mentioned during the RADOS CDS, on the manager improvements. Right now, specifically from the dashboard side, we are focusing on, well...

A: ...those are the areas that we have found that may not scale well and might bring some bottlenecks next. The first topic there is bringing in some caching. Right now Pere and Waad are working on this topic, exploring that, and they are benchmarking the behavior of the Ceph dashboard. I think Pere mentioned today that he was testing with four thousand OSDs, and it seems like there's a noticeable degradation.
A: So we will, I mean, start working on bringing this in. And the other big topic here, because the caching is not going to solve everything, is bringing pagination to the dashboard. Right now most of the queries retrieve all the data from the back end.

A: So, for example, the other day during the demo of the cluster configuration options: there are a thousand, or more than a thousand, of them, and all of that list is retrieved; even if only the basic ones are displayed, the whole set is retrieved. So having this kind of pagination and filtering is definitely going to help a lot in reducing the scalability issues.
C
Because
we
recently
also
bring
in
the
filtering
for
the
rgw
buckets
to
have
the
query
parameter
for
something
called
stats,
so
that
may
we
can
brought
up
for
the
rvdosd
list
image
list
of
episode
list
that
you
cannot,
because
there
are,
there
is
enough
payload,
so
just
load
on
that
single
page
we
can
just
maybe
we
can
do
something
like
on
the
single
page.
We
just
have
to
load
the
specific
list,
so
we
already
know
what
the
what
the
column,
what
info
the
columns
want
in
order
to
display
those.
C
So we can just have that payload in the API, and we can call the full list only if we go into the details of a particular image or OSD. Otherwise we don't want to query the whole list. So I think I'd support the filtering as a higher priority than the pagination, because that will benefit the listing.
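A sketch of the server-side pagination and filtering being proposed. The data set and the parameter names (offset, limit, search) are invented for illustration; the real dashboard API may look different:

```python
# Hypothetical in-memory stand-in for a backend listing (e.g. RBD images).
IMAGES = [{"name": f"img-{i:03d}", "size": i * 10} for i in range(1000)]

def list_images(offset=0, limit=25, search=""):
    """Return only the slice the table needs, plus the total count,
    instead of shipping the whole data set to the frontend."""
    rows = [r for r in IMAGES if search in r["name"]]
    return {
        "total": len(rows),
        "rows": rows[offset:offset + limit],
    }

page = list_images(offset=50, limit=25)      # one table page
filtered = list_images(search="img-00")      # server-side search
```

The `total` field lets the frontend render the paginator without ever holding the full list, which is the point made above about only fetching the full payload in the details view.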
A
That means that we're not getting all the data sets from the backend, and, for example, the filtering and searching are going to be limited. Right now, in the search box in each of the tables, as soon as you start writing something you get all the rows that match. If we put this in place, since there is no way to create a database from the backend information and use indexes or so, for that we would need...
A
Yeah, but this is definitely a topic that we have to plan for the near future. So I hope that in Quincy we'll see some pagination in place, at least for some specific controllers like RBD, as we were talking about, or, for example, RGW. That is also very linked to the backend. So I'm not sure, but yesterday you mentioned, for example, retrieving all the buckets or objects with some kind of way of batching it, receiving an index or an iterator, or something like that.
A
So you don't really need to retrieve all the info from RADOS or wherever you are getting it from, and you can iterate over it instead of fetching the whole data set every time. Is that something being worked on in RGW?
A
So there is current support for that in the admin ops API, and in our implementation of S3 as well, right?
F
A
Okay, the next topic is the REST API, but I think that's a pretty broad topic and there is no clear idea for it. So I just want to be sure that there are ongoing improvements on the REST API side. For Pacific there's been a huge leap: I mean having a stable, versioned API, also documented and everything else. I think it's about improving that, rather than a real breakthrough.
A
So the next topic is the, well, lean dashboard, let's call it that. That's basically the idea of trying to rethink how the dashboard is built, the code base and the current approach, and try to find reusability patterns and things that we can do to reduce the code base. Right now I think we have a hundred thousand lines of code in the front end, and thirty thousand or so in Python. So that's a huge amount of code, and we clearly have to...
A
I mean, reduce that if we want to survive. And that's also multiplied by three downstreams, three upstream releases, sorry, plus Quincy. So, well, that's a lot of code we have to take care of. And among the things that we're thinking of, for example, and this is more a personal taste, I'm not a very big fan, or I mean supporter, of UI...
A
I mean, coding UI stuff. I think that's most of us, and that means not only dashboard developers: core Ceph developers are used to Python, but not to front-end technologies. So having to code TypeScript, JavaScript, Angular for extending the dashboard, I find that is usually a road block for everyone that wants to approach the dashboard. So the idea is trying to bring some of the UI stuff into the back end, to the Python-based codebase.
A
It's not trivial. It requires, I mean, totally rethinking this, but I think it can help us to at least reduce a bit of the duplicated code in the dashboard.
A
There I mentioned that, for example, for forms, in JavaScript there is a thing called JSON Forms. So there is no standard for having a backend-driven UI, but there are some moves in that regard, like, for example, defining forms in JSON. So you can have the JSON generated in the back end, and the front end is only responsible for the presentation and the parsing of the information.
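A sketch of what a backend-generated form could look like, in the spirit of the JSON Forms idea mentioned above: the Python side emits a data schema plus a UI schema, and the frontend only renders and submits. The field names are invented for illustration:

```python
import json

def pool_create_form():
    """Hypothetical backend endpoint returning a form definition,
    so no Angular code is needed to add or change a form."""
    schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string", "minLength": 1},
            "pg_num": {"type": "integer", "minimum": 1},
            "replicated": {"type": "boolean", "default": True},
        },
        "required": ["name", "pg_num"],
    }
    ui_schema = {
        "type": "VerticalLayout",
        "elements": [
            {"type": "Control", "scope": "#/properties/name"},
            {"type": "Control", "scope": "#/properties/pg_num"},
            {"type": "Control", "scope": "#/properties/replicated"},
        ],
    }
    return {"schema": schema, "ui": ui_schema}

form = pool_create_form()
payload = json.dumps(form)  # what the REST API would hand to the frontend
```

With this split, adding a field to a form is a backend-only change, which is exactly the "Python developers shouldn't need front-end skills" goal described above.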
A
So let's explore this move and see what happens. And also there's a collection of tracker issues here, basically refactoring the navigation bar, also replacing it with a JSON-generated one.
A
Okay, the next topic is cross-component interfaces. Yesterday we quickly talked about how to exchange, for example, information between cephadm and the dashboard. Basically, for the sake of... we were talking about the service spec: we are using the service spec to display the different... well, we are not really using the service spec, we have the service spec hard-coded in the dashboard.
A
So, by means of inspection, we are currently doing that, for example, for the manager modules management section: we are taking the dump of the manager map, the mgr map dump, and we're using that dump to generate the structure for the different manager modules, the types that these modules accept for their options, etc. So having this kind of approach can help the dashboard to always be in sync with other components like, for example, cephadm.
A
So there is a reference there. Well, pydantic is a framework for Python to enforce typing at runtime, and it also allows exporting JSON schemas from the structures. So I think it might be useful for this, in case we want to do that. And as long as we are using classes, we can rely on inspection, since we are directly working in the same manager and Python sub-interpreter.
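pydantic can export a JSON schema directly from a typed model; below is a stdlib-only sketch of the same inspection idea, with an invented `ModuleInfo` structure standing in for a shared mgr data type:

```python
from dataclasses import dataclass
from typing import get_type_hints

# Mapping from Python annotations to JSON schema type names.
PY_TO_JSON = {int: "integer", str: "string", bool: "boolean", float: "number"}

@dataclass
class ModuleInfo:
    # Hypothetical shared structure; real mgr types differ.
    name: str
    can_run: bool
    error_string: str

def json_schema(cls):
    """Derive a JSON schema from the class via inspection, so the
    dashboard and other components stay in sync with one definition."""
    hints = get_type_hints(cls)
    return {
        "type": "object",
        "properties": {n: {"type": PY_TO_JSON[t]} for n, t in hints.items()},
        "required": list(hints),
    }

schema = json_schema(ModuleInfo)
```

Because the schema is derived from the class rather than hand-written, a component that adds a field only has to change the class, not every consumer's copy of the schema.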
B
So, for example, if in some component you are adding more info to the manager that maybe is not relevant for the dashboard, we shouldn't have to tweak the schema that gets verified every time. I remember that recently Sage was adding some new field and he was forced to tweak our schema and our API tests. So I think that we should reach a consensus for all the components on how to model this schema verification, so that something that is not impacting the dashboard leaves the schema unchanged.
A
Yeah, I mean, regarding just the REST API, there's a PR for adding this type checking, runtime type checking, so we can detect that, hey...
B
But now, for example, in our JSON schemas, our object schema definitions, we are putting allow_unknown: true when something has fields or data that is not explicitly checked. We can make improvements there in order to make it more flexible, more reliable. But maybe we can start this work ourselves, put the other components as reviewers, and if we all agree, go forward.
A
Yeah, I think Kwame was interested in this topic, so enforcing types and having this schema, we may discuss it with him. I don't hear Sebastian, so maybe he left; I'm not sure, so it's not like I'm expecting an answer from him.
A
Okay, well, that's about the cross-component interfaces, or data sharing. About hardening: this is kind of related. We have started trying out this type checking, runtime type checking. Right now in the API, type checking is enforced only outside runtime, either via mypy or during the QA teuthology runs: there is basically a helper infrastructure for checking the types of the responses. So we want to replace that with runtime checking, having the types of the responses actually checked.
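A sketch of moving response type checks from QA-time helpers to runtime: a decorator that verifies an endpoint's return value against a declared shape. The names are illustrative, not the dashboard's real API:

```python
from functools import wraps

def returns(schema):
    """Verify at runtime that the decorated endpoint's response
    has the declared field types."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            for key, expected in schema.items():
                if not isinstance(result.get(key), expected):
                    raise TypeError(
                        f"{fn.__name__}: field {key!r} is not {expected.__name__}")
            return result
        return wrapper
    return decorator

@returns({"status": str, "epoch": int})
def health_endpoint():
    return {"status": "HEALTH_OK", "epoch": 42}

@returns({"status": str, "epoch": int})
def broken_endpoint():
    # Wrong type: caught at runtime instead of only in a QA branch.
    return {"status": "HEALTH_OK", "epoch": "42"}
```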
A
And also regarding the Grafana JSONs. I don't think everyone agrees with this move, but, well, if you check one Grafana JSON you will see a lot of boilerplate, a lot of data there that is hard to follow. Also the PromQL expressions are, well, not very easy to process. So the idea there would be to...
A
I think everyone is weighing in probably too much, so, as I said, basically replace the current JSONs with a Python-generated version. So we could have these very simple Python files that generate a complete Grafana JSON.
A
This is based on a library, well, grafanalib it's called; it's a Python library, and I think it might be interesting to explore that. As I said, currently the Grafana panels, the JSON files, are not very well tested, so we always find issues. Like recently we had a contributor from upstream sending a PR modifying the queries, and we had to manually test that, and it's hard to tell exactly the impact of those changes. So for these kinds of assets, I think it makes sense to have them, I mean, tested.
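grafanalib offers ready-made classes for this; below is a dependency-free sketch of the principle: keep the titles and PromQL in short Python functions and generate the verbose Grafana JSON from them. The metric names are illustrative:

```python
import json

def graph_panel(title, expr, panel_id):
    """Expand a one-line panel description into Grafana's verbose JSON."""
    return {
        "id": panel_id,
        "title": title,
        "type": "graph",
        "targets": [{"expr": expr, "format": "time_series"}],
        "datasource": "$datasource",
    }

def osd_dashboard():
    panels = [
        graph_panel("OSD Read Latency", "ceph_osd_op_r_latency_sum", 1),
        graph_panel("OSD Write Latency", "ceph_osd_op_w_latency_sum", 2),
    ]
    return {"title": "OSD Overview", "panels": panels}

dashboard_json = json.dumps(osd_dashboard(), indent=2)
```

Because the panels are now plain Python, the queries can be unit-tested and reviewed as code instead of eyeballing a diff of generated JSON, which is the testing gap described above.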
A
Well, if we want to carry out this PoC, I mean, we may try different alternatives and see how they compare to each other. So, okay, and the last topic is a backporting helper. This is something we have discussed in the past: when a developer or someone from the backporting team has to bring some code back to an older release, Nautilus or whatever, they usually have a lot of issues with conflicts, etc.
A
So having at least a kind of guide, but ideally perhaps a script or something to detect issues. Because, as most of the dashboard is based on TypeScript, JavaScript, HTML, some of these issues are not even detected at runtime or build time or whatever. So, for example, if you bring in some CSS class that exists in master or Pacific but is not defined in Nautilus...
A
No component in the chain will complain about that. So you will end up with a class being referenced that doesn't exist in that branch, and we have seen that; that's already happened. And having maybe a linter, some kind of tool for helping debug these backporting issues, is something that we may consider, especially given that we now have to backport to three different branches.
A
So that's quite a lot of stuff to take care of.
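A sketch of the backport-helper idea: extract the CSS classes a backported template references and flag any that the target branch's stylesheets never define. In-memory strings stand in for real files, and the regexes are deliberately simplistic:

```python
import re

# Hypothetical backported template and target-branch stylesheet.
BACKPORTED_HTML = '<div class="cd-header bold"><span class="new-badge"></span></div>'
TARGET_BRANCH_SCSS = ".cd-header { color: red; }\n.bold { font-weight: bold; }"

def referenced_classes(html):
    """Collect every class name used in class="..." attributes."""
    classes = set()
    for attr in re.findall(r'class="([^"]+)"', html):
        classes.update(attr.split())
    return classes

def defined_classes(scss):
    """Collect class selectors defined in the stylesheet."""
    return set(re.findall(r'\.([\w-]+)\s*\{', scss))

def missing_classes(html, scss):
    """Classes used by the template but undefined in the target branch,
    exactly the silent breakage no compiler catches."""
    return sorted(referenced_classes(html) - defined_classes(scss))

missing = missing_classes(BACKPORTED_HTML, TARGET_BRANCH_SCSS)
```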
B
This was discussed in a previous CLT, and it's regarding all the issues that we have trying to make the code compatible with different versions of different dependencies across all the supported releases; it's complicated. So it was approved as a proof of concept to provide a virtualenv for the dashboard that could be delivered.
B
So you can lock, you can pin a version of a package and rely on that pinned version in your virtualenv, regardless of whether it is available or not in the distro release. So you can rely for sure on that version, and your code can work perfectly with it. You don't have to worry about whether the distro ships an older version or a newer version, or it's directly not available. Simplifying our lives, mainly. And yeah, as soon as we can, we should proceed like this.
A
Okay, the next topic is cephx management. I think we already covered that: the idea could be perhaps to explore that as a workflow, so we can export the keys on a per-component basis, and maybe list the keys of a component in a page or something. And the last one is the dashboard self-test. So the idea would be to implement the self-test hook that exists in all manager modules, so the dashboard can be self-tested.
A
So that would mean that everything is okay. But instead of that, it would be to replace it with a single CLI command, so you can run ceph dashboard self-test or whatever, and it just runs a series of tests or checks to see that everything is okay: that there is a connection to Grafana, a connection to RGW, etc. So I think that's an interesting idea; it can be useful for users.
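A sketch of what such a self-test command could run internally: a series of named checks summarized into one result. The check functions are stubs; a real implementation would probe Grafana, RGW, etc. over the network:

```python
def check_grafana():
    return True  # stub: would issue an HTTP request to the configured Grafana URL

def check_rgw():
    return True  # stub: would call the RGW admin ops API

def self_test():
    """Run every registered check and report an overall verdict,
    the shape a `ceph dashboard self-test` command could print."""
    checks = {"grafana": check_grafana, "rgw": check_rgw}
    results = {name: fn() for name, fn in checks.items()}
    return {"success": all(results.values()), "checks": results}

report = self_test()
```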
A
And the last topic is, well, we have a couple of project ideas for this year's Google Summer of Code and Outreachy internship programs. The first one is visual regression testing. The idea of this is that right now, detecting issues in the UI, basically visual mismatches, styling issues, defects, these kinds of things, always requires manual inspection.
A
So someone has to log into the dashboard, navigate the different pages and see if everything is okay, that there are no misalignments in the text or issues with the colors. So that's, I mean, really tedious. The idea would be to replace that with visual regression testing. That's basically based on screenshots: we would save some screenshots of the dashboard and compare against them during the end-to-end tests, with the framework that we use for end-to-end testing, which is Cypress.
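The comparison core of visual regression testing can be sketched in a few lines: given two images as pixel lists, report the fraction of pixels that changed and fail above a threshold. Real setups (e.g. Cypress screenshot-diff plugins) work on PNG files, but the principle is the same:

```python
def diff_ratio(baseline, current):
    """Fraction of pixels that differ between two equally sized images."""
    assert len(baseline) == len(current)
    changed = sum(1 for a, b in zip(baseline, current) if a != b)
    return changed / len(baseline)

def matches_baseline(baseline, current, threshold=0.01):
    """Pass only if at most `threshold` of the pixels differ, tolerating
    minor rendering noise while catching real styling regressions."""
    return diff_ratio(baseline, current) <= threshold

base = [(255, 255, 255)] * 1000      # all-white "screenshot" baseline
same = list(base)
broken = list(base)
broken[:50] = [(255, 0, 0)] * 50     # 5% of pixels turned red: a regression
```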
A
That's the first project, and the other project is basically the ability to report bugs, improvements and any kind of feedback from the dashboard itself. So, instead of forcing users to go to the Ceph tracker, they would get a context-sensitive form or something, so that as soon as they hit an issue, they could open that form and send an issue, report it to the tracker. And that's the other one.
A
We have candidates working on, or preparing to work on, that. So I think we may have these two things completed this year. And that's mostly it. Did I miss anything? Is there any other topic that you want to bring in or discuss?
A
Nope? Okay! Well, in that case, I think that's more than enough for today, and we still have 20 minutes before the next CDS meeting. So thank you very much for joining, and have a great break before that.