From YouTube: Ceph Month 2021: Dashboard Update
Description
Presented by: Ernesto Puerta
Full schedule: https://pad.ceph.com/p/ceph-month-june-2021
A: Hi everyone, my name is Ernesto Puerta. I'm the component lead for the dashboard; I took over from Lenz last November, I think, or so. Since then, we've been devoted to the Pacific release, and we are now also working on the future Quincy release. So we'll be covering the highlights for Pacific.
A: Some of them have already been merged into the Octopus release. So if you are running Octopus and you're using the dashboard, you might have seen some of these there too, because we've been pretty liberal regarding backporting dashboard features to previous releases, especially Octopus. Since it was the first release coupled with cephadm, we wanted to ensure that everything, or most of what you can do in Pacific, you can also do in Octopus.
A: With that, this is an overview of the topics that we're going to cover today. Basically, there are four pillars in the Pacific release; the biggest one is the cephadm integration.
A: Actually, the broader topic is the orchestrator integration, but currently most of the focus has been around cephadm. The question of current Rook support is still open; as I will mention in the last part, that will mostly be for the Quincy release. Some parts of this might work on Rook, but that path has not been extensively reviewed, so most of this functionality currently only works with cephadm.
A: There is some debate on what the proper name for it is, but it's multi-zone/multi-site RGW; I'll go into more details later. Apart from that, the other big thing in Pacific is the RESTful API. Prior to Pacific, we already had an API for the dashboard, but it was unofficial. It has been made official in Pacific, and there is a commitment to keep this RESTful API stable.
A: Basically, these are the areas where we have put the most effort regarding the cephadm integration. Right now we support most of the cephadm features for host management. As you may see here, we currently have the ability to create, edit, and delete hosts, and to enter or modify the maintenance status of a host.
A: This is basically how you create a new host. Actually, this is not exactly Pacific; it will probably be backported, but the latest release of Pacific doesn't yet allow you to set up the network or the labels. That will probably come in the next releases. Basically, this allows you to define the hostname and also a specific address.
A: That has been a recent change in cephadm. You can also set up specific labels, as well as create the node in maintenance mode. This is basically how it looks when you create a new node in maintenance mode, and here is the list of hosts created or managed by cephadm.
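As a rough illustration of what the host form captures, cephadm also accepts hosts declaratively as a YAML specification (applied with `ceph orch apply -i`); the hostname, address, and labels below are placeholder values, not from the talk:

```yaml
# Hypothetical host specification for cephadm; hostname, address,
# and labels are illustrative placeholders.
service_type: host
hostname: node-01
addr: 192.168.121.10
labels:
  - mon
  - rgw
```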
A: Additionally, in the services section, apart from adding and deleting multiple types of services in Pacific, there was recently a change to what was previously called the HA RGW service. Now it's called ingress, and it's a general service for HTTP-based and other kinds of services. That's not yet supported in the latest Pacific version, but it will probably be released in the next one.
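For context on the ingress service just mentioned, cephadm defines it with a service specification like the following sketch (haproxy/keepalived in front of RGW); the service id, virtual IP, and ports are placeholders:

```yaml
# Hypothetical ingress specification: HA load balancing for an RGW
# service. Addresses and ports are illustrative placeholders.
service_type: ingress
service_id: rgw.default
placement:
  count: 2
spec:
  backend_service: rgw.default   # the RGW service to front
  virtual_ip: 10.0.0.100/24      # floating IP managed by keepalived
  frontend_port: 8080            # client-facing port on the virtual IP
  monitor_port: 1967             # haproxy status page
```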
A: Apart from that, regarding OSD management we have the following actions: creation was already supported (actually, it was delivered in Octopus), but the replacement workflow has mostly been worked out during the Pacific time frame. I think it's probably available in Octopus as well now. For those of you not familiar with OSD creation, you basically assign different devices to different roles.
A: You can have a simple OSD deployment, with everything on a single device, or hybrid deployments. You can use this table to set up the filtering criteria, which is called a drive group specification, for the different devices in the cluster. That was already in Octopus, so the major addition here is the replacement workflow.
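The drive group specification mentioned above is itself a YAML document. A minimal sketch of a hybrid deployment (data on spinning disks, BlueStore DB on solid-state devices); the service id and host pattern are placeholders:

```yaml
# Hypothetical drive group specification for a hybrid OSD deployment.
service_type: osd
service_id: hybrid_osds
placement:
  host_pattern: 'osd-*'   # apply to matching hosts
spec:
  data_devices:
    rotational: 1         # HDDs hold the object data
  db_devices:
    rotational: 0         # SSDs hold the BlueStore DB/WAL
```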
A: So, basically, you have the chance to decide whether you want to preserve the IDs of the OSDs, in case you are replacing the disk, versus doing a full removal of the disk and discarding the OSD ID, which would also cause some changes in the CRUSH map, etc.
A: That's all regarding the orchestrator integration changes that recently happened in Pacific. Regarding the RGW improvements, as mentioned, there are two big things. There are other minor improvements, but these are probably the two biggest. One is the Grafana dashboard for RGW multi-site sync: basically, it provides information on the current status of the replication across realms and zones.
A: I think that's also in all major releases. It came with Pacific, but it is, or will be, available in the latest minor versions of the past releases. The other one is the management, or selection, of the daemon. Prior to this, from the dashboard it was only possible to manage RGW with the default daemon, or with a single daemon.
A: After this change, a user will be able to manage multiple daemons. I'm saying daemons because we're talking about connecting to a daemon; that's the credential information the dashboard admin has to provide or configure.
A: We are currently working, and plan to keep working, on improving this, with the RGW team and the cephadm team as well. Basically, that's it: if you define more than a single daemon, you can manage multiple daemons, and the users and buckets attached to each of them. That's it regarding the RGW multi-site and multi-daemon features. Regarding the RESTful API, which was the first topic here: as mentioned, we've been working with a REST API from the very beginning of the dashboard.
A: Basically, the architecture was REST-based: the front-end consumes the back-end REST API. But since Pacific, we decided to make this official. This involves a commitment on versioning: as mentioned here, everything that is versioned or tagged v1 or above is stable, or committed to be stable, while everything with version 0 is experimental and subject to change. This API is currently well tested.
A: We have around 90 percent coverage, and it's currently tested on a per-PR basis: every PR delivered to Ceph, except, I think, the documentation PRs, needs to go through this stack of tests. We also have nightly runs, just to ensure that nothing asynchronous breaks the API. The documentation is available at this URL; there is documentation per release (this one is for master), and the documentation is also available on a running cluster via the Swagger UI.
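The v1/v0 versioning commitment described above is negotiated per request via the HTTP `Accept` header, using the `application/vnd.ceph.api.vMAJOR.MINOR+json` media type. A minimal sketch of building such a request with the standard library; the manager URL and token are hypothetical placeholders:

```python
# Minimal sketch of calling the Ceph Dashboard REST API pinned to an
# explicit API version. The endpoint URL and token are placeholders.
import urllib.request


def versioned_request(url: str, token: str,
                      major: int = 1, minor: int = 0) -> urllib.request.Request:
    """Build a request that selects a specific dashboard API version."""
    accept = f"application/vnd.ceph.api.v{major}.{minor}+json"
    return urllib.request.Request(
        url,
        headers={
            "Accept": accept,                     # version negotiation
            "Authorization": f"Bearer {token}",   # JWT from /api/auth
        },
    )


req = versioned_request("https://ceph-mgr.example.com:8443/api/health/minimal",
                        "PLACEHOLDER_TOKEN")
print(req.get_header("Accept"))  # application/vnd.ceph.api.v1.0+json
```

Pinning the version in the `Accept` header is what lets v1 endpoints stay stable while v0 endpoints remain free to change.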
A: So that's it regarding the RESTful API; let's jump into the security improvements. This is perhaps the least user-impacting area, unless you need to comply with certain security standards. We basically went through some pen testing from IBM: they were testing the dashboard, and they provided a lot of feedback on the vulnerabilities that they detected. So currently we provide support for account lockout: basically, after a given number of failed login attempts, a user account will be locked.
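The lockout behavior just described can be sketched as a small policy object. This is not the dashboard's actual implementation; the threshold and in-memory user store are illustrative assumptions:

```python
# Minimal sketch of an account-lockout policy: after N consecutive
# failed logins the account stays locked until an admin unlocks it.
# Threshold and storage are illustrative, not the dashboard's code.
class LockoutPolicy:
    def __init__(self, max_attempts: int = 10):
        self.max_attempts = max_attempts
        self.failures: dict[str, int] = {}  # user -> consecutive failures
        self.locked: set[str] = set()       # users currently locked out

    def record_failure(self, user: str) -> bool:
        """Register a failed login; return True if the account is now locked."""
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_attempts:
            self.locked.add(user)
        return user in self.locked

    def record_success(self, user: str) -> bool:
        """A successful login resets the counter, unless already locked."""
        if user in self.locked:
            return False  # locked accounts stay locked until unlocked
        self.failures[user] = 0
        return True

    def unlock(self, user: str) -> None:
        """Administrative unlock, resetting the failure counter."""
        self.locked.discard(user)
        self.failures[user] = 0
```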
A: There are also policies enforcing password strength; I think that's something that came from Octopus, but it's worth mentioning here. And there is account expiration, which was also already there. Basically, all these things together make for a very good position in terms of security, especially since the dashboard is meant to be exposed, not right to the internet, but at least beyond the usual lab or air-gapped environment.
A: So that's it regarding Pacific. Let's quickly see how Quincy is going to look regarding the dashboard. If you are interested in the specific items, you can visit that URL, specifically the dashboard section. These are the five or six pillars that we are going to focus on for the Quincy release.
A: The idea is to provide a walled-garden experience, or, as Jeff likes to say, a "Hotel California" experience. Basically, the purpose is to provide as many features as a Ceph operator or user needs to find on the dashboard. We were also thinking of providing a fallback CLI embedded in the UI, like a toolbox; that's also under exploration.
A: Probably there will be some things that you can do with cephadm that won't be available in the Rook orchestrator, because they basically contradict the philosophy of Rook and Kubernetes, but we'll try to at least keep some parity with cephadm. Regarding multi-site and multi-cluster: these two topics are about scaling beyond a single cluster. On the first one, currently the dashboard supports RBD mirroring, which would be the multi-site support for RBD.
A: There are some gaps here, like snapshot-based mirroring. Also regarding RGW, there is currently no support apart from the monitoring, and there is no support at all for CephFS mirroring, so that will also have to be there. Regarding multi-cluster support, the idea is to be able, from a single cluster, to manage multiple clusters. The focus is a bit different from the regular multi-site one: using a single manager, or UI, as a gateway to multiple clusters.
A: Apart from that, we will continue the focus on advanced RGW workflows: we're basically thinking of adding bucket policies, lifecycle, server-side encryption, bucket notifications, more or less keeping some parity with what the AWS console gives you for S3 management. And regarding observability, we are considering something like log aggregation, for example. That's something we got from the user survey: the lack of centralized logs. Well, that's mostly it. So, do you have any questions?
B: [inaudible]

A: Yeah, I haven't mentioned that; that's something that we need to tackle. The thing is, we haven't got many reports, or complaints, about that part in the Ceph user survey, but on the other hand, the ones that we've got are, you know, really disastrous, basically pointing to some scalability issues in that part. I'm not sure if we have here users of the Ceph dashboard who are managing RBD from the dashboard and can share their experience with that.
A: Really, I'm not sure if you wanted to bring up a specific topic on that part, on the integration, or...
B: No, I just noticed that, you know, it wasn't talked about. There were two things on the RBD front that I was looking for. One of them you mentioned: the integration for snapshot-based mirroring. The other one that I thought was on the roadmap is the performance improvements, in terms of things like image listing.
B: It might be the case that, you know, even if you have hundreds of images, maybe it's not that big of a deal. Because if you look at the current design, yes, it's a huge issue from the scalability perspective, but maybe that scalability limit is not as low as we think.
B: So maybe a few hundred images is still good enough with the current implementation, and it's a question of how many users we would have with that many images who would rely on the dashboard for managing them. Because if you don't open that tab, right, it's not going to impact you, if I understand correctly. So maybe it doesn't actually need to be that high on the list. That's it.
A: Yeah, at least from the telemetry reports we know that there are some users running beyond thousands of images, so yeah, that's something that will probably hit a major bottleneck. But the average, or the median, is not that high, and that probably explains why we're not getting so many complaints.
A: Definitely, yeah. Okay, I was going to mention that Brian left a question about Grafana. Well, we haven't done any work on this; do you have a specific suggestion for it?
A: Okay, Brian, yeah. Well, we were thinking, or there were some discussions, around supporting some kind of log aggregation, the same way we currently have Grafana embedded within the dashboard: centralizing the logs as well, and having everything in a single place, maybe with an ELK stack or some other kind of centralized logging stack. But regarding tracing, we don't have anything planned yet. I think there are some teams working on Jaeger, right; I think the RGW team was doing that, not sure if other teams were doing that too.
A: Yeah, I would also like to mention this as well. We were basically debriefing on the Ceph user survey, and one of the findings we got from it was that even the users who don't actively use the dashboard still rely on, or still visit, the landing page from time to time. So we would like to put some focus on that component, so everyone can benefit from it. And yeah, probably, with the advent of cephadm, the dashboard will make more sense.
A: It provides, I think, a nicer experience if you don't want to fully rely on the CLI; at least we're trying to aggregate more data and present it in a fancier way compared to the CLI. So those could be two areas where we may see more adoption of the dashboard in the near term.
A: Okay, so just as a reminder, these are the places where you can contact us, the whole team, well, the dashboard team. You may see there is a specific dashboard IRC channel, so you can reach out to us there. Apart from that, on GitHub, every PR labeled with "dashboard" will end up in our backlog, so we will follow up on those if you want to contribute. And, yep.