From YouTube: Ceph Developer Summit Quincy: RBD
Description
00:00 - Dashboard and RBD
9:38 - NVMe Status
12:25 - RBD Mirror Monitoring
28:48 - Volume groups for snap mirroring
35:24 - Prometheus Scale-out
Full agenda: https://pad.ceph.com/p/cds-quincy
B: Here I am today; I'm going to run through a demonstration of the dashboard. So, actually... well, I think RBD is probably the most mature component in the dashboard. Currently it has three major pillars: the RBD images, the RBD mirroring, and also the iSCSI.

B: Okay, perfect. Do you need me to increase the size, or is that okay?

B: Okay, I'm not sure how well it will behave with these changes in size, but all right. So, I'm not sure how familiar you are with the dashboard, but basically this is the RBD section. We decided to go with the "Block" name instead of "RBD" because it's more generic and less Ceph-specific; well, it was a popular vote in this case, rather than the RBD name. So here we can see the Images, the Namespaces, and we also have the Trash.
B: You can also select... if you have multiple RBD pools, and you can also select a separate data pool. In this case I don't have any other pools, so I don't have the choice to do that. Then you have all the different options: if we want to use mirroring - and I think I have to enable journaling, so I will do that right now - and we have the striping options and the QoS options Alfonso already shared, because QoS is available in different layers: at pool level and also at RBD image level.
B: I think it's also at snapshot level, so there are multiple places where these settings can be configured. So in this case I can create the image, and there it is. We have all the metadata of the image and also the configuration settings - these are the QoS ones - and the performance tab, but currently I don't have that enabled.
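A minimal sketch of what the create-image form shown here boils down to, using the librbd Python bindings (pool names, image name, size and the QoS value are made-up examples, and the "conf_"-prefixed metadata key for per-image QoS is an assumption about how `rbd config image set` stores settings):

    import rados
    import rbd

    # Connect to the cluster (config path is just an example).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        # 10 GiB image with a separate (e.g. erasure-coded) data pool and
        # journaling enabled; journaling is needed for journal-based mirroring.
        features = (rbd.RBD_FEATURE_LAYERING |
                    rbd.RBD_FEATURE_EXCLUSIVE_LOCK |
                    rbd.RBD_FEATURE_JOURNALING)
        rbd.RBD().create(ioctx, 'demo-image', 10 * 1024 ** 3,
                         features=features, data_pool='rbd-ec-data')

        # Per-image QoS, stored as image metadata with a "conf_" prefix
        # (assumed equivalent of `rbd config image set ... rbd_qos_iops_limit 500`).
        with rbd.Image(ioctx, 'demo-image') as image:
            image.metadata_set('conf_rbd_qos_iops_limit', '500')
    finally:
        ioctx.close()
        cluster.shutdown()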
B: So if you're interested in having a look at that, we can do it, you know, in a different demo. Then we have the options for creating snapshots, and it provides a predefined snapshot name. So that's it. Then we have the specific operations for it: we can protect the snapshot and then, once it's protected, we can clone it. So that's it.
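The snapshot, protect and clone steps just demoed map to the librbd Python bindings roughly as below (a minimal sketch; pool, image and snapshot names are made up):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'demo-image') as image:
            image.create_snap('demo-snap')      # take the snapshot
            image.protect_snap('demo-snap')     # protect it so it can be cloned
        # Clone the protected snapshot into a new child image.
        rbd.RBD().clone(ioctx, 'demo-image', 'demo-snap', ioctx, 'demo-clone',
                        features=rbd.RBD_FEATURE_LAYERING)
    finally:
        ioctx.close()
        cluster.shutdown()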
B: We have namespace support as well - I remember Jason mentioned that; it was much demanded.
B: This kind of support is there just for compatibility; also, I think we support both v1 and v2 images, if we want - legacy support was recently added, all right, it's there. We found it was necessary for migrations: some users had v1 images and they were facing issues when migrating to v2.
B: So it's there. And that's mostly it about the images section. If we go to the RBD mirroring - I hope this is working; I just set it up. Let me check if it's... I think it's not working now... it should... okay.
B: So there is this way of configuring the RBD mirroring with the bootstrap: we can create the bootstrap token, basically, and use that to import it in the other cluster. I think this is a really nice improvement, and probably the way we should do it for the others too - I'm not sure if CephFS mirroring is also supporting this, or going to support this. All right, I think it's a very nice way of doing it.
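The bootstrap flow being demoed corresponds roughly to the following sketch of the rbd CLI calls (pool and site names are examples; it also assumes both clusters' configs are reachable from one host via --cluster, which is not how the docker-compose demo here is set up):

    import subprocess

    POOL = 'rbd'  # example pool name

    # Primary cluster: enable mirroring on the pool and create a bootstrap token.
    subprocess.run(['rbd', 'mirror', 'pool', 'enable', POOL, 'image'], check=True)
    token = subprocess.run(
        ['rbd', 'mirror', 'pool', 'peer', 'bootstrap', 'create',
         '--site-name', 'site-a', POOL],
        check=True, capture_output=True, text=True).stdout.strip()

    # Secondary cluster: import the token (written to a file first).
    with open('/tmp/bootstrap-token', 'w') as f:
        f.write(token)
    subprocess.run(
        ['rbd', '--cluster', 'site-b', 'mirror', 'pool', 'peer', 'bootstrap',
         'import', '--site-name', 'site-b', '--direction', 'rx-tx',
         POOL, '/tmp/bootstrap-token'],
        check=True)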
B: So now you don't see my other page with the other dashboard, but, well, I'm basically pasting that token. Let me complete that, so that should be there... well.

B: And I think now it should be working. Well, there is some connectivity between those - I'm running docker-compose, so I have these two Ceph clusters in docker images. So basically this will be the look and feel of the RBD mirroring section.
B: So, yeah, I'm not sure what else we can configure here. We can also check, in this case, whether we have any images syncing; there is no info here, and the same here. So that would be mostly it - the demo has been very short. And then there is the iSCSI section. In this case, well, you have to set up an RGW - sorry, a Ceph iSCSI gateway - and all this stuff. I think cephadm supports that, right?
B: So it should be really easy to do that with cephadm, and I guess that's mostly it. I remember someone opened a few tracker issues for the dashboard to expose RBD mirroring for snapshots and some other things. Is there anything else that we should start taking care of for RBD - anything that you might find missing?
A: What I'm familiar with is the snapshot mirroring; that's kind of the big feature that's gone in for Pacific that I'm aware of. Ilya, do you know anything else?
C: Yeah, no, no need to do it here - just that I anticipate some improvements on that front, simply because we will be focusing on exposing performance metrics for snapshot-based mirroring in Quincy, and as part of that, you know, I should probably take a look at the look and feel of the general performance metrics - the performance tab - and sort of make them look the same for the mirrored images as well.
B: Okay, okay, perfect. Yeah, the performance tab is basically the Grafana dashboard, and we're also using the information from the rbd stats. So if you have that enabled, it will show here in the overall performance and also in the RBD image - so here it will also show the specific stats for this RBD.
E: I mean, my limited understanding here is that we have, like, a skeleton repository, or a...
A: Yes, that's about what exists. I think Jason put it in, and I would have liked to look at it a little more before now or next week, but I haven't, because I've been on vacation. So, for the rest of you, if you haven't heard: we're working on - or there's a - Ceph NVMe over Fabrics working group, which meets on... I don't remember which day... on Tuesdays in the morning, Pacific time; it's on the community calendar that we have published. And at the moment the early thing is just, sort of, we're setting up a gateway that speaks RBD on one side and speaks NVMe over Fabrics out the other side.

A: Let's see - there's the repository, or, I guess, the branch that Jason put together. So that's the thing that will be seeing some work over the next several months, probably.

A: Well, so next up on the agenda is RBD mirroring monitoring. I think this is Ilya's.
C: These can probably be unified to a fairly large degree, along with the set of, you know, high-level metrics: this thing is being mirrored; this thing is up to date, or this much behind; how many snapshots are in flight, how many are still going to be sent.

C: Some kind of, you know, general bandwidth estimation - things like that probably apply to both RBD and CephFS snapshot mirroring, so we can come up with a hopefully unified metadata-and-data scheme and present that as a set of Prometheus metrics that can then be scraped by the dashboard, whether it's the Ceph dashboard or something on the Kubernetes side; the manager can also hook into this. So that's the goal. The stretch goal is to also incorporate RGW into this scheme. RGW multi-site is obviously a very different beast and, you know, to start with, it's multi-way, so...

C: Sort of shuttling it into this unified scheme - which we can come up with fairly easily for RBD and CephFS - would probably not fit RGW, but we should at least try. And that's the overall direction that we're being pushed in: that's basically what people want on the Kubernetes side, because the block, file and object PVs are supposed to look the same, and kind of be mirrored the same way - at least, again, from the high-level point of view.
C: But for RBD and CephFS it should be doable, and that's the goal for Quincy. Now, this presents some challenges, particularly on the scalability side, and on the manager side as well. Currently, what we have is a manager module which exposes Prometheus metrics that are being thrown at it through the OSD performance counters interface, or whatever it would be.

C: The Ceph performance counters get translated into Prometheus metrics and exposed through a single endpoint, and at the scale requirements that we're looking at - and also the availability requirements that we're looking at - that is not going to work. We're already having issues with the manager basically being overloaded, and what once was a solution to, you know, solve the monitor overload issues... we now need to solve the manager overload issues.

C: Basically it's kind of a reincarnation of that, and what we're looking at is to have each mirroring daemon present its own Prometheus endpoint, and to adapt all of the consumers - the dashboard and the Kubernetes side of the house - to scrape those multiple endpoints and collate them together, and have that happen on the client side. As far as implementation on the RBD side, we already collect the necessary metrics, or pretty much all of them.
C: There might be a couple that need to be added, but the basic stuff - such as, again, how many snapshots are in flight, how many snapshots are going to be synced, things like that - is already there, as well as some basic bandwidth estimation: we can take the difference between the two snapshot counters and sort of gauge the bandwidth from that.
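That bandwidth estimate is essentially two samples of a monotonically increasing "bytes synced" counter divided by the elapsed time - a trivial sketch (the counter accessor and the interval are placeholders):

    import time

    def estimate_sync_bandwidth(read_bytes_synced, interval=30.0):
        """Estimate replication bandwidth in bytes/s from two samples of a
        monotonically increasing bytes-synced counter."""
        first = read_bytes_synced()
        t0 = time.monotonic()
        time.sleep(interval)
        second = read_bytes_synced()
        t1 = time.monotonic()
        return (second - first) / (t1 - t0)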
C: As far as the exposure part, the high-level plan is to come up with some sort of convenience helper library which could be linked into each of the daemons.
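To give a feel for the per-daemon endpoint idea, here is a sketch in Python using the prometheus_client package; the metric names, labels and port are invented for the example, and the plan described here is actually a C++ convenience library, not this:

    from prometheus_client import Counter, Gauge, Histogram, start_http_server
    import random
    import time

    # Hypothetical metrics an rbd-mirror-like daemon could expose directly.
    SNAPSHOTS_PENDING = Gauge('rbd_mirror_snapshots_pending',
                              'Snapshots still to be synced', ['image'])
    BYTES_SYNCED = Counter('rbd_mirror_bytes_synced_total',
                           'Bytes replicated to the peer', ['image'])
    SYNC_DURATION = Histogram('rbd_mirror_sync_seconds',
                              'Per-snapshot sync duration', ['image'])

    if __name__ == '__main__':
        # Each daemon serves its own scrape endpoint instead of funnelling
        # everything through the single mgr/prometheus module.
        start_http_server(9900)
        while True:
            with SYNC_DURATION.labels(image='demo-image').time():
                time.sleep(1)  # stand-in for an actual snapshot sync
            BYTES_SYNCED.labels(image='demo-image').inc(random.randint(1, 4 << 20))
            SNAPSHOTS_PENDING.labels(image='demo-image').set(random.randint(0, 5))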
C: One thing that we gain from this, apart from solving the scalability problem, is that the current gateway from Ceph performance counters to Prometheus is limited, in the sense that we cannot expose - we don't have support for exposing - histograms and, you know, those types of more complicated metrics; it's basically just the simplest kinds, which are the gauge and the counter. That's what we do. Once we move away from the, you know, performance counter bridge...

C: We can then, you know, have the raw data - we can actually expose those as well.

C: And, yeah, then the goal is to handle that in that convenience library. The web server part, I'm thinking, we should probably do with Beast, simply because... there is a library out there, a fairly established C++ library for exposing Prometheus metrics, but it basically includes the civetweb web server as a submodule, and since we've spent quite a long time, you know, getting rid of that in Ceph, bringing it back is probably not a good idea. So the goal is to hopefully settle on Beast, but civetweb and the existing libraries are obviously options for how to do it.
E: A quick question on the Prometheus endpoint part: is that something that's going to be reusable across lots of different components?
C: Yes, that's the goal of the convenience library. So, like I said, it's something that can be linked into the rbd-mirror daemon, into the cephfs-mirror daemon, and hopefully RGW, if we manage to get something done on that side - maybe not immediately, but in the future. Again, the goal is to have a fairly minimal interface.

C: That would support all four types of... you know, metric types, or whatever they're called, and be able to expose histograms and whatever else we want to do.
H: Yeah, earlier this week we also talked about trying to move the monitoring gathering in general out of the manager, and source that from all the servers directly to Prometheus itself, so we don't have everything funnelling through the manager module. Would this library be a good way to do that?
C: I'm sorry, I'm actually on PTO, so I wasn't in that discussion, but I imagine yes: if we're going to do a library that would basically be a web server that takes whatever is thrown at it on one side and presents a Prometheus endpoint on the other side, this would obviously be a natural fit, because we don't want to do it twice.
H: Okay, yeah - maybe we can discuss this more later, then. I imagine that we will need to go through the existing module and figure out what kinds of metrics are there and where they need to be sourced from.
C: Yeah, currently, like I said, it's very simple: it just takes the simplest kind of OSD performance counters and translates them to the simplest kind of, you know, Prometheus... again, I'm going blank on what they're called, but there are four of them, and we support only two - and it's really more like one and a half.

C: The goal is to support all four and have it happen in a single place that can be linked into, you know, whatever it is that wants to have an endpoint.
A: I guess my main question is: are we sure that the CephFS and RBD mirroring are similar enough that this will be convenient? For instance, there was a discussion... like RGW...
C: Well, I think these are sort of two orthogonal problems. One is the library that presents an endpoint - that's obviously... I mean, there is not a degree of freedom there, because the interface would have to mirror what the performance counter interface does, right? Okay, so it's a basic performance counter interface...

C: Yeah, yeah - it's a basic performance counter interface, and you encode whatever you want to encode, right? So the library is obviously going to be reusable; there's no question about that. Whether or not... and now onto the data and metadata scheme for these things.

C: I think that for the file system and block metrics - given that both are based on snapshots, and the cephfs-mirror daemon implementation was, from what I understand, basically based on the RBD implementation, to some extent at least - the high-level idea is very similar.
C: I mean, we take a snapshot and, you know, we either sync it or push it - or rather, in the RBD case, we're actually pulling them - but, again, the set of counters, I imagine, would be pretty much the same, and so a unified data scheme makes a lot of sense to me there. On the RGW side, again, the buckets can actually be mirrored...

C: ...both ways, as opposed to the one-way street in the block and file cases; so, yeah, doing something coherent there is probably a challenge and may not make sense in the end. But, like I said, it's a stretch goal, and probably not for...

C: Yeah, for that... I believe it's already there; it can be calculated.

C: Again, there are no metrics for the cephfs-mirror daemon that I'm aware of today, but without that metric, exposing any other metrics doesn't make much sense, because that's the most important one that will need to be there - and so we will need to implement it, obviously. But it should be...

C: It should be possible, because, again... a mirroring solution that does not know how far behind it is, is to some extent worthless, because you cannot tell whether you have a working mirroring solution at that point or not.
G: Yeah, so basically: snapshot-based mirroring with, like, volume groups, like we have for snapshots - so we can take the snapshot for a group, and we'll have a consistency group when we mirror. And I think that's it. I don't know if it's possible - I think it's possible, given that it's a snapshot.
F: I added a link in the chat to my current PR, which is still work in progress, but basically it already provides something like what's described here.
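For context, the group primitives this builds on already exist in RBD; a minimal sketch of taking a crash-consistent snapshot across several images (pool, group and image names are made up, and mirroring of such groups is exactly what the PR is proposing, so it is not shown here):

    import subprocess

    def rbd(*args):
        subprocess.run(['rbd', *args], check=True)

    # Put two images in a group and snapshot the group so the snapshots
    # are mutually consistent (a consistency group).
    rbd('group', 'create', 'rbd/app-group')
    rbd('group', 'image', 'add', 'rbd/app-group', 'rbd/db-volume')
    rbd('group', 'image', 'add', 'rbd/app-group', 'rbd/log-volume')
    rbd('group', 'snap', 'create', 'rbd/app-group@before-upgrade')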
E: This is great news. This just came up a couple of days ago in a meeting, I think related to Rook and Kubernetes - I'm guessing that was what prompted it as well. So, yes, a timely addition.

E: I guess a question for you: are the equivalent concepts already present on the Kubernetes side to do something like this?
G: There's a KEP, and I think that spec is alpha, which means it takes at least two Kubernetes versions to be finalized. But basically there will be a volume group that you can take a snapshot of; mirroring is not quite what they are talking about there, but for mirroring...
H: Extra work on the RBD client side.
H: Not the mirroring, but the... whatever metrics we are currently gathering and funnelling through the Prometheus module - sending those directly from where they're being generated. I think some things, like kernel RBD, could be handled by the generic Prometheus node exporter scraping, like, metrics for block devices - reads and writes, I/O stats - but for librbd I imagine that we might want to integrate an exporter directly there.
C: Yeah, this has come up in the context of Kubernetes, which currently uses kernel RBD, at least mostly - if you're using ceph-csi there is an option to use rbd-nbd, but then, you know...

C: That's actually what we... at the node level, like, taking care of kernel RBD metrics via the node exporter - some kind of node exporter, whether it's the existing one, which would just be pointed at the, you know, rbd block devices in /sys, or something that we would need to come up with ourselves, whether on the Ceph side or on the OCS side.

C: That's irrelevant; but that needs to be done on the client side, because the current metrics, which are based on the, like, randomized sampling - or bloom-filter-based sampling - of I/Os, do not present the full picture: the latency that you get from it is not the latency that the client sees.
C: No, no - so in the case of, for example - taking the narrow case of RBD block devices, which would be kernel RBD in Kubernetes - what I imagine you would end up with is either the existing node exporter, which is already there and running on each node, also taking care of RBD block devices, or something that we write. So it won't be...
C: ...an existing exporter; it would be a new exporter, but it would still run on each client node that has RBD devices mapped and present the endpoint, and then the dashboard, or whatever it is, can query that endpoint - and so that all nicely fits into the existing Prometheus model and the existing Kubernetes model.
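A rough sketch of what such a per-node exporter could read for mapped krbd devices - the standard block-layer statistics the kernel already exposes under /sys/block (the field layout is the documented Linux block "stat" format; the printed summary is just illustrative):

    import glob
    import os

    def krbd_device_stats():
        """Yield (device, reads, read_sectors, writes, write_sectors)
        for every mapped krbd device on this node."""
        for stat_path in glob.glob('/sys/block/rbd*/stat'):
            device = os.path.basename(os.path.dirname(stat_path))
            with open(stat_path) as f:
                fields = f.read().split()
            # /sys/block/<dev>/stat: field 0 = reads completed,
            # field 2 = sectors read, field 4 = writes completed,
            # field 6 = sectors written (512-byte sectors).
            yield (device, int(fields[0]), int(fields[2]),
                   int(fields[4]), int(fields[6]))

    if __name__ == '__main__':
        for dev, rd, rd_sect, wr, wr_sect in krbd_device_stats():
            print(f'{dev}: {rd} reads ({rd_sect * 512} bytes), '
                  f'{wr} writes ({wr_sect * 512} bytes)')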
C: What we're currently doing - or I'm not sure if we're actually doing it, but what we could be doing with the existing bits - is going to the manager and querying the stats that are being generated by the manager module, and the way it does that is it installs a query on each of the OSDs to send...

C: ...you know, periodically, to send, like, batches - like snapshots of counters for objects, based on object-name prefixes, for each image - and then those get collated in the manager module and presented. That's what the dashboard - what Ernesto talked about: the performance tab, and the thing that wasn't being displayed there. That's how it currently works: it uses that manager module. And to...

C: ...move towards resolving the manager scalability issues, we need to walk away from that as well. So one thing would be to not introduce new metrics for mirroring there and do them on the side, and then, as a second step, we would need to do this for, you know, the non-mirroring, just general case: here is an image, and I want to see...
C: It's, you know, the rbd top functionality, which would sort of...
C: ...in the short term. But we can definitely do it in the Kubernetes case, because rbd top is kind of, you know - the user just executes a command on whatever node, and it just goes and talks to the manager. So figuring out where the endpoint is, is kind of an issue, because, right, the rbd top command is not necessarily going to be run on the same node as, you know, where the image is mapped, or where the rbd-nbd daemon...

C: ...is being stood up. Whereas in the Kubernetes case - specifically in the kernel RBD case - I imagine that can be fairly easily taken care of by the existing node exporter, which can look at the iostat data; it already does that for all kinds of block devices, and the benefit of the kernel RBD device that we get here is that they are totally integrated - we don't need to do anything special like we would for NBD.

C: At least... if we want to make it work for rbd top as well - obviously the existing node exporter can just as well look at the, you know, NBD devices and do it in the Kubernetes case - but, again, looking at the overall picture, we would eventually need to come up with something for this in the end.
A: And then, like, once we do that - I'm also worried about the manager going out and directly talking to every server that has an RBD client of some kind on it. Like, there are users with tens of thousands of those, and if the manager is a scalability problem now, I don't see how that works. We're going to need intermediate, like, processes whose job it is to do this metrics collection - right, or not collection, but collation.
E: I think it depends. I think there are different types of deployments that have different security models and requirements, and I think in the situation that you're referring to, Greg, you might have, like, a centralized storage cluster with a bazillion clients, right, across an organization, and in that case you might not have Prometheus scraping, like, all the client machines across an entire campus or something like that. But in the Kubernetes case it certainly makes sense...

E: ...because you already have Prometheus endpoints on all the same nodes that are the clients - it's sort of an enclosed cluster. But it seems to me like sort of the distinction here is that Ceph isn't necessarily responsible for ensuring that all of the node exporters are defined or deployed on all the same nodes; like, if you have a model where it makes sense, then you're already going to have node exporters on all the right machines, so you're going to get all these krbd stats already, and if you don't, then, like, you just wouldn't get the benefit - or maybe we have a bimodal thing.
C: And I think the first step would be to do the Prometheus scraping part for the Kubernetes case, basically, and handle that, because currently... I mean, that's obviously the easiest case, and that would allow us to sort of offload that and then concentrate on solving the rbd top problem.
C: Which is... like, rbd top is sort of designed to go to the single manager instance and get the metrics for all the images in the cluster, or the top 50 or whatever, right - and the way it's currently architected sort of means that it needs that central point to go to, because it does...

C: It does lie in terms of the latency, because that's not the latency that the actual client sees, but it's still a useful thing for the administrator to go and...

C: ...you know, type a few characters and see what are the top 20 images on this cluster, in this pool, that are maybe causing issues, or overloading the OSDs, or whatever - and that use case, I imagine, would remain tied to the manager to some degree, at least for the foreseeable future, because it really is... like, the whole point of it was that we have this central point that we can talk to, that...
C: ...we can query. And, following that... perhaps, as a first step, what we can do there is just drastically decrease the frequency with which the metrics are being sent to the manager and collated. So we can make it, like...

C: ...even, you know, we can go to something as long as, you know, two- or five-minute intervals, because on a large-scale cluster that is having issues, waiting five minutes to see the latest snapshot of the rbd top data...

C: ...is not going to be a problem. So maybe, if we don't come up with anything better and the manager continues to be a problem, then as a kind of, like, short-term solution, what we can do - once the Kubernetes and other client-side, you know, consumers don't depend on the manager anymore - is, for the administrator use case, just have it happen once every five minutes or something like that.
H: Yeah, that definitely sounds like a good short-term step. I think we can maybe think more in the future about how to do the rbd top piece. I think that's good for now.
A: All right - somebody wanted to talk about exposing snapshots via RGW.

A: Nobody wants to claim credit for that? Okay. Well, there's a feature already where you can clone an RBD image off of an RGW-based image - a base image that's stored in RGW - and we want to go the other way.
H: Yeah, I think it requires the Zipper backend, essentially, based on the RGW discussion yesterday. I'm not sure if it's in a stable enough position where it's feasible to do this for Quincy, but that'd be where we'd do it - that'd be the way to run it: the RGW Zipper framework, yeah.
I: Okay, great. So we've been working during Pacific to add encryption to librbd, and Jason was our mentor - he was doing the code review - and we actually got it into Pacific.

I: So what we have today in Pacific is LUKS-based - LUKS-format - encryption for RBD images. Basically, the user needs to set a passphrase if they want to use encryption - there is a passphrase for the image - and then, whenever they open the image, they need to apply the encryption load function to have the actual encryption/decryption applied to the I/O. And so this is what we have for Pacific, but we are still missing a relatively small piece of code, which we have a PR for, and this will make the encryption feature much more useful - it will make it a differentiator feature. Right now, what we have is: when you create a volume and you encrypt it, then all clones have to use the same encryption; the missing piece adds support for separate encryption...

I: ...for the child and for the parent - and this is something that we already have a PR for, and actually Jason reviewed it, but it was his last day and we didn't manage to complete this code review. So we are waiting to see who will take on the review for this PR. And I think this is what I had to say - Danny, if you want to add anything?
D: Yeah, maybe just a few words. As was said, there is value in this as-is, because we get better performance - I think we cut down on one copy.

D: So if you use QEMU encryption versus librbd encryption, you're gaining like 20% on performance; and if you move to LUKS2 - which is not supported by QEMU, but is the default today in dm-crypt, so just to get compatibility with images used with CSI, which are doing dm-crypt and LUKS2 - you get an extra 10 or 20% on performance above the LUKS1. So you get some benefits just by using the RBD-level encryption, but really, to drive it, we need this clone feature.
D: We see this as: if you have a lot of stock images lying around and you want to clone them, but each one needs a new key - so the parent can be unencrypted, but the child has to be encrypted, so every bit of new data gets encrypted. So that's kind of the use case, and that's where the state is - it's pretty advanced.
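A minimal sketch of the Pacific workflow described above, via the rbd CLI (image name and passphrase file are examples, and the rbd-nbd map option names are an assumption; the per-clone-key behaviour discussed here is the part still under review, so it is not shown):

    import subprocess

    IMAGE = 'rbd/secure-volume'              # example image
    PASSPHRASE_FILE = '/etc/ceph/luks-pass'  # example passphrase file

    # Format the image with LUKS encryption (Pacific supports luks1 and luks2).
    subprocess.run(['rbd', 'encryption', 'format', IMAGE, 'luks1',
                    PASSPHRASE_FILE], check=True)

    # Map it through rbd-nbd with the encryption loaded, so all I/O is
    # transparently encrypted/decrypted in librbd (option names are assumed).
    subprocess.run(['rbd', 'device', 'map', '-t', 'nbd', '-o',
                    'encryption-format=luks1,'
                    f'encryption-passphrase-file={PASSPHRASE_FILE}',
                    IMAGE], check=True)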
A: I'm backfilling for Jason, so I'll try and get to this soon. I see that it touches a QA task, but it doesn't add it into the suites anywhere - but maybe it's already there; that'll be my first question for you: where it's tested. But, yeah, this sounds like a really useful feature - client-side security.
E: I have one really quick question, just to make sure I understand: because this is using the existing LUKS format, does that mean you can go back and forth between mapping the image unencrypted and then using dm-crypt on top of it, and then mapping it with the encryption in librbd - and it'll work both ways?
I: So it's still not... there are still some points that Jason raised that we want to discuss, and this is what we said - we wanted to discuss it. And perhaps... best of luck.
A: That's all I got - thanks, guys. Josh, any closing words?

First stage...