From YouTube: 2015-APR-23 -- Ceph Tech Talks: Calamari
Description
A detailed look at the Calamari Ceph management API and the Romana GUI from Red Hat.
http://ceph.com/ceph-tech-talks
A: Alright, welcome back to the monthly Ceph Tech Talks, which have had such hallowed names as Sam, Josh Durgin, and Yehuda so far, and now we're up with Gregory Meno, certainly another heavy hitter, to talk about Calamari and Romana — the management API for Ceph and the GUI, respectively. So, Greg, you want to take it away?

B: Yeah. All right, so Calamari and Romana are both projects: Calamari is the API and Romana is a dashboard. And it looks like I need to get access to the view here first.
B: So this is essentially what Romana is: it's a management and monitoring dashboard for many Ceph clusters, and it provides you kind of a basic overview of the status of each cluster. Right now you can see we're looking at one that has a fair number of OSDs, a set of monitors, and low usage — there's really no traffic on this thing right now — and it provides you a couple of different views.

This is kind of the single pane of glass that gives you the idea that this cluster is OK: it's got plenty of space available, all the things we expect are reporting in, and it's definitely not overloaded. You can look at different views here — this one is kind of OSD-centric, and so here are some OSDs that are having trouble and are down, and you can find out where they are and how to find them.

So the idea here, basically, is that it allows you to monitor the health and status of your cluster, and we expose some information about each of the pieces of the cluster — the usual stuff you'd expect to see in any system, plus some cluster-specific stats. In the fourth view down here, we allow you to change the state of the cluster. These are the pieces that comprise it: you can move OSDs in and out, and you can set flags on the cluster.

So that's a really high-level overview of what Romana is. To talk about what Calamari is: it's the API that powers this view. It provides all the information that you see here, pretty much in the way that it's displayed here. Anything that you can do in this user interface, you can do by going to the API, so I'm going to go ahead and just plug in a basic endpoint to give you an idea.
B: So this is a browsable view of the Calamari API, and it's really just providing some JSON output that describes what clusters it knows about. You can dive further in from there, and it provides all the details that you'd need. So those are the two main pieces of Calamari — the back end and the front end — and I'd like to go ahead and discuss them.
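As a hedged sketch of what a client sees from that browsable endpoint: the cluster list lives at a URL like `/api/v2/cluster` in Calamari's v2 API, though the payload fields below are illustrative assumptions for this sketch, not captured output.

```python
import json

# Illustrative JSON shaped like the cluster-list response; the field
# names ("id", "name", "update_time") are assumptions, not verbatim
# Calamari output.
sample_body = """
[
  {"id": "00000000-1111-2222-3333-444444444444",
   "name": "ceph",
   "update_time": "2015-04-23T17:00:00Z"}
]
"""

def cluster_ids(body):
    """Return the FSID of every cluster the API reports."""
    return [c["id"] for c in json.loads(body)]

print(cluster_ids(sample_body))
```

In a live deployment the same parsing would run against the HTTP response body instead of a literal string.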
B: Okay, so this architecture diagram is organized with the left being the dashboard that we just looked at, and the right going farther toward the Ceph cluster. On the far right you can see there's a number of agents that we run on each node of the Ceph cluster — so you can regard the far right as the Ceph cluster, and on the far left is the dashboard that we just saw — and the pieces in the middle are, unsurprisingly, a Django REST Framework app that's served via Apache. It uses a special service called Cthulhu, which basically collects the state of every cluster that it knows about and provides kind of a caching layer, so that the API can service requests for information near-instantaneously — there's no wait there. And then, when we get requests in to change the state of the cluster, the same sort of thing happens: you get a request ID back that allows you to ask about the status of that request, because it's being tracked by that service, Cthulhu, as it goes to completion. I can show examples of those in a little bit.

But kind of the important thing to remember here is that this service knows the state of each cluster, and it knows it in memory, in Python, and so getting the information into the API is really as simple as making an RPC request for it and then getting back JSON that either represents the information you made a GET request for, or something to track a request to change the state of the cluster with a POST.

In the middle there, in the green, we have a database for persistence. This does several things: it allows us to store user information, cluster state history, and event history, so you can kind of think of it as some place that the service can restore from.
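The read/write split described above — reads answered straight from the in-memory cache, writes acknowledged with a request ID that the client polls — can be pictured like this. This is a toy stand-in, not Calamari's actual tracker; all the names are invented.

```python
import uuid

# Toy request tracker standing in for Cthulhu's: a POST is acknowledged
# immediately with an ID, and the caller polls until completion.
_requests = {}

def post_state_change(command):
    """Accept a state-change command and return a trackable request ID."""
    rid = str(uuid.uuid4())
    _requests[rid] = {"command": command, "state": "submitted"}
    return rid

def mark_complete(rid):
    """Called by the service once the cluster reaches the desired state."""
    _requests[rid]["state"] = "complete"

def request_status(rid):
    """What a GET on the request-tracking endpoint would report."""
    return _requests[rid]["state"]

rid = post_state_change({"osd": 3, "up": False})
print(request_status(rid))   # -> submitted
mark_complete(rid)
print(request_status(rid))   # -> complete
```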
B: So if the Cthulhu service — this blue box here — if it were to crash, it's not super important, because it's been streaming out everything it knows to a database in kind of a binary-blob fashion. We're really just writing out what it knows into a big blob table, so that when it starts up again, it just says: hey, tell me about the state of the cluster, tell me about the events that I was tracking — and I just rebuild it all from there.
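That blob-table persistence is simple enough to sketch. This toy uses sqlite and JSON to show the stream-out/restore cycle; the table name and schema are made up, not Calamari's actual schema.

```python
import json
import sqlite3

# An in-memory DB stands in for the persistence layer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE blobs (fsid TEXT PRIMARY KEY, state TEXT)")

def stream_out(fsid, state):
    """Continuously write the latest known cluster state as one opaque blob."""
    db.execute("INSERT OR REPLACE INTO blobs VALUES (?, ?)",
               (fsid, json.dumps(state)))

def restore_all():
    """On startup, rebuild the whole in-memory view from the blob table."""
    return {fsid: json.loads(state)
            for fsid, state in db.execute("SELECT fsid, state FROM blobs")}

stream_out("fsid-1", {"health": "HEALTH_OK", "osds": 12})
recovered = restore_all()   # what a restarted service would see
print(recovered["fsid-1"]["health"])   # -> HEALTH_OK
```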
B: You'd expect to see CPU, memory, disk, network, and then cluster-specific stats — things like how full the cluster is, events with the OSDs, that sort of thing. So the way that Cthulhu gets its job done is it uses Salt, and Salt has a message bus built into it — ZeroMQ — which allows us to kind of dual-purpose Salt. In some ways it's a config management system like Puppet or Chef, but it also does so much more.

The message bus lets us ask questions of the module on each minion and get responses, so we get bidirectional communication up to the Cthulhu layer, which is really quite handy. For an example of that: we ask the module on the minion to send us a heartbeat, so that we know that each piece of the cluster is reporting in, and periodically we ask it and it will tell us things.
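The heartbeat exchange amounts to an exported function on each minion plus a broadcast call from the master. A real deployment would go over salt's ZeroMQ bus (e.g. via `salt.client.LocalClient`); the sketch below fakes the bus with a dict so the shape of the exchange is visible, and the function and field names are assumptions.

```python
import time

# --- on each minion: a module function salt can invoke remotely ---
def get_heartbeat(fsid):
    """Report which ceph services on this node are alive right now."""
    return {"fsid": fsid, "services": ["osd.0", "mon.a"], "ts": time.time()}

# --- on the master: a fake bus standing in for ZeroMQ/salt ---
minions = {
    "node1": lambda: get_heartbeat("fsid-1"),
    "node2": lambda: get_heartbeat("fsid-1"),
}

def broadcast(fn_table):
    """Crude stand-in for LocalClient.cmd('*', '<module>.get_heartbeat')."""
    return {name: fn() for name, fn in fn_table.items()}

beats = broadcast(minions)
print(sorted(beats))   # -> ['node1', 'node2']: every minion answered
```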
B: Thanks, Patrick. So this REST API has a pretty good wealth of information about all the pieces and what you can do with them. Here's a summary of all the URLs that we support. I don't really expect you to be able to read each individual one, but know that it exists, and it kind of gives you an idea of whether each endpoint is retrieving information, whether it allows you to modify state, and whether it allows you to remove state.

If you scroll down — you can see, once the display catches up, because I'm sure it's lagging — there's quite detailed information that comes from the code. If you know anything about Read the Docs, you know that it can be built using Sphinx, which is how we're doing it. These are basically just the docstrings that live within the implementation, and we have an automated way to generate this API reference, so that when we add new stuff, it automatically gets here when we rebuild the docs. It's pretty exhaustive about what you can do with each of these endpoints, what they expect, and what they give you. I would encourage you, if you're interested in getting started with Calamari, to go check that out for sure, because it's a great source of information.
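The docstring-driven reference amounts to the docs build reading each view's `__doc__` straight out of the code. The view class below is invented for illustration — it is not an actual Calamari endpoint.

```python
class ClusterViewSet:
    """
    Show the clusters that this Calamari server knows about.

    A GET returns one entry per cluster, keyed by FSID.
    """
    def list(self):
        return []

# Sphinx autodoc (or a custom generator) renders exactly this attribute
# into the published API reference:
summary = ClusterViewSet.__doc__.strip().splitlines()[0]
print(summary)
```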
B: So the basic model here is that we have a layer where we're caching all the state of the cluster, which is something that really doesn't grow over time. When you know about a cluster, it's comprised of a finite set of services, and each of those services has a state where it's either OK or not OK. And so it's okay for us to store all this data in memory at the moment, and it actually gives us the benefit of being able to serve it quite quickly, and of not worrying about doing bad things with the database, as Django often allows us to do. And it's really handy to be able to deploy native-language modules onto the cluster to query the various components.
B: So this salt minion here has a module, ceph.py, and it talks to the Python layer via the rados bindings, so that we can get information quite easily from Ceph, and we get a lot of advantage by working with the interfaces Ceph has already built to be talked to. We also have the advantage that, in this architecture, you can target different pieces of the Ceph cluster very easily with salt — it allows a very flexible way to match which pieces of the cluster you're talking to, and where.

That gives you kind of an idea of some of the information we get — all the things we know about this particular cluster that I showed you. There's all the OSDs reporting which cluster they belong to, what type they are, what version of Ceph they're running, and somewhere in here there'll be a monitor. So that's all well and good. And as for what I was saying about being able to target at a very fine granularity: you can see I asked just for everything it knows about, but you can narrow it down.
B: All right, so here I am in the same place as the dashboard — this is the Calamari node, and you can see that these are the Ceph nodes that it's talking to; it's a variety of machines in our lab. So before, what I did was just ask the cluster: tell me about all the things you know about Ceph, kind of off the bat, and it produces a wealth of information for each node — all the Ceph services running on that node. But we can target just one node there, and it only reports what it knows about itself. It looks very similar, but the idea here is that you can use different patterns to get at the different pieces.

And another really neat thing about this is you can ask it to describe the module that we're calling. So you say `sys.doc ceph`, and it'll tell you: what does this module do? The idea is it enumerates the services run locally and, for each, reports FSID, type, and ID — you can read it there, it's pretty obvious. So this is the setup I was talking about: you can ask it about every exported function that it has.

And this is all just Python, and so — this is where I kind of want to take the discussion — adding things to Calamari is as simple as writing a module like this, or adding to this module, and having it distributed out to the cluster. And so you can have it answer all these questions: it can run rados commands, it can get the boot time of each thing, and with the heartbeats that I showed you, it can just find out the status. So there's not a whole lot implemented here, but these are the kinds of things that we need in order to provide the ability to know the state of the cluster and to change the state of the cluster.
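Writing such a module really is just adding top-level Python functions to a file that salt distributes to the minions; anything public becomes callable as `salt '*' <module>.<function>`. The function and its return shape below are invented to show the pattern, not copied from Calamari's module.

```python
# Contents of a salt execution module (e.g. a ceph.py on each minion).
# Every public top-level function here is remotely callable.

def server_status():
    """Enumerate the ceph services this node runs, with fsid, type, and id."""
    # A real implementation would inspect the running daemons; this
    # placeholder only demonstrates the shape of the return value.
    return {"services": [
        {"fsid": "fsid-1", "type": "osd", "id": "0"},
        {"fsid": "fsid-1", "type": "mon", "id": "a"},
    ]}

print([s["type"] for s in server_status()["services"]])   # -> ['osd', 'mon']
```

The docstring is what `sys.doc` would show for the function, which is why the talk leans on keeping those accurate.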
B: There's a slide here, and the basic idea — it's not really spelled out much in this slide set, so I'll go ahead and show a different one — is that Romana is a JavaScript app served by Apache. It's kind of like a set of single-page apps that know how to talk to the API, and their only job is to allow people to interact with the API in a very visual way.

So there's not a whole lot to say about it, other than: it's a set of single-page apps, it's implemented in AngularJS, it's got a GitHub page, and it's got some README documentation. Like Patrick was asking earlier — we haven't gone so far as to describe it in detail, because it's really just a way of visualizing the API; it's kind of the reference implementation of a client of the API. So we try to add the things that we can do in the API to the UI, and keep it so people can see them quite easily.

I've already covered most of these slides in one fashion or another, so I guess the place that I'd like to take this tech talk in our final half is really what I see in the future of Calamari — what we're doing in the next version, and what we're trying to do right now.
B: So, the features that we're working on at the moment. The idea is: okay, this is great — you can see the Ceph cluster and get this notion of the status of the different components — and what we're trying to do in the next version is take it a step further. Beyond just the status of the different services that comprise a Ceph cluster, we're going to start learning about the hardware that underlies those services, and then try to provide a uniform API to see the state of the hardware, what checks apply to the hardware, and what kind of events happen to the hardware.

A convenient way to think about this is to consider, say, a disk that underlies an OSD. In a really simplified case, you can think of an OSD as a process that sits on top of a hard disk and controls a portion of the disk, and so when that disk is not healthy, the OSD can have trouble — either satisfying an SLA, or being up at all, or anything of that sort. So if we were to set up checks so that we can see when, for example, sectors are getting remapped, then we're going to start to know that this disk is transitioning from its everything's-fine state to, well, it's used up half of the sectors it has to remap — and we know that's a warning.

So we might say something like: the hardware that connects to this OSD is starting to warn. And so we change the status, and then provide a way for people to not only get that event, but to be able to figure out what to do about it. Because it comes down to: okay, well, the OSD may tell me what disk it is, and that's all well and good, but it's in this cabinet with a million other disks, you know?
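That everything's-fine-to-warning transition is easy to picture as a check over the remapped-sector count. This is a sketch only: the 50% threshold mirrors the example in the talk, but the status names and attribute handling are invented.

```python
def disk_status(remapped_sectors, spare_pool_size):
    """Map a disk's remapped-sector usage onto an OK/WARN/FAIL status."""
    used = remapped_sectors / spare_pool_size
    if used >= 1.0:
        return "FAIL"   # spare sectors exhausted
    if used >= 0.5:
        return "WARN"   # half the spare pool consumed: warn early
    return "OK"

print(disk_status(10, 1000), disk_status(600, 1000))   # -> OK WARN
```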
B: So that's kind of the basic idea of what we're trying to do now: take the different pieces of hardware that underlie each of these important services — OSDs, monitors, and so on — and provide a context so that different vendors know how to plug in their own specific checks. I mean, SMART is a standard, and lots of people implement to it, but it's not the end of the story. There are still plenty of other things within storage that play a role in the pipeline of storing data: there can be RAID controllers, there can be different types of devices, there can be SSDs, there can be flash that's plugged into the motherboard itself — there are all sorts of ways that storage can manifest itself.

So for these new and cutting-edge technologies, providing a way for their vendors to help us have the right checks and say the right things means that, when we run Ceph on the hardware, we can give people early warning when OSDs are going to go bad — or, once they have gone bad, we can give them an easy way to identify which things need to be taken down and which things can be hot-swapped, if they're hot-swappable — and really just provide a lot more hardware context to the Ceph cluster itself.
B: We're discussing that on the mailing list, and I think at this point we still haven't quite figured out what the shape of this API is. So the next steps are really to hammer out what it looks like, show it to some people to get their feedback — to see whether it's going to serve this purpose or not — then iterate on that until we have something that we like, and then it's down to implementing.

Recently — this is going to be in the 1.4 branch — I got SMART running against a very limited subset of controllers, and I've got to fix a few bugs with it. But the idea there is that it's going to be kind of a thread from the front of the API down to the Ceph module level, showing how you would add a feature like this. So it's putting each of the pieces together: within the API, you ask the right kind of JSON-shaped question of Cthulhu, which then communicates the request across to this module that I'm writing, which negotiates with SMART and figures out what's going on there, all the way back up, so it can be reported. It's going to be a new endpoint, it's going to have new views, and I see potential for this API endpoint to integrate with other event- and alerting-type frameworks like Nagios and Sensu and all those sorts of things.
B: We understand how Ceph is organized, we're asking very specific questions, and we're understanding the answers in a rich context here in this service, so that we can provide really smart events. For example — in the same scenario we've been talking about, the SMART status of a disk that underlies an OSD — we could answer questions like: okay, we know this OSD services this pool, and we know that this cluster has this level of capacity. If this OSD whose SMART status is starting to warn us it's going bad — if it goes out, are we going to go over the max full ratio, and is that going to cause a change in the SLA? That's the kind of thing that we're uniquely positioned to answer, where typically with a generic monitoring tool you would just say: oh, SMART is failing or not, right? But that doesn't give you the really fine-grained understanding of how it relates to the cluster.
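The "what happens if this OSD goes out" question reduces to arithmetic over the cluster's capacity figures. A sketch with made-up numbers and a simplifying even-distribution assumption; 0.95 is Ceph's default `mon_osd_full_ratio`.

```python
MAX_FULL_RATIO = 0.95   # ceph's default mon_osd_full_ratio

def projected_utilization(used_tb, total_tb, lost_osd_tb):
    """Cluster utilization if one OSD's capacity leaves and its data
    is re-replicated onto the remaining space."""
    return used_tb / (total_tb - lost_osd_tb)

# 80 TB used of 100 TB; the warning OSD backs 20 TB of that capacity.
after = projected_utilization(80, 100, 20)
print(after, after > MAX_FULL_RATIO)   # -> 1.0 True: losing it breaches the ratio
```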
B: And like I said, that's the value proposition of Calamari. We take the idea that this is for people who have a basic idea of what Ceph is but may not be wizards about it. So if they saw something like that, it might not be obvious to them that they're about to have a cluster that's going to go into data-migration mode, move stuff everywhere, and start causing trouble for whatever applications are living on top of it. And there's the other case, where it's: you know, this one's fine, we're definitely nowhere near this being problematic, this is something the cluster can totally handle by itself — and this is a "you should replace these disks next time you go to the data center," a next-week, week-after, this-month kind of task, not a right-now sort of thing. I'm really excited about that, because I think that's kind of the vision of this project, and it's what keeps me interested in working on it.
A: Okay, let's see if either of these guys wants to ask a question. But the one question I get asked more than anything else when I'm out at conferences and the like is: how far do you think the management part of the management API is going to go? Are we going to get to the point where you spin up Calamari and then you can deploy a Ceph cluster? Is it going to give you an easy, push-button walkthrough of how to add a new OSD? Like, how far are we taking it?

B: Yeah, I think that's definitely a story that we've been talking about for a while. The idea is that in the REST API right now we've implemented some of the ceph command set. So if you think about the Ceph CLI, you'd say something like `ceph osd` — add this or that, change a position in the crush map. We've implemented some of that stuff — we've got about thirty percent of what the CLI has — and I can say that there's...
B: ...definitely a desire to expand Calamari to have the functionality of all the stuff that Ceph can do. From a management standpoint, you can imagine that it could do everything rados can do, everything the radosgw command line can do, and everything the rbd command line can do. So there's definitely a goal to flesh out the rest of it and offer that kind of management, where you could say something like: okay, make this an RGW, make that, make a new OSD.

The other part of the answer is that Calamari is definitely going to enable that, but there is some precondition of, well, where does Calamari start? Because there are a lot of projects that allow you to do some of that — like, for example, ceph-deploy, or the community Puppet scripts, or some other things like that. So it's: where do you start from? That's kind of the more interesting question, I think.
B: It's definitely something we're planning to do — take the REST API and give it the rest of the functionality that's possible. But the question is: where on the spectrum from bare metal to a fully provisioned cluster does Calamari begin to take control? That's been the topic of some of our discussions, internal to Red Hat and in the community as well. So the answer is: yeah, we're totally going to add more management functionality. Is it the next thing we're going to do? I don't have a firm answer on that.
B: So, I've talked to Mark about that — Mark and I talked when we came back from a conference earlier this year, where he was again showing me that tool — and I think it's interesting, especially from the standpoint of an optimal weighting strategy, because that is something that CBT provides, it's something where Calamari is kind of in need of improvement, and it also matches management functionality Calamari already has. From the idea that you can modify the crush map — okay, that's all well and good, but what do you want to change, you know? Since modifying the crush map implies that you're going to be moving data around, you'd probably better know why you want to move the data around. So I would say that CBT is interesting from kind of the input side of that, because it does the analysis of the cluster state — how we'd actually change the weighting of the OSDs to make it optimal — and so I think that would be kind of a great integration point, because then it could help guide you through tuning your crush map for better performance in this kind of case or that kind of case. Does that make sense?
B: A great question. Since Calamari's history has been that it was closed source and has now been open sourced, packaging has always been one of those things where we spend a lot of time on the mailing lists answering questions that we probably wouldn't have to answer if we had better packaging. The main hurdle to overcome, really, is that the way it's packaged currently is a little bit challenging: it's not just C that you compile and you're happy — there are a number of pieces to it that are difficult.

So, for example, Calamari has its own set of repos that it serves to its minions. When you connect it to a cluster, there are packages that we need on that cluster, and we use salt to get them there, and we also serve those packages. For example, Diamond: we have a specific version of Diamond that we want, and so the Calamari package has to know that it can also serve a repo of packages that contains Diamond, and salt installs the right one. That's one challenge. Another challenge is that, because it's Python, we use pip and we have a virtualenv. Kind of the tradeoff that we made early on was that we wanted to go quickly to build this thing, and the consequence is that we have a virtualenv within the actual package that vendors in a number of things.
B: Our strategy from early on was that we would choose a very specific environment with which to build these things, and we're using Vagrant to provide virtual machines that can provide that packaging. That's caused confusion too, because it works fine when it works fine, and when it doesn't work fine, you kind of have to know some stuff about it. So I guess the answer is: in the repository there is a debian directory and there is a calamari spec file, and so you can build RPMs and debs there. But again, it's a little bit more nuanced than "you make the package and it's fine," because it has some specific dependencies, it has some things that need to go upstream, and it has a lot of connections to things that it wants to put on a cluster. Those are all challenges for our packaging, and our stance has been: we'd love to have help with it.
B: So I guess the question there is — I'm assuming you're talking about what I mentioned, something that needs to be upstream that isn't — what is it? There's Diamond, right: there's Diamond that has some code in it that gives cluster-specific information, and it's been needing to go upstream for a while; it's something I would love to have contributions from the community on. There's one challenge about it, really, which is that it kind of goes against Diamond's strategy for organizing statistics.

The idea behind Diamond is that it's a statistics collector, and it usually takes those statistics and tags them with which server they come from. That's all well and good when you're talking about CPU, disk, network, RAM — all that kind of stuff; those are obviously things that belong to a server. But our modification says: okay, that's great, and I want to get statistics about Ceph, which is a distributed object store — so there you go: it depends on multiple servers, and you want to have statistics that represent more than one server. And so we've kind of changed the way it does stuff, and that's just a little bit different from how the organization currently works. So that was something of a sticking point, but I think that can be resolved, and there are other pieces of the diff that don't necessarily depend on that.
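The tension with Diamond's per-server model shows up in how a metric's path is built: host-scoped stats keep the hostname as their prefix, while cluster-wide stats want the cluster FSID instead. The path scheme below is invented for illustration, not Diamond's actual one.

```python
import socket

def metric_path(name, cluster_fsid=None):
    """Build a Graphite-style path: host-scoped by default, cluster-scoped
    when the stat describes the whole cluster rather than one server."""
    scope = cluster_fsid if cluster_fsid else socket.gethostname()
    return "servers.{0}.{1}".format(scope, name)

print(metric_path("cpu.idle"))                 # tagged with this host's name
print(metric_path("df.total_used", "fsid-1"))  # -> servers.fsid-1.df.total_used
```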
B: There are a couple of things that are more generic, so some of it can be broken out and probably go upstream, but we really need to resolve that whole "we want to report on a set of servers at a point that isn't a server" question. The discussion just needs to happen, and so far it's just something we haven't gotten around to.