From YouTube: CDS Infernalis (Day 1) -- Calamari Discussion
Description
Videos from Ceph Developer Summit: Infernalis (Day 1)
03 March 2015
https://wiki.ceph.com/Planning/CDS/Infernalis_(Mar_2015)
A: All right, moving right along to the next session, which we are now late for. Gregory, I put you in charge of this session. This was the Calamari Clients discussion: it's getting a new name, and there's also some discussion on how to implement high-level stories and an intelligent API. So this is all things Calamari. Gregory, do you want to give us a little bit of background?
B: Thanks for the introduction, Patrick. Coming right off the intro gives me a larger audience than usual. So, for the first blueprint, what I want to talk about is the renaming of calamari-clients. calamari-clients is a UI that uses the Calamari project to provide a visualization of a number of Ceph clusters, and we have found since joining Red Hat that the naming causes some confusion, specifically when you say things like 'well, I'm working on Calamari.'
B: The idea here is that within Red Hat we often have upstream projects, and we try to commit upstream first, so that we are contributing to the community like good members of any open source project, and then we have downstream repositories that we build the products from. So I saw the opportunity to create an upstream repository for calamari-clients and rename it at the same time. Yesterday I went ahead and did that; I'm going to share my screen with you.
B
So
this
is
the
the
repository
in
question:
kellan
right,
clients
and
I
put
a
breed
me
and
my
branch
that
I'm
proposing
to
pull
into
the
main
thing.
Basically
I'm,
just
saying
that
we're
changing
the
name
of
calamari
clients
to
Ramona,
there's
a
link
to
Wikipedia
describing
what
that
means.
But
the
more
important
is
what
does
it
mean
for
you
as
a
contributor
to
calamari
or
calimary
clients?
B
Well,
it
means
nothing
much
from
a
lot
of
standpoints
codes
still
going
to
be
available
under
the
same
licenses
before
that's
MIT
and
it's
going
to
still
be
developed.
In
the
same
way,
the
only
one
who
contributes
pull
requests
against
the
other
repository
which
isn't
going
away
in
the
short
term.
I'll
kindly
move
it
over
here
and
put
it
into
this.
The
new
upstream
for
the
UI,
the
kind
of
reference
implementation
of
what's
available
in
the
calamari
API.
The.
B
D
D: Before we move on, is there a plan for having...
B: A great question, John. The plan is that we still have a product that's based on this, and we're still going to continue to add things where it makes sense, so I think keeping it a vital project is important. I plan to make releases of the upstream in step with Calamari, so that compatibility is never in question, and neither is its vitality: there are still expectations from the community that any work they put in will be available on a regular schedule.
B: All right, cool. Let me talk about Calamari a little bit more. The blueprint I'm putting forward here is how to implement high-level stories and an intelligent API, and I'm going to try to do a bit of a deep dive into the architecture of Calamari. There have been broad overviews of it before; those videos are available, and if you search for Calamari and John Spray you'll probably find a friendly introduction to what Calamari is and an overview of how it's put together.
B
My
goal
here
is
a
little
bit
different,
I'm
going
to
show
an
existing
feature
and
the
points
on
the
architecture
where
it
touches
so
that
I
can
talk
about
a
user
story
that
isn't
implemented
and
talk
about
how
one
might
implement
that
and
get
feedback,
and
maybe
garner
excitement
for
anyone
who's
interested
in
contributing
along
those
lines.
So,
as
a
disclaimer,
you
know
I'm
only
speaking
about
the
next
feature
as
kind
of
a
this
is
how
I
think
it
could
be
designed,
but
there's
nothing
set
in
stone
feel
free
to
go.
B
B
B
B: Okay, so this is a pre-recorded video of interaction with the Calamari API. Specifically we're going to talk about pool creation and updates. We're going to start off from a pool list view and go through a story that sounds like: as an administrator of Calamari or Ceph, I want to be able to expand pg_num on a pool without disrupting I/O. So imagine that we're setting up an RGW deployment and we need to create a couple of pools for it.
B
Though,
I'm
going
to
show
you
that
I've
already
started
by
creating
an
rgw
pool
which
has
a
placement
group
number
of
32,
which
is
in
line
with
our
current
recommendations
and
I'm
going
to
go
ahead
and
create
the
next
pool.
Rgw
buckets.
You
can
see
that
we've
got
a
request,
ID
running
over
to
the
request,
endpoint
and
start
learning
about
what
happened
for
that
creation.
B
You
can
see
that
at
the
top
there
we're
creating
creating
a
pool
and
waiting
for
a
specific
OSD
map
and
then,
after
a
short
amount
of
time,
it's
going
to
succeed
and
we
can
go
back
and
look
at
the
resource
itself.
So
there
it's
completed
successfully
and
if
we
go
back
and
look
at
the
list
view
and
head
down
to
the
bottom,
we
can
see
that
we
made
it.
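The create-then-poll flow shown in the video can be sketched as a small helper. This is an illustration rather than Calamari's own client code: the `fetch` callable is a hypothetical stand-in for an HTTP GET of the request resource, and the `state`/`error`/`error_message` field names are assumptions about the response shape.

```python
import time

def poll_request(fetch, request_id, interval=1.0, timeout=300.0, sleep=time.sleep):
    """Poll a long-running Calamari request until it completes.

    `fetch(request_id)` is a caller-supplied function returning the JSON
    dict for the request resource; injecting it keeps the sketch testable.
    """
    waited = 0.0
    while waited <= timeout:
        req = fetch(request_id)
        if req.get("state") == "complete":
            if req.get("error"):
                raise RuntimeError(req.get("error_message") or "request failed")
            return req
        sleep(interval)
        waited += interval
    raise TimeoutError("request %s did not complete within %ss" % (request_id, timeout))
```

A real caller would pass a `fetch` that issues the GET against the Calamari server, then inspect the completed request for the details of the pool creation.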
B
So
here's
the
part
where
the
update
comes
into
play.
We
realize
that
oh,
we
created
the
P
genome
far
too
low
for
what
the
current
guidance
is.
So
if
you
look
concepts,
com,
PG,
calc
and
suggest,
or
I
use
case,
like
our
GW
and
OpenStack
combined
NSF
cluster,
that
we
should
be
using
a
PG
count,
much
more
much
higher
than
what
we
just
created.
So
we're
going
to
use
the
update
functionality
of
the
API
and
show
how
this
is
part
of
what
makes
calamari
great
the
the
idea
that
we
can
perform.
B
C
B
B
B: These are basically the attributes that I'm going to request an update on for these pools; I'm going to change them both to 1024, which is the recommended value. Then I make a request to the API to change that, and you can see again that I've got a request ID. These long-running operations return instantaneously, and then we can learn about what's happening with them here. Now I want to pause the video to talk about something specific.
B
You
can
see
we're
making
the
pool
larger
and
you
can
see
the
status
is
waiting
for
PG
creation
and
that
we're
making
like
62
out
of
900.
Something
though
the
idea
here
is
that
you
know
growing
list
from
one
shot
to
the
other,
would
not
only
apply
on
the
face
of
steph's
ability
to
there's
a
config
value.
That
says
you
shouldn't
really
produce
more
PG
num
at
this
rate,
so
it's
kind
of
a
the
steps
you
have
to
take
to
grow
this,
and
so
we
honor
that
config
and
grow
the
pg's
piece
at
a
time.
B
And
then
we
wait
for
an
MSD
map
to
come
back
and
tell
us
that
that
happened,
so
you
can
see
after
I
refresh
the
next
time.
It's
going
to
say:
ok
now
we're
waiting
for
another
OSD
map
and
then
it's
going
to
create
the
next
round
of
pg's.
But
the
idea
here
is
that
it's
basically
going
to
be
issuing
a
number
of
SEF
commands,
like
change
the
number
of
pg's
for
the
pool,
and
it's
going
to
do
that.
There
are
all
different
times
in
order
to
not
dis
up
things
that
are
happening
in
the
cluster.
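The stepped growth just described can be sketched as a pure planning function. The `max_step` limit stands in for the relevant Ceph config value; the function itself is an illustration, not Calamari's actual code.

```python
def pg_num_steps(current, target, max_step):
    """Plan intermediate pg_num values for growing a pool.

    Rather than jumping straight from `current` to `target`, emit
    increments no larger than `max_step`; the caller would issue
    'ceph osd pool set <pool> pg_num <n>' for each value and wait for
    the corresponding OSD map before moving on to the next one.
    """
    if target < current:
        raise ValueError("pg_num can only grow, never shrink")
    steps = []
    while current < target:
        current = min(current + max_step, target)
        steps.append(current)
    return steps
```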
B: If one of the pre-flight checks fails, that is reported as an error and we tell the user about it, so that we don't waste their time and we give them some direction. Specifically, you can see that we're checking that the pg_num is inside the config bounds and that it doesn't get any smaller; there's a variety of checks that we make. Once that passes, we go back to the update method, and you'll see that we pass the request over to cthulhu. Cthulhu is a service that holds the state of all the Ceph clusters it knows about in memory.
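A toy version of those pre-flight checks might look like the following. The field names and the exact set of checks are invented for illustration; the real validation in Calamari covers more cases.

```python
def validate_pg_num_update(pool, new_pg_num, conf_max_pg_num=65536):
    """Return a list of human-readable validation errors (empty means OK).

    `pool` is a dict carrying the pool's current attributes; this schema
    is illustrative, not Calamari's exact one.
    """
    errors = []
    if new_pg_num <= pool["pg_num"]:
        errors.append("pg_num can only be increased, not decreased")
    if new_pg_num > conf_max_pg_num:
        errors.append("pg_num %d exceeds configured maximum %d"
                      % (new_pg_num, conf_max_pg_num))
    if new_pg_num & (new_pg_num - 1):
        # pools are conventionally sized to powers of two for even data
        # distribution; treat this as an advisory check
        errors.append("pg_num %d is not a power of two" % new_pg_num)
    return errors
```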
B
Talk
about
how
that
happens,
so,
what's
going
on
here,
okay,
yeah
I
can
definitely
finish
this
in
10
minutes.
So
basically,
the
kind
of
thing
imma
gloss
over
here
is
that
this
goes
over
to
philly
via
our
pc
and
when
we
get
into
thullu.
It
looks
like
this,
where
we
have
a
request
and
depending
on
the
object
type,
it
goes
and
creates.
This
thing
called
a
request
factory.
B
So,
for
example,
in
this
case
we're
going
to
be
updating
a
pool,
we
need
to
create
a
pool
requestfactory,
and
then
we
need
to
check
if
our
monitor
is
available
to
send
the
commands
and
then
we
go
submit
them
via
salts.
I
should
be
talking.
I
should
be
showing
the
request
factory
here,
I
think
deed.
I
will.
B
See
if
it
comes
up
alright,
so
this
is
in
the
pool
requestfactory
and
basically
this
is
a
lip
area
and
thew
Lou,
that's
responsible
for
mapping
the
action
we
want
to
take
from
rest
to
the
actual
set
commands
and
it
makes
sense
of
you
know
the
transforms
are
trying
to
make
and
turns
those
into
you
know
things
we
run
on
the
command
line,
essentially
on
the
on
a
monitor
node.
So
the
responsibility
here
is
basically
the
mapping
of
the
high-level
action
to
the
things
that
actually
requires
ancef
to
carry
it
out.
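That mapping responsibility can be illustrated with a toy factory. This is not the code in cthulhu, just the shape of the idea: REST-level attribute changes in, 'ceph osd pool set ...' command tuples out; the settable-attribute list is a small invented subset.

```python
class PoolRequestFactory(object):
    """Toy mapper from a REST-style pool update to Ceph mon commands."""

    # pool attributes we know how to translate (illustrative subset)
    SETTABLE = ("pg_num", "pgp_num", "size", "min_size")

    def update(self, pool_name, attributes):
        commands = []
        for attr in sorted(attributes):
            if attr not in self.SETTABLE:
                raise KeyError("don't know how to set %r on a pool" % attr)
            commands.append(
                ("osd", "pool", "set", pool_name, attr, str(attributes[attr])))
        return commands
```

The real factory additionally builds request objects that know which OSD map epoch to wait for, which is how the staged pg_num growth above gets sequenced.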
B: For a pg_num change, that means doing some number of 'osd pool set pg_num' commands and waiting for the right OSD map in between before reporting success; that's how all of that works. Then it goes through salt, which is a system for controlling nodes and doing config management, kind of like Puppet. But it also has another neat part: a message bus based on ZeroMQ. We use that in Calamari, and the nice thing about it is this:
B: You can just write Python modules, and you can see here all the things that we have made available to let Calamari do its work; there's a variety of pieces. We issue Ceph commands; we ask how a cluster is doing with the heartbeats, so it checks in and says things are going well. It's really just a simple Python module, and I'll show you what it looks like right here. This is the module itself, and if you look for heartbeat...
B: Note that even the parameters don't look funny: this function doesn't take any parameters at all, and so when you call it via salt it will look like this. You're basically saying: hey, salt, from the Calamari server go talk to this node and ask it about heartbeats, and when that gets carried out we get back a whole bunch of stuff that's interesting to us, in a well-formatted way. So it provides structured data about Ceph and allows us to do all kinds of cool stuff.
B: So let's talk about the other use case now, the one that I wanted to cover briefly: as the administrator of my cluster, I want to get an alert when OSDs are likely to fail. There are a couple of ways to do that. One, you could use something like smartmontools, which looks kind of like this: for example, I'm going to ask for the status of disk sd-something on another node, and you can choose to believe smartmon or not.
B: For some reason I don't think it's doing so well in our lab, because every disk says that it was manufactured in week 30 of 2002, so either we got a very consistent batch of disks, which is unlikely, or the RAID controller is not telling us the right information. The idea here is that you could use this tool and ask it to tell you when, say, sectors get remapped, which is obviously a sign that something is wrong with the disk and it has had to go place the data in another area. Or maybe something in the OSD perf counters gives you some idea that an OSD is starting to go sideways.
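As a sketch of consuming smartmontools output, the attribute table printed by `smartctl -A` can be scraped for the reallocated-sector count mentioned above. The column positions follow the usual ATA attribute table, but real output varies by drive and smartmontools version, so treat this as illustrative.

```python
def remapped_sectors(smartctl_output):
    """Extract the raw Reallocated_Sector_Ct value from 'smartctl -A'
    text, or return None if the attribute table doesn't contain it."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        # standard table: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED
        # WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])
    return None
```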
B: So the idea for implementing this would be to create another module that wraps those reports and provides them via the same salt mechanism, and then takes them over to cthulhu. In cthulhu we've got these two pieces: one called ClusterMonitor and one called ServerMonitor.
B: Basically, those are just loops that watch the state coming off of salt, send out events, and do some transformations on the OSD maps and service states. So, likewise, we could make a DiskMonitor that watches for events about disk health, and we would put the smarts there of what it means when we get these reports, say, sectors remapped via SMART or some OSD perf counters. Then we could provide an implementation that looks similar to the event endpoint for Calamari, or grow this bigger, too.
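The hypothetical DiskMonitor could follow the same pattern as the existing monitors: a loop fed by salt reports that applies a threshold and emits an alert. Every name and threshold here is invented; it only shows where the 'smarts' would live.

```python
class DiskMonitor(object):
    """Watch disk-health reports and emit events when a disk looks
    likely to take its OSD down."""

    def __init__(self, emit_event, max_remapped=0):
        self.emit_event = emit_event      # e.g. feeds Calamari's event endpoint
        self.max_remapped = max_remapped

    def on_report(self, fqdn, report):
        # `report` would arrive from a salt module wrapping smartctl
        # and/or OSD perf counters on each server
        for dev, count in sorted(report.get("remapped_sectors", {}).items()):
            if count > self.max_remapped:
                self.emit_event(
                    severity="WARNING",
                    message="%s: %s has %d remapped sectors; its OSD may "
                            "fail soon" % (fqdn, dev, count))
```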
A: Any time you can wrap it up would be good. Okay.
B: Great. So, to wrap this up: I'm going to be creating tickets in the tracker to do some of this work. I'm not sure that this is the best idea, or that these are the best tools, say smartmontools, so if you have other ideas about that, come join us on the mailing list, look at the issue tracker, or send me an email. Thanks so much.

A: Thanks, Gregory.