From YouTube: 2019-10-02 :: Ceph Developer Monthly
A: ...all the logic that's needed to correctly run a Ceph daemon in a container on just a regular host, and wrap that up in a script. Ansible is already sort of doing this in its own way: if you run it in container mode, then instead of starting the daemon as a regular systemd unit, it will basically run a container that runs the daemon. So this does roughly the same thing, but it has a few differences. It only knows how to use podman — it doesn't use docker, although we could add that, I think, without that much trouble; it's not that different — and it has a different directory layout on the host, which I'll get to in a second. But basically you can run the ceph-daemon tool and it will create a container for you with all the right parameters, and then create a systemd unit that runs that container on startup.
A: The other important thing it does is that it has a bootstrap function which lets you create a new Ceph cluster. If you look here in the etherpad — I haven't really thought about this carefully, but this is a sample, bare-bones workflow for how to create a new Ceph cluster. You would basically just grab the script: it's written in Python, but it's written to have no dependencies, no extra libraries, so it's just one file, and it doesn't use any unusual or weird language features. You just download the script, run bootstrap with the monitor IP it should bind to and where to write the keyring and conf, and it'll create a monitor and a manager on the localhost and start them up, and then it'll give you the keyring and ceph.conf file — assuming you actually want to use those on the current host.
A: If you have the Ceph client installed, you could copy those into /etc/ceph, or you can put them somewhere else, whatever you want to do. At that point you have a cluster up and running, and the goal then is to be able to do everything else you would ever have to do to deploy the cluster using the orchestrator — the quote-unquote day-two operations, like adding additional monitors, adding OSDs, adding metadata servers or gateways, whatever it is — through the CLI or the dashboard.
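To make that bootstrap flow concrete, here is a minimal sketch of the steps as described above; the download URL and the exact flag names (--mon-ip, --output-keyring, --output-config) are placeholders and assumptions for illustration, not taken from the pull request.

```python
import subprocess
import urllib.request

# 1. Grab the single-file, dependency-free Python script (placeholder URL).
urllib.request.urlretrieve("https://example.com/ceph-daemon", "ceph-daemon")

# 2. Bootstrap a monitor and a manager on the local host, telling it which IP
#    the monitor should bind to and where to write the keyring and ceph.conf.
subprocess.run(
    [
        "python3", "ceph-daemon", "bootstrap",
        "--mon-ip", "10.0.0.1",
        "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
        "--output-config", "/etc/ceph/ceph.conf",
    ],
    check=True,
)

# 3. With the keyring and conf in place, the day-two operations (more mons,
#    OSDs, MDS, gateways, ...) go through the orchestrator CLI or dashboard.
```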
A: Everything here beyond that bootstrap part could eventually also be done through the dashboard, once the dashboard knows how to do it. But this workflow basically works in this pull request here at the top, and there's a script that I've just been running to test it over and over again, which basically bootstraps a monitor and a manager, adds a second monitor and a second manager, and adds a couple of metadata servers. It doesn't do the OSDs yet, because it's slightly annoying to clean up afterwards so that I can rerun the script.
A: But you could do that too. So all that stuff is working, and that pull request basically has all of those ceph-daemon script changes, plus a bunch of changes to the SSH orchestrator, which Noah previously worked on. It was written to do things the way ceph-deploy did, where it would just install the daemons on the bare-metal host.
A: So instead of putting the data in /var/lib/ceph or whatever and copying your config file to /etc/ceph, I switched it to use ceph-daemon, so it always installs daemons using podman, in containers. The assumption here is that, starting with Octopus and going forward, the way the SSH orchestrator will work — not the one and only way to run Ceph, but the way this orchestrator works — is that it will only know how to run Ceph in containers. Which is nice, in that this whole pull request works — almost everything is basically functional — but it doesn't have to know anything about different OS distributions.
A: It doesn't have to know anything about Debian and RPM and how to install packages and where to get packages from; all that stuff just goes away. That nightmare is finally ended. Instead you just give it one string, which is the name of the container image, and podman at least is smart about it.
A: So if you use ceph/daemon-base, the default, it will pull it from Docker Hub, but you can also pass it a container name from the quay registry that has all of our QA builds, and that also works — it's ceph-ci/daemon-base, colon, and then whatever your branch tag string is. Podman basically tries Docker Hub; if it doesn't find it there, it tries the Fedora registry; if it doesn't find it there, it tries quay — at least that's the behavior on my Fedora box.
A: If it's in some other registry and it has to pull from somewhere else, I think you can give a fully specified URL or something — I'm not actually sure exactly how the image sources work — but basically it all reduces down to that one string. So you can specify what image to run on a per-daemon basis, and you can upgrade daemon by daemon without having to worry about all the other upgrade annoyances that used to be a problem with packages. So that part of it is, so far, really nice.
A: The pull request introduces a Ceph config option called container_image, and the SSH orchestrator pulls that out of the Ceph config database. So you can have a global property that says which container image to use for the whole cluster, and you can customize it on a per-daemon-type or per-daemon basis, just like any other Ceph config option, and that is the one the orchestrator uses when you deploy daemons and so on.
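For example — assuming the option is named container_image, as it is in current Ceph — the cluster-wide, per-daemon-type, and per-daemon overrides would look something like this (image names are purely illustrative):

```python
import subprocess

def ceph_config_set(who: str, option: str, value: str) -> None:
    """Set a Ceph config option for a given scope via the ceph CLI."""
    subprocess.run(["ceph", "config", "set", who, option, value], check=True)

# Cluster-wide default image.
ceph_config_set("global", "container_image", "docker.io/ceph/daemon-base:latest-master")

# Override per daemon type, or per individual daemon, like any other option.
ceph_config_set("osd", "container_image", "quay.io/ceph-ci/daemon-base:my-branch")
ceph_config_set("osd.12", "container_image", "localhost/ceph:local-build")
```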
A: So yeah, I'm pretty pleased that it's working — I've been working on this since last Wednesday — and I think the ceph-daemon script is reasonably complete, and the SSH orchestrator will do all the minimum stuff: it'll create monitors, managers, and OSDs. I think there's still a whole lot to do, though.
C: A problem that we've been running into a lot with the current Rook stuff is that you end up with the application itself being paged out. Basically, instead of actual swap data — because there's no swap — the kernel can still page out parts of the application itself, so that ends up happening as opposed to swapping out other memory, and it causes these huge stall issues in something like Rook.

C: You can't disable that. Because you don't have swap, you don't swap out any of the actual memory used for application data; what gets evicted is the binary itself. That's what the kernel falls back to because it doesn't have swap — it's the only thing it can do.
A: If you're in the Kubernetes world, then you'll use Rook and all that stuff, and the orchestrator API in the mgr will talk to Rook to tell it to do things. If you're not in Kubernetes land and you're going to be on bare metal, then you'll use this, and this will replace ansible and DeepSea and puppet and ceph-deploy and all those million other things. So there's that, but...
A: What ceph-daemon does is basically run a container with podman in sort of the right way, and it just creates a systemd... here, let me show you — let's see if I have it running right now; I just ran my test script.
A: So there's a systemd target called ceph-&lt;fsid&gt;.target that it creates. The assumption here is that there's nothing installed on the host, no packages, except that podman is installed; basically that's the only thing it depends on to work. So it creates a systemd target named ceph- plus the UUID for the cluster — that target covers all the daemons on that host for that particular cluster — and then it creates one unit for each daemon.
A: And that unit is just a script that runs podman to run the daemon inside the container. It names the container — like you see up here at the top, the container names are ceph-&lt;fsid&gt;-&lt;daemon name&gt; — and it passes all the right paths through so that it binds the data directories to the right locations inside the container for OSDs and monitors, and it gives you /dev and udev and all that, so the containers that need it can access devices.
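Roughly, the naming and wiring being described looks like the sketch below. The unit-name template, bind-mount targets, and podman flags are assumptions for illustration; the actual script may differ in detail.

```python
def unit_name(fsid: str, daemon_type: str, daemon_id: str) -> str:
    # One target per cluster (ceph-<fsid>.target), one unit per daemon.
    return f"ceph-{fsid}@{daemon_type}.{daemon_id}.service"

def container_name(fsid: str, daemon_type: str, daemon_id: str) -> str:
    return f"ceph-{fsid}-{daemon_type}.{daemon_id}"

def podman_run_args(fsid: str, daemon_type: str, daemon_id: str, image: str) -> list:
    """Bind the daemon's data directory to the canonical path inside the
    container, and expose /dev for daemons (e.g. OSDs) that need devices."""
    data_dir = f"/var/lib/ceph/{fsid}/{daemon_type}.{daemon_id}"
    return [
        "podman", "run", "--rm",
        "--name", container_name(fsid, daemon_type, daemon_id),
        "-v", f"{data_dir}:/var/lib/ceph/{daemon_type}/ceph-{daemon_id}",
        "-v", "/dev:/dev",
        image,
    ]
```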
A: Just like the traditional systemd units do. So that's basically it: you have a bare-metal host, you get some systemd units that run the containers, and each container contains the daemon. Inside the container it looks like a traditional Ceph install — everything is in /var/log/ceph and /var/lib/ceph. Outside the container, things are organized a bit differently, so that you don't have to worry about multiple clusters colliding on the same host — similar to what Rook does.
A: This is where it puts things on the host, kind of like /var/lib/rook. (It rings every single time — that's weird, it never rings any other time.) Okay, so the paths look like this now: basically everything is just put in a subdirectory named with the cluster UUID, so there's the fsid, and then the daemon directories underneath are simplified.
A: So it's just &lt;type&gt;.&lt;id&gt;, instead of the &lt;type&gt;/ceph-&lt;id&gt; thing — the sort of stupid way it used to work. But that's basically it. The ceph-daemon script also has a list command that will just list all of these daemons on the host — all the clusters and daemons it sees on localhost — and it's smart enough to look for the legacy style too: it will still look in /var/lib/ceph/osd, the traditional layout, and it'll list those.
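In other words, the two layouts being contrasted look roughly like this (a sketch, with paths following the naming described above):

```python
def legacy_daemon_dir(cluster: str, daemon_type: str, daemon_id: str) -> str:
    # Traditional layout, e.g. /var/lib/ceph/osd/ceph-0
    return f"/var/lib/ceph/{daemon_type}/{cluster}-{daemon_id}"

def new_daemon_dir(fsid: str, daemon_type: str, daemon_id: str) -> str:
    # Per-cluster layout, e.g. /var/lib/ceph/<fsid>/osd.0 — two clusters
    # can no longer collide on the same host.
    return f"/var/lib/ceph/{fsid}/{daemon_type}.{daemon_id}"
```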
A: Those show up as, say, type equals legacy. The idea is that we can add a command to ceph-daemon that will basically just convert from the old layout to the new layout, so if you say ceph-daemon convert osd.0, or something like that, it'll just rename the old directories into the new directory layout and then you can start up a container with that legacy OSD the new way. So it'll just be a simple, one-by-one conversion of daemons from the old way to the new way, if that makes sense.
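A one-by-one conversion like that could look roughly like this sketch — a rename plus starting the per-cluster unit from the earlier naming sketch; the unit name and the idea that a bare rename is enough are assumptions about the eventual command, not its actual implementation.

```python
import os
import shutil
import subprocess

def convert_legacy_osd(cluster: str, fsid: str, osd_id: str) -> None:
    """Move a legacy OSD data dir into the per-cluster layout, then start it
    the new, containerized way (illustrative only)."""
    old = f"/var/lib/ceph/osd/{cluster}-{osd_id}"
    new = f"/var/lib/ceph/{fsid}/osd.{osd_id}"
    os.makedirs(os.path.dirname(new), exist_ok=True)
    shutil.move(old, new)
    subprocess.run(["systemctl", "start", f"ceph-{fsid}@osd.{osd_id}"], check=True)
```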
A: Pretty simple. This is basically what ansible is doing with its containers, except ansible didn't reorganize things: ansible assumes it's still /var/lib/ceph in the old layout and that /etc/ceph has the config. This layout basically doesn't put anything in /etc/ceph at all — again, we're assuming there's nothing global, nothing installed on the host — and it puts the config file for each daemon inside that daemon's directory.
A: Right, so that's where it is so far. The pull request is still a little bit wonky because remoto isn't running the script properly — it's hard-coded with my source directory name — because I need to do a little bit of refactoring; Alfredo is hopefully going to help get the Python stuff right. But it works at a basic level.
A: Let's see — Joshua, you're here. Lenz mentioned that you were looking at changes to DeepSea to make it use podman to run the daemons. I think the question was whether there's any overlap, or an opportunity to standardize on anything so we're consistent across these two, or whether it doesn't matter.
A: Sounds good, yeah. Any feedback you can provide — you can ping me on IRC, or here on the pull request, or whatever. Long-term my hope is that we can take the sort of fractured investment in DeepSea and ansible and everything else and consolidate it on one canonical way of doing things, and I'm hoping that this is it — and if for some reason it's unable to be that, I'd love to hear why, so we can fix that, essentially.
A: ...to choose the thing, whatever. Yeah — I think ceph-ansible definitely ended up with a totally differently named unit for every daemon, with the command hard-coded in the unit file, right? This approach sort of avoids that, so there's only one unit file per cluster instead of one per daemon, I guess.
A: Yep. And I guess, sort of independent of this, one of the goals is to teach this how to migrate to itself. I mentioned that ceph-daemon list can recognize the legacy daemons, and I want it to also do that for the case where Rook deployed things — Rook puts them in /var/lib/rook or some default location. We should teach it how to handle that, so you could take a daemon that was deployed with Rook and adopt it, and you could do the same thing with the legacy ones.
A: I don't know, but presumably puppet and DeepSea and everything else have been doing it the sort of standard-issue way that ceph-deploy used to do it, so it'll all be the same in terms of how we adopt those daemons. But if not, then we should make sure it understands how to do that too.
A: Yes — it almost is. The only gap is that currently all the ceph-daemon stuff runs as root, and it's probably possible to make it not require root, but when I run podman as a user and it tries to pull the image, it ends in an error, and I haven't looked at why.
A: That wasn't really the use case I was focused on, so I just ignored it, but in principle, if you could make podman run unprivileged, then yes. You'd also want to change things so that the Makefile has a 'make container' target that will build a container image from source on the local host that you can then use. Dan put some time into this a while back, and he has something that sort of mostly works.
A: It wasn't completely wrapped up, but I think it has the potential to increase your build times, because you'd have to build enough of everything to produce a complete image, whereas when I build, I usually don't build the MDS and the RADOS gateway and some of the others, because I'm not testing those parts.
A: Yeah, and you can use different images for different daemons, so you could have your custom, locally built image used for whatever it is you're testing — the OSD, maybe — and use the standard upstream latest-master build or whatever for the other daemons, if you want to do that. But in the meantime you can test the SSH orchestrator with what's in this pull request. It'll basically — oh, so, yeah.
A: One thing I didn't mention: the last part of this bootstrap generates an SSH key and injects it into the cluster as the SSH identity used by the SSH orchestrator, and it appends it to the local root user's authorized_keys and then adds the localhost to the host list for the SSH orchestrator.
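That last bootstrap step might look roughly like the following sketch; the config-key name and the host-add command are placeholders standing in for whatever the orchestrator actually uses.

```python
import subprocess
from pathlib import Path

def bootstrap_ssh_identity(keyfile: str = "/tmp/ceph-orch-key") -> None:
    # 1. Generate a dedicated SSH identity for the orchestrator.
    subprocess.run(["ssh-keygen", "-t", "ed25519", "-N", "", "-f", keyfile], check=True)

    # 2. Inject the private key into the cluster (placeholder key name).
    subprocess.run(["ceph", "config-key", "set", "mgr/ssh/identity_key", "-i", keyfile],
                   check=True)

    # 3. Let the orchestrator SSH into this host as root.
    pub = Path(keyfile + ".pub").read_text()
    with open("/root/.ssh/authorized_keys", "a") as f:
        f.write(pub)

    # 4. Register localhost so the very next command can already deploy here.
    subprocess.run(["ceph", "orchestrator", "host", "add", "localhost"], check=True)
```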
A: So the idea is that you run bootstrap, and then your next command can be "now deploy an OSD", "now deploy whatever", with the localhost already ready to go, and you can build out from there. So it's easy to seed the cluster and then grow outward. There's an optional flag to skip that part if you don't want to do all the SSH key generation stuff, but that's currently what it does.
A: You can maybe specify a different host in some of those commands, but the basic idea is that if you have the SSH piece set up, you can test your development copy of the orchestrator deploying the daemons. In my tests, for example, while I was developing it, I would run these steps and then deploy an OSD using a bare-metal disk on the localhost, and it would create a podman container just for that OSD, but it would be part of the cluster that you started — like the vstart cluster.
I: That's very exciting, yeah. So longer-term, this could end up being a very simple deployment, and we could try to put together some workflows around things like limiting memory use and setting the right numbers — that kind of stuff that Rook does, I think.
A: So, for example, there's a whole drive-groups discussion that the orchestrator folks have been working on for a long time, where you've got a set of devices and there's logic to capture how many OSDs to create per device, how to group them, how to divide an SSD up into pieces shared by all the other hard disks, and all that stuff — it's sort of encapsulated, and that would layer on top of this. So once this is working, you'd have the same flow whether you're using Rook or using this, as far as deciding how to create the OSDs, and then hopefully eventually you drive all of that from the dashboard.
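As a rough idea of what such a drive-group description captures (the field names here are only loosely modeled on that discussion and are not a definitive spec):

```python
# Illustrative drive-group description: which hosts it applies to, which
# devices become OSD data devices, and how the SSDs get carved up for DB/WAL.
drive_group = {
    "host_pattern": "storage-node-*",
    "data_devices": {"rotational": True},             # spinning disks -> OSDs
    "db_devices": {"rotational": False, "limit": 2},  # share 2 SSDs for DB/WAL
    "osds_per_device": 1,
}
```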
A: Okay, yep. I'm totally behind on the drive-group stuff — I haven't been following the discussions — but that will probably be the next thing, because right now this orchestrator OSD create command is the most trivial thing: you can only give it one device, it always deploys BlueStore, and it doesn't do anything else. You can tell that from the syntax.
A: It's super simple, but yeah, that would be the next step. I know I've had conversations with Sebastian and others about this too: if you have, say, ten hosts that are all supposed to have exactly twelve disks of this model or whatever, having a way to describe that — the way you previously would have done it with ansible — and have it go through the logic that looks at all the hosts, makes sure the devices you said are actually there, checks that it all matches and everything looks good, and then, if things look okay, goes out and deploys all the OSDs. That type of thing is something you could build on top of this, or out of the drive-group stuff — I haven't been following it — but again, my hope is that whatever that higher-level planning workflow is, it would sit above the orchestrator interface, so it would be the same whether you're using Rook or using this; you could leverage the same sort of flows, I guess.
A: There's this stupid detail about how remoto actually runs the remote script, because the script has to be on the remote host that you're going to deploy things on. So you either need to copy the script over and run it, or — what we're probably actually going to do — remoto can run local Python code on the remote host, so remoto is basically just going to import the script and then call the main method with the right arguments. Presumably the same sort of thing would be possible for another tool.
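On the remote end, the "import the script and call main" pattern is conceptually just this (the file path and the main() signature are hypothetical, and the remoto plumbing itself is omitted):

```python
import importlib.util

# Load the single-file ceph-daemon script as a module and call into it,
# instead of copying it around and exec'ing it as a separate process.
spec = importlib.util.spec_from_file_location("ceph_daemon", "/usr/sbin/ceph_daemon.py")
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
mod.main(["deploy", "--name", "mon.a"])  # hypothetical entry point and arguments
```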
C: So right now we've got this really generic cluster class in CBT that has very little in it, but one of the things I want to do is break a lot of the stuff that's in the Ceph cluster class out into a more generic interface, so that CBT doesn't only set up all the daemons itself but could potentially call out to other things — like this, right.
A: It would be awesome if CBT, for example, could just use this to deploy everything. One of the other things I didn't mention is that eventually I'd like to make a teuthology task replacement that uses this to deploy daemons, to finally get away from the legacy ceph task that Tommi dreamt up, which puts Ceph into place in its own weird, special, awkward, unique way — so that we have end-to-end testing for at least this, yeah.
C: Yeah, I will say it is nice in some ways having a backup option, though, because there were a couple of times when I was trying to use what came before ceph-deploy — what we had before that, mkcephfs — and it would break, and then I had to have my own way of doing it for the tests we keep carrying around. So it's nice having a couple of options, but still...
A: It could basically just turn on the SSH orchestrator, and maybe it would distribute your SSH keys for you, or you'd do that manually, whatever; but then from that point forward you could migrate to this, and all your day-two operations could be done this way.
A: The way that it deploys its daemons and the way this deploys daemons are different, so there'd be a restart/redeploy/upgrade conversion process, I guess, but after that — I mean, ansible could do that, right? It could deploy the cluster and then convert it to use this, and then...
A: This SSH orchestrator is weirdly simple — it's only about 800 lines of code. Basically it creates the remoto connection (there's a helper for that), there's a helper that runs ceph-daemon with arbitrary arguments, and then there's a create-OSD method, a create-manager one, and a create-monitor one that are all slightly different because they create different keyrings, and then they each just run ceph-daemon via remoto. That's about it — it's pretty trivial, not many moving parts, which is pretty nice, yeah.
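Structurally, that is something like the sketch below — one generic helper plus thin per-daemon wrappers; the names and the connection object are illustrative stand-ins, not the PR's actual code.

```python
import subprocess

class LocalConnection:
    """Stand-in for the remoto-style connection object; runs commands locally."""
    def run(self, cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def run_ceph_daemon(conn, args):
    """The one generic helper: invoke the ceph-daemon script with arbitrary args."""
    return conn.run(["ceph-daemon"] + args)

def create_mon(conn, fsid, mon_id, keyring_path):
    # create-mon / create-mgr / create-osd differ mainly in which keyring they
    # prepare; after that, they all just call ceph-daemon (flags illustrative).
    return run_ceph_daemon(conn, ["deploy", "--fsid", fsid,
                                  "--name", f"mon.{mon_id}",
                                  "--keyring", keyring_path])
```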
B: The general issue that I'd like to discuss is efficiency in BlueStore when it keeps small objects. Due to the pretty high allocation granularity, one might experience significant space loss when keeping objects that are much smaller than the allocation granularity. I've seen several complaints about it, both from our customers and on the mailing list, and recently we've also got a PR trying to fix the same issue, so I'm throwing this out for discussion.
A: Yeah, I think it makes sense to solve this problem. If we do reduce the min_alloc_size, it does mean that the sort of inline mode only has to worry about really small stuff that's smaller than 4K, which might affect the design a little bit, although it probably doesn't actually change anything. It seems like the primary question is: if we are going to put that small-object data in RocksDB, there are basically two places we could put it.
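To make the space-loss point concrete, here is a small worked example; the 64 KiB figure is just a representative HDD allocation granularity, not a claim about any particular configuration.

```python
def allocated_bytes(object_size: int, min_alloc_size: int) -> int:
    """Bytes actually consumed when an object is rounded up to the allocation unit."""
    units = -(-object_size // min_alloc_size)  # ceiling division
    return units * min_alloc_size

logical = 12 * 1024                      # a 12 KiB object
alloc = allocated_bytes(logical, 64 * 1024)
print(alloc, alloc / logical)            # 65536 bytes, ~5.3x space overhead
```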
C: One question I have regarding this, which for me will kind of determine whether I think it's a good idea, is how much write amplification we're going to see with this kind of data in RocksDB versus just going down to a 4K min_alloc_size anyway, right? Like, if we're talking about just...
A: Yeah, but I think that usually, when it's CephFS — when it's a file system — the rule of thumb is that most files are small but most data lives in large files, so having the small files be slightly space-inefficient isn't a huge deal. Whereas with object stores, we keep seeing these users who have a cluster that's filled with a petabyte of, you know, 12K objects or something like that, and then that rule of thumb doesn't really help.
Yeah
yeah
it's-
these
are
all
just
sort
of
assume.
That's
been
treat
us
here
with,
like
least
amplification
right,
amplification
and
performance
like
the
much
more
recent
PRF,
the
author
versus
some
benchmarks,
showing
I
had
much
higher
CPU
utilization
and
memory
use
with
this
enabled
Thank
You
ed
mentioned
earlier.
The
possibility
of
making
this
optional
in
some
way.
A: ...affect the metadata access, and I think that's probably not what I would lean towards, but it does put a lower bound on the latency you'd expect for reading a random small object, I guess, because you're going to have to look up the metadata and then the data is in a different location.
A: It is, but I think the better way to do it is to use column families, so that you still retain the ability to have a transaction that covers all the column families, but the data is segregated into different actual LSM trees underneath. That's what we're already doing to segregate OMAP data from onode metadata from — I forget what else, the allocation data — and the sharding work that Adam's been doing, which shards RocksDB across a bunch of pieces, is doing that using column families.
A: Yeah — basically the contention problems that we see with this are mostly around compaction and CPU usage, not around the journal being the bottleneck, mostly. So, just on that for now: in this case, having a separate column family for the small-object data is probably sufficient, or possibly binning it by size if that matters — I don't know if it matters.
A: Looking at this new pull request, one of the things I noticed is that the small-object mode only supports write-full or something like that, and I think that's something we definitely don't want to do — unless we get the client to explicitly hint or something, I don't know; I'm just looking at the test at the bottom. But I think we want to make sure that it's fully general, so it's transparent to all the layers above.
D: Patrick asked me to pop in and just talk about this. The merge window for the kernel just closed this past week, and for 5.4 we merged some patches that Zheng did to add a new recover_session mount option for kcephfs. Basically, what this does is allow clients to auto-recover from being blacklisted.
D: We have two available modes right now — here, I'll paste the documentation update into the chat; I don't have any slides for this or anything. Essentially there are two modes currently, "none" and "clean". "none" is the current behavior, basically, where we don't really try to do any recovery — that's the default — and "clean" allows the client to automatically reconnect after it detects that it's been blacklisted.
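For reference, using it looks like a normal kcephfs mount with the new option (the monitor address, credentials, and paths below are placeholders):

```python
import subprocess

# Mount kcephfs (kernel 5.4+) with automatic clean recovery after blacklisting.
subprocess.run(
    ["mount", "-t", "ceph", "mon1:6789:/", "/mnt/cephfs",
     "-o", "name=admin,secretfile=/etc/ceph/admin.secret,recover_session=clean"],
    check=True,
)
```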
D: When it does that, it basically tosses out any cached writes it's got, so you get writeback errors on those things — you'll see an error when you sync. If you have file locks, then you basically get EIO on any read/write activity on the file descriptor until the locks are dropped, at least. And so the idea being that...
D: Some of this behavior came from NFS, which actually does something similar when it loses state recovery: if you are trying to recover locks after the server reboots and that fails for whatever reason and you can't get your lock back, then it will start giving you errors on the file descriptors. We basically tried to emulate that behavior as closely as we could here. So that's the basic thing — it's still a pretty bleeding-edge feature.
D: I'm not sure we want to roll this out without some serious testing on it, but I figured it was reasonable to go ahead and get it in and start playing with it. We also have a plan to maybe eventually add a brute-force recovery mode or something like that, where we basically make the client just kind of soldier on as much as it can in the face of this. So if you had buffered writes, then you would go ahead and just flush them out after you recovered, and, you know, damn the torpedoes.
D: And we looked at doing something like SIGLOST — I don't know if you're familiar with it, but back in POSIX, or maybe AT&T UNIX, there was the ability to send a signal when you lost your lock for some reason. We never implemented that in Linux, though; Anna Schumaker looked at doing it for NFS and it ended up not happening.
D: There was another thing I was going to mention, but I forgot what it was — okay, yeah, so anyway, that's it. If you've got anything you want to test it out on, that'd be wonderful. Oh yeah — I guess I really see this as being mostly useful in read-only configurations, right, where you're...
A: They want to avoid, at all costs, a situation where there's a blip on the file server and they have to go hunt down and reboot a bunch of machines. In their case it's a batch computing environment or whatever, so if something happens they'll just kill the job and rerun it. It's not that the integrity is such a big deal; they just don't want the headache of having to go mop up the mess at the end.
D: Right — after being blacklisted you'd end up being able to just block until you came back, so if you caught it just right you might even be able to keep running on it with the right timing. So I think it's a useful feature. We may have to tweak the approach some in the future as we go, because this is really pretty new stuff, yeah.
A: We could check in really quickly on the road to — not Nautilus — Octopus. I think we're doing pretty well; I've been watching the Trello board. Octopus should be out in March, hopefully at the beginning of March — the first or second week of March is what we're going to settle on — and it would be great to just announce the release at the conference.
A: Well, I think we'll announce it either way, but actually having it out and installable at the same time would be fun, which means a freeze around Christmas or New Year's, and that basically means we have October, November, and December to finish things up. I've mainly been watching the RADOS stuff, and I think we're doing pretty well there. Some of the things in here that are code cleanup might not make it in time, but the major stuff that I was working on, at least, I think is mostly in.
A: The sharding of RocksDB is almost done, it sounds like, and the Ceph dashboard work is making progress. I'm not sure all of these items are up to date, and I'm not sure how the other features are going, but on the RADOS front I think we're doing okay. The big thing in my mind that I want to make sure gets finished up is the orchestration and deployment on bare metal, so that it actually works in Octopus — that's my goal.
C: Sage, I have kind of a list of performance low-hanging fruit that I've been laying aside — important performance things I'm really hoping to cover for Octopus. They're kind of broad across all the different categories or subsystems we have, but on the RGW side we have, you know, stalls during bucket...

C: On the CephFS side we have some of the MDS issues with multi-MDS, but it sounds like Patrick's working on that for Octopus. On RBD we have a couple of other things that we're working on — I guess they aren't maybe as critical — but in general it would be really nice if, for Octopus, we can get all that stuff really smoothed out.
A: Re-enable the lifecycle expiration tests, because they're disabled in teuthology due to timing issues — that's one of them. Making the backend pluggable: that's an open-ended thing for Zipper, so it's probably going to take like three releases, but maybe with some initial parts of it merged, yeah.
Well,
what
is
there
besides
the
non
pausing?
Look
at
Richard.
K
C
K
C
C
K
F
K
K
A
A
K
L
A
C
You
can
still
like
go
back
and
fetch
Rados
more
if
you
really
needed
to
right,
like
it
yeah.
C
A
C
A
L
C
I
A
A: That might be good. Are you still here? You were — yes. So the goal was to make it so that when we scrub, and we're listing and statting every onode, it doesn't pollute the cache.
A: The sort of trivial way to do that would be to make it so that when you do the list — or maybe when you do the get-onode or the stat or something — there's a hint that says: don't actually move this to the front of the LRU, just leave it where it is, or put it at the bottom. But you'd have to make sure it doesn't fall off.
A: If the onode is only there for the scrub — or a backfill, I guess, too — I wonder if even better would be, for the scrub case, to actually list and get the onodes in one operation, so that scrub can be careful to keep them at the bottom of the LRU and give you a handle to them that's pinned. That's probably more efficient, and it also keeps the cache-pollution issues to a minimum.
A: Not in BlueStore. The MDS has a cache that's like that — it has multiple segments, low, middle, and so on, I think — but yeah, I don't know enough about it. My guess is that if we made a collection-list-and-stat, or whatever, function that would list and get the onodes all in one go, scrub could use that, and BlueStore could implement it such that those onodes are loaded but at the very bottom of the LRU.
A: Yeah, okay — double-check me, but I'm not sure you'd have this pinning behavior here, because of what would pollute the cache: in the onode case it was scrub that was polluting the cache and pinning the items, and that doesn't really happen in the same way, because the read stuff doesn't get pinned — items don't get pinned unless they're doing writeback, and you usually wouldn't have writeback at the bottom of the LRU, it would be at the top. So let's see...
C: So initially I had a patch that did kind of what you'd mentioned — we had talked previously about just mitigating things by looking only at contiguous pinned items rather than going through and looking at a certain number overall, and there were a couple of other things I did in there that we talked about, I don't remember what anymore — but it was just mitigating it. Then we ended up deciding, given the particular issue that had been hit by the customer...
C: ...that maybe a better long-term solution — which is how I'm looking at it now — is to try to do something similar to what the MDS does, where we maintain a separate list of, say, unpinned items, and you just walk through that directly rather than iterating over everything. But then you have to make it work by having a callback or something, right, so that when something becomes pinned we maintain that separate list.
A: Yeah, I think that's the trick, because in the MDS it's hooked into the get/put reference-count functions, and it's hooked directly into the LRU code, so it will actually move the item out of the pinned-tail list for you if you unpin it — or maybe it trims it; I don't remember exactly what it does — but in this case the LRU isn't structured that way, so we can't currently hook the reference count, at least.
C: What I was working on basically added a reference to the cache on the onode, so you'd have a reference to the cache which you could then call into whenever the reference count changes in a certain way, to tell the cache that this item is being pinned or unpinned, essentially. That's kind of what I was starting to play with — going down that route, yeah.
A: That might be sufficient — so that if you have one reference and you're about to put it, you could just tell the cache "I'm about to drop my last reference" and the cache could move it out of the pinned tail onto the regular tail, yeah. That would probably work. Well, I think the other sort of hacky workaround would be for the trimmer to just do a circular, round-robin pass over the pinned tail and limit the amount of work it does.
A
E
A
C
What
one
thing
you
have
to
say
with
moving
it:
the
way
that
we
did,
though,
we're
now,
where
we're
not
doing
it
in
the
the
moonpool
loop,
is
that
at
least
we
might
be
doing
a
ton
more
work
every
time,
but
at
least
now
we'll
be
like
more
aggressively
trying
to
get
stuff
right.
The
memory
memory
growth
might
be
a
little
bit
less
than
like
wasn't
luminous
yeah.