From YouTube: 2018-07-11 Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
C
Till now, I have almost finished half of the internship period, and I have worked on various commands like mkdir, df, du, get, put, and so on. So now I have written a complete set of shell commands as a module, and you can see a little more here: first, I'm listing all the files in the directory. You can also find out what a command does by using a question mark before the command, or by using -h; the question mark is a shortcut for it. And I have implemented many options for each command.
C
The shell also shows the previous history of commands, and you can alias an existing command. For example, here the regular ls just lists the entries in the directory; instead, I have aliased it, so now whenever I type ls, it runs the aliased version.
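The aliasing behavior described here can be sketched as a simple pre-processing step in a shell loop. This is a minimal illustration, not the actual mechanism in cephfs-shell or its underlying library; `expand_alias` and the alias table are hypothetical names:

```python
def expand_alias(line, aliases):
    """Replace the first word of a command line with its alias expansion,
    so e.g. aliasing 'ls' to 'ls -l' makes every 'ls foo' run 'ls -l foo'."""
    parts = line.split(None, 1)
    if not parts:
        return line
    head = parts[0]
    rest = parts[1] if len(parts) > 1 else ""
    expansion = aliases.get(head, head)  # unknown commands pass through unchanged
    return f"{expansion} {rest}".strip()
```

A single expansion pass like this avoids infinite recursion when an alias (like `ls` here) expands to a command starting with its own name.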
B
I think the first question is: is it possible to call the shell and execute a command from the command line? One of the uses of this I can imagine is if you're writing a script, and you just need to call it to, like, put a file or set an attribute or something like that. You don't actually want to run it interactively; you just want to script it. I can imagine cases where there's just, like, one command.
B
That was not a question, though.
B
I was wondering if put tries to preserve things like the file mode and user and all that stuff.
A
So the other thing that was in the video, but which she understandably forgot to mention, is that the next thing she's going to be working on is the testing for the shell, and also getting the code ready to merge, and then just doing incremental improvements on it afterwards. Awesome, okay.
B
I think, from my perspective, the things I would look forward to are a batch thing, or even just the ability to do a one-off command from the command line. And then the one other bit of feedback would be that, when you're creating a command that echoes something that you have in the shell, you should try to match up the arguments' order and behavior.
B
So, like, I noticed chmod flips the order of the mode and the file around; might as well make it match the one that people are used to. And du by default is recursive, so probably do du the same way, and make the chmod non-recursive, or whatever it is. But I guess du here is probably just showing this directory — that's right, it's not actually walking the directory tree.
A
So I actually suggested for the du command that we do it in reverse. I think the normal behavior of du is that it walks the entire tree, because it needs to do that anyway; but here, in the interest of having fast feedback, the default behavior could be to look at the top-level directory, and if you want to look at all the underlying directories, you have to pass an option to do that.
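The proposed default — top-level only, with an option to walk the whole tree — could look roughly like the following sketch over a local file system. The function name and flag are illustrative, not the shell's actual interface:

```python
import os

def du(path, recursive=False):
    """Return the total size in bytes of the files under `path`.

    By default only the immediate children are summed (fast feedback);
    pass recursive=True to walk the whole tree, mirroring classic du.
    """
    total = 0
    if recursive:
        for root, _dirs, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
    else:
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_file(follow_symlinks=False):
                    total += entry.stat(follow_symlinks=False).st_size
    return total
```

(In CephFS specifically, recursive statistics are maintained by the MDS, so a recursive default may be cheaper than it would be on a local file system; the sketch only illustrates the interface question.)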
B
I guess one other thought: one of the things that always confuses me — and we were noticing this about, like, the rados put command line — is that there are positional arguments for the object name and for the file that you want to use as input. I think the put here works the same way, where one of them is a local file name, and one of them is the remote path where you're going to copy it to.
B
I wonder if there's a way we can make it mirror, like, a cp command, where it's source then destination, and then make it so that, instead of having it be sort of implicit which one is the local file and which one's the remote file, have a magic character — like tilde — that means local file system, and then the absence of that means remote file system. I don't know if tilde is the right choice, because you would expect that to be your home directory.
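The magic-character idea might be sketched like this; the `@` marker below is purely a placeholder for whatever character is ultimately chosen (tilde having the home-directory problem mentioned above), and the function names are hypothetical:

```python
def classify_path(arg, local_marker="@"):
    """Split a cp-style argument into (location, path).

    Paths beginning with `local_marker` refer to the local file system;
    everything else is treated as a path inside the remote tree.
    """
    if arg.startswith(local_marker):
        return ("local", arg[len(local_marker):])
    return ("remote", arg)

def plan_copy(src, dst):
    """Turn a source/destination pair into an explicit put or get."""
    src_loc, src_path = classify_path(src)
    dst_loc, dst_path = classify_path(dst)
    if src_loc == "local" and dst_loc == "remote":
        return ("put", src_path, dst_path)
    if src_loc == "remote" and dst_loc == "local":
        return ("get", src_path, dst_path)
    raise ValueError("exactly one side of the copy must be local")
```

This keeps cp's familiar source-then-destination order while making the local/remote split explicit rather than positional.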
B
All right, I think the only thing on the agenda is just the backfill optimization brainstorming. Do you want to lead with that? Yeah.
D
Sure. So I guess the background here is that, with the way backfill works today, one of the things making it slower than regular recovery is that we have to scan the disk to see which objects need to be recovered among the different replicas, because we don't have a way to tell which objects have changed and which are up to date. Whereas with normal recovery, we have the PG log to tell us exactly which objects have changed in that time period.
D
So, one of the reasons we're looking at this in particular is that, with faster devices, it makes a bit less sense to rely purely on the PG log, because when you have a device doing fifty thousand or a hundred thousand ops, we're not going to be storing that many log entries — so it doesn't give you very much of a time window in which you can tell what has changed with regular recovery.
D
So that's kind of why we want to look at optimizing backfill more for faster devices, and one of the ideas that I think we've talked about at a high level before was trying to use a bloom filter of some sort. But when I was trying to discuss it before, I realized I didn't have a good, concrete idea of what this would actually mean. Sage, if you have a good idea there...
B
I have a rough idea, but I'm sure we can flesh it out a bit. So the idea would be that, once we —
B
I guess we wouldn't know whether we're going to fall out of log recovery or not, so we would just maintain a bloom filter in general, all the time. So we would probably have to, like, segment it into time intervals — maybe a bloom filter every ten minutes or twenty minutes — and then, depending on how much of a window you want, you keep n of them. But the basic idea would be that you would just insert into them, like the hit sets.
B
You would still have to enumerate the objects, but then the bloom filter check — at least the Boost one — is pretty fast.
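The time-segmented bloom-filter scheme being described might look roughly like this sketch; the interval length, filter size, and hash mixing are illustrative choices, not Ceph's actual parameters:

```python
import hashlib
import time

class BloomFilter:
    """A tiny bloom filter: k seeded hashes over an m-bit array."""
    def __init__(self, m_bits=8192, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key):
        for seed in range(self.k):
            h = hashlib.blake2b(key.encode(),
                                person=seed.to_bytes(8, "little")).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def insert(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def maybe_contains(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

class TimeSegmentedFilters:
    """One bloom filter per fixed time interval, keeping at most
    `max_buckets` of them, so 'did this object change since T?' queries
    only consult the buckets newer than T."""
    def __init__(self, interval_s=600, max_buckets=6):
        self.interval_s, self.max_buckets = interval_s, max_buckets
        self.buckets = []  # list of (interval_start, BloomFilter), newest last

    def record_write(self, object_name, now=None):
        now = time.time() if now is None else now
        start = int(now // self.interval_s) * self.interval_s
        if not self.buckets or self.buckets[-1][0] != start:
            self.buckets.append((start, BloomFilter()))
            del self.buckets[:-self.max_buckets]  # bound the memory used
        self.buckets[-1][1].insert(object_name)

    def maybe_changed_since(self, object_name, since):
        return any(f.maybe_contains(object_name)
                   for start, f in self.buckets
                   if start + self.interval_s > since)
```

As with any bloom filter, a positive answer only means "maybe changed" — you'd still verify before deciding to copy, but a negative answer lets you skip the object entirely.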
F
But it's not skipping doing the I/O, though — that, I don't know.
B
Yeah, it's reducing the amount of metadata that has to get processed and packed into a map and sent over the wire and unpacked on the other side. The I/O is actually pretty cheap in the case of BlueStore: you actually have the onode already when you get the key name, so we're not necessarily saving I/O. You are saving all the CPU involved with unpacking it and decoding the onode and figuring out, you know, the size and the mtime and whatever — though the version attribute is actually what you're pulling out.
B
The way that we did the hit sets, we reused the hash that's already on the object — we didn't actually calculate a hash. We just had to take that 32-bit value and insert it into the bloom filter, and, if I recall correctly, the way it did it was that it used the same hash function with different seeds, so it was a relatively non-complicated hash. We can check it again.
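The seed trick — deriving all k probe positions from one precomputed 32-bit hash rather than rehashing the object name — can be sketched as follows; the mixing constants are illustrative, not the ones Ceph's bloom filter actually uses:

```python
def probe_positions(hash32, m_bits, k):
    """Derive k bloom-filter bit positions from one precomputed 32-bit
    hash by remixing it with k different seeds (same mixer, different
    seed), echoing the hit-set trick of not rehashing the object name."""
    positions = []
    h = hash32 & 0xFFFFFFFF
    for seed in range(k):
        # xorshift/multiply-style remix; constants are illustrative.
        x = (h ^ (0x9E3779B9 * (seed + 1))) & 0xFFFFFFFF
        x ^= x >> 16
        x = (x * 0x85EBCA6B) & 0xFFFFFFFF
        x ^= x >> 13
        positions.append(x % m_bits)
    return positions

def insert(bits, hash32, k=4):
    for p in probe_positions(hash32, len(bits) * 8, k):
        bits[p // 8] |= 1 << (p % 8)

def maybe_contains(bits, hash32, k=4):
    return all(bits[p // 8] & (1 << (p % 8))
               for p in probe_positions(hash32, len(bits) * 8, k))
```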
F
The Merkle tree's leaves are the individual object hashes, so what you're actually trying to do is, like, not send all the data — you're just trying to send a summary of all the data that a peer has, organized as a sort of tree. You build it up, and then you do a binary tree descent, and so you only fetch the ones that differ.
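The binary-descent idea — exchange subtree hashes and only recurse into subtrees that disagree — can be sketched like this (a generic Merkle comparison, not Ceph code):

```python
import hashlib

def build_merkle(leaves):
    """Build a Merkle tree bottom-up; returns a list of levels, leaves first."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate the last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def diff_leaves(a, b):
    """Walk two equal-shaped trees top-down, descending only into subtrees
    whose hashes differ; returns the set of differing leaf indices."""
    assert len(a[0]) == len(b[0])
    suspects = {0}  # node indices at the current level, starting at the root
    for depth in range(len(a) - 1, 0, -1):
        next_suspects = set()
        for i in suspects:
            if a[depth][i] != b[depth][i]:
                for child in (2 * i, 2 * i + 1):
                    if child < len(a[depth - 1]):
                        next_suspects.add(child)
        suspects = next_suspects
    return {i for i in suspects if a[0][i] != b[0][i]}
```

With n objects per side, a single mismatched object costs only O(log n) hash comparisons to locate instead of a full enumeration — at the price of computing and exchanging the tree.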
B
And that bounds the amount of memory, because you only have the one that you're building in memory. But then, when you're getting backfilled, you have to load all of them up, so it doesn't really save memory on the backfill portion, because the number of bloom filters will be determined by how long — how big — your window is. There'd be more CPU, but you could reduce memory, or the actual period while you're degraded.
F
And I guess we could model out what the trade-offs are, but I would actually expect that we'd get the trade-off we're talking about with the CPU involved in, like, doing the data reads — because that is building up Merkle trees of the actual data blocks. Or maybe not literally that; maybe it's something beside it, right?
D
I mean, the more basic approach, I guess, is not keeping any index online, so you don't have any impact on regular I/O, but then, when you're doing the enumeration, perhaps also reading the data from the objects, splitting it into subsections, and computing the hashes on the fly. Although that's a lot of extra read I/O if it's a very large object, I guess.
B
Yeah, yeah. You know, I think probably the way to do it would be to parallelize it, because you could load in some part of one object, build a local tree or whatever of the object, and send it across; and then, while you're waiting for the other side to do the same thing and compare, you would work on loading in the next one. And then the primary would say: these are the bytes I want — and then you would read them back and do the same.
B
It uses a little bit more memory, but you could end up significantly increasing the throughput by doing that. It does make the backfill code more complicated than it already is, but yeah.
B
Yeah, yeah. I think the alternative to that would be to have, on each object, like, a mini log of recent changes — like the last five updates to the object, maybe collapsed, or something, in a fixed-size vector — and then, if you're lucky, there's only like one change, and you can see exactly which extents have changed, which ones haven't, and derive the exact delta.
B
Every other I/O you, like, fold forward into the last entry, and then it moves down — or, depending on what the bits are, you could have a sequence of them, so that you have, like, the last one entry, the last three entries, the last seven entries, the last fifteen entries, or whatever — and you just union those vectors together.
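A minimal sketch of the per-object mini log: a fixed-size vector of recent extent updates from which an exact delta can be derived when a peer is only slightly behind. The class and field names are hypothetical:

```python
from collections import deque

class ObjectChangeLog:
    """Fixed-size per-object mini log of recent updates.

    Each entry records the extent (offset, length) touched by one write.
    If a peer is at most `maxlen` updates behind, the union of the newest
    entries gives the exact set of extents to copy; otherwise we fall
    back to whole-object recovery.
    """
    def __init__(self, maxlen=15):
        self.entries = deque(maxlen=maxlen)  # newest appended last
        self.version = 0

    def record(self, offset, length):
        self.version += 1
        self.entries.append((self.version, offset, length))

    def delta_since(self, peer_version):
        """Extents written after peer_version, or None if history was lost."""
        if self.version - peer_version > len(self.entries):
            return None  # log too short; whole-object recovery needed
        return sorted({(off, ln) for v, off, ln in self.entries
                       if v > peer_version})
```

The exponentially spaced variant mentioned above (last 1, 3, 7, 15 entries) would just keep several such vectors and union the ones covering the peer's gap.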
F
Let's say we simply stuck with the hash ranges we have right now. Couldn't we do something like — well, so what I'm thinking is that, if we don't want to pay the cost to maintain those filters, you could do something like just take a hash of the object names within a hash range and swap those, and the replica can also say what the versions are of every object in that range — presumably the replica has those.
F
Some idea of what they are, I think. And so then — I mean, maybe — I think the replica wouldn't even need to go and do the onode decoding. The replica can actually, along with the hashes of what object names there are, state the newest version it has — sorry, the oldest version — whichever one we want.
B
If you can have it on the primary building it — I mean, maybe looking at the onode isn't getting us out of anything, because we have to look at it anyway. Right now, backfill scans everything, loads every onode, looks at the attribute, whatever — so anything that reduces the amount of work is an improvement. I think the question is: how much can we improve it?
B
What if, instead — if you have a replica count of three and one of them goes offline, instead of just writing to two, we pick a replacement third that we record in the past intervals, but it's, like, a sort of observer, witness role, basically. And all it does is journal every transaction that goes by — the same thing that the sub-op is sending across, like the delete, et cetera.
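The witness role being proposed could be sketched as a data-less journal that records each transaction so a returning replica can replay what it missed instead of backfilling. This is a hypothetical illustration of the idea, not an existing Ceph component:

```python
class WitnessJournal:
    """A stand-in third member that stores no object data, only journals
    each transaction it observes, so a returning replica can replay the
    missed ops rather than rescanning everything."""
    def __init__(self):
        self.ops = []  # (seq, op_name, object_name, payload), in order

    def record(self, seq, op_name, object_name, payload=None):
        self.ops.append((seq, op_name, object_name, payload))

    def replay_since(self, seq):
        """The ops a rejoining replica missed, oldest first."""
        return [op for op in self.ops if op[0] > seq]
```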
B
It's the situations where, right now, when we get into peering and we don't have enough replicas of recent writes or whatever, peering gets stuck — you have an incomplete PG, basically. That wouldn't happen, right, because we would always effectively be writing three copies of it. Or, I guess it could still happen, but it'd be that much harder to happen, because you'd have to lose another n devices for every interval, basically.
B
You know, let's see... yeah.
G
How's it going? Good, just hanging on. Yeah, you too. So, yeah, I have a quick question. We are in the process of putting in a manager module that's going to be exporting sort of a big chunk of data out to Insights, and one of the things that we'd like to publish to Insights is a history — like 24 hours or something — of the different health checks that have failed. It's not so hard to do as it stands now, but there's a fairly — there can be a fairly significant delay between the check being pushed into the monitors and the manager actually getting it.
B
Okay, but we can do — yeah, I think there are two options that come to mind. One is just to make it so the health monitor keeps n of them instead of five; right now, it actually never looks at them — it only uses the most recent one, which is why it's hard-coded. If that's all that really matters, I think the one challenge there is that what it's actually storing in each of those states is a whole dump of the current health.
B
That is, the full set of health checks at that point in time. And so we have this big blob, the dictionary, that will, like, shrink and grow and whatever; and so, if you want to get the delta, then you have to calculate it out each time, which might not be ideal. The other way might be that we could make the health monitor — it already has the code that's generating the, like, incremental update messages and logging them.
G
Seems like there's — yeah, okay. I mean, you could also not do the delta — you could also not add the complexity of the deltas, and just do, like, deduplication in the manager module or something, I don't know. Yeah.
B
I just found there's a helper function in the monitor code, called log_health, that basically just looks at the previous checks and the new checks and generates log messages. So, in addition to generating the human-readable log messages, it could generate a structured log message in a different channel. Okay.
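A structured version of that delta — the new, updated, and cleared checks between two successive health dumps — might be computed like this; the dump layout used here is illustrative:

```python
def health_delta(previous, current):
    """Diff two health-check dumps (name -> {'severity': ..., 'summary': ...})
    into the new / updated / cleared sets a structured log entry would carry."""
    new = {k: v for k, v in current.items() if k not in previous}
    updated = {k: v for k, v in current.items()
               if k in previous and previous[k] != v}
    cleared = sorted(k for k in previous if k not in current)
    return {"new": new, "updated": updated, "cleared": cleared}
```

Emitting one such record per health transition gives a consumer like an Insights module its 24-hour history without re-diffing whole dumps itself.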
B
It depends whether you want a structured thing that tells you what the new checks are, the updated checks, and the cleared checks — yeah, a sequence of those. If that's what you want, then having it log the structured version of what it's logging to the human log is probably the simplest. And if you want the full shebang, and to do your own delta and notification stuff on the manager side —
B
— then you should do the other thing. It's one of those things — I wouldn't worry about the history part, because the log monitor is already, like, kind of broken in the way it's implemented. It needs to be sort of restructured so that it can more efficiently preserve more history; the implementation was sort of a hack to get it to work quickly with minimal changes, but it's far from ideal. Okay.