From YouTube: 2018-FEB-07 :: Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://tracker.ceph.com/projects/ceph/wiki/Planning
B
We're using some code from the existing dashboard back end and also some code from its front end, but we're also using some openATTIC back-end and front-end code. The front end will be in AngularJS... in Angular 2, I should say, sorry. And the back end will be very similar to the existing dashboard: in Python with CherryPy, loaded into the manager.
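As a rough illustration of the shape of such a back end (a manager module serving JSON to an Angular front end), here is a stdlib-only sketch; the real dashboard back end uses CherryPy inside ceph-mgr, and `health_payload` and the `/api/health` path are made-up names for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_payload(status="HEALTH_OK", checks=None):
    """Build the kind of JSON document a dashboard health endpoint
    might return (hypothetical structure, for illustration only)."""
    return {"health": {"status": status, "checks": checks or []}}

class Handler(BaseHTTPRequestHandler):
    """Minimal REST-style handler; the Angular front end would poll it."""
    def do_GET(self):
        if self.path == "/api/health":
            body = json.dumps(health_payload()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve: HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```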
B
So the current status is that on the back-end side the groundwork is done, so we are just implementing features, copying code from the existing dashboard, as I said, to get on par with the existing dashboard. And on the front-end side we are doing some groundwork and working on the health page of the dashboard and the other pages.
B
We have opened a pull request. I can share the link; I will send it in a few seconds.
C
So then, well, I don't know enough about openATTIC to be overly detailed, other than I know I've worked with the Salt stack before, you know, back in the Calamari days. And I know our organization has been looking at openATTIC, and we use SUSE, so we're just curious about the Salt stack. I mean, I don't see why you couldn't deploy with Ansible and have it deploy the Salt stack components if that was required.
B
The existing dashboard right now doesn't have management facilities; it basically only displays data, while openATTIC from the beginning has also provided management functionality, for example creating pools. That was one of our main goals, to provide that, and to use, for example, Angular as a framework for the user interface.
D
Excuse me, can you hear me? Yeah. Yes, oh sorry, okay, I'll stop sharing my screen.
D
Okay, the problem is like this. Recently we've seen in our online clusters, Luminous clusters, that there are occasionally a lot of slow requests, and our use case is like this: we have one writing client that keeps copying files into a directory, and tens of other clients issue reads and stat operations on those files.
D
After we discovered these slow requests, we did some testing and analysis, and the results show this: first, almost all the slow requests are getattr requests, and they're all blocked by this state.
D
This should be the LOCK_SYNC_MIX state, and when we looked into it we could see that for nearly every getattr operation the following situation can occur: a getattr operation calls the scatter_mix method and puts the inode's filelock into the LOCK_SYNC_MIX state, and in that state it has to wait for all the issued caps to be revoked.
D
Our lock state then goes from LOCK_SYNC_MIX back to LOCK_SYNC again, which will issue the caps again to all the reading clients, and then, when the next getattr operation is to be processed, it puts the filelock into the LOCK_SYNC_MIX state again, and this keeps repeating. The result is that nearly only one getattr operation can be processed per second, and the write is also held up.
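The cycle described above, where every getattr forces the filelock through a full cap-revocation round trip, can be illustrated with a toy model; this is not MDS code, and the one-second revocation latency is an assumed round-trip time:

```python
def getattrs_processed(num_getattrs, revoke_latency_s, window_s):
    """Toy model of the serialization described in the talk: each
    getattr flips the lock into a mix state and must wait
    revoke_latency_s for all reader caps to be revoked before it
    completes, so getattrs finish one revocation round trip at a time."""
    completed = 0
    t = 0.0
    while completed < num_getattrs and t + revoke_latency_s <= window_s:
        t += revoke_latency_s   # wait for caps to be revoked
        completed += 1          # exactly one getattr finishes per cycle
    return completed

# With a ~1 s revocation round trip, only ~60 getattrs complete per
# minute, so a backlog of hundreds waits tens of seconds each.
print(getattrs_processed(num_getattrs=500, revoke_latency_s=1.0, window_s=60.0))  # → 60
```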
A
Yeah, I think that's pretty clear. Patrick, maybe?
D
Not really, since there are lots of getattr operations to be processed; the wait time for the processing of one getattr operation can be tens of seconds, and in our test it lasts for nearly forty seconds. And the writing client, it seems, is blocked for nearly an hour, sometimes until the reads are all finished; only then does the write get processed.
D
Can you hear me now? Oh yeah, can you hear me now? Sorry. We think the way the MDS waits for all the caps to be released, the way the releasing of the caps is designed, is not very parallelized.
D
Generally speaking, there are two separate procedures here. The first is for the monitors to schedule all the phases for the OSD ops processing, and the other is that the OSDs need to keep replicating the operations from the master cluster to the backup cluster.
D
Well, then, when we think about the implementation of this architecture, we think there should be the following principles. The first is that we should modify the current system as little as possible and reuse the current system's components as much as possible. The second is that we should make as little impact on the performance of existing components as possible.
D
We call those librados ops, and the other ops are issued by other OSDs, you know, like repops, and we think that only the librados ops, the ops issued by clients, need to be replicated. But we think the main difficulties in implementing the replication, the replication mechanism in the main cluster, are these. The first is that we'll have to deal with various situations in which replication cannot go on, for example when the backup cluster is full.
D
Then we have to pause, we have to suspend the replication, until there is more space and metadata space in the backup cluster. And the second is that we have to preserve the ops replication order when the OSD map changes. For the second difficult thing, we think we can accomplish it by, first, making sure of the replication order of the journal entries for the ops needing to be replicated.
D
Considering that replication will go on only when the peering is finished, and that during recovery and backfill we make sure an object counts as done only when all the librados ops targeting it get through replication, if we can make sure of these two points, then the ops replication order is preserved when the OSD map changes.
D
By this we mean that, for example, if all the OSDs holding a PG, that is, the replication ops caches of all the OSDs holding that PG, are brought down, then the replication for ops targeting that PG should be suspended.
D
Well, also, as our design implies, not all the ops need to be replicated; for example, the repops that are issued by other OSDs do not. And we think that maybe we should add a new flag into the journal entry, we name it need_replicate, indicating that the op needs to be replicated.
D
Which ops should be replicated and which not? We think we need to add the new flag, need_replicate, in the object info, indicating whether ops targeting this object need to be replicated, and this is set by clients. We also need another new flag, need_full_replication, indicating whether there are ops targeting this object that have not been replicated yet, because they arrived during the replication suspend condition.
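The two flags just described, one on the journal entry and one on the object, might look roughly like this; the names mirror the talk, but the actual Ceph structures are C++ and will differ, so this is only a sketch of the marking scheme:

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    op_name: str
    # Set for client-issued librados ops; repops from other OSDs
    # are written without it and are never replicated.
    need_replicate: bool = False

@dataclass
class ObjectInfo:
    oid: str
    # Set by clients: ops targeting this object should be replicated.
    need_replicate: bool = False
    # Set when ops targeting this object arrived while replication
    # was suspended, so the object needs a full catch-up replication.
    need_full_replication: bool = False

def record_op(obj: ObjectInfo, entry: JournalEntry, suspended: bool):
    """Mark the journal entry and object per the scheme above."""
    if obj.need_replicate:
        entry.need_replicate = True
        if suspended:
            obj.need_full_replication = True
```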
A
So when you talk to me about the snapshot piece, how are you thinking of bounding that time and then doing that snapshot?
A
I mean, you mentioned that in the original diagram of the architecture you're talking about a time bound for snapshots. Do you mean... yeah, what is the time bound there? How are you coordinating those time bounds?
D
First, we have to specify a time slice, the time slice that we call the snapshot duration, and then the monitor computes the time bound according to its current system time and the snapshot duration. And since we assume that all OSDs and monitors sync with the same time synchronization server, they should be consistent.
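Assuming synchronized clocks, the monitor can derive consistent time bounds by rounding the current time down to a multiple of the snapshot duration; the talk does not give the exact formula, so this is only one plausible sketch:

```python
def snapshot_bounds(now_s: float, duration_s: float):
    """Return the (start, end) of the snapshot time slice containing
    now_s. Every daemon syncing to the same time server computes the
    same bounds for the same wall-clock instant."""
    d = int(duration_s)
    start = (int(now_s) // d) * d
    return start, start + d

print(snapshot_bounds(1002.0, 300.0))  # → (900, 1200)
```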
A
I'm talking about analyzing the ops cache and not removing things from it once it becomes full. So that means that when the replication stops for some reason and needs to be suspended, you're talking about clearing that cache out and letting the operations in the main cluster continue, while marking the objects as needing complete replication later.
D
There should be a proper structure for this purpose, and we can use a bloom filter to store them. When a new librados op comes in but cannot get replicated because of the replication suspend condition, then we can add the object ID into this set, and later, when we...
D
...is less than this min size, the OSDs have moved forward their replication tail. Otherwise the PG should be marked as replication_full, and no subsequent librados ops can be replicated until this mark is cleared. And when an OSD's replication ops cache is freed up with enough space, a message should be sent to all acting primaries. Oh sorry, I misspelled this word.
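The watermark behaviour described for the replication ops cache can be sketched like this; the names and the list-based accounting are hypothetical, as the real bookkeeping would live inside the OSD:

```python
class ReplicationOpsCache:
    """Cache of client ops awaiting replication to the backup cluster.
    When it fills to capacity the PG is marked replication_full and new
    ops are refused (their objects get need_full_replication) until
    enough acknowledged ops are trimmed; then the mark is cleared and,
    in the design from the talk, the other acting primaries are told."""
    def __init__(self, capacity, low_watermark):
        self.capacity = capacity
        self.low_watermark = low_watermark
        self.ops = []
        self.replication_full = False

    def append(self, op) -> bool:
        if self.replication_full:
            return False  # caller marks the object need_full_replication
        self.ops.append(op)
        if len(self.ops) >= self.capacity:
            self.replication_full = True
        return True

    def trim(self, n):
        """Drop n ops acknowledged by the backup cluster; clear the
        full mark once usage falls below the low watermark."""
        del self.ops[:n]
        if self.replication_full and len(self.ops) <= self.low_watermark:
            self.replication_full = False
```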
D
The message should be sent to all acting primaries that have a common PG with this OSD, and when the OSD is capable of caching the librados ops for replication again, then the PG's replication_full mark is cleared. And then, for the purpose of better performance, we think the acting primary doesn't need to wait for the replicate-OK reply. We can simply mark all the target objects as under replication first and then clear this mark when the op is replicated. And when a librados op finishes its replication, the OSD executing the replication should send out a replication-succeeded message for that op to the other OSDs, and with this we make those OSDs clear the mark.
A
I'm a bit worried about that, in case you accidentally rely on some bit of state that wasn't properly kept in sync with the primary. Currently, most of the background operations are all controlled by the primary, so it has full knowledge of what's going on. I'm just reading through some issues there.
D
Okay, the last part of my sharing is the backup cluster side of the implementation, and we think the main problem for this part is that we have to make sure of efficient replication ops caching and efficient recovery when the OSD map changes. To make the replication ops caching efficient, we think it could be simple: we just append the replication ops to the replication ops cache.
D
Also, the ops applying happens when the backup cluster receives a snapshot notification that tells it the replication has reached a certain time point; then the ops up to that time point can be merged and applied. And one thing to note is that only replication ops need the storing phase; other ops, like those caused by recovery and backfill, do not need the storing phase, which also makes the ops applying more efficient.
A
Not at the moment, nothing that's, you know, fully fleshed out. It's looking like it may work for you. Well, I guess I think one of the bits that this depends on is, of course, how hard the time bounds can get, how loose the time bounds can get, as you mentioned earlier.
G
This is the branch where we're keeping all the Kerberos and LDAP changes we're going to be working on, so, I mean, the process of refactoring. And after talking to Sage, it was decided that what we need to focus on right now is just the Kerberos part of it, not the LDAP. So we know that LDAP will be needed, but at least for now, for Mimic, what we want to focus on is the Kerberos part of it.
G
So for Kerberos we are using GSSAPI, and that's what we're using to request a token from a Kerberos KDC, and from that one token is where the authentication is going to go, through the monitor, the OSDs and all that part of things. So hopefully I get the refactoring that I need done this week, so then I can start tackling again the authentication issues and the node side of things. But that's where we're tracking all the work done as far as LDAP and Kerberos; the LDAP part we'll just leave for now.
G
My understanding is that eventually we're going to cover all the scenarios. So there's a scenario where the nodes would be configured to ask a KDC for a Kerberos token, and we would use that in place of cephx. There's also the requirement that we do the same for users, so users should be able to request a token, and once a user has that one token, he is authenticated.
G
We would use it as a single sign-on, so he could access the cluster resources that have been assigned to him. But for now, according to our last talk, what we want to focus on is to make sure that, first, the monitor is able to do that, at least the monitor for now. And if we get to the point where a user could request a token and then be authenticated with that, that would be perfect. But for now the main thing is to get the nodes, especially the monitor, to work.