From YouTube: 2018-MAR-07 :: Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://tracker.ceph.com/projects/ceph/wiki/Planning
B: There's a system of callbacks there that are triggered whenever a change happens, and internally we track the changes and make sure that we don't miss any change, and then the callback is specific to the different module. So the default module just pulls data, and there is another module that looks at the source object that changed and updates Elasticsearch with the information, but that is minimal.
B: We can do that. The only problem there is that the log format is such that multiple changes are consolidated into a single object. So the way that the S3 logging works is you turn it on on a specific bucket, and then it records the different changes that happen on that specific bucket, and from time to time it pushes that into another bucket as rgw objects, as s3 objects.
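The buffer-and-consolidate flow being described can be sketched in Python. Everything below (the class and method names, and the dict standing in for the target bucket) is hypothetical illustration, not rgw code:

```python
import json

class BucketChangeLogger:
    """Toy model of the scheme above: changes on a source bucket are
    buffered, and a periodic flush writes them out as a single log
    object in a target bucket."""

    def __init__(self, target_bucket):
        self.target_bucket = target_bucket   # dict: object name -> body
        self.buffer = []                     # pending change records
        self.flush_seq = 0

    def record(self, bucket, obj, op):
        # One entry per change; nothing is written out yet.
        self.buffer.append({"bucket": bucket, "object": obj, "op": op})

    def flush(self):
        # Consolidate all buffered changes into one object, like the
        # "from time to time it pushes that into another bucket" step.
        if not self.buffer:
            return None
        name = "log.%06d" % self.flush_seq
        self.target_bucket[name] = json.dumps(self.buffer)
        self.buffer = []
        self.flush_seq += 1
        return name

target = {}
log = BucketChangeLogger(target)
log.record("photos", "cat.jpg", "PUT")
log.record("photos", "dog.jpg", "DELETE")
obj = log.flush()
```

The point of the sketch is the tradeoff discussed below: readers of the target bucket only see changes after a flush, never in real time.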
B
To
keep
the
data
in
some
some
temporary
repository
and
then
from
time
to
time,
scrub
that
or
you
know,
go
over
that
and
and
create
the
final
s3
objects.
I'm,
not
kind
of
person.
Sure
that
that's
what
we
want
as
it's
not
completely
like
yeah
Jesus,
want
to
be
see
a
real
time
of
objects,
objects
that
have
changed,
then
not
gonna,
get
it
from
the
specific
API.
So
I'm,
looking
at
the.
C: It shouldn't be too difficult here. So the reason why this came up originally was a user wanted a change log just to see changes, and then the thinking was that if we can just use a standard s3 API to do a standard s3 feature, we might kill two birds with one stone, right? Like, I'm looking at this and it doesn't look like there's a way to log only some of the changes; it logs everything that gets input, yeah.
B: There may be a way to automatically trim information, because they don't want to know everything. And the API would allow you to request information either by bucket or by user, and by specifying a time frame for the changes that you want to see, and some kind of a marker so you can iterate through it, pretty much like the usage log. You know the difference, though: the usage log is aggregating.
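The by-bucket/by-user query with a time frame and a marker for iteration might look roughly like the sketch below; the function signature and field names are made up for illustration, not an rgw API:

```python
def list_changes(changes, bucket=None, start=None, end=None,
                 marker=0, max_entries=2):
    """Filter change records by bucket and time frame, and page
    through them with a marker so the caller can iterate."""
    selected = [
        (i, c) for i, c in enumerate(changes)
        if (bucket is None or c["bucket"] == bucket)
        and (start is None or c["time"] >= start)
        and (end is None or c["time"] <= end)
        and i >= marker
    ]
    page = selected[:max_entries]
    entries = [c for _, c in page]
    # next_marker lets the caller resume where this page left off;
    # None signals that the listing is complete.
    next_marker = page[-1][0] + 1 if len(page) == max_entries else None
    return entries, next_marker

changes = [
    {"bucket": "a", "object": "x", "time": 1},
    {"bucket": "b", "object": "y", "time": 2},
    {"bucket": "a", "object": "z", "time": 3},
    {"bucket": "a", "object": "w", "time": 4},
]
```

A caller would loop, passing each returned marker back in, until the marker comes back as None.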
C: Yeah, I mean, reading this, reading the document, it's like: s3 periodically collects access log records, consolidates them into log files, and uploads the log files to your target. And I say alright, but it's infrequent. It also says it's best-effort log delivery, so it's rare that it misses anything, but you're not guaranteed to get logging, yeah.
B: It's not gonna be the same logging mechanism, right? It's gonna be just leveraging the same... it's not even the same idea, because the API is basically s3 objects, yeah. So if we want to create s3 objects that hold information about which objects changed, then we can do it, but again...
C: It doesn't tell you how often it actually flushes out these log objects [inaudible]. Yes, we wouldn't see how often it does it, whether it's like once a minute or once an hour or whatever. Mmhmm, yeah, okay. Well, so, I mean, coming back: is this the right tool to provide an rgw user interface to find out about changes? I mean, we can do it, yeah.
B: So the question is, all right: the one point is, when I'm saying put it in some bucket, every object in some bucket, that bucket can reside in the same source zone. Like, we could configure it so that it would go to the source zone and put that data in that bucket. But we need to be careful not to lock that specific bucket, right, because then...
B
B
And
configuration
wise,
we
need
to
make
sure
like
the
configuration
would
probably
be
like
we'd
specify
which
packets
we
want,
one
to
log
little
early
part
of
that
zone,
configuration
I,
think
module
some
configuration
yeah
right
yeah.
We
can
do
that,
whether
that's
their
curve
right.
It's
wait
to
to
log
it
I'm,
not
sure.
Like
III
see
these
two
options
and
a
third
option
would
be
to
find
external
logging.
E: Because that might be... that's probably something to look at. One thing I wasn't quite clear on was how you were doing your auditing, but it sounded kind of like, if someone's doing sporadic I/O, then they're gonna do a read I/O for their put, and a read or write I/O over there, a few log I/Os per put, and then [inaudible] the read I/O, so extra I/O to log their puts, per minute or at our scale of buckets, yeah.
C
Sorry,
there's
there's
the
goal
of
having
a
change
log
for
the
bucket
there's
the
goal
of
providing
an
s3
API
that
just
has
like
a
usage
access
log
and
if
it's
just
to
get
a
log,
then
it
can
be
like
me,
you
can
just
buffer
it
on
memory
for
like
an
hour
and
flush
it
out,
and
it's
like,
if
you
lose,
some
log
doesn't
really
matter
right.
That's
why
he.
B: Periodically, right, yeah, yeah. Now keep in mind that if you create a temporary usage (or whatever) log, and from that you're creating new objects, then you're consolidating everything. The temporary one is not necessarily gonna run on the same cluster, and if it is, it's not on the same pool.
B: So we probably need, in that zone, in the sync module, to keep the data somewhere, in some index, probably an omap, unless you're dumping it outside. I mean, you need to keep it in omap there and have some kind of a periodic system that collects it and generates something out of that, like once a minute or once an hour, I don't know. The information that you keep there is just, you know, name of the bucket, name of the object, and version of the object.
B
So
in
that
case
ego
over
that
index
once
a
minute
and
then
do
something
with
it.
One
of
the
things
that
you
could
do
when
it
is,
you
know,
creating
an
object
and
throw
it
to
a3
or
throw
it
to
turn
the
a3
system
that
you
you
get
all
that
information
from
in
some
Sun
bucket,
which
probably
is
pretty
trivial.
You
don't
necessarily
need
to
create
the
same
blog
structure.
That
amazon
has.
You
can
just
create
a
JSON
object
like
the
tents,
an
area
of
you
know
key.
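A minimal sketch of that periodic pass, in Python: bucket/object/version entries accumulate in an index (standing in for omap), and consolidation emits one JSON object holding an array of keys. The key format here is an assumption for illustration:

```python
import json

def index_key(bucket, obj, version):
    # omap-style flat key for one change
    return "%s/%s/%s" % (bucket, obj, version)

def consolidate(index):
    """Turn the accumulated index into a JSON object holding an array
    of keys, rather than mimicking Amazon's log file format, then
    clear the index for the next interval."""
    body = json.dumps({"keys": sorted(index)})
    index.clear()
    return body

index = set()
index.add(index_key("photos", "cat.jpg", "v1"))
index.add(index_key("photos", "cat.jpg", "v2"))
body = consolidate(index)
```

The resulting body would then be written as an ordinary object into whichever bucket collects the logs.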
C: To do it, yeah. I think, just very generally speaking, my worry is that we have too many: there's already the index log that we use for the replication; there's also the usage logs that we have (those are consolidated, I guess, so it's totally different); and I can't remember if there's also an op log in rgw that'll log every single operation there is.
C
Right
all
right,
the
next
thing
up
here
is
merging
update.
They
did
had
slightly
this
isn't.
It
should
be
really
brief.
There's
a
big
pull
request.
That's
getting
closer
to
being
stable,
that
reef
actors
a
bunch
of
the
fury
messages
going
through
fast
dispatch
and
the
the
started
work
you
in
the
USD.
C: So the basic design is that the, let's see, the OSDMap pool property has a new number: in addition to pg_num and pgp_num, there's also a pg_num_pending, which is normally equal to pg_num, but it can be one less than pg_num, which basically says that this PG is about to be merged, and it basically tells the OSD to quiesce I/O.
C
It
currently
has
the
restriction
that
you
can
only
do
it
by
one
PG
at
a
time,
I'm,
not
sure,
there's
any
real
reason
to
do
more
than
that
and
the
bottom
of
the
stack.
The
object,
store,
merge
function
is
also
implemented
for
M,
store
file,
store,
Cal's,
K,
store
and
blue
store
and
be
in
between
bits
and
the
OSD
that
actually
is
sort
of
orchestrate
the
PGS
getting
merge
together
and
old.
One
getting
removed
from
the
map
and
all
that
stuff
is
sort
of
minimally
implemented.
C
So
there's
still
some
work
to
do
there
like,
for
example,
that
the
first
crash
I
hit
was
the
PG
might
have
gone
totally
clean,
but
there
might
be
a
stray
replica
that
was
in
the
process
of
getting
deleted
that
didn't
get
deleted
yet,
and
so,
when
the
merge
happens,
what
happens
to
that
stray
one
right
now
it
crashes,
but
it
should
just
like
probably
do
a
merge
into
the
new
PG
ID
and
mark
itself
as
incomplete
or
whatever,
and
continue
doing
the
delete.
Something
like
that.
C
But
that's
not
all
that's
sort
of
the
next
piece,
so
that's
encouraging
as
soon
as
the
bigger
PG,
whatever
Steve
started,
work
you
refactor
stuff
is
done,
then
I'm
going
to
switch
focus
with
that.
The
other
half
of
this
is
something
to
actually
orchestrate
this
so
right
now.
This
is
where
I
could
use
of
it.
But
though,
right
now
the
the
monitor
command
is
set
those
that
PG
num.
You
know
whatever
the
number
is
minus
one,
and
so
you
can
basically,
like
literally
reduce
it
by
one.
Each
time
then
it'll
do
it.
C: I did that because I didn't want to sort of encode the "minus one, you can only do one at a time" into the PG interval, the is-new-interval function, which affects client behavior in the Objecter and the OSD behavior, and it's gonna be hard to change over time, whereas making the monitor smart enough to do multiples at once would be... so I'm kind of inclined...
B: ...to be the thing that the intervals are based on, which means that we need another property, which would be like a pg_num goal or pg_num target, stored somewhere else, and then the monitor would automatically reduce pg_num_pending by one until it reaches it, or whatever, however smart it ends up being, until it gets to the pg_num goal.
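The monitor-driven stepping being proposed could be sketched as below. The pg_num_target property is the idea under discussion here, not something that exists yet:

```python
def plan_merge_steps(pg_num, pg_num_target):
    """Sketch of the scheme discussed above: the operator sets a
    pg_num target, and the monitor walks pg_num down one merge at a
    time instead of encoding "minus one" into the map semantics."""
    steps = []
    while pg_num > pg_num_target:
        pg_num -= 1          # one PG merged per step
        steps.append(pg_num)
    return steps
```

Each emitted step corresponds to one map change that the OSDs process, which is the source of the "that's an awful lot of maps" concern raised just after this.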
E: I'd say, at least in the early draft that I looked at, the OSD was perfectly willing to try and merge a lot more than one PG at a time, and if we only do one PG at a time, then that's an awful lot of maps for any cluster where you're actually gonna run this. And I don't know, like, map processing has gotten a lot more efficient, but I'm worried about trying to, you know, go from 64k to 32k PGs.
E
What's
that's
gonna
do
to
the
clusters
and
throughput
as
a
whole,
and
it's
you
know
liveliness
by
trying
to
process
that
many
maps
as
quickly
as
hope,
I
think.
Whatever
is
happy.
C: ...is fine, because it's like a background process; you're not doing it all at once, yeah.
C: I'm reasonably confident we'll be able to get that ready to go for mimic. We'll see; we have a month and a half, so fingers crossed. All right, the next thing I was going to talk about was config history. This one I have, I think, more substantial questions about, because it's pretty open-ended. So here's the pad; there's a pull request with the current work in progress, but it only implements a couple of commands.
C
So
the
first
thing
that
it
does
is
it
basically
just
makes
the
monitor
vlog
every
config
change.
It
puts
it
all
in
in
the
config
key
space.
So
all
the
config
options
right
now
are
prefixed
with
config
slash
and
then
something-
and
this
adds
config
history,
slash
and
then
a
version
ID
of
whatever
the
changes
that
you
made
slash
and
then
all
the
stuff
that
changed
and
it's
the
keys
look
like
a
disk.
So
it's
like
a
plus
if
you
set
something
inside
if
it
unset
something
and
just
done
that
way.
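That key layout can be illustrated with a small helper. The exact key shapes below are paraphrased from the description in this discussion, not copied from the pull request:

```python
def history_keys(version, to_set, to_unset):
    """Build the config-key entries for one config change: the live
    "config/" keys plus diff-style "config-history/<version>/" records
    (plus prefix for a set, minus prefix for an unset)."""
    keys = {}
    for name, value in to_set.items():
        keys["config/%s" % name] = value
        keys["config-history/%d/+%s" % (version, name)] = value
    for name in to_unset:
        keys["config-history/%d/-%s" % (version, name)] = ""
    return keys

keys = history_keys(3, {"debug_osd": "20"}, ["debug_ms"])
```

Dumping the history is then just listing everything under "config-history/" grouped by version.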
C: So that's what's there. There's a "ceph config log" command that will just dump the log of changes; that's pretty straightforward. It just separates out the version number of the config change, then a timestamp of when it happened, and then all the stuff that was added and removed. And then the first thing I sort of built on top of that is just a revert command, where you give it a version to revert the config to, and it'll roll back...
C
All
the
changes
back
to
that
point
in
time
by
looking
all
the
diffs
and
just
reversing,
and
basically
so
that
that's
implemented,
it
works
fine.
But
it's
like
a
super
basic.
A
little
bit.
Interface,
I
have
a
whole
bunch
of
questions
about
how
we
want
this
whole
thing
to
work,
and
ideally
somebody
actually
runs
a
set
in
production
and
he
uses
a
like
source
control
or
something
to
manage
the
configs
or
something
would
be,
would
be
good
to
have
something
like
somebody's
input.
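The roll-back-by-reversing-diffs step can be modeled in a few lines, assuming each history entry records the old value alongside the new one so that an unset or overwrite can be undone. The data shapes here are invented for illustration:

```python
def reset_to(state, history, version):
    """Roll back every change newer than `version` by reversing each
    diff, newest first (the operation discussed here, later renamed
    from "revert" to "reset"). `history` maps version -> diff, where a
    diff maps option name -> (old_value, new_value); old_value of None
    means the option did not exist before the change."""
    for ver in sorted(history, reverse=True):
        if ver <= version:
            break
        for name, (old, _new) in history[ver].items():
            if old is None:
                state.pop(name, None)   # was newly set: drop it
            else:
                state[name] = old       # was changed: restore old value

state = {}
history = {}
# version 1: set debug_osd=10
state["debug_osd"] = "10"; history[1] = {"debug_osd": (None, "10")}
# version 2: change it to 20
state["debug_osd"] = "20"; history[2] = {"debug_osd": ("10", "20")}
reset_to(state, history, 1)
```

In the real design the reset itself would be appended to the history as a new change, so history only ever moves forward, as discussed further down.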
C
Maybe
I'll
send
an
email
after
this,
so
the
first
question
I
had
was
the
command
I
did
a
stuff,
config
revert
and
it
basically
rolls
everything
back
up
until
a
version
which
is
actually
I
realized
after
I
did.
This
is
different
than
how
good
revert
works
where
you
give
it.
If
you
have
a
specific
change,
it'll
revert
just
that
one
change,
not
everything
between
now
and
that
change
and
so
I'm
wondering
if
what
the
right
word
is
for
the
rolling
everything
back.
C
Right,
yeah
yeah,
that's
reset!
That's
fully
reset
you're
right!
Okay,
the
nice
thing
is
that
it
actually
it
logs
that
as
a
change
so
like
when
you
revert
it's,
it's
actually
just
a
new
change
on
top
of
the
change
history,
so
it
doesn't
actually
jump
you
back
in
time.
It
just
adds
a
new
change
that
reverts
everything
back
to
that,
whatever
it
was,
but
yeah
I
think
reset
makes
more
sense,
okay
and
then
for
reverting
a
specific
change.
You
just
revert
make
sense.
C: And so revert has an optional argument for the "who", which is like mon, mds, osd; it could be "osd/rack=foo"; it could be all that same stuff. But it turns out that once you start thinking about it, it's super unclear what that really means. For example, if you have...
E: ...a use case for even wanting to do partial config reverts... like, I just don't think we should support it. Yeah, that would be an easier solution. In particular, I think it's gonna be hard to support people who are using it, and I think it's gonna be hard for the people who are using it, because it encourages people who are using the revert and the rollback to try and identify that a specific portion of...
E
Their
config
change
caused
a
a
macro
level
problem
and
most
people
aren't
going
to
be
able
to
identify
those
problems
or
those
causes
accurately
or
there
or
they're
going
and
even
worse.
They're
gonna
think
that
they're,
like
gonna
pick
one
and
as
a
guess
and
behavior,
is
gonna
change
them,
so
they
think
they
were
right,
but
they're
gonna
think
do
you
think
that,
for
the
wrong
reasons,
yeah.
C
Yeah
I
guess
I
think
I'm
like
the
UK's
would
be.
Maybe
you
apply
take
to
the
whole
cluster
and
then
you
want
to
roll
back
like
just
one
rack,
but
I
think.
Actually
it's
the
other
way
around.
You
probably
want
to
like
apply
the
config.
Just
one
rack
make
sure
it's
okay
and
then
apply
the
whole
cluster
or
something
but
yeah.
Okay,
that's
easier!
C: That certainly is a reasonable template for a first version; I'm happy with that. So that leaves the last thing... well, one thing aside: obviously there would be a diff command in there, so you can look at what changed between now and a previous time. I haven't implemented that, but that's pretty easy, and it would be useful. But the main thing that kind of bothers me right now is actually just the way that these versions are being identified: right now it's just a version number that always starts at one.
C: So you'd do, like, "ceph config tag", you know, "monday" or whatever you want, and then make all these other commands work in terms of the tags instead of the version numbers.
C: The way that it's implemented, it's always a forward progression of history, so it's not quite like that in that sense. If you reset, it'll just add a new change on top that undoes everything that you did before. So you can undo that thing: you can undo the revert, or reset the reset, to get back to...
C
Yeah,
okay,
yeah
I
mean
I'm.
It's
that
the
version
number
is
it's
the
it's
just
using
the
internal
version
number
that
the
monitor
is
using,
which
can't
go
back
in
time.
It
always
goes
forward,
so
we
couldn't
really
do
already
said.
If
we
wanted
to
you,
unless
we
got
really
wacky
but
I,
guess
I
guess
the
question
is:
is
it
useful
to
have
labels
like
user
defined
labels
of
like
this
is
my
working
config
as
of
whatever
or
something.
C: Basically, so you can say "ceph config tag foo" and it'll tag the latest version, or you can say "ceph config tag foo" with a version and it'll tag some other previous version. And that's sort of a separate thing off to the side, just mapping names to versions; you can see the tags, you can delete them, and so... but.
C: It's just, otherwise they're like, "I want to go back to the thing from last week, or whenever it was that I had everything working; I want that as a reference", and they'd have to go look through the history, look at the timestamps, and try to figure out when that was. Like, there's no way for them to annotate, like, "this is the..."
C: I mean, I guess we want to provide the important bits, hopefully, of what people... what large operations do, if they actually have all this config stored externally in git or some other source control, and they're doing things like "this is my staging", "this is what I rolled out in production"; like, they push everything out every Tuesday or whatever it is. I don't know all the processes they have around that, and they're using all that various functionality in the source control to manage all this stuff and keep track of it.
C
So
I
wonder
if
what
we
really
want
to
do
is
talk
to,
like
you
know
the
folks
at
OVH
or
whatever,
and
find
out
what
they're
like
Peter
was
when
we
were
first
talking
about
mom
based
config
was
I'm
pretty
negative
because
he's
happy
with
their
external
config
management,
so
it'd
be
interesting
to
just
see
what
parts
of
their
rational
config
stuff
they
really
use
and
why
they
use
it
to
really
understand.
I
guess,
of
course,
he's
in
Europe,
so
he's
not
here,
I
mean.
F: The thing we're talking about: I have two branches. One is the accepted alien one, and the other is the wip-seastar testing one, which has the test case with which we can run... we can submit a task to Seastar from an alien thread. By "alien thread" I mean a thread not managed by Seastar, because in Seastar the threads have access to thread-local storage, for instance the engine and the attached work-item queues.
F: When writing the test case, I found that we need to take special care of the lifecycle of Seastar, because Seastar is not necessarily ready when the alien thread wants to submit to the work-item queue, and we need a protocol between Seastar and the alien thread, so the alien thread can be notified when Seastar is ready to serve.
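That readiness protocol can be modeled with plain Python threads; this is only a stand-in for the reactor/alien-thread handshake being described, and uses no Seastar APIs:

```python
import threading
import queue

ready = threading.Event()       # "Seastar is ready to serve" signal
work_items = queue.Queue()      # stand-in for the alien work-item queue
results = []

def reactor():
    # ... reactor initialization would happen here ...
    ready.set()                 # notify alien threads that we can serve
    task = work_items.get()     # then start draining the work-item queue
    results.append(task())

def alien_submit(task):
    # The alien side must not enqueue before the reactor is ready;
    # this wait is the protocol the speaker is describing.
    ready.wait()
    work_items.put(task)

t = threading.Thread(target=reactor)
t.start()
alien_submit(lambda: 6 * 7)
t.join()
```

Without the `ready.wait()`, a submission racing ahead of reactor startup would target queues that do not exist yet, which is exactly the lifecycle problem mentioned above.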
F
That's
why
we
need
to
need
another
another
core
name:
the
right
Manning,
which
is
the
butcher.
The
new
API
introduced
to
sister
to
facilitate
the
communication
between
aliens,
red
and
the
sisters
red
and
it's
a
it's.
A
very
thin
wrapper,
rounded
right,
sister
core,
because
you
might
know
that
the
sister
had
a
very
handy
API
name,
the
right
Sun,
which
is
able
to
write
to
give
an
F
T.
C: ...back, but he was suggesting, yeah, just taking one of the current pollers and adding a new one, essentially what you're doing here with the alien one, but it might be worth just running it by them to see if this makes sense.
C: It seems to me like... I mean, the path for that I've been imagining in my head is that we have this alien piece that lets threads talk to Seastar. Once that's sort of working, then we add Seastar to the build, and we set up a reactor, the main reactor or whatever, in the OSD, but it does nothing initially, and then we just start giving it a few trivial things to do.
C
Like
I,
don't
know,
maybe
we
make
a
sea
star
version
of
like
the
log
thing
like
D
out
logging,
instead
of
the
thread
I'm
just
to
like
prove
it
out
and
make
sure
that
the
the
communication
stuff
works
once
that's
done,
then
the
big
chunk
would
be
to
move
to
make
a
sink
messenger
work
with
that.
It's
all
it's
worker
threads
and
instead
use
use
the
reactor
so
there'd
be
like
a
SC
star
messenger
implementation
based
on
a
sequester
which.
C
Right,
it
would
be
a
different
well,
it
would
be
I
think
it
would
be
close
right,
so
I
mean
the
messenger
interface
would
because
it
still
said,
NASA
didn't
receive
like
send
message.
It
wouldn't
really
need
to
change,
because
that's
like
a
non-blocking
thing,
yeah,
it's.
C
But
I
think
this
would
be
like
a
would
be,
it
would
be,
it
would
end
up
with
two
messenger
implementations
or
interfaces.
Basically,
so
there
would
be
the
traditional
food,
that's
one
that
everything
uses
and
then
the
OST
only
initially
would
use
this
special
listener
interface
that
actually
delivers
everything
DSU
star
right.
C: The way that it was set up, it takes over the process startup: it takes your argv and argc and passes all this stuff into Seastar, and so it kind of wanted to own the process bootstrapping and initialization phase of everything. Is that required? Because it seems like that's not what we want, right?
C
We
want
to
be
able
to
like
go
through
all
a
normal
startup
stuff
like
loader
config,
it
kind
of
ready
and
then
like
start
at
the
reactor
and
then
maybe
fork
or
whatever
beforehand
and
then
like
and
then
go.
That
seems
like
the
easier
way
to
integrate
it.
I'm,
not
sure.
If
that
makes
sense,
have
you
have
you
looked
at
that
part
of
it
a.
D: Actually, we didn't get enough of a chance to complete the coding for this pull request for clay code. The main change we are making is to turn clay code into a separate plugin; previously it was a technique within jerasure. We want clay code to be able to use both the jerasure and ISA libraries, depending on what's given as input. The current code which I'm writing...
D
It's
agnostic
to
it
just
takes
this
Irish
according
to
phase
difference
based
on
the
based
on
what
is
entered
in
a
profile,
so
I'll
need
more
time
so
I
just
wanted
to.
Let
you
guys
know
that
I'm
still
like
working
in
the
background-
and
it
will
be
there
like,
but
one
thing
is
I'm
also
like
working
on
some
other
projects,
so
it
might
be
little
delayed.
So
the
current
plan
is
by
April
15.
C: Yeah, I think for the most part there are already a bunch of stress tests in our QA suite that will read and write data and simulate failures and all that stuff, so I don't think you'll need to do any work, really, to capture that; we'll just need to tweak it so that it also tests your plugin in addition to the other ones, and that part's pretty easy. There's...
C
Also
a
corpus
of
data
called
the
Boresha
code,
non
regression
corpus
where
it's
just
a
bunch
of
encoded
data
to
confirm
that
we
can
decode
stuff,
that's
previously
previously
written
and
we
don't
like
break
something
so
that
you
can't
decode
old
data.
So
there's
that
also
so
I
think
there's
like
an
actual
step,
got
to
go.
Add
a
bunch
of
data
to
that,
but
I'm
not
quite
sure
how
it
works
of
the
nourisher.
You
look
at
it.
I
just
know:
it's
there,
I
think.
C: What I would expect is that there's a facet that specifies all the different erasure code profiles that we test with, like, you know, different values of m and k and the different plugins or whatever, and then it would just be a matter of adding a bunch of clay code ones with different sizes for the sub-chunks and the regular chunks.
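A profile matrix of that kind might be generated like the sketch below. The plugin names match those mentioned in the discussion (jerasure, ISA, clay), and clay's d parameter controls the sub-chunking, but the specific k/m/d values here are arbitrary examples, not the actual QA configuration:

```python
import itertools

plugins = ["jerasure", "isa"]
km_values = [(2, 1), (4, 2)]

# one profile per (plugin, k, m) combination for the existing plugins
profiles = [
    {"plugin": p, "k": k, "m": m}
    for p, (k, m) in itertools.product(plugins, km_values)
]
# clay profiles additionally vary d, which changes the sub-chunk layout
profiles += [
    {"plugin": "clay", "k": 4, "m": 2, "d": d} for d in (4, 5)
]
```

The QA suite would then loop over this list, creating a pool per profile and running the same read/write/failure workload against each.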
C: Okay, and this is the same as last year, the probe from last year? Yeah, awesome, okay, that sounds great. I'm not sure who's gonna review it; I can do another pass over it. My review was pretty superficial, 'cause I'm not an erasure code person; I was just looking at general style and the use of the APIs and so on, and I can help with that, I think.
C
Made
April,
so
that's
coming
up
pretty
quick,
so
I'm,
guessing
that
this
is
not
going
to
be
ready
and
even
if
it
were
sort
of
done,
it
would
be
definitely
marked
as
experimental
for
mimic,
because
it's
sort
of
brand-new
and
doesn't
hasn't
had
a
lot
of
testing
behind
it.
So
probably
the
real
target
for
this
being
something
that
users
can
use
is
going
to
be
the
next
release.
After
that
which
is
going
to
be
around
I
guess
next
January
my
Nautilus
is
frozen.
C: So that's already in the tree; that's in there, I think that was set a while ago. So yeah, I think this should be pretty... yeah, that probably could be backported; yeah, that's very self-contained.