From YouTube: 2020-03-03 :: Ceph Crimson Meeting
A: Come here and start — that [unclear] was working on the [unclear].
D: So — I couldn't use ceph-deploy to deploy the current version of the code on my test environment; I could only use [unclear]. I was just testing, and the test showed me that this version seems much worse.
A: [unclear] than the previous version. So in conclusion, the alienstore stuff is better than a classic ceph-osd with BlueStore in the latest version for the read testing, and it is worse than [unclear] for the writes.
D: Only for the reads — it's better for the read path. It only gets better than ceph-osd if I set the thread number to one; and for reads the performance is similar with the default options, which means there are multiple threads. But for the write path it is even worse compared with ceph-osd. [unclear] Maybe I need to read the crimson code to check what happened, but [unclear] it is the better thing to do.
E: Let's start from [unclear]. I reviewed the two PRs from [unclear] — one is for the [unclear] metrics; it's the extension, the generalization of what occurs across the benchmark infrastructure. Generally they look good, with some minor changes necessary. On the recovery and PG log front — I was reading the code, and as a side effect I made a bunch of very small, tiny cleanups of existing code. I'm implementing [unclear] now; I wired up the [unclear], so the reservation happens before both the PG log based recovery and backfill. I saw that you have rewired AsyncReserver. The [unclear] is just for development; it will be quite easy to integrate what comes for reservation holding.
E: I think I've taken a look at the [unclear] implementation and also at the design — at the SeaStore design documents — and I wonder whether we could try to go with some kind of, let's say, zero-overhead PG logging. At the moment we store the PG log as omap entries, which is terrible, which is really costly. The idea is to move to a dedicated mechanism — basically a ring buffer on top of ObjectStore.
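As a rough illustration of the dedicated-mechanism idea, here is a minimal sketch of a capacity-bounded ring buffer for short-lived PG log entries. All names and the trimming rule are hypothetical, not the actual crimson/SeaStore interfaces:

```python
# Hypothetical sketch: a bounded buffer of (version, payload) PG log
# entries, trimmed from the tail as new entries arrive. In a real
# design, trimming would be coordinated with the journal rather than
# driven purely by a fixed capacity.
class PGLogRing:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []        # (version, payload) pairs, oldest first
        self.tail_version = 0    # oldest version still retained

    def append(self, version, payload):
        self.entries.append((version, payload))
        while len(self.entries) > self.capacity:
            self.entries.pop(0)                  # trim the oldest entry
            self.tail_version = self.entries[0][0]

    def entries_since(self, version):
        # Entries a peer would need in order to catch up from `version`.
        return [e for e in self.entries if e[0] >= version]

log = PGLogRing(capacity=3)
for v in range(1, 6):
    log.append(v, "op-%d" % v)
# Versions 1 and 2 have been trimmed; versions 3..5 remain.
```

Unlike omap entries, appends here never touch a sorted structure, which is the appeal of the ring-buffer shape for data this short-lived.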
E: It would be better, but still — as I understood it, the current SeaStore design is log-structured: like a log-structured file system, it will have a journal that consists of a set of records, and my intuition tells me that when doing a 4k overwrite, the size of the record we are going to put to the disk for that 4k will be around 8k.
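That intuition can be made concrete with a little arithmetic. The header size and padding rule below are assumptions chosen purely for illustration:

```python
# Back-of-the-envelope sketch: a journal record for a 4 KiB overwrite
# carries the new data plus a record header and metadata deltas, and is
# padded out to a whole number of 4 KiB blocks. All sizes illustrative.
BLOCK = 4096
HEADER = 512  # assumed record-header size

def record_size(data_bytes, delta_bytes):
    raw = HEADER + data_bytes + delta_bytes
    blocks = -(-raw // BLOCK)   # ceiling division to whole blocks
    return blocks * BLOCK

def spare_space(data_bytes, delta_bytes):
    # Padding left over in the record: the space that could carry
    # extra information (e.g. PG log data) essentially for free.
    return record_size(data_bytes, delta_bytes) - (HEADER + data_bytes + delta_bytes)

# A 4 KiB overwrite with ~1 KiB of LBA-tree deltas lands at 8 KiB.
size = record_size(4096, 1024)
spare = spare_space(4096, 1024)
```

Even a delta-free 4 KiB overwrite already occupies two blocks once the header is added, so the record's unused tail is a natural place for piggybacked metadata.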
E: Maybe we could — I will write this down in a comment or something — but the very rough idea is that, first, we use the spare space, this extra space, to carry the information that is usually put in the PG log and [unclear] structure. That's the first part. The second is some machinery around journal trimming: to tune the decision to trim the journal so that it is actually distributed across all the workers, or [unclear].
F: We're doing that anyway, so I think what you're describing is that we shouldn't need it. Whatever we write for one of these PG log entries, they're short-lived if you think of it that way, right? The PG log entry doesn't live long — by the time we get around to cleaning up segments it'll be dead anyway, right?
F: So that's not a thing we really need to worry about. It just needs to be the case that SeaStore doesn't needlessly keep around blocks that are dead. In other words, as long as we write across that key space in the PG log and then delete behind it, that block won't exist anymore by the time it goes to clean it, so it's not going to copy it.
F: I don't think there's any reason to combine those two layers, and there are a lot of reasons not to. For instance, the journal trimming in the end is sort of based on a bunch of local physical properties of the disk, and that might be completely different on the other OSDs. For instance, people have run and do run configurations where the primary OSD is on faster storage and the replicas aren't — they want to be able to serve reads faster.
F: If you look at RocksDB already, you'll notice that there's some tuning that was done so that under high-turnover situations the actual PG log keys never get out of level zero. They don't live long enough — they're deleted before they [get compacted].
F: An alternate version: I think the way forward for this in a lot of configurations will be persistent memory, and that won't be a SeaStore operation at all, or it will be available via a completely different interface. In other words, under those conditions we would simply add an interface specifically for PG logs to SeaStore, and it would be designed to be backed by persistent memory so that it never actually hits the disk.
F: To put it another way, we don't have to put PG log entries in a B-tree — we could write them sequentially to the data payload of an object. That's how it worked until 2013; we didn't originally put them in omap, but —
E: Yes — my idea is just to reuse that. My line of thinking is that even now, to overwrite 4k of data in the object store, we need to do at least an 8k physical write, and some bytes in those 8k or more will actually be spare — my only idea is to reuse them for PG logging. Only that.
E: My understanding is that when doing a 4k overwrite, the single entry of the journal will carry — as you said — one 4k block of the new data, the data we are overwriting, plus the deltas required to update the LBA B-tree. Maybe on the onode tree too — those things will be turned into a set of deltas that will also be a part of the single transaction. I'm just trying to think about the place of the PG log related data.
F: I have been thinking about it, but I don't think now is the time, like I said — at least because whatever we do here has to handle persistent memory, because that's the real killer, right? If we can put it in a persistent-memory ring buffer, then we can afford a second commit, and it doesn't have to go into the same transaction.
F: Even if it's a little bit more data, it's only one flush operation, so we don't pay any latency penalty, and we're gonna have to pay the write amplification anyway later as we move things out of the zoned disk and into the persistent memory — that's where the real wins are, remember. If we're going to write somewhere else on the disk, we have to open up another zone — you can't write randomly, right — so that zone also has to be garbage collected. It would have to be one zone.
F: No, no, no — what I'm telling you is that if we want to do a blind write of the PG log entries, so that we don't have to do allocation and stuff for them, then you can't mix those writes into zones, because we have to be able to garbage collect these things. So if you're writing sequentially to a zone and you're mixing log entries from different PGs — and extents, actually — just because that's where you're doing the PG log writes, then you can't collect it.
F: We don't have to trim for a long time — we can keep it. We only have to do trimming as we approach full, right? We'll have some target fullness: we want the total amount of free, unallocated space on the disk to be something like 20%, to make it as easy as possible on the disk's internals for moving things around. Beyond that, we just want to make sure that we don't ever hit a wall, so we'll gradually increase cleaning as we get between, let's say, 60 and 80% full.
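That policy can be sketched as a simple ramp. The 60/80% thresholds are the ones mentioned in the discussion; the function itself is a hypothetical illustration, not the crimson code:

```python
# Illustrative trimming policy: no cleaning below 60% full, full-rate
# cleaning at or above 80%, and a linear ramp in between. This keeps
# roughly 20% of the disk unallocated as the steady-state target.
def cleaning_effort(used_fraction, low=0.60, high=0.80):
    if used_fraction <= low:
        return 0.0   # plenty of free space: leave the journal alone
    if used_fraction >= high:
        return 1.0   # at/over target fullness: clean at full rate
    return (used_fraction - low) / (high - low)
```

The point of the ramp, rather than a single threshold, is that cleaning work grows gradually instead of the store hitting a wall all at once.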
F: I have enough of the code written for handling transactions — committing and, more importantly, conflict detection — that I'm okay with that part, so I'm moving back to writing the LBA code. Hopefully we'll have a useful, functioning implementation of the transaction manager at some point in the next couple of weeks. The part after that will be implementing garbage collection and [unclear].
F: How this works: if during the course of a transaction you read a block, you will always get that version back in the future — even if someone else modifies it, you will still get the original version. It's when you finally go to submit the transaction: if anything in the read set for your transaction is marked as invalid, because someone else committed first, you will get an error and you'll have to start over. So that's why I'm saying the locking thing is not quite what you're thinking it is.
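The read-set validation described here can be sketched in a few lines. Class and method names are illustrative, not the actual crimson interfaces:

```python
# Sketch of submit-time conflict detection: each transaction records the
# extents it read; if any were invalidated by another committer before
# submit, the submit fails and the caller must restart from scratch.
class Extent:
    def __init__(self, data):
        self.data = data
        self.invalidated = False   # set when another transaction commits

class Transaction:
    def __init__(self):
        self.read_set = []

    def read(self, extent):
        # Reads are stable for the life of this transaction.
        self.read_set.append(extent)
        return extent.data

    def submit(self):
        # Validate: fail if anything we read was modified elsewhere.
        return not any(e.invalidated for e in self.read_set)

a = Extent("v1")
t = Transaction()
t.read(a)
a.invalidated = True       # a concurrent writer committed first
conflicted = not t.submit() # caller now retries the whole operation
```

No locks are held between the read and the submit; the cost of a conflict is paid only when one actually happens.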
F: There's a different-version thing, which doesn't differ anyway for these purposes — there are just two kinds of version, valid and invalid. So when you call mutate, you get a new copy of that cached extent that no one else can see, and you can do whatever you want to it — though with the caveat that I haven't added this interface yet, but you'll be responsible for implementing an interface, depending on whether it's in the right state, that the next layer down can call to get the delta corresponding to that mutation.
F: It doesn't matter what's in the delta — it's just that you'll be given it back during replay to reconstruct the block. Don't worry about that so much; just know that you'll have to be able to provide a buffer representing your mutations as you go. But does that sort of clarify what I meant by locking? There's a sort of truism in certain kinds of concurrency: it's faster to ask forgiveness than permission. With locking — the asking-permission approach — you have to do all kinds of complicated back-off and retry stuff. Like, you have to release all your read locks and then retake write locks to make sure you don't get a deadlock. Or instead you could just assume you're not going to have a problem, just blithely flow past, and if you do have a problem, you start over. Yes.
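The "ask forgiveness" approach boils down to wrapping the whole operation in a retry loop. A hypothetical sketch, not the crimson code:

```python
# Optimistic retry loop: run the whole operation without taking locks,
# and if it observes a conflict, restart it from scratch.
def with_retry(operation, max_attempts=10):
    for attempt in range(max_attempts):
        result, conflicted = operation(attempt)
        if not conflicted:
            return result, attempt + 1   # number of attempts used
    raise RuntimeError("too many conflicts; giving up")

# An operation that conflicts on its first two attempts, then succeeds
# (standing in for a transaction whose read set was invalidated twice).
def flaky(attempt):
    return "done", attempt < 2

result, attempts = with_retry(flaky)
```

All the complexity of lock release/retake ordering disappears; the only obligation on the caller is that the operation is safe to re-run from the beginning.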
A: Also — if I recall correctly, RocksDB also uses optimistic locking, yeah.
F: And there are actually instructions on newer Intel CPUs that have this sort of software-level support. So that's the strategy, and there are a bunch of refinements of that in the future, depending on, well, time, I suppose. One of the notes: at a really simple level, if you observe that your transaction failed, when you start up the next time you can tell the layer, "by the way, I failed once", and —
F: Well, that's right. The only expense is that the user of this interface has to be able to retry — it's just extra work the application layer has to tolerate. We will have to do careful benchmarking, and we're gonna have to create situations where this actually matters, but my guess is that this is going to be the best way to do it. It's gonna be a lot faster.
F: It's gonna be a lot faster and simpler, because you only ever have to retry the transaction from the start. You won't ever have to do the thing where you carefully unlock and relock the tree nodes, which is really failure-prone in a lot of applications. In exchange for not doing that, you just have to be able to retry the whole operation from scratch.
C: Okay, one sentence on my side: I'm back to working on scrub. What I'm doing is creating diagrams and documentation of the existing code, going through the workflows, and writing the comments from the workflows into the code and the external diagrams. I'll probably have more questions for you in the near future, but this is basically what I did these last few days.
C: Last week I was implementing the PG log based recovery in crimson, and right now most of the coding work is done. There are some parts within my patch which I'm trying to fix now. If everything goes well, I should be able to submit the PG log based recovery PR within the next week or two.
B: It's just my assumption about how to encapsulate those interfaces to be similar, so that we can take the same indexing algorithms and apply them to different places, because I think it's tricky to select the correct B-tree algorithm — bottom-up or top-down. Maybe we will do some performance evaluations and decide to go either way or not. So that's my current thinking. I'm not sure if my assumptions in the email are correct, because I just started to look at it.
F: I just want to clarify: I spent a couple of weeks early on trying to figure out how to make the LBA tree and the onode tree the same implementation, and I really wasn't able to. It's not that the trees are different in any interesting way — the difference is that the update rules are different. The onode —
F: The transaction that writes them out needs to use relative references, because it doesn't know where it's going to land. But absolutely none of that is true for the onode tree or the omap tree — they're essentially behaving as though they have this big mutable sequence of blocks that they can mutate as they want to. So those two are similar: it wouldn't surprise me greatly if you could find a way to make the onode and omap trees share an interface and an implementation, but the LBA tree is going to be its own thing.
F: I'll just reason out loud, I guess, just so that you guys can see the way in which this works. So, like, let's say —
F: You want to do a 4-kilobyte write or read of a block that you know is an onode B-tree block. I don't have this fully wired up yet, but you're going to provide the type constructor for that block, so that what you get back can have a richer interface than just the base cached extent. It'll have to inherit from CachedExtent, but you'll be able to stash other in-memory information in there.
F: But the most important thing is that if, after calling mutate, you get back a block in state pending-delta, there will be an interface you will have to implement that gives back a bufferlist representing that delta, because during replay you'll be given that delta back to mutate the original block the same way. So that's that.
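The mutate/delta contract can be sketched very simply. The class shape and delta encoding below are illustrative assumptions, not the actual CachedExtent interface:

```python
# Sketch of the contract: mutating a cached extent yields a private
# copy; the implementer must be able to encode a delta for the
# mutation, and replay applies that same delta to the original block
# to reconstruct the mutated state.
class CachedExtent:
    def __init__(self, data):
        self.data = dict(data)

    def mutate(self):
        return CachedExtent(self.data)   # private copy no one else sees

    def get_delta(self, original):
        # Encode the mutation as the key/value changes vs. the original.
        return {k: v for k, v in self.data.items()
                if original.data.get(k) != v}

    def apply_delta(self, delta):
        self.data.update(delta)          # used during journal replay

orig_block = CachedExtent({"a": 1})
copy = orig_block.mutate()
copy.data["b"] = 2                       # scribble on the private copy
delta = copy.get_delta(orig_block)

# Replay: the same delta applied to the original reconstructs the copy.
replayed = CachedExtent({"a": 1})
replayed.apply_delta(delta)
```

The store never needs to interpret the delta — it only has to hand the same bytes back in the same order during replay.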
F: So just assume that — this should exist in the next two weeks; this is what I'm working on right now, okay?
F: The idea is that you should simply assume that you can win. When you try to mutate a block, you can get it back in one of two states. Either it's fresh — you can do whatever you want to it; it hasn't been written yet, so it will be written out in its entirety — or it's in pending-delta mode, and you have to be prepared to hand back an encoded delta for it. Then you'll be given that same encoded delta back to reapply.
F: In other words, when you get back one of these mutated — it's hard to explain; it'll make more sense when I implement the thing. But the idea is that then, during replay, when I get a delta out of a record, it will have the type of the record and the block offset on disk, and [unclear].
F: [unclear] — I try not to do header bloat; any time you do a templated header function like that you increase compilation time, so I'm always kind of careful about that. Anyway, that will be plumbed through transaction_manager.h as well, so that the consumer of the transaction manager interface can provide an even more specific type.
F: And some of that's even true — actually I think all of it is true, to an extent. I think I changed the state names, but yeah, that's the idea anyway.
B: So another thing I'm wondering about is bottom-up versus top-down updates of the B-tree. The difference is that top-down maybe needs some more locks from the root layer down to the leaf level, but bottom-up, in most cases, doesn't need to lock up to the root level.
B: Yeah, that's right. I'm still understanding the difference, but bottom-up needs to create a linked list of the locked nodes, so it may have more impact on the records on the journal side, but it has less locking behavior inside the tree.
F: It'll happen one earlier: it'll have it at 69 instead of 70 — or, sorry, it'll have it at 70 instead of 71. If you're descending the tree and you observe a node with 70, you'll do the split right then, instead of waiting to see whether you actually need to. It's a pretty minor difference in terms of actual I/O patterns, as I understand it. Am I missing something?
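The "one insert earlier" point can be shown with a toy model. The capacity of 70 keys matches the numbers used in the discussion; everything else is illustrative:

```python
# Toy illustration: with a node capacity of 70 keys, a top-down
# (preemptive) descent splits any full node it passes, so the node
# splits while holding 70 keys; a bottom-up approach waits until an
# insert actually overflows the node, i.e. at 71 keys.
CAPACITY = 70

def keys_at_split(preemptive):
    keys = 0
    while True:
        # Descend toward the leaf on behalf of the next insert.
        if preemptive and keys == CAPACITY:
            return keys      # split a full node on the way down
        keys += 1
        if keys > CAPACITY:
            return keys      # overflow forces the split bottom-up
```

Either way the node is split once around the same point, which is why the difference in actual I/O patterns is minor; the interesting difference is in locking, discussed next.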
F: [unclear], because it's the lazier answer. I will point out that there are important differences in the way locking works — that's why I wanted to bring it back to locking. As I understand it, the primary difference between top-down and bottom-up is that, if you think about it: under what conditions do you need to lock an interior node? It's when you need to mutate it, right — so it's when the children split or merge.
F: So it makes sense that you can immediately drop the lock, or retake it as a write lock. By contrast, with the bottom-up approach you don't know that until you've re-ascended the tree, which is a problem because you took read locks on a whole bunch of nodes — so you can't actually write-lock the node you want to lock until you've released all of those read locks first, for lock-ordering reasons. Otherwise you have a deadlock possibility, if you draw out the dependency diagram. As I understand it, that's the purpose.
F: You'd have had to have written unwinding logic for everything — every layer of the application has to know how to unwind the in-progress operation, which is very complicated. So right now what it does is make a copy, and then you scribble on the copy, and that gets turned into the delta. It has a significant upside, which is that it lends itself to optimistic concurrency control: because you're not scribbling on the same one, you won't get an actual data collision.
F: You'll get a logical one, which resolves to a reasonable error that you just retry. The reason why I'm not worried about the overhead is that I anticipate later adding an interface where, if the application layer — which is to say omap or onode — notices that it's using a delta, it could choose to say: I know what I'm doing, please give me the original version; I just won't write to the buffer until commit time.
F: It's not every year — it's that when a customer has a catastrophe, this is why: something bad happens at several OSDs and takes them out, and it doesn't take that much, right? If some software bug happens that messes up BlueStore internally, and the transaction it tries to commit on startup always crashes it — which is what it does, right: the first thing it does on startup is try to replay the transaction it was trying to commit before — then if that transaction is poison, it's just gonna keep crashing, okay, right?
F: I think that's right. I don't think there's anything that kills Ceph clusters more often than, like, a poison commit happening from some particular PGs that takes out all three copies. They're highly correlated, right — you'll get pretty much the same transaction on all three OSDs, and you hit the same error.
F: Pretty much the same operations, pretty much the same objects — so it's not actually that far-fetched that the same bug hits all three. If this does end up being a performance problem, we will abandon it — performance is more important — but right now I don't think it is, and I think we can, yeah. Well, maybe we'll make it configurable, but the whole point of [unclear] —
F: So even then — anyway, that's my rationale for right now choosing to do a copy. I think in the future the three important metadata users — the LBA tree, the onode tree, and the omap tree — will be modified to be smarter, so we don't actually have to do the copy; and for data it doesn't matter at all, because we always do 4k overwrites or bigger.
F: True, but it means that we can add — okay, for instance, in the BlueStore or FileStore commit path, you don't do most of those checks until you've already committed the transaction, and there's very little you can do about it. But in this one we actually can add sanity checks as we commit transactions, and if we see a problem we can flag it and fail the transaction, and the OSD can propagate an error — you know, write things to the log and, above all, let the other PGs continue working. Yes, yes.
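The commit-time check being described can be sketched as a validation gate in front of the journal. The check, error code, and shapes below are all hypothetical:

```python
# Sketch: validate a transaction before it reaches the journal, so a
# poisoned transaction is rejected and surfaced as an error instead of
# being committed (and then crashing the OSD on every replay).
def commit(transaction, journal, validate):
    problems = validate(transaction)
    if problems:
        return ("EIO", problems)   # flag it, fail it; other PGs continue
    journal.append(transaction)    # only sane transactions are journaled
    return ("OK", [])

def validate(txn):
    # One illustrative sanity check: no write may target a negative offset.
    return (["negative offset"]
            if any(off < 0 for off, _ in txn["writes"]) else [])

journal = []
ok = commit({"writes": [(0, "x")]}, journal, validate)
bad = commit({"writes": [(-4096, "y")]}, journal, validate)
```

The contrast with the BlueStore/FileStore path is that the bad transaction is caught while it can still be refused, rather than discovered during replay of an already-committed record.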