From YouTube: 2020-01-14 :: Crimson SeaStor OSD Weekly Meeting
A: Last week I was working on revamping the B-tree implementation, based on the updated upstream version, so that we can use it as a base to come up with an async version of the B+ tree. I was also walking through the SeaStore design doc, trying to materialize it in my head, to see if I'm understanding it correctly and to see how we can rework the synchronous B-tree implementation.
C: Are we sure that it's going to be even at all possible to integrate that with SeaStore? I ask because we're going to be really specific about the exact layout of the keys and values, we're going to care a lot about what a pointer is, we're going to care a lot about how the addresses work, and we're going to need at least two versions: one that's physically addressed and one that's logically addressed, right.
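For context, a minimal sketch of what those two addressing variants might look like as types. The names here (paddr_t, laddr_t, the field layout) are hypothetical illustrations based on this discussion, not the actual SeaStore definitions:

```cpp
#include <cstdint>

// Hypothetical sketch: two pointer flavors for two B-tree variants.
// A physically addressed tree (like the LBA tree discussed below)
// stores raw device locations; a logically addressed tree stores
// addresses that are resolved through the LBA tree, so blocks can be
// relocated without rewriting the trees that reference them.
struct paddr_t {
  uint32_t segment;  // which segment on the device
  uint32_t offset;   // byte offset within that segment
};

struct laddr_t {
  uint64_t logical;  // resolved to a paddr_t via the LBA tree
};
```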
A: And I don't think we will be integrating the upstream variant of the B-tree into SeaStore as-is; I think we will use a heavily customised variant. The B-tree implementation will actually be totally rewritten; if we take a look at it, like, two years later, it will be quite a different animal, I think, but that's...
C: Well, yeah, that stuff, that's fine, I guess. The only caveats are: try not to do any IO, and we'll be crazy opinionated about exactly how blocks get written out, because B-tree updates will have to be part and parcel of the same transactions that do other things, so they'll hang off of the same transaction system. I don't think it makes sense to burden that implementation with those details, particularly as they don't exist yet.
A: Once I have a better idea, I will write it down, to reiterate it using a typical use case, and then go over it with you and the team to make sure it makes sense. I think the next step is to start from scratch on a most modest draft of the B-tree and do some work to shape it up for what SeaStore requires.
C: To finish the thought there: those two things are one category of B-tree, except one of them is integer indexed and the other is string indexed. There's another whole category, too, which is the LBA tree itself; it is a B-tree, but physically addressed, not logically, so its whole purpose for existence is different. It's still a B-tree, it's just that all of its internals will be different. More importantly, the way it interacts with the GC will be different, I think.
C: I'm saying, as much as possible, we would like a B-tree implementation that can cover all three of those use cases and is also usable in a way that allows you to arbitrarily combine it with other block metadata machinery. Like, I would have to be able to tell it up front: by the way, I'm going to put the block you're moving over here, please take that into account.
C: That's good. I mean, it also needs to be able to deal in terms of deltas applied to existing blocks, because that's how the journal works. So really try to think about how it would fit into an overall transaction, because I'm just not sure that an off-the-shelf solution is gonna work like that. To be honest, I can't think of any reason why I would design something that works like that if I didn't have a really specific use case in mind.
C: The most important part, and also the actual on-disk blocks with data in them. That is, when we commit a transaction that does a write to an object, the record we write has to contain deltas that modify the actual on-disk offset for the data and also the onode, and if either of those results in a new block being written out or moved, then it also requires a change to the LBA tree.
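A rough sketch of the kind of commit record being described, where one transaction bundles freshly written data blocks together with the onode and LBA-tree deltas so they land atomically. All names are illustrative assumptions, not the real SeaStore encoding:

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: a single journaled transaction carries both
// freshly written blocks and deltas against existing blocks, so an
// object write, its onode update, and the resulting LBA-tree change
// all commit or replay together.
struct delta_t {
  uint64_t block_addr;             // existing block the delta applies to
  std::vector<uint8_t> mutation;   // logical description of the change
};

struct transaction_record_t {
  std::vector<uint64_t> fresh_block_addrs; // new blocks written whole
  std::vector<delta_t> deltas;             // e.g. onode / LBA-tree updates
};
```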
C: In this case, we get to combine them into one single long stream of deltas. The cost is that it's very difficult to build abstractions that cross those boundaries, which is why I'm leery about using an external library. If we can make it work, that's one thing; I'm just warning you that it may be a lot harder than it looks.
B: Hi, I was missing most of the week and I'm still working on the same things. One thing I'm working on a bit; the other is that I'm fighting with blank lines and indentation. I've had to push a fix, which is now at version 4, and there will be a version 5 with a few blanks removed. I hope this week to finish that PR.
C: You probably don't have the user permissions set up in your environment. I mean, if it's trying to set the cgroup, it's probably expecting to be able to set its own cgroup, which means it expects either to be root or to have permission to do so. If you want to figure out what it's doing, you'll have to find out why it's doing that, but I expect that the reason is simply that your laptop is configured differently and your user doesn't have those permissions.
C: Yep, I was out of town last week; I'm starting to work on SeaStore properly.
C: Now, my immediate goal is to get as quickly as possible to an implementation of the LBA layer and below. That should start to give us an idea of physical write amplification for block overwrites without the rest of the objectstore implementation in place. I'm hoping to have something prototype-y that people can use for most or all of the ObjectStore interface within, or certainly by, Cephalocon. But in the immediate term there's just not enough there for other people to help with yet, so I'm gonna work as quickly as possible to get past that, but I would think we're weeks before that point rather than days.
C: It does occur to me that if anyone's looking for something to do, continuing to work on recovery would be excellent, since we're definitely gonna need it sooner rather than later, especially now that BlueStore exists. So if anyone's interested in that, send me an email and I can put together a document about where we currently are and what needs to be done.
C: We definitely, obviously, do overwrite object extents, so we have to reconcile those two things: we have immutable blocks on disk, but we have mutable object extents. So we're going to attack this from two different directions. We're going to do the classic filesystem thing where we defer the actual write-out until later by packaging the initial write as a delta; in a conventional filesystem this would be a write to the journal. Right? With me so far?
C: But we haven't talked about that part yet; this is at the physical layer. All we're talking about is physical blocks, which we can write to, or we can write deltas that modify them. So when we're reading forward in the journal stream, from wherever it is we start, every time we see a delta we read that physical block off disk, apply the delta, and then move forward in the journal stream. By the time we get to the end, our cached blocks reflect what we think the physical blocks should be.
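That replay procedure, sketched as self-contained pseudo-C++; all type and function names are hypothetical, and the real implementation would be asynchronous (Seastar futures) and error-checked:

```cpp
#include <cstdint>
#include <map>
#include <vector>

struct delta_t {
  uint64_t block_addr;                       // block the delta mutates
  std::vector<uint8_t> mutation;
};

struct record_t {
  std::map<uint64_t, std::vector<uint8_t>> fresh_blocks; // addr -> data
  std::vector<delta_t> deltas;
};

using block_cache_t = std::map<uint64_t, std::vector<uint8_t>>;

// Stand-ins for device access and delta semantics.
std::vector<uint8_t> read_block(uint64_t) { return std::vector<uint8_t>(4096, 0); }
void apply_delta(std::vector<uint8_t>&, const delta_t&) { /* apply mutation */ }

// Walk the journal forward from wherever we start: whole blocks go
// straight into the cache; each delta is applied to a cached copy of
// the physical block it references. At the end, the cache reflects
// what we think every physical block should be.
void replay(const std::vector<record_t>& journal, block_cache_t& cache) {
  for (const auto& rec : journal) {
    for (const auto& [addr, data] : rec.fresh_blocks)
      cache[addr] = data;
    for (const auto& d : rec.deltas) {
      auto it = cache.find(d.block_addr);
      if (it == cache.end())
        it = cache.emplace(d.block_addr, read_block(d.block_addr)).first;
      apply_delta(it->second, d);
    }
  }
}
```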
C: So the next step, as in any conventional filesystem, well, any copy-on-write filesystem, is you pick a new place to write the block to and you write a new transaction. This is a whole separate transaction that includes that new block address and also any mutations to blocks containing the metadata that points to that block; in our case that's the LBA tree.
C: To address one of your points in your document: there is absolutely no reason why writing a record can't refer to its own address. The ZNS devices do not make it impossible to know the address you're going to write to; they don't. That's only true if you're doing streamed, anonymous writes (zone append), and we're not doing that. That's where the industry is going, and it's a good strategy if you want parallelism, but we may get parallelism otherwise.
C: In both cases, right: either we can predict the block's address, in which case it's no problem to write out the block and the deltas at the same time, or we can't, in which case we need to write out the block first and then the deltas. Keep in mind these are typically background operations, in the sense that they don't block the currently ongoing write; it's just that we're flushing dirty data out of cache or doing a segment move. So it's okay.
C: Absolutely. So the LBA tree, according to the calculations I have in the doc, is, in the worst-case, fully fragmented scenario, like a six- or seven-level btrfs-style tree, right. You know, if we do a write and then fully write out all of our blocks, it's going to take us like seven different transactions, because we can only update one layer at a time, and we're going to update like seven blocks. But this is the beauty of B-trees.
C: So when we go up one layer, we're overwriting a block that could potentially have had 47 other things changing; when we go up two, 47 squared; when we go up three, 47 cubed; and so on. So the probability that we're capturing multiple writes increases geometrically the further up the tree we go, generally.
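To make the geometric argument concrete, assuming the fanout of 47 used in the discussion, a node $k$ levels above the leaves covers $47^k$ leaf blocks:

$$47^1 = 47,\qquad 47^2 = 2209,\qquad 47^3 = 103{,}823.$$

So a single write-out of a node two levels up can absorb the effects of up to 2209 leaf updates, which is why batching improves geometrically with height.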
C: That's not how this works. We can't, well, okay, technically we could rewrite out the entire path from root to leaf, but we don't want to. For one thing, that's the guaranteed worst-case write amplification; nothing we do can possibly be worse than that. Second, every time we go up the tree, we geometrically reduce the number of nodes, right, so the further up the tree we go, the better it is not to write it out.
C: In well-behaved situations, where you're doing insertions and removals from the same part of the tree, you'll hardly ever do metadata updates, because we'll combine many of them. Like, imagine you're doing sequential insertions into a B-tree, right. Every single insertion does not cause a metadata update, right. The first one does: you write out the first node, but the next 45, those are just journal updates to that...
C: ...to that block, right. When you finally do a split, you'll have to write out two new blocks, but by the time you wrote out those two new blocks, you're already 47-squared operations in. So the total amount you write turns out to be only like double the actual key-value insertions, which is really good, right.
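A back-of-the-envelope version of that claim, again assuming a fanout of 47 and one journaled delta per insertion: for $N$ sequential insertions you write $N$ deltas, plus about $N/47$ full leaf write-outs of 47 entries each, plus higher levels contributing another $N/47 + N/47^2 + \cdots \approx N/46$:

$$\underbrace{N}_{\text{deltas}} + \underbrace{\tfrac{N}{47}\cdot 47}_{\text{leaf write-outs}} + \underbrace{\tfrac{N}{46}}_{\text{upper levels}} \approx 2N,$$

i.e. roughly double the raw key-value insertions, matching the estimate above.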
C: You don't have to do that. So when you're updating a leaf of the tree, for one thing, that's it: you don't actually write the leaf out, you just write out a delta to that leaf. Eventually either we hit a journal checkpoint, which are infrequent, or we choose to write out that leaf of the B-tree because it's cold, or we're under memory pressure, or whatever, right. That write-out requires a new transaction that dirties the node above it, but same deal.
C: We're not actually gonna write out a new version of that block; we can leave it in cache until a journal checkpoint, or longer if we change how the journal checkpoint game works, right. We could go a really long time without writing it out. The only important thing is that if we crash and come back up, we have to be able to recover our in-memory version of that node, but that's it, and we can do that with deltas. So we only actually do a write-out when we don't think we're...
C: If you take a look at it, if you think it through, there's no reason why the journal checkpoint has to be synchronous; we can continue doing writes. The only important thing is that we can't get rid of the old epoch of journal entries until we have written out all the blocks. So basically we start a checkpoint.
C: We continue doing writes, but we also cram in as many dirty-block write-outs as we can, until there aren't any deltas in the old epoch that refer to dirty blocks. Or, to think of it another way, every dirty block in memory has an epoch number that reflects the journal epoch at which it was written out, or rather, at which it was last clean. So every time we write out a dirty block, its number advances up to the current epoch, right.
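A sketch of that epoch bookkeeping; the names are hypothetical, and a real version would interleave these write-outs with foreground traffic asynchronously rather than loop over everything at once:

```cpp
#include <cstdint>
#include <vector>

// Each dirty in-memory block remembers the journal epoch at which it
// was last clean; writing it out advances that to the current epoch.
struct dirty_block_t {
  uint64_t addr;
  uint64_t last_clean_epoch;
};

void write_out(dirty_block_t&) { /* submit the block write */ }

// One checkpoint pass: flush blocks still pinned to the old epoch.
// The old epoch's journal entries may be trimmed only once no dirty
// block's last-clean epoch still falls inside it.
bool checkpoint_pass(std::vector<dirty_block_t>& dirty,
                     uint64_t old_epoch, uint64_t current_epoch) {
  bool can_trim = true;
  for (auto& blk : dirty) {
    if (blk.last_clean_epoch <= old_epoch) {
      write_out(blk);
      blk.last_clean_epoch = current_epoch;
    }
    can_trim = can_trim && blk.last_clean_epoch > old_epoch;
  }
  return can_trim;
}
```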
A: Yes, that's how we trim the log and replay the log. But my concern is, for example, if we want to rewrite an internal node A, and it's touched by like ten transactions, we probably need to rewrite the internal node A ten times and do a split, and the crux of the problem is that we cannot introduce...
C: If you have no cache, this is a lot harder, right; you just have to spend a lot more write amplification, it's just the way it goes. But the more cache we have, the less write amplification we have to tolerate, yeah. And it's like, if you go back and read the very first log-structured filesystem paper, that's basically the game: the more cache you have, the less interested you are in the actual on-disk layout.
C: That sort of foreshadows a future concern. Persistent memory serves as a super-awesome version of this; persistent memory ought to be significantly cheaper than DRAM. So one of the reasons I'm not really fussed about any of this is that with persistent memory we can keep metadata structures in persistent memory, so all of this high write amplification stuff may not matter at all, because it's not going to be on the ZNS drive in the first place.
C: Exactly, like, I suppose, and, well, so what I'm proposing for a ZNS drive is that we create another B-tree that does the same things that the node address table in F2FS does.
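For reference, F2FS's node address table (NAT) is an indirection map from stable node IDs to physical locations, which is what stops a relocated leaf from recursively dirtying every ancestor (the wandering-tree problem). A toy illustration of the idea, not F2FS's actual format:

```cpp
#include <cstdint>
#include <unordered_map>

// Tree nodes reference each other by stable node id; only this table
// maps ids to physical addresses. Relocating a node (e.g. for zone
// cleaning on ZNS) updates one table entry instead of rewriting the
// whole root-to-leaf path.
struct node_address_table_t {
  std::unordered_map<uint64_t, uint64_t> nat; // node id -> physical addr

  void relocate(uint64_t node_id, uint64_t new_paddr) {
    nat[node_id] = new_paddr; // parents, which hold ids, stay untouched
  }
};
```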
C: First, I'm actually curious; I'm gonna get some hardware in from, I guess I can't talk about that, which will allow us to actually introspect the media-level write amplification below the FTL, and I'm actually curious as to whether doing it as the copy-on-write tree reduces write amplification compared to a table. So I want to compare them both anyway. Second, for the ZNS devices we actually have to have the tree, so...
C: ...which is more efficient? I don't know; it could be either. So I sort of want to build both versions anyway, and we have to have the wandering-tree version for the ZNS, at least for bare ZNS drives without some kind of a front end.
C: Without random mutations to the table, it might dramatically reduce the amount of internal wear leveling the drive has to do, and that might actually be better than reducing write amplification with the tree write-out approach. There are also some tricks people are planning on doing with the logical addresses themselves. If you think about it, if I make the logical address space pretty big, then...
C: Then, for the tree solution, we could arrange it to be the case that, for instance, onodes and their objects have their logical addresses close together. So if the actual block write-out and the onode updates both happen to be close together in logical address space, then those updates within the LBA tree will tend to happen at the same time and will tend to be captured by the same write. So that alone will buy us...
C: ...a significant reduction in write amplification, I expect. The only way you get one block write-out per write is if nothing else hits that block, right. But if you have several writes that all hit about the same LBA range, they'll tend to get captured by the same block write-out. This is classically why physical locality is good in filesystems.
G: Where the write size is relatively small compared to the minimum block size, if we use this external-memory interval management scheme, maybe we can avoid the write amplification. But currently I think it may not work the way I think it does; I am sharing it anyway, so maybe some other people can take a look.
D: Yeah, last week I identified the root cause of the connection issue and I proposed a two-part fix. The first part is that we need a new load-balancing policy from Seastar, and second, I wrote a new socket test and I've run it for the entire morning, about 13,000 runs, and it doesn't happen again. So I have proposed this PR on GitHub, yeah.
D: Firstly, I want to get consensus that the direction is correct before I start to add this to Seastar, because some concerns were raised that we might not need this, but I still think that it is required for the following implementation of the shared-nothing messenger. So that's a current blocker for me.
D: So the socket accepted from that messenger will always be on that core, even if we are running in a multi-core Seastar application. This is the fix I have implemented. Previously, when listening in a multi-core Seastar application, we have to listen on all cores, and the listener will just select a random core to create the socket, and we have no control over it.
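A sketch of how the proposed policy might look at the call site, assuming it lands as a fixed-CPU load-balancing option on Seastar's listen_options; the exact names here are an assumption based on this discussion, not a confirmed Seastar API:

```cpp
#include <seastar/core/seastar.hh>
#include <seastar/net/api.hh>

// Assumed-API sketch: ask the listener to deliver every accepted
// socket to one designated core, instead of spreading accepted
// connections across random cores, so a shard owns its connections.
seastar::server_socket make_pinned_listener(seastar::socket_address addr,
                                            unsigned cpu) {
  seastar::listen_options lo;
  lo.reuse_address = true;
  lo.set_fixed_cpu(cpu);  // hypothetical: pin accepted sockets to `cpu`
  return seastar::listen(addr, lo);
}
```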
F: Okay, I will take a look at native, but what I want to do is to just ensure that the interface change is limited solely to POSIX. I would expect that this will be the first question from the upstream guys: you have implemented an extension to the entire API; what about the native stack?
F: We are not forced to, yeah. Doesn't the patch just add a new policy? It doesn't mess with anything existing, so I don't think that will be a problem. Ultimately, for multi-shard, that is, Crimson, we will still work to have a shared-nothing messenger; it will boil down to the extension of the protocol, yeah.