From YouTube: 2019-12-03 :: Crimson SeaStore OSD Weekly Meeting
B
And I wouldn't worry too much about the efficiency of the lookup table, because it's never very big. The only entity that shows up in there a lot is the PGs themselves, and there are just never that many of them on a particular shard: order of tens to hundreds, never thousands.
D
That's it, wait: I was reviewing Sam's PR to add the object context registry to crimson, and was also struggling with the B-tree implementation. The goal is to try to carve out the interface for interacting with the allocator and the journaling, and also to keep the B-tree as simple as possible and try to avoid premature optimization.
E
For the write test, I only did two cases for the number of jobs. If it equals one, crimson seems better than the classic OSD in most cases: if the block size is bigger, say equal to 4 megabytes, it seems better. But if the number of jobs is set to 16, crimson is worse than the classic OSD.
For the read test, even with the number of jobs equal to 16, crimson is better, though it seems a weird result, right: for writes, a smaller number of jobs is better, and for reads, a bigger number of jobs is better. And for the read test I can't set a big block size; it only runs when it's 32 kilobytes. If I set 64 kilobytes, the fio reader cannot finish.
So I have found another problem; I'm not sure if you have tried it. When I build with this build type, RelWithDebInfo, release with debug information, to compile the crimson OSD, when I start the OSD it gets stuck: I got a seastar stall in memory.cc. There is an assert that happens when starting the crimson OSD.
B
Just to expand a little bit on why I'm worried about the debug build thing: all those seastar futures that are being passed around, many of those benefit greatly from inlining, like hugely; they're incredibly dependent on it. If all of those function calls actually happen, seastar is way slower than it should be. Does that make sense? So crimson, seastar, depends really, really heavily on function inlining.
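The dependence on inlining comes from how continuation chains are built. A toy stand-in (not seastar's actual future API; everything here is invented for illustration) makes the shape visible: each then() wraps a lambda in another call layer, and only when the optimizer inlines the whole chain does it collapse to plain arithmetic; in a debug build every link stays a real call.

```cpp
#include <utility>

// Toy ready-future, NOT seastar's real API: each then() immediately
// invokes the continuation. With optimization the compiler can inline
// the whole chain down to plain arithmetic; in a debug build every
// then() and lambda stays a genuine function call, which is why a
// chain-heavy codebase slows down so much without inlining.
template <typename T>
struct toy_future {
  T value;
  template <typename Func>
  auto then(Func&& f) -> toy_future<decltype(f(std::declval<T>()))> {
    return { std::forward<Func>(f)(std::move(value)) };
  }
  T get() && { return std::move(value); }
};

template <typename T>
toy_future<T> make_ready_toy_future(T v) { return { std::move(v) }; }

// A small continuation chain: (start + 1) * 2.
inline int chained_add(int start) {
  return make_ready_toy_future(start)
      .then([](int v) { return v + 1; })
      .then([](int v) { return v * 2; })
      .get();
}
```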
F
Okay, now understood. Okay, anyway, I think we agree that going with the debug build is especially painful for seastar-based applications. Basically, it boils down to the fact that it goes to the reactor every single time: every single element of a continuation chain goes through the reactor, since need_preempt is always true. It's a painful thing.
A
Yes, it's going pretty good. I'm still working on CBT, as we discussed last week. Following last week, I told you that me and Radek were trying to come up with a solution to basically merge results from rados bench and fio, to get, I guess, more complete metrics, and that's what I've been working on the past week. Tomorrow I should be over the testing of that, because me and Radek already came up with a solution, I already wrote the code, and yeah, tomorrow I will be at the office.
C
Thanks again, thanks for the comments. Like we discussed at the beginning of this meeting, before everybody started, I will make a simplification change, remove all the gates from the code, and then we will publish a new version. Now that this is more or less done, I hope to get it back to mainline; that's the idea. And one more comment.
Coming soon, and probably a fix; I will try to see if there is anything to fix there. And, I think I talked about it once, but could we get a clang-format... could we agree on a clang-format file for crimson? Then I won't have to do all this counting, counting blanks at the beginning of lines, etc. It will be much easier.
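For reference, a .clang-format file is just a small YAML file checked into the tree that clang-format picks up automatically. The options below are purely an illustrative starting point, not settings anyone in the meeting agreed on:

```yaml
# Illustrative starting point only; the actual options would need agreement.
BasedOnStyle: Google
IndentWidth: 2
ContinuationIndentWidth: 2
ColumnLimit: 80
AlignAfterOpenBracket: Align
AllowShortFunctionsOnASingleLine: Empty
PointerAlignment: Left
SortIncludes: false
```

With such a file in place, `clang-format -i <file>` (or editor integration) handles the indentation counting automatically.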
B
The object context patch, meaning, though, Radek, you should be able, you know, when you're back, to rebase that patch on the one you're just working on. I am writing a much more detailed design doc, more or less starting from Sage's ideas. This isn't about the metadata trees; this is about layout. So the fundamental problem for all, well, really all flash-based systems ever, including FTL systems, is the need to move data around while dealing with the fact that something somewhere refers to that physical address, right.
So if we need to garbage collect a stream and move blocks over to another stream, then everything that currently refers to that stream, or to that segment, needs to be updated, right. The conventional way of doing this is dealing with an offset table somewhere, but that's more metadata, more writes and more write amplification. So I'm trying to figure out if I can use something analogous to one of several log-based file system schemes.
Write Anywhere File Layout, good old-fashioned WAFL, where, when you do one of these moves, you also cascade that change up the metadata tree. It seems like it should be more expensive, but that cascade isn't necessarily more expensive than maintaining the offset tree. And btrfs works this way, if you've read the paper, for what it does when it needs to do rewrites. So anyway, I'm writing; I could dump what I currently have on you guys, however evil it's looking.
B
And then there's the fact: we're not writing a file system, right? No. So we have a bunch of big fat advantages over a file system. We do not need to maintain any ability to do atomic renames across trees, for one thing. Oh, I can't actually share this; hang on a second. I do want to move on to that part. I can try to do this in the background and continue talking.
Okay, that document's not gonna live forever; it's just a draft, so if you want to make comments, by all means. So I think one of the most important questions to identify early is the relationship between the cache and the way we do delta records on disk. The scheme I'm leaning towards is pretty normal, in that every logical block is either clean, it has a location on disk, or it is dirty, it is located ephemerally in cache, and replaying the journal from the current segment through to the end will result in the same ephemeral copy of that block, right. In other words, you can replay the journal forward to get the same cache state. So on journal checkpoint, you then need to write out the dirty segments for any dirty blocks you have to rewrite out. But the problem is that you then need to propagate those changes up the metadata tree, right, because now the location of that segment changed. So my thinking is that we just do it recursively, right.
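The replay invariant described here can be sketched with a trivial model, with all names invented for illustration: the cache is a map from logical block to bytes, a journal record is a byte-range delta, and replaying the journal from scratch must reproduce the live cache state exactly.

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the replay invariant, not actual crimson types:
// a delta is a byte-range update against a logical block, the cache maps
// block id -> bytes, and replaying the journal forward from a checkpoint
// must rebuild the same ephemeral (dirty) cache state.
struct journal_delta {
  uint64_t block;
  size_t offset;
  std::string bytes;   // payload to splice in at offset
};

using cache_t = std::map<uint64_t, std::string>;

inline void apply_delta(cache_t& cache, const journal_delta& d) {
  std::string& blk = cache[d.block];
  if (blk.size() < d.offset + d.bytes.size())
    blk.resize(d.offset + d.bytes.size(), '\0');
  blk.replace(d.offset, d.bytes.size(), d.bytes);
}

// Replaying the whole journal against an empty cache yields the same
// state as the live cache that applied each delta as it was journaled.
inline cache_t replay(const std::vector<journal_delta>& journal) {
  cache_t cache;
  for (const auto& d : journal) apply_delta(cache, d);
  return cache;
}
```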
So, like, when we load something into the cache and begin modifying it, we're writing out to disk deltas against that block. Delta could mean one of several different things, depending on whether it's a data block or one of any number of metadata structures. If it's a b-tree, for instance, it's like an ordered set of keys and values, so a delta looks like inserting a key into a block. For a regular data block, it looks like just byte-range updates, right. That makes sense, though they're different, which suggests that blocks are typed and deltas can be applied differently depending on the type, which is straightforward, nothing complicated. Yep, it really is. The trick is that after you've loaded one of these blocks into cache and begun mutating it, it will eventually need to be written back out, either because of cache pressure or due to a journal segment. So that typed thing will also have some kind of a dependency tree in memory.
It's not different from a future; I just don't think I'm gonna use one. Or, like, this isn't a reactor thing; we're not doing this in a deferred way. Like, yeah, futures will be involved; they're just not the way I'm going to choose to represent that part, mostly because all of this stuff needs to fit in a bounded amount of cache.
B
We
have
to
be
able
to
share
cash
pressure
between
this
and
other
things
that
have
to
be
cashed
elsewhere
in
the
system,
so
things
that
occupy
a
lot
of
space
like
this
need
to
have
a
pretty
pretty
specific
structure
that
if
we
want
to
do
like
linear
allocations
and
bump
allocations
within
this
space
and
control,
how
much
of
the
cash
is
used
just
by
doing
direct
flushes
back
out
anyway.
So
most
of
like
that
document
is
extremely
rough
and
ultimately,
once
I
get
into
a
slightly
less
crazy
state.
B
I'm
gonna
submit
a
PR,
replacing
the
existing
see
sort
out
rst.
So
we
can
review
it
and
you
know
have
opinions,
but
does
that
seem
like
a
sane
and
part
of
this
is
sort
of
enumerated,
the
components
which
will
map
onto
interfaces
inside
of
seized
or
post
role,
map
onto
files
and
classes.
So
we
can
actually
write
code
for
that'll,
make
it
easier
to
define
what
we're
all
working
on
then,
where
things
need
to
fit.
So just for, like, how are we thinking about it: a b-tree, like your b-tree implementation, will consume this block interface, where it says, give me a block, and you get a buffer that you get to apply deltas to. The buffer manager will maintain the in-memory copy of that block, and eventually, when it needs to be flushed out, you will have provided a bit of code that is sufficient for updating the parent pointer in the block above it and dirtying it up as needed.
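One possible shape for that consumer-provided fixup code, with all names hypothetical: the buffer manager owns the cached blocks, and at flush time it assigns the block a fresh on-disk address (log-structured, never in place) and runs the consumer's callback, which updates the parent's pointer and dirties the parent in turn.

```cpp
#include <cstdint>
#include <functional>
#include <map>

// Hypothetical sketch of the block interface described above. A consumer
// pins a block, mutates the in-memory copy, and registers a small stub
// that the buffer manager runs at flush time to fix up the parent's
// pointer to the block's new on-disk location.
struct cached_block {
  uint64_t disk_addr = 0;
  bool dirty = false;
  // consumer-provided fixup: given the new address, update the parent
  std::function<void(uint64_t new_addr)> on_relocate;
};

struct buffer_manager {
  std::map<uint64_t, cached_block> blocks;  // keyed by logical id
  uint64_t next_addr = 100;                 // toy log-structured allocator

  cached_block& pin(uint64_t id) { return blocks[id]; }

  // Write-back: assign a fresh address and let the consumer's stub
  // propagate the change upward (dirtying the parent as needed).
  void flush(uint64_t id) {
    auto& b = blocks.at(id);
    if (!b.dirty) return;
    b.disk_addr = next_addr++;
    b.dirty = false;
    if (b.on_relocate) b.on_relocate(b.disk_addr);
  }
};
```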
So all of the different consumers of this interface will get to provide those little bits of stub code to control how the cache flush-out works. I think that should ease some of the pain that systems like this usually get into as they get bigger. Early days, though. I'm gonna focus next on reading all of the FTL and log-structured file system papers I can find. I've read them; I've just maybe forgotten some of them, so I'll go back and reread some of the seminal papers and some of the more recent ones from FAST. This does have certain elements in common with the way hardware manufacturers have been building the on-disk flash translation layers. The part I haven't really thought about much at all is how we're gonna, how to maintain the metadata structure that lets us figure out which stream we should clean next, but I'll get it up, I guess.
Because there are, like, a bunch of interesting trade-offs. When you're doing garbage collection, you could choose to make a point of grabbing, like, a segment that maximizes the number of big write-outs you get to do from the same few objects, which lets you sort of combine compaction and cleaning. But, like, your ability to do that is constrained by how you chose to write the metadata down. So there are interesting things we can do there.
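As a point of comparison for those trade-offs, the simplest cleaning policy is purely greedy: pick the segment with the least live data, since it frees the most space per byte copied. A minimal sketch with invented names, deliberately ignoring the compaction-aware weighting described above:

```cpp
#include <cstddef>
#include <vector>

// Toy greedy cleaner, illustration only: the segment with the least live
// data is the cheapest to evacuate. A compaction-aware policy would
// instead also weight segments whose live extents belong to the same few
// objects, so they can be rewritten together.
struct segment_info {
  size_t live_bytes;
  size_t size;
};

// Return the index of the segment to clean next, or -1 if none is worth
// cleaning (a fully live segment yields no free space).
inline int pick_segment_greedy(const std::vector<segment_info>& segs) {
  int best = -1;
  for (size_t i = 0; i < segs.size(); ++i) {
    if (segs[i].live_bytes >= segs[i].size) continue;  // nothing to reclaim
    if (best < 0 || segs[i].live_bytes < segs[best].live_bytes)
      best = static_cast<int>(i);
  }
  return best;
}
```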