From YouTube: 2019-10-15 :: Crimson SeaStor OSD Weekly Meeting
B
So with one OSD, right, the pool number is equal to 2; in most of the cases it is the same as the default. The option set by default means that the async messenger, as it's a shared number... the async messenger has three threads, versus the shard number and thread number per shard. The total is, maybe... maybe eight, I forgot. So that thread number, it's bigger than our crimson's.
C
Specifically, I mentioned that for every one of these files we want to introduce, like, one specific macro header that gets defined one way or the other based on some compiler symbol. That will make it so that most of the files require really only a one-line change, and they don't need to be changed otherwise.
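The one-header approach described here can be sketched roughly like this (a minimal sketch; `WITH_SEASTAR`, `dummy_mutex`, and `project_mutex` are illustrative names, not the actual Ceph identifiers):

```cpp
#include <cassert>
#include <mutex>

// One shared header chooses the lock type from a compiler symbol,
// so each consuming file only needs a one-line include change.
#ifdef WITH_SEASTAR
// A crimson reactor runs single-threaded per shard, so locks can be no-ops.
struct dummy_mutex {
  void lock() {}
  void unlock() {}
  bool try_lock() { return true; }
};
using project_mutex = dummy_mutex;
#else
// The classic, multi-threaded build keeps a real mutex.
using project_mutex = std::mutex;
#endif

// Call sites look identical in both builds.
inline bool touch_under_lock(project_mutex& m) {
  std::lock_guard<project_mutex> guard(m);
  return true;
}
```

Whichever branch the compiler symbol selects, the rest of the code compiles unchanged against `project_mutex`.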
D
Here is the first one: it's about testing with three OSDs in a cluster. After comparing to the previous runs we made, we were a bit afraid that there is a performance regression in crimson, because we expected to see around sixty thousand IOPS, but the document shows there is 75% of that. But it was with three OSDs in the cluster; it's still different from the previous procedure.
D
That was about a single OSD and a single client, so I retested this scenario and it turned out that in crimson there is actually no regression at all: we can still see 60,000 IOPS. But in comparison to both the three-OSD runs and the previous, very old runs, we can see that classical OSD got a significant speedup; in the previous single-OSD runs, classical needed around 200 thousand instructions per IO.
D
However, it has changed for the one-OSD scenario; it has changed recently. I made a bisect and it turned out it's related to your mutex rework, and I didn't know the reason. I have a hypothesis that the change switched the default lockdep setting. It turns out that previously, even on production, on release builds, we had lockdep available, and vstart turns it on by default.
D
That's not the case anymore; in crimson, after merging that change, even now vstart starts without the nolockdep option. This still sets the lockdep option, it turns on lockdep, but on production builds it has no effect, because there the mutex is perfectly unaware of lockdep support.
C
I think that could be true too, but let me put it this way: it'll be easier to explain to management if we have a number that says crimson is twice as fast. Okay, okay, it's also a genuinely useful metric; like, they're both genuinely useful. Knowing that classic uses as many cores as it needs to, what's the difference in cycles, is one useful thing; and secondly, if we really do force them to the same number of cores, the absolute difference in speed is also interesting.
D
But I asked Mark Nelson to perform some checks on classical, some testing of classical, to determine the cycles per IO in both the MemStore and BlueStore configurations, and it seems that most of the cycles we are burning in classical, the worst burning, is located in the object store implementation.
D
I'm pretty sure. To make things sure, I selected random read for testing and put just a debug stdout printing to ensure the path is not taken, and I also put in the error handling path; as I hinted it as noinline and cold, it should be put in a different section, after all, as cold code.
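The `noinline`/`cold` hint mentioned here looks roughly like the following (a sketch with hypothetical function names; GCC moves `cold`-attributed code into a separate section such as `.text.unlikely`, keeping it out of the hot instruction stream):

```cpp
#include <cassert>

// The rarely-taken error path is marked noinline + cold so the
// compiler keeps it out of the hot path's instruction stream.
[[gnu::noinline, gnu::cold]]
static int handle_read_error(int err) {
  return -err;
}

static int do_read(bool ok) {
  if (ok) {
    return 0;                    // hot path stays compact
  }
  return handle_read_error(5);   // e.g. EIO
}
```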
C
No, what I'm saying is: if we're going to pay the same penalty, or almost the same penalty, even for just handling it raw, using futures and manually doing dynamic casting on the exception type, then we should use the one that's less likely to be wrong. Because in that case, what we're mostly measuring is that the existing code is wrong, right? Because it doesn't handle the errors, and we actually have to handle them. So how hard would it be to write code that does handle the errors but doesn't use the errorator?
C
So what I'm worried about is: what do we think is the fastest way to correctly handle these non-success cases, and how does that compare to the errorated version? Because what we're talking about, ultimately, here is: if BlueStore or SeaStore throws an ENOENT on a read, how does the layer above it, that's actually trying to build an object context, translate that to the layer above that, that's actually trying to do a read on an object, right? Because it's not necessarily an ENOENT back to the user.
C
It might be something like you need to do a cache operation or whatever. So if currently we aren't doing any of those things, or even sending the error case back to the client, then the current code is not correct, and the speed of that is only somewhat interesting. What we really care about is the speed of code that correctly handles all of the cases we need to handle.
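The contrast under discussion, raw exceptions versus an interface that forces callers to handle each error, might be sketched like this (a hedged illustration only; this is not the actual `crimson::errorator` API, and the names are made up):

```cpp
#include <cassert>
#include <cerrno>
#include <system_error>
#include <variant>

struct object_not_found {};  // the layer-specific meaning of ENOENT

using read_result = std::variant<int /*bytes read*/, object_not_found>;

// Style 1: raw exception; callers must remember to catch and
// inspect the exception type themselves.
int store_read_throwing(bool exists) {
  if (!exists)
    throw std::system_error(ENOENT, std::generic_category());
  return 4096;
}

// Style 2: a typed result; the compiler forces every caller to
// branch on the non-success case before using the value.
read_result store_read_typed(bool exists) {
  if (!exists) return object_not_found{};
  return 4096;
}
```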
A
For the happy path, if we have, like, a 4% penalty caused by the errorator, isn't that rather a good bargain?
C
Well, I also don't want to hear things like "focus on the happy path." Remember, eventually we have to deliver a functioning, working OSD, and 95% of that is the non-happy path. But I agree that it's not worth very much if we gave up four percent of our throughput for static checking; that's, that's a lot.
C
Radek, I linked my current progress on the crimson object context work a couple of comments up. What I expect is for it to be compiling and working sometime midday tomorrow; it almost compiles and works today, but I did a bunch of stuff this afternoon that I haven't finished stabilizing. Yeah, the long and the short of it is I've introduced an intrusive LRU that avoids allocations in the creation, other than the object context structure itself.
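The intrusive idea, as a minimal sketch: the list links live inside the cached object itself, so inserting, promoting, or evicting never allocates anything beyond the object context structure. All names here are illustrative, not crimson's actual types:

```cpp
#include <cassert>

struct ObjectContext {
  int oid = 0;
  // Intrusive links embedded directly in the object: no separate
  // list node has to be allocated for LRU bookkeeping.
  ObjectContext* prev = nullptr;
  ObjectContext* next = nullptr;
};

struct IntrusiveLRU {
  ObjectContext* head = nullptr;  // most recently used
  ObjectContext* tail = nullptr;  // next eviction candidate

  void unlink(ObjectContext* c) {
    if (c->prev) c->prev->next = c->next;
    else if (head == c) head = c->next;
    if (c->next) c->next->prev = c->prev;
    else if (tail == c) tail = c->prev;
    c->prev = c->next = nullptr;
  }

  // Promote to the front on every access; no allocation involved.
  void touch(ObjectContext* c) {
    unlink(c);
    c->next = head;
    if (head) head->prev = c;
    head = c;
    if (!tail) tail = c;
  }
};
```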
C
It also introduces locking machinery, though. If you've read the PrimaryLogPG code, there's a point where it goes through the MOSDOp and figures out what requirements the op has: does it do reads, does it do writes, does it do reads and writes, in which case it needs an exclusive lock. And it integrates a blocking lock structure, so that when an op tries to do an operation on an object, it will potentially block if need be, you know, scheduling the future and getting woken up later.
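The classification step described here, walking the op's sub-operations to decide which lock it needs, could look roughly like this (hypothetical enums and names; the real code inspects an MOSDOp's ops vector):

```cpp
#include <cassert>
#include <vector>

enum class OpFlag { Read, Write };
enum class LockType { None, Shared, Exclusive };

// Walk the sub-operations and pick the weakest safe lock:
// any write forces an exclusive lock, pure reads can share it.
LockType required_lock(const std::vector<OpFlag>& subops) {
  bool reads = false, writes = false;
  for (OpFlag f : subops) {
    if (f == OpFlag::Read) reads = true;
    if (f == OpFlag::Write) writes = true;
  }
  if (writes) return LockType::Exclusive;
  if (reads) return LockType::Shared;
  return LockType::None;
}
```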
E
Yeah, but in the handshake the async messenger will send its addresses back to us, to the crimson messengers. So that's why there are a lot of TODOs in the Crimson messenger, and I don't know if there's a way to handle that.
C
Oh, I forgot to mention: I added some information on ZNS SSDs versus persistent memory to the seastore doc. But also, Josh pointed out something interesting today that I think we should think about: compression probably isn't that hard to shoehorn in later, but encryption is going to be a weird question, because encrypting persistent memory is sort of exactly counter to our goals, since we want to be able to return pointers directly into it. So that is an interesting thing we want to think about a little bit.
A
For the store, if there is anything it does that's critical and important we need to look at, then I can reprioritize my tasks. Currently what I have in my mind is to write the RocksDB key-value store part as it arises, but these can be... the priorities can be reworked, so I can rethink the priority, I think.
C
I think, I think that the radix tree design family is genuinely interesting. Yeah, we'll want to find a way to work out whether we can get a lower per-insertion cost compared to B-trees. That said, B-trees also have their own advantages, though, so I think that's an interesting tension we want to look at. Sorry.
C
Let's say... the root node has the entries corresponding to the first byte, so 256 or whatever possible slots, and so on down the tree. Obviously, for sparse keys this doesn't work particularly well without some tricks, but the paper has some tricks. The advantage is that you can do insertions into persistent memory with a sequence of 8-byte atomic writes, safely and atomically, even on crashing systems. It doesn't require rebalancing operations.
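A minimal in-memory sketch of the byte-indexed structure described above: each node has 256 child slots indexed by the next key byte, and an insert only ever publishes a new child with a single pointer store (the 8-byte atomic write relevant for persistent memory). This deliberately ignores the paper's compression tricks for sparse keys:

```cpp
#include <array>
#include <cassert>
#include <memory>
#include <optional>
#include <string>

struct Node {
  std::array<std::unique_ptr<Node>, 256> child{};  // one slot per byte value
  std::optional<int> value;
};

void insert(Node& root, const std::string& key, int v) {
  Node* n = &root;
  for (unsigned char b : key) {
    if (!n->child[b]) n->child[b] = std::make_unique<Node>();
    n = n->child[b].get();  // descend one byte per level
  }
  n->value = v;  // no rebalancing: only local pointer/value stores
}

std::optional<int> find(const Node& root, const std::string& key) {
  const Node* n = &root;
  for (unsigned char b : key) {
    if (!n->child[b]) return std::nullopt;
    n = n->child[b].get();
  }
  return n->value;
}
```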