From YouTube: 2019-04-02:: Crimson SeaStor OSD Weekly Meeting
A: So we have a full quorum here. Let's start with myself. Last week I tried to bench my crimson OSD and to compare its performance with that of the classical OSD. The tests I ran show that the performance is roughly the same, but we need more insight, more insight into the test results. I'm also reviewing things up here, adding the messenger support to the crimson messenger.
C: I was benchmarking crimson using small chunks. First of all, I was able to reproduce your results using bigger chunks: for four-megabyte segments in the rados bench test, for big chunks, we are slower, significantly slower. But this is not the most important case for the comparison; I guess we are focused mostly on small 4k random reads, and there, well, we are pretty good. In short, that's it.
C: Those results are very, very, very preliminary. They need to be confirmed; they must be repeated by someone else. But on this setup I'm getting around 50% more IOPS with crimson, for 60% of the CPU, in comparison with the classical OSD. I posted a link to the gist with snippets and some profiling I made for the crimson OSD.
C: To verify, I would advise, I would advise building the project with frame-pointer support. I was trying to profile with DWARF or with the last branch record of the PMU, but, well, the stacks are so big that this doesn't really work; I had to rebuild the entire project with the frame pointer enabled. There is a snippet.
E: If I understand the magic correctly, the macro used for logging in the classic OSD avoids evaluating the stream operator sequence in the case where logging is disabled, right? It's got a little if block compiled in there. On the SeaStar side, though, I'm worried about this pg_state_string call; I suspect that's getting evaluated even when logging is disabled, right?
E: Ordinarily SeaStar bypasses that logic: instead of doing preprocessing and passing in a string, you pass it a thing with the stream operator defined on it, which you can either invoke or not. Because, obviously, the reference passing is free, but the ostream operator call is not. So for pg_state_string, I suspect we need to rephrase that into something else, or just comment it out if we think it's trouble for performance reasons. That would probably be...
C: Because it's very early, it's extremely early. Logging, long term, sure, we need to fix that, but in the short term I would prefer to focus more on verifying whether we are actually doing the reads correctly. Just to ensure we don't have a bug that causes a huge performance improvement at the cost of not doing the real job, at the cost of, well, avoiding doing the real job.
C: The share of this is quite small, and even in the case of the classical OSD, well, even there it's not big, not big, just a couple of percent being spent, being burnt, on the memstore. Most of the cycles are just burnt for orchestration, not the real work.
E: So the classical OSD burns a bunch of CPU checking stuff, like: is this object missing, is the object recovered, is the object recovering, do I need to do cache stuff to it, all that stuff, right? None of that is implemented in crimson yet, though, okay? The read path is essentially just a direct call right into CyanStore, or am I reading this correctly? Yeah.
E: I don't think that it matters. It has to track a bunch of stuff to confirm that for no good reason, okay, which is exactly what we need to avoid doing in crimson. I'm just pointing out that we haven't actually written the code that does that yet. Not that that's a bad thing for the test; I'm just kind of pointing out that there isn't really anything in between the messenger and CyanStore to run right now.
E: Another way to put it is: how far off do we think classic is from optimal? Or, to put it another way: how far off is this simple implementation, even though it's missing stuff, from the hardware bound, right? So if we're close to the hardware bound, we've done our job so far. Then, as we add stuff, we find out how much that stuff costs and whether we can mitigate it. I don't think we actually need to find out whether it's a fair test.
C: The classical OSD has a lot, okay, does a lot of checks, but in a very simple case, like a read over a replicated backend, the checks on this path are pretty limited. I was following the path in the classical OSD, and if it gets to something more complicated, the number of checks just explodes, but for the simple cases it's pretty limited. Maybe the extra job we will...
E: I mean, you're right, you're right. I'm just pointing out that it isn't actually that important to be fair to the classic OSD. We just need to make sure that crimson is roughly as fast as the messenger at this point. That's what it should be: it shouldn't really be any slower than the message handling plus CyanStore.
E: Yeah, okay. So PeeringState compiles with no references to PG now, so that's good. The next step is that there's some stuff that calls back in that I need to clean up; namely, I need the code that actually, like, drives the state machine to live in the state machine. But that should be comparatively easy, though. With luck, I'll be able to start testing it later this week on classic and then push it up here. The idea here is to, like, use this code in classic, so that we don't have two different implementations.
E: Good, I don't think I'm adding anything significant over that, so that should be all right. Then I did notice that we're kind of duplicating the existing OSD structure: like, there's a PGBackend and a ReplicatedBackend. I don't know if this is obvious, but I wanted to point out that we don't want to duplicate the structure that the OSD currently uses.
E: It was the result of a sequence of really poor design decisions, forced by decisions that had been made in the past. The PGBackend-like division in particular: that interface is really hard to use. Like, it's horrible. If you go read the code, it is not helpful to understanding how PGBackend interacts with ReplicatedBackend, with PrimaryLogPG, or with PG.
E: It's the best I could do to avoid having to write the code that knows how to process log entries twice, because there's already code that knows how to do that up in PG, but there was absolutely no way to, like... In fact, the refactor I'm doing now is probably what I should have done, but I didn't, and that's how we wound up with PGBackend. So we don't want to duplicate that mistake this time. No.
D: Okay, so last week I implemented a tool to compare the crimson messenger and the async messenger, and I also sent out an email with the results I have. So I think it can be a good reference to show how much performance gain we can have, at least at the messenger level. So we can have the 4k...
D: Okay, 4k messages will be like forty percent better, but for the large chunks, hey, it doesn't seem so: the crimson messenger is much, much slower in this case. So maybe there would be some other reason why, because the crimson OSD appears to be much slower in this case too. And I think this tool is very easy to use, so the data can be reproduced very easily, okay. The second thing is, I'm addressing the review from Kefu on the protocol.
B: I think you can see the two test cases. In the first, the client side is the crimson messenger, and on the server side we compared the async messenger and the crimson messenger. If the client side is the crimson messenger, then with a crimson client and a crimson server we got better performance than with a crimson client and an async server. So with the client side crimson, and the server side switched between async and crimson, the crimson server case gets improved performance. And then the other side, the other, another test case...
B: In the other test case the client side is the async messenger. It seems in some cases the crimson server can provide better performance than the async server, and in some cases the async server can provide better performance than the crimson server. So if the client side is the async messenger, it's not strongly shown that the crimson messenger works better.