From YouTube: 2019-06-04 :: Crimson SeaStor OSD Weekly Meeting
A
So we have a quorum here. Last week I was working on preparing the object class support for crimson-osd. Basically, it's nothing but refactoring: replacing ceph::mutex with a mutex which will be disabled when WITH_SEASTAR is defined. And because Alfredo is on PTO, I didn't get much progress on integrating the performance tests into CI. So I will try to catch up with him and start working on the Seastar stuff.
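A minimal sketch of the mutex swap A describes: under WITH_SEASTAR the lock compiles down to a no-op, since the Seastar reactor runs single-threaded per shard. The names (`maybe_mutex`, `bump`) are illustrative, not the actual Ceph types:

```cpp
#include <mutex>

// Hypothetical sketch: when WITH_SEASTAR is defined, the "mutex" is a
// no-op, because the Seastar reactor needs no locking on its own shard.
#ifdef WITH_SEASTAR
struct maybe_mutex {
  void lock() {}
  void unlock() {}
  bool try_lock() { return true; }
};
#else
using maybe_mutex = std::mutex;   // classic build: a real mutex
#endif

int shared_counter = 0;

int bump() {
  maybe_mutex m;
  std::lock_guard<maybe_mutex> g(m);  // same call sites work either way
  return ++shared_counter;
}
```

The point is that call sites keep the same `lock_guard` shape in both builds, so the refactor touches the type alias, not the callers.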
B
A
I don't think I will... I will try to run the object class in alien threads. That's probably... I will, because the object class is a very thin wrapper around the features offered by the OSD and the PG, right. So.
A
Blocking up the entire reactor thread... I agree. So I think my plan will be to rewrite all of them, or actually the subset of them which supports rbd, with futures: to define the function type with a counterpart function type that returns a future. Yeah, nothing surprising there.
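A's plan, as I read it, pairs each blocking handler type with a future-returning counterpart. A sketch with `std::future` standing in for Seastar futures; the handler names and signatures are illustrative, not the real cls API:

```cpp
#include <future>
#include <string>

// Hypothetical: one synchronous handler type, plus a counterpart type
// that returns a future instead of a plain value.
using sync_handler_t  = int (*)(const std::string& in, std::string& out);
using async_handler_t = std::future<int> (*)(const std::string& in,
                                             std::string& out);

// A trivial synchronous handler...
int echo_sync(const std::string& in, std::string& out) {
  out = in;
  return 0;
}

// ...and its future-returning counterpart wrapping the same body.
std::future<int> echo_async(const std::string& in, std::string& out) {
  return std::async(std::launch::deferred,
                    [&in, &out] { return echo_sync(in, out); });
}
```

Rewriting only the rbd-relevant subset this way keeps the reactor thread unblocked without porting every object class at once.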
C
Yeah,
I'm
working
on
the
80
blue
star,
so
just
provided
the
patch
for
the
encapsulation
virtualized
object,
object
style.
So
now
I'm
working
on
the
wheelies
eating
stock
and
I
will
skip
next,
maybe
next
fall
weekly
meeting.
I
will
take
a
sabbatical.
A
C
A
C
A
F
I was, and still am, working on the concept of the input buffer factory. Last week I went through the implementation... I was refining the requirements. Basically, I went through the documentation of both SPDK's bdev and SPDK's NVMe drivers to figure out the real needs for contiguity. First of all, it appears that bdev is a much, much higher-level thing than the NVMe driver of SPDK.
F
It
allows
you
not
only
to
consume
nvme,
but
also
linux,
aio
or
even
save
rpd
as
well.
So
I
bet
we
will
already
want
to
go
with
with
a
nvme
driver
which
doesn't
have
a
strong
requirement
on
the
scatter
catalyst.
F
If we take a look at the implementation of memory management in DPDK, it seems that to tolerate arbitrary sizes of segments we will need to provide an SPDK allocator to DPDK's mempool infrastructure, in the glue code between Seastar and DPDK.
F
Also, I was taking a look at... I was trying to refine the io_uring things that Ronan proposed on the last call. The idea is that our interface, our extension, needs to take io_uring into consideration in the future, and it seems it won't be too different. Okay, io_uring offers a scatter-gather list, but it's very, very close to what we have already with vectorized reads, with readv.
F
John has posted a tweet from Yaniv saying that basically it isn't backported today, but we can expect that it will be backported soon.
B
Being
able
to
use,
I
o,
u
ring
would
be
a
benefit
not
having
to
do
user
space.
Networking
and
storage
is
a
big
win,
because
people
can.
B
Ring
when
sorry
I
mean
being
being
able
to
use
efficiently,
kernel,
storage
and
networking
is
nice
for
people.
So
if
we
can
do
that
with
minimal
overhead,
that
would
be
cool.
So
I
totally
get.
F
Sure
this
was
the
last
thing.
What
to
emphasize
is
that
I
think,
will
help
with
the
cisco
overhead,
but
still
you
will
need
to
do
memory
copy
it's
in
contrast
to
dpdk
it
will,
you
will
require,
you
will
be
required
to
do
some
mem
copy
for
the
sake
of
preserving
isolation
between
users,
for
your
user
space
application
and
in
case
of
dpdk,
slash
spdk.
You
are
absolutely
free
of
that,
and
also
there
in
case
of
iou
link,
it's
very
likely
that
you
will
have
a
jump
between
between
cpu
cars.
B
Okay, so I got the wip-crimson-peering thing merged last week... or Monday. The short summary is: you can now bring up more than one OSD. They will do the thing where they peer. They will even create PGs and a pool, if you create a pool after you bring it up. So there's essentially no reason now to bring up a classic OSD and then kill it and bring up a crimson OSD.
B
You
can
just
start,
however
many
crimson
oc's
trying
to
start
notable
things
that
it
does
not
do.
It
does
not
actually
replicate.
I
o.
So
if
you
have
replication
set
to
three
and
you
send
a
right,
it
only
goes
to
the
primary
because
the
right
path
has
been
updated.
Yet
I
would
say
that
that's
probably
the
next
thing
to
do
the
other
next
things
to
do
are
log-based
recovery
and
backfill,
which
means,
if
you
kill
a
crimson
obesity
and
bring
it
back
up.
B
So those three things are the next steps. All of the peering code is there; it's just that the I/O path itself doesn't, for instance, send the messages to the other OSDs to propagate the writes. Because if you go look at classic OSD, that stuff is outside of peering: it's in the replicated backend and the EC backend, because they have their own messages. And similarly with log-based recovery and backfill: that's not in peering; peering simply tells them what to recover.
B
There
are
there's
a
whole
separate
message,
exchange
system
for
actually
implementing
it,
not
overly
complicated.
Admittedly,
but
that's
the
next
thing
I
sent
out
a
pr
request
for
comment
sort
of
deal.
B
You
don't
actually
have
to
look
at
it,
but
I
promised
john
may
I
sent
it
out
last
week
and
then
totally
failed
to
though
I
figured
I'd,
send
it
out
now
I
was
going
to
the
client
out
part
first,
but
when
I
I
got
like
super
frustrated
with
how
I
done
the
message
handling
for
peering,
if
the
handle
pg
create
uploaded
function
was
super
hard
to
debug.
B
So
I
replaced
that
with
something
I
peering
messages
will
also
be
ops
that
also
can
block
and
can
be
dumped
from
the
osd.
So
I
just
started
there
because
whatever
the
next
thing
to
do
is
the
client
I
o
part,
but
if
you
are
going
to
look
at
the
patch,
I
think
the
parts
that
are
worth
looking
at
are
every
ongoing
operation.
The
osd
does
that
it's
like
threadlike,
I
guess
you
could
say
like
processing,
appearing
message
or
processing
an
asynchronous
peering
event
or
processing
a
client
request
or
processing
an
osd
to
osc
replication
message.
B
All
of
the
actual
code
for
it
is
elsewhere.
But
you
can
see
the
sequence
of
operations
any
one
of
these
apps
needs
to
go
through
the
stages
like
osd
operations,
peering
event
dot,
whatever
it's
not
a
huge
patch.
A
lot
of
this
is
going
to
change,
though,
but
I
think
the
parts
that
I,
the
parts
that
I
actually
like
are
the
parts
where
every
operation
is
on
a
list
and
has
a
pointer
to
the
thing:
that's
stopping
it
for
making
progress.
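The tracked-op structure B describes, ops on a list, each pointing at its current blocker, might look roughly like this. All names here are illustrative, not the actual Crimson classes:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of "every operation is on a list and has a pointer
// to the thing that's stopping it from making progress".
struct Blocker {
  std::string name;                      // e.g. "wait_for_peered"
};

struct Op {
  std::string desc;
  const Blocker* blocked_by = nullptr;   // null => currently making progress
};

struct OpRegistry {
  std::vector<Op*> ops;                  // the list of ongoing operations
  void track(Op* op) { ops.push_back(op); }
  // Dump every blocked op together with the name of its blocker,
  // which is what lets the OSD answer "who is stuck on what?".
  std::vector<std::string> dump_blocked() const {
    std::vector<std::string> out;
    for (const Op* op : ops)
      if (op->blocked_by)
        out.push_back(op->desc + " <- " + op->blocked_by->name);
    return out;
  }
};
```

The payoff is debuggability: instead of a hard-to-trace callback chain, a stuck peering message shows up in a dump with the stage it is waiting on.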
B
A
So Sam, do you think we can start working on the replica support, namely, to send the write request to the replica?
A
B
Next step: have a look at the way I did the osd_operations part, because I really do like the way I extracted how the peering message works through the different ways it can be stopped. That's what I want to do with the client I/O; it'll be kind of the same, right? The first thing is to take the two things it does now and extract that into the same form. If you give me a few days, I'll probably get to it.
B
If not, you can talk to me and I'll help you through it either way, or you can just start at the bottom end and... yeah, I mean, however you want to do it. If you take a look, you'll probably have a pretty good idea of what I'm doing and whether you like it or not.
A
B
A
B
So if you look at classic OSD, there's this thing it does where, when it's building up the operation that it sends to the... internally, when it's translating the client operation into a sequence of internal operations, for purposes of building snapsets and object infos and changing sizes and changing existing stuff, it builds up this structure that's agnostic as to whether it's replicated or erasure-coded. Oh yes, so we could either do that, or just not do that and try not to do anything silly.
B
All we have to do is... if you look at the classic OSD code, there are some things it's careful not to do, as long as we don't... what is it? It tries not to do reads, basically. It's not nearly as important for crimson, because we actually can block, because of futures, right.
B
E
B
So what it actually does is: it builds up an ObjectStore::Transaction in memory, serializes it, copies it, and sends it out three times in three different messages. That's it; there's nothing else it has to do. But for the EC backend, it has to actually send different messages to the different replicas, because they get different bits once it's erasure-coded.
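The replicated-backend flow B describes, serialize the transaction once, then send the same bytes to each replica, can be sketched as follows. The `Transaction` encoding here is a toy stand-in, not the real ObjectStore::Transaction wire format:

```cpp
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for ObjectStore::Transaction: a list of (object, data)
// writes with a trivial serialization.
struct Transaction {
  std::vector<std::pair<std::string, std::string>> writes;
  std::string serialize() const {
    std::string out;
    for (const auto& w : writes)
      out += w.first + "=" + w.second + ";";
    return out;
  }
};

// Replicated backend: serialize once, then copy the identical payload
// into one message per replica. (The EC backend cannot do this, since
// each replica gets different erasure-coded bits.)
std::vector<std::string> make_repop_messages(const Transaction& t,
                                             int replicas) {
  std::string payload = t.serialize();
  return std::vector<std::string>(replicas, payload);
}
```

That identical-payload property is exactly why the replicated path is the simple one: no per-replica introspection of the transaction is needed.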
B
But,
more
importantly,
there
are
rules
about
when
you
can
do
reads
and
writes
relative
to
what's
in
the
cache
on
the
primary,
so
the
back
end
has
to
look
at
the
transaction
being
sent
and
go.
Oh,
I
need
to
put
the
pipeline
into
read,
write
mode
and
I
need
to
block
until
all
the
pending
rights
are
done
on
this
object.
B
Then
I
get
to
send
my
read.
Then
I
get
to
do
the
encoding.
Then
I
get
to
do
the
right
and
send
it
out
right.
So
there's
all
this
extra
sort
of
introspection
it
does
to
not
mess
up
the
on
disk
state.
This
may
or
may
not
be
important,
I'm
just
bringing
it
up
as
kind
of
a
like.
That's
the
only
thing
I'm
worried
about
here
other
than
that
it's
not
real.
It's
not
real
complicated,
especially
if
we
don't
plan
on
supporting.
B
A
A
E
D
Okay, so last week there was a long, long-standing bug in the messenger test, and I found that it's because it is trying to dispatch a message from one core to another to send, and currently we don't need to do that. I just disabled that feature, and the unit test is working now. And the second thing is: I wrote and submitted another PR for Seastar, to support our current requirement for alignment and padding. They already have another design here, but this one...
D
I
think
it
is
also
worth
to
looking
at
because
the
the
similar
idea
is
also
implemented
in
the
async
messenger,
that
in
the
with
the
posix
socket,
it
tends
to
prefer
fetching
for
the
smaller
reads
and
only
do
the
direct
reads
or
the
larger
buffers
and
the
I
think
it's
it's
reasonable,
because
it
assumes
that
the
system
course
is
much
more
expensive
than
than
the
memory
copy.
So
we
can
we
we
need
to
do
prefetch
if
necessary
and.
D
F
I'm
afraid
that's
not
the
case
for
dpdk.
Basically,
we
are
well
calling
read
exactly
for
dpdk.
It's
rated,
no
way.
No,
no,
no
go
away
it's
because
when
you
are
calling
the
rit
exactly,
you
are
saying
to
sister
that
you
want
that
you
that
your
output
will
be
one
single
continuous
buffer,
and
this
is
this-
won't
be
the
case
of
dpdk
in
dpdk.
I
would
expect
a
scatter
catalyst,
because.
F
Is
you
you
have
almost
you
are
almost
guaranteed,
the
network
payout
will
be
will
be
yes,.
D
The interface of the Seastar data_source implementation always returns one contiguous buffer, one at a time, from the get interface. So internally it's a scattered list, but the get returns each of the scattered buffers, one each time, from the get interface. Yes, and...
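The behaviour D describes, an internally scattered list handed out one contiguous chunk per `get()` call, can be sketched like this. The class is an illustrative stand-in, not Seastar's actual `data_source` API:

```cpp
#include <deque>
#include <string>
#include <utility>

// Hypothetical sketch of a data-source whose storage is a scatter list,
// but whose get() hands back exactly one contiguous buffer per call.
class data_source_sketch {
  std::deque<std::string> chunks_;   // internally: a scattered list
public:
  explicit data_source_sketch(std::deque<std::string> chunks)
    : chunks_(std::move(chunks)) {}

  // Each call returns the next contiguous chunk; empty string => EOF.
  std::string get() {
    if (chunks_.empty()) return {};
    std::string c = std::move(chunks_.front());
    chunks_.pop_front();
    return c;
  }
};
```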
F
Well,
the
get
interface
is
the
very
low
level
thing
is
of
system
that
is
not
supposed
to
be
used
by
application.
Application
calls
or
read
exactly
if,
if,
if
it
doesn't
need
to
go
with
scatter
gutter-
or
there
is
a
second
way,
it's
called
the
consume
consume
allows
the
application
to
form
to
retrieve
chunk
by
chunk
from
sister
and
without
specifying
size
or
location
memory,
location
or
anything
like
that
and
basing
on
those
chunks
build
a
scatter
catalyst.
F
And,
finally,
I
believe
we
refer
to
support
dpdk
and
spdk
effectively.
We
will
want
to
switch
from
the
red
exactly
to
consume,
but
I
would
also
would
I
would
want
to
have
an
uniform
interface
that
doesn't
need
to
make
a
crimson
to
distinct
between,
to
be
aware
whether
it
is
using
the
dpdk
based
stack
or
the
posix
stack.
The
kernel
stack
for
a
networking.
F
It
would
be
nice
to
have
one
interface
and
in
the
case
of
dpdk,
basically,
the
interface
will
be
a
scatter
gutter
center.
But
in
the
case
of
the
dtdk,
the
scatter
catalyst
will
likely
have
many
segments,
but
in
case
of
kernel,
networking
the
scatter
gateway
will
be
degraded
because
it
will
usually
it
will
have
only
one
segment.
F
The
segment
that
will
be
that
would
have
been
returned
by
by
route
exactly.
D
Yeah,
so
what
what
I've
suggested
is
to
use
the
existing
interface?
We
did
my
my
patch
doesn't
change
a
lot.
It
only
introduced
new
code,
so
so
it
is
very
easy,
but
it's
not
maintained
but
exactly
yeah,
but
can
be
what
that
means.
We
can
use
it
as
a
base
performance
base
to
check,
because
I
think
it
is
the
most
performant
implementation
based
on
the
existing
interfaces.
D
F
D
F
I just want to point out that the idea of prefetching for very small chunks will still be with us, even after switching to consume. That's because the input buffer factory basically allows the application to...
F
To
allocate
and
determine
the
size
of
your
preferred
buffer,
if
you
want
to
go,
if
you,
if
you
are
handling
small
messages,
you
can,
you
can
give
this
star
huge
buffer
and
it
will
be
perfecting
like
just
like
now,
but
if
you,
if
you
are
seeing
the
application
has
if
application
like
crimson
sd,
has
the
knowledge
about
size
of
the
message
it
can
deter,
it
can
specify
the
size
of
the
big
of
the
buffer
to
fit
exactly
in
that
way.
You
are,
you
are
avoiding
memory.
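F's input-buffer-factory idea, the application hints the buffer size, so a big hint behaves like today's prefetching and an exact hint avoids over-allocation, might look roughly like this. All names are illustrative, not the proposed Seastar API:

```cpp
#include <string>

// Hypothetical input-buffer factory: the application supplies the size
// of the next buffer the stack should allocate and fill.
struct input_buffer_factory_sketch {
  size_t preferred_size;                       // the application's hint
  std::string allocate() const {
    return std::string(preferred_size, '\0');  // one buffer of that size
  }
};

// If the message size is known (crimson-osd usually knows it from the
// header), fit the buffer exactly; otherwise fall back to a large
// prefetch-style buffer, just like today's behaviour.
size_t next_read_capacity(size_t known_message_size, size_t default_hint) {
  input_buffer_factory_sketch f{
      known_message_size ? known_message_size : default_hint};
  return f.allocate().size();
}
```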
D
We
need
to
read
from
the
like
a
while
right,
but
but
we
can
still
specify
each
of
them
to
the
interface,
because
it
is,
it
is,
has
sequence
right
it
it
it's
scattered,
but
it's
still
one
after
another,
each
have
their
alignment
requirement
and
the
lens
requirements.
So
so
we
can
still.
F
D
...of my implementation here. So maybe I didn't explain it very well, but you can please also take a look at the code, if you think I...
A
F
E
D
Another good thing is: it is already working, so we can continue our work with the current implementation. And that's all for me. And we will also get a performance baseline here for the new interfaces.
D
A
F
Finally... ultimately we will need to do that, but we would prefer to have some profiling data, some numbers, before going to them. I'm working on that right now.
A
Okay, and another question: Avi, the CTO of... actually sent me a mail asking if we have an interest in attending a meeting this November. They will have a ScyllaDB Summit, and they will have a Seastar summit collocated with the ScyllaDB Summit. It will be like a one-day or two-day event. So if you guys have interest, please just let me know, so I can update him to see if we can arrange this to fit our needs.
A
And... anything else?
E
I sent you some questions regarding the PR. I just want explanations, not just the comments.