From YouTube: 2019-07-02 :: Crimson SeaStor OSD Weekly Meeting
B
Last week we finally got crimson building in the test suite, and I think it's in. But currently the ceph-build job only builds the changes that are pushed to the ceph-ci repo, and that does not include the master branch. So the next thing I have to do is to include the master branch when building the crimson flavor, and then, once that gets done, I will push the change to start exercising the crimson test suite, so it will be run regularly.
Along with the other teuthology test suites in the nightlies. And after that, I will talk to Alfredo about adding an API so we can use it to pull a tarball built from master in the pipeline when testing the PRs involving crimson. That's the plan, and that's pretty much all I have.
C
Hello. First of all, a very unplanned thing: last Sunday I was informed that I needed to be in the Netherlands on Thursday to give a talk about crimson OSD. I was a replacement for one of our colleagues who was not able to make the flight. Well, actually, it's funny: none of the four flights I had went according to schedule. All of them were cancelled, rebooked, or delayed, some even multiple times.
A
I just read it a few minutes before the talk, but I'm really for it. I introduced Boost.Outcome into the code in a previous project, and I really liked its design. I think it's better than std::expected, and very convenient if you want to avoid exceptions. I didn't have time to think through all the details of what you wrote, and I'll probably write something back, but I'm really all for using it. That's one thing.
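A minimal sketch of the error-handling style being discussed, assuming Boost.Outcome v2 (Boost 1.70 or newer); parse_size() is a made-up example, not something from the ceph tree:

```cpp
// Hedged sketch: report failure through std::error_code instead of throwing.
#include <boost/outcome.hpp>
#include <system_error>
#include <iostream>
#include <string>

namespace outcome = BOOST_OUTCOME_V2_NAMESPACE;

// Hypothetical helper: parse a non-negative size without exceptions.
outcome::result<unsigned, std::error_code> parse_size(const std::string& s) {
  if (s.empty() || s.find_first_not_of("0123456789") != std::string::npos)
    return std::make_error_code(std::errc::invalid_argument);
  return static_cast<unsigned>(std::stoul(s));
}

int main() {
  auto r = parse_size("4096");
  if (r)
    std::cout << "size = " << r.value() << "\n";
  else
    std::cout << "error: " << r.error().message() << "\n";
}
```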
E
So that's it. In other words, it's not that it's difficult for us to create an object-class-like thing in crimson. It's just that the exact same code won't work, because it has some assumptions about the API it's given, namely that when it asks for a read, that read will return to that very same stack frame without going away.
E
That's the entirety of the problem. So our choices are either to hack around it using something exotic like setjmp/longjmp, some kind of stack-switching trick where you leave that stack where it is and come back to it later, or we just decide that there isn't that much object class code in the first place, so we'll simply port it over to whatever we decide we want the interface to look like. Those seem to be the options on the table.
E
So if anyone actually does, it'll become a stable interface. But right now, as far as I know, the only external user I'm aware of is a research group out of UC Santa Cruz that's using it to do some clever stuff with the MDS, and their stuff is written in Lua.
E
So actually what they created is a Lua interpreter class that interprets Lua code on top of the object class interface, and that's no problem for us, because in Lua it's really easy to suspend a running stack and run it later. So we actually don't even have to modify their code; at that level it's no problem.
E
Why don't we just create a new interface, and if it turns out there is code in the wild, hopefully they'll only have to do a little bit of porting; that code doesn't tend to be that complicated. And if it does turn out to be a real, live, genuine, honest-to-god problem, then we'll figure something out. But my guess is that it won't. I don't think there's any object class code in the wild.
E
Now, for the Lua one, I think what they did was wrap the object class API up into a Lua thing, and that's actually easy. If you know anything about the Lua virtual machine, you can suspend a running Lua thread, because it doesn't have a real stack, return out of it, and resurrect it later. It's no problem. Lua has first-class coroutines, and they're accessible in the C API you use to embed the VM. Basically, if you've ever looked at a game engine, this is why they use Lua.
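As an illustration of that property (a generic embedding sketch, not the UC Santa Cruz code, and assuming the Lua 5.3 C API), a host program can run a chunk in its own Lua thread, let it yield, and resume it later:

```cpp
// Hedged sketch: suspend a running Lua chunk from the host and resume it.
#include <lua.hpp>
#include <iostream>

int main() {
  lua_State* L = luaL_newstate();
  luaL_openlibs(L);

  // A separate Lua thread keeps the chunk's "stack" alive inside the VM.
  lua_State* co = lua_newthread(L);
  luaL_loadstring(co, "print('step 1'); coroutine.yield(); print('step 2')");

  // First resume runs until the yield, then control returns to the host.
  int status = lua_resume(co, L, 0);
  std::cout << "host: Lua thread is suspended, doing other work\n";

  // Second resume picks up exactly where the chunk left off.
  if (status == LUA_YIELD)
    lua_resume(co, L, 0);

  lua_close(L);
}
```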
E
You can write plug-in code or game-logic code that looks like straight-line code, but the game engine runs it suspended between frames; that's basically the whole point. So that's not a problem, and I would suggest we don't even bother to worry about it, because it's MDS code, right? That's our problem later, when we need to port the MDS; that'll be a fun little porting exercise, but I expect they'll do it for themselves, just for fun. It's not that hard, and they're a research group.
E
My guess is that, as this becomes more mature, we'll get something out there with whatever's there, and then we'll start documenting it, because you can already run it now, right? So people who are interested in running crimson, as it becomes more mature, will be able to go:
E
"Oh, but my object class doesn't work." Or we could document the API, but I don't know that we're there yet. And the other thing is that it's currently treated as an internal API, and that's probably always going to be true, because it's deeply mapped onto the RADOS internals; in a sense it's RADOS's natural API, and in some ways we're willing to change it.
E
We do maintain backwards compatibility to an extent, but it's not the same kind of handcuffs that RBD and CephFS put on us, right? You don't break POSIX, but we might change the semantics of a RADOS call if it were wrong, for instance; we're not going to maintain bug-compatibility layers. So I guess it's a gray area, and if it's a problem, we'll figure something out. Okay.
A
And my last comment, since we're talking a lot today: about what was suggested for the error handling. You said, correctly, that there isn't a reason to optimize error handling code, and we had a comment about that. The only reason I suggested it is that it was a move in the right direction regarding exceptions, which is minimizing exceptions on regular occurrences and using error codes there instead.
B
It puts the error information into whatever is returned, so it can be handled by whoever is looking at the error message, because it can materialize the error code into a human-readable string. So that's why I like it: it comes without extra effort. Otherwise I wouldn't do it; you would just be doing error handling the usual way.
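For reference, the "human-readable string" point is the standard error-code facility; a tiny generic C++ illustration, not crimson code:

```cpp
// Turning an error code into readable text with no extra effort at the call site.
#include <system_error>
#include <iostream>

int main() {
  std::error_code ec = std::make_error_code(std::errc::connection_reset);
  std::cout << ec.category().name() << ": " << ec.message() << "\n";
}
```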
D
Okay, so this is about how to do prefetching with POSIX sockets. It has some assumptions: we need to stay compatible with POSIX sockets, and that implies we will have syscalls on the I/O path, and a system call has considerable overhead. That's why we need prefetching to help performance: it can read more than is immediately needed in order to reduce the number of system calls. And another requirement is what we are currently doing, which is:
D
The idea is to prefetch as much as possible per system call, in the simplest way, without wasting too much tail memory, because if we just simply increase the prefetch size and the actual read is not that large, all the tail memory will be wasted. This is the simplest way, and there must be other approaches for the same purpose, and I found:
D
Yeah, I have the following slides to explain that. So suppose smaller reads: if we increase the prefetch size, it means that for each system call we can read more, in this case about 14 times more bytes per read. It means that with the larger prefetch size we have 14 times less system call overhead, and that results in a better result.
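A minimal sketch of that idea (illustrative only, not crimson's messenger code): one recv() fills a prefetch buffer, and subsequent small message reads are served from it, so many small reads share a single system call.

```cpp
// Hedged sketch of a prefetching socket reader over plain POSIX sockets.
#include <sys/types.h>
#include <sys/socket.h>
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

class PrefetchReader {
  int fd_;
  std::vector<char> buf_;
  size_t begin_ = 0, end_ = 0;          // valid bytes are [begin_, end_)
public:
  PrefetchReader(int fd, size_t prefetch_size) : fd_(fd), buf_(prefetch_size) {}

  // Read exactly `len` bytes; refill with one big recv() only when empty.
  bool read(char* out, size_t len) {
    while (len > 0) {
      if (begin_ == end_) {             // buffer exhausted: one syscall
        ssize_t n = ::recv(fd_, buf_.data(), buf_.size(), 0);
        if (n <= 0) return false;
        begin_ = 0;
        end_ = static_cast<size_t>(n);
      }
      size_t take = std::min(len, end_ - begin_);
      std::memcpy(out, buf_.data() + begin_, take);
      begin_ += take;
      out += take;
      len -= take;
    }
    return true;
  }
};
```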
D
So this second case is still reasonable, in that we have better performance, but when it comes to 64k reads it's different. At first sight the larger prefetch size still looks better, since each read is about 70 times larger, but the syscall reduction is not 70 times, because we switch to direct reads once the prefetch buffer is used up.
D
The reason is that we have an alignment requirement. For a prefetch we don't know how much alignment is needed or whether the data needs to be contiguous, so the prefetch is just reading more than we need, and if we have a memory alignment requirement we need to copy it into aligned memory. This also applies to the input buffer factory, because the buffer factory doesn't know what the next message is.
D
It's about eight messages' worth of bytes per system call, so in the prefetch-up-to-500k implementation we are prefetching many more messages than we need. It means we need to do a copy in this version, because we may need each 64K to be fully aligned, and if we prefetch too much, we may have more memory copy overhead. So that's why the fixed 8k prefetching is used.
D
It looks a little better than prefetching 500K, because after the 8k prefetch is consumed it will instead use a direct read, letting the system call fill up the aligned memory we provided. That's what compensates for the system call overhead. And it's the same for the larger reads: the 500k prefetching is slower than the 8k prefetching, because:
D
Because the 8k version only prefetches 8K each time, most of the time it's using direct reads, which avoid the memory copy. From the metrics I can see that 99% of the memory goes through the direct read, which doesn't need an extra memory copy. But if we prefetch too much space, like 500K, there is still 50% of the memory that needs to be copied to be aligned.
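A rough sketch of the two paths being compared (plain POSIX code, not the actual implementation; the 4096-byte alignment is an assumed requirement): serving a payload out of the prefetch buffer costs a memcpy into aligned memory, while a direct read lets the syscall fill aligned memory allocated up front.

```cpp
// Hedged sketch of "copy from prefetch buffer" vs. "direct read into aligned memory".
#include <sys/types.h>
#include <sys/socket.h>
#include <stdlib.h>
#include <cstring>

constexpr size_t kAlign = 4096;          // assumed alignment requirement

// Path 1: payload already sits in the prefetch buffer, so we pay a memcpy.
void copy_from_prefetch(const char* prefetched, size_t len, void* aligned_dst) {
  std::memcpy(aligned_dst, prefetched, len);   // the copy we want to avoid
}

// Path 2: direct read, letting the syscall fill aligned memory itself.
bool direct_read(int fd, size_t len, void** out) {
  void* p = nullptr;
  if (posix_memalign(&p, kAlign, len) != 0) return false;
  size_t got = 0;
  while (got < len) {
    ssize_t n = ::recv(fd, static_cast<char*>(p) + got, len - got, 0);
    if (n <= 0) { free(p); return false; }
    got += static_cast<size_t>(n);
  }
  *out = p;                                    // no memcpy: data landed aligned
  return true;
}
```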
D
But if there are large reads with a layout requirement, we had better not prefetch too much, because if we prefetch too much we still need to copy that prefetched memory into aligned memory ourselves. It's kind of a paradox. So how large should the prefetch size be, what is the proper prefetch size? I think it's empirical, because the memory copy overhead and the syscall overhead are not related; they depend on the hardware and the kernel.
D
The next slide is where I'm testing the input buffer factory, and it's the current result I have for the perf crimson messenger, from the perspective of the perf messenger. I haven't tested RADOS; that's still work in progress. And because the input buffer factory doesn't prefetch, we can see it doesn't have very good performance for the smaller sizes.
D
Yes, that's rados bench, right; I still need to test that. But this is from the perspective of the perf crimson messenger, without the payload, the OSD payload.
D
That is, without the overhead introduced by the OSD backend. So if we do the same test, both with the perf crimson messenger and with rados bench, we can find the overhead of the OSD from the difference in the performance results. For example, if we do the same 4K write against the perf crimson messenger and against rados bench, and the results are different, then we can tell how much overhead is introduced by the OSD part.
C
Okay, just one last comment, if I may, about the assumptions presented at the very beginning of the slides. I'm not entirely sure that having a syscall on the I/O path while using the POSIX stack, that is, while using kernel networking, is inevitable. I'm pretty sure that introducing io_uring into Seastar would negate the need for prefetching.
C
That's one of the reasons why. Okay, the input buffer factory actually can do prefetching, and can even do it adaptively. Everything will look more complicated because it has access to more knowledge, application-specific knowledge, which, to be honest, I would love to have. I will talk about bringing io_uring integration to Seastar. Instead of investing a lot of effort in doing prefetching for these settings, I bet the cost of issuing a syscall will become much less of an issue.
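For context, a rough sketch of that io_uring direction (hedged: this is plain liburing usage, not an existing Seastar or crimson integration): reads are queued into a submission ring and completions are reaped in batches, so the per-read syscall cost that motivates prefetching largely disappears.

```cpp
// Hedged sketch: a single read issued through io_uring via liburing.
#include <liburing.h>
#include <sys/uio.h>

bool read_with_uring(int fd, void* buf, unsigned len) {
  struct io_uring ring;
  if (io_uring_queue_init(8, &ring, 0) < 0) return false;

  struct io_uring_sqe* sqe = io_uring_get_sqe(&ring);
  struct iovec iov = { buf, len };
  io_uring_prep_readv(sqe, fd, &iov, 1, 0);    // queue the read
  io_uring_submit(&ring);                      // one syscall for the whole batch

  struct io_uring_cqe* cqe = nullptr;
  io_uring_wait_cqe(&ring, &cqe);              // reap the completion
  bool ok = cqe && cqe->res >= 0;
  if (cqe) io_uring_cqe_seen(&ring, cqe);
  io_uring_queue_exit(&ring);
  return ok;
}
```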
C
Well, that's one of the things the buffer factory tries to do: delegate the responsibility of determining the size of the prefetching buffer to the application. And reading exactly the message size is a kind of special case of prefetching where the prefetch is basically skipped.
C
Whether you want to do fixed prefetching, or whether you want to be adaptive and how exactly to adapt, is up to the application in the end. That's the advantage of the buffer factory: flexibility for the application. It's not only about doing alignment, alignment at the beginning, alignment in the middle, alignment according to the end of the buffer; it's about delegating the memory layout, including the buffer size, to the application.
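A hypothetical sketch of what that delegation can look like (the names below are invented for illustration and are not crimson's actual interface): the transport asks an application-supplied factory where the next bytes should land, so the application controls both buffer size and alignment.

```cpp
// Hedged sketch of an application-controlled buffer policy (ownership elided).
#include <cstddef>
#include <stdlib.h>

struct buffer_slot {
  void*  data;        // where the transport writes incoming bytes
  size_t capacity;    // how much it may read in one go
};

class input_buffer_factory {
public:
  virtual ~input_buffer_factory() = default;
  // Called before each read; `hint` is how many bytes the protocol layer
  // already knows it needs (e.g. the next message size).
  virtual buffer_slot get_buffer(size_t hint) = 0;
};

// Example policy: 4 KiB-aligned memory sized exactly to the hint, i.e.
// "read exactly the message size" with no prefetch at all.
class aligned_exact_factory final : public input_buffer_factory {
public:
  buffer_slot get_buffer(size_t hint) override {
    void* p = nullptr;
    if (posix_memalign(&p, 4096, hint) != 0) return {nullptr, 0};
    return {p, hint};
  }
};
```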
E
One question, just in the last minutes: has there been any thought put into what it would mean to run crimson with less than one core? That is, on a machine where you expect to share the core with other work, because you expect the OSD to be largely idle. How does Seastar behave in a situation like that?
E
One of the assumptions underlying the way we've been talking about crimson so far is that some subset of the cores of the machine will simply be dedicated to running an OSD; regardless of how we choose to do it, that's just true, right? Because of that assumption, Seastar does some things like: it doesn't really go to sleep, it just assumes that
E
if at the moment it doesn't have work to do, it's too expensive, latency-wise, to go to sleep, because if an IO comes in it'll take some time to wake back up, so it polls. But there are environments where we would like to be able to run in the same box with other applications, and the expectation of the storage stack is that it will be lightly utilized, if at all, most of the time. So this isn't a performance situation.
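This is not Seastar code, just a generic sketch of the trade-off being described: a dedicated-core reactor can poll with a zero timeout and react immediately, while a shared-core deployment would rather block in the kernel and accept some wake-up latency.

```cpp
// Hedged sketch: busy-polling vs. sleeping in a simple epoll-based event loop.
#include <sys/epoll.h>

void reactor_loop(int epfd, bool dedicated_core) {
  epoll_event events[16];
  for (;;) {
    // Dedicated core: poll without sleeping; shared core: sleep until work.
    int timeout_ms = dedicated_core ? 0 : -1;
    int n = epoll_wait(epfd, events, 16, timeout_ms);
    for (int i = 0; i < n; ++i) {
      // handle events[i] ...
    }
    // On a dedicated core we also run queued tasks and poll other sources
    // here before looping again, instead of sleeping.
  }
}
```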
E
Granted, there are not that many environments like that. What you're probably thinking of is a cloud kind of environment where you're serving VMs and stuff; think more of an edge node at a telco that's serving edge content, and you're running Ceph because that's the platform you've chosen and it does a good job of serving RBD and RGW. But it's only like two nodes, right, and there's other stuff running on the boxes, and you simply can't dedicate a whole core to one OSD.
D
I have tested it: bringing up one messenger server on one core and two messenger clients on another two cores. It makes the one server core a hundred percent busy, while the two client cores are not that busy; they don't reach 100 percent, it seems.