From YouTube: 2019-05-28 :: Crimson SeaStor OSD Weekly Meeting
A
A
A
C
B
A
Planning to do — to add tests, just like what we have in the existing CI. We have a submodule test, for example, to check the diff, to check if there are unintended submodule changes, and to check the Signed-off-by line added to the commit message. But in addition to that, I think we need to check the performance degradation introduced by the PR in question — for example, it would compare the delay with Crimson before and after, and flag it if the PR hurts the performance in a significant way.
C
A
D
The test is run on a local vstart cluster, and it uses the default options. I mean, the number of OSDs will default to three, the OSD op shard count will be set to eight threads, and the rest will be set to the default options. And on the local cluster, that means the classic OSD is faster than Crimson.
D
D
Crimson is kind of dissimilar from the classic OSD, so I checked the components before the messenger test. So when the client side is tested with Crimson, it uses the Seastar messenger, so the whole messenger layer is tested. If the client side uses the async messenger — and in most cases the async messenger, I think, is better for this than the Seastar one.
B
C
D
D
B
D
D
D
B
That may not have been enough. It's not that — 10G means that it has high throughput, but it still has finite latency, right? So 16 single reads — yeah, that wouldn't be enough. You'd have to — if it had a millisecond of latency, that would be like eighty microseconds, which might be — or yeah, which might be too long. Actually, hey, do I have that right?
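The back-of-envelope reasoning here is Little's law: sustained throughput is bounded by concurrency divided by per-op latency. A minimal sketch, with illustrative numbers (not figures from the meeting):

```python
# Little's law: achievable IOPS <= concurrency / per-op latency.
# With rados bench's default of 16 in-flight ops, even modest network
# latency caps throughput far below what the OSD could actually serve.

def max_iops(concurrency: int, latency_s: float) -> float:
    """Upper bound on ops/sec for a given queue depth and per-op latency."""
    return concurrency / latency_s

# 16 in-flight ops over a link with 1 ms round-trip latency:
print(max_iops(16, 1e-3))   # 16000.0 ops/s -- latency-bound, OSD idle
# Enough concurrency makes latency stop mattering:
print(max_iops(256, 1e-3))  # 256000.0 ops/s
```

This is why the suggestion later in the meeting is to crank the benchmark's concurrency until the server, not the network latency, is the bottleneck.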
D
B
D
B
D
B
So what I'm saying is: the only metric we care about for Crimson is ops per CPU cycle, right? So what you want to do is take rados bench — and we're testing throughput here — so you want to take the -t option and turn it up to be big enough that the latency is awful, just like, just bad, right?
B
That means that you fully saturate on the OSD side, and then whatever throughput number you're getting is, like, the best that that server can possibly do with that code path. That will wash out the latency from the network. Now, as to the average latency — that's a different test, but I think for our purposes, for now, we're primarily interested in CPU efficiency. So my suggestion is that you take the rados bench command you were using and, instead of -t 16, run with -t 256, and then take a look at the latency.
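Turning the saturated throughput number into the ops-per-CPU-cycle figure mentioned above is a one-line division. A hedged sketch — the CPU-time input would have to come from an external tool such as pidstat or /proc/&lt;pid&gt;/stat, which is an assumption on my part, not something rados bench reports:

```python
def ops_per_cpu_second(total_ops: int, cpu_seconds: float) -> float:
    """CPU efficiency: completed ops per second of CPU time the OSD consumed.
    Dividing by the clock rate gives the ops-per-cycle figure discussed above."""
    return total_ops / cpu_seconds

# Example: 1.2M ops completed while the OSD burned 300 s of CPU time
eff = ops_per_cpu_second(1_200_000, 300.0)
print(eff)        # 4000.0 ops per CPU-second
print(eff / 3e9)  # ops per cycle, assuming a 3 GHz core
```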
D
B
D
B
D
C
B
Did you check that as well? Or did you check — you said the latency went up by, like, a factor of 7, right?
D
D
E
B
D
B
With different clients you might be saturated on the client side. rados bench isn't super efficient, and librados definitely isn't, because, if you think about it, we've only Crimson-ized the server side; we haven't Crimson-ized the client side. You might need to run two different rados bench instances to actually properly saturate the server. It's possible that what you actually saturated there is the client-side Objecter.
B
B
The rados bench command you're using is itself actually very complicated — it has a full librados stack under it. It's not super efficient. You might need to run two of them.
D
B
C
A
E
Some reviews — I mean, I posted some comments on Sam's review stuff. Yes, the review is still work in progress. However, yesterday and this week I was focused mostly on drilling down through the DPDK implementation — the DPDK-based implementation of Seastar's networking stack — and I guess I now know the reason why the interfaces for reading from a Seastar socket have been designed in that way.
E
Well, this is interconnected with our problems with specifying alignment, and with the extra memcpy from user space to user space we are doing during write serving. Basically, I've got the impression that the design is a compromise to handle the DPDK stack effectively, with zero copy.
E
Let's imagine how a flow of Ethernet frames looks from the DPDK driver's point of view. Basically, the network card is instructed where the buffer is placed, and the DMA puts the data there. Data means Ethernet frames — a lot of them, possibly — and those frames are processed. This processing can be seen basically as some kind of pointer operations, where we are stripping the headers and just taking the payload. However, the payload stays where it is, in the case of DPDK.
E
This payload can be transferred directly to the application using scatter/gather, because it's very likely it will be fragmented, because of how Ethernet is designed: frames are pretty small. Even jumbo frames max out around 9K, and a usual frame is just around 1500 bytes. So payloads are fragmented by design, and scatter/gather is needed to handle that.
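The fragmentation argument is easy to put numbers on: a typical RADOS-sized payload spans dozens of frames at the standard MTU, so a zero-copy read interface has to hand back a list of fragments rather than one contiguous region. A small illustration (the payload size is just an example):

```python
import math

MTU = 1500    # typical Ethernet payload bytes per frame
JUMBO = 9000  # jumbo frames top out around 9K

def fragments(payload: int, mtu: int) -> int:
    """Number of frames a payload of the given size is split across."""
    return math.ceil(payload / mtu)

print(fragments(64 * 1024, MTU))    # 44 frames for a 64 KiB payload
print(fragments(64 * 1024, JUMBO))  # 8 frames even with jumbo frames

# Zero-copy delivery keeps the per-frame buffers as a list (scatter/gather)...
frags = [bytearray(MTU)] * fragments(64 * 1024, MTU)
# ...whereas producing one contiguous buffer forces a memcpy of every byte:
contiguous = b"".join(bytes(f) for f in frags)  # this join IS the copy
```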
E
E
So this boils down to the fact that, basically, in the case of fragmented frames, the kernel really needs to do a memcpy, and that's the reason why, when you are reading a network payload from the kernel, you can specify the location of your buffer. But that's not the case with Seastar. In the case of Seastar, all you are doing is asking the stack for a payload; you don't provide the location of the buffer — the stack gives it to you.
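The interface difference being described can be sketched as two read styles. The function names below are illustrative stand-ins, not Seastar's actual API: in the POSIX style the caller owns and places the buffer; in the Seastar/DPDK style the stack hands back buffers it already owns, which enables zero copy but takes placement (and alignment) out of the caller's hands.

```python
# POSIX style: the CALLER chooses where the data lands.
def posix_style_read(stack_data: bytes, buf: bytearray) -> int:
    n = min(len(buf), len(stack_data))
    buf[:n] = stack_data[:n]      # the kernel memcpys into *your* buffer
    return n

# Seastar/DPDK style: the STACK chooses the buffer and just hands it over.
def seastar_style_read(stack_frames: list) -> bytes:
    return stack_frames.pop(0)    # no copy, but no control over placement or length

frames = [b"frame-0-payload", b"frame-1-payload"]
mine = bytearray(8)
n = posix_style_read(frames[0], mine)  # data lands at an address *I* picked
got = seastar_style_read(frames)       # data lands wherever DMA put it
```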
E
You don't control the length either — it's up to the stack. And, well, to handle both the DPDK and POSIX stacks effectively, I believe we will quite likely need to introduce some kind of extension to Seastar, and also verify whether our entire IO path is able to effectively use scatter/gather.
E
I think the second is not a big problem, because basically what we are using everywhere is bufferlist, so the modification on our side would be just to not force Seastar to read into one single contiguous buffer — which, in the case of DPDK, would force an unnecessary memory copy, while in the case of the kernel the memory copy is obligatory anyway, for the sake of isolation. Well, we also want to be able to specify alignment. I'm sketching some interface.
E
Extensions: basically, we will need something like an allocator, but it would be the reverse of a regular allocator. With a regular allocator, the user — the client of the allocator — is responsible for specifying the size he wants to allocate; here we will need to have the size under the control of the allocator itself. And, well, in the case of the POSIX stack the concept of an allocator is actually there, but it's not enough, because Seastar only allows specifying the size, which is just a rough size and by default equals 8K.
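The "reverse allocator" idea can be sketched as an interface where the client supplies only the allocation policy (alignment, pooling, and so on) while the network stack dictates the size per call. Everything here, names included, is a hypothetical illustration of the concept, not a proposed Seastar API:

```python
class StackDrivenAllocator:
    """Inverted allocator: the network stack, not the client, picks the size.
    The client only expresses *how* memory is obtained, e.g. its alignment,
    so the resulting buffers can later feed aligned (O_DIRECT-style) writes."""

    def __init__(self, alignment: int = 4096):
        self.alignment = alignment

    def allocate(self, size: int) -> bytearray:
        # Round the stack-chosen size up to the client's alignment.
        padded = -(-size // self.alignment) * self.alignment
        return bytearray(padded)

# The stack calls back into the allocator with the size IT decided on:
alloc = StackDrivenAllocator(alignment=4096)
buf = alloc.allocate(1500)  # stack says "I have 1500 bytes for you"
print(len(buf))             # 4096: padded up to the requested alignment
```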
E
As for your idea — I also reviewed your proposal, and it could play nicely with the POSIX stack, given the reduction in the number of read syscalls we have per message. However, it might force a copy in the case of DPDK networking.
E
Yes, but we can minimize that. Basically, on average, I believe we could go with one single read operation — one syscall — per message. This would require some hack: basically, first of all, you always need to issue a syscall for reading the prologue of the first frame, but then, after getting it, you could issue only one more.
F
Can you repeat — why do we have to make two read calls in v2? We can read a large amount — we can ask for a large amount of data, which will include the header and, with a reasonably sized buffer, then just take whatever is there, maybe with the second header in there as well.
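The one-syscall-per-message idea both speakers are circling can be sketched like this: always ask for a large chunk, parse the frame out of it, and carry any surplus bytes over as the start of the next message. The framing below (a 4-byte length prefix) is made up for illustration; the real msgr v2 frame format is more involved.

```python
import struct

HDR = 4  # hypothetical framing: 4-byte big-endian payload length

def frame(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

class SpeculativeReader:
    """Reads a large chunk per syscall; leftover bytes seed the next message,
    so on average one read call yields one (or more) whole messages."""

    def __init__(self, sock_read, chunk=64 * 1024):
        self.sock_read = sock_read  # callable(n) -> bytes, stands in for recv()
        self.chunk = chunk
        self.buf = b""

    def next_message(self) -> bytes:
        while True:
            if len(self.buf) >= HDR:
                (length,) = struct.unpack(">I", self.buf[:HDR])
                if len(self.buf) >= HDR + length:
                    msg = self.buf[HDR:HDR + length]
                    self.buf = self.buf[HDR + length:]
                    return msg
            data = self.sock_read(self.chunk)  # one big read, not header+body
            if not data:
                raise ConnectionError("eof mid-frame")
            self.buf += data

# Two messages arriving back to back are handled with a single "syscall":
stream = [frame(b"hello") + frame(b"world")]
reader = SpeculativeReader(lambda n: stream.pop(0) if stream else b"")
print(reader.next_message())  # b'hello'
print(reader.next_message())  # b'world' -- no further read needed
```

The trade-off discussed above still applies: with DPDK this big contiguous read is exactly what would force a copy.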
E
F
F
E
F
E
E
A
E
A
C
F
F
F
A
B
F
F
B
F
B
A
B
B
B
Whatever — so, I'm working on the op thing. I think it boils down to two things. One: every op of each type should be on a linked list that can be dumped. That is, when we start an op — be it a peering op, a client op, a backfill op, or whatever — it should be on a shard-wide linked list, so you can walk the list and dump all the currently in-progress operations.
B
And secondly, whenever an operation is not currently making progress, it will have a pointer to a blocker, which can be dumped in a way that gives you useful information about whatever that blocker is. For instance, if it's waiting to get an OSDMap, it'll say: blocked, waiting on OSDMap. Those two — there's, like, a lot to it, but I think that's the right way to go.
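The two requirements — every live op on a dumpable shard-wide list, and a blocker pointer whenever an op is not making progress — can be modeled in a few lines. All class and field names are invented for the sketch; the eventual Crimson code may look quite different.

```python
class Blocker:
    def __init__(self, description: str):
        self.description = description  # e.g. "waiting on OSDMap"

class Op:
    def __init__(self, kind: str, registry: list):
        self.kind = kind
        self.blocker = None             # set while the op is not making progress
        self._registry = registry
        registry.append(self)           # on the shard-wide list for its whole lifetime

    def complete(self):
        self._registry.remove(self)

shard_ops = []                          # one such list per shard

a = Op("client_write", shard_ops)
b = Op("peering", shard_ops)
b.blocker = Blocker("waiting on OSDMap")

# "Dump in-flight": walk the list and report every op and why it is stuck.
dump = [(op.kind, op.blocker.description if op.blocker else "running")
        for op in shard_ops]
print(dump)  # [('client_write', 'running'), ('peering', 'waiting on OSDMap')]
```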
B
B
So if we got three IOs — three writes, let's say — on the same object from the same client, and we read them off the wire one after another, I believe the way the dispatcher currently works is that it will just let the future for each of those run independently, and then it'll immediately go back to grab the next one. In practice, obviously, the core is only doing one of these things at a time, and most of the time the thing we started doing will continue running until it stops, but that's not actually a guarantee.
B
Seastar doesn't guarantee that. I think there's a default — after a hundred and twenty-seven future returns it'll switch to something else, or something — and there's a way to, like, ask it to preempt between future calls. I think if you, like, wake up a promise in urgent mode, it will cause the currently running execution to stop and the other thing will run instead. Which is to say, I think what we have to do is handle that ourselves — the future returned from the dispatch path itself.
B
A
B
B
C
B
That's what I'm actually thinking about — I mean, it's not that it conflicts with Seastar.
B
B
C
B
So, generally speaking — the way it's currently implemented, you'll read them — correct me if I'm wrong, I could easily be misunderstanding — but if you read five writes off the wire from the same writer and, let's say, for whatever reason one of them blocks and stops executing, the other four are able to continue. Those four will definitely run to completion while the blocked one, let's say, waits, right? But it's worse: even if all five of them don't block, they might have really long future chains, or something preempts.
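The hop-over hazard can be reproduced with any cooperative scheduler; here asyncio stands in for Seastar futures. Five writes are dispatched in order, the first blocks, and the other four run to completion ahead of it — exactly the reordering that per-object ordering forbids.

```python
import asyncio

async def main():
    completed = []
    gate = asyncio.Event()

    async def write(i, blocks=False):
        if blocks:
            await gate.wait()  # e.g. stuck waiting for an OSDMap
        completed.append(i)
        if i == 4:
            gate.set()         # the last write unblocks the first

    # Dispatch in order 0..4, like reading five writes off the wire;
    # each gets its own independently running task.
    await asyncio.gather(*(write(i, blocks=(i == 0)) for i in range(5)))
    return completed

print(asyncio.run(main()))  # [1, 2, 3, 4, 0] -- write 0 finished last
```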
B
C
B
A
B
If one of the things the first message did was unblock another task, and that task got unblocked urgently, then even if the current one isn't actually blocked, the reactor may choose to drop the future — that is, the task that's currently executing — to pick up the one that just got unblocked. There's no determinism.
D
B
D
B
That's true too, but the thing we order on is the tuple of PG — of a connection and object. So if either of those is different, we don't care about the order. It's just that, for convenience's sake, you typically wait on the connection/PG tuple first, and then on the connection/object tuple after you've been dispatched to a PG. So anywhere you block on that chain, it may change slightly what the tuple is.
B
My guess is you'd write the handler so that it just does what the messenger currently does — like, you're still free to write the handlers so that they do what the messenger currently does. It's just that, if you don't write the messenger that way, you have the freedom to say: actually, hold up, I need you to wait — wait until I've gotten at least this far, and now you can go ahead and read it and queue it, right? Okay.
A
B
OSDMaps are always going to be kind of like that, right? Although maybe not — maybe we actually don't even want to do that. Maybe we always want to run those to completion, because they block so many other things. It's unclear; we'll have to think about it. But yeah — it's not that we can't ever fork off a new task. The way the Seastar reactor works, I don't think it's even expensive to have a lot of tasks ready to run in the reactor.
B
B
Eventually it'll call out into FileStore, block on a kernel I/O thing, get picked back up, return, and then tell the messenger to send it back out — where it goes into yet another first-in, first-out queue, where the sender thread picks it up and sends it. So it's a sequence of threads, which are preempted by the kernel but never reordered within a thread, connected by first-in, first-out queues — and it's super brittle. Does that make sense?
D
D
B
B
The header tells you how many bytes you're going to read; you read those bytes, then you decode them into a message, stick it onto a queue, and then another thread will pick it up and do its thing, and then stick the result of that onto yet another queue, and so on. As long as that ordering always stays the same, the messages will always build up in the intermediate queues and get processed in order. But with Crimson it's a little weirder, because our "threads" are actually the task structures.
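The classic-OSD shape described here — stages in separate kernel threads connected by FIFO queues, with ordering falling out of the queues — can be sketched with queue.Queue. The stage names are illustrative:

```python
import queue
import threading

def stage(inbox, outbox, work):
    """One pipeline stage: a thread draining a FIFO inbox into a FIFO outbox."""
    while True:
        item = inbox.get()
        if item is None:           # shutdown sentinel propagates downstream
            outbox.put(None)
            return
        outbox.put(work(item))     # FIFO in, FIFO out: order is preserved

wire, decoded, done = queue.Queue(), queue.Queue(), queue.Queue()

# A "reader" thread decodes; an "OSD" thread processes — like the classic OSD.
threading.Thread(target=stage, args=(wire, decoded, lambda m: ("decoded", m)),
                 daemon=True).start()
threading.Thread(target=stage, args=(decoded, done, lambda m: ("done", m[1])),
                 daemon=True).start()

for i in range(3):
    wire.put(i)
wire.put(None)

out = []
while (item := done.get()) is not None:
    out.append(item)
print(out)  # [('done', 0), ('done', 1), ('done', 2)] -- ordering from the queues
```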
B
So it's a little bit like spawning a thread for every request, which is fine as long as there are no ordering requirements between the requests — but for us, we kind of do have them. So we have to do enough of the processing to be sure that the next one won't hop over us. Yeah, that's as far as I've kind of gotten thinking about it. Correct me if I'm wrong about Seastar, though — like, is there a mechanism that I'm missing? It's a little bit like execution stages, I suppose, but we need —
F
B
B
What we're doing — that's what I'm describing: we need a way to do that that maintains this ordering correctly and is not incredibly annoying to code against. And finally, if, let's say, some particular object — probably an RGW bucket — is causing trouble and causing blocked requests, I want to be able to dump the requests on the OSD and find out.
C
B
Why it's blocked. That's kind of why my architecture is like it is: the structure is that all ops are on a queue for their whole lifetime, and while they are not making progress, they must also point at the thing that's stopping them. And, furthermore, the thing that's stopping them is probably, logically, a task that's runnable and maintains the ordering of the things that are stuck on it. I think that's kind of how it has to work.
D
B
Well, it doesn't work yet. I want another day or two, and then I'll send a PR with, like, an explanation. I got it pretty well baked on the plane on the way back on Friday, but I have more to work through. I don't, for instance, have the part that maintains ordering — I don't have a good primitive for that yet.
B
D
B
We're definitely using user-space queues, right? We made that choice when we decided to go with Seastar — insofar as we decided to go with Seastar, one thread per core means no kernel scheduling, right? We have to do our scheduling in user space, and we'll use Seastar scheduling insofar as a task equals a thread. So all we really need is — I need, like, a mutex, I guess, but with more stuff.
B
It does, but it doesn't provide the other stuff. It doesn't provide the thing where, when we wake them up, they need to be woken up in order, and each one needs to finish its stage before the next one gets woken up — and I don't know what that means yet. You know what I mean — like, let's say we wake something up from an object-state block: if the next thing it needs to do is a read, it needs to finish that read before we wake up the writer.
B
D
D
A
B
But even if that's true — and I think you're right, I think it is — it's not strong enough, because it may be the case that the first one is preempted, or for whatever reason stops executing, and then you have no guarantee that that one gets picked up next, because now we're not talking about a mutex; we're just talking about the order in which the reactor picks up runnable tasks.
D
B
D
B
The word "queue" is tough here, because in the case of the classic OSD we mean literally, like, a std::list with a mutex around it, right — it's a producer/consumer queue connecting two different threads. Here we have something a little bit more like the kernel's wait list, and the problem is that when you wake up the stuff on a wait list, the order in which they run is non-deterministic. That's the problem we have. So what we actually need to do is something a little more subtle than just waking everything up.
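The "more subtle than waking everything up" primitive can be sketched as an ordered wait queue: waiters are stored FIFO, a release wakes exactly one, and the next is only made runnable once the previous signals that its stage is finished. This is a single-threaded model of the idea under those assumptions, not Seastar code:

```python
from collections import deque

class OrderedWaitQueue:
    """FIFO wait list where waiters are released one at a time, in order.
    Each waiter must call stage_done() before the next becomes runnable."""

    def __init__(self):
        self.waiters = deque()
        self.running = None

    def wait(self, op):
        self.waiters.append(op)

    def wake_next(self):
        # Wake exactly one waiter, in arrival order -- never the whole list.
        if self.running is None and self.waiters:
            self.running = self.waiters.popleft()
            return self.running
        return None

    def stage_done(self):
        self.running = None
        return self.wake_next()

q = OrderedWaitQueue()
for op in ("write-1", "write-2", "write-3"):
    q.wait(op)

order = [q.wake_next()]       # only write-1 runs...
order.append(q.stage_done())  # ...write-2 starts only after write-1 finishes
order.append(q.stage_done())
print(order)                  # ['write-1', 'write-2', 'write-3']
```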
B
B
B
What Seastar gives us is a few basic primitives — and, to be clear, we will be using promises and futures, right? We are using the Seastar primitives; we're just using something on top to impose a little bit more order than Seastar gives us by default. It's no different from creating a user-space producer/consumer queue on top of threads.
A
B
C
A
F
B
C
B
C
B
B
Could be, but it's going to be pretty closely tied to the way we think about ops. Maybe, maybe not — well, yeah, could be; we'll see. I expect that when I submit the PR there will be opinions and it will be rewritten. I haven't done that much Seastar stuff lately, so it's unlikely that I'll come up with the best answer — maybe something.
A
B
It's just that I'm worried — I think, if we do that, they get woken non-deterministically. That's the problem. Like, if five ops wait on the promise that represents OSDMap epoch 15, then, when they're woken up, they will usually execute in order, but that's not a guarantee — it's an accident.