From YouTube: 2019-11-12 :: Crimson SeaStor OSD Weekly Meeting
C: Essentially, so, do you want me to talk a little bit about why we're sort of assuming we don't need a file system, or no?
C
But
if
were
actually
want
to
batch
up
a
sequence
of
operations
that
we
know,
because
we're
smart
are
not
at
this
time
or
don't
independently
need
to
be
persisted
and
then
issue
a
single
barrier
at
the
end,
for
instance,
because
we
need
to
do
rights
to
several
places
on
a
file
update
another
file
and
then
sync
them
all
with
the
same
barrier.
There
is
no
way
to
make
a
file
system
do
that.
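The batching constraint described above can be sketched in a few lines (illustrative Python, all names hypothetical): POSIX has no single barrier call that covers writes to several files, so the best a file-system-based backend can do is one `fsync()` per file.

```python
import os
import tempfile

def write_and_sync(paths_and_data):
    """Write to several files, then make all of them durable.

    What the speaker wants is a *single* barrier covering the whole
    batch; what POSIX actually offers is an independent (and costly)
    fsync() per file descriptor, which is the point being made.
    """
    fds = []
    for path, data in paths_and_data:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        os.write(fd, data)
        fds.append(fd)
    # No cross-file barrier exists: flush each file separately.
    for fd in fds:
        os.fsync(fd)
        os.close(fd)
```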
C: Basically, so essentially Sage spent quite a while trying to make NewStore (spelled N-E-W-S-t-o-r-e, sorry) work that way, where he essentially used XFS as an allocator and ran everything else on top of it, and he could not eliminate the extra barriers. So, for that reason, we're essentially borrowing Sage's research from when he was designing BlueStore, and assuming that there's no way to make XFS or any other file system do what we needed to do, for the record.
C: That's the one-minute version. So essentially, the only parts of an actual file system we will be re-implementing are the allocator and some logic about how block IO is dispatched; the rest of an actual file system we don't need at all. We don't need any of the directory structures, we don't need any atomic updates to those directory structures, and we don't need atomic moves across directories, which is quite hard.
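The one file-system piece the store does keep, the allocator, can be illustrated with a toy first-fit extent allocator (a sketch only; names and the first-fit policy are this example's assumptions, not the real implementation):

```python
class ExtentAllocator:
    """Toy first-fit extent allocator over a flat block device.

    Free space is a sorted list of (offset, length) extents. There are
    no directories and no inodes -- just the one piece of a file system
    a backend like this actually needs to re-implement.
    """

    def __init__(self, device_size):
        self.free = [(0, device_size)]

    def allocate(self, length):
        for i, (off, ln) in enumerate(self.free):
            if ln >= length:
                if ln == length:
                    del self.free[i]          # extent consumed exactly
                else:
                    self.free[i] = (off + length, ln - length)
                return off
        raise MemoryError("no free extent large enough")

    def release(self, offset, length):
        # Keep the free list sorted; coalescing neighbours is omitted.
        self.free.append((offset, length))
        self.free.sort()
```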
F: But this approach also has some nice side effects, like giving vendors the possibility to use it as a kind of test bed, I believe. And I believe that with the workaround of keeping the kernel file system dependency enabled, getting things like [PMem] integration and [S]PDK integration, we thought, would be impossible. Yeah.
C: You're right about that too; the ZNS integration that Abutalib was doing for BlueStore would have been a lot more difficult if he had to retrofit all of that. So, right, we'll have the same advantages when we do the persistent memory stuff. We don't want to sidetrack too far on those; I just thought it might be worth pointing out why.
C: [BlueFS] is just an adapter for RocksDB's internal interface. So RocksDB has an interface between itself and a disk that speaks in terms of opening files and writing sequentially to them. BlueFS is just a stub of a file system on top of that; it's the part of BlueStore that just kind of connects that really simple interface to BlueStore's allocator.
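The BlueFS idea described above, a tiny file-style shim between RocksDB's expectations and a raw device plus allocator, can be sketched like this (toy code; class and method names are hypothetical, and a bump allocator stands in for the real one):

```python
class TinyFS:
    """Minimal BlueFS-like stub: exposes the append/read style of
    interface a log-structured database expects, backed by raw
    device extents handed out by a trivial bump allocator rather
    than by a real file system."""

    def __init__(self, device_size):
        self.dev = bytearray(device_size)   # stands in for the disk
        self.next_free = 0                  # trivial bump allocator
        self.files = {}                     # name -> [(offset, length)]

    def append(self, name, data):
        off = self.next_free                # "allocate" an extent
        self.next_free += len(data)
        self.dev[off:off + len(data)] = data
        self.files.setdefault(name, []).append((off, len(data)))

    def read(self, name):
        # Stitch the file back together from its extents.
        return b"".join(bytes(self.dev[o:o + l])
                        for o, l in self.files.get(name, []))
```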
G: Hi everybody, my name is Alan. Good morning. So, my name is Alan; I am based in Israel. I also work from the Raanana offices, with Roland. I am currently a student at the Technion in Haifa; I study software engineering and I just started my third year. I joined the team about a month ago or so, a month and a half. Previously I worked at CERN on data acquisition for a new detector called FASER. If you have any questions, feel free to ask me.
F: A couple of things: first of all, I sent the patches we have in our seastar repo to the upstream, and we have got reviews [unclear]. I'm addressing the comments in the new versions. Also, I revised the errorator PR on top of the freshest master, resolving some naming conflicts.
C: [Watch/notify] is a subsystem in RADOS that allows the OSD to maintain some ephemeral state. It's essentially an atomic message-passing system for different clients using the same object. RGW and RBD use it; RBD uses it to create an exclusive-lock messaging protocol, so that different RBD clients can prove that they are the only accessor of a particular RBD volume, because if they are, they can use an exclusive caching mode, right.
C: Okay, the OSD-side support requires some amount of ephemeral state, and some ability to say: I know which clients are connected to me on this object and are expecting notifies to be propagated to them. So there's some complexity associated with it, and there are RADOS primitives for manipulating it. The relevant thing to wrap, if you're looking for it, is watch/notify.
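The semantics just described, an ephemeral per-object watcher set with notifies fanned out to every connected client, can be modelled in a few lines (a toy in-memory sketch, not the librados API; acks and timeouts are elided):

```python
class WatchedObject:
    """Toy model of RADOS watch/notify on a single object: the OSD
    side keeps an ephemeral set of watchers, and a notify is
    propagated to every client currently watching the object."""

    def __init__(self):
        self.watchers = {}                 # watcher id -> callback

    def watch(self, watcher_id, callback):
        self.watchers[watcher_id] = callback   # ephemeral registration

    def unwatch(self, watcher_id):
        self.watchers.pop(watcher_id, None)

    def notify(self, msg):
        # Fan the message out to every connected watcher.
        for cb in list(self.watchers.values()):
            cb(msg)
```

An exclusive-lock protocol like RBD's can then be built on top: a client notifies "I am taking the lock" and every other watcher learns it must stop caching.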
E: Mostly cleaning things up, and fighting with destruction sequences and the problem we talked about: seastar not playing well, as was raised, with RAII in C++. So I have to do a few things around it, doing a graceful termination, a graceful seastar termination. I hope to be able to finish this in the next few days. Because I was in the seastar summit meetings, I was tasked with some development regarding the new SeaStore, and I want to start working on that soon.
C: Yeah, I've been fixing my [unclear] branch; I believe I have it fixed now. It took me about a day to get a docker container set up so that I could run the CI locally, and the bugs were my own, so I believe I have them fixed. I'm waiting for the actual PR Jenkins run so I can be sure, but I think I have a fix now.
B: [I got interested] because I had some work experience with [unclear], where I just came across your document about SeaStore; I was just going through it. I was talking to Josh as well, and he suggested I just kind of attend this meeting to figure out what kind of work is being done and whether I have any scope for contributing.
D: In the last two weeks I managed to start the crimson messenger with the DPDK native stack, and I wrote up a conclusion of what I found. To be short: the first finding is that the native stack has much better performance and much better scalability with more cores; but it seems to me, looking at its code, that there are still some caveats, including some missing features and some bugs in the native stack, that we need to adjust in order for it to be usable with crimson.
D: So I think there are still some more features that need to be adjusted, and I will try to look at how to do that. It seems that some major features are not supported, like starting multiple processes, multiple seastar apps, with the native stack, or having multiple NIC devices, that is, managing multiple NIC devices with the native stack.
D: Because if you start a seastar app with the native stack, you must configure the device to be managed in user space, like setting up the IP address, and currently it only supports one device. And DPDK has two modes: the primary process mode and the secondary process mode. It seems to me we can't start multiple seastar apps with the native stack, because all the apps will start in primary process mode.
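For orientation, this is roughly what the configuration being discussed looks like (a hypothetical invocation; exact flag names vary across seastar and DPDK versions, so treat every option below as an assumption and check `--help` for your build):

```shell
# Seastar app on the userspace native stack: the NIC is taken away
# from the kernel, so IP configuration must be passed explicitly.
./crimson-osd --network-stack native --dpdk-pmd \
    --host-ipv4-addr 192.168.1.2 \
    --netmask-ipv4-addr 255.255.255.0 \
    --gw-ipv4-addr 192.168.1.1

# DPDK itself distinguishes the two roles mentioned above:
#   --proc-type=primary     owns and initializes the NIC
#   --proc-type=secondary   attaches to a primary's shared state
```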
E: [I read] the document of what you did already, and it seems that there is an opportunity there that is important to us. The performance and the scalability work well versus the POSIX stack, which seems to be really problematic when scaling. So it is probably worth our time, sometime in the future (I'm not sure when), to invest in this. But the problems you mention seem to be things that we would want to solve, for example using more than one NIC. Why isn't this [supported]?
C: [The seastar developers] themselves use the POSIX stack, which does support more than one NIC. Their perspective is that they don't care, and using the native stack is more complicated. So, given that they aren't affected by it, they're going to use the POSIX stack. That said, it's not like they'd be hostile to it; it's just that they'd probably want [the patches to come] from us, right.
F: Also, that's also one of the costs the POSIX stack would still impose in this scenario: when your process almost never does a context switch, that is, a mode switch, this would impose some communication between cores. What I mean is that, okay, it's still much better than [unclear].
C: In other words, I'm less worried about the possibility that people will see these benchmarks and get the wrong idea than I am about making the right decisions going forward. Right, so, yeah: this represents an important data point which we're going to use to make decisions. This is significantly faster, with some numbers, of course. So eventually that's going to be the next place to find performance improvements, yeah.
D: Okay, so which is the low-hanging fruit worth implementing?
F: Yingxin, mm-hmm: what were the problems with crimson? Because what I take from this is that the testing was made using the perf test of the messenger. So we have to verify the performance of some subset of the components we have at the moment in the crimson OSD. I would love to see the entire process with the native stack on the underlying [unclear]; that's the problem there.
F: Right now, but right now, well, it's about legacy, actually. When we started work on crimson, we started from a design assuming some kind of interconnect between cores, the idea being to shard the messenger across multiple cores, which turned out to be no good. Then we moved to the one-to-one mapping, and in the future we will consider extending it to multiple cores, at the price of making some modifications to the RADOS protocol to avoid the cross-core [communication].
C: That point is kind of a unique one. What I'm pointing out is that, in general, admin sockets look like networking because UNIX domain sockets are built on a networking sort of basic design principle; but there isn't anything else that works that way. It's just actual networking and UNIX domain sockets; nothing else does that.
F: Maybe it's just not a priority. What is interesting, at least from my perspective, is just to check how much performance we could get by moving to the native stack, by moving to DPDK. That's the interesting point. Though, I would love to see a really, really minimal, as-dumb-as-possible configuration or hack, just to get numbers, just to compare crimson, with even one process, with POSIX versus native. That's all.
C: [We already have] the one-core performance part, and we already have a lot of things to test. But I think what we should take away for now is that this is a fruitful place to look for IOPS improvements in the future; today, though, I think we should ignore it. I think we should just say: okay, we note what work will need to be done on the native stack when we get to that point, but for today we can rest easy, knowing that there are IOPS there to find, and focus on the POSIX stack.
C: It's because our next choices are real. Like, the reason why it was really important to set up performance testing in the first place is that we're about to write a bunch of new code, right? So we need to know what the cost of that code is. But this doesn't really work like that: this is a fairly orthogonal set of choices to the things we're doing right now, choices that we can definitely make in the future just as easily as we can now. Great.
C: I would claim that that would be a mistake, so we don't want to do that. Yep, we should notice if we do that, and not do it; but I don't know that we need to spend a bunch of time building extra testing methodology to work that out. Like, we'll notice; it's pretty straightforward: we'd have to break the seastar interfaces, right.
F: Moving in the sense of std::move: seastar has an abstraction of a socket, and that was a big [obstacle], to be honest, when we were trying to make the sharded messenger. My point is that the abstraction we already have doesn't fit the native stack case at all; we should kill this legacy sooner or later, and killing it would enable us to do the right testing with crimson and native. I would opt for killing it now.
C: Ah, yes, it's a choice we could make, but it's not one we would. What we would do instead is: we would adapt seastar to be able to bind different cores to different sockets on the same TCP connection, which I think is possible in DPDK using the secondary-process thing. It would require extensions to seastar's current implementation, in such a way that one core passes connections to the other cores, but the other cores do the interpretation on their own.
F: There was a discussion with the seastar [community] about whether we can actually move connections between cores, and the answer was very simple: no, never ever. So even if we go, and I believe we want seastar to optimize the number of TCP connections in the system, even if we go with multiple cores per process, most likely we will never do any movement [of connections] between cores.
C: Essentially, what you need to do is: the NIC needs to be handling enough of the TCP stack that it schedules different interrupts on different cores, based on the actual TCP session of the incoming packet, and I'm not totally sure it can do that. Do you see what I'm getting at? I think the [answer] to this problem is that one core acts as a sort of gatekeeper and wakes up the other cores based on which one needs to deal with a particular session, and then those cores read directly out of the NIC, or something like that.
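The per-session steering being described is essentially what NIC receive-side scaling (RSS) does: a stable hash of the TCP 4-tuple decides which core handles a session, so all packets of one session wake the same core. A toy sketch (function name hypothetical; `crc32` stands in for the NIC's Toeplitz hash):

```python
import zlib

def steer_to_core(src_ip, src_port, dst_ip, dst_port, n_cores):
    """Pick a core for a TCP session the way RSS does: hash the
    4-tuple so the choice is stable for the session's lifetime,
    meaning every packet of that session lands on the same core."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % n_cores
```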
C: Though NICs do actually understand a certain amount of IP, and you are able to tell them what to do with packets when they come in; that's part of the extended Berkeley Packet Filter system that exists in the kernel. It's a real thing that has a real solution, but we would have to spend real work on it.
F: The [machinery] we have right now, I mean the socket [cross-core passing] and all that stuff, is for moving connections after they have been initiated, after they have been accepted by the user of the stack; and the dispatching we were talking about earlier would happen much earlier, before the current application sees [the connection].
C: I think we shouldn't bother, really. I think that's right: we shouldn't bother working with the native stack at all right now. Like, we have a big fat problem with [SeaStore], which is a brand-new disk back-end that addresses a new set of devices that we don't actually have that much primary research for. I don't see any particular reason to chase a network problem that doesn't change the choices we have to make, and we can definitely deal with this later.
A: What would be important from the seastar perspective is just designing proper interfaces for zero copy, since copies are rather costly. Okay, yep. What could be important from the native [stack] is the ability to test that, but maybe we could leave it.
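The zero-copy interface point can be made concrete with Python's `memoryview` (an illustrative sketch with a hypothetical framing function): a properly designed interface hands slices that share the underlying buffer, rather than copying bytes at every layer boundary.

```python
def frame_payload(buf):
    """Return the payload of a length-prefixed frame without copying.

    The returned memoryview shares storage with `buf`, which is the
    property a zero-copy messenger interface needs to preserve end to
    end: slicing a view never duplicates the bytes."""
    view = memoryview(buf)
    length = int.from_bytes(view[:4], "big")   # 4-byte big-endian prefix
    return view[4:4 + length]                  # no bytes are copied here
```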
F: Or maybe [unclear]: in the current as-is in crimson, of course, having complex arrangements like spanning multiple cores might be hard; but if we consider only one crimson process, one NIC, and one core inside one crimson process, then it might be very easy to get there.
F: Yeah, it makes sense: having just SPDK at the level of [the disk] seems reasonable, but it wouldn't work on the network side; it would require some kind of, maybe, synthetic load generator, so as to not involve [the kernel at all]. What I mean is that even a single syscall has both direct and indirect costs.
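The direct part of that syscall cost is easy to put a rough number on (an illustrative micro-benchmark, function name hypothetical); the indirect costs mentioned, such as cache and TLB pollution after the mode switch, are the part no simple timer can show.

```python
import os
import time

def syscall_direct_cost(n):
    """Rough average wall time of one trivial syscall.

    os.getppid() enters the kernel on every call, so the loop
    measures the direct user/kernel mode-switch cost; the indirect
    costs (cache and TLB pollution) are not captured here.
    """
    t0 = time.perf_counter()
    for _ in range(n):
        os.getppid()
    return (time.perf_counter() - t0) / n
```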
F: I'm just thinking about, in my mind I'm trying to compare, the effort of such a kind of synthetic testing versus [going] with the native [stack]. Well, we don't actually know whether the socket cross-[core passing] is the only problem; if it is, okay, stripping it is extremely easy, but there might be some other issue; there might be other issues there.
F: All I would argue in that scenario is that I would expect that if somebody wants to investigate, if a user wants to go with the POSIX stack, it's some kind of disjoint scenario. If somebody wants SPDK, he will also want to have DPDK. [unclear]
F: But that way you will judge the scenario [unclear]. What I don't think is [right] is that you still have some syscalls, and their direct and indirect impact coming from the messenger layer potentially affects the rest of the system, and it's extremely hard to actually sum up, to actually weigh up, all those costs. It's terrible, I believe.