From YouTube: Ceph Crimson/Seastore Meeting 2022-11-30
Description
Join us weekly for the Ceph Crimson/Seastore meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A: Alright, let's get started. Let's see, I was out on PTO last week, so I'm back to working on scrub this week. [name], how's it going?
B: Reviews, and I have one quick question about the omap we're going to move here. For the messenger, I'm moving forward to splitting the ProtocolV2 messenger and the handshake, after introducing the frame assembler. And that's all.
A: Sorry, what's up? When you do find out, it's probably worth adding a comment to the ObjectStore interface though.

B: Yeah, sure.
C: Yes, so if anybody has time, please help me to verify it. And we found that, with the sharded store support, the performance has some regression, and we're still working on the performance regression. We found the reactor utilization is about 60%, while with the default, without sharded support, it is over 99%. So we'll still try to verify it, by adding more clients to do the test.
D: Yeah, as well. This week I was still debugging the PG remove code. Here's the PR I addressed; I think it's very close, and I will add a unit test case for this PG remove, and after that I will get the PR ready for review. That's all my work this week.
D: I have one question. Well, it's not actually me; my colleague Zhongsung is trying to implement some kind of cached-data promotion function, which loads frequently accessed data from the cold tier to the hot tier, like what bcache or OpenCAS is doing. The reason we are doing this is that we think there are cases where the memory space which can be used as a data cache may not be large enough to hold all the necessary hot data in memory.
D: So I think we need to load frequently accessed data from the cold tier to the hot tier. Does this make sense to you guys?
A: In principle, yeah, I mean, that makes sense in principle, I guess. My main question is ordering: is this actually the most important thing to work on right now in SeaStore?
A: So all you have to do to make at least an ideal version of a feature like this work is to provision, let's say, two gigs of cache space, but have a working set of size 16 gigs. That's all you would need: the working set would greatly exceed the cache, or the fast tier, the memory size. So unless there's some mechanism for moving frequently read but not mutated data up into the faster space, you'd get read-access latencies from the colder tier, not the faster tier.
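[Editor's note: the read-driven promotion mechanism discussed here can be sketched as a toy model. This is illustrative only, assuming a simple per-key read-frequency threshold for promotion and LRU eviction from the hot tier; none of the names are Crimson/SeaStore, bcache, or OpenCAS APIs.]

```python
from collections import OrderedDict

class TieredReadCache:
    """Toy sketch of promoting frequently read data from a large cold
    tier into a small hot tier. Hypothetical names; illustration only."""

    def __init__(self, hot_capacity, promote_threshold=3):
        self.hot = OrderedDict()   # key -> value, kept in LRU order
        self.cold = {}             # backing tier, assumed much larger
        self.hits = {}             # per-key read-frequency counter
        self.hot_capacity = hot_capacity
        self.promote_threshold = promote_threshold

    def write(self, key, value):
        # Writes land in the cold tier; promotion here is read-driven only.
        self.cold[key] = value
        self.hot.pop(key, None)    # invalidate any stale hot copy
        self.hits[key] = 0

    def read(self, key):
        if key in self.hot:        # fast-tier hit
            self.hot.move_to_end(key)
            return self.hot[key], "hot"
        value = self.cold[key]     # slow-tier read
        self.hits[key] = self.hits.get(key, 0) + 1
        if self.hits[key] >= self.promote_threshold:
            if len(self.hot) >= self.hot_capacity:
                self.hot.popitem(last=False)  # evict LRU from hot tier
            self.hot[key] = value             # promote the hot key
        return value, "cold"
```

With a working set larger than the hot tier (the 16 GB vs. 2 GB scenario above), repeated reads of a key cross the threshold and migrate it up, so later reads hit the fast tier instead of paying cold-tier latency.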
A: So it's not difficult to construct a scenario where this is useful. It's more a question of: is the caching machinery at a level of maturity where this is a logical feature to do now, or is there more foundational stuff that we want to do first?