From YouTube: Ceph Crimson/SeaStore 2021-11-10
B
This week, I think we merged the two journal PRs. Let's see, I got some reviews done on Myoungwon's paddr_t PR.
B
He is trying to generalize the paddr_t structure so that only segment manager internals care about the segment part, so that we'll be able to use addresses for random-access devices seamlessly within SeaStore and the transaction manager.
B
So that's getting closer, and Joseph is working on the ZNS segment manager; he's gotten most of the functionality working. We have a couple of actual ZNS devices in one of the lab nodes, so that is also getting close.
C
Last week I was working mostly on the classical OSD, but now I am back in Crimson. Yesterday I sent a PR for the Rook-on-Crimson effort that removes the necessity for any extra Crimson-specific configuration on the client side in a Kubernetes cluster. I also scheduled a new teuthology run and addressed the comments in the msgr learn-address-from-peer PR. Basically, that's it. I think the reason we cannot have those extra checks is because of how networking in Kubernetes is orchestrated.
C
Basically, every pod has a dedicated local IP address and a local network interface. On a node, all pods hosted inside the node boundary have those network interfaces bridged. However, when it comes to communicating with a pod placed on a remote node, the addresses get remapped. Basically, it's a form of NAT, so these are node-local internal addresses that can be used only within the node boundaries. Like, let's say...
C
X.5 is basically visible publicly as X.1, the address of, let's say, the public interface of the entire node. There is some NAT going on there, and that's the reason we cannot use the addresses from the hello frame that are sent back from our peers. We need to learn them from the BSD socket layer.
C
Yes, I see what you mean. I was thinking about such a solution, and it seemed reasonable under the assumption that we have control over the network entities.
C
Initially, I saw the problems only with ceph-mgr, which I assume is under our control when it comes to configuration. However, and unfortunately, the same thing happens with clients, and my take is that we cannot simply assume we have control over the clients' configuration.
C
B
All right, how's it going?
D
Oh, the small PR is merged in, and I found that crimson-osd can't start up, because there is an assert happening in the interruptible future when setting the interrupt condition: the global interrupt condition is not cleared when the next interrupt condition is to be set. So there is an assert happening, and from the log I saw that the first peering event is still running and has not called reset, but the second peering event starts.
D
It wants to start, so it seems, as Xuehan has said, that we don't support nested interrupts. But this issue only happened in my teuthology test environment; I didn't see it on my local machine. I will discuss with Xuehan offline how to address it.
D
And that is just in the PG layer: the first PG peering event is not finished, so we didn't clear the global interrupt condition, but the second one happens and wants to set the global interrupt condition. There is a check there whether the interrupt condition is equal to the global condition, and they are not equal.
A
B
D
Sure, so yeah, we need to address it and then continue the test, because it is blocked there.
A
E
Actually, last week I tried testing against upstream, and with upstream master I am able to see which object store...
E
I mean, on upstream master, when I tested against upstream master, I can see the OSD object store, like BlueStore.
A
E
So, suppose I wanted to test against SeaStore: is there any specific branch for SeaStore?
B
SeaStore isn't ready yet; it's definitely going to crash, so we're not that far yet. I think right now, for any testing you want to do, you want to focus on BlueStore.
B
That makes sense. If you do want to test SeaStore, you can run it, but you'd have to... I'm not even sure. Basically, none of the deployment tooling is able to deal with SeaStore yet, so yeah, it would be a lot of work.
B
A
F
Are you trying to implement the hard disk support?
E
No, no, I wanted to test some of the system features.
B
Yeah, for just general-purpose testing you could, but before you would actually be able to test SeaStore, you would need to wire it through most of the existing deployment stuff, which hasn't been done yet. If you did want to test it on vstart, you can do that; there's the --seastore option. But I was guessing you wanted to use cephadm or something.
F
Last week I rebased the LBA strategy code to the master branch and tried to test this work. However, I ran into a series of running-out-of-segments failures. At first it was running out of segments because the GC didn't start at all; it didn't reclaim any space, and that's because of a 32-bit int type overflow.
F
So I submitted a PR to fix it. But after that I was still running out of segments, and I'm still trying to find the root cause of that problem. I was also looking into the problem where we have an interruptible-future assert triggered. I think the situation was like this: at first, one peering event starts to be processed.
F
It starts to be processed and creates an interrupt condition in the background, but then, before it finishes processing, it pushes the PG state to active and tries to start another peering event to recover the PG's data. So we have a nested interrupt-context problem.
F
Yeah, the first is created by the peering event that pushed the PG state to active, and before it finishes processing, it starts another peering event to recover the PG's data. So we have a nested interruptible condition.
F
B
F
So I think I should postpone the second peering event until the end of the first one, right?
A
B
F
B
F
Okay, okay, I'll try to do that. By the way, I want to apologize for that; we were speaking Mandarin before this meeting. Oh.
B
No,
no,
not
at
all,
I'm
glad
you
guys
were
able
to
talk
more
comfortably
actually.
F
A
Yeah, last week the journal submitter was merged, and I continued working on merging journal headers. I submitted another PR to adjust the journal head, the journal tail target, and the committed_to, which is also merged. So I'm still working on the merging-journal-headers feature this week.