From YouTube: 2020-03-17 :: Ceph Crimson Meeting
B
Next: the class they implemented, the functions for the various plugins. I was actually implementing projects for cls_lua, and when I wanted to do testing, I actually found out that there were some sort of regression problems. So a while back Rajat made a pull request implementing some functions that we use for the operations of the object class, in order to pass the cls_hello test.
B
The necessary functions for cls_lua. But when I tested it, it failed even a very simple test of cls_hello, which was very weird, because the last PR I made to object classes was actually passing those tests. So there seems to be some sort of regression problem since the last PR. I was just trying to analyze it and tease the problem out, so I can find a solution.
B
Let me dig into it a bit. So, what I'm saying is: there was a PR made by Rajat a while back, with some implementations he did, basically the necessary functions for cls_lua to work. Since then it has actually stopped passing, regardless of my changes. Rajat also ran it on his machine against current master, and it isn't working. So that's one thing right now: I'm trying to analyze why it's not working.
E
Actually, what I suggested was to expand our tests to cover more functionality as we implement, because, as you are mentioning, the failure of tests could have been introduced by any of the changes. It could be a regression. I'm wondering; I think you need tests that exercise the object-class types.
F
Oh, and any of you guys can ping me if you need help splitting it up or have questions. I suggest that you coordinate on the document we started for recovery, whether you want to do recovery or scrub. Either way, it's a good idea to keep it in Google Docs, just because we're all in different time zones.
A
I think I implemented the PG scanning feature and moved on to backfill. However, in the second half of the previous week, I jumped for a second to bug hunting for Octopus. There are two or three bugs I'm working on; two of them are about backport issues, and one is actually a quite nasty corruption problem that was unveiled by certain toolchain versions.
A
It turns out that removing from our linker script the part marking all symbols in librados as local helps, but it could also be a bug in the toolchain. Anyway, I was not able to replicate it on my local machine, which is Ubuntu LTS, so I'm going to run teuthology testing to verify how many platforms are affected. That's me.
F
Peering isn't in the hot path, and neither is scrub, so the performance properties are maybe not as important. But yeah, if the way it's implemented generates a lot of garbage, that may not be the best, though I'm not sure. I guess I'm saying I don't know; you'll have to evaluate and see what you think is the best option. Okay.
F
It's just that in order to do anything, I think you need to generate garbage, because all of the events and state-machine things are dynamically allocated; they all need to inherit from the abstract bases required to make the templating work. So yeah, I'm just saying: if it doesn't seem like the right answer, don't feel like you have to use it. And it's not especially easy to read, for what it's worth, when you use the more advanced features or have nested state machines.
F
They work for peering, because peering is genuinely that complicated; most of that is not incidental complexity, it's real complexity! There are a lot of events that really do logically get handled at more than one level of peering. But for things that are not as complicated as peering, that may be more work than we actually need to bother with.
F
I believe that. So what I mean is that peering has events that are handled separately at multiple levels of the state-machine hierarchy at the same time. Like AdvMap: the current GetMissing state looks at it for one specific thing, then the peering state as a whole looks at it for a separate reason, and it gets forwarded up the stack for different reasons. So those features in Boost.Statechart are helpful. It's not just that the states are nested; it's that we actually care about the nested handling of events.
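The multi-level dispatch described here can be sketched in plain C++. AdvMap and GetMissing are the real names mentioned above, but the classes below are a simplified stand-in for the pattern Boost.Statechart provides, not the actual Ceph code: each level of the hierarchy gets its own look at the same event.

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for Ceph's AdvMap event: a new map epoch arrives.
struct AdvMap { int new_epoch; };

// Inner state: examines the event for one specific thing.
struct GetMissingState {
  std::vector<int> seen_epochs;
  void react(const AdvMap& e) {
    seen_epochs.push_back(e.new_epoch);  // inner-level reaction
  }
};

// Outer state: reacts to the same event for a separate reason,
// after forwarding it to the nested state.
struct PeeringState {
  GetMissingState inner;
  int latest_epoch = 0;
  void react(const AdvMap& e) {
    inner.react(e);              // innermost state handles it first
    latest_epoch = e.new_epoch;  // outer level also cares, independently
  }
};
```

The point of the sketch is that handling the event at one level does not consume it; the same AdvMap is meaningful at every level of the hierarchy.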
C
I don't think nesting at that level exists in what we have in scrubbing now. There is some nesting, but it's not really terribly important, and I really think this is a very aging framework. It's not easy to use compared to modern alternatives.
F
So I spent some time putting together a much more detailed explanation of what I see as the important, invariant parts and what I see as the variable parts: the parts we can discuss, the design elements that have options. I just wanted to frame the discussion first along sort of database-design lines.
F
What are we storing, and what is the ordering we want to impose? And then, secondly, once we've decided what that needs to be, we can start talking about how the on-disk structure can be optimized to serve that purpose and also avoid other problems. So anyway, I think we could discuss that at the end of this. That's what I've been working on.
D
Yesterday, I found another issue. It seems to be caused by the fact that when the OSD handles an OSDMap message, that message could contain multiple OSD maps, but the OSD only consumed the last one. So if there are operations waiting on a previous OSD map, those operations never get their future set; they're just stuck.
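A minimal sketch of the fix this implies: consume every epoch carried in the message, not just the last one, so that a waiter registered on an intermediate epoch gets woken. The EpochWaiters type and its methods are hypothetical illustrations, not Crimson's actual interface:

```cpp
#include <cassert>
#include <functional>
#include <map>

// Hypothetical waiter registry keyed by OSDMap epoch.
struct EpochWaiters {
  int committed = 0;  // highest epoch consumed so far
  std::multimap<int, std::function<void()>> waiters;  // epoch -> callback

  // Register a callback to fire once `epoch` has been consumed.
  void wait_for(int epoch, std::function<void()> cb) {
    if (epoch <= committed) cb();
    else waiters.emplace(epoch, std::move(cb));
  }

  // An OSDMap message may carry epochs [first, last]; consume each one in
  // order so waiters on intermediate epochs fire instead of hanging forever.
  void consume_maps(int first, int last) {
    for (int e = first; e <= last; ++e) {
      committed = e;
      auto range = waiters.equal_range(e);
      for (auto it = range.first; it != range.second; ++it) it->second();
      waiters.erase(range.first, range.second);
    }
  }
};
```

The bug in the transcript corresponds to a `consume_maps` that only ever sets `committed = last`: a waiter on an epoch strictly between `first` and `last` would then never fire.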
G
Another remaining issue is a heartbeat racing problem, which involved two or three independent issues. I also explained that in the pull request, and I'm currently looking at a solution to deal with it, but I still need time to think it through. After that, I think I will also send a PR later to modify the messenger connect interface.
F
Sorry about that; okay, sorry, I was distracted. Let me ask you a couple of questions. The first is: I believe you were proposing to add another 96 bits derived from the namespace and the OID. Is that true? Maybe not exactly 96, but some additional number of bits.
F
But then you literally don't store them; it just has the property that, if there's a collision, everything stops working. It's a complete failure. So that is possible; there are systems that work that way, where the application has to be careful to avoid collisions. I claim that that is impossible here, though. You do plan on storing the namespace and the OID, though?
F
I claim that you are arguing for version 2a from my document; it's just a longer hash. The purpose of the hash is just to speed up lookup. Great, thanks. So I think that is not a crazy thing to do. But I will point out that collisions aren't actually a problem in this case, because we can handle them, and indeed we can handle them well: we just do the same thing every B-tree does and index the strings. In your case, you're claiming that a collision is rare.
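The "do what every B-tree does" idea can be sketched like this: the hash is only a lookup accelerator, entries sort by (hash, full key), and on a collision we simply fall back to comparing the stored strings. `toy_hash` and the `Entry` layout are illustrative placeholders, not the proposed on-disk format:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// The full namespace+OID string is always stored; the hash only narrows
// the search, so a collision degrades to a string compare, never a failure.
struct Entry {
  uint32_t hash;
  std::string key;
  int value;
};

// Hypothetical toy hash (FNV-1a) standing in for the real one.
inline uint32_t toy_hash(const std::string& s) {
  uint32_t h = 2166136261u;
  for (char c : s) { h ^= static_cast<unsigned char>(c); h *= 16777619u; }
  return h;
}

// Sort order: hash first, full key as the tie-breaker on collisions.
inline bool entry_less(const Entry& a, const Entry& b) {
  if (a.hash != b.hash) return a.hash < b.hash;
  return a.key < b.key;
}

// Binary-search on the hash; compare strings only when hashes match.
inline const Entry* lookup(const std::vector<Entry>& sorted,
                           const std::string& key) {
  Entry probe{toy_hash(key), key, 0};
  auto it = std::lower_bound(sorted.begin(), sorted.end(), probe, entry_less);
  if (it != sorted.end() && it->hash == probe.hash && it->key == key)
    return &*it;
  return nullptr;
}
```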
F
So for the most part you just store them next to each other, and that's true. But I want to point out that if we do this correctly, it will even behave correctly if there are a lot of collisions. I wanted to point out that with RBD objects, per this little table I put down at the bottom: if we have 128 four-terabyte drives with 3x replication, with only the 32-bit hash we will have about one object for every 132 hash values.
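The arithmetic behind a table like that can be sketched with two standard estimates: the occupancy (objects per hash value) and the birthday-style expected number of colliding pairs. These formulas are the textbook ones, not something stated in the meeting, and any concrete drive counts or object sizes plugged into them are illustrative assumptions:

```cpp
#include <cassert>
#include <cmath>

// With n objects uniformly hashed into 2^bits values, the occupancy is
// n / 2^bits (objects per hash value)...
inline double occupancy(double n_objects, int hash_bits) {
  return n_objects / std::pow(2.0, hash_bits);
}

// ...and the expected number of colliding pairs is roughly
// n^2 / 2^(bits+1) (birthday approximation).
inline double expected_collision_pairs(double n_objects, int hash_bits) {
  return n_objects * n_objects / std::pow(2.0, hash_bits + 1);
}
```

The qualitative point of the table follows directly: shrinking the per-object size by a few orders of magnitude raises `n_objects` by the same factor, which moves a sparse 32-bit hash space toward one where padding the hash out with extra bits starts to pay off.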
F
That said, with RGW objects, where the size can easily be several orders of magnitude smaller, perhaps as small as 4k, that one-in-a-hundred becomes more like 10 to 1, which would be less efficient, right? Mm-hmm. Under those conditions it might be valid to choose to pad out the hash. But I do want to point out that we're not replacing the variable-size fields with a hash; we're just using a hash prefix to speed up the lookup, and all we're discussing is how long to make that hash.
F
Do you agree with me there? Yeah. So, the other things you brought up were the efficiency of doing binary searches and cache efficiency. It is not necessarily the case that you have to store all of the fields of the onode in each tree level. I talked about that a little bit in the document, but obviously we'll have to go into a great deal more detail before we start.
F
So at this point, all we're discussing is the layout of the 4k block, right? So let's say we're storing some 20 keys, and there are a few of them that happen to share hashes, but the rest of them have distinct hashes. Your problem is that you want to optimize the search in the event that we're searching for something that actually is unique, that only has one hash value. There are ways of setting up the onode layout, though, such that in those events you don't have to look at the strings.
F
You'd only have to look at the strings if there are actually two of them with the same hash. Yes. So I want to point out that all of those things are true. I just want to make sure we totally agree on the fact that we're not going to fail to do lookups in the event of a collision, and that the thing that influences the size of the hash is simply a matter of reducing the number of times that happens.
F
We should choose a pretty simple approach, implement that, and then we can start prototyping improvements to it, because different choices here will be good or bad depending on a number of factors, including how many objects are present on the OSD and how long the keys are. For one thing, RBD images have very short keys, but that's not true of RGW. I'm sorry: RBD blocks have very short keys, but RGW object names can be really long, because the user controls them. I doubt it will be the case that one layout is good for everything; we will want to prototype more than one option. Okay, as far as the key-compression idea, I think the main thing here is just that this is a way to do prefix compression. If you include the whole key, at least up to the suffix you're going to drop, in the very first key, you can then drop the common prefix for the subsequent ones, if you adopt some encoding that makes that work. It can be as simple as a one-byte prefix on each key that indicates how many fields are shared, or something more sophisticated if you can come up with something better. They'll have different advantages and drawbacks, but the design space there is pretty big, I think. Yes, but do you agree with me on the core concept? I guess what I see as the core concept is an agreement on what tuple we're storing and how it orders; once we all agree on that, everything after that is a discussion of concrete on-disk layouts.
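The one-byte-prefix encoding described above is classic front coding. The sketch below works at byte granularity (the transcript says "fields", which could equally mean whole key components); the `CodedKey` layout is one possible instantiation, not the actual proposal:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Front coding: each key records how many leading bytes it shares with the
// previous key, plus only the differing suffix.
struct CodedKey {
  uint8_t shared;       // bytes shared with the previous key (capped at 255)
  std::string suffix;   // remainder after the shared prefix
};

inline std::vector<CodedKey> encode(const std::vector<std::string>& keys) {
  std::vector<CodedKey> out;
  std::string prev;
  for (const auto& k : keys) {
    size_t n = 0;
    while (n < prev.size() && n < k.size() && n < 255 && prev[n] == k[n]) ++n;
    out.push_back({static_cast<uint8_t>(n), k.substr(n)});
    prev = k;
  }
  return out;
}

// Decoding walks forward, rebuilding each key from its predecessor.
inline std::vector<std::string> decode(const std::vector<CodedKey>& coded) {
  std::vector<std::string> out;
  std::string prev;
  for (const auto& c : coded) {
    prev = prev.substr(0, c.shared) + c.suffix;
    out.push_back(prev);
  }
  return out;
}
```

Note the trade-off mentioned in the discussion: the very first key in a block must be stored whole, and random access within a block requires decoding from the front, which is one of the "advantages and drawbacks" a more sophisticated encoding would address.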
F
For when you go back and forth between the vectors, right? Yes, you can make the internal layout as complicated as you want. As long as we agree that ultimately you're storing a linear sequence of keys with these components, and you may be omitting some of them because you have ways of compressing them out, but semantically that's what it is. Yes.
F
Cool. In that case, I'm willing to entertain a lot of different options for how to do the layout. I just wanted to make sure we agree on what the basic semantics should be, mostly that, yeah, we have to write down the OID and the namespace, because, like I said, there are designs where you don't do that, but they have this incredibly catastrophic failure mode. Yes.
F
Please, Radek. For instance, at one point this layout will actually be duplicating the namespace in the key, and the reason for that is that every Rados object has an object-info attribute that includes the hobject as one of its subfields. So I'm going to suggest that one of the very early optimizations we're going to make is to drop that field from the object info, because it's wasteful: it's a duplicate of the key, so we don't need it, and I think that's how we solve that problem.
F
So I think it would be helpful; it sounds like you have a lot of thoughts on how you want to do the layout. Maybe for next week we could try to get a document with at least a sketch of how the root, internal, and leaf nodes look. I would encourage you not to try to solve every problem; try to go with a relatively simple approach, and then...
F
I don't think unnecessary details are possible here, because we're actually truly trying to game out exactly how the 4096 bytes we have in each node map onto the keys we're storing in them. Okay. But remember, at the end of the day, you do have at your disposal the simplest possible solution, which is just a vector of keys with their corresponding pointers.
F
Actually, okay, so I also wanted to point out something. Let me talk a little bit about compatibility: I don't think compatibility is a showstopper. Your analysis is correct; generally speaking, Crimson isn't going to coexist long-term with classic OSDs. Moreover, this wouldn't be the first time we've changed the hobject ordering. We've done it before, a while back.
F
Yeah. Back in the day, I implemented a scheme for creating a recursive folder structure so that we could efficiently list things in lexicographic order, despite the fact that ext4 and XFS do not, in fact, support that. The way I chose to do that turned out to have something very stupid embedded in it, so when BlueStore came about, we didn't want to carry that stupidity forward, and we chose to change the ordering. It was actually a change in the byte ordering of the hash we used to list hobjects. Semantically, it isn't really that different from what we'd be doing by increasing the size of the hash. So we've done it before; it's just that it's a giant pain, because you have to bubble it up to the hobject structure itself, and the hobject structure is exposed both through the backfill protocol and in librados itself, though that interface is not widely used. So it is possible.
F
To some extent librados is not a purely internal interface, but neither is it a we're-married-to-this-for-the-rest-of-our-lives interface; it's intermediate. We'll never break the way VFS interfaces with userland; at a basic level, we'll never break the way RBD works with the various things that use it. But librados' purpose is to be efficient for RBD, RGW, and CephFS. And, like I said, it wouldn't be the first time; no one really complained last time.
F
Okay, I'm wondering, because I think it probably doesn't work: PGLS and PGNLS ops, when they go through, have special rules, so it actually wouldn't surprise me if those interfaces are broken. But all of this is a digression. I think what I'm saying is: that's correct, we're not going to tie ourselves unduly to the exact way in which listing currently works. The only thing that has to be true is that, for backfill to work, all of the OSDs in the PG have to list the same way.
F
It used to genuinely matter, because we used the rados-level listing to do bucket listing, but that was back in 2011; that was a long time ago. Even if it does do listing now, it most likely doesn't care about the ordering; it's likely doing it for GC purposes, which is very different. It's not important that we preserve ordering in that case. Anyway, like I said, it's a thing I'm wary of; it would be a huge pain, and we would have to go through this exercise.
F
We may want to consider variable layouts. For instance, if we anticipate a pool having primarily four-megabyte objects, then we wouldn't want to spend the extra 96 bits. But if we anticipate a pool having primarily 4K objects... well, it's actually much more complicated, because the extra 96 bits is however many additional bytes, which really eats into the percentage overhead per object. So even then it might not be worth it; we might rather pay the CPU overhead than the storage overhead.
F
No, we don't. I'm going to go ahead and just flat-out state that anything the OSD does that is linear in memory in the number of objects stored is absolutely a no-go.
F
We need to be able to page things in and out of memory, or we will have hard bounds on how much memory you have to have per gigabyte of disk, which we'd really rather not have. It means, for instance, that high-density setups, a big old slow spinning disk in front of a little ARM chip, couldn't work, right? We always have to be able to deal with objects in bite-sized pieces.
A
In the discussion there were some mentions of CPU-cache usage when doing lookups over the onode tree. One thing I would like to point out is that, since we don't have any batching, I wouldn't assume that the blocks, the 4k or 8k onode-tree nodes, will be in the CPU cache even when they're in memory; there will likely be a cache miss even for the top layer of the tree. Interestingly...
F
It's not just cache misses and hits you're interested in there; it's efficiency. So, the classic example: let's say you're trying to search a vector of pairs of integers. If you store the pairs of integers together, then, say they're 4-byte integers, for every eight bytes you read, you read one key. By contrast, if you segregate them into two distinct vectors, with the keys first followed by the values, then as you scan the first vector you get twice as much throughput while you're searching the keys, and afterwards you only need one additional read to get the corresponding value. It's not just about hits and misses; it's about memory efficiency and locality.
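The keys-separate-from-values point can be sketched as a struct-of-arrays node: the binary search touches only the densely packed key vector (twice the keys per cache line compared to interleaved pairs), and a single indexed read then fetches the value. The `SoaNode` type is an illustration, not a proposed layout:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Struct-of-arrays node: keys and values in separate, parallel vectors.
struct SoaNode {
  std::vector<uint32_t> keys;    // kept sorted; this is all the search scans
  std::vector<uint32_t> values;  // values[i] corresponds to keys[i]

  bool find(uint32_t key, uint32_t* out) const {
    // Search walks only the key array, so each cache line read brings in
    // twice as many candidate keys as an array of (key, value) pairs would.
    auto it = std::lower_bound(keys.begin(), keys.end(), key);
    if (it == keys.end() || *it != key) return false;
    *out = values[it - keys.begin()];  // one extra read for the value
    return true;
  }
};
```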
F
A
C
F
F
F
D
G
Just like with RBD: we have fewer objects, so the keys can be smaller and the density can be increased. That's why. But with RGW there are many more objects, so we might need a longer key, and that case will have less density. So those two cases' demands conflict with each other.