From YouTube: 2017-DEC-06 :: Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://tracker.ceph.com/projects/ceph/wiki/Planning
A: Okay — it's recording, yeah. Right, so: a high-level overview of where we're going, what's being planned, and so on. Josh, do you want to talk about that, or do you want me to do it?
A: Right, yeah. I just wanted to talk about some high-level stuff so people know where we're going. The general trends are that hard drives are getting bigger, and people will be using those for bulk, colder storage; but NVMe is becoming increasingly common and standard, and there are going to be a lot of flash deployments — flash-only deployments — going forward, and it won't be long before everything is flash.
A: So we need to go fast and live in that world, which is not really where we grew up. There's a lot of attention and effort going into performance refactoring. It's a little bit up in the air still, but the likely path is that we're going to embrace DPDK and SPDK for doing the network and storage IO. I think those will still be optional, though — you'll still be able to use some of the other interfaces as well.
A: So there's cleanup and refactoring just to make the code more modular, more asynchronous and event- and state-driven, and we're looking at futures programming frameworks to enable that and make it a sane programming environment. That's the main direction Josh and Greg are investigating right now. I just want to make sure everyone knows about it — this is high level. Josh, do you want to go into any more detail about that?
C: Well, okay. I think, more generally, one of the key things that needs to happen, in addition to changing some of the structure of the code, is making the code a lot lighter weight. We do a lot of parsing of data structures and copying things around as we move through the stack, and as we can afford it, we need to eliminate a lot of that extra work, at least for the important fast path for actual IO requests.
A: So the goal at the end of this — as if there's really an end — is that when the data comes in off the wire, it lands in memory and stays largely unchanged as it traverses the stack. Whereas right now, it lands in a buffer, gets copied into user space, we parse it into a bunch of other data structures in the message class, and then we build a couple of different intermediate representations and transactions before it eventually goes out. There's a lot of stuff.
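The copy-heavy versus copy-free contrast being described can be sketched outside of Ceph. The real code is C++; this Python toy, with invented names, only illustrates the idea of parsing a wire buffer in place rather than copying the payload at each layer:

```python
import struct

def parse_with_copy(buf: bytes) -> bytes:
    # header: 4-byte big-endian length, then payload;
    # slicing a bytes object copies the payload into a new object
    (length,) = struct.unpack_from(">I", buf, 0)
    return buf[4:4 + length]

def parse_zero_copy(buf: bytes) -> memoryview:
    # a memoryview slice references the original receive buffer: no copy
    (length,) = struct.unpack_from(">I", buf, 0)
    return memoryview(buf)[4:4 + length]

wire = struct.pack(">I", 5) + b"hello"
assert parse_with_copy(wire) == b"hello"
assert bytes(parse_zero_copy(wire)) == b"hello"
```

The second form is the spirit of the fast-path goal described above: the payload the client sent is the same memory the stack hands onward.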
A: In my mind there are two complementary, but somewhat orthogonal, pieces. One is streamlining the data structures and the allocations as they traverse the stack. Honestly, I haven't looked as closely at the IO path — right now I'm just looking at the peering stuff — but I noticed that in the peering path there are so many layers that are totally unnecessary.
A: There are a lot of intermediate copies of things. For example, the hobject_t structure contains a string that copies data that's in the original message, which is itself a decoded copy of the thing that was actually read off the wire, and so on. So I think there's a lot of stuff like that, where we can just walk through the code and look at where data structures can be combined. This is something that Matt's team did a lot of a long time ago.
A: Those same ideas apply. It's tedious and takes a lot of work, but that's the path of just reducing the amount of computation and work the CPUs do — memory accesses and so forth. The other complementary piece is the structure of the code, so that it's understandable and state-driven, and so that it can be composed in a run-to-completion style, as opposed to the threaded blocking style.
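As a rough illustration of the run-to-completion style being contrasted with threaded blocking code — using Python's asyncio purely as a stand-in, since the actual candidates under discussion are C++ frameworks like Seastar and Boost.Asio, and the op names here are invented:

```python
import asyncio

async def handle_op(op_id: int) -> str:
    # each await is an explicit yield point; between awaits the handler
    # runs to completion without parking a dedicated thread on a lock
    await asyncio.sleep(0)  # stand-in for a storage/network I/O completion
    return f"op-{op_id} done"

async def main() -> list:
    # many in-flight ops interleave on one event loop, rather than
    # one blocked thread per op
    return await asyncio.gather(*(handle_op(i) for i in range(3)))

results = asyncio.run(main())
assert results == ["op-0 done", "op-1 done", "op-2 done"]
```

The point is structural: state lives in the coroutine between yield points, instead of in a thread's stack blocked on a condition variable.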
E: And rather than requiring you to allocate per-op state, like an op stack you go back and modify, futures frameworks can keep that state for you, which would help a lot with this. That's what the Tokio model does. I don't know how pretty the syntax would be, but we could build the same thing Tokio is doing — or at least most of it, enough to get the automatic memory allocation.
A: Okay, so I don't think we need to get into all of this right now — I just want to provide a bit of an overview, to make people aware of what the current efforts are. So yes: lots of investigation on the Boost.Asio front, on the Seastar front, on the futures front, and a lot of decisions to be made there. If you have experience in this area, or if you're interested, please chime in.
A: The other thing is the high-level plan for tiering — and I use "plan" loosely, because nobody is actually doing this work, but I think this is the direction, at least the one I'm looking at, so chime in if you disagree. It is to replace or supplement the current tiering functionality with a more traditional tiering model, where the objects in the base pool are redirects, or pointers, to another tier, and then to reuse a lot of the same pieces around promotion and demotion and proxying reads and writes that...
A: ...we built for cache tiering. Eventually, once we get to roughly equivalent functionality using that tiering model — where you have the full index in the base tier, instead of this sparse "maybe you have the object, maybe you don't" situation in the cache — then we could mark cache tiering deprecated in a release and later remove it, because people could migrate to the new tiering model.
A: That's the hope — or my hope — because it would strip out a lot of complexity around dealing with the case where you don't know whether the object exists or not, and you have to be correct despite that. Whereas with the model where you have a full index of the objects — and the object might be there, or it might be a redirect to somewhere else — you know the full story, and you can make more assumptions. Does that jive with your hopes and dreams, Josh?
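A toy model of the redirect-based tiering idea being described (all names here are invented; this is only the shape of the design, not Ceph code): the base pool keeps a full index, where each entry is either the object data or a redirect to another tier, so "does it exist?" is always answerable locally:

```python
# base tier: a full index — every known object has an entry
base_tier = {
    "obj1": {"kind": "data", "value": b"hot bytes"},
    "obj2": {"kind": "redirect", "tier": "cold", "key": "obj2"},
}
# a slower tier holding the demoted content
cold_tier = {"obj2": b"cold bytes"}

def read(name: str):
    entry = base_tier.get(name)
    if entry is None:
        return None                 # full index: a definite "no such object"
    if entry["kind"] == "data":
        return entry["value"]
    return cold_tier[entry["key"]]  # follow the redirect (proxy the read)

assert read("obj1") == b"hot bytes"
assert read("obj2") == b"cold bytes"
assert read("obj3") is None
```

Contrast this with the sparse cache model, where a miss in the cache tier forces a probe of the base tier just to learn whether the object exists at all.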
C: Yeah, more or less. I mean, there's a lot of code there that I would ideally like to not have, but it's kind of inevitable — we need something like that to do tiering.
A: I'm not sure it's really about performing better or worse. There are cases with cache tiering where things are slow because you don't know whether the object exists, and so you might proxy through to the base tier just to find out — things like that would be better. But more generally, it's not really a performance thing; it's a code-simplicity and architectural-simplicity thing. And it's a bit more flexible, right, because you can have multiple slow tiers, and we can plug the deduplication stuff into this.
A: I'm not thinking about this in terms of performance — well, it's kind of performance. I mean being able to keep the hot data on fast storage and put all the cold stuff on slow storage, but still maintain the uniform view that RADOS provides. This will do that better, right?
A: So the promote issue is mostly gone with the existing stuff, right, because we can proxy reads and writes — that issue is already resolved with cache tiering. I think the main issue with cache tiering is just that the code is so complicated and fragile and hard to maintain.
D: No matter how we get there, it still seems like there's this trade-off between trying to put hot data on the cache tier versus just letting whatever data sit in the cache tier and not doing excessive promotions. Right? No matter what we do, we're not going to get around that fundamental fact.
D: So I think, regardless of what the code behind this looks like, we're not going to get around that fact — we just have to accept it. We're not really talking about keeping all the hot data on the cache tier; we're lucky if we can keep some hot data on the cache tier. What we're really talking about is having some data on the cache tier without overwhelming the rest of the cluster with excessive promotion.
A: The way I think of it is that what we have right now has no upward mobility — it's fragile and hard to maintain, and kind of sucks in that respect. Whereas if we make a lateral move to a more traditional tiering model, then we have lots of upward mobility, and a simpler architecture to support.
A: So we could, for example, have no automatic whole-object promotion, but keep part of an object in the base tier and part of it in the backend. That was something that was super complicated and really weird to do with a cache model, because you don't have the full picture...
A: ...of what the object should be at that layer, and having a sort of hybrid got really gross. Whereas when you know you have the authoritative view of what the object should be — whether or not you actually have all the content there — those sorts of things become much more reasonable. And it aligns nicely with what the dedup folks are trying to do as well, where you have bits of an object stored in other places.
A: But I think we should be looking two years past that, where people have SSDs — because that's the most reasonable, cost-effective storage to buy; not the cheapest, but the most reasonable, sort of the default storage choice — and then maybe they have some hard disks because they have a bunch of cold, big data. How should we handle that? Whereas right now people view flash as a special case.
D: I guess the question I have is: okay, so we're going to have potentially RADOS-level cache tiering, and potentially some kind of tiering capability in BlueStore — we already do, but maybe a more sophisticated one. When a customer comes in and asks, "I've got 3D XPoint storage, I've got NVRAM, I've got SSDs, and I've got hard disks — what do I do?", that's kind of what I don't see: any coherent picture for them in all of this that says, this is what you do.
C: That's the same kind of problem we have today, right, with all the kinds of hardware that people use and compose in different ways — journals and so on, in different combinations and box sizes. So I think it's the same sort of situation, where we test out different configurations, see what works, and make recommendations.
A: Anyway, right — so that was the other thing I wanted to mention: the performance work and the general thinking on tiering. The challenge there is that the only person working on it is the engineer doing the dedup stuff; I'm sort of making him do some of the tiering work, because it's necessary for dedup anyway. But there's a lot beyond that which will eventually be needed to fully replace — to totally deprecate — cache tiering, and that he's not going to do. So at some point...
A: ...somebody is going to need to step up and make the leap. The other thing I want to mention — this is mostly just me — is that I'm interested in allowing different types of pools in RADOS. Right now we do replicated and erasure-coded pools, but both are based on the primary log — the primary copy's log — so they're all synchronous and log-based. I've been making baby steps towards this by cleaning up the PG interface and inching towards some of the performance stuff.
A: I'm trying to do some cleanup to make that a bit easier. Partly my thinking is that if we can clean up that interface to the point where you can implement a trivial new pool type — one that perhaps doesn't even do anything robustly, but at least is something you can run and benchmark against — that'll make it easier to do some of our other development. For example, we could make a non-replicated pool that just writes straight to SPDK, basically just for kicks, and then...
A: ...a pool with multiple replicas that does, you know, two-out-of-three type semantics — a slightly different consistency model, but one that avoids some of the latencies you see when you have failures.
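A minimal sketch of the "two out of three" acknowledgement semantics mentioned here — a hypothetical helper, not a proposed API: the write is acknowledged once a majority of replicas confirm, rather than waiting for every replica:

```python
def quorum_ack(replica_acks: list, needed: int = 2) -> bool:
    # acknowledge the client once `needed` of the replicas have confirmed,
    # instead of blocking on the slowest (or failed) replica
    return sum(1 for ok in replica_acks if ok) >= needed

assert quorum_ack([True, True, False])       # one slow/failed replica: acked anyway
assert not quorum_ack([True, False, False])  # below quorum: must keep waiting
```

This is where the latency win during failures comes from: a down replica no longer gates the acknowledgement, at the cost of a weaker consistency model.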
A: For example, there are a couple of different options. One that came up recently — I'm just thinking about fabric architectures, where you have compute nodes attached to a bunch of NVMe devices over a fabric. Right now, if you run stuff on something like that, you write to one OSD on one CPU, and it replicates...
A
It,
writes
to
the
lope
to
its
it's
quote-unquote,
local
and
beaming
over
the
fabric,
but
then
it
sends
it
to
another
CPU
to
write.
It's
sort
of
directly
attached
envy
me
above
the
fabric,
and
so
you
end
up
popping
across
multiple
CPUs,
also
that
eventually
you'll
you'll
write
over
a
fabric
anyway
and
that
sort
of
environment.
A
It
would
make
more
sense
to
have
the
OS
you
just
write
to
like
all
three
replicas
or
Ishika
charts
directly
over
the
fabric,
but
it
totally
like
up
ends
all
the
other
stuff
away
about
around
the
way
that
pts
are
charted
and
the
devices
are
divvied
up
and
so
on.
So
it
would
be
a
totally
different
back-end
implementation,
basically
for
a
DOS.
But
if
we
decided
something
like
that
was
interesting,
then
you
could
conceivably
make
a
radius
pool
type
that
would
work
on
fabric
hardware.
A
That
would
still
how
to
use
all
the
other
bits
of
stuff,
like
all
the
radiostar
Twiggy
rgw,
some
of
those
stuff,
but
I'll
still
work
and
still
provide
this
sort
of
greatest
level
abstraction.
My
the
I'm,
not
I'm,
if
I'm
no
means
an
expert
or
like
visionary
on
the
on
the
fabric
front.
But
my
sense
is
that
it's
it's
all
sort
of
one
axis
of
the
sort
of
storage
problem
where
it
lets
you
talk
to
remote
storage,
but
it
the
key
thing
that
set
does
is
redundancy
in
a
replication
or
fault.
A: There's also a discussion going on with the DPDK people at Intel — we're going to meet next week — but they're moving towards VPP, which is part of FD.io, one of the TCP stacks based on DPDK. We're sort of looking at that, but I don't know much about it yet, so we'll see; stay tuned. That's basically it. Anything else, just high level, that you want to mention, Josh, to make sure people are aware of?
A: All right, cool. Okay, so a couple of other things I want to do really quick. A shout-out about the librmb work that Deutsche Telekom and their partners are working on. This is a library that does email storage on top of RADOS — part of it right now is a plug-in for Dovecot, but I think that's sort of a temporary thing. They presented about it at the OpenStack Summit in Sydney, and I think they've talked to some of the Ceph devs.
A: I don't have the link handy right now — maybe someone can post it in the chat. The other thing is: I wanted to talk a little bit about the plan for the dashboard for Mimic. This is work that Red Hat is planning to do, hopefully with help from others. There's a pad — just a rough plan for what's going to come in Mimic — so I'll quickly summarize what's in it. Hopefully this is all going to happen.
A
It
really
depends
on
how
quickly
we
can
get
a
person's
for
people
spun
up
on
this
and
that's
ongoing.
So
this
might
change,
but
the
idea
is
basically
to
extend
the
current
dashboard,
that's
in
the
manager
as
a
more
robust
UI
for
managing
stuff
right
now,
it's
sort
of
a
glorified
stuff
FS,
but
this
will
extend
it
so
that
it
has
more
complete
status
of
the
cluster.
A
The
stuff
s.
Stuff
is
already
there
to
some
degree,
but
it's
pretty
primitive.
The
main
new
thing
would
be
around
rgw,
showing
what
the
RW
zone
groups
are
and
zones
and
what
demons
are
running
for
the
zones.
So
those
would
be
the
basics
right
now,
there's
no
authentication
for
the
dashboard.
So
we
need
to
add
something
at
the
minimum.
We
just
seems
like
a
basic
user
password.
Probably
we're
also
going
to
want
to
have
something
that
lets.
You
either
use
LDAP
to
attend
again,
something
else.
A
Weird
thinking
you
have
to
install
those
the
plan
is
to
take
those
and
basically
pull
those
into
the
set
dashboard
so
we'll
be
in
bed
ingre
fauna
in
I.
Don't
know
if
it's
an
iframe
or
not,
but
basically
embedding
all
this
all
that
graph
on
of
stuff
into
the
dashboard.
So
you
get
all
that
stuff
natively
and
then
actually
a
live
inside
the
subtree.
A
The
upshot
of
all
this
is
that
you'll
install
staff
like
normal
stuff,
and
you
just
do
basically
one
command
to
turn
on
the
dashboard,
maybe
set
your
user
or
whatever
and
a
little
log,
and
you
literally
get
access
to
all
this
all
this
stuff
with
a
standard
cluster,
the
metrics
will
be
back
by
Prometheus,
and
so
the
metrics
won't
really
be
useful
unless
you
also
have
Prometheus
think
so,
you're
sitting
next
to
stuff,
that's
gathering
all
this
stuff
and.
A
Plans
some
basic
management
functions
so
right,
access
stuff,
where
you're,
creating
pools,
updating,
stuff
ex
keys
and
the
main
initial
stuff
will
be
around
rgw
so
that
you
can
create
users
buckets
access
keys,
that's
one
of
the
more
awkward
things
Demant
right
now.
You
have
to
do
it
through
the
CLI.
Currently,
with
writing
the
admin
there's
a
ton
of
other
stuff,
that's
sort
of
on
the
roadmap.
This
is
the
minimum
stuff
that
we'd
like
to
get
done
for
Minnick,
so
any
UI
developers
out
there
or
anyone
who's
interested
in
that
management.
A: Awesome — yeah, that'd be great. We definitely want to... I don't know; I haven't really worked with the Grafana stuff, so I don't know if you can just take the dashboard definitions and plop them into the source tree with a few modifications. I don't know if it works like that or not.
A: Good, okay. The other thing I should add — because all this stuff lives in the manager — is that John's working on some changes and improvements there to make it easier to reuse code in the manager across different modules. One goal, and a consequence, of that is that the functionality we enable in this dashboard will be easily exposable via the REST API as well, so if you want to trigger the same functionality through the REST API, you'll be able to do that.
A: You'll be able to do that with minimal friction, and that'll be useful for external dashboards.
B: Before I start talking — my English is not that good, so I hope you can give me a little patience. Now let's begin. Recently our team has been working on syncing data from RGW to COS, the public object storage service of Tencent Cloud, which is almost compatible with S3. We have been testing against the master branch, and now we can sync data from RGW to COS via its S3 interface. So we just want to talk...
B
Some
about
relates
the
work
we
have
done
then
discuss
some
problems
we
encountered
with
our
view
there
may
be.
Our
experience
can
often
have
two
loads,
who
also
folks
only
picture.
My
talk
contains
two
parts.
The
first
part
is
the
main
province
in
Canada.
When
testing
this
buggy
is
very
low
to
work
on
data.
Think
now,
let's
take
a
look
at
the
progress.
B: The first problem: sometimes RGW crashed while processing requests. At first, we were creating a bucket; we added some debug logging, and we could not see the debug log after the create-bucket request. Then we tried to create a bucket manually, and RGW also crashed while processing that request — while when syncing objects directly, we could see the request complete.
B: It turned out that curl was not being initialized in the RGW HTTP client: there is a flag which identifies whether curl has been initialized, and when it was not set, the request path failed. So what we needed to do was initialize it first in the RGW HTTP client, to avoid returning from the function with an error. That's the first problem.
B
The
second
problem
is
it's
about
a
factor
of
100
continued
at
first,
instead
of
sinh
theta
to
a
straight
directory.
We
use
our
chat,
a
bluetooth
stimulates
as
soon
as
three
stories.
We
found
the
light.
Http
flow
we're
broke
if
100
continue,
but
it's
not
a
disabled
correctly.
If,
for
example,
if
the
target
our
table,
you
disable
it,
but
we
planted
don't
I
disable
100
continue
since,
as
Teresa
bought
her
a
100
continue.
It
should
not
have
effect
on
second
data
to
s3,
but
bacteria
T
currency
should
disable
100
continue
if
target
server.
B: Looking at the related logs, we found that creating a bucket that already exists returns a negative error code, and in the handler for the remote object, if the returned code is less than zero, the sync coroutine exits. So we should judge whether the bucket exists by the error code: if the bucket is already owned by you, the data-sync coroutine should continue.
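A sketch of the fix being described — treating the "bucket already owned by you" error as success so the data-sync coroutine continues instead of exiting. The error-code value and helper name below are invented for illustration; they are not the actual RGW codes:

```python
ERR_BUCKET_OWNED = -17  # hypothetical errno-style "already exists and is yours"

def after_create_bucket(ret: int) -> str:
    # a specific "already exists and is owned by us" error should not
    # abort the sync coroutine; any other negative code still does
    if ret == 0 or ret == ERR_BUCKET_OWNED:
        return "continue"
    if ret < 0:
        return "exit"
    return "continue"

assert after_create_bucket(ERR_BUCKET_OWNED) == "continue"
assert after_create_bucket(-5) == "exit"
```

The design point is to classify errors by meaning rather than treating every negative return uniformly as fatal.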
J: My question is: why did you get the 409? Is it because of what you were running against — when you were syncing to a server, were you using RGW, or your cloud solution?

B: The cloud system — it returns 409 if we try to recreate a bucket that already exists.
B: Another problem: the header field names were being sent in upper case. If we send a request in that format, the target rejects the authorization. In fact, the convention for standard HTTP headers is that field names use a camel-case format — that is, the first letter of each word capitalized and the rest in lower case. So we just do a transformation in the HTTP client layer when generating the request, and then it works the right way.
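The transformation being described can be sketched like this — a generic header-name normalizer, not the actual RGW client code:

```python
def canonical_header(name: str) -> str:
    # capitalize the first letter of each hyphen-separated part
    # and lower-case the rest: "CONTENT-TYPE" -> "Content-Type"
    return "-".join(p[:1].upper() + p[1:].lower() for p in name.split("-"))

assert canonical_header("CONTENT-TYPE") == "Content-Type"
assert canonical_header("x-amz-date") == "X-Amz-Date"
```

HTTP header names are case-insensitive per the spec, but some servers are strict in practice, which is why normalizing at the client layer fixed the rejection.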
B: At first, we could not sync large objects which had been uploaded via multipart upload. We then found a related PR that fixes it — it was mainly about how the log entries are written for multipart uploads — and that PR has been merged into the master branch. The next problem is that the data sync init command crashes, mainly because a module hasn't been set when executing the command. The solution is...
B
Let
us
do
the
mean
data
stink
initial,
this
PR
lease
PR
have
been
merged
into
Li.
Must
branch
may
be
issued
to
every
base?
The
next
problem
is,
they
think
wrong
crashes,
its
because
when
we
exert
a
risk,
this
command
think
more.
The
instance
haven't
finished
initialized,
so
we
so
we
initialized
the
sink
model
instance
I
should
have
let
us
to
tap
it.
The
main
davis
thing
around
this.
B
We
have
modified
each
and
they
put
our
pr2
master
branch,
but
it
haven't
been
merged.
The
yet
well
in
the
another.
Mainly
problem
is
about
sink
or
large
objects
to
s3
biomat
part
upload
when
complete
much
part
to
upload
HW
will
stand
or
complete
post.
We
with
chunked
encoding
us
three,
but
we
will
get
or
not.
Implant
implemented
with
response,
because,
as
thank
you
not
father,
requested
with
chocolate
encoding,
and
so
we
need
avoid.
J: I just tried it a bit more, but we need to understand why there was an issue when we sent — or got — a 200 instead of the 100. It could be an issue with curl, with libcurl, but we need to understand it. Anyhow, the question is: is it still a problem for you?
J: The PR... okay, the problem is that the status you were checking against was wrong — you need to look at the HTTP status instead of the conversion to a number. You need to propagate that somehow; either that, or just give higher priority to the other error, but that's not a good solution. So yeah: you need to look at the HTTP status instead.
B: We are still lacking sync of objects' extended metadata, such as content type, object tags, and object ACLs — there needs to be some way to map the source ACLs to the destination. Second, we want to support a more complex configuration scheme per provider...
B
Complex,
for
example,
is
eros
mapping
talk
here
of
spicing
mappings.
In
addition,
the
current
things
tell
us
maybe
a
little
laugh
and
maybe
puzzled
follows
who
don't
take?
Did
he
look
at
a
man
estate?
So
we
want
to
improve
our
table
using
studies,
for
example,
ID,
nagging
ideas,
disparities
play
arrows
Stanley's.
We
can
church
at
detail
flag,
provided
detail,
output,
opening
the
number
of
packets
of
objects
and
not
think
last.
A: One quick comment about those last two points — there are two parts to them. One is reporting: making sure the sync status is included in the daemon status portion that gets reported back up to the manager. That's certainly something that can be done now, and you can always dump it manually.
A
The
surfacing
it
on
the
dashboard
is
harder
because
we
don't
have
the
panel
of
a
dashboard
that
shows
the
even
the
basic
argued
of
you
stuff
with
the
zones
of
the
daemons
and
whatever
else.
Obviously,
if
you
folks
want
to
work
on
that,
that
would
be
awesome,
but
I,
wouldn't.
If,
if
you
don't,
then
you
can
still
get
the
stuff
into
the
patina
status
now
and
then
Serkis
it
later
on
the
GUI
defense,
okay,.
A: Yep, yep — thanks so much for your hard work shaking out these issues. It's great to see progress here. Yehuda, did you want to just briefly describe what this all is? We sort of skipped the overview, and not everybody might be aware of what this is all about.
J
That's
all
closing
so
close
Inc
is
basically
taking
the
sink
module
framework
which
allows
us
to
sink
other
data.
Homemade
data
like
objects,
meta
data
to
the
separate
zones
that
could
be
in
the
cloud
in
this
case
we're
using
modules
also
for
the
elasticsearch
metadata
indexing
work,
and
it's
basically
how
we
do
Maggie's
own
data
think
also
it's
so
using
the
same
infrastructure.
So
so
the
idea
there
is.
A: All right. I have a quick update on the dmclock stuff. This is mostly work that Eric has been doing here at Red Hat, along with the folks at SK Telecom. I just wrote a few notes in the pad that I'll paste. The quick version is that right now, mclock is merged. There are two modes you can run it in. One is mclock_opclass, which puts the different types of operations on the OSD into different classes — so client ops are one class...
A: ...scrub is one class, recovery is one class, snap trimming is one class — I think that's it — and then it uses mclock to prioritize among those. There are config options to control the relative weights and parameters for the different classes. So you can put it in that mode and it'll do its thing; that's all merged. The other mode is mclock_client, which uses dmclock to actually schedule clients against each other.
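To illustrate only the opclass intuition — real mclock/dmclock uses per-class reservation, weight, and limit tags, not simple round-robin — here is a toy proportional-share scheduler with invented queue contents:

```python
from collections import deque

# per-class op queues (contents invented for the example)
queues = {
    "client":   deque(["c1", "c2", "c3", "c4"]),
    "scrub":    deque(["s1", "s2"]),
    "recovery": deque(["r1", "r2"]),
}
weights = {"client": 2, "scrub": 1, "recovery": 1}  # relative shares

def drain() -> list:
    order = []
    while any(queues.values()):
        for cls, w in weights.items():
            for _ in range(w):  # each class gets slots proportional to weight
                if queues[cls]:
                    order.append(queues[cls].popleft())
    return order

schedule = drain()
assert schedule == ["c1", "c2", "s1", "r1", "c3", "c4", "s2", "r2"]
```

Client ops get twice the share of scrub or recovery here; the queue-depth caveat discussed next applies to any such scheme — prioritization at this layer only matters if the layer below doesn't buffer a deep, unordered queue of its own.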
A
Where
you
would
have
cram
there,
you
get
a
minimum,
my
UPS
or
whatever,
for
particular
client
and
so
on.
That's
mostly
merge
to
accept
a
lot
of
the
sort
of
bits
to
actually
configure
it.
Aren't
there
so
there's
sort
of
two
issues
right
now
is
that
even
for
the
M
clock,
op
class,
it
doesn't
work
super
well
currently,
because
we
aren't
managing
the
queue
depth
very
well
at
the
object,
storage
layer,
the
queue
is
too
deep,
and
so
the
prioritization
that
we
do
at
the
in
clock
layer
doesn't
really
have
much
effect.
A
There
were
a
few
patches
to
try
to
have
a
little
at
that
that
Eric
and
the
booster
team
were
kicking
around,
but
we
mostly
set
this
aside,
because
the
in-flight
IO
scheduler
that
are
outstanding,
I/o
scheduler,
that
the
SK
folks
were
working
on,
looks
to
work
much
better.
So
the
current
task
is
to
get
that
reviewed
and
merged
and
there's
a
link
to
the
pull
request.
But
with
that
they've
had
very
good
results.
That's
encouraging!
So
you
need
to
like
get
that
in
shape.
So
that's
that's
sort
of
the
main
blocker.
A
The
other
sort
of
outstanding
architectural
issue
is
that
you
can't
mix
the
OP
class
and
the
client
scheduling.
Currently
you
either
use
them
clock
the
scheduled
background
working
in
spore
groundwork
or
you
use
it
to
control
clients
against
each
other,
but
you
can't
do
both
at
the
same
time.
So
Eric
is
trying
to
sort
out
how
to
do
a
hierarchical
sort
of
composition
of
those
of
those
two
policies:
I'm
not
sure
how
how
far
we've
gotten
along
there.
A: Yeah, yeah. My sense is that in order for it to work, we'd have to sample and do sort of a feedback loop from the sampling back to the front end, or something like that, because we can't make a fully informed decision for every single I/O.
A: The basic idea here is that you would set the pgp_num — which controls the placement behavior — to be less than the pg_num. That basically gets the PGs that were split to be sitting next to each other. It's similar to the situation right after you split: the PGs split in place; they don't actually move until you adjust the other knob. You adjust those two knobs separately, so for a merge you just do it in reverse: you lower the pgp_num first.
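The adjacency property being relied on can be shown with a toy placement function (plain modulo here; Ceph actually uses a stable-mod of the object hash, but the parent/child relationship is the same): with pg_num doubled from 4 to 8, PG i and PG i+4 are the pair produced by splitting PG i, so collapsing placement back to 4 lines the pairs up for a merge.

```python
def pg_for(obj_hash: int, pg_num: int) -> int:
    # stand-in for Ceph's stable hashing: which PG an object maps to
    return obj_hash % pg_num

pg_num_before, pg_num_after = 4, 8
for h in range(64):
    parent = pg_for(h, pg_num_before)
    child = pg_for(h, pg_num_after)
    # every child PG maps back onto its parent by masking off the extra bit,
    # so PG i and PG i+4 together hold exactly what PG i held before the split
    assert child % pg_num_before == parent
```

That is why lowering pgp_num first is safe: it changes where objects are placed without yet changing how many PGs exist.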
A: Then it makes all the versions that are chosen for log entries offset, so that they will be able to zipper together: they end up non-overlapping, and they always skip over each other — they go in steps of two, and one takes the even numbers and the other the odd, or whatever it is; and it generalizes beyond just two.
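The offset-version idea can be illustrated directly: if each of the two PGs being merged only consumes alternate version numbers, their logs never collide and can be zippered into one contiguous timeline on merge (a toy with two PGs and versions 1 through 10):

```python
pg_a_log = [v for v in range(1, 11) if v % 2 == 0]  # PG A takes even versions
pg_b_log = [v for v in range(1, 11) if v % 2 == 1]  # PG B takes odd versions

assert set(pg_a_log).isdisjoint(pg_b_log)           # the two logs never collide
merged = sorted(pg_a_log + pg_b_log)                # the "zipper" on merge
assert merged == list(range(1, 11))                 # one contiguous timeline
```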
A: So that's the first bit. And there's a debug option in there that just randomly skips versions, purely to exercise all that code and make sure it behaves with sparse version sequences.
A
The
second
part
is
that
I
think
we
need
to
have
some
check
and
feedback
on
those
PPG's
that
are
sitting
next
to
each
other
so
that
they
know
when
they
have
reached
the
point
where
the
logs
are
and
are
non-overlapping
and
they
can
be
zippered
together,
because
initially
it
won't
be
up
to
actually
run
for
a
while
and
age
out,
the
old
PG
entries
and
then
write
new
ones
or
whatever
before
that
happens,
or
have
some
way
to
like
force
that
to
happen.
I
like
trimming,
the
log
or
whatever.
A
A: But once that's the case, then there needs to be some feedback back to the monitor, or manager, or whatever, so that you know the logs are ready to go and can be zippered, and you can proceed with the merge. And the last thing is, I think, probably trying to make the logs line up so that they're sort of overlapping and not just totally discontiguous — or else the timeline in the log doesn't make sense. I'm not sure that's actually necessarily problematic, but...
A: Right, yeah, exactly. In the split case the PG can be in kind of any state — you can always sort of split it in half — but for a merge, they have to be stored next to each other; they have to be in a particular state in order for the merge to be possible. And I tried to imagine what case would...
A
That
would
actually
make
it
not
possible
that
would
make
the
merge
fail
and
it
the
only
thing
I
could
think
it
would
be
that
if
the
OSD
goes
down
and
some
of
the
like
rips
out
one
of
the
PGS
with
objects
or
tool
and
then
served
up
again,
that
was
the
only
thing
I
could
think
of.
But
I
don't
know.
Hopefully,
hopefully
that's
it.
A
So
then
the
Mon
would
there
be
a
particular
epoch
where
the
mana
just
beating
them
down,
and
then,
when
the
LSD
gets
that
map,
it
would
have
to
do
an
atomic
merge
of
those
two
P
G's,
so
it
would
have
to
before
it
when
it
publishes
the
map
through
the
PG.
You
have
to
like
not
do
that
and
then
zipper
them
together
and
then
give
that
toasty
map
to
the
combined
new
PG,
then
I.
A
think most of these are sort of trivial: the last update would be the max, you zipper the logs together, the stats add up, and so on. The part that I got stuck on is when you're doing peering in general, because there's a new interval, there's all this log recovery stuff that happens. So in the case of a replicated pool, you look at, like, the longest log and that's the one that you put together, and then you look at the peers.
A
You look at their info, and you calculate missing objects and so on. The thing that I'm not sure about is that each replica is possibly going to zipper at a different point, because there might have been an in-flight I/O, so they might be behind, and so they're going to merge but they're going to be missing different objects. And if the two PGs aren't going in synchrony — like, if they're not ordered with respect to each other — then with the log
A
it's not a matter of just looking at the tail of the longest log, because one of them might have one shard that was 100 versions behind that's missing an entry, whereas the other one has one that's 100 versions ahead that got it. So I wonder if, when they're in this preparatory merge state, we actually have to order the I/Os to those two PGs with respect to each other.
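The replicated-pool peering step described above — pick the longest (authoritative) log, then work out what each peer is missing — can be sketched roughly like this. Names and structures are invented for illustration; this is not Ceph's peering code:

```python
# Rough sketch: logs are lists of (version, object) entries in version order.

def choose_authoritative(logs):
    """Pick the log with the highest last-update version (the 'longest' log)."""
    return max(logs, key=lambda log: log[-1][0] if log else 0)

def missing_for(peer_log, auth_log):
    """Objects a peer must recover: authoritative entries past its last update."""
    last = peer_log[-1][0] if peer_log else 0
    return [obj for ver, obj in auth_log if ver > last]

logs = [
    [(1, "a"), (2, "b")],            # a replica that fell behind (in-flight I/O)
    [(1, "a"), (2, "b"), (3, "c")],  # the authoritative, longest log
]
auth = choose_authoritative(logs)
```

The concern in the discussion is that after a merge each replica's log is a zipper of two independent sequences, so this simple "compare tails" computation stops being well defined unless the two source PGs were ordered with respect to each other.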
A
Period, yeah, yeah — which is gonna be hard. And that's the replication case, which I think is comparatively simple. For the erasure-code case it's weirder, because you have the log roll-forward, and there I think we really do need to worry — yeah, it's like, the read-modify-write stuff that happens on an EC object actually basically holds up the log, so that other EC operations won't roll forward.
A
C
Yeah, it seems like it's all gonna get pretty complex to manage that ordering at that point. Then, I guess, taking a step back: if it knew that these PGs were clean, we would have kind of a sane point where you could say, okay, now merge all these PGs, and yeah.
A
C
A
Yeah, yeah — if we didn't have any I/O in flight, that would definitely solve the problem, yeah. So if we did that, then there would be a map published that said: suspend I/O to this range of PGs, right? And then the PGs would have to feed back that they've done that, so the manager or the monitor sees it, and then the monitor says: okay, now merge. Yeah, I think that's the better solution.
A
All right, so you would still have to have a map published that says you're in this limbo period, where the PGs may or may not be merged — please merge. And then the primary would decide, when it receives a thing, whether the merge has happened yet, or whether it should block the I/O, or which PG it goes to — yeah, yeah — and then once it's done, it would tell the monitor and say it's done. Yeah.
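The map-driven handshake being sketched here — monitor publishes "prepare merge", each primary quiesces I/O and reports readiness, and only then does the monitor publish the final "merged" map — can be written as a toy state machine. This is an illustration of the protocol shape only, with invented names, not Ceph code:

```python
class Monitor:
    def __init__(self, pgs):
        self.pgs = set(pgs)
        self.ready = set()
        self.state = "prepare"        # what the published map says

    def pg_reports_ready(self, pg_id):
        self.ready.add(pg_id)
        if self.ready == self.pgs:
            self.state = "merged"     # next map: the PGs are actually merged

class Primary:
    def __init__(self, pg_id, mon):
        self.pg_id, self.mon = pg_id, mon
        self.io_blocked = False

    def on_map(self):
        if self.mon.state == "prepare":
            self.io_blocked = True            # quiesce I/O, trim the log
            self.mon.pg_reports_ready(self.pg_id)

mon = Monitor(["1.a", "1.b"])
pa, pb = Primary("1.a", mon), Primary("1.b", mon)
pa.on_map()
assert mon.state == "prepare"   # still waiting on the sibling PG
pb.on_map()
assert mon.state == "merged"
```

The point of the extra round-trip is that the "merged" map is only published once every sibling PG has confirmed it is clean.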
E
A
A
You know what, actually — if you publish a map that says the split is in progress — sorry, the merge is in progress — then the trivial implementation of that is that the OSD just blocks the I/O, which is kind of the same thing, except the OSD decides how it's done, right? Instead of the client not sending the I/O, the OSD would just block it. Yeah.
C
A
C
A
F
A
A
Yeah, I guess that's the reason why I was trying to have a model where they happen in parallel, because then, whenever an OSD encounters this epoch, it can just do it, even in isolation, as it's rolling forward through the maps. So it doesn't matter if it failed and didn't see it, or wasn't up at the time.
A
If the monitor publishes and says these PGs should merge — now they're in the process of merging — then when the primary gets to that, it will basically peer, quiescing I/O, and trim all the logs down to zero. And then, after all of this has happened, all the current replicas are clean: they've persisted their empty logs and they're a clean slate.
A
There's no in-flight I/O. Then it'll go to the monitor and say: okay, I'm ready to merge. And then the next map that gets published says these ones are no longer pre-merging, they're actually merged, and at that point the replicas will atomically just put them together — there's no zipper or whatever, they'll merge them, there's nothing to sort out, and then they're clean, right.
E
A
A
What I'm worried about is, say we're in the state where the primary is like: okay, I've trimmed the logs to zero, I'm all ready to go — and then it's a split, and then they crash, and so one of them split and the other one didn't. Then suddenly peering has to deal with the fact that it's got two PGs, or four PGs or whatever, that are not yet merged reporting in, but their peering state actually affects that same PG. That's the part that scares me.
A
At that epoch — the merging is driven by the OSD map, and so they'll all happen at the same time; not the same time, but independently, and they won't talk to each other until after they've done it. So either they haven't merged yet, in which case they're catching up on maps, and then they'll merge, and then they'll talk to you after they've caught up. So then you don't have to deal with unsplit or unmerged PGs talking to a merged PG.
C
C
E
A
It's just like a synchronous update. I think the slow part is going to be: the map gets published, it has to reach the OSDs, they have to quiesce the I/O, like, persist it and do it, and then feed that state back to the monitor, so it sees it and moves forward. Yeah, I'd imagine that to minimize the impact the monitor should reduce pg_num like one at a time, so there's only like one
E
B
A
in flight. In general, to minimize the impact you just do it one at a time — you'd only want one PG to, like, stick I/O on you for a couple of seconds, or most people would notice. It does mean that if you're going from, like, a million PGs to half a million PGs it's going to take a while, yeah. But I feel like that kind of doesn't matter, because they're already placed in the same place.
A
C
A
C
So yes, the next is the OSD refactoring topic. We talked about this a bit at the beginning, but to reiterate, the background for this is that in general much faster devices are becoming more prevalent, and over the next probably five to ten years they'll most likely be even more common, perhaps, than hard disks. So we want to start preparing for that now, before we have to do hundreds of thousands of IOPS out of each OSD.
C
So when I think about the end state, to actually get good performance out of these superfast, almost memory-speed devices, it probably looks something like a shared-nothing architecture, where we avoid doing as much work on the CPU as possible for each I/O.
C
So the way to do that is to shard all the work on a per-CPU basis, with basically no locking and no memory barriers as much as possible — just using message passing, and a copy of memory essentially, between different cores when we need to, and for everything else single-core-local processing with minimal overlap. Eventually we'd get to a point where we're doing all of the scheduling, network I/O and storage I/O in user space, most likely with DPDK for the network and SPDK for the storage devices.
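The shared-nothing idea above can be sketched in a few lines — every object is owned by exactly one core-local shard, requests are routed by hash, and shards communicate only through message queues, never shared state. This is an invented illustration of the pattern, not the planned OSD code:

```python
from queue import Queue

NUM_SHARDS = 4
# each shard owns a private store plus an inbox; nothing is shared
shards = [{"store": {}, "inbox": Queue()} for _ in range(NUM_SHARDS)]

def shard_for(obj_name):
    return hash(obj_name) % NUM_SHARDS

def submit(op, obj_name, value=None):
    # cross-core communication is a message, not a lock
    shards[shard_for(obj_name)]["inbox"].put((op, obj_name, value))

def run_shard(i):
    # each shard drains its own queue; no locking is needed because only
    # this shard ever touches its store
    sh = shards[i]
    while not sh["inbox"].empty():
        op, name, value = sh["inbox"].get()
        if op == "write":
            sh["store"][name] = value

submit("write", "obj1", b"data")
for i in range(NUM_SHARDS):
    run_shard(i)
```

In a real implementation each `run_shard` loop would be pinned to its own core, with DPDK/SPDK polling folded into the same loop.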
C
So in terms of actually scheduling code, there are many different approaches and ways you can structure this. But the way we're currently structuring the code, with lots of callbacks everywhere, is pretty difficult to reason about, because the callbacks kind of get thrown around every which way and spread out throughout different layers. So you end up reading through things in many different places to figure out a single strand of execution, and it also ends up using many, many heap allocations and locks.
C
C
There is a standard future library in C++, as well as the Boost.Asio framework that kind of wraps around that. Basically, it doesn't really provide a whole lot more beyond the basic building blocks; you have to kind of build your own things around it, in terms of scheduling and the event loop, and dealing with everything else in the system.
C
Another one is Tokio, which is actually in Rust, and which looks very promising in terms of minimizing allocations. It has a clever way of using the type system to convert chains of futures into a single allocation up front — basically it compiles to a state machine, so it can be one block of memory, and execution runs within that block of memory rather than having to do new allocations for each new task or chunk of work. That might be a promising direction to go.
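The contrast being drawn is roughly this: a chain of futures usually means one heap allocation (and one callback hop) per stage, while the Tokio-style alternative "compiles" the whole chain into a single state-machine object allocated once. A hedged, hypothetical sketch of the state-machine side:

```python
class ReadThenWrite:
    """A whole read -> process -> write pipeline as one object/state machine:
    one allocation up front, advanced by repeated poll() calls."""

    def __init__(self, data):
        self.state = "read"
        self.data = data
        self.result = None

    def poll(self):
        # each call advances the machine; no per-stage heap allocations
        if self.state == "read":
            self.buf = self.data          # pretend this was a disk read
            self.state = "process"
        elif self.state == "process":
            self.buf = self.buf.upper()
            self.state = "write"
        elif self.state == "write":
            self.result = self.buf        # pretend this was a disk write
            self.state = "done"
        return self.state

m = ReadThenWrite("abc")
while m.poll() != "done":
    pass
```

Rust's futures do this transformation at compile time via the type system; the sketch just shows the runtime shape that results.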
C
Then there's Seastar. It does also support a non-polling mode, so you don't have to use 100% of the CPU all the time if you're not using super-fast storage, or if you can't dedicate cores to just this. But the actual futures model it provides is more similar to the standard future or Boost.Asio than to Tokio, in that it still does an allocation for each future or for each task.
C
It also has primitives already available for doing all the user-space scheduling and prioritization: grouping different tasks into different priorities, and executing ones that are running the same function or code together, so that you keep the instruction cache hot by chaining them together — so it will keep track of groups of tasks together.
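That instruction-cache point can be illustrated with a tiny batching scheduler: instead of running tasks in arrival order, group tasks that execute the same code and run each group back-to-back. This is an invented example of the idea, not Seastar's actual API:

```python
from collections import defaultdict

def run_batched(tasks):
    """tasks: list of (group_name, fn, arg). Runs one group at a time so the
    same code path executes repeatedly, keeping the i-cache warm."""
    groups = defaultdict(list)
    order = []                      # remember first-seen order of groups
    for name, fn, arg in tasks:
        if name not in groups:
            order.append(name)
        groups[name].append((fn, arg))
    results = []
    for name in order:              # drain each group as a batch
        for fn, arg in groups[name]:
            results.append(fn(arg))
    return results

# two "compress" tasks interleaved with a "checksum" task get re-grouped
tasks = [("compress", len, "aaaa"), ("checksum", hash, "x"),
         ("compress", len, "bb")]
out = run_batched(tasks)
```

In Seastar the analogous concept is scheduling groups with priorities; the sketch only shows the re-grouping behavior.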
C
That kind of thing. It's also a relatively new, fairly early framework, in that it doesn't actually do any releases yet, so they recommend using it as a submodule, and the major product that uses it so far is ScyllaDB, which is by the same people who wrote Seastar. So it seems like it's still a relatively early project, but it has very nice, very useful building blocks, though.
A
I keep hearing Tokio come up. Is it possible, or does it make sense, to try to make these interoperate? Like, if we wanted to build something like Tokio, would you be able to use it in the context of Seastar, or would we be able to build that Tokio-style framework on Seastar, or something like that? Yes.
C
C
C
C
C
The code you get with the Seastar-style futures versus Tokio futures would be very similar, but any of these would be aggressively different from the current code.
F
F
And a while ago I read that Seastar is looking at implementing, or integrating with, C++ coroutines, which are a technical specification that's still in progress; being able to use coroutines instead of futures would eliminate a lot of extra overhead. So I'm really excited to see what happens there.
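The appeal of coroutines can be shown with Python's async/await standing in for the C++ coroutines TS: the asynchronous pipeline reads as straight-line code with explicit suspension points, instead of a chain of `.then()` callbacks. Illustrative analogy only — the C++ mechanics differ:

```python
import asyncio

async def fake_read():
    await asyncio.sleep(0)       # a suspension point, like co_await
    return "abc"

async def fake_write(buf):
    await asyncio.sleep(0)
    return len(buf)

async def handle_request():
    # reads top-to-bottom; no nested callbacks, no per-stage continuations
    buf = await fake_read()
    return await fake_write(buf.upper())

result = asyncio.run(handle_request())
```

The overhead reduction claimed for C++ coroutines comes from the compiler turning a function like `handle_request` into one frame, much like the single-allocation state machine discussed earlier.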
F
I think there was a pull request or something on Seastar's — is it Google Groups or something — so there's code, and I think it got some review. And I know that the latest clang has an implementation of the coroutines that you could play with; it's missing some of the standard library stuff, but yeah, it's still pretty early for that.
E
I would consider it ideal if we could, like, mangle the Seastar futures into looking like the Tokio ones and use the rest of their infrastructure, personally. And it's not just about how fast the memory allocation is; it means that we can do so much of it up front, so we have a much better lock on how much memory is available to us in case of trouble, which ties into what Allen was talking about at OpenStack,
E
sort of having an assigned bundle of memory. I still need to look at it more, but I was talking with Samsung this morning before we did this, and I actually think we can do that in C++ and make it look pretty. So that would be the ideal thing, but we've got to get some more people in a room to look at how Seastar works and what adjustments we can make to its typing.
D
A
Somebody that we talked to a long time ago — they ended up not using Ceph and writing their own proprietary thing — basically spent the last two or three years rewriting the whole thing to be DPDK- and Seastar-based. The only real takeaway I got was that it was a lot of work, but it paid off — it was totally worth it.
A
H
A
A
Starting out — so, I mean, we have all these — there isn't actually that much that blocks, right? The messenger stuff is all asynchronous, so there's no blocking there, because it's all sort of already compartmentalized behind this interface, and we can presumably make a wall around the object store, and so there's nothing really in between that blocks, except for, like, you know, getattr or something. But we still have so many different threads that — yes, I don't know.
D
So, Josh — then, in this world where we are targeting this thing at very, very fast solid-state storage, the idea would be that we get away entirely from this idea of a shared log, a shared, you know, database — we just shard the whole thing? Right, exactly, and yep — we don't even target anything that requires a different behavior.
E
C
D
C
A
Alright, hello everyone, thanks for coming — sorry about that room mix-up; Leo's out, so things were a bit confused.
A
Just more generally, one of the key things that needs to happen, in addition to changing some of the structure of the code, is making the code a lot lighter weight. We do a lot of parsing of data structures and copying things around as we move through the stack, and going forward we need to eliminate a lot of that extra work, at least for the important fast path for actual I/O requests.
A
So the goal at the end of this — as if there's really an end — but the goal is that when the data comes in off the wire, it lands in memory and is largely unchanged as it traverses through the stack. Whereas right now, you know, it lands in a buffer, gets copied into user space, we parse it into a bunch of other data structures in the message class, and then we build a couple of different intermediate representations and the transactions, and eventually it goes out — there's a lot of stuff.
A
C
A
Probably some of it could be. In my mind there are sort of two complementary but somewhat orthogonal pieces. One is trying to streamline the data structures and the allocations as they traverse through the stack. I haven't looked as closely at the I/O path, honestly; right now I was just looking at the peering stuff, but I noticed in the peering path,
E
A
There are a lot of intermediate copies of things. Like, the hobject_t structure has a string that copies data that's in the original message, which is a decoded copy of the thing that was actually read off the wire, and so on. So I think there's a lot of stuff like that, where we can just sort of, you know, walk through the code and look at where data structures can be combined. This is something that Matt's team did a lot of a long time ago; those same ideas apply, and it's sort of tedious.
A
E
D
F
E
A
Okay, so I don't think that we need to necessarily get into all of this right now — I want to just provide a bit of an overview, to make people aware of what the current efforts are. So, yes: lots of investigation on the Boost.Asio front and on the Seastar front, on the futures front. A lot of decisions need to be made there, so if you have experience in this area, or if you're interested in it, please chime in. Then there's the other thing.
A
A lot of the same pieces around promotion and demotion and proxying reads and writes that we did with cache tiering — but eventually, if we can get to roughly equivalent functionality using that tiering model, where you have the full index in the base tier instead of this sparse maybe-you-have-the-object-maybe-you-don't in the cache, then we could just deprecate cache tiering in a release and then remove it, because people could migrate to using the new tiering model.
A
That would be — I think that's the hope, or my hope, because that'll strip out a lot of complexity around a bunch of stuff: dealing with the case where you don't know whether the object exists or not, and you have to be correct despite that. Whereas with the model where you have a full index of the objects — and the object might be there, or it might be a redirect to somewhere else — you sort of know the whole story, and then you can behave more simply.
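A minimal sketch of the "full index in the base tier" model just described — the base tier always knows every object, but an entry may be a redirect to a colder tier, so there is never a "maybe it exists, maybe it doesn't" case. Names are invented for illustration; this is not the RADOS implementation:

```python
# base tier: authoritative index of every object
base_tier = {
    "hot_obj":  ("data", b"payload"),        # content lives in the fast tier
    "cold_obj": ("redirect", "slow_tier"),   # content was demoted
}
slow_tier = {"cold_obj": b"cold payload"}

def read(name):
    entry = base_tier.get(name)
    if entry is None:
        # authoritative answer: the object really does not exist --
        # no need to proxy to another tier just to find out
        raise KeyError(name)
    kind, value = entry
    if kind == "data":
        return value            # served directly from the fast tier
    return slow_tier[name]      # follow the redirect to the cold tier
```

Contrast with the cache-tier model, where a miss in the cache tells you nothing and forces a round trip to the base tier.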
C
A
I'm not sure it's really about performing better or worse. There are cases with cache tiering where things are slow because you don't know whether the object exists, and so you might proxy through to the base tier just to find out — things like that would be better. But more generally, I don't know that it's — it's not really a performance thing. It's a code-simplicity and architectural-simplicity thing, and it's a bit more flexible, right, because you can have multiple slow tiers, and we can plug the deduplication stuff into this.
A
E
I mean, it would be much more flexible, I guess. I just — even if it gets feature parity, it doesn't seem to me that — at least if you want it for making your storage faster, it's not actually going to do that any better than what we currently have. It's just way easier for us to not break, and to keep working on other features around it, which is good, yeah.
A
A
A
I wasn't thinking about this in terms of performance — I'm thinking about it in terms of being able to — well, I guess it's kind of the same thing — I mean, being able to keep the hot data on fast storage and put all the cold stuff on slow storage, but still maintain the sort of uniformity that RADOS provides. It will do that better, right.
A
All that — the promotion issue is gone, more or less, with the existing stuff, right, because you can proxy — we can proxy reads and writes — so that issue is already sort of resolved with cache tiering. I think the main issue with cache tiering is just that the code is so complicated and fragile and hard to maintain.
E
D
So I mean, I think, regardless of what the code behind this looks like, we're not gonna get around that fact. Except that we're not really talking about accessing hot data on the cache tier — we're lucky if we can access some hot data on the cache tier — but I think what we're more talking about is just having some data on the cache tier without overwhelming the rest of the cluster with excessive promotion.
A
The way I think of it is that what we have right now has sort of no upward mobility, and it's fragile and hard to maintain, and kind of sucks in that respect. Whereas if we make a lateral move to a more traditional tiering model, then we have lots of upward mobility, and we have a simpler sort of architecture to support.
A
So we could, for example, not do full-object promotion, but have part of an object in the base tier and part of it in the back end. That was something that was, like, super complicated and really weird to do with a cache model, because you don't have the full picture
A
of what the object should be at that layer, and having sort of hybrid whatever got really gross. Whereas when you know that you have the authoritative view of what the object should be — whether or not you actually have all the content there — those sorts of things become much more reasonable, and they sort of align nicely with what the dedup folks are trying to do also, where you have bits of an object stored in other places.
D
A
D
A
D
A
But I think that we should be looking at it like: your fast tier — most people will have SSDs, because that's the most reasonable, cost-effective storage to buy; not the cheapest, but the most reasonable storage, the default storage choice — and then maybe they have some hard disks because they have a bunch of cold big data, and how should we handle that? Whereas right now people view the flash as a special case.
A
A
D
I guess the question I have is: okay, so we're gonna have, potentially, RADOS-level cache tiering; we're gonna potentially have some kind of tiering capability in BlueStore — we already do, but maybe a more sophisticated one. When a customer comes in and asks, okay, I've got 3D XPoint storage, I've got NVRAM, I've got SSDs and I've got hard disks — what do I do? I guess that's kind of what I don't see: we don't have any kind of coherent picture for them in all of this to say, this is what you do.
A
C
A
At some point some of you are planning to step up and, like, make the leap. But the other thing I want to mention — this is mostly just me — is that I'm interested in allowing different types of pools in RADOS. So right now we do replicated and erasure-coded pools, but both are based on the primary's log — the primary copies log the ops — so they're all synchronous and log-based. I've been making baby steps towards this: cleaning up the PG interface and sort of inching towards some of the performance stuff.
A
I'm trying to do some cleanup to make that a bit easier. But partly my thinking is that if we can clean up that interface to the point where you can implement, like, a trivial new pool type — that perhaps doesn't even do anything robustly, but at least is something that you can run and benchmark against — that'll make it easier to do some of our other development. For example, we could make a non-replicated pool that just writes straight to SPDK, just for kicks basically.
A
B
A
Or with multiple replicas, doing, you know, two-out-of-three type semantics — a slightly different consistency model, but one that avoids some of the latencies you see when you have failures. There are a couple of different options; one that sort of came up recently — I'm just thinking about fabric architectures, where you have compute nodes attached to a bunch of NVMe devices over a fabric. Right now, if you run Ceph on something like that, you write to one OSD on one CPU, and it replicates it —
A
it writes to its quote-unquote "local" NVMe over the fabric, but then it sends it to another CPU, with its sort of directly attached NVMe over the fabric, and so you end up hopping across multiple CPUs, when ultimately you'll be writing over a fabric anyway in that sort of environment.
A
So it would be a totally different back-end implementation of, basically, RADOS. But if we decided something like that was interesting, then you could conceivably make a RADOS pool type that would work on fabric hardware, and that would still use all the other bits of stuff — like all the RADOS users, RBD, RGW and so on, would all still work — I mean, it would still provide the sort of RADOS abstraction.
A
I'm by no means an expert or, like, a visionary on the fabric front, but my sense is that it's all sort of one axis of the storage problem, where it lets you talk to remote storage. But the key thing that Ceph does is redundancy and replication, or fault tolerance, and fabrics don't solve that at all. Maybe they have multipathing or something, but you still have to actually have multiple
H
A
A
All right, cool. Okay, so a couple of other things. A really quick shout-out about the librmb thing, which Deutsche Telekom and 42on are working on. This is a library that does mail on top of RADOS — partly on CephFS right now, but that's sort of a temporary thing — and it's a plug-in for Dovecot. They presented about it at the OpenStack Summit in Sydney, and they've talked about some of the stuff it does — I don't have the link handy right now.
A
Maybe someone can post the link on the pad, just so people are aware. The other thing is, I wanted to talk a little bit about what the plans are for the dashboard for Mimic. So this is work that Red Hat is planning on doing, hopefully with help from others. There's a pad — it's like the MVP plan, or whatever, for what's gonna come in Mimic — that I'll just quickly summarize. Hopefully this is all going to happen.
A
It really depends on how quickly we can get a person or four people spun up on this, and that's ongoing, so this might change. But the idea is basically to extend the current dashboard that's in the manager into a more robust UI for managing stuff. Right now it's sort of a glorified `ceph -s`, but this will extend it so that it has a more complete status of the cluster.
A
The CephFS stuff is already there to some degree, but it's pretty primitive. The main new thing would be around RGW: showing what the RGW zone groups and zones are, and what daemons are running for the zones. So those would be the basics. Right now there's no authentication for the dashboard, so we need to add something — at the minimum we just need, like, a basic username and password, probably. We're also going to want to have something that lets you use LDAP authentication or something else.
A
A
The upshot of all this is that you'll install Ceph like normal, and you just do basically one command to turn on the dashboard — and maybe set your user or whatever — and you'll get a login, and you'll get access to all of this stuff with a standard cluster. The metrics will be backed by Prometheus, and so the metrics won't really be useful unless you also have Prometheus sort of sitting next to your Ceph cluster gathering all this stuff.
A
And some basic management functions: so RGW access stuff, creating pools, updating CephX keys. The main initial stuff will be around RGW, so that you can create users, buckets and access keys — that's one of the more awkward things to manage right now; you have to do it through the CLI currently with radosgw-admin. There's a ton of other stuff that's sort of on the roadmap; this is the minimum stuff that we'd like to get done for Mimic. So any UI developers out there, or anyone —
A
A
I
And if I can just jump in real quick — so at SUSE we actually kind of use this exact stack for monitoring: Grafana and Prometheus, and openATTIC embeds those dashboards in as well. So thumbs up for that approach, it works really well. I think we also have quite a few dashboards already, and it would be great if those went upstream, and I'm happy to contribute the Prometheus deployment stuff and configuration stuff.
A
Awesome, yeah, that'd be great, yeah. We definitely want that. I don't know — I haven't really worked with the Grafana stuff; I don't know if you can just, like, take the dashboard definitions and plop them into the source tree with a few modifications. I don't know if it works like that or not.
A
I
I
I
A
Good, okay. The other bit of this is — because all this stuff lives in the manager — John's working on some changes and improvements there to make it easier to reuse code in the manager across different modules. One goal, and a consequence, of that is that the stuff that we enable in this dashboard will be easily exposable via the REST API as well, so that if you want to trigger the same functionality using the REST API, you'll be able to do that.
A
You'll be able to do that with minimal friction — totally — and that will be useful for external dashboards.
A
B
B
Before I start talking, I want to say that my English is not very good, so I hope all of you can give me a little more patience. Recently our team has been working on syncing data from RGW to COS, which is the public object storage service of Tencent Cloud; it's almost compatible with S3. So we tested Yehuda's branch for cloud sync, and now we can sync data from RGW to COS.
B
So I just want to talk about the work we have done, then discuss some problems we encountered and our fixes — maybe our experience can be of help to others. My talk contains two parts: the first part is the main problems we encountered when testing — which bugs, and what was going wrong — and the other is future work.
B
Now, let's take a look at the problems. The first problem is that sometimes the RGW coroutine is aborted on some requests. At first we were stuck on creating a bucket: we just added some debug logs, and we could not see the debug log after the create-bucket request. After that we tried to create the bucket manually, and there was also an abort when processing the results while syncing objects.
B
We can see that the request completed, but the REST resource handler there was not invoked. Digging into the RGW HTTP client, we found a variable which identifies whether curl has been initialized; it was not set, so the handler did not exist and the coroutine was aborted. So what we need to do is perform the initialization first in the RGW HTTP client, before invoking the curl functions. That's the first problem.
B
The second problem is about the handling of 100-continue. At first, instead of syncing data to a real S3 target directly, we used RGW itself to simulate S3 storage. We found that the HTTP flow would break with 100-continue if it's not handled correctly — for example, if the target RGW disables it but the client doesn't disable 100-continue, since S3 supports 100-continue.
B
B
Taking a look at the related logs, we found that create bucket returned with an error, and in the coroutine that handles the remote object, if the returned status code is less than 0, the sync coroutine exits. So we should judge by the error code whether the bucket already exists: if the bucket is already owned by you, the data sync coroutine should continue.
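The decision described above can be sketched as a small status check. This is an illustrative sketch, not the actual RGW coroutine; the error-code string follows S3's documented `BucketAlreadyOwnedByYou` code, but the surrounding function and return values are hypothetical.

```python
# Sketch: treat "bucket already owned by you" as success so the sync
# coroutine keeps going, while genuine errors still abort it.
TREAT_AS_SUCCESS = {"BucketAlreadyOwnedByYou"}

def handle_create_bucket(status, error_code):
    if status < 400:
        return "continue"          # bucket created normally
    if status == 409 and error_code in TREAT_AS_SUCCESS:
        return "continue"          # we already own it: keep syncing
    return "abort"                 # any other error stops the coroutine
```

The key point is that a 409 is not uniformly fatal: a 409 for a bucket someone else owns should abort, while a 409 for our own bucket is effectively success.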
B
J
B
B
J
B
J
My question is: why did you get the 409? Is it from S3 or not? What were you using as the server to sync to: are you using RGW, or are you using your cloud solution, a cloud system? — It's returning 409 if we try to recreate a bucket that already exists.
B
B
J
B
J
B
B
B
J
B
Our header field names were in upper case. If we use this format to send a request, it will fail the authorization. In fact, in S3's documentation, standard HTTP header field names use a camel-case format: that is, the first letter should be uppercase and the rest should be in lowercase. So we just do a transformation at the HTTP client layer when generating the curl request, and then it works the right way.
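The transformation described above amounts to canonicalizing each dash-separated token of the field name. A minimal sketch (the function name is hypothetical; the actual fix lives in the RGW HTTP client layer in C++):

```python
def camel_case_header(name):
    # "CONTENT-LENGTH" or "content-length" -> "Content-Length":
    # uppercase the first letter of each dash-separated part,
    # lowercase the rest, as S3 expects for standard headers.
    return "-".join(p[:1].upper() + p[1:].lower() for p in name.split("-"))
```

Note this applies to standard HTTP headers; signed `x-amz-*` headers have their own casing rules in the S3 signing process.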
B
B
B
B
B
B
B
B
At first, we could not sync the objects which were put by multipart upload. Then we found a PR to the master branch that fixes it; it is mainly about how the log entries are written when an object is put by multipart upload. That PR has already been merged into the master branch. Next, the problem is data sync init: the data sync init command crashes, mainly because the number of shards hasn't been set.
B
B
B
We have modified it and put our PR to the master branch, but it hasn't been merged yet. Well, the final problem is about syncing large objects to S3 via multipart upload. When completing a multipart upload, RGW will send the complete POST request with chunked encoding to S3, but we will get a "NotImplemented" response, because S3 does not support requests with chunked encoding, and so we need to avoid using it.
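One way to avoid chunked encoding, sketched below, is to always send an explicit Content-Length when the body size is known, so the HTTP layer never falls back to Transfer-Encoding: chunked. This is an illustrative helper, not RGW's actual code.

```python
def request_headers(body):
    """Build transfer headers for a request body (a hypothetical helper).

    S3 rejects Transfer-Encoding: chunked with NotImplemented, so when
    the body length is known we must send Content-Length instead.
    """
    headers = {}
    if body is None:
        # only fall back to chunked when the length truly isn't known
        headers["Transfer-Encoding"] = "chunked"
    else:
        headers["Content-Length"] = str(len(body))
    return headers
```

For the complete-multipart POST the body (the parts XML) is fully buffered anyway, so its length is always known and the chunked path can be avoided entirely.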
A
J
J
J
A
J
B
J
We might just hide it a bit more, but you need to understand why there was an issue when we sent it, why we got a 200 instead of the 100. It could be that it's an issue with curl, but we need to understand that, and, you know, the question is: is it still a problem for you?
J
J
B
B
There are things we are still lacking. First, syncing objects' extended metadata, such as content type and, you know, ACLs, tags and so on; and if the object has ACLs, there needs to be some way to map users from the source to the destination. Second, we want to support a more complex configuration scheme for the provider.
B
For example, ACL mappings and prefix mappings. In addition, the current sync status tells us only a little and may puzzle users who don't take a deeper look at the code. So we want to improve what RGW sync status displays: for example, in addition to displaying errors, we can attach a detail flag that provides clearer output, including the number of buckets and objects not yet synced. Lastly, we will try to work on what we have planned.
J
C
A
One quick comment about the status: those last two points are sort of two parts. One is reporting: making sure that the sync status is included in the daemon status portion that gets reported back up to the manager. That's certainly something that can be done now, and you can always, like, dump it manually.
A
Surfacing it on the dashboard is harder, because we don't have the panel of the dashboard that shows even the basic rgw stuff with the zones, the daemons and whatever else. Obviously, if you folks want to work on that, that would be awesome, but if you don't, then you can still get the stuff into the daemon status now, and then we'll surface it later on the GUI side. Okay.
B
A
Yep, yep, thanks so much for your hard work shaking out these issues. It's great to see progress here. Maybe, did you want to just briefly... I don't know if we sort of skipped the overview; not everybody might be aware what this cloud sync is all about.
A
J
Cloud sync. So cloud sync is basically taking the sync module framework, which allows us to sync data and metadata, like objects and metadata, to separate zones that could be in the cloud. In this case we're using the same infrastructure as for the elasticsearch metadata indexing work, and it's basically how we do multi-zone data sync also; it's all using the same infrastructure. So the idea there is...
J
A
A
A
Scrub is one class, recovery is one class, backfill is one class; I think that's it. And then it uses mclock to prioritize against those. I mean, there are config options that control what the relative parameters are for those different classes, so you can put it in that mode and it'll do its thing; that's all merged. The other mode is mclock_client, which actually uses dmclock to do scheduling of clients against each other, where you would have parameters, you'd have a minimum IOPS or whatever for a particular client, and so on.
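To make the op-class mode concrete, here is a toy sketch of proportional service by class weight. This is not the dmclock algorithm (which also handles reservations and limits); it is just a hypothetical illustration of the idea that each class gets service in proportion to a configured relative weight.

```python
from collections import Counter

# hypothetical relative weights per op class
weights = {"client": 6, "recovery": 2, "scrub": 1}

def schedule(queues, budget):
    """Drain ops from per-class queues in proportion to their weights.

    queues: dict mapping class name -> number of queued ops
    budget: total number of ops we are allowed to dispatch
    """
    served = Counter()
    # expand weights into a round-robin order, e.g. 6 client slots per cycle
    order = [c for c in weights for _ in range(weights[c])]
    i = 0
    while budget > 0 and any(queues.values()):
        cls = order[i % len(order)]
        i += 1
        if queues[cls] > 0:
            queues[cls] -= 1
            served[cls] += 1
            budget -= 1
    return served
```

With all queues full and a budget of 90, a 6:2:1 weighting dispatches 60 client ops, 20 recovery ops and 10 scrub ops, which is the behavior the relative-parameter config options are meant to express.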
A
That's mostly merged too, except a lot of the sort of bits to actually configure it aren't there. So there are sort of two issues right now. One is that, even for the mclock op-class mode, it doesn't work super well currently, because we aren't managing the queue depth very well at the object storage layer: the queue is too deep, and so the prioritization that we do at the mclock layer doesn't really have much effect.
A
There were a few patches to try to deal with that that Eric and the BlueStore folks were kicking around, but we mostly set this aside, because the in-flight IO scheduler, the outstanding-I/O scheduler that the SK folks were working on, looks to work much better. So the current task is to get that reviewed and merged, and there's a link to the pull request. But with that they've had very good results, which is encouraging. So we need to, like, get that in shape; that's sort of the main blocker.
A
The other sort of outstanding architectural issue is that you can't mix the op-class and the client scheduling. Currently you either use mclock to schedule background work against foreground work, or you use it to control clients against each other, but you can't do both at the same time. So Eric is trying to sort out how to do a hierarchical sort of composition of those two policies; I'm not sure how far we've gotten along there.
A
G
A
G
C
C
A
Yeah, yeah, my sense is that, in order for it to work with any IOPS parameters, we have to, like, sample and do sort of a feedback loop for the same thing that leads to throttling or something like that, because we can't have, like, a fully informed decision for every single I/O.
A
I think the good news is that even if we wire up all the APIs, around on the librados side or whatever, for the user-facing stuff, that is sort of independent of what the implementation is. So if we, you know, end up doing something that isn't really mclock, it's something else, then I think it should still...
A
G
A
A
But the problem, as you probably know, is that we can split PGs as your cluster scales up, but you can't undo the split and merge them back again if the cluster scales down, or if you accidentally split further than you should have, which makes it very risky to ever split, because it's a one-way street. And so even, like, somebody who knows what they're doing can get into a situation where they split, and then requirements change or the situation changes, and they can't merge.
A
The basic idea here is that you would set the pgp_num, which controls the placement behavior, to be less than the pg_num. That basically gets the PGs that were split to be sitting next to each other, and this is similar to the situation right after you split: they just split in place, they don't actually move; you adjust those two knobs separately. So, in reverse, you would lower the pgp_num, and the PGs would migrate so that they're right next to the thing that they're eventually going to merge with.
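The colocation effect described above falls out of the placement function. The sketch below is a Python transcription of Ceph's stable-modulo placement idea (`ceph_stable_mod`); treat the wrapper and mask computation as illustrative rather than an exact copy of the source. With pg_num = 8 but pgp_num = 4, split siblings such as PG 1 and PG 5 hash to the same placement seed, so they land next to each other, ready to merge.

```python
def ceph_stable_mod(x, b, bmask):
    # a "stable" modulo: works for any b, and changes the result for as
    # few inputs as possible when b grows (which is why splits are cheap)
    if (x & bmask) < b:
        return x & bmask
    return x & (bmask >> 1)

def placement_seed(pgid, pgp_num):
    # bmask is the smallest (2^n - 1) covering pgp_num - 1
    bmask = 1
    while bmask + 1 < pgp_num:
        bmask = (bmask << 1) | 1
    return ceph_stable_mod(pgid, pgp_num, bmask)
```

So lowering pgp_num to half of pg_num makes every pair of future merge partners share a placement seed, which is the "sitting next to each other" precondition for the merge.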
A
In order to do the opposite, to merge, we have to, like, zipper them back together. So the pull request that I have basically makes it so that, in the case where pgp_num is less than pg_num, all the versions that are chosen for log entries are offset, so that they will be able to zipper together without overlapping: they always skip over.
A
Like, you know, they go by every two, and one is either even or odd, or whatever it is; but it generalizes to not just two but N. So that's the first bit. And there's an option in there, for debugging, to just randomly skip stuff, just to exercise all that code to make sure it behaves with sparse sequences.
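The even/odd offsetting above can be sketched in a few lines. This is purely illustrative (not the actual PGLog code): each of the N soon-to-merge PGs only assigns versions congruent to its own offset mod N, so the per-PG logs interleave into one strictly increasing sequence with no collisions when they're zippered together.

```python
def next_version(current, shard, n):
    """Next log version for this shard: strictly increasing and
    congruent to `shard` modulo `n`, so sibling shards never collide."""
    v = current + 1
    while v % n != shard:
        v += 1
    return v

def zipper(logs):
    """Merge the per-shard logs into one ordered log."""
    return sorted(v for log in logs for v in log)

# two merge partners (n=2): one takes even versions, the other odd
log0, log1, v0, v1 = [], [], 0, 0
for _ in range(3):
    v0 = next_version(v0, 0, 2); log0.append(v0)
    v1 = next_version(v1, 1, 2); log1.append(v1)
```

The "randomly skip stuff" debug option mentioned above corresponds to occasionally advancing `current` by extra steps; the invariant (distinct residues mod N) still guarantees a collision-free zipper.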
A
The second part is that I think we need to have some check and feedback on those PGs that are sitting next to each other, so that we know when they've reached the point where the logs are non-overlapping and they can be zippered together, because initially they won't be. You have to actually run for a while and age out the old PG log entries and write new ones, or whatever; or, before that happens, have some way to, like, force that to happen.
C
A
A
It needs to make sure the PGs are completely clean and that the logs are non-overlapping and ready to be zippered together. And then I think it also has to make sure that, in the map in which it publishes the pg_num change where it gets reduced, no other change happens that's going to screw it up: like, it can't also mark an OSD down, say, because that's going to screw it up, or change something, because...
A
Right, yeah, exactly. Because in the split case, the PG can be in kind of any state; you can always sort of split it in half. But in the merge case, they have to be stored next to each other and they have to be, you know, arranged in a particular way in order for the merge to be possible. And so I tried to imagine what the case would be...
A
...that would actually make it not possible, that would make the merge fail, and the only thing I could think of would be if the OSD goes down and somebody, like, rips out one of the PGs with the objectstore tool and then starts it up again. That was the only thing I could think of, but I don't know; hopefully that's it.
A
So then, on the monitor side, would there be a particular epoch where the monitor drops pg_num down? When the OSD gets that map, it would have to do an atomic merge of those two PGs. So, when it publishes the map to the PG, it would have to, like, hold off, and then zipper them together, and then give that new OSD map to the combined new PG. Then I...
A
A
Yeah, yeah, which is gonna be hard. And then that's the replication case, which I think is comparatively simple; for the erasure code case it's weird, because you have the, like, rollback and roll-forward machinery, and then I think we really do need to handle that, because of the reads, yeah. It's because the read-modify-write stuff that happens on an EC object basically holds up the log, so that other EC operations won't roll forward.
A
C
I
C
A
Yeah, yeah, yeah; if we didn't have in-flight IO, it would definitely solve the problem, yeah. So if we did that, then there would be a map published that said: suspend IO to this range of PGs, right; and then the PGs would feed back that they've done that to the manager, to the monitor; the monitor sees it, and then a bit later it says, okay, now merge, yeah. That's probably a better solution.
E
A
Alright, so you would still have to have a map published that says you're in this limbo period, where the PGs may or may not be merged: please merge them. And then the primary would decide, when it received a thing, whether it's happened yet, or whether it should block the IO, or to which PG it goes, yeah, yeah. And then once it's done, it would tell the monitor and say it's done, yeah.
E
A
E
A
You know, you know what, actually, if you publish a map that says the split, sorry, the merge is in progress, then the trivial implementation of that is that the OSD just blocks the IO, which is kind of the same thing, except the OSD decides how it's done, right: instead of the client deciding, the client keeps sending the IO and the OSD would just block it, yeah.
A
C
A
Yeah, I guess that's the reason why I was trying to have a model where they happen in parallel, because then, whenever an OSD encounters this epoch, it can just do it, even in isolation, as it's rolling forward through the maps. So it doesn't matter if it failed and didn't see it, or wasn't up at the time.
A
If the monitor publishes and says these PGs should merge, now they're in the process of merging; when the primary gets to that, it will basically peer, quiesce IO, and trim all the logs down to zero. And then, after all that's happened, all the current replicas are clean.
A
They've persisted their empty logs and they're, like, a clean slate, no in-flight I/O; then it'll go to the monitor and say: okay, I'm ready to merge. And then the next map that gets published says these ones are no longer pre-merging, they're actually merged, and at that point the replicas will atomically just bring them together: because the logs are empty there's nothing to zipper, they'll just merge them; nothing is in flight and they're clean, right.
A
E
A
What I'm worried about is, like, say we're in the state where the primary is like: okay, I told everyone to zero their logs, I'm all ready to go; and then they crash, and so one of them merged and the other one didn't. Then suddenly peering has to deal with the fact that it's got two PGs, or four PGs, or whatever, that are not yet merged reporting in, but their peering state actually affects that same PG. That's the part that scares me.
E
A
The merging is driven by the OSD map epoch, and so they'll all happen at the same time, not the same time but independently, and they won't talk to each other until after they've done it. So either they haven't merged yet, in which case they're catching up on maps, and then they'll merge and then they'll talk to you after they've caught up. So you never have to deal with unsplit, unmerged PGs talking to a merged PG.
E
I
E
A
C
A
It's just like an asynchronous update. I think the slow part is going to be: the map gets published, it has to reach the OSDs, because they have to quiesce the IO, persist it and do it, and then you have to feed that state back to the monitor, so that it sees it and then moves forward. Yeah, and I imagine that, to minimize the impact, the monitor should reduce pg_num, like, one at a time, so there's only, like, one...
E
A
B
A
In general, you want to minimize the impact, so you only want one PG stuck at a time, and not for more than a couple of seconds; most people won't notice that. But it does mean that if you're going from, like, a million PGs to half a million PGs, it's gonna take a while, yeah. I feel like that kind of doesn't matter, though, because they're already placed in the same place, so it's a small amount of work compared to what you already paid to, like, migrate the data.
C
C
So yes, the next is the OSD refactoring topic, which we talked about a bit at the beginning. But to reiterate, the background for this is that, in general, much faster devices are becoming more prevalent, and over the next probably five to ten years, at some point NVMe will most likely be even more common, perhaps, than hard disks.
C
So we want to start preparing for that now, before we have to do hundreds of thousands of IOPS out of a single OSD. So think about the end state needed to actually get good performance out of these superfast, almost memory-speed devices; and that probably looks something like a shared-nothing architecture, where we avoid doing as much work on the CPU as possible for each I/O.
C
One way to do that is to shard all the work on a per-CPU basis, with no locking and no memory barriers as much as possible, just using message passing and minimal copying of memory between different cores when we need to, and running everything in a single process pinned to local processors, but with minimal overlap; eventually getting to a point where we're doing all of the scheduling, network IO and storage IO in user space, most likely with DPDK for the network and SPDK for the storage devices.
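The shared-nothing model above can be sketched abstractly: each shard owns a slice of the data outright, so no locks or memory barriers are needed on its state, and cross-shard work becomes a message to the owning shard's mailbox. This is a hypothetical illustration (a real implementation pins one shard per core and polls):

```python
from queue import SimpleQueue

NUM_SHARDS = 4

class Shard:
    def __init__(self):
        self.store = {}              # owned exclusively by this shard
        self.mailbox = SimpleQueue() # the only way in from other shards

    def run_once(self):
        # the shard's event loop step: drain one message, touch only
        # its own state, so no locking is ever required
        op, key, value = self.mailbox.get()
        if op == "put":
            self.store[key] = value

shards = [Shard() for _ in range(NUM_SHARDS)]

def submit(key, value):
    # route by hash: only the owning shard ever touches this key
    shard = shards[hash(key) % NUM_SHARDS]
    shard.mailbox.put(("put", key, value))
    return shard
```

The point of the sketch is the ownership rule: because a key is only ever touched by its home shard, correctness needs message passing, not shared-memory synchronization.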
C
So, in terms of actually structuring the code, there are many different approaches and ways you can format this. But the way we're currently structuring the code, with lots of callbacks everywhere, is pretty difficult to reason about, because the callbacks kind of get thrown around every which way and spread out throughout different layers. So you end up reading things in many different places to figure out a single strand of execution, and it also ends up using many, many heap allocations and locks.
C
C
There is the standard futures library in C++, as well as Boost.Asio, a framework that kind of wraps around that. Basically, it doesn't really provide a lot beyond the basic building blocks: you have to build your own things around that in terms of scheduling, the event loop, and dealing with everything else in the system being asynchronous.
C
Another one is Tokio, which is actually in Rust, and which looks very promising in terms of minimizing allocations. They have a clever way of using the type system to convert chains of futures into a single allocation up front: basically, it can compile down to a state machine, so that it can be one block of memory, and execution proceeds within that block of memory without having to do any allocations for each new task or chunk of work. That might be a promising direction to go.
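The "chain of futures compiled into one state machine" idea can be illustrated abstractly. The sketch below is Python (not Tokio or Rust): instead of heap-allocating a node per continuation, the whole read-transform-write chain lives in one object and advances through numbered states, which is roughly what Rust's future combinators compile down to.

```python
class ReadThenWrite:
    """One allocation holding an entire read -> transform -> write chain."""

    def __init__(self, source, sink):
        self.state = 0                 # which step of the chain we're on
        self.source, self.sink = source, sink
        self.buf = None                # intermediate value lives inline

    def poll(self):
        # each 'if' is one former continuation; no per-step allocation
        if self.state == 0:            # state 0: issue the read
            self.buf = self.source.pop()
            self.state = 1
        if self.state == 1:            # state 1: transform in place
            self.buf = self.buf.upper()
            self.state = 2
        if self.state == 2:            # state 2: write and finish
            self.sink.append(self.buf)
            self.state = 3
        return self.state == 3         # done?

src, dst = ["hello"], []
future = ReadThenWrite(src, dst)       # the single up-front allocation
done = future.poll()
```

In a callback style, each of those three steps would typically be a separately allocated closure; here the state integer plus one inline buffer replace all of them.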
C
And the third one we're looking at more is Seastar, which is designed exactly for this kind of shared-nothing architecture, with very high-speed polling-based interfaces for networking and storage; although, as far as I know, it isn't polling-only, so you don't have to burn 100% of a CPU all the time if you're not using fast storage, or if you can't dedicate a core to it. And then there's the actual futures model itself.
C
It also has primitives already available for doing all the user-space scheduling and prioritization: grouping different tasks into different priorities, and executing ones that are, say, targeting the same function or code together as a batch, so you keep the instruction cache hot by chaining them together; meaning it can keep track of groups of tasks together.
C
That kind of thing. It's also a relatively early framework, in that it doesn't actually do any releases yet, so they recommend using it as a submodule; and the major product that uses it so far is ScyllaDB, which is by the same people who wrote Seastar. So it seems like it's still a relatively early project, but it has very useful building blocks to work with.
A
I keep hearing Tokio come up. Is it possible, or does it make sense, to try to make these interoperate? Like, if we wanted to build something like Tokio, would you be able to use it in the context of Seastar, or would we be able to build that Tokio-style framework on Seastar, or something like that? Yes...
C
I think we'd be able to. If we built something like that ourselves in C++, we would have to modify the Seastar reactor, which is the event loop, basically, that runs the different tasks, to use those kinds of types.
C
C
C
C
A
C
A
C
F
F
And a while ago it was mentioned that Seastar is looking at implementing, or integrating with, C++ coroutines, which are a technical specification still in progress; but being able to use coroutines instead of futures would eliminate a lot of extra overhead. So I'm really excited to see what happens there.
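The coroutine style mentioned above can be shown with Python's async/await (C++ coroutines give the same shape): the chain of continuations from the earlier futures discussion reads as straight-line code, and the compiler or runtime builds the state machine for you. Purely an illustration; the names here are hypothetical.

```python
import asyncio

async def read(source):
    # stands in for an asynchronous read completing later
    return source.pop()

async def write(sink, buf):
    # stands in for an asynchronous write
    sink.append(buf)

async def read_then_write(source, sink):
    # suspension points replace explicit callback chaining:
    # this reads top-to-bottom but desugars to a state machine
    buf = await read(source)
    await write(sink, buf.upper())

src, dst = ["hello"], []
asyncio.run(read_then_write(src, dst))
```

Compare this with a hand-written poll-based state machine for the same chain: the control flow is identical, but the states and the intermediate buffer are generated by the language instead of being maintained by hand.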
F
I think there was a pull request or something on the Seastar side; there's a Google Groups thread or something, so there's code, and I think it got some review. And I know that the latest clang has an implementation of the coroutines TS that you could play with; it's missing some of the standard library stuff, but yeah, it's still pretty early for that.
E
I would consider it ideal if we could, like, mangle the Seastar futures into looking like the Tokio ones and use the rest of their infrastructure, personally. And it's not just about how fast a memory allocation is: it means that we can do so much of it up front, so we have a much better handle on how much memory is available in case of trouble, which ties into what was being talked about earlier.
E
Was it OpenStack Swift that started having an assigned bundle of memory? That's something I've been looking at more. I was having a side conversation with Samsung this morning before we did this, and I actually think we can do that in C++ and make it look pretty. So that would be the ideal thing, but we've got to get some more people in a room to look at how Seastar works and what adjustments we can make to its typing.
D
A
I had a conversation with a company, somebody that we talked to a long time ago; they ended up not using our stuff and writing their own proprietary thing, but they basically spent the last two or three years rewriting the whole thing to be DPDK-based. The only real takeaway I got was that it was a lot of work, but it paid off; it was totally worth it.
A
H
A
A
Starting... so, I mean, we have all these... there isn't that much that blocks. Like, the messenger stuff is all asynchronous, so there's no blocking there, because it's all sort of already compartmentalized behind this interface, and we can presumably make a wall around the object store; and so there's nothing really in between that blocks, except for, like, you know, getattr or something. But we still have, like, so many different threads that, yeah, I don't like.
A
C
Like I say, ideally you want to get to a point where we don't have the op queue anymore, at least not one for the entire OSD. We want it to be sharded, so that we don't have to do any memory barriers or any locking there, and at that point get rid of the PG locks, since they're all sharded as well. But...
C
A
E
A
E
E
F
E
E
A
E
A
C
E
E
B
A
D
D
C
D
A
C
A
I think that you'd tie it to a, whatever you call it, virtual function instead of the entire card. Okay; that's how they carve up a single NIC across multiple VMs.
A
D
C
A
A
G
C
D
A
A
E
C
D
D
So, Josh, then, just in this world where we are targeting this thing at very, very fast solid-state storage, the idea would be that we get away entirely from this idea of a shared log, a shared, you know, database; we just shard the whole thing? Right, exactly, yep. We don't even target anything that requires a different behavior.