From YouTube: 2017-JUN-01 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
http://ceph.com/performance
For full notes and video recording archive visit:
http://pad.ceph.com/p/performance_weekly
B
Alright, a couple of new things. There's a pull request that moves cache trim into the mempool thread. That thread was just periodically checking the mempool usage so that we could guide the cache trimming; basically, we don't trim the cache until we recheck the mempool. We don't do it every time, because it's slightly expensive to check all the shards for the mempool, so we do it like ten times a second or something like that. This just moves the trim into that same thread, which is probably good; it doesn't seem to show a difference so far.
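A minimal sketch of that pattern, with hypothetical names (this is not the actual Ceph code): one background thread owns both the mempool usage sample and the trim, so the relatively expensive walk over the mempool shards happens on a fixed period rather than on every operation.

```cpp
#include <atomic>
#include <chrono>
#include <cstddef>
#include <thread>

// Hypothetical sketch of the periodic check-then-trim pattern described
// above; sample_mempool_bytes() and trim_caches() are stand-ins.
class CacheTrimmer {
  std::atomic<bool> stop{false};
  std::thread worker;
  const std::size_t target_bytes;

  // Stand-in for summing the per-shard mempool counters (the "slightly
  // expensive" step, so it runs ~10x/sec instead of per-op).
  std::size_t sample_mempool_bytes() { return 0; }
  // Stand-in for evicting roughly `over` bytes from the cache.
  void trim_caches(std::size_t over) { (void)over; }

public:
  explicit CacheTrimmer(std::size_t target) : target_bytes(target) {
    worker = std::thread([this] {
      while (!stop.load(std::memory_order_relaxed)) {
        std::size_t used = sample_mempool_bytes();
        if (used > target_bytes)
          trim_caches(used - target_bytes);  // trim runs in this same thread
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
      }
    });
  }
  ~CacheTrimmer() { stop = true; worker.join(); }
};
```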
B
So, let's just do it. Originally, the thinking there was that the trimming would happen in a thread that had a hot CPU cache: if you're sharded across multiple cores, it would be the same core doing the trimming as the one using the cache in general. I think usually they're going to be on the same core anyway, and on a multi-socket system it may end up spread across sockets anyway. But if that's not the case, then we can always add per-shard cache trim threads also, so I still think it makes sense to move it out from being a synchronous operation. So that's fine. On the subject of this thing: the issue that the wall clock profiling turned up, Igor fixed it. It's ready to merge; I'm just waiting for make check to pass.
B
We can now take bufferlists and map them at runtime into a different mempool, so we can account for data however we want to: we can choose to move it into the BlueStore cache pool, or we can leave it in the anonymous pool, which is the catch-all for everything, or we can define new mempools. So there's another pull request, I think it's listed here, that adds like five new mempools for BlueStore, so you can see them, and renames them.
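In code, that looks roughly like the following sketch, based on the bufferlist/mempool API as described here (the exact pool names may differ by release):

```cpp
#include "include/buffer.h"
#include "include/mempool.h"

// Sketch: data that was read into the default (anonymous) buffer pool can
// be re-accounted at runtime to a BlueStore-specific mempool, so the
// mempool dump attributes those bytes to the cache instead of the
// catch-all pool.
void account_to_bluestore_cache(ceph::buffer::list& bl)
{
  // Re-tags the accounting of every buffer::ptr underlying this list.
  bl.reassign_to_mempool(mempool::mempool_bluestore_cache_data);
}
```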
B
So that's great for, yeah, visibility into what's going on; it was very helpful in tracking down the latest bug. There's a pull request that makes the pg_temp mapping in the OSDMap more efficient: it uses a B-tree instead of an STL map, and that made a huge difference. It's like three times faster and also like three times more memory efficient. That made a big difference on the Big Bang testing at CERN, when we created a pool with like a million pg_temp entries. There are also some changes to the BlueStore throttling that were great, basically just eliminating a bunch of useless arithmetic so it became more efficient, and some cleanups from Kefu that merged. There's an issue with the encoder code that the Big Bang testing turned up, where it was sort of repeatedly doing a memcpy to make a discontiguous buffer contiguous, but repeating it over and over again; that's all fixed and got merged.
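For the pg_temp change, the gist is the data-structure choice. A hedged illustration of the trade-off (generic C++, using boost's flat_map as a stand-in for a B-tree-style container; this is not the actual PR):

```cpp
#include <cstdint>
#include <map>
#include <vector>
#include <boost/container/flat_map.hpp>

// Why a B-tree/flat container beats std::map here: std::map allocates a
// separate red-black tree node per entry (three pointers plus color plus
// allocator overhead for every key/value), while a flat/B-tree map packs
// entries into contiguous storage.  For ~a million pg_temp entries that
// is the difference between pointer chasing on every lookup and mostly
// cache-friendly scans, plus a large constant-factor memory saving.
using pg_id_t = uint64_t;
using osd_vec_t = std::vector<int32_t>;

int main()
{
  std::map<pg_id_t, osd_vec_t> node_based;                // one heap node per entry
  boost::container::flat_map<pg_id_t, osd_vec_t> packed;  // contiguous storage

  for (pg_id_t pg = 0; pg < 1000000; ++pg) {
    node_based[pg] = {1, 2, 3};
    packed[pg] = {1, 2, 3};  // ascending keys: cheap insert at end
  }
}
```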
B
That's good. And I had a change to the RocksDB glue that uses SliceParts, which is just like an iovec or whatever scatter/gather thing. Instead of having to copy everything into a contiguous buffer, we can pass pointers to a set of buffers, and that went in. I don't think we ever really measured whether it makes a measurable difference, but it's sort of an obvious win just to avoid the extra copy. And then there's a follow-up step that hasn't moved.
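The interface in question is rocksdb::SliceParts. A hedged sketch of how glue code can hand RocksDB a discontiguous value without first flattening it (the helper itself is hypothetical):

```cpp
#include <rocksdb/slice.h>
#include <rocksdb/write_batch.h>

// Sketch: instead of memcpy'ing N fragments into one contiguous buffer,
// hand RocksDB pointers to the existing fragments; RocksDB assembles the
// record internally.
void put_fragments(rocksdb::WriteBatch& batch,
                   const rocksdb::Slice& key,
                   const rocksdb::Slice* frags, int nfrags)
{
  rocksdb::SliceParts key_parts(&key, 1);
  rocksdb::SliceParts val_parts(frags, nfrags);  // no copy on our side
  batch.Put(key_parts, val_parts);
}
```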
B
I skipped them, sorry. Okay, so there's a PowerPC-optimized one that's been updated. It needs review; I haven't looked at that yet. Is that ready for testing, or is it still waiting on review?
B
I'm on the fence on that one. This basically is some infrastructure so that we can make a transaction commit synchronously in the submitting thread, without punting it to the kv sync thread, and that's only going to be good if you have a journal device that's super fast, like an NVDIMM. But we don't have any NVDIMMs, I'm not sure how many users have them, and I'd prefer that somebody verify those results.
B
And actually, in combination with that, there's a new BlueFS option that lets you make it do the write synchronously instead of using AIO; it just does an O_DIRECT pwrite. And I think that would probably make your latency better. I don't know; there are certain cases where that would definitely have helped. I guess, in combination with the synchronous commit, you can basically avoid the extra handoff latencies.
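A sketch of what a synchronous O_DIRECT write path looks like at the syscall level (generic illustration, not the BlueFS code; the 4 KiB alignment is an assumption, as the real requirement is device specific):

```cpp
#define _GNU_SOURCE 1  // O_DIRECT on Linux
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

// Sketch: one blocking O_DIRECT pwrite() in the submitting thread instead
// of queueing an aio request and waiting for a completion callback.
// Assumes len and off are already multiples of the 4096-byte alignment.
ssize_t direct_write(const char* path, const void* data,
                     size_t len, off_t off)
{
  int fd = open(path, O_WRONLY | O_DIRECT);
  if (fd < 0)
    return -1;

  void* buf = nullptr;                    // O_DIRECT also needs an
  if (posix_memalign(&buf, 4096, len)) {  // aligned source buffer
    close(fd);
    return -1;
  }
  memcpy(buf, data, len);

  // The caller gets its completion synchronously: pwrite() returns only
  // after the kernel has issued the IO.
  ssize_t r = pwrite(fd, buf, len, off);
  free(buf);
  close(fd);
  return r;
}
```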
A
This same discussion gets brought up pretty regularly. A number of things have been done over the last year that were kind of aimed at improving this, but until we had kind of a good way of actually doing wall clock profiles in the OSD, I think a lot of it was just kind of semi-informed, or kind of educated guesses as to where to look. So now we've got some more information on it. Basically, what happens is once you no longer have onodes in cache...
A
You end up doing a lot of work in the tp_osd_tp threads, inside the PG code where the PG lock is held, and for a variety of reasons, both in terms of just the CPU overhead and doing reads into the RocksDB database due to the onodes and extents, and then also holding that PG lock, this is a pretty dramatic slowdown when onodes are not in cache. It can be upwards of maybe three or even three and a half x. So, in the past week...
A
There's a couple of things that helped this. One is that we were seeing a lot of work being done in trim inside the PG code where the PG lock is held, in the tp_osd_tp threads. So there's this PR that Sage had mentioned, 15380, where basically that's now being done in a separate thread, and that does help PG lock contention, and it cuts down CPU usage, which looks like it's down in general, not just within that block. We're spending a lot less time in trim now.
A
So that's good, but unfortunately performance hasn't really improved dramatically; a little bit, but not that much. And it seems to me that's because we are still spending a whole lot of time doing reads in RocksDB, so right now, at least, it looks like kind of the way around this is basically just making the caches big.
A
You know, it looks like BlueStore can be very, very, very memory- and cache-hungry to sustain high performance on a very large NVMe drive. Presumably smaller ones don't require as much cache, or if you have lots of onodes and extents, that might also be a reason for probably keeping the min_alloc size bigger. But anyway, there's a lot of moving pieces here, so there are lots of things that we can do, especially now that we no longer have the memory leaks that we had previously. So I will probably be spending more time this week...
A
Looking into this, I think the only other thing to mention here is that we do definitely see, especially when we are doing lots and lots of work in RocksDB, that the comparison operators are taking a lot of wall clock time. I think Igor was hoping that we might be able to figure out some non-horrible improvements to make this better, but I don't know; so far it's kind of just a little bit here and there, nothing big yet. I don't know; Igor, you're here, so do you want to comment?
A
Right now, what we see, or what I'm seeing, when we have a small cache where all the onodes don't fit into it: we've got lots of updates and lots of reads happening against the RocksDB database, and they're contending with each other, and there's always stuff going on. So even if we fixed it with faster backends behind it, it might help some; I'm not sure if it's going to help that much.
F
Yes, we've had a little bit of a problem: we ran into an issue with the async messenger over the last two weeks. We have been running a pressure test on our test cluster, and the result was OSDs being wrongly marked down on one of our clusters. We have like 1,288 clients continuously accessing one of our clusters. The test cluster was set up with 28 servers, and each server has 12 OSDs.
F
We ran it for hours, and the first 12 hours were pretty good, but then it came to a certain point, which you see as a peak in the memory that was used. Each server has 180 GB of memory; the memory is okay, but when it comes to the peak, then we see OSDs crash or shut down, and as well they were marked down just because the heartbeat got no reply: the heartbeat was hitting the timeout, and the OSD was marked down.
F
Apparently it was wrongly marked down by its neighboring OSDs, and then, of course, the OSD that was marked down would repeatedly come back in, and the recovery had a really big impact on the cluster. We did a lot of analysis over the last two weeks, and we found that the workers of the async messenger actually hang, and I'm not sure whether anyone in the community has hit this problem before with the async messenger under heavy load.
F
Because we've already put some of this in our production cluster, and we pretty much, you know, run lots and lots of load over there, we recently hit a hang. So there's a workload we need, and we're replaying it in a testing cluster so we can reproduce the hang.
F
Okay, let's see. What we observe right now is that there is a bug in the async messenger: once the load is simply high, one of the workers can hang, and then it can no longer receive any of the heartbeat messages; the heartbeat times out and it gets marked down. Okay, so we will give an update once we have more; you know, the logs haven't let us pin it down yet.
D
Hi Sage, quick question: has any work been done around sort of large disks? Like, you know, if you've got ten-terabyte disks and they're seventy percent full, in terms of the DB partition size requirements and memory requirements, has anyone sort of assessed those in the mix?
B
Basically, if it wants to write and the DB partition is full, it will just write to the big device, so it gracefully degrades, and RocksDB knows the size of the devices. So, in theory, it's putting the colder SST files on the slow storage. I have no idea how smart that is or how well that works, but yeah, I think...
B
I think what we really want to do is basically just fill up the device with whatever your sort of workload is, and just get a sense of how big the metadata is, so that we know. So if it's an RGW workload, you're going to have less metadata, because it's going to be, like, bigger objects and they're going to be...
A
I was just going to say, from a performance perspective: if you've got some kind of SSD or NVMe drive for your DB and WAL in front of spinning disks, it's seldom that you can write fast enough, like do small random writes on a spinning disk fast enough, that it's going to become a major bottleneck. Whereas the way it is on NVMe, you're just trying to, you know, push writes out as fast as possible to the underlying device.
A
I think the scary test that probably one of us should do, maybe I'll try to do this, is to make like a big LVM device out of, like, four NVMe drives, so it gives you like four terabytes or something, and just see how bad it gets. That's kind of the thing that scares me: when the RocksDB device is no faster than the main device you're trying to write to as fast as possible.
D
Okay, so, I mean, just to give an example: at the moment with filestore, we've got, you know, large disks now, six or eight terabytes, and they're sort of all pretty much static data, but then suddenly just a random section of that will become very active, and you do see a massive impact between whether it's in the filestore cache or not. So I was just sort of curious; it doesn't sound like there are numbers on the ratios yet, but it might be something I can try and look into at some point. Yeah.
A
If you're running SELinux, which I hope you're not, then eventually you're doing, like, inode lookups and any kind of security xattr lookups too, and that whole thing is really, really nasty. BlueStore doesn't suffer from all that, but you've got other things in BlueStore that you've got to worry about, so it'll be different; but maybe, you know, maybe there will be things that we'll need to look at, just different things.
D
Okay, so, I guess slightly following on from that: I posted on the mailing list about the sort of different types of SSDs coming in, noting that you can't get small, high write endurance ones anymore, and so what does that mean for sort of, you know, picking the next generation of your hardware? But I guess it's something interesting to look into, yeah.
A
The write amp in RocksDB is another scary factor in that regard, right, because we've got all these levels and everything's moving around all the time. So not being able to get smaller, high-endurance drives is something that scares me enough; but maybe it only matters if your writes are really dense. Right, well...
D
That could be. I mean, just looking at, like, a lot of the PCIe NVMe now, the smallest high write endurance ones you can get are going to be, like, one or two terabytes, you know, which still aren't cheap. So it's sort of a question of that: you know, where everyone's had sort of four-hundred-gig high write endurance drives over the last few years, that's sort of not available anymore.
D
I think that might just be short term, but it just seems the general landscape is that everyone's going with 3D NAND, and so, like, you know, capacity goes up massively and write endurance drops. But I think outside of, you know, niche environments, everyone just wants bigger SSDs and NVMe, so I don't think there's that demand for it. Mm-hmm.
D
Yeah, I mean, at the moment, I mean, it might come down, but the price is looking almost like sort of four times as much for the smallest of the new lot, which is, which, you know, pushes the price up quite a bit. But yeah, I mean, I guess the other option is maybe RAID controllers start coming back into the mix, you know? I know the whole idea of this stuff was trying to get right away from them, but maybe they start to become more attractive again; I don't know, yeah, or we try to find...
F
All right, I've got another question for you. So, well, we have discussed before about the writes, three copies and reducing the tail, and there is a PR we would like you to check. We have changed our code several times right here, and of course we do a lot of our own changes on our side, and we did find some bugs with it; right now we think there were some problems in terms of data safety, new data lost during PG remapping, so we are fixing it right now.
B
It's very complicated. There are a lot of assumptions in the way that the peering code works, that we only need to basically look at one replica for any given past interval to be sure that we've seen all the writes that were acked to the client. And if you change that threshold, then you say, oh, now we have to look at two out of three of the replicas to know that we've seen the write, whatever. So, in theory, it's possible to do that.
B
So the trivial thing, where you're just like, oh, two out of three acked, don't send the ack for the rest, that isn't going to work; it needs to be more complicated than that. So, I guess, I mean, submit the PR and we'll look at it, but just, like, you know, be warned that in order to do it correctly, we need to sort of have a high degree of confidence that it is theoretically correct, and in order to merge it we need to have that.
F
We've been running it a little bit; we've pretty much, we've almost gone one month past, you know, because recently the recovery and backfill kicked in, and so far we have not seen anything go wrong. But the benefit of this is huge; it's vital for our own applications online. That's the reason we keep on pushing, keep pushing, just because the opportunity for us is there.
B
The other thing to keep in mind is that there is a trade-off, right? So, the way that the recovery works now, we know that if there is a previous interval, an interval being a time span where a certain set of OSDs was replicating a particular PG, as long as we can reach a single OSD out of that previous interval, then...
B
...we know that we'll be able to discover any writes that happened during that interval. And so that means that, out of the three OSDs that were hopefully there, as long as you can talk to one of them, then you have that write. If you change this to the two-out-of-three type of thing, then you have to talk to two out of those three OSDs in order to get the write, so you're trading tail latency for reliability.
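To make that trade-off concrete, here is a toy model (hypothetical; this is not Ceph's peering code) of the rule being discussed: peering today needs at least one reachable OSD from every past interval, while acking writes at 2-of-3 would raise that to two per interval.

```cpp
#include <algorithm>
#include <vector>

// Toy model of the constraint described above.  In each past interval a
// fixed acting set of OSDs replicated the PG; a write acked to a client
// is guaranteed visible on at least `needed_per_interval` of them.
struct Interval {
  std::vector<int> osds;  // acting set during this interval
};

bool can_recover_all_acked_writes(const std::vector<Interval>& past,
                                  const std::vector<int>& reachable,
                                  int needed_per_interval)  // 1 today,
                                                            // 2 with 2-of-3 acks
{
  for (const auto& iv : past) {
    int up = std::count_if(iv.osds.begin(), iv.osds.end(), [&](int osd) {
      return std::find(reachable.begin(), reachable.end(), osd)
             != reachable.end();
    });
    if (up < needed_per_interval)
      return false;  // an acked write might exist only on unreachable OSDs
  }
  return true;
}
```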
H
You know, it doesn't seem to me like this is the right direction to solve this problem. I mean, you'll reduce the frequency of these latency excursions, but you're not going to eliminate them. If the problem is that a two-second latency is unacceptable, you're going to have to throttle the individual OSDs to keep it below that, okay, not just change the recovery.
F
You know, how would one do a reasonable solution for throttling? Okay, as I said, our whole cluster has, what, 10,000 clients, and there's no QoS right now to control each kind, and there's no way to go and do it in either the networking or the disks.
F
The puzzle is what to do. I can tell you what we want: the applications on the compute side, or whatever you call them, are normally very noisy MySQL instances running multi-tenant. Normally there are strict requirements on latency and other parameters; for the latency, for example, right now we set 20 milliseconds.
B
Speaking broadly, I think there are sort of two ways to address this. The first way is to take the replicated PG and make it more complicated, which is sort of a dangerous road to, like, continue pushing down, but it would, you know, solve or improve the situation in the general case for, you know, RBD and CephFS and everybody else that uses sort of the standard replicated pool semantics. The other way to go is to look at ways to structure a distributed...
B
...object store that sort of doesn't have all the complexity that you have with sort of the very general data model that the current pools provide, where you have arbitrary objects and they're all completely mutable, and you can name them whatever you want, and you can delete them and you can replace them, whatever. If you sort of constrain the semantics, where, you know, objects are immutable and they have unique names, then you can vastly simplify all of the recovery logic and the replication logic and the update logic, so that you can...
B
...you can do things like, you know, two out of three acks is okay. If you simplify the semantics, you can do something that's much, much easier to reason about and sort of prove is correct, and that solves the latency issue. This is something that we've been talking to Salesforce about for, like, I guess a year and a half, something like that, and that would really mean implementing a new type of PG that sort of has these different recovery and update semantics. Basically, I mean, that's, like, a much lower-risk and less disruptive path to go down. It doesn't solve it for your case, because you're running MySQL, which means sort of a generalized block device; but for lots of sort of big applications that are latency-sensitive, that can afford to sort of write directly to librados, it would make sense. So, you know, with...
F
Actually, two weeks ago some guys suggested something similar, you know, in the object store; [inaudible].
B
I just want to throw out one other thing here: if your end goal is to run MySQL, another possible path here, and there are obviously, like, ten things you'd need to work out in order to determine whether this is viable, but MySQL has the new RocksDB backend, and you could back RocksDB directly onto librados.
B
The reason why that's interesting is that the RocksDB write pattern is writing data once, immutably, and so you could probably design a placement group with much, much simpler update semantics, where objects can only be appended to or created or whatever, they're never modified in place, and you can make something that does sort of two-out-of-three, or whatever, two-out-of-N ack semantics, that we could have confidence in without sort of risking destabilizing the existing replication code.
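As a client-side sketch of what "backing RocksDB directly onto librados" could look like, here is a hedged example using the librados C++ API; the pool name "myrocks" and the helper function are hypothetical, and the point is the create-or-append-only write pattern.

```cpp
#include <rados/librados.hpp>
#include <string>

// Sketch: immutable, uniquely named, append-only objects written straight
// to librados; the write pattern a simplified PG type could serve with
// relaxed (e.g. 2-of-N) ack semantics.
int write_once(const std::string& oid, const std::string& payload)
{
  librados::Rados cluster;
  if (cluster.init(nullptr) < 0) return -1;  // default client id
  cluster.conf_read_file(nullptr);           // default ceph.conf search path
  if (cluster.connect() < 0) return -1;

  librados::IoCtx io;
  if (cluster.ioctx_create("myrocks", io) < 0) return -1;

  librados::bufferlist bl;
  bl.append(payload);
  // Append-only: the object is only created or appended to, never
  // rewritten in place.
  int r = io.append(oid, bl, bl.length());
  cluster.shutdown();
  return r;
}
```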
B
That one's already merged, actually, yep. So then the next one, which is just to make sure that ceph-disk takes both a --filestore and a --bluestore argument, and that the default is bluestore: that PR is there, it's just not passing tests right now; there are a bunch of unit tests that need to be fixed. That's it, yeah.