From YouTube: Ceph Performance Meeting 2019-01-10
B
Looks like it's passing now, that's approved, so there's the tag. Thank you. I think — yeah, well, good. Let's see, the rbd stuff — Jason — I haven't looked at those either; not from Jason, but Jason's reviewing them. Anything with the messenger threads — that one is tricky. There is this BlueFS allocation thing — I think I approved this one; that's just being a little more aggressive doing allocation.
B
What else? Priority cache sizing — I just merged that; I think that's the one I just merged. I just do 5, 5, 4, 4, yep. I just rebased that. Using the cache — if we were able to invalidate the cache — I don't know what this is.
B
Okay — it'd be great if we had a test case, I hope. Let's see — the shared LRU.
B
Yeah, I don't know — I don't know anything about that one. There's one that's in review, and this buffer one — I know it's just broken. I definitely should — I think I'm gonna close it.
B
Okay. Closed a bunch of stuff, merged — so it looks like: the readahead logic, approved; refcounting improvements; sampling — I haven't looked at much of the sampling stuff that they're doing, but it looks good.
D
There are, however, limitations, because this sharding of RocksDB — in this attempt — is only splitting into multiple column families, which means that we expect to have reduced latency when we do compaction. We expect — I mean, at least I expect — minimized impact from stalls. And this is not in any way a solution to have multiple RocksDB databases working together to provide data to BlueStore. And that's the status, yep. That's it.
A
Right, so we've got Radek, and I thought I saw Kefu earlier, but maybe he didn't make it. Right — do you want to talk a little bit about what you guys are planning on doing in the Seastar group?
E
That's the critical — absolutely critical — thing, and we would love to have the verification as quickly as possible. We want to verify our assumptions or fail early. The absolutely worst scenario I see is kind of: spending a year of the whole team's work — conducting multiple changes to the RADOS infrastructure, maybe even affecting — maybe even altering — the protocol itself, making yet another branch, yet another round of Seastar modification, and then — oops, it doesn't work.
E
It translates — usually translates — into simplicity in maintenance, ease of maintenance, reduced costs, a reduced number of bugs. Especially — for me, it's the key: simplicity is the key nature of the design of what we are doing in Crimson, because — I don't perceive this project as user space. I think it would be more like kernel-level programming than the current user-space code. We have an OSD — a ceph-osd — so I'm pretty sure we will have to deal with a lot of problems and complexity.
E
At the very beginning of the project, that's not the preferred way to go. I am afraid that complexity — that the growth of complexity — never stops, except for some restarts like we are doing right now. I expect the things will become even more complex in the future, so I would love to start from something — maybe not even simple.
E
Maybe even dumb. And I see some possibilities to reduce the initial complexity we have right now — in fact, maybe even without affecting the flexibility. At the moment, on the ceph mailing list, we have some discussion and some good points about a lack of flexibility in the proposed design. I can reply — I think it can be already on yet another reading, which was pointed out last time. But, Mark, you asked an extremely good question about our understanding of the OSD concept, and I guess everybody has a different one.
E
So let me — let me share a link. It's basically the first part of my response to two very good points that were made. A couple of reasons for not sending it before the meeting: I asked myself big questions to refine my understanding of the OSD concept — and, from that, my understanding of the OSD and of the ObjectStore interface.
E
Basically, a combination of the OSD concept with a concrete ObjectStore implementation: it can — it will be able to — consume multiple cores. However, it won't be enforced whether to consume multiple cores or just one; it will be a matter of private implementation — it will be a matter of an implementation detail of the particular ObjectStore. That's the key part. Now, I can imagine a design where we do the sharding at the level of the ObjectStore, and this may be extremely important for that.
E
There is the idea of a decorator — a sharding decorator — that implements the futurized ObjectStore on top of ObjectStore. Basically — well, we can think about it just like an ObjectStore; it basically does a conversion from returning an integer to returning a future of an integer, or some future of a buffer. A very simple thing.
E
As mentioned — yesterday, some very interesting deployment scenarios were pointed out. I assumed that we have a lot of capacity available in the current RADOS layer implementation — that we can deploy many OSDs — and we know from the 10k testing it's possible. But Matt has pointed out that there are some hardware configurations where the limitations are much lower, especially the limitation on the number of connections.
E
Crimson OSD is about how to efficiently consume very fast devices. However, between those two problems we have a junction point, because the design of the ObjectStore — because the design of the OSD — can lock you in; it can increase or decrease the demand on RADOS capacity. And I guess that decreasing — lowering — the demand for the number of OSDs, or the number of leaves in the crush hierarchy, is one of the deployment hacks.
E
Our deployment engineers are using them when dealing with such hardware — and please don't misunderstand: the situation right now can be pretty acceptable. We know that there is at least one solution — increasing the RADOS capacity, addressing this limitation of RADOS capacity, increasing the number of connections we can have in a cluster. It could be going stateless, but it's a big change.
E
Our deployers — our deployment engineers — are fighting with the problem by massive aggregation of disks, like some kind of software RAID: I'm putting multiple disks inside a RAID, and on the output I'm getting one physical device; I'm deploying a ceph-osd on this. It allows me to have fewer connections in the cluster. And I think that Mark, and maybe Greg, are thinking that the whole new ceph-osd — the Crimson OSD — will be limited to one core only, which translates into an inability to use those deployment hacks anymore.
E
I think it's not — it's not true. I think we can have these hacks still available. But — we also think that we don't need to be affected by having the crossbar at the network layer, at the messenger layer. We could have it basically put as a private implementation detail of — let's call it (I'll find a better name in a minute) — let's call it the sharding decorator over an ObjectStore, or something like that.
B
I thought — on the one-core thing — for me, I'm just really excited about anything that will sort of speed this along to the point where we can actually stand something up and get some sense of how it performs, even with memstore. I want to know how quickly we can shuffle requests on and off the network and do some, like, request processing, so that we can have a more informed decision about the rest of it. I don't —
B
Whether we're gonna be able to, like, drive a fast NVMe with one core — is it still gonna be like three cores per NVMe? Is it gonna be like two NVMes per core? I have no idea. I know that we are wasting a ton of time right now with all of our reference counting and thread pools and all this — whatever — the current code is, like, so much bloat overhead. I really want to find out, with sort of a minimal stack, how fast we can make it go.
E
And unfortunately, the sharding at the messenger level — it's complex, and I'm also afraid it would require modifications to the Seastar project to even do that correctly. I sent a link — well, on Tuesday, on the IRC channel, we had a good conversation about some possibilities to address the thing. It's very easy because of the continuation of the discussion in the comments.
A
I think — I mean, the fail-fast is really good, all right? Like, try it and just see what happens — and, you know, maybe it works amazingly well, and then we have to make more decisions about, you know, if there are costs that we don't know about. But if it doesn't work well, then at least we know that.
F
Yes — I was actually connected through the Intel VPN and it drops me off all the time, sorry. So yeah — so, hey, I was hoping that Yingxin and Radek — you know, like, through the discussions that are ongoing — we can have something that's testable, like, you know, maybe in the next couple of months, right? That's my hope. And, like — so, Mark, your question was also on the SPDK side, or — because we have folks working on —
F
Of course, the Seastar messenger — the folks on it, which is Yingxin and Chinmay — and we also have, on the SPDK side — you know, we have quite a bit of know-how, you know, having worked on — not on any stacks that would involve replication, but at least, you know, we have converted from, let's say, you know, legacy iSCSI, and in some cases stacks that, you know, don't do as much as Ceph does. But, you know — we know how to maximize it.
A
Sure. I think the thing that came up last week — that, at least for me, is something I'd want to understand better — is: if we kind of go down this model of having lots of kind of single-core, independent OSDs, or shards, or whatever you want to call them — I don't know — that are all talking, potentially, to the same storage device, and SPDK is involved — what's our route, like, what's the best way for us to do that, and what should we be thinking about?
F
Yeah, so there are actually multiple, you know, different stacks in use — different approaches. We've had a pipeline model where you have, you know — the cores basically use a shared local state data structure, very much DPDK style, right — where multiple cores are, like — you know, you have, let's say, one acceptor thread, you know, doing the unit of processing on core 0, and then the other cores are just handling the submission and completion queues, for example, right.
F
So we basically have been doing this — mainly a poll model, where, you know, like, one core submits work to the other cores. So, simply — you know, basically workers: they take work and put it back in the completion queue, which — there are other threads that consume it, right, essentially. So there's that cooperative model, and in some cases we've also done, you know, run-to-completion sorts of models.
F
So this is something I still need to think about — but we basically have both approaches, and all of these examples are actually, like, you know — they're already upstream in SPDK. So, Mark — I mean, perhaps what we could do is maybe have, let's see, either somebody from the SPDK team, or Yingxin, writing us —
E
Let me provide more details on that. The first testing step — the first testing milestone — will be basically abstracted from the storage, or any kind of storage. We plan to use a futurized memstore — basically a memstore implementation that can live inside the rest of the Crimson OSD, inside the same reactor, as the OSD concept does.
F
So I'm actually not referring to the NVMe block part of SPDK there, right — that part at all. I'm actually just referring to the threading model — I mean, and maybe I shouldn't use the word SPDK; this is just strictly, you know, the user-space polled threading model example that these development kits provide, right. So — let me think about this some more, you know, as to which stack can be used — because, you know, like, for example — like Ali —
F
You know, they implemented their own — like, an entire object store based on SPDK, you know, over the last year or two. Of course, it's not upstream, but there are actually some other examples that are part of SPDK, you know, that may serve as a good guideline as to which threading model works the best — scales the best across cores, right.
F
So — sorry — have you looked at the NVMe-over-Fabrics or the iSCSI targets, the way they implement networking? I can actually walk you through some of that, like, in detail next week — but have you looked at those already? — I guess at the threading-model part of it? — Not yet. — Okay, yeah, so I would actually highly recommend doing that, and I'll see if I have any drawings I can share with you — we had —
F
We have some, you know, architecture and flow diagrams — I'll share those with you. I mean, if you're looking to do some — just a contained exercise where you're just, you know, looking at the messenger — the sharded messenger — and how, you know, how to best use the cores — that may be a good starting point, to see if we can extend from what we've already learned, right. You know — yeah.
F
Yeah, so it's early — or, I mean — and it's just a question, you know: namespaces, or NVMe sets — these features are not, you know — they're not ubiquitous yet, right? I mean, surely it's in the spec, but most implementations of NVMe are not gonna have it for a couple of years now, I would say. And then, in terms of namespaces, we may get, in some cases —
F
We have, let's say, a four-namespace device, which may be too small — we are always gonna be limited by, you know, the number on the one end or the other — and then NVMe sets is still very futuristic, right. So I was thinking — we may need to — I mean, we may not be able to make any assumptions on, you know, what support we get from the hardware side, yeah.
B
So it sounds kind of like what we were probably targeting initially, then, would be: if we do need multiple cores to keep an NVMe busy, then we might have, like, four OSDs, each running primarily in their own thread on their own core — but when they actually go to do I/O, they're using this lockless queue, and then there's one other core just driving the I/O.
F
Essentially, yeah, yeah — that's just — that's the worker model, that's just driving, yeah, yeah — more of a pipeline, yeah. And, in turn — the vhost part, Mark — that's actually something that has come up, you know, over the last year quite a bit, since we put it out there — but I'm actually not entirely sure, because we used — it's so specific to QEMU —
F
You know, QEMU-based implementations — that, you know — like, whether we want to marry that, right, on the virtio side at all, you know — but that's something I actually don't know. Like Ben — Ben Walker — said, Mark, last week: yeah, we may not be able to use vhost as-is, but something similar. But in general, though — like Sage said, right — I think that's probably the threading model that's gonna work the best for us.
E
Multiple applications — multiple processes — accessing the same device with very good isolation: I can recall our discussions about QAT from my previous job, yeah. Basically, those devices — I mean, QAT cards — were able to be handled very efficiently from multiple processes at the same time. The problem there was just not having good isolation, but it seems that it's going to be addressed with SR-IOV — the idea to have multiple processes consuming the same device, as in SR-IOV. It's a good extension, but it's not —
F
Looking at the roadmap, like — on the Intel side, on the NVMe side — I can definitely provide, you know — I guess, definitely, you know, a timeline as to, like, when and what drives will be available. But there will be folks that may not want to use SR-IOV, for various management reasons — like, if you — you know, in the cloud, particularly in the networking space, a lot of people actually don't like SR-IOV because of VM-migration-related challenges, right. So they don't use SR-IOV at all.
G
Okay, I want to push back on this, Radek — and maybe I'm just not understanding what you're saying — but it really sounds like, basically, what you're saying is: I don't want to do it now, but in the future we probably will do all the same things that we're having trouble with now. And that's an argument you can make, but I'm not sure if that's the one you're actually making — or, like, I don't know — that it's simpler to put it in the object store than at the messenger. Right?
E
PG partitioning, basically. However, it might also be that we will need to do aggregation at the messenger level, at least for some cases — but still, I'm pointing out that the partitioning doesn't need the crossbar; aggregation does. And we don't need to enforce the crossbar for all — possibly for all — possible futures. That's the key part of the design.
A
Greg, what I was gonna say earlier — and I don't know if this is exactly Radek's point or not — but when I was playing around with memstore, and looking at BlueStore and FileStore, it felt like the ObjectStore interface really forces you into a very specific kind of design right now. Like — what Radek is describing, to me, feels like it would be much more flexible in terms of how you implement object stores — if that makes any sense.
E
Another benefit: the non-obligatory crossbar would sit deeply in our own territory, very far away from the border with Seastar. And we found, during implementation of the sharded messenger, that Seastar might be pretty non-compliant with what we would need to have the sharding there. For instance, Seastar — just as a result of the DPDK underneath — wants to determine the core a connection will live on; with the sharded messenger, we wanted the Crimson OSD to be able to determine this mapping, but we can't — we end up fighting it.
E
Only — it's much easier and much cheaper when it comes to having something testable: you don't need the sharded messenger at all. You can — we grab the memstore, put it into the same thread — absolutely the same thread as the rest of the Crimson OSD — and we'll be fine. Well, I know that the messenger has some locks. First of all, the first approach is just to ignore them — maybe they won't be contended — but let's prepare ourselves that they will be, and try to consider —
E
We know that in the perfect single-threaded design we can have a Ceph atomic wrapper. Well, you could implement it with std::atomic: for the non-Seastar world it would map to std::atomic, but for the Seastar single-threaded — single-reactor — OSD, it would be a plain operation, without the lock prefix or anything like that.
E
Okay, this deals with simple atomics, but we know we have even more complicated locking primitives — locking structures — in the messenger. But please recall that Seastar is basically a voluntarily-preempting scheduler for user space. It means that your code can be preempted only at some preemption points, and those points are basically I/O, or jumping between two continuations — two lambdas — of a future chain.
E
And then, after the testing, if it's fine, we'll need to think more about the ObjectStore. It might be we'll need to take into consideration the details of SPDK or other things like that — I mean, the storage stack might find that the best way to consume it is, like, with the memstore, from the same thread as the messenger. But we are not enforced to do that; we could still — well, we could consume BlueStore using a thread pool, with the same design.
G
So I'm just — I haven't assimilated your third point, about the assertion with the Seastar atomic or whatever, 'cause that's complicated and we're talking — but if our main thing here is that we feel like we're fighting, in the messenger layer, against Seastar, maybe we should look again at what we're doing with our messenger layer — not say "let's not do it." Like, there are lots of ways to do a crossbar, and maybe one of them doesn't work very nicely, and so you start with the others.
G
So if we connect to the first — like, if we try and pin their connection to the object that they first had to talk to — that's not really an optimization. And it could be that we just want to allocate a core to be the messenger core, and that's responsible for sending the messages out to the others. That's another model that I think, like, works — like, there are lots of ways we can go here that ought to work nicely with Seastar, and — yeah.
B
There's gonna be so much that needs to be changed, and all of this — it's not like we're gonna write this code once for the prototype and then we're never going to touch it again. It's gonna be evolving. It's gonna be, like, a super stripped-down, simplified I/O path — like, the sooner we know, the sooner we'll know what it should look like.
G
We'll see — that's the part that scares me. Like, from where I'm standing, most of the code shouldn't care what the crossbar is. It should just have some alias type, or, you know, connection references and/or connections and messages and things — and that can be whatever we want as we change it. We just need to have, like, the workflow in place at the top, and whatever lets us do a crossbar at —
E
The moment you are talking about some of the methods to deal with increased complexity, you are talking about the black-box model. The idea is that if you hide the complexity behind a good-enough interface — simple enough and well-sealed — then this might not be a problem. But, well, we started doing that, and we saw that we are not only leaking this complexity, but putting in more complexity.
E
We saw we started affecting Seastar right now — we will need to address that; the gist is, we will need to touch it. We would need to apply basically the same changes — the Seastar shared pointer, the foreign pointer from Seastar, pollable_fd — also in Seastar. I guess it's not feasible; I guess it goes against what Seastar needs.
G
When Sage says something like "oh, we're just gonna redo it anyway" — that's the claim that scares me: if we actually are — like, if it's gonna be a big thing, throughout the whole code — it would be rewritten — not a little thing, but rather not an easily-redone thing — but we're, like — we're over on this meeting, guys, so —
F
They've been hard at work, and I mean — I know they have a lot of this — they actually have code written right at this point, and I don't know, Radek, you know, like, how far a deviation it is from what they already have versus what you're suggesting, right. I think that's something I still need to understand — but yeah, we can talk about it, like, either on the call — I believe it's tomorrow morning; Radek, your time, too late — late Thursday night — or sometime, yeah.