From YouTube: 2020-03-26 :: Ceph Performance Meeting
A: We have one that changes the locking in BlueStore, and that makes me really nervous, but they showed nice-looking performance improvements with it. I think you and Igor have both looked at it at this point; I will attempt to take a look at it as well, but I think it's a little tricky. Igor, I think you had mentioned that you were concerned about it as well. What was it you were concerned about?
C
Well,
I'd
like
the
idea
behind
this
patch,
which
is
to
reduce
the
global,
lock
lifespan
and
introduce
the
right
era.
Maybe
the
file,
locking
but
I'm,
not
sure
if
it's
easily
doable
with
current,
yes
I,
know
I'm
afraid
we
need
to
revise
this
court
significantly
go
to
introduces
like
that.
I
think
I'm.
A: There's a nice PR from Haomai to use PinnableSlice to avoid an extra copy from RocksDB. That's really nice. I need to take a look at it a little closer, but I've seen situations where I think that's going to help a lot, so I'm excited about that. And then there was some kind of optimization for the NVMe device here, the queue size of control ops in the ops controller; I have no idea what that does.
A
But
anyway,
let's
see
oh
and
then
the
other
three
are
Adams
old
home
family
sharding
pr's
that
all
got
cleaned
up
all
right,
ooh
pr's
updated
that
I
saw
this
week,
both
from
Igor
the
deferred
big
rights
PR,
and
then
the
hybrid
alla
khair
BR
Kara,
which
we
I
think
spoke
about
last
time
in
this
meeting.
Those
are
getting
reviews
updates,
testing,
so
I
think
just
kept
moving
along,
but
not
quite
ready.
Yet.
A
All
right,
so
the
only
specific
discussion
topic
I
have
for
this
week
is
that
our
our
GW
and
perf
and
scale
DFG
team
were
tracking
down
what
we
thought
was
an
issue
in
beast:
the
beast
back
end
for
our
GW.
They
were
seeing
this
performance
drop
after
a
certain
amount
of
time,
a
fairly
significant
one
and
after
lots
and
lots
of
work
by
Mark
Cogan
on
their
team,
they
found
out
that
it
was
not,
in
fact
our
GW.
It
was
behavior
kind
of
induced
by
the
OSD.
A
So
what
appeared
to
be
happening
is
that
the
OSD
was
going
along
doing
its
work
and
filling
up
the
cash
for
fine,
and
at
some
point
we
we
hid
the
cash
limit.
Okay,
you
know,
that's
that's
normal
right
yeah
and
we
don't
have
an
infinite
amount
of
cash,
and
so
at
that
point
the
memory
auto-tuning
on
the
OSD
third
trying
to
balance
between
the
blog
cache
and
the
oh.
A
No
because
we
now
started
doing
o
node
reads
from
disk
in
master
right
now,
when
that
happens,
we
unfortunately
end
up
populating
almost
the
entire
block
cache
and
oh
no,
with
Oh
notes,
because
it's
kind
of
like
a
hierarchical
thing,
so
there's
a
PR.
We
have
that
works
around
that
that
fixes
that
27
705,
but
it
requires
an
on
disk
format
change.
So
we
have
an
immersion.
Yet
we've
we've
kind
of
waited
until
Adams
called
family
sharding
PR
gets
in
so
that
we
can
just
do
one
on
disk
change.
A
You
know
once
for
users
but
get
both
benefits
at
the
same
time.
So
in
any
event,
okay,
we,
we
started
doing
reads
from
disk
populating
it
into
the
blog
cache,
but
they
started
seeing
the
page
cache
on
the
node
start
going
up,
because
those
reads
are
happening
using
buffered,
I/o,
they're
buffered
reads.
A
The
reason
that
is
is
because
at
one
point
about
two
years
ago,
we
changed
blue
FS,
buffered
I/o
to
be
on
by
default
rather
than
off.
So,
okay,
normally,
that's
not
bad.
It's
actually
kind
of
desirable
because
then
rocks
DB
and
kind
of
used.
The
page
cache
as
a
secondary
cache,
rather
than
just
only
getting
the
benefit
of
doing
reads
from
the
block
cache.
So
normally
that
seems
like
it's
been
a
performance
win,
but
what
happened
in
this
case
is
apparently
as
the
buffer
or
as
the
page
cache
was
filling
up
at
some
point.
A
The
kernel
decided
to
start
digging
into
swap,
and
that
is
the
point
at
which
they
started
seeing
performance
really
tanked
on
the
node
and
significantly
it
wasn't
just
like
a
little
bit.
It
was
like
from
wire
calls
like
a
5x
reduction
or
something
like
that.
This
is
really
bad.
That
I
haven't
looked
at
some
of
the
information
that
they
had
or
I
don't
know
if
they
should
actually
gather
information
on
what
was
being
you
know,
what
faults
were
happening.
A: 27705, the PR that fixes the double caching, I assume would probably help in this case, because it would mean that we don't basically shrink the onode cache to start giving memory to the block cache, with the total amount of cache (you know, both of them) being consumed by onodes, and then the block cache not having stuff for compaction in it.
A
So
then
we
are
faulting
and
doing
reads
to
disk
and
as
horrible
I
think,
that
pair
would
help
I,
don't
think
it
would
fix
it
because
at
some
point
we
would
do
the
exact
same
thing
when
we
ran
out
of
cash
anyway.
It
just
be
farther
out
so
at
some
point,
we're
gonna
start
doing
reads
from
disk
on
a
real
cluster
somewhere.
A: That said, I don't know whether it's actually an issue or not, but the good news, I think, is that we'd probably want to move in this direction anyway. If we can do direct I/O reads everywhere, because we're going to be doing that on NVMe devices anyway, it might actually be faster in some circumstances if the kernel is not involved. All right, well, still involved, but less involved, I guess. And then, for containers...
A
We
aren't
going
to
probably
have
swap
anyway,
which
I
guess.
Maybe
we
wouldn't
have
the
problem
then,
but
it's
just
kind
of
probably
better
overall
for
us
to
be
trying
to
do
a
good
job
of
doing
everything
ourselves
in
our
own
caches
and
can
be
as
close
as
we
can
to
that
container
memory
limit
without
going
over
it.
So
you
know
not
not
using
the
page
cache
as
a
crutches
may
be
a
good
thing.
A: So anyway, that's kind of it.
A: Yeah, I mean, it feels like every single time we involve the kernel in some way, at some point we hit some kind of weird issue that we can't easily work around without just getting the kernel out of the picture in the first place. I mean, we've hit it with, I know, sometimes other memory and VFS-layer stuff; we've hit it with other things too. It would be nice if we could fix things in the kernel, at least, here.
A: It's CDS next week, so if we're going to talk about any performance topics there, please add them to the CDS Pacific pad; I'll paste a link here.
A: I suppose one thing that I did want to maybe bring up today (I don't know if it's worth bringing up with this group or not) is that, as part of this disabling of the BlueFS buffered I/O, we should figure out when we can get 27705 in. I don't know where the column family sharding is at right now, whether that's really getting close to merging or not, but we should get that in, and we should try to get Igor's onode shrinking PR in as well. Yeah.
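
For reference: a minimal sketch, assuming the standard `ceph config` CLI, of how the BlueFS buffered-I/O setting discussed above could be toggled for an experiment. The `bluefs_buffered_io` option is the one named in this discussion; whether a change applies at runtime or only at OSD start may depend on the release.

```python
# Minimal sketch, assuming a running cluster and the standard `ceph config` CLI.
# Flips BlueFS from buffered reads (page cache involved) to direct reads.
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    result = subprocess.run(["ceph", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

print("before:", ceph("config", "get", "osd", "bluefs_buffered_io"))
# Take the page cache (and, with it, the swap pressure described above)
# out of the read path for BlueFS.
ceph("config", "set", "osd", "bluefs_buffered_io", "false")
print("after: ", ceph("config", "get", "osd", "bluefs_buffered_io"))
# Note: depending on the release this may only apply at OSD start; restart
# the OSDs if the running value does not change.
```
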
E: Maybe, for example, it may be worth testing io_uring in BlueStore again; it's been backported into a stable point release now, and it sounds like it's in much more stable shape these days than it was previously. We'd really see how it does.
A: Yeah, although it'd be a lot easier to test with CentOS on the senta or incerta nodes. Okay, but yeah, this is good, especially if it's going to get backported, and I think you're right: io_uring would be good to test. Did we merge that? We did, didn't we?
A: That's all going to be, I think, a really good improvement. Once we get both Adam's column family sharding in, and then my double-caching PR in after that, I can layer the age binning (the cache age binning) on top of that. We need that other piece in there before we do the age binning, so that can also go in. I don't know if it's interesting to talk about or not, but, you know, there's that. Yeah.
A: Ilya and I... so, background on this: on CephFS and kernel RBD (kernel CephFS and kernel RBD), I was seeing a fairly large throughput bottleneck for sequential reads from a single client, and after lots of testing with Ilya, what it appears to be is that there's kind of a limit to the I/O depth that you can hit with a single kernel client.
A
We
somewhat
different
behavior
when
doing
NBD
within
one
volume
we
saw
it
didn't
scale
well,
but
then,
when
you
had
multiple
and
beat
up
a
mediums
that
did
scale
well
and
when
you
use
either
Lib
RVD
or
lip
stuff
with
us
directly,
everything
was
great
that
we
got.
You
know
significantly
higher
performance,
so
I
went
and
ran
a
bunch
of
metrics.
A
While
this
was
happening
and
the
only
thing
that
I
really
noticed
myself
was
that,
once
we
got
above
an
I/o
depth
of
64,
then
in
the
kernel
we
started
seeing
memory
copy
taking
more
time
in
the
kernel
worker
threads.
It
wasn't
like
a
ton.
It
was
like.
Maybe
1%
give
you
time
in
each
one
of
those
threads,
but
it
it
really
consistently
showed
up
when
the
performance
started
going
down.
F: The rebuilt cluster had some other number, but it's still, you know, more than a couple, and for a sequential workload with that queue depth I think there should be more kernel threads active and showing up in that perf output. So that's the only thing that looked kind of out of place to me.
F: I looked at the iostat numbers, and, I mean, they confirm that even though we maintain the queue depth and we maintain the I/O sizes, the IOPS number drops, and the throughput drops accordingly. The other thing that looked odd is that the default queue depth for RBD devices is 128, and so your test where you ran it with 256 showed even worse performance compared to 128.
F
But
that
shouldn't
have
been
the
case,
because
you
know
the
the
fao
would
just
sit
on
those
iOS
are
not
being
able
to
push
them
to
the
device
driver
and
so
I
would
expect.
Oh
I
would
have
expected
the
performance
to
be
the
same
for
128
and
256
yeah.
So,
but
those
are
just
are
just
you
know
tiny
pieces
tiny
like
like
inconsistencies,
but
they
don't
explain
the
slow
down.
Obviously,
yeah,
that's
beyond
that.
Why
I
I
don't
have
anything
to
add.
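
For reference: a minimal sketch of the queue-depth sweep under discussion, assuming standard fio options; the device path is a placeholder for whatever `rbd map` or `rbd-nbd map` returned.

```python
# Minimal sketch: sequential reads against a mapped block device at the
# queue depths discussed above, so the 128-versus-256 comparison can be
# repeated. Assumes fio with the libaio engine is installed.
import json
import subprocess

DEVICE = "/dev/rbd0"  # placeholder: whatever `rbd map` / `rbd-nbd map` gave you

def run_fio(iodepth: int) -> float:
    """Run one sequential-read fio job and return read bandwidth in MiB/s."""
    out = subprocess.run(
        ["fio", "--name=seqread", f"--filename={DEVICE}",
         "--rw=read", "--bs=4M", "--ioengine=libaio", "--direct=1",
         f"--iodepth={iodepth}", "--runtime=60", "--time_based",
         "--output-format=json"],
        check=True, capture_output=True, text=True).stdout
    bw_kib = json.loads(out)["jobs"][0]["read"]["bw"]  # reported in KiB/s
    return bw_kib / 1024

for qd in (32, 64, 128, 256):
    print(f"iodepth={qd:>3}: {run_fio(qd):8.1f} MiB/s")
```
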
A: So, I haven't done anything with this yet, and maybe you know, or somebody else knows: is there any way to get good lock profiling inside the kernel? To figure out, since you had mentioned not seeing lots of the kernel worker threads, whether it's possible that there's some lock that things are waiting on, and that's why we only see a couple at once in this scenario.
F
Well,
there
shouldn't
be,
but
one
thing
you
can
do
is
I'm
not
sure
if
log
debugging
is
actually
enabled
in
the
kernels
that
I
guess
you're
using
upstream
Santos
kernels
and
it
might
be
disabled.
But
there
is
a
thing
called
luck,
stat,
which
is
a
file
in
/proc
which,
if
log
debugging
is
enabled
it
dumps
out
a
table
with
all
kernel
locks.
And
there
are
you
know
with
columns
like
how
many
times
this
lock
was
acquired,
how
many
times
it
contended
on
acquisition.
F
You
know
in
the
order
that
you'd
expect.
So,
if
you
just
do
you
know
cat
/proc,
slash,
locks,
Todd
and
swipe
it
to
add,
and
you
will
see
the
most
contended
blocks
and
there's
also
wait
time
in
there
as
well.
So
one
thing
that
might
be
useful
is
and
to
get
that
I'm,
not
sure
if
it's
enabled
again,
but
if
it's
enabled
and
if
it
is.
F
Gather
those
results
for
for
the
queue
depth.
I
guess
we
can
concentrate
on.
You
know
32,
64
and
128.
Now,
since
the
rest
are
Oh
more
or
less
noise,
there's
there's
some
stuff
that
you
can
do
with
perf,
but
that's
that's
more
complicated
and
requires
a
lot
of
manual
steps,
but
getting
the
locks
that
pile
up
to
their
workload.
Bronze
I
shouldn't
be
enough.
As
long
as
you
reboot
between
me
between
the
runs
yeah
and
if
it
is
disabled
on
the
on
the
CentOS
kernel,
then
we
can
definitely
do
build
using
our
Jenkins
with
it.
F: If that file can be cleared... oh, sure, if those stats can be reset. It might be that you can actually do something like echoing 'clear' to that file, but I'm not sure; probably not, actually, because you can do that with some other stats files in the kernel, but for the lock_stat one, I think you can't.
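
For reference: a minimal sketch of that /proc/lock_stat workflow, assuming a kernel built with CONFIG_LOCK_STAT and root privileges. The kernel's lockstat documentation does describe writing 0 to the file to clear the statistics, so a reboot between runs may be avoidable.

```python
# Minimal sketch: reset kernel lock statistics, run the workload, then dump
# the most contended locks. Requires CONFIG_LOCK_STAT and root; per the
# kernel lockstat docs, writing 0 resets the counters, and the sysctl
# kernel.lock_stat (if present) toggles collection on and off.
from pathlib import Path

LOCK_STAT = Path("/proc/lock_stat")

def reset_lock_stats() -> None:
    """Clear accumulated lock statistics before a benchmark run."""
    LOCK_STAT.write_text("0\n")

def top_contended(n: int = 20) -> str:
    """Return the head of the table, i.e. the most contended kernel locks."""
    return "\n".join(LOCK_STAT.read_text().splitlines()[:n])

reset_lock_stats()
# ... run the fio workload at the queue depth of interest here ...
print(top_contended())
```
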
F: I think locking would be a good first step, and then we'll see, because I keep going back to the bonding-versus-non-bonding issue, which is how this whole thing started. If it only shows up in one particular network configuration, then there's nothing we can do at the libceph layer, and no amount of lock optimization or anything like that will help.
F: But, yeah, this is just to mess around, so, you know, don't expect to get anything out of it. I would, though, like to see numbers with regard to the networking stack, because it just seems that maybe there is something that we're doing wrong, because obviously librbd, with the same networking stack, is able to push much more. It just seems weird to me that it hasn't come up before, because that code hasn't changed.
F: Yeah, I don't think libceph, the libceph layer, is involved in any way. The interface would get picked by the kernel, and all we're doing, I mean, all we're doing is just, you know, receive-message calls in this case.
F: It's the same function that the system call calls, just with a couple of layers stripped out; there are some security checks omitted, and that's about it. So the fact that we don't see this with librbd, but we do with the RBD kernel client, is kind of puzzling. But looking at this, like, looking at line three in this document that you linked, it clearly shows that the RBD kernel client, even with XFS on top, which is not the same as raw librbd...
F
It
showed
you
know,
60
mm
compared
to
15,000
with
individual.
That's
why
I
sort
of
III
and
it's
like
skeptical
of
going
down
the
road,
if
you
know
doing
extensive
one
for
filing
with
birth?
Well,
we
might,
we
might
see
something
in
the
networking
stack,
but
it
won't
be.
It
won't
be
lips
off
I'm
sure,
obviously,.
F
That
is
during
your
received
message
and
there
is
a
single
lock
for
like
per
connection,
so
basically
for
that
thread
that
that
look
is
there
just
to
protect
against
you
know
if
the
each
session
gets
closed.
F
Underneath
the
you
know,
the
underneath
messenger
and
things
like
that,
so
that
lock
is
is
not
involved
in
actual
I/o
should
not
be
contended
on
it,
but
this
is
organized
not
as
a
set
of
kernel
threads
like
one
or
each
fashion,
but
as
it
were
q
and
so
I
guess
technically
you
could.
F
You
could
get
into
a
situation
where
there
aren't
enough
work,
you
threads
and
so
who
we
don't
schedule
enough
work
items
and
like
not
enough,
receive
message
goals
that
might
actually
be
something
to
look
at,
but
again
this
does
not
explain
the
difference
between
the
bonded
versus
individual
cases,
because
it's
the
same
code
doing
the
same
thing.
Oh,
is.
A: How many workqueue threads do we have?
F: Oh, that's... well, that's what I was stressing: it's not a set of kernel threads. As I said, they are workqueue worker threads, and that's completely dynamic. The kernel has a pool, which it, you know, reuses with other workqueues too; these workers are not private to the messenger workqueue, they're global, and the kernel does whatever heuristics it does for fairness and for how many threads to run.
F: Well, there can be more worker threads than CPUs, but obviously the number of CPUs kind of limits the concurrency, and the workqueue management code takes the number of CPUs into account. It will spawn more worker threads than there are CPUs, simply because something can be blocked, but it does take the number of CPUs into account.
F: Yes, the receive-message happens in the workqueue worker thread, and we don't block; we're doing this synchronously, so if there is not enough data, we just back off and, you know, finish the work item, and the thread can get reused by another session, like another connection, for another receive-message, or go do some other work in the kernel.
F: There was work in the recent, maybe, one to two years to get it truly non-blocking, so that it never blocks and always returns immediately, but that required changes, well, I guess a little bit of changes to the AIO infrastructure, but a lot more changes to the individual filesystems. And at least at one point the situation was that io_submit became non-blocking on XFS but would still block in some cases on ext4, or vice versa.
F: Yeah, that can't explain, you know, the difference between bonded and unbonded, because this whole workqueue management code, and how we call receive-message, becomes completely irrelevant...
F: ...unless the networking stack is doing something really weird, because I would actually have expected a problem with the bonded mode, and not with the individual interfaces. I would think that the bonded mode is a lot harder to get right than just, you know, 'here are some interfaces, go use them', so I would have expected the individual-interfaces case to perform better and the bonded case to suck. When it's the other way around... yeah, I really don't have an explanation for it.
F: The I/O depth thing: on the one hand, it's kind of what you'd expect, in that if you're pushing more data, it exposes some kind of an issue; that much is expected. But the numbers are not very consistent, and that's something that I would expect from, like, an untuned, whatever, setup.
F
But
when
you're
doing
this
and
I
ciliate
know
it
would
know,
is
desired
and
it's
a
single
volume
and
a
single
FAO
process
and
we're
still
seeing
like
the
256
gates,
which
should
be
exactly
the
same
as
128,
because
the
exactly
the
same
amount
of
data
goes
through.
The
networking
stack,
but
in
the
latest
email
Minh
meant
use
about
the
biggest
difference
was
actually
between
the
128
to
256
cases.
That
was
the
biggest
drop
and
there
shouldn't
have
been
any
because
that's
the
same
number
of
I/os
on
the
same
sides.
I
said:
there's.
F: Like, the problem with 128 versus 256 would point to, you know, the block layer, or much higher in the stack than the networking stack. But then the difference between bonded and non-bonded, and the amount of data pushed between queue depth 32 and queue depth 64 (because you basically double it on every test), that points at the networking stack. So things are somewhat inconsistent between the runs.
A
There's
there's
one
case:
I
think
is
I'd
like
to
ask
you
about
I,
just
linked
some
of
these
other
tests
in
the
chat
window.
Here
the
the
NBD
case
on
a
single
volume,
and
I
guess
not
a
single
volume
that
on
one
volume
per
node
and
an
increasing
number
of
processes
in
that
volume,
so
it
would
essentially
increase
in
the
acute
by
virtue
of
any
more
processes.
A
Nbd
follows
kind
of
the
behavior
of
the
kernel
RBD,
the
pretty
pretty
closely
it's
it's
low
when
Colonel
RB
is
low
and
and
sort
of
higher.
You
know
it's
just
it
doesn't
work
as
well,
but
it's
sort
of
higher
when
it
can
be
desired,
but
then
in
the
multi
volume
cases,
that's
where
it's
fine,
it
does
really
well.
Does
that
tell
us
anything.
F: Yeah, the locking information... it might be worth it to maybe do some perf magic, and since this is now on a single node and can potentially get more supervision, as opposed to running it through teuthology automatically, perhaps we could do some, like, flame graphs and perf snapshots to see what the CPUs are doing in general, because it didn't look like the CPUs were loaded at all, judging from the perf record... then let me paste it, yeah.
F
There
is
like
more
and
more
copying
that
each
CPU
needs
to
do,
and
we
should
just
generally
be
seeing
CPUs
like
a
lot
more
than
a
couple
doing
work
in
the
cave,
workers
that
have
self
in
their
name
and
if
they're
doing
something
else
like
not
doing
well,
do
don't
do
nothing
uh-huh,
that's
obviously
a
problem
because
you
know
assuming
assuming
like
a
somewhat
random
distribution
of
like
which,
which
are
meeting
goes
to.
We
chose
D
with
a
cuter.
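
For reference: a minimal sketch of that check, assuming a kernel recent enough to append the active workqueue name (for example `+ceph-msgr`) to a kworker's comm; it lists kworker threads mentioning ceph along with their accumulated CPU ticks.

```python
# Minimal sketch: scan /proc for kworker threads tagged with ceph work and
# print their accumulated CPU time. Assumes the kernel exposes the active
# workqueue name in the kworker comm (e.g. "kworker/u16:3+ceph-msgr").
from pathlib import Path

for stat in Path("/proc").glob("[0-9]*/stat"):
    try:
        fields = stat.read_text().split()
    except OSError:
        continue  # process exited while we were scanning
    comm = fields[1].strip("()")
    if comm.startswith("kworker") and "ceph" in comm:
        utime, stime = int(fields[13]), int(fields[14])  # in clock ticks
        print(f"{comm:32} pid={fields[0]:>7} cpu_ticks={utime + stime}")
```
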
A: Yeah. Oh, and then, quickly, before we wrap this up: I bet this will not show up, you know, on the smithi nodes or anywhere else; I didn't see this with our incerta nodes. You know, the low performance mark here that I think we've seen is something like 2.5 gigabytes per second, right? That's faster than the networking on incerta; incerta usually maxes out somewhere around there, even.
F: Could it be a NUMA problem? Could it be that the kernel is doing something weird with the NUMA assignment that user space just happens to get right? Because the OSD does whatever NUMA magic it does... did they do that here?
A: So, assuming that the kernel is doing the right thing, it should be picking, you know, whatever OSD we need to talk to, it should go out the interface associated with the subnet that the remote OSD is on, and we do see it properly going over, like, balancing across interfaces. So it looks like that's happening correctly: it's choosing the one that it should go out through, and not just, you know, going out through one interface or something.
F: I'm not sure; it's probably better to just completely shut it off, because, again, we don't know where the problem is, so I'm just trying to isolate, and in isolating, turning something off entirely beats just kind of sort of disabling it. It's always better to pull the plug.
F: You might be able to. I've never done this with, like, networking interfaces, but I do this routinely with CPUs and memory: in order to test something low-memory, I just boot with a parameter that says so. Yeah, sure, there might be something for the networking interfaces as well, but I'm not sure; I've never done it.
A
All
right,
well,
I,
do
have
to
run
here.
Thank,
You,
Ilya
I,
really
appreciate.
It
really
appreciate
the
help.
Well,
thank
thank
you
for
what
he
was
thinking.
Yeah
all
right
well
have
a
good
day.
Guys
have
a
good
week.
Do
you
again
next
week,
and
hopefully
we'll
know
more
next
time
right,
yep
have
a
good
day,
bye.