From YouTube: Ceph Performance Meeting 2021-09-30
A: Hello, folks, sorry I'm a little late here. Core is still ongoing. Hopefully we'll get more people in soon, but I'm gonna try to wait at least a little bit for them to show up.
A: I finished it up this morning itself.
A: In the meantime, I actually didn't do pull request stuff this week, since I was running out of time, and I figured we'd devote most of this to the paper today anyway. But I wanted to ask, since we've got a couple of people here: Casey, are you around?
B: Hey, yeah, what's up?
A: Hey, I was just curious if you'd heard anything new about the RGW stuff that we saw a little while back, those regressions.
B: Yeah, Mark just got back from break and we talked about it in the bug scrub today.
B: Essentially, it's a lot of performance investigation around the use of timers for timeouts. Yeah, it seems like the upstream Beast library has some other reports of issues there, so we're going to try to learn what we can from that and participate in the upstream discussion there.
A: Cool, cool. I was just curious; I didn't realize Mark was on vacation. I'm sure that slowed things down a little bit.
A: All right. So I guess, overall, I liked this paper quite a bit myself. I liked that they went pretty in depth into explaining what they did. Overall, I was a little concerned about the complexity of the system that they were testing, but they seemed to do a pretty good job of isolating things. At least I thought so.
A: That was, yeah, just kind of overall, that was my high-level thought. I kind of wish that they had gotten into the RocksDB results a little bit more. Oh good, we've got the core people coming.
A: Josh, I see; looks like core wrapped up.
A: Very good, very good. So I was just kind of saying, at a very high level, I liked the paper quite a bit. It was a good paper, Josh. Thank you for bringing it to our attention.
D: No problem. I really enjoyed it too. There was one place where they were linking to, like, a tracker board where they had their source, to have more details and results, for, like, more workloads, different sorts of mixes of I/O sizes, and I couldn't find that. Did you see that anywhere?
A: I didn't... I did not. I would actually be very interested in that as well, and also if they have anything more on RocksDB. It felt a little sparse compared to the rest of their data. But yeah, overall this was very interesting.
A: Yeah, I mean, the only nit that I kind of had with it is that, you know, they're testing this on a really complex architecture. And they did do a good job of, like, isolating it, right; they were, you know, kind of aware of that, and they even said in here that, even when they were crossing NUMA nodes, they could still get good performance with their solution. So, I mean, that was impressive to hear, honestly. But yeah, that was kind of my high level.
D: Yeah, I thought the main piece, treating throughput- and latency-sensitive applications as entirely different classes and being able to consider the cost of rerouting versus the cost of head-of-line blocking and other pieces, that's a pretty important concept. Yeah.
E: Do we want to summarize what the paper actually did, for anyone who purposefully didn't read it? So, at a super high level, the paper describes two application classes, as Josh mentioned: L-apps and T-apps, which is, I think, a classification that exists prior to this paper. L-apps refer, in the paper, to applications that submit a relatively small number of latency-sensitive I/Os; T-apps are apps characterized by trying to get as much throughput as possible. They're sort of the classic batch versus transaction processing stuff.
E: So the comparison here is with the current default Linux block queue implementation, where each core has its own queue for I/Os coming from that core, and the I/Os filter through those queues, on per-core pinned kernel threads, down to their respective device I/O drivers; that's the behavior with the current Linux implementation.
E: This paper's contribution is adding per-core ingress and egress queues. Each core submits to its own ingress queue, or to its own egress queue, for L-apps; but for T-apps, it's free to re-home to a core that is less utilized. So the key sort of pattern here is: latency-sensitive stuff goes to the local resource; throughput-sensitive stuff moves to wherever there's room, because the latency hit in swapping cores is pretty trivial compared to the throughput gained by utilizing unused resources.
D: Yeah, that was a great summary, Sam, thanks. Another major part was separating those two app classes into different queues, so that they could prioritize the egress queue for L-apps when there were latency-sensitive applications' operations in flight.
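As a minimal sketch of the scheme summarized in the last two turns (hypothetical C++; names and structure are illustrative, not the paper's implementation): L-app I/Os stay on the submitting core, T-app I/Os may be re-homed to the least-loaded core, and egress always drains L-app I/Os first.

```cpp
// Hypothetical model of per-core ingress/egress queues with L/T classes.
#include <array>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <optional>

enum class AppClass { LApp, TApp };

struct Io {
  uint64_t id;
  AppClass cls;
};

constexpr unsigned kCores = 4;

struct CoreQueues {
  std::deque<Io> l_q;  // latency-sensitive, always drained first
  std::deque<Io> t_q;  // throughput-oriented, drained when no L-app work
  std::size_t load() const { return l_q.size() + t_q.size(); }
};

std::array<CoreQueues, kCores> cores;

unsigned least_loaded_core() {
  unsigned best = 0;
  for (unsigned c = 1; c < kCores; ++c)
    if (cores[c].load() < cores[best].load()) best = c;
  return best;
}

// Ingress: L-apps submit locally; T-apps are free to re-home.
void submit(unsigned submitting_core, Io io) {
  unsigned target =
      (io.cls == AppClass::LApp) ? submitting_core : least_loaded_core();
  auto& q = cores[target];
  (io.cls == AppClass::LApp ? q.l_q : q.t_q).push_back(io);
}

// Egress: prioritize L-app I/Os whenever any exist on this core.
std::optional<Io> next_io(unsigned core) {
  auto& q = cores[core];
  if (!q.l_q.empty()) { Io io = q.l_q.front(); q.l_q.pop_front(); return io; }
  if (!q.t_q.empty()) { Io io = q.t_q.front(); q.t_q.pop_front(); return io; }
  return std::nullopt;
}
```

The trade-off discussed above is visible here: re-homing a T-app I/O costs one cross-core hop, which is small next to the throughput recovered from an otherwise idle core.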
G: Yeah, I think, yeah, they do touch upon that. The T-app and L-app split is an oversimplification of an app profile that could be extended to be more complicated. But I think the general idea that I liked was, you know, the three mechanisms that they touch upon, and how they interplay, and why each of them is important, and how they demonstrate that; that bit was very, very impressive.
A: Let me see if I can figure it out. That was by recollection; I didn't write it down, which I should have.
E: Yeah, this would actually be a cleaner analysis if they'd skipped the SPDK thing entirely, because what they're choosing to do here is... they correctly observe that if you try to run multiple things, one of which is SPDK, on the same core, whatever the SPDK app is doing is, one, going to suffer severely versus if it had the core all to itself, by design. Secondly, it's going to destroy the performance of anything else on that core.
A: That seems almost more like, maybe... I want to say a backhanded way, but a little bit of a way to, you know, make their numbers that much more impressive, right, by including it.
E: There is an L-app latency hit, incidentally, from using the system at all. It's hidden a little bit in the log graph they tend to use for latencies, but they note it down in one of their sections below: the increased complexity and code path does cost some time.
C: They have mentioned that they experience severe cache contention on the level-three cache, yeah.
E: So I have thoughts about how this applies to Ceph; can I talk about that? Yeah, go for it. So, everything I just said about applications typically not sharing a host is very much not true of Ceph. OSDs do typically get I/Os from a variety of different applications, with absolutely no control over where they come from or what messenger thread they come in on.
E: It's also not uncommon for deployments to mix RGW and RBD workloads, where RGW would be very much like this sort of T-application situation. RGW is latency-sensitive, but not in the same way that individual I/Os from RBD are, and RGW tends to bulk writes in a way that RBD just does not.
E: They don't have ordering, but, more importantly, the device drivers don't do anything but apply some extremely minimal processing and forward them directly onto the device. There's no state maintained by the device driver that needs to be... there's nothing analogous to a PG log, for instance. There's nothing in there; there's no caching to do; there's nothing of complexity at all. This all happens below the caching layer.
E: So it'll be a little bit more difficult for us to apply these lessons. But in Crimson, if we could arrange it so that the portion of SeaStore that's responsible for actually dispatching I/Os is able to freely re-home PG state across cores, then it would be possible to set up something where PGs submit locally for I/Os that are latency-sensitive.
E: So this does provide us with... yeah, with a great deal of work, this framework is actually probably a pretty good pathway to getting good load balancing across reactors in Crimson, with the caveat that it has to be the case that it's very, very cheap to move the state associated with a PG from core to core.
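To make that caveat concrete, here is a minimal sketch (plain C++, not actual Crimson or Seastar code) of what "cheap to move" has to mean: re-homing a PG is a constant-time ownership handoff, never a copy of the PG's in-memory state.

```cpp
// Hypothetical model of re-homing PG state between reactor shards.
#include <cstdint>
#include <map>
#include <memory>
#include <vector>

struct PgState {
  // Stand-ins for real per-PG state (pg log, object metadata cache, ...).
  std::vector<uint64_t> log;
  std::map<uint64_t, uint64_t> object_cache;
};

struct ReactorShard {
  // PGs currently homed on this core, keyed by PG id.
  std::map<uint32_t, std::unique_ptr<PgState>> pgs;
};

// O(1) handoff: the (possibly large) state behind the pointer is not copied.
// In real Crimson this would be a cross-core message, and in-flight ops on
// the PG would have to drain or be forwarded first.
void rehome_pg(ReactorShard& from, ReactorShard& to, uint32_t pgid) {
  auto it = from.pgs.find(pgid);
  if (it == from.pgs.end()) return;
  to.pgs[pgid] = std::move(it->second);
  from.pgs.erase(it);
}
```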
D: Are particular groups of PGs, or pools in general, more commonly throughput- or latency-sensitive? Like, for example, in RGW the data pool is obviously going to be more throughput, and the metadata pools may be more latency-sensitive, like the index pool, whereas other metadata pools, like garbage collection and logging, may be more throughput as well.
E: ...at I/O granularity, with only a very modest latency hit, and whole applications, with only somewhat more. Now, there's a section on that where, if they find that a single core is too overloaded, they'll start re-homing the T-apps on that core to a different core, but they don't have to do as much steering. The analog there would be moving the sort of default home for a PG from core to core, right? So I agree.
A: I wonder, in these tests, if they had any consideration for locality of the NUMA nodes and the underlying storage devices. Like, if you do that on a massive scale and you're moving stuff to other cores that are across NUMA nodes, at some point you can imagine that that stops being a good idea.
D: Yeah, and that's where they already had seen those, like, affinity effects, other than at the network side, for these tests, because they were running, like, against the null RAM block device or against, like, a remote SSD, correct?
D: Going back to the idea of how this applies to Crimson, especially: we also have a number of constraints in terms of our processing and op ordering. I'm wondering how that... I guess we're talking about perfect scheduling; and clearly, choosing which PGs to run next, that doesn't matter so much. Sam, I'm curious what you...
D: What do you think about the idea of having latency-sensitive versus non-latency-sensitive PGs, and the prioritization that they apply, where they always process latency-sensitive operations if they exist?
E: Some kind of dynamic queueing between cores... it would happen by default. SeaStore's queues would be multi-queue, so yeah, yeah. It would be possible, and in fact normal, for... I mean, I'd hesitate to even say for it to be processed out of order, because the reality is that there wouldn't have been any order implied in the first place.
D: I guess I'm wondering why... why are you suggesting that we want to do it at the ObjectStore layer, as opposed to having the scheduling in a single place, kind of integrated with the dmclock scheduler in some way?
E: Allocations... so one of the wins of using Seastar in the first place is that allocations, for the most part, will be reactor-local and won't interact with the rest of the system.
E: Really. But with the PG state in an OSD, there's a whole bunch of in-memory state, much of which we actually mutate, and there are inter-op dependencies for writes. Not so much for reads... well, sorry, there are write-to-read dependencies: reads have to see the state that writes just created.
A: Sam, as you're talking about this and thinking about this... as I've thought about Crimson in the past, my thought has always been that we would have, like, a local queue for local handling of I/O, and then some kind of, like, export queue for other shards to handle an I/O request that came in that wasn't local to it. Is that more or less kind of what you're also describing here, or am I mixing up things?
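A rough sketch of that local-queue-plus-export-queue idea (hypothetical names; in real code the cross-shard push would be a message to the owning reactor, not a direct write into its queue):

```cpp
// Hypothetical per-shard local and export queues.
#include <array>
#include <cstdint>
#include <deque>

struct Op {
  uint64_t id;
  uint32_t pgid;
};

constexpr unsigned kShards = 4;

struct Shard {
  std::deque<Op> local_q;   // ops that arrived on, and belong to, this shard
  std::deque<Op> export_q;  // ops handed over from other shards
};

std::array<Shard, kShards> shards;

// Assumed static PG-to-shard mapping, purely for illustration.
unsigned owner_of(uint32_t pgid) { return pgid % kShards; }

void receive(unsigned messenger_shard, Op op) {
  unsigned owner = owner_of(op.pgid);
  if (owner == messenger_shard)
    shards[owner].local_q.push_back(op);   // fast path: no core crossing
  else
    shards[owner].export_q.push_back(op);  // cross-shard handoff
}
```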
D: We're thinking about, like, the prioritization pieces, or maybe, independently, like: how would we design SeaStore and Crimson to be able to get low tail latency?
E: But even, I mean, more of a time-varying thing... oh yeah, where...
C: I'm sorry, I said we have a similar challenge, because we handle the payload, which is multiple times the size of the metadata that we need to handle, on the same... for the same device. So we have a similar challenge in that we mix up both I/Os on the same core, perhaps.
C: But as far as I understand what they say, the application separation is done by threads, so people would be able to separate these both by threads. So the payload...
E: Does that make sense? So if, at some point in time, they simply need to process I/Os from those L applications in order to satisfy their latency requirements, they can actually do that. They don't have to process anything from the T-app, except that, you know, it would destroy the T-app throughput.
E: But if we're talking about, again, metadata and data for the same I/O, it doesn't matter if you separate them across threads; it doesn't matter what you do. The I/O isn't complete until both the metadata and the data for that I/O are dispatched to disk. So it's fundamentally dissimilar, yeah.
E: We can't separately schedule the two resource pools.
G: So are we saying... so, if we have to map the concept of an app back to Ceph, are we saying that we cannot be doing that at the RGW op layer; it has to be, like, the OSD op layer?
E: So it might seem like you could, because the write streams have this, like, superficial relationship, or superficial resemblance, to these two classifications. But because the I/Os being performed come out of the same queue and they are dependent, it won't actually help the application-level latency. In fact, it will probably destroy it, because the system will be incentivized to process the sync thread at the expense of the actual I/Os.
D: Okay, going back to the RGW example: now you have your write that needs to validate... your update to the bucket index, and the write to the data. It's also not complete until it does both those things. So for both those ops, you'd want to treat them as latency-sensitive.
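As a toy illustration of that dependency (hypothetical helpers, not actual RGW code): the client-visible PUT completes only when both the data write and the bucket-index update finish, so both sub-ops inherit the request's latency sensitivity.

```cpp
// Hypothetical two-legged RGW PUT: neither leg can be deprioritized.
#include <future>

bool write_data_object()   { /* dispatch the data write */ return true; }
bool update_bucket_index() { /* dispatch the index update */ return true; }

bool rgw_put() {
  auto data  = std::async(std::launch::async, write_data_object);
  auto index = std::async(std::launch::async, update_bucket_index);
  // Scheduling either leg as "throughput class" stalls the whole request.
  return data.get() && index.get();
}
```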
E: Yes, that's true. So I just meant that doing it under the OSD, without considering the actual operation from the client, isn't very useful. You'd need it, right; the prioritization would have to be at, like, the op-itself layer, or up at the original RGW op, or tags coming from the original RBD application.
G: And just trying to map concepts back again: so they talk about request steering and application steering. What would request steering in Ceph look like, or would it even be a possibility, given that a request would be at a PG level? Or what are your thoughts?
E: Because otherwise we'd have to deal with inter-thread contention, which is unlikely to be a win from a latency point of view.
E: There's a similar relationship with SeaStore, because a sort of functioning assumption has been that each PG, or group of PGs, will get its own metadata root, with similar requirements: reads need to see recent mutations, and mutations need to mutate that state. So, assuming that it's expensive to move these things between cores, the answer is no: you pretty much have to route the op to the core that's supposed to handle it.
E: The application steering component might be of some interest. If ops coming into a PG tend to be T-ops, as in this classification, we could make a point of making it cheap to move the core responsible for a PG, so that we get a good distribution of those over time, even as the load profile changes.
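A hedged sketch of what such application steering could look like (illustrative names and thresholds; it assumes PG moves are cheap, per the earlier caveat): a periodic pass migrates throughput-dominated PGs off an overloaded core toward an underutilized one.

```cpp
// Hypothetical periodic rebalance pass over per-PG load observations.
#include <algorithm>
#include <cstdint>
#include <vector>

struct PgLoad {
  uint32_t pgid;
  unsigned home_core;
  uint64_t t_ops_per_sec;  // observed T-op rate for this PG
};

void rebalance(std::vector<PgLoad>& pgs, std::vector<uint64_t>& core_load) {
  for (auto& pg : pgs) {
    auto hot_it  = std::max_element(core_load.begin(), core_load.end());
    auto cool_it = std::min_element(core_load.begin(), core_load.end());
    auto hot  = static_cast<unsigned>(hot_it - core_load.begin());
    auto cool = static_cast<unsigned>(cool_it - core_load.begin());
    // Only move a PG off a clearly overloaded core; a real policy would
    // pin and skip latency-sensitive PGs.
    if (pg.home_core == hot && core_load[hot] > 2 * (core_load[cool] + 1)) {
      uint64_t moved = std::min(core_load[hot], pg.t_ops_per_sec);
      core_load[hot]  -= moved;
      core_load[cool] += moved;
      pg.home_core = cool;
    }
  }
}
```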
A: What is our plan right now for being able to, like, allow steering of requests over the network to specific PGs that are running on different cores? Like, do we have any ability to do that?
E: How so? I'm not seeing that... static, it would actually further constrain our ability to move between cores, because we'd have to expose that assignment out to clients, so there would be no dynamism to it at all. There wouldn't be any steering as such. Again, this paper isn't just about steering I/O; it's about steering I/O based on underutilized cores.
E: ...to this long-term imbalance thing; but we'd have to be willing to do an explicit mapping in that case, because it's unlikely that those imbalances would map neatly to some kind of simple round-robin assignment, and I don't think we're going to be willing to put 150 PG-to-core assignments per OSD in the OSDMap.
D: Is it worth considering this kind of steering at a higher level, like when talking about overloaded OSDs, or hosts entirely, and switching data to different racks or different hosts? I mean, that's basically what the balancer does, right?
D: Well, today it just doesn't do anything related to performance or load. No, it's really...
E: ...balancing for storage size, yeah. Yes, but it's the same concept; like, that's what that mechanism is for, right. But basically, I'm almost willing to say: I do not think it is possible that we ever want to move data based on a transient imbalance in I/O. The time scales involved would have to be hours.
A: Are there any background tasks that would potentially be impacted, that we might temporarily want to route around?
A: Sorry, not move data, but have, like, incoming work coming in on a different, like, messenger on a different core, that potentially would be non-local to wherever the work would end up being done, because it makes sense.
D: In terms of, like... not talking about, like, these... like, for example, RGW GC pools or other sorts of Ceph-type background apps, where you're trying to group together those PGs onto, like, specific cores, so that they don't impact the L-apps on the other cores, right?
A: Yeah, I mean, I was thinking, like, the equivalent of... I don't know, if there's some kind of, like, RocksDB-like compaction happening on one, or something along those lines; the analog of that in Seastar or SeaStore. But just something that's background work that's happening; maybe you temporarily want to have incoming requests coming in on some other core. That's also...
D: ...is, you probably should be thinking about considering, like, different pools or different PGs, or even different requests, potentially, and treating them differently based on their latency requirements, in order to lower the overall latency and, if we do steering, eventually increase throughput.
G: I think at one CDM the goal was not steering, but I think the goal was setting different performance profiles for different pools, right? Right. But yeah, that in general might be a good idea: to even start with classification, and then decide how we can use that classification.
D: There's also one... I guess, if we're doing it based on a PG, maybe it doesn't matter so much, but it might be interesting to look at for RBD, where we're only using a single pool.
E: They have to go, within that pool, to the PG; therefore, and for the reasons I outlined, they have to go to the PG core that they're for. So really the only thing available to us is moving PGs between cores. So if the pool as a whole has some kind of tag, whether it's an L or a T or some other classification that's slightly more descriptive of what really happens, then the OSD can be slightly more deliberate about arranging those PGs across its cores.
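A minimal sketch of that pool-tag idea (hypothetical; it assumes a coarse per-pool classification exists in the pool metadata): PGs of L-tagged pools stay on a reserved set of lightly loaded cores, while T-tagged and mixed pools share the remainder.

```cpp
// Hypothetical pool classification driving PG-to-core arrangement.
#include <cstdint>

enum class PoolClass { Latency, Throughput, Mixed };

struct PoolInfo {
  PoolClass cls = PoolClass::Mixed;
};

// Assumes 0 < n_reserved_l_cores < n_cores.
unsigned place_pg(const PoolInfo& pool, uint32_t pgid,
                  unsigned n_cores, unsigned n_reserved_l_cores) {
  if (pool.cls == PoolClass::Latency)
    return pgid % n_reserved_l_cores;  // keep on the quiet, reserved cores
  return n_reserved_l_cores + pgid % (n_cores - n_reserved_l_cores);
}
```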
G: Yeah, I mean, in general, yeah, just about the paper: I thought it was a well-written paper. You know, with a lot of papers it's not very intuitive to, like, understand the graphs, and they did a pretty good job of, like, mapping the text back to the graphs, so it was very clear as to what they were trying to present. There was not one unnecessary...
E: I thought this paper was superb; I'm really glad you suggested it, Josh. I think this is just going to be part of my mental framework for dealing with resource balancing forever.
D: Yeah, yeah, I've really been keeping up with a lot of the literature the past several years, so I think there's a lot, I guess, we could glean from some of these papers.
D: That's a good question. I mean, their implementation is certainly pretty researchy, in terms of relying on the NVMe-over-TCP stuff and modifications to the block layer, and, yeah, it's not necessarily designed to be directly used in the kernel for production. But I wouldn't be surprised if folks got inspiration from things like this, and eventually the kernel saw improvements to its block layer and scheduler.
E: ...of a magically special case where, like... in a way, this processing that's happening, the thing that we're optimizing for, doesn't need to happen in the first place. If the applications were submitting directly to the device queue, then this wouldn't even be a resource to contend on. And I think that observation is core to why this works in the first place: there's no state on each core associated with the I/O; there's nothing that the core actually does other than translate from one thing to another. Yeah.
A: All right, well, we are at the end of the hour, guys, and I think we'll just have another meeting after this. So, any final comments from anyone?
D: ...a paper that was talking about how to achieve very low latency with TCP, and I'm kind of contrasting that to RDMA, suggesting that you don't need RDMA-style protocols to get those kinds of benefits. That might be interesting to take a look at.
A: All right, well then, have a great week, everyone. Thanks for coming; it's nice to see so many faces, and see you next week.