From YouTube: December 2022 OpenZFS Leadership Meeting
Description
Agenda: unmapped prefetch; bandwidth quotas
full notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A
All right, let's get started. Welcome to the December 2022 OpenZFS Leadership Meeting. I see a couple of items on the agenda. Do you have any announcements? So, let's get started there. First one, the unmapped prefetch review request — is that yours?
B
Yes, it's me — I just added it. I already announced it in chat and personally to a few people, but last week I completed the unmapped prefetch PR, which I started during the summit hackathon. It turned out to be a bigger project than I originally hoped, but the patch itself came out not bad, though it's based on a much more invasive ARC cleanup.
B
So it's not exactly trivial to review, other than being split into two separate patches — I'm not sure I can split it into smaller chunks. But I think both are good to have, because that code is pretty old and needed cleanup anyway, and the final result I got with it is pretty amazing to me: if somebody doesn't need compression and doesn't get much in the way of data cache hits during sequential operations, this patch practically doubles read speed. For one thing, it introduces prefetch, which was for many years disabled for uncached data.
B
That is, if you set primarycache=metadata. And second, it avoids one memory copy out of two: instead of copying data from ARC into the dbuf cache and from the dbuf cache to user space, it now maps buffers directly from ARC into the dbuf cache. Since we know the buffer will be evicted very quickly, we don't need to use an ABD for it, and since it's uncached, eviction happens on a different thread. So performance is about the same — four gigabytes per second — just with lower CPU usage, I guess; again, instead of two memory copies there is only one. And it's a seriously small and clean patch compared to the alternative, the O_DIRECT patch, which is much more invasive, I think. But I think they could coexist: people who want completely zero-copy reads and writes could use the O_DIRECT patch, and this one covers the cases in between, I think.
B
This patch could also be adapted to O_DIRECT — maybe some property could switch which behavior should be used for O_DIRECT. And obviously the O_DIRECT zero-copy path can only be used in some situations, since it requires aligned buffers and other things, so maybe where it can't do the thing, my patch could do the thing. I think it would be a good part to complete the picture. So I'd like to ask for testing and reviews. It passes the tests, and I did my own tests.
A
Cool. And just to make sure I caught it, for folks who maybe need a little TL;DR: the improvement here is when you have set the primary cache to not include the data — primarycache=metadata — and then you're reading a file sequentially, so it's being prefetched, and because we know that it's not going to be going into the ARC, we can optimize and reduce the memory copying. Is that it, yeah?
B
Well, by default, for any allocation above four kilobytes on FreeBSD, or above about half a kilobyte on Linux, ZFS tries to allocate an ABD, which practically chunks it into four-kilobyte pieces. That means we cannot directly map it into the dbuf cache — it's the trade-off for allowing a bigger ARC, up to the whole memory size, without KVA contention. But since we know these buffers will be evicted very soon, we can bypass the ABD allocation and allocate the buffers linearly.
B
It's practically a few-line change to make ARC do that — it's trivial. And after that there's an existing code path already: when we are creating a buffer for the dbuf cache, the code just sees that, okay, we already have a linear buffer in ARC, and instead of copying it, it directly maps — or rather shares — the buffer.
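(A minimal C sketch of the decision being described, for readers skimming the notes. The names and types here are hypothetical, not the actual OpenZFS functions; the point is only that a buffer known to be uncacheable and uncompressed can be allocated as one linear region, so the dbuf cache can reference it instead of copying out of a chunked scatter ABD.)

    /* Illustrative only -- hypothetical names, not OpenZFS code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct arc_buf {
        void   *data;
        size_t  size;
        bool    linear;     /* one contiguous allocation? */
    } arc_buf_t;

    /* Hypothetical allocation policy: linear only when the buffer will be
     * evicted right away (uncacheable) and no decompression is needed. */
    static arc_buf_t *
    arc_buf_alloc(size_t size, bool uncacheable, bool compressed)
    {
        arc_buf_t *buf = malloc(sizeof (*buf));
        buf->size = size;
        buf->linear = uncacheable && !compressed;   /* the few-line change */
        buf->data = malloc(size);   /* real code chunks non-linear buffers */
        return (buf);
    }

    /* Hypothetical dbuf-cache fill path: share when linear, copy otherwise. */
    static const void *
    dbuf_fill(void *dbuf_copy, const arc_buf_t *buf)
    {
        if (buf->linear)
            return (buf->data);                     /* no extra memory copy */
        memcpy(dbuf_copy, buf->data, buf->size);    /* the copy being avoided */
        return (dbuf_copy);
    }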
A
Yeah — and this applies only if the data is not compressed, is that right?

B
Yeah. If it's compressed, then it falls back and uses the traditional ABD path — there's not much to win there. Well, I was thinking maybe I could still use a linear buffer if the compression algorithm or something would benefit from a linear buffer, but I'm not aware of that. So right now, if it sees a buffer is compressed, it immediately falls back to the existing behavior. It's still beneficial — better than it was before — because the data will still be prefetched.
C
Yeah, well, actually, I think maybe we do want a linear buffer in that case, because if it is compressed, I'm quite sure we end up copying it to a linear buffer to decompress it — the decompression algorithms can't take an ABD, they only take a linear buffer. So if we skip making an ABD and just always have a linear buffer for the compressed case, that probably saves a copy as well.
A
Yeah, I think that's true, but I investigated this several years back, and the performance win I could get by keeping it linear was extremely minimal.
B
Yeah — actually, I think I was the one who introduced it years ago, I don't remember how many, maybe five-plus, when I hit this. I blocked prefetch in that situation, because prefetch happens at the dbuf cache layer, and if we don't get hits on the buffers, we don't know when to evict them. That's why in this patch I made ARC aware of which buffers are uncacheable and introduced one more cache state for data to be evicted as soon as possible — it gets evicted roughly once per second.
B
That lets us compensate for latencies. And also there I fixed support for sub-block reads: if you're doing sequential 16K reads, for example, from 128-kilobyte blocks, previously each read went to disk again; now it aggregates them, doing one copy, and keeps the block in cache for a while, and after you've read all of it, it's evicted. And 16K may be a bad example, but if somebody sets the block size or record size to 16 meg, who knows whether the application reads at 16-meg granularity?
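(A tiny C sketch of that sub-block behavior, again with hypothetical names rather than the actual code: the full record is read from disk once and kept until the sequential reader has consumed all of it, instead of hitting disk for every small request.)

    /* Illustrative only -- hypothetical names, not OpenZFS code. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct cached_record {
        uint8_t data[128 * 1024];   /* one 128K record, read from disk once */
        size_t  consumed;           /* bytes handed out so far */
    } cached_record_t;

    /* Serve the next sequential chunk from the cached record; return 1 once
     * the whole record has been consumed so the caller can evict it. */
    static int
    record_read(cached_record_t *rec, void *dst, size_t len)
    {
        memcpy(dst, rec->data + rec->consumed, len);
        rec->consumed += len;
        return (rec->consumed >= sizeof (rec->data));
    }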
A
That's cool. The next thing on the agenda is the ZED configuration.
C
Yeah, so we started a pull request a while back and it's complete now. It uses vdev properties to let you configure the ZFS Event Daemon on Linux — basically to control the thresholds of how many checksum errors or how many I/O errors in how much time before we consider a disk to be the source of the problem, offline it, and kick in a spare. Right now those values are just hard-coded into ZED on Linux, because it was kind of inherited from FMA or whatever on Solaris, where there was a system-wide way to configure it; on Linux it would just statically set an NV list with those values. But instead we now get them from per-vdev properties, and we'll have a second pull request shortly.
C
That
will
extend
vdeb
properties
so
that
the
root
V
Dev,
the
one
with
the
name
it
matches
the
pool
name,
we'll
also
have
a
zap.
It
doesn't
currently
so
like
before
we
make
a
zap
if
it
doesn't
exist
when
we
import
the
pool
and
The
Inheritance
we
come
up
with.
There
means
that
you
could
set
these
properties
at
the
pools,
bdev
property,
so
that
new
diss,
you
add,
will
inherit
the
value
you
set
rather
than
you
know,
having
to
remember
to
set
it
on
each
V
Dev.
C
We'd like to get some quick review on that, to get that merged.
A
All right, I think that was all of the items on the agenda. Other topics or questions today?
C
So the question is what you would actually want to be limiting. For example, if you want to just limit the read megabytes per second per dataset, would that be at the top layer, like the ZPL, where you're saying, you know, the read and write calls we get from the application — we're limiting those? Or do we want to look lower and say, just because you did a 4K read doesn't mean it didn't actually generate a lot more reads from the hardware, right?
C
If the record size is larger, then your 4K read may have pulled a whole 128K off disk — and then what about all the metadata, and so on.
C
Right. And, kind of like quotas with compression, it's a question of, in that case, should that be the provider's gain — the fact that they managed to get it cached for you, and you get the throughput you were told you would get anyway — or, in some of those cases, should the user get the win from the fact that what they were requesting was already in cache? Yeah, we're definitely interested in what other people's opinions on that type of thing are.
E
So I can share my thinking on this topic. In my opinion, we should provide something which is predictable, so either we should be consistent and just limit the application traffic, or try to limit the disk traffic. But when we go to disk traffic it's extremely hard — it won't be predictable for the user, because we then have to account for all the inflation related to things like RAID-Z, all kinds of I/O, even metadata traffic and stuff like that.
E
So even if you do smaller requests, they can be inflated, and it's hard for the application, for the user, to predict how much throughput you will use, etc. On the other hand, when we measure the application throughput, of course it won't be reflected exactly in the disk traffic, because again there are a lot of factors. But I think we shouldn't try to do something in the middle. So let's say we —
E
We
monitor
because
doing
this
on
application
like
trying
to
monitor
application
traffic,
so
monitoring
just
read
the
right
system
calls
or
v-ops.
We
basically.
E
So if we try to inflate that and account for it, then for me as the user it's hard to predict what exactly my limit is. So yeah, I think our conclusion was to just monitor this at the vop level and basically limit the application traffic, and not try to limit disk traffic, which would be much harder — and it's much harder for other reasons too, because by then we're already too late: it's already in a transaction group, so we may not be able to actually delay some of those requests, and we would need to, I don't know, somehow penalize the application later.
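(As a rough illustration of limiting at that layer, here is a minimal token-bucket sketch in C. It is purely hypothetical — none of these names exist in OpenZFS — but it shows the shape of the idea: each read or write at the ZPL/vop boundary charges its logical size against a per-dataset bucket and waits when the bucket is empty. Charging the logical size the application asked for, rather than the physical I/O it eventually causes, is what keeps the limit predictable for the user.)

    /* Hypothetical per-dataset token bucket -- not OpenZFS code. */
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    typedef struct rate_limit {
        uint64_t        bytes_per_sec;  /* configured limit; 0 = unlimited */
        double          tokens;         /* bytes currently available */
        struct timespec last;           /* last refill time */
    } rate_limit_t;

    /* Called from the vop-level read/write path with the logical size. */
    static void
    rate_limit_charge(rate_limit_t *rl, uint64_t nbytes)
    {
        if (rl->bytes_per_sec == 0)
            return;
        for (;;) {
            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            double elapsed = (now.tv_sec - rl->last.tv_sec) +
                (now.tv_nsec - rl->last.tv_nsec) / 1e9;
            rl->last = now;
            rl->tokens += elapsed * rl->bytes_per_sec;
            if (rl->tokens > rl->bytes_per_sec)     /* cap burst at ~1 second */
                rl->tokens = rl->bytes_per_sec;
            if (rl->tokens >= (double)nbytes) {
                rl->tokens -= (double)nbytes;       /* enough budget: proceed */
                return;
            }
            usleep(1000);                           /* wait for more tokens */
        }
    }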
C
Yeah — and, you know, once all the writes are mixed together into a transaction group, we can't make some of them slower without ending up delaying others, and so on. We kind of looked at trying to do it completely logically, on the logical data, or completely physically. But even for writes, the ideal place to rate-limit stuff is in, like, the TX try-assign or whatever it's called, before it's in the transaction group, where we do the throttling for dirty data and so on — because other throttling already takes place there, and it makes sense. But at that point we don't know how much actual physical I/O this ZIO is going to cause, so we can't actually tune the limit there. And yeah, like Pavel said, even when we looked at trying to just guess — like, oh, you're doing this ZIO, and we happen to know the record size for this object is 128K, so we're going to round your read request up — it gets very confusing to the user when their performance kind of skips all over the place, versus if they just get...
E
The result, I think, we cannot predict — with compression and stuff like this. But I think from the user's perspective it would be good to know: okay, I can verify it — I can, let's say, use DTrace and see, okay, that's my application throughput or IOPS or whatever, so this is the limit I will set, because this is my normal workload, let's say, right?
A
If we're talking about doing that at the ZPL layer, then you could also do it at the VFS layer and have it work for other file systems — but it would probably be an implementation specific to your operating system. I was only wondering: does anybody know of something like this being done?
E
But from our experiments it doesn't work well — it's way off.
E
There are things in Linux, but in terms of doing this at the VFS — of course it could be done, but one of the arguments for doing it in ZFS is that we could get this inheritance, so yeah.
A
I'm not saying we shouldn't do it in ZFS — I was just thinking in terms of the landscape of what other options there are, and comparing how those are controlled and how they're used. Like, are they used a lot? It sounds like there is one on FreeBSD, but it doesn't work very well, so maybe nobody uses it.
E
Mm-hmm. Also, if we do it at the ZPL level — well, we want to start with limiting the data traffic, but I can also imagine that along the way we would like to add limiting of metadata traffic, so all the, like, file...
C
But yeah, we were just interested in anybody else's thoughts on their use cases for this, and practical considerations as far as being able to make this useful beyond just the scope that we have for the first customer.
D
Hey everybody — it's my first time showing up here. I've been using ZFS for about a decade at UCSF, running lots of servers, lots of petabytes. My use case would much better fit a model where I'd want to prioritize datasets. So rather than limit total IOPS — you know, if the system isn't busy, I want to give as many IOPS as possible to whoever wants them, right? That's my business. So, a model where you're prioritizing what's coming in and filling a transaction group...
D
So that's just all I can provide in terms of feedback about usability or value to me.
E
Yes — with this kind of architecture, the model would definitely be much more complex, in order to monitor the whole pool globally, know exactly what's going on in all the datasets, and then, based on that knowledge, limit resources for each dataset according to priority.
E
And just so we understand the consequence of doing this at the ZPL layer: when we limit the throughput or IOPS — basically operations — we have no idea whether this can actually be consumed by the pool or not, right? We don't have that knowledge. We have no idea if our read will turn into multiple reads, or our write will turn into multiple writes, etc.
E
So we can easily send too much or too little, and it's hard to tell. In ZFS there is already throttling to provide this back pressure if we just try to send too much, but it's not based on the priority of the dataset; and for this it would be hard, because we don't really know the pool's limits, and we don't really know how our application-level traffic translates to disk traffic.
C
Right — like Matt said, if it's a cache hit, it turns out it doesn't generate any disk I/O.
B
From another perspective, I think it would be more useful to keep the original separation between application stats and disk stats, but we obviously need to get closer to the disks. I'm thinking, for example, in the case of reads: if somebody tries to do more I/Os than they have quality-of-service guarantees for, we could just mark those I/Os as lower priority.
B
Just reduce their priority level somehow and pass those I/Os down to the ZIO scheduler, and then that layer deprioritizes them so they run slower. For writes it would obviously be more complicated, but then again we have our throttling mechanism.
B
We base it now only on the amount of dirty data in ARC, and we already have a problem there: if we have two applications running, one with bigger blocks and one with smaller blocks, the smaller one will be penalized much more heavily than the bigger one. We already need some sort of redesign there, so we could add other factors into the same function that returns the delay time — not just putting all buffers into the same linear queue and having them wait the respective time.
B
That's what the cellular operators do, I guess. Anybody who wants not just to limit somebody's throughput but to do some fair share — that's the approach. I'm not sure what the point is of giving somebody exactly X megabytes per second if the rest of the pool is idle — why not give more?
C
The use case we have is a multi-tenant thing — a bunch of different LXD containers, basically — and wanting to make sure that one noisy neighbor doesn't stall everybody. Just to keep it simple, we'll just set, like, a megabytes-per-second and an IOPS-per-second limit on each of them, and that will keep any one person from dominating the system too much. We're not too worried about fair share or anything, just that one person doesn't make it terrible for everyone.
A
You know, clients — and I'm going to oversubscribe, like 10x maybe, so I'm going to give them each, like, so many megabytes per second or whatever. Yeah, so it'd be very kind of hand-wavy, and you get into situations where you can't get your whole quota because it's oversubscribed by everyone else, and you get into situations where the system is underutilized because a few people are hitting their quotas and nobody else is using it. But for your use case those are all kind of fine, yeah.
A
But that does seem like a not-too-hard way to describe what you want.
E
Prioritizing, from what I understand, works for reads, right — but can we do the same with writes?
A
Or you get really, super slowed down — you know, where you're not getting assigned to the TXG until the next TXG — or it ends up very hacky, where you do something like: oh, if the TXG is at more than 50% of the max dirty data, then I'm not going to add any more of your I/Os, or something like that. But I think it'd be hard to come up with anything that has consistent performance.
E
Yeah, but I wonder, if we cannot do that for all kinds of I/Os, whether it makes sense to do it at all — because if we do it for most of the I/Os but still can't really address async writes, one consumer can still send a lot of writes and just saturate the pool, and then deprioritizing the rest, I don't know, doesn't really address the problem.
E
Under
utilizing
the
pool
yeah,
that's
really
concern
and
I
wonder
if
this
is
not
just
a
balance
of
being
able
to
utilize
the
pull
fully
and
being
able
to
provide
productive
predictability
to
those
all
the
tenants
on
the
system.
B
B
Okay — say your specific client or consumer gets ten percent of the dirty ARC buffers before throttling starts, or something like that. Like, maybe implement two separate throttles: one system-wide, just so we don't overflow memory, and one more that's user-specific, which starts throttling earlier, when that consumer by itself has used ten percent of the dirty buffers.
B
So,
if
nothing,
if
nobody
else
uses
a
pool,
it
will
still
get
most
of
throughput,
or
at
least
not
hundred
percent.
Depending
on
how
tight
do
we
limit
it
and
pull
characteristics,
but
most
of
pull
Bando
is,
but
if
there
will
be
other
users
who
will
push
data
into
Arc
more
actively,
this
one
will
obviously
get
penalized
heavier.
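(A minimal sketch of that two-threshold idea in C, with hypothetical names — the real ZFS write throttle is considerably more involved: a write is delayed once either the system-wide dirty total or this consumer's own share crosses its threshold.)

    /* Hypothetical two-level dirty-data throttle -- not OpenZFS code. */
    #include <stdbool.h>
    #include <stdint.h>

    #define SYSTEM_DIRTY_MAX    (4ULL << 30)   /* e.g. 4 GiB of dirty data */
    #define CONSUMER_SHARE_PCT  10             /* per-consumer threshold */

    typedef struct consumer {
        uint64_t dirty_bytes;   /* dirty data attributed to this consumer */
    } consumer_t;

    static bool
    write_should_delay(const consumer_t *c, uint64_t system_dirty)
    {
        /* System-wide throttle: keep dirty data from overflowing memory. */
        if (system_dirty >= SYSTEM_DIRTY_MAX)
            return (true);
        /* Per-consumer throttle: kicks in earlier when this consumer alone
         * holds more than its share of the allowed dirty data. */
        if (c->dirty_bytes * 100 >= SYSTEM_DIRTY_MAX * CONSUMER_SHARE_PCT)
            return (true);
        return (false);
    }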
E
So we are considering not doing inheritance initially, but of course it would be nice if you could have, let's say, a group of customers that has some kind of quota, and then each customer has a smaller individual quota, but you can do overprovisioning within just that group of customers.
E
And of course all the datasets created below that, for this customer, are treated as one — basically a single limit.
E
But doing this per dataset, instead of per process or per user, actually removes some problems. Let's say you want to slow someone down while they're holding range locks on files.
E
This
should
be
okay,
because
all
the
customers
of
all
the
users
of
those
files
needs
to
be
slowed
down.
If
we
would
do
the
the
limiting
like
per
per
uid
or
per
jail
or
something
like
this
and
potentially
you
can
have
users
with
different
limits
trying
to
access
the
same
file,
then
you
you
have
to
do
slow
down
differently.
E
Yes,
it
is
tricky
problem
and
actually
based
on
on
some
of
the
attempts
like
in
FreeBSD
with
rctl
I.
Think
it's
I
I
personally,
would
prefer
to
have
like
something
which
is
predictable,
measurable,
like
something
that
I
can.
E
Easily
figure
out
or
or
test
if
I
set
this
limit
I
know
what
exactly
does
it
mean.
C
Yeah, because I think, especially from the administrator's or user's point of view, getting random speedups or slowdowns because of things you don't control — like write amplification or metadata or caching or even compression — is probably quite confusing. And I think in most cases of a multi-tenant situation where the provider is doing this, they would like to keep the wins from compression and caching to themselves, to get more overprovisioning or whatever, rather than necessarily pass them on to the customer.
A
Yeah, so maybe the next steps for this are to write up what exactly you would propose. I like the idea of coming up with something that's maybe not the best thing we could imagine, but something that's easy enough to understand and easy enough to implement that it's useful — and then maybe getting feedback on whether it's really useful, right?
A
We can build something simple, but if people don't use it because it's not really what they want, then it wasn't really that useful, right? Like, some people might have a hard requirement of: if I'm the only user, I need to get the full throughput of the system — if there's no contention, then I should have the same performance as I did before. And that's kind of a different requirement versus what you're talking about, where you would just set a quota on each user or each dataset.
F
It's me, Tino, here from Germany. I also just wanted some feedback for my pull request for the SHA-2 rework. The benchmark numbers are nice, and I don't get any reviews — I have no idea what to do next. It's a bit bigger, it's about 65, so it's difficult to keep it always rebased.
F
The numbers are from an ARM system, a Ryzen system, some Intel systems, and many machines; when you scroll down, rincebrain also did some more tests and so on. It works. It's all OpenSSL code, Apache-licensed — the assembly stuff is also from OpenSSL.
A
Yeah, it looked like Rich took a look at it a while back, right?
F
That
was
the
first
thing.
Maybe
the
the
real
implementation
was
done,
I
think
in
in
September
the
before
this.
There
are
some
some
some
points
that
that
needs
fixed,
and
these
are
fixed
to
go
currently
so.
F
Some
more
time,
so
it's
not
for
five
minutes
now,
but
I
just
wanted
to
call.
A
Yeah, and I think we should try to figure out what level of review we want to see for this. You know, there are 28,000 lines of code; presumably most of that is the stuff that you just copied over from OpenSSL.
A
Yes, so we should take a look at that. I'm just kind of skimming through the code to see — you know, it doesn't look like there's a ton of that.
A
So
I
mean
I,
don't
I
think
it's
not
a
tremendous
amount
of
work
to
review
this.
We
just
need
to
find
folks
who
are
interested
in.
You
know
check
some
algorithm
stuff.
Let's
take
a
look.
I
haven't
looked
at
this
stuff
in
a
long
time,
but
I
think
that
we
I'll
definitely
ask
someone
to
maybe
Mark
to
bug
people
about
code
reviewing.
A
It
looks
like
we
have
a
couple
of
candidates
rich
and
they
see
comments
from
one
other
person.
Oh
yeah,
Sebastian,
brain
Slayer.
We
can
ping
them.
F
He uses it already, in DD-WRT here. It's a router-based system for ARM — ARMv7, I think.
A
Yeah, I think him — and I think Rich — would probably be able to do a good job on this. He's done a lot of work on the compression algorithms, and he might be interested in this as well.
A
Yeah, thanks a lot for this work. Are there before-and-after numbers?
F
You can take the benchmark numbers from the BLAKE3 hash support stuff — one moment, I can put them up.
A
The — what do we call it, not SSE, but vectorized — adding the vectorized-instruction-optimized versions of SHA-256 and SHA-512 for a lot of platforms, including, you know, the AVX, the SHA-NI, and the ARMv8 CE stuff, which looks like great performance.
A
So yeah, I think that's a huge win for folks who are using SHA-256 or SHA-512 on those platforms, which are, you know, the most commonly used platforms. So maybe we can try to highlight that performance improvement to motivate folks — like, hey, we need code reviews so we can get this in, so we can get... you know, what is it, like...?
A
All right, thanks. I think we're almost out of time — any volunteers? Or I'll just hope that I can bug Rich into doing it; he's not here to defend himself. So, all right, cool, thanks. Any other topics as we wrap up the meeting today?
A
Which is after the new year — just after the new year, January 3rd. See you then, bye.