Description
From the 2017 OpenZFS Developer Summit:
http://www.open-zfs.org/wiki/OpenZFS_Developer_Summit_2017
Thank you, Pavel. Can everyone hear me well? Okay, I'll take that as a yes. So hello, everyone, my name is Serapheim Dimitropoulos and, as Pavel said, I've been working on ZFS for around a year at Delphix. Today I'd like to share with you details on a project that I've been working on for the past few months, called the log spacemap project.
As Pavel mentioned, it's an optimization of ZFS allocation performance. Before we actually go into the internals of this project, I would like to go through some background on how ZFS does certain things. Some of you may already have this knowledge, but I'd ask you to bear with me for a little bit until we get to the actual details of the project. So let's start from the very beginning: how does ZFS write to disk?
Well, we have incoming data coming from the user through the write system call, and we keep this data in memory. We periodically sync this data to disk and, as many of you know, we call these syncing cycles transaction groups. Basically, you can think of the writes as single transactions: we wrap them together in a group, the transaction group, and we use these transaction groups as, let's call them, timestamps to represent the time of an event within ZFS.
So, as most of you know, and probably why you use it, ZFS is a copy-on-write filesystem, which means that at the block level we never actually overwrite data: we always mark whatever we have as free and we allocate our new data somewhere else. I want to emphasize this, because it is true even for metadata: not only for whatever the user asked us to write on disk, but also for any kind of tracking that ZFS does for itself. That's another thing that is important for motivating what follows.
So let's go into space allocation now. At the top level, a vdev, from the ZFS perspective, is divided into around 200 equal regions called metaslabs, and each metaslab basically represents, from the ZFS point of view, what is free and what is allocated in its assigned region of disk. So the first metaslab of a vdev, let's say, represents offsets 0 to 200, and the second metaslab 200 to 400. They're all equal, and basically they keep track of the space within ZFS.
We represent these metaslabs mainly by two structures: one in-memory structure called the range tree, and one on-disk structure called a space map. Before I go more into what these structures look like, I would like to also establish some terminology. Whenever I say that we load a metaslab, it means that we basically read its on-disk structure, the entries of the space map that we're going to see later, into memory, into the range tree.
Another thing that I want to emphasize is that in ZFS we only allocate from loaded metaslabs, basically the ones whose range tree is populated, but we can free from any kind of metaslab, loaded or unloaded.

So, range trees. This is the in-memory structure that, as we said, represents the metaslab, and it kind of looks like this: it's an AVL tree where each node represents a free segment, and most of the time these segments are sorted by offset, as you can see in this figure that I have over here; other times they are sorted by size, depending on what we are looking at. But basically, I hope it's pretty straightforward that this range tree represents the state of the space that this metaslab keeps track of: you can see blocks one, two, three are free, and you see this one node over here, and so on and so forth.
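To make the idea concrete, here is an illustrative sketch only; ZFS implements range trees in C as balanced trees, and this Python model just shows the concept of tracking free space as coalesced segments sorted by offset:

```python
class RangeTree:
    """Toy model of a metaslab range tree: free space as sorted segments."""

    def __init__(self):
        self.segments = []  # sorted, non-overlapping (start, end) free segments

    def add(self, start, end):
        """Mark [start, end) free, merging with adjacent/overlapping free segments."""
        merged = (start, end)
        keep = []
        for s, e in self.segments:
            if e < merged[0] or s > merged[1]:   # disjoint, not even touching
                keep.append((s, e))
            else:                                # overlaps or touches: coalesce
                merged = (min(s, merged[0]), max(e, merged[1]))
        keep.append(merged)
        self.segments = sorted(keep)

    def remove(self, start, end):
        """Mark [start, end) allocated, splitting free segments as needed."""
        keep = []
        for s, e in self.segments:
            if e <= start or s >= end:           # untouched by the allocation
                keep.append((s, e))
            else:
                if s < start:
                    keep.append((s, start))      # free remainder on the left
                if e > end:
                    keep.append((end, e))        # free remainder on the right
        self.segments = sorted(keep)
```

For example, freeing blocks 0 to 10, allocating 4 to 6, then freeing 4 to 6 again collapses back to a single free segment, which is exactly the coalescing behavior the figure shows.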
The corresponding on-disk structure, the space map, for, let's say, this specific metaslab, looks kind of like this. Space maps are append-only on-disk structures that have entries for the allocations and frees that we do in the assigned space of the metaslab, and you can kind of see that it holds the history of the metaslab: starting, let's say, from transaction group 10, we did a bunch of allocations; let's say we allocated blocks one to five, but later we freed blocks one, two, three.

Reading this, ZFS can realize what is free right now and what is allocated, and, as I said earlier, whenever we load a metaslab we basically have to read this whole space map and load whatever current state we figure from it into the range tree. So the longer the space map, the bigger it gets, the more the loading time increases; so, once in a while, we condense the space maps in order to reduce that loading time.
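As an illustration (not the actual ZFS code, and with an invented entry format), loading can be modeled as replaying the append-only entries, and condensing as rewriting the log with just the surviving allocated ranges:

```python
def replay_space_map(entries):
    """entries: iterable of (action, start, end); returns the allocated block set."""
    allocated = set()
    for action, start, end in entries:
        blocks = set(range(start, end))
        if action == "ALLOC":
            allocated |= blocks
        elif action == "FREE":
            allocated -= blocks
    return allocated

def condense(entries):
    """Condensing: rewrite the log as one ALLOC entry per surviving range."""
    allocated = sorted(replay_space_map(entries))
    out, i = [], 0
    while i < len(allocated):
        j = i
        while j + 1 < len(allocated) and allocated[j + 1] == allocated[j] + 1:
            j += 1                               # extend the contiguous run
        out.append(("ALLOC", allocated[i], allocated[j] + 1))
        i = j + 1
    return out
```

With the talk's example (allocate blocks one to five, then free one through three), replay leaves blocks four and five allocated, and condensing shrinks the whole history to a single entry.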
So we have the space map that kind of looks like that, and then we condense. Mainly, we only condense loaded metaslabs, because for those we actually have the range tree available. So we can say: okay, obviously the space map is getting too long, and if we were to write our range tree as a space map on disk, we would actually save space. Once we realize this, periodically, we condense the space map into something that looks like the one on the right, where we basically look at the current state and say: okay, blocks 4 to 5 and 8 to 9 are allocated, and everything else we assume to be free. That makes the representation of the space much more compact on disk.
So, just to recap: we talked about what the TXGs are and the sync passes, the first one being the pass where we sync the user data and the subsequent ones updating the ZFS metadata. We talked about metaslabs, we saw what range trees and space maps look like, and I also mentioned that we can free space from any metaslab, loaded or unloaded, but we only allocate space from loaded ones. So, okay, these are all great. What is the problem?
Well, the problem with the workflow that I just described comes from the dynamic allocation behavior of the pool over time. What we've generally seen within Delphix is that, after a while, we've touched every metaslab on every vdev of the pool, and that's because, you know, when we overwrite data we mark something as free, we move on, and we allocate space somewhere else.
So after a while, once we've touched almost every part of the pool, we have a few metaslabs that we allocate from, but for these allocations we also have the corresponding frees, and those are scattered throughout the pool, in all the metaslabs of all the vdevs. That means that, since we free blocks from old metaslabs, we actually have to go to each one's space map and update it by appending a little bit at the end of it.
So let's do a quick bit of arithmetic: how many I/Os do we do just appending to the space maps? Every transaction group, on every vdev, we append, as I said, to almost all the metaslab space maps, so that's around 200 I/Os, because we have around 200 metaslabs, and that's assuming that the entries we have to write into each of them fit within the 4K block size that we have by default in ZFS. For each update to these actual space map structures, we also have to update their indirect blocks, so, assuming two levels of indirection, we are talking about around 400 additional I/Os. And because space maps are ZFS metadata, for redundancy we keep two extra copies of them, so however many I/Os we have so far, we need to triple that. So we're talking about around 1,800 I/Os per vdev, each transaction group. So is this actually a problem, though? Okay, we did these calculations more or less back-of-the-envelope, just to see a ballpark, so is it actually a problem?
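The back-of-the-envelope calculation above can be written out directly; the counts (200 metaslabs, two levels of indirection, three total copies of metadata) are the talk's assumptions, not measured values:

```python
def spacemap_append_ios(metaslabs=200, indirection_levels=2, metadata_copies=3):
    """Rough I/O count for appending to every metaslab space map in one TXG."""
    data_ios = metaslabs                           # one small append per space map
    indirect_ios = metaslabs * indirection_levels  # indirect blocks per append
    return (data_ios + indirect_ios) * metadata_copies

# (200 + 400) * 3 = 1800 I/Os per vdev per transaction group
```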
Well, here is some DTrace output from a Delphix VM that we have, and this specific line that we see over here is the statistics for one transaction group. Looking at the first column: we started this transaction group zero milliseconds from the previous one, so we started literally exactly when the previous one finished. We wrote around three gigabytes of data in around four, four and a half seconds, and you can see the rate all the way to the right. But what's important here is the percentage of time spent in sync pass one. Now, remember that in sync pass one we write the user data, basically the data that the user asked us to write on disk, and the other passes take the other 30%.
This is data generated by DTrace, and I present it in this way hoping it's more intuitive. Basically, around 37% of our I/Os go to writing user data, and 39% are actually triggered by the user data changes: you know, we have to update indirect blocks for the user data and things like that. But what is important here, what I want to highlight, is that around 25% of our I/Os go to appending space map metadata, and with these I/Os it's not that we actually utilize all of our bandwidth: as I said, we append a small amount of entries to each space map of all the metaslabs.

So what can we do about this? Well, we could decrease the number of metaslabs: fewer metaslabs means fewer space maps, and therefore we have fewer I/Os. But the vdev size stays the same.
So each metaslab would represent a bigger area on disk, and this could lead to two issues. With a bigger metaslab we would have more entries, space maps get bigger, so our loading time for the metaslab increases, because we have to load all these entries from disk to memory. And then, once all these entries have made it to memory, the range trees can get pretty big, so memory consumption is also an issue with this approach.
So here's an idea: why don't we just not flush all metaslab changes every TXG? Why don't we instead keep everything in memory, in two new range trees per metaslab: one for unflushed allocations and one for unflushed frees? Basically, this would work as follows: we'll have some incoming allocations, and we put them into the unflushed allocs tree; incoming frees go to the unflushed frees tree. If an allocation is still unflushed, which means it's in the unflushed allocs tree, and we need to free it, we basically move it from the unflushed allocs tree to the unflushed frees tree, and vice versa if we want to allocate something that's free but whose free hasn't been flushed to disk. But we still need to write at least something to disk for persistence, so what if we just flush one metaslab every TXG, and let's say, for now, in a round-robin fashion?
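A hypothetical sketch of the per-metaslab bookkeeping just described (block-granular sets standing in for the real range trees): a free of a still-unflushed allocation moves the block between the two sets instead of writing a space map entry, and a flush hands both sets off to be appended to the space map.

```python
class MetaslabUnflushed:
    """Toy model of a metaslab's in-memory unflushed allocs and frees."""

    def __init__(self):
        self.unflushed_allocs = set()
        self.unflushed_frees = set()

    def alloc(self, block):
        if block in self.unflushed_frees:       # re-allocating a pending free:
            self.unflushed_frees.discard(block) # move it across, as in the talk
        self.unflushed_allocs.add(block)

    def free(self, block):
        if block in self.unflushed_allocs:      # freeing a pending alloc:
            self.unflushed_allocs.discard(block)
        self.unflushed_frees.add(block)

    def flush(self):
        """Return (and clear) the pending changes to append to the space map."""
        changes = (self.unflushed_allocs, self.unflushed_frees)
        self.unflushed_allocs, self.unflushed_frees = set(), set()
        return changes
```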
So basically the key difference here is that, instead of appending a little bit to each metaslab space map, once in a while we write lots of changes to one metaslab. So now, loading a metaslab changes from just reading its space map to reading its space map and applying whatever unflushed allocations and frees we have kept in memory for that specific metaslab. And then the question is: okay, you have all these changes in memory, so what do we actually write to disk?
So, okay, just to recap on the main idea. Basically, instead of appending to all the metaslab space maps every TXG, we just write one big log with all the changes that we have in the pool for that TXG, and, for one metaslab, we flush its unflushed allocs and frees into its space map. To give a more pictorial example, so people can understand what I just said: let's say that the current transaction group is 10, and we have, you know, 200 metaslabs.
Let's say the number is fixed for now, and they were all flushed at TXG 10, the one that we just wrote, and you can kind of see their entries. For people who want to get very detailed on this diagram: metaslab one tracks blocks zero to ten, metaslab two ten to twenty, and so on and so forth. And at the next TXG we decide to activate the new log spacemap feature. So what would that mean?
It means that at TXG 11 all our new space map entries will go to this pool-wide log over here on the left, and then, as I said, we write one log and we flush one metaslab. So you can see that some entries from the log space map actually made it to the metaslab's space map for this TXG; I hope the gray color is visible.
At the next TXG we flush more changes: we flush entries from both the log space maps, they made it to metaslab two, and the same thing can happen for metaslab three, and so on and so forth. At some point, though, let's say after 200 TXGs from now, we went all the way up until the end of this metaslab array, and we've flushed almost all the changes from the log space map that was created when we first enabled the feature, which means that this log space map is no longer relevant.
All these entries have made it over here, on the right side, to the metaslab space maps. We call these logs obsolete logs, because they are no longer needed: they take up space and, if we crash, we would have to read all these irrelevant entries, so the import time after a crash would increase, which is bad.
So you could say: okay, once you do one round over here and you go through all the metaslabs, after every flush you can start destroying each log space map, starting from the oldest one. But it's not as easy, because, you know, if you add a device you add more metaslabs, if you remove a device you take out some metaslabs, and there are other issues too. Also, I kind of want to hint, for later, that we may want to flush more than one metaslab. So we need a way of dealing with obsolete logs.
To deal with this, we keep the metaslabs in a list sorted by the TXG at which they were last flushed. In the beginning of the list we have the oldest-flushed metaslab, which will probably be missing a lot of entries from its space map, and at the very end we have the most recently flushed one, the one that we just flushed. We also keep a list of all the logs, sorted by TXG, which is also the TXG of the changes that they contain. So, each transaction group, we basically pick up the oldest-flushed metaslab and flush its changes.
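A hypothetical sketch of that flushing order, with a min-heap standing in for the sorted list: always flushing the oldest-flushed metaslab is what eventually makes every log older than the minimum flushed TXG obsolete.

```python
import heapq

class FlushQueue:
    """Metaslabs keyed by the TXG they were last flushed at."""

    def __init__(self, metaslab_ids, current_txg):
        # (last_flushed_txg, metaslab_id) min-heap
        self.heap = [(current_txg, ms) for ms in metaslab_ids]
        heapq.heapify(self.heap)

    def flush_one(self, txg):
        """Flush the oldest-flushed metaslab and re-queue it at this TXG."""
        last_flushed, ms = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (txg, ms))
        return ms

    def oldest_flushed_txg(self):
        """Logs from before this TXG hold no unflushed changes: obsolete."""
        return self.heap[0][0]
```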
With this change, another thing that we might want to reconsider is the space map block size. As I mentioned earlier, the current space map block size is 4 kilobytes, which made sense in the previous state of the world, where we were writing little changes to each space map and all these changes were scattered throughout the pool; for loading the metaslabs it's an okay size, depending on the metaslab size that you have. But now that we're introducing this log space map, reading a log's space map with a 4-kilobyte block size means that we have to do lots of I/Os, going back and forth on disk. So we propose bumping the size up a little bit, to 128 kilobytes, which should make a lot of sense in the new world, because we have a few larger appends,
actually, instead of many smaller appends: one for the log space map and one for the metaslab that we're flushing. It should also decrease the loading time for metaslabs, and also the recovery time when we have to go to disk and read all these log space maps. And since I kept talking about crashing, I also want to mention what we're thinking about doing when we export and import the pool.
Well, at some point we have all this space in memory with the unflushed changes, and we're thinking that maybe it's better if we just flush all the metaslabs during export, so all these unflushed allocs and frees trees make it to the metaslab space maps.
It's obviously a lot better to flush everything at export, and not have to read anything at import, than doing the opposite: not doing anything at export and reading everything from disk at import. And at that point I was planning on giving some performance evaluation results. Unfortunately, I completely underestimated how much time it takes for a pool to get fragmented, especially when it's very big, so I apologize about that, and hopefully I'll have some results for you that I can post online for this.
So these are, more or less, the main concepts behind the log spacemap project. There's still one unanswered question, which I kind of hinted at earlier, about the flushing algorithm: basically, how many metaslabs do we want to flush each TXG? We may have some problems flushing only one metaslab every TXG, and that is: if we have a lot of unflushed changes, they're just hogging up memory in the unflushed allocs and frees trees, and that's not good. And also, our workload may have all these incoming changes
where, you know, we have to put a lot of entries in the log space maps, and then, at reconstruction time, if we crash, we're going to take a performance hit, because we have to read all these things from disk in order to import the pool after a crash. So it is still, as I said, an open question how many metaslabs we want to flush each TXG, and the things to consider here are, as I said, the reconstruction time.
On the other hand, we don't want to flush a lot of metaslabs, because, you know, we just increased the block size, and we also don't want to get back to that old state where we don't have a lot of changes pending but we are still flushing a lot of metaslabs, going back to the previous problem of putting a small amount of entries in each space map. So, yeah, this is more or less what I have specifically for this project. There are some more resources that you can read, you know, to get more familiar with the subject.
Also, feel free to ask me on Slack about any questions that you have with this ongoing project, or email me or tweet at me. And how are we doing on time? Oh well, okay, cool. So, since we are doing well on time, before I take questions I also want to add some more content that I have, and this is basically an improvement that we did to space maps for scalability, which actually enabled the log spacemap feature to happen.
Specifically, I want to talk about the current space map encoding, and I also brought the picture from the previous slides, just to kind of emphasize that each space map entry has some more fields. You can see, in bright yellow here, we have a debug entry, where we have the action (we allocated some space), the sync pass, and the transaction group, and on the white background we have the non-debug entries.
There are two problems with this encoding. The first one makes it hard to store a large region of space in a single space map entry. So, for example, if you free a file that's one gigabyte and is contiguous on disk, the range tree is going to have only one node for it, but the space map will actually need to use 64 entries, because it can only describe 16-megabyte regions at a time.
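The numbers work out if we assume runs and offsets are counted in 512-byte sectors; that sector assumption is mine, but it is consistent with every figure quoted in the talk:

```python
SECTOR = 512  # assumed unit for space map runs and offsets

def max_run_bytes(run_bits):
    """Largest region a single entry can describe."""
    return (1 << run_bits) * SECTOR

def entries_needed(region_bytes, run_bits=15):
    """Entries required to cover a contiguous region, rounding up."""
    return -(-region_bytes // max_run_bytes(run_bits))  # ceiling division

# max_run_bytes(15) is 16 MiB, so a contiguous 1 GiB free takes 64 entries,
# and a 47-bit offset can address (1 << 47) * 512 bytes = 64 PiB per vdev.
```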
The second problem is that it basically limits the maximum size that can be addressed by the space map for a top-level vdev, which means that if you would like to use a space map to track something vdev-wide or pool-wide, as we said for the log space map, you can only address up to 64 petabytes. That is pretty limiting, because I'm pretty sure there are many people in this room that have pools of more than 64 petabytes.
Yeah, this slide more or less explains that. We already have some features that are using vdev-wide space maps, like device removal and the storage pool checkpoint, and this means that if you want to use these features with the current space map, your top-level vdev can be up to 64 petabytes, but no more. And again, if you want the log spacemap project to be completely done right, you need to do better than that because, as I said, people have pools larger than 64 petabytes.
So this is the new space map encoding. As you can see, we kept the two old types of entries, just to be backwards compatible, and we introduced a new two-word entry, which you can basically distinguish by the first two bits. We also added some padding, just in case something changes in the future and we want to add more types of space map entries, special entries and things like that. There are three things to note here. The first one is that the run went from 15 bits to 36 bits.
The offset went from 47 bits to 63 bits, and we also have this new field for a vdev ID, which is important because, as I said, it would be nice to actually know, for the log space map for example, where a segment came from, from which vdev. So now, with these new changes, we can represent a 35-terabyte region in a single entry and, with the smallest byte-addressable storage, we can address up to 4.7 zettabytes. Also, for anyone who is concerned that now we're using 128 bits instead of 64 bits: you can calm down, because we still use single-word entries for regions that are less than 16 megabytes, or, specifically for metaslabs, where they probably don't need the vdev field, for example, when they write, and we use the double-word entries for anything else.
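Under the same 512-byte-sector assumption as before (my assumption, matching the talk's figures), the new two-word limits quoted above check out:

```python
SECTOR = 512  # assumed unit for space map runs and offsets

def max_region_bytes(run_bits):
    """Largest region a single entry can describe."""
    return (1 << run_bits) * SECTOR

def addressable_bytes(offset_bits):
    """Largest offset the entry format can address."""
    return (1 << offset_bits) * SECTOR

# A 36-bit run covers 2**45 bytes = 32 TiB, about 35 terabytes decimal;
# a 63-bit offset addresses 2**72 bytes, about 4.7 zettabytes decimal.
```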
Basically, it wasn't for this specific VM... oh yeah, I'm sorry: the question was whether, for the examples that I showed earlier, there was a specific workload, basically what type of workload that was. And the answer is that I don't exactly know, but I know basically what this VM is used for: it is used to provision other VMs. So we know that we allocate a lot of space, we overwrite certain parts within it, and occasionally we free all that space, and that kind of triggers all these situations.
You know, you did your allocations, you used that metaslab; after a while you're going to start using some other metaslab, is that correct? Well, once you're running out of space on this specific one, then you're going to do the same for the next metaslab, and the next metaslab, and after a while you have used all of them. That's regardless of whether you are actually increasing capacity or not, right.
You start again, maybe you stop; but basically that drop later is something that we would like to avoid. The trade-off is basically trying to smooth these flushes out over time, and basically trying to guess, because what we're trying to achieve here is destroying the oldest logs, right, the obsolete logs. So it's kind of like asking: okay, how many metaslabs do I need to actually flush until that time, and, based on the workload that I've seen so far, should I flush maybe a little bit more, just to make sure that later I'm better off? So, yeah, this is basically the main problem that I'm working on currently on that project, but I'd be interested to talk more about it if you have any ideas.