B: Okay, thanks. So I'm going to be talking about some of the stuff I've been working on lately, along with a couple of other people at Nexenta. Don't ask me about release dates, because I frankly don't know, but it'll be out at some point; take that as a statement. The primary topic of my talk is going to be how to make dedup scale. It's been part of an internal set of projects inside Nexenta.
B: These try to address some of the most fundamental problems with dedup, and we all know the pain points here. Who doesn't know what the problem with dedup is? Okay. So fundamentally, the problem is that dedup is very easy to turn on. You just say `zfs set dedup=on tank` and, great, right? Dedup works. It's inherited by all your datasets, and whatever data you write there is going to get deduped; magic happens.
The problem is that turning on dedup means you have to have a dedup table (the DDT), which is a hash table. So there is no preferential portion of the hash table; there's no such thing as: oh, I've got a hundred gigs of memory, my dedup table is 101 gigs, and I'm just going to cache the hot portion of it, or my dedup table is 200 gigs and I'll just cache the hot portion of it, because there is no hot portion of it. It's a hash table; it's completely randomly distributed. We even use a cryptographic hash for that.
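This "no hot portion" point can be illustrated with a small sketch (plain Python for illustration, not ZFS code; the region split and block names are made up): hashing block identifiers with a cryptographic hash spreads lookups evenly across the table, so caching any fixed slice only ever catches a proportional share of the lookups.

```python
import hashlib

# Split the table into 100 equal regions and pretend we can keep 10 of
# them (10%) in RAM. Because SHA-256 output is uniformly distributed,
# the cache hit rate tracks the cached fraction: about 10%, no more.
NUM_REGIONS = 100
CACHED_REGIONS = set(range(10))

def region_of(block_id: bytes) -> int:
    digest = hashlib.sha256(block_id).digest()
    return int.from_bytes(digest[:8], "big") % NUM_REGIONS

lookups = [f"block-{i}".encode() for i in range(50_000)]
hits = sum(region_of(b) in CACHED_REGIONS for b in lookups)
hit_rate = hits / len(lookups)
print(f"cached 10% of the table, hit rate: {hit_rate:.3f}")
```

There is no way to pick a "better" 10% of the table: every slice is equally lukewarm, which is why spilling out of memory hurts so much.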
B: So your probability distribution of hitting any portion of the table is essentially constant. As soon as you spill over from memory, you're going into the sad zone: you're going to incur huge performance penalties when trying to read and update the dedup table. So dedup fundamentally doesn't care about how much memory you give it; it'll just grow.
B: It grows whenever you give it new data. There are a number of estimates out there on the size of the dedup table and how much memory you need to make dedup a happy puppy, but fundamentally the problem is that it eats memory in gobs and doesn't really care how much physical memory you have. Basically, it offloads the sizing of the dedup table onto you, the administrator, and if you get it wrong, you go into the sad place. So does the usual way we handle these things help?
B: The answer, of course, is not really, and it comes back to the same problem again: dedup is too easy to turn on and very, very hard to turn off, and the performance penalty is pretty huge. Your write performance is going to go way down; basically it's going to turn all of your writes into random reads, and you're going to have all manner of problems. And there's the usual sort of solution to this: we say, when you really do hit this problem, just add an L2ARC.
B: You know, just add some flash memory that will cache it, wink wink, and try to solve your primary problems. Unfortunately, that only partially addresses one of the core issues, which is that the DDT is too large to fit into your main memory. Yes, the L2ARC device is a little bit faster than the main pool; usually we're talking about a hybrid storage setup here, so it'll probably address most of that, and you won't get that kind of huge performance cliff. But it doesn't really solve the problem.
B: So your maintenance reboot during the night, since the cache comes back empty, turns out to be not quite all that "maintenance during the night"; you're probably looking at planning for either a weekend or even a week of headaches. And lastly, when we said that you could add L2ARC and have it cache all of your dedup table, we sort of lied; not too much, but enough to cause problems. That's primarily caused by the fact that the L2ARC is a shared space. Usually you'll have your dedup table in there,
B
You'll
have
some
other
file
system
metadata
in
there.
You'll
have
a
lot
of
potentially
other
user
data
in
there
and
they're
all
contained
contending
for
the
same
space.
And
so
what
can
happen
is
we?
You
can
hit
a
case
where
other
user
data
pushes
your
dupe
table
out
and
you're
back
into
the
sad
place,
so
that
becomes
a
bit
of
a
problem
with
planning
to
provide
a
reliable
service.
B: So the first part I'd call your attention to is the dedup throttle. We now know in-core what the actual size of the dedup tables is; or rather, we now keep a count of it. So what if we just stop dedup when the table gets too large? We stop adding entries. You will incur a little bit of a reduction in your dedup ratio, but it's better than hitting the performance cliff. Now keep in mind, all of this is tunable, so you can turn it off
B
If
you
don't
want
it,
if
you
want
to
hit
the
performance
cliff
to
keep
your
DD
ratios
up,
go
for
it,
but
primarily
this
stops
you
from
having
to
worry
about
getting
a
call
at
night
that
all
of
a
sudden
performance
tanked.
So
you
can
just
turn
it
on
and
be
done
with
it
without
having
to
worry
about
a
huge
performance
letdown.
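The throttle idea sketched above can be modelled in a few lines (my own illustration, not the actual OpenZFS code; the class, names, and cap policy are assumptions): once the table hits its cap, new unique blocks are simply written without creating a DDT entry, so the table stops growing while existing entries keep deduplicating.

```python
class ThrottledDedupTable:
    """Toy model: a capped checksum -> refcount table."""

    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self.table = {}      # checksum -> refcount
        self.skipped = 0     # blocks stored without a dedup entry

    def write_block(self, checksum: bytes) -> str:
        if checksum in self.table:
            self.table[checksum] += 1   # dedup hit: just bump the refcount
            return "deduped"
        if len(self.table) >= self.max_entries:
            self.skipped += 1           # throttle engaged: don't grow the DDT
            return "stored-unique"
        self.table[checksum] = 1        # table still under the cap: new entry
        return "new-entry"

ddt = ThrottledDedupTable(max_entries=2)
results = [ddt.write_block(c) for c in [b"a", b"b", b"a", b"c", b"b"]]
print(results)  # ['new-entry', 'new-entry', 'deduped', 'stored-unique', 'deduped']
```

Note how block `c` costs a little dedup ratio (it is stored as if unique) but the table size stays bounded, which is exactly the trade described in the talk.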
B: Another thing we do is segregate the dedup-table metadata in the ARC in order to gain this sort of fine-grained control, and there's a tunable that lets us choose between no cap, an automatically selected cap, or a manually selected maximum, with the dedup table capped to the size of the ARC.
There was a testing run done by one of my colleagues. It's a little bit unclear because the system was kind of weirdly set up, but on the left side you can see the dedup table growing unconstrained and latencies going up and up and up, and on the right-hand side we have it capped: we level out at a certain latency and don't increase beyond that.
C: [inaudible question]
B: Well, yeah, as long as we're talking about in-memory stuff, there would be room for improvement there, so once this stuff is out, feel free to hack away on it. Anyway, I generally think there's real value in being able to constrain your dedup table before it runs away from you. So, on to the second portion: persistent L2ARC.
It's L2ARC-specific metadata that just helps us find all the buffers we've cached on the device. On pool import, we just read through all that metadata and reconstruct the ARC buffer headers in an l2-evicted state in memory. And it's backwards compatible; there are no problems with migrating
B
The
pool
between
previous
releases
on
the
new
ones,
since
the
easy
is
the
easiest
solution,
because
we
have
no,
the
previous
releases
didn't
consider
the
l2
arc
to
be
persistent
so
whatever,
and
it
happens,
of
course,
all
in
background
it
doesn't.
It
is
interrupted
which
I
reluctantly
implemented
after
a
few
hours
of
bickering
by
other
people,
because
they
really
didn't
want
me
to
prevent,
pool
export
for
even
a
couple
of
seconds
because
it
really
messes
with
their
H
a
scripts.
B
So
the
general
architecture
is
pretty
simple:
the
ultra
device
is
a
circular
buffer,
so
sort
of
grows
always
in
one
direction
from
lower
addresses
to
up
to
higher
addresses
and
when
it
hits
its
own
tail.
It
just
starts
over
writing
this
stuff,
evict
and
overriding,
and
so
we
periodically
omit,
what's
called
a
log
block,
and
that
just
contains
references
into
what's
written
in
between
this
is
obviously
not
to
scale.
The
log
blocks
are
probably
somewhere
around
one
percent
overhead,
so
it
should
be
a
lot
narrower
there
and
they
also
contain
a
reference
back.
B
We
have
two
linked
lists
or
intermixed,
and
obviously,
at
the
beginning
we
have
a
device
header,
it's
all
built
in
such
a
fashion.
So
we
don't
haz,
mat
objective.
We
don't
taste
data
in
order
to
figure
out
support.
We
also
have
a
an
entry
in
the
mosque
that
tells
us
on
the
beat
of
computers
that
we
have
personnel
to
our
support.
This
is
automatically
enabled
as
soon
as
the
pool
gets
imported
on
a
on
a
machine
that
supports
this.
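The layout described above can be sketched minimally like this (illustrative Python, not the real on-disk format; the log interval, field names, and buffer names are invented): buffers are appended, every few buffers a log block records them plus a back reference to the previous log block, and a rebuild walks that chain without touching the buffers themselves.

```python
LOG_INTERVAL = 3  # emit a log block every 3 buffers (arbitrary for the demo)

class L2Device:
    def __init__(self):
        self.log_blocks = []  # each: {"entries": [buffer ids], "prev": index or None}
        self.pending = []     # buffers written since the last log block

    def write_buffer(self, buf_id: str):
        self.pending.append(buf_id)
        if len(self.pending) == LOG_INTERVAL:
            prev = len(self.log_blocks) - 1 if self.log_blocks else None
            self.log_blocks.append({"entries": self.pending, "prev": prev})
            self.pending = []

    def rebuild(self):
        # Walk the chain of log blocks from newest back to oldest, the way
        # an import reconstructs buffer headers without reading the data.
        recovered = []
        idx = len(self.log_blocks) - 1 if self.log_blocks else None
        while idx is not None:
            block = self.log_blocks[idx]
            recovered = block["entries"] + recovered
            idx = block["prev"]
        return recovered

dev = L2Device()
for i in range(7):
    dev.write_buffer(f"buf{i}")

# buf6 was never covered by a log block, so it is not recovered:
# only buffers referenced by a log block survive the rebuild.
print(dev.rebuild())  # ['buf0', 'buf1', 'buf2', 'buf3', 'buf4', 'buf5']
```

The roughly one percent overhead mentioned in the talk corresponds to these small log blocks interleaved with the much larger data buffers.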
B: As soon as the spa load finishes, we just do a sync task call which kicks off rebuild threads for each device. Performance-wise, it usually takes 5 to 10 seconds to rebuild on a 500-gig device. Reads to the device during the rebuild are allowed, so any buffers that we've already reconstructed can be fetched from there, but writes are suspended for a while, because they would mess up our structures. And we can interrupt the rebuild at any point.
B: So you can go in and reconfigure cache devices; you can add and remove even the same L2ARC device. You can even export the pool. If, for whatever reason, you have a faulty L2ARC device that's trying to rebuild and it's taking forever, just remove the device and everything is going to be happy. That's one of the changes from the prototype, where we had a fixed rebuild timeout; we've gotten rid of that.
A: [inaudible question]

B: No, no, you're just not going to have the benefit of persistent L2ARC. Now the question is: if you fail back to the old pool, will it try to load it in? Yeah, that's true, but that's fine. You're probably going to get a bunch of outdated data cached, but you're never going to read it.
B
You
just
done
a
little
bit
of
extra
unnecessary
work,
and
the
last
portion
for
the
dedicated
for
the
D
do
performance
here
is
that
we
are
able
to
control
via
mechanism
called
BF
properties.
We
are
able
to
control
the
assignment
of
data
to
a
specific
ultra
work
device.
So
previously,
if
he
had
an
ultra
art
device,
it
was
pretty
much
fair
game
for
all
data
to
go
in
there
and
it
was
a
contention
who's
going
to
get
in
there
and
it
is
either
going
to
be
general
pool
meta
data.
B: LibreOffice is being very slow today. The dedup throttle will auto-select the table size cap to essentially the size of your L2ARC if it sees dedicated L2ARC devices, so that basically removes the need to manually tune the size. And you can also add in extra L2ARC devices: if you feel your dedup table has grown to its maximum capacity, you can just add another dedicated L2ARC device; it will get added to the pool and start being used as cache, and your dedup table will again be allowed to grow.
B: Okay, yeah, pretty much what I said. So that's about all. Any questions or comments?
C: A question: I know you guys have also done some work on being able to direct certain data to be stored on certain vdevs. Is that not a real thing? Because if it were, then I would think that, rather than using a dedicated L2ARC for the DDT, you would just have a dedicated device and say: put all my dedup data on that fast device. Then you wouldn't have to worry about loading it or caching it.
B: You're not wrong, in that it was a thing, but it's not ready for release, which is why we're doing this as a sort of stopgap solution. The thing you're talking about, where we have vdevs, like regular, true, honest-to-god ZFS vdevs, that are specific to certain data, is part of a larger project that needs a little bit more polishing before it's release-ready.