From YouTube: Phil Harman
A: Although I wasn't there as a founding architect of ZFS — if you want help in this time zone, I'm near, and I'm cheaper to fly. But, you know, sometimes you do need an architect, and then I'm sure we will share the workload.
Yeah. Interestingly, we talked about when we first used ZFS, and I have to really substantiate how I got this t-shirt for actually putting ZFS into production. You can find it on this blog — actually, I've got my copy of my Sun blog under that button there.
A: But what was a bit of fun — and it was ZFS, I think I'll just mention it now — BNP Paribas had a portfolio management application running with an IM database, and it really didn't scale very well for reporting purposes. So what we did was a lot of work with DTrace to find out where the scalability issues were, and we discovered there was really nothing we could do about it: it was all application code. So I did a little bit of lateral thinking. We had an E25K with 144 CPUs.
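The sort of DTrace investigation described here can be approximated with a simple profiling one-liner. This is a generic sketch, not the actual scripts used at the time; the PID (12345) is hypothetical.

```shell
# Sample the application's on-CPU user stacks (hypothetical PID 12345)
# at 997 Hz for 30 seconds; serialized code paths and hot locks show up
# as the dominant aggregated stacks.
dtrace -p 12345 -n '
    profile-997 /pid == $target/ { @[ustack()] = count(); }
    tick-30s { exit(0); }'
```

A workload whose hottest stacks are all in application code, as here, is one the kernel and file system can't fix directly, which is what motivated the lateral thinking that follows.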
A: It would only scale to about 48, but most of the batch workload it was doing at night was report generation, which didn't require the database to actually move forward. You could just work on a copy of the database and throw it away when you'd done the report generation. And there was a particularly sweet spot at 24 CPUs, where there was almost linear scalability.
A: So what we did is we put everything in ZFS — the IM database loved ZFS, and loved its caching of disk blocks. And at the point in the night where we wanted to do the report generation, we just snapshotted and cloned the 120 gigabytes of working-set data that we had, all of which was in 200 gigabytes of RAM, so that the cache was nice and hot — because ZFS is copy-on-write and all the data was already in cache.
A: We could then clone that five times, create five zones for running read-only reports in, and get a 6x performance boost for no cost, because you could instantiate a report zone in 30 seconds using ZFS cloning. So that was eight years ago — something that was a real-world solution, and the sort of thing you just couldn't do any other way.
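The snapshot-and-clone trick described above can be sketched with standard ZFS commands. This is a hedged reconstruction, not the exact procedure used; the pool and dataset names (`tank/db`, `tank/rptdb1`, and so on) are made up for illustration, and the zone configuration step is omitted.

```shell
# Snapshot the live database dataset: instantaneous, because ZFS is
# copy-on-write and no data is copied.
zfs snapshot tank/db@nightly

# Create five clones of that snapshot. Each clone shares every
# unchanged block with the original, so the ARC stays hot and the
# clones cost almost nothing.
for i in 1 2 3 4 5; do
    zfs clone tank/db@nightly tank/rptdb$i
done

# Each clone backs a Solaris zone running read-only report generation.
# When the reports are done, throw the copies away.
for i in 1 2 3 4 5; do
    zfs destroy tank/rptdb$i
done
zfs destroy tank/db@nightly
```

The key design point is that the clones are disposable: because report generation never needs the database to move forward, nothing written in a clone ever has to be merged back.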
A: Yes, you could buy a distributed system, but you'd have to copy that 120 gigs of data into caches around your grid — whereas here it was already there, and copy-on-write did all the work for us.
So that was the first attempt, but I wanted to talk about something a little bit more now. You were hoping I was going to talk about the PIC microcontroller, but I wanted to pick on something older, and that's the Pick system.

A: Richard Pick — Dick Pick — I don't know if any of you have heard of that.
The Pick system? No, you're all too young. You see, in about nineteen sixty-five the US Army was looking at something new — and actually this is the precursor to the NoSQL movement. Before there was SQL, there was a "no SQL" movement: the pre-relational, or maybe post-relational, multi-value database system. That was Pick, and it was a system...
A: [Someone said,] "We have a performance problem — can you come and help me?" And so I went along, had a look, and this Pick environment is arcane, to say the least.
It's a shared-nothing environment — the UniVerse environment — so it relies on the file system to cache all the data for it. It likes to do 2K I/O (you know, "640K is more than anyone would ever need"), so it likes to do 2K I/O, and it relies on inter-process communication so that people don't tread on each other's toes in the cache — in the file system cache.
A: UFS in Solaris used to have a little thing called direct I/O, which allowed you to miss out on the reader/writer lock per file, but of course it also had the side effect of bypassing the cache — and UniVerse required everyone to use the cache, because you really don't want to send that many 2K I/Os direct to disk. So I was just walking one day...
A: I think we'd gone off to lunch, and it suddenly clicked. I think I'd heard in a talk somewhere that ZFS does region-based locking: when it comes to the POSIX constraint it actually relaxes it — technically it's not POSIX compliant — it relaxes the constraint so that no two writers can overwrite the same region of the file at the same time. Which happened to be the feature I needed, and the rest is history.
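The access pattern that benefits from region-based locking — many writers updating disjoint regions of one large file, as Pick/UniVerse does with its 2K I/Os — can be sketched in a few lines. This illustrates the pattern only, not ZFS internals: on a file system with per-file write locking these writers serialize, while range locking lets them proceed in parallel. The file and sizes here are made up for illustration.

```python
import os
import tempfile
import threading

BLOCK = 2048      # Pick-style 2K I/O unit
NWRITERS = 4

# One shared file, pre-sized so every writer owns a distinct region.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, BLOCK * NWRITERS)

def writer(i):
    # Each writer updates its own disjoint 2K region with pwrite, which
    # takes an explicit offset and never touches the shared file offset;
    # no two writers ever address the same byte range.
    os.pwrite(fd, bytes([i + 1]) * BLOCK, i * BLOCK)

threads = [threading.Thread(target=writer, args=(i,)) for i in range(NWRITERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Check that every region holds exactly its writer's byte pattern.
regions_ok = all(
    os.pread(fd, BLOCK, i * BLOCK) == bytes([i + 1]) * BLOCK
    for i in range(NWRITERS)
)
os.close(fd)
os.remove(path)
```

Correctness here does not depend on any file lock at all, because the ranges are disjoint — which is exactly why relaxing the whole-file constraint to a per-range one is safe for this workload.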
A: You'll be familiar with the most-recently-used / most-frequently-used partitioning [of the ARC], and because most of this workload was hammering hard on the frequently-used side — on the MFU — we actually discovered all sorts of scalability problems with that. Now, this is where I have to just confess that, in my work, I'm not proud about who I work with: I do actually work with people who run Solaris...
A: ...and Linux. And I've done some great work with some people running FreeBSD — very exciting to hear everything that's being done there. If it's ZFS, then I'm interested; if it's got DTrace, I'm interested. And so we've pressed forward on this, delivering mission-critical, high-performance, large-scale transaction processing. Most of you guys, I guess, are more interested in storage, but this is doing both — both examples are giving you databases that are running mission-critical, large-scale, industry-leading, high-volume workloads. One other experience...
B: [inaudible]

A: Great, because it was becoming quite an issue for people. I don't know if it's so much of an issue for people with storage appliances now, but it was certainly a severe limitation for what I was doing, so that's good to know. But the other area of interest — I don't know if you've ever thought about it — is the MRU/MFU: the most-recently-used / most-frequently-used partitioning.
A: [They had 40 gigabytes of] MRU and 40 gigabytes of MFU, and then they invested and bought a system with 512 gigabytes of RAM. What that does is increase the amount of time an object can sit in the most-recently-used list before being promoted into the most-frequently-used list — you get longer to get promoted, so the time base over which you're allowed to be "frequently used" increases. And what this does is tip the ARC from being fairly nicely balanced — in our case, with 80 gigs, about 40/40...
A: ...isn't it? But not in this case. And I'd be interested to know if anyone else has seen that sort of behavior, because one of the things I would like to be able to do is actually hard-configure the partition and not let ZFS move it by itself — I don't think the algorithm is working well enough for enough people anymore, and I think we don't even know how it behaves on large-memory systems. So if you've got any experience of that, please drop me a line.
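The MRU/MFU balance being discussed can be observed from the ARC kstats. A hedged sketch for Solaris-derived systems — the exact statistic names vary between ZFS versions, and FreeBSD exposes the same counters under `sysctl kstat.zfs.misc.arcstats`, Linux under `/proc/spl/kstat/zfs/arcstats`:

```shell
# Report total ARC size, the adaptive target 'p' (the byte boundary
# between the MRU and MFU portions), and the current MRU/MFU sizes.
kstat -p zfs:0:arcstats:size \
      zfs:0:arcstats:p \
      zfs:0:arcstats:mru_size \
      zfs:0:arcstats:mfu_size
```

Watching `p` and the two list sizes over time is how you would see the imbalance described above: on a large-memory system `p` can collapse toward one end and stay there.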
A: I'd love to hear from you — and if you've got any ZFS, DTrace, or system performance issues, I'd also love to hear from you. I'm really encouraged by... that's brilliant. I've just really enjoyed all the talks so far, and I'm particularly excited about what's happening with FreeBSD, and that some of the issues I've been rubbing up against — such as lock contention and deadlocks, which I've seen with Netatalk — I'm excited that we actually understand where that's happening. So thanks very much.
C: I don't have any questions for Phil — or any kind of mean things about performance tuning or anything. If anyone has any questions, the man himself is here, you know, and he won't be charging you a lot of money: the longer I can make him stand up here, the more free information we can get.
B: [inaudible]

D: [inaudible]

A: The big thing to remember is that a lot of the time, arc_p itself is considered a hint, and you actually have to get into a situation where... you see, it'll only attempt to move p when it feels there is a need — some pressure somewhere — and you've got to create the right pressure. And all I've ever seen it do is come down; I can never get it to go back up again, because the MFU pressure is huge.