From YouTube: MattAhrens 2
A: Talk about what the Delphix product is, how we use ZFS, and how we interact with the community. So, first, a little bit of background on what our product is. We do database virtualization, so our customers are mainly big IT shops that are using a traditional Oracle database, Microsoft SQL Server database, or Postgres database to run their organizations. These customers often have their production database, which is really carefully maintained, and then they also have non-production databases, non-production copies of that database.
A: For development, test, QA, UAT, reporting, lots of things. Typically we'll see that there are about ten non-production copies of every one production database that our customers are using. The first, kind of obvious, problem with this is that you're wasting a lot of disk space, but the problem that is perhaps less obvious to an outsider is that the process of actually getting refreshed copies of the production data into these non-production use cases is often very cumbersome. You need to go through multiple layers of different parts of the IT organization.
A: To get storage, to get a new server, to talk to the production database owner, who really doesn't want you to do anything unnecessary to the production database, just to get the additional copy. So, before they're using Delphix, it will often actually take our customers multiple weeks to get a new copy of the database.
A: So, our product: essentially, we pull one copy of the production database into our Delphix Engine (I'll get to what that is in a second), and then we're able to very quickly create virtual databases, which are based on ZFS snapshots and clones, for these non-production use cases. And we integrate this all very tightly with the rest of their infrastructure: we're going and SSHing into these various machines, setting up mounts, replaying Oracle log files, and all that. So, in terms of how this interfaces with ZFS...
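The snapshot-and-clone mechanism just described can be sketched in miniature. This is a toy model, not Delphix or ZFS code: here a "filesystem" is just a map from file name to block ID, all blocks live in one shared pool, and a clone starts as a copy of the map, so diverging copies share everything they haven't rewritten.

```python
import hashlib

class ToyPool:
    """Toy copy-on-write block store: blocks are kept once, keyed by content hash."""
    def __init__(self):
        self.blocks = {}                 # block_id -> bytes

    def put(self, data):
        bid = hashlib.sha256(data).hexdigest()
        self.blocks[bid] = data          # identical content is stored only once
        return bid

class ToyFS:
    """A 'filesystem' is a name -> block_id map over the shared pool."""
    def __init__(self, pool, table=None):
        self.pool = pool
        self.table = dict(table or {})   # copying this map is the only clone cost

    def write(self, name, data):
        self.table[name] = self.pool.put(data)

    def read(self, name):
        return self.pool.blocks[self.table[name]]

    def clone(self):
        # A clone is an exact copy of the block map: no data blocks are copied.
        return ToyFS(self.pool, self.table)

pool = ToyPool()
prod = ToyFS(pool)
prod.write("datafile1", b"production rows v1")

# Ten "virtual databases" from one source are ten cheap map copies.
vdbs = [prod.clone() for _ in range(10)]
vdbs[0].write("datafile1", b"dev experiment")  # diverges without touching prod
```

Despite eleven filesystems existing, the pool holds only two blocks: the shared production data plus the one divergent write.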
A: The Delphix Engine is a virtual machine; it can run on VMware, and it's based on illumos and OpenZFS. One of the interesting things about the way that we interact with the environment is that we wanted to make our product very easy for customers to drop into their existing infrastructure without having to roll in any new hardware, because that's just going to add time before they can start doing a POC or getting a deployment.
A: That's one of the reasons that we run on VMware: we don't have to require any special hardware, and we use whatever their existing storage is. This is typically over a SAN, over Fibre Channel to some big EMC or NetApp box or something like that, and then we're exporting these.
A: So, analytics is probably a second-tier use case, like reporting. The primary, number-one use case is development. They have their production database, and then they're working on new features, meaning at the Oracle level, or at the level of the application that's running on Oracle; so they're developing new stored procedures, new tables, that kind of stuff.
A: Exactly, yeah. I know most of us, including me, are not really from this enterprise IT shop background, so learning all this stuff was definitely new to me as well. But to kind of simplify it, for people like me who are more familiar with very lightweight development environments: this is basically introducing lightweight source code management to enterprise databases.
A
So,
rather
than
having
to
like
do
everything
super
manually,
it's
like
you
can
just
like
create
a
new
branch
and
of
your
production
database
by
using
dell
fix,
and
then
you
can,
you
know,
do
a
roll
back
and
you
can
do
a
refresh
to
pull
in
the
new
production
data.
Just
like
you
can
with
like
get
and
clone
and
pull
you
know,
get
pull
and
git
clone
and
that
kind
of
stuff,
but
but
with
your
whole
production
database,.
A: Yeah, so we also work with Microsoft SQL Server and with Postgres. We just launched the Postgres product a couple of months ago, and we're also working on running on EC2. There's a little bit of work to get illumos and the OmniOS distribution working on EC2.

A: Microsoft SQL Server is slightly different because, unfortunately, SQL Server backups can only be applied live to another SQL Server instance. With Oracle backups, on the other hand, we can take the Oracle RMAN backup and just interpret that format and apply it to the data files ourselves, because it's more like a physical backup: it tells us that this block of this file was changed. With SQL Server, there isn't anything like that.
D: I have a question, back to the Oracle RMAN part, please. Are you saying that, using Oracle RMAN, you're doing a second copy of it? Like, a regular tape backup would be an Oracle RMAN backup that you'd be storing somewhere on tape for the production data, for 30 days or whatever. For this particular use case, do you create an extra copy of it? And does it have to be a full Oracle RMAN backup, or are you handling incrementals as well?
A: This would be one copy in addition: if you're doing tape backups, you're probably still doing this. I mean, we do have customers that are actually getting rid of their tape backups and using Delphix as the backup as well, but typically you'd still be doing your tape backups, and then you'd also be doing this RMAN copy to Delphix. And we do do incrementals.
A: We do the initial load, which is a full RMAN backup, and then we do incremental loads, maybe every day, and then we also can pull over all of the log files continuously. So you can actually provision these databases from any point in time, up to the exact transaction that you want, rather than only at specific snapshots.
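The point-in-time provisioning described here, a full load plus continuously collected logs, can be sketched as follows. This is a hypothetical model, not Delphix's actual log format: the "database" is a dict, and each log record is a (transaction id, key, value) tuple replayed on top of the last full load.

```python
# Hypothetical sketch: provision a database state as of any transaction
# by replaying log records on top of the last full load.
full_load = {"accounts": "v1", "orders": "v1"}   # initial RMAN-style full copy
logs = [                                         # continuously pulled log records
    (1, "orders",   "v2"),
    (2, "accounts", "v2"),
    (3, "orders",   "v3"),
]

def provision(as_of_txn):
    """Rebuild the database as it looked right after transaction `as_of_txn`."""
    db = dict(full_load)
    for txn, key, value in logs:
        if txn > as_of_txn:
            break                                # stop at the exact transaction
        db[key] = value
    return db
```

Provisioning at transaction 2 yields the state before the later `orders` update, which is the "exact transaction" granularity mentioned above, as opposed to being limited to snapshot boundaries.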
A: Yeah, okay. So we're essentially SSHing into this machine and then running Oracle RMAN to either send us the data file backups, including incremental backups, or to send us the log files, and then we store the copy in Delphix. And we're using ZFS compression, so it takes about half as much space, usually, and then we're using snapshots and clones to create those virtual databases.
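The "about half as much space" figure is workload-dependent, but it is easy to see why database pages compress well: they are full of repeated column values and padding. A quick illustration with Python's zlib standing in for ZFS's compression algorithms (this is not what the product ships, just a demonstration of the effect):

```python
import zlib

# A fake 8 KiB database page: repeated column values plus zero padding,
# which is typical of on-disk database blocks.
page = (b"row:ACTIVE;balance:0000100;" * 200).ljust(8192, b"\x00")

compressed = zlib.compress(page)
ratio = len(compressed) / len(page)
print(f"{len(page)} -> {len(compressed)} bytes ({ratio:.0%})")
```

On highly repetitive pages like this one the ratio is far below 50%; real database workloads are messier, which is where the roughly-half figure comes from.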
A: So, what's unique or different about the way that we at Delphix are using OpenZFS? We depend on lots of fast clones of file systems and volumes. One of the design principles of ZFS was that we wanted clones to be represented the same way file systems are. So when you create a clone, you have the whole block tree with all the block pointers; there's no additional cost.
A: You can create a clone of a clone of a clone of a clone, and the performance of that is still exactly the same as your first file system. This works really well in our use case, because you can create lots of VDBs, which are all clones, and then you can create a VDB from a VDB, and that's a clone of a clone.
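That design choice, a clone carrying its own complete block tree rather than a delta against its parent, is why clone depth doesn't hurt. A toy contrast in Python (illustrative only, not ZFS internals): in the ZFS-style design a read is one lookup regardless of ancestry, while in a hypothetical delta-style design a read must walk the ancestor chain.

```python
base = {"datafile": "blk0"}

def clone_zfs_style(parent_map):
    """ZFS-style: the clone gets a full copy of the block-pointer map,
    so a read is one lookup no matter how deep the clone chain is."""
    return dict(parent_map)

def read_zfs_style(fs_map, name):
    return fs_map[name]                  # direct lookup, independent of ancestry

def clone_delta_style(parent):
    """Delta-style alternative: a clone records only its own changes,
    so a reader must walk up the ancestor chain on a miss."""
    return {"delta": {}, "parent": parent}

def read_delta_style(fs, name):
    hops = 0
    while name not in fs["delta"]:
        fs = fs["parent"]                # cost grows with chain depth
        hops += 1
    return fs["delta"][name], hops

# Build a 100-deep clone chain in each design.
fs = dict(base)
for _ in range(100):
    fs = clone_zfs_style(fs)

node = {"delta": dict(base), "parent": None}
for _ in range(100):
    node = clone_delta_style(node)
```

In the ZFS-style chain the hundredth clone reads exactly like the first file system; in the delta-style chain the same read pays one hop per ancestor.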
A: Well, I'll leave that as is. Oh, so we're running on VMware. I don't know how many... is anybody running... I guess probably most of you are running on bare metal. Anybody else running primarily on VMware? One person. Other types of virtualization, like cloud stuff? One, a couple. Okay, yeah, so that's a little bit unique.
A: We've definitely run into some limitations on VMware, especially older versions. Not even that old, but before VMware ESX 5.5, I think, there are some really severe limitations on disk image sizes, so we're looking at going to RDM, raw device mapping, rather than VMFS, because there are limits of like a couple of terabytes, and some of our customers have dozens of terabytes.
A: That's not super huge by enterprise standards, but still, even with dozens of terabytes, you start running into a lot of limitations in ESX. We're using an 8K record size; this is not that unusual for ZFS, because most of the databases internally use an 8K block size. And one interesting thing is that we're using the SHA-256 checksum, but not for dedup.
A: We implemented this new nop-write mechanism: if you overwrite a block with contents that are identical to what's already there, then we will just ignore that write. The reason for this is that sometimes, especially for Microsoft SQL Server... well, for one thing, if the connection is severed for some reason and the production server rolls over its logs, then we'll have missed log entries, and in that case we have to basically pull a whole new copy from the production server.
A
But
thankfully
ntfs
is
an
overwrite
in
place
file
system.
So
what
happens?
Is
we
pull
over
that
new
copy?
We
write
it
into
the
same
files.
Those
same
files
are
still
wherever
they
were
on
disk,
so
it
overwrites
the
same
locations
in
the
XIV
all,
and
then
we
just
in
ZFS,
we
say:
oh
the
check.
Some
of
that
data
is
the
same
as
the
check
some
of
what
we
already
have
so
I.
A
Don't
need
to
do
that
right
and
I
mean
this
obviously
saves
on
Io
costs,
but,
more
importantly,
it
means
that
it
won't
break
the
block
sharing
with
the
snapshot,
because
we
want
to
be
able
to.
You
know
we
we
don't
wanna
have
to
say
like
oh
like
if
this
connection
gets
severed,
then
maybe
we'll
double
your
storage
costs
right.
That
would
not
be
acceptable.
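The nop-write check described above can be sketched like this. It is a simplified model, not the actual ZFS code path (which also requires a strong checksum such as SHA-256 to be enabled, as mentioned earlier): on an overwrite, compare the checksum of the incoming data against the checksum already stored for that block, and drop the write on a match so the block stays shared with its snapshot.

```python
import hashlib

class NopWriteVolume:
    """Toy volume: nop-write drops overwrites whose content is unchanged."""
    def __init__(self):
        self.blocks = {}        # offset -> data
        self.checksums = {}     # offset -> sha256 hex digest of data
        self.writes_issued = 0  # count of writes that actually hit storage

    def write(self, offset, data):
        cksum = hashlib.sha256(data).hexdigest()
        if self.checksums.get(offset) == cksum:
            return              # nop-write: identical content, block stays shared
        self.blocks[offset] = data
        self.checksums[offset] = cksum
        self.writes_issued += 1

vol = NopWriteVolume()
vol.write(0, b"page A")         # new data: written
vol.write(0, b"page A")         # identical overwrite: dropped
vol.write(0, b"page B")         # changed data: written
```

Only the two distinct writes reach storage; the identical re-copy costs no I/O and, in the real system, breaks no snapshot sharing.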
A: So, Delphix sponsors that kind of community engagement work, and we're also actively pushing our code changes to illumos. I know that a lot of you are working with the ZFS source code, which is super great, and hopefully you're following the letter of the law with the CDDL and publishing your source code changes. But really having a meaningful impact on the community requires engaging with the community to upstream your changes.
A
I
think
this
is
ben
if
this
is
really
beneficial
to
the
community,
but
also
it's
beneficial
to
us
as
a
contributor,
be,
as
you
know,
as
we
make
good
changes,
then,
if
we,
if
we
didn't,
contribute
those
upstream,
then
you
know
eventually,
somebody
else
is
going
to
make
a
change
upstream
that
we
really
want
in
our
product.
We're
gonna
have
to
pull
that
down
and
doing
that.
Merge
can
be
very,
very
painful.
A
It's
like
why
didn't
you
just
pay
like
the
little
bit
of
incremental
costs,
as
you
went
along
to
upstream
I'll,
you
know
if
you
could,
upstream,
even
like
half
of
those
changes
and
save
a
year
and
a
half
of
engineering
work
like
that
would
be
really
good.
That'd,
be
a
big
savings
for
your
business.
So
it's
a
matter
of
like
investing
the
upfront
versus
paying
later
to
you
know
to
take
advantage
of
changes.
Other
people
are
making
upstream.
A
We
have
about
actually
nine
people
from
our
company
have
contributed
changes
upstream.
We
have
probably
roughly
like
three
people
working
roughly
full-time
on
CFS
and
Adele
fix
is
the
top
contributor
to
open
ZFS,
at
least
measured
by
like
number
of
commits.
So
this
is
like
some
of
the
some
of
the
features
that
we've
contributed
to
you
ZFS
in
the
last
three
years.
A: I don't actually know; I haven't really been involved that directly with that work. I know that I recently saw some reviews go out that were like, make it use the thing that performs better, and I don't remember whether that's paravirtualized or fully virtualized. But I know that there are definitely several things that we had to do with illumos, and we'll be contributing those upstream as well.