From YouTube: Dan Vatca - Syneto - OpenZFS European Conference 2015
A
His presentation is about another bit of the market trends which you may have heard of, more of a high-level overview, and how that ties into how it integrates with ZFS: the hyperconverged market trend, which is kind of combining compute, storage and networking into either one single appliance or a single scalable plane. So yeah, once Dan is set up, he'll be doing a talk about how that's sort of relevant to ZFS and what those guys are doing to bring that to the market and to the community.
B
Thanks. Hello everybody, just a quick moment, I need some time to set myself up. Okay, sorry.
B
We have our own version of a storage OS, which was actually based on OpenIndiana in the beginning, and then we've been trying to evolve it. The idea of the storage OS is to allow us to build appliances which are fairly easy to use for our customers, which have different needs, and we always try to cater to those needs.
B
What I'm going to talk to you about is our experience in following this new market trend, which is hyperconvergence. First of all, this is a new hype, so everybody is talking about it, everybody wants it. And first of all, I want to ask you: who of you knows what hyperconverged means? Do you have an understanding of that, or is it something that just occurs to you?
B
So that's something that has a lot of advantages, at least from the customer perspective. One of the advantages is that it's much easier to deploy: you've got building blocks which are made out of storage, networking and compute, and then you take those building blocks and build your infrastructure. So it's much more predictable when you're trying to add more compute and everything, and when you're trying to scale. Another important thing is that our customers are interested in having a single point of support.
B
But today's meeting is about ZFS, of course. So what I intended to discuss today is how does ZFS help the hyperconverged storage world? What is the power of ZFS, and what does ZFS do for these kinds of applications? I'm trying to share with you some of our experiences: which are the strong points of ZFS that make the solution appealing to customers, and how does ZFS improve it.
B
But the main point of ZFS is data integrity. I think this is the single most important thing that has brought us to use ZFS, or that ZFS has brought to hyperconvergence. When you write data with ZFS, it's there. I mean, this is the most important thing: you don't have problems with data corruption and so on.
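End-to-end checksumming is what backs that claim; as a minimal sketch of how an administrator verifies it on a running pool (standard zpool/zfs commands, pool and dataset names are placeholders):

```
# every block is checksummed on write; a scrub re-reads the pool and
# verifies every checksum, repairing from redundancy where possible
zpool scrub tank

# report any checksum errors that were found
zpool status -v tank

# a stronger checksum algorithm can be selected per dataset
zfs set checksum=sha256 tank/vmstore
```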
[Inaudible exchange with the audience.]
B
So we don't use dedup all over the place. I mean, we've been deploying some dedup workloads in production, and we've been working on trying to understand the performance characteristics and what the impact is. We've been using a port of Saso's dedup implementation, because we wanted to put it through its paces, to understand how it really performs in production.
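As a minimal sketch of how such an evaluation can start, ZFS can simulate dedup against existing data before the feature is ever enabled (pool and dataset names are placeholders):

```
# simulate dedup on the data already in the pool and print the
# projected DDT size and dedup ratio, without enabling anything
zdb -S tank

# if the projection justifies it, enable dedup per dataset
zfs set dedup=on tank/vmstore

# the realized ratio is visible at the pool level
zpool get dedupratio tank
```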
B
The thing is that it really does help, and I'm very happy, because I saw that you've been working on bringing that into the next upstream release. That would be really awesome, because, as you said, integration is always hard, so putting the pieces together and getting that upstream, that's very interesting.
D
[Inaudible question about dedup ratios.]
B
Ratios? Well, the datasets that we have are actually file systems, so that means we need to tune down the record size, because on top of them we're putting the VMDKs that are formatted as the virtual disks. If you leave it at the default 128K, the dedup ratios you get are like 1.01, and that's totally unusable, it doesn't make any sense.
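A minimal sketch of the tuning being described, assuming the guest file systems inside the VMDKs write small blocks (the 8K value and all names are illustrative):

```
# match the record size to the block size the guests actually write,
# so identical blocks line up and can be deduplicated
zfs create -o recordsize=8K -o dedup=on tank/vmstore

# confirm the settings and watch the ratio once real data lands
zfs get recordsize,dedup tank/vmstore
zpool get dedupratio tank
```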
B
This is actually a virtual appliance running together with VMware. We're running it as a VSA, which is a virtual storage appliance. On the metal we're running ESXi, which is VMware's hypervisor, and the storage OS is running in a virtual machine, for performance reasons and because ZFS is the best way to manage drives: because of the volume manager, the zpool, and all the management that ZFS does better than other file systems.
B
We are passing through the HBA, so the HBA gets passed through to the storage OS that runs in the VM. Of course, we have to lock down all the memory that it gets. So our virtual appliance will manage everything: it will manage the ARC, it will manage the ZIL, and it will manage the drives. Then it will expose that space through NFS to all the VMs, and the sharing over NFS is done through a separate network which is in memory, so basically you're just having a very high-bandwidth network connection from the storage OS to the ESXi, which then connects it to the virtual machines. And of course you can add other nodes that can take advantage of the same thing, and you can go on and scale it. You can scale that by adding more nodes, which will also have their own storage and their own virtual machines. So you can just go and replicate that.
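A minimal sketch of that exposure step: the storage OS shares a dataset over NFS on the internal network, and the ESXi host mounts it as a datastore (addresses and names are illustrative, and the host-side esxcli invocation is an assumption about the setup):

```
# inside the storage OS VM: export the dataset over NFS,
# restricted to the in-memory vSwitch subnet
zfs set sharenfs=rw=@10.10.10.0/24 tank/vmstore

# on the ESXi host: mount the export as an NFS datastore
esxcli storage nfs add --host 10.10.10.2 --share /tank/vmstore --volume-name vsa-datastore
```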
B
In our appliances we also take advantage of the VMware integration we developed. Each time we're taking snapshots, we also talk to the infrastructure below, which means to the ESXi, and we synchronize our snapshot schedules and replications with the snapshots of VMware, so that you get memory-consistent snapshots, snapshots that are integrated through the VMware Tools with the applications that are running inside.
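A minimal sketch of that coordination, assuming shell access to the ESXi host; the VM id and names are illustrative, and the ordering is just one way to sequence it:

```
# 1. ask VMware for a quiesced snapshot so in-guest state is flushed
#    (arguments: vmid, name, description, includeMemory, quiesce)
vim-cmd vmsvc/snapshot.create 42 zfs-sync "pre-ZFS sync point" 0 1

# 2. take the ZFS snapshot while the guest is quiesced
zfs snapshot tank/vmstore@$(date +%Y%m%d-%H%M%S)

# 3. drop the VMware-side snapshot; the ZFS snapshot keeps the consistent state
vim-cmd vmsvc/snapshot.removeall 42
```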
[Inaudible exchange with the audience.]
B
Another feature that we use from ZFS is actually the ZFS snapdir, which is a fast way to recover individual files from a virtual machine. Sometimes we just need to take a quick look into the past and see what was there. So today we take one virtual machine, we isolate it, and then we see: okay, this is what I wanted, this is not what I wanted.
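A minimal sketch of snapdir-based recovery (dataset, snapshot and file names are illustrative):

```
# make the hidden .zfs directory visible on the dataset
zfs set snapdir=visible tank/vmstore

# every snapshot is browsable read-only; copy a single file out of the past
ls /tank/vmstore/.zfs/snapshot/
cp /tank/vmstore/.zfs/snapshot/20151019-0400/vm01/vm01.vmdk /tmp/recovered.vmdk
```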
B
Alright, so what's next? What would really make ZFS even better for hyperconverged storage appliances? One of the best things to develop and work on is a better dedup algorithm. We're continuing to fine-tune that and to find new solutions. For example, NVMe is such a nice solution; it has a lot of appeal because it's very low latency and it has high capacities. So offloading the DDT to an NVMe device, that would be a great improvement.
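There is no dedicated DDT vdev to offload to here; the closest standard approximation of the idea is an NVMe cache device, since the DDT is pool metadata and its blocks can spill from ARC into L2ARC (device name illustrative):

```
# add an NVMe device as an L2ARC cache vdev; DDT blocks, being pool
# metadata, can then be served from the low-latency tier
zpool add tank cache nvme0n1

# watch how much of the cache device is actually being used
zpool iostat -v tank
```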
B
Another thing we'd like to see is making ZFS cluster-aware by implementing ZFS multi-modifier protection, to make it safe and not force the failover implementation above it to rely on SCSI reservations or other kinds of solutions that actually come from the outside, instead of having ZFS itself manage the problem of having a pool destroyed by being imported on two nodes. Because it's so easy to just run import -f.
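A minimal sketch of the failure mode being described; without multi-modifier protection, the only guard is a warning that -f overrides (pool name illustrative, message wording varies by platform):

```
# node2 tries to import a pool that node1 still has imported
node2# zpool import tank
# cannot import 'tank': pool may be in use from other system
# use '-f' to import anyway

# forcing it while node1 is still writing can destroy the pool
node2# zpool import -f tank
```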
B
Other features that we'd like to see implemented in ZFS: per-dataset quality of service. Joyent has been doing very nice work on implementing that for zones, but I think it could be a very nice benefit to ZFS in general. Having a per-dataset throttle, for example, making sure that certain database applications don't eat up all the I/O that's available, so they don't starve the other applications that are trying to run.
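ZFS offers no such throttle here; as a point of comparison, this is the kind of outside-the-filesystem control being argued against, a Linux cgroup v1 blkio throttle (device numbers, paths and limits are illustrative):

```
# cap one application's reads at 10 MB/s on device 8:16 (sdb)
mkdir /sys/fs/cgroup/blkio/db-app
echo "8:16 10485760" > /sys/fs/cgroup/blkio/db-app/blkio.throttle.read_bps_device
echo $DB_PID > /sys/fs/cgroup/blkio/db-app/cgroup.procs
```

Which also illustrates the mismatch: it throttles block devices, not datasets, so it cannot follow a workload across a pool the way a per-dataset property would.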
B
One more thing, I remember, well, this has been discussed before: a distributed file system on top of ZFS. I don't know what's your opinion on that. I'm trying to reiterate the theme that we've been finding in the introduction, and that's finding a solution, finding a way to marry those two concepts: how do we manage to scale out ZFS? That's something that would be really, really nice, because we really want the power of what ZFS is providing today, which is reliability and flexibility, and make that distributed.
D
[Inaudible comment about Lustre on ZFS.]
E
The way they did it is, well, Lustre is Linux-specific, and they're just using ZFS as a sort of reliability back end. Each individual node in the Lustre cluster is its own ZFS island, and Lustre is sort of the real organizer of the data. So you're not going to get, well, certainly not ZFS snapshots, send and receive and stuff like that; that's all Lustre-specific, and I mean it's tailored to their needs. I mean, they're running 700 nodes for Lustre.
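A minimal sketch of that layering, using Lustre's standard tooling for ZFS-backed targets (host, pool and mount names are illustrative):

```
# format an object storage target with ZFS as the backing store;
# Lustre organizes data across nodes, ZFS provides integrity per node
mkfs.lustre --fsname=lfs --ost --index=0 --backfstype=zfs \
    --mgsnode=mgs@tcp ostpool/ost0

mount -t lustre ostpool/ost0 /mnt/lustre-ost0
```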
B
No, we're looking at scaling linearly, because we're not after scaling just in terms of space but scaling performance along with it. That's the Holy Grail in the end, because right now ZFS is very scalable within the same system: when we need to grow, we just scale up, scale up, scale up.