From YouTube: Syneto
So, I'm from Syneto, and my colleagues already introduced me in one way or another, because what we've been doing is building software to run storage appliances. What I'm going to show you today is a little bit of where we come from as a company and what we've been doing with ZFS for building storage appliances.
Our first experience was building software for a device which needed to be easy to operate, because at the time it was quite hard to use command line interfaces and so on. This was our first venture, and we acquired a lot of experience in building a system based on Linux. We then went on and developed a lot of stuff, all based on this unified threat management solution.
First of all, we tried to take the right way in choosing the right operating system. We looked at the various options we had at the time, in 2009. OpenSolaris was already there, and that was one of the options, but we also evaluated Linux, where we had a lot of prior experience with our previous operating system, and of course we also looked very hard at FreeBSD as an option to build our operating system on, and at the time FreeBSD already had the ZFS port.
In the beginning we of course focused on delivering appliances to customers that were geared towards backup-related activities, and we took advantage of the really nice ZFS feature of send and receive. This was, and still is, very important because of the way it decides which are the differences between the file systems. This is a huge advantage over traditional rsync, for example, or backups, which every time they need to get the differences have to traverse everything.
The way ZFS is built allows this operation to be very, very fast, and compressing that over the line shortens very much the amount of data and the amount of time you require to transfer. We've been using this successfully with customers, who are implementing their own disaster recovery with their own customers.
They take our solutions, then they put some small systems at their customers' sites and use send/receive to back up off-site. This is one of the solutions which is really interesting to our customers, and we're doing this in Italy, where the internet is very crappy, so this says a lot about the power of the technology underneath that allows us to do that.
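The off-site backup described above is essentially incremental `zfs send` with compression on the wire. A minimal sketch, assuming hypothetical dataset, snapshot, and host names:

```shell
# Take a new snapshot, then send only the blocks that changed since the
# previous snapshot, compressing the stream while it crosses the slow link.
zfs snapshot tank/vms@today
zfs send -i tank/vms@yesterday tank/vms@today \
  | gzip \
  | ssh backup.example.com "gunzip | zfs receive drpool/vms"
```

Because ZFS knows the block-level differences between two snapshots directly, the incremental stream is computed without traversing the whole file system, which is the advantage over rsync mentioned above.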
Okay, then another thing that we added recently was the integration of KVM. Right now you can run it, of course, on supported hardware. As our colleagues said, we are going in a different direction in terms of hardware support, and this is where we're moving with all of our OEM partners: we work with them to define a really stable line of hardware that's really usable, because that's a good idea for them, since they don't have to run around to see if this works.
What's the performance, what are the characteristics? This is another type of value that we offer them. So we work with them to define lines of products, like they did and others are doing, and KVM is supported on certain hardware. We are taking advantage of having KVM directly over ZFS, so that we can deploy very fast clones, something similar to what Delphix does with databases, but we're doing that with virtual machines that run directly on the appliance.
You have a template and you can instantly provision a lot of machines. It doesn't matter what you're running inside: you can deploy test scenarios, deploy virtual desktops, deploy them very fast. And then, given that we have ZFS replication, we can also replicate the virtual machines and run them where they arrive, on the disaster recovery site. The interesting thing is that if the other side is also one of our appliances, you can also run them directly on the appliance.
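Cloning a VM template is essentially instant in ZFS because a clone initially shares all of its blocks with the snapshot it came from. A sketch of the idea with made-up dataset names (the appliance automates this, these raw commands just illustrate the mechanism):

```shell
# Freeze the template once, then stamp out clones that share its blocks.
# Each clone is created in constant time and consumes no extra space
# until it starts diverging from the template.
zfs snapshot tank/templates/win10@gold
zfs clone tank/templates/win10@gold tank/vms/desktop01
zfs clone tank/templates/win10@gold tank/vms/desktop02
```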
In doing that porting we've been releasing things open source in a GitHub repository, the bits and pieces that we did for the porting, and of course we implemented all the wiring required to seamlessly manage import and export, so that all the configuration related to a pool that gets moved across head nodes is kept together with the pool. That's an important thing we had to do, because it works with or without HA. We kept not only the NFS shares and the CIFS shares with the pool, but also the iSCSI configuration related to that pool. So, for example, if we had a LUN which has been shared, or made accessible, that configuration is also kept with the pool, and if we move that pool to another machine, it will get automatically reconfigured.
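Part of what makes this possible is that ZFS dataset properties travel with the pool on export and import. Syneto's actual wiring for iSCSI is their own glue code, but the basic idea can be sketched with the stock `sharenfs` property and hypothetical names:

```shell
# Share configuration stored as a dataset property moves with the pool.
zfs set sharenfs=on tank/projects
zpool export tank      # on the old head node
zpool import tank      # on the new head node; the NFS share comes back
```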
Another type of work we did was to integrate very well with the VMware infrastructure, because a lot of our customers are using VMware, and we developed some very interesting integration. The thing is that there hadn't been a storage appliance provider that started to do this work, and what is interesting is that the level of integration allows us to bypass one of the fundamental problems of VMware snapshots: they tend to grow in size over time.
A
So
unless
you
decide
to
consolidate
them
after
some
time,
the
first
the
size
of
the
snapshots
will
grow
uncontrollably
and
then
the
time
to
consolidate
them
will
raise
without
control.
So
if
you
want
to
keep
VMware
snapshots
for
30
days
for
60
days,
you
cannot
just
keep
snapshotting
on
the
vmware
side
and
keep
the
snapshots
there,
because
then
consolidating
with
takes
them
so
much
time.
So
what
we
actually
do.
We take a consistent snapshot of the virtual machine. If there's a database running inside, it doesn't matter what is running inside, everything is consistent. Then we take a ZFS snapshot, so that we can freeze that moment in time, and then we reconsolidate the virtual machine on the VMware side. This means that the virtual machine in VMware never has any snapshots, but if the customer decides to restore something, it will restore the virtual machine as it was live and running at that time. The code for that we also open sourced.
A
It's
on
github
and
it's
just
the
claw,
well,
the
infrastructure
that
we'd
use
to
communicate
with
VMware
and
so
on
and
forth
so
forth,
and
we
did
some
other
improvements
like
auto
expanded
data
stores.
So
if
the
storage
appliance
is
integrated,
you
don't
have
to
make
five
steps
or
to
recalculate
three
expand
and
so
on
on
VMware,
so
you
just
expanded
the
ZFS
data
store
and
then
VMware
will
instantly
see
the
space
available.
Even
if
it's
on
ice,
kazi.
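On the ZFS side, growing an iSCSI-backed datastore is a single property change on the backing zvol; the multi-step rescan on the VMware side is what the integration automates away. A sketch with a hypothetical volume name:

```shell
# Grow the zvol that backs the VMFS datastore; the LUN becomes
# larger in place, with no downtime for the initiators using it.
zfs set volsize=2T tank/datastores/vmfs01
```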
We did some work with deploying virtual desktop infrastructure. This is something generic, but just to showcase a little bit of the things that we did: we recently deployed a large VDI project at the University of Milan, and they were really impressed by the number of virtual machines that could be run and auto-provisioned instantly.
Well, I think that our default block size is 32K; this is what we decided to use by default on volumes. But dedup doesn't have the best ratio on that kind of block size, so usually we have to tune it down. I think that 8K works best in terms of space gained, but there are a lot of problems with memory in that situation, so, yeah.
That's about it, so this wraps it up. These are the things that we put together. We test, and we are very attentive to how we develop software, because I think that's something that has been overlooked in the industry over time: how do you validate that a thing works, and that changes won't break anything? So we've been trying to be very agile in developing stuff, and this is a talk that my colleague Vadim will give afterwards, a technical talk.
Yeah, well, it's the same engine, but we didn't do exactly what they did with the zones; there are a couple of differences. What we thought was a better idea was to make the system more transparent. For example, we're not using zvols to store virtual disks; we're using qcow2, or something like that, which we can export over NFS, for example. This is a very easy way to exchange things and to make it flexible. For example, if you don't want to run the hypervisor directly on the storage, you can run a separate hypervisor and run the virtual machine from the storage. That's one option, or you can also import a virtual machine by just renaming a couple of files. In terms of interoperability we thought that was a better idea. Cool, thanks.
How do you handle high availability in regards to the ZIL? Do you duplicate it with DRBD or some other technique?

Okay, so usually all the disks in our HA deployments are shared between the nodes. What's behind it is actually a SAS network, so the ZIL is actually visible by both nodes at the same time, so you don't need to replicate anything, and the same goes for the data disks.