From YouTube: HybridCluster 3
Inferior file systems like ext4 are still more prevalent, and I think we should change that. So, yeah, I think that using OpenZFS, on Linux in particular, has an opportunity to make a very big difference in terms of the portability of data as well as applications, and I'll explain what I mean by that.
So, let's start off. Some of us were at SCALE yesterday, and that was actually a really useful perspective on what people are doing these days, and more generally than just SCALE. This is the stack that a lot of people are building their applications for these days, and it goes from web people all the way up to some of the forward-thinking enterprise companies who are building cloud-style applications, as opposed to the more old-school, classic legacy application architectures. So it's pretty much all Linux, unfortunately, and Ubuntu is probably in the lead on cloud; CentOS and RHEL are obviously big in the enterprise. People do a lot of this thing called DevOps, which basically means automation: automating the deployment of things. And things like Docker — well, specifically Docker and Linux containerization — have recently become a very popular mechanism for deploying an application in a predictable way and making it portable between a dev environment, a test environment, and then production when you move it there. And there's this sort of fad, you might call it, though I think it's actually more than a fad, for what's called 12-factor apps.
I'll talk about state in a second, but just at a high level: the promise of containers, as I've already said, is that applications should be portable. You should be able to build an application on your laptop in a VM; you should be able to deploy that application onto a test environment or a staging environment; and that same image — the same bits that make up that userland, typically a Linux environment, with all of its dependencies and all of its configuration — should then be movable directly into production.
And that's super helpful, because it solves this whole dependency-hell problem that you have, and the configuration management problem, and state drifting from one production environment to the next. So that's great, and Docker and containerization solve a very real problem, which is why it's taken off. But I believe that data deserves to be portable too — the application state that you accumulate, given that the Docker environment itself is effectively read-only once it's been deployed into production, or even into a test environment.
Like I said, the 12-factor apps manifesto talks about how that should be the case, so that you can scale applications out dynamically, for example. But what the 12-factor manifesto doesn't address is this idea that by pushing all of the state into the services — i.e. the databases, or the NoSQL store, or the Redis, or whatever you fancy using — you're offloading the problem of dealing with that data onto someone who has to deal with it in an old-fashioned way: by doing backups, or by using a SAN, or by doing some other form of commercial data replication. And this is exhibited, for example, in Docker, where if you create a container, and you run a container inside Docker, you can mount a volume from outside the container inside the container. And this is the recommended approach for dealing with mutable state in containerized environments.
So the canonical example of MySQL in Docker says: mount /var/lib/mysql inside one of these volumes, and then — hey, that's your problem. And the issue that I have with that, in a cloud infrastructure environment, is that it's just pushing the problem around. Actually, to use Matt's words, by just pushing the problem around you're making it the sysadmin's problem to store this mutable state somewhere safe — but by definition you're running on a volatile cloud infrastructure environment, so where are you going to put it?
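In Docker CLI terms, that canonical pattern looks roughly like this. The host path and container name here are made-up examples, and the command is printed rather than run, since this sketch assumes no Docker daemon is available:

```shell
# The canonical Docker pattern for MySQL state: bind-mount a host
# directory over /var/lib/mysql so the mutable data lives outside
# the (effectively read-only) container image.
HOST_DIR=/srv/mysql-data          # illustrative host path
CONTAINER_DIR=/var/lib/mysql      # where mysqld keeps its data

# Built as a string and echoed instead of executed, so the sketch
# works without a Docker daemon present.
RUN_CMD="docker run -d --name db -v ${HOST_DIR}:${CONTAINER_DIR} mysql"
echo "${RUN_CMD}"
```

Everything written under /var/lib/mysql then lands in the host directory, which is exactly the "hey, that's your problem" hand-off described above.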
When I said in my earlier talk that I think this is a huge opportunity for OpenZFS, the reason I think it's a huge opportunity for OpenZFS in this context is that this whole thing around Docker and containers has got a lot of traction very recently, very quickly. But because doing things with containers is different to doing things with VMs, or even on bare metal, there's this vacuum of tools around the new approach, and I think that OpenZFS has a place to play as, effectively, a volume manager within a containerized environment.
So this is just a summary. The app code and the configuration are well defined and enclosed in this Docker image here, which can easily be deployed and redeployed, but the data is somebody else's problem. Wouldn't it be nice if, instead, you could have your data and your application side by side, managed by one thing, or managed by APIs that interact with each other, in a way that your data can be replicated between two clouds, for example — and you can migrate your data, and you can migrate your running application, from one cloud to another and back again? There are all sorts of use cases where that might be a good idea, and ZFS crucially allows us to do this in a way that I believe nothing else really does, and that's all down to send and receive. So we've got Matt to thank for that again. And there's just another sort of motivation for this.
It's this idea that when you're doing things with containers, you're doing them differently. You can look at the stack of all the technology that VMware built around VMs, and you can see that if you replace VMs with containers, then VMware snapshots correlate with snapshot and rollback in OpenZFS; VMware HA you can do with replication and failover; and there's live migration — I'll show you how we do live migration in a second. Then there's this higher-order stuff on top: VMware DRS dynamically migrates VMs around in response to changing load on those VMs. So, yeah — now it's time for the architecture diagram. This is how we've architected this stuff. Each of these squares here corresponds with a container that has some associated state with it.
So there might be a container that's running in master mode on one server over here in the US, and it's being continuously replicated, using ZFS send and receive, to a slave node over in the UK, but also continuously being replicated to a slave node in the same data center as well. Obviously, this means that you can tolerate the failure of a single node in this DC, or you can tolerate the failure of the whole DC.
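The master-to-slave replication just described can be sketched as one round of a snapshot/send/receive cycle. Pool, dataset, and host names are invented for illustration, and the commands are echoed rather than executed, since this assumes no real ZFS pool or slave node:

```shell
# One round of continuous replication with ZFS send/receive:
# snapshot the container's dataset, ship only the delta since the
# previous snapshot to a slave, then retire the old snapshot.
POOL=tank                         # illustrative pool name
DATASET=containers/db1            # one dataset per container
SLAVE=slave.uk.example.com        # hypothetical receiving node
PREV=rep-41                       # last snapshot the slave already has
NEXT=rep-42                       # snapshot taken this round

echo "zfs snapshot ${POOL}/${DATASET}@${NEXT}"
# -i makes this an incremental send: only blocks changed since @${PREV}
SEND_CMD="zfs send -i @${PREV} ${POOL}/${DATASET}@${NEXT} | ssh ${SLAVE} zfs receive ${POOL}/${DATASET}"
echo "${SEND_CMD}"
# once the slave has it, the previous snapshot can be dropped
echo "zfs destroy ${POOL}/${DATASET}@${PREV}"
```

Looping this gives the continuous replication in the diagram; repeating the send toward a second slave in the same data center gives the local replica as well.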
This is within a single instance, and this instance is going to be replicated and connected up to a bunch of other instances. So a request might come in for a database, for example, that is hosted on a different node. The proxy layer is responsible for speaking just enough of each of these protocols that it can bounce requests around. So if a request comes in for MySQL, the proxy actually fakes MySQL authentication and then bounces that request out to whichever other server it's on. Similarly, a request might come in for a website.
Like I said, each container is backed onto its own independent file system, and so as soon as a change hits the disk, we take a new snapshot and replicate that snapshot out to the other machines in the cluster. And another trick that isn't quite represented by this diagram, but which I think is kind of cool, is how we can do this pseudo live migration of applications, or containers.
We can do that by pausing the incoming requests at the proxy layer, allowing any short-lived in-flight requests to complete with a timer, then unmounting the file system — well, shutting down the container first would probably be sensible: shut down the container, unmount the file system, and then send the very last, most recent, snapshot over the wire. Because that snapshot contains less than 30 seconds' worth of changes, it's typically only a few hundred kilobytes of changes on disk, so it's very, very quick to send that snapshot over to the new node, the receiving node, at which point it can get mounted, the container can be booted very, very quickly, and the built-up requests that have been queued here then get sent directly over to the new node. In this way we can do so-called live migration. It's not copying memory around like VMware vMotion does — which is actually a huge benefit, because it's much, much cheaper — but it allows migration of these containers.
We can migrate the other busy things away from that server, so you can scale applications up and down, and that's particularly useful for stateful apps and databases that will typically be running on a single server. And then there's the time machine feature: we can provide an API that hooks into ZFS rollback and allows customers to continuously undo their mistakes, as I said this morning. So that's that.
I mean, DTrace isn't going to be a prerequisite for us, I don't think — not for a while — because we're working on building that stack up a piece at a time.
Oh, and another thing just to point out: the work that we're doing on this, on the GNU/Linux side, is all going to be done in the open. It's going to be open source, so everyone's welcome to take it and play with it — that's in terms of the Linux stuff.
Actually, I just wanted to address the point that came up earlier as well, on the licensing side: we're comfortable with the licensing situation with the GPL and the CDDL, and it's not a problem for us. A lot of that confidence has come from other big organizations being confident with it, so we're taking their lead, and as far as we're concerned that's not a problem. We've just got technical issues that we need to resolve on Linux before we can roll this out.
DTrace on Linux would be great. I know that's something that Richard Yao and Andrea have been sort of starting to work on; it would be good to see that get production-ready. And in general, improving the development environment on Linux is a high priority for us. I'm a strong believer in spending time and energy getting the development environment to be a highly productive one, or at least a more productive one, so that the ensuing development work itself can actually go faster. So yeah — let's get Linux up to scratch.