From YouTube: CNCF Storage WG Meeting - 2018-03-28
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A
All right, folks. So it looks like we've got about 17 people on the call, and I bet we'll have a few more stragglers that come in today. We've got a great presenter, Luke Marston, who's going to help us out and present on dotmesh, and then after that we've got kind of an open agenda, so we can either end the meeting early or figure out what to chat about. I've also got an update regarding the KubeCon sessions that we've been working on.
B
Awesome. Thank you, Clinton, and hi everyone. It's great to see so many people here, and nice to see some of the names; so great to see you all. Yeah, I've been having some connectivity issues, so if I drop out, please tell me as soon as possible, so that I can slow down and maybe switch Wi-Fi and so on. So hopefully this won't be too painful. Cool, so I'll share my screen. I've got a few slides, and then I've got a couple of demos, so I'm going to pray to the demo gods.
B
So here we go. Cool. So dotmesh is about bringing data into the circle of control, but before I talk about that I just want to talk a little bit about what a bad day at work looks like when you're doing software, when you're doing cloud native, when you're doing DevOps. We spoke to probably a few dozen companies doing cloud native, doing Kubernetes, doing DevOps, about their use cases and their pain points.
B
This means that if you're a developer and you've got, like, four microservices on your laptop, and they've got a Redis and a Postgres and an Elasticsearch, then you just don't bother trying to capture all of that state to show a colleague an interesting state. Instead, you either get them to come and look at your computer if you're in the same building, or maybe you let them have a remote screen-sharing session if you're remote-only, or you depend on using a shared staging environment, in which case there's often contention over those staging environments.
B
It used to be that xkcd said the number one programmer excuse for legitimately slacking off was "my code's compiling", but it's increasingly not the case: it's 2018, compilers have got better, and it's actually integration tests that tend to slow people down most now. So we heard that often slow and flaky CI systems were what was causing people's lives to be painfully slow when they were trying to get software deployed and make changes to a codebase. Just checking:
B
Can you still hear me clearly? Because I just had a message flash up saying my internet connection is unstable. ("You're good." "Perfect, yep.") Okay. Oh, amazing, the wonders of 4G. So then the next problem that we heard, and this is a common one, was: we made a change to the software, the tests all passed in CI, but then the thing broke when we deployed it to production. And this is almost always because production is just a different environment to any of your other test environments.
B
But I believe that if there were tools to make it easier to test more realistically, and to have end-to-end tests that were less flaky and more reliable, then there would be more testing done before you expose any traffic to new, untested code. So that was another common theme. And then the fourth one that we heard, and I'd be really interested in this group's feedback on this, was that, well, you can put your application in containers, but how do you migrate
B
your data to the cloud? Containers really don't help you move data around. You don't ever want to put large database dumps into containers; they're just not designed for it, and containers don't help you capture databases either. So this is an interesting sort of theme: Kubernetes gets you most of the way to real cloud portability, and when I say cloud portability, I include moving data from on-prem to a cloud provider, but then how do you manage the data migration?
B
So if you take a step back and you look at the common theme between all of these things, well, there are problems at all stages of the software lifecycle. There are problems in dev, where microservices made capturing and sharing dev states hard; problems in CI, where end-to-end tests that manipulate real databases are slow and flaky, and the more realistic they are the flakier they are, and when they're flaky it's hard to reproduce the flakes (we spent about a month battling that within our own code, actually); and then problems in production.
B
And so the common theme, if you really take a step back and zoom out, is that in all cases you weren't in control of data. And if you think about what modern software is made up of, well, any software application is made up of code, infrastructure and data, and over the last 20 years or more, code has obviously been version controlled.
B
And if you go to any team and say, "do you version control your code?", then the answer is yes, most of the time. What's more, CI and automated testing have made control of code easier as well, by being able to reliably test and reproduce the inputs to various different parts of your code and so on. So controlling code, and getting velocity through control of code, is kind of a solved problem.
B
More recently, infrastructure has been moving into the fold as well, and I don't need to tell this group too much about this, but of course we've moved now from a world of snowflake servers into a world of declarative, immutable infrastructure as code, and your Terraform config and your Ansible config and your Dockerfiles all live in version control (well, the Docker images don't, but everything else lives in version control, and the images can be created from that).
B
So this is about controlling both the cloud resources that are deployed and also the runtime state of the servers, and tools like Docker and Kubernetes obviously go a long way to solving that. And so we're left in the situation where data is sort of left out in the cold. It isn't subject to the same tools and abilities as modern infrastructure as code, to a large extent, and many of the teams that we spoke to said they were still using sort of old-school methods for managing their data.
B
They often had DBAs, so you had to send them an email or open a ticket to get a snapshot of production data, and this was just slowing people down, because everything else about their infrastructure was getting faster and their data was holding them back. So our mission with dotmesh is to bring data into the circle of control. That's a very broad statement, and I'll tell you about how we plan to do that. So, how do you bring data into the circle of control?
B
Well, we propose that you use a mesh. Our mesh is not the service mesh, by the way. It is like a service mesh in that it is a generic tool that you can apply to any software and it will make things easier, but it is not about networking; it is about storage. And so the mesh that we propose is called the dotmesh, and the dothub sort of sits at the center of the mesh, and then around the side
B
here we have various different stages of the software development lifecycle, which enable various different use cases. So the first use case is that you have a developer on a development machine, and they are able to capture the state of multiple microservices at once, in a unit that we call a datadot. And then, once you have a datadot, it's possible to treat that datadot like a git repo, and so you can do commits, you can do branches, you can do push and pull.
B
Say there's a state that you can only demonstrate by showing the combined state of three different data stores at the same time, because all the IDs have to line up, and it involves touching various different parts of the system. You really want to be able to share that with the secops team, and they can now do that: rather than just writing down a list of steps to reproduce, they can actually share a snapshot of the entire environment.
B
That's sort of the fourth and final use case, so I'll pause there. I've got two demos. The first demo is going to be the sort of development side of the house: I'll show commits, branches, and pushing and cloning to and from the hub. And then the second demo is what I call a "dot ops" demo, which is migrating, sort of orchestrating, data replication between two separate Kubernetes clusters. Just before I do the demos, I'll take any questions on the content so far.
B
We've implemented, as part of dotmesh, a layer that sits between underlying reliable storage and the application, which is useful because it means that it works on your laptop as well, so it enables that portability between different stages of the software lifecycle. But one thing I will mention is this:
B
We are absolutely not trying to implement a synchronously replicated block storage system; we work in collaboration with synchronously replicated block storage systems. So we see this as very complementary to those systems, like Portworx or StorageOS or OpenEBS or Ceph, or in fact EBS or PVs on a cloud provider. In fact, we're currently working on an integration that allows us to support failover in production by just relying on the reliable disks provided to us through Kubernetes.
D
When you talk about taking snapshots of multiple microservices, do you in any way coordinate snapshots across microservices so that these are globally consistent, point-in-time snapshots? Or, you know, do you have any guarantees for the snapshots that you take across different microservices? You mentioned the example of Redis and MySQL: do you do any coordination across these microservices?
B
Yes, so we're working on a system that will not require coordination, because it will allow consistent atomic snapshots to be taken across multiple microservices, even if they're running on different machines. There is a different approach, which I've seen from our friends at Kasten (Kanister, I think, is the open source project, but Kasten and K10), which is to coordinate between the different services, and I think it's interesting to explore both of those approaches in parallel.
B
Great, so yeah, I'll run through some demos, and there'll be time for more questions afterwards. Let me try and make the Zoom thing get out of my way; it's always in my way. The first thing I'll do is the development side of the house, which can be shown using this very simple demo that we've got on our website. So if you want to try this yourself, feel free to check it out afterwards, or, you know, whenever you like.
B
If you go to our website, this is under "Try"; it's a Katacoda tutorial, and you can kick the tires. So, just to start by showing that it's very easy to install dotmesh: I have here a Linux machine, and it's just part of this hosted tutorial environment.
B
This works just as well if you're running it on your laptop, on macOS or Linux. Installing dotmesh is just a matter of running a curl to download a Go binary, and then we run a single command called dm cluster init. dm cluster init assumes that Docker is installed; it then pulls down the dotmesh server image and creates a new dotmesh cluster, and it only takes a few seconds.
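A minimal sketch of that install flow as described. This is illustrative rather than runnable here; the download URL is an assumption, not the real one, so check the dotmesh site for the actual install command:

```shell
# Download the dm client binary (URL is hypothetical; see the dotmesh docs)
curl -sSL -o dm https://get.dotmesh.example/latest/dm-$(uname -s)
chmod +x dm && sudo mv dm /usr/local/bin/

# Create a single-node dotmesh cluster on this machine.
# Assumes Docker is already installed and running; this pulls the
# dotmesh server image and starts it as a container.
dm cluster init

# Verify the cluster came up and check the client/server version
dm version
```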
B
So the idea there is that even if you're using dotmesh on your laptop, you still create a cluster; it's just a single-node cluster. And so all dotmesh clusters are alike, they're all sort of homogeneous, and you can push and pull between any dotmesh clusters. And so I can check that that came up... yep, that's running 0.33, that's good. We released that earlier today, so these are really, really fresh bits. So I can then start up a really simple Docker app.
B
Looks like I've got quite a bit of latency. Yeah, so inside the docker-compose.yml, all I have is just a regular docker-compose file with a web and a Redis, and I'll show a Kubernetes example in a minute. By the way, this is just the sort of the start, the very simple version, using docker-compose, and it's just using a Docker volume driver called dm, and that Docker volume driver refers to a moby-counter volume. So when I did docker-compose up on this file, that's why you saw
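The compose file he describes would look roughly like this. The image names and ports are assumptions for illustration; the key part is the volume backed by the `dm` driver:

```yaml
version: '3.4'
services:
  web:
    image: moby-counter-web   # hypothetical app image
    ports:
      - "8080:80"
    depends_on:
      - redis
  redis:
    image: redis:3
    command: redis-server --appendonly yes   # persist clicks to disk
    volumes:
      - moby_counter:/data
volumes:
  moby_counter:
    driver: dm                # the dotmesh Docker volume driver
    name: 'moby-counter'      # appears as the "moby-counter" dot in `dm list`
```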
B
this moby-counter in the output of dm list. So now, if you look again at dm list, you can see dotmesh knows that there's a moby-counter dot, and that dot is currently on the master branch. It knows which server it's on, it knows which containers are using it, and it knows how big it is; 98 kilobytes is basically the size of an empty filesystem with just a tiny Redis file in it. And so I can now commit the empty state.
B
And now, if I do dm log, that's the empty state; there's nothing in this commit at all. And then I'm going to make a new branch, so I'm going to create a branch called branch-a, and now I can show you the app. So this is the application, just really super simple: it's an app that lets you click on the screen and add logos, and it stores the position of the logos in a Redis database, and the Redis is configured to be persistent, so it's writing to disk.
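The flow so far, as an illustrative dm session. It's not runnable here, and the exact command names follow the demo as spoken, so treat them as a sketch of the git-like workflow rather than authoritative syntax:

```shell
dm list                     # shows the moby-counter dot: branch, server, containers, size
dm commit -m "empty state"  # snapshot the (empty) volume under the running container
dm log                      # show commits on the current branch
dm checkout -b branch-a     # create and switch to a new branch, like git
```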
B
Thank you. So I will take my time in order to spell out "CNCF", and the idea here is that the position of these logos on the screen is recorded inside the Redis database. C-N-C-F... yeah, I spelt it right, that's good. And if I do dm list now, okay, yeah, I can see that there's 21 kilobytes of dirty data, so there's 21 kilobytes of clicking that I've recorded. I can do another commit and say "hello CNCF", and that's my commit message.
B
And notice that all of my state comes back. So what's going on under the hood here is that dotmesh is coordinating switching out the state of the filesystem underneath the running container. But don't worry, it's not that scary: we also coordinate stopping the container, the Redis container, and then starting it again around that switch of the data, and so it's done in a way that doesn't break the application.
C
One quick question: some systems, I don't know if Redis is one of them, require some kind of quiescing before you can take their data, because they keep, you know, aggressive caches or in-memory state. Do you do anything to kind of help them put their state on disk, or flush out their state to disk, before you take the snapshot?
B
We don't have a quiesce API at the moment, but we'll build one as soon as we need it. We haven't yet found an application that a user or customer wants to use that actually needs that. We're seeing a lot of usage of, like, MySQL with InnoDB, or Postgres, where they have write-ahead logs, and the only thing that we need from the application or the database is that it's crash-consistent.
C
Yeah, related to that: some applications, MySQL being one of them, can actually use multiple volumes, and the crash consistency is actually point-in-time consistency across those volumes; say, you know, a binlog or a redo log and the data files could be on separate volumes. Do you actually have a crash-consistent story across volumes?
B
Let me see what I'm going to do next. So, dm list here... yeah, okay. So this is my "local" remote, which sounds kind of funny, but you can do dm remote -v and you can see the different remotes that are available to the dotmesh client, kind of like pointing kubectl at different clusters. So in this case, I'm pointing my client at the local remote, which is just the dotmesh server that's running on my laptop.
B
That's the general idea we're going for here. And so you can see that that's arrived; you can see that there's a branch-a, and hopefully... oh yeah, my internet connection is just being slow. And so on branch-a you can see the commit "hello CNCF", which I pushed from the command line there. So the bonus section here is that I'm going to pull this branch-a down onto my local machine; I don't need to install dotmesh because I already have it.
B
The thing is that it only pulls down the master branch. Okay, this is being slow because my 4G connection is being slow. And then I can do dm list, and that's pulled down moby-counter on the master branch. You can also see it's pulled down that one commit on the master branch, and I can switch, make that the active dot, and I can now start up the Docker app.
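An illustrative sketch of that clone-and-checkout flow against a hub remote. The remote and dot names follow the demo, and the session is not runnable here, so take the exact syntax as an assumption:

```shell
dm remote -v                       # list configured remotes (e.g. local, hub)
dm clone hub moby-counter          # pull the dot's master branch down from the hub
dm list                            # moby-counter now exists locally, on master
dm switch moby-counter             # make it the active dot
dm pull hub moby-counter branch-a  # fetch the branch pushed earlier
dm checkout branch-a               # switch the live volume to that branch's state
```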
B
For the dothub, we're thinking of the hub as a SaaS product, where you will pay money to store data on the dothub. We hope to add more value to the data in the dothub so that we can justify charging above cloud storage fees, because we don't want to be in that game, and we will be adding more features to the dothub as well. We think of dotmesh as an open source primitive, and it's really important that dotmesh is a good open source primitive.
B
Okay, perfect timing, because we now finally have our local moby-counter here, and it's on the master branch locally, which means that we're not going to see the state here. The next thing we need to do is pull down that branch-a state that we pushed up to the hub, and again, that'll probably take a few seconds because my 4G is being slow.
B
There we go, and then we can check out... we can do dm branch, you can see we've got branch-a available, and we can check out branch-a, and then over here, bingo: we saw that our data moved from the online demo environment to my local development environment. So yeah, that's the first demo, and if we've got time for another one, I can attempt a slightly more challenging one. ("Yeah, go ahead.") Cool. So yes, I've got this other example. So far I've shown local development:
B
Docker, docker-compose, and that's all well and good, but it's much more interesting to talk about production use cases with Kubernetes as well. So we've done a Kubernetes integration: we've got a dynamic provisioner and a flexvolume driver, and we'll be implementing CSI. And if you go to our setup guide, then there are instructions for GKE, instructions for AKS on Azure, and also instructions for generic Kubernetes, so feel free to try it out and kick the tires.
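With the dynamic provisioner installed, consuming a dot from Kubernetes would look roughly like an ordinary StorageClass plus PVC. The provisioner name and the annotation here are assumptions for illustration, not copied from the dotmesh docs:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dotmesh
provisioner: dotmesh/dotmesh   # hypothetical provisioner name
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-data
  annotations:
    # hypothetical annotation mapping this PVC to a named dot
    dotmesh.io/dot-name: mysql
spec:
  storageClassName: dotmesh
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```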
B
You can install this on a cluster, and when it's running in clustered mode it gives you all the same features that you get when it's running on a single machine. So what I can do here is... I've got two different contexts in kubectl. One of them is this gke-europe one: kubectl get nodes.
B
Okay, yeah, that helped; oh, my latency is down, okay. So I've got a cluster in Europe and I've got another cluster in the US. These are both GKE clusters because it was easy, but there's nothing about this demo that's specific to GKE; this would work from on-prem to cloud, or from one cloud provider to another, and so on. And so, yeah, I've also got my nodes in the US, and so I can then demonstrate migrating a reasonably substantial MySQL database from one continent to another.
B
If you're fast enough, then you get to see that going from zero dirty data to 115 megs, and that's just MySQL laying down its stock data files. I can then commit my empty state, and I can now do a dm list; the dm list, by the way, shows me the mysql dot. Just very quickly, I'll show you how that hooks up into the Kubernetes universe.
B
The loader pod just ingests a couple of hundred megs of data that we bundled into the container image for the loader, to get us started here, and it's just some fairly boring sample data about employees and departments and so on. And so what you can see here is that we've deployed this instance onto Kubernetes, with dotmesh installed on Kubernetes.
B
We can now see dm list, so we've got the full size of the dot, but we've also got one of the commits, and the dirty data on top of that initial commit is two hundred and eighty megs. So I can now do a commit, "my bulk dataset", and I can push that from Europe to the US, and the good news is that this is not going via my laptop, because I'm pretty sure my 4G connection isn't going to sustain 20 megabytes a second.
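That cross-cluster step, sketched as an illustrative dm session. The remote names match the demo's kubectl contexts, the session is not runnable here, and the exact subcommands are assumptions based on how the demo is narrated:

```shell
dm switch mysql                 # operate on the mysql dot
dm commit -m "my bulk dataset"  # snapshot ~280MB of loaded data on the Europe cluster
dm push gke-us mysql            # replicate the commit cluster-to-cluster, not via the laptop
dm remote switch gke-us         # point the client at the US cluster
dm list                         # confirm the dot and its commit arrived
```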
B
So that's nice, and I can now switch over to gke-us, and I can see that my data has arrived. And I'm now going to switch back to Europe. Now I'm going to simulate the fact that I did a bulk transfer of a big database and that it took some time; maybe in reality it took several hours, because it does actually take a while. I'll break the laws of physics a bit, but I'll also simulate the fact that, while that snapshot, that commit, was being replicated over the Atlantic, more data was still being written to the live database in Europe.
B
So I'm going to simulate that with a little script that adds some more data, and that just loads this SQL dump called employees-extra. I'm then going to switch over to Europe, and at this point it's scheduled downtime; we're going to try and make the scheduled downtime as short as possible. So I just deleted the MySQL pod in Europe, so MySQL is now down, and we can commit our secondary dataset.
B
So at this point it's pulling some container images... okay, and it's all running, and now I can open the other IP address, and with any luck I can see that my database is back up, and now it's in the US. So here we can go and look at the employees, and we added some of the names of the people on our team, just for fun. And so that indicates the bulk data that was transferred initially, plus the delta of data that was captured up until the few seconds of scheduled downtime.
B
So that's just another use case for dotmesh: it can be used for moving data around between production systems as well as development. And that's it, really. I've just got one slide here which sort of summarizes it: dotmesh is an open source primitive for people using Docker and Kubernetes in development and production, that provides Docker volumes and Kubernetes PVs that can be committed, branched, pushed and pulled like git repos, but they can be terabytes in size. And that's a snapshot of my presentation; thank you very much.
B
So we're interested in supporting ETL into dotmesh dots from things that would typically not run inside Kubernetes. Actually, this comes up on a backup slide I've got here, a sort of roadmap slide. We're in the process of defining the mesh, and I should actually move this arrow, because we're currently working on number two: we're currently working on adding production volumes into the mesh, so that we integrate with reliable disks, like I was talking about earlier.
B
But then number three on our roadmap is to bring production databases from things like RDS into the mesh, because one of the things we learned from talking to customers is that lots and lots of people are just using the databases that are provided by the cloud provider. But I think it still makes sense to try and bring those into the fold, by being able to import from them and bring data into earlier stages of the software development lifecycle.
D
Yeah, one question: I notice none of your dotmesh commands refer to a specific PVC or pods. Does it take a snapshot of the whole namespace, or how do you specify... you know, your dotmesh commands in Kubernetes look just the same as they look in Docker. How do you know the mappings between applications and their storage? How does that work?
B
Yes, and it's a good question. In order to make the commands short to type, there's a concept of a current remote, a current dot and a current branch; if you do a dm list, you'll see an asterisk next to the current dot, for example. And so that allows you to keep the commands as short as possible, but we've got a ticket open for adding explicit arguments.
B
So you could type something like dm, remote equals gke-us, dot equals mysql, branch equals master, commit, and that will be useful for when you're scripting things, and for when you're not interacting with things as a human. So yeah, it's just client-side state that's used to make the typing easier, and also to make it seem more familiar with respect to git, which has the same concept.
B
Think of dotmesh as a separate system that runs alongside Kubernetes on your cluster, that can be deployed to Kubernetes using Kubernetes: you can kubectl apply to install dotmesh. And it's sitting alongside Kubernetes with its own sort of registry of dots with their names, which can also be exposed directly to Docker by the Docker volume plugin interface. So when you refer to a dot name, that's the dotmesh ID, but it can be mapped to from a PVC.
B
We use flexvolume at the moment, but we'll implement CSI soon. And yeah, the important point, though, is that dotmesh on Kubernetes is going to be something which both consumes PVs and provides them, because, as I was saying earlier, we're not implementing synchronously replicated storage ourselves; rather, we're consuming systems that provide those guarantees and exposing upwards these sort of portable, snapshottable volumes, or dots.
B
You can add multiple dotmesh servers; that's why they're called clusters. So you can shard your dots across your dotmesh servers. You can kind of think of it, I don't know if this is a good analogy, but it's kind of like a cloud SAN, in that you can have multiple backends, and yes, access is going through the mesh, but the dotmesh servers are scalable and they can be spread across multiple PVs that interact with the backend, if that makes sense.
B
Please do come and join our Slack. So if you go to dotmesh.com and scroll all the way to the bottom of the page, it's a really tiny link, hidden under "Community" in the footer; there's a direct link to the Slack invite link on dotmesh.com. So yeah, please come and join our Slack, and if you want to reach me personally, I'm luke at dotmesh dot com. Excellent.
A
Right, so, five minutes left; just a couple of administrative things. So, one: at KubeCon we had three sessions. We've given up the third one; that was a late-night session that didn't make a lot of sense. So we now have two for the SWG: one is an intro, one's the advanced. Regarding the intro, that's been moved, so it no longer conflicts with the Kubernetes storage SIG.
A
So that's a great thing. And then, regarding the advanced session, we've invited members of the TOC to come talk to us about the charter, and I think, you know, some people in the group have been reached out to by Camille, and Camille's been asking about, you know, what's going on with the SWG, and what do you want it to do, et cetera.
A
...in terms of the charter and what the TOC is looking for. And then, regarding the actual session planning, I think that we'll probably talk about that next time, and from the people who volunteered we'll see who can start working on that for the event. Anything else? Ben, do you have any other comments? Anyone else? Very big thanks, Clay; thanks, Luke. ("Excellent, you're welcome.") I guess so. We'll give everybody back four minutes of their day. Thanks a lot.