From YouTube: Heptio Ark Design Sessions: Snapshot Locations
Description
This session will cover design specifics for the snapshot locations feature in Ark. Everyone is welcome to join us here and in #ark-dr on the Kubernetes Slack.
E: Yes, I'm going to try to pronounce the GitHub names (handles, actually) as best I can. So, first up is one I can't pronounce, so I'm going to spell it out: c-w-f-o-r-m-e-n. Okay, that was for the memory leak report. Next up is, I'll spell this one out too, s-c-h-i-m-a-r. And this one is really hard: bashofmann, with two n's at the end, for reporting an issue that caused Elasticsearch restores not to work. And the last one is jamespowis, for the PR to discard service account tokens from the default service accounts on restore. So, just to reinforce: we welcome contributions. We welcome PRs, and we welcome issues too. So if you find any issue, or if you have any suggestions, just open an issue and we'll talk to you there. Yep.
A: And I routinely send out swag to contributors. We're in the middle of changing stores and stuff, so once we get a new format in place, we'll get back to you on that. Alright, so let's talk about the topic. First, Steve, why don't you start by giving us a quick TL;DR of what exactly we mean by snapshot locations? Like, don't assume any context here.
B: Sure. So what I actually wanted to do, before we jump into the snapshot locations discussion, is a little bit of a demo and discussion about a couple of the other things we've been working on for 0.10. We have some pretty big and hopefully exciting changes coming, so I thought I'd just run through those real quickly. So let me share my screen and I'll go through a quick demo.
B: Those could be in different regions, if you're interested in separating things geographically. And as part of that, we also enabled storing backups under prefixes in an object storage bucket, so you no longer have to store everything in the root of a bucket. This can be really useful if you have multiple clusters that you're using Ark with: now, rather than having to set up a separate bucket per cluster or per Ark install, you can just put them all in the same bucket under different prefixes.
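As a rough sketch, two such locations might look like the following, assuming a BackupStorageLocation-style object as demoed later in the session. The API group, namespace, and field names here are illustrative, not a definitive schema:

    apiVersion: ark.heptio.com/v1
    kind: BackupStorageLocation
    metadata:
      name: default
      namespace: heptio-ark
    spec:
      provider: gcp
      objectStorage:
        bucket: my-ark-bucket   # shared bucket
        prefix: primary         # per-cluster / per-install prefix
    ---
    apiVersion: ark.heptio.com/v1
    kind: BackupStorageLocation
    metadata:
      name: secondary
      namespace: heptio-ark
    spec:
      provider: gcp
      objectStorage:
        bucket: my-ark-bucket   # same bucket...
        prefix: secondary       # ...different prefix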
B: So I wanted to run through a quick demo of what that looks like. I have the latest version of Ark installed in a cluster here. The first thing I'll show is just the backup locations; I have two set up here. The first one is the default: you can see the provider is GCP, and then for the actual location I have a bucket, and in this one I'm storing under the "primary" prefix. And then I also have a secondary location, also GCP, and it's under the same bucket, but under a "secondary" prefix.
B: So the first thing I'll do is just create a... actually, let me show you: I have the nginx example installed. This is a standard example that we use in our repo, so I'm just going to go through backing up that namespace. So the first thing I'll do is create a backup, "foo", that includes the nginx-example namespace. I'm not going to specify any backup location here, so Ark is actually going to use the default, which it identifies based on a server flag that you provide in the Ark deployment manifest.
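In CLI terms, the demo step is roughly the following. The namespace flag is the real Ark one; the per-backup location flag is an assumption about the 0.10 syntax:

    # No location specified: Ark uses the server-configured default.
    ark backup create foo --include-namespaces nginx-example

    # Hypothetical explicit selection of the secondary location instead.
    ark backup create bar --include-namespaces nginx-example --storage-location secondary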
B: And I'm just tailing the logs on the bottom here. So if we now look at the backups that we have: backup "foo" completed, and you can see the storage location is listed as "default". And so we can actually go look in object storage. This is my Ark bucket; you can see the two different prefixes here, and if we go under "primary", here's the "foo" backup, and you see the JSON file, the logs, and the actual tarball. So that all worked as expected.
B: The next thing I wanted to talk about is another change that we're making in 0.10. You may have noticed this as I flipped through here quickly, but I'm actually going to flip to a markdown doc for this. So, in versions 0.9 and prior, in terms of how Ark stored data in an object storage bucket: basically, you just had a single bucket that Ark used. We didn't enable any sort of prefixes, and then each backup just became a directory, essentially, within that bucket, with everything for that backup inside each backup's directory.
B
The
big
changes
that
we're
introducing
to
kind
of
top-level
directories
within
the
object,
storage
location
there
will
actually
be
a
couple
more
coming,
but
the
the
main
one
is
the
backups
subdirectory.
So,
instead
of
storing
each
backup
in
the
root,
we're
going
to
put
them
all
under
that
backup,
subdirectory
and
then
we're
also
going
to
separate
out,
restores
into
their
own
directory,
so
lab,
restores
top-level
directory
and
then
each
restore
will
get
its
own
subdirectory
under
that
with
the
logs
and
results
file.
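Roughly, the layout change looks like this (file and directory names are illustrative; the backups, restores, and restic directories are the ones discussed here):

    # 0.9 and prior: every backup at the bucket root
    <bucket>/
        foo/
            ark-backup.json
            foo-logs.gz
            foo.tar.gz

    # 0.10: top-level directories under the location's prefix
    <bucket>/<prefix>/
        backups/
            foo/
                ...
        restores/
            foo-20181023/
                logs, results
        restic/              # restic repo data, discussed next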
B: The nice thing about this is that it lets us add additional top-level directories as we need them, to store other types of information. One example of this is that we're working on a change right now so that, if you're using our restic integration, you can actually start storing the restic data under the same bucket and same prefix that you're using for all of your Ark backups. In 0.9 and prior, you had to have a separate bucket to store your restic data; we're now just making it easy to store it within that same bucket, so you don't have to have so many buckets. So this is definitely a breaking change. We've written up some instructions and provided a script to help move your files around so that they can be put in the new 0.10 layout.
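For a GCS bucket, the move that migration performs amounts to something like the sketch below. This is illustrative only, with BUCKET as a placeholder; the written instructions and the script in the repo are the authoritative path, and you should test against a copy first:

    # Relocate each 0.9-era backup directory under the new backups/ subdirectory.
    for dir in $(gsutil ls gs://BUCKET/ | grep -v '/backups/$'); do
        name=$(basename "$dir")
        gsutil -m mv "${dir}*" "gs://BUCKET/backups/${name}/"
    done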
D: So, in versions up to 0.9, we had this config object, this Config CRD, in Ark, and in there we had definitions for your backup storage, which, as Steve just showed, has moved into the new backup location objects, and we also had the persistent volume provider, so you could define where you were getting persistent volumes from and how to snapshot those. With this new change, we're moving that information into a separate CRD. So you see now we're using a kind of VolumeSnapshotLocation, and it just has a spec on it to define what provider you're using (right now the cloud providers, and also anybody who's written plugins, like Portworx) and whatever config those providers would take. So you can define multiple snapshot locations, and this will lead into some replication stuff later.
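As a minimal sketch of what's on screen, assuming the same API group and namespace as the other Ark objects, with illustrative config keys:

    apiVersion: ark.heptio.com/v1
    kind: VolumeSnapshotLocation
    metadata:
      name: gcp-default
      namespace: heptio-ark
    spec:
      provider: gcp          # a cloud provider, or a plugin such as portworx
      config:                # provider-specific settings; keys are illustrative
        project: my-gcp-project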
D: There are currently some tricky things with sending backup snapshots to zones or locations that your cluster isn't in. So if you were in GCP us-west and you tried to take a snapshot and immediately send it to us-east, that's not going to work currently, but we're working on a replication feature that we'll build on top of this to take those and copy them over to whatever zones you might want to have for redundancy.
E: I wanted to mention: you can pre-configure (I don't know if that's the right word to use, so correct me if it's not) volume snapshot locations for most public providers, but at the time of creating a backup you can only choose one to use per provider.
B: You know, 0.11. But even in 0.10, I think the snapshot location feature adds value in some scenarios, and one example of that is with Portworx. So Portworx is a storage platform that you can use with Kubernetes, and they've written an Ark plugin to be able to take snapshots of Portworx volumes using Ark. And Portworx has two different types of snapshots that you can take: you can take local snapshots, which are just going to keep the snapshot data on the Portworx infrastructure, or, alternately, you can take cloud snapshots, which actually upload that data to the cloud somewhere. And so in 0.9, the current state, if you're using Portworx you have to just choose one of those and use it at the server level. So if you want to change from using local to cloud snapshots, you actually have to change the Ark config, and this requires a server restart.
B
If
you
want
to
go
back
and
forth
between
the
two,
it
gets
pretty
clunky
in
terms
of
having
to
constantly
reconfigure
the
server
and
so
with
the
snapshot,
location,
change
and
you
know,
10,
a
user
could
just
set
up
two
different
snapshot
locations
for
port
work,
so
they
would.
They
would
both
have
the
same
provider.
The
import
works.
B
Actually,
no
one
could
you
make
that
a
little
bit
bigger
great
yeah?
So
you
see
the
local
snapshots
and
then
cloud
snapshots,
but
so
with
snapshot
locations
you
would
just
configure
two
different
locations.
They
both
have
the
port
works
provider,
but
the
config
would
be
different,
so
you
would
basically
have
you
would
have
a
local
config
for
the
local
location
and
then
you
would
have
the
necessary
cloud
config
for
the
cloud
location
and
then
for
each
backup
that
you
take.
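A hedged sketch of that pair of locations: the config keys here are illustrative, not the Portworx plugin's actual schema:

    apiVersion: ark.heptio.com/v1
    kind: VolumeSnapshotLocation
    metadata:
      name: portworx-local
      namespace: heptio-ark
    spec:
      provider: portworx
      config:
        type: local      # keep snapshot data on the Portworx infrastructure
    ---
    apiVersion: ark.heptio.com/v1
    kind: VolumeSnapshotLocation
    metadata:
      name: portworx-cloud
      namespace: heptio-ark
    spec:
      provider: portworx
      config:
        type: cloud      # upload snapshot data to cloud object storage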
B: Then, for each backup that you take, you can choose whether to use the local or the cloud location, and one of these could be made the default for the Portworx provider, so that you don't have to specify a location every time you're taking a backup, but you have the option to choose a different one if you want to. So I think, even without replication, that's a good example of why this feature has value on its own.
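In use, that could look like the following. The flag name is an assumption, standing in for whatever per-backup selector 0.10 ships with:

    # With portworx-local set as the provider default, no flag is needed.
    ark backup create px-app

    # Hypothetical explicit choice of the cloud location for one backup.
    ark backup create px-app-offsite --volume-snapshot-locations portworx-cloud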
B
Eventually,
we
want,
we
want
to
be
able
to
take
an
arc
back
up,
including
volume
snapshots
and
make
additional
copies
of
it
in
in
different
places,
and
so
typically
that
would
mean
different
regions,
and
you
know
that
there
could
be
a
couple
of
different
reasons
why
users
want
to
do
this.
The
first
one
is
for
redundancy,
so
you
have
multiple
copies
of
your
backup
in
different
places.
B: So, you know, we're sort of working through the development of this feature right now, and we have run into a few different issues that we're going back and forth on and talking about how we want to implement. I guess the first one that we sort of ran into: initially, when we were talking about snapshot locations, we were thinking that you'd be able to specify, when taking a backup, that you wanted your snapshots to be stored in a region other than the region where the cluster is, where the volumes are. So, for example, if you have your cluster running in us-east-1, we were thinking that you'd be able to take a backup and say, "I want my snapshots to be stored in us-west-1 for this backup." In terms of how you actually implement that for something like AWS: you would first take the snapshot in the source region, you would then copy that snapshot (which is a separate API operation) into the us-west-1 region, and then, finally, you would probably want to actually delete the original snapshot that was living in us-east-1, now that you had the snapshot in the us-west-1 destination. But what we realized as we started to kind of prototype and implement this is that that three-step operation can become pretty time-consuming, especially if you're trying to do it during a backup.
B: So with AWS, for example, you can't copy a snapshot into a different region until the original snapshot has completed, and if you have large volumes, it could take hours for the snapshot to actually complete before you can do the copy. And so all of a sudden we're looking at a scenario where an Ark backup doesn't take just a few seconds, or maybe a minute or two, but could actually take hours to finish executing.
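Sketched with real AWS CLI operations (the volume and snapshot IDs and the regions are placeholders), the three-step flow above looks roughly like:

    # 1. Create the snapshot in the cluster's region.
    aws ec2 create-snapshot --region us-east-1 --volume-id vol-0123456789abcdef0

    # The copy can't start until the source snapshot completes;
    # for large volumes this wait can run to hours.
    aws ec2 wait snapshot-completed --region us-east-1 --snapshot-ids snap-0aaa

    # 2. Copy it into the destination region (a separate API operation).
    aws ec2 copy-snapshot --region us-west-1 --source-region us-east-1 \
        --source-snapshot-id snap-0aaa

    # 3. Delete the original now that the copy exists.
    aws ec2 delete-snapshot --region us-east-1 --snapshot-id snap-0aaa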
B: And this had the potential to introduce some kind of significant issues, because Ark currently only executes a single backup at a time. And so, if you have a backup that all of a sudden is taking multiple hours to complete, then any other backups are going to be blocked in the queue until that finishes. So we've sort of decided that, for now, when you execute a backup, you'll only be able to create snapshots for your volumes in the same region as where your cluster is.
B
And
this
this
is
going
to
vary
a
little
bit
per
provider.
It's
kind
of
a
per
provider
constraint,
but
so
with
AWS
you'll
be
limited
to
only
creating
snapshots
in
the
same
region
and
then
replication
will
be
the
feature
that
will
allow
you
to
actually
copy
those
out
to
other
regions,
and
that
will
be
a
new
controller
or
a
new
set
of
controllers
that
will
actually
take
into
account
kind
of
the
time
it
takes
to
copy
these
things
and
then
manage
how
many
copies
are
running,
concurrently,
etc.
D
And
you
know
we
took
we
SD
I'm.
Sorry
I
just
wanted
to
point
out
that
the
snapshot
copy
slowness
issue
is
probably
a
bigger
deal,
because
art
has
been
designed
on
the
assumption
that
creating
a
snapshot
is
cheap
in
terms
of
time
that
we
can
issue
the
request,
creative
snapshot
and
just
move
on,
whereas
this
would
be
changing
it
to
not
just
issuing
the
request
but
sitting
around
and
waiting
for
it.
B
You
know
a
replacement
cluster,
that's
in
the
same
region
or
AZ,
but
if
you're
actually
looking
to
restore
backups
into
a
cluster
in
a
different
region
or
AZ
this
this
becomes
problematic,
because
we
don't
really
have
a
way
to
specify
that
so
we're
we.
We
need
to
have
a
way
to
kind
of
specify
where
the
volumes
should
be
restored
into
which
region
and
which
AZ
and
so
I
know,
we've
we've
kind
of
gone
back
and
forth
on
a
bunch
of
different
ways
to
do
this.
B
There's
there's
metadata
on
the
node
objects
in
kubernetes
that
at
least
for
some
cloud
providers
give
information
about
the
region.
Az.
We've
also
talked
about
having
this
be
command
line,
flag
or
a
server
server
flag,
Nolan
I
know
you've
been
you've
been
looking
at
some
of
these
options.
Did
you
have
anything
you
wanted
to
add
there
I.
D: I think, yeah, probably the best source is going to be those node labels. Not all the cloud providers currently set that information, so we need to see what we can do to get that, and I don't know what that's going to look like for on-prem; it may not matter as much there, though. So, yeah, the main thing is going to be making sure that whatever cloud provider you're setting up with is tagging the nodes, and then the assumption will be that your nodes are all in one region, not spread across multiple.
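For reference, the node metadata in question is the region/zone labels that some cloud providers set on nodes. One way to inspect them, using the beta failure-domain label keys current at the time:

    # Print each node with its region and zone label columns (empty if unset).
    kubectl get nodes \
        -L failure-domain.beta.kubernetes.io/region \
        -L failure-domain.beta.kubernetes.io/zone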
C: So we probably should reach out to the other team and see what's going on there. The other option would be specifying a volume snapshot location as part of the restore, I guess maybe one per provider or something. I think the UX here is going to be hard to get right to solve everyone's use cases, but maybe we can get to 80% if we either go with the node labels or with defaults for picking the snapshot locations, or something.
D: The other thing that I would just throw out here, and this is not fully baked, but possibly as a stopgap until replication: maybe an explicit copy command, to say "copy my snapshots from this volume snapshot location to this one" before doing the restore. But that's also not fully baked yet, so we may be able to investigate it, but it may not work out.
B: Well, I think we definitely covered a lot, you know, just an overview of the feature and kind of where we are. I can give a quick update on where we are in terms of development. So we're in the middle of the snapshot location development: we've implemented the new API to define the new CRDs, we're working on setting up the Ark server flags that specify the default snapshot location for each provider, and we're also working on the core backup controller changes.
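A sketch of what such a per-provider default flag could look like on the server deployment; the flag name and format here are assumptions, not the final syntax:

    # One default volume snapshot location per provider, comma-separated.
    ark server --default-volume-snapshot-locations=gcp:gcp-default,portworx:portworx-local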
B
We,
let's
see,
we
have
we've
been
using
Zen
hub
to
to
kind
of
track
and
manage
our
work
lately,
and
so
we
have
a
Zen
hub
board.
That's
available,
that's
linked
to
our
github
repo.
That
folks
can
take
a
look
at
if
they
want
to
see
which
specific
issues
are
actually
in
progress
and,
what's
still
on
the
backlog,.
B: Yeah, so I mean, I think that's pretty much all that I have. Definitely, you know, kind of a call to any community members, any users, or anyone who has input on this and has thought about cross-region use cases: whether what we're implementing solves those needs, or whether there are additional things that are necessary.
A
All
right
and
then
we
for
sure
take
the
notes
and
publish
publish
those
on
github
as
well.
We
haven't
really
decided
like
on
a
naming
scheme
where
we're
gonna
start
publishing
the
notes
of
these,
but
once
we
do
that
we'll
go
ahead
and
publish
that
on
the
Google
Group
as
well,
so
people
that
can
can
follow
along.
So
any
final
comments
or
questions
for
the
three
people
watching
live.
Let
me
just
look
at
the
channel
real,
quick
there's
about
a
ten-second
delay.
If
anybody
has
any
questions.
D: I would also mention something, sorry, this is Lily: something else coming in v0.10 is a little bit of a plugin rewrite, which we are currently updating our examples for, and we hope to have documentation out for how to get your plugins migrated. It's not a huge amount of code that you'll have to change, but there will be a break between 0.9 and 0.10 for plugins.
B: We also want to put out at least one beta, I'll call it that, maybe a couple of them, prior to the official 0.10 release. So we just need to make a decision on how feature-complete we want to be before we put out a beta, but it's possible that we'll put one out soon that has the plugin changes and the backup location changes. Maybe not the snapshot locations quite yet, but at least give people something to start using for testing. Mm-hmm.
A: Mine says the 9th. Okay, my mistake. Okay, good. All right: as long as we collectively know what we're doing, we should be okay. All right, excellent. And with that, this concludes the first design session. This repeating invite will keep going out to the list, and we hope to have a different topic for you every time. We'll see everyone in about two weeks. Thanks.