From YouTube: Heptio Ark 1.0 Discussion
Description
This design session will cover our plans to drive Ark to 1.0. Everyone is welcome to join to give us feedback on what they'd like to see in 1.0. The focus for 1.0 will be bug fixes, correctness, and API stability and forward compatibility.
https://kubernetes.slack.com/messages/ark-dr
B: There we go. Okay, all right. It looks like the preview was going live, but the session itself was not going live, so that is on me. Okay, let's do a quick summary for everybody. Welcome, and thanks everyone for joining us. This is the Heptio Ark design session for 1.0. Unfortunately we had a little bit of a technical snafu. Ooh, thanks to whoever caught that, so we're actually streaming now. So let's introduce the team real quick, then we'll go through the 1.0 thing again, and then we'll start talking through it. Hey.
C: Yes, so Ark has been out since August of 2017, so about 15 months, and we want to have a GA release so that our users can depend on Ark for stability, and make sure that the backups they take can be restored going forward, and that future releases won't necessarily break things. So this is all about stability, compatibility, and semantic versioning guarantees.
C: We'll do a 0.11, probably in January, given what's coming up with KubeCon, today's 1.13 release, and the holidays in December. But we will have at least one, if not more, 0.11/0.12/0.13 releases to incorporate the features that are on the road to 1.0, just so that you all don't have to wait several months until we get to 1.0. And we're hoping that we will get our GA out there sometime in the first quarter of 2019, okay.
F: We've got a number of docs issues against the schedule and the 1.0 milestone, but those (especially as regards content development and content reorganization) are effectively decoupled from code work, which means that we are hoping to be able to ship PRs for some of those at least before the release window.
B: What we did is we looked at everything on that spreadsheet and put them in separate buckets, and I guess the plan will be for us to just go through each of these features, the major things that we think should land in each milestone, and then plan accordingly. Those of you listening: if any of these issues are highly important to you, or you feel we missed one, as always, feel free to let us know on the list or in the Ark Slack.
B: In fact, we've already had a few issues pointed out to us that people would like to see triaged in the future, and we will get to those. So with that, do we want to start with 1.0? Steve is sharing basically the three big buckets here, and we'll just go through them all in this document as we discuss. So who's up first with this first one? Yeah.
A: The first kind of category of issues that we have for 1.0 is really improving the story around how Ark gets installed and upgraded in your cluster, really making this straightforward. And, related to that, also putting in place some policies around what versions of Kubernetes we support and what features we can rely on from Kubernetes. So the first part of this is: we want to add a CLI command, an "ark install" command, and potentially an "ark upgrade" command as well.
A: That will do all of the heavy lifting in terms of getting namespaces set up, getting CRDs set up, and adding all of the different Kubernetes objects that are required to get Ark installed. Currently, if you follow our documentation, you go through this process manually, step by step: you install the prereqs file, you install the deployment, and various other things. So we just want to package this all up into a CLI command so that it's really easy to do with just a few keystrokes.
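As a rough sketch of the shape this could take, assuming the cobra library the Ark CLI is already built on; the flag names and install logic below are illustrative, not the planned implementation:

```go
// Hypothetical sketch of an "ark install" subcommand using cobra.
package install

import (
	"fmt"

	"github.com/spf13/cobra"
)

// NewCommand wires up an "install" subcommand for the ark CLI.
func NewCommand() *cobra.Command {
	var namespace, provider, bucket string

	c := &cobra.Command{
		Use:   "install",
		Short: "Install the Ark server components into the current cluster",
		RunE: func(cmd *cobra.Command, args []string) error {
			// A real implementation would create the namespace, register
			// the CRDs, and apply the deployment and RBAC objects that the
			// docs currently walk through manually.
			fmt.Printf("installing Ark into namespace %q (provider=%s, bucket=%s)\n",
				namespace, provider, bucket)
			return nil
		},
	}

	c.Flags().StringVar(&namespace, "namespace", "heptio-ark", "namespace to install into")
	c.Flags().StringVar(&provider, "provider", "", "cloud provider for object/block storage")
	c.Flags().StringVar(&bucket, "bucket", "", "object storage bucket for backups")
	return c
}
```

The point is that everything the docs currently walk through by hand (namespace, CRDs, deployment, RBAC) would hang off a single entry point like this.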
A: Related to that: we know that there is a community-supported Helm chart out there for Ark. It's something that we haven't directly worked on in the core Ark team; we've sort of provided some input on it, but it's being supported by the community right now. And we know that a lot of folks out there use Helm for installing applications or tools into their cluster.
A: So as part of 1.0, we really want to help take ownership of this and make sure that, with each release of Ark, the Helm chart is upgraded as appropriate, so that anyone out there who's using Helm is going to have a good experience installing Ark through it. And then the next couple of things:
A: We want to make sure that we version the backups that Ark is taking, and so this will allow us to know which versions of Ark are compatible with each backup that you take. The idea here is that I think we would follow a semantic versioning policy, and so a backup that you take with Ark 1.0...
A: ...should be fully compatible with, say, an Ark 1.9, and we would really only break compatibility as we moved across major versions. So, potentially, if you took an Ark backup using Ark 1.0 and you then upgraded your server to Ark 2.0, it's possible that there would be compatibility issues there. But within a major version, according to the semver policy, we would maintain compatibility. Steve?
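A minimal sketch of the kind of check this versioning would enable, assuming a hypothetical format-version string recorded with each backup; under semver, only a major-version mismatch is treated as incompatible:

```go
// Hypothetical backup-format compatibility check under a semver policy.
package backup

import (
	"fmt"
	"strings"
)

// major extracts the major component of a version string like "1.2".
func major(v string) string {
	return strings.SplitN(v, ".", 2)[0]
}

// CheckCompat returns an error when a backup was written by a different
// major version than the running server supports.
func CheckCompat(serverVersion, backupFormatVersion string) error {
	if major(serverVersion) != major(backupFormatVersion) {
		return fmt.Errorf("backup format %s is not compatible with server %s",
			backupFormatVersion, serverVersion)
	}
	return nil
}
```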
A: So we're pre-1.0, and so we're doing best-effort compatibility: we make every attempt to either not make breaking changes or to provide upgrade paths between versions. 0.10 is a good example of this. We did make some breaking changes, but we extensively documented how to do an upgrade and provided some scripts to help with that. But once we hit 1.0, we're really going to provide guarantees there, and we will not make breaking changes within that 1.x...
A: ...major series. Okay, great, thanks. Yep, and then the last bullet under this category is that policy on what versions of Kubernetes we support and what features we support within that. So again, currently we're making our best effort to support as many versions of Kubernetes back as we can.

A: We want to have a discussion about what makes sense in terms of which versions of Kubernetes we should be compatible with, and then we want to document that really clearly, so that users know, for each version of Ark, what versions of Kubernetes it will run on. And then, as part of that, I think we will probably commit to making sure that we test on each of those versions, so that you have some confidence that it will work there. So we think that's a major grouping there.
C: Thanks for joining us. All right, moving on to plugins. So we currently have plugins for different types of things. We have object storage plugins, which is where Ark puts the backup files. We have block storage plugins, which is what Ark uses to snapshot volumes and restore them.
C: We have backup and restore item action plugins that allow you to perform custom behavior on individual items being backed up or restored. And what we want to do is: if we want to change any of these names, now is the time to do it. BlockStore, for example, is probably not the best name for that particular type of plugin; I think VolumeProvider, VolumeSnapshotter, or something along...
C: ...those lines would be better. So we want to finalize those. We want to go through our protobuf definitions and the Go interfaces for all of the plugins, really take a hard look at them, and make sure that we're happy with them. We already know that we will be making changes so that we can support improved error handling and error reporting.
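For context, a block store plugin today is a Go interface along roughly these lines (the method set is abbreviated from memory, so treat it as illustrative), shown here under the proposed VolumeSnapshotter name:

```go
// Illustrative sketch of the block store plugin interface, renamed.
package plugin

// VolumeSnapshotter is a possible 1.0-era name for what is currently the
// BlockStore plugin type.
type VolumeSnapshotter interface {
	// Init is called once with provider-specific configuration.
	Init(config map[string]string) error
	// CreateSnapshot snapshots the given volume and returns the snapshot ID.
	CreateSnapshot(volumeID, volumeAZ string, tags map[string]string) (snapshotID string, err error)
	// CreateVolumeFromSnapshot provisions a new volume from a snapshot.
	CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ string, iops *int64) (volumeID string, err error)
	// DeleteSnapshot removes a snapshot from the provider.
	DeleteSnapshot(snapshotID string) error
}
```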
C: One major limitation right now in our plugins is: if you have a panic in the Go code in one of the plugins, for example, that process will die, and there's currently no way for Ark to tell you, as an end user, why it crashed. We did add support in 0.10 to be able to restart terminated plugins, but what we really want to do is make it so you can understand why things are crashing and what the errors are.
C: That will require protobuf changes, which we will be doing sometime before we get to 1.0. And then the last thing about plugins is probably figuring out a way to deal with naming collisions. So, for example, we maintain a plugin called "aws" (there's one for object storage and one for block storage), and if somebody else had a plugin that they also said was named "aws", honestly, off the top of my head, I don't know what would happen. So we want to find a way, possibly with namespacing, for users and developers to avoid any collisions.
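One possible shape for the namespacing idea: qualify plugin names with a domain-style prefix, such as heptio.com/aws, and reject duplicate registrations outright. The registry below is hypothetical:

```go
// Hypothetical registry keyed by namespaced plugin names.
package plugin

import "fmt"

// Registry maps fully-qualified plugin names to implementations.
type Registry struct {
	plugins map[string]interface{}
}

// Register stores a plugin under a namespaced name such as "heptio.com/aws".
// Registering the same fully-qualified name twice is an error, instead of one
// plugin silently shadowing the other.
func (r *Registry) Register(name string, impl interface{}) error {
	if _, exists := r.plugins[name]; exists {
		return fmt.Errorf("plugin %q is already registered", name)
	}
	if r.plugins == nil {
		r.plugins = map[string]interface{}{}
	}
	r.plugins[name] = impl
	return nil
}
```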
B: Awesome. Welcome, Justin; if you have any questions, feel free to hop in, or if you just want to listen along, that's fine also. Thanks for using Ark, and as we get through these, if you have any feedback for the team, we'd love to hear it. All right, does anybody have any questions on plugins before we move to documentation?
H: Is there any discussion around some of the additional hooks? I know that (I think it was in 1.x or 2) there's some discussion around managing stateful backups, but I could see, for instance, plugins that are not triggered off of pods, but rather off of services or deployments, also being useful as hooks, and post-backup hooks as well. In terms of those lifecycle things, I'm just curious where the vision of that is going.
C: So let's break that down. I'll start with the last thing, post-backup hooks. We've had several inquiries about this over the past several months. We have not done anything with them to date, and have suggested that people consider writing an external component that watches the state of the backup and then, when it transitions to completed or failed, they could do something on their own.
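A minimal sketch of that external-component pattern: poll the Backup resource's phase and run your own logic when it reaches a terminal state. getBackupPhase stands in for a real client-go lookup of the Backup custom resource:

```go
// External watcher sketch: act when a backup reaches a terminal phase.
package main

import (
	"fmt"
	"time"
)

// getBackupPhase stands in for a client-go query against the Ark Backup
// custom resource; stubbed so the sketch is self-contained.
func getBackupPhase(name string) string {
	return "Completed"
}

// waitForBackup polls until the backup reaches a terminal phase, then runs
// the caller's post-backup logic.
func waitForBackup(name string, onDone func(phase string)) {
	for {
		if phase := getBackupPhase(name); phase == "Completed" || phase == "Failed" {
			onDone(phase)
			return
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	waitForBackup("nightly-backup", func(phase string) {
		fmt.Println("backup finished:", phase) // e.g. trigger notifications here
	})
}
```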
C: We certainly would welcome a discussion around what that would look like in Ark proper, as opposed to an external component. And then, for hooks around services and deployments and other things, I think we probably need some more information, because I'm not having a light-bulb moment around what that would look like or what it would do. So do you have any more context you can share?
H: This is really kind of complementing the idea of the stateful backup, but we have some services that just don't play nicely with file-based backups. I'm thinking specifically of Elasticsearch and Kafka. They in particular have in-memory representations of data, and we need to use application-specific APIs to get a clean backup. For Elasticsearch, for instance, we might use Curator, which is a CLI for Elasticsearch, to back up to S3, and that doesn't really make sense on a pod-by-pod basis, but it does make sense...

H: ...on a back-up-a-stateful-set basis. That's something that we would want to trigger because we're backing up a stateful set, not because we're backing up each pod independently. The same thing would occur for Redis: we might want to do a Redis dump, but we really only need to do that once; we don't need to do that for every Redis pod in the stateful set. Okay.
C: Cool. So we do have at least one or two issues open around stateful sets in particular, and then, generally speaking, around application-style workloads where we may need some prescriptive way to handle an Elasticsearch backup or a MySQL backup or whatever. We haven't really made any progress on those. And then we also do have...
C: We have the beginnings of support in the API for hooks around any type of resource. What we don't have is the ability to do anything at this time with anything other than pods. So when I was first designing hooks, I always had it in the back of my mind that it would be nice to support webhooks, so you could specify an arbitrary resource, possibly with a label selector. So you could say: all deployments with app=elasticsearch would call out to this webhook. What's missing is the implementation. So, yeah?
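None of this exists in Ark today, but a resource-scoped hook with a label selector might look something like the following types; the field names are made up for illustration:

```go
// Hypothetical resource-scoped hook types for the webhook idea above.
package hooks

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ResourceHook would fire for any resource of the given kind matching the
// selector, e.g. Kind "Deployment" with selector app=elasticsearch.
type ResourceHook struct {
	Kind     string                `json:"kind"`
	Selector *metav1.LabelSelector `json:"selector,omitempty"`
	// Webhook is called with each matched resource during backup.
	Webhook WebhookSpec `json:"webhook"`
}

// WebhookSpec describes the endpoint to call for each matched resource.
type WebhookSpec struct {
	URL     string          `json:"url"`
	Timeout metav1.Duration `json:"timeout,omitempty"`
}
```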
H: Thanks. I could see this going a few different ways. I remember when we were talking about the plugin architecture initially: one way that this could be done is through the label selector, like you mentioned, or even something kind of hacky, like having an annotation on the service itself (similar to the beta version of init containers) that would tell us what to exec, or what needed to be spun up as a pod, specifically to support that. Yeah.
G: I'll take that one. This one is largely about going through all of our commands and making sure they're consistent, so that our arguments are consistent. If we have any place where things don't match, double-checking all that and making sure all of it is in line. And then, where we've got feedback on particular things that are maybe confusing, maybe redoing those commands a little bit. But we will have to revisit those specifically. This is again like a...
H: Has there been any discussion around the structure of those commands? I know that Andy did some work to support both "ark <noun> <verb>", like "ark backup get", and also "ark get backup". I wonder: is there a trend that we're seeing for those commands, and one that's been chosen, or are we going to continue to support both?
G: Yeah, and since you've asked that: I think some of the newer commands might not have had the aliases set up, but that's something we'll need to do as part of this. I think we intend to support both, because we do have to hook them up, but it's also fairly minimal to support. Sweet, thanks.
A: I just wondered... yeah, I figured I'd get the next one. So, all right: basically, since 1.0 implies stability of the API, we really want to take a pass through our API surface and make sure that we feel good about what that is, since we're going to maintain it as stable for that major version. And there are a couple of components to what we consider to be the Ark API.
A: The first part is obviously the custom resource definitions that we define as part of Ark. These are the Backup CRD, the Restore CRD, and the BackupStorageLocation and VolumeSnapshotLocation CRDs. We really just want to go through each of these, field by field, and make sure that it makes sense: see if we want to alter any of the data types, or maybe alter how we represent some of the information. But the end goal is that we've reviewed it...
A: ...we've made any changes that we think make sense, and then we feel good about putting that out and calling it a stable 1.0 API. The other component of that is the organization of the Go code itself that makes up the Ark codebase. Right now we have a large number of packages that could potentially be imported into another application.
A: But the reality is that a lot of those are actually internal packages, and we don't necessarily intend for them to be imported by other projects. So we would like to take some of those packages and move them into an internal package, so that they really can only be used by the Ark codebase itself. And then we also want to make sure that anything that is imported by other projects has a kind of clean...
A: ...API interface, has clear entry points, and imports just the minimum set of things that are necessary. That's actually related to plugins: as a plugin author, you have to import some packages from Ark, and those packages probably have more dependencies than are strictly needed. So I think, as part of this and as part of the plugin work, we're going to try to winnow down the dependencies that plugin authors are required to import, so that it's fairly minimal and your binaries can be pretty small.
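Go's internal-package rule does this enforcement at compile time: anything under an internal/ directory can only be imported by code rooted at the same parent. The paths below are illustrative, not Ark's current layout:

```go
// Demonstrates the effect of moving packages under internal/.
package main

import (
	// Fine from any project: a deliberately public package.
	// "github.com/heptio/ark/pkg/plugin"

	// Compile error outside the Ark repo: internal packages are only
	// importable from code under github.com/heptio/ark.
	// "github.com/heptio/ark/internal/backupstore"
)

func main() {}
```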
C: Included in that is just going to be our plans to do some profiling of the Ark server, so that we can say, for example (and these are made-up numbers): if you have a 10-node cluster with 500 deployments and all the associated objects, Ark will use 500 MB of RAM, or whatever the numbers happen to be, so that we can try to provide some scaling guidance. We'll ship a default that's a good minimum for a small-sized cluster, but clearly, if you have a larger cluster, you'll need to adjust.
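One low-cost way to collect that kind of data is Go's built-in pprof endpoints. This is the stock net/http/pprof pattern; whether and how the Ark server would expose it is an assumption here:

```go
// Expose Go's pprof profiling endpoints from a long-running server.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
	// Heap profile: go tool pprof http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```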
D: I can speak to that too. Basically, when we produce a backup file, we don't have a way to verify its integrity. So we are planning to add checksum validation, and also the ability to sign the checksum files for additional verification. I suppose we're going to have to figure out which checksum algorithm to use to do that, but that's to be decided.
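A sketch of the checksum half, assuming SHA-256 ends up being the algorithm (it's explicitly still to be decided): hash the backup file when it's written, store the digest alongside it, and recompute on read to detect corruption.

```go
// Compute a SHA-256 digest of a backup file for integrity checking.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// checksumFile streams the file through SHA-256 and returns the hex digest.
func checksumFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, err := checksumFile("backup.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sha256:", sum) // would be stored alongside the backup object
}
```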
H: A question more on the multi-threaded backup and restore, and just the scale of this. I'm not really sure what size clusters you all are dealing with; ours are very small, but I could see, in a larger cluster, having multiple pods coordinating on backups also being useful. Has there been any discussion about not only multi-threading in a single pod, but also having coordinated backups across a fleet of, like, backup agents? We...
C: We have not talked about that. Are you thinking... so you mentioned multiple backups, and I think you were also talking about a single one. Would you imagine, as part of this, that a single backup gets sharded, so to speak, out to different backup agents that are each working on a portion?
C: Short answer is no. Right now, an Ark backup or an Ark restore is a zero-to-100% operation, and everything is contained within a single backup or restore invocation. So if we were to try to split things up, I think it would be difficult to have a single backup tarball that contains all of the Kubernetes resources.
A: Yep, yeah, so I'll take that one. One thing we've heard from users is that it can be a little bit tricky to know both what's in a backup once you've taken it, and also what's going to happen when you execute a restore, possibly with some resource or namespace filters. So we'd like to record some additional metadata when we take backups, and the first part of this that we're thinking about is kind of like a manifest, or an index file...
A: ...that may just contain a simple summary of all of the resources that are contained within that backup. Right now, if you want to see everything that's in a backup, you have to download the tarball from object storage, you have to extract it, and then you have to go through all of the nested directory structure and look at each of the resources to see what's there.
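For a sense of what the manifest would replace, here is the by-hand version: pull the tarball locally and walk its entries. The comment on the entry layout is approximate:

```go
// List every resource file inside a downloaded backup tarball.
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("backup.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	tr := tar.NewReader(gz)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Entries look roughly like resources/<resource>/<namespace>/<name>.json;
		// a manifest would summarize these without the download step.
		fmt.Println(hdr.Name)
	}
}
```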
A: And I'll just cover this one quickly: we're currently using a version of the Azure Go SDK that's a beta, a pre-GA version. It was the only one that was available when we initially added Azure support, and as we evolved Ark, they still only had a pre-GA version. But there is now a GA version of the Azure Go SDK, and so we just want to do an update to make sure we're relying on stable libraries from each of our major supported providers.
B: And, as usual, we will be having the normal Q&As and streams and stuff like that, so we expect to keep those of you who are listening up to date on exactly where we stand with this. Now, we did get some questions from the audience, and I want to get to them, but can we go through the 1.x and future features as quickly as we can here? So 1.x is everything after 1.0, but at that point we've made stability guarantees and things like that.
C: And just a little bit more color on that. That would be something like: you have a deployment which has at least a pod running, and your pod has a PVC which has a PV, and everything is glued together appropriately, and then you say, "I would like to roll back my database," and your database is on a PV and we have our snapshot of it. Essentially, that is creating a new volume from the snapshot and creating a new PV that points to this new volume, either...
C: I'll take the next one, since I just mentioned it. Yeah, so if you take a backup and it has a PV and a PV snapshot and there's no disaster (you haven't deleted anything or lost anything), maybe you want to take advantage of our namespace remapping functionality, where you've backed up namespace A and you want to restore it as namespace B. So it's kind of like copy-and-paste for Kubernetes.
C: We support that for everything except for PVs, and we haven't quite figured out yet the right UX and data model for doing that. So if we can get that into 1.0, that'll be great; it might not make it. I know we have some community members who are interested in this feature, so I would encourage you, if you're interested in it too, to click on the link to the issue and add some commentary if you've got some thinking around what the model might look like there. Yeah.
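For reference, the remapping surface that exists today is a namespace-to-namespace map on the restore spec. The field name below is from memory, so treat it as illustrative; the open question above is how PVs participate:

```go
// Sketch of the restore spec's namespace remapping field.
package main

import "fmt"

// RestoreSpec is trimmed down to the fields relevant here.
type RestoreSpec struct {
	BackupName       string            `json:"backupName"`
	NamespaceMapping map[string]string `json:"namespaceMapping,omitempty"`
}

func main() {
	// Restore the contents of namespace "a" into namespace "b".
	spec := RestoreSpec{
		BackupName:       "my-backup",
		NamespaceMapping: map[string]string{"a": "b"},
	}
	fmt.Printf("%+v\n", spec)
}
```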
B: And for those of you listening as well, I might as well take this time to throw this offer out there again: if any of these issues are important to you and you feel like helping us move the needle on them, we'll be more than happy to sit down with you and get you going in the right direction. I know there are a lot of features that we want, but not enough engineering time to do it all.
G: So "ark debug" came out of some discussions we had around figuring out what's going on in your cluster. The idea behind the debug command is to gather all the logs in one place and see what's going on. We've got some more details in the issue, but largely it's about visibility into your cluster when something is not going right. That's not something we're going to hold 1.0 up for, though.
C: I'll take that one. So right now we have a time-to-live that is set on every backup; it defaults to 30 days. You can change it if you want, but when that time-to-live expires, Ark will garbage-collect, or delete, the backup and any snapshots that are associated with it. And we've had some requests to be able to not have that happen.
C: So we would want to find a way to allow users to either reset or change the TTL, so they can extend the lifetime of a backup. Or, alternatively (or additionally), be able to mark or pin a backup somehow, so that, even though it has a TTL and an expiration date and time, you could have a way to say: ignore it, I want to save this one, ignore the TTL. So either one of those, or both of those, are things that we're looking at doing. Okay, back...
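A sketch of the pinning variant, using a made-up annotation key: garbage collection would skip any backup carrying the pin, even after its TTL expires.

```go
// Hypothetical pin check in the garbage-collection path.
package gc

import "time"

const pinAnnotation = "ark.heptio.com/pin" // hypothetical annotation key

// Backup is trimmed to the fields the GC decision needs.
type Backup struct {
	Annotations map[string]string
	Expiration  time.Time
}

// ShouldDelete applies the proposed rule: expired AND not pinned.
func ShouldDelete(b Backup, now time.Time) bool {
	if b.Annotations[pinAnnotation] == "true" {
		return false // pinned backups outlive their TTL
	}
	return now.After(b.Expiration)
}
```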
D: That's another very straightforward but somewhat tricky issue: a way for people to see the progress of their backup. Right now we can issue a command and see what phase the backup is in (in progress, failed, or completed), but ideally we'll have a way to estimate how long a backup will take, so you know if you can wait around or if you need to go get coffee, and where in the timeline you are at any given moment.
A: So currently, it's possible with Ark to share backups between clusters. If you take a backup in cluster A, you can install Ark in cluster B, probably put it in restore-only mode, and set its configuration to point to the same bucket that cluster A was backing up to, and in that way you can get those backups imported into cluster B.
A: But this way of going about it is a little bit clunky, and there are potential issues if you have two clusters that are not in restore-only mode both pointed at the same object storage bucket. So we really want to focus on this use case of how backups get shared between clusters, and also how we enforce which Ark instance owns a particular backup storage location.
B: Okay, and that pretty much handles the features. I just realized that this is only listing the features; we're not even listing bug reports that we'll get throughout the series, so that already feels like a pretty hefty milestone there. I do want to get to Michael's question about metrics, so can we just touch on the future features, since these will be pretty far out? We have replication, encryption at rest, support for non-PVC volume snapshot and restore, Git and other storage backends, and multi-tenancy.
A: I think it's worth noting that we have them listed as future features because they probably will require API changes, and so we think they'll probably have to ship in a 2.0 in order to be fully implemented. It doesn't mean they're not a priority; it just means that we probably can't squeeze them into a 1.x.
B: Right. And we did get a question from Michael; thanks for writing in, Michael. He says, regarding the 1.0 meeting, he won't be able to join, but (thanks for listening) is there any place to extend the metrics for Ark? He didn't see anything in the Google Doc, so he filed some issues: issues 1059 and 1077. Can we talk about these real quick? I know we've had some discussions on the Ark list about metrics and exactly what metrics we're trying to get. So can we just address this really quick? Yes.
E: Those two issues were created after we went through this spreadsheet, so they just missed making the lists. I think issue 1077, which is adding some new gauge metrics (and this is something I replied to on the mailing list), is something we can get into the 1.0 release. It's a really good first issue, so if you are someone who's looking to start contributing to Ark core, or you're trying to learn Go, it could be a good way to get involved. We'd love to have a contribution on this one.
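For anyone picking up issue 1077, registering a gauge with the Prometheus Go client looks like this. The metric name here is invented; the real names would come out of the issue discussion:

```go
// Register a Prometheus gauge with the Go client library.
package metrics

import "github.com/prometheus/client_golang/prometheus"

var backupsInProgress = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "ark",
	Name:      "backups_in_progress", // hypothetical metric name
	Help:      "Number of backups currently being processed.",
})

func init() {
	// MustRegister panics on duplicate registration, which surfaces
	// wiring mistakes early.
	prometheus.MustRegister(backupsInProgress)
}
```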
E: ...backups, and keeping track of both of those things, may require more internal changes, and so I think it's probably safer to assume that's something we would ship post-1.0. But it probably would not require breaking changes, so it wouldn't need to slip to a 2.0. And again, if you are a developer interested in getting involved, that's another issue that you could look at and jump in on. And we do have a friendly label that we apply to GitHub issues: a "good first issue" label.
A: I was just going to say: we're certainly open to input on this, but we are trying to lock this down pretty quickly. So if anyone out there has strong opinions on what we've staged in the 1.0 versus 1.x versus future buckets, we would love to hear it, so reach out to us in any way that you want. But we are trying to lock this down pretty soon and start that march to 1.0.
C: Also, we do have a link in our README on GitHub to our ZenHub tracking board. It's not 100% up to date for 0.11 and 0.12 and whatnot; over the next few days and weeks we will be prioritizing. So if you're interested in seeing more than just what's slated per milestone, that is, the relative priorities within a milestone, I'd encourage you to check out that board.
H: This is also one place that I was thinking we might be able to shoehorn in a post-pod or post-backup plugin as well. We ended up writing something ourselves to do the replication, but the first thing that we had looked for was a post-pod-backup plugin hook; with that, we would have been able to do the replication as well, because we could see the PVC snapshot.
C: I would recommend either starting a discussion on our Google group, or finding an existing issue or creating a new issue around this, so that we can have the discussion. I don't want to lose this, and I think it's a good discussion to have, so let's continue it and see where it goes. Yeah.
B: So what I've done is, in the HackMD document, I've put Ryan's comments and then Justin's here. I will put that URL in the Slack channel; it should already be there. If we want to riff on that and then put it into the notes, just make sure that I wrote down what I think you said, and then we can at least start from there. There's a URL in the Slack channel, but that's good to know. All right, anything else before we wrap up, you guys?
B: All right, thanks so much, Ryan and Justin, for contributing, and we will see everybody... we're not having any live streams in December, so we'll see everyone at KubeCon. If you're there, feel free to come talk to us, and keep your eye on the list; we will schedule the usual bits for January and February ahead of time. So with that, thank you very much and have a good day, everybody.