From YouTube: Ark Office Hours June 2018
Description
Feel free to bring your questions to our development team as they answer questions live from the audience. All skill levels are welcome. We will also cover new features in development.
A: All right, we are live with the Heptio Ark office hours. Those of you listening on the stream, all 11 of you currently, welcome. We're going to do quick introductions and then we're going to show you how the livestream is going to work. I'm Jorge Castro, I'm going to be your host, and I'm the community manager at Heptio.
A: The way this is going to work: we've been talking about how we can help users in a more high-bandwidth situation. Sometimes it's easier to explain technical problems when you can just talk about them, so we thought we would have a regular series where we can take user questions and answer them, talk best practices, go over the roadmap, show you cool demos, and all that sort of thing. So after doing the first office hours, this is kind of the second one; we're shaking out the kinks.
A: So the way it's going to work is that we're currently live streaming on YouTube, and we'll be taking questions from Slack, on the Kubernetes Slack, which is slack.kubernetes.io. If you go there, we have a #ark channel, or you can just ping us directly on Twitter; I'm watching the Twitter accounts now. Feel free to just ask your questions in the channel, and we'll answer questions as they get addressed to us, for those of you joining us on YouTube and in the archive.
A: We are going to schedule these in advance and things like that, so you can click the subscribe and share buttons and make sure that you always get your notification when we go live. Right now we're thinking of maybe doing these every other week or so; we'll see how that goes. And with that, we have notes: there's a Google Doc that we tossed into the Slack channel so that people can swap URLs.
A: If we have things that we discussed, we can just toss them in there and put them in the description, so those of you listening on your device after the fact can always go look stuff up. And with that, we are ready to go, so feel free to start asking your questions in the stream. In the meantime, Andy, Steve, or Nolan: do we have anything cool to talk about or discuss, or any announcements? I saw we did a point release the other day.
C: Yes, so yesterday we pushed out container images and binaries for our first alpha of the 0.9.0 release. We've been working on this for the past couple of months, and we have a whole bunch of bug fixes and smaller features, but the big feature that we've been working on for this release is integrating restic into Ark. For those of you who aren't familiar with restic, it's an open source tool for doing generic file-level backups, and so we planned to use it to back up volumes that Ark doesn't have native snapshot support for.
C: And so with this latest release, we have integrated restic to do just that. We have the first alpha out, and we definitely want folks to take it for a spin and give us any feedback on it. I am going to do a demo in a minute or two, but I definitely want to pause and see if anyone has any other questions or comments on that.
C: Okay, all right. So I have an up-and-running Kubernetes cluster with the 0.9.0 alpha in it, and I'm going to walk through the core use case of using restic with Ark. The first thing I'm going to show you is that I have a namespace deployed here, which just contains a sample workload that I'll use to demonstrate the restic capabilities, and on the bottom I'm just tailing the Ark server log so that we can see what's going on a little bit as I walk through these steps.
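(The commands being typed here aren't captured in the transcript; a rough sketch of this setup, assuming a default install where the Ark server runs as a deployment named ark in the heptio-ark namespace:)

    # show the sample workload namespace and tail the Ark server log
    kubectl get namespace workload
    kubectl -n heptio-ark logs deployment/ark -f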
C: If I take a look at this deployment, it's not doing much; it's basically just running in a sleep loop. But the important part here is that I have two volumes mounted into it, volume-1 and volume-2, and they're both emptyDir volumes, so obviously things that we wouldn't have snapshot support for with Ark. So the first thing I'm going to do is just exec into that pod and show you what's in here. So here is volume-1 and volume-2.
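(A sketch of that step; the pod name and mount paths aren't shown in the transcript, so they're illustrative:)

    # exec into the demo pod and look at the two mounted emptyDir volumes
    kubectl -n workload exec -it demo-pod -- ls /volume-1 /volume-2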
C: If I look at the deployment itself, what you see here is that in the pod template I have an annotation on this pod, and this is basically what's going to tell Ark that I'd like it to take restic backups of these two volumes. You can see I just have a list of the volume names here: volume-1, volume-2. Now, in this case they're emptyDir volumes, but they could be any other type of volume that Kubernetes supports.
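(For reference, the annotation being described looks roughly like the following on the deployment's pod template; the annotation key is the one used by Ark's restic integration at this time, and the deployment name is illustrative:)

    # tell Ark to use restic for these two volumes
    kubectl -n workload patch deployment demo-deployment --patch '
    spec:
      template:
        metadata:
          annotations:
            backup.ark.heptio.com/backup-volumes: volume-1,volume-2'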
C: So now I'm going to execute a backup; I'll call it demo-workload, and I want to include the workload namespace, and that should be it. So now the backup is running, and you can see in the log on the bottom here that the backup is running. This will just take a few seconds, so I'll pause here while it's running and see if there are any questions from anyone.
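(The exact command isn't visible in the transcript, but based on the description it would be something like this:)

    # back up just the workload namespace, then check the backup's status
    ark backup create demo-workload --include-namespaces workload
    ark backup get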
A: I have a question as well; I don't know if you're going to get to this, but I use restic manually for a bunch of other stuff, and when you list your snapshots you have to go back and clean them up in your bucket and whatnot. Do we provide any kind of lifecycle management of these snapshots, or is that expected to be handled externally?
C: No, we do handle that. Prior to restic, Ark supported the concept of garbage collection for backups, so if you have an Ark backup that has, say, EBS snapshots attached to it, when that backup expires and is garbage collected, we actually go out to AWS and delete those snapshots. We've done exactly the same thing for restic, so the restic backups that are associated with an Ark backup are also removed from the repository when that Ark backup gets garbage collected.
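(Expiration is driven by the TTL on each Ark backup; a hedged example of setting it explicitly, where the 24h value is just an illustration:)

    # keep this backup, and the snapshots / restic data attached to it, for 24 hours before GC removes it
    ark backup create demo-workload --include-namespaces workload --ttl 24h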
C: For the most part you can control all of the restic integration directly through the Ark CLI and through Ark; you don't really need to worry about the restic back end, although you certainly can always access it through the restic CLI if you want to. Okay, so the backup has completed now. Just taking a look at that, I only have one backup in here, and it's completed.
C: So this is just a command to list all the snapshots in a restic repository. What you'll see here is that this repository is actually hosted in S3; I'm running a cluster in AWS, so I have the restic back end in S3, just in a bucket that I've created. And you'll see that at the end here I have a /workload, which means that I have a repository for the specific namespace that I'm working with. We've decided to separate restic repositories so that you have one per namespace.
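(The restic command being run isn't captured verbatim; it would look roughly like this, with the bucket name as a placeholder. restic will prompt for the repository password, which Ark keeps in a secret in its own namespace:)

    # list the snapshots in the per-namespace repository for the workload namespace
    restic -r s3:s3.amazonaws.com/<your-ark-bucket>/workload snapshots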
C: This one here is for volume-1, and this one here is for volume-2, and each of the restic snapshots (snapshots and backups are roughly synonymous in the restic world) has a number of tags associated with it, so that we can easily identify them and tie them back to Ark backups and to the workloads that they came from. So what I'm going to do now is actually just go ahead and delete the deployment and the pod from that namespace.
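(A sketch of that deletion step; the deployment name is illustrative:)

    # remove the workload so there is something to restore
    kubectl -n workload delete deployment demo-deployment
    kubectl -n workload get pods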
B: I can take this one. This is essentially what we're calling a data-only backup and restore, and it's not something that we've extensively tested, but we do have an open GitHub issue that's a feature request for doing data-only restores. So you could maybe try doing what you listed, scaling the stateful set down and then doing the restore, but I'm not sure we're confident that it would necessarily work. I think this is definitely something that we are interested in adding support for.
C: So we got the deployment, the replica set, and the pod deleted from that namespace, so now I'm going to go ahead and execute a restore. A restore looks just like it always has, so we're going to say ark restore create, from the backup that I called demo-workload.
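(As typed, that would be roughly the following:)

    # restore everything captured in the demo-workload backup, then check its status
    ark restore create --from-backup demo-workload
    ark restore get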
C: Yeah, absolutely. We thought about a few different ways for restic to actually be able to access the data that it needs in order to back it up, but the approach that we've gone with for now is that we created a new daemon set that runs within the Ark namespace. We have one of these pods running on each node in the cluster, and those daemon set pods are what actually run restic against the pod volumes on that node.
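(You can see those pods with something like the following, assuming the default heptio-ark install namespace:)

    # the restic daemon set runs one pod per node
    kubectl -n heptio-ark get daemonset
    kubectl -n heptio-ark get pods -o wide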
B: Basically, we have one feature request that we are quite interested in doing, which is essentially cloning a namespace, along with the persistent volumes that are in use by that namespace, into the same cluster. Persistent volumes are cluster-scoped resources, which means they don't belong to any one namespace, and if you take a snapshot of a persistent volume and then want to restore it into the same cluster without deleting anything, the restored volume can't reuse the original name.
B: So yes, it's intentional that we have unique names, and ideally you shouldn't have to worry about the names. We make every effort to put tags or labels or whatever is appropriate, either at the Kubernetes level or at the cloud provider level, so that you can tie a given resource name, which may just be a bunch of random letters and numbers, back to the backup or the restore or the original.
B: At the Kubernetes level, if you're using dynamic provisioning, the names of the persistent volumes are basically dynamically generated anyway, so I think it's entirely appropriate for us to come up with our own names, whether it's restore-whatever or just a random UUID. So yeah, I think if you go by the tags and the labels on the Kubernetes resources themselves, then hopefully how we name the disks won't be an issue for you. If it is an issue, please let us know and we'll see what we can do to help you out.
C: I'll take this one. The answer right now is: it depends. If you're using volume types that are not specific to a cloud provider, so, to take the example I used, if you just have some emptyDir volumes that you want to migrate across cloud providers, or maybe you're using some other volume type like that, then the answer is yes, there's nothing that would prevent that from working across cloud providers.
C: For cloud-provider-specific volumes, though, although the data could in theory be restored, we don't have the logic to convert from one type of PV to another type of PV. That's definitely something that we have on the roadmap and that we're interested in doing more work on in the future, and certainly the restic work potentially provides a foundation for that. So stay tuned, for sure.
C: As for controlling that right now, yeah, it's sort of all-or-none right now. I've certainly run with restic backing up EBS volumes, but the way that I've done that is that I've turned off snapshot support: I basically deleted my persistent volume provider from the config. So you can do that, but it's kind of an all-or-nothing thing.
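(A minimal sketch of what deleting the persistent volume provider from the config means, assuming Ark's Config resource of this era and a default install; treat the exact resource and field names as assumptions to verify against the docs for your version:)

    # edit Ark's config and remove the persistentVolumeProvider section;
    # with no provider configured, Ark skips native volume snapshots entirely
    kubectl -n heptio-ark edit configs.ark.heptio.com default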
B: I know we have an open issue for exploring how to opt in or out of persistent volume snapshots. I think the user who filed that maybe had lots and lots of persistent volumes, but most of them were throwaway, and the ones where the data really mattered, that they wanted backed up, were a small subset of that. So maybe we can loop that into that feature request and take it into consideration, both for controlling which volumes are in and out and for which ones are restic versus native.
C: Yeah, and as we saw in the demo, we've gone with using annotations, at least for now, to control which volumes get restic backups, so we might want to just extend that to control what gets snapshotted versus what gets backed up with restic.
B: So while it's not necessarily an integration with a third-party tool such as restic, it is a big chunk of work to be able to specify that you want to replicate your data, both the backup data and the persistent volume snapshots or restic snapshots, from wherever they're originally backed up to additional locations. The classic example there is if you've got all of your backups in a single data center and you lose connectivity to that data center, or maybe it's a region, like AWS us-east-1 or Google us-central1-a or whatever.
C: Yeah, so I wouldn't say we have detailed performance numbers, but from experience a gigabyte backup will take on the order of seconds to maybe a minute or two. One nice thing about restic is that it actually does differential backups, so your first backup is obviously going to have to go through all the data and back that up, and while it's doing that it does some deduplication of the data so that it uses as little space as possible in your back end.
C: But when it goes to do the second backup, it's essentially only going to need to back up whatever data has changed since the previous backup, so typically, for data that's not turning over all the time between backups, you're going to see subsequent backups be much faster. Restore performance is definitely a little bit slower than backup, so restoring a gigabyte might take on the order of five to ten minutes, ballpark.
C: But there is a PR in flight right now to make some pretty significant improvements in the performance of that. I've been playing with it and testing it out, and it seems like it's made on the order of a 10x improvement for my test cases, so we're definitely eagerly awaiting that feature getting merged into restic and hoping to help out with some testing there. So definitely stay tuned.
B: Right now, no. The server component of Ark runs as cluster admin, unless you've configured it to have lesser privileges, and we need cluster admin in order to be able to restore anything and everything in the cluster. We have two issues that I think Nolan has linked in the document, one on backup templates, which we may or may not do.
B: We need to think through the security issues around that. Like I said, Ark runs as cluster admin, so if we were to change Ark so that it looked at any and all namespaces for backups and restores, then any user could create a backup and ask for it to back up the entire cluster, or to go back up some other namespace's secrets that maybe they don't have permission to access. We are very security conscious and want to make sure that we don't have those escalations.
B: There is also an issue that I've linked in the document, number 18 on our GitHub repository, for multi-tenancy support. We created it a long time ago when we first started working on Ark, but it was not an initial focus for us. So if there are people in the community who really need multi-tenancy support, where you want individual users in different namespaces to be able to request backups without privilege escalations, we would definitely be interested to hear from you, especially if you have any needs that are more specific than just 'we want multi-tenancy.'
A: No one's really asked us questions on Twitter yet, but we are monitoring that, and we definitely have questions in Slack, as well as a link to this Google document. We'll be publishing all the notes here from the session, as well as the video, probably, I don't know, half an hour after we go live. So with that, it looks like more questions are coming in; while we get to those, is there anything else that you want to bring up?
B: So, with the restic integration, as Steve mentioned, we released an alpha of 0.9 yesterday. We have some documentation in our GitHub repository that describes how to test it, and we need your help: this was, as mentioned earlier, a pretty meaty integration, so there may be bugs. If you have time, please take it out for a spin, try some backups, try some restores, try to do funky things that break it, and please let us know what your experiences are. We definitely are not ready to call this feature complete, which is why we released an alpha.
C: I was just going to add one thing, which is that for those of you who aren't ready to use the restic integration initially, it's certainly opt-in; it's not required to be used, and if you don't choose to use it, it won't affect any of the existing use cases. So it's definitely opt-in, but hopefully it adds value for a lot of you out there.
D: Yeah, so I can take that. Pre- and post-backup hooks exist that let you run commands inside of a container to do some custom logic. That would be per pod, and that'd be one way to do this kind of stuff today. We also have a PR open for gathering metrics that you could send somewhere to alert off of failed backups, for example if the completed backups don't match the attempted backups, and there's stuff that's more specific than that.
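(For reference, those per-pod hooks are expressed as pod annotations; a hedged sketch follows. The annotation keys are from memory of Ark's backup-hook docs, and the pod, container, and command names are purely illustrative, so check them against the docs for the version you're running:)

    # run a command in the pod before Ark backs it up
    kubectl -n workload annotate pod demo-pod \
      pre.hook.backup.ark.heptio.com/container=demo-container \
      pre.hook.backup.ark.heptio.com/command='["/bin/sh", "-c", "sync-my-data.sh"]'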
B: So we support doing hooks at the pod level. The question is: can I do it for the whole backup, for the entire backup? Not currently. We've had this question before, and we may add support for pre- and post-backup hooks at the backup level, but if you're comfortable working in the Kubernetes client ecosystem, you can write code that will watch for backups to change status, just like you would watch for pods to change status or whatever, and so it could be done as an external integration.
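(A minimal sketch of that kind of external watch, without writing a full controller; the resource and namespace names assume a default Ark install:)

    # watch Backup resources for phase transitions, e.g. InProgress to Completed or Failed
    kubectl -n heptio-ark get backups.ark.heptio.com -w \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase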
B: You could write a separate piece of code to watch for backups. If you feel strongly that that's something you need, we can definitely take it into consideration; I believe we have an open issue for it, and I'll go hunt it down. And then, alternatively, Nolan, I think you were mentioning metrics as well.
B: We also, as I mentioned, want to be able to clone namespaces along with their PVs. We don't have a firm release date for that, or which release it will be in, but that'll be another big feature that I know people are looking forward to. Also metrics, of course; we are lucky enough to have a community member working on that pull request, and we love having community members come and help us out. So, I see a question.
B: Any idea when metrics might make its way in? We most likely will get the initial implementation in with 0.9, and if it doesn't make 0.9, it'll be right after, on the master branch. That will not be all the metrics; it will start with just a few and then continue to add more over the next release or so. So I would expect either 0.9 or right after.
A: The metrics one, hold on, the notes are just everyone copying and pasting. So, jrnt30's question: this is a more community-based question. I enjoy these conversations and I'm curious if there's any interest in more pairing-related things to get people actively contributing to the project via code; part of the hook for me with Ark is just the friendly faces, and where to invest my time is really driven by some of that collaborative effort and as a way to network. I think this ties in with what we were discussing privately before the meeting.
B: I think, similar to the Kubernetes SIG meetings that are either weekly or bi-weekly or whatever frequency, we will probably explore doing something like that for Ark. So I would hope that we'll be able to get our act together and put together a schedule that is friendly for as many time zones as possible. I know time zones are hard, so we may have to alternate sometimes to work a little bit better for Europe, I think, unfortunately, given that the core Ark team is in the U.S.
B: We certainly can alternate between morning and afternoon East Coast times, for example, which would line up with users in Europe at least. But we definitely would be interested in having more community engagement helping us direct what features and functionality we add, so I think having open design meetings would be a pretty cool thing to do.
A: So I think what I'll do after this is probably get feedback on how this time slot worked for everybody, and then maybe look at every other week or something like that. Maybe we can alternate: one time European-friendly, one time U.S.-friendly. We can certainly be flexible there, and maybe every once in a while try to do an Asia-friendly one. We can definitely get feedback and metrics from users.
A: Okay, well, thanks everyone for joining us. What we'll do here is I will post the notes out. Thanks to those of you that joined us on YouTube; we had a peak of 18 people at the same time, so thank you very much for joining the livestream. We knew these could be fun. And thanks to all of you who are subscribed to the YouTube channel; like we said, please click the subscribe button on the channel. Usually we start the stream about 10 minutes before.
A: That should give you a little bit of warning, and then once we get the schedule down, we're going to start to do a better job as far as regularly tweeting ahead of time and things like that, to help you plan. I already got some feedback that a calendar entry for people to join in might be more useful, so we're definitely looking at making this easier for you. So with that...