A: Alright, so there are three key things I want to go over today, and then we can figure out if there are any questions and dive a little deeper into stuff. We recently created more Dockerfiles for our existing branches, so I want to describe why we did that and provide a little demonstration of it.
A: There was an issue with DSpace 5 where I needed to test the RDF process, and I had never actually done that before. But now, with Docker, I've learned a way to get the RDF process at least running and can validate that it's able to start. And then there's this service, Codenvy, that I've played with. I have mixed feelings about recommending it to other folks, but a cool thing about it is that it's a demonstration of running DSpace Docker images in the cloud and creating kind of an internet-accessible Docker instance.
A: So it's at least neat to show what's possible, whether or not that's a great solution for other folks to try. Those are the three main areas I was going to go over, and then, if any other questions come up, I'm glad to go over those in more detail. Let's maybe do just a quick introduction: I'm Terry Brady, a software developer at Georgetown University Library. Tim, if you want to go.
A: The history of this is that the DSpace 5.9 release had some issues with JDK 7, and so Docker kind of became a useful way to test and verify that we had a fix. In the process of doing that, we set it up to explicitly name the JRE or JDK version in each Dockerfile. Currently only DSpace 5 has multiple JDK versions created, but what we've done is set up a model that we could follow if, let's say, we start finding we want to test specific things with JDK 9 or JDK 10.
A: We can follow this model with whichever DSpace versions we think make sense. We also have two flavors of each Dockerfile that we produced; the second flavor has "test" on the end, and there are two main overrides that exist in those Dockerfiles. In general, we haven't certified any of these Dockerfiles for production use, so if someone goes down that path, they kind of need to figure out their comfort level and what additional changes they would need to make these secure and reliable. But these test versions are definitely not meant for production use: in these versions we expose the Solr service directly. That's really meant for people who need to understand what's going on in the Solr service, to make it easy to access.
A: Also, in DSpace 5, 6, and 7 (not in DSpace 4), the legacy REST API is forced to run under HTTPS, and if you're running in a local Docker instance, that can be a little bit of a confusing thing to get around. So we've also overridden that configuration just to make it easier to access and test the API.
A: All right, so we've got this repo in DSpace-Labs called DSpace-Docker-Images, and in it we've got some useful docker-compose configurations. For each of these I'll show you the documentation; actually, let me pop over and show you the documentation real quick. So here's our landing page for the repository. You can view this page directly from GitHub, and we also worked to make the page content a bit easier to read.
A: To show you what the compose file looks like: here in this docker-compose file, by default we are going to use dspace-postgres-pgcrypto for our database. Another developer contributed a specialized version of the database for working with the DSpace 4.x branch, to make it easier when you're testing that branch. And then, for the DSpace image itself, we're going to use one of these newly named images that I went over: we're going to take the dspace-6_x-jdk8 version and run it with test.
A: ...without the user needing to take any additional action. What I thought could be interesting about making this available to folks would be, for repository administrators who don't usually get to see the Solr console, this would be an interesting way to make that accessible to them, just to understand what's in the repository. And then here's the REST API, which normally you're forced to access over HTTPS, but we've got that overridden here.
A: What I'm particularly happy about with these recent image changes is the ability, without having to go through a lot of extra steps, just by naming that you want to use that test image, to develop and test much more easily with all the different services. So here now you'll see we're running DSpace 5, the 5.11 branch, with Java 1.7.
A: This particular tab here, the build settings tab, is something that Tim and I can update, and if we have other folks who are engaged and interested in the image development process, we could expose this to other people. Here we've got the name of the branch that we're triggering our build from, the name of the Dockerfile that we want to look at within that branch, and then the image name that we want to assign.
A: So it's similar content to what you saw before, just a bunch of pictures of my dog in here. What I want to do is show you that we can log in here: by default I've got some scripts that set the username to test@test.edu and the password to admin. Then, when you're logged in, in addition to running that dspace version command, you can come into the control panel and just verify that the version you expected is up and running.
A: So it's a really nice way, particularly if you're applying a change across multiple branches of DSpace, to be able to pop in and quickly assemble and test. And usually the web app starts faster than what I was showing you all. So those are the key things about the new image names, and those are documented here on this particular wiki page, which I linked into the meeting notes.
C: And a quick question, Terry: what image are your DSpace images based on?
A: I found these by just searching Docker Hub, and they looked like the most official versions of Tomcat. So let's go to Docker Hub and do a search for Tomcat; here's the official Tomcat, and then it lists all the potential Tomcat versions you could build off of.
A: What we decided is: because we say that Tomcat 8 is our supported version for DSpace 6 and DSpace 7, we built those off of Tomcat 8; and then, for DSpace 5 and 4, since Tomcat 7 is our official version, we built off of Tomcat 7.
A: We essentially copy everything from the codebase to an app directory, and, kind of like .gitignore, there's a .dockerignore file, so there are a few things we excluded, but for the most part we copy everything to the app directory. Then we have a customized local.cfg file that lives in this path, and we copy that to local.cfg; that file actually has an awareness of the directory structure that we're going to use inside the Docker image.
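Putting those pieces together, the top of such a Dockerfile would look roughly like this; the base tag, paths, and config file location are assumptions for illustration, not copied from the repository:

```dockerfile
# Build on the official Tomcat 8 image, since Tomcat 8 is the
# supported container for DSpace 6 and 7.
FROM tomcat:8-jre8

# Copy the checked-out codebase into the image; a .dockerignore
# file (like .gitignore) keeps unneeded files out of the build context.
COPY . /app

# Drop in a local.cfg that knows the directory layout inside the image
# (DSpace 5 and 4 Dockerfiles would copy a build.properties instead).
COPY config/local.cfg /app/dspace/config/local.cfg
```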
A: And if you go down to the DSpace 5 or DSpace 4 Dockerfile, you'll see that rather than copying a local.cfg, it's copying a build.properties file. In the case of this test image, we're overriding the web.xml for solr and rest to do those two overrides that I mentioned, and if you look at the non-test version of this same Dockerfile, you'll see that these lines aren't present. We then run the Maven build.
A: Because we're basing this on the Tomcat 8 JRE image, we don't have any exec step in here; by default it will launch Tomcat. What we do is, for each Dockerfile, we have an awareness of the web apps that exist in that version of DSpace, and we create symlinks for the appropriate web applications in each of the Dockerfiles.
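The symlink step amounts to something like the sketch below. The webapp names and directory layout are illustrative assumptions (the real image would use paths like /dspace and /usr/local/tomcat/webapps); temporary directories stand in for them here so the sketch can run anywhere:

```shell
# Stand-ins for the real locations inside the image.
DSPACE_INSTALL=$(mktemp -d)
TOMCAT_WEBAPPS=$(mktemp -d)

# Pretend the build produced these webapps for this DSpace version.
for app in xmlui jspui rest solr oai; do
  mkdir -p "$DSPACE_INSTALL/webapps/$app"
done

# The Dockerfile step: expose each built webapp through Tomcat
# by symlinking it into Tomcat's webapps directory.
for app in xmlui jspui rest solr oai; do
  ln -s "$DSPACE_INSTALL/webapps/$app" "$TOMCAT_WEBAPPS/$app"
done

ls "$TOMCAT_WEBAPPS"
```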
A: So what we're going to do is go to the RDF compose directory and start up DSpace, and instead of just two services running, we'll end up with three services running. Then I've got some follow-on steps, because I still don't know this process very well. But what I'm going to do here is go back to my window and stop what I've been running: I'm going to take our DSpace 5 version and stop it.
A: Now there will be one fewer override that I need to make if I do this with DSpace 6. So I am going to change my DSpace version now to dspace-6_x-jdk8-test, I'm going to cd to the RDF compose directory, and I'm going to start things using my d6 volumes. So we'll have some content in there; we won't need to worry about actually ingesting content into the repository.
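In command form, that sequence is roughly the following sketch; the variable name, directory name, and compose project flag are assumptions based on how the scripts are described here, not copied from the repository:

```shell
export DSPACE_VER=dspace-6_x-jdk8-test
cd dspace-compose-rdf        # illustrative directory name
docker-compose -p d6 up -d   # -p d6 reuses the existing d6-prefixed volumes
```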
A: So you'll see now the database has started, Fuseki has started, and DSpace has started, so we'll have to go through the same waiting process for all the services to initialize. If I do a docker ps, you'll see that those three services are running, and Fuseki runs on port 3030. So let's confirm that things are up and running, and then we'll talk to Fuseki.
A: Okay, so this is already up and running; we'll give this guy a minute to get up and launched, so I'm going to pop back to our instructions for running the RDF service. In Apache Fuseki we need to create a dataset named "dspace", so I'm going to click on "add a data set", and our dataset name will be dspace.
A: Next, we need to run the rdfizer command at the DSpace terminal prompt, so I'm going to run the dspace rdfizer command. I'm going to take this command and, again, because I'm running from Windows and I need terminal output, I'm prefixing it with this winpty command. I pop back into my window, I'm going to paste this, and I didn't set my project variable, but essentially our project is d6 here.
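For reference, the invocation has roughly this shape. The container name follows docker-compose's project_service_1 naming convention, and the rdfizer flag is taken from the DSpace RDF documentation; treat both as assumptions rather than the exact command used in the demo:

```shell
# winpty is only needed when running from Git Bash / MinTTY on Windows.
winpty docker exec -it d6_dspace_1 /dspace/bin/dspace rdfizer --convert-all
```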
A: This is just one of those nice things about working with Docker: I don't really know much about Apache Fuseki, and I don't necessarily want to become an expert in it. I just want it up and running so I can explore this functionality in DSpace. We currently have about 24 items and a handful of collections in the repository, and you'll see, because we have a relatively small number, that our handles essentially start with 1 and go up to about 24 or 25.
A: I think the last time we met and did one of these show-and-tells, I kind of showed how we've got a docker-compose file that will let you start up the new DSpace 7 Angular image. It's been a while since I've run this, but you can start the Angular image with a REST API that you're running locally; there are instructions for starting just the Angular component, and you supply the URL to talk to an external REST service.
A: So just a couple of nice variations there. Atmire had done some work; I'm not sure how supported this is, but Tom Desair, I know, had done some work to get DSpace working with Oracle. So I've got a starting point here if someone's interested in trying that out as well. If we have a lot of Oracle usage, it would be nice to actually make it sort of a project-supported image, so anyone could pop in and test, you know, verify changes against Oracle. All right.
A: So this is item 24. Let's open up item 24, and you can see the title matches, so that's just a verification that the RDF service ran. All right, so the third thing I wanted to cover: this is actually what got me excited about Docker to begin with, although I have some frustrations with the service as a whole, but I am running here Codenvy. Codenvy is built on Eclipse Che, which is a browser-based version of Eclipse.
A: Codenvy is a paid service built on Eclipse Che. What it is giving you is a browser-based IDE, and an okay, quite nice one. So if you're working on a Chromebook or something, you can actually come in and edit code. You can configure it to have awareness of your git credentials, so you can pull code down and save code back to a repository. Were we to open up a code file... here I just have the DSpace-Docker-Images project loaded.
A: One of the things I wanted to do: when I had experimented with this service in the past, I had tried to add the Tomcat components into the dev machine environment, because that was an example that I had found. What I found, though, is that I wanted to run a really clean instance using our published DSpace images, and I didn't want to pollute that with any of the IDE components. So here now I have a dspace-db image, and I've got some terminal output that's appearing.
A: There's an SSH key that it will provide to you if you want to do a direct connection into this container, but it's also nice that you can use this SSH key to talk between components. You also have the ability to open up a terminal to that container, so I could do a psql -U dspace, and now I've got access to the database here. Similarly, I've got a terminal into the Tomcat window, and this is where I had just run a bunch of AIP ingests to put some content into the repository this morning.
A: With this Codenvy service, they offer a free tier where you get three gigabytes of memory. I have found that it's hard to run DSpace with less than three gigabytes, so in order for this demo to perform well, I'm running with an instance where I've got a three-gigabyte development machine: Postgres runs pretty efficiently, so I've got half a gigabyte for it, and then 2.5 gigabytes for our Tomcat. Here you'll see I'm using the dspace-5_x-jdk7-test image that we were playing with a minute ago, and the dspace-postgres-pgcrypto image. There's something that looks like docker-compose, but it's not quite docker-compose, where you configure what they call a stack, and then your stack runs in what they call a workspace.
A: These are the ports that are in operation here on our dev machine instance, and these are the ports that are in effect. The key thing is that when the DSpace container started, it created an internet-accessible path that will talk to port 8080 within this container. So I can click on this link, and we get to the Tomcat landing page.
A: And now I'm able to access this instance and share the URL to this running instance with other people, so it's a nice way to have a test environment that's also easy to share. The way the Codenvy pricing works is: if you're an unpaid customer, your containers will only stay up for fifteen minutes of inactivity; if you're a paid customer, they'll stay up for four hours of inactivity. But you pay a flat rate.
A: So, unlike AWS or Google Cloud, where you're paying for the services that you use, here you pay a flat rate, and you get something that can stay alive for four hours. So it's pretty handy: if you are a little bit stressed about cost, at least you know predictably what you're going to pay. We take the three gigabytes that are free, we bought an additional three gigabytes, and we pay $30.00 a month for this.
A: If any of you have expertise in this area and are kind of excited by this concept, it would be great. Like with AWS, I'd say: here's your CloudFront, or not CloudFront, I forget what their name is for it, cloud-something, scripting to build up a server and launch those containers. I'd love to have all that documented and stored in this DSpace-Docker-Images repository.
A: So that's kind of the main stuff I wanted to go over. There is a post that I've linked here: Alan Orth had mentioned some frustrations building Docker images, because it's continually re-pulling the same Maven artifacts over and over again every time you need to rebuild. I definitely find that when we update a branch on DSpace, each of those automated builds of the Docker images takes like 30 to 40 minutes to run, so it's a good hour after the branch is updated that we have images posted.
A: So it's good enough for people who want to do stuff based on the main branch. A thing I would love is for us to eventually figure out how we automate builds for pull requests. But if you wanted to submit a pull request and then immediately turn around and use Docker to test it, there's kind of a penalty you'd pay in getting things to rebuild, so that's not quite a workflow that I recommend yet. But anyway, there are some thoughts here. Alan, it sounds like, had found some things he could hook into in his Docker usage to expedite the retrieval of the Maven artifacts. I think this is probably fine for a developer workflow, but because we're doing these sort of automated, semi-official published builds, I don't know that this is really suitable for our automated build environment.
A: Just for grins, if you guys can bear with me, I'm going to stop Docker and restart it, because I want to see if we actually find that things run a little bit faster. Before I do that, I'm going to shut down the instances that I started.
A: So here we have ingesting content into a Docker container. Essentially we use the same images, but we have a dspace-ingest compose directory. And I'm realizing, I think I've got an old artifact here in the README file that we no longer need. I'll show you what the docker-compose file looks like.
A: Then, once you mount this, the additional code that's needed for automating ingest, we have a script that will create the administrator for you. It's essentially calling dspace create-administrator, but it just makes it a little bit easier to not have to remember the syntax for that. Then we have a get-AIP script that will go and download from the internet a zip file of AIP files; that's where I've got my dog photos collected. As we come up with better sets of data, we could deploy more meaningful sets.
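The create-admin wrapper presumably boils down to a one-liner like this sketch; the flags are standard dspace create-administrator options, and the values are the demo credentials mentioned earlier, used here purely as an illustration:

```shell
/dspace/bin/dspace create-administrator \
  -e test@test.edu -f Test -l Admin -c en -p admin
```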
A: GitHub itself has a limit on how big a particular zip file can be. So we might determine, if we came up with some great data sets, that we'd want to deploy those with Box or just directly on a website, so that people can pull down a richer set of test data. And then the ingest-AIP script iterates through the downloaded AIP zip files that you pulled down and adds them one by one.
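The iteration step has roughly this shape: walk the downloaded AIP zip files and add them one by one, communities before collections before items (the ordering described later in the demo). The file names and the commented packager invocation are illustrative assumptions, and empty stand-in files in a temp directory are used so the sketch can run anywhere:

```shell
# Stand-in for the download directory filled by the get-AIP script.
AIP_DIR=$(mktemp -d)
touch "$AIP_DIR/COMMUNITY@123456789-1.zip" \
      "$AIP_DIR/COLLECTION@123456789-2.zip" \
      "$AIP_DIR/ITEM@123456789-3.zip"

INGESTED=""
# Ingest communities first, then collections, then items.
for aip in "$AIP_DIR"/COMMUNITY@*.zip "$AIP_DIR"/COLLECTION@*.zip "$AIP_DIR"/ITEM@*.zip; do
  # The real script would call something like:
  #   dspace packager -r -f -u -t AIP -e "$ADMIN_EMAIL" "$aip"
  INGESTED="$INGESTED $(basename "$aip")"
done
echo "$INGESTED"
```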
A: So, you remember before, I used d6 as my project name. What I'm going to do this time is make my project d6test, so that I don't have an existing volume, and I'm going to start with fresh volumes here and bring it up. I'm curious to see if we find that stuff starts faster now that I've restarted Docker.
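This works because docker-compose prefixes named volumes with the project name, so a new -p value gets a fresh set; roughly:

```shell
docker-compose -p d6 up -d      # reuses the existing volumes named d6_*
docker-compose -p d6test up -d  # creates fresh volumes named d6test_*
```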
A: So I'm going to now run, with winpty because I need terminal output, docker exec -it d6test_dspace_1 /bin/bash. So now I've opened up a terminal into the Tomcat container, and you'll remember we mounted, in this particular docker-compose file, a directory called ingest-tools. So now you'll see those four scripts are available to me, and I'm going to call create-admin.sh.
A: So it's pulling the dog-photos AIP zip; because these were really small files, I was able to package this inside of GitHub. Oh, and I guess I set it up so that you could actually set an environment variable to use a different zip file if you wanted to. So now we've got the AIP files downloaded, and I'll go ahead.
A: One of the interesting things: this ingest compose is really designed just to facilitate filling your volume with content. If we go to the main DSpace compose directory and look at the docker-compose file, we've got some other interesting overrides in place. If you want to run Mirage, we've got an add-on provided that has the xmlui.xconf override to turn on Mirage, and all you need to do is uncomment this line.
A: It takes just this one file, xmlui.xconf, and overrides it in the install directory, and that's a way that you can sort of alter or override behavior that was bundled into your built images. And then, for the RDF stuff that we did, we were using a similar technique to provide either the rdf.cfg file or an updated local.cfg file to get those RDF properties in place. The nice thing is that when you shut down your docker-compose-created containers, we persist the Postgres data directory in a Docker volume, we persist the asset store, and we persist the contents of Solr, so that when you start things back up, all of that content is there and refreshed. Alright, so now I've stalled long enough: you'll see we've got an empty DSpace instance.
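Both tricks, overlaying a single config file onto the built image and keeping state in named volumes, look roughly like this in compose terms; the file paths, mount targets, and volume names are illustrative assumptions, not copied from the repository:

```yaml
services:
  dspace:
    image: dspace/dspace:dspace-6_x-jdk8-test
    volumes:
      # Uncomment to overlay one config file onto the built image,
      # e.g. an xmlui.xconf that turns on the Mirage theme:
      # - ./overrides/xmlui.xconf:/dspace/config/xmlui.xconf
      - assetstore:/dspace/assetstore   # persisted asset store
      - solr:/dspace/solr               # persisted Solr cores
  dspacedb:
    image: dspace/dspace-postgres-pgcrypto
    volumes:
      - pgdata:/pgdata                  # persisted Postgres data
volumes:
  assetstore:
  solr:
  pgdata:
```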
A: And what this is doing is it's currently iterating through, looking first for community objects and ingesting those; then it's looking for collection objects and ingesting those, and then looking for item objects. The one thing we aren't doing yet is that we don't have an example that has a site AIP file, but we could add that once we have some better data sets.
C: Terry, I have kind of a tangential question. I was taking a look at that update-sequences SQL, and I'm wondering: under what circumstances is that necessary? You mentioned when you're ingesting your AIPs; what other conditions might require the use of this script? I can't exactly tell what it's accomplishing. Can you... maybe Tim?
B: It gets out of sync. It's basically ensuring the database content is up to date with the sequences and the sequences are up to date with the database content. We moved away from that with DSpace 6, with the introduction of UUIDs instead of these incrementing identifiers, because the incrementing identifiers are inherently fragile in that way.
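For a sense of what "resyncing" means here, a script like that is essentially a series of setval calls, along these lines against the DSpace 5-era schema. The table and sequence names shown are typical examples rather than the full script:

```sql
-- Reset each sequence to the current maximum id in its table,
-- so the next nextval() does not collide with restored rows.
SELECT setval('item_seq', COALESCE((SELECT MAX(item_id) FROM item), 1));
SELECT setval('collection_seq', COALESCE((SELECT MAX(collection_id) FROM collection), 1));
SELECT setval('community_seq', COALESCE((SELECT MAX(community_id) FROM community), 1));
```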
B: Yeah, at least not for AIP restoration. There still is an update-sequences script in DSpace 6, but it's rarely if ever needed, and I've yet to find a situation where it's needed. It might still be needed in some rare situations where your sequence increments get out of date, but it used to be much more frequent in DSpace 5 and below, during AIP restoration.
A: I've never actually run into a need for it in regular usage, because when I tend to ingest AIPs, I'm pulling from our production server into our test server, and we use different handle suffixes in each of those cases, so I've never actually polluted the sequence number by doing it. But because we're sort of doing this initial load into the same handle namespace where handle assignment will take place, that's where you can run into the issue if it isn't run yet.
B: Admittedly, I've usually only run into it frequently where you're doing restorations of content, because the AIP process can be used to restore content that was accidentally deleted, and that's where it is often run into. If you've deleted a bunch of content and you restore it from AIPs, then the sequences can get out of sync, because now you're re-adding content that was deleted and the database isn't really aware of it, and it gets all weird.
A: You all can see that this new running instance has content within it. I think I ended up signing in just for a sort of cache refresh, and now we've got some content in here.
B: I'll just say again that what you're showing off is pretty cool. I still need to find my own time to play with it more, but I'm excited to see all this great work going into Docker, even if just for easier development and testing. But I know it'd also be great for production scenarios eventually.
C: Agreed. For the DSpace version that we're running in production, we're still very much based on using Ansible and running in VMware in our data center, but I do know that one of our other groups has been working on some custom development of a system that needs to do deposits into DSpace, and they were interested in making more of a Docker-centric workflow. They're deploying to AWS, and so they were interested in, you know...
C: "Can we get a Docker container?" At the time when they asked me, you hadn't quite developed these things yet, but when that need comes around again, it's going to be great to have not only the images to point to but also the really good documentation that you've provided. I really appreciate that, and I especially appreciate that you don't have to watch a video in order to learn how to do this stuff. You've got the written documentation, and personally I find that format to be really much more useful for referring back to.
A: I'm probably never going to use these branches again, but let's say I created four pull requests today from my terrywbrady repo. I could go ahead and use the automated build functionality on Docker Hub to build pre-built containers for people, to make it easier for them to test my pull requests without needing to build locally. So that's another thing we could do as we get into the rhythm and have more people using this.