From YouTube: DSpace Dev Show & Tell : 2018-07-31 - REST API Tools

If you could just pop on and say who you are and what institution you're from, then we'll dive into the screen share. Maybe I'll call people out, because I think Zoom doesn't necessarily show the same order of people to everyone. I'm Terry Brady from Georgetown University. Maybe the Texas A&M folks can introduce themselves next.
Okay, so I made these slides as a reference, and perhaps I'll share them afterwards, but I don't think I'll be referring back to them all the time; I'll just work in Postman most of the time. Basically, I got talked into this by Terry. I went to him after his presentation just to tell him one quick thing about Postman that was relevant to it.
He said, you must give a presentation on the developer show and tell. First of all, I'm not a Postman expert. I just use it for some things in my work, and I'll try to explain how I make use of it. Of course, you can always find people who know lots and lots more about it, because you can use it for testing, running automated scripts, and things like that. I just use a few of the basics, but I'll try to explain them as best I can.
So, when it comes to Solr — let me move these little screens here so I can see them — when it comes to Solr, you usually want to tunnel to a remote server, because Solr is by default only available from localhost, for security reasons. So the first step, when I want to talk to a DSpace instance's Solr, is usually to SSH to that server and open a tunnel.
You do that — on Linux, and on Windows these days as well, you can do it at the terminal — with ssh and the -L option: first the local port, then localhost, then the remote port. So we'll start by doing that, or rather I already did, and I'll just make sure the connection still works. This is one of our demo servers for our statistics module. It's running at the local port of 7150; that's the Tomcat port, and I'm going to request that.
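A minimal sketch of the tunnel command described above. The hostname is a placeholder; the ports match the talk (Tomcat on the server's local port 7150):

```python
# Assemble the SSH local-forward command: -L local_port:localhost:remote_port
local_port, remote_port = 7150, 7150
host = "dspace-demo.example.org"  # hypothetical server name

cmd = f"ssh -L {local_port}:localhost:{remote_port} {host}"
print(cmd)
```

Running the printed command in a terminal forwards http://localhost:7150/ on your machine to port 7150 on the server's loopback interface, where Solr is reachable.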
E
It's,
of
course,
gonna
be
well,
not
JSON,
so
I'll
put
it
the
texts
and
it's
gonna,
just
gonna,
say
yeah
we're
Tomcat,
and
this
is
a
default
HTML.
So
when
working
with
comments,
let's
get
to
the
first
thing
that
I
think
is
useful
about
postman.
It's
you
can
have
these
environments,
I
have
a
few
I,
have
a
local
host,
8080,
9090
and,
and
these
basically
allow
you
to
define
variables
so
I
have
a
base
variable,
that's
matched
to
localhost,
8080
or
90,
90
or
9999.
So
I'll
do
that
now
and
now.
Now I can just replace this whole host part with {{base}}. I still get a parse warning, because JSON is the default view, but it still works; and if I switch to a different environment, I get an error that there's nothing listening there. So if I have a local instance of Solr and a remote DSpace, I can easily switch between them using the same queries, just by switching environments. You can of course add multiple variables to your environment, but I don't tend to do that.
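An illustrative sketch (not Postman's actual implementation) of what the {{base}} substitution above does: one request template resolves against whichever environment is active. Environment names are made up.

```python
# Two "environments", each defining the same variable with a different value.
environments = {
    "local-8080": {"base": "http://localhost:8080"},
    "local-9090": {"base": "http://localhost:9090"},
}

def resolve(template: str, env: str) -> str:
    """Substitute {{name}} placeholders from the chosen environment."""
    for name, value in environments[env].items():
        template = template.replace("{{%s}}" % name, value)
    return template

print(resolve("{{base}}/solr/statistics/select", "local-8080"))
```

Switching environments changes every request that uses the variable, which is exactly the local-Solr versus remote-DSpace trick described above.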
I have a few globals that I tend to set up when I start working on a new project. In this case we're working with the preview Solr, so I have a variable pointing at the preview Solr, and I'm working with the statistics core, so I have a core variable set to statistics. So I can do something like this.
I'm going to use the autocomplete here, just so you don't have to watch me type. So this is a really basic query that will return JSON this time, and that's how it should work. We can see it has, well, quite a lot — over three million statistics hits in numFound. If I now change my global variable to say that I want to work with the search core, that number should change.
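A sketch of that basic count query, with the {{solr}} and {{core}} ideas as plain variables (host and port are illustrative, matching the tunnel from earlier):

```python
from urllib.parse import urlencode

solr = "http://localhost:7150/solr"
core = "statistics"  # switch to "search" to query the search core instead

# Match everything, return no documents, ask for JSON; numFound in the
# response header is the total hit count.
params = {"q": "*:*", "rows": 0, "wt": "json"}
url = f"{solr}/{core}/select?{urlencode(params)}"
print(url)
```

Changing the single `core` variable repoints the same query at a different index, which is the point of the global variable in the demo.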
And as you can see, there are not a lot of documents — just 184 — but that immediately shows you how useful these variables can be. There's another useful thing in Postman I tend to use, and that's collections, and you can use them in conjunction with variables. Actually, you can have additional variables for each collection: we see here, if I edit it, you can have additional variables and authorization settings and all those sorts of things. So if you use a query from a collection, you can always authorize in a certain way. I'm sure Terry's going to explain a little bit about tests in his presentation — you can run certain scripts and tests for each request in a collection. So it's really useful if you want to set up automated API tests, but, like I said, I use it on a much more basic level. So I have a bunch of queries here.
So this query works on the update endpoint instead of the select endpoint, and sends a little XML snippet that tells Solr to commit its index, because Solr doesn't update its index automatically every time a new record is added. It does it every 15 minutes or every 10,000 docs by default. So if you're trying to test something you've changed about your statistics, for example, and you generate a few hits and want to see the results right away, you'll have to commit manually. So that's a useful one.
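A sketch of that manual commit as an HTTP request — POSTing the XML snippet to the update endpoint. Host, port, and core name are the same placeholders as before:

```python
import urllib.request

solr = "http://localhost:7150/solr"
core = "statistics"

# A request with a body is sent as POST; <commit/> is the snippet from the talk.
req = urllib.request.Request(
    f"{solr}/{core}/update",
    data=b"<commit/>",
    headers={"Content-Type": "text/xml"},
)
# urllib.request.urlopen(req)  # uncomment when the tunnel is actually open
print(req.full_url)
```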
The delete is also an update query, and it has a little XML snippet here, and in there you can type anything that would fit in a normal Solr q parameter. So in this case it would delete the document with this ID and type 2. I would recommend, of course, that you always try the query first in a select, to see exactly what you'll get, before deleting.
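A sketch of the delete-by-query payload for the update endpoint. The query itself is a hypothetical example; anything valid in a normal q parameter fits inside the query element:

```python
# Delete-by-query: wrap an ordinary Solr query in <delete><query>...</query></delete>
delete_query = "uid:1234 AND type:2"  # hypothetical example query
payload = f"<delete><query>{delete_query}</query></delete>"
print(payload)
```

As the talk advises, run the same query against /select first so you know exactly which documents will be removed.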
So those are a bunch that have the core variable in them, because they're useful for any type of Solr core, not just DSpace-related or statistics-related ones. Now here are a few that are dedicated to search. I have here a bunch of parameters that are useful when you just want to query the core like Discovery would. We have this query field here, and anything I type in there would give a similar set of results to what I would get in Discovery. So let me test it here.
E
I
should
get
results,
that
is,
there
are
just
other
styles
and
handles
I.
Don't
know
why
everything
is
twice
in
this
report.
It
seems
like
a
bug,
but
I'll
go
over
the
the
parameters
to
see
what
we've
got.
So
you
have
the
query
field
default.
That
is
star
:,
star,
meaning
everything,
but
you
can,
of
course
replace
that
by
a
specific
query,
Rose
is
the
number
of
results
you
want
to
get
back.
Wt
tells
us
we
want
to
get
back
JSON
then
I
have
a
bunch
of
filter
queries.
One
says:
I
want
to
exclude.
E
I
have
a
minus
sign
here.
Everything
that
has
withdrawn
set
to
true
and
I
want
to
exclude
everything
that
has
discoverable
set
to
false.
The
reason
for
writing
it
in
a
negative
way
is
because
that
also
will
include
things
that
don't
have
the
with
wrong
field
and
don't
have
the
discoverable
field.
As
you
write
it
this
way,
so
I
want
only
items
so
search
result.
E
Resource
type
is
2
and
I
want
only
things
that
group
0
has
read
access
to
meaning
anonymous
users,
and
then
this
FL
parameter
just
says
which
fields
you're
going
to
get
back.
So
if
I
wanna,
if
I'm,
working
and
I
want
a
variation
on
that,
you
can
easily
just
check
out,
for
example,
the
field
lists
and
then
you'll
get
everything
takes
a
bit
longer,
of
course,
because
there's
full
text
in
there
I
believe
so.
Maybe
that
wasn't
the
best
example
to
give
ya.
Here
we
go
so
yeah.
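The Discovery-like query walked through above can be sketched as a parameter list. Field names follow the talk; treat them as illustrative for your own schema version:

```python
from urllib.parse import urlencode

# A sequence of pairs, because fq repeats; urlencode handles the repetition.
params = [
    ("q", "*:*"),                     # replace with a specific query
    ("rows", "10"),
    ("wt", "json"),
    ("fq", "-withdrawn:true"),        # excludes withdrawn (or field absent)
    ("fq", "-discoverable:false"),    # excludes private (or field absent)
    ("fq", "search.resourcetype:2"),  # items only
    ("fq", "read:g0"),                # readable by the anonymous group
    ("fl", "handle,title"),           # drop fl to get every stored field back
]
print(urlencode(params))
```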
Now we get all the fields back. And then you simply don't save the changes, or just copy the URL, or save it as something else if you think you've made something useful; but usually, if it's a one-off, I don't save it. I also have one for a specific handle. That's basically a single row, and then — I don't know if this is a handle that exists in this repository; let's check.
I have one that facets by collection. It basically uses all those same parameters that are in the standard search filters, but it also sets the facet parameter to true, and it facets on the collection location field. I'm also saying I want zero rows — I'm not really interested in the regular search results, I just want the facet results, and that should go a lot quicker. And we see here that collection 24 has 40 items, collection 19 has 34, etc.
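A sketch of that facet-only request: zero rows, just per-collection counts. The field name `location.coll` is the collection location field as I understand the demo; verify it against your search schema:

```python
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("rows", "0"),                    # skip documents, keep only facet counts
    ("wt", "json"),
    ("facet", "true"),
    ("facet.field", "location.coll"), # assumed collection facet field
]
print(urlencode(params))
```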
Well, the s_internal one here is a strange one, because that's unique to our module, so disregard that one. Here I have isBot, so that filters out everything that is flagged as a bot already. And then here you have the same logic as before with the NOTs: this filters out everything that is not of any statistics_type and not of statistics_type view. It basically says I want all views, plus records that don't have a statistics_type at all. The reason for that is that old DSpace versions didn't have the statistics_type field — I believe it was added in 3, or 1.8, or something — so if you have very old statistics, you'll also want to include the records that don't have the field. And we do the same thing for bundle names: this basically filters out all hits for thumbnails and things like that, but, because it's framed in this negative way, it also includes views, meaning records that don't have the bundleName field.
So this is a good default set of statistics filters to start from. Again, I'm going to make the presentation available after I'm done, so you can just copy-paste those; you don't have to type them out if you're interested. So: a good set of standard filters. If we run it — because the number of rows is 0 by default — we'll just see the number of publicly available, non-bot, non-thumbnail statistics hits. And that's a lot fewer than those 3 million.
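A hedged reconstruction of that default statistics filter set — non-bot hits, views (or records predating statistics_type), and nothing from thumbnail-style bundles (or records predating bundleName). The exact negative-query syntax is my reading of the spoken description, not copied from the slides:

```python
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("rows", "0"),
    ("wt", "json"),
    ("fq", "-isBot:true"),  # keeps false and records with no isBot field
    # keep statistics_type:view, plus records with no statistics_type at all:
    ("fq", "-(statistics_type:[* TO *] AND -statistics_type:view)"),
    # keep ORIGINAL-bundle hits, plus records with no bundleName at all:
    ("fq", "-(bundleName:[* TO *] AND -bundleName:ORIGINAL)"),
]
print(urlencode(params))
```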
It's about ninety-eight thousand, so the very large majority of those were bots and crawlers. It's a test server, and we generate statistics on it, so I wouldn't pay too much attention to the actual figures. A useful thing to do when you're looking at these statistics is to look at the most active IPs. If you're interested in bot traffic to your server that's not already flagged, you can run this query. Again, it has the s_internal filter, which you won't have in a vanilla DSpace, but the rest applies.
With the facet mincount you can say each group needs to have at least one hit, or more if you wanted, and then with the facet on IP we get something like this. We can see on this server that there's definitely something fishy going on with this IP — it has the very large majority of the hits — and this one is also very fishy. So this is one I would investigate further. In a real situation you wouldn't see this large a discrepancy. But then again, I have the items-for-IP query.
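A sketch of the most-active-IPs query: facet on the ip field, require at least one hit per group, and limit to the busiest addresses (the limit value is illustrative):

```python
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("rows", "0"),
    ("wt", "json"),
    ("facet", "true"),
    ("facet.field", "ip"),
    ("facet.mincount", "1"),  # drop empty groups
    ("facet.limit", "20"),    # top 20 addresses by hit count
]
print(urlencode(params))
```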
E
I
can
now
use
and
just
paste
in
this
IP.
If
it's
not
already
the
same
one
and
difference
here
is
that
it
filters
by
the
IP
we
just
had
and
then
in
fastest
facets
by
owning
items,
so
I
can
see
which
items
this
IP
has
been
hitting
so
suspicious
activity
or
ball
activity
sometimes
happens
because
I
don't
know
one
or
two
items
get
inflate.
It's
a
lot.
So
if
you
see
10,000
hits
on
one
item
and
then
0
or
few
hits
on
all
the
others,
you
can
know
that
suspicious
activity.
So again, in this case it seems to be spread relatively smoothly. So it could also be a VPN in this case — like an entire university whose traffic is all routed through a single IP address, and that's the one you see in the statistics. That's something you would consider with an IP like this. Another thing you can do is group the traffic for an IP by certain time periods.
So you can use the date facet here and say I want to facet by time, and then I have a date gap, which basically says plus one day — but you need it URL-encoded, and there's another useful thing in Postman: you can do that on the fly. You just select something and you can encode and decode it right there. You have to have a start and an end date, and the /DAY at the end basically says round to the nearest day. Of course, with these exact dates it's not necessary, because they're already on an exact day boundary, but in case you have, say, a timestamp pasted from somewhere, you can use that to round it to the nearest day or month or year, because those are all valid as well.
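A sketch of bucketing one IP's traffic per day. I use the range-facet spelling (`facet.range`); older Solr versions spelled this `facet.date`. The IP is a documentation address, the dates are illustrative, and urlencode takes care of encoding the "+" in the gap (a raw "+" would decode as a space):

```python
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("rows", "0"),
    ("wt", "json"),
    ("fq", "ip:203.0.113.7"),                           # hypothetical address
    ("facet", "true"),
    ("facet.range", "time"),
    ("facet.range.start", "2018-01-01T00:00:00Z/DAY"),  # /DAY rounds to day
    ("facet.range.end", "2018-08-01T00:00:00Z/DAY"),
    ("facet.range.gap", "+1DAY"),                       # one bucket per day
]
print(urlencode(params))
```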
And then we see that this IP address didn't do anything until the 17th of March, when it started downloading multiple items a day. What you usually see with bot IP addresses is that either there are, like, one or two days where it downloaded all the items there are, or there's a very regular number of items every day — so, like, 250 every day, consistently, for months on end. So in this case, this reaffirms that this is probably a proxy for a big institution, because it's very random; there's not really a pattern.
E
I
have
a
few
others
that
may
be
useful.
I
have
a
pivot
query
here.
That
could
be
useful,
I'm
saying
here:
I
want
to
not
only
facet
but
I
want
to
pivot
and
first
country
codes
by
country
code
and
then
by
type,
and
that
basically
says
facet
each
country
again
by
type.
So
this
way
I
can
see
which
so,
which
50
are
the
50
most
active
countries
on
my
repository
and
then
for
each
country,
I
can
see
how
many
of
use
it
they
get
and
how
many
downloads
did
they
get
in
the
same
query,.
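A sketch of that pivot facet: the 50 most active countries, each broken down again by type in one query. Field names are my reading of the statistics schema as described in the talk:

```python
from urllib.parse import urlencode

params = [
    ("q", "*:*"),
    ("rows", "0"),
    ("wt", "json"),
    ("facet", "true"),
    ("facet.pivot", "countryCode,type"),  # facet by country, then by type
    ("facet.limit", "50"),                # top 50 countries
]
print(urlencode(params))
```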
E
You
well
I
can
see.
I
can
see
more,
of
course,
because
this
is
these
are
items.
These
are
like
community
home
pages.
These
are
collection
home
pages
or
it
could
be
the
other
way
around.
I
have
to
check
my
D
space
constants
file,
and
these
are
downloads,
and
then
I
can
see
that's
the
US
and
in
total
they
have
all
that
and
it
split
up
into
this.
So
that's
very
useful.
If
you
want
a
bunch
of
information
all
at
once,
I read somewhere, in an article about the REST API, about the environments, and the rest I just figured out for myself, basically. Just that the environments were useful if you had, well, a production server, a test server and a development server — people use them for that — and then basically the rest I just clicked around and saw how collections work; that's useful because I can store queries in them, etc.
A question — Art, this is Jeremy Huff at Texas A&M. I've been using Postman in my development on the Polio project, just because it has been incredibly useful: doing the sorts of things that you're showing here, and being able to, you know, do a lot of bootstrapping work real fast — the scripting and stuff. But we've discussed internally the idea of sharing between our team.
I don't know how much it costs — I never really looked into it — but there's a paid tier where you can use Postman as a group, and then you can share these collections with your group and define which things people in the group can see and which things they can change. I have no experience with that, because I don't have it, but if you're serious about it, I would look into that. The basic thing you can definitely do is export these collections as JSON and share the file.
Yeah, certainly. I imagine, as people come up with techniques for authenticating, and then wanting to preserve authentication tokens through a whole session — particularly when we're doing more than just querying — I think we're going to find that it's really essential for working with the DSpace 7 REST API.

All right, thanks so much. I learned a lot from watching this. My usage of this tool has been so much more chaotic, because I've not taken the time to really understand the difference between the environments and the collections and the folders. So it's nice to see a well-constructed set of tests here in one spot.
Is it filling the screen? Yep? Okay, great. So I've got a link here in our show-and-tell notes, both to the download for Postman, if you don't already have it, and to some tutorial instructions that we created for the DSpace 7 workshop at Open Repositories. We had kind of a specialized version of the REST API that Andrea Bollini set up for the workshop, and he actually made some modifications to that API in the course of the presentation.
But there's a link here for an active instance — you can use this link, or you can use the standard REST API demo link. The main thing I wanted to show you all is the authentication process, because I think, as folks want to dive in and start playing with the DSpace 7 API, many, many actions beyond queries are going to require authentication. So I want to make sure folks have a good sense of comfort with how to do that moving forward.
A
So
first
off
I
just
want
to
show
you
the
deep-space
7
API.
Has
this
thing
called
the
Hal
browser
and
if
in
case
for
folks
who
haven't
seen
it,
it's
sort
of
an
an
entry
point
that
becomes
like
a
self-describing
site
just
making
available
all
of
the
endpoints
of
the
new
REST
API?
So,
for
instance,
if
I
want
to
see
all
of
the
communities
in
this
repository,
I
can
navigate
to
the
communities.
Endpoint
click
on
it
and
you'll
see
that
this
site
has
two
communities
that
were
returned.
I can click on a particular community and see details about it. Also, there are additional actions that are available, so I could see all the collections within this community by clicking here. There's also some facility — and I think this will be evolving over time — for performing other actions, like create, update, delete, through the HAL Browser interface. But, you know, I think some actions are going to be complicated to perform relying solely on the HAL Browser for navigation and other things.
So what I want to do is show you the process of creating an authenticated session within Postman, but first I want to show you the authentication process just using this HAL Browser. So I'm going to come to the authentication endpoint and just check my status here, and right now we're not authenticated, according to the API. So the next thing I want to do is log in, and within our documentation here we've got a little cheat sheet of the credentials to use on this particular API.
So let me — I've got some cheat sheets here in this tutorial, which I recommend, if you're curious about this, trying to run yourself. I'm going to grab the URL for this repository, open up Postman, and actually, like Art had done, I do have a few things that I've saved off as saved requests here on the left-hand side. So one of the first tasks I want to perform is getting all of the items in the repository.
Okay, what I wanted to look at is that we've had 13 results come back from our query. The next thing I want to do is actually perform a login: I'm going to POST to the login endpoint, and just to show you the parameters I'm using — I'm using that sample email address and sample password — I'm going to send that request. So now I have authenticated, and this bearer token is returned from the API.
Actually, what I'm going to do is pass it in as a bearer token and update the value from the value that I used previously. I'm now going to send this authentication status request with the bearer token that we got from the login action. We'll click send, and now you'll see our authentication status is true, with some details about the user that's logged in. And now what I can do is come back to that query that we ran, where we searched for "research".
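A hedged sketch of that login flow, against the 2018-era DSpace 7 preview API. The base URL and credentials are placeholders, and the exact endpoint paths and header handling may differ in your version — the shape of the flow (POST credentials, capture the bearer token, replay it on later requests) is the point:

```python
import urllib.parse
import urllib.request

base = "https://dspace7.example.org/server"  # placeholder host

# A request with a body is POSTed; credentials are form-encoded placeholders.
login = urllib.request.Request(
    f"{base}/api/authn/login",
    data=urllib.parse.urlencode(
        {"user": "demo@example.org", "password": "demo-password"}
    ).encode(),
)
# with urllib.request.urlopen(login) as resp:
#     token = resp.headers["Authorization"]   # e.g. "Bearer eyJhbGci..."
token = "Bearer <token-from-login-response>"    # placeholder value

# Replay the token on subsequent requests, e.g. the status check:
status = urllib.request.Request(f"{base}/api/authn/status")
status.add_header("Authorization", token)
print(status.get_header("Authorization"))
```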
So this is just kind of what I wanted to highlight here: the process that you would follow to authenticate a session using Postman. And then, once you've got the Postman interface up and running, you have the ability not only to send GET requests and POST requests, but you can formulate the whole variety of different request types that are supported in HTTP — and in particular, the new DSpace 7 API takes advantage of several of these different verbs.
But what we're going to show today is some software that's more focused towards end users. We're calling this the metadata assignment GUI; it provides ingest and export, and we've also had some success testing it against Fedora repositories and Archivematica as well, for preservation applications. But today we're going to look at the DSpace side of it, and this is coded against the DSpace 5 and 6 REST APIs, so we'll definitely have some work to do when we go the route of upgrading to 7.
But what we see here is — okay, so this is the first page. You see how we're doing: I'm logged into this interface, and we see our branding and everything, and there are two tabs, Documents and My Assigned Documents. If I go to Documents, I will see that the system is already aware of some possibilities for document processing that are on its file system.
Let's see, if I make another document — what I'm doing here is just taking a directory full of files, in this case a bunch of TIFFs, a PDF and a text file, and I'm going to put it in a directory where the Magpie service is listening; it's actually watching for new files to appear there. So I'm going to copy it over to the directory where Magpie is listening, and we do see some action in the background there.
B
It's
adding
all
the
resources
that
have
found
in
that
file
directory
and
then
back
in
the
UI
I
see
that
my
page
is
out
of
date.
I
want
to
refresh
and
voila
a
new
document
has
been
ingested,
so
the
notion
here
is
perhaps
that
you
would
give
your
librarians
or
your
curators
access
to
a
file
share
where
they
could
drop
their
files.
Another
potential
workflow
would
be
you'd
set
up
the
scanners
that,
like
the
scanning
workflow,
so
that
after
documents
were
completed,
they
would
be
packaged
up
and
copied
over
to
where
the
server
was
listening.
I could edit the application properties to get that working — maybe; I don't know. Well, let's not worry about it. But if I did have my setup configured properly, we would be seeing the PDF and the text showing up in this window. And then the other part of interest is down here: we're seeing metadata get assigned, and this is coming from a spreadsheet that I actually have on disk. We also have a facility for grabbing metadata out of our Voyager catalog.
B
Also,
a
facility
for
oh
cool
yeah,
getting
us
suggestions
based
on
the
parsing
of
the
full
text,
and
so
this
is
very
collection
specific.
These
are
all
like
agricultural
bulletins
and
basically
we're
parsing
the
full
text
here
and
looking
up
the
terms
in
the
National
Agricultural
library,
thesaurus,
and
so
we
can.
If
we
were
a
curator,
we
might
favor
some
of
these
terms
for
metadata
assignment.
B
Having
done
that,
I
can
now
come
over
here
and
accept
the
document.
The
notion
would
be
that
once
it's
accepted,
then
a
the
curator
in
charge
could
indicate
that
they
want
to
publish
it,
and
so
we
now
do
a
rest.
A
set
of
rest
calls
actually
to
the
dspace
repository,
and
this
is
against
a
fairly
vanilla.
You bet, it's on GitHub, and Terry made it a hyperlink on the page for this meeting. Awesome, yeah. I should mention also that there are two repositories on GitHub: there's the service, which is what that link follows, and then there's the UI; they're in separate repos. So it's a Spring Boot service on the back end; it'll run on port 9000.
B
Don't
assume
it's
been
in
production
about
three
times
over
the
past
two
years,
so
I
think
we've
got
maybe
about
six
items
in
the
repository.
Has
their
social
those
efforts,
yeah
it's
been
going
from
customer
customer
to
customer,
doesn't
want
to
turn
over
in
the
like
oversight
of
the
product
and
that's
kind
of
why
it
so
close.
Because we wanted to get the push just absolutely right — and in fact, in the case we're seeing here, that wouldn't even be necessary, because this is, like, open access. But if we were pushing documents and making them accessible only to certain groups, we wanted to amend all the bundles to have those permissions on them. So for the most part, we didn't really have to do any major surgery on the REST API to get it to work.
A
All
right,
well,
I,
will
mention
next
month.
We're
going
to
the
focus
of
the
meeting
will
be
on
using
docker,
with
these
base
I'm
going
to
post
a
link
to
that.
The
next
meeting,
which
will
be
on
August
28th,
so
I'm
I've,
sort
of
been
been
learning
docker
over
the
last
couple
months
and
I
think
it's
really
promising
as
a
way
to
help
folks,
particularly
folks,
who
haven't
been
able
to
run
like
a
desktop
install
of
dspace
in
the
past
or
found
it
cumbersome
or
difficult
to.
A
You
know,
allow
the
project
to
publish
some
standard,
docker
images
of
dspace
and
then
let
folks
be
able
to
you
know
easily
call
it
hey.
I
need
today
for
something
I'm
testing
I
needed
to
space
seven
instance
tomorrow.
I
need
the
ability
to
pull
up
that
D
space
five
instance
and
really
sort
of
make
that
process
seamless
and
not
requiring
lots
of
installs
beyond
the
docker
software.
So
I'll
kind
of
share
the
state
of
the
tools
and
the
project
images
where
they
are.
A
If
folks
have
any
expertise
on
this,
please
get
in
touch
with
me
because
I'd
love
to
validate
what
we've
done.
So
far,
and
if
you're
really
curious
to
test
out,
it
would
be
great
to
us
and
some
folks
actually
confirm
that
the
instructions
look
good,
because
what
I
plan
to
do
is
step
folks
through
just
a
very
simple
exercise
of
hey.
I
want
to
build
these
basics
instance
on
my
desktop
and
the
next
I
want
to
build
a
dspace
seven
instance
and
kind
of
show
folks
the
process
and
what
sort
of
promising
with
docker.