From YouTube: CI WG demo: Globus
Description
Date: 04/28/17
Presenter: Vas Vasiliadis
Institution: University of Chicago
Midwest Big Data Hub
A
Let me introduce you to Vas; he'll be talking about Globus and simplifying research data management. Globus is software as a service for research data management. Vas is the Director of Products at the Computation Institute, a joint initiative of the University of Chicago and Argonne National Lab, and he's the Chief Customer Officer for Globus. In this capacity, Vas works with the research computing community to promote and deliver the Globus services for research.
B
Great, thank you. Thanks, Leah, for the kind words, and thanks, everyone, for taking the time. As Leah said, I'm with the University of Chicago. My primary role on the Globus team is working with our customers on getting Globus out to the research community. Globus is a service that's developed and operated by the University of Chicago, primarily for use by researchers around the world. We do also have some commercial customers who use the service, but our primary mission is to serve the needs of the research community.
B
So, in a nutshell, this is our pitch, if you will: Globus is about big data transfer, sharing, data publication, and discovery. Two key points: we do all of this on top of your existing storage systems, so whatever storage you have in place, Globus adds value to it; and it is delivered as software as a service. This is not something that you deploy on campus; it's a hosted service that we operate for the community. And now just some motivation on how we got here.
B
I saw some names on the attendee list that are probably already familiar with Globus, so bear with me; for those that are new, I think this may help you frame what we do a little bit better. This is, you know, the reality of research data. A few years ago, walking around some of the labs here on the Chicago campus, I was always particularly struck by a numbering scheme that we saw in one of the labs.
B
How do you move, share, discover, and reproduce data when this is what the state of the world looks like? So Globus was stood up to address a number of scenarios, and many of them center around this notion of bridging. Bridging to campus high-performance computing infrastructure is one thing that's very common in our space and one thing that we address routinely: you have a data set that you want to move from the lab to your campus cluster.
B
Do
some
analysis
and
move
it
back
to
various
places,
bridging
so
scaling
out
from
there
scaling
up
to
larger.
You
know:
capability
class
machines
like
some
of
the
supercomputers
at
national
facilities
or
some
of
the
DoD
labs
bridging
to
instruments
right.
So
we
obviously
we're
familiar
with
the
explosion
of
data
from
things
like
next-gen
sequencers,
and
the
left
is
one
of
the
light
source
beam
lines
at
the
Advanced
Photon
Source
at
all
Oregon.
B
The thing we started out with about seven years ago was just file transfer, something that to us seemed really simple at the time but, as we discovered, was quite a challenge for a lot of folks, especially when the data scales were large: networks being flaky, storage systems doing their thing, different credentials being required to access different systems, and so on. So we built, essentially, a managed file transfer service. The way this works is: a researcher comes along.
B
They
have
their
data
sitting
on
an
instrument
or
on
a
lad
service
someway,
and
they
want
to
move
it
to
another
system
for
whatever
purpose.
So
all
they
do.
Is
they
fire
off
a
request
to
blow
this
and
say
you
know,
move
this
data
from
A
to
B.
That
way,
this
the
service
takes
over
and
Globus
just
works
to
transfer
that
data
as
quickly
as
possible
and
and
primarily
reliably
right.
That's
the
core
sort
of
value.
If
you
will
of
the
service,
isn't
this
much
reliability
such
that
we
ensure
the
data?
B
It just gets done, without the researcher having to worry about what it's doing, whether something has failed, where things left off, and so on. Some transfers, depending on what endpoints or systems live on either end, can take many days; in some cases we had one transfer that ran for multiple months from a remote part of the world.
B
That one was moving data over a very slow, very low-bandwidth link. So how do you ensure that those kinds of transfers complete, and complete reliably and securely? That's really at the heart of the service. We'll reach out to the user if we need their intervention for something, but typically the service just takes care of things.
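The "just takes care of things" behavior comes down to automatically retrying transient faults until the payload is delivered. Here is a minimal sketch of that fire-and-forget idea; the function names, backoff policy, and failure model are all illustrative, not Globus's actual implementation:

```python
import time

def transfer_with_retry(send_chunk, chunks, max_retries=5, base_delay=0.01):
    """Deliver every chunk, retrying transient failures with exponential backoff.

    `send_chunk` is any callable that raises IOError on a transient fault.
    This mirrors the fire-and-forget model: the caller submits once, and
    the service keeps working until the whole payload is delivered.
    """
    delivered = []
    for chunk in chunks:
        attempt = 0
        while True:
            try:
                send_chunk(chunk)
                delivered.append(chunk)
                break
            except IOError:
                attempt += 1
                if attempt > max_retries:
                    raise  # give up only on persistent failure
                time.sleep(base_delay * (2 ** (attempt - 1)))  # back off
    return delivered
```

The key point is that failures interrupt one chunk, not the whole job: the task picks up where it left off rather than starting over.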
B
The next thing we introduced, about four and a half years ago now, is the sharing capability: being able to grant access to your data to other collaborators who may not necessarily have local accounts or access on your systems. What the researcher does is select the files they want to share, tell Globus who they want to share them with, and then set the appropriate permissions.
B
Those permissions apply only to that data, and then Globus will control access without requiring you to move the data to some cloud service or some other special-purpose store just for the purposes of sharing. There's no need for your collaborators to create temporary accounts, or for your IT and admin staff to be dealing with that complexity all the time; collaborators can just log into Globus and access that data directly from that system. They can move it to their own local machine or to some other system.
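Conceptually, controlling access "in place" means layering an access-control list over a path on the existing storage, so the storage system itself never needs local accounts for the collaborators. A toy model of that check (the class and its methods are invented for illustration, not the real Globus data structures):

```python
class SharedEndpoint:
    """Toy model: per-identity permissions on a path prefix of a host file system."""

    def __init__(self, host_path):
        self.host_path = host_path.rstrip("/")
        self.acl = {}  # identity -> "r" or "rw"

    def grant(self, identity, perm="r"):
        self.acl[identity] = perm

    def can_read(self, identity, path):
        # Access is allowed only inside the shared prefix, and only for
        # identities on the ACL; no local account is required.
        inside = path == self.host_path or path.startswith(self.host_path + "/")
        return inside and identity in self.acl

    def can_write(self, identity, path):
        return self.can_read(identity, path) and self.acl.get(identity) == "rw"
```

Because the check hangs off the shared prefix, collaborators can see into the shared folder but can never navigate above it, which matches the demo shown later.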
B
Once they have access, they can do whatever they need to do as part of their research workflows. And then, sort of at the tail end, there is in many cases a mandate to make some data more publicly available, or the researchers want to share it with their extended community or with others beyond a working group. So there's the ability to collect data into a single data set for publication, from multiple systems, and attach some metadata to it.
B
You can use standard schemas such as Dublin Core and, optionally, put the data set through some kind of curation process where it is reviewed. Then, once it's published, others can come along and use the Globus service to search for and find that data, go to persistent locations, and download and use it. So, in a sense, we try to cover some of the key tasks in the research lifecycle. We haven't covered everything.
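The metadata step can be as simple as attaching a Dublin Core record to the collected data set and rejecting records that fail review. The element names below come from the standard Dublin Core vocabulary; the values and the `validate` helper are made up for illustration:

```python
# A minimal Dublin Core description for a published data set.
# Element names (title, creator, date, ...) come from the Dublin Core
# Metadata Element Set; the values here are hypothetical.
record = {
    "dc:title":      "Light sheet microscope runs, April 2017",
    "dc:creator":    "Example Researcher",
    "dc:date":       "2017-04-28",
    "dc:type":       "Dataset",
    "dc:format":     "application/x-hdf5",
    "dc:identifier": "doi:10.xxxx/example",  # a persistent location
    "dc:rights":     "CC-BY-4.0",
}

def validate(record, required=("dc:title", "dc:creator", "dc:date")):
    """A curation step might reject records missing required elements."""
    missing = [k for k in required if not record.get(k)]
    return missing  # an empty list means the record passes review
```

A curation workflow would run a check like this before the data set and its metadata become searchable.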
B
Obviously it's a very complex space, and we haven't by any means covered all of the scenarios, but these are the things that are fairly common in the environments we've been working in, with researchers across pretty much every discipline nowadays. A couple of key points on Globus overall: for the most part, it's something that researchers access via a browser.
B
There are other interfaces, but that's the primary one. As I said, we can use any storage system, as long as we support it, that is, as long as we have a connector for it; that largely means most POSIX file systems, plus a whole bunch of other storage systems like object stores and archives. And you can access it all using an existing identity. I'll talk more to that when I do a demonstration in just a couple of minutes. So you might ask: well, why use it?
B
OK, those sound like interesting functions, but the things we've focused on run along three dimensions. The first is simplicity: we present a consistent user interface across different storage systems. You could be looking, on one hand, at an Amazon S3 bucket and, on the other, at a tape archive on your campus, and they both look just like your normal file browser; you can simply move files between them. There's also easy access for collaborators; I'll show you what that looks like in a second on the live system.
B
The second dimension we focus on is reliability and performance. The protocol we use under the covers to move the data is GridFTP, a standard, tried-and-trusted protocol that performs exceptionally well, especially on the high-speed networks that we have in the research world. And then there's this reliability notion.
B
We talk about it as fire-and-forget: as a researcher, I issue a request to Globus and then I can walk away; I'll just be notified when it's done. And from the perspective of the administrators who are managing the storage systems connected to Globus, we've really focused on operational efficiency as part of the SaaS model.
B
You-You-You
install
a
small
client
if
you
will
honest
with
them
and
it's
available
via
the
globus
service,
but
we've
also
provided
command-line
tools
and
erased
API
so
that
the
the
service
can
be
accessed
from
from
other
applications
and
from
other
environments
as
part
of
the
existing
workflows,
but
sort
of
underlying
all
of
this,
the
value
that
people
really
tell
us
it's
talk
more
about.
Is
this
access
to
the
law
which
enjoy
music
I,
have
I,
think
upwards
of
50,000
registered
endpoints
on
the
service
there's
about
10
or
12,000
that
are
active
in
any
given
year?
B
So it's a growing community, and there's a lot of value to be had from that. I'm going to switch out of slides there. Leah, I don't know how you want to handle questions: if you want to just keep going and save questions for the end, that's fine, or I can take questions as we go along; I don't mind either way.
B
OK, so I'll walk through the transfer and sharing capabilities, and maybe some of the admin things; I just want to stay on time here, what, 14 minutes or so? OK. So, logging into Globus: as I said, it's accessible via many existing identities. We federate with the InCommon Federation, so we have all the identity providers from InCommon as well as many providers from other countries, and we're in the process of integrating with the eduGAIN federation.
B
So this list, which is now some 350-odd providers, will grow to about 1,500 or so. We also support Google, down here, but I'm just going to use my University of Chicago ID. What this does is, basically, just like any other federated identity system, direct me to that identity provider; in this case, this is my campus login page.
B
So I log in with my campus credentials, and then I get redirected back to Globus, authenticated. The primary interface is the traditional left-and-right-pane file transfer screen that you've seen in many, many tools, I'm sure. I mentioned this concept of an endpoint: when I say endpoint, I'm referring to a storage system that has the Globus Connect client installed. That's the thing you install on the system to expose it, if you will, to the Globus service; we call it Globus Connect.
B
Given the large number of endpoints, we've got a pretty robust search capability to find the ones you're looking for; we also track recently used endpoints, and you can bookmark endpoints and paths, and so on. I'm going to click here on this endpoint, which is our Midway cluster, just a big, standard HPC cluster on campus here, and you'll see that when I do that, it drops me straight into my home directory on that system; that happens wherever possible.
B
Globus will do single sign-on. In this instance, I just logged in to Globus with my campus identity, so Globus has that credential; it presents it to this cluster, and it's recognized, because we do have single sign-on across all our campus resources, so I just get dropped straight in. Now let's say I've run some experiments, so I've got some data here; let me just find something to move.
B
Let's say I have this 100 MB file, nothing too big, and I want to move it to another resource outside the campus. Say I want to go to XSEDE and access, let's do Gordon, because I used that earlier. Gordon is a supercomputer out at the San Diego Supercomputer Center, so I'll move files from Chicago to Gordon. Now, when I try to access Gordon, it says "please authenticate," because Gordon does not recognize my University of Chicago credential.
B
Once I authenticate, I get dropped into Gordon. Actually, I'll just move these three files, so I select those, and firing off the transfer is as simple as clicking that button. Globus reports that the transfer request was submitted successfully. That doesn't mean the transfer has completed, but Globus has the request, and it's going to work on it and try to get those files over there as fast as possible while maintaining reliability. I have options: I can see the transfer down here, and I can choose to encrypt it.
B
Encryption is a requirement for certain types of data. It's obviously not a requirement for either of these systems, but as an administrator you can force it on for your system, so that users have to encrypt all their transfers. There are other options too: I can choose to sync, or delete, and so on. It looks like this transfer already finished; it wasn't very big, but I could go and look at it.
B
But that's all there is to a transfer; we try to make it as simple as possible. Again, we can navigate across tens of thousands of endpoints, and they all look very similar. I could just as easily go to, say, an S3 bucket I have on Amazon where we store some shared documents, and it looks just the same. That's another Globus endpoint; it just happens to be an S3 bucket. So this consistent user interface is important.
B
The reliability is important too. The other thing I mentioned is sharing, so let me just go up a level here. Let's say I wanted to share some data here, maybe my results from some runs on this light sheet microscope, and I want to share that with the Big Data Hub group. So I'll do a "big data hub" share. What I'm doing here is creating what's called a shared endpoint.
B
If I wanted to share with all you folks on the line, I would have you set up in a group somewhere, or have you individually identified, however you like; but note that none of you has access to my campus cluster. All I would need to do is either put you in a group with your Globus identities, or just invite you via email. Here I'll use a temporary user that I keep for demonstration purposes.
B
So I can search for a user to share with or, if I know them, just add them straight in, granting permission. Now, you'll see it appeared: Globus has added this permission, a read permission for this demo user. That user will now get an email saying, so-and-so has shared this data with you, click on this link to access it. It's really akin to Dropbox.
B
It works like other cloud sharing services. So I'll switch to another browser here and log in as the demo user. You can see that a recipient could use an ID from one of the existing identity providers that we're federated with, or they can always use what's called a Globus ID, a Globus username and password that you can go and create. So I'll just log in as the demo user, and that user goes to the endpoint we called "big data"; let's see how many of those there are. Not too many.
B
So there's our endpoint, that sort of virtual endpoint, if you will, that I created. When they look at the data, you'll notice they can't navigate up any further in the file system; all they can see is the data that I shared with them in this folder. At this point they can pull it down to their own systems and do what they want with it. So it's fairly straightforward, hopefully you agree: no need for special permissions, no getting an account on the system.
B
You
know
jump
through
all
those
hoops
or
you
know,
move
data
to
some
special
place
to
share
just
directly
shared
from
in
the
system
in
place
and
a
number
of
administrative
Kay
abilities.
We
have
a
console
that
gives
you
a
real-time
view
into
into
data
transfers.
You
know
some
some
big
transfers
going
on
right
now
and
if
I
mouse
over
some
of
these,
you
know
the
bands
will
change.
So
this
one's
a
pretty
big
one.
This
is
NCSA
Bluewater
supercomputer.
So
as
an
administrator,
you
have
this
view,
so
you
can
see
who's
moving.
B
You can see which transfers might be experiencing problems. Orange means that Globus is actively retrying; that's part of the value proposition, that we'll try to work through transient errors, so retries will always show up here, and they're actually a good sign: Globus is doing what it was designed to do. I'm going to pause there on the demo, because I want to stay on track, and switch back to show you a couple more slides. I'll just talk through how you actually deploy Globus.
B
So
I
mentioned
Globus
connect.
So
that's
a
piece
of
client
software
that
you
install
on
a
storage
system
to
turn
it
into
a
globe
within
point,
there's
a
personal
version,
so
it
allows
you
to
create
an
endpoint
on
your
laptop
or
so
your
desktop.
Whatever
single
user
machine,
it
doesn't
need
any
special
access
to
install
it.
There
is
a
global
connect,
server
version
which
most
foot
works
on
there.
You
know
campus
cluster
lead
server.
What
have
you
we?
We
support
many
different
types
of
Linux
distributions
and
I
mentioned.
B
We
support
lots
of
different
filesystem
types,
so
we
have
some
part
of
the
standard
package.
Any
POSIX
compliant
file
system
is
supported,
but
we
also
have
what
we
call
premium
storage
connectors.
These
are
the
ones
that
we
currently
have
up
there
in
some
form
and
available
supported
and
we're
working
on
on
connectors
for,
for
others.
Typically,
a
lot
of
the
object
cloud
stores
that
people
are
using
more
and
more
I'm
just
going
to
skip
through
some
of
these.
B
So
the
other
thing
that
you
can
do
with
globus
is
it's
actually
exposed
as
a
platform
server,
so
you
can
actually
use
the
globus
capabilities
when
you're
building
your
own
applications,
size
gateways,
data
management
portals,
etc.
So
you
have
access
to
a
full
set
of
ATIS
for
these
transfer
and
sharing
and
publication
functions.
You
also
have
access
to
all
that
federated
identity
and
single
sign-on
capabilities.
B
So
you
can
actually
use
the
globus
service
as
your
authentication
systems,
you
don't
have
to
build
user
name
and
password
and
all
the
trust
that
goes
with
managing
users
and
credentials
into
your
system.
Some
examples
of
people
that
have
done
this.
The
research
data
are
carried
at
NCAR
users,
globus
to
conserve
data
up
to
their
for
your
thousand
users
around
the
world.
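As a concrete illustration of driving Globus from your own application: a submission to the Transfer API is essentially a small JSON document naming the two endpoints and the items to move. The sketch below only assembles such a document; the endpoint IDs and paths are hypothetical, and the exact field set should be checked against the current Globus Transfer API reference before relying on it:

```python
import json

def build_transfer_doc(submission_id, source_ep, dest_ep, items, label=""):
    """Assemble a transfer-task document in the general shape the
    Globus Transfer API expects (verify the field names against the
    current public API documentation)."""
    return {
        "DATA_TYPE": "transfer",
        "submission_id": submission_id,
        "source_endpoint": source_ep,
        "destination_endpoint": dest_ep,
        "label": label,
        "DATA": [
            {
                "DATA_TYPE": "transfer_item",
                "source_path": src,
                "destination_path": dst,
            }
            for src, dst in items
        ],
    }

# Hypothetical IDs and paths, for illustration only.
doc = build_transfer_doc(
    "11111111-2222-3333-4444-555555555555",
    "campus-cluster-endpoint-id",
    "gordon-endpoint-id",
    [("/home/vas/data.dat", "/scratch/data.dat")],
    label="CI WG demo",
)
print(json.dumps(doc, indent=2))
```

In a real portal, this document would be submitted over an authenticated HTTPS call, and Globus would return a task ID that the portal can poll for status.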
B
Sanger does the same thing for genomics and imputation analysis, and there are a number of other examples. We have samples and open source code on our GitHub repo showing how you can use the platform, if that's of interest to folks. Before I close here, I do want to thank our sponsors: Globus development, at least, is funded by many of the federal agencies, as well as labs and some private foundations.
B
I
mean
some
some
of
these
numbers.
So
we
do
have
you
know,
services
growing
nicely.
We've
got
over
50,000
users,
now
a
bunch
of
endpoints
and
we
do
have
a
sort
of
a
two-pronged
mark.
So
we
have
a
free
service
that
nonprofit
researchers
can
use,
but
we're
trying
to
become
self-sustaining.
So
we
will
introduce
subscriptions
paid
subscriptions
a
couple
of
years
ago
and
we
do
have
I
want
to
thank
the
many.
B
This
is
a
subset
of
the
folks
that
actually
currently
pay
to
use
Globus
and
take
advantage
of
the
subscriptions
and
that
actually
helps
fund
our
operations,
because
you
know
we
can't
fund
those
things
through
through
federal
grants.
So
through
these
subscriptions,
you
get
access
to
beyond
the
transfer
which
is
free.
The
SharePoint
I'll
show
you
some
of
the
management
capabilities
and
the
number
of
other
features
that
I
didn't
get
it
chance
to
to
go
three
here.
Those
are
all
supported
by
my
subscription.
A
Sorry, my apologies. I have a question: you mentioned that you're using a kind of optimized file transfer method. Can you speak a little bit more to that, and maybe talk about the bandwidth you get on a good connection and on a bad connection?
B
Oh, certainly. We have two protocols; the primary one is GridFTP. If you're not familiar with it, it's an open standard based on FTP, but it gets around all the constraints of traditional FTP; it's a parallel form of FTP, and we get parallelism along two dimensions. First, you have the option to open multiple concurrent sessions between the two endpoints, so you can spread your files across them.
B
You can spread individual files across all those sessions, and then, within each concurrent session, the protocol allows you to create multiple parallel streams, so each file gets chunked up across those streams. You get these two levels of parallelism, and that's what gives you the performance. The protocol also does some other things to overcome the shortcomings of traditional FTP, such as pipelining commands. So it does get performance on the order of 20 to 100 times that of scp on some links.
B
In the performance tests that people like ESnet have done (if you're not familiar with ESnet, they run the networks for the DOE; they've done a lot of independent testing and continue to work with us), they've managed to get disk-to-disk throughput of close to 8 gigabits per second on a 10-gigabit link, and even that was, I think, limited by the I/O bandwidth of the file system. So with GridFTP as the underlying protocol, we've demonstrated that you can saturate a 10-gigabit link; we've done those tests. In fact, last year at Supercomputing we demonstrated roughly 85 gigabits per second disk to disk across the wide-area network, with the two endpoints a couple of thousand miles apart. So, yes, the protocol is pretty robust, but a lot of it does depend, obviously, on what you have on either end.
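The two levels of parallelism described in this answer, several concurrent sessions each carrying several parallel streams, can be sketched with plain threads. This simulation just splits a payload across sessions and streams and reassembles it by offset on the receiving side; real GridFTP negotiates this over TCP, and the session and stream counts here are arbitrary:

```python
from concurrent.futures import ThreadPoolExecutor

def split(data, n):
    """Split `data` into up to `n` contiguous chunks, tagged with offsets."""
    step = -(-len(data) // n)  # ceiling division
    return [(i, data[i:i + step]) for i in range(0, len(data), step)]

def transfer(payload, sessions=4, streams_per_session=4):
    """Simulate GridFTP-style parallelism: the payload is spread over
    concurrent sessions, and each session chunks its share across
    parallel streams. Chunks may arrive in any order and are
    reassembled by offset at the destination."""
    received = {}

    def send_stream(offset, chunk):
        received[offset] = chunk  # "wire" delivery

    def run_session(base, share):
        with ThreadPoolExecutor(max_workers=streams_per_session) as streams:
            for off, chunk in split(share, streams_per_session):
                streams.submit(send_stream, base + off, chunk)

    with ThreadPoolExecutor(max_workers=sessions) as pool:
        for base, share in split(payload, sessions):
            pool.submit(run_session, base, share)

    # Reassemble by offset, exactly as the destination would.
    return b"".join(received[o] for o in sorted(received))
```

Offset tagging is what lets out-of-order delivery still yield the original bytes, which is also why a failed chunk can be retried independently of the rest of the file.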