From YouTube: 2019-Oct-24 :: Ceph Tech Talk - Ceph at Nasa
Most people probably don't know about us, so I'll spend a few minutes talking about who we are. We were founded in 1965, back in the era when satellite remote sensing really took off; it actually started here. The Center has been involved in a lot of hardware over the years: satellite instruments going back to Explorer 7 and the ATS satellites, ground-based instruments like the AERI and the Scanning HIS, and one of the instruments on the Hubble was built here. But we don't only do hardware.

We also do a lot of data validation, calibration, and algorithm development for deriving products out of sensor data. There are studies in numerical weather prediction, and we have the Cooperative Institute for Meteorological Satellite Studies here. A little different is that there's an ice core drilling team, but that does relate to the atmosphere, because you can tell what the atmosphere was like in the past based off of ice cores.
I work on a couple of very specific projects focused around data processing. The main one is the NASA Atmosphere SIPS. There are two satellites in orbit, Suomi NPP and NOAA-20, and there are more planned in upcoming years, JPSS-2 in 2021 and beyond. We get basically the raw satellite data, and we work with the science teams who develop algorithms to create different products. We work with them to refine their algorithms to run well in our HTC environment, reprocess the data, help them with any validation, and send the products up to NASA. We also do near real-time data, which is about as fast as you can get it from a polar orbiting satellite.

Those haven't launched yet; they're hopefully coming up in 2020. That adds a much lower data volume than the other instruments, but it also gives us six more instruments that are constantly collecting, that we're generating data for and tracking.

Our data comes in formats called granules, and they're usually time based. On the right side here we have an image of basically Spain, Western Europe and a bit of North Africa; that's six minutes of what the satellite sees as it's going overhead.
We get data dumps at two points in the orbit: one over Svalbard, which is way up in Norway, and the other from Antarctica. We receive this data, which is usually about five gigs each, and it's up to us to deduplicate some of it for the near real-time stuff, whereas the more official data that comes in is cut into segments that we don't really have to deduplicate at all.

So we split that data, and what we end up with is just under ninety thousand files per year per instrument per product (six-minute granules work out to 240 per day, or roughly 87,600 per year). There are multiple instruments and many years of data; the first satellite, Suomi NPP, launched in 2011, and there are many products for every instrument, whether L1, L2 or L3. The point of this slide is to say: hey, we have a lot of files, and we have to keep track of all of this. There are also other polar orbiting satellites that we have to collect data from for science team validation.
MODIS, for example, was the predecessor to the primary instrument we use now, so we keep a lot of that. We just finished ingesting some CALIPSO data, which is kind of a space-based lidar. There are also a couple of other projects in the pipeline: one that we're confirmed to be working for and another that we're hopefully going to be working for, so we're growing pretty fast and staying busy.

Oh, I threw this slide in here because I threw out some terms like level 0. Level 0 is the raw instrument data. The top-left picture shows what you get when you combine a few L1 bands: something like a normal RGB-type image. The L2 products are where you get the science-algorithm-derived information, such as a cloud mask, aerosol optical thickness, cloud top heights, pretty much anything to do with the atmosphere. Level 3 I don't have in the picture; that's just kind of an average put over a grid over a certain amount of time. And NRT is near real-time.
We also need to be able to handle a hundred million plus objects. We want it to work well with HTC in general, preferably not POSIX: we store all of our file information in a database anyway, so object storage works well. It has to be good with programmatic data access, because everything is automated in our system, it has to be affordable, and hopefully it also works for VMs and Kubernetes.

What I'm trying to show in the picture on the right is the two different data rates, one for ingest and one for generation, and that the sizes vary greatly. Some days we ingest four terabytes because we're bulk ingesting; other days we're at the normal five or six hundred gigabytes. It's pretty much the same with the data we generate during processing or any reprocessing: some days it's the normal 500 gigabytes, other days a terabyte or whatever.
So we prefer density over performance for our storage: anything from six to twelve terabyte disks. I have no plans to go greater than twelve terabyte disks, since I don't want to manage that, and there's a decent amount of IOPS available from nearline SAS. We don't use any NVMe or SSDs in my cluster, but we do have a small pool of 10K SAS for hosting things like RBDs for virtualization and Kubernetes.

We split into two clusters; I'll talk about why a little later. Our production cluster is kind of small, about four hundred terabytes. Our archive cluster is the main one; that's a little under eight petabytes. There's also a dev cluster that's just my playground for upgrades and whatnot. We run mostly Dell hardware, with OSDs on XFS. We don't do BlueStore yet; I'll probably describe later why we don't, there's a specific reason. And mostly 20 gig networking for that.
We started this back in 2015, when we were getting into using Ceph. At the time CephFS meant 3x replication, and we just straight-up couldn't afford that. We also kind of assumed other people were using librados, because it's so simple to utilize, so everything was kind of pointing towards it making sense: okay, let's use librados instead of the Gateway for an object store.

But when you do this, you end up having to build a bunch of necessary tools around librados. You just have objects, with no good way of tracking them or anything like that. And for authentication, the Gateway gives you the ability, through the S3 or Swift APIs or whatever, to do better, nicer auth, but CephX auth and namespaces work well for us.

So why librados and not the Gateway? I just figured, why add another layer if it's not really necessary? We don't require the advanced auth or anything; there's a very small subset of users that utilize our cluster, really only a handful of people. We don't have scientists logging in or getting access to it directly. We're also limited in our networking: we only have 10 gig switching, and once you have gateways in front of the cluster, everything goes through a gateway.
When we started this we had a hundred gigabits of compute; now we have two hundred gigabits of compute. So before, maybe a few gateways could have worked if you bonded them all together, but now, without 40 gig or whatever on the gateways, putting gateways in front of it would have been a bottleneck. That was a problem, and that's why you talk to librados directly.

You talk to the OSDs directly; there's no built-in bottleneck to create there. And like I mentioned, CephX works great for our use cases. We only have a few people. Everything is set up so that our apps usually have their own namespaces and a client keyring that accesses them, and everything is controlled via Puppet and Unix permissions for the security of those keyrings. Once again, there are only a few people who can actually log in via an SSH session, so it's fairly easy to manage all that.

If there's a case where you have hundreds of users, that's probably more of a reason to deal with the Gateway instead of librados. And we have very few pools. It's mostly one data pool per cluster, and that makes up almost all of the data, plus a pool or two for RBDs and other stuff as well.
We had to build a whole bunch of stuff around this to make it work. We're a Python shop, so we use the librados Python bindings, and those work well, but they're not super easy to use all the time. So we rely on a library built around them, called rados-lite. It's just an easy way to access the Python bindings and wrap up the standard actions that you do with objects. It also adds a nice streamer object using Python buffered readers.
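As a rough illustration of what a thin wrapper like that can look like, here is a minimal sketch. The class and method names are made up (this is not the actual rados-lite code), but the `rados` calls are the standard librados Python bindings:

```python
import io
import rados

class ObjectStore:
    """Minimal sketch of a convenience wrapper over the librados Python bindings."""

    def __init__(self, conffile, keyring, pool, namespace=""):
        # Connect with a client keyring, like the per-app keyrings described above
        # (paths and names here are illustrative).
        self.cluster = rados.Rados(conffile=conffile, conf={"keyring": keyring})
        self.cluster.connect()
        self.ioctx = self.cluster.open_ioctx(pool)
        self.ioctx.set_namespace(namespace)

    def put(self, name, data: bytes):
        # write_full replaces the whole object in one call.
        self.ioctx.write_full(name, data)

    def get(self, name, length=None, offset=0) -> bytes:
        if length is None:
            length, _mtime = self.ioctx.stat(name)   # stat() returns (size, mtime)
        return self.ioctx.read(name, length, offset)

    def stream(self, name, chunk=8 * 1024 * 1024):
        # A simple streaming reader built on ranged reads, similar in spirit to the
        # buffered-reader object mentioned in the talk.
        size, _mtime = self.ioctx.stat(name)
        buf = io.BytesIO()
        for off in range(0, size, chunk):
            buf.write(self.ioctx.read(name, min(chunk, size - off), off))
        buf.seek(0)
        return buf

    def close(self):
        self.ioctx.close()
        self.cluster.shutdown()
```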
The only problem that we've really run into with this was a pool alignment problem that was introduced due to some changes a few versions back. We got that fixed, and it only came up because we were migrating pools: we had pools built back in the Hammer days and pools built in the Luminous days, and there was a change in the defaults of what Ceph was doing that had to be addressed.
When we work with objects, we kind of just pass everything around using URLs that define the objects. It follows a standard URL syntax for encoding everything within a single string: you can specify your user, the cluster, what pool, the namespace and the object key, and you can also define the offset and size that you want to read from as well.
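The exact scheme isn't shown in the talk, but the idea is easy to picture. This sketch assumes a made-up `rados://` URL layout purely to show how user, cluster, pool, namespace, object, offset and size can all ride in one string:

```python
from urllib.parse import urlparse, parse_qs

def parse_object_url(url):
    """Parse a hypothetical object URL such as:

        rados://client.app@archive/mypool/mynamespace/some_granule.h5?offset=1048576&size=2097152

    The scheme and field layout here are illustrative, not the team's actual format.
    """
    parts = urlparse(url)
    user, _, cluster = parts.netloc.partition("@")
    pool, namespace, obj = parts.path.lstrip("/").split("/", 2)
    query = parse_qs(parts.query)
    return {
        "user": user,
        "cluster": cluster,
        "pool": pool,
        "namespace": namespace,
        "object": obj,
        "offset": int(query.get("offset", ["0"])[0]),
        "size": int(query.get("size", ["0"])[0]) or None,   # None means "read it all"
    }
```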
When you have a hundred million files and you're not using POSIX access to easily trawl through everything, you've got to track it somehow. So everything gets stored in multiple Postgres databases, but primarily one, which we call Dog, that tracks every file we ingest. Every file gets its integrity checked about every two months, and Ceph hasn't really ever corrupted anything. Occasionally you get a bad block and something happens, but Ceph is able to repair that thanks to the erasure coding; other than that, we've never had problems with it.

We also use a utility we call pgtrack, so an update in one database, like the ingest database, updates Dog, which is the primary database of all the files. If a file changes which Ceph cluster or pool it's in, that change moves along with it. In addition to that, we also have a FUSE filesystem built on top of Dog that gives us a POSIX layer to Ceph with immutable objects, which sounds crazy, but it was really originally developed to make Lustre usable, since once you put a lot of files in Lustre the directory listings fall over.

It also helped us once we started using Ceph alongside Lustre; for a time they were both involved. We built rados support into it as well, and it will actually glue a filesystem together out of multiple different sources. So we actually had a Lustre cluster and librados presented in a single POSIX filesystem that we could present to ourselves, the science teams, whoever needed it.
This one is kind of old, and it's not the best technology, so we're trying to move away from it with a new version of Dog: we want to eventually get away from POSIX, use a better database schema, and get rid of the FUSE filesystem. But the scientists, and we ourselves, do occasionally enjoy browsing the data, so we're coming up with a way to keep that: more of a Python API based method of creating HTTP directory listings, based off of some assumptions about the data and what's in the database.
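A minimal sketch of what serving database-driven directory listings over HTTP might look like; the table and column names are invented for illustration (the real schema isn't described in the talk), and sqlite3 stands in for their Postgres databases just to keep it self-contained:

```python
import sqlite3   # stand-in for Postgres; psycopg2 queries would look much the same
from http.server import BaseHTTPRequestHandler, HTTPServer

DB = sqlite3.connect("files.db", check_same_thread=False)
# Hypothetical schema: files(product TEXT, year INT, name TEXT, size INT)

class ListingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Map /<product>/<year>/ onto a database query instead of a real directory.
        parts = [p for p in self.path.split("/") if p]
        if len(parts) != 2:
            self.send_error(404)
            return
        product, year = parts
        rows = DB.execute(
            "SELECT name, size FROM files WHERE product = ? AND year = ? ORDER BY name",
            (product, year),
        ).fetchall()
        body = "\n".join(f"{name}\t{size}" for name, size in rows).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ListingHandler).serve_forever()
```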
A
So,
even
more
times
we
have
to
track
our
files
here.
Everything
like
an
interest,
a
dog,
that's
up
what
kind
of
hooked
together
via
this
PG
track
object
where
you
know
what
object
updates
together.
So
we
have
this
utility
called
SEF
DB,
where
it's
just
constantly
listing
our
pools
and
touring
information
metadata
and
the
database
about
it,
takes
about
10
hours
to
list
almost
nice
and
you
seven
million
objects.
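A rough sketch of what that kind of pool inventory pass can look like with the librados Python bindings; the database table is hypothetical (and sqlite3 again stands in for Postgres), but `list_objects()` and `LIBRADOS_ALL_NSPACES` are the real binding calls:

```python
import sqlite3   # stand-in for their Postgres databases, to keep the sketch self-contained
import rados

def inventory_pool(conffile, pool, dbpath="cephdb.sqlite"):
    """Walk every object in a pool and record its namespace and key, cephdb-style."""
    db = sqlite3.connect(dbpath)
    db.execute("CREATE TABLE IF NOT EXISTS objects (pool TEXT, namespace TEXT, name TEXT)")

    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    ioctx = cluster.open_ioctx(pool)
    # LIBRADOS_ALL_NSPACES makes the listing cover every namespace in the pool.
    ioctx.set_namespace(rados.LIBRADOS_ALL_NSPACES)

    rows = ((pool, obj.nspace, obj.key) for obj in ioctx.list_objects())
    db.executemany("INSERT INTO objects VALUES (?, ?, ?)", rows)
    db.commit()

    ioctx.close()
    cluster.shutdown()
```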
This is something that just runs in our Kubernetes cluster; it's always there, always going. We also use it to track the actual state of Ceph versus what we want it to be. So we have some cleanup processes: if something is removed from the Dog database but the Ceph side still has it, there's a process that notices, does a diff, and gets rid of the extra files.

The really big advantage of this is that, while I keep mentioning how one database tends to update the other, this one is not updated by anything except its own listing process. So if somebody does a huge delete on accident and removes an extra fifty million files that they weren't supposed to, or somebody fat-fingers an SQL update or delete, we still have a spot of truth.

It tells us what is actually in our Ceph cluster, and it makes it easier to recover that information. We do have database backups, but it's always good to have a source of truth, and if you're going to be using librados directly I'd highly recommend something like this: something that is not linked to anything else and is a source of truth for what's in the Ceph cluster.
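A sketch of the kind of diff that cleanup process might run. The `objects` table is the inventory from the earlier sketch and `tracked_files` is a hypothetical name for the file-tracking side; the idea is just a simple anti-join between the two:

```python
import sqlite3   # stand-in for Postgres; the real databases are separate Postgres instances

def find_orphans(dbpath="cephdb.sqlite"):
    """Objects present in the Ceph inventory but no longer referenced by the
    file-tracking database are cleanup candidates; the reverse diff would flag
    files we think we have but the cluster doesn't."""
    db = sqlite3.connect(dbpath)
    return db.execute(
        """
        SELECT o.pool, o.namespace, o.name
        FROM objects AS o
        LEFT JOIN tracked_files AS t
               ON t.pool = o.pool AND t.namespace = o.namespace AND t.name = o.name
        WHERE t.name IS NULL
        """
    ).fetchall()
```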
As for problems we've had with librados, using it directly, there really aren't too many we've actually run into, but a while back we hit one. We get these pieces in as roughly five gig files, and they get cut down into six-minute, smaller chunks of files for the level 1 and level 2 processing. When a file came in, it would kick off plenty of jobs that would each want to download this one big file and then split their piece off of it. That was a big problem: jobs that usually wrapped up in fifteen minutes were taking two hours to finish, basically because Ceph just wasn't able to handle 20 jobs all requesting the same 5 gig file at the same time. The single primary OSD was having a hard time fetching the pieces, because the pool is erasure coded, rebuilding the object and sending it all out.
We had to develop a method to get around that, and what we did is this PDS server thing, where another database indexes our files into one-minute chunks. This is where reading by size and offset, via the URLs and the Python library, comes in: once you query the database, you know the starting offset into the object and the size of the data you need, and now our jobs only download maybe 200 megabytes each of that level 0 data, and the problem doesn't really exist anymore.
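A minimal sketch of that pattern, assuming a hypothetical index lookup that hands back the object name, offset and length for a one-minute chunk; `ioctx.read()` with a length and offset is the real librados call that makes the partial fetch possible:

```python
import rados

def fetch_chunk(ioctx, index_row):
    """Read just one time chunk out of a large level-0 object.

    index_row is assumed to look like (object_name, offset, length), i.e. whatever
    the PDS-style index database returns for one minute of data.
    """
    name, offset, length = index_row
    # read(key, length, offset) pulls only the requested byte range, so a job
    # grabbing one minute of data no longer forces the OSDs to reassemble the
    # whole 5 GB erasure-coded object.
    return ioctx.read(name, length, offset)

# Usage sketch (names are illustrative):
# cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
# cluster.connect()
# ioctx = cluster.open_ioctx("level0")
# data = fetch_chunk(ioctx, ("some_level0_object", 3 * 1024**2, 200 * 1024**2))
```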
This is really only something you'd experience in a situation like ours, using big objects and librados directly. If you're using the Gateway it probably wouldn't be a problem, because the objects get put into many different 4 MB chunks spread across many OSDs and RGWs.

Here's our pipeline in one shot, mostly. We get our data in from NASA, we ingest it, we shove it into Ceph, and it mostly just sits there, like a data lake of sorts. The PDS server reads it back from Ceph and indexes it, and that's fine, and then there's the piece way over on the right of the diagram.
Sometimes you just need POSIX, and there's this open source product from NASA called Worldview that we utilize for visualizing our data. We use it, the science teams use it, and for that you just need the POSIX layer. So CephFS and Kubernetes go together to create the persistent volumes in Kubernetes that serve that and a few other apps where we just need a POSIX layer.
About our back end: I did some testing of librados versus CephFS out of curiosity, because now CephFS supports erasure coding via the BlueStore back end, which is awesome, so I was curious what the processing time differences are. On one of our compute nodes, for a single granule, we would download the file into /dev/shm and process it from there; we try not to rely on spinning disk too much in my cluster. On the left we see pulling from rados directly into /dev/shm.

The middle one is using the PDS server into /dev/shm, and the other one is working directly in CephFS, processing that single granule from level 0 to a level 1 product, basically. In that case CephFS was marginally faster; these all came in around eight point three to eight point four minutes, so roughly the same. Okay, let's try a hundred granules on a 48-thread server, so 48 at a time. Obviously the processing times are going to go up due to network contention.

The CPUs are just running at a hundred percent, but even then we end up seeing rados directly into /dev/shm on the left there, and CephFS is still very competitive. I did two different CephFS runs: one where we copied the granule out of CephFS into /dev/shm, and the other running directly out of CephFS, so all the processing happens there, and it's still very competitive. Then I unleashed the entire cluster on it, about 2,400 cores running at the same time against CephFS.

We realized, on the far right here, that there's a problem with our PDS fetch method; that one can mostly be ignored, it's an outlier. But on the left you can see CephFS-only was only nine minutes, whereas the copy-out-of-CephFS run was within about thirty seconds of that, and both still came in faster than our normal method of doing things.

The point I was trying to get across here, I suppose, is mostly that CephFS has been production-ready for some time now, for many versions, and it works very well when you've got 2,400, maybe even up to 2,800 jobs banging on it at the same time. I think it can work really well in the HTC environment.
But then it starts becoming a question of why we don't use CephFS instead of librados like we're doing now, and that's something we're kind of thinking about and considering as time goes forward. We've built so much around where we're at right now that it would take too much effort; it's enough to kind of put us off at the current time. So, we've had some growing pains.
This cluster started out as five nodes, then six nodes, maybe only half a petabyte, and now we're at 50-some nodes and almost eight petabytes, with more on the way here. Our early nodes had CPU and memory problems: per OSD and per terabyte they were sized too small. Those nodes came from a Lustre cluster, and in Lustre you really didn't need too much memory or CPU to run it well, so we'd have issues because of nodes going down or things like that happening.

In the two and a half, three years we've had a couple of full cluster outages. The first, a couple of years ago, was caused by a small reweight-by-utilization that just kind of sent our cluster into a frenzy. I never got a good root cause for that; there must have been a bug somewhere in how the OSD maps were being regenerated, and the cluster wasn't keeping up.
That node sizing wasn't working too well for us anymore, and even worse, it came to a head during the Mimic upgrade. Things were going great with the reboots and the upgrades and everything, and then I got to one node, rebooted it, and everything just went to a mess. The OSDs were thrashing, swapping and crashing, and it cost us the cluster on multiple occasions. We got things running, mostly, but the big problem was that the mons were constantly re-electing a quorum.
The other tricky thing, hand in hand with this, was that we ended up corrupting data across the cluster. Nodes would end up swapping and hanging and I had to force reboot them, and that was damaging blocks. But we actually lost no data, thanks to the erasure coding; everything was recovered, and even then it was only a few objects here and there that needed to be fixed up a bit due to corrupted blocks sitting on one or two nodes.
As for our future, we're always trying to look for ways to do things better and simpler. We're looking into this Zarr format. Zarr is not a product, it's an open-source utility; if you work in the geosciences you've probably seen it. It's a different way of storing data, done more at the variable level instead of the file level, so you can reduce the amount of data volume that needs to be brought down just by selecting what you want.
You can say, I only want variables X, Y and Z in this file instead of the entire thing. And Zarr works directly with S3, so that means we could set up a RADOS gateway and use it. It could also be kind of cool to get it to work directly with librados, but we'd have to write our own driver for that.
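Zarr stores are essentially key-value mappings, so a librados-backed store driver is not hard to picture. This is only a sketch of the idea; the class below is an assumption on my part, not something the SIPS team has built:

```python
from collections.abc import MutableMapping
import rados

class RadosStore(MutableMapping):
    """Hypothetical Zarr-style store that keeps each chunk as one RADOS object."""

    def __init__(self, ioctx, prefix=""):
        self.ioctx = ioctx       # an open rados.Ioctx for the target pool/namespace
        self.prefix = prefix     # lets several arrays share one pool

    def __getitem__(self, key):
        name = self.prefix + key
        try:
            size, _mtime = self.ioctx.stat(name)
            return self.ioctx.read(name, size)
        except rados.ObjectNotFound:
            raise KeyError(key)

    def __setitem__(self, key, value):
        self.ioctx.write_full(self.prefix + key, bytes(value))

    def __delitem__(self, key):
        self.ioctx.remove_object(self.prefix + key)

    def __iter__(self):
        for obj in self.ioctx.list_objects():
            if obj.key.startswith(self.prefix):
                yield obj.key[len(self.prefix):]

    def __len__(self):
        return sum(1 for _ in self)
```

With something like `zarr.open(store=RadosStore(ioctx))`, array chunks would then be read and written as individual RADOS objects, which fits the variable-level access pattern described above.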
It's something we're looking into, and it seems to be the direction the geosciences are moving for data access. We'd also love to move to BlueStore, but with BlueStore you can't have objects larger than 4 gigs, and we do have objects that are five, six, seven gigs.
That's only a minority of the cluster, though: about 160,000 objects and maybe 250 terabytes worth, but it is what's keeping us from moving to BlueStore. We've thought about building striping into our own tools (a sketch of that follows below), and we've also thought about using libradosstriper, but we'd need Python bindings for that.
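A sketch of what striping large granules in our own tooling could look like, assuming a simple convention of fixed-size part objects plus a small manifest; the naming convention and part size are assumptions, not an existing SIPS format:

```python
import json
import rados

PART_SIZE = 4 * 1024**3 - 65536    # stay safely under BlueStore's ~4 GiB object cap
WRITE_CHUNK = 64 * 1024**2         # keep individual writes well under the OSD max write size

def write_striped(ioctx, name, data: bytes, part_size=PART_SIZE):
    """Split one logical object into part objects plus a tiny JSON manifest."""
    parts = []
    for i in range(0, len(data), part_size):
        part_name = f"{name}.part{i // part_size:04d}"
        part = data[i:i + part_size]
        for off in range(0, len(part), WRITE_CHUNK):
            ioctx.write(part_name, part[off:off + WRITE_CHUNK], off)
        parts.append(part_name)
    manifest = {"size": len(data), "parts": parts}
    ioctx.write_full(name + ".manifest", json.dumps(manifest).encode())

def read_striped(ioctx, name) -> bytes:
    """Reassemble the logical object (large parts could be read in chunks the same way)."""
    manifest = json.loads(ioctx.read(name + ".manifest", 1 << 20))
    return b"".join(
        ioctx.read(part, ioctx.stat(part)[0]) for part in manifest["parts"]
    )
```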
I've mentioned that to some other people and they kind of suggested that using libradosstriper isn't a great idea, mostly because not that many people use it, a handful from what I've seen, and even fewer in an HTC environment like ours. But when you boil it down, is it better to roll our own striping into something like rados-lite, or is it better just to use libradosstriper, where there are actually a couple of other people I know using it and it is at least lightly supported?

The other big issue is our cluster: the big cluster is stuck on the Bobtail CRUSH tunables.
Making that change means pretty much everything in the cluster is going to move, so we're preparing for that. There are some interesting ways of using the upmap feature for it: CERN has a script that can be used where you make your change, then use upmap to basically pin everything where it is, and then slowly rebalance the changes in over time. The catch is that we have some Jewel clients, and upmap requires Luminous or newer clients, so that needs to be addressed before it can happen.
We're also looking into making more use of xattrs, and Ceph checksums are something we actually implemented recently, since the last time I talked about this slide, so I crossed one of those off. It's just basic stuff: storing the checksum with the object as an xattr, so when we write an object it's part of the same write operation and the checksum always gets stored at that time. And then we have the checksums in the databases as well.
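A sketch of that pattern with the librados Python bindings; the xattr name and the use of SHA-256 are my assumptions, and this shows the data write and the xattr as two separate calls rather than the single compound write operation mentioned above:

```python
import hashlib
import rados

XATTR_NAME = "user.sha256"   # illustrative xattr key, not an established convention

def write_with_checksum(ioctx, name, data: bytes):
    """Write an object and record its checksum, both on the object and for the database."""
    digest = hashlib.sha256(data).hexdigest()
    ioctx.write_full(name, data)
    ioctx.set_xattr(name, XATTR_NAME, digest.encode())
    return digest              # caller stores the same digest in the tracking database

def verify(ioctx, name) -> bool:
    """Re-read an object and compare against the checksum stored alongside it."""
    size, _mtime = ioctx.stat(name)
    data = ioctx.read(name, size)
    stored = ioctx.get_xattr(name, XATTR_NAME).decode()
    return hashlib.sha256(data).hexdigest() == stored
```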