From YouTube: SunPy Coordination Meeting 2022 - Thursday
Description
Participate in the chat and the call here: https://openastronomy.element.io/#/room/#sunpycoordinationmeeting:openastronomy.org
A
Right, I know there were a few other people who said they were going to attend, a few other xarray folks, Alec and Jim, but...
B
Can we start this conversation with just Martin? And if other people join, that would be great. Oh yeah, yeah, no, yeah. I didn't hear about that.
A
So, Martin, Jim, do you want to just both give sort of a quick introduction?
C
Yeah, okay, so I'm Martin Durant. I work for Anaconda, and, for the interest of this crowd, I work on fsspec and Intake and kerchunk and Dask and Zarr, and these are all things that you may be interested in, and I also have connections with the Pangeo group. So xarray and coordinates and stuff are definitely of interest to me, and...
C
I used to be an astronomer, yes, neutron stars, so quite divorced from the sun, but yeah. I did ground-based infrared and optical, and also X-ray and UV satellite stuff.
D
Whereas I have a background in neuroscience, so pretty far afield, but I got there by way of covering the surface of the brain. The surface of the earth was not that different, and now I'm moving from the earth to the sun. I work at Anaconda.
D
They use Zarr very heavily, and so, essentially, when you want to scale up a large multi-dimensional data set, there aren't that many options for visualizing it naturally, without having to extract the data, copy it, make your own local version and then view it. And so I've been working with the Pangeo group recently, trying to make it seamless to start with conventional data that's representing the sun and be able to visualize it within the right coordinate system, with all the right labeling, smoothly, having everything connect up all along.
B
Actually, one quick question before we continue on with Martin and Jim. Martin mentioned a lot of packages there, including xarray, and, Jim, I think one of the packages you mentioned you work on is, I understand, built on xarray. So I guess my question is: what sort of level of expertise would you consider you have specifically with xarray, from, like, you know, a user's perspective, but also with its philosophy and maybe roadmap?
C
A couple of PRs to xarray, and I've delved particularly into the storage side of things there, because that connects better with the stuff that I do, and I've been following the indexing work that's been done, which is the coordinate-relevant one. But I'm not on the xarray team, and I haven't developed it much; it's more the tools around it, and...
D
There's a person on my team who is currently on holiday. He lives in Wales, though. He's the one that I'm expecting to work on the xarray work, so you'll probably have some personal interactions later. That's Ian Thomas.
B
Great. I can go next, then. I'm Dan Ryan. I am a solar physicist, still, so far so good, but, as the fact that I'm in this meeting suggests, I've spent various amounts of my time on software development, both in and adjacent to solar physics, at some point.
B
The most relevant thing for this discussion is that I'm one of the lead developers of the ndcube concept and package, which, we found out this week, is starting to gain a little momentum in the sort of solar community, which is great. I've used xarray twice. I would consider myself very much not an expert, but my understanding of the concept is that, you know, xarray and ndcube are, on a high level, trying to solve basically the same problems, but, for various technical and community and historical reasons...
B
...they are apparently incompatible solutions, at least for now, and so, you know, both feel they both need to exist at the moment. So, yeah, I'd be looking to just get out of this a little bit of a better understanding of xarray, and maybe the community that's building up around it, that it sounds like you two guys are good representatives for, yeah, and, just basically, if nothing else, to just better understand the blockers and what the consequences would be of the solar community going...
D
That makes sense. And, just to give a little bit more background: if you're doing work on brains, you care about multi-dimensional arrays, doing volumes of the brain tissue. And xarray started in the geosciences community, but it is heavily used in the microscopy community, in the brain imaging community, and so it's already gone through a transition of trying to reach another community.
D
It was originally written to be as general as possible, but entirely written by climate scientists, by geoscientists, and so they made certain assumptions, and they've relaxed the assumptions over time, and it's actually probably an easier jump to heliophysics work than it was to the imaging community.
F
I'm Stuart. I am a developer of SunPy and have also done a lot of work on ndcube, primarily motivated by work I'm doing for the Daniel K. Inouye Solar Telescope: representing the coordinate information and all the data from that facility, making use of Dask and various other things, but still stuck to the way Astropy and the community like to represent coordinate information.
A
Yeah, so, yeah, I'm Will Barnes. I'm one of the SunPy developers. I'm not much of an xarray user at all. I have been a fairly heavy Dask user for several years, but, for the purposes of this discussion, mainly I wanted to be sort of a facilitator between people like Stuart and Danny, who have been...
A
...you know, working on ndcube for a while, and then to have this discussion with the sort of the xarray community, to sort of better understand what the frictions are between our approach to multi-dimensional, coordinate-aware data structures and the sort of xarray approach. But the main reason being that, and we were just sort of talking about this a bit over lunch and afterwards, you know, there's a significant portion of the scientific Python community obviously using xarray, and using xarray plus Dask plus HoloViz...
A
...for these really large out-of-core data sets, and, obviously, with, you know, things like the HelioCloud, and people's desire to kind of do large data analysis, you know, out of core, outside of their laptop, on solar physics data, we need data structures that can accommodate those types of workflows. And certainly ndcube can in a lot of ways, but clearly the community has standardized around tools like xarray, Dask, all of this, and we don't want to lock ourselves out of those communities.
B
I think everyone's sort of increasingly aware of ndcube, and people use ndcube or are interested in using it, and probably, I'm sure, there are some people who've used xarray as well. So I'm presuming the fact that they're here means they're interested to see where this future world might go, because it'll have consequences on their work. So then my last question for Jim and Martin is: how familiar, if at all, are you guys with ndcube? Because that's probably important to know where to start the conversation.
D
I'm glad to know that what you said about what you're expecting from the interface to xarray, and why xarray is interesting, seems accurate to me. Well, I understand that ndcube and xarray are in...
B
Well, in that case, just to give a very quick intro, then, to ndcube... oh, Stuart, do you want to start off with something else? I was just going to say... I think...
F
I think ndcube, to some extent, is... I mean, it's designed to facilitate using a WCS, so that, like, well, using a WCS and data together, much in the same way that xarray is designed to, like, facilitate using data and coordinates at the same time.
F
The impedance mismatch between these two things comes from how astronomers have historically stored, and continue to store, their coordinate information, and, as I said, I think, before we really kicked this off, I haven't done enough with xarray to know the answers to the questions I have. So this is a particularly evil, and also actually a very bad, example of a WCS...
F
...because this one actually would fit in xarray really well, because it's all tables, but, anyway, it was the one I had in my... it was the one I...
F
The reason this is in my history is because I was fixing bugs in it yesterday, because that never happens, so, anyway, that's an aside. So I think there were two main things that are unclear to me, and, like, if it's the wrong one, how much work it would be to get xarray to support it, right? Like, I'm, you know, I'm sure it is definitely possible that we could get astro data into xarray and have it work well.
D
Now, and maybe in the next few months... there's been a lot of work being done on flexible indexing in xarray, some of which is motivated directly by WCS, and other parts for other reasons, and I can just tell you, it's a... there's a maelstrom: there's a whole bunch of people doing a bunch of work right now, and it's hard to say what is and is not supported as of today. In fact, that's Ian's main task: get on top of it.
F
It's more possible now than it was in the past, but, given that it's so new and there aren't many examples, like, for somebody not, like, engaged with the development of that stuff, I have no idea where to start in testing it out. I'd be very up for making a proof of concept, but I would definitely need somebody with a lot more xarray experience to, like, work with.
D
Just that, also, I was talking with Tom Nicholas. He seemed to be my best point of contact on xarray for these issues. He's working simultaneously on adding solid unit support, which may also be of interest to you, for having each dimension have associated physical units be well supported. He's working on that and also on this flexible indexing. And so, basically, I made a note: Ian's going to dive in on my team, come to terms with it, then meet with Tom and understand the current status.
F
So, if we just, for the moment, magically assume that flexible indexing has solved the WCS problem, the other big question I have is: when you convert a pixel to a world coordinate with Astropy, this is the output, this is some of the output you get, right? Each of the coordinates, like, some of the coordinates, are held in their own very rich Astropy objects, which can hold loads of metadata about these coordinates, like "this is a wavelength in nanometers", but all of this is extra metadata about that wavelength.
C
When you select a point, then you do expect to get the value, with its unit, of that point, and all the coordinate information that belongs to it. That's how xarray works at the moment. Now, this kind of thing is rare: you know, usually we're talking latitude and longitude in degrees, or something like that, which is fairly straightforward, but I believe all of this is possible as things stand right now. I agree, I...
C
It's metadata, it's metadata in general, which we would be interpreting as a world coordinate, so you would have to have attributes with special names, presumably, and they would belong to the coordinates that go along with an actual array. That's the way around you would have it. So you have your measurable array, which is, you know, kelvins or...
C
...whatever you're measuring, and it will depend on some coordinates, and each of those coordinates will have attributes that tell you the type of coordinate it is and any extra information that you need to interpret it.
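A minimal sketch of that arrangement, assuming illustrative (not standardized) attribute names: the measured array depends on a coordinate, and the coordinate carries attributes saying what it is.

```python
# Sketch of the layout Martin describes: a measured array (kelvins, say)
# depending on a coordinate whose attrs describe what kind of coordinate
# it is. The attribute names here are invented, not an agreed convention.
import numpy as np
import xarray as xr

temps = xr.DataArray(
    np.array([5800.0, 5801.5, 5799.2]),
    dims=["wave"],
    coords={
        "wave": ("wave", [500.0, 500.1, 500.2],
                 {"units": "nm", "coordinate_type": "wavelength"}),
    },
    attrs={"units": "K"},
)

# Selecting a point returns the value plus the coordinate that belongs to it.
point = temps.sel(wave=500.1)
print(float(point), point.coords["wave"].attrs["units"])
```

The selection carries the coordinate metadata along, which is the behaviour being contrasted with WCS-based lookup.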
B
So it sounds to me like... so, the way it works in ndcube, through WCS, at the moment, is that all that information is stored in a very kind of dense way in a WCS object, and then these larger, richer objects are sort of calculated, like, on the fly, based on, you know, functions or functional transformations, and so it pulls together, you know, sort of automatically, some of this information and then builds it.
B
So it sounds like what you're talking about is that you're kind of breaking up, or spreading out, that information over a wider number of attributes. Am I understanding you correctly?
C
But that's how they are fundamentally stored right now, right? In the actual FITS... we showed some FITS headers before: it's a bunch of values in a bunch of attributes. That's exactly how it's stored, and then you can build up a rich object, which is going to do the transformations for you, and, in the present picture in xarray, that would go into one of these flexible indexes, which would give you coordinate, array-like operations without manifesting the arrays.
D
You'd also have to sort of document the conventions involved, naming the locations of this, so that the tooling knows what to expect, like, where it can be stuffed. I think, right now, today, I'm pretty sure, but you have to know where to look for it, and your tools have to agree where to look for it.
B
I guess what I'm... well, I have a second question as well. So, you mentioned a lot of that first... you mentioned at the start that you said, "okay, assuming we're reading from a FITS file", all right. Why, oh, why was it necessary to make that distinction? Why are you thinking about tying it to FITS in principle?
C
FITS is a place where there already is a convention for how these different parts of the WCS are stored, in particular keywords. That's the reason. So, if you were storing this in an HDF5 file instead, then there is no such convention. There are other coordinate systems, in other fields, that you could... like, there could be a field called "affine" or something, depending on what the use case was, but that's not good enough for this.
B
To just mention a little bit more about how... the direction ndcube has gone with this is that ndcube has become, yeah, strictly agnostic to FITS. It doesn't care about that, like, so you can build up the WCS objects in Python, like, manually, if you want. There's no actual, intrinsic link back to FITS, and so the idea is, you have an array, that can be NumPy or Dask, or anything, really, that quacks like an array, and then you have this...
B
...this WCS object, that can be built from a FITS header, and that is a well-trodden path, but, you know, it doesn't have to be: it's just any object that adheres to, what you're showing on the screen there, this WCS API convention, and so you're only dealing with these two sort of Python objects, the array and the WCS.
B
So it sounds like, the way you're describing it, at least, this would be a step back towards some more file-like convention of storing the metadata and then sort of building this WCS object. It's kind of...
C
No, no, no, I don't think so. Okay, I was just maybe operating under the assumption that you were following that well-trodden path, but there's no reason to have to do that. The flexible index that we're talking about in xarray would have its own internal state, which would look... well, it would probably use exactly the same WCS object, actually.
C
Right, yeah, so you would have an xarray WCS index class, which is yet to be written, which would hold the WCS and expose the API that xarray expects for an index.
D
It's worth thinking about the history of xarray here, which is: it started with the netCDF data model. It was designed around that, but it abstracted it, and it became only about what's in Python's memory. In no way was it tied to netCDF, but it was designed to exactly support that way of describing things, and, nowadays, xarray is used by a lot of people who have never heard of netCDF.
D
They use it, they're happy, it works fine, but it's designed to capture everything that was in that file, and nothing is lost in the translation into xarray. And I think it'd be similar for FITS: the idea is that whatever FITS can express, you should be able to express in this in-memory, or out-of-core, data set data structure. So that's why you mentioned FITS: to make sure that that's all represented and you follow similar conventions.
D
I met Alec Engell, of NextGen Federal, about three years ago, and, ever since then, we've been chasing possible grants to do something like this, so that our visualization tools can be available to this community. He's already doing it in his own work, but it's very painful to convert things from their native format into something that our visualization tools, like Datashader, support. And so we've had this idea. We discussed it, we agreed, he's applied for various grants. Various things have been funded, but not exactly that.
D
And, finally, we got one funded recently, from NASA, a very small thing, and so it's just kicked off a few weeks ago. Martin's on that and I'm on that. I'm not doing any work: I'm just the leader of a bunch of people who do the real work. So, when they're embedded in it, they'll be much better people to talk to than I am, but we're looking forward, and we're coming to terms with what happened over those same three years.
F
Okay, okay. I mean, so, with both my SunPy and my Astropy hats on, I know that there are a lot of people who really care about this, and I'm very happy to, I guess, represent both the Astropy and the SunPy communities in this discussion. If you've got effort on the topic, like, please feel free to reach out, and I'd be happy to work on it with you, like...
F
If that's... if you are gonna start pressing these buttons, basically, what I would need to be able to prototype this is: I would need somebody to come along and tell me how xarray works, right? Like, I've got the Astropy WCS stuff down, much more than I'd like these days, but there doesn't seem to be... or, last I checked, it's been a couple of months...
C
Yeah, I think it takes going to the one or two people in the xarray core that understand this and saying, "we have, you know, a real, strong use case for this", which I'm not sure that they had themselves when they came up with this, or maybe they were just thinking of affines, or something that's fairly simple to express. And, yeah, having a sprint, actually, you know, sitting down and trying to see how far you could push it in a limited time frame.
D
Okay, all right. So, from my point of view, when Ian Thomas gets back, which is... I guess he starts on Monday, comes back on Monday, so he's going to dive in, come to terms with it, and then I'm going to talk to the xarray people, starting with Tom Nicholas, and then we'll have an idea, some idea, of what we're doing, and then we can reach out. Cool.
B
The question I have, which is a little bit less on the collaboration, but sort of just more trying to understand... my understanding is xarray's API is actually quite flexible, or not too tied down, when it comes to, like, naming conventions. So, like, earlier on, it was Martin, or you, Jim, on camera, who was saying, you know, you can have these attributes and you can call them whatever you like.
B
You know, which sounds, like, quite powerful, in a way, but, when it comes to what we're trying to do with ndcube, we're trying to be very minimalist, but also to say, "right, these few things are named this way, are accessed in this way", which then allowed us to sort of define a minimalist API, but a reliable one, nonetheless, on which, like, the community can build. So, with that understanding in mind, could you correct me on that, or I'll ask the subsidiary question, which is...
B
...would the move forward then be that, actually, we could move ndcube to depending on xarray, and ndcube then becomes, like, an API-defining wrapper for solar and astro? But, you know, the deeper things it'll just do in an xarray way, if you wanted to.
D
Again, I'm a brain scientist, I don't know anything, but, as I understand it, paired with xarray there is a set of conventions for how you assume coordinates are arranged, what you name them, what you assume about them. If your data follows that, then all sorts of tools can understand it and use it. xarray itself in no way enforces those conventions, but it is an agreed-upon document, or standard, that people strive for and mostly fail to reach: apparently, it's very hard to reach.
C
Well, CF is specifically designed for earth science, so earth observation and climate, weather, oceanography: they'll use this. It's not a convention that would be applicable to us, but you effectively already have a convention in the set of specific parameters that you use.
B
There may well still be a space for there being an ndcube object, but, if it's built on xarray, then, if there's some routine that just works on a generic xarray object, it should just work on an ndcube object as well, because, underneath, it's just xarray. And, if that is the case, that sounds like it could be very powerful, because you could have the best of both worlds.
C
Yeah, there are precedents for building on top of xarray objects. The one that springs to mind is multi-resolution imaging, where you have a very high-res thing that you don't normally want to touch, except for your final analysis, and then you have down-sampled versions of it, but they all live within the same data structure and the same coordinate space, so you can view them together. It's not something that xarray normally does, but there is a package, that Jim probably remembers...
D
Datatree. Same guy: Tom Nicholas, I believe, is involved in that, so he's, yeah, either the weak link or the nexus for a lot of this.
B
It sounds like things could be a lot more complementary... like, I wasn't exactly sure, but it sounds like they're a lot more complementary than competitive, than I thought might be the case. But, ultimately, all that matters is, like, the users: you know, so long as they have something to do their science, that's where we want to get to, right, and to do that in the best way possible. Yeah, I think...
F
You get into interesting things that are challenging for us at the moment, like slicing: like, when you start reducing the dimensionality of the WCS object, can you continue to carry that around, and how do you go about re-serializing it once you've modified it, and all these questions... I'm gonna, yeah, like...
B
This, I mean, this could be, like, a long... potentially, you know, depending on how long this sort of Python-specific scientific ecosystem lives on, this could be, like, a long-term project, where these two things, like, coexist and collaborate, but maybe they slowly come together, by moving stuff, or upstreaming stuff, to xarray, or something. The same will probably happen for Astropy, but that might happen over a long time, and there's a space for all of these things to work together rather than...
C
...compete. Yeah, it's definitely worth making the point that, with data volumes being what they are, the traditional model of downloading your thousand FITS files and processing them locally is not going to be a way forward. So, like, ndcube is a vision of how to deal with a whole load of those, leveraging Dask or whatever, but eventually the whole ecosystem will need to make this transition, I think.
C
So this is, again, where Pangeo and xarray have had to go through this, and Pangeo is a showcase for doing climate science in the cloud, where your data lives in the cloud, using Dask to parallelize, and, like, the visualization tools and all those things. And the reason is that this transition perhaps came a little bit sooner for that community, because high-resolution earth imaging, you know, has been around for a few decades.
C
It's now, finally, waiting on me to get fsspec support into the FITS loader for Astropy. So that kind of thing, to be able to load your data directly from source, rather than download it onto a disk which might not be able to hold it all, is very important for pushing things forward.
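A self-contained sketch of that access pattern, using fsspec's in-memory filesystem in place of a real remote URL (real data would use an s3:// or https:// path): open the file where it lives and read only the bytes you need.

```python
# Sketch of fsspec-style access: open a file wherever it lives and read
# just the byte range you need, instead of downloading the whole thing.
# The memory:// filesystem stands in for a remote store here.
import fsspec

# Write a fake two-block "file" into the in-memory filesystem.
with fsspec.open("memory://demo/example.fits", "wb") as f:
    f.write(b"\x20" * (2880 * 2))

# Read only the first 2880-byte FITS-sized header block.
with fsspec.open("memory://demo/example.fits", "rb") as f:
    block = f.read(2880)
print(len(block))   # 2880
```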
B
I think there's a lesson in that for solar physics. I think we're in a strange scenario, where we have gone into that world with missions like SDO, and instruments like AIA, where we are in the big data world, but we may be moving away from that, oddly, at the moment, with, like, a lot of missions that are going beyond earth orbit, and so we're kind of, I feel...
B
...sure, we're in a strange sort of, like, dual scenario, where we kind of have a foot in both camps.
A
I think, even with SDO, we've also entered into this world of very large data without really having, at least, the astro-specific tools to really deal with that data. Like, a lot of our analysis tools are still predicated on the assumption that everything lives on disk and that my data set is composed of a single file, or maybe a few files, not a hundred thousand files. We've...
D
Yeah, and, to be sure, the earth science community is the same way. It's just such a large community that the Pangeo portion of it is actually sizable enough to be able to get this together, but the part of their science community easily handling large volumes of cloud data is basically the Pangeo folks and the people associated with them.
D
...processing it remotely, visualizing it, and their lives are fine. And then there's a much larger community of people, with older tools and different approaches, who don't do that: they don't attempt it, or they do it in a way that's localized on certain high-performance computing systems, where, when you're running it on there, it is local to you. It's no longer distributed in the cloud; it's part of a coherent unit.
B
So I think the lesson I take from solar physics sounds like it's very applicable to that scenario, which is that we have to support all three. You know, with these generalized tools, we need to be able to support all three, because we're going to have those people looking at very different types of data, to achieve very complementary and similar science goals, and they're going to need the tools to seamlessly switch back and forth between those models.
C
Python and Dask do lend themselves to that way of thinking, because, you know, Dask has low overhead on a single machine, but is still useful there, and it runs on HPC and it runs in the cloud. Some more specialized software that's been around, like MPI-based things, only runs on the HPC, and then you can't use it elsewhere.
D
Yeah, and it's hard for us to remember that those are some of the reasons we're in this space, because we're, in fact, just dealing with lots of technical problems all the time, but the whole motivation for any of this is exactly that: the flexibility, and the ability to scale and tackle large problems economically. And it's like, "oh yeah, yeah, that's why we're doing this."
D
Not at this moment, pretty much. I'm trying to not learn too much myself, and to let the people who actually have time to work on it do the learning, so that'll happen very soon. That makes sense.
C
I have a project that I've been working on for a while, called kerchunk, which does... it's a way of indexing a set of input data files such that you can present their pieces, in an aggregate logical file, as a Zarr data set. That's the current model, and so a lot of the examples that we've had are: you have a bunch of netCDF4 files, which are HDF5 files, and you could load them in xarray, but it's a bit painful.
C
Maybe you have to download them all to get decent performance, that kind of thing. So, if you first index them, and you build the metadata, and you save that metadata somewhere, now you can see the whole thing as a Zarr data set. You don't even need the HDF library to read them at all, because, basically, all of these things just store C buffers somewhere in the file. So, if you can find out where that is, then you don't need the target-format-specific library to load them.
C
So that has garnered quite a lot of interest, particularly in the people that already know about Zarr and netCDF and similar formats. But it also applies to FITS, because FITS, again, just like everything else, just has its data stored as an encoded C buffer, and a bunch of attributes to tell you what that thing is.
C
So I have one example, and we'll be building more examples, of taking a whole load of FITS files and providing a single logical data set view over the whole thing, so that you can slice it and pick out just the parts that you need, and they're downloaded on demand from remote. You don't need to have a copy of the data set. It's kind of interesting, but it's a little bit out on its own, not used by anyone, unless Astropy can actually read Zarr, or some other integration of these things is done.
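A schematic sketch of what such an index holds, following kerchunk's version-1 reference layout: each Zarr chunk key maps to a byte range in an original file, so any Zarr reader can fetch the raw buffers without the original format's library. The file names, offsets and lengths below are invented for illustration.

```python
# Schematic of a kerchunk-style reference set (version-1 layout): inline
# JSON for Zarr metadata keys, and [url, byte_offset, byte_length] for
# each data chunk in the original files. All values here are invented.
import json

references = {
    "version": 1,
    "refs": {
        ".zgroup": json.dumps({"zarr_format": 2}),
        # one entry per chunk of the aggregate logical data set:
        "data/0.0": ["s3://bucket/image_0001.fits", 2880, 1048576],
        "data/1.0": ["s3://bucket/image_0002.fits", 2880, 1048576],
    },
}

url, offset, length = references["refs"]["data/0.0"]
print(url, offset, length)
```

In practice a mapping like this is saved as JSON and opened through fsspec's reference filesystem, which serves it to Zarr as if it were a real store.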
C
There's added complexity for the astro case. For SDO, maybe there isn't, because, you know, it's pointing at the same... oh, they're not all on the same grid, either.
B
So, to just ask a question about that: did I understand correctly that, with this kerchunk, you can search and extract a region of interest, like, you know, space, time, you know, whatever axes, maybe that's not the right word, and effectively slice your files down before you actually have to, like, grab the files?
C
You're limited to whatever inherent chunking there is within the file. So, FITS supports internal compression, although it's pretty rare; you wouldn't be able to read just part of a chunk there, you have to read whole chunks. But, in principle, you can just get the stuff that you need, within some limitations.
C
The more important part is only accessing those files that you need. So, you know, if you have thousands of files that form up some kind of big grid, but you only want some of those, you could use an online interface to search and cut out which of these actually meet your criteria, or you could just get the global data set object and act on it, and get the same thing, without any intervening service needing to help you.
D
So, if anybody is interested, you could look at Zarr and think about the conundrum, or the challenge, that people in the earth science community are having, which is: "wow, it'd be great..."
D
"...if all these data sets were available in Zarr format, because then I could run a hundred independent, or a thousand, or a million independent processes, each accessing each chunk that they need, and, if I want to look at some small area of space over time, or some small area of time or space, whatever it is, you can have efficient access for that, if only everything were in Zarr format." And so you get people who are creating these large data sets in Zarr, and then you have everybody saying, "well, wait a minute..."
F
I'm very happy to drive this conversation off the rails, for my own personal benefit, for the next two hours, so stop me at some point. Martin, as you've probably deduced, I have basically written my own version of FITS kerchunk for DKIST.
F
I think, based on our conversation on GitHub in the last couple of weeks, on that Astropy proposal issue...
F
...I think you might know where this is going, but, obviously, for our data at DKIST, and for the AIA data, we're going to need tiled compression support in kerchunk, which there isn't at the moment, right? You can read a binary table, but you can't decompress the tiles.
C
Yes, so viewing the individual compressed chunks as Zarr chunks is very doable, if only we can implement the compression codecs themselves in Python, yeah.
F
Oh, the actual raw algorithm definition? Yeah, the CFITSIO library is public domain and has an implementation of Rice decompression, for what it's worth.
F
Yeah, yeah. So, that Astropy proposal that Tom Robitaille and myself put together: our plan is to re-implement the tile compression algorithms in Astropy, so that Astropy stops relying on CFITSIO for them, and then, at that point, kerchunk would be able to use them.
F
I have one question around kerchunk and this in general, and this is just, like, I think, my ignorance, but I think it might be interesting for others as well: why does kerchunk...?
C
I'll start again. So they map one to the other very well, and there's a reason that Zarr has been popular with the Dask community: it's simple, and it essentially just generates a Dask array from its contents, and that makes it very convenient in this sense. It also comes with — it's a logical set of arrays that link together, again in the way that xarray likes.
C
So you have coordinates and you have variables, and they mention each other. So it's more than just one Dask array. The downsides of using — well, I mean, if you weren't using xarray, you would have to actually generate your Dask array yourself, right? You'd have to have some code that builds up that graph in some way.
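The "build up that graph" point can be made concrete: a Dask graph is, at bottom, just a Python dict mapping keys to tasks, and this is what xarray and the Zarr-to-Dask path save you from writing by hand. A minimal sketch (the functions and keys are illustrative, not from the meeting):

```python
# A Dask-style task graph is just a dict: key -> value or (callable, *arg_keys).
# Hand-building one is what xarray/Zarr integration saves you from doing.

def add(a, b):
    return a + b

def double(x):
    return 2 * x

# Two "chunks" of data, combined, then doubled.
graph = {
    "chunk-0": 10,
    "chunk-1": 32,
    "combine": (add, "chunk-0", "chunk-1"),
    "result": (double, "combine"),
}

def get(graph, key):
    """Tiny recursive executor for the graph above."""
    node = graph[key]
    if isinstance(node, tuple) and callable(node[0]):
        func, *args = node
        return func(*(get(graph, a) for a in args))
    return node

print(get(graph, "result"))  # 84
```

A real scheduler does the same walk, but in parallel and with dependency tracking.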
C
So along x you could chunk 10 and then 20 and then 10 and then 20, or something like that, and in Zarr they always have to be exactly the same size. So that's a limitation of Zarr that Dask array does not have — but it's something that could be changed in Zarr, and that I've been advocating for; then you would have all the features of Dask arrays.
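To make that chunking difference concrete — this is a sketch of the constraint, not Zarr's actual implementation: Zarr stores one fixed chunk shape per array (only the final chunk on each axis may be smaller, at the array edge), whereas a Dask array records an explicit tuple of chunk sizes per axis, which may be irregular.

```python
# Zarr-style chunking: one fixed chunk size per axis; only the final
# chunk on each axis may be smaller (at the array edge).
def zarr_chunks(length, chunk):
    full, rem = divmod(length, chunk)
    return (chunk,) * full + ((rem,) if rem else ())

# Dask-style chunking: any explicit tuple of sizes is allowed,
# as long as it sums to the axis length.
def valid_dask_chunks(length, chunks):
    return sum(chunks) == length

print(zarr_chunks(60, 25))                       # (25, 25, 10)
print(valid_dask_chunks(60, (10, 20, 10, 20)))   # True: fine for Dask
# Zarr cannot represent (10, 20, 10, 20) directly: all interior
# chunks must share one size.
```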
B
Could I bring us back briefly to xarray? This sounds like a great conversation, but I just wanted to get this question in. Unfortunately we just lost Jim, but the question was about the ecosystem around xarray — the various packages that depend on it. I just want to get an understanding of what that environment might be.
B
Is it that xarray holds the data and then also provides a lot of tools for analyzing it — like resampling and so on — and then there's this sort of halo of packages for visualization, for reading things into xarray? Or is there an ecosystem of lots of different packages that analyze xarray data in various different ways, and xarray mainly just provides this basic data class that anybody can use to build their own analysis tools as they see fit?
C
Yeah, I think the comparison I would make in the standard Python ecosystem is with NumPy, which does provide — you know, it provides a storage memory object, and it provides a whole load of things that you can do to it as part of the basic package.
C
But it's mostly used by other packages: a scientist might directly use NumPy functions some of the time, but it's not going to be all they do. As opposed to pandas, where pandas again is a memory storage object plus a bunch of things you can do to it, but most people that use pandas only use pandas, even though there are a whole load of other packages that depend on pandas and do interesting things with it. Between those two, I would say xarray is more towards the pandas model.
B
Okay, that's interesting, because I think on that spectrum NDCube is actually more towards the NumPy end. So that's also interesting, just to try and understand the roles that these two things are trying to play. Okay, so basically, in xarray's local solar system, xarray is a very, very big star that dominates that area, whereas maybe NDCube, or NumPy, is like a dwarf.
C
Yeah, I think that's fair. I mean, the difference between those usages isn't that big, but I think the emphasis is a bit different.
B
Yeah, and I guess it also defines your scope, and how your package is going to develop and be maintained.
F
But they basically had fixed it — like, I think — I mean, there's a lot of work left.
F
And I don't have the same motivation I had to make ndcube 2 happen, because ndcube 2 works for DKIST data — ultimately, that's what I built it to do, and it works — and the data center aren't going to fund me to write ndcube 3 just because. It's not happening. So, but—
B
We have something we can all use, and ultimately, if we don't start becoming interoperable with xarray, then we are doing the same thing on a larger scale — we're still saying we're over here, maybe with astro — but you know, if we can just leverage the stuff that people build for big data handling, or for visualization, or for geoscience.
A
And also, I think people inside of solar physics are already wanting to — slash are — using xarray to store things that we would probably tell them to put in an NDCube, but they're saying: well, look, xarray has the support of the entire scientific Python ecosystem. Why would I not use that, if I'm not particularly concerned about whether I have an x-y cartesian coordinate system instead of a WCS, right?
F
Yeah. The other thing, I think, is that at the moment ndcube just supports WCS — lookup table coordinates are weird and took me ages and they're evil; it does support them, but they're gross — and xarray obviously has had first-class support for lookup table coordinates from the beginning. So I think at the moment we're in a situation where, if we got multi-dimensional time series support, you would really struggle to justify not using xarray as the base data object for that.
F
Yeah, and you're also not getting the advanced serialization and deserialization stuff that astro people are going to need, in xarray. I would imagine — I don't know why on earth they would take on maintaining that.
B
Yeah — and I wonder if there is some other WCS-specific handling, and Astropy object handling, that may well be done at the ndcube level. Yeah.
B
Yeah. So if we believe that that is the way solar physics and astro need it, then we can rip out everything underneath it and replace it, so long as it hits that API — as far as the user is concerned, it doesn't break everything. So my hope would be that the API would end up being largely sufficient, no matter what we do with what happens.
E
Okay, sure, no problem. I didn't listen to everything, but I think the bits I did hear have solidified the decision not to jump, and to make time to ask about astropy timeseries.
E
Anyway, it sounds like units is sorted, and there is a discussion to be had around leap seconds and time scales.
F
The other thing that is really important for solar and astro people is the rich metadata — the rich objects that Astropy has built: SkyCoord, SpectralCoord, Quantity, Time. All of those have been around long enough, and adopted widely enough, that they're now absolutely irreplaceable everywhere.
B
Like that again — with the way that solar is currently going, where we have different satellites across the solar system, it's much harder to get away with just assuming a cartesian grid and assuming the satellite never moves. In order to make these pretty pictures, this sort of stuff is essential.
F
Yeah, I mean, this whole xarray stuff excites me and terrifies me in equal measure.
F
Jupyter Book — or Executable Books in general, but Jupyter Book — is now using Sphinx, and they have made it much nicer to use. If you're starting off fresh and you want Sphinx documentation, and you're not a crazy person who's learnt the insides and outsides of Sphinx over many years, you probably want to consider using Jupyter Book instead, because it's using Sphinx underneath — so you can do API documentation and all the other stuff that Sphinx can do, but you don't have to interact with the Sphinx side directly.
A
Like cloud computing with solar physics data — particularly the data products that we already support; I mean, even things like AIA images.
G
Yeah, so I don't have direct responsibility for HelioCloud.
G
Okay. HelioCloud is the representation of NASA's efforts to move the data and computing to a cloud-based architecture. It's AWS-based now.
G
It supports science applications, and is especially useful for newer science applications. Machine learning is the example that everyone uses, because you have large data sets, which you can store in the cloud quite easily, but more importantly you have large amounts of compute right next to them — and so shoveling a whole bunch of data through to train your ML algorithm is a great example of cloud use. And so HelioCloud, at its fundamental level, is: how do we put—?
G
So the short answer is no, you don't have to have a NASA ID. The data will be made available to you. Use of that data will cost you money, but you do not have to have a NASA ID. You do need to go through the compute, because you pay for the compute, but you don't pay for the storage of the data or for accessing it.
G
All right — there are no access costs, or they are small. The details I'm not particularly familiar with, so if Brian—
G
Yeah, so we have people who are on HelioCloud who do not have NASA IDs — so it should be accessible.
G
So the reason I put this on the schedule for a SunPy meeting is that a big part of what HelioCloud has done in the last year is create a data analysis environment — a bunch of Python packages, gathered together — and provided that for the summer school earlier this year.
A
I mean, I think the biggest thing — well, the most obvious one to me, and this is very specific, I guess — is that the operations we do, particularly on Map, are compatible with that, because Dask seems to be our kind of entry point for doing scalable, large array operations. That's where I see the interface.
A
AWS — I'm presuming it does — no, no, because — okay, so you can take that DAG, and then you can pass it off to a Dask cluster, and that Dask cluster could be running on AWS, it could be running on Google Compute, it could be running on my laptop. And then those things decide how that DAG is distributed over the system, so yeah.
A
So I guess we just need to make sure that the data operations we're doing in sunpy — Map is the obvious one, particularly — because, I think — like, coordinate transforms don't need to be done in a parallelizable fashion.
E
Yeah — like reproject, you—
A
That is an efficiency problem — an efficiency or an optimization issue that parallelization can solve — and I think that may just be a case of: we need to find a more efficient algorithm for either traversing the transform graph or, if we're doing this really complicated pixel-to-world, world-to-pixel operation — I don't know.
F
Well, I mean, fundamentally though: if you've got an absolutely massive out-of-core Dask array, and you want to do coordinate transforms — like pixel-to-pixel on this absolutely massive out-of-core Dask array, for some reason; reproject is the obvious one we know about, but you could want to do other things with it too — then your coordinate transform is also going to be out-of-core, even outside of parallelization, if you want to do world-to-pixel for every pixel of an array that doesn't fit in memory.
A
—a Dask problem, or is there anything in sunpy that can help?
A
Well, so I think one of the things that is on my to-do list, as part of the funding, is to go through the core package and see whether the array operations that we're supporting — in general, but starting with something like Map — stay lazy. If I pass a Dask array in to Map, for example, what do I get out? Is it just forcing an eager computation?
G
So the use of a Dask array would be transparent in sunpy, and sunpy would just automatically use Dask if — if—
A
Really, the way it should work is Dask in, Dask out. Because right now we do support — you can create a Map by passing in a Dask array and a metadata object. But what we don't — well, we don't explicitly say this, but what is not true is that, for example, rotate: if I create a Map from a Dask array and a metadata object, and then I call rotate on it, it will return to me a Map backed by a NumPy array.
A
I will not get a Dask array back. So in the case of, say, a 20k-by-20k image — I don't know why you would have that, but the Sun at some absurd resolution in the full field of view — if you called rotate on it, and you did this somehow on your laptop, you would melt your laptop — or you would just get a memory error instantly — because it would try to actually do the rotation using NumPy, or scikit-image, or SciPy.
F
Yeah — to put it a different way: you have an image which is larger than your memory. If you do an affine transform on the whole of that image, you have to have all of it in memory to do the matrix multiplication, right? With Dask, you can chunk that into lots of little bits, and then Dask would work on each chunk by itself. Right, yeah.
B
So the way it's written in sunpy is that it doesn't chunk it — there's no sort of implicit chunking. It's like: whatever this is, we're going to put it in a NumPy array. Is there an alternative to that that isn't explicitly chunking it yourself, or explicitly figuring out whether you need to chunk it?
A
So — a Dask array has to be chunked, right, by default: it has to have some sort of chunking, even if the whole array is just one chunk. When a Dask array is created, in whatever way, the chunking has to be specified, and Dask-array-aware computations basically split the operation up across those chunks.
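The split-across-chunks idea can be sketched without Dask at all — this is the pattern Dask automates, not Dask or sunpy source code, and the function names are illustrative:

```python
import numpy as np

def apply_chunked(arr, func, chunk):
    """Apply `func` independently to each chunk of a 1-D array and
    reassemble: the pattern a chunk-aware computation follows."""
    pieces = [func(arr[i:i + chunk]) for i in range(0, len(arr), chunk)]
    return np.concatenate(pieces)

x = np.arange(10)
# An elementwise operation splits across chunks trivially; operations
# that reach across chunk boundaries (e.g. an affine transform) do not.
out = apply_chunked(x, lambda c: c * 2, chunk=4)
print(out)  # [ 0  2  4  6  8 10 12 14 16 18]
```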
A
For some things that's trivial; you can imagine that for some operations which might span chunks, or move data across chunk boundaries, it becomes much less trivial — reproject would, yeah — and there are implementations of things like interpolation or affine transforms, so doing it is possible. But the biggest thing to me is that within sunpy — I feel like we're really getting into the weeds here, or maybe I am; I don't know if this is maybe too much, but—
B
Sorry, there's lots of things, all right — so rotate is one. The point of this exercise is to go through and establish what those operations are, and decide whether there's an alternative implementation that would make no difference in the melted-laptop scenario, but would make a difference on AWS. Is that what we're trying to do here?
A
Or not — so I think that needs to be done; I'm not sure it's the best use of our time here. But I guess maybe the important question is: what are we trying to do here, then?
A
I guess — well, broadly, okay: broadly, we know that there are operations, particularly on Map, and I'm sure in other places in core too, that do not support lazy Dask computation. And I don't know — maybe I'm stuck in a very narrow mindset, in that I'm only thinking about this in terms of Dask. I don't know if we need to be thinking more widely, so I think—
F
Broadly, I think another component — well, I mean, ultimately, when you're talking about these workflows you're talking about data compatibility one way or another. I think the other big component of this that is going to be a problem for solar physicists who want to do this is IO. Again, as with what Martin was talking about earlier, I think this is really an Astropy problem — we rely on Astropy very heavily for our IO in sunpy core. So—
F
Astropy allows you to use fsspec — you use fsspec to read byte ranges out of a FITS file over HTTP.
F
So you open a FITS file that's in S3, and you want to read five pixels. It goes and reads those five pixels over the network. It doesn't download the whole file; it just parses the header, knows the byte offset of your five pixels, and then transfers you the five pixels plus some read overhead — so probably more like five megabytes, whatever.
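The byte-range arithmetic behind this is simple enough to sketch by hand — this is plain FITS layout math, not astropy's implementation: a FITS header occupies a whole number of 2880-byte blocks, and an uncompressed image follows as big-endian pixels of abs(BITPIX)/8 bytes each, in row-major order.

```python
# Byte range of a single pixel in an uncompressed FITS image HDU.
# header_blocks: number of 2880-byte header blocks before the data.
def pixel_byte_range(header_blocks, naxis1, bitpix, row, col):
    itemsize = abs(bitpix) // 8
    data_start = header_blocks * 2880
    offset = data_start + (row * naxis1 + col) * itemsize
    return offset, offset + itemsize

# e.g. a one-block header, a 4096-wide int16 image, pixel (row=2, col=5):
start, end = pixel_byte_range(1, 4096, 16, 2, 5)
print(start, end)  # 19274 19276
```

An fsspec-backed open can then issue an HTTP/S3 range request for roughly those bytes (plus read-ahead) instead of fetching the whole file.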
F
And that makes things better, but then there are still questions about, well, a lot about loading a FITS file into a Dask array. What happens when you have a tile-compressed FITS file? Like Martin was saying, they're quite rare — but for doing solar physics they're really not, because AIA—
A
Oh okay, yeah, fine, and—
A
Right, okay. I mean, I think another — maybe this is kind of a specific workflow thing, but something that people want to do very often is get a bunch of AIA images — usually cutouts from some specific region — stack them all in time, sometimes many of them (we were just talking about this this morning), and then do some analysis on that big data cube, with or without reprojecting — ideally, of course, with, you know. That was a question, anyway.
A
But maybe — okay, so maybe you're looking at some time around a flare — say around the September 10, 2017 flare; people look at the propagation of EUV waves right across the disk after the eruption. You need full disk.
A
So in that case you could probably get away with it — you probably shouldn't, but you could just stack, and the consequences of not reprojecting would be relatively small. So you might just have a data cube — one big array, full disk, full resolution — that, for an hour's worth of data, is not going to fit in memory. This comes back to—
E
Yeah — can I ask a question that may not be very appropriate here? A lot of people use this SSW cutout service for AIA data, where they just basically download the bits of — yeah.
F
From the JSOC — yes, from the JSOC — but we do have that functionality; at some point you can—
A
There are no real network calls to get the data; it's all just there. Ideally you'd be able to do some sort of search on it, but it's just there — essentially there are just paths to the data. So in that case, actually, maybe for the use case you're describing — provided you had a way, kind of like what Martin was talking about earlier, to somehow take only the chunk of a FITS file that you wanted — so you—
F
You construct a Map from that. You still have loaded no data from the FITS file other than the header, right? It's made an HTTP range request to get the header, parsed the header, you've loaded a Map. You do submap, pick the region you want, and then you carry on, and it's only going to read out of the object store the—
G
No, no — the DRMS and the JSOC will not be on any cloud platform. So—
G
Yeah, you should forget about the DRMS and the JSOC in the future.
G
Yeah, I mean, it's something we'll have to replace, though, because things like the subsetting functions are still going to be useful. Even if you're on a cloud, you might not want all the data, and you're going to have to be able to do a subset — if you just want to follow an active region around, for example.
G
—that I had another meeting; I would have loved to have been at that one too. But I think also data formats are going to play a significant role in how we enable science in the cloud — and I know there have already been some experiments with Zarr and whatnot, alternative data formats.
F
So I've gone down this rabbit hole a bit in the last few weeks, and been chatting to Martin on GitHub about this problem, and looking at what he's done in kerchunk, which is really cool. So yes — kerchunk is actually very useful for this. Kerchunk, as I understand it, is a translation layer from—
F
What kerchunk does is take that, and instead of having each binary chunk of your array in a file with nothing else in it, it goes: oh well, you've already got that binary data on disk — it just happens to be in a FITS file or a NetCDF file or whatever the file was — and we know what those files are; we can figure out what the byte offset is of the chunk of data that you've referenced from your Zarr. And so what kerchunk does—
F
—is, you give it a specification of your set of files, saying: here are all the chunks of my AIA time cube. And then kerchunk has the ability to know where the binary data is — it basically ignores the header of the AIA image and knows where the binary data lives. AIA is an interesting case because it's actually tile-compressed, and that means more chunks, right.
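A kerchunk reference set is, at heart, a JSON mapping from Zarr chunk keys to (url, offset, length) triples pointing into the original files. A hand-built sketch — the bucket name, filenames, and offsets here are made up, and the real reference specification carries more metadata than this:

```python
import json

# Minimal kerchunk-style reference mapping: each Zarr chunk key points
# at a byte range inside an existing file instead of a separate object.
refs = {
    "data/.zarray": json.dumps({
        "shape": [2, 4096, 4096],
        "chunks": [1, 4096, 4096],      # one full image per chunk
        "dtype": "<i2",
        "compressor": None,
        "fill_value": 0,
        "order": "C",
        "filters": None,
        "zarr_format": 2,
    }),
    # chunk key -> [url, byte offset, byte length]
    "data/0.0.0": ["s3://bucket/aia_0.fits", 5760, 33554432],
    "data/1.0.0": ["s3://bucket/aia_1.fits", 5760, 33554432],
}

url, offset, length = refs["data/1.0.0"]
print(url, offset, length)
```

A Zarr reader pointed at a mapping like this (via fsspec's reference filesystem) reads each chunk straight out of the original files.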
F
Yeah, okay — the caveats there are: one, you're stuck with the chunking in the FITS files you have, which means if you uploaded the existing AIA level 1 products today, you would have one chunk per row per file. That's the chunking you've got; you can't change it; that's what you've got. And also, you have to do some file parsing to set up the kerchunked data set in the beginning — that's one of the things I wanted to ask Martin earlier, but it wasn't really on topic.
F
So the way I would suggest, without thinking about it much harder: if you wanted to just put AIA level 1 into FITS files, and you didn't want to do any more calibration steps with it, but you were happy to rewrite the files, I would re-chunk them into 256-by-256 squares rather than 1-by-4096, because that's a lot more useful for—
A
I would say also — and this is just kind of a small thing — maybe even just having an ndcube example of creating a dummy aligned cube, something that sort of looks like an aligned AIA cube: make it into a Dask array, stick it into an NDCube, and then do some operation on it and show that it returns lazily. That would be great, yeah, I think.
A
The applications have been pretty specific, and not suitable for, say, the documentation — but I think even just having an example to point people at, like: look, you can do this with an NDCube. There's a gallery request — is this the one? Yeah, right, okay, so just—
E
Like a use case, for example: when you have these SST data sets, no one wants to just look at the spectra. Often you want to calculate something like a Doppler shift or a line width, and ideally you would want these to be in a similar array. But to do this you do, for example, Gaussian fitting to every single pixel—
F
So this is where dask.delayed and map_blocks — that kind of thing — come into the interface. If you have an operation where you're doing some analysis per spectrum — per pixel, for all time — then you can use Dask, once you've opened the data in Dask, and depending on how the data is chunked — imagine you were doing this with — I'm trying to think of—
F
If you were doing it in a way where you can control the chunks, ideally what you would do is load a Dask array where you have one chunk per spectrum, or one chunk per hundred spectra, and then you would distribute your fitting operation with Dask, distributed per chunk. So you would construct your Dask task graph — not manually; Dask would construct you a task graph — where you're calling a function to fit a spectrum on each 100-by-100 chunk, and—
F
—function, yeah. And then Dask would go to your 20,000 AWS workers — which you have got for free from NASA, because they're wonderful — and be like: okay, I've got twenty thousand 100-by-100 chunks, one each, off you go. You break the compute down into smaller sections and then distribute them.
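The per-chunk fitting pattern can be sketched with the standard library — Dask's map_blocks builds and distributes exactly this kind of per-chunk function application, so the sketch below only illustrates the shape of the computation (a first-moment centroid stands in for a full Gaussian fit, and all the array shapes are invented):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

wave = np.linspace(-1.0, 1.0, 50)   # wavelength axis (arbitrary units)

def centroid_chunk(spectra):
    """First-moment centroid for each spectrum in an (n, nwave) chunk."""
    weights = spectra.sum(axis=1)
    return (spectra * wave).sum(axis=1) / weights

# Fake cube: 400 spectra, each a Gaussian line at a known shift.
shifts = np.linspace(-0.5, 0.5, 400)
cube = np.exp(-((wave[None, :] - shifts[:, None]) ** 2) / 0.01)

# Split into chunks of 100 spectra and fit each chunk in parallel —
# the distribution step a Dask scheduler would do for you.
chunks = np.array_split(cube, 4)
with ThreadPoolExecutor() as pool:
    results = list(pool.map(centroid_chunk, chunks))
centroids = np.concatenate(results)
print(centroids.shape)  # (400,)
```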
E
Because, I mean — this, for example, this paper — this is something that everybody's going to want to do: velocity maps. So you want to do some fitting, or a first moment using the centroid of the spectrum, to get the velocity — that's one of the things everybody wants.
A
No, no, no — an NDCube, if it has a Dask array — I mean, it's similar. Just like if you pass NumPy into an NDCube, it doesn't become NumPy; but then when you do all the slicing — all the fancy slicing that NDCube allows, with world coordinates or pixel coordinates or whatever — all that does—
B
—the infrastructure it uses is the native Python slicing, and therefore it uses the native Dask array or NumPy array slicing infrastructure — if it quacks like an array, right, yeah. So in that sense, if you put in a Dask array it'll do everything the way a Dask array does, because the Dask array is doing it.
G
Right, yeah — so those kinds of tutorials, of how to use sunpy on a cloud system, with Dask as the way you can take advantage of the cores that are available — that would be a great start, just to show that sunpy, or ndcube, already supports these kinds of operations.
A
So we actually already have — this isn't quite the same, but there's an example in the sunkit-image gallery of how to do time lag analysis, showing that the implementation of the time lag calculation in sunkit-image is already Dask-compatible.
G
Yeah, because I think there would be a lot of interest in showing that — especially if you could run it on HelioCloud; showing all of that working would help get the message out that sunpy is already on it, so to speak.
A
This was actually done on the NASA Pleiades system. Basically, what we did was set up a Dask cluster on several thousand — sorry, several hundred — nodes on Pleiades. It was somewhat of a prototype project for HelioCloud, a few years before HelioCloud was a thing, but on an HPC system. And there was actually a DRMS instance stood up at NASA Ames.
A
So you could make queries just like you would against the JSOC, but instead of getting URLs back from Stanford, you would get paths on the HPC system. Then you would just load in the data — there were no network calls for the data; it was already there. This is a poster from AGU from a couple of years ago, but what we basically did was put together some examples of the kind of workflows that this enables.
A
One of these was actually the one that informed my comment earlier about tracking EUV waves: you can load these full-disk images — two hours of them, at full spatial resolution — stack them into a cube, and do running-difference images on them, on the whole disks.
A
—in, like, a minute and a half, and you can make a movie and watch the EUV wave propagate across the disk. And I think what I did was not do any reprojection or alignment — I just did what we were talking about earlier: stacking and pretending that nothing's moving, basically.
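The running-difference step itself is one line of array arithmetic — and the same line works unchanged on a Dask array, which is the point of the workflow being described. A sketch on random data (the cube shape is made up):

```python
import numpy as np

# Fake (time, y, x) cube standing in for a stack of full-disk images.
rng = np.random.default_rng(0)
cube = rng.random((10, 64, 64))

# Running difference: each frame minus the previous one.
run_diff = cube[1:] - cube[:-1]
print(run_diff.shape)  # (9, 64, 64)

# With dask.array, the identical expression stays lazy until .compute().
```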
A
In those cases the coordinate information is actually not that important; all you really want to do — sorry, Stuart — well, actually, that's not entirely true, because you might want to ask what the world coordinates of this wavefront across the disk are, and that's how you figure it out. Yes, true — so actually I lied; for making nice movies it didn't matter, right, yeah.
A
Time lag analysis: in active regions you can basically estimate cooling signatures of structures across an active region by looking at how plasma is cooling relative to two different AIA channels. We do cross-correlation — the details aren't important — but basically you need on the order of six to twelve hours of data to do this, and if you do it using the kind of usual AIA SSW IDL machinery, it can take hours to days, depending on how many you do.
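The time-lag measurement itself reduces to locating the peak of the cross-correlation between two channels' light curves. A minimal NumPy sketch of that idea — not the sunkit-image implementation, and the synthetic light curves are invented:

```python
import numpy as np

def time_lag(a, b, cadence=1.0):
    """Lag at the cross-correlation peak between light curves a and b.
    With this sign convention, a negative lag means b peaks after a."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    cc = np.correlate(a, b, mode="full")
    lags = np.arange(-len(a) + 1, len(a)) * cadence
    return lags[np.argmax(cc)]

# Fake cooling signal: channel B sees the same pulse 5 steps after A.
t = np.arange(100)
a = np.exp(-((t - 40) ** 2) / 20.0)
b = np.exp(-((t - 45) ** 2) / 20.0)
print(time_lag(a, b))  # -5.0
```

Doing this per pixel over a 6–12 hour cube is what makes the problem embarrassingly parallel, and hence a good fit for the chunked approach above.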
A
You can do this whole procedure — from level 1, unprepped, all the way through to the plot you would put in the paper. In this case we did it on 1200 cores and it took less than 30 seconds, right. So this is the kind of thing it allows you to do: go straight from the stuff that comes off the data provider into your paper in, you know, less than a minute. The other applications — oh, I can—
A
We did the HMI maps at some cadence for the entire SDO mission, ran a sunspot detection algorithm on every frame, and then we went from that to creating a butterfly diagram — all in—
F
—a bunch of other Pangeo people, and they were like: we really want people in domains other than geophysics to test our stuff. So Will and I were like, sounds fun, why not — I can't remember names today.
F
And in the space of a day and a half of devs sitting next to each other, we went from "okay, here's a boatload of AIA level 1 data in Google Compute Cloud" to having prepped, calibrated, stacked, and done Will's time lag analysis on terabytes and terabytes — what was it, 12 hours, full cadence, all passbands?
A
Years ago, yeah. So, to answer your question, Laura: the setup can take some time, but the advantage of Dask is that the code you write is independent of where it's running. You write code — sometimes not even really against Dask; ideally you write code the same way you would write NumPy operations, and it just happens to be a Dask array instead of a NumPy array.
F
But that aspect is really, really important for usability, because it means you can develop the code on a few hundred gigabytes of AIA data on your laptop, and then, once you've got that working, you go to AWS — where you're billed per second for the CPU — run for half an hour, and you get your result. Once you've got the cluster set up, the code is the same; it's just the setup that's—
A
—changing, yeah. I think the Pangeo folks have put a lot of time into making their whole stack more deployable on many, many other systems. But, you know, it's never — setting up the infrastructure from scratch is not going to be trivial, but I think you often don't need to do that.
A
Okay, maybe that's — I don't know, yeah.
A
Well, this is one example. This is the stuff running on top of the HPC cluster at Ames — the HPC clusters run job-scheduling software called PBS, which is basically what scales your jobs out across their system. Dask has a library called dask-jobqueue.
A
That is specifically for setting up Dask clusters on top of schedulers, which is kind of, like, an evil thing to do; it's basically hacking an HPC scheduler to be not what it is. And so what it does, well, it's not important, but basically this is the one line that you do: you import PBSCluster, you start a cluster, you pass it to the client, and then, ultimately, the rest of your code does not care what this client is.
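A sketch of that one-line setup using dask-jobqueue's PBSCluster. The queue name, core count, memory and job count here are placeholder values for whatever a given HPC site actually requires, and this only does anything on a machine that can submit PBS jobs:

```python
from dask.distributed import Client
from dask_jobqueue import PBSCluster

# Ask the PBS scheduler for Dask workers (resource values are illustrative).
cluster = PBSCluster(queue="regular", cores=24, memory="100GB")
cluster.scale(jobs=10)   # submit 10 worker jobs to PBS

# Everything downstream only ever sees this Client object; swapping
# PBSCluster for, say, a LocalCluster changes nothing else in the code.
client = Client(cluster)
```

This is deployment configuration rather than analysis code, which is exactly the point being made: the rest of the script never touches `cluster` again.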
A
B
A
What are those things? Well, I think it's the stuff that we already said. I think about the I/O issue. I think it's making our kind of existing operations Dask-aware. I think it's examples and, like, showing people. And, sorry.
E
A
So I was able to do this stuff several years ago, but I will also say that I had already been using Dask. Okay, for one, obviously I'm fairly familiar with SunPy, and I had already been using Dask for years before that, and it still took me, like, basically it took me six months to make this poster, right.
G
A
A
B
G
A
E
E
F
You were saying, like, earlier today, you can break down any problem, Dask-aware or not, into bits and parallelize it with Dask a chunk at a time, yeah. So if you wanted to reproject an entire stack of AIA images, as an example, you would do one AIA image per node in your Dask cluster until you have them all done, right, yeah. So that's what we did back in 2018. A lot of the other examples we're just showing you, they apply functions that aren't Dask-aware one chunk at a time.
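That "one chunk at a time" pattern can be sketched with `dask.array.map_blocks`, which feeds each chunk to a function that knows nothing about Dask. The `enhance` function below is a made-up stand-in for a real per-image step such as reprojection:

```python
import numpy as np
import dask.array as da

def enhance(img):
    # A plain NumPy function with no Dask awareness at all;
    # a stand-in for a real per-image step such as reprojection.
    return np.sqrt(np.clip(img, 0.0, None))

# A stack of 8 fake "images", chunked so that each chunk is one image.
stack = (
    da.arange(8 * 16 * 16, dtype="float64")
    .reshape(8, 16, 16)
    .rechunk((1, 16, 16))
)

# map_blocks calls enhance() once per chunk, i.e. once per image,
# so each image can be processed on a different worker.
out = stack.map_blocks(enhance)
images = out.compute()
print(images.shape)  # (8, 16, 16)
```

With one image per chunk, the scheduler is free to run `enhance` on as many workers as the cluster has, which is the "one AIA image per node" idea above.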
E
A
Actually, the time-lag analysis example, like, it is the exact same code: you run it on a NumPy array and it works, you run it on a Dask array and it works, and really none of the code is Dask-specific. It just works because it's broken down into array operations that are natively supported by Dask, as opposed to these other things where you're having to apply them sort of one at a time in a more manual fashion.
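That property can be illustrated with a deliberately simplified correlation-style function. This is not the actual time-lag analysis code, just a sketch: because the body is written purely in array operations (means, sums, NumPy ufuncs), the identical function runs on NumPy arrays and on Dask arrays, since NumPy ufuncs dispatch to Dask arrays directly.

```python
import numpy as np
import dask.array as da

def normalized_correlation(a, b):
    # Nothing here is Dask-specific: every line is a plain array
    # operation, so `a` and `b` can be NumPy or Dask arrays.
    a0 = a - a.mean(axis=0)
    b0 = b - b.mean(axis=0)
    return (a0 * b0).sum(axis=0) / np.sqrt(
        (a0 ** 2).sum(axis=0) * (b0 ** 2).sum(axis=0)
    )

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 32, 32))   # 100 "time steps" of 32x32 pixels
y = x + 0.1 * rng.standard_normal((100, 32, 32))

small = normalized_correlation(x, y)     # NumPy in, NumPy out
big = normalized_correlation(
    da.from_array(x, chunks=(100, 16, 16)),
    da.from_array(y, chunks=(100, 16, 16)),
).compute()                              # Dask in, same numbers out
print(np.allclose(small, big))  # True
```

The Dask version parallelizes over the spatial chunks for free, without a single change to the function itself.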
A
E
E
G
So, yeah, I've done that from my laptop to my desktop using Dask, and even though it's not a factor-of-10,000 jump, the speed-up is obvious and the programming effort is near zero; I think I changed one number, which was just the number of cores. But anyway, to get back to the, like, so Laura originally said that, you know, if you say you've.
G
A
G
Like, it's not like the full SunPy stack is completely Dask-aware or Dask-compatible. You can take advantage, like, on the science level, on the science-goal level: we can provide advice about how to, you know, for common tasks, how to split them up and take advantage of Dask, but the full stack is not Dask-aware.
E
E
F
Because you can still use the functions that aren't Dask-aware with Dask. Sometimes it isn't, you know, as good as it could be, and we can make improvements. But I don't think that, like, turning around and saying lots of SunPy isn't Dask-aware and needs to be, like, that will just scare people off.
A
A
F
NumPy. So, like, per chunk, a Dask array is normally a NumPy array, or it's a CuPy array or something if you're being really fancy about it, but normally it's a NumPy array. So if you're taking an array of ten thousand AIA images, one spatial slice per chunk, and you want to do rotate as it is today, our code works with Dask because we've tested it against NumPy.
A
E
A
F
My point is that, like, we don't need to, we are not going to be able to make everything in SunPy Dask-aware, but that isn't necessarily a major problem. Like, yes, some stuff would be better if it was Dask-aware, but not everything is, not everything needs to be, and not everything can practically be made aware with the resources we have available. And that doesn't mean that we need to be, like, particularly pessimistic about our Dask support, because you can still do lots of very useful things with it.
E
A
A
G
F
A
E
B
A
E
I do think, like, on that topic, a lot of people have used SunPy to do cool machine-learning stuff. Like, maybe we should ask around, like, poll people, or ask people, with an FTL and things like that: what issues did you have when trying to do this? What did you do? Yeah, or, like, yeah, what did you do, how did you deal with issues, and what would you like to see go in there?
E
E
B
I'm not saying we should, like, stop it for something more important. I'm just trying to basically get a list of, like, what do we still need to achieve in this, because I think there's some people getting tired, and one thing that happens is we just keep talking, right? Yeah, yeah, yeah, yeah. So what more do we need to? What other things on an agenda, if we were to invent one now, need to be talked about, or are we just sort of talking about that?
E
G
Stuff there, so I don't think this is, yeah, you haven't wasted your time, from my point of view. So thank you. Oh.
B
F
Yeah, before we go get some oxygen, we should do a bit of sprint planning.
E
F
B
Oh yeah, we were just talking about, basically, Laura, you know, being an advocate for putting X-ray emission models into a more well-known and centralized place than with some individual experts. We want to explore that possibility, yeah, and where that might be. Yeah, we could sprint on that.
B
E
B
E
B
E
E
F
Probably just this machine melting. I can tell you whether it's turned up on YouTube. It has not turned up on YouTube. Refreshing, I'm actually.
B
F
E
This is the transposed version; I'm not sure if that would carry. I can try and run it on my workstation, another non-transposed version. Yes.
E
B
E
F
I think we're not going to get too many things done, probably, right? I think I personally would like to prioritize things that, yeah, can be accomplished and be of benefit from us all being in the same room together. Like, we've got a bunch of ideas on here which we could happily do over the internet, right, and some of them are just, like, bloody admin tasks, right, but.
F
The figures for the Frontiers paper is also a good idea, because we have a different set of experience in the room, with, like, Orbiter data and pfsspy and.