From YouTube: SunPy Coordination Meeting 2022 - Wednesday
Description
Participate in the chat and the call here: https://openastronomy.element.io/#/room/#sunpycoordinationmeeting:openastronomy.org
A: On a pull request we run a number of checks, really just to make sure that all the code that gets contributed into the package preserves the behavior of lots of the functions within sunpy. There are different jobs with different environments, so we run these across Linux, macOS and Windows. There is a job that runs the unit tests with the oldest versions of our dependencies, to ensure that the code still works, as we intend, with the older releases. Documentation checks are run to pick up issues such as: if we reference a particular item, a particular function, within our documentation, making sure we've named it correctly and that the links will all work. We do checks on our gallery to make sure that the gallery can build without any errors, and then our hosting provider, Read the Docs, also runs a check to make sure it is able to build on their system. We also have some online tests; these are similar to the unit tests.
F: We just have some additional unit tests which rely on remote data. These are quite slow in comparison to the unit tests and, unfortunately, they are quite flaky, due to issues with communicating with remote servers. From time to time, we also have figure tests. These basically mean that whenever we run our plotting functions, we want to be sure that we are producing consistent plots: that if we change something in the code, it doesn't, you know, make some line move by a centimetre somewhere, because that sort of thing can be tricky to catch. We compare the output of each plotting function with some baseline images that are stored in a separate repository. We run these figure tests on the current stable versions of key dependencies such as astropy and matplotlib, and we also run them on the latest development versions of those key dependencies, so we can pick up any issues before they become problems for our users.
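The baseline comparison described above (reference images in a separate repository, mismatches surfaced as differing hashes) can be sketched in a few lines. This is a stdlib-only illustration of the idea, not sunpy's actual test helper; in a real figure test the bytes would come from the PNG that matplotlib renders.

```python
# Sketch of a hash-based figure test: hash the rendered image bytes and
# compare against a hash stored with the baselines. The byte payload
# here is a fake stand-in for a rendered PNG.
import hashlib

# In a real test these bytes would come from fig.savefig(buf, format="png").
rendered_png = b"\x89PNG...fake image payload for illustration"

# Stored alongside the tests (in sunpy's case, in a separate repository).
baseline_hash = hashlib.sha256(rendered_png).hexdigest()

def figure_matches(image_bytes: bytes, expected_hash: str) -> bool:
    """Return True when the rendered image hashes to the stored baseline."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_hash

assert figure_matches(rendered_png, baseline_hash)       # unchanged plot passes
assert not figure_matches(b"moved line", baseline_hash)  # any pixel change fails
print("figure hash check ok")
```

Any change to the rendered bytes, even a grid line moving slightly, produces a different hash and fails the test, which is exactly the sensitivity being described.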
G: Some of these checks run inside pre-commit.ci. pre-commit.ci provides a very fast environment for running these types of tests and for some of our code style checks. Within a pull request, we can just type a comment to this particular app and it is then able to apply the fixes automatically.

L: For our changelog monitor we are using Stuart's server, which is running the baldrick bot from OpenAstronomy, or possibly a modification of that. Then we're also using this thing called MeeseeksDev.

A: A pull request is going to be targeted to the main branch, which is always going to be for either the next major release or the next minor release, if we're currently on a something-point-zero release. So basically, with the backports, it allows us to quickly backport something.

M: That makes it so much easier, because we don't have to duplicate our efforts: the bot will duplicate the pull request to a different branch automatically for us. We're also still using Azure Pipelines. This has been used by some of the affiliated packages as well, although sunpy core...
F
Isn't
using
this
currently
so
that
brings
us
on
to
the
recent
updates,
where
we
do
have
this
migration
to
azure
pipelines,
to
github
actions
that
we
did
for
the
core
sunpi
repository
as
well
as
some
additional
of
the
affiliated.
J
And
sponsored
packages
as
well,
this
was
sort
of.
A
J
K
Able
to
restore
it,
but
it
went
down
a
few
times,
so
basically,
we
just
decided
that
it
would
be
best
to
migrate
to
github
actions.
However,
this
has
worked
quite
well.
We've
been
able
to
set
up
a
new
repository
of
workflows.
These
workflows
have
been
used
by
other
packages,
you
know
astronaut
as
well,
so
there's
you
know
less
duplication
of
effort
and
it
provides
a
nice
environment,
so
github
actions.
I
find
it
integrates
quite
well
with
github,
so
it
makes
it
quite
nice
for.
A
Maintaining
to
see
you
know
what
is
how
the
tests
are
running
within
the
code,
we
have
three
main
workflows
that
are
part
of
the
open
astronomy
workflows
repository.
So
we
have
a
testing
workflow,
which
is
running
which
you're
able
to
supply
a
list
of
talks,
environments,
and
it
runs
those
tests
as
separate
jobs
and.
F: Yes, so then we also have an HTML dashboard for viewing our figure tests, as you can see on the right here. This provides a nice environment for seeing how our figure tests are performing. They're all sorted, so that the tests that are failing the most appear higher up, and the tests that are fine appear near the bottom. You're able to search through all of the tests: if you know the particular name of a test you want to see, you can type the name in the search box and it will automatically search through. You can also sort and filter the test results based on, you know, the status of the test: does it have a matching image or a different hash, what's the name of the test, what's the error in the images, things like that. Then, when you click on a particular image, you can see it overlaid with the difference, if there is an image difference. When you click on that, you can then see the three images separated. You've got the baseline image that we store in a repository, where we say this is what, ideally, the image should look like, and then over on the right we have the image that the test produced. Looking at those two, they look basically identical; you couldn't tell the difference. But when you subtract them, you can see that there are all these bands that appear, due to the grid lines just being moved slightly. So this is the benefit of these figure tests: we're able to see what happens, and in particular these reports are able to show us exactly what's wrong, which makes it much easier to diagnose any problems within particular test functions. And this, of course, is both desktop and mobile optimized.
F: Okay, so this brings us on to some future updates. Currently there are plans to improve the benchmarking; I think Albert's going to take a look at that on Friday. I think I saw something about it in the list of plans for the hack day. The challenge with these is that they can take quite long to run at times, because they have to check all of the past commits, or the past versions, something like that, to see when something changed. But the benefit of this is that we can really pinpoint which commit caused a decrease in performance.
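Benchmarks of this kind are typically written for airspeed velocity (asv), which can step back through past commits to find the one that introduced a regression; that re-running over history is what makes the runs slow but the diagnosis precise. The benchmark below is a hypothetical example following the asv convention, not one taken from sunpy's suite.

```python
# asv discovers classes in a benchmarks/ directory and times every
# method whose name starts with "time_". The data and the operation
# here are invented purely to show the shape of such a benchmark.
class TimeDataOperations:
    def setup(self):
        # setup() runs before timing starts, so building the inputs
        # does not count against the measured operation.
        self.data = list(range(100_000))

    def time_total(self):
        # Only the body of this method is timed by asv.
        sum(self.data)
```

Running `asv run` over a range of commits and then `asv compare` between two of them reports the per-benchmark timing change, which is how the offending commit is pinpointed.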
F: And yeah, David has sort of integrated a way to mock the HTTP requests that our online tests make. This way, instead of going directly to the remote service to request data, we can just go to something that's stored in a separate GitHub repository that contains [the recorded responses]. This should improve the speed and reliability of these online tests and, of course, then on, say, a weekly schedule, we will [refresh them].

M: For the tests that are currently set up to just use text files, we should look into data files, so that we can really have coverage of everything, because I feel like that would really reduce the flakiness of the tests, yeah.
M: Maybe we should, as we can, say: do we know, has anybody tried to use that with VSO yet? VSO is a bit of a special case, because it's SOAP, not REST, but I think it will still work. I'm just curious whether anybody has tried it with VSO requests. So I think, give me a couple of minutes and I'll have a look and see if I have [an example] here. Yeah, it uses SOAP, doesn't it? I think it should still work, but I don't know. I would assume that as long as it's using standard HTTP, it should just be able to record that and play it back as though it was talking to the real server. I think zeep uses requests underneath, so I'm pretty sure it will, but we'll find out. Yep.
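The record-and-replay idea in this exchange (record a real HTTP exchange once, then have the test read the stored copy rather than the live server) can be sketched as below. This is a stdlib-only toy: the URL and payload are invented, and a real suite would use a library such as vcrpy, which intercepts traffic at the transport level, which is why clients built on requests, SOAP ones included, can be replayed.

```python
# A recorded response, as it might be stored in the data repository.
import json

cassette = {
    "https://example.invalid/api/flares?day=2022-03-01": json.dumps(
        {"flares": [{"class": "M1.0", "peak": "2022-03-01T02:34"}]}
    )
}

def fetch(url: str) -> dict:
    """Serve the stored response for a recorded URL; fail fast otherwise."""
    try:
        return json.loads(cassette[url])
    except KeyError:
        raise RuntimeError(f"no recorded response for {url!r}") from None

resp = fetch("https://example.invalid/api/flares?day=2022-03-01")
print(resp["flares"][0]["class"])  # M1.0
```

Failing loudly on an unrecorded URL is deliberate: the test then tells you to record a new cassette instead of silently hitting (and flaking on) the network.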
F: Yeah, and then just some other things: we're currently still using Azure Pipelines for some of the sponsored packages. This is basically because it just started working again and it wasn't causing any issues, so yeah. My plan, as part of the grant, is

M: to work on standardizing all that infrastructure, and it's kind of...

E: Yes, as you're saying, I think that you want to do a dashboard for all of the CI across all of the sunpy projects, really just to get this overview of everything and see how everything's running, so you don't have to go into individual projects to check what's broken, and so that we can more quickly fix any issues. And yeah, that's everything I have here, so, if you have any more points of discussion...

A: HTML artifacts, yeah. There are some services out there for, say, hosting small prototype websites and stuff, but they were quite expensive. We basically want, ideally, something like a GitHub Action that's able to just push a directory and give us a URL that's available. I guess we could use GitHub Pages; we could have yet another repo that CI automatically pushes to and that gets updated, but then we would have to manage versions and delete old stuff as well. Yeah, we would have to.
F: We just found this really hacky way of doing it, where we have a Python script that's able to parse a string, which is actually YAML, and then give back a JSON matrix of jobs, which GitHub is then able to parse.
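That trick can be sketched as follows: the reusable workflow receives its job list as a single string input, and a small script expands it into the JSON matrix that GitHub Actions consumes. The input format and key names here are invented for illustration; the real script parses full YAML (with pyyaml), while this sketch hand-parses a restricted "os: toxenv" form to stay dependency-free.

```python
# Turn a YAML-ish string of "os: toxenv" lines into the JSON job matrix
# that a GitHub Actions strategy can consume.
import json

# Illustrative input; in the workflow this arrives as one string input.
envs_input = """\
linux: py310
macos: py39
windows: py310-online
"""

def build_matrix(envs: str) -> str:
    """Expand the env list into {"include": [{os, toxenv}, ...]} as JSON."""
    include = []
    for line in envs.strip().splitlines():
        os_name, toxenv = (part.strip() for part in line.split(":", 1))
        include.append({"os": os_name, "toxenv": toxenv})
    return json.dumps({"include": include})

print(build_matrix(envs_input))
```

The workflow then feeds the printed JSON to `strategy.matrix` via a job output, which is the part GitHub "is able to parse".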
L: So I just want to say why I wanted to bring this up, right. Part of the NASA funding is to make it easier for us to maintain the collection of repos we now have. We're now very much a multi-repository project, as we've discussed repeatedly so far this week, and part of the NASA funding is to make it easier to maintain that, and because...

B: ...[we want every repo to] have the same style checks and the same contribution workflow, so that, both from a contributor perspective and a technical perspective, it is as uniform as possible, right. If somebody knows how to contribute to core, and they know the expectations on style checks and whatever in core, they should feel perfectly at home contributing in all the other repos. So there are some technical things I want to talk about here, like code formatting.

L: [This applies across the] repos: basically, if we do it on core, we should be doing it on all the rest of the sponsored packages as well, just so that expectations are the same over all of them. I guess the first thing to ask is: does anybody object to this? Because it's just making it easier for people. Well, it's not necessarily... yeah, I like it, but it's also not necessarily making it easier for the bot to continue to do updates on all of the packages and stuff, if we're like...
F: [We have a package] template, which is a package template for all of the sponsored packages and any affiliated packages that choose to opt in. If we want to change this CI config, for pre-commit or something, we would modify it in the package template, and then a bot would PR those changes to all of the repos. Obviously, I think the actual contents of the GitHub Actions CI config file, with the tox jobs that are run, are probably going to end up being unique to each repo, because, you know, different repos have different test jobs and stuff, but there are parts of that which are the same. Yeah, exactly, and as much of it as possible.
L: I'm just going to put the issue in the chat. A lot of the brainstorming for this discussion, the features we want, [is in the] issue I just posted in the chat on the sunpy project repo. I guess I can share my screen at some point. I probably need to refactor this issue out into a project or something a bit more interesting than a markdown to-do list, but I haven't got around to that yet. Hopefully there is some way to easily convert it; yeah, if only GitHub had that feature where you can click a button and it goes into an issue.

B: [If you're a package outside sun]py, where you don't want to opt into all our workflow, then you would use the OpenAstronomy package template as a starting point. But if you were spinning up a new package in the sunpy org, or you wanted to pretend you were, I guess, then you would use the sunpy package template, which would get you our pre-commit config, our Actions config, all our issue labels, all of that kind of stuff.
L: All right, yeah. I mean, I think with the OpenAstronomy one there are bits missing that haven't been documented yet, and there are a few little things that could be added and tidied up; there's no CI stuff in here at the moment, so I'll probably add a basic GitHub Actions workflow to that, and stuff.

B: Anyway, yeah, there's work to be done on the OpenAstronomy guide, but I think what's there is good. There is more stuff that could be added, but I think...

L: That sounds reasonable. Okay, should we talk about the least controversial [thing first]? I feel like we need a list of things we need to standardize: what is currently different over the different orgs, other than what you mentioned?

F: What else? Across [the repos], outside of the pre-commit [config], I'm not sure what else I can think of. I think there's some workflow stuff; we don't tend to be as...
K: I think one of the interesting questions is: why do some of those other repos not get the same [attention]? Yeah, I think that's the difference: fewer people have notifications turned on for all our sponsored packages than for core, right. Like, yeah, the main problem is: why am I the only one doing this? That's the actual problem; we shouldn't be trying to hack around that with workflow. I think part of the reason is because you're always doing it, right? Yeah, I mean, not to be like... I think people then tend to rely on you to do it, and then it's like, well, it's already done. So I think we need to be... well, I would bring up the point where we got the SUVI code pull request: who here would have sat down and gone through that properly, and spent the time on it that I did? We got that, and it got made mergeable within three months, yeah.
K: I don't think it has to be one person, but I think if we start building up a review [load] on all of those repos, then somebody's going to have to start looking at it. Like, I always say, I haven't been reviewing stuff like this because I don't know it's there, personally. So, well, how could we fix that? Well, I personally watch every [repo]. Yeah, I mean, I think I do too, but my GitHub notifications page is a god-forsaken hellscape.

K: So if you hadn't done it, and you had to have another person review it, it might force people, like, ping people and just be annoying, like, oh, I could leave it alone, but you'll do it, and then it will have to be done, right. Do you have to ping [people on] Element to ask for a review?
K: Yeah, and I go, well, I think people are going to want half of this change committed five different times, yeah. I think there's a different conversation, in the case of a PR, about the first time someone opens a pull request: like, should we be more lenient? Yeah. I don't think the SUVI PR is a good example, because Kevin is planning to PR code to it once [he] gets funding, right, and that potentially is an algorithm...

K: [I don't] know how we do this, but I think we need to set the expectation that you're maintaining all of them. So actually, here's an interesting point, going back to our standardizing: at the moment, different people have commit access to different sponsored packages. Should we just have one set of people with commit access, so that you get access to all the sponsored packages?
K: I mean, well, no, because what's actually more important for this discussion is: what are the repos under the org that we're all on the hook for maintaining, whether they are just provided, pre-sponsored, or early development? I'm sure they're down the list. Xrayvision? Just delete it, it's fine. Don't you want it under your [org]? I'll transfer it back to you. Yeah, transfer...

K: There is a category of things that are on our roadmap, like, say, spectral data, for instance, where we maybe one day want it to end up in core, or where, like ndcube, we have otherwise identified a need that somebody needs to provide. The sunpy developers, as the collective, are going to do that development work. It is a sunpy project from conception through to release, and sure...
G: From a pure governance perspective, literally nothing; I'm not allowed to do it, and I'm the lead developer. I'm not saying it's a good idea, but if we're going to get into legalese here, literally nothing. Yeah, okay. Also, to throw myself under the bus: like the sunkit-dem stuff. I just created a repo under the sunpy org, like three years ago, and it sat there to rot, right, and now it's just taking up space. Xrayvision's gone, because it's now in there.

F: ...that there should be some formalization around what happens. Well, yeah, I see, I agree, but I feel like that could be... We discussed it at a community [call], or there was an issue opened on the project, I thought, yeah. It's the kind of thing [to raise] on a Wednesday call, because we want to add a repository under [the org].
C: ...that the nebulous entity "the sunpy developers" maintains. So, then, is the suggestion, taking sunkit-spex in that case, that we want it for community, for branding, for getting momentum, so that it isn't just Laura, Shane and myself, but Ian and Chris and hopefully other people. If we want to sort of create the sense of a community effort, it might not specifically be the entire [project] but a subgroup of that, so we create a new GitHub [org], a community separate from sunpy, which is like sunkit-spex, or, you know, to say: well, this is a subset of the sunpy community, clearly identified as maintaining the package. The code itself is not yet worthy of being a sunpy package, but the concept and the functionality that they're trying to maintain certainly have community value. What's the step between it being on my branch and, you know, it being sponsored? And can I get a clarification as well: are our sponsored packages supposed to be, like, finished deals?
C: Somebody [is] down [as] building it, and therefore it's under the org, but my understanding was that sponsored packages were somewhat mature, because you could just sort of look at how many repos are under the sunpy project. Whereas, to me, "sponsored" is a declaration of guarantee to the community that this works to a certain degree and that sunpy is responsible. Yeah, I think, to me, that means it's on the website and not provisional, whereas... yeah.

C: Well, then we should be clear with our language: just because it's under sunpy/sunkit-spex doesn't mean it's sponsored. Yes, maybe, like, call the repository sunkit-spex-incubator or something like that, something in the repository name. I think we don't need [that]: the actual list of sponsored packages on the affiliated packages page is the way we tell users, these are the projects that the sunpy project is maintaining, that are released and up for use; you go here, to the list. We don't...
C: ...have to [go by] the GitHub repo. Like, we have loads of GitHub repos that are in various states, from taken over by the machines through to sunpy core, right. I just don't think that because it's under the GitHub org it means that it's finished and done. It also is like, we are not promising users anything. We're saying, oh, this is a package that sunpy has, but we're not promising anything to users whatsoever, whereas when you say it's sponsored, we are declaring that this...
C: [If you look] through all your packages and your GitHub repos, I wouldn't necessarily go back and look at the project site, but just because it's under the sunpy org, I wouldn't assume it's done; I would assume that there's a certain quality to it, I think. And also you run into the issue that, like, you see "certified" packages and then you look at some...

A: ...of them and you're like, oh, I don't actually trust any of these. Yeah, and then it goes the other way, and then you're like, oh, that's still being developed, I won't use that; they don't trust us. Well, I mean, we should make sure these packages have in the description what they're for and their status, like "under development" or something, just in the About section.
F: ...ago. It seems like we're content; otherwise, there's no need for a sponsored-package label. It's just: if it's sunpy-whatever, then it's sponsored, so we don't need any of this language, you know, unless there's... Exactly. When I have been using this, and this is kind of on me, but to be clear, when I've been saying "sponsored packages", like in the session this morning, as in "we're going to have shared infrastructure over all the sponsored packages", I meant...

B: Let's go through them. So we have the .github repo, which is just a place to store content that is global, for all the issue templates: if you're opening an issue on sunpy or any other repository, there's a bunch of templates you can choose from. mpl-animators, which is the old [animation] code pulled out from ndcube. Is there anything [ndcube-specific] left in it? Yeah, that definitely falls under this list; like, I don't think we're ever listing mpl-animators as a sponsored package on our website, because it's not solar physics specific, but it is a package.
B: Yeah, I mean, we develop things that aren't necessarily solar specific, but they are useful to solar physics, so we maintain them. We built them, I think, because we need this; to tell others, yeah, fair enough. You've got the figure test report, which is another thing. You've got sunkit-dem, which we're deleting right now; you can move it to me. Okay.

F: Yeah, so you're talking about redoing the front page anyway. With those categories in the documentation, you could always put down that sunpy is also associated with, you know, some kind of tertiary packages that do specific tasks that aren't within sunpy's remit. You know, these are your... yeah, or like, if you want to become...

H: Okay, let's... how long have we got? When's the session [over]?
J: Pre-commit and code style is the big thing that varies across our packages at the moment. This is both things that are enforced by pre-commit and general style, like, you know, how we do development and what tools we use. So things I'm including here for discussion are black; a lot of our sponsored...

J: [My] opinion [is] that the ecosystem, specifically mypy, isn't there yet, but when it is, we should do it. And so we need to decide: are we going to turn on mypy on core now, or on all the repos? Like, if we're not checking types, what's the point in having them? So, should we write some bullet points and just go through these one by one? What do we have that we need to agree on here, right?
J: As we implement... we can't sit there, because the alternative to doing it [gradually], once astropy eventually has support, is that we do what we had to do from [Python] two to three, which is a horrible set of large pull requests to add typing to it; because then we are adding mypy wholesale, with [everything riding] on it.

J: I think, like, the whole thing with applying type hinting is that it doesn't have to be all or nothing. You can start by typing a lot of your internal, hidden, private APIs and then leave the external stuff, so users don't see it; you get the benefit of it inside, but it isn't exposed to, you know, our end users, essentially. But at the moment we have nothing, kind of by conscious choice, right. Type hinting has a readability cost in the code base, it has a maintenance cost on the code base; it is a choice we need to make. I've resisted small type hints being added to core, because we haven't made that choice, and that's what this discussion is, right: are the trade-offs for type hinting worth it? You say...
J
That
is
that
is
one
of
the
major
problems
we
use
function
annotations,
but
they
are
not
typing.
The
national
fight
does
not
support
making
them
type
things.
At
this
point,
can
we
just
take
a
step
back
and
consume
this
yeah
you?
So
you
know
when
you
do
you
don't
want
to
do
it?
You
do
if
you
find
it,
but
you
know
when
you
do
new
quantity
input.
J
That's
a
function
annotation
right,
you're,
annotating
that
argument
to
the
function
with,
in
that
case,
a
unit
yeah
those
function,
annotations
and
similar
bits
of
language
were
added
to
facilitate,
like
type
hinting
as
in
you
could
say.
That
argument
is
a
float
or
that
argument.
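The point that annotations are inert metadata, not enforced types, can be seen directly: Python records them on the function and never checks them at call time. The function and the unit strings below are illustrative stand-ins for the astropy units that `u.quantity_input` would actually inspect.

```python
# Annotations are recorded on the function object, not enforced:
# calling shift() with any types at all succeeds at runtime.
def shift(angle: "deg", distance: "km"):
    """Illustrative function; the string annotations mimic unit markers."""
    return angle, distance

print(shift.__annotations__)   # {'angle': 'deg', 'distance': 'km'}
print(shift(1.5, "not a km"))  # no error: nothing reads the annotations here
```

A decorator like `u.quantity_input` works by reading exactly this `__annotations__` mapping and validating the arguments itself; the language provides only the metadata.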
J: ...and it's not enforced at runtime, but it's metadata for checkers or linters or whatever. Okay, right. So, for example, and I don't have the best example, but I was going to share my screen, so if you want to join... [shares screen] ...and then that tells me what the valid types of the input are, and then, to type the output, you do this arrow and then the output type. Okay, and these types are not [checked when you run the code]: you can run this code and it doesn't check the types at all. But there's a library called mypy, which is the static type checker. So you run mypy on this file and it checks that your code adheres to what you've typed; it basically checks the types of the inputs and...
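A minimal version of the workflow being described, with an invented function (not from sunpy): the code runs regardless of the annotations, and it is mypy, run as a separate step, that reports the mismatch.

```python
# The "arrow" syntax annotates the return type; nothing here is
# enforced when the code actually runs.
def scale_flux(flux: float, factor: int) -> float:
    """Return the flux multiplied by an integer factor."""
    return flux * factor

# Running the code performs no checking: Python would happily execute
# scale_flux("oops", 2) and return "oopsoops" at runtime.
# The static checker flags it instead, e.g.:
#   $ mypy example.py
#   error: Argument 1 to "scale_flux" has incompatible type "str"; expected "float"
print(scale_flux(2.5, 4))  # 10.0
```

This is the split the discussion keeps returning to: the annotations cost readability in the source, and they only pay off if something (mypy, an editor) actually checks them.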
J
When
you
start
so
vs
code
and
they
type
check
in
real
time,
so
if
you
open
this
function,
it
will
then
go
okay.
This
is
the
type
you
should
pass
it
and
if
it's
wrong,
it
will
highlight
this.
So
my
vs
code
will
highlight
that
this
is
actually
a
long
time
so
and
for
developing
codes,
open
libraries,
which
is
what
we
are
essentially.
This
is
not
necessarily
a
user-facing
change,
but
development
change,
ensuring
that,
as
we
make
changes
to
our
programs,
we
are
not
doing
this
equation,
so
we
do
already
to
a
greater
lecture
extent.
J
Have
this
information
in
our
code
base
it
lives
in
the
dot
string
right.
You
can
make
sphinx
pass
the
titles
out
of
the
function
signature
instead
of
putting
them
in
the
document,
so
you
don't
have
to
duplicate
the
information
yeah.
So
there's
also
things.
There
is
a
strengths
body
which
is
you
know
it
works
most.
It
works
most
of
the
time.
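Documentation tools that lift types out of the signature rely on standard introspection; this sketch, with an invented function, shows exactly what they see, which is why the same information no longer needs repeating in the docstring.

```python
import typing

def rebin(data: list, factor: int) -> list:
    """Keep every factor-th sample (illustrative only)."""
    return data[::factor]

# What Sphinx-style tooling extracts from the signature:
print(typing.get_type_hints(rebin))
```

`typing.get_type_hints` resolves the annotations into real types (including string/forward references), which is the same API static documentation generators build on.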
J: One of the points: you can specify custom types at the top of the file somewhere, which will [tidy] this up. Yes. So, specifically, I tried to do this for [the] APE 14 [API]: there's a closed PR where I tried to type-hint APE 14, and I actually wanted to use it for compatibility checks, like, does your implementation of APE 14 comply with the API, by documenting the types of the [methods]. It was not something I would want anybody to have to read. Like, I got close to it being right, and it made the readability of that file a hundred times worse, because there was just so much [annotation]. But the flip side of that is that being forced to write the type hints makes you realize that you've made this crazy function that's really hard to understand, and encourages you to simplify it.
J: Yeah, I mean, the APE 14 API was designed very deliberately, and it is what it is for good reason. I mean, in that case, if the typing is 15 lines of types, because you have the craziest function in the world, you just type it `Any` and accept that. But there is a [gap between] prototyping and... which is like, we'll take...

J: We will have to accept that we have `Any`, and we have to document that the return type of this function is going to be so many things that we cannot possibly enumerate, and that's potentially a compromise we're going to make in some cases. Okay, so in terms of making a concrete decision right now: does anybody want to argue against starting to add type hints?
J: It's a lot quicker. Like, the whole point is that when you run it, it'll tell you about a mistake that you wouldn't [find] until you hit some really rare edge case in the future. Wherever it's type checked, you'll find that straight away, because it'll be like, oh, this function can call this thing with a string when it should be an int; fix it now, rather than waiting two years...

F: ...until someone finally finds that edge case and you've [forgotten] what you've written and why. The other advantage I've come across: [in] code [using] coordinates, you come across so many [objects that] you would sort of add it in a few places, and I found I very quickly [understood] what it was saying.
J: It made the code actually more readable. The type hints make the code more [readable] for a developer, because, like, if you go into your docstrings and you read what each variable is, that information is scattered through there anyway; but when you look at, okay, what are the variables going into the function...

H: It meant, not that it was more useful, but that I got the same information quicker, with lower effort.
J: [I] saw type hints, and now I know what they are. I probably couldn't contribute to this, I don't have the development skills to do this, but, like, maybe it's just, again, people do go down to the GitHub repo and read the docstrings, and I'm not against it; I'm just saying that... I agree, and I think the way we transition to the typing is going to be quite [involved], if we are going to, when we transition core specifically, because it's so complicated; it's going to have to be managed.

F: I think the other packages we have, the sponsored packages, are substantially simpler, where this would not be a problem. No, no, again, you're still... I think we already require people to understand the concept [of annotating types], because we require them to write docstrings; it's just a different syntax.
F: At this time: Ed, do you want to just introduce yourself briefly and talk about, like, your Solar Orbiter connection? Yes, actually, oh, the sun is actually up here; the sun has come up, I'm up early. [My] camera is on, so... I'm Ed Bain, I'm a software developer, working for Terry Kucera at NASA, at the Goddard Space Flight Center, supporting SPICE on Solar Orbiter. So this session seemed relevant; that's why I tuned in. I'm just going to be here for this. Awesome, thanks for coming. Sure.
H: [I put some slides] together really quickly, just to kind of focus everyone's thoughts. I'm sure everyone is aware, but I thought it might be good to really quickly go over the Solar Orbiter mission. So Solar Orbiter is a unique mission in a number of aspects; I guess the key one is...

J: [There are] three 10-to-15-day windows per orbit: one at kind of the most northern point of the orbit, one at perihelion, and one at the southern point of the orbit. And so, in terms of analyzing the data, it's not that you have it constantly; you're kind of looking for specific cases where instrument X was on and looking at a certain point. And I guess, as well, not all of the instruments are full[y on].
B: [The orbit is] highly inclined to the ecliptic plane, so we'll actually get remote sensing observations of the poles, and obviously in situ as well. And, like, one of the key drivers of the mission is really to try and combine these in-situ and remote sensing observations, because up until Solar Orbiter we had this problem where we'd see something on the Sun...

J: ...and then some time later, days or hours, we might detect something in situ, and it was very hard to disentangle what we'd seen within the in-situ measurements from what actually happened on the Sun: were there propagation effects, did things change as they moved, et cetera. So the key thing here, because we're going to be so much closer, is to really try and tie down: we saw this in an image of the Sun in some wavelength, and then we detected this physical thing in situ.
J
So it's trying to link these two things together. And I guess another thing to bear in mind about the mission is that, again, we don't really have real-time commanding of the spacecraft. The instrument teams load up two weeks of observation plans at a time, these are then sent up to the spacecraft, and the spacecraft just —

B
— executes them. So if something new happens, there's not really an opportunity to change what the observing plan is going to be. There's one very, very small exception: if you're following an active region, you can do this thing called very-short-term planning, where each day you can tell it to point to a slightly different place — but that's really it. And then —
H
We discussed this already. It's —

H
— the latter half of this year, basically showing the orbit of a number of spacecraft and then the various remote sensing checkout windows. So you see this blue bar, this red bar and this orange bar — you can't see my mouse, but they're essentially the remote —

A
— sensing checkout windows, when all of the instruments on Solar Orbiter will be active. Outside of these windows, the remote sensing instruments will be off, apart from STIX. STIX is a special case: because we've got such low telemetry rates, we're —
E
— always on. And I guess this just shows the complexity of trying to analyze data from —

B
— the Solar Wind Analyser, which is an in-situ instrument. Basically it creates time series data, where n is supposed to represent the dimension of the data — it could be many, like a single time series, or —

G
— you could treat them — you know, the three components of the magnetic field — as three independent time series. They're not really, but you could represent them that way. EPD produces single-valued time series, but also 2-D time series, like spectrograms.
G
There's the Radio and Plasma Waves instrument, which is kind of in both camps, because it measures the in-situ properties of the electric field — and also the magnetic field, to a certain extent — but it also measures radio waves which have propagated from a remote location. And then — yeah, go for it. A clarification here: when we talk about time series and, like, n dimensions, are we talking about geometric dimensions — so, you know, like perpendicular components — or are we talking about what time series scientists sometimes call a dimension, which is just another column in a table?

G
First — they're not; they might have been, but I just want to clarify that if you wanted to represent these time series, you'd use a 2-D table — you know, with n columns — rather than an array with n dimensions.
G
Okay, so the MAG example is just three columns, which are the three components of B. Okay — I mean, I think that SWA also does density, but density is obviously not a vector thing; it's just a number as a function of time. So what I was trying to get at with n is that it's not one number — it could be one, three, or two, I guess, in different cases. Sometimes they're linked and sometimes they're not; sometimes they're proper dimensions, and sometimes they're just different properties measured as a function of time.

G
Okay — I mean, right, you can think of an astropy table with one time axis and then a bunch of different-dimensional arrays as the columns. That's one way you could think about it — a different dimension per column, right. Okay, all right — actually, that clarifies it even more. Okay, thanks, yeah. I was certainly confused by that.
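The distinction being drawn here — n columns in a 2-D table versus an array with n dimensions — can be sketched with plain NumPy. This is a hypothetical MAG-like layout for illustration, not any instrument's actual data product:

```python
import numpy as np

# Hypothetical MAG-like data: 5 samples of a 3-component magnetic field.
times = np.arange(5)                   # the single shared time axis
b_field = np.arange(15).reshape(5, 3)  # one (N, 3) vector quantity
density = np.linspace(1.0, 2.0, 5)     # a scalar quantity alongside it

# "n columns in a 2-D table": each component is just another column,
# all indexed by time only.
table = {
    "time": times,
    "b_x": b_field[:, 0],
    "b_y": b_field[:, 1],
    "b_z": b_field[:, 2],
    "density": density,
}

# This is NOT an n-dimensional array: every column is 1-D over time,
# whereas the vector form is a single 2-D array indexed by (time, component).
assert all(col.shape == (5,) for name, col in table.items())
assert b_field.shape == (5, 3)
```

The table form loses the fact that b_x, b_y, b_z belong together as one vector, which is exactly the point made above about the components not really being independent time series.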
G
Yeah, and then you've got the more traditional remote sensing instruments. I'm not sure about SPICE in terms of exactly what you can get out of it — I guess you can get space by wavelength by time, with a raster.

F
— more complicated data, yeah. I don't know if anyone else from the Solar Orbiter side wants to jump in with a question or clarification there before we —

G
— the angle between the Earth and orbiter. You'll see that it's not continuous, particularly for a lot of the in-situ ones. So EUI, particularly — it operates in these little bursty windows, and again, depending on your science case, you're going to want to search and be able to pick out very specific things, where maybe you —
F
— know there was a high-res EUI image, PHI was also doing something, and one of the in-situ instruments was doing something some hours before, to answer your science case, I guess.

B
— side of things, there are just two things that we really don't support getting data for. So that's —

A
That definitely works. So, just to clarify — Pearse, who has just said they're all green: the top four of those circles in the core column are orange, and the one underneath that is green. Sorry about that.
G
Yes — so, I don't know; I assume so, but let me — it's not yet. I think you have to know someone, you —

G
— working, and then when they do the public release, it will be prepped and ready to go.
F
Do you want to maybe say more about that — like the example you gave where we have the 3-D velocity data? Like I said, you can load it into a time series, but it's not going to behave as a vector quantity. Yeah — so, just to go back to some of the data: for the Solar Wind Analyser in particular, you get what's called a distribution function, and that's a function of time.

F
You measure that, say, every 30 seconds, but you also measure it as a function of three velocity components — velocity x, velocity y, velocity z — and those are three velocity dimensions. So conceptually it doesn't make sense to just represent that as a load of columns that are dependent on time: it's data that is dependent on four different dimensions, and so a time series is not a suitable data structure for that. NDCube is —
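A minimal NumPy sketch of why a flat table fails for this kind of data. The shapes and the density reduction below are illustrative, not SWA's actual product definition:

```python
import numpy as np

# Hypothetical velocity distribution function f(t, vx, vy, vz):
# 10 time steps, 8 bins per velocity component.
rng = np.random.default_rng(0)
f = rng.random((10, 8, 8, 8))

# Velocity bin width (uniform here, purely for simplicity).
dv = 1.0

# A flat 2-D table would need 8 * 8 * 8 = 512 columns per time step and
# would lose the (vx, vy, vz) structure entirely.
n_flat_columns = 8 * 8 * 8

# With the 4-D array kept intact, physically meaningful reductions are
# one line: the number density is the integral of f over velocity space.
density = f.sum(axis=(1, 2, 3)) * dv**3  # shape (10,): one value per time

assert density.shape == (10,)
```

This also illustrates the later remark that most users only touch derived moments (like the density) rather than the full four-dimensional object.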
F
But basically — as far as I know, there is no data structure available that is suitable for that in-situ data.

F
So yeah — I mean, I guess NDCube might be a generic one; you could probably persuade that data into NDCube. I'm not really suggesting that, but there are specific and very common data processing techniques that people run on that type of data, and those should be part of that data structure. But something like xarray — yeah, xarray would be ideal. So is there an instrument that's already like it, and what are they using — what's been used before?
F
So I think someone developed a package called heliopy-multid. HelioPy, just for a bit of context, was my library for heliospheric physics; I basically archived that and moved a lot of the functionality into sunpy, so we could support in situ — that's the motivation. But someone spun a heliopy-multid package out of HelioPy, which itself never supported multi-dimensional data.

F
So we should look at that as a first place for inspiration.
F
I guess MAG is one-dimensional vector data, whereas a velocity distribution function is a four-dimensional scalar. So is there data similar to that on PSP?

I
— is going to have HERMES on it, which has instruments similar to that, and I think he's working on developing the software for this —
I
— to hear. I think they've probably talked about it — I've been in conversations about this, and we were talking about what the data products would be, and it's probably very similar, yeah. But yeah — this is — I'm going to talk about that, okay. But yeah, this is a really common data type that lots of instruments use; it has usability way beyond Solar Orbiter. So there's all this legacy code, ideally, that could be used — I don't even think —
I
I think people have just used their own code, a lot. So typically, a lot of people don't interact directly with these four-dimensional things; they just use the density that you derive from them. I'm trying to think of a remote sensing analogy — I guess it's like having a black-body spectrum, where a lot of people would just care about the temperature of that spectrum instead of the actual spectral data it's derived from.
I
I guess that's a subclass of what we've just been talking about. And then I guess the question is: how can we improve the support in sunpy for this kind of data — in terms of sunpy core, and then also, I was thinking, you know, what sponsored, affiliated and other packages already exist? And I guess — Danny, what's the story with SPICE data and the spectrum object from irispy, or integration with that?
I
So I haven't looked at this for a while, but we have Ed online, who's working with Terry on the SPICE stuff. Have you been using the sunraster data object, or have you been working on other things?

I
I have used the sunraster data object — there's a routine where you can load a SPICE FITS file and it gives you an NDCube object, or rather an NDCollection, one for each window.
I
I believe sunraster does not support IRIS anymore — I think that was actually removed from sunraster. Nabil could speak better to that; I'm not sure what the rationale behind it was. So — well, quickly:
The rationale was that there's a Lockheed IRIS-specific package, and basically — well, it makes sense for the instrument team to kind of own that, but there's not a SPICE-specific package. So I have that bit of SPICE — yeah, I wrote that reader, and instead of having a SPICE package that has only a reader, that's why that's in sunraster. Okay — so, kind of going back to Shane's question —
I
What specifically were you asking, that maybe between Ed and myself we can shed some light on? Yeah, I guess the question is: will there be SPICE support in sunraster, or somewhere else in the ecosystem, that people can easily access in the future — or now? So, to my knowledge — and I looked — there is SPICE support in sunraster, to read SPICE data into more generic sunraster data objects. But since then, Ed, you've been working on other things, so maybe you could answer what's happened since then.
I
If anything — I don't think anything's happened since then. I haven't written any code; I fixed one little bug, but that was the extent of it. I've used the code, but I haven't contributed to it.

I
— be put into sunraster, or put into another package that may import sunraster — I'm not sure. I don't think there's been any specific discussion, none that I've been a part of. I've only been doing this for a few months, so I'm not privy to everything that's going on and what all the plans are — but no, nothing that I know of. Okay — that might answer your question, Shane: basically, with a question mark, yeah.
Well, I mean, that's fine. That's really all I prepared, slide-wise. I guess the question, then, is — you know, we go back to: how can we improve this? What's the way forward, and what's feasible in the short term? Because we have an awful lot of plans for sunpy, the project.

I
So I think that would cover downloading and searching, probably, at some point at least — I think so, yeah; it's true in principle, yeah. All right — so then, for example, I guess the other problem is opening all the files. I mean — Dan,
you mentioned pulling out a spectrum object from — was it sunraster, or irispy? Pulling out — you mean pulling out the coord? No — just, yesterday, I think you mentioned something about that.

I
What is the SPICE type — might it just be an object, or might it inherit from —

I
I mean, the plan was always to make it an NDCube, because a spectrogram is essentially NDCube-type data, yeah. So, to answer your question, Shane — I've just realized I have a different laptop today and I haven't installed Chrome, so I probably won't be able to share. Yeah — you know what, I'll give it a quick try.
I
So this module has a spectrogram ABC, which just tries to define the API, and then there's a SpectrogramCube object, which inherits from NDCube and also from the API defined in the ABC. And this is not specific to rastering or anything — it's just: if you have a cube of spectrograms, it's an NDCube with all the stuff that comes with that, plus a few more intuitive, mostly convenience, methods. So what it does is it also defines instrument axes.

I
So, like NDCube, you have physical types on the different data axes; but when you come to a spectrogram, the physical types you have — each axis corresponds to some kind of instrumental configuration. So, you know, you have, say, space, space, time —

I
— you know, spectral — and so it tries to define which data axes correspond to those.
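The SpectrogramCube idea described here — an NDCube-like object that knows which physical type each array axis carries and exposes convenience accessors — can be sketched with a toy class. This is a duck-typed illustration only, not the real sunraster API; the class and attribute names are made up:

```python
import numpy as np

# Toy stand-in for the SpectrogramCube idea: the object records which
# physical type each data axis corresponds to, and exposes a convenience
# accessor so users don't have to go through WCS round-trips.
class ToySpectrogramCube:
    def __init__(self, data, axis_types, spectral_values):
        self.data = data                    # e.g. shape (space, space, spectral)
        self.axis_types = axis_types        # e.g. ("space", "space", "spectral")
        self._spectral_values = spectral_values

    @property
    def spectral_axis(self):
        # Convenience accessor: return the spectral coordinate values
        # directly, instead of requiring pixel_to_world calls.
        return self._spectral_values

cube = ToySpectrogramCube(
    data=np.zeros((4, 4, 6)),
    axis_types=("space", "space", "spectral"),
    spectral_values=np.linspace(170.0, 180.0, 6),  # made-up wavelengths
)
assert cube.axis_types.index("spectral") == 2
assert cube.spectral_axis.shape == (6,)
```

In the real package the spectral values would come back as an astropy Quantity (or, as mentioned below, eventually a SpectralCoord) derived from the cube's WCS rather than being stored directly.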
I
Well — as well as that, it just provides some slightly —

B
— more customized slicing, and then provides sort of convenience methods: if you want, instead of having to go through pixel_to_world or world_to_pixel or that sort of stuff, you can just do .spectral_axis, and it gives you back a Quantity — or, in the future, like a SpectralCoord —

I
— object. And same with time, exposure time; it gives you celestial, so you can get back a SkyCoord of the different spatial coordinates for each pixel. And then, as well as that, it allows you to apply an exposure time —
I
— correction, and to, like, undo that. So, you know, it has the exposure time stored there, and it will basically figure out whether you've already done this based on the unit, and then it'll divide your data by that; and there are a couple of keywords that allow you to then undo that, if you want to. So that's basically what the SpectrogramCube object does — and then there's also a SpectrogramSequence —
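The exposure-time bookkeeping described here can be sketched in a unit-free form. The real implementation infers whether the correction has been applied from the data's astropy unit; the explicit boolean flag below is a simplification for illustration:

```python
import numpy as np

# Toy exposure-time correction: divide counts by exposure time to get a
# rate, and track whether the correction has been applied so it is
# idempotent and reversible.
def apply_exposure_time_correction(data, exposure_time, applied):
    if applied:
        return data, applied              # already a rate: do nothing
    return data / exposure_time, True

def undo_exposure_time_correction(data, exposure_time, applied):
    if not applied:
        return data, applied
    return data * exposure_time, False

counts = np.full(4, 100.0)                # raw counts, 2 s exposure
rate, applied = apply_exposure_time_correction(counts, 2.0, applied=False)
assert np.allclose(rate, 50.0)

# Applying twice is a no-op, and undoing restores the original counts.
rate2, applied = apply_exposure_time_correction(rate, 2.0, applied)
assert np.allclose(rate2, 50.0)
back, applied = undo_exposure_time_correction(rate2, 2.0, applied)
assert np.allclose(back, 100.0)
```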
I
— for data spread over multiple WCSes. So that may well be the incubator — or, like, the progenitor — of a spectrogram object that could potentially —

I
Okay, cool, yeah. I definitely think there'd be some stuff that would have to be pulled out to get a radio spectrum in there, but that's not a problem. I think one of the interesting things it would be really nice to be able to do — obviously, for the maps, we have reproject, so that you can, you know, overplot images from different viewpoints — but for the time series stuff, it would be nice to have similar context managers —
I
— you know, where you could do something like 'with radial propagation', and then you could plot two time series on top of each other, where you've accounted for a radial propagation model or a Parker spiral propagation model.

I
— places, accounting for the — yeah, the propagation time, I guess.

F
And to make that easy would be super useful — you know, akin to how reproject works for maps.
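Nothing like the wished-for context manager exists in sunpy today, but the idea can be sketched. Everything here is hypothetical: the names, the API shape, and the deliberately crude constant-speed ballistic model:

```python
import contextlib

# Sketch of the wished-for 'with radial propagation' idea: shift time
# series timestamps back by a propagation delay so observations from
# spacecraft at different distances line up on a common "launch time"
# axis, akin to how reproject lets maps from different viewpoints be
# overplotted. Hypothetical API, not an existing sunpy feature.
def ballistic_delay_seconds(distance_km, solar_wind_speed_km_s=400.0):
    # Travel time for solar wind moving radially at constant speed.
    return distance_km / solar_wind_speed_km_s

@contextlib.contextmanager
def radial_propagation(timestamps_s, distance_km):
    # Yield timestamps shifted back to the solar "launch" time.
    delay = ballistic_delay_seconds(distance_km)
    yield [t - delay for t in timestamps_s]

# A spacecraft at 0.5 au sees a structure at t = 150000 s (made-up epoch);
# shifting by the propagation delay recovers its approximate launch time.
obs_times = [150000.0]
with radial_propagation(obs_times, distance_km=0.5 * 1.496e8) as shifted:
    launch_times = shifted

assert launch_times[0] < obs_times[0]
```

A Parker-spiral version would replace the delay function while keeping the same context-manager shape, which is the point of the design: the plotting code never changes.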
I
Do we want to keep discussing — is there more to discuss on this? I had two other things I wanted to chat about at some point during the session.

I
I was curious to ask Ed about what SPICE work he was actually doing, when it comes to software and fitting, and just to clarify that.

I
— files, and getting a general gist of what's in them. Since then, I've been doing spectral fitting software — that's what I've been working on for the last, oh, about two months: exploring fitting.
I
— hoping to have some sort of a graphical user interface, possibly within glue-viz, which is something Nabil turned us on to a while ago. So that's what I've been working on in the few months that I've been here. Okay, great — and where do you envisage those tools being available? Through sunraster, through a 'spicepy'? Actually, I'm not going to use that phrase, because that's loaded — but, you know, a SPICE-

A
-specific package. That is an excellent, excellent question, and I wish I had a good answer for you. That is something that needs to be determined, and it's going to be a subject of discussion — it's already been a subject of discussion, just between Terry and myself —
B
— about where to actually put it.

F
A question on that, then: how — at least, first, in principle, and then in practice, based on how you're actually coding it up — how specific to SPICE are these fitting tools? We're trying to — no, yeah, we're trying to make them device-agnostic, so we want them not to be specific to SPICE. Obviously, we want to support SPICE, and that's going to be our first priority, but we also want to support IRIS, or other spectrographs that are out there.
A
— happen, but it requires a chunk of work. Yeah, that's what I've discovered — that was what I spent about the first month doing:

E
— benchmarking different fitting technologies, because, you know, when you're going to be doing every pixel in a window, every millisecond counts — it multiplies out real fast.

B
But I've done a bunch of work on, like, figuring out where to start — and never had the time to actually do any of it. So there's an open issue.
F
That sounds great. Yeah — so, I know, to kind of focus —

B
— on this — maybe this is a little less general, but one question and one comment, and then I think I'm done. This fitting infrastructure — are you reading the data into the sunraster object and then using that to store the data, or is it, like, a totally custom little —
J
— that we're getting out of sunraster. Okay — so the data is stored inside a sunraster NDCube-like object, and you're accessing the data from that. Okay, good — that sort of helps me understand the link between what you're doing and the slightly more general sunraster package, which is kind of under the sunpy umbrella. And then a comment, which I'll throw out but we probably shouldn't get too deeply into: if this tool does end up being applicable —

B
— to any solar spectrograms, is that something that core might, in the long-term future — would that be within scope? Of course — in the future, in principle.
J
You know, thanks so much for explaining that — my pleasure — it sounds like some really good work, with much wider applications than just SPICE, which is exciting.

J
It's triggered a discussion, which we can pick up again if someone's got the —

B
— reaching out to instrument teams to start up that effort — or at least that's the way I'm choosing to interpret it — and so what we are trying to do, at least —
M
— as the purpose of the sessions this afternoon is basically just to more or less redo that first meeting that we had in 2019. But that's not specifically Solar Orbiter, right? That is — no, more general, yeah. It covers a lot of Solar Orbiter folks, for sure — which I don't think it necessarily did in 2019. Not that Solar Orbiter wasn't a thing then, planning-wise, but the instrument landscape has changed quite dramatically, because —

D
— I foresee the primary purpose of the group, at least initially, as just surveying people: what are you doing? Not even anything high-level — just: what does your instrument's data look like, what do your analysis tools look like, and how can we help? So we could have, as a summary, a run through the different Solar Orbiter instruments.
B
— side of things, Solar Orbiter — yes. Hi everyone — yeah, I'm in contact with the Solar Orbiter EPD instrument team, where I'm, like, a co-I; I'm not so much involved with the other Solar Orbiter instruments. Okay, okay, yeah — I mean, I think that's probably the most contact we've had with the in-situ side.

B
— an open channel, but it shouldn't take much to form that, right? I mean, yeah — if you wanted to, yeah. No, I mean, I don't think we've had much contact with them in any other way, but there's a clear path: if we're going to decide to do that, a few people here could contact people that they know well, and —
B
— the right person to talk to — and that's an interesting thing, right. So how does that happen? Because when you say it, they're like, 'oh, they're talking', and then, 'well, that's not my problem' — you know, it's kind of hard to find. But maybe, like, at the Solar Orbiter meeting in Belfast it could be a lot easier, in person, rather than a random email you send that seems like more work. Quite — yeah, good.
J
It would be great to learn more of all those — at least, if nothing else, just to know. Yeah, yeah, okay — so that basically sounds like an area of improvement we should think about: to, if nothing else, have that contact and that confidence, but also to figure out whether, as well as that, we should, you know, talk to, like, Daniel Müller on the ESA side —
J
— and have it come from the management side of that, so that it's 'if you want to do this, you should follow this' — so that it's, like, somewhat of a coordinated effort, rather than just kind of edging in. Yeah — I think if we talk to people, it'll be like, 'yeah, that's a great idea', but it's like — I think, maybe —

J
The other thing that has sprung to my mind as we're discussing this: I feel like — especially personally — I'm throwing a lot of suggestions out there, but I'm kind of conscious that it all takes time and effort to do all this, and, apart from the OSTFL stuff —
— none of it is explicitly funded. So I think it may be useful, once we finish the meeting, to just go through the meeting notes and say: this is all the stuff we would like to do.

D
— people to contact us and to do things in a certain way, so that they are actually doing stuff that fits in with, like, our ecosystem — and maybe we need to make a few changes here and there, but yeah. Maybe we don't need the money; maybe the instrument teams just need to do stuff that's compatible with us, for sure. But I also think — I can imagine that in a few years there's going to be some funding to coordinate ESA observations, to do cool studies.
J
The roadmap, maybe, is a good place for those kinds of higher-level big tasks. Yeah — I mean, yes: having a list of — I mean, I know you sort of showed that table up there — maybe something like that, where it defines: what are the data types? What objects do we need to store data from each instrument? So you have that rightmost column.

J
I think that said 'time series', and it sounds like everything else — or almost everything else — would be compatible with some kind of NDCube-derived product. So maybe having that, but then having, you know, the physical types for each data product — that would probably make it a lot clearer. And I'm sure there are a few data products for each instrument, so it's not necessarily a tiny, tiny job, but —
J
— used to, like, do .spectral_axis, and you'll get back, you know, the wavelength or the energy or whatever it happens to be for that axis. And I wonder, Ed — when you are accessing the data as part of your fitting, are you using those convenience methods at all? Like, is that something that is maybe not technically essential, but incredibly —

J
I'm not sure — I'm not as familiar with the code as maybe some of you are. I'm just using the world coordinate system information that comes along with the NDCube object to get actual axis information about, you know, which axis is spectral and which is x and y and t.
J
There are so many shapes that if we write a whole bunch of really useful functionality for data which is space-space-wavelength, and then you come along with data which is space-space-wavelength-time, you suddenly can't use it any more — we've just absolutely shot ourselves in the foot. So what you're arguing is that we shouldn't have analysis methods; we should have analysis functions that basically, like —

B
— target that NDCube API, right? Yeah, right — like, if you want to write a spectrogram function, it can take some future spectrogram class, when I inevitably lose the argument that that should exist; if the fitting routine only depends on NDCube, it can take a spectrogram class, or it can take a DKIST Dataset class, and it will work. Yeah.
J
Just because — for instance, taking DKIST as an example (you could say that I've caused this problem to exist by my own work), DKIST has an NDCube subclass which is nothing to do with the physical types — the world types — of the data. It's all about downloading the data: it gives you functionality for downloading bits of the data and managing the fact that the data is striped over many files.

B
— while still maintaining the Map API, for language and familiarity purposes, you can — and that's almost certainly why I'm going to lose the argument. But the problem with doing that is, again: if somebody's used to using a spectrogram class for IRIS data, and then they want to use the DKIST data with the DKIST class, it doesn't have those helper properties — like, 'I've got .flux, which is like a Spectrum1D', for instance — or whatever they've got, right.
B
Those helper properties — these things that exist on the classes specific to a set of physical types — don't exist on all the classes, yeah. And then, like, what happens if you make a space-space-time and a space-space-wavelength as two separate classes, and you're used to using the time helpers on one and the wavelength helpers on the other — and then suddenly you have a DKIST cube which is space-space-time-wavelength? You've got none of the helpers.

L
Because, you know, the whole thing about creating a POC of, like, NDCube with MAG became very, very easy.
D
The recommendation is to have a single data structure, and then your functions target specific sets of, like, axes — yeah, like directions along it, yeah. I'm thinking, right — we could write a decorator or something that you could put on another routine, that took an NDCube and verified that it had a wavelength axis, and even, maybe, like, it could make the wavelength axis the first axis, so that when you write —
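The decorator idea being floated here can be sketched in a few lines. Everything below is illustrative — the decorator name, the duck-typed cube, and the `axis_types` attribute are assumptions, not an existing sunpy or ndcube API:

```python
import functools
import numpy as np

# Sketch of the proposed decorator: wrap an analysis function so it
# accepts any NDCube-like object (duck-typed here as anything with
# .data and .axis_types), verifies a wavelength axis exists, and moves
# that axis to the front before calling the wrapped function.
def requires_wavelength_axis(func):
    @functools.wraps(func)
    def wrapper(cube, *args, **kwargs):
        if "wavelength" not in cube.axis_types:
            raise ValueError("input cube has no wavelength axis")
        idx = cube.axis_types.index("wavelength")
        # Normalize the layout: wavelength becomes axis 0, so func can
        # be written against one known shape regardless of input order.
        data = np.moveaxis(cube.data, idx, 0)
        return func(data, *args, **kwargs)
    return wrapper

class Cube:
    def __init__(self, data, axis_types):
        self.data = data
        self.axis_types = axis_types

@requires_wavelength_axis
def integrate_over_wavelength(data):
    return data.sum(axis=0)

cube = Cube(np.ones((3, 4, 5)), ("space", "wavelength", "space"))
result = integrate_over_wavelength(cube)
assert result.shape == (3, 5)  # the length-4 wavelength axis is summed away
```

This is the function-over-method argument in miniature: the analysis code depends only on the generic axis contract, so any cube shape with a wavelength axis works without a per-instrument subclass.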
D
— you don't have to worry about implementing it. Yeah — I mean, that's — I would say, as someone said, it's great: I've got to write one function (or however many products they have) and don't have to worry about the data structure; and then your users only have to deal with a single data structure, and they just pass it on, right.

F
We need to build this, yeah. Because, like, the question is — I don't know whether some teams are purposely going out and writing their own instrument-specific software, but we don't know, so it's hard to compare this; or maybe there are just no Python tools that are going to be developed for certain instruments. I think that's certainly the knee-jerk reaction from any instrument team: 'oh, there's not a —'
B
It's kind of a pretty common pattern in Python in general, and there are problems — all I'm really trying to say is there are problems with that approach, and it's context-

J
— because there's other stuff that we're not thinking about, right, that they know about — they know their instrument, yeah. I think, like, if Ed writes a fitting routine, right, and we say, 'if you lay your data out like this, you can use these tools that already exist' — yeah, that is a sell that has clear benefits, right. But I think we should document it somewhere.
J
There is the beginning of this in NDCube, I mean — but, like, we should document it somewhere and write the tools, like the —

A
You know, you don't have to write the fitting infrastructure, because it's already done for you. So we can not tell people how to do it, but let people know that if they choose this route, this is what they get.
A
Ask instrument teams — like, the first time, it's going to be kind of a lightning-talk thing; it's going to be similar to what we did for the tests. Like, the first time it's going to be kind of, 'give talks, show us what you're doing'. So — I guess, actually, probably some of these will be answered going into this — but: show us what you're doing, and then the second part is sort of, like, just, 'yeah, let's talk about this'. Is that for this afternoon, or are we talking about trying to do this? Does it — well —

A
Well, we are in that one, so — no. But I mean, because we have a sunpy workshop involved, like, you know, all this inspiration is going to be there — yeah, yeah, yeah. So this is for this afternoon, but yeah, by all means, you know, you should do this at the meeting as well. Yeah, yeah — I think it needs a really good pitch, because a lot of instrument teams aren't already kind of aware of it, or haven't even heard about it; maybe they need a pitch of, like, 'why would I put my thing in sunpy', you know?
A
We want to get feedback from a lot of the instrument teams who are in the process of developing user tools, and to just basically understand what we, as the sunpy community, can provide to instrument teams to facilitate the development of those analysis tools — what we can provide to make it easier to develop those tools, and to make it easier for people to use the data from those instrument teams. So I think, before —

A
— we actually jump in, we should probably just start by going around having everyone introduce themselves. I guess we could start in the room and then go online. So, do I go first? Sure — I'm almost done.
A
Hi, I'm Stuart Mumford. As well as being the sunpy lead developer, I am also writing the DKIST user tools and am heavily involved in the definition and generation of the DKIST level-

A
-one data products — which is perhaps the hat that's more relevant to this session. Yeah — I'm Will Barnes, I'm a postdoc at American University and NASA Goddard. I'm the deputy lead developer of the sunpy project, but I've also worked on the user tools for AIA, a little bit on the EIS user tools, and now, as part of my job at Goddard, I'm working on some user tools and data pipelining tools for MOXSI, which is an instrument on the CubIXSS CubeSat. I'm Nabil Freij; I currently work —
A
Hi, I'm David Stansby. I used to be a solar physics researcher, but now I'm a research software developer, so I don't do solar physics in my day job — but I'm the sunpy release manager and I contribute to sunpy in my spare time. Hi everyone, I'm Laura, a postdoc at the European Space Agency, at ESTEC. I work mainly with Solar Orbiter; I've been involved in sunpy a little while now, kind of working on the communications side. And yeah — hi, I'm Marcus.

A
Now we're going to yell out people's names — Albert? Hello — can you hear me all right? Albert Shih, Goddard. I am the coordinates subpackage maintainer, and I also mess around with a lot of stuff with reprojection and WCS. I also have hands in a lot of different projects. So — thank you. Many hats.
A
— tools. Cool — you're right. Hi, I'm Mirai, I'm a member of the IRIS team, so I work with Nabil. I'm not a developer like you guys; I have just been a very intensive user.

A
Today she's talking about SPICE — and that's it. Micah? Hello, I'm Micah Weberg, from the Naval Research Laboratory and George Mason University. I primarily work on the EIS data user tools, as well as some internal tools that we use at the lab.
A
Yeah
hi,
I'm
pierce,
I'm
a
oh
well,
not
necessarily
I'm
pierce,
I'm
a
postdoc
at
observatory
and
yeah.
I'm
an
extensive
user
rather
than
developer
I'd,
say
working
on
a
ninja
fire.
The
radio
station
upgrade
right
here,
terry
yeah
hi,
I'm
terry
kachara
at
nasa
goddard
space
flight
center
and
I'm
also
on
the
spice
team.
A
I'm not a developer, but I'm working with a programmer who's working on spectral fitting tools for SPICE, but also, hopefully, for general use. — Awesome. Alistair?
A
We have software called SunCASA and pygsfit, which intensively involve interaction with sunpy. So —
A
Hello everyone. I'm a PhD student at DIAS, and I use sunpy a lot to process AIA data and radio data, and I just wanted to come here and see how it all works. Thank you. — Awesome, thanks.
A
I think we got everyone, I'm pretty sure. Okay. So, the kind of motivation — I guess for some context: we talked about this a little bit earlier, but I think not everyone was here — this is an effort to have an instrument working group.
A
This coordination effort among different instrument teams was started back in 2019 by several other people from the sunpy project. Unfortunately, for lots of reasons — partly because it started at the end of 2019 and sort of tapered off at the beginning of 2020 — the effort had stalled. So, with our new infusion of money from the OSTFL program that Albert talked about on Monday —
A
— what we wanted to do was restart this effort and, again, bring people who are on instrument teams, developing instrument data pipelines or user tools, together to discuss and coordinate their plans for developing software, and to figure out —
A
You
know
what
are
the
kind
of
the
common
set
of
things
that
people
are
doing
in
their
software
right
I
mean
we
in
solar
physics,
we
sort
of
have
a
few
kind
of
core
data
types
and
we
kind
of
want
to
do
variations
on
a
couple
of
high-level
things
on
those
data
types
and
then
the
last
thing
we
want
to
do
is
kind
of
encourage
interoperability
among
among
these
projects,
so
that
you
know
you
can
use
the
kind
of
same
set
of
core
tools
to
analyze
like
aia
data
that
you
would
use
to
analyze,
xrt
data.
A
You
know
and
then
simulate
that
you
can
use
the
same
base
data
structures
to
analyze,
isa
that
you
would
use
to
analyze
spice
data,
so
just
kind
of
encourage
interoperability
and
make
it
easy
for,
for
then
your
the
users
of
that
data.
To
do
you
know
to
do
their
research
with
the
same
set
of
tools
and
disparate
data
sources,
so
the
the
sort
of
structure
for
this
meeting
is
kind
of
for
the
for
the
first
half
or
so
and
again.
This
is
this
is
all
like
very,
very
informal.
A
— is that I have a rough structure in mind, but we don't by any means need to stick to it, and obviously people should feel free to jump in, especially online, or you can use the raise-your-hand button if you'd rather. In the room we'll try to be very cognizant of the fact that we have an online contingent as well as an in-person contingent.
A
So, the first thing I would like to do is something I've called instrument lightning talks — although I think even that is too formal — for the people that are representing particular instruments.
A
It would be great to just hear a quick feel for what you're doing as far as data products or user tools go. If you're using sunpy or Python or whatever, that's great to hear too, but just give a quick pitch of where you're at in terms of developing tools and data products for the wider community.
A
Well, I don't have any slides, but I could say something — this is Terry, from SPICE. — Sure, yeah. — Eric might be a better person to talk about our data products, but we're providing our data, and one thing we're working on — I think I already said what we're working on, actually — is that we've got some quick-look tools, but we're also working on a spectral fitting routine.
A
I don't think the person working on it is here right now. We're trying to make it general-use and modular, so that other similar scanning spectroscopy instruments can use it. I should say, in case it helps: SPICE is a rastering spectrograph, kind of like EIS or IRIS. And we are working with Nabil, talking to him about integrating what we're doing with what he's doing with IRIS, to try and make it more generally useful.
A
We do have a Python-based reader, sunraster, that's based on ndcube. I'm not sure if I'm covering what I need to cover, but we're definitely interested in working with other groups and trying to make our stuff general-use rather than reinventing the wheel. — No, I mean, that's great; that's exactly what I had in mind. I think your point about defining what SPICE is is a good one.
A
That's the other thing I was going to say: as we go through this, if you could first very briefly describe the kind of instrument you work on, that would be great, because we're maybe not all super familiar with one instrument versus another. And then, actually — Terry, Ed, did he —
A
He
was
here
for
the
the
kind
of
we
had
sort
of
a
more
solar
orbiter
focused
session
earlier,
and
he
he
was
there
for
that,
and
he
gave
like
a
rundown
of
the
kind
of
gooey
tools
that
he
was
working
on.
As
well,
absolutely
I
knew
he
came,
but
I
wasn't
sure
what
he
was
he'd
talk
to
you
about
great.
So
that's,
basically
what
we're
doing!
A
Okay,
great
again,
I
I
said
eric
might
want
to
say
something
more.
A
Not
really,
actually
I
mean,
perhaps
I
can
just
add
that
the
spice
team
is
pretty
split
between
ideal
people
and
britain
people.
So
on
the
iranian
side
they
have
only
pretty
good
tools,
I
think,
but
we
are
so
many
python
people
who
want
one
such
tools
to
be
developed.
A
Is
there?
Is
there
an
idl
like
analysis
suite
for
for
spice,
specifically.
A
It's
an
update
of
c
fit,
which
was
developed
first
soho,
but
I
mean
that
that
works
for
spectral
fitting
also.
I
should
say
that
the
team
is
planning
a
data
product
of
already
fit
lines,
so
people
can
just
get
those
sort
of
an
image
format,
but
that
doesn't
exist
yet,
and
we
think
that
people
will
want
to
be
able
to
do
their
own
fitting
in
many
cases
anyway.
A
A
A
Oh
sorry,
go
ahead!
No!
No!
I
I'm
probably
poorly
pleased
to
say
this
because
this
two
parts
of
my
post
up,
the
second
half
is
like
data
provided
provision
and
stuff
for
the
all
the
data
products
out
of
the
and
radio
observatory
we
have,
but
I
haven't
started
any
of
that
sort
of
stuff.
Yet
so,
but
data
does
exist
on
this.
A
The
virtual
european,
solar
and
planetary
access
platform,
euro
planet,
europa,
2020.,
it's
like
the
data
exists
somewhere
and
eventually
we
want
to
get
stuff
to
easily
more
easily
access
like
findable
by
fido
and
that
sort
of
thing,
and
but
also
running
the
risk
of
not
being
able
to
give
people
radio
data,
that's
300
gigabytes
for
a
three-hour
observation,
sort
of
stuff.
So
yeah,
I
don't
know
how
useful
that
is.
A
A
Great
micah,
I
think
you
were
starting
to
talk
just
before
yes,
so
I
can
show
actually
have
a
couple
quick
slides.
I
could
show
about
our
ice
pack
package
yeah.
That
would
be
great.
A
All right, just to cover it briefly: EIS is a slit spectrograph on the Hinode spacecraft that has been taking observations since 2006, and over the last couple of years we have worked on developing what we call EISPAC — the EIS Python Analysis Code — which has tools for analyzing EIS data in Python.
A
I could go over a little bit of what EISPAC does, but fundamentally there are three components to it. We've got an archive of level-1 data that we've pre-processed using the existing IDL tools.
A
We have a set of command-line and GUI tools for searching for data, querying the catalog, downloading data, and then actually running some of the fits. And then we've also got a full Python package where you can do a bunch of spectral fits, compute measurements, and then generate sunpy maps. Just to show a couple of example plots: we have a data reader that loads data into an EISCube class, and functions for doing spectral fitting using multi-Gaussian fits with a constant background.
A
It's fully parallelized, so it uses as many processors as you have access to. At the moment we're using mpfit, a Python port of MPFIT, to do the fitting; it's very fast and gives good results. And then, lastly, we have tools for outputting the fit intensities, velocities, and line widths, shown here on the right.
A
Obviously there are more things you can add and improve, but the core API is kind of stable now; it's not likely to significantly change.
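The multi-Gaussian-with-constant-background model described here can be sketched in a few lines. This is a generic scipy illustration of the model shape, not EISPAC's actual API; the wavelengths and line parameters are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_const_bg(x, amp, cen, sigma, bg):
    """Single Gaussian line profile on top of a constant background."""
    return amp * np.exp(-0.5 * ((x - cen) / sigma) ** 2) + bg

# Synthetic "spectral window": a wavelength axis and a noiseless line.
wave = np.linspace(195.0, 195.25, 60)
true_params = (500.0, 195.12, 0.025, 40.0)   # amp, centroid, width, background
counts = gauss_const_bg(wave, *true_params)

# Fit with rough initial guesses, as a per-pixel fitting routine would.
p0 = (counts.max() - counts.min(), wave[np.argmax(counts)], 0.02, counts.min())
popt, _ = curve_fit(gauss_const_bg, wave, counts, p0=p0)

# Doppler velocity from the fitted centroid shift vs. the rest wavelength.
velocity = (popt[1] - 195.12) / 195.12 * 2.998e5  # km/s
```

A real multi-line window would sum several such Gaussians over one shared background, but the fitting step is the same.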
A
All of the above, really. The core things you want to do with EIS are: you get the data, you need to do some sort of spectral fit to get the intensities and velocities, but then we also want to have tools there for analyzing the results, as well as outputting them to sunpy maps so you can do all the multi-spacecraft stuff. One of the common things people do with SPICE —
A
Sorry
ice
is
like
computing
temperatures
and
densities
based
on
line
ratios,
and
so
those
required
doing
some
sort
of
spectral
modeling.
So
we've
been
looking
at
having
some
optional
interface
with
maybe
chianti
pi
or
some
other.
You
know
spectral
code,
but
we
haven't
developed
those
tools
yet
and
obviously
the
the
fit
spectra
function.
We
have
currently
it's
developed
for
the
uses
of
ice,
but
it
could
be
more
generalized
to
use
some
of
the
other
fitting
packages
and,
to
you
know,
just
fit
an
nd
cube
of
spectral
data
potentially
so
yeah.
A
I
just
want
to
ask
you
a
quick
question
so
yeah
this
is
dan
ryan.
For
those
who
don't
know
me,
I'm
sort
of
was
very
along
with
stuart
kind
of
led
a
lot
of
the
nd
cube
development,
but
also
written
a
couple
of
things
for
spice
and
spectral
stuff.
More
generally,
we
were
talking
with
ed
earlier
on
about
the
work
he
was
doing
with
terry
for
spice
and
one
of
the
things
they're
doing
is
you
know
developing
sort
of
fitting
tools.
A
I
wanted
to
ask
what
overlaps
there
might
be
like
the
impression
I
got
from
what
ed's
working
on
is
that
they're
trying
to
develop
in
a
way
that
is
not
like
specific
to
spice
so
yeah.
The
question
is:
how
much
overlap
either
technical
or
philosophical
is
between
what
you
guys
are
doing
here
in
ice
pack,
and
what
terry
and
ed
are
trying
to
achieve.
The
spice.
A
So, all the existing EIS analysis had previously been done in IDL using MPFIT, so it was a natural choice for us to use mpfit on the Python side, because it gives similar results. We looked into using some of the other fitting tools, but they either didn't support fixed parameters like we need, or were very slow, or actually gave worse fits that didn't really match the previous results.
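One way a fitting stack without MPFIT-style fixed parameters can emulate them is to bind the fixed value into the model so the optimizer never sees it. A minimal sketch with scipy, as an assumption about how one might work around the limitation rather than what EISPAC actually does:

```python
import numpy as np
from functools import partial
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, sigma):
    """Simple Gaussian line model."""
    return amp * np.exp(-0.5 * ((x - cen) / sigma) ** 2)

x = np.linspace(-1.0, 1.0, 101)
y = gaussian(x, 3.0, 0.1, 0.2)  # noiseless synthetic data

# "Fix" sigma by binding it, so only amp and cen are free parameters.
fixed_sigma_model = partial(gaussian, sigma=0.2)
popt, _ = curve_fit(fixed_sigma_model, x, y, p0=(1.0, 0.0))
```

Because `p0` has two entries, `curve_fit` only varies the amplitude and centroid; the bound width stays exactly at its fixed value.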
A
Okay, so the follow-up, then: firstly, I'm very aware that there are other pressures and priorities when it comes to specific instruments. I was just wondering whether you on EISPAC are aware of what's being done for SPICE, and vice versa, and whether it might be beneficial for you to talk in the future, depending on how your development goes — are there ways in which you can help —
A
— each other out, and maybe just be friendlier for other people who are trying to do the same type of fitting, if not exactly the same thing? — Yeah, certainly we'd love to collaborate with people. We have talked to you guys in the past and got some ideas, but we were basically in a situation where we needed something that worked for SPICE, and so we decided to go ahead.
A
Yeah,
this
is
exactly
what
this
this
group,
so
I
will
say
too,
and
I
think
I
had
put
this
in
the
email
that
went
out
to
everyone
as
an
invitation
to
this
meeting,
but
that
this
I
would
like
this
sort
of
instrument
working
group
for
this
to
be
the
kind
of
first-
and
this
was
the
original
idea
behind
this.
The
original
instrument
working
group
several
years
ago
is
that
you
know
that
we
would
meet.
You
know
periodically
how
how
regular
that
would
be.
A
We
can
decide,
but
that
we
would
meet
sort
of
periodically
to
take
stock
of
overlapping
needs.
Like
spectral
fitting
to
see
okay,
where
we're
at
like.
Are
we
making
progress
on
or
to
what
extent,
what
overlapping
needs
do
we
have,
and
what?
How
much
progress?
A
Are
we
making
on
sort
of
having
sort
of
a
common
piece
of
code
that
we're
all
working
from
sort
of
specializing
for
for
ice
for
spice
and
and
maybe
there
are
cases
where
that's
that's
not
the
case,
but
that
probably
at
the
end
of
this
meeting
I
would
like
for
us
to
kind
of
have
an
idea
what
what
sort
of
cadence
we
we
wanted
to
meet
and
then
what's
the
next,
the
next
yeah,
but
also
ice
pack,
is
not
only
about
the
fitting
it's
not
from
the
level
one
ice
fisher
archive,
and
this
is
something
that
is
necessary
for
ice,
but
not
for
spice.
A
It
seems
to
be
quite
specific
to
her
twice
oh
yeah,
to
be
clear,
my
comment
was
not
suggesting
that
you
know
ice
back
should
be
thrown
away,
and
you
know
only
despite
stuff
should
be,
or
vice
versa.
It
was
simple.
There
are
ways
where
things
can
be
done
more
efficiently
with
less
effort
and
less
resources
if
things
are
pooled
in
that
way,
but
I
would
certainly
appreciate
that,
especially
at
the
beginning
of
projects
and
stuff,
often
it
is
better
to
do
things
less
efficiently
or
duplicate
stuff.
A
I
think
also
that
what
michael
was
saying
about
working
on
interfaces
with
chianti
pie
and
stuff
that
could
be
done
in
a
general
way.
That
could
be
pretty
interesting
for
all
spec
for
well
many
kinds
of
instruments
but
yeah.
Certainly
the
need
for
atomic
data
is
like
spans.
Many
many
many
instruments,
whether
that's
yeah,
an
interface
to
chianti
or
other
or
other
resources,
yeah
yeah.
A
Yeah
weren't
you
working
on
some
sort
of
atomic
database
code.
Will
yes
yeah,
probably
that's
all
I'll
say
about
that
right
now
for
now,
but
but
yeah
and
I
actually-
I
do
have
a
little
bit
of
money
to
to
work
on
on
that,
and
it
is
similar
to
chianti
pie
in
in
a
lot
of
ways,
but
the
idea
that
it's
sort
of
more
interoperable,
like
the
astro
pi
ecosystem,
so
actually
yeah.
We
should.
I
didn't.
I
didn't
realize
that
I
guess
you
guys
were
thinking
about
this
interface.
A
So
we
should
probably
talk
about
that
at
some
point
as
well.
A
Okay,
sorry,
that's
a
great
question
is
the
search
functionality
of
a
voice
pack?
Is
that
like
a
photo
based
thing
or
is
it
your
own,
your
own,
specific
search
stuff?
So
at
the
moment
it's
a
full
gui
where
you
can
search
based
on
a
ton
of
different
parameters
from
the
official
ice.
Catalog
of
observations
will
did
help
us
write
well,
real
wrote
I
didn't
do
any
work
will
did
it.
A
Okay, cool. Yes — for some context, the EIS data is just in a directory tree on the internal website. So my quick PR of an EISPAC client was just a scraper client. But the thing that Micah's GUI uses hits the EIS catalog, which is a SQLite store in SSW, and so the challenge is to figure out how to use all the rich metadata.
A
This is a question for both Terry and Eric: what is the primary source for the SPICE data? Is that the SOAR? — Oh, yes. — Okay. So — well, it lives under the sunpy project right now, but there is a sunpy SOAR client, and there's been — yes.
A
Yeah, okay. So right now, if the data is on the SOAR, that Fido interface should already be there; it should just work.
A
Maybe someone who has developed some of the Python — David — knows what kind of metadata can be queried on the SOAR. — I think the answer is none at the moment. Basically, on the SOAR all you can do is search for a certain data product and then get the files that match that product between any given times. I don't think you can search —
A
I think, definitely for the remote-sensing instruments on Solar Orbiter, we're going to want to do something so that people have some way of at least searching by the observing program or the object being looked at, because we're already running into problems with people trying to find particular kinds of data and not being able to — you can't really figure out easily when we were looking at active regions or coronal holes or something like that.
A
I
I
think
the
the
sword
team
I've
gotten
this
request
for
a
bunch
of
people,
so
I
think
that
they're
they're
looking
at
ways
that
they
could
expand
what
you
can
query,
because
you
know
that
the
observation
id
stuff
and
the
target
stuff
is
all
in
the
fit
cdf
headers
which
they
index.
So
I
think
that
is
on
their
list
of
things
to
do
so.
Yeah
and
maybe
hooking
it
up
with
some
sort
of
event
list
would
be
helpful
too.
A
Regarding EISPAC: I think one of the slides mentioned that the data was put into NDCubes. Are those raw, regular NDCubes, or is it an object that inherits from NDCube and builds something EIS-specific on top?
A
Why — what wasn't sufficient about the way NDCube natively does that? Actually, maybe this is too specific a conversation for here, but from your description I would probably consider that a bug in ndcube. — Yeah, I think it's actually part of the slicing of the WCS: depending on how you slice it, it loses some coordinate information, so you can't create a FITS header from the sliced WCS.
A
When you load it up and slice it in certain ways, and you then try to save it to a FITS file, you're literally missing some of the coordinate information needed to create the FITS header. — Okay, yeah; saving an arbitrary NDCube back to FITS is not something that we've figured out all the way. — Yeah, so that's fine. Okay.
A
So we've overridden the slicing, following the example from NDData — there were some examples of how to slice additional arrays and such. It's just that in there we do some additional logic to keep track of the information we need. And we're not actually saving the NDCube itself to a file; it's later, when we produce the 2D FITS files of the output parameters, that we save just that array to a FITS file, for loading into a sunpy map, actually.
A
I unfortunately came into this discussion a little bit late, so I might have missed this: is the EISPAC code publicly available on GitHub, so I can take a look? — We released it a year and a half ago, and we finally got it up on pip, so you can just pip install eispac, and there's a lot of docs and everything. — Perfect. I'd love to take a look at that source code, just to understand it a little bit better.
A
I
apologize
it's
very
hacky
and
we
actually
have
backwards
compatibility
with
ndq
1.0,
which
we
probably
should
remove,
but
we
wrote
it
back
before
the
2.0
switch,
so
we
had
some
users
that
were
straddling
the
line
sure
that
I
suppose
that
makes
sense.
Congratulations
on
figuring
our
way
to
support
both
at
once.
Yes,
no
small
technical
achievement.
A
Okay,
let's
I
guess
cg
did
you
want
to
say
anything
about
like
eopsa
or
I
know
you
mentioned
mentioned
the
like
sun
casa
pipeline.
I
have
a
type
of
slice
yeah.
That
would
be
great.
A
Yep, okay. So I'm going to talk about our radio packages. The packages are mainly for the U.S. Expanded Owens Valley Solar Array (EOVSA). It is a solar-dedicated, thirteen-element radio interferometer.
A
It provides broadband imaging spectroscopy in a frequency range from 1 to 18 GHz at one-second cadence. It can provide spatially and temporally resolved microwave spectra when a flare is present. So it's a useful tool to study flares, and it is a pathfinder for the next generation of radio telescopes for the Sun.
A
So
we
developed
some
package
called
san
casa
to
prove
to
process
the
physic
complex,
facilitated
data
of
radio
interferometer
from
radio
interferometer.
They
usually
have
four
dimension.
Polarizations
baseline,
a
baseline
is
a
pair
of
antenna.
So,
for
example,
we
have
certain
antennas,
so
we
will
have
more
than
100
baselines.
A
So
for
a
hundred
for
for
array
with
a
hundred
antenna
elements,
there
will
be
more
than
a
thousand,
I
feel
so
it
it
has
around
1000
base
lines
and
the
frequencies
we
have
more
than
1000
frequency
channels
and
time.
A
So
it
also
video
interferometer,
create
a
large
amount
of
data,
consider
those
four
dimensions
and
a
lot
of
frequencies
and
time
integration.
So
we
have
a
product
called
dispatch
to
to
to
slice
a
dynamic
spectrum
from
the
original
visibility
data
to
reduce
the
data
from
to
the
frequency
and
time
dimension.
A
We
that
part
of
oh,
we
also
have
another
package
called
kilocloud
which
to
convert
the
to
to
convert
the
visibility
data
into
imaging
results,
so
we
can
generate
a
radio
image
for
each
of
the
frequency
and
for
each
of
the
time,
integration
so
converted
to
frequency
time
and
space
and
space
dimension.
A
So
here's
a
quick
look,
a
summary
plot
from
quick
loop,
showing
a
dynamic
spectrum
and
the
image
reader
image
and
multiple
frequencies
from
the
time
and
frequency
that,
from
from
a
specific
time
and
frequency
range,
so
the
current
status
of
this
is
we
many.
So
many
of
the
plotting
of
visualization
code
is
based
on
some
pies.
A
We
can
nicely
place
a
image
from
different
instruments
into
one
plot,
but
currently
we
are
relies
on
senpai
too
2,
because
the
one
of
the
main
package
that
we
are
using
is
called
casa,
not
it
it
is
based
on
python
3.6.
A
A
Six
is,
I
believe
it's
two
point,
some
pi
two,
and
if
we
upgrade
to
some
pi
for
number
three,
I
believe
our
code
were
broke,
so
we
we
need
to
do
some
work
on
that
and
the
dispatch
we
and
we
also
in
the
special
the
free
dispatcher
team-
is
also
working
on
a
sub,
a
class
for
ufc
data
called
user
spectrogram.
A
So
it
is
still
in
progress,
and
I
always
have
curiosity
what
like
how
much
of
casa?
Are
you
like
qatar
itself?
Are
you
relying
on
basically
a
lot
because
to
generate
those
image
we
need,
we
need
cursor.
We
need
a
clean
procedure
to
to
generate
image
and
also
we
because
the
data,
the
feasibility
data
itself,
is
stored
as
custom
data
format.
So-Called
measurement
sets
so
just
get
so
you
get.
A
The
caster
data
thing
was
what
I
was
going
to
get
at
because
within
the
last
year,
there's
now
a
pure
python
implementation
of
reading
casa
data
formats,
including
the
full
measurement,
sets
into
dark
tables
and
task
arrays.
Oh,
that's
great,
pure
python,
no
required
I'll
drop,
a
link
in
the
chat
for
you
yeah.
Thank
you.
Thank
you
very
much.
Yeah,
that's
really
a
pain
for
us
because
package.
They
only
have
a
package
available
for
mac
and
linux,
but
not
for
windows.
So
sorry
windows,
users,
yeah.
A
If that's the case — what is the sticking point for not building for Windows?
A
Okay, the next thing: we are also developing some fitting code for gyrosynchrotron emission from solar flares. Previously there was an IDL version, developed by Gelu Nita and Gregory Fleishman at NJIT, and we'd like to have a Python version of that, because all our —
A
Image
and
pipeline
are
based
on
python,
so
it
is
straightforward
to
just
feed
the
imaging
results
to
an
another
python
code
up
to
another
python
package,
so
it
it
is
gui
based
interface.
It
can
yeah,
it
again
use
some
pi
to
make
those
plots
and
it
can
interactively
select.
The
regions
extract
the
video
spatula
and
and
feed
the
spatula
with
a
sequential
model
yeah.
We
are
still
working
on
that.
It
is
still
in
a
very
preliminary
status.
A
Okay,
that's
all
I
want
to
introduce
about
radio
stuff
from
usa.
A
Is
the
the
pi
gs
fit?
Is
the?
Is
the
fitting
capability
also
accessible
outside
of
the
gui?
Or
is
it
all
it
is
accessible
outside?
Okay,
it
is
command
line
based.
So
whatever
comes
from,
those
gui
is
converted
to
a
combined
to
command,
and
then
it
is
accessible
outside.
A
Hey
sorry
and
that's
really
interesting,
stuff,
cj,
I'm.
How
specific
is?
I
know
you
said
it's
only
for
eons
of
data
but
like
how
only
free
answers
like
if
I
just
gave
it
a
loafer
measurement
set
or
something.
What
happens?
Do
you
know
or
are
there
plans
to
and
make
this
more
general
for
other
other
radiant
parameters
yeah?
So
as
I
mentioned,
it
was
developed
for
user,
but
actually
we
in
the
beginning,
we
are
working
on
the
radio
data
from
the
very
large
array.
A
So
many
of
the
mainstream
radio
telescopes
are
operated
by
united
states
is
using
the
cancer
as
the
processing
software.
So
as
long
as
the
the
data
format
of
the
interferometry
data
is
in
the
casa
format,
we
presumably
we
are
able
to
use
it.
But
now
the
only
solar
dedicated
array
is
in
the
u.s
is
the
expanded,
almost
failure
rate,
so
not
too
much
data
available.
A
But
I'm
aware
that
the
lofa
data
they
are
also
perhaps
stored
in
measurement
set
format,
so
yeah
yeah.
So
presumably
it
is
able
to
present
that
the
code
we
wrote
is
pretty
general.
So
we
have,
we
should
have
no
restriction
on
instrument.
A
So
I
see
one
of
your
future
bullets.
Your
last
bullet
point
is
development
of
an
eoxon
spectrogram
in
radio
spectrum
I
mean:
could
this
be
how
how
far
away
would
this
be
from
just
being
able
to
operate
on
a
spectrogram
class,
whether
that's
a
spectrogram
or
a
low
fire,
spectrogram
yeah?
I
I
kind
of
lost
hack
on
this.
I
noticed
uf's
spectrogram
is
written
in
a
document
of
readers,
patcher
yeah,
because
the
the
spatula
is
pretty
general.
It's
just
the
fizz
file.
I
I
I
recall
that
it.
A
The
three
dispatcher
can
read
user
data
yeah,
but
yeah
yeah
yeah,
but
it
would
be
nice
to
have
an
api
or
something
to
to
to
just
fetch
user
data
from
our
website,
just
by
providing
a
time
that'll
be
great,
but
now
I
think
it's
only
works
on
existed
of
this
file
that
has
been
downloaded
in
your
local
machine
yeah.
Maybe
I'll
just
quickly
comment
there
yeah,
so
the
there's
a
you
have
to
client
and
there's
also
a
fido
client
to
search
for
the
pre,
the
pre-made,
the
officer
fits
files.
A
The
thing
that
the
only
sticking
issue
was
that
there's
just
a
particular
bit
of
data
missing
in
the
fitz
files
for
us
for
me
to
figure
out
a
way
to
make
the
plots.
Look
the
exact
same
as
what
you
guys
have
on
your
website.
So
I
think
it's
just
a
really
small
technical
issue
to
that
yeah.
A
So
if
you
need
any
help,
okay,
I
can
help.
A
Marcus
did
you
want
to
say
something
about
punch
yeah.
I
have
some
slides
sorry
great.
I
should
be
able
to
share
them.
Hopefully.
A
Well,
it's
something
micah.
Are
you
on
the
coordination
bit
elements,
senpai
coordination,
meaning
room?
I
I
haven't
joined
it
yet,
but
I
can
view
it
are
you
on
the
the
reg?
Can
you
have
access
to
like
the
general
senpai
element
or
when
you
say
joined
it,
you
mean
the
room
or
you
haven't
joined
element.
I
haven't
joined
an
element.
I'm
sorry
actually,
but
I
I've
been
meaning
to
so.
If
you
posted
that
okay.
A
Okay,
okay,
sorry,
marcus,
and
this
is
punch,
so
the
polarimeter
do
not
unify
the
corona
and
he
lives.
Here.
A
A
We
actually
have
a
counter
in
the
office
that,
like
ticks
down
every
minute,
it's
kind
of
mental
but
punch
will
produce
images
in
white
light,
polarized
white
light
from
six
to
180,
solar
radii
for
solar
wind
studies.
A
We have to do some instrument-specific calibration — despiking, destreaking, and stray-light subtraction — but we figure out the WCS and align things, and after that we have a bunch of images in different polarizations, for both the wide-field imagers, of which there are three, and the one narrow-field imager, in all their polarizations.
A
Then you have to do polarization resolution, so that they're all in the same reference frame. We quality-mark our frames and we build mosaics — we take all these different frames and make them into a trefoil.
A
And
then
we
did
a
fun
part
of
subtracting,
the
f
corona
and
the
star
field
to
create
these
beautiful
background,
subtracted
mosaics
that
will
hopefully
be
used
by
lots
of
people.
So
these
are
our
main
level
three
products.
There
are
full
resolution
images
in
b
and
pb
brightness
and
polarized
brightness
and
clear
for
each
of
the
instruments,
every
four
minutes
from
niffy,
and
then
we
make
a
clear
mosaic
and
a
pb
and
b
mosaic
every
four
minutes,
and
we
also
have
these
low
noise,
mosaics
and
low
noise.
A
Lifting
images
that
are
in
a
slightly
longer
cadence
and
then
there's
this
wind
speed
plot,
which
I'm
not
sure
what
we're
doing
about
that
yet
but
we'll
find
out.
Can
I
ask
what
are
those
strange
like
strangely
shaped
white,
cutouts,
so
they're
kind
of
clear?
If
you
follow
this,
but
there's
some
kind
of
quality
marked
section?
Oh,
oh,
it's
just
like
the
warping
of
that
section
as
it's
reprojected,
and
so
even
before
reprojection.
A
So this is the main idea: we're building a pipeline called punchbowl that is managed using a workflow manager called Prefect. The goal is that we'll have a pipeline that scientists can use without any of the overhead needed for automation; we keep that completely separate, in a package called punchpipe, which is private. I won't go through this figure, since we don't have a ton of time, but it's a fun figure to look at.
A
This is the main part that I want to get at: sunpy is going to be really helpful to us, and astropy, and since we're in development right now we can still change a lot of our designs and implementations. We're using ndcube as the core data handler — maybe in a less-than-desirable way; we'll find out. The polarization resolution from earlier is based on a paper that we published in the past year, and so we now have code for that.
A
It does the polarization resolution, and it's universal, so it'll work for any mission — STEREO, whatever you want, any future mission — so we'd like to publish that as a separate affiliated package for sunpy, and then punchbowl would like to affiliate at some point, if that's desirable. And then there are some questions here. Oh — we do image mosaicking using astropy's reproject, with some PRs —
A
— finally getting those merged. I think we just want to develop a strong relationship with sunpy as we go, to make things as easy and beneficial for the community as possible. So there are some questions here. Would you like to see our headers before we finalize them? Because I know that's come up several times this week.
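The polarization resolution mentioned above can be viewed as a small linear inversion: three images taken through polarizers at different angles determine the Stokes I, Q, U triplet. This is a generic Mueller-calculus sketch under ideal-polarizer assumptions — the -60/0/+60 degree angles and the function names are illustrative, not the punchbowl code or the paper's exact formulation:

```python
import numpy as np

# Polarizer orientations (degrees) for a triplet of images.
angles = np.deg2rad([-60.0, 0.0, 60.0])

# Intensity through an ideal linear polarizer at angle t:
#   M(t) = 0.5 * (I + Q*cos(2t) + U*sin(2t))
A = 0.5 * np.stack([np.ones_like(angles),
                    np.cos(2 * angles),
                    np.sin(2 * angles)], axis=1)

def to_stokes(measurements):
    """Recover (I, Q, U) from the three polarizer measurements."""
    return np.linalg.solve(A, measurements)

# Round trip: forward-model a known Stokes vector, then invert it.
stokes = np.array([1.0, 0.1, -0.05])
measured = A @ stokes
recovered = to_stokes(measured)
pB = np.hypot(recovered[1], recovered[2])  # polarized brightness
```

In practice the same 3x3 solve is applied per pixel (and per mosaic frame) after the images are resampled to a common grid.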
A
More
serious
answer
is
yes
and
then
some
other
logistics
questions,
which
you
can
you
probably
already
read.
While
I
was
talking
and
anything
else
that
we
can
do,
I
ask
for
you're,
saying
you're
using
reprojects
to
put
the
images
together.
How
are
you
handling
the
different
observer
locations
for
the
four
spacecraft
should
have
been
at
lunch
albert?
A
We don't have everything worked out yet, but the idea, I think, is that we use the observer location and reproject everything, using that information, to a common frame which is used for all of the PUNCH mosaics. Those PUNCH mosaics then have an observer location of Earth, because we don't want to carry the metadata of all the different satellites. Is that right?
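The mosaicking step described here follows the usual reproject-and-coadd pattern: accumulate a weighted sum and a coverage map on the common output grid, then divide where covered. A minimal numpy sketch of just that accumulation — the actual per-pixel reprojection onto the common frame is omitted, and the tiny arrays are invented for illustration:

```python
import numpy as np

def coadd(images, footprints):
    """Average overlapping frames already on a common grid:
    accumulate a weighted sum and a coverage map, then divide."""
    total = np.zeros_like(images[0], dtype=float)
    weight = np.zeros_like(images[0], dtype=float)
    for img, fp in zip(images, footprints):
        total += np.where(fp, img, 0.0)   # only add covered pixels
        weight += fp                       # count contributions
    # Average where covered; uncovered pixels become NaN.
    return np.where(weight > 0, total / np.maximum(weight, 1), np.nan)

# Two frames covering overlapping halves of a 1x4 strip.
imgs = [np.array([[1.0, 1.0, 1.0, 0.0]]), np.array([[0.0, 3.0, 3.0, 3.0]])]
fps = [np.array([[1, 1, 1, 0]]), np.array([[0, 1, 1, 1]])]
mosaic = coadd(imgs, fps)
```

astropy's reproject package bundles this pattern with the resampling itself, which is presumably what the pipeline leans on.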
A
We have a running challenge of what assumption we use for the surface we reproject onto. There's this thing in sunpy for the spherical screen, but there are other things that people might want. I'm just wondering what your current conception is for how you actually resolve the 3D position of the things you're seeing in those images you want to make, in the process of doing that.
A
Thank you. That sounds like something that we might want; like, it would be a good contribution to SunPy, and like working with Albert to get it into the coordinates stack and stuff.
A
Your fourth point, about the affiliated package liaison, I think, is a good one. Yeah, I'm not sure anyone will volunteer to pick that up. No, no, that's not right.
A
I mean, yeah, I don't know; I don't have a clear answer for that. I mean, no, I don't. Actually, I do have an answer: in the absence of somebody in one of those roles,
A
that role is by necessity taken by us as the lead developers. And so, yes, one could also argue that in the instrument case, the affiliated package work is technically, normally, what I am paid to do through the funding, whatever we're calling it. So maybe we should make that a bit more explicit on the website.
A
Maybe I should just put my name on there; maybe that would make it easier. Well, my plan is to just come to your chats and your meetings and just annoy you and tell all of you until we get whatever we need. This is the way. Yeah, that will work almost all the time just as well, especially if you ask questions about coordinates and reprojection.
A
Oh sorry, on the screen, right, right, yeah. Okay, we were talking about this a little bit. Yeah, maybe; not sure it's the most elegant solution, but it's a completely understandable solution; I don't think there's anything wrong with it. So, for those online who weren't with us yesterday: we started a discussion about a metadata object, which is, at least in the current conception, subclassed from a dictionary.
A
So I think, if you already have it as a dictionary, having a slightly more powerful dictionary-like object, like a metadata object, presumably isn't too far away from what you already have. I think then you get into this discussion we were having yesterday about:
A
It would be nice to standardize the names of pieces of metadata that are common throughout solar physics, like the observatory; you know, these things that may or may not be named consistently in FITS headers or other places.
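A minimal sketch of the kind of dictionary-subclass metadata object being discussed, where a standardized Python-level name resolves to whichever raw header key happens to be present. The alias table and key names below are hypothetical illustrations, not an agreed SunPy API.

```python
class Meta(dict):
    """Toy metadata dict: standardized names fall back to raw keys.

    `aliases` maps an illustrative standardized name to the raw header
    keys (hypothetical here) that different instruments might use.
    """
    aliases = {
        "observatory": ("OBSRVTRY", "TELESCOP"),
        "instrument": ("INSTRUME",),
    }

    def __getitem__(self, key):
        # If a standardized name is requested and not stored directly,
        # try each known raw spelling in turn.
        if key in self.aliases and key not in self.keys():
            for raw in self.aliases[key]:
                if raw in self.keys():
                    return dict.__getitem__(self, raw)
        return dict.__getitem__(self, key)

m = Meta({"TELESCOP": "PUNCH", "INSTRUME": "WFI"})
```

Users can then ask for `m["observatory"]` without knowing which FITS key a given instrument used, while raw-key access still works unchanged.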
A
So I mean, I have ideas, but this is still something that we haven't agreed exactly on the API for yet. But basically, I encourage you, if you're thinking about that and you're thinking about metadata, to be part of that conversation and help shape it. And that goes for anybody else on the call from any other instruments: if you're thinking about how we should store our metadata in an NDCube-like object, be part of that conversation and help us try to define what those metadata labels or names are in Python, because users shouldn't have to know what the FITS keys are.
A
Ideally, a typical user should be able to move away from that and just do something. Yeah, Steve agrees: we need a metadata object. Yeah, so we're working on that. A dictionary of NDCube objects, so it's slightly different. We were talking about both, yeah, but a dictionary of NDCube objects is a thing that exists: NDCollection. Oh, NDCollection, yeah, that exists. We didn't use NDCollection because of the aligned axes or something. You don't have to use aligned axes; it's totally optional.
A
Steve, I saw you just got on. Would you like to talk about HERMES?
A
Can you guys hear me? Yes, sure. Let's see, let me try to dig up some slides; that would be helpful. When you say we need a metadata object, do you mean we, SunPy, or we, HERMES?
A
All right, am I cutting out or something? What's the problem? Like, I don't know. As far as we can see, you disconnected from the call. Really? Yeah, I'm at work and everything's okay. But we didn't hear you start off; like, we didn't hear "HERMES is"; we didn't even hear that. Are you sharing your screen? Yeah, okay. Okay, so HERMES is a payload that's going to fly on the Lunar Gateway.
A
The Lunar Gateway is part of the Artemis program, which is the NASA program to return to the Moon. You may have heard that Artemis 1 has a launch, probably Monday; they'll be launching an SLS. And the Lunar Gateway...
A
Okay, let me share my screen. It's kind of an ISS for the Moon and the Artemis program. Where can I share my screen?
A
It will have a number of different modules, just like the ISS, and the idea is that astronauts will go to the Gateway first, dock with it, and then go down to the Moon from there; that's the current thinking, anyway. As part of it, there are opportunities to fly instruments, just like on the ISS: there are places where external payloads can be added, and so HERMES is... this is not really the best; too much, too much.
A
Okay, can you guys see this? Yep. Okay, so same thing here: you can see the Lunar Gateway, though with fewer of the modules. Basically, they're going to launch two of the modules together. There's a place on the side of it for two science payloads, one of which is HERMES; there's another one from ESA.
A
That's going to be attached as well, and the point of this experiment is basically to measure particles and fields, for space weather and for science. So you can sort of see where we are here; the launch is kind of far in the future.
A
You know, a standard two-year nominal science mission. We're going to be examining the interplanetary medium, the solar wind, but also the terrestrial magnetotail; we'll be flying through that every once in a while. It goes into this kind of interesting orbit near the Moon, which I may have a slide about, and it's kind of a pathfinder for future space weather payloads. So it's trying to be both science and also kind of operational, yeah. And then there are also these IDS teams.
A
So there's a whole group of people associated with this mission. Yeah, so here's sort of where we'll be. Here are the science goals; this is all probably more detail than you need. Here's the orbit, the blue orbit that we'll be in.
A
So we get quite far away from the actual Moon. So here are the four instruments that we have on this payload: an electron-proton telescope that's looking at high-energy electrons and protons; a magnetometer, which actually consists of a number of sensors, one fluxgate and then two PNI sensors, and they're going to use the PNI sensors to remove the background.
A
You can see the payload to the right there. There's a Solar Probe Analyzer for Ions; this is actually the SPAN-I, which is already flying on Parker Solar Probe. And then we have an electron electrostatic analyzer. So these are all, you know, space physics instruments.
A
Well, you know, I'm trying to use a lot of the development and stuff that is in SunPy, and so I started developing the Python packages for each of these instruments, starting with the SunPy affiliated package template.
A
Now, I know the original template is a bit out of date, so I took it and made a number of updates, and I'll show you that in a little bit. The main thing is that there are four instruments, so we have four packages; actually, we have five packages. And so managing all the packages is a task that takes time, and some help with that would be good.
A
Let's see, I should just jump to... yeah, let me just jump to where we actually have... Oh, another thing that's relevant here is that we are running under the new heliophysics data management plan.
A
I don't know if this was discussed, but there was a big policy document that came out from NASA headquarters called SPD-41, which basically was all about open science and reproducible science, and created requirements that essentially all software that is funded and developed for NASA missions become open source.
A
So that's a high-level document, and then each of the science areas has to publish their own documents reacting to it and making clear how it's going to be implemented. Heliophysics put out a policy document, specifically for heliophysics, that went through that, and one of the main things is that all of our processing code needs to be open source. So these Python packages are going to include both the data analysis tools and the calibration code.
A
So we have to make public the code that lets you go from level 1 to higher levels of data; we have to enable users to take the lower-level data and calibrate it up to the higher-level data. What that means in practice is that these Python packages are both being made public and are what we are using to process the data, and we are going to be doing all our processing in the cloud, on AWS.
A
Steve, we kind of had a chat earlier about data containers that would be similar across instruments, and some of the in-situ measurements that are on, say, the Solar Wind Analyser on Solar Orbiter, or PSP; I could imagine there's a huge overlap here. Maybe we could talk about what these data products would look like, and what is already available versus what needs to be developed. Yep, yep; yes, that would be really great to talk about right now.
A
A lot of these data sets are essentially time series. If you go back, two of the instruments are fairly simple in terms of the data they provide: MERiT and NEMISIS are basically just time series data. NEMISIS is basically a magnetic field in a specific coordinate system, you know, x, y, z as a function of time. MERiT is basically time series for these fluxes in different energy bands.
A
I forget the exact number of energy bands, but not very many; so, pretty simple time series. SPAN-I and EEA are a little bit more complicated, and that's where I think time series is not quite appropriate, and that's where we get into the more difficult aspects of space physics data. They produce essentially spectra as a function of time, so multi-dimensional data. Even more so for EEA, which produces spectra as a function of angle as a function of time.
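The shapes being described can be made concrete with placeholder arrays. The band, angle, and energy counts below are invented for illustration and are not HERMES specifications:

```python
import numpy as np

n_t = 60  # number of time samples; illustrative only

# NEMISIS-like product: a vector magnetic field versus time,
# i.e. (x, y, z) per sample -> shape (n_t, 3)
b_field = np.zeros((n_t, 3))

# MERiT-like product: fluxes in a handful of energy bands -> (n_t, n_bands)
flux = np.zeros((n_t, 5))

# EEA-like product: a spectrum per look angle per time step
# -> (n_t, n_angles, n_energies); this is where a flat time series
# representation stops being a natural fit
eea = np.zeros((n_t, 16, 32))
```

The first two map cleanly onto a time-series container; the third is the multi-dimensional case where an NDCube-like object becomes interesting.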
A
Just as an aside, we've been trying to put all kinds of other things out in public, so we have the project data management plan on GitHub, as well as the PLRA, which is kind of the level 1 requirements, if anybody's interested in reading that stuff. And then you'll notice here that we have the individual packages as well as hermes_core. hermes_core is the package that holds common functionality for the packages, so all of these depend on hermes_core, and all of these use...
A
...you know, the affiliated package template; well, a modified version of it. So there isn't much in there yet. Is there a question? Yeah, I was just going to ask: outside of the affiliated package template, and maybe this is hard to say since you don't have data yet, to what extent are you using SunPy, or things like the sponsored packages, like ndcube?
A
So ndcube is, unfortunately... well, okay, so we are still looking into data classes, and for the more complex instruments we're still thinking about what we would use there. The problem with ndcube is that space physics does not use WCS at all.
A
If we're talking about interoperability with more in-situ data, or heliophysics data sets, which may not have a WCS, and all of our data structures, well, I guess except time series, are based around the idea that you have a WCS: does that present any friction between the tools that we have and the data that people want to bring together? Yeah; unfortunately, tomorrow.
A
But you could go the other way equally, right? Yeah. Like, you could say we should just have various tables and arrays; we should just, you know, compute pixel-to-world for all pixels, put it in an array and then stick it in an xarray. Why should people actually come to ndcube? WCS does have support for tabular coordinates, but... yeah, but that's the opposite operation.
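The "tables and arrays" route mentioned here, tabulating the pixel-to-world mapping for every pixel instead of carrying a WCS, can be shown with a one-dimensional toy lookup. The coordinate values are made up:

```python
import numpy as np

# Hypothetical lookup table: the world coordinate tabulated at each
# pixel along one axis (e.g. a wavelength in nm per spectral pixel).
pixel = np.arange(5)
world = np.array([10.0, 10.5, 11.0, 11.5, 12.0])

def pixel_to_world(p):
    """Tabular pixel-to-world: linear interpolation between samples."""
    return np.interp(p, pixel, world)
```

Once the mapping is an array, it can ride along as a plain coordinate in something like xarray; what is lost relative to a WCS is the functional, invertible transform.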
A
I guess that's fair, Jack. Yeah, I've got a question for Steven. Steven, have you decided on a data format yet, like CDF, for example? Yeah, we're required to use CDF. Okay, so that's, like, hard-coded now in the document? I mean, I think it already was, but yeah: space physics data must use CDF.
A
You know, being new to this, I'm learning, right; my background is solar. I must say that CDF files are quite old-fashioned. There's a lot of cruft in CDFs that is caused by old-fashioned limitations, you know, space limitations, limitations of languages, and they've carried those along. With FITS you don't see that as much. Unfortunately, SPD-41, as the heliophysics data management plan does, codifies CDF and FITS; it's too broad.
A
So, we're actually a little bit past the coffee break now. I don't know how much more you had; if you had some more stuff, we could finish this when we come back, or, if you like, you could just wrap up.
A
Sorry, not with the container; with the template. Because I think these changes should be used for the new template, and I'd like to advocate that somebody maintain a set of templates, including a template that does not necessarily depend on SunPy.
A
All right, yeah, let's go ahead and break, and then we can come back and see how many more people we have who have instruments, or whatever they want to talk about. So yeah, that's good. So, is there anyone on the call, or also in the room, I guess, who would like to talk about the particular instrument they're representing and who hasn't spoken yet? Stuart could maybe say a word about STIX.
A
Okay, yeah; when we come back, let's do those. And then, what was the post-coffee plan? Because he was going to talk about the package... oh, the package template.
A
I was going to say maybe the package template stuff fits better somewhere else, but I think the issues with the package template... it's something we need to provide, but it's not something that is going to be fixed, I don't know. Well, I think what I'd like to suggest is that there be an instrument-specific package template, because an instrument package has certain requirements that, you know, just a data analysis package does not. Yeah, I'm interested.
A
I would be interested in a list of those requirements, both from you and from other people on the call; that would be interesting. I mean, could we come back to that, maybe at the end, say in the last 20-ish minutes? And I mean, I think we've been going to like 5:30 most days as well, so we can go a little bit later, unless you have a hard cutoff.
A
Okay, let me actually drink some of my coffee. Okay: DKIST, for those who don't know, is a four-meter ground-based telescope on the top of Haleakala, Hawaii, and it has five planned first-light instruments. VBI, which is actually two different instruments, one in the red and one in the blue, is a broadband imager.
A
There's also an instrument that does polarization and spectropolarimetry. It has Cryo-NIRSP, which is a similar spectrograph but in the near infrared; I think that one's primarily designed to look at the corona. And it has VTF, which will be a Fabry-Perot, again doing spectropolarimetry and narrowband imaging, things like that. The current estimates are that once all five instruments are up and running and the facility is fully operational, it'll be generating up to about 12 terabytes a day of data.
A
I work for the data center in Boulder for a large fraction of my time, and I focus on what happens to the data in the data center after it has been calibrated. I know a bit about the calibration code and more about what happens afterwards.
A
The data is taken, calibrated, and made available to users as what is probably most simply described as one frame per FITS file, in very large data sets. So for a large spectropolarimetric data set with lots of wavelengths and spatial positions,
A
you could end up with potentially hundreds of thousands of FITS files per data set. The user tools are written based on ndcube; in fact, all of my work on ndcube 2.0 was done for the data center, on their time. And so we're using Dask to provide a single array view into the data in lots of FITS files, and gWCS to represent the world coordinates of that reconstructed array.
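A toy version of that single-array view, with plain Python standing in for Dask and for astropy.io.fits: each "file" is only read when its frame is indexed. The real DKIST tools build this with dask.array; nothing below is their actual API.

```python
import numpy as np

class LazyFrameStack:
    """Stand-in for a Dask-backed view over many one-frame FITS files.

    `loaders` is a list of zero-argument callables, each returning one
    2-D frame; in reality each would open one FITS file on demand.
    """
    def __init__(self, loaders, frame_shape):
        self.loaders = loaders
        # Present the whole collection as one (n_files, ny, nx) array.
        self.shape = (len(loaders),) + frame_shape

    def __getitem__(self, i):
        # I/O happens here, at access time, not at construction time.
        return self.loaders[i]()

# Three fake "files", each a 2x2 frame filled with its index.
loaders = [lambda k=k: np.full((2, 2), float(k)) for k in range(3)]
stack = LazyFrameStack(loaders, (2, 2))
```

Dask generalizes this by chunking, scheduling the per-file reads as graph tasks, and supporting whole-array operations, but the deferred-read idea is the same.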
A
All of this is built on ndcube. As well as the collection of FITS files, all of which have complete and detailed FITS headers with WCS information, we'll also be shipping an ASDF file. ASDF stands for Advanced Scientific Data Format, which is a new-ish format that's been developed by Space Telescope Science Institute, primarily motivated by the needs of JWST. We'll be shipping all of the metadata for a data set in a single ASDF file.
A
That includes: how to stitch the data arrays in the FITS files together into a single Dask array; the gWCS object for the world coordinates of that whole thing; the computed metadata inventory record that you can search against; and a table of all of the FITS headers for all of the files that make up that data set. So you'll be able to download all of the metadata for a data set.
A
In fact, even if that data set is still in the archive, you'll be able to read all of the FITS headers, and then you can decide, based on the metadata in that file, whether and which parts of that data set you actually want the data for, and transfer it wherever you want to put it. There are many interesting questions; I won't ramble on too much about the technically fun parts of doing that.
A
If you have questions, I'm happy to answer them. But the ASDF approach of having all of the metadata available in a single place has already come in very handy for us. We've test-calibrated, I think, maybe about 100,000 files in total at this point, and in about 30 seconds I got all of the ASDF files loaded, every single one of those FITS headers, and determined that one particular column in all the tables was the same for everything.
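That whole-dataset sanity check, "is this column the same in every file?", reduces to a few lines once the header tables are in memory. The table contents and key names below are hypothetical stand-ins for the real per-file header tables shipped in the ASDF files.

```python
# Hypothetical in-memory stand-ins for per-file FITS header records.
tables = [
    {"CAM_ID": "cam1", "EXPTIME": 0.1 * i}  # invented keys/values
    for i in range(5)
]

# Collect the distinct values of one column across every file; the
# column is constant for the whole data set iff there is one value.
cam_ids = {row["CAM_ID"] for row in tables}
constant = len(cam_ids) == 1
```

With all metadata in one place, this kind of check runs in seconds even for a data set of a hundred thousand files, without touching the data arrays.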
A
It was just: load all tables for all files, and, oh yeah, every single file has this value in this column. So this is definitely an interesting aside. And I'm also, hopefully, planning on working on making use of the fact that we built a Dask array. For those of you who don't know, Dask is a... I was going to say distributed computing; that's not really right, is it? It's like a parallel computing library, for bigger-than-memory, bigger-than-compute data.
A
So that's when you need more CPU or more memory than your one laptop has, or you want to do things in parallel. Dask provides lots of really cool Python tools for dealing with big data, in whatever form that may take, and there are lots of interesting optimizations and improvements that can be done within the astro and solar sphere for using Dask on that kind of data. And then...
A
Go ahead, Pierce. It might be off topic, so by all means just tell me to shut up, but on the Dask thing: I've always found that any time I go to Dask, I get completely lost in the first three minutes, where it's like, you need to build up this server, especially if you're trying to do it on different machines and stuff. With the DKIST plans, and future plans for using Dask, will that be abstracted away,
A
so people don't have to worry about it? So, there are multiple layers to Dask; this is going to get into the weeds a little bit. The core component of Dask is that it builds a DAG, a directed acyclic graph, of tasks that need to be worked on. So if you take a Dask array and sum it, it will figure out: oh, I need to sum every chunk of this array, and then I need to do a combining operation on those sums to make a final sum, and here's your end result.
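The chunk-then-combine task graph described for a Dask array sum can be imitated directly with NumPy, which makes the idea concrete without any Dask setup:

```python
import numpy as np

def chunked_sum(arr, chunk):
    """Mimic the graph Dask builds for summing a chunked array:
    one independent task per chunk, then one combining task.
    """
    partials = [arr[i:i + chunk].sum()           # per-chunk tasks
                for i in range(0, len(arr), chunk)]
    return sum(partials)                         # the combining task

x = np.arange(10)
total = chunked_sum(x, chunk=4)  # partials 6, 22, 17 combine to 45
```

The difference in real Dask is that each per-chunk task can run on a different worker (a laptop core or a cluster node) and the scheduler decides where; the shape of the computation is the same.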
A
It breaks up your work into that. Dask can be used without any fancy setup, just straight on your local machine in a Python interpreter. It's only if you really want to get to distributed, big-scale parallel computing that it gets fun, and gets into: okay, now I need a supercomputer, or a Kubernetes cluster where they all talk to each other, and it really scales out big time. And Dask is definitely useful for larger-than-memory problems on one single machine.
A
I think it can be extremely complicated, and I think it's very reasonable to be confused about what Dask is, because it has so many varying levels of complexity. But yeah, at its core, it is just a way to express your computation in such a way that it can be spread out across multiple workers, where all those workers could be the cores of your laptop, or they could be a thousand nodes on a supercomputer. Yeah, and we... well, I don't...
A
You should not care where the computation is being done; that's for the user to decide. The user sets up their cluster; that cluster by default could be your laptop, or it could be a supercomputer. You set up the cluster, then Dask creates the work; where you send that work is up to you. That probably wasn't a less confusing explanation.
A
My question was going to be: all of these things that you said, and that did complete my big-data buzzword bingo, yeah, all these things are things that, I think, even we in this room don't have a great handle on using in our day-to-day workflows, necessarily, especially in the larger physics community. What is the approach, from the data center perspective, or from DKIST's perspective,
A
what is the approach to community education, when it comes to allowing people to actually make use of this data and these tools? The second I get a minute to breathe, and I'm desperately trying to fix all the bugs, I will be working on that, yeah. I appreciate the clock is running somewhat, but I mean: make it work, tell people about it, make it work fast, in that order. And Jack? Yeah, I have a question about your use of ndcube: you're using ndcube in a production environment.
A
Is that correct? Yes, sort of. With ndcube, so, we are using ndcube to write the ASDF files. Basically, due to how asdf-python works, to save an ASDF file with a DKIST Dataset object in it, you construct the DKIST Dataset object and hand it to asdf, and it serializes it.
A
Therefore, when we're writing the ASDF file, basically the final step that happens in the data center, we are using ndcube to do that, because we have to construct the object to save the object, if that makes any sense. Yeah, but that's quite a significant job, right? Because the ASDF files are the first thing that people are going to interact with, for decades. Is that right? Yeah.
A
So, okay, well, that's great. The next question is more to the room: is anyone aware of the use of ndcube in any other operational environments?
A
PUNCH, it sounds like, right here. Yes, we're using ndcube very much. Okay, yeah, I missed that; I came in at the tail end of the PUNCH talk, so thanks. Okay, Jack, when you say operational environments, do you mean data pipelines or calibration pipelines? Yeah, that's what I mean. So it's kind of a different use case, and you're sort of really relying on the code being more battle-tested, right, if you're in that kind of environment, because all sorts of things can come along.
A
So I should say, from the DKIST perspective, we don't... well, that's not actually true. Maybe; I don't know if we do call methods on ndcube inside the DKIST pipeline, like the DKIST data center processing. We don't call very many; it's not like we're making heavy use of ndcube inside our software.
A
So we haven't written a huge amount of code beyond "get and load data" for DKIST at this point, right. And the data center team is very small; I have been the only person working on post-calibration, well, that's not strictly true, post-level-1 FITS. I'm, like, the only person who's really been working on that at the data center for the last five, six years.
A
So if somebody has an image from VBI and the spectra from this other instrument, how could they combine these two? For example, WCS? Yeah; ndcube, yeah; WCS and ndcube. And would that scale, if you have a time sequence of an hour?
A
Probably not. I think that's one of the biggest issues that people have with using the SunPy machinery: the data products are not from space missions, they're much, much larger, and it's going to be a struggle to be able to, for example, watch a movie, quickly play through a movie, and then combine multiple images from different instruments at the same time. That's especially true if you're interested in plotting space versus space from a spectrograph, right. But an image in space is quite small; relatively, it's maybe a few gigabytes at most.
A
No, no, no, I mean, like, if you want to take this data set and plot the two spatial dimensions, right, it has to open, it has to access, a lot of data that is badly chunked on disk to do that. Yes, and also you want to play a movie at at least 30 frames per second. Really bad, yeah, yeah. So that's something that's really important, because people need to be able to do that. It's also really hard. It would be really nice.
A
Yeah, and, I mean, you know, premature optimization is the root of all evil. We have not got there; we've got what we have to the MVP, yeah. The user tools are open source and on GitHub; community contributions are welcome. Yeah, I don't know if you... I might have missed this as well, but are all the user tools in Python? Is there no vestigial IDL code around? I believe Alastair has...
A
Okay, apparently so. I am reliably informed that the default IDL FITS reader does not handle Rice-compressed files, and read_sdo does something fun to handle Rice-compressed FITS files, so Alastair is working on some very minimal support in IDL to handle opening our files. I think it was possible to open the Rice...
A
You are asking the wrong person. So, to your point: do you think this big-image problem is something that's solved with other tools, or with the IDL tools, like for doing SST analysis? Or is this a problem, is there something in SunPy slash ndcube, like our ecosystem, that's posing a specific problem?
A
I think, of course, the IDL tools don't have all the machinery that we have, and often don't have WCS. But one thing that IDL does really well is basically displaying images very fast. It's very competitive; it's hard to compete with. For example, Matplotlib will probably never reach that, especially when you overlay a WCSAxes image: playing that at 30 frames per second in Matplotlib is impossible. Okay, so it's not... what, no?
A
No, it's not. And that's something that IDL does very well, and it's something that everybody's going to do when they look at the data: they just want to look at the time sequence. Okay, there's something interesting here; then I can zoom in and combine different instruments. Yeah, maybe we should add this to the roadmap.
A
Maybe, if you want to combine two things and watch a movie, like combine two filters on top of each other, maybe. But then maybe you can do some free transformation of the data. You can also do quicklook movies, right; people have that. But sometimes you won't have fine-grained control: you want to change the image contrast, you want to change the gamma when you look at features a bit more closely.
A
Do you think that there are other visualization libraries... even, you know, okay, coordinates, for me, I know that's always the roadblock when you talk about things, but not Matplotlib: are there other visualization libraries in Python that people are using to do this at a not-impossibly-slow speed? I've used pyqtgraph for making these kinds of fast movies. Okay, and that's probably the best one there is. Have you seen napari?
A
This makes me wonder whether I could write a WCS-aware, or even coordinate-less, data set player in napari. Like, I know the Pangeo community has used HoloViews a lot, and that seems to be their kind of data visualization library of choice, and they use it on very, very large data sets. That's their background, right; yeah, high-dimensional and extremely large, and usually backed by a Dask array. That might be... I know that library has been used with the xarray and Dask stack, so that would be another option.
A
I think another thing we'll run into eventually is that these Qt apps, or something like them, are kind of on the dying end, because the future is all visualization in the browser. If you have data in the cloud and you want to visualize something, you don't want to copy everything to your laptop. So I'm a bit reluctant: is it really worth spending time developing something that will be obsolete in a couple of years? Yeah, that's wildly optimistic.
A
It's probably something, if people are, like, not going to use Python for that reason... like, that's, again, kind of within somebody's wheelhouse too. I think it's an interesting one, because so far almost all of the SunPy effort over the last 12 years, or however long it is now, has been entirely focused on data with coordinates, right. Like, if you take the coordinates away from your data, okay, you don't necessarily need SunPy anymore, right.
A
If you just want to visualize FITS files quickly, you can just use Astropy to load the array and plot them, in theory, right, like in pyqtgraph or whatever. But there's nothing in the SunPy mission that says we only work with data with coordinates. No, no; this isn't me saying we shouldn't do it. This is me, like, asking where... I think that's separate from the question of whether it should go in the roadmap.
A
So I think this falls exactly... I mean, we may disagree on whether that was something we agreed to during the roadmap discussion, but I thought that was something we could do a better job with, because of exactly what you guys are saying: if they fall off early on, because, you know, they're learning Python and they can't figure out how to do basic things, then they're not going to get to SunPy at all, yeah.
A
I guess I was arguing to go slightly further, which was that, okay, it's not in SunPy now, and maybe it doesn't strictly have to be, but from an access point of view we could create something like mpl-animators, but one that doesn't use Matplotlib. I don't know; it's just an idea about where it might live. I think it's just that people who can read in data using SunPy really should be able to look at it. Sam, go ahead before I talk. Shane, sorry; his username gets me everywhere.
A
Right, it's not... no, no, no, no, I was taking umbrage with the phrase "in Python you can't do it". In SunPy you can't, and we don't tell people how to do it, completely agreed, but it's not a fundamental flaw of the Python programming language; the tools to do that do exist in the world. Like, you know, we wouldn't have to write it from scratch.
A
I can type it in the chat next, if you'd like; I'm sorry if I'm really paraphrasing. Yeah, I think, more generally, tools for interactive data discoverability, like being able to make movies fast or being able to pan around on images, are not something we've done well or necessarily supported well. All right, oh, go ahead, Jack. Yeah, one of the SunPy developers has been working on a few scripts and a notebook using...
A
I
forget
not
matplotlib
some
other
type
of
visualization
tool
and
it
does
some
really
nice
different
things
like
I
don't
believe
it.
It
should
live
in
senpai,
but
you
should
definitely
live
in
like
learn.somepine.org.
A: I think there's a lot of consensus here, and I would suggest we move on so we get all the instruments discussed and have some time to come back to Steve. Shane has at least two instruments. Tiago, you also said you wanted to discuss something? Yeah, I could talk a little bit about the SST. So that's what I would suggest, if that's okay. Yeah, okay, I agree.
A: We can see, Jesse. Okay, fine. Yeah, so I guess there are two aspects to this. There's the stixcore package, which is basically our pipeline that takes telemetry and puts it into FITS files, which are then usable by the community later on. That doesn't really integrate into sunpy, because it doesn't really make sense to, but it's open source, and we use the sunpy package template and loads of stuff that comes from the sunpy project. It's kind of its own thing, but it's open source with all the issues, and you can see all my many, many, many bad decisions in there.
A: It could be in the future; it wouldn't be high on my list of things to do, but yeah, okay. And then, on the user side, there's stixpy, which is the more interesting thing. This documentation is a bit old, but it integrates quite heavily into the sunpy ecosystem. So we've got a Fido search client; we use Fido to download data.
A: You can use a timeseries client to look at some of our data that fits into a timeseries-like data object, and in the future we will also create sunpy Maps when you make images from the STIX data, so the output will be a sunpy Map with all the correct stuff in there, at some stage in the future. The STIX data itself actually lives in an astropy Table, because there isn't really a good object for it to live in, at least not one at the moment. Yeah.
A: So that was really it on the STIX front. I think STIX will be highly integrated with the sunpy ecosystem slash project, and that's really good. The other thing I just wanted to touch on is I-LOFAR, slash LOFAR in general. LOFAR
A: is a ground-based radio telescope. There are international stations, of which I-LOFAR is one, and then there's the entire International LOFAR Telescope. They produce really nice radio data; here's a radio spectrum with a nice type II burst shown, I think, and we have all this data.
A: But at the moment it's not really easy to search or even visualize, because these files are quite big. I mean, these ones aren't, but we have much larger, higher-time-resolution files, and it would be really interesting to see how we could use dask and other technologies to
A: interact with these large files and just make it easy for users to get to this kind of data. It's sitting there, and there's loads of interesting science in it, but it's all kind of locked away, and that's something that I think I-LOFAR would want to fix; the other international LOFAR stations would be the same.
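The lazy-access idea raised here (which dask implements at scale) can be sketched in a few lines. Everything below is illustrative: the chunked "file" is an in-memory fake, and no real LOFAR format or reader is assumed.

```python
# Minimal illustration of deferred (lazy) slicing, the core idea behind
# using dask for large dynamic-spectrum files: slicing costs nothing,
# and only the chunks a slice actually touches are ever read.

class LazySpectrum:
    """Defers reading until .compute() is called, like a dask array."""

    def __init__(self, read_chunk, n_chunks):
        self._read_chunk = read_chunk     # callable: chunk index -> rows
        self._n_chunks = n_chunks
        self._selected = range(n_chunks)  # which chunks a slice touches

    def __getitem__(self, sl):
        new = LazySpectrum(self._read_chunk, self._n_chunks)
        new._selected = range(self._n_chunks)[sl]  # no I/O happens here
        return new

    def compute(self):
        # Only now do we touch the underlying storage.
        return [row for i in self._selected for row in self._read_chunk(i)]

reads = []  # track which chunks were actually read

def read_chunk(i):
    reads.append(i)
    return [[i, i + 0.5]]  # fake spectrum rows for chunk i

spec = LazySpectrum(read_chunk, n_chunks=1000)
subset = spec[10:12]      # slicing is free: no chunks read yet
assert reads == []
data = subset.compute()   # only the two selected chunks are read
assert reads == [10, 11]
```

A real dask array does the same bookkeeping with a task graph, which is why it suits files too large to hold in memory.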
A: Do you see things like the STIX timeseries, and I guess eventually the STIX Map, as always living in stixpy, or is there interest in upstreaming those to core? I mean, I guess the...
A: So I don't really think there's a need to have a Map source, because you actually make the map yourself. Yeah, okay. And if we do ever make FITS files with images in them, I would really, really hope the metadata would be such that it would just work in a Map without having a specific source.
A: That would be my intention, anyway. Yeah, so this is pretty analogous to the EIS map that Michael was talking about earlier, where, with EISPAC, you fit a spectral cube and then you produce a map. NRL doesn't distribute 2D EIS maps, and as such the EIS map class lives in EISPAC.
A: The source is literally just the metadata reader, but the map itself is agnostic. Yeah, okay, but then in that case, where would the STIX Map source live? In stixpy, I suppose; that's the same question asked about a slightly different object. Yeah, I certainly would have no strong opinions on where it goes. I mean, yeah, okay. But STIX is also spectrograms, right? I mean, the quicklooks are light curves, but the base data is going to be a spectrogram, so you could have... I mean, it's just worth thinking about that as well.
A: I guess, if it had a spectrogram, are they doing something then? Maybe that could also be... yeah? I mean, sorry, I was just going to say: is it unfair to say that the map or metadata sources that we have in core... and probably, I guess, maybe Albert will have opinions about this?
Thanks. Final question: is there something very specific as to why it couldn't just be an NDCube
A: spectrogram? Yeah, again, this comes back to the radio spectrum thing; I've just never tried to do it. Yeah, so that would just require some more thought and testing and exploration. Okay, that's completely fine for Friday. Yeah, yeah.
A: One of the, I guess not only instrument-y, things that I work on and slash represent is the MOXSI instrument on CubIXSS. So CubIXSS (the "I" should be capitalized, I think) is a CubeSat whose PI is Amir Caspi from SwRI; Albert is the PI of one of the instruments on CubIXSS, called MOXSI, the Multi-Order X-ray Spectral Imager. So it's an overlappogram, a soft X-ray overlapping...
A: The bandpass is normally 1 to 60 angstroms. So this is a mock MOXSI image, if you like, of what the overlappogram data will look like. So, a few things here.
A: We have a plus-one spectral order in one direction, a minus-one spectral order in the other direction, and we also have the plus- and minus-third orders. So you're seeing that the zeroth-order image is the really bright Sun in the middle, and then it gets dispersed in multiple ways, in both directions, across the detector.
A: We need intelligent ways of saying what this means, and there are various strategies for that which I won't really talk about here, because we're still working that out. But I just want to say that to do this well, to do the modeling, I'm using sunpy, some aspects of aiapy, some parts of sunkit-dem (which is no longer an affiliated package, I guess) and then obviously ndcube. So, basically, what we do is we use a DEM to make a spectral cube.
A: Yada yada, you disperse that across the detector, using astropy reproject, by first constructing the WCS for the overlappogram, which we can talk more about later if you would like (much, much later), and then you get one of these things. So the advantage of writing a WCS for this is that you can say: okay, if I have this little spot, this tiny little active region... actually, the Sun is rotated here, so you'll notice that the HPC longitude grid is actually on the left.
A: This is actually a little snippet that David sent me, coloring the axes such that you can put the axis labels on and have the grids match the colors, so longitude is on the y axis, latitude is on... oh sorry, yeah, yes, it flips sideways if you want. So, if I want to see where this little active region, denoted by the green X here, shows up: where does the 15-angstrom image of this active region show up?
A: You can use the FITS WCS (I don't know, where did it go?). You can use your WCS and the pixel-to-world machinery to say: just based on the WCS, I know that my 15-angstrom image should have been dispersed onto the detector at this spot, where this green X is. And you can see that my spatial grid has also shifted, such that the location of the green X is the same relative to that shifted spatial grid.
A: And again, this is all using the WCS machinery, and then you can do the same thing for: okay, where does my 15-angstrom active region show up in third order? I don't know why I have an extra blank slide, but no, it's stretched all the way to this portion of the detector. So this whole WCS machinery allows you to see where the various contributions from particular regions of the Sun are being dispersed onto your detector. So we can talk more about overlappograms later if you'd like.
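The pixel-to-world reasoning described above can be sketched with a toy linear dispersion relation. All numbers here (plate scale, dispersion, detector reference pixel) are made up for illustration; they are not MOXSI values, and a real overlappogram WCS is built with astropy rather than by hand.

```python
# Toy model of where a spectral-order image of a feature lands on an
# overlappogram detector. PLATE_SCALE, DISPERSION, X0 and Y0 are
# illustrative assumptions, not real instrument parameters.

PLATE_SCALE = 2.0   # arcsec per detector pixel (spatial, made up)
DISPERSION = 50.0   # detector pixels per angstrom in first order (made up)
X0, Y0 = 1000, 350  # detector pixel of disk center in zeroth order

def detector_pixel(hpc_x_arcsec, hpc_y_arcsec, wavelength_angstrom, order):
    """Detector (x, y) where a feature at (hpc_x, hpc_y) appears at a
    given wavelength in a given spectral order (order 0 = undispersed)."""
    x = X0 + hpc_x_arcsec / PLATE_SCALE + order * DISPERSION * wavelength_angstrom
    y = Y0 + hpc_y_arcsec / PLATE_SCALE
    return x, y

# An active region at (200", 100") seen at 15 angstroms:
zeroth = detector_pixel(200, 100, 15, order=0)
plus1 = detector_pixel(200, 100, 15, order=+1)
minus1 = detector_pixel(200, 100, 15, order=-1)

assert zeroth == (1100.0, 400.0)       # undispersed position
assert plus1[0] - zeroth[0] == 750.0   # shifted by order * dispersion * wl
assert minus1[0] - zeroth[0] == -750.0 # mirror image in the -1 order
```

The "where does my 15-angstrom image land" question in the talk is exactly this calculation, done properly through `astropy.wcs` world-to-pixel machinery.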
A: The detector is like a thousand by seven... a thousand wide by like seven hundred? Oh yeah, well, it's actually double that, right, so it's two thousand. Oh, oh yeah, yeah, sorry, it's a thousand in one direction... yes, it's 2000 by 700.
A: So I just quickly made some slides. It's not there, but do you see it? Yep. So, I am not part of the SST instrument team; the SST is operated by Stockholm University, so I'm speaking more in the capacity of a user, and not even a very power user at that. I can't speak to all the parts of the processing, and I don't have experience in everything, but this is just a quick overview of the SST.
A: So the SST, on La Palma, is a big, well, one-meter (much smaller than DKIST) refractive telescope. It has a huge lens, it doesn't have a mirror, and then it has this big vacuum tube and an optics lab in the basement. I'm going to talk mostly about the two instruments that are the most used nowadays. There's also a slit spectrograph called TRIPPEL that is not so used, and there's going to be a microlensed spectropolarimetric imager, but it's not yet in full use.
A: So most people that go to the SST today observe with these two instruments, CRISP and CHROMIS, and they're both Fabry-Perot interferometers. This means they give you a small image of the Sun (the field of view is about 60 by 60 arcseconds) and then they scan very quickly, on the order of a few seconds, through a spectral line.
A: So there are these two liquid-crystal things and two cameras that record different states, and you can reconstruct the full Stokes vector from that. CHROMIS at the moment does not have polarimetry; it'll probably be added at some point. And this is the collection of cameras, besides the calibration, adaptive optics and so on. Usually, in the current setup, we have a wideband camera for CRISP, which captures a wide range of wavelengths, typically the solar surface, the photosphere, and then we have the narrowband cameras with the two states. And then for CHROMIS we also have a narrowband camera,
A: a wideband camera, and an additional camera used for phase diversity, which is one of the ways that we can reconstruct the data to cancel out the seeing effects. So CRISP and CHROMIS are the two main instruments at the SST.
A: This is just a quick illustration from the CHROMIS paper. It shows the effects of the different reconstruction methods. This is one of the simplest ways to reconstruct; it's data with only a basic seeing correction applied. And then, when you apply multi-frame blind deconvolution, or multi-object multi-frame blind deconvolution (MOMFBD), you get a much sharper image. This has to do with the way the observation is taken: you take a burst of observations, the atmosphere is smearing your observations, and you can build a model for this and try to cancel it out.
A: It's extremely time consuming to go from this to this, and then if you apply the state of the art, you also apply phase diversity on top of that, and you get the sharpest image. DKIST at the moment is not doing this, and probably will not do this for a very long time, if ever; they use a much simpler method called speckle, which is computationally much cheaper, but it's not this good. The downside is that most end users will not be able to do this reconstruction.
A: You need a cluster, and a lot of specialized knowledge goes into that. In terms of throughput, the SST is probably one of the highest-throughput solar telescopes right now. Stuart said DKIST is expected to do about 12 terabytes per day; at the SST, some days we already do more than eight terabytes per day. This is because the SST has a good adaptive-optics system, so you can observe for more hours of the day, and also
A: the instruments currently are more complex than the typical instruments operating at DKIST right now. So for CHROMIS the cameras take images at about 80 frames per second, the cameras are about 2k by 2k, or one-and-a-half k by one-and-a-half k, and the exposure time is about 12 milliseconds; this translates to about 1.5 terabytes per hour.
A: CRISP is a little bit lower, so about 400 gigabytes per hour; the cadence, the frames per second, is a bit lower at 37 frames per second, and the size in pixels of the camera is also smaller. But in total we have almost three terabytes... actually, yeah, it should be closer to two; about two, or two and a half, terabytes per hour.
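As a rough sanity check on those rates (assuming 16-bit raw pixels, no compression, and the sensor sizes quoted above; the byte depth and the exact CRISP sensor size are assumptions, since the transcript doesn't give them):

```python
# Back-of-envelope check of the quoted SST camera data rates, assuming
# 16-bit (2-byte) raw pixels and no compression; both are assumptions.

def tb_per_hour(width_px, height_px, fps, bytes_per_px=2):
    bytes_per_sec = width_px * height_px * bytes_per_px * fps
    return bytes_per_sec * 3600 / 1e12  # terabytes per hour

# CHROMIS: ~1.5k x 1.5k sensor at 80 frames per second:
chromis = tb_per_hour(1536, 1536, 80)
# CRISP: assumed ~1k x 1k at 37 frames per second (quoted ~400 GB/hour):
crisp = tb_per_hour(1024, 1024, 37)

assert 1.0 < chromis < 2.0  # consistent with the quoted ~1.5 TB/hour
assert 0.2 < crisp < 0.4    # in the rough ballpark of ~400 GB/hour
```

The numbers come out around 1.4 TB/hour and 0.3 TB/hour, consistent with the figures quoted in the talk.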
A: So if you observe for four hours, which is a very good day... The existing pipeline is called SSTRED. It's developed at Stockholm University, it's on GitHub, it's written in IDL, and it's a huge piece of software. Again, the MOMFBD reconstruction part is very demanding; there are very few groups that can actually do this, maybe Oslo, Stockholm and Belfast, so most end users will not do this, and they either collaborate with some of these groups or... Also, in the good tradition of ground-based telescopes, all the data is proprietary.
A: There's very little open data out there, even though that is starting to change, and there is a will to start putting this data out there and making it publicly available. "Is that because of the observing plan? A person goes to do their observation; it's not something that's taking data all the time?" That is part of the reason too, but also, first, the data volume is huge. It used to be that people would go there with a suitcase of hard drives and bring it back; nowadays
A: the internet link is quite fast, but I think it only works, at least, to Stockholm, and I'm not sure if it's possible to do this for other places. But then you need a huge amount of space. And yeah, it's also because of PI time, observing time, and things like that. Now there's some SOLARNET time, so maybe some of those observations can become public after a certain proprietary period.
A: So the end result of this (well, there are different stages of the data processing), the end result, is FITS files, which are produced by SSTRED, and I remember that just writing the final files took about a day. I think this is typical for one dataset.
A: These have a lot of metadata, compliant with the SOLARNET recommendation documents, and they use WCS -TAB, which, at least until a couple of hours ago, was not working properly with the astropy WCS, but I think Stuart may have just fixed that. What else do they have? "The whole WCS is -TAB?" Yes, except Stokes.
A: "Okay, is that because of the distortion that has to be modeled?" That I don't know. "Including the celestial axes?" Yeah, the spatial axes, yeah. Currently, yes. I don't think it has to be in x and y; I don't know why they chose that. I think the distortion is very, very small in those axes.
A: You certainly need -TAB for the wavelengths, because it's not a constant separation, and also for time, because the scans can come at varying times. So normally one of these output files is one dataset, and it's five-dimensional: it's time, Stokes, wavelength, and x and y.
A: So typically they can be a few hundred gigabytes per file.
A: And they are visualized with this piece of software called CRISPEX. This is a screenshot of a version from some years ago. You have an image viewer in the center (the main part is the image viewer) and then you have some controls here that you can use to play a movie, which it is very good at; it plays movies very fast. And then you can play a movie at a fixed wavelength position. So here, in this tiny window (it's hard to see),
A: this tiny window shows the four Stokes parameters, I, Q, U and V, at a fixed wavelength. In this case I think it's probably showing Stokes 3 or something; you can also show just the intensity, and you can play fast movies, and you can also use CRISPEX to combine different data sets in the same format, which don't necessarily have to be from the same instrument.
A: So one could be from IRIS, another could be from the SST. Another cool feature is that you can move your mouse over it and it will update this detailed spectrum. It also has a different panel, which is the temporal slice: for a given position it shows you the spectrum. So I think this is wavelength (this is a bit hard to see, because this is not Stokes I) and this is time. So it shows, say for this pixel, or this red one here in the penumbra,
A: a spectrogram of time versus wavelength. To be able to do that quickly, the data has to be duplicated on disk, so there are two versions of the same data, one just the transposed version of the other. So if you have a file that is 100 gigabytes, then you need 200 gigabytes, because of the transposed copy for quick access.
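The reason the transposed copy pays off comes down to on-disk byte layout, which can be shown with plain stride arithmetic. The cube sizes below are made-up examples, not real SST file dimensions.

```python
# Why CRISPEX keeps a transposed copy on disk: in a row-major cube stored
# as (time, y, x), the samples for one pixel across time are far apart,
# while in a (y, x, time) copy they sit next to each other.

def byte_offsets(shape, index_per_axis, vary_axis, itemsize=2):
    """Byte offset of each element along vary_axis, other axes fixed,
    for a row-major (C-order) array of the given shape."""
    strides = []
    for i in range(len(shape)):  # stride of axis i = product of later dims
        s = itemsize
        for d in shape[i + 1:]:
            s *= d
        strides.append(s)
    offsets = []
    for k in range(shape[vary_axis]):
        idx = list(index_per_axis)
        idx[vary_axis] = k
        offsets.append(sum(i * s for i, s in zip(idx, strides)))
    return offsets

shape_txy = (100, 1024, 1024)  # (time, y, x) as written by the pipeline
shape_xyt = (1024, 1024, 100)  # (y, x, time) transposed copy

# Time series of pixel (y=5, x=7) in the original layout: huge jumps.
jump_txy = byte_offsets(shape_txy, (0, 5, 7), vary_axis=0)
# Same pixel in the transposed copy: consecutive 2-byte reads.
jump_xyt = byte_offsets(shape_xyt, (5, 7, 0), vary_axis=2)

assert jump_txy[1] - jump_txy[0] == 1024 * 1024 * 2  # one full frame apart
assert jump_xyt[1] - jump_xyt[0] == 2                # adjacent on disk
```

Seeking a full frame per sample versus streaming adjacent bytes is the whole trade, bought at the cost of doubling the disk footprint.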
A: This was something that we tried, I think the first time a few years ago now: we tried to do something like this with glue, because IRIS uses this, or something very similar to this, right? Oh yeah, yeah. You can also use this for IRIS, right, and it's
A: really sort of a powerful way of looking at it, in this case especially when the data is huge, but even for other stuff, because you can point at one frame and it gives you information through the other dimensions. So that's usually... let's say here: what are the coordinates, time? So I think something like this could be, and has been, done in certain cases with glue and Python.
A: So I loaded a small data set on my computer. What is this? It only has five time steps.
A: So this is one image at one time, at one wavelength; I can pick a different time. And, going back to what you said earlier, it's in pixel space; there's no attempt to... well, now, there is a WCS here. It doesn't work right now with astropy, because of some bug in astropy, but I think Stuart may have just fixed that. "Oh, this is the -TAB thing, right?" Right, yeah, okay, right. And you notice here I overplotted one on top of the other, just with a different background.
A: You notice that the images have the same number of pixels, but the padding is different. So, for example, you see there's this thing here, which means that one image is kind of shifted. There are two reasons for that. One is that this image does not get de-rotated, so as time goes by the Sun is going to rotate in the image frame, and the other thing is that the atmospheric seeing is going to distort this in different ways.
A: Sorry, for example, here in this very corner, one of the images covers this corner but the other does not. Okay, okay. "But it is aligned, such that if I take a slice through, it should be correct?" The slices, yeah; yeah, you can take a slice in wavelength, but sometimes some slices will have gaps.
A: Yeah, so this array is five-dimensional. The first dimension is time, which in this case only has five time steps. Then this one does not have the full Stokes IQUV (sorry, the four images), so it only has one. Then 48 is the number of wavelengths, and this number is not usually very large; it's rarely much more than 50. And then there's x and y.
A: So it quickly adds up, especially if you save it as, say, 32-bit; it's normally saved as an 8-bit integer, to save space.
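For scale, the 5-D shape just described implies file sizes like the following. The 2048-by-2048 spatial frame is an assumed value for illustration; the transcript doesn't give x and y.

```python
# Size of a 5-D SST cube (time, stokes, wavelength, y, x). The spatial
# size (2048 x 2048) is an assumption made for illustration only.

def cube_gb(n_time, n_stokes, n_wave, ny, nx, bytes_per_sample):
    n_samples = n_time * n_stokes * n_wave * ny * nx
    return n_samples * bytes_per_sample / 1e9

small = cube_gb(5, 1, 48, 2048, 2048, 1)   # the demo set: 5 steps, 8-bit
full = cube_gb(200, 4, 48, 2048, 2048, 4)  # a long scan stored as 32-bit

assert round(small, 1) == 1.0  # ~1 GB: easy to open on a laptop
assert full > 600              # hundreds of GB: why lazy loading matters
```

The jump from roughly a gigabyte for the demo set to several hundred gigabytes for a full scan is what motivates the NDCube-plus-dask question that follows.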
Is this something (assuming that fix of Stuart's does solve the problem) you would consider, or very much not want, to use with NDCube and dask?
A: I think it would be really nice to use NDCube; in fact, I hope I can load it one day. Right now it doesn't work, but okay, maybe once this WCS thing is fixed. I think it's to do with astropy not having so much support for -TAB WCS. Right, right. "I mean, do you carry these around, like on your laptop, or do these usually sit on a computer somewhere?" Yeah, it usually sits on a network disk.
A: Sometimes, if I'm working on something, I might copy this 100 gigabytes to my laptop, or maybe 200 gigabytes max. Usually you need a fixed workstation; you don't carry them around. You can put them on an external disk, but again, the read speed also matters, so you have to have a fast disk. "Just to go back a couple of points: NDCube is fully compatible with dask, right, when you use it that way?" Yeah. "Do you run your entire test suite with dask arrays?" I do.
A: Yeah, I mean, it works in the sense that, as far as I've seen, there's nothing in NDCube that forces your array into memory. So you can stick in an array plus a WCS; you can use a dask array plus a WCS in the same way you would numpy plus a WCS. And then, when you do all your slicing and stuff, it just gives you back an indexed dask array; it does not force them into memory.
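That "slicing stays lazy" behaviour can be sketched with a toy wrapper: slicing it forwards the slice to both the (possibly lazy) array and a WCS stand-in without materializing anything. This is only the pattern, not the real ndcube API, and `FakeLazyArray` stands in for a dask array.

```python
# Sketch of the NDCube + dask behaviour described above: slicing the
# wrapper slices the array object and the "WCS" together, and no data
# is ever forced into memory. Not the real ndcube API.

class FakeLazyArray:
    """Stands in for a dask array: slicing returns another lazy object."""
    def __init__(self, shape):
        self.shape = shape
        self.materialized = False
    def __getitem__(self, item):
        # A real dask array builds a new task graph here; no data is read.
        start, stop, _ = item.indices(self.shape[0])
        return FakeLazyArray((stop - start,) + self.shape[1:])

class Cube:
    def __init__(self, data, wcs_axes):
        self.data = data          # any array-like supporting __getitem__
        self.wcs_axes = wcs_axes  # stand-in for a WCS: names per axis
    def __getitem__(self, item):
        # Slice data and "WCS" together; nothing is computed or loaded.
        return Cube(self.data[item], self.wcs_axes)

cube = Cube(FakeLazyArray((48, 2048, 2048)), ("wavelength", "y", "x"))
sub = cube[10:20]

assert sub.data.shape == (10, 2048, 2048)  # sliced lazily
assert sub.data.materialized is False      # nothing forced into memory
assert sub.wcs_axes == ("wavelength", "y", "x")
```

With a real dask array inside an `ndcube.NDCube`, `.data` after slicing is likewise still a dask array until something explicitly computes it.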
A: Yeah. "An exception to that might be the newer reproject, too." Oh, certainly; anything that touches reproject is going to force everything into memory, right, yeah. "Well, what happens if you're playing a movie, so it starts reading more and more; does it get rid of the old frames?" It'll just read one at a time. Okay, yeah; I'm pretty sure, yeah. I'm not sure what matplotlib does
A: if you have a very large array and you slice into it, then slice in there again. I think it would depend on whatever you're doing with matplotlib: if it's flushing the screen, then it'll flush the array from memory; it shouldn't cache it. Okay.
A: Is there any available data? Like, if I wanted to have a look at the data, is there some repository where you can get access, or do you just need to know someone with observations and ask kindly? "The second one." Okay, yeah. But I thought that, like, SOLARNET data, after six months, has to be open; again, is it just that you have to know the person who made the observations?
A: Yeah, I don't know, but these data from the CHROMIS pipeline, at least, are also relatively new, so there's not so much out there. Since 5:30, our unofficial end time, has come and gone: do we want to see if Steve wants to revisit? Yeah? Oh yeah, are we at that stage
A: yet? I was going to ask that, but maybe before that: is there anyone else, especially online, that wanted to say something regarding sunpy support for instrument teams, or that has something to show? Maybe some people have joined since, yeah.
A: Not hearing anything. So, yes, Steve: you wanted to circle back to the stuff we were talking about before I sort of interrupted you. If that is a break from instrument stuff, maybe we should say that people who aren't interested can drop off. Is that fair, Steve? Is this kind of a tangent away from specific instrument stuff, or do you think this is still instrument-support related? Yeah?
A: I would say it's instrument-support related. I mean, it's the template, an instrument-specific template, and how we might support that. Okay, so some people might actually be interested in this, then. Okay. By the way, I'm currently waiting on the phone, so I'll have to step away as soon as they get back to me.
A: Okay, okay, so let me go back. What I wanted to do was show what we did. One thing you notice, for stixpy and others, is that everybody's using the template.
There are tools, as I mentioned to Stuart, to keep templates up to date and rendered, and packages that use those templates can update themselves. All right, so here's hermes-core. We talked about the dev container a bit; I'll just go over it quickly. Basically, there is a dev container because this is all going to be running in a Lambda function on the cloud,
A: in addition to people's laptops. We define the container that we're going to use on the cloud, and we all develop in that container.
A: We then test on other systems as well, but we want to make sure that any code we develop here will run in the container, because that's what is processing our data, so it's very important for us to manage that container. Yeah, yeah. Just to clarify for the room: when you say "we", you're talking about the HERMES team, right?
A: Yes, I'm sorry, yeah, yeah. So, if you want to start a new package with the sunpy template, this isn't what you will get at this stage. No... oh, we could, yeah, that's right. So this is based on the template, but then we made a number of changes, and I'm just going through the changes that we made.
A: The other thing is we changed everything to use GitHub Actions, as opposed to the old template, which was still using Circle CI. I noticed that stixpy is still using Circle CI as well, but I think sunpy core doesn't. So I broke it up into three workflows: we have code style, we build the documentation, and then the testing.
A: Okay, hey, can I take this opportunity to ask about a different topic? Sure, yeah. It's something I'm asking because both Dan and Nabil are here. I just had a question about sunraster. Dan, when you first designed sunraster, my impression was you were working on something that, in theory, could be used to read any slit spectrograph. "Eventually that might be the goal, or at least to represent any...
A: I wasn't thinking you'd have a reader function for every conceivable instrument, but certainly it could represent data from any such instrument once you've got..." That might be the distinction I'm trying to understand, because I know that, Nabil, for IRIS you guys were using it for a while, and then you split it off into something IRIS-specific.
Nabil, do you want to take that first, about the IRIS side? So...
A: So the reading is under my control, but the object used is a sunraster object. "Yes, so you're saying there's no IRIS-specific sunraster code in irispy; it's just the FITS-reading parts, because that's the source-specific stuff." Yeah. Whereas for SPICE there isn't, or there certainly wasn't when I was working on it, a SPICE instrument package, and so, given that, it seems fine to have an instrument submodule within sunraster and have the SPICE reader in there. So, yeah.
A: So I don't think the sunraster object, or the sunraster package, was ever conceived to have no gaps or holes in it, such that any file from any instrument, with any peculiarities and non-uniform conventions, would somehow magically be read by one reader function. Rather, the sunraster object can be used to represent data from any slit spectrograph once you have defined the mapping from the instrument file into the object. Does that make sense? Okay.
A: So then we have an object that can be used by multiple slit spectrographs, and then used in the same way, but the data might be read with different software specific to the instruments. "Exactly, okay, that's exactly it, yeah." I mean, the software could likely not even be anything complicated in the sense of a package; it could literally just be a single function, as it is with SPICE. There is a SPICE reader function, and it just takes a file and puts it into a sunraster object, or an NDCollection.
A: "Okay, so if we add something for SPICE, though, like some sort of calculation for uncertainties, that could also be in sunraster, but it might be separate?" Yeah; if it's part of the reader function, where you're making that calculation from something in the FITS file and putting it in the .uncertainty attribute, then yeah, that makes sense.
A: If you're talking about a separate uncertainty suite in a whole other IRIS-specific submodule, that might be something we should talk about and understand a little bit more.
A: No, I just wanted to understand better how that was supposed to work together. Okay, great. Am I right in saying Steve has come back? Has he put his hand up? His hand is raised, yeah. Yeah, I'm back, and I had a specific comment about that. I mean, again, this goes to the roadmap and the PR: as we integrate ndcube into sunpy,
A: I think we should have a conversation about whether slit-spectrograph data is supported in sunpy core, whether that's pulling an object from sunraster or whether, once we're using ndcube in sunpy,
A: we should just have that object there, and the reader there as well. So, a discussion to be had in the comments of the roadmap, or maybe later. "So, just to interrupt you: we've already decided that eventually... I think we've already decided; it's already on the roadmap. Yeah, it's already on the roadmap that, when sunraster gets into a state we like, it will probably be merged into core. The readers we will have to have a separate discussion about."
A: All right, okay, so yeah, I'm back. Right, so here, testing: we have ubuntu-latest and mac, and we're testing on Python 3.7, blah blah. Okay, what else have we done? What's happening? Why can't I go back...
A: "Is there anything instrument-specific?" Oh, so this is not an instrument package; this is hermes-core, which sits above the instruments. So let me show you an instrument. Same thing; maybe that's helpful. So we have a data directory. We haven't quite figured out how to manage the calibration files; at first we had thought...
A: So that's something that an instrument-specific template would be helpful for, because, remember, calibration files have to be called by the package, and they may change, so for a particular time you might have two different versions of the calibration file, and you have to enable users,
A: you know, instrument teams... "Yeah, but I think that's a decision the instrument team would have to make, whether or not they want to keep them in the repository, or on a remote server and download them. I don't think we should... we can offer them the choice of either, but I don't think that's a choice the template should make."
A
Because, because when I said, because what I just said, I kept them in the irispy repository, and George looked at me, so yeah. And I don't know how you would manage that, so do you overwrite them? Do you version-control them? Yeah, I just bundle them in the API. We use the data manager, so it checks the hash. It expects a certain hash from, like, the SSW server that we pull.
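The hash-checking pattern described here, where the data manager verifies a known digest before trusting a downloaded calibration file, can be sketched in plain Python. This is an illustrative sketch of the idea, not sunpy's actual data manager API; the function and file contents are invented:

```python
import hashlib


def verify_remote_file(data: bytes, expected_sha256: str) -> bytes:
    """Return the file contents only if their SHA-256 digest matches
    the digest recorded when the file was first registered."""
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        # A mismatch means the server-side file changed (or the
        # download was corrupted), so refuse to use it silently.
        raise ValueError(f"hash mismatch: expected {expected_sha256}, got {digest}")
    return data


# Example: record the digest once, then every later fetch is verified.
calibration_bytes = b"gain_table_v2\n"
known_digest = hashlib.sha256(calibration_bytes).hexdigest()
verify_remote_file(calibration_bytes, known_digest)  # passes silently
```

The point of pinning the hash rather than just the URL is that a calibration file silently replaced on the remote server is detected immediately, instead of quietly changing the pipeline's output.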
A
So I'm just saying everybody is solving this problem on their own, for themselves, and it would be nice to provide, you know, one solution which would probably be okay for most people, and if they want to diverge from that, that's fine. Right, okay, sure, okay, yeah. That way, each team doesn't have to reinvent the wheel every single time; it's the same problem as everybody else's.
A
Because all it does, because the new NASA requirement is that you provide your processing, your calibration, your algorithms; you have to describe them, right. You have to tell people how they work and what you're doing, and so this is what this required document for NASA missions includes.
A
So your calibration plan, your measurement algorithm descriptions, yeah, that's all. This is all just from the template. But are these the only two instrument-specific things that you've come across so far, or are there other things? Just so I understand everything, from an instrument perspective.
A
Yeah, those are the main ones, and then of course there's the dev container thing, because, again, instrument teams will be using these packages to process their data on a certain machine, right, and so they need to control and ensure that this code works on their processing machine and doesn't break. So there's a difference; you have to think, like, this is production.
A
Code. Are you talking about, like, this is an instrument package which only, well, not only, but it focuses on the pipeline, on creating the data products, rather than doing analysis with that data, or does it merge the two together? It's both, okay, yeah, yeah. I didn't think it was worthwhile to maintain two separate packages.
A
You know, because yeah, it's just a pain, and then, and also, like, when you process the data, you also want the tools at your fingertips to, like, check the data and visualize it and stuff. So from an instrumenter's point of view, right, why would you separate the two? Yeah, yeah, for sure, I just was wondering about these things.
A
Yeah, I mean, the other thing also is it would be nice to kind of create an API, or agree on kind of a high-level API, for these instrument packages, like a function called calibrate_file, for example, right. And so you always know what function you have to run on a data file to process it up a level. Now, that's something the instrument teams need to get together and discuss, but again, it might be nice to provide in the package some sort of, you know, model.
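As a sketch of the shared high-level API being proposed here, one hypothetical convention would be that every instrument package exposes a `calibrate_file` entry point that takes a file at one processing level and returns the path of the next-level product. All names, the filename scheme, and the level tokens below are invented for illustration; the real API would have to come out of the instrument-team discussion:

```python
from pathlib import Path

# Hypothetical ordering of processing levels; each call to
# calibrate_file promotes a file by exactly one level.
LEVELS = ["l0", "l1", "l2"]


def calibrate_file(input_path: Path) -> Path:
    """Return the output path for the next processing level.

    A real implementation would read the file, apply the
    instrument's calibration, and write the new product; here we
    only compute the conventional output name.
    """
    name = input_path.name
    for current, higher in zip(LEVELS, LEVELS[1:]):
        if f"_{current}_" in name:
            return input_path.with_name(name.replace(f"_{current}_", f"_{higher}_"))
    raise ValueError(f"no recognised level token in {name!r}")


# Usage: the same call works for any instrument following the convention.
print(calibrate_file(Path("eea_l0_20220101.fits")))  # eea_l1_20220101.fits
```

The value of agreeing on even this much is that a mission-level pipeline can drive every instrument's package through one identical call, without knowing each package's internals.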
A
I guess a model API which, you know, obviously they can diverge from; that's fine. No, I definitely agree with that. As far as, like, wavelength response functions: I've written multiple versions of wavelength response classes and readers, and it feels like I'm doing the same exact thing over and over again with the slightest tweaks, and, like, why isn't there a base channel class, or a wavelength response class, that we can just feed, you know, the transmittance and the efficiency into? Like, yeah.
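The base class being asked for might look something like this minimal sketch: a generic channel that is fed per-wavelength transmittance and detector efficiency and multiplies them into a total response. The class and attribute names are hypothetical, and a real version would use `astropy.units` quantities rather than bare lists:

```python
class ChannelResponse:
    """Generic wavelength response: the product of the
    multiplicative components a channel is built from."""

    def __init__(self, wavelength, transmittance, efficiency):
        if not (len(wavelength) == len(transmittance) == len(efficiency)):
            raise ValueError("all inputs must be sampled on the same wavelength grid")
        self.wavelength = list(wavelength)
        self.transmittance = list(transmittance)
        self.efficiency = list(efficiency)

    def response(self):
        """Total response at each wavelength sample."""
        return [t * e for t, e in zip(self.transmittance, self.efficiency)]


# An instrument package would subclass (or just instantiate) this
# instead of re-implementing the same reader/response logic each time.
ch = ChannelResponse([171, 193, 211], [0.5, 0.5, 0.25], [0.5, 0.25, 0.5])
print(ch.response())  # [0.25, 0.125, 0.125]
```

Instrument-specific details (filter stacks, degradation corrections, file readers) would then live in subclasses, while the grid validation and the component product stay in one shared place.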
A
I very, very much heard that, yeah. And then the rest of it, I mean, I didn't make very many changes. I do want to get rid of tox, even though it's still here, simply because, I don't know, we're managing our containers over here, and environments and stuff, and since we're running all the testing and stuff with GitHub Actions, it didn't seem that valuable to have another wrapper. Before we get in the weeds about tox.
A
I was wondering if any other representatives of instrument teams online have any thoughts or reactions to this. Is it totally over people's heads? Do people think this could be useful? Do people strongly back up some of Steve's suggestions? Sorry, someone there, Mike? Yeah, Michael, oh, Mike, yeah, go ahead. Yeah, so I guess, really stupid question here: what about teams that already had an existing package?
A
Can I, can I try to answer that, and then everybody else can? No, sure, go for it. So, so for me, for example, I mean, I don't want to have to maintain.
A
Let's say you're using a template; like, you start off with a template, but unfortunately, you know, things change over time, and I don't think that the individual template users should each have to maintain the template separately. I think it would be much more helpful if a smaller group did. You know, like, for example, with GitHub Actions: if we decided that in the template we're using GitHub Actions, because, you know, our template is targeting GitHub, and then GitHub Actions, you know, changes, right.
A
You could provide a template that has a workflow that maintains itself, right? Exactly, yeah, exactly. To answer the question from a slightly different point of view: if you don't have a template at all in your pre-existing package, it's a way, it's a way to, like, adjust your package structure to be sort of consistent with sunpy and with other packages that have that template, and that has a lot of benefits.
A
I mean, it's already been mentioned, the maintenance, but then also, if you have issues, you know, or problems, then you can sort of talk to the sunpy people, and, you know, maybe your issue is exactly the same as something that someone else has already solved, because your package structure is exactly the same. It also then means that you're working with a package.
A
If you want to contribute to another package, like, say, ndcube, because you're using ndcube: if ndcube has the same package structure, you'll start working in it and you'll feel right at home. So the barrier to jumping back and forth between packages is lower.
A
Does that answer the question, Michael, or do you still have other concerns?
A
Okay, I think if you plan to have, like, one project that you maintain, there probably isn't actually an independent need. I think in Steve's case, he says, we have so many repositories, we're trying to spin out so many repositories and maintain them. Because if you have one, and you're happy to tweak the package layout, I think, as things change, as people point them out to you.
A
You can either choose to keep on top of tweaks to Python packaging, update versions of things like build systems and whatever else, as it kind of goes on. Like, these things don't tend to happen on very short timescales, but over, like, multi-year timescales, things tend to evolve a little bit in your package config files or whatever, and if you don't keep up, eventually it becomes, like, a big undertaking to update stuff. So you can either choose to, like, just kind of keep abreast of that.
A
If you are starting a package from scratch, presumably the benefit of going with a template is, like Steve said, he didn't want to develop it all on his own, and, you know, there are a lot of things that are kind of just done for you, and so it's a quicker way of getting started. But, oh yeah, absolutely, already, yeah, okay, and your explanations there make a lot more sense now. Okay, great. Yeah, for example, Steve has set up, had set up a package, and now setup.py is being removed, oh no, yes.
A
A template model allows you to not have to worry about this, and hopefully updates it for you before it breaks. But if you're willing to keep abreast of these challenges, then you might not need that, and that's a choice, yeah, it's a choice you would have to make for your package. Are there any other questions from others? Sorry, Steve.
A
Well, I mean, it's quarter to six in the evening. That is, that is only a question. It's a question; I'm happy to, yeah, but the answer is not sure, and yeah, yeah. I don't want to know, I don't want to know. I mean, I think you're making the case for that, right. Even if, like, you use the package template, at some point your package exists, you have these two files in your repo, and you're like: what do they do?
A
I don't know, but my package is installed, so I don't care. But then, for some reason, you know, setup.py goes away entirely. Then you'd want someone else to do that work for you; you don't have to understand it. Yeah, yeah, exactly, so, so: are there any other questions from other instrument team representatives on this, anything that's not clear that we could help with by answering a question? Or even someone not on an instrument, well, yeah, anyone online? Yeah, no, only instrument people, sorry, yeah.
A
I guess not. All right, Steve, do you want to continue, and how much more do we have to discuss? I think I'm mostly done. I mean, you guys can see that, you know, we are already facing this issue, because we have five packages that I have to maintain and keep, you know, all together. So even right now, like, I found, you know, an issue with the template in one of them.
A
I have to do that, you know, four other times. So, so I want to move to a, you know, cookiecutter. I had, I was using the cookiecutter at first, but ran into some problems and so kind of abandoned it, but yeah, having everything with cookiecutter and then with the, what was it, cruft or whatever, that updates things, like, I want that now, not later. The fact you found cruft makes me very happy.
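For reference, cruft's basic workflow against a cookiecutter template looks like this; the template URL below is a placeholder, while `cruft create`, `cruft check`, and `cruft update` are cruft's actual commands:

```shell
# Generate a new package from a cookiecutter template; cruft records
# which template commit it came from in a .cruft.json file.
cruft create https://github.com/example-org/package-template

# Later, from inside the generated project, see whether the
# upstream template has changed since the project was created...
cruft check

# ...and apply the template's updates to this project as a diff
# you can review before committing.
cruft update
```

This is exactly the "fix it once in the template, not in five repositories" workflow being asked for: each downstream package pulls the template fix with `cruft update` instead of re-applying it by hand.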
A
I mean, I, I will not be drawn on timelines, but working on the sunpy and OpenAstronomy packaging template is, like, the top priority thing that's on my to-do list, but there's so much we have, so, yeah. Yes, I'd love to be a part of those conversations, because I think I'll be messing around with it as well.
A
All right, that's it, that's it for me on this. Great. I would say, because of the colour of the CO2 monitor, I would ask: are there, are there any other final comments or questions from people online?
A
Yeah, thank you, everyone, for joining online, and particularly the instrument teams, for giving us an insight into your needs and your practices. Yeah, it's very helpful. I think I will probably follow up with an email about a future meeting for this group, maybe on, like, kind of a one-month cadence, at least to begin with.