From YouTube: SunPy Coordination Meeting 2021 - Wednesday
A
So hello, everyone, and welcome again to day three of the SunPy coordination meeting. Today we will be talking about a nice, easy topic to start off with, affiliated packages, and just that really; and then the interesting sessions will be in the afternoon, after the break, where we'll talk about Fido clients and timeseries.
A
And so to kick us off... well, please! Oh sorry, we need a moderator and a note-taker.
A
I'm happy to try and moderate the morning session.
D
Perfect, okay. So I'm going to do a brief update, a flash overview of what's going on with our affiliated packages and what's happened since the last coordination meeting. I just realized I called this the SunPy coordinate meeting, which is not right.
D
We could probably spend a whole week talking about the coordinates subpackage as well, but today we're talking about affiliated packages. Okay, so the affiliated package system was laid out in a SEP (I think this was mostly by Stuart and Steve), with the idea being that affiliated packages are outside the scope of the core package. They provide functionality that's maybe instrument-specific or analysis-specific, but compatible with the greater SunPy ecosystem.
D
They rely on the core package for their base functionality, for things like coordinates, downloading data, or reading in data, using basic data structures like Map, TimeSeries, et cetera. And just as a brief reminder: we have these sponsored packages as well, which include things like the core package, ndcube, drms, sunraster, and sunkit-image. Those are affiliated packages, but they are maintained by the project, meaning we babysit the CI.
D
So since last time, we established a set of review criteria for these affiliated packages, and I laid out a workflow for what that review process would look like; I'll summarize that briefly on the next few slides. What we did was essentially take that SEP and put it into practice: if we wanted to actually enforce the criteria that are laid out in that document,
D
What would that process look like? So we did that, and that's all on the website.
D
There are reviews for all the existing sponsored and affiliated packages on the website. The last step, after establishing those review criteria, was to apply the criteria and this workflow to all the sponsored packages (the core package, ndcube, drms, sunraster, sunkit-image) and then also to two affiliated packages, aiapy and pfsspy. So we actually put this into practice, and for these two it seems to have worked quite well.
D
The criteria are essentially: functionality, meaning what does the package do, and is it in the scope of solar physics? Integration: how well does this fit into the established SunPy ecosystem? Documentation, which is pretty straightforward: is it documented, and to what degree? Testing, similarly: are there tests, do they have good coverage, are they run on a CI system, do they use an established test framework?
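As a concrete (hypothetical) illustration of that testing criterion: what a reviewer looks for is simply package functions covered by tests written for an established framework such as pytest. A minimal sketch, where `normalize` is an invented stand-in for a package function, not part of any SunPy package:

```python
# A toy package function plus a pytest-style test covering it.
# Runnable with `pytest` or by calling the test function directly.
import numpy as np

def normalize(image):
    """Scale an array so its values span the interval [0, 1]."""
    image = np.asarray(image, dtype=float)
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo)

def test_normalize_spans_unit_interval():
    data = np.array([[2.0, 4.0], [6.0, 8.0]])
    result = normalize(data)
    # The smallest value maps to 0 and the largest to 1.
    assert result.min() == 0.0
    assert result.max() == 1.0
```

Coverage tooling and CI then just run such tests automatically on every change.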
D
Duplication,
you
know,
is
this
re-implementing
functionality,
that's
already
in
the
core
package
or
other
affiliated
packages,
or
is
it
you
know,
building
on
top
of
kind
of
existing
functionality
in
the
ecosystem
and
then,
finally,
the
community?
You
know
how
well
are
the
developers
of
these
packages
engaging
with
the
established?
You
know,
senpai
piastro
scientific
python,
community,
and
so
the
idea
is
that
we
have
this.
D
Stop
light
system
you
know
shamelessly
stolen
from
astro
pi
per
usual
of
where
we
kind
of
rate
the
the
package
in
each
of
these
categories
using
using
these
these
different
labels,
and
so
each
of
these
each
of
these
labels
for
each
categories
has
kind
of
a
more
detailed
description
assigned
to
it,
but
so,
for
example
like
in
in
the
case
of
functionality,
this
could
be
a
general
package
or
it
could
be
like
a
specialized
package.
So
that
would
be
something
more
like
an
instrument
package
where
it's
it's
targeting.
D
You
know
a
specific
type
of
data,
doing
specific
operations
to
a
specific
kind
of
data,
but
a
more
general
package
might
be
something
like
sun
kit
image,
where
it's
more
applicable
to
a
larger
kind
of
set
of
of
tasks,
and
the
idea
is
okay.
We
go
through
each
of
these
and
if
there
are,
if
the
package
has
no
red
scores-
and
it
is
green
in
this
functionality
category-
then
it's
that
it
can
be
accepted
as
an
affiliated
package.
D
If,
if
it
has
a
red
score,
sorry
now,
I'm
reading
so
in
between
sort
of
the
accepted
and
not
accepted
is
provisional.
So
this
can
be
something
like
if
it's,
if
they're
working
towards
meeting
these
criteria
and
it
doesn't
have
a
red
score
and
the
functionality,
duplication
or
community.
So
it's
in
scope
right.
It's
not!
You
know
it's
not
trying
to
re-implement
stuff.
That's
already
that
already
exists
in
the
ecosystem,
and
these
and
the
developers
of
this
package
are
actively
engaging
with
the
community.
D
Then
we
want
to
kind
of
encourage
them
to.
You
know:
keep
keep
iterating
on
each
of
these
categories
and
kind
of
get
their
package
up
to
essentially
up
to
the
standards
that
we've
provided
and
we
give
them
this
provisional
status
to
just
sort
of
encourage
that
and
then,
of
course,
if
it's
not
these,
two,
it's
not
accepted.
D
So
this
is
something
that
you
know
doesn't
fit
in
the
scope
of
some
of
of
sort
of
solar
physics,
so
that
kind
of
existing
ecosystem-
or
you
know,
or
if
it
does,
you
know
if
it,
if
some,
if
we
had
a
package
that
re-implemented
coordinates
in
a
way
that's
completely
incompatible
with
the
coordinate
stack
like
that's
another
example
of
something
that
that
wouldn't
really
apply
that
we
would
not
acceptance.
D
So
I've
kind
of
talked
already
through
some
of
this,
but
in
addition
to
all
these
categories,
we
established
a
review
process
for
for
more
formally
what
this
would
look
like
in
practice,
so
so
to
do
this.
Someone
who
is
interested
in
submitting
an
affiliate
package
would
open
an
issue
with
the
issue
template
on
the
senpai.org
repo,
this
role
that
I'll
talk
a
little
bit
about
in
a
second,
the
affiliate
package
liaison
identifies
a
reviewer
for
that
particular
package.
D
The reviewer goes through each of these categories, and if the package is accepted, we make a pull request to the sunpy.org repo and the review and its results are posted to the website. So all this information is on the project web page, under the affiliated packages page, which is at this link here.
D
So
so,
just
to
kind
of
summarize
and
and
to
think
about
you
know
where,
where
we're
gonna
go
from
here
is
this
is
all
great
and
it
we've
applied
it
to
two.
You
know
packages
that
several
people
in
this
meeting
have
developed
and
worked
on.
I
think
and
there's
david
has
an
open
issue
for
submitting
senpai
soar.
D
The
solar
orbiter
data
data
access
package
to
as
also
also
as
an
affiliate
package,
so
you
can
go
and
kind
of
see
what
this
process
looks
like
live,
but
we
haven't
had
any
submissions
really
from
people
outside
the
projects.
In
particular,
we
haven't
had
any
instrument
teams
coming
to
us
to
to
say
you
know
we
we
want
to
make
use
of
this
affiliated
package
system,
and
I
guess
the
question
is:
why
is
that
something
that
you
know
we
need
to
fix?
Do
we
need
to
advertise
this
whole
process
better?
D
I think those are all things that should be up for discussion here. And then, similar to what I was saying about the subpackage maintainers the other day: we don't actually have an affiliated package liaison. We've not formally designated someone to fill that role.
D
I
think
in
practice
that's
been
the
deal
for
the
past
year,
but
but
we
should
probably
think
about
trying
to
maybe
more
formally
fill
that
role,
because
this
is
obviously
an
important
and
an
important
part
of
engaging
with
the
community,
the
greater
the
wider
solar
physics
community
and
sort
of
getting
encouraging
adoption
of
these
criteria
that
we've
laid
out
and
kind
of
better
interoperability
between
all
these
components
of
this
growing
python
ecosystem.
In
solar
physics,.
D
I
did
not
mention
that
I'd
forgotten
that,
but
yes,
yes,
that
would
be
great
and
yeah
would
would,
I
think,
go
a
long
way,
certainly
to
so
to
solving
a
lot
of
these
kind
of
community
engagement
issues
for
sure
yeah,
so
that
that
is
all
I
had.
But
I
am
happy
to
just
you
know,
fill
the
rest
of
the
time
with
discussion
or
depending
on
how
we
want
to
move
into
them
to
the
next
section.
F
So, having just submitted sunpy-soar, I would say the actual process of submitting was really easy and pretty frictionless.
D
Yeah, I mean, I think it's partially... we talked about this a lot, either at the last coordination meeting or at some meeting in between: what does it mean to really be an affiliated package? In practice, it means that we list the package on the website, right, together with the review criteria.
G
So the other question is: what does it offer the general user community? To go back to Craig's question of yesterday or the day before, you could say that this is sort of visibility to the wider community. SunPy is a node, you know, a place that people go; it's got a lot of recognition. And so, if people are coming...
G
You
know,
there's
a
certain
amount
of
checking
saying
that
we
know
that
these
things
will
play
well
together
and
we
recommend
that
you
try.
We
don't
necessarily
recommend
that
this
will
solve
your
problem,
but
you
know
we
wouldn't
recommend
it
a
bit
too
strong,
but
we
like
advertise
that
people
should
investigate
and
be
willing
to
see
if
these
packages
help
them
right
yeah.
I.
D
I think it's both sort of a discoverability thing and also, yeah, imposing the standards we want on these things that are part of the larger ecosystem. Maybe that's honestly the biggest thing: the interoperability and the discoverability. We're trying to impose interoperability across a wider ecosystem, and not just allowing the proliferation of a ton of Python packages that don't work well together and that are all conflicting and overlapping.
G
But you could argue that this is a way of pitching it, because people come and say: hang on, SunPy doesn't do everything they want; why do I have to install all these multiple things and know about all these multiple things? In IDL I just get SSW and it does everything for me. Whereas here we're having this ecosystem of individual packages.
I
Let's face it: if I want to go and look at XRT data, and I was using IDL, the first thing I would do is go to SolarSoft; you don't go to the XRT web page and hunt around for software.
I
So, like, SunPy is a vessel, you know, a repository for all the software that I need for processing solar physics data.
I
I see. Well, I've got a few questions about this, because you've said that nobody's really shown interest.
I
We, the PUNCH team, haven't put it down as a requirement, but it's in our documents that it's our intention to become an affiliated package. Whether this transpires, and the intention is that it will, will come down a lot to whether we can meet your criteria. I was looking at your list there and I saw, for example, that you talked about duplication. We're building this software for our data reduction pipeline, and we're probably not going to want to use some of the other functionality in some of the other modules, because they're not necessarily pertinent to our pipeline, which needs to be quick and slick and produce the data that we are required to produce.
I
So we may end up duplicating some functionality. For example, originally we were looking at the original form of ndcube... what's it called, the sequence?
I
NDCubeSequence. Because we weren't really sure whether that would meet our needs, we were going to construct our own simple one, and that would have been duplication, but we would have built it exactly as we wanted it.
I
NDCubeSequence is something that we're going to use now, which is great, but there could be other examples, and I would hope that we wouldn't get refused affiliated status just because we had some duplication which was required for our pipeline. Because, at the end of the day, meeting our requirements trumps meeting the SunPy review criteria.
H
Okay, go ahead, Laura! No, I was going to say: maybe this is something we need to change on the website, to make clear that it can be a conversation. It doesn't mean you have to come and, if you don't pass, we say goodbye. You can come even while you're building it, saying this is a hopeful future.
B
Sorry, I was just gonna say, from my perspective, slash the STIX team's perspective: we're gonna have two packages. We're going to end up having stixcore, which is the stuff that does all the horrible pipeline work and makes the FITS files, and then we'll have stixpy, which is what we expect the community to use. And the goal, the plan (I mean, we're already using the templates and stuff), would be that stixpy will be an affiliated package. At least that's the current intention.
D
I
was
going
to
have
something
along
those
same
lines.
I
think
when
it
comes
especially
when
it
comes
to
like
the
sort
of
nitty-gritty
pipeline
code
like
I
it
would
seem
like.
I
guess.
Do
you
see
that
as
a
user
facing,
does
the
punch
team
see
that
as
a
user-fixing
package?
Is
that
something
that
users
would
ever
kind
of
care
about,
or
is
that
just
something
for
the
instrument
team
to
sort
of
produce
data
products.
I
Well,
the
first
thing
to
be
said
is:
we've:
we've
agreed
that
whatever
software
we're
producing
for
punch
will
be
available
to
the
whole
community,
there
will
be
nothing
here,
nothing.
It
will
be
made
available
through
github
or
whatever
we're
going
to
use,
and
I
I
I'm
not
a
hundred.
I
can't
give
you
a.
I
can't
guarantee
if
that's
gonna
be
the
way
we're
gonna,
where
we're
gonna,
try
and
put
everything
towards
you.
The
idea
is
that
we
are
going
to
have
this
object,
object,
orientated
python
package
with
the
idea.
I
You
can
just
chuck
your
level
zero
data
at
it
and
it
will
just
go
through
all
of
the
steps
to
produce
the
final
data
product
which
people
may
want
and
therefore
maybe
it
does
have
value
be.
There
is
some
value
of
it
being
in
the
some
pi
affiliated
package,
but
now
just
listening
to.
I
think
it's
is
it
sam
alone,
so
maloney.
B
Shane
maloney
maloney.
Excuse
me
yeah,
sorry,
all
right:
it
was
a
mute
there
trying
to
take
notes.
I
I just took note that you've made the decision to separate the STIX software into a stixcore and a stixpy package.
I
Maybe
that's
something
that
we
could
have
a
conversation
about
to
understand
your
kind
of
reasons
why
you
went
down
this
road
and
maybe
that's
maybe
you've
made
some
decisions
that
we're
going
to
have
to
be
making
in
the
future
and-
and
you
can
kind
of
like
help
us
out
there,
but
nothing's
kind
of
like
set
in
stone.
But
the
idea
was
that
we're
going
to
release
all
of
the
software
anyway
and
if
we
can
make
part
of
package
thing
great
and
we're
going
to
be
building.
I
You
you're
probably
aware
that
we
are
building
the
transform
software,
which
yes,
it's
going
to
be
used
as
part
of
the
punch
pipeline,
because
we
need
it
to
correct
our
the
quartic
fit
to
the
optics,
but
it
can
equally
be
used
by
any
optical
system,
and
so
it
has
value
being
part
of
the
sunpi
package.
How
how
we
pack
that
up?
I
don't
know
but.
D
I think that's perfectly reasonable for a package, especially something like a pipeline where you're optimizing for performance. I think there are really great reasons for it not to fit into these review criteria, and that's perfectly reasonable, because the goals are just different.
C
So this is more about a library for PUNCH data analysis, or something like that, than it is the actual pipeline code; at least in my head.
G
You know, my view on it (again, I'm not speaking for anyone else) is that if you are talking about pipelines, or stuff that's very instrument-specific, then that's the remit of the instrument team. So long as there isn't another PUNCH pipeline package that does the same thing, to me that's the level of duplication you're talking about, not if you're reinventing stuff under the hood that the user doesn't see, for whatever implementation reasons.
I
Right, right. And I will just reiterate what I said earlier: it's not decided yet what we are going to put forward to be in the PUNCH affiliated package, and it may be none of the pipeline. But we have made the decision that the pipeline will be made available through some source or another.
B
Yeah, I mean, just to give you some other perspective, that's the same for STIX. All of the software to make the FITS files is already opened up, like you can go look at it now, and it will be public. But the idea is that it doesn't really fit to be an affiliated package, mainly because it's horrible code, and I don't want to have to try and fix it to be nice enough to get affiliated package status.
B
But
yeah
I
mean
the
idea
there
is
that
like
if,
if
a
user
wants
to-
and
they
don't
trust
the
sticks
team,
they
can
go
and
prep
their
data
from
you
know,
telemetry
all
the
way
through
and
change
and
stuff
if
they
wanted
to.
But
the
normal
use
case
would
be
that
they'll
get.
You
know,
fits
files
from
the
solar
or
archive
and
then
use
six
pi
and
sun
x,
specs
and
all
the
wonderful
other,
wonderful
sunpower
tools
that
we're
going
to
build.
I
Yeah
yeah,
the
the
one
of
the
reasons
that
we
are
looking
to
release
the
pipeline
is
because,
when
we
run
the
pipeline,
as
I
said,
we're
building
it
in
this
object-orientated
approach
with
the
idea
that
each
each
kind
of
method
that
we
have
to
apply
to
the
data
to
process
process
it
to
the
next
level,
so
so,
for
example,
removing
cosmic
ray
spikes
as
an
example,
that'll
be
one
of
the
I'm
going
to
mess
up
my
terminology,
I
want
to
say
modules
in
the
object,
but
it's
probably
something
else
functional.
I
But
anyway,
there
are
some
people
who
might
be
interested
in
those
cosmic
rays
and
so
they're
not
going
to
want
to
go
to
our
level
2
product,
which
has
had
the
cosmic
rays
removed
they're,
going
to
want
to
go
to
the
level
0
product
and
they're
going
to
want
to
process
it
in
the
way
that
they're.
You
know
that
leaves
the
cosmic
rays
in
there
or
it
could
be
spikes
or
something
else,
and
that's
kind
of
like
the
mentality
and
with
releasing
the
whole
pipeline.
G
Yeah,
I
think
I
think
instrument
specific
code
is
a
bit
of
a
special
case.
You
know,
I
think,
then,
what
these
criteria
and
what
affiliate
packages
were
originally
envisaged
for.
G
You
know
again,
just
speaking
for
myself,
not
the
the
team,
but
if,
if
there's
like
an
instrument
that
wants
to
work
with
senpai,
but
you
know
and
like
help
get
their
stuff
out
and
make
sure
that
it
plays
nicely
with
the
solar
python
ecosystem,
I
think
that's
great,
but
you
know
instrument
teams
ultimately
have
you
know
jurisdiction
over
their
data,
their
pipelines,
you
know
and
and
how
they
how
they
produce
their
data
files.
G
So
I
don't
think
I
think,
that's
probably
up
to
us
to
work
with
those
requirements
and
just
make
sure
that
getting
it
into
the
wider
python
ecosystem
is
done
in
a
way
that
is
consistent
and
will
play
nicely
with
everything
but
stuff
like
pipelines.
E
I totally understand that a pipeline doesn't have to use the SunPy coordinates framework to generate this information. But if you're going to be working in conjunction with the SunPy ecosystem, let's not have inconsistent answers. So I would say it'd be good and prudent to have those kinds of cross-checks, right? Maybe you don't generate all that stuff using the SunPy framework, but have a unit test or something to check it against the SunPy results and make sure you're not breaking consistency, such that if a user dives in and tries to do it in what should be an equivalent way, they won't get a different answer.
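The cross-check idea suggested here can be sketched as a unit test. In this hypothetical example, `reference_despike` stands in for the established ecosystem routine (in a real package it would be a call into sunpy or sunkit-image) and `pipeline_despike` stands in for a pipeline's own independent implementation; the test asserts the two stay consistent:

```python
import numpy as np

def reference_despike(image, nsigma=5.0):
    # Stand-in for the ecosystem's reference routine: flag pixels
    # more than `nsigma` standard deviations away from the median.
    return np.abs(image - np.median(image)) > nsigma * image.std()

def pipeline_despike(image, nsigma=5.0):
    # Hypothetical pipeline implementation, written independently
    # (imagine a faster or compiled version of the same algorithm).
    med = np.median(image)
    sig = image.std()
    return np.abs(image - med) > nsigma * sig

def test_pipeline_matches_reference():
    rng = np.random.default_rng(42)
    image = rng.normal(size=(64, 64))
    image[10, 10] = 100.0  # inject an artificial spike
    pipe = pipeline_despike(image)
    ref = reference_despike(image)
    assert pipe[10, 10] and ref[10, 10]  # both flag the spike
    assert np.array_equal(pipe, ref)     # identical masks overall
```

A test like this runs on CI, so any drift between the pipeline and the ecosystem result is caught immediately.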
I
Agreed, yeah, I agree exactly with what you just said there. And I might even torpedo what I said before with my next comment slash question. Again, because this is a data reduction pipeline, we're going to have to be consistent within the pipeline itself; we're going to produce consistent data, which means we have to build the pipeline like a cathedral rather than a bazaar, and make it work the first time.
I
The
intention
was
to
freeze
certain
parts
of
it
and
I
don't
know
how
that
would
actually
fit
in
with
the
kind
of
the
senpai
affiliated
package
status,
because
things
will
get
updated.
Perhaps
ndq
gets
updated
and
the
version
that
we
are
using.
We
don't
want
to
update,
because
it's
working
fine
in
our
pipeline
do
we
suddenly
then
become
an
unaffiliated
package.
D
I
mean
again,
I
think,
similar
to
the
the
kind
of
performance
requirements
that
you
mentioned.
I
think,
like
pipelines,
have
a
extremely
narrow
scope.
I
need
to
do
a
thing
like
a
kind
of
specific
thing
really
really
well,
and
I
think,
like
stuart
mentioned,
like
the
review
criteria
here
are
more,
are
optimized,
for
you
know
kind
of
units
of
a
user-facing
ecosystem
of
packages
for
doing
the
kind
of
like
analysis
on
the
data
products
that
that
pipeline
produces.
D
So,
if
yeah
I,
if
the
the
kind
of
especially
pipelining
software
like
that,
if
it
doesn't
fit
into
the
like
again,
I
think
there's
like
very,
very
legitimate
reasons
that
it
won't
fit
into
these
categories
and
maybe-
and
I
don't
think
it
it
necessarily
needs
to.
But
I
think
the
more
important
thing
would
be
if
the
the
software
that's
produced,
how
whether
it
overlaps
with
the
pipelining
software
or
not,
the
software,
that's
produced
to
kind
of
work
on
the
outputs
of
that
pipeline.
I
Okay,
so
what
I'm
kind
of
hearing
from
this,
which
is
absolutely
fine
as
the
perhaps
the
pipeline
itself,
whatever
sits
in
our
github
account
or
whatever,
and
then
the
tools
that
will
be
developed,
which
is
not
necessarily
going
to
be
by
the
sock,
but
some
of
the
tools
that
are
going
to
be
developed
to
use
the
data
they're,
the
ones
that
should
be
sitting
in
the
sample,
affiliated
repository
package.
D
Yeah, I think that's fair. I guess maybe one thing that I wasn't necessarily explicit about in my slides is that going through this affiliated package review process, yes,
D
It's
an
imposition
of
standards,
but
at
the
end
of
the
day
that
software
is
still
completely
controlled
and
maintained
and
developed
by
the
instrument,
team
or
or
whoever
is
submitting
this
for
for
affiliated
package
status,
like
ai
pi,
for
example,
is
still
completely
under
the
is
completely
overseen
by
lockheed
or
the
aia
team
like
development
of
that
package,
even
though
it's
gone
through
this,
this
kind
of
review
process
and
had
this
stamp
of
approval,
but
that
that
all
the
development's
still
overseen
by
the
ai-18.
I
So
let
me
okay
so
that
there's
a
couple
of
things
there.
Maybe
I
kind
of
like
reiterate
what
I
was
getting
at
before,
and
that
is
what
happens
if
aia
pi
is
using,
perhaps
an
old
version
of
it.
Maybe
it's
using
map,
let's
say
it's
using
map
from
some
pi
and
then
suddenly
you
go
down
this
road
which
you
were
talking
about
yesterday.
I
believe
in
terms
of
deprecating
map
and
putting
everything
in
ndcube.
Maybe
what
happens
to
aaa
pi
at
that
point,
do
you
do
a
reassessment?
D
So
this
is,
this
is
also
another.
Another
detail
I
didn't
include
on
the
slides,
but
is
in
is
on
the
affiliate
package
web
pages
that
I
think
every
yearly
part
of
the
review
process
is
that
each
every
accepted
and
provisional
package
goes
through
a
yearly
review
to
to
make
to
check
for
exactly
that
kind
of
thing
to
see.
D
If,
if
it's
gone
stale
or
for
certain
things,
if
it's
regressed
in
certain
categories,
you
know,
maybe
none
of
the
tests
pass
anymore
or
something
like
that
right
and
then
I
I
can't
remember,
but
essentially
the
idea
is
that
we
would
then
whoever's
doing
the
re-review
would
like
ping,
the
developers
of
aaa
pi,
I.e
me
to
sort
of
say
you
know:
why
aren't
you
up?
Why
aren't
you
up
to
the
standards
here?
D
Is
there
a
good
reason
for
this,
or
maybe
I
mean
it
could
just
be
in
some
cases
that
you
know
someone
doesn't
have
time
to
maintain
a
package
and
that's
okay,
but
that's
the
kind
of
stuff
we
wouldn't
necessarily
want
to
keep
on
the
web
page,
because
that
would
maybe
mislead.
You
know.
People
who
are
are
looking
for
things
that
do
work
within
the
ecosystem.
G
To us it's like: by the way, did you realize that your tests are all failing? It might be beneficial to your users if we flag this for you, to try and help. So I think it's supposed to be more of a collaborative exercise, not a judgmental and punitive exercise.
D
Exactly
yeah
and
that's
the
we
I
think
we
tried
to
do
everything
we
could
to
to
ensure
that
this
this
this
whole
process
does
not
come
across
the
way.
It's
not
like
a
public,
shaming
or
anything
like
that
to
say
you
know:
why
isn't
your
software
meeting
these
categories
or
why
are
you
reading
all
these
categories?
I
Sure. Well, within our requirements we've stated that we will follow the PEP 8 standards, and, not in the requirements but in other documentation, we've said that we're also going to be following the SunPy standards as well. So we are kind of paying attention to your web page. And just a general comment, to maybe bring the conversation towards how to encourage other instrument teams to get involved:
I
But
we
are
paying
attention
to
your
standards
when
we
and
and
prepaid
both
of
them
when
we're
when
we're
producing
our
software-
and
maybe
that
makes
it
easier
for
us
to
become
an
affiliate
package,
because
our
software
should
have
conformed
to
your
standards
and
that's
something
you
can
maybe
when
new
teams
are
getting
formed,
maybe
you
can
say
hey.
These
are
the
the
community
standards
and
you
you
can
produce
the
community
standards
for
other
teams
to.
G
heliopython.org, yeah, that's it. And there's actually a list there of various different heliophysics packages that want to adhere to similar standards. And, again going back to Craig's question the other day, PyHC tries to be at least one source where a new user can go in and get a sense of the wider heliophysics ecosystem of Python packages.
G
So
senpai
is
part
of
that.
A
number
of
other
packages,
nothing
to
do
with
solar,
but
nonetheless,
helio
are
involved
in
that
so
lost
the
page.
I
Yeah,
okay
and
excuse
my
name,
but
the
these
kind
of
standards
that
you
just
sent
around
this
heliophysics
community.
Is
this
what
some
pi
follows.
I
Sure. And you were talking before about people who perhaps don't have time to maintain their packages; I imagine time is the same as money. We've got money right now to produce our pipeline. We have a nominal two years, and we'll probably get a mission extension, but whether we can get funds in that mission extension for working on software, I don't know. So that's something else which might prevent somebody from passing a future review, and I don't know how that can be handled. I don't know if you could have a section of SunPy for deprecated mission software, which is still in use but no longer conforms to the standards.
C
I
think
one
something
that's
come
to
my
mind
during
this
conversation.
Is
it's
probably
worth
having
a
list
aside
from
the
affiliated
package
list
of
here's,
a
list
of
mission,
specific
calibration
code
that
we
know
exists
in
the
open
in
python
or
even
not,
I
guess,
in
the
open
at
least
here
it
is.
It
doesn't
need
to
come
with
the
same
level
of
recommendation
that
the
affiliated
package
thing
is
but
collecting
I
just
having
somewhere
having
a
list
of
hey
look.
D
I
think
just
from
a
discoverability
perspective,
that's
that's
really
that's
really
important
and
that
may
also
yeah
help
those
instrument
teams.
If
they
see
that
someone
is,
you
know
pointing
out
things
that
need
to
be
fixed
or
ways
that
they
can
be
improved.
I
think
that
could
be
helpful
to
everyone.
B
I
think
as
well
like
that
this
sort
of
stuff
might
be
useful
in
the
longer
term,
much
much
longer
term
conversation
of
like
funding
for
software
for
missions,
the
idea
that
you
can
write
software
once
and
then
just
leave
it
and
that's
going
to
keep
working
and
keep
being
up
to
date
like
we
need
to
start
informing
the
funding
and
the
pis
already
know
this
but
like
trying
to
you
know
having
a
list
of
like
or
having
somewhere,
where
you
can
say.
Well,
listen.
I
Sure, sure, yeah. And if the funding bodies disagree, you can force them to render a messy contour plot on top of an image, which I know you guys are doing; you've got some software there now. But I tried doing that a couple of years ago, and that was a nightmare.
G
I think it's more like it's helpful for users. If you want to put in the effort to adhere to these standards, which ultimately are supposed to be about helping users have a more unified experience, then it's a way to publicize that. But especially for pipeline stuff, there may be reasons you don't want to meet those standards, whereas for analysis code...
I
Well,
it's
like
yeah
yeah.
I
know
what
you're
saying
it's
like.
I
had
to
go
through
the
singular
value
of
valuela
decomposition
and
the
ones
which
are
available
through
numpy,
and
I
forget
the
other
package.
They
were
too
slow.
I
So
I
just
coded
up
my
own
one,
which
was
because
we're
we're
only
using
two
by
two
matrices,
there's
actually
a
you,
can
actually
code
up
a
faster
version
of
this
and
then
I
run
it
through
siphon
and
it
was
three
times
faster
than
the
numpy
svd
or
whatever
it
was,
and
so
that's
a
good
reason
why
we're
using
our
own
version
of
svd,
which
has
been
produced
somewhere
else,
there's
a
reason
behind
it,
and-
and
I
know
that
our
rob
belgium-
they
produced
a
similar
one
for
something
else
in
the
past
and
my
ones
based
upon
that
from
work.
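For context, the two-by-two case really does admit a closed form: the squared singular values of [[a, b], [c, d]] are the roots of x^2 - q*x + det^2, where q is the sum of the squared entries. A rough pure-Python sketch of that mathematics (this is an illustration only, not the PUNCH or ROB code; the speedup described above came from compiling such a routine with Cython):

```python
import numpy as np

def svd2x2_values(a, b, c, d):
    """Closed-form singular values of the 2x2 matrix [[a, b], [c, d]].

    sigma1^2 and sigma2^2 are the roots of x^2 - q*x + det^2 = 0,
    with q = a^2 + b^2 + c^2 + d^2 and det = a*d - b*c.
    Returned in descending order, matching numpy.linalg.svd.
    """
    q = a * a + b * b + c * c + d * d
    det = a * d - b * c
    # Clamp tiny negative values caused by floating-point rounding.
    disc = np.sqrt(max(q * q - 4.0 * det * det, 0.0))
    return np.sqrt((q + disc) / 2.0), np.sqrt(max((q - disc) / 2.0, 0.0))
```

Cross-checking it against `numpy.linalg.svd(A, compute_uv=False)`, exactly the kind of consistency test discussed earlier, is a one-line assertion.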
I
We know what our standard data products are going to be. We know we're going to be using level 0 and level 1, or level 0 and level 3, which are going to be released to the SDAC and linked through the VSO. Perhaps we just build the interface which goes into the SunPy affiliated package, and then maybe there's, within the header section...
C
It's also worth pointing out, as a parallel here, that the DKIST calibration code, or at least the vast majority of it, will be open, but I have no intention of pushing that as an affiliated package.
A
Yeah, so I was going to bring up some of the points that you did at the end of your slides, David. I guess my last question on my slide was: do we actually want to grow the ecosystem right now? That seems kind of counterproductive to ask, but we don't have an affiliated package liaison at the moment. Who here has time to review all of the affiliated packages?
C
We haven't done any of the reviews, due to lack of time. I think at the moment it is a relatively unmaintained corner of the project, and I don't know how to go about fixing that.
D
Well, as Albert pointed out, the ROSES proposal — if funded — will go a long way towards doing that.
I
I
I asked this question yesterday, I think: have any of you actually gone and talked to the people who built SolarSoft? Because there will come a point where you have enough packages or instruments associated with SunPy that any future missions will want to have their mission associated with SunPy.
I
B
I mean, I think it was all just soft money, and Sam Freeland made the scripts and the packages to distribute binaries and text files — I mean, that's it — and then all of the instrument teams just started putting stuff into SolarSoft. As far as I understand, it's all been soft money from the missions. You know, they have some money, they know they need to do it, but it's not specific — yeah, unfortunately there often isn't a software development budget.
B
I
And I haven't used SSW for a long time now — it's been several years. Are there conflicts in there? Do they have standards or not?
B
No, I mean — yeah, it's just a very, very different way of developing things.
B
I don't know, but maybe that speaks towards — I don't say you want to reduce your standards, but not making them too restrictive.
M
Or expertise, actually. You know, not all of us are up to speed on how to program in a SunPy-conformant way, and that might be holding people back from submitting things.
G
M
I mean, I can only speak for myself, and for me it's certainly true. I'm an IDL convert and I now exclusively program in Python, but I know I'm not nearly at the level you are, or Will is. So that's what's had me holding back from submitting things to SunPy.
B
I mean, I don't think that — like, I don't think anybody cares so much about how you program. The standards aren't about "oh, you didn't use this cool single-dispatch thing in Python." It's more that you have tests and documentation, and the function has, you know, variables that make sense and names that make sense, things like that. So maybe we need to advertise that more or something, but there isn't like —
E
C
We will hopefully be very helpful and constructive, and help you fit in with both our style guidelines, and also just give you the benefit of experience: "have you considered doing this this way instead, because of these reasons?" If you're writing your own package and you want it to be affiliated, we're not going to do a code review — we'll look at the code to see whether it conforms to the standards, but the standards are things like —
C
Are you re-implementing stuff that already exists? Have you got tests? Have you got documentation? It's not "is your code PEP 8 compliant and, you know, perfectly formed unicorn code?" We're not going to do a code review on an affiliated package in the same way that we do on a pull request to one of the packages we maintain.
M
Yep, and that's understood, and I'm promising to be better about it in the future. I mean, for me it's the latter — it's an instrument affiliated package — and I'm basically just trying to learn the ropes: should I do it through sunkit-image, or should I do my own package, or what should I do? And it became clear very quickly that I'm not up to speed on how to do my own package. So that's where I was going — more towards
M
the sunkit-image route. And I didn't want to imply that you're judgmental about code — on the contrary, actually. The website also makes it very clear: we want your contributions, don't be ashamed, please contribute, we all need the software. It was just trying to find an explanation as to why people have been hesitant to submit things.
E
G
I
G
Well, I think what we want ultimately is to get to a critical point where people see the benefit of them, and they're not like "oh, in order to get into this club I see all these other benefits, but in order to get those benefits I have to pay the membership, which is these standards." Hopefully we get to a point where people see it's great having docstrings that are well written, so when I go back to the function in a year's time even I can remember what I did.
G
You know, like how it works. So hopefully we get to that point. And, just to back up what Stuart said: in my experience there's no better way to learn than to dive in at the deep end and go through a gauntlet of code review, and realize that you know far less than you thought you did. It's a great way to learn fast and to progress. But that can be scary.
G
H
But yeah, like — what's —
H
It's like that thing — especially when you're coming from the researcher's side and you write code for your own analysis: "yeah, but I'm not a software developer, I don't know how to contribute." For my whole PhD I was using sunpy, finding bugs, and just never contributed. And then it was really when I started working with you — sitting at the desk, you know, "open an issue" or, you know, "contribute code" — that you go "oh, it's actually just that simple."
H
You can just do that, and then you can walk through it step by step. I think the key thing there was knowing you and being — not hand-held, but almost hand-held. And maybe this is something we need to consider: having, like, a newcomers' liaison doing that hand-holding — "oh, you do this cool thing, you want to contribute, this is how you do it." Everyone can do it, and you learn by starting small. But it is —
M
Agreed. In terms of — just from my personal experience — what could be improved to encourage people to submit: maybe the visibility of sunkit-instruments could be a little bit better. I personally found it by accident, looking through the sunpy documentation. I don't know necessarily the route you could go there — maybe a SolarNews announcement or something like that — but for me personally, I found it by accident.
G
G
G
I was actually going to say — I think at some point in the schedule we were supposed to talk about candidate affiliated packages. I wanted to mention something about sunxspex, but I noticed on one of the slides that sunkit-instruments wasn't listed.
G
Surely that should be aspiring to be an affiliated or sponsored package? It's basically —
D
G
C
N
C
A
Yeah, just so — we're five minutes into it. So if people want, we can go for another 20-25 minutes then break and come back in half an hour, or we can carry on. If people want to carry on talking throughout, I won't prevent you.
G
I just wanted to quickly say — and I can say this in 30 seconds — just about sunxspex, for solar x-ray spectroscopy, to let people know: a couple of weeks ago we had a re-kick-off meeting. Shane and Laura were there; Chris Cooper, who is a student at Glasgow, as well as Iain Hannah and myself, were there. So just to let people know that there's a small effort sort of trying to get that going again, and its relationship to stixpy —
G
You know, it's sort of a good motivator, and that package is there to provide physics-based models or functions that are useful in that kind of analysis. And so we discovered that we have functions that actually do work and can be used for analysis — I think Bin Chen actually —
G
Yeah — and that was actually a new discovery for us as part of that meeting, which is exciting. It is not yet released, but we're going to work towards a 0.1 release, and I think sunxspex, once it does have a release, should be a candidate for — if not a sponsored, then hopefully an affiliated — package.
G
So that's something that's coming down the line. It's not ready for that yet by any means — it has chronically little documentation at the moment, for example — but I think it should be an aspiration for us to get there.
C
C
C
G
A
So I just want to welcome everyone back for the second half of today's SunPy coordination meeting. The topic of today — well, of the first half — will be Fido, and then, if we have time left over, we will be talking about time series. Note-takers or moderators? I think, Shane, you're going to be talking today, so you won't want to take notes.
A
A
A
A
A
A
So, for people who aren't aware, Fido is a unified data searcher and fetcher system. I forget why Fido was named Fido, but it was named Fido. Basically, it allows us to have a unified API that will search and retrieve data from various clients, which in turn interface with other, external APIs.
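The "one API fanning out to many clients" idea can be illustrated with a minimal, self-contained sketch. Note this is an analogy, not sunpy's actual implementation; the class and method names (BaseClient, can_handle_query, UnifiedDownloader) are invented for illustration:

```python
class BaseClient:
    """A Fido-style client: says whether it can answer a query, and searches."""
    def can_handle_query(self, **query):
        raise NotImplementedError
    def search(self, **query):
        raise NotImplementedError

class ArchiveAClient(BaseClient):
    # Pretend archive A serves two instruments.
    def can_handle_query(self, **query):
        return query.get("instrument") in {"aia", "hmi"}
    def search(self, **query):
        return [f"archive-A/{query['instrument']}/file.fits"]

class ArchiveBClient(BaseClient):
    # Pretend archive B also serves one of them.
    def can_handle_query(self, **query):
        return query.get("instrument") == "aia"
    def search(self, **query):
        return [f"archive-B/{query['instrument']}/file.fits"]

class UnifiedDownloader:
    """Aggregates results from every registered client that claims the query."""
    def __init__(self, clients):
        self.clients = clients
    def search(self, **query):
        results = []
        for client in self.clients:
            if client.can_handle_query(**query):
                results.extend(client.search(**query))
        return results

fido = UnifiedDownloader([ArchiveAClient(), ArchiveBClient()])
```

Note that a search for instrument="aia" here returns hits from both archives — exactly the duplicated-results situation the meeting goes on to discuss: two clients can legitimately claim the same query.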
A
The JSOC client obviously calls through drms to make queries for SDO data — AIA and HMI data — and obviously our VSO client is a direct call to their SOAP API.
A
And then we have a bunch of other clients, which are essentially what we call generic clients: they take a URL, and then we scrape a server to fetch you all the relevant files that you were after. We definitely are very limited sometimes by some of these servers. Obviously we have a whole bunch of them in core from, I think, years ago now. I don't know — do you know how old the clients are, Stuart?
A
And as time has gone on, we've added more and more clients, whose names I've listed — I might not go through them, because I don't think it's necessarily relevant. But this has brought us to a point where we have refactored Fido so that you inherit from the GenericClient — or, yeah, the base client.
A
The point is it allows other people to write clients that aren't necessarily in core. Before, all the clients had to be in core; now they can be anywhere, and you just inherit from the data retriever client — the GenericClient. For example, we now have clients in radiospectra that Shane has been lovingly writing.
A
Obviously, Stuart has been writing the DKIST client, which is in the user tools. I'm not sure if I'm missing any — I think David worked on PFSS support, no? And there's a Solar Orbiter client — the SOAR one, for Solar Orbiter.
A
A
Some of our clients now return repeat data from the VSO, so we are now overlapping with VSO results in a way that might be unnecessary. For example, if someone comes to us with two sets of results — one from, say, that client and one from the VSO client — we can't tell them what the difference is. No one has sat down with our clients to work out —
A
A
And I think that would be, I guess — we could argue — the most default advice we could give to people in this case, for this problem. So we've got a bunch of problems, with solutions to try, and I'm going to walk through them one by one. Obviously, I think in Shane's case, for radiospectra — I forget what client it was that also returns VSO data.
A
A
Yeah — so obviously, with our last release of radiospectra that client came out. So if you were to import that client, or you were to do a Fido search for time series — if you have radiospectra installed, Fido will automatically pick up the registration and give you dual results, which obviously then allows you to download both if you want to.
A
C
As soon as the interpreter has seen it, right — so you have to import it. We could consider using something like setuptools entry points to auto-import things.
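The "the interpreter has to have seen it" point — that a client only shows up in Fido once the module defining it has been imported — can be sketched with a registry populated at class-definition time. This is an illustrative analogy, not sunpy's actual code; an entry point would just automate the import:

```python
CLIENT_REGISTRY = []

class BaseClient:
    # __init_subclass__ runs the moment a subclass's class body is
    # executed -- i.e. when the module defining the subclass is imported.
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        CLIENT_REGISTRY.append(cls)

# Before any client module has been imported, the registry is empty.
n_before = len(CLIENT_REGISTRY)

# Executing the class definition (which is what importing its module
# does) is enough to register it -- no explicit registration call:
class LyraClient(BaseClient):
    pass

n_after = len(CLIENT_REGISTRY)
```

A setuptools/importlib.metadata entry point would let Fido discover and import such modules automatically at startup, instead of relying on the user's explicit import.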
A
L
A
The problem — well, overlapping with VSO results: obviously we have two clients — we have a custom sunpy client that returns results that should be similar to the VSO's. That's problem number one. Problem number two is that if we write a custom client — and this happened with the SRS client — and the data or the URL changes upstream, people come to us, because obviously we are serving out that data, and we don't have the contacts to fix those kinds of problems.
A
That happened with the SRS client: we actually had to change the URL for the SRS client completely, because they deprecated the old URL, and we didn't find out until Monica came to us with a problem when she was searching for some data. And then we found out that we had been serving out-of-date and potentially uncalibrated or uncorrected data.
A
A
A
A
G
C
D
C
F
I was going to say — I guess the root of the issue is that the VSO serves a lot of data sources, but doesn't serve all of them. So there are instances where someone wants data that the VSO doesn't support yet, but that means we have to write a client to go directly to that source.
F
F
I
In developing the PUNCH SOC, we are actually going to be making sure the VSO distributes the PUNCH data. Will that automatically get forwarded on to sunpy? And perhaps you can recommend future developers go down a similar route, so it avoids —
I
We're going to have SDAC as our primary data repository, and we're going to have a local data repository, but we're going to use VSO, and the link — they're going to link to, I guess, SDAC or our database, exclusively. I don't know what else is available, but at this moment, I guess, yes.
C
C
Steven and some other people wrote it, and it scraped the level 1 and level 2 SUVI files, and it works. It gets rate-limited occasionally, but it handles it. And six months or so later — however long it was — the VSO started serving some SUVI data. Level 2, I think — level 1? Okay, level 1 data; I knew it was one of the two, I couldn't remember which. And if you now do a SUVI level 1 search through Fido, you get both, and the results are different, and we can't tell you why.
M
E
L
So, Stuart, can I just chip in? That brings up another point: for example, when we talked initially with Dan Seaton about this, we were explicitly asked not to serve the level 2 data, because it wasn't regarded as being a product of suitable scientific quality. And so that is why we will only serve the level 1 data, until they do a reprocessing or say that the level 2 data is suitable for serving.
C
L
So, I mean, specifically there are ways that we have been asked to query specific data providers. Like, you know, there's a specific API for Kanzelhöhe data that we use. Ed is busy writing the TAP sort of interface for the VSO, so that we can talk to the various ESA missions — because that's the way that they actually want us to do it.
L
You know, we do try to actually talk to the data providers. So when we have a problem, we always — well, nearly always — have somebody that we can talk to about issues that we're seeing.
C
They provide the service of actually working with the data providers to get all the different sources of data indexed. Which makes me think we should drop the SUVI client completely, as an example, and extend that to: unless the VSO turn around and go "we're not going to index that because of X, Y and Z," we don't put a Fido client for it in core.
B
H
Yeah, building on that — I mean, you know, the way the clients are written, we have notes with them saying, like, "if you're using this data, this is where it comes from, and read this readme file from the data source." So it's really putting it in the user's hands to be like —
F
L
Yeah, there is also the point that all that data you can go and find on the web — you can search for their site. But for some of the other providers, we were asked not to spider their site, for example — though I've known that other people have actually gone and spidered their site. So we're trying to be responsive both to what the community wants, in terms of "hey —"
L
"I want this data regardless" — but also to the data providers when they're saying "hey, please don't do this to us," so we don't. And that may mean that you have to go to their site and download the level 2 files, you know, individually. Or if somebody scraped it for you, then you're all right, I guess. But when we're asked specifically not to do these things, then we try to play nice with the data providers.
B
I
That's — I would also say that it's kind of risky saying "don't put it out there," because SUVI is kind of a special case, where it's going to be an operational mission, and as scientists we want to use that data. But, you know, let's not tell them to go and hide that data, because they could do it. I'm not sure what their science funding is now, but it's primarily being used for space weather operations.
C
I
C
B
B
You know, some clients or something that, if someone wants to go and do something, they could — but the default standard clients are, whatever, VSO, JSOC/DRMS, and whatever else we can't get from the VSO. Like, I'm not pro having millions of clients; it's just too hard.
H
H
Oh, these are right — yeah, because another example is RHESSI at the moment, right? You search for RHESSI data and it provides you results from two clients: one coming from the VSO and one from our client, and they're two different forms of data. The form that we have in our client isn't available through the VSO.
C
I can't remember what version we changed this in — it might have been 2.1, or it might have been earlier; I think it was earlier. What it used to do was: if a client that wasn't the VSO returned results, it wouldn't search the VSO. But now it searches all the clients that say "hey, I know how to handle that query" and returns all the results.
H
C
M
B
Yeah, but that actually brings up a very interesting question that I started to come across when I was doing the radiospectra clients: you know, we have attrs — a.Instrument, a.Detector, and a few other things — and in radiospectra they mean a specific thing, like an instrument is the instrument that detected it, and stuff like that.
B
But I'm not sure if that carries across to, like, the SOAR client that David implemented, and I'm not sure if it matches the normal file clients. So it's about how we maintain some semblance of a standard across what are different packages, but using the same Fido.
G
B
Please, someone from the VSO correct me if I'm wrong here, but sometimes when you query the VSO, if you do, like, a.Detector equals "STEREO" and there's no results, in the background they'll actually switch it up to be like a.Instrument equals "STEREO", so that you get some results. But in the radio client it will not do that: if you give it an incorrect query, you get zero results. And so that's kind of the thing, I mean.
L
B
Yeah, I suppose that makes it difficult to write, like, a data model — as Stuart said — that has semantic meaning for certain terms.
C
A
F
F
Oh, sorry — with the VSO data model. So actually, one reason for putting the SOAR client outside of sunpy is that once Solar Orbiter data is indexed by the VSO, I expect the data model for querying it will be different. So actually, to extend what you said earlier, Stuart: I would suggest that we don't put clients in core that are served by the VSO, or are planned to be served by —
F
A
L
It, to some degree, will be — it's just sort of in the stages of being about to undergo a complete rewrite, so —
C
L
That will be in there. Ed is working on the TAP stuff to interface with ESA, so once that is done, then LYRA will probably be added. We only serve SWAP from there at the moment.
A
A
Remind me what "Norh" is — it's the radio, the Russian radio spectrograph?
C
And just as we've fixed it — and SUVI we have already discussed — so we will be talking about what we're pulling, at minimum, for GONG and XRS.
C
K
F
C
C
It's confusing, but at least it is, to some extent, in your face.
C
Is it more or less user-hostile to not have it return the results at all, and make you go and discover the fact that there's this magic import you can do that makes it also return SUVI?
B
I have a question that's kind of adjacent to this, because one of the goals for this session was to decide when a new client should be written and accepted. So, like, if someone was to go and write a nice, beautiful, pretty client that interfaces with the ESA servers, and can provide maybe slightly different search functionality to what the VSO can — because they have different background data models — would we not take that, because it would return the same, and possibly different, data? Yeah, I mean, that's going —
B
I
A
That's why I like my heartfelt idea: okay, we have VSO and JSOC in core, because they obviously provide the best two sources of solar data, and then everything else we just shuttle off somewhere else. And then we don't need to worry about what goes in core, because we are only going to serve what I guess you'd call multi-purpose clients, which is what the VSO and JSOC are. Everything else is essentially one instrument with one level and everything.
G
Isn't the point of Fido to be, like, you know, one sort of entry point — to sort of sweep up all possible data sources, not just the VSO or JSOC?
F
I — especially with JSOC — every time I go to their web page, I'm bamboozled by what I should do to get the data I want.
G
I'm not suggesting you should have to go to the website and do it manually. I'm just saying: why not just have two objects — a VSO Fido and a VSO-and-JSOC Fido, or something; I mean, obviously you wouldn't name them that. But the point is that there are all these different data sources.
B
B
G
A
A
The issue with using Fido is that if you were to go for SUVI data, and you accidentally download from the SUVI client instead of the VSO by mistake, and you process all this data — which, for whatever reason, turns out to be wrong, or the wrong level, and you didn't realize — obviously that's partly on the user, but it's also on us, because we are providing direct access to this data that, had you taken it through the VSO, would have been fine.
C
G
Is the point of Fido, then — or of these clients — to come up with the one data source to rule them all sort of thing? Like, we will relieve users of the responsibility of knowing about the data from the instrument, and all you have to know is how to use Fido, and we promise that we'll give you what you want, and you don't ever have to worry about talking to instrument teams or knowing about the nuances of different levels?
G
B
I'm just going to say: sometimes even the providers don't give you the right data. I know that there's an issue with MDI data — I can't remember where it was being served from right now, maybe it's SDAC — where the data was just wrong. It was outdated, and they were serving it up for ages, and it was genuinely wrong. So that happens even with data providers; sometimes mistakes happen.
H
If you were to move it to something like sunkit-instruments or something — I mean, how would that look? Is it just that you'd have to import sunkit-instruments and then it would work the same? And maybe that makes sense for those instruments specifically — like a LYRA client; you know, if you want something more niche that has its own caveats associated with it —
H
You'd need to go and use sunkit-instruments, and that will kind of open the door to allowing more weird and wonderful clients being added, because that's okay — they live in that instrument-specific package. But it would still work the same for a user; they'd just have to import it, and then they'd have the awareness that they're not getting it from the VSO or from JSOC.
C
Well, that's up to how you lay out the sunkit-instruments tree, right? If the sunkit-instruments top-level dunder init file imports the client, or triggers an import of the client, it will be registered with Fido. If the user has to do "from sunkit_instruments.lyra.fido_client import lyra", or "import sunkit_instruments.lyra.fido_client", then that's what the user has to import to register it with Fido.
G
Yeah — it just seemed very opaque. You could end up with a situation where the LYRA one gets imported by default and another one doesn't, and you can say that's the responsibility of the instrument's maintainers, but from a user's point of view, suddenly maintainers are making decisions for them regarding something that they believe they're using directly from core.
H
G
G
L
G
Well, no, it's not a bad example from the point of view of a user, in the sense that you've gone to Fido, right? You haven't gone to, like, the VSO client; you haven't gone "import vso" — "I want to use the VSO and know that I'm using a specific data repository or data server." You know — you —
H
H
What's happened, yeah, right — so then you can have the dkist package, or whatever, as its own affiliated package, and to get DKIST data you would have to import that package, and you'd use Fido to do that. And the way you would do it here is you're putting a LYRA client and, like, a NOAA client into sunkit-instruments, because they're too small to have their own little package.
G
To me, from a user-experience point of view, it seems to reduce the power — or, like, usefulness — of Fido, because effectively what you're saying is you have to know to put in a line where you explicitly deal with the fact of this client, versus when I just use the client directly, you know, if I want that.
H
G
C
You know your LYRA data — well, two main things. One: it's the same interface; it behaves the same as all the other things that you might be using for different projects. Two: you can search your LYRA data and your AIA data from the VSO at the same time — if you want to do co-aligned observations, you can use Fido to search multiple sources.
L
C
To some extent because I wanted to prove the point that I could have a Fido client that wasn't in core — it was the first one when I wrote it. Secondly, because I expect the VSO to index it. And thirdly, because you could respond to that question with: why isn't the entirety of the DKIST user tools in core?
C
G
I'd say, if we'd started off with "yeah, we'll just take everything into core," the community would be like, "great, we don't have to worry about stuff." I mean, I don't know. I guess if you have to add some clients and not others in order for them to work — I would imagine that if you want to use the VSO client or the JSOC client, yeah, and then you have to explicitly import the others, then presumably —
G
B
C
C
We could do that in core without moving them out, yeah. In fact, doing that in core would be, like, a three-line change.
G
C
I think we're kind of agreeing, to some extent, that making them optional, while maybe making some stuff more understandable — in fact, actually, the argument for making them optional is effectively: we want to put a roadblock in front of the users so that they have to opt into a thing, which is kind of the only tool we have in the box for asking them to understand it.
C
Right — like, just going back to the SUVI client example: we don't want the user to get confused by the SUVI data, so we make it optional, so that they have to import it. And then, if they've imported it, they've probably read a document somewhere and maybe they'll understand it better — ignoring the obvious point that that's almost certainly false, and they're going to copy and paste the import from somewhere on the internet and not give —
G
C
Secondly: can we not come up with a better technical solution to that problem? Can we not make the results that Fido prints out contain more information about where and what the data is?
M
How about, for each instrument, you define a default client that's being used? And then, if someone with prior knowledge — like, "the SUVI data is not okay from this data source, but it is okay from the other data source" — they have to make a conscious decision to use a different client, and that requires some sort of prior knowledge. And then for all other users it wouldn't require that knowledge, and it would be transparent to use Fido.
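The "default client per instrument, conscious override for experts" proposal could be sketched like this (a hypothetical illustration; the names and mapping are invented, not sunpy's API):

```python
# Hypothetical default-client table: unless the user consciously
# overrides it, each instrument resolves to one blessed client.
DEFAULT_CLIENT = {
    "suvi": "VSOClient",   # e.g. prefer the VSO copy of SUVI by default
    "aia": "VSOClient",
}

def pick_client(instrument, override=None):
    """Return the client name to use for a query.

    Passing ``override`` is the expert's explicit, conscious choice;
    everyone else transparently gets the default.
    """
    if override is not None:
        return override
    return DEFAULT_CLIENT.get(instrument)
```

The design choice here is that the common path needs no extra knowledge, while the alternative source is still reachable — but only by naming it.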
B
B
I can ask the people at DIAS, when they use Fido to search for things, what they understand about what comes back — because some of this, as Danny said, has to be end-user driven. Like, what does someone who comes to sunpy for the first time and wants to download some data and do some cool stuff — what's their experience like? Part of the problem, like with this SUVI example where they've got two sets of SUVI data: is that really confusing to them?
B
Or is it not a problem? Because I don't know the answer to that. Maybe someone else does, but I certainly don't.
H
I think there's even a problem with the fact that it now returns all available data. An example: the GOES client returns all available satellite data — say it returns GOES 14 and 15 for a certain date, whereas before it would only give one. And just from talking to students, they're like, "I don't know which one to use, I'll just use them all." You know — and again, it goes against the documentation and the examples that I think people build off. But again, you're right.
H
G
See — no, I didn't mean to interrupt, but at some point people do need a certain knowledge of the instrument. Like, in that GOES example, you can say, "well, just use the latest one, that's probably the best" — that's one way to go, and it's usually valid. But if you're searching for data like, "oh, but I don't know — I don't know anything about how the instrument works, I just want the data," it's like, you know, "could someone give me a write_paper.pro?"
G
C
C
We're giving people options, right, and we're expecting them to use their scientific judgment to choose. But I feel like we're also simultaneously doing a bad job of describing what we're putting in front of them. So we're asking them to make that decision, but we're giving them limited-to-no information about making it.
F
B
F
C
C
H
G
They should know about the data. Fido should give them a way of getting to the data; then they have a responsibility to understand the different mechanisms that get them there. I feel like that's one of the strengths that Fido has: it just means users don't have to care about that — just point me at the data, and then it's my responsibility to know what I'm doing from there.
H
H
You know, have just an informational page about solar data sources: a little description of the VSO, a little description of JSOC, a little description of each of the instruments that we provide data from and where they come from. So it's not a user guide from a code perspective; it's like an educational guide, I guess. I don't know.
A
We do have an old issue asking to do exactly that, and that would be something really useful. Yeah, so I think, actually, that's maybe something to do on Friday.
H
Yeah, and this kind of builds on that coordinates thing we were talking about today. Let's say we talk about it on Friday as well, and we have a little description about solar coordinates for people that, like...
B
Also, you'd have a little article on data sources in sunpy, which would just be a kind of longer description about what we have and how it's supposed to work. And then you'd have a little article on coordinates in sunpy. These aren't code reference, and they're not user guide either; they're kind of a longer piece about some topic that's really important to using sunpy. Yeah.
C
If we improve the documentation... picking on SUVI once again (sorry, SUVI, I don't mean to pick on you this much), picking on the SUVI client again: say we changed the way that Fido prints the results to make it more clear that a certain set of results has been returned by a specific class in sunpy; that, if you're in a notebook, that's clickable and takes you to the documentation; and maybe we put the first paragraph of the docstring at the top of the results when Fido prints it.
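The "first paragraph of the docstring" idea is straightforward to sketch. The client class and its docstring here are hypothetical, not the real SUVI client's text:

```python
class SUVIClient:
    """Provides access to SUVI level-1b and level-2 data.

    (Hypothetical docstring; the real client's text differs.)
    """

def summary_line(client_cls):
    # First paragraph of the docstring, as Fido could print above the
    # results table to tell the user which client produced them.
    doc = (client_cls.__doc__ or "").strip()
    return doc.split("\n\n")[0]

print(summary_line(SUVIClient))
```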
C
I don't care where it comes from, and if you guys can do that, more people are going to use Fido and sunpy. I guess people are lazy at the end of the day, so you probably just want to have, you know, a disclaimer on the front somehow: we can't guarantee the integrity of the data, or that it will always be available, but we do our best.
C
...not providing the data at the end of the day, right? We're not responsible for the data, because we're not providing the data. However, I do think we're doing a bad job of making that fact clear to people, and one obvious sign is the number of VSO bug reports that turn up on the sunpy issue tracker, as an example of how we're not making that clear, right?
L
I just wanted to make one comment: I guess one thing that people may get a little confused by is that, take SUVI, we won't return the same data to a client at the moment. Because, you know, we've only been asked to serve level 1, but also we've gone through and parsed every file in the GOES-16 and GOES-17 archive, for example, and have excluded all the ones which we know are bad.
L
So that's just one example, but we do treat data a little differently where we can, because with a lot of the clients we try to validate the data. Especially in the beginning, when we're going to create a data provider, we try to parse a lot of the data that's actually there in order to validate it. So we're always going to treat data slightly differently, I suspect.
L
You know, we get blamed for problems with various data providers and their data, over things with which we have absolutely no control whatsoever, and that hasn't changed in 20 years.
Q
We could also push information about the level or the version. For example, if people request a level that's below scientific quality, we could push this other information in a separate dictionary so that people see it as a warning: if you're knowledgeable, go ahead and use it, but otherwise... And that could be some default config for each Fido client, predefined, so that when people ask for a lower version or a lower level than scientific quality, they know they're suddenly getting this stuff and should use it with caution.
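Q's per-client "below science quality" warning could look something like this. The config dictionary, client names, and threshold levels are all made up for illustration; nothing here is an existing sunpy mechanism.

```python
import warnings

# Hypothetical predefined config: minimum processing level that counts
# as science quality, per client.
SCIENCE_QUALITY = {"suvi": 2, "xrs": 2}

def check_level(client, requested_level):
    """Warn when the requested processing level is below science quality."""
    threshold = SCIENCE_QUALITY.get(client, 0)
    if requested_level < threshold:
        warnings.warn(
            f"{client}: level {requested_level} is below science quality "
            f"(level {threshold}); use with caution."
        )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_level("suvi", 1)   # below threshold: warns
    check_level("suvi", 2)   # at threshold: silent

print(len(caught))
```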
I
Yeah, I can actually give you an example. I was in a Solar Orbiter meeting, probably a couple of years ago now, and they were going around the room asking if anyone knew how to use AIA data, and somebody said, yes, I know how to do this. And they shared their screen: they were pulling the data from JHelioviewer, and they were completely unaware that they were using processed JPEG images.
I
People are lazy; I'm lazy, I do the same thing, I'm sure. But you have to have a quick guide and a long guide, and hope that people at least have a look at the quick guide. But there probably does need to be a disclaimer somewhere, just somewhere, saying, you know, where the data is perhaps coming from.
A
And then does that open the second point: that if you were to do that, would we then allow any clients into core?
C
Yeah, because let's say we've fixed the user problem with SUVI by documenting it, and now everybody who does any Fido search is immediately aware of the quirks of SUVI data, by magic. We still have the maintenance question, right, which is: are we, as a project, going to continue to maintain that SUVI client?
G
To me, it comes down to what the guiding parameterization of sunpy's purpose is, of core's purpose. If it's about, as I've heard said before, data access being one of the principal things, then it seems to me that the answer is yes.
C
Or even replicating something; then the users can choose where they get something from.
G
Right, yeah, but if that's not what core is, or if that's too strict an interpretation of what was meant when it was said by whoever said it... So, yeah, to me it comes down to... it's informed by what the point of core is.
B
Well, now that you can have Fido clients outside of core, there is scope for having them somewhere else, so you don't have to have them all there. So, for example, the radio-spectra-specific clients are going to live in radiospectra, because that's where they belong, and it wouldn't make sense to have them in core, I don't think. Wouldn't it? Well, no, no.
G
If someone who wrote the client doesn't want it in core for some reason, then I think that's fine. I know we all have multiple hats on in this case, but, like, with our core hats on we shouldn't insist and try some sort of hostile takeover and say, no, that belongs in core.
B
I think what Stuart was saying was, though, that if you have sunpy installed and you install, let's say, radiospectra or soar, it would automatically register those clients. So now, if you do a Fido search, it automatically does it for you; it registers and you don't have to do an import, versus having to do an import to register.
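The "register on import" pattern being discussed can be sketched with a plain registry and a class decorator. This is only an illustration of the idea; the client names are hypothetical, and the mechanism the project actually discussed (package entry points that register clients at install time, with no import needed) differs in detail.

```python
# A Fido-style registry: any module that defines a client registers it
# as a side effect of being imported.
CLIENT_REGISTRY = []

def register_client(cls):
    """Class decorator: adds the client to the global registry."""
    CLIENT_REGISTRY.append(cls)
    return cls

@register_client
class SOARClient:           # hypothetical client name
    pass

@register_client
class RadioSpectraClient:   # hypothetical client name
    pass

def search_all(query):
    # A Fido-style search fans out over every registered client.
    return [cls.__name__ for cls in CLIENT_REGISTRY]

print(search_all("flare"))
```

With entry points instead, the registry would be populated from installed package metadata, which is exactly the "things in your path you're not aware of" trade-off raised next.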
G
To me, that starts getting down some of the negatives of SSW that we're trying to stay away from, right: you have things in your path that you're not aware of, and very quickly everything starts becoming implicit and it becomes very hard for a user to track what is actually going on.
B
I mean, a specific example: there's a whole slew of little niche radio spectrographs that operate in tiny little backwaters, so there are going to be an absolute nightmare of clients in radiospectra, and I don't think that they belong in core, specifically because you could download a spectrogram but then you couldn't plot it. So, yeah.
H
I think there's a clear line between radio analysis and other random solar data; there is a clear line there. But with other instruments like LYRA or GBM or something like that, they're just being thrown in there, and it's hard to find them, right? Whereas if you're looking for radio data, you can google, you know, 'sunpy radio data' and you come across radiospectra, whereas you might not come across something more obscure. So, yeah, draw the lines a bit harder, I guess. I don't know.
K
Clients... it's a pretty good suggestion, yeah.
A
We have too many... we have too many forms of 'coordination' in the title. Anyway, there's work to be done on improving the presentation of the information that Fido can give you in its wrapper from a search, and I guess the question there is figuring out what we actually want to return. Is just the URL enough? Just printing out the base URL?
C
Clients are an interesting one; there's a big difference between... so people said, if somebody rocks up with a client, we may... like, we would have 'SUVI 2: the return of the SUVI': somebody rocks up with a client for it whose name happens to be Stephen, and...
A
Here's my second terrible slide of the day. So, just to lay out what TimeSeries is: it's basically a reader, that's what it is. It just reads data in and allows you to plot it. I think it was initially created when pandas was like 0.x; no, I think it was before pandas really hit its stride.
A
A lot of... as people have come to us, especially David: usually when TimeSeries comes up, he wants methods on TimeSeries that we have to implement, and a lot of the time we implement them. Obviously our API is technically agnostic to what we use underneath, but ultimately, in this case for reindex, we use pandas, and, you know, if we were to implement other methods that allow people to do other common time series functionality...
A
We
would
probably
also
then
be
using
pandas
and
underneath
we
would
be
tying
again,
the
api
isn't
tied
to
pandas,
but
we
are
essentially
tying
ourselves
to
pandas
and
obviously
a
lot
of
the
properties
on
time.
Series
itself
are
also
pandas
properties.
They
are
one
to
one
they
just
and
they
all
index.
The
original
pandas
data
frame
underneath
and
the
problem
with
pandas
is
that
leap
seconds
exist.
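The leap-second problem can be seen without pandas at all, because the same limitation is in Python's `datetime` (which pandas timestamps are built around): second 60 simply cannot be represented, while the lower-level `time` module will at least parse it.

```python
import time
from datetime import datetime

stamp = "2016-12-31 23:59:60"  # a real inserted leap second (UTC)

# datetime rejects second=60, so any datetime-based stack must drop
# or shift leap-second samples:
try:
    datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
    representable = True
except ValueError:
    representable = False

# time.strptime allows tm_sec in [0, 61], so the field itself parses:
parsed = time.strptime(stamp, "%Y-%m-%d %H:%M:%S")

print(representable, parsed.tm_sec)
```

astropy's `Time` handles leap seconds properly, which is the motivation for the astropy Table discussion that follows.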
C
We have a to_dataframe method on TimeSeries at the moment; that would continue to work in exactly the same way as it works at the moment. It would just be that the .data attribute returns an astropy Table. We could even allow assigning to .data on a TimeSeries to take a DataFrame.
C
Take two: if you take two TimeSeries, you load one from an XRS file and one from a LYRA file and you put them together, or you take two XRS files and you concatenate them, the metadata tracks the headers of those files through those operations. So if you concatenate two files together, the top half of the column has one metadata... it knows the metadata for the time range for that column. Cool, great, thanks.
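The metadata-per-time-range behaviour C describes can be sketched with a small invented class; this is not sunpy's actual `TimeSeriesMetaData` implementation, just the idea that each source file's header stays attached to the range it covers after a concatenation. ISO date strings are used so plain lexicographic comparison orders them correctly.

```python
class TrackedMeta:
    """Toy metadata container: remembers which time range each header covers."""

    def __init__(self):
        self.entries = []  # (start, end, header) triples

    def add(self, start, end, header):
        self.entries.append((start, end, header))

    def for_time(self, t):
        """Return the headers whose time range covers t."""
        return [h for (s, e, h) in self.entries if s <= t <= e]

# Concatenating two files keeps both headers, each tied to its range:
meta = TrackedMeta()
meta.add("2020-01-01", "2020-01-02", {"source": "xrs_day1.nc"})
meta.add("2020-01-02", "2020-01-03", {"source": "xrs_day2.nc"})

print(meta.for_time("2020-01-01T12:00"))
```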
A
Because a lot of the methods... we don't have a lot of methods on TimeSeries. Unlike Map, which has a bajillion methods, TimeSeries has a very specific collection of them, so it wouldn't break TimeSeries in any meaningful way if we were to make that change.
B
Just... this is a stupid question, but is the only way to go from times that have leap seconds to times that don't have leap seconds just to drop the leap seconds? There are no other alternatives? You just have to get rid of them, because you can't represent them in the... yeah, you have to. Correct. Okay, so basically all we have to do to go from an astropy TimeSeries to a DataFrame is just drop the leap seconds, basically.
H
If you wanted, could you then transform that to say what the time would be at Earth, you know, taking into account light travel time? Say you had an observation from PSP, or, say, Solar Orbiter, and you have a time at which that observation was taken and a SkyCoord at which it was taken: could you transform that to what the time would be at Earth, taking into account light travel time, if you wanted to compare it? Why not.
G
Actually, it's basically a pandas DataFrame or Series with these few specific solar cases, and if we just change it we can say, well, nothing that we've developed has changed. But you could potentially say, well, users' code might all break, because, you know, they are doing pandas things on .data.
G
If you want to... I don't know, it's been a long time; I used to use TimeSeries quite a bit, but it's been years since I've really used it properly. Things like that you do in pandas, like fillna or resample, you know, these sorts of things: can you do those directly from TimeSeries now, all that stuff, or...?
B
I mean, in the short term we can change the underlying representation to be an astropy Table, and from the outside the only thing that changes is that .data would be an astropy Table rather than the DataFrame, and you can still use to_dataframe to get your DataFrame and do all the usual pandas things.
G
The biggest argument against that currently, I think, as far as I'm aware (and people are able to tell me I'm wrong), is the metadata handling. So, if metadata became its own object, that, well...
G
It's slightly different to how NDCube was envisioned. It doesn't mean that it's incompatible, but to me, anyway, I just haven't really convinced myself that there is an inherent value in the time-specific objects that currently exist, over trying to create, like, a time cube, you know, a 1D time...
G
Well, now we're going back to hypercube, like Albert's original idea of hypercube, which, depending on what slice you take of the cube, depending on the axes that it has, actually returns not just an NDCube but, you know, higher-level objects like Map or TimeSeries or Spectrogram or something.
B
A tiny little... yeah, that's how it always goes: if you stack, like, a whole bunch of AIA images, co-align them, and then you want to take a time series out of an individual pixel to do, like, a Fourier transform or something... I mean, obviously you can do that, but it just seems odd that you have one representation when it's a time series from a map and a different representation when it's a time series from, I don't know, GOES.
G
I mean, the other wild, wacky suggestion is that you register methods on an NDCube based on its physical axes. So it knows that, oh, this has got a time axis; therefore you want all these time-related methods. Again, this is, I think, something along the lines of Albert's idea of what a hypercube might be.
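The "register methods by physical axis" idea G floats can be sketched in a few lines. Everything here is invented for illustration (the axis names, the registry, the `Cube` class); it is not NDCube's actual API, just a demonstration that attribute lookup can be gated on which axis types an object carries.

```python
# Registry mapping physical axis type -> methods that make sense for it.
AXIS_METHODS = {"time": {}, "space": {}}

def axis_method(axis):
    """Decorator: register a function as a method for cubes with that axis."""
    def decorator(func):
        AXIS_METHODS[axis][func.__name__] = func
        return func
    return decorator

@axis_method("time")
def resample(cube, cadence):
    return f"resampled to {cadence}"

class Cube:
    def __init__(self, axes):
        self.axes = axes

    def __getattr__(self, name):
        # Expose a registered method only when the cube has that axis type.
        for axis in self.axes:
            if name in AXIS_METHODS.get(axis, {}):
                func = AXIS_METHODS[axis][name]
                return lambda *a, **k: func(self, *a, **k)
        raise AttributeError(name)

ts_cube = Cube(axes=["time"])
img_cube = Cube(axes=["space"])
print(ts_cube.resample("12s"))  # available, because the cube has a time axis
```

A spatial-only cube would raise `AttributeError` for `resample`, which is the hypercube behaviour being described: the available operations follow from the axes.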
C
That's the bit of functionality you lose: if you really want to operate in place on a DataFrame... To be fair, we never really supported that anyway, because if you operate in place on the DataFrame, the metadata object isn't necessarily going to be valid anymore. So why ever go back to the TimeSeries object?
F
But yeah, well, I don't know. It sounds to me that pandas has issues that are probably not solvable in the short to medium term, such as leap seconds and units and metadata, and those are problems better solved in astropy TimeSeries.
F
Yeah, so, as in: in sunpy 3.1 we could add a resample method that uses pandas to resample, but then if in 4.0 we're like, oh, we're going to use astropy underneath instead of pandas, then I imagine it would be a lot of work, if not impossible, to make sure the results from our resample method are identical using pandas or astropy.
A
In the morning we'll be talking about the database and CompositeMap. The concept map? Yeah, CompositeMap.