From YouTube: #t20-gostin: Pre-hack meeting 2nd weekend
Description
Transform2020 Hackathon project #t20-gostin.
Meeting 12 June 15:00 UTC: Second weekend pre-hack meeting
B: Great, thanks everyone for showing up. I think — in general, thanks for joining the gostin for a second weekend. I never knew people would come along, so I'm still [amazed], as my original intent was just to be happy if anyone did some cross-showcases using one tool with another. But I think this meeting is particular because [inaudible].
D: I'm sure everyone is aware of the Open Geospatial Consortium. They've published a few newer specs recently with 3D data types. I think there's a surface thing — I haven't really delved into it, so it might be specific to their WaterML sort of initiative, so surface-hydrology things — but there might be some overlap where you don't have to implement your own types and you just encode them.
D: These are various things, and OMF aside, I was thinking of existing data types that are supported by the OGC. You get all the synergy — or some such word — with the rest of the GIS world; that's just something to consider. And I think you can also save spatial data that you can read into NetCDF files, so you might gain some wins there by using NetCDF as your data format but using these data types, structures, and specs.
D
It's
like
everyone,
it's
like
governments
and
like
ESRI
and
yeah,
it's
huge
so
GC,
it's
like
it's
like
if
you
use
QGIS
QGIS,
is
like
a
reference
and
has
like
reference
implementations
of
the
various
web
GIS
standards
out
there
and
there's
certain
expanding
into
3d
as
well
right
as
well
I,
like
a
lot
of.
E: Yeah, I feel like all of that sort of has a similar goal. I don't know — it's a lot fresher, and it was mainly — well, the M in OMF is mining — so it was mainly something for the mining industry to get together, agree, and solve some of the interoperability issues. And so, I guess, yeah, I don't know — it has...
E
Some
like
block
models
are
important,
so
it
has
some
sort
of
3d,
3d,
first
thinking,
I
guess,
which
is
maybe
nice
I,
think
that
there's
since
it's
newer
and
there's
less
like
less
people
involved,
that
probably
means
it's
a
bit
more
flexible
for
better
or
worse,
but
also
I.
Think
like
less
people
involved,
is
also
not
a
huge
advantage
when
you're
looking
for
support
across
with
their
libraries,
yeah.
E
But
so
I
guess
I
updated
that
markdown
page
a
little
bit
with
some
thoughts
the
other
day,
but
yeah
I
think
I
think
there's
sort
of
three
parts
to
it.
So
there's
like
the
data
structures
I
feel
like
that's
the
most
important
part.
So
that's
like
just
how
shapes
are
represented,
how
you
represent
3d
objects
with
attributes
on
them.
Basically,
and
then
there's
like
the
file
serialization
itself,
which
is
sort
of
significantly
less
important.
In
my
mind
like
that,
just
gives
you
a
file
but
like
for
the
first
version
of
lmf.
E
The
serialization
is
really
strange
and
yeah.
It's
just
like
an
arbitrary
binary
thing,
yeah
and
then
the
Python
library
is
just
that's
just
like
the
Python
client
implementation
of
it.
That's
the
only
client
library
that
exists.
I,
guess
yeah,
there's
like
been
some
work
on
like
a
c-sharp
version,
but
that's
only
halfway
done
and
not
in
great
shape.
D
Yeah,
a
lot
of
this
is
sort
of
self
I.
Do
you
see
sighs
I
kind
of
posted
after
their
numbers
so
that
you'd
be
sort
of
talking
about
it's
like
implementing
at
enough
schema?
Where
you
I
say
I
pick
any
like
I,
don't
know
like
a
sample
is
a
3d
point
and
it
will
have
an
attributes
or
something
I
like
so
you
can
like
use
the
existing
standard
sort
of
easy
to
implement
these
various
data
types.
That'll
get
you
all
this
utilization
for
free
all
the
C
C++
Python,
whatever
in
wrap
right,
yep,
Web,.
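The schema idea described here — declare the data types once, and generic serializers come for free — can be sketched roughly as follows. This is a minimal illustration with invented names (`PointSet3D` is not any library's actual class), using plain dataclasses and JSON just to show the principle:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical sketch: a "point set" element described as a plain schema.
# Because it is declared data with no behaviour, generic serializers work
# on it for free -- JSON here, but the same idea applies to any format
# with a dataclass/dict bridge.

@dataclass
class PointSet3D:
    """A 3D point sample with per-point attributes, as discussed."""
    vertices: list                                   # [[x, y, z], ...]
    attributes: dict = field(default_factory=dict)   # name -> list of values

points = PointSet3D(
    vertices=[[0.0, 0.0, 0.0], [1.0, 0.0, 0.5]],
    attributes={"porosity": [0.12, 0.18]},
)

# Serialization falls out of the schema: no custom writer needed.
encoded = json.dumps(asdict(points))
decoded = PointSet3D(**json.loads(encoded))
```

A real implementation would swap JSON for whatever binary or netCDF encoding the group settles on; the declared types stay the same.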
C: ...services. I just want to try to form an idea here. So do you think that we are going to be able to create something new in terms of data structures — well, or agree on some data structures — without creating a standard? Because I'm not looking to create yet another standard, as I guess everybody is. And the point is, I would really just go to a lower level.
C
Yes,
like
ok,
whether
we
are
going
to
communicate
between
our
libraries
are
going
to
be
binders,
that
the
frames
for
unstructured
data,
an
x-ray
for
a
structured
data,
and
then
we
can
wrap
all
of
that
in
all
the
standards
and
whoever
wants
to
use
those
data
for
different
libraries
and
so
and
I
think
that
that
is
quite
compatible
with
a
all.
They
open
my
name
format
and
and
also
what
you
are
sewing
now
and
you
so
for
me.
The
more
I
was
thinking
this
week
about
this.
C
The
more
I
was
agreeing
with
Bane
about
this
idea
of
Pi
be
stopped
in
the
bridge.
I
still
agree
that
we
should
put
vtk
into
the
game,
but
is
true
that
the
structures
of
Pi,
Vista
or
the
structures
of
BTK
are
all
which
you
need
to
visualize
on
the
3d
data
that
you
can
imagine
it.
That's,
oh
well
at
least
I
can
at
this
moment,
so
we
are
able
to
replicate
that
without
having
to
install
pity
K
mitigate
tables,
because
by
this
type
in
that
at
the
moment,
is
what
is
using.
C
So
they
have
these
poly
data
objects.
That
is
an
ax
structured
data
and
they
have
this
track
structure
grades.
So
if
we
can
mimic
pretty
much
one-to-one
that
years
with
X
array
and
with
understa
frames,
then
we
should
be
able
to
import
data
and
visualize
it
and
then
export
it
to
different
libraries
for
that's.
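The one-to-one mimicry being proposed can be sketched without any of the libraries installed. Below, plain dicts and lists stand in for the two shapes of container: a vertex table with attribute columns (the pandas-DataFrame shape, mirroring PolyData) and coordinate vectors plus an n-d value block (the xarray shape, mirroring a StructuredGrid). All names here are illustrative, not any library's API:

```python
# Unstructured ("PolyData-like"): a vertex table plus per-vertex attributes,
# exactly the shape of a pandas DataFrame with x, y, z and data columns.
poly_like = {
    "x": [0.0, 1.0, 0.0],
    "y": [0.0, 0.0, 1.0],
    "z": [0.0, 0.5, 0.2],
    "density": [2.3, 2.7, 2.5],
}

# Structured ("StructuredGrid-like"): coordinate vectors plus an n-d block
# of values, the shape of an xarray DataArray with dims ("z", "y", "x").
grid_like = {
    "coords": {"x": [0.0, 1.0], "y": [0.0, 1.0], "z": [0.0]},
    "values": [[[1.0, 2.0], [3.0, 4.0]]],   # shape (1, 2, 2) = (z, y, x)
}

def n_points(table):
    """Number of points in the unstructured representation."""
    return len(table["x"])

def grid_shape(grid):
    """(nz, ny, nx) implied by the coordinate vectors."""
    c = grid["coords"]
    return (len(c["z"]), len(c["y"]), len(c["x"]))
```

Converting either shape to the corresponding PyVista object would then be a thin, mechanical step.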
G
You
know
X
Y,
Z,
X,
Y,
&
Z,
so
either
we
need
to
come
down
on
having
a
fairly
prescriptive
way
of
doing
that,
which
then,
by
default
our
libraries
work
with
us.
So
you
know,
when
fetch
under
writes
out
to
netcdf.
It
writes
it
out
in
a
particular
way
that
doesn't
require
anything
fancy
on
gem
pies
in
all.
We
need
to
allow
for
more
flexibility
in
something
like
gem,
pile
where
you
know
you
can
assign
what
you
want,
and
maybe
we
do
both.
G
But
if
you,
the
I,
think
as
a
structure,
netcdf
is
fine
and
X
array
works
really
well
with
that
and
I
think
that
I'm
not
sure
how
easy
that
is
to
get
into
into
parvis
tow
or
something
similar.
But
you
know
if
we're
doing
data
fro,
if
we're
doing
data
frames
and
we
can
decide
what
they
stay
is
on
disk,
but
if
we're
doing
settling
on
data
frames
and
x-rays
as
into
as
this
is
where
the
internal
library
we're
using
then
stuff
should
be
fairly
compatible.
But
then
we
need
to
look
at
the
metadata.
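One way to make "fairly prescriptive" concrete is a tiny convention check that both writers and readers agree to run. The required names and the `convention` version tag below are invented for illustration — the point is only that a compliance check is small and cheap once the layout is fixed:

```python
# Hypothetical sketch of a "prescriptive" on-disk convention: every dataset
# must carry x/y/z coordinates and say which convention version it follows,
# so any library in the chain can verify compatibility before reading.

REQUIRED_COORDS = ("x", "y", "z")

def follows_convention(dataset):
    """Return a list of problems; an empty list means the dataset complies."""
    problems = []
    for name in REQUIRED_COORDS:
        if name not in dataset.get("coords", {}):
            problems.append(f"missing coordinate: {name}")
    if "convention" not in dataset.get("attrs", {}):
        problems.append("missing attrs['convention'] version tag")
    return problems

good = {"coords": {"x": [0], "y": [0], "z": [0]},
        "attrs": {"convention": "subsurface-0.1"}}
bad = {"coords": {"x": [0]}, "attrs": {}}
```

With xarray the same check would run over `Dataset.coords` and `Dataset.attrs`, which map directly onto the dict layout used here.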
D
Yeah
also
I
know
I'm,
like
really
talking
about
the
OTC
I
like
to
hear
it
to
everyone,
to
read
that
so
I'm
not
saying
like
you're,
definitely
doing
an
intermediate
step.
First,
where
you
get
something
working,
that's
a
bit
better
than
what
we
have
right
now,
but
I
think
it
makes
sense
to
implement
the
OMS
standard
as
like
a
schema
or
like
well-known
practices
using
the
ODC
data
types.
It's
also
a
serialization
problems
and
then
I
also
I'm,
not
sure
where
it's
posted
at
this
point,
but,
like
maybe,
was
on
a
pipe.
D
You
know
calls
that
are
exposed
like
whether
it's
an
X
ray
object
or
8-feet
array
object
or,
and
it's
data
frame
object.
And
if
you
sort
of
keep
to
that,
like
super
set
of
common
functions,
you
can
write
basically
an
agnostic
interface,
where
you
don't
really
care
if
it's
necessarily
exactly
I
pandas
dataframe
or
if
it's
a
geo.
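The "superset of common functions" idea is ordinary duck typing: the consumer calls only methods that pandas, xarray, and NumPy objects all expose, and never checks concrete types. A minimal sketch, with `TinyColumn` as an invented stand-in for any such array-like:

```python
class TinyColumn:
    """Stand-in for any array-like exposing .mean() -- the point is that
    summarize() below works for this, for a pandas Series, for an xarray
    DataArray, or for a NumPy array, without knowing which one it got."""
    def __init__(self, values):
        self.values = values
    def mean(self):
        return sum(self.values) / len(self.values)

def summarize(obj):
    """Agnostic consumer: relies only on the shared interface."""
    return {"mean": obj.mean()}

result = summarize(TinyColumn([1.0, 3.0]))
```

Anything with a conforming `.mean()` (and whatever else the agreed superset includes) can be passed straight through such an interface.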
B: Together — I mean, I'd be keen to hear what Rowan has to say, as someone who has built a framework before. Because I think, what we — you know, as others said, it has to be, it should be, as minimal as possible. We have a lot of things that could be incorporated; I think SimPEG is sort of an excellent example of that. And then, after Rowan — maybe Ben has also joined, so he might give a pitch, yeah.
H: I think everybody here — we want to improve the interoperability in the geoscience community, and that starts by, well, just deciding that we should do that. So: having the minimal infrastructure and then evolving it over time to sort of take on more and more responsibilities. And then that piece is a shared resource.
I: Maybe this was said a minute ago — just to sort of follow what OMF did, and sort of start from scratch, from the ground up, and not really take their word for it, but really take a lot of the ideas there, and the structure of their classes and the data structures, and reimplement on top of xarray. That's what I think would be the best route forward — and then have that closely coupled with PyVista.
I
That
way,
we
have
like
a
one-to-one
mapping
of
this
data
type
is
basically
the
same
data
type
in
pi
vista
and
just
maybe
do
one
to
one
for
each
class
we
implement,
because
by
this,
that
can
handle
all
those
same
data
structures.
But
then,
if
we
down
the
road
need
to
start
expanding
that
to
handle
data,
that's
in
a
spherical
reference
system.
We
can
easily
add
in
an
additional
model
that
doesn't
necessarily
have
a
one-to-one
relationship
with
something
on
BTK
or
by
this
aside.
I: I was gonna say: I'm not familiar enough with xarray to be able to say how nicely it plays with irregular data, but from my impression right now — my current understanding — it doesn't play well with irregularly structured data. So if you have a tetrahedralized mesh, being able to store those cell connectivities — you know, which nodes represent any given tetrahedron in your...
C: For better or worse, xarrays always tend to be, yes, one single NumPy multi-dimensional array, while pandas is, say, a bunch of 1-D NumPy arrays plus the coordinate logic on top. So for irregular data, I think that is better. And actually, on the xarray website they recommend that if you don't need multi-dimensional arrays, you use pandas, because it's faster.
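The point about irregular meshes fitting the "bunch of 1-D arrays" model can be made concrete. Below is a hypothetical column layout (the names `v0`..`v3` are invented): vertices in one table, tetrahedral connectivity in another, every field a flat column — exactly what a pair of pandas DataFrames would hold, with no dense n-d array required:

```python
# Vertex table: one row per node, plain 1-D columns.
vertices = {
    "x": [0.0, 1.0, 0.0, 0.0],
    "y": [0.0, 0.0, 1.0, 0.0],
    "z": [0.0, 0.0, 0.0, 1.0],
}

# Cell table: one row per tetrahedron, four vertex indices per row,
# so every per-cell attribute lines up with a cell row.
cells = {
    "v0": [0],
    "v1": [1],
    "v2": [2],
    "v3": [3],
    "quality": [1.0],   # example per-cell attribute
}

def cell_points(cells, vertices, i):
    """Recover the (x, y, z) corners of cell i from the two tables."""
    idx = [cells[k][i] for k in ("v0", "v1", "v2", "v3")]
    return [(vertices["x"][j], vertices["y"][j], vertices["z"][j]) for j in idx]

pts = cell_points(cells, vertices, 0)
```

The same two-table split is how VTK itself stores unstructured grids internally (points plus a connectivity array), which is part of why the mapping to PyVista stays one-to-one.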
C: For me, more than trying to agree here on a huge standard — if we put the goal too high, then nobody is going to go towards it. So for me, the only thing that I am a bit worried about is this workflow of — and I have the feeling that there is a lot of code around, and a lot of libraries, where one is importing data from Petrel and the other is importing seismic, and, I mean, you don't know where those things are.
B: And we have pyGIMLi here, and if these libraries would agree and say — we don't know how it's going to look, and we don't know what it's gonna be, what it's gonna be called, and who's gonna lead it, but we are going to use it — then there would already be enough inertia that it would exist, and others might come in over time.
F: I'll second Martin on this. I think if we just all agree that the ins and outs are gonna be pandas DataFrames and xarrays, then we can do the wrapper to OMF — or any of those serializations, like higher-level storage — after the fact, and it's all going to be easy, because it's only gonna be one or two objects in and out. So I would vote — I would vote with Martin; Martin and all of us, yeah.
B: Cool, cool. I also did this brainstorming, and I think many of you worked on that, and I probably won't even go into it if we're gonna keep the meeting to a sensible time of an hour or so. But I think it's something we can work over on the weekend, or in subgroups, or on different days, because...
B: Maybe — maybe we're never gonna finish, right? But I think there's a lot of information in it. What I would be a bit more interested in — I have this in the agenda in the markdown — is: I think it needs someone, like, a bit, leading it. No, not necessarily code-wise, but someone who has a real interest in steering this.
B: Just to take a step back: the whole gostin idea was just a week ago, and there was nothing in the hackathon about EM or potentials, and I co-hosted the morning sessions with Filippo, and you just have to do something, so I put it in with no idea what it would be. And it kind of grew out of itself, beyond my original intent, which is really awesome — I think what's happening in this channel, even just discussing things. But I don't think I'm, like, aligned with my own project anymore.
B: This sort of package is what I want to focus on tomorrow, but I think there are others that are very inclined towards that. So I think we should use the remaining half an hour — to keep it to an hour — to see who is really interested in it and willing to invest time. I think Miguel even has that as part of his company, but also...
C: If it's subsurface, I would just take the project of last year, also because it was a bit where everything started, and I like the name — and it's not easy to find a name. So if anybody is against that, raise your voice right now; otherwise, tomorrow morning I will just really go to the repo with you, Alex.
C: ...if you are in the mood to start writing READMEs and maybe making the high-level structures — but hoping to develop it as a community project. I mean, I don't have too much experience with this — [inaudible] has way more — because mine is like 80% enterprise. Luckily, now you guys championed it a bit, so you forced me to move the development to the cloud, and I like that. But this one should be an open development since the beginning, so...
B: I liked his approach, because you often hear of a testing-first approach, but he said documentation-first, and that might be a good thing for the subsurface package: everybody who joins it has enough documentation to know what the goal is, and then can dig in to work towards that goal, you know. And that'll be me again for [inaudible]. I guess I will ask Leo if he can keep an eye on it.
G: If he's interested — I think he'll be interested; I don't know if he wants to specifically put time towards it, but I'll ask him, yeah. My own background is mostly Fatiando; I've never used any of the other libraries in this, but I've looked at most of them, just out of curiosity, especially over the last week.
I: Yeah, I mean, you're asking about sort of the buy-in from everybody here, and I think it should go without saying: I'm stoked about this library and about getting something to act as a glue between all the software that's here today. So I'm happy to contribute as much as I can, whatever that means; and as far as anything that has to do with PyVista, I'd happily sort of take on those assignments.
I: ...and alter this data into new data structures, and we have to make sure we have links and traceability across that data the whole time. That way, you know, if you're giving me some data from the new subsurface package and you send it to PyVista and we do a threshold filter, all of a sudden we have a new data set.
C: ...that you want to do once you import the data into whatever format you want to serialize it in, and then you can decide to which bucket you send it. And normally — I mean, the feeling I have is that once we are able to visualize it, the next step, for all of us, is pretty easy: sending it to one place or the other. So again, for me it is, more than code, a bit of organization between what is out there, and having a clear place where you have specific functionality, parsers, and these things.
C: I mean, in all the examples with PyVista, you've used whatever other software — or a reading routine you wrote yourself — to get the well data into NumPy arrays, and then manually created PolyData in PyVista to do the plotting. I've never used welly, but I think it'd be awesome to integrate something that can parse different well formats — like lasio, maybe, as well, or whatever.
C: Yes — so, what was shown today: we had coded it quite a lot, because we also teach it — years of interpolating within a layer — but I think it is better eventually to link it to some of the other libraries. In the last couple of years there has also been a lot of push in different places.
D: Yeah, there's no common — I think, like, between what Verde does and what GemPy does: they're both interpolations, right? They're both models where — I mean, not only that, Verde does this other stuff as well — but it'd be really cool if everyone used the scikit-learn API to implement this. Even though — I don't like the fact that GemPy has, like, separate point and orientation data frames; it's a little messy.
D: But then, right, you gain all this interfacing with all this infrastructure around model selection and cross-validation in the machine-learning world. And I think that's also captured in, like, a GemPy issue — there's, like: when you create a model, you're sort of fitting it, and then you have, like, a base regular grid that you sort of fit the kriging parameters to and everything, and then, using like the compute-model-at function — like, yeah, you can...
D: ...like, you fit on the data you pass it, then you can evaluate on a separate thing, right? But it's sort of a clunky interaction now. I have my own library that I'm working on for my private work that is sort of doing that, but it would probably be better done at a lower level in each library, so I think...
C
I
understood
you
right
if
you're
talking
and
using
a
general
interpolate
or
so
to
interpolate
this
curve
field
of
jump
I
using
something
else,
interpolated
of
jump
Isis
completely.
It's
really
strange.
So
it's
not
one
of
the
best
ones
because
we
are
interpreting.
This
is
color
field
that
we
don't
care
about
the
value.
So
we
are
interpreting
differences
of
a
scalar
field
so
to
ensure
that
all
the
surfaces
they
have
the
same
surface,
so
there
is
nothing
out
there.
So
the
alternatively
alternative
is
is
interpolate
a
assigning.
D
And
that's
fine
right!
It's
not
like
and
he's
very
like
it's
very
that's
like
a
very
specific
thing
to
gen-pop
of
it.
He
almost
you
don't
need
to
worry
about
in
terms
like
I'm,
just
talking
about
like
from
an
interface
perspective
where
yeah,
like
I,
could
take
my
like
I,
don't
know
like
taxicab
data
set
that
I
use
and
I
used
in
like
a
machine
learning
example
and
try
to
feed
it
into
gem
pie.
D: You do work to say: this surface, this formation, corresponds to this scalar-field value, right? Typically you have, like, a wide column — if you look at, you know, whatever machine learning you want to do: you have binary labels, zeros and ones. You are doing some work ahead of time and sort of intercepting that step, and that's fine, but I think there are some wins...
D: ...basically, that could happen where, like — if there were another one, like Surfe, or map2loop, which sort of exists — I could, you know, have the same data structure and just say: instead of using GemPy, use map2loop, or use Surfe. Or — I guess Verde doesn't do three-dimensional, sort of, multi-surface interpolation; it really just does one at a time, right — but you could have a sort of agnostic front end to whichever interpolation algorithm you have behind the scenes.
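The "agnostic front end" over interchangeable interpolation back ends can be sketched as a small registry: one entry point, algorithms registered by name. Everything here is illustrative (the backend names and `interpolate` signature are invented, not GemPy's, Verde's, or map2loop's API):

```python
# Registry mapping a backend name to an interpolation function.
BACKENDS = {}

def register(name):
    """Decorator registering a backend under the given name."""
    def wrap(fn):
        BACKENDS[name] = fn
        return fn
    return wrap

@register("nearest")
def _nearest(x, y, x_new):
    # Return the value of the nearest known sample for each query point.
    return [y[min(range(len(x)), key=lambda i: abs(x[i] - q))] for q in x_new]

@register("mean")
def _mean(x, y, x_new):
    # Trivial baseline: predict the mean of the known values everywhere.
    m = sum(y) / len(y)
    return [m for _ in x_new]

def interpolate(x, y, x_new, backend="nearest"):
    """Same call regardless of which algorithm runs behind the scenes."""
    return BACKENDS[backend](x, y, x_new)
```

Swapping `backend="nearest"` for `backend="mean"` changes the algorithm without touching the caller, which is exactly the property described above.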
G: We can always turn gostin into a collection or something if we have to — the gostin repo — given that there have been a couple of pushes there already. So even if that becomes the examples, we keep the actual implementation cleaner in a different — in the subsurface repo. I'm not sure what's there already; I haven't actually looked yet. Oh, it's a new repo — you know, I think...