Description: July 11, 2019 Jupyter Community Workshop talk by Robert Rosca, European X-Ray Free Electron Laser GmbH.
In the end, this is really useful, especially in these cases, because you have lots of pulses. At the European XFEL we have up to 27,000 X-ray pulses a second, each less than 100 femtoseconds long, and this makes it one of the most brilliant X-ray sources in the world. So you have lots of photons in really short pulses, and this lets you do fast imaging, to the point where you can get sub-100-femtosecond frames for imaging of some dynamics.
The problem this introduces is the amount of data you get. When you have 27,000 pulses a second, even with just a one-megapixel detector you end up with around 54 gigabytes of data a second, about 194 terabytes in an hour, and that's just for a single detector. You can have several of these, or much larger detectors, and then you get ridiculous amounts of data.
Yeah, this is quite a big problem, because these facilities are all publicly funded. Public money goes to fund these places, so in principle the data should be accessible by the public as well. So there has to be some mechanism for people outside of the facilities to actually analyze the data and look at it, and that isn't very practical right now; at these sizes, nobody could really download these things.
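As a sanity check on those figures (the two-bytes-per-pixel depth is my assumption for illustration; it is not stated in the talk):

```python
# Back-of-the-envelope data rate at 27,000 pulses/s with a 1-megapixel
# detector. The 2-bytes-per-pixel depth is an assumed value.
pulses_per_second = 27_000
pixels = 1_000_000
bytes_per_pixel = 2

rate_gb_s = pulses_per_second * pixels * bytes_per_pixel / 1e9
per_hour_tb = rate_gb_s * 3600 / 1000

print(f"{rate_gb_s:.0f} GB/s, {per_hour_tb:.0f} TB/hour")  # 54 GB/s, 194 TB/hour
```

The numbers line up with the ones quoted in the talk, which suggests the detector frames are roughly two bytes per pixel.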
So this is where PaNOSC comes in. It stands for the Photon and Neutron Open Science Cloud, and it's a collaboration of a couple of these facilities to try and work together and come up with a solution to this problem. The approach we're taking is to try and fit all this data analysis into the FAIR principles: the data has to be findable, accessible, interoperable and reusable. So you need to have a way for anybody, really, to be able to find this data by some metadata.
You'd have a catalogue they can search, find what they want to analyze, and then run the analysis wherever the data is. And we need a way to also reproduce the analysis people have previously done. Ideally, you'd be able to take some publication, some paper somewhere, look through the paper, see the plots they've made and the figures they've had, and reproduce whatever they did to end up with that data.
So, of course, Jupyter is a pretty natural solution to this, since you can combine your code with some explanation of what it is doing. There are some people who've already been taking this very friendly approach, where they have a notebook linked to a paper, and then they explain through the notebook how they got those results and those figures. That would achieve the main goal of having reusable analysis. And yeah, the reproducibility thing is a huge problem in science right now.
It's part of the scientific method that people have largely forgotten about, and it's very hard to actually figure out how people got the results in their papers. So a lot of time is wasted by scientists just trying to repeat work that's already been done. The vision of PaNOSC is to have a unified web interface across these six large facilities.
This would let people rerun the analysis, see what's been done and all the data they've got, and reproduce everything straight away. In addition to this, there are some workflows that are very difficult to transfer into a notebook, so we'll also have remote desktops for those kinds of 3D interfaces that might not work very well in a notebook.
Again, saying the things I said before: you need to find the data, access it, interact with it, execute the analysis, and also modify and extend work people have previously done. Now, these facilities will frequently have embargoes on data. The users come in, they put a proposal through, they do their experiment, and for the next one, two, three years nobody apart from them should have access to that data.
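The embargo rule just described can be sketched in a few lines. This is my own illustration, not PaNOSC's actual policy engine; the three-year window and the field names are assumptions:

```python
from datetime import date, timedelta

EMBARGO = timedelta(days=3 * 365)  # assumed three-year embargo window

def can_access(user, proposal_members, experiment_date, today):
    # After the embargo expires, the data becomes public; before that,
    # only members of the original proposal may read it.
    if today - experiment_date >= EMBARGO:
        return True
    return user in proposal_members

print(can_access("alice", {"alice", "bob"}, date(2019, 1, 1), date(2019, 6, 1)))  # True
print(can_access("carol", {"alice", "bob"}, date(2019, 1, 1), date(2019, 6, 1)))  # False
print(can_access("carol", {"alice", "bob"}, date(2015, 1, 1), date(2019, 6, 1)))  # True
```

The real systems would additionally need per-facility policies, since, as noted later in the talk, the six facilities all have different data policies.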
Yeah, that's just the list of things users can do. And the challenge is quite big; it's quite a hard problem to solve. We have six facilities, each of which has different metadata, so it's very difficult to unify their approaches to how they store their data and how they categorize it, and each of them has completely different experiment types, which makes it even more difficult to actually classify the datasets.
For some datasets, the data cannot be moved at all. Sometimes you'll get something like ten terabytes for an experiment, which is big but acceptable; but for the longer-running experiments you can have half a petabyte or a petabyte for a single run of an experiment, which is huge and can't be moved. So you have to bring the analysis to the computing centre where the data is stored.
The project is a four-year one and it's only been running for a few months, so we're still figuring out how to do this. We have a couple of options. The likeliest option is to run containers per notebook, or maybe per experiment, per run, or per facility; that part isn't very clear yet, so one of the things we have to figure out first is at what level to actually split things up.
Then we need to maintain these for the future as well, because something you install with current packages now will run fine, but in ten years' time, if you want to reproduce the results of an experiment, you might not be able to find the correct versions; there might be some version mismatch, and then you won't be able to actually see how something worked a few years ago.
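One way to guard against that version drift is to snapshot the environment alongside the notebook. A minimal sketch (real deployments would more likely pin a container image digest or use tools like `pip freeze` or conda lock files):

```python
# Record every installed distribution and its exact version, so the
# environment a notebook ran in can be rebuilt years later.
from importlib.metadata import distributions

def freeze():
    """Return sorted 'name==version' pins for all installed distributions."""
    return sorted({f"{d.metadata['Name']}=={d.version}" for d in distributions()})

with open("environment.lock", "w") as f:
    f.write("\n".join(freeze()))
```

Storing this lock file next to the notebook at least makes the version mismatch detectable, even if the old packages themselves still have to be archived somewhere.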
Sometimes it's a very basic analysis, like just summing some frames; sometimes it's something much more complicated that takes hours of HPC time and a few dozen nodes. The other challenges are mostly administrative ones, and trying to link things together. PaNOSC is supposed to work with something called the EOSC-hub, which is the European Open Science Cloud hub, and that development is happening concurrently, so it's quite hard to actually work together throughout the EU with dozens of people and lots of institutes.
Again, the data policy is quite a hard thing to solve as well, and then we also have the problem of some scientists not being particularly used to Jupyter notebooks themselves. They prefer their console commands; they prefer running things outside notebooks, and if they do that, you can't really reproduce what they've done very reliably. Oh, I was very fast, but still — this is pretty much it. The main problems we have are tracking and reproducing what's previously been done in a notebook.
The thing I've seen just from working with the scientists is that they tend not to run cells one after another. They'll run some cells, then maybe change them and not run them, move back up to a previous cell and change that, or run them out of order, and then you can't just rely on having the notebook and assuming that you can run it cell by cell to reproduce what they've done. So that's one of the big problems that we need to solve.
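That out-of-order execution is actually visible in the notebook file itself, because Jupyter stores an `execution_count` on each code cell. A minimal illustration (my own, not a tool mentioned in the talk) of flagging a notebook that was not run top-to-bottom:

```python
def out_of_order(execution_counts):
    """True if the cells that have run were not executed top-to-bottom.

    Jupyter stores an execution_count per code cell; it is None for
    cells that were never run.
    """
    ran = [c for c in execution_counts if c is not None]
    return ran != sorted(ran)

print(out_of_order([1, 2, 3, None]))  # False: executed in order
print(out_of_order([4, 2, 3]))        # True: the first cell was re-run last
```

A repeated count (a cell re-run without bumping the others) would need extra handling, but this already catches the common "jumped back up and re-ran something" case.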
There are tools which already kind of tackle this, like CoCalc, which people mentioned before, where you can go through a little timeline and scroll forwards and backwards to see what's been done at various times. There's also something called Code Ocean which basically solves this problem, but I'm pretty sure it's closed source and a paid service, so that's not really an option for us. So the main reason I'm here is just to see what other people have done in relation to these kinds of problems. That was about 15 minutes, yeah.
The simplest way is to have a timeline: you just keep track of the order the cells were executed in and how they were ever changed, and you could rerun it that way. In my more ideal world, you'd have the interactive session where the scientists do what they need to get their results, and then at the end you save those results within the session.
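That timeline idea can be reduced to an append-only log of executed cell sources, which can later be replayed in the order things really happened. A sketch of the concept (not an existing tool; the log format here is invented for illustration):

```python
import json, time

LOG = "session.log"
open(LOG, "w").close()  # start a fresh log for this session

def record(cell_source):
    # Append one JSON line per executed cell, in true execution order.
    with open(LOG, "a") as f:
        f.write(json.dumps({"t": time.time(), "src": cell_source}) + "\n")

def replay(namespace):
    # Re-run the session exactly as it happened, regardless of the
    # order the cells sit in inside the notebook document.
    with open(LOG) as f:
        for line in f:
            exec(json.loads(line)["src"], namespace)

record("x = 2")
record("y = x * 21")
ns = {}
replay(ns)
print(ns["y"])  # 42
```

This is essentially what CoCalc's timeline gives you through a UI: the document order and the execution order are tracked separately.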
So you get your plots, your tables, your data, whatever you need, and then there's a kind of check where it tries to run the notebook, and if what the notebook spits out doesn't match up to the saved results, it'll say: oh, your notebook isn't fully reproducible — can you please look through it and make sure that it runs.
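A toy version of that check, with outputs modelled as plain strings for simplicity (a real implementation would re-execute the notebook, for example with nbclient, and diff the per-cell outputs):

```python
def reproducibility_report(saved_outputs, rerun_outputs):
    """Return the indices of cells whose fresh output differs from the
    output saved in the notebook file."""
    return [i for i, (a, b) in enumerate(zip(saved_outputs, rerun_outputs))
            if a != b]

bad = reproducibility_report(["42", "figure-1"], ["42", "figure-1b"])
print(bad)  # [1]: cell 1 no longer reproduces its saved output
```

As the next speaker points out, the hard part is deciding what counts as "identical enough": timestamps, random seeds and plot metadata all make byte-exact comparison too strict.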
Effectively, once your notebook does the thing you want, then, as part of checking it in and promoting it to the public branch, nbreport will step through it in order and do some sort of cell-by-cell test, and only promote it if it passes. Of course, deciding what counts as identical enough is a whole different can of worms — but yeah, in nbreport, I think.
Trying to get to the heart of it: what you probably want is the flexibility of exploring early on and hopping around while your brain is really in that context, without keeping track of what you're doing. Frankly, you don't care too much, because most of that is just trying stuff out. It's like a workbench: you know where you put the screwdriver, and that mess is faster for getting stuff done. The problem is that when you come back a week later, you have no idea where anything is.
So you'd say: I want these cells, and then it goes and looks, lets you choose, and says: I've analyzed what you did, and I think these are the things you want to gather into an actual, settled document. It's basically an assistant that lets you keep the individual notebooks messy, more like scratch space, but makes it very, very easy, with a good UI, to pull out the pieces that you want to turn into something more persistent. And I actually think something like that could work.
Something that lets you be messy, but makes it very easy to say: okay, I'm almost done — maybe I'm not finished, but I'm far enough along that I want this to stay for when I get back. So here, you know, let me help you: I think this is what you need; double-check, have a look, we've run it, and then you'll see. And then the raw material is basically the scratch work itself; it doesn't get lost.
It would be — currently it really isn't, but yeah, that's one of the main problems, because there's a lot of fragmentation in how the scientists store what they've done. For most things we have an e-log, so during the experiment they can write notes in there and say: it was this sample, we ran the experiment for this long, we had these parameters. Some of the things are stored automatically in the data.
That's just by the way we acquire data. But some of the things are written on a post-it note and left on your screen somewhere. So that's one of the other big challenges: actually keeping track of all the information the scientists know — the things you need to know to give the context and analyze things properly — that isn't really stored anywhere easily accessible.
My name is Robert Nagler; I work for RadiaSoft. We're scientific consultants specializing in beam physics and particle accelerators; we're a software simulation house, basically, and I'm a programmer. Paul Moeller — I want to do a personal note: it's his birthday today, he turned 50, and he sent a pretty funny picture which I won't embarrass him with. And David, Chris and Nathan are the physicists: they do all the hard work, and my job is making sure that they can get their job done.
The slides and some references are available at RSL doc link slash JCW19 [unclear]; I'll repeat that at the end of the talk. I'll talk about why we use Jupyter — it's a little bit different from what's been talked about here — and what we do to make it happen; it's been a big evolutionary process. And, of course, I'll have my wishes. So Sonya mentioned 3D visualization: there's 3D visualization in Sirepo.
It's kind of a crazy thing: today we aren't doing that anymore, so the scientist has to compare the output of one code with the output of another [unclear] — one code can do more physics, and the other does one thing really well. So, you know, this is a heat load on a mirror in an accelerator; I don't understand any of this stuff.
These are words out of his mouth, not mine: the thing that Jupyter gave this particular scientist, something more than just having some new plot scripts, is that he can edit the plots — it's a living document. That's what I think Jupyter gives people; it's exactly what Curtis was talking about, but it's the thing that Jupyter gives.
So this is a guy who pushed us to use Jupyter a lot, and we put it up for him, and he likes using JupyterLab as his IDE. He uses it on NERSC: he has an analysis running — "maybe one and all", he calls it [unclear] — and a job running in another panel. This happens to be on our cluster, where he's running an MPI job and doing an analysis of that data.
Finally, the classic model of the Jupyter article — this one was, to quote him, instrumental: "This was unbelievably important to me. Being able to describe the algorithm helped me think through it, in a way where I can mix code, tests of those code pieces, and then describe what happened and why." And it says all these interesting things which I don't know — I don't even know what a witness bunch charge is. So our biggest use case is teaching. We're a commercial company, but we like supporting — we're an open-source software company.
So, you know, we're a mini version of that [unclear] — let's just say that we try to help scientists, universities and labs understand things. We use Jupyter notebooks to deliver that almost exclusively, I think; sometimes raw scripts. Pretty much the last six particle accelerator school sessions we've hosted ran either with JupyterHub or with Sirepo [unclear], and we support the U.S. Particle Accelerator School to teach students how to use these codes, which are very complex and have arcane formats. That's a big point.
Right — and that's probably the best-described format we have. The ICFA machine learning workshop in Switzerland was run on our JupyterHub cluster in Fort Collins, Colorado; I thought that was kind of fun. They didn't care — they didn't even notice, which is great, because Jupyter notebooks themselves are interactive but not latency-sensitive, right: they send their messages and get them back, and you don't really care about latency that much. And we're not on Internet2; we're on a one-gigabit link — oh, even less than that, we have to pay for the bandwidth.
There were 60 participants and they really enjoyed using Jupyter — and where we educate people on Jupyter itself, actually, that's a process. So, a little bit of a summary of the use case, of why we use it: it's an IDE for simulations. It's really the IDE for all of our users. Nothing to install, again — you know, you can edit code in it, which is really important to us. We can run Emacs or vi; I just installed them in the release this morning, because somebody loves vi. Right, whatever it is.
It's a subtle point, this, and I didn't mention my use case, which is: we don't have SSH into any of our nodes, and that makes me happy. They're all running inside of containers, so I don't have to worry about it. You know, I know what user they're running as — it's a single POSIX user; we don't have any of that legacy stuff you guys have.
Specifically with RadiaSoft: I think we use containers wrong — sorry, Michael, we do use them wrong, and I'll show you why — but it makes it easy for people, and in technology transfer they don't have to answer the question "what's running?". In our Jupyter setup there's just a notebook — "I found the notebook; can I get everything? Where is SSH?" This is it: you bring up a terminal window, and you can get access to all the codes on the command line, in our environment.
There are 46 public users; people just come in from all over, and we don't know who they are. We do actually kind of try to figure out who they are [unclear], but we don't advertise it very much; we just mention it here and there, and people find out about it. It's convenient for them because they have the codes there, and sometimes they really do use them.
On JupyterHub — I'll talk about this — we have multiple pools in our JupyterHub configurations, so we can reconfigure our three environments based on what's needed. We have four internal nodes that our staff use, and one public node to allow everybody to come in and use it. There are 13 MPI nodes; what that means is that people can just say mpiexec, more or less, and it runs a job. No Slurm, no Torque, none of that fun stuff.
You can set up the whole thing with our CI and configuration-management tools, so that we can run a full Jupyter operation on a desktop quickly, so we can see what we're going to get before we push it to alpha, beta and production. That's important to us, because we have a lot of stuff, which I'll talk about next: we build everything into one image. It's ten gigabytes, and that's actually small — I've gotten it down along the way.
Shadow3 is Python 3 now — it's a Fortran code that's wrapped in Python 3, but it's still Python 3. We have machine learning built into it, and I want to mention we do visualization of DICOM images. I'm really happy about this, and it's part of what I'm talking about: we never know what we're going to learn. So, on Monday, we got a grant from NIH to do prostate cancer work — reading of prostate cancer MRIs and CAT scans with machine learning.
We have a curl installer that lets people download it to their desktop, so they can literally say curl jupyter.run | bash. I know — I grabbed that name, and I should probably give it to you guys if you want it, but it was a bland name that was left to use [unclear]. It's been that way for, I want to say, six months — I don't actually know. It's been great for everybody; everybody's happy. One thing I did notice is that you can't download a folder.
You have to go back to the classic tree view in order to download a folder; getting a download-folder feature into JupyterLab would be nice — just an observation. Again, no POSIX users. MPI is important to us: all these codes are written with MPI at the core — some of them are OpenMP, but there's always MPI inside the containers. They mount the ~/jupyter directory, which is the same directory as in the Jupyter container, but Jupyter isn't running in the MPI containers; their environment is different. A lot of our users run jobs for weeks, months.
Sometimes these are long searches, and they like it easy, so we allocate nodes — one or two — to people in a somewhat manual process: we change the configuration file and run a configuration-management tool. Inside the containers we're running sshd; we give them a TLS configuration and use Docker host networking so that MPI works. I've heard mixed things about running MPI under Kubernetes, and I never was able to get it to work well — so some of you can help inform me whether I can use Zero to JupyterHub to get rid of all this stuff. The wrapper makes it very easy.
They don't need to know about the hosts: they can say rsmpi with host h1 and so on, or they can just say rsmpi, and it runs the whole thing on all the nodes that are allocated to them. We put a small responder in so we can manage our server pools [unclear]. The configuration on the right side of this slide is our development configuration. Since we're running a public server, we need to do garbage collection: we need to be able to kick people off, so other people get the resource, even internally.
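A wrapper like rsmpi can be tiny: read the hosts an admin allocated to this user and hand them to mpiexec. This is a sketch of the idea only — rsmpi's real flags and file locations aren't shown in the talk, so the hosts-file path and the MPICH-style `-hosts` flag here are assumptions:

```python
from pathlib import Path

def mpi_command(program_args, hosts_file):
    """Build an mpiexec command line from an allocated-hosts file.

    Uses MPICH-style "-hosts"; OpenMPI would want "--host" instead.
    The user never has to know which nodes they were given.
    """
    hosts = Path(hosts_file).read_text().split()
    return ["mpiexec", "-hosts", ",".join(hosts), *program_args]

# Demo with a throwaway hosts file standing in for the admin-managed one.
import tempfile
with tempfile.NamedTemporaryFile("w", suffix=".hosts", delete=False) as f:
    f.write("h1 h2\n")

print(mpi_command(["./my-sim", "--steps", "100"], f.name))
# ['mpiexec', '-hosts', 'h1,h2', './my-sim', '--steps', '100']
```

The real wrapper would then exec the command; keeping the host list in a file the configuration-management tool rewrites is what makes the manual node allocation he describes workable.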
We use configuration on Docker to set up a CPU limit on the Jupyter server. A static port range is important to us because we're running on a VLAN, and we restrict what hosts can talk to what, especially with MPI. And I don't know how other people have it, but when you have users that are not POSIX users — they just have some random name — each of them is given a directory, and then that directory has to exist on an NFS, so we have a little adapter in there to make the directory.
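The per-user CPU and memory caps he keeps coming back to map onto JupyterHub's spawner options. A minimal jupyterhub_config.py sketch — the values are illustrative, and whether RadiaSoft drives DockerSpawner exactly like this is my assumption, not something stated in the talk:

```python
# jupyterhub_config.py -- illustrative resource limits for
# container-backed single-user servers (values made up for the example).
c = get_config()  # noqa: F821  (injected by JupyterHub at load time)

c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.cpu_limit = 2      # at most 2 CPUs per user server
c.DockerSpawner.mem_limit = "4G"   # and 4 GiB of RAM
```

Limits like these are what make a shared public host survivable when one user's scientific code pegs every core it can find.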
On user customization: you want to know what users can and can't do. The home directory of the Jupyter server is in the image; ~/jupyter is on the NFS [unclear]. When you start a Jupyter server you may want to install stuff or set your LD_LIBRARY_PATH, but a lot of our codes are really finicky — I mean, right? — so it's hard. So we read a repo dynamically that we can use for patching things between releases, or copying.
There's a bashrc file that we run that's in the NFS directory, and there's a bin directory that allows them to persist commands that get in their path automatically. Sharing is important to us — we're a small company with a few nodes, and usually we share everything — so ~/jupyter is the NFS directory, and it's shared with the MPI nodes. Sometimes, when we have a workshop, we'll mount another file mount into the container; we did that for the ICFA workshop and it worked really well.
The teachers can then write to the directories and the students can read from them — a way to distribute notebooks conveniently for people who don't know Jupyter. That's the other thing, right: if you're saying "well, go to Jupyter and do all these fun things", people don't know that; they just know the command line in general, or looking in a file browser and clicking. Generally our users share with GitHub and email, which is unfortunate, but that's reality. The CPU and memory limits allow us to have a shared host, and that's really important, because we have users who get confused.
Yeah, that's a good question — I don't know. They were certainly running the same notebook, but simultaneously? They certainly don't edit it together. That's one of my wish-list items: to have an interactive shared notebook. They may be running it simultaneously — I don't know what happens, actually; okay, so that's an unknown. My big wish list, from a sysadmin point of view, is being able to have storage limits. Docker allows you to have bandwidth limits; I'd like per-user logical storage limits.
"It's got this much space", whatever. Real-time collaboration, like in CoCalc, would be great — obviously it's a completely different user interface when you switch to CoCalc. Next: better user notifications. If your server is going down, you want to notify the users. And I'd like, you know — when we run out of servers right now, we return a 429, and it kind of looks okay, and people get confused because there isn't a refresh or anything; it's funny, and so we have to have a little warning.
So our users love it. We make it easy for them, because everything's pre-installed, and they love that, I have to say — and they want to use all available resources, so the CPU limit was really key for us to be able to run a public server, especially with these scientific codes, because they soak up CPU really fast. And I want to thank the whole team — Fernando, everybody else out here, Jason — for doing a great job of providing something easily customizable. I can go in and subclass something, and I get responses from the maintainers in no time. It's really easy to do this, and I don't do this as my full-time job — I do a lot of things; that's my full-time job — so I have to be able to dive in really quickly to do something, even if it's just throwing something in the configuration file, or a subclass in the configuration file for JupyterHub, to get it to work. And so, yeah.