From YouTube: Jupyter: A One-Stop Shop for Interactive HPC Services
Description
July 11, 2019, Jupyter Community Workshop talk by Michael Milligan, University of Minnesota, Minnesota Supercomputing Institute
I'm Michael Milligan. For those who haven't met me, I'm the assistant director for application development at the Minnesota Supercomputing Institute, and I'm going to give sort of an overview of the different ways that we use Jupyter. We use it in so many different ways that, as my title says, we've come to think of it as a one-stop shop for our interactive HPC needs in general. So I'm going to start this talk with a story.
This is something that our users were starting to pick up on their own, jumping through various hoops to run IPython on our compute resources, and they told us it would be really great if we could put something together around this. Management basically said: sure, we could think about that, but you can't just be "the Jupyter guy." We have to think about this more holistically and make it part of our service offerings in a sensible way. So I went away and thought about it, and where we ended up
is this: I convinced MSI to commit to supporting interactive HPC as a first-class service, and they said, great, you're the interactive guy now. So then I went away and spent some more time trying to define: what do we mean by interactive HPC? Why is this something that's important, that's worth devoting resources to? And, because I was still really, really excited about Jupyter: how do I make Jupyter integral to that picture, so that I also get to play with Jupyter?
So we started making a case for interactive HPC as a foundational service, something that a center like ours should offer. We started by looking at what the computational landscape looks like for the kinds of work that centers like ours deal with, and we found that in the past we had a kind of computational dichotomy. A lot of the work that users did fit into one of two buckets. On one side were things that were small and interactive and local and ran on the fly.
That was about the extent of it; it was probably managed by a grad student, until the grad student graduates, and then it's managed by no one. On the other side was the other big bucket, the sort of work that HPC centers like ourselves dealt with, which is characterized by words like large, scheduled, remote, and professionally administered. This is the work where a user would submit a job to a cluster, and the user doesn't have to think about how the cluster is managed; we do that.
Today, computing is presumed to be remote and interactive, more or less by default. Something as simple as opening up a word processor is as likely as not to be a session in a web browser running on Google's servers, not a program running on your own computer, and people expect that kind of behavior in their computational work as well, even when that expectation isn't necessarily reasonable.
Sometimes the internet goes down. A core consequence of all this is that we have users who have access to much larger resources, who have immediate access to these sorts of professionally managed services without having to think about how they're managed; but this doesn't mean that they suddenly know what they're doing. So we thought carefully about who these users are: what are their challenges, what are the stories that drive them? And we find that HPC is properly thought of
as part of a larger research workflow. So the story sounds like: Dr. So-and-so needs to explore a huge data set. This data set came from somewhere, and its exploration is going to lead to something; maybe a paper will be written at the end of the day. But at this phase of the research, Dr. So-and-so needs interactivity, needs analysis tools, needs maybe large memory and storage to be able to get at this data set. Or: Dr. Such-and-such is preparing to view their cutting-edge simulation visualization.
You know, it's one of these simulations of the entire universe, or they've simulated some very complex hydrodynamic problem; you name it. For that kind of thing you need interactivity, you need remote visualization so you can see what you're doing, you probably need a lot of compute sort of on demand, and you need some bandwidth to be able to get at the voxels you're creating and the pixels you're rendering in an interactive fashion.
At the other end of the spectrum, you may have a grad student who's at the early stages of prototyping an algorithm, and so again they want interactivity, so they can rapidly iterate on the development of the software they're working on. In addition to that, they need some kind of access to development and debugging tools. They need the time to iterate, which means they don't want to be burning [...]
What we're building is not a rack of commodity computers that's going to serve a billion ad impressions. What we're building is a computational resource that's going to be used for research purposes, in cutting-edge ways that we can't anticipate, by users who have very advanced needs that, in turn, we can't anticipate all the details of, because what we hope is that they are doing something fundamentally new. So we need to give them general-purpose tools.
Now, in my role managing the application development group at MSI, I have a very keen sense of the kinds of things that application developers at a center like ours spend their time doing, and to enable this interactivity, one of the things you keep coming back to is interfaces. You need interfaces that are interactive, and traditionally that's going to take a lot of forms.
You have graphical sessions, which could be remote desktops, or could be individual applications that run through X forwarding, or could be GUI applications running on a terminal in the computer lab somewhere. And then, of course, there are web interfaces, which can be bespoke application gateways, bespoke workflow managers, or existing data management tools that have been brought in; really, the spectrum is very large in that space, and it's grown. This is sort of the default way people expect you to build applications now. But then, of course, there's the good old command line.
Often the right answer is to point people at some existing software that can do more or less what they're trying to do, not to write something from scratch. But if they're coming to us saying, "I want to create an interface to this cool thing I've developed," then we might decide that we need to write it, because it doesn't exist yet. So, everyone in this room at this point knows where I'm going with this.
So let's take a look at what that looks like in the case of the Minnesota Supercomputing Institute. I'm going to look at three key use cases. We have a headline HPC notebook service that I'm going to talk about; in addition, we have application-specific science gateways that we develop for project-specific use cases; and then, even more contained, we have transient notebook resources that we might stand up for workshops and training.
Taking the first one as sort of the pattern that the rest will follow: some of you may have seen me present this flow chart before. It depicts the architecture of our HPC notebook service. This service was designed with the idea that we need to use our existing HPC clusters; we're not going to stand up a new cluster just for doing Jupyter things.
We want to use our existing scheduling technology for the same reason, and we want to use the authentication technology that exists, because that's already deployed and already audited; we're not going to have to go through some process with auditors and security people that would probably get shot down anyway. Just use something that we know works. So those were our architectural goals when we started. Additionally, we wanted to leverage JupyterHub, which lets us use the established extension mechanisms that JupyterHub provides.
The result is this very elegant system where MSI has to maintain almost no local code, and we can give our users access to Jupyter notebooks in a very scalable way. Here's a quick tour of the flowchart you see here: the maroon bits are the parts we had to add; they didn't come out of the box.
The white boxes are out-of-the-box Jupyter components. Those of you who have worked with JupyterHub know that it comes with a configurable HTTP proxy as standard, which allows JupyterHub to route incoming web requests in a configurable way; this is going to be key later on. Then, of course, we have the hub itself, which lives on its own server somewhere. We added some authentication components.
We added the components to talk to the cluster, and then, because this proxy is dynamic, we can set it up so that the connections to these notebooks don't go to the same place as the hub: the connections go through to compute resources on our supercomputer. We can then use the existing cluster middleware to provide JupyterHub with a way to say: okay, this user has asked for a notebook; let's spin up a compute resource and get them a notebook.
So, in a little more detail, here are the components we used to do this. The first one is BatchSpawner. I started to work on BatchSpawner not long after I got approval to move forward with making some kind of Jupyter thing for our cluster; it was August. The first thing we needed was a way for JupyterHub to submit a job to our cluster, because that was the obvious way to get access to compute resources.
BatchSpawner uses the standard job submission tools to launch notebook servers, and it uses the existing proxy to map those connections through to the execution node. Really, all of the logic comes down to teaching JupyterHub to use job submission tools as a way of spawning processes, and then learning how to parse the output you get back to extract the metadata you need: to know where to send the connections, and to know whether your process is still alive or still in a pending state.
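The approach described here can be sketched as a JupyterHub configuration fragment. This is an illustrative example, not MSI's actual configuration: the spawner class and `req_*` trait names follow the batchspawner project's conventions, but the specific queue name and resource values are invented.

```python
# jupyterhub_config.py (illustrative sketch, not MSI's real config).
# Launch each single-user notebook server as a Torque/PBS batch job
# via batchspawner.
c.JupyterHub.spawner_class = 'batchspawner.TorqueSpawner'

# Resource requests injected into the generated job script; the end
# user never edits these directly.
c.TorqueSpawner.req_nprocs = '2'
c.TorqueSpawner.req_memory = '4gb'
c.TorqueSpawner.req_runtime = '8:00:00'
c.TorqueSpawner.req_queue = 'interactive'   # hypothetical queue name
```

The spawner substitutes these values into a job-script template, submits it with the scheduler's own tools (`qsub` for Torque), and then polls the scheduler to learn the execution host for the proxy route.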
Unlike many other spawners: with clusters you typically do have some wait time between saying "start my notebook" and it coming up, which is very different from forking something locally. A key insight that we lucked into early on with BatchSpawner was to make essentially everything configuration-controlled, and this enabled two things.
One, it enabled a great deal of comfort about using this, because it ensures that the end users, who are connecting through a website, and who in principle are authenticated but whom we don't necessarily trust to know what they're doing, don't have input into the parameters that go to the job submission system. So they're not going to, you know, request the entire cluster for their notebook and then call our help desk asking why it never started.
The other useful property is that by parameterizing essentially everything through configuration, it turned out to be really, really easy to teach it how to use other cluster managers as well. When I sat down to write this thing we used Torque, so it supported Torque; and within a matter of weeks I had someone else coming to me saying, hey, this now supports my scheduler too.
The next component that we set out to develop is something called WrapSpawner. So, the benefit I just talked about is that the end user has no input over these job parameters. Well, that's maybe actually too little input, because users may have different needs, so we would like to give them the option to request different things, but not in a free-form way. The existing solutions at the time
sort of boiled down to: here's a text box, put in some parameters, and they'll get pasted into your job script. Now you're back in the terrain of needing your users to understand the intricacies of your job scripting language and the details of what resources your cluster provides, and how. That isn't really what we wanted to do. So instead we came up with a mechanism whereby... and of course, again, this is sort of a story of "I set out to do a thing, and then I made it really generic."
So I came up with a mechanism that can wrap any spawner within JupyterHub and present it to the rest of JupyterHub as a thing that hasn't been instantiated yet. All the parameters that were meant to be injected at configuration time, when you start up JupyterHub, now get created at run time. I thought it was terribly clever.
The benefit of this is that we can not only insert configuration parameters, we can even swap out the entire spawner for a different spawner at runtime, and we can control that through a simple form that we present to the user. The form rendering actually lives in a subclass, so it's very easy to override and make it fancier than the sort of default one we ship. The default one just looks like this: the administrator can set up some profiles, and those automatically translate into a drop-down box.
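As a concrete illustration of the profile mechanism just described: the wrapspawner project's ProfilesSpawner takes a list of (display name, key, spawner class, trait overrides) tuples, and that list becomes the drop-down the user sees. A hypothetical sketch (the profile names and resource values here are invented, not MSI's actual profiles):

```python
# jupyterhub_config.py (hypothetical sketch of admin-defined profiles).
c.JupyterHub.spawner_class = 'wrapspawner.ProfilesSpawner'

# Each tuple: (display name, key, spawner class to wrap, trait overrides).
# The wrapped spawner is only instantiated, with these traits injected,
# after the user picks an entry from the drop-down.
c.ProfilesSpawner.profiles = [
    ('Local test server', 'local',
     'jupyterhub.spawner.LocalProcessSpawner', {}),
    ('Cluster: 2 cores, 4 GB, 8 hours', 'small',
     'batchspawner.TorqueSpawner',
     dict(req_nprocs='2', req_memory='4gb', req_runtime='8:00:00')),
    ('Cluster: large memory, 12 hours', 'bigmem',
     'batchspawner.TorqueSpawner',
     dict(req_nprocs='8', req_memory='64gb', req_runtime='12:00:00')),
]
```

Because the wrapped spawner class is named as a string and configured by a plain dict, swapping in an entirely different spawner is just another list entry.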
But if you wanted to, say, add a text box where a user can say "I need a GPU" or "I need an SSD," those are things you could easily add by subclassing that further. The idea is that by injecting these parameters into the traitlets system, you can use any of the spawners you like, and they don't have to know that they're being run through this WrapSpawner system. The original impetus for doing this was that we wanted to preserve the option to use DockerSpawner, which we wound up
never doing, but other people have, and it works fine, so that was encouraging. Then there's the authentication component. This is a part we actually did not write ourselves, because it turns out that the solution is absurdly simple and someone had already written it. It's about 20 lines of code, and it just throws out all the existing authenticator code
that JupyterHub has, and instead looks for headers coming in from whatever is upstream of Jupyter, which at an institution like ours is going to be some kind of reverse proxy that's already tied into our institutional single sign-on solution. We've actually changed what that single sign-on solution is in the time since we implemented this, and JupyterHub didn't care; I didn't have to change a thing about it, because the new system just continued providing this header.
The header includes important information, like the name of the user; JupyterHub trusts that, and we're off to the races. Since we implemented this, we've gone from a single sign-on that was specific to just our institute, where people typed in their username and password, to a Shibboleth-based single sign-on that works university-wide and has built-in two-factor, and we didn't change a line of configuration in JupyterHub itself.
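The core idea of that roughly 20-line authenticator can be sketched in a few lines of plain Python. This is not the actual project code (which subclasses JupyterHub's Authenticator class); it just shows the decision being made: trust a header that only the upstream reverse proxy can set, and treat its value as the login name. The header name here is a made-up example; it is deployment-specific.

```python
# Minimal sketch of REMOTE_USER-style authentication. In a real
# deployment this logic lives in a JupyterHub Authenticator subclass.
TRUSTED_HEADER = "Remote-User"  # hypothetical header name

def authenticate_from_headers(headers):
    """Return the username asserted by the upstream proxy, or None
    if the trusted header is missing or empty."""
    username = headers.get(TRUSTED_HEADER, "").strip()
    return username or None
```

This is only safe when the reverse proxy strips or overwrites any client-supplied copy of that header.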
At least not where authentication and identity are concerned. You do need to make sure that, say, there's no way a user can pass through that magic header on their own, and no way that the connection can be reused somehow; but any well-configured reverse proxy setup, and your institution probably already has a vetted one, should do all of those things already, and in our case that was exactly the case.
So that's the HPC notebook service. Then we said: okay, we sort of have this expertise now from setting up JupyterHub, so let's tackle the other use cases as variations on a theme. The first thing we thought about was what we could do for applications. The Jupyter notebook service itself provides a lot of functionality out of the box. You have the notebook, with all the different kernels that a notebook supports; you have the JupyterLab environment, which we're increasingly adopting now; and either that or the classic notebook gives you a file browser.
It gives you a command-line terminal, and they're all fairly sensible. This is really cool, but there's a hidden bonus feature: because the application is a Tornado application, it has no problem proxying arbitrary traffic, the only restriction being that the traffic on the user side has to end in a web browser. Well, fine: you can do lots with that.
You can proxy an entire remote desktop by using the VNC protocol, which gets tunneled through a WebSocket, and there's totally a JavaScript library that renders that in a web browser. It is, I think the technical term is, janky, but it gets the job done for the applications where we've decided to support it. This is why I've added the note at the bottom: highly requested services, still somewhat experimental.
The remote desktop bit in particular has been tricky to get working, and tricky to keep working through updates, but it is a thing you can do. Having all of this functionality that is easily replicable, with swappable components, and fairly composable, because it's all based around these interacting network components, means that we can enable some entirely new use cases. One of these is application-specific science gateways. As I mentioned, I run the application development group at an HPC center, and a lot of the things we get asked to develop you can think of as some flavor of application-specific gateway. For a gateway like this, you can duplicate parts of the architecture. Maybe you decide not to use the main cluster, because this project has paid for their own dedicated compute resources.
So you point your BatchSpawner at the cluster with this project's specific resources, or maybe you use something like DockerSpawner or Spark instead. It also enables the capability to set up transient resources for workshops and training. You can duplicate a user environment; you can maybe add something like RStudio on top of it, if the training wasn't intended to be Jupyter-specific, although Jupyter is so cool that many of the trainings we support totally are Jupyter-based.
Then you can deploy that environment either using our custom resources or, increasingly, we just use BinderHub for these things, which makes it absurdly simple, frankly. And yes, you totally can do the RStudio-in-the-web-browser thing through BinderHub. I did not expect that to work, but it does, flawlessly, almost.
So that's the part where everything's been awesome. Not everything's been awesome, so I'm also going to discuss some challenges; I've alluded to some of these already. First, the HPC notebook service. This has been available to MSI users since about April 2016, and in that time our usage has grown enormously. We've got about 200 distinct users that are using this service, we see about 20 or so in any given week, and what we've noticed is that this usage is highly episodic.
Users ramp up, they use Jupyter intensively, and then they ramp down, and this, we think, fits with the idea we've identified that HPC is part of a larger research workflow. So you have users who are off doing their own thing, maybe collecting data for a while; then they come to our center and do their computation, which maybe involves Jupyter; and then they're off writing a paper or something, and we won't see them for maybe a further year.
But some other users in the meantime will have taken their place. So when I say we see 20 in any given week, that set of 20 or so users using Jupyter this week rotates through, and over the course of a year we'll have a couple hundred distinct people who've used it. For a sense of scale: we have about 500 active research groups in any given year, and that too rotates a bit, so in a given year maybe a hundred will drop off and a hundred new ones will come in.
Another key metric that we track is how long it takes a user to actually get a notebook session once they hit the "Start My Server" button. The answer this year was 39 seconds. This is the key thing that we wanted to enable by making this elaborate WrapSpawner/ProfilesSpawner mechanism, because by hand-tuning these job profiles we can ensure that they're well matched to the cluster resources they're being targeted at. And so users ask: well, how come
I can get this size of job, but only for eight hours at a time, whereas if I want twelve hours I have to use these other resources, and 24 hours means yet other resources, or this large-memory node, and so on and so forth? And the answer is exactly this: we want to make sure that this is an interactive service, which doesn't just mean that once you get on it, it's interactive, but also that when you want to get on it, you can get on it now, not press the start button and then go get coffee.
The challenge here has been that it's hard to keep everything in sync. Frankly, there are a lot of moving parts: all the different boxes in the flow chart, our different computing systems, and so it's been a challenge to keep those deployments working together. You have the JupyterHub environment, and the environment on the compute nodes, which consists of both the Pythons you're running and each of your kernel configurations.
You have different teams upgrading the languages themselves versus maintaining Jupyter, so this has been a bit of an administrative headache, but we've managed to keep it all cobbled together so far. With the science gateways we've had a really good experience so far. The top line that I roll out for my management superiors is that we've increased the impact of MSI developers in our proposed projects, and the reason is that they don't spend time reinventing the wheel: we use Jupyter as a core technology.
The challenge here was that it's a fully containerized platform, and the Jupyter model doesn't cause container bloat, but it does enable it; it doesn't do anything to prevent it, because you get the kitchen-sink phenomenon. Everyone wants their favorite computational tool in the container where everything's going to run, and next thing you know you have, you know, a five-gigabyte container.
This is not how containers are meant to be used, by the way; it's a much more natural fit for a virtual machine. We have recently stood up an on-premises OpenStack cluster intended for things like secure data applications, and there Jupyter works great. People ask: well, how do I get access to this thing?
I already mentioned we've done a couple of events. Our biggest one was the 2018 Gopher Day of Data: we had about 200 attendees, and we measured, I think, up to 60 concurrent sessions at a time. We did this using the Zero to JupyterHub recipe. If we did this nowadays, for an event of this size we would just use BinderHub; the Binder folks have assured me that an event of that size is fine with the current resources there.
We've had good adoption: by anecdotal measure, there are a number of centers using these tools, which we've been really happy to see. This has also attracted high-quality contributors, which is really awesome; as I mentioned, all the schedulers that aren't Torque were added by outside contributors. In addition, a lot of work has been done keeping these tools up to date and, you know, relatively bug-free. The biggest challenge here has been backwards compatibility.
Various versions of, say, Red Hat Enterprise Linux are typically cited as "it must run here," and so, for example, it was only last year that these extensions dropped support for Python 3.3 and JupyterHub 0.5. The trick is that if you're restricting yourself to features that work on versions of Python and Jupyter that old, it gets tricky to use the features required to support the newest versions of Jupyter; and so supporting this has gotten hard, and adequately implementing CI test coverage has gotten hard.
Okay, so the question was: for the queuing system, when people press the button, can we give them feedback to tell them how long they'll be waiting? The answer is that JupyterHub has added the ability to have progress bars and such things. The problem is that the vast majority of queuing systems won't give you enough information to really be able to figure out, on the fly, how long you have to wait, and this is a complaint that people using queuing systems have everywhere, at all times. How long will it run? Well,
we don't really know. So we do tell you up front: your wait time will probably be on the order of a minute or less, and if it's way more than that, contact the help desk. What we have found is that there isn't enough information to make something like a progress bar a useful addition to the user experience.
The question was: does the reverse proxy generate something like a JSON Web Token, or is it just passing through a header? And the answer is: it's just passing through a header. We're actually using Apache to do this, and it's literally just setting a header saying, this is the username, and here's, like, the group they're in, and
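The arrangement described in this answer looks roughly like the following Apache fragment. This is an illustrative sketch, not MSI's actual configuration: the location, header name, and upstream address are invented, and the Shibboleth directives assume mod_shib is fronting the single sign-on.

```apache
# Illustrative reverse-proxy fragment (hypothetical names/addresses).
<Location "/jupyter">
    # The proxy, not JupyterHub, performs single sign-on.
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    Require valid-user

    # Forward the authenticated identity upstream as a plain header;
    # JupyterHub's remote-user authenticator trusts this value.
    RequestHeader set Remote-User "expr=%{REMOTE_USER}"

    ProxyPass        "http://127.0.0.1:8000/jupyter"
    ProxyPassReverse "http://127.0.0.1:8000/jupyter"
</Location>
```

Because `RequestHeader set` overwrites any client-supplied copy of the header, users cannot inject their own identity through it.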