From YouTube: JupyterHub Team Tutorial, January 27, 2017
Description
JupyterHub Team Tutorial, January 27, 2017.
Slides are here: https://github.com/jupyter-resources/tutorial-devteam-jupyterhub-2017
Agenda:
[15 min] Overview and State of JupyterHub
[30 min] Teaching with JupyterHub
  - nbgrader: Using services and enhancements
  - Cal Poly: Deploying with Ansible and using JupyterLab
  - Berkeley: Data8 architecture overview
[20 min] Looking towards the future release
[20 min] Discussion and Q&A
A
All right, hi everyone, this is the Friday, January 27th JupyterHub team tutorial. We're going to keep this fairly informal. The intent of this tutorial is basically to give the JupyterHub team a chance to share with all of us the work that they've been doing, and to help our day-to-day development team get up to speed on all the current activities of the JupyterHub team. We posted an agenda that I can try to drop somewhere so people can follow it, and, yeah, we'll just get going.
B
Thanks. I'm going to start with an overview of the rough structure of JupyterHub, and then some of the details of the new things in the most recent release last month. So I'm going to talk about where the project is right now. We had our 0.7 release last month in December, and then we've had a few point releases since then fixing things with the new stuff; we're at roughly 0.7.2 or 0.7.3 right now. So the rough structure of JupyterHub looks a little bit like this.
B
We have a proxy in front, and then we have this big hub application that has a few components: it has an Authenticator, a spawner, and a database storing the state. It starts notebook servers for each user and then updates the proxy to route requests for given URLs to these servers, and the hub handles everything under the /hub URL. When the notebook receives a request, it asks the hub: who is the user that just asked for something from me?
B
While it's running there's also an nginx implementation that Yuvi has done. There's the Authenticator, which handles authentication, and a spawner, which handles launching single-user notebook servers. The hub is the process that manages all these things, and the Authenticator and the spawner are APIs that can be swapped out for different implementations that handle authentication in different ways, or handle allocating servers and notebook servers in different ways.
B
For OAuth, we have a package, OAuthenticator, that defines the basics of integrating an Authenticator with an OAuth service, and you can extend that to hook into your own provider if there isn't one written for that already. The Authenticators that have been written so far cover some of the common ones: people have contributed implementations for a variety of services. There's a REMOTE_USER one for putting JupyterHub behind, say, a Shibboleth Apache instance.
B
There's a SAML Authenticator, and then Yuvi wrote a temporary Authenticator as a step toward replacing tmpnb with a JupyterHub instance. In general, configuring Authenticators is a matter of saying which users should I allow, if your authentication system can authenticate more users than you actually want to have access to the hub, and then which users should be able to administer the hub itself: you know, start and stop other users' servers.
B
Add users, and so on. Spawners, generally, are a little bit more than an API around just starting a process that has a web server. There's a start method that starts a process somewhere and needs to return the IP and port where that server started; it needs to be able to check if that process is still running, and then stop it, and all those things.
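That start/poll/stop contract can be sketched in plain Python. This is a toy stand-in, not the real `jupyterhub.spawner.Spawner` base class (which is configurable and coroutine-based); the class name and port here are made up:

```python
import subprocess

class ToySpawner:
    """Toy illustration of the Spawner contract: start() launches a process
    and reports where its web server is, poll() checks whether it is still
    alive, stop() shuts it down."""

    def __init__(self, cmd, port):
        self.cmd = cmd
        self.port = port
        self.proc = None

    def start(self):
        # launch the single-user server and return its (ip, port)
        self.proc = subprocess.Popen(self.cmd)
        return ("127.0.0.1", self.port)

    def poll(self):
        # None means "still running", mirroring subprocess.Popen.poll()
        return self.proc.poll()

    def stop(self):
        self.proc.terminate()
        self.proc.wait()
```

Real spawners implement the same three operations against Docker, batch schedulers, and so on, instead of a local subprocess.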
B
There's often some state; when using Docker, for instance, this can be the container ID, so that you can resume a container rather than creating a new one on every start. Some example spawners that we have: there's the Docker spawner, the Kube spawner, the batch spawner for batch systems, the systemd spawner, and then a wrap spawner, which is kind of a multiplexing spawner that lets users choose which of a number of spawner configurations they want to use.
B
For a managed service, the hub starts it and tells it, you know, what the service's name is and where the hub is running. The hub gives it an API key so that it can make requests to the REST API, and it monitors it and keeps it running. And then there are external services, which can be anything that wants to talk to the hub that's not started by the hub. So this could mean, if you're deploying JupyterHub itself with supervisor or systemd or something like that...
B
It may make more sense to start your services with the same process manager, rather than telling the hub to do it. This lets you run services distributed across machines with Kubernetes, or whatever is appropriate for you. It can also mean that the service isn't a process at all; it's just a way of giving an API token a name and access to the running notebook servers. So you can just say: I want to assign an API token to an entity.
B
That's like adding users with a script, and we call that a service token. The main thing services can do is run a web service, and so if you say, when I start the service it will start a server at this IP, JupyterHub will automatically add that to the proxy at this /services/<name> URL. That lets you run things like nbviewer, or web applications, that sit next to the hub behind the proxy.
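As a sketch, a service definition in `jupyterhub_config.py` (as of the 0.7 series) looks roughly like this; the names, port, command, and token are placeholders, and `c` is the config object JupyterHub supplies when it loads the file:

```python
# jupyterhub_config.py -- `c` is supplied by JupyterHub at load time.
c.JupyterHub.services = [
    {
        # managed service: the hub starts it, keeps it running,
        # and hands it an API token via the environment
        'name': 'my-service',                    # proxied at /services/my-service/
        'url': 'http://127.0.0.1:9999',          # where the service listens
        'command': ['python', 'my_service.py'],  # hypothetical script
    },
    {
        # external "service token": nothing is started; this just gives
        # a name and hub API access to some outside process
        'name': 'monitor',
        'api_token': 'replace-with-a-generated-token',
    },
]
```

The hub adds the first entry to the proxy at /services/my-service/; the second is only a named token.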
B
JupyterHub also adds this HubAuth class that you can import in your service, which implements the authentication mechanism that single-user servers use. So when a request comes in with a cookie or, in master, an API token, the service can ask the hub who that authentication item corresponds to, and then use that to determine if a given request should be allowed. And then a service can do anything.
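A minimal sketch of that identity check, using only the standard library rather than the real HubAuth helper: the service asks the hub's REST API which user a token corresponds to. The hub API address is an assumed default, and the function name is made up:

```python
from urllib.parse import quote
from urllib.request import Request

HUB_API = "http://127.0.0.1:8081/hub/api"  # assumed default hub API address

def whoami_request(user_token, service_api_token):
    """Build the GET request a service could send to ask the hub which
    user `user_token` belongs to, authenticating as the service itself."""
    url = "%s/authorizations/token/%s" % (HUB_API, quote(user_token, safe=""))
    return Request(url, headers={"Authorization": "token %s" % service_api_token})
```

Sending this request to a running hub would return the username the token corresponds to, which the service can then check against its own access list.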
B
Any kind of monitoring. It can also be used for the nbgrader formgrader, which is a web service that uses hub authentication; it should be able to use these pieces to relieve some of the headaches of deploying the formgrader. It shouldn't need to talk to the proxy anymore; it can just say: hey, I'm a web server, these are the users that should have access to me, and then it should be able to do that in a simpler way and just run alongside the hub.
B
The hub will help it identify which users should be allowed to use it. There's also the cull-idle-servers script that we have. Potentially, services could be used for sharing files across users; there's a shared notebook server that we were working on just last week, so you can register a notebook server as a service, and this can allow users to share the same notebook server. In the absence of real-time collaboration that has limitations, but at least it lets you put up a server that multiple humans can access. And then Peter also implemented support for nbviewer running as a service; implementing support really means being able to run with a URL prefix, so it will start at /services/<name> instead of the root, and enabling hub authentication if the service needs it. And that's kind of the background that I wanted to summarize for the current state of things.
A
Any questions for Min? Hi, Carol. Okay, great. So the agenda is also in that repo, and it looks like for the next 30 minutes we'll be hearing from Jess, Brian, and Yuvi about teaching with JupyterHub. First on the list we have Jess, who's going to share a little bit about using services and enhancements with nbgrader.
C
While that's loading up: for those of you who might not be as familiar with nbgrader, nbgrader is a system for creating assignments in notebooks, and then, when it's used in combination with JupyterHub, it can also be used for, you know, releasing those notebooks to students, having the students submit them, collecting them, and then doing the auto-grading all throughout. So what I have here is a server that I just set up this morning, and I have a JupyterHub config here, which is pretty simple.
C
It has a lot of comments in it because I just pulled this from the nbgrader docs, but it really is much simpler to set up nbgrader with JupyterHub now that the services API exists. So basically what we have is this part, which takes up most of the screen, where we define the service: it has a name, and you can give it whatever name you want; you tell it to run nbgrader formgrade; and then you tell it what URL to start up with. The nbgrader formgrade one is set up as a managed service.
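The config Jess is describing has roughly this shape; this is a sketch based on the nbgrader docs of the time, and the service name, port, and course path are placeholders:

```python
# jupyterhub_config.py -- `c` is supplied by JupyterHub at load time.
c.JupyterHub.services = [
    {
        'name': 'formgrader-course101',        # any name you like
        'url': 'http://127.0.0.1:9999',        # where formgrade will listen
        'command': ['nbgrader', 'formgrade'],  # run nbgrader's formgrader
        'cwd': '/home/instructor/course101',   # course directory (placeholder)
    },
]
```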
C
If you noticed, in the JupyterHub config there was a key for the current working directory, and that's where it'll start nbgrader up, so this is the nbgrader config that exists in that directory. What you see here is that we set the IP and port to be the same as what JupyterHub expects, and then we set the authenticator class to be this HubAuth, which uses the HubAuth that Min talked about, plus a few additional things that it does that are nbgrader-specific.
C
You'll see it says "adding service formgrader-course101 to proxy", and so we can now access that at /services/formgrader-course101. So if I come to my browser, at the IP address that the hub is running at, /services/formgrader-course101, we'll see it's running. There's nothing there, because we haven't actually done anything yet, but JupyterHub has started it up for us. So I'm just going to quickly go through: we have our instructor account here, and there's our course files here.
C
The students have this Assignments tab, where they'll see all of the assignments that have been released and are available to them. I actually already downloaded problem set 1 earlier, while preparing this demo, but normally it would appear here as an assignment to fetch. I'll go ahead and submit this, and log out and log back in as the instructor.
C
Now I can come back here to the formgrader; you see the assignments there in the list, and we can go through and start grading. So with the services API this is a much easier process than it was before, and we're continuing to make improvements to the way the formgrader integrates with JupyterHub, so that it'll continue to be an easier thing in the future. We're planning on actually having it be an extension to the notebook rather than a separate process itself. Thanks.
E
This particular deployment is targeted at small-to-medium groups of trusted users working on a single server. Here we're teaching up to maybe a hundred students in a given quarter, and we can manage that on a single large server. Those students are more or less trusted, so we'd be willing to give them a shell account on the UNIX server. So, in terms of the constraints that we've chosen:
E
This again is a single server. We're using nginx as a front-end proxy for serving the static assets, and also as a termination point for SSL, and the entire configuration of the server is done using Ansible scripts. You can either set up SSL using your own certificates, if you have them, or we also have the option to use Let's Encrypt, which does make it quite a bit easier. It's important to note that this particular configuration does not use Docker or containers in any way.
E
In my experience so far, if you're able to run on a single server and you need to use a tool like nbgrader along with it, Docker complicates things more than it simplifies at this point. As Jess just mentioned, we're working on simplifying aspects of nbgrader, and I think that will be easier to deploy with Docker eventually. The prerequisites are that you have an empty, fresh Ubuntu server running the latest stable release; a fairly recent Ubuntu version works.
E
Fine, you've already set up your local drives that are going to be mounted; you've got a directory, or a drive that's mounted, for home directories. The big thing is a valid DNS name; to really set this up, you need that. Optionally, an SSL certificate; again, you can use Let's Encrypt if you don't want to purchase your own certificate. And then Ansible 2.1 or greater.
E
So I wanted to switch over here and show you the file that we have that encodes the top-level variables for this setup, and here are the required settings. You would specify the path to the home directory for all users; the JupyterHub admin users, which directly maps onto the underlying JupyterHub config; an initial whitelist of JupyterHub users; and then we have two optional kernels, the R kernel and the bash kernel.
E
By default, this configuration will install the Python (IPython) kernel, and then you can optionally install the R or the bash kernel on top of that. Then you need to create this proxy auth token, which is part of the underlying security configuration of JupyterHub. And then there's this one option: by default, JupyterHub will kill the single-user notebook servers and the proxy and the hub; basically, it will kill everything when you restart it.
E
You can set cleanup on shutdown to be false, and it will actually leave the single-user notebook servers, the proxy, and all of that running. That's really nice if you need to make changes and reconfigure the server, but you don't want to interrupt the people working. We have a list of default packages that will be installed; we're using Miniconda to manage the Python packages, so here's our default list, but you can add more to that.
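That "leave everything running on restart" behavior maps onto JupyterHub's cleanup options; a minimal fragment, with option names as of the 0.7 series:

```python
# jupyterhub_config.py -- keep user servers and the proxy alive
# across hub restarts instead of tearing everything down
c.JupyterHub.cleanup_servers = False
c.JupyterHub.cleanup_proxy = False
```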
E
You can also add pip packages, and then for R you can have a list of CRAN packages that you can customize. In terms of optional configuration, the default is to use just UNIX passwords, and you can optionally use OAuth, which we typically do for real courses. So you set the OAuth option to true, and then the OAuth client ID and secret would be something that you get from your OAuth provider, for example from GitHub or Google.
E
Here is the nbgrader configuration. We've updated the nbgrader configuration here to reflect the upcoming release of nbgrader that's based on services, and it also supports multiple courses and multiple instructors. The cull-idle-servers service we're now running as a managed service; this will basically look at single-user notebook server activity, and it will stop those single-user notebook servers if a user hasn't been using them for some specified amount of time.
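Running the cull-idle script as a managed service looks roughly like this; a sketch following the JupyterHub examples, with the timeout and script path as placeholders:

```python
# jupyterhub_config.py -- `c` is supplied by JupyterHub at load time.
c.JupyterHub.services = [
    {
        'name': 'cull-idle',
        'admin': True,  # needs admin rights to stop other users' servers
        'command': ['python', 'cull_idle_servers.py', '--timeout=86400'],  # ~24h
    },
]
```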
E
There are some other optional settings here that I'm not going to go into. In the last little bit here, I'll show you what our live server looks like for the running quarter. So this is the admin page, for one of the admin users on JupyterHub, and you can see for this course we're using GitHub OAuth. You can see here all of the students who are running, and then, I forget exactly what I have the cull-idle timeout set to, but it's around 24 hours, and so all the other users' notebooks have been stopped.
E
Another thing that's important to note: even though we've optimized this configuration for also running nbgrader, the default configuration file here actually doesn't install nbgrader (the nbgrader flag is false), and so these Ansible scripts would be very well suited to any small team of people wanting to work on a single server. And by small, you know, whatever fits on a single server like this.
F
This is Carol. I'm going to just make a quick comment: with the Ansible scripts, once I had the domain name and the server itself set up, it took only a matter of minutes to deploy using Let's Encrypt, and it was really nice. Something that Brian suggested, which I think I would like to do, is make a cookiecutter to make it even simpler, and also get some additional documentation on how to get a domain name and server, but that's a little further down the road. It looks great, Brian. Thank you.
E
Yeah, the thing that takes the longest, once you have a server up with a domain name, is just installing all the packages. If you, for example, comment out most of the Python packages and R packages, the deployment takes, I don't know, probably 10 minutes or less. It's very, very fast.
A
Right, so the next talk on the agenda is from Yuvi, and Yuvi is going to talk a little bit about the Berkeley Data 8 architecture overview. The Data 8 initiative, maybe you can explain it in a little more detail, but it's an initiative here at Berkeley to work with undergraduates to provide them with core competency in data science. So it's a really cool course that they can take.
G
Yes, that's better. So I'm Yuvi; I work for the Wikimedia Foundation now, but I've been helping out with UC Berkeley's JupyterHub deployments for a while. This is for the spring 2017 Data 8 class. A little bit about Data 8: it's a really large course; I think this time there's about 650 students enrolled in it. It teaches the basics of programming, but with a bent towards data literacy. The core thing that they care about is to allow people from all backgrounds.
G
So they have lots of people from fields that are not traditionally considered computer science, such as urban planning or literature, taking this class. And there are also connectors: they have, for example, a neuroscience connector that takes the techniques they learn in the Data 8 class and shows how they can apply them in neuroscience, or, you know, the same thing for literature; I think they were analyzing Shakespeare or something along those lines, I don't entirely remember, but there are definitely people doing things like that.
G
So this is going to be more of an overview, with stopping points so people can ask questions later, rather than an entire deep dive. And before I start, I also want to make sure that it's not just all me; there's a fair amount of other people involved in it. Ryan Lovett from Berkeley was very involved, and there's also a group of students at Berkeley who are involved. So, not just me. Okay.
G
So these are our requirements from when we started designing this. A thousand students running concurrently was our target. We knew we wouldn't hit that; on average, it's going to be maybe 200 to 300 students, but we wanted to be able to design for up to a thousand students using this. And we wanted it to be completely reliable: we don't want to have any failures of spawning, we don't want to have downtime even for an upgrade, and we want to make sure that when we do deploys...
G
We know that it's just going to work; there is no uncertainty here. This is a solid piece of infrastructure that you can take for granted, so you can focus on all the other things that your course needs to focus on, and this is just solid underlying infrastructure. We also wanted to support multiple hubs; there are many courses that are using JupyterHub, and they have different requirements.
G
This is also not tied to any single cloud provider. We wanted to be able to use Google or Microsoft or AWS, depending on who is giving us free money at the time. We also wanted it to be completely reusable by other people: we don't want anything Berkeley-specific, and this also helps us keep our code clean. This current setup also grew out of Wikimedia's JupyterHub setup that I was working on, so that also helped make it a bit agnostic to the particular specifics of the use case.
G
We also wanted it to require very minimal upkeep and ongoing maintenance: we did not want to spend more than, I don't know, an hour every week of sysadmin time to actually keep this up, and we wanted to make it as self-serve as possible for the instructors themselves. And these were the design principles, slash, goals: the principles that we set out in the beginning to guide the decisions we made as we designed this. We wanted it to be a hundred percent reproducible.
G
Whenever I want, in fact. It takes about 20 minutes now to bring up a new hub, and we can be very confident it will just work. We would delete and recreate the hub almost every day until the start of class, so before the start of class we had just re-created it from scratch, and we know exactly what's in it.
G
We also wanted to scale up and down easily. When we wanted to add extra capacity for, say, 300 users, it's just one single command, and we've also tested this very well, so we know that it just works. But we also wanted to scale down. I think if you have more than about 30 students this is actually pretty good; below that count, you know, Brian's deployment is probably better, but we wanted to not require...
G
[inaudible] We also wanted it to be self-healing, and we wanted to reduce hacks as much as possible. If we have to do something manually to fix something, we'd only do that once; the second time it happened, we would make sure to spend the time to properly fix it, so we don't have to hack it again, and we've been fairly good about enforcing that. Right, so we picked Kubernetes.
G
Why Kubernetes? It provides higher-order primitives, which I think is the fundamental reason we picked it. The analogy I'd like to make is: Kubernetes is to Docker as Ansible or Puppet is to Bash. You can do all the things you can do with Kubernetes with just Docker, but you have to build a lot of scaffolding around it.
G
The same way, you can do all the things that you can do with Puppet with just Bash, but you have to build a lot of scaffolding around it. If you're just doing something simple, Bash is probably fine, but if you want to do something a bit more large-scale, with more process around it, then Ansible or Puppet or something like that is going to be much more useful, and I think the same is true of Docker and Kubernetes.
G
It also runs on many providers and on-premise installations, and it has wide adoption in industry, so we basically get a lot of things for free. When we started doing it, we were like: oh, we want this feature, we want that feature, and we always found that there was some random company that was building it, and the reason it's open source is because they wanted it for their own cluster and then were able to contribute back upstream, and that's really nice.
G
There's the Borg paper that Google released, which explains a lot of their reasons for making the things they've made in Kubernetes, and it's also a really nice read even if you don't have a background in systems, so I highly recommend people read it if they haven't. This is the architecture diagram that I put up yesterday. It's a little crowded, but the three red boxes are nodes.
G
So this is the conceptual view of one Kubernetes cluster that has three nodes running, with about 16 gigs of RAM each. I don't even have to know that there are three nodes running; all I, as the operator, care about is that there's about 48 gigs of RAM. So if I give each student two gigs of RAM, I can run up to about 24 students here. And you will also note that the hub and the proxies also run in the same cluster.
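The back-of-the-envelope capacity math here (nodes times RAM per node, divided by a per-student guarantee) can be written down directly; the numbers are the ones from the talk:

```python
def max_students(nodes, ram_per_node_gb, ram_per_student_gb):
    """How many students fit if each is guaranteed a fixed RAM slice.
    Ignores overhead from the hub/proxy pods and the OS for simplicity."""
    total_gb = nodes * ram_per_node_gb
    return total_gb // ram_per_student_gb

# three 16 GB nodes, 2 GB per student
print(max_students(3, 16, 2))  # -> 24
```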
G
This is one of the advantages of using Kubernetes versus Docker and Ansible separately: everything is in the cluster. It's not like the hub process is separate and we need to make it connect to Docker somehow, or that the hub and proxy need their own high-availability story; it's all in the same cluster, so you have only one thing to manage, and that's the Kubernetes cluster. And you'll also see that there are mixed colors in there.
G
Each of these boxes is either a user's container or a hub container or a proxy container, and they are all mixed. We have one cluster on which we are running, in this case, two courses, but at Berkeley right now we're running something like four different hubs on the same cluster. And this number just scales up; it's very trivial to add more, so we'll probably end up running almost all of the courses on one cluster.
G
You will also note that each of the users has a volume attached to them, so each user gets their own disk of 10 gigs. What happens is, when a user logs in, we actually ask Kubernetes to create a persistent disk for us. In the case of AWS this will be EBS, or it will be a Google Cloud persistent disk, or whatever your cloud provider has. We create a new disk specifically for that user, and then we attach it to their container.
G
This has a lot of advantages. If you have a centralized storage pool and you have many users, it's very easy for one user to basically eat all the I/O and then make it dramatically harder for the other users to access their servers, and it's also very easy for that to be a single point of failure: things just start failing if that single thing fails. So we have this instead, and scaling this up is really easy.
G
You know, push it into git, and then I run the update command, and it will do the minimal set of changes required to apply that, and it will also happen in a very nice fashion that doesn't destroy data or cause downtime. So this is our deployment workflow: we have a production cluster and a staging cluster, and all hubs are deployed from the same configuration to both the production and staging clusters.
G
We haven't had any outages that we haven't caught in staging first, and all configuration is kept in a versioned git repo, so we can always go back: if you deploy something to staging and it doesn't work, we can always go back, or make sure it doesn't go forward, and it's fairly easy to experiment with that. We also have no user disruption during deploys; the proxy runs in a separate pod.
G
That pod is not touched during deploys, so for single-user notebooks there is no disruption; there's not even a disconnect of the notebook kernel. So we can just do this at any point in time, and it's just fine. We have a system called Interact that we use to distribute notebooks to students. Time check: do I have time for a demo, like a minute? Okay.
G
So, a quick demo. This is the website for Data 8; it's actually open to everyone. It has better styling, but my computer's broken, because Linux. So if you look here, it has a list of lectures, the lecture videos, the readings, and it also has assignments. If I click this, say homework 01, it actually links me to our hub, into which I have logged in.
G
So it's now doing a git pull, and all of the things in that homework are now in here and displayed to me, and I can just click the notebook and it will show me what's in there, and I can start working on it. This takes a while to load, because, again, my computer is broken. Right, so that's it, and this works fairly reliably, and people use their GitHub accounts. There's also a git repository for keeping all assignments and other homework-related things as well.
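The clone-or-pull behavior behind those homework links can be sketched like this; a toy stand-in for what Interact does, not its actual code, and the function name is made up:

```python
import os
import subprocess

def fetch_assignment(repo, dest):
    """Clone `repo` into `dest` on the first click; on later clicks,
    just pull so new files arrive without clobbering the checkout."""
    if os.path.isdir(os.path.join(dest, ".git")):
        subprocess.run(["git", "-C", dest, "pull"], check=True)
    else:
        subprocess.run(["git", "clone", repo, dest], check=True)
```

The real system also has to handle merge conflicts with student edits, which this sketch ignores.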
G
So it's very easy for them to keep track of what's going on. This is also fairly generic; we want to rename it from Interact, because there's already, you know, an interact project, to something a little bit more descriptive, and then write documentation about it at some point. This is also great because it means that you don't need one big file system.
G
You don't need NFS or something for all the users to be on the same file system; each of them can get their own separate disks, and you can still use this, and it's still fine. Let me go back to this. Okay, so that's the quick demo. Next: assignment collection, autograding, and instructor services. There's a lot of work that's actually being done on this, but I don't know much about it.
G
What happened is, Berkeley CS already had a lot of autograding stuff that they had built over the last few years, but they did not use notebooks for any of that. It's called okpy; you can go to okpy.org to check it out, and it's actually open to everyone, not Berkeley-specific. Over the last few months, the same people who had built it have added notebook support for it.
G
So there's not much documentation about it, and I actually haven't seen it in action, but from talking to the people I know who used it, it's actually fairly good. It does fairly nice autograding, it doesn't require a shared notebook server, and it also has fairly detailed ways of providing feedback to students as they're working, rather than having them wait until the end.
G
Okay, right, so that's about the end. The next step is we want to do a better autoscaler. Right now we have a super simple autoscaler: when the cluster starts to become about 90% full, we just add more nodes automatically, so we know we will never end up with it being too full. But we want to make it more intelligent by integrating it with the culler.
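The "add nodes at about 90% full" rule is simple enough to state directly; a toy model, not the actual autoscaler code:

```python
def nodes_to_add(used_gb, node_gb, current_nodes, threshold=0.9):
    """Return how many nodes to add so usage falls back under `threshold`
    of total capacity. A toy model of the simple autoscaler described."""
    added = 0
    while used_gb >= threshold * (current_nodes + added) * node_gb:
        added += 1
    return added
```

With three 16 GB nodes, 40 GB in use stays under the 43.2 GB threshold and adds nothing; 44 GB in use triggers one more node.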
G
So we can optimize resource usage and only create new nodes when required, but also opportunistically drain nodes, so that nodes can stay up for as long as we want them to, and when we do delete them we can reclaim a node and save some money. We also want to deploy it to at least one other institution, to make sure that we didn't bake in, you know, any Berkeley-specific things; I'm hopefully going to do this for Wikimedia sometime soon.
G
We also want to do more hardcore security and performance tests. We did do performance tests before we started this: we had 125 simultaneous users logging in and starting servers on the hub at the same time, and we looked at the stats, and it totally worked fine. We couldn't do more, simply because the infrastructure I had built to simulate these users was breaking down before the hub itself was having any problems.
A
Alright, great, thank you so much everyone. For those of you who are joining a little bit late, we have a repo with our agenda; it's on GitHub, and Carol actually created a bitly short link for that. The link is HTTP, and then it's bit.ly/jhubtech: just the letters j, h, u, b, and then t, e, c, h.
B
Right, good. So, talking a bit about where things are headed in the next release: the next release will mostly be a couple of smallish things in the JupyterHub architecture. One is abstracting the proxy API. Right now we mostly rely on a particular proxy implementation, configurable-http-proxy, whose main feature is being able to be updated without dropping connections and properly supporting WebSockets.
B
So we can define a simpler spec for what a proxy in front of the hub needs, and then we can have a Python API, so that the proxy implementation can be another one of the things that is swapped out, just like the Authenticator and Spawner. Then we can use Kubernetes ingress, or nginx, or Apache, or anything that meets the requirements: it had better support WebSockets, and it had better be updatable without dropping connections.
B
Another feature is allowing multiple servers per user, for contexts where the hub is exposing a variety of computational resources. In particular, this comes up when the hub is the primary access point for a cluster, such as at UCSD or at the Minnesota Supercomputing Center, where they have a variety of profiles: you want to start a GPU job, or a job with this much memory, or this many nodes.
B
It may be useful to say: I want to start that job and leave it running, and then I want to start another job with a different collection of resources. Right now that requires shutting down the first one before you can ask for a different one. Christian Barra has been helping out with implementing multiple servers per user, relaxing the assumption that each user has exactly one server, which is a small change, but one that has implications all over JupyterHub's code base.
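The relaxation being described, going from exactly one server per user to a named collection, can be illustrated with a toy data model. The class and method names here are invented for illustration; they are not JupyterHub's actual internals:

```python
class User:
    """Toy stand-in for a hub user model with named servers."""

    def __init__(self, name):
        self.name = name
        self.servers = {}  # was effectively a single `self.server = None`

    def start_server(self, server_name, profile):
        # previously, starting a second server meant stopping the first
        if server_name in self.servers:
            raise ValueError(f"{server_name} is already running")
        self.servers[server_name] = profile

    def stop_server(self, server_name):
        del self.servers[server_name]

alice = User("alice")
alice.start_server("gpu", {"gpus": 1})
alice.start_server("bigmem", {"mem_gb": 64})  # no need to stop "gpu" first
```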
B
So it's a bit of a tricky one, and it'll take a while to get finished. An important thing: for the courses and deployments where a single server per user is really important, and where getting JupyterHub out of the way as quickly as possible matters, we want to make sure that we still serve that case well, rather than making it increasingly complicated for the class-style deployments as people use services and talk about sharing notebook servers with different groups of users.
B
I think it will probably make sense to move the way JupyterHub handles its authentication to basically implementing OAuth itself. There are Python libraries for doing this, so I've gotten started on essentially making JupyterHub itself an OAuth provider, and then using OAuth to handle the authentication between services.
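For comparison, the current (pre-OAuth) way a service asks the Hub who owns a token is a single authenticated REST call. This sketch only builds the request; the endpoint path follows the 0.7-era API and the default internal API address, so treat both as assumptions:

```python
from urllib.request import Request

HUB_API = "http://127.0.0.1:8081/hub/api"  # assumed default internal API address

def auth_request(user_token, service_api_token):
    """Build the request a service sends to ask the Hub who owns user_token."""
    req = Request(f"{HUB_API}/authorizations/token/{user_token}")
    # the service authenticates itself with its own API token
    req.add_header("Authorization", f"token {service_api_token}")
    return req
```

With the hub acting as an OAuth provider, this kind of direct token lookup would be replaced by a standard OAuth handshake between the service and the hub.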
B
We have this notion of having to isolate cookies by path prefixes, so that a user can't compromise their own server, get other users to visit that server, steal their cookies, spoof them, and in general compromise the whole system by way of compromising their own server. Using OAuth will allow us to have a simpler system that doesn't have that problem in the first place.
B
The piece that still sticks, the one Brian alluded to (especially if you're using Docker or distributed file systems), is the assignment distribution system, which currently relies on the file system: you need to be able to distribute assignments to the users and then get the assignments back; students need to turn things in, and so on. And so hubshare is really, really simple.
B
We're still figuring out whether it makes sense to do the entire thing as a simple REST API or to use WebDAV for the storage API; it's still not 100% clear which way would be preferable, but neither one should be super complicated. The target use cases are especially the nbgrader needs of being able to push and pull files to students, but also, in a shared computing context, just generally being able to say: alright,
B
I made a thing, and I want other people to be able to grab it and take it to their own servers. Specifically, this is not meant to address things like the real-time collaboration work: real-time collaboration is about sharing a running notebook server, with access to some particular context, that you can both log in to, and most of the pieces needed for that are already done.
B
It's just the single-user server stuff that needs that. It's this sharing model, for when the users are meant to stay isolated from each other, where there's a gap for an implementation, and that's what we're looking to address. And yeah, that's a good summary of our immediate horizon for JupyterHub stuff.
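Whichever transport wins (plain REST or WebDAV), the storage model described for hubshare is small: publish a file, list shares, fetch one. A dict-backed sketch with invented names, standing in for whatever the real storage backend turns out to be:

```python
class ShareStore:
    """Toy stand-in for hubshare's storage: publish, list, and fetch files."""

    def __init__(self):
        self._files = {}  # (owner, name) -> content

    def publish(self, owner, name, content):
        # e.g. an instructor pushing an assignment, or a user sharing a notebook
        self._files[(owner, name)] = content

    def list_shares(self, owner):
        return sorted(n for (o, n) in self._files if o == owner)

    def fetch(self, owner, name):
        # e.g. a student pulling the assignment to their own server
        return self._files[(owner, name)]
```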
H
Hey, I have a question. Can you guys hear me? Yep? Okay. So you mentioned integration with external proxy servers, right: is that something that's already in the 0.8 release, or is it being worked towards? For example, if I want to split out the default proxy, the configurable-http-proxy, and replace it with something that is more native to the Kubernetes environment, is that something that's going to be possible?
B
That's the idea. It's not implemented yet; the first piece of it is the spec. Like with the Spawners, we define the API for it: you need to write a little Python class that implements adding routes, removing routes, and getting a list of current routes, and that's basically all the proxy needs to do. So the first piece that needs to be done is defining that API, and then, once that's done, JupyterHub will, just as it does with Spawners, ship just the default implementation of that.
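The pluggable proxy API described here (add routes, remove routes, list current routes) is small enough to sketch. Method names are illustrative, since the spec wasn't finalized at the time:

```python
class DictProxy:
    """In-memory stand-in for a proxy backend (CHP, nginx, an ingress, ...)."""

    def __init__(self):
        self._routes = {}

    def add_route(self, routespec, target):
        # e.g. routespec="/user/alice/", target="http://10.0.0.5:8888"
        self._routes[routespec] = target

    def delete_route(self, routespec):
        self._routes.pop(routespec, None)

    def get_all_routes(self):
        return dict(self._routes)
```

A real implementation would translate these calls into configurable-http-proxy's REST API, an nginx configuration reload, or Kubernetes ingress objects.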
H
Okay, cool. I don't know if I'd be able to contribute to that, but that's one of the use cases that I have: I have a similar Kubernetes cluster where I'm launching the notebook servers along with the hub and the proxy itself, but you want to get rid of the configurable-http-proxy and plug in the ingress. So, got it. There's a sample project which actually tries to integrate nginx with JupyterHub, by providing the additional routing functionality in nginx. Yes.
G
It is kind of like a Franken-monster, because it pretends to be the Node.js thing when it is not. That's the reason we are abstracting this out: so that it can become a normal, sane, working piece of software rather than the Franken-monster it is now. But yeah, you can use it right now. It doesn't work for services, because I didn't add that functionality, but it does work for users, and we even ran it all last semester.
B
Yeah, that's Dan's work at Jupyter, but I think it's just a JupyterLab extension, yeah.
D
I've been trying to keep up with the hubshare work; I'm reading the spec, though I'm doing a poor job of it. But I'm wondering, has there been any discussion yet of discovery? You know, if it's going to follow this model, how do the shared artifacts, whatever they are, get discovered? Is there talk about searching them, those kinds of things, or is that not discussed yet?
B
We haven't talked too much about that, but I think we're imagining this at a fairly small scale, where basic search and filtering, kind of on the order of a GitHub repo listing, is adequate. But if you want to chime in with notions of tags and things like that, or if you have sharing goals that you'd like to see addressed by it, that would be appropriate. There are instances like the CERN deployment.
B
So they already have a fairly decent file-sharing model, and it operates entirely outside JupyterHub's understanding, which is my favorite place for this stuff, right. And so I think we're aiming for the scope of hubshare to be the fairly basic mechanisms of saying, you know: I made a thing, make it available to my fellow folks; and then we'll make sure that things are discoverable and linkable, with simple search.
G
I had a question about hubshare, which is: will this be integrated into JupyterHub, in the sense that JupyterHub knows that hubshare is a thing that exists? Or is it just one implementation that will be easily swappable? It's the same thing as with the proxy, right: with configurable-http-proxy, I could replace it now if I mimicked the API, but JupyterHub depends on that API. So does JupyterHub...
B
Yes: if you look at the PR, the PR has exactly that in mind. So I had two things in mind: one was cull-idle-kernels, and one is cull-idle-servers. There's a last-activity timestamp on each kernel, and then there's a single last-activity timestamp for the REST API as a whole.
B
You can make a single request to get all of those, I believe, and the last activity of the server is essentially the max of those timestamps. So yeah, you can absolutely do that. I even said in the initial PR that I would follow it up with another PR adding cull-idle-kernels, but I never did; it should be very, very similar to the cull-idle-servers script, though.
B
The cull-idle script exists as an example of what arbitrary external applications can do with JupyterHub. What it does is just see which servers are idle and shut them down; you could have your own external thing that looks at any other kind of resource-usage properties of user servers and shuts them down with whatever criteria you like. So culling idle servers is not something in JupyterHub itself; it's entirely external, and JupyterHub provides enough information to make those decisions.
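The external culling logic being described boils down to comparing each server's last-activity timestamp against a cutoff. A sketch of that decision step (the user-model fields mirror the Hub's REST API of that era, but treat the exact shapes as assumptions):

```python
from datetime import datetime, timedelta

def servers_to_cull(users, now, timeout=timedelta(hours=1)):
    """Given user models from the Hub's users API, pick servers idle > timeout."""
    idle = []
    for user in users:
        if not user.get("server"):
            continue  # no running server, nothing to cull
        last = datetime.strptime(user["last_activity"], "%Y-%m-%dT%H:%M:%S")
        if now - last > timeout:
            idle.append(user["name"])
    # the real script would now ask the Hub's REST API to stop each of these
    return idle
```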
B
Those arguments are passed directly to the script, so JupyterHub doesn't understand what the frequency or the period mean; those are interpreted by the script. The script just asks JupyterHub: show me the latest timestamps. Then, internal to the script, it works out which servers have been idle for a given period, and makes requests to the Hub REST API to shut them down.
F
One way to do that, which Fernando and I talked about back in San Diego before the year ended: one of the things I was going to do is probably write a script to pull stuff from GitHub, so at least if people were using it on GitHub, and were using the name JupyterHub, we'd have some sense of what's being used out there.
A
There's also a Google sheet, started by Paco Nathan, of uses of Jupyter in education, and I believe that also includes some JupyterHub deployments. That's on the Jupyter education mailing list; I usually just search for the Google sheet. We're moving toward having a more comprehensive list, but that's what exists right now. Since folks have asked for it, I can try to find that and link it into this repo, so that it's more accessible. Good question.
G
For both of the Wikimedia deployments, we basically guarantee about two gigs of RAM per student, and so, resource-wise, it just comes down to how many students we want to support. We currently support about four hundred concurrent students, so we have, I think, about seventy-five boxes or something, with fifteen gigs of RAM each. But it's fairly flexible, so we scale down during the weekends, for example, because we know that most people aren't going to be using it.
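The sizing above is simple arithmetic: a per-student RAM guarantee times concurrent students, divided into node-sized chunks. Using the numbers quoted (2 GB per student, 400 concurrent students, 15 GB nodes):

```python
def nodes_needed(concurrent_students, ram_per_student_gb, node_ram_gb):
    """Nodes required to honor the RAM guarantee for everyone at once."""
    total_gb = concurrent_students * ram_per_student_gb
    # round up: a partially used node is still a whole node
    return -(-total_gb // node_ram_gb)
```

With these inputs that comes out comfortably under the seventy-five boxes mentioned, which leaves headroom for the hub, the proxy, and per-node overhead.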
B
Memory is usually the limiting factor: how much memory do your users need? If you're doing some data-analysis thing, then you need at least, you know, 2, 4, 8 gigs each, and that adds up pretty quickly. If they're doing really basic stuff, you can get away with half a gig, or even a quarter, per user, and it works out okay. So it depends a lot on what the students are actually doing, because of course it's easy for them to accidentally request a bunch of memory, yeah.
G
You know, we're also keeping close track of how much RAM students are actually using over the semester, and I think that will help us determine the allocation more accurately for the next semester. And also, with Kubernetes, you can overcommit: if you know that most of the time your students are going to use only one gig of RAM, and only sometimes will some of them use more, you can overcommit appropriately.
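Overcommitting, in Kubernetes terms, means scheduling by a low per-user request while allowing a higher limit. A toy calculation of what that buys, with illustrative numbers:

```python
def node_capacity(node_ram_gb, request_gb, limit_gb):
    """Students a node schedules (by request) vs. what fits if all hit the limit."""
    scheduled = int(node_ram_gb // request_gb)
    worst_case = int(node_ram_gb // limit_gb)
    return scheduled, worst_case
```

The gap between the two numbers is the bet you are making that not everyone spikes to their limit at once.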
A
Alright, if there aren't any more questions, we may go ahead and wrap up early. Sounds good, folks. I really want to thank Min and Jess and Carol and Brian and Yuvi for leading the discussion today; it was really great, I think. I also want to thank the core team for joining in today, and hopefully this helps inform your work. And yeah, that's it. Thanks again; I'm going to end the recording here, and I hope to see everybody soon.