From YouTube: Jupyter Community Call - June 25, 2019
Description
Recording from the Jupyter Community Call in June 2019.
The notes from this call can be found here: https://jupyter.readthedocs.io/en/latest/community/community-call-notes/2019-june.html
Read more about these calls in Discourse:
https://discourse.jupyter.org/t/jupyter-community-calls/668
A: Okay, we're recording, so hello everybody! Welcome to the Jupyter community call for the month of June. We've been doing these the last few months now, and this is really just a place to come in and make announcements to the community on things that you've been working on, or things you'll be working on moving forward; to share cool demos of something that you have at your business or your university or research institute or whatever; or if you just have fun announcements you want to share with the Jupyter ecosystem.
A: Let me see here. Going through our agenda: we start these things with a set of short reports and celebrations. If you haven't been directed to the agenda that we have, we keep a HackMD file with all of our items, so if you have something you want to add to it on this call, feel free to do it. If you have a demo or anything you want to share, you can go ahead and add yourself to the agenda right now. Otherwise, we'll go ahead and get started.
A: Also feel free to add yourself to the attendance list at the top of that agenda. Somebody is adding the agenda link to the chat — perfect, yeah. Cool. So, just going through the list of short reports: I don't know if Carol is on the call — I don't see her — so I'll go ahead and do hers.
A: The Jupyter Binder team announced a new member who got to join the team, Sarah, so we're congratulating her out loud. Definitely go check out the JupyterHub/Binder team page to see more information about what it means to be a member. Oh, there's Carol — hi, Carol! So yeah, congratulations to Sarah! It's awesome that she's joining the team; she's been doing some great work for Binder and Jupyter. So yeah, go see that page.
A: The second announcement we have is — you can see I'm in two calls here, but this team right here is the Jupyter Cal Poly team. These are the new interns for the summer, so we wanted to welcome them to the team. These are all Cal Poly undergraduate students, ranging from third to fifth year. I guess we can do a little intro — you all can just wave, you don't have to speak, we won't put you on the spot — but that's Markel, Marisa, Jo there in the corner, Isabella, and Javi right here. So welcome to our interns.
B: Thank you for doing that — I was a little late this morning. I'm just happy that she's on the team and everything, so no worries. Thank you for doing that. Awesome.
A: Do you guys hear me okay on that end? So I'm just announcing — I don't know if it's been announced on the community call yet — I'm collaborating with a few different people in the community on rich context, and we're calling it Rich Context. What it is, is a suite of tools surrounding collating data and collaborating in teams.
A: Jupyter software should be able to have metadata on it — so maybe you have a dataset that you want to know some things about without opening the dataset directly; you can browse through and look for that. And then the final project is a telemetry system that will allow people working with sensitive data to sort of populate matches between datasets without peeking at what the people are doing. So it's allowing for people working with sensitive data, where you may not be allowed to have access to a dataset.
A
Those
data
providers
can
grab
some
information
on
how
those
data
sets
are
being
used
without
looking
through
those
data
sets
themselves.
So
so,
basically,
it
sort
of
started
around
sensitive
data
and
is
working
outwards
from
there.
So
a
lot
of
it
has
to
do
with
like
census,
data
and
things
that
are
protected
and
then
scaling
that
out
to
general
data
sets
like
maybe
large
steps
like
kaggle
could
import
their
metadata
indirectly
into
Jupiter
software
environment,
so
pretty
exciting
stuff.
It's
still
in
this
infancy,
but
we're
charging
ahead
on
that
and
having
a
lot
of
fun.
A: So if you go to jupyterlab/ — it's a sub-project of JupyterLab right now, and it's called commenting and annotation, I believe; it might have been renamed. I'll link the repo for you right now. Yeah, thanks — put it in the HackMD. I'll put the links to all the repos that are open right now.
A: Okay, so with that we'll move into our main agenda. To date, the way we've been running these main agenda items is that whoever is in charge of their particular item gets 10 minutes or so to share whatever they would like. They can share their screen and give a demo, or just treat this like a nice 10-minute flash talk, and then we'll open it up for questions or comments at the end. So anyone on the call, feel free to jump in at that time.
A: Maybe save your questions until that person is done, or start adding them to the agenda if you prefer to log them. So with that, we'll go ahead and move to our first one, which I believe is Kevin, who's going to give us a brief overview of the Enterprise Gateway project.
E: Thanks a lot. Let me share my desktop here. All right, so you probably see the agenda — yep, that looks good. Okay, great. So one of the corners of the Jupyter ecosystem is Kernel Gateway and Enterprise Gateway, so I thought I'd give a brief overview of Enterprise Gateway. But we really can't talk about Enterprise Gateway without talking about Kernel Gateway first. It's probably been four or five years since Kernel Gateway was first developed, and the idea behind Kernel Gateway is that it disassociates the notebook processing from the kernel process.
E: What this gives you is the ability to separate your data scientists or your analysts from where their kernels are running — it essentially moves the kernel closer to a compute cluster, if you will. So that's what Kernel Gateway provides. Now, the way that's done is through a notebook server extension called NB2KG, and what NB2KG does is essentially intercept the kernel management requests for that notebook and forward them to a Kernel Gateway.
E: So the kernelspecs REST API and the kernels REST API are forwarded to Kernel Gateway; that starts a kernel on the gateway, and then a WebSocket is set up to forward to the ZMQ ports on the kernel. To set up NB2KG — you know, it's a server extension — you've got to pip-install nb2kg, you've got to enable the server extension, and then you've got to start your notebook with, you know...
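The interception being described can be sketched in a few lines. This is a conceptual illustration only — not NB2KG's actual code — showing the idea of re-targeting kernel-management REST paths at a gateway base URL (the function name and path list are assumptions made for this sketch):

```python
# Conceptual sketch of the NB2KG idea: kernel-management REST calls that
# would normally be handled by the local notebook server are re-targeted
# at a Kernel Gateway base URL; everything else stays local.
from urllib.parse import urljoin

# Kernel-management endpoints that get forwarded (illustrative list).
KERNEL_API_PREFIXES = ("/api/kernels", "/api/kernelspecs")

def to_gateway_url(local_path: str, gateway_base: str) -> str:
    """Map a kernel-management path to the gateway, or return it unchanged."""
    if local_path.startswith(KERNEL_API_PREFIXES):
        return urljoin(gateway_base.rstrip("/") + "/", local_path.lstrip("/"))
    return local_path  # non-kernel requests stay with the local server

print(to_gateway_url("/api/kernels", "http://gateway:8888"))
# → http://gateway:8888/api/kernels
print(to_gateway_url("/api/contents", "http://gateway:8888"))
# → /api/contents (contents stay local)
```

The point of the sketch is the split: notebook-file operations never leave the local server, while everything under the kernel APIs is proxied.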
E: ...a set of class overrides, essentially, along with a URL. One of the things I wanted to mention today: in notebook 6.0, which is in the process of getting built and released right now, we've embedded NB2KG into the notebook server, and so none of this is necessary — other than, when you start your notebook server, you give it a gateway URL parameter for where you want your kernel management operations to be routed, and they would be routed to that gateway server.
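A minimal sketch of launching a notebook server pointed at a gateway, per the notebook 6.0 change described above. The `--gateway-url` flag matches the notebook 6.x built-in gateway support as I understand it, but treat the exact flag and the URL as assumptions to verify against your notebook version; the helper function is invented for illustration:

```python
# Sketch: build the command line for a notebook server whose kernel
# management is routed to a gateway (flag name assumed from notebook 6.x).
import shlex

def notebook_cmd(gateway_url: str) -> list:
    """Return argv for launching a gateway-routed notebook server."""
    return ["jupyter", "notebook", "--gateway-url=%s" % gateway_url]

cmd = notebook_cmd("http://my-gateway:8888")
print(shlex.join(cmd))
```

With this in place, no separate nb2kg install, server-extension enabling, or class overrides are needed — the single URL option does the routing.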
E: So what we did with Enterprise Gateway was introduce the notion of remote kernels, essentially. With Enterprise Gateway we can distribute the kernel processing across your compute cluster and increase the scalability and resource utilization of your cluster quite a bit, so it ends up being a big win for larger installations that do kernel-intensive operations.
E
This
distributed
process
proxy
is
essentially
an
SSH
means
of
removing
your
kernel.
We
don't
pass
a
connection
file.
Instead,
we
convey
the
kernel
ID
to
to
a
launcher,
and
then
we
we
also
provide
a
response
address.
So
when
you
launch
a
kernel
in
with
enterprise
gateway,
we're
saying
hey,
this
kernel
is
going
to
land
somewhere
on
your
computer.
E: — we don't even know where, but you're going to send your connection information back to Enterprise Gateway at this response address, and then we're going to hook up your five ZMQ ports to that remote kernel after you've responded back to us. So, anyway, here are all the different process proxies we've implemented: Kubernetes, Docker Swarm, regular Docker. There's also an IBM Conductor product that we support; that was a great proof of concept for bringing your own process proxy — it was built by people outside of the repo.
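The launch handshake just described — the gateway hands the launcher a kernel ID plus a response address, and the launcher reports the kernel's connection info back once its ports are bound — can be sketched with plain sockets. This is a deliberately simplified stand-in (the real Enterprise Gateway launcher frames and secures this exchange differently, and the field names here are invented):

```python
# Simplified sketch of the Enterprise Gateway launch handshake:
# gateway listens on a "response address"; the remote launcher, once the
# kernel is up, connects back and reports where the kernel's ports landed.
import json, socket, threading

def gateway_listener(sock: socket.socket, out: dict) -> None:
    """Gateway side: accept one connection and read the connection info."""
    conn, _ = sock.accept()
    with conn:
        out.update(json.loads(conn.recv(4096).decode()))

def launcher_send(response_addr, kernel_id: str) -> None:
    """Launcher side: report the (pretend) kernel's connection details."""
    info = {"kernel_id": kernel_id, "ip": "10.0.0.7", "shell_port": 50001}
    with socket.create_connection(response_addr) as c:
        c.sendall(json.dumps(info).encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # the "response address" the gateway advertises
srv.listen(1)
received: dict = {}
t = threading.Thread(target=gateway_listener, args=(srv, received))
t.start()
launcher_send(srv.getsockname(), "kernel-1234")  # kernel could be anywhere
t.join()
srv.close()
print(received["kernel_id"])
# → kernel-1234
```

The key inversion to notice: the gateway never needs to know in advance which node the kernel lands on — the launcher phones home.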
E: There's a class hierarchy for the process proxies — anyway, enough of that. What I wanted to show you today is our Kubernetes implementation. If you see my screen, we've got a number of namespaces shown that are in the Kubernetes cluster. I've got a three-node cluster here, and Enterprise Gateway runs by default in its own namespace, named enterprise-gateway, and then here's a user's namespace called alice.
E
But
we
have
this
I
give
you
can
bring
your
own
namespace,
but
what
I
wanted
to
show
was
the
default
mode
which
is
using
notebook
here
and
so
this
user
is
this
Bob
the
mob
is
going
to
go
launch
a
kubernetes,
Python
kernel.
So
when
this
is
launched,
it's
it's
hitting
an
apprentice
gateway,
enterprise
gateway
is,
is
basically
creating
a
pod
the
animal
file
and
then
and
then
deploying
that
in
the
in
the
cluster.
E: Now, by default we create a namespace for each kernel that's launched, if the user doesn't bring their own namespace. So you can see here, we've just created this namespace for the user and kernel ID — this namespace for Bob. We tag it with various labels: its component is kernel, the application is enterprise-gateway, the kernel ID is such-and-such. And if we go to that namespace...
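The per-kernel namespace and labels being described can be sketched as a small manifest builder. The label keys and the naming scheme here are modeled on what the talk describes, not copied from Enterprise Gateway's source, so treat them as assumptions:

```python
# Sketch of a per-kernel Kubernetes Namespace manifest, tagged the way the
# talk describes (label keys are assumptions modeled on the description).
def kernel_namespace(user: str, kernel_id: str) -> dict:
    """Build a Namespace manifest for a kernel whose user brought none."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            # derive the name from the user and a short kernel-ID prefix
            "name": "%s-%s" % (user, kernel_id[:8]),
            "labels": {
                "app": "enterprise-gateway",   # the owning application
                "component": "kernel",         # what this namespace holds
                "kernel_id": kernel_id,        # ties it back to the kernel
            },
        },
    }

ns = kernel_namespace("bob", "0c6a88b2-4f1c-4a9e-9d2f-1e2a3b4c5d6e")
print(ns["metadata"]["name"])
# → bob-0c6a88b2
```

Labeling like this is what lets the gateway find (and later tear down) everything belonging to one kernel with a single label selector.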
E: ...we'd see the kernel pod running. So here's the kernel, and you can go get its logs. There isn't really much in a kernel log to look at, but that's what the kernel log looks like. And so we can do basic operations — interrupts are done through a message-based interrupt, but they work just like sending signals — and we can run numpy stuff. I always notice this takes a couple of seconds to get the render of the chart, but there it is. So there's user Bob's kernel.
E
And
if
we
go
back
to
the
namespaces
inside
of
kubernetes,
we
see
a
second.
We
see
a
second
name
space
for
user
Bob
and
that's
for
the
our
kernel
now
I
mentioned
earlier.
We
we
have
an
idea,
bring
your
own
namespace,
and
so
this
this
Jupiter
lab
instance,
is
emulating
a
user
named
Alice
who
has
brought
her
own
namespace
two
to
the
party
here,
and
so,
if
we
start
up
a
Python
kernel,
that's
boring!
But
anyway
anyway,
this
this
will
start
up
a
Python
kernel.
But
let
me
also
over
here
start
of
a.
E
You
know
I've
got
I'm
in
the
wrong
on
and
start
of
a
spark
colonel
here
all
right,
so
this
is
going
to
be
a
spark.
Tremolo
spark
takes
a
little
bit
longer
to
start
up,
but
if
we
look
at
our
namespaces
we're
not
going
to
see
community
space
now,
one
went
away
because
we
got
out
of
that
kernel
for
Bob.
E
But
if
we
go
to
Alice
when
we
bring
up
Alice's
namespace
here
and
go
to
her
pause,
we'll
see
we'll
see
well,
you'll
see
four
pods
running
this
pod
is
that
first
kernel
that
we
brought
up,
but
it's
on
the
same
namespace
and
then
these
three
pods
are
first
spark.
My
default
spark
starts
a
driver
pod
and
then
two
executors
pods,
but
you,
the
number
of
executors,
can
vary
and
what
your
job
and
parameters
are
but
and
then
so.
This
looks
like
the
startup
of
this
hasn't
quite
finished.
E: We have a bunch of images — we've extended the notebook images to use as our kernel images. So essentially all of our kernel images also have the notebook stack in them, but we wanted to leverage all the libraries that the docker-stacks folks — Peter and all those folks who manage that — have done, so we've tried to build all of our images on the docker-stacks notebook images.
E: So we just had our second release candidate for 2.0. 2.0 is focused on Kubernetes; we've had great contributions from the community on the Kubernetes stuff, and it's really exciting — although the demo didn't show that very well today — so we're pretty excited about it. Our 1.0 release was focused on YARN; that was our first prototype for this stuff, and we got that all working fine, and then we decided to do a second major release for Kubernetes.
E
Anyway,
I
don't
want
to
want
to
talk
about
future
stuff
right
now,
but
some
of
this
will
be
available
through
notebook
when
we
go
to
Jupiter
server
at
some
point,
and
so
we
should
have
remote
kernel
capabilities
without
having
to
bring
enterprise
gateway
into
the
picture
at
that
point.
But
anyway,
with
that
I'd
like
to
stop
and
see
if
there
are
any
questions.
E: That's a good question. I know Luciano Resende has given presentations at various conferences, so that kind of stuff is out there. I don't know if I have a page of links to those, but you mentioned having that kind of demo of this stuff — we've tried to do that in the time and space we have, but it's really tough.
E: A great question. Well, it doesn't make much sense to use Kernel Gateway in that scenario, but Enterprise Gateway would be something to recommend in the following circumstance. I mean, you solve a lot of the resource utilization issues just by going to Kubernetes — your notebook is now launched as its own pod in the hub world — but should the scientist need multiple kernels running from that one notebook instance, all of those kernel drivers are still local to that notebook instance, and so with Enterprise Gateway...
C: I am so sorry.

E: The metadata section isn't automatically stored in the notebook — I think that was the question. Yeah, all right, thanks. What we want to extend on the metadata, though — and this might be getting ahead of stuff, but back to parameterized kernels — what I could see us doing is expressing in the metadata what the parameters are: min/max value, default value, whatever, in that metadata section. And then the front end can present a dialog based on the metadata...
E
That's
passed
back
for
each
of
the
parameters
that
you
want
to
launch
your
kernel
with
and
then
and
then
the
payload
of
the
JSON
body
of
the
start
request
would
include
the
parameters
and
then
the
kernel
provider,
if
you
will
colonel
manager
or
whatever
you
want
to
call
it
would
would
know
what
parameter
you
know
would
then
be
able
to
use
those
parameters
accordingly,
whether
it
be
passing
it
to
the
kernel
directly
or
really,
a
lot
of
the
parameters
will
be
in
setting
up
the
kernels
environment.
It's
been
a
run,
yeah.
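The parameterized-kernel flow just described — kernelspec metadata advertising min/max/default per parameter, a front end collecting values, and the start request carrying them — might look like the sketch below. The schema shape is hypothetical (this is a design idea being discussed, not an established Jupyter standard), and the parameter names are invented:

```python
# Hypothetical sketch of parameterized kernels: the spec metadata declares
# each parameter's default and bounds; the provider resolves the values
# sent in the start request's JSON body against that declaration.
SPEC_METADATA = {
    "parameters": {
        "memory_gb":  {"default": 2, "min": 1, "max": 16},
        "executors":  {"default": 2, "min": 1, "max": 8},
    }
}

def resolve_parameters(metadata: dict, requested: dict) -> dict:
    """Fill in defaults and clamp requested values into declared bounds."""
    resolved = {}
    for name, spec in metadata["parameters"].items():
        value = requested.get(name, spec["default"])
        resolved[name] = max(spec["min"], min(spec["max"], value))
    return resolved

print(resolve_parameters(SPEC_METADATA, {"memory_gb": 64}))
# memory_gb is clamped to its max of 16; executors falls back to its default
```

A front end could render the `parameters` section directly as a launch dialog, which is exactly the split described above: metadata drives the UI, and the resolved values drive environment setup.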
G: I think so. We used to have a roadmap — kind of like a page with the directions that we want to go. I think some of the things that Kevin is mentioning are all recent discussions that we had in the past maybe three to four weeks. We are still trying to get all the documentation and that available; I'm hoping we'll get that very soon.
E: And relative to the Enterprise Gateway roadmap — we do, right, Luciano? We do have a roadmap entry in our docs: things like high availability, having more than one server running Enterprise Gateway and failing over. That's stuff that we are in the process of implementing, and that kind of thing is a little bit easier to do when your kernels are remote from your failed server.
E: So the idea would be that we would want to share those same persistent volumes, or, you know, convey them. It's also one of the nice things — well, so yeah, we need to convey the persistent volume storage on each kernel request. When we launch, we actually use a Jinja template for our kernel pod — and that was another contribution from outside folks — and with that we can be dynamic as to how many sections of the kernel pod you're going to need to create.
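The templated kernel pod just mentioned can be illustrated with a minimal stand-in. Enterprise Gateway is described as using a Jinja template; this sketch uses the stdlib `string.Template` instead so it stays self-contained, and every field value (image name, namespace, kernel ID) is a placeholder, not a real default:

```python
# Minimal stand-in for the templated kernel-pod launch: substitute
# per-request values into a pod definition before deploying it.
# (The real implementation uses Jinja; all values here are placeholders.)
from string import Template

POD_TEMPLATE = Template("""\
apiVersion: v1
kind: Pod
metadata:
  name: kernel-$kernel_id
  namespace: $namespace
spec:
  containers:
  - name: kernel
    image: $image
""")

yaml_text = POD_TEMPLATE.substitute(
    kernel_id="abc123",
    namespace="alice",           # the user's brought-along namespace
    image="example/kernel-py:2.0",  # hypothetical kernel image name
)
print(yaml_text)
```

Because the pod definition is rendered per request, optional sections (volume mounts, resource limits, extra containers) can be included or omitted dynamically — which is the "how many sections you need to create" point above.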
G: Maybe you have HDFS, or if you're running Kubernetes in the cloud you might have object storage attached, and then those are available very close to where the processing is happening. So that's another scenario that we have been seeing a lot, particularly in large enterprises.
I: This is more a question of whether something already exists that I could use. I'm working on a JupyterLab notebook setup where, as part of working with the system, people can write out changes to local files — CSV files, JSON files, and so on. We need a path to let people submit those so that their changes can be reviewed and possibly merged into the main system.
I: If they know what a merge request is, then great, but I think probably less than half of the people I'd expect to be using the system are going to be all that familiar with it. So I started working on a lab extension and server extension to provide a way — just a submit button — to either push it to a branch on the main repository, or format a patch and upload it to something where we can turn it into a commit. But this seems like something that may already exist.
A: So that's fine for distributing — to have, like, a central person distribute notebooks, and then all the students can fetch those, make changes, and submit them back. The problem is that it doesn't do anything in terms of aggregating; all that is, is distributing notebooks and versioning them. I mean, that could work, but I don't think it's quite it.
A
It's
really
meant
for
for
distributing
assignments
and
grading
assignments,
more
than
anything,
the
other,
the
the
the
project
that
we
have
in
mind.
That
I
think
sounds
like
it
would
suit
your
needs
is
a
project
called
hub
share
in
the
context
of
Jupiter
hub
hub
chairs,
design
was
essentially
that
that
you
could
distribute
notebooks,
you
could
bring
them
back
and
their
versioned,
and
if
someone
fetches,
a
notebook
from
the
source
makes
changes
it's
linked
to
that
original
source.
A
So
it
knows
where
it
came
from
and
then
we
backup
it
versions
it
and
knows
it
can
diff
those
two
really
easily
the
problems
that
hug
sure
we
haven't
actually
developed
it.
There's
a
there
specification
and
there's
a
design
plan
and
you're
totally
right.
We've
been
thinking
about
this
like
hub
share,
has
been
an
idea.
That's
been
around
for
a
few
years.
Now,
it's
just
really
we've
been,
we
just
haven't.
A
Had
the
resources
in
the
end
all
and
to
attack
it
I
think
it's
been
lower
on
the
priority
list
right
now,
but
so
so
we
don't
really
have
a
roadmap
I
guess
for
it
plus,
but
it's
definitely
a
design
that
we've
been
we've
been
exploring,
and
actually
you
know
if
it's
something
that
you're
interested
in
being
involved
in
the
development
process
or
even
just
the
design
process
like
it's,
not
a
dead,
repo,
I
guess
I
would
say
it's
it's
a
lie.
It
just
needs
it
needs
some
love.
I
need
some
attention.
Okay,
is
it
public?
A
B: Well, I just have a separate suggestion, because a lot of folks are using it in different ways and things move pretty fast. It would probably be worth your while to post something on the education mailing list, because somebody may have some local solutions that we just don't know about.
D: I would also like to add that if you are about to accept commits from multiple sources and merge them together, doing that with notebooks is a big challenge. But there is a tool that strips the output and makes sure that all the commits are made without the cell execution counts.
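The output-stripping step being described can be sketched directly against the notebook JSON structure. This is a minimal illustration of the kind of tool being referred to, not that tool's actual code:

```python
# Sketch: make notebook commits merge-friendly by clearing outputs and
# execution counts from code cells, so diffs only contain source changes.
def strip_outputs(nb: dict) -> dict:
    """Clear outputs and execution counts from every code cell in-place."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb

nb = {"cells": [
    {"cell_type": "code", "execution_count": 3,
     "outputs": [{"output_type": "stream", "text": "hi\n"}],
     "source": "print('hi')"},
    {"cell_type": "markdown", "source": "# Title"},  # untouched
]}
clean = strip_outputs(nb)
print(clean["cells"][0]["outputs"], clean["cells"][0]["execution_count"])
# → [] None
```

Run as a pre-commit filter, this removes exactly the two fields that churn on every execution, which is what makes merging notebook commits from multiple people tractable.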
F: Can I also mention Gigantum? It's pretty much doing exactly what you described already. We already used it in a computer science course at Hopkins, and basically we have a slightly different permissions model, so the people with write access do not have access to master — that's like admin access. So by default, people with write access can create a branch, do their work, and then push it back, but the git side is entirely automated.
F: So whenever a cell is executed in Jupyter or RStudio, basically there's a sweep of the project file tree, and every new file is committed; the code that was executed at that point in time is noted in git. So it's all sort of automated — students can still see and understand the conceptual versioning that's going on, but they don't have to actually do it.
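The auto-versioning sweep being described can be sketched as a tree diff against the last snapshot. This is illustrative only — the real mechanism described records to git — and the function name is invented:

```python
# Sketch of a post-execution sweep: compare the project tree against the
# previously seen set of files and report what's new since the last cell
# ran. (Illustrative; a real tool would commit the additions to git.)
import tempfile
from pathlib import Path

def new_files(root: Path, seen: set) -> list:
    """Return files added under root since the last sweep; update `seen`."""
    current = {p.relative_to(root).as_posix()
               for p in root.rglob("*") if p.is_file()}
    added = sorted(current - seen)
    seen |= current
    return added

with tempfile.TemporaryDirectory() as d:
    root, seen = Path(d), set()
    (root / "analysis.ipynb").write_text("{}")
    first = new_files(root, seen)          # sweep after the first cell
    (root / "results.csv").write_text("a,b\n")
    second = new_files(root, seen)         # sweep after the next cell
    print(first, second)
```

Hooking a sweep like this to cell execution is what lets students get real version history without ever typing a git command.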
A: Great question. I actually added — or I'm working on adding — to the HackMD the links to both the education mailing list and Gigantum as well. So thank you, everybody, for your comments on that. Eric, thank you; thank you to Carol. Cool. All right, with that, I think that concludes our meeting for the day. So thank you, everybody who came to this call, and this will happen again next month, in July.
A: We do these on the last Tuesday of the month, so I'll post a new announcement and tweet and everything for this, but we hope to see you again next month. I'll stop the recording now. Feel free to hang out on this channel — it's open — so if anybody wants to chat offline, feel free to do that. I'll see you all next month.