From YouTube: Jupyter Community Call - January 25, 2022
Description
Recording from the Jupyter Community Call in January 2022.
The notes from this call can be found here: https://docs.jupyter.org/en/latest/community/community-call-notes/2022-january.html
Read more about these calls on Discourse: https://discourse.jupyter.org/t/jupyter-community-calls/668
A
Here we go. Yes, oh, and I need to change my view. That's right! Okay, cool, cool! Here we go. Hello, everyone, and welcome to our January 2022 Jupyter community call. This is the first one of the year. I'm cheering for all of you, because y'all are being polite and staying on mute, but I'm seeing some cheers in the background, so excitement for that.
A
A few things before we get started, my usual things. This is a Jupyter community gathering, so we are all held to the Jupyter code of conduct while you're here; that can be found at absolutely any time at jupyter.org/conduct. Additionally, I am recording this call. It will be posted publicly, whatever that means for you, so there's your heads-up. From there on out, I think we're good to get started. If anyone hasn't been here before or just needs a refresher: what we're going to do is follow the agenda that I've linked in the chat. We'll go through short reports, then longer shares, with probably plenty of time for discussion. I am so excited; we have lots of things on the agenda today. So yes, that's what we're going to do. The first thing we have is a shout-out from Zach.
B
Hey everybody, Zach Sailer here. My shout-out is to Isabella, because we're starting a new year and she just ran the community calls for a whole year, this last 2021, and these are awesome. I know, I ran these for six months, and they are horrifying, because you never know if people are gonna show up. So congratulations on a great year, and thank you for running these. You do such an amazing job.
A
Thank you, that's really sweet, and that means a lot coming from you. I appreciate it. And yeah, Tony, thanks for linking the blog post. Tony mostly wrote it, but it was a really good discussion of some of the things that we did to make these calls consistent, because I think that's a struggle with any meeting, not just community calls. He did a really good job summarizing that, and then I added bullet points.
C
Oh, let's see how red we can get Isabella. I want to give a shout-out to Isabella for organizing the accessibility workshops the past two weeks. This weekend we probably got nearly a dozen contributors in on documentation, and we had a great talk the week prior, so there'll be some assets coming out from that. So another shout-out to Isabella. Happy new year.
A
Thanks
yeah,
that
was
another
set
of
events.
I'm
glad
that
the
first
two
are
done.
We
do
have
two
more
coming
up
in
march,
just
as
a
little
follow-up
for
that,
but
yeah
it
was.
It
was
awesome.
Those
pr's
will
be
in
the
works
at
the
end
of
this
week,
because
we're
giving
contributors
time
to
polish
up
if
they
want
outside
of
the
rushing
sprint
environment.
D
Everyone, I want to shout out to Zach for being an early adopter of the new governance process and setting up the Jupyter Server team. We're all really excited to be part of that. So thanks, Zach, you're doing a great job leading us into the future. That's it.
C
I think we've got to give a shout-out to Darian for going around and making sure that everybody's aware of the new governance process changes. He has to be in so many meetings, voluntarily, during the week; God bless his soul for doing that. And we hope to follow suit with what Zach's done very soon.
C
That happened this month; congrats to Binder for keeping that alive. I don't know what I'd do without that, but the free offers certainly don't cover all of those credits, oh no.
E
Yeah, that was a big one. Shout out to Chris Holdgraf for organizing at least a set of short-term credits, and another one to Min, who works on the technical side to reduce the monthly cost, so that short-term credits are actually viable.
B
I'll call out a tweet from Chris Holdgraf. He showed a pretty sweet demo of, I don't know how to pronounce it, I think a MyST parser in JupyterLab notebooks, which I thought was really cool. I hope Chris is here.
A
Right, that's true. Zach made the point that it's hard to keep these running, but they really wouldn't be runnable without all of you. As absolutely cliché and cheesy as it sounds, it's very, very true: if I just sat here alone, it wouldn't really be a community call. So I appreciate it. And great, oh, those were so wonderful. I'm definitely going to revisit this recording later.
I
Yeah, so how much time have I got?
I
All right, so here, I want to talk about, okay, so by the way, I'm Jitendra Pandey. We have this small company called InfinStor; it's a startup in the MLOps space. I'm a co-founder and CTO, and our CEO is also on the call. So we recently built this MLflow kernel. It's basically an extension of the IPython kernel, and the motivation is to have an integration with MLflow, which is becoming very popular in the MLOps space.
I
The idea is to make sure that whatever activity you are doing in Jupyter gets recorded in MLflow in a very seamless way. MLflow has integrations with a lot of libraries like TensorFlow or XGBoost, in fact most of them, and when you enable auto logging, it will log certain parameters and metrics and whatever else you want to record.
I
But there are certain additional things, like whatever output a cell produced, or what was the exact code that was executed in the cell; those things are not captured by autolog. For that we need a kernel that can actually intercept them and send them to MLflow for recording.
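The interception idea described here can be sketched in a few lines of plain Python. This is a toy illustration of the concept only, not the actual MLflow kernel code; the class and field names are invented for the example:

```python
import io
from contextlib import redirect_stdout

class RecordingShell:
    """Toy stand-in for the intercepting-kernel idea: run a cell,
    capture the exact code and anything it prints, and keep a
    per-cell record (the real kernel would ship these to MLflow)."""

    def __init__(self):
        self.runs = []  # one entry per executed cell ("child run")

    def run_cell(self, code):
        buf = io.StringIO()
        with redirect_stdout(buf):
            exec(code, {})  # execute the cell body, capturing stdout
        record = {"code": code, "stdout": buf.getvalue()}
        self.runs.append(record)
        return record

shell = RecordingShell()
shell.run_cell("print(2 + 2)")
print(shell.runs[0]["stdout"])  # the captured cell output
```

The point is only that a layer wrapping execution can see both the source and the produced output, which plain auto logging cannot.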
I
So that is what I want to present today, and I will show you a short demo. The MLflow kernel can record all the cell outputs, which includes any graphs or images that you are using to visualize your data; all of that is captured and recorded into MLflow as an MLflow artifact. I'm hoping that you are somewhat familiar with MLflow, but I will show you the interface so that you have a mental image of what I'm talking about, and also the organization of your Jupyter notebook execution.
I
Typically, you start a kernel and then you execute cell by cell. All of that is captured in a very well-organized fashion: the duration of a kernel session is recorded in a parent run, all the cell executions within it are recorded as child runs, and, as I mentioned, it integrates with auto logging.
I
If time permits, I will show you a demo of the deployment; otherwise I'll come back to this slide and show you the steps. In short, this is my contact, so at any time you can shoot me a message. All right, let's head back to the demo. I recorded it so that I'm in control of the time, and also because training takes a little longer, to make sure it fits within the time frame. So this is a typical notebook.
I
I got this code from Kaggle. Essentially, it's facial landmark detection.
I
Basically, you have a face, and this code trains a model to detect the location of the nose or eyes or lips on the face. It uses a convolutional network, and it has hand-annotated training data, which I will visualize shortly.
I
So, as I mentioned, this is implemented in Keras, a very simple model, and I don't claim that it's the best model out there, but it does a decent job.
I
There are a few cells: we load the data, we do some analysis and certain augmentations, and at the end of it we split into training and test and perform the training. During the training phase, we also enable autolog, to showcase that the MLflow kernel behind the scenes will capture everything, along with the parameters that the code wants to log.
I
We are just a layer that captures, and by the way, this code is all open source; the GitHub link is also available. So this is our interface for MLflow. This is the free MLflow service that InfinStor offers, so you can sign up anytime.
I
I will give you a link for that also. The MLflow kernel in this notebook is already configured to talk to that MLflow service. So we have started a run; the first one will take a few seconds, because it has to connect to the MLflow service and start the runs. Now we are loading the data, and it will also produce some output, and then we do some further analysis and a little bit of augmentation of the data. So yeah, this execution.
I
Now here we are visualizing the data; we can see what the data looks like. In the data we have some hand annotations of the locations of the landmarks, like eyes or nose tips. So now let's see what we have achieved in MLflow. Here MLflow shows that we got a run started.
I
There are four cells executed, so we have four individual runs already captured here, and if you click on the last one, you see that the cell output is captured and the code is captured. Now this record is forever: if you go and modify and rerun it, that will be a new run, so you can always go back to what it was, and even your visualization history is available, so you can go back in time and see what really happened in that cell.
I
Yeah, so now we go back to the training. Here we will first do the splitting of the testing and training data, and then we launch the training.
I
So, okay, training is done now. It is logging the model to MLflow; that also goes to the same MLflow service, and that plumbing is taken care of. In the end we do a little bit of prediction on certain images, and the prediction output is also displayed as an image. This is what data scientists typically do: they visualize the data. And now we have...
I
This is the MLflow service; it is secure, so password-authenticated, and everything is encrypted.
I
So here, the last-but-one run had the model training. In this one we can see that we have captured a lot of information. Some of the information has come from auto logging; some has come from logging the model itself. Here is the code: we keep tweaking the models, but at every step this is how we preserve the code. Any standard output, if the user is interested, is also captured, and all the details of the model are captured, including the dependencies.
I
I mean, there are a lot of parameters to be tuned, and capturing those parameters is always helpful, and everything is going into the same run, so the record of everything is in one place.
I
Okay, I've talked about the metrics already, so yeah, this is basically MLflow integrated with a Jupyter kernel. And this is the visualization of the model output. This is very key, because a data scientist wants to go back in time, see how the model did, and look at it visually. If you had plotted some graphs here, those images would be available here too, so you can always review what your model did in the past.
I
So here, yeah, the Jupyter notebook execution is done. One last thing I wanted to show, for those who are not that familiar with MLflow, is that you can manage your model here also, because now, if you register this model, it will be registered in the MLflow service.
I
We can store this model as a version of an existing model, or create a new model, go to that version, and manage the life cycle of the model: you can designate it as in production or staging, or as an archived model. Typically, we create a new version and old versions are archived.
I
And yeah, this is how we transition the state, and this is linked to the run, so you can always click on the run, go back, and see what the code was inside, and all the outputs that I showed previously. All right, this brings me to the end of this presentation. For the deployment I have another short clip; I can show how we really deploy it, and the login and authorization part as well. But any questions here? I would love to answer.
A
I actually have one, if no one else does. Okay, so maybe I'm missing something, but I wanted to ask you: it sounds like a lot of this is for recording, right, all the parameters that got you the result that it did. But then, if you wanted to reproduce that, is that something you'd have to do manually, recreating all these parameters? Or is that kind of what you were talking about with the register model, that it creates some way for you to automatically set that up again?
I
Okay, so the integration of the MLflow kernel is with the notebook. Every time you execute the notebook cells, it is captured. So if you are reproducing, suppose you want to go back in time and reproduce something, and at that time it was different code, then there will be a step where you go to that run, take that code, and paste it into the Jupyter notebook. But that's a good idea; maybe that could be a feature for the future, that we somehow provide an easy way to click and pick it up.
I
And actually, as a company, we have a compute feature in our product where we capture this notebook code as what we call transformations, and there you have the ability to actually go back to a transformation and click a button, and it will execute. But probably some other day we will showcase that; it would be a longer presentation. So yeah, throw questions at me, feel free, interrupt me anytime. I just want to show how to deploy it, because visually it will be a lot more interesting.
I
So this is ours, and I want to show particularly how we deploy it with our free InfinStor service. This is how you sign up for the service.
I
It's one small page: you need to provide an artifact location where all the output will be stored, and you can specify an IAM role for authentication and authorization. But I have already created this account here, in which we will do the logging. All this UI is standard MLflow, but we have a more scalable enterprise backend. You can create a new experiment or use an existing experiment when you are configuring your MLflow kernel. So let me create a new experiment.
I
So there are two pieces to it. One is the kernel itself; the other is the MLflow service. What I'm showing right now is the MLflow service, which you'll have to set up, and once you install the kernel, the rest of it will be automated.
I
So it's a PyPI package. We have installed the MLflow kernel, right now version 1.10, though the version numbers keep changing. The second step is to actually add this to the kernel spec. Right now we have the Python kernel, but let's add the MLflow kernel to the spec; this other step will add it there. And that is it; we now have to put in some configuration. The configuration location is in the .jupyter directory itself.
I
It's a JSON file, and we need only a few parameters there. We need the MLflow tracking URL; it could be InfinStor's MLflow service or any other MLflow service, if you have one, you can use anything there. Debug-enabled is useful, and you specify your experiment name. We just now created this face-research experiment, so let's put that there, so that all the runs are created in that experiment. And now we start JupyterLab.
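As a rough sketch, the configuration just described might look something like the following (written via Python so the shape is concrete). The file location under `~/.jupyter` comes from the talk, but the exact file name and key spellings are assumptions for illustration; check the package's own documentation for the real names:

```python
import json

# Hypothetical sketch of the MLflow kernel's JSON config, per the
# talk: a tracking URL, a debug flag, and an experiment name.
# Key names here are illustrative guesses, not documented spellings.
config = {
    "mlflow_tracking_uri": "https://your-mlflow-service.example",
    "debug_enabled": True,
    "experiment_name": "face-research",
}
print(json.dumps(config, indent=2))
```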
J
So the really cool thing about this, and the first thing that occurred to me, was that often data scientists go back to the same cell, tweak something and rerun it, and tweak something and rerun it. Every one of those is captured, so you can really go back and look at it and say: okay, fiddling with this, fiddling with that, here's what I really found useful. That I find very valuable.
J
I mean, the notebook itself is a pretty awesome structure, and it saves the final state of what you end up with, but this extension basically records every activity you take, and I find that really cool.
I
So we selected the MLflow kernel, because now it was in the kernel spec. When you are doing it for the first time with the MLflow service, you will need to log in, so it will prompt you with this small piece of code here. You just need to log in; we will use the same account password that I showed on the UI, the one you create when you are signing up for the service.
I
If you have your own MLflow service which does not need this step, then this won't be prompted. Once the login is done, you don't have to do it again; it will be automatically refreshed. We can get rid of that login code, and we restart the kernel. Now at this point it is ready to use, and the rest of it is what I showed in my previous demo.
I
So that brings me to the end of my presentation. Thank you so much, and feel free to shoot me a question anytime.
I
Okay, so, yeah! That should be possible, because it's a standard kernel, so all the kernel APIs are supported. I have not played with that, but the MLflow kernel is implemented as just an extension of IPython: whatever it does not understand, it delegates to the IPython kernel, and it just hooks up with the MLflow service in the middle. So that should be technically possible.
I
We have also created an image with the MLflow kernel enabled, which you can actually use directly to get your SageMaker Studio up and running with the MLflow kernel. But if you are using just stock Jupyter from open source, those steps, which are also highlighted here, will completely suffice.
H
Okay, yes, why not, yeah. I've just been messing around with them. Let me share my screen.
H
Now, yeah, you can see the VS Code? Yes, that's the main screen, okay, cool, yeah. I thought I'd give a quick demo. I've just been throwing together a bunch of stuff over the last few months in the Executable Books world, so I'm trying to pull it all together here.
H
These are the kind of six things I have, in my time, to fill the rest of the space: writing notebooks with this text-based markdown format, then how you can use that with VS Code, how you can also use that in JupyterLab, and also some extra things about how you can execute these notebooks and feed them into creating documentation with MyST-NB and Jupyter Book.
H
So MyST is essentially this format that we put together over at the Executable Books project, designed to extend CommonMark, which is the common markdown specification, with more advanced features for writing richer documentation and scientific articles, like admonitions, figures, tables, etc. It's what we use within Jupyter Book to generate this documentation, the books and websites.
H
So we want rich documentation, but also we'd like to be able to hook into Jupyter, as the name suggests, and execute some code. So what is MyST? One of the main things it adds is the role syntax and the directive syntax.
H
What it looks like: when you write markdown, you have these kind of inline code blocks, and before them now you have a name, essentially, of how it's going to be interpreted, and the same for admonitions. You see they're all nicely syntax-highlighted here in VS Code; you can hover over them, and you can also auto-complete.
H
That's based on the ones that are available, and this is all within this markdown extension. Also, if you click on the preview you'll get, well, some of them actually aren't done, because it's an older version, but it turns these admonitions into their final, rendered format. So you see here we have a note, we have a note nested in a note, we have a figure here with a caption, this list format for tables, etc.
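For readers who have not seen MyST, a role and a directive look roughly like the following. This is a small illustrative sample based on MyST syntax in general, not the actual document shown in the demo:

````markdown
An inline role: {math}`e^{i\pi} + 1 = 0`

```{note}
A directive: this block renders as a call-out admonition,
and admonitions can be nested inside one another.
```
````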
H
Now, as well as being an extended markdown format, this document is also a notebook, in a slightly different format. If you look at the top here, you might have noticed it has all this metadata, this front matter, and that tells Jupytext, which you might have heard of, how to convert this into a notebook. So you'll see, for instance, this `+++` syntax tells Jupytext that this is starting a new markdown cell, and then we have code cells and we have raw cells.
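As a sketch of what is being described, a MyST-flavoured text notebook looks something like this. The exact front-matter fields come from Jupytext's MyST format and can differ between versions, so treat the details as illustrative:

````markdown
---
kernelspec:
  name: python3
  display_name: Python 3
---

# A text-based notebook

+++

The `+++` marker above starts a new markdown cell.

```{code-cell} python
print("this is a code cell")
```
````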
H
We also have all of our nice rendering of all the markdown cells, so you know, you can change this, they look like that, and yeah, when you execute it, you get all of your rendering of all your HTML, so obviously very much like how the end product of your book will look. And here, for instance, we've got the cell tags that it's captured, and we have our code cells to run.
H
So that's the first step, really. This is all built on markdown-it, which is a markdown parser that's very extensible, and within that we've written a number of plug-in extensions, and they all run together, so you get this consistent approach over VS Code and over JupyterLab. Also we're collaborating with some other folks on Curvenote, which is another kind of scientific writing platform that also uses MyST markdown.
H
So yeah, that's all nice. It's nice now that you can properly see what you're actually writing with this MyST markdown; it doesn't just look like raw text. You can properly have all of these nice roles and directives.
H
Then, oh, quickly, the other thing. With these markdown text-based formats, one of the problems is obviously that they don't store the code cell outputs.
H
You know, the text-based formats are nice for saving and having on GitHub, seeing all of your diffs and things, and having a nice editor experience, but they're not yet great for...
H
...for saving the notebooks. So this is where jupyter-cache comes in. The idea of jupyter-cache, which is integrated into Jupyter Book and MyST-NB, is that you can have a project, so you have all of your notebooks within the project, and it knows how to read them, and what it does is essentially cache the execution.
H
After you've executed your notebooks, it stores them within this cache; it stores all the outputs. And one of the nice things about it as well is that it knows whether code has changed and needs to be re-executed, or whether it's just markdown, which doesn't need to be executed. So if I type in something here, you'll see that when I list it, it's still nice and green and saying: okay, that's fine. If I change this, then it's now saying I no longer have that notebook cached, because it's got a different...
H
...it will have a different execution. So if I execute the project now, it'll look at what notebooks need to be executed, execute only them, and cache them again. So this is a way that we can use to have these markdown-format, text-based-format executions, or you can even run this with notebooks, if you have lots of notebooks and you want to keep them up to date.
H
So we have these kind of preview formats within JupyterLab and VS Code, and we also want to eventually create a final representation, either in HTML or LaTeX or any other format. So with MyST-NB, what I've been working on is having a simpler standalone CLI for converting them to other formats, say HTML or LaTeX.
H
It's somewhat similar to nbconvert, which you may well have heard of, but different from nbconvert in that it hooks into docutils, which is the underlying package behind Sphinx and things like that, so you get these really rich features for creating documentation. So, for instance, if I run this, it's firstly going to execute the notebook and set it up, and then it's converting it to this.
H
So again, I haven't linked it into jupyter-cache at the moment, but you can also do that, so we get it out that way. Cool, I think that's the main thing I wanted to say; that's a quick rundown of what we've been playing around with. Any questions?
H
I think, yeah, this is all, I'd say, good work by Jupytext in being able to do this, and you can have these linked, you can have synced notebooks and markdown formats and all this good stuff. So yeah, I'd certainly tell you to go and check out that project.
C
So I'm curious, Chris, how, if, Angus's JupyterLab markup work informed any of this stuff for you, and if there's something there.
H
Shout
out
to
him
yeah,
so
this
all
builds
on
agus.
I
did
tell
you,
I
was
doing
this
for
him
to
pop
him,
but
I
don't
think
he's
here.
This
builds
on
angus's
mark
down
there,
jupiter
lab
markup.
H
No, so what jupyterlab-markup does is very much what VS Code's markdown implementation does: it starts up a markdown-it markdown parser, creates an instance of that, and then allows you to add plugins to it. So it's how you add tables and roles and these directives and everything like that, and we plug into that. Essentially, we take Angus's jupyterlab-markup and we say: here are some extra plugins for you.
H
Here's
which
say
how
to
pass
them
and
also
you
know
how
to
convert
them
out
into
html
and
with
the
css
and
everything.
C
Very
cool
man
yeah
the
jupiter
lab
markup
extension's,
like
one
of
the
best
extent
like
I
feel
like
everybody,
should
just
have
it,
and
also
github
on
their
roadmap,
has
like
mermaid
diagrams
and
some
of
the
features
yeah
that
are
in
jupyter
lab
markup,
but
between
jupiter
lab
markup
and
what
you've
demoed
here
like
it's.
We
it's
starting
to
really
see
like
a
really
rocking
document,
editing
platform,
encoding.
H
Yeah,
exactly
that's
the
that's
the
hope,
yeah
that
you
you
get
all
of
this
lovely.
You
know
the
kernels
and
everything
you
get
jupiter,
but
also
this
richer,
markdown,
environment
and
yeah.
There's
a
lot
of
talk
over
in
the
this,
this
jupiter
lab
markup
about
how
we
can
extend
that
and
improve
that
with,
like
syntax,
highlighting
and
also
kind
of
the
lsp
features.
H
As
I
say,
within
the
vs
code,
I've
started
to
add
these
kind
of
lsb
features
where
you
get
like
the
auto
completion
and
the
hover
over
to
to
be
able
to
write
these,
that's
not
currently
within
the
jupiter
lab
extension.
By
certainly
like
to
look
into
that.
There's
the
jupiter
lab
lsp,
so
we're
hoping
to
kind
of
work
with
them
with
those
guys
to
do
that
as
well.
I
So,
chris,
a
question
on
the
cash.
This
is
for
preserving
the
output
of
the
of
yeah
and
the
state
of
the
notebook
right,
which
includes
the
outputs
or
anything
that
your
code
produces.
Is
that
yes,.
H
Yes,
so
the
idea
is
that
it
essentially
hashes
your
notebook,
looks
at
just
kind
of
ignores
any
of
the
markdown
and
just
looks
at
the
code
and-
and
you
know
the
the
the
the
metadata
for
the
colonel
and
things
like
that
and
says,
has
any
of
that
changed
since
I
last
executed.
H
If it hasn't, then that's fine; it says okay, and links that. So it hashes your notebook in terms of the code cells and the code metadata, and then links that to something it's already executed. And if not, then it has the mechanisms in there, via nbclient, which is another good project that people are working on, which is essentially taking what was in nbconvert and making it its own standalone project: how you execute notebooks outside of JupyterLab or Jupyter Notebook, just by the CLI and things. But yeah.
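The hashing behaviour described here can be illustrated with a few lines of standard-library Python. This is a conceptual sketch of the idea, not jupyter-cache's actual implementation:

```python
import hashlib
import json

def execution_hash(cells, kernel_name):
    """Hash only what affects execution: code-cell sources plus the
    kernel metadata. Markdown cells are skipped, so prose edits do
    not invalidate the cache (the jupyter-cache idea, in miniature)."""
    code = [c["source"] for c in cells if c["cell_type"] == "code"]
    payload = json.dumps({"kernel": kernel_name, "code": code})
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

cells = [
    {"cell_type": "markdown", "source": "# Intro"},
    {"cell_type": "code", "source": "x = 1"},
]
before = execution_hash(cells, "python3")
cells[0]["source"] = "# Reworded intro"  # markdown-only edit
print(execution_hash(cells, "python3") == before)  # prints True: still cached
```

Changing a code cell's source (or the kernel name) would change the hash and mark the notebook for re-execution.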
I
Yeah,
actually,
I
noticed
the
parallels
that,
because
I
was
talking
about
recording
everything
in
ml
flow
kernel
with
versioned
outputs
like
every
step,
so
jupiter
cache
is
something
that
records
locally
and,
and
it
has
some
additional
features,
but
but
if
it
is
integrated
with
cml
flow
in
the
back
end,
then
actually
we
could
have
a
version
recording.
We
can
go
back
to
previous
versions,
but
I
mean
there
will
be
some
integration
work
needed.
H
I'd
certainly
be
interested
to
hear
from
that
I
say:
yeah.
Our
main
focus
on
call
of
duty
cash
was
obviously
how
can
you
yeah?
How
can
you
create
books
and
things
where
you
don't
want
to
every
time
you
change
a
line
of
a
line
of
your
documentation.
It
has,
to
you
know,
re-execute
every
everything
every
time,
so
it's
that
kind
of
that
kind
of
thing
that
we're
after,
but
it
can
certainly
be
extended
to
things
like
this.
Like
version.
B
Yeah, this is super sweet, Chris, thanks for doing this demo. Did this live up to your expectations?
B
For sure, for sure. I wanted to, this is more of a comment. A couple summers ago, Isabella knows this all too well, some of the interns at Cal Poly worked on a front-end extension to basically provide a what-you-see-is-what-you-get (WYSIWYG) editor in markdown cells in JupyterLab. One of the challenges of that was when you wanted to do something more complex, like add a note or something like that.
B
The
standard
markdown,
like
you
were
limited
by
what
markdown
could
do.
I
see
this
as
being
like
a
way
way
more
powerful
for
something
like
that.
Where
you
can
have.
You
know
you
have
a
markup
set
of
of
syntax,
that
you
can
build
a
an
editor
that
allows
you
to
just
automatically
populate
that
down
into
the
markdown,
but
show
it
as
a
wysiwyg
and
yeah.
H
Oh yeah, and I'd say I was just going to come on to this. I'd certainly give a shout-out to the Curvenote folks that we're working with quite a bit; they're working on this. Curvenote has these more kind of, I guess, richer, almost WYSIWYG features, where I guess you are more...
H
We
with
with
the
jupiter
lab,
missed
extension,
obviously
you're
writing
within
standard
markdown.
Then,
once
you
execute
kind
of
render,
then
you
click.
You
know
your
shift
tab
or
whatever
it
shift
enter
to
x,
render
the
cell
the
curve
know
folks,
are
working
on
this
more
kind
of
integrated
way
of
doing
it
as
well,
where
you
are
doing
that.
The
full
kind
of
what
you
see
is
what
you
get
approach,
but
again
it's
all.
The
idea
is
that
it
all
kind
of
flows
from
the
same
base
code.
H
Cool, well, I'll stop sharing.
A
Yeah, those were both really cool. That was interesting too, seeing a little bit of synchronicity at the end there, with some related tools. Hopefully we can continue that, yeah. That makes me so happy, right; it's arguably one of the points of community calls, so it's just nice to see it happen.
H
I see, sorry, I see Nick, Bollweg I think, is it? Jupyter LSP and things; he's the big proponent of that, I guess.
K
Yeah, I mean, there's been a lot of discussion over time about a bunch of these features, and you know, it gets to the end of it: who decides what is the markdown? And if we're just some other folks, you know, it's not a reference implementation in Perl, but it's these reference implementations in these two environments; it's inclusive if you're in the game, right. And I just want to make sure that we're not creating...
K
You
know
some
new
thing.
That
is
now
this
other
language
specification
and
you
got
to
be
running
these
six
versions
of
you
know
jupiter
text
and
all
this
other
stuff-
and
I
don't
know
it's
if
there
is
not
a
way
to
describe
what
these
extensions
are,
that
you're
having
then
you're
gonna
run
into
all
these
same
problems
that
we
have
with
magics
inside
of
python
and
magics
inside
of
derivative
kernels
and
stuff
like
that,
so
I
hope
at
some
point
we
do
have
a
jupiter
lab
markup.
K
You
know
that
is
or
a
jupiter-wide
markup
that
is
described
in
such
a
way
that
we
can
use
it
in
a
bunch
of
places
confidently
and
if
they
change
migrate
them.
You
know
everybody
poo
poo's
on
the
notebook
format,
but
gosh
darn
it.
If
that
thing
doesn't
take
care
of
itself
right,
it's
one
of
the
few
self-healing
formats
out
there.
I
just
don't
want
to
lose
that,
for
you
know,
whatever
gains
we're
getting
also
on
the
on
the
ml
flow
one,
have
you
guys
looked
into
kernel
the
enterprise
kernel
gateway?
K
It
seems
like
if
you
wrapped
it
there,
where
you
can
intercept
all
of
the
all
of
the
kernel
messages.
Then
you
could
instantly
have
this,
for
you
know
every
kernel
and
it
wouldn't
just
be
python.
You
could
have
the
scallop
kernels
and
you
could
have
the
spark
kernels
and
you
could
have
the
julia
kernels.
I
Oh
yeah,
okay,
that's
a
good
point
yeah
I
will.
I
will
look
into
it.
I
did
some
experimentations
with.
I
We
want
to
be
able
to
basically
somehow
want
work
with
different
kernels,
particularly
not
for
the
reason
of
different
languages,
but
for
like
different
kind
of
hardware.
If
you
have
different
kernels
now,
one
running
in
with
gpus
one
running
on
some
cpus
and
have
some
ability
to
multi
multiplex
between
them
that
kind
of
so
directions
we
want
to.
We
are
thinking
of,
but
that
makes
a
lot
of
sense.
We
can
have
an
ml
flow
kernel
for
all
the
languages.
That
will
be
interesting.
H
Yeah,
no,
I
just
no
I'm
as
we
speak,
I'm
working
on
a
kind
of
mist
we're
trying
to
work
in
like
a
missed
specification
and
things,
and
these
kind
of
things
to
make
things
more
regimented.
I
guess
more
standardized.
C
Quarto
did
a
good
job
of
that
right
where
they
took
the
common
mark
spec,
and
then
they
wrote
their
own
specification
over
top
of
it
or
you
go
into
like
that
level
of
detail.
Chris.
H
So
I've
just
I've
started:
writing
this
a
format,
unist
and
then
there's
m
dust
on
top
of
it.
It's
essentially
just
a
json
format
for
common
mark
extensible,
json
format
that
remark
uses
is
a
jupiter
package.
But
that's
what
I'm
writing,
because
I
think
that's
that
yeah.
It
basically
has
what
we
need.
H
It
has
it's
json
for
jason
ball,
so
it's
yeah
language,
agnostic
and
it
captures
all
the
information
like
position
and
things
like
of
all
the
syntax
stuff
that
we
need
for
like
lsp
stuff
like
where
is
something
in
the
document
and
things.
So
that's
the
idea
to
kind
of
build
on
that
that
has
the
common
mark
and
then
they
also.
H
I
mean
obviously
the
yeah-
I
guess
I
mean
they
certainly
read
across
it
was
I
mean.
Obviously
it
was
more
coming
from
the
level
of
how
of
how
can
we
get
all
the
features
of
restructured
text
really,
rather
than
looking
at?
I
think
markdown
specifically,
but
they're
very
similar,
obviously
concerns
of
how
do
we
get
these
kind
of
extensibility.
H
...in markdown, I guess, of the directives and the roles and things. So we've certainly been looking at it, but yeah, there's a lot of read-across in all of these, and hopefully we can converge on something.
A
Okay, then, I am going to link you all to our little feedback form, in case you have any thoughts. This is mostly, I do read these, to make sure nothing goes wrong with the recordings or audio quality, and to hear any ideas for us. It's a really not-pretty Google link, but yeah, if you have any comments on this call you want to give, you can give them there. We will also have our next community call in February.
A
If I can find the agenda, I believe it's the 22nd, same Zoom time, same Zoom channel, and you can already sign up if you're so excited, so inspired by today. We do have the agenda for next month's call there, totally open; just making sure people know you can share anything Jupyter-related that you want. It's just a big show-and-tell where we get excited. So thank you so much for all your time. This was a really fun call, a great way to kick off the year, and it's still kind of early in the week. So thank you so much.